
Independent Editorial Ledger

DeepSeek V4 Report

A source-linked overview of the public DeepSeek-V4 preview: what the official homepage, API docs, and deepseek-ai model cards say right now, and what this page deliberately does not claim.

Not an official DeepSeek website. This page summarizes public source material and preserves source boundaries.

Key Findings

The public signal is strong enough to map the product shape, but still narrow enough to require restraint.

Signal 01

DeepSeek itself says the V4 preview is live.

The homepage announcement says the DeepSeek-V4 preview has been released and is available across the web product, mobile app, and API. That is the clearest first-party availability statement on the public web.

Source S1

Signal 02

The public API already exposes named V4 endpoints.

DeepSeek's API docs list deepseek-v4-flash and deepseek-v4-pro, and note that deepseek-chat and deepseek-reasoner are compatibility aliases scheduled for deprecation on July 24, 2026.

Source S2
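The alias note above can be sketched as a small pre-flight check. The model names and the July 24, 2026 deprecation date come from the API docs as quoted; the helper function and its status labels are illustrative, not part of any DeepSeek SDK.

```python
from datetime import date

# Model identifiers named in the public API docs (S2).
V4_MODELS = {"deepseek-v4-flash", "deepseek-v4-pro"}

# Compatibility aliases the docs schedule for deprecation (S2).
DEPRECATED_ALIASES = {"deepseek-chat", "deepseek-reasoner"}
ALIAS_SUNSET = date(2026, 7, 24)

def check_model_name(name: str, today: date) -> str:
    """Classify a model identifier against the documented V4 lineup."""
    if name in V4_MODELS:
        return "current"
    if name in DEPRECATED_ALIASES:
        return "alias-retired" if today >= ALIAS_SUNSET else "alias-deprecated"
    return "unknown"
```

A caller migrating old integrations could run this over its configured model names and flag anything that will stop resolving after the sunset date.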

Signal 03

The official model card frames V4 as a million-token MoE series.

The deepseek-ai model card on Hugging Face describes two public tiers, DeepSeek-V4-Pro and DeepSeek-V4-Flash, both with a stated one-million-token context length and distinct activated-parameter budgets.

Source S4

Public Snapshot

A compact ledger of what is confirmed versus what stays outside the boundary of this report.

What is confirmed

  • The DeepSeek homepage publicly announces a DeepSeek-V4 preview release.
  • The API docs publicly list deepseek-v4-flash and deepseek-v4-pro.
  • The model card describes Flash and Pro as Mixture-of-Experts models with 1M context.
  • The official collection under deepseek-ai publishes four V4 repositories on Hugging Face.

What this page does not claim

  • No independent benchmark reproduction is performed here.
  • No undocumented pricing, latency, or hardware claims are inferred.
  • No community rumor or media speculation is elevated above the official pages.
  • No claim is made that DeepSeek-V4 universally beats every frontier closed model in all settings.

Benchmarks

Selected numbers from the official model card, shown as indicators rather than a full benchmark mirror.

Selected official metrics

The official README mixes base-model and instruct-model slices. This table keeps those slices explicit.

Model slice     Metric                    Reported result
V4-Flash-Base   HumanEval Pass@1          69.5
V4-Pro-Base     HumanEval Pass@1          76.8
V4-Pro-Base     LongBench-V2 EM           51.5
V4-Pro Max      MRCR 1M MMR               83.5
V4-Pro Max      Terminal Bench 2.0 Acc    67.9
Source S4

Reasoning modes

The official model card describes three reasoning effort modes for the instruct models: non-think, think, and think max. In practice, that means the same model family is presented with multiple effort profiles instead of a single fixed behavior envelope.

Reading the chart correctly

Some rows compare base models, while others compare instruct models under max reasoning effort. This page keeps the official framing intact instead of compressing everything into one misleading score.

Modes & Fit

A quick matrix for interpreting the public model lineup and reasoning controls.

Flash

The lighter public tier for routine usage, cost sensitivity, and faster interaction loops.

  • Named in the API docs as deepseek-v4-flash.
  • Official model card says 284B total params and 13B activated.
  • Useful when you want the V4 family without the Pro footprint.

Pro

The heavier public tier for stronger capability ceilings and wider benchmark coverage.

  • Named in the API docs as deepseek-v4-pro.
  • Official model card says 1.6T total params and 49B activated.
  • The model card also describes a Pro Max reasoning setting.
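The parameter figures quoted above for both tiers imply small per-token activation fractions, which a few lines make concrete. The totals and activated counts are as quoted from the model card; the helper itself is illustrative.

```python
# Total vs. activated parameter counts (in billions) as quoted
# from the official model card (S4).
LINEUP = {
    "deepseek-v4-flash": {"total_b": 284, "activated_b": 13},
    "deepseek-v4-pro": {"total_b": 1600, "activated_b": 49},
}

def activated_fraction(model: str) -> float:
    """Share of total parameters active per token for a MoE tier."""
    spec = LINEUP[model]
    return spec["activated_b"] / spec["total_b"]
```

By this arithmetic, Flash activates roughly 4.6% of its parameters per token and Pro roughly 3.1%, the usual sparse-activation profile of a Mixture-of-Experts design.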

Reasoning controls

The docs and model card together imply a tiered effort model rather than a single answer mode.

  • non-think for faster, lighter answers.
  • think for higher-effort analysis.
  • think max for the longest, most deliberate reasoning mode.
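The three modes above could surface in a request along the lines of the following sketch. The mode names come from the model card; the `reasoning_effort` field and the payload shape are assumptions for illustration only, since the public pages quoted here do not specify how a mode is selected over the API.

```python
# Effort modes named in the model card (S4). How they are selected over
# the API is not specified in the sources quoted here; "reasoning_effort"
# below is a hypothetical request field used purely for illustration.
REASONING_MODES = ("non-think", "think", "think max")

def build_request(prompt: str, mode: str = "non-think") -> dict:
    """Assemble an illustrative chat request carrying an effort mode."""
    if mode not in REASONING_MODES:
        raise ValueError(f"unknown reasoning mode: {mode}")
    return {
        "model": "deepseek-v4-pro",  # named in the API docs (S2)
        "reasoning_effort": mode,    # hypothetical field, not from the docs
        "messages": [{"role": "user", "content": prompt}],
    }
```

The point of the sketch is the tiering itself: one model identifier, several effort profiles, chosen per request rather than baked into the deployment.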

FAQ

Answers to the first questions a source-conscious reader will ask.

What is DeepSeek V4 in the narrowest defensible sense?

As of April 24, 2026, it is a publicly referenced DeepSeek preview series visible on the DeepSeek homepage, the official API docs, and the deepseek-ai Hugging Face collection.

Where can the public preview be accessed?

The homepage announcement says it is live on the web product, the mobile app, and the API. The API docs publicly expose deepseek-v4-flash and deepseek-v4-pro.

Where does the one-million-token claim come from?

The official Hugging Face model card says both V4 tiers support a context length of one million tokens. This page repeats that claim only as a sourced statement, not as an independently verified lab result.
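As a practical reading of the stated window, a rough fit check can be sketched in a few lines. The one-million-token figure is the model card's claim as quoted; the four-characters-per-token ratio is a generic English-text heuristic, not a DeepSeek tokenizer fact.

```python
# Context length as stated on the official model card (S4).
CONTEXT_TOKENS = 1_000_000

# Rough heuristic: ~4 characters per token for English prose. This ratio
# is an assumption for illustration, not a DeepSeek tokenizer property.
CHARS_PER_TOKEN = 4

def fits_in_context(text: str, reserve_tokens: int = 8_192) -> bool:
    """Estimate whether a prompt fits, leaving headroom for the reply."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens <= CONTEXT_TOKENS - reserve_tokens
```

For anything close to the boundary, a real tokenizer count should replace the heuristic before trusting the estimate.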

Is this page a replacement for the official technical report?

No. It is a guided editorial summary designed to shorten the path from public release signals to a practical understanding of the model family and source trail.

Primary Sources

The report is only as strong as the links underneath it.

S1

DeepSeek homepage announcement

The homepage banner states that the DeepSeek-V4 preview has been released and is live on web, app, and API.


S2

DeepSeek API docs

The official docs list deepseek-v4-flash and deepseek-v4-pro, plus compatibility notes for older aliases.


S3

deepseek-ai V4 collection

The official Hugging Face collection exposes the current public V4 repositories under the deepseek-ai account.


S4

DeepSeek-V4-Pro model card

The model card describes the V4 series architecture, context length, model sizes, reasoning modes, and benchmark tables.


S5

Technical report PDF

The official report file is published alongside the model release on the DeepSeek-V4-Pro repository.
