
Anthropic: 1M Token Context Window Generally Available for Claude Opus 4.6 and Sonnet 4.6

Products · Top News · 1 source · Mar 16

Summary

  • Anthropic makes the 1M token context window generally available at standard pricing
  • No long-context price premium: a flat per-token rate applies across the full window
  • Opus 4.6 scores 78.3% on MRCR v2, the highest among frontier models at 1M context
  • Claude Code Max, Team, and Enterprise users get 1M context automatically on Opus 4.6

Details

1. Product Launch

1M token context window reaches general availability for Claude Opus 4.6 and Sonnet 4.6

Previously in beta, the 1M context window now requires no special header and activates automatically for requests over 200K tokens. Existing code that still sends the beta header needs no changes; the header is simply ignored.
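The migration path described above can be sketched with plain dicts in the shape of a Messages API request. The model ID is taken from the article and the beta header name from the earlier long-context beta; both are illustrative, not verified against current docs.

```python
# Sketch: post-GA, no special header is needed. Requests over 200K tokens
# activate the 1M window automatically; the old beta header is ignored.

def build_request(prompt: str, send_beta_header: bool = False) -> dict:
    """Build a long-context request body (plain dict, SDK-agnostic sketch)."""
    request = {
        "model": "claude-opus-4-6",  # model name as given in the article
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }
    if send_beta_header:
        # Legacy beta header: still accepted by the API, now simply ignored.
        request["extra_headers"] = {"anthropic-beta": "context-1m-2025-08-07"}
    return request

old = build_request("...", send_beta_header=True)  # existing code: unchanged, still works
new = build_request("...")                         # new code: no header required
```

Existing integrations fall into the `old` case and keep working; new code can drop the header entirely.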

2. Financials

Flat per-token pricing applies across the full 1M context window with no long-context surcharge

Opus 4.6 is priced at $5 per million input tokens and $25 per million output tokens. Sonnet 4.6 is $3/$15. A 900K-token request is billed at the same per-token rate as a 9K one — eliminating the cost unpredictability that previously discouraged large-context usage.
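The flat-rate claim is easy to verify with the rates quoted above ($5/$25 per million input/output tokens for Opus 4.6, $3/$15 for Sonnet 4.6). This illustrative calculator shows cost scaling linearly with token count, with no long-context tier:

```python
# Pricing sketch from the rates in the article; illustrative only.
PRICES_PER_MTOK = {
    "opus-4.6":   {"input": 5.00,  "output": 25.00},
    "sonnet-4.6": {"input": 3.00,  "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD. The per-token rate is identical whether the request
    carries 9K or 900K tokens of context: no surcharge past any threshold."""
    p = PRICES_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

small = request_cost("opus-4.6", 9_000, 1_000)    # $0.07
large = request_cost("opus-4.6", 900_000, 1_000)  # $4.525
```

Input cost scales strictly linearly: 900K input tokens cost exactly 100x what 9K do, which is what makes large-context budgeting predictable.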

3. New Tech

Media capacity expanded 6x to 600 images or PDF pages per request

Up from a prior limit of 100 items, this increase allows document-heavy workflows — legal review, financial analysis, large report processing — to operate without batching or splitting inputs across multiple requests.
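The batching workaround this expansion removes looks roughly like the following. The limits come from the article; the helper itself is an illustrative sketch, not any specific library's API:

```python
# Splitting a document set into request-sized chunks: the step that a
# 600-item limit makes unnecessary for most document-heavy workflows.
OLD_LIMIT = 100  # prior cap on images/PDF pages per request
NEW_LIMIT = 600  # capacity at GA, per the article

def batch_documents(pages: list, per_request_limit: int) -> list:
    """Split pages into consecutive batches that each fit in one request."""
    return [pages[i:i + per_request_limit]
            for i in range(0, len(pages), per_request_limit)]

pages = list(range(450))                       # e.g. a 450-page financial report
print(len(batch_documents(pages, OLD_LIMIT)))  # 5 requests before
print(len(batch_documents(pages, NEW_LIMIT)))  # 1 request now
```

A 450-page report previously needed five requests (plus logic to merge the five partial answers); it now fits in one.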

4. Research

Opus 4.6 scores 78.3% on MRCR v2, the highest of any frontier model at 1M context length

MRCR v2 benchmarks a model's ability to retrieve and reason over specific details across a very long context. A high score here indicates the model does not merely accept a large context but can accurately use it — critical for real-world tasks like codebase navigation or long-document analysis.

5. Industry Update

Claude Code Max, Team, and Enterprise users default to 1M context on Opus 4.6 automatically

For agentic coding sessions, this means fewer compactions: the full tool-call history, intermediate reasoning, and observations remain accessible throughout a session without manual intervention.
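Why compaction triggers less often can be sketched as a simple budget check: a session trace must fit the context window, and compaction (lossy summarization of older turns) kicks in as the window fills. The token heuristic and threshold below are illustrative assumptions, not Claude Code's actual logic:

```python
# Rough sketch of a context-budget check for an agentic session trace.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude ~4 chars/token heuristic

def needs_compaction(trace: list, window: int, threshold: float = 0.9) -> bool:
    """True once the trace (tool calls, reasoning, observations) nears the window."""
    used = sum(approx_tokens(entry) for entry in trace)
    return used > window * threshold

trace = ["tool call + output " * 500] * 250     # a long agentic session
print(needs_compaction(trace, 200_000))    # True: would compact at 200K
print(needs_compaction(trace, 1_000_000))  # False: fits comfortably at 1M
```

The same trace that forces summarization at 200K sits well under half of a 1M window, which is the "context anxiety is gone" effect users describe below.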

6. Infrastructure

Feature is live across four major deployment surfaces simultaneously

Available on Claude Platform natively, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry from launch day — ensuring enterprise customers on major cloud providers have immediate access without waiting for platform-specific rollouts.

7. Insight

Large context eliminates common engineering workarounds like lossy summarization and context clearing

Teams working with large codebases, long agent traces, or extensive document corpora previously had to build infrastructure to manage context limits. With reliable retrieval at 1M tokens, those workarounds become unnecessary, reducing both engineering overhead and information loss.

Product Launch = GA release, Financials = pricing details, New Tech = capability expansion, Research = benchmark performance, Industry Update = product tier changes, Infrastructure = deployment availability, Insight = practical workflow implications

What This Means

Making 1M token context generally available at flat pricing removes two of the main friction points that limited real-world use: cost unpredictability and the engineering overhead of context management. For developers building agentic systems, legal or financial document workflows, or large-scale code analysis, the model can now hold an entire working set in memory without summarization loss or manual chunking. The MRCR v2 score matters here — a large context window is only useful if the model can reliably retrieve and reason across it, and Anthropic's leading benchmark result suggests Opus 4.6 clears that bar. Combined with multi-cloud availability from day one, this positions the feature for immediate production adoption rather than experimental use.

Sentiment

Broadly excited among developers and AI practitioners, with practical praise for removing context friction

@rohanpaul_ai · Rohan Paul · AI newsletter analyst
Excited

Big news for Anthropic Claude again. They made a 1M token context window generally available for Claude Opus 4.6 and Sonnet 4.6. They also got rid of the extra fees for long context in the API... Opus 4.6 scores 78.3% on the MRCR v2 memory test at the full 1M length.

@DataChaz · Charly Wargnier · AI agent builder, ex-Streamlit & Snowflake
Impressed

What Anthropic has shipped so far in 2026 is just insane: → Claude Opus 4.6 → Claude Sonnet 4.6 → ... → 1M context window. @AnthropicAI is on an absolute, once-in-a-lifetime tear right now

@ashebytes · ashe · ex-ML engineer @Stanford @NASA @Apple
Supportive

would love more opinionated write ups on context engineering given current capabilities

@richard_loricco · Richard LoRicco · Software engineer
Impressed

Anthropic recently made 1M context standard for Claude Sonnet and Opus 4.6... I've been using Claude Code to build daily, and the context window anxiety is just... gone now. small thing that changes how you actually work.

@eurofounder · Matthias Schmidt · EU founder building GDPR-compliant startups
Skeptical

This isn't really important for the EU users. Can you outline your data privacy measures instead?

Split

~95/5 positive vs. concerned: developers thrilled by production usability, with occasional EU privacy skepticism.
