Claude Opus 4.7: what Anthropic announced—and how comparisons are framed

By Paath.online · 16 April 2026 · 12 min read

On April 16, 2026, Anthropic announced Claude Opus 4.7 as generally available across Claude products, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. This article summarises only what appears on Anthropic's own announcement page—plus linked official docs—so you can read primary sources yourself.

Primary link: anthropic.com/news/claude-opus-4-7.

Opus 4.7 vs Opus 4.6 (same post)

Anthropic positions Opus 4.7 as a notable improvement on Opus 4.6 for advanced software engineering, especially on difficult tasks—users can hand off harder coding work with less supervision. The post also highlights:

  • Vision: higher-resolution image support—images up to 2,576 pixels on the long edge (~3.75 megapixels), described as more than three times the capacity of prior Claude models for comparable use cases.
  • Instruction following: the model follows instructions more literally; Anthropic warns that prompts tuned for earlier models may behave differently and recommends re-tuning harnesses.
  • Cyber capabilities vs Mythos Preview: Opus 4.7 is explicitly described as less broadly capable in cybersecurity than Claude Mythos Preview, with automated safeguards that block high-risk cybersecurity requests. Legitimate security professionals are pointed to Anthropic's Cyber Verification Program (linked from the announcement).
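The long-edge figure above implies a simple pre-processing rule: scale any larger image down, preserving aspect ratio, before sending it. A minimal sketch of that arithmetic, assuming only the 2,576-pixel long-edge limit stated in the announcement (the helper name is ours; the ~3.75-megapixel figure corresponds to a widescreen aspect ratio at that long edge):

```python
def fit_to_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most max_long_edge.

    Returns the size unchanged if it already fits; otherwise scales both
    dimensions by the same factor, preserving aspect ratio.
    """
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A 5K (5120x2880) screenshot exceeds the limit and scales to 2576x1449,
# roughly the ~3.75 MP mentioned in the announcement.
print(fit_to_long_edge(5120, 2880))  # (2576, 1449)
# A 2000x1500 image already fits and is returned unchanged.
print(fit_to_long_edge(2000, 1500))  # (2000, 1500)
```

Downscaling client-side like this keeps you in control of interpolation quality rather than relying on whatever server-side resizing may apply.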

Pricing and API identifier

The announcement states pricing is unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers should use the model ID claude-opus-4-7, listed in the Claude API model overview.
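For budgeting, the per-request cost at those list prices is straightforward to compute. A minimal sketch, using only the $5/$25 per-million-token figures from the announcement (the helper name is ours, not part of any Anthropic SDK):

```python
# List prices from the announcement: $5 per million input tokens,
# $25 per million output tokens (unchanged from Opus 4.6).
INPUT_USD_PER_MTOK = 5.00
OUTPUT_USD_PER_MTOK = 25.00

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD of one request at Opus 4.7 list prices."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# e.g. a 10,000-token prompt producing a 2,000-token reply:
print(f"${request_cost_usd(10_000, 2_000):.2f}")  # $0.10
```

Note the 5:1 output-to-input price ratio: for long generations, output tokens dominate the bill.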

GPT‑5.4, Gemini 3.1 Pro, and published charts

Anthropic's post includes comparison charts with a footnote (on the announcement page) stating that, for GPT‑5.4 and Gemini 3.1 Pro, the charts compare against the best reported model version available via API. That detail matters: third-party availability and model versioning change weekly, so always open the original charts rather than relying on re-blogged numbers.

For independent third-party leaderboards, many teams use Artificial Analysis (separate from Anthropic)—useful for trends, not a substitute for your own evals.

Safety documentation and migration

Learn more on Paath.online

Claude Mythos Preview & Project Glasswing · Choosing the right LLM · LLM evaluation basics.