
New Claude 4 Models Redefine AI Reasoning Power


Anthropic just lifted the curtain on Claude 4, its most powerful family of AI models yet, designed to handle complex tasks, write high-quality code, and think through problems over multiple steps.

Launched at its first-ever developer conference, the new Claude 4 lineup includes Claude Opus 4 and Claude Sonnet 4. Both models are trained to tackle long-horizon tasks and work with large datasets—making them ideal for developers, engineers, and teams looking to build smarter, more responsive AI tools.

A New Era for AI Coding Assistants

What makes Claude 4 stand out? According to Anthropic, both models excel at programming tasks and follow instructions more accurately than earlier versions. Opus 4, the more advanced of the two, can stay focused across many steps in a workflow—something many current AI models struggle with. Opus 4 is available to paying users only, while Sonnet 4 is accessible through the free tier as well.

Both models are already live across Anthropic’s own products and can also be used via API through Amazon Bedrock and Google Vertex AI. Pricing for Opus 4 sits at $15 per million input tokens and $75 per million output tokens, with Sonnet 4 priced more affordably at $3 and $15 respectively. To put that in perspective, a million tokens equals about 750,000 words—longer than War and Peace.
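Those per-token prices are easiest to compare with a quick calculation. The sketch below applies the article's quoted rates to a hypothetical workload (the token counts are illustrative, not from the article):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in USD, given prices quoted per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example workload: 200k input tokens, 50k output tokens.
opus = cost_usd(200_000, 50_000, 15, 75)   # Opus 4: $15 in / $75 out
sonnet = cost_usd(200_000, 50_000, 3, 15)  # Sonnet 4: $3 in / $15 out

print(f"Opus 4:   ${opus:.2f}")    # → Opus 4:   $6.75
print(f"Sonnet 4: ${sonnet:.2f}")  # → Sonnet 4: $1.35
```

For this workload, Sonnet 4 comes in at a fifth of the Opus 4 cost—the trade-off the article describes between the two tiers.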

Anthropic isn’t just rolling out new models—it’s aiming high. The company, founded by ex-OpenAI researchers, is projecting $2.2 billion in revenue this year, with a bold target of $12 billion by 2027. Backed by Amazon and other major investors, it recently secured a $2.5 billion credit line to help fund the development of next-generation models.

How Claude 4 Stacks Up

Opus 4 is built to outperform competitors on key benchmarks. It edges out Google’s Gemini 2.5 Pro and OpenAI’s latest GPT-4.1 on coding benchmarks such as SWE-bench Verified. However, it still trails OpenAI’s o3 on some multimodal and science-heavy tests like MMMU and GPQA Diamond. Even so, Anthropic believes its models offer a unique blend of speed, memory, and structured reasoning.

Both Claude 4 models are what Anthropic calls “hybrids.” That means they can generate instant responses but also switch into a deeper reasoning mode when a more thoughtful answer is needed. During this mode, they offer a summary of their reasoning path—a peek into how the answer was formed—while keeping full transparency limited to protect proprietary methods.
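In practice, that hybrid behavior is something the caller opts into per request. The sketch below shows how such a request payload might be assembled—the model ID and the exact shape of the reasoning option are assumptions for illustration, not confirmed by the article:

```python
def build_request(prompt: str, deep_reasoning: bool = False) -> dict:
    """Sketch of a request payload for a hybrid model: instant replies by
    default, with an optional extended-reasoning budget for harder problems."""
    req = {
        "model": "claude-opus-4-0",  # hypothetical model identifier
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }
    if deep_reasoning:
        # Assumed option: grant the model a token budget for its internal
        # reasoning pass before it writes the final answer.
        req["thinking"] = {"type": "enabled", "budget_tokens": 2048}
    return req

fast = build_request("What's 2 + 2?")
slow = build_request("Prove this invariant holds.", deep_reasoning=True)
```

The key design point is that one model serves both paths; the caller pays the latency and token cost of deeper reasoning only when the task warrants it.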

To prevent misuse, Anthropic added stronger safeguards to Opus 4, especially because internal testing showed it could meaningfully assist users in producing dangerous materials. The model now meets Anthropic’s ASL-3 safety tier and includes tougher filters for harmful content.

Claude Code Gets an Upgrade

Alongside the model release, Anthropic is updating Claude Code—its AI-powered coding assistant. The tool now works with IDEs like VS Code and JetBrains and includes an SDK for integrating it into third-party apps. A new GitHub connector also lets Claude Code respond to pull request comments, fix bugs, and modify code automatically.

While AI still has a long way to go in producing secure, bug-free code, its productivity potential is pushing developers to adopt these tools quickly. Anthropic knows this and says it’s shifting to a faster update cycle to keep improving Claude’s capabilities.

In a blog post shared ahead of the launch, the company wrote: “We’re rolling out model updates more frequently to help you stay on the cutting edge. Each update will bring powerful new features and refinements.”

With Claude 4, Anthropic isn’t just competing—it’s betting big on becoming the go-to AI for reasoning, coding, and next-gen productivity.
