OpenAI has introduced two new AI models, o3 and o4-mini, built to deliver stronger reasoning, faster responses, and deeper understanding. These models are designed to pause, think, and solve problems more intelligently—marking a new chapter in AI development.
According to OpenAI, o3 is its most advanced reasoning model yet. It outperforms previous versions in math, logic, science, coding, and visual tasks. Meanwhile, o4-mini gives developers a smart balance between cost, speed, and power—perfect for building efficient, AI-powered tools.
These models aren’t just smarter; they’re more capable. Both o3 and o4-mini support ChatGPT tools such as web browsing, Python execution, and image analysis. They can also generate images and respond to visual inputs. OpenAI also launched a variant, o4-mini-high, which spends more time reasoning to deliver more accurate responses.
This release comes as global competition heats up. OpenAI is racing to stay ahead of rivals like Google, Meta, Anthropic, and xAI. Although OpenAI pioneered reasoning models with o1, other labs have quickly caught up. Today, advanced reasoning is becoming the new standard in AI.
Interestingly, o3 almost didn’t launch in ChatGPT. In February, CEO Sam Altman suggested the company might skip a standalone o3 release and fold its capabilities into a unified successor model. Growing pressure from rivals appears to have changed that plan, and OpenAI shipped o3 directly in ChatGPT.
In performance tests, o3 scored 69.1% on SWE-bench, a benchmark that measures real-world coding ability without custom scaffolding. That’s a big jump from the 49.3% scored by o3-mini. o4-mini is close behind at 68.1%. Both results surpass Claude 3.7 Sonnet, which scored 62.3%.
One standout feature? These models can “think with images.” You can upload diagrams, sketches, or blurry screenshots, and the models will interpret them before giving answers. During this phase, they may zoom, rotate, or enhance the visuals to make sense of the content—just like a human would.
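When visual inputs reach the models through the API, they travel as part of a mixed text-and-image message. As a rough sketch, here is the shape such a message takes in OpenAI’s Chat Completions request format; the helper name `build_vision_message` and the example URL are illustrative, not part of any SDK:

```python
# Illustrative sketch: building a Chat Completions user message that pairs a
# text prompt with an image, the request shape used for visual inputs.
def build_vision_message(prompt: str, image_url: str) -> dict:
    """Return a single user message mixing a text part and an image part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# Example: ask the model to interpret an uploaded diagram (placeholder URL).
msg = build_vision_message(
    "What does this diagram show?",
    "https://example.com/sketch.png",
)
print(msg["content"][1]["type"])  # prints: image_url
```

The message would then be passed in the `messages` list of a Chat Completions request; any zooming, rotating, or enhancing happens on the model side, not in your request.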
Beyond visuals, o3 and o4-mini can run Python code in the browser through ChatGPT’s Canvas tool. They can also search the web for up-to-date answers, so responses aren’t limited to the models’ training data.
Developers can access these models through OpenAI’s Chat Completions and Responses APIs. Pricing is designed to be flexible:
- o3 costs $10 per million input tokens and $40 per million output tokens.
- o4-mini is priced at $1.10 and $4.40 per million tokens respectively, the same rates as o3-mini.
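Those per-million-token rates make it easy to estimate what a request will cost before you send it. A minimal sketch, using the launch prices quoted above; the `PRICES` table and `estimate_cost` helper are illustrative names, not part of the OpenAI SDK:

```python
# Launch rates from the article, in USD per 1 million tokens.
PRICES = {
    "o3": {"input": 10.00, "output": 40.00},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the given model's rates."""
    rates = PRICES[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(round(estimate_cost("o3", 2000, 500), 4))       # about $0.04
print(round(estimate_cost("o4-mini", 2000, 500), 4))  # about $0.0044
```

The roughly 9x gap between the two models on the same request is the trade-off OpenAI is pitching: o4-mini for cheap, high-volume workloads, o3 when you need maximum reasoning quality.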
In the coming weeks, OpenAI will also release o3-pro, a high-performance version of o3 for ChatGPT Pro users. It will offer deeper reasoning by using more computing power for every response.
Sam Altman has hinted that o3 and o4-mini could be the last standalone reasoning models before the launch of GPT-5, which is expected to combine classic language generation with the deep reasoning seen in o3, creating a single, unified AI platform.