OpenAI Releases Advanced AI Models o3, o4-mini with Image Use

OpenAI launches o3 and o4-mini AI models with 20% fewer errors, advanced image reasoning, and Codex CLI for developers.

By Chandra Mouli, Founder
Chandra Mouli is a former software developer from Andhra Pradesh, India, who left the IT world to start CyberOven full-time. With a background in frontend technologies...
Highlights
  • OpenAI released o3 and o4-mini models with improved reasoning.
  • Models can use images, tools, and have fewer major mistakes.
  • Codex CLI lets programmers use models directly in their terminal.

OpenAI released two new AI reasoning models, called o3 and o4-mini, today (April 17, 2025). The new models solve problems more reliably and can now work with both text and images. They are available to ChatGPT Plus, Pro, and Team users through the ChatGPT platform and the API.

You might wonder: what exactly is an AI reasoning model? Think of it as a smart computer program that solves problems step-by-step, similar to how you might solve a puzzle. These programs can understand both words and pictures, break big problems into smaller parts, and find solutions more easily.
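As a toy illustration of that "break a big problem into smaller parts" idea, here is what step-by-step reasoning looks like for a simple equation. This is purely illustrative; the models' internal reasoning is not Python code:

```python
# Toy example: break "solve 3x + 5 = 20" into explicit steps,
# mirroring the step-by-step style a reasoning model narrates.

steps = []

# Step 1: subtract 5 from both sides -> 3x = 15
rhs = 20 - 5
steps.append(f"subtract 5 from both sides: 3x = {rhs}")

# Step 2: divide both sides by 3 -> x = 5
x = rhs / 3
steps.append(f"divide both sides by 3: x = {x:g}")

for step in steps:
    print(step)
```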

The new models come with several exciting abilities:

  • They can think with and manipulate images – zooming in, rotating pictures, and analyzing what they see
  • They can use all ChatGPT tools together to solve complex problems
  • They make 20% fewer major errors compared to older models
  • o3 achieves 87.7% accuracy on expert-level science benchmarks
  • They perform better on difficult tests in subjects like math, science, and coding
  • o4-mini is a faster, more cost-efficient alternative
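To give a sense of how an image reaches a model through the API, here is a minimal sketch of the multimodal message format used by OpenAI's Chat Completions API, where a single user message combines text and an image. The URL and question below are placeholders, and the exact field names should be treated as an assumption that may evolve:

```python
# Sketch of a multimodal user message in the Chat Completions format:
# a text part plus an image part, sent together in one message.
def build_image_message(question: str, image_url: str) -> dict:
    """Build one user message combining a text question and an image URL."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_image_message(
    "What does this chart show?", "https://example.com/chart.png"
)
print(msg["content"][1]["type"])  # → image_url
```

A message built this way is passed in the `messages` list of a chat completion request, alongside the model name.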

How are these models different from older ones like GPT-4? Here’s what makes them special:

| Feature | o3 and o4-mini | Previous models |
| --- | --- | --- |
| Image handling | Can zoom, rotate, and manipulate images | Basic image understanding only |
| Context length | Can handle 200,000 tokens at once | Limited to 128,000 tokens |
| Accuracy | 20% fewer major errors | More prone to mistakes |
| Tool use | Can combine all tools automatically | More limited tool capabilities |

OpenAI has also made these models safer than previous versions. They’ve improved safety in several important ways:

  • Created new safety training data about dangerous topics like creating harmful programs or biological risks
  • Added a special safety monitor that caught 99% of potentially harmful conversations during testing
  • Ensured both models stay below the “High” risk threshold in important safety categories

Along with these new models, OpenAI introduced Codex CLI, which is a new tool for programmers. Codex CLI is like a helpful coding assistant that works right in your computer’s command line. It can help write code, fix bugs, and solve programming problems. The best part? It’s completely free and open-source, meaning anyone can use and improve it.

If you’re a programmer who wants to use Codex CLI, here’s how you can get started:

  • Install it with npm (the package manager that ships with Node.js)
  • Set up your OpenAI API key
  • Type questions or commands in regular language to get coding help
  • You can even share screenshots of your code or simple drawings to explain your problems
  • Find it at github.com/openai/codex
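Putting those steps together, a first session looks roughly like this. The package name `@openai/codex` and the `codex` command follow OpenAI's Codex repository; the API key below is a placeholder:

```shell
# Install Codex CLI globally via npm (requires Node.js)
npm install -g @openai/codex

# Provide your OpenAI API key (placeholder shown)
export OPENAI_API_KEY="your-api-key-here"

# From your project directory, ask for help in plain language
codex "explain what this codebase does"
```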

OpenAI is also launching a $1 million funding program to support projects that use Codex CLI and OpenAI models. They will give $25,000 in API credits (which let you use their AI services) to projects they approve. This could help developers create new and useful tools with these technologies.

While o3 is already available, OpenAI says that an even more powerful version called o3-pro will be released in a few weeks. These improvements show how AI is getting better at thinking and solving problems, bringing us closer to AI that can truly help with complex tasks in our daily lives.
