OpenAI introduces o3 and o4-mini models; Codex CLI coding agent also released

Yesterday, OpenAI officially introduced the latest models in the o-series: o3 and o4-mini. These models can agentically use and combine every tool within ChatGPT, including searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images.

OpenAI o3 & o4-mini

OpenAI o3 is a powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more. It’s ideal for complex queries requiring multi-faceted analysis and whose answers may not be immediately obvious. It performs especially strongly at visual tasks like analyzing images, charts, and graphics. In evaluations by external experts, o3 makes 20 percent fewer major errors than OpenAI o1 on difficult, real-world tasks, especially excelling in areas like programming, business/consulting, and creative ideation.

OpenAI o4-mini is a smaller model optimized for fast, cost-efficient reasoning. It achieves remarkable performance for its size and cost, particularly in math, coding, and visual tasks. It also outperforms its predecessor, o3‑mini, on non-STEM tasks and domains like data science. o4-mini supports significantly higher usage limits than o3.

OpenAI has trained both models to use tools through reinforcement learning, teaching them not just how to use tools, but to reason about when to use them. Their ability to deploy tools based on desired outcomes makes them more capable in open-ended situations, particularly those involving visual reasoning and multi-step workflows.

Thinking with images

For the first time, these models can integrate images directly into their chain of thought. People can upload a photo of a whiteboard, a textbook diagram, or a hand-drawn sketch, and the model can interpret it, even if the image is blurry, reversed, or low quality. With tool use, the models can manipulate images on the fly, rotating, zooming, or transforming them as part of their reasoning process. These models deliver best-in-class accuracy on visual perception tasks, enabling them to solve questions that were previously out of reach.

Beyond this, OpenAI o3 and o4-mini have full access to tools within ChatGPT, as well as your custom tools via function calling in the API. For example, a user might ask: “How will summer energy usage in California compare to last year?” The model can search the web for public utility data, write Python code to build a forecast, generate a graph or image, and explain the key factors behind the prediction, chaining together multiple tool calls.
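To make the chaining idea concrete, here is a minimal, self-contained sketch of an agentic loop in the style of the energy-usage example. Everything here is hypothetical: the tool names, the fixed call order, and the usage figures are made up for illustration, and a real model would decide dynamically which tool to invoke at each step.

```python
def search_web(query):
    # Stand-in for a web-search tool; the figures are invented for the demo.
    return {"ca_summer_usage_twh": {"2023": 28.1, "2024": 29.4}}

def run_python(data):
    # Stand-in for a code-execution tool that extrapolates last year's growth.
    growth = data["2024"] / data["2023"]
    return round(data["2024"] * growth, 1)

def answer_query(query):
    # A real agent would reason about which tool to call next; here the
    # chain is fixed: search for data, then compute a forecast from it.
    data = search_web(query)["ca_summer_usage_twh"]
    forecast = run_python(data)
    return f"Projected 2025 usage: {forecast} TWh (vs {data['2024']} TWh in 2024)"

print(answer_query("How will summer energy usage in California compare to last year?"))
```

The point of the sketch is the shape of the workflow, not the arithmetic: one tool's output becomes the next tool's input, and the final answer cites the intermediate results.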

For these models, OpenAI has completely rebuilt its safety training data, adding new refusal prompts in areas such as biological threats, malware generation, and jailbreaks. The company has also developed system-level mitigations to flag dangerous prompts in frontier risk areas.

Availability

ChatGPT Plus, Pro, and Team users will see o3, o4-mini, and o4-mini-high in the model selector starting today, replacing o1, o3‑mini, and o3‑mini‑high. ChatGPT Enterprise and Edu users will gain access in one week. Free users can try o4-mini by selecting ‘Think’ in the composer before submitting their query. Rate limits across all plans remain unchanged from the prior set of models. Both o3 and o4-mini are also available to developers today via the Chat Completions API and Responses API. The company is expected to release OpenAI o3-pro in a few weeks with full tool support.
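For developers, a request to these models follows the usual Chat Completions shape. The sketch below only constructs and prints the payload; actually sending it requires an API key, and the model identifier should be checked against the current API reference.

```python
import json

# Illustrative Chat Completions payload; "o4-mini" is the model name from
# the announcement, but verify the identifier in the API documentation.
payload = {
    "model": "o4-mini",
    "messages": [
        {"role": "user", "content": "Summarize the tradeoffs of o3 vs o4-mini."}
    ],
}

# To send it with the official SDK (needs OPENAI_API_KEY set):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**payload)
print(json.dumps(payload, indent=2))
```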

Apart from these latest o-series models, OpenAI has also released a new coding agent, Codex CLI.

Codex CLI

This coding agent runs directly on your computer and is designed to maximize the reasoning capabilities of models like o3 and o4-mini, with upcoming support for additional API models such as GPT-4.1. Users can get the benefits of multimodal reasoning from the command line by passing screenshots or low-fidelity sketches to the model, combined with local access to their code. Codex CLI is fully open source and available at github.com/openai/codex.
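Getting started looks roughly like the commands below. These reflect the install and invocation pattern from the initial release (an npm package and a `codex` command taking a natural-language prompt); consult the repository README for the current flags and authentication steps.

```shell
# Install the CLI globally (requires Node.js) and set your API key.
npm install -g @openai/codex
export OPENAI_API_KEY="your-api-key"

# Run the agent in a project directory with a natural-language task.
cd my-project
codex "explain this codebase to me"
```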
