Jan 31, 2025
OpenAI o3-mini
OpenAI has released o3-mini 🧠, a cost-efficient STEM reasoning model available in ChatGPT and the API. It improves on o1-mini with greater accuracy, faster responses, and lower latency. Developers can adjust reasoning effort (low, medium, high) and use function calling, structured outputs, and developer messages.
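The developer-facing knobs mentioned above can be sketched as a Chat Completions request. This is a minimal illustration, not official sample code: the `reasoning_effort` values and the use of a `developer` role follow the announcement, but the exact field layout should be checked against OpenAI's current API docs, and the tutor prompt is a made-up example.

```python
import json

def build_request(prompt: str, effort: str = "medium") -> str:
    """Build a hypothetical o3-mini request body with a chosen reasoning effort."""
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unsupported reasoning effort: {effort}")
    payload = {
        "model": "o3-mini",
        # Trades speed for deeper reasoning: "low" is fastest, "high" thinks longest.
        "reasoning_effort": effort,
        "messages": [
            # o-series models take "developer" messages in place of "system" messages.
            {"role": "developer", "content": "You are a concise math tutor."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

# Example: ask for more deliberate reasoning on a harder problem.
request_body = build_request("Prove that sqrt(2) is irrational.", effort="high")
print(request_body)
```

In practice this JSON body would be sent to the Chat Completions endpoint with an API key; the point here is simply that effort is a single per-request parameter, so the same prompt can be run cheaply at "low" or more carefully at "high".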
ChatGPT Plus, Team, and Pro users can access it today, with Enterprise access in February. Free-tier users can try it under the “Reason” mode, making it the first reasoning model available to non-paying users.
The Details
Enhanced STEM Performance: o3-mini outperforms o1-mini in math, coding, and science, making 39% fewer major errors, and 56% of testers preferred its responses.
Customizable Reasoning: Developers can adjust low, medium, or high reasoning effort to balance speed and complexity, improving performance in math, science, and engineering tasks.
Faster and More Efficient: o3-mini responds 24% faster than o1-mini, averaging 7.7 seconds per response and reaching the first token 2.5 seconds sooner.
Why It Matters
o3-mini lowers the cost of high-quality reasoning models with strong performance in math, coding, and science. It enables faster, more efficient problem-solving for developers, students, and professionals.