OpenAI o1-mini is a cost-efficient reasoning model optimized for STEM tasks (science, technology, engineering, and math), particularly excelling in mathematics and coding. It offers advanced reasoning capabilities at a fraction of the cost of its larger counterpart, o1-preview.
Key Features
Enhanced performance in STEM reasoning tasks
Cost-effective alternative to o1-preview (80% cheaper)
Improved speed compared to o1-preview
Strong capabilities in coding and mathematical problem-solving
Incorporates chain-of-thought reasoning
Intended Use
o1-mini is designed for applications requiring focused reasoning without extensive world knowledge, particularly in:
Complex code generation and analysis
Advanced mathematical problem-solving
Scientific research and data analysis
Educational tools for STEM subjects
OpenAI o1-mini also advances AI medical coding capabilities, excelling at writing, generating, and debugging complex code with improved speed and precision, especially in research settings.
Language Support
OpenAI has not published a detailed list of supported languages, but the model shows strong performance across a wide range of languages, including low-resource ones.
Context Window
128,000 tokens
Max Output Tokens
65,536 tokens
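As a practical illustration of these limits, the short sketch below checks that a prompt fits in the context window before sending it. It is a minimal sketch, assuming the tiktoken library and the o200k_base encoding (the tokenizer used by GPT-4o-generation models); the prompt is purely illustrative.

```python
import tiktoken

CONTEXT_WINDOW = 128_000     # total tokens the model can attend to
MAX_OUTPUT_TOKENS = 65_536   # upper bound on completion tokens

# o200k_base is an assumption here; swap in whatever encoding your tooling reports.
enc = tiktoken.get_encoding("o200k_base")

prompt = "Solve the following system of equations step by step: x + y = 7, 2x - y = 2."
prompt_tokens = len(enc.encode(prompt))

# Whatever the prompt does not consume remains available for the response,
# capped by the model's own output limit.
remaining = min(CONTEXT_WINDOW - prompt_tokens, MAX_OUTPUT_TOKENS)
print(f"Prompt uses {prompt_tokens} tokens; up to {remaining} tokens remain for output.")
```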
Beta Limitations
During the beta phase, many Chat Completions API parameters are not yet available. Most notably:
Modalities: text only, images are not supported.
Message types: user and assistant messages only, system messages are not supported.
Streaming: not supported.
Tools: tools, function calling, and response format parameters are not supported.
Logprobs: not supported.
Other: temperature, top_p and n are fixed at 1, while presence_penalty and frequency_penalty are fixed at 0.
Assistants and Batch: these models are not supported in the Assistants API or Batch API.
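For reference, here is a minimal request sketch that respects these beta constraints, using the official OpenAI Python SDK. The prompt and token cap are illustrative; if you access the model through an OpenAI-compatible provider such as AI/ML API, point the client at that provider's base URL and API key instead.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-mini",
    # Only user and assistant messages are allowed; no system message, tools,
    # streaming, logprobs, or sampling overrides (temperature is fixed at 1).
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the nth Fibonacci number.",
        }
    ],
    # o1-series models take max_completion_tokens rather than max_tokens; the cap
    # also covers hidden reasoning tokens (up to 65,536 for o1-mini).
    max_completion_tokens=4096,
)

print(response.choices[0].message.content)
```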
Technical Details
Architecture
o1-mini uses a transformer-based architecture optimized for STEM reasoning. Like o1-preview, it is trained with large-scale reinforcement learning to reason through problems using chain of thought, but with a smaller parameter count.
Training Data
Data Source and Size: Trained on a large-scale dataset containing data up to October 2023
Knowledge Cutoff: October 2023
Performance Metrics
Scored 70.0% on the AIME (American Invitational Mathematics Examination)
Achieved a 1650 Elo rating on Codeforces (86th percentile)
Excels on the HumanEval coding benchmark and high-school-level cybersecurity (CTF) challenges
Surpasses GPT-4o on some reasoning benchmarks like GPQA and MATH-500
Comparison to Other Models
Accuracy
Closely matches o1-preview performance on AIME and Codeforces
Outperforms GPT-4o on reasoning-heavy tasks in STEM domains
Less effective than GPT-4o on language-focused tasks and tasks requiring broad world knowledge
Speed
Reaches answers on complex reasoning tasks roughly 3-5 times faster than o1-preview
Output speed: 73.9 tokens per second
Time to First Token (TTFT): 13.98 seconds (the model produces hidden reasoning tokens before any visible output, which lengthens TTFT)
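These figures vary with load and prompt complexity. A rough way to measure end-to-end throughput yourself is sketched below; since streaming is unavailable in the beta, true TTFT cannot be observed directly, so the sketch times the whole request. It assumes the OpenAI Python SDK and an illustrative prompt.

```python
import time
from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Prove that the sum of two even integers is even."}],
)
elapsed = time.perf_counter() - start

# usage.completion_tokens includes hidden reasoning tokens for o1-series models,
# so this measures end-to-end throughput rather than visible-output speed.
tokens = response.usage.completion_tokens
print(f"{tokens} completion tokens in {elapsed:.2f}s = {tokens / elapsed:.1f} tokens/s")
```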
Robustness
Demonstrates 59% greater jailbreak resilience on the StrongREJECT dataset compared to GPT-4o
Handles diverse inputs well within its STEM specialization
Trained using the same alignment and safety techniques as o1-preview
Has undergone comprehensive testing, red-teaming, and evaluation in collaboration with the U.S. and U.K. AI Safety Institutes
Designed to adhere to ethical guidelines and safety protocols embedded in its reasoning process
Price
OpenAI o1-mini is available through AI/ML API services. Pricing is set at $0.00315 per 1K input tokens and $0.0126 per 1K output tokens, making it 80% cheaper than o1-preview.
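As a worked example of those rates, the sketch below estimates the cost of a single request. The token counts are made up for illustration; real counts come from the usage object on each response, and output tokens include hidden reasoning tokens.

```python
INPUT_PRICE_PER_1K = 0.00315   # USD per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.0126   # USD per 1K output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single o1-mini request at the rates above."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 10,000-token prompt producing 5,000 output tokens:
# 10 * $0.00315 + 5 * $0.0126 = $0.0315 + $0.0630 = $0.0945
print(f"${estimate_cost(10_000, 5_000):.4f}")
```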