On October 16, 2024, billion-dollar French AI startup Mistral AI released two new state-of-the-art models. Titled “Les Ministraux,” the pair consists of Ministral 3B and Ministral 8B, designed specifically for on-device computing and at-the-edge applications. The announcement coincided with the first anniversary of the release of Mistral 7B, the company’s first model.
What is Mistral AI’s Les Ministraux?
The Les Ministraux models are “two new state-of-the-art models for on-device computing and at-the-edge use cases.” The Ministral 3B and Ministral 8B models are specifically optimized for tasks that require localized, low-latency computation.
They are ideal for a variety of use cases, from on-device translation to offline smart assistants, local analytics, and autonomous robotics.
The new models are designed for powerful performance while maintaining efficiency within the sub-10B parameter category.
With a context length support of up to 128k tokens, these models can be tuned for more advanced tasks, such as agentic workflows and specialist task automation.
Benchmark Performance
According to Mistral, both Ministral 3B and Ministral 8B “consistently outperform their peers.” Compared with models such as Gemma 2 and Llama 3, both Ministraux models show performance gains, and even the 3B model outperforms the larger Mistral 7B on most benchmarks.
In short, the Les Ministraux models deliver high-end results despite their smaller size.
Image Source: Mistral AI
In benchmarking tests, the Ministraux models excel in categories like knowledge, commonsense reasoning, and function-calling.
Image Source: Mistral AI
Features
These are some of the defining features of Mistral AI’s Ministraux models:
- Sub-10B Parameter Models
- 128k Context Length
- Sliding-Window Attention (Ministral 8B)
- Privacy-First Inference
- Multi-step Agentic Workflow Support
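Sliding-window attention limits each token to attending only to a fixed window of recent positions, which keeps memory and compute from growing with the full sequence length. As a rough illustration only (Ministral 8B’s actual scheme interleaves attention patterns and is not reproduced here), a causal sliding-window mask can be sketched like this:

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Causal sliding-window attention mask: position i may attend
    only to positions j in the range i - window < j <= i."""
    return [
        [i - window < j <= i for j in range(seq_len)]
        for i in range(seq_len)
    ]

# Visualize a window of 3 over a 6-token sequence:
for row in sliding_window_mask(seq_len=6, window=3):
    print("".join("x" if allowed else "." for allowed in row))
```

Each row is one query position; only the three most recent tokens (including itself) are visible, instead of the entire prefix.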
Availability and Pricing
Both the Ministral 3B and Ministral 8B models are available starting today (October 16, 2024). Mistral AI priced the Ministral 8B at $0.10 per million tokens for commercial use and the Ministral 3B at $0.04 per million tokens.
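At those rates, estimating usage cost is simple arithmetic. A minimal sketch (the model-name keys and token counts below are illustrative; the prices are the per-million-token rates quoted above):

```python
# Announced rates in USD per 1M tokens; keys are illustrative labels,
# not necessarily the official API model IDs.
PRICE_PER_MILLION = {
    "ministral-8b": 0.10,
    "ministral-3b": 0.04,
}

def estimate_cost(model: str, tokens: int) -> float:
    """Estimated cost in USD for processing `tokens` tokens."""
    return PRICE_PER_MILLION[model] / 1_000_000 * tokens

# e.g. 5 million tokens through Ministral 3B:
print(f"${estimate_cost('ministral-3b', 5_000_000):.2f}")  # → $0.20
```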
Self-deployment options are also available for those who want more customized solutions, along with assistance in lossless quantization to maximize performance on their target hardware.
In addition, the Ministral 8B Instruct model is available for research use, and both models will be available through Mistral AI’s cloud partners.
The Bottom Line
Mistral AI is continuously striving to improve its products and offerings to stay relevant in the AI race. Mistral 7B was released only a year ago, and now even the smallest Mistral model outperforms it on most benchmarks. This constant innovation has helped the company become a billion-dollar startup.