Mistral AI has released Mistral Small 3.1, its latest artificial intelligence (AI) model. The Paris-based AI company unveiled two open-source variants of the model, chat and instruct. The model replaces Mistral Small 3 and adds multimodal understanding alongside improved text performance. According to the company, it outperforms comparable models, including OpenAI’s GPT-4o mini and Google’s Gemma 3, on several benchmarks. One of the newly introduced model’s main advantages is its quick response time.
Release of the Mistral Small 3.1 AI Model
The AI company detailed the new models in a press release. Mistral Small 3.1 is reported to deliver inference speeds of 150 tokens per second and to have an expanded context window of up to 128,000 tokens, which translates to very quick response times. It comes in two versions: instruct and chat. The former is optimized to follow user instructions and is helpful when developing an application with a specific purpose, while the latter functions as a standard chatbot.
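To put those two figures in perspective, here is a back-of-the-envelope sketch. The throughput and context-window numbers come from the article above; the words-per-token ratio is a rough assumption (about 0.75 English words per token), not a figure from Mistral.

```python
# Rough arithmetic on the quoted figures: ~150 tokens/second inference
# and a 128,000-token context window.
TOKENS_PER_SECOND = 150
CONTEXT_WINDOW = 128_000
WORDS_PER_TOKEN = 0.75  # assumed rule of thumb; varies by tokenizer and text

def generation_time_seconds(n_tokens: int) -> float:
    """Approximate time to generate n_tokens at the quoted throughput."""
    return n_tokens / TOKENS_PER_SECOND

# A 500-token reply would take roughly 3.3 seconds at this rate.
print(f"{generation_time_seconds(500):.1f} s")
# The full context window would hold on the order of 96,000 English words.
print(f"{CONTEXT_WINDOW * WORDS_PER_TOKEN:,.0f} words")
```

At these numbers, even a long reply finishes in seconds, while the context window comfortably fits a short novel’s worth of input text.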
Mistral Small 3.1 is publicly available, much like its predecessors. The open weights can be downloaded from the company’s Hugging Face listing. The model ships under the Apache 2.0 license, a permissive license that allows both research and commercial use cases.
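Fetching open weights from a Hugging Face listing typically looks like the sketch below. The repository id here is an assumption for illustration; check Mistral’s actual Hugging Face page for the exact name.

```python
# Hypothetical sketch of downloading the open weights from Hugging Face.
# REPO_ID is an assumed name, not confirmed by the article.
REPO_ID = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"

def download_weights(repo_id: str = REPO_ID,
                     local_dir: str = "./mistral-small-3.1") -> str:
    """Download the model weights into local_dir and return the local path."""
    # Imported lazily so the sketch can be read without the package installed;
    # requires `pip install huggingface_hub` to actually run the download.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# Calling download_weights() would fetch tens of gigabytes of weight files,
# so it is left as a function rather than executed here.
```

Because the weights are open, this is all the setup needed before loading the model with a local inference stack.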
According to Mistral, the large language model (LLM) is designed to run on a Mac with 32GB of RAM or on a single Nvidia RTX 4090 GPU. This means enthusiasts can download and run it without needing costly infrastructure for AI models. The model also offers low-latency function calling and execution, which can help in building agentic workflows and automation. Additionally, the company lets developers fine-tune Mistral Small 3.1 for specific domain use cases.
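Function calling generally works by attaching tool definitions to a chat request so the model can decide when to invoke them. The minimal sketch below assembles such a request payload in the OpenAI-compatible schema that Mistral’s chat API accepts; the model alias and the weather tool are assumptions for illustration, not from the article.

```python
import json

MODEL = "mistral-small-latest"  # assumed API alias; check Mistral's docs

# Hypothetical tool the model may choose to call.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def build_tool_request(user_message: str) -> dict:
    """Assemble a chat-completion request body that permits tool calls."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [WEATHER_TOOL],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

In an agentic workflow, the application would POST this payload to the chat endpoint, execute whichever tool call the model returns, and feed the result back as a follow-up message.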
Regarding performance, the AI company shared a range of benchmark results from internal testing. Mistral Small 3.1 is reported to outperform Gemma 3 and GPT-4o mini on the Graduate-Level Google-Proof Q&A (GPQA) Main and Diamond, HumanEval, MathVista, and DocVQA benchmarks. However, Gemma 3 beat it on the MATH benchmark, and GPT-4o mini did better on the Massive Multitask Language Understanding (MMLU) benchmark.
In addition to Hugging Face, the new model can be accessed through Google Cloud’s Vertex AI and via the application programming interface (API) on La Plateforme, Mistral AI’s developer platform. In the coming weeks, it will also become available on Microsoft’s Azure AI Foundry and Nvidia’s NIM.