Mistral AI Unveils Mistral 3: Discover the Latest in Open Source Models!

December 9, 2025

Mistral AI launches Mistral 3, its new generation of open-source models

Mistral AI has unveiled Mistral 3, a unified series of open-source, multimodal, multilingual models, aimed at strengthening its position against the giants of AI.

On December 2, 2025, Mistral AI announced the release of Mistral 3, a new suite of AI models that the French startup describes as its most advanced generation, with all models published as open source under the Apache 2.0 license. The announcement comes amid intense competition, as other players such as OpenAI, Google, and DeepSeek also unveil new architectures.

Mistral AI’s strategy is to provide developers and businesses with powerful, multimodal, and multilingual models that are freely accessible and modifiable. In a market dominated by proprietary solutions, the company continues to bet on open source to strengthen its international standing.

Mistral Large 3, the Flagship Model of the New Generation

The Mistral 3 generation includes several models spanning a wide range of sizes and use cases. The startup has designed this generation as a unified family, intended to gradually replace previous lines. All models now share the same formats and basic capabilities, with multimodality natively integrated across the range.

At the top of this range sits Mistral Large 3, a cutting-edge model featuring a Mixture-of-Experts (MoE) architecture, with a total of 675 billion parameters, of which 41 billion are actively engaged per generated token. This approach improves efficiency by routing each token through only a small subset of expert subnetworks, rather than the full parameter set.
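To make the routing idea concrete, here is a minimal sketch of top-k Mixture-of-Experts routing. It is purely illustrative and not Mistral's actual implementation: the expert count, hidden size, and top-k value are hypothetical, and real MoE layers route per token inside a transformer block.

```python
import numpy as np

# Illustrative top-k MoE routing sketch (hypothetical sizes, not Mistral's
# real architecture). A router scores every expert for a given token, but
# only the top-k experts actually run, so most parameters stay idle.

rng = np.random.default_rng(0)

N_EXPERTS = 8   # hypothetical number of experts
TOP_K = 2       # experts activated per token
D_MODEL = 16    # hypothetical hidden size

# Each "expert" is modeled as a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    scores = token @ router_w            # one router logit per expert
    top = np.argsort(scores)[-TOP_K:]    # indices of the k highest-scoring experts
    gates = softmax(scores[top])         # normalize gate weights over the chosen experts
    # Only TOP_K of the N_EXPERTS matrices are ever multiplied; the rest are skipped,
    # which is why active parameters per token are far fewer than total parameters.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(D_MODEL)
out = moe_layer(token)
print(out.shape)  # (16,)
```

The same logic explains the 675B-total / 41B-active split: total parameters count every expert, while active parameters count only the experts the router selects for a given token.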

According to evaluations published by the startup, Mistral Large 3 ranks among the top open-source multimodal and multilingual models, particularly for non-English conversations. Mistral AI also plans to release a more reasoning-focused variant, intended for more complex tasks, in the coming weeks.

Ministral 3, Lightweight Models Designed for Resource-Constrained Environments

Alongside the flagship model, the startup has also introduced the Ministral 3 series, which incorporates the advancements of the main range into more compact formats designed for resource-constrained environments. Each size (3B, 8B, 14B) comes in three variants (base, instruct, and reasoning) and benefits from multimodal capabilities including text and vision.

Mistral AI notes that these models can operate locally or on edge devices, which facilitates their integration into embedded products or applications requiring low latency, while maintaining the freedoms provided by the Apache 2.0 license. According to the startup, they offer a good balance between operational cost and performance in their category.

The complete range of Mistral 3 and Ministral 3 models is now accessible via several platforms and portals, including Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face, and other open AI integrators.
