Mistral has launched new AI models optimized for use on laptops and smartphones.

French AI start-up Mistral has announced its first edge-focused generative AI models.

Mistral's new family of models, which it calls "Les Ministraux," can be used or tuned for any number of applications, from relatively simple text generation to working alongside more capable models to complete tasks.

There are two Les Ministraux models, Ministral 3B and Ministral 8B, both with a context window of 128,000 tokens, meaning they can ingest roughly the length of a 50-page book.

"Our most innovative customers and partners have increasingly been asking for local, privacy-first inference for critical applications such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics," Mistral writes in a blog post. "Les Ministraux were built to provide a compute-efficient and low-latency solution for these scenarios."

Ministral 8B is available for download today, though strictly for research purposes. Mistral requires devs and companies interested in self-deploying Ministral 8B or Ministral 3B to contact it for a commercial license.

Otherwise, devs can use Ministral 3B and Ministral 8B through Mistral's cloud platform, La Plateforme, and, in the coming weeks, through the other clouds the startup has partnered with. Ministral 8B costs 10 cents per million input/output tokens (roughly 750,000 words), while Ministral 3B costs 4 cents per million input/output tokens.
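For developers going the hosted route, usage amounts to a standard chat-completion call plus some simple cost arithmetic against the per-million-token rates above. The sketch below is a rough illustration, assuming Mistral's Python SDK (`mistralai`), a `MISTRAL_API_KEY` environment variable, and that the model is exposed on La Plateforme under a name like `ministral-8b-latest`; check Mistral's documentation for the exact identifiers and response fields.

```python
import os
from mistralai import Mistral

# Assumption: the `mistralai` SDK is installed and MISTRAL_API_KEY is set.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Assumption: "ministral-8b-latest" is the hosted model identifier on La Plateforme.
response = client.chat.complete(
    model="ministral-8b-latest",
    messages=[{"role": "user", "content": "Translate 'bonjour' to English."}],
)
print(response.choices[0].message.content)

# Back-of-the-envelope cost at the quoted rate of $0.10 per million tokens for Ministral 8B.
PRICE_PER_MILLION_TOKENS = 0.10
total_tokens = response.usage.prompt_tokens + response.usage.completion_tokens
print(f"Estimated cost for this call: ${total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS:.6f}")
```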

There's been a trend lately toward smaller models, which are less expensive and faster to train, fine-tune, and run than their bigger brethren. Google continues to expand its Gemma small model family; Microsoft offers its Phi collection of models; and Meta introduced several small models optimized for edge hardware in the latest refresh of its Llama suite.

These models, according to Mistral, outperform comparable Llama and Gemma models, as well as Mistral's own 7B, on several AI benchmarks designed to evaluate instruction-following, problem-solving, and zero-shot learning capabilities.

Paris-based Mistral, which recently raised $640 million in venture capital, continues to gradually expand its AI product portfolio. In the past few months, the company has released a free service for developers to test its models, an SDK that lets customers fine-tune those models, and new models including Codestral, a generative model for code.

Founded by alumni of Meta and Google's DeepMind, Mistral has said its mission is to produce models on par with the best-performing models on the market today, such as OpenAI's GPT-4o and Anthropic's Claude, and ideally make money while doing so. The making-money part has yet to prove particularly successful (as is the case for most generative AI ventures), but Mistral reportedly began generating revenue this summer.

Blog | 2024-10-17 19:22:56