France's Mistral unveils its first 'reasoning' AI model


PARIS: French artificial intelligence startup Mistral on June 10 announced a so-called "reasoning" model it said was capable of working through complex problems, following in the footsteps of top US developers.

Available immediately on the company's platforms as well as the AI platform Hugging Face, Magistral "is designed to think things through – in ways familiar to us," Mistral said in a blog post.

The AI was designed for "general purpose use requiring longer thought processing and better accuracy" than its previous generations of large language models (LLMs), the company added.

Like other "reasoning" models, Magistral displays a so-called "chain of thought" that purports to show how the system is approaching a problem given to it in natural language.

This means users in fields like law, finance, healthcare and government would receive "traceable reasoning that meets compliance requirements" as "every conclusion can be traced back through its logical steps", Mistral said.

The company's claim gestures towards the challenge of so-called "interpretability" – working out how AI systems arrive at a given response.

Since they are "trained" on gigantic corpuses of data rather than directly programmed by humans, much behaviour by AI systems remains impenetrable even to their creators.

Mistral also touted Magistral's improved performance in software coding and creative writing.

Competing "reasoning" models include OpenAI's o3, some versions of Google's Gemini and Anthropic's Claude, and Chinese challenger DeepSeek's R1.

The idea that AIs can "reason" was called into question this week by Apple – the tech giant that has struggled to match achievements by leaders in the field.

Several Apple researchers published a paper called "The Illusion of Thinking" that claimed to find "fundamental limitations in current models" which "fail to develop generalizable reasoning capabilities beyond certain complexity thresholds". – AFP
