Carl Franzen@AI News | VentureBeat
//
Mistral AI has launched Magistral, its first reasoning model, signaling a renewed commitment to open-source AI development. The Magistral family comprises two models: Magistral Small, a 24-billion-parameter model released with open weights under the Apache 2.0 license, and Magistral Medium, a proprietary model accessible through an API. This dual-release strategy aims to serve both enterprise clients seeking advanced reasoning capabilities and the broader AI community interested in open-source innovation.
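Magistral Medium is reached through Mistral's hosted chat API. Below is a minimal sketch using the `mistralai` Python SDK; the model identifier `magistral-medium-latest` is an assumption for illustration, so verify the exact name against Mistral's model listing.

```python
# Minimal sketch: querying Magistral Medium through Mistral's chat API.
# The model identifier below is an assumption for illustration.
from mistralai import Mistral

client = Mistral(api_key="YOUR_MISTRAL_API_KEY")

response = client.chat.complete(
    model="magistral-medium-latest",  # assumed model name; check the model list
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 9:15 and arrives at 11:40. "
                       "How long is the journey? Reason step by step.",
        },
    ],
)
print(response.choices[0].message.content)
```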
Mistral's decision to release Magistral Small under the permissive Apache 2.0 license marks a significant return to its open-source roots. The license allows free use, modification, and distribution of the model, including for commercial purposes, so startups and established companies can build and deploy their own applications on top of Mistral's latest reasoning architecture without licensing fees or vendor lock-in. The release also serves as a counter-narrative to the perception that Mistral had been drifting toward closed releases, reaffirming its dedication to arming the open community with cutting-edge tools.

Magistral Medium, meanwhile, demonstrates competitive performance in the reasoning arena, according to internal benchmarks released by Mistral, which tested the model against its predecessor, Mistral Medium 3, and models from DeepSeek. Beyond the models themselves, the Handoffs feature of Mistral's Agents API enables multi-agent workflows in which different agents collaborate on complex tasks, supporting modular and efficient problem-solving, as demonstrated in systems where agents work together to answer inflation-related questions; a sketch of such a handoff follows.
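The sketch below illustrates a two-agent handoff with the Agents API via the `mistralai` Python SDK. The agent-creation and conversation methods, the `handoffs` field, and the model names are assumptions drawn from the feature's described behavior, not confirmed signatures; verify all of them against the current Agents API reference.

```python
# Illustrative sketch of a two-agent handoff via Mistral's Agents API.
# Method names, fields, and model identifiers are assumptions for illustration.
from mistralai import Mistral

client = Mistral(api_key="YOUR_MISTRAL_API_KEY")

# A reasoning agent that interprets economic figures.
analyst = client.beta.agents.create(
    model="magistral-medium-latest",      # assumed model name
    name="inflation-analyst",
    instructions="Interpret economic figures and explain your reasoning.",
)

# A front-line agent that gathers data and can hand the task off to the analyst.
fetcher = client.beta.agents.create(
    model="mistral-medium-latest",        # assumed model name
    name="data-fetcher",
    instructions="Collect the figures needed, then hand off for interpretation.",
    handoffs=[analyst.id],                # assumed handoff wiring
)

# Starting a conversation with the fetcher lets it delegate to the analyst.
result = client.beta.conversations.start(
    agent_id=fetcher.id,
    inputs="How has euro-area inflation evolved over the past twelve months?",
)
print(result)
```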
@www.marktechpost.com
//
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.
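Because the weights are published openly, the checkpoint can be mirrored locally straight from the Hugging Face Hub. The sketch below assumes the repository ID `deepseek-ai/DeepSeek-R1-0528`; note that the full checkpoint runs to hundreds of gigabytes.

```python
# Minimal sketch: mirroring the openly released checkpoint from the Hugging Face Hub.
# The repository ID is an assumption for illustration; the download is very large.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-0528",  # assumed repository name
    local_dir="./deepseek-r1-0528",
)
print(f"Weights, configs, and tokenizer files downloaded to {local_dir}")
```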
The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable on multi-step logic problems, and post-training optimizations reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and improve the developer experience.

DeepSeek-R1-0528 also demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth": the model now uses significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation within the AI community.
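As an illustration of the new function-calling support, the sketch below goes through DeepSeek's OpenAI-compatible endpoint. The base URL, model name, and the hypothetical tool schema are assumptions for illustration; confirm them against DeepSeek's current API documentation.

```python
# Hedged sketch of native function calling against an OpenAI-compatible DeepSeek
# endpoint. Base URL, model name, and tool support are assumptions for illustration.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "get_exchange_rate",          # hypothetical tool for illustration
        "description": "Look up the exchange rate between two currencies.",
        "parameters": {
            "type": "object",
            "properties": {
                "base": {"type": "string"},
                "quote": {"type": "string"},
            },
            "required": ["base", "quote"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-reasoner",                # assumed model name for R1-0528
    messages=[{"role": "user", "content": "What is 100 USD in EUR right now?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```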
@www.artificialintelligence-news.com
//
ServiceNow is making significant strides in artificial intelligence with the unveiling of Apriel-Nemotron-15b-Thinker, a new reasoning model optimized for enterprise-scale deployment and efficiency. The 15-billion-parameter model is designed to handle complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. The release addresses the growing need for AI models that combine strong performance with efficient memory and token usage, making them viable for deployment on practical hardware.
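At 15 billion parameters, the model's weights come to roughly 30 GB in half precision, which should fit on a single high-memory GPU. A minimal loading sketch with Hugging Face transformers follows; the repository ID is an assumption, so check ServiceNow's Hugging Face organization for the exact name.

```python
# Minimal sketch: running a 15B reasoning model with transformers in bfloat16.
# The repository ID is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~2 bytes per parameter keeps weights near 30 GB
    device_map="auto",
)

prompt = ("A project has 3 dependent tasks taking 4, 6, and 5 days. "
          "What is the minimum total duration? Reason step by step.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=400)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```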
ServiceNow is betting on unified AI to untangle enterprise complexity, giving businesses a single, coherent way to integrate AI tools and intelligent agents across the entire company. The ambition was unveiled at Knowledge 2025, where the company showcased its new AI platform and deepened relationships with NVIDIA, Microsoft, Google, and Oracle. The aim is to help businesses orchestrate their operations with genuine intelligence, as evidenced by adoption from industry leaders such as Adobe, Aptiv, the NHL, Visa, and Wells Fargo.

To broaden its reach further, ServiceNow has introduced the Core Business Suite, an AI-driven solution aimed at the mid-market. The suite connects employees, suppliers, systems, and data in one place, enabling organizations of all sizes to work faster across critical business processes such as HR, procurement, finance, facilities, and legal affairs. ServiceNow targets rapid implementation, with deployment possible within a few weeks, and the suite integrates functionality from different divisions into a single, uniform experience.