Top Mathematics discussions

NishMath - #enterprises

Carl Franzen@AI News | VentureBeat //
Mistral AI has launched Magistral, its first reasoning large language model (LLM), available in two versions. Magistral Small, a 24-billion-parameter model, is released with open weights under the Apache 2.0 license, allowing developers to freely use, modify, and distribute the model for commercial or non-commercial purposes; it can be run locally using tools like Ollama. Magistral Medium is accessible exclusively via Mistral’s API and is tailored for enterprise clients, providing traceable reasoning capabilities crucial for compliance in highly regulated sectors such as legal, financial, healthcare, and government.
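
For developers who want to try the open-weights model, the sketch below queries a locally pulled Magistral Small through the Ollama Python client. It assumes a running Ollama server and that the weights are published under the tag "magistral"; check Ollama's model library for the exact tag and size variant.

    # Sketch: chat with a locally pulled Magistral Small via the Ollama Python client.
    # Assumes `ollama serve` is running and the model has been pulled; the
    # "magistral" tag is an assumption, not confirmed by the article.
    import ollama  # pip install ollama

    response = ollama.chat(
        model="magistral",  # assumed tag for the 24B Magistral Small weights
        messages=[{"role": "user", "content": "Reason step by step: what is 17 * 23?"}],
    )
    print(response.message.content)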

Mistral is positioning Magistral as a powerful tool for both professional and creative applications. The company highlights Magistral's ability to perform "transparent, multilingual reasoning," making it suitable for tasks involving complex calculations, programming logic, decision trees, and rule-based systems. Additionally, Mistral is promoting Magistral for creative writing, touting its capacity to generate coherent or, if desired, uniquely eccentric content. Users can experiment with Magistral Medium through the "Thinking" mode within Mistral's Le Chat platform, with options for "Pure Thinking" and a high-speed "10x speed" mode powered by Cerebras.
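
Outside of Le Chat, Magistral Medium is reachable only through Mistral's hosted API. A minimal sketch using the official mistralai Python SDK (v1.x) is below; the model id "magistral-medium-latest" is an assumption and should be confirmed against Mistral's model listing.

    # Sketch: calling Magistral Medium through Mistral's hosted API with the
    # official `mistralai` SDK (v1.x). The model id is an assumption.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.chat.complete(
        model="magistral-medium-latest",  # assumed id; confirm via the models endpoint
        messages=[{
            "role": "user",
            "content": "A 12,000 EUR invoice accrues a 2% fee per full 30 days overdue. "
                       "It is 75 days overdue. Show the fee calculation step by step.",
        }],
    )
    print(resp.choices[0].message.content)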

Benchmark tests show that Magistral Medium is competitive in the reasoning arena: on the AIME-24 mathematics benchmark, the model achieved 73.6% accuracy, comparable to its predecessor, Mistral Medium 3, and outperforming DeepSeek's models. Mistral's release of Magistral Small under the Apache 2.0 license is seen as a reaffirmation of its commitment to open-source principles. The move contrasts with the company's earlier decision to ship Medium 3 as a proprietary offering, which had raised concerns about a shift toward a more closed ecosystem.

References :
  • AI News | VentureBeat: Mistral's first reasoning model, Magistral, launches with large and small Apache 2.0 versions.
  • Simon Willison: Mistral's first reasoning LLM - Magistral - was released today and is available in two sizes, an open weights (Apache 2) 24B model called Magistral Small and an API/hosted only model called Magistral Medium. My notes here, including running Small locally with Ollama and accessing Medium via my llm-mistral plugin
  • Simon Willison's Weblog: Magistral — the first reasoning model by Mistral AI
  • the-decoder.com: Mistral launches Europe's first reasoning model Magistral but lags behind competitors
  • SiliconANGLE: Mistral AI debuts new Magistral series of reasoning LLMs
  • MarkTechPost: Mistral AI Releases Magistral Series: Advanced Chain-of-Thought LLMs for Enterprise and Open-Source Applications
  • TestingCatalog: Mistral AI debuts Magistral models focused on advanced reasoning
  • siliconangle.com: Mistral AI SAS today introduced Magistral, a new lineup of reasoning-optimized large language models. The LLM series includes two algorithms on launch.
  • www.artificialintelligence-news.com: Mistral AI challenges big tech with reasoning model
  • www.marktechpost.com: Mistral AI Releases Magistral Series: Advanced Chain-of-Thought LLMs for Enterprise and Open-Source Applications
  • WhatIs: What differentiates Mistral AI reasoning model Magistral
Classification:
  • HashTags: #AI #LLM #OpenSource
  • Company: Mistral AI
  • Target: AI Community
  • Product: Magistral
  • Feature: Reasoning Model
  • Type: AI
  • Severity: Informative
@www.artificialintelligence-news.com //
ServiceNow is making significant strides in the realm of artificial intelligence with the unveiling of Apriel-Nemotron-15b-Thinker, a new reasoning model optimized for enterprise-scale deployment and efficiency. The model, consisting of 15 billion parameters, is designed to handle complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. This release addresses the growing need for AI models that combine strong performance with efficient memory and token usage, making them viable for deployment in practical hardware environments.
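
ServiceNow typically publishes its Apriel models on the Hugging Face Hub, so a plausible way to experiment with the 15B Thinker locally is the standard transformers pattern sketched below. The repo id and chat template are assumptions; verify them against the model card before use.

    # Sketch: loading Apriel-Nemotron-15b-Thinker with Hugging Face transformers.
    # The repo id is an assumption (check the ServiceNow-AI organization on the Hub).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,  # half precision keeps the 15B weights manageable
        device_map="auto",
    )

    messages = [{"role": "user", "content": "A warehouse handles 1,200 orders per day "
                 "with a 1.5% error rate. How many errors over a 30-day month?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))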

ServiceNow is betting on unified AI to untangle enterprise complexity, providing businesses with a single, coherent way to integrate various AI tools and intelligent agents across the entire company. This ambition was unveiled at Knowledge 2025, where the company showcased its new AI platform and deepened relationships with tech giants like NVIDIA, Microsoft, Google, and Oracle. The aim is to help businesses orchestrate their operations with genuine intelligence, as evidenced by the adoption from industry leaders like Adobe, Aptiv, the NHL, Visa, and Wells Fargo.

To further broaden its reach, ServiceNow has introduced the Core Business Suite, an AI-driven solution aimed at the mid-market. This suite connects employees, suppliers, systems, and data in one place, enabling organizations of all sizes to work faster and more efficiently across critical business processes such as HR, procurement, finance, facilities, and legal affairs. ServiceNow aims for rapid implementation, suggesting deployment within a few weeks, and integrates functionalities from different divisions into a single, uniform experience.

References :
  • siliconangle.com: ServiceNow debuts AI agents for security and risk to support autonomous enterprise defense
  • www.artificialintelligence-news.com: ServiceNow bets on unified AI to untangle enterprise complexity
  • AI News: ServiceNow bets on unified AI to untangle enterprise complexity
  • www.marktechpost.com: ServiceNow AI Released Apriel-Nemotron-15b-Thinker: A Compact Yet Powerful Reasoning Model Optimized for Enterprise-Scale Deployment and Efficiency
Classification:
  • HashTags: #AI #EnterpriseAI #ReasoningModel
  • Company: ServiceNow
  • Target: Enterprises
  • Product: Apriel-Nemotron-15b
  • Feature: Apriel-Nemotron-15b-Thinker
  • Type: AI
  • Severity: Informative
@siliconangle.com //
SAS and Intel are collaborating to redefine AI architecture through "optimized intelligence," moving away from a GPU-centric approach. The partnership aligns hardware and software roadmaps to deliver smarter performance, lower costs, and greater trust across environments. Optimized intelligence lets businesses tailor their AI infrastructure to specific use cases, supporting efficient, ethical, human-centered AI practices and instilling greater confidence in real-world outcomes. SAS and Intel have a 25-year relationship built around this idea, with deep investment in technical alignment so that hardware and software co-evolve.

SAS is integrating Intel silicon features such as AMX acceleration and Gaudi AI accelerators into its Viya platform to deliver cost-effective performance. This lets clients deploy advanced models without overspending on infrastructure, and Viya shows significant performance improvements on the latest Intel platforms. SAS is also working with companies like Procter & Gamble, and with quantum hardware providers including D-Wave, IBM, and QuEra, to develop hybrid quantum-classical solutions for real-world problems across industries such as life sciences, finance, and manufacturing.
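
SAS does not publish how Viya binds to these silicon features, but as a generic illustration of the kind of CPU-side gain AMX enables, the sketch below runs bfloat16 inference through intel-extension-for-pytorch on a 4th-generation (or newer) Xeon. The model is a stand-in for illustration only, not anything SAS ships.

    # Generic sketch of AMX-accelerated bfloat16 inference on a recent Xeon via
    # intel-extension-for-pytorch; illustrative only, unrelated to Viya internals.
    import torch
    import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 16)
    ).eval()
    model = ipex.optimize(model, dtype=torch.bfloat16)  # route matmuls to oneDNN/AMX kernels

    x = torch.randn(32, 1024)
    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        y = model(x)
    print(y.shape)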

A recent global SAS survey found that over 60% of business leaders are actively investing in or exploring quantum AI, although concerns remain about high costs, limited understanding, and unclear use cases. SAS aims to make quantum AI more accessible through pilot projects and research, guiding businesses on where quantum technologies apply. SAS Principal Quantum Architect Bill Wisotsky says quantum technologies let companies analyze more data and get fast answers to complex questions, and that SAS wants to simplify this research for its customers.

Classification:
  • HashTags: #QuantumAI #AISolutions #DecisionMaking
  • Company: SAS
  • Target: Enterprises
  • Product: Viya
  • Feature: Quantum AI
  • Type: AI
  • Severity: Informative