@www.marktechpost.com
//
Apple researchers are challenging the perceived reasoning capabilities of Large Reasoning Models (LRMs), sparking debate within the AI community. A recent paper from Apple, titled "The Illusion of Thinking," suggests that these models, which generate intermediate thinking steps like Chain-of-Thought reasoning, struggle with fundamental reasoning tasks. The research indicates that current evaluation methods relying on math and code benchmarks are insufficient, as they often suffer from data contamination and fail to assess the structure or quality of the reasoning process.
To address these shortcomings, Apple researchers introduced controllable puzzle environments, including the Tower of Hanoi, River Crossing, Checker Jumping, and Blocks World, allowing for precise manipulation of problem complexity. These puzzles require diverse reasoning abilities, such as constraint satisfaction and sequential planning, and are free from data contamination. The Apple paper concluded that state-of-the-art LRMs ultimately fail to develop generalizable problem-solving capabilities, with accuracy collapsing to zero beyond certain complexities across different environments.
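The "controllable complexity" idea is easiest to see with the Tower of Hanoi, one of the four puzzles named above: the optimal solution length grows exponentially (2^n − 1 moves for n disks), so researchers can dial difficulty up smoothly while the rules stay fixed. A minimal solver sketch (illustrative only, not the paper's actual evaluation harness):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the optimal move list for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then stack the rest.
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))

# Problem complexity grows exponentially with disk count:
for n in (3, 5, 10):
    print(n, len(hanoi(n)))  # 7, 31, and 1023 moves respectively
```

Because the ground-truth solution is computable for any n, accuracy can be measured exactly at every complexity level, which is how the paper can report collapse "beyond certain complexities."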
However, the Apple research has faced criticism. Experts such as Professor Seok Joon Kwon argue that Apple's lack of high-performance hardware, such as a large GPU cluster comparable to those operated by Google or Microsoft, may have shaped its findings. Some observers note that the models perform better on familiar puzzles, suggesting their success may reflect training exposure rather than genuine problem-solving skill. Others, such as Alex Lawsen and "C. Opus," contend that the results do not support claims of fundamental reasoning limitations, but instead highlight engineering challenges related to token limits and evaluation methods.
References:
- TheSequence: The Sequence Research #663: The Illusion of Thinking, Inside the Most Controversial AI Paper of Recent Weeks
- chatgptiseatingtheworld.com: Research: Did Apple researchers overstate "The Illusion of Thinking" in reasoning models? Opus, Lawsen think so.
- www.marktechpost.com: Apple Researchers Reveal Structural Failures in Large Reasoning Models Using Puzzle-Based Evaluation
- arstechnica.com: New Apple study challenges whether AI models truly "reason" through problems
- 9to5Mac: New paper pushes back on Apple’s LLM ‘reasoning collapse’ study
Classification:
- HashTags: #AI #ReasoningModels #AppleResearch
- Company: Apple
- Target: AI Industry
- Attacker: AI
- Product: Large Language Models
- Feature: Large reasoning models
- Type: Research
- Severity: Informative
Carl Franzen@AI News | VentureBeat
//
Mistral AI has launched its first reasoning model, Magistral, signaling a commitment to open-source AI development. The Magistral family features two models: Magistral Small, a 24-billion parameter model available with open weights under the Apache 2.0 license, and Magistral Medium, a proprietary model accessible through an API. This dual release strategy aims to cater to both enterprise clients seeking advanced reasoning capabilities and the broader AI community interested in open-source innovation.
Mistral's decision to release Magistral Small under the permissive Apache 2.0 license marks a significant return to its open-source roots. The license allows for the free use, modification, and distribution of the model's source code, even for commercial purposes. This empowers startups and established companies to build and deploy their own applications on top of Mistral’s latest reasoning architecture, without the burdens of licensing fees or vendor lock-in. The release serves as a powerful counter-narrative, reaffirming Mistral’s dedication to arming the open community with cutting-edge tools.
Magistral Medium demonstrates competitive performance in the reasoning arena, according to internal benchmarks released by Mistral. The model was tested against its predecessor, Mistral-Medium 3, and models from Deepseek. Furthermore, Mistral's Agents API's Handoffs feature facilitates smart, multi-agent workflows, allowing different agents to collaborate on complex tasks. This enables modular and efficient problem-solving, as demonstrated in systems where agents collaborate to answer inflation-related questions.
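The handoff pattern described above can be sketched in plain Python. This is an illustrative toy, not the actual Mistral Agents API: the `Agent` class, its keyword routing, and the specialist names are all invented here to show how a router agent can delegate subtasks to specialists, as in the inflation-question example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class Agent:
    """Toy agent: answers a task itself or hands it off to a specialist."""
    name: str
    handle: Callable[[str], str]
    handoffs: Dict[str, "Agent"] = field(default_factory=dict)  # keyword -> specialist

    def run(self, task: str) -> str:
        # Delegate to the first specialist whose keyword appears in the task.
        for keyword, agent in self.handoffs.items():
            if keyword in task.lower():
                return agent.run(task)
        return self.handle(task)


# Specialist agents with dummy handlers (placeholders for real model calls)
fetcher = Agent("data-fetcher", lambda t: "fetched: latest CPI series")
calculator = Agent("calculator", lambda t: "computed: inflation-adjusted value")

# Router hands inflation questions to the fetcher and adjustments to the calculator
router = Agent("router", lambda t: "no specialist matched",
               handoffs={"inflation": fetcher, "adjust": calculator})

print(router.run("What was inflation last year?"))  # fetched: latest CPI series
```

The design point is modularity: each specialist stays small and testable, and the router only needs to know which keywords map to which agent, mirroring how multi-agent workflows decompose a complex question into collaborating steps.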
References:
- Simon Willison's Weblog: Mistral's first reasoning model is out today, in two sizes. There's a 24B Apache 2 licensed open-weights model called Magistral Small (actually Magistral-Small-2506), and a larger API-only model called Magistral Medium.
- THE DECODER: Mistral launches Europe's first reasoning model Magistral but lags behind competitors
- AI News | VentureBeat: The company is signaling that the future of reasoning AI will be both powerful and, in a meaningful way, open to all.
- www.marktechpost.com: How to Create Smart Multi-Agent Workflows Using the Mistral Agents API’s Handoffs Feature
- TestingCatalog: Mistral AI debuts Magistral models focused on advanced reasoning
- www.artificialintelligence-news.com: Mistral AI has pulled back the curtain on Magistral, their first model specifically built for reasoning tasks.
- www.infoworld.com: Mistral AI unveils Magistral reasoning model
- the-decoder.com: The French start-up Mistral is launching its first reasoning model on the market with Magistral. It is designed to enable logical thinking in European languages.
- Simon Willison: Mistral's first reasoning LLM - Magistral - was released today and is available in two sizes, an open weights (Apache 2) 24B model called Magistral Small and an API/hosted only model called Magistral Medium. My notes here, including running Small locally with Ollama and accessing Medium via my llm-mistral plugin
- SiliconANGLE: Mistral AI debuts new Magistral series of reasoning LLMs.
- MarkTechPost: Mistral AI Releases Magistral Series: Advanced Chain-of-Thought LLMs for Enterprise and Open-Source Applications
- WhatIs: What differentiates Mistral AI reasoning model Magistral
- AlternativeTo: Mistral AI debuts Magistral: a transparent, multilingual reasoning model family, including open-source Magistral Small available on Hugging Face and enterprise-focused Magistral Medium available on various platforms.
Classification:
- HashTags: #MistralAI #ReasoningAI #OpenSourceAI
- Company: Mistral AI
- Target: AI Developers
- Product: Magistral
- Feature: Reasoning Model
- Type: AI
- Severity: Informative
@www.marktechpost.com
//
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.
The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience.
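The JSON-output feature mentioned above can be illustrated with a request payload. This is a hedged sketch: DeepSeek exposes an OpenAI-compatible chat endpoint, and the field names below follow that convention; the exact model identifier and supported options should be checked against DeepSeek's current API documentation before use.

```python
import json

# Sketch of a chat request asking R1-0528 for structured JSON output.
# Field names follow the OpenAI-compatible convention; verify against
# DeepSeek's API docs before relying on them.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        {"role": "system", "content": "Respond with a single JSON object."},
        {"role": "user", "content": "List two prime numbers as {\"primes\": [...]}."},
    ],
    # JSON mode constrains the model to emit valid JSON
    "response_format": {"type": "json_object"},
}

body = json.dumps(payload)  # ready to POST to the chat completions endpoint
```

Constraining output to JSON is what makes the model practical to wire into pipelines: downstream code can parse the response directly instead of scraping free-form text.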
Specifically, DeepSeek R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community.
References:
- Kyle Wiggers: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
- AI News | VentureBeat: VentureBeat article on DeepSeek R1-0528.
- Analytics Vidhya: New Deepseek R1-0528 Update is INSANE
- MacStories: Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS
- www.marktechpost.com: DeepSeek Releases R1-0528: An Open-Source Reasoning AI Model Delivering Enhanced Math and Code Performance with Single-GPU Efficiency
- NextBigFuture.com: DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training.
- In the early hours of May 29, Chinese AI startup DeepSeek quietly open-sourced the latest iteration of its R1 large language model, DeepSeek-R1-0528, on the Hugging Face platform.
- www.computerworld.com: Reports that DeepSeek has released a new version of its R1 reasoning AI model.
- the-decoder.com: Deepseek's R1 model closes the gap with OpenAI and Google after major update
- Simon Willison: Some notes on the new DeepSeek-R1-0528, a completely different model from the R1 they released in January, despite having a very similar name. Terrible LLM naming has managed to infect the Chinese AI labs too.
- Analytics India Magazine: The new DeepSeek-R1 Is as good as OpenAI o3 and Gemini 2.5 Pro
- The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
- TheSequence: This article provides an overview of the new DeepSeek R1-0528 model and notes its improvements over the prior model released in January.
- Kyle Wiggers: News about the release of DeepSeek's updated R1 AI model, emphasizing its increased censorship.
- Fello AI: Reports that the R1-0528 model from DeepSeek is matching the capabilities of OpenAI's o3 and Google's Gemini 2.5 Pro.
- felloai.com: Latest DeepSeek Update Called R1-0528 Is Matching OpenAI’s o3 & Gemini 2.5 Pro
- www.tomsguide.com: DeepSeek’s latest update is a serious threat to ChatGPT and Google — here’s why
Classification:
- HashTags: #DeepSeekR1 #OpenSourceAI #LLM
- Company: DeepSeek
- Target: AI community
- Product: DeepSeek R1
- Feature: Reasoning and coding
- Type: AI
- Severity: Major