Top Mathematics discussions

NishMath

@machinelearning.apple.com //
Apple researchers have released a study questioning the reasoning capabilities of advanced AI models, including Claude 3.7 and DeepSeek-R1. The study challenges the notion that these Large Reasoning Models (LRMs) excel at complex problem-solving by simulating human thought processes, as they were designed to do. Researchers discovered that as the complexity of tasks increases, these models often perform worse and may even reduce their "thinking" efforts, contradicting expectations. The findings suggest a fundamental scaling limitation in the reasoning abilities of current AI models.

To investigate these limitations, the Apple team subjected several reasoning models to a series of classic puzzle environments: Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. These puzzles allow complexity to be increased in a controlled way while the underlying logical structure stays fixed. The results revealed that standard language models, like Claude 3.7 without its "thinking" mode, outperformed reasoning models on simple tasks, achieving higher accuracy with lower token consumption. The reasoning models only showed an advantage at intermediate complexity levels; when the puzzles became highly complex, all models experienced a complete collapse in accuracy, even with ample computational resources.
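The controlled-complexity setup is easiest to see with Tower of Hanoi, where adding a single disk doubles the length of the optimal solution while the rules stay identical. A minimal sketch of that growth (an illustration of the puzzle's scaling, not the paper's actual evaluation harness):

```python
# Illustrative sketch: Tower of Hanoi minimal-move growth, showing how a
# single knob (disk count) scales puzzle complexity while the logical
# structure stays fixed. Not the Apple paper's evaluation harness.

def hanoi_moves(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list:
    """Return the optimal move sequence for n disks."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

for n in (3, 5, 10):
    # Optimal length is 2^n - 1, so difficulty doubles with each disk.
    assert len(hanoi_moves(n)) == 2**n - 1
    print(n, 2**n - 1)
```

This exponential blow-up is what lets the researchers dial task difficulty smoothly from trivial to far beyond any model's token budget.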

The study's findings have significant implications for the artificial intelligence industry, particularly regarding the trust placed in reasoning models. The Apple researchers found that the behavior of these LRMs is "better explained by sophisticated pattern matching" than by formal reasoning. Apple itself faces increasing pressure to respond to its AI competition, particularly since Apple Intelligence, which debuted last year, has not lived up to developers' expectations.

Recommended read:
References :
  • THE DECODER: LLMs designed for reasoning, like Claude 3.7 and Deepseek-R1, are supposed to excel at complex problem-solving by simulating thought processes.
  • machinelearning.apple.com: Apple machine learning discusses Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
  • PPC Land: PPC Land reports on Apple study exposes fundamental limits in AI reasoning models through puzzle tests.
  • the-decoder.com: The Decoder covers Apple's study, highlighting the limitation in thinking abilities of reasoning models.
  • felloai.com: In a breakthrough paper, Apple researchers reveal the uncomfortable truth about large reasoning models (LRMs): their internal “thought processes” might be nothing more than performative illusions.
  • Gadgets 360: Apple Claims AI Reasoning Models Suffer From ‘Accuracy Collapse’ When Solving Complex Problems
  • futurism.com: Apple Researchers Just Released a Damning Paper That Pours Water on the Entire AI Industry
  • The Register - Software: Apple AI boffins puncture AGI hype as reasoning models flail on complex planning
  • www.theguardian.com: Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds
  • chatgptiseatingtheworld.com: Apple researchers cast doubt on AI reasoning models of other companies
  • www.livescience.com: AI reasoning models aren’t as smart as they were cracked up to be, Apple study claims
  • www.computerworld.com: Apple warns: GenAI still isn’t very smart

@medium.com //
References: medium.com, medium.com, medium.com ...
Medium is currently hosting a series of articles that delve into the core concepts and practical applications of cryptography. These articles aim to demystify complex topics such as symmetric key cryptography, also known as secret key or private key cryptography, where a single shared key is used for both encryption and decryption. This method is highlighted for its speed and efficiency, making it suitable for bulk data encryption, though it primarily provides confidentiality and requires secure key distribution. The resources available are designed to cater to individuals with varying levels of expertise, offering accessible guides to enhance their understanding of secure communication and cryptographic systems.
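The core symmetric-key property, one shared key for both encryption and decryption, can be shown with a toy keystream cipher built from SHA-256. This is purely illustrative: it is not AES and must never be used to protect real data:

```python
# Toy symmetric cipher: the SAME shared key both encrypts and decrypts.
# The keystream is derived from SHA-256 in a counter-mode-like fashion.
# Illustration only -- this is NOT AES and offers no real security.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so one function serves both directions.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared secret key"
ciphertext = xor_cipher(key, b"attack at dawn")
assert xor_cipher(key, ciphertext) == b"attack at dawn"
```

The single-function round trip is the defining trait of symmetric cryptography; the hard part the articles emphasize, distributing `key` securely, happens outside this snippet.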

The published materials offer detailed explorations of cryptographic techniques, including AES-256 encryption and decryption. AES-256, which stands for Advanced Encryption Standard with a 256-bit key size, is a symmetric encryption algorithm renowned for its high level of security. Articles break down the internal mechanics of AES-256, explaining the rounds of transformation and key expansion involved in the encryption process. These explanations are presented in both technical terms for those with a deeper understanding and in layman's terms to make the concepts accessible to a broader audience.

In addition to theoretical explanations, the Medium articles also showcase the practical applications of cryptography. One example provided is the combination of OSINT (Open Source Intelligence), web, crypto, and forensics techniques in CTF (Capture The Flag) challenges. These challenges offer hands-on experience in applying cryptographic principles to real-world scenarios, such as identifying the final resting place of historical figures through OSINT techniques. The series underscores the importance of mastering cryptography in the evolving landscape of cybersecurity, equipping readers with the knowledge to secure digital communications and protect sensitive information.

Recommended read:
References :
  • medium.com: Understanding AES-256 Encryption and Decryption: A Detailed Guide for All Levels
  • medium.com: Understanding Cryptography: The Art of Secure Communication
  • mraviteja9949.medium.com: Symmetric Key Cryptography
  • medium.com: ECC & Web3 Cryptography: Superhero Dunia Digital (Tapi Ada Kryptonite-nya)
  • medium.com: A hashing algorithm is a function that can be used to map arbitrary long data to an output of a fixed length.
  • medium.com: Quantum-Resistant Cryptography: Preparing Your Code for Post-Quantum Era

@www.quantamagazine.org //
References: StartsWithABang, Ray Lee, Ray Lee ...
Fermilab has announced the final results from its Muon g-2 experiment, aiming to resolve a long-standing anomaly regarding the magnetic moment of muons. This experiment delves into the quantum realm, exploring how short-lived particles popping in and out of existence influence the magnetic properties of muons. The initial results from this experiment suggested that the Standard Model of physics might be incomplete, hinting at the presence of undiscovered particles or forces.

The experiment's findings continue to show a discrepancy between experimental measurements and the predictions of the Standard Model, but the statistical significance of that discrepancy has decreased thanks to improvements in theoretical calculations. The widely cited 2021 comparison placed the measurement at 4.2σ (standard deviations) from the data-driven Standard Model prediction, a bit short of the 5σ normally used to declare a discovery and corresponding to roughly a 1 in 40,000 chance of a statistical fluke. Newer lattice-QCD calculations have since narrowed the gap considerably, so while the Standard Model may not fully account for the behavior of muons, the evidence for new physics is not as strong as previously thought.
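The quoted odds can be checked directly: the two-sided Gaussian tail probability at 4.2σ comes out near 1 in 40,000.

```python
# Convert a significance in standard deviations to a two-sided p-value,
# confirming that 4.2 sigma corresponds to roughly a 1-in-40,000 fluke.
import math

def p_value_two_sided(sigma: float) -> float:
    """Probability of a Gaussian fluctuation at least this large, either sign."""
    return math.erfc(sigma / math.sqrt(2))

p = p_value_two_sided(4.2)
print(p, 1 / p)  # p is about 2.7e-5, i.e. roughly 1 in 40,000
```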

Despite the reduced statistical significance, the results remain intriguing and motivate further research. The possibility of undiscovered particles influencing muons still exists, pushing physicists to explore new theoretical models and conduct additional experiments. If the universe includes particles we don't yet know about, they too will show up as quantum fluctuations around the muon, subtly influencing the properties we can measure.

Recommended read:
References :
  • StartsWithABang: Anomaly no more! “Muon g-2” puzzle resolved at last. Can theory and experiment agree on the magnetic moment of the muon? At last, a new theory initiative paper coupled with final, world's best experimental results point to the resolution.
  • Ray Lee: Fermilab is announcing final results from the muon g-2 experiment today! I'm heading out the door, but the results will be at 10am CT. Quoting myself from April 7th, 2021: Fermilab shared first results from their "g-2" experiment showing the Standard Model of physics is even more incomplete than we thought.
  • Ray Lee: I should add, there have been various papers since this announcement back in 2021 that claim the calculations were incomplete and newer methods, such as brute-forcing the calculation via SM lattice methods on supercomputers, has pushed the discrepancy with experiment down to less than 2 sigma. Today we'll learn more! 3/3
  • physics.aps.org: Link to the stream: A rather nice cartoon explainer of all this by Jorge Cham: An accessible and slightly more scientific walkthrough over at Quanta Magazine from 2021: And the below graphic, showing how one particle physicist (whose name escapes me) viewed the tension in the results, four years ago. 2/3

@www.quantamagazine.org //
Recent breakthroughs have significantly advanced the "Core of Fermat's Last Theorem," a concept deeply rooted in number theory. Four mathematicians have extended the key insight behind Fermat's Last Theorem, which states there are no three positive integers that, when raised to a power greater than two, can be added together to equal another number raised to the same power. Their work involves applying this concept to other mathematical objects, notably elliptic curves. This extension represents a major step towards building a "grand unified theory" of mathematics, a long-sought goal in the field.
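For reference, the precise statement whose key insight is being generalized reads:

```latex
% Fermat's Last Theorem, proved by Wiles (1994):
\[
  x^n + y^n = z^n \quad \text{has no solutions with } x, y, z \in \mathbb{Z}_{>0}
  \text{ for any integer } n > 2.
\]
```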

This achievement builds upon the groundwork laid by Andrew Wiles's famous 1994 proof of Fermat's Last Theorem. Wiles, with assistance from Richard Taylor, demonstrated that elliptic curves and modular forms, seemingly distinct mathematical entities, are interconnected. This discovery revealed a surprising "modularity," where these realms mirror each other in a distorted way. Mathematicians can now leverage this connection, translating problems about elliptic curves into the language of modular forms, solving them, and then applying the results back to the original problem.

This new research goes beyond elliptic curves, extending the modularity connection to more complicated mathematical objects. This breakthrough defies previous expectations that such extensions would be impossible. The Langlands program, a set of conjectures aiming to develop a grand unified theory of mathematics, hinges on such correspondences. The team's success provides strong support for the Langlands program and opens new avenues for solving previously intractable problems in various areas of mathematics, solidifying the power and reach of the "Core of Fermat's Last Theorem."

Recommended read:
References :
  • Computational Complexity, Terence Tao, nLab, Quanta Magazine: The research discussed in this cluster is part of a broader effort to build a unified theory of mathematics, extending the key insight behind Fermat's Last Theorem to other mathematical objects, such as elliptic curves.

@www.linkedin.com //
Nvidia's Blackwell GPUs have achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across various AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. Nvidia was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the NVIDIA platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100.
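Scaling efficiency here is the achieved speedup divided by the ideal linear speedup. A small sketch makes the relationship concrete (the 90% figure is from the submission; the throughput numbers are hypothetical):

```python
# Scaling efficiency = achieved speedup / ideal (linear) speedup.
# Throughput values below are hypothetical; only the ~90% efficiency
# figure comes from the MLPerf submission described above.

def scaling_efficiency(throughput_small, gpus_small, throughput_large, gpus_large):
    """Achieved speedup relative to perfect linear scaling."""
    speedup = throughput_large / throughput_small
    ideal = gpus_large / gpus_small
    return speedup / ideal

# Baseline throughput 1.0 at 512 GPUs; at 90% efficiency, 2,496 GPUs
# deliver 0.9 * (2496 / 512), roughly 4.39x, rather than the ideal 4.875x.
eff = scaling_efficiency(1.0, 512, 0.9 * 2496 / 512, 2496)
assert abs(eff - 0.9) < 1e-9
```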

The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • NVIDIA Technical Blog: NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0
  • IEEE Spectrum: Nvidia’s Blackwell Conquers Largest LLM Training Benchmark
  • NVIDIA Technical Blog: Reproducing NVIDIA MLPerf v5.0 Training Scores for LLM Benchmarks
  • AI News | VentureBeat: Nvidia says its Blackwell chips lead benchmarks in training AI LLMs
  • MLCommons: New MLCommons MLPerf Training v5.0 Benchmark Results Reflect Rapid Growth and Evolution of the Field of AI
  • www.aiwire.net: MLPerf Training v5.0 results show Nvidia’s Blackwell GB200 accelerators sprinting through record time-to-train scores.
  • blogs.nvidia.com: NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the
  • NVIDIA Newsroom: NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing
  • ServeTheHome: The new MLPerf Training v5.0 are dominated by NVIDIA Blackwell and Hopper results, but we also get AMD Instinct MI325X on a benchmark as well
  • AIwire: Nvidia Blackwell GPUs lift Nvidia to the top of MLPerf Training rankings.
  • www.servethehome.com: MLPerf Training v5.0 is Out

@medium.com //
Google Quantum AI has published a study that dramatically lowers the estimated quantum resources needed to break RSA-2048, one of the most widely used encryption standards. The study, authored by Craig Gidney, indicates that RSA cracking may be possible with fewer qubits than previously estimated, potentially impacting digital security protocols used in secure web browsing, email encryption, VPNs, and blockchain systems. This breakthrough could significantly accelerate the timeline for "Q-Day," the point at which quantum computers can break modern encryption.

Previous estimates, including Gidney's 2019 study, suggested that cracking RSA-2048 would require around 20 million qubits and 8 hours of computation. However, the new analysis reveals it could be done in under a week using fewer than 1 million noisy qubits. This reduction in hardware requirements is attributed to several technical innovations, including approximate residue arithmetic, magic state cultivation, optimized period finding with Ekerå-Håstad algorithms, and yoked surface codes & sparse lookups. These improvements minimize the overhead in fault-tolerant quantum circuits, enabling better scaling.
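The period finding being optimized is the number-theoretic heart of Shor's algorithm: knowing the multiplicative order of a base modulo N yields factors of N. The brute-force classical version below works only for toy moduli, which is precisely why breaking RSA-2048 requires a quantum computer:

```python
# Classical demonstration of the order-finding route to factoring that
# Shor's algorithm accelerates. The brute-force period search is feasible
# only for tiny moduli -- the quantum speedup replaces exactly this step.
import math

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n), for a coprime to n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n: int, a: int) -> tuple:
    assert math.gcd(a, n) == 1
    r = find_order(a, n)
    if r % 2:
        raise ValueError("odd order, pick another base")
    y = pow(a, r // 2, n)
    # y^2 = 1 (mod n), so gcd(y +/- 1, n) exposes nontrivial factors.
    return math.gcd(y - 1, n), math.gcd(y + 1, n)

print(factor_via_order(15, 7))  # order of 7 mod 15 is 4 -> factors (3, 5)
```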

Google's researchers have discovered that, thanks to new error correction tricks and smarter algorithms, the encryption could be broken with under 1 million qubits and in less than a week, given favorable assumptions like a 0.1% gate error rate and a 1-microsecond gate time. This significantly faster encryption breaking capability, potentially 20x faster than previously anticipated, raises concerns about the security of Bitcoin wallets and other financial systems that rely on RSA encryption. The findings could potentially make Bitcoin wallets and financial systems vulnerable much sooner than expected.

Recommended read:
References :
  • medium.com: Last week, Craig Gidney from Google Quantum AI published a breakthrough study that redefines the landscape of cryptographic security.
  • medium.com: Google’s quantum leap just changed everything: They can now break encryption 20x faster than…

@quantumcomputingreport.com //
References: medium.com, medium.com, medium.com ...
The rapid advancement of quantum computing poses a significant threat to current encryption methods, particularly RSA, which secures much of today's internet communication. Google's recent breakthroughs have redefined the landscape of cryptographic security, with researchers like Craig Gidney significantly lowering the estimated quantum resources needed to break RSA-2048. A new study indicates that RSA-2048 could be cracked in under a week using fewer than 1 million noisy qubits, a dramatic reduction from previous estimates of around 20 million qubits and eight hours of computation. This shift accelerates the timeline for "Q-Day," the hypothetical moment when quantum computers can break modern encryption, impacting everything from email to financial transactions.

This vulnerability stems from the ability of quantum computers to utilize Shor's algorithm for factoring large numbers, a task prohibitively difficult for classical computers. Google's innovation involves several technical advancements, including approximate residue arithmetic, magic state cultivation, optimized period finding with Ekerå-Håstad algorithms, and yoked surface codes with sparse lookups. These improvements streamline modular arithmetic, reduce the depth of quantum circuits, and minimize overhead in fault-tolerant quantum circuits, collectively reducing the physical qubit requirement to under 1 million while maintaining a relatively short computation time.

In response to this threat, post-quantum cryptography (PQC) is gaining momentum. PQC refers to cryptographic algorithms designed to be secure against both classical and quantum attacks. NIST has already announced the first set of quantum-safe algorithms for standardization, and further candidates, such as FrodoKEM, a key encapsulation protocol offering a simple design and strong security guarantees, are being pursued in parallel standardization efforts. The urgency of transitioning to quantum-resistant cryptographic systems is underscored by ongoing advances in quantum computing. While the digital world relies on encryption, advances in AI and quantum computing are challenging that security, and professionals who understand both cybersecurity and artificial intelligence will be best placed to lead the adaptation.
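Key encapsulation mechanisms such as FrodoKEM and the NIST-standardized ML-KEM all expose the same three-operation interface: key generation, encapsulation, and decapsulation. The toy below sketches that interface using classical Diffie-Hellman arithmetic over a small Mersenne prime; it shows only the API shape and offers no quantum resistance whatsoever:

```python
# Toy KEM illustrating the keygen / encapsulate / decapsulate interface
# that post-quantum schemes such as FrodoKEM and ML-KEM expose.
# The instantiation is classical ElGamal-style Diffie-Hellman over the
# Mersenne prime 2^127 - 1: API shape only, NOT quantum-resistant.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; far too small for real-world use
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def encapsulate(pk):
    eph = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, eph, P)  # sent to the key holder
    shared = hashlib.sha256(str(pow(pk, eph, P)).encode()).digest()
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    return hashlib.sha256(str(pow(ciphertext, sk, P)).encode()).digest()

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
assert decapsulate(sk, ct) == ss_sender  # both sides derive the same secret
```

Real PQC deployments swap the arithmetic inside these three functions for lattice problems while keeping exactly this calling pattern, which is what makes drop-in migration feasible.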

Recommended read:
References :
  • medium.com: Should Post-Quantum Cryptography Start Now? The Clock Is Ticking
  • medium.com: Google’s quantum leap just changed everything: They can now break encryption 20x faster than…
  • quantumcomputingreport.com: Significant Theoretical Advancement in Factoring 2048 Bit RSA Integers
  • medium.com: Last week, Craig Gidney from Google Quantum AI published a breakthrough study that redefines the landscape of cryptographic security.
  • www.microsoft.com: The recent advances in quantum computing offer many advantages—but also challenge current cryptographic strategies. Learn how FrodoKEM could help strengthen security, even in a future with powerful quantum computers.
  • medium.com: Securing the Internet of Things: Why Post-Quantum Cryptography Is Critical for IoT’s Future
  • medium.com: Quantum Resilience Starts Now: Building Secure Infrastructure with Hybrid Cryptography
  • medium.com: Quantum-Resistant Cryptography: Preparing Your Code for Post-Quantum Era

@aasnova.org //
JWST is currently being used to study exoplanets, particularly sub-Neptunes, providing valuable data on their atmospheric composition. A recent study utilized JWST spectroscopy to analyze the atmosphere of the sub-Neptune GJ 3090b. This planet orbits a late-type, low-mass star and its radius places it at the outer edge of the radius valley. Sub-Neptunes are the most common type of planet in the Milky Way, however their formation and composition are not well understood, making these studies especially important.

The JWST's observations of GJ 3090b revealed a low-amplitude helium signature, suggesting a metal-enriched atmosphere. The presence of heavy molecules like water, carbon dioxide, and sulfur further contributes to the understanding of the planet's atmospheric properties. These atmospheric observations help clarify how hydrogen and helium may be escaping the planet’s atmosphere, with the presence of metals slowing down mass loss and weakening the helium signature.

While JWST is making significant contributions to exoplanet research, it won't find the very first stars; other telescopes will be needed to make those observations. JWST is nevertheless responsible for some of the latest discoveries, including the new cosmic record-holder for the most distant galaxy, MoM-z14.

Recommended read:
References :
  • StartsWithABang: Earlier this week, I gave a talk about JWST to the RASC Toronto audience through York University, and it has the latest and greatest of its discoveries inside, including the new cosmic record-holder for most distant galaxy: MoM-z14. Check it out!
  • aasnova.org: Abundant but Ambiguous: Understanding the Atmospheres of Sub-Neptunes with JWST

Dashveenjit Kaur@TechHQ //
Dell Technologies has secured a contract with the U.S. Department of Energy to construct the next-generation NERSC-10 supercomputer, a project powered by NVIDIA's Vera Rubin architecture. This new system, dubbed "Doudna" after Nobel laureate Jennifer Doudna, a pioneer in CRISPR gene-editing technology, is poised to be a major federal investment in scientific computing infrastructure. Energy Secretary Chris Wright announced the contract during a visit to Lawrence Berkeley National Laboratory, emphasizing that the deployment in 2026 is crucial for maintaining American technological leadership amidst increasing global competition in AI and quantum computing.

The "Doudna" supercomputer, also known as NERSC-10, aims to significantly accelerate scientific research across multiple domains, including fusion energy, astronomy, and life sciences. Designed to serve 11,000 researchers, it represents an integration of artificial intelligence, quantum workflows, and real-time data streaming from experimental facilities. Unlike traditional supercomputers, Doudna’s architecture emphasizes coherent memory access between CPUs and GPUs, facilitating efficient data sharing between heterogeneous processors which is essential for modern AI-accelerated scientific workflows.

The Doudna system is expected to deliver a 10x increase in scientific output compared to its predecessor, Perlmutter, while only consuming 2-3x the power, translating to a 3-5x improvement in performance per watt. Nick Wright, advanced technologies group lead and Doudna chief architect at NERSC, stated, "We’re not just building a faster computer, we’re building a system that helps researchers think bigger and discover sooner." NVIDIA's Vera Rubin platform introduces hardware-level optimizations specifically designed for the convergence of simulation, machine learning, and quantum algorithm development, marking a significant advancement in cutting-edge research capabilities.
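The 3-5x performance-per-watt claim follows directly from dividing the projected output gain by the projected power increase:

```python
# Performance-per-watt gain = scientific-output gain / power increase.
output_gain = 10              # 10x scientific output vs. Perlmutter, per the article
power_low, power_high = 2, 3  # power draw expected to grow 2-3x
best = output_gain / power_low    # 5.0x perf/watt if power only doubles
worst = output_gain / power_high  # ~3.3x perf/watt if power triples
print(f"{worst:.1f}x to {best:.1f}x")  # prints "3.3x to 5.0x", the quoted range
```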

Recommended read:
References :
  • blogs.nvidia.com: Ready for a front-row seat to the next scientific revolution? That’s the idea behind Doudna — a groundbreaking supercomputer announced today at Lawrence Berkeley National Laboratory in Berkeley, California.
  • insidehpc.com: The new system, due in 2026, is named after Jennifer Doudna, the Berkeley Lab-based biochemist who won the 2020 Nobel Prize for Chemistry for her work on gene-editing technology.
  • TechHQ: Nvidia Vera Rubin supercomputer to serve researchers in fusion energy, astronomy, and life sciences.
  • techxplore.com: A new supercomputer named after a winner of the Nobel Prize in chemistry will help power artificial intelligence technology and scientific discoveries from a perch in the hills above the University of California, Berkeley, federal officials said Thursday.
  • insidehpc.com: DOE Announces “Doudna” Dell-NVIDIA Supercomputer at NERSC
  • techhq.com: Nvidia Vera Rubin supercomputer to serve researchers in fusion energy, astronomy, and life sciences. Dell’s system targets 10x performance, 3-5x better power efficiency, to be deployed in 2026.

@www.quantamagazine.org //
Researchers are making strides in AI reasoning and efficiency, tackling both complex problem-solving and the energy consumption of these systems. One promising area involves reversible computing, where programs can run backward as easily as forward, theoretically saving energy by avoiding data deletion. Michael Frank, a researcher interested in the physical limits of computation, discovered that reversible computing could keep computational progress going as traditional computing slows due to physical limitations. Christof Teuscher at Portland State University emphasized the potential for significant power savings with this approach.
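The core idea is that a bijective (reversible) operation erases no information, so in principle it can run without the thermodynamic cost of deleting bits. A toy example using the Fredkin (controlled-swap) gate, which is universal for reversible logic:

```python
# Fredkin (controlled-SWAP) gate: a reversible, universal logic gate.
# Because the mapping is a bijection on 3-bit states, no information is
# erased -- the property reversible computing exploits to avoid the
# energy cost associated with deleting bits.

def fredkin(c: int, a: int, b: int) -> tuple:
    """If control bit c is 1, swap a and b; otherwise pass through."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: the gate is its own inverse, so applying it twice
# restores every 3-bit state.
for state in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*state)) == state

# Universality teaser: AND appears as the third output when b starts at 0.
assert all(fredkin(c, x, 0)[2] == (c & x) for c in (0, 1) for x in (0, 1))
```

Irreversible gates like plain AND destroy one input; building circuits from bijective gates like this one is what lets a reversible program be run backward.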

An evolution of the LLM-as-a-Judge paradigm is emerging. Meta AI has introduced the J1 framework which shifts the paradigm of LLMs from passive generators to active, deliberative evaluators through self-evaluation. This approach, detailed in "J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning," addresses the growing need for rigorous and scalable evaluation as AI systems become more capable and widely deployed. By reframing judgment as a structured reasoning task trained through reinforcement learning, J1 aims to create models that perform consistent, interpretable, and high-fidelity evaluations.

Soheil Feizi, an associate professor at the University of Maryland, has received a $1 million federal grant to advance foundational research in reasoning AI models. This funding, stemming from a Presidential Early Career Award for Scientists and Engineers (PECASE), will support his work in defending large language models (LLMs) against attacks, identifying weaknesses in how these models learn, encouraging transparent, step-by-step logic, and understanding the "reasoning tokens" that drive decision-making. Feizi plans to explore innovative approaches like live activation probing and novel reinforcement-learning designs, aiming to transform theoretical advancements into practical applications and real-world usages.


@www.marktechpost.com //
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.

The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience.
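On the consuming side, native function calling typically means the model emits a strict-JSON tool call that application code parses and dispatches. The sketch below assumes a common OpenAI-style payload shape; the field names are an assumption for illustration, not DeepSeek's documented schema:

```python
# Illustrative handler for a model's JSON function-call output.
# The payload shape (top-level "name" plus JSON-encoded "arguments") is a
# common convention among OpenAI-compatible APIs; it is assumed here, not
# taken from DeepSeek's documentation.
import json

TOOLS = {
    "get_weather": lambda city: f"22C and clear in {city}",  # stub tool
}

def handle_tool_call(raw: str) -> str:
    call = json.loads(raw)                      # model returns strict JSON
    fn = TOOLS[call["name"]]                    # dispatch on function name
    return fn(**json.loads(call["arguments"]))  # arguments arrive as JSON text

reply = handle_tool_call('{"name": "get_weather", "arguments": "{\\"city\\": \\"Oslo\\"}"}')
assert reply == "22C and clear in Oslo"
```

Guaranteed-parseable JSON output is what removes the fragile regex post-processing that earlier model versions forced on developers.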

Specifically, DeepSeek R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community.

Recommended read:
References :
  • Kyle Wiggers: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
  • AI News | VentureBeat: VentureBeat article on DeepSeek R1-0528.
  • Analytics Vidhya: New Deepseek R1-0528 Update is INSANE
  • MacStories: Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS
  • www.marktechpost.com: DeepSeek Releases R1-0528: An Open-Source Reasoning AI Model Delivering Enhanced Math and Code Performance with Single-GPU Efficiency
  • NextBigFuture.com: DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training.
  • Pandaily: In the early hours of May 29, Chinese AI startup DeepSeek quietly open-sourced the latest iteration of its R1 large language model, DeepSeek-R1-0528, on the Hugging Face platform .
  • www.computerworld.com: Reports that DeepSeek releases a new version of its R1 reasoning AI model.
  • techcrunch.com: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
  • the-decoder.com: Deepseek's R1 model closes the gap with OpenAI and Google after major update
  • Simon Willison: Some notes on the new DeepSeek-R1-0528 - a completely different model from the R1 they released in January, despite having a very similar name Terrible LLM naming has managed to infect the Chinese AI labs too
  • Analytics India Magazine: The new DeepSeek-R1 Is as good as OpenAI o3 and Gemini 2.5 Pro
  • RunPod Blog: The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
  • TheSequence: This article provides an overview of the new DeepSeek R1-0528 model and notes its improvements over the prior model released in January.
  • Kyle Wiggers: News about the release of DeepSeek's updated R1 AI model, emphasizing its increased censorship.
  • Fello AI: Reports that the R1-0528 model from DeepSeek is matching the capabilities of OpenAI's o3 and Google's Gemini 2.5 Pro.
  • felloai.com: Latest DeepSeek Update Called R1-0528 Is Matching OpenAI’s o3 & Gemini 2.5 Pro
  • www.tomsguide.com: DeepSeek’s latest update is a serious threat to ChatGPT and Google — here’s why

@www.microsoft.com //
Microsoft is taking a proactive approach to future cybersecurity threats by integrating post-quantum cryptography (PQC) into its Windows and Linux systems. This move is designed to protect against the potential for quantum computers to break current encryption methods like RSA, which secure online communications, banking transactions, and sensitive data. Quantum computers, leveraging quantum mechanics, can solve complex problems far faster than classical computers, posing a significant threat to existing cryptographic schemes. Microsoft's initiative aims to safeguard data from a "harvest now, decrypt later" scenario, where hackers steal encrypted data today with the intent of decrypting it once quantum technology becomes advanced enough.

Microsoft's PQC implementation includes the addition of two key algorithms: ML-KEM (Module Lattice-Based Key Encapsulation Mechanism) and ML-DSA (Module Lattice-Based Digital Signature Algorithm). ML-KEM, also known as CRYSTALS-Kyber, secures key exchanges and prevents attacks by protecting the start of secure connections. ML-DSA, formerly CRYSTALS-Dilithium, ensures data integrity and authenticity through digital signatures. These algorithms are being introduced in Windows Insider builds (Canary Build 27852+) and Linux via SymCrypt-OpenSSL v1.9.0, allowing developers and organizations to begin testing and preparing for a quantum-secure future.
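A key encapsulation mechanism like ML-KEM exposes three operations: key generation, encapsulation against a public key, and decapsulation with the secret key. The toy sketch below illustrates only that API shape; it uses hashing and XOR rather than lattice math, is not secure, and its function names are illustrative rather than any SymCrypt or OpenSSL API:

```python
import os
import hashlib

# Toy key-encapsulation sketch showing the ML-KEM API shape:
# keygen -> encapsulate (sender) -> decapsulate (receiver).
# NOT cryptographically secure; real ML-KEM uses module-lattice math.

def keygen():
    sk = os.urandom(32)                       # secret key
    pk = hashlib.sha256(b"pk" + sk).digest()  # toy "public key" derived from sk
    return pk, sk

def encapsulate(pk):
    r = os.urandom(32)                        # sender's randomness
    ct = bytes(a ^ b for a, b in zip(r, pk))  # toy "ciphertext"
    ss = hashlib.sha256(r).digest()           # shared secret
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(b"pk" + sk).digest()  # re-derive the public key
    r = bytes(a ^ b for a, b in zip(ct, pk))  # recover the randomness
    return hashlib.sha256(r).digest()

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)
ss_receiver = decapsulate(sk, ct)
assert ss_sender == ss_receiver  # both sides now hold the same secret
```

In a handshake, one side encapsulates against the other's public key and both then derive session keys from the resulting shared secret; the ciphertext is what travels over the wire.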

This update to Windows 11 is a critical step in what Microsoft views as a major technological transition. By making quantum-resistant algorithms available through SymCrypt, the core cryptographic code library in Windows, and updating SymCrypt-OpenSSL, Microsoft is enabling the widely used OpenSSL library to leverage SymCrypt for cryptographic operations. The new algorithms, selected by the National Institute of Standards and Technology (NIST), represent a move towards replacing vulnerable cryptosystems like RSA and elliptic curves. This signifies a broader effort to bolster cybersecurity against the emerging threat of quantum computing.

Recommended read:
References :
  • www.microsoft.com: FrodoKEM: A conservative quantum-safe cryptographic algorithm
  • medium.com: Welcome to the Quantum Era, where even the strongest locks we use to protect our digital lives might soon be breakable. However, don’t…
  • arstechnica.com: Here’s how Windows 11 aims to make the world safe in the post-quantum era
  • medium.com: Quantum Computing and Encryption Breakthroughs in 2025: A New Era of Innovation
  • medium.com: Cracking RSA with Fewer Qubits: What Google’s New Quantum Factoring Estimate Means for…
  • medium.com: Google’s quantum leap just changed everything: They can now break encryption 20x faster than…
  • medium.com: On August 13, 2024, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced the approval of…
  • medium.com: As our world becomes increasingly interconnected, the Internet of Things (IoT) is transforming industries, homes, and entire cities. From…
  • : Post-Quantum Cryptography Coalition (PQCC) Publishes Comprehensive Roadmap for Post-Quantum Cryptography Migration
  • www.techradar.com: Breaking encryption with quantum computers may be easier than we thought

@www.microsoft.com //
IACR News has highlighted recent advancements in post-quantum cryptography, essential for safeguarding data against future quantum computer attacks. A key area of focus is the development of algorithms and protocols that remain secure even when classical cryptographic methods become vulnerable. Among these efforts, FrodoKEM stands out as a conservative quantum-safe cryptographic algorithm, designed to provide strong security guarantees in the face of quantum computing threats.

The adaptive security of key-unique threshold signatures is also under scrutiny. Research presented by Elizabeth Crites, Chelsea Komlo, and Mary Maller investigates the security assumptions required to prove the adaptive security of threshold signatures. Their work reveals impossibility results that highlight the difficulty of achieving adaptive security for key-unique threshold signatures, particularly for schemes compatible with standard, single-party signatures like BLS, ECDSA, and Schnorr. This research aims to guide the development of new assumptions and properties for constructing adaptively secure threshold schemes.
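The secret-sharing idea underlying threshold signatures (any t of n parties can act; fewer cannot) can be illustrated with a minimal Shamir sketch. This is a toy over a single prime field; real threshold signature schemes build an interactive signing protocol on top of this kind of sharing:

```python
import random

# Toy Shamir secret sharing over a prime field: any t of n shares
# reconstruct the secret via Lagrange interpolation; fewer than t
# shares reveal nothing about it.
P = 2**61 - 1  # Mersenne prime used as the field modulus

def share(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the secret.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789  # a different 3 also work
```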

In related news, Muhammed F. Esgin is offering PhD and Post-Doc positions in post-quantum cryptography, emphasizing the need for candidates with a strong mathematical and cryptography background. Students at Monash University can expect to work on their research from the beginning, supported by competitive stipends and opportunities for teaching assistant roles. These academic opportunities are crucial for training the next generation of cryptographers who will develop and implement post-quantum solutions.

Recommended read:
References :
  • mfesgin.github.io: PhD and Post-Doc in Post-Quantum Cryptography
  • IACR News: Zero-Trust Post-quantum Cryptography Implementation Using Category Theory
  • medium.com: Post-Quantum Cryptography Is Arriving on Windows & Linux
  • medium.com: NIST Approves Three Post-Quantum Cryptography Standards: A Milestone for Digital Security
  • medium.com: Should Post-Quantum Cryptography Start Now? The Clock Is Ticking

@www.microsoft.com //
Microsoft is actively preparing for a future where quantum computers pose a significant threat to current encryption methods. The company is exploring Post-Quantum Cryptography (PQC) solutions, with a focus on algorithms like FrodoKEM, to bolster security on Windows and Linux platforms. This move is driven by the understanding that quantum computers, with their ability to solve certain problems exponentially faster than classical computers, could break the cryptographic backbone of today’s digital world, including systems like RSA, Diffie-Hellman, and elliptic curve cryptography. The urgency is underscored by recent advances like Microsoft’s Majorana 1, a quantum processor powered by topological qubits, which marks a major step toward practical quantum computing.

Microsoft's efforts to transition to quantum-resistant cryptographic systems include adding PQC algorithms to their core cryptography library, SymCrypt. Recently, Microsoft has taken the next step by adding PQC support to Windows Insiders (Canary Build 27852+) and to Linux through SymCrypt-OpenSSL v1.9.0. These additions allow companies and developers to start testing and preparing for a quantum-secure future, guarding against a potential "harvest now, decrypt later" scenario in which hackers collect encrypted data today to decrypt it later with quantum computers. This proactive approach aims to safeguard digital lives against the looming quantum threat.

The new additions to Windows include ML-KEM (Module Lattice-Based Key Encapsulation Mechanism), also known as CRYSTALS-Kyber, designed for secure key exchange, and ML-DSA (Module Lattice-Based Digital Signature Algorithm), previously known as CRYSTALS-Dilithium, used for digital signatures to ensure data integrity and authenticity. NIST has approved three PQC standards: FIPS 203, 204, and 205. FIPS 203, the Module-Lattice-Based Key-Encapsulation Mechanism Standard, specifies a key encapsulation mechanism designed to protect information exchanged over public networks, ensuring confidentiality even in the presence of quantum adversaries. FIPS 204, the Module-Lattice-Based Digital Signature Standard, defines a digital signature scheme that provides authentication and integrity, crucial for verifying identities and securing communications. FIPS 205, the Stateless Hash-Based Digital Signature Standard, outlines a stateless hash-based signature scheme, offering an alternative approach to digital signatures with strong security assurances. NIST encourages organizations to begin transitioning to these new standards to ensure long-term data security.

Recommended read:
References :
  • medium.com: Welcome to the Quantum Era, where even the strongest locks we use to protect our digital lives might soon be breakable.
  • Microsoft Research: The recent advances in quantum computing offer many advantages—but also challenge current cryptographic strategies.
  • www.microsoft.com: FrodoKEM: A conservative quantum-safe cryptographic algorithm
  • arstechnica.com: Here’s how Windows 11 aims to make the world safe in the post-quantum era

Haden Pelletier@Towards Data Science //
Recent discussions in statistics highlight significant concepts and applications relevant to data science. A book review explores seminal ideas and controversies in the field, focusing on key papers and historical perspectives. The review mentions Fisher's 1922 paper, which is credited with creating modern mathematical statistics, and discusses debates around hypothesis testing and Bayesian analysis.

Stephen Senn's guest post addresses the concept of "relevant significance" in statistical testing, cautioning against misinterpreting statistical significance as proof of a genuine effect. Senn points out that rejecting a null hypothesis does not necessarily mean it is false, emphasizing the importance of careful interpretation of statistical results.

Furthermore, aspiring data scientists are advised to familiarize themselves with essential statistical concepts for job interviews. These include understanding p-values, Z-scores, and outlier detection methods. A p-value is crucial for hypothesis testing, and Z-scores help identify data points that deviate significantly from the mean. These concepts form a foundation for analyzing data and drawing meaningful conclusions in data science applications.
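The z-score test mentioned above can be computed with the standard library alone. The classic rule of thumb flags |z| > 3, but note that with only seven points the sample standard deviation is inflated by the outlier itself, so a threshold of 2 is used in this small example:

```python
from statistics import mean, stdev

def z_scores(data):
    # Standardize: distance of each point from the sample mean,
    # in units of the sample standard deviation.
    m, s = mean(data), stdev(data)
    return [(x - m) / s for x in data]

def outliers(data, threshold=3.0):
    # Flag points whose |z| exceeds the chosen threshold.
    return [x for x, z in zip(data, z_scores(data)) if abs(z) > threshold]

data = [10, 12, 11, 13, 12, 11, 95]  # 95 sits far from the rest
print(outliers(data, threshold=2.0))  # [95]
```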

Recommended read:
References :
  • errorstatistics.com: Stephen Senn (guest post): “Relevant significance? Be careful what you wish for”
  • Towards Data Science: 5 Statistical Concepts You Need to Know Before Your Next Data Science Interview
  • Xi'an's Og: Seminal ideas and controversies in Statistics [book review]
  • medium.com: Statistics for Data Science and Machine Learning
  • medium.com: Why Data Science Needs Statistics

Source Asia@Source Asia //
Microsoft Research has unveiled Aurora, a groundbreaking AI foundation model with 1.3 billion parameters, that is set to revolutionize Earth system forecasting. This innovative model outperforms traditional operational forecasts in critical areas such as air quality prediction, ocean wave forecasting, tropical cyclone tracking, and high-resolution weather prediction. Aurora achieves this superior performance at significantly lower computational costs, marking a significant advancement in the field. The model's capabilities extend beyond traditional weather forecasting, positioning it as a versatile tool for addressing a wide range of environmental challenges.

Aurora's architecture, based on Perceiver IO, allows it to efficiently process structured inputs and outputs, making it well-suited for complex Earth system data. Researchers at Microsoft have trained Aurora on an unprecedented volume of atmospheric data, incorporating information from satellites, radar, weather stations, simulations, and forecasts. This extensive training enables Aurora to rapidly generate forecasts and adapt to specific tasks through fine-tuning with smaller, task-specific datasets. The model's flexibility and ability to learn from diverse data sources are key factors in its exceptional forecasting accuracy.

The development of Aurora signifies a major step forward in applying AI to Earth science. By demonstrating the potential of foundation models to accurately and efficiently predict various environmental phenomena, Aurora paves the way for new approaches to disaster preparedness, resource management, and climate change mitigation. The publicly available code and weights of Aurora, accessible on GitHub, encourage further research and development in this exciting area. Microsoft's work underscores the transformative power of AI in addressing some of the world's most pressing environmental challenges.

Recommended read:
References :
  • news.microsoft.com: From sea to sky: Microsoft’s Aurora AI foundation model goes beyond weather forecasting
  • Microsoft Research: Abstracts: Aurora with Megan Stanley and Wessel Bruinsma

@docslib.org //
The Kazhdan-Lusztig correspondence, a significant concept in representation theory, is gaining increased attention. This correspondence establishes an equivalence between representation categories of quantum groups and affine Lie algebras. Recent research explores its applications in areas like logarithmic conformal field theory (CFT), particularly concerning the representation category of the triplet W-algebra. The Kazhdan-Lusztig correspondence has also been investigated in relation to vertex algebras, further solidifying its importance across different mathematical and physical domains.

Dennis Gaitsgory was awarded the Breakthrough Prize in Mathematics for his broad contributions to the field, including work closely related to representation theory and the geometric Langlands program. His recognition highlights the impact of representation theory on other areas of mathematics. Further research is focusing on exploring tensor structures arising from affine Lie algebras and building on Kazhdan and Lusztig's foundational work in the area.

Recent work has also explored the Kazhdan-Lusztig correspondence at a positive level using Arkhipov-Gaitsgory duality for affine Lie algebras. A functor is defined which sends objects in the DG category of G(O)-equivariant positive level affine Lie algebra modules to objects in the DG category of modules over Lusztig’s quantum group at a root of unity. Researchers are actively working to prove that the semi-infinite cohomology functor for positive level modules factors through the Kazhdan-Lusztig functor at positive level and the quantum group cohomology functor with respect to the positive part of Lusztig’s quantum group.

Recommended read:
References :
  • nLab: Kazhdan-Lusztig correspondence.

@www.marktechpost.com //
MIT researchers are making significant strides in artificial intelligence, focusing on enhancing AI's ability to learn and interact with the world more naturally. One project involves developing AI models that learn connections between vision and sound without human intervention. This approach aims to mimic how humans learn, associating what they see with what they hear. The model could be useful in applications such as journalism and film production, where it could help curate multimodal content through automatic video and audio retrieval.

The new machine-learning model can pinpoint exactly where a particular sound occurs in a video clip, eliminating the need for manual labeling. By adjusting how the original model is trained, it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like the sound of a roller coaster in action or an airplane taking flight. Researchers also made architectural tweaks that help the system balance two distinct learning objectives, which improves performance.
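Retrieval of this kind reduces to nearest-neighbor search in a shared audio-visual embedding space. A minimal sketch with cosine similarity, where random vectors stand in for learned embeddings and names like clip_2 are purely illustrative:

```python
import math
import random

# Toy cross-modal retrieval: audio and video clips live in a shared
# embedding space; a query retrieves the clip with the highest cosine
# similarity. Random vectors stand in for learned embeddings.
random.seed(0)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

video_embeddings = {
    f"clip_{i}": [random.gauss(0, 1) for _ in range(8)] for i in range(5)
}

def retrieve(audio_query, bank):
    # Nearest neighbor by cosine similarity.
    return max(bank, key=lambda name: cosine(audio_query, bank[name]))

# An audio query near clip_2's embedding should retrieve clip_2.
query = [x + random.gauss(0, 0.01) for x in video_embeddings["clip_2"]]
print(retrieve(query, video_embeddings))  # clip_2
```

In the MIT work the two embeddings come from trained audio and video encoders; the finer-grained variant scores individual frames against short audio windows rather than whole clips.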

Additionally, researchers from the National University of Singapore have introduced 'Thinkless,' an adaptive framework designed to cut unnecessary reasoning in language models by up to 90%. By incorporating a novel algorithm called Decoupled Group Relative Policy Optimization (DeGRPO), Thinkless separates the training focus between selecting the reasoning mode and improving the accuracy of the generated response. This framework equips a language model with the ability to dynamically decide between short or long-form reasoning, addressing the issue of resource-intensive and wasteful reasoning sequences for simple queries.

Recommended read:
References :
  • learn.aisingapore.org: AI learns how vision and sound are connected, without human intervention | MIT News
  • news.mit.edu: AI learns how vision and sound are connected, without human intervention
  • www.marktechpost.com: Researchers from the National University of Singapore Introduce ‘Thinkless,’ an Adaptive Framework that Reduces Unnecessary Reasoning by up to 90% Using DeGRPO
  • news.mit.edu: Learning how to predict rare kinds of failures
  • MarkTechPost: Researchers from the National University of Singapore Introduce ‘Thinkless,’ an Adaptive Framework that Reduces Unnecessary Reasoning by up to 90% Using DeGRPO

@www.microsoft.com //
Microsoft is taking a significant step towards future-proofing cybersecurity by integrating post-quantum cryptography (PQC) into Windows Insider builds. This move aims to protect data against the potential threat of quantum computers, which could render current encryption methods vulnerable. The integration of PQC is a critical step toward quantum-resilient cybersecurity, ensuring that Windows systems can withstand attacks from more advanced computing power in the future.

Microsoft announced the availability of PQC support in Windows Insider Canary builds (27852 and above). This release allows developers and organizations to begin experimenting with PQC in real-world environments, assessing integration challenges, performance trade-offs, and compatibility. This is being done in an attempt to jump-start what’s likely to be the most formidable and important technology transition in modern history.

The urgency behind this transition stems from the "harvest now, decrypt later" threat, where malicious actors store encrypted communications today, with the intent to decrypt them once quantum computers become capable. These captured secrets, such as passwords, encryption keys, or medical data, could remain valuable to attackers for years to come. By adopting PQC algorithms, Microsoft aims to safeguard sensitive information against this future risk, emphasizing the importance of starting the transition now.

Recommended read:
References :
  • cyberinsider.com: Microsoft has begun integrating post-quantum cryptography (PQC) into Windows Insider builds, marking a critical step toward quantum-resilient cybersecurity. Microsoft announced the availability of PQC support in Windows Insider Canary builds (27852 and above). This release allows developers and organizations to begin experimenting with PQC in real-world environments, assessing integration challenges, performance trade-offs, and compatibility with …
  • Dan Goodin: Microsoft is updating Windows 11 with a set of new encryption algorithms that can withstand future attacks from quantum computers in an attempt to jump-start what’s likely to be the most formidable and important technology transition in modern history.
  • Red Hat Security: In their article on post-quantum cryptography, Emily Fox and Simo Sorce explained how Red Hat is integrating post-quantum cryptography (PQC) into our products. PQC protects confidentiality, integrity and authenticity of communication and data against quantum computers, which will make attacks on existing classic cryptographic algorithms such as RSA and elliptic curves feasible. Cryptographically relevant quantum computers (CRQCs) are not known to exist yet, but continued advances in research point to a future risk of successful attacks. While the migration to algorithms resistant against such
  • medium.com: Post-Quantum Cryptography Is Arriving on Windows & Linux
  • www.microsoft.com: The recent advances in quantum computing offer many advantages—but also challenge current cryptographic strategies. Learn how FrodoKEM could help strengthen security, even in a future with powerful quantum computers.
  • arstechnica.com: For the first time, new quantum-safe algorithms can be invoked using standard Windows APIs.

Source Asia@Source Asia //
Microsoft's Aurora AI model is revolutionizing weather forecasting by providing accurate 10-day forecasts in mere seconds. This AI foundation model, developed by Microsoft Research, has demonstrated capabilities that extend beyond traditional weather prediction, encompassing environmental events such as tropical cyclones, air quality, and ocean waves. Aurora achieves this by training on a massive dataset of over one million hours of atmospheric data from satellites, radar, weather stations, simulations, and forecasts, which Microsoft believes is the largest collection ever assembled for training an AI forecasting model. The model's speed and accuracy have the potential to improve safety and inform decisions across various sectors.

The core strength of Aurora lies in its foundation model architecture. It's not simply limited to weather forecasting; it can be fine-tuned for specific environmental prediction tasks. After initial training on general weather patterns, Aurora can be adapted with smaller datasets to forecast elements like wave height or air quality. The AI does not fully grasp the physical laws governing weather, but its use for environmental prediction tasks and ability to provide accurate forecasts is still significant. This flexibility makes it a versatile tool for understanding and predicting various aspects of the Earth system.

Aurora's performance has been noteworthy, beating existing numerical and AI models across 91 percent of forecasting targets when fine-tuned to medium-range weather forecasts. Its rapid processing time, taking seconds compared to the hours required by traditional models, makes it a valuable asset for timely decision-making, and it is part of Microsoft's broader effort to use AI to make weather forecasting more efficient and accurate.

Recommended read:
References :
  • Source Asia: From sea to sky: Microsoft’s Aurora AI foundation model goes beyond weather forecasting
  • Microsoft Research: Abstracts: Aurora with Megan Stanley and Wessel Bruinsma
  • www.windowscentral.com: Microsoft's latest AI model can accurately forecast the weather: “It doesn’t know the laws of physics, so it could make up something completely crazy”
  • www.nature.com: A foundation model for the Earth system Bodnar, C., Bruinsma, W.P., Lucic, A. et al. A foundation model for the Earth system. Nature (2025).
  • maXpool: A foundation model for the Earth system. Aurora is a 1.3-billion-parameter foundation model for the Earth system.
  • doi.org: A foundation model for the Earth system
  • techxplore.com: Aurora, Microsoft's new AI model, is poised to revolutionize weather prediction with detailed forecasting.

@deepmind.google //
Google DeepMind has unveiled AlphaEvolve, a groundbreaking AI agent designed for algorithmic and scientific discovery. This innovative agent combines the power of large language models (LLMs) like Gemini Pro, evolutionary search frameworks, and automated evaluation methods to evolve superior algorithms. Unlike systems that merely generate plausible code, AlphaEvolve iteratively refines entire codebases, optimizing across multiple performance metrics and grounding itself in actual code execution results, effectively sidestepping hallucinations. Terence Tao also collaborated with the DeepMind team on AlphaEvolve, highlighting its significance in the field.

AlphaEvolve's capabilities extend to a range of algorithmic and scientific challenges. It has optimized Google's data center scheduling, recovering 0.7% of Google's compute capacity, simplified hardware accelerator circuit designs, and accelerated the training of its own underlying LLM, offering a glimpse into AI self-improvement. Notably, AlphaEvolve cracked a problem unchanged since 1969, devising a more efficient method for multiplying two 4x4 complex matrices using only 48 scalar multiplications, surpassing Strassen's algorithm after 56 years. The agent also tackled over 50 other open mathematical problems, often matching or exceeding the state of the art.
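For context on that matrix-multiplication result: Strassen's 1969 scheme multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8, and applied recursively to 4x4 matrices it needs 7² = 49 multiplications, which AlphaEvolve's 48-multiplication scheme (over complex numbers) finally beat. A minimal sketch of Strassen's 2x2 base case:

```python
def strassen_2x2(A, B):
    # Strassen's 7 products for 2x2 matrices (vs. 8 for the naive method).
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], matching the naive product
```

The savings compound under recursion: multiplications dominate the cost, so shaving one product at a fixed block size lowers the asymptotic exponent of the whole algorithm.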

In parallel, Google has launched "Jules," a new coding agent powered by Google's Gemini 2.5 Pro model and designed to assist developers with repetitive tasks such as bug-fixing, documentation, test generation, and feature building. Jules operates in a secure cloud environment, breaking down complex tasks into smaller steps and adapting to user instructions. The agent automatically creates pull requests with audio summaries, streamlining the code review process. This move signifies the rapid maturation of AI in software development and a broader trend towards AI agents becoming trusted engineering partners.

Recommended read:
References :
  • pub.towardsai.net: TAI #153: AlphaEvolve & Codex — AI Breakthroughs in Algorithm Discovery & Software Engineering
  • composio.dev: AlphaEvolve: Evolutionary agent from DeepMind
  • deepmind.google: AlphaEvolve: A coding agent for scientific and algorithmic discovery paper
  • gregrobison.medium.com: AlphaEvolve: How AI-Driven Algorithm Discovery Is Rewriting Computing
  • towardsdatascience.com: Google’s AlphaEvolve: Getting Started with Evolutionary Coding Agents
  • x.com: AI is able to devise a more efficient method for multiplying two 4x4 complex matrices using only 48 scalar multiplications
  • sites.libsyn.com: OpenAI's Roadmap to AGI, Google's AlphaEvolve Codes Itself & So Many AI Babies
  • Last Week in AI: #209 - OpenAI non-profit, US diffusion rules, AlphaEvolve

Source Asia@Source Asia //
Microsoft's Aurora AI foundation model is revolutionizing weather and environmental forecasting, offering quicker and more accurate predictions compared to traditional methods. Developed by Microsoft Research, Aurora is a large-scale AI model trained on a vast dataset of atmospheric information, including satellite data, radar readings, weather station observations, and simulations. This comprehensive training allows Aurora to forecast a range of environmental events, from hurricanes and typhoons to air quality and ocean waves, with exceptional precision and speed. The model's capabilities extend beyond conventional weather forecasting, making it a versatile tool for understanding and predicting environmental changes.

Aurora's unique architecture enables it to be fine-tuned for specific tasks using modest amounts of additional data. This "fine-tuning" process allows the model to generate forecasts in seconds, demonstrating its efficiency and adaptability. Researchers have shown that Aurora outperforms existing numerical and AI models in 91% of forecasting targets when fine-tuned for medium-range weather forecasts. Its ability to accurately predict hurricane trajectories and other extreme weather events highlights its potential to improve disaster preparedness and response efforts, ultimately saving lives and mitigating damage.

Senior researchers Megan Stanley and Wessel Bruinsma emphasized Aurora's broader impact on environmental science, noting its potential to revolutionize the field. In a paper published in Nature, they highlighted Aurora's ability to correctly forecast hurricanes in 2023 more accurately than operational forecasting centers, such as the US National Hurricane Center. Aurora also demonstrated its capabilities when correctly forecasting where and when Doksuri would hit the Philippines four days in advance. These findings underscore the transformative potential of AI in addressing complex environmental challenges and paving the way for more effective climate modeling and environmental event management.

Recommended read:
References :
  • Source Asia: Microsoft’s Aurora AI foundation model goes beyond weather forecasting
  • Source: Aurora is a new foundation model from Microsoft Research that goes beyond weather forecasting, delivering faster, more accurate predictions of environmental events. Awesome to see this breakthrough published in Nature Magazine.
  • Microsoft Research: Abstracts: Aurora with Megan Stanley and Wessel Bruinsma
  • news.microsoft.com: Microsoft’s Aurora AI foundation model goes beyond weather forecasting
  • www.sciencedaily.com: AI is good at weather forecasting. Can it predict freak weather events?
  • techxplore.com: Microsoft has developed an artificial intelligence (AI) model that beats current forecasting methods in tracking air quality, weather patterns, and climate-addled tropical storms, according to findings published Wednesday.
  • www.nature.com: Details about Aurora, a 1.3-billion-parameter foundation model for the Earth system, outperforming operational forecasts.
  • www.windowscentral.com: Microsoft's latest AI model can accurately forecast the weather: “It doesn’t know the laws of physics, so it could make up something completely crazy”
  • The Tech Basic: Microsoft’s New AI Can Predict Storms and Pollution Better Than Ever
  • doi.org: A foundation model for the Earth system
  • thetechbasic.com: Microsoft’s New AI Can Predict Storms and Pollution Better Than Ever
  • bsky.app: An #AI trained on decades of weather data can predict hurricanes better than other approaches: https://www.theregister.com/2025/05/21/earth_system_model_hurricane_forecast/ #ArtificialIntelligence
  • computersweden.se: Microsoft releases new AI that can better predict air quality and weather
  • intelligence-artificielle.developpez.com: Microsoft has developed the AI model Aurora which generates 10-day weather forecasts and predicts hurricane trajectories, thus surpassing current forecasting methods.
  • techcrunch.com: Microsoft's new AI will provide better air quality and weather forecasts

@blogs.nvidia.com //
Recent advancements in quantum computing include the launch of new supercomputers and the development of open-source frameworks. NVIDIA and AIST have collaborated to launch ABCI-Q, a supercomputing system designed for hybrid quantum-AI research. This system, powered by NVIDIA H100 GPUs and utilizing NVIDIA’s Quantum-2 InfiniBand platform, is hosted at the Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT). ABCI-Q supports hybrid workloads by integrating GPU-based simulation with physical quantum processors from Fujitsu, QuEra, and OptQC, aiming to advance quantum error correction and algorithm development. It serves as a testbed for quantum-GPU workflows across various hardware modalities.

Quantum Machines has introduced QUAlibrate, an open-source calibration framework designed to significantly reduce the time required for quantum computer calibration. Calibration, a major hurdle in quantum system performance and scalability, can now be reduced from hours to minutes. QUAlibrate enables the creation, execution, and sharing of modular calibration protocols, allowing researchers to calibrate multi-qubit superconducting systems rapidly. At the Israeli Quantum Computing Center, full multi-qubit calibration was achieved in just 140 seconds using QUAlibrate. The framework is built on the QUA programming language and uses the Quantum Abstract Machine (QUAM) to model quantum hardware, featuring a graph-based calibration approach.
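QUAlibrate's graph-based calibration can be pictured as a dependency graph of calibration nodes executed in topological order. The sketch below is a toy with hypothetical node names; QUAlibrate's real nodes wrap QUA programs and update hardware state:

```python
from graphlib import TopologicalSorter

# Toy graph-based calibration runner: each node depends on earlier
# calibrations and runs only after its prerequisites complete.
# Node names are hypothetical examples of common superconducting-qubit
# calibration steps, not QUAlibrate's actual protocol names.
dependencies = {
    "resonator_spectroscopy": set(),
    "qubit_spectroscopy": {"resonator_spectroscopy"},
    "rabi_amplitude": {"qubit_spectroscopy"},
    "ramsey_frequency": {"rabi_amplitude"},
    "readout_fidelity": {"rabi_amplitude"},
}

def run_calibration(graph):
    order = list(TopologicalSorter(graph).static_order())
    results = {}
    for node in order:
        # In a real system each node would run an experiment and
        # update hardware parameters; here we just record completion.
        results[node] = "calibrated"
    return order, results

order, results = run_calibration(dependencies)
assert order.index("qubit_spectroscopy") > order.index("resonator_spectroscopy")
```

Structuring calibration as a graph is what makes the large speedups possible: independent branches can be skipped or re-run selectively instead of recalibrating everything from scratch.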

These advancements are supported by strategic collaborations and investments in quantum technologies. SilQ Connect, a startup focusing on distributed quantum computing, has secured pre-seed funding to advance modular quantum interconnects. This funding from QV Studio, Quantacet, and Quantonation will support the development of microwave-optical quantum interconnects for scalable quantum systems. Additionally, Taiwan's National Center for High-Performance Computing is deploying a new NVIDIA-powered AI supercomputer to support research in climate science, quantum research, and the development of large language models. This initiative aims to foster cross-domain collaboration and global AI leadership.

Recommended read:
References :
  • : Quantum Machines Releases Open-Source QUAlibrate Framework to Accelerate Quantum System Calibration
  • : NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • quantumcomputingreport.com: Quantum Machines Releases Open-Source QUAlibrate Framework to Accelerate Quantum System Calibration
  • AI News | VentureBeat: Quantum Machines launches Qualibrate open source framework to speed quantum computer calibration
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research