Top Mathematics discussions

NishMath

Charlie Fink@Charlie Fink //
References: SiliconANGLE, Charlie Fink, Unite.AI ...
Sourcetable has launched its AI-powered spreadsheet, securing $4.3 million in funding. This new platform aims to revolutionize data analysis, allowing users to interact with and analyze data using natural language, eliminating the need for complex formulas. The funding round was led by Bee Partners, with participation from figures such as Hugging Face CTO Julien Chaumond and GitHub co-founder Tom Preston-Werner. The company aims to make data analysis more accessible, and empower more people to become spreadsheet power users.

The "self-driving spreadsheet" can understand data context without the need for users to pre-select ranges, interpreting multiple ranges across different tabs. Users can issue commands via text or voice, creating financial models, generating pivot tables, cleaning data, and creating charts and graphs. Sourcetable CEO Eoin McMillan says that AI is the biggest platform shift since the browser.

Recommended read:
References :
  • SiliconANGLE: Sourcetable gets $4.3M in funding to help everyone become a spreadsheet power user
  • Charlie Fink: Sourcetable Launches AI Spreadsheets With $4.3 Million In New Funding
  • www.itpro.com: Sourcetable, a startup behind a ‘self-driving spreadsheet’ tool, wants to replicate the vibe coding trend for data analysts
  • Unite.AI: Sourcetable Raises $4.3M to Launch the World’s First Self-Driving Spreadsheet, Powered by AI

@sciencedaily.com //
Recent advancements in quantum computing research have yielded promising results. Researchers at the University of the Witwatersrand in Johannesburg, along with collaborators from Huzhou University in China, have discovered a method to shield quantum information from environmental disruptions, potentially leading to more reliable quantum technologies. This breakthrough involves manipulating quantum wave functions to preserve quantum information, which could enhance medical imaging, improve AI diagnostics, and strengthen data security by providing ultra-secure communication.

UK startup Phasecraft has announced a new algorithm, THRIFT, that improves the ability of quantum computers to model new materials and chemicals by a factor of 10. By optimizing quantum simulation, THRIFT enables scientists to model new materials and chemicals faster and more accurately, even on today’s slower machines. Furthermore, Oxford researchers have demonstrated a 25-nanosecond controlled-Z gate with 99.8% fidelity, combining high speed and accuracy in a simplified superconducting circuit. This achievement advances fault-tolerant quantum computing by improving raw gate performance without relying heavily on error correction or added hardware.
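
For readers unfamiliar with the gate being benchmarked: a controlled-Z (CZ) gate flips the phase of the |11⟩ state and leaves the other computational basis states untouched. A small NumPy sanity check of its matrix form (purely illustrative, unrelated to the Oxford experiment):

    import numpy as np

    # Controlled-Z on two qubits: diagonal with a single -1 on |11>.
    CZ = np.diag([1, 1, 1, -1])

    assert np.allclose(CZ @ CZ, np.eye(4))  # CZ is its own inverse
    assert np.allclose(CZ, CZ.T)            # symmetric: control and target are interchangeable
    print(CZ)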

Recommended read:
References :
  • The Quantum Insider: Oxford Researchers Demonstrate Fast, 99.8% Fidelity Two-Qubit Gate Using Simplified Circuit Design
  • www.sciencedaily.com: Researchers find a way to shield quantum information from 'noise'
  • Bernard Marr: Quantum computing is poised to revolutionize industries from drug development to cybersecurity, with the global market projected to reach $15 billion by 2030.

Mike Watts@computational-intelligence.blogspot.com //
Recent developments highlight advancements in quantum computing, artificial intelligence, and cryptography. Classiq Technologies, in collaboration with Sumitomo Corporation and Mizuho-DL Financial Technology, achieved up to 95% compression of quantum circuits for Monte Carlo simulations used in financial risk analysis. This project explored the use of Classiq’s technology to generate more efficient quantum circuits for a novel quantum Monte Carlo simulation algorithm incorporating pseudo-random numbers proposed by Mizuho-DL FT, evaluating the feasibility of implementing quantum algorithms in financial applications.
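
To give a sense of the underlying workload, quantum Monte Carlo methods for risk analysis accelerate the same kind of scenario sampling that classical Monte Carlo performs. A purely classical toy sketch of a one-day Value-at-Risk estimate (hypothetical parameters, not the algorithm from the Classiq/Mizuho-DL FT study):

    import numpy as np

    # Illustrative classical Monte Carlo estimate of a 1-day 99% Value-at-Risk
    # for a single asset (made-up parameters).
    rng = np.random.default_rng(42)
    portfolio_value = 1_000_000        # USD
    mu, sigma = 0.0002, 0.012          # assumed daily drift and volatility
    n_paths = 100_000

    returns = rng.normal(mu, sigma, n_paths)
    losses = -portfolio_value * returns
    var_99 = np.percentile(losses, 99)
    print(f"Estimated 99% one-day VaR: ${var_99:,.0f}")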

Oxford researchers demonstrated a fast, 99.8% fidelity two-qubit gate using a simplified circuit design, achieving this using a modified coaxmon circuit architecture. Also, a collaborative team from JPMorganChase, Quantinuum, Argonne National Laboratory, Oak Ridge National Laboratory, and the University of Texas at Austin demonstrated a certified randomness protocol using a 56-qubit Quantinuum System Model H2 trapped-ion quantum computer. This is a major milestone for real-world quantum applications, with the certified randomness validated using over 1.1 exaflops of classical computing power, confirming the quantum system’s ability to generate entropy beyond classical reach.

The 2025 IEEE International Conference on Quantum Artificial Intelligence will be held in Naples, Italy, from November 2-5, 2025, with a paper submission deadline of May 15, 2025. Vanderbilt University will host a series of workshops devoted to Groups in Geometry, Analysis and Logic starting May 28, 2025.

Recommended read:
References :

Terence Tao@What's new //
References: beuke.org, What's new
Terence Tao has recently uploaded a paper to the arXiv titled "Decomposing a factorial into large factors." The paper explores a mathematical quantity, denoted t(N), which represents the largest value such that N! can be factorized into t(N) factors, with each factor being at least N. This concept, initially introduced by Erdős, delves into how equitably a factorial can be split into its constituent factors.

Erdős initially conjectured that an upper bound on t(N) was asymptotically sharp, implying that factorials could be split into factors of nearly uniform size for large N. However, a purported proof by Erdős, Selfridge, and Straus was lost, leading to the assertion becoming a conjecture. The paper establishes bounds on t(N), recovering a previously lost result. Further conjectures were made by Guy and Selfridge, exploring whether such relationships hold for all values of N.
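
To make the quantity t(N) concrete, here is a naive search of my own (not code from the paper) that computes t(N) for very small N directly from the definition:

    from math import factorial, isqrt
    from functools import cache

    def t(N):
        """Largest t such that N! factors into t parts, each at least N.
        Exponential brute force; only meant to illustrate the definition for tiny N."""

        @cache
        def best(remaining, min_factor):
            # Max number of nondecreasing factors >= min_factor whose product is `remaining`.
            if remaining == 1:
                return 0
            count = 1 if remaining >= min_factor else float("-inf")
            for d in range(min_factor, isqrt(remaining) + 1):
                if remaining % d == 0:
                    count = max(count, 1 + best(remaining // d, d))
            return count

        return best(factorial(N), N)

    for N in range(2, 10):
        print(N, t(N))   # e.g. t(4) = 2, since 4! = 4 * 6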

On March 30th, mathematical enthusiasts celebrated facts related to the number 89. Eighty-nine is a Fibonacci prime, and a Fibonacci-like pattern emerges in the digits of its reciprocal. The number 89 can also be obtained by summing the first 5 integers raised to the power of the first 5 Fibonacci numbers. In addition, 89 is related to Armstrong numbers, which are numbers equal to the sum of their digits each raised to the power of the number of digits.
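
One way to see the reciprocal pattern (a quick illustration of my own, not from the original post): the identity 1/89 = F(1)/10^2 + F(2)/10^3 + F(3)/10^4 + ..., where F(n) are the Fibonacci numbers, can be checked numerically:

    from decimal import Decimal, getcontext

    getcontext().prec = 30
    print(Decimal(1) / Decimal(89))   # 0.0112359550561797752808988764045...

    # Partial sums of F(n) / 10^(n+1) converge to the same value.
    fib_sum, a, b = Decimal(0), 0, 1
    for n in range(1, 25):            # 24 terms is an arbitrary cutoff
        fib_sum += Decimal(b) / Decimal(10) ** (n + 1)
        a, b = b, a + b
    print(fib_sum)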

Recommended read:
References :
  • beuke.org: Profunctor optics are a modern, category-theoretic generalization of optics – bidirectional data accessors used to focus on and update parts of a data structure.
  • What's new: I've just uploaded to the arXiv the paper "Decomposing a factorial into large factors". This paper studies the quantity t(N), defined as the largest quantity such that it is possible to factorize N! into t(N) factors, each of which is at least N.

Tom Bridges@blogs.surrey.ac.uk //
Recent breakthroughs are pushing the boundaries of quantum theory and quantum randomness, paving the way for commercial applications and more reliable quantum technologies. A paper by Dorje Brody, written with collaborators Eva-Maria Graefe and Rishindra Melanathuru, has been published in Physical Review Letters. Their work addresses the question of decoherence that results when the environment monitors position and momentum together, i.e., performs a phase-space measurement.

Researchers have also made strides in protecting quantum information from environmental disruptions, offering hope for more stable quantum computers and networks. Scientists have demonstrated how certain quantum states can maintain their critical information even when disturbed by environmental noise. This could lead to more reliable quantum technology, enhanced medical imaging techniques, improved AI-driven diagnostics, and stronger data security.

Simultaneously, a joint research team consisting of members from JPMorgan Chase, Quantinuum, multiple national labs, and UT Austin, has achieved certified quantum randomness, turning once theoretical experiments into first commercial applications for quantum computing. The team demonstrated a certified randomness protocol using Quantinuum's 56-qubit H2 trapped-ion system, showcasing a quantum computer's ability to generate entropy beyond classical reach. Furthermore, the high cost of quantum randomness is dropping due to advancements in pseudorandomness techniques, which may open new doors for quantum computing and cryptography research.

Recommended read:
References :
  • blogs.surrey.ac.uk: Paper of Dorje Brody on quantum theory is published in Physical Review Letters
  • The Quantum Insider: Joint Research Team Achieves Certified Quantum Randomness, Turns Once Theoretical Experiments Into First Commercial Applications For Quantum Computing
  • Quanta Magazine: The High Cost of Quantum Randomness Is Dropping

Webb Wright@Quanta Magazine //
Researchers are making significant strides in reducing the costs associated with quantum randomness, a crucial element for cryptography and simulations. Traditionally, obtaining true quantum randomness has been complex and expensive. However, the exploration of "pseudorandomness" offers a practical alternative, allowing researchers to utilize computational algorithms that mimic randomness, thus sidestepping the high costs of pure quantum randomness. This development broadens the accessibility of randomness, enabling researchers to pursue new scientific investigations.

The team from JPMorganChase, Quantinuum, multiple national labs, and UT Austin demonstrated a certified quantum randomness protocol. They showcased the first successful demonstration of a quantum computing method to generate certified randomness. Using a 56-qubit quantum machine, they output more randomness than they initially put in. What makes this truly remarkable is that this feat is considered impossible for even the most powerful classical supercomputers. This groundbreaking achievement could open new doors for quantum computing and cryptography research.

Recommended read:
References :
  • The Quantum Insider: Joint Research Team Achieves Certified Quantum Randomness, Turns Once Theoretical Experiments Into First Commercial Applications For Quantum Computing
  • Quanta Magazine: The High Cost of Quantum Randomness Is Dropping
  • hetarahulpatel.medium.com: Random Numbers Just Got Real, Thanks to Quantum Magic!

Ellie Ramirez-Camara@Data Phoenix //
References: RunPod Blog, Data Phoenix, eWEEK ...
The ARC Prize Foundation has launched ARC-AGI-2, a new AI benchmark designed to challenge current foundation models and track progress towards artificial general intelligence (AGI). Building on the original ARC benchmark, ARC-AGI-2 blocks brute force techniques and introduces new tasks intended for next-generation AI systems. The goal is to evaluate real progress toward AGI by requiring models to reason abstractly, generalize from few examples, and apply knowledge in new contexts, tasks that are simple for humans but difficult for machines.

The Foundation has also announced the ARC Prize 2025, a competition running from March 26 to November 3, with a grand prize of $700,000 for a solution achieving an 85% score on the ARC-AGI-2 benchmark's private evaluation dataset. Early testing results show that even OpenAI's top models experienced a significant performance drop, with o3 falling from 75% to approximately 4% on ARC-AGI-2. This highlights how the new benchmark significantly raises the bar for AI tests, measuring general fluid intelligence rather than memorized skills.

Recommended read:
References :
  • RunPod Blog: The race toward artificial general intelligence isn't just happening behind closed doors at trillion-dollar tech companies. It's also unfolding in the open—in research labs, Discord servers, GitHub repos, and competitions like the ARC Prize. This year, the ARC Prize Foundation is back with ARC-AGI-2
  • Data Phoenix: The ARC Prize Foundation has officially released the ARC-AGI-2 to challenge current foundation models and help track progress towards AGI. Additionally, the Foundation has opened the ARC Prize 2025, running from Mar 26 to Nov 3, with a $700K Grand Prize for an 85% scoring solution on the ARC-AGI-2.
  • THE DECODER: The new AI benchmark ARC-AGI-2 significantly raises the bar for AI tests. While humans can easily solve the tasks, even highly developed AI systems such as OpenAI o3 clearly fail.
  • eWEEK: The newest AI benchmark, ARC-AGI-2, builds on the first iteration by blocking brute force techniques and designing new tasks for next-gen AI systems.

Matt Marshall@AI News | VentureBeat //
Microsoft is enhancing its Copilot Studio platform with AI-driven improvements, introducing deep reasoning capabilities that enable agents to tackle intricate problems through methodical thinking, combining AI flexibility with deterministic business process automation. The company has also unveiled specialized deep reasoning agents for Microsoft 365 Copilot, named Researcher and Analyst, to help users achieve tasks more efficiently. These agents are designed to function like personal data scientists, processing diverse data sources and generating insights through code execution and visualization.

Microsoft's focus includes securing AI and using it to bolster security measures, as demonstrated by the upcoming Microsoft Security Copilot agents and new security features. Microsoft aims to provide an AI-first, end-to-end security platform that helps organizations secure their future, one example being the AI agents designed to autonomously assist with phishing, data security, and identity management. The Security Copilot tool will automate routine tasks, allowing IT and security staff to focus on more complex issues, aiding in defense against cyberattacks.

Recommended read:
References :
  • Microsoft Security Blog: Learn about the upcoming availability of Microsoft Security Copilot agents and other new offerings for a more secure AI future.
  • www.zdnet.com: Designed for Microsoft's Security Copilot tool, the AI-powered agents will automate basic tasks, freeing IT and security staff to tackle more complex issues.

Amir Najmi@unofficialgoogledatascience.com //
References: medium.com, medium.com, medium.com ...
Data scientists and statisticians are continuously exploring methods to refine data analysis and modeling. A recent blog post from Google details a project focused on quantifying the statistical skills necessary for data scientists within their organization, aiming to clarify job descriptions and address ambiguities in assessing practical data science abilities. The authors, David Mease and Amir Najmi, leveraged their extensive experience conducting over 600 interviews at Google to identify crucial statistical expertise required for the "Data Scientist - Research" role.

Statistical testing remains a cornerstone of data analysis, guiding analysts in transforming raw numbers into actionable insights. One must also keep in mind the bias-variance tradeoff and how to choose the right statistical test to ensure the validity of analyses. These tools are critical for both traditional statistical roles and the evolving field of AI/ML, where responsible practices are paramount, as highlighted in discussions about the relevance of statistical controversies to ethical AI/ML development at an AI ethics conference on March 8.
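
As an illustration of the bias-variance tradeoff mentioned above (a standalone sketch, not taken from the Google post), comparing under- and over-parameterized polynomial fits with scikit-learn:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
    X = x.reshape(-1, 1)

    for degree in (1, 4, 15):  # underfit (high bias), reasonable, overfit (high variance)
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        mse = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        print(f"degree={degree:2d}  cross-validated MSE={mse:.3f}")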

Recommended read:
References :
  • medium.com: Data Science: Bias-Variance Tradeoff
  • medium.com: Six Essential Statistics Concepts Every Data Scientist Should Know
  • www.unofficialgoogledatascience.com: Quantifying the statistical skills needed to be a Google Data Scientist
  • medium.com: These are the best Udemy Courses you can join to learn Mathematics and statistics in 2025
  • medium.com: Python by Examples: Quantifying Predictor Informativeness in Statistical Forecasting

Stephen Ornes@Quanta Magazine //
References: Quanta Magazine, medium.com
A novel quantum algorithm has demonstrated a speedup over classical computers for a significant class of optimization problems, according to a recent report. This breakthrough could represent a major advancement in harnessing the potential of quantum computers, which have long promised faster solutions to complex computational challenges. The new algorithm, known as decoded quantum interferometry (DQI), outperforms all known classical algorithms in finding good solutions to a wide range of optimization problems, which involve searching for the best possible solution from a vast number of choices.

Classical researchers have been struggling to keep up with this quantum advancement. Reports of quantum algorithms often spark excitement, partly because they can offer new perspectives on difficult problems. The DQI algorithm is considered a "breakthrough in quantum algorithms" by Gil Kalai, a mathematician at Reichman University. While quantum computers have generated considerable buzz, it has been challenging to identify specific problems where they can significantly outperform classical machines. This new algorithm demonstrates the potential for quantum computers to excel in optimization tasks, a development that could have broad implications across various fields.

Recommended read:
References :
  • Quanta Magazine: Quantum computers can answer questions faster than classical machines. A new algorithm appears to do it for some critical optimization tasks.
  • medium.com: How Qubits Are Rewriting the Rules of Computation

Maximilian Schreiner@THE DECODER //
Google's Gemini 2.5 Pro is making waves as a top-tier reasoning model, marking a leap forward in Google's AI capabilities. Released recently, it's already garnering attention from enterprise technical decision-makers, especially those who have traditionally relied on OpenAI or Claude for production-grade reasoning. Early experiments, benchmark data, and developer reactions suggest Gemini 2.5 Pro is worth serious consideration.

Gemini 2.5 Pro distinguishes itself with its transparent, structured reasoning. Google's step-by-step training approach results in a structured chain of thought that provides clarity. The model presents ideas in numbered steps, with sub-bullets and internal logic that's remarkably coherent and transparent. This breakthrough offers greater trust and steerability, enabling enterprise users to validate, correct, or redirect the model with more confidence when evaluating output for critical tasks.

Recommended read:
References :
  • SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
  • The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
  • AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
  • Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
  • www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
  • THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
  • intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it is the best on several reasoning and coding benchmarks.
  • The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
  • Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
  • The Official Google Blog: Gemini 2.5: Our most intelligent AI model
  • www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
  • bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
  • Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
  • bdtechtalks.com: What to know about Google Gemini 2.5 Pro
  • Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
  • www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
  • www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
  • Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
  • TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
  • Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
  • AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
  • Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
  • Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
  • Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
  • www.producthunt.com: Google's most intelligent AI model
  • Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
  • AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
  • thezvi.wordpress.com: Gemini 2.5 Pro Experimental is America’s next top large language model. That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of …
  • www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
  • www.infoworld.com: Google introduces Gemini 2.5 reasoning models
  • Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
  • www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
  • AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
  • Communications of the ACM: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • The Next Web: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • www.tomsguide.com: Surprise move comes just days after Gemini 2.5 Pro Experimental arrived for Advanced subscribers.
  • Composio: Google just launched Gemini 2.5 Pro on March 26th, claiming to be the best in coding, reasoning and overall everything.
  • Composio: Google's Gemini 2.5 Pro, released on March 26th, is being hailed for its enhanced reasoning, coding, and multimodal capabilities.
  • Analytics India Magazine: Gemini 2.5 Pro is better than the Claude 3.7 Sonnet for coding in the Aider Polyglot leaderboard.
  • www.zdnet.com: Gemini's latest model outperforms OpenAI's o3 mini and Anthropic's Claude 3.7 Sonnet on the latest benchmarks.
  • www.marketingaiinstitute.com: [The AI Show Episode 142]: ChatGPT’s New Image Generator, Studio Ghibli Craze and Backlash, Gemini 2.5, OpenAI Academy, 4o Updates, Vibe Marketing & xAI Acquires X
  • www.tomsguide.com: Gemini 2.5 is free, but can it beat DeepSeek?
  • www.tomsguide.com: Google Gemini could soon help your kids with their homework — here’s what we know
  • PCWorld: Google’s latest Gemini 2.5 Pro AI model is now free for all users

Tom Bridges@blogs.surrey.ac.uk //
Recent activity in the mathematical community has highlighted the enduring fascination with mathematical constants and visual representations of mathematical concepts. A blog post on March 23, 2025, discussed a remarkably accurate approximation for pi, noting that π ≈ 3 log(640320) / √163 is exact within the limits of floating-point arithmetic, achieving accuracy to 15 decimal places. This discovery builds upon historical efforts to approximate pi, from ancient Babylonian and Egyptian calculations to Archimedes' method of exhaustion and the achievements of Chinese mathematicians like Liu Hui and Zu Chongzhi.
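
A quick numerical check of that approximation (here log denotes the natural logarithm; this snippet is my own, not from the cited blog post):

    import math

    approx = 3 * math.log(640320) / math.sqrt(163)   # natural log
    print(f"{approx:.17f}")
    print(f"{math.pi:.17f}")
    print(f"difference: {abs(approx - math.pi):.2e}")  # well below 1e-15 in double precision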

Visual insights in mathematics continue to be explored. A blog called Visual Insight shares striking images that help explain topics in mathematics. The creator gave a talk about it at the Illustrating Math Seminar. The blog features images created by people such as Refurio Anachro, Greg Egan, and Roice Nelson, and individual articles are available on the AMS website.

Recommended read:
References :
  • blogs.surrey.ac.uk: Details on a mathematical paper on data-driven modeling.
  • denisegaskins.com: Blog post on various mathematical topics.
  • medium.com: Refining Quantum Uncertainty Mathematics (QUM): A Structured Approach
  • medium.com: Is Mathematics a Designed System? Exploring Its Origins and Implications

Denise Gaskins@denisegaskins.com //
References: phys.org, Math Blog
Recent studies and educational resources are focusing on enhancing math education through innovative approaches. Denise Gaskins' "Let's Play Math" blog offers resources for families to learn and enjoy math together, including playful math books and internet resources suitable for various age groups. Math journaling and games have been highlighted as effective tools to engage students, promote problem-solving skills, and foster a richer mathematical mindset.

Numerous games and activities can make learning fun. For instance, "Make a Square" is a game that builds 2-D visualization skills and strategic thinking, and quick number games can be played anywhere. The divisibility rules for numbers, particularly divisibility by 2, are being emphasized to help students easily identify even and odd numbers. A megastudy also revealed that behaviorally informed email messages improved students' math progress, demonstrating how simple interventions can positively impact learning outcomes.

Recommended read:
References :
  • phys.org: Megastudy finds a simple way to boost math progress
  • Math Blog: Mar 20, 5th Grade Even and Odd Numbers | Definitions | Examples

Yvonne Smit@Qusoft //
References: Qusoft
Koen Groenland's book, "Introduction to Quantum Computing for Business," is gaining attention as a key resource for guiding companies on leveraging quantum advancements. As the Dutch quantum ecosystem expands, experts like Groenland are playing a vital role in making quantum knowledge accessible to the business world. The book aims to demystify this technology for business professionals without a technical background, focusing on the capabilities and applications of quantum computers rather than the underlying technical details. Groenland hopes the book will become a standard work for anyone starting a quantum journey, emphasizing the importance of understanding quantum algorithms for business value.

Classiq Technologies, in collaboration with Sumitomo Corporation and Mizuho-DL Financial Technology, achieved significant compression of quantum circuits for Monte Carlo simulations used in financial risk analysis. The study compared traditional and pseudo-random number-based quantum Monte Carlo methods, optimizing circuit depth and qubit usage using Classiq’s high-level quantum design platform, Qmod. The results showed efficient circuit compression is possible without compromising accuracy, supporting the feasibility of scalable, noise-tolerant quantum applications in financial risk management.

The Open Source Initiative (OSI) and Apereo Foundation have jointly responded to the White House Office of Science & Technology Policy's (OSTP) request for information on an AI Action Plan. Their comment emphasizes the benefits of Open Source and positions the Open Source community as a valuable resource for policymakers. The OSI highlighted its history of stewarding the Open Source Definition and its recent work in co-developing the Open Source AI Definition (OSAID), recommending that the White House rely on the OSAID as a foundational piece of any future AI Action Plan.

Recommended read:
References :
  • Qusoft: Koen Groenland's book, Introduction to Quantum Computing for Business, is discussed as a step toward guiding companies on how to leverage quantum advancements.

@The Cryptography Caffe? ? //
The UK's National Cyber Security Centre (NCSC) has released a roadmap for transitioning to post-quantum cryptography (PQC), establishing key dates for organizations to assess risks, define strategies, and fully transition by 2035. This initiative aims to mitigate the future threat of quantum computers, which could potentially break today's widely used encryption methods. The NCSC’s guidance recognizes that PQC migration is a complex and lengthy process requiring significant planning and investment.

By 2028, organizations are expected to complete a discovery phase, identifying systems and services reliant on cryptography that need upgrades, and draft a migration plan. High-priority migration activities should be completed by 2031, with infrastructure prepared for a full transition. The NCSC emphasizes that these steps are essential for addressing quantum threats and improving overall cyber resilience. Ali El Kaafarani, CEO of PQShield, noted that these timelines give clear instructions to protect the UK’s digital future.
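
For teams starting the discovery phase, the kind of primitive involved in PQC migration is a post-quantum key encapsulation mechanism such as Kyber (ML-KEM). A minimal sketch, assuming the liboqs-python bindings (the `oqs` package) and a build that exposes the Kyber512 mechanism; exact mechanism names may differ across liboqs versions:

    import oqs

    kem_alg = "Kyber512"  # may appear as "ML-KEM-512" in newer liboqs builds
    with oqs.KeyEncapsulation(kem_alg) as client, oqs.KeyEncapsulation(kem_alg) as server:
        public_key = client.generate_keypair()                  # client keeps its secret key internally
        ciphertext, server_secret = server.encap_secret(public_key)
        client_secret = client.decap_secret(ciphertext)
        assert client_secret == server_secret                   # both sides now share the same secret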

Researchers have also introduced ZKPyTorch, a compiler that integrates ML frameworks with ZKP engines to simplify the development of zero-knowledge machine learning (ZKML). ZKPyTorch automates the translation of ML operations into optimized ZKP circuits and improves proof generation efficiency. Through case studies, ZKPyTorch successfully converted VGG-16 and Llama-3 models into ZKP-compatible circuits.

Recommended read:
References :
  • The Quantum Insider: UK Sets Timeline, Road Map for Post-Quantum Cryptography Migration
  • The Register - Security: The post-quantum cryptography apocalypse will be televised in 10 years, says UK's NCSC
  • Dhole Moments: Post-Quantum Cryptography Is About The Keys You Don’t Play
  • IACR News: ePrint Report: An Optimized Instantiation of Post-Quantum MQTT Protocol on 8-bit AVR Sensor Nodes (YoungBeom Kim, Seog Chung Seo). The paper introduces KEM-MQTT, a lightweight key encapsulation mechanism for the MQTT IoT protocol built on the NIST-standardized Kyber algorithm and KEMTLS, targeting resource-constrained 8-bit AVR microcontrollers. The optimized Kyber implementation achieves 81%, 75%, and 85% speedups in KeyGen, Encaps, and Decaps over the reference implementation with roughly 3 KB of stack usage, and an 8-bit AVR device completes KEM-MQTT handshake preparation with Kyber-512 in 4.32 seconds, excluding transmission and reception times.

Editor-In-Chief, BitDegree@bitdegree.org //
A new, fully AI-driven weather prediction system called Aardvark Weather is making waves in the field. Developed through an international collaboration including researchers from the University of Cambridge, Alan Turing Institute, Microsoft Research, and the European Centre for Medium-Range Weather Forecasts (ECMWF), Aardvark Weather uses a deep learning architecture to process observational data and generate high-resolution forecasts. The model is designed to ingest data directly from observational sources, such as weather stations and satellites.

This innovative system stands out because it can run on a single desktop computer, generating forecasts tens of times faster than traditional systems and requiring thousands of times less computing power. While traditional weather forecasting relies on Numerical Weather Prediction (NWP) models that use physics-based equations and vast computational resources, Aardvark Weather replaces all stages of this process with a streamlined machine learning model. According to researchers, Aardvark Weather can generate a forecast in seconds or minutes, using only about 10% of the weather data required by current forecasting systems.
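
The shift Aardvark represents is from physics-based simulation to a model trained to map raw observations directly to forecasts. A deliberately tiny, synthetic illustration of that end-to-end idea (a linear model on made-up data; in no way Aardvark's architecture):

    import numpy as np
    from sklearn.linear_model import Ridge

    # Toy "data-driven forecaster": predict tomorrow's temperature anomaly at a
    # station directly from today's raw observation channels, with no physics model.
    rng = np.random.default_rng(1)
    n_days, n_obs = 1000, 20
    X = rng.normal(size=(n_days, n_obs))                  # today's observations
    true_w = rng.normal(size=n_obs)
    y = X @ true_w + rng.normal(scale=0.5, size=n_days)   # tomorrow's target

    model = Ridge(alpha=1.0).fit(X[:800], y[:800])
    print("held-out R^2:", round(model.score(X[800:], y[800:]), 3))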

Recommended read:
References :
  • www.computerworld.com: The AI system achieved this by replacing the entire process of weather forecasting with a single machine-learning model; it can take in observations from satellites, weather stations and other sensors and then generate both global and local forecasts.
  • www.livescience.com: New AI is better at weather prediction than supercomputers — and it consumes 1000s of times less energy
  • www.newscientist.com: AI can forecast the weather in seconds without needing supercomputers
  • The Register - Software: PC-size ML prediction model predicted to be as good as a super at fraction of the cost Aardvark, a novel machine learning-based weather prediction system, teases a future where supercomputers are optional for forecasting - but don't pull the plug just yet.
  • AIwire: Fully AI-Driven System Signals a New Era in Weather Forecasting
  • eWEEK: New AI Weather Forecasting Model is ‘Thousands of Times Faster’ Than Previous Methods
  • bsky.app: An #AI based weather forecasting system that is much faster than traditional approaches:
  • NVIDIA Technical Blog: From hyperlocal forecasts that guide daily operations to planet-scale models illuminating new climate insights, the world is entering a new frontier in weather...
  • I Learnt: DIY weather prediction and strategy selection
  • www.bitdegree.org: A new artificial intelligence (AI) based tool called Aardvark Weather is offering a different way to predict weather across the globe.

staff@insidehpc.com //
Nvidia CEO Jensen Huang has publicly walked back previous comments made in January, where he expressed skepticism regarding the timeline for quantum computers becoming practically useful. Huang apologized for his earlier statements, which caused a drop in stock prices for quantum computing companies. During the recent Nvidia GTC 2025 conference in San Jose, Huang admitted his misjudgment and highlighted ongoing advancements in the field, attributing his initial doubts to his background in traditional computer systems development. He expressed surprise that his comments had such a significant impact on the market, joking about the public listing of quantum computing firms.

SEEQC and Nvidia announced a significant breakthrough at the conference, demonstrating a fully digital quantum-classical interface protocol between a Quantum Processing Unit (QPU) and a Graphics Processing Unit (GPU). This interface is designed to facilitate ultra-low latency and bandwidth-efficient quantum error correction. Furthermore, Nvidia is enhancing its support for quantum research with the CUDA-Q platform, designed to streamline the development of hybrid, accelerated quantum supercomputers. CUDA-Q performance can now be pushed further than ever with v0.10 support for the NVIDIA GB200 NVL72.

Recommended read:
References :
  • NVIDIA Technical Blog: The NVIDIA CUDA-Q platform is designed to streamline software and hardware development for hybrid, accelerated quantum supercomputers.
  • insidehpc.com: During quantum day at Nvidia's GTC 2025 conference in San Jose, SEEQC and NVIDIA announced they have completed an end-to-end fully digital quantum-classical interface protocol demo between a QPU and GPU.
  • OODAloop: Nvidia CEO Huang on Thursday walked back comments he made in January, when he cast doubt on whether useful quantum computers would hit the market in the next 15 years.
  • The Tech Basic: Nvidia CEO Jensen Huang apologized for comments he made earlier this year that caused stock prices of quantum computing companies to plunge.

Charlie Wood@Quanta Magazine //
Recent data from the Dark Energy Spectroscopic Instrument (DESI) suggests that dark energy, the mysterious force driving the accelerating expansion of the universe, may be weakening over time. This challenges the standard model of cosmology, which assumes dark energy has a constant density and pressure. Researchers, including Seshadri Nadathur from the DESI collaboration, have analyzed significantly more data than in previous studies, strengthening the conclusion that the engine driving cosmic expansion might be sputtering.

The findings are also supported by evidence from the Dark Energy Survey (DES), which also observed a vast expanse of the cosmos and reported indications of varying dark energy. Miguel Zumalacárregui notes that Euclid's capabilities could better determine the universe's expansion rate through gravitational-wave observations. If confirmed, this would rewrite our understanding of the universe's fate, potentially leading to alternative scenarios beyond the current model of endless expansion and eventual cosmic emptiness.

Recommended read:
References :

Unknown (noreply@blogger.com)@Pat'sBlog //
References: Pat'sBlog
Recent discussions have highlighted the diverse applications and historical roots of mathematics. A blog post explored the history of mathematical terms such as billion, trillion, and others, tracing their origins back to figures like Nicholas Chuquet, a French physician from the 15th century. The evolution of these terms and their varying definitions across different countries demonstrate the rich history and changing conventions within mathematical nomenclature. This information has recently resurfaced in a post from earlier this year.

Alongside the history of math, practical applications are being discussed. For example, recent word problems are now available that focus on division, suitable for fourth-grade students. The step-by-step solutions for problems involving dividing quantities among groups can help students improve their comprehension of division and problem solving. Mathematics also continues to be the basis for many algorithms in modern technological applications, even though it is not widely recognized as a science.

Recommended read:
References :
  • Pat'sBlog: This blog post discusses the history of terms for large numbers such as billion, trillion, etc.

Matt Swayne@The Quantum Insider //
D-Wave Quantum Inc. has made a splash by claiming its Advantage2 annealing quantum computer achieved quantum supremacy in complex materials simulations, publishing their study in the journal Science. The company states that its system can perform simulations in minutes that would take the Frontier supercomputer nearly a million years and consume more than the world’s annual electricity consumption. According to D-Wave CEO Alan Baratz, this achievement validates quantum annealing's practical advantage and represents a major milestone in quantum computational supremacy and materials discovery.

However, D-Wave's claim has faced criticism, with researchers suggesting that classical algorithms can rival or even exceed quantum methods in these simulations. Some researchers say that they performed similar calculations on a normal laptop in just two hours. Concerns have been raised about the real-world applicability and practical benefits of D-Wave's quantum supremacy claims in computational tasks. Despite the criticisms, D-Wave is standing by the claims from the study.

Recommended read:
References :