Top Mathematics discussions

NishMath

@github.com //
A critical security vulnerability has been discovered in OpenPGP.js, a widely used JavaScript library that implements the OpenPGP standard for email and data encryption. Tracked as CVE-2025-47934, the flaw allows attackers to spoof both signed and encrypted messages, undermining the trust inherent in public key cryptography. Security researchers Edoardo Geraci and Thomas Rinsma of Codean Labs traced the vulnerability to the `openpgp.verify` and `openpgp.decrypt` functions.

The vulnerability impacts versions 5.0.1 to 5.11.2 and 6.0.0-alpha.0 to 6.1.0 of the OpenPGP.js library. According to an advisory posted on the library's GitHub repository, a maliciously modified message can be passed to one of these functions, and the function may return a result indicating a valid signature, even if the message has not been legitimately signed. This flaw affects both inline signed messages and signed-and-encrypted messages. The advisory also states that to spoof a message, an attacker needs a single valid message signature along with the plaintext data that was legitimately signed. They can then construct a fake message that appears legitimately signed.

Users are strongly advised to upgrade to version 5.11.3 or 6.1.1 as soon as possible to mitigate the risk; the 4.x versions are not affected. While a full write-up and proof-of-concept exploit are expected to be released soon, the current advisory offers enough detail to convey the severity of the issue: the affected functions report a valid signature without properly verifying it, leaving users open to having signed and encrypted messages spoofed.

Recommended read:
References :
  • The Register - Software: Freshly discovered bug in OpenPGP.js undermines whole point of encrypted comms
  • thecyberexpress.com: A flaw has been discovered in OpenPGP.js, a widely used JavaScript library for OpenPGP encryption. Tracked as CVE-2025-47934, the vulnerability allows threat actors to spoof both signed and encrypted messages, effectively undermining the very foundation of trust in public key cryptography.
  • Security Affairs: A critical flaw in OpenPGP.js, tracked as CVE-2025-47934, lets attackers spoof message signatures; updates have been released to address the flaw. OpenPGP.js is an open-source JavaScript library that implements the OpenPGP standard for email and data encryption.
  • www.csoonline.com: Critical flaw in OpenPGP.js raises alarms for encrypted email services
  • www.techradar.com: Researchers found a bug that allowed malicious actors to spoof messages. Users are advised to patch up.

@medium.com //
DeepSeek, a Chinese AI unicorn, has released DeepSeek-R1-0528, a significant update to its R1 reasoning model. This new release aims to enhance the model's capabilities in mathematics, programming, and general logical reasoning, positioning it as a formidable open-source alternative to leading proprietary models like OpenAI's o3 and Google's Gemini 2.5 Pro. The updated model is available on Hugging Face under the MIT license, promoting transparency and accessibility in AI development.

The R1-0528 update showcases improved reasoning depth and inference accuracy. Its performance on the AIME 2025 math benchmark has increased significantly, jumping from 70% to 87.5%. This indicates a deeper reasoning process, averaging 23,000 tokens per question, up from 12,000 in the previous version. These enhancements are attributed to increased computational resources and algorithmic optimizations during post-training. Additionally, the model exhibits improved performance in code generation tasks, ranking just below OpenAI's o4 mini and o3 models on LiveCodeBench benchmarks, and outperforming xAI's Grok 3 mini and Alibaba's Qwen 3.

DeepSeek has also released a distilled version of R1-0528, named DeepSeek-R1-0528-Qwen3-8B. This lightweight model, fine-tuned from Alibaba’s Qwen3-8B, achieves state-of-the-art performance among open-source models on the AIME 2024 benchmark and is designed to run efficiently on a single GPU. DeepSeek’s API currently costs $0.14 per 1 million input tokens during regular hours (8:30 pm to 12:30 pm), dropping to $0.035 during discount hours; output is consistently priced at $2.19 per 1 million tokens.

Recommended read:
References :
  • pub.towardsai.net: DeepSeek R1 : Is It Right For You? (A Practical Self‑Assessment for Businesses and Individuals)
  • AI News | VentureBeat: DeepSeek R1-0528 arrives in powerful open source challenge to OpenAI o3 and Google Gemini 2.5 Pro
  • Analytics Vidhya: New Deepseek R1-0528 Update is INSANE
  • Kyle Wiggers: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
  • MacStories: Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS
  • Kyle Wiggers: DeepSeek’s updated R1 AI model is more censored, test finds
  • www.marktechpost.com: DeepSeek Releases R1-0528: An Open-Source Reasoning AI Model Delivering Enhanced Math and Code Performance with Single-GPU Efficiency
  • NextBigFuture.com: DeepSeek New Deepseek-R1 Model is Competitive With OpenAI O3 and Gemini 2.5 Pro
  • MarkTechPost: Information about DeepSeek's R1-0528 model and its enhancements in math and code performance.

@Google DeepMind Blog //
Google DeepMind has introduced AlphaEvolve, a revolutionary AI coding agent designed to autonomously discover innovative algorithms and scientific solutions. This groundbreaking research, detailed in the paper "AlphaEvolve: A Coding Agent for Scientific and Algorithmic Discovery," represents a significant step towards achieving Artificial General Intelligence (AGI) and potentially even Artificial Superintelligence (ASI). AlphaEvolve distinguishes itself through its evolutionary approach, where it autonomously generates, evaluates, and refines code across generations, rather than relying on static fine-tuning or human-labeled datasets. AlphaEvolve combines Google’s Gemini Flash and Gemini Pro models with automated evaluation metrics.

AlphaEvolve operates using an evolutionary pipeline powered by large language models (LLMs). This pipeline doesn't just generate outputs—it mutates, evaluates, selects, and improves code across generations. The system begins with an initial program and iteratively refines it by introducing carefully structured changes. These changes take the form of LLM-generated diffs—code modifications suggested by a language model based on prior examples and explicit instructions. A diff in software engineering refers to the difference between two versions of a file, typically highlighting lines to be removed or replaced.
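
As a rough illustration of that loop (not DeepMind's implementation: the program representation, mutation operator, and evaluator below are toy stand-ins), the mutate-evaluate-select cycle can be sketched in a few lines of Python:

```python
import random

def propose_diff(program: str) -> str:
    """Stand-in for the LLM call: in AlphaEvolve a Gemini model proposes a
    code diff; here we just perturb a numeric constant in a tiny 'program'."""
    value = float(program.split("= ")[1])
    return f"x = {round(value + random.uniform(-0.2, 0.2), 3)}"

def evaluate(program: str) -> float:
    """Stand-in for the automated evaluator: higher scores are better."""
    value = float(program.split("= ")[1])
    return -abs(value - 1.23)  # toy objective: drive the constant toward 1.23

population = ["x = 1.0"]
for generation in range(50):
    # Mutate, evaluate, select: candidates compete, the fittest survive.
    candidates = population + [propose_diff(p) for p in population]
    population = sorted(candidates, key=evaluate, reverse=True)[:3]

print("best program found:", population[0])
```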

Google's AlphaEvolve is not merely another code generator, but a system that generates and evolves code, allowing it to discover new algorithms. This innovation has already demonstrated its potential by shattering a 56-year-old record in matrix multiplication, a core component of many machine learning workloads. Additionally, AlphaEvolve has reclaimed 0.7% of compute capacity across Google's global data centers, showcasing its efficiency and cost-effectiveness. AlphaEvolve can be imagined as a genetic algorithm coupled to a large language model.

Recommended read:
References :
  • LearnAI: Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer
  • The Next Web: Article on The Next Web describing feats of DeepMind’s AI coding agent AlphaEvolve.
  • Towards Data Science: A blend of LLMs' creative generation capabilities with genetic algorithms
  • www.unite.ai: Google DeepMind has unveiled AlphaEvolve, an evolutionary coding agent designed to autonomously discover novel algorithms and scientific solutions. Presented in the paper titled “AlphaEvolve: A Coding Agent for Scientific and Algorithmic Discovery,” this research represents a foundational step toward Artificial General Intelligence (AGI) and even Artificial Superintelligence (ASI).
  • learn.aisingapore.org: AlphaEvolve imagined as a genetic algorithm coupled to a large language model. Models have undeniably revolutionized how many of us approach coding, but they’re often more like a super-powered intern than a seasoned architect.
  • AI News | VentureBeat: Google's AlphaEvolve is the epitome of a best-practice AI agent orchestration. It offers a lesson in production-grade agent engineering. Discover its architecture & essential takeaways for your enterprise AI strategy.
  • Unite.AI: Google DeepMind has unveiled AlphaEvolve, an evolutionary coding agent designed to autonomously discover novel algorithms and scientific solutions.
  • Last Week in AI: DeepMind introduced Alpha Evolve, a new coding agent designed for scientific and algorithmic discovery, showing improvements in automated code generation and efficiency.
  • venturebeat.com: VentureBeat article about Google DeepMind's AlphaEvolve system.

@phys.org //
A mathematician from UNSW Sydney has made a significant breakthrough in addressing a longstanding problem in algebra: solving higher polynomial equations. Honorary Professor Norman Wildberger has developed a novel method that utilizes intriguing number sequences to tackle equations where variables are raised to the power of five or higher, a challenge that has eluded mathematicians for centuries. The findings are outlined in a recent publication co-authored with computer scientist Dr. Dean Rubine, and could lead to advancements in various mathematical disciplines. This new approach has the potential to reshape mathematical problem-solving techniques.

Professor Wildberger's approach challenges traditional methods that rely on radicals, which often represent irrational numbers—decimals that extend infinitely without repeating. He argues that these irrational numbers introduce imprecision, and that a real answer to a polynomial equation can never be completely calculated because it would need an infinite amount of work. The mathematician suggests this solution "reopens a previously closed book in mathematics history."

Prior to this discovery, French mathematician Évariste Galois demonstrated in 1832 that it is impossible to solve higher polynomial equations with a general formula akin to the quadratic formula. Approximate solutions for higher-degree polynomials have long been used, but they are not pure algebra. Wildberger's rejection of radicals has led to a new method for solving this centuries-old problem.

Recommended read:
References :
  • phys.org: Mathematician solves algebra's oldest problem using intriguing new number sequences
  • www.sciencedaily.com: Mathematician solves algebra's oldest problem using intriguing new number sequences
  • John Carlos Baez: Blog post on 2025/05/04 that contains a headline: Mathematician solves algebra’s oldest problem using intriguing new number sequences.
  • www.eurekalert.org: Headline: "Mathematician solves algebra’s oldest problem using intriguing new number sequences." 😮 In the article: "So, when we assume ∛7 'exists' in a formula, we’re assuming that this infinite, never-ending decimal is somehow a complete object. This is why, Prof. Wildberger says, he “doesn’t believe in irrational numbers."
  • www.techexplorist.com: Mathematician solves algebra’s oldest problem
  • Tech Explorist: New approach using novel number sequence.

@www.marktechpost.com //
MIT researchers are making significant strides in artificial intelligence, focusing on enhancing AI's ability to learn and interact with the world more naturally. One project involves developing AI models that can learn connections between vision and sound without human intervention. This innovative approach aims to mimic how humans learn, by associating what they see with what they hear. The model could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.

The new machine-learning model can pinpoint exactly where a particular sound occurs in a video clip, eliminating the need for manual labeling. By adjusting how the original model is trained, it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like the sound of a roller coaster in action or an airplane taking flight. Researchers also made architectural tweaks that help the system balance two distinct learning objectives, which improves performance.

Additionally, researchers from the National University of Singapore have introduced 'Thinkless,' an adaptive framework designed to cut unnecessary reasoning in language models by up to 90%. By incorporating a novel algorithm called Decoupled Group Relative Policy Optimization (DeGRPO), Thinkless separates the training focus between selecting the reasoning mode and improving the accuracy of the generated response. The framework equips a language model to dynamically decide between short and long-form reasoning, addressing the resource-intensive and wasteful reasoning sequences produced for simple queries.
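
Reading only that description (so this is a sketch of the decoupling idea with a hypothetical weighting, not the paper's reference implementation), the objective can be pictured as two separately weighted policy-gradient terms, one for the single mode-selection token and one for the response tokens:

```python
import torch

def degrpo_style_loss(mode_logprob: torch.Tensor,
                      response_logprobs: torch.Tensor,
                      advantage: torch.Tensor,
                      alpha: float = 0.001) -> torch.Tensor:
    # One term teaches *when* to think (the mode-control token), the other
    # teaches *how well* to answer; alpha is a hypothetical weight keeping the
    # lone mode token from being drowned out by hundreds of response tokens.
    mode_loss = -(advantage * mode_logprob)
    response_loss = -(advantage * response_logprobs.mean())
    return alpha * mode_loss + response_loss

# Toy usage with made-up numbers:
loss = degrpo_style_loss(
    mode_logprob=torch.tensor(-0.7),                    # log p(short mode)
    response_logprobs=torch.tensor([-0.5, -0.2, -0.9]),
    advantage=torch.tensor(1.0),                        # group-relative reward
)
print(loss)  # a scalar to backpropagate through the policy
```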

Recommended read:
References :
  • learn.aisingapore.org: AI learns how vision and sound are connected, without human intervention | MIT News
  • news.mit.edu: AI learns how vision and sound are connected, without human intervention
  • www.marktechpost.com: Researchers from the National University of Singapore Introduce ‘Thinkless,’ an Adaptive Framework that Reduces Unnecessary Reasoning by up to 90% Using DeGRPO
  • news.mit.edu: Learning how to predict rare kinds of failures

@sciencedaily.com //
A mathematician from UNSW Sydney has reportedly solved one of algebra's oldest and most challenging problems: finding a general algebraic solution for higher polynomial equations. These equations, which involve a variable raised to powers, are fundamental in mathematics and science, with broad applications ranging from describing planetary movement to writing computer programs. While solutions for lower-degree polynomials (up to degree four) have been known for centuries, a general method for solving equations of degree five or higher had remained elusive, until now.

Professor Norman Wildberger, along with computer scientist Dr. Dean Rubine, developed a new approach using novel number sequences to tackle this problem. Their solution, recently published in The American Mathematical Monthly journal, challenges established mathematical assumptions and potentially "reopens a previously closed book in mathematics history," according to Professor Wildberger. The breakthrough centers around rethinking the use of radicals (roots of numbers) in classical formulas, traditionally used to solve lower-order polynomials.

Wildberger argues that radicals, often representing irrational numbers with infinite, non-repeating decimal expansions, introduce incompleteness into calculations. He claims that since these irrational numbers can never be fully calculated, assuming their 'existence' in a formula is problematic. This perspective led him to develop an alternative method based on number sequences, potentially offering a purely algebraic solution to higher-degree polynomial equations, bypassing the limitations of traditional radical-based approaches.
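
The flavor of a number-sequence solution can be seen in a classical special case: the Catalan numbers solve a quadratic. Normalizing the equation so its constant term is 1 (a choice made here purely for illustration), the root is a formal power series in the coefficient rather than a radical expression:

$$\alpha = 1 + t\,\alpha^{2} \quad\Longrightarrow\quad \alpha(t) = \sum_{n\ge 0} C_{n}\, t^{n} = 1 + t + 2t^{2} + 5t^{3} + 14t^{4} + \cdots, \qquad C_{n} = \frac{1}{n+1}\binom{2n}{n}.$$

Wildberger and Rubine's construction reportedly generalizes this picture, with higher-dimensional "hyper-Catalan" arrays playing the role of $C_n$ for polynomials of arbitrary degree.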

Recommended read:
References :
  • Dan Drake: Talks about solution to mathematical problem, using number sequences.
  • phys.org: Discusses the mathematician's solution to an algebraic problem.
  • www.sciencedaily.com: Explains how the mathematician solved the problem.

@medium.com //
Quantum computing is rapidly advancing, bringing both immense potential and significant cybersecurity risks. The UK’s National Cyber Security Centre (NCSC) and experts across the globe are warning of a "colossal" overhaul needed in digital defenses to prepare for the quantum era. The concern is that powerful quantum computers could render current encryption methods obsolete, breaking security protocols that protect financial transactions, medical records, military communications, and blockchain technology. This urgency is underscored by the threat of "harvest now, decrypt later" attacks, where sensitive data is collected and stored for future decryption once quantum computers become powerful enough.

Across the globe, governments and organizations are scrambling to prepare for a quantum future by adopting post-quantum cryptography (PQC). PQC involves creating new encryption algorithms resistant to attacks from both classical and quantum computers. The U.S. National Institute of Standards and Technology (NIST) has already released several algorithms believed to be secure from quantum hacking. The NCSC has issued guidance, setting clear timelines for the UK’s migration to PQC, advising organizations to complete the transition by 2035. Industry leaders are also urging the U.S. Congress to reauthorize and expand the National Quantum Initiative to support research, workforce development, and a resilient supply chain.

Oxford Ionics is one of the companies leading quantum computing development. It has released a multi-phase roadmap focused on achieving scalability and fault tolerance in its trapped-ion quantum computing platform. The roadmap's 'Foundation' phase, already operational, deploys QPUs with 16-64 qubits at 99.99% fidelity. The second phase introduces chips with 256+ qubits and error rates as low as 10⁻⁸ via quantum error correction (QEC). The goal is to scale to over 10,000 physical qubits per chip, supporting 700+ logical qubits with minimal infrastructure change. Multiple bills have also been introduced in the U.S. Congress and the state of Texas to foster the advancement of quantum technology.

Recommended read:
References :
  • medium.com: Post‑Quantum Cryptography: Safeguarding the Digital World Beyond Quantum Supremacy
  • Peter Bendor-Samuel: The Realistic Path To Quantum Computing: Separating Hype From Reality
  • www.techradar.com: Safeguarding data for the quantum era

@blogs.nvidia.com //
Recent advancements in quantum computing include the launch of new supercomputers and the development of open-source frameworks. NVIDIA and AIST have collaborated to launch ABCI-Q, a supercomputing system designed for hybrid quantum-AI research. This system, powered by NVIDIA H100 GPUs and utilizing NVIDIA’s Quantum-2 InfiniBand platform, is hosted at the Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT). ABCI-Q supports hybrid workloads by integrating GPU-based simulation with physical quantum processors from Fujitsu, QuEra, and OptQC, aiming to advance quantum error correction and algorithm development. It serves as a testbed for quantum-GPU workflows across various hardware modalities.

Quantum Machines has introduced QUAlibrate, an open-source calibration framework designed to significantly reduce the time required for quantum computer calibration. Calibration, a major hurdle in quantum system performance and scalability, can now be reduced from hours to minutes. QUAlibrate enables the creation, execution, and sharing of modular calibration protocols, allowing researchers to calibrate multi-qubit superconducting systems rapidly. At the Israeli Quantum Computing Center, full multi-qubit calibration was achieved in just 140 seconds using QUAlibrate. The framework is built on the QUA programming language and uses the Quantum Abstract Machine (QUAM) to model quantum hardware, featuring a graph-based calibration approach.
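
The graph-based idea is straightforward to picture: calibration routines form a dependency graph and run in topological order, so each routine sees up-to-date parameters from its prerequisites. A minimal generic sketch (plain Python with hypothetical node names, not QUAlibrate's actual QUA/QUAM API):

```python
import graphlib  # standard library (Python 3.9+) topological sorting

# Hypothetical calibration nodes; each maps to the set of its prerequisites.
calibration_graph = {
    "resonator_spectroscopy": set(),
    "qubit_spectroscopy": {"resonator_spectroscopy"},
    "rabi_amplitude": {"qubit_spectroscopy"},
    "ramsey_frequency": {"rabi_amplitude"},
    "readout_optimization": {"rabi_amplitude"},
}

def run_node(name: str) -> None:
    # In practice: play pulses, fit the data, update the hardware parameters.
    print(f"calibrating: {name}")

# Prerequisites always run before dependents.
for node in graphlib.TopologicalSorter(calibration_graph).static_order():
    run_node(node)
```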

These advancements are supported by strategic collaborations and investments in quantum technologies. SilQ Connect, a startup focusing on distributed quantum computing, has secured pre-seed funding to advance modular quantum interconnects. This funding from QV Studio, Quantacet, and Quantonation will support the development of microwave-optical quantum interconnects for scalable quantum systems. Additionally, Taiwan's National Center for High-Performance Computing is deploying a new NVIDIA-powered AI supercomputer to support research in climate science, quantum research, and the development of large language models. This initiative aims to foster cross-domain collaboration and global AI leadership.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • quantumcomputingreport.com: Quantum Machines Releases Open-Source QUAlibrate Framework to Accelerate Quantum System Calibration
  • AI News | VentureBeat: Quantum Machines launches Qualibrate open source framework to speed quantum computer calibration
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research

@www.microsoft.com //
Microsoft is actively preparing for a future where quantum computers pose a significant threat to current encryption methods. The company is exploring Post-Quantum Cryptography (PQC) solutions, with a focus on algorithms like FrodoKEM, to bolster security on Windows and Linux platforms. This move is driven by the understanding that quantum computers, with their ability to solve complex problems exponentially faster than classical computers, could break the cryptographic backbone of today’s digital world, including systems like RSA, Diffie-Hellman, and elliptic curve cryptography. The urgency is underscored by recent advances like Microsoft’s Majorana 1, a quantum processor powered by topological qubits, which marks major steps toward practical quantum computing.

Microsoft's efforts to transition to quantum-resistant cryptographic systems include adding PQC algorithms to its core cryptography library, SymCrypt. Recently, Microsoft took the next step by adding PQC support to Windows Insiders (Canary Build 27852+) and to Linux through SymCrypt-OpenSSL v1.9.0. These additions let companies and developers start testing and preparing for a quantum-secure future, guarding against a potential "harvest now, decrypt later" scenario in which hackers collect encrypted data today to decrypt later using quantum computers. This proactive approach aims to safeguard digital lives against the looming quantum threat.

The new additions to Windows include ML-KEM (Module-Lattice-Based Key Encapsulation Mechanism), also known as CRYSTALS-Kyber, designed for secure key exchange, and ML-DSA (Module-Lattice-Based Digital Signature Algorithm), previously known as CRYSTALS-Dilithium, used for digital signatures that ensure data integrity and authenticity. NIST has approved three PQC standards: FIPS 203, 204, and 205. FIPS 203, the Module-Lattice-Based Key-Encapsulation Mechanism Standard, specifies a key encapsulation mechanism designed to protect information exchanged over public networks, ensuring confidentiality even in the presence of quantum adversaries. FIPS 204, the Module-Lattice-Based Digital Signature Standard, defines a digital signature scheme providing authentication and integrity, crucial for verifying identities and securing communications. FIPS 205, the Stateless Hash-Based Digital Signature Standard, outlines a hash-based scheme offering an alternative approach to digital signatures with strong security assurances. NIST encourages organizations to begin the transition to these new standards to ensure long-term data security.
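
For developers who want to experiment today without Windows Insider builds, the Open Quantum Safe project exposes the same NIST algorithms. A minimal sketch of the KEM flow using liboqs-python (the exact algorithm string, e.g. "ML-KEM-768" versus the older draft name "Kyber768", depends on the installed liboqs version):

```python
# pip install liboqs-python (requires the liboqs C library)
import oqs

alg = "ML-KEM-768"  # may be "Kyber768" on older liboqs builds

with oqs.KeyEncapsulation(alg) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this
    with oqs.KeyEncapsulation(alg) as sender:
        # Sender encapsulates a fresh shared secret against the public key.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now hold one shared key
```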

Recommended read:
References :
  • medium.com: Welcome to the Quantum Era, where even the strongest locks we use to protect our digital lives might soon be breakable.
  • Microsoft Research: The recent advances in quantum computing offer many advantages—but also challenge current cryptographic strategies.
  • www.microsoft.com: FrodoKEM: A conservative quantum-safe cryptographic algorithm
  • arstechnica.com: Here’s how Windows 11 aims to make the world safe in the post-quantum era

@www.microsoft.com //
Microsoft is taking a significant step towards future-proofing cybersecurity by integrating post-quantum cryptography (PQC) into Windows Insider builds. This move aims to protect data against the potential threat of quantum computers, which could render current encryption methods vulnerable. The integration of PQC is a critical step toward quantum-resilient cybersecurity, ensuring that Windows systems can withstand attacks from more advanced computing power in the future.

Microsoft announced the availability of PQC support in Windows Insider Canary builds (27852 and above). This release allows developers and organizations to begin experimenting with PQC in real-world environments, assessing integration challenges, performance trade-offs, and compatibility. This is being done in an attempt to jump-start what’s likely to be the most formidable and important technology transition in modern history.

The urgency behind this transition stems from the "harvest now, decrypt later" threat, where malicious actors store encrypted communications today, with the intent to decrypt them once quantum computers become capable. These captured secrets, such as passwords, encryption keys, or medical data, could remain valuable to attackers for years to come. By adopting PQC algorithms, Microsoft aims to safeguard sensitive information against this future risk, emphasizing the importance of starting the transition now.

Recommended read:
References :
  • cyberinsider.com: Microsoft has begun integrating post-quantum cryptography (PQC) into Windows Insider builds, marking a critical step toward quantum-resilient cybersecurity. Microsoft announced the availability of PQC support in Windows Insider Canary builds (27852 and above). This release allows developers and organizations to begin experimenting with PQC in real-world environments, assessing integration challenges, performance trade-offs, and compatibility with …
  • Dan Goodin: Microsoft is updating Windows 11 with a set of new encryption algorithms that can withstand future attacks from quantum computers in an attempt to jump-start what’s likely to be the most formidable and important technology transition in modern history.
  • Red Hat Security: In their article on post-quantum cryptography, Emily Fox and Simo Sorce explained how Red Hat is integrating post-quantum cryptography (PQC) into our products. PQC protects confidentiality, integrity and authenticity of communication and data against quantum computers, which will make attacks on existing classic cryptographic algorithms such as RSA and elliptic curves feasible. Cryptographically relevant quantum computers (CRQCs) are not known to exist yet, but continued advances in research point to a future risk of successful attacks. While the migration to algorithms resistant against such
  • medium.com: Post-Quantum Cryptography Is Arriving on Windows & Linux
  • www.microsoft.com: The recent advances in quantum computing offer many advantages—but also challenge current cryptographic strategies. Learn how FrodoKEM could help strengthen security, even in a future with powerful quantum computers.

@medium.com //
The convergence of quantum computing and cryptography is rapidly evolving, presenting both opportunities and threats to the digital landscape. EntropiQ, a startup specializing in quantum solutions, has launched Quantum Entropy as a Service (QEaaS), offering on-demand, crypto-agile quantum entropy distribution. This service is designed for critical infrastructure and integrates with existing systems via API, aligning with NIST SP 800-90 guidelines. To bolster deployment and operational validation, EntropiQ has partnered with Equinix and GIS QSP, demonstrating its platform in secure, scalable environments across various locations, including Silicon Valley and Washington, D.C.

The imminent threat posed by quantum computers to current cryptographic systems is driving the need for innovative security measures. Algorithms like RSA and ECC, which underpin much of today's digital security, are vulnerable to quantum algorithms like Shor's, which can efficiently factor large integers. This has prompted significant research into post-quantum cryptography (PQC), with solutions like SPQR-AC emerging to leverage hybrid cryptographic frameworks combining lattice-based and code-based primitives. The UK’s National Cyber Security Centre (NCSC) has issued guidance, urging organizations to plan their transition to quantum-safe cryptography by 2028 and complete migration of high-criticality systems by 2031.
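
The quantum speedup in Shor's algorithm lies entirely in finding the multiplicative order r of a modulo N; the rest is classical arithmetic. A toy Python sketch (brute-forcing the order that a quantum computer would find efficiently) shows why order finding breaks factoring-based schemes:

```python
from math import gcd

N = 15  # toy RSA-style modulus; real moduli are thousands of bits
a = 7   # a random base coprime to N

# Classically brute-force the order r: the smallest r with a^r ≡ 1 (mod N).
# This step is exponentially hard classically; Shor's algorithm does it fast.
r = 1
while pow(a, r, N) != 1:
    r += 1

# Classical post-processing: for even r, gcd(a^(r/2) ± 1, N) splits N.
assert r % 2 == 0
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"order r = {r}, factors: {p} x {q}")  # order r = 4, factors: 3 x 5
```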

Artificial intelligence (AI) is increasingly being integrated into quantum cryptography to enhance security and build resilience against emerging quantum threats. This fusion of AI and quantum-resistant encryption is aimed at protecting data in the post-quantum era, as AI can aid in developing more robust and adaptive cryptographic solutions. The NCSC's recommendations emphasize the importance of understanding the risks and taking proactive steps to secure digital infrastructure. Furthermore, the concept of "crypto agility" is gaining traction, encouraging businesses to develop the capacity to rapidly adapt encryption standards as quantum computers advance, ensuring continuous protection against evolving threats.

Recommended read:
References :
  • medium.com: AI Meets Quantum Cryptography: Securing Our Digital Future
  • medium.com: How Quantum Computing is a Threat to Cryptography
  • medium.com: Quantum Security: The Silent Threat Coming for Your Business
  • medium.com: Blog post about Post‑Quantum Cryptography.
  • The Next Web: UK’s digital defences need ‘colossal’ overhaul for quantum era

@quantumcomputingreport.com //
NVIDIA is significantly advancing quantum and AI research through strategic collaborations and cutting-edge technology. The company is partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to launch ABCI-Q, a new supercomputing system focused on hybrid quantum-classical computing. This research-focused system is designed to support large-scale operations, utilizing the power of 2,020 NVIDIA H100 GPUs interconnected with NVIDIA’s Quantum-2 InfiniBand platform. The ABCI-Q system will be hosted at the newly established Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT).

The ABCI-Q infrastructure integrates CUDA-Q, an open-source platform that orchestrates large-scale quantum-classical computing, enabling researchers to simulate and accelerate quantum applications. This hybrid setup combines GPU-based simulation with physical quantum processors from vendors such as Fujitsu (superconducting qubits), QuEra (neutral atom qubits), and OptQC (photonic qubits). This modular architecture will allow for testing quantum error correction, developing algorithms, and refining co-design strategies, which are all critical for future quantum systems. The system serves as a testbed for evaluating quantum-GPU workflows and advancing practical use cases across multiple hardware modalities.
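
In CUDA-Q's Python interface, a kernel written once can target either the GPU-accelerated simulators or attached QPU backends. A minimal Bell-state example (illustrative only, not tested against the ABCI-Q configuration):

```python
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)     # allocate two qubits
    h(qubits[0])                  # superposition on the first qubit
    x.ctrl(qubits[0], qubits[1])  # entangle via controlled-X
    mz(qubits)                    # measure both qubits

# Runs on a simulator by default; cudaq.set_target(...) can redirect the
# same kernel to hardware backends.
print(cudaq.sample(bell))  # roughly 50/50 counts of '00' and '11'
```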

NVIDIA is also expanding its presence in Taiwan, powering a new supercomputer at the National Center for High-Performance Computing (NCHC). This supercomputer is projected to deliver eight times the AI performance compared to the center's previous Taiwania 2 system. The new supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, two NVIDIA GB200 NVL72 rack-scale systems, and an NVIDIA HGX B300 system built on the NVIDIA Blackwell Ultra platform, all interconnected by NVIDIA Quantum InfiniBand networking. This enhanced infrastructure is expected to significantly boost research in AI development, climate science, and quantum computing, fostering technological autonomy and global AI leadership for Taiwan.

Recommended read:
References :
  • AI News | VentureBeat: Nvidia is powering a supercomputer at Taiwan’s National Center for High-Performance Computing that’s set to deliver over eight times more AI performance than before.
  • Japan’s National Institute of Advanced Industrial Science and Technology (AIST), in collaboration with NVIDIA, has launched ABCI-Q, a new research-focused supercomputing system designed to support large-scale hybrid quantum-classical computing.
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research

@towardsdatascience.com //
Recent discussions in statistics highlight significant concepts and applications relevant to data science. A book review explores seminal ideas and controversies in the field, focusing on key papers and historical perspectives. The review mentions Fisher's 1922 paper, which is credited with creating modern mathematical statistics, and discusses debates around hypothesis testing and Bayesian analysis.

Stephen Senn's guest post addresses the concept of "relevant significance" in statistical testing, cautioning against misinterpreting statistical significance as proof of a genuine effect. Senn points out that rejecting a null hypothesis does not necessarily mean it is false, emphasizing the importance of careful interpretation of statistical results.

Furthermore, aspiring data scientists are advised to familiarize themselves with essential statistical concepts for job interviews. These include understanding p-values, Z-scores, and outlier detection methods. A p-value is crucial for hypothesis testing, and Z-scores help identify data points that deviate significantly from the mean. These concepts form a foundation for analyzing data and drawing meaningful conclusions in data science applications.
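
For example, flagging outliers by z-score takes only a few lines (the |z| > 2 cutoff used here is a common convention, not a rule):

```python
import statistics

data = [12.1, 11.8, 12.3, 12.0, 11.9, 19.6]  # one suspicious reading
mean = statistics.mean(data)
stdev = statistics.stdev(data)

# Z-score: how many standard deviations a point sits from the mean.
for x in data:
    z = (x - mean) / stdev
    flag = "outlier?" if abs(z) > 2 else ""
    print(f"{x:5.1f}  z = {z:+.2f}  {flag}")
```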

Recommended read:
References :
  • errorstatistics.com: Stephen Senn (guest post): “Relevant significance? Be careful what you wish for”
  • Towards Data Science: 5 Statistical Concepts You Need to Know Before Your Next Data Science Interview
  • Xi'an's Og: Seminal ideas and controversies in Statistics [book review]

@www.first.org //
Researchers from the U.S. National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) have collaborated to develop a new security metric designed to better assess the likelihood of vulnerability exploitation. This metric aims to enhance the existing Exploit Prediction Scoring System (EPSS) and CISA's Known Exploited Vulnerabilities (KEV) catalog, providing a more refined approach to identifying vulnerabilities that are at high risk of being exploited in the wild. Peter Mell, formerly of NIST, and Jonathan Spring from CISA are credited with outlining this vulnerability exploit metric.

This new metric, detailed in a NIST White Paper titled "Likely Exploited Vulnerabilities," seeks to improve the accuracy with which vulnerabilities are prioritized for remediation. By augmenting the EPSS and KEV lists, the metric intends to provide a clearer understanding of a vulnerability's exploitability. The researchers propose this augmentation as a means to better express how likely a vulnerability is to be exploited, which can aid organizations in focusing their security efforts on the most critical threats.
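
The white paper should be consulted for the exact definition, but the intuition behind composing per-window EPSS scores into an overall exploitation likelihood can be sketched as follows (a simplification that assumes independence between windows; the function name is illustrative):

```python
def likely_exploited(epss_scores: list[float]) -> float:
    """Probability a vulnerability was exploited at least once, given a
    per-window exploitation probability for each observation window."""
    p_never = 1.0
    for p in epss_scores:
        p_never *= (1.0 - p)  # survives this window unexploited
    return 1.0 - p_never

# Even a modest 2% per-window score compounds substantially over 90 windows:
print(round(likely_exploited([0.02] * 90), 3))  # ~0.838
```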

Meanwhile, CISA has recently added six new vulnerabilities to its Known Exploited Vulnerabilities catalog, underscoring the importance of addressing actively exploited flaws. In a related development, Wiz Research has observed in-the-wild exploitation of CVE-2025-4427 and CVE-2025-4428, two recently disclosed vulnerabilities affecting Ivanti Endpoint Manager Mobile (EPMM). These Ivanti EPMM vulnerabilities, which involve a chain of exploits leading to remote code execution, highlight the need for organizations to promptly apply security patches and mitigate potential risks.

Recommended read:
References :
  • Metacurity: Peter Mell from NIST and Jonathan Spring from CISA propose an alternative/augmentation to the Exploit Prediction Scoring System (EPSS) and Known Exploited Vulnerability (KEV) lists to better express a vulnerability's exploitability.
  • thecyberexpress.com: Researchers from the U.S. National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) have developed a new security metric to determine the likelihood that a vulnerability has been exploited. In a white paper published this week, Peter Mell, formerly of NIST, and CISA’s Jonathan Spring outlined their vulnerability exploit metric, which augments the work of the Exploit Prediction Scoring System (EPSS) and CISA’s Known Exploited Vulnerabilities (KEV) catalog.

@docslib.org //
The Kazhdan-Lusztig correspondence, a significant concept in representation theory, is gaining increased attention. This correspondence establishes an equivalence between representation categories of quantum groups and affine Lie algebras. Recent research explores its applications in areas like logarithmic conformal field theory (CFT), particularly concerning the representation category of the triplet W-algebra. The Kazhdan-Lusztig correspondence has also been investigated in relation to vertex algebras, further solidifying its importance across different mathematical and physical domains.
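
Schematically, the original (negative-level) form of the correspondence matches a category of affine Lie algebra modules at level $k$ with modules over a quantum group at a root of unity determined by the level, where $h^{\vee}$ is the dual Coxeter number (stated loosely here; the precise categories and level conditions matter):

$$\widehat{\mathfrak{g}}\text{-mod}_{k} \;\simeq\; U_{q}(\mathfrak{g})\text{-mod}, \qquad q = e^{\pi i/(k + h^{\vee})}.$$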

Dennis Gaitsgory was awarded the Breakthrough Prize in Mathematics for his broad contributions to the field, including work closely related to representation theory and the geometric Langlands program. His recognition highlights the impact of representation theory on other areas of mathematics. Further research is focusing on exploring tensor structures arising from affine Lie algebras and building on Kazhdan and Lusztig's foundational work in the area.

Recent work has also explored the Kazhdan-Lusztig correspondence at a positive level using Arkhipov-Gaitsgory duality for affine Lie algebras. A functor is defined which sends objects in the DG category of G(O)-equivariant positive level affine Lie algebra modules to objects in the DG category of modules over Lusztig’s quantum group at a root of unity. Researchers are actively working to prove that the semi-infinite cohomology functor for positive level modules factors through the Kazhdan-Lusztig functor at positive level and the quantum group cohomology functor with respect to the positive part of Lusztig’s quantum group.

Recommended read:
References :
  • nLab: Kazhdan-Lusztig correspondence.

@www.microsoft.com //
IACR News has highlighted recent advancements in post-quantum cryptography, essential for safeguarding data against future quantum computer attacks. A key area of focus is the development of algorithms and protocols that remain secure even when classical cryptographic methods become vulnerable. Among these efforts, FrodoKEM stands out as a conservative quantum-safe cryptographic algorithm, designed to provide strong security guarantees in the face of quantum computing threats.

The adaptive security of key-unique threshold signatures is also under scrutiny. Research by Elizabeth Crites, Chelsea Komlo, and Mary Maller investigates the security assumptions required to prove the adaptive security of threshold signatures. Their work reveals impossibility results that highlight the difficulty of achieving adaptive security for key-unique threshold signatures, particularly for schemes compatible with standard single-party signatures such as BLS, ECDSA, and Schnorr. This research aims to guide the development of new assumptions and properties for constructing adaptively secure threshold schemes.

In related news, Muhammed F. Esgin is offering PhD and Post-Doc positions in post-quantum cryptography, emphasizing the need for candidates with a strong mathematical and cryptography background. Students at Monash University can expect to work on their research from the beginning, supported by competitive stipends and opportunities for teaching assistant roles. These academic opportunities are crucial for training the next generation of cryptographers who will develop and implement post-quantum solutions.

Recommended read:
References :
  • mfesgin.github.io: PhD and Post-Doc in Post-Quantum Cryptography
  • IACR News: Zero-Trust Post-quantum Cryptography Implementation Using Category Theory
  • medium.com: Post-Quantum Cryptography Is Arriving on Windows & Linux

@www.quantamagazine.org //
Researchers are making strides in AI reasoning and efficiency, tackling both complex problem-solving and the energy consumption of these systems. One promising area involves reversible computing, where programs can run backward as easily as forward, theoretically saving energy by avoiding data deletion. Michael Frank, a researcher interested in the physical limits of computation, discovered that reversible computing could keep computational progress going as traditional computing slows due to physical limitations. Christof Teuscher at Portland State University emphasized the potential for significant power savings with this approach.
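
The core idea is easy to demonstrate: if every step of an update is invertible, the whole computation can be run backward, so no intermediate information is ever erased. A tiny Python illustration:

```python
def step(x: int, y: int) -> tuple[int, int]:
    y = y ^ x          # reversible: XOR is its own inverse
    x = (x + y) % 256  # reversible: modular subtraction undoes addition
    return x, y

def unstep(x: int, y: int) -> tuple[int, int]:
    x = (x - y) % 256  # undo the addition...
    y = y ^ x          # ...then undo the XOR, in reverse order
    return x, y

state = (42, 7)
forward = step(*state)
assert unstep(*forward) == state  # running backward recovers the input exactly
```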

An evolution of the LLM-as-a-Judge paradigm is emerging. Meta AI has introduced the J1 framework which shifts the paradigm of LLMs from passive generators to active, deliberative evaluators through self-evaluation. This approach, detailed in "J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning," addresses the growing need for rigorous and scalable evaluation as AI systems become more capable and widely deployed. By reframing judgment as a structured reasoning task trained through reinforcement learning, J1 aims to create models that perform consistent, interpretable, and high-fidelity evaluations.

Soheil Feizi, an associate professor at the University of Maryland, has received a $1 million federal grant to advance foundational research in reasoning AI models. This funding, stemming from a Presidential Early Career Award for Scientists and Engineers (PECASE), will support his work in defending large language models (LLMs) against attacks, identifying weaknesses in how these models learn, encouraging transparent, step-by-step logic, and understanding the "reasoning tokens" that drive decision-making. Feizi plans to explore innovative approaches like live activation probing and novel reinforcement-learning designs, aiming to transform theoretical advancements into practical applications and real-world usages.


@medium.com //
Cryptography is a critical component of today's digital landscape, ensuring secure communication, data integrity, and user authentication across platforms. The practice of "secret writing" stretches back centuries, evolving from ancient methods like the Caesar cipher to today's complex algorithms. In the Ethereum blockchain, cryptography is the foundation of security, underpinning trustless transactions and immutable data accessible only to authorized users. Key areas where cryptography manifests in Ethereum include digital signatures, which act as electronic stamps of authenticity, and cryptographic hashes, which serve as digital fingerprints for data. Beyond blockchains, cryptography secures data in transit, verifies identities, and safeguards sensitive information such as passwords.

Asymmetric encryption, also known as public-key cryptography (PKC), plays a vital role in Ethereum. This method uses key pairs consisting of a public key, shared freely, and a private key, kept secret. Ethereum leverages elliptic curve cryptography, specifically the secp256k1 curve, to generate these key pairs; the scheme relies on the mathematical properties of elliptic curves over finite fields. Quantum-resistant cryptography is also gaining traction in blockchain security due to the emerging threat of quantum computers, which have the potential to break current encryption methods like RSA and ECC. In 2025, blockchain platforms are actively testing post-quantum cryptography to ensure the long-term safety of old data, secure smart contracts, and maintain user trust.
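
As an illustration of the secp256k1 sign-and-verify flow described above (using the `ecdsa` Python package; Ethereum itself additionally uses Keccak-256 hashing and recoverable signatures, which this toy example omits):

```python
# pip install ecdsa
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)  # secret, kept by the user
public_key = private_key.verifying_key             # shared freely

message = b"send 1 ETH to 0xabc..."
signature = private_key.sign(message)              # the "electronic stamp"

# Anyone holding the public key can check authenticity without the secret:
assert public_key.verify(signature, message)
```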

Quantum computing advancements pose a significant risk to current cryptographic methods. The U.S. House Committee on Science, Space, and Technology convened in May 2025 to discuss the future of the National Quantum Initiative (NQI). Industry leaders testified on the need to reauthorize and expand the NQI to maintain U.S. leadership in quantum technology. To counter the potential quantum threat to blockchain, developers are exploring quantum-resistant wallets and smart contract tools. Some new blockchains, like QANplatform and XX Network, are building with post-quantum crypto from the start. The importance of sustained investment in quantum sciences and the development of a skilled workforce were highlighted.


Igor Konnov@Protocols Made Fun //
Igor Konnov has successfully completed full proofs of consistency for the two-phase commit (2PC) protocol using the Lean 4 theorem prover, starting from a functional specification. This work builds upon previous efforts in specifying, simulating, and property-based testing 2PC in Lean 4. The entire process, including specification and simulation from prior work, took approximately 45 hours, with the proof writing itself consuming 29 hours. The rapid proof development was attributed to a correct inductive invariant discovered using the Apalache model checker, underscoring the benefits of combining formal specification with interactive theorem proving.

The Lean 4 proofs took a modular approach, breaking the verification into functional, propositional, and inductive components. The propositional and inductive proofs together are about 15 times longer than the system code: Functional.lean and System.lean contain 139 lines of code, Propositional.lean 90, PropositionalProofs.lean 275, and InductiveProofs.lean 1077. This ratio aligns with the empirical standard in software verification, where proofs typically run 10 to 20 times the length of the source code.
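
The layering follows the standard inductive-invariant pattern: prove the invariant holds initially, then prove every step preserves it. A toy Lean 4 sketch of that pattern (hypothetical definitions, far simpler than the actual TwoPhase development):

```lean
-- Toy model: a run may commit or abort, but never both.
structure State where
  committed : Bool
  aborted   : Bool

def init : State := ⟨false, false⟩

inductive Step : State → State → Prop
  | commit : ∀ s, s.aborted = false → Step s { s with committed := true }
  | abort  : ∀ s, s.committed = false → Step s { s with aborted := true }

/-- The consistency invariant. -/
def Inv (s : State) : Prop := ¬(s.committed = true ∧ s.aborted = true)

theorem inv_init : Inv init := by simp [Inv, init]

theorem inv_step (s s' : State) (_hinv : Inv s) (hstep : Step s s') :
    Inv s' := by
  cases hstep with
  | commit s h => simp [Inv, h]
  | abort  s h => simp [Inv, h]
```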

Konnov also mentioned exploring an alternative route: proving equivalence between their specification in Propositional.lean and the Veil specification. The Veil examples repository already contains a version of two-phase commit, albeit slightly different from the TLA+ version and the Lean specification. He suggests that this could be a topic for future work. The consistency of the protocol was specified using TLA+, leveraging its methodology for defining TCConsistent. The Lean 4 code for the two-phase commit protocol can be found on GitHub.

Recommended read:
References :
  • Protocols Made Fun: Article on proving consistency of two-phase commit in Lean4
  • github.com: Github repository which contains the code for TwoPhase Protocol
  • github.com: Examples/IvyBench/TwoPhaseCommit.lean
  • github.com: This GitHub repository contains Lean 4 code for the two-phase commit protocol.
  • github.com: TwoPhase.tla

Mohamed Abdel-Kareem@quantumcomputingreport.com //
Recent advances in quantum computing pose a significant threat to current cryptographic systems, necessitating the development and deployment of post-quantum cryptography (PQC). Quantum computers, leveraging quantum mechanics, can perform certain calculations exponentially faster than classical computers. This capability undermines the security of widely used public key cryptography algorithms like RSA and Elliptic Curve Cryptography (ECC), which rely on the difficulty of factoring large numbers and finding discrete logarithms. Mathematician Peter Shor's algorithm demonstrated that quantum computers could break RSA encryption, spurring interest in quantum-resistant cryptography. While symmetric key algorithms like AES and hash functions are considered more robust, the vulnerability of public key cryptography demands immediate attention and transition to PQC solutions.

The Bitcoin ecosystem is actively exploring the integration of post-quantum cryptographic solutions to safeguard against potential quantum attacks. Blockstream is seeking an Applied Cryptographer to research, evaluate, and implement PQC tailored for Bitcoin's unique challenges. This includes adapting state-of-the-art PQC research to the Bitcoin domain, exploring features relevant for Bitcoin such as threshold signatures, signature aggregation, Taproot tweaking, silent payments, and HD wallets. The focus is on analyzing the implications of integrating post-quantum schemes into Bitcoin and contributing to Bitcoin Improvement Proposals (BIPs) to standardize cryptography for use in Bitcoin.

In related news, Heriot-Watt University has launched a £2.5 million Optical Ground Station (HOGS) to advance satellite-based quantum-secure communication. This facility will enable quantum key distribution (QKD) experiments with satellites, contributing to the development of a quantum-secure internet. Furthermore, U.S. Congress is considering the "Quantum Sandbox for Near-Term Applications Act" to promote the commercial advancement of quantum technology through public-private partnerships. Simultaneously, research is underway to enhance telehealth cybersecurity by integrating PQC with QKD and privacy-preserving mechanisms, ensuring data confidentiality and immutability for patient records in a post-quantum era.

Recommended read:
References :
  • osintteam.blog: Understanding Cryptography: How Your Crypto Wallets, Apps, and NFTs Stay Secure
  • medium.com: Quantum Computing and Post-Quantum Cryptography

@thequantuminsider.com //
Heriot-Watt University has launched a £2.5 million Optical Ground Station (HOGS) at its Research Park in Edinburgh, marking a significant advancement in satellite-based quantum-secure communication. The facility, developed under the UK Quantum Communications Hub, features a 70-cm precision telescope equipped with adaptive optics and quantum detectors. This investment positions Heriot-Watt at the forefront of quantum communication research and development.

The HOGS facility will enable quantum key distribution (QKD) experiments with satellites, facilitating secure communication channels resistant to future decryption by quantum computers. The station is also equipped to monitor space debris and test ultra-high-speed optical communications for next-generation networks. This is the UK's first major infrastructure investment in free-space quantum key distribution research, and the station will serve as a testbed for space-to-ground optical links that use quantum-secure protocols to exchange encryption keys via single photons.
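
To give a sense of what key exchange over single photons involves, here is a toy simulation of one round of BB84-style sifting (illustrative only, ignoring eavesdropper detection and error correction, and not the HOGS protocol stack):

```python
import random

n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]   # bits to encode
alice_bases = [random.choice("+x") for _ in range(n)]    # encoding bases
bob_bases   = [random.choice("+x") for _ in range(n)]    # measurement bases

# Only positions where the bases happened to match yield reliable bits;
# the rest are discarded during public basis comparison ("sifting").
sifted_key = [
    bit
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    if a_basis == b_basis
]
print("sifted key:", sifted_key)  # on average, half the positions survive
```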

The project marks a major step in the UK’s efforts to build a quantum-secure internet, offering a unique testbed for industry and academia. Connected via dark fibre to Heriot-Watt’s quantum labs, HOGS enables real-time simulation and validation of urban to intercontinental optical quantum networks. HOGS is part of Heriot-Watt’s leadership in the new Integrated Quantum Networks (IQN) Hub, positioning the university as a central player in the development of quantum-secure communications. The facility aims to grow Scotland’s space economy and future workforce, partnering with universities, national laboratories, and businesses, including STEM programs for students.

Recommended read:
References :
  • quantumcomputingreport.com: Heriot-Watt University Opens £2.5M ($3.3M USD) Quantum Optical Ground Station to Advance Secure Satellite Communications
  • thequantuminsider.com: Heriot-Watt Opens $3M Quantum Satellite Research Facility, UK’s First Optical QKD Station
  • thequantuminsider.com: Congress will ultimately decide how much quantum funding is preserved or expanded. But the White House’s proposal seems to be signaling that quantum matters, but it must compete with a number of other priorities.

@siliconangle.com //
SAS and Intel are collaborating to redefine AI architecture through optimized intelligence, moving away from a GPU-centric approach. This partnership focuses on aligning hardware and software roadmaps to deliver smarter performance, lower costs, and greater trust across various environments. Optimized intelligence allows businesses to tailor their AI infrastructure to specific use cases, which ensures efficient and ethical AI practices with human-centered design, instilling greater confidence in real-world outcomes. SAS and Intel have a 25-year relationship built around this concept, with deep investments in technical alignment to ensure hardware and software co-evolve.

SAS is integrating Intel's silicon innovations, such as AMX acceleration and Gaudi GPUs, into its Viya platform to provide cost-effective performance. This collaboration enables clients to deploy advanced models without overspending on infrastructure, with Viya demonstrating significant performance improvements on the latest Intel platforms. The company is also working with companies like Procter & Gamble and quantum hardware providers including D-Wave, IBM, and QuEra to develop hybrid quantum-classical solutions for real-world problems across industries like life sciences, finance, and manufacturing.

A recent global SAS survey revealed that over 60% of business leaders are actively investing in or exploring quantum AI, although concerns remain regarding high costs, a lack of understanding, and unclear use cases. SAS aims to make quantum AI more accessible by working on pilot projects and research, providing guidance to businesses on applying quantum technologies. SAS Principal Quantum Architect Bill Wisotsky states that quantum technologies allow companies to analyze more data and achieve fast answers to complex questions, and SAS wants to simplify this research for its customers.
