Top Mathematics discussions

NishMath

@quantumcomputingreport.com //
Project Eleven, an open science initiative, has launched the QDay Prize, a global competition offering a reward of one Bitcoin, currently valued around $84,000-$85,000, to the first individual or team that can successfully break elliptic curve cryptography (ECC) using Shor’s algorithm on a quantum computer. The competition aims to assess the current progress in quantum computing and its potential to undermine existing cryptographic systems, emphasizing the transition to post-quantum cryptography. Participants are required to submit a working quantum implementation targeting ECC keys, with no classical shortcuts or hybrid methods allowed, ensuring a pure quantum solution.

The challenge asks entrants to break the largest ECC key they can with a gate-level implementation of Shor’s algorithm solving the elliptic curve discrete logarithm problem (ECDLP). Project Eleven has prepared a set of ECC keys ranging from 1 to 25 bits for testing, with submissions required to include quantum program code, a written explanation of the method, and details about the hardware used. The quantum machine does not need to be publicly available, but submissions will be shared publicly to ensure transparency.
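
To make concrete what a submission must solve, the sketch below sets up the ECDLP on a deliberately tiny curve and recovers the key by classical brute force, which is exactly the kind of shortcut the contest rules out; the curve, base point, and key are arbitrary toy choices, not anything specified by Project Eleven.

```python
# Toy illustration of the elliptic curve discrete logarithm problem (ECDLP):
# given G and Q = k*G on a curve, recover k. Here it is done by classical
# brute force on a deliberately tiny curve; real submissions must do this
# with Shor's algorithm on quantum hardware.
# Curve y^2 = x^3 + 2x + 3 over F_97; all values are arbitrary toy choices.
p, a, b = 97, 2, 3

def ec_add(P, Q):
    """Add two points on the curve (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return x3, (m * (x1 - x3) - y1) % p

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)                      # a point on the curve (6^2 = 3^3 + 2*3 + 3 mod 97)
secret = 41                     # the "private key"
Q = ec_mul(secret, G)           # the "public key"

# Brute-force search for some k with k*G == Q (feasible only for tiny keys).
k, R = 1, G
while R != Q:
    R = ec_add(R, G)
    k += 1
print("recovered discrete log:", k)
```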

The contest, which runs until April 5, 2026, highlights the real-world cryptographic risks of advancing quantum hardware. Project Eleven believes that even achieving a few bits of a private key would be a significant breakthrough. Experts estimate that a 256-bit ECC key could be cracked with 2,000 logical qubits, potentially within a decade, underscoring the urgency of understanding how close current technologies are to threatening ECC security. The QDay Prize seeks to establish a verifiable and open marker of when practical quantum attacks against widely used encryption systems may emerge.

Recommended read:
References :
  • thequantuminsider.com: A new competition is offering a single Bitcoin to anyone who can break elliptic curve cryptography using a quantum computer — no shortcuts allowed.
  • Bitcoin News: Project Eleven believes this would be an extremely hard task, and achieving even a few bits of a private key would be big news.
  • Quantum Computing Report: Project Eleven (P11) has announced the QDay Prize, an open competition offering a reward of one Bitcoin (current value about $85,000) for demonstrating the ability to break elliptic curve cryptography (ECC) using Shor’s algorithm on a quantum computer.

Miranda Martinengo@Istituto Grothendieck //
Recent developments in the mathematics community showcase notable achievements and career advancements. Ryuya Hora, a doctoral scholar from the University of Tokyo specializing in topos theory and automata theory applications, has been appointed Research Associate of the Centre for Topos Theory and its Applications (CTTA). He is scheduled to collaborate with Olivia Caramello and other researchers at the Centre in Paris between April and June 2025. His appointment signifies a valuable addition to the field, with opportunities to follow his work, including his talk at the "Toposes in Mondovì" conference.

Cesare Tronci has been promoted to Professor of Mathematics at the University of Surrey, effective April 1, 2025. This promotion acknowledges his contributions to the field, and further information about his research can be found on his website. Also at the University of Surrey, Jessica Furber has successfully defended her PhD thesis, "Mathematical Analysis of Fine-Scale Badger Movement Data," marking the completion of her doctoral studies. Her external examiner was Prof Yuliya Kyrychko from Sussex, and the internal examiner was Dr Joaquin Prada from the Vet School, Surrey.

In related news, the Mathematics Division at Stellenbosch University in South Africa is seeking a new permanent appointee at the Lecturer or Senior Lecturer level, with consideration potentially given to other levels under specific circumstances. While preference will be given to candidates working in number theory or a related area, applications from those in other areas of mathematics will also be considered. The deadline for applications is April 30, 2025, with detailed information available in the official advertisement.

Recommended read:
References :
  • blogs.surrey.ac.uk: Congratulations to Cesare Tronci who has been promoted to Professor of Mathematics by the University of Surrey.
  • Istituto Grothendieck: Ryuya Hora, doctoral scholar of the University of Tokyo working in topos theory and its applications, in particular to automata theory, has recently been appointed Research Associate of the Centre for Topos Theory and its Applications.
  • blogs.surrey.ac.uk: Jessica Furber passes PhD viva
  • igrothendieck.org: Announcement of Ryuya Hora's appointment as Research Associate of the Centre for Topos Theory and its Applications.

@Martin Escardo //
A new approach to defining interval objects in category theory is being explored, focusing on the universal characterization of the Euclidean interval. This research, a collaboration between Martin Escardo and Alex Simpson, aims to establish a definition of interval objects applicable to general categories, capturing both geometrical and computational aspects. The goal is to find a definition that works across diverse categorical settings, allowing for a more abstract and unified understanding of intervals. This work builds upon their previous research, aiming for a broader mathematical foundation for interval objects.

The work by Escardo and Simpson delves into defining arithmetic operations within this abstract framework. Given an interval object [-1,1] in a category with finite products, they demonstrate how to define operations such as negation and multiplication using the universal property of the interval. Negation, written -x, is defined as the unique structure-preserving automorphism that maps -1 to 1 and 1 to -1, which ensures that -(-x) = x. Similarly, multiplication by x, written x × (-), is defined as the unique structure-preserving map sending -1 to -x and 1 to x; from this characterization, commutativity and associativity of multiplication can be derived.
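
For readers who want the defining conditions spelled out, the display below restates the two characterizations using a midpoint operation on the interval object; the notation is illustrative, and the precise axioms are those of the Escardo and Simpson papers.

```latex
% Illustrative restatement of the two definitions above. Notation assumed:
% I = [-1,1] is the interval object, \oplus its midpoint operation, and
% "midpoint-preserving" means f(u \oplus v) = f(u) \oplus f(v); uniqueness
% in each case comes from the universal property. See the Escardo--Simpson
% papers for the precise axioms.
\begin{align*}
  \text{negation: } & \text{the unique midpoint-preserving } \nu \colon I \to I
      \text{ with } \nu(-1) = 1,\ \nu(1) = -1;\\
  \text{multiplication by } x\text{: } & \text{the unique midpoint-preserving } \mu_x \colon I \to I
      \text{ with } \mu_x(-1) = -x,\ \mu_x(1) = x.
\end{align*}
```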

This research has already produced significant results, including two joint papers: "A universal characterization of the closed Euclidean interval (extended abstract)" from LICS 2001 and "Abstract Datatypes for Real Numbers in Type Theory" from RTA/TLCA'2014. A third paper, focused more on the mathematical aspects, is currently in preparation. This work aims to provide a robust and universal characterization of interval objects, impacting both theoretical mathematics and practical applications in computer science and related fields.

Recommended read:
References :
  • www.johndcook.com: A paper about a notion of interval object in any category with finite products, on joint work with Alex Simpson.
  • Martin Escardo: The original post announcing A universal characterization of the closed Euclidean interval.

@www.quantamagazine.org //
Researchers are exploring innovative methods to enhance the performance of artificial intelligence language models by minimizing their reliance on direct language processing. This approach involves enabling models to operate more within mathematical or "latent" spaces, reducing the need for constant translation between numerical representations and human language. Studies suggest that processing information directly in these spaces can improve efficiency and reasoning capabilities, as language can sometimes constrain and diminish the information retained by the model. By sidestepping the traditional language-bound processes, AI systems may achieve better results by "thinking" independently of linguistic structures.

Meta has announced plans to resume training its AI models using publicly available content from European users. This move aims to improve the capabilities of Meta's AI systems by leveraging a vast dataset of user-generated information. The decision comes after a period of suspension prompted by concerns regarding data privacy, which were raised by activist groups. Meta is emphasizing that the training will utilize public posts and comments shared by adult users within the European Union, as well as user interactions with Meta AI, such as questions and queries, to enhance model accuracy and overall performance.

A new method has been developed to efficiently safeguard sensitive data used in AI model training, reducing the traditional tradeoff between privacy and accuracy. This innovative framework maintains an AI model's performance while preventing attackers from extracting confidential information, such as medical images or financial records. By focusing on the stability of algorithms and utilizing a metric called PAC Privacy, researchers have shown that it's possible to privatize almost any algorithm without needing access to its internal workings, potentially making privacy more accessible and less computationally expensive in real-world applications.
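
The general shape of that recipe can be pictured in a few lines: treat the training algorithm as a black box, measure how much its output moves under resampling of the data, and add noise scaled to that instability. The snippet below is only a conceptual sketch of the idea described above, with simplified subsampling and noise calibration; it is not the published PAC Privacy procedure.

```python
import numpy as np

# Conceptual sketch of output perturbation calibrated to black-box stability,
# in the spirit of the PAC Privacy idea described above. This is NOT the
# authors' published procedure: the subsampling scheme and noise calibration
# here are deliberately simplified for illustration.
def privatize(algorithm, data, trials=50, noise_scale=3.0,
              rng=np.random.default_rng(0)):
    outputs = []
    for _ in range(trials):
        # Re-run the black-box algorithm on random subsamples to measure how
        # much its output moves when the underlying data changes.
        idx = rng.choice(len(data), size=len(data) // 2, replace=False)
        outputs.append(np.asarray(algorithm(data[idx]), dtype=float))
    spread = np.stack(outputs).std(axis=0)      # per-coordinate instability
    release = np.asarray(algorithm(data), dtype=float)
    return release + rng.normal(0.0, noise_scale * spread)

# Example: privatize a simple mean query over a toy dataset.
data = np.random.default_rng(1).normal(size=(1000, 4))
print(privatize(lambda d: d.mean(axis=0), data))
```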

@teorth.github.io //
The Equational Theories Project has achieved a major breakthrough, formalizing all possible implications between a test list of 4694 equational laws in the Lean theorem prover. This involved verifying a staggering 22,033,636 implications (4694 squared) over a period of just over 200 days. The project's success is attributed to a substantial and diverse collection of code, data, and text, highlighting the complexity and scale of the formalization effort. This milestone marks a significant advancement in the field of automated theorem proving, with potential applications in formal verification of mathematical theories and software.

The project leverages the Lean theorem prover, a powerful tool for formalizing mathematics and verifying software. The formalization effort required managing a large volume of code, data, and textual descriptions. Now that the formalization is complete, the project team is focusing on documenting their methodologies and results in a comprehensive paper. This paper will detail the techniques used to tackle the challenge of formalizing such a vast number of implications, offering insights for future research in automated reasoning and formal verification.
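
To give a flavor of what a single implication between two equational laws looks like when formalized, here is a toy Lean snippet, not taken from the project's repository: it proves that the law x * y = y entails the law (x * y) * z = z over an arbitrary magma.

```lean
-- Toy example (not from the project's repository): one implication between
-- two equational laws over an arbitrary magma, stated and proved in Lean 4.
variable {M : Type} [Mul M]

-- Law A: ∀ x y, x * y = y   entails   Law B: ∀ x y z, (x * y) * z = z
example (lawA : ∀ x y : M, x * y = y) : ∀ x y z : M, (x * y) * z = z :=
  fun x y z => lawA (x * y) z
```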

The next key step for the Equational Theories Project is drafting the accompanying paper. The current draft is in an incomplete state, but is now the central focus of the project. This paper will serve as a crucial resource for understanding the project's accomplishments and methodologies. While the code and data are essential, the paper will provide the necessary context and explanation to make the formalization accessible and useful to the broader research community.

Recommended read:
References :
  • leanprover.zulipchat.com: after just over 200 days, the last of the 4694^2 = 22033636 possible implications between our test list of 4694 equational laws has now been formalized in Lean .
  • Terence Tao: A key milestone in the Equational Theories Project: after just over 200 days, the last of the 4694^2 = 22033636 possible implications between our test list of 4694 equational laws has now been formalized in Lean .
  • teorth.github.io: after just over 200 days, the last of the 4694^2 = 22033636 possible implications between our test list of 4694 equational laws has now been formalized in Lean .

Sophia Wood@Fractal Kitty //
References: The Aperiodical
The 238th Carnival of Mathematics is now available online at Fractal Kitty, rounding up math blog posts from March 2025. This edition, organized by Aperiodical, features a variety of math art and explores interesting facts about the number 238, including that it is 2 × 7 × 17, the sum of the first 13 primes, and a "triprime." The Mathstodon community contributed fun facts about 238, such as its relation to Uranium-238 and its representation in hexadecimal as "EE."

The carnival includes a variety of blog posts and activities from around the mathematical community. Peter Cameron shared thoughts on Compactness, Memories of CFSG, and defending research against government censorship, while other posts covered topics like polyominoes, a modern presentation of Peano Axioms, and the Monty Hall Problem. Karen Campe continued her visual Go For Geometry Series, and Amédée d’Aboville explored Group Theory With Zoombinis. These diverse topics showcase the breadth of interests and engagement within the math world.

Beyond traditional blog posts, the carnival highlights creative endeavors like Ayliean's #MathArtMarch, which showcased crochet, coding, painting, and other artistic expressions inspired by mathematics. There's also discussion happening on platforms like Mathstodon, with Terence Tao sharing insights on dynamical systems and the complexities of linear versus nonlinear regimes. Pat's Blog delves into geometry, discussing properties of rhombuses and extensions of concurrency theorems, demonstrating the vibrant and varied nature of mathematical discussions and explorations.

Recommended read:
References :
  • The Aperiodical: The next issue of the Carnival of Mathematics, rounding up blog posts from the month of March 2025, is now online at Fractal Kitty.

@aperiodical.com //
References: Fractal Kitty
The 238th Carnival of Mathematics, organized by Aperiodical, has been celebrated with a diverse range of submissions and mathematical artwork. The carnival highlights interesting properties of the number 238, which is the product of three primes (2 × 7 × 17) and the sum of the first 13 primes. It's also noted as a "triprime." The event showcases the beauty and fun in mathematics, encouraging exploration and engagement with numbers and their unique attributes. Various individuals from the Mathstodon community contributed interesting facts about 238, further enriching the carnival's celebration of mathematics.

The Carnival features engaging math art and thoughtful blog posts covering diverse topics. Ayliean's #MathArtMarch initiative inspired creative works including crochet, coding, painting, and structural designs. Blog posts include Peter Cameron's reflections on Compactness, Memories of CFSG, and research defense strategies. Further topics discussed were polyominoes, a modern presentation of Peano Axioms, practical math for programmers, the Monty Hall Problem, communication failures, a visual Go For Geometry series, and group theory with Zoombinis.

Prime numbers and their curiosities were also explored, inviting mathematicians and enthusiasts to discover and share interesting properties. The Prime Pages maintain an evolving collection of prime numbers with unique characteristics. "Prime Curios!" is an exciting collection of curiosities, wonders and trivia related to prime numbers. There are currently 31951 curios corresponding to 22773 different numbers in their database. One post highlighted truncatable primes and a game based on creating prime number strings. The goal is to list the small primes that are especially curious and provide explanations understandable to a general audience, fostering further interest and investigation in prime numbers.

@lobste.rs //
Mathematical blogs and platforms are currently buzzing with diverse explorations. Elinor, in a guest post for #MathArtMarch, has curated a collection of favorite mathematical art from the month, providing inspiration for artists and mathematicians alike. Meanwhile, the "exponential sum of the day" page continues to captivate audiences by generating a new figure daily. This figure is created by plotting partial sums and drawing lines between consecutive terms, resulting in visually intriguing patterns that often feature unexpected flat sides.
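
For anyone who wants to reproduce the flavor of those figures, the sketch below plots partial sums of exp(2πi·f(n)) and joins consecutive points; the actual page uses a formula tied to the current date, so the coefficients here are just an illustrative stand-in.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of the kind of picture the "exponential sum of the day" page
# draws: plot the partial sums of exp(2*pi*i*f(n)) and join consecutive points.
# The page's f(n) depends on the current date; the coefficients below are just
# an illustrative stand-in.
def partial_sums(f, N=2000):
    n = np.arange(1, N + 1, dtype=float)
    return np.cumsum(np.exp(2j * np.pi * f(n)))

z = partial_sums(lambda n: n / 4 + n**2 / 9 + n**3 / 25)
plt.plot(z.real, z.imag, linewidth=0.6)
plt.axis("equal")
plt.show()
```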

Recently, Bo’az Klartag has released "Striking new Lower Bounds for Sphere Packing in High Dimensions," which has garnered attention in the mathematical community. Kalai notes that this paper presents a significant breakthrough in the field. Klartag's paper demonstrates that there exists a lattice sphere packing with a density significantly higher than previously known constructions. His proof involves a stochastically evolving ellipsoid designed to accumulate lattice points on its boundary while avoiding them in its interior, a technique rooted in Minkowski's ideas on sphere packing and ellipsoids.

Other areas of mathematical interest include Elliptical Python Programming, as discussed on Susam Pal's blog. Overall, these diverse explorations highlight the vibrant and dynamic nature of mathematical research and its connections to fields like art and computer science.

Recommended read:
References :
  • gilkalai.wordpress.com: Blog on striking new Lower Bounds for Sphere Packing in High Dimensions by Bo’az Klartag
  • Susam Pal: Discusses an example of Elliptical Python Programming.
  • davidlowryduda.com: Blog post: Learning Möbius from Inconvenient Integer Representations

@primes.utm.edu //
This week saw a flurry of mathematical activity, highlighted by the 238th Carnival of Mathematics, organized by Aperiodical. The carnival showcases a variety of submissions and mathematical art, focusing on the number 238 itself. Noteworthy facts about 238 include that it is 2 × 7 × 17, the sum of the first 13 primes, and a "triprime". The carnival also encourages exploration beyond pure mathematics, with community members contributing insights linking the number to uranium isotopes, birth minutes, and even hexadecimal representations. It also shines a light on #MathArtMarch, with examples of crochet, coding, and painting from around the world.

Continuing the daily exploration of numbers, interesting facts and events were highlighted for April 6th, 7th, 8th, and 10th. The number 96, for the 96th day of the year, was examined for its unique properties, such as being the smallest number expressible as the difference of two squares in four different ways. Events like Euler's first paper on partitions (April 7th, 1741) and Al-Biruni's observation of a solar eclipse in 1019 were also noted, linking mathematical concepts to historical contexts. For the 97th day of the year, 97 was highlighted as the largest prime that is smaller than the sum of the squares of its digits.

In recreational mathematics, a "Salute" game for reinforcing multiplication and division was featured, emphasizing the inverse relationship between these operations. Additionally, the concept of "truncatable primes" was explored through a game where players create strings of prime numbers by adding digits to either end of a number. The number 91, for the 91st day of the year, was noted for the fact that 10^n + 91 and 10^n + 93 are twin primes for n = 1, 2, 3, and 4. Finally, highlighting mathematics beyond academia, James Abram Garfield, a former Congressman who later became president, was mentioned for his original proof of the Pythagorean Theorem, illustrating the interdisciplinary nature of mathematics.

@console.cloud.google.com //
References: Compute, BigDATAwire
Google Cloud is empowering global scientific discovery and innovation by integrating Google DeepMind and Google Research technologies with its cloud infrastructure. This initiative aims to provide researchers with advanced, cloud-scale tools for scientific computing. The company is introducing supercomputing-class infrastructure, including H4D VMs powered by AMD CPUs and A4/A4X VMs powered by NVIDIA GPUs, which boast low-latency networking and high memory bandwidth. Additionally, Google Cloud Managed Lustre offers high-performance storage I/O, enabling scientists to tackle large-scale and complex scientific problems.

Google Cloud is also rolling out advanced scientific applications powered by AI models. These include AlphaFold 3 for predicting the structure and interactions of biomolecules, and WeatherNext models for weather forecasting. Moreover, the company is introducing AI agents designed to accelerate scientific discovery. As an example, Google Cloud and Ai2 are investing $20 million in the Cancer AI Alliance to accelerate cancer research using AI, advanced models, and cloud computing power. Google Cloud will provide the AI infrastructure and security, while Ai2 will deliver the training and development of cancer models.

In addition to these advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood. The company claims Ironwood delivers 24 times the computing power of the world’s fastest supercomputer when deployed at scale. Ironwood is specifically designed for inference workloads, marking a shift in Google's AI chip development strategy. When scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power, and each chip comes with 192GB of High Bandwidth Memory.
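
As a quick sanity check on those figures, dividing the quoted pod-level number by the chip count gives the implied per-chip rate; this is only back-of-the-envelope arithmetic based on the numbers stated above.

```python
# Back-of-the-envelope check of the Ironwood figures quoted above.
pod_flops = 42.5e18                  # 42.5 exaflops per pod, as stated
chips_per_pod = 9_216
per_chip = pod_flops / chips_per_pod
print(f"{per_chip / 1e15:.2f} PFLOPS per chip")   # about 4.61 PFLOPS
```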

Recommended read:
References :
  • Compute: Discusses enabling global scientific discovery and innovation on Google Cloud.
  • BigDATAwire: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software

@gilkalai.wordpress.com //
Recent breakthroughs in mathematics have captured the attention of researchers, spanning both theoretical and practical domains. Bo’az Klartag has released a new paper detailing findings on lower bounds for sphere packing in high dimensions. This is a significant achievement as it surpasses previously known constructions. Additionally, advancements are being made in understanding analytic combinatorics and its application to problems such as counting ternary trees.

Klartag's paper presents a novel approach to sphere packing. It proves that in any dimension, there exists an origin-symmetric ellipsoid of specific volume that contains no lattice points other than the origin. This leads to a lattice sphere packing with a density significantly higher than previously achieved, marking a substantial leap forward in this area of study. Gil Kalai, who lives in the same neighborhood as Klartag, was among the first to acknowledge and celebrate this significant accomplishment.

Beyond sphere packing, researchers are also exploring analytic combinatorics and its applications. One specific example involves determining the asymptotic formula for the number of ternary trees with *n* nodes. A recent blog post delves into this problem, showcasing how to derive the surprising formula. Furthermore, incremental computation and dynamic dependencies are being addressed in blog build systems, demonstrating the broad impact of these mathematical and computational advancements.
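
For concreteness, counting ternary trees by internal nodes gives the Fuss-Catalan numbers, and a few lines of Python compare the exact count with the leading-order asymptotic obtained from Stirling's formula; this uses the standard convention, and the blog post's exact statement may differ in normalization.

```python
from math import comb, pi, sqrt

# Counting ternary trees by internal nodes gives the Fuss-Catalan numbers
# C(3n, n) / (2n + 1); compare the exact count with the leading-order
# asymptotic (27/4)^n * sqrt(3/pi)/4 * n^(-3/2) from Stirling's formula.
def ternary_trees(n: int) -> int:
    return comb(3 * n, n) // (2 * n + 1)

def asymptotic(n: int) -> float:
    return (27 / 4) ** n * sqrt(3 / pi) / 4 * n ** -1.5

for n in (10, 20, 40):
    exact, approx = ternary_trees(n), asymptotic(n)
    print(n, exact, f"{approx:.3e}", f"ratio={approx / exact:.4f}")
```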

Recommended read:
References :
  • Combinatorics and more: Bo’az Klartag: Striking new Lower Bounds for Sphere Packing in High Dimensions
  • grossack.site: Wow! ANOTHER blog post? This time about analytic combinatorics and how to show the INCREDIBLY surprising fact that the number of ternary trees on n nodes is asymptotically given by this bizarre formula! Want to know why? Take a look at

@hubblesite.org //
Cosmology has undergone significant changes from 2000 to 2025, marked by an increased understanding of dark matter and dark energy's dominance in the Universe. Evidence gathered in the late 1990s pointed towards these mysterious components making up the majority of the cosmic energy budget, with normal matter contributing a mere 5%. Subsequent data from projects like the Hubble key project, WMAP, and Planck's Cosmic Microwave Background (CMB) observations, alongside extensive supernova and large-scale structure surveys, appeared to solidify this picture. However, tensions have emerged as these different data sets reveal inconsistencies, hinting at a potential need for a breakthrough in cosmological understanding.

The core issue revolves around the Hubble constant, a measure of the Universe's expansion rate. Measurements derived from supernova data, CMB observations, and large-scale structure surveys are not mutually compatible, leading to a significant debate within the scientific community. While some propose a crisis in cosmology, questioning the foundations of the Big Bang and the ΛCDM model, others argue that the situation is less dire. Alterations or modifications to the current cosmological model might be necessary to reconcile the discrepancies and restore order. The DESI survey, designed to measure the evolution of large-scale structure, is crucial in understanding how dark energy affects this evolution.

Furthermore, recent research indicates that dark energy may not be constant, challenging our established cosmological history. Astronomers are also finding the sky brighter than previously thought, necessitating a reanalysis of existing data. Studies involving Type Ia supernovae at high redshifts, as highlighted by the Union2 compilation of 557 supernovae, provide crucial data for refining the understanding of dark energy's equation-of-state parameter. These observations, made possible by telescopes such as the Hubble Space Telescope, Gemini, and the Very Large Telescope, are instrumental in probing the expansion history of the Universe and revealing potential variations in dark energy's behavior over cosmic time.
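
For reference, the equation-of-state parameter mentioned above is the ratio of dark energy's pressure to its energy density; a true cosmological constant has w = −1 at all times, so any measured evolution of w away from −1 would be the kind of signal the DESI hints point toward.

```latex
\[
  w \;\equiv\; \frac{p_{\mathrm{DE}}}{\rho_{\mathrm{DE}}\,c^{2}},
  \qquad w_{\Lambda} = -1 \quad \text{(cosmological constant)} .
\]
```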

Recommended read:
References :
  • bigthink.com: How has cosmology changed from 2000 to 2025?
  • theconversation.com: Article on dark energy and its potential non-constant nature.
  • hubblesite.org: How has cosmology changed from 2000 to 2025?
  • Terence Tao: A new post, on intriguing hints from the DESI survey data that suggest that the cosmological constant (aka "dark energy") might not, in fact, be constant after all.

@Pat'sBlog //
The online mathematics community is buzzing with activity, as evidenced by the 238th Carnival of Mathematics, organized by Aperiodical. This month's carnival showcases diverse submissions and beautiful math art, starting with an exploration of the number 238 itself. Found to be 2 × 7 × 17 and the sum of the first 13 primes, the number also inspired community contributions, with users pointing out its appearance in uranium isotopes, its hexadecimal representation, and even birth-minute celebrations. The carnival highlights the engaging and creative ways people interact with mathematical concepts online.

The carnival features a collection of blog posts and activities from various math enthusiasts. Number yoga is explored as a technique to develop creative reasoning and comprehension in mathematics. This involves noticing details, wondering about possibilities, and creating explanations or related puzzles. Also featured are posts on polyominoes, a modern presentation of Peano Axioms, practical math for programmers, the Monty Hall Problem, and group theory using Zoombinis. Karen Campe also continues her visual "Go For Geometry" series.

Furthermore, the online discussion includes extensions of basic geometry, focusing on pedal triangles and related theorems. A blog post delves into generalizations of perpendiculars from a point in a triangle, highlighting properties of the orthocenter and the orthic triangle. The orthic triangle's perimeter and its connection to the angles of the original triangle are discussed. The community also shares the art from Ayliean's MathArtMarch.
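
One standard identity behind that discussion: for an acute triangle with sides a, b, c, opposite angles A, B, C, and circumradius R, the orthic triangle has side lengths a·cos A, b·cos B, c·cos C, so its perimeter ties directly to the angles of the original triangle.

```latex
\[
  P_{\text{orthic}} \;=\; a\cos A + b\cos B + c\cos C \;=\; 4R\,\sin A\,\sin B\,\sin C .
\]
```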

@x.com //
References: IEEE Spectrum
The integration of Artificial Intelligence (AI) into coding practices is rapidly transforming software development, with engineers increasingly leveraging AI to generate code based on intuitive "vibes." Inspired by the approach of Andrej Karpathy, developers like Naik and Touleyrou are using AI to accelerate their projects, creating applications and prototypes with minimal prior programming knowledge. This emerging trend, known as "vibe coding," streamlines the development process and democratizes access to software creation.

Open-source AI is playing a crucial role in these advancements, particularly among younger developers who are quick to embrace new technologies. A recent Stack Overflow survey of over 1,000 developers and technologists reveals a strong preference for open-source AI, driven by a belief in transparency and community collaboration. While experienced developers recognize the benefits of open-source due to their existing knowledge, younger developers are leading the way in experimenting with these emerging technologies, fostering trust and accelerating the adoption of open-source AI tools.

To further enhance the capabilities and reliability of AI models, particularly in complex reasoning tasks, Microsoft researchers have introduced inference-time scaling techniques. In addition, Amazon Bedrock Evaluations now offers enhanced capabilities to evaluate Retrieval Augmented Generation (RAG) systems and models, providing developers with tools to assess the performance of their AI applications. The introduction of "bring your own inference responses" allows for the evaluation of RAG systems and models regardless of their deployment environment, while new citation metrics offer deeper insights into the accuracy and relevance of retrieved information.

Megan Crouse@techrepublic.com //
Researchers from DeepSeek and Tsinghua University have recently made significant advancements in AI reasoning capabilities. By combining Reinforcement Learning with a self-reflection mechanism, they have created AI models that can achieve a deeper understanding of problems and solutions without needing external supervision. This innovative approach is setting new standards for AI development, enabling models to reason, self-correct, and explore alternative solutions more effectively. The advancements showcase that outstanding performance and efficiency don’t require secrecy.

Researchers have implemented the Chain-of-Action-Thought (COAT) approach in these enhanced AI models. This method leverages special tokens such as "continue," "reflect," and "explore" to guide the model through distinct reasoning actions. This allows the AI to navigate complex reasoning tasks in a more structured and efficient manner. The models are trained in a two-stage process.

DeepSeek has also released papers expanding on reinforcement learning for LLM alignment. Building off prior work, they introduce Rejective Fine-Tuning (RFT) and Self-Principled Critique Tuning (SPCT). The first method, RFT, has a pre-trained model produce multiple responses and then evaluates and assigns reward scores to each response based on generated principles, helping the model refine its output. The second method, SPCT, uses reinforcement learning to improve the model’s ability to generate critiques and principles without human intervention, creating a feedback loop where the model learns to self-evaluate and improve its reasoning capabilities.
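
At a high level, the RFT loop described above can be pictured as "sample several responses, score each against generated principles, keep the high-reward ones for fine-tuning." The sketch below is a conceptual illustration of that loop, not DeepSeek's implementation; generate and score_with_principles are hypothetical placeholders for model calls, and the threshold rule is a simplification.

```python
from typing import Callable, List, Tuple

# Conceptual sketch of a rejective fine-tuning style data-selection loop, as
# summarized above. This is NOT DeepSeek's implementation: `generate` and
# `score_with_principles` are hypothetical placeholders standing in for
# model calls, and the threshold rule is a simplification.
def collect_rft_examples(
    prompts: List[str],
    generate: Callable[[str, int], List[str]],           # sample k responses per prompt
    score_with_principles: Callable[[str, str], float],  # reward from generated principles
    k: int = 8,
    threshold: float = 0.7,
) -> List[Tuple[str, str]]:
    kept = []
    for prompt in prompts:
        candidates = generate(prompt, k)
        scored = [(score_with_principles(prompt, c), c) for c in candidates]
        best_score, best = max(scored)
        if best_score >= threshold:   # reject prompts with only low-reward responses
            kept.append((prompt, best))
    return kept
```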

Recommended read:
References :
  • hlfshell: DeepSeek released another cool paper expanding on reinforcement learning for LLM alignment. Building off of their prior work (which I talk about here), they introduce two new methods.
  • www.techrepublic.com: Researchers from DeepSeek and Tsinghua University say combining two techniques improves the answers the large language model creates with computer reasoning techniques.

@thequantuminsider.com //
References: medium.com, mrtecht.medium.com
The rise of quantum computing is creating a new era of strategic competition, with nations and organizations racing to prepare for the potential disruption to modern encryption. Quantum computers, leveraging qubits that can exist in multiple states simultaneously, have the potential to break current encryption standards, revolutionize fields like medicine and finance, and reshape global power dynamics. Governments and businesses are acutely aware of this threat, with the U.S. scrambling to implement quantum-resistant cryptography and China investing heavily in quantum networks. This competition extends to technology controls, with the U.S. restricting China's access to quantum technology, mirroring actions taken with advanced semiconductors.

The urgency stems from the fact that a cryptanalytically relevant quantum computer capable of breaking common public-key schemes like RSA or ECC is anticipated by 2030. To address this, the National Institute of Standards and Technology (NIST) has standardized quantum-secure algorithms and set a 2030 deadline for their implementation, alongside the deprecation of current cryptographic methods. Utimaco, for example, has launched Quantum Protect, a post-quantum cryptography (PQC) application package for its u.trust General Purpose HSM Se-Series, enabling secure migration ahead of the quantum threat. The package supports the NIST-standardized PQC algorithms ML-KEM and ML-DSA, as well as the stateful hash-based signatures LMS and XMSS.

Efforts are also underway to secure blockchain technology against quantum attacks. Blockchains rely on cryptography techniques like public-key cryptography and hashing to keep transactions secure, however, quantum computers could potentially weaken these protections. Post-quantum cryptography focuses on developing encryption methods resistant to quantum attacks. Key approaches include Lattice-Based Cryptography, which uses complex mathematical structures that quantum computers would struggle to solve. The transition to a quantum-resistant future presents challenges, including the need for crypto-agility and the development of secure migration strategies.
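
To illustrate the lattice-based idea in the paragraph above, here is a toy Regev-style LWE encryption of a single bit. It is only a didactic sketch: the parameters are far too small, the noise sampling is simplistic, and it is not ML-KEM or any standardized scheme.

```python
import random

# Toy Regev-style LWE encryption of a single bit, illustrating the
# lattice-based idea mentioned above. This is NOT ML-KEM or any standardized
# scheme: the parameters are far too small and the noise sampling is
# simplistic; it exists only to show the structure.
n, m, q = 16, 64, 3329              # dimension, samples, modulus (demo sizes only)

def small_noise() -> int:
    return random.randint(-2, 2)

def keygen():
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + small_noise()) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    r = [random.randint(0, 1) for _ in range(m)]          # random subset of rows
    u = [sum(A[i][j] * r[i] for i in range(m)) % q for j in range(n)]
    v = (sum(b[i] * r[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(s[j] * u[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0            # near q/2 decodes to 1

s, pk = keygen()
assert all(decrypt(s, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))
print("toy LWE round-trip OK")
```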

Recommended read:
References :
  • medium.com: Approaching post-quantum cryptography: an overview of the most well-known algorithms
  • mrtecht.medium.com: The Quantum Threat to Your Encryption is Coming: Understanding Post-Quantum Cryptography
  • The Quantum Insider: Utimaco Launches Post Quantum Security App Package

Greg Bock@The Quantum Insider //
References: The Quantum Insider
Quantum computing has taken a significant leap forward with Phasecraft's development of a novel quantum simulation method called THRIFT (Trotter Heuristic Resource Improved Formulas for Time-dynamics). This breakthrough, detailed in a recent *Nature Communications* publication, drastically improves simulation efficiency and lowers computational costs, bringing real-world quantum applications closer to reality. THRIFT optimizes quantum simulations by prioritizing interactions with different energy scales within quantum systems, streamlining their implementation into smaller, more manageable steps.

This approach allows larger and longer simulations to be executed without increasing quantum circuit size, thereby reducing computational resources and costs. In benchmarking tests on the 1D transverse-field Ising model, a widely used benchmark in quantum physics, THRIFT achieved a tenfold improvement in both simulation estimates and circuit complexity, enabling simulations that are ten times larger and run ten times longer than traditional methods. This development holds promise for advances in materials science and drug discovery.
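
For context on what a product-formula simulation looks like in the simplest case, the snippet below compares a plain first-order Lie-Trotter approximation against the exact propagator for a two-spin transverse-field Ising Hamiltonian; this is the textbook construction that methods like THRIFT refine, not Phasecraft's scheme itself.

```python
import numpy as np
from scipy.linalg import expm

# Plain first-order Lie-Trotter product formula for H = A + B, compared with
# the exact propagator on a two-spin transverse-field Ising model. This is the
# textbook construction that methods like THRIFT refine; it is not Phasecraft's
# THRIFT scheme itself.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

J, h, t, steps = 1.0, 0.5, 1.0, 50
A = -J * np.kron(Z, Z)                         # interaction term
B = -h * (np.kron(X, I) + np.kron(I, X))       # transverse-field term

exact = expm(-1j * t * (A + B))
dt = t / steps
trotter = np.linalg.matrix_power(expm(-1j * dt * A) @ expm(-1j * dt * B), steps)

print("operator-norm error:", np.linalg.norm(exact - trotter, 2))
```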

Separately, mathematicians have achieved a breakthrough in modeling melting ice and similar phenomena: a new proof resolves long-standing issues related to singularities in a powerful technique used to model such processes. As described in Quanta Magazine, the analysis had previously been hampered by "nightmare scenarios" in which singularities could derail the evolution of the surface being modeled; the new proof removes that obstacle, allowing mathematicians to continue tracking the surface's evolution even after a singularity appears.

Finally, researchers at Cornell University have introduced a novel data representation method inspired by quantum mechanics that tackles the challenge of handling big, noisy data sets. This quantum statistical approach simplifies large data sets and filters out noise, allowing for more efficient analysis than traditional methods. By borrowing mathematical structures from quantum mechanics, this technique enables a more concise representation of complex data, potentially revolutionizing innovation in data-rich fields such as healthcare and epigenetics where traditional methods have proven insufficient.

Recommended read:
References :
  • The Quantum Insider: Press RELEASE — In a breakthrough that puts us a step closer to real-world quantum applications, Phasecraft – the quantum algorithms company – has developed a novel approach to quantum simulation that significantly improves efficiency while cutting computational costs. The method, known as THRIFT (Trotter Heuristic Resource Improved Formulas for Time-dynamics), optimizes the quantum.

@simonwillison.net //
Google has broadened access to its advanced AI model, Gemini 2.5 Pro, showcasing impressive capabilities and competitive pricing designed to challenge rival models like OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. Google's latest flagship model is currently recognized as a top performer, excelling in Optical Character Recognition (OCR), audio transcription, and long-context coding tasks. Alphabet CEO Sundar Pichai highlighted Gemini 2.5 Pro as Google's "most intelligent model + now our most in demand." Demand has increased by over 80 percent this month alone across both Google AI Studio and the Gemini API.

Google's expansion includes a tiered pricing structure for the Gemini 2.5 Pro API, offering a more affordable option compared to competitors. Prompts with less than 200,000 tokens are priced at $1.25 per million for input and $10 per million for output, while larger prompts increase to $2.50 and $15 per million tokens, respectively. Although prompt caching is not yet available, its future implementation could potentially lower costs further. The free tier allows 500 free grounding queries with Google Search per day, with an additional 1,500 free queries in the paid tier, with costs per 1,000 queries set at $35 beyond that.
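
Plugging the published rates into a small helper makes the tiering concrete; prices are as quoted above in USD per million tokens, and actual billing, caching discounts, and grounding fees are not modeled.

```python
# Rough cost helper using the Gemini 2.5 Pro preview prices quoted above
# (USD per million tokens). Actual billing, caching discounts, and grounding
# fees are not modeled.
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    long_prompt = input_tokens > 200_000
    in_rate = 2.50 if long_prompt else 1.25     # $ per 1M input tokens
    out_rate = 15.00 if long_prompt else 10.00  # $ per 1M output tokens
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# The "hi" example from the notes below (2 input tokens, 623 output tokens,
# thinking tokens included) comes out to roughly $0.0062, i.e. about 0.62 cents.
print(f"${gemini_25_pro_cost(2, 623):.4f}")
```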

The AI research group EpochAI reported that Gemini 2.5 Pro scored 84% on the GPQA Diamond benchmark, surpassing the typical 70% score of human experts. This benchmark assesses challenging multiple-choice questions in biology, chemistry, and physics, validating Google's benchmark results. The model is now available in both a paid and a free tier: data from the free tier can be used to improve Google's products, while paid-tier data cannot. Rate limits vary by tier, ranging from 150 to 2,000 requests per minute. Google will retire the Gemini 2.0 Pro preview entirely in favor of 2.5.

Recommended read:
References :
  • Data Phoenix: Google Unveils Gemini 2.5: Its Most Intelligent AI Model Yet
  • AI News | VentureBeat: Gemini 2.5 Pro is now available without limits and for cheaper than Claude, GPT-4o
  • Simon Willison's Weblog: Google's Gemini 2.5 Pro is currently the top model and a superb model for OCR, audio transcription and long-context coding. You can now pay for it! The new gemini-2.5-pro-preview-03-25 model ID is priced like this: prompts less than 200,000 tokens: $1.25/million tokens for input, $10/million for output; prompts more than 200,000 tokens (up to the 1,048,576 max): $2.50/million for input, $15/million for output. This is priced at around the same level as Gemini 1.5 Pro ($1.25/$5 for input/output below 128,000 tokens, $2.50/$10 above 128,000 tokens), is cheaper than GPT-4o for shorter prompts ($2.50/$10) and is cheaper than Claude 3.7 Sonnet ($3/$15). Gemini 2.5 Pro is a reasoning model, and invisible reasoning tokens are included in the output token count. I just tried prompting "hi" and it charged me 2 tokens for input and 623 for output, of which 613 were "thinking" tokens. That still adds up to just 0.6232 cents (less than a cent). I released an update to my llm-gemini plugin this morning adding support for the new model: llm install -U llm-gemini; llm -m gemini-2.5-pro-preview-03-25 hi. Note that the model continues to be available for free under the previous gemini-2.5-pro-exp-03-25 model ID: llm -m gemini-2.5-pro-exp-03-25 hi. The free tier is "used to improve our products", the paid tier is not. Rate limits for the paid model: 150/minute and 1,000/day for tier 1 (billing configured), 1,000/minute and 50,000/day for Tier 2 ($250 total spend) and 2,000/minute and unlimited/day for Tier 3 ($1,000 total spend). Meanwhile the free tier continues to limit you to 5 requests per minute and 25 per day. Google are retiring the previous Gemini 2.0 Pro preview entirely in favour of 2.5.
  • THE DECODER: Google has opened broader access to Gemini 2.5 Pro, its latest AI flagship model, which demonstrates impressive performance in scientific testing while introducing competitive pricing.
  • Bernard Marr: Google's latest AI model, Gemini 2.5 Pro, is poised to streamline complex mathematical and coding operations.
  • The Cognitive Revolution: In this illuminating episode of The Cognitive Revolution, host Nathan Labenz speaks with Jack Rae, principal research scientist at Google DeepMind and technical lead on Google's thinking and inference time scaling work.
  • bsky.app: Gemini 2. 5 Pro pricing was announced today - it's cheaper than both GPT-4o and Claude 3.7 Sonnet I've updated my llm-gemini plugin to add support for the new paid model Full notes here:
  • Last Week in AI: Google unveils a next-gen AI reasoning model, OpenAI rolls out image generation powered by GPT-4o to ChatGPT, Tencent’s Hunyuan T1 AI reasoning model rivals DeepSeek in performance and price

@www.quantamagazine.org //
Quantum computing faces the challenge of demonstrating a consistent advantage over classical computing. Ewin Tang's work on "dequantizing" quantum algorithms has questioned the assumption that quantum computers can always outperform classical ones. Tang designed classical algorithms to match the speed of quantum algorithms in solving certain problems, initiating an approach where researchers seek classical counterparts to quantum computations. This raises fundamental questions about the true potential and future trajectory of quantum computing, especially considering the resources required.

The discussion extends to the costs associated with quantum randomness, exploring pseudorandomness as a practical alternative. Researchers at the University of the Witwatersrand have found a method to shield quantum information from environmental disruptions, which could lead to more stable quantum computers and networks. Despite the potential of quantum computing to revolutionize fields like science, pharmaceuticals, and healthcare, limitations in energy demands and computing power suggest that it will likely be applied selectively to areas where it offers the most significant advantage, rather than replacing classical computing across all applications.

Recommended read:
References :
  • Quanta Magazine: What Is the True Promise of Quantum Computing?
  • Bernard Marr: Quantum Vs. Classical Computing: Understanding Tomorrow's Tech Balance
  • Frederic Jacobs: ⚛️ An attempt to prove that a quantum algorithm had an exponential speedup compared to classical systems turned out to show that classical computers can solve the recommendation problem nearly as fast as quantum computers. This further reduces the amount of commercially-interesting problems quantum computers are believed to be useful for. Great discussion with Ewin Tang on that process.
  • mstdn.social: An attempt to prove that a quantum algorithm had an exponential speedup compared to classical systems turned out to show that classical computers can solve the recommendation problem nearly as fast as quantum computers.

Terence Tao@What's new //
References: beuke.org, What's new
Terence Tao has recently uploaded a paper to the arXiv titled "Decomposing a factorial into large factors." The paper explores a mathematical quantity, denoted t(N), which represents the largest value such that N! can be factorized into t(N) factors, each of which is at least N. This concept, initially introduced by Erdős, concerns how equitably a factorial can be split into its constituent factors.

Erdős initially conjectured that an upper bound on t(N) was asymptotically sharp, implying that factorials could be split into factors of nearly uniform size for large N. However, a purported proof by Erdős, Selfridge, and Straus was lost, leaving the assertion as a conjecture. The paper establishes bounds on t(N), recovering a previously lost result. Further conjectures were made by Guy and Selfridge, exploring whether related relationships hold for all values of N.
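
To make the definition concrete, the following brute-force sketch computes t(N) for small N directly from the definition quoted above; it becomes impractical quickly, but small values such as t(5) = 2 and t(6) = 3 are easy to check by hand.

```python
from math import factorial

# Brute-force computation of t(N): the largest t such that N! can be written
# as a product of t factors, each at least N. Practical only for small N
# (e.g. t(5) = 2, since three factors >= 5 would already exceed 120 = 5!,
# and t(6) = 3 via 720 = 6 * 10 * 12).
def t(N: int) -> int:
    target = factorial(N)
    best = 0

    def search(remaining: int, min_factor: int, count: int) -> None:
        nonlocal best
        if remaining >= min_factor:
            best = max(best, count + 1)   # finish with `remaining` as the last factor
        d = min_factor
        while d * d <= remaining:         # non-decreasing factors avoid duplicates
            if remaining % d == 0:
                search(remaining // d, d, count + 1)
            d += 1

    search(target, N, 0)
    return best

for N in range(3, 10):
    print(N, t(N))
```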

On March 30th, mathematical enthusiasts celebrated facts related to the number 89. Eighty-nine is a Fibonacci prime, and patterns emerge when examining its reciprocal. Also, the number 89 can be obtained by summing the first 5 integers raised to the power of the first 5 Fibonacci numbers. 89 is also related to Armstrong numbers, which are numbers that equal the sum of their digits each raised to the power of the number of digits.

Recommended read:
References :
  • beuke.org: Profunctor optics are a modern, category-theoretic generalization of optics – bidirectional data accessors used to focus on and update parts of a data structure.
  • What's new: I've just uploaded to the arXiv the paper "Decomposing a factorial into large factors". This paper studies the quantity t(N), defined as the largest quantity such that it is possible to factorize N! into t(N) factors, each of which is at least N.

Webb Wright@Quanta Magazine //
Researchers are making significant strides in reducing the costs associated with quantum randomness, a crucial element for cryptography and simulations. Traditionally, obtaining true quantum randomness has been complex and expensive. However, the exploration of "pseudorandomness" offers a practical alternative, allowing researchers to utilize computational algorithms that mimic randomness, thus sidestepping the high costs of pure quantum randomness. This development broadens the accessibility of randomness, enabling researchers to pursue new scientific investigations.

A team from JPMorganChase, Quantinuum, multiple national labs, and UT Austin delivered the first successful demonstration of a certified quantum randomness protocol. Using a 56-qubit quantum machine, they output more randomness than they initially put in, a feat considered impossible for even the most powerful classical supercomputers. This achievement could open new doors for quantum computing and cryptography research.

Recommended read:
References :
  • The Quantum Insider: Joint Research Team Achieves Certified Quantum Randomness, Turns Once Theoretical Experiments Into First Commercial Applications For Quantum Computing
  • Quanta Magazine: The High Cost of Quantum Randomness Is Dropping
  • hetarahulpatel.medium.com: Random Numbers Just Got Real, Thanks to Quantum Magic!

Ryan Daws@www.artificialintelligence-news.com //
References: THE DECODER, venturebeat.com, ...
Anthropic has unveiled groundbreaking insights into the 'AI biology' of their advanced language model, Claude. Through innovative methods, researchers have been able to peer into the complex inner workings of the AI, demystifying how it processes information and learns strategies. This research provides a detailed look at how Claude "thinks," revealing sophisticated behaviors previously unseen, and showing these models are more sophisticated than previously understood.

These new methods allowed scientists to discover that Claude plans ahead when writing poetry and sometimes lies, showing the AI is more complex than previously thought. The new interpretability techniques, which the company dubs “circuit tracing” and “attribution graphs,” allow researchers to map out the specific pathways of neuron-like features that activate when models perform tasks. This approach borrows concepts from neuroscience, viewing AI models as analogous to biological systems.

This research, published in two papers, marks a significant advancement in AI interpretability, drawing inspiration from neuroscience techniques used to study biological brains. Joshua Batson, a researcher at Anthropic, highlighted the importance of understanding how these AI systems develop their capabilities, emphasizing that these techniques allow them to learn many things they “wouldn’t have guessed going in.” The findings have implications for ensuring the reliability, safety, and trustworthiness of increasingly powerful AI technologies.

Recommended read:
References :
  • THE DECODER: Anthropic and Databricks have entered a five-year partnership worth $100 million to jointly sell AI tools to businesses.
  • venturebeat.com: Anthropic has developed a new method for peering inside large language models like Claude, revealing for the first time how these AI systems process information and make decisions.
  • venturebeat.com: Anthropic scientists expose how AI actually ‘thinks’ — and discover it secretly plans ahead and sometimes lies
  • www.artificialintelligence-news.com: Anthropic provides insights into the ‘AI biology’ of Claude
  • www.techrepublic.com: ‘AI Biology’ Research: Anthropic Looks Into How Its AI Claude ‘Thinks’
  • THE DECODER: Anthropic's AI microscope reveals how Claude plans ahead when generating poetry
  • The Tech Basic: Anthropic Now Redefines AI Research With Self Coordinating Agent Networks

Matt Marshall@AI News | VentureBeat //
Microsoft is enhancing its Copilot Studio platform with AI-driven improvements, introducing deep reasoning capabilities that enable agents to tackle intricate problems through methodical thinking and combining AI flexibility with deterministic business process automation. The company has also unveiled specialized deep reasoning agents for Microsoft 365 Copilot, named Researcher and Analyst, to help users achieve tasks more efficiently. These agents are designed to function like personal data scientists, processing diverse data sources and generating insights through code execution and visualization.

Microsoft's focus includes securing AI and using it to bolster security measures, as demonstrated by the upcoming Microsoft Security Copilot agents and new security features. Microsoft aims to provide an AI-first, end-to-end security platform that helps organizations secure their future, one example being AI agents designed to autonomously assist with phishing triage, data security, and identity management. The Security Copilot tool will automate routine tasks, allowing IT and security staff to focus on more complex issues and aiding in defense against cyberattacks.

Recommended read:
References :
  • Microsoft Security Blog: Learn about the upcoming availability of Microsoft Security Copilot agents and other new offerings for a more secure AI future.
  • www.zdnet.com: Designed for Microsoft's Security Copilot tool, the AI-powered agents will automate basic tasks, freeing IT and security staff to tackle more complex issues.

Maximilian Schreiner@THE DECODER //
Google DeepMind has announced Gemini 2.5 Pro, its latest and most advanced AI model to date. This new model boasts enhanced reasoning capabilities and improved accuracy, marking a significant step forward in AI development. Gemini 2.5 Pro is designed with built-in 'thinking' capabilities, enabling it to break down complex tasks into multiple steps and analyze information more effectively before generating a response. This allows the AI to deduce logical conclusions, incorporate contextual nuances, and make informed decisions with unprecedented accuracy, according to Google.

The Gemini 2.5 Pro has already secured the top position on the LMArena leaderboard, surpassing other AI models in head-to-head comparisons. This achievement highlights its superior performance and high-quality style in handling intricate tasks. The model also leads in math and science benchmarks, demonstrating its advanced reasoning capabilities across various domains. This new model is available as Gemini 2.5 Pro (experimental) on Google’s AI Studio and for Gemini Advanced users on the Gemini chat interface.

Recommended read:
References :
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
  • Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
  • Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
  • Interconnects: Gemini 2.5 Pro and Google's second chance with AI
  • SiliconANGLE: Google introduces Gemini 2.5 Pro with chain-of-thought reasoning built-in
  • AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
  • Analytics Vidhya: Gemini 2.5 Pro is Now #1 on Chatbot Arena with Impressive Jump
  • www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
  • Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
  • bdtechtalks.com: What to know about Google Gemini 2.5 Pro
  • TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
  • AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
  • thezvi.wordpress.com: Gemini 2.5 is the New SoTA
  • www.infoworld.com: Google has introduced version 2.5 of its Gemini model, which the company said offers a new level of performance by combining an enhanced base model with improved post-training.
  • Composio: Gemini 2.5 Pro vs. Claude 3.7 Sonnet: Coding Comparison
  • Composio: Google dropped its best-ever creation, Gemini 2.5 Pro Experimental, on March 25. It is a stupidly incredible reasoning model.
  • www.tomsguide.com: Gemini 2.5 Pro is now free to all users in surprise move
  • Analytics India Magazine: Did Google Just Build The Best AI Model for Coding?
  • www.zdnet.com: Everyone can now try Gemini 2.5 Pro - for free

Stephen Ornes@Quanta Magazine //
References: Quanta Magazine, medium.com
A novel quantum algorithm has demonstrated a speedup over classical computers for a significant class of optimization problems, according to a recent report. This breakthrough could represent a major advancement in harnessing the potential of quantum computers, which have long promised faster solutions to complex computational challenges. The new algorithm, known as decoded quantum interferometry (DQI), outperforms all known classical algorithms in finding good solutions to a wide range of optimization problems, which involve searching for the best possible solution from a vast number of choices.

Classical researchers have been struggling to keep up with this quantum advancement. Reports of quantum algorithms often spark excitement, partly because they can offer new perspectives on difficult problems. The DQI algorithm is considered a "breakthrough in quantum algorithms" by Gil Kalai, a mathematician at Reichman University. While quantum computers have generated considerable buzz, it has been challenging to identify specific problems where they can significantly outperform classical machines. This new algorithm demonstrates the potential for quantum computers to excel in optimization tasks, a development that could have broad implications across various fields.

Recommended read:
References :
  • Quanta Magazine: Quantum computers can answer questions faster than classical machines. A new algorithm appears to do it for some critical optimization tasks.
  • medium.com: How Qubits Are Rewriting the Rules of Computation