@Trebor
//
References:
Trebor
Recent discussions in theoretical computer science and programming have touched on diverse topics, ranging from type theory for SDG (synthetic differential geometry) to the complexities encountered in programming. One thread explored the characteristics a type theory for SDG should possess, suggesting it should include a judgmentally commutative ring R, possibly a Q-algebra, where neutral forms of type R are polynomials with other neutral forms as indeterminates. Participants believe such a system would have decidable typechecking.
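A minimal sketch of what such normal forms could look like, under our own assumptions about representation (illustrative only, not from the thread): a normal form of type R is a polynomial with rational coefficients whose indeterminates are opaque neutral atoms, and the ring equations hold by normalization.

```python
from fractions import Fraction
from collections import defaultdict

# Normal forms of type R as polynomials over opaque "neutral atoms"
# (stuck variables or applications), with Fraction coefficients for
# the Q-algebra structure. Monomials are sorted tuples of atom names,
# so dictionary equality decides equality of normal forms.

def poly(*terms):
    """Build {monomial: coefficient} from (coeff, atom, atom, ...) terms."""
    p = defaultdict(Fraction)
    for coeff, *atoms in terms:
        p[tuple(sorted(atoms))] += Fraction(coeff)
    return {m: c for m, c in p.items() if c}

def add(p, q):
    r = defaultdict(Fraction, p)
    for m, c in q.items():
        r[m] += c
    return {m: c for m, c in r.items() if c}

def mul(p, q):
    r = defaultdict(Fraction)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(sorted(m1 + m2))] += c1 * c2
    return {m: c for m, c in r.items() if c}

x, y = poly((1, "x")), poly((1, "y"))
# (x + y)(x - y) and x^2 - y^2 reach the same normal form, so the
# ring equations hold judgmentally and equality stays decidable.
assert mul(add(x, y), add(x, poly((-1, "y")))) == \
       add(mul(x, x), poly((-1, "y", "y")))
```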
A common sentiment among programmers, particularly those using languages with strict, expressive type systems such as Rust, is the initial hurdle of satisfying the compiler's requirements. Some have described the experience as an engaging puzzle that can involve spending considerable time convincing the compiler that their code is valid. The discussion also addressed the subjective nature of "complexity" in programming, suggesting it is a term often used to dismiss unfamiliar concepts rather than a concrete measure of inherent difficulty. In related news, Microsoft's Krysta Svore has announced geometric error-correcting codes as a potential advancement toward practical quantum computing. These codes utilize high-dimensional geometry to enhance performance, potentially leading to more efficient encoding and logical operations with fewer qubits. The approach builds on topological error correction, employing a mathematical method called the Hermite normal form to reshape the grid, resulting in substantial reductions in qubit count and faster logical clock speeds. In one notable case, the team achieved six logical qubits using just 96 physical qubits, a 16-to-1 physical-to-logical ratio that would mark a significant improvement over standard two-dimensional codes. Recommended read:
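For the curious, the Hermite normal form itself is easy to experiment with; this SymPy toy computes a canonical triangular basis for an integer lattice and is purely illustrative, not Microsoft's construction:

```python
from sympy import Matrix
from sympy.matrices.normalforms import hermite_normal_form

# The Hermite normal form rewrites an integer lattice basis into a
# canonical triangular basis for the same lattice -- the "reshaping"
# tool named above. Toy computation; not Microsoft's construction.

basis = Matrix([[4, 2, 1],
                [2, 8, 3],
                [0, 2, 9]])
print(hermite_normal_form(basis))
```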
References:
Steve Vandenberg@Microsoft Security Blog
//
Microsoft is making significant strides in AI and data security, demonstrated by recent advancements and reports. The company's commitment to responsible AI is highlighted in its 2025 Responsible AI Transparency Report, detailing efforts to build trustworthy AI technologies. Microsoft is also addressing the critical issue of data breach reporting, offering solutions like Microsoft Data Security Investigations to assist organizations in meeting stringent regulatory requirements such as GDPR and SEC rules. These initiatives underscore Microsoft's dedication to ethical and secure AI development and deployment across various sectors.
AI's transformative potential is being explored in higher education, with Microsoft providing AI solutions for creating AI-ready campuses. Institutions are focusing on using AI for unique differentiation and innovation rather than just automation and cost savings. Strategies include establishing guidelines for responsible AI use, fostering collaborative communities for knowledge sharing, and partnering with technology vendors like Microsoft, OpenAI, and NVIDIA. Comprehensive training programs are also essential to ensure stakeholders are proficient with AI tools, promoting a culture of experimentation and ethical AI practices. Furthermore, Microsoft Research has achieved a breakthrough in computational chemistry by using deep learning to enhance the accuracy of density functional theory (DFT). This advancement allows for more reliable predictions of molecular and material properties, accelerating scientific discovery in fields such as drug development, battery technology, and green fertilizers. By generating vast amounts of accurate data and using scalable deep-learning approaches, the team has overcome limitations in DFT, enabling the design of molecules and materials through computational simulations rather than relying solely on laboratory experiments. Recommended read:
References:
@martinescardo.github.io
//
References:
ellipticnews.wordpress.com
The mathematics community is buzzing with activity, including upcoming online events and ongoing discussions about research methodologies. A significant event to watch for is the online celebration marking the 40th anniversary of Elliptic Curve Cryptography (ECC) on August 11, 2025, commemorating the foundational work of Victor Miller and Neal Koblitz in 1985. It is anticipated to be an important event for the cryptography community and for those who work with elliptic curves.
The ECC celebration will feature personal reflections from Miller and Koblitz, alongside lectures by Dan Boneh and Kristin Lauter, who will explore ECC's broad impact on cryptography and its unforeseen applications. The history of ECC serves as a compelling example of how fundamental research can lead to unexpected practical outcomes, and as a strong argument for supporting blue-skies research. In other news, mathematicians are actively discussing the use of formal methods in their research. One Mathstodon user described using LaTeX and Agda in TypeTopology for writing papers and formalizing mathematical remarks, finding that formalizing the remarks in a paper could reveal errors in thinking and improve results, even in meta-mathematical methodology. This shows how computational tools are increasingly being used to verify and explore mathematical ideas, highlighting the practical utility of pure math skills in applied contexts. Recommended read:
References:
@forge.dyalog.com
//
The APL Forge competition is in its final week, with the deadline for submissions set for Monday, June 23, 2025, at 12:00 UTC. This annual event is designed to promote the use and development of the APL programming language, rewarding developers for using Dyalog APL to solve problems and build libraries, applications, and tools. Participants are challenged to create innovative open-source libraries and commercial applications using Dyalog APL.
Whether you're an individual, a group, or a company, if you have a passion for problem-solving in APL, this competition is for you. The winner will receive £2,500 (GBP) and an expenses-paid trip to present at the next Dyalog user meeting. Those looking for inspiration are encouraged to check out the project ideas listed on the APL Forge website, where they can also find eligibility and judging criteria, submission guidelines, and frequently asked questions. For more information and to enter, visit forge.dyalog.com. Recommended read:
References:
@phys.org
//
References:
bigthink.com, phys.org
Recent research is challenging previous assumptions about the composition and structure of the smallest galaxies. These galaxies were traditionally believed to be dominated by dark matter, their normal matter having been expelled by stellar winds and radiation during star formation, but new evidence suggests that supermassive black holes may play a more significant role than previously thought. A recent study indicates that Segue 1, known as the most dark matter-dominated galaxy, might harbor a supermassive black hole at its center, potentially altering our understanding of galactic dynamics in low-mass systems. This proposition offers an alternative explanation for the observed gravitational effects, suggesting that central black holes could be anchoring these tiny galaxies.
The realm of statistical analysis is also undergoing significant advancements. Mathematician Tyron Lardy has pioneered a novel approach to hypothesis testing, utilizing e-values instead of the conventional p-values. E-values, short for "expected values", provide greater flexibility, particularly during mid-study analysis when adjustments to data collection or analysis plans are necessary. Unlike p-values, which require conclusions to be drawn only after all data is gathered to maintain statistical validity, e-values remain statistically sound even when the research process is modified along the way. This holds promise for fields like medicine and psychology, where complex situations often demand adaptable data handling. The construction of e-values is based on the concept of betting: the e-value signifies the potential earnings from a bet against the initial assumption, so a large e-value is quantifiable evidence that the assumption no longer holds. While the general method for calculating optimal e-values can be intricate, their flexibility and robustness under data adjustments make them a valuable tool for scientific research, enhancing the reliability and adaptability of hypothesis testing across disciplines. Recommended read:
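A minimal sketch of the betting interpretation, assuming a simple coin-fairness null (this is a generic likelihood-ratio test martingale, not Lardy's construction):

```python
from fractions import Fraction

# A generic test martingale against the null "the coin is fair".
# Each flip multiplies the running e-value by a likelihood ratio;
# under the null every bet is fair (expected multiplier 1), so the
# product can be monitored continuously and the study stopped at any
# time -- the flexibility described above -- without breaking validity.

def e_value(flips, alt_p=Fraction(3, 4)):
    """Product of per-flip likelihood ratios vs. the fair-coin null."""
    e = Fraction(1)
    for heads in flips:
        e *= (alt_p if heads else 1 - alt_p) / Fraction(1, 2)
    return e

# Nine heads in ten flips: the final e-value of ~19 is quantifiable
# evidence against fairness (large e-values discredit the null).
print(float(e_value([True] * 9 + [False])))
```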
References:
@www.marktechpost.com
//
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.
The AI model was trained using two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. Internal testing demonstrated the algorithm's ability to accurately predict the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios, extending forecast capabilities up to 15 days, a significant improvement over current models, which typically provide 3-5 day forecasts. This breakthrough has already seen adoption by the U.S. National Hurricane Center, which is now using the experimental AI predictions alongside traditional forecasting models in its operational workflow. On Weather Lab, the model is presented alongside two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency services better anticipate a cyclone's path and intensity. Recommended read:
References:
@www.microsoft.com
//
References:
medium.com, www.microsoft.com
Microsoft is undertaking a significant modernization of its SymCrypt cryptographic library by rewriting key components in the Rust programming language. This strategic move aims to bolster memory safety and strengthen defenses against sophisticated side-channel attacks. The decision to use Rust is also driven by its suitability for formal verification, which ensures that cryptographic implementations behave as intended and remain secure against potential vulnerabilities. The modernization maintains backward compatibility through a Rust-to-C compiler.
This initiative focuses in particular on the implementation of elliptic curve cryptography (ECC), a vital algorithm used to secure Web3 applications and other sensitive systems. ECC offers a modern approach to asymmetric-key cryptography, providing security comparable to older methods like RSA but with significantly smaller key sizes. This efficiency is crucial for resource-constrained devices such as mobile phones and IoT devices, enabling faster encryption and decryption while maintaining strong resistance to cryptanalytic attacks. The project incorporates formal verification using tools like Aeneas, developed by Microsoft Azure Research and Inria, which allows program properties to be verified mathematically: the process confirms that code will always satisfy given properties, regardless of input, preventing attacks that stem from flawed implementations. Furthermore, the team plans to analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior, ensuring a comprehensive defense against a wide range of threats. Recommended read:
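For readers unfamiliar with side channels, here is a minimal Python illustration of the timing-leak pattern such analysis hunts for; this is a generic example, not SymCrypt code:

```python
import hmac

# Generic timing-leak pattern (not SymCrypt code): a naive comparison
# returns at the first mismatch, so its running time reveals how many
# leading bytes of a guess are correct -- a classic side channel.

def leaky_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:                  # early exit depends on secret data
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest touches every byte regardless of mismatches,
    # so its timing is independent of the secret's contents.
    return hmac.compare_digest(a, b)

secret = b"expected-mac-value"
print(leaky_equals(secret, b"expected-mac-guess"))          # leaks timing
print(constant_time_equals(secret, b"expected-mac-guess"))  # does not
```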
References:
@quantumcomputingreport.com
//
References:
thequantuminsider.com
The quantum computing industry is experiencing a surge in activity, marked by significant acquisitions and technological advancements. IonQ has announced its intent to acquire UK-based Oxford Ionics for $1.075 billion in stock and cash, uniting two leaders in trapped-ion quantum computing. The deal aims to accelerate the development of scalable and reliable quantum systems, targeting 256 high-fidelity qubits by 2026 and over 10,000 physical qubits by 2027. The acquisition combines IonQ's quantum computing stack with Oxford Ionics' semiconductor-compatible ion-trap technology, strengthening IonQ's technical capabilities and expanding its European presence. IonQ CEO Niccolo de Masi highlighted the strategic importance of the acquisition, saying it unites talent from across the world to build the world's best quantum computing, quantum communication, and quantum networking ecosystem.
Recent advancements also include the activation of Europe’s first room-temperature quantum accelerator by Fraunhofer IAF, featuring Quantum Brilliance’s diamond-based QB-QDK2.0 system. This system utilizes nitrogen-vacancy (NV) centers and operates without cryogenic requirements, seamlessly integrating into existing high-performance computing environments. It's co-located with classical processors and NVIDIA GPUs to support hybrid quantum-classical workloads. Moreover, IBM has announced plans to build the world’s first large-scale, error-corrected quantum computer named Starling, aiming for completion by 2028 and cloud availability by 2029. IBM claims it has cracked the code for quantum error correction, moving from science to engineering. Further bolstering the industry's growth, collaborative projects are demonstrating the potential of quantum computing in various applications. IonQ, in partnership with AstraZeneca, AWS, and NVIDIA, has showcased a quantum-accelerated drug discovery workflow that drastically reduces simulation time for key pharmaceutical reactions. Their hybrid system, integrating IonQ’s Forte quantum processor with NVIDIA CUDA-Q and AWS infrastructure, achieved over a 20-fold improvement in time-to-solution for the Suzuki-Miyaura reaction. Additionally, the Karnataka State Cabinet has approved the second phase of the Quantum Research Park at the Indian Institute of Science (IISc) in Bengaluru, allocating ₹48 crore ($5.595 million USD) to expand the state’s quantum technology infrastructure and foster collaboration between academia, startups, and industry. Recommended read:
References:
Carl Franzen@AI News | VentureBeat
//
Mistral AI has launched its first reasoning model, Magistral, signaling a commitment to open-source AI development. The Magistral family features two models: Magistral Small, a 24-billion parameter model available with open weights under the Apache 2.0 license, and Magistral Medium, a proprietary model accessible through an API. This dual release strategy aims to cater to both enterprise clients seeking advanced reasoning capabilities and the broader AI community interested in open-source innovation.
Mistral's decision to release Magistral Small under the permissive Apache 2.0 license marks a significant return to its open-source roots. The license allows for the free use, modification, and distribution of the model's source code, even for commercial purposes. This empowers startups and established companies to build and deploy their own applications on top of Mistral’s latest reasoning architecture, without the burdens of licensing fees or vendor lock-in. The release serves as a powerful counter-narrative, reaffirming Mistral’s dedication to arming the open community with cutting-edge tools. Magistral Medium demonstrates competitive performance in the reasoning arena, according to internal benchmarks released by Mistral. The model was tested against its predecessor, Mistral-Medium 3, and models from Deepseek. Furthermore, Mistral's Agents API's Handoffs feature facilitates smart, multi-agent workflows, allowing different agents to collaborate on complex tasks. This enables modular and efficient problem-solving, as demonstrated in systems where agents collaborate to answer inflation-related questions. Recommended read:
References:
@www.marktechpost.com
//
A new framework called AlphaOne, developed by researchers at the University of Illinois Urbana-Champaign and the University of California, Berkeley, offers AI developers a novel method to modulate the reasoning processes of large language models (LLMs). This test-time scaling technique improves model accuracy and efficiency without requiring costly retraining. AlphaOne essentially provides a new "dial" to control LLM 'thinking,' allowing developers to boost performance on complex tasks in a more controlled and cost-effective manner compared to existing approaches. The framework dynamically manages slow-to-fast reasoning transitions, optimizing accuracy on real-world datasets like AMC23 and LiveCodeBench.
One persistent issue with large reasoning models is their inability to self-regulate shifts between fast and slow thinking, leading to either premature conclusions or excessive processing. AlphaOne addresses this by providing a universal method for modulating the reasoning process of advanced LLMs. Previous solutions, such as parallel scaling (running a model multiple times) or sequential scaling (modulating thinking during a single run), often lack synchronization between the duration of reasoning and the scheduling of slow-to-fast thinking transitions. AlphaOne aims to overcome these limitations by adapting the reasoning process as it unfolds. In addition to AlphaOne, Amazon Nova provides a solution for data consistency in generative AI through Text-to-SQL. Businesses rely on precise, real-time insights to make critical decisions, and Text-to-SQL bridges the gap by generating precise, schema-specific queries that empower faster decision-making and foster a data-driven culture. Unlike Retrieval-Augmented Generation (RAG), which is better suited to extracting insights from unstructured data, and generative business intelligence tools, Text-to-SQL excels at querying structured organizational data directly from relational schemas, providing deterministic, reproducible results for specific, schema-dependent queries. Recommended read:
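A rough sketch of the "dial" idea under our own simplifying assumptions (illustrative Python, not the AlphaOne implementation): a parameter alpha fixes where the slow-to-fast transition happens within a thinking budget, with reflection tokens inserted stochastically before the switch.

```python
import random

# Illustrative only: a test-time "dial" in the spirit of AlphaOne, not
# the paper's implementation. The dial `alpha` fixes where, within a
# fixed thinking budget, the model switches from a slow phase (where
# reflection tokens such as "wait" are stochastically inserted) to a
# fast phase (where it is nudged toward a final answer).

def thinking_schedule(budget: int, alpha: float, p_slow: float = 0.3):
    """Yield one action per reasoning step for `budget` steps."""
    switch = int(alpha * budget)          # slow-to-fast transition point
    for step in range(budget):
        if step >= switch:
            yield "encourage-answer"      # fast phase: wrap up
        elif random.random() < p_slow:
            yield "insert-reflection"     # slow phase: deepen reasoning
        else:
            yield "continue"

# A higher alpha means a longer deliberate phase before answering.
print(list(thinking_schedule(budget=8, alpha=0.75)))
```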
References:
Mark Tyson@tomshardware.com
//
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.
OpenAI has also slashed the price of o3 by 80% and o3-pro by 87%, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially science, programming, and business tasks. The o3-pro model uses the same underlying architecture as o3 but is tuned to be more reliable on complex tasks, with better long-range reasoning, and it supports tools like web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests the tradeoff is worthwhile for the most challenging questions, "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff." Recommended read:
References:
@machinelearning.apple.com
//
Apple researchers have released a new study questioning the capabilities of Large Reasoning Models (LRMs), casting doubt on the industry's pursuit of Artificial General Intelligence (AGI). The research paper, titled "The Illusion of Thinking," reveals that these models, including those from OpenAI, Google DeepMind, Anthropic, and DeepSeek, experience a 'complete accuracy collapse' when faced with complex problems. Unlike existing evaluations primarily focused on mathematical and coding benchmarks, this study evaluates the reasoning traces of these models, offering insights into how LRMs "think".
Researchers tested various models, including OpenAI's o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet, using puzzles like the Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World. These environments allowed for the manipulation of complexity while maintaining consistent logical structures. The team discovered that standard language models surprisingly outperformed LRMs in low-complexity scenarios, while LRMs only demonstrated advantages in medium-complexity tasks. However, all models experienced a performance collapse when faced with highly complex tasks. The study suggests that the so-called reasoning of LRMs may be more akin to sophisticated pattern matching, which is fragile and prone to failure when challenged with significant complexity. Apple's research team identified three distinct performance regimes: low-complexity tasks where standard models outperform LRMs, medium-complexity tasks where LRMs show advantages, and high-complexity tasks where all models collapse. Apple has begun integrating powerful generative AI into its own apps and experiences. The new Foundation Models framework gives app developers access to the on-device foundation language model. Recommended read:
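The Tower of Hanoi makes a good complexity dial because the optimal solution length is exactly 2^n - 1 moves, so each added disk doubles the required reasoning trace. A short recursive solver shows this:

```python
# Tower of Hanoi, one of the puzzles in Apple's study. The optimal
# solution takes exactly 2**n - 1 moves, so puzzle complexity can be
# dialed up smoothly while the logical structure stays fixed.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the optimal move list for n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)
    return moves

for n in (3, 10):
    print(n, "disks:", len(hanoi(n)), "moves")   # 7 and 1023 moves
```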
References:
@medium.com
//
Medium is currently hosting a series of articles that delve into the core concepts and practical applications of cryptography. These articles aim to demystify complex topics such as symmetric key cryptography, also known as secret key or private key cryptography, where a single shared key is used for both encryption and decryption. This method is highlighted for its speed and efficiency, making it suitable for bulk data encryption, though it primarily provides confidentiality and requires secure key distribution. The resources available are designed to cater to individuals with varying levels of expertise, offering accessible guides to enhance their understanding of secure communication and cryptographic systems.
The published materials offer detailed explorations of cryptographic techniques, including AES-256 encryption and decryption. AES-256, which stands for Advanced Encryption Standard with a 256-bit key size, is a symmetric encryption algorithm renowned for its high level of security. Articles break down the internal mechanics of AES-256, explaining the rounds of transformation and key expansion involved in the encryption process. These explanations are presented in both technical terms for those with a deeper understanding and in layman's terms to make the concepts accessible to a broader audience. In addition to theoretical explanations, the Medium articles also showcase the practical applications of cryptography. One example provided is the combination of OSINT (Open Source Intelligence), web, crypto, and forensics techniques in CTF (Capture The Flag) challenges. These challenges offer hands-on experience in applying cryptographic principles to real-world scenarios, such as identifying the final resting place of historical figures through OSINT techniques. The series underscores the importance of mastering cryptography in the evolving landscape of cybersecurity, equipping readers with the knowledge to secure digital communications and protect sensitive information. Recommended read:
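As a hands-on companion to those explanations, here is a minimal AES-256 round trip in Python using the `cryptography` package; the choice of GCM mode (which adds authentication) is ours for the sketch, not prescribed by the articles:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One shared 256-bit key both encrypts and decrypts -- the symmetric
# model described above. GCM mode also authenticates the ciphertext.

key = AESGCM.generate_key(bit_length=256)   # the shared secret
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```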
References:
@medium.com
//
Recent advancements in math education are focusing on making mathematics more accessible and intuitive for all learners. Universal Design for Learning (UDL) is gaining traction as a framework to optimize teaching and learning by acknowledging the varied needs of students. This approach aims to eliminate barriers and foster a belief that every student is capable of excelling in math. Educators are encouraged to offer multiple modalities for interacting with content, addressing the "why," "what," and "how" of learning to ensure every student has a successful access point.
Mathz AI is emerging as a powerful tool extending beyond traditional homework help. It emphasizes conceptual clarity by guiding users through multiple solution paths with interactive explanations. Features include versatile input methods, clear problem displays, hints, step-by-step solutions, and auto-generated practice questions. It offers targeted revision plans and breaks down the logic behind each solution. This AI-driven approach promotes active engagement, enabling students to see patterns, connect concepts, and build confidence. It also acts as a resource for parents and tutors, offering intuitive ways to assist learners. Machine learning is becoming more accessible to individuals without advanced math backgrounds. While concepts like linear algebra, calculus, and probability are relevant, a strong understanding of fundamental principles, critical thinking, and the ability to apply appropriate tools are sufficient to start. Linear regression is a fundamental machine learning model to grasp and implement, allowing us to find relationships in data and make predictions, as the sketch below illustrates. Interactive tools are also enhancing the learning experience, providing visual and intuitive ways to understand complex machine learning and mathematical concepts. Recommended read:
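A minimal version of that first model, fit by least squares with NumPy (synthetic data, illustrative only):

```python
import numpy as np

# Fit y ~ w*x + b by least squares on synthetic data: the simplest
# "find a relationship, then predict" model mentioned above.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=50)   # noisy line

A = np.column_stack([x, np.ones_like(x)])            # design matrix
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)       # minimize ||Ab - y||^2

print(f"fitted: y = {w:.2f}x + {b:.2f}")   # close to the true 3x + 2
print("prediction at x = 4:", w * 4 + b)
```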
References:
@www.iansresearch.com
//
The increasing capabilities of quantum computers are posing a significant threat to current encryption methods, potentially jeopardizing the security of digital assets and the Internet of Things. Researchers at Google Quantum AI are urging software developers and encryption experts to accelerate the implementation of next-generation cryptography, anticipating that quantum computers will soon be able to break widely used encryption standards like RSA. This urgency is fueled by new estimates suggesting that breaking RSA encryption may be far easier than previously believed, with a quantum computer containing approximately 1 million qubits potentially capable of cracking it. Experts recommend that vulnerable systems should be deprecated after 2030 and disallowed after 2035.
Last week, Craig Gidney from Google Quantum AI published research that significantly lowers the estimated quantum resources needed to break RSA-2048. Where previous estimates projected that cracking RSA-2048 would require around 20 million qubits and 8 hours of computation, the new analysis reveals that it could be done in under a week using fewer than 1 million noisy qubits. This more than 95% reduction in hardware requirements is a seismic shift in the projected timeline for "Q-Day," the hypothetical moment when quantum computers can break modern encryption. RSA encryption, used in secure web browsing, email encryption, VPNs, and blockchain systems, relies on the difficulty of factoring large numbers into their prime components. Quantum computers, leveraging Shor's algorithm, can exponentially accelerate this process. Recent innovations, including Approximate Residue Arithmetic, Magic State Cultivation, Optimized Period Finding with Ekerå-Håstad Algorithms, and Yoked Surface Codes & Sparse Lookups, have collectively reduced the physical qubit requirement to under 1 million and allow the algorithm to complete in less than 7 days. Recommended read:
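To see why factoring is the whole ballgame, here is a toy RSA instance with absurdly small primes; real keys use 2048-bit moduli, and Shor's algorithm attacks exactly the factoring step:

```python
# A toy RSA instance with absurdly small primes. Security rests on the
# difficulty of factoring n; Shor's algorithm removes that difficulty.

p, q = 61, 53
n, e = p * q, 17                    # public key (n, e), n = 3233
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent, from the factors

msg = 42
cipher = pow(msg, e, n)             # encrypt with the public key
assert pow(cipher, d, n) == msg     # decrypt with the private key

# An attacker who factors n rederives the private key outright:
p2 = next(k for k in range(2, n) if n % k == 0)
q2 = n // p2
assert pow(e, -1, (p2 - 1) * (q2 - 1)) == d
```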
References:
@www.quantamagazine.org
//
Fermilab has announced the final results from its Muon g-2 experiment, aiming to resolve a long-standing anomaly regarding the magnetic moment of muons. This experiment delves into the quantum realm, exploring how short-lived particles popping in and out of existence influence the magnetic properties of muons. The initial results from this experiment suggested that the Standard Model of physics might be incomplete, hinting at the presence of undiscovered particles or forces.
The experiment's findings continue to show a discrepancy between experimental measurements and the predictions of the Standard Model. However, the statistical significance of this discrepancy has decreased due to improvements in theoretical calculations. This implies that while the Standard Model may not fully account for the behavior of muons, the evidence for new physics is not as strong as previously thought. The measurement sits 4.2σ (standard deviations) away from the Standard Model calculation, a bit short of the 5σ normally required to declare a discovery; there is about a 1 in 40,000 chance that the discrepancy is a fluke. Despite the reduced statistical significance, the results remain intriguing and motivate further research. The possibility of undiscovered particles influencing muons still exists, pushing physicists to explore new theoretical models and conduct additional experiments. Fermilab's first results from the g-2 experiment suggested the Standard Model of physics is even more incomplete than we thought: if the universe includes particles we don't yet know about, these too will show up as quantum fluctuations around the muon, influencing the properties we can measure. Recommended read:
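The quoted odds follow directly from the Gaussian tail; a quick check (using two-sided probabilities):

```python
from math import erfc, sqrt

# Where "1 in 40,000" comes from: the two-sided Gaussian tail
# probability at 4.2 standard deviations, versus the 5-sigma bar.

for sigma in (4.2, 5.0):
    p = erfc(sigma / sqrt(2))   # two-sided p-value
    print(f"{sigma} sigma: p = {p:.2e}, about 1 in {1 / p:,.0f}")
# 4.2 sigma -> about 1 in 37,000; 5 sigma -> about 1 in 1.7 million
```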
References:
@www.linkedin.com
//
Nvidia's Blackwell GPUs have achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across various AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. Nvidia was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the NVIDIA platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.
The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100. The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking. Recommended read:
References:
@medium.com
//
Google Quantum AI has published a study that dramatically lowers the estimated quantum resources needed to break RSA-2048, one of the most widely used encryption standards. The study, authored by Craig Gidney, indicates that RSA cracking may be possible with fewer qubits than previously estimated, potentially impacting digital security protocols used in secure web browsing, email encryption, VPNs, and blockchain systems. This breakthrough could significantly accelerate the timeline for "Q-Day," the point at which quantum computers can break modern encryption.
Previous estimates, including Gidney's 2019 study, suggested that cracking RSA-2048 would require around 20 million qubits and 8 hours of computation. However, the new analysis reveals it could be done in under a week using fewer than 1 million noisy qubits. This reduction in hardware requirements is attributed to several technical innovations, including approximate residue arithmetic, magic state cultivation, optimized period finding with Ekerå-Håstad algorithms, and yoked surface codes with sparse lookups. These improvements minimize the overhead in fault-tolerant quantum circuits, enabling better scaling. Given favorable assumptions such as a 0.1% gate error rate and a 1-microsecond gate time, the encryption could be broken with under 1 million qubits in less than a week, roughly 20x faster than previously anticipated. This raises concerns that Bitcoin wallets and other financial systems relying on RSA encryption could become vulnerable much sooner than expected. Recommended read:
References:
@medium.com
//
The Post-Quantum Cryptography Coalition (PQCC) has recently published a comprehensive roadmap designed to assist organizations in transitioning from traditional cryptographic systems to quantum-resistant alternatives. This strategic initiative comes as quantum computing capabilities rapidly advance, posing a significant threat to existing data security measures. The roadmap emphasizes the importance of proactive planning to mitigate long-term risks associated with cryptographically relevant quantum computers. It is structured into four key implementation categories: Preparation, Baseline Understanding, Planning and Execution, and Monitoring and Evaluation.
The roadmap offers detailed steps for organizations to customize their adoption strategies, regardless of size or sector. Activities include inventorying cryptographic assets, assigning migration leads, prioritizing systems for upgrades, and aligning stakeholders across technical and operational domains. Furthermore, it underscores the urgency of post-quantum cryptography (PQC) adoption, particularly for entities managing long-lived or sensitive data vulnerable to "harvest now, decrypt later" attacks. Guidance is also provided on vendor engagement, creating a cryptographic bill of materials (CBOM), and integrating cryptographic agility into procurement and system updates. In related advancements, research is focusing on making post-quantum cryptographic algorithms more efficient in hardware. A new study proposes a Modular Tiled Toeplitz Matrix-Vector Polynomial Multiplication (MT-TMVP) method for lattice-based PQC algorithms, specifically designed for Field Programmable Gate Arrays (FPGAs). This approach significantly reduces resource utilization and improves the Area-Delay Product (ADP) compared to existing polynomial multipliers. By leveraging Block RAM (BRAM), the architecture also offers enhanced robustness against timing-based Side-Channel Attacks (SCAs), making it a modular and scalable solution for varying polynomial degrees. Combined with hybrid cryptographic models, this offers a practical path to implementing post-quantum cryptography for TLS, PKI, and identity infrastructure. Recommended read:
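The core trick behind MT-TMVP-style multipliers is classical: polynomial multiplication is a convolution, expressible as a Toeplitz matrix acting on a coefficient vector. A minimal software sketch of that identity (the paper's contribution is the tiled FPGA architecture, which is not shown here):

```python
import numpy as np
from scipy.linalg import toeplitz

# Polynomial multiplication as a Toeplitz matrix-vector product: the
# identity behind MT-TMVP-style multipliers. (The paper's contribution
# is the tiled FPGA architecture; this software toy shows the math.)

a = np.array([1, 2, 3])               # a(x) = 1 + 2x + 3x^2
b = np.array([4, 5])                  # b(x) = 4 + 5x

n = len(a) + len(b) - 1               # product has n coefficients
col = np.concatenate([a, np.zeros(n - len(a))])
row = np.zeros(len(b))
row[0] = a[0]
T = toeplitz(col, row)                # n x len(b) Toeplitz matrix

print(T @ b)                          # [ 4. 13. 22. 15.]
print(np.convolve(a, b))              # same coefficients: [ 4 13 22 15]
```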
References:
@aasnova.org
//
References:
StartsWithABang, aasnova.org
JWST is currently being used to study exoplanets, particularly sub-Neptunes, providing valuable data on their atmospheric composition. A recent study utilized JWST spectroscopy to analyze the atmosphere of the sub-Neptune GJ 3090b, which orbits a late-type, low-mass star and whose radius places it at the outer edge of the radius valley. Sub-Neptunes are the most common type of planet in the Milky Way; however, their formation and composition are not well understood, making these studies especially important.
The JWST observations of GJ 3090b revealed a low-amplitude helium signature, suggesting a metal-enriched atmosphere. The presence of heavy molecules like water, carbon dioxide, and sulfur further contributes to the understanding of the planet's atmospheric properties. These observations help clarify how hydrogen and helium may be escaping the planet's atmosphere, with the presence of metals slowing mass loss and weakening the helium signature. While JWST is making significant contributions to exoplanet research, it won't find the very first stars; other telescopes will be needed for those observations. JWST is nevertheless behind some of the latest discoveries, including the new cosmic record-holder for the most distant galaxy, MoM-z14. Recommended read:
References:
@phys.org
//
Recent developments in mathematics education and problem-solving strategies have captured attention, ranging from fundamental arithmetic to advanced machine learning applications. Resources such as Math Only Math are providing step-by-step guidance on solving percentage problems, offering practical examples like finding 18% of 500 or calculating 15% of 60. These resources cater to a broad audience, from students learning basic concepts to professionals applying these principles in real-world scenarios. Understanding percentages is crucial, as demonstrated in examples involving calculating marks in exams, determining the quantity of alloys, and solving everyday problems.
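As a quick sanity check on the worked examples above, the percentage rule is simply p% of n = n × p / 100:

```python
# p% of n is n * p / 100 -- the rule behind the worked examples above.

def percent_of(p, n):
    return n * p / 100

print(percent_of(18, 500))  # 90.0  (18% of 500)
print(percent_of(15, 60))   # 9.0   (15% of 60)
```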
May has been a busy month for must-reads in the data science, AI, and machine learning fields, including a focus on the math needed by machine learning engineers. Topics range from linear algebra and calculus to statistics and probability, highlighting the importance of grasping core ideas like mean, median, and standard deviation. The emphasis is not only on mastering mathematical formulas but also on developing the critical thinking and analytical skills to solve problems effectively. Practical resources, such as the Codanics YouTube channel and the Elements of AI free course, are invaluable for individuals seeking to build their foundations in these areas. Furthermore, innovative approaches to problem-solving are emerging, such as solving geometric problems with pure logic, as discussed on Pat's Blog. This method encourages students to deduce answers without complex calculations, promoting a deeper understanding of mathematical concepts and encouraging creative problem-solving strategies. The blog post highlights how approaching geometrical problems with logic can often lead to more efficient and insightful solutions. These developments collectively contribute to a more accessible and engaging mathematical learning environment. Recommended read:
References:
@medium.com
//
References:
TheSequence
DeepSeek's latest AI model, R1-0528, is making waves in the AI community due to its impressive performance on math and reasoning tasks. Despite sharing a name with its predecessor, the new model has a markedly different performance profile, marking a significant leap forward. DeepSeek R1-0528 has seen unprecedented demand, shooting to the top of the App Store past closed-model rivals and overloading DeepSeek's API to the point that the company temporarily had to stop accepting payments.
The most notable improvement in DeepSeek R1-0528 is its mathematical reasoning. On the AIME 2025 test, the model's accuracy increased from 70% to 87.5%, surpassing Gemini 2.5 Pro and putting it in close competition with OpenAI's o3. This improvement is attributed to "enhanced thinking depth": the model uses significantly more tokens per question, engaging in more thorough chains of reasoning, which means it can check its own work, recognize errors, and course-correct during problem-solving. DeepSeek's success is challenging established closed models and driving competition in the AI landscape. DeepSeek-R1-0528 continues to utilize a Mixture-of-Experts (MoE) architecture, now scaled up to an enormous size; this sparse activation allows for powerful specialized expertise in different coding domains while maintaining efficiency. The context window remains at 128k (with RoPE scaling or other improvements capable of extending it further). The rise of DeepSeek is underscored by performance benchmarks showing it outperforming some of the industry's leading models, including OpenAI's ChatGPT. Furthermore, the release of a distilled variant, R1-0528-Qwen3-8B, ensures broad accessibility of this powerful technology. Recommended read:
References:
Dashveenjit Kaur@TechHQ
//
Dell Technologies has secured a contract with the U.S. Department of Energy to construct the next-generation NERSC-10 supercomputer, a project powered by NVIDIA's Vera Rubin architecture. This new system, dubbed "Doudna" after Nobel laureate Jennifer Doudna, a pioneer in CRISPR gene-editing technology, is poised to be a major federal investment in scientific computing infrastructure. Energy Secretary Chris Wright announced the contract during a visit to Lawrence Berkeley National Laboratory, emphasizing that the deployment in 2026 is crucial for maintaining American technological leadership amidst increasing global competition in AI and quantum computing.
The "Doudna" supercomputer, also known as NERSC-10, aims to significantly accelerate scientific research across multiple domains, including fusion energy, astronomy, and life sciences. Designed to serve 11,000 researchers, it represents an integration of artificial intelligence, quantum workflows, and real-time data streaming from experimental facilities. Unlike traditional supercomputers, Doudna’s architecture emphasizes coherent memory access between CPUs and GPUs, facilitating efficient data sharing between heterogeneous processors which is essential for modern AI-accelerated scientific workflows. The Doudna system is expected to deliver a 10x increase in scientific output compared to its predecessor, Perlmutter, while only consuming 2-3x the power, translating to a 3-5x improvement in performance per watt. Nick Wright, advanced technologies group lead and Doudna chief architect at NERSC, stated, "We’re not just building a faster computer, we’re building a system that helps researchers think bigger and discover sooner." NVIDIA's Vera Rubin platform introduces hardware-level optimizations specifically designed for the convergence of simulation, machine learning, and quantum algorithm development, marking a significant advancement in cutting-edge research capabilities. Recommended read:
References:
@www.quantamagazine.org
//
References:
Quanta Magazine, www.trails.umd.edu
Researchers are making strides in AI reasoning and efficiency, tackling both complex problem-solving and the energy consumption of these systems. One promising area involves reversible computing, where programs can run backward as easily as forward, theoretically saving energy by avoiding data deletion. Michael Frank, a researcher interested in the physical limits of computation, discovered that reversible computing could keep computational progress going as traditional computing slows due to physical limitations. Christof Teuscher at Portland State University emphasized the potential for significant power savings with this approach.
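To make the idea concrete, here is a toy sketch (not Frank's work): a reversible update is a bijection on states, so running it backward recovers the input exactly, and in principle no information, and hence no energy per Landauer's bound, need be lost.

```python
# A toy bijection on states (not Frank's hardware): the update
# (a, b) -> (a ^ b, b) is invertible, so no information is erased and
# the computation can run backward exactly. Irreversible steps, by
# contrast, must dissipate energy per Landauer's principle.

def step(a: int, b: int):
    return a ^ b, b        # bijective: applying it twice is the identity

state = (0b1010, 0b0110)
forward = step(*state)
assert step(*forward) == state   # backward as easily as forward
```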
An evolution of the LLM-as-a-Judge paradigm is also emerging. Meta AI has introduced the J1 framework, which shifts LLMs from passive generators to active, deliberative evaluators through self-evaluation. This approach, detailed in "J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning," addresses the growing need for rigorous and scalable evaluation as AI systems become more capable and widely deployed. By reframing judgment as a structured reasoning task trained through reinforcement learning, J1 aims to create models that perform consistent, interpretable, and high-fidelity evaluations. Soheil Feizi, an associate professor at the University of Maryland, has received a $1 million federal grant to advance foundational research in reasoning AI models. This funding, stemming from a Presidential Early Career Award for Scientists and Engineers (PECASE), will support his work in defending large language models (LLMs) against attacks, identifying weaknesses in how these models learn, encouraging transparent, step-by-step logic, and understanding the "reasoning tokens" that drive decision-making. Feizi plans to explore innovative approaches like live activation probing and novel reinforcement-learning designs, aiming to turn theoretical advances into practical, real-world applications. Recommended read:
References:
@www.marktechpost.com
//
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.
The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience. Specifically, DeepSeek R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community. Recommended read:
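For developers who want to try the new structured-output features, here is a hedged sketch using an OpenAI-compatible client; the base URL and model name are assumptions for illustration and should be checked against the provider's documentation:

```python
from openai import OpenAI

# Hedged sketch of the JSON-output feature via an OpenAI-compatible
# client. The base URL and model id below are assumptions for
# illustration -- verify them against the provider's documentation.

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",                # assumed model id
    messages=[
        {"role": "system", "content": "Answer in JSON."},
        {"role": "user", "content": "List three prime numbers."},
    ],
    response_format={"type": "json_object"},  # structured output
)
print(resp.choices[0].message.content)
```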