@medium.com
//
Google Quantum AI has published a study that dramatically lowers the estimated quantum resources needed to break RSA-2048, one of the most widely used encryption standards. The study, authored by Craig Gidney, indicates that RSA cracking may be possible with fewer qubits than previously estimated, potentially impacting digital security protocols used in secure web browsing, email encryption, VPNs, and blockchain systems. This breakthrough could significantly accelerate the timeline for "Q-Day," the point at which quantum computers can break modern encryption.
Previous estimates, including Gidney's 2019 study, suggested that cracking RSA-2048 would require around 20 million qubits and 8 hours of computation. The new analysis reveals it could be done in under a week using fewer than 1 million noisy qubits, given favorable assumptions such as a 0.1% gate error rate and a 1-microsecond gate time. This roughly 20-fold reduction in hardware requirements is attributed to several technical innovations, including approximate residue arithmetic, magic state cultivation, optimized period finding with the Ekerå-Håstad algorithm, and yoked surface codes with sparse lookups. These improvements minimize the overhead of fault-tolerant quantum circuits, enabling better scaling. The findings raise concerns that Bitcoin wallets and other financial systems relying on public-key cryptography could become vulnerable much sooner than expected. Recommended read:
References :
@quantumcomputingreport.com
//
The rapid advancement of quantum computing poses a significant threat to current encryption methods, particularly RSA, which secures much of today's internet communication. Google's recent breakthroughs have redefined the landscape of cryptographic security, with researchers like Craig Gidney significantly lowering the estimated quantum resources needed to break RSA-2048. A new study indicates that RSA-2048 could be cracked in under a week using fewer than 1 million noisy qubits, a dramatic reduction from previous estimates of around 20 million qubits and eight hours of computation. This shift accelerates the timeline for "Q-Day," the hypothetical moment when quantum computers can break modern encryption, impacting everything from email to financial transactions.
This vulnerability stems from the ability of quantum computers to run Shor's algorithm for factoring large numbers, a task prohibitively difficult for classical computers. Google's innovation involves several technical advancements, including approximate residue arithmetic, magic state cultivation, optimized period finding with the Ekerå-Håstad algorithm, and yoked surface codes with sparse lookups. These improvements streamline modular arithmetic, reduce quantum circuit depth, and minimize fault-tolerance overhead, collectively bringing the physical qubit requirement under 1 million while keeping the computation time relatively short. In response to this threat, post-quantum cryptography (PQC) is gaining momentum. PQC refers to cryptographic algorithms designed to be secure against both classical and quantum attacks. NIST has already announced its first set of quantum-safe algorithms for standardization, while conservative designs such as FrodoKEM, a key encapsulation mechanism offering a simple design and strong security guarantees, are being advanced in other standardization efforts. The urgency of transitioning to quantum-resistant cryptographic systems is underscored by ongoing advances in quantum computing: the digital world relies on encryption, and the rise of AI and quantum computing is challenging its security. Professionals who understand both cybersecurity and artificial intelligence will lead the adaptation to these challenges. Recommended read:
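Shor's algorithm reduces factoring to period finding: once the order r of a random base a modulo N is known, a nontrivial factor of N usually falls out of a gcd computation. The sketch below illustrates that classical post-processing in plain Python, with a brute-force order finder standing in for the quantum subroutine; it is illustrative only, since the order-finding step is exactly what requires a quantum computer to do efficiently at RSA-2048 scales.

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n). Brute force here;
    the quantum speedup comes from doing this step with period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int):
    """Shor-style post-processing: turn the order of a mod n into a
    nontrivial factor of n. May return None for an unlucky base a."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None               # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry with another base
    return gcd(y - 1, n)

# Toy example: factor 15 with base 7 (order 4, so gcd(7^2 - 1, 15) = 3)
print(factor_via_period(15, 7))  # → 3
```

The quantum circuit replaces only `find_order`; everything else runs classically, which is why improvements to fault-tolerant period finding (such as the Ekerå-Håstad variant mentioned above) translate directly into lower qubit counts.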
References :
@deepmind.google
//
Google DeepMind has unveiled AlphaEvolve, a groundbreaking AI agent designed for algorithmic and scientific discovery. This innovative agent combines the power of large language models (LLMs) like Gemini Pro, evolutionary search frameworks, and automated evaluation methods to evolve superior algorithms. Unlike systems that merely generate plausible code, AlphaEvolve iteratively refines entire codebases, optimizing across multiple performance metrics and grounding itself in actual code execution results, effectively sidestepping hallucinations. Terence Tao also collaborated with the DeepMind team on AlphaEvolve, highlighting its significance in the field.
AlphaEvolve's capabilities extend to a range of algorithmic and scientific challenges. It has optimized Google's data center scheduling, recovering 0.7% of Google's compute capacity, simplified hardware accelerator circuit designs, and accelerated the training of its own underlying LLM, offering a glimpse into AI self-improvement. Notably, AlphaEvolve cracked a problem unchanged since 1969, devising a more efficient method for multiplying two 4x4 complex matrices using only 48 scalar multiplications, surpassing Strassen's algorithm after 56 years. The agent also tackled over 50 other open mathematical problems, often matching or exceeding the state of the art. In parallel, Google has launched "Jules," a new coding agent powered by Google's Gemini 2.5 Pro model and designed to assist developers with repetitive tasks such as bug-fixing, documentation, test generation, and feature building. Jules operates in a secure cloud environment, breaking down complex tasks into smaller steps and adapting to user instructions. The agent automatically creates pull requests with audio summaries, streamlining the code review process. This move signifies the rapid maturation of AI in software development and a broader trend towards AI agents becoming trusted engineering partners. Recommended read:
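AlphaEvolve's 48-multiplication scheme for 4x4 complex matrices is too large to reproduce here, but the principle it improves on is easy to show: Strassen's 1969 trick multiplies two 2x2 matrices using 7 scalar multiplications instead of the naive 8, and applying such schemes recursively is what lowers the asymptotic cost of matrix multiplication. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen, 1969) instead of the naive 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

Applying Strassen's 2x2 scheme recursively to 4x4 matrices costs 7 × 7 = 49 scalar multiplications, which is why AlphaEvolve's 48-multiplication scheme for complex 4x4 matrices counts as the first improvement on this front in 56 years.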
References :
@console.cloud.google.com
//
References : Compute, BigDATAwire
Google Cloud is empowering global scientific discovery and innovation by integrating Google DeepMind and Google Research technologies with its cloud infrastructure. This initiative aims to provide researchers with advanced, cloud-scale tools for scientific computing. The company is introducing supercomputing-class infrastructure, including H4D VMs powered by AMD CPUs and A4/A4X VMs powered by NVIDIA GPUs, which boast low-latency networking and high memory bandwidth. Additionally, Google Cloud Managed Lustre offers high-performance storage I/O, enabling scientists to tackle large-scale and complex scientific problems.
Google Cloud is also rolling out advanced scientific applications powered by AI models, including AlphaFold 3 for predicting the structure and interactions of biomolecules and WeatherNext models for weather forecasting, along with AI agents designed to accelerate scientific discovery. Separately, Google Cloud and Ai2 are investing $20 million in the Cancer AI Alliance to accelerate cancer research using AI, advanced models, and cloud computing power: Google Cloud will provide the AI infrastructure and security, while Ai2 will deliver the training and development of cancer models. In addition to these advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood, which the company claims delivers 24 times the computing power of the world's fastest supercomputer when deployed at scale. Ironwood is designed specifically for inference workloads, marking a shift in Google's AI chip development strategy. When scaled to 9,216 chips per pod, Ironwood delivers 42.5 exaflops of computing power, and each chip comes with 192GB of High Bandwidth Memory. Recommended read:
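The pod-level figures quoted above imply per-chip numbers that are easy to derive. A back-of-the-envelope check (the 42.5-exaflop and 192GB figures are Google's; the derived values are arithmetic, not additional claims):

```python
# Sanity-check the Ironwood pod figures quoted above.
pod_flops = 42.5e18        # 42.5 exaflops per pod (Google's claim)
chips_per_pod = 9216
hbm_per_chip_gb = 192

per_chip_flops = pod_flops / chips_per_pod
pod_hbm_tb = chips_per_pod * hbm_per_chip_gb / 1024

print(f"{per_chip_flops / 1e15:.2f} PFLOP/s per chip")  # → 4.61 PFLOP/s per chip
print(f"{pod_hbm_tb:.0f} TB of HBM per pod")            # → 1728 TB of HBM per pod
```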
References :
@simonwillison.net
//
Google has broadened access to its advanced AI model, Gemini 2.5 Pro, showcasing impressive capabilities and competitive pricing designed to challenge rival models like OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. Google's latest flagship model is currently recognized as a top performer, excelling in Optical Character Recognition (OCR), audio transcription, and long-context coding tasks. Alphabet CEO Sundar Pichai highlighted Gemini 2.5 Pro as Google's "most intelligent model + now our most in demand." Demand has increased by over 80 percent this month alone across both Google AI Studio and the Gemini API.
Google's expansion includes a tiered pricing structure for the Gemini 2.5 Pro API, offering a more affordable option compared to competitors. Prompts under 200,000 tokens are priced at $1.25 per million tokens for input and $10 per million for output, while larger prompts rise to $2.50 and $15 per million tokens, respectively. Although prompt caching is not yet available, its future implementation could lower costs further. The free tier allows 500 grounding queries with Google Search per day, with an additional 1,500 free queries in the paid tier; beyond that, queries cost $35 per 1,000. The AI research group EpochAI reported that Gemini 2.5 Pro scored 84% on the GPQA Diamond benchmark, surpassing the typical 70% score of human experts on these challenging multiple-choice questions in biology, chemistry, and physics, and validating Google's own benchmark results. The model is now available in both a paid and a free tier: prompts on the free tier may be used to improve Google's products, while paid-tier prompts are not. Rate limits vary by tier, ranging from 150 to 2,000 requests per minute. Google will retire the Gemini 2.0 Pro preview entirely in favor of 2.5. Recommended read:
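The tiered pricing described above is simple to turn into a cost estimator. A minimal sketch using only the per-million-token rates quoted in this section (whether caching or grounding queries apply is out of scope here):

```python
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate Gemini 2.5 Pro API cost in USD from the tiered rates
    quoted above: prompts <= 200k tokens pay $1.25/M in, $10/M out;
    larger prompts pay $2.50/M in, $15/M out."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 1.25, 10.0
    else:
        in_rate, out_rate = 2.50, 15.0
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 100k-token prompt with a 5k-token response stays in the cheaper tier.
print(f"${gemini_25_pro_cost(100_000, 5_000):.4f}")  # → $0.1750
```

Note the step at 200,000 input tokens: the same 5,000-token response costs 50% more per output token once the prompt crosses that threshold.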
References :
Maximilian Schreiner@THE DECODER
//
Google's Gemini 2.5 Pro is making waves as a top-tier reasoning model, marking a leap forward in Google's AI capabilities. Released recently, it's already garnering attention from enterprise technical decision-makers, especially those who have traditionally relied on OpenAI or Claude for production-grade reasoning. Early experiments, benchmark data, and developer reactions suggest Gemini 2.5 Pro is worth serious consideration.
Gemini 2.5 Pro distinguishes itself with its transparent, structured reasoning. Google's step-by-step training approach results in a structured chain of thought that provides clarity. The model presents ideas in numbered steps, with sub-bullets and internal logic that's remarkably coherent and transparent. This breakthrough offers greater trust and steerability, enabling enterprise users to validate, correct, or redirect the model with more confidence when evaluating output for critical tasks. Recommended read:
References :
Maximilian Schreiner@THE DECODER
//
Google DeepMind has announced Gemini 2.5 Pro, its latest and most advanced AI model to date. This new model boasts enhanced reasoning capabilities and improved accuracy, marking a significant step forward in AI development. Gemini 2.5 Pro is designed with built-in 'thinking' capabilities, enabling it to break down complex tasks into multiple steps and analyze information more effectively before generating a response. This allows the AI to deduce logical conclusions, incorporate contextual nuances, and make informed decisions with unprecedented accuracy, according to Google.
The Gemini 2.5 Pro has already secured the top position on the LMArena leaderboard, surpassing other AI models in head-to-head comparisons. This achievement highlights its superior performance and high-quality style in handling intricate tasks. The model also leads in math and science benchmarks, demonstrating its advanced reasoning capabilities across various domains. This new model is available as Gemini 2.5 Pro (experimental) on Google’s AI Studio and for Gemini Advanced users on the Gemini chat interface. Recommended read:
References :
Amir Najmi@unofficialgoogledatascience.com
//
Data scientists and statisticians are continuously exploring methods to refine data analysis and modeling. A recent blog post from Google details a project focused on quantifying the statistical skills necessary for data scientists within their organization, aiming to clarify job descriptions and address ambiguities in assessing practical data science abilities. The authors, David Mease and Amir Najmi, leveraged their extensive experience conducting over 600 interviews at Google to identify crucial statistical expertise required for the "Data Scientist - Research" role.
Statistical testing remains a cornerstone of data analysis, guiding analysts in transforming raw numbers into actionable insights. Analysts must also keep the bias-variance tradeoff in mind and choose the right statistical test to ensure the validity of their analyses. These tools are critical both for traditional statistical roles and for the evolving field of AI/ML, where responsible practice is paramount, as highlighted in discussions of the relevance of statistical controversies to ethical AI/ML development at an AI ethics conference on March 8. Recommended read:
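Choosing the right statistical test is partly about knowing when a test's assumptions fail. As one concrete illustration (not from the Google post), a two-sample permutation test is a distribution-free alternative to the t-test when normality is doubtful, and it fits in a few lines of standard-library Python; the data below is made up for demonstration:

```python
import random
from statistics import mean

def permutation_test(x, y, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in means.
    Distribution-free: shuffles the pooled samples and counts how often
    a random relabeling produces a difference at least as extreme as
    the observed one. Returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(x)]) - mean(pooled[len(x):])) >= observed:
            hits += 1
    return hits / n_resamples

# Hypothetical, clearly separated samples should yield a small p-value.
a = [5.1, 5.3, 5.0, 5.4, 5.2]
b = [6.0, 6.2, 6.1, 5.9, 6.3]
print(permutation_test(a, b))
```

The design choice here is the tradeoff the paragraph alludes to: the permutation test makes almost no distributional assumptions, at the cost of more computation than a closed-form t-test.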
References :
Maximilian Schreiner@THE DECODER
//
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.
Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions. Recommended read:
References :