@www.marktechpost.com
//
References:
Maginative, MarkTechPost
Google DeepMind has launched AlphaGenome, a new deep learning framework designed to predict the regulatory consequences of DNA sequence variations. This AI model aims to decode how mutations affect non-coding DNA, which makes up 98% of the human genome, potentially transforming the understanding of diseases. AlphaGenome processes up to one million base pairs of DNA at once, delivering predictions on gene expression, splicing, chromatin accessibility, transcription factor binding, and 3D genome structure.
AlphaGenome stands out for comprehensively predicting how single variants or mutations, especially those in non-coding regions, affect gene regulation. It uses a hybrid neural network that combines convolutional layers and transformers to digest long DNA sequences. The model addresses limitations of earlier models by bridging the gap between long-sequence input processing and nucleotide-level output precision, unifying predictive tasks across 11 output modalities and handling thousands of human and mouse genomic tracks. This makes AlphaGenome one of the most comprehensive sequence-to-function models in genomics.

The tool is available via API for non-commercial research, with a broader public release planned for the future. In performance tests, AlphaGenome outperformed or matched the best external models on 24 of 26 variant effect prediction benchmarks. According to DeepMind's Vice President for Research, Pushmeet Kohli, AlphaGenome unifies many of the different challenges that come with understanding the genome. The model can help researchers identify disease-causing variants and better understand genome function and disease biology, potentially driving new biological discoveries and the development of new treatments.
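The hybrid design described above, convolutions that summarize local sequence motifs feeding a transformer that models long-range interactions over one-hot-encoded DNA, can be sketched in a few lines. The PyTorch module below is a toy stand-in with made-up layer sizes, not DeepMind's model; it only illustrates how such a stack maps a long input sequence to per-position predictions across multiple output tracks.

```python
# Minimal sketch (not DeepMind's code): a hybrid convolution + transformer
# stack over one-hot-encoded DNA, in the style the article describes.
# All dimensions and layer counts are illustrative only.
import torch
import torch.nn as nn

class TinySeqToFunction(nn.Module):
    def __init__(self, n_tracks: int = 16, d_model: int = 128):
        super().__init__()
        # Convolutions capture local sequence motifs and downsample the input
        # so the transformer sees a manageable number of positions.
        self.conv = nn.Sequential(
            nn.Conv1d(4, d_model, kernel_size=15, stride=4, padding=7),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=4, padding=2),
            nn.ReLU(),
        )
        # Transformer layers model long-range regulatory interactions.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # A shared linear head predicts several genomic tracks per position
        # (e.g. expression, accessibility) at once.
        self.head = nn.Linear(d_model, n_tracks)

    def forward(self, dna_onehot: torch.Tensor) -> torch.Tensor:
        # dna_onehot: (batch, 4, sequence_length), A/C/G/T one-hot channels
        x = self.conv(dna_onehot)            # (batch, d_model, reduced_length)
        x = self.transformer(x.transpose(1, 2))
        return self.head(x)                  # (batch, reduced_length, n_tracks)

seq = torch.randint(0, 2, (1, 4, 4096)).float()  # toy one-hot-ish input
print(TinySeqToFunction()(seq).shape)            # torch.Size([1, 256, 16])
```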
References :
Steve Vandenberg@Microsoft Security Blog
//
Microsoft is making significant strides in AI and data security, demonstrated by recent advancements and reports. The company's commitment to responsible AI is highlighted in its 2025 Responsible AI Transparency Report, detailing efforts to build trustworthy AI technologies. Microsoft is also addressing the critical issue of data breach reporting, offering solutions like Microsoft Data Security Investigations to assist organizations in meeting stringent regulatory requirements such as GDPR and SEC rules. These initiatives underscore Microsoft's dedication to ethical and secure AI development and deployment across various sectors.
AI's transformative potential is being explored in higher education, with Microsoft providing AI solutions for creating AI-ready campuses. Institutions are focusing on using AI for unique differentiation and innovation rather than just automation and cost savings. Strategies include establishing guidelines for responsible AI use, fostering collaborative communities for knowledge sharing, and partnering with technology vendors like Microsoft, OpenAI, and NVIDIA. Comprehensive training programs are also essential to ensure stakeholders are proficient with AI tools, promoting a culture of experimentation and ethical AI practices.

Furthermore, Microsoft Research has achieved a breakthrough in computational chemistry by using deep learning to enhance the accuracy of density functional theory (DFT). This advancement allows for more reliable predictions of molecular and material properties, accelerating scientific discovery in fields such as drug development, battery technology, and green fertilizers. By generating vast amounts of accurate data and using scalable deep-learning approaches, the team has overcome limitations in DFT, enabling the design of molecules and materials through computational simulations rather than relying solely on laboratory experiments.
References :
@www.marktechpost.com
//
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.
The AI model was trained on two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. In internal testing, the algorithm accurately predicted the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios and extend forecasts up to 15 days, a significant improvement over current models, which typically provide 3-5 day forecasts. The approach has already seen adoption by the U.S. National Hurricane Center, which is using these experimental AI predictions alongside traditional forecasting models in its operational workflow.

On Weather Lab, the model is available alongside two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency services better anticipate a cyclone's path and intensity.
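Probabilistic forecasts like these are usually read as an ensemble: each of the 50 scenarios is one plausible track, and the spread across them widens with lead time. The snippet below is a hedged, self-contained illustration of that idea using synthetic tracks; it is not Google's Weather Lab code, and every number in it is made up.

```python
# Hedged sketch: summarizing an ensemble of cyclone track scenarios into a
# mean track with spread, the way probabilistic forecasts are often read.
# The random tracks below stand in for the "50 possible scenarios" the
# article mentions.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_steps = 50, 15          # 50 members, one position per day
start = np.array([14.0, -45.0])        # hypothetical lat/lon starting point

# Each scenario drifts northwest with random perturbations per day.
steps = rng.normal(loc=[0.8, -1.2], scale=0.4, size=(n_scenarios, n_steps, 2))
tracks = start + np.cumsum(steps, axis=1)   # (50, 15, 2) lat/lon positions

mean_track = tracks.mean(axis=0)            # best-guess path
spread = tracks.std(axis=0)                 # grows with lead time -> "cone"
for day in (4, 9, 14):
    lat, lon = mean_track[day]
    print(f"day {day + 1}: lat {lat:.1f}, lon {lon:.1f}, "
          f"spread ±{spread[day].max():.1f} deg")
```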
References :
@www.marktechpost.com
//
Apple researchers are challenging the perceived reasoning capabilities of Large Reasoning Models (LRMs), sparking debate within the AI community. A recent paper from Apple, titled "The Illusion of Thinking," suggests that these models, which generate intermediate thinking steps like Chain-of-Thought reasoning, struggle with fundamental reasoning tasks. The research indicates that current evaluation methods relying on math and code benchmarks are insufficient, as they often suffer from data contamination and fail to assess the structure or quality of the reasoning process.
To address these shortcomings, Apple researchers introduced controllable puzzle environments, including the Tower of Hanoi, River Crossing, Checker Jumping, and Blocks World, allowing for precise manipulation of problem complexity. These puzzles require diverse reasoning abilities, such as constraint satisfaction and sequential planning, and are free from data contamination. The Apple paper concluded that state-of-the-art LRMs ultimately fail to develop generalizable problem-solving capabilities, with accuracy collapsing to zero beyond certain complexities across different environments.

However, the Apple research has faced criticism. Experts, like Professor Seok Joon Kwon, argue that Apple's lack of high-performance hardware, such as a large GPU-based cluster comparable to those operated by Google or Microsoft, could be a factor in their findings. Some argue that the models perform better on familiar puzzles, suggesting that their success may be linked to training exposure rather than genuine problem-solving skills. Others, such as Alex Lawsen and "C. Opus," argue that the Apple researchers' results don't support claims about fundamental reasoning limitations, but rather highlight engineering challenges related to token limits and evaluation methods.
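Tower of Hanoi illustrates why such puzzles make clean benchmarks: difficulty is a single dial (the number of disks, with an optimal solution of 2^n - 1 moves), and any proposed move sequence can be checked mechanically. The sketch below is a hedged illustration of that setup, not Apple's evaluation harness.

```python
# Hedged sketch, not Apple's code: a Tower of Hanoi "environment" whose
# difficulty is controlled by the number of disks (optimal solution length is
# 2**n - 1), plus a checker that verifies any proposed move list.
def optimal_moves(n, src="A", aux="B", dst="C"):
    if n == 0:
        return []
    return (optimal_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + optimal_moves(n - 1, aux, src, dst))

def is_valid_solution(n, moves):
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        # Illegal if the source peg is empty or a larger disk would land on a
        # smaller one.
        if not pegs[src] or (pegs[dst] and pegs[dst][-1] < pegs[src][-1]):
            return False
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))   # all disks moved, in order

for n in (3, 7, 12):                            # complexity grows exponentially
    moves = optimal_moves(n)
    print(n, "disks:", len(moves), "moves, valid =", is_valid_solution(n, moves))
```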
References :
Carl Franzen@AI News | VentureBeat
//
Mistral AI has launched its first reasoning model, Magistral, signaling a commitment to open-source AI development. The Magistral family features two models: Magistral Small, a 24-billion parameter model available with open weights under the Apache 2.0 license, and Magistral Medium, a proprietary model accessible through an API. This dual release strategy aims to cater to both enterprise clients seeking advanced reasoning capabilities and the broader AI community interested in open-source innovation.
Mistral's decision to release Magistral Small under the permissive Apache 2.0 license marks a significant return to its open-source roots. The license allows free use, modification, and distribution of the model, even for commercial purposes, so startups and established companies can build and deploy their own applications on top of Mistral's latest reasoning architecture without licensing fees or vendor lock-in. The release also serves as a powerful counter-narrative, reaffirming Mistral's dedication to arming the open community with cutting-edge tools.

Magistral Medium demonstrates competitive performance in the reasoning arena, according to internal benchmarks released by Mistral, where it was tested against its predecessor, Mistral Medium 3, and models from DeepSeek. Separately, the Handoffs feature of Mistral's Agents API facilitates smart, multi-agent workflows, allowing different agents to collaborate on complex tasks. This enables modular and efficient problem-solving, as demonstrated in systems where agents collaborate to answer inflation-related questions.
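The handoff pattern itself is simple: a coordinating agent answers what it can and delegates the rest to a specialist. The sketch below illustrates the idea in plain Python with a placeholder model call; it is a hypothetical illustration of the pattern, not Mistral's Agents API, and every name in it is made up.

```python
# Plain-Python sketch of a "handoff" workflow: a triage agent routes queries
# it cannot handle to a specialist agent. call_model() is a placeholder for a
# real LLM client; this is not Mistral's actual API.
from dataclasses import dataclass, field

def call_model(prompt: str) -> str:
    return f"[model answer to: {prompt}]"      # stand-in for an LLM call

@dataclass
class Agent:
    name: str
    instructions: str
    handoffs: dict[str, "Agent"] = field(default_factory=dict)

    def run(self, query: str) -> str:
        # Hand off to a registered specialist when its trigger keyword appears.
        for keyword, specialist in self.handoffs.items():
            if keyword in query.lower():
                return specialist.run(query)
        return call_model(f"{self.instructions}\nUser: {query}")

economist = Agent("economist", "Answer questions about inflation and prices.")
triage = Agent("triage", "Answer general questions.",
               handoffs={"inflation": economist})

print(triage.run("How does inflation affect savings?"))  # handed off
```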
References :
Mark Tyson@tomshardware.com
//
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.
OpenAI has also slashed the price of o3 by 80% and priced o3-pro 87% lower than o1-pro, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially science, programming, and business tasks.

o3-pro uses the same underlying architecture as o3 but is tuned to be more reliable, especially on complex tasks that require long-range reasoning. The model supports tools such as web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests the tradeoff is worthwhile for the most challenging questions, "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff."
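For developers, the price cut mainly matters at the API level. The snippet below is a minimal sketch of calling the model through OpenAI's Python SDK, assuming your account has access to the "o3-pro" model identifier via the Responses API; model names, availability, and pricing should be checked against OpenAI's current documentation.

```python
# Hedged sketch: calling o3-pro via OpenAI's Python SDK. The "o3-pro" model
# identifier and Responses API access are assumptions to verify against
# OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",
    input="Prove that the sum of two even integers is even, step by step.",
)
print(response.output_text)  # may take minutes: reliability over speed
```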
References :
@www.marktechpost.com
//
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.
The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience.

Specifically, DeepSeek-R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community.
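As a hedged sketch of what the new JSON output mode looks like in practice, the snippet below uses an OpenAI-compatible client against DeepSeek's hosted endpoint; the base URL and the "deepseek-reasoner" model name are assumptions to verify against DeepSeek's current documentation for the identifiers that map to R1-0528.

```python
# Structured JSON output from the updated reasoning model via an
# OpenAI-compatible client. Base URL and model name are assumptions; check
# DeepSeek's docs before relying on them.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'answer' and 'reasoning_summary'."},
        {"role": "user", "content": "What is 17 * 24?"},
    ],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)  # e.g. {"answer": 408, "reasoning_summary": "..."}
```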
References :
Source Asia@Source Asia
//
References:
news.microsoft.com, Microsoft Research
Microsoft Research has unveiled Aurora, a groundbreaking AI foundation model with 1.3 billion parameters that is set to revolutionize Earth system forecasting. The model outperforms traditional operational forecasts in critical areas such as air quality prediction, ocean wave forecasting, tropical cyclone tracking, and high-resolution weather prediction, and it does so at significantly lower computational cost, marking a major advancement in the field. Its capabilities extend beyond traditional weather forecasting, positioning it as a versatile tool for addressing a wide range of environmental challenges.
Aurora's architecture, based on Perceiver IO, allows it to efficiently process structured inputs and outputs, making it well-suited for complex Earth system data. Researchers at Microsoft have trained Aurora on an unprecedented volume of atmospheric data, incorporating information from satellites, radar, weather stations, simulations, and forecasts. This extensive training enables Aurora to rapidly generate forecasts and adapt to specific tasks through fine-tuning with smaller, task-specific datasets. The model's flexibility and ability to learn from diverse data sources are key factors in its exceptional forecasting accuracy.

The development of Aurora signifies a major step forward in applying AI to Earth science. By demonstrating the potential of foundation models to accurately and efficiently predict various environmental phenomena, Aurora paves the way for new approaches to disaster preparedness, resource management, and climate change mitigation. The publicly available code and weights of Aurora, accessible on GitHub, encourage further research and development in this exciting area. Microsoft's work underscores the transformative power of AI in addressing some of the world's most pressing environmental challenges.
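Since the code and weights are public, a quick way to see what a foundation model for the Earth system looks like in practice is to load a small pretrained checkpoint and run one forecast step. The sketch below follows the microsoft/aurora repository's README as of this writing (pip package "microsoft-aurora"); treat the exact class names, variable keys, and checkpoint id as assumptions to verify against the current repository.

```python
# Hedged sketch based on the microsoft/aurora README; names and checkpoint
# ids may change, so treat this as illustrative rather than authoritative.
from datetime import datetime
import torch
from aurora import AuroraSmall, Batch, Metadata

model = AuroraSmall()
model.load_checkpoint("microsoft/aurora", "aurora-0.25-small-pretrained.ckpt")
model.eval()

# A random toy batch on a coarse 17 x 32 lat/lon grid, just to show the
# structure the model expects: surface variables, static fields, and
# atmospheric variables on pressure levels, with two history steps.
batch = Batch(
    surf_vars={k: torch.randn(1, 2, 17, 32) for k in ("2t", "10u", "10v", "msl")},
    static_vars={k: torch.randn(17, 32) for k in ("z", "slt", "lsm")},
    atmos_vars={k: torch.randn(1, 2, 4, 17, 32) for k in ("t", "u", "v", "q", "z")},
    metadata=Metadata(
        lat=torch.linspace(90, -90, 17),
        lon=torch.linspace(0, 360, 32 + 1)[:-1],
        time=(datetime(2020, 6, 1, 12, 0),),
        atmos_levels=(100, 250, 500, 850),
    ),
)

with torch.inference_mode():
    prediction = model.forward(batch)        # one forecast step ahead
print(prediction.surf_vars["2t"].shape)      # predicted 2 m temperature field
```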
References :
Source Asia@Source Asia
//
Microsoft's Aurora AI model is revolutionizing weather forecasting by providing accurate 10-day forecasts in mere seconds. This AI foundation model, developed by Microsoft Research, has demonstrated capabilities that extend beyond traditional weather prediction, encompassing environmental events such as tropical cyclones, air quality, and ocean waves. Aurora achieves this by training on a massive dataset of over one million hours of atmospheric data from satellites, radar, weather stations, simulations, and forecasts, which Microsoft believes is the largest collection ever assembled for training an AI forecasting model. The model's speed and accuracy have the potential to improve safety and inform decisions across various sectors.
The core strength of Aurora lies in its foundation model architecture: it is not limited to weather forecasting and can be fine-tuned for specific environmental prediction tasks. After initial training on general weather patterns, Aurora can be adapted with smaller datasets to forecast elements such as wave height or air quality. The model does not explicitly encode the physical laws governing weather, yet it still delivers accurate forecasts, and this flexibility makes it a versatile tool for understanding and predicting many aspects of the Earth system.

Aurora's performance has been noteworthy, beating existing numerical and AI models on 91 percent of forecasting targets when fine-tuned for medium-range weather forecasts. Its rapid processing time, seconds rather than the hours required by traditional models, makes it a valuable asset for timely decision-making. Microsoft frames this as part of a broader effort to use AI to make forecasting, and everyday workflows more generally, more efficient by automating routine tasks and freeing time for higher-value work.
References :
Source Asia@Source Asia
//
Microsoft's Aurora AI foundation model is revolutionizing weather and environmental forecasting, offering quicker and more accurate predictions compared to traditional methods. Developed by Microsoft Research, Aurora is a large-scale AI model trained on a vast dataset of atmospheric information, including satellite data, radar readings, weather station observations, and simulations. This comprehensive training allows Aurora to forecast a range of environmental events, from hurricanes and typhoons to air quality and ocean waves, with exceptional precision and speed. The model's capabilities extend beyond conventional weather forecasting, making it a versatile tool for understanding and predicting environmental changes.
Aurora's architecture enables it to be fine-tuned for specific tasks using modest amounts of additional data, and once adapted, the model generates forecasts in seconds, demonstrating its efficiency and adaptability. Researchers have shown that Aurora outperforms existing numerical and AI models on 91% of forecasting targets when fine-tuned for medium-range weather forecasts. Its ability to accurately predict hurricane trajectories and other extreme weather events highlights its potential to improve disaster preparedness and response, ultimately saving lives and mitigating damage.

Senior researchers Megan Stanley and Wessel Bruinsma emphasized Aurora's broader impact on environmental science, noting its potential to revolutionize the field. In a paper published in Nature, they highlighted Aurora's ability to forecast the hurricanes of 2023 more accurately than operational forecasting centers such as the US National Hurricane Center; Aurora also correctly forecast where and when Typhoon Doksuri would hit the Philippines four days in advance. These findings underscore the transformative potential of AI in addressing complex environmental challenges and pave the way for more effective climate modeling and environmental event management.
References :
Matthias Bastian@THE DECODER
//
OpenAI has announced the integration of the GPT-4.1 and GPT-4.1 mini models into ChatGPT, aimed at enhancing coding and web development capabilities. GPT-4.1, a specialized model that excels at coding tasks and instruction following, is now available to ChatGPT Plus, Pro, and Team users. According to OpenAI, GPT-4.1 is faster and serves as a great alternative to OpenAI o3 and o4-mini for everyday coding needs, giving developers more help when building applications.
OpenAI is also rolling out GPT-4.1 mini, which will be available to all ChatGPT users, including those on the free tier, replacing the previous GPT-4o mini model and serving as the fallback option once GPT-4o usage limits are reached. The release notes confirm that GPT-4.1 mini offers various improvements over GPT-4o mini in instruction following, coding, and overall intelligence. This initiative is part of OpenAI's effort to make advanced AI tools more accessible and useful for a broader audience, particularly those engaged in programming and web development.

Johannes Heidecke, Head of Safety Systems at OpenAI, has emphasized that the new models build upon the safety measures established for GPT-4o, ensuring parity in safety performance. According to Heidecke, no new safety risks are introduced, since GPT-4.1 does not add new modalities or new ways of interacting with the AI and does not surpass o3 in intelligence. The rollout marks another step in OpenAI's increasingly rapid model release cadence, significantly expanding access to specialized capabilities in web development and coding.
References :
@Google DeepMind Blog
//
Google DeepMind has introduced AlphaEvolve, a revolutionary AI coding agent designed to autonomously discover innovative algorithms and scientific solutions. This groundbreaking research, detailed in the paper "AlphaEvolve: A Coding Agent for Scientific and Algorithmic Discovery," represents a significant step towards achieving Artificial General Intelligence (AGI) and potentially even Artificial Superintelligence (ASI). AlphaEvolve distinguishes itself through its evolutionary approach, where it autonomously generates, evaluates, and refines code across generations, rather than relying on static fine-tuning or human-labeled datasets. AlphaEvolve combines Google's Gemini Flash and Gemini Pro models with automated evaluation metrics.
AlphaEvolve operates through an evolutionary pipeline powered by large language models (LLMs). The pipeline does not just generate outputs: it mutates, evaluates, selects, and improves code across generations. The system begins with an initial program and iteratively refines it by introducing carefully structured changes. These changes take the form of LLM-generated diffs, code modifications suggested by a language model based on prior examples and explicit instructions. (A diff, in software engineering, is the difference between two versions of a file, typically expressed as lines to be removed or replaced.)

AlphaEvolve is therefore not merely another code generator, but a system that generates and evolves code, allowing it to discover new algorithms. It has already demonstrated this potential by shattering a 56-year-old record in matrix multiplication, a core component of many machine learning workloads, and by reclaiming 0.7% of compute capacity across Google's global data centers, showcasing its efficiency and cost-effectiveness. AlphaEvolve can be imagined as a genetic algorithm coupled to a large language model.
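As a toy illustration of that mutate-evaluate-select loop, the sketch below keeps a population of candidate programs, asks a stand-in "LLM" for a variant, and retains it only when an automated score improves. It is deliberately simplistic and is not DeepMind's AlphaEvolve; propose_variant() and the scoring function are placeholders.

```python
# Toy mutate-evaluate-select loop in the spirit of what the article describes.
# propose_variant() stands in for an LLM proposing a code change (a "diff"),
# and evaluate() stands in for automated metrics; this is not DeepMind's code.
import random

def evaluate(program: str) -> float:
    """Automated score, e.g. correctness plus speed on a benchmark (toy here)."""
    return program.count("fast") * 10 - len(program)

def propose_variant(parent: str) -> str:
    """Placeholder for an LLM-generated modification of the parent program."""
    return parent + random.choice([" fast", " slow", ""])

population = ["baseline_solver()"]
for _ in range(200):
    parent = max(population, key=evaluate)      # select the current best
    child = propose_variant(parent)             # "mutate" it via the LLM
    if evaluate(child) > evaluate(parent):      # keep only real improvements
        population.append(child)

best = max(population, key=evaluate)
print("best program:", repr(best), "score:", evaluate(best))
```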
References :
@medium.com
//
References:
medium.com, Peter Bendor-Samuel
Quantum computing is rapidly advancing, bringing both immense potential and significant cybersecurity risks. The UK's National Cyber Security Centre (NCSC) and experts around the globe are warning that a "colossal" overhaul of digital defenses will be needed to prepare for the quantum era. The concern is that powerful quantum computers could render current encryption methods obsolete, breaking the security protocols that protect financial transactions, medical records, military communications, and blockchain technology. The urgency is underscored by the threat of "harvest now, decrypt later" attacks, in which sensitive data is collected and stored today for decryption once quantum computers become powerful enough.
Across the globe, governments and organizations are scrambling to prepare for a quantum future by adopting post-quantum cryptography (PQC): new encryption algorithms designed to resist attacks from both classical and quantum computers. The U.S. National Institute of Standards and Technology (NIST) has already released several algorithms believed to be secure against quantum attacks. The NCSC has issued guidance setting clear timelines for the UK's migration to PQC, advising organizations to complete the transition by 2035, and industry leaders are urging the U.S. Congress to reauthorize and expand the National Quantum Initiative to support research, workforce development, and a resilient supply chain.

On the hardware side, Oxford Ionics is one of the companies leading quantum computing development. The company has released a multi-phase roadmap focused on achieving scalability and fault tolerance in its trapped-ion platform. The first, 'Foundation' phase, already operational, deploys QPUs with 16-64 qubits at 99.99% fidelity; the second phase introduces chips with 256+ qubits and error rates as low as 10⁻⁸ via quantum error correction (QEC). The goal is to scale to over 10,000 physical qubits per chip, supporting 700+ logical qubits with minimal infrastructure change. Multiple bills have also been introduced in the U.S. Congress and in the state of Texas to foster the advancement of quantum technology.
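The 2035 deadline makes more sense in light of a back-of-the-envelope test often attributed to Michele Mosca: data harvested today is already at risk if the time it must stay confidential plus the time a migration takes exceeds the time until a cryptographically relevant quantum computer arrives. The sketch below simply encodes that inequality; all of the numbers are illustrative assumptions, not predictions.

```python
# Hedged back-of-the-envelope check for "harvest now, decrypt later" risk:
# if shelf life + migration time exceeds the time until a cryptographically
# relevant quantum computer (CRQC), data harvested today is already exposed.
# Every figure below is an illustrative assumption.
def at_risk(shelf_life_years: float, migration_years: float,
            years_until_crqc: float) -> bool:
    return shelf_life_years + migration_years > years_until_crqc

scenarios = {
    "medical records (25y shelf life)": (25, 5, 15),
    "payment tokens (2y shelf life)":   (2, 5, 15),
    "state secrets (50y shelf life)":   (50, 8, 20),
}
for name, (x, y, z) in scenarios.items():
    verdict = "start migrating now" if at_risk(x, y, z) else "not yet urgent"
    print(f"{name}: {verdict}")
```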
References :
@www.artificialintelligence-news.com
//
ServiceNow is making significant strides in the realm of artificial intelligence with the unveiling of Apriel-Nemotron-15b-Thinker, a new reasoning model optimized for enterprise-scale deployment and efficiency. The model, consisting of 15 billion parameters, is designed to handle complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. This release addresses the growing need for AI models that combine strong performance with efficient memory and token usage, making them viable for deployment in practical hardware environments.
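Because the model is positioned for deployment on practical hardware, the natural way to try it is to pull the weights and run them locally. The sketch below uses Hugging Face transformers; the repository id is an assumption based on the model's name and should be verified on the Hub, and a 15-billion-parameter model will need a large GPU or quantization.

```python
# Hedged sketch: loading the 15B reasoning model with Hugging Face
# transformers. The repo id below is an assumption inferred from the model
# name in the article; verify it on the Hub before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ServiceNow-AI/Apriel-Nemotron-15b-Thinker"   # assumed repository id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16,
                                             device_map="auto")

messages = [{"role": "user",
             "content": "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```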
ServiceNow is betting on unified AI to untangle enterprise complexity, providing businesses with a single, coherent way to integrate various AI tools and intelligent agents across the entire company. This ambition was unveiled at Knowledge 2025, where the company showcased its new AI platform and deepened relationships with tech giants like NVIDIA, Microsoft, Google, and Oracle. The aim is to help businesses orchestrate their operations with genuine intelligence, as evidenced by adoption from industry leaders like Adobe, Aptiv, the NHL, Visa, and Wells Fargo.

To further broaden its reach, ServiceNow has introduced the Core Business Suite, an AI-driven solution aimed at the mid-market. The suite connects employees, suppliers, systems, and data in one place, enabling organizations of all sizes to work faster and more efficiently across critical business processes such as HR, procurement, finance, facilities, and legal affairs. ServiceNow aims for rapid implementation, suggesting deployment within a few weeks, and integrates functionalities from different divisions into a single, uniform experience.