Top Mathematics discussions

NishMath - #aimodel

@www.marktechpost.com //
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.

The AI model was trained using two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. Internal testing demonstrated the algorithm's ability to accurately predict the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios, extending forecast capabilities up to 15 days.

This breakthrough has already seen adoption by the U.S. National Hurricane Center, which is now using these experimental AI predictions alongside traditional forecasting models in its operational workflow. The model's ability to forecast up to 15 days in advance marks a significant improvement over current models, which typically provide 3-5 day forecasts. On Weather Lab, the model is available alongside two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency service experts better anticipate a cyclone’s path and intensity.

Recommended read:
References:
  • siliconangle.com: Google LLC today detailed an artificial intelligence model that can forecast the path and intensity of tropical cyclones days in advance.
  • AI News | VentureBeat: Google DeepMind just changed hurricane forecasting forever with new AI model
  • MarkTechPost: Google AI Unveils a Hybrid AI-Physics Model for Accurate Regional Climate Risk Forecasts with Better Uncertainty Assessment
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It
  • SiliconANGLE: Google develops AI model for forecasting tropical cyclones. According to the company, the algorithm was developed through a collaboration between its Google Research and DeepMind units. It’s available through a newly launched website called Weather Lab.
  • The Official Google Blog: Weather Lab is an interactive website for sharing Google’s AI weather models.
  • www.engadget.com: Google DeepMind is sharing its AI forecasts with the National Weather Service
  • www.producthunt.com: Predicting cyclone paths & intensity 15 days ahead |
  • the-decoder.com: Google Deepmind launches Weather Lab to test AI models for tropical cyclone forecasting
  • AIwire: Google DeepMind Launches Interactive AI That Lets You Explore Storm Forecasts
  • www.aiwire.net: Google DeepMind and Google Research are launching Weather Lab - a new AI-driven platform designed specifically to improve forecasts for tropical cyclone formation, intensity, and trajectory.

Mark Tyson@tomshardware.com //
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.

OpenAI has also slashed the price of o3 by 80% and o3-pro by 87%, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially in science, programming, and business tasks.

O3-pro utilizes the same underlying architecture as o3, but it’s tuned to be more reliable, especially on complex tasks, with better long-range reasoning. The model supports tools like web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests that the tradeoff is worthwhile for the most challenging questions, “where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.”

Recommended read:
References:
  • Maginative: OpenAI’s new o3-pro model is now available in ChatGPT and the API, offering top-tier performance in math, science, and coding—at a dramatically lower price.
  • AI News | VentureBeat: OpenAI's most powerful reasoning model, o3, is now 80% cheaper, making it more affordable for businesses, researchers, and individual developers.
  • Latent.Space: OpenAI just dropped the price of their o3 model by 80% today and launched o3-pro.
  • THE DECODER: OpenAI has lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • Simon Willison's Weblog: OpenAI's Adam Groth explained that the engineers have optimized inference, allowing a significant price reduction for the o3 model.
  • the-decoder.com: OpenAI lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • AI News | VentureBeat: OpenAI released the latest in its o-series of reasoning model that promises more reliable and accurate responses for enterprises.
  • bsky.app: The OpenAI API is back to running at 100% again, plus we dropped o3 prices by 80% and launched o3-pro - enjoy!
  • Sam Altman: We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
  • siliconangle.com: OpenAI’s newest reasoning model o3-pro surpasses rivals on multiple benchmarks, but it’s not very fast
  • bsky.app: OpenAI has launched o3-pro. The new model is available to ChatGPT Pro and Team subscribers and in OpenAI’s API now, while Enterprise and Edu subscribers will get access next week. If you use reasoning models like o1 or o3, try o3-pro, which is much smarter and better at using external tools.
  • The Algorithmic Bridge: OpenAI o3-Pro Is So Good That I Can’t Tell How Good It Is
  • datafloq.com: What is OpenAI o3 and How is it Different than other LLMs?
  • www.marketingaiinstitute.com: [The AI Show Episode 153]: OpenAI Releases o3-Pro, Disney Sues Midjourney, Altman: “Gentle Singularity” Is Here, AI and Jobs & News Sites Getting Crushed by AI Search

@medium.com //
References: TheSequence, thezvi.substack.com ...
DeepSeek's latest AI model, R1-0528, is making waves in the AI community due to its impressive performance in math and reasoning tasks. Despite having a name very similar to its predecessor's, the new model delivers a markedly different performance profile, marking a significant leap forward. Demand for DeepSeek R1-0528 has been unprecedented: it shot to the top of the App Store past closed-model rivals and overloaded DeepSeek's API to the point that the company had to stop accepting payments.

The most notable improvement in DeepSeek R1-0528 is its mathematical reasoning capabilities. On the AIME 2025 test, the model's accuracy increased from 70% to 87.5%, surpassing Gemini 2.5 Pro and putting it in close competition with OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model using significantly more tokens per question, engaging in more thorough chains of reasoning. This means the model can check its own work, recognize errors, and course-correct during problem-solving.

DeepSeek's success is challenging established closed models and driving competition in the AI landscape. DeepSeek-R1-0528 continues to utilize a Mixture-of-Experts (MoE) architecture, now scaled up to an enormous size. This sparse activation allows for powerful specialized expertise in different coding domains while maintaining efficiency, since only a fraction of the experts run for each token. The context window remains 128K tokens (with RoPE scaling or other improvements capable of extending it further). The rise of DeepSeek is underscored by its performance benchmarks, which show it outperforming some of the industry’s leading models, including OpenAI’s ChatGPT. Furthermore, the release of a distilled variant, R1-0528-Qwen3-8B, ensures broad accessibility of this powerful technology.
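The sparse routing behind a Mixture-of-Experts layer can be sketched in a few lines. This is a toy illustration with made-up layer sizes, a random router, and top-2 routing, not DeepSeek's actual implementation:

```python
import numpy as np

def moe_layer(x, experts_w, router_w, top_k=2):
    """Route one token to its top-k experts and mix their outputs.

    x:         (d,) one token's hidden state
    experts_w: (n_experts, d, d) per-expert weight matrices
    router_w:  (n_experts, d) router that scores experts per token
    """
    scores = router_w @ x                       # (n_experts,) expert affinities
    top = np.argsort(scores)[-top_k:]           # indices of the k best experts
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                          # softmax over selected experts only
    # Only top_k experts actually run: this sparse activation is what lets a
    # very large total parameter count stay cheap per token at inference time.
    return sum(g * (experts_w[i] @ x) for g, i in zip(gate, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts_w = rng.standard_normal((n_experts, d, d))
router_w = rng.standard_normal((n_experts, d))
x = rng.standard_normal(d)
out = moe_layer(x, experts_w, router_w)
print(out.shape)  # (8,)
```

With top-2 of 16 experts, only 2/16 of the expert parameters participate in each token's forward pass, which is the efficiency argument made above.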

Recommended read:
References:
  • : The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
  • TheSequence: The Sequence Radar #554 : The New DeepSeek R1-0528 is Very Impressive
  • lambda.ai: DeepSeek-R1-0528: The Open-Source Titan Now Live on Lambda’s Inference API
  • thezvi.substack.com: DeepSeek-r1-0528 Did Not Have a Moment
  • thezvi.wordpress.com: DeepSeek-r1-0528 Did Not Have a Moment

@www.marktechpost.com //
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.

The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience.
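As a rough illustration of what JSON outputs and function calling look like from the developer's side, the sketch below builds a request payload for an OpenAI-compatible chat endpoint and parses a JSON-mode reply. The model name, tool schema, and sample reply are illustrative assumptions, not DeepSeek's official examples:

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat endpoint.
payload = {
    "model": "deepseek-reasoner",  # illustrative model name
    "messages": [{"role": "user", "content": "What is 87.5% of 30?"}],
    # JSON mode: ask the server to constrain the reply to valid JSON.
    "response_format": {"type": "json_object"},
    # Native function calling: declare a tool the model may choose to invoke.
    "tools": [{
        "type": "function",
        "function": {
            "name": "percent_of",
            "description": "Compute p percent of n",
            "parameters": {
                "type": "object",
                "properties": {
                    "p": {"type": "number"},
                    "n": {"type": "number"},
                },
                "required": ["p", "n"],
            },
        },
    }],
}

# With JSON mode on, the reply's content field parses directly, with no
# regex scraping of free-form text.
sample_reply = '{"answer": 26.25}'
print(json.loads(sample_reply)["answer"])  # 26.25
```

The practical benefit claimed in the update is exactly this: structured output and declared tools remove a layer of brittle post-processing from real-world deployments.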

Specifically, DeepSeek R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community.

Recommended read:
References:
  • Kyle Wiggers: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
  • AI News | VentureBeat: VentureBeat article on DeepSeek R1-0528.
  • Analytics Vidhya: New Deepseek R1-0528 Update is INSANE
  • MacStories: Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS
  • www.marktechpost.com: DeepSeek Releases R1-0528: An Open-Source Reasoning AI Model Delivering Enhanced Math and Code Performance with Single-GPU Efficiency
  • NextBigFuture.com: DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training.
  • pandaily.com: In the early hours of May 29, Chinese AI startup DeepSeek quietly open-sourced the latest iteration of its R1 large language model, DeepSeek-R1-0528, on the Hugging Face platform .
  • www.computerworld.com: Reports that DeepSeek releases a new version of its R1 reasoning AI model.
  • techcrunch.com: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
  • the-decoder.com: Deepseek's R1 model closes the gap with OpenAI and Google after major update
  • Simon Willison: Some notes on the new DeepSeek-R1-0528 - a completely different model from the R1 they released in January, despite having a very similar name Terrible LLM naming has managed to infect the Chinese AI labs too
  • Analytics India Magazine: The new DeepSeek-R1 Is as good as OpenAI o3 and Gemini 2.5 Pro
  • : The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
  • TheSequence: This article provides an overview of the new DeepSeek R1-0528 model and notes its improvements over the prior model released in January.
  • Kyle Wiggers: News about the release of DeepSeek's updated R1 AI model, emphasizing its increased censorship.
  • Fello AI: Reports that the R1-0528 model from DeepSeek is matching the capabilities of OpenAI's o3 and Google's Gemini 2.5 Pro.
  • felloai.com: Latest DeepSeek Update Called R1-0528 Is Matching OpenAI’s o3 & Gemini 2.5 Pro
  • www.tomsguide.com: DeepSeek’s latest update is a serious threat to ChatGPT and Google — here’s why