@medium.com
//
DeepSeek's latest AI model, R1-0528, is making waves in the AI community due to its impressive performance on math and reasoning tasks. Despite a name nearly identical to its predecessor's, the new release behaves like a substantially different model, marking a significant leap forward. Demand for DeepSeek R1-0528 has been unprecedented: the app shot to the top of the App Store past closed-model rivals, and the API was so overloaded that the company temporarily stopped accepting payments.
The most notable improvement in DeepSeek R1-0528 is its mathematical reasoning capabilities. On the AIME 2025 test, the model's accuracy increased from 70% to 87.5%, surpassing Gemini 2.5 Pro and putting it in close competition with OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model using significantly more tokens per question, engaging in more thorough chains of reasoning. This means the model can check its own work, recognize errors, and course-correct during problem-solving.
DeepSeek's success is challenging established closed models and driving competition in the AI landscape. DeepSeek-R1-0528 continues to use a Mixture-of-Experts (MoE) architecture, now scaled to an enormous size. Sparse activation allows for powerful specialized expertise in different coding domains while maintaining efficiency. The context window remains at 128k tokens (with RoPE scaling or similar techniques capable of extending it further). The rise of DeepSeek is underscored by benchmarks showing it outperforming some of the industry's leading models, including OpenAI's ChatGPT. Furthermore, the release of a distilled variant, R1-0528-Qwen3-8B, ensures broad accessibility of this powerful technology.
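The efficiency argument behind sparse MoE activation can be made concrete with a little arithmetic. The ~37B active-parameters-per-token figure below is an assumption drawn from DeepSeek's published model reports, not from this article:

```python
# Back-of-envelope sketch of MoE sparse activation for DeepSeek-R1-0528.
# Assumptions: ~685B total parameters, ~37B activated per token (the active
# figure comes from DeepSeek's own reports, not this article).

TOTAL_PARAMS_B = 685   # total parameters, billions
ACTIVE_PARAMS_B = 37   # parameters activated per token, billions

active_fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
print(f"Active per token: {active_fraction:.1%} of total parameters")  # ~5.4%

# A dense model of the same size would spend roughly this many times more
# FLOPs per token than the sparse MoE does:
print(f"Per-token compute vs. dense 685B: ~{TOTAL_PARAMS_B / ACTIVE_PARAMS_B:.0f}x cheaper")
```

This is why a model with hundreds of billions of parameters can still serve requests at a reasonable cost: each token only touches a small subset of experts.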
Recommended read:
References :
- RunPod Blog: The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
- TheSequence: The Sequence Radar #554 : The New DeepSeek R1-0528 is Very Impressive
@www.marktechpost.com
//
DeepSeek has released a major update to its R1 reasoning model, dubbed DeepSeek-R1-0528, marking a significant step forward in open-source AI. The update boasts enhanced performance in complex reasoning, mathematics, and coding, positioning it as a strong competitor to leading commercial models like OpenAI's o3 and Google's Gemini 2.5 Pro. The model's weights, training recipes, and comprehensive documentation are openly available under the MIT license, fostering transparency and community-driven innovation. This release allows researchers, developers, and businesses to access cutting-edge AI capabilities without the constraints of closed ecosystems or expensive subscriptions.
The DeepSeek-R1-0528 update brings several core improvements. The model's parameter count has increased from 671 billion to 685 billion, enabling it to process and store more intricate patterns. Enhanced chain-of-thought layers deepen the model's reasoning capabilities, making it more reliable in handling multi-step logic problems. Post-training optimizations have also been applied to reduce hallucinations and improve output stability. In practical terms, the update introduces JSON outputs, native function calling, and simplified system prompts, all designed to streamline real-world deployment and enhance the developer experience.
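To make the new developer-facing features concrete, here is a minimal sketch of what a function-calling request with JSON output might look like against an OpenAI-compatible endpoint serving R1-0528. The model identifier, tool name, and schema below are illustrative assumptions, not documented values:

```python
# Hedged sketch: the shape of a function-calling request with structured
# JSON output. The model name and tool schema are illustrative assumptions.
import json

request = {
    "model": "deepseek-reasoner",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What is 17 * 23? Use the calculator tool."}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "calculator",
            "description": "Evaluate a basic arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }],
    # Ask the model to respond with a JSON object rather than free text:
    "response_format": {"type": "json_object"},
}

payload = json.dumps(request)  # body of a POST to a /chat/completions endpoint
```

The point of native function calling is that the model returns a structured tool invocation (name plus JSON arguments) instead of prose, which the caller can execute and feed back.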
Specifically, DeepSeek R1-0528 demonstrates a remarkable leap in mathematical reasoning. On the AIME 2025 test, its accuracy improved from 70% to an impressive 87.5%, rivaling OpenAI's o3. This improvement is attributed to "enhanced thinking depth," with the model now utilizing significantly more tokens per question, indicating more thorough and systematic logical analysis. The open-source nature of DeepSeek-R1-0528 empowers users to fine-tune and adapt the model to their specific needs, fostering further innovation and advancements within the AI community.
Recommended read:
References :
- pub.towardsai.net: DeepSeek R1 : Is It Right For You? (A Practical Self‑Assessment for Businesses and Individuals)
- AI News | VentureBeat: VentureBeat article on DeepSeek R1-0528.
- Analytics Vidhya: New Deepseek R1-0528 Update is INSANE
- Kyle Wiggers: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
- MacStories: Testing DeepSeek R1-0528 on the M3 Ultra Mac Studio and Installing Local GGUF Models with Ollama on macOS
- www.analyticsvidhya.com: When DeepSeek R1 launched in January, it instantly became one of the most talked-about open-source models on the scene, gaining popularity for its sharp reasoning and impressive performance. Fast-forward to today, and DeepSeek is back with a so-called "minor trial upgrade", but don't let the modest name fool you. DeepSeek-R1-0528 delivers major leaps in reasoning, […]
- www.marktechpost.com: DeepSeek, the Chinese AI Unicorn, has released an updated version of its R1 reasoning model, named DeepSeek-R1-0528. This release enhances the model’s capabilities in mathematics, programming, and general logical reasoning, positioning it as a formidable open-source alternative to leading models like OpenAI’s o3 and Google’s Gemini 2.5 Pro. Technical Enhancements The R1-0528 update introduces significant […]
- NextBigFuture.com: DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training.
- MarkTechPost: Information about DeepSeek's R1-0528 model and its enhancements in math and code performance.
- Pandaily: In the early hours of May 29, Chinese AI startup DeepSeek quietly open-sourced the latest iteration of its R1 large language model, DeepSeek-R1-0528, on the Hugging Face platform.
- www.computerworld.com: Reports that DeepSeek releases a new version of its R1 reasoning AI model.
- techcrunch.com: DeepSeek updates its R1 reasoning AI model, releases it on Hugging Face
- the-decoder.com: Deepseek's R1 model closes the gap with OpenAI and Google after major update
- Simon Willison: Some notes on the new DeepSeek-R1-0528 - a completely different model from the R1 they released in January, despite having a very similar name. Terrible LLM naming has managed to infect the Chinese AI labs too.
- Analytics India Magazine: The new DeepSeek-R1 Is as good as OpenAI o3 and Gemini 2.5 Pro
- RunPod Blog: The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
- TheSequence: This article provides an overview of the new DeepSeek R1-0528 model and notes its improvements over the prior model released in January.
- Kyle Wiggers: News about the release of DeepSeek's updated R1 AI model, emphasizing its increased censorship.
- Fello AI: Reports that the R1-0528 model from DeepSeek is matching the capabilities of OpenAI's o3 and Google's Gemini 2.5 Pro.
- felloai.com: Latest DeepSeek Update Called R1-0528 Is Matching OpenAI’s o3 & Gemini 2.5 Pro
- www.tomsguide.com: DeepSeek’s latest update is a serious threat to ChatGPT and Google — here’s why
@simonwillison.net
//
Google has broadened access to its advanced AI model, Gemini 2.5 Pro, showcasing impressive capabilities and competitive pricing designed to challenge rival models like OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. Google's latest flagship model is currently recognized as a top performer, excelling in Optical Character Recognition (OCR), audio transcription, and long-context coding tasks. Alphabet CEO Sundar Pichai highlighted Gemini 2.5 Pro as Google's "most intelligent model + now our most in demand." Demand has increased by over 80 percent this month alone across both Google AI Studio and the Gemini API.
Google's expansion includes a tiered pricing structure for the Gemini 2.5 Pro API, offering a more affordable option than competitors. Prompts with fewer than 200,000 tokens are priced at $1.25 per million tokens for input and $10 per million for output, while larger prompts cost $2.50 and $15 per million tokens, respectively. Although prompt caching is not yet available, its future implementation could lower costs further. Grounding with Google Search includes 500 free queries per day on the free tier and 1,500 free queries per day on the paid tier, with additional queries costing $35 per 1,000.
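A small helper makes the tiered pricing concrete. This sketch encodes only the input/output token rates quoted above; it ignores grounding queries and any future caching discounts:

```python
# Estimate the cost of a Gemini 2.5 Pro API call under the tiered pricing
# described above: $1.25/$10 per million input/output tokens for prompts
# under 200k tokens, $2.50/$15 per million above that threshold.

def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    if input_tokens < 200_000:
        in_rate, out_rate = 1.25, 10.00   # $ per million tokens
    else:
        in_rate, out_rate = 2.50, 15.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 50k-token prompt producing 2k tokens of output:
print(f"${gemini_25_pro_cost(50_000, 2_000):.4f}")  # → $0.0825
```

One caveat: Gemini 2.5 Pro is a reasoning model whose invisible "thinking" tokens are billed as output, so real costs depend on how much reasoning a prompt triggers, not just the visible answer length.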
The AI research group EpochAI reported that Gemini 2.5 Pro scored 84% on the GPQA Diamond benchmark, surpassing the typical 70% score of human experts. This benchmark poses challenging multiple-choice questions in biology, chemistry, and physics, independently validating Google's own benchmark results. The model is now available as a paid offering alongside the existing free tier; prompts on the free tier may be used to improve Google's products, while paid-tier prompts are not. Paid rate limits vary by billing tier, ranging from 150 to 2,000 requests per minute. Google will retire the Gemini 2.0 Pro preview entirely in favor of 2.5.
Recommended read:
References :
- Data Phoenix: Google Unveils Gemini 2.5: Its Most Intelligent AI Model Yet
- AI News | VentureBeat: Gemini 2.5 Pro is now available without limits and for cheaper than Claude, GPT-4o
- Simon Willison's Weblog: Google's Gemini 2.5 Pro is currently the top model: a superb model for OCR, audio transcription and long-context coding, and you can now pay for it. The new gemini-2.5-pro-preview-03-25 model ID is priced like this: prompts less than 200,000 tokens cost $1.25/million tokens for input and $10/million for output; prompts more than 200,000 tokens (up to the 1,048,576 max) cost $2.50/million for input and $15/million for output. This is priced at around the same level as Gemini 1.5 Pro ($1.25/$5 for input/output below 128,000 tokens, $2.50/$10 above 128,000 tokens), is cheaper than GPT-4o for shorter prompts ($2.50/$10) and is cheaper than Claude 3.7 Sonnet ($3/$15). Gemini 2.5 Pro is a reasoning model, and invisible reasoning tokens are included in the output token count. I just tried prompting "hi" and it charged me 2 tokens for input and 623 for output, of which 613 were "thinking" tokens. That still adds up to just 0.6232 cents (less than a cent). I released an update to llm-gemini this morning adding support for the new model: llm install -U llm-gemini, then llm -m gemini-2.5-pro-preview-03-25 hi. Note that the model continues to be available for free under the previous gemini-2.5-pro-exp-03-25 model ID: llm -m gemini-2.5-pro-exp-03-25 hi. The free tier is "used to improve our products", the paid tier is not. Rate limits for the paid model: 150/minute and 1,000/day for Tier 1 (billing configured), 1,000/minute and 50,000/day for Tier 2 ($250 total spend), and 2,000/minute and unlimited/day for Tier 3 ($1,000 total spend). Meanwhile the free tier continues to limit you to 5 requests per minute and 25 per day.
- THE DECODER: Google has opened broader access to Gemini 2.5 Pro, its latest AI flagship model, which demonstrates impressive performance in scientific testing while introducing competitive pricing.
- Bernard Marr: Google's latest AI model, Gemini 2.5 Pro, is poised to streamline complex mathematical and coding operations.
- The Cognitive Revolution: In this illuminating episode of The Cognitive Revolution, host Nathan Labenz speaks with Jack Rae, principal research scientist at Google DeepMind and technical lead on Google's thinking and inference time scaling work.
- bsky.app: Gemini 2. 5 Pro pricing was announced today - it's cheaper than both GPT-4o and Claude 3.7 Sonnet I've updated my llm-gemini plugin to add support for the new paid model Full notes here:
- Last Week in AI: Google unveils a next-gen AI reasoning model, OpenAI rolls out image generation powered by GPT-4o to ChatGPT, Tencent’s Hunyuan T1 AI reasoning model rivals DeepSeek in performance and price
Maximilian Schreiner@THE DECODER
//
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.
Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.
Recommended read:
References :
- SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
- The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
- AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
- Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
- www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
- Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
- THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
- intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it is the best on several reasoning and coding benchmarks.
- The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
- Ars OpenForum: Google says the new Gemini 2.5 Pro model is its "smartest" AI yet
- The Official Google Blog: Gemini 2.5: Our most intelligent AI model
- www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
- bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
- Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
- bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.
- Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
- www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
- www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
- Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
- TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
- Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
- AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
- Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
- Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
- Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
- www.producthunt.com: Google's most intelligent AI model
- Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
- AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
- thezvi.wordpress.com: Gemini 2.5 Pro Experimental is America’s next top large language model. That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of …
- www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
- www.infoworld.com: Google introduces Gemini 2.5 reasoning models
- Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
- www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
- AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
- Communications of the ACM: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
- The Next Web: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
- www.tomsguide.com: Gemini 2.5 Pro is now free to all users in surprise move
- Composio: Google just launched Gemini 2.5 Pro on March 26th, claiming to be the best in coding, reasoning and overall everything.
- Composio: Google's Gemini 2.5 Pro, released on March 26th, is being hailed for its enhanced reasoning, coding, and multimodal capabilities.
- Analytics India Magazine: Gemini 2.5 Pro is better than the Claude 3.7 Sonnet for coding in the Aider Polyglot leaderboard.
- www.zdnet.com: Gemini's latest model outperforms OpenAI's o3 mini and Anthropic's Claude 3.7 Sonnet on the latest benchmarks. Here's how to try it.
- www.marketingaiinstitute.com: [The AI Show Episode 142]: ChatGPT’s New Image Generator, Studio Ghibli Craze and Backlash, Gemini 2.5, OpenAI Academy, 4o Updates, Vibe Marketing & xAI Acquires X
- www.tomsguide.com: Gemini 2.5 is free, but can it beat DeepSeek?
- www.tomsguide.com: Google Gemini could soon help your kids with their homework — here’s what we know
- PCWorld: Google’s latest Gemini 2.5 Pro AI model is now free for all users
- www.techradar.com: Google just made Gemini 2.5 Pro Experimental free for everyone, and that's awesome.
- Last Week in AI: #205 - Gemini 2.5, ChatGPT Image Gen, Thoughts of LLMs
Matthias Bastian@THE DECODER
//
Mistral AI, a French artificial intelligence startup, has launched Mistral Small 3.1, a new open-source language model boasting 24 billion parameters. According to the company, this model outperforms similar offerings from Google and OpenAI, specifically Gemma 3 and GPT-4o Mini, while operating efficiently on consumer hardware like a single RTX 4090 GPU or a MacBook with 32GB RAM. It supports multimodal inputs, processing both text and images, and features an expanded context window of up to 128,000 tokens, which makes it suitable for long-form reasoning and document analysis.
Mistral Small 3.1 is released under the Apache 2.0 license, promoting accessibility and competition within the AI landscape. Mistral AI aims to challenge the dominance of major U.S. tech firms by offering a high-performance, cost-effective AI solution. The model achieves inference speeds of 150 tokens per second and is designed for text and multimodal understanding, positioning itself as a powerful alternative to industry-leading models without the need for expensive cloud infrastructure.
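The "runs on consumer hardware" claim is easy to sanity-check with back-of-envelope arithmetic. This sketch estimates raw weight memory at common precisions and ignores KV-cache and activation overhead, so the figures are lower bounds:

```python
# Approximate weight memory for a 24B-parameter model (Mistral Small 3.1)
# at common precisions. KV-cache and activations are excluded.

PARAMS = 24e9  # Mistral Small 3.1 parameter count

for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name:>5}: ~{gib:.0f} GiB of weights")
```

At 4-bit quantization, roughly 11 GiB of weights leaves headroom on a 24 GB RTX 4090 or a 32 GB MacBook, consistent with the claim above; full fp16 (~45 GiB) would not fit on either.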
Recommended read:
References :
- THE DECODER: Mistral launches improved Small 3.1 multimodal model
- venturebeat.com: Mistral AI launches efficient open-source model that outperforms Google and OpenAI offerings with just 24 billion parameters, challenging U.S. tech giants' dominance in artificial intelligence.
- Maginative: Mistral Small 3.1 Outperforms Gemma 3 and GPT-4o Mini
- TestingCatalog: Mistral Small 3: A 24B open-source AI model optimized for speed
- Simon Willison's Weblog: Mistral Small 3.1, an open-source AI model, delivers state-of-the-art performance.
- SiliconANGLE: Paris-based artificial intelligence startup Mistral AI said today it’s open-sourcing a new, lightweight AI model called Mistral Small 3.1, claiming it surpasses the capabilities of similar models created by OpenAI and Google LLC.
- Analytics Vidhya: Mistral Small 3.1: The Best Model in its Weight Class
- Analytics Vidhya: Mistral 3.1 vs Gemma 3: Which is the Better Model?
@bdtechtalks.com
//
Alibaba has recently launched QwQ-32B, a new reasoning model that demonstrates performance on par with DeepSeek's R1 model. This is a notable achievement in the field of AI, particularly for smaller models: the Qwen team showed that reinforcement learning on a strong base model can unlock reasoning capabilities that bring a smaller model's performance up to par with giant models.
QwQ-32B not only matches but surpasses models like DeepSeek-R1 and OpenAI's o1-mini across key industry benchmarks, including AIME24, LiveBench, and BFCL. This is significant because QwQ-32B reaches this level of performance with only about 5% of the parameters used by DeepSeek-R1, resulting in lower inference costs without compromising quality or capability. Groq is offering developers the ability to build fast with Qwen QwQ 32B on GroqCloud™, running the 32B-parameter model at roughly 400 tokens per second. The model is proving very competitive on reasoning benchmarks and is among the most-used open-source models.
QwQ-32B was explicitly designed for tool use and for adapting its reasoning based on environmental feedback, a major advantage for AI agents that need to reason, plan, and adapt based on context; it outperforms R1 and o1-mini on the Berkeley Function Calling Leaderboard. These capabilities reinforce the point that reinforcement learning on a strong base model can bring a smaller model's reasoning up to par with far larger ones.
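The roughly 400 tokens-per-second throughput quoted for QwQ 32B on GroqCloud matters because reasoning models emit long "thinking" traces before answering. A quick estimate of streaming time at that rate:

```python
# How long does a reasoning trace take to stream at ~400 tokens/second,
# the throughput quoted above for QwQ 32B on GroqCloud?

TOKENS_PER_SECOND = 400

def stream_time_seconds(output_tokens: int) -> float:
    """Seconds to stream a response of the given length."""
    return output_tokens / TOKENS_PER_SECOND

# Reasoning models often emit thousands of thinking tokens per answer:
for n in (1_000, 10_000, 30_000):
    print(f"{n:>6} tokens -> ~{stream_time_seconds(n):.1f} s")
```

Even a 10,000-token chain of thought streams in about 25 seconds at this rate, which is what makes long-reasoning agents practical interactively.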
Recommended read:
References :
- Last Week in AI: LWiAI Podcast #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors
- Groq: A Guide to Reasoning with Qwen QwQ 32B
- Last Week in AI: #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors
- Sebastian Raschka, PhD: This article explores recent research advancements in reasoning-optimized LLMs, with a particular focus on inference-time compute scaling that have emerged since the release of DeepSeek R1.
- Analytics Vidhya: China is rapidly advancing in AI, releasing models like DeepSeek and Qwen to rival global giants.
- Last Week in AI: Alibaba’s New QwQ 32B Model is as Good as DeepSeek-R1
- Maginative: Despite having far fewer parameters, Qwen’s new QwQ-32B model outperforms DeepSeek-R1 and OpenAI’s o1-mini in mathematical benchmarks and scientific reasoning, showcasing the power of reinforcement learning.
@bdtechtalks.com
//
Alibaba's Qwen team has unveiled QwQ-32B, a 32-billion-parameter reasoning model that rivals much larger AI models in problem-solving capabilities. This development highlights the potential of reinforcement learning (RL) in enhancing AI performance. QwQ-32B excels in mathematics, coding, and scientific reasoning tasks, outperforming models like DeepSeek-R1 (671B parameters) and OpenAI's o1-mini, despite its significantly smaller size. Its effectiveness lies in a multi-stage RL training approach, demonstrating the ability of smaller models with scaled reinforcement learning to match or surpass the performance of giant models.
The QwQ-32B is not only competitive in performance but also offers practical advantages. It is available as open-weight under an Apache 2.0 license, allowing businesses to customize and deploy it without restrictions. Additionally, QwQ-32B requires significantly less computational power, running on a single high-end GPU compared to the multi-GPU setups needed for larger models like DeepSeek-R1. This combination of performance, accessibility, and efficiency positions QwQ-32B as a valuable resource for the AI community and enterprises seeking to leverage advanced reasoning capabilities.
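The single-GPU versus multi-GPU contrast follows directly from weight memory. This sketch compares raw fp16 weight footprints only, ignoring quantization, KV-cache, and activations (fp16 is an assumption for comparison; the models' native serving precisions may differ):

```python
# Why QwQ-32B can run on one high-end GPU while DeepSeek-R1 needs a
# multi-GPU setup: compare raw fp16 weight memory (2 bytes per parameter).

GIB = 2**30

def fp16_weight_gib(params_billions: float) -> float:
    """GiB of memory needed just to hold the weights at fp16."""
    return params_billions * 1e9 * 2 / GIB

qwq = fp16_weight_gib(32)    # ~60 GiB: within reach of a single 80 GB GPU
r1 = fp16_weight_gib(671)    # ~1250 GiB: requires a multi-GPU cluster
print(f"QwQ-32B: ~{qwq:.0f} GiB, DeepSeek-R1: ~{r1:.0f} GiB of fp16 weights")
```

The roughly 20x gap in weight memory is the practical substance of the "efficiency" claim: deployment cost, not just benchmark scores, is where the smaller model wins.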
Recommended read:
References :
- Groq: A Guide to Reasoning with Qwen QwQ 32B
- Analytics Vidhya: Qwen’s QwQ-32B: Small Model with Huge Potential
- Maginative: Alibaba's Latest AI Model, QwQ-32B, Beats Larger Rivals in Math and Reasoning
- bdtechtalks.com: Alibaba’s QwQ-32B reasoning model matches DeepSeek-R1, outperforms OpenAI o1-mini
- Last Week in AI: LWiAI Podcast #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors