Matthias Bastian@THE DECODER
//
Mistral AI, a French artificial intelligence startup, has launched Mistral Small 3.1, a new open-source language model boasting 24 billion parameters. According to the company, this model outperforms similar offerings from Google and OpenAI, specifically Gemma 3 and GPT-4o Mini, while operating efficiently on consumer hardware like a single RTX 4090 GPU or a MacBook with 32GB RAM. It supports multimodal inputs, processing both text and images, and features an expanded context window of up to 128,000 tokens, which makes it suitable for long-form reasoning and document analysis.
Mistral Small 3.1 is released under the Apache 2.0 license, promoting accessibility and competition within the AI landscape. Mistral AI aims to challenge the dominance of major U.S. tech firms by offering a high-performance, cost-effective AI solution. The model achieves inference speeds of 150 tokens per second and is designed for text and multimodal understanding, positioning itself as a powerful alternative to industry-leading models without the need for expensive cloud infrastructure. References:
Classification:
@bdtechtalks.com
//
Alibaba has recently launched Qwen-32B, a new reasoning model that performs on par with DeepSeek's R1 model. This is a notable achievement in the field of AI, particularly for smaller models: the Qwen team showed that reinforcement learning on a strong base model can unlock reasoning capabilities in smaller models, bringing their performance up to the level of far larger ones.
Qwen-32B not only matches but surpasses models like DeepSeek-R1 and OpenAI's o1-mini across key industry benchmarks, including AIME24, LiveBench, and BFCL. This is significant because Qwen-32B reaches that level of performance with only approximately 5% of the parameters used by DeepSeek-R1, resulting in lower inference costs without compromising quality or capability. Groq is offering developers the ability to build fast with Qwen QwQ 32B on GroqCloud™, running the 32B-parameter model at roughly 400 tokens per second; it has proven highly competitive on reasoning benchmarks and is among the most widely used open-source models. Qwen-32B was explicitly designed for tool use and for adapting its reasoning based on environmental feedback, a major win for AI agents that need to reason, plan, and adapt in context: it outperforms R1 and o1-mini on the Berkeley Function Calling Leaderboard. References:
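As a rough sketch of what calling the model through GroqCloud's OpenAI-compatible endpoint might look like, the snippet below builds (but does not send) a chat-completion request. The base URL and the model identifier are assumptions for illustration, not details confirmed by this article; consult Groq's documentation for the actual values.

```python
import json
import urllib.request

# Assumed values -- check Groq's documentation for the real
# endpoint and model identifier before relying on this sketch.
BASE_URL = "https://api.groq.com/openai/v1/chat/completions"
MODEL_ID = "qwen-qwq-32b"  # hypothetical identifier for Qwen QwQ 32B

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Plan a three-step research task.", api_key="YOUR_KEY")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would require a valid API key; the payload shape is the standard OpenAI chat-completions format that Groq's endpoint accepts.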
Classification:
Yvonne Smit@Qusoft
//
Koen Groenland's book, "Introduction to Quantum Computing for Business," is gaining attention as a key resource for guiding companies on leveraging quantum advancements. As the Dutch quantum ecosystem expands, experts like Groenland are playing a vital role in making quantum knowledge accessible to the business world. The book aims to demystify this technology for business professionals without a technical background, focusing on the capabilities and applications of quantum computers rather than the underlying technical details. Groenland hopes the book will become a standard work for anyone starting a quantum journey, emphasizing the importance of understanding quantum algorithms for business value.
Classiq Technologies, in collaboration with Sumitomo Corporation and Mizuho-DL Financial Technology, achieved significant compression of quantum circuits for Monte Carlo simulations used in financial risk analysis. The study compared traditional and pseudo-random number-based quantum Monte Carlo methods, optimizing circuit depth and qubit usage using Classiq's high-level quantum design platform, Qmod. The results showed that efficient circuit compression is possible without compromising accuracy, supporting the feasibility of scalable, noise-tolerant quantum applications in financial risk management.

The Open Source Initiative (OSI) and Apereo Foundation have jointly responded to the White House Office of Science & Technology Policy's (OSTP) request for information on an AI Action Plan. Their comment emphasizes the benefits of Open Source and positions the Open Source community as a valuable resource for policymakers. The OSI highlighted its history of stewarding the Open Source Definition and its recent work in co-developing the Open Source AI Definition (OSAID), recommending that the White House rely on the OSAID as a foundational piece of any future AI Action Plan. References:
Classification: