Top Mathematics discussions

NishMath - #nvidia

@www.linkedin.com //
Nvidia's Blackwell GPUs achieved top rankings in the latest MLPerf Training v5.0 benchmarks, delivering the highest performance at scale on every test, including the most demanding large language model (LLM) benchmark, Llama 3.1 405B pretraining. Nvidia was also the only vendor to submit results on all MLPerf Training v5.0 benchmarks, underscoring the platform's versatility across LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100.
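
For readers unfamiliar with the metric, scaling efficiency compares the measured speedup from adding GPUs against the ideal linear speedup. A minimal sketch of that arithmetic, using made-up baseline and scaled times rather than published MLPerf figures:

```python
# Scaling efficiency: measured speedup relative to ideal linear speedup.
# The GPU counts and times below are illustrative, not MLPerf data.

def scaling_efficiency(base_gpus: int, base_time: float,
                       scaled_gpus: int, scaled_time: float) -> float:
    """Fraction of ideal linear speedup achieved when scaling out."""
    measured_speedup = base_time / scaled_time
    ideal_speedup = scaled_gpus / base_gpus
    return measured_speedup / ideal_speedup

# Hypothetical example: scaling from 512 to 2,496 GPUs (times in minutes).
eff = scaling_efficiency(base_gpus=512, base_time=120.0,
                         scaled_gpus=2496, scaled_time=27.4)
print(f"Scaling efficiency: {eff:.0%}")  # ~90% for these made-up numbers
```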

The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.
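
As context for the Llama 2 70B LoRA benchmark: LoRA (low-rank adaptation) freezes the pretrained weight matrix and trains only a small low-rank update, which is why fine-tuning runs are far cheaper than pretraining. A minimal sketch of the update rule, with illustrative shapes and values that are not taken from the benchmark's reference implementation:

```python
import numpy as np

# LoRA: W' = W + (alpha / r) * B @ A, where only A and B are trained.
# Shapes and values here are illustrative placeholders.
d_out, d_in, r, alpha = 64, 64, 8, 16

W = np.random.randn(d_out, d_in)        # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (init 0)

W_effective = W + (alpha / r) * (B @ A) # adapted weight used at inference
print(W_effective.shape)                # (64, 64); equals W at init since B = 0
```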

References:
  • NVIDIA Newsroom: NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • NVIDIA Technical Blog: NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0
  • IEEE Spectrum: Nvidia’s Blackwell Conquers Largest LLM Training Benchmark
  • NVIDIA Technical Blog: Reproducing NVIDIA MLPerf v5.0 Training Scores for LLM Benchmarks
  • AI News | VentureBeat: Nvidia says its Blackwell chips lead benchmarks in training AI LLMs
  • MLCommons: New MLCommons MLPerf Training v5.0 Benchmark Results Reflect Rapid Growth and Evolution of the Field of AI
  • www.aiwire.net: MLPerf Training v5.0 results show Nvidia’s Blackwell GB200 accelerators sprinting through record time-to-train scores.
  • blogs.nvidia.com: NVIDIA is working with companies worldwide to build out AI factories, speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference; the NVIDIA Blackwell architecture is built to meet their heightened performance requirements.
  • NVIDIA Newsroom: NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing
  • ServeTheHome: The new MLPerf Training v5.0 results are dominated by NVIDIA Blackwell and Hopper submissions, but AMD Instinct MI325X also appears on a benchmark.
  • AIwire: Blackwell GPUs Lift Nvidia to the Top of MLPerf Training Rankings.

@quantumcomputingreport.com //
NVIDIA is advancing quantum and AI research through strategic collaborations. The company is partnering with Japan's National Institute of Advanced Industrial Science and Technology (AIST) to launch ABCI-Q, a new supercomputing system focused on hybrid quantum-classical computing. This research-focused system is designed to support large-scale operations, powered by 2,020 NVIDIA H100 GPUs interconnected with NVIDIA's Quantum-2 InfiniBand platform. ABCI-Q will be hosted at the newly established Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT).

The ABCI-Q infrastructure integrates CUDA-Q, an open-source platform that orchestrates large-scale quantum-classical computing, enabling researchers to simulate and accelerate quantum applications. This hybrid setup combines GPU-based simulation with physical quantum processors from vendors such as Fujitsu (superconducting qubits), QuEra (neutral atom qubits), and OptQC (photonic qubits). This modular architecture will allow for testing quantum error correction, developing algorithms, and refining co-design strategies, which are all critical for future quantum systems. The system serves as a testbed for evaluating quantum-GPU workflows and advancing practical use cases across multiple hardware modalities.
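
As an illustration of what orchestrating quantum-classical computing looks like at the code level, CUDA-Q exposes a Python API in which a quantum kernel runs on a GPU-backed simulator and can be retargeted to attached hardware by changing the target string. A minimal GHZ-state sketch, assuming a CUDA-Q installation with the NVIDIA GPU simulator target available:

```python
import cudaq

# Run on the GPU-accelerated statevector simulator; swapping the target
# string retargets the same kernel to a hardware backend where available.
cudaq.set_target("nvidia")

@cudaq.kernel
def ghz(n: int):
    qubits = cudaq.qvector(n)
    h(qubits[0])                        # put the first qubit in superposition
    for i in range(n - 1):
        x.ctrl(qubits[i], qubits[i + 1])  # chain of CNOTs entangles the rest
    mz(qubits)                          # measure all qubits

counts = cudaq.sample(ghz, 4, shots_count=1000)
print(counts)  # expect roughly half '0000' and half '1111'
```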

NVIDIA is also expanding its presence in Taiwan, powering a new supercomputer at the National Center for High-Performance Computing (NCHC). This supercomputer is projected to deliver eight times the AI performance compared to the center's previous Taiwania 2 system. The new supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, two NVIDIA GB200 NVL72 rack-scale systems, and an NVIDIA HGX B300 system built on the NVIDIA Blackwell Ultra platform, all interconnected by NVIDIA Quantum InfiniBand networking. This enhanced infrastructure is expected to significantly boost research in AI development, climate science, and quantum computing, fostering technological autonomy and global AI leadership for Taiwan.

References:
  • AI News | VentureBeat: Nvidia is powering a supercomputer at Taiwan’s National Center for High-Performance Computing that’s set to deliver over eight times more AI performance than before.
  • Japan’s National Institute of Advanced Industrial Science and Technology (AIST), in collaboration with NVIDIA, has launched ABCI-Q, a new research-focused supercomputing system designed to support large-scale hybrid quantum-classical computing.
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research

staff@insidehpc.com //
Nvidia CEO Jensen Huang has publicly walked back comments he made in January, in which he expressed skepticism about the timeline for quantum computers becoming practically useful. Huang apologized for the earlier remarks, which had triggered a drop in the stock prices of quantum computing companies. During the recent Nvidia GTC 2025 conference in San Jose, he acknowledged his misjudgment and highlighted ongoing advances in the field, attributing his initial doubts to his background in traditional computer systems development. He also expressed surprise that his comments had moved the market so sharply, joking that he had not realized the quantum computing firms were publicly listed.

SEEQC and Nvidia announced a significant breakthrough at the conference, demonstrating a fully digital quantum-classical interface protocol between a Quantum Processing Unit (QPU) and a Graphics Processing Unit (GPU). The interface is designed for ultra-low latency and bandwidth-efficient quantum error correction. Nvidia is also extending its support for quantum research through the CUDA-Q platform, which streamlines the development of hybrid, accelerated quantum supercomputers; CUDA-Q v0.10 adds support for the NVIDIA GB200 NVL72, pushing simulation performance further than before.
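
For a sense of how CUDA-Q scales simulations onto larger GPU systems such as the GB200 NVL72, the statevector simulator can be distributed across multiple GPUs via target options. A hedged sketch; the exact option string varies across CUDA-Q versions, so treat the target configuration below as an assumption and consult the docs for the installed release:

```python
import cudaq

# Distribute one statevector simulation across multiple GPUs.
# NOTE: the "mgpu" option string is an assumption based on recent
# CUDA-Q releases; check the documentation for your version.
cudaq.set_target("nvidia", option="mgpu")

@cudaq.kernel
def entangling_layer(n: int):
    q = cudaq.qvector(n)
    h(q)                          # Hadamard broadcast across all qubits
    for i in range(n - 1):
        x.ctrl(q[i], q[i + 1])    # chain of CNOTs
    mz(q)

# Statevector memory doubles with every added qubit, so circuits in the
# 30+ qubit range are where multi-GPU distribution becomes necessary.
print(cudaq.sample(entangling_layer, 30, shots_count=100))
```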

References:
  • NVIDIA Technical Blog: The NVIDIA CUDA-Q platform is designed to streamline software and hardware development for hybrid, accelerated quantum supercomputers.
  • insidehpc.com: During quantum day at Nvidia's GTC 2025 conference in San Jose, SEEQC and NVIDIA announced they have completed an end-to-end fully digital quantum-classical interface protocol demo between a QPU and GPU.
  • OODAloop: Nvidia CEO Huang on Thursday walked back comments he made in January, when he cast doubt on whether useful quantum computers would hit the market in the next 15 years.
  • The Tech Basic: Nvidia CEO Jensen Huang apologized for comments he made earlier this year that caused stock prices of quantum computing companies to plunge.

Cierra Choucair@thequantuminsider.com //
NVIDIA is establishing the Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The aim of the NVAQC is to enable accelerated quantum supercomputing, addressing challenges such as qubit noise and error correction. Commercial and academic partners will work with NVIDIA, with collaborations involving industry leaders like Quantinuum, Quantum Machines, and QuEra, as well as researchers from the Harvard Quantum Initiative (HQI) and MIT's Engineering Quantum Systems (EQuS) group.

NVIDIA's GB200 NVL72 systems and the CUDA-Q platform will power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications. The center will support the broader quantum ecosystem, accelerating the transition from experimental to practical quantum computing. Despite the CEO's recent statement that practical quantum systems are likely still 20 years away, this investment shows confidence in the long-term potential of the technology.

References:
  • The Register - On-Prem: Nvidia invests in quantum computing weeks after CEO said it's decades from being useful
  • NVIDIA Launches Boston-Based Quantum Research Center to Integrate AI Supercomputing with Quantum Computing
  • AI News | VentureBeat: Nvidia will build accelerated quantum computing research center
  • NVIDIA’s Quantum Strategy: Not Building the Computer, But the World That Enables It
  • Quantum Machines Announces NVIDIA DGX Quantum Early Access Program