@www.linkedin.com
//
NVIDIA's Blackwell GPUs achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. NVIDIA was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.
The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built on NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared with Hopper-generation H100 systems.
The new MLPerf Training v5.0 suite introduces a pretraining benchmark based on Llama 3.1 405B, the largest model yet included in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance than the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems powered by eight Blackwell GPUs delivered 2.5x more performance than a submission using the same number of GPUs in the prior round. These gains reflect advances across the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.
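As a sanity check on the scale figures, scaling efficiency is simply achieved speedup divided by ideal (linear) speedup. The sketch below assumes a hypothetical 512-GPU baseline and throughput numbers chosen to reproduce the reported 90%; only the 2,496-GPU count and the 90% figure come from the MLPerf results above.

```python
# Scaling efficiency: how close a large cluster comes to linear speedup
# relative to a smaller baseline run. The baseline size (512 GPUs) and
# per-GPU throughput below are hypothetical placeholders.

def scaling_efficiency(base_gpus, base_throughput, big_gpus, big_throughput):
    """Ratio of achieved speedup to ideal (linear) speedup."""
    achieved = big_throughput / base_throughput
    ideal = big_gpus / base_gpus
    return achieved / ideal

# Hypothetical example: a 2,496-GPU run that sustains 90% of linear scaling.
eff = scaling_efficiency(512, 1.0, 2496, (2496 / 512) * 0.90)
print(f"{eff:.0%}")  # prints "90%"
```

At 90% efficiency, doubling the GPU count still buys roughly a 1.8x throughput gain, which is what makes the reported 2.6x time-to-convergence improvement at scale plausible.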
References :
@quantumcomputingreport.com
//
References:
AI News | VentureBeat
NVIDIA is significantly advancing quantum and AI research through strategic collaborations and cutting-edge technology. The company is partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to launch ABCI-Q, a new supercomputing system focused on hybrid quantum-classical computing. This research-focused system is designed to support large-scale operations, utilizing the power of 2,020 NVIDIA H100 GPUs interconnected with NVIDIA’s Quantum-2 InfiniBand platform. The ABCI-Q system will be hosted at the newly established Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT).
The ABCI-Q infrastructure integrates CUDA-Q, an open-source platform that orchestrates large-scale quantum-classical computing, enabling researchers to simulate and accelerate quantum applications. The hybrid setup combines GPU-based simulation with physical quantum processors from vendors such as Fujitsu (superconducting qubits), QuEra (neutral-atom qubits), and OptQC (photonic qubits). This modular architecture allows for testing quantum error correction, developing algorithms, and refining co-design strategies, all critical for future quantum systems, and the system serves as a testbed for evaluating quantum-GPU workflows and advancing practical use cases across multiple hardware modalities.
NVIDIA is also expanding its presence in Taiwan, powering a new supercomputer at the National Center for High-Performance Computing (NCHC). The machine is projected to deliver eight times the AI performance of the center's previous Taiwania 2 system. It will feature NVIDIA HGX H200 systems with over 1,700 GPUs, two NVIDIA GB200 NVL72 rack-scale systems, and an NVIDIA HGX B300 system built on the NVIDIA Blackwell Ultra platform, all interconnected by NVIDIA Quantum InfiniBand networking. The enhanced infrastructure is expected to significantly boost research in AI development, climate science, and quantum computing, fostering technological autonomy and global AI leadership for Taiwan.
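The GPU-simulation half of the hybrid quantum-classical workflow described above boils down to linear algebra over a statevector. The toy simulator below is a hand-rolled illustration of that idea in plain Python, not the CUDA-Q API; platforms like CUDA-Q offload this same arithmetic, at vastly larger qubit counts, to GPUs.

```python
import math

# Toy statevector simulator: the classical-simulation side of a hybrid
# quantum-classical workflow, written in plain Python for clarity.
# Qubit k corresponds to bit k of the amplitude index (little-endian).

def apply_h(state, target):
    """Apply a Hadamard gate to `target` in a statevector."""
    s = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:        # index with target bit = 0
            j = i | (1 << target)        # partner index with bit = 1
            new[i] = s * (state[i] + state[j])
            new[j] = s * (state[i] - state[j])
    return new

def apply_cnot(state, control, target):
    """Flip the target bit of every amplitude index whose control bit is 1."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

# Prepare a 2-qubit Bell state: H on qubit 0, then CNOT(0 -> 1).
state = [1.0, 0.0, 0.0, 0.0]             # |00>
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```

Because the statevector doubles in size with each added qubit, this dense simulation is exactly the kind of memory- and bandwidth-bound workload that motivates running it on GPU clusters rather than CPUs.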
References :
staff@insidehpc.com
//
Nvidia CEO Jensen Huang has publicly walked back previous comments made in January, where he expressed skepticism regarding the timeline for quantum computers becoming practically useful. Huang apologized for his earlier statements, which caused a drop in stock prices for quantum computing companies. During the recent Nvidia GTC 2025 conference in San Jose, Huang admitted his misjudgment and highlighted ongoing advancements in the field, attributing his initial doubts to his background in traditional computer systems development. He expressed surprise that his comments had such a significant impact on the market, joking about the public listing of quantum computing firms.
SEEQC and NVIDIA announced a significant breakthrough at the conference, demonstrating a fully digital quantum-classical interface protocol between a quantum processing unit (QPU) and a graphics processing unit (GPU). The interface is designed to provide the ultra-low-latency, bandwidth-efficient link that quantum error correction requires. NVIDIA is also enhancing its support for quantum research with the CUDA-Q platform, designed to streamline the development of hybrid, accelerated quantum supercomputers; CUDA-Q v0.10 adds support for the NVIDIA GB200 NVL72, extending its performance further.
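To see why a low-latency QPU-GPU link matters for error correction: the quantum side measures stabilizer syndromes, and classical hardware must decode them and feed back a correction before errors accumulate. The sketch below shows the classical decode step for a textbook 3-qubit bit-flip repetition code; it is an illustration of syndrome decoding in general, not the SEEQC/NVIDIA interface protocol.

```python
# Classical side of quantum error correction for a 3-qubit bit-flip
# repetition code: measure the parities Z0Z1 and Z1Z2, look up which
# single qubit (if any) to flip, and apply the correction.
# Textbook example, not the SEEQC/NVIDIA protocol.

def measure_syndrome(bits):
    """Syndrome = parities of neighbouring qubit pairs (Z0Z1, Z1Z2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup table: syndrome -> index of the single qubit to flip (or None).
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Decode the syndrome and undo a single bit-flip error."""
    flip = DECODE[measure_syndrome(bits)]
    if flip is not None:
        bits = bits[:flip] + [bits[flip] ^ 1] + bits[flip + 1:]
    return bits

# Any single bit-flip on the encoded |000> state is detected and undone.
print(correct([1, 0, 0]))  # [0, 0, 0]
print(correct([0, 1, 0]))  # [0, 0, 0]
```

In a real device this decode-and-feedback loop must complete within the qubits' coherence time, which is why a digital, low-latency QPU-GPU interface is the enabling piece rather than a convenience.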
References :
Cierra Choucair@thequantuminsider.com
//
NVIDIA is establishing the NVIDIA Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The aim of the NVAQC is to enable accelerated quantum supercomputing, addressing quantum computing challenges such as qubit noise and error correction. Commercial and academic partners will work with NVIDIA, with collaborations involving industry leaders such as Quantinuum, Quantum Machines, and QuEra, as well as researchers from Harvard's HQI and MIT's EQuS.
NVIDIA's GB200 NVL72 systems and the CUDA-Q platform will power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications. The center will support the broader quantum ecosystem, accelerating the transition from experimental to practical quantum computing. Despite the CEO's recent statement that practical quantum systems are likely still 20 years away, this investment signals confidence in the technology's long-term potential.
References :