Top Mathematics discussions

NishMath - #nvidia

@www.linkedin.com //
Nvidia's Blackwell GPUs have achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across various AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. Nvidia was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the NVIDIA platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100.
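The 90% scaling-efficiency figure can be sanity-checked with simple arithmetic: efficiency is the realized speedup divided by the ideal linear speedup implied by the increase in GPU count. A minimal sketch (the baseline GPU count and throughput values below are illustrative assumptions, not published figures):

```python
def scaling_efficiency(base_gpus: int, base_throughput: float,
                       scaled_gpus: int, scaled_throughput: float) -> float:
    """Realized speedup divided by the ideal linear speedup."""
    ideal_speedup = scaled_gpus / base_gpus
    realized_speedup = scaled_throughput / base_throughput
    return realized_speedup / ideal_speedup

# Illustrative: scaling from a hypothetical 512-GPU baseline to 2,496 GPUs.
# 90% efficiency means throughput grows at 0.9x the rate of the GPU count.
eff = scaling_efficiency(512, 1.0, 2496, (2496 / 512) * 0.9)
print(f"{eff:.0%}")  # 90%
```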

The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.



References :
  • NVIDIA Newsroom: NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • NVIDIA Technical Blog: NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0
  • IEEE Spectrum: Nvidia’s Blackwell Conquers Largest LLM Training Benchmark
  • NVIDIA Technical Blog: Reproducing NVIDIA MLPerf v5.0 Training Scores for LLM Benchmarks
  • AI News | VentureBeat: Nvidia says its Blackwell chips lead benchmarks in training AI LLMs
  • blogs.nvidia.com: NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing
  • MLCommons: New MLCommons MLPerf Training v5.0 Benchmark Results Reflect Rapid Growth and Evolution of the Field of AI
  • www.aiwire.net: MLPerf Training v5.0 results show Nvidia’s Blackwell GB200 accelerators sprinting through record time-to-train scores.
  • blogs.nvidia.com: NVIDIA is working with companies worldwide to build out AI factories, speeding the training and deployment of next-generation AI applications built on the Blackwell architecture.
  • ServeTheHome: The MLPerf Training v5.0 results are dominated by NVIDIA Blackwell and Hopper, with an AMD Instinct MI325X submission appearing on one benchmark as well.
  • AIwire: Blackwell GPUs Lift Nvidia to the Top of MLPerf Training Rankings
  • www.servethehome.com: MLPerf Training v5.0 is Out
Classification:
  • HashTags: #MLPerf #NvidiaBlackwell #AITraining
  • Company: Nvidia
  • Target: AI Model Training
  • Product: Blackwell GPUs
  • Feature: MLPerf Training v5.0
  • Type: AI
  • Severity: Informative
@quantumcomputingreport.com //
NVIDIA is significantly advancing quantum and AI research through strategic collaborations and cutting-edge technology. The company is partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to launch ABCI-Q, a new supercomputing system focused on hybrid quantum-classical computing. This research-focused system is designed to support large-scale operations, utilizing the power of 2,020 NVIDIA H100 GPUs interconnected with NVIDIA’s Quantum-2 InfiniBand platform. The ABCI-Q system will be hosted at the newly established Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT).

The ABCI-Q infrastructure integrates CUDA-Q, an open-source platform that orchestrates large-scale quantum-classical computing, enabling researchers to simulate and accelerate quantum applications. This hybrid setup combines GPU-based simulation with physical quantum processors from vendors such as Fujitsu (superconducting qubits), QuEra (neutral atom qubits), and OptQC (photonic qubits). This modular architecture will allow for testing quantum error correction, developing algorithms, and refining co-design strategies, which are all critical for future quantum systems. The system serves as a testbed for evaluating quantum-GPU workflows and advancing practical use cases across multiple hardware modalities.
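At small scale, the GPU-based simulation that CUDA-Q orchestrates reduces to statevector evolution: applying gate matrices to a vector of complex amplitudes. A toy illustration using plain NumPy (not the CUDA-Q API) that prepares a two-qubit Bell state:

```python
import numpy as np

# Single-qubit gates as 2x2 matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)

# CNOT with the first qubit as control, second as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Two-qubit statevector initialized to |00>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# Apply H to the first qubit, then CNOT, yielding the Bell state
# (|00> + |11>) / sqrt(2).
state = np.kron(H, I) @ state
state = CNOT @ state

# Measurement probabilities concentrate equally on |00> and |11>.
probs = np.abs(state) ** 2
print(np.round(probs, 3))
```

Real hybrid workloads of the kind ABCI-Q targets replace this dense matrix-vector arithmetic with GPU-accelerated simulators for large qubit counts, or dispatch the circuit to physical quantum hardware.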

NVIDIA is also expanding its presence in Taiwan, powering a new supercomputer at the National Center for High-Performance Computing (NCHC). This supercomputer is projected to deliver eight times the AI performance compared to the center's previous Taiwania 2 system. The new supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, two NVIDIA GB200 NVL72 rack-scale systems, and an NVIDIA HGX B300 system built on the NVIDIA Blackwell Ultra platform, all interconnected by NVIDIA Quantum InfiniBand networking. This enhanced infrastructure is expected to significantly boost research in AI development, climate science, and quantum computing, fostering technological autonomy and global AI leadership for Taiwan.



References :
  • AI News | VentureBeat: Nvidia is powering a supercomputer at Taiwan’s National Center for High-Performance Computing that’s set to deliver over eight times more AI performance than before.
  • Quantum Computing Report: Japan’s National Institute of Advanced Industrial Science and Technology (AIST), in collaboration with NVIDIA, has launched ABCI-Q, a new research-focused supercomputing system designed to support large-scale hybrid quantum-classical computing.
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research
Classification: