The NVIDIA HGX B200 is a premier accelerated scale-up platform designed for the most demanding generative AI, data analytics, and high-performance computing (HPC) workloads. It integrates eight NVIDIA Blackwell GPUs with high-speed interconnects to deliver unprecedented inference and training performance for modern data centers.
- Features 8x NVIDIA Blackwell SXM GPUs for massive parallel processing power.
- Delivers up to 144 PFLOPS of FP4 Tensor Core performance (sparse).
- Equipped with 1.4 TB of high-speed HBM3e memory for large-scale model handling.
- Fifth-generation NVIDIA NVLink technology providing 1.8 TB/s GPU-to-GPU bandwidth.
- Total NVLink bandwidth of 14.4 TB/s via integrated NVLink 5 Switches.
- Supports networking bandwidth up to 0.8 TB/s for efficient scale-out.
- Optimized for real-time agentic AI inference and large language model (LLM) training.
- Hosted by x86-based CPU baseboards for flexible data center integration.
- Includes support for NVIDIA BlueField-3 DPUs for cloud networking and security.
- Provides 600 TFLOPS of FP32 performance for high-precision scientific computing.
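As a rough consistency check, several of the platform totals above follow from per-GPU figures multiplied across the eight Blackwell GPUs. The per-GPU numbers below are assumptions back-derived from the platform totals in this list, not official per-GPU specifications:

```python
# Sketch: deriving HGX B200 platform aggregates from assumed per-GPU figures.
NUM_GPUS = 8  # eight Blackwell SXM GPUs per HGX B200 baseboard

per_gpu = {
    "fp4_pflops_sparse": 18,   # assumed; 8 x 18  = 144 PFLOPS FP4 (sparse)
    "hbm3e_gb": 180,           # assumed; 8 x 180 = 1,440 GB (~1.4 TB) HBM3e
    "nvlink_tb_s": 1.8,        # NVLink 5 per-GPU; 8 x 1.8 = 14.4 TB/s total
}

# Scale each per-GPU figure up to the full eight-GPU platform.
platform = {key: value * NUM_GPUS for key, value in per_gpu.items()}
print(platform)
```

Running the sketch reproduces the aggregate figures quoted above (144 PFLOPS sparse FP4, roughly 1.4 TB of HBM3e, and 14.4 TB/s of total NVLink bandwidth).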