The NVIDIA DGX H200 is the world’s premier AI platform, purpose-built for the most demanding generative AI and high-performance computing (HPC) workloads. It leverages the NVIDIA H200 Tensor Core GPU with HBM3e memory to provide the massive memory capacity and bandwidth required for frontier AI models.
- 8x NVIDIA H200 Tensor Core GPUs with 141GB HBM3e each
- 1.1TB of total GPU memory, with 4.8TB/s of memory bandwidth per GPU
- NVIDIA NVLink and NVSwitch fabric providing 900GB/s bidirectional GPU-to-GPU bandwidth
- Up to 32 Petaflops of FP8 AI performance for training and inference
- Dual high-performance x86 CPUs and 2TB of system memory
- 8x NVIDIA ConnectX-7 400Gb/s InfiniBand/Ethernet network interfaces
- Dual NVIDIA BlueField-3 DPUs for offloading infrastructure tasks
- 30TB of high-performance NVMe internal storage
- Optimized for NVIDIA AI Enterprise and Base Command software
- 8U rackmount form factor with redundant power and cooling
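The headline aggregate figures above follow directly from the per-GPU numbers; a quick arithmetic sketch using only the values in the list (nothing here is measured or benchmarked):

```python
# Sanity-check the system-level figures from the per-GPU specs listed above.
num_gpus = 8
hbm3e_per_gpu_gb = 141        # GB of HBM3e per H200 GPU
bandwidth_per_gpu_tbps = 4.8  # TB/s memory bandwidth per GPU

total_memory_tb = num_gpus * hbm3e_per_gpu_gb / 1000          # 1.128 TB, quoted as "1.1TB"
aggregate_bandwidth_tbps = num_gpus * bandwidth_per_gpu_tbps  # 38.4 TB/s across all GPUs

print(f"Total GPU memory: {total_memory_tb:.3f} TB")
print(f"Aggregate memory bandwidth: {aggregate_bandwidth_tbps:.1f} TB/s")
```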
NVIDIA - DGX H200 NVLink System - AI Supercomputing Infrastructure
For a quote, contact us at info@tropical.com
- SKU: DGX H200 NVLink System
- Weight: 287.60 lbs
- Width: 19.00 in
- Height: 14.00 in
- Depth: 35.30 in
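The 8U form factor listed earlier is consistent with the 14.00 in height in the spec table, since a standard EIA rack unit is 1.75 in; a one-line check:

```python
RACK_UNIT_IN = 1.75  # height of one rack unit (1U) per the EIA-310 standard

height_in = 8 * RACK_UNIT_IN  # 8U chassis -> 14.0 in, matching the spec table
print(f"8U = {height_in} in")
```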