The NVIDIA DGX H200 is the premier AI infrastructure solution designed for enterprise AI factories, featuring the groundbreaking performance of the H200 Tensor Core GPU. It provides a fully integrated hardware and software stack optimized for the most demanding generative AI, natural language processing, and deep learning workloads.
- 8x NVIDIA H200 Tensor Core GPUs with 1,128GB total GPU memory
- 32 petaFLOPS of FP8 AI performance for large-scale model training and inference
- Dual Intel Xeon Platinum 8480C processors with 112 total cores and 2.00GHz base clock
- 2TB of high-speed system memory for intensive data processing
- 10x NVIDIA ConnectX-7 400Gb/s network interfaces for 1TB/s peak bidirectional bandwidth
- 4x NVIDIA NVSwitches providing 7.2TB/s bidirectional GPU-to-GPU bandwidth
- 30TB NVMe SSD internal storage for maximum data throughput
- Integrated NVIDIA AI Enterprise and Base Command software suites
- Support for NVIDIA DGX OS, Ubuntu, Red Hat Enterprise Linux, and Rocky Linux
- Standard 19-inch rackmount form factor with 10.2kW maximum power usage
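The aggregate figures above follow from per-GPU numbers. A minimal sketch, assuming NVIDIA's published H200 SXM specs (141GB HBM3e, roughly 3,958 TFLOPS FP8 with sparsity, and 900GB/s of NVLink bandwidth per GPU through the NVSwitch fabric):

```python
# Sanity-check the system-level spec-sheet totals from assumed per-GPU figures.
NUM_GPUS = 8
HBM_PER_GPU_GB = 141           # H200 SXM HBM3e capacity (assumed)
FP8_PER_GPU_TFLOPS = 3_958     # FP8 Tensor Core throughput with sparsity (assumed)
NVLINK_PER_GPU_GBPS = 900      # NVLink bandwidth per GPU via NVSwitch (assumed)

total_mem_gb = NUM_GPUS * HBM_PER_GPU_GB                  # 1,128 GB total GPU memory
total_fp8_pflops = NUM_GPUS * FP8_PER_GPU_TFLOPS / 1000   # ~31.7, marketed as "32 petaFLOPS"
fabric_tbps = NUM_GPUS * NVLINK_PER_GPU_GBPS / 1000       # 7.2 TB/s GPU-to-GPU bandwidth

print(f"{total_mem_gb} GB total GPU memory")
print(f"{total_fp8_pflops:.1f} petaFLOPS FP8")
print(f"{fabric_tbps:.1f} TB/s bidirectional GPU-to-GPU bandwidth")
```

Each total in the spec list is simply eight times the corresponding per-GPU figure, with the FP8 number rounded up in marketing materials.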
- SKU: DGX H200 Enterprise AI System
- Weight: 287.60 lbs
- Width: 19.00 in
- Height: 14.00 in
- Depth: 35.30 in