Performance and productivity for HPC and giant AI workloads

Open up enormous potential in the age of AI with the world’s most versatile computing platform.


The most efficient large-memory supercomputer

Designed for AI training, inference, and HPC

The NVIDIA GH200 helps businesses innovate and unlock new value by accelerating large language model training and inference, expanding fast-access memory for recommender systems, and enabling deeper insights through graph neural network analysis.

The power of coherent memory

The NVIDIA NVLink-C2C interconnect provides 900 GB/s of bidirectional bandwidth between CPU and GPU, 7x the bandwidth of PCIe Gen5 in conventional accelerated systems. The link is cache-coherent and presents a single memory address space that combines system LPDDR5X and GPU HBM3 memory for simplified programmability.
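As a sketch of what the single address space means in practice (illustrative code, not vendor-verified): on a coherent platform like GH200, a GPU kernel can dereference an ordinary `malloc` pointer directly, with NVLink-C2C keeping CPU and GPU views of that memory coherent. The kernel name and sizes below are made up for the example:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: increments each element of a plain host allocation.
// On GH200, NVLink-C2C coherence lets the GPU touch system memory that was
// allocated with ordinary malloc, with no explicit cudaMemcpy staging.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1 << 20;

    // Plain system allocation -- on GH200 this lives in Grace LPDDR5X memory.
    int *data = (int *)malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;

    // Launch directly on the malloc'd pointer. Hardware address translation
    // and cache coherence make this valid on Grace Hopper; on a discrete GPU
    // the same pattern would instead use cudaMallocManaged or pinned memory.
    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[42] = %d\n", data[42]);  // 43 expected on a coherent system
    free(data);
    return 0;
}
```

Compiled with `nvcc` and run on a GH200 node, the CPU-side reads after the sync see the GPU's writes without any copy-back step; that is the programmability simplification the coherent address space provides.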

Performance and speed with the NVIDIA GH200 Grace Hopper™ Superchip

The GH200 delivers up to 10x higher performance for applications processing terabytes of data, helping scientists and researchers reach unprecedented solutions to the world's most complex problems.


Grace CPU cores: 72
CPU LPDDR5X capacity: 480 GB
CPU LPDDR5X bandwidth: up to 500 GB/s
GPU HBM3 capacity: 96 GB
GPU HBM3 bandwidth: 4 TB/s
NVLink-C2C bandwidth: 900 GB/s total (450 GB/s per direction)

Get started, or get some advice

Start your GPU-accelerated project now by signing up for a free Vultr account.
Or, if you’d like to speak with us regarding your needs, please reach out.