The only cloud to virtualize the NVIDIA A100

Vultr has made accelerated computing affordable by virtualizing the industry-leading GPU for machine learning: the NVIDIA A100. Our unique approach partitions physical GPUs into discrete virtual GPUs, each with its own dedicated memory and compute. Perfect for AI inference, NLP, voice recognition, and computer vision.
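From inside an instance, you can confirm the GPU slice you were allocated. A minimal sketch, assuming PyTorch with CUDA support is installed on the instance:

```python
# Minimal sketch: inspect the virtual GPU visible inside a Vultr instance.
# Assumes PyTorch with CUDA support is installed.
import torch

assert torch.cuda.is_available(), "No CUDA device visible"
props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}")                               # e.g. an A100 slice
print(f"Memory: {props.total_memory / 1024**3:.1f} GiB")  # memory dedicated to this slice
print(f"Compute capability: {props.major}.{props.minor}") # 8.0 for A100
```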

Spin up fast with ML tools and stacks

With Vultr, it’s easy to provision NVIDIA A100 GPUs with NVIDIA’s end-to-end, integrated hardware and software stack. The NVIDIA NGC Catalog image provides full access to NVIDIA AI Enterprise, a secure, cloud-native suite of AI software that accelerates the data science pipeline and streamlines the development and deployment of predictive artificial intelligence (AI) models. Vultr makes NVIDIA’s latest AI innovations accessible and affordable for everyone.
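As a sketch of what provisioning might look like through the Vultr API v2, the snippet below creates an instance with Python’s requests library. The plan ID and os_id shown are illustrative assumptions, not real values; query the API’s plans and OS endpoints for what is actually available to your account.

```python
# A minimal sketch of creating a GPU instance via the Vultr API v2.
# The plan ID and os_id below are illustrative assumptions; list real
# Cloud GPU plans with GET /v2/plans and images with GET /v2/os first.
import os
import requests

API = "https://api.vultr.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['VULTR_API_KEY']}"}

resp = requests.post(
    f"{API}/instances",
    headers=HEADERS,
    json={
        "region": "ewr",                  # New Jersey
        "plan": "vcg-a100-1c-6g-4vram",   # hypothetical A100 vGPU plan ID
        "os_id": 1743,                    # assumption: an Ubuntu image ID
        "label": "a100-inference",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["instance"]["id"])
```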

Low latency through global availability

Vultr offers a global cloud GPU platform, so you can place your GPU servers close to your applications’ end users and to the regions where your training data originates. To find your lowest-latency region, you can also measure it directly; see the sketch after the location list below.

Chicago, Illinois United States
Miami, Florida United States
Amsterdam Netherlands
New Jersey United States
Dallas, Texas United States
Paris France
Mexico City Mexico
São Paulo Brazil
Madrid Spain
Warsaw Poland
Tokyo Japan
Seattle, Washington United States
Los Angeles, California United States
Silicon Valley, California United States
Singapore
Atlanta, Georgia United States
London United Kingdom
Frankfurt Germany
Sydney Australia
Melbourne Australia
Toronto Canada
Seoul South Korea
Stockholm Sweden
Honolulu, Hawaii United States
Mumbai India
Bangalore India
Delhi NCR India
Santiago Chile
Tel Aviv-Yafo Israel
Johannesburg South Africa
Osaka Japan
Manchester United Kingdom
32 locations
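Where latency matters, it helps to measure it rather than guess. Below is a minimal sketch that times a TCP handshake against a few candidate regions; the hostnames are hypothetical placeholders for your own instances in each region, not real Vultr addresses.

```python
# Minimal sketch: compare TCP connect latency to candidate regions.
# The hostnames below are hypothetical placeholders; substitute the
# addresses of your own instances in each Vultr region.
import socket
import time

REGIONS = {
    "New Jersey": "nj.example.com",
    "Frankfurt": "fra.example.com",
    "Tokyo": "tyo.example.com",
}

def connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP handshake time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

for region, host in REGIONS.items():
    print(f"{region}: {connect_ms(host):.1f} ms")
```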

Specifications

Our easy-to-use control panel and API let you spend more time coding and less time managing your infrastructure.

| Specification | A100 80GB PCIe | A100 80GB SXM |
|---|---|---|
| FP64 | 9.7 TFLOPS | 9.7 TFLOPS |
| FP64 Tensor Core | 19.5 TFLOPS | 19.5 TFLOPS |
| FP32 | 19.5 TFLOPS | 19.5 TFLOPS |
| Tensor Float 32 (TF32) | 156 TFLOPS (312 TFLOPS*) | 156 TFLOPS (312 TFLOPS*) |
| BFLOAT16 Tensor Core | 312 TFLOPS (624 TFLOPS*) | 312 TFLOPS (624 TFLOPS*) |
| FP16 Tensor Core | 312 TFLOPS (624 TFLOPS*) | 312 TFLOPS (624 TFLOPS*) |
| INT8 Tensor Core | 624 TOPS (1,248 TOPS*) | 624 TOPS (1,248 TOPS*) |
| GPU Memory | 80GB HBM2e | 80GB HBM2e |
| GPU Memory Bandwidth | 1,935 GB/s | 2,039 GB/s |
| Max Thermal Design Power (TDP) | 300W | 400W*** |
| Multi-Instance GPU | Up to 7 MIGs @ 10GB | Up to 7 MIGs @ 10GB |
| Form Factor | PCIe, dual-slot air-cooled or single-slot liquid-cooled | SXM |
| Interconnect | NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s**; PCIe Gen4: 64 GB/s | NVLink: 600 GB/s; PCIe Gen4: 64 GB/s |
| Server Options | Partner and NVIDIA-Certified Systems™ with 1–8 GPUs | NVIDIA HGX™ A100-Partner and NVIDIA-Certified Systems with 4, 8, or 16 GPUs; NVIDIA DGX™ A100 with 8 GPUs |

* With sparsity
** SXM4 GPUs via HGX A100 server boards; PCIe GPUs via NVLink Bridge for up to 2 GPUs
*** 400W TDP for standard configuration; the HGX A100-80GB custom thermal solution (CTS) SKU supports TDPs up to 500W
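To see how the Tensor Core figures above translate to practice, here is a rough benchmarking sketch, assuming PyTorch on an A100 instance, that compares FP32 and TF32 matrix-multiply throughput:

```python
# A rough sketch comparing FP32 and TF32 matmul throughput on an A100
# against the peak figures in the table above. Assumes PyTorch with CUDA.
import time
import torch

def matmul_tflops(n: int = 8192, iters: int = 20) -> float:
    """Average TFLOPS for an n x n matmul over `iters` runs."""
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    a @ b                            # warm-up run
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters
    return 2 * n**3 / elapsed / 1e12  # 2*n^3 FLOPs per matmul

torch.backends.cuda.matmul.allow_tf32 = False
print(f"FP32: {matmul_tflops():.1f} TFLOPS")  # bounded by the 19.5 TFLOPS FP32 peak
torch.backends.cuda.matmul.allow_tf32 = True
print(f"TF32: {matmul_tflops():.1f} TFLOPS")  # approaches the 156 TFLOPS TF32 peak
```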

Additional resources

Docs, demos, and information to help you succeed with your machine learning projects.

Get started, or get some advice

Start your machine learning project now by signing up for a free Vultr account.
Or, if you’d like to speak with us regarding your needs, please reach out.