Vultr and Domino Data Lab Create Joint MLOps and Compute Solution to Accelerate Time-to-Value for Enterprise AI
May 18, 2023

ChatGPT and generative AI have ushered in the “iPhone moment” for AI. Enterprises are striving to gain a competitive advantage with AI, yet they face challenges.

While 95% of data executives (CDOs and CDAOs) say their leadership expects revenue increases from AI and ML applications, only 19% report they have the resources to meet those expectations, according to Domino’s recent survey of enterprise data executives. Providing researchers on data science teams with their preferred tools and infrastructure is critical for AI/ML initiatives – for hiring and retaining top talent, keeping that talent productive, and maintaining robust governance processes.

Forbes predicts that the spike in demand for AI chips and GPUs will last through late 2023. Major cloud providers are limiting availability for customers: some report multi-month waits for hardware access, with priority going to those that have made multi-year spending commitments, as reported by The Information.

Domino and Vultr partner on joint MLOps and compute solution

We are excited to announce the general availability of a new integrated MLOps and GPU solution from Domino Data Lab and Vultr, leaders in data science and cloud computing, respectively. This collaboration gives our customers and prospects unparalleled access to state-of-the-art NVIDIA infrastructure on Vultr, including NVIDIA A100 and H100 Tensor Core GPUs, to train, deploy, and manage their own deep learning models, including for generative AI, with ease and affordability.

As AI continues to advance at a rapid pace, access to powerful compute infrastructure has become critical for training and deploying AI/ML models. The Domino and Vultr partnership offers a unique advantage: customers can access cutting-edge GPUs within weeks, at a lower cost than the hyperscalers, empowering data scientists and developers to build sophisticated AI models that drive innovation across industries.

The joint Domino and Vultr solution (announced in March) accelerates time-to-value for data science investments, giving data science teams and IT leaders confidence in immediate ROI. Teams no longer have to separately procure, stand up, provision, and integrate all of the machinery for AI/ML – the solution is validated for seamless integration of Domino’s Enterprise MLOps platform with Vultr’s industry-leading, NVIDIA GPU-based cloud infrastructure. It also comes integrated with the most popular data science software and tooling, such as NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform; the NVIDIA NGC portal of enterprise services, software, and support; Anaconda’s open-source Python tooling; and more.

Solution overview

Our joint offering is underpinned by Vultr Kubernetes Engine (VKE) and Domino Nexus, Domino’s hybrid-/multi-cloud MLOps offering. Domino’s Kubernetes-native platform runs seamlessly on VKE, which provides automated container orchestration so you can operate with confidence and easily scale data science workflows.

Domino’s Enterprise MLOps platform provides the data science orchestration layer, giving teams governed, self-service access to common data science tooling and infrastructure under the oversight of IT – all from a single “control plane.”
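To make the solution architecture concrete, here is a minimal sketch of submitting a GPU-backed workload to a Kubernetes cluster such as VKE using the official Kubernetes Python client. The container image, pod name, and namespace are illustrative assumptions; in practice, Domino’s platform orchestrates these workloads for you.

```python
# Minimal sketch: launch a single-GPU pod on a Kubernetes cluster (e.g., VKE).
# Assumes a kubeconfig for the cluster is already available on this machine.
from kubernetes import client, config

config.load_kube_config()  # load cluster credentials from your kubeconfig

container = client.V1Container(
    name="train",
    image="nvcr.io/nvidia/pytorch:23.04-py3",  # example NGC container image
    command=["python", "train.py"],            # hypothetical training script
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The `nvidia.com/gpu` resource request relies on the NVIDIA device plugin being installed on the cluster’s GPU nodes.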

Our joint offering allows customers to:

  1. Leverage the high-performance compute capabilities of NVIDIA A100 and H100 Tensor Core GPUs, with availability within weeks. Balance cost, performance, and availability to scale GPU resources up or down on-demand, ensuring optimal utilization and cost management for your AI workloads – including access to fractional GPUs.
  2. Accelerate the data science and model lifecycle with Domino’s enterprise MLOps, fostering a more efficient and effective data science process.
  3. Increase data scientists’ productivity with packages, frameworks, and industry solutions – including NVIDIA AI Enterprise, NVIDIA NGC, Anaconda, and more – all validated on the Domino and Vultr solution.
  4. Seamlessly develop and scale data science across the hybrid-/multi-cloud from a single “control plane” with Domino Nexus, enabling you to run data science workloads across any compute cluster.
  5. Optimize cost and performance by decoupling the data science “control plane” and “data/compute” planes using Domino Nexus, meaning customers can seamlessly “burst” to GPUs in Vultr’s cloud to alleviate infrastructure capacity issues during model training (see the sketch after this list). Alternatively, leverage different compute clusters for model training versus hosting/inference to further reduce infrastructure spend.
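As a rough illustration of points 1 and 5 above, the sketch below steers a training pod to a GPU node pool and requests a fractional (MIG) slice of an A100. The node-pool label and the MIG resource name are assumptions that depend on how the VKE cluster and the NVIDIA device plugin are configured; within Domino, data scientists select this kind of compute through governed self-service rather than writing it by hand.

```python
# Sketch: "burst" a training workload onto a GPU node pool and request a
# fractional (MIG) slice of an A100. Labels and resource names are assumptions.
from kubernetes import client, config

config.load_kube_config()

# Hypothetical label identifying a Vultr GPU node pool; check your cluster's
# actual node labels before relying on this.
gpu_node_selector = {"vke.vultr.com/node-pool": "gpu-a100"}

container = client.V1Container(
    name="burst-train",
    image="nvcr.io/nvidia/pytorch:23.04-py3",  # example NGC container image
    command=["python", "train.py"],            # hypothetical training script
    # With MIG enabled, fractional A100 slices appear as named resources;
    # "nvidia.com/mig-1g.10gb" is one common profile and is an assumption here.
    resources=client.V1ResourceRequirements(limits={"nvidia.com/mig-1g.10gb": "1"}),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="burst-training-job"),
    spec=client.V1PodSpec(
        containers=[container],
        node_selector=gpu_node_selector,
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```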

Accelerate data science time-to-value across every industry

Domino’s open and flexible platform is already trusted by 20% of the Fortune 100 to scale enterprise data science across multiple industries. Use cases include:

  • Financial Services: Leverage AI for fraud detection, credit scoring, algorithmic trading, and personalized customer experiences, unlocking new growth opportunities and driving efficiency. Moody’s Analytics achieved a 50% reduction in time to deploy models so they can get information into the hands of clients faster.
  • Healthcare & Life Sciences: Accelerate drug discovery and development, optimize clinical trial design, and enhance patient care through personalized medicine using AI-powered solutions. Janssen, the pharmaceutical arm of Johnson & Johnson, develops deep learning models 10x faster.
  • Public Sector: Use AI to enhance public safety, optimize urban planning, improve resource allocation, and enable data-driven decision-making for more effective governance. Lockheed Martin generates $20 million in value each year and delivers new innovations faster.

Affordable and efficient hosting for AI model inference

Once your generative AI models have been trained, it’s time to deploy them for inference. Thanks to Vultr’s cost-effective cloud infrastructure paired with Domino Nexus’ hybrid-/multi-cloud capabilities, you can enjoy affordable, flexible model hosting and inference without compromising on performance.

Vultr’s global network of cloud data centers ensures that your AI models can be deployed quickly, with low latency, and can handle high volumes of requests, making it the ideal platform for hosting AI-driven applications and services. Domino’s flexible platform supports a variety of deployment and hosting options – from on-premises data centers to the cloud to the edge – all managed from Domino’s single pane of glass.
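To show what a lightweight inference endpoint might look like once a model is trained, here is a generic serving sketch using FastAPI. The model file, request schema, and route are placeholders rather than part of the Domino or Vultr products; Domino’s own deployment and hosting options, described above, manage this for you.

```python
# serve.py - generic inference service sketch; all names are placeholders.
import pickle
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical pre-trained model artifact produced by an earlier training job.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictionRequest(BaseModel):
    features: List[float]

@app.post("/predict")
def predict(req: PredictionRequest):
    # scikit-learn style predict(); substitute your framework's inference call.
    return {"prediction": model.predict([req.features]).tolist()}
```

You could run this locally with `uvicorn serve:app --port 8080` and send it JSON such as `{"features": [0.1, 0.2, 0.3]}`.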

Learn more about how the combination of Vultr and Domino can enable you to accelerate time-to-value for AI investments.