ChatGPT and generative AI have ushered in the “iPhone moment” for AI. Enterprises are striving to gain a competitive advantage with AI, yet they face significant challenges.
While 95% of data executives (CDOs and CDAOs) say their leadership expects revenue increases from AI and ML applications, only 19% report they have the resources to meet these expectations, according to Domino’s recent survey of enterprise CDOs and CDAOs. Providing researchers on data science teams with their preferred tools and infrastructure is critical for AI/ML initiatives – for hiring and retaining top talent, keeping that talent productive, and maintaining robust governance processes.
Forbes predicts that the spike in demand for chips and GPUs will last through late 2023. Major cloud players are limiting availability for customers – some report multi-month waits for hardware access, with priority going to those that have made multi-year spending commitments, as reported by The Information.
We are excited to announce the general availability of a new integrated MLOps and GPU solution from Domino Data Lab and Vultr, two leaders in data science and cloud computing. This collaboration provides our customers and prospects with unparalleled access to state-of-the-art NVIDIA infrastructure on Vultr, including NVIDIA A100 and H100 Tensor Core GPUs, to train, deploy, and manage their own deep learning models – including for generative AI – with ease and affordability.
As AI continues to advance at a rapid pace, access to powerful compute infrastructure has become critical for training and deploying AI/ML models. The Domino and Vultr partnership offers a unique advantage: customers can access cutting-edge GPUs within weeks, empowering data scientists and developers to build sophisticated AI models that drive innovation across industries – at a lower cost than the hyperscalers.
The joint Domino and Vultr solution (announced in March) accelerates time-to-value for data science investments, giving data science teams and IT leaders confidence in immediate ROI. They no longer have to separately procure, stand up, provision, and integrate all of the machinery for AI/ML: the solution is validated for seamless integration of Domino’s enterprise MLOps platform with Vultr’s cloud GPU offerings powered by NVIDIA. It pairs Domino’s leading Enterprise MLOps platform with industry-leading NVIDIA GPU-based infrastructure on Vultr, integrated with the most popular data science software and tooling – such as NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform; the NVIDIA NGC portal of enterprise services, software, and support; Anaconda’s open-source Python tooling; and more.
Domino’s Enterprise MLOps platform serves as the data science orchestration layer, offering governed, self-service access to common data science tooling and infrastructure under the oversight of IT – all from a single “control plane.”
Our joint offering allows customers to:
Domino’s open and flexible platform is already trusted by 20% of the Fortune 100 to scale enterprise data science across multiple industries. Use cases include:
Once your generative AI models have been trained, it’s time to deploy them for inference. Thanks to Vultr’s cost-effective cloud infrastructure paired with Domino Nexus’ hybrid- and multi-cloud capabilities, you can enjoy affordable, flexible model hosting and inference without compromising on performance.
Vultr’s global network of cloud data centers ensures that your AI models can be deployed quickly, serve requests with low latency, and handle high volumes of traffic, making it an ideal platform for hosting AI-driven applications and services. Domino’s flexible platform supports a variety of deployment and hosting options – from on-premises data centers to the cloud to the edge – all managed from Domino’s single pane of glass.
Learn more about how the combination of Vultr and Domino can enable you to accelerate time-to-value for AI investments.