
Vultr GPU Stack

Introduction

Vultr GPU Stack is a comprehensive solution designed to streamline the deployment of GPU-accelerated applications via Kubernetes. Leveraging the power of k3s, this preconfigured deployment package eliminates the complexities associated with setting up your Kubernetes and NVIDIA environment, enabling you to focus solely on deploying your applications seamlessly.

Key Features

  • Kubernetes Deployment: Enjoy a hassle-free Kubernetes deployment process with Vultr GPU Stack. The entire setup is preconfigured, allowing you to kickstart your deployment without the need for extensive configuration.

  • Preinstalled Tools: Vultr GPU Stack comes equipped with essential tools such as kubectl and helm, ensuring that you have everything you need at your fingertips to manage and orchestrate your Kubernetes deployment effortlessly. All the NVIDIA specific containers and services are also preinstalled and configured for you.

  • Optimized Configurations: With predefined YAML configurations for popular frameworks like PyTorch and JupyterLab, Vultr GPU Stack simplifies the process of deploying GPU-accelerated applications. Whether you're a data scientist, machine learning engineer, or developer, you can leverage these optimized configurations to streamline your workflow.

How It Works

  1. Deploy with Ease: Simply deploy Vultr GPU Stack on your preferred infrastructure, and you'll have a fully functional Kubernetes environment at your disposal within minutes.

  2. Access Preinstalled Tools: Utilize the preinstalled kubectl and helm commands to manage your Kubernetes cluster efficiently.

  3. Deploy Applications Seamlessly: Leverage the provided YAML configurations for PyTorch and JupyterLab to deploy your GPU-accelerated applications effortlessly.

Get Started

Experience the simplicity and efficiency of deploying GPU-accelerated applications with Vultr GPU Stack. Get started today and unlock the full potential of your Kubernetes environment.

Introduction

Vultr GPU Stack is a preconfigured Kubernetes deployment built on k3s. Everything required to deploy your app is installed and ready to go. Follow the instructions below to become familiar with your new k3s deployment and how to use it.

Log in to your k3s

Log in to your k3s server via SSH with the user and password provided above.

For your convenience, Vultr GPU Stack comes with kubectl and helm installed and configured for the root user, so you can deploy to and manage your k3s cluster directly over SSH as root.

If you would rather work with kubectl from your own device, follow the instructions below.

Install Kubectl

Please follow the kubectl installation guide for your operating system.

Install Helm

Please follow the Helm installation guide.

Get your k3s kubeconfig

To retrieve your kubeconfig, follow these steps:

  • Log in to your k3s server via SSH
  • Run the following command:
    cat /etc/rancher/k3s/k3s.yaml
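Alternatively, you can fetch the file from your own device in one step over SSH. One detail to watch: k3s writes the API server address as 127.0.0.1 in this file, so you will need to replace it with your server's public IP. A sketch of the idea (your_server_ip is a placeholder for your actual server address):

```shell
# Fetch the kubeconfig from the server (replace your_server_ip with your server's address)
ssh root@your_server_ip 'cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config

# k3s writes the API endpoint as 127.0.0.1; point it at the server instead
sed -i 's/127\.0\.0\.1/your_server_ip/' ~/.kube/config
```

On macOS, BSD sed requires `sed -i ''` instead of `sed -i`. Once the file is in place, `kubectl get nodes` should list your server.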

Set up your kubeconfig

Once you have the kubeconfig contents, paste them into the following file, depending on which operating system you are using.

Windows

C:\Users\your_username\.kube\config

Linux/macOS

~/.kube/config

Run a PyTorch app

Let's run your very first application on your new k3s deployment! For your convenience, we include an example deployment YAML for PyTorch at:
/root/pytorch.yaml

To begin, deploy PyTorch by running the following command as root on the server:
kubectl apply -f /root/pytorch.yaml

It will take a while for the container image to pull and the pod to start. You can check the status with:
kubectl get pods -n pytorch

When the status shows "Running", it is ready. To see how far along the deployment is, use:
kubectl describe pod -n pytorch pytorch

Under the Events section you can see what is happening behind the scenes.

Once your pod is running, open a shell inside it:
kubectl exec -it -n pytorch pytorch -- /bin/bash

Once you are in the container shell, start a Python interpreter:
python

From here, import PyTorch:
import torch

Then confirm PyTorch can access CUDA:
torch.cuda.is_available()

It should respond with:
True

A True response shows that your GPUs, your k3s deployment, and your container are all operational! From here you can either set up your app to run in this pod, or delete this deployment to make room for your own. You can delete it with:
kubectl delete -f /root/pytorch.yaml
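If you are curious what a manifest like the bundled pytorch.yaml typically contains, a minimal GPU pod looks roughly like the following. This is a sketch, not the exact contents of the bundled file — the image tag and command here are illustrative assumptions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pytorch
---
apiVersion: v1
kind: Pod
metadata:
  name: pytorch
  namespace: pytorch
spec:
  containers:
    - name: pytorch
      image: nvcr.io/nvidia/pytorch:24.01-py3   # illustrative tag; the bundled file may differ
      command: ["sleep", "infinity"]            # keep the pod alive for interactive use
      resources:
        limits:
          nvidia.com/gpu: 1                     # request one GPU from the NVIDIA device plugin
```

The `nvidia.com/gpu` resource limit is what ties the pod to a GPU: the preinstalled NVIDIA device plugin advertises the GPUs to Kubernetes, and the scheduler assigns one to this container.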

Run JupyterLab

Let's run JupyterLab now. If you ran PyTorch, you will need to delete it before continuing, unless you deployed a server with more than one GPU. For your convenience, we include an example deployment YAML for JupyterLab at:
/root/jupyterlab.yaml

Deploy it with the following command:
kubectl apply -f /root/jupyterlab.yaml

It will take a while for the container image to pull and the pod to start. You can check the status with:
kubectl get pods -n jupyter

When the status shows "Running", it is ready. To see how far along the deployment is, use:
kubectl describe pod -n jupyter jupyter

Under the Events section you can see what is happening behind the scenes.

Once your pod is deployed, you can reach the lab in your web browser. Use the following details, substituting your server's IP address:

URL: http://use.your.ip:30080
Password: Password

You should now be on the dashboard for your new JupyterLab instance.

You can delete JupyterLab with:
kubectl delete -f /root/jupyterlab.yaml
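The fixed port 30080 in the URL above suggests the bundled manifest exposes JupyterLab through a NodePort service. A sketch of what that service portion might look like — the names, labels, and container port here are assumptions based on the URL, not the bundled file's exact contents:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jupyter
  namespace: jupyter
spec:
  type: NodePort          # publishes the service on the node's own IP
  selector:
    app: jupyter          # assumed pod label
  ports:
    - port: 8888          # JupyterLab's default listen port
      targetPort: 8888
      nodePort: 30080     # the port you browse to: http://your_server_ip:30080
```

A NodePort service maps a port in the 30000-32767 range on the server itself to the pod's container port, which is why the lab is reachable directly at your server's IP without any extra ingress setup.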

Support Information

Support Contact

Website: https://www.vultr.com
Email: support@vultr.com
Support URL: https://my.vultr.com
Repository: https://www.vultr.com
Twitter: vultr
