Vultr GPU Stack is a comprehensive solution designed to streamline the deployment of GPU-accelerated applications via Kubernetes. Leveraging the power of k3s, this preconfigured deployment package eliminates the complexities associated with setting up your Kubernetes and NVIDIA environment, enabling you to focus solely on deploying your applications seamlessly.
Kubernetes Deployment: Enjoy a hassle-free Kubernetes deployment process with Vultr GPU Stack. The entire setup is preconfigured, allowing you to kickstart your deployment without the need for extensive configuration.
Preinstalled Tools: Vultr GPU Stack comes equipped with essential tools such as kubectl and helm, ensuring that you have everything you need at your fingertips to manage and orchestrate your Kubernetes deployment effortlessly. All the NVIDIA-specific containers and services are also preinstalled and configured for you.
Optimized Configurations: With predefined YAML configurations for popular frameworks like PyTorch and JupyterLab, Vultr GPU Stack simplifies the process of deploying GPU-accelerated applications. Whether you're a data scientist, machine learning engineer, or developer, you can leverage these optimized configurations to streamline your workflow.
Deploy with Ease: Simply deploy Vultr GPU Stack on your preferred infrastructure, and you'll have a fully functional Kubernetes environment at your disposal within minutes.
Access Preinstalled Tools: Utilize the preinstalled kubectl and helm commands to manage your Kubernetes cluster efficiently.
Deploy Applications Seamlessly: Leverage the provided YAML configurations for PyTorch and JupyterLab to deploy your GPU-accelerated applications effortlessly.
Experience the simplicity and efficiency of deploying GPU-accelerated applications with Vultr GPU Stack. Get started today and unlock the full potential of your Kubernetes environment.
Vultr GPU Stack is a preconfigured Kubernetes deployment via k3s. Everything required to get started with
deploying your app is installed and ready to go. Follow the instructions below to become familiar with your
new k3s deployment and how to use it.
Log in to your k3s server via SSH with the user and password provided above.
For your convenience, Vultr GPU Stack comes with kubectl and helm installed and configured for the root
user. You may deploy to and manage your k3s cluster directly via SSH as the root user.
If you would rather work with kubectl on your own device, feel free to follow the instructions below on how
to do so.
Please follow the guide below for your operating system,
Please follow the guide below,
To retrieve your kubeconfig, follow these steps,
cat /root/k3s-public.yaml
When you have your kubeconfig text, paste it into the following file based on which operating system
you are using.
C:\Users\your_username\.kube\config
~/.kube/config
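If you prefer to script this step, here is a minimal Python sketch that writes the kubeconfig text to the default kubectl location for either operating system. The install_kubeconfig helper is our own illustration, not part of the stack:

```python
from pathlib import Path
from typing import Optional

def install_kubeconfig(kubeconfig_text: str, home: Optional[Path] = None) -> Path:
    """Write kubeconfig text to the default kubectl location.

    Resolves to ~/.kube/config on Linux/macOS and
    C:\\Users\\your_username\\.kube\\config on Windows.
    """
    base = home if home is not None else Path.home()
    target = base / ".kube" / "config"
    target.parent.mkdir(parents=True, exist_ok=True)  # create the .kube directory if missing
    target.write_text(kubeconfig_text)
    return target
```

After writing the file, kubectl on your device will pick it up automatically, since ~/.kube/config is kubectl's default kubeconfig path.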
Let's run your very first application on your new k3s deployment! For your convenience, we include an example deployment YAML
for PyTorch at,
/root/pytorch.yaml
To begin, let's deploy PyTorch. Do so with the following command as root on the server.
kubectl apply -f /root/pytorch.yaml
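The actual contents of /root/pytorch.yaml are not reproduced here, but a minimal GPU pod manifest of this general shape might look like the following sketch. The namespace and pod name match the commands in this guide; the image tag and command are illustrative assumptions, not the real file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pytorch
---
apiVersion: v1
kind: Pod
metadata:
  name: pytorch
  namespace: pytorch
spec:
  containers:
  - name: pytorch
    image: nvcr.io/nvidia/pytorch:24.01-py3   # illustrative image tag
    command: ["sleep", "infinity"]            # keep the pod alive for interactive use
    resources:
      limits:
        nvidia.com/gpu: 1   # request one GPU from the NVIDIA device plugin
```

The nvidia.com/gpu resource limit is what hands a physical GPU to the container, which is why a single-GPU server can only run one such pod at a time.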
It will take a while for the container to pull and start running. You can check the status with,
kubectl get pods -n pytorch
When the status says "Running", it is done. If you would like to check the pod's deployment progress, you can use the following command,
kubectl describe pod -n pytorch pytorch
Under Events, you can see what is happening behind the scenes.
Once your pod is running, let's go ahead and access it.
kubectl exec -it -n pytorch pytorch -- /bin/bash
Once you are in the container shell, start a Python interpreter,
python
From here, let's import PyTorch,
import torch
Then let's confirm PyTorch can access CUDA,
torch.cuda.is_available()
It should respond with,
True
A response of True shows that your GPUs, your k3s deployment, and your container are all operational! From here you can
either set up your app to run here or delete this deployment to make your own. You can delete it with,
kubectl delete -f /root/pytorch.yaml
Let's try running JupyterLab now. If you ran PyTorch, you will need to delete it before continuing unless you deployed with
more than one GPU. For your convenience, we include an example deployment YAML for JupyterLab at,
/root/jupyterlab.yaml
Deploy with the following command.
kubectl apply -f /root/jupyterlab.yaml
It will take a while for the container to pull and start running. You can check the status with,
kubectl get pods -n jupyter
When the status says "Running", it is done. If you would like to check the pod's deployment progress, you can use the following command,
kubectl describe pod -n jupyter jupyter
Under Events, you can see what is happening behind the scenes.
Once your pod is running, you will have a web interface you can visit to connect to the lab. Use the following details.
URL: http://use.your.ip:30080
Password: Password
You should now be on the dashboard for your new JupyterLab instance.
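Port 30080 works because the deployment exposes JupyterLab through a Kubernetes NodePort service, which publishes a pod's port on the node's public IP. A service of this kind is typically declared roughly as follows. This is an illustrative sketch, not the actual manifest; the selector label and JupyterLab's default in-pod port of 8888 are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jupyter
  namespace: jupyter
spec:
  type: NodePort
  selector:
    app: jupyter      # assumed pod label
  ports:
  - port: 8888        # JupyterLab's default port inside the pod (assumed)
    targetPort: 8888
    nodePort: 30080   # exposed on the node's public IP
```

NodePort values must fall in the 30000-32767 range by default, which is why the lab is reached on 30080 rather than 8888.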
You can delete JupyterLab with,
kubectl delete -f /root/jupyterlab.yaml
If you want benchmarks for your current setup, apply the following manifest.
kubectl apply -f /root/benchmark.yaml
This generates files containing the output of the run at,
/root/benchmarks/
/root/validation.log