
How to use a Vultr Load Balancer with VKE

Author: Quan Hua

Last Updated: Thu, Jun 9, 2022
Kubernetes

Introduction

Vultr Load Balancer is a fully-managed solution that distributes traffic across a group of servers, decoupling the availability of a backend service from the health of any single server. By spreading the load across multiple servers, a Vultr Load Balancer keeps your service online and prevents individual servers from becoming overloaded.

If you are new to Vultr Load Balancers, you should read the Load Balancer Quickstart Guide first.

Vultr Kubernetes Engine (VKE) is a fully-managed Kubernetes product. When deploying an application to VKE, Kubernetes automatically spreads Pods across different nodes in a cluster for better availability.

Vultr Load Balancers are compatible with VKE and can distribute traffic across multiple Pods on different nodes. A Load Balancer deployed through VKE offers the same features and capabilities as the standalone fully-managed solution.

This guide explains how to deploy and configure Vultr Load Balancers in Vultr Kubernetes Engine (VKE).

Prerequisites

Before you begin, you should:

  • Deploy a Vultr Kubernetes Cluster with at least three nodes.
  • Configure kubectl and git on your machine.
  • Have a domain name if you want to follow the TLS/SSL certificate sections.

1. Deploy Web Servers

This section shows how to deploy web servers to the Kubernetes cluster using a Deployment. The web server in this article is a Python web server that returns the hostname of the pod and HTTP request headers.

This example application has a public Docker image (quanhua92/whoami) on Docker Hub. You can go to this GitHub repository to see the source code of the application.
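
If you want to inspect the application before deploying it to the cluster, you can run the image locally. This optional sketch assumes Docker is installed and that port 8080 is free on your machine:

    $ docker run --rm -d --name whoami-test -p 8080:8080 quanhua92/whoami:latest
    $ curl http://localhost:8080
    $ docker stop whoami-test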

  1. Create a file named deployment.yaml with the following content:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
        name: whoami
    spec:
        replicas: 3
        selector:
            matchLabels:
                name: whoami
        template:
            metadata:
                labels:
                    name: whoami
            spec:
                containers:
                    - name: whoami
                      image: quanhua92/whoami:latest
                      imagePullPolicy: Always
                      ports:
                          - containerPort: 8080
    
  2. Deploy the application using kubectl

    $ kubectl apply -f deployment.yaml
    

Notice that the deployment name in this example is whoami and the Pod listens for requests on port 8080.
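
You can verify that all three replicas are running before you continue. The label name=whoami matches the Pod labels defined in deployment.yaml:

    $ kubectl get pods -l name=whoami

The output should list three Pods in the Running status.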

2. Deploy a Load Balancer for HTTP Traffic

This section shows how to deploy a load balancer for HTTP traffic on port 80. You deploy a Kubernetes Service with the LoadBalancer type and use metadata annotations to configure VKE Load Balancer.

The default load-balancing algorithm is Round Robin, which forwards requests to each server behind the load balancer in turn.

  1. Create a file named service.yaml with the following content. The selector name: whoami matches the label on the existing Deployment, and the target port 8080 matches the container port from the previous step.

    apiVersion: v1
    kind: Service
    metadata:
        name: whoami-lb
        annotations:
            service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    spec:
        type: LoadBalancer
        selector:
            name: whoami
        ports:
            - name: http
              port: 80
              targetPort: 8080
    
  2. Deploy the Service using kubectl

    $ kubectl apply -f service.yaml
    
  3. Run the following command to see the VKE Load Balancer setup progress:

    $ kubectl get service whoami-lb -w
    

The result should look like:

NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
whoami-lb   LoadBalancer   10.108.167.185   <pending>     80:32365/TCP   9s
whoami-lb   LoadBalancer   10.108.167.185   139.180.143.107   80:32365/TCP   81s

You can also go to the Load Balancer page in the Customer Portal to inspect your Load Balancers.

You can navigate to the IP address of your Load Balancer to access the application.

Notice that it may take a few minutes before you can access the application through the Load Balancer IP address.

The response of the application should look like:

Hostname: whoami-84798c47cd-2gnhd
Host: 139.180.143.107
Cache-Control: max-age=0
Dnt: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,vi;q=0.8,la;q=0.7,nl;q=0.6
Cookie: session=eyJteV9zZXNzaW9uIjoiVDk4UVUifQ.YoqD3g.o1pyE6s6vTkQqnbvPhG08_6tvOI
X-Forwarded-Proto: http
X-Forwarded-For: 113.172.203.231
Connection: close
Session: T98QU

Refresh the website a few times. Notice that the Hostname changes after a few requests, which shows that the Load Balancer distributes traffic across multiple Pods.
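
You can also watch this behavior from the command line. This is a small sketch that assumes your Load Balancer IP address is 139.180.143.107, as in the example output above; replace it with your own:

    $ for i in 1 2 3 4 5; do curl -s http://139.180.143.107 | grep Hostname; done

The Hostname in the output should vary across requests.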

3. Using the Least Connections Load Balancing Algorithm

The least connections algorithm is a dynamic load-balancing algorithm that sends each client request to the application server with the fewest active connections at the moment the request arrives. This algorithm works best in environments where the application servers have similar capabilities.

  1. Change the service.yaml as follows:

    apiVersion: v1
    kind: Service
    metadata:
        name: whoami-lb
        annotations:
            service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
            service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
    spec:
        type: LoadBalancer
        selector:
            name: whoami
        ports:
            - name: http
              port: 80
              targetPort: 8080
    
  2. Deploy the Service using kubectl

    $ kubectl apply -f service.yaml
    
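To confirm that the Service now carries the algorithm annotation, describe it and review the Annotations section:

    $ kubectl describe service whoami-lb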

4. Configure Health Check on the VKE Load Balancer

Vultr Load Balancers provide health checks to determine whether the application servers respond to client requests. Here are some settings that you can customize:

  • healthcheck-protocol: The protocol that the load balancer uses to perform health checks. The two possible values are tcp and http. The default value is tcp.
  • healthcheck-path: The URL path that the load balancer requests when checking the application server. The default value is the root path, /.
  • healthcheck-port: The port that the load balancer uses to check the application server. Kubernetes sets this value, and you should not change it in normal scenarios.
  • healthcheck-check-interval: The interval between health checks, in seconds. The default value is 15.
  • healthcheck-response-timeout: The response timeout, in seconds. The default value is 5.
  • healthcheck-unhealthy-threshold: The number of failed health checks before the load balancer removes the application server from the server pool. The default value is 5.
  • healthcheck-healthy-threshold: The number of successful health checks before the load balancer adds the application server back to the server pool. The default value is 5.

The example application in this article has a dedicated health check endpoint, /health. The benefits of using /health instead of / are:

  • It reduces the computation required to run the health check.
  • It reduces the response time and content length.

In the example application code, the /health endpoint returns an empty response with a 200 status code without any complex computation.
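
You can test the health endpoint directly, without going through the Load Balancer, by port-forwarding to the Deployment. Run the curl command in a second terminal; based on the description above, it should return a 200 status code with an empty body:

    $ kubectl port-forward deployment/whoami 8080:8080
    $ curl -i http://localhost:8080/health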

  1. Change the service.yaml as follows:

    apiVersion: v1
    kind: Service
    metadata:
        name: whoami-lb
        annotations:
            service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: "/health"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-check-interval: "10"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-response-timeout: "5"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-unhealthy-threshold: "5"
            service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-healthy-threshold: "5"
    spec:
        type: LoadBalancer
        selector:
            name: whoami
        ports:
            - name: http
              port: 80
              targetPort: 8080
    
  2. Deploy the Service using kubectl

    $ kubectl apply -f service.yaml
    
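You can verify that the health check annotations were applied to the Service with a jsonpath query:

    $ kubectl get service whoami-lb -o jsonpath='{.metadata.annotations}'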

5. Expose the Application With Free TLS/SSL Certificates from Let’s Encrypt

This section shows how to deploy a load balancer for HTTPS traffic on port 443.

Here are some approaches to obtaining TLS/SSL Certificates:

  • Self-Signed Certificates: Use your own Certificate Authority to create and sign TLS/SSL certificates. This is a great option for development environments.
  • Purchase TLS/SSL Certificates: Buy a TLS/SSL certificate from a well-known Certificate Authority for production use cases.
  • Use Free TLS/SSL Certificates: Use free TLS/SSL certificates from Let’s Encrypt or ZeroSSL.

In this section, you install NGINX Ingress Controller to handle incoming SSL/TLS traffic and Cert Manager to manage free TLS/SSL certificates from Let’s Encrypt.

NGINX Ingress Controller creates a LoadBalancer Service to handle incoming traffic. This Service is also a VKE Load Balancer, so you don't need the Service created in the previous sections.

The VKE Load Balancer routes incoming traffic to the pool of server nodes. Each server node then routes the load to the NGINX Ingress Controller, which routes the requests to the corresponding application Pods.

By default, there is only one NGINX Ingress Controller replica. You can scale the NGINX Ingress Controller depending on the traffic of your system.

Cert Manager automates the creation and management of TLS/SSL certificates from various issuing sources, including Let's Encrypt, HashiCorp Vault, Venafi, and private public key infrastructure (PKI).

You need a domain name to issue and manage free TLS/SSL certificates from Let's Encrypt for your domain.

5.1. Prepare the Application Service

  1. (Optional) Delete the Service in the previous section by using the following command:

    $ kubectl delete -f service.yaml
    
  2. Create a Service file service-02.yaml with the following content. The selector name: whoami matches the existing Deployment, and the target port 8080 matches the container port from the previous step. Notice that this Service is not a LoadBalancer type and that its name is whoami-service.

    apiVersion: v1
    kind: Service
    metadata:
        name: whoami-service
    spec:
        selector:
            name: whoami
        ports:
            - name: http
              port: 80
              targetPort: 8080
    
  3. Run the command to create the service

    $ kubectl apply -f service-02.yaml
    
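Because this Service has no explicit type, Kubernetes creates it with the default ClusterIP type, which is reachable only from inside the cluster. You can confirm this as follows:

    $ kubectl get service whoami-service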

5.2. Install NGINX Ingress Controller

  1. Install NGINX Ingress Controller (ingress-nginx)

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
    
  2. Go to your Load Balancers dashboard and get the IP Address of the newly created Load Balancer. This is the Load Balancer created for the NGINX ingress.

  3. Create an A record in your domain's DNS that points to the IP address above. (You can verify the record with the dig command shown after this list.)
  4. (Optional) Scale the NGINX Ingress Controller to three replicas.

    $ kubectl scale deployment --namespace ingress-nginx ingress-nginx-controller --replicas=3 
    
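Before you continue, you can confirm that the controller Pods are running and that your A record resolves to the Load Balancer IP address. The dig command is a sketch that assumes the dig utility is installed on your machine; replace <YOUR_DOMAIN> with your domain:

    $ kubectl get pods --namespace ingress-nginx
    $ dig +short <YOUR_DOMAIN>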

5.3. Install Cert Manager

  1. Install cert-manager to manage SSL certificates

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.0/cert-manager.yaml
    
  2. Create a manifest file letsencrypt.yaml to handle Let’s Encrypt certificates. Replace <YOUR_EMAIL> with your actual email.

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # The ACME server URL
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        preferredChain: "ISRG Root X1"
        # Email address used for ACME registration
        email: <YOUR_EMAIL>
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
          - http01:
              ingress:
                class: nginx
    ---
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        # The ACME server URL
        server: https://acme-v02.api.letsencrypt.org/directory
        # Email address used for ACME registration
        email: <YOUR_EMAIL>
        # Name of a secret used to store the ACME account private key
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
          - http01:
              ingress:
                class: nginx
    
  3. Run the command to install the above Let’s Encrypt issuers.

    $ kubectl apply -f letsencrypt.yaml
    
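You can check that cert-manager registered both issuers; the READY column should show True for each:

    $ kubectl get clusterissuers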

5.4. Expose Application with Ingress

  1. Create an Ingress manifest file ingress.yaml with the following content. Replace <YOUR_DOMAIN> with the domain for which you created the A record in the previous step. Replace whoami-service with your Service name.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: whoami-ingress
      annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      tls:
        - secretName: whoami-tls
          hosts:
            - <YOUR_DOMAIN>
      rules:
        - host: <YOUR_DOMAIN>
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: whoami-service
                    port:
                      number: 80
    
  2. Run the command to create the ingress

    $ kubectl apply -f ingress.yaml
    
  3. Run the command kubectl get ingress to see the newly created ingress. The result should look like:

    NAME             CLASS    HOSTS           ADDRESS        PORTS     AGE
    whoami-ingress   <none>   <YOUR_DOMAIN>   140.82.41.69   80, 443   37s
    
  4. Check the certificates

    $ kubectl get certificates
    
  5. Navigate to https://<YOUR_DOMAIN> to access your application.
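
To confirm that the certificate in use was issued by Let's Encrypt, you can inspect it from the command line. This sketch assumes openssl is installed on your machine; replace <YOUR_DOMAIN> with your domain:

    $ echo | openssl s_client -connect <YOUR_DOMAIN>:443 -servername <YOUR_DOMAIN> 2>/dev/null | openssl x509 -noout -issuer -dates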

6. Using Sticky Sessions

By default, a load balancer routes each request independently to a pool of servers based on the load balancing algorithm. However, you can use the sticky session (also known as session affinity) feature to bind a user’s session to a specific server.

In a Kubernetes environment, the sticky session feature keeps a client's session on a specific application Pod. If that Pod becomes unavailable, the load balancer re-routes the requests to another application Pod.

This section shows how to achieve sticky sessions using the NGINX Ingress Controller from the previous section.

You need to add the following annotations to the Ingress manifest file:

  • nginx.ingress.kubernetes.io/affinity: enables the sticky session. The value must be "cookie".
  • nginx.ingress.kubernetes.io/session-cookie-name: the name of the cookie used to track which application pod serves each request.
  • nginx.ingress.kubernetes.io/session-cookie-max-age: the time until the cookie expires, in seconds.
  • nginx.ingress.kubernetes.io/session-cookie-expires: a legacy version of the previous annotation for compatibility with older browsers.

  1. Change the ingress.yaml as follows:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
        name: whoami-ingress
        annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt-prod
            nginx.ingress.kubernetes.io/affinity: "cookie"
            nginx.ingress.kubernetes.io/session-cookie-name: "sticky"
            nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
            nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    spec:
        tls:
            - secretName: whoami-tls
              hosts:
                  - <YOUR_DOMAIN>
        rules:
            - host: <YOUR_DOMAIN>
              http:
                  paths:
                      - path: /
                        pathType: Prefix
                        backend:
                            service:
                                name: whoami-service
                                port:
                                    number: 80
    
  2. Apply the changes using kubectl

    $ kubectl apply -f ingress.yaml
    
  3. Confirm that the Ingress works

    $ kubectl describe ingress whoami-ingress
    
  4. Check if the server responds with a Set-Cookie header

    $ curl -I https://<YOUR_DOMAIN>
    
  5. The result should look like:

    HTTP/2 200
    date: Tue, 24 May 2022 17:34:58 GMT
    content-type: text/html; charset=utf-8
    content-length: 372
    set-cookie: sticky=1653413699.542.140.602576|38fb12998d06bbfbeaeccec9bf71c761; Expires=Thu, 26-May-22 17:34:58 GMT; Max-Age=172800; Path=/; Secure; HttpOnly
    set-cookie: session=eyJteV9zZXNzaW9uIjoiVDBSQUsifQ.Yo0XQg.qaDgq6kq_P2gMC1vgqLPqN1KQfE; HttpOnly; Path=/
    vary: Cookie
    strict-transport-security: max-age=15724800; includeSubDomains
    

Notice that the response contains a set-cookie header with the key sticky. This cookie contains information about the upstream server. The NGINX Ingress Controller tries to route any requests with the same cookie to the same application pod.

Refresh the website a few times. Notice that the Hostname doesn't change until the cookie expires, which means the sticky session works as expected.
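
You can reproduce this test with curl by saving the sticky cookie to a cookie jar and sending it back on subsequent requests. Replace <YOUR_DOMAIN> with your domain; both commands should print the same Hostname:

    $ curl -s -c cookies.txt https://<YOUR_DOMAIN> | grep Hostname
    $ curl -s -b cookies.txt https://<YOUR_DOMAIN> | grep Hostname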
