How to use Proxy Protocol with Vultr Load Balancers

Updated on June 9, 2022

Vultr Load Balancer is a fully-managed solution that distributes traffic across groups of servers, decoupling the availability of a backend service from the health of any single server.

If you are new to Vultr Load Balancers, you should read the Load Balancer Quickstart Guide first.

Proxy Protocol is a network protocol for preserving a client’s connection information (such as its IP address) when the connection passes through a proxy. Preserving this information is important for analyzing traffic logs or changing application behavior based on the client’s geographic location.

Without Proxy Protocol, upstream servers behind the proxy lose all client information and incorrectly assume that the traffic originates from the proxy itself.

With a higher-level protocol such as HTTP, a proxy can add an HTTP header that stores the client’s original IP address to each request so that upstream servers can retrieve the client information. However, other protocols such as SMTP, FTP, and IMAP lack a similar solution.

Proxy Protocol operates at the TCP layer and therefore supports all higher-layer protocols. It adds a header containing the original client information to the beginning of the TCP connection. When using Proxy Protocol, both the proxy and the upstream server must support it.
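
For example, version 1 of the Proxy Protocol prepends a single human-readable line, terminated by a carriage return and line feed, to the start of the TCP stream. The addresses and ports below are illustrative placeholders:

PROXY TCP4 192.0.2.10 203.0.113.7 56324 443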

With Vultr Load Balancers, you can preserve the client information on both HTTP and TCP protocols.

This article shows how to use a Vultr Load Balancer to distribute traffic to multiple application servers and collect the client information on HTTP, HTTPS, and TCP forwarding rules.

This article uses an Ubuntu server and Docker Compose to deploy the application. You can apply the same approach to your own system with minimal changes.

Prerequisites

Before you begin, you should:

  • Deploy a Vultr Load Balancer
  • Deploy a Vultr Instance in the same region as your load balancer.
  • Have a domain name if you want to follow the TLS/SSL certificate sections.

Prepare an Example Application

This section shows how to deploy an example application to help you understand the concepts in further sections. You can skip this section if you already have a working application server.

The web server in this article is a Python web server that returns the container's hostname and the HTTP request headers.

This example application has a public Docker image (quanhua92/whoami) on Docker Hub. You can view the application's source code in its GitHub repository.

Install Docker

Docker is an open-source platform for developing, shipping, and running applications. Docker enables you to run the application in an isolated and optimized environment.

Follow these steps to install Docker on your Ubuntu server.

  1. Uninstall old versions such as docker, docker.io, or docker-engine.

     $ sudo apt-get remove docker docker-engine docker.io containerd runc
  2. Set up the Docker repository.

     $ sudo apt-get update

     $ sudo apt-get install ca-certificates curl gnupg lsb-release
    
     $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    
     $ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
     $ sudo apt update
  3. Install the latest version of Docker Engine and the Docker Compose plugin.

     $ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
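
Optionally, verify that Docker is installed correctly by running the official hello-world test image:

$ sudo docker run hello-world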

Deploy the Application with Docker Compose

  1. Create a file named docker-compose.yml with the following content:

     version: '3'
    
     services:
         whoami:
             image: quanhua92/whoami
             ports:
                 - 8080:8080
  2. Run the following command to start the application and expose it on port 8080 of your server.

     $ docker compose up -d
  3. Open port 8080 on your firewall. See the section at the end of this article for firewall details.

  4. Navigate to http://<YOUR_SERVER_IP>:8080 to access your application.
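
You can also check the application from the command line. Assuming the container started successfully, the response contains the hostname and request headers described above:

$ curl http://<YOUR_SERVER_IP>:8080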

Prepare the TLS/SSL Certificate for Your Domain

You need a TLS/SSL certificate if you want to deploy your application for HTTPS traffic on port 443.

Here are some approaches to obtaining TLS/SSL Certificates:

  • Self-Signed Certificates: Use your own Certificate Authority to create and sign TLS/SSL certificates. This is a great option for development environments (see the example after this list).
  • Purchase TLS/SSL Certificates: For production use cases, buy a TLS/SSL certificate from a well-known Certificate Authority.
  • Use Free TLS/SSL Certificates: Use free TLS/SSL certificates from Let’s Encrypt or ZeroSSL.
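
For example, here is a minimal sketch that creates a self-signed certificate and private key with OpenSSL, suitable for development only. The file names and certificate lifetime are placeholders you can adjust:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout <YOUR_DOMAIN>.key -out <YOUR_DOMAIN>.crt \
      -subj "/CN=<YOUR_DOMAIN>"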

SSL Termination without Proxy Protocol

This section shows how to configure SSL termination at a Vultr Load Balancer and preserve the client information for upstream servers.

In this configuration, you install the TLS/SSL certificate on the load balancer. Users connect to the load balancer over HTTPS, and the load balancer distributes the traffic to the upstream servers.

SSL termination is a great option to offload TLS/SSL processing from your servers. The load balancer can inspect the traffic, balance the load more effectively, and protect against TLS/SSL attacks. If your application requires a TCP connection instead of an HTTP connection, or if you want to handle TLS/SSL certificates on your own servers, read the next section about the SSL Passthrough approach.

By default, a Vultr Load Balancer adds an X-Forwarded-For HTTP header to each forwarded request, so the application on upstream servers can retrieve the client information without any major changes.

  1. Navigate to Load Balancer in your Customer Portal.
  2. Create a new Load Balancer with Forwarding Rules as follows:
    • Choose HTTPS protocol for the load balancer. The instance protocol can be HTTPS or HTTP depending on your requirements. In this example, the instance protocol is HTTP on port 8080.
    • Choose HTTP protocol for the load balancer. The instance protocol is HTTP on port 8080.
  3. In the "SSL Certificate" of the Load Balancer, enter the TLS/SSL Certificate for your domain.
  4. Create an A record in your domain DNS that points to the Load Balancer IP address.
  5. Navigate to https://<YOUR_DOMAIN> to access your application.

Notice that the response includes an HTTP header that contains your IP address, similar to the following:

X-Forwarded-For: 113.112.103.231
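
You can reproduce this check from the command line after your DNS record propagates. Because the example application echoes the request headers in its response, the X-Forwarded-For header should appear in the output:

$ curl https://<YOUR_DOMAIN>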

SSL Passthrough with Proxy Protocol

This section shows how to configure SSL Passthrough with Proxy Protocol at a Vultr Load Balancer and set up an NGINX proxy on upstream servers.

In this configuration, the Vultr Load Balancer passes the TCP session to the upstream servers for HTTPS handling. You install the TLS/SSL certificate on your servers. Your servers have to support the Proxy Protocol to handle the traffic properly.

An NGINX proxy is a good solution in this situation. The NGINX proxy handles all incoming traffic to the server and forwards it to the application, so you do not have to make the application itself aware of the Proxy Protocol.

Deploy a Vultr Load Balancer

  1. Navigate to Load Balancer in your Customer Portal
  2. Create a new Load Balancer with Forwarding Rules as follows:
    • Choose TCP protocol on port 80 for the load balancer. The instance protocol is TCP on port 80.
    • Choose TCP protocol on port 443 for the load balancer. The instance protocol is TCP on port 443.
  3. Create an A record in your domain DNS that points to the Load Balancer IP address.
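
After the DNS record propagates, you can verify that the domain resolves to the load balancer's IP address, for example with dig:

$ dig +short <YOUR_DOMAIN>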

Install NGINX on Your Servers

Install NGINX on an Ubuntu server with the following command:

$ sudo apt install nginx

For other Linux distributions, see How to Install and Configure Nginx on a Vultr Cloud Server.
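
Before changing the configuration, you can confirm that NGINX is installed and running:

$ sudo systemctl status nginx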

Configure NGINX to Support Proxy Protocol

  1. Copy your TLS/SSL certificate files to each upstream server:

    • The certificate bundle file, which contains your TLS/SSL certificate, the intermediate certificate, and the root certificate in one file.
    • The private key of your TLS/SSL certificate.
  2. Create a file at /etc/nginx/sites-available/<YOUR_DOMAIN> with the following content. Replace <YOUR_DOMAIN> with your actual domain name and 8080 with your application port.

     server {
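             # proxy_protocol on the listen directives tells NGINX to expect the
             # Proxy Protocol header that the load balancer prepends to each TCP
             # connection; the original client IP it carries is then available in
             # the $proxy_protocol_addr variable.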
             listen 80 proxy_protocol;
             listen 443 ssl proxy_protocol;
    
             ssl_certificate <YOUR_CERTIFICATE_BUNDLE_PATH>;
             ssl_certificate_key <YOUR_CERTIFICATE_KEY_PATH>;
    
             server_name <YOUR_DOMAIN>;
    
             location / {
                     proxy_pass http://localhost:8080;
                     proxy_set_header X-Real-IP       $proxy_protocol_addr;
                     proxy_set_header X-Forwarded-For $proxy_protocol_addr;
             }
     }
  3. Enable the new NGINX server block by creating a link as follows. Replace <YOUR_DOMAIN> with your actual domain name.

     $ sudo ln -s /etc/nginx/sites-available/<YOUR_DOMAIN> /etc/nginx/sites-enabled/
  4. Open ports 80 and 443 on your firewall. See the section at the end of this article for firewall details.

  5. Restart NGINX to load the new server block.

     $ sudo systemctl restart nginx
  6. Navigate to https://<YOUR_DOMAIN> to access your application.
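
Note that when the listen directives include proxy_protocol, NGINX expects every incoming connection to begin with a Proxy Protocol header, so requests sent directly to the server's IP address (bypassing the load balancer) will fail. If the site does not load, test the NGINX configuration for errors:

$ sudo nginx -t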

How to Configure NGINX Log Format with Proxy Protocol

You can configure NGINX to log the traffic with the client information as follows:

  1. Open the /etc/nginx/nginx.conf file and find the following lines:

     access_log /var/log/nginx/access.log;
     error_log /var/log/nginx/error.log;
  2. Change the above content as follows:

     log_format proxy '$proxy_protocol_addr - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent"';
     access_log /var/log/nginx/access.log proxy;
     error_log /var/log/nginx/error.log;
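
After saving the file, test the configuration and reload NGINX so the new log format takes effect, then watch the access log to confirm that the client address from the Proxy Protocol appears in new entries:

$ sudo nginx -t
$ sudo systemctl reload nginx
$ sudo tail -f /var/log/nginx/access.log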

How to Open A Port with firewalld

Some Linux distributions use firewalld as the default firewall. You need to open the required ports on the firewall.

  1. Check if you are using firewalld

     $ sudo systemctl status firewalld
  2. Open the port with firewalld.

     $ sudo firewall-cmd --add-port=80/tcp --permanent
     $ sudo firewall-cmd --add-port=443/tcp --permanent
  3. Reload the settings

     $ sudo firewall-cmd --reload
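
If you deployed the example application directly on port 8080, as described in the Docker Compose section, open that port the same way:

$ sudo firewall-cmd --add-port=8080/tcp --permanent
$ sudo firewall-cmd --reload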

How to Open A Port with ufw

Some Linux distributions use ufw as the default firewall. You need to open the required ports on the firewall.

  1. Check if you are using ufw

     $ sudo systemctl status ufw
  2. Check the ufw status

     $ sudo ufw status
  3. Open the port with ufw

     $ sudo ufw allow 80
     $ sudo ufw allow 443
  4. Enable ufw if it is not running

     $ sudo ufw enable
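
If you deployed the example application directly on port 8080, as described in the Docker Compose section, allow that port as well:

$ sudo ufw allow 8080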