
Vultr Load Balancers

Author: David Dymko

Last Updated: Tue, Nov 24, 2020
DevOps Load Balancer Scaling Networking Popular System Admin

What is a Load Balancer

Load Balancers sit in front of your application and distribute incoming traffic across multiple instances of your application. For example, let's say you have an e-commerce website. You notice that you have gained traction and have been attracting more and more customers to your application. To accommodate this traffic, you can deploy another instance of your e-commerce store. Now, to direct users between these instances of your store, you deploy a load balancer in front of them.

With a load balancer, you can:

  • Scale your application: Deploy more instances to increase the amount of traffic you can handle.
  • Improve your uptime: The load balancer diverts traffic to healthy running nodes if any instance goes offline.

Supported Platforms

Vultr load balancers work with any of our cloud server products including Bare Metal, Cloud Compute, High Frequency Compute, and Dedicated Cloud.


Load balancers are fully managed: you do not have to worry about keeping the load balancer infrastructure up and running, which lets you focus on building and growing your applications. Your only responsibility is to make sure your application works correctly behind a load balancer.

Forwarding Rules

Forwarding rules define how public ports on the load balancer map to internal ports on your instances, and with which protocol. For example, if an application listens on port 1234 but users should access it through port 80, define the mapping as Load Balancer HTTP 80 -> Instance HTTP 1234.
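As a minimal sketch, a forwarding rule is just a four-part mapping. The field names below are illustrative, not the exact Vultr API schema:

```python
from dataclasses import dataclass

# Illustrative model of a forwarding rule; field names are examples,
# not the exact Vultr API schema.
@dataclass
class ForwardingRule:
    frontend_protocol: str  # protocol users connect with (HTTP, HTTPS, TCP)
    frontend_port: int      # public port on the load balancer
    backend_protocol: str   # protocol the instances listen on
    backend_port: int       # internal port on each instance

# Load Balancer HTTP 80 -> Instance HTTP 1234
rule = ForwardingRule("HTTP", 80, "HTTP", 1234)
print(f"{rule.frontend_protocol} {rule.frontend_port} -> "
      f"{rule.backend_protocol} {rule.backend_port}")
```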

HTTPS Support

Load balancers support HTTPS via two methods: You can either install your SSL certificate on the load balancer, or install the certificate on each instance and transparently pass the session via TCP. Choose the option appropriate for your situation.

Option 1: SSL Offloading / SSL Termination - Install SSL certificate on the load balancer

In this configuration, users connect to the load balancer via HTTPS, and the load balancer proxies the connection to the instances. This is known as SSL Offloading or SSL Termination.

Choose HTTPS protocol for the load balancer. The configuration screen prompts you to install the SSL certificate.


The instance protocol can be HTTPS or HTTP, depending on your requirements.

Option 2: SSL Passthrough - Install SSL certificate on the instances

In this configuration, the load balancer performs SSL Passthrough, which passes the TCP session to the instances for HTTPS handling.

Choose TCP protocol for the load balancer and install your SSL certificate on each back-end instance.


Load Balancer Configuration


Balancing Algorithm

There are two available options for the balancing algorithm:

  • Roundrobin - Selects servers in turns. This is the default algorithm.

  • Leastconn - Selects the server with the fewest active connections; recommended for longer sessions. Servers in the same back-end rotate in a round-robin fashion.
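The two algorithms can be sketched in a few lines of Python. This is illustrative only, not Vultr's implementation:

```python
from itertools import count

# Roundrobin: selects servers in turns.
class RoundRobin:
    def __init__(self, servers):
        self.servers = servers
        self._counter = count()

    def pick(self):
        return self.servers[next(self._counter) % len(self.servers)]

# Leastconn: selects the server with the fewest active connections.
class LeastConn:
    def __init__(self, servers):
        self.conns = {s: 0 for s in servers}  # active connections per server

    def pick(self):
        server = min(self.conns, key=self.conns.get)
        self.conns[server] += 1
        return server

rr = RoundRobin(["10.0.0.1", "10.0.0.2"])
print([rr.pick() for _ in range(4)])  # alternates between the two servers
```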

Sticky Sessions

The load balancer uses application-controlled session persistence for sticky sessions. Your application generates a cookie that determines the duration of session stickiness. The load balancer still issues its session cookie on top of it, but it now follows the lifetime of the application cookie.

This makes sticky sessions more efficient, ensuring that users are never routed to a server after their local session cookie has already expired. However, it's more complex to implement because it requires additional integration between the load balancer and the application.

You must supply a cookie name when configuring sticky sessions.
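The routing behavior described above can be sketched as follows. This is an illustrative model only; the cookie name "APP_SESSION" is a made-up example, not a Vultr default:

```python
import itertools

# Illustrative sketch of application-controlled sticky sessions.
SERVERS = ["10.0.0.1", "10.0.0.2"]
_rotation = itertools.cycle(SERVERS)
_pinned = {}  # cookie value -> backend the session is pinned to

def route(cookies, cookie_name="APP_SESSION"):
    session = cookies.get(cookie_name)
    if session in _pinned:
        return _pinned[session]      # sticky: reuse the same backend
    backend = next(_rotation)        # no live cookie: balance normally
    if session is not None:
        _pinned[session] = backend   # pin for the cookie's lifetime
    return backend

first = route({"APP_SESSION": "abc"})
print(route({"APP_SESSION": "abc"}) == first)  # True: session stays pinned
```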


This is a label for your load balancer.


This will force HTTP to redirect to HTTPS. Before setting this option, make sure you have installed a valid SSL cert on the load balancer, as described in Option 1 above, and configured HTTPS forwarding.

Proxy Protocol

Proxy Protocol forwards client information to the backend nodes. If this feature is enabled, you must configure your backend nodes to accept Proxy protocol. For more information, see the Proxy Protocol documentation.
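For example, Proxy Protocol version 1 prepends a single human-readable header line to the connection, which the backend must read before the application data. A minimal parser for that line (it does not handle the "PROXY UNKNOWN" form):

```python
# Parse a PROXY protocol v1 header line, e.g.:
#   b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n"
# The backend reads this before the application data to recover the
# original client address. ("PROXY UNKNOWN" is not handled here.)
def parse_proxy_v1(line: bytes):
    if not line.startswith(b"PROXY "):
        raise ValueError("not a PROXY protocol v1 header")
    parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return {"family": family,
            "src": (src_ip, int(src_port)),
            "dst": (dst_ip, int(dst_port))}

info = parse_proxy_v1(b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n")
print(info["src"])  # ('203.0.113.7', 51234)
```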

Health Checks

Health checks verify that the attached applications/instances are healthy and reachable. If an instance fails the health check, the load balancer cuts traffic to that instance. Use these parameters to configure a health check.


Valid protocols are:

  • HTTP / HTTPS - Any 2XX success return code is a success. Any other return code is a failure.
  • TCP - An open port is a success. A closed port is a failure.


Port

The TCP, HTTP, or HTTPS port to test.

Interval between health checks

How often to run the health check in seconds.

Response timeout

The number of seconds the Load Balancer waits for a response before the check fails. The default value is 5 seconds.

Unhealthy Threshold

The number of times an instance must consecutively fail a health check before the Load Balancer stops forwarding traffic to it. The default value is 5 failures.

Healthy Threshold

The number of times an instance must consecutively pass a health check before the Load Balancer forwards traffic to it. The default value is 5 successes.


HTTP Path

The HTTP path the load balancer should test. The default value is /. This option is only present if the protocol is HTTP or HTTPS.
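The healthy/unhealthy threshold behavior can be sketched as a small state machine. This is illustrative only, not Vultr's implementation:

```python
# Sketch of the consecutive pass/fail threshold logic described above.
class HealthTracker:
    def __init__(self, healthy_threshold=5, unhealthy_threshold=5):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True   # instances start out receiving traffic
        self._streak = 0      # consecutive results opposing the current state

    def record(self, check_passed: bool):
        if check_passed == self.healthy:
            self._streak = 0  # result matches current state: reset the streak
            return self.healthy
        self._streak += 1
        needed = (self.unhealthy_threshold if self.healthy
                  else self.healthy_threshold)
        if self._streak >= needed:
            self.healthy = not self.healthy  # threshold reached: flip state
            self._streak = 0
        return self.healthy

node = HealthTracker(unhealthy_threshold=3)
node.record(False)
node.record(False)
print(node.healthy)  # True: only 2 consecutive failures so far
node.record(False)   # third consecutive failure flips the state
print(node.healthy)  # False: traffic would be cut to this node
```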

Post Deployment

Once the deployment is complete, you will find the public IPv4 and IPv6 address for the load balancer in your customer portal. You can attach/detach your application/instances to the load balancer, and make any configuration changes needed.


Metrics are available once your load balancer has been running for a few minutes. You will be able to view your metrics from the metrics tab in your load balancer dashboard.

Manage the Load Balancer via API

The Vultr API offers several endpoints to manage the Vultr Load Balancer.

Load balancer

  • Create a new load balancer in a particular region.
  • List the load balancers in your account.
  • Update information for a load balancer.

Load balancer rules

  • Create a new forwarding rule for a load balancer.
  • List the forwarding rules for a load balancer.
  • Get information for a forwarding rule on a load balancer.
  • Delete a forwarding rule on a load balancer.
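For example, listing load balancers is a single authenticated GET request. The v2 endpoint path and Bearer-token header below are assumptions based on the Vultr API reference; check the current documentation before relying on them. The request is prepared here but not sent:

```python
import urllib.request

# Assumed Vultr API v2 endpoint and auth scheme -- verify against the
# current Vultr API reference.
API_KEY = "YOUR_API_KEY"  # placeholder

req = urllib.request.Request(
    "https://api.vultr.com/v2/load-balancers",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# urllib.request.urlopen(req) would perform the call and return JSON.
print(req.full_url)
```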

Frequently Asked Questions

How many instances and connections are allowed?

Load balancers support up to ten instances and 10,000 simultaneous connections per instance.

Can I use a load balancer with servers in multiple regions?

No. A load balancer and its attached instances must be in the same region: the load balancer can only direct traffic to server instances located in the same datacenter location as the load balancer itself.

My servers are working, why is my health check failing?

  • If using the HTTP or HTTPS protocol, make sure the port and URL path are both correct. The health check looks for a 2XX success status response code. Any other code is considered unhealthy.
  • If using TCP protocol, make sure to test an open port on the attached node.

How is bandwidth charged?

Vultr Load Balancers are bandwidth neutral. We only charge for bandwidth on the instances attached to the load balancer.

How do I attach instances to my Vultr Load Balancer?

In the Vultr dashboard, you can attach instances to and detach them from a given Load Balancer.

How do I manage my Load Balancer?

You do not have to worry about managing Vultr Load Balancers. They are fully managed.

What protocols do you support?

Vultr Load Balancers support TCP, HTTP, and HTTPS.
