Vultr Load Balancers

Last Updated: Mon, Jan 13, 2020

What is a Load Balancer?

Load Balancers sit in front of your application and distribute incoming traffic across multiple instances of your application. For example, let's say you have an e-commerce website. You notice that you have gained traction and have been attracting more and more customers to your application. To accommodate this traffic, you can deploy another instance of your e-commerce store. Now, to direct users between these instances of your store, you deploy a load balancer in front of them.

With a load balancer, you can:

  • Scale your application: Deploy more instances to increase the amount of traffic you can handle.
  • Improve your uptime: The load balancer diverts traffic to healthy running nodes if any instance goes offline.

Frequently Asked Questions

How many instances and connections are allowed?

Load balancers support up to ten instances and 10,000 simultaneous connections per instance.

Can I use a load balancer for servers in multiple regions?

No. A load balancer can only direct traffic for server instances located in the same datacenter location as the load balancer itself.

My servers are working, why is my health check failing?

  • If using HTTP or HTTPS protocol, make sure the port and URL path are both correct. The health check looks for HTTP 200 OK success status response code. Any other code is considered unhealthy.
  • If using TCP protocol, make sure to test an open port on the attached node.
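A minimal sketch of an endpoint that passes an HTTP health check, using only the Python standard library. The `/healthz` path is an example; it must match whatever URL path you enter in the health check settings.

```python
# Minimal health-check endpoint sketch (standard library only).
# The "/healthz" path is an example; use the URL path you configured
# in the load balancer's health check settings.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)  # anything other than 200 is considered unhealthy
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)     # a 404 here would fail the health check

    def log_message(self, *args):    # keep the demo quiet
        pass

if __name__ == "__main__":
    import threading, urllib.request
    server = ThreadingHTTPServer(("127.0.0.1", 0), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz") as resp:
        print(resp.status)           # prints 200
    server.shutdown()
```

Requesting any other path returns a non-200 status, which the load balancer would treat as an unhealthy instance.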

How is bandwidth charged?

Vultr Load Balancers are bandwidth neutral. We only charge for bandwidth used by the instances attached to the load balancer.

How do I attach instances to my Vultr Load Balancer?

In the Vultr dashboard, you can attach instances to, and remove them from, a given Load Balancer.

How do I manage my Load Balancer?

You do not have to worry about managing Vultr Load Balancers. They are fully managed.

What protocols do you support?

Vultr Load Balancers support TCP, HTTP, and HTTPS.

Deploying a Load Balancer


Load balancers are fully managed, so you do not have to worry about keeping your load balancer infrastructure up and running. This allows you to focus on building your applications and growing them as you see fit. Your only responsibility is to ensure that your application works correctly behind a load balancer.

Forwarding Rules

Forwarding rules define what public ports map to which internal ports with each protocol. For example, if an application is listening on port 1234 but users should access it through port 80, then define the mapping as Load Balancer HTTP 80 -> Instance HTTP 1234.
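The mapping above can be sketched as a simple lookup table. The names here are purely illustrative and not part of any Vultr API:

```python
# Sketch of a forwarding-rule table: (protocol, public port) -> backend target.
# The single rule mirrors the example: Load Balancer HTTP 80 -> Instance HTTP 1234.
FORWARDING_RULES = {
    ("http", 80): ("http", 1234),
}

def resolve(protocol: str, public_port: int) -> tuple:
    """Return the (protocol, port) a connection on a public port is forwarded to."""
    return FORWARDING_RULES[(protocol, public_port)]

print(resolve("http", 80))  # -> ('http', 1234)
```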

If you select the HTTPS protocol, the configuration screen will prompt you to supply your SSL certificate.

Load Balancer Configuration


There are two available options for the algorithm:

  • Roundrobin - Selects servers in turns. This is the default algorithm.

  • Leastconn - Selects the server with the least number of connections – it is recommended for longer sessions. Servers in the same back-end rotate in a round-robin fashion.
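The two selection strategies can be sketched in a few lines; the server addresses below are placeholders:

```python
# Illustrative sketches of the two balancing algorithms, not Vultr's implementation.
import itertools

def round_robin(servers):
    """Yield servers in turns (the default algorithm)."""
    return itertools.cycle(servers)

def leastconn(connections):
    """Pick the server with the fewest active connections."""
    return min(connections, key=connections.get)

rr = round_robin(["10.0.0.1", "10.0.0.2"])
print(next(rr), next(rr), next(rr))               # 10.0.0.1 10.0.0.2 10.0.0.1
print(leastconn({"10.0.0.1": 7, "10.0.0.2": 2}))  # 10.0.0.2
```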

Sticky Sessions

The load balancer uses application-controlled session persistence for sticky sessions.

Your application generates a cookie that determines the duration of session stickiness. The load balancer still issues its session cookie on top of it, but it now follows the lifetime of the application cookie.

This makes sticky sessions more efficient, ensuring that users are never routed to a server after their local session cookie has already expired. However, it's more complex to implement because it requires additional integration between the load balancer and the application.

You will be asked to provide a cookie name.
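A sketch of the application side: the app issues its own cookie, and its `Max-Age` determines how long the session remains sticky. The cookie name `APPSESSION` is an example; it must match the cookie name you enter in the sticky sessions configuration.

```python
# Build the application cookie that controls session stickiness.
# "APPSESSION" is an example name; it must match the cookie name
# configured on the load balancer.
def sticky_cookie(name: str, value: str, max_age: int) -> str:
    """Return a Set-Cookie header value; stickiness lasts max_age seconds."""
    return f"{name}={value}; Max-Age={max_age}; Path=/"

print(sticky_cookie("APPSESSION", "user-42", 3600))
# APPSESSION=user-42; Max-Age=3600; Path=/
```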


Label

This is just a label for your load balancer.


HTTP to HTTPS Redirect

This will force HTTP to redirect to HTTPS. You will need a valid SSL certificate and HTTPS configured for this to work properly.

Proxy Protocol

Proxy Protocol forwards client information to the backend nodes. If this feature is enabled, you must configure your backend nodes to accept Proxy protocol. For more information, see the Proxy Protocol documentation.
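When Proxy Protocol is enabled, each connection begins with a text header carrying the original client address. A sketch of parsing the version 1 (text) header on a backend node, with example addresses:

```python
# Parse a PROXY protocol v1 header line (the CRLF-terminated text form).
# This sketch handles TCP4/TCP6 headers only; the "PROXY UNKNOWN" form is not covered.
def parse_proxy_v1(line: bytes) -> dict:
    parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    proto, src, dst, sport, dport = parts[1:]
    return {"proto": proto, "src": src, "dst": dst,
            "src_port": int(sport), "dst_port": int(dport)}

hdr = b"PROXY TCP4 192.0.2.10 203.0.113.5 56324 443\r\n"
print(parse_proxy_v1(hdr)["src"])  # 192.0.2.10 -- the real client IP
```

Without this parsing step, the backend only sees the load balancer's address as the connection source.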

Health Checks

These health checks verify that your attached applications and instances are healthy and reachable. If an instance fails the health check, the load balancer cuts traffic to that instance.

Post Deployment

Once the deployment is complete, you will find the public IPv4 and IPv6 addresses for the load balancer in your customer portal. You can attach or detach your instances and make any configuration changes needed.


Metrics are available once your load balancer has been running for a few minutes. You will be able to view your metrics from the metrics tab in your load balancer dashboard.
