Author: David Dymko
Last Updated: Tue, Oct 6, 2020
Load Balancers sit in front of your application and distribute incoming traffic across multiple instances of your application. For example, let's say you have an e-commerce website. You notice that you have gained traction and have been attracting more and more customers to your application. To accommodate this traffic, you can deploy another instance of your e-commerce store. Now, to direct users between these instances of your store, you deploy a load balancer in front of them.
With a load balancer, you can distribute traffic across multiple instances of your application, terminate SSL, enable sticky sessions, and run health checks against your back-end instances.
Load balancers are fully managed. You do not have to worry about keeping your load balancer infrastructure up and running, which lets you focus on building and growing your applications. Your only responsibility is to ensure that your application works correctly behind a load balancer.
Forwarding rules define which public ports map to which internal ports, and the protocol for each. For example, if an application listens on port 1234 but users should access it through port 80, define the mapping as Load Balancer HTTP 80 -> Instance HTTP 1234.
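To make the mapping concrete, here is a minimal sketch of a forwarding rule as a plain data structure. The field names are illustrative for this sketch, not the exact Vultr API schema.

```python
# Hypothetical sketch: a forwarding rule as a simple mapping.
# Field names are illustrative, not the exact Vultr API schema.
rule = {
    "frontend_protocol": "http",
    "frontend_port": 80,       # port users connect to on the load balancer
    "backend_protocol": "http",
    "backend_port": 1234,      # port the application listens on
}

def describe(rule):
    """Render a rule in the 'LB proto port -> Instance proto port' form."""
    return (f"Load Balancer {rule['frontend_protocol'].upper()} {rule['frontend_port']} "
            f"-> Instance {rule['backend_protocol'].upper()} {rule['backend_port']}")

print(describe(rule))  # Load Balancer HTTP 80 -> Instance HTTP 1234
```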
Load balancers support HTTPS via two methods: You can either install your SSL certificate on the load balancer, or install the certificate on each instance and transparently pass the session via TCP. Choose the option appropriate for your situation.
Option 1: In this configuration, users connect to the load balancer via HTTPS, and the load balancer proxies the connection to the instances. This is known as SSL Offloading or SSL Termination.
Choose HTTPS protocol for the load balancer. The configuration screen prompts you to install the SSL certificate.
The instance protocol can be HTTPS or HTTP, depending on your requirements.
Option 2: In this configuration, the load balancer performs SSL Passthrough, passing the encrypted TCP session through to the instances, which handle HTTPS themselves.
Choose TCP protocol for the load balancer and install your SSL certificate on each back-end instance.
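The two approaches differ only in their forwarding-rule shape. The sketch below contrasts them; the field names are assumptions for illustration, not the exact Vultr API schema.

```python
# Illustrative rule shapes for the two HTTPS approaches (field names are
# assumptions for this sketch, not the exact Vultr API schema).

# Option 1: SSL offloading - the load balancer terminates TLS and speaks
# plain HTTP (or HTTPS) to the instances. The certificate lives on the LB.
offloading = {"frontend_protocol": "https", "frontend_port": 443,
              "backend_protocol": "http", "backend_port": 80}

# Option 2: SSL passthrough - the load balancer relays the raw TCP
# session; each instance holds its own certificate and terminates TLS.
passthrough = {"frontend_protocol": "tcp", "frontend_port": 443,
               "backend_protocol": "tcp", "backend_port": 443}
```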
There are two available options for the algorithm:
Roundrobin - Selects servers in turns. This is the default algorithm.
Leastconn - Selects the server with the fewest active connections; recommended for longer sessions. Servers in the same back end otherwise rotate in a round-robin fashion.
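The two algorithms can be sketched in a few lines. This is a minimal illustration of the selection logic, not the load balancer's actual implementation.

```python
from itertools import cycle

# Minimal sketches of the two balancing algorithms (illustrative only).

def round_robin(servers):
    """Yield servers in turns, wrapping around (the default algorithm)."""
    return cycle(servers)

def least_conn(connections):
    """Pick the server with the fewest active connections.
    `connections` maps server name -> current connection count."""
    return min(connections, key=connections.get)

picker = round_robin(["a", "b", "c"])
assert [next(picker) for _ in range(4)] == ["a", "b", "c", "a"]
assert least_conn({"a": 5, "b": 2, "c": 9}) == "b"
```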
The load balancer uses application-controlled session persistence for sticky sessions. Your application generates a cookie that determines the duration of session stickiness. The load balancer still issues its session cookie on top of it, but it now follows the lifetime of the application cookie.
This makes sticky sessions more efficient, ensuring that users are never routed to a server after their local session cookie has already expired. However, it's more complex to implement because it requires additional integration between the load balancer and the application.
You must supply a cookie name when configuring sticky sessions.
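The routing decision the load balancer makes can be sketched roughly as follows. The cookie name `APPSESSION`, the server addresses, and the fallback behavior are all assumptions for this sketch.

```python
# Sketch of application-controlled stickiness (all names are illustrative).
# The load balancer routes by a cookie the application sets; if the cookie
# is absent or no longer matches a backend, it falls back to normal balancing.

SERVERS = ["10.0.0.1", "10.0.0.2"]

def route(cookies, cookie_name="APPSESSION", fallback=SERVERS[0]):
    """Return the backend pinned by the session cookie, else a fallback."""
    server = cookies.get(cookie_name)
    return server if server in SERVERS else fallback

assert route({"APPSESSION": "10.0.0.2"}) == "10.0.0.2"  # sticky: pinned backend
assert route({}) == "10.0.0.1"                          # no cookie: normal balancing
```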
The label is a descriptive name for your load balancer.
This forces HTTP connections to redirect to HTTPS. Before enabling this option, make sure you have installed a valid SSL certificate on the load balancer, as described in Option 1 above, and configured HTTPS forwarding.
Proxy Protocol forwards client connection information, such as the original client IP address, to the back-end nodes. If this feature is enabled, you must configure your back-end nodes to accept PROXY protocol. For more information, see the Proxy Protocol documentation.
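To show what the backend actually receives, here is a minimal parser for a PROXY protocol version 1 header, which is a single text line prepended to the TCP stream. This sketch handles v1 only; v2 is a binary format and is not covered.

```python
# A PROXY protocol v1 header is one text line the load balancer prepends
# to the TCP stream so backends can see the real client address, e.g.:
#   PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n
# Minimal parser sketch (v1 only).

def parse_proxy_v1(line: str) -> dict:
    parts = line.rstrip("\r\n").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    return {"family": parts[1], "src_ip": parts[2], "dst_ip": parts[3],
            "src_port": int(parts[4]), "dst_port": int(parts[5])}

hdr = parse_proxy_v1("PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n")
assert hdr["src_ip"] == "203.0.113.7" and hdr["dst_port"] == 80
```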
Health checks verify that your attached instances are healthy and reachable. If an instance fails its health check, the load balancer stops sending traffic to that instance.
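The effect of a health check on routing can be sketched as filtering the backend pool. The probe below is a stand-in function, not the load balancer's actual check.

```python
# Sketch of the health-check idea: probe each instance and stop routing
# to any that fails. The probe here is a stand-in, not the real check.

def healthy_pool(instances, probe):
    """Return only the instances whose probe succeeds."""
    return [i for i in instances if probe(i)]

# Stand-in probe: pretend 10.0.0.2 is down.
up = healthy_pool(["10.0.0.1", "10.0.0.2"], probe=lambda i: i != "10.0.0.2")
assert up == ["10.0.0.1"]  # traffic is cut to the failing instance
```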
Once deployment is complete, you can find the load balancer's public IPv4 and IPv6 addresses in your customer portal. From there, you can attach or detach instances and make any needed configuration changes.
Metrics become available after your load balancer has been running for a few minutes. You can view them on the metrics tab of your load balancer dashboard.
The Vultr API offers several endpoints to manage the Vultr Load Balancer.
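As a starting point, here is a sketch of a request payload for creating a load balancer. The endpoint path, field names, and values are assumptions based on this guide; check the current Vultr API reference before relying on them.

```python
import json

# Sketch of a create-load-balancer payload for the Vultr API. The endpoint
# path and field names below are assumptions for this sketch -- consult the
# current API reference for the authoritative schema.
API_URL = "https://api.vultr.com/v2/load-balancers"

payload = {
    "region": "ewr",                      # must match the instances' region
    "label": "example-lb",
    "balancing_algorithm": "roundrobin",  # or "leastconn"
    "forwarding_rules": [
        {"frontend_protocol": "http", "frontend_port": 80,
         "backend_protocol": "http", "backend_port": 1234},
    ],
}

body = json.dumps(payload)
# POST `body` to API_URL with an "Authorization: Bearer <API key>" header
# and "Content-Type: application/json".
```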
Load balancers support up to 10 instances and 10,000 simultaneous connections per instance.
No. A load balancer can only direct traffic to server instances located in the same datacenter (region) as the load balancer itself.
Vultr Load Balancers are bandwidth neutral: bandwidth is only charged on the instances attached to the load balancer.
In the Vultr dashboard, you can attach instances to, and detach them from, a given load balancer.
You do not need to manage Vultr Load Balancers yourself; they are fully managed.
Vultr Load Balancers support TCP, HTTP, and HTTPS.