Load Balancers sit in front of your application and distribute incoming traffic across multiple instances of your application. For example, let's say you have an e-commerce website. You notice that you have gained traction and have been attracting more and more customers to your application. To accommodate this traffic, you can deploy another instance of your e-commerce store. Now, to direct users between these instances of your store, you deploy a load balancer in front of them.
With a load balancer, you can distribute traffic across multiple instances of your application and scale as demand grows. Load balancers support up to ten attached instances and 10,000 simultaneous connections per instance.
No. A load balancer can only direct traffic to server instances located in the same datacenter location as the load balancer itself.
Unfortunately not. Load Balancers and attached instances must be in the same region.
Vultr Load Balancers are bandwidth neutral. We only charge for the bandwidth used by the instances attached to the load balancer.
In the Vultr dashboard, you can attach instances to and remove instances from a given Load Balancer.
You do not have to worry about managing Vultr Load Balancers. They are fully managed.
Vultr Load Balancers support TCP, HTTP, and HTTPS.
Load balancers are fully managed. You do not have to worry about keeping your load balancer infrastructure up and running, which allows you to focus on building and growing your applications as you see fit. You are only responsible for ensuring that your application works correctly behind a load balancer.
Forwarding rules define which public ports map to which internal ports for each protocol. For example, if an application is listening on port 1234 but users should access it through port 80, then define the mapping as Load Balancer HTTP 80 -> Instance HTTP 1234.
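To illustrate the concept, here is a minimal sketch of a forwarding rule as data and the lookup a balancer performs. The field names are illustrative only and are not Vultr's actual API schema.

```python
# Hypothetical forwarding rule: public port 80 maps to instance port 1234.
# Field names are illustrative, not Vultr's real configuration schema.
forwarding_rules = [
    {"frontend_protocol": "http", "frontend_port": 80,
     "backend_protocol": "http", "backend_port": 1234},
]

def resolve_backend(rules, frontend_port):
    """Return the (protocol, port) an incoming public request is forwarded to."""
    for rule in rules:
        if rule["frontend_port"] == frontend_port:
            return rule["backend_protocol"], rule["backend_port"]
    return None  # no rule: the port is not exposed

# A request arriving on public port 80 is proxied to port 1234 on an instance.
```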
If you select the HTTPS protocol, you will be prompted to supply your SSL certificate on the configuration screen.
There are two available options for the algorithm:
Roundrobin - Selects servers in turn. This is the default algorithm.
Leastconn - Selects the server with the least number of connections – it is recommended for longer sessions. Servers in the same back-end rotate in a round-robin fashion.
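The two algorithms can be sketched in a few lines of Python. This is a simplified model of the selection logic only; tie-breaking among least-connection servers falls back to list order here rather than rotating round-robin as the real balancer does.

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Roundrobin: cycle through the servers in order.
_rr = itertools.cycle(servers)

def round_robin():
    return next(_rr)

# Leastconn: pick the server with the fewest active connections.
# (Connection counts here are hypothetical; ties resolve by list order
# in this sketch, whereas a real balancer rotates tied servers.)
connections = {"10.0.0.1": 5, "10.0.0.2": 2, "10.0.0.3": 7}

def least_conn():
    return min(servers, key=lambda s: connections[s])
```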
The load balancer uses application-controlled session persistence for sticky sessions.
Your application generates a cookie that determines the duration of session stickiness. The load balancer still issues its session cookie on top of it, but it now follows the lifetime of the application cookie.
This makes sticky sessions more efficient, ensuring that users are never routed to a server after their local session cookie has already expired. However, it's more complex to implement because it requires additional integration between the load balancer and the application.
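The routing behavior described above can be modeled as follows. This is a conceptual sketch, assuming a hypothetical `StickyRouter` that tracks the application cookie's expiry; it is not how the load balancer is actually implemented.

```python
# Hypothetical model of application-controlled session persistence:
# the balancer keeps routing a client to the same backend only while
# the application's cookie is still valid.
import time

class StickyRouter:
    def __init__(self, servers):
        self.servers = servers
        self.next_idx = 0          # round-robin position for new sessions
        self.sessions = {}         # cookie value -> (server, expiry timestamp)

    def bind(self, cookie, server, ttl, now=None):
        """Record the application cookie and its lifetime for a client."""
        now = time.time() if now is None else now
        self.sessions[cookie] = (server, now + ttl)

    def route(self, cookie=None, now=None):
        now = time.time() if now is None else now
        if cookie in self.sessions:
            server, expires = self.sessions[cookie]
            if now < expires:
                return server          # sticky: cookie still valid
            del self.sessions[cookie]  # expired: fall through and re-balance
        server = self.servers[self.next_idx % len(self.servers)]
        self.next_idx += 1
        return server
```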
You will be asked to provide a cookie name.
This is just a label for your load balancer.
This will force HTTP requests to redirect to HTTPS. You will need a valid SSL certificate and HTTPS configured for this to work properly.
Proxy Protocol forwards client information to the backend nodes. If this feature is enabled, you must configure your backend nodes to accept the Proxy Protocol. For more information, see the Proxy Protocol documentation.
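For illustration, here is a minimal parser for the human-readable Proxy Protocol v1 header that a backend receives when the feature is enabled. In practice you should use your server's built-in support rather than parsing the header yourself; this sketch also ignores the `PROXY UNKNOWN` case.

```python
def parse_proxy_v1(line: bytes) -> dict:
    """Parse a PROXY protocol v1 header, e.g.
    b'PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n'
    (sketch only; the 'PROXY UNKNOWN' form is not handled)."""
    parts = line.rstrip(b"\r\n").decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return {
        "family": family,             # TCP4 or TCP6
        "client_ip": src_ip,          # the real client address
        "client_port": int(src_port),
        "server_ip": dst_ip,
        "server_port": int(dst_port),
    }
```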
These health checks verify that your attached instances are healthy and reachable. If an instance fails its health check, the load balancer stops sending traffic to that instance.
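The behavior described above can be sketched as a periodic probe that cuts traffic after repeated failures. The threshold and probe function here are illustrative assumptions, not Vultr's actual defaults.

```python
# Illustrative health checker: probe each instance and stop routing to
# instances that fail consecutive checks. The threshold is arbitrary.
from urllib.request import urlopen

UNHEALTHY_AFTER = 3  # consecutive failures before traffic is cut (assumed)

class HealthChecker:
    def __init__(self, instances):
        self.failures = {url: 0 for url in instances}

    def probe(self, url, check=None):
        """Run one health check; `check` can be injected for testing."""
        check = check or (lambda u: urlopen(u, timeout=2).status == 200)
        try:
            ok = check(url)
        except Exception:
            ok = False
        # A success resets the counter; a failure increments it.
        self.failures[url] = 0 if ok else self.failures[url] + 1

    def healthy(self):
        """Instances still eligible to receive traffic."""
        return [u for u, n in self.failures.items() if n < UNHEALTHY_AFTER]
```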
Once the deployment is complete, you will find the load balancer's public IPv4 and IPv6 addresses in your customer portal. You can attach or detach instances and make any configuration changes needed.
Metrics are available once your load balancer has been running for a few minutes. You will be able to view your metrics from the metrics tab in your load balancer dashboard.