Load Balancers sit in front of your application and distribute incoming traffic across multiple instances of your application. For example, say you run an e-commerce website that has gained traction and is attracting more and more customers. To accommodate this traffic, you can deploy a second instance of your store. To direct users between these instances, you deploy a load balancer in front of them.
The load balancer will distribute traffic between the two instances of your application. This lets you scale your application by deploying more instances: putting them behind a load balancer increases the amount of traffic you can handle. It also improves your uptime, because if one of your instances fails or goes offline, the load balancer diverts traffic to the remaining healthy instances.
This is a fully managed service. You do not have to worry about keeping the load balancer infrastructure up and running, which leaves you free to focus on building your applications and growing them as you see fit. Note, however, that you are responsible for making sure your application is configured and set up properly to work with the load balancer.
Here you define which public ports map to which internal ports, along with the protocol. For example, if you want to put a load balancer in front of a Ruby on Rails application that listens on port 1234, but you want users to reach it through port 80, you would define the mapping as:

Load Balancer HTTP 80 -> Instance HTTP 1234
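The mapping above can be represented as a simple forwarding rule. The sketch below is illustrative only; the field names are assumptions, not this provider's actual configuration format:

```python
# Hypothetical sketch of a forwarding rule: a public port/protocol on the
# load balancer mapped to a private port/protocol on each instance.
from dataclasses import dataclass

@dataclass(frozen=True)
class ForwardingRule:
    lb_protocol: str        # protocol clients use to reach the load balancer
    lb_port: int            # public port on the load balancer
    instance_protocol: str  # protocol used to reach the instance
    instance_port: int      # port the application actually listens on

# The Rails example from the text: clients hit port 80, the app listens on 1234.
rule = ForwardingRule(lb_protocol="HTTP", lb_port=80,
                      instance_protocol="HTTP", instance_port=1234)
print(f"Load Balancer {rule.lb_protocol} {rule.lb_port} -> "
      f"Instance {rule.instance_protocol} {rule.instance_port}")
```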
If you choose HTTPS as your protocol, you will be prompted to upload your own SSL certificate. Let's Encrypt support will be added in a future release.
There are two available options for the algorithm:

Roundrobin - Selects servers in turns. This is the default algorithm.

Leastconn - Selects the server with the fewest active connections; it is recommended for longer sessions. Servers in the same back end are also rotated in round-robin fashion.
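The two algorithms can be sketched in a few lines of Python. This is a simplified illustration of the selection logic, not how a production load balancer implements it:

```python
# Simplified sketch of the two balancing algorithms described above.
import itertools

class RoundRobin:
    """Selects servers in turns."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConn:
    """Selects the server with the fewest active connections,
    rotating among tied servers in round-robin fashion."""
    def __init__(self, servers):
        self.connections = {s: 0 for s in servers}
        self._order = list(servers)
        self._next = 0

    def pick(self):
        least = min(self.connections.values())
        ties = [s for s in self._order if self.connections[s] == least]
        server = ties[self._next % len(ties)]
        self._next += 1
        self.connections[server] += 1
        return server

    def release(self, server):
        """Call when a connection to `server` closes."""
        self.connections[server] -= 1

rr = RoundRobin(["a", "b"])
print([rr.pick() for _ in range(4)])  # ['a', 'b', 'a', 'b']
```

With Leastconn, a server holding a long-lived session accumulates connections and naturally receives fewer new ones, which is why it suits longer sessions.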
We are using application-controlled session persistence for our sticky sessions.
Your application generates a cookie that determines the duration of session stickiness. The load balancer still issues its own session cookie on top of it, but it now follows the lifetime of the application cookie.
This makes sticky sessions more efficient, ensuring that users are never routed to a server after their local session cookie has already expired. However, it’s more complex to implement because it requires additional integration between the load balancer and the application.
You will be asked to add a cookie-name.
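The routing decision for application-controlled persistence can be sketched as follows. The cookie names here are illustrative assumptions, not this product's actual cookie names:

```python
# Hypothetical sketch of application-controlled session persistence.
# Cookie names below are assumed for illustration only.
APP_COOKIE = "session_id"  # the "cookie-name" you configure
LB_COOKIE = "lb_backend"   # the load balancer's own cookie

def route(request_cookies, backends, pick_next):
    """Return (backend, response_cookies) for one request.

    pick_next is the fallback balancing algorithm (e.g. round-robin).
    """
    # Stay sticky only while the application's cookie is still present.
    if APP_COOKIE in request_cookies and request_cookies.get(LB_COOKIE) in backends:
        backend = request_cookies[LB_COOKIE]
    else:
        backend = pick_next()

    response_cookies = {}
    if APP_COOKIE in request_cookies:
        # Re-issue the LB cookie; its lifetime follows the application cookie,
        # so stickiness ends when the application session ends.
        response_cookies[LB_COOKIE] = backend
    return backend, response_cookies
```

Once the application cookie expires or is cleared, the load balancer's cookie is no longer honored and the request falls back to the normal balancing algorithm.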
This is just a label for your load balancer.
This forces HTTP requests to redirect to HTTPS. You will need a valid SSL certificate and HTTPS configured for this to work properly.
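The redirect itself is straightforward: a plain-HTTP request is answered with a permanent redirect to the same host and path over HTTPS. A minimal sketch, with a made-up hostname:

```python
# Sketch of the HTTP -> HTTPS redirect this option enables.
def redirect_to_https(host, path):
    """Build the (status, headers) redirect response for a plain-HTTP request."""
    return 301, {"Location": f"https://{host}{path}"}

status, headers = redirect_to_https("shop.example.com", "/cart")
print(status, headers["Location"])  # 301 https://shop.example.com/cart
```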
These health checks verify that your attached applications/instances are healthy and can be properly routed to. If one of your instances fails a health check, the load balancer stops sending traffic to that instance.
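The effect of health checks on routing can be sketched as a simple filter: only instances whose probe succeeds remain eligible to receive traffic. This is a conceptual illustration, not the provider's implementation:

```python
# Conceptual sketch: route traffic only to instances passing their health check.
def healthy_instances(instances, probe):
    """probe(instance) -> True if the instance answered its health check."""
    return [inst for inst in instances if probe(inst)]

# Example: instance "b" is down, so the load balancer stops routing to it.
status = {"a": True, "b": False, "c": True}
print(healthy_instances(["a", "b", "c"], status.get))  # ['a', 'c']
```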
Once the deployment is done (it may take a few minutes), you will be given an IPv4 and an IPv6 address, which serve as the public IPs for your load balancer.
You can then attach your applications/instances to the load balancer or detach them, and make any configuration changes you would like.
Once your load balancer has been running for a few minutes, metrics will begin to be gathered. You can view them from the metrics tab in your load balancer dashboard.