Docker Load Balancing Now Available in Rancher 0.16

on Apr 21, 2015

Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I’m going to give an overview of a new feature I’ve been working on, which was released this week with Rancher 0.16 – a Docker Load Balancing service.

One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Rancher users can now configure, update, and scale an integrated load balancing service to meet their application needs, using either Rancher's UI or API. To implement our load balancing functionality we chose HAProxy, which is deployed as a container and managed by Rancher's orchestration functionality.

With Rancher’s Load Balancing capability, users are now able to use a consistent, portable load balancing service on any infrastructure where they can run Docker. Whether it is running in a public cloud, private cloud, lab, cluster, or even on a laptop, any container can be a target for the load balancer.

Creating a Load Balancer

Once you have an environment running in Rancher, it is simple to create a Load Balancer. You’ll see a new top level tab in the Rancher UI called “Balancing” from which you can create and access your load balancers.

[Screenshot: the new "Balancing" tab in the Rancher UI]

To create a new load balancer click on + Add Load Balancer. You’ll be given a configuration screen to provide details on how you want the load balancer to function.

[Screenshot: the load balancer configuration screen]

There are a number of different options for configuration, and I’ve created a video demonstration to walk through the process.
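Everything shown in the UI can also be driven through Rancher's REST API. Below is a minimal sketch in Python of what a create request might look like; the endpoint path, field names, and authentication scheme are illustrative assumptions, so check the API schema exposed by your own Rancher server for the exact format.

```python
import json
import urllib.request

# Illustrative payload for creating a load balancer. The exact field
# names depend on the API schema of your Rancher version.
payload = {
    "name": "web-lb",
    "loadBalancerConfig": {
        # Listen on port 80 and forward traffic to port 8080
        # on the target containers.
        "listeners": [
            {"sourcePort": 80, "targetPort": 8080, "protocol": "http"}
        ],
    },
}

def create_load_balancer(base_url, api_key, payload):
    """POST the load balancer definition to a (hypothetical) endpoint.

    base_url is your Rancher server, e.g. "http://rancher.local:8080";
    api_key is assumed to be a pre-encoded Basic auth credential.
    """
    req = urllib.request.Request(
        base_url + "/v1/loadbalancers",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + api_key,
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

The same payload shape can be reused for updates, which is one reason driving the load balancer from the API is convenient for automation.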

Updating an active Load Balancer

In some cases after your Load Balancer has been created, you might want to change its settings – for example to add or remove listener ports, configure a health check, or simply add more target containers. Rancher performs all the updates without any downtime for your application. To update the Load Balancer, bring up the Load Balancer “Details” view by clicking on its name in the UI:

 

[Screenshot: opening the load balancer "Details" view]

Then navigate to the toolbar of the setting you want to change, and make the update:

 

[Screenshot: updating the load balancer configuration]

Understanding Health Checks

Health checks can be incredibly helpful when running a production application. They monitor the availability of target containers, so that if one of the load balanced containers in your app becomes unresponsive, it is excluded from the list of balanced targets until it is functioning again. You can delegate this task to the Rancher load balancer by configuring a health check from the UI. Just provide a monitoring URL for the target container, along with the check interval and the healthy and unhealthy response thresholds. You can see the UI for this in the image below.

[Screenshot: health check configuration]

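Since the load balancer is HAProxy under the hood, these settings correspond to standard HAProxy health-check directives. The following is a rough, hand-written sketch of roughly equivalent configuration, not Rancher's actual generated output; the URL, addresses, and thresholds are illustrative.

```
backend web-targets
    # Probe each target with an HTTP GET against the monitoring URL.
    option httpchk GET /healthcheck
    # "inter" is the check interval in milliseconds; "rise" is the number
    # of consecutive successes before a server is marked healthy again,
    # and "fall" the number of consecutive failures before it is marked
    # down and excluded from balancing.
    server web1 10.0.0.11:8080 check inter 2000 rise 2 fall 3
    server web2 10.0.0.12:8080 check inter 2000 rise 2 fall 3
```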
Stickiness Policies

Some applications require that a user continue to connect to the same backend server for the duration of a session. This persistence is achieved by configuring a stickiness policy on the load balancer. With stickiness, you can control whether the session cookie is provided by the application or inserted directly by the load balancer.
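In HAProxy terms, the two modes map onto inserting a cookie at the load balancer versus tracking one the application already sets. A rough sketch of the first mode, with illustrative cookie and server names (again, hand-written HAProxy configuration, not Rancher's generated output):

```
backend web-targets
    # Load-balancer-provided cookie: HAProxy inserts a SRVID cookie that
    # pins each client to the server that answered its first request.
    cookie SRVID insert indirect nocache
    # For an application-provided cookie, HAProxy would instead track an
    # existing session cookie (e.g. JSESSIONID) rather than insert its own.
    server web1 10.0.0.11:8080 cookie web1
    server web2 10.0.0.12:8080 cookie web2
```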

 

Scaling your application

The load balancer service is primarily used to help scale applications as you add additional targets to the load balancer. However, to provide an additional layer of scaling, the load balancer itself can also run across multiple hosts, creating a clustered load balancing service. With the load balancer deployed on multiple hosts, you can use a global load balancing service, such as Amazon Route 53, to distribute incoming traffic across the load balancers. This can be especially useful when running load balancers in different physical locations. The diagram below shows how this can be done.

[Diagram: global load balancing across clustered load balancer hosts]

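As a concrete illustration of the Route 53 piece, a weighted record set pointing the same DNS name at two load balancer hosts might look roughly like the following change batch (the zone, record name, and addresses are made up for the example):

```json
{
  "Comment": "Split traffic across two Rancher load balancer hosts",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "rancher-lb-1",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": "rancher-lb-2",
        "Weight": 50,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.20" }]
      }
    }
  ]
}
```

Route 53 then answers DNS queries in proportion to the weights, so each load balancer host receives roughly half the incoming traffic.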
Load Balancing and Service Discovery

This new load balancing support has plenty of independent value, but it will also be an important part of the work we're doing on service discovery and support for Docker Compose. We're still working on and testing this, but you should start to see the functionality in Rancher over the next four to six weeks. If you'd like to learn about load balancing, Docker Compose, service discovery, and running microservices with Rancher, please join our next online meetup, where we'll cover all of these topics. Click the button below to register.


REGISTER NOW


Alena Prokharchyk

@lemonjet

https://github.com/alena1108

 

