Hi everyone, my name is Alena Prokharchyk. I'm part of the engineering team here at Rancher, and I'm still loving working on container infrastructure. A few months ago I wrote an article introducing Docker load balancing in Rancher. Today, I want to focus on how we've built a brand new service discovery capability into Rancher, and how we've integrated it with load balancing. If you're not familiar with service discovery, it is a networking capability that allows groups of devices (or, in our case, containers) to be identified by a common name and discovered by other services on the network. In Rancher we enable this using our container network and DNS management services. We have also integrated it with our load balancer solution to make it simple to deploy services based on Docker images, define how they can discover one another, and allow load balancing to route traffic to specific services. In today's post I'm going to walk through this new feature and give you an overview of how to get started using it. So let's start simple and build on a use case from my previous post, using the load balancer in front of an nginx server - but this time we'll run both nginx and the load balancer as services within Rancher.
Creating services in Rancher
We've updated our UI to make it clear when you're working with services and when you're working with infrastructure like individual containers and hosts. In the top nav you'll notice a Services tab; clicking on it takes you to the services page, where you can create your first "Project." A Project is a group of services representing your deployment, and the domain in which service discovery works:

Now let's add our nginx service. Creating a service is very similar to starting a single container: we specify the Docker image we want to pull from DockerHub (I'm using nginx:1.9) and then define the number of containers we want to create. In this case I'm using scale=3, but we can always adjust this number later. Finally, hit create, and the service will be added to our project page.

Next, click on the "Add Balancer" button on the project page to add a Load Balancer service: Configure this service to specify your source/target ports, but instead of pointing it at a container, as we did last time, we're going to point it at a service, so select "nginx" from the "Target" list. This will configure the LB service to balance all the instances of our nginx service, wherever they run.

Once our services are defined, it's time to start them. From the project page, click on the menu icon and select "Start Services." This triggers container creation for both services, and our service discovery functionality will ensure that traffic is forwarded from our load balancer to the nginx service and balanced across its containers:
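The same setup can also be described in compose form. Here's a minimal sketch, assuming the version-1 docker-compose.yml syntax of the time; the service names and the LB image name are illustrative, and the files Rancher exports for you may differ:

```yaml
# docker-compose.yml (sketch) - one nginx service plus a load balancer
nginx:
  image: nginx:1.9                       # the web service being balanced
lb:
  image: rancher/load-balancer-service   # assumed Rancher LB image name
  ports:
    - "80:80"                            # source:target port mapping
  links:
    - nginx:nginx                        # point the balancer at the nginx service
```

The scale of each service lives in the companion rancher-compose.yml file rather than here, since scale is a Rancher concept, not a Docker Compose one.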
Scaling up nginx service
The beauty of combining container networking with service discovery and load balancing is that as your application grows and needs more containers, you can scale up the services easily. Let's walk through this. Start by clicking on "Edit Service" from the menu icon on our nginx service, and increase the scale: As soon as we save the new service configuration, four additional containers will be launched. These will automatically be registered with the Load Balancer, and traffic will be distributed across them. As always, you can trigger these actions from our API as well as the UI.
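If you manage the project with compose files instead of the UI, the same scale-up is a one-line change in rancher-compose.yml (the service name and starting scale here mirror the example above; treat the exact file layout as illustrative):

```yaml
# rancher-compose.yml (sketch) - Rancher-specific settings live here
nginx:
  scale: 7   # was 3; applying this launches four additional containers
```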
Scaling up the Load Balancer service
This same type of scaling also works with our Load Balancer service. Looking back at our standalone Load Balancer, in order to create more instances we had to manually add additional hosts to it. With services it's much simpler: just scale up the LB service, and Rancher will automatically match the scale by starting more instances on hosts picked by our internal allocator. The "Scale Up" button is another way to easily change the scale from within the UI:
Load Balancing across services
I think service load balancing is going to have a significant impact on how our users upgrade services. With this in mind, we designed our load balancer to balance across any number of different services. Using our nginx example, we would create a new service in our project, this time running the latest version of nginx, 1.9.1, and called "nginx-latest". Once we have created that service, we can go to our load balancer and add an additional service link to "nginx-latest": Now we've balanced traffic across two services. This allows us to do A/B testing, and when we're ready we can either remove the original nginx service from the load balancer or simply stop it. Our UI shows how traffic is now being distributed across both services:
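In compose terms, balancing across two services is just a second link on the LB. A sketch, with the image and service names assumed for illustration:

```yaml
# docker-compose.yml (sketch) - LB splitting traffic across two nginx versions
nginx:
  image: nginx:1.9
nginx-latest:
  image: nginx:1.9.1
lb:
  image: rancher/load-balancer-service    # assumed Rancher LB image name
  ports:
    - "80:80"
  links:
    - nginx:nginx
    - nginx-latest:nginx-latest           # the added link; traffic now splits across both services
```

Removing a service from the balancer is the reverse operation: delete its link and the LB stops routing to it.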
Discovery between services without a load balancer
The new Rancher service discovery capability can be used between services as well. In the screenshot below, you can see I've set up a multi-tier application and am in the process of upgrading my app server. Service connections can be made or updated at any time from the service configuration screen.
How does this relate to Docker Compose?
As always, we design our container infrastructure services to align with Docker and the management tools they are creating. We think Docker Compose is a fantastic way to deploy applications, so we let you create services directly from Docker Compose files using our prototype Rancher-Compose CLI, and you can export any services you create in Rancher as a Docker Compose file. For service discovery, that means we use compose "links" to define service relationships. We also create a "rancher-compose.yml" file that specifies all of the service discovery information we manage outside of Docker Compose. By clicking on the "script" icon at the top of any project you can see both yml files:
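To make the split between the two files concrete, here is a hedged sketch of the relevant stanzas for the nginx project (exact keys depend on your Rancher version):

```yaml
# docker-compose.yml holds anything Docker Compose itself understands:
lb:
  links:
    - nginx:nginx   # service relationships are plain compose links

# rancher-compose.yml holds everything Rancher layers on top:
nginx:
  scale: 3          # e.g. the desired scale of each service
```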
When can you change load balancer targets?
A load balancer's set of target services can be changed at any point; there are no requirements on the state of the LB or the target service. If, for example, you added a link from the LB to a new service before it was fully activated (its containers were still deploying), the new containers will be registered with the LB as soon as they start.
Load Balancer configuration
As before, health checks and stickiness policies are supported in Load Balancer services, and they work just the same way as in the standalone Rancher Load Balancer. Any health check rules and stickiness policies you create will now apply to all instances of every service being balanced.
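As an illustration, a health check could be attached to a service in rancher-compose.yml roughly like this. The field names below are an assumption based on Rancher's compose extensions of the time; check the documentation for your version before relying on them:

```yaml
# rancher-compose.yml (sketch) - health check applied to every nginx instance
nginx:
  scale: 3
  health_check:
    port: 80
    interval: 2000                  # time between checks (assumed: milliseconds)
    response_timeout: 2000
    unhealthy_threshold: 3          # consecutive failures before flagging a container
    healthy_threshold: 2            # consecutive successes before restoring it
    request_line: GET / HTTP/1.0    # HTTP check instead of a plain TCP connect
```

Because the rule is defined at the service level, it covers every container the service runs, including ones added later by a scale-up.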
More fun stuff coming soon
Over the next couple of weeks we'll be introducing how we use host and container tags, affinity policies, Docker labels, and health checks to deploy services exactly how we want them and ensure they are robust. We'll also be talking about balancing traffic to external services' IPs and HTTP routing by domain/host name. If you'd like to see all of this working, please join our next meetup, where we'll be demonstrating service discovery, load balancing, and more.
If you have any questions or feedback, please contact me on Twitter: @lemonjet, or on GitHub: https://github.com/alena1108
Alena is a Principal Software Engineer at Rancher Labs who has been building infrastructure services, first for virtual machines and now for containers, with a main focus on Kubernetes. She enjoys helping others make sense of problems and explore solutions together. In her free time Alena enjoys rollerblading, reading books on totally random subjects, and listening to other people's stories.