Kubernetes is the container orchestration system of choice for many enterprise deployments. That’s a tribute to its reliability, flexibility, and broad range of features. In this post, we’re going to take a closer look at how Kubernetes handles a very common and very necessary job: load balancing. Load balancing is a relatively straightforward task in many non-container environments (i.e., balancing between servers), but it involves a bit of special handling when it comes to containers.
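As a quick illustration (all names here are placeholders, not taken from the post), a Kubernetes Service of type LoadBalancer spreads incoming traffic across every pod matched by its selector, which is the basic building block of load balancing in Kubernetes:

```yaml
# Illustrative Service manifest: traffic arriving on port 80 is
# load-balanced across all pods carrying the label app: web.
# The name and labels are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Because the Service targets pods by label rather than by address, pods can come and go (or be rescheduled to other nodes) without any change to the load-balancing configuration.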
In Kubernetes, we often hear terms like resource management, scheduling, and load balancing. While Kubernetes offers many capabilities, understanding these concepts is key to appreciating how workloads are placed, managed, and made resilient. In this short article, I provide an overview of each facility, explain how each is implemented in Kubernetes, and show how they interact with one another to provide efficient management of containerized workloads. If you’re new to Kubernetes and want to learn more about the space, please consider reading our case for Kubernetes article.
Resource management is all about the efficient allocation of infrastructure resources. In Kubernetes, resources are things that can be requested by, allocated to, or consumed by a container or pod. Having a common resource management model is essential, since many components in Kubernetes need to be resource-aware, including the scheduler, load balancers, worker-pool managers, and even applications themselves. If resources are underutilized, this translates into waste and cost-inefficiency. If resources are over-subscribed, the result can be application failures, downtime, or missed SLAs.
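The request/limit split described above can be seen in a minimal pod spec (the names and values below are illustrative, not from the article). The scheduler places the pod according to its requests, while the kubelet enforces the limits at runtime:

```yaml
# Illustrative pod spec: requests guide scheduling decisions;
# limits cap what the container may actually consume.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

Setting requests well below limits risks over-subscription on a busy node; setting them equal trades utilization for predictability.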
Rancher ships with two types of catalog items for deploying applications: the Rancher certified catalog and the community catalog, which enables the community to contribute reusable, pre-built application stack templates.
One of the more interesting recent community catalog templates is the external load balancer for the AWS Classic Elastic Load Balancer, which keeps an existing load balancer updated with the EC2 instances running Rancher services that have one or more exposed ports and a specific label.
This blog post will explain how to set up a Classic ELB, then walk through launching the ELB template from the community catalog to keep the Classic ELB updated automatically.
Hello, I’m Alena Prokharchyk, one of the developers here at Rancher. In my previous blog posts, I’ve covered various aspects of Service Discovery, a feature we use to discover and interconnect services of user applications deployed in Rancher. This discovery is done by services registering themselves dynamically with Rancher’s internal DNS so that other services in the system can discover them by fully qualified domain name (FQDN). Services can also be registered with Rancher’s Load Balancer (LB) service, which balances traffic across all of a service’s containers.
Even though this covers many use cases, one major piece was lacking: making applications discoverable to the outside world, that is, to applications (or users) that don’t have access to Rancher’s internal DNS. I’m excited to let you know we’ve implemented a new solution that integrates our DNS service with Amazon Route53, which is now available as of Rancher 0.44. In this post I’ll describe its setup and implementation details.
Hi everyone, my name is Alena Prokharchyk, part of the engineering team here at Rancher, and I still love working on container infrastructure. A few months ago I wrote an article introducing Docker load balancing in Rancher. Today, I want to focus on how we’ve built a brand new service discovery capability into Rancher, as well as how we’ve integrated it with load balancing.
If you’re not familiar with service discovery, it is a networking capability that allows groups of devices (or in our case containers) to be identified by a common name and discovered by other services on the network. In Rancher we enable this using our container network and DNS management services. We have also integrated it with our Load Balancer solution to make it simple to deploy services based on Docker images, define how they can discover one another, and allow load balancing to route traffic to specific services. In today’s post I’m going to walk through this new feature and give you an overview of how to get started using it.
So let’s start simple and build on the use case from my previous post, putting the load balancer in front of an nginx server, but this time we are going to run both nginx and our load balancer as services within Rancher.
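As a rough sketch of what such a setup might look like in compose form (the image name, label conventions, and exact syntax varied across Rancher versions, so treat this as illustrative only, not as the post's actual configuration):

```yaml
# Hypothetical compose-style definition for the nginx + LB setup.
# The "lb" service discovers the "web" service by name over the
# container network; both run as regular Rancher services.
web:
  image: nginx
lb:
  image: rancher/load-balancer-service
  ports:
    - "80:80"    # public port : target port on the linked service
  links:
    - web
```

The point of the example is the link by name: the load balancer never needs the IP addresses of the nginx containers, because service discovery resolves "web" to whatever containers back that service at the moment.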