
Load-Balancing in Kubernetes

August 14, 2017

Kubernetes is the container orchestration system of choice for many enterprise deployments. That’s a tribute to its reliability, flexibility, and broad range of features. In this post, we’re going to take a closer look at how Kubernetes handles a very common and very necessary job: load balancing. Load balancing is a relatively straightforward task in many non-container environments (i.e., balancing between servers), but it involves a bit of special handling when it comes to containers.


Managing Containers

To understand Kubernetes load balancing, you first have to understand how Kubernetes organizes containers.

Since containers typically perform specific services or sets of services, it makes sense to look at them in terms of the services they provide, rather than individual instances of a service (i.e., a single container). In essence, this is what Kubernetes does.

Placing Them in Pods

In Kubernetes, the pod serves as a kind of basic, functional unit. A pod is a set of containers, along with their shared volumes. The containers are generally closely related in terms of function and services provided.
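As a minimal sketch (the names, images, and paths here are hypothetical illustrations, not anything prescribed by Kubernetes), a pod with two closely related containers and a shared volume might look like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-logger
    spec:
      containers:
        - name: web
          image: nginx:1.13
          volumeMounts:
            - name: logs                  # both containers mount the same volume
              mountPath: /var/log/nginx
        - name: log-shipper
          image: busybox:1.27
          command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
          volumeMounts:
            - name: logs
              mountPath: /logs
      volumes:
        - name: logs
          emptyDir: {}                    # pod-scoped scratch volume shared by both containers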

Pods that have the same set of functions are abstracted into sets, called services. It is these services that the client of a Kubernetes-based application accesses. The service stands in for the individual pods, which in turn manage access to the containers that make them up, leaving the client insulated from the containers themselves.
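As a hedged sketch of that abstraction (the service name, label, and ports below are hypothetical), a service selects its pods by label and presents a single stable address:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web            # the service fronts every pod labeled app=web
      ports:
        - port: 80          # the port clients use
          targetPort: 8080  # the port the pods' containers listen on

Clients talk to the service's address, and Kubernetes forwards each connection to one of the matching pods.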

Read more


Configuring Kubernetes for Maximum Scalability

August 9, 2017

Kubernetes is designed to address some of the difficulties that are inherent in managing large-scale containerized environments. However, this doesn’t mean Kubernetes can scale in all situations all on its own. There are steps you can and should take to maximize Kubernetes’ ability to scale—and there are important caveats and limitations to keep in mind when scaling Kubernetes. I’ll explain them in this article.

Scale versus Performance

The first thing to understand about scaling a Kubernetes cluster is that there is a tradeoff between scale and performance. For example, Kubernetes 1.6 is designed for use in clusters with up to 5,000 nodes. But 5,000 nodes is not a hard limit; it is merely the recommended maximum. It is possible to exceed that limit substantially, but performance begins to drop off as you do.

More specifically, Kubernetes defines two service level objectives: returning 99% of all API calls in less than one second, and starting 99% of pods in less than five seconds. These objectives are not a comprehensive set of performance metrics, but they provide a good baseline for evaluating general cluster performance. According to the Kubernetes project, clusters with more than 5,000 nodes may not be able to achieve these service level objectives.

So, keep in mind that beyond a certain point, you may have to sacrifice performance in order to gain scalability in Kubernetes. Maybe this sacrifice is worth it to you, and maybe it’s not, depending on your deployment scenario.


Quotas

One of the main issues you are likely to encounter when setting up a really large Kubernetes cluster is quota limitations. This is especially true for cloud-based nodes, since cloud service providers commonly impose quota limits.

Read more


Comparing Kubernetes and Docker Swarm

August 7, 2017

For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge.  The decision affects not only deployment and management, but how applications are architected as well.  DevOps teams need to think about details like how data is persisted, how containerized services communicate with one another, load balancing, service discovery, packaging and more.  It turns out that the choice of orchestration engine is critical to all these areas.

While Rancher has the nice property that it can support multiple orchestration engines concurrently, choosing the right solution is still important.  Rather than attempting to boil the ocean by looking at many orchestrators, we chose to look at two likely to be on the short list for most organizations – Kubernetes and Docker Swarm.

Evolving at a rapid clip

To say these frameworks are evolving quickly is an understatement.  In just the past year there have been four major releases of Docker (1.12, 1.13, 17.03 and 17.06), with dozens of new features and a wholesale change to the Swarm architecture.  Kubernetes has been evolving at an even more frenetic pace.  Since Kubernetes 1.3 was introduced in July of 2016, there have been four additional major releases and no fewer than a dozen minor releases.  Kubernetes is at version 1.7.2 at the time of this writing, with 1.8.0 now in alpha 2.  Check out the Kubernetes changelog to get a sense of the pace of development.

Comparing Kubernetes and Docker Swarm is a little like trying to compare two rocket ships speeding along on separate trajectories.  By the time you catch up with one and get close enough to see what’s happening, the other is in a whole different place!

Points of comparison

Despite the challenges posed by their rapid evolution, we decided to take a crack at comparing Swarm and Kubernetes in some detail, taking a fresh look at new capabilities in each solution.  At a high level, the two solutions do broadly similar things, but they differ substantially in their implementation.  We took both solutions out for a test drive (using Rancher running in AWS), got into the weeds, and compared them systematically in these areas:

  • Architecture
  • User experience
  • Ease-of-use
  • Networking model
  • Storage management
  • Scheduling
  • Service discovery
  • Load balancing
  • Healthchecks
  • Scalability

Lots for DevOps teams to ponder

Both Swarm and Kubernetes are impressive, capable solutions.  Depending on their needs, organizations could reasonably choose either solution.  If you are new to one solution or the other, understanding the strengths and weaknesses of different solutions, and differences in how they are implemented, can help you make a more informed decision.

Swarm is impressive for its simplicity and seamless integration with Docker.  For those experienced with Docker, moving to Swarm is simple.  Swarm’s new DAB format for multi-host, multi-service applications extends naturally from docker-compose, and the Swarm command set is now part of Docker Engine, so administrators face a minimal learning curve.

Customers considering larger, more complex deployments will want to look at Kubernetes.  Docker users will need to invest a little more time to get familiar with Kubernetes, but even if you don’t use all the features out of the gate, the features are there for good reason.  Kubernetes has its own command set, API, and architecture, all distinct from Docker’s.  For Kubernetes, the watchword is flexibility.  Kubernetes is extensible and configurable and can be deployed in a variety of ways.  It introduces concepts like Pods, Replica Sets and Stateful Sets not found in Swarm, along with features like autoscaling.  While Kubernetes is a little more complex to learn and master, for users with more sophisticated requirements it has the potential to simplify management by reducing the need for ongoing manual intervention.
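To give a feel for what autoscaling looks like in practice, here is a minimal sketch of a HorizontalPodAutoscaler using the autoscaling/v1 API from this era; the deployment name and thresholds are hypothetical:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1           # the Deployment API group at the time of writing
        kind: Deployment
        name: web                          # hypothetical deployment to scale
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%

With an object like this in place, Kubernetes adjusts the replica count on its own, which is exactly the kind of ongoing manual intervention that would otherwise fall to an administrator.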

About the whitepaper

Our comparison was done using Rancher’s container management framework to deploy separate environments for Docker Swarm and Kubernetes.  Rather than focus on Rancher however, comparisons are made at the level of Swarm and Kubernetes themselves.  Whether you are using Rancher or a different container management framework, the observations should still be useful.

Included in the paper are:

  • Detailed comparisons between Kubernetes and Swarm
  • Considerations when deploying both orchestrators in Rancher
  • Considerations for application designers
  • High-level guidance on which orchestrator to consider, and when

Download the free whitepaper for an up-to-date look at Kubernetes and Docker Swarm.

As always, we appreciate your thoughts and feedback!


What App Developers Should Know About Kubernetes Networking

July 17, 2017

In the world of containers, Kubernetes has become the community standard for container orchestration and management. But there are some networking basics to consider as you build applications, so that you can take full advantage of multi-cloud capabilities.


The Basics of Kubernetes Networking: Pods

The basic unit of management inside Kubernetes is not a container; it is called a pod. A pod is simply one or more containers that are deployed as a unit. Often, they form a single functional endpoint used as part of a service offering.

Two examples of valid pods are:

  • Database pod: a single MySQL container
  • Web pod: an instance of Python in one container and Redis in a second container

Useful things to know about pods:

  • They share resources, including the network stack and namespace.
  • A pod is assigned a single IP address, which clients connect to.
  • The pod configuration defines any public ports and which container hosts each port.
  • All containers within a pod can interact with one another over the network. They address each other as localhost, so be sure that every service in the pod listens on a unique port. (See the example pod below.)
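To make this concrete, here is a minimal sketch of the “web pod” described above; the image versions, command, and port numbers are hypothetical choices:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: app
          image: python:3.6
          command: ["python", "-m", "http.server", "8000"]
          ports:
            - containerPort: 8000   # must be unique within the pod
        - name: cache
          image: redis:3.2
          ports:
            - containerPort: 6379   # the app reaches Redis at localhost:6379

Because both containers share the pod’s network stack, the Python app talks to Redis over localhost rather than over a cluster IP.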


Kubernetes Services

A Kubernetes service is a set of identical pods managed behind a load balancer. Clients connect to the IP of the load balancer instead of the individual IPs of each pod. Defining your application as a service allows Kubernetes to scale the number of pods based on the rules you define and the available resources.

Defining an application as part of a service is also the only way to make it available to clients outside of the Kubernetes infrastructure. Even if you never scale past a single pod, a service is the avenue for having an external IP address assigned.
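As a hedged illustration, a service of type LoadBalancer is one way to get that external IP on cloud providers; the name, selector, and ports here are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer    # asks the cloud provider to provision an external IP
      selector:
        app: web            # routes traffic to pods labeled app=web
      ports:
        - port: 80          # port exposed to external clients
          targetPort: 8000  # port the pod's containers listen on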

Read more


RBAC, Kubernetes, and Rancher

July 10, 2017

With Kubernetes 1.6 came the beta release of the role-based access control (RBAC) feature. This feature allows admins to create policies defining which users can access which resources within a Kubernetes cluster or namespace. Kubernetes itself does not have a native concept of users and instead delegates to an external authentication system. As of version 1.6.3, Rancher integrates with this process to allow any authentication system supported by Rancher, such as GitHub, LDAP, or Azure AD, to be used for Kubernetes RBAC policies. In short, Rancher v1.6.3 picks up where Kubernetes RBAC leaves off, giving teams a dead-easy mechanism for authenticating users across their Kubernetes clusters.

To see how one might use this feature, let’s consider a company that has a development team and a QA team. Let’s also suppose they’re using GitHub as the authentication method for their Rancher setup. In their Kubernetes environment they’ve created two namespaces, one for developers and one for QA. It’s now possible for admins to define rules such as the following:

  • Only members of the “developers” GitHub team should have read and write access to the “dev” namespace
  • Likewise, only members of the “QA” GitHub team should have read and write access to the “QA” namespace
  • Developers should also be able to view, but not edit, resources in the “QA” namespace
  • Users “user1” and “user2” should be able to read and write to both the “dev” and “QA” namespaces

Other Kubernetes features can be leveraged to more richly define multitenancy within a Rancher Kubernetes environment. With resource quotas, an admin can define limits on resources such as CPU and memory usage for a namespace. This is particularly important since namespaces within an environment run on the same set of hosts.
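As a sketch of how such limits are expressed, a ResourceQuota object is scoped to a namespace; the namespace and the specific numbers below are hypothetical:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-quota
      namespace: dev           # the developers' namespace from the example above
    spec:
      hard:
        requests.cpu: "4"      # total CPU all pods in the namespace may request
        requests.memory: 8Gi   # total memory requests allowed
        limits.cpu: "8"        # total CPU limits across the namespace
        limits.memory: 16Gi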

Role bindings tie a particular role to one or more users or groups. The preceding example made use of three built-in roles defined by Kubernetes:

  • view: Read-only access to most objects in a namespace
  • edit: Read and write access to most objects in a namespace
  • admin: Includes all permissions from the edit role and allows the creation of new roles and role bindings

These predefined roles enhance the usability of RBAC by allowing most users to reuse these roles in place of defining their own. We recommend using these roles for those who are less familiar with individual Kubernetes resources or for those who don’t have a need to define their own custom roles.
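For instance, a RoleBinding like the following sketch (using the rbac.authorization.k8s.io/v1beta1 API, which was beta as of Kubernetes 1.6) would grant the developers group read-only access to the QA namespace, as in the third rule above; the metadata names are hypothetical:

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: developers-view-qa
      namespace: qa                       # hypothetical namespace name
    subjects:
      - kind: Group
        name: developers                  # group resolved by the external auth system
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole                   # the built-in view role is a ClusterRole
      name: view
      apiGroup: rbac.authorization.k8s.io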

From our work with teams running Kubernetes in production, particularly those offering CaaS solutions across their organizations, we understand the need for straightforward RBAC, authentication, and authorization. More detailed information on this feature can be found in our docs. We always want feedback on what we’re building for Rancher and Kubernetes – for any questions, you can find me on Rancher Users Slack (@josh) or on Twitter (@joshwget).


Josh Curl is a software engineer at Rancher Labs. 


The Three Pillars of Kubernetes Container Orchestration

May 18, 2017

In Kubernetes, we often hear terms like resource management, scheduling and load balancing.  While Kubernetes offers many capabilities, understanding these concepts is key to appreciating how workloads are placed, managed and made resilient.  In this short article, I provide an overview of each facility, explain how they are implemented in Kubernetes, and how they interact with one another to provide efficient management of containerized workloads.  If you’re new to Kubernetes and seeking to learn the space, please consider reading our case for Kubernetes article.

Resource Management

Resource management is all about the efficient allocation of infrastructure resources.  In Kubernetes, resources are things that can be requested by, allocated to, or consumed by a container or pod.  Having a common resource management model is essential, since many components in Kubernetes need to be resource-aware, including the scheduler, load balancers, worker-pool managers and even applications themselves.  If resources are underutilized, the result is waste and cost-inefficiency.  If resources are over-subscribed, the result can be application failures, downtime, or missed SLAs.
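As a hedged example (the container image and the specific quantities are hypothetical), requests and limits are declared per container in the pod spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: app
          image: nginx:1.13
          resources:
            requests:            # the scheduler reserves this much for the container
              cpu: 250m          # a quarter of a CPU core
              memory: 256Mi
            limits:              # hard caps enforced at runtime
              cpu: 500m
              memory: 512Mi

The scheduler will only place the pod on a node with enough unreserved capacity to satisfy its requests, which is how the resource model feeds directly into scheduling.

Read more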