In Kubernetes, we often hear terms like resource management, scheduling and load balancing. While Kubernetes offers many capabilities, understanding these concepts is key to appreciating how workloads are placed, managed and made resilient. In this short article, I provide an overview of each facility, explain how it is implemented in Kubernetes, and show how they interact with one another to manage containerized workloads efficiently. If you’re new to Kubernetes and want to learn more about the space, please consider reading our case for Kubernetes article.
Resource management is all about the efficient allocation of infrastructure resources. In Kubernetes, resources are things that can be requested by, allocated to, or consumed by a container or pod. Having a common resource management model is essential, since many components in Kubernetes need to be resource-aware, including the scheduler, load balancers, worker-pool managers and even applications themselves. If resources are underutilized, this translates into waste and cost-inefficiency. If resources are over-subscribed, the result can be application failures, downtime, or missed SLAs. Read more
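To make the request/allocation model concrete, here is a minimal sketch of the `resources` stanza of a pod manifest, expressed as a Python dict. The container name, image, and specific quantities are illustrative assumptions, not taken from the article:

```python
# A minimal sketch of a Kubernetes pod manifest with resource requests
# and limits, built as a Python dict. Requests inform scheduling;
# limits cap what the container may consume at runtime.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "app"},
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "nginx:1.25",
                "resources": {
                    # The scheduler only places the pod on a node with at
                    # least this much unreserved CPU and memory.
                    "requests": {"cpu": "250m", "memory": "128Mi"},
                    # The node enforces these ceilings while the pod runs.
                    "limits": {"cpu": "500m", "memory": "256Mi"},
                },
            }
        ]
    },
}

resources = pod_spec["spec"]["containers"][0]["resources"]
print(resources["requests"]["cpu"])  # 250m
```

Serialized to YAML, this is exactly the shape Kubernetes accepts; the gap between `requests` (what the scheduler reserves) and `limits` (what the runtime enforces) is where over-subscription and waste both live.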
Since Docker launched in 2013, it has brought a level of excitement and innovation to software development that’s contagious. It has rallied support from every corner—enterprises to startups, developers to IT folk, plus the open source community, ISVs, the biggest public cloud vendors, and every tool across the software stack. Since the launch of Docker, many major milestones have served to advance the container revolution. Let’s look at some of them.
Container Orchestration Options
Getting started with your first container is fairly simple. All it takes is your laptop and a Docker client. However, running a microservices app is a whole other beast. The most difficult part is creating, managing, and automating clusters of ephemeral containers.
The first major tool to address this challenge was Mesos with its Marathon orchestrator. Having powered distributed infrastructure even before Docker existed, Marathon runs production workloads at Twitter and at other large-scale web applications.
The next orchestration tool to gain prominence was Kubernetes. In fact, today, Kubernetes leads the pack of Docker orchestration tools because of how extensible it is. It supports a broad range of programming languages and infrastructure options, and enjoys tremendous support from the container ecosystem. It isolates the application layer from the infrastructure layer, thus enabling true portability across multiple cloud vendors and infrastructure setups. Read more
One of the first questions you are likely to come up against when deploying containers in production is the choice of orchestration framework. While it may not be the right solution for everyone, Kubernetes is a popular scheduler that enjoys strong industry support. In this short article, I’ll provide an overview of Kubernetes, explain how it is deployed with Rancher, and show some of the advantages of using Kubernetes for distributed multi-tier applications.
Kubernetes has an impressive heritage. Spun off as an open-source project in 2014, the technology on which Kubernetes is based (Google’s Borg system) has been managing containerized workloads at scale for over a decade. While it’s young as open-source projects go, the underlying architecture is mature and proven. The name Kubernetes derives from the Greek word for “helmsman” and is meant to be evocative of steering container-laden ships through choppy seas. I won’t attempt to describe the architecture of Kubernetes here. There are already some excellent posts on this topic, including this informative article by Usman Ismail.
Like other orchestration solutions deployed with Rancher, Kubernetes deploys services composed of Docker containers. Kubernetes evolved independently of Docker, so for those familiar with Docker and docker-compose, the Kubernetes management model will take a little getting used to. Kubernetes clusters are managed via the kubectl CLI or the Kubernetes Web UI (referred to as the Dashboard). Applications and various services are defined to Kubernetes using JSON or YAML manifest files in a format that is different from docker-compose. To make it easy for people familiar with Docker to get started with Kubernetes, a kubectl primer provides Kubernetes equivalents for the most commonly used Docker commands.
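As a taste of what such a primer covers, here are a few commonly cited Docker-to-kubectl equivalents, collected in a Python dict (an illustrative subset only; the official primer covers many more, and some mappings are approximate since kubectl operates on pods rather than individual containers):

```python
# A small, illustrative mapping of everyday Docker commands to their
# rough kubectl counterparts. The correspondence is approximate:
# kubectl acts on pods and deployments, not bare containers.
docker_to_kubectl = {
    "docker run":  "kubectl run",
    "docker ps":   "kubectl get pods",
    "docker exec": "kubectl exec",
    "docker logs": "kubectl logs",
    "docker stop": "kubectl delete pod",
}

for docker_cmd, kube_cmd in sorted(docker_to_kubectl.items()):
    print(f"{docker_cmd:12} -> {kube_cmd}")
```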
A Primer on Kubernetes Concepts
Kubernetes involves some new concepts that at first glance may seem confusing, but for multi-tier applications, the Kubernetes management model is elegant and powerful. Read more
Modern microservices applications span multiple containers, and sometimes a single app may use thousands of containers. When operating at this scale, you need a container orchestration tool to manage all of those containers. Managing them by hand is simply not feasible.
This is where Kubernetes comes in. Kubernetes manages the Docker containers used to package applications at scale. Since its launch in 2014, Kubernetes has enjoyed widespread adoption within the container ecosystem. It is fast becoming the de facto tool for orchestrating containers at scale.
What are the reasons for the meteoric rise of Kubernetes, and what are the factors that will shape its future? Let’s take a look by examining the major milestones in Kubernetes’ history. Read more
If you’re going to successfully deploy containers in production, you need more than just container orchestration
Kubernetes is a valuable tool
Kubernetes is an open-source container orchestrator for deploying and managing containerized applications. Building on 15 years of experience running production workloads at Google, it provides the advantages inherent to containers, while enabling DevOps teams to build container-ready environments which are customized to their needs.
The Kubernetes architecture comprises loosely coupled components combined with a rich set of APIs, making Kubernetes well-suited for running highly distributed application architectures, including microservices, monolithic web applications and batch applications. In production, these applications typically span multiple containers across multiple server hosts, which are networked together to form a cluster.
Kubernetes provides the orchestration and management capabilities required to deploy containers for distributed application workloads. It enables users to build multi-container application services and schedule the containers across a cluster, as well as manage the health of the containers. Because these operational tasks are automated, DevOps teams can now do many of the same things that other application platforms enable them to do, but using containers.
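The scheduling and health-management capabilities described above come together in a Deployment. Here is a minimal sketch of one, expressed as a Python dict; the names, image, and probe path are illustrative assumptions:

```python
# A minimal sketch of a Kubernetes Deployment manifest as a Python dict.
# The replica count asks the scheduler to run three copies of the pod
# across the cluster; the liveness probe tells the kubelet to restart
# any container whose health check starts failing.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # scheduler places three pods across the cluster
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "nginx:1.25",
                    # health management: restart on repeated probe failure
                    "livenessProbe": {
                        "httpGet": {"path": "/healthz", "port": 80},
                        "periodSeconds": 10,
                    },
                }],
            },
        },
    },
}

print(deployment["spec"]["replicas"])  # 3
```

The point of the sketch is that the operational tasks in the paragraph above (placement, replication, restarts) are all declared once in the manifest, and Kubernetes continuously reconciles the cluster toward that declared state.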
But configuring and deploying Kubernetes can be hard
It’s commonly believed that Kubernetes is the key to successfully operationalizing containers at scale. This may be true if you are running a single Kubernetes cluster in the cloud or have reasonably homogenous infrastructure. However, many organizations have a diverse application portfolio and user requirements, and therefore have more expansive and diverse needs. Read more
2017 Predictions: Rapid Adoption and Innovation to Come
Rapid adoption of container orchestration frameworks
As more companies use containers in production, adoption of orchestration frameworks like Kubernetes, Mesos, Cattle and Docker Swarm will increase as well. These projects have evolved quickly in terms of stability, community and partner ecosystem, and will act as necessary and enabling technologies for enterprises using containers more widely in production. Read more