Tag: orchestration

Comparing Kubernetes and Docker Swarm

August 7, 2017

For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge.  The decision affects not only deployment and management, but also how applications are architected.  DevOps teams need to think about details such as data persistence, inter-service communication, load balancing, service discovery, packaging and more.  It turns out that the choice of orchestration engine is critical to all these areas.

While Rancher has the nice property that it can support multiple orchestration engines concurrently, choosing the right solution is still important.  Rather than attempting to boil the ocean by looking at many orchestrators, we chose to look at two likely to be on the short list for most organizations – Kubernetes and Docker Swarm.

Evolving at a rapid clip

To say these frameworks are evolving quickly is an understatement.  In just the past year there have been four major releases of Docker (1.12, 1.13, 17.03 and 17.06) with dozens of new features and a wholesale change to the Swarm architecture.  Kubernetes has been evolving at an even more frenetic pace.  Since Kubernetes 1.3 was introduced in July of 2016 there have been four additional major releases and no fewer than a dozen minor releases.  Kubernetes is at version 1.7.2 at the time of this writing, with 1.8.0 now in alpha 2.  Check out the Kubernetes changelog to get a sense of the pace of development.

Comparing Kubernetes and Docker Swarm is a little like trying to compare two rocket ships speeding along on separate trajectories.  By the time you catch up with one and get close enough to see what’s happening, the other is in a whole different place!

Points of comparison

Despite the challenges posed by their rapid evolution, we decided to take a crack at comparing Swarm and Kubernetes in some detail, taking a fresh look at new capabilities in each solution.  At a high-level the two solutions do broadly similar things, but they differ substantially in their implementation.  We took both solutions out for a test drive (using Rancher running in AWS), got into the weeds, and compared them systematically in these areas:

  • Architecture
  • User experience
  • Ease-of-use
  • Networking model
  • Storage management
  • Scheduling
  • Service discovery
  • Load balancing
  • Healthchecks
  • Scalability

Lots for DevOps teams to ponder

Both Swarm and Kubernetes are impressive, capable solutions.  Depending on their needs, organizations could reasonably choose either solution.  If you are new to one solution or the other, understanding the strengths and weaknesses of different solutions, and differences in how they are implemented, can help you make a more informed decision.

Swarm is impressive for its simplicity and seamless integration with Docker.  For those experienced with Docker, evolving to use Swarm is simple.  Swarm’s new DAB format for multi-host, multi-service applications extends naturally from docker-compose, and the Swarm command set is now part of Docker Engine, so administrators face a minimal learning curve.
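As a rough illustration of that extension path, here is a hedged sketch of the bundle-and-deploy workflow (the DAB tooling was experimental in Docker 1.12/1.13 and exact flags varied between releases; the file and stack names here are made up):

```shell
# Assumes a docker-compose.yml in the current directory and an initialized swarm.
# Note: "docker deploy" and DAB support required an experimental-mode daemon.
docker-compose bundle -o myapp.dab   # package the compose project as a DAB
docker deploy myapp                  # deploy the bundle as a stack
docker stack services myapp          # list the services the stack created
```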

Customers considering larger, more complex deployments will want to look at Kubernetes.  Docker users will need to invest a little more time to get familiar with Kubernetes, but even if you don’t use all the features out of the gate, the features are there for good reason.  Kubernetes has its own command set and API, and an architecture distinct from Docker’s.  For Kubernetes, the watchword is flexibility.  Kubernetes is extensible and configurable and can be deployed in a variety of ways.  It introduces concepts like Pods, ReplicaSets and StatefulSets not found in Swarm, along with features like autoscaling.  While Kubernetes is a little more complex to learn and master, for users with more sophisticated requirements it has the potential to simplify management by reducing the need for ongoing manual intervention.
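A hedged sketch of what those concepts look like in practice, using illustrative names (assumes a running cluster and a configured kubectl):

```shell
# Create a Deployment; Kubernetes manages a ReplicaSet of Pods behind it.
kubectl run web --image=nginx --replicas=3
# Attach a Horizontal Pod Autoscaler so the replica count tracks CPU load.
kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80
# Inspect the autoscaler's current state.
kubectl get hpa
```

Once the autoscaler is in place, scaling decisions happen without manual intervention, which is the kind of hands-off management the paragraph above refers to.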

About the whitepaper

Our comparison was done using Rancher’s container management framework to deploy separate environments for Docker Swarm and Kubernetes.  Rather than focus on Rancher, however, the comparisons are made at the level of Swarm and Kubernetes themselves.  Whether you are using Rancher or a different container management framework, the observations should still be useful.

Included in the paper are:

  • Detailed comparisons between Kubernetes and Swarm
  • Considerations when deploying both orchestrators in Rancher
  • Considerations for application designers
  • High-level guidance on which orchestrator to consider, and when

Download the free whitepaper for an up-to-date look at Kubernetes and Docker Swarm.

As always, we appreciate your thoughts and feedback!


The Case for Kubernetes

April 24, 2017

One of the first questions you are likely to come up against when deploying containers in production is the choice of orchestration framework.  While it may not be the right solution for everyone, Kubernetes is a popular scheduler that enjoys strong industry support.  In this short article, I’ll provide an overview of Kubernetes, explain how it is deployed with Rancher, and show some of the advantages of using Kubernetes for distributed multi-tier applications.

About Kubernetes

Kubernetes has an impressive heritage.  Spun off as an open-source project in 2015, the technology on which Kubernetes is based (Google’s Borg system) has been managing containerized workloads at scale for over a decade.  While it’s young as open-source projects go, the underlying architecture is mature and proven.  The name Kubernetes derives from the Greek word for “helmsman” and is meant to be evocative of steering container-laden ships through choppy seas.  I won’t attempt to describe the architecture of Kubernetes here.  There are already some excellent posts on this topic, including this informative article by Usman Ismail.

Like other orchestration solutions deployed with Rancher, Kubernetes deploys services comprised of Docker containers.  Kubernetes evolved independently of Docker, so for those familiar with Docker and docker-compose, the Kubernetes management model will take a little getting used to.  Kubernetes clusters are managed via the kubectl CLI or the Kubernetes Web UI (referred to as the Dashboard).  Applications and services are defined to Kubernetes using JSON or YAML manifest files in a format that is different from that of docker-compose.  To make it easy for people familiar with Docker to get started with Kubernetes, a kubectl primer provides Kubernetes equivalents for the most commonly used Docker commands.
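To make the difference concrete, here is a minimal, illustrative manifest applied with kubectl (the Pod and image names are placeholders; assumes a working cluster and a configured kubectl):

```shell
# Define and create a single-container Pod from an inline YAML manifest.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
# The rough kubectl analogue of "docker ps":
kubectl get pods
```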

A Primer on Kubernetes Concepts

Kubernetes involves some new concepts that at first glance may seem confusing, but for multi-tier applications, the Kubernetes management model is elegant and powerful. Read more


Hidden Dependencies with Microservices

December 15, 2016

One of the great things about microservices is that they allow engineering to decouple software development from application lifecycle. Every microservice:

  • can be written in its own language, be it Go, Java, or Python
  • can be contained and isolated from others
  • can be scaled horizontally across additional nodes and instances
  • is owned by a single team, rather than being a shared responsibility among many teams
  • communicates with other microservices through an API or a message bus
  • must support a common service level agreement to be consumed by other microservices, and conversely, to consume other microservices

These are all very cool features, and most of them help to decouple various software dependencies from each other.

But what about the operations point of view? While the cool aspects of microservices bulleted above are great for development teams, they pose some new challenges for DevOps teams. Namely:

Read more


DockerCon 2016: Where Docker-Native Orchestration Grows Up

June 21, 2016

We just came back from DockerCon 2016, the biggest and most exciting DockerCon yet. Rancher had a large and well-trafficked presence there – our developers even skipped attending breakout sessions in favor of staffing the booth, just to talk with all the people who were interested in Rancher. In only two days, over a thousand people stopped by to talk to us!

Rancher Labs at DockerCon 2016

Docker-Native Orchestration

Without a doubt, the biggest news out of DockerCon this year is the new built-in container orchestration capabilities in the upcoming Docker 1.12 release. With this capability, developers can now create a Swarm cluster with a simple command and will be able to deploy, manage, and scale services from application templates.
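For reference, the commands in question look roughly like this on a Docker 1.12 host (the address and service names are illustrative):

```shell
# Turn this Docker Engine into a swarm manager.
docker swarm init --advertise-addr 192.168.1.10
# Deploy a replicated service from an image.
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale it and inspect the result.
docker service scale web=5
docker service ls
```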

Docker Swarm Orchestration

Docker 1.12 Built-in Container Orchestration (Source: Docker Blog)

Multi-Framework Support

At Rancher Labs, we are committed to supporting multiple container orchestration frameworks. Modern DevOps practices encourage individual teams to have their choice of tools and frameworks, and as a result, large enterprise organizations often find it necessary to support multiple container orchestration engines. Goldman Sachs, for example, plans to use both Swarm and Kubernetes in their quest to migrate 90% of computing to containers.

Rancher is the only container management platform on the market today capable of supporting all leading container orchestration frameworks: Swarm, Kubernetes, and Mesos. 

Orchestration frameworks in Rancher

With the new built-in orchestration support coming in Docker 1.12, Swarm will continue to be an attractive choice for DevOps teams.

Docker-Native Orchestration Support Coming Soon in Rancher

We are very excited about the latest Docker-native container orchestration capabilities built into Docker 1.12 and the engineering team has already begun work to  integrate these capabilities into Rancher.  We expect a preview version of this integration in early July and can’t wait to show you what we’re doing to bring these amazing new capabilities to Rancher users. Stay tuned!


Docker Load Balancing Now Available in Rancher 0.16

April 21, 2015

Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I’m going to give an overview of a new feature I’ve been working on, which was released this week with Rancher 0.16 – a Docker Load Balancing service.

One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update and scale up an integrated load balancing service to meet their application needs, using either Rancher’s UI or API.  To implement our load balancing functionality we decided to use HAProxy, which is deployed as a container and managed by the Rancher orchestration functionality.

With Rancher’s Load Balancing capability, users are now able to use a consistent, portable load balancing service on any infrastructure where they can run Docker. Whether it is running in a public cloud, private cloud, lab, cluster, or even on a laptop, any container can be a target for the load balancer.

Read more


Darren Shepherd demonstrates RancherOS at San Francisco Docker Meetup

February 25, 2015

Thanks to Docker, Orange and Blumberg Capital for hosting a great meetup last night in San Francisco. Darren Shepherd, Chief Architect of Rancher Labs introduced RancherOS for the first time, and answered questions from the audience. Learn more about RancherOS, or download it from GitHub.

If you’d like to learn more, Darren will be presenting RancherOS at an online meetup on March 31st, 2015. 

Register Now

 

RancherOS Demo at Docker Meetup from Rancher Labs on Vimeo.