For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge. The decision affects not only deployment and management, but how applications are architected as well. DevOps teams need to think about details like how data is persisted, how containerized services communicate with one another, load balancing, service discovery, packaging and more. It turns out that the choice of orchestration engine is critical to all these areas.
While Rancher has the nice property that it can support multiple orchestration engines concurrently, choosing the right solution is still important. Rather than attempting to boil the ocean by looking at many orchestrators, we chose to look at two likely to be on the short list for most organizations – Kubernetes and Docker Swarm.
Evolving at a rapid clip
To say these frameworks are evolving quickly is an understatement. In just the past year there have been four major releases of Docker (1.12, 1.13, 17.03 and 17.06) with dozens of new features and a wholesale change to the Swarm architecture. Kubernetes has been evolving at an even more frenetic pace. Since Kubernetes 1.3 was introduced in July of 2016, there have been four additional major releases and no fewer than a dozen minor releases. Kubernetes is at version 1.7.2 at the time of this writing, with 1.8.0 now in alpha 2. Check out the Kubernetes changelog to get a sense of the pace of development.
Comparing Kubernetes and Docker Swarm is a little like trying to compare two rocket ships speeding along on separate trajectories. By the time you catch up with one and get close enough to see what’s happening, the other is in a whole different place!
Points of comparison
Despite the challenges posed by their rapid evolution, we decided to take a crack at comparing Swarm and Kubernetes in some detail, taking a fresh look at new capabilities in each solution. At a high level the two solutions do broadly similar things, but they differ substantially in their implementation. We took both solutions out for a test drive (using Rancher running in AWS), got into the weeds, and compared them systematically across several key areas.
Lots for DevOps teams to ponder
Both Swarm and Kubernetes are impressive, capable solutions. Depending on their needs, organizations could reasonably choose either solution. If you are new to one solution or the other, understanding the strengths and weaknesses of different solutions, and differences in how they are implemented, can help you make a more informed decision.
Swarm is impressive for its simplicity and seamless integration with Docker. For those experienced with Docker, evolving to use Swarm is simple. Swarm’s new DAB format for multi-host, multi-service applications extends naturally from docker-compose, and the Swarm command set is now part of Docker Engine, so administrators face a minimal learning curve.
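To make the docker-compose lineage concrete, the sketch below is a hypothetical two-service stack in the version 3 compose format that Swarm accepts directly (the service names, images, and replica count are examples, not anything prescribed by Swarm):

```yaml
# docker-compose.yml -- hypothetical stack: a replicated web tier plus a cache
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 3   # Swarm schedules three tasks across the cluster
  cache:
    image: redis:alpine
```

With a Swarm initialized via `docker swarm init`, this file deploys with `docker stack deploy -c docker-compose.yml mystack` — both commands ship as part of Docker Engine, which is why the learning curve for compose users is so shallow.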
Customers considering larger, more complex deployments will want to look at Kubernetes. Docker users will need to invest a little more time to get familiar with Kubernetes, but even if you don’t use all the features out of the gate, the features are there for good reason. Kubernetes has its own command set and API, and an architecture distinct from Docker’s. For Kubernetes, the watchword is flexibility. Kubernetes is extensible and configurable and can be deployed in a variety of ways. It introduces concepts like Pods, ReplicaSets and StatefulSets not found in Swarm, along with features like autoscaling. While Kubernetes is a little more complex to learn and master, for users with more sophisticated requirements it has the potential to simplify management by reducing the need for ongoing manual intervention.
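As a sketch of how those concepts fit together, the hypothetical manifest below declares a Deployment; Kubernetes then creates a ReplicaSet behind the scenes that keeps three Pods running. All names and the image are illustrative, and the apiVersion reflects the Kubernetes 1.7-era API:

```yaml
# deployment.yaml -- hypothetical Deployment; Kubernetes derives a ReplicaSet
# from it and reconciles toward three running Pods
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
```

After `kubectl apply -f deployment.yaml`, autoscaling can be layered on without touching the manifest, e.g. `kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80` — the kind of hands-off management Swarm has no direct equivalent for.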
About the whitepaper
Our comparison was done using Rancher’s container management framework to deploy separate environments for Docker Swarm and Kubernetes. Rather than focus on Rancher, however, comparisons are made at the level of Swarm and Kubernetes themselves. Whether you are using Rancher or a different container management framework, the observations should still be useful.
Included in the paper are:
Detailed comparisons between Kubernetes and Swarm
Considerations when deploying both orchestrators in Rancher
Considerations for application designers
High-level guidance on what orchestrator to consider when
Container security was initially a big obstacle to many organizations in adopting Docker. However, that has changed over the past year, as many open source projects, startups, cloud vendors, and even Docker itself have stepped up to the challenge by creating new solutions for hardening Docker environments. Today, there is a wide range of security tools that cater to every aspect of the container lifecycle.
Docker security tools fall into these categories:
Kernel security tools: These tools have their origins in the work of the open source Linux community. They have been inherited by container systems like Docker as foundational security tools at the kernel level.
Image scanning tools: Docker Hub is the most popular container registry, but there are many others, too. Most registries now have solutions for scanning container images for known vulnerabilities.
Orchestration security tools: Kubernetes and Docker Swarm are the two most popular orchestrators, and their security features have been gaining strength over the past year.
Network security tools: In a distributed system powered by containers, the network is more important than ever. Policy-based network security is gaining prominence over perimeter-based firewalls.
Security benchmark tools: The Center for Internet Security (CIS) has provided guidelines for container security, which have been adopted by Docker Bench and similar benchmark security tools.
Security with CaaS platforms: AWS ECS, GKE and other CaaS platforms build on the security features of their parent IaaS platform, and then add container-specific features or borrow security features from Docker or Kubernetes.
Purpose-built container security tools: This is the most advanced option for container security. In it, machine learning takes center stage as these tools look to build an intelligent solution to container security.
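As an example of the policy-based network security mentioned in the network security category above, Kubernetes offers the NetworkPolicy resource (promoted to a stable API in Kubernetes 1.7). The sketch below is a hypothetical policy; the labels and port are examples, not conventions:

```yaml
# network-policy.yaml -- hypothetical policy: only Pods labeled role=frontend
# may reach Pods labeled app=db, and only on TCP port 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
```

Unlike a perimeter firewall, a policy like this travels with the workload: it is enforced wherever the matching Pods are scheduled, which is what makes the policy-based model a better fit for distributed container networks.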
Here’s a cheatsheet of Docker security tools available as of mid-2017. It’s organized according to which part of the Docker stack the tool secures.
For any team using containers – whether in development, test, or production – an enterprise-grade registry is a non-negotiable requirement. JFrog Artifactory is much beloved by Java developers, and it’s easy to use as a Docker registry as well. To make it even easier, we’ve put together a short walkthrough for setting up Artifactory in Rancher.
Before you start
For this article, we’ve assumed that you already have a Rancher installation up and running (if not, check out our Quick Start guide), and will be working with either Artifactory Pro or Artifactory Enterprise.
Choosing the right version of Artifactory depends on your development needs. If your main development needs include building with Maven package types, then Artifactory open source may be suitable. However, if you build using Docker, Chef Cookbooks, NuGet, PyPI, RubyGems, and other package formats then you’ll want to consider Artifactory Pro. Moreover, if you have a globally distributed development team with HA and DR needs, you’ll want to consider Artifactory Enterprise. JFrog provides a detailed matrix with the differences between the versions of Artifactory.
There are several values you’ll need to select in order to set up Artifactory as a Docker registry, such as a public name or public port. In this article, we refer to them as variables; just substitute the values you choose for the variables throughout this post.
To deploy Artifactory, you’ll first need to create (or already have) a wildcard certificate imported into Rancher for “*.$public_name”. You’ll also need to create DNS entries pointing to the IP address of artifactory-lb, the load balancer for the Artifactory high availability architecture. Artifactory will be reached via $publish_schema://$public_name:$public_port, while the Docker registry will be reachable at $publish_schema://$docker_repo_name.$public_name:$public_port.
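To make the variable scheme concrete, here is one hypothetical set of values and the endpoints that result (every value below is an example, not a default):

```
$public_name      = example.com
$publish_schema   = https
$public_port      = 443
$docker_repo_name = docker-local

Artifactory UI:   https://example.com:443
Docker registry:  https://docker-local.example.com:443
```

With values like these, clients would authenticate to the registry with `docker login docker-local.example.com:443`, which is why the wildcard certificate for “*.$public_name” and the per-repository DNS entries need to be in place first.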
I am incredibly excited to be joining such a talented, diverse group at Rancher Labs as Vice President of Business Development. In this role, I’ll be building upon my experience of developing foundational and strategic relationships based on open source technology. This change is motivated by my desire to go back to my roots, working with small, promising companies with passionate teams.
I joined Docker, Inc. in 2013, just as it started to bring containers out of the shadows and empower developers to write software with the tools of their choice, while redefining their relationship with infrastructure. Now that Docker is available in every cloud environment, embedded in developer tools, and integrated in development pipelines, the focus has shifted to making it more efficient and sustainable for business.
Each time a new software technology arrives on the scene, InfoSec teams can get a little anxious. And why shouldn’t they? Their job is to assess and mitigate risk – and new software introduces unknown variables that equate to additional risk for the enterprise. It’s a tough job to make judgments about new, evolving, and complex technologies; that these teams approach unknown, new technologies with skepticism should be appreciated.
This article is an appeal to the InfoSec people of the world to be optimistic when it comes to containers, as containers come with some inherent security advantages:
Containers may be super cool, but at the end of the day, they’re just another kind of infrastructure. A seasoned developer is probably already familiar with several other kinds of infrastructure and approaches to deploying applications. One more is not really that big a deal.
However, when the infrastructure creates new possibilities with the way an application is architected—as containers do—that’s a huge deal. That is why the services in a microservice application are far more important than the containerized infrastructure they run on.
Modularity has always been a goal of application architectures, and now that the concept of microservices is possible, how you build those services ends up dictating where they run and how they are deployed. Services are where application functionality meets the user, and where the value your application can provide is realized.
That’s why if you want to make the most of containers, you should be thinking about more than just containers. You have to focus on services, because they’re the really cool thing that containers enable.
Services v. Containers
For conversation’s sake, using services and containers interchangeably is fine—because the ideal use case for a containerized application is one that is deconstructed into services, where each service is deployed as a container (or containers).
However, the two cannot be treated as synonymous in practice. Services imply infrastructure but, more importantly, define an application architecture. When you talk about a service that is part of your application, that service is persistent. You can’t suddenly have an application without a login page or a shopping cart, for example, and expect things to go well.
Containers, on the other hand, are designed to live and die in very short time frames. Ideally, with every deployment or revert, the container is killed as soon as the new deployment is live and the traffic is routed to it. So containers are not persistent. And if the delivery chain is working correctly, that should not matter at all.
Microservices, as both an application and an infrastructure term, does have some unique elements associated with it, which push the two concepts even further apart.