For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge. The decision affects not only deployment and management, but also how applications are architected. DevOps teams need to think about details like how data is persisted, how containerized services communicate with one another, load balancing, service discovery, packaging, and more. The choice of orchestration engine is critical to all of these areas.
While Rancher has the nice property that it can support multiple orchestration engines concurrently, choosing the right solution is still important. Rather than attempting to boil the ocean by looking at many orchestrators, we chose to look at two likely to be on the short list for most organizations – Kubernetes and Docker Swarm.
One of the first questions you are likely to come up against when deploying containers in production is the choice of orchestration framework. While it may not be the right solution for everyone, Kubernetes is a popular scheduler that enjoys strong industry support. In this short article, I’ll provide an overview of Kubernetes, explain how it is deployed with Rancher, and show some of the advantages of using Kubernetes for distributed multi-tier applications.
Kubernetes has an impressive heritage. Spun off as an open-source project in 2015, the technology on which Kubernetes is based (Google’s Borg system) has been managing containerized workloads at scale for over a decade. While it’s young as open-source projects go, the underlying architecture is mature and proven. The name Kubernetes derives from the Greek word for “helmsman” and is meant to be evocative of steering container-laden ships through choppy seas. I won’t attempt to describe the architecture of Kubernetes here. There are already some excellent posts on this topic, including this informative article by Usman Ismail.
Like other orchestration solutions deployed with Rancher, Kubernetes deploys services composed of Docker containers. Kubernetes evolved independently of Docker, so for those familiar with Docker and docker-compose, the Kubernetes management model will take a little getting used to. Kubernetes clusters are managed via the kubectl CLI or the Kubernetes Web UI (referred to as the Dashboard). Applications and various services are defined to Kubernetes using JSON or YAML manifest files in a format that is different from docker-compose. To make it easy for people familiar with Docker to get started with Kubernetes, a kubectl primer provides Kubernetes equivalents for the most commonly used Docker commands.
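To give a feel for the manifest format, here is a minimal, hypothetical example: a replicated nginx web service defined the Kubernetes way rather than in docker-compose syntax. The names, labels, and image tag are placeholders, not taken from any particular Rancher deployment.

```yaml
# Hypothetical example: two replicas of an nginx container,
# plus a Service that load-balances traffic to them.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 2
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
```

A manifest like this would be submitted with `kubectl create -f web.yaml`, roughly where a docker-compose user would reach for `docker-compose up`.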
A Primer on Kubernetes Concepts
Kubernetes involves some new concepts that at first glance may seem confusing, but for multi-tier applications, the Kubernetes management model is elegant and powerful.
We just came back from DockerCon 2016, the biggest and most exciting DockerCon yet. Rancher had a large and well-trafficked presence there – our developers even skipped attending breakout sessions in favor of staffing the booth, just to talk with all the people who were interested in Rancher. In only two days, over a thousand people stopped by to talk to us!
Without a doubt, the biggest news out of DockerCon this year is the new built-in container orchestration capabilities in the upcoming Docker 1.12 release. With this capability, developers can now create a Swarm cluster with a simple command and will be able to deploy, manage, and scale services from application templates.
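As a sketch of the new workflow (commands as documented for the Docker 1.12 release candidates; the service name and image here are placeholders):

```shell
# Turn the current Docker engine into a Swarm manager (Docker 1.12+).
docker swarm init

# Deploy a replicated service from an image.
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale the service without redeploying it.
docker service scale web=5
```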
Rancher is the only container management platform on the market today capable of supporting all leading container orchestration frameworks: Swarm, Kubernetes, and Mesos.
With the new built-in orchestration support coming in Docker 1.12, Swarm will continue to be an attractive choice for DevOps teams.
Docker-Native Orchestration Support Coming Soon in Rancher
We are very excited about the latest Docker-native container orchestration capabilities built into Docker 1.12 and the engineering team has already begun work to integrate these capabilities into Rancher. We expect a preview version of this integration in early July and can’t wait to show you what we’re doing to bring these amazing new capabilities to Rancher users. Stay tuned!
Hello, my name is Alena Prokharchyk and I am a part of the software development team at Rancher Labs. In this article I’m going to give an overview of a new feature I’ve been working on, which was released this week with Rancher 0.16 – a Docker Load Balancing service.
One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update, and scale up an integrated load balancing service to meet their application needs, using either Rancher’s UI or API. To implement our load balancing functionality we decided to use HAProxy, which is deployed as a container and managed by the Rancher orchestration functionality.
With Rancher’s Load Balancing capability, users are now able to use a consistent, portable load balancing service on any infrastructure where they can run Docker. Whether it is running in a public cloud, private cloud, lab, cluster, or even on a laptop, any container can be a target for the load balancer.
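For a rough idea of what the managed HAProxy instance does under the hood, a configuration distributing HTTP traffic round-robin across two container targets might look like the following. The backend name, IPs, and ports are hypothetical; in practice Rancher generates and maintains the real configuration for you.

```
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend http-in
    bind *:80
    default_backend web_containers

backend web_containers
    balance roundrobin
    server web1 10.42.0.2:80 check
    server web2 10.42.0.3:80 check
```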
Thanks to Docker, Orange and Blumberg Capital for hosting a great meetup last night in San Francisco. Darren Shepherd, Chief Architect of Rancher Labs introduced RancherOS for the first time, and answered questions from the audience. Learn more about RancherOS, or download it from GitHub.
If you’d like to learn more, Darren will be presenting RancherOS at an online meetup on March 31st, 2015.