Tag: docker

Rancher Labs 2017 Predictions: Rapid Adoption and Innovation to Come

December 27, 2016

Rapid adoption of container orchestration frameworks

As more companies use containers in production, adoption of orchestration frameworks like Kubernetes, Mesos, Cattle and Docker Swarm will increase as well. These projects have evolved quickly in terms of stability, community and partner ecosystem, and will act as necessary and enabling technologies for enterprises using containers more widely in production.
Read more


Kubernetes, Mesos, and Swarm: Comparing the Rancher Orchestration Engine Options

October 20, 2016


Note: Since publishing this article, we’ve gotten requests for a downloadable version. You can request a copy here.

Recent versions of Rancher have added support for several common orchestration engines in addition to the standard Cattle. The three newly supported engines, Swarm (soon to be Docker Native Orchestration), Kubernetes, and Mesos, are the most widely used orchestration systems in the Docker community, and they offer a gradient of usability versus feature set. Although Docker is the de facto standard for containerization, there is no clear winner in the orchestration space. In this article, we go over the features and characteristics of the three systems and recommend use cases for which each may be suitable.

Docker Native Orchestration is fairly bare-bones at the moment but is gaining new features at a rapid clip. Since it is part of the official Docker system, it will be the default choice for many developers and hence will likely have good tooling and community support. Kubernetes is among the most widely used container orchestration systems today and has the support of Google. Lastly, Mesos with Marathon (Mesosphere’s open source orchestration framework) takes a much more compartmentalized approach to service management, where many features are left to independent plug-ins and applications. This makes it easier to customize a deployment, as individual parts can be swapped out or tailored. However, it also means more tinkering is required to get a working setup. Kubernetes, by contrast, is more opinionated about how to build clusters and ships with integrated systems for many common use cases.

Read more


5 Keys to Running Workloads Resiliently with Rancher and Docker – Part 2

September 14, 2016

In Part 1: Rancher Server HA, we looked at setting up Rancher Server in HA mode to secure it against failure. We now have a solid engineering foundation on which to iterate. So what next? In this installment, we’ll look at building better service resiliency with Rancher Health Checks and Load Balancing.

Since the Rancher documentation for Health Checks and Load Balancing is extremely detailed, Part 2 will focus on illustrating how they work, so we can become familiar with the nuances of running services in Rancher. A person tasked with supporting the system might have several questions. For example, how does Rancher know a container is down? How is this scenario different from a failed health check? What component is responsible for running the health checks? How does networking work with health checks?
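To make that concrete before diving in, the sketch below shows roughly what an HTTP health check looks like in a rancher-compose.yml from this era; the service name, port, and threshold values are illustrative assumptions, not values from the post:

    # rancher-compose.yml (illustrative sketch)
    web:
      scale: 2
      health_check:
        # Rancher probes this container port from the network agents...
        port: 80
        # ...with an HTTP request; omit request_line for a plain TCP check
        request_line: GET /healthcheck HTTP/1.0
        interval: 2000               # milliseconds between checks
        response_timeout: 2000       # milliseconds to wait for a reply
        healthy_threshold: 2         # successes before marked healthy
        unhealthy_threshold: 3       # failures before marked unhealthy

Read more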


5 Keys to Running Workloads Resiliently with Rancher and Docker – Part 1

August 4, 2016

Containers and orchestration frameworks like Rancher will soon put efficient cluster management within reach of every organization.

This brave new world frees operations from managing application configuration and allows development to focus on writing code: containers abstract away complex dependency requirements, enabling ops to deploy immutable containerized applications and giving devs a consistent runtime for their code.

If the benefits are so clear, why don’t companies with existing infrastructure practices switch? One of the key issues is risk: the risk of new unknowns brought by an untested technology, the risk of inexperience operating a new stack, and the risk of downtime impacting the brand.
Read more


Converting the Catalog Prometheus Template From Cattle to Kubernetes

July 13, 2016

Prometheus is a modern and popular monitoring and alerting system, built at SoundCloud starting in 2012 and later open sourced. It handles multi-dimensional time series data really well, and our friends at InfinityWorks have already developed a Rancher template to deploy Prometheus at the click of a button.

In hybrid cloud environments, it is likely that one might be using multiple orchestration engines, such as Kubernetes and Mesos, in which case it is helpful to have the stack or application portable across environments. In this short tutorial, we will convert the Prometheus template from Cattle format so that it works in a Kubernetes environment. It is assumed that the reader has a basic understanding of Kubernetes concepts such as pods, replication controllers (RCs), and services. If you need a refresher on the basic concepts, the Kubernetes 101 and concepts guides are excellent starting points.

Prometheus Cattle Template Components

If you look at the latest version of the Prometheus template here, you will notice two files (both sketched briefly after the list):

  • docker-compose.yml – defines the containers in Docker Compose format
  • rancher-compose.yml – adds Rancher-specific functionality for managing the container lifecycle
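To make the two-file split concrete, here is a minimal, illustrative pair (service name, image, and port are assumptions rather than the catalog’s actual contents), followed by the kind of Kubernetes objects it translates into, using the replication controller and service primitives of the era:

    # docker-compose.yml – what runs (illustrative sketch)
    prometheus:
      image: prom/prometheus:latest
      ports:
        - "9090:9090"

    # rancher-compose.yml – how Rancher manages it (illustrative sketch)
    prometheus:
      scale: 1

    # Roughly equivalent Kubernetes objects:
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: prometheus
    spec:
      replicas: 1                  # maps to the rancher-compose 'scale'
      selector:
        app: prometheus
      template:
        metadata:
          labels:
            app: prometheus
        spec:
          containers:
            - name: prometheus
              image: prom/prometheus:latest
              ports:
                - containerPort: 9090
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus
    spec:
      selector:
        app: prometheus
      ports:
        - port: 9090               # replaces the host port mapping above

Note how the two compose files describe a single service in two layers, while Kubernetes splits the same concerns into an RC (scale and scheduling) and a service (a stable endpoint).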

Below is a quick overview of each component’s role (as defined in docker-compose.yml):

Read more


Lessons Learned Building a Deployment Pipeline with Docker, Docker Compose and Rancher (Part 4)

May 22, 2016


In this post, we’ll discuss how we implemented Consul for service discovery with Rancher.

John Patterson (@cantrobot) and Chris Lunsford run This End Out, an operations and infrastructure services company. You can find them online at https://www.thisendout.com and follow them on Twitter at @thisendout.

If you haven’t already, please read the previous posts in this series:

Part 1: Getting started with CI/CD and Docker
Part 2: Moving to Compose blueprints
Part 3: Adding Rancher for Orchestration

In this final post of the series on building a deployment pipeline, we will explore some of the challenges we faced when transitioning to Rancher for cluster scheduling. In the previous article, we removed the operator from the process of choosing where a container would run by allowing Rancher to perform the scheduling. With this new scheme, we must address how the rest of our environment knows where the scheduler places these services and how they can be reached. We will also talk about manipulating the scheduler with labels to adjust where containers are placed and avoid port binding conflicts. Lastly, we will optimize our upgrade process by taking advantage of Rancher’s rollback capability.

Before the introduction of Rancher, our environment was fairly static. We always deployed containers to the same hosts, and deploying to a different host meant updating a few config files to reflect the new location. For example, if we were to add one additional instance of the ‘java-service-1’ application, we would also need to update the load balancer to point to the IP of the additional instance. Now that we employ a scheduler, we lose predictability over where our containers are deployed, so our environment configuration needs to become dynamic, adapting to changes automatically. To do this, we make use of service registration and discovery.
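Rancher’s scheduler is steered with labels in docker-compose.yml. As a hedged sketch of the placement hints mentioned above (the image name and host label are assumptions, not our actual configuration), this is roughly how a service like ‘java-service-1’ can be pinned to suitable hosts while keeping two instances from competing for the same published port:

    # docker-compose.yml (illustrative sketch)
    java-service-1:
      image: example/java-service-1:latest   # hypothetical image
      ports:
        - "8080:8080"
      labels:
        # only schedule on hosts carrying this label
        io.rancher.scheduler.affinity:host_label: service=java
        # never co-locate two instances of this service on one host,
        # avoiding a bind conflict on host port 8080
        io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}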

A service registry provides us with a single source of truth about where our applications live in the environment. Rather than hard-coding service locations, our applications can query the service registry through an API and automatically reconfigure themselves when the environment changes. Rancher provides service discovery out of the box using the Rancher DNS and metadata services (there is a good write-up on service discovery on the Rancher blog here). However, with a mix of Docker and non-Docker applications, we couldn’t rely purely on Rancher to handle service discovery. We needed an independent tool to track the locations of all our services, and Consul fit the bill.

We won’t detail how to set up Consul in your environment; however, we’ll briefly describe the way we use Consul at ABC Inc. In each environment, we have a Consul cluster deployed as containers. On each host in the environment, we deploy a Consul agent, and if the host is running Docker, we also deploy a registrator container. Registrator monitors each daemon’s Docker events API and automatically updates Consul during lifecycle events. For example, after a new container is deployed, registrator automatically registers the service in Consul; when the container is removed, registrator deregisters it.
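As a rough sketch of that layout (the image versions and the single-server Consul flags are simplifying assumptions, not our production settings), the two pieces can be expressed in compose format like this:

    # docker-compose.yml (illustrative sketch)
    consul:
      image: consul:0.7
      # single self-bootstrapping server; production would run 3+ servers
      command: agent -server -bootstrap -client=0.0.0.0
      ports:
        - "8500:8500"

    registrator:
      image: gliderlabs/registrator:latest
      # tell registrator where to register the services it observes
      command: consul://consul:8500
      volumes:
        # registrator watches the local Docker daemon's event stream here
        - /var/run/docker.sock:/tmp/docker.sock
      links:
        - consul

Read more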