Docker containers make app development easier, but deploying them in production can be hard.
Software developers typically focus on a single application, application stack, or workload that they need to run on a specific infrastructure. In production, however, a diverse set of applications runs on a variety of technologies (e.g. Java, LAMP, etc.), which need to be deployed on heterogeneous infrastructure running on-premises, in the cloud, or both. This gives rise to several challenges.
This post is the first in a series in which we’d like to share the story of how we implemented a container deployment workflow using Docker, Docker Compose and Rancher. Instead of just giving you the polished retrospective, though, we want to walk you through the evolution of the pipeline from the beginning, highlighting the pain points and decisions that were made along the way. Thankfully, there are many great resources to help you set up a continuous integration and deployment workflow with Docker. This is not one of them! A simple deployment workflow is relatively easy to set up. But our own experience has been that building a deployment system is complicated, mostly because the easy parts must be done alongside a legacy environment with many dependencies, while your dev team and ops organization change to support the new processes. Hopefully, our experience of building our pipeline the hard way will help you with the hard parts of building yours.
In this first post, we’ll go back to the beginning and look at the initial workflow we developed using just Docker. In future posts, we’ll progress through the introduction of Docker Compose and eventually Rancher into our workflow.
To set the stage, the following events all took place at a Software-as-a-Service provider where we worked on a long-term services engagement. For the purpose of this post, we’ll call the company Acme Business Company, Inc., or ABC. This project started while ABC was in the early stages of migrating its mostly Java microservices stack from on-premises bare metal servers to Docker deployments running in Amazon Web Services (AWS). The goals of the project were not unique: lower lead times on features and better reliability of deployed services.
The plan to get there was to make software deployment look something like this:
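At its simplest, that target workflow boils down to: build an image, push it to a registry, then pull and run it on the target host. The sketch below illustrates the idea; the service name, registry address, and host (`hello-service`, `registry.example.com`, `prod-host`) are hypothetical placeholders, not ABC’s actual setup:

```shell
#!/usr/bin/env sh
# Hypothetical build -> push -> pull -> run deployment flow.
# All names below are placeholders for illustration only.
APP="hello-service"
REGISTRY="registry.example.com"
TAG="v1"

deploy() {
  # Build the image from the service's Dockerfile and tag it for the registry.
  docker build -t "$REGISTRY/$APP:$TAG" .

  # Publish the image so any host can pull it.
  docker push "$REGISTRY/$APP:$TAG"

  # On the target host: pull the new image and replace the running container.
  ssh deploy@prod-host "
    docker pull $REGISTRY/$APP:$TAG &&
    (docker rm -f $APP 2>/dev/null || true) &&
    docker run -d --name $APP -p 8080:8080 $REGISTRY/$APP:$TAG"
}
```

Even a minimal flow like this already raises the questions the rest of the series deals with: who runs it, when, and how failures and rollbacks are handled.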
Over the last few months, our team at Rancher Labs has been adding support for Kubernetes within Rancher. We’ve implemented Kubernetes in a way that takes advantage of Rancher’s platform orchestration, simple UI, access control, networking, and storage capabilities to deliver easy-to-deploy Kubernetes clusters for managing applications. In our February meetup we introduced this new support, discussed how these environments compare with our traditional Docker environments, and explained when and how each can be used to deploy and manage container deployments.
This new functionality will be available in Rancher within two to three weeks, in March of 2016. We’ve uploaded a recording of the meetup below, and posted the slides to Slideshare.
So far in this series of articles we have looked at creating continuous integration pipelines using Jenkins and continuously deploying to integration environments. We have also looked at using Rancher Compose to run deployments, as well as Route53 integration for basic DNS management. Today we will cover production deployment strategies, and also circle back to DNS management to show how to run multi-region and/or multi-data-center deployments with automatic failover. We will also look at some rudimentary auto-scaling, so that we can respond automatically to request surges and scale back when the request rate drops again. If you’d like to read this entire series, we’ve made an eBook, “Continuous Integration and Deployment with Docker and Rancher,” available for download.
Rancher recently introduced the Rancher Catalog, an awesome feature that enables Rancher users to deploy common applications and complex services on their infrastructure with one click from catalog templates; Rancher takes care of creating and orchestrating the Docker containers for you.
The Rancher Catalog offers a wide variety of applications out of the box, including GlusterFS and Elasticsearch, and also supports private catalogs. Today I am going to introduce a new catalog template I developed for deploying a MongoDB replica set, and show you how I built it.
Over the last year we have written about getting several application stacks running on top of Docker, e.g. Magento, Jenkins, Prometheus, and so forth. However, containerized deployment is useful for more than just defining application stacks. In this series of articles we would like to cover an end-to-end development pipeline and discuss how to leverage Docker and Rancher in its various stages. Specifically, we’re going to cover: building code, running tests, packaging artifacts, continuous integration and deployment, and managing an application stack in production. You can also download the entire series as an eBook beginning today.
To kick things off, we start at the pipeline ingress, i.e., building source code. When a project starts, building and compilation are not a significant concern, as most languages and tools have well-defined, well-documented processes for compiling source code. However, as projects and teams scale and the number of dependencies increases, ensuring a consistent and stable build for all developers while maintaining code quality becomes a much bigger challenge. In this post we will cover some of the challenges around CI and testing, discuss best practices, and show how Docker can be used to implement them.
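One common way Docker helps here is by pinning the entire toolchain in a container image, so every developer and CI agent compiles with identical dependencies. The sketch below illustrates the idea for a Java/Maven project; the image tag, mount paths, and cache location are hypothetical, not taken from the original series:

```shell
#!/usr/bin/env sh
# Hypothetical containerized build: the toolchain lives in a pinned image,
# so the host only needs Docker installed -- no local JDK or Maven.
BUILD_IMAGE="maven:3.3-jdk-8"   # pin a specific toolchain version

containerized_build() {
  # Mount the source tree and a shared dependency cache into the container,
  # then run the exact same command the CI server would run.
  docker run --rm \
    -v "$PWD":/usr/src/app \
    -v "$HOME/.m2":/root/.m2 \
    -w /usr/src/app \
    "$BUILD_IMAGE" mvn -q clean package
}

# Invoked identically on a laptop and on a CI agent:
#   containerized_build
```

Because the image tag is pinned, “works on my machine” drift between developers and the build server largely disappears; upgrading the toolchain becomes a one-line change reviewed like any other.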