As you may have seen, Rancher recently announced our integration with docker-machine. This integration will allow users to spin up Rancher compute nodes across multiple cloud providers right from the Rancher UI. In our initial release, we supported Digital Ocean. Amazon EC2 is soon to follow and we’ll continue to add more cloud providers as interest dictates. We believe this feature will really help the Zero-to-Docker _(and Zero-to-Rancher)_ experience. But the feature itself is not the focus of this post.
In the first part of this post, I created a full Node.js application stack using MongoDB as the application’s database and Nginx as a load balancer distributing incoming requests to two Node.js application servers. I created the environment on Rancher using Docker containers. In this post I will go through setting up Rancher authentication with GitHub, and creating a webhook with GitHub for automatic deployments.

Rancher Access Control

Starting from version 0.
I’m not gonna tell you how to live your life—that’s for your doctor to do. What I am gonna tell you is how a beautifully poetic dynamic duo of DevOps delightfulness can make your next project shine brighter than the sun and give you more marketable skills. We live in a world where everything is becoming more modular. From your phone to your Keurig coffee maker to your USB type-C laptop setup, modularity allows you to do more and rearrange components of your life to best suit your needs.
So last week I finally got out of my “tech” comfort zone and tried to set up a Node.js application that uses a MongoDB database, and, to add an extra layer of fun, I used Rancher to set up the whole application stack using Docker containers. I designed a small application with Node whose only function is to count the number of hits on the website; you can find the code on GitHub.
Hussein Galal is a Linux System Administrator, with experience in Linux, Unix, Networking, and open source technologies like Nginx, Apache, PHP-FPM, Passenger, MySQL, LXC, and Docker. You can follow Hussein on Twitter @galal_hussein. I recently used Docker and Rancher to set up a Redis cluster on Digital Ocean. Redis clustering provides a way to shard data across multiple Redis instances: keys are distributed evenly across instances using hash slots. Redis clusters provide a number of nice features, such as data resharding and availability across instances.
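To make the hash-slot idea concrete, here is a minimal Python sketch of how Redis Cluster maps a key to one of its 16384 slots: it takes the CRC16 checksum of the key (the XMODEM variant) modulo 16384. This is an illustrative reimplementation, not code from the post.

```python
def crc16(data: bytes) -> int:
    """CRC16/XMODEM: polynomial 0x1021, initial value 0, MSB-first."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots."""
    return crc16(key.encode()) % 16384
```

Every node in the cluster owns a subset of the 16384 slots, so any client can compute locally which node should hold a given key.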
So far in this series of articles we have looked at creating continuous integration pipelines using Jenkins and continuously deploying to integration environments. We also looked at using Rancher Compose to run deployments, as well as Route53 integration to do basic DNS management. Today we will cover production deployment strategies and also circle back to DNS management to cover how we can run multi-region and/or multi-data-center deployments with automatic failover. We will also look at some rudimentary auto-scaling, so that we can automatically respond to request surges and scale back when the request rate drops again.
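At its most rudimentary, the auto-scaling decision above is just arithmetic: given the current request rate and roughly how much one instance can handle, compute how many instances you need, with a floor so you never scale to zero. The helper below is a hypothetical sketch of that decision, not Rancher's actual scaling logic.

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float,
                      minimum: int = 2) -> int:
    """Return the instance count needed for the current request rate,
    never dropping below a fixed minimum (for redundancy)."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, needed)
```

A scaling loop would periodically compare this target against the running count and scale the service up or down accordingly; adding hysteresis (e.g. only scaling down after several consecutive low readings) avoids thrashing around the threshold.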
I have blogged about monitoring Docker deployments a couple of times now (here & here); however, up to this point we have been monitoring container stats without looking at the bigger picture: how do these containers fit into a larger unit, and how do we get insight into the deployment as a whole rather than into individual containers? In this post I will cover leveraging Docker labels and Rancher’s projects and services support to provide monitoring information that understands the deployment structure.
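The core idea of label-aware monitoring can be sketched in a few lines: instead of reporting per-container stats, roll them up by a shared label so the numbers describe a service rather than a single container. The label key and the stats dictionary shape below are illustrative assumptions, not the actual format emitted by Docker or Rancher.

```python
from collections import defaultdict

def aggregate_by_label(container_stats: list, label_key: str) -> dict:
    """Sum per-container CPU usage into per-service totals, grouping
    containers by the value of a shared label."""
    totals = defaultdict(float)
    for stats in container_stats:
        service = stats["labels"].get(label_key, "unlabeled")
        totals[service] += stats["cpu_percent"]
    return dict(totals)

# Hypothetical sample: two containers behind one service, one database.
sample = [
    {"labels": {"service": "web"}, "cpu_percent": 10.0},
    {"labels": {"service": "web"}, "cpu_percent": 5.0},
    {"labels": {"service": "db"},  "cpu_percent": 20.0},
]
```

With this grouping, a dashboard can answer "how busy is the web tier?" instead of only "how busy is container a1b2c3?".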
This week we released Rancher 0.12, which adds support for provisioning hosts using Docker Machine. We’re really excited to get this feature out, because it makes launching Rancher-enabled Docker hosts easier than ever. If you’re not familiar with Docker Machine, it is a project that allows cloud providers to develop standard “drivers” for provisioning cloud infrastructure on the fly. You can learn more about it on the Docker website. The first cloud we’re supporting with Docker Machine is Digital Ocean.
Having a cool deployment system is pretty neat, but one thing every engineer learns one way or another is that manual processes aren’t processes, they’re chores. If you have to do something more than once, you should automate it if you can. Of course, if the task of automating the process takes longer than the total projected time you’ll spend executing the process, you shouldn’t automate it. XKCD 1205 - Is It Worth the Time?
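The break-even rule above is simple enough to state as code: automate when the time the task would otherwise consume exceeds the cost of automating it. A tiny illustrative helper (the numbers in the usage note are made up):

```python
def worth_automating(task_minutes: float,
                     projected_runs: int,
                     automation_minutes: float) -> bool:
    """True when the total time spent doing the task by hand would
    exceed the one-time cost of automating it."""
    return task_minutes * projected_runs > automation_minutes
```

For example, a 5-minute deploy you expect to run 100 times easily justifies 2 hours of scripting, while one you will run only 10 more times does not.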