Continental Innovates with Rancher and Kubernetes
Containers have become incredibly common in modern development workflows and production environments. But what exactly are they and why are they getting so much attention? In this article, we will talk about what containers are, how they differ from related technologies, and what primary advantages they provide for the individuals and teams who adopt them.
Containers generally deploy faster and perform better than virtual machines. Visit Rancher to explore five tips for making Docker technology faster.
Editor’s note: On June 2, 2020, Rancher Labs announced the general availability of Longhorn, an enterprise-grade, cloud-native container storage solution. Longhorn directly answers the need for an enterprise-grade, vendor-neutral persistent storage solution that supports the easy development of stateful applications within Kubernetes.
I’m super excited to unveil Project Longhorn, a new way to build distributed block storage for container and cloud deployment models. Following the principles of microservices, we have leveraged containers to build distributed block storage out of small independent components, and we use container orchestration to coordinate these components into a resilient distributed system.
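The core idea is that each replica is an independent component holding a copy of the volume's blocks: writes go to every replica, and a read succeeds as long as any replica is still healthy. A toy Python sketch of that principle (Longhorn's real data path is far more involved; the class and names here are purely illustrative):

```python
class ReplicatedVolume:
    """Toy model of microservices-style block storage: every write goes
    to all replicas; a read succeeds if any one replica is healthy."""

    def __init__(self, replica_count=3):
        # Each replica is an independent component mapping block -> data.
        self.replicas = [{} for _ in range(replica_count)]

    def write(self, block: int, data: bytes) -> None:
        for replica in self.replicas:
            replica[block] = data

    def read(self, block: int, failed: set = frozenset()) -> bytes:
        # Skip replicas the orchestrator has marked as failed.
        for i, replica in enumerate(self.replicas):
            if i not in failed and block in replica:
                return replica[block]
        raise IOError(f"block {block} unavailable on all replicas")

vol = ReplicatedVolume()
vol.write(0, b"hello")
data = vol.read(0, failed={0, 1})  # still readable with two replicas down
```

In the real system, container orchestration handles what the `failed` set hand-waves here: detecting dead replicas and rebuilding them on healthy hosts.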
As a relatively new technology, Docker containers may seem like a risk when it comes to security -- and it’s true that, in some ways, Docker creates new security challenges. But if implemented in a secure way, containers can actually help to make your entire environment more secure overall than it would be if you stuck with legacy infrastructure technologies. This article builds on existing container security resources, like Security for your Container, to explain how a secured containerized environment can harden your entire infrastructure against attack.
Modern microservices applications span multiple containers, and sometimes a single app may use thousands of containers. When operating at this scale, you need a container orchestration tool to manage all of those containers. Managing them by hand is simply not feasible. This is where Kubernetes comes in. Kubernetes manages Docker containers that are used to package applications at scale. Since its launch in 2014, Kubernetes has enjoyed widespread adoption within the container ecosystem.
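At its heart, an orchestrator like Kubernetes runs a reconciliation loop: it compares the desired number of container replicas against what is actually running, then starts or stops containers to close the gap. A minimal conceptual sketch in Python (the function and names are illustrative, not any real Kubernetes API):

```python
def reconcile(desired: int, running: list) -> tuple:
    """Return (containers to start, containers to stop) so that the
    running set converges on the desired replica count."""
    if len(running) < desired:
        # Too few replicas: schedule new ones to close the gap.
        to_start = [f"replica-{i}" for i in range(len(running), desired)]
        return to_start, []
    # Too many replicas: stop the surplus.
    return [], running[desired:]

start, stop = reconcile(3, ["replica-0"])
# start == ["replica-1", "replica-2"], stop == []
```

Running this loop continuously is what makes managing thousands of containers tractable: operators declare the desired state once, and the system converges on it without anyone starting containers by hand.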
In the third part of this series on data resiliency, we delve into the various ways data can be managed on Rancher (you can catch up on Part 1 and Part 2 here). We left off last time after setting up load balancers, health checks, and multi-container applications for our WordPress setup. Our containers spin up and down in response to health checks, and we can run the same code in production that works on our desktops.
In Part 1: Rancher Server HA, we looked into setting up Rancher Server in HA mode to secure it against failure. Our system now has a solid foundation we can iterate on. So what now? In this installment, we’ll look towards building better service resiliency with Rancher Health Checks and Load Balancing. Since the Rancher documentation for Health Checks and Load Balancing is extremely detailed, Part 2 will focus on illustrating how they work, so we can become familiar with the nuances of running services in Rancher.
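Conceptually, a health check probes each service instance on an interval and only changes its state after several consecutive successes or failures, so a single flaky probe doesn't bounce a target in and out of the load balancer's rotation. A hedged Python sketch of that threshold logic (the class and threshold values are illustrative, not Rancher's actual implementation):

```python
class HealthCheck:
    """Track consecutive probe results and flip a target's state only
    after crossing a healthy or unhealthy threshold."""

    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.successes = 0
        self.failures = 0
        self.healthy = True  # assume healthy until probes say otherwise

    def record(self, probe_ok: bool) -> bool:
        if probe_ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.healthy_threshold:
                self.healthy = True
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.unhealthy_threshold:
                self.healthy = False
        return self.healthy

check = HealthCheck()
for ok in [True, False, False, False]:
    state = check.record(ok)
# After three consecutive failures the target is marked unhealthy,
# and the load balancer would stop routing traffic to it.
```

The thresholds are the interesting knob: raising them trades slower failover for fewer false positives.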
Containers and orchestration frameworks like Rancher will soon allow every organization to have access to efficient cluster management. This brave new world frees operations from managing application configuration and allows development to focus on writing code; containers abstract complex dependency requirements, which enables ops to deploy immutable containerized applications and allows devs a consistent runtime for their code. If the benefits are so clear, then why do companies with existing infrastructure practices not switch?
Rancher is a complete container management solution, and to be a complete platform, we’ve given careful consideration to how we handle networking between containers. So today, we’re posting a quick example to illustrate how networking in Rancher works. While Rancher can be deployed on a single node or scaled to thousands of nodes, in this walkthrough we’ll use just a handful of hosts and containers.
Setting up and Launching a Containerized Application: our first task is to set up our infrastructure, and for this exercise, we’ll use AWS.
Today we achieved a major milestone by shipping Rancher 1.0, our first generally available release. After more than one and a half years of development, Rancher has reached the quality and feature completeness required for production deployment. We first unveiled a preview of Rancher to the world at the November 2014 Amazon re:Invent conference. We followed that with a beta release in June 2015. I’d like to congratulate the entire Rancher development team for this achievement.
Recently Rancher introduced the Rancher Catalog, an awesome feature that lets Rancher users deploy common applications and complex services to their infrastructure with one click from catalog templates; Rancher takes care of creating and orchestrating the Docker containers for you. The out-of-the-box catalog offers a wide variety of applications, including GlusterFS and Elasticsearch, and private catalogs are supported as well. Today I am going to introduce a new catalog template I developed for deploying a MongoDB replica set, and show you how I built it.
Last month we introduced a new application catalog in the latest versions of Rancher. The Rancher Catalog provides an easy-to-use interface that simplifies deploying Docker-based applications. Using a catalog entry, it becomes simple to deploy complex applications such as Elasticsearch, Jenkins, and Hadoop, as well as tools like etcd and ZooKeeper, storage services like GlusterFS, and databases like MongoDB. Already, companies like Sysdig have provided easy-to-use templates for deploying their services using Docker.
Meetup screenshot: Bill Maxwell demonstrates Sysdig monitoring his Rancher environment.
Yesterday we hosted an online meetup with the team from Sysdig, in which we discussed best practices for Docker monitoring and some of the unique challenges of applying monitoring policies to containers. Over the course of the meetup, we introduced Rancher and Sysdig, and demonstrated how we’re using Sysdig here at Rancher to manage our containers. The meetup included a number of presentations, and we’ve included the agenda below, along with direct links to each portion of the meetup if you’d like to jump ahead.
Last week Ivan Mikushin discussed adding system services to RancherOS using Docker Compose. Today I want to show you an example of how to deploy Linux Dash as a system service. Linux Dash is a simple, low-overhead, web-based monitoring tool for Linux; you can read more about Linux Dash here. In this post I will add Linux Dash as a system service to RancherOS version 0.3.0, which lets users add system services using the rancherctl command.
Hi, I’m Craig Jellick, an engineer here at Rancher Labs, and I wanted to walk you through a new set of features that we recently added to Rancher as we prepared for beta. Internally, we call it our “Native Docker Management” functionality, and it is central to our mission here at Rancher. When we built Rancher, we explicitly didn’t want to wrap Docker’s APIs with a new management layer. A number of existing tools already take that approach, and while it is an effective way of building a controlled system, we really loved the experience of using the Docker CLI and API, and were sure that it would just keep getting better over time.
In my last post I showed you how to deploy a highly available WordPress installation using Rancher Services, a Gluster cluster for distributed storage, and a database cluster based on Percona XtraDB Cluster. Now I’m going one step further: we’ll set up the Gluster and PXC clusters using Rancher Services too, taking advantage of new service features available in the beta Rancher release, such as DNS service discovery and label scheduling.
Rancher co-founder Shannon Williams provides a quick video overview on how to get started with Rancher.
Getting Started with Rancher from Rancher Labs
Since I started playing with Docker, I have thought that its network implementation would need to improve before I could really use it in production. It is based on container links and service discovery, but it only works for host-local containers. This creates issues for a few use cases, for example when you are setting up services that need advanced network features like broadcasting/multicasting for clustering.
Over the last few months our team, with the help of Daniel Walsh (@rhatdan) from Red Hat and many other community members, has worked to add support for labels in Docker 1.6. Labels let users attach arbitrary key-value metadata to Docker images and containers. This feature, while very simple in concept, gives us the opportunity to add many powerful features to Rancher, and will benefit everyone in the Docker ecosystem.
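Because labels are just key-value metadata, a scheduler or management layer can filter containers by matching a selector against them. A small Python sketch of that matching idea (the label keys and container records here are made up for illustration; this is not Docker's API):

```python
def match_labels(containers, selector):
    """Return containers whose labels satisfy every key=value pair in
    selector -- the same matching idea a scheduler uses for constraints."""
    return [
        c for c in containers
        if all(c.get("labels", {}).get(k) == v for k, v in selector.items())
    ]

containers = [
    {"name": "web-1", "labels": {"disk": "ssd", "tier": "web"}},
    {"name": "db-1", "labels": {"disk": "hdd", "tier": "db"}},
]
ssd_only = match_labels(containers, {"disk": "ssd"})
# ssd_only contains only the "web-1" container
```

The power comes from the fact that Docker itself attaches no meaning to the keys: each tool layered on top can define its own label vocabulary without coordinating with anyone else.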
As you may have seen, Rancher recently announced our integration with docker-machine. This integration will allow users to spin up Rancher compute nodes across multiple cloud providers right from the Rancher UI. In our initial release, we supported DigitalOcean. Amazon EC2 is soon to follow, and we’ll continue to add more cloud providers as interest dictates. We believe this feature will really help the Zero-to-Docker (and Zero-to-Rancher) experience. But the feature itself is not the focus of this post.
In the first part of this post, I created a full Node.js application stack using MongoDB as the application’s database and Nginx as a load balancer distributing incoming requests to two Node.js application servers. I created the environment on Rancher using Docker containers.
In this post I will go through setting up Rancher authentication with GitHub, and creating a webhook with GitHub for automatic deployments.
Rancher Access Control: starting from version 0.
Today Docker acquired SDN software maker SocketPlane. Congratulations to both the Docker and SocketPlane teams. We have worked closely with the SocketPlane team since the early Docker networking discussions and have a great deal of respect for their technical abilities. We are also happy to see Docker Inc. make a serious effort to bring SDN capabilities to the Docker platform. Many customers have told us that the lack of multi-host networking is one of the last remaining gaps that impede the widespread production use of Docker containers.
Today I would like to announce a new open source project called RancherOS – the smallest, easiest way to run Docker in production and at scale. RancherOS is the first operating system to fully embrace Docker, and to run all system services as Docker containers. At Rancher Labs we focus on building tools that help customers run Docker in production, and we think RancherOS will be an excellent choice for anyone who wants a lightweight version of Linux ideal for running containers.
In addition to managing container networking across cloud providers, we are excited to announce the following features in Rancher v0.2. First up, the team has exposed the building blocks for storage management.
Almost one year ago I started Stampede as an R&D project to look at the implications of Docker for cloud computing, and along the way I’ve explored many ideas. After releasing Stampede and getting so much great feedback, I’ve decided to concentrate my efforts. I’m renaming Stampede.io to Rancher.io to signify the new direction and focus the project is taking. Going forward, instead of the experimental personal project that Stampede was, Rancher will be a well-sponsored open source project focused on building a portable implementation of infrastructure services similar to EBS, VPC, ELB, and many other services.