2017 Predictions: Rapid Adoption and Innovation to Come
Rapid adoption of container orchestration frameworks
As more companies use containers in production, adoption of orchestration frameworks like Kubernetes, Mesos, Cattle and Docker Swarm will increase as well. These projects have evolved quickly in terms of stability, community and partner ecosystem, and will act as necessary and enabling technologies for enterprises using containers more widely in production. Read more
Registries are one of the key components that make working with containers, and Docker in particular, so appealing to the masses. A registry hosts images, which are downloaded to hosts and run by a container engine; a container is simply a running instance of a specific image. Think of an image as a ready-to-go package, like an MSI on Microsoft Windows or an RPM on Red Hat Enterprise Linux. I won’t go into the details of how registries work here, but if you want to learn more, this article is a great read.
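The image/container relationship is easiest to see on the command line. A minimal sketch, assuming Docker is installed and the default registry (Docker Hub) is reachable; the container names are arbitrary:

```shell
# Download the nginx image from the registry to the local host
docker pull nginx

# Start a container, i.e. a running instance of that image
docker run -d --name web nginx

# The same image can back any number of containers
docker run -d --name web2 nginx

# List local images and running containers
docker images
docker ps
```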
Instead, what I’d like to do in this post is highlight some of the container registries that currently remain under the radar. While the big-name registries are already familiar to most people who work with Docker, there are smaller registries worth considering, too, when you are deciding where to host your images.
Keep reading for a discussion of these lesser-known container registries.
Once any application, Dockerized or otherwise, reaches production, log aggregation becomes one of the biggest concerns. We will be looking at a number of solutions for gathering and parsing application logs from Docker containers running on multiple hosts. This includes using a third-party service such as Loggly to get set up quickly, as well as bringing up an ELK stack (Elasticsearch, Logstash, Kibana). We will also look at using middleware such as Fluentd to gather logs from Docker containers, which can then be routed to any of the hundreds of consumers Fluentd supports. In this article we focus on using third-party tools for Docker logging, specifically using Loggly as an example. We will highlight how to get application logs to Loggly using both Docker and Rancher.
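To give a taste of the middleware approach, the sketch below shows a minimal Fluentd configuration that accepts logs from Docker's fluentd log driver and writes them to a local file; the port, tag pattern, and output path are assumptions for illustration, and the file output would normally be swapped for one of Fluentd's many consumer plugins:

```
# fluent.conf (hypothetical sketch)
<source>
  @type forward          # Docker's fluentd log driver ships logs here
  port 24224
</source>

<match docker.**>
  @type file             # swap for any Fluentd output plugin (S3, ES, etc.)
  path /var/log/fluent/docker
</match>
```

A container could then be started with something like `docker run --log-driver=fluentd --log-opt tag=docker.myapp ...` so that its stdout and stderr flow into the pipeline above.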
If you want to get set up aggregating logs quickly, the best option is to use a hosted third-party solution. There are many such solutions, for example Papertrail, Splunk Cloud and Loggly. We will be using Loggly as an example; however, all three platforms support similar ingestion interfaces. One option is to implement Loggly integration directly into your application; for example, if you are using Java, you can use Logback. For a more general solution, however, we can set up integration through the rsyslog daemon. This allows you to use the same setup regardless of application and language, as syslog support is available in the vast majority of languages and platforms. Further, syslog allows you to configure local filtering and sampling to reduce the volume of logs you send on to Loggly. This is important, as logging services tend to get very expensive at large volumes. Lastly, syslog integration can be used with many other logging solutions, so if you choose to switch from Loggly to another option at a later point, your application code does not have to change. Read more
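To make the rsyslog route concrete, a minimal forwarding rule might look like the sketch below. The token and tag are placeholders, and the exact template Loggly expects should be taken from its setup documentation rather than from here:

```
# /etc/rsyslog.d/22-loggly.conf (hypothetical values)
# Tag each message with a Loggly customer token as RFC 5424 structured data
$template LogglyFormat,"<%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% [YOUR-TOKEN@41058 tag=\"docker\"] %msg%\n"

# Optional local filtering: drop debug-level noise before it leaves the host
*.=debug stop

# Forward everything else to Loggly (@@ means TCP, a single @ means UDP)
*.* @@logs-01.loggly.com:514;LogglyFormat
```

Because the filtering happens in rsyslog, the same application code works unchanged if you later point this rule at a different log consumer.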
At Rancher Labs we generate a lot of logs in our internal environments. As we conduct more and more testing on these environments we have found the need to centrally aggregate the logs from each environment. We decided to use Rancher to build and run a scalable ELK stack to manage all of these logs.
For those who are unfamiliar with the ELK stack, it is made up of Elasticsearch, Logstash and Kibana. Logstash provides a pipeline for shipping logs from various sources and input types, combining, massaging and moving them into Elasticsearch or several other stores. It is a really powerful tool in the logging arsenal.
Elasticsearch is a document database that is really good at search. It can take our processed output from Logstash, analyze it, and provide an interface to query all of our logging data. Together with Kibana, a powerful visualization tool that consumes Elasticsearch data, you gain an amazing ability to draw insights from your logging. Read more
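A skeletal Logstash pipeline illustrating that flow might look like the following; the syslog port and the Elasticsearch host name are assumptions for the sketch, and option names can vary slightly between Logstash versions:

```
# logstash.conf (sketch): receive syslog, ship to Elasticsearch
input {
  syslog {
    port => 5514          # hosts forward their container logs here
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```

Kibana would then be pointed at the same Elasticsearch instance to search and visualize the indexed log data.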
The latest release of Docker Engine now supports volume plugins, which allow users to extend Docker’s capabilities by adding solutions that can create and manage data volumes for containers that need to operate on persistent datasets. This is especially important for databases, and addresses one of the key limitations in Docker.
Recently at Rancher we released Convoy, an open-source Docker volume driver that makes it simple to snapshot, back up, and restore Docker volumes across clouds.
In this post I will put Convoy into action, using it to snapshot and back up the database state for a WordPress application, and will use the backup to create a replica in another datacenter. I’ll also cover incremental and scheduled backups, so that you can begin regularly backing up any stateful data running in containers. Read more
Over the last few months our team at Rancher Labs has been working on building software that would allow users to create and manage persistent Docker volumes. With the release of Docker 1.8, which now officially supports Docker volume drivers, we announced Convoy, an open-source Docker volume driver that can snapshot, back up and restore Docker volumes anywhere.
Convoy is designed to be a standalone Docker volume driver that runs on individual Linux hosts. Our initial implementation of Convoy utilizes Linux Device Mapper to deliver four key storage functions for Docker volumes:
Create thin-provisioned volumes
Take snapshots of volumes
Incrementally back up snapshots to object stores, such as Amazon S3
Restore volumes on any host running Convoy
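Put together, a session covering those four functions might look like the sketch below. The volume names and S3 bucket are hypothetical, and the exact flags should be checked against the Convoy README before use:

```shell
# Create a thin-provisioned volume and use it from Docker (hypothetical names)
convoy create vol1 --size 10G
docker run -d --name db -v vol1:/var/lib/mysql --volume-driver=convoy mysql

# Take a point-in-time snapshot of the volume
convoy snapshot create vol1 --name snap1

# Incrementally back the snapshot up to an S3 bucket
convoy backup create snap1 --dest s3://backup-bucket@us-west-2/

# On any other host running Convoy, restore a new volume from the backup
convoy create vol2 --backup <url-printed-by-backup-create>
```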
On August 26th we demonstrated Convoy and discussed our plans for incorporating it into Rancher. We walked through example use cases, and discussed how Convoy can be extended to work with other backend storage platforms. You can view a recording of this meetup below.