Tag: storage

Block Storage, Object Storage, and File Systems: What They Mean for Containers

June 14, 2017

One of the things that often surprises administrators when they first begin working with Docker containers is the fact that containers natively use non-persistent storage. When a container is removed, so too is the container’s storage.

Of course, containerized applications would be of very limited use if there were no way to store data persistently. Fortunately, persistent storage can be implemented in a containerized environment: although a container’s own native storage is non-persistent, a container can be connected to storage that is external to the container. This allows persistent data to be stored, since the external storage is not removed when the container is.
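For example, with Docker alone, a host bind mount or a named volume keeps data outside the container. The snippet below is a minimal sketch; the image, paths, and volume names are illustrative and not taken from this article.

```yaml
# Minimal sketch: both mounts live outside the container, so their data
# survives when the container is removed. Names and paths are illustrative.
version: '2'
services:
  app:
    image: nginx:alpine
    volumes:
      - /srv/app/static:/usr/share/nginx/html   # bind mount of a host directory
      - app-data:/data                           # named volume managed by Docker
volumes:
  app-data: {}
```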

The first step in deciding how to implement persistent storage for your containers is to determine the underlying type of storage system you will use. There are three main options: file system storage, block storage, and object storage. Below, I explain the differences between each type of storage and what they mean when it comes to setting up storage for a containerized environment.

Read more


Rancher 1.2 Is Now Available!

December 1, 2016


Note: since this article was posted, we’ve released Rancher 1.2.1, which addresses much of the feedback we received on the initial release. You can read more about the v1.2.1 release on GitHub.

I am very excited to announce the release of Rancher 1.2! This release goes beyond the requisite support for the latest versions of Kubernetes, Docker, and Docker Compose, and includes major enhancements to the Rancher container management platform itself.

Rancher 1.2 fully supports the latest storage and networking plugin frameworks (more on this later), and introduces a new and simplified HA setup, a more flexible configuration of Rancher’s HAProxy load balancer, and a new Rancher CLI. We’ve also added SAML 2.0 support, resource scheduling, and numerous improvements for performance and scale. This is a relatively large release, with many more features outlined in the release notes.

Out of all these enhancements, there are a few things that we’d like to highlight:

Full support for container networking and storage plugin frameworks

Last year, Docker introduced Docker Volume plugins and libnetwork, while Kubernetes opted for the Container Network Interface (CNI) and FlexVolume frameworks. Since then, we’ve seen the container ecosystem explode with implementations of all these plugin frameworks to allow users to take advantage of the vast storage and network solutions out there today.

One of Rancher’s superpowers is enabling users to leverage their tooling of choice across diverse infrastructure. With the release of v1.2, Rancher supports CNI and is fully capable of leveraging any vendor CNI network plugins, along with our own newly-rewritten IPSec and VXLAN solutions for cross-host networking. Users can also create volumes with any Docker Volume plugins scoped to the container, stack, or environment. Plugins included with Rancher 1.2 are our newly-rewritten support for NFS (which replaces ConvoyNFS), AWS EFS, and AWS EBS, with more to come.
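As a rough sketch of what using one of these plugins looks like, a stack’s docker-compose.yml can declare an NFS-backed volume that is shared across hosts. The driver name below is an assumption about how the Rancher NFS plugin is referenced; verify it against your environment.

```yaml
# Sketch only: assumes the Rancher NFS storage service is enabled in the environment.
# The driver name and paths are assumptions, not taken from the release notes.
version: '2'
services:
  wordpress:
    image: wordpress:4
    volumes:
      - wp-uploads:/var/www/html/wp-content/uploads
volumes:
  wp-uploads:
    driver: rancher-nfs   # NFS-backed volume usable from any host in the environment
```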

Modular, push-button, container-ready environments

While Rancher 1.2 gives users the ability to distribute and manage the lifecycle of storage and networking plugins, we are also introducing the concept of custom environment templates. Networking and storage plugins can now be incorporated as options in a customizable template, which also includes options for orchestration engines, external DNS, and health checks. This allows users to better organize and manage services, and provides a straightforward, consistent, and repeatable deployment of their infrastructure services. In the future, we expect to expand the scope of environment templates to include additional infrastructure services such as logging, monitoring, and databases.

Faster, more frequent releases

Finally, when Rancher became generally available with v1.0 earlier this year, our goal was to provide stable releases each quarter, with bi-weekly pre-release snapshots for the members of our open source community eager to play with our latest enhancements. However, key components in Docker and Kubernetes follow different release schedules, and our open source community wants stable releases more often than once a quarter. We have decided that, starting with v1.3, we will ship monthly stable releases of Rancher.

This means we will no longer ship pre-release builds as we have in the past, though release candidates will be available for download and testing. We hope that with this new release schedule, we will be able to increase our agility to ship new features, remain up to date with Docker and Kubernetes, and shorten the time between stable releases for Rancher users who want to quickly take advantage of new features and major fixes.

We really could not have released Rancher 1.2 without the support of our customers and open source community, so a very BIG thank you for helping us with this release. We also have big plans for 2017 and can’t wait to share them with you as soon as we can. Stay tuned!

To see Rancher 1.2 in action, check out the recording of our December 2016 meetup.


5 Keys to Running Workloads Resiliently with Docker and Rancher - Part 3

November 17, 2016

In this third part on data resiliency, we delve into various ways that data can be managed on Rancher (you can catch up on Part 1 and Part 2 here).

We left off last time after setting up load balancers, health checks, and multi-container applications for our WordPress setup. Our containers spin up and down in response to health checks, and we are able to run the same code in production that works on our desktops.

Rancher Multi-container WordPress

All of this is nicely defined in a docker-compose.yml file, along with the rancher-compose.yml companion that extends Compose’s functionality on the Rancher cluster. The only issue is that when we terminated the MySQL container, all of the data was lost.

Read more
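A minimal way to address that, shown here as a sketch rather than the exact configuration from this series, is to mount MySQL’s data directory onto a named volume in the docker-compose.yml:

```yaml
# Sketch: MySQL's data directory lives on a named volume, so it survives
# container termination. The password value is a placeholder.
version: '2'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=changeme   # placeholder only
    volumes:
      - mysql-data:/var/lib/mysql      # data directory on the named volume
volumes:
  mysql-data: {}
```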


Optimizing Applications and Infrastructure with Atlantis Computing and Rancher

October 26, 2016

Yesterday, Atlantis Computing announced a new converged platform for managing infrastructure and containers, which combines Rancher with their award-winning USX software-defined storage solution. This turnkey solution will make it easier for IT organizations to deliver containers as a service to their developers with enterprise-grade storage, without losing sight of the very real, bottom-line benefits that come from optimizing virtualized infrastructure. This solution will be available as a tech preview in early November.

Credit: Atlantis Computing

From a single UI, users will be able to provision a new compute host and automatically create USX-powered persistent storage for containers on that host; moreover, that storage takes advantage of Atlantis’ USX technology for data reduction and near-instantaneous I/O for containers running in memory. The result: containerized applications that can run, scale, and update that much faster, with lower data center costs and straightforward management for the organizations building and overseeing them. Atlantis’ Hugo Phan does an excellent job of diving into the technical details of the platform here.

Read more


Creating Microservices Deployments on Kubernetes with Rancher - Part 2

September 22, 2016

In a previous article in this series, we looked at basic Kubernetes concepts, including namespaces, pods, deployments, and services. Now we will use these building blocks in a realistic deployment. We will cover how to set up persistent volumes, how to set up claims for those volumes, and how to mount those claims into pods. We will also look at creating and using secrets with the Kubernetes secrets management system. Lastly, we will look at service discovery within the cluster, as well as exposing services to the outside world.
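As a small illustration of the secrets piece (the names and values below are placeholders, not the actual go-auth manifests), a Secret can be defined once and injected into a pod as an environment variable:

```yaml
# Sketch: a Secret consumed as an environment variable. Names and values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: Y2hhbmdlbWU=   # base64 for "changeme"
---
apiVersion: v1
kind: Pod
metadata:
  name: go-auth-example
spec:
  containers:
    - name: go-auth
      image: example/go-auth   # hypothetical image reference
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```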

Sample Application

We will be using go-auth as a sample application to illustrate the features of Kubernetes. If you have gone through our Docker CI/CD series of articles, you will be familiar with it. It is a simple authentication service consisting of an array of stateless web servers and a database cluster. Creating a database inside Kubernetes is nontrivial, as the ephemeral nature of containers conflicts with the persistent storage requirements of databases.

Persistent Volumes

Prior to launching our go-auth application, we must set up a database for it to connect to. Before setting up a database server in Kubernetes, we must provide it with a persistent storage volume. This helps make database state persistent across database restarts, and it helps with migrating storage when containers are moved from one host to another. The currently supported persistent volume types are listed below:

Read more
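To make the flow concrete before the full walkthrough, here is a hedged sketch of an NFS-backed PersistentVolume and a matching claim; the server address, export path, and sizes are placeholders.

```yaml
# Sketch: an NFS-backed PersistentVolume and a claim that a database pod could mount.
# Server address, export path, and sizes are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.0.0.10
    path: /exports/db
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Once the claim is bound, it can be referenced from a pod spec by name (persistentVolumeClaim.claimName) and mounted like any other volume.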


Setting Up Shared Volumes with Convoy-NFS

August 18, 2016

Introduction

If you have been working with Docker for any length of time, you probably already know that shared volumes and data access across hosts is a tough problem. While the Docker ecosystem is maturing, implementing persistent storage across environments still seems to be a problem for most folks. Luckily, Rancher has been working on this problem and has come up with a unique solution that addresses most of these issues. Running a database with shared storage still isn’t widely recommended, but for many other use cases, sharing volumes across hosts is good practice.

Much of this guide was inspired by one of the Rancher online meetups. Additionally, here is a reference that includes some of the NFS configuration information, in case you want to build something like this yourself from scratch.

Rancher Convoy

If you haven’t heard of it yet, the Convoy project by Rancher is aimed at making persistent volume storage easy. Convoy is an appealing volume plugin because it supports a variety of backends: there is EBS volume and S3 support, along with VFS/NFS support, giving users flexible options for provisioning shared storage.
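As a sketch of what this looks like in practice (assuming the convoy-nfs stack is already running, and with illustrative service and volume names), a compose file can request a Convoy-backed volume that multiple hosts can reach:

```yaml
# Sketch: a shared volume provisioned through the convoy-nfs driver.
# Assumes the convoy-nfs service is deployed; names are illustrative.
version: '2'
services:
  web:
    image: httpd:2.4
    volumes:
      - shared-assets:/usr/local/apache2/htdocs
volumes:
  shared-assets:
    driver: convoy-nfs   # volume lives on the NFS share managed by Convoy
```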

Dockerized-NFS

This is a little recipe for standing up a Dockerized NFS server for the convoy-nfs service to connect to. Docker-NFS is basically a poor man’s EFS, and you should only run it if you are confident that the server won’t get destroyed, or if the data simply isn’t important enough to matter if it is lost. You can find more information about the Docker NFS server I used here.
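For reference, such a server can itself be declared in a compose file along the lines of the sketch below; the image name and environment variable are hypothetical stand-ins for whichever NFS server image you choose (the link above points to the one actually used).

```yaml
# Sketch only: a containerized NFS server. The image and its SHARED_DIRECTORY
# variable are hypothetical; consult your chosen image's documentation.
version: '2'
services:
  nfs-server:
    image: example/nfs-server     # hypothetical image reference
    privileged: true              # NFS servers in containers generally need extra privileges
    environment:
      - SHARED_DIRECTORY=/nfs     # hypothetical variable naming the exported directory
    volumes:
      - /srv/nfs:/nfs             # host directory backing the export
    ports:
      - "2049:2049"               # standard NFS port
```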

Read more

