Illumina Innovates with Rancher and Kubernetes
Note: this is Part 4 in a series on building highly resilient workloads with Docker and Rancher; Parts 1, 2, and 3 are already available online. In this installment, we take a look at service updates. Service updates are generally where the risk of downtime is highest, so it helps to understand how deployments work in Rancher and the options available.
At Rancher Labs we generate a lot of logs in our internal environments. As we conduct more and more testing on these environments, we have found the need to centrally aggregate the logs from each environment. We decided to use Rancher to build and run a scalable ELK stack to manage all of these logs. For those unfamiliar with the ELK stack, it is made up of Elasticsearch, Logstash, and Kibana.
MongoDB, the popular open source NoSQL database, has been in the news a lot recently—and not for reasons that are good for MongoDB admins. Early this year, reports began appearing of MongoDB databases being “taken hostage” by attackers who delete all of the data stored inside the databases, then demand ransoms to restore it. Security is always important, no matter which type of database you’re using. But the recent spate of MongoDB attacks makes it especially crucial to secure any MongoDB databases that you may use as part of your container stack.
As one of the most disruptive technologies in recent years, container-based applications are rapidly gaining traction as a platform on which to launch applications. But as with any new technology, the security of containers in all stages of the software lifecycle must be our highest priority. This post seeks to identify some of the inherent security challenges you’ll encounter with a container environment, and suggests base elements for a Docker security plan to mitigate those vulnerabilities.
Introduction If you have been working with Docker for any length of time, you probably already know that shared volumes and data access across hosts is a tough problem. While the Docker ecosystem is maturing, implementing persistent storage across environments still seems to be a problem for most folks. Luckily, Rancher has been working on this problem and has come up with a unique solution that addresses most of these issues.
For any team using containers – whether in development, test, or production – an enterprise-grade registry is a non-negotiable requirement. JFrog Artifactory is much beloved by Java developers, and it’s easy to use as a Docker registry as well. To make it even easier, we’ve put together a short walkthrough for setting up Artifactory in Rancher.
Before you start For this article, we’ve assumed that you already have a Rancher installation up and running (if not, check out our Quick Start guide), and will be working with either Artifactory Pro or Artifactory Enterprise.
RabbitMQ is a messaging broker that transports messages between data producers and data consumers. Data producers can be just about any application, host, or device that emits data that needs to be consumed by other applications for aggregation, processing, or analysis. RabbitMQ is easy to set up, use, and maintain. It can be scaled to handle large numbers of messages between many different data producers and consumers in a variety of application use cases.
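The producer/consumer pattern that a broker like RabbitMQ implements can be sketched with Python's standard `queue` module. This is an in-process stand-in for illustration only, not RabbitMQ itself; a real deployment would use a client library such as pika and a running broker.

```python
import queue
import threading

# In-process stand-in for a broker queue (illustrative only;
# RabbitMQ would sit between separate producer and consumer processes).
broker = queue.Queue()

def producer():
    # A data producer emits messages for downstream consumers.
    for i in range(3):
        broker.put(f"metric-{i}")
    broker.put(None)  # sentinel: no more messages

def consumer(results):
    # A consumer pulls messages off the queue for processing.
    while True:
        msg = broker.get()
        if msg is None:
            break
        results.append(msg.upper())

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # ['METRIC-0', 'METRIC-1', 'METRIC-2']
```

The key property mirrored here is decoupling: the producer never calls the consumer directly, it only publishes to the queue, which is what lets brokers scale producers and consumers independently.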
Note: Rancher has come a long way since this was first published in June 2015. We’ve revised this post (as of August 2016) to reflect the updates in our enterprise container management service. Read on for the updated tutorial!
Rancher supports multiple orchestration engines for its managed environments, including Kubernetes, Mesos, Docker Swarm, and Cattle (the default Rancher managed environment). The Cattle environment is rich with features like stacks, services, and load balancing, and in this post, we’ll highlight common uses for these features.
Why Smart Container Management is Key For anyone working in IT, the excitement around containers has been hard to miss. According to RightScale, enterprise deployments of Docker more than doubled in 2016, with 29% of organizations using the software versus just 14% in 2015. Even more impressive, fully 67% of organizations surveyed are either using Docker or plan to adopt it. While many of these efforts are early stage, separate research shows that over two-thirds of organizations who try Docker report that it meets or exceeds expectations, and the average Docker deployment quintuples in size in just nine months.
The latest release of Docker Engine now supports volume plugins, which allow users to extend Docker's capabilities by adding solutions that can create and manage data volumes for containers that need to operate on persistent datasets. This is especially important for databases, and addresses one of the key limitations in Docker. Recently at Rancher we released Convoy, an open-source Docker volume driver that makes it simple to snapshot, back up, and restore Docker volumes across clouds.
Hello, I’m Ivan Mikushin (@imikushin), one of the developers here at Rancher working on RancherOS. Today I wanted to walk you through the concept of RancherOS “system services.” As you may know, RancherOS was designed from the ground up to run everything above the kernel as Docker containers, allowing simple upgrades and a tiny OS footprint. The goal of RancherOS is to provide the perfect small OS for running Docker containers.
If you use containers as part of your day-to-day operations, you need to monitor them, ideally by using a Docker performance monitoring solution that you already have in place rather than implementing an entirely new tool. Containers are often deployed quickly and at a high volume, and they frequently consume and release system resources at a rapid rate. You need some way of measuring container performance and the impact that container deployment has on your system.
If you’re headed to Pasadena, California this weekend for Scale15x, come see us! We’re excited to be presenting, and our talks are focused on practical knowledge for running containers and Kubernetes in production. While Rancher makes it easy to deploy and manage containers and Kubernetes, building that ease of use has required specific expertise and disciplined thought on how teams are incorporating them into their projects today. We’re headed to Scale to share what we’ve learned, and to get feedback from you.
In the world of containers, Kubernetes has become the community standard for container orchestration and management. But there are some basic elements surrounding networking that need to be considered as applications are built to ensure that full multi-cloud capabilities can be leveraged.
The Basics of Kubernetes Networking: Pods The basic unit of management inside Kubernetes is not a container; it is the pod. A pod is simply one or more containers that are deployed as a unit.
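A minimal pod manifest makes this concrete. The names and images below are placeholders chosen for illustration; the point is that both containers are scheduled together and share the pod's network namespace, so they can reach each other over localhost.

```yaml
# A hypothetical two-container pod; both containers share the
# pod's IP and network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80
    - name: sidecar-logger
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```

Applying this with `kubectl apply -f pod.yaml` would create one pod with two containers, which is why networking in Kubernetes is reasoned about per pod rather than per container.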
When public clouds first began gaining popularity, it seemed that providers were quick to append the phrase “as a service” to everything imaginable, as a way of indicating that a given application, service, or infrastructure component was designed to run in the cloud. It should therefore come as no surprise that Container as a Service, or CaaS, refers to a cloud-based container environment. But there is a bit more to the CaaS story than this.
How do you secure containers? That may sound like a simple question, but it actually has a six- or seven-part answer. That’s because securing containers doesn’t involve just deploying one tool or paying careful attention to one area where vulnerabilities can exist. Your storage system should be locked down with all the security and access control tools available to you as well. That is true whether the storage serves containers or any other type of application environment.