Continental Innovates with Rancher and Kubernetes
Kubernetes applications are increasingly making their way to the edge and embedded computing. Storage will quickly follow as the applications that rely on this edge infrastructure become more advanced and naturally carry more state. Longhorn is a cloud native persistent storage technology that has been quickly evolving as stateful applications in Kubernetes increase. An enhancement in its latest version 1.1 is quite applicable to this topic: native ARM64 support.
Rancher CEO Sheng Liang and Portworx CEO Murli Thirumale discuss the evolution of Kubernetes storage and how industry players must come together for the benefit of the open source community and industry at large.
Set up Rancher monitoring remote endpoint integration with Thanos, an open source, highly available Prometheus setup with long-term metric capabilities.
Rancher persistent storage, built with CNCF Project Longhorn, is now generally available. Longhorn directly answers the need for an enterprise-grade, vendor-neutral persistent storage solution that supports the easy development of stateful applications within Kubernetes.
Today I am very excited to announce that Rancher Labs' Project Longhorn has been accepted by the Cloud Native Computing Foundation as a sandbox project.
Editor’s note: Additional writing by Arsh Sharma, who is pursuing a Bachelor’s Degree from the Indian Institute of Technology (BHU), Varanasi.
Introduction Docker’s process-based virtualization has many advantages, especially when combined with the benefits of image layers. It allows for incredibly fast container spawning and lightweight resource utilization. However, one of the side effects of Docker’s ephemeral process model is that you have to be certain to plan when you want to save persistent data.
When you try to create persistent storage in Kubernetes, the first two concepts you will likely encounter are the Kubernetes PersistentVolume (PV) and PersistentVolumeClaim (PVC).
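To make the two concepts concrete, here is a minimal sketch of a PV and a matching PVC. The names, capacity, and hostPath location are illustrative assumptions for a demo cluster, not values from the post:

```yaml
# A PersistentVolume: a piece of storage provisioned in the cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv           # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:                  # simplest backing store, for demos only
    path: /mnt/data
---
# A PersistentVolumeClaim: a request for storage that Kubernetes
# matches against available PVs with compatible size and access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Once both are applied, Kubernetes binds the claim to the volume, and pods reference the claim rather than the volume itself.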
Project Longhorn v0.3.0 Release
Since we announced Project Longhorn last year, there has been a great deal of interest in running Longhorn storage on a Kubernetes cluster.
Today, I am very excited to announce the availability of Project Longhorn v0.2, which is a persistent storage implementation for any Kubernetes cluster. Once deployed on a Kubernetes cluster, Longhorn automatically pools all available local storage from all the nodes in the cluster to form replicated and distributed block storage.
One of the things that often surprises administrators when they first begin working with Docker containers is the fact that containers natively use non-persistent storage: when a container is removed, so too is the container’s storage. Of course, containerized applications would be of very limited use if there were no way of enabling persistent Docker container storage. Fortunately, there are ways to implement persistent storage in a containerized environment. Although a container’s own native storage is non-persistent, a container can be connected to storage that is external to the container.
We’ve just returned from DockerCon 2017, which was a fantastic experience. I thought I’d share some of my thoughts and impressions of the event, including my perspective on some of the key announcements, while they are still fresh in my mind.
New open source projects: Container adoption for production environments is very real. The keynotes on both days included some exciting announcements that should further accelerate adoption in the enterprise as well as foster innovation in the open source community.
Editor’s note: On June 2, 2020, Rancher Labs announced the general availability of Longhorn, an enterprise-grade, cloud-native container storage solution. Longhorn directly answers the need for an enterprise-grade, vendor-neutral persistent storage solution that supports the easy development of stateful applications within Kubernetes.
I’m super excited to unveil Project Longhorn, a new way to build distributed block storage for container and cloud deployment models. Following the principles of microservices, we have leveraged containers to build distributed block storage out of small independent components, and use container orchestration to coordinate these components to form a resilient distributed system.
*Note: since this article was posted, we’ve released Rancher 1.2.1, which addresses much of the feedback we received on the initial release. You can read more about the v1.2.1 release on GitHub.* I am very excited to announce the release of Rancher 1.2! This release goes beyond the requisite support for the latest versions of Kubernetes, Docker, and Docker Compose, and includes major enhancements to the Rancher container management platform itself.
In less than a week, over 24,000 developers, sysadmins, and engineers will arrive in Las Vegas to attend AWS re:Invent (Nov. 28 - Dec 2). If you’re headed to the conference, we look forward to seeing you there! We’ll be onsite previewing enhancements included in our upcoming Rancher v1.2 release:
Support for the latest versions of Kubernetes and Docker: As we’ve previously mentioned, we’re committed to supporting multiple container orchestration frameworks, and we’re eager to show off our latest support for Docker Native Orchestration and Kubernetes.
In the third section on data resiliency, we delve into various ways that data can be managed on Rancher (you can catch up on Part 1 and Part 2 here). We left off last time after setting up load balancers, health checks, and multi-container applications for our WordPress setup. Our containers spin up and down in response to health checks, and we are able to run the same code that works on our desktops in production.
Yesterday, Atlantis Computing announced a new converged platform for managing infrastructure and containers, which combines Rancher with their award-winning USX software-defined storage solution. This turnkey solution will make it easier for IT organizations to deliver containers as a service to their developers with enterprise-grade storage, without losing sight of the very real, bottom-line benefits that come from optimizing virtualized infrastructure. This solution will be available as a tech preview in early November.
In a previous article in this series we looked at the basic Kubernetes concepts including namespaces, pods, deployments and services. Now we will use these building blocks in a realistic deployment. We will cover how to setup persistent volumes, how to setup claims for those volumes and then mount those claims into pods. We will also look at creating and using secrets using the Kubernetes secrets management system. Lastly, we will look at service discovery within the cluster as well as exposing services to the outside world.
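As a preview of the steps the post walks through, here is a hedged sketch of a pod that mounts a claimed volume and a secret as files; every name (the PVC, the Secret, the mount paths) is an illustrative assumption:

```yaml
# Pod that mounts a PersistentVolumeClaim and a Secret; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # claimed volume mounted here
        - name: creds
          mountPath: /etc/creds              # secret keys appear as files
          readOnly: true
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc               # assumes a pre-created PVC
    - name: creds
      secret:
        secretName: db-credentials           # assumes a pre-created Secret
```

The pod never names the underlying PersistentVolume directly; it only references the claim, which is what makes the storage backend swappable.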
*Quentin Hamard is one of the founders of Octoperf, and is based in Marseille, France.* Octoperf is a full-stack cloud load testing SaaS platform. It allows developers to test the performance limits of mobile apps and websites in a realistic virtual environment. As a startup, we are attempting to use containers to change the load testing paradigm, and deliver a platform that can run on any cloud, for a fraction of the cost of existing approaches.
Today, our team at Rancher announced an exciting new feature called Persistent Storage Services. Persistent storage support builds on the work we’ve done with Rancher Convoy, and makes it dramatically easier to run stateful applications in production using Rancher. The Docker volume plugins, introduced in Docker 1.8 and further enhanced in Docker 1.9, enable developers to utilize a variety of persistent storage implementations as standard Docker volumes. Our new Persistent Storage Services capability complements Docker volume plugins by providing a backend implementation of a Docker volume plugin, and is the core storage technology in our recently announced hyper-converged infrastructure stack for Docker.
Over the last few months our team at Rancher Labs has been working on building software that would allow users to create and manage persistent Docker volumes. With the release of Docker 1.8, which now officially supports Docker volume drivers, we announced Convoy, an open-source Docker volume driver that can snapshot, backup and restore Docker volumes anywhere. Convoy is designed to be a standalone Docker volume driver that runs on individual Linux hosts.
Hi, I am Sheng Yang (@yasker), an engineer here at Rancher Labs. Over the last few months our team has been working on building Docker storage software that would allow users to create and manage persistent Docker volumes. With last week’s release of Docker 1.8, which now officially supports Docker volume drivers, I am excited to announce Convoy, an open-source Docker volume driver that can snapshot, backup and restore Docker volumes anywhere.
In my last post I showed you how to deploy a highly available WordPress installation using Rancher Services, a Gluster cluster for distributed storage, and a database cluster based on Percona XtraDB Cluster. Now I’m going one step further: we will set up the Gluster and PXC clusters using Rancher Services too, taking advantage of new service features available in the beta Rancher release, such as DNS service discovery and label scheduling.
GlusterFS is a scalable, highly available, distributed network file system widely used for applications that need shared storage, including cloud computing, media streaming, content delivery networks, and web cluster solutions. High availability is ensured by the fact that storage data is redundant: if one node fails, another covers it without service interruption. In this post I’ll show you how to create a GlusterFS cluster for Docker that you can use to store your containers’ data.
In addition to managing container networking across cloud providers, we are excited to announce the following features in Rancher v0.2. First up, the team has exposed the building blocks for storage management.
Almost one year ago I started Stampede as an R&D project to look at the implications of Docker on cloud computing moving forward, and as such I’ve explored many ideas. After releasing Stampede, and getting so much great feedback, I’ve decided to concentrate my efforts. I’m renaming Stampede.io to Rancher.io to signify the new direction and focus the project is taking. Going forward, instead of the experimental personal project that Stampede was, Rancher will be a well-sponsored open source project focused on building a portable implementation of infrastructure services similar to EBS, VPC, ELB, and many other services.