In this post, we discuss how to back up etcd and how to recover from a backup to restore operations to a Kubernetes cluster. Etcd is a highly available distributed key-value store that provides a reliable way to store data across machines.
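As a minimal sketch of what a backup and restore looks like with the v3 `etcdctl` client: the endpoint, certificate paths, and data directories below are illustrative placeholders and will differ depending on how your cluster was provisioned.

```shell
# Use the etcd v3 API
export ETCDCTL_API=3

# Take a snapshot of a running etcd member (paths are placeholders)
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db

# Later, restore the snapshot into a fresh data directory,
# then point the etcd member at that directory and restart it
etcdctl snapshot restore /var/backups/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restored
```

Note that `snapshot restore` does not touch the live cluster; it only materializes a new data directory, so restarting etcd against it is a separate, deliberate step.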
Partnership Combines Rancher 2.0 with Canonical Kubernetes and Leading Cloud OS, Ubuntu Today, we joined Canonical in announcing the Canonical Cloud Native Platform, a new offering that provides complete support and management for Kubernetes in the Enterprise. The Cloud Native Platform combines Rancher 2.0 container management software with Canonical Ubuntu and Ubuntu Kubernetes, and will be available when Rancher 2.0 launches next spring. This announcement is an enormous accomplishment for our team here at Rancher.
Today, Amazon announced a managed Kubernetes service called Elastic Container Service for Kubernetes (EKS). This means that all three major cloud providers—AWS, Azure, and GCP—now offer managed Kubernetes services. This is great news for Kubernetes users. Even though users always have the option to stand up their own Kubernetes clusters, and new tools like Rancher Kubernetes Engine (RKE) make that process even easier, cloud-managed Kubernetes installations should be the best choice for the majority of Kubernetes users.
I just came back from DockerCon EU. I have not met a more friendly and helpful group of people than the users, vendors, and Docker employees at DockerCon. It was a well-organized event and a fun experience. I went into the event with some questions about where Docker was headed. Solomon Hykes addressed these questions in his keynote, which was the highlight of the entire show. Docker embracing Kubernetes is clearly the single biggest piece of news coming out of DockerCon.
Google Container Engine, or GKE for short (the K stands for Kubernetes), is Google’s offering in the space of Kubernetes runtime deployments. When used in conjunction with a couple of other components from the Google Cloud Platform, GKE provides a one-stop shop for creating your own Kubernetes environment, on which you can deploy all of the containers and pods that you wish without having to worry about managing Kubernetes masters and capacity.
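To make that concrete, here is a sketch of standing up a GKE cluster with the `gcloud` CLI. The cluster name and zone are hypothetical, and this assumes the Google Cloud SDK is installed and a project is already configured.

```shell
# Create a three-node Kubernetes cluster (name and zone are placeholders)
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials demo-cluster \
  --zone us-central1-a

# Verify the nodes are up
kubectl get nodes
```

GKE provisions and manages the control plane for you, which is why there is no step here for installing or configuring Kubernetes masters.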
When public clouds first began gaining popularity, it seemed that providers were quick to append the phrase “as a service” to everything imaginable, as a way of indicating that a given application, service, or infrastructure component was designed to run in the cloud. It should therefore come as no surprise that Container as a Service, or CaaS, refers to a cloud-based container environment. But there is a bit more to the CaaS story than this.
It’s finally here: the Rancher you’ve all been waiting for. Rancher 2.0 is now in preview mode and available to deploy! Rancher 2.0 brings us a whole new Kubernetes-based structure, with new features like platform-wide multi-select, adoption of existing Kubernetes clusters, and much, much more. If you’re looking to dive in with Rancher 2.0, you’ve come to the right place. Assumptions: you have a Linux host with at least 4 GB of RAM.
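With that host in hand, starting the preview is a single Docker command. This is a sketch only: the image tag and published port below reflect the 2.0 preview at the time of writing and may differ for your release, so check the release notes for the exact tag.

```shell
# Start the Rancher 2.0 preview server (image tag and port are assumptions)
docker run -d --restart=unless-stopped \
  -p 8080:8080 \
  rancher/server:preview
```

Once the container is running, the Rancher UI should be reachable on the host at the published port.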
If you’ve followed the container space recently, you’ve likely seen the influx of Kubernetes-related technologies being announced. So, when another one comes along, it’s easy to be less than excited about it. However, in the case of Rancher’s recent product announcement, it’s well worth your time. The engineering team at Rancher Labs has been working on some new ideas that I think will have a real influence on the way we all think about Kubernetes (K8s).
This latest release makes it possible to manage all Kubernetes clusters under a single Rancher instance.
Update: This tutorial was updated for Rancher 2.x in 2019 here. Any time an organization, team, or developer adopts a new platform, there are certain challenges during the setup and configuration process. Often installations have to be restarted from scratch and workloads are lost. This leaves adopters apprehensive about moving forward with new technologies. The cost, risk, and effort are too great in today’s business environment. With Rancher, we’ve established a clear container installation and upgrade path so no work is thrown away.