Kubernetes is a commodity. Like electricity, it matters little which company it comes from, only that it’s there when you want to turn on the lights.
If you use a hosted Kubernetes service from a provider such as Google (GKE), Amazon (EKS), Microsoft (AKS), Alibaba, Tencent, or Huawei, you can spin up a cluster directly from the Rancher interface. As new companies offer Kubernetes as a service, Rancher will support them.
Hosted Kubernetes doesn’t work for everyone, so if you prefer to use compute instances, Rancher has you covered. With a few clicks you can launch instances in EC2, GCP, Azure, DigitalOcean, or one of a dozen other providers, and Rancher will install Kubernetes onto those nodes for you. Tell Rancher which nodes should run etcd, which should be the management plane, and which are workers, and it will provision them exactly as you’ve requested.
A traditional Kubernetes installation introduces dependencies on the hosts, as it requires that you install base Kubernetes components and provision networking before joining nodes to the cluster. This makes it harder for you to move quickly and adapt to changes in the environment.
When you use Rancher, the only requirement for a host is that it runs Linux and a supported version of Docker. With a single command you can deploy and scale clusters, and you can perform zero-downtime upgrades of Kubernetes with full rollback support if something goes awry.
Rancher and RKE work on both x86 and ARM architectures. Rancher is actively involved in bringing Kubernetes to IoT and edge computing, including smart cities and sensor-data processing for enterprises looking to reduce energy consumption and waste.
If the teams in your organization are already using Kubernetes in any location, on any provider, they can continue to do so. Use Rancher to import those clusters and manage them alongside any other cluster that you’re running.
You already model your infrastructure as code, using tools like Terraform and CloudFormation to deploy systems and Ansible, Puppet, or Chef to configure them. Now you can do the same for your Kubernetes installations. The Rancher Kubernetes Engine (RKE) uses YAML to define the cluster, and deploys it with a single command.
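As a sketch of what that definition looks like, a minimal `cluster.yml` for RKE assigns a role to each node — the addresses and SSH user below are placeholders:

```yaml
# cluster.yml — a minimal RKE cluster definition (addresses and user are placeholders)
nodes:
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd]
  - address: 192.168.1.11
    user: ubuntu
    role: [worker]
  - address: 192.168.1.12
    user: ubuntu
    role: [worker]
```

Running `rke up` against this file is the single command that brings the cluster to the declared state.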
Your developers can use RKE on their local machines and define the cluster alongside the application code. When they push their changes, Rancher can use the configuration to provision live clusters for staging or production, giving you the same confidence with Kubernetes that you already have with containers.
When you use someone else’s Kubernetes, you’re on their schedule for upgrades. Their schedule might be different from yours, and you want to be as close to the Kubernetes release cycle as possible.
When you use RKE, you’re using upstream Kubernetes. Rancher Labs bundles new Kubernetes releases into RKE shortly after they ship, and you decide when to deploy them. RKE executes zero-downtime upgrades of your clusters, and because everything runs within Docker containers, it’s easy to roll back to an earlier release.
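As an illustrative sketch, an upgrade amounts to pinning a newer release in `cluster.yml` and re-running `rke up`; the version tag below is an example, not a recommendation:

```yaml
# cluster.yml — bump the pinned release, then run `rke up` to upgrade in place
kubernetes_version: v1.21.5-rancher1-1
```

In the simplest case, rolling back is re-applying the earlier configuration the same way; for etcd itself, RKE also supports snapshot and restore.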