This page explains how to upgrade your Kubernetes cluster. If you are upgrading an existing Kubernetes setup to require plane isolation, skip ahead to Upgrading to require plane isolation below.
Before you start the upgrade, checking the state of several vital components can help the upgrade go as smoothly as possible:

- All hosts should be in `Active` state in the **Infrastructure** -> **Hosts** view.
- The Kubernetes stack should be `Up to Date` in the **Kubernetes** -> **Infrastructure Stacks** view.
- `etcdctl cluster-health` should report a healthy cluster (check this in every `etcd` instance). The output should list `member X is healthy` for every `etcd` instance and end with `cluster is healthy`.
- All nodes should be in `Ready` state when checking the status using `kubectl get nodes`.
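The etcd and node checks above can be scripted. The sketch below parses sample output rather than querying a live cluster; the member IDs and hostnames are made up, and on a real setup you would feed in the actual `etcdctl cluster-health` and `kubectl get nodes` output instead.

```shell
# Sample `etcdctl cluster-health` output (illustrative, not from a live cluster).
etcd_health='member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://etcd-1:2379
member 924e2e83e93f2560 is healthy: got healthy result from http://etcd-2:2379
cluster is healthy'

# Every member line should say "is healthy", and the final line gives the verdict.
healthy_members=$(printf '%s\n' "$etcd_health" | grep -c '^member .* is healthy')
verdict=$(printf '%s\n' "$etcd_health" | tail -n 1)

# Sample `kubectl get nodes` output (also illustrative).
nodes='NAME      STATUS     AGE
host-01   Ready      12d
host-02   NotReady   12d'

# Count nodes whose STATUS column is anything other than Ready.
not_ready=$(printf '%s\n' "$nodes" | tail -n +2 | awk '$2 != "Ready" {c++} END {print c+0}')

echo "healthy etcd members: $healthy_members"
echo "etcd verdict: $verdict"
echo "nodes not Ready: $not_ready"
```

A result other than `cluster is healthy`, or any node not in `Ready` state, should be resolved before starting the upgrade.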
Once the stack is in `Upgraded` state, click **Upgraded: Finish Upgrade**.
The upgrade is now complete. To verify the health of your cluster after the upgrade, run the Kubernetes-specific steps from the checklist again.
## Upgrading to require plane isolation

This part of the documentation is only needed when you want to migrate from a setup without plane isolation/resiliency planes to one that uses them.
The migration process is performed in two stages.
Confirm that your environment has enough hosts with labels for the planes. You can either add new hosts or use existing hosts:

- **Compute plane**: hosts labeled with `compute=true`. These are the nodes already registered to Kubernetes that show up when you run `kubectl get node`. This step is critically important because, without the label, Kubernetes pods will be orphaned on the host during this upgrade. If you have hosts running the kubelet and proxy containers that you do not want in the compute plane, you can follow the steps below for removing them from the compute plane. You can also add more hosts and label these hosts with `compute=true`.
- **Etcd plane**: make sure the `etcd=true` labels are on the hosts running your `etcd` instances.
- **Orchestration plane**: hosts labeled with `orchestration=true`. You can get away with 1 host, but you sacrifice high availability. In the event of this host failing, some Kubernetes features (the API, rescheduling pods in the event of failure, etc.) will not work until a new host is provisioned.
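As a quick sanity check before the upgrade, you can count how many hosts carry each plane label. The listing below is sample data, not live output; on a real setup, compare against the **Infrastructure** -> **Hosts** view (or `kubectl get nodes --show-labels` for the compute nodes).

```shell
# Illustrative host/label inventory; replace with your real environment's data.
hosts='host-01 compute=true
host-02 compute=true
host-03 etcd=true
host-04 etcd=true
host-05 etcd=true
host-06 orchestration=true'

# Count hosts per plane label.
compute_count=$(printf '%s\n' "$hosts" | grep -c 'compute=true')
etcd_count=$(printf '%s\n' "$hosts" | grep -c 'etcd=true')
orchestration_count=$(printf '%s\n' "$hosts" | grep -c 'orchestration=true')

echo "compute=$compute_count etcd=$etcd_count orchestration=$orchestration_count"
```

An odd number of `etcd=true` hosts (3 or 5) keeps quorum possible after a single host failure, and more than one `orchestration=true` host avoids the single point of failure described above.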
Upgrade the Kubernetes infrastructure stack, selecting `required` for Plane Isolation.
Once the stack is in `Upgraded` state, click **Upgraded: Finish Upgrade**.
WARNING: If you plan to remove any hosts from the compute plane, bare pods that aren’t part of a replication controller or similar will not be rescheduled. This is normal behavior.
Hosts in the compute plane are running the kubelet and proxy containers. To remove a host from the compute plane:

1. If the host has a `compute=true` label, remove the label from the host. This prevents the kubelet and proxy containers from being re-scheduled onto the host after these containers are deleted.
2. Using `kubectl` through the remote CLI or shell, run `kubectl delete node <HOST>`. You can find the hostname (i.e. `<HOST>`) from the Rancher UI or from kubectl by running `kubectl get node`. Wait for all pods to be deleted before moving to the next optional step.
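The removal sequence can be sketched as below. The `kubectl` function here is a stub that only echoes the command, so the flow can be dry-run without a cluster; drop the stub to run the real commands through the remote CLI or shell. `host-01` is a placeholder hostname.

```shell
# Stub standing in for the real kubectl CLI; remove this function to run for real.
kubectl() {
  echo "kubectl $*"
}

# Placeholder hostname; take the real one from `kubectl get node` or the Rancher UI.
HOST="host-01"

kubectl get node                    # confirm the hostname as Kubernetes sees it
cmd=$(kubectl delete node "$HOST")  # then remove the node from Kubernetes
echo "$cmd"
```

Remember that the `compute=true` label must already be removed from the host, otherwise the kubelet and proxy containers will simply be re-scheduled onto it.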