RKE supports adding/removing nodes for worker and controlplane hosts.
To add nodes, update the original cluster.yml file with the additional nodes and specify their role in the Kubernetes cluster.
To remove nodes, delete their entries from the nodes list in the original cluster.yml.
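For illustration, here is a minimal sketch of the nodes section of a cluster.yml; the addresses, SSH user, and role assignments are placeholders rather than values from this guide. Adding a node means appending an entry like the last one shown, and removing a node means deleting its entry:

```yaml
nodes:
  # existing control plane / etcd host (placeholder address and user)
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd]
  # existing worker
  - address: 192.168.1.11
    user: ubuntu
    role: [worker]
  # new worker being added; delete this entry to remove the node again
  - address: 192.168.1.12
    user: ubuntu
    role: [worker]
```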
After you’ve made changes to add/remove nodes, run rke up with the updated cluster.yml.
You can add or remove only worker nodes by running rke up --update-only. This ignores everything in the cluster.yml except the worker nodes.
Note: Even when using --update-only, components that are not specifically related to nodes, such as addons, may still be deployed or updated.
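As a sketch of the worker-only workflow, assuming the updated cluster.yml sits in the current directory (where RKE looks for it by default):

```sh
# Reconcile only the worker nodes against the updated cluster.yml;
# other node roles are ignored, though non-node items such as addons
# may still be deployed or updated, per the note above.
rke up --update-only
```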
To remove the Kubernetes components from the nodes, use the rke remove command.
Warning: This command is irreversible and will destroy the Kubernetes cluster, including etcd snapshots on S3. If there is a disaster and your cluster is inaccessible, refer to the process for restoring your cluster from a snapshot.
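For reference, the restore process mentioned above is driven by the RKE etcd subcommands; a minimal sketch, with a placeholder snapshot name:

```sh
# Recreate the cluster from a previously saved etcd snapshot (name is illustrative).
rke etcd snapshot-restore --name mysnapshot
```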
The rke remove command removes the following Kubernetes components from each node in the cluster.yml:
etcd
kube-apiserver
kube-controller-manager
kubelet
kube-proxy
nginx-proxy
The cluster’s etcd snapshots are removed, including both local snapshots and snapshots that are stored on S3.
Note: Pods are not removed from the nodes. If the node is re-used, the pods will automatically be removed when the new Kubernetes cluster is created.