Available as of v2.5
The cluster registration feature replaced the feature to import clusters.
The control that Rancher has to manage a registered cluster depends on the type of cluster. For details, see Management Capabilities for Registered Clusters.
Registering EKS clusters now provides additional benefits.
If your existing Kubernetes cluster already has a cluster-admin role defined, you must have this cluster-admin privilege to register the cluster in Rancher.
To apply the privilege, run the following command before running the kubectl command to register the cluster:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user [USER_ACCOUNT]
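If you are not sure whether your current kubectl context already has this privilege, a quick check (a minimal sketch; kubectl auth can-i is standard kubectl):

# Prints "yes" if the current user can perform any action in any namespace,
# which is effectively the cluster-admin privilege
kubectl auth can-i '*' '*' --all-namespaces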
By default, GKE users are not given this privilege, so you will need to run the command before registering GKE clusters. To learn more about role-based access control for GKE, refer to the official Google Kubernetes Engine documentation.
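For GKE specifically, one common pattern is to bind cluster-admin to the Google account that gcloud is authenticated as. This is a sketch, assuming the gcloud CLI is installed and authenticated; the cluster name and zone are placeholders:

# Point kubectl at the GKE cluster (placeholder cluster name and zone)
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Bind cluster-admin to the currently authenticated Google account
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user "$(gcloud config get-value account)"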
If you are registering a K3s cluster, make sure the cluster.yml is readable. It is protected by default. For details, refer to Configuring a K3s cluster to enable importation to Rancher.
If the cluster is behind an HTTP proxy, the agent that Rancher deploys into the cluster during registration must be able to reach the Rancher server. This typically means setting the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables on the agent.
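One way to set these variables on the agent after registration, sketched with placeholder proxy addresses (kubectl set env is standard kubectl; cattle-cluster-agent is the agent deployment Rancher creates, and the NO_PROXY list should be adjusted for your network):

# Set proxy variables on the cluster agent in the downstream cluster
kubectl set env deployment/cattle-cluster-agent -n cattle-system \
  HTTP_PROXY=http://proxy.example.com:8080 \
  HTTPS_PROXY=http://proxy.example.com:8080 \
  NO_PROXY=127.0.0.1,localhost,cattle-system.svc,10.0.0.0/8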
To register the cluster, run the kubectl command displayed in the Rancher UI against your cluster. You can first confirm that kubectl is pointed at the correct cluster with kubectl get nodes. If your Rancher server uses a self-signed certificate, the kubectl command will fail with a certificate signed by unknown authority error; in that case, run the curl command displayed in the UI instead, which skips certificate verification.
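Rancher generates the exact commands for each cluster; they typically look like the following sketch, where the server URL and token are placeholders:

# Standard registration command (placeholder URL and token)
kubectl apply -f https://rancher.example.com/v3/import/abc123token.yaml

# Variant for a Rancher server with a self-signed certificate
curl --insecure -sfL https://rancher.example.com/v3/import/abc123token.yaml | kubectl apply -f -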
Result: Your cluster is registered and assigned to two projects: Default, containing the namespace default, and System, containing the namespaces cattle-system, ingress-nginx, kube-public, and kube-system.
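You can confirm from the cluster side by listing the namespaces; the cattle-system namespace created by the Rancher agent should now be present alongside the standard Kubernetes namespaces:

kubectl get namespaces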
Note: You cannot re-register a cluster that is currently active in a Rancher setup.
The K3s server needs to be configured to allow writing to the kubeconfig file.
This can be accomplished by passing --write-kubeconfig-mode 644 as a flag during installation:
$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
The option can also be specified using the environment variable K3S_KUBECONFIG_MODE:
$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
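To verify the resulting permissions on Linux (a quick check using GNU stat; the path below is the default K3s kubeconfig location):

# Should report mode 644 on the K3s kubeconfig
stat -c '%a %n' /etc/rancher/k3s/k3s.yaml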
The control that Rancher has to manage a registered cluster depends on the type of cluster.
After registering a cluster, the cluster owner can:
- Manage cluster access through role-based access control
- Enable monitoring, alerts, and notifiers
- Enable logging
- Enable Istio
- Manage projects and workloads
K3s is a lightweight, fully compliant Kubernetes distribution.
When a K3s cluster is registered in Rancher, Rancher will recognize it as K3s. The Rancher UI will expose the features available to all registered clusters, along with the following additional features for editing and upgrading the cluster:
- The ability to upgrade the K3s version
- The ability to configure the maximum number of nodes that will be upgraded concurrently
- The ability to see a read-only version of the K3s cluster's configuration arguments and the environment variables used to launch each node
Registering an Amazon EKS cluster allows Rancher to treat it as though it were created in Rancher.
Amazon EKS clusters can now be registered in Rancher. For the most part, registered EKS clusters and EKS clusters created in Rancher are treated the same way in the Rancher UI, except for deletion.
When you delete an EKS cluster that was created in Rancher, the cluster is destroyed. When you delete an EKS cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists and you can still access it in the same way you did before it was registered in Rancher.
The capabilities for registered EKS clusters are listed in the table on this page.
It is a Kubernetes best practice to back up the cluster before upgrading. When upgrading a high-availability K3s cluster with an external database, back up the database in whichever way is recommended by the relational database provider.
The concurrency is the maximum number of nodes that are permitted to be unavailable during an upgrade. If the number of unavailable nodes exceeds the concurrency, the upgrade will fail. For example, with a concurrency of 2, at most two nodes are upgraded at a time, and the upgrade fails if a third node becomes unavailable. If an upgrade fails, you may need to repair or remove failed nodes before the upgrade can succeed.
In the K3s documentation, controlplane nodes are called server nodes. These nodes run the Kubernetes master components, which maintain the desired state of the cluster. In K3s, workloads can be scheduled to these controlplane nodes by default.
Also in the K3s documentation, nodes with the worker role are called agent nodes. Any workloads or pods that are deployed in the cluster can be scheduled to these nodes by default.
Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two plans to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes.
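Once an upgrade is in progress, you can watch the plans Rancher deploys and the jobs the controller creates for each node. This is a sketch that assumes the plans live in the cattle-system namespace, where Rancher runs the controller:

# Plans deployed by Rancher (one for controlplane nodes, one for workers)
kubectl get plans -n cattle-system

# Jobs the system upgrade controller creates as it applies a plan to each node
kubectl get jobs -n cattle-system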
To enable debug logging on the system upgrade controller deployment, edit the configmap to set the debug environment variable to true. Then restart the system-upgrade-controller pod.
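A sketch of that procedure; the configmap name varies by installation, and the pod label used below is an assumption, so list the configmaps and pods in cattle-system first if these names do not match:

# Find the controller's configmap
kubectl get configmaps -n cattle-system

# Edit it and set the debug variable to true, e.g. SYSTEM_UPGRADE_CONTROLLER_DEBUG: "true"
kubectl edit configmap -n cattle-system <CONFIGMAP_NAME>

# Restart the controller pod so it picks up the change
kubectl delete pod -n cattle-system -l upgrade.cattle.io/controller=system-upgrade-controller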
Logs created by the system-upgrade-controller can be viewed by running this command:
kubectl logs -n cattle-system deployment/system-upgrade-controller
The current status of the plans can be viewed with this command:
kubectl get plans -A -o yaml
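To reduce the full YAML to a one-line summary per plan, jsonpath can help (a sketch; status.applying lists the nodes a plan is currently being applied to in the system upgrade controller's CRD):

# One line per plan: its name and the nodes it is currently applying to
kubectl get plans -A -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.applying}{"\n"}{end}'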
If the cluster becomes stuck in upgrading, restart the system-upgrade-controller.
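Assuming the controller runs as a deployment named system-upgrade-controller in cattle-system, consistent with the log command above, the restart can be done with a standard rollout restart:

kubectl rollout restart deployment system-upgrade-controller -n cattle-system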
To prevent issues when upgrading, the Kubernetes upgrade best practices should be followed.
For all types of registered Kubernetes clusters except for K3s Kubernetes clusters, Rancher doesn’t have any information about how the cluster is provisioned or configured.
Therefore, when Rancher registers a cluster, it assumes that several capabilities are disabled by default. Rancher assumes this in order to avoid exposing UI options to the user when the capabilities are not enabled in the registered cluster.
However, if the cluster has a certain capability, such as the ability to use a pod security policy, a user of that cluster might still want to select pod security policies for the cluster in the Rancher UI. In order to do that, the user will need to manually indicate to Rancher that pod security policies are enabled for the cluster.
By annotating a registered cluster, it is possible to indicate to Rancher that a cluster was given a pod security policy, or another capability, outside of Rancher.
This example annotation indicates that a pod security policy is enabled:
"capabilities.cattle.io/pspEnabled": "true"
The following annotation indicates Ingress capabilities. Note that the values of non-primitive objects need to be JSON encoded, with quotes escaped.

"capabilities.cattle.io/ingressCapabilities": "[{\"customDefaultBackend\":true,\"ingressProvider\":\"asdf\"}]"
These capabilities can be annotated for the cluster:
- ingressCapabilities
- loadBalancerCapabilities
- nodePoolScalingSupported
- nodePortRange
- pspEnabled
- taintSupport
All the capabilities and their type definitions can be viewed in the Rancher API view, at [Rancher Server URL]/v3/schemas/capabilities.
To annotate a registered cluster, go to the cluster view in Rancher, choose ⋮ > Edit, and expand the Labels & Annotations section. Add an annotation to the cluster with the format capabilities/<capability>: <value>, where value is the cluster capability that will be overridden by the annotation. Then click Save.
Result: The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities.