When setting up your cluster.yml for RKE, there are many options that control how RKE launches Kubernetes.

Several options can be configured in the cluster configuration file. Example YAML files containing all of the options are available:

Configuring Nodes

Configuring Kubernetes Cluster

Cluster Level Options

Cluster Name

By default, the name of your cluster will be local. If you want a different name, you would use the cluster_name directive to change the name of your cluster. The name will be set in your cluster’s generated kubeconfig file.

cluster_name: mycluster

Supported Docker Versions

By default, RKE will check the installed Docker version on all hosts and fail with an error if the version is not supported by Kubernetes. The list of supported Docker versions is set specifically for each Kubernetes version. To override this behavior, set this option to true.

The default value is false.

ignore_docker_version: true

Kubernetes Version

You can select which version of Kubernetes to install for your cluster. Each version of RKE has a specific list of supported Kubernetes versions. If a version is defined in kubernetes_version and is not found in this list, the default version is used. If you want to use a different version than listed below, please use the system images option.

The supported Kubernetes versions for RKE v0.1.13 are:

Kubernetes version
v1.12.3-rancher1-1
v1.11.5-rancher1-1 (default)
v1.10.11-rancher1-1


You can define the Kubernetes version as follows:

kubernetes_version: "v1.11.5-rancher1-1"

If both kubernetes_version and system_images are defined, the system_images configuration takes precedence over kubernetes_version.
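As a sketch of what such an override might look like (the image name and tag below are illustrative assumptions, not a verified list; consult the system images documentation for your RKE release for the actual values):

```yaml
# Illustrative only: the repository and tag shown here are
# placeholders, not values verified against this RKE release.
system_images:
  kubernetes: rancher/hyperkube:v1.11.5-rancher1
```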

Cluster Level SSH Key Path

RKE connects to your hosts over SSH. Typically, each node has its own SSH key path, set with ssh_key_path in the nodes section. If a single SSH key can access all hosts in your cluster configuration file, you can instead set the path to that key at the top level of the file; otherwise, set the key path on each node.

If ssh key paths are defined at the cluster level and at the node level, the node-level key will take precedence.

ssh_key_path: ~/.ssh/test
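To illustrate the precedence rule, the sketch below (the address, user, and key file names are placeholders) sets a cluster-level key and overrides it for one node:

```yaml
# Cluster-level key, used for any node that does not set its own
ssh_key_path: ~/.ssh/test
nodes:
  - address: 1.1.1.1
    user: ubuntu
    role: [controlplane, etcd, worker]
    # Node-level key takes precedence over the cluster-level key for this host
    ssh_key_path: ~/.ssh/id_rsa_node
```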

SSH Agent

RKE supports using SSH connection configuration from a local SSH agent. The default value for this option is false. To use a local SSH agent, set this option to true.

ssh_agent_auth: true

If you want to use an SSH private key with a passphrase, you will need to add your key to ssh-agent and have the environment variable SSH_AUTH_SOCK configured.

$ eval "$(ssh-agent -s)"
Agent pid 3975
$ ssh-add /home/user/.ssh/id_rsa
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ echo $SSH_AUTH_SOCK
/tmp/ssh-118TMqxrXsEx/agent.3974

Add-ons Job Timeout

You can define add-ons to be deployed after the Kubernetes cluster comes up; these are deployed using Kubernetes jobs. RKE will stop attempting to retrieve the job status after the timeout, which is specified in seconds. The default timeout is 30 seconds.

addon_job_timeout: 30
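For context, the timeout applies to the jobs that deploy your add-on manifests. A minimal sketch combining the timeout with an inline add-on (the manifest content and namespace name are placeholder assumptions) might look like:

```yaml
addon_job_timeout: 30
# Inline manifest deployed as a Kubernetes job after the cluster comes up;
# the namespace name here is just a placeholder.
addons: |-
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: example-addon
```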