When setting up your cluster.yml for RKE, many different options can be configured to control how RKE launches Kubernetes.
There are several options that can be configured in the cluster configuration file. Several example YAML files that contain all of the options are available.
By default, the name of your cluster will be local. If you want a different name, use the cluster_name directive to change it. The name will be set in your cluster's generated kubeconfig file.
cluster_name: mycluster
By default, RKE will check the installed Docker version on all hosts and fail with an error if the version is not supported by Kubernetes. The list of supported Docker versions is set specifically for each Kubernetes version. To override this behavior, set this option to true.
The default value is false.
ignore_docker_version: true
For information on upgrading Kubernetes, refer to the upgrade section.
Rolling back to previous Kubernetes versions is not supported.
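The Kubernetes version to install can be pinned with the kubernetes_version directive in cluster.yml. A minimal sketch, assuming an illustrative version string (check which versions your RKE release actually supports, e.g. with `rke config --list-version --all`):

```yaml
# Pin the Kubernetes version for the cluster.
# The version string below is an illustrative assumption; it must be
# one of the versions supported by your RKE release.
kubernetes_version: "v1.24.4-rancher1-1"
```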
For some operating systems, including RancherOS (ROS) and CoreOS, RKE stores its resources under a different prefix path. For these operating systems, the default prefix path is:
/opt/rke
So /etc/kubernetes will be stored in /opt/rke/etc/kubernetes, /var/lib/etcd will be stored in /opt/rke/var/lib/etcd, and so on.
To change the default prefix path for any cluster, you can use the following option in the cluster configuration file cluster.yml:
prefix_path: /opt/custom_path
RKE connects to hosts using SSH. Typically, each node has its own SSH key path (ssh_key_path) in the nodes section, but if you have a single SSH key that can access all hosts in your cluster configuration file, you can set the path to that key at the top level. Otherwise, set the SSH key path per node in nodes.
If ssh key paths are defined at the cluster level and at the node level, the node-level key will take precedence.
ssh_key_path: ~/.ssh/test
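To illustrate the precedence rule, a minimal sketch of both levels in cluster.yml (the addresses, user names, and key paths below are placeholder assumptions):

```yaml
ssh_key_path: ~/.ssh/cluster_key        # cluster-level default key
nodes:
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd, worker]
    # no ssh_key_path here, so the cluster-level key is used
  - address: 192.168.1.11
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/node_key       # node-level key takes precedence for this node
```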
RKE supports using SSH connection configuration from a local SSH agent. The default value for this option is false. To authenticate with a local SSH agent, set this option to true.
ssh_agent_auth: true
If you want to use an SSH private key with a passphrase, you will need to add your key to ssh-agent and have the environment variable SSH_AUTH_SOCK configured.
$ eval "$(ssh-agent -s)"
Agent pid 3975
$ ssh-add /home/user/.ssh/id_rsa
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ echo $SSH_AUTH_SOCK
/tmp/ssh-118TMqxrXsEx/agent.3974
You can define add-ons to be deployed after the Kubernetes cluster comes up; RKE deploys them using Kubernetes jobs. RKE will stop attempting to retrieve the job status after the timeout, which is specified in seconds. The default timeout value is 30 seconds.
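As a sketch, the add-ons and the job timeout are configured together in cluster.yml via the addons and addon_job_timeout directives (the manifest below is an illustrative assumption):

```yaml
addon_job_timeout: 30     # seconds RKE waits on the add-on deployment job
addons: |-
  ---
  # Example add-on manifest (illustrative): creates a namespace
  apiVersion: v1
  kind: Namespace
  metadata:
    name: example-namespace
```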