When Rancher installs Kubernetes, it uses RKE as the Kubernetes distribution.
This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster.
You can configure the Kubernetes options one of two ways:
- Rancher UI: Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
- Cluster Config File: Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.
The RKE cluster config options are nested under the rancher_kubernetes_engine_config directive. For more information, see the section about the cluster config file.
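As a minimal sketch of this nesting, RKE options such as the ingress provider sit under the directive, while Rancher-level options stay at the top level (the field names here mirror the full example later on this page):

```yaml
# Rancher-level option (top level of the cluster config file)
enable_network_policy: false
# RKE options are nested under this directive
rancher_kubernetes_engine_config:
  ingress:
    provider: nginx
```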
This section is a cluster configuration reference, covering the following topics:
- Rancher UI Options
- Advanced Options
- Cluster config file
- Rancher specific parameters
Rancher UI Options
When creating a cluster using one of the options described in Rancher Launched Kubernetes, you can configure basic Kubernetes options using the Cluster Options section.
Kubernetes Version
The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on hyperkube.
Network Providers
Note: After you launch the cluster, you cannot change your network provider. Choose carefully: Kubernetes does not support switching between network providers, so changing providers on an existing cluster would require you to tear down the entire cluster and all of its applications.
Out of the box, Rancher is compatible with the following network providers: Canal (the default), Flannel, Calico, and Weave.
Notes on Weave:
When Weave is selected as the network provider, Rancher automatically enables encryption by generating a random password. If you want to specify the password manually, see how to configure your cluster using a config file and the Weave Network Plug-in Options.
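As a hedged sketch, setting the Weave password through the config file could look like the following (the weave_network_provider options come from the RKE network plug-in documentation; the password value is a placeholder):

```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: weave
    weave_network_provider:
      password: "MY_WEAVE_PASSWORD"  # placeholder; choose your own secret
```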
Project Network Isolation
Project network isolation is used to enable or disable communication between pods in different projects.
To enable project network isolation as a cluster option, you must use an RKE network plugin that supports the enforcement of Kubernetes network policies. In Rancher v2.5.8 and later, this includes plugins such as Canal or the Cisco ACI plugin; in earlier versions, only Canal is supported.
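In the cluster config file, this setting corresponds to the top-level enable_network_policy flag shown in the full example later on this page:

```yaml
# Rancher-level setting, not nested under rancher_kubernetes_engine_config
enable_network_policy: true
```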
Kubernetes Cloud Providers
You can configure a Kubernetes cloud provider. If you want to use volumes and storage in Kubernetes, you typically must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you need to select the aws cloud provider.
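As a sketch, assuming the RKE cloud_provider directive (see the RKE cloud provider documentation for provider-specific fields), selecting the aws cloud provider in a config file could look like:

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: aws
```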
Note: If the cloud provider you want to use is not listed as an option, you will need to use the config file option to configure the cloud provider. Please reference the RKE cloud provider documentation on how to configure the cloud provider.
If you want to see all the configuration options for a cluster, please click Show advanced options on the bottom right. The advanced options are described below:
Private Registries
The cluster-level private registry configuration is only used for provisioning clusters.
There are two main ways to set up private registries in Rancher: setting the global default registry through the Settings tab in the global view, or setting a private registry in the advanced options of the cluster-level settings. The global default registry is intended for air-gapped setups and for registries that do not require credentials. The cluster-level private registry is intended for any setup in which the private registry requires credentials.
If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry.
- System images are components needed to maintain the Kubernetes cluster.
- Add-ons are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server.
See the RKE documentation on private registries for more information on the private registry for components applied during the provisioning of the cluster.
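A hedged sketch of a cluster-level private registry that requires credentials, using the private_registries directive from the RKE documentation (the registry URL and credentials are placeholders):

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    - url: registry.example.com   # placeholder registry URL
      user: REGISTRY_USER         # placeholder credentials
      password: REGISTRY_PASSWORD
      is_default: true            # pull system images from this registry
```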
Authorized Cluster Endpoint
Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.
The authorized cluster endpoint is available only in clusters that Rancher has provisioned using RKE. It is not available for clusters in hosted Kubernetes providers, such as Amazon’s EKS. Additionally, the authorized cluster endpoint cannot be enabled for RKE clusters that are registered with Rancher; it is available only on Rancher-launched Kubernetes clusters.
This is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the controlplane role and the default Kubernetes self-signed certificates.
For more detail on how an authorized cluster endpoint works and why it is used, refer to the architecture section.
We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the recommended architecture section.
For information on using the Rancher UI to set up node pools in an RKE cluster, refer to this page.
The following options are available when you create clusters in the Rancher UI. They are located under Advanced Options.
NGINX Ingress
Option to enable or disable the NGINX ingress controller.
Node Port Range
Option to change the range of ports that can be used for NodePort services. The default is 30000-32767.
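In the config file, this option maps to the kube-api service settings, as in the full example later on this page:

```yaml
rancher_kubernetes_engine_config:
  services:
    kube_api:
      service_node_port_range: 30000-32767
```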
Metrics Server Monitoring
Option to enable or disable Metrics Server.
Pod Security Policy Support
Option to enable and select a default Pod Security Policy. You must have an existing Pod Security Policy configured before you can use this option.
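A sketch in config-file form, assuming the Rancher-level default_pod_security_policy_template_id field and the RKE kube_api setting (the "restricted" template name is an example; use the name of your existing policy):

```yaml
# Rancher-level: name of an existing Pod Security Policy template (example value)
default_pod_security_policy_template_id: "restricted"
rancher_kubernetes_engine_config:
  services:
    kube_api:
      pod_security_policy: true
```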
Docker Version on Nodes
Option to require that a supported Docker version is installed on the nodes added to the cluster, or to allow nodes with unsupported Docker versions.
Docker Root Directory
If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (the default is /var/lib/docker), specify the correct Docker Root Directory in this option.
Recurring etcd Snapshots
Option to enable or disable recurring etcd snapshots.
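In the config file, recurring snapshots correspond to the etcd backup_config block shown in the full example later on this page:

```yaml
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12  # take a snapshot every 12 hours
        retention: 6        # keep the 6 most recent snapshots
```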
Agent Environment Variables
Available as of v2.5.6
Option to set environment variables for Rancher agents, specified as key-value pairs. For example, if the Rancher agent requires a proxy to communicate with the Rancher server, the NO_PROXY environment variable can be set using agent environment variables.
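A hedged sketch of this option in config-file form, assuming the agent_env_vars field from the Rancher agent options documentation (the exclusion list is a placeholder):

```yaml
agent_env_vars:
  - name: NO_PROXY
    value: "127.0.0.1,localhost,.svc,.cluster.local"  # placeholder exclusions
```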
Cluster Config File
Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration. The system_images option is not supported when creating a cluster with the Rancher UI or API.
- To edit an RKE config file directly from the Rancher UI, click Edit as YAML.
- To read from an existing RKE file, click Read from a file.
Config File Structure in Rancher v2.3.0+
RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as RKE config files, but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the rancher_kubernetes_engine_config directive in the cluster config file. Cluster config files created with earlier versions of Rancher need to be updated to this format. An example cluster config file is included below.
```yaml
#
# Cluster Config
#
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
#
# Rancher Config
#
rancher_kubernetes_engine_config: # Your RKE template config goes here.
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
#
#  # Currently only nginx ingress provider is supported.
#  # To disable ingress controller, set `provider: none`
#  # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
  ingress:
    provider: nginx
  kubernetes_version: v1.15.3-rancher3-1
  monitoring:
    provider: metrics-server
#
#  # If you are using calico on AWS
#
#    network:
#      plugin: calico
#      calico_network_provider:
#        cloud_provider: aws
#
#  # To specify flannel interface
#
#    network:
#      plugin: flannel
#      flannel_network_provider:
#        iface: eth1
#
#  # To specify flannel interface for canal plugin
#
#    network:
#      plugin: canal
#      canal_network_provider:
#        iface: eth1
#
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
#
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10
#
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
  ssh_agent_auth: false
  windows_prefered_cluster: false
```
Default DNS provider
The table below indicates which DNS provider is deployed by default. See the RKE documentation on the DNS provider for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher.
| Rancher version | Kubernetes version | Default DNS provider |
| --- | --- | --- |
| v2.2.5 and higher | v1.14.0 and higher | CoreDNS |
| v2.2.5 and higher | v1.13.x and lower | kube-dns |
| v2.2.4 and lower | any | kube-dns |
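As a sketch of overriding the default, based on the dns directive from the RKE documentation:

```yaml
rancher_kubernetes_engine_config:
  dns:
    provider: coredns  # or kube-dns
```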
Rancher specific parameters
Besides the RKE config file options, there are also Rancher specific settings that can be configured in the Config File (YAML):
enable_cluster_monitoring
Option to enable or disable Cluster Monitoring.
enable_network_policy
Option to enable or disable Project Network Isolation.
Before Rancher v2.5.8, project network isolation is only available if you are using the Canal network plugin for RKE.
In v2.5.8+, project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
```yaml
local_cluster_auth_endpoint:
  enabled: true
  fqdn: "FQDN"
  ca_certs: "BASE64_CACERT"
```
Custom Network Plug-in
You can add a custom network plug-in by using the user-defined add-on functionality of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.
There are two ways that you can specify an add-on:
- In-line, directly in the cluster config file under the addons directive
- By referencing manifest files or URLs with the addons_include directive
For an example of how to configure a custom network plug-in by editing the cluster.yml, refer to the RKE documentation.
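As a hedged sketch, a user-defined add-on could be supplied in-line via addons or by reference via addons_include, both documented RKE directives (the manifest URL and namespace below are placeholders):

```yaml
rancher_kubernetes_engine_config:
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: custom-cni   # placeholder namespace for the plug-in
  addons_include:
    - https://example.com/my-network-plugin.yaml  # placeholder manifest URL
```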