Hardening Guide - Rancher v2.3.x
This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.0-v2.3.2. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
This hardening guide describes how to secure the nodes in your cluster; it is recommended that you follow this guide before installing Kubernetes.
This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher.
For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.x.
The following profile definitions agree with the CIS benchmarks for Kubernetes.
A profile is a set of configurations that provide a certain amount of hardening. Generally, the more hardened an environment is, the more it affects performance.
Level 1
Items in this profile intend to:
- be practical and prudent
- provide a clear security benefit
- not inhibit the utility of the technology beyond acceptable means
Level 2
Items in this profile extend the "Level 1" profile and exhibit one or more of the following characteristics:
- are intended for environments or use cases where security is paramount
- act as a defense-in-depth measure
- may negatively inhibit the utility or performance of the technology
(See Appendix A for a full Ubuntu cloud-config example.)
Profile Applicability
Description
Configure sysctl settings to match what the kubelet would set if allowed.
Rationale
We recommend that users launch the kubelet with the --protect-kernel-defaults option. With this option set, the kernel parameters that the kubelet would otherwise attempt to change must be set manually on each host.
This supports the following control:
Audit
Run the following commands on each node and verify that the output matches the expected value:

sysctl vm.overcommit_memory
vm.overcommit_memory = 1

sysctl vm.panic_on_oom
vm.panic_on_oom = 0

sysctl kernel.panic
kernel.panic = 10

sysctl kernel.panic_on_oops
kernel.panic_on_oops = 1

sysctl kernel.keys.root_maxkeys
kernel.keys.root_maxkeys = 1000000

sysctl kernel.keys.root_maxbytes
kernel.keys.root_maxbytes = 25000000
Remediation
Set the following parameters in /etc/sysctl.d/90-kubelet.conf on all nodes:

vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxkeys=1000000
kernel.keys.root_maxbytes=25000000

Run sysctl -p /etc/sysctl.d/90-kubelet.conf to apply the settings.
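To confirm all six values at once after applying the file, a small shell loop can help; a minimal sketch, assuming a POSIX shell on the node:

# Print each hardened kernel parameter and its current value
for key in vm.overcommit_memory vm.panic_on_oom kernel.panic \
           kernel.panic_on_oops kernel.keys.root_maxkeys kernel.keys.root_maxbytes; do
  printf '%s = %s\n' "$key" "$(sysctl -n "$key")"
done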
Create a Kubernetes encryption configuration file on each of the RKE nodes that will be provisioned with the controlplane role:
NOTE: In Kubernetes 1.13 and later, the --experimental-encryption-provider-config flag has been renamed to --encryption-provider-config.
This configuration file will ensure that the Rancher RKE cluster encrypts secrets at rest, which Kubernetes does not do by default.
This supports the following controls:
On the control plane hosts for the Rancher HA cluster run:
stat /opt/kubernetes/encryption.yaml
Ensure that:
- the file is mode 0600
- the file is owned by root:root
- the file contains:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte base64 encoded string>
      - identity: {}
Where aescbc is the key type, and secret is populated with a 32-byte base64 encoded string.
Generate a key and an empty configuration file:

head -c 32 /dev/urandom | base64 -i -
touch /opt/kubernetes/encryption.yaml
Set the file ownership and permissions:

chown root:root /opt/kubernetes/encryption.yaml
chmod 0600 /opt/kubernetes/encryption.yaml
Set the contents of the file to:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <32-byte base64 encoded string>
      - identity: {}
Where secret is the 32-byte base64-encoded string generated in the first step.
NOTE: Files that are placed in /opt/kubernetes need to be mounted into the containers using the extra_binds functionality in RKE.
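Once the cluster is up, you can spot-check that secrets really are encrypted at rest by reading one directly from etcd; a minimal sketch, assuming etcdctl v3 is available on an etcd node, that the certificate file names below are adjusted to whatever is present in /etc/kubernetes/ssl on that node, and a hypothetical secret named my-secret in the default namespace:

# Read the raw etcd value for a secret; encrypted values begin with "k8s:enc:aescbc:v1:"
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/ssl/kube-ca.pem \
  --cert=/etc/kubernetes/ssl/<etcd-client-cert>.pem \
  --key=/etc/kubernetes/ssl/<etcd-client-key>.pem \
  get /registry/secrets/default/my-secret | hexdump -C | head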
Place the configuration file for Kubernetes audit logging on each of the control plane nodes in the cluster.
The Kubernetes API has an audit logging capability, which is the best way to track actions in the cluster.

This supports controls that require the following audit-related flags and feature gates to be configured on the API server:
- --audit-log-path
- --audit-log-maxage
- --audit-log-maxbackup
- --audit-log-maxsize
- AdvancedAuditing
On each control plane node, run:
stat /opt/kubernetes/audit.yaml
Ensure that the file contains:

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  - level: Metadata
On nodes with the controlplane role:
touch /opt/kubernetes/audit.yaml
chown root:root /opt/kubernetes/audit.yaml
chmod 0600 /opt/kubernetes/audit.yaml

Populate the file with the policy shown in the audit section above.
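Once the cluster has been provisioned with the kube-api audit options described later in this guide, you can confirm that events are being recorded; a minimal sketch, assuming the audit log path used later in this guide and that jq is installed on the node:

# Show the verb, user, and resource of the most recent audit events
tail -n 5 /var/log/kube-audit/audit-log.json | jq '{verb: .verb, user: .user.username, resource: .objectRef.resource}'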
Place the configuration file for Kubernetes event limit configuration on each of the control plane nodes in the cluster.
Set up the EventRateLimit admission control plugin to prevent clients from overwhelming the API server. The settings below are intended as an initial value and may need to be adjusted for larger clusters.
On nodes with the controlplane role run:
stat /opt/kubernetes/admission.yaml
stat /opt/kubernetes/event.yaml
For each file, ensure that:
- the file is mode 0600
- the file is owned by root:root
For admission.yaml, ensure that the file contains:

apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: /opt/kubernetes/event.yaml
For event.yaml, ensure that the file contains:

apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Server
    qps: 5000
    burst: 20000
touch /opt/kubernetes/admission.yaml
touch /opt/kubernetes/event.yaml
chown root:root /opt/kubernetes/admission.yaml
chown root:root /opt/kubernetes/event.yaml
chmod 0600 /opt/kubernetes/admission.yaml
chmod 0600 /opt/kubernetes/event.yaml

Populate each file with the contents shown in the audit section above.
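After the cluster is provisioned with the kube-api options described later in this guide, you can confirm that the API server is actually loading these files; a minimal sketch, assuming the RKE container is named kube-apiserver and that jq is installed:

# List the kube-apiserver arguments related to admission control
docker inspect kube-apiserver | jq -r '.[0].Args[]' | grep -E 'admission-control-config-file|enable-admission-plugins'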
Ensure that the etcd data directory has permissions of 700 or more restrictive.
etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should not be readable or writable by any group members or the world.
On the etcd server node, get the etcd data directory, passed as an argument --data-dir, from the command below:
ps -ef | grep etcd
Run the command below, based on the etcd data directory found above. For example:
stat -c %a /var/lib/rancher/etcd
Verify that the permissions are 700 or more restrictive.
Follow the steps as documented in 1.4.12 remediation.
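If the directory already exists with looser permissions, it can also be tightened directly on each etcd node; a minimal sketch, assuming the data directory reported by the audit above (use the --data-dir value found with ps -ef | grep etcd):

# Restrict the etcd data directory to its owner only
chmod 0700 /var/lib/rancher/etcd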
Ensure that the etcd data directory ownership is set to etcd:etcd.
etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should be owned by etcd:etcd.
On an etcd server node, get the etcd data directory, passed as an argument --data-dir (as in the previous control), and run the command below. For example:
stat -c %U:%G /var/lib/rancher/etcd
Verify that the ownership is set to etcd:etcd.
On nodes with the etcd role, create the etcd user:
useradd etcd
Record the uid/gid:
id etcd
Add the uid/gid to the services section of the RKE cluster.yml:

services:
  etcd:
    uid: <etcd user uid recorded previously>
    gid: <etcd user gid recorded previously>
(See Appendix B for a full RKE cluster.yml example.)
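If the etcd data directory already exists on the hosts, its ownership can also be set to the new user before re-provisioning; a minimal sketch, assuming the data directory found in the audit above, followed by re-running RKE so the configured uid/gid take effect:

# Give the etcd user ownership of the existing data directory, then reapply the cluster config
chown etcd:etcd /var/lib/rancher/etcd
rke up --config cluster.yml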
Ensure Kubelet options are configured to match CIS controls.
To pass the following controls in the CIS benchmark, ensure the appropriate flags are passed to the Kubelet. The relevant options and feature gates are:
- --anonymous-auth
- --authorization-mode (must not be AlwaysAllow)
- --streaming-connection-idle-timeout
- --make-iptables-util-chains
- --event-qps
- RotateKubeletServerCertificate
Inspect the Kubelet containers on all hosts and verify that they are running with the following options (a sample inspection command is sketched after this list):
--streaming-connection-idle-timeout=<duration greater than 0>
--authorization-mode=Webhook
--protect-kernel-defaults=true
--make-iptables-util-chains=true
--event-qps=0
--anonymous-auth=false
--feature-gates="RotateKubeletServerCertificate=true"
--tls-cipher-suites="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
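One way to dump the arguments of the RKE-managed Kubelet container on a host; a minimal sketch, assuming the container is named kubelet and that jq is installed:

# Print every argument the kubelet container was started with
docker inspect kubelet | jq -r '.[0].Args[]'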
In the RKE cluster.yml, set the following options under services:

services:
  kubelet:
    extra_args:
      authorization-mode: "Webhook"
      streaming-connection-idle-timeout: "<duration>"
      protect-kernel-defaults: "true"
      make-iptables-util-chains: "true"
      event-qps: "0"
      anonymous-auth: "false"
      feature-gates: "RotateKubeletServerCertificate=true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
Where <duration> is in a form like 1800s.
Reconfigure the cluster to apply the changes:

rke up --config cluster.yml
Ensure the RKE configuration is set to deploy the kube-api service with the options required for controls.
NOTE: Enabling the AlwaysPullImages admission control plugin can degrade performance due to the overhead of always pulling images. Enabling the DenyEscalatingExec admission control plugin will prevent the 'Launch kubectl' functionality in the UI from working.
To pass the following controls for the kube-api server, ensure the RKE configuration passes the appropriate options. Among other settings, --profiling must be set to false, --service-account-lookup must be enabled, and admission plugins such as NamespaceLifecycle, EventRateLimit and PodSecurityPolicy must be enabled.

On nodes with the controlplane role, inspect the kube-apiserver container:
docker inspect kube-apiserver
Verify that the following options are set:

--anonymous-auth=false
--profiling=false
--service-account-lookup=true
--enable-admission-plugins="ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
--encryption-provider-config=/opt/kubernetes/encryption.yaml
--admission-control-config-file=/opt/kubernetes/admission.yaml
--audit-log-path=/var/log/kube-audit/audit-log.json
--audit-log-maxage=5
--audit-log-maxbackup=5
--audit-log-maxsize=100
--audit-log-format=json
--audit-policy-file=/opt/kubernetes/audit.yaml
--tls-cipher-suites="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
In the volume section of the output, verify that the audit log directory is bind-mounted into the container:

/var/log/kube-audit:/var/log/kube-audit
In the RKE cluster.yml, set the following under services:

services:
  kube-api:
    pod_security_policy: true
    event_rate_limit:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      profiling: "false"
      service-account-lookup: "true"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      audit-log-path: "/var/log/kube-audit/audit-log.json"
      audit-log-maxage: "5"
      audit-log-maxbackup: "5"
      audit-log-maxsize: "100"
      audit-log-format: "json"
      audit-policy-file: "/opt/kubernetes/audit.yaml"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
Set the appropriate options for the Kubernetes scheduling service.
NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.
To address the following controls on the CIS benchmark, the command line options should be set on the Kubernetes scheduler.
On nodes with the controlplane role, inspect the kube-scheduler container:

docker inspect kube-scheduler

Verify the following options are set in the command section:
--profiling=false
--address=127.0.0.1
In the RKE cluster.yml, set the following under services:

services:
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
Set the appropriate arguments on the Kubernetes controller manager.
NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.
To address the following controls, the appropriate options need to be passed to the Kubernetes controller manager, including --terminated-pod-gc-threshold.

On nodes with the controlplane role, inspect the kube-controller-manager container:

docker inspect kube-controller-manager

Verify the following options are set in the command section:
--terminated-pod-gc-threshold=1000
--profiling=false
--address=127.0.0.1
--feature-gates="RotateKubeletServerCertificate=true"
In the RKE cluster.yml, set the following under services:

services:
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
Configure a restrictive pod security policy (PSP) as the default and create role bindings for system level services to use the less restrictive default PSP.
To address the following controls, a restrictive default PSP needs to be applied as the default. Role bindings need to be in place to allow system services to still function.
Among other restrictions, the default PSP should not permit privileged containers and should set allowPrivilegeEscalation to false.

Verify that the cattle-system namespace exists:

kubectl get ns | grep cattle
Verify that the roles and cluster role exist:

kubectl get role default-psp-role -n ingress-nginx
kubectl get role default-psp-role -n cattle-system
kubectl get clusterrole psp:restricted

Verify that the role bindings and cluster role binding exist:

kubectl get rolebinding -n ingress-nginx default-psp-rolebinding
kubectl get rolebinding -n cattle-system default-psp-rolebinding
kubectl get clusterrolebinding psp:restricted

Verify that the restricted PSP exists:

kubectl get psp restricted
In the RKE cluster.yml, add the following to the addons section:

addons: |
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - default-psp
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - default-psp
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
  ---
  apiVersion: extensions/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
      - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - emptyDir
      - secret
      - persistentVolumeClaim
      - downwardAPI
      - configMap
      - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - restricted
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
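To sanity-check that ordinary service accounts end up bound to the restricted policy, kubectl auth can-i can impersonate a service account; a minimal sketch, assuming the default service account in the default namespace:

# Should return "yes" once the psp:restricted ClusterRoleBinding from the addons above is applied
kubectl auth can-i use podsecuritypolicy/restricted \
  --as=system:serviceaccount:default:default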
When deploying Rancher, disable the local cluster option on the Rancher Server.
NOTE: This requires Rancher v2.1.2 or above.
Having access to the local cluster from the Rancher UI is convenient for troubleshooting and debugging; however, if the local cluster is enabled in the Rancher UI, a user has access to all elements of the system, including the Rancher management server itself. Disabling the local cluster is a defense in depth measure and removes the possible attack vector from the Rancher UI and API.
Verify that the Rancher deployment has --add-local=false set:

kubectl get deployment rancher -n cattle-system -o yaml | grep 'add-local'
Upgrade (or install) the Rancher server with Helm, passing --set addLocal="false".
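A minimal sketch of such an upgrade, assuming Rancher was installed from the rancher-stable Helm repository into the cattle-system namespace and that rancher.example.com stands in for your real hostname (re-use all of the values from your original installation):

helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set addLocal="false"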
Enable Rancher’s built-in audit logging capability.
Tracking down what actions were performed by users in Rancher can provide insight during post mortems, and if monitored proactively can be used to quickly detect malicious actions.
kubectl get deployment rancher -n cattle-system -o yaml | grep auditLog
Verify that the log is going to the appropriate destination, as set by auditLog.destination:

If the destination is sidecar:
List pods:
kubectl get pods -n cattle-system
Tail logs:
kubectl logs <pod> -n cattle-system -c rancher-audit-log
If the destination is hostPath:

On each node running the Rancher pods, verify that the audit log files are being written to the directory configured by auditLog.hostPath.
Upgrade the Rancher server installation using Helm, and configure the audit log settings. The instructions for doing so can be found in the reference section below.
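A minimal sketch of such an upgrade, assuming the rancher-stable Helm repository, the cattle-system namespace, and the chart's auditLog values; rancher.example.com stands in for your real hostname and the remaining values should be carried over from your original installation:

helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.level=1 \
  --set auditLog.destination=sidecar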
The local administrator password should be changed from the default.
The default administrator password is common across all Rancher installations and should be changed immediately upon startup.
Attempt to log in to the UI with the following credentials:
- Username: admin
- Password: admin
The login attempt must not succeed.
Change the password from admin to a password that meets the recommended password standards for your organization.
When running Rancher in a production environment, configure an identity provider for authentication.
Rancher supports several authentication backends that are common in enterprises. It is recommended to tie Rancher into an external authentication system to simplify user and group access in the Rancher cluster. Doing so assures that access control follows the organization’s change management process for user accounts.
Configure the appropriate authentication provider for your Rancher installation according to the documentation found at the link in the reference section below.
Restrict administrator access to only those responsible for managing and operating the Rancher server.
The admin privilege level gives the user the highest level of access to the Rancher server and all attached clusters. This privilege should only be granted to a few people who are responsible for the availability and support of Rancher and the clusters that it manages.
The following script uses the Rancher API to show users with administrator privileges:
#!/bin/bash
# List every user's global role bindings via the Rancher API
for i in $(curl -sk -u 'token-<id>:<secret>' https://<RANCHER_URL>/v3/users | jq -r .data[].links.globalRoleBindings); do
  curl -sk -u 'token-<id>:<secret>' "$i" | jq '.data[] | "\(.userId) \(.globalRoleId)"'
done
The admin role should only be assigned to users that require administrative privileges. Any role that is not admin or user should be audited in the RBAC section of the UI to ensure that the privileges adhere to policies for global access.
The Rancher server permits customization of the default global permissions. We recommend that auditors also review the policies of any custom global roles.
Remove the admin role from any user that does not require administrative privileges.
Ensure that node drivers that are not needed or approved are not active in the Rancher console.
Node drivers are used to provision compute nodes in various cloud providers and local IaaS infrastructure. For convenience, popular cloud providers are enabled by default. If the organization does not intend to use these or does not allow users to provision resources in certain providers, the drivers should be disabled. This will prevent users from using Rancher resources to provision the nodes.
If a disallowed node driver is active, visit the Node Drivers page under Global and disable it.
Appendix A - Ubuntu cloud-config example

The following cloud-config file can be used to automate the manual hardening steps when deploying nodes.
#cloud-config
bootcmd:
  - apt-get update
  - apt-get install -y apt-transport-https
apt:
  sources:
    docker:
      source: "deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable"
      keyid: 0EBFCD88
packages:
  - [docker-ce, '5:19.03.5~3-0~ubuntu-bionic']
  - jq
write_files:
  # 1.1.1 - Configure default sysctl settings on all hosts
  - path: /etc/sysctl.d/90-kubelet.conf
    owner: root:root
    permissions: '0644'
    content: |
      vm.overcommit_memory=1
      vm.panic_on_oom=0
      kernel.panic=10
      kernel.panic_on_oops=1
      kernel.keys.root_maxkeys=1000000
      kernel.keys.root_maxbytes=25000000
  # 1.1.2 encryption provider
  - path: /opt/kubernetes/encryption.yaml
    owner: root:root
    permissions: '0600'
    content: |
      apiVersion: apiserver.config.k8s.io/v1
      kind: EncryptionConfiguration
      resources:
        - resources:
            - secrets
          providers:
            - aescbc:
                keys:
                  - name: key1
                    secret: QRCexFindur3dzS0P/UmHs5xA6sKu58RbtWOQFarfh4=
            - identity: {}
  # 1.1.3 audit log
  - path: /opt/kubernetes/audit.yaml
    owner: root:root
    permissions: '0600'
    content: |
      apiVersion: audit.k8s.io/v1beta1
      kind: Policy
      rules:
        - level: Metadata
  # 1.1.4 event limit
  - path: /opt/kubernetes/admission.yaml
    owner: root:root
    permissions: '0600'
    content: |
      apiVersion: apiserver.k8s.io/v1alpha1
      kind: AdmissionConfiguration
      plugins:
        - name: EventRateLimit
          path: /opt/kubernetes/event.yaml
  - path: /opt/kubernetes/event.yaml
    owner: root:root
    permissions: '0600'
    content: |
      apiVersion: eventratelimit.admission.k8s.io/v1alpha1
      kind: Configuration
      limits:
        - type: Server
          qps: 5000
          burst: 20000
# 1.4.12 etcd user
groups:
  - etcd
users:
  - default
  - name: etcd
    gecos: Etcd user
    primary_group: etcd
    homedir: /var/lib/etcd
# 1.4.11 etcd data dir
runcmd:
  - chmod 0700 /var/lib/etcd
  - usermod -G docker -a ubuntu
  - sysctl -p /etc/sysctl.d/90-kubelet.conf
Appendix B - Full RKE cluster.yml example

nodes:
  - address: 18.191.190.205
    internal_address: 172.31.24.213
    user: ubuntu
    role: [ "controlplane", "etcd", "worker" ]
  - address: 18.191.190.203
    internal_address: 172.31.24.203
    user: ubuntu
    role: [ "controlplane", "etcd", "worker" ]
  - address: 18.191.190.10
    internal_address: 172.31.24.244
    user: ubuntu
    role: [ "controlplane", "etcd", "worker" ]
services:
  kubelet:
    extra_args:
      streaming-connection-idle-timeout: "1800s"
      authorization-mode: "Webhook"
      protect-kernel-defaults: "true"
      make-iptables-util-chains: "true"
      event-qps: "0"
      anonymous-auth: "false"
      feature-gates: "RotateKubeletServerCertificate=true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    generate_serving_certificate: true
  kube-api:
    pod_security_policy: true
    event_rate_limit:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      profiling: "false"
      service-account-lookup: "true"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      audit-log-path: "/var/log/kube-audit/audit-log.json"
      audit-log-maxage: "5"
      audit-log-maxbackup: "5"
      audit-log-maxsize: "100"
      audit-log-format: "json"
      audit-policy-file: /opt/kubernetes/audit.yaml
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
  etcd:
    uid: 1001
    gid: 1001
addons: |
  apiVersion: v1
  kind: Namespace
  metadata:
    name: ingress-nginx
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - default-psp
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - default-psp
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
  ---
  apiVersion: extensions/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
      - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - emptyDir
      - secret
      - persistentVolumeClaim
      - downwardAPI
      - configMap
      - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
    - apiGroups:
        - extensions
      resourceNames:
        - restricted
      resources:
        - podsecuritypolicies
      verbs:
        - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
The following example shows the corresponding configuration for a cluster provisioned by Rancher (the cluster options YAML as edited in the Rancher UI):

#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  ignore_docker_version: true
  #
  # # If you are using calico on AWS
  #
  # network:
  #   plugin: calico
  #   calico_network_provider:
  #     cloud_provider: aws
  #
  # # To specify flannel interface
  #
  # network:
  #   plugin: flannel
  #   flannel_network_provider:
  #     iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  # network:
  #   plugin: canal
  #   canal_network_provider:
  #     iface: eth1
  #
  network:
    plugin: canal
  #
  # services:
  #   kube-api:
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kube-controller:
  #     cluster_cidr: 10.42.0.0/16
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kubelet:
  #     cluster_domain: cluster.local
  #     cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: false
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 1001
      retention: 72h
      snapshot: false
      uid: 1001
    kube_api:
      always_pull_images: false
      event_rate_limit:
        enabled: true
      extra_args:
        anonymous-auth: 'false'
        audit-log-format: json
        audit-log-maxage: '5'
        audit-log-maxbackup: '5'
        audit-log-maxsize: '100'
        audit-log-path: /var/log/kube-audit/audit-log.json
        enable-admission-plugins: >-
          ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy
        profiling: 'false'
        service-account-lookup: 'true'
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      extra_binds:
        - '/opt/kubernetes:/opt/kubernetes'
      pod_security_policy: true
      service_node_port_range: 30000-32767
    kube_controller:
      extra_args:
        address: 127.0.0.1
        feature-gates: RotateKubeletServerCertificate=true
        profiling: 'false'
        terminated-pod-gc-threshold: '1000'
    kubelet:
      extra_args:
        anonymous-auth: 'false'
        event-qps: '0'
        feature-gates: RotateKubeletServerCertificate=true
        make-iptables-util-chains: 'true'
        protect-kernel-defaults: 'true'
        streaming-connection-idle-timeout: 1800s
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      fail_swap_on: false
    scheduler:
      extra_args:
        address: 127.0.0.1
        profiling: 'false'
  ssh_agent_auth: false
windows_prefered_cluster: false