Rancher v1.6 provided TCP and HTTP health checks on your nodes and services using its own health check microservice. These health checks monitored your containers to confirm that they were operating as intended. If a container failed a health check, Rancher destroyed the unhealthy container and then replicated a healthy one to replace it.
For Rancher v2.x, we’ve replaced the health check microservice, leveraging instead Kubernetes’ native health check support.
Use this document to correct Rancher v2.x workloads and services that list health_check in output.txt. You can correct them by configuring a liveness probe (i.e., a health check).
For example, for the image below, we would configure liveness probes for the web and weblb workloads (i.e., the Kubernetes manifests output by migration-tools CLI).
In Rancher v1.6, you could add health checks to monitor a particular service's operations. These checks were performed by the Rancher health check microservice, which ran in a container on a node separate from the node hosting the monitored service (however, Rancher v1.6.20 and later also ran a local health check container as a redundancy for the primary health check container on another node). Health check settings were stored in the rancher-compose.yml file for your stack.
The health check microservice features two types of health checks, which have a variety of options for timeout, check interval, etc.:
TCP health checks:
These health checks monitor the service by attempting to open a TCP connection to it on a specified port. For full details, see the Rancher v1.6 documentation.
HTTP health checks:
These health checks send HTTP requests to a specified path and check whether the response matches the expected response (which is configured along with the health check).
The following diagram displays the health check microservice evaluating a container running Nginx. Notice that the microservice is making its check across nodes.
In Rancher v2.x, the health check microservice is replaced with Kubernetes' native health check mechanisms, called probes. Like the Rancher v1.6 health check microservice, these probes monitor the health of pods over TCP and HTTP.
However, probes in Rancher v2.x have some important differences, which are described below. For full details about probes, see the Kubernetes documentation.
Unlike the Rancher v1.6 health checks, which were performed across hosts, probes in Rancher v2.x occur on the same host and are performed by the kubelet.
Kubernetes includes two different types of probes: liveness checks and readiness checks.
Liveness checks: Check if the monitored container is running. If the probe reports failure, Kubernetes kills the pod and then restarts it according to the deployment's restart policy.
Readiness checks: Check if the container is ready to accept and serve requests. If the probe reports failure, the pod is removed from service so that it receives no traffic until it passes the check again.
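As a rough sketch, the two probe types map onto a pod spec as shown below. The pod name, image, port, and path are illustrative assumptions, not values taken from this document's workloads:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web   # example name
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      # Liveness check: on failure, the kubelet kills the container and it is
      # restarted according to the restart policy.
      livenessProbe:
        httpGet:
          path: /
          port: 80
      # Readiness check: on failure, the pod is removed from service endpoints
      # until the check passes again.
      readinessProbe:
        httpGet:
          path: /
          port: 80
```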
The following diagram displays kubelets running probes on containers they are monitoring (kubelets are the primary "agent" running on each node). The node on the left is running a liveness probe, while the one on the right is running a readiness check. Notice that the kubelet is scanning containers on its host node rather than across nodes, as in Rancher v1.6.
The migration-tools CLI cannot parse health checks from Compose files to Kubernetes manifests. Therefore, if you want to add health checks to your Rancher v2.x workloads, you'll have to add them manually.
Using the Rancher v2.x UI, you can add TCP or HTTP health checks to Kubernetes workloads. By default, Rancher asks you to configure a readiness check for your workloads and applies a liveness check using the same configuration. Optionally, you can define a separate liveness check.
If the probe fails, the container is restarted per the restartPolicy defined in the workload specs. This setting is equivalent to the strategy parameter for health checks in Rancher v1.6.
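For example, a deployment's pod spec might set the restart behavior like this (a minimal illustrative fragment, not from this document's workloads):

```yaml
spec:
  template:
    spec:
      # Equivalent in spirit to the v1.6 health check "strategy" parameter:
      # what Kubernetes does with a container after a failed liveness probe.
      restartPolicy: Always   # the only value Deployments allow
      containers:
        - name: web           # example container
          image: nginx:1.25
```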
Configure probes by using the Health Check section while editing deployments called out in output.txt.
When you create a workload using Rancher v2.x, we recommend configuring a check that monitors the health of the deployment's pods.
TCP checks monitor your deployment's health by attempting to open a connection to the pod over a specified port. If the probe can open the port, the pod is considered healthy. If it cannot, the pod is considered unhealthy: for liveness probes, Kubernetes kills the pod and then replaces it according to its restart policy; for readiness probes, the pod is marked Unready and removed from service.
You can configure the probe along with values for specifying its behavior by selecting the TCP connection opens successfully option in the Health Check section. For more information, see Deploying Workloads. For help setting probe timeout and threshold values, see Health Check Parameter Mappings.
When you configure a readiness check using Rancher v2.x, the readinessProbe directive and the values you’ve set are added to the deployment’s Kubernetes manifest. Configuring a readiness check also automatically adds a liveness check (livenessProbe) to the deployment.
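The resulting manifest fragment looks roughly like the following. The port and timing values here are placeholders, not output from the migration-tools CLI:

```yaml
containers:
  - name: web
    image: nginx:1.25
    # Added when you configure a TCP readiness check in the Rancher UI:
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 2
    # Rancher also applies a liveness check with the same configuration:
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 2
```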
HTTP checks monitor your deployment's health by sending an HTTP GET request to a specific URL path that you define. If the pod responds with a status code in the range of 200 up to (but not including) 400, the health check is considered successful. Any other status code is considered a failure: for liveness probes, Kubernetes kills and replaces the pod according to its restart policy; for readiness probes, the pod is marked Unready and removed from service.
You can configure the probe along with values for specifying its behavior by selecting the HTTP returns successful status or HTTPS returns successful status option. For more information, see Deploying Workloads. For help setting probe timeout and threshold values, see Health Check Parameter Mappings.
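In the manifest, this produces an httpGet probe along these lines (the path, port, and timing values are illustrative assumptions):

```yaml
readinessProbe:
  httpGet:
    path: /healthz   # example path; use your application's health endpoint
    port: 80
    scheme: HTTP     # or HTTPS, for the "HTTPS returns successful status" option
  periodSeconds: 2
  timeoutSeconds: 2
```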
While configuring a readiness check for either the TCP or HTTP protocol, you can configure a separate liveness check by clicking Define a separate liveness check. For help setting probe timeout and threshold values, see Health Check Parameter Mappings.
Rancher v2.x, like v1.6, lets you perform health checks using the TCP and HTTP protocols. However, Rancher v2.x also lets you check the health of a pod by running a command inside of it. If the container exits with a code of 0 after running the command, the pod is considered healthy.
You can configure a liveness or readiness check that executes a command that you specify by selecting the Command run inside the container exits with status 0 option from Health Checks while deploying a workload.
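In a Kubernetes manifest, this option corresponds to an exec probe. A sketch, where the command shown is an arbitrary example rather than one from this document:

```yaml
livenessProbe:
  exec:
    # Any command that exits with status 0 marks the pod healthy; a non-zero
    # exit status fails the check. The file path below is illustrative.
    command:
      - cat
      - /tmp/healthy
  periodSeconds: 5
```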
While configuring readiness checks and liveness checks, Rancher prompts you to fill in various timeout and threshold values that determine whether the probe is a success or failure. The reference table below shows you the equivalent health check values from Rancher v1.6.
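The Kubernetes-side fields that those prompts map to look like this (the values shown are arbitrary examples, not recommended settings):

```yaml
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10   # wait before running the first check
  periodSeconds: 2          # interval between checks
  timeoutSeconds: 2         # per-check timeout
  successThreshold: 1       # consecutive successes required to pass
  failureThreshold: 3       # consecutive failures required to fail
```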