Real world applications deployed using containers usually need to allow outside traffic to be routed to the application containers.
Standard ways for providing external access include exposing public ports on the nodes where the application is deployed or placing a load balancer in front of the application containers.
Cattle users on Rancher 1.6 are familiar with port mapping to expose services. In this article, we will explore various options for exposing your Kubernetes workload publicly in Rancher 2.0 using port mapping. Load balancing is a broad topic of its own, and we will cover it separately in later articles.
Rancher 1.6 enabled users to deploy their containerized apps and expose them publicly via Port Mapping.
Users could choose a specific port on the host or let Rancher assign a random one, and that port would be opened for public access. This public port routed traffic to the private port of the service containers running on that host.
Rancher 2.0 also supports adding port mapping to your workloads deployed on the Kubernetes cluster. Kubernetes provides two options for exposing a public port for your workload: HostPort and NodePort.
The UI for port mapping is quite similar to the 1.6 experience. Rancher internally adds the necessary Kubernetes HostPort or NodePort specs when creating the deployments for a Kubernetes cluster.
Let’s look at HostPort and NodePort in some detail.
The HostPort setting has to be specified in the Kubernetes YAML specs under the ‘Containers’ section while creating the workload in Kubernetes. Rancher performs this action internally when you select the HostPort for mapping.
When a HostPort is specified, that port is exposed to public access on the host where the pod container is deployed. Traffic hitting <host IP>:<HostPort> is routed to the pod container's private port.
Here is how the Kubernetes YAML for our Nginx workload specifying the HostPort setting under the ‘ports’ section looks:
- image: nginx
  ports:
  - containerPort: 80
    hostPort: 9890   # illustrative host port value
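For context, a minimal sketch of a complete deployment spec using the HostPort setting might look like this (the names and port values are illustrative, not taken from the original workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80   # private port of the container
          hostPort: 9890      # public port opened on the host
```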
Using a HostPort for a Kubernetes pod is equivalent to exposing a public port for a Docker container in Rancher 1.6.
You can request any available port on the host to be exposed via the HostPort setting.
The configuration is simple, and the HostPort setting is placed directly in the Kubernetes pod specs. Unlike with a NodePort, no additional Kubernetes object needs to be created to expose your application.
Using a HostPort limits the scheduling options for your pod, since only those hosts that have the specified port available can be used for deployment.
If the scale of your workload exceeds the number of nodes in your Kubernetes cluster, the deployment will fail.
Any two workloads that specify the same HostPort cannot be deployed on the same node.
If the host where the pods are running goes down, Kubernetes will have to reschedule the pods to different nodes. Thus, the IP address where your workload is accessible will change, breaking any external clients of your application. The same thing will happen when the pods are restarted, and Kubernetes reschedules them on a different node.
Before we dive into how to create a NodePort for exposing your Kubernetes workload, let’s look at some background on the Kubernetes Service.
A Kubernetes Service is a REST object that abstracts access to Kubernetes pods. The IP address that Kubernetes pods listen to cannot be used as a reliable endpoint for public access to your workload because pods can be destroyed and recreated dynamically, changing their IP address.
A Kubernetes Service provides a static endpoint to the pods. Even if the pods change IP addresses, external clients that depend on the workload can keep accessing it through the Service interface without disruption, and without any knowledge of the pod recreation happening behind it.
By default, a service is accessible within the Kubernetes cluster on an internal IP. This scope is defined by the type field of the service spec, which defaults to type: ClusterIP.
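For example, a service spec that omits the type field (or sets it explicitly, as below) is reachable only on the cluster-internal ClusterIP; the names here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  type: ClusterIP   # the default when type is omitted
  selector:
    app: nginx      # matches the pod labels of the workload
  ports:
  - port: 80        # port exposed on the cluster-internal IP
    targetPort: 80  # private port on the pods
```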
If you want to expose the service outside of the Kubernetes cluster, refer to these ServiceType options in Kubernetes.
One of these types is NodePort, which provides external access to the Kubernetes Service created for your workload pods.
Consider the workload running the image of Nginx again. For this workload, we need to expose the private container port 80 externally.
We can do this by creating a NodePort service for the workload. Here is how a NodePort service spec will look:
ports:
- name: 80tcp01
  nodePort: 30216   # allocated by Kubernetes; value illustrative
  port: 80
  protocol: TCP
  targetPort: 80
When we create a NodePort service, Kubernetes allocates a port on every node. The chosen NodePort is visible in the service spec after creation, as seen above. Alternatively, one can specify a particular port to be used as the NodePort in the spec while creating the service. If a specific NodePort is not specified, a port from a range configured on the Kubernetes cluster (default: 30000-32767) is allocated automatically.
From outside the Kubernetes cluster, traffic coming to <NodeIP>:<NodePort> will be directed to the workload (kube-proxy component handles this). The NodeIP can be the IP address of any node in your Kubernetes cluster.
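Putting this together, a complete NodePort service for the Nginx workload might be sketched as follows (the names and nodePort value are illustrative; omit nodePort to let Kubernetes pick one from the configured range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx        # must match the pod labels of the workload
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 80    # private port on the pods
    nodePort: 30216   # public port opened on every node
```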
Creating a NodePort service provides a static public endpoint to your workload pods. So even if the pods get dynamically destroyed, Kubernetes can deploy the workload anywhere in the cluster without altering the public endpoint.
The scale of the pods is not limited by the number of nodes in the cluster. NodePort decouples public access from the number and location of pods.
When a NodePort is used, that port gets reserved on every node in your Kubernetes cluster, even if the workload is never deployed on a given node.
You can only specify a NodePort from the configured range, not an arbitrary port.
An extra Kubernetes object (a Kubernetes Service of type NodePort) is needed to expose your workload. Thus, finding out how your application is exposed is not straightforward.
That covers how a Cattle user can add port mapping in the Rancher 2.0 UI, as compared to 1.6. Now let's see how to do the same via compose files and the Rancher CLI.
We can convert the docker-compose.yml file from Rancher 1.6 to Kubernetes YAML using the Kompose tool, and then deploy the application using Rancher CLI in the Kubernetes cluster.
Here is the docker-compose.yml config for the above Nginx service running on 1.6:
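The original compose file is not reproduced here, but a minimal sketch of such a 1.6 config, with the port values assumed for illustration, would be:

```yaml
version: '2'
services:
  nginx:
    image: nginx
    ports:
    - "9890:80"   # <public host port>:<private container port>
```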
Kompose generates the YAML files for the Kubernetes deployment and service objects needed to deploy the Nginx workload in Rancher 2.0. The Kubernetes deployment specs define the pod and container specs, while the service specs define the public access to the pods.
As seen in the previous article in this blog series, Kompose does not add the required HostPort construct to our deployment specs, even if docker-compose.yml specifies exposed ports. So to replicate the port mapping in a Rancher 2.0 cluster, we can manually add the HostPort construct to the pod container specs in nginx-deployment.yaml and deploy using Rancher CLI.
To add a NodePort service for the deployment via Kompose, the label kompose.service.type should be added to docker-compose.yml file, per the Kompose docs.
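For instance, labeling the service in docker-compose.yml as follows tells Kompose to generate a NodePort service (per the Kompose docs; the service name and port are illustrative):

```yaml
version: '2'
services:
  nginx:
    image: nginx
    ports:
    - "80"   # a port must be defined for Kompose to create a service
    labels:
      kompose.service.type: nodeport
```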
Now running Kompose against this docker-compose.yml generates the necessary NodePort service along with the deployment specs. Using Rancher CLI, we can deploy these specs and successfully expose the workload via the NodePort.
In this article we explored how to use port mapping in Rancher 2.0 to expose the application workloads to public access. The Rancher 1.6 functionality of port mapping can be transitioned to the Kubernetes platform easily. In addition, the Rancher 2.0 UI provides the same intuitive experience for mapping ports while creating or upgrading a workload.
In the upcoming article, we'll explore how to monitor the health of your application workloads using Kubernetes and see whether the healthcheck support that Cattle provided can be fully migrated to Rancher 2.0!