Service discovery is one of the core functionalities of any container-based environment. Once you have packaged your application and launched it using containers, the next step is making it discoverable to other application containers in your environment or the external world.
In this article we will go over the service discovery support provided by Rancher 2.0 and see how the Rancher 1.6 feature set maps to the latest version.
Service Discovery in Rancher 1.6
Rancher 1.6 provided service discovery within Cattle environments. Rancher’s own DNS microservice provided the internal DNS functionality.
Rancher’s internal DNS provides the following key features:
- Service discovery within and across stacks: all services in a stack are resolvable by <service_name>, and services in other stacks are resolvable by <service_name>.<stack_name>. All containers are resolvable globally by their name.
- Service aliases: adding an alias name to services and linking to other services using aliases.
- Discovery of external services: pointing to services deployed outside of Rancher using their external IP(s) or a domain name.
Service Discovery in Rancher 2.0
Rancher 2.0 uses the native Kubernetes DNS support to provide equivalent service discovery for Kubernetes workloads and pods. A Cattle user can replicate all of these service discovery features in Rancher 2.0 without losing any functionality.
Similar to the Rancher 1.6 DNS microservice, Kubernetes schedules a DNS pod and service in the cluster and configures the kubelets to route all DNS lookups to this DNS service. Rancher 2.0’s Kubernetes cluster deploys skyDNS as the Kubernetes DNS service, which is a flavor of the default Kube-DNS implementation.
Kubernetes workloads are objects that specify the deployment rules for the pods launched for the workload. Workload objects by themselves are not resolvable via DNS by other objects in the Kubernetes cluster. To look up and access a workload, a Kubernetes service needs to be created for it. Here are some details about Kubernetes services.
Any service created within Kubernetes gets a DNS name. The DNS A record created for the service is of the form <service_name>.<namespace_name>.svc.cluster.local. The DNS name for the service resolves to the cluster IP of the service, an internal IP assigned to the service that is resolvable within the cluster.
Within its Kubernetes namespace, the service is resolvable directly by <service_name>; outside of the namespace it is resolvable as <service_name>.<namespace_name>. This convention is similar to the within-stack and cross-stack service discovery in Rancher 1.6.
Thus, to look up and access your application workload, a service that gets a DNS record assigned needs to be created.
Rancher simplifies this process by automatically creating a service along with the workload, using the service port and service type you select in the UI while deploying the workload, and a service name identical to the workload's name. If no port is exposed, port 42 is used. This practice makes the workload discoverable within and across namespaces by its name.
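As a rough sketch of what this auto-created service could look like (the workload name, namespace, and selector label below are illustrative, not Rancher's exact internal output):

```yaml
# Hypothetical sketch of the service Rancher 2.0 auto-creates for a
# workload named "web" in namespace "ns1" with no ports exposed.
apiVersion: v1
kind: Service
metadata:
  name: web          # identical to the workload's name
  namespace: ns1
spec:
  clusterIP: None    # headless; DNS resolves directly to the pod IPs
  ports:
  - name: default
    port: 42         # placeholder port used when nothing is exposed
    protocol: TCP
    targetPort: 42
  selector:
    workload: web    # illustrative selector matching the workload's pods
```

With this in place, other workloads can reach the pods as web within ns1, or as web.ns1 from other namespaces.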
For example, as seen below, I deploy a few workloads of type Deployment in two namespaces using the Rancher 2.0 UI.
I can see the corresponding DNS records auto-created by Rancher for the workloads under the Cluster > Project > Service Discovery tab.
The workloads become accessible to any other workload within and across the namespaces as demonstrated below.
Individual pods running in the Kubernetes cluster also get a DNS record assigned, of the form <pod_ip_address>.<namespace_name>.pod.cluster.local, where the dots in the pod's IP address are replaced with dashes. For example, a pod with an IP of 10.42.2.7 in the namespace default, in a cluster with the DNS domain cluster.local, would have the entry 10-42-2-7.default.pod.cluster.local.
Pods can also be resolved using the hostname and subdomain fields if set in the pod spec. Details about this resolution are covered in the Kubernetes docs here.
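As a sketch of that mechanism (the names below are illustrative), a pod that sets hostname and subdomain, paired with a headless service whose name matches the subdomain, becomes resolvable as <hostname>.<subdomain>.<namespace>.svc.cluster.local:

```yaml
# Headless service whose name matches the pods' subdomain field.
apiVersion: v1
kind: Service
metadata:
  name: busybox-subdomain
spec:
  clusterIP: None
  selector:
    app: busybox
  ports:
  - name: placeholder
    port: 1234
    protocol: TCP
---
# Pod using hostname/subdomain-based resolution; resolvable as
# busybox-1.busybox-subdomain.default.svc.cluster.local
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    app: busybox
spec:
  hostname: busybox-1
  subdomain: busybox-subdomain
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```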
Creating Alias Names for Workloads and External Services
Just as you can create an alias for Rancher 1.6 services, you can do the same for Kubernetes workloads using Rancher 2.0. Similarly you can also create DNS records pointing to externally running services using their hostname or IP address in Rancher 2.0. These DNS records are Kubernetes service objects.
Using the 2.0 UI, navigate to the Cluster > Project view and choose the Service Discovery tab. Here, all the existing DNS records created for your workloads will be listed under each namespace.
Click Add Record to create new DNS records and view the various options supported for linking to external services or creating an alias for another workload, DNS record, or set of pods.
Note that of these options for creating DNS records, the following are supported natively by Kubernetes:
- Point to an external hostname
- Point to a set of pods which match a selector
The remaining options are implemented by Rancher leveraging Kubernetes:
- Point to external IP address
- Create alias for another DNS record
- Point to another workload
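For instance, the natively supported "point to an external hostname" option maps to a Kubernetes ExternalName service. A sketch, with an assumed service name and target hostname:

```yaml
# Illustrative ExternalName service: DNS lookups for "external-db"
# in this namespace return a CNAME to db.example.com.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```

Workloads in the same namespace can then simply connect to external-db, and Kubernetes DNS hands back the external hostname.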
Docker Compose to Kubernetes YAML
Now let's see what is needed to migrate an application from 1.6 to 2.0 using Compose files instead of deploying it via the 2.0 UI.
As noted above, when we deploy workloads using the Rancher 2.0 UI, Rancher internally takes care of creating the necessary Kubernetes
ClusterIP service for service discovery. However, if you deploy the workload via Rancher CLI or Kubectl client, what should you do to ensure that the same service discovery behavior is accomplished?
Service Discovery Within and Across Namespaces via Compose
Let's start with the following docker-compose.yml file, which shows two services (foo and bar) within a stack. Within a Cattle stack, these two services can reach each other by using their service names.
```yaml
version: '2'
services:
  bar:
    image: user/testnewhostrouting
    stdin_open: true
    tty: true
    labels:
      io.rancher.container.pull_image: always
  foo:
    image: user/testnewhostrouting
    stdin_open: true
    tty: true
    labels:
      io.rancher.container.pull_image: always
```
What happens to service discovery if we migrate these two services to a namespace in Rancher 2.0?
Converting this Compose file with Kompose generates the *-deployment.yaml files, and deploying those using the Rancher CLI creates the corresponding workloads within a namespace.
Can these workloads reach each other within the namespace? We can exec into the shell of workload foo using the Rancher 2.0 UI and try pinging the other workload, bar.
No! The reason is that we only created the workload objects of type Deployment. To make these workloads discoverable, each needs a service of type ClusterIP pointing to it, which is then assigned a DNS record. The Kubernetes YAML for such a service should look like the sample below.
Note that ports is a required field, so we need to provide some port number, such as 42 as shown here.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    io.rancher.container.pull_image: always
  creationTimestamp: null
  labels:
    io.kompose.service: bar
  name: bar
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    io.kompose.service: bar
```
After deploying this service via the CLI, workload foo can successfully ping workload bar.
Thus, if you take the Compose-to-Kubernetes-YAML route to migrate your 1.6 services to Rancher 2.0, make sure you also deploy corresponding ClusterIP services for the workloads. The same solution also applies to cross-namespace referencing of workloads.
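For the cross-namespace case, a sketch (the namespace ns2 here is illustrative): deploy the same kind of ClusterIP service for bar, but in the other namespace, and reference it by its qualified name.

```yaml
# With a service like this deployed in namespace ns2 ...
apiVersion: v1
kind: Service
metadata:
  name: bar
  namespace: ns2
  labels:
    io.kompose.service: bar
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    io.kompose.service: bar
# ... workloads in other namespaces can resolve it as bar.ns2
# (or fully qualified: bar.ns2.svc.cluster.local).
```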
Links/External_Links via Compose
If you are a Cattle user, you know that in Rancher 1.6 you can create a service-link/alias pointing to another service, and use that alias name in your application to discover that linked target service.
For example, consider the application below, where the web service links to the database service using the alias name mongo.
Using Kompose, converting this Compose file to Kubernetes YAML generates the corresponding deployment and service YAML specs. If your services in docker-compose.yml expose ports, Kompose generates a Kubernetes ClusterIP service YAML spec by default.
Deploying these using Rancher CLI generated the necessary workloads.
However, the service link mongo is missing, because the Kompose conversion does not support links in the docker-compose.yml file. As a result, the web workload encounters an error, and its pods keep restarting, failing to resolve the mongo alias to the database service.
How do we fix the broken DNS link? The solution is to create another ClusterIP service spec and set its name to the alias name of the link in docker-compose.
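A sketch of such a service, assuming the database workload's pods carry the Kompose-generated label io.kompose.service: database (the label and port below are assumptions for illustration):

```yaml
# Hypothetical ClusterIP service named after the alias "mongo",
# selecting the pods of the database workload.
apiVersion: v1
kind: Service
metadata:
  name: mongo                      # the alias name from the compose link
spec:
  clusterIP: None
  ports:
  - name: default
    port: 27017                    # MongoDB's default port (illustrative)
    protocol: TCP
    targetPort: 27017
  selector:
    io.kompose.service: database   # assumed label on the database pods
```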
Deploying this service creates the necessary DNS record, the mongo link is resolved, and the web workload becomes available!
The following image shows that the pods launched for the web workload entered a Running state.
Transitioning from SkyDNS to CoreDNS in the Future
As of v2.0.7, Rancher deploys skyDNS, as supported by Kubernetes version 1.10.x. In Kubernetes version 1.11 and later, CoreDNS can be installed as the DNS provider. We are evaluating CoreDNS as well, and it will be available as an alternative to skyDNS in future versions of Rancher.
This article looked at how equivalent service discovery is supported in Rancher 2.0 via Kubernetes DNS functionality. In an upcoming article, I plan to look at the load balancing options supported by Rancher 2.0 and any limitations present in comparison to Rancher 1.6.