Managing Kubernetes Workloads With Rancher 2.0

Rancher 2.0 was built with many things in mind: you can provision and manage Kubernetes clusters, deploy user services onto them, and easily control access with authentication and RBAC. One of the coolest things about Rancher 2.0 is its intuitive UI, which we designed to demystify Kubernetes and accelerate adoption for anyone new to it. In this tutorial I'll walk you through that new user interface and explain how you can use it to deploy a simple NGINX service.

Designing Your Workload

There are several things that you might need to figure out before deploying the workload for your app:

  • Is it a stateless or stateful app?
  • How many instances of your app need to be running?
  • What are the placement rules? Does the app need to run on specific hosts?
  • Is your app meant to be exposed as a service on a private network, so other applications can talk to it?
  • Is public access to the app needed?

There can be more questions to answer, but the above are the most basic ones and a good starting place. The Rancher UI will give you more details on what you can configure for your workload, so you can tune or update it later.

Deploying your first workload with Rancher 2.0

Let's start with the fun part: deploying a very simple workload and exposing it to the outside world with Rancher. Assuming your Rancher installation is done (it takes just one click) and at least one Kubernetes cluster is provisioned (a little more challenging than one click, but still very fast), switch to the Project view and hit "Deploy" on the Workloads page:

All the options are left at their defaults except for the image and Port Mapping (we will get into more detail on this later). I want my service published on a random port on every host in my cluster, and when that port is hit, the traffic redirected to nginx's internal port 80. Once the workload is deployed, the public endpoint is set on the object in the UI for easy access:

By clicking on the 31217 public endpoint link, you get redirected straight to your service:

As you can see, it takes just one step to deploy a workload and publish it to the outside world, which is very similar to Rancher 1.6. If you are a Kubernetes user, you know it takes a couple of Kubernetes objects to back the above: a Deployment and a Service. The Deployment takes care of starting the containerized application; it also monitors the application's health, restarting it if it crashes according to the restart policy, etc. But in order to expose the application to the outside, Kubernetes needs a Service object created explicitly. Rancher makes this simple for the end user by taking the workload declaration in a user-friendly way and creating all the required Kubernetes constructs behind the scenes. More on those constructs in the next section.
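To make the mapping concrete, here is a rough sketch of the two Kubernetes objects that back a workload like the nginx example above. The names, labels, and values are illustrative, not the exact ones Rancher generates:

```yaml
# Illustrative equivalents of what Rancher creates for the nginx workload
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort          # exposes a random port (30000-32767) on every node
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80        # traffic is redirected to nginx's internal port 80
```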

More Workload Options

By default, the Rancher UI presents the user with the basic options for workload deployment. You can choose to change them, starting with the workload type:

Based on the type picked, a corresponding Kubernetes resource is going to get created.

  • Scalable deployment of (n) pods — Kubernetes Deployment
  • Run one pod on each node — Kubernetes DaemonSet
  • Stateful set — Kubernetes StatefulSet
  • Run on a cron schedule — Kubernetes CronJob
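For instance, picking "Run on a cron schedule" results in something roughly like the following CronJob. The name, image, and schedule here are made up for illustration, and the batch/v1beta1 API version matches Kubernetes releases current at the time of writing:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report        # illustrative name
spec:
  schedule: "0 2 * * *"       # run every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            args: ["sh", "-c", "echo generating report"]
```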

Along with the type, options like image, environment variables, and labels can be set. That will all define the deployment spec of your application. Now, exposing the application to the outside can be done via the Port Mapping section:

With this port declaration, after the workload is deployed, it will be exposed via the same random port on every node in the cluster. Modify Source Port if you need a specific value instead of a random one. There are several options for “Publish on”:

Based on the value picked, Rancher will create a corresponding service object on the Kubernetes side:

  • Every node — Kubernetes NodePort Service
  • Internal cluster IP — Kubernetes ClusterIP service. Your workload will be accessible via a private network only in this case.
  • Load Balancer — Kubernetes LoadBalancer service. Pick this option only when your Kubernetes cluster is deployed in a public cloud, such as AWS, and has external load balancer support (like AWS ELB).
  • Nodes running a pod — no service gets created; HostPort option gets set in the Deployment spec
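The last option is the only one that skips the Service object entirely. As a sketch, the pod spec of the Deployment simply gains a hostPort entry (the port value below is illustrative):

```yaml
# "Nodes running a pod": no Service is created; the container port is
# bound directly on whichever nodes actually run a pod.
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080    # reachable only on nodes hosting a pod
```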

We highlight the implementation details, but you don't really need to use them: the Rancher UI/API provides all the information necessary to access your workload, including a clickable link to the workload endpoint.

Traffic Distribution between Workloads using Ingress

There is one more way to publish a workload: via Ingress. Not only does it publish applications on the standard http(s) ports 80/443, but it also provides L7 routing capabilities along with SSL termination. Functionality like this can be useful if you deploy a web application and would like traffic routed to different endpoints based on host/path routing rules:
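As a sketch, host/path rules like these translate into an Ingress object along the following lines. The hostname and service names are hypothetical, and extensions/v1beta1 is the Ingress API version in Kubernetes releases current at the time of writing:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api          # L7 routing: /api goes to one workload...
        backend:
          serviceName: api-workload
          servicePort: 80
      - path: /             # ...and everything else to another
        backend:
          serviceName: web-workload
          servicePort: 80
```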

Unlike in Rancher 1.6, the load balancer is not tied to a specific LB provider like haproxy; the implementation varies based on the cluster type. For Google Container Engine clusters it is GLBC, for Amazon EKS it is AWS ELB/ALB, and for Digital Ocean/Amazon EC2 it is an nginx load balancer. Rancher installs and manages the last one itself, and we are planning to introduce more load balancer providers on demand in the future.

Enhanced Service Discovery

If you are building an application that consists of multiple workloads talking to each other, most likely DNS is used to resolve the service names. You can certainly connect to a container using its IP address, but containers can die and the IP address will change, so DNS is really the preferable way. Kubernetes service discovery comes as a built-in feature in all the clusters provisioned by Rancher: every workload created from the Rancher UI can be resolved by its name within the same namespace. Although a Kubernetes service (of ClusterIP type) needs to be created explicitly in order to discover the workload, Rancher takes this burden from its users and creates the service automatically for every workload. In addition, Rancher enhances service discovery by letting users create:

  • An Alias of another DNS value
  • A Custom record pointing to one or more existing workloads
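Under the hood, both of these map to ordinary Kubernetes service types. For example, an alias of another DNS value corresponds to an ExternalName service like this sketch (the names here are hypothetical):

```yaml
# Workloads in the same namespace can now resolve "db" to the external name.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: mydb.example.com
```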

All the above is available on the Workloads > Service Discovery page in the UI:

As you can see, configuring workloads in Rancher 2.0 is just as easy as in 1.6. Even though the backend now implements everything through Kubernetes, the Rancher UI still simplifies workload creation just as before. Through the Rancher interface, you can expose your workload to the public, place it behind a load balancer, and configure internal service discovery, all in an intuitive and easy way. This post covered the basics of workload management; we are planning to write more on features like Volumes, the Application Catalog, etc. In addition, our UI and backend are constantly evolving, and there may be new cool features being exposed as you read this post, so stay tuned!

Alena Prokharchyk

twitter: @lemonjet