In a previous article in this series, we looked at basic
Kubernetes concepts, including namespaces, pods, deployments, and
services. Now we will use these building blocks in a realistic
deployment. We will cover how to set up persistent volumes, how to set
up claims for those volumes, and how to mount those claims into pods. We
will also look at creating and using secrets with the Kubernetes
secrets-management system. Lastly, we will look at service discovery
within the cluster, as well as exposing services to the outside world.
We will be using go-auth
as a sample application to illustrate the features of Kubernetes. If you
have gone through our Docker
CI/CD series of
articles then you will be familiar with the application. It is a simple
authentication service consisting of an array of stateless web-servers
and a database cluster. Creating a database inside Kubernetes is
nontrivial as the ephemeral nature of containers conflicts with the
persistent storage requirements of databases.
Before launching our go-auth application, we must set up a database for
it to connect to. Before setting up a database server in Kubernetes, we
must provide it with a persistent storage volume. This makes database
state persistent across database restarts, and allows storage to migrate
when containers are moved from one host to another.
The currently supported persistent volume types are listed in the
Kubernetes documentation.
We are going to use NFS-based volumes, as NFS is ubiquitous in network
storage systems. If you do not have an NFS server handy, you may want to
use Amazon Elastic File System (EFS) to quickly
set up a mountable NFS volume. Once you have your NFS (or EFS) volume,
you can set up a persistent volume in Kubernetes using the following
spec. We specify the hostname or IP of our NFS/EFS server, and request 1
GiB of storage with the ReadWriteMany access mode.
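A minimal NFS-backed persistent volume spec along these lines looks like the following; the NFS server address and export path are placeholders that you must replace with your own:

```yaml
# persistent-volume.yaml -- NFS-backed persistent volume (sketch;
# replace server and path with your own NFS/EFS endpoint)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany        # many nodes may mount this volume read-write
  nfs:
    server: NFS_SERVER_ADDRESS
    path: "/NFS_EXPORT_PATH"
```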
Once you create your volume using kubectl create -f
persistent-volume.yaml, you can use the following command to list your
newly created volume:
$kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
mysql-volume 1Gi RWX Available 29s
Now that we have our volume, we can create a persistent volume claim
using the spec below. A persistent volume claim reserves our
persistent volume, and can then be mounted into a container as a volume.
The specifications we provide for our claim are used to match available
persistent volumes, which are bound to the claim if found. For example,
we specify that we only want a ReadWriteMany volume with at least 1 GiB
of storage.
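A claim spec along these lines, requesting a ReadWriteMany volume with at least 1 GiB of storage, might look like:

```yaml
# persistent-volume-claim.yaml -- claims a ReadWriteMany volume
# with at least 1 GiB of storage
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```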
We can see whether our claim was able to bind to a volume using the following command:
$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
mysql-claim Bound nfs 1Gi RWX 13s
Before we start using our persistent volume and claim in a MySQL
container, we also need to figure out how to get a secret, such as the
database password, into Kubernetes pods. Luckily, Kubernetes provides a
secrets-management system for this purpose.
To create a managed secret for the database password, create a file
called password.txt and add your plain-text password to it. Make sure
there are no newline characters in this file, as they will become part
of the secret. Once you have created your password file, use the
following command to store your secret in Kubernetes:
$kubectl create secret generic mysql-pass --from-file=password.txt
secret "mysql-pass" created
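Text editors typically append a trailing newline, so one safe way to write the password file is with echo -n, which suppresses it (the password value here is just a placeholder):

```shell
# Write the password with no trailing newline (-n suppresses it)
echo -n 'S3cret!' > password.txt

# Sanity check: the byte count should equal the password length exactly
wc -c < password.txt
```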
You can look at a list of all current secrets using the following command:
$kubectl get secret
NAME TYPE DATA AGE
mysql-pass Opaque 1 3m
Now that we have all the requisite pieces, we can set up our MySQL
deployment using the spec below. Some interesting things to note: in the
spec, we use the Recreate strategy, which means that an update of the
deployment will drop all containers and create them again, rather than
performing a rolling deploy. This is needed because we only want one
MySQL container accessing the persistent volume. However, it also means
that there will be downtime if we redeploy our database. Secondly, we
use the valueFrom and secretKeyRef parameters to inject the secret we
created earlier into our container as an environment variable. Lastly,
note in the ports section that we name our port; downstream containers
will refer to the port by its name, not its value. This allows us to
change the port in future deployments without having to update the
downstream containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password.txt
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-claim
Once we have a MySQL deployment, we must attach a service front-end to
it so that it is accessible to other services in our application. To
create the service, we can use the following spec. Note that we could
specify a cluster IP in this spec if we wanted to statically link our
application layer to this database service. However, we will instead use
the service discovery mechanisms in Kubernetes to avoid hard-coding IPs.
apiVersion: v1
kind: Service
metadata:
  name: go-auth-mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
In Kubernetes, service discovery is available through Docker-link-style
environment variables. All services in a cluster are visible to all
containers/pods in the cluster. Kubernetes uses iptables rules to
redirect service requests to the kube-proxy, which in turn routes them
to the hosts and pods providing the requisite service. For example, if
you run kubectl exec -it POD_NAME bash on any container and then run
env, you can see the service link variables, as shown below. We will use
this setup to connect our go-auth web application to the database.
$env | grep GO_AUTH_MYSQL_SERVICE
GO_AUTH_MYSQL_SERVICE_HOST=CLUSTER_IP
GO_AUTH_MYSQL_SERVICE_PORT=3306
Now that we have our database up and exposed, we can bring up our web
layer, using the spec shown below. We will be using the
usman/go-auth-kubernetes image, which adds the database service cluster
IP to /etc/hosts at startup. If you use the DNS add-on in Kubernetes,
you can skip this step. We also use the secrets-management feature in
Kubernetes to mount the mysql-pass secret into the container. Using the
args parameter, we specify the db-host argument as the mysql host we set
up in /etc/hosts. In addition, we specify db-password-file so that our
application can read the password it needs to connect to the MySQL
cluster. We also use the livenessProbe element to monitor our web
service container. If the process has problems, Kubernetes will detect
the failure and replace the pod automatically.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-auth-web
spec:
  replicas: 2 # We want two pods for this deployment
  template:
    metadata:
      labels:
        app: go-auth-web
    spec:
      containers:
      - name: go-auth-web
        image: usman/go-auth-kubernetes
        ports:
        - containerPort: 8080
        args:
        - "-l debug"
        - "--db-host mysql"
        - "--db-user root"
        - "--db-password-file /etc/db/password.txt"
        - "-p 8080"
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
        volumeMounts:
        - name: mysql-password-volume
          mountPath: /etc/db
          readOnly: true
      volumes:
      - name: mysql-password-volume
        secret:
          secretName: mysql-pass
Now that we have set up our go-auth deployment, we can expose the
service with the following spec. We specify the service type NodePort,
which exposes the service on a port from the Kubernetes node-port range
(30000-32767) on every Kubernetes host. The host then uses kube-proxy to
route traffic to one of the pods in the go-auth deployment. We can now
use round-robin DNS or an external load balancer to route traffic to all
Kubernetes nodes for fault tolerance and to spread load.
apiVersion: v1
kind: Service
metadata:
  name: go-auth-web
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30000
  selector:
    app: go-auth-web
With our service exposed, we can use the go-auth REST API to create a
user and then generate a token for that user, using the following
commands. These commands will work even if you kill one of the
go-auth-web containers. They will also still work if you delete the
MySQL container (after the short window in which it gets replaced).
$curl -i -X PUT -d userid=USERNAME \
    -d password=PASSWORD KUBERNETES_HOST:30000/user
$curl -i -X POST -d password=PASSWORD \
    'KUBERNETES_HOST:30000/token/USERNAME'
With our services set up, we now have both a persistent MySQL deployment
and service, as well as a stateless web deployment and service for
go-auth. We can terminate the MySQL container and it will restart
without losing state (although there will be temporary downtime). You
may also mount the same NFS volume as a read-only volume for MySQL
slaves, to allow reads even while the master is down and being replaced.
In future articles, we will cover using Pet Sets and Cassandra-style
application-layer replicated databases to build persistence layers that
tolerate failure without any downtime. For the stateless web layer, we
already support failure recovery without downtime. In addition to our
services and deployments, we looked at how to manage secrets in our
cluster so that they are exposed to the application only at run time.
Lastly, we looked at a mechanism by which services discover each other.
Kubernetes can be daunting with its plethora of terminology and
verbosity. However, if you need to run workloads in production under
load, Kubernetes provides a lot of the plumbing that you would otherwise
have to hand-roll.