Rancher Blog

Adding custom nodes to your Kubernetes cluster in Rancher 2.0 Tech Preview 2

February 13, 2018

Recently, we announced our second milestone release, Rancher 2.0 Tech Preview 2. It adds the ability to add custom nodes (nodes that are already provisioned with a Linux operating system and Docker), either by running a generated docker run command that launches the rancher/agent container, or by letting Rancher connect to the node over SSH. In this post, we will explore how to automate the generation of the docker run command used to add nodes.

Warning: this is not a production-ready product yet; don’t put your production workloads on it.

Requirements

  • A host running Linux and Docker
  • The JSON utility jq, to parse API responses
  • The sha256sum binary, to calculate the CA certificate checksum

Start Rancher server

Before we can execute any action, we need to launch the rancher/server container. The image to use for the 2.0 Tech Preview 2 is rancher/server:preview. Another change from 1.6 to 2.0 is that we no longer expose port 8080. Instead, we expose ports 80 and 443, where 80 is redirected to 443 by default. You can start the container as follows:

docker run -d -p 80:80 -p 443:443 rancher/server:preview

If you want the data for this setup to be persistent, you can mount a host volume to /var/lib/rancher as shown below:

docker run -d -p 80:80 -p 443:443 -v /data:/var/lib/rancher rancher/server:preview

Logging in and creating API key

In Rancher 1.x, there was no authentication enabled by default; after launching the rancher/server container, you could access the API/UI without any credentials. In Rancher 2.0, authentication is enabled with the default username and password admin. After logging in, we get a Bearer token, which allows us to change the password. After changing the password, we create an API key to execute the other requests. The API key is also a Bearer token, which we describe as automation since that is what it will be used for.

Logging in 

# Login
LOGINRESPONSE=`curl -s '' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"admin"}' --insecure`
LOGINTOKEN=`echo $LOGINRESPONSE | jq -r .token`

Changing the password (thisisyournewpassword)

# Change password
curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"admin","newPassword":"thisisyournewpassword"}' --insecure

Create API key

# Create API key
APIRESPONSE=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure`
# Extract and store token
APITOKEN=`echo $APIRESPONSE | jq -r .token`

Creating the cluster

With the newly generated API key, we can create a Cluster. When you create a cluster, you have 3 options:

  • Launch a Cloud Cluster (Google Kubernetes Engine/GKE for now)
  • Create a Cluster (our own Kubernetes installer, Rancher Kubernetes Engine, is used for this)
  • Import an Existing Cluster (if you already have a Kubernetes cluster, you can import it by inserting the kubeconfig file from that cluster)

For this post, we’ll be creating a cluster using Rancher Kubernetes Engine (rke). When you are creating a cluster, you can choose to create new nodes directly when creating the cluster (by creating nodes from cloud providers like DigitalOcean/Amazon) or use pre-existing nodes and let Rancher connect to the node using provided SSH credentials. The method we are discussing in this post (adding a node by running the docker run command) is only available after the cluster has been created.

You can create the cluster (yournewcluster) using the following commands. As you can see, only the parameter ignoreDockerVersion is set here (which tells RKE to ignore an unsupported Docker version for Kubernetes). The rest uses the defaults, which we will go into in another post. Until then, you can discover the configurable options through the UI.

# Create cluster
CLUSTERRESPONSE=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"yournewcluster"}' --insecure`
# Extract clusterid to use for generating the docker run command
CLUSTERID=`echo $CLUSTERRESPONSE | jq -r .id`

After running this, you should see your new cluster in the UI. Its status will be “waiting for nodes to provision or a valid configuration”, as no nodes have been added yet.

Assembling the docker run command to launch the rancher/agent

The final part of adding the node is to launch the rancher/agent container, which will add the node to the cluster. For this to succeed, we need:

  • The agent image that is coupled with the Rancher version
  • The roles for the node (etcd and/or controlplane and/or worker)
  • The address where the rancher/server container can be reached
  • The cluster token, which the agent uses to join the cluster
  • The checksum of the CA certificate

The agent image can be retrieved from the settings endpoint in the API:

AGENTIMAGE=`curl -s -H "Authorization: Bearer $APITOKEN" --insecure | jq -r .value`

You can decide the roles for the node yourself (for this example, we’ll be using all three roles):

ROLEFLAGS="--etcd --controlplane --worker"

The address where the rancher/server container can be reached should be self-explanatory; the rancher/agent will connect to that endpoint. We will store it in the RANCHERSERVER variable so we can use it when assembling the command.
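For example (the address below is a placeholder; substitute the address where your own Rancher server can be reached):

# Address of the Rancher server (placeholder value)
RANCHERSERVER="https://rancher_server_address"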


The cluster token can be retrieved from the created cluster. We saved the cluster id in CLUSTERID, which we can now use to generate a token.

# Generate token (clusterRegistrationToken)
AGENTTOKEN=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token`

The generated CA certificate is stored in the API as well and can be retrieved as shown below. We pipe it through sha256sum to generate the checksum we need to join the cluster.

# Retrieve CA certificate and generate checksum
CACHECKSUM=`curl -s -H "Authorization: Bearer $APITOKEN" --insecure | jq -r .value | sha256sum | awk '{ print $1 }'`

All the data needed to join the cluster is now available; we only need to assemble the command.

# Assemble the docker run command
AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"
# Show the command
echo $AGENTCOMMAND

The output of the last command (echo $AGENTCOMMAND) should look like this:

docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host rancher/agent:v2.0.2 --etcd --controlplane --worker --server https://rancher_server_address --token xg2hdr8rwljjbv8r94qhrbzpwbbfnkhphq5vjjs4dfxgmb4wrt9rpq --ca-checksum 3d6f14b44763184519a98697d4a5cc169a409e8dde143edeca38aebc1512c31d

After running this command on a node, you should see it join the cluster and get provisioned by Rancher.
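To verify on the node that the agent container actually started, a quick check (purely illustrative) is:

# Check that the rancher/agent container is running on the node
docker ps | grep rancher/agent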

Protip: the tokens can also be used directly as basic authentication, for example:

curl -u $APITOKEN --insecure

The complete GitHub gist is available for reference.

Hopefully this post helped with the first steps of automating your Rancher 2.0 Tech Preview 2 setup. We explored the steps you need to take to automatically generate the docker run command that adds a node to a cluster. Keep an eye on this blog for other posts regarding Rancher 2.0.

Also, if you have any questions, join our Rancher Users Slack by visiting https://slack.rancher.io and join the #2-0-tech-preview channel. You can also visit our forums to ask any questions: https://forums.rancher.com/

CICD Debates: Drone vs Jenkins

January 31, 2018


Jenkins has been the industry-standard CI tool for years. It contains a multitude of functionality, with almost 1,000 plugins in its ecosystem, which can be daunting to those who appreciate simplicity. Jenkins also came up in a world before containers, though it does fit nicely into that environment. This means there is not a particular focus on the things that make containers great, though with the inclusion of Blue Ocean and pipelines, that is rapidly changing.

Drone is an open source CI tool that wears simplicity like a badge of honor. It is truly Docker native, meaning that all actions take place within containers. This makes it a perfect fit for a platform like Kubernetes, where launching containers is an easy task.

Both of these tools work hand in hand with Rancher, which makes standing up a robust Kubernetes cluster an automatic process. I’ve used Rancher 1.6 to deploy a Kubernetes 1.8 cluster on GCE; it was as simple as can be.

This article will take Drone deployed on Kubernetes (on Rancher), and compare it to Jenkins across three categories:

  1. Platform installation and management
  2. Plugin ecosystem
  3. Pipeline details

In the end, I’ll stack them up side by side and try to give a recommendation. As is usually the case, however, there may not be a clear winner. Each tool has its core focus, though by nature there will be overlap.


Before getting started, we need to do a bit of setup. This involves registering Drone as an authorized OAuth2 app with a GitHub account. You can see the settings I’ve used here. All of this is covered in the Drone documentation.

There is one gotcha I encountered while setting up Drone. Drone maintains a passive relationship with the source control repository: it sets up a webhook with GitHub for notification of events. The default behavior is to build on push and PR merge events. In order for GitHub to properly notify Drone, the server must be accessible to the world. With other, on-prem SCMs this would not be the case, but it is for the example described here. I’ve set up my Rancher server on GCE so that it is reachable from GitHub.com.

Drone installs from a container through a set of deployment files, just like any other Kubernetes app. I’ve adapted the deployment files found in this repo. Within the ConfigMap spec file, there are several values we need to change. Namely, we need to set the GitHub-related values to ones specific to our account. We’ll take the client key and client secret from the setup steps and place them into this file, as well as the username of the authorized user. Within the drone-secret file, we can place our GitHub password in the appropriate slot.
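As a rough sketch of what that ConfigMap can look like (the key names mirror the deployment file shown further below; all values here are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: drone-config
  namespace: drone
data:
  # Address Drone will be reachable on, and whether open registration is allowed
  server.host: "http://drone.example.com"
  server.open: "true"
  # GitHub integration: client id/secret come from the OAuth2 app created above
  server.remote.github: "true"
  server.remote.github.client: "<your-oauth2-client-id>"
  server.remote.github.secret: "<your-oauth2-client-secret>"
  # GitHub user that should be a Drone admin
  server.admin: "<your-github-username>"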

This is a major departure from the way Jenkins interacts with source code. In Jenkins, each job can define its relationship with source control independently of other jobs. This allows you to pull source from a variety of different repositories, including GitHub, GitLab, SVN, and others. As of now, Drone only supports git-based repos. A full list is available in the documentation, but all of the most popular choices for git-based development are supported.

We also can’t forget our Kubernetes cluster! Rancher makes it incredibly easy to launch and manage a cluster. I’ve chosen to use the latest stable version of Rancher, 1.6. We could have used the new Rancher 2.0 tech preview, but constructing this guide worked best with the stable version. However, the information and installation steps should be the same, so if you’d like to try it out with the newer Rancher, go ahead!

Task 1 – Installation and Management

Launching Drone on Kubernetes and Rancher is as simple as copy and paste. I used the default Kubernetes dashboard to launch the files. Uploading them one by one, starting with the namespace and config files, will get the ball rolling. [Here are some of the deployment files I used](https://github.com/appleboy/drone-on-kubernetes/tree/master/gke). I pulled from this repository and made my own local edits. The repo is owned by a frequent Drone contributor and includes instructions on how to launch on GCE as well as AWS. The Kubernetes YAML files are the only things we need here. To replicate, just edit the ConfigMap file with your specific values. Check out one of my files below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-server
  namespace: drone
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
      - image: drone/drone:0.8
        imagePullPolicy: Always
        name: drone-server
        ports:
        - containerPort: 8000
          protocol: TCP
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        # Persist our configs in an SQLite DB in here
        - name: drone-server-sqlite-db
          mountPath: /var/lib/drone
        resources:
          requests:
            cpu: 40m
            memory: 32Mi
        env:
        - name: DRONE_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.host
        - name: DRONE_OPEN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.open
        - name: DRONE_DATABASE_DRIVER
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.driver
        - name: DRONE_DATABASE_DATASOURCE
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.datasource
        - name: DRONE_SECRET
          valueFrom:
            secretKeyRef:
              name: drone-secrets
              key: server.secret
        - name: DRONE_ADMIN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.admin
        - name: DRONE_GITHUB
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github
        - name: DRONE_GITHUB_CLIENT
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.client
        - name: DRONE_GITHUB_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.secret
        - name: DRONE_DEBUG
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.debug
      volumes:
      - name: drone-server-sqlite-db
        hostPath:
          path: /var/lib/k8s/drone
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock

Jenkins can be launched in much the same way. Because it is deployable in a Docker container, you can construct a similar deployment file and launch it on Kubernetes. Here’s an example below. This file was taken from the GCE examples repo for the Jenkins CI server.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: jenkins/jenkins:2.67
        ports:
        - containerPort: 8080
        - containerPort: 50000
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 2
          failureThreshold: 5
        env:
        - name: JENKINS_OPTS
          valueFrom:
            secretKeyRef:
              name: jenkins
              key: options
        - name: JAVA_OPTS
          value: '-Xmx1400m'
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
        resources:
          limits:
            cpu: 500m
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 1500Mi
      volumes:
      - name: jenkins-home
        gcePersistentDisk:
          pdName: jenkins-home
          fsType: ext4
          partition: 1

Launching Jenkins is similarly easy. Because of the simplicity of Docker and Rancher, all you need to do is take the set of deployment files and paste them into the dashboard. My preferred way of managing the cluster is the Kubernetes dashboard; from there, I can upload the Jenkins files one by one to get the server up and running.

Managing the Drone server comes down to the configuration passed when launching it. Hooking up to GitHub involved adding OAuth2 tokens, as well as (in my case) a username and password to access a repository. Changing this would involve either granting organization access through GitHub, or relaunching the server with new credentials. This could hamper development, as it means that Drone cannot handle more than one source provider. As mentioned above, Jenkins allows for any number of source repos, with the caveat that each job only uses one.

Task 2 – Plugins

Plugins in Drone are very simple to configure and manage. In fact, there isn’t much you need to do to get one up and running. The ecosystem is considerably smaller than that for Jenkins, but there are still plugins for almost every major tool available. There are plugins for most major cloud providers, as well as integrations with popular source control repos. As mentioned before, containers in Drone are first class citizens. This means that each plugin and executed task is also a container.

Jenkins is the undisputed king of plugins. If you can think of a task, there is probably a plugin to accomplish it. At last glance, there are almost 1,000 plugins available for use. The downside is that it can sometimes be difficult to determine, out of a selection of similar-looking plugins, which one is the best choice for what you’re trying to accomplish.

There are Docker plugins for building and pushing images, AWS and Kubernetes plugins for deploying to clusters, and various others. Because of the comparative youth of the Drone platform, there are far fewer plugins available than for Jenkins. That does not, however, take away from their effectiveness and ease of use. A simple stanza in a drone.yml file will automatically download, configure, and run a selected plugin, with no other input needed. And remember, because of Drone’s relationship with containers, each plugin is maintained within an image. There are no extra dependencies to manage; if the plugin creator has done their job correctly, everything will be contained within that container.

When I built the drone.yml file for the simple Node.js app, adding a Docker plugin was a breeze. Only a few lines were needed, and the image was built and pushed to a Docker Hub repo of my choosing. In the next section, you can see the stanza labeled docker; it is all that’s needed to configure and run the plugin that builds and pushes the Docker image.

Task 3 – Pipelines

The last task is the bread and butter of any CI system. Drone and Jenkins are both designed to build apps. Originally, Jenkins was targeted towards Java apps, but over the years its scope has expanded to include anything you could compile and execute as code. Jenkins also excels at pipelines and cron-like scheduled tasks. However, it is not container native, though it does fit very well into the container ecosystem. Here is the drone.yml for the simple Node.js app:

pipeline:
  build:
    image: node:alpine
    commands:
      - npm install
      - npm run test
      - npm run build
  docker:
    image: plugins/docker
    dockerfile: Dockerfile
    repo: badamsbb/node-example
    tags: v1

For comparison, here’s a Jenkinsfile for the same app.

#!/usr/bin/env groovy
pipeline {
 agent {
  node {
   label 'docker'
  }
 }
 tools {
  nodejs 'node8.4.0'
 }
 stages {
  stage ('Checkout Code') {
   steps {
    checkout scm
   }
  }
  stage ('Verify Tools') {
   steps {
    parallel (
     node: {
      sh "npm -v"
     },
     docker: {
      sh "docker -v"
     }
    )
   }
  }
  stage ('Build app') {
   steps {
    sh "npm prune"
    sh "npm install"
   }
  }
  stage ('Test') {
   steps {
    sh "npm test"
   }
  }
  stage ('Build container') {
   steps {
    sh "docker build -t badamsbb/node-example:latest ."
    sh "docker tag badamsbb/node-example:latest badamsbb/node-example:v${env.BUILD_ID}"
   }
  }
  stage ('Verify') {
   steps {
    input "Everything good?"
   }
  }
  stage ('Clean') {
   steps {
    sh "npm prune"
    sh "rm -rf node_modules"
   }
  }
 }
}

While this example is verbose for the sake of explanation, you can see that accomplishing the same goal, a built Docker image, can be more involved than with Drone. In addition, what’s not pictured is the setup of the interaction between Jenkins and Docker. Because Jenkins is not Docker native, agents must be configured ahead of time to properly interact with the Docker daemon. This can be confusing, which is where Drone comes out ahead: it is already running on top of Docker, and that same Docker is used to run its tasks.


Drone is a wonderful piece of CI software. It has quickly become a very popular choice for teams that want a simple, container-native CI solution. Its simplicity is elegant, though as it is still in pre-release status, there is much more to come. Adventurous engineers may be willing to give it a shot in production, and indeed many have. In my opinion, it is best suited to smaller teams looking to get up and running quickly; its small footprint and ease of use lend themselves readily to this kind of development.

However, Jenkins is the tried and true powerhouse of the CI community. It takes a lot to topple the king, especially one so entrenched in his position. Jenkins has been very successful at adapting to the market, with Blue Ocean and container-based pipelines making strong cases for its staying power. Jenkins can be used by teams of all sizes, but it excels at scale. Larger organizations love Jenkins due to its history and numerous integrations. It also has distinct support options: active community support for open source, or enterprise-level support through CloudBees. But as with all tools, both Drone and Jenkins have their place within the CI ecosystem.


Brandon Adams
Certified Jenkins Engineer, and Docker enthusiast. I’ve been using Docker since the early days, and love hearing about new applications for the technology. Currently working for a Docker consulting partner in Bethesda, MD.

Announcing Rancher 2.0 Tech Preview 2

January 24, 2018

Today we released the second tech preview of Rancher 2.0, our next major Rancher product release. We’ve been hard at work since the last tech preview release in September 2017, driven by the overwhelmingly positive response to our Rancher 2.0 vision and a great deal of feedback we have received.

The Tech Preview 2 release contains many significant changes and enhancements:

1. Rancher server is now 100% written in Go and no longer requires a MySQL database.
2. You can deploy Rancher server on any Docker host as before, or you can deploy Rancher server on an existing Kubernetes cluster.
3. You can create new Kubernetes clusters using Rancher Kubernetes Engine (RKE) or cloud-managed Kubernetes services such as GKE. Rancher automates both RKE and GKE cluster provisioning. Support for additional cloud-managed Kubernetes services such as EKS and AKS will be added in the future.
4. You can manage all your Kubernetes clusters from a unified cluster management interface. Rancher implements centralized authentication and authorization across all Kubernetes clusters, regardless of where these Kubernetes clusters are hosted.
5. Rancher provides a simple workload management interface across all your Kubernetes clusters. This is still a work in progress. We continue to offer an intuitive Rancher 1.0-style container-centric interface. We are working on adding many advanced workload management features such as the app catalog, CI/CD and monitoring integration, sophisticated stats, and centralized logging.

If you are interested in more technical details, take a look at the updated Rancher 2.0 Architecture Document, The Quick Start Guide, and the Technical Release 2 page on Github.

Also, enjoy the recording of the online meetup where we gave a live demo.

Again, Rancher 2.0 is a work-in-progress. We’ll be rolling out new features in the coming days and weeks as we get ready for our Beta release next month. You can register here to be part of the Beta User Community.

Using Kubernetes API from Go: Kubecon 2017 session recap

January 19, 2018

Last month I had the great pleasure of attending KubeCon 2017, which took place in Austin, TX. The conference was super informative, and deciding which sessions to join was really hard, as all of them were great. But what deserves special recognition is how well the organizers respected the attendees’ diversity of Kubernetes experience. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, tools and the community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.

In this article, we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in Go using the client-go library.

Kubernetes is a platform

Kubernetes can be liked for many reasons. As a user, you appreciate its feature richness, stability and performance. As a contributor, you appreciate that the Kubernetes open source community is not only large, but approachable and responsive. But what really makes Kubernetes appealing to a third-party developer is its extensibility. The project provides many ways to add new features and extend existing ones without disrupting the main code base. And that’s what makes Kubernetes a platform.

There are many ways to extend Kubernetes: every cluster component can be extended in a certain way, whether it is the Kubelet or the API server. Today we are going to focus on the “Custom Controller” approach; I’ll refer to it as a Kubernetes Controller, or simply a Controller, from now on.

What exactly is a Kubernetes Controller?

The most common definition of a controller is “code that brings the current state of the system to the desired state”. But what exactly does that mean? Let’s look at the Ingress controller as an example. Ingress is a Kubernetes resource that lets you define external access to the services in a cluster, typically over HTTP and usually with load balancing support. But the Kubernetes core code has no ingress implementation; the implementation is covered by third-party controllers that:

  • Watch ingress/services/endpoints resource events (Create/Update/Remove)
  • Program internal or external Load Balancer
  • Update Ingress with the Load Balancer address

The “desired” state of the ingress is an IP address pointing to a functioning load balancer programmed with the rules defined by the user in the Ingress specification. An external ingress controller is responsible for bringing the ingress resource to this state.

The implementation of a controller for the same resource, as well as the way to deploy it, can vary. You can pick the nginx controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your ingress controller outside of the Kubernetes cluster and program F5 as a load balancer. There are no strict rules; Kubernetes is flexible in that way.


There are several ways to get information about a Kubernetes cluster and its resources. You can do it using the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. client-go is the most popular library used by tools written in Go. There are clients for many other languages out there (Java, Python, etc.), but if you want to write your very first controller, I encourage you to try Go and client-go. Kubernetes is written in Go, and I find it easier to develop a plugin in the same language the main project is written in.

Let’s build…

The best way to get familiar with a platform and the tools around it is to write something. Let’s start simple and implement a controller that:

  • Monitors Kubernetes nodes
  • Alerts when the storage occupied by images on a node changes

The code source can be found here.

Ground work

Setup the project

As a developer, I like to sneak a peek at the tools my peers use to make their lives easier. Here I’m going to share three favorite tools of mine that are going to help us with our very first project.

  1. go-skel – a skeleton generator for Go microservices. Just run ./skel.sh test123, and it will create the skeleton for the new Go project test123.
  2. trash – a Go vendor management tool. There are many Go dependency management tools out there, but trash has proven to be simple to use and great when it comes to transient dependency management.
  3. dapper – a tool to wrap any existing build tool in a consistent environment

Add client-go as a dependency

In order to use client-go code, we have to pull it in as a dependency of our project. Add it to vendor.conf:
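The original post showed this step as a screenshot; as a rough sketch, a trash-style vendor.conf (the package path and version below are placeholders; pick a version matching your cluster) might look like:

# package
github.com/yourname/test123

k8s.io/client-go v6.0.0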

Then run trash. It will automatically pull all the dependencies defined in vendor.conf into the vendor folder of the project. Make sure the client-go version is compatible with the Kubernetes version of your cluster.

Create a client

Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, your application is containerized and gets deployed as a Kubernetes Pod. That gives you certain perks – you can choose the way to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure a healthcheck for it, etc. When your application runs outside of the cluster, you have to manage it yourself. Let’s make our tool flexible and support both ways of creating the client, based on a config flag:

We are going to use out-of-cluster mode while debugging the app, as this way you do not have to build the image and redeploy it as a Kubernetes Pod every time. Once the app is tested, we can build the image and deploy it in the cluster.

The config is built and passed to kubernetes.NewForConfig to generate the client, as in the sketch below.
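Since the original code was shown as a screenshot, here is a minimal sketch of that logic (the flag names and function name are illustrative, not from the original code):

package main

import (
    "flag"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

// newClientset builds a clientset either from a kubeconfig file
// (out-of-cluster mode) or from the in-cluster service account.
func newClientset(outsideCluster bool, kubeconfig string) (*kubernetes.Clientset, error) {
    var config *rest.Config
    var err error
    if outsideCluster {
        // Out-of-cluster: read credentials from a kubeconfig file
        config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
    } else {
        // In-cluster: use the token and CA certificate mounted into the Pod
        config, err = rest.InClusterConfig()
    }
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(config)
}

func main() {
    outsideCluster := flag.Bool("outside-cluster", false, "use a kubeconfig file instead of the in-cluster config")
    kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig file (out-of-cluster mode)")
    flag.Parse()

    clientset, err := newClientset(*outsideCluster, *kubeconfig)
    if err != nil {
        panic(err)
    }
    _ = clientset // used by the examples in the following sections
}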

Play with basic CRUDs

For our tool, we need to monitor Nodes. It is a good idea to get familiar with the way to do CRUD operations using client-go before implementing the logic:

Using the clientset, you can, for example:

  • List nodes named “minikube”, which can be achieved by passing a FieldSelector filter to the command.
  • Update the node with a new annotation.
  • Delete the node with gracePeriod=10 seconds – meaning that the removal will happen only 10 seconds after the command is issued.

All of that is done using the clientset we created in the previous step, as in the sketch below.
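A minimal sketch of those calls (the annotation key is only an example; note that newer client-go releases also require a context.Context argument on these methods):

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listUpdateDelete demonstrates the basic CRUD calls against the Nodes API.
func listUpdateDelete(clientset *kubernetes.Clientset) error {
    // List only the node named "minikube" by passing a field selector
    nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
        FieldSelector: "metadata.name=minikube",
    })
    if err != nil || len(nodes.Items) == 0 {
        return err
    }

    // Update the node with a new annotation (the key is just an example)
    node := &nodes.Items[0]
    if node.Annotations == nil {
        node.Annotations = map[string]string{}
    }
    node.Annotations["example.com/visited"] = "true"
    if _, err := clientset.CoreV1().Nodes().Update(node); err != nil {
        return err
    }

    // Delete the node with a grace period of 10 seconds
    grace := int64(10)
    return clientset.CoreV1().Nodes().Delete("minikube", &metav1.DeleteOptions{
        GracePeriodSeconds: &grace,
    })
}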

We also need information about the images on the node; it can be retrieved by accessing the corresponding status field:
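That field is Status.Images on the Node object. For example, to sum the reported image sizes:

import v1 "k8s.io/api/core/v1"

// imageStorage returns the total size, in bytes, of the images present on a node.
func imageStorage(node *v1.Node) int64 {
    var total int64
    for _, image := range node.Status.Images {
        total += image.SizeBytes
    }
    return total
}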

Watch/Notify using Informer

Now we know how to fetch the nodes from the Kubernetes APIs and get image information from them. How do we monitor changes to the images’ size? The simplest way would be to periodically poll the nodes, calculate the current image storage capacity, and compare it with the result from the previous poll. The downside is that we execute the list call to fetch all the nodes whether or not they changed, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node gets changed, and only then run our logic. That’s where the client-go Informer comes to the rescue.

In this example, we create an Informer for the Node object by passing it the watchList instruction on how to monitor Nodes, the object type api.Node, and a resync period of 30 seconds, instructing it to periodically poll the nodes even when there were no changes – a nice fallback in case an update event gets dropped for some reason. As the last argument, we pass two callback functions – handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic that has to be triggered on a node’s changes – finding out whether the storage occupied by images on the node changed. NewInformer gives back two objects – a store and a controller. Once the controller is started, the watch on node.update and node.add will start, and the callback functions will get called. The store is an in-memory cache which gets updated by the informer, and you can fetch the node object from the cache instead of calling the Kubernetes APIs directly:
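A minimal sketch of that wiring (handleNodeAdd and handleNodeUpdate are the callbacks described above; this sketch assumes they take *v1.Node arguments and does not define them):

import (
    "time"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// startNodeInformer wires up an Informer on Nodes and returns its in-memory store.
func startNodeInformer(clientset *kubernetes.Clientset, stop chan struct{}) cache.Store {
    // Watch and list Nodes across the whole cluster
    watchList := cache.NewListWatchFromClient(
        clientset.CoreV1().RESTClient(), "nodes", v1.NamespaceAll, fields.Everything())

    store, controller := cache.NewInformer(
        watchList,
        &v1.Node{},
        30*time.Second, // resync period: objects are re-delivered even without changes
        cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                handleNodeAdd(obj.(*v1.Node))
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                handleNodeUpdate(oldObj.(*v1.Node), newObj.(*v1.Node))
            },
        },
    )

    go controller.Run(stop)
    return store
}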

As we have a single controller in our project, using a regular Informer is good enough. But if your future project ends up having several controllers for the same object, using a SharedInformer is recommended. Instead of creating multiple regular informers – one per controller – you register one shared informer, let each controller register its own set of callbacks, and get back a shared cache in return, which reduces the memory footprint:
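A sketch of the shared variant, continuing the previous example under the same assumptions:

// One shared informer, many handlers: each controller registers its own callbacks.
sharedInformer := cache.NewSharedIndexInformer(
    watchList, &v1.Node{}, 30*time.Second, cache.Indexers{})

sharedInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    func(obj interface{}) { handleNodeAdd(obj.(*v1.Node)) },
    UpdateFunc: func(oldObj, newObj interface{}) { handleNodeUpdate(oldObj.(*v1.Node), newObj.(*v1.Node)) },
})

go sharedInformer.Run(stop)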

Deployment time

Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in out-of-cluster mode:
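For example (the binary name and flags are the illustrative ones from the client sketch above, not from the original code):

# Build the controller and run it against a kubeconfig (out-of-cluster mode)
go build -o node-monitor .
./node-monitor -outside-cluster -kubeconfig=$HOME/.kube/config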

To change the message output, deploy a pod using an image which is not yet present on the node.

Once the basic functionality is tested, it is time to try running it in cluster mode. For that, we have to create the image first. Define the Dockerfile:
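The original Dockerfile was shown as a screenshot; a minimal, illustrative version (base image and binary name are placeholders) could be:

FROM alpine:3.7
# Copy the statically compiled controller binary into the image
COPY node-monitor /usr/local/bin/node-monitor
ENTRYPOINT ["/usr/local/bin/node-monitor"]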

Then create the image using docker build . It will generate an image that you can use to deploy the pod in Kubernetes. Now your application can run as a Pod in a Kubernetes cluster. Here is an example of a deployment definition that can be used to deploy the app:
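A minimal sketch of such a deployment (image name and labels are placeholders):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-monitor
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
      - name: node-monitor
        # Placeholder image reference; push your built image to a registry first
        image: yourrepo/node-monitor:latest
        imagePullPolicy: Always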

So we have:

  • Created a Go project
  • Added the client-go package dependencies to it
  • Created a client to talk to the Kubernetes API
  • Defined an Informer that watches Node object changes and executes a callback function when they happen
  • Implemented the actual logic in the callback definition
  • Tested the code by running the binary outside of the cluster, and then deployed it inside the cluster

If you have any comments or questions on the topic, please feel free to share them with me!


Alena Prokharchyk

twitter: @lemonjet

github: https://github.com/alena1108

2017 Container Technology Retrospective - The Year of Kubernetes

December 27, 2017

It is not an overstatement to say that, when it comes to container technologies, 2017 was the year of Kubernetes. While Kubernetes has been steadily gaining momentum ever since it was announced in 2014, it reached escape velocity in 2017. Just this year, more than 10,000 people participated in our free online Kubernetes Training classes. A few other key data points:

  1. Our company, Rancher Labs, built a product that supported multiple container orchestrators, including Swarm, Mesos, and Kubernetes. Responding to overwhelming market and customer demands, we decided to build Rancher 2.0 to 100% focus on Kubernetes. We are not alone. Even vendors who developed competing frameworks, like Docker Inc. and Mesosphere, announced support for Kubernetes this year.
  2. It has become significantly easier to install and operate Kubernetes. In fact, in most cases, you no longer need to install and operate Kubernetes at all. All major cloud providers, including Google, Microsoft Azure, AWS, and leading Chinese cloud providers such as Huawei, Alibaba, and Tencent, launched Kubernetes as a Service. Not only is it easier to set up and use cloud Kubernetes services like Google GKE; they are also cheaper, since they often do not charge for the resources required to run the Kubernetes master. Because it takes at least 3 nodes to run the Kubernetes API servers and the etcd database, cloud Kubernetes-as-a-Service can lead to significant savings. For users who still want to stand up Kubernetes in their own data center, VMware announced Pivotal Container Service (PKS). Indeed, with more than 40 vendors shipping CNCF-certified Kubernetes distributions, standing up and operating Kubernetes is easier than ever.
  3. The most important sign of the growth of Kubernetes is the significant number of users who started to run their mission-critical production workload on Kubernetes. At Rancher, because we supported multiple orchestration engines from day one, we have a unique perspective of the growth of Kubernetes relative to other technologies. One Fortune 50 Rancher customer, for example, runs their applications handling billions of dollars of transactions every day on Kubernetes clusters.

A significant trend we observed this year was an increased focus on security among customers who run Kubernetes in production. Back in 2016, the most common questions we heard from our customers centered around CI/CD. That was when Kubernetes was primarily used in development and testing environments. Nowadays, the most common feature requests from customers are single sign-on, centralized access control, strong isolation between applications and services, infrastructure hardening, and secret and credentials management. We believe, in fact, offering a layer to define and enforce security policies will be one of the strongest selling points of Kubernetes. There’s no doubt security will continue to be one of the hottest areas of development in 2018.

With cloud providers and VMware all supporting Kubernetes services, Kubernetes has become a new infrastructure standard. This has huge implications for the IT industry. As we all know, compute workloads are moving to public IaaS clouds, and IaaS is built on virtual machines. There is no standard virtual machine image format or standard virtual machine cluster manager. As a result, an application built for one cloud cannot easily be deployed on other clouds. Kubernetes is a game changer. An application built for Kubernetes can be deployed on any compliant Kubernetes service, regardless of the underlying infrastructure. Among Rancher customers, we already see widespread adoption of multi-cloud deployments. With Kubernetes, multi-cloud is easy. DevOps teams get the benefits of increased flexibility, increased reliability, and reduced cost, without having to complicate their operational practices.

I am really excited about how Kubernetes will continue to grow in 2018. Here are some specific areas we should pay attention to:

  1. Service Mesh gaining mainstream adoption. At the recent KubeCon show, the hottest topic was Service Mesh. Linkerd, Envoy, Istio, etc. all gained traction in 2017. Even though the adoption of these technologies is still at an early stage, the potential is huge. People often think of service mesh as a microservices framework. I believe, however, service mesh will bring benefits far beyond a microservice framework. Service mesh can become a common underpinning for all distributed applications. It offers application developers a great deal of support in communication, monitoring, and management of various components that make up an application. These components may or may not be microservices. They don’t even have to be built from containers. Even though not many people use service mesh today, we believe it will become popular in 2018. We, like most people in the container industry, want to play a part. We are busy integrating service mesh technologies into Rancher 2.0 now!
  2. From cloud-native to Kubernetes-native. The term “cloud native application” has been popular for a few years. It means applications developed to run on a cloud like AWS, instead of static environments like vSphere or bare metal clusters. Applications developed for Kubernetes are by definition cloud-native because Kubernetes is now available on all clouds. I believe, however, the world is ready to move from cloud-native to, using a term I first heard from Joe Beda, “Kubernetes-native”. I know of many organizations developing applications specifically to run on Kubernetes. These applications don’t just use Kubernetes as a deployment platform. They persist data in Kubernetes’s own etcd database. They use Kubernetes custom resource definition (CRD) as data access objects. They encode business logic in Kubernetes controllers. They use Kubelets to manage distributed clusters. They build their own API layer on Kubernetes API server. They use `kubectl` as their own CLI. Kubernetes-native applications are easy to build, run anywhere, and are massively scalable. In 2018, we will surely see more Kubernetes-native applications!
  3. Massive number of ready-to-run applications for Kubernetes. Most people use Kubernetes today to deploy their own applications. Not many organizations ship their application packages as YAML files or Helm charts yet. I believe this is about to change. Already, most modern software (such as AI frameworks like TensorFlow) is available as Docker containers, and it is easy to deploy these containers in Kubernetes clusters. A few weeks ago, the Apache Spark project added support for using Kubernetes as a scheduler, in addition to Mesos and YARN. Kubernetes is now a great big-data platform. We believe that, from this point onward, all server-side software packages will be distributed as containers and will be able to leverage Kubernetes as a cluster manager. Watch out for vast growth in the availability of ready-to-run YAML files and Helm charts in 2018.

Looking back, growth of Kubernetes in 2017 far exceeded what all of us thought at the end of 2016. While we expected AWS to support Kubernetes, we did not expect the interest in service mesh and Kubernetes-native apps to grow so quickly. 2018 could very well bring us many unexpected technological developments. I can’t wait to find out!
