
Using Kubernetes API from Go: KubeCon 2017 session recap

January 19, 2018

Last month I had the great pleasure of attending KubeCon 2017, which took place in Austin, TX. The conference was super informative, and deciding on which sessions to join was really hard, as all of them were great. But what deserves special recognition is how well the organizers respected the attendees' diverse levels of Kubernetes experience. That support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, tools and the community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.

In this article we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in the Go language using the client-go library.

Kubernetes is a platform

There are many reasons to like Kubernetes. As a user, you appreciate its feature richness, stability and performance. As a contributor, you find the Kubernetes open source community not only large, but approachable and responsive. But what really makes Kubernetes appealing to a third-party developer is its extensibility. The project provides many ways to add new features or extend existing ones without disrupting the main code base. And that's what makes Kubernetes a platform.

Here are some ways to extend Kubernetes:

[Diagram: the extension points of a Kubernetes cluster, component by component]

As the diagram shows, every Kubernetes cluster component can be extended in a certain way, whether it is the kubelet or the API server. Today we are going to focus on the "custom controller" approach; I'll refer to it as a Kubernetes controller, or simply a controller, from now on.

What exactly is a Kubernetes controller?

The most common definition of a controller is "code that brings the current state of the system to the desired state". But what exactly does that mean? Let's look at the ingress controller as an example. Ingress is a Kubernetes resource that lets you define external access to the services in a cluster, typically over HTTP and usually with load balancing support. But the Kubernetes core code has no ingress implementation. The implementation is left to third-party controllers, which:

  • Watch Ingress/Service/Endpoints resource events (Create/Update/Remove)
  • Program an internal or external load balancer
  • Update the Ingress with the load balancer address

The "desired" state of the ingress is an IP address pointing to a functioning load balancer programmed with the rules defined by the user in the Ingress specification. An external ingress controller is responsible for bringing the Ingress resource to this state.

The implementation of a controller for the same resource, as well as the way it is deployed, can vary. You can pick the NGINX ingress controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your ingress controller outside of the Kubernetes cluster and program an F5 appliance as the load balancer. There are no strict rules; Kubernetes is flexible in that way.

Client-go

There are several ways to get information about a Kubernetes cluster and its resources. You can do it using the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. Client-go is the most popular library used by tools written in Go. There are clients for many other languages out there (Java, Python, etc.), but if you want to write your very first controller, I encourage you to try Go and client-go. Kubernetes is written in Go, and I find it easier to develop a plugin in the same language the main project is written in.

Let's build…

The best way to get familiar with a platform and the tools around it is to write something. Let's start simple, and implement a controller that:

  • Monitors Kubernetes nodes
  • Alerts when the storage occupied by images on a node changes

The source code can be found here.

Groundwork

Set up the project

As a developer, I like to sneak a peek at the tools my peers use to make their lives easier. Here I'm going to share three favorite tools of mine that will help us with our very first project.

  1. go-skel – a skeleton generator for Go microservices. Just run ./skel.sh test123, and it will create the skeleton for a new Go project named test123.
  2. trash – a Go vendor management tool. There are many Go dependency management tools out there, but trash has proved to be simple to use and great when it comes to transitive dependency management.
  3. dapper – a tool to wrap any existing build tool in a consistent environment

Add client-go as a dependency

In order to use client-go, we have to pull it into our project as a dependency. Add it to vendor.conf:
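
The original post showed the vendor.conf entries as an image. Here is a minimal sketch of what they might look like, assuming the project is named test123 and pinning client-go v6.0.0, which pairs with Kubernetes 1.9 (the paths and versions here are illustrative):

    # root package of our project (illustrative path)
    github.com/example/test123

    # client-go and its companion repos, pinned to matching releases
    k8s.io/client-go v6.0.0
    k8s.io/api kubernetes-1.9.0
    k8s.io/apimachinery kubernetes-1.9.0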

Then run trash. It will automatically pull all the dependencies defined in vendor.conf into the project's vendor folder. Make sure the client-go version is compatible with the Kubernetes version of your cluster.

Create a client

Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, your application is containerized and gets deployed as a Kubernetes Pod. That gives you certain perks – you can choose the way to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure health checks for it, etc. When your application runs outside of the cluster, you have to manage it yourself. Let's make our tool flexible, and support both ways of creating the client based on a config flag:
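
The original post showed this step as a screenshot; here is a minimal sketch of the pattern, assuming the kubeconfig path is passed in from a command-line flag (the function name and the flag are illustrative):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    // getClient returns a clientset built either from a kubeconfig file
    // (outside-of-cluster mode) or from the pod's service account
    // (in-cluster mode), depending on whether a kubeconfig path was given.
    func getClient(kubeconfig string) (*kubernetes.Clientset, error) {
        var config *rest.Config
        var err error
        if kubeconfig != "" {
            // Outside of the cluster: read credentials from the kubeconfig file.
            config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
        } else {
            // Inside the cluster: use the mounted service account token.
            config, err = rest.InClusterConfig()
        }
        if err != nil {
            return nil, err
        }
        return kubernetes.NewForConfig(config)
    }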

We are going to use the outside-of-cluster mode while debugging the app, as this way you do not have to build the image and redeploy it as a Kubernetes Pod after every change. Once the app is tested, we can build the image and deploy it in the cluster.

As the sketch above shows, either way a rest.Config is built and passed to kubernetes.NewForConfig to generate the client.

Play with basic CRUD operations

For our tool, we need to monitor nodes. It is a good idea to get familiar with the way CRUD operations are done using client-go before implementing the logic:
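
A sketch of those calls (the annotation key is illustrative, and the signatures match client-go releases of that era – newer releases also take a context.Context and options structs):

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func demoCRUD(clientset *kubernetes.Clientset) error {
        // List nodes named "minikube" by passing a field selector.
        nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
            FieldSelector: "metadata.name=minikube",
        })
        if err != nil || len(nodes.Items) == 0 {
            return err
        }

        // Update the node with a new annotation.
        node := nodes.Items[0]
        if node.Annotations == nil {
            node.Annotations = map[string]string{}
        }
        node.Annotations["example.com/visited"] = "true"
        if _, err := clientset.CoreV1().Nodes().Update(&node); err != nil {
            return err
        }

        // Delete the node with gracePeriod=10 seconds.
        gracePeriod := int64(10)
        return clientset.CoreV1().Nodes().Delete("minikube", &metav1.DeleteOptions{
            GracePeriodSeconds: &gracePeriod,
        })
    }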

The sketch above shows how to:

  • List nodes named "minikube", which can be achieved by passing a FieldSelector filter to the call
  • Update the node with a new annotation
  • Delete the node with gracePeriod=10 seconds – meaning that the removal will happen only 10 seconds after the command is issued

All of that is done using the clientset we created in the previous step.

We will need information about the images on the node; it can be retrieved by accessing the corresponding field of the node's status:
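
For example, summing the SizeBytes of each entry in the node's image list gives the total storage occupied by images (a small sketch):

    import v1 "k8s.io/api/core/v1"

    // totalImageStorage returns the storage occupied by images on the node.
    func totalImageStorage(node *v1.Node) int64 {
        var total int64
        for _, image := range node.Status.Images {
            total += image.SizeBytes
        }
        return total
    }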

Watch/Notify using Informer

Now we know how to fetch nodes from the Kubernetes API and get image information from them. How do we monitor changes to the images' size? The simplest way would be to periodically poll the nodes, calculate the current image storage usage and compare it with the result from the previous poll. The downside is that we execute a list call to fetch all the nodes whether or not anything changed, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node changes, and only then run our logic. That's where the client-go Informer comes to the rescue.

In this example, we create the Informer for the Node object by passing it a watchList with instructions on how to monitor nodes, the object type v1.Node, and a resync period of 30 seconds, which instructs it to periodically re-deliver the node even when there were no changes to it – a nice fallback in case an update event gets dropped for some reason. As the last argument, we pass two callback functions – handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic that has to be triggered on a node's changes – finding out whether the storage occupied by images on the node has changed. NewInformer gives back two objects – a store and a controller. Once the controller is started, the watch on node updates and additions will begin, and the callback functions will get called. The store is an in-memory cache which gets updated by the informer, and you can fetch the node object from the cache instead of calling the Kubernetes API directly:
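
A minimal sketch of that wiring, using the handler names from the text (the alerting logic inside the handlers is left as a stub):

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func startNodeInformer(clientset *kubernetes.Clientset) cache.Store {
        handleNodeAdd := func(obj interface{}) {
            // record the initial image storage usage of the new node
        }
        handleNodeUpdate := func(oldObj, newObj interface{}) {
            // compare image storage between oldObj and newObj, alert on change
        }

        // watchList tells the informer how to list and watch nodes.
        watchList := cache.NewListWatchFromClient(
            clientset.CoreV1().RESTClient(), "nodes", v1.NamespaceAll, fields.Everything())

        store, controller := cache.NewInformer(
            watchList,
            &v1.Node{},
            30*time.Second, // resync period: re-deliver even without changes
            cache.ResourceEventHandlerFuncs{
                AddFunc:    handleNodeAdd,
                UpdateFunc: handleNodeUpdate,
            },
        )

        go controller.Run(make(chan struct{}))

        // The store can now serve reads from the in-memory cache, e.g.:
        //   obj, exists, err := store.GetByKey("minikube")
        return store
    }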

As we have a single controller in our project, using a regular Informer is fine. But if a future project of yours ends up having several controllers for the same object, using a SharedInformer is recommended. Instead of creating multiple regular informers – one per controller – you can register one shared informer and let each controller register its own set of callbacks; you get a shared cache in return, which reduces the memory footprint:
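
A sketch of the shared variant, using the informer factory from client-go (the stub handlers stand in for one controller's callbacks; other controllers would register their own on the same informer):

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func startSharedNodeInformer(clientset *kubernetes.Clientset) {
        // One factory per client; all controllers share its caches.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        nodeInformer := factory.Core().V1().Nodes().Informer()

        // Each controller registers its own set of callbacks.
        nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc:    func(obj interface{}) { /* controller-specific logic */ },
            UpdateFunc: func(oldObj, newObj interface{}) { /* controller-specific logic */ },
        })

        stop := make(chan struct{})
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
    }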

Deployment time

Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in outside-of-cluster mode:
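
For instance (the binary name and the -kubeconfig flag follow the assumptions made in the client sketch above):

    go build -o test123 .
    ./test123 -kubeconfig=$HOME/.kube/config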

To see the message output change, deploy a pod using an image which is not present on the node yet.

Once the basic functionality is tested, it is time to try running it in in-cluster mode. For that, we have to create the image first. Define the Dockerfile:
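
A minimal sketch of such a Dockerfile, assuming the Go binary is statically linked and named test123 (both assumptions):

    # A statically linked Go binary can run from an empty base image.
    FROM scratch
    COPY test123 /test123
    ENTRYPOINT ["/test123"]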

And create the image using docker build. It will generate an image that you can use to deploy the pod in Kubernetes. Now your application can run as a Pod in the Kubernetes cluster. Here is an example of a deployment definition for it:
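
A sketch of what that Deployment might look like (the names and the image tag are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test123
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: test123
      template:
        metadata:
          labels:
            app: test123
        spec:
          containers:
          - name: test123
            image: test123:latest  # the image built above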

So we have:

  • Created a Go project
  • Added the client-go package as a dependency
  • Created a client to talk to the Kubernetes API
  • Defined an Informer that watches node object changes and executes callback functions when they happen
  • Implemented the actual logic in the callback definitions
  • Tested the code by running the binary outside of the cluster, and then deployed it inside the cluster

If you have any comments or questions on the topic, please feel free to share them with me!

 

Alena Prokharchyk

twitter: @lemonjet

github: https://github.com/alena1108


Containers vs. Serverless Computing

October 9, 2017

Serverless computing is a hot topic right now—perhaps even hotter than Docker containers.

Is that because serverless computing is a replacement for containers? Or is it just another popular technology that can be used alongside containers? Read more


Microservices Made Easier Using Istio

August 24, 2017

One of the recent open source initiatives that has caught our interest at Rancher Labs is Istio, the microservices development framework. It's a great technology, combining some of the latest ideas in distributed services architecture in an easy-to-use abstraction. Istio does several things for you. Sometimes referred to as a "service mesh", it has facilities for API authentication/authorization, service routing, service discovery, request monitoring, request rate-limiting, and more. It's made up of a few modular components that can be consumed separately or as a whole.
Read more


Moving Your Monolith: Best Practices and Focus Areas

June 26, 2017

You have a complex monolithic system that is critical to your business. You’ve read articles and would love to move it to a more modern platform using microservices and containers, but you have no idea where to start.

If that sounds like your situation, then this is the article for you. Below, I identify best practices and the areas to focus on as you evolve your monolithic application into a microservices-oriented application.

 

Overview

We all know that net new, greenfield development is ideal, starting with a container-based approach using cloud services. Unfortunately, that is not the day-to-day reality inside most development teams. Most development teams support multiple existing applications that have been around for a few years and need to be refactored to take advantage of modern toolsets and platforms. This is often referred to as brownfield development.

Not all application technology will fit into containers easily. It can always be made to fit, but one has to question if it is worth it. For example, you could lift and shift an entire large-scale application into containers or onto a cloud platform, but you will realize none of the benefits around flexibility or cost containment.

Read more


Focus on Services, Not on Containers

June 7, 2017

Containers may be super cool, but at the end of the day, they’re just another kind of infrastructure. A seasoned developer is probably already familiar with several other kinds of infrastructure and approaches to deploying applications. Another is not really that big of a deal.

However, when the infrastructure creates new possibilities with the way an application is architected—as containers do—that’s a huge deal. That is why the services in a microservice application are far more important than the containerized infrastructure they run on.

Modularity has always been a goal of application architectures, and now that the concept of microservices is possible, how you build those services ends up dictating where they run and how they are deployed. Services are where application functionality meets the user, and where the value your application can provide is realized.

That’s why if you want to make the most of containers, you should be thinking about more than just containers. You have to focus on services, because they’re the really cool thing that containers enable.

 

Services vs. Containers

For conversation’s sake, using services and containers interchangeably is fine—because the ideal use case for a containerized application is one that is deconstructed into services, where each service is deployed as a container (or containers).

However, the tactics cannot be synonymous. Services are an implied infrastructure, but more importantly, an application architecture. When you talk about a service that is part of your application, that service is persistent. You can’t suddenly have an application without a login page or a shopping cart, for example, and expect things to go well.

Containers, on the other hand, are designed to live and die in very short time frames. Ideally, with every deployment or revert, the container is killed as soon as the new deployment is live and the traffic is routed to it. So containers are not persistent. And if the delivery chain is working correctly, that should not matter at all.

Microservices, as both an application and an infrastructure term, does have some unique elements associated with it, which pull the relationship even further apart. Read more

