Tag: kubernetes

Adding custom nodes to your Kubernetes cluster in Rancher 2.0 Tech Preview 2

February 13, 2018

Recently, we announced our second milestone release, Rancher 2.0 Tech Preview 2. It adds the ability to add custom nodes (nodes that are already provisioned with a Linux operating system and Docker), either by running a generated docker run command that launches the rancher/agent container on the node, or by letting Rancher connect to the node over SSH. In this post, we will explore how to automate generating that docker run command.

Warning: this is not a production-ready release yet; don't put your production workloads on it just yet.

Requirements

  • Host running Linux and Docker
  • JSON utility jq installed, to parse API responses
  • sha256sum binary to calculate CA certificate checksum

Start Rancher server

Before we can execute any action, we need to launch the rancher/server container. The image to use for the 2.0 Tech Preview 2 is rancher/server:preview . Another change from 1.6 to 2.0 is that we no longer expose the UI/API on port 8080. Instead, we expose ports 80 and 443, where 80 is redirected to 443 by default. You can start the container as follows:

docker run -d -p 80:80 -p 443:443 rancher/server:preview

If you want the data for this setup to be persistent, you can mount a host volume to /var/lib/rancher as shown below:

docker run -d -p 80:80 -p 443:443 -v /data:/var/lib/rancher rancher/server:preview

Logging in and creating an API key

In Rancher 1.x, authentication was not enabled by default: after launching the rancher/server container, you could access the API/UI without any credentials. In Rancher 2.0, authentication is enabled by default with the username and password admin. After logging in, we get a Bearer token, which allows us to change the password. After changing the password, we create an API key to execute the other requests. The API key is also a Bearer token; we describe it as automation because that is what we will use it for.

Logging in 

# Login
LOGINRESPONSE=`curl -s '' -H 'content-type: application/json' --data-binary '{"username":"admin","password":"admin"}' --insecure`
LOGINTOKEN=`echo $LOGINRESPONSE | jq -r .token`

Changing the password (thisisyournewpassword)

# Change password
curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"currentPassword":"admin","newPassword":"thisisyournewpassword"}' --insecure

Create API key

# Create API key
APIRESPONSE=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $LOGINTOKEN" --data-binary '{"type":"token","description":"automation"}' --insecure`
# Extract and store token
APITOKEN=`echo $APIRESPONSE | jq -r .token`
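As a quick sanity check, the jq extraction pattern used above can be exercised locally against a canned response (the token value here is made up; it requires jq from the requirements list):

```shell
# Fake API response in the same shape the token endpoint returns
APIRESPONSE='{"type":"token","description":"automation","token":"token-abcde:secretvalue"}'
# Extract the token field, exactly as the commands above do
APITOKEN=$(echo "$APIRESPONSE" | jq -r .token)
echo "$APITOKEN"   # prints token-abcde:secretvalue
```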

Creating the cluster

With the newly generated API key, we can create a Cluster. When you create a cluster, you have 3 options:

  • Launch a Cloud Cluster (Google Kubernetes Engine/GKE for now)
  • Create a Cluster (our own Kubernetes installer, Rancher Kubernetes Engine, is used for this)
  • Import an Existing Cluster (if you already have a Kubernetes cluster, you can import it by inserting the kubeconfig file from that cluster)

For this post, we'll create a cluster using Rancher Kubernetes Engine (rke). When creating a cluster, you can either create new nodes directly (provisioned from cloud providers like DigitalOcean or Amazon) or use pre-existing nodes and let Rancher connect to them over SSH with provided credentials. The method we are discussing in this post (adding a node by running the docker run command) only becomes available after the cluster has been created.

You can create the cluster (yournewcluster) using the following commands. As you can see, the only non-default parameter is ignoreDockerVersion (which makes RKE accept a Docker version not yet validated for Kubernetes). The rest of the configuration is left at its defaults, which we will go into in another post. Until then, you can discover the configurable options through the UI.

# Create cluster
CLUSTERRESPONSE=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"cluster","nodes":[],"rancherKubernetesEngineConfig":{"ignoreDockerVersion":true},"name":"yournewcluster"}' --insecure`
# Extract clusterid to use for generating the docker run command
CLUSTERID=`echo $CLUSTERRESPONSE | jq -r .id`

After running this, you should see your new cluster in the UI. The status will be waiting for nodes to provision or a valid configuration, as no nodes have been added yet.

Assembling the docker run command to launch the rancher/agent

The final part of adding the node is launching the rancher/agent container, which joins the node to the cluster. For this to succeed, we need:

  • The agent image that is coupled with the Rancher version
  • The roles for the node (etcd and/or controlplane and/or worker)
  • The address where the rancher/server container can be reached
  • Cluster token which the agent uses to join the cluster
  • Checksum of the CA certificate

The agent image can be retrieved from the settings endpoint in the API:

AGENTIMAGE=`curl -s -H "Authorization: Bearer $APITOKEN" --insecure | jq -r .value`

The roles for the node are up to you. (For this example, we'll be using all three roles):

ROLEFLAGS="--etcd --controlplane --worker"

The address where the rancher/server container can be reached should be self-explanatory; the rancher/agent will connect to that endpoint. We'll assume it is stored in the RANCHERSERVER variable used below (for example, RANCHERSERVER="https://rancher_server_address").


The cluster token can be retrieved from the created cluster. We saved the created clusterid in CLUSTERID , which we can now use to generate a token.

# Generate token (clusterRegistrationToken)
AGENTTOKEN=`curl -s '' -H 'content-type: application/json' -H "Authorization: Bearer $APITOKEN" --data-binary '{"type":"clusterRegistrationToken","clusterId":"'$CLUSTERID'"}' --insecure | jq -r .token`

The generated CA certificate is stored in the API as well, and can be retrieved as shown below. We append sha256sum to generate the checksum we need to join the cluster.

# Retrieve CA certificate and generate checksum
CACHECKSUM=`curl -s -H "Authorization: Bearer $APITOKEN" --insecure | jq -r .value | sha256sum | awk '{ print $1 }'`

All the data needed to join the cluster is now available; we only need to assemble the command.

# Assemble the docker run command
AGENTCOMMAND="docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host $AGENTIMAGE $ROLEFLAGS --server $RANCHERSERVER --token $AGENTTOKEN --ca-checksum $CACHECKSUM"
# Show the command
echo $AGENTCOMMAND

The last command (echo $AGENTCOMMAND) should output something like this:

docker run -d --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --net=host rancher/agent:v2.0.2 --etcd --controlplane --worker --server https://rancher_server_address --token xg2hdr8rwljjbv8r94qhrbzpwbbfnkhphq5vjjs4dfxgmb4wrt9rpq --ca-checksum 3d6f14b44763184519a98697d4a5cc169a409e8dde143edeca38aebc1512c31d

After running this command on a node, you should see it join the cluster and get provisioned by Rancher.

Protip: the tokens can also be used directly for basic authentication, for example:

curl -u $APITOKEN --insecure

Complete GitHub gist for reference

Hopefully this post helped with the first steps of automating your Rancher 2.0 Tech Preview 2 setup. We explored the steps needed to automatically generate the docker run command that adds a node to a cluster. Keep an eye on this blog for other posts regarding Rancher 2.0.

Also, if you have any questions, join our Rancher Users Slack by visiting https://slack.rancher.io and join the #2-0-tech-preview channel. You can also visit our forums to ask questions: https://forums.rancher.com/


CICD Debates: Drone vs Jenkins

January 31, 2018


Jenkins has been the industry-standard CI tool for years. It contains a multitude of functionality, and with almost 1,000 plugins in its ecosystem, it can be daunting to those who appreciate simplicity. Jenkins also came up in a world before containers, though it does fit nicely into that environment. This means there is not a particular focus on the things that make containers great, though with the inclusion of Blue Ocean and pipelines, that is rapidly changing.

Drone is an open source CI tool that wears simple like a badge of honor. It is truly Docker native; meaning that all actions take place within containers. This makes it a perfect fit for a platform like Kubernetes, where launching containers is an easy task.

Both of these tools walk hand in hand with Rancher, which makes standing up a robust Kubernetes cluster an automatic process. I’ve used Rancher 1.6 to deploy a K8s 1.8 cluster on GCE; as simple as can be.

This article will take Drone deployed on Kubernetes (on Rancher), and compare it to Jenkins across three categories:

  1. Platform installation and management
  2. Plugin ecosystem
  3. Pipeline details

In the end, I’ll stack them up side by side and try to give a recommendation. As usually is the case however, there may not be a clear winner. Each tool has its core focus, though by nature there will be overlap.


Before getting started, we need to do a bit of set up. This involves setting up Drone as an authorized Oauth2 app with a Github account. You can see the settings I’ve used here. All of this is contained within the Drone documentation.

There is one gotcha which I encountered setting up Drone. Drone maintains a passive relationship with the source control repository. In this case, this means that it sets up a webhook with Github for notification of events. The default behavior is to build on push and PR merge events. In order for Github to properly notify Drone, the server must be accessible to the world. With other, on-prem SCMs, this would not be the case, but for the example described here it is. I’ve set up my Rancher server on GCE, so that it is reachable from Github.com.

Drone installs from a container through a set of deployment files, just like any other Kubernetes app. I’ve adapted the deployment files found in this repo. Within the config map spec file, there are several values we need to change. Namely, we need to set the Github-related values to ones specific to our account. We’ll take the client secret and client key from the setup steps and place them into this file, as well as the username of the authorized user. Within the drone-secret file, we can place our Github password in the appropriate slot.

This is a major departure from the way Jenkins interacts with source code. In Jenkins, each job can define its relationship with source control independent of another job. This allows you to pull source from a variety of different repositories, including Github, Gitlab, svn, and others. As of now, Drone only supports git-based repos. A full list is available in the documentation, but all of the most popular choices for git-based development are supported.

We also can't forget our Kubernetes cluster! Rancher makes it incredibly easy to launch and manage one. I've chosen to use the latest stable version of Rancher, 1.6. We could've used the new Rancher 2.0 tech preview, but constructing this guide worked best with the stable version. However, the information and installation steps should be the same, so if you'd like to try it out with the newer Rancher, go ahead!

Task 1 – Installation and Management

Launching Drone on Kubernetes and Rancher is as simple as copy and paste. I used the default K8s dashboard to launch the files. Uploading them one by one, starting with the namespace and config files, will get the ball rolling. [Here are some of the deployment files I used](https://github.com/appleboy/drone-on-kubernetes/tree/master/gke). I pulled from this repository and made my own local edits. This repo is owned by a frequent Drone contributor, and includes instructions on how to launch on GCE, as well as AWS. The Kubernetes yaml files are the only things we need here. To replicate, just edit the ConfigMap file with your specific values. Check out one of my files below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-server
  namespace: drone
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
      - image: drone/drone:0.8
        imagePullPolicy: Always
        name: drone-server
        ports:
        - containerPort: 8000
          protocol: TCP
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        # Persist our configs in an SQLite DB in here
        - name: drone-server-sqlite-db
          mountPath: /var/lib/drone
        resources:
          requests:
            cpu: 40m
            memory: 32Mi
        env:
        - name: DRONE_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.host
        - name: DRONE_OPEN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.open
        - name: DRONE_DATABASE_DRIVER
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.driver
        - name: DRONE_DATABASE_DATASOURCE
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.datasource
        - name: DRONE_SECRET
          valueFrom:
            secretKeyRef:
              name: drone-secrets
              key: server.secret
        - name: DRONE_ADMIN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.admin
        - name: DRONE_GITHUB
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github
        - name: DRONE_GITHUB_CLIENT
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.client
        - name: DRONE_GITHUB_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.secret
        - name: DRONE_DEBUG
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.debug
      volumes:
      - name: drone-server-sqlite-db
        hostPath:
          path: /var/lib/k8s/drone
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock

Jenkins can be launched in much the same way. Because it is deployable in a Docker container, you can construct a similar deployment file and launch on Kubernetes. Here’s an example below. This file was taken from the GCE examples repo for the Jenkins CI server.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: jenkins/jenkins:2.67
        ports:
        - containerPort: 8080
        - containerPort: 50000
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 2
          failureThreshold: 5
        env:
        - name: JENKINS_OPTS
          valueFrom:
            secretKeyRef:
              name: jenkins
              key: options
        - name: JAVA_OPTS
          value: '-Xmx1400m'
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
        resources:
          limits:
            cpu: 500m
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 1500Mi
      volumes:
      - name: jenkins-home
        gcePersistentDisk:
          pdName: jenkins-home
          fsType: ext4
          partition: 1

Launching Jenkins is similarly easy. Because of the simplicity of Docker and Rancher, all you need to do is take the set of deployment files and paste them into the dashboard. My preferred way is using the Kubernetes dashboard for all management purposes. From here, I can upload the Jenkins files one by one to get the server up and running.

Managing the Drone server comes down to the configuration passed at launch. Hooking up to Github involved adding OAuth2 tokens, as well as (in my case) a username and password to access a repository. Changing this would involve either granting organization access through Github, or relaunching the server with new credentials. This could possibly hamper development, as it means that Drone cannot handle more than one source provider. As mentioned above, Jenkins allows for any number of source repos, with the caveat that each job only uses one.

Task 2 – Plugins

Plugins in Drone are very simple to configure and manage. In fact, there isn’t much you need to do to get one up and running. The ecosystem is considerably smaller than that for Jenkins, but there are still plugins for almost every major tool available. There are plugins for most major cloud providers, as well as integrations with popular source control repos. As mentioned before, containers in Drone are first class citizens. This means that each plugin and executed task is also a container.

Jenkins is the undisputed king of plugins. If you can think of a task, there is probably a plugin to accomplish it. At last glance, there are almost 1,000 plugins available. The downside is that it can sometimes be difficult to determine, out of a selection of similar-looking plugins, which one is the best choice for what you're trying to accomplish.

There are Docker plugins for building and pushing images, AWS and K8s plugins for deploying to clusters, and various others. Because of the comparative youth of the Drone platform, there are far fewer plugins available than for Jenkins. That does not, however, take away from their effectiveness and ease of use. A simple stanza in a drone.yml file will automatically download, configure, and run a selected plugin, with no other input needed. And remember, because of Drone's relationship with containers, each plugin is maintained within an image. There are no extra dependencies to manage; if the plugin creator has done their job correctly, everything will be contained within that container.

When I built the drone.yml file for the simple node app, adding a Docker plugin was a breeze. There were only a few lines needed, and the image was built and pushed to a Dockerhub repo of my choosing. In the next section, you can see the section labeled docker. This stanza is all that’s needed to configure and run the plugin to build and push the Docker image.

Task 3 – Pipelines

The last task is the bread and butter of any CI system. Drone and Jenkins are both designed to build apps. Originally, Jenkins was targeted towards Java apps, but over the years the scope has expanded to include anything you could compile and execute as code. Jenkins even handles tasks beyond builds, such as pipelines and cron-like scheduled jobs. However, it is not container native, though it does fit very well into the container ecosystem. Here is the drone.yml for the simple Node.js app:

pipeline:
  build:
    image: node:alpine
    commands:
      - npm install
      - npm run test
      - npm run build
  docker:
    image: plugins/docker
    dockerfile: Dockerfile
    repo: badamsbb/node-example
    tags: v1

For comparison, here’s a Jenkinsfile for the same app.

#!/usr/bin/env groovy
pipeline {
 agent {
  node {
   label 'docker'
  }
 }
 tools {
  nodejs 'node8.4.0'
 }
 stages {
  stage ('Checkout Code') {
   steps {
    checkout scm
   }
  }
  stage ('Verify Tools') {
   steps {
    parallel (
     node: {
      sh "npm -v"
     },
     docker: {
      sh "docker -v"
     }
    )
   }
  }
  stage ('Build app') {
   steps {
    sh "npm prune"
    sh "npm install"
   }
  }
  stage ('Test') {
   steps {
    sh "npm test"
   }
  }
  stage ('Build container') {
   steps {
    sh "docker build -t badamsbb/node-example:latest ."
    sh "docker tag badamsbb/node-example:latest badamsbb/node-example:v${env.BUILD_ID}"
   }
  }
  stage ('Verify') {
   steps {
    input "Everything good?"
   }
  }
  stage ('Clean') {
   steps {
    sh "npm prune"
    sh "rm -rf node_modules"
   }
  }
 }
}

While this example is verbose for the sake of explanation, you can see that accomplishing the same goal, a built Docker image, can be more involved than with Drone. In addition, what's not pictured is the setup of the interaction between Jenkins and Docker. Because Jenkins is not Docker native, agents must be configured ahead of time to properly interact with the Docker daemon. This can be confusing to some, which is where Drone comes out ahead. It is already running on top of Docker; the same Docker is used to run its tasks.


Drone is a wonderful piece of CI software. It has quickly become a very popular choice for teams wanting a simple, container-native CI solution. Its simplicity is elegant, though as it is still in pre-release status, there is much more to come. Adventurous engineers may be willing to give it a shot in production, and indeed many have. In my opinion, it is best suited to smaller teams looking to get up and running quickly; its small footprint and ease of use lend themselves readily to that kind of development.

However, Jenkins is the tried and true powerhouse of the CI community. It takes a lot to topple the king, especially one so entrenched in his position. Jenkins has been very successful at adapting to the market, with Blue Ocean and container-based pipelines making strong cases for its staying power. Jenkins can be used by teams of all sizes, but excels at scale. Larger organizations love Jenkins due to its history and numerous integrations. It also has distinct support options: active community support for the open source version, or enterprise-level support through CloudBees. But as with all tools, both Drone and Jenkins have their place within the CI ecosystem.


Brandon Adams
Certified Jenkins Engineer, and Docker enthusiast. I’ve been using Docker since the early days, and love hearing about new applications for the technology. Currently working for a Docker consulting partner in Bethesda, MD.

Using Kubernetes API from Go: Kubecon 2017 session recap

January 19, 2018

Last month I had the great pleasure of attending Kubecon 2017, which took place in Austin, TX. The conference was super informative, and deciding which sessions to join was really hard, as all of them were great. But what deserves special recognition is how well the organizers respected the attendees' diversity of Kubernetes experience. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, tools, and community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.

In this article we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in the Go language using the client-go library.

Kubernetes is a platform

There are many reasons to like Kubernetes. As a user, you appreciate its feature richness, stability, and performance. As a contributor, you find that the Kubernetes open source community is not only large, but approachable and responsive. But what really makes Kubernetes appealing to a third-party developer is its extensibility. The project provides many ways to add new features and extend existing ones without disrupting the main code base. And that's what makes Kubernetes a platform.

Here are some ways to extend Kubernetes:


In the picture, you can see that every Kubernetes cluster component can be extended in a certain way, whether it is the Kubelet or the API server. Today we are going to focus on the "Custom Controller" approach; I'll refer to it as a Kubernetes Controller, or simply a Controller, from now on.

What exactly is a Kubernetes Controller?

The most common definition of a controller is "code that brings the current state of the system to the desired state". But what exactly does that mean? Let's look at the Ingress controller as an example. Ingress is a Kubernetes resource that lets you define external access to services in a cluster, typically over HTTP and usually with load balancing support. But the Kubernetes core code has no ingress implementation; the implementation is covered by third-party controllers that:

  • Watch ingress/services/endpoints resource events (Create/Update/Remove)
  • Program internal or external Load Balancer
  • Update Ingress with the Load Balancer address

The "desired" state of the ingress is an IP address pointing to a functioning load balancer programmed with the rules defined by the user in the Ingress specification. An external ingress controller is responsible for bringing the ingress resource to this state.

The implementation of a controller for the same resource, as well as the way to deploy it, can vary. You can pick the nginx controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your ingress controller outside of the Kubernetes cluster and program F5 as the load balancer. There are no strict rules; Kubernetes is flexible in that way.


There are several ways to get information about a Kubernetes cluster and its resources: the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. Client-go is the most popular library used by tools written in Go; there are clients for many other languages out there (Java, Python, etc). But if you want to write your very first controller, I encourage you to try Go and client-go. Kubernetes is written in Go, and I find it easier to develop a plugin in the same language the main project is written in.

Let's build…

The best way to get familiar with a platform and the tools around it is to write something. Let's start simple and implement a controller that:

  • Monitors Kubernetes nodes
  • Alerts when the storage occupied by images on the node changes

The code source can be found here.

Ground work

Setup the project

As a developer, I like to sneak a peek at the tools my peers use to make their lives easier. Here I'm going to share three favorite tools of mine that are going to help us with our very first project.

  1.  go-skel – a skeleton generator for Go microservices. Just run ./skel.sh test123, and it will create the skeleton for the new Go project test123.
  2.  trash – a Go vendor management tool. There are many Go dependency management tools out there, but trash has proven simple to use and great when it comes to managing transitive dependencies.
  3.  dapper – a tool to wrap any existing build tool in a consistent environment

Add client-go as a dependency

In order to use client-go code, we have to pull it as a dependency to our project. Add it to vendor.conf:
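A vendor.conf entry might look like the following; the exact versions shown here are assumptions and should match your cluster (by convention, the first line names your own project):

```
# your project's package path
github.com/yourname/test123

# dependency and the tag/commit to vendor
k8s.io/client-go v6.0.0
k8s.io/api kubernetes-1.9.0
k8s.io/apimachinery kubernetes-1.9.0
```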

Then run trash. It will automatically pull all the dependencies defined in vendor.conf into the vendor folder of the project. Make sure the client-go version is compatible with the Kubernetes version of your cluster.

Create a client

Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, your application is containerized and deployed as a Kubernetes Pod. That gives you certain perks: you can choose how to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure a healthcheck for it, etc. When your application runs outside of the cluster, you have to manage it yourself. Let's make our tool flexible and support both ways of creating the client, selected by a config flag:

We are going to use the outside-of-cluster mode while debugging the app, as this way you do not have to rebuild the image and redeploy the Kubernetes Pod on every change. Once the app is tested, we can build an image and deploy it in the cluster.

As you can see in the screenshot, the config is built and passed to kubernetes.NewForConfig to generate the client.
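For readers following along in text form, the flag-based client setup can be sketched roughly like this (client-go v6.x-era API; the flag name and error handling here are my own):

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// getClient returns a clientset built either from a kubeconfig file
// (outside-of-cluster mode) or from the mounted service account
// (in-cluster mode).
func getClient(kubeconfig string) (*kubernetes.Clientset, error) {
	var config *rest.Config
	var err error
	if kubeconfig != "" {
		// Outside the cluster: read credentials from a kubeconfig file
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
	} else {
		// Inside the cluster: use the mounted service account token
		config, err = rest.InClusterConfig()
	}
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig; leave empty when running inside the cluster")
	flag.Parse()
	clientset, err := getClient(*kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	_ = clientset // used in the next steps
}
```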

Play with basic CRUDs

For our tool, we need to monitor Nodes. It is a good idea to get familiar with how to do CRUD operations using client-go before implementing the actual logic:

The screenshot above displays how to:

  • List nodes named "minikube", which is achieved by passing a FieldSelector filter to the command.
  • Update the node with a new annotation.
  • Delete the node with gracePeriod=10 seconds, meaning the removal will happen only 10 seconds after the command is issued.

All of that is done using the clientset we created in the previous step.

We will also need information about the images on the node; it can be retrieved by accessing the corresponding status field:
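In text form, those calls look roughly like this (client-go v6.x-era signatures; newer releases also take a context.Context, and the annotation key is just an example):

```go
// Assumes the clientset from the previous step, plus this import:
//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// List nodes named "minikube" by passing a FieldSelector filter
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
	FieldSelector: "metadata.name=minikube",
})
if err != nil {
	log.Fatal(err)
}

// Update the node with a new annotation
node := &nodes.Items[0]
if node.Annotations == nil {
	node.Annotations = map[string]string{}
}
node.Annotations["example.com/owner"] = "me"
if _, err := clientset.CoreV1().Nodes().Update(node); err != nil {
	log.Fatal(err)
}

// Delete the node with a 10 second grace period
grace := int64(10)
err = clientset.CoreV1().Nodes().Delete("minikube",
	&metav1.DeleteOptions{GracePeriodSeconds: &grace})

// The images present on the node are exposed in its status
var total int64
for _, image := range node.Status.Images {
	total += image.SizeBytes
}
```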

Watch/Notify using Informer

Now we know how to fetch nodes from the Kubernetes APIs and read image information from them. How do we monitor changes to the images' size? The simplest way would be to periodically poll the nodes, calculate the current image storage usage, and compare it with the result from the previous poll. The downside is that we execute a list call to fetch all the nodes whether or not they changed, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node changes, and only then run our logic. That's where the client-go Informer comes to the rescue.

In this example, we create an Informer for the Node object by passing: a watchList instruction on how to monitor Nodes, the object type api.Node, and 30 seconds as a resync period instructing the informer to periodically poll the nodes even when nothing changed, a nice fallback in case an update event gets dropped for some reason. As the last argument, we pass two callback functions, handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic triggered on node changes: finding out whether the storage occupied by images on the node has changed. NewInformer gives back two objects: a controller and a store. Once the controller is started, the watch on node.update and node.add starts, and the callback functions get called. The store is an in-memory cache which gets updated by the informer, and you can fetch the node object from the cache instead of calling the Kubernetes APIs directly:
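A sketch of that informer wiring (package paths per client-go; the post's api.Node corresponds to v1.Node in current client-go vendoring, and the callback bodies are left to the storage-comparison logic described above):

```go
// Assumes these imports:
//   v1 "k8s.io/api/core/v1"
//   "k8s.io/apimachinery/pkg/fields"
//   "k8s.io/client-go/tools/cache"

// Instruction on how to list and watch Node objects
watchList := cache.NewListWatchFromClient(
	clientset.CoreV1().RESTClient(), "nodes", v1.NamespaceAll, fields.Everything())

// NewInformer returns the in-memory store and the controller driving the watch
store, controller := cache.NewInformer(
	watchList,
	&v1.Node{},
	30*time.Second, // resync period: re-deliver objects even without changes
	cache.ResourceEventHandlerFuncs{
		AddFunc:    handleNodeAdd,    // func(obj interface{})
		UpdateFunc: handleNodeUpdate, // func(oldObj, newObj interface{})
	},
)

// Start watching; callbacks fire on node add and update events
stop := make(chan struct{})
go controller.Run(stop)

// Later: read from the cache instead of calling the API server directly.
// Node objects are keyed by name.
obj, exists, err := store.GetByKey("minikube")
```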

As we have a single controller in our project, a regular Informer is good enough. But if a future project ends up having several controllers for the same object, using a SharedInformer is recommended: instead of creating multiple regular informers, one per controller, you register one SharedInformer and let each controller register its own set of callbacks against a shared cache, which reduces the memory footprint:

Deployment time

Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in out-of-cluster mode:

To change the message output, deploy a pod using an image which is not yet present on the node.

Once the basic functionality is tested, it is time to try running it in cluster mode. For that, we have to create the image first. Define the Dockerfile:
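A minimal Dockerfile for a statically built Go binary could look like this (the binary name and base image are assumptions, not the ones from the original post):

```dockerfile
FROM alpine:3.7
# CA certificates so the controller can talk to the API server over TLS
RUN apk add --no-cache ca-certificates
COPY controller /usr/bin/controller
ENTRYPOINT ["/usr/bin/controller"]
```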

Then create an image using docker build . It will generate an image that you can use to deploy the pod in Kubernetes. Now your application can run as a Pod in a Kubernetes cluster. Here is an example of a deployment definition, and in the screenshot above I'm using it to deploy our app:

So we have:

  • Created a Go project
  • Added the client-go package dependencies to it
  • Created a client to talk to the Kubernetes API
  • Defined an Informer that watches node object changes and executes a callback function when they happen
  • Implemented the actual logic in the callback definition
  • Tested the code by running the binary outside of the cluster, and then deployed it inside the cluster

If you have any comments or questions on the topic, please feel free to share them with me!


Alena Prokharchyk

twitter: @lemonjet

github: https://github.com/alena1108

Canonical Announces Cloud Native Platform, Powered by Rancher

December 5, 2017

Partnership Combines Rancher 2.0 with Canonical Kubernetes and Leading Cloud OS, Ubuntu

Today, we joined Canonical in announcing the Canonical Cloud Native Platform, a new offering that provides complete support and management for Kubernetes in the Enterprise.  The Cloud Native Platform combines Rancher 2.0 container management software with Canonical Ubuntu and Ubuntu Kubernetes, and will be available when Rancher 2.0 launches next spring. Read more

Rancher 2.0 to Work with Amazon EKS

November 29, 2017

Today, Amazon announced a managed Kubernetes service called Elastic Container Service for Kubernetes (EKS).  This means that all three major cloud providers—AWS, Azure, and GCP—now offer managed Kubernetes services. This is great news for Kubernetes users. Even though users always have the option to stand up their own Kubernetes clusters, and new tools like Rancher Kubernetes Engine (RKE) make that process even easier, cloud-managed Kubernetes installations should be the best choice for the majority of Kubernetes users. Read more
