Rancher Blog

Continuous Delivery of Everything with Rancher, Drone, and Terraform

August 16, 2017

It’s 8:00 PM. I just deployed to production, but nothing’s working. Oh, wait. The production Kinesis stream doesn’t exist, because the CloudFormation template for production wasn’t updated. Okay, fix that. 9:00 PM. Redeploy. Still broken. Oh, wait. The production config file wasn’t updated to use the new database. Okay, fix that. Finally, it works, and it’s time to go home.

Ever been there? How about the late night when your provisioning scripts work for updating existing servers, but not for creating a brand-new environment? Or a manual deployment step missing from the task list? Or a config file pointing to a resource from another environment?

Each of these problems stems from separating the activity of provisioning infrastructure from that of deploying software, whether by choice or by limitation of tooling. The purpose of deploying is to let customers benefit from added value, or to validate a business hypothesis. Accomplishing either requires both infrastructure and software, and the two normally change together. Thus, a deployment can be defined as the combination of two reconciliations (sketched in the pipeline below):

  • reconciling the infrastructure needed with the infrastructure that already exists; and
  • reconciling the software that we want to run with the software that is already running.
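
To make the idea concrete, here is a minimal sketch of how both reconciliations can live in one Drone pipeline. The step names, images, and deploy script below are hypothetical placeholders, not the exact setup from the article:

    pipeline:
      infrastructure:
        image: hashicorp/terraform
        commands:
          - terraform init
          - terraform plan -out=release.tfplan   # diff desired vs. existing infrastructure
          - terraform apply release.tfplan       # reconcile the difference
      software:
        image: alpine
        commands:
          - ./deploy.sh   # hypothetical script that upgrades the running stack in Rancher

Because both steps run on every push, infrastructure changes ride along with the code that needs them, which is exactly what prevents the missing-Kinesis-stream scenario above.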

With Rancher, Terraform, and Drone, you can build a continuous delivery pipeline that lets you deploy this way. Let’s look at a sample system:

Read more


Load-Balancing in Kubernetes

August 14, 2017

Kubernetes is the container orchestration system of choice for many enterprise deployments. That’s a tribute to its reliability, flexibility, and broad range of features. In this post, we’re going to take a closer look at how Kubernetes handles a very common and very necessary job: load balancing. Load balancing is a relatively straightforward task in many non-container environments (i.e., balancing between servers), but it involves a bit of special handling when it comes to containers.

Managing Containers

To understand Kubernetes load balancing, you first have to understand how Kubernetes organizes containers.

Since containers typically perform specific services or sets of services, it makes sense to look at them in terms of the services they provide, rather than as individual instances of a service (i.e., a single container). In essence, this is what Kubernetes does.

Placing Them in Pods

In Kubernetes, the pod serves as a kind of basic, functional unit. A pod is a set of containers, along with their shared volumes. The containers are generally closely related in terms of function and services provided.

Pods that provide the same set of functions are abstracted into sets called services. It is these services that the client of a Kubernetes-based application accesses; the service stands in for the individual pods, which in turn manage access to the containers that make them up, leaving the client insulated from the containers themselves.
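
To make the relationship concrete, here is a minimal sketch of a Service that selects pods by label (the names and labels are hypothetical):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend        # the stable name that clients connect to
    spec:
      selector:
        app: web                # matches every pod carrying the label app: web
      ports:
        - port: 80              # port the service exposes
          targetPort: 8080      # port the containers actually listen on

Pods labeled app: web can come and go, but the Service endpoint stays constant; setting spec.type to LoadBalancer additionally asks the cluster’s cloud provider to provision an external load balancer in front of it.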

Read more


Configuring Kubernetes for Maximum Scalability

August 9, 2017

Kubernetes is designed to address some of the difficulties that are inherent in managing large-scale containerized environments. However, this doesn’t mean Kubernetes can scale in all situations all on its own. There are steps you can and should take to maximize Kubernetes’ ability to scale—and there are important caveats and limitations to keep in mind when scaling Kubernetes. I’ll explain them in this article.

Scale versus Performance

The first thing to understand about scaling a Kubernetes cluster is that there is a tradeoff between scale and performance. For example, Kubernetes 1.6 is designed for use in clusters with up to 5,000 nodes. But 5,000 nodes is not a hard limit; it is merely the recommended maximum. It is possible to exceed 5,000 nodes substantially, but performance begins to drop off as you do.

What this means, more specifically, is this: Kubernetes has defined two service level objectives. The first is to respond to 99% of all API calls in less than one second. The second is to start 99% of pods in less than five seconds. Although these objectives do not constitute a comprehensive set of performance metrics, they provide a good baseline for evaluating general cluster performance. According to the Kubernetes project, clusters with more than 5,000 nodes may not be able to achieve these service level objectives.

So, keep in mind that beyond a certain point, you may have to sacrifice performance in order to gain scalability in Kubernetes. Maybe this sacrifice is worth it to you, and maybe it’s not, depending on your deployment scenario.

Quotas

One of the main issues that you are likely to encounter when setting up a really large Kubernetes cluster is quota limitations. This is especially true for cloud-based nodes, since cloud service providers commonly impose quota limits.

Read more


Comparing Kubernetes and Docker Swarm

August 7, 2017

For teams building and deploying containerized applications using Docker, selecting the right orchestration engine can be a challenge.  The decision affects not only deployment and management, but how applications are architected as well.  DevOps teams need to think about details like how data is persisted, how containerized services communicate with one another, load balancing, service discovery, packaging and more.  It turns out that the choice of orchestration engine is critical to all these areas.

While Rancher has the nice property that it can support multiple orchestration engines concurrently, choosing the right solution is still important. Rather than attempting to boil the ocean by looking at many orchestrators, we chose to look at the two most likely to be on the short list for most organizations: Kubernetes and Docker Swarm.

Evolving at a rapid clip

To say these frameworks are evolving quickly is an understatement. In just the past year, there have been four major releases of Docker (1.12, 1.13, 17.03, and 17.06) with dozens of new features and a wholesale change to the Swarm architecture. Kubernetes has been evolving at an even more frenetic pace. Since Kubernetes 1.3 was introduced in July of 2016, there have been four additional major releases and no fewer than a dozen minor releases. Kubernetes is at version 1.7.2 at the time of this writing, with 1.8.0 now in alpha 2. Check out the Kubernetes changelog to get a sense of the pace of development.

Comparing Kubernetes and Docker Swarm is a little like trying to compare two rocket ships speeding along on separate trajectories. By the time you catch up with one and get close enough to see what’s happening, the other is in a whole different place!

Points of comparison

Despite the challenges posed by their rapid evolution, we decided to take a crack at comparing Swarm and Kubernetes in some detail, taking a fresh look at the new capabilities in each solution. At a high level, the two solutions do broadly similar things, but they differ substantially in their implementation. We took both solutions out for a test drive (using Rancher running in AWS), got into the weeds, and compared them systematically in these areas:

  • Architecture
  • User experience
  • Ease of use
  • Networking model
  • Storage management
  • Scheduling
  • Service discovery
  • Load balancing
  • Healthchecks
  • Scalability

Lots for DevOps teams to ponder

Both Swarm and Kubernetes are impressive, capable solutions. Depending on its needs, an organization could reasonably choose either. If you are new to one solution or the other, understanding their relative strengths and weaknesses, and the differences in how they are implemented, can help you make a more informed decision.

Swarm is impressive for its simplicity and seamless integration with Docker. For those experienced with Docker, the move to Swarm is an easy one. Swarm’s new DAB format for multi-host, multi-service applications extends naturally from docker-compose, and the Swarm command set is now part of Docker Engine, so administrators face a minimal learning curve.
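
For instance, a compose-format stack file (the service definition here is a hypothetical example) can be handed straight to a Swarm cluster, and aside from the deploy section it reads like an ordinary docker-compose file:

    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "80:80"
        deploy:
          replicas: 3              # Swarm schedules three copies across the cluster
          restart_policy:
            condition: on-failure  # reschedule a replica if its container dies

Running docker stack deploy -c stack.yml myapp against a Swarm manager creates the services, and running it again reconciles them with any changes to the file.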

Customers considering larger, more complex deployments will want to look at Kubernetes. Docker users will need to invest a little more time to get familiar with it, but even if you don’t use all the features out of the gate, the features are there for good reason. Kubernetes has its own command set, API, and architecture, all distinct from Docker’s. For Kubernetes, the watchword is flexibility. Kubernetes is extensible and configurable and can be deployed in a variety of ways. It introduces concepts like Pods, Replica Sets, and Stateful Sets that are not found in Swarm, along with features like autoscaling. While Kubernetes is a little more complex to learn and master, for users with more sophisticated requirements it has the potential to simplify management by reducing the need for ongoing manual intervention.
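
As one example of the autoscaling mentioned above, a HorizontalPodAutoscaler grows and shrinks a Deployment based on CPU load; Swarm has no built-in equivalent. This is a minimal sketch with hypothetical names and thresholds:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-autoscaler
    spec:
      scaleTargetRef:              # the workload to scale
        apiVersion: apps/v1beta1   # Deployment API group as of Kubernetes 1.7
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 80   # add pods when average CPU tops 80%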

About the whitepaper

Our comparison was done using Rancher’s container management framework to deploy separate environments for Docker Swarm and Kubernetes. Rather than focus on Rancher, however, the comparisons are made at the level of Swarm and Kubernetes themselves. Whether you are using Rancher or a different container management framework, the observations should still be useful.

Included in the paper are:

  • Detailed comparisons between Kubernetes and Swarm
  • Considerations when deploying both orchestrators in Rancher
  • Considerations for application designers
  • High-level guidance on which orchestrator to consider, and when

Download the free whitepaper for an up-to-date look at Kubernetes and Docker Swarm.

As always, we appreciate your thoughts and feedback!


Winning Hackathons, DevOps-Style!

August 3, 2017

Recently, I moved to New York City. As a new resident, I decided to take part in the NYC DeveloperWeek hackathon, where our team won the NetApp challenge. In this post, I’ll walk through the product we put together, and share how we built a CI/CD pipeline for quick, iterative product development under tight constraints.

The Problem: Have you ever lived or worked in a building where it’s a pain to configure the buzzer to forward to multiple roommates or coworkers? Imagine that a friend arrives and buzzes your number, which is set to forward to your roommate, who is visiting South Africa and has no cell service. If you’re running late, your friend is just stuck outside.

The Product: We built a PBX-style application that integrates with Zang, forwards a buzzer call to multiple numbers, and even allows your friend to enter a PIN on their phone to gain entry.

The Constraint: For the hackathon, we had to use hardware that had already been set up and allocated.

Building our Hackathon CI/CD Pipeline

For the competition, we knew we wanted our builds to be scalable from the start, and that each deployment would snapshot our entire data environment pre-deployment using NetApp ONTAP (NetApp was a sponsor of the hackathon, and a really nice group of folks). If any deployment had an issue, we could simply and quickly roll back.
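
In pipeline terms, the snapshot is simply a step that runs before the deploy step. Here is a rough sketch in Drone-style syntax; the wrapper scripts are hypothetical placeholders for the actual ONTAP snapshot call and stack upgrade, not the code we ran:

    pipeline:
      snapshot:
        image: alpine
        commands:
          - ./snapshot.sh   # hypothetical wrapper that triggers an ONTAP volume snapshot
      deploy:
        image: alpine
        commands:
          - ./deploy.sh     # roll the stack forward; if it fails, restore the snapshot

If the deploy step fails, the environment can be restored from the snapshot taken moments earlier.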

Fortunately, I’m a firm believer in building stateless, Docker-container-ready, rebuildable architecture; I can’t go back to the world that existed before: no real CI/CD, lots of SSH-ing into other systems, and SCP-ing data from one place to another. Thankfully, today we have tools and strategies that can help.

Here’s what we used:

Read more