Why Kubernetes?

Kubernetes delivers real benefits for development and operations teams. Releases ship faster. Failed components recover automatically. Uptime stays high. Teams spend less time on manual operations. Infrastructure costs go down. Users stay happy.

Kubernetes is a tool, and like any tool, the decision to use it comes down to the benefits it provides.

Organizational Benefits

The technical details are interesting, but the bottom line matters most. You want maximum performance at the lowest cost, and you're always looking for the point where an investment delivers the greatest impact.

What will Kubernetes do for you?

Kubernetes enables your business to adapt to changes in the environment, pushing out changes to your application as quickly as your teams can develop and test them. It makes it easy to design a system where changes go out without any downtime.
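As a rough sketch of how zero-downtime changes look in practice (the names and image tag below are placeholders, not from this article), a Deployment can declare a rolling-update strategy so that new versions replace old pods gradually while the application keeps serving traffic:

    # Sketch only: name, labels, and image are illustrative placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never drop below the desired replica count during an update
          maxSurge: 1         # bring up one extra pod at a time with the new version
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.2.3   # changing this tag triggers the rollout

Pushing a new image tag and re-applying the manifest rolls the change out with no downtime; old pods are removed only after their replacements become available.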


More info

How Qordoba increased deployment velocity by 60%

Kubernetes at Box: Microservices at Maximum Velocity

Kubernetes is finding a home in businesses of all kinds, including telecom and service providers, retail and manufacturing, content providers, banks, financial services, media and entertainment producers, government, and healthcare. All of these businesses have found that Kubernetes makes their business more successful.


More info

71% of Fortune 100 companies use containers, and more than 50% use Kubernetes

How Kubernetes transforms your business

Kubernetes is open source software, and in March 2018 it became the second-largest open source project in the world after Linux itself. As of October 2018 it had 24,500 contributors across more than 100 companies. As long as your business uses the open source version of Kubernetes governed by the Cloud Native Computing Foundation, and not a feature-restricted fork, your projects will never be left stranded by a framework abandoned by its creators.


More info

Four Ways To Avoid Vendor Lock-In When Moving To The Public Cloud

Multi-Cloud Kubernetes with Triton

Docker allows your team to package an application once and run it consistently anywhere that Docker runs. Kubernetes carries this concept further, wiring containers together into applications and providing inter-container communication, load balancing, monitoring, metrics, logging, and other services, so those applications run the same way in any Kubernetes environment. Where cloud applications today are often hardwired to services provided by a particular cloud vendor, Kubernetes abstracts those services into pluggable modules. Applications use these modules to provision and consume resources such as storage without caring where the resource lives or how it was provisioned. This gives you the freedom to move between vendors and providers as your business requires.
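As a minimal, hypothetical sketch of that pluggability (the claim name, size, and image are placeholders), an application asks for storage through a PersistentVolumeClaim and never names the cloud or storage system that fulfills it:

    # Sketch only: names and sizes are illustrative.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi            # how much storage is needed, not where it comes from
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data    # the cluster's storage class decides how this is provisioned

The same manifest works on any cluster whose storage class can satisfy the request, whether that is backed by a cloud block store, a SAN, or local disks.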


More info

Achieving Application Portability with Kubernetes and Cloud Native Design

Building application portability with Kubernetes at Software Motor Company

Kubernetes maximizes your investment in infrastructure, whether on-premises or in the cloud. If you run applications in a hosted Kubernetes service from AWS, GCP, or Azure, you start with a smaller footprint and benefit from the cloud model of paying for what you use. Resource requests and scheduling let Kubernetes pack applications onto nodes at near-maximum density, and if an application needs more resources, Kubernetes can automatically scale the workload or the underlying infrastructure up or down to meet the need. You pay only for what you use, and Kubernetes works to keep that amount right-sized at any given moment.
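Density and right-sizing start with resource requests and limits on each container; the values in this hypothetical snippet are illustrative, not recommendations:

    # Sketch only: the image and resource numbers are placeholders.
    apiVersion: v1
    kind: Pod
    metadata:
      name: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:2.0
          resources:
            requests:
              cpu: 250m        # what the scheduler reserves when packing pods onto nodes
              memory: 256Mi
            limits:
              cpu: 500m        # CPU usage beyond this is throttled
              memory: 512Mi    # exceeding this gets the container OOM-killed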


More info

Kubernetes Autoscaling 101

How containers cut server costs at the Financial Times by 80 percent

User Benefits

You're a developer or an operations engineer. You want stability, but you also need agility. You're looking for a solution that automates routine tasks while still following the instructions set by the humans who control it.

What will Kubernetes do for you?

Kubernetes is driven by its API. Your CI/CD system can communicate with Kubernetes via its REST and CLI interfaces to carry out actions within the cluster. You can build staging environments. You can automatically deploy new releases. You can perform staged rollouts with atomic versioning and easy rollbacks if issues arise. You can tear down environments when they're no longer needed. Whether you use Jenkins, GitLab, CircleCI, Travis CI, or the CI/CD system of tomorrow, Kubernetes integrates with it and ensures that tasks are executed the same way every time, without human error.
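As one hedged illustration (the job name, paths, and image are assumptions, and the pipeline presumes cluster credentials are already configured for the runner), a GitLab CI job can apply manifests and wait for the rollout to complete:

    # .gitlab-ci.yml -- sketch only; adapt stages, image, and paths to your pipeline.
    stages:
      - deploy

    deploy_to_staging:
      stage: deploy
      image: bitnami/kubectl:latest             # any image that provides kubectl
      script:
        - kubectl apply -f k8s/                 # push the desired state to the cluster
        - kubectl rollout status deployment/web --timeout=120s   # fail the job if the rollout stalls
      environment:
        name: staging

Rolling back is another API call away, for example kubectl rollout undo deployment/web, which any of the CI systems above can issue just as easily.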


More info

CI/CD for Kubernetes

State of Cloud Native CI/CD Tools for Kubernetes

Kubernetes monitors the containers running within it, performing readiness and health checks via TCP, HTTP, or by running a command within the container. Only healthy resources receive traffic, and those that fail health checks for too long will be restarted by Kubernetes or moved to other nodes in the cluster.
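Those checks are declared per container; in this hypothetical sketch (the image, port, path, and timings are placeholders), readiness gates traffic and liveness triggers restarts:

    # Sketch only: image, port, path, and timings are illustrative.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.3
          ports:
            - containerPort: 8080
          readinessProbe:            # only pods passing this receive traffic
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:             # repeated failures here cause a restart
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20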


More info

Running Scalable Workloads on Kubernetes

Databases on Kubernetes – How to Recover from Failures

Pod Disruptions in Kubernetes

Gone are the days of ramping up server capacity in anticipation of a surge of traffic. Kubernetes is ready for Oprah, ready for a viral news story, ready for anything. It scales by deploying more copies of the pods that run your application, or it can allocate more host resources to applications where spawning more doesn't make sense. It can even scale the cluster size up if you deploy more resources than the current cluster can safely handle, or down when those resources are no longer needed.
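The pod-level half of that story is the Horizontal Pod Autoscaler; the target and thresholds in this sketch are illustrative:

    # Sketch only: the target deployment and thresholds are placeholders.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                          # the workload to scale
      minReplicas: 2
      maxReplicas: 20
      targetCPUUtilizationPercentage: 70   # add pods when average CPU rises above 70%

Scaling the nodes themselves is handled separately, typically by the cluster autoscaler your provider offers.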


More info

Kubernetes Autoscaling Explained

Kubernetes Autoscaling 101

Kubernetes turns a promise of cloud computing into a reality. You don't need to know, care, or pay attention to where resources are running. You can control scheduling if you wish, but Kubernetes makes everything available by default. It handles the internal wiring between pod replicas, the common IP address that load balances across them, and the external addresses that make those resources accessible to the world. It adds and removes resources to keep applications available as the number of replicas grows or shrinks, or as application failures occur. When you deploy a microservice-style application into a Kubernetes cluster, it builds and maintains all of the routing and communication paths for you, no matter which node your application lands on.
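That "common IP address that load balances" is a Service; a minimal sketch (the names, labels, and ports are placeholders) looks like this:

    # Sketch only: names, labels, and ports are illustrative.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer     # also request an external address from the provider
      selector:
        app: web             # route to every ready pod carrying this label
      ports:
        - port: 80           # stable port on the Service's cluster IP
          targetPort: 8080   # port the pods actually listen on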


More info

Getting started with microservices and Kubernetes

Kubernetes and microservices: A developers’ movement to make the web faster, stable, and more open

The Kubernetes model is declarative. You write instructions in YAML that describe the desired state of the cluster, and Kubernetes does whatever is needed to maintain that state. Whenever the actual state drifts from the desired state, or you change the desired state, such as asking for more or fewer pods for your application, it recognizes the difference and carries out the actions needed to close the gap. There are no surprises: you tell it what the cluster should look like, and it works to reach that state and hold it until told otherwise.
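Concretely, the desired state is just a manifest you apply; this minimal, hypothetical example declares how many workers should exist rather than how to start them:

    # desired-state.yaml -- sketch only; names, counts, and image are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: worker
    spec:
      replicas: 5                # the state you want, not the steps to get there
      selector:
        matchLabels:
          app: worker
      template:
        metadata:
          labels:
            app: worker
        spec:
          containers:
            - name: worker
              image: registry.example.com/worker:3.1
    # Apply with: kubectl apply -f desired-state.yaml
    # Edit replicas and apply again; Kubernetes reconciles the cluster to match.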


More info

Declarative Management of Kubernetes Objects Using Configuration Files

Kubernetes Deployments

The evolution of server setups

From standard deployments, to container deployments, to Kubernetes.

Kubernetes is everywhere

You can run it on bare metal servers or in a private cloud. You can run it on the public cloud. You can run applications within a hosted Kubernetes service. Your options are almost limitless, and this flexibility makes it a buyer's market.

Hosted Solutions

Spin up a Kubernetes cluster, and within a few minutes you're deploying applications into it. Providers like Google and Amazon manage the Kubernetes control plane and backend components for you, and they integrate external services such as storage, DNS, and load balancing into the offering.

Platform Providers

Some companies, like Red Hat and Pivotal, have Kubernetes offerings that integrate tightly with their other products. These solutions offer the convenience of buying from a single vendor, but they often carry heavy license fees and include features that only work with that vendor's products.

Self Managed

It's easy to deploy Kubernetes yourself on bare metal, in a private cloud, or in a public cloud. It runs on anything from a single virtual machine to clusters at Google's scale, so don't let the range of options overwhelm you.

Multi-Cloud Management

Rancher provides multi-cloud management by integrating with external authentication providers like Active Directory, Azure AD, Ping, GitHub, LDAP, and others. You can then apply security policies uniformly across all of the Kubernetes environments that you manage.

Rancher's Contribution to Kubernetes

Since its inception in 2014, Rancher Labs has been a leader in open source software and container solutions. When Rancher v1 came out in 2016, we quickly saw that Kubernetes was on the rise and added support for it.

We rebuilt Rancher v2 within Kubernetes, making our already-tight integration even tighter. We also developed the Rancher Kubernetes Engine (RKE) to make it easy to deploy Kubernetes clusters in any location and to keep those clusters updated.
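As a rough sketch (the addresses, SSH user, and key path below are placeholders; consult the RKE documentation for the full set of options), an RKE cluster is described in a cluster.yml and created with the rke up command:

    # cluster.yml -- minimal illustrative example for RKE.
    nodes:
      - address: 203.0.113.10                # placeholder node address
        user: ubuntu                         # SSH user RKE connects as
        role: [controlplane, etcd, worker]
      - address: 203.0.113.11
        user: ubuntu
        role: [worker]
    ssh_key_path: ~/.ssh/id_rsa              # key used to reach the nodes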

What would you like to do next?

I want to install Kubernetes

I want training on Kubernetes

I want to see a demo of Kubernetes and Rancher

Get started with Rancher