In the world of containers, Kubernetes has become the community standard for container orchestration and management. But there are some networking basics to consider as applications are built if you want to take full advantage of multi-cloud capabilities.
The Basics of Kubernetes Networking: Pods
The basic unit of management inside Kubernetes is not a container; it is a pod. A pod is simply one or more containers deployed as a unit. Often, a pod is a single functional endpoint used as part of a service offering.
Two examples of valid pods are:
Database pod: a single MySQL container
Web pod: an instance of Python in one container and Redis in a second container
Useful things to know about pods:
They share resources, including the network stack and namespaces.
A pod is assigned a single IP address, which clients connect to.
A pod configuration defines any public ports and which container hosts each port.
All containers within a pod can communicate with each other over any port. (They all address one another as localhost, so be sure that all the services in the pod use unique ports.)
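As a sketch, the two-container web pod described above might be declared like this (the pod name, labels, image tags, and ports are illustrative, not a prescribed configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: app
    image: python:3.9-slim           # illustrative image tag
    command: ["python", "-m", "http.server", "8000"]
    ports:
    - containerPort: 8000
  - name: cache
    image: redis:6-alpine            # illustrative image tag
    ports:
    - containerPort: 6379
```

Because both containers share the pod's network namespace, the Python container can reach Redis at localhost:6379, and the two services must listen on different ports.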
A Kubernetes service is a set of identical pods managed behind a load balancer. Clients connect to the IP of the load balancer instead of the individual IPs of each pod. Defining your application as a service allows Kubernetes to scale the number of pods based on the rules you define and the resources available.
Defining an application as part of a service is also the only way to make it available to clients outside of the Kubernetes infrastructure. Even if you never scale past a single instance, a service is the avenue to have an external IP address assigned.
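A minimal Service manifest for the pods above might look like the following (the name, selector, and ports are illustrative; `type: LoadBalancer` assumes a provider that can provision an external load balancer):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # requests an external IP from the provider
  selector:
    app: web            # routes traffic to all pods with this label
  ports:
  - port: 80            # port clients connect to
    targetPort: 8000    # port the container listens on
```

Clients then connect to the service's external IP on port 80, and Kubernetes spreads the traffic across every pod matching the selector.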
When deploying applications in the container world, one of the less obvious points is how to make the application available to the external world, outside of the container cluster. One option is to use a host port, which maps a port on the host to the container port where the application is exposed. While this option is fine for local development, it is not viable in a real cluster with many applications deployed. A better solution is to use an HTTP proxy/load balancer: a container exposed on the standard HTTP/HTTPS ports of the host that routes traffic to the applications running as containers.
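For reference, the host-port option looks something like this pod spec fragment (the container name, image, and port numbers are illustrative):

```yaml
# hostPort binds a port on the node directly to the container port.
# Fine for local development, but ports collide as soon as several
# applications need port 80/443 on the same host.
spec:
  containers:
  - name: app
    image: example/app:latest   # illustrative image
    ports:
    - containerPort: 8080
      hostPort: 8080            # exposed on the node's own IP
```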
In this post, we will set up Traefik as an HTTP proxy/load balancer for web services running in a Rancher Cattle setup. Traefik will dynamically update its configuration using the Rancher API. An SSL wildcard certificate will be used, stored via Rancher's secrets feature; Traefik's (nice) Let's Encrypt ACME feature will not be used here. If you plan to use Traefik with Let's Encrypt SSL certificates, I encourage you to use the Traefik stack available in the Rancher Community Catalog.
With Kubernetes 1.6 came the beta release of the role-based access control (RBAC) feature. This feature allows admins to create policies defining which users can access which resources within a Kubernetes cluster or namespace. Kubernetes itself does not have a native concept of users and instead delegates to an external authentication system. As of version 1.6.3, Rancher integrates with this process to allow any authentication system supported by Rancher, such as GitHub, LDAP, or Azure AD, to be used for Kubernetes RBAC policies. In short, Rancher v1.6.3 picks up where Kubernetes RBAC leaves off, giving teams a dead-easy mechanism for authenticating users across their Kubernetes clusters.
To see how one might use this feature, let’s consider a company that has a development team and a QA team. Let’s also suppose they’re using GitHub as the authentication method for their Rancher setup. In their Kubernetes environment they’ve created two namespaces, one for developers and one for QA. It’s now possible for admins to define rules such as the following:
Only members of the “developers” GitHub team should have read and write access to the “dev” namespace
Likewise, only members of the “QA” GitHub team should have read and write access to the “QA” namespace
Developers should also be able to view, but not edit, resources in the “QA” namespace
Users “user1” and “user2” should be able to read and write to both the “dev” and “QA” namespaces
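The first of these rules could be expressed with a RoleBinding such as the following sketch (the binding name, group name, and namespace are illustrative, and how a GitHub team maps to a Kubernetes group depends on the configured authenticator; `v1beta1` is the RBAC API version available in Kubernetes 1.6):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: developers-edit
  namespace: dev
subjects:
- kind: Group
  name: developers              # GitHub team as mapped by the auth provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                    # built-in role: read/write on most objects
  apiGroup: rbac.authorization.k8s.io
```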
Other Kubernetes features can be leveraged to more richly define multitenancy within a Rancher Kubernetes environment. With resource quotas, an admin can define limits on resources such as CPU and memory usage for a namespace. This is particularly important since namespaces within an environment run on the same set of hosts.
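A ResourceQuota sketch along these lines caps what a namespace can consume (the name, namespace, and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU requested across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limit across the namespace
    limits.memory: 16Gi
```

Once this quota is in place, pods in the namespace must declare resource requests and limits, and creations that would exceed the totals are rejected.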
Role bindings tie a particular role to one or more users or groups. The preceding example made use of three built-in roles defined by Kubernetes:
view: read-only access to most objects in a namespace
edit: read and write access to most objects in a namespace
admin: includes all permissions from the edit role and allows the creation of new roles and role bindings
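The read-only rule from the earlier example ("developers can view, but not edit, the QA namespace") shows how these built-in roles are reused: bind the `view` ClusterRole in the target namespace. This sketch uses illustrative names, and group mapping again depends on the configured authenticator:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: developers-view
  namespace: qa
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                 # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```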
These predefined roles enhance the usability of RBAC by letting most users reuse them instead of defining their own. We recommend them for users who are less familiar with individual Kubernetes resources or who have no need for custom roles.
From our work with teams running Kubernetes in production, particularly those offering CaaS solutions across their organizations, we understand the need for straightforward RBAC, authentication, and authorization. More detailed information on this feature can be found in our docs. We always want feedback on what we’re building for Rancher and Kubernetes – for any questions, you can find me on Rancher Users Slack (@josh) or on Twitter (@joshwget).
At Higher Education, we’ve tested and used quite a few CI/CD tools for our Docker CI pipeline. Using Rancher and Drone has proven to be the simplest, fastest, and most enjoyable experience we’ve found to date. From the moment code is pushed or merged to a deployment branch, code is tested, built, and deployed to production in about half the time of cloud-hosted solutions: as little as three to five minutes (some apps take longer due to a larger build/test process).
Drone builds are also extremely easy for our developers to configure and maintain, and getting it set up on Rancher is just like everything else on Rancher: very simple.
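A Drone pipeline lives in a `.drone.yml` file in the repository. This sketch shows the general shape in the 0.x pipeline format; the step names, images, repo, and webhook URL are illustrative, not our actual configuration:

```yaml
# .drone.yml sketch: test, build/publish an image, then poke a
# deploy webhook, with publish/deploy gated to the master branch
pipeline:
  test:
    image: node:8
    commands:
      - npm install
      - npm test
  publish:
    image: plugins/docker          # builds and pushes the image
    repo: example/myapp            # illustrative registry repo
    tags: ${DRONE_COMMIT_SHA}
    when:
      branch: master
  deploy:
    image: plugins/webhook
    urls: https://rancher.example.com/v1/upgrade   # illustrative endpoint
    when:
      branch: master
```

Because the whole pipeline is one versioned file, developers can own and evolve their app's deployment configuration alongside the code.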
Our Top Requirements for a CI/CD Pipeline
The CI/CD pipeline is really the core of the DevOps experience and is certainly the part with the most impact on our developers. From a developer perspective, the two things that matter most in a CI/CD pipeline are speed and simplicity.
Speed is #1 on the list, because nothing’s worse than pushing out one line of code and waiting 20 minutes for it to reach production. Or even worse: when there is a production issue, a developer pushes out a hotfix only to have company dollars continue to grow wings and fly away as your deployment pipeline churns.
Simplicity is #2, because in an ideal world, developers can build and maintain their own application deployment configurations. This makes life easier for everyone in the long run. You certainly don’t want developers knocking on your (Slack) door every time their build fails for some reason.
Docker CI/CD Pipeline Speed Pain Points
While immutable containers are far superior to maintaining stateful servers, they do have a few drawbacks, the biggest being deployment speed: it’s slower to build and deploy a container image than to simply push code to an existing server. A Docker deployment pipeline can lose time at several stages.
Cybersecurity is no longer a luxury. If you need a reminder of that, just look at the seemingly endless stream of stories in the news lately about things like malware and security breaches.
If you manage a Docker environment and want to make sure your organization or users are not mentioned in the stories that accompany the next big breach, you should know the tools available for securing the Docker stack, and put them to work. This post identifies the Docker security tools available, both native ones from Docker itself and third-party options, that can help secure your Docker containers.