containerd is an industry-standard core container runtime that was initially released by Docker Inc. in December 2015 and contributed to the CNCF in March 2017. We’ve received a number of questions about the project, so I thought I would share my perspective, as well as some preliminary thoughts on how Rancher Labs will leverage it.
Docker, Kubernetes, and containerd
The containerd project represents an important step in the evolution of the Docker platform. In the beginning, the Docker Engine was quite simple. It consisted of little more than the minimum support required to run Docker images on a single host. Over the last few years, however, the Docker Engine has evolved significantly. It now includes sophisticated support for cluster management, multi-host networking, and scheduling. Today, Docker is actually closer to a platform like Kubernetes, even though Kubernetes was created to manage Docker. Read more
GitLab is, at its core, a tool for centrally managing Git repositories. As one might expect from a platform that provides this service, GitLab provides a robust authentication and authorization mechanism, groups, issue tracking, wiki, and snippets, along with public, internal, and private repositories.
GitLab goes further by including a powerful continuous integration (CI) engine and a Docker container registry, allowing you to go from development to release inside of the same utility. It continues by adding two more powerful open source software utilities: Prometheus for monitoring, and Mattermost for team communication. The platform has a solid API and integrates with existing third-party systems like JIRA, Bugzilla, Pivotal, Slack, HipChat, Bamboo, and more.
All of this raises the question: why use GitLab instead of using SaaS services directly? The answer comes down to personal preference. For most people, paying SaaS providers for the services that they provide is an excellent solution. You can focus on building your application and let them worry about maintaining the tools. But what if you already have infrastructure with spare capacity? What if you just want private repositories without paying for that privilege? What if you did the math and determined that you can save money by hosting it yourself? What if you just want to own your own data?
Bringing services in-house opens you up to the risk of spending more time managing them and getting them to talk to each other, which is why GitLab truly shines as an option. With one click and a few minutes of configuration, you can be up and running with a complete solution.
One of the great benefits of the Rancher container management platform is that it runs on any infrastructure. While it’s possible to add any Linux machine as a host using our custom setup option, using one of the machine drivers in Rancher makes it especially easy to add and manage your infrastructure.
Today, we’re pleased to have a new machine driver available in Rancher, from our friends at cloud.ca. cloud.ca is a regional cloud IaaS for Canadian or foreign businesses requiring that all or some of their data remain in Canada, for reasons of compliance, performance, privacy or cost. The platform works as a standalone IaaS and can be combined with hybrid or multi-cloud services, allowing a mix of private cloud and other public cloud infrastructures such as Amazon Web Services. Having the cloud.ca driver available within Rancher makes it that much easier for our collective users to focus on building and running their applications, while minding data compliance requirements. Read more
Leading container management company recognized for innovation within cloud infrastructure industry
Cupertino, Calif. – May 17, 2017 – Rancher Labs, a provider of container management software, today announced they were recognized as one of four “Cool Vendors” in the May report by Gartner, Inc., Cool Vendors in Cloud Infrastructure, 2017.
“As container adoption continues to grow within the enterprise, the need for simplified container management persists,” said Sheng Liang, CEO and co-founder of Rancher Labs. “With Rancher, we’re providing a turnkey solution that enables organizations to deploy and manage containers in production, and on any choice of infrastructure. We’re thrilled to be named a ‘Cool Vendor’ by Gartner and to be acknowledged for our innovation and execution in the cloud infrastructure space.” Read more
One of the more novel concepts in systems design lately has been the notion of serverless architectures. It is no doubt a bit of hyperbole as there are certainly servers involved, but it does mean we get to think about servers differently.
The potential upside of serverless
Imagine a simple web-based application that handles requests from HTTP clients. Instead of having some number of program runtimes waiting for a request to arrive and then invoking a function to handle it, what if we could start the runtime on demand for each function as needed and throw it away afterwards? We wouldn’t need to worry about the number of servers running that can accept connections, or deal with complex configuration management systems to build new instances of the application when we scale. Additionally, we’d reduce the chances of common state-management issues such as memory leaks, segmentation faults, etc.
Perhaps most importantly, this on-demand approach to function calls would allow us to scale every function to match the number of requests and process them in parallel. Every “customer” would get a dedicated process to handle their request, and the number of processes would only be limited by the compute capacity at your disposal. When coupled with a large cloud provider whose available, on-demand compute sufficiently exceeds your usage, serverless has the potential to remove a lot of the complexity around scaling your application.
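To make the idea concrete, here is a minimal sketch of what such an on-demand function might look like, written in the style of an AWS Lambda-like handler. The function name, event fields, and response shape are illustrative assumptions, not a specific provider's API:

```python
# Sketch of an on-demand function handler (Lambda-style; names are
# illustrative). The provider's runtime starts a fresh process for each
# invocation, calls handler(), and discards the process afterwards --
# so no state persists between requests, which is what sidesteps
# memory leaks and other state-management problems.

def handler(event, context=None):
    """Handle a single HTTP-style request and return a response dict."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }

# Locally, an invocation is just a function call; in production, the
# platform supplies the event and context for each request and runs as
# many handler instances in parallel as there are requests.
if __name__ == "__main__":
    print(handler({"name": "Rancher"}))
```

Because each invocation is independent, scaling is simply a matter of running more copies of this function concurrently, up to the provider's available compute.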
One of the first questions you are likely to come up against when deploying containers in production is the choice of orchestration framework. While it may not be the right solution for everyone, Kubernetes is a popular scheduler that enjoys strong industry support. In this short article, I’ll provide an overview of Kubernetes, explain how it is deployed with Rancher, and show some of the advantages of using Kubernetes for distributed multi-tier applications.
Kubernetes has an impressive heritage. Although it was released as an open-source project in 2015, the technology on which Kubernetes is based (Google’s Borg system) has been managing containerized workloads at scale for over a decade. While it’s young as open-source projects go, the underlying architecture is mature and proven. The name Kubernetes derives from the Greek word for “helmsman” and is meant to be evocative of steering container-laden ships through choppy seas. I won’t attempt to describe the architecture of Kubernetes here. There are already some excellent posts on this topic, including this informative article by Usman Ismail.
Like other orchestration solutions deployed with Rancher, Kubernetes deploys services composed of Docker containers. Kubernetes evolved independently of Docker, so for those familiar with Docker and docker-compose, the Kubernetes management model will take a little getting used to. Kubernetes clusters are managed via the kubectl CLI or the Kubernetes web UI (referred to as the Dashboard). Applications and services are defined to Kubernetes using JSON or YAML manifest files in a format different from that of docker-compose. To make it easy for people familiar with Docker to get started with Kubernetes, a kubectl primer provides Kubernetes equivalents for the most commonly used Docker commands.
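To illustrate how a Kubernetes manifest differs from a docker-compose file, here is a minimal (hypothetical) YAML manifest for an nginx service; the names and labels are illustrative:

```yaml
# A minimal Deployment manifest. In docker-compose, this service would
# be a top-level key with an image and ports; in Kubernetes, the same
# intent is expressed as a Deployment that keeps two replicas of a
# labeled pod template running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
```

Saved as `web.yaml`, this could be applied to a cluster with `kubectl apply -f web.yaml`, after which `kubectl get pods` would show the running replicas.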
A Primer on Kubernetes Concepts
Kubernetes involves some new concepts that at first glance may seem confusing, but for multi-tier applications, the Kubernetes management model is elegant and powerful. Read more