One of the great benefits of the Rancher container management platform is that it runs on any infrastructure. While it’s possible to add any Linux machine as a host using our custom setup option, using one of the machine drivers in Rancher makes it especially easy to add and manage your infrastructure.
Today, we’re pleased to have a new machine driver available in Rancher, from our friends at cloud.ca. cloud.ca is a regional cloud IaaS for Canadian or foreign businesses requiring that all or some of their data remain in Canada, for reasons of compliance, performance, privacy or cost. The platform works as a standalone IaaS and can be combined with hybrid or multi-cloud services, allowing a mix of private cloud and other public cloud infrastructures such as Amazon Web Services. Having the cloud.ca driver available within Rancher makes it that much easier for our collective users to focus on building and running their applications, while minding data compliance requirements. Read more
Even with the almost unimaginable efficiencies achieved by the major public cloud providers, at any given time they still have excess capacity sitting idle. To earn some return on those resources, both AWS and Google Compute Engine sell them at a steep discount, often around 90% off the on-demand price.
What’s the catch? Well, the prices are market-driven, set by the highest bidder. It’s a classic marketplace model: demand drives the value of assets. The challenge for public cloud users, however, is that at any given time the spot instance you are using can be reclaimed if someone outbids you. With Amazon, you have two minutes to vacate the instance before it is terminated; Google Cloud gives you 30 seconds.
This volatility has kept most companies using public clouds away from this model. How can I expect to keep my application running if I can lose a server at any moment, especially if setting up that server to be production-ready takes a significant amount of time? It is not uncommon for configuration management tools to take 10 or more minutes to install packages and deploy an application. The time it takes to set up a server, combined with the narrow window to vacate, makes it extremely challenging to use these discounted instance types effectively.
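On AWS, that two-minute warning is exposed through the instance metadata service. As a rough sketch (the injectable `fetch` helper and the polling pattern are illustrative assumptions, not a complete drain procedure), an instance can watch for the notice like this:

```python
import urllib.request
import urllib.error

# Documented AWS endpoint: responds 200 with a timestamp once the
# instance has been marked for spot termination, 404 otherwise.
TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

def termination_pending(fetch=None):
    """Return True once the two-minute termination notice appears.

    `fetch` returns an HTTP status code; it is injectable so the logic
    can be exercised without a real EC2 instance. By default it queries
    the metadata service directly.
    """
    if fetch is None:
        def fetch():
            with urllib.request.urlopen(TERMINATION_URL, timeout=1) as resp:
                return resp.status
    try:
        return fetch() == 200
    except (urllib.error.URLError, OSError):
        # No metadata service reachable: treat as "no notice".
        return False

# Simulate both metadata responses without a real instance.
print(termination_pending(fetch=lambda: 404))  # False: no notice yet
print(termination_pending(fetch=lambda: 200))  # True: begin draining
```

A loop polling this every few seconds gives workloads the full warning window to drain connections and checkpoint state before the instance disappears.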
How containers help optimize cloud costs
Well, as you might have guessed, containers can help with this obstacle to using the spot market. The pre-built nature of containers means startup times can be drastically shorter than with a dynamic, scripted, or configuration-management-driven approach. The required packages, application code, and assorted files have all been resolved at build time and written to what is essentially a compressed archive (a Docker image). This puts sub-minute startup times for your application within reach. Read more
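A minimal Dockerfile makes the build-time/run-time split concrete (the file names and base image here are illustrative assumptions, not from any particular project):

```dockerfile
# Everything below happens once, at build time; at run time the
# container only has to unpack the image layers and exec the command.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # packages baked into the image
COPY . .
CMD ["python", "app.py"]
```

Because the slow steps (package installation, dependency resolution) are paid once at `docker build` time, launching a replacement container on a freshly won spot instance is a matter of pulling and starting the image.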
In Kubernetes, we often hear terms like resource management, scheduling and load balancing. While Kubernetes offers many capabilities, understanding these concepts is key to appreciating how workloads are placed, managed and made resilient. In this short article, I provide an overview of each facility, explain how they are implemented in Kubernetes, and how they interact with one another to provide efficient management of containerized workloads. If you’re new to Kubernetes and seeking to learn the space, please consider reading our case for Kubernetes article.
Resource management is all about the efficient allocation of infrastructure resources. In Kubernetes, resources are things that can be requested by, allocated to, or consumed by a container or pod. Having a common resource management model is essential, since many components in Kubernetes need to be resource-aware, including the scheduler, load balancers, worker-pool managers and even applications themselves. If resources are underutilized, this translates into waste and cost-inefficiency. If resources are over-subscribed, the result can be application failures, downtime, or missed SLAs. Read more
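In practice, this common model surfaces as per-container requests and limits in the pod spec; the scheduler uses the requests for placement, and the limits are enforced at run time. A sketch (the image and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx:alpine        # illustrative image
    resources:
      requests:                # what the scheduler reserves for placement
        cpu: "250m"            # a quarter of a CPU core
        memory: "128Mi"
      limits:                  # ceiling enforced on the running container
        cpu: "500m"
        memory: "256Mi"
```

A pod whose requests cannot be satisfied by any node stays Pending, which is the resource model and the scheduler interacting exactly as described above.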
Leading container management company recognized for innovation within cloud infrastructure industry
Cupertino, Calif. – May 17, 2017 – Rancher Labs, a provider of container management software, today announced it has been recognized as one of four “Cool Vendors” in the May report by Gartner, Inc., Cool Vendors in Cloud Infrastructure, 2017.
“As container adoption continues to grow within the enterprise, the need for simplified container management persists,” said Sheng Liang, CEO and co-founder of Rancher Labs. “With Rancher, we’re providing a turnkey solution that enables organizations to deploy and manage containers in production, and on any choice of infrastructure. We’re thrilled to be named a ‘Cool Vendor’ by Gartner and to be acknowledged for our innovation and execution in the cloud infrastructure space.” Read more
One of the more novel concepts in systems design lately has been the notion of serverless architectures. It is no doubt a bit of hyperbole, as there are certainly servers involved, but it does mean we get to think about servers differently.
The potential upside of serverless
Imagine a simple web-based application that handles requests from HTTP clients. Instead of having some number of program runtimes waiting for a request to arrive and then invoking a function to handle it, what if we could start the runtime on demand for each function as needed and throw it away afterwards? We wouldn’t need to worry about the number of servers available to accept connections, or deal with complex configuration management systems to build new instances of the application when scaling. Additionally, we’d reduce the chances of common state-management issues such as memory leaks, segmentation faults, etc.
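The runtime-per-request idea can be sketched in a few lines of Python (the handler, the request payloads, and the process-per-request dispatch are illustrative assumptions, not any particular serverless platform's API):

```python
from multiprocessing import Process, Queue

def handle_request(payload, out):
    # Stand-in for real per-request work; runs in its own fresh process.
    out.put(payload.upper())

def serve(requests):
    """Spawn one short-lived process per request, then discard them all."""
    results = Queue()
    workers = [Process(target=handle_request, args=(r, results))
               for r in requests]
    for w in workers:
        w.start()   # a brand-new runtime for each request
    for w in workers:
        w.join()    # the runtime is thrown away after the call returns
    return sorted(results.get() for _ in requests)

if __name__ == "__main__":
    print(serve(["alice", "bob"]))  # ['ALICE', 'BOB']
```

Each handler starts from a clean slate, so leaked memory or corrupted state in one invocation cannot affect the next, which is precisely the state-management benefit described above.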
Perhaps most importantly, this on-demand approach to function calls would allow us to scale every function to match the number of requests and process them in parallel. Every “customer” would get a dedicated process to handle their request, and the number of processes would only be limited by the compute capacity at your disposal. When coupled with a large cloud provider whose available, on-demand compute sufficiently exceeds your usage, serverless has the potential to remove a lot of the complexity around scaling your application.