John Engelman February 2, 2017
Infrastructure as code is the practice of codifying and automating the deployment and management of infrastructure with tooling. This allows infrastructure changes to be tested, reviewed, approved, and deployed with the same processes and tools as application code. In this blog post, we’ll walk through using Rancher and Terraform to implement infrastructure as code, using the recently added built-in Rancher provider for Terraform.
Terraform from HashiCorp is a tool for abstracting service and provider APIs into declarative configuration files. It then tracks the state of the infrastructure and converges it to match the specified configuration. Terraform ships with built-in support for a variety of cloud providers (AWS, CenturyLink Cloud, Google Cloud, Microsoft Azure, OpenStack, VMware vSphere, etc.) and other services such as Bitbucket, GitHub, Fastly, Heroku, DNSimple, and Rancher. The full list of providers can be found online in the Terraform docs. Read more
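To give a taste of what a declarative configuration looks like, here is a minimal sketch using the Rancher provider. The endpoint, credentials, and names are placeholders, and the resource arguments (`api_url`, `access_key`, `secret_key`, `rancher_environment`, `rancher_stack`) are assumptions based on that provider's documented schema, not taken from this post:

```hcl
# Configure the Rancher provider (all values are placeholders).
provider "rancher" {
  api_url    = "https://rancher.example.com/v1"
  access_key = "${var.rancher_access_key}"
  secret_key = "${var.rancher_secret_key}"
}

# Declare an environment; Terraform converges real state to match.
resource "rancher_environment" "demo" {
  name        = "demo"
  description = "Environment managed as code"
}

# Deploy a stack into that environment.
resource "rancher_stack" "web" {
  name           = "web"
  environment_id = "${rancher_environment.demo.id}"
}
```

With a configuration like this under version control, `terraform plan` shows the proposed infrastructure change for review before `terraform apply` converges it.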
Sheng Liang January 1, 2017
As we start a new year, I’d like to thank the Rancher community for a great 2016. 2016 was an awesome year for Rancher Labs, and we’ve been fortunate to have a deeply engaged community of open source users and developers, customers, and partners. In March, we shipped our 1.0 GA release, and since then Rancher has established itself as a leading product in the container ecosystem.
2016 was especially rewarding because of the tremendous amount of support we received from our users and customers. So many of you have posted insightful articles, blog posts, forum questions and answers, and GitHub issues, and seeing how users talk about us on Twitter and other social media platforms drives us to work harder. We are continually inspired by the great stories people have about how they use Rancher, like those by Dispatch, LateRooms.com, and Alertacall. We are grateful to users who are willing to share their experiences using Rancher with the world, and to our friends at Align Technology who are so enthusiastic about our product that they organized the first Rancher user group in the US.
In 2016, our product development was guided by a few key ideas, and our community of users will continue to see us expand upon these in 2017: Read more
Raul Sanchez Liebana December 15, 2016
One of the great things about microservices is that they allow engineering teams to decouple software development from the application lifecycle. Every microservice:
- can be written in its own language, be it Go, Java, or Python
- can be contained and isolated from others
- can be scaled horizontally across additional nodes and instances
- is owned by a single team, rather than being a shared responsibility among many teams
- communicates with other microservices through an API or a message bus
- must support a common service level agreement to be consumed by other microservices, and conversely, to consume other microservices
These are all very cool features, and most of them help to decouple various software dependencies from each other.
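The API-communication and isolation points above can be sketched with a minimal, hypothetical service. The `OrderServiceHandler` name, port, and `/health` endpoint are illustrative, not from the post; only the Python standard library is used:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderServiceHandler(BaseHTTPRequestHandler):
    """A tiny stand-alone service: it owns its own state and is
    reachable by other services only through its HTTP API."""

    def do_GET(self):
        if self.path == "/health":
            # A health endpoint lets peers and orchestrators verify the
            # service is meeting its service level agreement.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def run(port=8000):
    """Bind the service; call .serve_forever() on the result to start it."""
    return HTTPServer(("0.0.0.0", port), OrderServiceHandler)
```

Running `run().serve_forever()` starts one instance; scaling horizontally is then a matter of launching more instances behind a load balancer, regardless of what language each sibling service is written in.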
But what about the operations point of view? While the cool aspects of microservices bulleted above are great for development teams, they pose some new challenges for DevOps teams. Namely:
Kathryn Hsu October 26, 2016
Yesterday, Atlantis Computing announced a new converged platform for managing infrastructure and containers, which combines Rancher with their award-winning USX software-defined storage solution. This turnkey solution will make it easier for IT organizations to deliver containers as a service to their developers with enterprise-grade storage, without losing sight of the very real, bottom-line benefits that come from optimizing virtualized infrastructure. This solution will be available as a tech preview in early November.
Credit: Atlantis Computing
From a single UI, users will be able to provision a new compute host and automatically create USX-powered persistent storage for containers on those hosts; moreover, that storage takes advantage of Atlantis’ USX technology for data reduction and near-instantaneous IO for containers running in memory. The result: containerized applications that can run, scale, and update that much faster, with lower data center costs, and straightforward management for organizations building and overseeing them. Atlantis’ Hugo Phan does an excellent job of diving into the technical details of the platform here.
Hussein Galal October 25, 2016
Rancher ships with two types of catalog items for deploying applications: the Rancher certified catalog and the community catalog, which enables the community to contribute reusable, pre-built application stack templates.
One of the more interesting recent community catalog templates is the external load balancer for the AWS Classic Elastic Load Balancer (ELB). It keeps an existing load balancer updated with the EC2 instances running Rancher services that have one or more exposed ports and a specific label.
This blog post will explain how to set up a Classic ELB and walk through the details of launching a catalog template for ELB from the community catalog to update the Classic ELB automatically. Read more
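For reference, a Classic ELB like the one this template targets can be created ahead of time with the AWS CLI. The load balancer name, subnet, and security group below are placeholders you would replace with your own:

```shell
# Create a Classic ELB listening on port 80 (all IDs are placeholders).
aws elb create-load-balancer \
  --load-balancer-name rancher-demo-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0
```

Once the catalog template is running, it registers and deregisters EC2 instances with this load balancer as matching Rancher services scale up and down.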
Nick Ma September 14, 2016
In Part 1: Rancher Server HA, we looked into setting up Rancher Server in HA mode to secure it against failure. With that foundation in place, we now have a system we can iterate on. So what now? In this installment, we’ll look at building better service resiliency with Rancher Health Checks and Load Balancing.
Since the Rancher documentation for Health Checks and Load Balancing is extremely detailed, Part 2 will focus on illustrating how they work, so we can become familiar with the nuances of running services in Rancher. A person tasked with supporting the system might have several questions. For example, how does Rancher know a container is down? How is this scenario different from a Health Check? What component is responsible for operating the health checks? How does networking work with Health Checks? Read more
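To make the discussion concrete, a Rancher health check is declared per service in `rancher-compose.yml`. The sketch below uses keys from the documented health-check schema with illustrative values; treat the exact key names and thresholds as assumptions to verify against the Rancher docs for your version:

```yaml
version: '2'
services:
  web:
    scale: 2
    health_check:
      port: 80                       # container port the network agents probe
      request_line: GET / HTTP/1.0   # HTTP request to send (omit for a TCP check)
      interval: 2000                 # milliseconds between checks
      response_timeout: 2000         # milliseconds to wait for a response
      healthy_threshold: 2           # consecutive successes before marked healthy
      unhealthy_threshold: 3         # consecutive failures before marked unhealthy
      strategy: recreate             # what to do with an unhealthy container
```

With a check like this in place, Rancher's network agents probe the container from other hosts, which is exactly the behavior the questions above explore.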