Continental Innovates with Rancher and Kubernetes
Longhorn 1.1, an enterprise-grade, cloud native container storage solution and CNCF Sandbox project, is now available. It is the first cloud native storage solution designed and built for the edge, with ARM64 support, new self-healing capabilities and increased performance visibility. See how Rancher, SUSE’s Kubernetes management platform, and Longhorn work together to power your Kubernetes deployments on the edge.
Rancher is co-hosting the Computing on the Edge with Kubernetes conference on October 21 with Arm, AWS and Microsoft. We’re bringing together thought leaders and customers to share their knowledge and insight about Kubernetes and the edge. If you’re thinking about the edge (or know you should be), you don’t want to miss this free event.
Automotive manufacturing giant Continental adopted Kubernetes and Rancher to streamline its manufacturing infrastructure into an agile, cloud-native and platform-based architecture.
Hotelbeds, a travel technology company that operates a hotel distribution platform, leveraged Rancher to manage its on-premises and cloud Kubernetes clusters and improved workload migration by 90 percent while lowering costs by 35 percent.
Rancher CEO Sheng Liang and Portworx CEO Murli Thirumale discuss the evolution of Kubernetes storage and how industry players must come together for the benefit of the open source community and industry at large.
Video gaming pioneer Ubisoft’s central Kubernetes platform based on Rancher gives thousands of developers the ability to spin up new Kubernetes clusters in a controllable, centrally managed way -- reducing cluster deployment time by 80 percent.
CRN, a brand of The Channel Company, has recognized Rancher Labs in its 2020 Emerging Vendors list, in the Data Center category. This annual list honors new, rising technology suppliers that exhibit great promise in shaping the future success of the channel with their dedication to innovation.
Many of Helm’s security best practices apply directly to how you manage your Kubernetes applications. Discover how you can use Helm and Helm Charts to create reproducible security in your Kubernetes deployments.
Learn how to use HAProxy as your Ingress Controller with Rancher. We explore features including zero-downtime reloads, performance and observability.
Learn how to turn a machine learning project into a Kubeflow machine learning pipeline and deploy it onto Kubernetes with Rancher.
Citrix ADC cloud-native portfolio brings a seamless cloud load balancer deployment to customers using Rancher on-premises.
Albert Heijn, the largest grocery brand in the Netherlands, works with Rancher to achieve an 80 percent reduction in management hours and testing time.
Learn how to set up custom alerts with Rancher and Prometheus Alertmanager to find problems in your Kubernetes clusters before there's an outage.
Monitoring in Kubernetes would not be complete without alerting. Alerts can notify us as soon as a problem occurs, letting us know immediately when something goes wrong with our system. Learn how to set up custom alerts using Prometheus queries.
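As a minimal sketch of what such a custom alert looks like (the group and alert names here are hypothetical, and the rule assumes the node_exporter filesystem metrics are being scraped), a Prometheus alerting rule pairs a PromQL expression with a duration and labels:

```yaml
groups:
  - name: custom-alerts              # hypothetical rule group name
    rules:
      - alert: NodeDiskRunningLow
        # PromQL query: fires when less than 10% of a filesystem is free
        expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10
        for: 5m                      # condition must hold 5 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Disk space below 10% on {{ $labels.instance }}"
```

Alertmanager then picks up firing alerts and routes them (by labels such as `severity`) to receivers like email or Slack.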
Citrix and Rancher are partnering to deliver Citrix’s cloud-native stack on Rancher, the complete enterprise computing platform to run Kubernetes clusters on-premises, in the cloud or at the edge. Citrix Cloud Native Stack integrates with Rancher as pre-built and reusable application stack templates. These templates stitch together all components of the Citrix Cloud Native Stack (Citrix Ingress Controller, Citrix Node Controller, Citrix Observability Exporter, IPAM). You can modify and deploy these templates into running application stacks via the Rancher admin console.
Some open source vendors think the number of code commits demonstrates their technical prowess and community commitment. I think users care much more about ‘business value’.
This article will compare and contrast six operating systems commonly used in container deployments. It will present information on why the choice of operating system matters, and how differences in application requirements may call for different operating systems.
CNI, or Container Network Interface, is a standard system for provisioning networking for containers, especially for multi-host orchestrators like Kubernetes. In this article, we'll describe what CNI is, why it's helpful, and then compare some popular CNI plugins for establishing the network for Kubernetes containers.
When evaluating application and system architecture, it is important to understand your options and their implications. In recent years, highly distributed systems have become popular, in part due to an influx of sophisticated tooling and an evolution in system management practices. In this guide, we will discuss some of the historical contexts from which distributed systems emerged and offer some general advice on what to keep in mind when designing these applications.
Swapnil Bhartiya of TFiR interviewed Rancher co-founder and CEO Sheng Liang at KubeCon China. The ensuing conversation will teach you about the fascinating ways Kubernetes enhances IT infrastructure from the ground up. Watch the video or read the transcript.
When traffic increases, we need to have a way to scale our application to keep up with user demand. With Kubernetes multi-cluster management through Rancher, scaling has never been easier and more efficient. Read here about scaling Kubernetes and the challenges you might be facing when managing a hybrid cloud environment.
This blog covers building a CI/CD pipeline using the hosted GitLab.com solution. The Kubernetes integrations that are covered are generic and should work with any CI/CD provider that interfaces directly with Kubernetes using a service account. Tools used are Auto DevOps, Rancher, and GitLab.
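As an illustrative sketch of that approach (not the article's exact pipeline -- the `my-app` deployment name and image tags are placeholders), a minimal `.gitlab-ci.yml` might build an image and then roll it out with `kubectl` under credentials for a cluster service account:

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:24                   # assumed Docker image tag
  script:
    # CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are predefined GitLab CI variables
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest      # any image that ships kubectl works
  script:
    # KUBECONFIG here is assumed to point at the service account's credentials
    - kubectl set image deployment/my-app my-app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```

Because the deploy step only needs `kubectl` and a service account token, the same pattern works outside GitLab with any CI provider.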
One of the nicer features of Kubernetes is the ability to configure autoscaling for your running services. Without autoscaling, it's difficult to accommodate deployment scaling and meet SLAs. This article will show you how to autoscale your services on Kubernetes using the Horizontal Pod Autoscaler.
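As a hedged sketch of the technique (the `web` Deployment name is a placeholder), a HorizontalPodAutoscaler object targets an existing workload and keeps replicas between a floor and a ceiling to hold average CPU utilization near a target:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:            # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                # placeholder deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU exceeds 50%
```

An equivalent object can be created imperatively with `kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=10`; either way, the metrics server must be running in the cluster for CPU utilization to be reported.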
Learn about the Rancher management plane architecture, where every API resource is represented as a CustomResourceDefinition (CRD) and every functional routine runs as a Kubernetes controller.
Objective: In this article, we will walk through running a distributed, production-quality database setup managed by Rancher and characterized by stable persistence. We will use StatefulSets with a Kubernetes cluster in Rancher for the purpose of deploying a stateful distributed Cassandra database.
Prerequisites: We assume that you have a Kubernetes cluster provisioned with a cloud provider. Consult the Rancher resource if you would like to create a K8s cluster in Amazon EC2 using Rancher 2.
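To make the StatefulSet approach concrete, here is a minimal sketch (the image tag, replica count and storage size are assumptions, not the article's exact manifest): the `serviceName` gives each Cassandra pod a stable DNS identity, and `volumeClaimTemplates` gives each replica its own persistent volume, which is what provides the stable persistence described above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra           # headless Service providing stable pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11    # assumed image tag
          ports:
            - containerPort: 9042  # CQL native transport port
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:            # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a Deployment, deleting or rescheduling a pod here preserves its claim, so `cassandra-0` always comes back with the same name and the same data volume.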