Google Container Engine, or GKE for short (the K stands for Kubernetes), is Google’s offering in the space of Kubernetes runtime deployments. When used in conjunction with a couple of other components from the Google Cloud Platform, GKE provides a one-stop shop for creating your own Kubernetes environment, on which you can deploy all of the containers and pods that you wish without having to worry about managing Kubernetes masters and capacity. Read more
For any team using containers – whether in development, test, or production – an enterprise-grade registry is a non-negotiable requirement. JFrog Artifactory is much beloved by Java developers, and it’s easy to use as a Docker registry as well. To make it even easier, we’ve put together a short walkthrough for setting up Artifactory in Rancher.
Before you start
For this article, we’ve assumed that you already have a Rancher installation up and running (if not, check out our Quick Start guide), and will be working with either Artifactory Pro or Artifactory Enterprise.
Choosing the right version of Artifactory depends on your development needs. If you mainly build Maven packages, Artifactory open source may be suitable. However, if you build using Docker, Chef Cookbooks, NuGet, PyPI, RubyGems, and other package formats, you’ll want to consider Artifactory Pro. And if you have a globally distributed development team with high availability (HA) and disaster recovery (DR) needs, you’ll want to consider Artifactory Enterprise. JFrog provides a detailed matrix of the differences between the versions of Artifactory.
There are several values you’ll need to select in order to set up Artifactory as a Docker registry, such as a public name or public port. In this article, we refer to them as variables; just substitute your chosen values for the variables throughout this post.
To deploy Artifactory, you’ll first need to create (or already have) a wildcard certificate imported into Rancher for “*.$public_name”. You’ll also need to create DNS entries pointing to the IP address of artifactory-lb, the load balancer for the Artifactory high availability architecture. Artifactory will be reachable at $publish_schema://$public_name:$public_port, while the Docker registry will be reachable at $publish_schema://$docker_repo_name.$public_name:$public_port.
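To make the variable substitution concrete, here is a minimal sketch of how the two URLs are assembled and how a client would then use the registry. The specific values (example.com, docker-local, port 443) are placeholders, not values from this walkthrough:

```shell
# Hypothetical values -- substitute the ones you chose for your environment.
public_name="example.com"
public_port="443"
publish_schema="https"
docker_repo_name="docker-local"

artifactory_url="${publish_schema}://${public_name}:${public_port}"
registry_host="${docker_repo_name}.${public_name}:${public_port}"
echo "Artifactory UI:  ${artifactory_url}"
echo "Docker registry: ${registry_host}"

# Once the DNS entries and wildcard certificate are in place, you can
# log in and push to the registry (run these against your live setup):
# docker login "${registry_host}"
# docker tag myapp:latest "${registry_host}/myapp:latest"
# docker push "${registry_host}/myapp:latest"
```

Note that the Docker registry name is a subdomain of the public name, which is why the wildcard certificate for *.$public_name is required.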
Managing containers requires a broad scope spanning application development, testing, and OS preparation; as a result, securing containers is a broad topic with many separate areas. A layered security approach works just as well for containers as it does for any other IT infrastructure.
There are many precautions that should be taken before running containers in production.* These include:
Hardening, scanning and signing images
Implementing access controls through management tools
Enabling only secured communication protocols
Using your own digital signatures
Securing the host, platforms and Docker by hardening, scanning and locking down versions
*Download “15 Tips for Container Security” for a more detailed explanation
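Two of the precautions above – using digital signatures and enforcing secured communication – can be sketched with standard Docker tooling. The certificate paths below are illustrative placeholders, not values from this article:

```shell
# Docker Content Trust makes the Docker client refuse unsigned images on
# pull and push, covering the "use your own digital signatures" precaution.
export DOCKER_CONTENT_TRUST=1
echo "Content trust enabled: ${DOCKER_CONTENT_TRUST}"

# To restrict the Docker daemon to TLS-secured connections, start it with
# --tlsverify and your own CA/server certificates (paths are examples):
# dockerd --tlsverify \
#   --tlscacert=/etc/docker/ca.pem \
#   --tlscert=/etc/docker/server-cert.pem \
#   --tlskey=/etc/docker/server-key.pem \
#   -H=0.0.0.0:2376
```

With content trust enabled, `docker pull` and `docker push` will fail for any image tag that lacks a valid signature.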
But at the end of the day, containers need to run in a production environment, where constant vigilance is required to keep them secure. No matter how many precautions and controls have been put in place before production, there is always the risk that an attacker gets through or that malware spreads from the internal network. As applications are broken into microservices, internal ‘east-west’ traffic increases dramatically, and it becomes more difficult to monitor and secure that traffic. Recent examples include ransomware attacks that exploited thousands of MongoDB and ElasticSearch servers, including containerized ones, with very simple attack scripts. Serious data leakage or damage has also been reported to originate from a malicious laptop or desktop on the internal network.
What is ‘Run-Time Container Security’?
Run-time container security focuses on monitoring and securing containers running in a production environment. This includes container and host processes, system calls, and most importantly, network connections. Read more
Rancher Labs delivers fast, ultra-lightweight container operating system
Cupertino, Calif. – April 12, 2017 – Rancher Labs, a provider of container management software, today announced the general availability of RancherOS, a simplified Linux distribution built from containers, for containers. RancherOS eliminates any unnecessary libraries and services, resulting in a footprint three times smaller than that of other container operating systems. The simplified container environment reduces container boot time, increases efficiency and improves security by reducing the number of components that can be exploited.
“At BRCloud Services, we strive to deliver the best solutions to address our customers’ needs,” said Helvio Lima, CEO at BRCloud Services. “RancherOS epitomizes what modern infrastructure should look like. We’re thrilled to integrate the container operating system into our portfolio.”
RancherOS makes it simple to run containers at scale in development, test and production. By containerizing system services and leveraging Docker for management, the operating system provides an incredibly reliable and simple-to-manage container-ready environment. System services are defined by Docker Compose and automatically configured using cloud-init, reducing administrative burden. Unneeded libraries and services are eliminated, significantly reducing the OS footprint and minimizing the hassle of updating, patching and maintaining a container host operating system. Containers running on RancherOS boot in seconds, making the operating system ideal for running microservices or auto-scaling. Teams can use the Rancher container management platform to easily manage RancherOS at large scale in production. Read more
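To illustrate the Docker Compose plus cloud-init model described above, here is a minimal cloud-config sketch for RancherOS. The service name and image are illustrative; the `rancher.services` section uses Docker Compose syntax, per the RancherOS documentation:

```shell
# Write an example RancherOS cloud-config file. A service defined under
# rancher.services is started by RancherOS as a system container on boot.
cat > cloud-config.yml <<'EOF'
#cloud-config
rancher:
  services:
    nginx:
      image: nginx:alpine
      restart: always
      ports:
        - "80:80"
EOF
echo "Wrote $(wc -l < cloud-config.yml) lines to cloud-config.yml"
```

This file would typically be supplied to the host at provisioning time (e.g. as user data), so services come up automatically with no manual configuration on the node.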