What's new in Kubernetes 1.12

Jan Bruder
Published: September 24, 2018
Updated: December 6, 2018

Kubernetes 1.12 will be released this week on Thursday, September 27, 2018. Version 1.12 ships just three months after Kubernetes 1.11 and marks the third major release of this year. The short cycle is in line with the quarterly release cadence the project has followed since its GA in 2015.

Kubernetes releases 2018

| Kubernetes Release | Date               |
| ------------------ | ------------------ |
| 1.10               | March 26, 2018     |
| 1.11               | June 27, 2018      |
| 1.12               | September 27, 2018 |

Whether you are a developer using Kubernetes or an admin operating clusters, it’s worth getting an idea about the new features and fixes that you can expect in Kubernetes 1.12.

A total of 38 features are included in this milestone. Let’s have a look at some of the highlights.

Kubelet certificate rotation

Kubelet certificate rotation was promoted to beta status. This functionality allows for automated renewal of the key and certificate for the kubelet API server as the current certificate approaches expiration. Until the official 1.12 docs have been published, you can read the beta documentation on this feature here.
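As a sketch, server certificate rotation can be switched on via the kubelet configuration file (field names follow the `KubeletConfiguration` v1beta1 API; how the file is wired up depends on your deployment method):

```yaml
# KubeletConfiguration excerpt (illustrative, not a complete config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Request the serving certificate via the certificates API and
# renew it automatically as it approaches expiration.
serverTLSBootstrap: true
featureGates:
  RotateKubeletServerCertificate: true
```

Note that the resulting certificate signing requests are not auto-approved out of the box; a signer or an administrator has to approve them.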

Network Policies: CIDR selector and egress rules

Two formerly beta features have now reached stable status. The first is the ipBlock selector, which allows specifying ingress/egress rules based on network addresses in CIDR notation. The second adds support for filtering traffic leaving the pods by specifying egress rules. The example below illustrates both features:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24   # illustrative CIDR
```

As previously beta features, both egress and ipBlock are already described in the official network policies documentation.

Mount namespace propagation

Mount namespace propagation, i.e. the ability to mount a volume rshared so that mounts created inside the container are reflected in the host's (root) mount namespace, has been promoted to stable. You can read more about this feature in the Kubernetes volumes docs.
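A minimal sketch of the stable API, mounting a host path with bidirectional propagation (pod name, image and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-demo
spec:
  containers:
  - name: mounter
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true                  # Bidirectional propagation requires privileged mode
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt
      mountPropagation: Bidirectional   # mounts made under /mnt propagate back to the host
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt
```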

Taint nodes by condition

This feature, introduced in 1.8 as an early alpha, has been promoted to beta. Enabling its feature flag (TaintNodesByCondition) causes the node controller to create taints based on node conditions and the scheduler to filter nodes based on those taints instead of the conditions themselves. The official documentation is available here.
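With the feature enabled, a pod that should still schedule onto nodes under, say, memory pressure has to tolerate the corresponding taint rather than the condition. A sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pressure-tolerant-pod
spec:
  tolerations:
  # Tolerate the taint the node controller adds for the MemoryPressure condition
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
```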

Horizontal pod autoscaler with custom metrics

While support for custom metrics in the HPA continues to be in beta status, version 1.12 adds various enhancements, like the ability to select metrics based on the labels available in your monitoring pipeline. If you are interested in autoscaling pods based on application-level metrics provided by monitoring systems such as Prometheus, Sysdig or Datadog, I recommend checking out the design proposal for external metrics in HPA.
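As an illustration of the new label selectors, an autoscaling/v2beta2 HPA might look like the following; the metric name and label are assumptions about your monitoring pipeline, not anything Kubernetes ships:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical custom metric
        selector:
          matchLabels:
            verb: GET                    # filter the metric series by label
      target:
        type: AverageValue
        averageValue: "100"
```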


RuntimeClass

RuntimeClass is a new cluster-scoped resource “that surfaces container runtime properties to the control plane”. In other words: this early alpha feature will enable users to select and configure (per pod) a specific container runtime (such as Docker, rkt or Virtlet) by providing the runtimeClassName field in the PodSpec. You can read more about it in these docs.
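As a sketch of the alpha API, a RuntimeClass and a pod selecting it could look like this; the handler name is hypothetical and depends on your CRI configuration:

```yaml
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: kata          # hypothetical CRI handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed   # selects the runtime defined above
  containers:
  - name: app
    image: nginx
```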

Resource Quota by priority

Resource quotas allow administrators to limit the resource consumption in namespaces. This is especially practical in scenarios where the available compute and storage resources in a cluster are shared by several tenants (users, teams). The beta feature Resource quota by priority allows admins to fine-tune resource allocation within the namespace by scoping quotas based on the PriorityClass of pods. You can find more details here.
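A sketch of a quota scoped to pods of a given PriorityClass (the class name "high" and the limits are assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
  namespace: default
spec:
  hard:
    cpu: "10"
    memory: 20Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]   # only pods with priorityClassName: high count against this quota
```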

Volume Snapshots

One of the most exciting new 1.12 features for storage is the early alpha implementation of persistent volume snapshots. It allows users to create and restore point-in-time snapshots of persistent volumes backed by any CSI storage provider. Three new API resources have been added as part of this implementation: VolumeSnapshotClass defines how snapshots for existing volumes are provisioned, VolumeSnapshotContent represents existing snapshots, and VolumeSnapshot lets users request a new snapshot of a persistent volume like so:

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  snapshotClassName: csi-hostpath-snapclass
  source:
    name: pvc-test
    kind: PersistentVolumeClaim
```
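The snapshotClassName above points at a VolumeSnapshotClass, which in the v1alpha1 API could look like this (the snapshotter name depends on the CSI driver in use):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
snapshotter: csi-hostpath   # name of the CSI driver that handles snapshots
```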

For the nitty-gritty details, take a look at the 1.12 documentation branch on GitHub.

Topology aware dynamic provisioning

Another storage related feature, topology aware dynamic provisioning, was introduced in v1.11 and has been promoted to beta in 1.12. It addresses some limitations with dynamic provisioning of volumes in clusters spread across multiple zones where single-zone storage backends are not globally accessible from all nodes.
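With the beta feature, a StorageClass can defer volume binding until a pod is scheduled and restrict provisioning to specific zones. A sketch (the provisioner and zone names are examples):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/gce-pd         # example in-tree provisioner
volumeBindingMode: WaitForFirstConsumer   # bind only after the consuming pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
```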

Enhancements for Azure Cloud provider

Two improvements to running Kubernetes on Azure ship in 1.12:

Cluster autoscaler support

The cluster autoscaler support for Azure was promoted to stable. This allows the number of nodes in Azure-hosted Kubernetes clusters to be scaled automatically based on scheduling demand.

Azure availability zone support

Kubernetes v1.12 adds alpha support for Azure availability zones (AZ). Nodes in an availability zone are labeled with `failure-domain.beta.kubernetes.io/zone=<region>-<AZ>`, and topology-aware provisioning is added for the Azure managed disks storage class.
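Workloads can then be pinned to a zone via node affinity on that label. A sketch with an assumed region/zone value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zoned-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - westeurope-1   # <region>-<AZ>, hypothetical value
  containers:
  - name: app
    image: nginx
```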

Anything else?

Kubernetes 1.12 contains many bug fixes and improvements to internal components, with a clear focus on stabilising the core, maturing existing beta features and improving release velocity by adding more automated tests to the project's CI pipeline. A noteworthy example of the latter is the addition of CI e2e conformance tests for the arm, arm64, ppc64, s390x and Windows platforms to the project's test harness.

For a full list of changes in 1.12 see the release notes.

Rancher will support Kubernetes 1.12 on hosted clusters as soon as it becomes available on the particular provider. For RKE provisioned clusters it will be supported starting with Rancher 2.2.
