What is CNI?

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.

Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.
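To make the interface concrete, here is a hedged sketch of a CNI network configuration of the kind a container runtime reads from `/etc/cni/net.d/`. The network name and subnet are illustrative; `bridge` and `host-local` are reference plugins maintained by the CNI project.

```json
{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The runtime invokes the named plugin binary with this configuration to attach and detach container network interfaces.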


For more information, visit the CNI GitHub project.

What Network Models are Used in CNI?

CNI providers implement their network fabric using either an encapsulated network model such as Virtual Extensible LAN (VXLAN) or an unencapsulated network model such as Border Gateway Protocol (BGP).

What is an Encapsulated Network?

This network model provides a logical Layer 2 (L2) network, encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you get an isolated L2 network for containers without needing route distribution, at the cost of minimal processing overhead and an increased IP packet size caused by the extra IP header that overlay encapsulation adds. Encapsulated traffic is exchanged between Kubernetes workers over UDP, and the workers interchange network control plane information about how MAC addresses can be reached. Common encapsulations used in this kind of network model are VXLAN, Internet Protocol Security (IPSec), and IP-in-IP.

In simple terms, this network model generates a kind of network bridge extended between Kubernetes workers, where pods are connected.

This network model is used when an extended L2 bridge is preferred. It is sensitive to the L3 network latency between the Kubernetes workers. If datacenters are in distinct geolocations, make sure there is low latency between them to avoid eventual network segmentation.
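The encapsulation overhead mentioned above can be made concrete with a short sketch. The byte counts are the standard header sizes for each protocol; the function name is our own, not part of any CNI provider's API.

```python
# Illustrative per-packet overhead of common encapsulation protocols,
# and the effective MTU left for pod traffic on the overlay interface.

ENCAP_OVERHEAD = {
    # VXLAN (8) + UDP (8) + outer IPv4 (20) + inner Ethernet (14) = 50 bytes
    "vxlan": 8 + 8 + 20 + 14,
    # IP-in-IP adds only one extra IPv4 header
    "ipip": 20,
}

def effective_mtu(link_mtu: int, encap: str) -> int:
    """MTU available to pods once encapsulation headers are accounted for."""
    return link_mtu - ENCAP_OVERHEAD[encap]

print(effective_mtu(1500, "vxlan"))  # 1450 — the MTU Flannel typically sets on flannel.1
print(effective_mtu(1500, "ipip"))   # 1480
```

This is why overlay interfaces on a standard 1500-byte Ethernet underlay typically advertise an MTU of 1450 (VXLAN) or 1480 (IP-in-IP).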

CNI providers using this network model include Flannel, Canal, and Weave.

Encapsulated Network

What is an Unencapsulated Network?

This network model provides an L3 network to route packets between containers. It doesn’t generate an isolated L2 network, nor does it incur encapsulation overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that’s needed. Instead of using IP headers for encapsulation, this network model uses a network protocol, such as BGP, between Kubernetes workers to distribute the routing information needed to reach pods.

In simple terms, this network model generates a kind of network router extended between Kubernetes workers, which provides information about how to reach pods.

This network model is used when a routed L3 network is preferred. Routes are dynamically updated at the OS level on the Kubernetes workers. It’s less sensitive to latency.

CNI providers using this network model include Calico and Romana.

Unencapsulated Network

What CNI Providers are Supported by Rancher?

Out-of-the-box, Rancher supports three different CNI providers for Kubernetes clusters: Canal, Flannel, and Calico. You can choose your CNI when you create new Kubernetes clusters from Rancher.

Canal


Canal is a CNI provider that gives you the best of Flannel and Calico. It allows users to easily deploy Calico and Flannel networking together as a unified networking solution, combining Calico’s network policy enforcement with the rich set of Calico (unencapsulated) and Flannel (encapsulated) network connectivity options.

In Rancher, Canal is the default CNI provider, and it is configured to use Flannel with VXLAN encapsulation.
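For RKE-provisioned clusters, the CNI provider is selected in the cluster configuration file. A minimal sketch, assuming the standard RKE `cluster.yml` layout:

```yaml
# cluster.yml (RKE) — illustrative fragment
network:
  plugin: canal   # the default; alternatives include flannel and calico
```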

Kubernetes workers should open UDP port 8472 (VXLAN) and TCP port 9099 (healthcheck). See Port Requirements for more details.

Canal Diagram

For more information, see the Canal GitHub Page.

Flannel


Flannel is a simple and easy way to configure an L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host’s public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being VXLAN.

Encapsulated traffic is unencrypted by default. For this reason, Flannel provides an experimental encryption backend, IPSec, which makes use of strongSwan to establish encrypted IPSec tunnels between Kubernetes workers.
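As a sketch of the network configuration described above (stored under a key such as `/coreos.com/network/config` in etcd, or in a ConfigMap when Flannel uses the Kubernetes API), the pod address space and backend are selected like this; the CIDR is illustrative:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Switching `"Type"` to `"ipsec"` (together with a `"PSK"` pre-shared-key field) selects the experimental encrypted backend.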

Kubernetes workers should open UDP port 8472 (VXLAN) and TCP port 9099 (healthcheck). See Port Requirements for more details.

Flannel Diagram

For more information, see the Flannel GitHub Page.

Calico


Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and a policy engine to provide networking for your Kubernetes workloads. Workloads can communicate across both cloud infrastructure and on-premises networks using BGP.

Calico also provides a stateless IP-in-IP encapsulation mode that can be used if necessary. In addition, Calico offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies.
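A hedged sketch of how IP-in-IP encapsulation is enabled per address pool in Calico’s v3 API; the pool CIDR is illustrative, and `CrossSubnet` restricts encapsulation to traffic that crosses subnet boundaries:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet   # or Always / Never
  natOutgoing: true
```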

Kubernetes workers should open TCP port 179 (BGP).

Calico Diagram

For more information, see the Calico GitHub page.

CNI Features by Provider

The following table summarizes the different features available for each CNI provider supported by Rancher.

Provider   Network Model          Route Distribution   Network Policies   Mesh   External Datastore   Encryption   Ingress/Egress Policies   Commercial Support
Canal      Encapsulated (VXLAN)   No                   Yes                No     K8s API              No           Yes                       No
Flannel    Encapsulated (VXLAN)   No                   No                 No     K8s API              No           No                        No
Calico     Unencapsulated         Yes                  Yes                Yes    Etcd                 Yes          Yes                       Yes
  • Network Model: Encapsulated or unencapsulated. For more information, see What Network Models are Used in CNI?

  • Route Distribution: An exterior gateway protocol designed to exchange routing and reachability information on the Internet. This feature is a must for unencapsulated CNI providers and is typically handled by BGP, which can also assist with pod-to-pod networking between clusters. If you plan to build clusters split across network segments, route distribution is a nice-to-have feature.

  • Network Policies: Kubernetes offers functionality to enforce rules about which services can communicate with each other using network policies. This feature is stable as of Kubernetes v1.7 and is ready to use with supported networking plugins.

  • Mesh: This feature allows service-to-service networking communication between distinct Kubernetes clusters.

  • External Datastore: CNI providers with this feature need an external datastore for their data.

  • Encryption: This feature allows encrypted and secure network control and data planes.

  • Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications.
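The network policies feature in the table maps to the standard Kubernetes NetworkPolicy resource, which providers such as Canal and Calico enforce. A minimal sketch (the namespace and labels are illustrative) that admits ingress traffic to db pods only from pods labeled app=api:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: example
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
```

Applying a policy that selects a pod makes that pod reject any ingress traffic not explicitly allowed, which is the basis of the project/namespace isolation mentioned later.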

CNI Community Popularity

The following table summarizes different GitHub metrics to give you an idea of each supported project’s popularity and activity. This data was collected in July 2018.

Provider   Project                                    Stars   Forks   Contributors
Canal      https://github.com/projectcalico/canal     536     75      19
Flannel    https://github.com/coreos/flannel          3,279   774     107
Calico     https://github.com/projectcalico/calico    572     225     82


Which CNI Provider Should I Use?

It depends on your project needs. There are many different providers, each with various features and options. There isn’t one provider that meets everyone’s needs. At the moment, Rancher v2.0 supports the three most versatile CNI providers.

As of Rancher v2.0.7, Canal is the default CNI provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking.

All three solutions are capable CNI providers and will likely suit your needs.