Kubernetes networking for beginners (how to not get eaten)
Networking is hard. It was hard before virtual machines, and it will still be hard after containers. I assume you already know your way around Docker and maybe you’ve dipped your toes into Kubernetes, but have hit a wall with networking. Fortunately, Kubernetes networking is not as hard as it used to be, thanks to the Container Networking Interface (CNI). Let’s take a look.
Kubernetes networking: a foreword
Although not strictly required, I highly recommend reading this article after you’ve built a strong understanding of Docker and Docker networking. You should also understand, and ideally have used, Kubernetes.
Although Kubernetes networking isn’t as hard as Docker networking, you can guess that Kubernetes uses some sort of abstraction to manage container networking (although Docker is not always involved). That means setting up Kubernetes networking is often as easy as running one command, but network troubleshooting shouldn’t be underestimated.
CNI: one network interface to rule them all
Thanks to the CNI initiative, Kubernetes can leverage multiple networking solutions through the same interface. This means that Kubernetes networking is modular: there are a number of CNI plugins to choose from, each with its own strengths and weaknesses. As a matter of fact, a newly created Kubernetes cluster requires networking but doesn’t include a CNI plugin. The cluster will not be able to execute common operations until networking is in place, so you will need to choose one of the various plugins.
It is important to note that every CNI plugin allows inter-pod and inter-node communication within the cluster, but only some add extra features such as encryption. To get started it doesn’t matter much which one you choose, as any CNI plugin will let the cluster operate, but as you get deeper into your Kubernetes journey it may be worth your while to compare the solutions. Let’s take a quick look at the most commonly used ones.
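Whichever plugin you pick, it registers itself with the cluster in the same way: by dropping a configuration file on each node, which the kubelet reads from /etc/cni/net.d/. A minimal sketch of such a file, modeled loosely on what Flannel installs (the exact contents vary per plugin, and the file name here is hypothetical):

```shell
# Hypothetical CNI config file, modeled on the one Flannel drops on each
# node. The kubelet scans /etc/cni/net.d/ and uses the first config it
# finds; "plugins" is a chain, invoked in order for every pod.
cat <<'EOF' > 10-example.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
cat 10-example.conflist
```

On a real node this file would live in /etc/cni/net.d/, not the working directory.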
Flannel: dead simple
Flannel is one of the most common networking solutions in Kubernetes deployments. It is by far one of the simplest out there, consisting of a single binary. Flannel creates a layer-3 overlay network across nodes, and each pod can communicate within the cluster over this network. Flannel uses VXLAN or UDP (among other backends) to encapsulate packets. To store data, Flannel uses etcd, but when paired with Kubernetes it can use the Kubernetes API to store state. Flannel doesn’t encrypt packets by default.
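The choice between VXLAN, UDP and the other backends lives in a small JSON config (in the official manifest it ships inside a ConfigMap). A sketch of that fragment, assuming the default pod network and the VXLAN backend — treat the exact values as illustrative:

```shell
# Stand-in for Flannel's net-conf.json (in the real deployment this
# lives in the kube-flannel-cfg ConfigMap). "Network" is the cluster-wide
# pod CIDR; "Backend.Type" selects the encapsulation: vxlan, udp, etc.
cat <<'EOF' > net-conf.json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
EOF
cat net-conf.json
```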
You can apply Flannel using the following oneliner:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Calico: secure and powerful
While Flannel aims to provide a network that “just works”, Calico has performance and security as its main goals. Instead of creating a network overlay, it uses the BGP protocol (pardon the redundancy) to route packets between nodes without encapsulating them. This reduces the per-packet overhead, boosting performance. Calico can also handle authentication and authorization and enforce network policies. Calico uses etcd or the Kubernetes API to store state.
You can apply Calico using the following oneliner, but you’re encouraged to customize it:
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
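To make the “network policies” point concrete, here is a sketch of the kind of object a policy-capable plugin like Calico enforces: only pods labeled role=frontend may reach pods labeled app=db on port 5432. The names and labels are hypothetical; with a plugin that ignores policies (such as plain Flannel), this object is accepted by the API but has no effect.

```shell
# Hypothetical NetworkPolicy: restrict ingress to the database pods.
cat <<'EOF' > db-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 5432
EOF
# kubectl apply -f db-policy.yaml   # requires a running cluster
```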
Weave Net: the mesh revolutionist
Flannel and Calico rely on techniques that have been used in the industry for years. Weave Net instead creates a mesh network between the nodes, allowing them to communicate, and it doesn’t need etcd. On top of that, it also supports network policies.
If you’re wondering how Weave Net differs from Istio: Istio is a service mesh, while Weave Net is a more traditional network mesh.
You can apply Weave Net using the following oneliner:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
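That query string deserves a word: it embeds the output of `kubectl version`, base64-encoded so it is safe to place in a URL, which lets the Weave Net server return a manifest matching your cluster version. A sketch of what the substitution produces, using a stand-in string since we may not have a live cluster here (the real command pipes `kubectl version` instead of echo):

```shell
# Stand-in for the output of `kubectl version` (hypothetical value).
VERSION_OUTPUT="Client Version: v1.18.0"
# base64 makes the text URL-safe; tr strips the trailing newline.
ENCODED=$(echo "$VERSION_OUTPUT" | base64 | tr -d '\n')
echo "https://cloud.weave.works/k8s/net?k8s-version=$ENCODED"
```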
Canal: bringing Flannel and Calico together
Canal was born to combine the simplicity of Flannel with the power of Calico. Over time, though, it became clear that the goal could be achieved by working on the two existing projects (Flannel and Calico) rather than maintaining a new one. Today the Canal project lives in the Calico documentation as the “Calico for policy and flannel for networking” section, but it is still common to refer to this solution as Canal. Canal uses etcd, but can also leverage the Kubernetes API to store state. Canal provides the power of Calico (security, network policies) on top of a Flannel-managed overlay network.
You can apply Canal using the following commands; replace <your-pod-cidr> with the actual pod CIDR used in your Kubernetes cluster:
curl https://docs.projectcalico.org/v3.8/manifests/canal.yaml -O
POD_CIDR="<your-pod-cidr>"
sed -i -e "s?10.244.0.0/16?$POD_CIDR?g" canal.yaml
kubectl apply -f canal.yaml
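The sed step is worth a closer look: it rewrites the manifest’s default pod CIDR (10.244.0.0/16) to the one your cluster actually uses, and it uses ? as the delimiter because the CIDR itself contains slashes. A sketch of that step on a hypothetical one-line stand-in file (the real canal.yaml is a full manifest):

```shell
# Example CIDR for illustration only; use your cluster's actual value.
POD_CIDR="192.168.0.0/16"
# Stand-in for the relevant line of canal.yaml.
printf '{ "Network": "10.244.0.0/16" }\n' > canal-snippet.yaml
# ? as delimiter avoids escaping the slashes inside the CIDRs.
sed -i -e "s?10.244.0.0/16?$POD_CIDR?g" canal-snippet.yaml
cat canal-snippet.yaml
```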