Kubernetes Certification Training Course : Lecture 8

In Docker, every running container gets its own IP address by default. Docker follows the CNM (Container Network Model), where networking is built into the Docker engine. Kubernetes, by contrast, ships with no network implementation by default; a network plugin has to be installed separately (Weave, Romana, Calico, etc.). This is the so-called CNI (Container Network Interface) model.

In general, when two or more containers run on the same machine, they can communicate with one another via a network type called a Bridge Network. But what if the containers run on two different machines? This is where an Overlay Network is used: it builds communication between containers spread across two or more machines.

Host Network means that containers do not get separate IP addresses: they share the network namespace, and therefore the IP address, of the host they run on.

Kubernetes uses an Overlay Network, since in most cases it schedules Pods across more than one machine.

ip a – command that shows all IP interfaces on the host (including the built-in Docker networking and the separately installed network plugin for Kubernetes)

How are IP addresses assigned to Pods on each Node in Kubernetes? On every Node, the separately installed network plugin (Weave, Romana, Calico, etc.) comes with a range of IP addresses it can assign to Pods (run the ip a command, find the plugin's interface, and look at the inet and brd fields). As for the system Pods (namespace kube-system: API Server, Scheduler, etcd, …), they use Host Networking: they are served not by the plugin's network but by the Host Network, so their IP addresses match those of the hosts they run on.
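To illustrate how a Pod ends up on the Host Network: its spec declares hostNetwork: true, which is exactly what the control-plane static Pod manifests contain. A minimal sketch (the Pod name and image here are illustrative, not from the source):

```yaml
# Minimal sketch of a Pod that uses Host Networking (names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: host-net-demo
spec:
  hostNetwork: true        # the Pod shares the Node's network namespace
  containers:
  - name: main
    image: nginx
```

Such a Pod shows the Node's own IP address in `kubectl get pods -o wide` output.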

Here is a set of actions showcasing this topic. In Kubernetes, Services do not communicate with Pods through IP addresses directly; internally, Services are addressed by DNS names.
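As an illustration of internal DNS naming: a Service named backend in namespace demo gets the in-cluster DNS name backend.demo.svc.cluster.local (assuming the default cluster.local domain; the names here are hypothetical). A minimal Service sketch:

```yaml
# Illustrative Service; Pods carrying label app: backend are selected
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: demo
spec:
  selector:
    app: backend
  ports:
  - port: 80          # port the Service listens on
    targetPort: 8080  # port the Pods listen on
```

Other Pods in the demo namespace can reach it simply as http://backend; Pods in other namespaces use backend.demo or the full DNS name.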

Network Policies

Imagine different Pods running on different Nodes in the Kubernetes Cluster. By default, any Pod can communicate with any other Pod, because all Pods run in the same flat network. Sometimes this behavior must be restricted. For example, a database Pod should communicate only with the application Pod and with no other. For every Pod there are two types of traffic: incoming (Ingress, not to be confused with the Ingress Controller) and outgoing (Egress). If only particular Pods should be allowed to communicate with each other, labels must be assigned to those Pods to serve this need. Another way is to restrict incoming requests to an IP address range or to particular ports. More about network policies can be read here.
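The database-only-from-application example above can be sketched as a NetworkPolicy; the label values and port are illustrative assumptions:

```yaml
# Sketch: allow Ingress to database Pods only from application Pods
# (labels app: database / app: backend and port 5432 are illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app-only
spec:
  podSelector:
    matchLabels:
      app: database        # the policy applies to these Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend     # only these Pods may connect
    ports:
    - protocol: TCP
      port: 5432           # e.g. a PostgreSQL port
```

Once a Pod is selected by any NetworkPolicy with Ingress in policyTypes, all incoming traffic not explicitly allowed is denied.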

Network Policies are enforced by the separately installed network plugin (Weave, Romana, Calico, etc.; note that not all plugins support network policies: Flannel, for example, does not, while Weave and Calico do).

Troubleshooting

journalctl -u kubelet – command that shows all kubelet-related logs (note that all Master components (API Server, Scheduler, etcd, Controller Manager) run as static Pods managed by the kubelet)

If the kubectl command is not working, it is probably because a connection with the API Server cannot be established. But it can also happen if the kubeconfig file is corrupted or missing (by default it lives in the current user’s home directory, ~/.kube/config). If a Node is not in Ready status, the kubelet may not be running on that Node; the kubelet logs should be examined on that Node to find the possible reason. If a static Pod is not getting created, check the kubelet configuration file on the Node (/var/lib/kubelet/config.yaml, the staticPodPath parameter), which points to the directory containing the static Pod manifests. See this file with a description of troubleshooting common issues.
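For reference, the relevant fragment of the kubelet configuration typically looks like this (the path shown is the kubeadm default and may differ on your cluster):

```yaml
# Fragment of /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests   # kubelet watches this directory
```

Any manifest dropped into that directory is started by the kubelet as a static Pod, and removing the file stops the Pod.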

Here is an example of a working microservices application (this repository has to be cloned on the Master). This script must be executed to launch the application, but before running the script, a namespace called sock-shop must be created (kubectl create ns sock-shop). The front-end Pod is exposed via a NodePort Service and can be accessed at the Master IP (or any Node IP) : NodePort number.
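The namespace-creation step above can equivalently be expressed declaratively as a manifest:

```yaml
# Equivalent of `kubectl create ns sock-shop`, applied with `kubectl apply -f`
apiVersion: v1
kind: Namespace
metadata:
  name: sock-shop
```

The declarative form is convenient when the whole application, namespace included, is kept under version control.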
