Lesson 4.3: Network Policies


Network Policies in Kubernetes are used to control the traffic flow between pods. They allow you to specify how groups of pods are allowed to communicate with each other and other network endpoints. By default, all pods in a Kubernetes cluster can communicate with each other without any restrictions. Network Policies provide a way to enforce segmentation and security by defining rules that allow or deny traffic based on pod labels, namespaces, and IP blocks.

Key components of a Network Policy:

  • PodSelector: Selects the pods to which the policy applies.
  • PolicyTypes: Specifies whether the policy applies to ingress, egress, or both.
  • Ingress: Defines the rules for incoming traffic to the selected pods.
  • Egress: Defines the rules for outgoing traffic from the selected pods.
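Put together, these components form a manifest like the following minimal sketch. All names, labels, and port numbers here are illustrative placeholders, not taken from the lesson's cluster:

```yaml
# Illustrative sketch: a policy covering both traffic directions for
# pods labeled app: api. Every name and label below is a placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy            # hypothetical name
  namespace: default
spec:
  podSelector:                # selects the pods this policy applies to
    matchLabels:
      app: api
  policyTypes:                # the policy restricts both directions
  - Ingress
  - Egress
  ingress:                    # incoming: only from app: web pods on 8080
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 8080
  egress:                     # outgoing: only to app: db pods on 5432
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 5432
```

Note that a policy only takes effect if the cluster's CNI plugin implements Network Policies, which is why the next step is installing one.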
[root@master kubernetes]# cat config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30001
    hostPort: 30001
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
[root@master kubernetes]# kind create cluster --image kindest/node:v1.29.14@sha256:8703bd94ee24e51b778d5556ae310c6c0fa67d761fae6379c8e0bb480e6fea29 --name cka-new --config config.yaml
Creating cluster "cka-new" ...
 Ensuring node image (kindest/node:v1.29.14) 🖼
 Preparing nodes 📦 📦 📦
 Writing configuration 📜
 Starting control-plane 🕹️
 Installing StorageClass 💾
 Joining worker nodes 🚜
Set kubectl context to "kind-cka-new"
You can now use your cluster with:

kubectl cluster-info --context kind-cka-new

Have a nice day! 👋
  • In the node's describe output you can see an error message because no network plugin is installed; a CNI plugin is required to create a fully functional Kubernetes cluster.
    • KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized

Weave Net

[root@master kubernetes]# kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@master kubernetes]# kubectl get ds -n=kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   7m13s
weave-net    3         3         0       3            0           <none>                   32s

Cilium

[root@master kubernetes]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
[root@master kubernetes]# chmod 700 get_helm.sh
[root@master kubernetes]# ./get_helm.sh
[root@master kubernetes]# helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories
[root@master kubernetes]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master kubernetes]# helm install cilium cilium/cilium --version 1.14.0 --namespace kube-system
[root@master kubernetes]# kubectl get pods -n kube-system -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-6qmr4   1/1     Running   0          86s
cilium-j2qx9   1/1     Running   0          86s
cilium-x859w   1/1     Running   0          86s
[root@master kubernetes]# kubectl get nodes
NAME                    STATUS   ROLES           AGE   VERSION
cka-new-control-plane   Ready    control-plane   15m   v1.32.2
cka-new-worker          Ready    <none>          15m   v1.32.2
cka-new-worker2         Ready    <none>          15m   v1.32.2

Calico

Calico is a popular networking and network policy provider for Kubernetes. It implements Kubernetes Network Policies and provides additional features for advanced network management. Calico uses a distributed firewall model to enforce network policies, ensuring that traffic between pods is controlled according to the defined rules.

In the provided example, we have a Kubernetes cluster with three nodes (one control-plane and two workers) and a set of pods representing a frontend, backend, and a MySQL database. Initially, all pods can communicate with each other. The goal is to restrict access to the MySQL database (db) so that only the backend pod can access it.

Cluster Setup with Calico:

  • The cluster is created using kind with Calico as the CNI (Container Network Interface). The disableDefaultCNI: true option ensures that the default CNI is not used, and Calico is installed instead.
  • Calico is deployed using the manifest provided by the Calico project.
[root@master kubernetes]# cat cluster_config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30001
    hostPort: 30001
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
[root@master kubernetes]# kind create cluster --config cluster_config.yaml --name dev
Creating cluster "dev" ...
 Ensuring node image (kindest/node:v1.32.2) 🖼
 Preparing nodes 📦 📦 📦
 Writing configuration 📜
 Starting control-plane 🕹️
 Installing StorageClass 💾
 Joining worker nodes 🚜
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
[root@master kubernetes]# kubectl get nodes -o wide
NAME                STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION           CONTAINER-RUNTIME
dev-control-plane   NotReady   control-plane   40s   v1.32.2   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   5.14.0-390.el9.aarch64   containerd://2.0.2
dev-worker          NotReady   <none>          26s   v1.32.2   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   5.14.0-390.el9.aarch64   containerd://2.0.2
dev-worker2         NotReady   <none>          26s   v1.32.2   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   5.14.0-390.el9.aarch64   containerd://2.0.2
[root@master kubernetes]# kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml
[root@master kubernetes]# kubectl get pods -l k8s-app=calico-node -A
NAMESPACE     NAME                READY   STATUS    RESTARTS   AGE
kube-system   calico-node-5qxtf   0/1     Running   0          86s
kube-system   calico-node-glgmq   0/1     Running   0          86s
kube-system   calico-node-pjhtn   0/1     Running   0          86s
[root@master kubernetes]# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
dev-control-plane   Ready    control-plane   2m21s   v1.32.2
dev-worker          Ready    <none>          2m7s    v1.32.2
dev-worker2         Ready    <none>          2m7s    v1.32.2

Application Deployment:

  • The application.yml file defines three pods (frontend, backend, and mysql) and corresponding services.
  • The pods are deployed, and services are created to expose them within the cluster.
[root@master kubernetes]# cat application.yml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    role: frontend
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    role: frontend
spec:
  selector:
    role: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    role: backend
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    role: backend
spec:
  selector:
    role: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: mysql
spec:
  selector:
    name: mysql
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:latest
    env:
    - name: "MYSQL_USER"
      value: "mysql"
    - name: "MYSQL_PASSWORD"
      value: "mysql"
    - name: "MYSQL_DATABASE"
      value: "testdb"
    - name: "MYSQL_ROOT_PASSWORD"
      value: "verysecure"
    ports:
    - name: http
      containerPort: 3306
      protocol: TCP
[root@master kubernetes]# kubectl apply -f application.yml
pod/frontend created
service/frontend created
pod/backend created
service/backend created
service/db created
pod/mysql created
[root@master kubernetes]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m40s
frontend   1/1     Running   0          2m40s
mysql      1/1     Running   0          2m39s
[root@master kubernetes]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
backend      ClusterIP   10.96.216.109   <none>        80/TCP     2m46s
db           ClusterIP   10.96.83.156    <none>        3306/TCP   2m46s
frontend     ClusterIP   10.96.107.104   <none>        80/TCP     2m46s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    7m30s

Initial Connectivity:

  • Initially, the frontend pod can reach both the backend and mysql pods, which is verified by running curl and telnet from within the frontend pod: both the backend service and the db service respond.
[root@master kubernetes]# kubectl exec -it frontend -- sh
# curl backend:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# apt-get update && apt-get install telnet
# telnet db 3306
Trying 10.96.83.156...
Connected to db.
Escape character is '^]'.
I
9.2.0 XGyNO.#?H51o=cBcaching_sha2_password

Network Policy Application:

  • A Network Policy (network-policy.yml) is defined to restrict access to the mysql pod. The policy allows ingress traffic only from pods with the label role: backend on port 3306.
  • The policy is applied using kubectl apply -f network-policy.yml.
[root@master kubernetes]# cat network-policy.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-test
spec:
  podSelector:
    matchLabels:
      name: mysql
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - port: 3306
[root@master kubernetes]# kubectl apply -f network-policy.yml
networkpolicy.networking.k8s.io/db-test created
[root@master kubernetes]# kubectl get networkpolicy
NAME      POD-SELECTOR   AGE
db-test   name=mysql     8s
[root@master kubernetes]# kubectl describe networkpolicy db-test
Name:         db-test
Namespace:    default
Created on:   2025-03-09 17:49:24 +0800 CST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     name=mysql
  Allowing ingress traffic:
    To Port: 3306/TCP
    From:
      PodSelector: role=backend
  Not affecting egress traffic
  Policy Types: Ingress
  • podSelector: Selects the mysql pod using the label name: mysql.
  • policyTypes: Specifies that this policy applies to ingress traffic only.
  • ingress: Defines the allowed ingress traffic:
    • from: Allows traffic only from pods with the label role: backend.
    • ports: Allows traffic only on port 3306.
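Selective allow rules like db-test are often paired with a default-deny baseline. As a hedged sketch (this policy is not part of the lesson's setup): an empty podSelector matches every pod in the namespace, and listing Ingress under policyTypes with no ingress rules blocks all incoming traffic to those pods:

```yaml
# Sketch: deny all ingress to every pod in the namespace.
# The policy name is a hypothetical placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so no traffic is allowed
```

With such a baseline in place, each workload then needs an explicit allow policy like db-test to receive any traffic at all.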

Verification:

  • After applying the Network Policy, the frontend pod can no longer access the mysql pod, as verified by the telnet command.
  • The backend pod can still access the mysql pod, confirming that the Network Policy is correctly enforcing the desired traffic restrictions.
[root@master kubernetes]# kubectl exec -it frontend -- sh
# telnet db 3306
Trying 10.96.83.156...
[root@master kubernetes]# kubectl exec -it backend -- sh
# apt-get update && apt-get install telnet
# telnet db 3306
Trying 10.96.83.156...
Connected to db.
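The same mechanism works in the outgoing direction. As an illustrative extension (not applied in this lesson), an egress policy could confine the backend pod to the mysql pod on port 3306. Note that egress policies typically must also allow DNS, or service-name lookups like `db` stop resolving; the kube-dns selector shown here is a common convention and an assumption, not taken from this cluster:

```yaml
# Sketch: restrict backend's outgoing traffic to mysql:3306 plus DNS.
# The policy name and the kube-dns namespace/label selectors are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Egress
  egress:
  - to:                        # database traffic
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
  - to:                        # DNS lookups via kube-dns
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```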


© 2025 Sanjeeb KC. All rights reserved.