Lesson 1.4: Multi-Node Cluster Setup Using Kind


What is kind?

kind (Kubernetes IN Docker) is a tool for running local Kubernetes clusters using Docker containers. It is designed for testing, development, and CI/CD pipelines.

Why is kind Used?

  • Local Development: kind allows you to run a Kubernetes cluster on your local machine without needing a full-blown cloud provider or virtual machines.
  • Lightweight and Fast: Since kind uses Docker containers to run Kubernetes nodes, it is lightweight and starts up quickly compared to other solutions like Minikube or kubeadm.
  • Multi-Node Clusters: kind can simulate multi-node clusters (control-plane and worker nodes) on a single machine, which is useful for testing advanced Kubernetes features like high availability, networking, and storage.
  • CI/CD Integration: kind is often used in CI/CD pipelines to spin up temporary Kubernetes clusters for testing applications and infrastructure as code (e.g., Helm charts, Kubernetes manifests).
  • Cross-Platform: kind works on Linux, macOS, and Windows (with Docker installed), making it a versatile tool for developers.

How kind Works:

kind creates Docker containers that act as Kubernetes nodes (control-plane and worker nodes).

  • It uses a pre-built Kubernetes node image to bootstrap the cluster.
  • The entire cluster runs inside Docker, making it isolated and easy to clean up.
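
For example, after a cluster has been created (as done later in this lesson with the name cka-cluster1), the node "machines" show up as ordinary Docker containers, and deleting the cluster removes them again. A minimal sketch, assuming that cluster name:

# Each kind node is just a container; filter by the cluster name prefix
docker ps --filter "name=cka-cluster1"

# Cleaning up is a single command: the node containers are deleted with the cluster
kind delete cluster --name cka-cluster1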

Example Use Cases:

  • Testing Kubernetes manifests or Helm charts locally.
  • Developing and debugging Kubernetes operators or controllers.

  • Simulating multi-node clusters for learning or experimentation.

Here is a step-by-step explanation of each part of the process for setting up a multi-node cluster using kind (Kubernetes IN Docker):

Set up a Linux server

[root@master ~]# whoami
root
[root@master ~]# hostname
master
[root@master ~]# hostname -I
192.168.208.100
[root@master ~]# yum -y install epel-release

Install Go

Here, you install Go on the system and verify its installation.

[root@master ~]# dnf -y install go-toolset
[root@master ~]# go version
go version go1.23.4 (Red Hat 1.23.4-1.el9) linux/arm64
[root@master ~]# vim helloworld.go
[root@master ~]# cat helloworld.go
package main

import "fmt"

func main() {
    fmt.Println("Hello Go World !")
}
[root@master ~]# go build helloworld.go
[root@master ~]# ./helloworld
Hello Go World !
  • dnf -y install go-toolset: Installs the Go programming language tools on the system.
  • go version: Verifies the installed version of Go.
  • Create a simple helloworld.go file: This file is used to test if Go is installed correctly. It prints "Hello Go World !" to the terminal when executed.
  • go build helloworld.go: Compiles the Go code into an executable binary.
  • ./helloworld: Runs the compiled Go program and displays "Hello Go World !" to ensure the Go setup works properly.

Install Docker

This section ensures Docker is installed and running on your server.

# remove conflict packages with Docker first
[root@master ~]# dnf -y remove podman runc
[root@master ~]# curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
[root@master ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/docker-ce.repo
[root@master ~]# dnf --enablerepo=docker-ce-stable -y install docker-ce
[root@master ~]# systemctl enable --now docker
[root@master ~]# rpm -q docker-ce
docker-ce-28.0.0-1.el9.aarch64
[root@master ~]# docker version
Client: Docker Engine - Community
 Version:           28.0.0
 API version:       1.48
 Go version:        go1.23.6
 Git commit:        f9ced58
 Built:             Wed Feb 19 22:12:15 2025
 OS/Arch:           linux/arm64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          28.0.0
  API version:      1.48 (minimum version 1.24)
  Go version:       go1.23.6
  Git commit:       af898ab
  Built:            Wed Feb 19 22:10:40 2025
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.7.25
  GitCommit:        bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
 runc:
  Version:          1.2.4
  GitCommit:        v1.2.4-0-g6c52b3f
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  • Remove conflicting packages (podman and runc): These might interfere with Docker, so they are removed.
  • Add the Docker repository: Download the Docker CE repository definition for CentOS/RHEL-based systems, then set enabled=0 so the repository is only used when explicitly requested, avoiding conflicts with the distribution's own packages.
  • Install Docker CE (Community Edition): Docker is installed from the Docker CE repository, enabled just for this transaction with --enablerepo=docker-ce-stable.
  • Enable and start Docker (systemctl enable --now docker): Starts the Docker service immediately and ensures it starts automatically on boot.
  • Check Docker version: Verifies that Docker is installed successfully by checking its version.
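
Beyond checking the version, a quick sanity check is to confirm the daemon can actually run a container. This is an optional step, assuming the host can reach Docker Hub:

# Confirm the service is running and enabled at boot
systemctl is-active docker
systemctl is-enabled docker

# Pull and run a throwaway test container (removed automatically on exit)
docker run --rm hello-world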

Install kind

This step installs the kind tool, which is used to create Kubernetes clusters in Docker containers.

[root@master bin]# uname -m
aarch64
[root@master bin]# pwd
/usr/local/bin
[root@master bin]# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-arm64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    98    0    98    0     0    301      0 --:--:-- --:--:-- --:--:--   303
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 9822k  100 9822k    0     0  3329k      0  0:00:02  0:00:02 --:--:-- 5178k
[root@master bin]# ls
kind
[root@master bin]# chmod +x kind
[root@master bin]# file ./kind
./kind: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
[root@master bin]# kind --version
kind version 0.27.0
  • Download kind: The appropriate version of kind for ARM architecture is downloaded.
  • Make it executable: chmod +x kind ensures the downloaded kind binary is executable.
  • Verify kind version: This confirms the installation of the kind tool by checking its version.
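
Since Go was installed earlier, an alternative (not used in this walkthrough) is to build kind with the Go toolchain instead of downloading the release binary; the version tag below simply mirrors the v0.27.0 binary used above:

# Alternative install path: build kind from source with go install
# The binary lands in $(go env GOPATH)/bin (typically ~/go/bin); add that directory to PATH
go install sigs.k8s.io/kind@v0.27.0
kind --version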

Fix configurations

[root@master ~]# echo fs.inotify.max_user_watches=655360 | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_watches=655360
[root@master ~]# echo fs.inotify.max_user_instances=1280 | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_instances=1280
[root@master ~]# sudo sysctl -p
fs.inotify.max_user_watches = 655360
fs.inotify.max_user_instances = 1280
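
These settings raise the kernel's inotify limits, which several kind nodes on one host can otherwise exhaust (a common symptom is pods failing with "too many open files"). The live values can be read back at any time:

# Read the current limits without changing anything
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances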

Create Cluster

[root@master ~]# kind create cluster --image kindest/node:v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f --name cka-cluster1
Creating cluster "cka-cluster1" ...
 Ensuring node image (kindest/node:v1.32.2) 🖼
 Preparing nodes 📦
 Writing configuration 📜
 Starting control-plane 🕹️
 Installing CNI 🔌
 Installing StorageClass 💾
Set kubectl context to "kind-cka-cluster1"
You can now use your cluster with:

kubectl cluster-info --context kind-cka-cluster1

Have a nice day! 👋
  • kind create cluster: This command creates a new Kubernetes cluster named cka-cluster1 using the specified kindest/node image version v1.32.2.
  • Progress feedback: The output shows the steps kind is taking to create the cluster, such as preparing nodes, writing configurations, and starting the control plane.
  • Set kubectl context: It automatically configures kubectl to use the newly created cluster by setting the context to kind-cka-cluster1.
  • Cluster Info: Once the cluster is created, you can use kubectl cluster-info to check the status of your Kubernetes cluster.
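
Two quick follow-up checks (both commands come from the tools already installed; the context name is taken from the output above):

# List all clusters kind manages on this host
kind get clusters

# Query the control plane of the new cluster through its kubectl context
kubectl cluster-info --context kind-cka-cluster1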

Kubectl

What is kubectl?

kubectl (pronounced "kube-control" or "kube-cuttle") is the command-line tool for interacting with Kubernetes clusters. It allows you to deploy, inspect, manage, and troubleshoot applications and resources in a Kubernetes cluster.

Why is kubectl Required?

  • Cluster Interaction: kubectl is the primary tool for communicating with the Kubernetes API server. Without it, you cannot directly manage or interact with your Kubernetes cluster.
  • Resource Management: You use kubectl to create, update, delete, and inspect Kubernetes resources like Pods, Deployments, Services, ConfigMaps, Secrets, etc.
  • Debugging and Troubleshooting: kubectl provides commands to view logs, exec into containers, describe resources, and check the status of your cluster and applications.
  • Automation and Scripting: kubectl can be used in scripts to automate tasks like deploying applications, scaling resources, or managing cluster configurations.
  • Portability: kubectl works with any Kubernetes cluster, whether it’s running locally (e.g., using kind, Minikube) or in the cloud (e.g., GKE, EKS, AKS).
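
A few of the day-to-day commands referred to in the list above, sketched with placeholder names (<pod-name> is not from this lesson):

kubectl get pods -A                                 # list pods across all namespaces
kubectl describe node cka-cluster1-control-plane    # inspect one resource in detail
kubectl logs <pod-name>                             # view a container's logs
kubectl exec -it <pod-name> -- sh                   # open a shell inside a running container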

Set up the kubectl command

# Install kubectl to a directory in your PATH (e.g., /usr/local/bin):
[root@master bin]# pwd
/usr/local/bin

# Download the kubectl binary for ARM64:
[root@master bin]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0    449      0 --:--:-- --:--:-- --:--:--   449
100 53.2M  100 53.2M    0     0  10.8M      0  0:00:04  0:00:04 --:--:-- 12.2M

# Make the binary executable:
[root@master bin]# chmod +x kubectl

# Verify the installation:
[root@master bin]# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2025-02-12T21:26:09Z"
  compiler: gc
  gitCommit: 67a30c0adcf52bd3f56ff0893ce19966be12991f
  gitTreeState: clean
  gitVersion: v1.32.2
  goVersion: go1.23.6
  major: "1"
  minor: "32"
  platform: linux/arm64
kustomizeVersion: v5.5.0

Check if ready

[root@master ~]# kubectl get nodes
NAME                         STATUS   ROLES           AGE    VERSION
cka-cluster1-control-plane   Ready    control-plane   148m   v1.32.2

Multi-node clusters

[root@master ~]# cat config.yaml
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30001
    hostPort: 30001
- role: worker
- role: worker
[root@master ~]# kind create cluster --image kindest/node:v1.29.14@sha256:8703bd94ee24e51b778d5556ae310c6c0fa67d761fae6379c8e0bb480e6fea29 --name cka-cluster2 --config config.yaml
[root@master ~]# kubectl config get-contexts
CURRENT   NAME                CLUSTER             AUTHINFO            NAMESPACE
*         kind-cka-cluster1   kind-cka-cluster1   kind-cka-cluster1
          kind-cka-cluster2   kind-cka-cluster2   kind-cka-cluster2

# Use created context
[root@master ~]# kubectl config use-context kind-cka-cluster2
Switched to context "kind-cka-cluster2".

# Check the nodes in the cluster
[root@master ~]# kubectl get nodes
NAME                         STATUS   ROLES           AGE   VERSION
cka-cluster2-control-plane   Ready    control-plane   34m   v1.32.2
cka-cluster2-worker          Ready    <none>          33m   v1.32.2
cka-cluster2-worker2         Ready    <none>          33m   v1.32.2

The image kindest/node:v1.29.14 is a Docker image specifically designed for use with Kind (Kubernetes IN Docker). It contains all the necessary components to run a Kubernetes node in a containerized environment. Here's a detailed explanation:

What is kindest/node?

  • kindest/node is the official Docker image provided by the Kind project.
  • It is preconfigured with:
    • The Kubernetes binaries (e.g., kubeadm, kubelet, kubectl).
    • The container runtime (usually containerd).
    • Other dependencies required to run a Kubernetes node.

These images are optimized for running Kubernetes clusters in Docker containers, making them lightweight and fast for local development and testing.

What does v1.29.14 mean?

  • v1.29.14 refers to the version of Kubernetes included in the image.
    • v1.29 is the major and minor version of Kubernetes (1.29).
    • 14 is the patch version, which includes bug fixes and security updates.
  • This image specifically contains Kubernetes version 1.29.14.

Why is this image used with Kind?

  • Kind uses these images to create Kubernetes nodes as Docker containers.
  • When you create a cluster with Kind, it spins up one or more containers using the kindest/node image, each acting as a Kubernetes node (either a control-plane node or a worker node).
  • The image is preconfigured to work seamlessly with Kind, so you don’t need to manually install or configure Kubernetes components.

Where does this image come from?

  • The kindest/node images are built and maintained by the Kind project.
  • They are hosted on Docker Hub: https://hub.docker.com/r/kindest/node/tags
  • You can find images for various Kubernetes versions, including older and newer releases.

Why specify the SHA256 hash?

  • In the command, the image is specified with a SHA256 hash:
    kindest/node:v1.29.14@sha256:8703bd94ee24e51b778d5556ae310c6c0fa67d761fae6379c8e0bb480e6fea29
  • The SHA256 hash ensures that you are using the exact same image every time you create a cluster.
  • This is important for reproducibility, as the v1.29.14 tag could potentially point to a different image if it were updated (though this is rare for stable releases).
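
If you want to compare the digest of a locally pulled node image against the one pinned in the command, Docker can print image digests directly (a general Docker feature, not specific to kind):

# Show locally pulled kindest/node images together with their sha256 digests
docker images --digests kindest/node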

Mapping ports to the host machine

You can map extra ports from the nodes to the host machine with extraPortMappings:

  • This can be useful if using NodePort services or daemonsets exposing host ports.
  • Note: binding the listenAddress to 127.0.0.1 may affect your ability to access the service.
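
As a sketch of how the mapping in config.yaml above could be exercised: a Deployment plus a NodePort Service pinned to nodePort 30001 becomes reachable from the host at localhost:30001. The names (web, the nginx image) are illustrative and not part of the original walkthrough:

# Illustrative only: expose nginx via NodePort 30001, matching the
# containerPort/hostPort 30001 mapping on the control-plane node in config.yaml
kubectl create deployment web --image=nginx

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
EOF

# From the host machine (outside the containers):
curl http://localhost:30001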

© 2025 Sanjeeb KC. All rights reserved.