Lesson 1.3: Multi-node Kubernetes cluster setup using kubeadm


Configure the Manager Node first.

  1. Configure nginx
[root@mgr ~]# dnf -y install nginx nginx-mod-stream
[root@mgr ~]# vi /etc/nginx/nginx.conf
    server {
    	# line 38 : change listening port
        listen       8080;
        listen       [::]:8080;
 
# add to the end : proxy settings
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}
 
[root@mgr ~]# systemctl enable --now nginx
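Before relying on the proxy, it is worth validating the configuration and confirming nginx listens on the API port. This is an optional check; the backend at 10.0.0.30 only answers after the control plane is initialized later in this lesson.
[root@mgr ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# nginx should be listening on both 8080 (http) and 6443 (stream proxy)
[root@mgr ~]# ss -lntp | grep -E ':(8080|6443)'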
  2. If SELinux is enabled, change the policy as follows.
[root@mgr ~]# setsebool -P httpd_can_network_connect on
[root@mgr ~]# setsebool -P httpd_graceful_shutdown on
[root@mgr ~]# setsebool -P httpd_can_network_relay on
[root@mgr ~]# setsebool -P nis_enabled on
[root@mgr ~]# semanage port -a -t http_port_t -p tcp 6443
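Optionally, confirm the booleans and the port label took effect.
[root@mgr ~]# getsebool httpd_can_network_connect httpd_can_network_relay
[root@mgr ~]# semanage port -l | grep http_port_t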
  3. If Firewalld is running, allow the related services.
[root@mgr ~]# firewall-cmd --add-service={kube-apiserver,http,https}
success
[root@mgr ~]# firewall-cmd --runtime-to-permanent
success
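Optionally, list the active services to confirm the rules were saved.
[root@mgr ~]# firewall-cmd --list-services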
  4. Install the Kubernetes client on the Manager Node. Replace the version number in the repository URL with the one you want to install.
[root@mgr ~]# cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=0
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
 
[root@mgr ~]# dnf --enablerepo=kubernetes -y install kubectl
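Optionally, confirm the installed client version matches the repository you enabled.
[root@mgr ~]# kubectl version --client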

Configuring the Control Plane Node

  1. Run the initial setup on the Control Plane Node.

For [--control-plane-endpoint], specify the hostname or IP address that is shared across the Kubernetes cluster. When proxying the cluster through a Manager Node as in this example, specify the Manager Node's IP address.

For [--apiserver-advertise-address], specify the Control Plane Node's IP address.

For the [--pod-network-cidr] option, specify the network the Pod Network will use. Several Pod Network plugins are available (refer to the details below).

https://kubernetes.io/docs/concepts/cluster-administration/networking/

This example uses Calico.
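As an aside, the same settings can be kept in a kubeadm configuration file and passed with the [--config] option instead of the individual flags shown below. The following is a minimal sketch assuming the addresses and CRI socket used in this example; [kubeadm-config.yaml] is an arbitrary file name.
[root@dlp ~]# cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  # Control Plane Node IP address
  advertiseAddress: 10.0.0.30
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Manager Node IP address (the proxy endpoint)
controlPlaneEndpoint: "10.0.0.25:6443"
networking:
  # network for Calico
  podSubnet: 192.168.0.0/16
EOF
[root@dlp ~]# kubeadm init --config kubeadm-config.yaml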

# if Firewalld is running, allow services below
[root@dlp ~]# firewall-cmd --add-service={kube-apiserver,kube-control-plane,kube-control-plane-secure,kubelet,kubelet-readonly,http,https}
success
[root@dlp ~]# firewall-cmd --runtime-to-permanent
success
[root@dlp ~]# kubeadm init --control-plane-endpoint=10.0.0.25 --apiserver-advertise-address=10.0.0.30 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock
[init] Using Kubernetes version: v1.30.3
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.offix.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30 10.0.0.25]
 
.....
.....
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
 
  kubeadm join 10.0.0.25:6443 --token ci35m5.rplxk3qvq8n05kth \
        --discovery-token-ca-cert-hash sha256:aa5a5cd977b2c6d24b5ac589b83d58b5cbf5ddc49650851f41d3b56cdf533ce6 \
        --control-plane
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.0.0.25:6443 --token ci35m5.rplxk3qvq8n05kth \
        --discovery-token-ca-cert-hash sha256:aa5a5cd977b2c6d24b5ac589b83d58b5cbf5ddc49650851f41d3b56cdf533ce6
 
# transfer the cluster admin authentication file to the Manager Node as any user
[root@dlp ~]# scp /etc/kubernetes/admin.conf centos@10.0.0.25:/tmp
centos@10.0.0.25's password:
admin.conf                                    100% 5645    20.7MB/s   00:00
  2. Work on the Manager Node. Configure the Pod Network with Calico.
# set up cluster admin with the file you transferred from the Control Plane Node
# if you set a common user as cluster admin, log in as that user and run [sudo cp/chown ***]
[root@mgr ~]# mkdir -p $HOME/.kube
[root@mgr ~]# cp /tmp/admin.conf $HOME/.kube/config
[root@mgr ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@mgr ~]# wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
[root@mgr ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
 
# show state : OK if STATUS = Ready
[root@mgr ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
dlp.offix.com   Ready    control-plane   4m31s   v1.30.3
 
# show state : OK if all are Running
[root@mgr ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-86996b59f4-6kn2r   1/1     Running   0          41s
kube-system   calico-node-2jpww                          1/1     Running   0          41s
kube-system   coredns-7db6d8ff4d-729g9                   1/1     Running   0          4m28s
kube-system   coredns-7db6d8ff4d-z8cxh                   1/1     Running   0          4m28s
kube-system   etcd-dlp.offix.com                         1/1     Running   0          4m46s
kube-system   kube-apiserver-dlp.offix.com               1/1     Running   0          4m45s
kube-system   kube-controller-manager-dlp.offix.com      1/1     Running   0          4m45s
kube-system   kube-proxy-npf5c                           1/1     Running   0          4m28s
kube-system   kube-scheduler-dlp.offix.com               1/1     Running   0          4m45s
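Optionally, instead of polling [kubectl get pods], you can block until the Calico node pods report Ready. This assumes the [k8s-app=calico-node] label that the calico.yaml manifest applies to its DaemonSet.
[root@mgr ~]# kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s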

Configuring Worker Nodes

  1. On all Kubernetes cluster nodes except the Manager Node, change kernel settings to meet the system requirements.
 
[root@dlp ~]# cat > /etc/sysctl.d/99-k8s-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
[root@dlp ~]# sysctl --system
[root@dlp ~]# modprobe overlay
[root@dlp ~]# modprobe br_netfilter
[root@dlp ~]# echo -e "overlay\nbr_netfilter" > /etc/modules-load.d/k8s.conf
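You can verify that the parameters and modules are active with the following optional checks.
[root@dlp ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
[root@dlp ~]# lsmod | grep -e overlay -e br_netfilter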
 
# install from EPEL
[root@dlp ~]# dnf --enablerepo=epel -y install iptables-legacy
[root@dlp ~]# alternatives --config iptables
 
There are 2 programs which provide 'iptables'.
 
  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/iptables-nft
   2           /usr/sbin/iptables-legacy
 
# switch to [iptables-legacy]
Enter to keep the current selection[+], or type selection number: 2
 
# disable swap
[root@dlp ~]# swapoff -a
[root@dlp ~]# vi /etc/fstab
# comment out the Swap line
#/dev/mapper/cs-swap     none                    swap    defaults        0 0
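A quick optional check confirms the legacy backend is selected and swap is really off.
# the version string shows (legacy) when the switch took effect
[root@dlp ~]# iptables --version
# no output means no active swap devices
[root@dlp ~]# swapon --show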
  2. On all Kubernetes cluster nodes except the Manager Node, install the required packages. This example uses CRI-O as the container runtime.
 
[root@dlp ~]# dnf -y install centos-release-okd-4.16
[root@dlp ~]# sed -i -e "s/enabled=1/enabled=0/g" /etc/yum.repos.d/CentOS-OKD-4.16.repo
[root@dlp ~]# dnf --enablerepo=centos-okd-4.16 -y install cri-o
[root@dlp ~]# systemctl enable --now crio
[root@dlp ~]# cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=0
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
 
[root@dlp ~]# dnf --enablerepo=kubernetes -y install kubeadm kubelet cri-tools iproute-tc container-selinux
[root@dlp ~]# systemctl enable kubelet
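Before joining, you can optionally confirm that kubeadm and the CRI-O runtime respond; [crictl] comes from the [cri-tools] package installed above.
[root@dlp ~]# kubeadm version
[root@dlp ~]# crictl --runtime-endpoint unix:///var/run/crio/crio.sock version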
  3. Join the Kubernetes cluster that was initialized on the Control Plane Node.
# if Firewalld is running, disable it
[root@node01 ~]# systemctl disable --now firewalld
[root@node01 ~]# kubeadm join 10.0.0.25:6443 --token ci35m5.rplxk3qvq8n05kth \
--discovery-token-ca-cert-hash sha256:aa5a5cd977b2c6d24b5ac589b83d58b5cbf5ddc49650851f41d3b56cdf533ce6
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 11.000808301s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
 
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
 
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# OK if [This node has joined the cluster]
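Note that the bootstrap token printed by [kubeadm init] expires after 24 hours by default. If it has expired by the time you join a node, generate a fresh join command on the Control Plane Node.
[root@dlp ~]# kubeadm token create --print-join-command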
  4. Verify the status on the Manager Node or on any client host where you set up the cluster admin file. It is OK if all nodes show STATUS = Ready.
 
[root@mgr ~]# kubectl get nodes
NAME               STATUS   ROLES           AGE     VERSION
dlp.offix.com      Ready    control-plane   12m     v1.30.3
node01.offix.com   Ready    <none>          3m35s   v1.30.3
node02.offix.com   Ready    <none>          19s     v1.30.3
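The [<none>] under ROLES for the worker nodes is cosmetic. If you prefer them labeled, you can set the role label yourself; the label value here is arbitrary.
[root@mgr ~]# kubectl label node node01.offix.com node-role.kubernetes.io/worker=worker
[root@mgr ~]# kubectl label node node02.offix.com node-role.kubernetes.io/worker=worker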

kubectl Command Completion Setup

[root@mgr ~]# kubectl completion bash >/root/kubecom.sh
 
[root@mgr ~]# vi ~/.bashrc
# add to the end
source /root/kubecom.sh
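If you also use the common [k] alias, it can reuse the same completion. [__start_kubectl] is the entry function that [kubectl completion bash] generates.
[root@mgr ~]# cat >> /root/.bashrc <<'EOF'
alias k=kubectl
complete -o default -F __start_kubectl k
EOF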

© 2025 Sanjeeb KC. All rights reserved.