Installation of a Kubernetes (Windows/Linux) Cluster with containerd as the Runtime
Kubernetes Nodes
In a Kubernetes cluster, you will encounter two distinct categories of nodes:
Master Nodes: These nodes run the control plane and handle API calls for the various components of the Kubernetes cluster, including pods, replication controllers, services, nodes, and more.
Worker Nodes: These nodes provide the runtime environment for containers. Note that a group of pods can span multiple worker nodes, allowing for optimal resource allocation and management.
Prerequisites
Before diving into the installation, ensure that your environment meets the following prerequisites.
Linux nodes:
- An Ubuntu 22.04 system.
- Privileged access to the system (root or sudo user).
- Active internet connection.
- Minimum 2 GB RAM or more.
- Minimum 2 CPU cores (or 2 vCPUs).
- 20 GB of free disk space on /var (or more).
Windows worker nodes:
- A Windows OS with all updates applied.
- Minimum 2 GB RAM or more.
- 20 GB of free disk space (or more).
Step 1: Update and Upgrade Ubuntu (all nodes)
Begin by ensuring that your system is up to date. Open a terminal and execute the following commands:
sudo apt update
sudo apt upgrade
Step 2: Disable Swap (all nodes)
To enhance Kubernetes performance, disable swap and set essential kernel parameters. Run the following commands on all nodes to disable all swaps:
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
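To see exactly what that sed rule does before touching the real /etc/fstab, you can dry-run it on a throwaway copy (the sample entries below are hypothetical, for illustration only):

```shell
# Build a sample fstab (hypothetical entries) and apply the same sed rule to it.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any line containing " swap ", exactly as done for /etc/fstab above.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample

# The swap entry is now commented out; the root filesystem entry is untouched.
cat /tmp/fstab.sample
```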
Step 3: Add Kernel Parameters (all nodes)
Load the required kernel modules on all nodes:
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Configure the critical kernel parameters for Kubernetes using the following:
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Then, reload the changes:
sudo sysctl --system
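A quick way to confirm the parameters took effect is to read them back from /proc/sys directly (a sketch; note the two bridge keys only appear once the br_netfilter module is loaded):

```shell
# Check each kernel parameter via /proc/sys; report missing keys instead of failing.
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  if [ -f "/proc/sys/$key" ]; then
    echo "$key = $(cat /proc/sys/$key)"   # expect 1 after sysctl --system
  else
    echo "$key missing (is br_netfilter loaded?)"
  fi
done
```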
Step 4: Install Containerd Runtime (all nodes)
We are using the containerd runtime. Install containerd and its dependencies with the following commands:
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Enable the Docker repository:
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the package list and install containerd:
sudo apt update
sudo apt install -y containerd.io
Configure containerd to use systemd as the cgroup driver:
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
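If you want to sanity-check the substitution before restarting containerd, the same sed expression can be exercised on a minimal TOML fragment (a hypothetical excerpt, not the full generated config):

```shell
# Minimal stand-in for the relevant section of /etc/containerd/config.toml.
cat > /tmp/config.toml.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Flip SystemdCgroup to true, exactly as done on the real config above.
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /tmp/config.toml.sample
grep SystemdCgroup /tmp/config.toml.sample
```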
Restart and enable the containerd service:
sudo systemctl restart containerd
sudo systemctl enable containerd
Step 5: Add Apt Repository for Kubernetes (all nodes)
Kubernetes packages are not available in the default Ubuntu 22.04 repositories, and the legacy apt.kubernetes.io repository has been shut down. Add the community-owned pkgs.k8s.io repository (v1.28, to match the versions used later in this guide) with the following commands:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Step 6: Install Kubectl, Kubeadm, and Kubelet (all nodes)
After adding the repositories, install essential Kubernetes components, including kubectl, kubelet, and kubeadm, on all nodes with the following commands:
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Step 7: Initialize Kubernetes Cluster with Kubeadm (master node)
With all the prerequisites in place, initialize the Kubernetes cluster on the master node using the following Kubeadm command:
sudo kubeadm init --config=configfile.yaml
After the initialization is complete, make a note of the kubeadm join command for future reference.
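The --config flag expects a kubeadm configuration file; the contents of configfile.yaml are not shown in this guide. A minimal sketch might look like the following (the kubernetesVersion and podSubnet values are assumptions; podSubnet here matches Calico's default pool):

```yaml
# Hypothetical minimal kubeadm config; adjust the values for your cluster.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.3
networking:
  podSubnet: 192.168.0.0/16   # should match the Calico IP pool CIDR
```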
Run the following commands on the master node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Next, use kubectl commands to check the cluster and node status:
kubectl get nodes
Step 8: Add Worker Nodes to the Cluster (worker nodes)
On each worker node, run the kubeadm join command you noted down earlier (with sudo), for example:
sudo kubeadm join 146.190.135.86:6443 --token f1h95l.u4nkex9cw8d0g63w --discovery-token-ca-cert-hash sha256:6d15f2a79bdb38d1666af50c85f060b9fadc73f13c932e0e2a9eeef08f51f91a
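The --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's public key; on the master it can be re-derived from /etc/kubernetes/pki/ca.crt if the join command is lost. The sketch below runs the same pipeline against a throwaway self-signed certificate, so it is safe to try anywhere:

```shell
# Generate a throwaway certificate as a stand-in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -nodes -newkey rsa:2048 -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest --
# the same derivation kubeadm uses for the discovery hash.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```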
Step 9: Install Kubernetes Network Plugin (master node)
To enable communication between pods in the cluster, you need a network plugin. Install the Calico network plugin with the following command from the master node:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/apiserver.yaml
openssl req -x509 -nodes -newkey rsa:4096 -keyout apiserver.key -out apiserver.crt -days 365 -subj "/" -addext "subjectAltName = DNS:calico-api.calico-apiserver.svc"
kubectl create secret -n calico-apiserver generic calico-apiserver-certs --from-file=apiserver.key --from-file=apiserver.crt
kubectl patch apiservice v3.projectcalico.org -p \
    "{\"spec\": {\"caBundle\": \"$(kubectl get secret -n calico-apiserver calico-apiserver-certs -o go-template='{{ index .data "apiserver.crt" }}')\"}}"
kubectl api-resources | grep '\.projectcalico.org'
Download the VXLAN manifest from https://github.com/projectcalico/calico/blob/master/manifests/calico-vxlan.yaml, save it as a local file, and apply it with kubectl.
Step 10: Configure Calico for Windows Nodes (master node)
curl -L https://github.com/projectcalico/calico/releases/download/v3.26.3/calicoctl-linux-amd64 -o calicoctl
chmod +x ./calicoctl
./calicoctl ipam configure --strictaffinity=true
kubectl patch installation default --type=merge -p '{"spec": {"calicoNetwork": {"bgp": "Disabled"}}}'
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/calico-windows-vxlan.yaml -o calico-windows.yaml
Edit calico-windows.yaml (e.g. with nano), update the configuration values and the namespace (kube-system), and then apply it with kubectl apply -f calico-windows.yaml.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/windows-kube-proxy.yaml -o windows-kube-proxy.yaml
kubectl apply -f windows-kube-proxy.yaml
Step 11: Set Up the Windows Worker Nodes (Windows, PowerShell)
mkdir c:\k
Copy the Kubernetes kubeconfig file from the control plane node (default location: $HOME/.kube/config) to c:\k\config.
Invoke-WebRequest https://github.com/projectcalico/calico/releases/download/v3.26.3/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1
Invoke-WebRequest https://docs.tigera.io/calico/3.26/scripts/Install-Containerd.ps1 -OutFile c:\Install-Containerd.ps1
c:\Install-Containerd.ps1 -ContainerDVersion 1.7.3 -CNIConfigPath "c:/etc/cni/net.d" -CNIBinPath "c:/opt/cni/bin"
Invoke-WebRequest https://docs.tigera.io/calico/3.26/scripts/PrepareNode.ps1 -OutFile c:\PrepareNode.ps1
c:\PrepareNode.ps1 -KubernetesVersion v1.28.3 -ContainerRuntime ContainerD
Get the cluster's Kubernetes API server host and port, which will be used to update the Calico for Windows config map. The API server host and port are required so that the Calico for Windows installation script can create a kubeconfig file for Calico services. If your Windows nodes already have Calico for Windows installed manually, skip this step. The installation script will use the API server host and port from your node's existing kubeconfig file if the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT variables are not provided in the calico-windows-config ConfigMap.
Edit the calico-windows-config ConfigMap in the downloaded manifest and ensure the required variables are correct for your cluster:
- CALICO_NETWORKING_BACKEND: this should be set to vxlan.
- KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT: the Kubernetes API server host and port (discovered in the previous step) used to create a kubeconfig file for Calico services. If your node already has an existing kubeconfig file, leave these variables blank.
Then apply the manifest and follow the installation logs:
kubectl create -f calico-windows.yaml
kubectl logs -f -n calico-system -l k8s-app=calico-node-windows -c install
The windows-kube-proxy.yaml manifest was already downloaded and applied in Step 10; verify that the Windows kube-proxy DaemonSet is running:
kubectl describe ds -n kube-system kube-proxy-windows
Finally, inspect the Calico IP pool:
calicoctl get ippool -o yaml
The returned pool should include the following settings:
natOutgoing: true
nodeSelector: all()
vxlanMode: Always
Congratulations! You now have a Kubernetes cluster with Calico for Windows worker nodes and a Linux control plane node.