Introduction

The goal of this guide is to show how to use kubeadm to create a Kubernetes cluster on three machines provisioned on Oracle Cloud Infrastructure (OCI). The following sections describe the topology used, the configuration needed on OCI, and all the steps performed to deploy a Kubernetes cluster on top of these machines.

Architecture

The architecture is quite simple: three virtual machines, one for the control plane and two for the worker nodes.

Provisioning the infrastructure on OCI

Creating a compartment

Follow these steps.

  1. Click on the hamburger menu and then click on “Identity & Security”

  2. Click on “Create Compartment”

  3. Fill in the form and click on “Create Compartment”

Creating a virtual cloud network (VCN)

  1. Click on the hamburger menu and then click on “Networking > Virtual Cloud Networks”.

  2. Select the compartment created before.

  3. Click on “Start VCN Wizard”.

  4. Select the option with Internet connectivity (“Create VCN with Internet Connectivity”).

  5. Provide a name and a compartment, and then click on “Next”

  6. Check the configuration and then click on “Create”

  7. If everything goes well, the VCN and its related resources are created.

Creating the virtual machines

Creating the control-plane

  1. Click on the hamburger menu and then click on “Compute > Instances”.

  2. Choose the right compartment and click on “Instances”.

  3. Click on “Create Instance”

  4. Below, we have the options used for the control-plane.

    Name and Compartment

    Image and shape

    Networking

    Don’t forget to save your private key.

    Now you should click on “Create”.

The process used to create the two worker nodes is pretty much the same, so the summary below lists the settings used for each instance.

Control-plane
  Compartment: cubeClusterCompartment
  Image: Canonical Ubuntu 22.04
  Shape OCPUs: 2
  Shape memory: 12 GB
  Networking VCN: kubeclustervcn
  Networking subnet: public subnet-clustervcn
  Networking public IPv4 address: Yes
  SSH keys: “Generate a key pair for me” (save the private key)

Worker-01
  Compartment: cubeClusterCompartment
  Image: Canonical Ubuntu 22.04
  Shape OCPUs: 1
  Shape memory: 6 GB
  Networking VCN: kubeclustervcn
  Networking subnet: public subnet-clustervcn
  Networking public IPv4 address: Yes
  SSH keys: “Generate a key pair for me” (save the private key)

Worker-02
  Same settings as Worker-01.

Installing kubeadm, kubelet and kubectl (ALL NODES)

Source: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

We should run these commands on the control-plane node and our worker nodes.

  1. Update the apt package index and install packages needed to use the Kubernetes apt repository.

    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
    

  2. Download the Google Cloud public signing key.

    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
    

  3. Add the Kubernetes apt repository.

    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    

  4. Update apt package index, install kubelet, kubeadm and kubectl, and pin their version.

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
    

Installing a container runtime (ALL NODES)

In this case, we are going to install containerd, a runtime that implements the Container Runtime Interface (CRI).

Forwarding IPv4 and letting iptables see bridged traffic

Source: https://kubernetes.io/docs/setup/production-environment/container-runtimes/

This is one of the prerequisites for running a container runtime.

  1. Perform the configuration below.

        cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
        overlay
        br_netfilter
        EOF
    

    sudo modprobe overlay
    sudo modprobe br_netfilter
    

    # sysctl params required by setup, params persist across reboots

        cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        EOF
    

    # Apply sysctl params without reboot

    sudo sysctl --system
    

  2. Verify that the br_netfilter and overlay modules are loaded by running the following commands:

    lsmod | grep br_netfilter
    lsmod | grep overlay
    

  3. Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:

    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
    

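
The two verification steps above can be combined into one quick check; a small sketch that reads the values straight from /proc and flags anything that is not 1:

```shell
# Print OK/FAIL for each kernel setting required by Kubernetes networking.
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables \
         /proc/sys/net/ipv4/ip_forward; do
  if [ "$(cat "$f" 2>/dev/null)" = "1" ]; then
    echo "OK   $f"
  else
    echo "FAIL $f"   # a missing file also means br_netfilter is not loaded
  fi
done
```
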
Installing containerd

Source: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

  1. Get the official binary for the Linux distribution and the hardware you are using; in our case, an arm64 build.

    wget https://github.com/containerd/containerd/releases/download/v1.7.2/containerd-1.7.2-linux-arm64.tar.gz
    

  2. Unpack the binary

    sudo tar Cxzvf /usr/local containerd-1.7.2-linux-arm64.tar.gz
    

  3. Configure systemd by executing the commands below.

    wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
    sudo cp containerd.service /lib/systemd/system/containerd.service
    sudo systemctl daemon-reload
    sudo systemctl enable --now containerd
    

  4. Check the containerd service (for example, with “sudo systemctl status containerd”).

Installing runc

Source: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Get the proper binary for your Linux distribution and hardware and then install it by using the commands below.

wget https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.arm64
sudo install -m 755 runc.arm64 /usr/local/sbin/runc

Installing CNI plugins

Source: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Get the proper binary for your Linux distribution and hardware and then install it by using the commands below.

wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.3.0.tgz


Creating /etc/containerd/config.toml

Source: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Execute the commands below to create the “config.toml” file with default values.

sudo mkdir -p /etc/containerd/
containerd config default | sudo tee /etc/containerd/config.toml

Configuring the systemd cgroup driver

Source: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, follow the instructions below.

sudo vi /etc/containerd/config.toml

Then set SystemdCgroup to true as is shown below.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true  <--------
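
If you prefer a non-interactive edit, the flag can be flipped with sed; a sketch that assumes the stock layout produced by “containerd config default”, where SystemdCgroup appears exactly once:

```shell
# Flip the cgroup driver flag in place; -i edits the file directly.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Confirm the change took effect.
grep 'SystemdCgroup' /etc/containerd/config.toml
```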

Execute these commands

sudo systemctl restart containerd
sudo systemctl restart kubelet

Initializing your control-plane node (control-plane node)

This is the moment we have been waiting for.

  1. Go to control-plane node.

  2. Run this command.

    sudo kubeadm init --v=5
    
  3. If everything goes well, kubeadm reports that the control plane was initialized successfully.

    The output indicates that we should deploy a pod network (covered below) and provides a token and a CA certificate hash for adding worker nodes to the cluster, so please save this information.

Deploying a pod network to the cluster (control-plane node)

Source: https://www.golinuxcloud.com/calico-kubernetes/

Follow the steps below.

  1. Get the Calico networking manifest.

    curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O
    
  2. Execute below command.

    sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl apply -f calico.yaml
    

Preparing the cluster to add worker nodes

Let’s follow the steps below.

Configuring the VCN on OCI to allow the traffic on port 6443

This is important as worker nodes will try to contact the control-plane node via port 6443.

  1. Go to OCI console.

  2. Click on “Networking > Virtual cloud networks”.

  3. Click on the VCN the Kubernetes cluster is using.

  4. Click on “Security Lists”.

  5. Choose the default security list.

  6. Click on “Add Ingress Rules”.

  7. Fill in the form and click on “Add Ingress Rules”.

Configuring iptables to allow traffic through port 6443 (control-plane node)

This configuration will be gone after rebooting the control-plane node. Keep in mind that worker nodes talk to the API server on port 6443 continuously, not just when joining, so if you reboot the control plane, re-add the rule or persist it (for example, with iptables-persistent).

Execute the command below.

sudo iptables -I INPUT -p tcp -m state --state NEW,ESTABLISHED -m tcp --dport 6443 -m comment --comment "Required for Kubernetes." -j ACCEPT

Adding worker nodes to the Kubernetes cluster

Follow the instructions below.

  1. If you lose the token needed to join worker nodes, generate a new one with the command below.

    sudo kubeadm token create --print-join-command
    

  2. Execute the generated command on every worker node; in our case we have two nodes.

    sudo kubeadm join 10.0.0.228:6443 --token fvnkpq.5nuyu1o3064fryn4 --discovery-token-ca-cert-hash sha256:baffbdb7b6993bdc41384ca46053536e05b04e0359e8b8dac78a22e6a4422358
    
  3. If everything goes well, kubeadm reports that the node has joined the cluster.
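
For reference, the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA’s public key. Per the kubeadm documentation, it can be recomputed on the control-plane node like this (shown for an RSA CA key, which is what kubeadm generates by default):

```shell
# Derive the discovery hash from the cluster CA certificate.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```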

Checking the nodes and pods with kubectl

  1. Go to the control-plane node and become root.

  2. Set this variable.

    export KUBECONFIG=/etc/kubernetes/admin.conf
    
  3. Execute the command below to check the nodes.

    kubectl get nodes -o wide
    

  4. Execute the command below to check the pods.

    kubectl get pods -A -o wide
    

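
As an alternative to becoming root and exporting KUBECONFIG, you can copy the admin kubeconfig to a regular user; this is the snippet that kubeadm init itself prints at the end of a successful run:

```shell
# Copy the admin kubeconfig into the current user's home and take ownership,
# so kubectl works without root.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```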