How to Install Kubernetes on Rocky Linux

April 27, 2023


Rocky Linux is one of the new distributions that emerged as an alternative to CentOS after CentOS's discontinuation in 2021. As a free and open-source project, Rocky Linux aims to provide a viable replacement for enterprise operating systems in application development.

The server-centric and performance-oriented nature of Rocky Linux makes it a good choice for running containerized workloads. However, managing app containers at scale requires a container orchestrator like Kubernetes.

This article will guide you through installing Kubernetes on Rocky Linux.



Install Kubernetes on Rocky Linux (Manual Method)

Manual installation of Kubernetes on Rocky Linux involves:

  • Setting up a container runtime interface (CRI).
  • Making adjustments to security and networking configuration.
  • Installing the essential Kubernetes tools.

Note: Execute the installation steps on each node (physical or virtual machine) you plan to add to the cluster.

Step 1: Install containerd

containerd is an industry-standard container runtime, originally developed by Docker, that creates, runs, and supervises containers and implements the Kubernetes Container Runtime Interface (CRI). Follow the procedure below to set it up on your Rocky Linux system.

1. Add the official Docker repository to your system. Docker does not maintain a separate repository for Rocky Linux, but the CentOS repo is fully compatible.

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

The output confirms the success of the operation.

Adding the official Docker repository for CentOS on Rocky Linux.

2. Refresh the local repository information.

sudo dnf makecache

3. Install the containerd.io package.

sudo dnf install -y containerd.io

4. Back up the default configuration file for containerd:

sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.bak

5. Create a new file with the default template:

containerd config default > config.toml

6. Open the file in a text editor. This tutorial uses nano.

sudo nano config.toml

7. Find the SystemdCgroup field and change its value to true.

SystemdCgroup = true

Save the file and exit.
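For reference, the field sits in the runc options part of the CRI plugin configuration. After the edit, that section should look similar to the fragment below (the exact section path can vary between containerd versions):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```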

8. Place the new file in the /etc/containerd directory:

sudo mv config.toml /etc/containerd/config.toml

9. Enable the containerd service:

sudo systemctl enable --now containerd.service

10. Open the Kubernetes modules configuration file:

sudo nano /etc/modules-load.d/k8s.conf

11. Add the two modules required by the container runtime:

overlay
br_netfilter

Save the file and exit.

12. Add the modules to the system using the modprobe command:

sudo modprobe overlay
sudo modprobe br_netfilter

If the commands execute successfully, they return no output.
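To double-check that both modules are active, you can filter the lsmod output. This is an optional verification step, not part of the original procedure:

```shell
# Verify that both required kernel modules are loaded
lsmod | grep -E '^(overlay|br_netfilter)' || echo "modules not loaded"
```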

Step 2: Modify SELinux and Firewall Settings

For Kubernetes to work properly, cluster nodes need to communicate without interruptions. To ensure smooth networking, adjust SELinux permissions and open the necessary ports on each machine:

1. Change the SELinux mode to permissive with the setenforce command:

sudo setenforce 0

2. Enter the following sed command to make changes to the SELinux configuration:

sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
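The sed expression rewrites only the SELINUX= line in place. Running the same substitution on a sample line (instead of the real file) shows the effect:

```shell
# Feed a sample configuration line through the same substitution
printf 'SELINUX=enforcing\n' | sed 's/SELINUX=enforcing/SELINUX=permissive/g'
# Prints: SELINUX=permissive
```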

3. Confirm the changes by checking the SELinux status:

sudo sestatus
The value of the Current mode field should be set to permissive.

Confirming that the current mode of SELinux is set to permissive.

4. Add firewall exceptions to allow Kubernetes to communicate via dedicated ports. On the master node machine, execute the following commands:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=4789/udp

The output confirms the success of the operation.

Adding the firewall exceptions on the master node.
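The eight invocations above can also be generated from a single list, which makes the port set easier to maintain. The loop below prints each command (drop the echo to execute them directly):

```shell
# Generate the firewall-cmd invocation for every master-node port
for port in 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10257/tcp 10259/tcp 179/tcp 4789/udp; do
    echo sudo firewall-cmd --permanent --add-port="$port"
done
```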

5. On worker nodes, open the following ports:

sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=4789/udp

6. Reload the firewall configuration to enforce the changes.

sudo firewall-cmd --reload

Step 3: Configure Networking

Kubernetes requires packet filtering and port forwarding to be enabled for traffic passing through a network bridge. Define the necessary kernel parameters in the k8s.conf file:

1. Open the file in a text editor:

sudo nano /etc/sysctl.d/k8s.conf

2. Ensure the file contains the following lines:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Save the file and exit.

Note: Read our tutorial to find out how to save a file in Vim and exit.

3. Apply the changes with the sysctl command:

sudo sysctl --system

The output shows that the system has processed the k8s.conf file and applied the changes.

Applying the configuration changes with sysctl.

Step 4: Disable Swap

For performance reasons and to ensure maximum utilization of each node's resources, Kubernetes requires swap to be disabled on every node.

1. Disable swap with the swapoff command.

sudo swapoff -a

2. Make the changes persist across reboots by typing:

sudo sed -e '/swap/s/^/#/g' -i /etc/fstab
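The sed command comments out every fstab line that mentions swap. Running the same substitution on a sample entry (the device name is for illustration only) shows the effect:

```shell
# A sample swap entry is prefixed with '#' so it is ignored at boot
printf '/dev/mapper/rl-swap none swap defaults 0 0\n' | sed -e '/swap/s/^/#/g'
# Prints: #/dev/mapper/rl-swap none swap defaults 0 0
```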

Step 5: Install Kubernetes Tools

The following are the three main packages in a Kubernetes installation:

  • kubeadm helps initialize a Kubernetes cluster.
  • kubelet runs containers on each node.
  • kubectl is the command-line utility for controlling the cluster and its components.

Install the packages by following the procedure explained below:

1. Create a repository file for Kubernetes:

sudo nano /etc/yum.repos.d/k8s.repo

2. Copy the repository specification below and paste it into the file. The example uses the official package repository for Kubernetes v1.29; adjust the version in the baseurl and gpgkey lines to match the release you want to install.

[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

Save the file and exit.

3. Refresh the local repository cache.

sudo dnf makecache

When prompted, type Y and press Enter.

Refreshing the repository information on the system.

4. Install the packages with the following command.

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

The system is now ready to deploy a Kubernetes cluster.

Install Kubernetes on Rocky Linux Using Ansible

Ansible is an IaC tool that facilitates infrastructure deployment automation. It uses human-readable instruction files called playbooks to simplify and speed up repetitive deployments.

The following sections provide instructions for installing Kubernetes using Ansible.

Step 1: Connect Hosts

To enable communication between the Ansible host and the Kubernetes nodes, connect the machines via SSH.

1. Generate an SSH key:

ssh-keygen
When prompted, type the filename for the new key and press Enter. Next, press Enter two more times to create an empty passphrase.

Generating an SSH key.

2. Copy the credentials to each machine:

ssh-copy-id -i ~/.ssh/[ssh-key-name].pub root@[ip-address]

For example, to copy the id_rsa key to a node, type:

ssh-copy-id -i ~/.ssh/id_rsa.pub root@[ip-address]

3. Create and go to the kube directory.

mkdir kube && cd kube

4. Create a file titled hosts using a text editor:

nano hosts

5. Paste the information about the nodes into the file. Split the entries into two sections, masters and workers:

[masters]
master ansible_host=[ip-address] ansible_user=root

[workers]
worker1 ansible_host=[ip-address] ansible_user=root

Save the file and exit.

6. Test the connectivity between the nodes and the Ansible host by typing:

ansible -i hosts all -m ping

The output confirms that Ansible has pinged the machines successfully.

Pinging the connected hosts using Ansible.

Step 2: Create Users

The first playbook that needs to be applied creates a user called kube on each machine. This user receives an authorized SSH key and permissions that allow it to run sudo commands without providing a password.

1. Create a playbook YML file in a text editor:

nano user-create.yml

2. Copy and paste the code below into the file.

- hosts: 'workers, masters'
  become: yes

  tasks:
    - name: create a new user and name it kube
      user: name=kube append=yes state=present createhome=yes shell=/bin/bash

    - name: allow the user to run sudo without requiring a password
      lineinfile:
        dest: /etc/sudoers
        line: 'kube ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: add authorized key for user
      authorized_key: user=kube key="{{ item }}"
      with_items:
        - ~/.ssh/[ssh-key-name].pub
Save the file and exit. The playbook now contains a set of tasks that Ansible will execute on the relevant connected machines.

3. Run the playbook by typing:

ansible-playbook -i hosts user-create.yml

The output shows the progress for each task.

Executing the Ansible playbook for privileged user creation.

Step 3: Install Kubernetes

After the necessary setup, create the playbook instructing Ansible to install Kubernetes tools on each node.

1. Create a YAML file in a text editor.

nano k8s-install.yml

2. Copy and paste the following code into the file. The Kubernetes repository section targets v1.29; adjust the version in the baseurl and gpgkey lines to match the release you want to install.

- hosts: "masters, workers"
  remote_user: [current-user]
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: create containerd configuration file
      file:
        path: "/etc/modules-load.d/containerd.conf"
        state: "touch"

    - name: set up containerd prerequisites
      blockinfile:
        path: "/etc/modules-load.d/containerd.conf"
        block: |
              overlay
              br_netfilter

    - name: load modules
      shell: |
              sudo modprobe overlay
              sudo modprobe br_netfilter

    - name: create network settings configuration file
      file:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        state: "touch"

    - name: set up containerd networking
      blockinfile:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        block: |
              net.bridge.bridge-nf-call-iptables = 1
              net.ipv4.ip_forward = 1
              net.bridge.bridge-nf-call-ip6tables = 1

    - name: apply settings
      command: sudo sysctl --system

    - name: add docker repository and install containerd
      shell: |
              sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
              sudo dnf makecache
              sudo dnf install -y containerd.io
              sudo mkdir -p /etc/containerd
              sudo containerd config default | sudo tee /etc/containerd/config.toml
              sudo systemctl restart containerd

    - name: create k8s repo file
      file:
        path: "/etc/yum.repos.d/kubernetes.repo"
        state: "touch"

    - name: write repository information in the kube repo file
      blockinfile:
        path: "/etc/yum.repos.d/kubernetes.repo"
        block: |
              [kubernetes]
              name=Kubernetes
              baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
              enabled=1
              gpgcheck=1
              gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
              exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

    - name: install kubernetes
      shell: |
              sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

    - name: disable swap
      shell: |
              sudo swapoff -a
              sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Note: Do not forget to replace the [current-user] value in the remote_user field with the current username on your Ansible host.

Save the file and exit.

3. Execute the playbook by entering the following:

ansible-playbook -i hosts k8s-install.yml

When Ansible finishes all the operations, it displays a Play Recap.

The Ansible Play Recap showing the successful installation of the Kubernetes tools.

Kubernetes has been successfully installed on all the nodes.


After completing this tutorial, you should know how to install Kubernetes on Rocky Linux and prepare for cluster deployment. The tutorial covered two installation methods: manual and Ansible-based.

If you are still looking for the best replacement for CentOS, read our comparison article Rocky Linux vs. AlmaLinux to see how the two major competitors stack up against each other.

Marko Aleksic
Marko Aleksić is a Technical Writer at phoenixNAP. His innate curiosity regarding all things IT, combined with over a decade long background in writing, teaching and working in IT-related fields, led him to technical writing, where he has an opportunity to employ his skills and make technology less daunting to everyone.