Kubernetes Cluster on Ubuntu 24.04: A Step-by-Step Guide
Hey everyone! So, you’re looking to get your hands dirty with Kubernetes on the latest Ubuntu 24.04, huh? That’s awesome! Setting up a Kubernetes cluster might sound like a super complex task, but trust me, guys, with a little guidance, it’s totally doable. We’re going to walk through the entire process, from prepping your servers to getting your first pods up and running. Think of this as your ultimate cheat sheet to deploying a Kubernetes cluster on Ubuntu 24.04. We’ll break down each step so you don’t get lost in the technical jargon. Ready to dive in?
Table of Contents
- Preparing Your Ubuntu 24.04 Nodes
- Installing Containerd Runtime
- Installing Kubernetes Components (kubeadm, kubelet, kubectl)
- Initializing the Control Plane Node
- Deploying a Pod Network Add-on (CNI)
- Joining Worker Nodes to the Cluster
- Deploying Your First Application
- Conclusion: Your Kubernetes Journey Begins!
Preparing Your Ubuntu 24.04 Nodes
Alright, first things first, we need to get our machines, or *nodes* as we call them in Kubernetes lingo, ready. For this guide, we’ll assume you have at least two Ubuntu 24.04 servers. One will be your *control plane* (the brain of your cluster), and the others will be your *worker nodes* (where your applications will actually run). It’s super important that these machines can talk to each other over the network, so make sure your firewall isn’t blocking any necessary ports. We’ll need to do a few things on *all* nodes – yes, both control plane and workers.

First off, update your system: `sudo apt update && sudo apt upgrade -y`. This is like giving your servers a fresh coat of paint and making sure they have all the latest security patches.

Next, we need to disable swap. Kubernetes doesn’t play nicely with swap, so we have to turn it off. Run `sudo swapoff -a` and then comment out the swap line in `/etc/fstab` by adding a `#` at the beginning. You can use `sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab` to do this.

*Crucially*, you need to ensure that the `br_netfilter` module is loaded and that your network traffic can be bridged. This is essential for how containers communicate within the cluster. You can load the module with `sudo modprobe br_netfilter` and then make it persistent across reboots by creating a file: `echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf`. To apply the bridging settings (plus IP forwarding, which kubeadm’s preflight checks require), create another file:

```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```

Then, apply these settings with `sudo sysctl --system`.

These preparatory steps are absolutely vital. Skipping any of them can lead to subtle and frustrating issues down the line, making your Kubernetes cluster installation on Ubuntu 24.04 much harder than it needs to be. So, take your time here, double-check everything, and ensure each node is perfectly prepped before moving on. It’s all about building a solid foundation, guys!
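Before pointing that `sed` one-liner at the real `/etc/fstab`, it’s worth dry-running it on a scratch copy. Here’s a small sketch – the file contents below are purely illustrative, not your actual fstab entries:

```shell
# Write a fake fstab to a scratch file (illustrative entries only).
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > /tmp/fstab.demo

# Same edit as above, but against the scratch copy:
# comment out any line containing " swap ".
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

cat /tmp/fstab.demo
```

The swap line comes out prefixed with `#`, while the root filesystem line is left untouched – exactly what you want before running it for real with `sudo`.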
Installing Containerd Runtime
Alright, so your nodes are prepped and ready to roll. The next major step in setting up your Kubernetes cluster on Ubuntu 24.04 is installing a container runtime. Kubernetes needs something to actually run your containers, and *containerd* is a popular and robust choice. It’s a bit of a behind-the-scenes player, but it’s super important.

First, install containerd. One thing to watch out for: the `containerd.io` package lives in Docker’s apt repository, so unless you’ve already added that repo, install Ubuntu’s own package instead: `sudo apt install -y containerd`. Once it’s installed, we need to configure it. The default configuration might not be exactly what Kubernetes expects, so we’ll generate a default config file and then tweak it. Run `sudo mkdir -p /etc/containerd` and then `containerd config default | sudo tee /etc/containerd/config.toml`.

Now, for the important part: we need to enable the systemd cgroup driver. This is how containerd communicates with the operating system’s control groups, which is crucial for resource management in Kubernetes. Open the configuration file you just created: `sudo nano /etc/containerd/config.toml`. Look for the `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]` section. Inside this, find the line `SystemdCgroup = false` and change it to `SystemdCgroup = true`. Save the file (Ctrl+X, then Y, then Enter if you’re using nano). After saving, you need to restart the containerd service for these changes to take effect: `sudo systemctl restart containerd`. And to make sure it starts up every time your server boots, enable it: `sudo systemctl enable containerd`.

Why is this `SystemdCgroup` setting so important, you ask? Well, Kubernetes uses cgroups to limit and isolate container resources like CPU and memory, and since Ubuntu 24.04 uses systemd as its init system, the kubelet and the runtime need to agree on systemd as the cgroup driver. Without it, your pods might not start, or they might behave erratically. So, this step is *absolutely critical* for a stable Kubernetes installation on Ubuntu 24.04. Don’t skip it, guys! It’s all about making sure your containers have a stable home to run in.
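If you’d rather not open nano at all, the same flip can be done with one `sed`. Here’s the idea demonstrated on a scratch copy first – the snippet below stands in for just the relevant section of the real config file:

```shell
# A minimal stand-in for the relevant part of /etc/containerd/config.toml.
cat > /tmp/containerd-config.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Flip the cgroup driver flag. On a real node you would run this
# (with sudo) against /etc/containerd/config.toml instead.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.demo

grep SystemdCgroup /tmp/containerd-config.demo
```

Once you’re happy with the result on the scratch file, apply the same `sed` to the real config and restart containerd as described above.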
Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Okay, we’ve got our nodes prepped and containerd up and running. Now it’s time to install the star players of the Kubernetes show: `kubeadm`, `kubelet`, and `kubectl`. `kubeadm` is the tool that helps us bootstrap our cluster, `kubelet` is the agent that runs on each node and ensures containers are running in pods, and `kubectl` is our command-line interface to interact with the cluster.

First, let’s set up the Kubernetes package repository so `apt` knows where to find these components. We’ll need to install some prerequisite packages first: `sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl`. Now, download the Kubernetes package repository’s public signing key (the repo is the community-hosted pkgs.k8s.io, and this URL pins the v1.29 minor release): `curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg`. Then, add the Kubernetes repository: `echo 'deb [signed-by=/etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list`. After adding the repo, update your package list again: `sudo apt update`.

Now, we can finally install the Kubernetes tools: `sudo apt install -y kubelet kubeadm kubectl`. It’s important to *hold* these packages to prevent them from being automatically updated to a potentially incompatible version later: `sudo apt-mark hold kubelet kubeadm kubectl`. This ensures that your cluster remains stable. After installation, we need to enable the `kubelet` service so it starts on boot: `sudo systemctl enable kubelet`. Note that `kubelet` will keep restarting until `kubeadm init` is run, as it needs to be configured by `kubeadm` first. So, don’t worry if you see errors about `kubelet` not starting yet. We’ve now got the core Kubernetes software installed on all our nodes! This is a huge milestone in your Kubernetes cluster setup on Ubuntu 24.04. You’ve successfully brought the brain and the muscles of Kubernetes onto your servers. Pretty neat, huh?
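One handy habit: keep the Kubernetes minor version in a shell variable so the key URL and the repo line can never drift apart when you upgrade later. A small sketch of the idea, using v1.29 as in the commands above (the repo line is written to a scratch path here rather than the real `/etc/apt` location):

```shell
# Pin the Kubernetes minor version once and reuse it everywhere.
K8S_VERSION=v1.29
KEYRING=/etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg

# On a real node this line would go to /etc/apt/sources.list.d/kubernetes.list
# via `sudo tee`; we write to /tmp for demonstration.
echo "deb [signed-by=${KEYRING}] https://pkgs.k8s.io/core:/stable:/${K8S_VERSION}/deb/ /" \
  > /tmp/kubernetes.list.demo

cat /tmp/kubernetes.list.demo
```

When you later move to, say, v1.30, you only change `K8S_VERSION` and re-run both the `curl` for the key and this repo line.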
Initializing the Control Plane Node
We’re getting so close, guys! We’ve installed all the necessary software. Now, we need to initialize the *control plane node* – this is where the magic of Kubernetes cluster creation really begins. Remember, you should only run this command on the node you’ve designated as your control plane. Execute the following command: `sudo kubeadm init --pod-network-cidr=192.168.0.0/16`.

Let’s break this down a bit. `kubeadm init` is the command that bootstraps a Kubernetes control plane. The `--pod-network-cidr=192.168.0.0/16` flag is *extremely* important. It tells Kubernetes the IP address range that pods will use; certain network plugins (CNIs) need to know it up front, and 192.168.0.0/16 is Calico’s default pool. Make sure this CIDR doesn’t conflict with your existing network. The output of this command is crucial. It will give you a `kubeadm join` command with a token and a discovery token CA certificate hash. *Save this output!* Seriously, copy it and paste it somewhere safe. This `join` command is what you’ll use later to add your worker nodes to the cluster.

After `kubeadm init` finishes, it will also give you instructions on how to configure `kubectl` for your regular user. You’ll need to run these commands: `mkdir -p $HOME/.kube`, then `sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`, then `sudo chown $(id -u):$(id -g) $HOME/.kube/config`. Now, you can run `kubectl get nodes` and you should see your control plane node listed, but it will likely be in a `NotReady` state. That’s because we haven’t installed a network plugin yet. Don’t freak out! This is totally normal. The control plane is up, but it can’t communicate effectively without a network solution. So, while it might seem like nothing’s happening, this `kubeadm init` step is the fundamental building block of your Kubernetes cluster on Ubuntu 24.04. You’ve successfully turned one of your servers into the command center!
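Since losing that join line is such a classic mistake, here’s a small sketch of stashing the token and hash from saved `kubeadm init` output. The sample output below is made up for illustration – yours will have real values:

```shell
# Sample of the join line printed by `kubeadm init` (values are fabricated).
cat > /tmp/kubeadm-init-output.demo <<'EOF'
kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:deadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef
EOF

# Pull out the token and the CA-cert hash so they can be kept somewhere safe.
grep -o -- '--token [^ ]*' /tmp/kubeadm-init-output.demo | awk '{print $2}' > /tmp/join-token.demo
grep -o 'sha256:[^ ]*' /tmp/kubeadm-init-output.demo > /tmp/join-hash.demo

cat /tmp/join-token.demo /tmp/join-hash.demo
```

Even if you do lose them, it’s not fatal – as covered in the worker-node section, `kubeadm token create --print-join-command` regenerates everything.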
Deploying a Pod Network Add-on (CNI)
So, your control plane node is initialized, and `kubectl get nodes` shows it, but it’s in a `NotReady` state, right? That’s because Kubernetes needs a network plugin, also known as a Container Network Interface (CNI), to allow pods to communicate with each other and with external services. Without a CNI, your pods can’t get IP addresses or route traffic correctly. For this guide, we’ll use Calico, which is a popular and powerful choice, but there are others like Flannel or Weave Net you could also use.

To deploy Calico, you’ll need to apply its YAML manifest file. You can get the latest manifest from the Calico documentation, but a common command to apply it is: `kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml`. This command tells Kubernetes to create all the necessary resources (like Deployments, DaemonSets, and ConfigMaps) defined in the Calico manifest. It might take a few minutes for Calico to download its container images and start up on your nodes. Once Calico is up and running, you should see its pods in the `kube-system` namespace. Go ahead and run `kubectl get pods -n kube-system` to check their status. They should all be `Running`. Now, if you run `kubectl get nodes` again, you should see your control plane node transition from `NotReady` to `Ready`! Bingo!

This is a massive step forward for your Kubernetes cluster on Ubuntu 24.04. With a CNI installed, your cluster is now ready to start scheduling and running actual application workloads. Choosing the right CNI is important as it impacts network policies, performance, and complexity. Calico is great for its robust network policy features, but make sure you check the compatibility and requirements for the specific version you’re installing. Guys, you’re practically there! The network is the backbone of any distributed system, and Kubernetes is no exception.
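Rather than re-running `kubectl get nodes` by hand, a small wait loop is handy while the CNI pods pull their images. Here’s the skeleton of such a loop; the readiness check is stubbed out with a counter so the sketch is self-contained – on the cluster you’d swap the stub for something like `kubectl get nodes | grep -q ' Ready'`:

```shell
# Skeleton of a poll-until-ready loop. The stub pretends the node
# becomes Ready on the 3rd check; replace node_ready with a real
# kubectl probe when running against the cluster.
attempts=0
node_ready() { [ "$attempts" -ge 3 ]; }

until node_ready; do
  attempts=$((attempts + 1))
  sleep 1
done

echo "node became Ready after ${attempts} checks" > /tmp/cni-wait.demo
cat /tmp/cni-wait.demo
```

In real use you’d also cap the number of attempts and fail loudly after a timeout, so a stuck CNI rollout doesn’t hang your script forever.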
Joining Worker Nodes to the Cluster
Alright, you’ve got a working control plane and a CNI installed. Now it’s time to bring your *worker nodes* into the fold! These are the machines that will actually run your applications. Remember that `kubeadm join` command you saved earlier from the `kubeadm init` output? It’s time to use it! SSH into each of your worker nodes and run that exact command. It will look something like this: `sudo kubeadm join <control-plane-ip>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>`. Replace `<control-plane-ip>`, `<your-token>`, and `<your-hash>` with the actual values you saved. If, for some reason, you lost the token or it expired (they usually last 24 hours), you can generate a new one on the control plane node with `sudo kubeadm token create --print-join-command`. This will output a new `kubeadm join` command that you can then use on your worker nodes.

Once the `kubeadm join` command is executed on a worker node, it will connect to the control plane, get configured, and start running the `kubelet` agent. This process might take a minute or two. Now, head back to your *control plane node* (or any machine where you have `kubectl` configured) and run `kubectl get nodes`. You should now see all your worker nodes listed, and they should all be in the `Ready` state. Congratulations! You’ve successfully added worker nodes to your Kubernetes cluster on Ubuntu 24.04. This is where the real power of Kubernetes comes in – you now have a pool of resources that can be used to deploy and manage your applications at scale. It’s like building your own private cloud!

Make sure your worker nodes have enough resources (CPU, RAM, disk) to handle the workloads you plan to run on them. Also, ensure that the network configuration between your control plane and worker nodes is stable and allows for communication on the necessary ports. This step is all about expanding your cluster’s capacity and preparing it for actual deployment. You’ve built a solid foundation, guys!
Deploying Your First Application
We’ve done it! We’ve successfully set up a Kubernetes cluster on Ubuntu 24.04 with a control plane and worker nodes. Now, let’s have some fun and deploy our very first application. For this, we’ll deploy a simple Nginx web server. You’ll need a YAML file to define your application. Let’s create a file named `nginx-app.yaml` with the following content:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```
This YAML defines two things: a `Deployment` that tells Kubernetes to run 3 replicas of the Nginx container using the `nginx:latest` image, and a `Service` of type `LoadBalancer` that exposes Nginx on port 80. The `LoadBalancer` type is a bit special; on cloud providers, it automatically provisions a cloud load balancer. In our local cluster setup, we’ll need a specific component to make this work. If you’re using a local setup like Minikube or kind, they usually have a built-in LoadBalancer solution. For a bare-metal setup like we’ve been building, you’ll typically need to install MetalLB or a similar solution. For now, let’s assume you have a way to expose `LoadBalancer` services (or you can change the service type to `NodePort` and access it via `<node-ip>` on the node port Kubernetes allocates from the 30000–32767 range).

To deploy this, simply run: `kubectl apply -f nginx-app.yaml`. Kubernetes will now create the Deployment and the Service. You can check the status of your deployment with `kubectl get deployments` and your pods with `kubectl get pods`. You should see 3 Nginx pods running. To check the service, run `kubectl get services`. If you have MetalLB or a similar solution configured, you’ll see an external IP address assigned to `nginx-service`. If not, you might see `<pending>`. If you changed it to `NodePort`, you’ll see the allocated node port instead. You’ve officially deployed your first application on your Kubernetes cluster on Ubuntu 24.04! This is what it’s all about, guys – running and managing your applications in a resilient and scalable way. Pretty awesome, right?
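If you go the `NodePort` route on bare metal, note that Kubernetes picks a port from the 30000–32767 range for you unless you pin one explicitly. Here’s a hedged sketch of the Service rewritten for `NodePort` with a pinned port – 30080 is an arbitrary choice, not something the cluster requires:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080   # must fall within the 30000-32767 NodePort range
```

With this in place, the app is reachable at `http://<any-node-ip>:30080`, since every node in the cluster proxies that port to the service.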
Conclusion: Your Kubernetes Journey Begins!
And there you have it, folks! You’ve successfully navigated the exciting world of setting up a Kubernetes cluster on Ubuntu 24.04. From preparing your nodes and installing essential components like containerd, kubelet, and kubeadm, to initializing the control plane, setting up networking with Calico, joining your worker nodes, and finally deploying your first application – you’ve accomplished a lot! This isn’t just about following a set of commands; it’s about understanding the core concepts that make Kubernetes such a powerful platform for container orchestration. You now have a solid foundation to explore more advanced Kubernetes features like persistent storage, advanced networking, scaling applications, and implementing robust security policies. Remember, the Kubernetes ecosystem is vast and constantly evolving, so continuous learning is key. Don’t be afraid to experiment, break things (in a safe test environment, of course!), and learn from them. This journey into Kubernetes is just beginning, and with your newfound skills on Ubuntu 24.04, you’re well-equipped to tackle complex containerized deployments. Keep building, keep learning, and welcome to the awesome world of Kubernetes, guys!