Deploying Kubernetes on-premise presents its own set of challenges and choices. In this post, I’ll share my journey of selecting a Kubernetes distribution for a small cluster, the issues I faced with MicroK8s, and how I successfully deployed K3s on CentOS.
Choosing a Kubernetes Distribution for On-Premise Deployment
There are numerous Kubernetes distributions available for on-premise deployment, each with its own features and complexities. Some of the popular options include:
- Rancher Kubernetes Engine (RKE)
- OpenShift
- k0s
- K3s
- MicroK8s
- Minikube
For a small cluster, especially in a development or testing environment, it’s essential to choose a distribution that is lightweight, easy to set up, and doesn’t require extensive resources.
Considerations for Small Clusters
- Resource Consumption: Limited CPU, memory, and storage capacity.
- Simplicity: Ease of installation and management.
- Support: Active community or commercial support if needed.
- Compatibility: Support for the operating system in use (e.g., CentOS).
Attempting with MicroK8s
I initially chose MicroK8s because it’s designed to be a lightweight, single-package Kubernetes distribution that’s easy to install and ideal for local development.
Installation on Single Node
On a single node, MicroK8s worked well:
```bash
sudo snap install microk8s --classic
```
However, I needed a multi-node cluster for high availability and scalability.
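For context, multi-node MicroK8s follows a documented `add-node`/`join` flow, roughly as sketched below; the IP, port, and token are illustrative placeholders:

```bash
# On the master: generate a one-time join invitation
# (this prints the exact join command to run on the new node)
sudo microk8s add-node

# On the worker: run the printed command, which looks like this
sudo microk8s join 192.168.1.10:25000/92b2db237428470dc4fcfc4ebbd9dc81
```

This is the flow that kept failing for me on CentOS, as described below.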
Issues with Multi-Node Cluster
When trying to set up a multi-node cluster with MicroK8s on CentOS, I encountered issues:
- Cluster Registration Problems: Worker nodes reported being connected to the master, but the master node didn’t register them.
- Lack of Clear Error Messages: Difficulty in diagnosing the issue due to minimal logs or error messages.
- Compatibility Concerns: MicroK8s is primarily developed for Ubuntu, and while it’s possible to run it on CentOS using Snap, it may not be as straightforward.
After spending considerable time troubleshooting without success, I decided to look for an alternative.
Switching to K3s
I turned to K3s, a lightweight Kubernetes distribution developed by Rancher Labs. K3s is designed for resource-constrained environments and edge computing, making it ideal for small clusters.
Why K3s?
- Lightweight: K3s is a single binary of less than 100MB.
- Simplified Installation: Easy to set up on various Linux distributions, including CentOS.
- Built-In SQLite Support: Uses SQLite by default on a single server, reducing the overhead of etcd (embedded etcd takes over in multi-server HA setups like the one below).
- Active Development: Backed by Rancher Labs with an active community.
Advantages of K3s
- Ease of Use: Simplified installation commands.
- Low Resource Requirements: Minimal CPU and memory usage.
- Flexibility: Supports multiple architectures and operating systems.
- Compatibility: Works well with existing Kubernetes tooling (`kubectl`, Helm, etc.), as sketched below.
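In practice, that compatibility just means pointing standard tooling at the kubeconfig K3s generates. A minimal sketch, assuming Helm is already installed (K3s writes its kubeconfig to `/etc/rancher/k3s/k3s.yaml` by default):

```bash
# Point kubectl and Helm at the K3s-generated kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl get nodes
helm list --all-namespaces
```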
Setting Up K3s on CentOS
Prerequisites
- CentOS Installed: On all nodes (master and workers).
- SSH Access: Ability to SSH into all nodes with sufficient privileges.
- Firewall Configuration: Ports need to be opened for cluster communication.
Step 1: Opening Firewall Ports
Proper firewall configuration is crucial for Kubernetes components to communicate. Run the following commands on all nodes:
```bash
# Allow essential services
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https

# Trust Kubernetes pod and service networks
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16  # Pods CIDR
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16  # Services CIDR

# Create a new service for K3s
sudo firewall-cmd --permanent --new-service=k3s
sudo firewall-cmd --permanent --service=k3s --set-description="K3s Firewall Rules"
sudo firewall-cmd --permanent --service=k3s --add-port=2379-2380/tcp    # etcd ports
sudo firewall-cmd --permanent --service=k3s --add-port=6443/tcp         # Kubernetes API server
sudo firewall-cmd --permanent --service=k3s --add-port=8472/udp         # Flannel VXLAN
sudo firewall-cmd --permanent --service=k3s --add-port=10250-10252/tcp  # Kubelet and scheduler
sudo firewall-cmd --permanent --service=k3s --add-port=30000-32767/tcp  # NodePort Services
sudo firewall-cmd --permanent --add-service=k3s

# Reload firewall to apply changes
sudo firewall-cmd --reload
```
Note: Adjust the CIDR blocks (`10.42.0.0/16`, `10.43.0.0/16`) if your cluster uses different networking settings.
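Before moving on, it's worth sanity-checking what firewalld actually applied, using its standard query flags:

```bash
# Inspect the custom k3s service definition
sudo firewall-cmd --permanent --info-service=k3s

# List the rules active in the default and trusted zones
sudo firewall-cmd --list-all
sudo firewall-cmd --list-all --zone=trusted
```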
Step 2: Installing K3s on the Master Node
On the master node (also known as the server node), run:
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.31.1+k3s1" \
  sh -s - server --cluster-init --disable=traefik
```
Explanation:
- `INSTALL_K3S_VERSION`: Pins the version of K3s to install.
- `server --cluster-init`: Starts the node as a server and initializes a new cluster with embedded etcd.
- `--disable=traefik`: Disables the default Traefik ingress controller because I plan to use NGINX for ingress.
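Before joining any other nodes, confirm the server came up cleanly. K3s bundles its own `kubectl`, so a quick check looks like this:

```bash
# The k3s systemd service should be active
sudo systemctl status k3s --no-pager

# The master node should report itself as Ready
sudo k3s kubectl get nodes
```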
Step 3: Retrieving the Cluster Token
After installation, obtain the cluster token needed for other nodes to join:
```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```
Copy the token; you’ll need it for the worker nodes.
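As a small convenience, the token can also be pulled over SSH from your workstation; a sketch, where `master-node` is a placeholder host name:

```bash
# Read the join token from the master and keep it in a shell variable
TOKEN=$(ssh root@master-node 'cat /var/lib/rancher/k3s/server/node-token')
echo "$TOKEN"
```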
Step 4: Installing K3s on Worker Nodes
On each worker node, run:
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.31.1+k3s1" K3S_TOKEN=<cluster_token> \
  sh -s - server --server https://<master_node_ip>:6443 --disable=traefik
```
Explanation:
- `K3S_TOKEN=<cluster_token>`: The token retrieved from the master node.
- `--server https://<master_node_ip>:6443`: Points the new node at the master's API server endpoint.
- `--disable=traefik`: Disables Traefik to stay consistent with the master.
Note: I chose to run `server` instead of `agent` to enable high availability (HA). In K3s, additional server nodes provide HA for the Kubernetes API server and datastore; with embedded etcd, an odd number of servers (three or more) is needed to maintain quorum.
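For comparison, joining a node as a plain agent (workloads only, no control plane) uses the standard K3s installer flow instead, with the same placeholders as above:

```bash
# Agent-only join: the node runs pods but hosts no API server or datastore
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.31.1+k3s1" \
  K3S_URL=https://<master_node_ip>:6443 K3S_TOKEN=<cluster_token> sh -
```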
Step 5: Verifying the Cluster
Back on the master node, check the status of the nodes:
```bash
kubectl get nodes
```
You should see every node in the cluster listed with a `Ready` status.
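A couple of extra checks are useful at this point:

```bash
# Node roles, internal IPs, and K3s version at a glance
kubectl get nodes -o wide

# All system pods should settle into Running or Completed
kubectl get pods --all-namespaces
```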
Additional Configuration
Using NGINX Ingress Controller
Since Traefik was disabled, I installed the NGINX ingress controller for managing ingress traffic:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.1/deploy/static/provider/cloud/deploy.yaml
```
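Once the controller pods in the `ingress-nginx` namespace are running, a minimal Ingress can route traffic to an existing Service. The manifest below is a sketch: `my-service` and `app.example.local` are hypothetical placeholders for your own Service and host name:

```bash
# Wait for the controller pods to reach Running
kubectl get pods -n ingress-nginx

# A minimal Ingress routing a hypothetical host to an existing Service
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # hypothetical placeholder
                port:
                  number: 80
EOF
```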
Conclusion
Deploying Kubernetes on-premise can seem daunting, but with lightweight distributions like K3s, it’s achievable even for small clusters. K3s simplifies the installation process while providing the full functionality of Kubernetes.
Key Takeaways
- Choose the Right Distribution: For small clusters, lightweight options like K3s are ideal.
- Firewall Configuration Is Crucial: Properly opening and configuring firewall ports ensures smooth communication between nodes.
- Disable Unnecessary Components: Disabling default components like Traefik allows you to use alternatives that better suit your needs.
- High Availability: Running multiple server nodes in K3s provides HA for critical cluster components.
By sharing my experience, I hope to help others navigate the process of deploying Kubernetes on-premise. Whether you’re setting up a development environment or a production cluster, understanding the steps and potential pitfalls can save you time and frustration.