Securing Kubernetes Services with WireGuard
As my application nears production readiness, one of the key considerations is securely accessing internal Kubernetes services—such as databases and message brokers—from my local development machine. Initially, I configured TCP forwarding for these services on my ingress controller:

```yaml
tcp:
  "4222": nats/nats-cluster:4222
  "5432": pgo/astring-ha:5432
  "6379": redis/redis:6379
  "9042": scylla/scylla-client:9042
```

While each service requires authentication, I’m still not fully comfortable exposing them directly to the public internet. Ideally, only HTTP endpoints (like my backend services or monitoring tools) should be publicly accessible, protected via basic auth or other mechanisms....
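A minimal sketch of what the client-side WireGuard config could look like for this, assuming a WireGuard endpoint in front of the cluster; all keys, addresses, subnets, and hostnames below are placeholders, not my actual setup:

```ini
# /etc/wireguard/wg0.conf on the development machine (all values are placeholders)
[Interface]
PrivateKey = <dev-machine-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <gateway-public-key>
Endpoint = vpn.example.com:51820
# Route only the VPN subnet and the cluster's internal ranges through the tunnel
AllowedIPs = 10.8.0.0/24, 10.43.0.0/16
PersistentKeepalive = 25
```

With something like this in place, the TCP entrypoints above could be bound to the tunnel address instead of a public interface.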
Transitioning to Cilium
In my previous posts, I discussed how I used MetalLB to implement load balancing in my on-premises Kubernetes cluster. While MetalLB served its purpose by providing Layer 2 load balancing, I later came across Cilium and discovered its powerful networking capabilities. In this post, I’ll explain what Cilium is, why it’s beneficial, and how I replaced both my Container Network Interface (CNI) and MetalLB with Cilium.
Understanding Cilium and Its Advantages
What Is Cilium?...
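As a taste of what replacing MetalLB involves, Cilium can hand out LoadBalancer IPs from a pool declared via a CRD roughly like the one below; the address range is an example, and the exact `apiVersion` and field names vary between Cilium releases:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: default-pool
spec:
  blocks:
    - start: 192.168.1.240
      stop: 192.168.1.250
```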
Managing Persistent Storage in Kubernetes
In a Kubernetes cluster, applications and services sometimes require persistent storage, especially when dealing with databases and stateful workloads. Managing this storage efficiently and reliably is crucial for data integrity and application performance. In this post, I’ll delve into the concepts of persistent storage in Kubernetes, explain Persistent Volumes (PV) and Persistent Volume Claims (PVC), discuss stateful versus stateless applications, and share how I implemented Longhorn to address storage challenges in my cluster....
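To give an idea of the pieces involved, a typical claim against a Longhorn-backed StorageClass might look like this sketch (the claim name and size are examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  storageClassName: longhorn # provisioned dynamically by Longhorn
  resources:
    requests:
      storage: 10Gi
```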
Ingress Controller
Managing multiple domains and services in a Kubernetes cluster can become complex, especially as the number of services grows. Clients typically interact with applications through domain names, and ensuring smooth routing and access control is essential. In this post, we’ll explore how using an Ingress Controller in Kubernetes simplifies domain management, compare it with traditional solutions, and demonstrate how to set up basic authentication for sensitive services.
Traditional Solutions Before Containers
Before the advent of containerization and Kubernetes, managing multiple services under different domains often involved:...
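For context, with ingress-nginx the basic-auth setup boils down to an annotated Ingress that references a Secret holding an htpasswd file; the hostname and service names below are examples:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth   # Secret containing an htpasswd file
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000
```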
Monitoring and Observability Part 2 - Logging
In the previous part, we discussed how to monitor and visualize metrics using Prometheus and Grafana. However, metrics alone aren’t sufficient for complete observability. To understand the full picture of your system’s health and behavior, you also need to collect and analyze logs. Logs provide detailed insights into what’s happening inside your applications and services, allowing you to diagnose issues and understand system behavior at a granular level. In this post, we’ll explore how to set up centralized logging in Kubernetes using Fluent Bit and Loki, and how to integrate logs into your existing Grafana dashboards for unified monitoring....
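The core of such a pipeline is Fluent Bit's Loki output plugin; a minimal OUTPUT stanza could look like the following (the Loki service address and label are assumptions):

```ini
[OUTPUT]
    Name        loki
    Match       kube.*
    Host        loki.logging.svc.cluster.local
    Port        3100
    Labels      job=fluent-bit
    Line_Format json
```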
Monitoring and Observability in Kubernetes
In today’s complex distributed systems, monitoring and observability are crucial for maintaining system health and ensuring optimal performance. Even minor issues can have significant impacts on the overall functioning of your applications. Kubernetes simplifies this process by providing mechanisms to collect and monitor system metrics more efficiently than traditional infrastructures.
Understanding Monitoring and Observability
Monitoring is the process of collecting, analyzing, and using information to track the performance and health of your system....
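As one concrete example of those mechanisms, clusters running the Prometheus Operator can declare scrape targets with a ServiceMonitor; the names and label selector here are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend     # matches Services labeled app=backend
  endpoints:
    - port: metrics    # named port on the Service exposing /metrics
      interval: 30s
```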
MetalLB - Load Balancer
When deploying applications on cloud platforms like AWS, Google Cloud, or Azure, load balancing is often a seamless experience. Cloud providers offer managed load balancer services (e.g., AWS Application Load Balancer) that integrate effortlessly with Kubernetes clusters. However, when running Kubernetes on-premises, things get a bit more complicated. You need to handle load balancing yourself, which raises questions: How do you distribute incoming traffic to your services? How can you ensure high availability (HA) of your load balancer?...
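To preview the on-premises answer explored in this post, MetalLB's Layer 2 mode needs little more than an address pool and an advertisement; the IP range below is an example for a typical home LAN:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # IPs MetalLB may assign to Services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```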
K3s - On-Premise Kubernetes
Deploying Kubernetes on-premise presents its own set of challenges and choices. In this post, I’ll share my journey of selecting a Kubernetes distribution for a small cluster, the issues I faced with MicroK8s, and how I successfully deployed K3s on CentOS.
Choosing a Kubernetes Distribution for On-Premise Deployment
There are numerous Kubernetes distributions available for on-premise deployment, each with its own features and complexities. Some of the popular options include:...
CI/CD, Kaniko and more
Continuous Integration and Continuous Deployment (CI/CD) are essential practices in modern software development. They help teams deliver code changes more frequently and reliably. In this post, I’ll share how I set up a CI/CD pipeline using GitLab CI/CD and Kaniko to build and deploy my Go application to Kubernetes without requiring Docker on the build server.
What Is CI/CD?
Continuous Integration (CI) is the practice of automating the integration of code changes from multiple contributors into a single software project....
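The heart of such a pipeline is a build job that runs the Kaniko executor image; this sketch follows the pattern from GitLab's documentation and relies only on GitLab's predefined CI variables:

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]               # override the image's entrypoint for CI
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
```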
Deploying Go application
Deploying an application involves packaging your code and making it available on servers so that users can access it. In this post, I’ll walk through how to build a Go (Golang) application into a Docker image and deploy it to a Kubernetes cluster. We’ll cover creating an optimized Dockerfile, setting up Kubernetes Deployment and Service manifests, and understanding important configurations like GOMAXPROCS and Kubernetes resource limits.
What Is Deployment?
Deployment is the process of distributing an application so that it can be run on a target environment, such as servers or cloud platforms....