Deploying an application involves packaging your code and making it available on servers so that users can access it. In this post, I’ll walk through how to build a Go (Golang) application into a Docker image and deploy it to a Kubernetes cluster. We’ll cover creating an optimized Dockerfile, setting up Kubernetes Deployment and Service manifests, and understanding important configurations like GOMAXPROCS
and Kubernetes resource limits.
What Is Deployment?
Deployment is the process of distributing an application so that it can be run on a target environment, such as servers or cloud platforms. It involves packaging the application, setting up the necessary infrastructure, and configuring it to run reliably and efficiently.
Building a Go Application into a Docker Image
What Is Docker and Dockerfile?
Docker is a platform that allows developers to package applications into containers—standardized units that contain everything the software needs to run, including libraries, dependencies, and configuration files.
A Dockerfile is a script containing a set of instructions to build a Docker image. It specifies the base image, application code, environment variables, and commands to run.
Creating the Dockerfile
Below is a Dockerfile for building and deploying a Go application:
```dockerfile
# syntax=docker/dockerfile:1

# Stage 1: Build
FROM golang:alpine AS build

ENV TZ=Asia/Ulaanbaatar
ENV GO111MODULE=on
ENV GOPROXY=https://goproxy.io,direct

# Install necessary packages
RUN apk add --no-cache bash ca-certificates git gcc g++ libc-dev make tzdata

# Set the working directory
WORKDIR /go/src/app

# Copy go.mod and go.sum files and download dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy the source code
COPY . .

# Build the Go application
RUN CGO_ENABLED=0 GOOS=linux go build -a -o main main.go

# Stage 2: Deploy
FROM busybox:stable-musl

ENV TZ=Asia/Ulaanbaatar

# Copy necessary files from the build stage
COPY --from=build /go/src/app/main /home/
COPY --from=build /go/src/app/config.yml /home/
COPY --from=build /go/src/app/privateKey.pem /home/
COPY --from=build /go/src/app/publicKey.pem /home/
COPY --from=build /go/src/app/template /home/template

# Set the working directory
WORKDIR /home

# Expose the application port
EXPOSE 4000

# Note: a `RUN ulimit -n 10000` instruction would only affect its own build
# step, not the running container; set file-descriptor limits at runtime
# instead (e.g. `docker run --ulimit nofile=10000:10000`).

# Start the application
ENTRYPOINT ["/home/main"]
```
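With the Dockerfile in place, building and running the image locally might look like the following (the registry path is a placeholder from the manifests below; note that the open-file limit is raised at run time with `--ulimit`, since a `RUN ulimit` inside the Dockerfile would not persist into the container):

```
# Build the image from the project root, next to the Dockerfile
docker build -t registry.gitlab.com/<repo>:latest .

# Run it locally, publishing port 4000 and raising the open-file limit
docker run --rm -p 4000:4000 --ulimit nofile=10000:10000 \
  registry.gitlab.com/<repo>:latest

# Push to the registry so Kubernetes can pull it
docker push registry.gitlab.com/<repo>:latest
```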
Explanation of the Dockerfile
Multi-Stage Build
The Dockerfile uses a multi-stage build to optimize the final image size:
- Build Stage (`FROM golang:alpine AS build`):
  - Uses the official Go Alpine image for building the application.
  - Sets up environment variables for the time zone and Go modules.
  - Installs necessary packages such as Git and compilers.
  - Copies the `go.mod` and `go.sum` files and runs `go mod download` to cache dependencies.
  - Copies the entire source code and builds the application binary.
- Deployment Stage (`FROM busybox:stable-musl`):
  - Uses a minimal base image (BusyBox) to reduce the final image size.
  - Copies only the necessary files from the build stage: the binary and any required configuration or asset files.
  - Sets up the working directory and exposes the application port.
Why Use Multi-Stage Builds?
- Smaller Image Size: By excluding development tools and source code, the final image is much smaller.
- Security: Reduces the attack surface by including only what’s necessary to run the application.
- Portability: The resulting image can run anywhere since it contains the statically compiled binary.
Caching Dependencies
- Caching `go.mod` and `go.sum`: By copying `go.mod` and `go.sum` before the source code and running `go mod download`, Docker can cache the dependencies layer. Unless the module files change, Docker reuses the cached layer, speeding up subsequent builds.
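If you build with BuildKit (enabled by the `# syntax=docker/dockerfile:1` directive above), you can go a step further and keep the Go module cache in a persistent cache mount, so that even when `go.mod` changes, unchanged dependencies are not re-downloaded. A sketch of the relevant lines:

```
# Persist the Go module cache across builds via a BuildKit cache mount
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download
```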
Setting Environment Variables
- `GO111MODULE=on`: Ensures Go modules are used.
- `GOPROXY=https://goproxy.io,direct`: Sets a Go module proxy to speed up module downloads.
Setting ulimit
- `ulimit -n 10000`: Intended to raise the maximum number of open file descriptors, which matters for applications that handle many concurrent connections or file operations. Be aware, however, that a `RUN ulimit` instruction in a Dockerfile only applies to that single build step; it does not carry over into the running container. Set file-descriptor limits at runtime instead, for example with `docker run --ulimit nofile=10000:10000` or through the container runtime configuration.
Exposing Ports and Entry Point
- `EXPOSE 4000`: Informs Docker that the container listens on port 4000.
- `ENTRYPOINT ["/home/main"]`: Specifies the command to run when the container starts.
Deploying to Kubernetes
To run the application on Kubernetes, we need to create Deployment and Service manifests and handle any necessary configurations like image pulling and resource limits.
Kubernetes Service Manifest
The Service exposes the application internally within the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: astring-backend-service
  namespace: astring
spec:
  selector:
    app: astring-backend
  ports:
    - protocol: TCP
      port: 4000
      targetPort: 4000
```
Kubernetes Deployment Manifest
The Deployment manages the application pods and ensures the desired number of replicas are running.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: astring-backend
  namespace: astring
spec:
  replicas: 3
  selector:
    matchLabels:
      app: astring-backend
  template:
    metadata:
      labels:
        app: astring-backend
    spec:
      containers:
        - name: astring-backend-container
          image: registry.gitlab.com/<repo>:latest
          ports:
            - containerPort: 4000
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          env:
            - name: GOMAXPROCS
              value: "1"
      imagePullSecrets:
        - name: gitlab-registry-secret
```
Explanation of the Deployment Manifest
Image Pull Secret
- `imagePullSecrets`: Used to authenticate with a private Docker registry. In this case, we’re pulling the image from GitLab’s registry.
Resource Requests and Limits
- `requests`: The minimum amount of resources guaranteed to the container. Kubernetes uses this value to schedule pods onto nodes with sufficient capacity.
- `limits`: The maximum amount of resources the container can use. Limits are enforced on the node by the kubelet and the Linux kernel’s cgroup mechanism, not by the scheduler.
Setting GOMAXPROCS
- `GOMAXPROCS` Environment Variable:
  - By default, the Go runtime uses all CPU cores visible on the host machine.
  - In a Kubernetes cluster, a node may have many CPU cores, but the CPU available to a pod can be limited to far less.
  - Setting `GOMAXPROCS` to match the CPU limit lets the Go runtime schedule goroutines onto an appropriate number of OS threads, reducing CPU throttling.
  - In the example, `GOMAXPROCS` is set to `"1"` because the CPU limit is `500m` (0.5 CPU). Adjust this value whenever the CPU limit changes.
CPU Limits and GOMAXPROCS
- Understanding Kubernetes CPU Limits:
  - `cpu: "500m"` means the container can use up to half of a CPU core.
  - If the Go runtime believes it has access to more cores, it creates more OS threads, leading to context switching and throttling overhead.
- Why Set `GOMAXPROCS`?:
  - Aligns the Go scheduler with the CPU resources actually available.
  - Reduces unnecessary OS threads and context switches.
  - Improves application performance under CPU limits.
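The mapping from a CPU limit string to a `GOMAXPROCS` value can be sketched as a small helper (`gomaxprocsFor` is a hypothetical function for illustration, not part of the deployed application): round the limit up to a whole core, and never go below 1.

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// gomaxprocsFor converts a Kubernetes CPU quantity such as "500m" or "2"
// into a GOMAXPROCS value: the limit rounded up, with a floor of 1.
func gomaxprocsFor(cpuLimit string) int {
	var cores float64
	if strings.HasSuffix(cpuLimit, "m") {
		// Millicore form, e.g. "500m" = 0.5 cores.
		milli, err := strconv.Atoi(strings.TrimSuffix(cpuLimit, "m"))
		if err != nil {
			return 1
		}
		cores = float64(milli) / 1000.0
	} else {
		// Whole or fractional cores, e.g. "2" or "1.5".
		parsed, err := strconv.ParseFloat(cpuLimit, 64)
		if err != nil {
			return 1
		}
		cores = parsed
	}
	n := int(math.Ceil(cores))
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	fmt.Println(gomaxprocsFor("500m"))  // 1
	fmt.Println(gomaxprocsFor("2"))     // 2
	fmt.Println(gomaxprocsFor("2500m")) // 3
}
```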
Adjusting GOMAXPROCS Dynamically
Alternatively, you can set `GOMAXPROCS` dynamically from the pod’s CPU limit using the Downward API:
```yaml
env:
  - name: GOMAXPROCS
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
```
However, note that when `resourceFieldRef` exposes `limits.cpu` with the default divisor of `1`, Kubernetes rounds fractional CPU values up to the nearest whole core, so a `500m` limit is surfaced as `1`. That happens to be a valid integer for `GOMAXPROCS`, but it can overstate the CPU actually available to the pod. If you want a mapping based on the real CPU quota, handle the conversion inside your application, or use a library such as `go.uber.org/automaxprocs`, which sets `GOMAXPROCS` from the container’s cgroup CPU quota at startup.
Creating a Secret for the Private Registry
Since the Docker image is hosted in a private registry (e.g., GitLab’s registry), you need to create a Kubernetes secret for authentication:
```shell
kubectl create secret docker-registry gitlab-registry-secret \
  --docker-server=registry.gitlab.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email> \
  -n astring
```
- `gitlab-registry-secret`: The name of the secret.
- `--docker-server`: The registry URL.
- `--docker-username`, `--docker-password`, `--docker-email`: Your registry credentials.
- `-n astring`: The namespace where the secret will be created.
Applying the Manifests
After creating the manifests and the secret, apply them to your cluster:
```shell
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
Understanding Kubernetes Resource Limits and Go Runtime
How Kubernetes CPU Limits Work
- CPU Requests:
  - The amount of CPU guaranteed to the container.
  - Kubernetes uses this value to schedule pods on nodes that have sufficient capacity.
- CPU Limits:
  - The maximum amount of CPU the container is allowed to use.
  - Enforced by the Linux kernel’s CFS (Completely Fair Scheduler) quota system.
  - If a container exceeds its CPU limit, it will be throttled.
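The throttling mechanics can be made concrete with a little arithmetic: the CFS gives each container a quota of CPU microseconds per scheduling period (100ms by default), where quota = period × limit. A sketch of that calculation (`cfsQuotaUs` is a hypothetical helper for illustration):

```go
package main

import "fmt"

const cfsPeriodUs = 100000 // default CFS period: 100ms, in microseconds

// cfsQuotaUs returns the CFS quota in microseconds per period for a CPU
// limit expressed in millicores (e.g. 500 for "500m").
func cfsQuotaUs(limitMilli int) int {
	return cfsPeriodUs * limitMilli / 1000
}

func main() {
	// A 500m limit yields 50ms of CPU time per 100ms period; once the
	// container's threads use that up, they are paused until the next
	// period begins, which is what "throttling" means in practice.
	fmt.Println(cfsQuotaUs(500)) // 50000
}
```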
The Problem with Default GOMAXPROCS
- Default Behavior:
  - By default, Go sets `GOMAXPROCS` to the number of CPU cores available on the host machine.
  - If the node has 8 cores, `GOMAXPROCS` will be 8.
- Issue in Kubernetes:
  - If your container is limited to less than the node’s full CPU capacity, Go will still think it has access to all cores.
  - This can lead the Go runtime to run more OS threads than the CPU quota supports, causing excessive context switching and CPU throttling.
Setting GOMAXPROCS Correctly
- Explicitly Set `GOMAXPROCS`:
  - Set `GOMAXPROCS` to match the CPU limit of the container.
  - For example, if `cpu: "500m"`, set `GOMAXPROCS` to `1` (since `500m` is half of a CPU core and `GOMAXPROCS` must be at least 1).
- Benefits:
  - Ensures the Go scheduler operates within the CPU constraints.
  - Reduces overhead from unnecessary OS threads.
  - Improves performance and resource utilization.
Conclusion
Deploying a Go application to Kubernetes involves several steps:
- Building an Optimized Docker Image:
  - Use multi-stage builds to reduce image size.
  - Cache dependencies to speed up builds.
  - Minimize the runtime environment for security and efficiency.
- Configuring Kubernetes Manifests:
  - Define Deployment and Service resources.
  - Handle private image registries with image pull secrets.
  - Set resource requests and limits appropriately.
- Understanding and Configuring `GOMAXPROCS`:
  - Align the Go runtime with Kubernetes CPU limits.
  - Improve application performance under resource constraints.
By paying attention to these details, you can ensure your Go applications run efficiently and reliably in a Kubernetes environment.