When I first started building my application, everything ran smoothly on my laptop. Development was straightforward, and I had full control over my environment. However, as the project grew, I needed to deploy it so others could access and test it. This is where my journey into infrastructure began—a journey that took me through cloud services like Fly.io and Vercel, into managed Kubernetes on Linode, and eventually to an on-premise Kubernetes cluster.
What Is Infrastructure?
In the context of software development, infrastructure refers to all the underlying systems and services necessary for an application to run. This includes servers, storage, networking, databases, and more. It’s the foundation upon which applications are built and deployed. Managing infrastructure involves making decisions about where and how your application will run, whether it’s on local servers (on-premise) or in the cloud.
Phase 1: Fly.io and Vercel
Fly.io for Backend Deployment
For my backend application, I initially chose to deploy everything to Fly.io. This included my databases, cache, NATS (a messaging system), and the backend service itself. Fly.io made it easy to deploy these services, and since I was primarily using it for development and occasional testing, the costs were minimal—staying under their $5 billing threshold.
However, deploying and managing services like NATS and ScyllaDB (a high-performance NoSQL database) on Fly.io proved to be complicated. Persistent storage and other configurations required more control than Fly.io offered at the time, which made scaling and managing these services challenging.
Vercel for Frontend Deployment
For the frontend, I used Vercel, which turned out to be an excellent choice. Vercel integrates seamlessly with GitLab (and GitHub), automatically deploying updates on every push to the repository. The deployment process was fast and effortless, allowing me to focus on developing features rather than worrying about the deployment pipeline.
Challenges with Fly.io
While Fly.io was convenient, I faced some limitations:
- Complexity in Managing Stateful Services: Deploying databases and message brokers required persistent storage and specific configurations that were not straightforward on Fly.io.
- Limited Control: I didn’t have much control over the infrastructure, which made it difficult to optimize performance and handle custom setups.
Phase 2: Linode Kubernetes
As my application grew, I needed a more robust solution for managing my backend services. I turned to Linode, which at the time offered a $100 credit on sign-up. This was a perfect opportunity to experiment with Kubernetes, a powerful orchestration tool for deploying, scaling, and managing containerized applications.
Setting Up Kubernetes on Linode
Linode’s managed Kubernetes service made it straightforward to set up a cluster. The management layer is free; you only pay for the resources you use. I set up:
- 3 Worker Nodes: Each with 1 CPU, 2 GB of RAM, and 50 GB of storage.
- 1 Load Balancer: To distribute traffic across the nodes (see the sketch below).
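On LKE, the load balancer isn't something you configure by hand: declaring a Kubernetes Service of type LoadBalancer prompts Linode's cloud controller to provision a NodeBalancer for you. A minimal sketch, with a hypothetical service name and ports:

```yaml
# Hypothetical Service manifest; on LKE, type LoadBalancer triggers
# automatic provisioning of a Linode NodeBalancer.
apiVersion: v1
kind: Service
metadata:
  name: backend            # hypothetical name for the backend service
spec:
  type: LoadBalancer       # asks the cloud provider for a load balancer
  selector:
    app: backend           # forwards traffic to pods carrying this label
  ports:
    - port: 80             # port exposed by the load balancer
      targetPort: 8080     # port the backend container listens on
```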
The pricing was reasonable:
- Nodes: $12 per node per month, totaling $36.
- Load Balancer: $10 per month.
I used Terraform to deploy and manage the cluster infrastructure, which allowed me to version control my configurations and easily make changes or redeploy as needed.
Benefits and Challenges
Using Kubernetes on Linode provided several advantages:
- Scalability: It was easy to scale the number of nodes up or down based on demand.
- Management: Kubernetes abstracts away much of the complexity involved in deploying and managing applications.
However, once my Linode credit expired, I realized that the costs were significant for a project still in the testing phase. Paying around $46 per month wasn't justifiable at that stage.
Phase 3: On-Premise Kubernetes
To continue development without incurring high costs, I considered on-premise solutions. I reached out to my teacher, who generously provided me with access to three virtual machines on their host infrastructure. Each machine had:
- 8 Cores
- 8 GB of RAM
- 30 GB of Storage
Deploying to On-Premise Kubernetes
With these resources, I set up my own Kubernetes cluster. Deploying all my services—databases, cache, messaging systems, backend services—became a learning experience in managing infrastructure without the conveniences of cloud services.
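Stateful services were the hardest part, the same pain point I had hit on Fly.io. As a rough illustration, a database like ScyllaDB would typically be declared as a StatefulSet with a volume claim per replica; the names, image tag, and sizes below are hypothetical, and a real ScyllaDB deployment needs considerably more configuration:

```yaml
# Minimal sketch of a stateful service; names and sizes are hypothetical.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scylla
spec:
  serviceName: scylla            # headless Service giving pods stable DNS names
  replicas: 1
  selector:
    matchLabels:
      app: scylla
  template:
    metadata:
      labels:
        app: scylla
    spec:
      containers:
        - name: scylla
          image: scylladb/scylla:5.4   # hypothetical version tag
          ports:
            - containerPort: 9042      # CQL port
          volumeMounts:
            - name: data
              mountPath: /var/lib/scylla
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi        # hypothetical size
```

The volumeClaimTemplates section is what gives each replica its own persistent disk, which is exactly the kind of control that was awkward to get on Fly.io.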
Lessons Learned
- Control: On-premise deployment gave me full control over the environment, allowing for customized configurations.
- Complexity: Managing your own infrastructure is more complex and requires handling networking, hardware limitations, and redundancy manually.
- Cost-Effective: For development and testing, on-premise was cost-effective since I wasn’t paying cloud service fees.
Understanding Infrastructure, Kubernetes, and Deployment Options
What Is Kubernetes?
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes is cloud-agnostic, meaning it can run on various environments, from public clouds to on-premise servers.
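To make that concrete, here is roughly the smallest useful Kubernetes object, a Deployment; the image and names are placeholders:

```yaml
# Minimal Deployment sketch; image and names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3                # Kubernetes keeps three identical pods running
  selector:
    matchLabels:
      app: backend
  template:                  # the pod template: the logical unit Kubernetes manages
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:latest  # placeholder image
          ports:
            - containerPort: 8080
```

The same manifest works unchanged on Linode, on my teacher's VMs, or anywhere else, which is what cloud-agnostic means in practice.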
Key Features of Kubernetes
- Automated Rollouts and Rollbacks: Kubernetes can roll out changes to an application or its configuration gradually, monitoring application health along the way, and can roll back if something goes wrong (see the sketch after this list).
- Service Discovery and Load Balancing: It can expose containers using DNS names or their own IP addresses and distribute network traffic across them to balance the load.
- Storage Orchestration: It can automatically mount the storage system of your choice, whether local storage, a public cloud provider, or a network storage system.
- Self-Healing: It restarts containers that fail, replaces containers, and kills containers that don't respond to user-defined health checks.
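The rollout and self-healing behavior comes down to a few fields in the Deployment spec. A sketch, with illustrative endpoints, ports, and timings rather than my actual configuration:

```yaml
# Sketch of rolling updates and health checks; endpoints and timings
# are illustrative, not taken from a real setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never take a pod down before its replacement is ready
      maxSurge: 1            # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: registry.example.com/backend:v2  # placeholder image
          livenessProbe:     # a failing check makes Kubernetes restart the container
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
          readinessProbe:    # traffic is withheld until this check passes
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
```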
Cloud vs. On-Premise Infrastructure
Cloud Infrastructure
- Pros:
  - Scalability: Easily scale resources up or down.
  - Managed Services: Providers handle maintenance tasks like updates and backups.
  - Global Availability: Deploy applications closer to users around the world.
- Cons:
  - Cost: Can become expensive, especially for persistent high-resource needs.
  - Control: Less control over the underlying hardware and infrastructure.
On-Premise Infrastructure
- Pros:
  - Control: Full control over hardware and configurations.
  - Cost: Potentially lower costs for constant workloads over time.
- Cons:
  - Complexity: Requires handling maintenance, updates, and physical hardware.
  - Scalability: Scaling up requires purchasing and setting up new hardware.
Conclusion
My journey from local development to deploying on Fly.io, then to Linode’s managed Kubernetes, and finally to an on-premise Kubernetes cluster has taught me a lot about infrastructure choices. Each option has its trade-offs in terms of cost, complexity, scalability, and control.
In the future, I plan to write more about Kubernetes and the technologies I used, diving deeper into how they work and how to leverage them effectively.