In my previous post, I explained how I selected an object storage provider for my application, along with the trade-offs. While that setup is working well, I encountered a common issue: public media files like profile images were still being streamed through my backend (Golang) — because Wasabi no longer supports public buckets.
These public files are accessed frequently, especially small assets like avatars. If every request hits my backend (and subsequently Wasabi), it introduces latency and increases operational cost. This meant:
- Every request triggered a backend process
- Every file had to be downloaded from Wasabi (per request)
- No caching was applied
For high-frequency resources like avatars, this was unsustainable.
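For context, here's a minimal sketch of roughly what that pass-through looked like. The bucket name, route, and endpoint are illustrative assumptions, not my exact setup:

```go
package main

import (
	"context"
	"io"
	"net/http"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		// Wasabi is S3-compatible; point the SDK at its endpoint.
		o.BaseEndpoint = aws.String("https://s3.wasabisys.com")
	})

	http.HandleFunc("/api/v1/media/file/", func(w http.ResponseWriter, r *http.Request) {
		key := strings.TrimPrefix(r.URL.Path, "/api/v1/media/file/")

		// Every single request triggers a fresh GetObject call to Wasabi --
		// no caching anywhere, which is exactly the problem.
		obj, err := client.GetObject(r.Context(), &s3.GetObjectInput{
			Bucket: aws.String("my-bucket"), // hypothetical bucket name
			Key:    aws.String(key),
		})
		if err != nil {
			http.Error(w, "not found", http.StatusNotFound)
			return
		}
		defer obj.Body.Close()

		if obj.ContentType != nil {
			w.Header().Set("Content-Type", *obj.ContentType)
		}
		io.Copy(w, obj.Body) // stream the object straight through the backend
	})

	http.ListenAndServe(":8080", nil)
}
```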
Exploring CDN Options
To solve this, the standard approach is to use a Content Delivery Network (CDN).
A CDN caches and serves static files from edge servers located around the world. It improves performance by serving users from the nearest edge location and reduces load on the origin. Popular options in this space include:
- Cloudflare R2 (technically object storage, but with free egress through Cloudflare's network)
- Fastly
- AWS CloudFront
I was particularly interested in R2 and Fastly, mainly because they offer egress-free delivery — a big plus. However, their per-request pricing (every PUT/GET counts) didn’t align with my budget, especially since Wasabi charges only for storage, not requests.
So I ruled out R2 and Fastly for now, even though they’re excellent services.
Backend Caching Consideration
Another approach I considered was in-cluster caching, using either:
- My Golang backend with in-memory cache, or
- Reverse proxies like NGINX or Varnish
This could avoid hitting Wasabi altogether once cached, but it would consume memory and CPU, which isn’t ideal in my resource-constrained Kubernetes environment.
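For completeness, here's a rough sketch of what the in-memory variant might have looked like (the names and structure are hypothetical). Notably, it has no eviction logic; adding one is precisely the memory and CPU overhead that put me off this approach:

```go
package cache

import "sync"

type entry struct {
	contentType string
	data        []byte
}

// MemCache holds downloaded objects in process memory so that repeat
// requests for the same key never reach Wasabi.
type MemCache struct {
	mu    sync.RWMutex
	items map[string]entry
}

func New() *MemCache {
	return &MemCache{items: make(map[string]entry)}
}

func (c *MemCache) Get(key string) (data []byte, contentType string, ok bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	e, ok := c.items[key]
	return e.data, e.contentType, ok
}

func (c *MemCache) Put(key, contentType string, data []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	// No eviction: every cached avatar stays resident. A real version would
	// need an LRU cap, i.e. exactly the RAM budget my cluster doesn't have.
	c.items[key] = entry{contentType: contentType, data: data}
}
```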
Final Decision: Cloudflare Edge Caching
Instead of caching in my cluster, I decided to use Cloudflare’s built-in caching to offload both my backend and Wasabi entirely.
Key decisions:
- Set Cache Rules for paths like `/api/v1/media/file/*`
- Mark responses as "Eligible for Cache"
- Force an Edge TTL of 1 year
- Bypass backend `Cache-Control` headers and rely on Cloudflare's own cache logic
- Use immutable URLs to avoid purging (see the sketch below)
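The immutable-URL point deserves a sketch. The idea is to derive the public path from a hash of the file's content, so changed content gets a new URL and old cached copies simply age out instead of requiring a purge. This helper is a hypothetical illustration, not my exact implementation:

```go
package media

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ImmutableURL derives the public path from the file's content hash.
// Uploading new content for the same logical asset (e.g. a replaced
// avatar) yields a different URL, so nothing cached ever goes stale.
func ImmutableURL(content []byte, ext string) string {
	sum := sha256.Sum256(content)
	return fmt.Sprintf("/api/v1/media/file/%s%s", hex.EncodeToString(sum[:]), ext)
}
```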
Now, once a file is cached at Cloudflare’s edge, it doesn’t hit my server anymore — which is exactly what I needed.
Real Result
Here’s a real-world example of a cached file being served. Looking at the response headers, you can see:
- `CF-Cache-Status: HIT` confirms the file is being served from Cloudflare’s edge
- `Age: 774` shows how long it has been sitting in the edge cache
- `Cache-Control: max-age=31536000` instructs the browser to also cache it for 1 year
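If you want to verify this yourself, a small Go snippet can fetch an asset twice and print the relevant headers; the first request typically shows `MISS` (or `EXPIRED`) and the second `HIT`. The URL here is a placeholder:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical asset URL; substitute a path covered by the Cache Rule.
	const url = "https://example.com/api/v1/media/file/avatar.png"

	for i := 1; i <= 2; i++ {
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Printf("request %d: CF-Cache-Status=%s Age=%s Cache-Control=%s\n",
			i,
			resp.Header.Get("CF-Cache-Status"),
			resp.Header.Get("Age"),
			resp.Header.Get("Cache-Control"))
	}
}
```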
Conclusion
This approach:
- Requires zero extra resources in my cluster
- Avoids Wasabi bandwidth on repeat requests
- Provides low-latency global access via Cloudflare’s edge network