FontAlternatives serves thousands of images: OG images, font previews, specimen screenshots. I needed a CDN that could handle global traffic without costing anything for a side project.
Cloudflare R2 + Workers gave me exactly that.
The requirements
For a static font directory site, I needed:
- Global edge delivery: Fast load times worldwide
- Large file storage: 2,000+ images, growing weekly
- Predictable costs: Ideally free for hobby-level traffic
- CI/CD integration: Upload assets during builds
Traditional CDN options (CloudFront, Fastly) charge for bandwidth. At hobby scale that's usually affordable, but a traffic spike can turn into a surprise bill.
Why R2
Cloudflare R2 is S3-compatible object storage with a key difference: no egress fees.
| Provider | Storage (10GB) | Egress (100GB/mo) |
|---|---|---|
| AWS S3 | $0.23 | $9.00 |
| Google Cloud | $0.20 | $12.00 |
| Cloudflare R2 | $0.15 | $0.00 |
For a site that serves mostly images, egress is the expensive part. R2 eliminates that entirely.
Free tier limits:
- 10GB storage
- 1 million Class A operations/month (writes)
- 10 million Class B operations/month (reads)
My usage: ~5GB storage, ~500K reads/month. Comfortably within free tier.
The architecture
```mermaid
flowchart TD
    A[User Request] --> B[Cloudflare Edge]
    subgraph edge[Cloudflare Edge - Global]
        B --> C{Request type}
        C -->|HTML| D[Workers]
        C -->|Assets| E[R2 Bucket]
        D --> F[dist/ pages]
        E --> G[/previews/]
        E --> H[/og/]
        E --> I[/screenshots/]
    end
    style A fill:#e3f2fd
    style edge fill:#fff3e0
```
Workers serve HTML. R2 serves images. Both run at the edge, close to users.
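The split can be sketched as a routing decision. This is a hypothetical helper rather than the actual Worker code; the prefixes are the R2 namespaces shown in the diagram:

```typescript
// Hypothetical sketch: decide which backend handles a given path.
// The prefixes mirror the R2 namespaces in the diagram above.
const R2_PREFIXES = ['/previews/', '/og/', '/screenshots/'];

type Backend = 'r2' | 'worker';

function classifyRequest(pathname: string): Backend {
  // Asset paths go straight to the R2 bucket; everything else
  // (HTML pages) is rendered by the Worker.
  return R2_PREFIXES.some((prefix) => pathname.startsWith(prefix))
    ? 'r2'
    : 'worker';
}
```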
Wrangler configuration
The wrangler.jsonc binds R2 to the Worker:
```jsonc
{
  "name": "fontalternatives",
  "main": "dist/_worker.js",
  "compatibility_date": "2024-01-01",
  "r2_buckets": [
    {
      "binding": "ASSETS_BUCKET",
      "bucket_name": "fontalternatives-assets"
    }
  ],
  "routes": [
    {
      "pattern": "fontalternatives.com",
      "custom_domain": true
    },
    {
      "pattern": "cdn.fontalternatives.com",
      "custom_domain": true
    }
  ]
}
```
The CDN subdomain routes directly to R2 for image requests.
Asset sync during CI/CD
I sync assets to R2 during the build process. The key insight: only upload changed files.
```typescript
import { S3Client, PutObjectCommand, HeadObjectCommand } from '@aws-sdk/client-s3';
import { createHash } from 'crypto';
import { readFileSync } from 'fs';

const client = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

async function syncFile(localPath: string, remotePath: string): Promise<boolean> {
  const content = readFileSync(localPath);
  const hash = createHash('sha256').update(content).digest('hex');

  // Check if the file already exists with the same hash
  try {
    const head = await client.send(new HeadObjectCommand({
      Bucket: 'fontalternatives-assets',
      Key: remotePath,
    }));
    if (head.Metadata?.['content-hash'] === hash) {
      return false; // No change
    }
  } catch {
    // File doesn't exist yet; fall through to upload
  }

  // Upload with hash metadata
  await client.send(new PutObjectCommand({
    Bucket: 'fontalternatives-assets',
    Key: remotePath,
    Body: content,
    ContentType: getContentType(localPath),
    Metadata: { 'content-hash': hash },
    CacheControl: 'public, max-age=31536000, immutable',
  }));
  return true;
}
```
The SHA-256 hash stored in metadata lets me skip unchanged files. A full sync of 2,000 images takes 10 seconds because most files don’t change.
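The getContentType helper referenced above isn't shown; a minimal version, assuming the image formats the site serves, might look like:

```typescript
import { extname } from 'node:path';

// Map file extensions to MIME types for the formats the site serves.
// application/octet-stream is a safe fallback for anything unrecognized.
const CONTENT_TYPES: Record<string, string> = {
  '.webp': 'image/webp',
  '.png': 'image/png',
  '.jpg': 'image/jpeg',
  '.jpeg': 'image/jpeg',
  '.svg': 'image/svg+xml',
};

function getContentType(path: string): string {
  return CONTENT_TYPES[extname(path).toLowerCase()] ?? 'application/octet-stream';
}
```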
Manifest tracking
I maintain a local manifest of all R2 assets:
```json
{
  "version": "1.0",
  "generatedAt": "2024-01-15T10:30:00Z",
  "assets": {
    "previews/inter.webp": {
      "hash": "a1b2c3d4...",
      "size": 45230,
      "uploadedAt": "2024-01-10T08:00:00Z"
    },
    "og/alternatives-helvetica.png": {
      "hash": "e5f6g7h8...",
      "size": 128450,
      "uploadedAt": "2024-01-12T14:20:00Z"
    }
  }
}
```
The manifest serves two purposes:
- Fast syncs: Compare local manifest to remote, only check files that differ
- Cache invalidation: Know exactly which files changed for cache purging
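The comparison itself can be sketched against the manifest shape above; the diffManifests name is hypothetical:

```typescript
interface ManifestEntry {
  hash: string;
  size: number;
  uploadedAt: string;
}

type Assets = Record<string, ManifestEntry>;

// Compare local and remote manifests: upload anything new or changed,
// remove anything that no longer exists locally.
function diffManifests(local: Assets, remote: Assets) {
  const upload = Object.keys(local).filter(
    (key) => remote[key]?.hash !== local[key].hash,
  );
  const remove = Object.keys(remote).filter((key) => !(key in local));
  return { upload, remove };
}
```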
Local development with CDN fallback
During local development, I don’t want to download all 5GB of assets. Instead, the dev server proxies to the CDN:
```typescript
import { existsSync, mkdirSync, writeFileSync } from 'fs';
import { dirname } from 'path';

function serveCacheAssetsPlugin() {
  return {
    name: 'serve-cache-assets-with-fallback',
    configureServer(server) {
      server.middlewares.use('/preview', async (req, res, next) => {
        const localPath = `.cache/assets/previews${req.url}`;

        // Try local first
        if (existsSync(localPath)) {
          return serveLocal(localPath, res);
        }

        // Fall back to the CDN
        const cdnUrl = `https://cdn.fontalternatives.com/previews${req.url}`;
        const response = await fetch(cdnUrl);
        if (!response.ok) {
          res.statusCode = 404;
          return res.end('Not found');
        }

        // Write-through cache for next time
        const buffer = await response.arrayBuffer();
        mkdirSync(dirname(localPath), { recursive: true });
        writeFileSync(localPath, Buffer.from(buffer));
        res.setHeader('Content-Type', response.headers.get('content-type') ?? 'application/octet-stream');
        res.end(Buffer.from(buffer));
      });
    },
  };
}
```
First request hits the CDN. Subsequent requests serve from local cache. Best of both worlds.
For how R2 fits into the overall caching strategy, see Why I Chose Manifest-Based R2 Caching Over ISR.
Cache headers
R2 serves assets with long cache headers:
```
Cache-Control: public, max-age=31536000, immutable
```
One year expiry with immutable tells browsers to never revalidate. Since asset URLs include content hashes (or change when content changes), this is safe.
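Fingerprinting a URL might look like this sketch (hashedPath is a hypothetical helper; the site's actual naming scheme may differ):

```typescript
import { createHash } from 'node:crypto';

// Embed a short content hash in the filename so the URL changes
// whenever the bytes change, making immutable caching safe.
function hashedPath(path: string, content: Buffer): string {
  const hash = createHash('sha256').update(content).digest('hex').slice(0, 8);
  const dot = path.lastIndexOf('.');
  if (dot === -1) return `${path}.${hash}`;
  return `${path.slice(0, dot)}.${hash}${path.slice(dot)}`;
}
```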
For the HTML Worker, I use shorter caching:
```typescript
return new Response(html, {
  headers: {
    'Content-Type': 'text/html',
    'Cache-Control': 'public, max-age=3600, s-maxage=86400',
  },
});
```
- max-age=3600: Browser caches for 1 hour
- s-maxage=86400: Edge caches for 24 hours
Preview isolation
Preview deployments need their own asset namespace. I upload preview-specific assets to pr/{number}/:
```typescript
async function syncPreviewAssets(prNumber: number): Promise<void> {
  const newAssets = findNewAssets();
  for (const asset of newAssets) {
    const remotePath = `pr/${prNumber}/${asset.path}`;
    await uploadToR2(asset.localPath, remotePath);
  }
}
```
The preview Worker knows to check pr/{number}/ first, then fall back to production assets:
```typescript
async function handleAssetRequest(request: Request, env: Env): Promise<Response> {
  const url = new URL(request.url);
  const prNumber = getPRNumber(request);

  if (prNumber) {
    // Try the preview-specific asset first
    const previewKey = `pr/${prNumber}${url.pathname}`;
    const previewAsset = await env.ASSETS_BUCKET.get(previewKey);
    if (previewAsset) {
      return new Response(previewAsset.body, {
        headers: { 'Content-Type': previewAsset.httpMetadata?.contentType ?? 'application/octet-stream' },
      });
    }
  }

  // Fall back to the production asset
  const prodKey = url.pathname.slice(1);
  const prodAsset = await env.ASSETS_BUCKET.get(prodKey);
  if (!prodAsset) {
    return new Response('Not found', { status: 404 });
  }
  return new Response(prodAsset.body, {
    headers: { 'Content-Type': prodAsset.httpMetadata?.contentType ?? 'application/octet-stream' },
  });
}
```
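The getPRNumber helper isn't shown. Assuming preview deployments live on hostnames like pr-123.fontalternatives.com (a made-up scheme for illustration), it might be:

```typescript
// Hypothetical: extract the PR number from a preview hostname
// such as pr-123.fontalternatives.com; return null in production.
function getPRNumber(request: { url: string }): number | null {
  const host = new URL(request.url).hostname;
  const match = host.match(/^pr-(\d+)\./);
  return match ? Number(match[1]) : null;
}
```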
Asset promotion on merge
When a PR is merged, preview assets are promoted to production before cleanup:
```yaml
- name: Promote preview assets
  run: npx tsx scripts/promote-all-previews.ts
```
The script:
- Lists all objects under pr/*/
- Copies them to the production root (removing the pr/{number}/ prefix)
- Updates the production manifest
- Deletes the pr/{number}/ namespace
This enables smart production builds: if preview assets exist, the production workflow skips asset generation and does an HTML-only build (~2-3 min instead of 30+).
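The prefix-stripping step is the core of promotion; a sketch with a hypothetical helper name:

```typescript
// Turn a preview key like pr/42/previews/inter.webp into its
// production key previews/inter.webp by stripping the pr/{number}/ prefix.
function toProductionKey(previewKey: string): string {
  return previewKey.replace(/^pr\/\d+\//, '');
}
```

With the S3-compatible API, each promoted object would then be a CopyObjectCommand to the new key followed by a delete of the old one.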
Cleanup
For PRs that close without merging, cleanup removes orphaned assets:
```yaml
name: Cleanup Preview
on:
  pull_request:
    types: [closed]
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Delete preview assets
        run: |
          npx tsx scripts/cleanup-preview-assets.ts \
            --pr ${{ github.event.pull_request.number }}
```
The script lists all objects in pr/{number}/ and deletes them.
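Selecting the keys to delete is a simple prefix filter; a sketch with a hypothetical helper:

```typescript
// Given a full key listing, return only the objects that belong
// to one PR's preview namespace.
function previewKeys(prNumber: number, allKeys: string[]): string[] {
  const prefix = `pr/${prNumber}/`;
  return allKeys.filter((key) => key.startsWith(prefix));
}
```

The matching keys would then be batched through the S3-compatible DeleteObjects operation.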
Cost breakdown
After 6 months of running:
| Resource | Usage | Cost |
|---|---|---|
| R2 Storage | 4.8 GB | $0.00 (free tier) |
| R2 Class A ops | 12K/mo | $0.00 (free tier) |
| R2 Class B ops | 480K/mo | $0.00 (free tier) |
| Workers requests | 800K/mo | $0.00 (free tier) |
| Total | | $0.00 |
Free tier limits are generous for hobby projects. I’d need 10x traffic to exceed them.
Tradeoffs
What I gained:
- Global edge delivery at zero cost
- S3-compatible API (easy tooling)
- Integrated with Workers (same provider)
- No egress surprises
What I accepted:
- Vendor lock-in to Cloudflare
- S3-compatible API covers most tooling, but not every S3 feature (no object versioning)
- The R2 dashboard is basic compared to the S3 console
Decisions I’d reconsider:
- Could use R2’s public bucket feature instead of Workers for pure CDN use
- Manifest tracking adds complexity; R2’s native ETags might be enough
The stack in practice
Every image request:
- Hits Cloudflare edge (closest to user)
- Checks edge cache (usually hit)
- If miss, fetches from R2 (same data center)
- Returns with long cache headers
p95 latency: ~50ms globally. Cost: $0.
For a side project serving thousands of font images, R2 + Workers is the obvious choice.