When I started building FontAlternatives, I knew I’d eventually have thousands of pages. Currently, the site has 3,400+ pages: 303 premium fonts, 74 free fonts, ~2,500 comparison pages, plus category, foundry, and hub pages. The build time question loomed large.
The obvious modern answer? Incremental Static Regeneration. Let me explain why I didn’t use it.
The problem
Static site builds scale linearly with page count. My build generates:
- 3,400+ HTML pages
- 700+ OG images
- Font preview images
- Screenshot processing
A full cold build takes 35-40 minutes. That’s painful for CI/CD. The goal was to get this under 15 minutes while keeping the simplicity of static deployment.
What I considered
Option 1: Netlify ISR
Netlify has ISR support for Astro. You mark pages with `prerender = false` and set cache headers:
```ts
// Netlify approach: render on demand, cache at the CDN
export const prerender = false;

export function GET() {
  // `html` is the rendered page markup
  return new Response(html, {
    headers: {
      'Netlify-CDN-Cache-Control': 'public, max-age=0, stale-while-revalidate=31536000'
    }
  });
}
```
Why I didn’t use it:
- Requires the Netlify adapter (lock-in)
- Moves rendering to edge functions (complexity)
- Harder to reason about what’s cached where
- My content doesn’t change that often anyway
Option 2: Vercel ISR
Similar story. Vercel’s ISR is excellent, but:
- Requires their adapter
- Edge function pricing for 3,400+ pages
- Overkill for a site where content changes weekly, not hourly
Option 3: On-demand revalidation
Both platforms support webhooks that invalidate specific pages. I’d hook this into my content workflow (GitHub commits) and rebuild only the changed pages.
Why I didn’t use it:
- Still requires the platform-specific adapter
- Complexity of maintaining revalidation webhooks
- My actual bottleneck isn’t HTML generation
The real problem: assets, not HTML
Here’s what I realized: HTML generation is fast. With the Astro build optimizations in place, all 3,400 pages render in about 90 seconds. The bottleneck is assets:
| Asset Type | Count | Cold Build Time |
|---|---|---|
| OG images | 700+ | 8-10 minutes |
| Font previews | 300+ | 3-4 minutes |
| Screenshot processing | 500+ | 5-7 minutes |
| HTML pages | 3,400+ | ~90 seconds |
ISR doesn’t help here. These assets are generated at build time, not on-demand.
My solution: manifest-based R2 sync
Instead of ISR, I built a simple caching layer:
```mermaid
flowchart TD
    subgraph build[Build System]
        A[Font metadata] --> B{Check manifest}
        C[Source images] --> D{Check manifest}
        B -->|Changed| E[OG image generator]
        B -->|Unchanged| F[Skip]
        D -->|Changed| G[Screenshot processor]
        D -->|Unchanged| H[Skip]
        E --> I[.cache/og/]
        G --> J[.cache/screenshots/]
    end
    subgraph sync[R2 CDN Sync]
        K[Local .cache/]
        L[Cloudflare R2]
        K <-->|sync-r2.ts| L
    end
    I --> K
    J --> K
    style build fill:#e3f2fd
    style sync fill:#c8e6c9
```
How it works
1. Content-addressed caching
Every asset gets a content hash:
```ts
// Bad: mtime-based (breaks on git checkout)
hash = `${stat.mtime}-${stat.size}`

// Good: content-based (works everywhere)
hash = sha256(fileContent).slice(0, 16)
```
Git operations reset file mtimes. Content hashing ensures cache validity across environments.
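In Node, content hashing is a one-liner with the built-in `crypto` module. A minimal sketch (the function name is mine, not the actual FontAlternatives code):

```typescript
import { createHash } from 'node:crypto';

// Content-based hash: stable across git checkouts and CI runners,
// unlike mtime. Truncated to 16 hex chars (64 bits), which is more
// than enough to avoid collisions across a few thousand assets.
export function contentHash(fileContent: Buffer | string): string {
  return createHash('sha256').update(fileContent).digest('hex').slice(0, 16);
}
```

Truncating the digest keeps the manifest compact while remaining effectively collision-free at this scale.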
2. Manifest tracking
Each generator maintains a manifest:
```json
{
  "inter": {
    "sourceHash": "a1b2c3d4",
    "outputs": ["og/inter.webp"],
    "generatedAt": "2026-01-24T10:00:00Z"
  }
}
```
Before generating, we check: does `sourceHash` match the current content hash? If yes, skip.
3. R2 as remote cache
GitHub Actions has a 10GB cache limit. My assets are ~1.2GB and growing. R2 has a generous free tier (10GB storage, 10M reads/month). For more on the R2 infrastructure, see Zero-Cost CDN for Static Assets.
CI workflow:
```yaml
- name: Pull from R2
  run: npx tsx scripts/sync-r2.ts --pull

- name: Build
  run: npm run build # Only regenerates changed assets

- name: Push to R2
  run: npx tsx scripts/sync-r2.ts --push
```
The results
| Metric | Before | After |
|---|---|---|
| Cold build (no cache) | 35-40 min | 35-40 min |
| Warm build (with R2 cache) | 35-40 min | ~12 min |
| Incremental (1 font changed) | 35-40 min | ~4 min |
| Smart build (PR merged, no new fonts) | 35-40 min | ~2-3 min |
The key insight: warm builds are the common case. Cold builds only happen when you change the generation logic, which is rare.
Update: I’ve since added smart build detection to production deploys. When a PR is merged, the workflow checks if preview assets exist (from the preview build) and skips asset generation entirely. Most production deploys now complete in 2-3 minutes.
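The detection itself can be sketched roughly like this (the helper, function name, and key layout are all illustrative, not the actual workflow code):

```typescript
// Illustrative sketch: decide whether a production deploy can reuse the
// assets that the preview build of the same commit already pushed to R2.
// `keyExistsInR2` and the manifest key layout are hypothetical.
async function shouldSkipAssetGeneration(
  commitSha: string,
  keyExistsInR2: (key: string) => Promise<boolean>,
): Promise<boolean> {
  // Preview builds push their manifest under a commit-keyed path;
  // if it's already there, the merged build can skip generation entirely.
  return keyExistsInR2(`manifests/preview-${commitSha}.json`);
}
```

The existence check is a single cheap HEAD-style lookup, which is why these deploys finish in minutes.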
Tradeoffs
What I gained:
- No vendor lock-in (works with any static host)
- Simple mental model (everything is static)
- Full control over caching logic
- Cheaper than edge function invocations
What I lost:
- No instant page updates (must rebuild)
- Still a full HTML rebuild when content changes
- More custom code to maintain
What I’d reconsider:
- If page count exceeded 10,000, I’d look at hybrid rendering
- If content changed hourly, ISR would make more sense
- If I needed personalization, static wouldn’t work anyway
Why this works for FontAlternatives
My content update pattern is: add 2-3 fonts per week, maybe update existing content monthly. The build time isn’t the bottleneck for this workflow.
What matters is:
- Builds are predictable (same result every time)
- Rollbacks are easy (redeploy previous dist/)
- No runtime errors from stale cache
- CDN handles all traffic (no origin compute)
Code snippets
Manifest structure
```ts
interface GenerationManifest {
  [slug: string]: {
    sourceHash: string;
    outputs: string[];
    generatedAt: string;
  };
}
```
Hash-based skip logic
```ts
import fs from 'node:fs';

function shouldRegenerate(slug: string, currentHash: string): boolean {
  const manifest = loadManifest(); // reads the JSON manifest from disk
  const cached = manifest[slug];

  if (!cached) return true;                              // Never generated
  if (cached.sourceHash !== currentHash) return true;    // Content changed
  if (!cached.outputs.every(fs.existsSync)) return true; // Output missing

  return false;
}
```
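For illustration, here is the same skip logic exercised against an in-memory manifest. The shape matches the `GenerationManifest` interface above; `outputExists` stands in for `fs.existsSync` so the sketch is self-contained, and the names are mine:

```typescript
interface ManifestEntry {
  sourceHash: string;
  outputs: string[];
  generatedAt: string;
}

// Same decision logic, parameterized for testing: the manifest and the
// output-existence check are passed in instead of read from disk.
function needsRegen(
  manifest: Record<string, ManifestEntry>,
  slug: string,
  currentHash: string,
  outputExists: (path: string) => boolean,
): boolean {
  const cached = manifest[slug];
  if (!cached) return true;                             // never generated
  if (cached.sourceHash !== currentHash) return true;   // content changed
  if (!cached.outputs.every(outputExists)) return true; // output missing
  return false;
}

const manifest = {
  inter: {
    sourceHash: 'a1b2c3d4',
    outputs: ['og/inter.webp'],
    generatedAt: '2026-01-24T10:00:00Z',
  },
};
const onDisk = new Set(['og/inter.webp']);
const exists = (p: string) => onDisk.has(p);
```

An unchanged font with its output present is skipped; a changed hash or a never-seen slug triggers regeneration.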
R2 sync script
```ts
async function syncToR2(direction: 'pull' | 'push') {
  const manifest = loadR2Manifest(); // maps asset path -> content hash

  if (direction === 'pull') {
    // Download assets missing from the local cache
    for (const key of Object.keys(manifest)) {
      if (!fs.existsSync(localPath(key))) {
        await downloadFromR2(key);
      }
    }
  } else {
    // Upload new/changed assets
    for (const file of getLocalAssets()) {
      const hash = hashFile(file);
      if (manifest[file] !== hash) {
        await uploadToR2(file);
        manifest[file] = hash;
      }
    }
    // Persist the updated manifest so the next pull sees the new hashes
    saveR2Manifest(manifest);
  }
}
```
Conclusion
ISR is a great technology. I didn’t use it because my problem wasn’t “pages take too long to render” but “assets take too long to generate.”
The manifest-based approach solves my actual problem: regenerate only what changed, share cache across CI runs, keep everything static.
Sometimes the modern approach isn’t the right one. Understand your bottleneck, then pick the tool that addresses it.