FontAlternatives has 3,311 pages. Premium fonts, free fonts, comparisons, categories, foundries. Building them used to take 15 minutes.
Now it takes 2 minutes. Here’s how.
The page breakdown
| Page type | Count | Generation |
|---|---|---|
| Premium fonts | 303 | Dynamic from content |
| Free fonts | 74 | Dynamic from content |
| Comparisons | 1,304 | Computed from pairs |
| Categories | 84 | Dynamic from content |
| Foundries | 130 | Dynamic from content |
| "Fonts like" hubs | 303 | One per premium font |
| Use cases | 42 | Dynamic from content |
| Static pages | 10 | Homepage, about, etc. |
| Total | 3,311 | |
Most pages are generated from content collections. The comparisons (1,304) are computed from font pair combinations.
Content collections
Astro’s content collections provide type-safe content with Zod validation:
```typescript
// src/content.config.ts
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';

const premiumFonts = defineCollection({
  loader: glob({ pattern: '**/*.md', base: './src/content/premiumFonts' }),
  schema: z.object({
    name: z.string(),
    slug: z.string(),
    classification: z.enum(['sans-serif', 'serif', 'display', 'mono']),
    foundry: z.string(),
    alternatives: z.array(z.object({
      slug: z.string(),
      similarity: z.number(),
    })).default([]),
  }),
});
```
Content is validated at build time. Invalid frontmatter fails the build, not production.
Dynamic route generation
Most pages use getStaticPaths() to generate routes from content:
```astro
---
// src/pages/alternatives/[slug].astro
import { getCollection } from 'astro:content';

export async function getStaticPaths() {
  const premiumFonts = await getCollection('premiumFonts');
  return premiumFonts.map((font) => ({
    params: { slug: font.data.slug },
    props: { font },
  }));
}

const { font } = Astro.props;
---
```
Astro calls this once, generates 303 pages. Clean.
Comparison page math
Comparison pages are the expensive part. Each premium font with N alternatives creates N comparison pages:
```astro
---
// src/pages/compare/[slug].astro
import { getCollection } from 'astro:content';

export async function getStaticPaths() {
  const premiumFonts = await getCollection('premiumFonts');
  const freeFonts = await getCollection('freeFonts');
  const freeBySlug = new Map(freeFonts.map((f) => [f.data.slug, f]));

  const paths = [];
  for (const premium of premiumFonts) {
    for (const alt of premium.data.alternatives) {
      const free = freeBySlug.get(alt.slug);
      if (!free) continue; // skip alternatives without a matching free font

      paths.push({
        params: {
          slug: `${premium.data.slug}-vs-${alt.slug}`,
        },
        props: {
          premium,
          free,
          similarity: alt.similarity,
        },
      });
    }
  }
  return paths;
}
---
```
303 premium fonts with ~4.3 alternatives each on average = 1,304 comparison pages.
Pre-computation
Some data is expensive to compute. I pre-compute it once:
```typescript
// scripts/precompute-data.ts
import { mkdir, writeFile } from 'node:fs/promises';

interface PrecomputedData {
  fontsByFoundry: Record<string, string[]>;
  fontsByClassification: Record<string, string[]>;
  popularFonts: string[];
  relatedFonts: Record<string, string[]>;
}

async function precompute(): Promise<void> {
  // loadContent: project helper that parses a collection's markdown files
  const premiumFonts = await loadContent('premiumFonts');

  const data: PrecomputedData = {
    fontsByFoundry: {},
    fontsByClassification: {},
    popularFonts: [],
    relatedFonts: {},
  };

  // Group fonts by foundry
  for (const font of premiumFonts) {
    const foundry = font.foundry;
    data.fontsByFoundry[foundry] ??= [];
    data.fontsByFoundry[foundry].push(font.slug);
  }

  // ... more computation

  await mkdir('.cache', { recursive: true }); // ensure the cache dir exists
  await writeFile('.cache/build-data.json', JSON.stringify(data));
}
```
Build reads from .cache/build-data.json instead of recomputing.
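Consumers of the cache can fall back to recomputing when the file doesn't exist yet (first build, or after a cache purge). A minimal sketch, assuming a hypothetical `loadBuildData` helper on top of Node's `fs` API:

```typescript
import { existsSync, readFileSync } from 'node:fs';

// Read the precomputed build data; callers pass a recompute fallback
// for cold builds where .cache/ doesn't exist yet.
function loadBuildData<T>(path: string, recompute: () => T): T {
  if (existsSync(path)) {
    return JSON.parse(readFileSync(path, 'utf-8')) as T;
  }
  return recompute();
}
```

Warm builds hit the JSON read; only a missing file pays the recompute cost.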
Incremental asset generation
Assets (OG images, font previews) are the slowest part. I generate them incrementally:
```typescript
async function generateOGImages(): Promise<void> {
  const fonts = await loadContent('premiumFonts');
  const manifest = await loadManifest('.cache/og-manifest.json');

  for (const font of fonts) {
    const hash = hashContent(font);

    // Skip if already generated with same content
    if (manifest[font.slug]?.hash === hash) {
      continue;
    }

    await generateOGImage(font);
    manifest[font.slug] = { hash, generatedAt: Date.now() };
  }

  await saveManifest('.cache/og-manifest.json', manifest);
}
```
On a typical build, only 0-5 fonts changed. Generate 5 images, not 303.
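The `hashContent` call above can be a stable digest of the font's serialized data, so frontmatter key order alone never triggers regeneration. A minimal sketch using `node:crypto` (the recursive key sort is an assumption about how that stability is achieved):

```typescript
import { createHash } from 'node:crypto';

// Serialize with keys sorted at every level, so two objects with the
// same data always produce the same string regardless of key order.
function stableStringify(value: unknown): string {
  if (value === null || typeof value !== 'object') return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(stableStringify).join(',')}]`;
  const obj = value as Record<string, unknown>;
  const keys = Object.keys(obj).sort();
  return `{${keys.map((k) => `${JSON.stringify(k)}:${stableStringify(obj[k])}`).join(',')}}`;
}

// Same font data in, same hex digest out — unchanged fonts get skipped.
function hashContent(font: Record<string, unknown>): string {
  return createHash('sha256').update(stableStringify(font)).digest('hex');
}
```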
Build cache
Astro caches content collection transforms:
```
.astro/
  content-assets.mjs    # cached content transforms
  content-modules.mjs   # cached module resolution
```
First build: 45 seconds for content processing. Subsequent builds: 5 seconds (cache hit).
The build script
Everything is orchestrated through npm scripts:
```json
{
  "scripts": {
    "build": "tsx scripts/build.ts",
    "build:html": "astro build",
    "precompute": "tsx scripts/precompute-data.ts"
  }
}
```
The build script runs pre-computation, asset generation, then Astro build in sequence.
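That sequencing is only a few lines. Here's a sketch of a hypothetical `scripts/build.ts`: the step commands mirror the npm scripts above (the asset-generation script name is an assumption), and the command runner is injected so the ordering logic stays testable:

```typescript
// Hypothetical build orchestrator: runs each step in order and stops on
// the first failure. In production the runner would be something like
// child_process.execSync; here it's injected so the sequence is testable.
type Runner = (command: string) => void;

const steps = [
  'tsx scripts/precompute-data.ts', // pre-compute grouped font data
  'tsx scripts/generate-assets.ts', // incremental OG images (assumed script name)
  'astro build',                    // static page generation
];

function runBuild(run: Runner): void {
  for (const command of steps) {
    run(command); // a throwing runner aborts all later steps
  }
}
```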
Timing breakdown
| Step | First build | Cached build |
|---|---|---|
| Pre-compute | 2s | 0.1s (cached) |
| llms.txt | 0.5s | 0.5s |
| Asset generation | 300s | 5s (incremental) |
| Astro build | 45s | 30s |
| Total | ~6 min | ~36s |
The “2 minute” claim is for typical CI builds where assets exist.
Parallel page generation
Astro generates pages in parallel by default. But I can tune it:
```js
// astro.config.mjs
import { defineConfig } from 'astro/config';

export default defineConfig({
  build: {
    concurrency: 10, // parallel page builds
  },
});
```
More parallelism = faster builds, but more memory.
Memory optimization
With 3,000+ pages, memory matters. I stream large collections:
```typescript
import { readFile } from 'node:fs/promises';
import { glob } from 'glob';
import matter from 'gray-matter';

async function* streamFonts() {
  const files = await glob('src/content/premiumFonts/*.md');
  for (const file of files) {
    const content = await readFile(file, 'utf-8');
    const { data, content: body } = matter(content);
    yield { ...data, body };
  }
}

// Process one at a time, not all in memory
for await (const font of streamFonts()) {
  await processFont(font);
}
```
Keeps memory under 1GB even for full builds.
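If you'd rather drop the `gray-matter` dependency on this streaming path, a minimal splitter covers simple `key: value` frontmatter — a sketch (hypothetical helper, not part of the original build):

```typescript
// Minimal frontmatter splitter: handles only flat `key: value` pairs,
// unlike gray-matter, which parses full YAML.
function splitFrontmatter(raw: string): { data: Record<string, string>; body: string } {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { data: {}, body: raw }; // no frontmatter block

  const data: Record<string, string> = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { data, body: match[2] };
}
```

Worth it only if the collection's frontmatter is genuinely flat; nested YAML still needs a real parser.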
CI optimization
GitHub Actions builds use caching:
```yaml
- name: Restore build cache
  uses: actions/cache@v4
  with:
    path: |
      .cache/build-data.json
      .cache/og-manifest.json
      .astro/
    key: build-${{ hashFiles('src/content/**') }}
    restore-keys: build-
```
Cache key includes content hash. Content changes invalidate cache.
What I avoided
Things that would slow builds:
- Runtime data fetching: All data is static at build time
- Heavy image processing: Done incrementally, cached
- Complex computed layouts: Pre-compute, not per-page
- External API calls: Everything local
Tradeoffs
What I gained:
- 2 minute builds (from 15 minutes)
- Predictable build times
- Easy to reason about performance
What I lost:
- Dynamic content (everything is static)
- Real-time updates (need rebuild + deploy)
- Some flexibility (pre-computation is rigid)
What I’d do differently:
- Implement page-level caching (only rebuild changed pages)
- Use worker threads for parallel asset generation
- Add build timing telemetry
The deployment flow
```mermaid
flowchart TD
    A[Content change] --> B[Pre-compute]
    B -->|0.1s cached| C[Generate changed assets]
    C -->|5s| D[Astro build]
    D -->|30s| E[Upload to R2]
    E -->|10s| F[Deploy to Workers]
    F -->|5s| G[Live]

    style A fill:#e3f2fd
    style G fill:#c8e6c9
```
3,311 pages. Built and deployed in under 2 minutes.
For why I chose this approach over ISR, see Why I Chose Manifest-Based R2 Caching Over ISR.