FontAlternatives has 300+ fonts, each with structured frontmatter, bidirectional links, and generated assets. Mistakes are easy to make.
I built a guardrails system with three validation levels. Each level adds more checks, taking more time.
The three levels
| Level | Checks | Time | Use case |
|---|---|---|---|
| 1 | Lint + font validation | ~10s | Quick iteration |
| 2 | + Build | ~30s | Pre-commit |
| 3 | + E2E tests | ~5m | PR/deploy |
Level 1 catches typos. Level 2 catches build errors. Level 3 catches runtime bugs.
Level 1: quick validation
The fastest checks. Run constantly during development.
#!/bin/bash
# guardrails.sh --level 1
echo "Level 1: Quick validation"
# Biome lint
npm run lint
if [ $? -ne 0 ]; then
echo "Lint failed"
exit 1
fi
# Font schema validation
npm run check:fonts:schemas
if [ $? -ne 0 ]; then
echo "Font schema validation failed"
exit 1
fi
# Bidirectional link check
npm run check:fonts:links
if [ $? -ne 0 ]; then
echo "Link validation failed"
exit 1
fi
echo "Level 1 passed"
What it catches:
- Formatting issues
- Invalid frontmatter schemas
- Broken bidirectional links (a premium font lists an alternative that doesn't exist)
- Missing required fields
Schema validation
Each font type has a Zod schema. Validation is strict:
import { readFile } from 'node:fs/promises';
import { glob } from 'glob';
import matter from 'gray-matter';
import { z } from 'zod';

const premiumFontSchema = z.object({
name: z.string(),
slug: z.string(),
tier: z.enum(['1', '2', '3']).default('2'),
classification: z.enum(['sans-serif', 'serif', 'display', 'mono']),
foundry: z.string(),
traits: z.array(z.string()).default([]),
useCases: z.array(z.string()).default([]),
alternatives: z.array(z.object({
slug: z.string(),
similarity: z.number().min(0).max(100),
notes: z.string().optional(),
})).default([]),
});
async function validateSchemas(): Promise<void> {
const premiumFonts = await glob('src/content/premiumFonts/*.md');
for (const file of premiumFonts) {
const content = await readFile(file, 'utf-8');
const { data } = matter(content);
const result = premiumFontSchema.safeParse(data);
if (!result.success) {
console.error(`Schema error in ${file}:`);
console.error(result.error.format());
process.exit(1);
}
}
}
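The free-font collection has its own schema too. It isn't shown above, but the link checks below depend on its alternativeFor field; a rough sketch of the shape, where every field other than slug and alternativeFor is an assumption:
// Sketch only: besides slug and alternativeFor, these fields are assumptions
const freeFontSchema = z.object({
  name: z.string(),
  slug: z.string(),
  classification: z.enum(['sans-serif', 'serif', 'display', 'mono']),
  alternativeFor: z.array(z.string()).default([]),
});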
Zod gives clear error messages: “Expected number, received string at alternatives[0].similarity”
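For example, a deliberately broken record (made up purely for illustration) fails with a path-scoped message:
// Hypothetical record: similarity is a string instead of a number
const result = premiumFontSchema.safeParse({
  name: 'Helvetica',
  slug: 'helvetica',
  classification: 'sans-serif',
  foundry: 'Linotype',
  alternatives: [{ slug: 'inter', similarity: '92' }],
});

if (!result.success) {
  // Reports "Expected number, received string" at alternatives -> 0 -> similarity
  console.error(result.error.format());
}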
Bidirectional link validation
If Helvetica lists Inter as an alternative, Inter should list Helvetica in its alternativeFor:
async function validateLinks(): Promise<void> {
const premiumFonts = await loadCollection('premiumFonts');
const freeFonts = await loadCollection('freeFonts');
// Build lookup maps
const premiumBySlug = new Map(premiumFonts.map(f => [f.slug, f]));
const freeBySlug = new Map(freeFonts.map(f => [f.slug, f]));
const errors: string[] = [];
// Check premium -> free links
for (const premium of premiumFonts) {
for (const alt of premium.alternatives) {
const free = freeBySlug.get(alt.slug);
if (!free) {
errors.push(`${premium.slug}: Alternative "${alt.slug}" doesn't exist`);
continue;
}
// Check reverse link
if (!free.alternativeFor.includes(premium.slug)) {
errors.push(
`${premium.slug} lists ${alt.slug} as alternative, ` +
`but ${alt.slug} doesn't include ${premium.slug} in alternativeFor`
);
}
}
}
if (errors.length > 0) {
console.error('Link validation errors:');
errors.forEach(e => console.error(` - ${e}`));
process.exit(1);
}
}
This catches common mistakes when adding fonts manually.
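The script above only walks premium → free. The mirror direction (every slug in a free font's alternativeFor must point at a real premium font that links back) is the same loop inverted. A sketch, reusing the same loadCollection helper and field names:
// Sketch of the mirror check: free -> premium
async function validateReverseLinks(): Promise<void> {
  const premiumFonts = await loadCollection('premiumFonts');
  const freeFonts = await loadCollection('freeFonts');

  const premiumBySlug = new Map(premiumFonts.map(f => [f.slug, f]));
  const errors: string[] = [];

  for (const free of freeFonts) {
    for (const premiumSlug of free.alternativeFor) {
      const premium = premiumBySlug.get(premiumSlug);
      if (!premium) {
        errors.push(`${free.slug}: alternativeFor "${premiumSlug}" doesn't exist`);
        continue;
      }
      // The premium font must list this free font among its alternatives
      if (!premium.alternatives.some(alt => alt.slug === free.slug)) {
        errors.push(
          `${free.slug} claims to be an alternative for ${premiumSlug}, ` +
          `but ${premiumSlug} doesn't list it in alternatives`
        );
      }
    }
  }

  if (errors.length > 0) {
    console.error('Reverse link validation errors:');
    errors.forEach(e => console.error(`  - ${e}`));
    process.exit(1);
  }
}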
Level 2: build validation
Level 1 + a full build:
#!/bin/bash
# guardrails.sh --level 2
./guardrails.sh --level 1
if [ $? -ne 0 ]; then exit 1; fi
echo "Level 2: Build validation"
# Full build
npm run build
if [ $? -ne 0 ]; then
echo "Build failed"
exit 1
fi
echo "Level 2 passed"
What it catches:
- Import errors
- Template syntax errors
- Missing assets
- TypeScript errors
The build takes ~30 seconds. Fast enough for pre-commit.
Level 3: full validation
Level 2 + E2E tests:
#!/bin/bash
# guardrails.sh --level 3
./guardrails.sh --level 2
if [ $? -ne 0 ]; then exit 1; fi
echo "Level 3: E2E validation"
# Start preview server
npm run preview &
SERVER_PID=$!
sleep 5
# Run Playwright tests
npm run test
TEST_RESULT=$?
# Cleanup
kill $SERVER_PID
if [ $TEST_RESULT -ne 0 ]; then
echo "E2E tests failed"
exit 1
fi
echo "Level 3 passed"
What it catches:
- Broken navigation
- Missing pages
- Accessibility violations
- SEO issues
- Performance regressions
The test suites
~150 tests across six categories. Some examples:
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Smoke tests - do pages load?
test('homepage loads', async ({ page }) => {
await page.goto('/');
await expect(page.locator('h1')).toBeVisible();
});
// Accessibility tests - WCAG compliance
test('font page is accessible', async ({ page }) => {
await page.goto('/alternatives/helvetica/');
const results = await new AxeBuilder({ page }).analyze();
expect(results.violations).toEqual([]);
});
// SEO tests - meta tags present
test('has proper meta tags', async ({ page }) => {
await page.goto('/alternatives/helvetica/');
await expect(page.locator('meta[name="description"]')).toHaveAttribute(
'content',
/free alternatives to Helvetica/i
);
});
// Content tests - data integrity
test('alternatives link to existing fonts', async ({ page }) => {
  await page.goto('/alternatives/helvetica/');
  const links = page.locator('[data-testid="alternative-link"]');
  const count = await links.count();
  expect(count).toBeGreaterThan(0);
  // Collect hrefs first: navigating away would invalidate the locators
  const hrefs: string[] = [];
  for (let i = 0; i < count; i++) {
    const href = await links.nth(i).getAttribute('href');
    if (href) hrefs.push(href);
  }
  for (const href of hrefs) {
    const response = await page.goto(href);
    expect(response?.status()).toBe(200);
  }
});
// Performance tests - Core Web Vitals
test('meets LCP threshold', async ({ page }) => {
  await page.goto('/alternatives/helvetica/');
  const lcp = await page.evaluate(() => {
    return new Promise<number>((resolve) => {
      new PerformanceObserver((list) => {
        const entries = list.getEntries();
        resolve(entries[entries.length - 1].startTime);
      // buffered: true so the entry is delivered even if LCP fired before the observer registered
      }).observe({ type: 'largest-contentful-paint', buffered: true });
    });
  });
  expect(lcp).toBeLessThan(2500);
});
When to use each level
| Situation | Level |
|---|---|
| Just wrote some code | 1 |
| About to commit | 2 |
| Opening a PR | 3 |
| Deploying to production | 3 |
I have a git hook for level 1:
# .husky/pre-commit
./guardrails.sh --level 1
CI runs level 3 on every PR.
The CLI interface
# Quick iteration
./guardrails.sh --level 1
# Pre-commit
./guardrails.sh --level 2
# Full validation
./guardrails.sh --level 3
# Run specific checks
npm run check:fonts:schemas
npm run check:fonts:links
npm run check:fonts:content
npm run check:fonts:orphans
Each check can run independently for debugging.
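The npm scripts are thin wrappers around the validators. One possible wiring (a hypothetical check-fonts.ts; the real entry point probably differs) is a single script that dispatches on a check name:
// Hypothetical wiring: each npm script runs `tsx scripts/check-fonts.ts <name>`.
// The validators are the functions shown in this post; paths are illustrative.
import { validateSchemas } from './validate-schemas';
import { validateLinks } from './validate-links';
import { checkContent } from './check-content';
import { checkOrphans } from './check-orphans';

const checks: Record<string, () => Promise<void>> = {
  schemas: validateSchemas,
  links: validateLinks,
  content: checkContent,
  orphans: checkOrphans,
};

const name = process.argv[2] ?? '';
const check = checks[name];

if (!check) {
  console.error(`Unknown check "${name}". Available: ${Object.keys(checks).join(', ')}`);
  process.exit(1);
}

await check();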
Content checks
Beyond schema validation, I check content quality:
async function checkContent(): Promise<void> {
const fonts = await loadCollection('premiumFonts');
for (const font of fonts) {
// Tier 1 fonts need more content
if (font.tier === '1') {
if (font.body.length < 500) {
console.warn(`${font.slug}: Tier 1 font has thin content (${font.body.length} chars)`);
}
if (font.alternatives.length < 3) {
console.warn(`${font.slug}: Tier 1 font has few alternatives (${font.alternatives.length})`);
}
}
// All fonts need some content
if (font.body.length < 100) {
console.error(`${font.slug}: Content too short (${font.body.length} chars)`);
process.exit(1);
}
}
}
Tier 1 fonts (top 50) need richer content than Tier 2 or 3.
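These checks lean on a loadCollection helper that isn't shown. A minimal sketch, assuming plain gray-matter parsing (the real helper may go through the framework's content collections instead):
import { readFile } from 'node:fs/promises';
import { glob } from 'glob';
import matter from 'gray-matter';

// Assumed helper: read every markdown file in a collection and
// return its frontmatter fields plus the body for length checks.
async function loadCollection(name: string) {
  const files = await glob(`src/content/${name}/*.md`);
  return Promise.all(
    files.map(async (file) => {
      const { data, content } = matter(await readFile(file, 'utf-8'));
      return { ...data, body: content };
    })
  );
}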
Orphan detection
Find content that exists but isn’t linked:
async function checkOrphans(): Promise<void> {
const premiumFonts = await loadCollection('premiumFonts');
const freeFonts = await loadCollection('freeFonts');
const referencedFree = new Set<string>();
// Collect all referenced free fonts
for (const premium of premiumFonts) {
for (const alt of premium.alternatives) {
referencedFree.add(alt.slug);
}
}
// Find orphans
for (const free of freeFonts) {
if (!referencedFree.has(free.slug)) {
console.warn(`Orphan free font: ${free.slug} (not referenced by any premium font)`);
}
}
}
Orphan warnings don’t fail the build, but they flag content to review.
Tradeoffs
What I gained:
- Confidence in changes
- Catch errors early (before users see them)
- Clear progression for validation depth
What I lost:
- Time overhead (5 minutes for level 3)
- Maintenance of test suite
- False positives occasionally
Improvements I’d make:
- Parallel test execution
- Smarter test selection (only run affected tests)
- Visual regression tests for font previews
The result
Since implementing guardrails:
- Zero broken pages shipped to production
- Link errors caught in seconds, not after deploy
- Confidence to refactor freely
Level 1 runs hundreds of times a day. Level 3 runs on every PR. Errors get caught at the appropriate stage.