Building a Dynamic Resume Page with PDF Export and CI/CD for Astro
How I built a filterable tech resume with multiple export formats and set up a proper CI/CD pipeline with GitHub Actions and Dependabot.
Yesterday I built a Christmas gift tracker. Today I decided my portfolio site needed a resume page. And while I was at it, I set up a proper CI/CD pipeline because pushing to production without tests is chaos.
Part 1: The Resume Page
The “I Haven’t Needed a Resume in 10 Years” Problem
Here’s an admission: I haven’t thought about my resume in probably a decade. When you’re happily employed and not job hunting, that document just… sits there. Somewhere. Maybe in a Google Doc from 2015?
So when I decided to build a resume page, my first challenge wasn’t technical - it was “where is my career history even stored?”
The answer, of course, is LinkedIn. I’ve been dutifully updating my LinkedIn profile for years, but I’d never actually extracted that data. Turns out LinkedIn has a feature for this that I’d completely forgotten about.
Getting Your Data Out of LinkedIn
LinkedIn lets you download your account data - and it’s surprisingly comprehensive:
- Go to Settings & Privacy → Data Privacy
- Click “Get a copy of your data”
- Choose “The works” (or pick specific categories like Positions, Education, Skills)
- Wait for the email (minutes for specific data, up to 24 hours for everything)
- Download and extract the ZIP file
What you get is a folder full of CSV files:
- Profile.csv - Your headline, summary, location
- Positions.csv - Every job with company, title, dates, description
- Education.csv - Schools, degrees, dates
- Skills.csv - All those endorsements you've collected
- Certifications.csv - Professional certifications
- And more…
Converting LinkedIn Export to JSON Resume
CSV files aren’t directly usable for a web page, so I wrote a script to convert them to JSON Resume format:
node scripts/linkedin-to-resume.mjs ~/Downloads/Basic_LinkedInDataExport_12-22-2025/
The script:
- Parses all the CSV files (handling LinkedIn’s multiline quoted fields)
- Maps fields to JSON Resume schema
- Extracts bullet points from job descriptions as highlights
- Auto-categorizes skills (Cloud, Programming, Databases, etc.)
- Outputs a JSON file ready for editing
// The script handles LinkedIn's CSV format
const positions = readCSVFile(exportDir, 'Positions.csv');
const work = positions.map(pos => ({
  name: pos['Company Name'],
  position: pos['Title'],
  startDate: formatDate(pos['Started On']), // "Jan 2020" → "2020-01"
  endDate: formatDate(pos['Finished On']),
  summary: pos['Description'],
  highlights: extractBulletPoints(pos['Description'])
}));
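For reference, the quote-aware CSV handling looks roughly like this. It's a simplified sketch rather than the exact helpers from my script - the point is that LinkedIn wraps multiline job descriptions in quotes, so a naive split on newlines mangles them:
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

// Minimal quote-aware CSV parser: handles "" escapes and newlines inside quoted fields
function parseCSV(text) {
  const rows = [];
  let row = [], field = '', inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === '"' && text[i + 1] === '"') { field += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;
      else field += ch;                                             // keeps embedded newlines
    } else if (ch === '"') inQuotes = true;
    else if (ch === ',') { row.push(field); field = ''; }
    else if (ch === '\n') { row.push(field); rows.push(row); row = []; field = ''; }
    else if (ch !== '\r') field += ch;
  }
  if (field || row.length) { row.push(field); rows.push(row); }
  return rows;
}

// Turn each record into an object keyed by the header row, e.g. pos['Company Name']
function readCSVFile(dir, name) {
  const [header, ...records] = parseCSV(readFileSync(join(dir, name), 'utf8'));
  return records.map(r => Object.fromEntries(header.map((h, i) => [h, r[i] ?? ''])));
}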
What LinkedIn doesn’t export:
- Your profile photo (you’ll need to save it manually)
- Detailed job highlights (the descriptions are often sparse)
- Your LinkedIn URL (ironically)
- Recommendations text
After running the script, I spent about 30 minutes enriching the data - adding specific achievements, metrics, and details that make a resume actually useful to recruiters.
Why Build a Custom Resume Page?
Most developers either link to a PDF or use LinkedIn. I wanted something better:
- Filterable - Recruiters can filter by years of experience or company
- Multiple formats - JSON, YAML, PDF, and DOCX exports
- Single source of truth - Edit one JSON file, everything updates
- SEO friendly - Proper meta tags and structured data
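For the structured data piece, the page can emit a schema.org Person object built from the same resume data. This is a sketch of the idea, not necessarily what my page emits - the field names follow the JSON Resume basics section:
// Sketch: schema.org Person JSON-LD built in the page frontmatter
// from the same resume.json (imported here as `resume`)
const personJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Person',
  name: resume.basics.name,
  jobTitle: resume.basics.label,
  url: resume.basics.url,
  sameAs: (resume.basics.profiles ?? []).map(p => p.url)
};
// Then in the template: <script type="application/ld+json" set:html={JSON.stringify(personJsonLd)} />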
The Architecture
The resume uses the JSON Resume schema - a standardized format that keeps the data portable. Edit one file and everything downstream updates:
src/data/resume.json # Single source of truth
↓
src/pages/resume.astro # Dynamic page with filtering
src/pages/api/resume.json.ts # JSON export endpoint
src/pages/api/resume.yaml.ts # YAML export endpoint
public/resume.pdf # Pre-generated PDF
public/resume.docx # Pre-generated DOCX
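The export endpoints are thin wrappers over that same file. A rough sketch of the JSON endpoint (the YAML one is the same idea with a YAML serializer instead of JSON.stringify):
// src/pages/api/resume.json.ts - serve resume.json as an API response
import type { APIRoute } from 'astro';
import resume from '../../data/resume.json';

export const GET: APIRoute = () => {
  return new Response(JSON.stringify(resume, null, 2), {
    headers: { 'Content-Type': 'application/json' }
  });
};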
Client-Side Filtering
The page includes JavaScript for filtering without page reloads. Filter by years of experience (1Y, 2Y, 5Y, 10Y) or by company:
const filterExperience = (years) => {
  const cutoffDate = new Date();
  cutoffDate.setFullYear(cutoffDate.getFullYear() - years);

  document.querySelectorAll('.experience-item').forEach(item => {
    const endDate = item.dataset.endDate;
    // Show if current role OR ended within the timeframe
    const show = !endDate || new Date(endDate) >= cutoffDate;
    item.style.display = show ? 'block' : 'none';
  });
};
This means a recruiter looking for “last 2 years of experience” can click a button and see exactly that.
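The company dropdown works the same way. A sketch, assuming each item also carries a data-company attribute and the select element uses a hypothetical company-filter id:
// Show only roles at the selected company ('all' resets the filter)
const filterCompany = (company) => {
  document.querySelectorAll('.experience-item').forEach(item => {
    const show = company === 'all' || item.dataset.company === company;
    item.style.display = show ? 'block' : 'none';
  });
};

document.querySelector('#company-filter')?.addEventListener('change', (e) => {
  filterCompany(e.target.value);
});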
The PDF Generation Challenge
My first attempt used html2pdf.js for client-side PDF generation. Problem: Content Security Policy blocked the CDN script.
The solution: Build-time PDF generation with Puppeteer
Instead of generating PDFs in the browser, I created a Node.js script that:
- Builds the site
- Starts a local static server
- Uses Puppeteer to render the resume page
- Injects compact styling for a 2-page PDF
- Saves the result
async function generatePDF() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('http://localhost:3456/resume/', {
    waitUntil: 'networkidle0'
  });

  // Inject compact styling for 2-page resume
  await page.evaluate(() => {
    const style = document.createElement('style');
    style.textContent = `
      body { font-size: 11px !important; }
      h1 { font-size: 24px !important; }
      /* Hide nav and footer for clean PDF */
    `;
    document.head.appendChild(style);
    document.querySelector('nav').style.display = 'none';
  });

  await page.pdf({
    path: 'dist/resume.pdf',
    format: 'Letter',
    printBackground: true,
    margin: { top: '0.4in', right: '0.4in', bottom: '0.4in', left: '0.4in' }
  });

  await browser.close();
}
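The first two steps wrap around that function. Here's a sketch of the wiring (not the exact script), using Node's built-in http module to serve dist/ on the port the snippet above expects:
import { createServer } from 'node:http';
import { readFile } from 'node:fs/promises';
import { execSync } from 'node:child_process';
import { extname, join } from 'node:path';

const TYPES = { '.html': 'text/html', '.css': 'text/css', '.js': 'text/javascript', '.svg': 'image/svg+xml' };

execSync('npm run build', { stdio: 'inherit' });    // 1. build the site

const server = createServer(async (req, res) => {   // 2. serve dist/ statically
  let path = req.url.split('?')[0];
  if (path.endsWith('/')) path += 'index.html';     // map /resume/ to dist/resume/index.html
  try {
    const body = await readFile(join('dist', path));
    res.writeHead(200, { 'Content-Type': TYPES[extname(path)] ?? 'application/octet-stream' });
    res.end(body);
  } catch {
    res.writeHead(404);
    res.end('Not found');
  }
});

server.listen(3456, async () => {
  await generatePDF();                               // 3-5. render, restyle, save (from the snippet above)
  server.close();
});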
Key insight: Puppeteer and docx are installed locally but NOT in package.json. This avoids CI conflicts - they’re only needed for local PDF generation, not for Cloudflare builds.
# Local only - installed without touching package.json
npm install --no-save puppeteer docx
# Generate resume files
npm run resume:generate
DOCX Generation
For Word documents, I used the docx library to programmatically build the document from the same JSON data:
import { Document, Packer, Paragraph, TextRun } from 'docx';
import { writeFileSync } from 'node:fs';

const doc = new Document({
  sections: [{
    children: [
      new Paragraph({
        children: [new TextRun({ text: resumeData.basics.name, bold: true, size: 48 })]
      }),
      // ... build document structure from resume.json
    ]
  }]
});

const buffer = await Packer.toBuffer(doc);
writeFileSync('dist/resume.docx', buffer);
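The elided part is mostly a loop over the same work array. Roughly like this - a sketch, since the real document builds more sections and styling:
import { HeadingLevel, Paragraph, TextRun } from 'docx';

// One heading + date line + bullet list per job in resume.json
const workParagraphs = resumeData.work.flatMap(job => [
  new Paragraph({
    heading: HeadingLevel.HEADING_2,
    children: [new TextRun({ text: `${job.position}, ${job.name}`, bold: true })]
  }),
  new Paragraph({
    children: [new TextRun({ text: `${job.startDate} - ${job.endDate ?? 'Present'}`, italics: true })]
  }),
  ...(job.highlights ?? []).map(h => new Paragraph({ text: h, bullet: { level: 0 } }))
]);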
Part 2: The CI/CD Pipeline
The Problem
Before today, pushing to main would deploy directly to Cloudflare Pages with no checks. A bad dependency update could break production. Case in point: Dependabot opened a PR to upgrade Tailwind CSS from v3 to v4 - a complete rewrite that would have broken everything.
GitHub Actions Setup
I created two workflows:
Main CI Workflow
Every push and PR runs:
- Build & Test - Builds the site, validates all images
- Security Audit - npm audit --audit-level=high fails on vulnerabilities
- Lighthouse - Performance/accessibility checks (PRs only)
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      - run: npm run validate:images

  security-audit:
    runs-on: ubuntu-latest
    steps:
      # checkout + Node setup so npm ci has a lockfile to install from
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'
      - run: npm ci
      - run: npm audit --audit-level=high
Dependabot Auto-Merge
Patch and minor updates auto-merge when CI passes. Major updates get a comment explaining they need manual review:
# (excerpt - a similar step handles semver-minor updates)
- name: Auto-merge patch updates
  if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
  run: gh pr merge --auto --squash "$PR_URL"

- name: Comment on major updates
  if: steps.metadata.outputs.update-type == 'version-update:semver-major'
  run: |
    gh pr comment "$PR_URL" --body "⚠️ Major version update - requires manual review"
Handling the Tailwind v4 PR
Today I got a Dependabot PR trying to upgrade Tailwind CSS from v3 to v4. The CI would have caught the build failure, but more importantly - this is a migration project, not a routine update.
The right approach:
- CI detected it as a major update
- Auto-merge was skipped
- I closed the PR with an explanation
- Added an ignore rule to prevent future v4 PRs
# .github/dependabot.yml (under the npm updates entry)
ignore:
  - dependency-name: "tailwindcss"
    update-types: ["version-update:semver-major"]
Security Vulnerability Handling
The pipeline handles security at multiple levels:
| Layer | What It Does |
|---|---|
| npm audit in CI | Blocks PRs with high/critical vulnerabilities |
| Dependabot alerts | Notifies of known vulnerabilities in dependencies |
| Auto-merge patches | Security fixes merge automatically when CI passes |
| Manual review | Major updates require human approval |
Results
The resume page is now live with:
- Dynamic filtering (1Y, 2Y, 5Y, 10Y experience views)
- Company filtering dropdown
- JSON/YAML API endpoints at /api/resume.json and /api/resume.yaml
- Downloadable PDF (2 pages, professionally formatted)
- Downloadable DOCX
The CI/CD pipeline provides:
- Automatic builds and tests on every push
- Security scanning for vulnerabilities
- Auto-merging safe dependency updates
- Protection against breaking changes
Lessons Learned
- Build-time vs runtime PDF generation - Puppeteer at build time is more reliable than browser-based solutions that fight with CSP.
- Don't put dev-only dependencies in package.json - Puppeteer caused CI conflicts with Tailwind's peer dependencies. Keep build-only tools local.
- Group Dependabot updates - Individual PRs for every patch are noisy. Grouping makes review manageable.
- Ignore major versions for complex deps - Tailwind v4 is a migration project, not a routine update. Configure Dependabot to skip it.
- Auto-merge what's safe - Patch updates rarely break things. Let CI validate and merge them automatically.
- Security scanning should fail builds - npm audit --audit-level=high in CI catches vulnerabilities before they hit production.
The resume lives at /resume with all the export options. The JSON Resume format means I can also use it with other resume tools if I ever decide to switch.