
CI/CD Pipeline for SaaS Boilerplates 2026

StarterPick Team

TL;DR

  • GitHub Actions + Vercel is the standard Next.js CI/CD stack in 2026: free for most teams, tight integration, zero ops overhead.
  • A production-ready pipeline: lint → test → build → preview deploy → approval gate → production deploy → post-deploy checks.
  • Database migrations are the most dangerous part of SaaS deployments—never run them automatically in production.
  • Preview deployments (Vercel, Railway) are non-negotiable for team workflows: every PR gets its own URL.
  • Secrets management in CI: use GitHub Actions secrets, never commit credentials, rotate on each team member departure.
  • The most common deployment incident: deploying code before migrating the database (or vice versa).

Key Takeaways

  • Vercel's GitHub integration auto-deploys preview branches and production on push to main—no custom pipeline needed for basic setups.
  • Adding test gates before deployment significantly reduces production incidents.
  • Environment parity matters: staging should mirror production as closely as possible.
  • Preview deployments need their own database—use Neon's branching feature or a shared staging database.
  • Post-deploy smoke tests are the safety net: a fast Playwright test that validates the app is functional after each deploy.
  • Semantic versioning via semantic-release or changesets is worth adding once you have external consumers.

Why CI/CD Configuration Should Be Part of the Boilerplate

Most developers underestimate the time cost of configuring CI/CD on a new project. When you start from scratch, the typical path looks like this: you create a GitHub Actions workflow file and immediately run into YAML syntax errors because indentation is wrong. You add secrets but reference them incorrectly in the workflow. You forget to cache node_modules and watch your 2-minute install become a 6-minute one. You run your test suite in CI and discover that half your tests fail because the test database is not set up. You get the tests passing but then the build fails because environment variables that exist on your machine do not exist in CI. You get the build working but realize you have no idea how to make Playwright install browser binaries in a repeatable way.

This sequence takes 4-8 hours for an experienced developer setting it up carefully. For someone newer to CI/CD, it takes a full day or longer. A boilerplate with working CI included eliminates this cost entirely. More importantly, it eliminates the risk that CI/CD gets deferred: in projects without it pre-configured, the pattern is that CI gets added "later," and later never arrives until something goes wrong in production.

The question of what "production-ready CI/CD" means for a SaaS has a specific answer. It is not just running tests. A complete pipeline for a Next.js SaaS looks like this: lint and type-check run first because they are fast and catch the most common issues immediately. Unit tests run second because they are the next fastest layer. Only if unit tests pass does the build step run, which takes the most time. E2E tests run after a successful build, either against the built artifacts or against a preview deployment. The pipeline terminates immediately at any failure, and the developer gets notified of the failure and which step caused it.

This "fail fast" principle — sometimes called "shift left" — is not just about speed. It is about signal clarity. If you run all your steps in parallel and three of them fail, you have to untangle which failures are root causes and which are consequences. If you fail on a type error in the lint step before even reaching the build, you have a clear and immediate diagnosis. Running lint and type-check before the expensive build step means that the most common causes of pipeline failure (type errors, linting violations) surface in 30-60 seconds rather than 5-10 minutes. Over the lifetime of a project, this saves hours of waiting for feedback that could have come immediately.

The other concept worth understanding is what failing fast means at the organizational level. A team that sees fast, clear CI feedback catches problems before code reaches main. A team with slow or confusing CI learns to ignore CI failures or work around them. The pipeline design is a statement about what the team values.
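In GitHub Actions, this ordering is expressed with `needs:`, which gates each job on the one before it. A minimal sketch of the structure (job and script names are illustrative; the full workflow later in this article fleshes it out with caching and artifacts):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # fast checks first: fail in 30-60 seconds on the most common errors
      - run: npm ci && npm run lint && npm run type-check

  test:
    needs: lint # only runs if lint passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test

  build:
    needs: test # the expensive step is gated behind everything faster
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
```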


GitHub Actions Caching and Cost Optimization

CI run time and cost are directly related: slower pipelines cost more money and create more friction. The good news is that most of the time cost in a typical Next.js CI run comes from steps that are highly cacheable.

Node.js Dependency Caching

The single most impactful optimization for most projects is caching dependencies between runs. The actions/setup-node action has built-in caching support that is trivial to enable (note that it caches the package manager's download cache, not node_modules itself):

- uses: actions/setup-node@v4
  with:
    node-version: "20"
    cache: "npm"  # or "pnpm" or "yarn"

This caches based on the lockfile hash. When the lockfile changes (new or updated dependencies), the cache is invalidated and npm ci does a cold install. When the lockfile is unchanged, the cache restores in seconds and npm ci resolves packages from it, cutting a cold install that might take 2-3 minutes down to well under a minute.

For pnpm, install pnpm itself first (for example with pnpm/action-setup), then take a slightly more explicit approach to get the cache directory right:

- uses: pnpm/action-setup@v4

- name: Get pnpm cache directory
  id: pnpm-cache
  run: echo "dir=$(pnpm store path)" >> $GITHUB_OUTPUT

- uses: actions/cache@v4
  with:
    path: ${{ steps.pnpm-cache.outputs.dir }}
    key: pnpm-${{ hashFiles('pnpm-lock.yaml') }}
    restore-keys: pnpm-

Playwright Browser Binary Caching

Playwright's browser binaries are large (Chromium alone is ~150MB) and take 1-2 minutes to download and install. Caching them between runs is a significant time saver:

- name: Cache Playwright browsers
  uses: actions/cache@v4
  id: playwright-cache
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ hashFiles('package-lock.json') }}

- name: Install Playwright browsers
  if: steps.playwright-cache.outputs.cache-hit != 'true'
  run: npx playwright install --with-deps chromium

The if: steps.playwright-cache.outputs.cache-hit != 'true' condition means the install step only runs when the cache misses. Playwright binaries are versioned with the @playwright/test package, so the cache key based on the lockfile hash correctly invalidates when you upgrade Playwright.

Turborepo Remote Caching

For monorepo setups using Turborepo, remote caching is the highest-leverage optimization available. Turborepo tracks which tasks have run with which inputs (source files + environment variables) and stores their outputs in a remote cache. After the first run, subsequent builds with unchanged inputs restore from cache in seconds rather than rebuilding.

The effect on build times is dramatic. A monorepo with 4 packages that takes 8 minutes to build on the first run might take 30 seconds on subsequent runs with unchanged packages, because only the packages with changes need to actually rebuild. For teams running CI on every PR commit, this compounds into hours of saved compute time per week.

Remote caching requires a Turborepo cache server. Vercel provides one free as part of their platform, or you can self-host with the ducktors/turborepo-remote-cache open-source implementation.
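Wiring the remote cache into CI is mostly two environment variables plus the normal turbo invocation. A sketch assuming a Vercel-hosted cache, where TURBO_TOKEN (a Vercel access token) and TURBO_TEAM (your team slug) are secrets you create:

```yaml
- name: Build with Turborepo remote cache
  run: npx turbo build
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
```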

Next.js Build Cache

Next.js produces a .next/cache directory during builds that dramatically speeds up subsequent builds when source files are unchanged. Persisting this between CI runs with actions/cache can cut build times by 40-60% for large applications:

- name: Cache Next.js build
  uses: actions/cache@v4
  with:
    path: |
      ~/.npm
      ${{ github.workspace }}/.next/cache
    key: nextjs-${{ hashFiles('package-lock.json') }}-${{ hashFiles('**/*.ts', '**/*.tsx') }}
    restore-keys: nextjs-${{ hashFiles('package-lock.json') }}-

GitHub Actions Billing

For teams evaluating CI costs, GitHub Actions billing works as follows: public repositories get unlimited free minutes. Private repositories on the free plan get 2,000 minutes per month. That sounds like a lot, but minutes are billed per job, not per workflow run: a pipeline whose jobs add up to 30 billed minutes per push, run by a team of 5 developers each pushing 3-4 times per day, burns roughly 600 minutes per day — exhausting the monthly budget in under 4 days.

A realistic estimate for a small SaaS team: aim to keep your pipeline under 10 minutes per run. With caching properly configured, a Next.js SaaS pipeline (lint + type-check + unit tests + build + E2E) should run in 6-10 minutes. At that cadence with a 2-developer team pushing 3 times per day each, you use roughly 300-360 minutes per week, or about 1,300-1,500 minutes per month — well within the 2,000 free minutes. With faster pipelines achieved through caching, most small teams can stay within the free tier indefinitely.


Preview Deployments: The Killer Feature for SaaS Teams

Preview deployments are the single feature that most changes how teams collaborate on frontend changes. Without them, reviewing a PR requires either running the app locally (which takes setup time and may have different environment configuration) or waiting for someone to describe what the change looks like. With preview deployments, every PR automatically gets a live, shareable URL where anyone can immediately see and test the change without any local setup.

Vercel's preview deployment system is the most mature implementation. When you connect a repository to Vercel and push to a branch or open a PR, Vercel automatically builds and deploys that branch to a unique URL (of the form https://your-app-git-your-branch-yourteam.vercel.app). The deployment URL is posted as a check on the GitHub PR. This happens without any manual configuration beyond the initial Vercel GitHub app installation.

Preview-Specific Environment Variables

Preview deployments need their own configuration. A preview deployment should use Stripe test mode keys (not production keys), a separate database instance or branch (not the production database), and potentially a separate email provider configuration. Mixing preview deployments with production services creates data pollution and potential security issues.

Vercel's dashboard lets you configure environment variables per environment: production, preview, and development. Set your preview DATABASE_URL to point to a staging database or a Neon database branch, and set your preview Stripe keys to test mode.

For more granular control — different configuration per branch, for example — you can use the Vercel CLI in your CI workflow to set environment variables dynamically.
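The env add subcommand accepts an optional git branch argument for preview-scoped variables, which is the mechanism behind per-branch configuration. A sketch as a CI step (the variable value, branch source, and secret names are illustrative):

```yaml
- name: Set a branch-scoped preview variable
  run: echo "$STAGING_DATABASE_URL" | vercel env add DATABASE_URL preview "$BRANCH" --token="$VERCEL_TOKEN"
  env:
    STAGING_DATABASE_URL: ${{ secrets.STAGING_DATABASE_URL }}
    BRANCH: ${{ github.head_ref }}
    VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
```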

Using Preview URLs in Playwright Tests

Preview deployments can be combined with Playwright E2E tests for integration testing against a real deployed environment rather than a locally-running dev server. The workflow looks like this: Vercel deploys the preview, outputs the deployment URL, and then your CI workflow runs Playwright tests against that URL.

- name: Wait for Vercel preview
  uses: patrickedqvist/wait-for-vercel-preview@v1.3.1
  id: vercel-preview
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    max_timeout: 300

- name: Run E2E tests against preview
  run: npx playwright test
  env:
    BASE_URL: ${{ steps.vercel-preview.outputs.url }}

This approach tests your application as it will actually run in production — with the real CDN, the real serverless function cold start times, and the real network conditions — rather than against a local dev server.

Protecting Main Branch

Vercel's GitHub integration creates checks on PRs: a "Vercel" check that shows whether the deployment succeeded. You can require this check to pass before merging by configuring branch protection rules in your GitHub repository settings. Combined with requiring passing CI checks (your lint, test, and build workflow), this ensures that nothing can land on main without a successful build and deployment.

The practical setup in GitHub repository settings: go to Settings → Branches → Add rule for main, enable "Require status checks to pass before merging," and add both the Vercel deployment check and your GitHub Actions workflow checks as required checks.

Alternatives to Vercel

If Vercel is not an option — due to pricing at scale, vendor preference, or self-hosting requirements — the preview deployment pattern is available elsewhere.

Netlify has preview deployments that work similarly to Vercel's: automatic deployments per branch, unique URLs, GitHub check integration. The main difference is that Netlify's Edge Functions and serverless function runtime differ from Vercel's, which can affect behavior for Next.js applications that rely on Vercel-specific features.

Railway supports ephemeral environments: you can configure Railway to create a new environment for each PR, with its own database and service instances, and tear it down when the PR is closed. This is more powerful than static preview URLs but requires more configuration.

Coolify, the open-source self-hosted alternative, has a review app feature that works similarly to Railway's ephemeral environments. If your team is already self-hosting infrastructure on Coolify, this is a natural fit.


Database Migrations in CI/CD

Database migrations are the most operationally risky part of SaaS deployments. A bug in application code is usually recoverable — you can roll back the deployment and the previous version runs against the same database. A destructive migration (accidentally dropping a column with data, changing a column type that breaks existing data) can cause data loss that no code rollback can fix.

Running Migrations in CI Against a Test Database

Before migrations reach production, they should be validated in CI. The pattern is to run migrations against a test database in your CI pipeline, alongside your tests. GitHub Actions services make this straightforward by spinning up a PostgreSQL container that exists only for the duration of the workflow:

jobs:
  test-with-migrations:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: testpassword
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - name: Run migrations
        run: npx prisma migrate deploy
        env:
          DATABASE_URL: postgresql://postgres:testpassword@localhost:5432/testdb
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgresql://postgres:testpassword@localhost:5432/testdb

This validates two things: that the migration files are syntactically valid and apply cleanly to a fresh database, and that your tests pass against the migrated schema. If either fails, you catch it before it reaches staging or production.

Zero-Downtime Migrations

The fundamental tension in database migrations is that you often need to change the schema while the application is running and serving requests. If you deploy a migration that renames a column and the application code deployment lags behind, you will have a brief window where the running code references a column that no longer exists. For high-traffic applications, even this brief window causes errors.

The expand-contract pattern eliminates this window. Instead of renaming a column in a single migration, you execute three deployments:

First, the "expand" migration: add the new column alongside the old one. Deploy application code that writes to both columns and reads from the old one. The application keeps working because the old column still exists.

Second, the "backfill" step: migrate all existing data from the old column to the new one. This can happen in a background job or a one-off script, without touching the application deployment.

Third, the "contract" migration: once you have confirmed that all data is in the new column and the application reads from it correctly, remove the old column. This final migration is safe because the application no longer references it.

The expand-contract pattern requires more deployments but eliminates the risk window. For most SaaS applications, the simpler approach (deploy migration, then deploy code, with the very brief window where the old code hits the new schema) is acceptable because the window is measured in seconds and most column additions or renames do not break running queries. But for any migration that removes or renames columns, the expand-contract pattern is the safe choice.
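As a concrete sketch, renaming users.name to users.full_name with expand-contract spans three migrations (table and column names are illustrative):

```sql
-- Migration 1 (expand): add the new column; running code that still
-- references "name" is unaffected.
ALTER TABLE users ADD COLUMN full_name text;
-- ...then deploy app code that writes to both columns, reads from "name".

-- Migration 2 (backfill): copy existing data across; for large tables,
-- run this in batches from a background script instead.
UPDATE users SET full_name = name WHERE full_name IS NULL;
-- ...then deploy app code that reads from "full_name".

-- Migration 3 (contract): drop the old column once nothing references it.
ALTER TABLE users DROP COLUMN name;
```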

Migration Testing Against Production Data Copies

The most dangerous class of migration failure is a migration that runs fine against a clean test database but fails against production data because of constraint violations, unexpected null values, or data that does not match the assumed format. The only way to catch this class of failure is to run migrations against a copy of production data.

Neon's database branching feature makes this practical: you can create a branch from your production database with a single API call, run your migration against it in CI, and check whether it succeeds. This is not a replacement for testing against a clean database — you need both — but it catches the data-specific failures that clean-database tests cannot.
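A sketch of that check as CI steps, reusing the same Neon branch action that appears later in this article (the db_url output name and secret names are assumptions to adapt to your setup):

```yaml
- name: Branch the production database
  id: prod-copy
  uses: neondatabase/create-branch-action@v5
  with:
    project_id: ${{ secrets.NEON_PROJECT_ID }}
    branch_name: migrate-test/pr-${{ github.event.number }}
    api_key: ${{ secrets.NEON_API_KEY }}

- name: Dry-run migrations against a copy of production data
  run: npx prisma migrate deploy
  env:
    DATABASE_URL: ${{ steps.prod-copy.outputs.db_url }}
```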


The Full Pipeline

A production-grade SaaS CI/CD pipeline:

Push/PR → Lint → Tests → Build → Preview Deploy →
PR Review → Merge to main → Tests → Production Deploy → Smoke Tests

GitHub Actions Setup

Core Workflow

# .github/workflows/ci.yml
name: CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  NODE_VERSION: "20"

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - run: npm ci
      - run: npm run lint
      - run: npm run type-check

  test:
    name: Unit Tests
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - run: npm ci
      - run: npm run test:coverage
      - uses: codecov/codecov-action@v4
        if: github.event_name == 'push'

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - run: npm ci
      - run: npm run build
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL_BUILD }}
          NEXTAUTH_SECRET: ${{ secrets.NEXTAUTH_SECRET }}
          NEXTAUTH_URL: http://localhost:3000
      - uses: actions/upload-artifact@v4
        with:
          name: next-build
          path: .next/
          retention-days: 1

  e2e:
    name: E2E Tests
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"
      - run: npm ci
      - run: npx playwright install --with-deps chromium
      - uses: actions/download-artifact@v4
        with:
          name: next-build
          path: .next/
      - run: npx playwright test
        env:
          DATABASE_URL: ${{ secrets.TEST_DATABASE_URL }}
          NEXTAUTH_SECRET: test-secret-32-chars-minimum-here
          NEXTAUTH_URL: http://localhost:3000
          STRIPE_SECRET_KEY: ${{ secrets.STRIPE_TEST_SECRET_KEY }}
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: playwright-report
          path: playwright-report/
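One detail the e2e job glosses over: nothing in the workflow starts a server, so it relies on Playwright's webServer option to boot the built app. A sketch of the assumed playwright.config.ts (the start command, port, and the BASE_URL convention are assumptions):

```typescript
// playwright.config.ts (excerpt)
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    // When testing a preview deployment, BASE_URL points at the deployed URL
    baseURL: process.env.BASE_URL ?? "http://localhost:3000",
  },
  // Only start a local server when not targeting a deployed URL
  webServer: process.env.BASE_URL
    ? undefined
    : {
        command: "npm run start", // serves the .next build downloaded as an artifact
        url: "http://localhost:3000",
        reuseExistingServer: !process.env.CI,
      },
});
```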

Database Migration Workflow

# .github/workflows/migrate.yml
name: Database Migration

on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        required: true
        type: choice
        options: [staging, production]
      confirm:
        description: "Type CONFIRM to proceed"
        required: true

jobs:
  migrate:
    name: Run Migrations
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }}
    if: github.event.inputs.confirm == 'CONFIRM'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - run: npm ci
      - name: Run Prisma migrations
        run: npx prisma migrate deploy
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
      # OR for Drizzle:
      # - run: npx drizzle-kit migrate

The workflow_dispatch trigger with a confirmation input prevents accidental migration runs. Environment secrets ensure production credentials are separate from staging.


Vercel Integration

Vercel's GitHub integration handles most of the deployment workflow automatically:

Automatic Deployments

  1. Install the Vercel GitHub app
  2. Connect your repository
  3. Vercel automatically:
    • Deploys every push to main to production
    • Creates a preview deployment for every PR branch
    • Adds deployment status checks to PRs
    • Notifies on failures

Environment Variables

Set environment variables in Vercel's dashboard, not in your repository:

  • Production, preview, and development environments each get separate values
  • DATABASE_URL for production points to the production database
  • DATABASE_URL for preview can use a staging database or Neon branch

# Or via Vercel CLI
vercel env add DATABASE_URL production
vercel env add DATABASE_URL preview

Preview Database with Neon Branching

Neon's database branching feature creates isolated database copies for each PR:

# .github/workflows/preview-db.yml
name: Preview Database

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  create-preview-db:
    runs-on: ubuntu-latest
    steps:
      - name: Create Neon branch
        id: neon
        uses: neondatabase/create-branch-action@v5
        with:
          project_id: ${{ secrets.NEON_PROJECT_ID }}
          branch_name: preview/pr-${{ github.event.number }}
          api_key: ${{ secrets.NEON_API_KEY }}

      # Note: this rewrites the project-wide preview DATABASE_URL, so it affects
      # all open PRs; for true per-PR isolation, scope the variable to the PR's
      # git branch (vercel env add DATABASE_URL preview <branch>).
      - name: Set Vercel env for preview
        run: |
          npm install --global vercel
          vercel env rm DATABASE_URL preview --yes --token="$VERCEL_TOKEN" || true
          echo "${{ steps.neon.outputs.db_url }}" | vercel env add DATABASE_URL preview --token="$VERCEL_TOKEN"
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}

Pre-commit Hooks

Pre-commit hooks catch issues before they hit CI:

npm install -D husky lint-staged
npx husky init

// package.json
{
  "lint-staged": {
    "*.{ts,tsx}": [
      "eslint --fix",
      "prettier --write"
    ],
    "*.{json,md,css}": "prettier --write"
  }
}

# .husky/pre-commit
npx lint-staged

# .husky/pre-push
npm run type-check
npm test -- --run

Post-Deploy Smoke Tests

After each production deployment, run a fast smoke test to verify the app is functional:

// e2e/smoke.spec.ts
import { test, expect } from "@playwright/test";

test.describe("Smoke Tests", () => {
  test("home page loads", async ({ page }) => {
    await page.goto("/");
    await expect(page).toHaveTitle(/YourApp/);
  });

  test("login page accessible", async ({ page }) => {
    await page.goto("/auth/login");
    // The "form" role is only exposed when the form has an accessible name;
    // use a more specific locator (e.g. the submit button) if this fails.
    await expect(page.getByRole("form")).toBeVisible();
  });

  test("API health check", async ({ request }) => {
    const response = await request.get("/api/health");
    expect(response.ok()).toBeTruthy();
    const json = await response.json();
    expect(json.status).toBe("ok");
  });
});

# Add to production deploy workflow
- name: Smoke tests
  run: npx playwright test smoke.spec.ts
  env:
    BASE_URL: https://your-production-url.com

Secrets Management

GitHub Actions secrets:

  • Go to repository Settings → Secrets and variables → Actions
  • Add each secret separately (never commit .env to git)
  • Use environment-scoped secrets for production

Secret rotation checklist (on team member departure):

  • Stripe API keys
  • Database connection strings
  • Auth secrets (NEXTAUTH_SECRET)
  • Third-party API keys

Required secrets for a standard Next.js SaaS:

DATABASE_URL          # Production database
DATABASE_URL_BUILD    # Build-time (can use staging)
NEXTAUTH_SECRET       # 32+ char random string
NEXTAUTH_URL          # Production URL
STRIPE_SECRET_KEY     # Stripe live key
STRIPE_WEBHOOK_SECRET # Stripe webhook signature
SENDGRID_API_KEY      # Or Resend, Postmark
GITHUB_ID             # OAuth app
GITHUB_SECRET         # OAuth app secret
GOOGLE_CLIENT_ID      # Google OAuth
GOOGLE_CLIENT_SECRET  # Google OAuth secret

For the test setup that these pipelines run, see testing setup: Vitest & Playwright in boilerplates. For how TypeScript config fits into the build pipeline, see TypeScript config: boilerplate best practices. For the boilerplates that ship CI/CD pre-configured, see best Next.js boilerplates 2026.


Methodology

Pipeline patterns based on production deployments in Next.js SaaS boilerplates (ShipFast, Supastarter, Bedrock), GitHub Actions official documentation, Vercel deployment documentation, and Neon branching feature documentation. Security recommendations follow GitHub's security best practices guide.
