
Guide

Best Serverless Boilerplates and Starter Kits 2026

Serverless boilerplates in 2026: SST Ion, Serverless Framework, Hono on Cloudflare, and AWS starters compared — scale-to-zero billing models included.

StarterPick Team

TL;DR

SST Ion (v3) is the most complete serverless framework for building production SaaS on AWS. Hono on Cloudflare Workers is the best choice for globally distributed APIs with zero cold start. For simpler needs, Vercel Functions or Netlify Functions eliminate infrastructure management entirely. The right choice depends on how much AWS control you need versus deployment simplicity.

Serverless in 2026: Three Paradigms

Serverless has matured into three distinct paradigms with different cost models, performance characteristics, and use cases:

  1. Cloud functions — AWS Lambda, Vercel Functions, Netlify Functions. Per-invocation billing, managed by the platform, limited to request/response patterns.
  2. Edge computing — Cloudflare Workers, Vercel Edge, Deno Deploy. Runs at 300+ locations globally, near-zero cold start, ~10ms response times worldwide.
  3. Full serverless frameworks — SST, Serverless Framework, Architect. Infrastructure-as-code for complete serverless SaaS with databases, queues, storage, and functions.

Each has different boilerplates, different trade-offs, and different sweet spots.

Quick Comparison

| Starter | Platform | Language | Cold Start | Scale | Best For |
|---|---|---|---|---|---|
| SST Ion | AWS | TypeScript | 100ms-2s | Unlimited | Full AWS serverless SaaS |
| Serverless Framework | Multi-cloud | TS/Python | 100ms-2s | Unlimited | Multi-cloud functions |
| Hono (Cloudflare) | Cloudflare | TypeScript | ~0ms | Unlimited | Edge APIs |
| Vercel Functions | Vercel | TypeScript | ~50ms | Vercel limits | Next.js API routes |
| Architect | AWS | Node.js | 100ms-2s | Unlimited | AWS Lambda specialists |

The Starters

SST Ion — Best Full AWS Serverless

Price: Free | Creator: SST team | GitHub Stars: 22k+

SST v3 (Ion) is the most complete serverless framework available. It defines your entire AWS infrastructure as TypeScript using Pulumi — Lambda functions, DynamoDB tables, S3 buckets, API Gateway, RDS, SQS, EventBridge, and more. The killer feature: Live Lambda development with local code changes reflected in real AWS infrastructure in real time.

// sst.config.ts — full infrastructure in TypeScript (SST v3 / Ion)
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: 'my-saas',
      home: 'aws',
      providers: { aws: { region: 'us-east-1' } },
    };
  },
  async run() {
    const table = new sst.aws.Dynamo('Users', {
      fields: { userId: 'string', email: 'string' },
      primaryIndex: { hashKey: 'userId' },
      globalIndexes: {
        emailIndex: { hashKey: 'email' },
      },
    });

    const queue = new sst.aws.Queue('EmailQueue');
    queue.subscribe('functions/processEmail.handler');

    const api = new sst.aws.ApiGatewayV2('Api');
    api.route('GET /users', { handler: 'functions/listUsers.handler', link: [table] });
    api.route('POST /users', { handler: 'functions/createUser.handler', link: [table, queue] });

    return { ApiEndpoint: api.url };
  },
});

// functions/createUser.ts — Lambda with type-safe resource access
import { DynamoDB } from '@aws-sdk/client-dynamodb';
import { SQS } from '@aws-sdk/client-sqs';
import { Resource } from 'sst';
import type { APIGatewayProxyEventV2 } from 'aws-lambda';

// Created at module scope so warm invocations reuse the clients
const db = new DynamoDB({});
const sqs = new SQS({});

export const handler = async (event: APIGatewayProxyEventV2) => {
  const { email } = JSON.parse(event.body!);

  const userId = crypto.randomUUID();
  await db.putItem({
    TableName: Resource.Users.name,  // Type-safe from the SST link above
    Item: {
      userId: { S: userId },
      email: { S: email },
      createdAt: { S: new Date().toISOString() },
    },
  });

  await sqs.sendMessage({
    QueueUrl: Resource.EmailQueue.url,
    MessageBody: JSON.stringify({ userId, email, type: 'welcome' }),
  });

  return { statusCode: 201, body: JSON.stringify({ userId }) };
};

Live Lambda development changes the serverless DX fundamentally. Running sst dev starts a local proxy — your Lambda function code runs locally with live reload, but it's invoked by real AWS API Gateway with real IAM permissions, real DynamoDB access, and real SQS messages. No local emulation, no mocking.

Choose if: You're building a production SaaS on AWS and want infrastructure-as-code with a great developer experience.

Hono on Cloudflare Workers — Best Edge API

Price: Free | Creator: Hono team | GitHub Stars: 23k+

Hono is a lightweight, Express-like web framework that runs on Cloudflare Workers (and every other JS runtime). Cloudflare Workers run at 300+ edge locations worldwide — cold start is <1ms, and global round-trips are typically 10-50ms.

npm create hono@latest my-api  # choose the cloudflare-workers template
cd my-api && npm run dev  # Runs locally on Wrangler's workerd runtime
// src/index.ts — Hono API on Cloudflare Workers
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { bearerAuth } from 'hono/bearer-auth';

type Env = {
  DB: D1Database;        // Cloudflare D1 (SQLite at edge)
  CACHE: KVNamespace;    // Cloudflare KV (edge key-value store)
  API_SECRET: string;    // Environment variable
};

const app = new Hono<{ Bindings: Env }>();

app.use('/api/*', cors());
app.use('/api/protected/*', bearerAuth({ verifyToken: (token, c) => token === c.env.API_SECRET }));

app.get('/api/users', async (c) => {
  const cached = await c.env.CACHE.get('all-users');
  if (cached) return c.json(JSON.parse(cached));

  const { results } = await c.env.DB.prepare('SELECT * FROM users').all();
  await c.env.CACHE.put('all-users', JSON.stringify(results), { expirationTtl: 60 });

  return c.json(results);
});

app.post('/api/users', async (c) => {
  const { email } = await c.req.json();
  const id = crypto.randomUUID();

  await c.env.DB.prepare('INSERT INTO users (id, email) VALUES (?, ?)')
    .bind(id, email)
    .run();

  return c.json({ id, email }, 201);
});

export default app;

Cloudflare ecosystem for SaaS:

  • D1 — SQLite database at the edge, free for development
  • KV — Key-value store with global replication, ~0ms reads
  • R2 — S3-compatible object storage, no egress fees
  • Queues — Durable message queuing for background jobs
  • Durable Objects — Stateful coordination at the edge (real-time features)

Choose if: You need a globally distributed API with minimal latency, or your users are geographically dispersed.

Serverless Framework — Best Multi-Cloud

Price: Free | Creator: Serverless Inc. | GitHub Stars: 46k+

The original serverless deployment framework. Deploy functions to AWS Lambda, Google Cloud Functions, or Azure Functions from the same serverless.yml config. More mature than SST, less opinionated, and supports non-TypeScript languages (Python, Go, Java).

# serverless.yml
service: my-saas-api
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  iam:
    role:
      statements:
        - Effect: Allow
          Action:  # scoped to what the functions actually need, not dynamodb:*
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:UpdateItem
            - dynamodb:Query
          Resource: !GetAtt UsersTable.Arn

functions:
  createUser:
    handler: src/users.create
    events:
      - httpApi:
          path: /users
          method: POST
  getUser:
    handler: src/users.get
    events:
      - httpApi:
          path: /users/{id}
          method: GET

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: users-${sls:stage}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST

Choose if: You need multi-cloud function deployment or prefer YAML-based infrastructure-as-code.

Serverless Trade-offs

| Factor | Serverless | Traditional Server |
|---|---|---|
| Cost (idle) | $0 | Fixed monthly |
| Cost (busy) | Per request | Fixed monthly |
| Scale | Auto, unlimited | Manual or auto-scale |
| Cold start | 100ms-2s (Lambda) | 0ms |
| Vendor lock-in | High | Low |
| State | Stateless only | Stateful possible |
| Local dev | Complex | Simple |
| Long tasks | Max 15 min (Lambda) | Unlimited |

The cold start problem deserves specific attention. Lambda cold starts (100ms-2s for Node.js) are invisible at low traffic but become noticeable when functions scale from zero. Solutions: provisioned concurrency ($$$), Cloudflare Workers (near-zero cold start), or keeping functions warm with scheduled pings. For user-facing APIs where latency matters, design to minimize cold starts or use Cloudflare Workers.
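Of those mitigations, provisioned concurrency is the most direct. In Serverless Framework it is a per-function setting — a minimal sketch, with the function name and instance count purely illustrative:

```yaml
# serverless.yml fragment — keep two execution environments initialized for a
# latency-sensitive endpoint. Provisioned concurrency is billed per GB-hour
# even while idle, so apply it only to functions where P99 latency matters.
functions:
  createUser:
    handler: src/users.create
    provisionedConcurrency: 2
    events:
      - httpApi:
          path: /users
          method: POST
```

This trades a fixed hourly charge for predictable latency on that one function, while the rest of the service keeps scale-to-zero billing.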

When Serverless Wins

Right architecture for:

  • Unpredictable traffic — Scale to zero during quiet periods, unlimited during spikes
  • Event-driven workflows — File processing, webhook handlers, scheduled jobs
  • Per-tenant isolation — Lambda functions can run with per-tenant IAM roles
  • Zero operations — No servers to patch, no infrastructure to monitor
  • Cost at low volume — $0/month for <1M Lambda invocations on AWS free tier

Wrong architecture for:

  • Latency-critical paths — Cold starts hurt P99 response times
  • Long-running tasks — Video encoding, ML training, batch jobs exceed function time limits
  • WebSocket persistence — Stateless functions don't maintain connection state (use API Gateway WebSockets or Cloudflare Durable Objects)
  • Cost at high, steady-state traffic — Fixed-cost servers are cheaper above ~$500/month in Lambda spend

Deployment Comparison

| Provider | Deploy Command | Deploy Time | Free Tier |
|---|---|---|---|
| SST Ion (AWS) | sst deploy | 2-5 min | AWS free tier |
| Cloudflare Workers | wrangler deploy | ~30 sec | 100k req/day |
| Vercel Functions | vercel deploy | ~2 min | 100GB-hours/mo |
| Netlify Functions | netlify deploy | ~2 min | 125k invocations/mo |
| Serverless Framework | sls deploy | 2-5 min | AWS free tier |

Cloudflare Workers deploy in seconds — the fastest feedback loop in serverless.

Getting Started with SST Ion

SST Ion is the most production-ready serverless starter for SaaS. Here's the setup path:

# Create a new SST v3 (Ion) app in a fresh project
mkdir my-saas && cd my-saas
npm init -y
npx sst@latest init  # generates sst.config.ts

# Configure AWS credentials
aws configure  # or use AWS SSO

# Deploy to your personal dev stage
npx sst dev    # Deploys to AWS + starts local proxy

SST dev mode deploys a personal stack (e.g., royce-dev) and keeps it synchronized with your local code. Every Lambda function runs locally — you can set breakpoints in VS Code and step through Lambda invocations. When you make a code change, it hot-reloads the local function without redeploying.

For production deployment:

npx sst deploy --stage production

This deploys immutable infrastructure to AWS with a separate production stack. SST uses the stage name to namespace all resources — DynamoDB table names, S3 bucket names, Lambda function names — so dev and production never interfere.

Getting Started with Hono on Cloudflare

npm create hono@latest my-api  # choose the cloudflare-workers template
cd my-api

# Development with Wrangler (local Workers runtime)
npm run dev  # Starts at localhost:8787

# Deploy to Cloudflare
npm run deploy

Cloudflare D1 database setup:

# Create D1 database
npx wrangler d1 create my-database

# Add to wrangler.toml
# [[d1_databases]]
# binding = "DB"
# database_name = "my-database"
# database_id = "your-db-id"

# Run migrations
npx wrangler d1 execute my-database --file=./schema.sql

Cloudflare D1 (SQLite at the edge) is generally available and production-ready for Hono-based APIs. The free tier includes 5GB of total storage and roughly 5 million row reads per day — sufficient for most early-stage SaaS.

Cost Estimating Serverless

Serverless cost structures differ from traditional VPS pricing. Here's a realistic cost model:

AWS Lambda pricing (US East):

  • First 1M requests/month: $0 (free tier)
  • Requests: $0.20 per 1M requests after free tier
  • Duration: $0.0000166667 per GB-second
  • A 256MB (0.25GB) Lambda handling 100ms requests costs about $0.0000166667 × 0.25 GB × 0.1 s per request in duration charges — roughly $0.42 per million requests

Real example — 100,000 API requests/day:

  • 3M requests/month — 2M billed beyond the free tier at $0.40
  • 500ms average duration at 256MB: 3M × 0.25 GB × 0.5 s = 375,000 GB-seconds ≈ $6.25/month
  • Total: ~$7/month for 3M API calls
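The arithmetic above can be wrapped in a small helper for sketching scenarios. A minimal sketch with the US East prices hardcoded; it uses 0.25 GB for 256MB (how AWS bills it) and, like the model above, ignores the 400,000 GB-seconds of monthly free compute that AWS also grants:

```typescript
// Estimate monthly AWS Lambda cost from request volume, duration, and memory.
// Prices: $0.20 per 1M requests beyond the 1M free, $0.0000166667 per GB-second.
function lambdaMonthlyCost(
  requestsPerMonth: number,
  avgDurationSec: number,
  memoryGB: number,
): number {
  const FREE_REQUESTS = 1_000_000;
  const billableRequests = Math.max(0, requestsPerMonth - FREE_REQUESTS);
  const requestCost = (billableRequests / 1_000_000) * 0.20;

  const gbSeconds = requestsPerMonth * memoryGB * avgDurationSec;
  const durationCost = gbSeconds * 0.0000166667;

  return requestCost + durationCost;
}

// 3M requests/month, 500ms average, 256MB (0.25GB)
console.log(lambdaMonthlyCost(3_000_000, 0.5, 0.25).toFixed(2)); // prints 6.65
```

Running a few scenarios through a helper like this makes the break-even point against a fixed-cost server concrete before committing to an architecture.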

Cloudflare Workers pricing:

  • Free tier: 100,000 requests/day ($0)
  • Paid: $5/month for 10M requests, then $0.30/M after

For most SaaS at early stage (under 10M requests/month), serverless costs are negligible. The pricing advantage reverses at high, steady-state load — when Lambda is running constantly, reserved EC2 instances become cheaper.

Common Serverless Patterns for SaaS

Background Job Queue

// SST v3 queue pattern for async processing
const queue = new sst.aws.Queue('Jobs');

queue.subscribe(
  {
    handler: 'functions/worker.handler',
    timeout: '60 seconds',  // keep at or below the queue's visibility timeout
  },
  {
    batch: {
      size: 10,               // Process up to 10 messages per invocation
      partialResponses: true, // Retry only the messages that failed
    },
  },
);

Webhook Handler

// Idempotent webhook processing with DynamoDB dedup
import { DynamoDB } from '@aws-sdk/client-dynamodb';
import { Resource } from 'sst';
import type { APIGatewayProxyEventV2 } from 'aws-lambda';

const db = new DynamoDB({});

export const handler = async (event: APIGatewayProxyEventV2) => {
  const body = JSON.parse(event.body!);
  const webhookId = event.headers['webhook-id'];
  if (!webhookId) {
    return { statusCode: 400, body: 'Missing webhook-id header' };
  }

  // Idempotency check
  const existing = await db.getItem({
    TableName: Resource.Webhooks.name,
    Key: { id: { S: webhookId } },
  });

  if (existing.Item) {
    return { statusCode: 200, body: 'Already processed' };
  }

  // Process webhook
  await processWebhook(body);

  // Mark as processed
  await db.putItem({
    TableName: Resource.Webhooks.name,
    Item: {
      id: { S: webhookId },
      processedAt: { S: new Date().toISOString() },
      ttl: { N: String(Math.floor(Date.now() / 1000) + 86400) },  // 24h TTL
    },
  });

  return { statusCode: 200, body: 'Processed' };
};

These patterns — background queues and idempotent webhook handlers — are the two most important serverless primitives for production SaaS. SST makes both straightforward with type-safe bindings between infrastructure and functions.

Key Takeaways

  • SST Ion is the best serverless framework for production SaaS on AWS with full infrastructure-as-code
  • Hono on Cloudflare Workers delivers near-zero cold start and global distribution for APIs
  • Serverless Framework is mature and supports multi-cloud if you need AWS + GCP + Azure
  • Cold starts (100ms-2s on Lambda) are the primary operational concern for user-facing APIs
  • Serverless is most cost-effective at variable traffic; fixed-cost servers win at sustained high volume

How to Evaluate Serverless Boilerplates

Serverless boilerplates fail in ways that aren't visible in demos. The most important evaluation steps happen after the happy-path demo works:

Test cold start behavior. After deploying to AWS Lambda, let the function scale to zero (no invocations for 15-30 minutes). Then hit the endpoint and measure response time. For Node.js Lambda functions, cold starts range from 300ms to 2000ms depending on the bundle size and initialization code. If your SaaS has user-facing API endpoints where this latency is noticeable, either configure provisioned concurrency (additional cost) or switch to Cloudflare Workers (near-zero cold start).

Verify local development parity. SST's Live Lambda development is the key feature to evaluate. Does the local development experience match production behavior? Can you set breakpoints and step through Lambda invocations? Are environment variables and resource bindings identical locally and in AWS? A false sense of production parity — local development that works while production fails on edge cases — is the most common SST pitfall.

Audit IAM permissions. Serverless infrastructure-as-code is only as secure as its IAM policies. Review what permissions SST generates for each Lambda function. Overly broad policies (dynamodb:* on all tables rather than dynamodb:GetItem,PutItem,UpdateItem on specific table ARNs) create security surface area that enterprise customers and compliance audits will flag.

Test failure handling in background queues. Serverless queue processors fail silently when error handling isn't explicit. Deploy the queue worker, publish a message that will fail processing, and verify that: the failed message goes to a dead-letter queue, the failure is logged with enough context to diagnose, and the queue eventually processes non-failing messages without being blocked by the failed one.
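In SST v3 the dead-letter queue can be attached when the queue is defined. A sketch under the assumption of the Jobs queue from the pattern above; the retry count is illustrative:

```typescript
// Messages that fail processing 3 times move to JobsDLQ instead of
// blocking or silently disappearing from the main queue.
const dlq = new sst.aws.Queue('JobsDLQ');

const queue = new sst.aws.Queue('Jobs', {
  dlq: {
    queue: dlq.arn,
    retry: 3,  // receive attempts before a message is dead-lettered
  },
});

queue.subscribe('functions/worker.handler');
```

Pair the DLQ with an alarm or a periodic drain function so dead-lettered messages are actually noticed, not just parked.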

What These Serverless Starters Have in Common

SST Ion, Serverless Framework, Hono on Cloudflare Workers, and Vercel/Netlify Functions share the fundamental trade-off: zero operational overhead at the cost of function execution model constraints.

All serverless platforms enforce statelessness. Function invocations share no memory state between them. Every external state access (database, cache, session) requires a network call. This is not a boilerplate limitation — it's an architectural property of serverless that shapes every design decision.

All serverless deployments benefit from connection pooling. Lambda functions that open new database connections on every invocation exhaust connection pool limits quickly. Neon's serverless driver, PlanetScale's serverless adapter, and Supabase's REST API all solve this problem differently, but all serverless boilerplates need one of these solutions.
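Whichever driver you choose, the complementary Lambda-side habit is creating the client at module scope so warm invocations reuse it. A minimal runnable sketch, with a fake connection object standing in for a real pooled client:

```typescript
// Module-scope state survives warm invocations of the same execution
// environment, so the expensive setup runs once rather than per request.
let setupCount = 0;
let connection: { id: number } | undefined;

function getConnection() {
  if (!connection) {
    setupCount += 1;              // stands in for a real connection handshake
    connection = { id: setupCount };
  }
  return connection;
}

// Stands in for a Lambda handler; every warm call reuses the same connection
const handler = async () => getConnection().id;

async function demo() {
  await handler();
  await handler();
  console.log(setupCount);  // prints 1 — both invocations shared one connection
}
demo();
```

The same shape applies to real drivers: construct the client once outside the handler, and let the platform tear it down when the execution environment is recycled.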

All function-based runtimes have execution time limits. AWS Lambda maxes at 15 minutes. Cloudflare Workers max at 30 seconds (CPU time, not wall clock). Vercel Functions at 60 seconds. Any feature requiring longer execution — video processing, large data exports, ML inference — must be architected as an async queue job rather than a synchronous API response.

The cold start challenge is universal but manageable. SST's provisioned concurrency, Cloudflare Workers' persistent V8 isolates, and Vercel's Edge Functions all address cold starts differently, and the choice between them is often a deployment preference rather than a technical requirement.

For the comparison between serverless and traditional server-based deployments and when each makes sense, see the buy vs build SaaS analysis. For boilerplates that deploy to traditional servers with Docker, the best Docker SaaS boilerplates guide covers the opposite deployment model. For Next.js SaaS on Vercel — the common alternative to dedicated serverless infrastructure — see the best Next.js boilerplates guide.

The serverless vs. traditional server choice is less final in 2026 than it was three years ago. SST Ion makes it possible to deploy a Next.js app alongside Lambda functions, DynamoDB tables, and SQS queues — blending what was previously a binary architectural choice. Cloudflare Workers' compatibility with standard Web APIs means Hono applications often run unchanged on Workers, Node.js, and Bun. The operational model you start with doesn't lock you in the way it once did, which means the selection criteria can focus on developer experience and deployment simplicity rather than future architectural flexibility.

Compare serverless and traditional SaaS boilerplates in the StarterPick directory.

See how SST Ion compares to traditional Next.js SaaS boilerplates for full-stack development.

Review free open-source SaaS boilerplates for edge-first SaaS architecture.
