
DBOS vs Trigger.dev v3 vs Inngest Functions: Durable Workflows for SaaS Boilerplates 2026

Pick the right durable execution layer for your SaaS boilerplate in 2026: DBOS, Trigger.dev v3, and Inngest Functions compared on programming model, cost, deploy story, and lock-in.

StarterPick Team

Quick Verdict

For typical Next.js SaaS boilerplate work — onboarding flows, AI pipelines, scheduled jobs, webhook fan-out — Inngest remains the lowest-friction choice. Trigger.dev v3 is the better pick if your jobs are long-running (think 30 minutes of AI inference) or if you want a self-hosted option. DBOS is the right call when "durable" means "this is database state and we cannot afford to drop it" — billing pipelines, ledger updates, multi-step financial workflows.

These are not interchangeable. Their programming models are genuinely different.

Key Takeaways

  • Inngest = serverless functions with steps; no infra, generous free tier, perfect for event-driven SaaS work.
  • Trigger.dev v3 = long-running tasks with checkpointing, machine-sized workers, self-host option, AI-friendly.
  • DBOS = transactional workflows that run inside Postgres; the strongest correctness guarantees but the heaviest mental shift.

Decision Table

Workload | Pick
User signs up → send 5 emails over 7 days | Inngest (event + delays)
Generate a 20-minute video with multiple AI calls and retries | Trigger.dev v3 (long-running task)
Apply a usage-based billing batch with audit guarantees | DBOS (Postgres-transactional workflow)
Webhook fan-out from Stripe to 6 internal handlers | Inngest (events + per-handler retry)
Cron job that syncs CRM data nightly | Either Inngest or Trigger.dev
Multi-step Saga (charge → ship → notify, with compensations) | DBOS or Trigger.dev
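The Saga row is worth a concrete picture. Here is a minimal, engine-agnostic sketch of the compensation pattern (all names are hypothetical, not any vendor's API): run steps in order, and on failure undo the completed ones in reverse.

```typescript
// Hypothetical saga runner: each step pairs an action with a compensation.
type SagaStep = {
  name: string;
  run: () => Promise<void>;
  compensate: () => Promise<void>;
};

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.run();
      completed.push(step);
    } catch (err) {
      // Undo in reverse order: notify ← ship ← charge.
      for (const done of completed.reverse()) await done.compensate();
      throw err;
    }
  }
}
```

In DBOS or Trigger.dev, the run/compensate pairs become durable steps, so the compensations themselves survive crashes and redeploys.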

What "Durable" Actually Means Here

Standard background job queues (BullMQ, SQS, Resque) give you "the job will run." Durable workflow engines give you "every step will run exactly once, in order, and resume from the exact line of code on failure or redeploy."

The trade-off is the programming model. You can no longer write await fetch(url) and expect the runtime to recover it safely; you write await step.run('fetch', () => fetch(url)) so the framework can checkpoint the result and replay the function deterministically.
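To see why the wrapper matters, here is a minimal, framework-free sketch (all names hypothetical, not any vendor's SDK) of the memoization trick durable engines rely on: completed steps are checkpointed, so a replay returns recorded results instead of re-running side effects.

```typescript
// Hypothetical durable replay: a step log memoizes completed steps by id.
type StepLog = Record<string, unknown>;

async function runStep<T>(log: StepLog, id: string, fn: () => Promise<T>): Promise<T> {
  if (id in log) return log[id] as T; // replay: return the checkpointed result, skip the side effect
  const result = await fn();          // first execution: actually perform the side effect
  log[id] = result;                   // checkpoint before continuing
  return result;
}
```

On restart after a crash, the engine re-runs the whole function body; steps already in the log resolve instantly, and execution resumes at the first un-checkpointed step.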

If you've shipped a SaaS boilerplate without durable workflows, you've already paid for it: an email got sent twice, a Stripe webhook handler crashed mid-way, a cron job double-charged a customer. Durable execution is the long-term answer.

Inngest

Pricing: Free up to 50k steps/month and 25 concurrent steps; paid plans scale concurrency and step volume. No infra to run.

Fit: Most Next.js, Hono, or Express boilerplates. Event-driven products. Anything where steps are short (< 30 seconds) and triggered by user actions or webhooks.

What you write:

import { inngest } from '@/inngest/client';

export const onSignup = inngest.createFunction(
  { id: 'onboarding' },
  { event: 'user/signup' },
  async ({ event, step }) => {
    await step.run('create-stripe-customer', () =>
      stripe.customers.create({ email: event.data.email }),
    );
    await step.sleep('wait-1-day', '1d');
    await step.run('day-1-email', () => sendDay1Email(event.data.userId));
    await step.sleep('wait-3-days', '3d');
    await step.run('day-3-email', () => sendDay3Email(event.data.userId));
  },
);

Strengths:

  • Deploys with your Next.js app — one Vercel deployment, no separate worker.
  • Local dev server (npx inngest-cli dev) gives a dashboard for replays and step-level inspection.
  • Fan-out, throttling, debounce, and rate limit are first-class concepts.
  • Cron jobs, event triggers, and waitForEvent in one model.

Limits:

  • Step duration tied to your serverless function timeout (Vercel: 15 min Pro, 5 min Hobby).
  • Long-running AI generation needs explicit chunking.
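"Explicit chunking" just means giving each unit of work its own short step. A hedged sketch, with a step callback standing in for the framework's step primitive (an assumption, not the real SDK signature):

```typescript
// Split a long generation into per-prompt steps so each fits in a serverless
// timeout and is checkpointed individually.
type Step = <T>(id: string, fn: () => Promise<T>) => Promise<T>;

async function generateInChunks(
  step: Step,
  prompts: string[],
  generate: (p: string) => Promise<string>,
): Promise<string[]> {
  const out: string[] = [];
  for (let i = 0; i < prompts.length; i++) {
    // One short, retryable step per prompt; a crash resumes at prompt i.
    out.push(await step(`generate-${i}`, () => generate(prompts[i])));
  }
  return out;
}
```

Each step now runs well under the function timeout, and a mid-run failure retries only the prompt that failed.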

This is the layer most boilerplates in our background jobs roundup default to.

Trigger.dev v3

Pricing: Free up to 10k tasks/month with 1 concurrent run; paid plans add concurrency, longer runs, and machine sizes. Self-host is free (you operate the workers).

Fit: AI products with long-running pipelines (video generation, deep research, document processing). Teams that want a self-host escape valve. Products with heavy file work — Trigger.dev workers ship with a real file system, outbound network access, and the system binaries you'd expect.

What you write:

import { task } from '@trigger.dev/sdk/v3';

export const generateReport = task({
  id: 'generate-report',
  maxDuration: 30 * 60, // seconds — allow the run up to 30 minutes
  run: async (payload: { userId: string }) => {
    const data = await fetchData(payload.userId);
    const sections = await Promise.all(
      data.map((d) => llm.generate(d.prompt)),
    );
    const pdf = await renderPdf(sections);
    return { url: await upload(pdf) };
  },
});

Strengths:

  • Tasks can run for hours, with checkpointed resume across redeploys.
  • Machine sizes (small/medium/large/large-2x) per task — pick by memory.
  • React-based dashboard with run history, retries, and replay.
  • Self-host with Docker + Postgres if you need to keep data in your VPC.

Limits:

  • Runs in Trigger.dev's cloud workers by default; not your serverless runtime.
  • Mental model is "task," less natural for fine-grained event fan-out than Inngest.

DBOS

Pricing: OSS (Apache 2.0). DBOS Cloud has a free tier; paid scales by compute. Self-host is free.

Fit: Workflows where the system of record is Postgres and you cannot tolerate exactly-once violations: billing, ledger, settlement, refunds, multi-step state machines that touch the database.
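The mechanism behind that guarantee is worth a glance. In a toy sketch (an in-memory Map stands in for a Postgres table with a unique key — this is not the DBOS API), the idempotency key plays the role of the unique constraint that makes a redelivered trigger a no-op:

```typescript
// Toy exactly-once write: the key acts like a unique constraint, and the insert
// "commits" together with the workflow checkpoint.
function chargeOnce(ledger: Map<string, number>, key: string, amount: number): boolean {
  if (ledger.has(key)) return false; // duplicate delivery: key conflict, no second charge
  ledger.set(key, amount);           // first delivery: record the charge
  return true;
}
```

Retrying the same workflow id hits the existing row and returns without charging twice — that is what "workflow steps and database writes commit together" buys you.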

What you write:

import { Workflow, Transaction, WorkflowContext, TransactionContext } from '@dbos-inc/dbos-sdk';

export class BillingFlow {
  @Transaction()
  static async chargeCustomer(ctx: TransactionContext<any>, customerId: string, amount: number) {
    // pg-style client assumed; the insert commits atomically with the workflow checkpoint
    const res = await ctx.client.query(
      'INSERT INTO charges(customer_id, amount) VALUES($1,$2) RETURNING id',
      [customerId, amount],
    );
    return res.rows[0].id;
  }

  @Workflow()
  static async runMonthlyBilling(ctx: WorkflowContext, customerId: string) {
    // invoking through the workflow context is what makes the step durable
    const invoiceId = await ctx.invoke(BillingFlow).chargeCustomer(customerId, 99);
    await ctx.send(`customer-${customerId}`, invoiceId, 'invoice-emitted');
  }
}

Strengths:

  • The workflow's state lives in your Postgres. No external broker.
  • Transactional guarantees — workflow steps and database writes commit together.
  • Time travel debugging: rewind a workflow to any step and inspect.
  • Strong story for fintech, healthcare, anything regulated.

Limits:

  • Mental model is the heaviest of the three. Decorators, OOP-style classes, deterministic step contracts.
  • TypeScript SDK is stable; some SaaS boilerplates haven't yet shipped DBOS templates.
  • Postgres-only.

Programming Model Comparison

Concept | Inngest | Trigger.dev v3 | DBOS
Step / unit | step.run('id', fn) | Task body | @Step / @Transaction
Idempotency | Per step id | Per task run | Per workflow id
Storage | Inngest cloud | Trigger DB | Your Postgres
Timer | step.sleep | wait.for() | DBOS.sleep
Wait for event | step.waitForEvent | wait.forToken | DBOS.recv
Local dev | Native CLI dashboard | Native CLI dashboard | DBOS dev server
Lock-in | Cloud-only (managed); self-host limited | Self-host available | OSS, run anywhere

Cost Reality at SaaS Scale

A boilerplate that sends 5 onboarding emails + 1 daily digest per user, at 10,000 active users:

  • Roughly 60k step executions/day (~1.8M/month).
  • Inngest: comfortably in low-paid tier (~$50–150/mo depending on concurrency).
  • Trigger.dev: probably $50–100/mo if tasks are short.
  • DBOS: free if self-hosted on your existing Postgres; cloud pricing varies by compute.
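A hedged back-of-envelope behind those numbers (the per-user step counts are assumptions for illustration, not vendor figures):

```typescript
// Rough step math behind the ~60k/day estimate; tweak the assumptions for your product.
const activeUsers = 10_000;
const digestStepsPerUserPerDay = 2;     // assumption: render + send for the daily digest
const onboardingStepsPerUserPerDay = 4; // assumption: rolling onboarding sequence traffic
const stepsPerDay =
  activeUsers * (digestStepsPerUserPerDay + onboardingStepsPerUserPerDay); // 60_000
const stepsPerMonth = stepsPerDay * 30;                                    // 1_800_000
console.log({ stepsPerDay, stepsPerMonth });
```

Halve or double the per-user assumptions and you stay in the same pricing tier for all three vendors, which is the point: at this scale the bill is noise.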

Pricing only really diverges once you run long-running AI pipelines: there Trigger.dev's machine-time billing matters, while Inngest's per-step billing penalizes a long process broken into many short steps.

What to Pick by Boilerplate Profile

  • Indie SaaS launching this month → Inngest. The dev experience is unmatched and the free tier is generous.
  • AI SaaS doing 30-minute generations → Trigger.dev v3.
  • B2B SaaS with billing complexity or audit requirements → DBOS for billing flows; Inngest for everything else.
  • Self-host requirement (regulated, EU) → DBOS or Trigger.dev v3 self-hosted.
  • Full-stack TypeScript boilerplate (T3, Makerkit, ShipFast) → Inngest, with Trigger.dev for any AI-heavy task.

Migrating Off BullMQ or Cron

If you're on BullMQ + Redis with a homemade retry policy, the upgrade story is:

  1. Identify the workflows where "exactly once" actually matters (billing, fulfillment, anything tied to money).
  2. Move just those to a durable engine first.
  3. Leave the firehose work (sending emails, indexing into search) on BullMQ if it's working.

Don't replace your queue wholesale. Replace the workflows where correctness matters and leave the rest.

FAQ

Can I run multiple in one app? Yes. A common pattern is Inngest for event-driven user flows + DBOS for billing. Just budget the cognitive cost.

What about Temporal? Temporal is the older, heaviest, most enterprise-grade option. It's the right call at very large scale or when your team already operates it. For a SaaS boilerplate, the three above are easier first picks.

Does my boilerplate's webhook layer still matter? Yes — webhooks are the trigger; the workflow engine is what runs deterministically afterwards.


See the broader background jobs comparison for the queue-only alternatives.

If you're starting from scratch, the observability stack guide covers logs and traces for whichever runtime you pick.
