Quick Verdict
For typical Next.js SaaS boilerplate work — onboarding flows, AI pipelines, scheduled jobs, webhook fan-out — Inngest remains the lowest-friction choice. Trigger.dev v3 is the better pick if your jobs are long-running (think 30 minutes of AI inference) or if you want a self-hosted option. DBOS is the right call when "durable" means "this is database state and we cannot afford to drop it" — billing pipelines, ledger updates, multi-step financial workflows.
These are not interchangeable. Their programming models are genuinely different.
Key Takeaways
- Inngest = serverless functions with steps; no infra, generous free tier, perfect for event-driven SaaS work.
- Trigger.dev v3 = long-running tasks with checkpointing, machine-sized workers, self-host option, AI-friendly.
- DBOS = transactional workflows that run inside Postgres; the strongest correctness guarantees but the heaviest mental shift.
Decision Table
| Workload | Pick |
|---|---|
| User signs up → send 5 emails over 7 days | Inngest (event + delays) |
| Generate a 20-minute video with multiple AI calls and retries | Trigger.dev v3 (long-running task) |
| Apply a usage-based billing batch with audit guarantees | DBOS (Postgres-transactional workflow) |
| Webhook fan-out from Stripe to 6 internal handlers | Inngest (events + per-handler retry) |
| Cron job that syncs CRM data nightly | Either Inngest or Trigger.dev |
| Multi-step Saga (charge → ship → notify, with compensations) | DBOS or Trigger.dev |
What "Durable" Actually Means Here
Standard background job queues (BullMQ, SQS, Resque) give you "the job will run." Durable workflow engines give you "every step runs effectively once, in order, and execution resumes from the last completed step after a failure or redeploy."
The trade is the programming model. You can no longer write `await fetch(url)` and expect it to survive a crash; you write `await step.run('fetch', () => fetch(url))` so the framework can checkpoint the result and replay the function deterministically.
If you've shipped a SaaS boilerplate without durable workflows, you've already paid for their absence: an email got sent twice, a Stripe webhook handler crashed halfway through, a cron job double-charged a customer. Durable execution is the long-term answer.
Inngest
Pricing: Free up to 50k steps/month and 25 concurrent steps; paid plans scale concurrency and step volume. No infra to run.
Fit: Most Next.js, Hono, or Express boilerplates. Event-driven products. Anything where steps are short (< 30 seconds) and triggered by user actions or webhooks.
What you write:
```ts
import { inngest } from '@/inngest/client';

export const onSignup = inngest.createFunction(
  { id: 'onboarding' },
  { event: 'user/signup' },
  async ({ event, step }) => {
    await step.run('create-stripe-customer', () =>
      stripe.customers.create({ email: event.data.email }),
    );
    await step.sleep('wait-1-day', '1d');
    await step.run('day-1-email', () => sendDay1Email(event.data.userId));
    await step.sleep('wait-3-days', '3d');
    await step.run('day-3-email', () => sendDay3Email(event.data.userId));
  },
);
```
Strengths:
- Deploys with your Next.js app — one Vercel deployment, no separate worker.
- Local dev server (`npx inngest-cli dev`) gives a dashboard for replays and step-level inspection.
- Fan-out, throttling, debounce, and rate limiting are first-class concepts.
- Cron jobs, event triggers, and waitForEvent in one model.
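The waitForEvent pattern can be sketched like this, against a minimal local stand-in for Inngest's step API (the `Step` type and the nudge flow below are illustrative, not the real SDK types; in a real function `step` is provided by the `createFunction` handler):

```typescript
// Minimal mirror of the step API surface this sketch needs.
type Step = {
  run<T>(id: string, fn: () => Promise<T>): Promise<T>;
  waitForEvent(
    id: string,
    opts: { event: string; timeout: string; match?: string },
  ): Promise<unknown | null>; // null means the timeout elapsed
};

// Hypothetical flow: wait up to 3 days for activation, nudge on timeout.
export async function nudgeUntilActive(step: Step, userId: string) {
  const activated = await step.waitForEvent('await-activation', {
    event: 'user/activated',
    timeout: '3d',
    match: 'data.userId', // correlate the trigger and the awaited event
  });
  if (activated === null) {
    // Timeout elapsed: the user never activated.
    return step.run('nudge-email', async () => ({ nudged: userId }));
  }
  return { nudged: null };
}
```

The function sleeps for free while waiting; no worker is held open for those 3 days.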
Limits:
- Step duration tied to your serverless function timeout (Vercel: 15 min Pro, 5 min Hobby).
- Long-running AI generation needs explicit chunking.
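The chunking that limit forces can be sketched as follows, again with a local stand-in for `step.run` (the prompt-splitting and the fake model call are illustrative):

```typescript
// Local stand-in for Inngest's step.run; in a real function it comes
// from the handler arguments.
type Step = { run<T>(id: string, fn: () => Promise<T>): Promise<T> };

// Hypothetical stand-in for a real LLM call.
async function fakeModelCall(prompt: string): Promise<string> {
  return `section for: ${prompt}`;
}

// Split one long generation into one step per section, so each model
// call fits inside a serverless timeout and checkpoints independently.
// After a crash or redeploy, completed chunks replay from the
// checkpoint instead of re-running the model.
export async function generateInChunks(
  step: Step,
  prompts: string[],
): Promise<string[]> {
  const sections: string[] = [];
  for (const [i, prompt] of prompts.entries()) {
    const section = await step.run(`chunk-${i}`, () => fakeModelCall(prompt));
    sections.push(section);
  }
  return sections;
}
```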
This is the layer most boilerplates in our background jobs roundup default to.
Trigger.dev v3
Pricing: Free up to 10k tasks/month with 1 concurrent run; paid plans add concurrency, longer runs, and machine sizes. Self-host is free (you operate the workers).
Fit: AI products with long-running pipelines (video generation, deep research, document processing). Teams that want a self-host escape valve. Products with heavy file work — Trigger.dev workers ship with the file system, network, and binaries you'd expect.
What you write:
```ts
import { task } from '@trigger.dev/sdk/v3';

export const generateReport = task({
  id: 'generate-report',
  maxDuration: 30 * 60, // seconds
  run: async (payload: { userId: string }) => {
    const data = await fetchData(payload.userId);
    const sections = await Promise.all(
      data.map((d) => llm.generate(d.prompt)),
    );
    const pdf = await renderPdf(sections);
    return { url: await upload(pdf) };
  },
});
```
Strengths:
- Tasks can run for hours; checkpoint resume across redeploys.
- Machine sizes (small/medium/large/large-2x) per task — pick by memory.
- React-based dashboard with run history, retries, and replay.
- Self-host with Docker + Postgres if you need to keep data in your VPC.
Limits:
- Runs in Trigger.dev's cloud workers by default; not your serverless runtime.
- Mental model is "task," less natural for fine-grained event fan-out than Inngest.
DBOS
Pricing: OSS (Apache 2.0). DBOS Cloud has a free tier; paid scales by compute. Self-host is free.
Fit: Workflows where the system of record is Postgres and you cannot tolerate exactly-once violations: billing, ledger, settlement, refunds, multi-step state machines that touch the database.
What you write:
```ts
import {
  Transaction,
  TransactionContext,
  Workflow,
  WorkflowContext,
} from '@dbos-inc/dbos-sdk';
import { PoolClient } from 'pg';

export class BillingFlow {
  @Transaction()
  static async chargeCustomer(
    ctx: TransactionContext<PoolClient>,
    customerId: string,
    amount: number,
  ): Promise<number> {
    const { rows } = await ctx.client.query(
      'INSERT INTO charges(customer_id, amount) VALUES($1, $2) RETURNING id',
      [customerId, amount],
    );
    return rows[0].id;
  }

  @Workflow()
  static async runMonthlyBilling(ctx: WorkflowContext, customerId: string) {
    const invoiceId = await ctx.invoke(BillingFlow).chargeCustomer(customerId, 99);
    await ctx.send(`customer-${customerId}`, invoiceId, 'invoice-emitted');
  }
}
```
Strengths:
- The workflow's state lives in your Postgres. No external broker.
- Transactional guarantees — workflow steps and database writes commit together.
- Time travel debugging: rewind a workflow to any step and inspect.
- Strong story for fintech, healthcare, anything regulated.
Limits:
- Mental model is the heaviest of the three. Decorators, OOP-style classes, deterministic step contracts.
- TypeScript SDK is stable; some SaaS boilerplates haven't yet shipped DBOS templates.
- Postgres-only.
Programming Model Comparison
| Concept | Inngest | Trigger.dev v3 | DBOS |
|---|---|---|---|
| Step / unit | step.run('id', fn) | Task body | @Step / @Transaction |
| Idempotency | Per step id | Per task run | Per workflow id |
| Storage | Inngest cloud | Trigger DB | Your Postgres |
| Timer | step.sleep | wait.for() | DBOS.sleep |
| Wait for event | step.waitForEvent | wait.forToken | DBOS.recv |
| Local dev | Native CLI dashboard | Native CLI dashboard | DBOS dev server |
| Lock-in | Cloud-only (managed); self-host limited | Self-host available | OSS, run anywhere |
Cost Reality at SaaS Scale
A boilerplate that sends 5 onboarding emails + 1 daily digest per user, at 10,000 active users:
- Roughly 60k step executions/day, assuming each daily run takes a handful of steps (~1.8M/month).
- Inngest: comfortably within a low paid tier (~$50–150/mo depending on concurrency).
- Trigger.dev: probably $50–100/mo if tasks are short.
- DBOS: free if self-hosted on your existing Postgres; cloud pricing varies by compute.
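The back-of-envelope math behind those numbers, with the per-run step count called out as an explicit assumption:

```typescript
const activeUsers = 10_000;
const stepsPerDigestRun = 6; // assumption: each daily digest run executes ~6 steps
const daysPerMonth = 30;

const dailySteps = activeUsers * stepsPerDigestRun; // 60,000/day
const monthlySteps = dailySteps * daysPerMonth; // 1,800,000/month
// Onboarding emails are one-time per user, so they barely move the total.
console.log({ dailySteps, monthlySteps });
```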
The pricing only really swings when you have long-running AI pipelines. There Trigger.dev's machine billing matters; Inngest's per-step billing penalizes you for many short steps inside a long process.
What to Pick by Boilerplate Profile
- Indie SaaS launching this month → Inngest. The dev experience is unmatched and the free tier is generous.
- AI SaaS doing 30-minute generations → Trigger.dev v3.
- B2B SaaS with billing complexity or audit requirements → DBOS for billing flows; Inngest for everything else.
- Self-host requirement (regulated, EU) → DBOS or Trigger.dev v3 self-hosted.
- Full-stack TypeScript boilerplate (T3, Makerkit, ShipFast) → Inngest, with Trigger.dev for any AI-heavy task.
Migrating Off BullMQ or Cron
If you're on BullMQ + Redis with a homemade retry policy, the upgrade story is:
- Identify the workflows where "exactly once" actually matters (billing, fulfillment, anything tied to money).
- Move just those to a durable engine first.
- Leave the firehose work (sending emails, indexing into search) on BullMQ if it's working.
Don't replace your queue wholesale. Replace the workflows where correctness matters and leave the rest.
FAQ
Can I run multiple in one app? Yes. A common pattern is Inngest for event-driven user flows + DBOS for billing. Just budget the cognitive cost.
What about Temporal? Temporal is the older, heaviest, most enterprise-grade option. It's the right call at very large scale or when your team already operates it. For a SaaS boilerplate, the three above are easier first picks.
Does my boilerplate's webhook layer still matter? Yes — webhooks are the trigger; the workflow engine is what runs deterministically afterwards.
See the broader background jobs comparison for the queue-only alternatives.
If you're starting from scratch, the observability stack guide covers logs and traces for whichever runtime you pick.