Inngest vs BullMQ vs Trigger.dev for SaaS Boilerplates 2026
Background Jobs: Required Infrastructure for Production SaaS
Every production SaaS needs background processing: sending emails asynchronously, processing uploaded files, syncing with external APIs, generating reports, running scheduled tasks. These operations cannot happen in the request-response cycle — they take too long and failures would break the user experience.
In 2026, three options dominate for Next.js SaaS:
- Inngest — serverless background jobs with durable functions
- BullMQ — Redis-backed job queues (requires Redis infrastructure)
- Trigger.dev — managed background jobs with real-time observability
TL;DR
- Inngest: Use for serverless-native Next.js apps (Vercel). No Redis needed. Generous free tier.
- BullMQ: Use when you have Redis and need maximum performance or complex job topologies.
- Trigger.dev: Use for long-running AI jobs, workflows with many steps, and production observability.
Key Takeaways
- Inngest runs on any serverless platform — zero infrastructure to manage
- BullMQ requires Redis (Upstash or self-hosted) — more infrastructure but maximum performance
- Trigger.dev is purpose-built for AI/LLM workflows that run for minutes or hours
- Inngest free tier: 50K function runs/month — sufficient for most SaaS in early stages
- Trigger.dev is used by Midday v1 — the open-source boilerplate proves it in production
- Scheduled jobs (cron) are supported by all three
Inngest: Serverless Background Jobs
Inngest is a background job system with no infrastructure of your own to run: your functions live in a Next.js API route handler, and Inngest's platform invokes them over HTTP with durable, retryable steps.
npm install inngest
Setup
// inngest/client.ts
import { Inngest } from 'inngest';
export const inngest = new Inngest({ id: 'my-saas' });
// inngest/functions.ts
import { inngest } from './client';
// A durable background function:
export const processDocument = inngest.createFunction(
  { id: 'process-document', name: 'Process Document Upload' },
  { event: 'document/uploaded' },
  async ({ event, step }) => {
    const { documentId, userId } = event.data;

    // Step 1: Extract text
    const text = await step.run('extract-text', async () => {
      const doc = await db.document.findUnique({ where: { id: documentId } });
      return extractTextFromPdf(doc.url);
    });

    // Step 2: Generate embeddings (if this fails, retries from here)
    const embedding = await step.run('generate-embedding', async () => {
      const { embedding } = await embed({
        model: openai.embedding('text-embedding-3-small'),
        value: text,
      });
      return embedding;
    });

    // Step 3: Store in database
    await step.run('store-embedding', async () => {
      await db.document.update({
        where: { id: documentId },
        data: { embedding, processedAt: new Date() },
      });
    });

    // Step 4: Notify user
    await step.run('notify-user', async () => {
      await sendEmail({
        to: userId,
        subject: 'Document processed',
        body: 'Your document is ready for search.',
      });
    });

    return { success: true };
  }
);
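The "retries from here" comment works because Inngest memoizes step results: when a run is retried, previously completed steps are replayed from stored output instead of being re-executed. A toy sketch of that idea (illustrative only, not Inngest's actual implementation):

```typescript
// Toy sketch of durable-step memoization (illustrative only).
type StepFn<T> = () => Promise<T>;

class DurableRun {
  // Persisted step results keyed by step id (in Inngest this lives server-side).
  constructor(private memo: Map<string, unknown> = new Map()) {}

  async run<T>(id: string, fn: StepFn<T>): Promise<T> {
    if (this.memo.has(id)) return this.memo.get(id) as T; // replay from cache
    const result = await fn();
    this.memo.set(id, result); // persist before moving on
    return result;
  }
}

// First attempt: 'extract' succeeds, 'embed' fails.
// The retry reuses the same memo, so 'extract' is NOT re-executed.
async function demo() {
  const memo = new Map<string, unknown>();
  let extractCalls = 0;

  const attempt = async (embedFails: boolean) => {
    const run = new DurableRun(memo);
    const text = await run.run('extract', async () => {
      extractCalls++;
      return 'document text';
    });
    if (embedFails) throw new Error('embedding service down');
    return run.run('embed', async () => `vector(${text})`);
  };

  await attempt(true).catch(() => {}); // first attempt fails at step 2
  const vec = await attempt(false); // retry skips step 1
  return { extractCalls, vec };
}
```

In the real system the memoized results live on Inngest's side and survive process restarts, which is what makes the steps durable.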
// Scheduled function (cron):
export const dailyDigest = inngest.createFunction(
  { id: 'daily-digest' },
  { cron: '0 9 * * *' }, // 9am daily
  async ({ step }) => {
    const users = await step.run('get-active-users', async () => {
      return db.user.findMany({ where: { emailDigest: true } });
    });

    await step.run('send-digests', async () => {
      await Promise.all(users.map(u => sendDigestEmail(u)));
    });
  }
);
// app/api/inngest/route.ts
import { serve } from 'inngest/next';
import { inngest } from '@/inngest/client';
import { processDocument, dailyDigest } from '@/inngest/functions';
export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [processDocument, dailyDigest],
});
// Triggering a function from your app:
await inngest.send({
  name: 'document/uploaded',
  data: { documentId, userId },
});
BullMQ: Redis-Backed Performance
BullMQ is the modern job queue built on Redis. It is the fastest option and supports complex job patterns (priorities, rate limiting, job dependencies).
npm install bullmq ioredis
// lib/queue.ts
import { Queue, Worker } from 'bullmq';
import { Redis } from 'ioredis';
const connection = new Redis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });
// Define queues:
export const emailQueue = new Queue('email', { connection });
export const documentQueue = new Queue('document-processing', { connection });
// Define workers (these run in a separate process):
export const emailWorker = new Worker(
  'email',
  async (job) => {
    const { to, subject, body } = job.data;
    await sendEmail({ to, subject, body });
  },
  {
    connection,
    concurrency: 10, // Process 10 jobs simultaneously
  }
);

export const documentWorker = new Worker(
  'document-processing',
  async (job) => {
    const { documentId } = job.data;
    await processDocumentJob(documentId);
  },
  {
    connection,
    concurrency: 5,
  }
);

// Error handling:
documentWorker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err.message);
  // Send to Sentry, etc.
});
// Adding jobs to the queue:
// From any API route or server action:
await emailQueue.add(
  'welcome-email',
  { to: user.email, subject: 'Welcome!', body: welcomeBody },
  {
    delay: 0,
    attempts: 3,
    backoff: { type: 'exponential', delay: 2000 },
  }
);
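With `attempts: 3` and the options above, BullMQ waits roughly `delay * 2^(attemptsMade - 1)` ms before each retry — here 2 s, then 4 s. A quick sketch of that schedule (assuming BullMQ's built-in exponential strategy; check your version's docs for jitter behavior):

```typescript
// Delay before each retry under BullMQ-style exponential backoff (sketch).
function backoffDelay(baseMs: number, attemptsMade: number): number {
  return baseMs * 2 ** (attemptsMade - 1);
}

// attempts: 3 means 1 initial try + up to 2 retries.
const schedule = [1, 2].map((n) => backoffDelay(2000, n));
// schedule → [2000, 4000]
```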
// Scheduled job (requires separate cron setup):
await documentQueue.add(
  'process-pending',
  { batchSize: 50 },
  {
    repeat: { pattern: '*/5 * * * *' }, // Every 5 minutes
  }
);
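The pattern `*/5 * * * *` fires at every minute divisible by 5. For intuition about what the scheduler computes, here is a hypothetical helper (not part of BullMQ) that finds the next firing time in UTC:

```typescript
// Next wall-clock time (UTC) strictly after `from` matching "*/5 * * * *".
function nextFire(from: Date): Date {
  const next = new Date(from);
  next.setUTCSeconds(0, 0); // cron fires on whole minutes
  const rem = next.getUTCMinutes() % 5;
  // On a boundary, the next fire is 5 minutes away; otherwise round up.
  next.setUTCMinutes(next.getUTCMinutes() + (rem === 0 ? 5 : 5 - rem));
  return next;
}

// nextFire(new Date('2026-03-01T10:07:30Z')) → 2026-03-01T10:10:00Z
```

Real schedulers also handle time zones and missed runs after downtime; BullMQ stores repeatable-job state in Redis so the schedule survives worker restarts.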
BullMQ requirement: You need Redis. Use Upstash Redis (free tier: 10K commands/day; $0.20/100K after) or self-host.
Trigger.dev: Long-Running AI Jobs
Trigger.dev is built for AI workflows that run for minutes or hours — perfect for LLM pipelines, multi-step automations, and anything that hits third-party APIs.
npm install @trigger.dev/sdk
// trigger/process-document.ts
import { task, logger } from '@trigger.dev/sdk/v3';
export const processDocumentTask = task({
  id: 'process-document',
  maxDuration: 300, // 5 minutes max
  retry: {
    maxAttempts: 3,
    factor: 2,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 30000,
  },
  run: async (payload: { documentId: string; userId: string }) => {
    logger.info('Processing document', { documentId: payload.documentId });

    // Step 1: Download document
    const doc = await db.document.findUnique({ where: { id: payload.documentId } });
    const text = await downloadAndExtract(doc.url);
    logger.info('Text extracted', { length: text.length });

    // Step 2: AI analysis (may take 30-60 seconds for large docs)
    const analysis = await analyzeWithAI(text);

    // Step 3: Store results
    await db.document.update({
      where: { id: payload.documentId },
      data: { analysis, processedAt: new Date() },
    });

    // Step 4: Notify user
    await sendEmail({
      to: payload.userId,
      subject: 'Document analysis complete',
    });

    return { success: true, analysisLength: JSON.stringify(analysis).length };
  },
});
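The `retry` block above implies capped exponential delays: retry n waits about `min(maxTimeoutInMs, minTimeoutInMs * factor^(n-1))` ms. Trigger.dev may also add jitter, so treat this as a sketch of the shape, not the exact values:

```typescript
// Capped exponential retry delays (sketch of how factor/min/max interact).
type RetryOpts = { factor: number; minTimeoutInMs: number; maxTimeoutInMs: number };

function retryDelayMs(attempt: number, opts: RetryOpts): number {
  const raw = opts.minTimeoutInMs * opts.factor ** (attempt - 1);
  return Math.min(raw, opts.maxTimeoutInMs);
}

const opts: RetryOpts = { factor: 2, minTimeoutInMs: 1000, maxTimeoutInMs: 30000 };
// With a higher maxAttempts, the 30 s cap kicks in from the 6th retry:
const delays = [1, 2, 3, 4, 5, 6].map((n) => retryDelayMs(n, opts));
// delays → [1000, 2000, 4000, 8000, 16000, 30000]
```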
// Trigger a task from your API:
import { tasks } from '@trigger.dev/sdk/v3';
import { processDocumentTask } from '@/trigger/process-document';
export async function POST(req: Request) {
  const { documentId } = await req.json();
  const session = await auth();
  if (!session) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const handle = await processDocumentTask.trigger({
    documentId,
    userId: session.user.id,
  });

  return Response.json({ runId: handle.id });
}
Trigger.dev provides a real-time dashboard showing each step, its duration, inputs, outputs, and any errors. This observability is Trigger.dev's key differentiator.
Comparison Table
| Feature | Inngest | BullMQ | Trigger.dev |
|---|---|---|---|
| Infrastructure | None (serverless) | Redis required | None (managed) |
| Serverless compatible | Yes | Partial (enqueue only; workers need a server) | Yes |
| Max job duration | Platform timeout per step (15 min on Vercel) | Unlimited (long-lived worker) | Hours |
| Dashboard | Yes | Third-party | Yes (excellent) |
| Steps/checkpoints | Yes | No | Yes |
| Cron scheduling | Yes | Yes | Yes |
| Free tier | 50K runs/month | Redis cost only | 10K runs/month |
| Paid tier | $50/mo | Redis cost | $50/mo |
| Best for | Serverless, simple | Performance, complex | AI/LLM, long-running |
Which Boilerplates Use What?
| Boilerplate | Background Jobs |
|---|---|
| ShipFast | None (add manually) |
| OpenSaaS (Wasp) | Built-in (PgBoss) |
| Makerkit | Inngest (plugin) |
| Midday v1 | Trigger.dev |
| T3 Stack | None (add manually) |
Decision Guide
Choose Inngest if:
→ Deploying to Vercel serverless
→ Don't want Redis infrastructure
→ Simple to medium job complexity
→ Generous free tier is sufficient
Choose BullMQ if:
→ Already have Redis (Upstash works)
→ Need maximum throughput
→ Complex job topologies (dependencies, priorities)
→ Long-running workers in a separate process
Choose Trigger.dev if:
→ AI/LLM workflows (long-running)
→ Need excellent observability/debugging
→ Multi-step workflows with retry per step
→ Using Midday v1 (it's pre-configured)
Methodology
Based on publicly available documentation from Inngest, BullMQ, and Trigger.dev, and boilerplate analysis as of March 2026.
Building a SaaS with background jobs? StarterPick helps you find boilerplates pre-configured with the right job infrastructure for your needs.