# Redis vs In-Memory: Caching Strategies in SaaS Starters in 2026

## TL;DR
Most early-stage SaaS doesn't need Redis. PostgreSQL handles 90% of caching use cases, Next.js has built-in request memoization, and premature caching adds complexity without benefit. Add Redis when: you need rate limiting, session-level caching in serverless, or background job queues. Upstash is the default serverless Redis for Vercel-hosted apps.
## What Caching Actually Solves
Caching solves three distinct problems — each with different solutions:
| Problem | Without Cache | With Cache |
|---|---|---|
| Repeated expensive DB queries | 200ms per request | 2ms (cache hit) |
| Rate limiting across serverless functions | Impossible (no shared state) | Redis shared counter |
| Session data in stateless functions | Must re-query DB | Redis session store |
| Background job queues | No queuing | Redis-backed BullMQ |
## The Serverless Caching Problem
Serverless functions (Vercel, Cloudflare Workers) don't share memory between invocations:
```typescript
// ❌ This doesn't work in serverless
const cache = new Map<string, any>(); // Wiped on every cold start

export async function GET(req: Request) {
  const cacheKey = 'popular-posts';
  if (cache.has(cacheKey)) return Response.json(cache.get(cacheKey));

  const posts = await prisma.post.findMany({ take: 10, orderBy: { views: 'desc' } });
  cache.set(cacheKey, posts);
  return Response.json(posts);
}

// In serverless: the cache is always empty, so every request hits the DB.
```
```typescript
// ✅ Redis works across serverless instances
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

export async function GET() {
  const cached = await redis.get<Post[]>('popular-posts');
  if (cached) return Response.json(cached);

  const posts = await prisma.post.findMany({ take: 10, orderBy: { views: 'desc' } });
  await redis.setex('popular-posts', 300, posts); // Cache for 5 minutes
  return Response.json(posts);
}
```
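One thing the Redis version needs that the in-memory version didn't: invalidation. When a write changes the underlying data, delete the key so the next read repopulates it. A sketch, assuming the same `redis` client as above; the `cacheKey` helper and the write-path snippet are illustrative, not part of the Upstash API:

```typescript
// Derive cache keys in one place so the read path (the GET above) and
// write paths agree on what to invalidate. Illustrative helper, not an
// Upstash API.
export function cacheKey(resource: string, id?: string): string {
  return id ? `${resource}:${id}` : resource;
}

// In a write path, drop the stale entry; the next GET repopulates it:
//
//   await prisma.post.update({ where: { id }, data });
//   await redis.del(cacheKey('popular-posts'));
```

Forgetting this step is the classic way a 5-minute TTL turns into "users see stale data for up to 5 minutes after every edit".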
## Upstash: Serverless Redis for Vercel
Upstash is the default Redis choice for Vercel-hosted apps. Unlike traditional Redis, it exposes an HTTP (REST) API instead of requiring a persistent TCP connection, which makes it usable in edge and serverless environments where long-lived connections aren't available.
```typescript
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

// Rate limiting — common in SaaS APIs
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '10 s'), // 10 requests per 10 seconds
});

export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'anonymous';
  const { success, limit, reset, remaining } = await ratelimit.limit(ip);

  if (!success) {
    return Response.json(
      { error: 'Too many requests' },
      {
        status: 429,
        headers: {
          'X-RateLimit-Limit': limit.toString(),
          'X-RateLimit-Remaining': remaining.toString(),
          'X-RateLimit-Reset': reset.toString(),
        },
      }
    );
  }

  // Process the request...
}
```
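One caveat with the handler above: `x-forwarded-for` can carry a comma-separated proxy chain rather than a single IP, so rate-limiting on the raw header can lump clients together. A small hypothetical helper to extract the first (client) entry:

```typescript
// `x-forwarded-for` may hold a comma-separated proxy chain
// ("client, proxy1, proxy2"). Use the first entry as the rate-limit key.
// Hypothetical helper; whether you trust this value depends on your
// proxy setup.
export function clientKey(forwardedFor: string | null): string {
  if (!forwardedFor) return 'anonymous';
  const first = forwardedFor.split(',')[0].trim();
  return first.length > 0 ? first : 'anonymous';
}
```

In the handler, `const ip = clientKey(req.headers.get('x-forwarded-for'));`. On platforms like Vercel the header is generally set by the platform itself, which makes the first entry reasonable to trust.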
Upstash pricing:
- Free: 10k commands/day, 256MB
- Pay-as-you-go: $0.2 per 100k commands
- Most indie SaaS fits in the free tier indefinitely
## Railway Redis: Persistent Option
For apps on Railway that need persistent Redis (background job queues, session storage):
```typescript
// BullMQ worker — needs a persistent Redis connection
import { Queue, Worker } from 'bullmq';
import { Redis } from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null, // required by BullMQ
});

const emailQueue = new Queue('emails', { connection });

const worker = new Worker(
  'emails',
  async (job) => {
    await sendEmail(job.data);
  },
  {
    connection,
    concurrency: 10,
  }
);

// Queue an email from an API route
export async function POST(req: Request) {
  const { userId, template } = await req.json();
  await emailQueue.add('send', { userId, template }, {
    attempts: 3,
    backoff: { type: 'exponential', delay: 1000 },
  });
  return Response.json({ queued: true });
}
```
Railway Redis: ~$5/month for 512MB. Use when you need BullMQ workers.
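The `backoff` options above translate into a doubling retry schedule: BullMQ's built-in `exponential` strategy computes the delay before retry `n` as `delay * 2^(n - 1)`. A sketch of that arithmetic (the helper name is ours, not a BullMQ export):

```typescript
// Approximate delay before retry N under BullMQ's built-in
// "exponential" backoff strategy: baseDelay * 2^(attempt - 1).
export function retryDelayMs(baseDelay: number, attempt: number): number {
  return baseDelay * 2 ** (attempt - 1);
}

// With { attempts: 3, backoff: { type: 'exponential', delay: 1000 } },
// a failing job waits ~1s before the 2nd try and ~2s before the 3rd.
```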
## Next.js Built-In Caching (No Redis Needed)
Next.js App Router has built-in request memoization and fetch caching that covers most use cases without Redis:
```typescript
import { cache } from 'react';

// Request memoization — the same data is fetched once per request,
// even when called from multiple components.
const getUser = cache(async (userId: string) => {
  return prisma.user.findUnique({ where: { id: userId } });
});

// page.tsx calls getUser(userId)
// layout.tsx also calls getUser(userId)
// -> Only ONE database query: React's cache() deduplicates the call
//    for the lifetime of the request.
```
```typescript
// CDN/route caching — responses cached at the edge, revalidated by time
export async function GET() {
  const posts = await prisma.post.findMany({ take: 10 });
  return Response.json(posts, {
    headers: {
      // Fresh for 60s; serve stale for up to 5 more minutes while
      // revalidating in the background.
      'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
    },
  });
}
```
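If you reuse those header values across several routes, a small helper keeps them consistent. This function is a hypothetical convenience, not a Next.js API:

```typescript
// Hypothetical helper: build the CDN Cache-Control header used above.
// s-maxage: how long the CDN serves the cached copy as fresh;
// stale-while-revalidate: how long it may serve a stale copy while
// refetching in the background.
export function cdnCacheControl(freshSeconds: number, staleSeconds: number): string {
  return `public, s-maxage=${freshSeconds}, stale-while-revalidate=${staleSeconds}`;
}

// cdnCacheControl(60, 300)
// -> 'public, s-maxage=60, stale-while-revalidate=300'
```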
For server-rendered pages with data that changes infrequently, Next.js caching handles this better than Redis.
## When You DON'T Need Redis
Most early-stage indie SaaS doesn't need it:
```typescript
// PostgreSQL is fast enough for most queries.
// Adding Redis for this is premature optimization:
const posts = await prisma.post.findMany({
  where: { published: true },
  take: 10,
  orderBy: { createdAt: 'desc' },
});

// This query takes 5-20ms with proper indexes.
// Redis would save 4-18ms — not worth the complexity.
// Add an index instead, in your Prisma schema:
//   @@index([published, createdAt])
```
Don't add Redis until you have:
- Rate limiting requirements
- Background job queues (BullMQ)
- Session storage needs exceeding DB performance
- Caching needs that PostgreSQL and Next.js don't cover
- Actual performance problems (measured, not assumed)
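For that last point, measure before caching. A minimal hypothetical helper for timing a query in place:

```typescript
// Hypothetical helper: measure an async operation before assuming it
// needs a cache. Returns the result and logs the elapsed milliseconds.
export async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)}ms`);
  return result;
}

// const posts = await timed('popular-posts query', () =>
//   prisma.post.findMany({ take: 10, orderBy: { views: 'desc' } }),
// );
```

If the log shows 10ms, add an index, not Redis; if it shows 500ms on a hot path, caching starts to pay for its complexity.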
## Boilerplate Redis Support
| Boilerplate | Redis Included | Provider | Use Case |
|---|---|---|---|
| T3 Stack | ❌ | — | Add if needed |
| ShipFast | ✅ Optional | Upstash | Rate limiting |
| Supastarter | ✅ | Upstash | Rate limiting, cache |
| Makerkit | ✅ | Upstash | Cache, rate limiting |
| Epic Stack | ❌ | — | Explicit no-Redis stance |
Find boilerplates with caching and Redis setup on StarterPick.