Adding AI Features to Your SaaS Boilerplate
StarterPick Team
ai · saas-boilerplate · vercel-ai-sdk · openai · streaming · 2026
TL;DR
No major SaaS boilerplate ships production-ready AI features in 2026. ShipFast includes a basic chat demo; most others include nothing. You'll add AI yourself — but the good news is it's only ~200 lines of code for a solid foundation: a streaming chat API route, a useChat hook on the frontend, per-user token tracking, and rate limiting. The hard parts aren't the AI call itself — they're billing (who pays for tokens?), abuse prevention (protecting expensive endpoints), and UX (streaming responses, error states, interruption handling).
Key Takeaways
- Vercel AI SDK is the standard for Next.js AI integration — streaming, tool use, provider switching in one package
- Token tracking belongs in the `onFinish` callback, writing to your DB after every completion
- Rate limiting: AI endpoints need tighter limits than regular API routes — use Upstash per-user sliding window
- Credit system is simpler than Stripe Meters for most SaaS — deduct credits on completion, check before request
- ShipFast includes a chat UI demo; T3/Supastarter/Makerkit require you to build from scratch
- Streaming interruption (user presses stop) needs explicit `AbortController` handling
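That last point is worth a sketch. On the client, `useChat`'s `stop()` aborts the in-flight fetch; on the server you can forward `req.signal` as `streamText`'s `abortSignal` option so generation (and token spend) halts too. Here is the pattern reduced to a self-contained simulation (`tokenStream` and `readAll` are illustrative helpers, not SDK APIs):

```typescript
// Simulated token stream that stops producing once the AbortSignal fires.
// The SDK's stop()/abortSignal pair follows the same pattern: the client
// aborts the fetch, the server forwards req.signal to the LLM call.
function tokenStream(tokens: string[], signal: AbortSignal): ReadableStream<string> {
  let i = 0;
  return new ReadableStream<string>({
    pull(controller) {
      if (signal.aborted || i >= tokens.length) {
        controller.close(); // end cleanly instead of erroring mid-render
        return;
      }
      controller.enqueue(tokens[i++]);
    },
  });
}

async function readAll(stream: ReadableStream<string>): Promise<string[]> {
  const out: string[] = [];
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out.push(value);
  }
}
```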
What Boilerplates Currently Include
| Boilerplate | AI Chat | Token Tracking | Rate Limiting | Credit System |
|---|---|---|---|---|
| ShipFast | ✅ Demo | ❌ | ❌ | ❌ |
| T3 Stack | ❌ | ❌ | ❌ | ❌ |
| Supastarter | ❌ | ❌ | ❌ | ❌ |
| Makerkit | ❌ | ❌ | ❌ | ❌ |
| Open SaaS (Wasp) | ✅ Demo | ❌ | ❌ | ❌ |
You will build this yourself. Here's the complete stack.
Step 1: Streaming Chat API Route
// app/api/ai/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { auth } from '@/auth';
import { checkRateLimit } from '@/lib/rate-limit';
import { checkCredits, deductCredits } from '@/lib/credits';
import { db } from '@/lib/db';
export const maxDuration = 60; // Vercel function timeout
export async function POST(req: Request) {
const session = await auth();
if (!session?.user) {
return new Response('Unauthorized', { status: 401 });
}
// Rate limit: 20 AI requests per minute per user, plus a daily abuse ceiling
const { success: withinLimit } = await checkRateLimit(session.user.id, 'ai-chat', 20, '1m');
const { success: withinDailyLimit } = await checkRateLimit(session.user.id, 'ai-chat-daily');
if (!withinLimit || !withinDailyLimit) {
return new Response('Rate limit exceeded. Please wait before sending another message.', {
status: 429,
});
}
// Credit check before calling the LLM
const hasCredits = await checkCredits(session.user.id, 1);
if (!hasCredits) {
return new Response(
JSON.stringify({ error: 'insufficient_credits', message: 'You\'ve used all your AI credits. Upgrade to continue.' }),
{ status: 402, headers: { 'Content-Type': 'application/json' } }
);
}
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o-mini'), // Use cheaper model by default
messages,
system: `You are a helpful assistant for ${process.env.NEXT_PUBLIC_APP_NAME}.
Be concise and accurate. If you don't know something, say so.`,
maxTokens: 1024,
onFinish: async ({ usage, text }) => {
// Track token usage per user
await Promise.all([
db.aiUsage.create({
data: {
userId: session.user.id,
model: 'gpt-4o-mini',
promptTokens: usage.promptTokens,
completionTokens: usage.completionTokens,
totalTokens: usage.totalTokens,
// gpt-4o-mini list pricing at time of writing: $0.15/1M input, $0.60/1M output tokens
estimatedCostUsd: (usage.promptTokens * 0.00000015) + (usage.completionTokens * 0.0000006),
},
}),
deductCredits(session.user.id, 1),
]);
},
});
return result.toDataStreamResponse();
}
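The per-token prices hardcoded in `onFinish` above are easy to let go stale. A small lookup helper keeps them in one place; the numbers below mirror the route's and are illustrative, so verify them against your provider's current price sheet (`lib/ai-pricing.ts` is a hypothetical module name):

```typescript
// lib/ai-pricing.ts (hypothetical). Illustrative per-token USD prices;
// check your provider's current pricing before relying on these numbers.
const PRICES_USD: Record<string, { prompt: number; completion: number }> = {
  'gpt-4o-mini': { prompt: 0.15 / 1_000_000, completion: 0.6 / 1_000_000 },
};

export function estimateCostUsd(
  model: string,
  promptTokens: number,
  completionTokens: number,
): number {
  const p = PRICES_USD[model];
  if (!p) return 0; // unknown model: record zero rather than guess
  return promptTokens * p.prompt + completionTokens * p.completion;
}
```

Swapping models then only means editing the table, not hunting for magic numbers in route handlers.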
Step 2: Frontend Chat UI
// components/ai/chat.tsx
'use client';
import { useChat } from 'ai/react';
import { useState, useRef, useEffect } from 'react';
import { toast } from 'sonner';
interface ChatProps {
placeholder?: string;
creditsRemaining: number;
}
export function AiChat({ placeholder = 'Ask anything...', creditsRemaining }: ChatProps) {
const [credits, setCredits] = useState(creditsRemaining);
const {
messages,
input,
handleInputChange,
handleSubmit,
isLoading,
stop,
error,
setMessages,
} = useChat({
api: '/api/ai/chat',
onFinish: () => {
setCredits((c) => Math.max(0, c - 1));
},
onError: (error) => {
// Error bodies aren't guaranteed to be JSON (the 429 handler returns plain text)
let body: { error?: string } = {};
try {
body = JSON.parse(error.message || '{}');
} catch {
// plain-text body; fall through to the generic checks below
}
if (body.error === 'insufficient_credits') {
toast.error('Out of AI credits', {
description: 'Upgrade your plan to continue.',
action: { label: 'Upgrade', onClick: () => window.location.href = '/pricing' },
});
} else if (error.message.includes('429') || error.message.includes('Rate limit')) {
toast.error('Slow down — you\'re sending messages too fast.');
} else {
toast.error('Something went wrong. Please try again.');
}
},
});
const bottomRef = useRef<HTMLDivElement>(null);
useEffect(() => {
bottomRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
return (
<div className="flex flex-col h-full">
{/* Credits indicator */}
<div className="flex items-center justify-between px-4 py-2 border-b text-sm text-gray-500">
<span>AI Assistant</span>
<span>{credits} credit{credits !== 1 ? 's' : ''} remaining</span>
</div>
{/* Messages */}
<div className="flex-1 overflow-y-auto p-4 space-y-4">
{messages.length === 0 && (
<div className="text-center text-gray-400 mt-20">
<p className="text-lg font-medium">How can I help?</p>
</div>
)}
{messages.map((m) => (
<div key={m.id} className={`flex ${m.role === 'user' ? 'justify-end' : 'justify-start'}`}>
<div
className={`max-w-[80%] rounded-2xl px-4 py-2 ${
m.role === 'user'
? 'bg-blue-600 text-white'
: 'bg-gray-100 text-gray-900'
}`}
>
<p className="whitespace-pre-wrap text-sm">{m.content}</p>
</div>
</div>
))}
{isLoading && (
<div className="flex justify-start">
<div className="bg-gray-100 rounded-2xl px-4 py-2">
<span className="flex gap-1">
<span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:0ms]" />
<span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:150ms]" />
<span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:300ms]" />
</span>
</div>
</div>
)}
<div ref={bottomRef} />
</div>
{/* Input */}
<form onSubmit={handleSubmit} className="p-4 border-t flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder={credits === 0 ? 'Upgrade to continue...' : placeholder}
disabled={credits === 0 || isLoading}
className="flex-1 rounded-lg border px-3 py-2 text-sm focus:outline-none focus:ring-2 focus:ring-blue-500 disabled:opacity-50"
/>
{isLoading ? (
<button
type="button"
onClick={stop}
className="px-4 py-2 bg-red-500 text-white rounded-lg text-sm hover:bg-red-600"
>
Stop
</button>
) : (
<button
type="submit"
disabled={!input.trim() || credits === 0}
className="px-4 py-2 bg-blue-600 text-white rounded-lg text-sm hover:bg-blue-700 disabled:opacity-50"
>
Send
</button>
)}
</form>
</div>
);
}
Step 3: Credits System
// lib/credits.ts
import { db } from '@/lib/db';
const PLAN_CREDITS: Record<string, number> = {
free: 10, // 10 AI messages per month
pro: 500, // 500 AI messages per month
team: 2000,
};
// NOTE: read-then-check here, separate deduct later. Two concurrent requests
// can each pass the check before either deducts. Fine for soft limits; for
// strict enforcement, make the deduction conditional (updateMany with a guard).
export async function checkCredits(userId: string, required = 1): Promise<boolean> {
const user = await db.user.findUnique({
where: { id: userId },
select: { plan: true, aiCreditsUsed: true, aiCreditsResetAt: true },
});
if (!user) return false;
// Reset monthly credits
const now = new Date();
if (!user.aiCreditsResetAt || user.aiCreditsResetAt < now) {
await db.user.update({
where: { id: userId },
data: {
aiCreditsUsed: 0,
aiCreditsResetAt: new Date(now.getFullYear(), now.getMonth() + 1, 1),
},
});
return true; // Fresh credits
}
const limit = PLAN_CREDITS[user.plan ?? 'free'] ?? 10;
return (user.aiCreditsUsed ?? 0) + required <= limit;
}
export async function deductCredits(userId: string, amount = 1): Promise<void> {
await db.user.update({
where: { id: userId },
data: { aiCreditsUsed: { increment: amount } },
});
}
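The reset date in `checkCredits` leans on JavaScript's month overflow: `Date` normalizes month 12 to January of the next year. Pulling it into a pure helper makes that behavior explicit and unit-testable (`nextMonthlyReset` is an illustrative name, not part of the code above):

```typescript
// First day of the calendar month after `now`. Date normalizes an
// out-of-range month index, so December (month 11) rolls into January
// of the following year without any special-case code.
export function nextMonthlyReset(now: Date): Date {
  return new Date(now.getFullYear(), now.getMonth() + 1, 1);
}
```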
Step 4: Rate Limiting AI Endpoints
// lib/rate-limit.ts
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
const redis = Redis.fromEnv();
const limiters = {
'ai-chat': new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(20, '1m'), // 20 messages per minute
prefix: 'rl:ai-chat',
}),
'ai-chat-daily': new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(200, '24h'), // 200 per day as abuse ceiling
prefix: 'rl:ai-daily',
}),
};
export async function checkRateLimit(
userId: string,
limiterName: keyof typeof limiters,
// Accepted only so call sites document their intent; the effective limits
// are the ones configured in the `limiters` map above.
_limit?: number,
_window?: string,
) {
const limiter = limiters[limiterName];
return limiter.limit(userId);
}
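For intuition about what the sliding window buys you over a fixed window, here it is as a tiny in-memory sketch. Serverless instances don't share memory, which is exactly why the real counters live in Redis; this class is illustration only, and note that Upstash approximates the window with two weighted fixed windows rather than storing every timestamp:

```typescript
// Illustration only: an exact sliding-window limiter keeping one
// timestamp per accepted hit, allowing `limit` hits per `windowMs` per key.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  check(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have aged out of the window
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    const allowed = recent.length < this.limit;
    if (allowed) recent.push(now); // only accepted requests are counted
    this.hits.set(key, recent);
    return allowed;
  }
}
```

Unlike a fixed window, there is no boundary where a user can burst 2x the limit by straddling two windows.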
Step 5: Token Usage Dashboard
// app/dashboard/ai-usage/page.tsx
import { auth } from '@/auth';
import { db } from '@/lib/db';
import { redirect } from 'next/navigation';
export default async function AiUsagePage() {
const session = await auth();
if (!session?.user) redirect('/login');
const startOfMonth = new Date(new Date().getFullYear(), new Date().getMonth(), 1);
const usage = await db.aiUsage.aggregate({
where: { userId: session!.user.id, createdAt: { gte: startOfMonth } },
_sum: { totalTokens: true, estimatedCostUsd: true },
_count: { id: true },
});
const recentHistory = await db.aiUsage.findMany({
where: { userId: session!.user.id },
orderBy: { createdAt: 'desc' },
take: 10,
select: { model: true, totalTokens: true, estimatedCostUsd: true, createdAt: true },
});
return (
<div className="space-y-6">
<h1 className="text-2xl font-bold">AI Usage</h1>
<div className="grid grid-cols-3 gap-4">
<div className="rounded-lg border p-4">
<p className="text-sm text-gray-500">Messages This Month</p>
<p className="text-3xl font-bold">{usage._count.id}</p>
</div>
<div className="rounded-lg border p-4">
<p className="text-sm text-gray-500">Tokens Used</p>
<p className="text-3xl font-bold">{(usage._sum.totalTokens ?? 0).toLocaleString()}</p>
</div>
<div className="rounded-lg border p-4">
<p className="text-sm text-gray-500">Est. Cost</p>
<p className="text-3xl font-bold">${(usage._sum.estimatedCostUsd ?? 0).toFixed(4)}</p>
</div>
</div>
{/* Recent history table */}
<div className="rounded-lg border">
<table className="w-full text-sm">
<thead>
<tr className="border-b bg-gray-50">
<th className="p-3 text-left">Model</th>
<th className="p-3 text-left">Tokens</th>
<th className="p-3 text-left">Cost</th>
<th className="p-3 text-left">Time</th>
</tr>
</thead>
<tbody>
{recentHistory.map((row, i) => (
<tr key={i} className="border-b last:border-0">
<td className="p-3 font-mono">{row.model}</td>
<td className="p-3">{row.totalTokens.toLocaleString()}</td>
<td className="p-3">${row.estimatedCostUsd.toFixed(5)}</td>
<td className="p-3 text-gray-500">{row.createdAt.toLocaleString()}</td>
</tr>
))}
</tbody>
</table>
</div>
</div>
);
}
Prisma Schema Additions
// Add to your existing schema:
model AiUsage {
id String @id @default(cuid())
userId String
model String
promptTokens Int
completionTokens Int
totalTokens Int
estimatedCostUsd Decimal @db.Decimal(10, 8)
createdAt DateTime @default(now())
user User @relation(fields: [userId], references: [id], onDelete: Cascade)
@@index([userId, createdAt])
}
// Add to User model (checkCredits also reads `plan`):
// plan             String    @default("free")
// aiCreditsUsed    Int       @default(0)
// aiCreditsResetAt DateTime?
Find boilerplates with AI features pre-built at StarterPick.