
Best AI SaaS Boilerplates with Claude/OpenAI Integration 2026

StarterPick Team
Tags: ai-saas · vercel-ai-sdk · openai · claude · boilerplate · streaming · 2026

TL;DR

Most mainstream SaaS boilerplates don't ship AI features; you add them yourself. The ones that do (Open SaaS, Shipped.club's AI tier, some indie starters) give you Vercel AI SDK integration, a streaming chat UI, and basic token tracking. For serious AI products, build AI features on top of ShipFast or T3, with the Vercel AI SDK as the AI layer. The patterns are well established: streamText for streaming, generateObject for structured output, a credit system for billing, and rate limiting for abuse prevention.

Key Takeaways

  • Vercel AI SDK (ai): 4.5M downloads/week — the standard for AI features in Next.js
  • Boilerplates with AI: Open SaaS (Wasp, free), Shipped.club (paid, AI tier), some indie starters
  • What to build yourself: streaming chat UI + token tracking + credits + rate limiting
  • Credit system: ~500 lines of Drizzle/Prisma + Stripe + AI SDK code
  • Models supported: OpenAI, Anthropic Claude, Google Gemini via single ai package
  • Streaming: works with Server Actions and API routes; use useChat hook on client

Vercel AI SDK: The Foundation

npm install ai @ai-sdk/openai @ai-sdk/anthropic
// app/api/chat/route.ts — streaming chat endpoint:
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { auth } from '@/lib/auth';
import { checkAndDeductCredits, recordTokenUsage } from '@/lib/credits';

export async function POST(req: Request) {
  const session = await auth();
  if (!session?.user) return new Response('Unauthorized', { status: 401 });
  
  const { messages } = await req.json();
  
  // Check user credits before running:
  const hasCredits = await checkAndDeductCredits(session.user.id, {
    estimatedTokens: 1000,
    model: 'gpt-4o-mini',
  });
  
  if (!hasCredits) {
    return new Response('Insufficient credits', { status: 402 });
  }
  
  const result = streamText({
    // Swap model easily — same API for OpenAI, Anthropic, etc.:
    model: openai('gpt-4o-mini'),
    // model: anthropic('claude-3-5-haiku-20241022'),
    
    system: 'You are a helpful assistant. Be concise and accurate.',
    messages,
    
    maxTokens: 2048,
    temperature: 0.7,
    
    // Track actual token usage after completion:
    onFinish: async ({ usage }) => {
      await recordTokenUsage(session.user.id, {
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        model: 'gpt-4o-mini',
      });
    },
  });
  
  return result.toDataStreamResponse();
}
// Client chat component with streaming:
'use client';
import { useChat } from 'ai/react';

export function ChatInterface() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
    onError: (err) => {
      // The SDK surfaces the error response body as the message:
      if (err.message.includes('Insufficient credits')) {
        alert('Out of credits — please upgrade your plan');
      }
    },
  });

  return (
    <div className="flex flex-col h-full">
      <div className="flex-1 overflow-auto p-4 space-y-4">
        {messages.map((msg) => (
          <div key={msg.id} className={`flex ${msg.role === 'user' ? 'justify-end' : 'justify-start'}`}>
            <div className={`rounded-lg p-3 max-w-[80%] ${msg.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-100'}`}>
              {msg.content}
            </div>
          </div>
        ))}
        {isLoading && <div className="text-gray-400">AI is thinking...</div>}
      </div>
      
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          className="w-full p-3 border rounded-lg"
          disabled={isLoading}
        />
      </form>
    </div>
  );
}

Credit System Implementation

The standard pattern for AI SaaS billing:

// lib/credits.ts — full credit system:
import { db } from './db';
import { users, creditTransactions } from './db/schema';
import { eq } from 'drizzle-orm';

// Model pricing (per 1K tokens):
const MODEL_COSTS = {
  'gpt-4o': { input: 0.005, output: 0.015 },
  'gpt-4o-mini': { input: 0.00015, output: 0.0006 },
  'claude-3-5-haiku-20241022': { input: 0.0008, output: 0.004 },
  'claude-3-5-sonnet-20241022': { input: 0.003, output: 0.015 },
} as const;

// Credits: 1 credit = $0.001 USD
const CREDITS_PER_DOLLAR = 1000;

export async function checkAndDeductCredits(
  userId: string,
  { estimatedTokens, model }: { estimatedTokens: number; model: keyof typeof MODEL_COSTS }
): Promise<boolean> {
  const costs = MODEL_COSTS[model];
  // Conservative estimate: price every token at the (higher) output rate.
  const estimatedCostUsd = (estimatedTokens / 1000) * costs.output;
  const estimatedCredits = Math.ceil(estimatedCostUsd * CREDITS_PER_DOLLAR);
  
  // Atomic deduction — prevents race conditions:
  const result = await db.transaction(async (tx) => {
    const [user] = await tx.select({ credits: users.credits })
      .from(users)
      .where(eq(users.id, userId))
      .for('update');  // Lock row during transaction
    
    if (!user || user.credits < estimatedCredits) return false;
    
    await tx.update(users)
      .set({ credits: user.credits - estimatedCredits })
      .where(eq(users.id, userId));
    
    await tx.insert(creditTransactions).values({
      userId,
      amount: -estimatedCredits,
      type: 'usage',
      description: `Estimated usage for ${model}`,
    });
    
    return true;
  });
  
  return result;
}

export async function recordTokenUsage(
  userId: string,
  { promptTokens, completionTokens, model }: {
    promptTokens: number;
    completionTokens: number;
    model: keyof typeof MODEL_COSTS;
  }
) {
  const costs = MODEL_COSTS[model];
  const actualCostUsd = 
    (promptTokens / 1000) * costs.input +
    (completionTokens / 1000) * costs.output;
  const actualCredits = Math.ceil(actualCostUsd * CREDITS_PER_DOLLAR);
  
  // Record actual usage for auditing. The up-front estimate was already
  // deducted in checkAndDeductCredits; to fully reconcile, compare
  // actualCredits against the estimate and refund or charge the difference.
  await db.insert(creditTransactions).values({
    userId,
    amount: -actualCredits,  // Negative = credits used
    type: 'usage_actual',
    description: `${model}: ${promptTokens}+${completionTokens} tokens`,
    metadata: { promptTokens, completionTokens, model, costUsd: actualCostUsd },
  });
}
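To make the credit math concrete, here is a worked example of the token-to-credit conversion above as a standalone sketch (the rates are the gpt-4o-mini entries from MODEL_COSTS):

```typescript
// Worked example of the token → USD → credit conversion:
const costs = { input: 0.00015, output: 0.0006 }; // gpt-4o-mini, per 1K tokens
const promptTokens = 2000;
const completionTokens = 1000;

const costUsd =
  (promptTokens / 1000) * costs.input +      // 2 * $0.00015 = $0.0003
  (completionTokens / 1000) * costs.output;  // 1 * $0.0006  = $0.0006

// 1 credit = $0.001, so $0.0009 rounds up to 1 credit:
const credits = Math.ceil(costUsd * 1000);
```

Rounding up with Math.ceil means even tiny requests cost at least one credit, which keeps the ledger in whole numbers.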
// lib/db/schema.ts additions for credits:
import { pgTable, text, integer, jsonb, timestamp } from 'drizzle-orm/pg-core';

// Add to users table:
export const users = pgTable('users', {
  // ... existing fields
  credits: integer('credits').notNull().default(100),  // Start with 100 free credits
  creditsUsedTotal: integer('credits_used_total').notNull().default(0),
});

export const creditTransactions = pgTable('credit_transactions', {
  id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
  userId: text('user_id').notNull().references(() => users.id),
  amount: integer('amount').notNull(),  // Positive = added, negative = used
  type: text('type', { enum: ['purchase', 'usage', 'usage_actual', 'refund', 'bonus'] }).notNull(),
  description: text('description').notNull(),
  metadata: jsonb('metadata'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
});

Structured Output for AI Features

// app/api/analyze/route.ts — structured AI response:
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const analysisSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  score: z.number().min(0).max(1),
  keywords: z.array(z.string()).max(5),
  summary: z.string().max(200),
  actionItems: z.array(z.object({
    action: z.string(),
    priority: z.enum(['high', 'medium', 'low']),
  })).optional(),
});

export async function POST(req: Request) {
  const { text } = await req.json();
  
  const { object } = await generateObject({
    model: openai('gpt-4o-mini'),
    schema: analysisSchema,
    prompt: `Analyze the following customer feedback:\n\n${text}`,
  });
  
  // object is typed as z.infer<typeof analysisSchema>
  return Response.json(object);
}
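On the consuming side, you can narrow the JSON from this endpoint before using it. A hand-rolled type guard is sketched below; in practice you would more likely share analysisSchema between server and client and call analysisSchema.parse on the response (the Analysis type and isAnalysis name here are ours):

```typescript
// Hypothetical client-side guard mirroring the zod schema above:
type Analysis = {
  sentiment: 'positive' | 'negative' | 'neutral';
  score: number;
  keywords: string[];
  summary: string;
};

function isAnalysis(value: unknown): value is Analysis {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ['positive', 'negative', 'neutral'].includes(v.sentiment as string) &&
    typeof v.score === 'number' && v.score >= 0 && v.score <= 1 &&
    Array.isArray(v.keywords) && v.keywords.length <= 5 &&
    v.keywords.every((k) => typeof k === 'string') &&
    typeof v.summary === 'string' && v.summary.length <= 200
  );
}
```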

Open SaaS: Best Free AI Boilerplate

The Open SaaS project (Wasp-powered) includes an AI demo feature as a reference implementation:

Open SaaS AI features:
  ├── /ai-generated-cover-letter — demo AI feature
  ├── GPT-4 via OpenAI SDK
  ├── Credits system (basic)
  ├── Token tracking
  └── Error handling for API failures

This gives you a working reference for the full flow, even if you replace the specific use case.


Boilerplates with AI Features

| Boilerplate     | AI Features            | Price | Notes                           |
|-----------------|------------------------|-------|---------------------------------|
| Open SaaS       | Basic chat demo        | Free  | Wasp framework, reference only  |
| Shipped.club AI | Full AI chat + credits | $199+ | Vercel AI SDK, production-ready |
| ShipFast        | None built-in          | $299  | Add yourself (1-2 hours)        |
| T3 Stack        | None built-in          | Free  | Add yourself with AI SDK        |
| Supastarter     | None built-in          | $299  | Add yourself                    |

For most teams: start with ShipFast or T3, then add the credit system and streamText pattern from this article. Expect 4-8 hours of work for a production-ready AI feature.

Building your own AI SaaS feature checklist:
  ✅ Install: ai, @ai-sdk/openai (or @ai-sdk/anthropic)
  ✅ Route: /api/chat with streamText
  ✅ Client: useChat hook with streaming UI
  ✅ Credits: check before + record after with onFinish
  ✅ Rate limiting: per-user per-minute limits
  ✅ Error handling: model failures, credit exhaustion
  ✅ Cost tracking: token usage → dollar cost → credits
  ✅ Plan limits: free tier caps, pro tier upgrades
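For the rate-limiting item, a minimal per-user sliding-window limiter looks like the sketch below. This in-memory version is for illustration only: serverless instances don't share memory, so production deployments typically back this with Redis (e.g. Upstash). The checkRateLimit name and its placement (say, lib/rate-limit.ts) are our assumptions:

```typescript
// Sliding-window limiter: allow `limit` requests per `windowMs` per user.
const hitLog = new Map<string, number[]>();

function checkRateLimit(
  userId: string,
  limit = 10,
  windowMs = 60_000,
  now = Date.now()  // Injectable for testing
): boolean {
  // Keep only hits inside the current window:
  const recent = (hitLog.get(userId) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    hitLog.set(userId, recent);
    return false;  // Over the limit: reject (return HTTP 429 upstream)
  }
  recent.push(now);
  hitLog.set(userId, recent);
  return true;
}
```

In the chat route, call this before the credit check and return a 429 response when it fails.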

Find AI SaaS boilerplates and compare features at StarterPick.
