
Best Boilerplates for Building AI Chatbot Products 2026

· StarterPick Team
ai-chatbot · vercel-ai-sdk · saas-boilerplate · openai · streaming · 2026

TL;DR

There's no single dominant AI chatbot boilerplate yet — most teams combine a standard SaaS starter (ShipFast, T3) with Vercel AI SDK patterns. The AI chatbot stack has become formulaic: the useChat hook for streaming UI, a streaming API route built on streamText, a database for conversation history, a credit system for billing, and rate limiting for abuse prevention. Several purpose-built starters add RAG and multi-model support. The fastest path to a chatbot SaaS: ShipFast or T3 Stack plus the patterns from this article.

Key Takeaways

  • Standard stack: Vercel AI SDK (ai) + useChat + Next.js + Postgres (conversation history)
  • Purpose-built starters: Chatbot UI (open source), OpenAI Starter (Vercel template), Chathn
  • Multi-model: Vercel AI SDK supports OpenAI, Anthropic, Google, Mistral via same interface
  • RAG: pgvector or Pinecone for retrieval, embed() + semanticSearch() pattern
  • Conversation history: store messages in DB, load last N on new conversation start
  • Credit billing: 1 credit ≈ 1K tokens, track with onFinish callback
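The credit rule of thumb above (1 credit ≈ 1K tokens) reduces to a small pure helper. A minimal sketch — `creditsForTokens` and `deductCredits` are illustrative names, not part of any starter:

```typescript
// Convert raw token usage into billable credits.
// Rule of thumb: 1 credit ≈ 1,000 tokens, rounded up so that
// even a short reply costs at least one credit.
export function creditsForTokens(totalTokens: number): number {
  if (totalTokens <= 0) return 0;
  return Math.ceil(totalTokens / 1000);
}

// Deduct usage from a balance, refusing to go negative.
// The API route should have already rejected the request with
// a 402 if the balance was too low.
export function deductCredits(balance: number, totalTokens: number): number {
  const cost = creditsForTokens(totalTokens);
  if (cost > balance) throw new Error('Insufficient credits');
  return balance - cost;
}
```

In practice the deduction runs inside the `onFinish` callback shown later, where the SDK reports `usage.totalTokens` for the completed response.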

The Standard Chatbot Stack

// Full chatbot implementation — 3 files:

// 1. lib/ai.ts — AI SDK setup:
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';

export const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
export const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Model selector (let users choose):
export function getModel(modelId: string) {
  switch (modelId) {
    case 'gpt-4o': return openai('gpt-4o');
    case 'gpt-4o-mini': return openai('gpt-4o-mini');
    case 'claude-3-5-sonnet': return anthropic('claude-3-5-sonnet-20241022');
    case 'claude-3-5-haiku': return anthropic('claude-3-5-haiku-20241022');
    default: return openai('gpt-4o-mini');
  }
}

// 2. app/api/chat/route.ts — streaming endpoint:
import { streamText, convertToCoreMessages } from 'ai';
import { auth } from '@/lib/auth';
import { getModel } from '@/lib/ai';
import { loadConversationHistory, saveMessage } from '@/lib/conversations';
import { checkCredits, recordUsage } from '@/lib/credits';

export async function POST(req: Request) {
  const session = await auth();
  if (!session?.user?.id) return new Response('Unauthorized', { status: 401 });
  
  const { messages, conversationId, modelId = 'gpt-4o-mini', systemPrompt } = await req.json();
  
  // Pre-check credits (estimate ~1K tokens ≈ 1 credit up front;
  // actual usage is recorded in onFinish below):
  const hasCredits = await checkCredits(session.user.id, 1000);
  if (!hasCredits) return new Response('Insufficient credits', { status: 402 });
  
  // Load server-side history. Note: useChat sends the full client-side
  // message list by default, so either have the client send only the
  // latest message or skip the DB history here to avoid duplicates:
  const history = conversationId
    ? await loadConversationHistory(conversationId, 20)  // Last 20 messages
    : [];
  
  const allMessages = [...history, ...convertToCoreMessages(messages)];
  
  const result = streamText({
    model: getModel(modelId),
    system: systemPrompt ?? 'You are a helpful assistant.',
    messages: allMessages,
    maxTokens: 2048,
    
    onFinish: async ({ usage, text }) => {
      // Save the new messages to DB (if conversationId is undefined,
      // create the conversation row first — omitted here for brevity):
      await saveMessage({
        conversationId,
        role: 'user',
        content: messages[messages.length - 1].content,
      });
      await saveMessage({ conversationId, role: 'assistant', content: text });
      
      // Track usage:
      await recordUsage(session.user.id, { 
        tokens: usage.totalTokens, 
        model: modelId 
      });
    },
  });
  
  return result.toDataStreamResponse();
}

// 3. components/ChatInterface.tsx — streaming UI:
'use client';
import { useChat } from 'ai/react';
import { useState } from 'react';
import { toast } from 'sonner';  // or any toast library
import { Send, Bot, User } from 'lucide-react';

interface ChatInterfaceProps {
  conversationId?: string;
  systemPrompt?: string;
  placeholder?: string;
}

export function ChatInterface({ conversationId, systemPrompt, placeholder }: ChatInterfaceProps) {
  const [model, setModel] = useState('gpt-4o-mini');
  
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat({
    api: '/api/chat',
    body: { conversationId, systemPrompt, modelId: model },
    onError: (err) => {
      if (err.message.includes('402')) toast.error('Out of credits — upgrade your plan');
    },
  });

  return (
    <div className="flex flex-col h-full max-h-screen">
      {/* Model selector */}
      <div className="p-3 border-b flex items-center gap-2">
        <select value={model} onChange={(e) => setModel(e.target.value)} className="text-sm border rounded px-2 py-1">
          <option value="gpt-4o-mini">GPT-4o Mini (fast)</option>
          <option value="gpt-4o">GPT-4o (best)</option>
          <option value="claude-3-5-haiku">Claude Haiku (fast)</option>
          <option value="claude-3-5-sonnet">Claude Sonnet (best)</option>
        </select>
      </div>
      
      {/* Messages */}
      <div className="flex-1 overflow-y-auto p-4 space-y-4">
        {messages.length === 0 && (
          <div className="text-center text-gray-400 mt-20">
            Start a conversation
          </div>
        )}
        {messages.map((msg) => (
          <div key={msg.id} className={`flex gap-3 ${msg.role === 'user' ? 'justify-end' : 'justify-start'}`}>
            {msg.role === 'assistant' && <Bot className="h-6 w-6 mt-1 flex-shrink-0" />}
            <div className={`rounded-2xl px-4 py-2 max-w-[75%] ${
              msg.role === 'user' ? 'bg-blue-500 text-white' : 'bg-gray-100 text-gray-900'
            }`}>
              <div className="whitespace-pre-wrap">{msg.content}</div>
            </div>
            {msg.role === 'user' && <User className="h-6 w-6 mt-1 flex-shrink-0" />}
          </div>
        ))}
        {isLoading && (
          <div className="flex gap-3">
            <Bot className="h-6 w-6 mt-1" />
            <div className="bg-gray-100 rounded-2xl px-4 py-2">
              {/* Empty spans render at zero size, and Tailwind's delay-*
                  utilities set transition-delay, not animation-delay —
                  give the dots a size and stagger them explicitly: */}
              <div className="flex gap-1">
                <span className="h-2 w-2 rounded-full bg-gray-400 animate-bounce" />
                <span className="h-2 w-2 rounded-full bg-gray-400 animate-bounce [animation-delay:100ms]" />
                <span className="h-2 w-2 rounded-full bg-gray-400 animate-bounce [animation-delay:200ms]" />
              </div>
            </div>
          </div>
        )}
      </div>
      
      {/* Input */}
      <form onSubmit={handleSubmit} className="p-4 border-t">
        <div className="flex gap-2">
          <input
            value={input}
            onChange={handleInputChange}
            placeholder={placeholder ?? 'Message...'}
            className="flex-1 border rounded-lg px-4 py-2 focus:outline-none focus:ring-2 focus:ring-blue-500"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={isLoading || !input.trim()}
            className="bg-blue-500 text-white rounded-lg px-4 py-2 hover:bg-blue-600 disabled:opacity-50"
          >
            <Send className="h-4 w-4" />
          </button>
        </div>
      </form>
    </div>
  );
}

Conversation History Schema

// db/schema.ts additions:
import { pgTable, text, integer, timestamp } from 'drizzle-orm/pg-core';

export const conversations = pgTable('conversations', {
  id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
  userId: text('user_id').notNull().references(() => users.id),
  title: text('title'),
  model: text('model').notNull().default('gpt-4o-mini'),
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});

export const messages = pgTable('messages', {
  id: text('id').primaryKey().$defaultFn(() => crypto.randomUUID()),
  conversationId: text('conversation_id').notNull().references(() => conversations.id),
  role: text('role', { enum: ['user', 'assistant', 'system'] }).notNull(),
  content: text('content').notNull(),
  tokens: integer('tokens'),  // Usage tracking
  createdAt: timestamp('created_at').defaultNow().notNull(),
});
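The `loadConversationHistory(conversationId, 20)` call in the route above boils down to "fetch newest-first, then restore chronological order." A minimal sketch of that windowing step — `StoredMessage` and `lastNMessages` are stand-in names, with the DB query itself left out:

```typescript
type Role = 'user' | 'assistant' | 'system';

interface StoredMessage {
  role: Role;
  content: string;
  createdAt: number; // epoch millis; a timestamp column in the real schema
}

// Keep the most recent `limit` messages and return them oldest-first,
// which is the order the model expects in its `messages` array.
export function lastNMessages(rows: StoredMessage[], limit: number): StoredMessage[] {
  return [...rows]
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, limit)                           // take the window
    .reverse();                                // back to chronological order
}
```

With Drizzle, the fetch half is typically an `orderBy(desc(messages.createdAt)).limit(n)` query, with the reverse applied in application code as above.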

Purpose-Built Chatbot Starters

| Starter | Stack | Features | Cost |
| --- | --- | --- | --- |
| Vercel AI Chatbot | Next.js + OpenAI + Drizzle | Streaming, auth, history | Free |
| ChatGPT Clone (various) | React + OpenAI | Basic streaming | Free |
| Custom stack | T3/ShipFast + AI SDK | Full SaaS | $0-$299 |

Recommended approach for chatbot SaaS:
1. Start with ShipFast ($299) or T3 Stack (free)
2. Add Vercel AI SDK + useChat (1 hour)
3. Add conversation history schema (1 hour)
4. Add credit system (2-3 hours, see credit guide)
5. Add rate limiting (30 minutes)
6. Add model selector (30 minutes)
Total: 1 day to production-ready chatbot SaaS
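Step 5's rate limiting is usually a sliding-window counter keyed by user ID. A dependency-free in-memory sketch — production setups typically back this with Redis (e.g. Upstash) so it survives serverless cold starts:

```typescript
// Sliding-window rate limiter: allow at most `limit` requests
// per `windowMs` for each key (e.g. a user ID).
export class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; false means respond with 429.
  check(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have fallen out of the window:
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

In the chat route, this becomes a one-line guard right after the auth check, keyed on `session.user.id`.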

Find AI chatbot boilerplates and starters at StarterPick.
