LangChain.js Starter vs Vercel AI SDK Starter 2026
Two Philosophies, Two Different Products
The Vercel AI SDK and LangChain.js serve different markets. Choosing the wrong one adds complexity you do not need, or limits you to capabilities you will outgrow.
Vercel AI SDK is a UI-first React/Next.js library. It is designed to get streaming AI responses into a web UI with minimal code. If you are building a web product with AI features, it is the default choice in 2026.
LangChain.js is an orchestration framework. It chains LLM calls, tools, agents, and memory together into complex pipelines. If you are building multi-step AI workflows, autonomous agents, or RAG systems with advanced retrieval, LangChain is the framework for it.
Most SaaS boilerplates add the Vercel AI SDK. LangChain is used when the AI product requires complex orchestration that a simple streamText() call cannot handle.
TL;DR
- Vercel AI SDK: Use for web apps, chatbots, streaming UI, simple generation tasks. Default choice for SaaS with AI features.
- LangChain.js: Use for complex pipelines, multi-step agents, multi-source RAG, autonomous task execution.
- Both together: Common pattern — Vercel AI SDK for the UI layer, LangChain for orchestration of complex backend flows.
- For boilerplate starters: Vercel AI SDK integrates with every Next.js boilerplate out of the box. LangChain requires more setup and is typically used alongside, not instead of, a SaaS boilerplate.
Key Takeaways
- Vercel AI SDK has 4M+ weekly npm downloads vs LangChain.js 1M+
- Vercel AI SDK supports 20+ providers (OpenAI, Anthropic, Google, Mistral, Cohere, etc.) with a unified interface
- LangChain supports 80+ integrations including vector stores, document loaders, and tool executors
- Vercel AI SDK's useChat and useCompletion hooks reduce chat UI implementation to ~20 lines
- LangChain.js enables patterns like HyDE, multi-query RAG, and chain-of-thought agents
- Most production AI SaaS products use Vercel AI SDK for the API layer; LangChain for complex orchestration when needed
Vercel AI SDK: What It Does
The Vercel AI SDK (npm install ai) provides:
Provider-Unified Interface
import { generateText, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
// Switch providers by changing one line:
const result = await generateText({
  model: openai('gpt-4o'), // or anthropic('claude-opus-4-6')
  prompt: 'Explain RAG in one sentence.',
});
Streaming Chat for Next.js
// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
  });
  return result.toDataStreamResponse();
}
// components/Chat.tsx
'use client';
import { useChat } from '@ai-sdk/react';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => <div key={m.id}>{m.content}</div>)}
      <input value={input} onChange={handleInputChange} />
      <button type="submit">Send</button>
    </form>
  );
}
That is the complete streaming chat implementation — server route + client component. LangChain requires significantly more setup to accomplish the same.
Tool Use / Function Calling
import { streamText, tool } from 'ai';
import { z } from 'zod';
const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  tools: {
    getWeather: tool({
      description: 'Get current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        const data = await fetchWeather(city);
        return data;
      },
    }),
  },
});
Structured Output
import { generateObject } from 'ai';
import { z } from 'zod';
const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    sentiment: z.enum(['positive', 'neutral', 'negative']),
    score: z.number().min(0).max(1),
    summary: z.string(),
  }),
  prompt: `Analyze: "${reviewText}"`,
});
// object.sentiment, object.score, object.summary — fully typed
Embedding and RAG
import { embed, embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'Your text to embed',
});
// embedding: number[] — store in pgvector
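Once embeddings are stored, retrieval reduces to nearest-neighbor search over vectors. pgvector does this in SQL, but the math is simple enough to sketch in plain TypeScript (the cosineSimilarity and topK helpers below are illustrative, not part of the SDK):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored documents against a query embedding, highest similarity first.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return docs
    .map(d => ({ id: d.id, score: cosineSimilarity(query, d.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

In production you would push this into the database (pgvector's cosine distance operator) rather than scanning in application code, but the ranking logic is the same.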
LangChain.js: What It Does
LangChain.js (npm install langchain @langchain/openai) provides:
Chains and Sequential Pipelines
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
const model = new ChatOpenAI({ model: 'gpt-4o' });

// Chain: template → model → parser
const chain = PromptTemplate.fromTemplate(
  'Summarize this product review in 2 sentences: {review}'
)
  .pipe(model)
  .pipe(new StringOutputParser());

const summary = await chain.invoke({ review: userReview });
Advanced RAG Patterns
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { SupabaseVectorStore } from '@langchain/community/vectorstores/supabase';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ContextualCompressionRetriever } from 'langchain/retrievers/contextual_compression';
import { CohereRerank } from '@langchain/cohere';

const embeddings = new OpenAIEmbeddings();
const vectorStore = await SupabaseVectorStore.fromExistingIndex(embeddings, {
  client: supabaseClient,
  tableName: 'documents',
});

// With reranking: wrap the base retriever in a compression retriever
// that reorders and trims the top-10 results via Cohere's reranker.
const rerankedRetriever = new ContextualCompressionRetriever({
  baseCompressor: new CohereRerank({ model: 'rerank-english-v3.0', topN: 5 }),
  baseRetriever: vectorStore.asRetriever({ k: 10 }),
});

const qaChain = await createRetrievalChain({
  retriever: rerankedRetriever,
  combineDocsChain: await createStuffDocumentsChain({
    llm: new ChatOpenAI({ model: 'gpt-4o' }),
    prompt: answerPrompt,
  }),
});
const result = await qaChain.invoke({ input: userQuery });
Agents with Tool Use
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { TavilySearchResults } from '@langchain/community/tools/tavily_search';
import { ChatOpenAI } from '@langchain/openai';
const tools = [
  new TavilySearchResults({ maxResults: 3 }),
  // Additional tools: code execution, database queries, etc.
];

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o' }),
  tools,
});

// Agent autonomously decides which tools to call:
const result = await agent.invoke({
  messages: [{ role: 'user', content: 'Research the top 5 SaaS boilerplates and compare their pricing.' }],
});
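Under the hood, a ReAct-style agent is a loop: the model either requests a tool call or emits a final answer, and each tool's observation is fed back into the context for the next decision. A hypothetical, dependency-free sketch of that loop follows; the decide function stands in for the LLM, and none of these names come from LangChain's API:

```typescript
// The model's output at each step: call a tool, or finish.
type ModelStep =
  | { type: 'tool_call'; tool: string; args: string }
  | { type: 'final'; answer: string };

// A minimal agent loop: ask the model, run tools, repeat until done.
function runAgentLoop(
  decide: (history: string[]) => ModelStep,        // stands in for the LLM
  tools: Record<string, (args: string) => string>, // tool name -> executor
  maxSteps = 5,
): string {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = decide(history);
    if (step.type === 'final') return step.answer;
    // Execute the requested tool and record the observation for the model.
    const observation = tools[step.tool](step.args);
    history.push(`${step.tool}(${step.args}) -> ${observation}`);
  }
  return 'max steps reached';
}
```

createReactAgent manages this loop for you, plus message formatting, streaming, and state persistence via LangGraph; the sketch only shows the control flow you are buying into.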
Memory and Conversation State
import { ChatOpenAI } from '@langchain/openai';
import { ConversationSummaryBufferMemory } from 'langchain/memory';
import { ConversationChain } from 'langchain/chains';
// Summarize old messages to stay within context window:
const memory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ model: 'gpt-3.5-turbo' }), // Cheap model for summarization
  maxTokenLimit: 2000,
});

const chain = new ConversationChain({
  llm: new ChatOpenAI({ model: 'gpt-4o' }),
  memory,
});
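The buffer half of that memory type is token budgeting: keep the newest messages verbatim until the limit is hit, then hand the overflow to the summarizer. A rough sketch of the split, with token counts approximated by word counts (the helper name and the approximation are illustrative, not LangChain internals):

```typescript
// Walk backwards from the newest message, keeping messages verbatim
// until the token budget is spent; everything older gets summarized.
function splitByBudget(
  messages: string[],
  maxTokens: number,
): { toSummarize: string[]; recent: string[] } {
  const countTokens = (m: string) => m.split(/\s+/).length; // crude proxy
  let used = 0;
  const recent: string[] = [];
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = countTokens(messages[i]);
    if (used + cost > maxTokens) {
      // Budget exceeded: older messages go to the summarizer.
      return { toSummarize: messages.slice(0, i + 1), recent };
    }
    used += cost;
    recent.unshift(messages[i]);
  }
  return { toSummarize: [], recent };
}
```

Real implementations count tokens with the model's tokenizer, not words, and fold the summary itself back into the budget.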
Side-by-Side Comparison
| Feature | Vercel AI SDK | LangChain.js |
|---|---|---|
| Primary use | Web UI streaming, simple generation | Complex pipelines, agents, multi-step RAG |
| Learning curve | Low — 30 minutes to streaming chat | High — chains, runnables, tools, agents |
| React/Next.js hooks | Yes (useChat, useCompletion, useObject) | No (need to wire manually) |
| Streaming | First-class, built-in | Yes, but more setup |
| Provider support | 20+ with unified API | 80+ (less unified) |
| Tool use | Yes (built-in tool()) | Yes (extensive toolset) |
| RAG support | Embed + retrieval primitives | Full RAG chains, reranking |
| Agents | Basic (tool loops) | Full ReAct agents, LangGraph |
| Memory | Manual | Built-in memory types |
| Document loaders | No | 100+ loaders (PDF, CSV, web, etc.) |
| Vector store integrations | No (use provider SDK) | 50+ integrations |
| Bundle size | Small | Large (~500KB+) |
| npm downloads/week | ~4M | ~1M |
| Maturity | 2023 (rapidly growing) | 2022 (established) |
Starter Templates
Vercel AI SDK Starters
Official Vercel AI Chatbot:
npx create-next-app -e https://github.com/vercel/ai-chatbot
Includes: Next.js 15, streaming chat, multiple models, Vercel KV persistence, NextAuth, shadcn/ui.
Add to existing boilerplate:
npm install ai @ai-sdk/openai
# That's it — works with ShipFast, Makerkit, OpenSaaS, any Next.js app
LangChain.js Starters
LangChain.js minimal starter:
npm install langchain @langchain/openai @langchain/community
No official comprehensive starter template exists. Community templates:
- langchain-nextjs-template (GitHub) — basic RAG example with Next.js
- LangChain documentation examples — copy-paste patterns for specific use cases
LangSmith (LangChain's observability platform):
npm install langsmith
# Set LANGCHAIN_TRACING_V2=true in .env
LangSmith provides tracing, debugging, and evaluation for LangChain pipelines.
When to Use Each
Use Vercel AI SDK When:
- Building a web application with streaming AI responses
- You need React hooks (useChat, useCompletion)
- Your AI feature is single-step: generate text, classify, summarize, extract
- You want multi-provider flexibility without boilerplate
- You are adding AI to an existing Next.js SaaS
- You need structured output with Zod schemas
Use LangChain.js When:
- Building multi-step AI pipelines: search → summarize → generate
- Implementing advanced RAG: multiple sources, reranking, query transformation
- Building autonomous agents that decide which tools to use
- Integrating many data sources: PDFs, web pages, databases, APIs
- You need conversation memory with summarization
- Processing large document sets (100+ documents with chunking/loading)
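Chunking is the step most of those document loaders feed into: split each document into pieces small enough to embed and retrieve. A simplified version of what LangChain's text splitters do, using fixed-size character chunks with overlap so context is not lost at boundaries (a sketch, not the library's actual splitter):

```typescript
// Split text into chunkSize-character pieces; consecutive chunks share
// `overlap` characters so sentences cut at a boundary survive in context.
function chunkText(text: string, chunkSize: number, overlap: number): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be < chunkSize');
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final chunk reached
    start += chunkSize - overlap;
  }
  return chunks;
}
```

LangChain's RecursiveCharacterTextSplitter improves on this by preferring to break at paragraph and sentence boundaries before falling back to hard character cuts.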
Use Both Together:
The most common production pattern: Vercel AI SDK for the streaming UI layer, LangChain for complex backend orchestration.
// app/api/research/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { runResearchChain } from '@/lib/langchain/research'; // LangChain pipeline
export async function POST(req: Request) {
  const { query } = await req.json();

  // LangChain runs the complex pipeline (search, retrieve, synthesize):
  const context = await runResearchChain(query);

  // Vercel AI SDK streams the result to the UI:
  const result = await streamText({
    model: openai('gpt-4o'),
    system: `Use the following research results to answer the question:\n\n${context}`,
    messages: [{ role: 'user', content: query }],
  });
  return result.toDataStreamResponse();
}
Performance and Bundle Size
Vercel AI SDK is significantly lighter:
| Metric | Vercel AI SDK | LangChain.js (full) |
|---|---|---|
| Install size | ~2MB | ~50MB+ |
| Bundle impact | Small | Large (tree-shaking helps) |
| Cold start (edge) | Fast | Slow (avoid on edge) |
| TypeScript support | Excellent | Good |
For edge deployments (Cloudflare Workers, Vercel Edge), use Vercel AI SDK. LangChain has partial edge support but works best on Node.js.
Recommendation by Use Case
| Use Case | Recommended |
|---|---|
| Streaming chatbot | Vercel AI SDK |
| Document Q&A (simple) | Vercel AI SDK + pgvector |
| Document Q&A (advanced, multi-source) | LangChain.js |
| Content generation tool | Vercel AI SDK |
| Autonomous research agent | LangChain.js / LangGraph |
| Structured data extraction | Vercel AI SDK (generateObject) |
| Multi-step workflow automation | LangChain.js |
| Adding AI to SaaS boilerplate | Vercel AI SDK |
Methodology
Based on publicly available documentation from the Vercel AI SDK docs (sdk.vercel.ai), LangChain.js documentation (js.langchain.com), npm download statistics, and community resources as of March 2026.
Building an AI-powered SaaS? StarterPick helps you find the right boilerplate foundation before you add your AI layer.