
Guide

Mastra vs Vercel AI SDK vs LangGraph.js: AI Agent SaaS Boilerplate Stack 2026

Pick the right TypeScript AI agent framework for your SaaS boilerplate in 2026: Mastra, Vercel AI SDK, and LangGraph.js compared on programming model, observability, deployment, and use cases.

StarterPick Team

Quick Verdict

For an AI-native SaaS in 2026:

  • Vercel AI SDK — fastest path to "send messages to an LLM with tool use and streaming UI." If your AI feature is a chat or single-step agent, this is enough.
  • Mastra — opinionated TypeScript agent framework with workflows, memory, evals, and a dev UI. The 2026 default for "I'm building a real agent product, not a chatbot."
  • LangGraph.js — graph-based stateful agents with the most expressive control flow. Best when your agent has explicit branching, retries, and human-in-the-loop steps.

If unsure, start on Vercel AI SDK and graduate to Mastra when you need workflows, memory, and evals. Reach for LangGraph when the agent's control flow is genuinely a graph.

Key Takeaways

  • All three are TypeScript-first and run on Node, Bun, and Edge runtimes.
  • The trade-off is simplicity (Vercel AI SDK) vs opinionated structure (Mastra) vs expressiveness (LangGraph.js).
  • Most SaaS boilerplates with AI features bundle the Vercel AI SDK. Mastra and LangGraph adoption is growing among AI-first products.

Decision Table

Product | Pick
Customer-facing chatbot with tool use | Vercel AI SDK
Multi-step research agent that searches, summarizes, writes | Mastra
Agent with explicit human approval gates and retry policies | LangGraph.js
AI feature inside an existing CRUD SaaS | Vercel AI SDK
New AI-native SaaS (verticalized agent) | Mastra
Workflow with parallel tool calls and conditional edges | LangGraph.js

What an Agent Framework Buys You

A "naive" AI feature is streamText({ model, messages }). You quickly outgrow it:

  • Memory — the agent should remember a user across sessions.
  • Tools — the agent should call your APIs (search, database, third-party).
  • Workflow — multi-step plans with branching and retries.
  • Evals — repeatable tests that catch regressions when you change the prompt or model.
  • Observability — traces of every step, every prompt, every tool result.

Vercel AI SDK gives you tools and streaming. Mastra adds workflows, memory, evals, and a dev UI. LangGraph adds first-class graph-shaped control flow.
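
For contrast, the naive baseline that the list above outgrows is a single generation call. A minimal sketch, assuming a Next.js route handler and the OpenAI provider package (both illustrative choices):

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// One-shot generation: no tools, no memory, no workflow, no evals, no traces.
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({ model: openai('gpt-4o'), messages });
  return result.toDataStreamResponse();
}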

Vercel AI SDK

Pricing: Free, MIT.

Fit: Any chat or single-step agent feature inside a Next.js, SvelteKit, Nuxt, or Astro boilerplate. Pairs naturally with the Vercel AI Gateway.

// app/api/chat/route.ts: a Next.js route handler; `db` is your own data layer.
import { streamText, tool } from 'ai';
import { gateway } from '@ai-sdk/gateway';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: gateway('anthropic/claude-sonnet-4-6'),
    messages,
    tools: {
      searchProducts: tool({
        description: 'Search the product catalog',
        parameters: z.object({ query: z.string() }),
        execute: async ({ query }) => db.products.search(query),
      }),
    },
    maxSteps: 5,
  });

  return result.toDataStreamResponse();
}

What you get:

  • streamText, generateText, streamObject, generateObject covering 90% of API patterns.
  • Tool use with Zod schemas — typed end to end.
  • Automatic UI streaming (useChat, useCompletion) for React/Vue/Svelte; a client-side sketch follows this list.
  • Multi-step tool loops with maxSteps.
  • Provider abstraction — swap Anthropic, OpenAI, Google, Mistral with one line.
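
On the client, the useChat hook from the list above consumes that streaming route. A minimal React sketch, assuming the /api/chat handler shown earlier and the AI SDK 4-style hook API from @ai-sdk/react:

'use client';
import { useChat } from '@ai-sdk/react';

export function ProductChat() {
  // Streams assistant messages from /api/chat and manages the input state for you.
  const { messages, input, handleInputChange, handleSubmit } = useChat({ api: '/api/chat' });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>{m.role}: {m.content}</p>
      ))}
      <input value={input} onChange={handleInputChange} placeholder="Ask about a product" />
    </form>
  );
}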

Where it bites:

  • No first-party memory or persistence — you wire it.
  • No first-party workflow / branching — you write if statements.
  • No first-party evals.

Mastra

Pricing: Free, Apache 2.0. Mastra Cloud is the optional managed plane.

Fit: Anything past a single chat surface. Multi-agent products. Apps where you'll add memory, RAG, evals, and workflows over time.

import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { Memory } from '@mastra/memory';
import { openai } from '@ai-sdk/openai';

// `lookupCustomer` and `refund` are tools you define; `postgresStorage` is your
// configured storage adapter (for example Postgres via @mastra/pg).
const supportAgent = new Agent({
  name: 'support',
  instructions: 'Help customers with billing and product questions.',
  model: openai('gpt-4o'),
  tools: { lookupCustomer, refund },
  memory: new Memory({ storage: postgresStorage }),
});

const mastra = new Mastra({ agents: { supportAgent } });
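
The lookupCustomer and refund tools above are your own definitions. A rough sketch of one of them with Mastra's createTool helper; the tool id, schema, and the db call are placeholders:

import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

// Hypothetical customer-lookup tool; `db` is your own data layer.
const lookupCustomer = createTool({
  id: 'lookup-customer',
  description: 'Look up a customer by email address',
  inputSchema: z.object({ email: z.string().email() }),
  execute: async ({ context }) => {
    const customer = await db.customers.findByEmail(context.email);
    return { id: customer.id, plan: customer.plan, status: customer.status };
  },
});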

What you get:

  • Agents with first-class memory (vector + key-value).
  • Workflows — typed step graphs with retries, branching, parallel execution (sketched after this list).
  • Evals — answer relevance, hallucination, faithfulness baked in.
  • RAG primitives — chunkers, embedders, retrievers, rerankers.
  • A local dev UI for tracing every agent run, viewing memory, replaying conversations.
  • Built on top of the Vercel AI SDK — no rewrite of provider code.
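
The workflow primitive mentioned above is a typed step graph. A rough sketch of the shape using Mastra's createWorkflow and createStep; searchDocs and summarizeDocs are hypothetical helpers, and signatures shift between releases, so treat this as illustrative:

import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

// Step 1: fetch documents for a query.
const fetchDocs = createStep({
  id: 'fetch-docs',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ docs: z.array(z.string()) }),
  execute: async ({ inputData }) => ({ docs: await searchDocs(inputData.query) }),
});

// Step 2: summarize whatever step 1 produced.
const summarizeStep = createStep({
  id: 'summarize',
  inputSchema: z.object({ docs: z.array(z.string()) }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ inputData }) => ({ summary: await summarizeDocs(inputData.docs) }),
});

// Chain the steps into a workflow your agent (or your API) can run.
const researchWorkflow = createWorkflow({
  id: 'research',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ summary: z.string() }),
})
  .then(fetchDocs)
  .then(summarizeStep)
  .commit();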

Where it bites:

  • Newer ecosystem; fewer Stack Overflow answers than LangChain.
  • Conventions to learn — agents, workflows, tools, memory have specific shapes.
  • Less battle-tested than the LangChain ecosystem as of 2026.

LangGraph.js

Pricing: Free, MIT. LangSmith (observability) and LangGraph Cloud are the paid managed pieces.

Fit: Agents whose flow is genuinely a graph: research → critique → revise loops, human-in-the-loop approvals, complex retry / fallback policies.

import { StateGraph, Annotation, START, END } from '@langchain/langgraph';

// Shared run state; later nodes fill in `summary` and `ok`.
const ResearchState = Annotation.Root({
  query: Annotation<string>,
  results: Annotation<string[]>,
  summary: Annotation<string>,
  ok: Annotation<boolean>,
});

// `search` is your own retrieval function; `llm.summarize` / `llm.critique` wrap model calls.
const graph = new StateGraph(ResearchState)
  .addNode('search', async (s) => ({ results: await search(s.query) }))
  .addNode('summarize', async (s) => ({ summary: await llm.summarize(s.results) }))
  .addNode('critique', async (s) => ({ ok: await llm.critique(s.summary) }))
  .addEdge(START, 'search')
  .addEdge('search', 'summarize')
  .addEdge('summarize', 'critique')
  .addConditionalEdges('critique', (s) => (s.ok ? END : 'summarize'))
  .compile();

What you get:

  • Explicit graph control flow — nodes, edges, conditional edges, cycles.
  • Persistent state (checkpointer) — pause and resume runs across sessions (see the sketch after this list).
  • Human-in-the-loop interruption built in.
  • Multi-agent supervisor patterns documented and supported.
  • Tight integration with LangSmith for tracing.
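
The checkpointer and interruption hooks above are wired in at compile and invoke time. A rough sketch, assuming the StateGraph from the earlier example is kept in an un-compiled researchBuilder variable and using the in-memory MemorySaver (swap in a persistent saver for production):

import { MemorySaver } from '@langchain/langgraph';

// Persist state per thread and pause before 'critique' so a human can approve.
const app = researchBuilder.compile({
  checkpointer: new MemorySaver(),
  interruptBefore: ['critique'],
});

// Each thread_id is an independent, resumable run.
const config = { configurable: { thread_id: 'run-42' } };
await app.invoke({ query: 'pricing page teardown' }, config);

// After approval, resume the same thread from its checkpoint by passing null as input.
await app.invoke(null, config);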

Where it bites:

  • Steepest learning curve of the three.
  • Heavier code surface for simple agents — overkill if your "agent" is a single LLM call with one tool.
  • LangChain.js dependency surface is famously broad (improving in 2026).

Programming Model Comparison

Concept | Vercel AI SDK | Mastra | LangGraph.js
Unit | Generation call | Agent / Workflow | Graph node
Memory | DIY | First-class (vector + KV) | Checkpointer + DIY
Tools | tool({...}) | tool({...}) | Tool class
Multi-step | maxSteps | Workflow steps | Graph edges
Evals | DIY | First-class | DIY (LangSmith add-on)
Streaming UI | useChat, useCompletion | Stream from agent | streamEvents
Local dev UI | None | Mastra Studio | LangGraph Studio
Hosted ops | Vercel | Mastra Cloud | LangGraph Cloud

What Boilerplates Bundle

Boilerplate | AI Layer
Vercel AI Chatbot template | Vercel AI SDK
Open SaaS (Wasp) | Vercel AI SDK
Mastra Starter | Mastra
LangGraph.js Starter | LangGraph + LangSmith
Most premium SaaS boilerplates (ShipFast, Makerkit) | Vercel AI SDK
AI agent kits | Mastra or LangGraph

For a deeper look at the AI-feature-bundling boilerplates, see the best AI SaaS boilerplates ranking.

Cost Profile

All three frameworks are free; the cost is your model spend. The framework choice does affect cost indirectly:

  • Vercel AI SDK — minimal overhead. You pay for the tokens you send.
  • Mastra — adds memory and embeddings; vector storage cost is real for high-volume agents.
  • LangGraph — checkpointing adds storage cost; nodes can fan out and double-spend if you don't budget step counts (see the step-cap snippet below).

A well-instrumented agent always costs more than the same logic in a single LLM call — that's the price of being correct, retryable, and inspectable. Pair with an AI gateway for cost attribution and caching.
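
One concrete lever for the step-count point above: cap how far a run can loop. In the Vercel AI SDK that's the maxSteps option shown earlier; in LangGraph.js, invoke accepts a recursionLimit so a non-converging critique loop errors out instead of spending indefinitely. A small sketch reusing the compiled graph from the LangGraph example (the query is a placeholder):

// Cap the number of supersteps; the run throws once the limit is hit.
const out = await graph.invoke(
  { query: 'compare vector databases' },
  { recursionLimit: 25 },
);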

Choosing in 60 Seconds

  • One chat surface, no memory needed → Vercel AI SDK.
  • Multi-agent product, want memory and evals → Mastra.
  • Complex graph control flow, human approvals → LangGraph.js.
  • Greenfield AI-native SaaS → Mastra unless you have a specific reason to want LangGraph's graph semantics.

What This Replaces

If you've been writing custom agent loops with while (toolCalls.length) over OpenAI's responses, all three replace that boilerplate. Mastra and LangGraph also replace your homegrown memory store, your homegrown evals, and your homegrown workflow engine — see durable workflow comparison for when Mastra workflows are enough and when you need a separate durable execution layer for the non-AI parts.

FAQ

Can I mix them? Yes. A common pattern: Mastra for the agent, Vercel AI SDK's streamText inside Mastra tools for ad-hoc generation, and LangGraph for one specific complex workflow.

Do they support local models? All three. Vercel AI SDK via OpenAI-compatible providers; Mastra via the same; LangGraph via Ollama and provider integrations.

Edge runtime? Vercel AI SDK is the most edge-friendly. Mastra runs on Edge for many features but memory storage usually pins to Node. LangGraph.js depends on the checkpointer.

Do I need LangSmith? Useful for LangGraph users; not required. Mastra has its own dev UI; Vercel AI SDK has provider-side dashboards plus your gateway's observability.


If you're starting from a generic SaaS foundation rather than an AI-native one, the best AI SaaS boilerplates with Claude/OpenAI integration covers what to bolt these onto.
