Vercel AI SDK Tutorial: Build an AI Chatbot with Next.js
Monday, Dec 29, 2025
Vercel AI SDK is a library that makes integrating AI into web applications remarkably easy. With this SDK, you can build a chatbot with streaming responses, support for multiple AI providers, and advanced features like tool calling.
In this tutorial, I’ll guide you from setup to a production-ready chatbot.
What is Vercel AI SDK?
Vercel AI SDK is an open-source library for building AI applications. Its advantages:
- ✅ Streaming responses - text appears word by word, better UX
- ✅ Multiple AI providers - OpenAI, Anthropic, Google, and more
- ✅ React hooks -
useChat,useCompletionready to use - ✅ Edge-ready - can deploy on edge runtime
- ✅ TypeScript first - full type safety
- ✅ Structured outputs - generate JSON with schema validation
Project Setup
1. Create Next.js Project
npx create-next-app@latest ai-chatbot --typescript --tailwind --app
cd ai-chatbot
2. Install Dependencies
npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
The SDK is modular, so you only need to install the providers you use.
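For example, if you only plan to use OpenAI:

npm install ai @ai-sdk/openai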
3. Setup Environment Variables
Create a .env.local file:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...
First Chatbot with useChat
API Route
Create app/api/chat/route.ts:
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    system: "You are a helpful assistant. Answer concisely.",
  });

  return result.toDataStreamResponse();
}
Chat Component
Create app/page.tsx:
"use client";
import { useChat } from "ai/react";
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit, isLoading } =
useChat();
return (
<div className="flex flex-col h-screen max-w-2xl mx-auto p-4">
<div className="flex-1 overflow-y-auto space-y-4 mb-4">
{messages.map((message) => (
<div
key={message.id}
className={`p-4 rounded-lg ${
message.role === "user"
? "bg-blue-500 text-white ml-auto max-w-[80%]"
: "bg-gray-100 mr-auto max-w-[80%]"
}`}
>
{message.content}
</div>
))}
{isLoading && (
<div className="bg-gray-100 p-4 rounded-lg mr-auto">
<div className="animate-pulse">Thinking...</div>
</div>
)}
</div>
<form onSubmit={handleSubmit} className="flex gap-2">
<input
value={input}
onChange={handleInputChange}
placeholder="Type a message..."
className="flex-1 p-3 border rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500"
/>
<button
type="submit"
disabled={isLoading}
className="px-6 py-3 bg-blue-500 text-white rounded-lg hover:bg-blue-600 disabled:opacity-50"
>
Send
</button>
</form>
</div>
);
}
Run npm run dev and the chatbot is running!
Streaming Responses Deep Dive
Why Streaming Matters
Without streaming, users have to wait for the entire response to finish generating. With streaming, text appears in real time, which makes for a much better UX.
How Streaming Works
import { streamText } from "ai";

const result = streamText({
  model: openai("gpt-4o"),
  messages,
});

// Option 1: Data stream (recommended for useChat)
return result.toDataStreamResponse();

// Option 2: Text stream (for other use cases)
return result.toTextStreamResponse();
Handle Streaming on Client
useChat handles streaming automatically, but you can also consume the stream by hand:
const response = await fetch("/api/chat", {
  method: "POST",
  body: JSON.stringify({ messages }),
});

const reader = response.body?.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader!.read();
  if (done) break;

  const chunk = decoder.decode(value);
  console.log(chunk); // Text chunk
}
Multiple AI Providers
One of the strengths of Vercel AI SDK is a unified API for various providers.
OpenAI
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o"), // or "gpt-4o-mini", "gpt-3.5-turbo"
  messages,
});
Anthropic (Claude)
import { anthropic } from "@ai-sdk/anthropic";
const result = streamText({
model: anthropic("claude-sonnet-4-20250514"), // or "claude-3-5-haiku-20241022"
messages,
});
Google (Gemini)
import { google } from "@ai-sdk/google";
const result = streamText({
model: google("gemini-2.0-flash"), // or "gemini-1.5-pro"
messages,
});
Dynamic Provider Selection
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

function getModel(provider: string) {
  switch (provider) {
    case "openai":
      return openai("gpt-4o");
    case "anthropic":
      return anthropic("claude-sonnet-4-20250514");
    case "google":
      return google("gemini-2.0-flash");
    default:
      return openai("gpt-4o-mini");
  }
}

export async function POST(req: Request) {
  const { messages, provider } = await req.json();

  const result = streamText({
    model: getModel(provider),
    messages,
  });

  return result.toDataStreamResponse();
}
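On the client, a minimal sketch of passing the provider along with each request, using useChat's body option (covered in more detail in the next section):

const { messages, input, handleInputChange, handleSubmit } = useChat({
  body: { provider: "anthropic" }, // merged into the JSON body the route receives
});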
useChat Hook Options
useChat has many powerful options:
const {
  messages,          // Array of messages
  input,             // Current input value
  handleInputChange, // Input onChange handler
  handleSubmit,      // Form submit handler
  isLoading,         // Loading state
  error,             // Error state
  reload,            // Reload last AI response
  stop,              // Stop streaming
  append,            // Append message programmatically
  setMessages,       // Set messages manually
} = useChat({
  api: "/api/chat",     // Custom API endpoint
  id: "unique-chat-id", // For multiple chats
  initialMessages: [],  // Pre-populate messages
  body: {
    // Extra data to API
    userId: "123",
  },
  headers: {
    // Custom headers
    Authorization: "Bearer token",
  },
  onResponse: (response) => {
    // Callback when response received
    console.log("Response received");
  },
  onFinish: (message) => {
    // Callback when streaming finished
    console.log("Finished:", message);
  },
  onError: (error) => {
    // Error handling
    console.error("Error:", error);
  },
});
Example: Chat with Stop Button
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } =
    useChat();

  return (
    <div>
      {/* Messages */}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        {isLoading ? (
          <button type="button" onClick={stop}>
            Stop
          </button>
        ) : (
          <button type="submit">Send</button>
        )}
      </form>
    </div>
  );
}
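The error and reload values from the hook combine the same way; a minimal sketch that surfaces errors with a retry button:

export default function Chat() {
  const { messages, error, reload } = useChat();

  return (
    <div>
      {/* Messages */}
      {error && (
        <div className="text-red-500">
          <span>Something went wrong.</span>
          {/* reload() re-submits the last message */}
          <button type="button" onClick={() => reload()}>
            Retry
          </button>
        </div>
      )}
    </div>
  );
}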
Structured Outputs
Generate JSON with schema validation using Zod:
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

const result = await generateObject({
  model: openai("gpt-4o"),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        })
      ),
      steps: z.array(z.string()),
      cookingTime: z.number().describe("Cooking time in minutes"),
    }),
  }),
  prompt: "Generate a fried rice recipe",
});

console.log(result.object);
// { recipe: { name: "Fried Rice", ingredients: [...], steps: [...], cookingTime: 15 } }
Streaming Structured Output
import { streamObject } from "ai";

const result = streamObject({
  model: openai("gpt-4o"),
  schema: recipeSchema,
  prompt: "Generate a fried rice recipe",
});

for await (const partialObject of result.partialObjectStream) {
  console.log(partialObject);
  // Object built incrementally
}
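To serve this from a route handler, the partial object stream can be sent to the client as plain text. A sketch, assuming recipeSchema is the Zod schema from the previous example:

import { openai } from "@ai-sdk/openai";
import { streamObject } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamObject({
    model: openai("gpt-4o"),
    schema: recipeSchema, // the recipe schema defined above
    prompt,
  });

  // Streams the object's JSON text as it is generated
  return result.toTextStreamResponse();
}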
Tool Calling (Function Calling)
Tool calling lets the model call functions you define:
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      getWeather: tool({
        description: "Get current weather for a location",
        parameters: z.object({
          location: z.string().describe("City name"),
        }),
        execute: async ({ location }) => {
          // Call weather API
          const weather = await fetchWeather(location);
          return weather;
        },
      }),
      searchProducts: tool({
        description: "Search products in database",
        parameters: z.object({
          query: z.string(),
          category: z.string().optional(),
          maxPrice: z.number().optional(),
        }),
        execute: async ({ query, category, maxPrice }) => {
          const products = await db.products.search({
            query,
            category,
            maxPrice,
          });
          return products;
        },
      }),
    },
    maxSteps: 5, // Allow multiple tool calls
  });

  return result.toDataStreamResponse();
}
Display Tool Results in UI
export default function Chat() {
  const { messages } = useChat();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === "user" ? (
            <div>{message.content}</div>
          ) : (
            <div>
              {message.content}
              {/* Display tool invocations */}
              {message.toolInvocations?.map((tool) => (
                <div key={tool.toolCallId} className="bg-gray-50 p-2 rounded">
                  <div>Tool: {tool.toolName}</div>
                  {tool.state === "result" && (
                    <pre>{JSON.stringify(tool.result, null, 2)}</pre>
                  )}
                </div>
              ))}
            </div>
          )}
        </div>
      ))}
    </div>
  );
}
Rate Limiting
Protect your API from abuse with rate limiting:
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "1 m"), // 10 requests per minute
});

export async function POST(req: Request) {
  // Get user identifier
  const ip = req.headers.get("x-forwarded-for") ?? "anonymous";

  const { success, limit, reset, remaining } = await ratelimit.limit(ip);

  if (!success) {
    return new Response("Rate limit exceeded", {
      status: 429,
      headers: {
        "X-RateLimit-Limit": limit.toString(),
        "X-RateLimit-Remaining": remaining.toString(),
        "X-RateLimit-Reset": reset.toString(),
      },
    });
  }

  // Process chat...
}
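This sketch assumes the Upstash packages are installed (npm install @upstash/ratelimit @upstash/redis) and that UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN are set in the environment, since those are the variables Redis.fromEnv() reads.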
Production Deployment Tips
1. Error Handling
export async function POST(req: Request) {
  try {
    const { messages } = await req.json();

    const result = streamText({
      model: openai("gpt-4o"),
      messages,
    });

    return result.toDataStreamResponse();
  } catch (error) {
    console.error("Chat error:", error);

    if (error instanceof Error) {
      if (error.message.includes("rate limit")) {
        return new Response("Rate limit exceeded", { status: 429 });
      }
      if (error.message.includes("invalid api key")) {
        return new Response("Configuration error", { status: 500 });
      }
    }

    return new Response("Internal server error", { status: 500 });
  }
}
2. Timeout Handling
export const maxDuration = 60; // Increase timeout for long responses

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    abortSignal: AbortSignal.timeout(55000), // Abort before edge timeout
  });

  return result.toDataStreamResponse();
}
3. Input Validation
import { z } from "zod";

const requestSchema = z.object({
  messages: z.array(
    z.object({
      role: z.enum(["user", "assistant", "system"]),
      content: z.string().max(10000),
    })
  ),
});

export async function POST(req: Request) {
  const body = await req.json();
  const { messages } = requestSchema.parse(body);
  // Process...
}
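Note that parse throws on invalid input, which would surface as a 500. To return a proper 400 instead, safeParse is the usual pattern:

const parsed = requestSchema.safeParse(body);

if (!parsed.success) {
  return new Response("Invalid request", { status: 400 });
}

const { messages } = parsed.data;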
4. Cost Saving Tips
- Use the right model - GPT-4o-mini is sufficient for many use cases
- Limit context - Don’t send all history, just the last N messages
- Cache responses - For similar questions (see the caching sketch below)
- Set max tokens - Limit response length
const result = streamText({
  model: openai("gpt-4o-mini"),   // Cheaper
  messages: messages.slice(-10),  // Only last 10 messages
  maxTokens: 500,                 // Limit response length
});
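For the caching tip, here is a deliberately naive in-memory sketch. It assumes a single server process (a real deployment would use Redis or similar and cache only non-personalized questions), and it uses generateText rather than streamText because a fully cached answer has nothing to stream:

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Naive cache keyed by the latest user message. Sketch only: a real
// key would be normalized and scoped per user or conversation context.
const cache = new Map<string, string>();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const key = messages[messages.length - 1].content;

  const cached = cache.get(key);
  if (cached) {
    return new Response(cached); // skip the model entirely
  }

  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    messages,
  });

  cache.set(key, text);
  return new Response(text);
}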
Conclusion
Vercel AI SDK makes chatbot development much easier:
- Quick setup - Streaming in minutes
- Unified API - Switch providers without changing much code
- Production-ready - Built-in features for scale
- Type-safe - Full TypeScript support
Next steps:
- Explore AI SDK Documentation
- Try templates at Vercel AI Templates
- Experiment with tool calling for specific use cases
Happy coding! 🚀
Have questions about Vercel AI SDK implementation? Reach out on Twitter @nayakayp!