Build an AI App

Building a Chatbot

Define a route handler that will call streamText in api/chat/route.ts. If deploying on Vercel, remember to set maxDuration to a value greater than the default 10 seconds so longer streams aren't cut off.

app/(5-chatbot)/api/chat/route.ts
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {}

Get the incoming messages from the request body.

app/(5-chatbot)/api/chat/route.ts
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {
  const { messages } = await req.json(); 
}
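For reference, the messages parsed from the request body form an array of role/content objects, one per conversation turn. A minimal sketch of that shape (the values are illustrative, and real messages from the client also carry an id):

```typescript
// Illustrative shape of the messages array parsed from the request body.
// Each entry pairs a role with the text content of that turn.
type ChatMessage = {
  role: 'user' | 'assistant' | 'system';
  content: string;
};

const messages: ChatMessage[] = [
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: 'Hi! How can I help?' },
  { role: 'user', content: 'Tell me a joke.' },
];

console.log(messages.length); // 3 turns so far
```

Passing the full array on every request is what gives the model the conversation history; the route itself stays stateless.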

Call streamText passing in your model and messages.

app/(5-chatbot)/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
 
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {
  const { messages } = await req.json();
 
  const result = streamText({ 
    model: openai('gpt-4o'), 
    messages, 
  }); 
}

Return the resulting generation as a streaming response.

app/(5-chatbot)/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
 
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {
  const { messages } = await req.json();
 
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });
 
  return result.toDataStreamResponse(); 
}

Import the useChat hook from ai/react and use it in your chat/page.tsx file.

app/(5-chatbot)/chat/page.tsx
'use client'; 
 
import { useChat } from 'ai/react'; 
 
export default function Chat() {
  const {} = useChat(); 
  return <div>Chatbot</div>;
}

Destructure messages and map over them to display the chat messages.

app/(5-chatbot)/chat/page.tsx
'use client';
 
import { useChat } from 'ai/react';
 
export default function Chat() {
  const { messages } = useChat(); 
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => ( 
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
    </div>
  );
}

Destructure input, handleInputChange, and handleSubmit from the useChat hook. Add an input field and a form to submit messages.

app/(5-chatbot)/chat/page.tsx
'use client';
 
import { useChat } from 'ai/react';
 
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat(); 
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
 
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input} 
          placeholder="Say something..."
          onChange={handleInputChange} 
        />
      </form>
    </div>
  );
}

Run the app and navigate to /chat to see the chatbot in action.

pnpm run dev

Head back to your api/chat/route.ts file and add a system prompt to change how the model responds.

app/(5-chatbot)/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
 
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {
  const { messages } = await req.json();
 
  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are an unhelpful assistant that only responds to users with confusing riddles.', 
    messages,
  });
 
  return result.toDataStreamResponse();
}

Head back to the browser and ask a new question to see the new response. Notice we've completely changed the behavior of the model without changing the model itself.

Last one! Change the system prompt to anything you would like. I suggest some kind of pop culture reference.

app/(5-chatbot)/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
 
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
 
export async function POST(req: Request) {
  const { messages } = await req.json();
 
  const result = streamText({
    model: openai('gpt-4o'),
    system: `You are Steve Jobs. Assume his character, both strengths and flaws.
    Respond exactly how he would, in exactly his tone.
    It is 1984 and you have just created the Macintosh.`, 
    messages,
  });
 
  return result.toDataStreamResponse();
}

Now ask a question like:

What do you think of Bill?

Notice the response is eerily similar to what Steve Jobs might say. This is the power of system prompts.

Finally, try asking about the current weather in SF.

What's the weather like in San Francisco?

Notice that it can't answer: the model has no access to real-time information. We can fix this with tools.
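As a preview of where that's headed: a tool pairs a description the model can read with a function it can decide to call. The sketch below is only the rough shape; in the AI SDK you would register it under streamText's tools option with a schema for its parameters, and the weather data here is mocked rather than fetched from a real API.

```typescript
// A sketch of a hypothetical "weather" tool: a description the model sees,
// plus an execute function that runs when the model chooses to call it.
// The returned data is mocked; a real implementation would call a weather API.
const weatherTool = {
  description: 'Get the current weather for a given city',
  execute: async ({ city }: { city: string }) => {
    return { city, temperature: 18, unit: 'C', condition: 'foggy' };
  },
};

// In practice the model supplies the arguments; we call it directly here.
weatherTool.execute({ city: 'San Francisco' }).then(report =>
  console.log(`${report.city}: ${report.temperature}°${report.unit}, ${report.condition}`),
);
```

Because the model sees the description, it can decide on its own that a question about San Francisco weather should trigger this tool instead of a guess.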