Recipes
Six complete copy-paste builds (ChatGPT clone, RAG UI, agent playground…).
docs/RECIPES.md

Six copy-paste recipes that compose AgentKit components into complete product surfaces. Each one is a single file, real data, and a real user flow. Reach for these before you start from scratch — they collapse days of layout bikeshedding into a morning.
Every recipe assumes you've followed GETTING-STARTED.md (project installed, /api/chat wired up). For design-token tweaks, see CUSTOMIZE.md.
1 — ChatGPT-clone layout (10 min)
What you're building: a full-page chat with a conversation sidebar, a message thread, a composer, a typing indicator, and streamed replies against the built-in /api/chat route.
Components used
- ChatSidebar — left rail, search + pin/archive.
- MessageBubble — role-aware bubble, fed straight from useChat().
- ChatInput — auto-growing textarea with Enter-to-send and busy/stop state.
- TypingIndicator — the three-dot filler while the model boots.
- Toaster — drop-in for error messages from the stream.
// app/chat/page.tsx
'use client'
import { useState } from 'react'
import { useChat } from '@ai-sdk/react'
import { ChatSidebar, type Conversation } from '@/components/ai/chat-sidebar'
import { MessageBubble } from '@/components/ai/message-bubble'
import { ChatInput } from '@/components/ai/chat-input'
import { TypingIndicator } from '@/components/ai/typing-indicator'
import { Toaster, toast } from '@/components/ui/toast'
const seed: Conversation[] = [
{ id: 'c1', title: 'Q3 retention analysis', updatedAt: '2026-04-19T16:42:00Z', preview: 'Churn drivers across the pro tier…', pinned: true },
{ id: 'c2', title: 'Onboarding email rewrite', updatedAt: '2026-04-19T11:20:00Z', preview: 'Make the welcome less corporate.' },
{ id: 'c3', title: 'pgvector vs Pinecone', updatedAt: '2026-04-18T09:05:00Z', preview: 'Cost + latency trade-offs.' },
]
export default function ChatPage() {
const [convos, setConvos] = useState(seed)
const [activeId, setActiveId] = useState('c1')
const { messages, sendMessage, status } = useChat({
// Report stream failures from the hook's error callback —
// calling toast.error() during render would fire on every re-render.
onError: () => toast.error('Stream dropped. Try resending.'),
})
const busy = status === 'submitted' || status === 'streaming'
return (
<div className="flex h-dvh bg-[var(--color-bg)]">
<ChatSidebar
conversations={convos}
selectedId={activeId}
onSelect={setActiveId}
onNew={() => setActiveId('new')}
onPin={(id) => setConvos((xs) => xs.map((c) => (c.id === id ? { ...c, pinned: !c.pinned } : c)))}
onArchive={(id) => setConvos((xs) => xs.map((c) => (c.id === id ? { ...c, archived: !c.archived } : c)))}
/>
<main className="flex min-w-0 flex-1 flex-col">
<div className="flex-1 overflow-y-auto px-6 py-8 space-y-5">
{messages.map((m) => (
<MessageBubble key={m.id} role={m.role} name={m.role === 'assistant' ? 'GPT-4o' : 'You'}>
{m.parts.filter((p) => p.type === 'text').map((p) => p.text).join('')}
</MessageBubble>
))}
{status === 'submitted' && <TypingIndicator label="Thinking…" />}
</div>
<div className="border-t border-[var(--color-border)] px-6 py-4">
<ChatInput onSend={(text) => sendMessage({ text })} busy={busy} maxLength={4000} />
</div>
</main>
<Toaster />
</div>
)
}
Next steps
- Swap MessageBubble for MessageThread to allow per-message replies (pass root + replies derived from a parentId field).
- Render <SuggestedPrompts> above the composer when messages.length === 0.
- Persist convos to your DB — ChatSidebar is purely controlled.
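The root + replies derivation mentioned above can be sketched as a pure function. The ThreadMessage shape and the parentId field here are assumptions about your own message model, not an AgentKit API; adapt the fields to whatever your backend stores.

```typescript
// Hypothetical message shape: top-level messages carry no parentId.
type ThreadMessage = { id: string; parentId?: string; text: string }
type Thread = { root: ThreadMessage; replies: ThreadMessage[] }

// Group a flat message list into { root, replies } pairs for MessageThread.
function buildThreads(messages: ThreadMessage[]): Thread[] {
  const roots = messages.filter((m) => !m.parentId)
  return roots.map((root) => ({
    root,
    replies: messages.filter((m) => m.parentId === root.id),
  }))
}
```

One level of nesting is usually enough for chat; if you need arbitrary depth, recurse on replies instead of filtering once.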
2 — RAG chat with inline citations (20 min)
What you're building: a Q&A surface where every assistant claim carries a superscript [n] that hovers to reveal the source, plus a side drawer listing every retrieved chunk and a dropzone for the user's own docs.
Components used
- MessageBubble — the answer bubble; children can contain inline citations.
- InlineCitation — superscript numeral with a hover preview card.
- CitationDrawer — slide-in panel listing every source, searchable.
- DocumentUploader — drag-drop zone for the user's corpus.
- AISearchBox — the "ask a question" field with suggestion dropdown.
// app/research/page.tsx
'use client'
import { useState } from 'react'
import { MessageBubble } from '@/components/ai/message-bubble'
import { InlineCitation, type Source } from '@/components/ai/source-citation'
import { CitationDrawer, type Source as DrawerSource } from '@/components/ai/citation-drawer'
import { DocumentUploader, type UploadFile } from '@/components/ai/document-uploader'
import { AISearchBox } from '@/components/ai/ai-search-box'
import { Button } from '@/components/ui/button'
const sources: Source[] = [
{ title: 'Stripe Billing — metered pricing v2', url: 'https://stripe.com/docs/billing/subscriptions/metered', type: 'web', snippet: 'Report usage via the usage_records endpoint; Stripe aggregates per period.' },
{ title: 'AWS RDS pgvector benchmark (2026)', url: 'https://aws.amazon.com/blogs/database/pgvector-benchmarks', type: 'document', snippet: 'pgvector 0.7 handles 10M rows with HNSW in under 25ms p95.' },
]
const drawerSources: DrawerSource[] = sources.map((s, i) => ({ id: String(i + 1), title: s.title, url: s.url, type: s.type === 'web' ? 'web' : 'pdf', snippet: s.snippet, relevance: 0.9 - i * 0.1 }))
export default function ResearchPage() {
const [q, setQ] = useState('')
const [drawerOpen, setDrawerOpen] = useState(false)
const [uploads, setUploads] = useState<UploadFile[]>([])
const addFiles = (files: File[]) =>
setUploads((xs) => [...xs, ...files.map((f) => ({ id: crypto.randomUUID(), name: f.name, size: f.size, type: f.type, status: 'ready' as const }))])
return (
<div className="mx-auto grid max-w-5xl gap-6 p-8 md:grid-cols-[1fr_280px]">
<section className="space-y-5">
<AISearchBox value={q} onChange={setQ} onSearch={() => {/* fire retrieval + stream */}} placeholder="Ask anything about your docs…" />
<MessageBubble role="assistant" name="Research agent">
Stripe meters usage via the <code>usage_records</code> endpoint
<InlineCitation index={1} source={sources[0]} />, and pgvector handles 10M rows at 25ms p95 on RDS
<InlineCitation index={2} source={sources[1]} />.
</MessageBubble>
<Button variant="secondary" onClick={() => setDrawerOpen(true)}>View {sources.length} sources</Button>
</section>
<aside>
<DocumentUploader onFilesAdded={addFiles} accept="application/pdf,.md,.txt" maxSize={10 * 1024 * 1024} files={uploads} onRemove={(id) => setUploads((xs) => xs.filter((u) => u.id !== id))} />
</aside>
<CitationDrawer open={drawerOpen} onOpenChange={setDrawerOpen} sources={drawerSources} />
</div>
)
}
Where your retrieval layer plugs in
- onSearch — call your embeddings + vector store, then set messages state from the streamed answer.
- Annotate citations server-side: emit a message shape like { text, citations: [{ index, sourceId }] } and map them to <InlineCitation> at render time.
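The server-side annotation step above implies a small resolution pass on the client: turn { index, sourceId } pairs into full source objects, ordered by their [n] index so citation [1] lines up with the first <InlineCitation index={1}>. A sketch, assuming the message shape shown above and a flat source list from your retrieval layer:

```typescript
// Shapes assumed from the annotation format above — adapt to your own API.
type Citation = { index: number; sourceId: string }
type AnnotatedMessage = { text: string; citations: Citation[] }
type SourceRecord = { id: string; title: string; url?: string }

// Resolve citation markers to full sources, ordered by [n] index.
// Unknown sourceIds are dropped rather than rendered as dead citations.
function resolveCitations(msg: AnnotatedMessage, sources: SourceRecord[]): SourceRecord[] {
  const byId = new Map(sources.map((s) => [s.id, s]))
  return [...msg.citations]
    .sort((a, b) => a.index - b.index)
    .map((c) => byId.get(c.sourceId))
    .filter((s): s is SourceRecord => s !== undefined)
}
```

Render the result with a map: the array position + 1 is the superscript number.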
Next steps
- Fan the drawer's sources list off a live retrieval result so it reflects what the model actually used.
- Add <SourceCitation> full cards at the bottom of each assistant turn for print-friendly export.
3 — Agent playground with trace + cost (20 min)
What you're building: an internal playground for tuning a prompt. Left: system-prompt editor. Center: the trace of every run. Right: cost, model, and temperature dials. This is the UI you wish every AI lab tool had.
Components used
- SystemPromptEditor — mono editor with {{variable}} highlighting + token count.
- ModelSelector, TemperatureSlider — param dials.
- AgentTraceViewer — multi-step reasoning trace, collapsible tool calls.
- CostTracker — live input/output token cost with an optional budget bar.
- ChatInput — prompt box to kick off a run.
// app/playground/page.tsx
'use client'
import { useState } from 'react'
import { SystemPromptEditor } from '@/components/ai/system-prompt-editor'
import { ModelSelector, defaultModels, type Model } from '@/components/ai/model-selector'
import { TemperatureSlider } from '@/components/ai/temperature-slider'
import { AgentTraceViewer, type TraceStep } from '@/components/ai/agent-trace-viewer'
import { CostTracker } from '@/components/ai/cost-tracker'
import { ChatInput } from '@/components/ai/chat-input'
const defaultPrompt = 'You are a travel agent. Always confirm dates before searching flights. User: {{user_name}}.'
const sampleTrace: TraceStep[] = [
{ kind: 'thought', label: 'Plan', detail: 'User asked for NYC→Tokyo under $1k. Need to confirm dates first.' },
{ kind: 'tool', label: 'search_flights', tool: { name: 'search_flights', args: { from: 'JFK', to: 'HND', maxUsd: 1000, depart: '2026-05-04' }, result: '[{"airline":"United","price":745,"stops":1}]', durationMs: 312 } },
{ kind: 'final', label: 'Responded with 3 flights, cheapest United at $745.' },
]
export default function PlaygroundPage() {
const [prompt, setPrompt] = useState(defaultPrompt)
const [model, setModel] = useState<Model>(defaultModels[0])
const [temp, setTemp] = useState(0.7)
const [steps, setSteps] = useState<TraceStep[]>(sampleTrace)
const [tokens, setTokens] = useState({ input: 1240, output: 880 })
const run = async (user: string) => {
setSteps([])
// POST { prompt, model, temp, user } to your runner; stream steps as they arrive.
setSteps(sampleTrace)
// Functional update — reading `tokens` from the closure would go stale across runs.
setTokens((t) => ({ input: t.input + user.length, output: t.output + 200 }))
}
return (
<div className="grid h-dvh grid-cols-[360px_1fr_320px] gap-4 p-4 bg-[var(--color-bg)]">
<aside className="flex flex-col gap-3">
<SystemPromptEditor value={prompt} onChange={setPrompt} initialValue={defaultPrompt} maxTokens={2048} onReset={() => setPrompt(defaultPrompt)} />
</aside>
<main className="flex min-h-0 flex-col gap-3">
<AgentTraceViewer steps={steps} className="flex-1" animate stepDelayMs={400} />
<ChatInput onSend={run} placeholder="Try: 'Find NYC to Tokyo under $1k'" />
</main>
<aside className="flex flex-col gap-3">
<ModelSelector value={model} onChange={setModel} />
<TemperatureSlider value={temp} onChange={setTemp} marks={[{ value: 0, label: 'Precise' }, { value: 1, label: 'Balanced' }, { value: 2, label: 'Creative' }]} />
<CostTracker inputTokens={tokens.input} outputTokens={tokens.output} inputRate={0.15} outputRate={0.6} budget={5} modelLabel={model.name} />
</aside>
</div>
)
}
Next steps
- Persist { prompt, model, temp } as a "preset" and render a list you can diff with <DiffViewer>.
- Stream trace steps from the server using the AI SDK's streamText step callbacks — setSteps((xs) => [...xs, newStep]).
- Gate destructive tools behind <ToolPermissionGate> before they execute.
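The CostTracker in this recipe takes inputRate={0.15} and outputRate={0.6}, which read as dollars per million tokens (an assumption — confirm against the component reference for your version). The arithmetic the component displays is just:

```typescript
// Estimated spend in dollars, assuming rates are $ per 1M tokens.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputRate: number,
  outputRate: number,
): number {
  return (inputTokens / 1_000_000) * inputRate + (outputTokens / 1_000_000) * outputRate
}

// Fraction of budget consumed, clamped to [0, 1] for a progress bar.
function budgetUsed(cost: number, budget: number): number {
  return Math.min(1, Math.max(0, cost / budget))
}
```

Useful when you want to mirror the same numbers server-side, e.g. to hard-stop runs before they blow the budget rather than just displaying the bar.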
4 — Multi-agent orchestration view (15 min)
What you're building: a dashboard showing three agents working in parallel — Planner, Researcher, Writer — each with its own status and a tail of its most recent output. Below: a unified timeline of every event they emit, and a task list of what they're collectively trying to accomplish.
Components used
- MultiAgentView — the side-by-side agent cards with per-agent status + output tail.
- AgentTimeline — vertical chronological stream across all agents.
- TaskList — the shared backlog of what's being worked on.
// app/crew/page.tsx
'use client'
import { useEffect, useState } from 'react'
import { MultiAgentView, type MultiAgent } from '@/components/ai/multi-agent-view'
import { AgentTimeline, type TimelineEvent } from '@/components/ai/agent-timeline'
import { TaskList, type Task } from '@/components/ai/task-list'
const initial: MultiAgent[] = [
{ id: 'planner', name: 'Planner', role: 'Orchestrator', status: 'done', currentAction: 'Plan ready', output: 'Decomposed into 3 subtasks.\nAssigned to researcher + writer.' },
{ id: 'researcher', name: 'Researcher', role: 'Web + docs', status: 'tool', currentAction: 'fetch(stripe.com/docs/billing)', output: 'Fetched 4 pages.\nExtracted 12 quotes.\nRanking by relevance…' },
{ id: 'writer', name: 'Writer', role: 'Synthesis', status: 'idle', currentAction: 'waiting for researcher' },
]
const events: TimelineEvent[] = [
{ id: 'e1', timestamp: '2026-04-20T14:30:00Z', type: 'system', label: 'Crew started', agentId: 'planner' },
{ id: 'e2', timestamp: '2026-04-20T14:30:02Z', type: 'thought', label: 'Plan: research → draft → review', agentId: 'planner' },
{ id: 'e3', timestamp: '2026-04-20T14:30:05Z', type: 'tool', label: 'fetch(stripe.com)', durationMs: 412, agentId: 'researcher' },
{ id: 'e4', timestamp: '2026-04-20T14:30:07Z', type: 'message', label: 'Found 12 relevant quotes.', agentId: 'researcher' },
]
const tasks: Task[] = [
{ id: 't1', title: 'Decompose user request', status: 'done', assignedTo: 'planner', durationMs: 1800 },
{ id: 't2', title: 'Research metered billing patterns', status: 'running', assignedTo: 'researcher', children: [
{ id: 't2a', title: 'Pull Stripe docs', status: 'done' },
{ id: 't2b', title: 'Pull Lago comparison', status: 'running' },
]},
{ id: 't3', title: 'Draft blog post', status: 'pending', assignedTo: 'writer' },
]
export default function CrewPage() {
const [agents, setAgents] = useState(initial)
// Simulate 3 parallel streams; real code would map fetch() calls per agent.
useEffect(() => {
const id = setInterval(() => {
setAgents((xs) => xs.map((a) => a.id === 'writer' ? { ...a, status: 'thinking', currentAction: 'drafting intro' } : a))
}, 3500)
return () => clearInterval(id)
}, [])
return (
<div className="mx-auto max-w-6xl space-y-6 p-8">
<MultiAgentView agents={agents} layout="grid" onAgentClick={(id) => console.log('inspect', id)} />
<div className="grid gap-6 md:grid-cols-[1fr_360px]">
<AgentTimeline events={events} groupByAgent maxHeight={420} />
<TaskList tasks={tasks} onRetry={(id) => console.log('retry', id)} />
</div>
</div>
)
}
Next steps
- Wire each agent to its own EventSource or WebSocket; push to setAgents and setEvents on every chunk.
- When an agent finishes, flip its card to status: 'done' and render a "Handoff" event in the timeline.
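With one stream per agent, AgentTimeline stays chronological only if you merge the buffers before rendering. A sketch of that merge, using the same event shape as the TimelineEvent sample above (de-duplication by id is an assumption worth keeping, since EventSource reconnects can redeliver):

```typescript
type Event = { id: string; timestamp: string; label: string; agentId: string }

// Merge per-agent event buffers into one chronological feed.
// Later deliveries of the same id win, which makes redelivery harmless.
function mergeTimelines(...streams: Event[][]): Event[] {
  const seen = new Map<string, Event>()
  for (const stream of streams) {
    for (const e of stream) seen.set(e.id, e)
  }
  // ISO-8601 timestamps sort correctly as strings.
  return [...seen.values()].sort((a, b) => a.timestamp.localeCompare(b.timestamp))
}
```

Call it inside the setEvents updater whenever any stream yields a chunk.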
5 — Tool-calling demo for marketing (10 min)
What you're building: a landing-page section that auto-plays one end-to-end tool call so visitors see what "function calling" actually means. Two variants: a cheap weather lookup and a meatier flight search.
Components used
- ToolCallPanel — the 4-phase (prompt → running → result → synthesis) panel; pass autoPlay and it animates itself.
// components/sections/tool-call-demo.tsx
'use client'
import { ToolCallPanel } from '@/components/ai/tool-call-panel'
const weather = {
prompt: 'What\'s the weather in Tokyo right now?',
call: {
name: 'get_weather',
args: { location: 'Tokyo, Japan', units: 'celsius' },
result: { temp: 18, condition: 'partly_cloudy', wind: '9 km/h', humidity: 62 },
durationMs: 142,
},
final: 'It\'s 18°C and partly cloudy in Tokyo, with a light breeze.',
}
const flights = {
prompt: 'Find me a flight from NYC to Tokyo next month under $1000.',
call: {
name: 'search_flights',
args: { from: 'JFK', to: 'HND', depart: '2026-05-04', maxUsd: 1000 },
result: [
{ airline: 'United', price: 745, stops: 1, duration: '14h 20m' },
{ airline: 'ANA', price: 812, stops: 0, duration: '13h 45m' },
{ airline: 'Delta', price: 889, stops: 1, duration: '15h 10m' },
],
durationMs: 523,
},
final: 'Found 3 under $1000. Cheapest: United at $745 with 1 stop. ANA is nonstop for $812 if you\'d rather skip the layover.',
}
export function ToolCallDemo() {
return (
<section className="mx-auto grid max-w-6xl gap-6 px-6 py-16 md:grid-cols-2">
<ToolCallPanel prompt={weather.prompt} call={weather.call} finalMessage={weather.final} autoPlay phaseDurations={[700, 900, 1000, 1400]} />
<ToolCallPanel
prompt={flights.prompt}
call={flights.call}
finalMessage={flights.final}
autoPlay
phaseDurations={[900, 1200, 1400, 1600]}
renderResult={(rows) => (
<ul className="space-y-1 text-[13px]">
{(rows as typeof flights.call.result).map((r) => (
<li key={r.airline} className="flex justify-between font-mono text-[var(--color-text)]">
<span>{r.airline} · {r.stops === 0 ? 'nonstop' : `${r.stops} stop`}</span>
<span className="tabular-nums text-[var(--color-brand)]">${r.price}</span>
</li>
))}
</ul>
)}
/>
</section>
)
}
Next steps
- Drop the weather variant into your hero; it loops forever with zero maintenance.
- For a screenshot-friendly frozen state, pass phase={4} instead of autoPlay.
- Swap renderResult for <ToolResultTable> or <ToolResultChart> when your result is row-shaped.
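The panel only replays canned data, but if you later back the demo with a live endpoint, the server side of a search_flights tool is essentially a filter + sort over fetched inventory. A sketch under that assumption (this is not an AgentKit or airline API, just the shape the demo result implies):

```typescript
type Flight = { airline: string; price: number; stops: number; duration: string }

// Keep results under the price cap and surface the cheapest first,
// matching the ordering shown in the canned `flights.call.result`.
function filterFlights(flights: Flight[], maxUsd: number): Flight[] {
  return flights
    .filter((f) => f.price <= maxUsd)
    .sort((a, b) => a.price - b.price)
}
```

The model then narrates the top rows, which is exactly what the canned finalMessage does.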
6 — Reasoning tree for research assistant (15 min)
What you're building: a research tool where the agent explores several branches of reasoning, scores them, and commits to one. Clicking a node drills into the thought plus the evidence that supports it.
Components used
- ReasoningTree — branching tree with scored, status-colored nodes.
- ThoughtBubble — the expanded chain-of-thought for the selected node.
- SourceCitation — evidence cards backing the selected branch.
// app/explore/page.tsx
'use client'
import { useState } from 'react'
import { ReasoningTree, type ReasoningNode } from '@/components/ai/reasoning-tree'
import { ThoughtBubble } from '@/components/ai/thought-bubble'
import { SourceCitation, type Source } from '@/components/ai/source-citation'
const tree: ReasoningNode = {
id: 'root',
label: 'How do we reduce churn in the Pro tier?',
children: [
{ id: 'onboard', label: 'Improve first-run onboarding', score: 0.78, status: 'selected', children: [
{ id: 'onboard-a', label: 'Add guided checklist', score: 0.81, status: 'selected' },
{ id: 'onboard-b', label: 'Trigger email at day 3', score: 0.62, status: 'explored' },
]},
{ id: 'price', label: 'Switch to usage-based pricing', score: 0.54, status: 'explored', children: [
{ id: 'price-a', label: 'Charge per seat', score: 0.41, status: 'pruned' },
{ id: 'price-b', label: 'Charge per workflow run', score: 0.68, status: 'pending' },
]},
{ id: 'outreach', label: 'Cold-email dormant accounts', score: 0.22, status: 'pruned' },
],
}
const thoughts: Record<string, { body: string; evidence: Source[] }> = {
onboard: {
body: 'First-run onboarding is the highest-leverage lever: 63% of Pro churn happens within the first 14 days. A guided checklist converts better than passive tooltips.',
evidence: [
{ title: 'Amplitude retention benchmark (SaaS, 2026)', url: 'https://amplitude.com/blog/saas-retention-2026', type: 'web', snippet: 'Teams that complete an onboarding checklist retain 2.4× better at day 30.', relevance: 0.92 },
{ title: 'Internal: Pro churn cohort (Q1 2026)', type: 'document', snippet: '63% of Pro churn is in the first 14 days; 71% never completed workspace setup.', relevance: 0.88 },
],
},
'onboard-a': { body: 'A checklist beats email-only because it collapses TTV and makes progress visible.', evidence: [] },
}
export default function ExplorePage() {
const [selectedId, setSelectedId] = useState('onboard')
const current = thoughts[selectedId]
return (
<div className="mx-auto grid max-w-6xl gap-6 p-8 md:grid-cols-[1fr_360px]">
<ReasoningTree root={tree} selectedPath={[selectedId]} onNodeClick={setSelectedId} orientation="vertical" />
<aside className="space-y-3">
{current && (
<>
<ThoughtBubble role="deciding" durationMs={1900}>{current.body}</ThoughtBubble>
{current.evidence.map((s, i) => (
<SourceCitation key={s.title} index={i + 1} source={s} highlighted={i === 0} />
))}
</>
)}
</aside>
</div>
)
}
Modeling tips
- Keep status in sync with what the agent actually did: explored for every visited node, pruned for rejected, selected for the committed path, pending for planned-but-not-yet-evaluated.
- selectedPath is an array so the whole ancestor chain highlights — pass ['root', 'onboard', 'onboard-a'], not just the leaf.
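The demo above passes only the leaf (selectedPath={[selectedId]}), so only that node highlights. A depth-first walk recovers the full ancestor chain from the same tree literal; the optional children field matches the ReasoningNode shape used in the example:

```typescript
type TreeNode = { id: string; children?: TreeNode[] }

// Depth-first walk returning the root-to-target id chain, or null if absent.
function pathTo(node: TreeNode, id: string, trail: string[] = []): string[] | null {
  const next = [...trail, node.id]
  if (node.id === id) return next
  for (const child of node.children ?? []) {
    const hit = pathTo(child, id, next)
    if (hit) return hit
  }
  return null
}
```

In the page, swap the prop to selectedPath={pathTo(tree, selectedId) ?? [selectedId]} so the whole branch lights up.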
Next steps
- Feed tree from a streaming backend; call setSelectedId as the agent commits so the UI follows the agent's own selection.
- Swap orientation="horizontal" for a presentation-grade mind-map screenshot.
- Pipe the evidence list through <CitationDrawer> when it exceeds 3 cards.
Where to go next
- Customizing the look — CUSTOMIZE.md covers design tokens and theme switching.
- Shipping your first page — GETTING-STARTED.md has the 15-minute path from zip to live domain.
- Component reference — every component has a /components/<slug> page with live previews.
These six recipes cover ~80% of what buyers ship on day one. If your surface doesn't fit any of them, pick the closest one, delete what you don't need, and keep going — every component in the library is a single file with no cross-imports, so there's nothing stopping you.