StreamingMarkdown

Progressive markdown renderer tuned for LLM output. Unterminated bold (**) or italic (*) markers render as plain text instead of flickering, so mid-stream tokens never break the layout. Supports headings, lists, blockquotes, inline and fenced code, and links, with no external markdown dependency.

Source: src/components/ai/streaming-markdown.tsx
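The fallback rule for unterminated markers can be sketched as a pure function. This is a hypothetical simplification, not the component's actual implementation: an inline marker only becomes bold or italic once its closing marker has streamed in; an unmatched marker passes through as literal text.

```typescript
// Hypothetical sketch of the fallback rule: format only complete marker
// pairs, leaving any unterminated marker as plain text.
function renderInline(text: string): string {
  return text
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>') // closed **bold** pairs
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')             // closed *italic* pairs
  // A trailing "**bol" matches neither pattern, so it survives verbatim.
}
```

Because the unmatched marker is left untouched, a half-streamed token renders as ordinary text on one frame and upgrades to formatted text on the frame where its closing marker arrives.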

Completed response

A settled markdown body with headings, lists, inline code, a blockquote, and a link.

Summary

The agent searched 3 sources and synthesized the results below.

  • Semantic chunking outperforms fixed-size for long docs
  • Reranking adds ~40ms but cuts hallucination in half
  • Use topK=8 for most RAG pipelines
Citations are tracked inline and surface in the sidebar.

See the full guide.

With a fenced code block

Fenced code renders through the same CodeBlock component used elsewhere in AgentKit.

Example: streaming a chat turn

Use the AI SDK's useChat hook:

```ts
const { messages } = useChat({ api: '/api/chat' })
return messages.map((m) => <Bubble key={m.id} message={m} />)
```

Each token arrives over SSE.
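On the wire, each SSE frame carries one token in a `data:` field, and the client folds those tokens into the growing string the renderer re-parses. A minimal sketch, assuming plain-text `data:` frames (the AI SDK's actual wire format differs):

```typescript
// Hypothetical sketch: parse raw SSE frames ("data: <token>\n\n") and
// concatenate the tokens into one growing markdown string.
function foldSse(raw: string): string {
  let text = ''
  for (const line of raw.split('\n')) {
    if (line.startsWith('data: ')) text += line.slice(6)
  }
  return text
}
```

Each appended token triggers a re-render, so the markdown is re-parsed from the partial string rather than patched incrementally.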

Mid-stream cursor

Setting showCursor={true} appends a blinking caret to the last block so users can tell tokens are still arriving.
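The cursor behavior amounts to appending a caret glyph to the final rendered block while streaming. A hypothetical sketch (block representation and glyph are assumptions, not the component's internals):

```typescript
// Hypothetical sketch: append a caret glyph to the last block while
// tokens are still arriving; return blocks unchanged once settled.
function withCursor(blocks: string[], streaming: boolean): string[] {
  if (!streaming || blocks.length === 0) return blocks
  const last = blocks[blocks.length - 1]
  return [...blocks.slice(0, -1), last + '▍']
}
```

Keeping the caret on the last block (rather than a separate trailing element) avoids layout shift when the next token extends that block.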

Plan

  1. Check calendar for conflicts
  2. Draft an agenda
  3. Send invites to the attend