# StreamingCode

A code block optimized for progressive streaming: it auto-scrolls to the bottom on every update, shows a 'streaming' pill while output is in flight, and disables the copy button until generation settles. It uses the same tiny regex highlighter as CodeBlock; pair it with Shiki if you need real tokenization.
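The "tiny regex highlighter" mentioned above can be sketched roughly as follows. The token set and patterns here are illustrative assumptions, not the component's actual table:

```typescript
// Minimal regex tokenizer sketch: tries each anchored pattern at the head of
// the remaining input; unmatched characters fall through as merged plain runs.
type Token = { type: 'keyword' | 'string' | 'number' | 'plain'; value: string }

const PATTERNS: Array<[Token['type'], RegExp]> = [
  ['string', /^(["'`])(?:\\.|(?!\1).)*\1/],
  ['keyword', /^\b(?:const|let|var|function|return|async|await|for|if|else)\b/],
  ['number', /^\d+(?:\.\d+)?/],
]

function tokenize(code: string): Token[] {
  const tokens: Token[] = []
  let rest = code
  while (rest.length > 0) {
    let matched = false
    for (const [type, pattern] of PATTERNS) {
      const m = rest.match(pattern)
      if (m) {
        tokens.push({ type, value: m[0] })
        rest = rest.slice(m[0].length)
        matched = true
        break
      }
    }
    if (!matched) {
      // Consume one character as plain text, merging with a previous plain run.
      const last = tokens[tokens.length - 1]
      if (last && last.type === 'plain') last.value += rest.charAt(0)
      else tokens.push({ type: 'plain', value: rest.charAt(0) })
      rest = rest.slice(1)
    }
  }
  return tokens
}
```

A tokenizer like this is fast enough to re-run on every streamed chunk, which is the trade-off the component makes over a full grammar-based highlighter.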
## Source

`src/components/ai/streaming-code.tsx`

## Settled TypeScript

`showCursor={false}` hides the streaming pill and re-enables the copy button. The filename and language label render in the header.
`app/api/chat/route.ts`

```typescript
import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  })
  return result.toDataStreamResponse()
}
```
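The route above responds with the AI SDK's data stream. Purely as an illustration, here is a sketch of pulling the text deltas out of a buffered copy of that stream, assuming the protocol's line format of `0:"<json string>"` for text parts; real clients should use the SDK's client helpers (e.g. `useChat`) rather than hand-parsing:

```typescript
// Sketch only: extract text deltas from an AI SDK data stream buffer.
// Lines prefixed "0:" carry JSON-encoded text chunks; other prefixes
// (tool calls, finish metadata, etc.) are ignored here.
function extractTextDeltas(stream: string): string {
  return stream
    .split('\n')
    .filter((line) => line.startsWith('0:'))
    .map((line) => JSON.parse(line.slice(2)) as string)
    .join('')
}
```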
## With line numbers

`showLineNumbers` adds a left gutter, handy for docs that reference specific lines.
`rerank.py`

```python
def rerank(query: str, candidates: list[str]) -> list[str]:
    # Score each candidate with a cross-encoder.
    scored = [(c, cross_encode(query, c)) for c in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [c for c, _ in scored[:8]]
```
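Conceptually, the gutter is derived from the line count, something like the sketch below. This is an illustration only; the real component renders numbers as a separate column rather than prefixing the text:

```typescript
// Prefix each line with a right-aligned number; the gutter width is the
// number of digits in the total line count, so 3-digit files stay aligned.
function withLineNumbers(code: string): string {
  const lines = code.split('\n')
  const width = String(lines.length).length
  return lines
    .map((line, i) => `${String(i + 1).padStart(width, ' ')}  ${line}`)
    .join('\n')
}
```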
## Mid-stream with maxHeight

While `showCursor` is true, the 'streaming' pill and blinking caret render and the copy button is disabled. `maxHeight` keeps long output inside a scroll pane that auto-pins to the bottom.
`agent.ts` (header shows the filename, language label, and streaming pill)

```typescript
async function* tokens() {
  const stream = await model.stream({ prompt })
  for await (const chunk of stream) {
    yield chunk.text
  }
}

for await (const t of tokens()) {
  accumulated +=
```

The snippet is intentionally cut off mid-token to simulate in-flight output.
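One common way to implement the auto-pin behavior is to re-scroll only while the reader is already at (or near) the bottom, so scrolling back up to read earlier output is not fought by incoming tokens. This helper is a sketch; the 16px threshold and the names are assumptions, not the component's actual code:

```typescript
// Decide whether to keep the scroll pane pinned to the bottom on the next
// content update. Pins only when the viewport bottom is within `threshold`
// pixels of the content bottom.
function shouldPinToBottom(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold = 16,
): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= threshold
}

// In a layout effect, after each appended chunk:
//   if (shouldPinToBottom(el.scrollTop, el.clientHeight, el.scrollHeight)) {
//     el.scrollTop = el.scrollHeight
//   }
```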