ReasoningBlock

Collapsible 'thinking' panel for extended-reasoning models (Claude extended thinking, OpenAI o-series, DeepSeek R1). Auto-scrolls while streaming, shows token count + duration, and tucks itself away when reasoning finishes.
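The header's "token count + duration" readout could be produced by a small formatting helper along these lines. This is a sketch only: the function name `formatReasoningMeta` and the exact output format are assumptions, not the component's actual API.

```typescript
// Hypothetical helper (assumed, for illustration): format the metadata
// shown in the panel header, e.g. "1,204 tokens · 3.2s".
function formatReasoningMeta(tokens: number, elapsedMs: number): string {
  const seconds = elapsedMs / 1000;
  // Short traces read best in seconds; fall back to minutes past 60 s.
  const duration =
    seconds < 60
      ? `${seconds.toFixed(1)}s`
      : `${Math.floor(seconds / 60)}m ${Math.round(seconds % 60)}s`;
  return `${tokens.toLocaleString("en-US")} tokens · ${duration}`;
}

console.log(formatReasoningMeta(1204, 3200)); // "1,204 tokens · 3.2s"
```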

Source: src/components/ai/reasoning-block.tsx

Live streaming

The header pulses while the model is still emitting reasoning tokens, and the content auto-scrolls to follow the stream. When the stream finishes, the panel auto-collapses 1.5s later.
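The lifecycle above can be sketched as a small pure reducer. Everything here (event names, state shape, `reduce`) is an assumption for illustration, not the component's real implementation; it only captures the described behavior: stay open while streaming, start a 1.5s timer on stream end, and always let a manual toggle win.

```typescript
// Assumed event/state model, not the component's actual API.
type PanelEvent = "token" | "stream_end" | "collapse_timer" | "user_toggle";
interface PanelState { open: boolean; streaming: boolean }

const AUTO_COLLAPSE_MS = 1500; // per the docs: collapse 1.5 s after the stream ends

function reduce(state: PanelState, event: PanelEvent): PanelState {
  switch (event) {
    case "token": // streaming: keep the panel open and following the stream
      return { open: true, streaming: true };
    case "stream_end": // reasoning finished; the collapse timer is now pending
      return { ...state, streaming: false };
    case "collapse_timer": // timer fired: tuck the panel away (unless still streaming)
      return state.streaming ? state : { ...state, open: false };
    case "user_toggle": // manual expand/collapse always wins
      return { ...state, open: !state.open };
  }
}

// Walk the streaming lifecycle end to end.
let s: PanelState = { open: false, streaming: false };
s = reduce(s, "token");          // panel opens while tokens stream
s = reduce(s, "stream_end");     // stream done, AUTO_COLLAPSE_MS timer pending
s = reduce(s, "collapse_timer"); // 1.5 s later: collapsed
console.log(s); // { open: false, streaming: false }
```

In the real component the timer would be wired up with `setTimeout(..., AUTO_COLLAPSE_MS)` in an effect; the reducer just makes the state transitions explicit.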

Finished reasoning, default-collapsed

When the model emits a long reasoning trace before its final answer, render it collapsed by default — the user clicks to expand if they want to audit it.

Default-expanded

Pass defaultOpen if you want the panel always visible on render — useful for review surfaces (eval dashboards, prompt-debugging tools).
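How the initial open state might be resolved from props, as a sketch. `defaultOpen` comes from the docs above; the prop name `isStreaming` and the helper `resolveInitialOpen` are assumptions for illustration.

```typescript
// defaultOpen is the documented prop; isStreaming and the helper name
// are assumptions for this sketch.
interface ReasoningBlockProps {
  defaultOpen?: boolean; // force the panel visible on first render
  isStreaming?: boolean; // live streams render open so the user can follow
}

function resolveInitialOpen(props: ReasoningBlockProps): boolean {
  // An explicit defaultOpen wins; otherwise open only while streaming.
  return props.defaultOpen ?? props.isStreaming ?? false;
}

console.log(resolveInitialOpen({ defaultOpen: true })); // true — review surfaces
console.log(resolveInitialOpen({ isStreaming: true })); // true — live stream
console.log(resolveInitialOpen({}));                    // false — finished trace
```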

The user is asking me to compare pgvector and Pinecone for their AI startup. Let me think about what they actually need to know:

1. They're a small team (4 engineers) running on Postgres already.
2. They want a recommendation, not a feature matrix.

The deciding factor for teams of this size is operational complexity. pgvector keeps embeddings inside the same database they already operate; Pinecone is a separate data plane with its own credentials, monitoring, and failure modes. For their stated scale (~5M vectors), pgvector is the safer default. Pinecone becomes worth the trade once they pass roughly 10M vectors and start hitting HNSW memory limits on their RDS instance.

Final answer: recommend pgvector, name Pinecone as the migration target if their volume crosses the threshold.