Chat & AI API¶
Endpoints for talking to the AI agent. Source: `services/api/src/routes/chat/` and `services/api/src/routes/plan.ts`.
List a project's chats¶
```
GET /projects/:projectId/chats
```

Returns chat metadata (title, mode, created/updated timestamps). Every project has at least one default chat.
Create a chat¶
```
POST /projects/:projectId/chats
```

```
{ "mode": "build" | "plan" | "chat" | "<custom>", "title": "Optional" }
```
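As a sketch of how a client might call this endpoint, the helper below builds the request for `fetch`. The endpoint path and body shape come from the docs above; the helper itself is illustrative, not part of the API:

```typescript
// Illustrative helper: build the create-chat POST request.
interface CreateChatBody {
  mode: "build" | "plan" | "chat" | (string & {}); // custom modes are plain strings
  title?: string;
}

function createChatRequest(projectId: string, body: CreateChatBody) {
  return {
    url: `/projects/${projectId}/chats`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Usage: const res = await fetch(req.url, req.init);
const req = createChatRequest("p_123", { mode: "plan", title: "Refactor hero" });
```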
Send a message (streaming)¶
```
POST /chat/:projectId/messages
Content-Type: application/json
Accept: text/event-stream
```

```
{
  "chatId": "...",
  "text": "Add a contact form below the hero",
  "model": "claude-sonnet-4",   // optional override
  "attachments": [              // optional
    { "type": "image", "url": "data:image/png;base64,..." }
  ]
}
```
The response is a Server-Sent Events (SSE) stream. Each `data:` line is a JSON-encoded `DoCoreEvent`. The important kinds:
| `type` | Payload |
|---|---|
| `assistant.message_delta` | `{ delta: "..." }` — text chunk |
| `assistant.reasoning_delta` | `{ delta: "..." }` — model reasoning chunk (thinking models) |
| `tool.call` | `{ id, tool, args }` — agent invoking a tool |
| `tool.result` | `{ id, result }` — tool output |
| `tool.error` | `{ id, error }` — denied or failed tool |
| `assistant.message` | `{ text }` — final assistant message |
| `session.usage_info` | `{ inputTokens, outputTokens, cost }` |
| `session.compaction_start` / `session.compaction_complete` | context auto-summarization |
| `session.task_complete` | stream done |
Closing the connection cancels the in-flight model call.
The full event union is defined in `packages/docore/src/events.ts`.
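A client can consume the stream by splitting the response text on newlines and JSON-parsing each `data:` line. The parser below is a minimal illustrative sketch, assuming each event arrives as a single `data: <json>` line; the real event types live in `packages/docore/src/events.ts`:

```typescript
// Illustrative SSE parser: extract JSON events from a chunk of stream text.
type DoCoreEvent = { type: string; [key: string]: unknown };

function parseSseChunk(chunk: string): DoCoreEvent[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice("data:".length).trim()) as DoCoreEvent);
}

// Example: accumulate assistant text from message deltas.
const events = parseSseChunk(
  'data: {"type":"assistant.message_delta","delta":"Hel"}\n' +
    'data: {"type":"assistant.message_delta","delta":"lo"}\n' +
    'data: {"type":"session.task_complete"}\n'
);
const text = events
  .filter((e) => e.type === "assistant.message_delta")
  .map((e) => e.delta as string)
  .join("");
// text === "Hello"
```

Since closing the connection cancels the in-flight model call (see above), clients typically pass an `AbortSignal` to `fetch` and call `abort()` to stop generation early.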
Stop generation¶
Retrieve history¶
Returns the persisted message timeline (the same events that were streamed).
Plan mode¶
Plan mode behaves identically, except that the agent is restricted to the read-only tool subset and the final message is parsed into a structured plan.
Once the stream ends, fetch the structured plan:
Returns:
```ts
{
  steps: Array<{
    title: string;
    description: string;
    files: string[];
    risk: 'low' | 'medium' | 'high';
  }>;
  rawText: string;
}
```
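Given that response shape, a client might want to surface risky steps before the plan is run. The types below mirror the documented response; the helper itself is an illustrative sketch:

```typescript
// Types mirror the documented plan response shape.
interface PlanStep {
  title: string;
  description: string;
  files: string[];
  risk: "low" | "medium" | "high";
}
interface PlanResponse {
  steps: PlanStep[];
  rawText: string;
}

// Illustrative helper: collect the unique files touched by high-risk steps.
function highRiskFiles(plan: PlanResponse): string[] {
  return [
    ...new Set(
      plan.steps.filter((s) => s.risk === "high").flatMap((s) => s.files)
    ),
  ];
}

const plan: PlanResponse = {
  steps: [
    { title: "Add form", description: "...", files: ["src/Contact.tsx"], risk: "low" },
    { title: "Migrate router", description: "...", files: ["src/App.tsx"], risk: "high" },
  ],
  rawText: "...",
};
// highRiskFiles(plan) → ["src/App.tsx"]
```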
When the user hits "Run plan", the API switches the chat to Build mode and feeds the plan back as the next message.
Direct save (visual edit)¶
The visual editor sends AST-based patches directly without spending AI tokens:
```
POST /direct-save/:projectId
```

```json
{
  "filePath": "src/Hero.tsx",
  "patches": [{ "selector": "h1", "property": "text", "value": "New title" }]
}
```
See `services/api/src/direct-save/` and `services/api/src/visual-edit-bridge-inline.ts`.
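A client-side sketch of building that payload. The `Patch` interface below is an assumption inferred from the example request, not the API's actual type:

```typescript
// Assumed patch shape, inferred from the example direct-save request above.
interface Patch {
  selector: string;
  property: string;
  value: string;
}

// Illustrative helper: serialize a direct-save request body.
function directSaveBody(filePath: string, patches: Patch[]): string {
  return JSON.stringify({ filePath, patches });
}

// Usage: fetch(`/direct-save/${projectId}`, { method: "POST",
//   headers: { "Content-Type": "application/json" }, body })
const body = directSaveBody("src/Hero.tsx", [
  { selector: "h1", property: "text", value: "New title" },
]);
```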
Errors¶
```json
{ "error": "no_provider", "message": "No AI provider configured for this workspace" }
{ "error": "credit_exhausted", "message": "Workspace credit limit reached" }
{ "error": "model_unavailable", "message": "anthropic returned 429" }
```
For streaming endpoints, these arrive as a final SSE event.
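For example, a credit-exhaustion failure might appear on the stream like this (the exact event shape is an assumption; check the event union in `packages/docore/src/events.ts` for the real definition):

```
data: {"error":"credit_exhausted","message":"Workspace credit limit reached"}
```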
See also¶
- AI Overview.
- AI Providers.
- WebSocket API — file events flowing back to the editor.