AI Overview

The AI agent is the heart of Doable. This page tells you what it does, how it's wired, and where to look in the code.

What the agent can do

  • Read and edit files in the active project.
  • Run shell commands (within the policy allow-list).
  • Install packages via npm install / pnpm add.
  • Fetch URLs for docs, examples, or assets (within the URL allow-list).
  • Call MCP servers and integration tools (Notion, Slack, Linear, …).
  • Stream its plan, reasoning, and tool calls back to the chat panel in real time.
  • Use the project's session memory so multi-turn refactors remember context.

Components

| Component | What it does | Code |
| --- | --- | --- |
| Provider | A specific AI backend (Anthropic, OpenAI, Copilot SDK) | services/api/src/ai/providers/ |
| docore engine | Spawns provider sessions, normalizes events, enforces policy | packages/docore/ |
| Tool loader | Builds the tool list from defaults + integrations + workspace policy | services/api/src/ai/providers/copilot-tool-loader.ts |
| Modes | Build / Plan / Chat — different system prompts and tool sets | services/api/src/ai/modes/ |
| Streaming | Maps provider events → SSE for the chat UI | services/api/src/ai/streaming.ts, sse-mapper.ts |
| Yjs bridge | Pushes AI-driven file changes into the live collab room | services/api/src/ai/yjs-bridge.ts |
| Trace collector | Records every event for the project's history | services/api/src/ai/trace-*.ts |
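The streaming layer normalizes provider events into a typed DoCoreEvent before anything else consumes them. A minimal sketch of what that discriminated union could look like — only the name DoCoreEvent comes from the codebase; the variant shapes and the isToolEvent helper are illustrative assumptions:

```typescript
// Hypothetical sketch of the normalized event union. Only the name
// DoCoreEvent appears in the codebase; variant shapes are illustrative.
type DoCoreEvent =
  | { kind: "assistant.message_delta"; text: string }
  | { kind: "assistant.message"; text: string }
  | { kind: "tool.call"; tool: string; args: Record<string, unknown> }
  | { kind: "tool.result"; tool: string; ok: boolean; output: string };

// A type guard lets downstream consumers (SSE mapper, trace collector)
// narrow events without depending on provider internals.
function isToolEvent(
  e: DoCoreEvent
): e is Extract<DoCoreEvent, { kind: `tool.${string}` }> {
  return e.kind.startsWith("tool.");
}
```

The point of the union is the decoupling: the rest of the stack pattern-matches on kind instead of importing provider SDK types.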

Request flow (one chat message)

  1. Browser → POST /chat/:projectId/messages with the user's prompt and the active mode.
  2. API auth → resolves the user, workspace, project, and the workspace's AI settings.
  3. Engine resolver → picks a provider/model based on workspace settings (or a per-message override).
  4. Tool loader → assembles the tool list:
       • Built-in tools (read_file, write_file, shell, fetch, install_package).
       • Workspace-enabled MCP servers.
       • Integration tools the user connected (Notion docs, Slack messages, …).
  5. docore engine → opens or resumes a session for (workspace, project, user). Sessions are persisted under ~/.copilot/session-state/<sessionId>/.
  6. Provider → streams assistant.message_delta, tool.call, tool.result, assistant.message, … events.
  7. Sandbox → intercepts each tool call, checks policy, and runs it inside dovault if it touches the filesystem.
  8. Event mapper → normalizes provider events into typed DoCoreEvent objects.
  9. SSE mapper → forwards normalized events to the browser as data: {…} chunks.
  10. Trace collector → persists each event in the DB for the chat history view.
  11. Yjs bridge → broadcasts file writes into the project's collab room so other users see the change instantly.
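The SSE step above can be sketched in a few lines. This is an illustrative framing function, not the actual sse-mapper.ts — the real mapper may batch, filter, or add event IDs; the event shape here is assumed:

```typescript
// Illustrative sketch of SSE framing (the real sse-mapper.ts may differ).
// Each event becomes a `data: <json>` line followed by a blank line,
// which is the chunk boundary the browser's EventSource API expects.
function toSseChunk(event: { kind: string; [key: string]: unknown }): string {
  return `data: ${JSON.stringify(event)}\n\n`;
}
```

On the browser side, an EventSource (or a fetch-based reader) splits the stream on blank lines and JSON-parses each data: payload back into an event.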

Why a dedicated docore engine?

The Copilot SDK already gives us streaming and tool execution; everything docore adds on top serves the realities of multi-tenant SaaS:

  • One process pool sized to the host's RAM, not one provider process per request.
  • Per-user accounting (UserManager) so a single noisy workspace can't starve others.
  • A typed event bus so the rest of the stack doesn't depend on Copilot SDK internals.
  • Pluggable isolation backends (nsjail, systemd, Job Object).
  • Persisted policies that survive restarts.
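The first bullet can be made concrete with a small sizing function. This is a sketch under stated assumptions — the 512 MiB per-session budget, the 50% host reservation, and the function name are all illustrative, not values from docore:

```typescript
// Hypothetical sketch: size the provider process pool from host RAM
// instead of spawning one provider process per request.
// The 512 MiB-per-session budget is an illustrative assumption.
function poolSize(
  totalRamBytes: number,
  perSessionBytes = 512 * 1024 * 1024
): number {
  // Reserve half the host for the API, the DB, and the OS itself,
  // then floor to whole sessions; always allow at least one.
  const budget = totalRamBytes / 2;
  return Math.max(1, Math.floor(budget / perSessionBytes));
}
```

On an 8 GiB host this yields a pool of 8 sessions; the key property is that the pool is bounded by the machine, not by incoming request volume.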

See the @doable/docore reference.

Configuring AI per workspace

Most settings can be tuned per workspace from Workspace Settings → AI:

  • Default model (e.g. claude-3-5-sonnet, gpt-4o, …).
  • Provider preference order for fallback.
  • Tool/MCP allow-lists.
  • Credit limits per user / per session.
  • Mode definitions — custom prompts for additional modes beyond Build/Plan/Chat.

These are stored in workspaces.ai_settings (JSONB) and the mode_tool_config table.
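As a hedged illustration, a workspaces.ai_settings document could look like the following — the key names are guesses derived from the settings listed above, not the actual schema:

```json
{
  "default_model": "claude-3-5-sonnet",
  "provider_order": ["anthropic", "openai", "copilot"],
  "tool_allowlist": ["read_file", "write_file", "shell", "fetch", "install_package"],
  "mcp_allowlist": ["notion", "linear"],
  "credit_limits": { "per_user": 1000, "per_session": 100 }
}
```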

Costs & credits

If you enable Stripe, every AI call deducts credits from the workspace balance. Pricing logic lives in queries/credits.ts and routes/billing.ts. With Stripe disabled, credits are tracked but never enforced — useful for self-hosted internal use.
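The tracked-but-not-enforced distinction can be sketched as a pure function — the names here are illustrative, not the actual queries/credits.ts API:

```typescript
// Illustrative sketch: credits are always recorded, but only enforced
// when Stripe billing is enabled. Field and function names are assumptions.
interface WorkspaceCredits {
  balance: number;
  stripeEnabled: boolean;
}

function deductCredits(
  ws: WorkspaceCredits,
  cost: number
): { allowed: boolean; balance: number } {
  const balance = ws.balance - cost; // always tracked, even self-hosted
  const allowed = !ws.stripeEnabled || balance >= 0; // only enforced with Stripe
  return { allowed, balance };
}
```

With Stripe off, the balance can go negative while every call still succeeds, which matches the self-hosted behavior described above.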

Where to go next

  • Providers — set up Anthropic, OpenAI, or Copilot.
  • docore engine — internal API of the engine package.
  • Tools & MCP — extend what the agent can do.
  • Modes — Build / Plan / Chat and how to add new ones.