Configuring AI Providers

Doable supports three AI provider paths today. Pick at least one; the chat agent is dormant without one configured.

Per-workspace overrides are configured from the UI (Workspace Settings → AI → Providers). The variables below are the global defaults the API falls back to.

Anthropic (Claude)

Recommended for highest-quality code generation and longest context windows.

ANTHROPIC_API_KEY=sk-ant-...

Workspaces will see Claude models in the model picker (Sonnet, Opus, Haiku, depending on your account access).

OpenAI

OPENAI_API_KEY=sk-...

Workspaces see GPT-4-class models. Doable also supports OpenAI-compatible endpoints (Azure OpenAI, vLLM, OpenRouter) by setting the SDK's standard OPENAI_BASE_URL env var.
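
For example, to route through an OpenAI-compatible gateway such as OpenRouter (the base URL below is OpenRouter's public endpoint; the key format depends on the provider you point at):

OPENAI_API_KEY=sk-or-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1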

GitHub Copilot SDK

The most powerful option (the same agent that powers GitHub Copilot Chat), but it requires a separately installed CLI.

Option A: local CLI

  1. Install the CLI: see the Copilot SDK README.
  2. Authenticate it once: copilot auth login.
  3. Set the path:
COPILOT_CLI_PATH=/usr/local/bin/copilot

Doable spawns a subprocess per session.
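
If you are not sure where the binary landed after installation, a standard shell lookup prints the path to put in COPILOT_CLI_PATH:

command -v copilot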

Option B: remote CLI server

If you want one shared Copilot CLI server (useful for scaling), run copilot --server somewhere and point Doable at it:

COPILOT_CLI_URL=http://copilot.internal:7878

Pick a default model

COPILOT_DEFAULT_MODEL=claude-sonnet-4

Available models depend on the user's Copilot subscription. Doable surfaces the full list per session.

"Bring your own GitHub Copilot" flow

If a workspace member has a personal Copilot subscription, they can connect it from Settings → AI → GitHub Copilot. Doable runs the OAuth flow at GITHUB_COPILOT_REDIRECT_URI and stores an encrypted token. The agent then uses their Copilot quota, not yours. Implementation lives in services/api/src/routes/auth/ and services/api/src/ai/providers/copilot-manager.ts.
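
The redirect URI must point back at your own Doable deployment; the hostname and callback path below are purely illustrative, so substitute whatever route your instance actually exposes:

GITHUB_COPILOT_REDIRECT_URI=https://doable.example.com/auth/github-copilot/callback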

Per-workspace overrides

Stored in workspaces.ai_settings (JSONB). Edit from Workspace Settings → AI. The relevant tables:

  • ai_provider_keys: encrypted API keys at workspace scope.
  • user_ai_preferences: personal model preferences.
  • mode_tool_config: which tools each mode can use.

The encryption key for stored credentials is ENCRYPTION_KEY from your env.
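
If you have not generated a value for ENCRYPTION_KEY yet, a long random secret from a standard tool is a reasonable starting point; treat the command below as illustrative, since the exact length or encoding Doable expects is not covered here:

ENCRYPTION_KEY=$(openssl rand -hex 32)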

Routing & fallback

If multiple providers are configured, the engine resolver (services/api/src/ai/engine-resolver.ts) picks based on:

  1. Explicit per-message override (the user picked a model in the UI).
  2. Workspace default model.
  3. Global default (env var).
  4. First-available healthy provider.

If a provider call fails (rate limit, 5xx), the resolver retries the next provider in the chain.
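
As a rough sketch of that resolution and failover order (the names and interfaces below are hypothetical, not the actual API of services/api/src/ai/engine-resolver.ts):

// Hypothetical sketch only; illustrates the priority chain described above.
interface Provider {
  name: string;
  isHealthy(): Promise<boolean>;
  complete(prompt: string, model?: string): Promise<string>;
}

interface ResolveContext {
  messageModel?: string;   // 1. explicit per-message override from the UI
  workspaceModel?: string; // 2. workspace default model
  globalModel?: string;    // 3. global default from the env
}

async function resolveAndCall(
  providers: Provider[],
  ctx: ResolveContext,
  prompt: string,
): Promise<string> {
  // Highest-priority model that is actually set wins.
  const model = ctx.messageModel ?? ctx.workspaceModel ?? ctx.globalModel;

  // 4. Walk configured providers in order; a rate limit or 5xx falls
  //    through to the next provider in the chain.
  for (const provider of providers) {
    if (!(await provider.isHealthy())) continue;
    try {
      return await provider.complete(prompt, model);
    } catch (err) {
      console.warn(`Provider ${provider.name} failed, trying next`, err);
    }
  }
  throw new Error("No AI provider configured");
}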

Cost control

  • Credit limits per user / per session: configured in Workspace Settings → AI → Limits.
  • Model allow-list: restrict expensive models to admins.
  • Token budgets: automatic context compaction kicks in well before the model's hard limit (session.compaction_start / session.compaction_complete events).

Verifying it works

Sign in, create a project, and send the message "What's 2+2?". You should see streaming text within a second or two. If you see "No AI provider configured":

  1. Confirm the env var is set in the API environment (not just the shell).
  2. Check the API logs: docker compose logs api or journalctl -u doable -e.
  3. Verify the workspace's AI settings haven't disabled all providers.
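
For a docker compose deployment (assuming the api service name used in the log command above), a quick way to confirm the variables actually reached the API process:

docker compose exec api printenv | grep -iE 'anthropic|openai|copilot'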

→ Next: How docore works.