Configuring AI Providers¶
Doable supports three AI provider paths today. Pick at least one; the chat agent is dormant without one configured.
Per-workspace overrides are configured from the UI (Workspace Settings → AI → Providers). The variables below are the global defaults the API falls back to.
Anthropic (Claude)¶
Recommended for the highest-quality code generation and the longest context windows.
Workspaces will see Claude models in the model picker (Sonnet, Opus, or Haiku, depending on your account access).
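The exact variable the API reads isn't shown here; a minimal sketch of the global default, assuming Doable uses the Anthropic SDK's standard `ANTHROPIC_API_KEY` variable:

```bash
# Anthropic SDK standard variable (assumed to be what the API reads)
ANTHROPIC_API_KEY=sk-ant-...
```

If your Doable release uses a prefixed name instead, adjust accordingly.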
OpenAI¶
Workspaces see GPT-4-class models. Doable also supports OpenAI-compatible endpoints (Azure OpenAI, vLLM, OpenRouter) by setting the SDK's standard `OPENAI_BASE_URL` env var.
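A sketch using the OpenAI SDK's standard variables (assumed to be what the API reads):

```bash
# Standard OpenAI SDK variables
OPENAI_API_KEY=sk-...
# Optional: point at an OpenAI-compatible endpoint (Azure OpenAI, vLLM, OpenRouter)
OPENAI_BASE_URL=https://openrouter.ai/api/v1
```

Leave `OPENAI_BASE_URL` unset to use the official OpenAI API.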
GitHub Copilot SDK¶
The most powerful option (the same agent that powers GitHub Copilot Chat), but it requires a separately installed CLI.
Option A: local CLI¶
- Install the CLI: see the Copilot SDK README.
- Authenticate it once: `copilot auth login`.
- Set the path:

Doable spawns a subprocess per session.
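The variable that holds the path isn't shown here; a sketch with a hypothetical name (check your Doable release for the real one):

```bash
# Hypothetical variable name -- point it at the installed CLI binary
COPILOT_CLI_PATH=/usr/local/bin/copilot
```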
Option B: remote CLI server¶
If you want one shared Copilot CLI server (useful for scaling), run `copilot --server` somewhere and point Doable at it:
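A sketch of the two halves; the Doable-side variable name is hypothetical, and any port flag depends on the CLI's own docs:

```bash
# On the shared host: run the CLI in server mode
copilot --server

# In Doable's API environment -- hypothetical variable name:
COPILOT_SERVER_URL=http://copilot-host:<port>
```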
Pick a default model¶
Available models depend on the user's Copilot subscription. Doable surfaces the full list per session.
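The env var for the global default model isn't shown here; a sketch with a hypothetical name and a placeholder value:

```bash
# Hypothetical variable; use a model ID your provider actually serves
DEFAULT_AI_MODEL=<model-id>
```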
"Bring your own GitHub Copilot" flow¶
If a workspace member has a personal Copilot subscription, they can connect it from Settings → AI → GitHub Copilot. Doable runs the OAuth flow at `GITHUB_COPILOT_REDIRECT_URI` and stores an encrypted token. The agent then uses their Copilot quota, not yours. Implementation lives in `services/api/src/routes/auth/` and `services/api/src/ai/providers/copilot-manager.ts`.
Per-workspace overrides¶
Stored in `workspaces.ai_settings` (JSONB). Edit from Workspace Settings → AI. The relevant tables:
- `ai_provider_keys`: encrypted API keys at workspace scope.
- `user_ai_preferences`: personal model preferences.
- `mode_tool_config`: which tools each mode can use.
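As an illustration only, a workspace's `ai_settings` JSONB might hold something like the following; the field names here are hypothetical, not the actual schema:

```json
{
  "default_model": "<model-id>",
  "allowed_models": ["<model-id>"],
  "providers_enabled": { "anthropic": true, "openai": false }
}
```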
The encryption key for stored credentials is `ENCRYPTION_KEY` from your env.
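The storage scheme itself isn't documented here. As a rough sketch of what sealing a credential with `ENCRYPTION_KEY` could look like (AES-256-GCM via `node:crypto`; the SHA-256 key derivation and the `iv|tag|ciphertext` base64 packing are assumptions, not the real implementation):

```typescript
import {
  createCipheriv,
  createDecipheriv,
  createHash,
  randomBytes,
} from "node:crypto";

// Derive a 32-byte AES key from the env var (hypothetical derivation; the
// real code may use the raw key or a proper KDF).
const key = createHash("sha256")
  .update(process.env.ENCRYPTION_KEY ?? "dev-only-placeholder")
  .digest();

function seal(plaintext: string): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack nonce + auth tag + ciphertext into one base64 string.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

function open(sealed: string): string {
  const buf = Buffer.from(sealed, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28)); // 16-byte GCM auth tag
  return Buffer.concat([
    decipher.update(buf.subarray(28)),
    decipher.final(),
  ]).toString("utf8");
}
```

GCM is a reasonable choice here because the auth tag detects tampering with the stored ciphertext, not just eavesdropping.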
Routing & fallback¶
If multiple providers are configured, the engine resolver (`services/api/src/ai/engine-resolver.ts`) picks based on:
1. Explicit per-message override (the user picked a model in the UI).
2. Workspace default model.
3. Global default (env var).
4. First available healthy provider.
If a provider call fails (rate limit, 5xx), the resolver falls back to the next provider in the chain.
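The precedence chain above can be sketched like this; the names are illustrative and do not reflect the real `engine-resolver.ts` API:

```typescript
type Provider = { name: string; healthy: boolean; defaultModel?: string };

interface ResolveInput {
  messageOverride?: string; // 1. user picked a model for this message
  workspaceDefault?: string; // 2. workspace default model
  globalDefault?: string; // 3. global default from env
  providers: Provider[]; // 4. configured providers, in priority order
}

function resolveModel(input: ResolveInput): string {
  // Walk the explicit settings first, most specific wins.
  const chosen =
    input.messageOverride ?? input.workspaceDefault ?? input.globalDefault;
  if (chosen) return chosen;
  // Otherwise fall through to the first healthy provider's default model.
  const healthy = input.providers.find((p) => p.healthy);
  if (!healthy?.defaultModel) throw new Error("No AI provider configured");
  return healthy.defaultModel;
}
```

Note that the thrown message matches the "No AI provider configured" error described in the verification section below.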
Cost control¶
- Credit limits per user / per session: configured in Workspace Settings → AI → Limits.
- Model allow-list: restrict expensive models to admins.
- Token budgets: automatic context compaction kicks in well before the model's hard limit (`session.compaction_start` / `session.compaction_complete` events).
Verifying it works¶
Sign in, create a project, and send the message "What's 2+2?". You should see streaming text within a second or two. If you see "No AI provider configured":
- Confirm the env var is set in the API environment (not just the shell).
- Check the API logs: `docker compose logs api` or `journalctl -u doable -e`.
- Verify the workspace's AI settings haven't disabled all providers.
→ Next: How `docore` works.