# Adding a New AI Provider
Doable's AI runtime is pluggable. To add a new provider (a hosted LLM API, a self-hosted Ollama instance, a custom CLI, etc.), you write a small adapter module and register it.
## Where things live

- `services/api/src/ai/providers/` — provider adapters (`anthropic.ts`, `openai.ts`, `copilot.ts`, ...).
- `services/api/src/ai/engine-resolver.ts` — picks a provider based on workspace config, model name, and available env vars.
- `services/api/src/ai/provider-catalog.ts` — the list of human-readable provider/model entries shown in the UI.
- `packages/docore/` — shared streaming engine that wraps providers in a uniform event interface.
## 1. Write the adapter

Create `services/api/src/ai/providers/myprovider.ts`. The minimum surface:
```ts
import type { AIProvider, ChatRequest, AIEvent } from '../types.js';
// parseSse is a small SSE-parsing helper; a sketch follows this listing.
// The import path here is a placeholder -- put the helper wherever fits.
import { parseSse } from './parse-sse.js';

export const myProvider: AIProvider = {
  id: 'myprovider',
  displayName: 'My Provider',

  // Which env vars enable this provider.
  isAvailable: () => Boolean(process.env.MYPROVIDER_API_KEY),

  models: [
    { id: 'myprovider:flash', name: 'My Flash', context: 128_000 },
    { id: 'myprovider:pro', name: 'My Pro', context: 1_000_000 },
  ],

  async *stream(req: ChatRequest): AsyncIterable<AIEvent> {
    const res = await fetch('https://api.myprovider.com/v1/chat', {
      method: 'POST',
      headers: {
        authorization: `Bearer ${process.env.MYPROVIDER_API_KEY}`,
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        model: req.model,
        messages: req.messages,
        tools: req.tools,
        stream: true,
      }),
    });
    if (!res.ok) throw new Error(`myprovider: HTTP ${res.status}`);
    if (!res.body) throw new Error('myprovider: no body');

    for await (const chunk of parseSse(res.body)) {
      // Translate provider-specific events into Doable's uniform AIEvent shape.
      if (chunk.type === 'text_delta') {
        yield { kind: 'assistant.message_delta', delta: chunk.text };
      } else if (chunk.type === 'tool_call') {
        yield {
          kind: 'tool.call',
          callId: chunk.id,
          name: chunk.name,
          arguments: chunk.args,
        };
      } else if (chunk.type === 'message_end') {
        yield { kind: 'assistant.message', content: chunk.full_text };
      }
    }
  },
};
```
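`parseSse` is not spelled out in the example above; here is a minimal sketch, assuming the provider emits standard `data: <json>` SSE frames separated by blank lines. Check your provider's actual framing before relying on it.

```ts
// Minimal SSE parser sketch. Assumes standard `data: <json>` frames
// terminated by a blank line; harden for your provider's real framing.
export async function* parseSse(
  body: ReadableStream<Uint8Array>,
): AsyncIterable<any> {
  const decoder = new TextDecoder();
  let buffer = '';
  // In Node, fetch response bodies are async-iterable ReadableStreams.
  for await (const chunk of body as unknown as AsyncIterable<Uint8Array>) {
    buffer += decoder.decode(chunk, { stream: true });
    let end: number;
    while ((end = buffer.indexOf('\n\n')) !== -1) {
      const frame = buffer.slice(0, end);
      buffer = buffer.slice(end + 2);
      for (const line of frame.split('\n')) {
        if (!line.startsWith('data:')) continue;
        const payload = line.slice(5).trim();
        if (payload && payload !== '[DONE]') yield JSON.parse(payload);
      }
    }
  }
}
```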
Look at `anthropic.ts` for the canonical example with streaming, tool calls, reasoning blocks, and back-pressure handling.
## 2. Map provider events to Doable's `AIEvent` type

The uniform event kinds (defined in `packages/docore/src/event-mapper.ts`):

- `assistant.message_delta` / `assistant.message`
- `assistant.reasoning_delta` / `assistant.reasoning`
- `tool.call`, `tool.result`, `tool.error`
- `usage` — `{ input, output, total }` token counts
- `error` — fatal stream error
- `done`
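As a sketch, the union looks roughly like this. It is illustrative only; field names not used in the adapter above are assumptions, so check the real definitions in `packages/docore`:

```ts
// Illustrative sketch of the AIEvent union; consult docore for the real one.
type AIEvent =
  | { kind: 'assistant.message_delta'; delta: string }
  | { kind: 'assistant.message'; content: string }
  | { kind: 'assistant.reasoning_delta'; delta: string }
  | { kind: 'assistant.reasoning'; content: string }
  | { kind: 'tool.call'; callId: string; name: string; arguments: unknown }
  | { kind: 'tool.result'; callId: string; result: unknown } // field name assumed
  | { kind: 'tool.error'; callId: string; error: string }    // field name assumed
  | { kind: 'usage'; input: number; output: number; total: number }
  | { kind: 'error'; message: string }                       // field name assumed
  | { kind: 'done' };
```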
Bad mapping → broken UI. Test the stream against a real prompt and assert the event sequence in a Vitest test.
## 3. Register the provider

In `services/api/src/ai/providers/index.ts`:
```ts
import { myProvider } from './myprovider.js';

export const providers = [
  // ... existing providers
  myProvider,
];
```
The engine-resolver picks the first provider whose `isAvailable()` returns `true` and whose models include the requested model id.
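In sketch form (the real `engine-resolver.ts` also weighs workspace config and model-name defaults, per the file list above):

```ts
// Sketch of the resolution rule described above.
function resolveProvider(modelId: string): AIProvider | undefined {
  return providers.find(
    (p) => p.isAvailable() && p.models.some((m) => m.id === modelId),
  );
}
```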
## 4. Add the env vars

- `services/api/.env.example` — add `MYPROVIDER_API_KEY=` with a short comment.
- `docker/setup.sh` — add a comment in the generated `.env` template if you'd like operators to know about it.
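The `.env.example` entry might look like:

```
# My Provider (https://myprovider.com)
MYPROVIDER_API_KEY=
```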
## 5. Catalog UI

`services/api/src/ai/provider-catalog.ts` — controls what users see in Workspace Settings → AI:
```ts
{
  id: 'myprovider',
  displayName: 'My Provider',
  description: 'Open-weights LLMs hosted by My Provider Inc.',
  pricingPage: 'https://myprovider.com/pricing',
  models: [
    { id: 'myprovider:flash', name: 'Flash', tier: 'fast' },
    { id: 'myprovider:pro', name: 'Pro', tier: 'powerful' },
  ],
}
```
## 6. Tool support

If your provider supports OpenAI-style tool calls, the existing tool routing in docore works out of the box — just make sure your stream emits `tool.call` events with the same `callId` you'll later receive in `tool.result` from the runtime.

If your provider has a different tool format (e.g. function calling via XML, or no tool support at all), either translate it in the adapter or set `supportsTools: false` and Doable will fall back to text-only mode for that model.
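Opting out might look like the following. Where the flag lives (on the provider object or on individual model entries) depends on the `AIProvider` type; placing it on the provider here is an assumption:

```ts
export const myProvider: AIProvider = {
  id: 'myprovider',
  displayName: 'My Provider',
  supportsTools: false, // assumed placement: Doable runs text-only mode
  // ...rest of the adapter from step 1
};
```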
## 7. Tests

Mock `fetch`, feed in a recorded SSE stream, and assert the emitted `AIEvent`s. Keep the recorded fixture small and version it alongside the test.
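A sketch of such a test, assuming `ChatRequest` carries only the fields used in step 1 (the fixture path and expected event sequence are illustrative):

```ts
import { expect, it, vi } from 'vitest';
import { readFile } from 'node:fs/promises';
import { myProvider } from './myprovider.js';

it('maps a recorded SSE stream to AIEvents', async () => {
  // Hypothetical fixture: a small, versioned recording of the provider's SSE output.
  const sse = await readFile(new URL('./__fixtures__/chat.sse', import.meta.url));
  vi.stubEnv('MYPROVIDER_API_KEY', 'test-key');
  vi.stubGlobal('fetch', vi.fn(async () => new Response(sse)));

  const kinds: string[] = [];
  for await (const event of myProvider.stream({
    model: 'myprovider:flash',
    messages: [{ role: 'user', content: 'hello' }],
    tools: [],
  })) {
    kinds.push(event.kind);
  }

  // The expected sequence depends on the fixture; this matches a plain text reply.
  expect(kinds).toEqual(['assistant.message_delta', 'assistant.message']);
});
```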
## Existing providers

| Provider | File | Notes |
|---|---|---|
| Anthropic Claude | `anthropic.ts` | Native SDK, supports reasoning blocks |
| OpenAI | `openai.ts` | Function calling, tool routing |
| GitHub Copilot CLI | `copilot.ts` | Shells out to the copilot JSON-RPC server |
| (Add yours here) | — | — |
## Quality bar

- Streaming must work — no buffering the entire response and returning it at the end.
- Errors must surface as `error` events, not be swallowed.
- Token counts must be reported (if the provider exposes them).
- Tool calls must round-trip cleanly: every `tool.call` gets a matching `tool.result` or `tool.error` from the runtime.