@doable/docore¶
The AI agent engine. Wraps the GitHub Copilot SDK with worker pooling, per-user accounting, typed events, OS-level process isolation, and persistent tool/MCP policies.
The full conceptual tour is at AI → docore. This page is the API reference.
Install¶
This package is workspace-internal and already wired up as a dependency of the apps that need it, so there is no separate install step.
Top-level¶
class DoCoreEngine¶
| Option | Type | Default | Notes |
|---|---|---|---|
| `cliPath` | `string` | env `COPILOT_CLI_PATH` | Path to a local Copilot CLI binary |
| `cliUrl` | `string` | env `COPILOT_CLI_URL` | Use a remote Copilot CLI server |
| `defaultModel` | `string` | env `COPILOT_DEFAULT_MODEL` | Fallback model name |
| `maxWorkers` | `number` | `4` | Hard cap on concurrent worker processes |
| `isolation` | `{ backend: 'auto' \| 'nsjail' \| 'systemd' \| 'jobobject' \| 'direct' }` | `'auto'` | Process isolation backend |
| `tracer` | `Tracer` | `noopTracer` | OTel-style sink |
Methods:

- `start(): Promise<void>` — boots the worker pool.
- `openSession(opts): Promise<Session>` — see below.
- `shutdown(): Promise<void>` — graceful drain.
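The start → openSession → shutdown lifecycle can be sketched as below. `StandInEngine` is a minimal stand-in with the same method names so the sketch is self-contained and runnable; in real code you would import `DoCoreEngine` from `@doable/docore` instead.

```typescript
// Lifecycle sketch: start() before opening sessions, shutdown() when done.
type Session = { close(): Promise<void> };

class StandInEngine {
  started = false;
  async start(): Promise<void> { this.started = true; }       // boots the worker pool
  async openSession(_opts: { sessionId: string; cwd: string }): Promise<Session> {
    if (!this.started) throw new Error("call start() first");
    return { close: async () => {} };
  }
  async shutdown(): Promise<void> { this.started = false; }   // graceful drain
}

const engine = new StandInEngine();
await engine.start();
const session = await engine.openSession({ sessionId: "s-1", cwd: "/tmp/project" });
await session.close();   // on-disk session state is preserved
await engine.shutdown();
console.log("engine stopped:", !engine.started); // engine stopped: true
```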
class DoCorePool¶
A pool of DoCoreEngine workers. Use directly only if you want fine-grained control; otherwise let DoCoreEngine own the pool internally.
class DoCoreUserManager¶
```typescript
new DoCoreUserManager(pool: DoCorePool, options: {
  perUserLimit: number; // max concurrent sessions per user
  perUserMemoryMB?: number;
})
```
`acquire({ userId, sessionId })` returns a session-bound worker; call `release()` to return it.
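The acquire/release pattern is safest with a `try`/`finally`, so a thrown error cannot leak a per-user slot. The manager below is a stand-in that mimics the documented method names and the `perUserLimit` option; the real class comes from `@doable/docore`.

```typescript
// Stand-in user manager illustrating acquire/release with a per-user cap.
type Lease = { workerId: number; release(): void };

class StandInUserManager {
  private inUse = 0;
  constructor(private readonly perUserLimit: number) {}
  acquire(_opts: { userId: string; sessionId: string }): Lease {
    if (this.inUse >= this.perUserLimit) throw new Error("per-user limit reached");
    this.inUse++;
    return { workerId: this.inUse, release: () => { this.inUse--; } };
  }
  get activeLeases(): number { return this.inUse; }
}

const users = new StandInUserManager(2); // perUserLimit: 2
const lease = users.acquire({ userId: "u-1", sessionId: "s-1" });
try {
  // ... run the session on the leased worker ...
} finally {
  lease.release(); // slot is returned even if the session body throws
}
console.log("active leases:", users.activeLeases); // active leases: 0
```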
class WorkerPool¶
Generic worker pool primitive used internally. Most callers don't touch this.
Sessions¶
```typescript
const session = await engine.openSession({
  sessionId: string,   // stable across reconnects
  cwd: string,         // project root on disk
  systemPrompt: string,
  tools: ToolDefinition[],
  permissionHandler?: PermissionHandler,
});
```
Methods:

- `send({ text, attachments? })` → `AsyncIterable<DoCoreEvent>` — stream the assistant's response.
- `abort()` — cancel the in-flight model call.
- `compact()` — force context compaction.
- `close()` — release resources (the on-disk session is preserved).
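Consuming the stream from `send()` is an ordinary `for await` loop over events. The fake generator below just mimics the delta-then-final ordering described in the Events section so the loop is runnable; real events come from a session opened via `openSession`.

```typescript
// Accumulate message_delta chunks; the final assistant.message carries the
// complete text (kinds reduced here to the three the loop handles).
type DoCoreEvent =
  | { kind: "assistant.message_delta"; text: string }
  | { kind: "assistant.message"; text: string }
  | { kind: "session.task_complete" };

async function* fakeSend(): AsyncIterable<DoCoreEvent> {
  yield { kind: "assistant.message_delta", text: "Hello, " };
  yield { kind: "assistant.message_delta", text: "world!" };
  yield { kind: "assistant.message", text: "Hello, world!" };
  yield { kind: "session.task_complete" };
}

let assembled = "";
for await (const event of fakeSend()) {
  switch (event.kind) {
    case "assistant.message_delta":
      assembled += event.text;           // stream tokens to the UI as they arrive
      break;
    case "assistant.message":
      console.log("final:", event.text); // final assembled message
      break;
    case "session.task_complete":
      break;                             // stream finished
  }
}
```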
Events¶
The full union is in `packages/docore/src/events.ts`. Frequently used kinds:
| Kind | Notes |
|---|---|
| `assistant.message_delta` | Token chunk from the assistant |
| `assistant.reasoning_delta` | Chain-of-thought chunk (thinking models) |
| `assistant.message` | Final assembled message |
| `assistant.reasoning` | Final reasoning trace |
| `tool.call` | Agent calling a tool |
| `tool.result` | Tool output |
| `tool.error` | Denied or failed tool |
| `session.start` / `resume` / `idle` / `shutdown` | Lifecycle |
| `session.usage_info` | Tokens & cost rollup |
| `session.compaction_start` / `complete` | Auto-summarization |
| `session.task_complete` | Stream finished |
| `session.title_changed` | Auto-generated chat title |
| `session.model_change` / `mode_changed` | Model/mode switched mid-session |
Use `mapSdkEvent(rawSdkEvent, ctx)` to convert raw Copilot SDK events to the typed `DoCoreEvent`.
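A sketch of what such an adapter does: translate a raw SDK event into the typed union. The raw event shape, the `ctx` shape, and the function body here are assumptions for illustration only; the real mapping lives in `packages/docore/src/events.ts`.

```typescript
// Hypothetical adapter from a raw SDK event to the typed union.
type RawSdkEvent = { type: string; data?: Record<string, unknown> };
type DoCoreEvent =
  | { kind: "assistant.message_delta"; text: string }
  | { kind: "tool.call"; tool: string }
  | { kind: "session.task_complete" };

function mapSdkEventSketch(raw: RawSdkEvent, _ctx: { sessionId: string }): DoCoreEvent | null {
  switch (raw.type) {
    case "text_delta": // raw kind names are invented for this sketch
      return { kind: "assistant.message_delta", text: String(raw.data?.text ?? "") };
    case "tool_use":
      return { kind: "tool.call", tool: String(raw.data?.name ?? "") };
    case "done":
      return { kind: "session.task_complete" };
    default:
      return null; // unknown raw kinds are dropped (an assumption)
  }
}

const mapped = mapSdkEventSketch(
  { type: "text_delta", data: { text: "hi" } },
  { sessionId: "s-1" },
);
```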
Sandboxing¶
```typescript
import {
  createPolicySandbox,
  PolicyStore,
  POLICY_DEFAULTS,
  FilePersistence,
} from "@doable/docore";

const policy = new PolicyStore({
  persistence: new FilePersistence("./policies.json"),
  defaults: POLICY_DEFAULTS,
});

const handler = createPolicySandbox({
  policy,
  audit: (entry) => recordAudit(entry),
});
```
Pass `handler` as `permissionHandler` to `openSession`.
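For intuition, a permission decision boils down to an allow/deny answer per tool call. The request and decision shapes below are illustrative assumptions, not the real `PermissionHandler` type; the actual policy logic lives behind `createPolicySandbox` and `PolicyStore`.

```typescript
// Hypothetical allow-list decision, standing in for a real policy check.
type ToolRequest = { tool: string; args: Record<string, unknown> };
type Decision = { allow: boolean; reason?: string };

const allowList = new Set(["read_file", "search"]);

function decideSketch(req: ToolRequest): Decision {
  return allowList.has(req.tool)
    ? { allow: true }
    : { allow: false, reason: `tool ${req.tool} not in allow-list` };
}

console.log(decideSketch({ tool: "shell", args: {} })); // denied with a reason
```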
Process isolation¶
```typescript
import {
  ProcessIsolator,
  NsjailBackend,
  SystemdBackend,
  JobObjectBackend,
  DirectBackend,
} from "@doable/docore";
```
Most callers don't construct these — `DoCoreEngine` picks one based on `isolation.backend`.
Tracing¶
```typescript
import { Tracer } from "@doable/docore";

const tracer = new Tracer({
  sink: (span) => console.log(span),
});
```
Pass it as the `tracer` option to `DoCoreEngine` or `DoCorePool`.
Standalone server mode¶
`DoCoreServer` exposes a thin HTTP API around an engine — useful if you want a dedicated AI worker host:
```typescript
import { DoCoreServer, DoCoreEngine } from "@doable/docore";

const server = new DoCoreServer({
  engine: new DoCoreEngine({ /* … */ }),
  port: 7878,
});

await server.start();
```
The Doable API does not use this by default; it embeds the engine in-process for lower latency.
See also¶
- AI → docore — narrative tour.
- Sandboxing.
- @doable/dovault.