Backups¶
You must back up two things:
- The PostgreSQL database, dumped with pg_dump.
- The project filesystem (PROJECTS_ROOT/): every file the AI ever wrote for users.
Either alone is incomplete. Restoring the DB without the files leaves projects pointing at nothing; restoring files without the DB makes them invisible.
Database backup¶
Bare-metal¶
# Daily backup with date in filename
pg_dump --format=custom --compress=9 \
--dbname="postgres://doable:$DB_PASS@localhost:5432/doable" \
--file="/var/backups/doable/db-$(date +%F).dump"
# Restore
pg_restore --clean --if-exists --no-owner \
--dbname="postgres://doable:$DB_PASS@localhost:5432/doable" \
/var/backups/doable/db-2026-04-22.dump
Wire it up via cron:
0 3 * * * pg_dump --format=custom --compress=9 \
--dbname="postgres://doable:$DB_PASS@localhost:5432/doable" \
--file="/var/backups/doable/db-$(date +\%F).dump" \
&& find /var/backups/doable -name 'db-*.dump' -mtime +30 -delete
Docker¶
docker compose -f docker/docker-compose.yml exec -T postgres \
pg_dump --format=custom --compress=9 -U doable doable \
> /var/backups/doable/db-$(date +%F).dump
# Restore
docker compose -f docker/docker-compose.yml exec -T postgres \
pg_restore --clean --if-exists --no-owner -U doable -d doable \
< /var/backups/doable/db-2026-04-22.dump
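After a restore, a quick sanity check confirms the tables actually came back; psql's \dt meta-command just lists them:
# List restored tables in the doable database
docker compose -f docker/docker-compose.yml exec -T postgres \
psql -U doable -d doable -c '\dt'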
Managed Postgres¶
Enable point-in-time recovery (PITR) in the provider's console (RDS, Neon, Supabase). PITR plus nightly snapshots lets you recover to any moment, not just to the last dump.
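On RDS, for example, PITR comes down to a nonzero automated-backup retention period. A minimal sketch; the instance identifier is a placeholder for your own:
# "doable-prod" is a hypothetical instance name; adjust to yours.
# Retention > 0 enables automated backups, which power PITR.
aws rds modify-db-instance \
--db-instance-identifier doable-prod \
--backup-retention-period 14 \
--apply-immediately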
Project filesystem backup¶
Bare-metal¶
# Default location:
PROJECTS_ROOT=/root/doable/services/api/projects
# Daily incremental rsync to off-host storage:
rsync -aH --delete \
"$PROJECTS_ROOT/" \
backup-host:/srv/backups/doable/projects/
The -H flag preserves hard links: many projects share node_modules content via pnpm's content-addressable store.
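If you want dated point-in-time copies instead of a single rolling mirror, rsync's --link-dest hard-links unchanged files against the previous day's snapshot, so each extra day costs only the changed files. A sketch, assuming the same backup-host layout as above:
# Each day's snapshot hard-links unchanged files to yesterday's.
# --link-dest paths are relative to the destination directory;
# on the very first run the missing directory is just a warning.
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
rsync -aH --delete \
--link-dest="../$YESTERDAY" \
"$PROJECTS_ROOT/" \
"backup-host:/srv/backups/doable/projects/$TODAY/"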
Docker¶
The api_projects named volume holds the data:
# Snapshot to a tar.gz on the host
docker run --rm \
-v doable_api_projects:/data:ro \
-v /var/backups/doable:/backup \
alpine tar czf /backup/projects-$(date +%F).tar.gz -C /data .
# Restore (stop the api container first to avoid concurrent writes)
docker compose -f docker/docker-compose.yml stop api
docker run --rm \
-v doable_api_projects:/data \
-v /var/backups/doable:/backup \
alpine sh -c "find /data -mindepth 1 -delete && tar xzf /backup/projects-2026-04-22.tar.gz -C /data"
docker compose -f docker/docker-compose.yml start api
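As with the database dumps, wire the snapshot into cron and sweep old archives; this mirrors the retention pattern above:
0 4 * * * docker run --rm \
-v doable_api_projects:/data:ro \
-v /var/backups/doable:/backup \
alpine tar czf /backup/projects-$(date +\%F).tar.gz -C /data . \
&& find /var/backups/doable -name 'projects-*.tar.gz' -mtime +30 -delete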
For large fleets, use a CSI snapshot or shared NFS with its own snapshot policy.
Other things worth backing up¶
| What | Where | Cadence |
|---|---|---|
| Thumbnails | services/api/thumbnails/ or api_thumbnails volume | Weekly (regeneratable) |
| .env files | host | When secrets change |
| Caddy / nginx config | /etc/caddy/, /etc/nginx/ | When config changes |
| Cloudflare Tunnel cert | /root/.cloudflared/ | Once (after install) |
| Published sites | SITES_DIR/ | Daily (rebuildable from project files) |
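The non-regeneratable rows fit in one small tarball. A sketch; the .env path is an assumption, so keep only the paths that exist on your host:
# Paths below are assumptions; prune to match your install
tar czf /var/backups/doable/config-$(date +%F).tar.gz \
/root/doable/.env \
/etc/caddy \
/root/.cloudflared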
What you don't need to back up¶
- node_modules/: restored by pnpm install.
- Container images: rebuildable from the repo.
- Copilot session state under ~/.copilot/session-state/<id>/: chat history is in the DB; sessions resume cleanly.
Test your backups¶
A backup you've never restored is a wish, not a backup. Schedule a quarterly drill:
1. Spin up an empty staging host.
2. Restore last night's DB dump and project filesystem snapshot.
3. Sign in, browse a project, open a file in the editor.
4. Document any gotchas in your runbook.
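Between drills, a cheap integrity check catches a truncated or corrupt dump the day it happens; pg_restore --list reads the archive's table of contents without touching any database:
# Exit status is nonzero if the archive is unreadable
pg_restore --list "/var/backups/doable/db-$(date +%F).dump" > /dev/null \
&& echo "dump OK" || echo "dump unreadable: alert someone"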
Off-host storage¶
Copies on the same host don't survive disk failure. Push to:
- An object store (S3 / R2 / B2 / GCS) with lifecycle rules.
- Another VPS in a different region (rsync over SSH).
- A cold backup (encrypted external drive rotated weekly).
Encrypt before transmission (gpg, age, or your object store's SSE).
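For example, with age and an object store (the recipient key and bucket name are placeholders):
# Encrypt to a public age recipient, then upload; only the holder
# of the matching private key can decrypt.
DUMP="/var/backups/doable/db-$(date +%F).dump"
age -r age1examplepublickey -o "$DUMP.age" "$DUMP"
aws s3 cp "$DUMP.age" s3://your-backup-bucket/doable/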