
Network Binding Rules

Hard rule: every Doable application service binds to 127.0.0.1 only. Never 0.0.0.0 or any public interface. Public traffic enters exclusively through one TLS-terminating proxy (nginx, Caddy, or Cloudflare Tunnel).

This applies in all environments: dev, staging, production, single-host, multi-host.

Why

  • A misconfigured firewall or an accidentally opened port can't expose Doable directly, because nothing is listening on a public interface.
  • The reverse proxy is a single, well-understood choke point — easy to harden, monitor, and rate-limit.
  • Internal API ↔ WS communication can use INTERNAL_SECRET-signed plain HTTP because it's loopback-only.
  • It catches mistakes early — `ss -tlnp | grep -v 127.0.0.1` is a one-line audit.
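The loopback-only guarantee is what makes plain-HTTP internal calls acceptable. A minimal sketch of what an INTERNAL_SECRET-signed request might look like, assuming HMAC-SHA256 over the body — the header name (`X-Internal-Signature`), endpoint, and demo secret are illustrative, not taken from the repo:

```shell
# Hypothetical sketch — the real signing scheme lives in the API/WS code.
INTERNAL_SECRET='example-secret'          # demo value only, never hardcode
body='{"event":"task.updated"}'

# HMAC-SHA256 the body with the shared secret; -r prints "hex *stdin".
sig=$(printf '%s' "$body" \
      | openssl dgst -sha256 -hmac "$INTERNAL_SECRET" -r \
      | cut -d' ' -f1)
echo "signature: $sig"

# The request itself never leaves loopback, e.g.:
#   curl -X POST http://127.0.0.1:4001/internal \
#        -H "X-Internal-Signature: $sig" -d "$body"
```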

What it looks like in practice

| Service          | Bind address | Port    |
|------------------|--------------|---------|
| Next.js (web)    | 127.0.0.1    | 3000    |
| Hono API         | 127.0.0.1    | 4000    |
| WS server        | 127.0.0.1    | 4001    |
| PostgreSQL       | 127.0.0.1    | 5432    |
| Redis (optional) | 127.0.0.1    | 6379    |
| Caddy / nginx    | 0.0.0.0      | 80, 443 |
| sshd             | 0.0.0.0      | 22      |

Anything else bound publicly is a bug.

How it's enforced

| Surface            | Enforcement                                                    |
|--------------------|----------------------------------------------------------------|
| services/api       | `API_HOST=127.0.0.1` env var, default in `.env.example`        |
| services/ws        | `WS_HOST=127.0.0.1` env var                                    |
| apps/web (Next.js) | `--hostname 127.0.0.1` in `package.json` dev and start scripts |
| PostgreSQL         | `listen_addresses = 'localhost'` in `postgresql.conf`          |
| Redis              | `bind 127.0.0.1` in `redis.conf`                               |
| Caddy              | `bind 127.0.0.1` directive (when behind Cloudflare Tunnel)     |
| Docker             | `127.0.0.1:<port>:<port>` mapping in `docker-compose.yml`      |
| UFW                | Default deny + only 22/80/443 allowed                          |
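Spelled out as config, the non-Docker rows look roughly like this — a sketch assuming stock file locations; the authoritative defaults live in the repo's `.env.example` and the respective conf files:

```
# services/api and services/ws — .env.example defaults
API_HOST=127.0.0.1
WS_HOST=127.0.0.1

# postgresql.conf
listen_addresses = 'localhost'

# redis.conf
bind 127.0.0.1

# UFW baseline (run once on the host): deny everything, allow only 22/80/443
#   sudo ufw default deny incoming
#   sudo ufw allow 22/tcp
#   sudo ufw allow 80/tcp
#   sudo ufw allow 443/tcp
#   sudo ufw enable
```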

What about Docker?

Inside a container, services may bind to 0.0.0.0 — that's fine because the container is the boundary. The docker-compose.yml then maps 127.0.0.1:HOSTPORT:CONTAINERPORT, so the host only exposes them on loopback. The actual Compose snippet:

```yaml
ports:
  - "127.0.0.1:4000:4000"
```

Never use "4000:4000" — that binds to all host interfaces.
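A fuller compose file following this rule might look like the sketch below — service names, images, and the second port are illustrative, not copied from the repo:

```yaml
# Sketch only; the real docker-compose.yml defines the actual services.
services:
  api:
    build: ./services/api
    ports:
      - "127.0.0.1:4000:4000"   # host exposes this on loopback only
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # never "5432:5432" — that is public
```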

Verifying

```shell
# After every deploy / restart:
sudo ss -tlnp | awk 'NR==1 || $4 !~ /127\.0\.0\.1|\[::1\]/'

# Should show only:
# - sshd on *:22
# - your reverse proxy (nginx/caddy) on *:80, *:443
```

If anything else appears bound publicly, stop the deploy and fix it before continuing.
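The audit can be turned into a deploy gate. A sketch (the function name and allowlist are illustrative): `audit_binds` prints every non-loopback listener that isn't sshd or the reverse proxy, so an empty result means the host passes. The demo runs it against canned `ss`-style output; in a deploy script you would pipe `sudo ss -tlnp` in and `exit 1` on a non-empty result.

```shell
# Print non-loopback listeners that are not sshd/nginx/caddy.
audit_binds() {
  awk 'NR > 1 && $4 !~ /^127\.0\.0\.1:|^\[::1\]:/ && $0 !~ /sshd|nginx|caddy/'
}

# Demo against canned output (real use: sudo ss -tlnp | audit_binds):
sample='State  Recv-Q Send-Q Local-Address:Port  Peer
LISTEN 0      128    127.0.0.1:4000      0.0.0.0:* users:(("node"))
LISTEN 0      128    0.0.0.0:22          0.0.0.0:* users:(("sshd"))
LISTEN 0      128    0.0.0.0:6379        0.0.0.0:* users:(("redis-server"))'

offenders=$(printf '%s\n' "$sample" | audit_binds)
if [ -n "$offenders" ]; then
  echo "FAIL: public bind detected"
  printf '%s\n' "$offenders"   # in a deploy script: exit 1 here
else
  echo "OK"
fi
```

Here only the Redis line is flagged: the API is on loopback and the sshd line is allowlisted.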

Cloudflare Tunnel deployments

When using setup-server.sh, Cloudflare Tunnel runs as cloudflared.service, dials out to Cloudflare's edge, and forwards public traffic into Caddy on 127.0.0.1. Even ports 80/443 don't need to be open publicly — UFW can deny them. The setup script does exactly this when a tunnel is configured.
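The tunnel side can be sketched as a standard cloudflared config — the hostname and tunnel ID below are placeholders, not values from setup-server.sh:

```yaml
# /etc/cloudflared/config.yml — illustrative placeholders throughout
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: app.example.com
    service: http://127.0.0.1:80   # hands traffic to Caddy on loopback
  - service: http_status:404       # required catch-all rule
```

Because cloudflared dials outbound to Cloudflare's edge, UFW can deny inbound 80/443 entirely, and Caddy's `bind 127.0.0.1` keeps it off public interfaces.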

See also