# Deployment
Platos ships as a Docker Compose stack. One `docker compose up` and you have the backend, dashboard, database, Redis, and a Celery worker + beat scheduler running.
## Prerequisites
- Docker 24+ and Docker Compose v2
- 4 GB RAM free (Postgres + pgvector + Redis + API + worker + web)
- Ports 3000 (web), 8000 (API), 5432 (Postgres), and 6379 (Redis) free
For local development without containers you can also install uv (Python) and pnpm (Node) - both pinned in the repo root.
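As a quick preflight, the port requirement above can be checked with a short script. This is an illustrative sketch, not something the repo ships - the port assignments come from the prerequisites list, everything else (function names, timeout) is invented:

```python
# Preflight check: report which of the stack's ports are already taken.
# Port assignments come from the prerequisites above; the script itself
# is an illustrative sketch, not part of the repo.
import socket

PORTS = {3000: "web", 8000: "API", 5432: "Postgres", 6379: "Redis"}

def taken_ports(ports=PORTS):
    """Return (port, service) pairs where something is already listening."""
    taken = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex(("127.0.0.1", port)) == 0:  # 0 means connect succeeded
                taken.append((port, name))
    return taken

if __name__ == "__main__":
    for port, name in taken_ports():
        print(f"port {port} ({name}) is in use - stop the conflicting service first")
```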
## The services
From `docker-compose.yml` (and the table in the root `README.md`):
| Service | Image / source | Purpose |
|---|---|---|
| `platos-db` | `pgvector/pgvector:pg16` | Postgres 16 + pgvector for 1024-dim embeddings. |
| `platos-redis` | `redis:7-alpine` | Warm memory cache, Celery broker, session state. |
| `platos-api` | `apps/api/Dockerfile` | FastAPI + Pydantic AI agent runtime. |
| `platos-worker` | `apps/api/Dockerfile` | Celery worker - memory extraction, long-running tasks. |
| `platos-beat` | `apps/api/Dockerfile` | Celery beat scheduler. |
| `platos-web` | `apps/web/Dockerfile` | Next.js 15 dashboard. |
Every image is pinned - no `:latest` tags, no surprise updates.
## Bootstrapping a fresh checkout
```bash
# 1. Clone + bootstrap (installs Python + JS deps, creates .env from template)
git clone https://github.com/tejassudsfp/platos.git
cd platos
make bootstrap

# 2. Generate local secrets into .env (Fernet key + JWT secret)
make gen-keys

# 3. Start everything
make up          # full stack
# or
make up-infra    # just Postgres + Redis, run API/web locally

# 4. Verify
curl http://localhost:8000/health
open http://localhost:3000
```

`make help` lists the full target set.
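If you script the verify step (in CI, say), polling the health endpoint beats a one-shot curl, since the API may still be applying migrations when the container first comes up. A minimal sketch - it only assumes GET `/health` returns HTTP 200 when the API is ready; the retry count and delay are arbitrary:

```python
# Sketch: poll /health until the API answers, instead of a one-shot curl.
# Only assumes GET /health returns HTTP 200 when ready (per the verify
# step above); the retry count and delay are arbitrary choices.
import time
import urllib.error
import urllib.request

def wait_for_health(url="http://localhost:8000/health", attempts=30, delay=1.0):
    """Return True once the endpoint answers 200, False after all attempts."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # connection refused while containers are still starting
        time.sleep(delay)
    return False
```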
## Required secrets
Two secrets must be set before staging / production will start - the backend hard-fails at import time if either is missing or too weak. This is enforced in `apps/api/platos/config.py` with an `InsecureConfigError` to keep a misconfigured prod from silently accepting traffic.
| Env var | Purpose | Requirement |
|---|---|---|
| `PLATOS_FERNET_KEY` | Encrypts every API key and OAuth token at rest (PRD §9). | 32-byte urlsafe base64 - generate with `make gen-keys` or `python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"`. |
| `PLATOS_JWT_SECRET` | Signs access + refresh tokens for the dashboard API. | Minimum 32 characters. |
Development mode (`PLATOS_ENV=development`) skips the gate so `make up` works from a fresh clone - the `make gen-keys` target writes real keys either way, so there's no reason to keep the placeholders around for long.
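The startup gate can be sketched roughly like this. The real check lives in `apps/api/platos/config.py`; this sketch only mirrors the rules documented above (32-byte urlsafe-base64 Fernet key, 32+ character JWT secret, development mode exempt), and the actual implementation may differ:

```python
# Sketch of the production config gate. The real check lives in
# apps/api/platos/config.py; the rules below come from the docs and the
# actual implementation may differ.
import base64

class InsecureConfigError(RuntimeError):
    """Raised at import time when production secrets are missing or weak."""

def validate_secrets(env: dict) -> None:
    if env.get("PLATOS_ENV", "development") == "development":
        return  # development mode skips the gate
    try:
        raw = base64.urlsafe_b64decode(env.get("PLATOS_FERNET_KEY", ""))
        if len(raw) != 32:
            raise ValueError("wrong key length")
    except Exception:
        raise InsecureConfigError("PLATOS_FERNET_KEY must be 32 urlsafe-base64 bytes")
    if len(env.get("PLATOS_JWT_SECRET", "")) < 32:
        raise InsecureConfigError("PLATOS_JWT_SECRET must be at least 32 characters")
```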
## Database migrations
Every migration is reversible (PRD §9) - `downgrade()` must restore the previous schema exactly. Run them with Alembic:
```bash
# Apply every pending migration
uv run alembic upgrade head

# Roll back the last migration (useful in dev)
uv run alembic downgrade -1
```

The Compose stack auto-runs `alembic upgrade head` on platos-api startup, so a fresh `docker compose up` lands with the DB fully migrated. Production deploys should run migrations as a pre-deploy step against the same image.
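The reversibility contract - `downgrade()` undoes exactly what `upgrade()` did - can be illustrated with plain DDL against an in-memory SQLite database. The real migrations are Alembic scripts against Postgres; the table name here is invented for the example:

```python
# Illustration of the reversibility contract using an in-memory SQLite DB.
# The real migrations are Alembic scripts against Postgres; the table name
# here is invented for the example.
import sqlite3

def upgrade(conn):
    conn.execute("CREATE TABLE memory_events (id INTEGER PRIMARY KEY, kind TEXT)")

def downgrade(conn):
    # Must restore the previous schema exactly: drop what upgrade() created.
    conn.execute("DROP TABLE memory_events")

def table_names(conn):
    rows = conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    return {row[0] for row in rows}

conn = sqlite3.connect(":memory:")
before = table_names(conn)
upgrade(conn)
assert "memory_events" in table_names(conn)
downgrade(conn)
assert table_names(conn) == before  # round trip leaves the schema unchanged
```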
## Scaling notes
- `platos-api` is stateless. Scale horizontally behind any L4 load balancer. WebSocket connections from SDKs are sticky to whichever replica accepted them - use a long-lived LB connection with `proxy_read_timeout` >= 10 minutes so heartbeats don't kill idle sessions.
- `platos-worker` scales independently. Celery handles distribution; add replicas when memory extraction queues back up (Phase C metrics).
- `platos-db` is a hard dependency on Postgres. Production deploys typically swap the Compose service for RDS / Cloud SQL / Neon and point `PLATOS_DATABASE_URL` at it.
- `platos-redis` is a hard dependency on Redis. Same pattern - swap for ElastiCache / Upstash in production.
## Observability
The API writes structured JSON logs to stdout - your log collector (Datadog, Grafana Loki, Papertrail, etc.) picks them up off the container logs. Metrics and quality-check events land in Postgres and are queryable via the dashboard’s Monitoring tab (Phase H).
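If you are not running a collector yet, the JSON-lines format is easy to consume directly. A sketch of filtering error-level events out of a container log stream - the field names (`level`, `event`) are assumptions for illustration, not the API's documented log schema:

```python
# Sketch: pull error-level events out of a JSON-lines log stream.
# The field names ("level", "event") are assumptions for illustration,
# not the API's documented log schema.
import json

def error_events(lines):
    """Yield parsed records whose level is 'error'; skip non-JSON noise."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # e.g. startup banners mixed into stdout
        if isinstance(record, dict) and record.get("level") == "error":
            yield record
```

Piping `docker compose logs -f platos-api` through something like this gives basic error alerting without a full logging stack.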
## HTTPS with Caddy
For production deploys, put a reverse proxy in front of the API and dashboard. Caddy auto-provisions TLS certificates:
```
# Caddyfile
platform.platos.dev {
    reverse_proxy platos-api:8000
}

app.platos.dev {
    reverse_proxy platos-web:3000
}
```

Add a `caddy` service to your `docker-compose.override.yml`:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    depends_on:
      - platos-api
      - platos-web

volumes:
  caddy_data:
```

WebSocket connections (`/ws/sdk`) are proxied automatically. Caddy handles the Upgrade header without extra configuration.
## Environment variables reference
Copy `.env.example` at the repo root to `.env` and fill in values. Key variables:
| Variable | Required | Description |
|---|---|---|
| `PLATOS_FERNET_KEY` | Production | 32-byte urlsafe base64 key for encrypting secrets at rest. |
| `PLATOS_JWT_SECRET` | Production | Min 32 characters. Signs access + refresh tokens. |
| `PLATOS_DATABASE_URL` | If not using Compose DB | Postgres connection string. |
| `PLATOS_REDIS_URL` | If not using Compose Redis | Redis connection string. |
| `PLATOS_ENV` | No | `development` (default) or `production`. Production enforces key strength. |
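The `.env` file is plain `KEY=value` lines with `#` comments; tooling like docker compose or python-dotenv reads it natively. For illustration, a minimal parser showing the shape the file takes - a sketch, not what the repo actually uses:

```python
# Minimal sketch of the .env file shape (KEY=value lines, # comments).
# Real tooling (docker compose, python-dotenv) already parses this format;
# the function is illustrative, not what the repo ships.
def parse_env(text: str) -> dict:
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

example = """\
# Generated by make gen-keys
PLATOS_ENV=production
PLATOS_JWT_SECRET="change-me-to-32-plus-characters!!"
"""
assert parse_env(example)["PLATOS_ENV"] == "production"
```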
## Docs site (Vercel)
The marketing site and docs (`marketing/`) deploy to Vercel as a static Astro build:
```bash
cd marketing
pnpm install
pnpm build   # outputs to dist/
```

The Vercel project auto-deploys from the main branch. The build command is `cd marketing && pnpm build` with output directory `marketing/dist`. No server-side runtime is needed - the docs are fully static.
## Upgrades
- **Pin to a release tag.** The Compose file references images by digest in CI; local dev uses `:latest` for speed.
- **Run migrations first.** If a new release includes a migration, run `alembic upgrade head` before the new API container starts serving traffic. The stateless API can roll through with a brief window of mixed versions; the DB can't.
- **Roll back via `alembic downgrade`.** Every migration is reversible, and releases publish the revision IDs so you can target a specific version.
## Next steps
- API Reference - complete endpoint listing.
- Getting Started - install the SDK and connect to your deployed platform.
- Examples - runnable integrations to test your deployment.