Architecture


Quick Start

1. Configure Environment

cd agent-studio-backend
cp app/env/.env.example app/env/.env
# Edit .env with your production credentials
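
The file typically holds the credentials the services read at boot. A minimal sketch is below; the variable names are illustrative assumptions, and .env.example is the authoritative list:

# Illustrative names only -- consult .env.example for the real list
DATABASE_URL=postgresql://app:change-me@db:5432/agent_studio
REDIS_URL=redis://redis:6379/0
LOG_DIR=/var/log/agent-studio
GUNICORN_WORKERS=2

Note that the hostnames db and redis refer to the compose service names, which resolve automatically on the Docker network.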

2. Build & Start

docker-compose up -d
This starts all 5 services: db, redis, backend, celery-worker, celery-beat.
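
If you have changed the Dockerfile or dependencies, you can build and start in one step:

docker-compose up -d --build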

3. Run Migrations

docker-compose exec backend alembic upgrade head
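
If you need to inspect or roll back the schema, Alembic's standard commands work the same way through the container:

# Show the current revision
docker-compose exec backend alembic current

# Roll back one revision
docker-compose exec backend alembic downgrade -1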

4. Verify

# Check all containers are running
docker-compose ps

# Test API
curl http://localhost:8000/docs
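
If the containers are up but the API does not respond, the backend logs usually show why (for example, a failed database connection at startup):

docker-compose logs --tail=50 backend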

Docker Compose Services

| Service       | Image                 | Port   | Notes              |
|---------------|-----------------------|--------|--------------------|
| db            | postgres:16           | 5432   | pgdata volume      |
| redis         | redis:7               | 6379   | In-memory          |
| backend       | Built from Dockerfile | 8000   | 2 Gunicorn workers |
| celery-worker | Same as backend       | (none) | Task processing    |
| celery-beat   | Same as backend       | (none) | Periodic scheduler |
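
For orientation, the sketch below shows roughly how these services fit together in docker-compose.yml. It is illustrative only: the Celery module paths and the env_file placement are assumptions, and the real compose file is the source of truth.

services:
  db:
    image: postgres:16
    env_file: app/env/.env   # assumes DB credentials live here
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks: [pp-net]
  redis:
    image: redis:7
    networks: [pp-net]
  backend:
    build: .
    env_file: app/env/.env
    depends_on: [db, redis]
    ports:
      - "8000:8000"
    networks: [pp-net]
  celery-worker:
    build: .
    command: celery -A app.worker worker   # module path is hypothetical
    env_file: app/env/.env
    networks: [pp-net]
  celery-beat:
    build: .
    command: celery -A app.worker beat     # module path is hypothetical
    env_file: app/env/.env
    networks: [pp-net]

volumes:
  pgdata:

networks:
  pp-net:
    external: true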

Dockerfile Overview

The backend Dockerfile uses a two-stage build (sketched below). Key details:
  • Base: python:3.12-slim-bookworm
  • LiveKit CLI: v2.4.14 (pinned)
  • Process manager: tini (proper PID 1 signal handling)
  • App server: Gunicorn with Uvicorn workers (serving the ASGI app)
  • Runs as non-root appuser
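
Putting those details together, the shape of the file is roughly the following. This is a sketch, not the actual Dockerfile: the module path app.main:app and the exact build steps are assumptions.

# Stage 1: install Python dependencies into an isolated prefix
FROM python:3.12-slim-bookworm AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: runtime image
FROM python:3.12-slim-bookworm
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
# (the pinned LiveKit CLI v2.4.14 install step is omitted here)
COPY --from=builder /install /usr/local
WORKDIR /app
COPY . .
RUN useradd --create-home appuser
USER appuser
# tini runs as PID 1 and forwards signals to Gunicorn
ENTRYPOINT ["tini", "--"]
CMD ["gunicorn", "app.main:app", "-k", "uvicorn.workers.UvicornWorker", "-b", "0.0.0.0:8000"]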

Configuration

Gunicorn Settings

| Variable         | Default | Description                |
|------------------|---------|----------------------------|
| GUNICORN_WORKERS | 2       | Number of worker processes |
| GUNICORN_THREADS | 1       | Threads per worker         |
| GUNICORN_TIMEOUT | 90      | Request timeout (seconds)  |
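
Since these are plain environment variables, you can override them in app/env/.env and recreate the backend container to apply the change. Treat the values below as an example, not a recommendation; async workloads usually need fewer workers than CPU-bound ones.

# In app/env/.env
GUNICORN_WORKERS=4
GUNICORN_TIMEOUT=120

# Recreate the container so the new env file is picked up
docker-compose up -d --force-recreate backend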

Logging

Docker logging is configured with the json-file driver (compose snippet below):
  • Max file size: 10MB
  • Max files: 5 (rotation)
  • Application logs: written to LOG_DIR mount
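
The rotation settings above correspond to per-service logging options in the compose file; in compose syntax they look like this:

logging:
  driver: json-file
  options:
    max-size: "10m"
    max-file: "5"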

Networking

All services communicate over the pp-net Docker network, which is declared external. Create it before starting the stack:

docker network create pp-net
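
You can confirm the network exists, or create it idempotently in scripts:

# Fails if pp-net is missing
docker network inspect pp-net >/dev/null

# Create only if absent
docker network inspect pp-net >/dev/null 2>&1 || docker network create pp-net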

Commands

# Start all services
docker-compose up -d

# Stop all services
docker-compose down

# Rebuild after code changes
docker-compose build backend
docker-compose up -d backend

# View logs
docker-compose logs -f backend
docker-compose logs -f celery-worker

Back up the pgdata volume regularly: if the database is lost, all agents, calls, and configuration are lost with it.
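
One simple approach is a logical dump through the db container. The database name and user below are assumptions; match them to your .env:

# -T disables the pseudo-TTY so the redirect captures clean output
docker-compose exec -T db pg_dump -U postgres agent_studio > backup_$(date +%F).sql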