
Deploying to AWS with SST

Procella can be deployed serverlessly on AWS using SST (Serverless Stack). This approach uses Lambda for the API, Aurora Serverless v2 behind RDS Proxy for the database, S3 for blob storage, and CloudFront for the static UI. There are no servers to manage, capacity scales automatically with load, and the base cost is roughly $55-65/month.

Official docs: sst.dev/docs


Prerequisites:

  • AWS account with credentials configured
  • Node.js 18+ and Bun
  • SST CLI (npx sst)

The SST configuration is split into modular infrastructure components:

  • sst.config.ts — Root configuration, stack assembly
  • infra/database.ts — Shared VPC, Aurora Serverless v2 with RDS Proxy (0.5-16 ACU)
  • infra/storage.ts — S3 bucket for Procella checkpoint blob storage
  • infra/secrets.ts — SST secrets (encryption key, auth tokens, Descope keys)
  • infra/api.ts — Compiled Bun binary on Lambda (provided.al2023, handler at apps/server/src/lambda-bootstrap.ts)
  • infra/gc.ts — GC worker Lambda on a 1-minute EventBridge schedule (same Bun binary approach)
  • infra/site.ts — React UI dashboard (StaticSite with CloudFront)
  • infra/descope.ts — Descope auth provisioning (optional)
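As a sketch of how the root config might assemble these modules — following SST v3's dynamic-import pattern, with illustrative contents rather than the actual Procella source:

```typescript
// sst.config.ts (illustrative sketch, not the actual Procella config)
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: "procella",
      home: "aws",
    };
  },
  async run() {
    // Import order matters: later modules reference outputs of earlier ones.
    await import("./infra/database");
    await import("./infra/storage");
    await import("./infra/secrets");
    await import("./infra/api");
    await import("./infra/gc");
    await import("./infra/site");
  },
});
```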

Set required secrets before deploying:

npx sst secret set ProcellaEncryptionKey "$(openssl rand -hex 32)"
npx sst secret set ProcellaDevAuthToken "your-token-here"

In CI/CD, secrets are passed as SST_SECRET_* environment variables — no manual sst secret set needed. For local or one-off deploys:

npx sst secret set ProcellaDescopeManagementKey "..."
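On the infrastructure side, each secret is declared once and linked where needed. A sketch assuming SST v3's sst.Secret component (export names are illustrative):

```typescript
// infra/secrets.ts (sketch): declare secrets; values come from `sst secret set`
// locally, or from SST_SECRET_* environment variables in CI.
export const encryptionKey = new sst.Secret("ProcellaEncryptionKey");
export const devAuthToken = new sst.Secret("ProcellaDevAuthToken");
export const descopeManagementKey = new sst.Secret("ProcellaDescopeManagementKey");
```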

SST provides a local development mode that uses your local Docker PostgreSQL instead of Aurora:

# Start local PostgreSQL
docker compose up -d postgres
# Run SST dev mode
npx sst dev

In dev mode:

  • The database uses local Docker PostgreSQL (skips Aurora deployment)
  • Lambda functions run locally with hot reloading via Live Lambda Development
  • The React UI is served by Vite dev server with HMR
  • All AWS resources are stubbed or use local equivalents
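One way the local-versus-Aurora switch could look in infra/database.ts — a sketch assuming SST v3's $dev global, $interpolate, and Aurora component; the local connection string and property names are assumptions:

```typescript
// infra/database.ts (sketch): local Docker PostgreSQL under `sst dev`,
// Aurora Serverless v2 behind RDS Proxy otherwise.
export const vpc = new sst.aws.Vpc("ProcellaVpc", { nat: "ec2" });

export const databaseUrl = (() => {
  // $dev is true when running `npx sst dev`
  if ($dev) return "postgres://postgres:postgres@localhost:5432/procella";

  const db = new sst.aws.Aurora("ProcellaDb", {
    engine: "postgres",
    vpc,
    proxy: true,
    scaling: { min: "0.5 ACU", max: "16 ACU" },
  });
  return $interpolate`postgres://${db.username}:${db.password}@${db.host}:${db.port}/${db.database}`;
})();
```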

Deploy to a specific stage (environment):

npx sst deploy --stage production

SST uses stage-based deployment:

  • dev — Local development with live Lambda
  • staging — Pre-production testing
  • production — Production workload

Each stage gets isolated resources. View outputs after deployment:

npx sst state --stage production

This shows the API URL, CloudFront distribution URL, and other stack outputs.


Procella uses Aurora Serverless v2 with RDS Proxy for connection pooling. Lambda functions connect directly via Bun.sql (the same driver used in local development and Docker deployments), running inside the VPC for Aurora connectivity.

Key benefits of direct PG via RDS Proxy:

  • Same Bun.sql driver as local dev (no separate Data API driver)
  • RDS Proxy handles Lambda connection pooling automatically
  • No per-query HTTP overhead (direct TCP connections)
  • PROCELLA_DATABASE_URL is the only config needed, everywhere
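To make the single-config-value point concrete, here is a minimal, hypothetical helper for assembling such a connection string; the function name and parameters are illustrative, not the actual infra/database.ts wiring:

```typescript
// Hypothetical helper: build a PROCELLA_DATABASE_URL-style connection string.
// Credentials are percent-encoded so special characters survive URL parsing.
function databaseUrl(
  user: string,
  password: string,
  host: string,
  port: number,
  db: string,
): string {
  return `postgres://${encodeURIComponent(user)}:${encodeURIComponent(password)}@${host}:${port}/${db}`;
}

// Example: a password containing "@" is encoded as "%40".
const url = databaseUrl("procella", "p@ss", "proxy.example.rds.amazonaws.com", 5432, "procella");
// url === "postgres://procella:p%40ss@proxy.example.rds.amazonaws.com:5432/procella"
```

Because the same URL format works for local Docker PostgreSQL and RDS Proxy alike, no environment-specific database code is needed.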

Run database migrations:

npx sst shell --stage production -- bun apps/server/src/index.ts --migrate

Aurora Serverless v2 scaling:

  • min — 0.5 ACU — Minimum capacity (can be set to 0 for dev)
  • max — 16 ACU — Maximum capacity

Adjust in infra/database.ts:

scaling: {
  min: "0.5 ACU",
  max: "16 ACU"
}

Lambda scaling:

  • Automatic concurrency scaling
  • No configuration needed
  • Pay per invocation

StaticSite (CloudFront):

  • Global CDN with edge caching
  • Automatic scaling

Estimated monthly cost by component:

  • Aurora Serverless v2 — ~$43 — 0.5 ACU minimum, scales with load
  • Lambda — ~$0-5 — Pay per invocation, generous free tier
  • S3 — ~$1-5 — Depends on number of stacks
  • CloudFront — ~$1-5 — Static site CDN
  • VPC (NAT) — ~$6 — fck-nat EC2 instance
  • Total — ~$55-65 — Scales with usage

SST automatically maps PROCELLA_* environment variables via infra/api.ts. The following are set automatically:

  • PROCELLA_DATABASE_URL — Constructed from the Aurora RDS Proxy host, port, and credentials in infra/database.ts
  • PROCELLA_BLOB_BACKEND — Set to s3 automatically
  • PROCELLA_BLOB_S3_BUCKET — S3 bucket name from infra/storage.ts

All other PROCELLA_* config works unchanged. See the configuration reference for the full list.
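A sketch of how that mapping might look in infra/api.ts — component options and imported names are assumptions, not the actual Procella source:

```typescript
// infra/api.ts (sketch): wire the auto-set PROCELLA_* variables into the Lambda.
import { databaseUrl, vpc } from "./database";
import { bucket } from "./storage";

export const api = new sst.aws.Function("ProcellaApi", {
  // Self-contained Bun binary on a custom runtime
  runtime: "provided.al2023",
  handler: "bootstrap",
  vpc, // required to reach Aurora through RDS Proxy
  link: [bucket],
  url: true,
  environment: {
    PROCELLA_DATABASE_URL: databaseUrl,
    PROCELLA_BLOB_BACKEND: "s3",
    PROCELLA_BLOB_S3_BUCKET: bucket.name,
  },
});
```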


The GC worker runs as a separate Lambda function triggered by an EventBridge schedule every minute. It cleans up stale updates:

  • Running updates with expired leases
  • Not-started updates older than 1 hour
  • Requested updates that never started

The GC worker is defined in infra/gc.ts and uses the compiled Bun binary at apps/server/src/gc-bootstrap.ts. It runs independently from the API Lambda to ensure cleanup continues even if the API is under heavy load.
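The schedule wiring could be sketched like this, assuming SST v3's Cron component; the option names are assumptions and may differ from the actual infra/gc.ts:

```typescript
// infra/gc.ts (sketch): GC worker on a 1-minute EventBridge schedule.
import { vpc } from "./database";

export const gc = new sst.aws.Cron("ProcellaGc", {
  schedule: "rate(1 minute)",
  function: {
    runtime: "provided.al2023",
    handler: "bootstrap", // compiled from apps/server/src/gc-bootstrap.ts
    vpc,
  },
});
```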


To tear down a stage:

npx sst remove --stage <stage>

Compiled Bun binaries: Lambda functions use bun build --compile --production --sourcemap to create self-contained executables deployed as provided.al2023 custom runtimes. No Node.js, no esbuild, no runtime layers. --production enables dead code elimination and minification. --sourcemap embeds compressed source maps so stack traces resolve to original file:line.

VPC configuration: Both Aurora and Lambda functions run inside the VPC. Lambda connects to Aurora through RDS Proxy for connection pooling. The VPC uses fck-nat for cost-effective NAT instead of AWS NAT Gateway.
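In SST v3 terms the NAT choice is a one-line option. A sketch, with the construct name as an assumption:

```typescript
// Sketch: fck-nat-based EC2 NAT instead of a managed AWS NAT Gateway.
const vpc = new sst.aws.Vpc("ProcellaVpc", {
  nat: "ec2", // provisions a small fck-nat EC2 instance
});
```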

Production protection: The production stage has protect: true set, preventing accidental deletion via sst remove.
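With SST v3 this is typically a stage-conditional flag in the app() callback of sst.config.ts. A sketch; the actual condition may differ:

```typescript
// sst.config.ts (sketch): block `sst remove` on the production stage
app(input) {
  return {
    name: "procella",
    home: "aws",
    protect: input.stage === "production",
  };
},
```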