# Deploying to AWS with SST
Procella can be deployed serverlessly on AWS using SST (Serverless Stack). This approach uses Lambda for the API, Aurora Serverless v2 behind RDS Proxy for the database, S3 for blob storage, and CloudFront for the static UI. There are zero servers to manage, Lambda scales to zero when idle, and the base cost is roughly $55-65/month.
Official docs: sst.dev/docs
## Prerequisites

- AWS account with credentials configured
- Node.js 18+ and Bun
- SST CLI (`npx sst`)
## Infrastructure Overview

The SST configuration is split into modular infrastructure components:
| File | Purpose |
|---|---|
| `sst.config.ts` | Root configuration, stack assembly |
| `infra/database.ts` | Shared VPC, Aurora Serverless v2 with RDS Proxy (0.5-16 ACU) |
| `infra/storage.ts` | S3 bucket for Procella checkpoint blob storage |
| `infra/secrets.ts` | SST secrets (encryption key, auth tokens, Descope keys) |
| `infra/api.ts` | Compiled Bun binary on Lambda (`provided.al2023`, handler at `apps/server/src/lambda-bootstrap.ts`) |
| `infra/gc.ts` | GC worker Lambda on 1-minute EventBridge schedule (same Bun binary approach) |
| `infra/site.ts` | React UI dashboard (StaticSite with CloudFront) |
| `infra/descope.ts` | Descope auth provisioning (optional) |
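These modules are assembled in `sst.config.ts`. As a rough sketch of the usual SST v3 pattern (the actual file may differ), each infra module is imported from `run()`, which instantiates its resources for the current stage:

```ts
/// <reference path="./.sst/platform/config.d.ts" />
// Illustrative sketch only; the real sst.config.ts may differ.
export default $config({
  app(input) {
    return {
      name: "procella",
      home: "aws",
      // Guard production against accidental `sst remove`.
      protect: input.stage === "production",
    };
  },
  async run() {
    // Importing each module creates its resources for this stage.
    await import("./infra/database");
    await import("./infra/storage");
    await import("./infra/secrets");
    await import("./infra/api");
    await import("./infra/gc");
    await import("./infra/site");
  },
});
```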
## Set Secrets

Set required secrets before deploying:

```sh
npx sst secret set ProcellaEncryptionKey "$(openssl rand -hex 32)"
npx sst secret set ProcellaDevAuthToken "your-token-here"
```

In CI/CD, secrets are passed as `SST_SECRET_*` environment variables, so no manual `sst secret set` is needed. For local or one-off deploys:

```sh
npx sst secret set ProcellaDescopeManagementKey "..."
```
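In CI, for example, the same secrets can be supplied through the environment before the deploy step (the `CI_*` variable names here are illustrative):

```sh
# Illustrative CI step: SST picks up SST_SECRET_<Name> from the environment.
export SST_SECRET_ProcellaEncryptionKey="$CI_ENCRYPTION_KEY"
export SST_SECRET_ProcellaDevAuthToken="$CI_DEV_AUTH_TOKEN"
npx sst deploy --stage production
```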
## Local Development

SST provides a local development mode that uses your local Docker PostgreSQL instead of Aurora:
```sh
# Start local PostgreSQL
docker compose up -d postgres

# Run SST dev mode
npx sst dev
```

In dev mode:
- The database uses local Docker PostgreSQL (skips Aurora deployment)
- Lambda functions run locally with hot reloading via Live Lambda Development
- The React UI is served by Vite dev server with HMR
- All AWS resources are stubbed or use local equivalents
## Deploy

Deploy to a specific stage (environment):

```sh
npx sst deploy --stage production
```

SST uses stage-based deployment:

- `dev`: Local development with Live Lambda
- `staging`: Pre-production testing
- `production`: Production workload
Each stage gets isolated resources. View outputs after deployment:

```sh
npx sst state --stage production
```

This shows the API URL, CloudFront distribution URL, and other stack outputs.
## Database and Migrations

Procella uses Aurora Serverless v2 with RDS Proxy for connection pooling. Lambda functions connect directly via `Bun.sql` (the same driver used in local development and Docker deployments), running inside the VPC for Aurora connectivity.
Key benefits of direct PostgreSQL via RDS Proxy:

- Same `Bun.sql` driver as local dev (no separate Data API driver)
- RDS Proxy handles Lambda connection pooling automatically
- No per-query HTTP overhead (direct TCP connections)
- `PROCELLA_DATABASE_URL` is the only config needed, everywhere
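As a minimal sketch (not taken from the Procella codebase; the real entry point is `apps/server/src/lambda-bootstrap.ts`), a handler can hold one `Bun.sql` client per Lambda execution environment and let RDS Proxy do the pooling:

```ts
import { SQL } from "bun";

// One client per Lambda execution environment, reused across warm invocations.
// RDS Proxy multiplexes these onto a small pool of real Aurora connections.
const sql = new SQL(process.env.PROCELLA_DATABASE_URL!);

// Hypothetical handler shape, for illustration only.
export async function handler() {
  const [row] = await sql`SELECT now() AS db_time`;
  return { statusCode: 200, body: JSON.stringify(row) };
}
```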
Run database migrations:
```sh
npx sst shell --stage production -- bun apps/server/src/index.ts --migrate
```

## Scaling

Aurora Serverless v2 scaling:
| Setting | Default | Description |
|---|---|---|
| min | 0.5 ACU | Minimum capacity (can set to 0 for dev) |
| max | 16 ACU | Maximum capacity |
Adjust in `infra/database.ts`:

```ts
scaling: {
  min: "0.5 ACU",
  max: "16 ACU",
},
```

Lambda scaling:
- Automatic concurrency scaling
- No configuration needed
- Pay per invocation
StaticSite (CloudFront):
- Global CDN with edge caching
- Automatic scaling
## Cost Breakdown

| Component | Cost/Month | Notes |
|---|---|---|
| Aurora Serverless v2 | ~$43 | 0.5 ACU minimum, scales with load |
| Lambda | ~$0-5 | Pay per invocation, generous free tier |
| S3 | ~$1-5 | Depends on number of stacks |
| CloudFront | ~$1-5 | Static site CDN |
| VPC (NAT) | ~$6 | fck-nat EC2 instance |
| Total | ~$55-65 | Scales with usage |
## Environment Variables

SST automatically maps `PROCELLA_*` environment variables via `infra/api.ts`. The following are set automatically:
| Variable | Source |
|---|---|
| `PROCELLA_DATABASE_URL` | Constructed from Aurora RDS Proxy host/port/credentials in `infra/database.ts` |
| `PROCELLA_BLOB_BACKEND` | Set to `s3` automatically |
| `PROCELLA_BLOB_S3_BUCKET` | S3 bucket name from `infra/storage.ts` |

All other `PROCELLA_*` config works unchanged. See the configuration reference for the full list.
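As an illustrative fragment of what such a mapping might look like (`secret`, `proxy`, and `bucket` are hypothetical placeholders; the real wiring lives in `infra/api.ts` and `infra/database.ts`):

```ts
// Hypothetical fragment; `secret`, `proxy`, and `bucket` stand in for the
// resources created in infra/database.ts and infra/storage.ts.
environment: {
  PROCELLA_DATABASE_URL: $interpolate`postgres://${secret.username}:${secret.password}@${proxy.endpoint}:5432/procella`,
  PROCELLA_BLOB_BACKEND: "s3",
  PROCELLA_BLOB_S3_BUCKET: bucket.name,
},
```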
## GC Worker

The GC worker runs as a separate Lambda function triggered by an EventBridge cron schedule (every minute). It cleans up stale updates:
- Running updates with expired leases
- Not-started updates older than 1 hour
- Requested updates that never started
The GC worker is defined in `infra/gc.ts` and uses the compiled Bun binary built from `apps/server/src/gc-bootstrap.ts`. It runs independently from the API Lambda so that cleanup continues even when the API is under heavy load.
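A minimal sketch of the schedule, assuming SST v3's `Cron` component (the resource name is illustrative; the actual definition is in `infra/gc.ts` and uses the compiled-binary approach rather than a plain handler path):

```ts
// Illustrative sketch only; the real definition lives in infra/gc.ts.
new sst.aws.Cron("ProcellaGc", {
  schedule: "rate(1 minute)",
  job: "apps/server/src/gc-bootstrap.handler",
});
```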
## Removing

To tear down a stage:

```sh
npx sst remove --stage <stage>
```

## Additional Notes
Section titled “Additional Notes”Compiled Bun binaries: Lambda functions use bun build --compile --production --sourcemap to create self-contained executables deployed as provided.al2023 custom runtimes. No Node.js, no esbuild, no runtime layers. --production enables dead code elimination and minification. --sourcemap embeds compressed source maps so stack traces resolve to original file:line.
**VPC configuration:** Both Aurora and the Lambda functions run inside the VPC. Lambda connects to Aurora through RDS Proxy for connection pooling. The VPC uses fck-nat for cost-effective NAT instead of an AWS NAT Gateway.
**Production protection:** The production stage has `protect: true` set, preventing accidental deletion via `sst remove`.