Date: March 10, 2026
Status: Production Hardening — Tier 1
Target: SQLite → Supabase migration with security lockdown
This guide walks you through deploying DivineOS from local development (SQLite) to production (Supabase PostgreSQL) with security hardening for remote access via ngrok or cloud infrastructure.
Key Sections:
- Local Development (SQLite baseline)
- Supabase Setup (PostgreSQL cloud database)
- Connection Pooling & Performance
- Security Lockdown (JWT, API key rotation, rate limiting)
- Remote Access via ngrok (development/testing)
- Production Deployment (AWS/GCP)
# Python 3.11+
python --version
# Install dependencies
pip install -r requirements.txt
# Create .env file
cp .env.example .env

# Database (SQLite for local dev)
DATABASE_URL=sqlite:///./data/consciousness.db
DB_PASSPHRASE=your-32-char-passphrase-here-12345
# LLM API Keys (optional for local testing)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Security (local dev — relaxed)
DIVINEOS_SEAL_KEY=dev-seal-key-32-chars-long-12345
JWT_SECRET_KEY=dev-jwt-secret-32-chars-long-12345
# Optional: Enable security middleware
DIVINEOS_SECURITY_ENABLED=0 # Set to 1 for production
# Optional: Test mode (disables thread spawning)
DIVINEOS_TEST_NO_UNIFIED=0
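How these values are consumed is up to api_server.py; as a minimal sketch, assuming the app reads settings from the environment (for example via python-dotenv):

```python
# Minimal sketch: load .env and read the settings above.
# Assumes python-dotenv; check api_server.py for the actual mechanism.
import os
from dotenv import load_dotenv

load_dotenv()  # populate os.environ from .env

DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./data/consciousness.db")
SECURITY_ENABLED = os.environ.get("DIVINEOS_SECURITY_ENABLED", "0") == "1"
```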
# Start API server
python api_server.py
# Or with uvicorn
uvicorn api_server:app --host 127.0.0.1 --port 8000 --reload
# Test health check
curl http://localhost:8000/health
# Test process endpoint
curl -X POST http://localhost:8000/process \
-H "Content-Type: application/json" \
-d '{"text": "Hello, DivineOS"}'- Go to supabase.com
- Sign up or log in
- Create a new project:
  - Name: divine-os-consciousness
  - Database Password: generate a strong password (save it securely)
  - Region: choose the region closest to your deployment
- Wait for project to initialize (~2 minutes)
- In Supabase dashboard, go to Settings → Database
- Copy the Connection String (URI format):
  postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres
- Note the Connection Pooler URL (for connection pooling):
  postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres?sslmode=require
DivineOS uses SQLAlchemy ORM. The schema is auto-created on first run, but you can pre-initialize:
# Set Supabase connection string
export DATABASE_URL="postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres?sslmode=require"
# Initialize schema (creates tables)
python -c "
from DivineOS.core.encrypted_db import get_encrypted_connection
conn = get_encrypted_connection()
# Tables auto-created by SQLAlchemy on first query
print('✓ Database initialized')
"# Test connection
psql postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres
# List tables
\dt
# Exit
\q

- Problem: Each request opens a new database connection → slow, resource-intensive
- Solution: Reuse connections via pooling → 10-100x faster
Supabase provides built-in connection pooling. Use the Connection Pooler URL:
# .env (Production)
DATABASE_URL=postgresql://postgres:[PASSWORD]@[POOLER_HOST]:[PORT]/postgres?sslmode=require

Connection Pooler Settings (in Supabase dashboard):
- Pool Mode: transaction (recommended for web apps)
- Max Connections: 100 (adjust based on load)
- Idle Timeout: 600 seconds
Update DivineOS/core/encrypted_db.py:
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool
# Production: use connection pooling
engine = create_engine(
    DATABASE_URL,
    poolclass=QueuePool,
    pool_size=20,        # Connections to keep in the pool
    max_overflow=40,     # Additional connections if needed
    pool_recycle=3600,   # Recycle connections every hour
    pool_pre_ping=True,  # Test connection before use
    echo=False,          # Set to True for SQL debugging
)

# Monitor connection pool
# In your app, add monitoring:
from sqlalchemy import event
@event.listens_for(engine, "connect")
def receive_connect(dbapi_conn, connection_record):
print(f"Pool size: {engine.pool.size()}, Checked out: {engine.pool.checkedout()}")DivineOS uses JWT for API authentication. Configure in .env:
# Generate strong JWT secret (32+ chars)
JWT_SECRET_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
# Add to .env
echo "JWT_SECRET_KEY=$JWT_SECRET_KEY" >> .envImplement key rotation for external API calls (OpenAI, Anthropic, etc.):
# .env (Production)
OPENAI_API_KEY=sk-...
OPENAI_API_KEY_BACKUP=sk-... # Backup key for rotation
# Rotation script (run weekly)
python scripts/rotate_api_keys.py
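The contents of scripts/rotate_api_keys.py are not shown here; the following is a hypothetical sketch of what a weekly rotation job could look like, assuming the backup key has already been issued by the provider:

```python
# scripts/rotate_api_keys.py: hypothetical sketch, not the shipped script.
# Promotes the *_BACKUP key to primary in .env; the old key must still be
# revoked with the provider afterwards.
import os
from pathlib import Path

ENV_FILE = Path(".env")

def promote_backup(primary: str, backup: str) -> None:
    new_value = os.environ[backup]
    lines = ENV_FILE.read_text().splitlines()
    rewritten = [
        f"{primary}={new_value}" if line.startswith(f"{primary}=") else line
        for line in lines
    ]
    ENV_FILE.write_text("\n".join(rewritten) + "\n")
    print(f"{primary} rotated; remember to revoke the previous key.")

if __name__ == "__main__":
    promote_backup("OPENAI_API_KEY", "OPENAI_API_KEY_BACKUP")
```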
# .env (Production)
DIVINEOS_SECURITY_ENABLED=1

This enables:
- CORS: Restrict cross-origin requests
- TrustedHost: Only allow specific domains
- SecurityHeaders: Add security headers (HSTS, CSP, etc.)
- RateLimiting: Prevent abuse (100 req/min per IP)
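When DIVINEOS_SECURITY_ENABLED=1, the stack looks roughly like the sketch below. The class names shown are the stock FastAPI/Starlette middleware; DivineOS's own security module may wrap or extend them, and the hostnames and origins are placeholders.

```python
# Sketch: CORS and trusted-host restrictions in FastAPI.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],   # restrict cross-origin callers
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["api.example.com"],           # reject unexpected Host headers
)
```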
DivineOS uses SQLCipher for encrypted database storage:
# .env (Production)
DB_PASSPHRASE=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
# Add to .env
echo "DB_PASSPHRASE=$DB_PASSPHRASE" >> .env# Generate seal key (32 chars)
# Generate seal key (32 chars)
DIVINEOS_SEAL_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
# Add to .env
echo "DIVINEOS_SEAL_KEY=$DIVINEOS_SEAL_KEY" >> .env- Expose local API to internet for IDE integrations (Cursor, Claude Code)
- Test webhooks and integrations
- Share API with team members
# Install ngrok
# macOS: brew install ngrok
# Linux: download from ngrok.com
# Windows: download from ngrok.com
# Authenticate
ngrok config add-authtoken YOUR_AUTH_TOKEN
# Start tunnel
ngrok http 8000
# Output:
# Forwarding https://abc123.ngrok.io -> http://localhost:8000

# 1. Enable JWT authentication (required)
# In api_server.py, add JWT validation:
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
import jwt
import os

security = HTTPBearer()

async def verify_jwt(credentials: HTTPAuthorizationCredentials = Depends(security)):
    try:
        payload = jwt.decode(
            credentials.credentials,
            os.environ["JWT_SECRET_KEY"],
            algorithms=["HS256"],
        )
        return payload
    except jwt.InvalidTokenError:
        raise HTTPException(status_code=401, detail="Invalid token")
# Add the JWT requirement to the /process endpoint
@app.post("/process")
async def process_request(
    request: RequestInput,
    token: dict = Depends(verify_jwt)  # Require JWT
):
    # ... process request ...

# 2. Rate limiting (prevent abuse)
# In api_server.py:
from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/process")
@limiter.limit("100/minute")  # 100 requests per minute per IP
async def process_request(request: Request, body: RequestInput):
    # slowapi needs the raw Request here; the JSON payload moves to `body`
    # ... process request ...

# 3. IP whitelisting (optional)
# In .env:
ALLOWED_IPS=192.168.1.1,10.0.0.1
# In api_server.py:
from fastapi import Request
from fastapi.responses import JSONResponse
import os

@app.middleware("http")
async def check_ip(request: Request, call_next):
    # An empty ALLOWED_IPS disables the whitelist
    allowed = [ip for ip in os.environ.get("ALLOWED_IPS", "").split(",") if ip]
    if allowed and request.client.host not in allowed:
        return JSONResponse({"error": "IP not allowed"}, status_code=403)
    return await call_next(request)

# Generate token (valid for 24 hours)
python -c "
import jwt
import os
from datetime import datetime, timedelta
secret = os.environ['JWT_SECRET_KEY']
payload = {
'sub': 'test-user',
'exp': datetime.utcnow() + timedelta(hours=24)
}
token = jwt.encode(payload, secret, algorithm='HS256')
print(f'Token: {token}')
"
# Use token in requests
curl -X POST https://abc123.ngrok.io/process \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"text": "Hello"}'# 1. Create RDS PostgreSQL instance
# - Engine: PostgreSQL 14+
# - Instance class: db.t3.micro (free tier) or db.t3.small
# - Storage: 20 GB
# - Multi-AZ: Yes (for HA)
# 2. Create ECS cluster
# - Cluster name: divine-os
# - Infrastructure: Fargate
# 3. Create task definition
# - Image: your-ecr-repo/divine-os:latest
# - CPU: 256
# - Memory: 512
# - Environment variables:
# - DATABASE_URL: RDS connection string
# - JWT_SECRET_KEY: from Secrets Manager
# - DIVINEOS_SEAL_KEY: from Secrets Manager
# 4. Create service
# - Task definition: divine-os
# - Desired count: 2 (for HA)
# - Load balancer: Application Load Balancer
# 5. Configure auto-scaling
# - Min tasks: 2
# - Max tasks: 10
# - Target CPU: 70%
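The steps above are done in the AWS console; if you prefer to script step 3, a rough boto3 sketch follows (role ARNs, secret ARNs, and the ECR image URI are placeholders):

```python
# Sketch: register the Fargate task definition for step 3 with boto3.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="divine-os",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/divine-os-exec",
    containerDefinitions=[{
        "name": "divine-os",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/divine-os:latest",
        "essential": True,
        "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
        "environment": [{"name": "DATABASE_URL", "value": "postgresql://..."}],
        "secrets": [
            {"name": "JWT_SECRET_KEY",
             "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:jwt-key"},
            {"name": "DIVINEOS_SEAL_KEY",
             "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:seal-key"},
        ],
    }],
)
```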
# 1. Build Docker image
docker build -t divine-os:latest .
# 2. Push to Google Container Registry
gcloud builds submit --tag gcr.io/PROJECT_ID/divine-os:latest
# 3. Deploy to Cloud Run
gcloud run deploy divine-os \
--image gcr.io/PROJECT_ID/divine-os:latest \
--platform managed \
--region us-central1 \
--memory 512Mi \
--cpu 1 \
--set-env-vars DATABASE_URL=postgresql://...,JWT_SECRET_KEY=... \
--allow-unauthenticated

# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: divine_os
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  divine-os:
    build: .
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@postgres:5432/divine_os
      JWT_SECRET_KEY: ${JWT_SECRET_KEY}
      DIVINEOS_SEAL_KEY: ${DIVINEOS_SEAL_KEY}
      DIVINEOS_SECURITY_ENABLED: "1"
    ports:
      - "8000:8000"
    depends_on:
      - postgres

volumes:
  postgres_data:

# Run
docker-compose up -d
# Check logs
docker-compose logs -f divine-os

# Kubernetes liveness probe
curl http://localhost:8000/health
# Response:
# {"status": "healthy", "timestamp": 1707948294.5, "version": "1.0.0"}# Supabase: Automatic daily backups (7-day retention)
# Supabase: Automatic daily backups (7-day retention)
# Manual backup:
pg_dump postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres > backup.sql
# Restore:
psql postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres < backup.sql

# Enable Prometheus metrics
# In api_server.py:
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST
from fastapi import Response

request_count = Counter('divine_os_requests_total', 'Total requests')
request_duration = Histogram('divine_os_request_duration_seconds', 'Request duration')

@app.get("/metrics")
async def metrics():
    # Serve the Prometheus exposition format as plain text, not JSON
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)
# Scrape with Prometheus
# prometheus.yml:
# scrape_configs:
#   - job_name: 'divine-os'
#     metrics_path: '/metrics'
#     static_configs:
#       - targets: ['localhost:8000']
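The Counter and Histogram defined above still have to be updated per request. One way to do that (an assumption here, not necessarily how api_server.py wires it) is an HTTP middleware:

```python
# Sketch: update the Prometheus metrics defined above on every request.
import time
from fastapi import Request

@app.middleware("http")
async def record_metrics(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    request_count.inc()                                     # total requests
    request_duration.observe(time.perf_counter() - start)   # request latency
    return response
```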
# Check database connectivity
psql postgresql://postgres:[PASSWORD]@[HOST]:[PORT]/postgres
# If fails, check:
# 1. Firewall rules (allow port 5432)
# 2. VPC security groups (allow inbound from app)
# 3. Database password (correct?)

# Enable query logging
# In .env:
SQLALCHEMY_ECHO=1
# Check slow query log
# In Supabase dashboard: Logs → Slow Queries
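SQLALCHEMY_ECHO is assumed here to be read when the engine is created; a sketch of how DivineOS/core/encrypted_db.py could honor it (verify against the actual module):

```python
# Sketch: honor SQLALCHEMY_ECHO when building the engine (assumption).
import os
from sqlalchemy import create_engine

engine = create_engine(
    os.environ["DATABASE_URL"],
    echo=os.environ.get("SQLALCHEMY_ECHO", "0") == "1",  # log SQL when enabled
)
```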
python -c "
import jwt
import os
from datetime import datetime, timedelta
secret = os.environ['JWT_SECRET_KEY']
payload = {
'sub': 'test-user',
'exp': datetime.utcnow() + timedelta(hours=24)
}
token = jwt.encode(payload, secret, algorithm='HS256')
print(f'New token: {token}')
"- Database: Supabase PostgreSQL configured
- Connection pooling: PgBouncer enabled
- JWT: Secret key generated and stored securely
- API keys: Rotated and stored in Secrets Manager
- Security: CORS, TrustedHost, SecurityHeaders enabled
- Rate limiting: Configured (100 req/min per IP)
- Monitoring: Prometheus metrics enabled
- Backups: Automated daily backups configured
- Logging: Centralized logging (CloudWatch, Stackdriver, etc.)
- SSL/TLS: HTTPS enforced
- Load balancer: Configured with health checks
- Auto-scaling: Configured (min 2, max 10 instances)
- Disaster recovery: Tested failover procedure
- Supabase Documentation
- PostgreSQL Connection Pooling
- JWT Best Practices
- OWASP API Security
- ngrok Documentation
Next Steps: After deploying to production, follow the Security Hardening Guide for additional security measures.