# EC2 Deployment Playbook
Step-by-step guide for deploying NHC Portals to AWS using Docker Compose on EC2. Follow in order — each section depends on the previous.
Before starting, confirm every item in the Prerequisites Checklist is resolved. The hard blockers are `nova.p12` and a production `DJANGO_SECRET_KEY`. Do not provision infrastructure until those are in hand.
## 1. AWS infrastructure
All commands assume the AWS CLI is configured with appropriate permissions (AdministratorAccess or a scoped deployment role).
### 1a. S3 buckets

Two buckets are required. Replace `nhc` with your preferred prefix and set the correct region.
```bash
REGION=us-east-1
PUBLIC_BUCKET=nhc-portals-public
PRIVATE_BUCKET=nhc-portals-private

# Public bucket — static files and public uploads.
# Note: us-east-1 rejects an explicit LocationConstraint; omit the
# --create-bucket-configuration flag entirely when REGION=us-east-1.
aws s3api create-bucket \
  --bucket "$PUBLIC_BUCKET" \
  --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

aws s3api put-bucket-cors --bucket "$PUBLIC_BUCKET" --cors-configuration '{
  "CORSRules": [{
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://your-frontend-domain.com"],
    "MaxAgeSeconds": 3000
  }]
}'

# Private bucket — documents and media (presigned URL access only)
aws s3api create-bucket \
  --bucket "$PRIVATE_BUCKET" \
  --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

aws s3api put-public-access-block \
  --bucket "$PRIVATE_BUCKET" \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```

### 1b. IAM role for EC2
Create an instance profile so the EC2 instance can access S3 without long-lived access keys.
```bash
# Create the role
aws iam create-role \
  --role-name nhc-portals-ec2-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach an inline S3 policy (adjust bucket names as needed).
# Double quotes so $PUBLIC_BUCKET and $PRIVATE_BUCKET expand.
aws iam put-role-policy \
  --role-name nhc-portals-ec2-role \
  --policy-name nhc-portals-s3 \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [{
      \"Effect\": \"Allow\",
      \"Action\": [\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucket\"],
      \"Resource\": [
        \"arn:aws:s3:::$PUBLIC_BUCKET\",
        \"arn:aws:s3:::$PUBLIC_BUCKET/*\",
        \"arn:aws:s3:::$PRIVATE_BUCKET\",
        \"arn:aws:s3:::$PRIVATE_BUCKET/*\"
      ]
    }]
  }"

# Create the instance profile and attach the role
aws iam create-instance-profile --instance-profile-name nhc-portals-ec2-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name nhc-portals-ec2-profile \
  --role-name nhc-portals-ec2-role
```

With an instance role in place, remove `DJANGO_AWS_ACCESS_KEY_ID` and `DJANGO_AWS_SECRET_ACCESS_KEY` from the production `.env` — boto3 will pick up credentials automatically from the instance metadata.
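The escaped-quote JSON in the `put-role-policy` call above is easy to get wrong. One way to sanity-check the policy document before handing it to the CLI is to build it with a small script — a sketch using the example bucket names from this guide:

```python
import json

# Example bucket names from this guide — substitute your own.
buckets = ["nhc-portals-public", "nhc-portals-private"]

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
        # Each bucket needs both the bucket ARN (for ListBucket) and the
        # object ARN (for the object-level actions).
        "Resource": [
            arn
            for b in buckets
            for arn in (f"arn:aws:s3:::{b}", f"arn:aws:s3:::{b}/*")
        ],
    }],
}

# Compact JSON suitable for --policy-document
print(json.dumps(policy))
```

Redirect the output to a file and pass it as `--policy-document file://policy.json` to sidestep shell-escaping mistakes entirely.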
### 1c. RDS MySQL
```bash
# Create a subnet group first (use your VPC's private subnets)
aws rds create-db-subnet-group \
  --db-subnet-group-name nhc-portals-db-subnet \
  --db-subnet-group-description "NHC Portals DB subnet group" \
  --subnet-ids subnet-XXXXXXXX subnet-YYYYYYYY

# Create the RDS instance
aws rds create-db-instance \
  --db-instance-identifier nhc-portals-db \
  --db-instance-class db.t3.medium \
  --engine mysql \
  --engine-version 8.0 \
  --master-username nhcadmin \
  --master-user-password YOUR_DB_PASSWORD \
  --db-name novahomecareapi \
  --db-subnet-group-name nhc-portals-db-subnet \
  --vpc-security-group-ids sg-XXXXXXXX \
  --backup-retention-period 7 \
  --storage-type gp3 \
  --allocated-storage 20 \
  --no-publicly-accessible
```

Note the RDS endpoint once the instance is available — it becomes `DATABASE_URL`.
### 1d. ElastiCache Redis
```bash
# Assumes a cache subnet group named nhc-portals-cache-subnet exists —
# create one first, mirroring the RDS subnet-group step above.
aws elasticache create-cache-cluster \
  --cache-cluster-id nhc-portals-redis \
  --cache-node-type cache.t3.micro \
  --engine redis \
  --engine-version 7.2 \
  --num-cache-nodes 1 \
  --cache-subnet-group-name nhc-portals-cache-subnet \
  --security-group-ids sg-XXXXXXXX
```

Note the primary endpoint — it becomes `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND`.
### 1e. Security groups
| Group | Inbound rules |
|---|---|
| `nhc-portals-ec2` | 22 (SSH, your IP only), 80 (HTTP, 0.0.0.0/0), 443 (HTTPS, 0.0.0.0/0) |
| `nhc-portals-db` | 3306 from `nhc-portals-ec2` only |
| `nhc-portals-redis` | 6379 from `nhc-portals-ec2` only |
### 1f. EC2 instance
```bash
# Use the current Amazon Linux 2023 AMI ID for your region.
# (A comment after a trailing backslash breaks line continuation,
# so the AMI note lives up here instead.)
aws ec2 run-instances \
  --image-id ami-XXXXXXXX \
  --instance-type t3.medium \
  --key-name your-key-pair \
  --security-group-ids sg-XXXXXXXX \
  --subnet-id subnet-XXXXXXXX \
  --iam-instance-profile Name=nhc-portals-ec2-profile \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp3"}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=nhc-portals}]'
```

## 2. EC2 server setup
SSH into the instance and run:
```bash
# Docker
sudo dnf update -y
sudo dnf install -y docker git
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user

# Docker Compose plugin
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

# Caddy (TLS + reverse proxy — handles Let's Encrypt automatically)
sudo dnf install -y 'dnf-command(copr)'
sudo dnf copr enable -y @caddy/caddy
sudo dnf install -y caddy
sudo systemctl enable caddy
```

Log out and back in so the docker group membership takes effect.
## 3. Application setup
```bash
# Clone the parent repo with all submodules
git clone --recurse-submodules git@gitlab.com:nova-hc/mwe/v2/pod-portals.git /opt/nhc-portals
cd /opt/nhc-portals
```

### 3a. Environment files
```bash
# Django
cp django-api/.env.development.example django-api/.env
nano django-api/.env  # fill in production values — see table below
```

| Variable | Production value |
|---|---|
| `DATABASE_URL` | `mysql://nhcadmin:PASSWORD@RDS_ENDPOINT:3306/novahomecareapi` |
| `CELERY_BROKER_URL` | `redis://ELASTICACHE_ENDPOINT:6379/0` |
| `CELERY_RESULT_BACKEND` | `redis://ELASTICACHE_ENDPOINT:6379/0` |
| `DJANGO_SECRET_KEY` | Generate with `python -c "import secrets; print(secrets.token_urlsafe(50))"` |
| `DJANGO_ALLOWED_HOSTS` | `api.your-domain.com` |
| `DJANGO_ALLOWED_ORIGINS` | `https://app.your-domain.com` |
| `DJANGO_AWS_S3_REGION_NAME` | `us-east-1` |
| `DJANGO_AWS_S3_PUBLIC_BUCKET_NAME` | `nhc-portals-public` |
| `DJANGO_AWS_S3_PRIVATE_BUCKET_NAME` | `nhc-portals-private` |
| `SENDGRID_API_KEY` | From the SendGrid dashboard |
| `DEFAULT_FROM_EMAIL` | `Nova Home Care <noreply@your-domain.com>` |
| `DJANGO_GOOGLE_RECAPTCHA_TOKEN` | From the Google reCAPTCHA console |
| `PYHANKO_P12_PASSWORD` | Certificate password |
| `DJANGO_SECURE_SSL_REDIRECT` | `True` |
Since the EC2 instance uses an IAM role, omit `DJANGO_AWS_ACCESS_KEY_ID` and `DJANGO_AWS_SECRET_ACCESS_KEY` — boto3 resolves credentials from the instance metadata automatically.
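A malformed `DATABASE_URL` or broker URL only surfaces once the containers try to connect. A quick pre-flight check is to parse the values with the standard library — a sketch using placeholder endpoints in the shape this guide uses, not real credentials:

```python
from urllib.parse import urlsplit

# Placeholder values shaped like the table above — substitute your real ones.
database_url = "mysql://nhcadmin:PASSWORD@nhc-portals-db.abc123.us-east-1.rds.amazonaws.com:3306/novahomecareapi"
broker_url = "redis://nhc-portals-redis.abc123.cache.amazonaws.com:6379/0"

# urlsplit handles any scheme with a // authority section.
db = urlsplit(database_url)
assert db.scheme == "mysql"
assert db.port == 3306
assert db.path == "/novahomecareapi"  # database name

redis = urlsplit(broker_url)
assert redis.scheme == "redis"
assert redis.port == 6379

print("URLs parse correctly")
```

If any assertion fails (a stray space, a missing port, an unescaped character in the password), fix the `.env` value before starting the stack.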
### 3b. Signing certificate
```bash
sudo mkdir -p /opt/nhc-portals/django-api/secrets

# Copy nova.p12 from secure storage (e.g. SSM, 1Password, encrypted transfer)
sudo cp /path/to/nova.p12 /opt/nhc-portals/django-api/secrets/nova.p12
sudo chmod 600 /opt/nhc-portals/django-api/secrets/nova.p12
```

Verify that the `django-api` Docker configuration mounts `secrets/` correctly — the app expects the file at `ROOT_DIR/secrets/nova.p12` inside the container.
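Because a missing certificate only fails at signing time (see the smoke-test checklist below), it is worth checking the file on the host before the first deploy. A minimal sketch — the path matches the layout above:

```python
import os
import stat

# Path from the layout above — adjust if your checkout lives elsewhere.
P12_PATH = "/opt/nhc-portals/django-api/secrets/nova.p12"

def check_p12(path):
    """Return a list of problems with the signing certificate file."""
    problems = []
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return [f"missing: {path}"]
    if st.st_size == 0:
        problems.append("file is empty")
    mode = stat.S_IMODE(st.st_mode)
    if mode != 0o600:
        problems.append(f"permissions are {oct(mode)}, expected 0o600")
    return problems

for problem in check_p12(P12_PATH):
    print(problem)
```

This only confirms presence and permissions; whether `PYHANKO_P12_PASSWORD` actually opens the certificate is still exercised by the signing smoke test.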
### 3c. Update `docker-compose.yml` for production
The root `docker-compose.yml` runs the Vite dev server for the portals — not appropriate for production. For Phase 1, either serve the frontend as a static build from the same EC2 instance via Caddy, or skip the `portals` service and deploy the frontend to Cloudflare Pages separately.
To disable the portals dev server:
```bash
# Start everything except portals (serve the frontend separately)
docker compose up -d django celeryworker celerybeat db redis
```

## 4. First-run commands
Run these once after the containers are up:
```bash
# Migrations
docker compose exec django python manage.py migrate

# Static files to S3
docker compose exec django python manage.py collectstatic --noinput

# Load fixtures (seed organisations, roles, etc.)
docker compose exec django bash -c "python manage.py loaddata novahomecareapi/fixtures/*.json"

# Create superuser
docker compose exec django python manage.py createsuperuser
```

## 5. TLS + reverse proxy (Caddy)
Create `/etc/caddy/Caddyfile`:
```
api.your-domain.com {
    reverse_proxy localhost:8000
}
```

```bash
sudo systemctl restart caddy
```

Caddy automatically obtains and renews a Let's Encrypt certificate. Point the DNS A record for `api.your-domain.com` at the EC2 public IP before restarting.
## 6. Smoke test checklist
Run through these after the first deploy:
- [ ] `https://api.your-domain.com/api/schema/` returns the OpenAPI schema
- [ ] `https://api.your-domain.com/api/schema/swagger-ui/` loads Swagger UI
- [ ] Django admin accessible at the custom admin URL
- [ ] Login via the portals frontend succeeds
- [ ] A document can be generated and signed (tests that `nova.p12` is working) — do not skip this. A missing or incorrect certificate does not crash the app on startup; it fails silently only when a form submission triggers PDF signing. The Celery task will error, but the user just sees the document stuck in a pending state.
- [ ] An email is sent and received (tests SendGrid)
- [ ] A file upload lands in the private S3 bucket
- [ ] Celery tasks are processing (check Django admin → Periodic Tasks)
- [ ] Run the Cypress login suite against the production URL
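The two schema checks at the top of the list are easy to script for repeat deploys. A minimal sketch — the base URL is a placeholder:

```python
import urllib.request

# Endpoint paths from the checklist above; base URL is a placeholder.
SMOKE_PATHS = ["/api/schema/", "/api/schema/swagger-ui/"]

def smoke_urls(base):
    """Join the base URL with each checklist path."""
    return [base.rstrip("/") + p for p in SMOKE_PATHS]

def check(url, timeout=10):
    """Return the HTTP status for url; raises URLError if unreachable."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

# In a real run against the deployed API:
#   for url in smoke_urls("https://api.your-domain.com"):
#       print(url, check(url))   # expect 200 for each
print(smoke_urls("https://api.your-domain.com"))
```

The remaining checklist items (signing, email, uploads, Celery) exercise external services and are better verified by hand through the UI.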
## 7. Ongoing operations
### Deploy a new version
```bash
cd /opt/nhc-portals
git pull --recurse-submodules
docker compose build django
docker compose up -d --no-deps django celeryworker celerybeat
docker compose exec django python manage.py migrate
docker compose exec django python manage.py collectstatic --noinput
```

### View logs

```bash
docker compose logs -f django
docker compose logs -f celeryworker
```

### Database backup
RDS automated backups are enabled (7-day retention). For a manual snapshot:
```bash
aws rds create-db-snapshot \
  --db-instance-identifier nhc-portals-db \
  --db-snapshot-identifier nhc-portals-manual-$(date +%Y%m%d)
```