
EC2 Deployment Playbook

Step-by-step guide for deploying NHC Portals to AWS using Docker Compose on EC2. Follow in order — each section depends on the previous.

Before starting, confirm every item in Prerequisites Checklist is resolved. The hard blockers are nova.p12 and a production DJANGO_SECRET_KEY. Do not provision infrastructure until those are in hand.


1. AWS infrastructure

All commands assume the AWS CLI is configured with appropriate permissions (AdministratorAccess or a scoped deployment role).

1a. S3 buckets

Two buckets are required. Replace nhc with your preferred prefix and set the correct region. Note: in us-east-1 you must omit the --create-bucket-configuration flag entirely — LocationConstraint is only accepted for other regions, and passing it in us-east-1 makes create-bucket fail.

bash
REGION=us-east-1
PUBLIC_BUCKET=nhc-portals-public
PRIVATE_BUCKET=nhc-portals-private

# Public bucket — static files and public uploads
aws s3api create-bucket \
  --bucket $PUBLIC_BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

aws s3api put-bucket-cors --bucket $PUBLIC_BUCKET --cors-configuration '{
  "CORSRules": [{
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://your-frontend-domain.com"],
    "MaxAgeSeconds": 3000
  }]
}'

# Private bucket — documents and media (presigned URL access only)
aws s3api create-bucket \
  --bucket $PRIVATE_BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

aws s3api put-public-access-block \
  --bucket $PRIVATE_BUCKET \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

1b. IAM role for EC2

Create an instance profile so the EC2 instance can access S3 without long-lived access keys.

bash
# Create the role
aws iam create-role \
  --role-name nhc-portals-ec2-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach an inline S3 policy (adjust bucket names as needed)
aws iam put-role-policy \
  --role-name nhc-portals-ec2-role \
  --policy-name nhc-portals-s3 \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [{
      \"Effect\": \"Allow\",
      \"Action\": [\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucket\"],
      \"Resource\": [
        \"arn:aws:s3:::$PUBLIC_BUCKET\",
        \"arn:aws:s3:::$PUBLIC_BUCKET/*\",
        \"arn:aws:s3:::$PRIVATE_BUCKET\",
        \"arn:aws:s3:::$PRIVATE_BUCKET/*\"
      ]
    }]
  }"

# Create instance profile and attach role
aws iam create-instance-profile --instance-profile-name nhc-portals-ec2-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name nhc-portals-ec2-profile \
  --role-name nhc-portals-ec2-role

With an instance role in place, remove DJANGO_AWS_ACCESS_KEY_ID and DJANGO_AWS_SECRET_ACCESS_KEY from the production .env — boto3 will pick up credentials automatically from the instance metadata.

1c. RDS MySQL

bash
# Create a subnet group first (use your VPC's private subnets)
aws rds create-db-subnet-group \
  --db-subnet-group-name nhc-portals-db-subnet \
  --db-subnet-group-description "NHC Portals DB subnet group" \
  --subnet-ids subnet-XXXXXXXX subnet-YYYYYYYY

# Create the RDS instance. Use a specific minor version — "8.0" alone is
# rejected; list valid versions with:
#   aws rds describe-db-engine-versions --engine mysql --query 'DBEngineVersions[].EngineVersion'
aws rds create-db-instance \
  --db-instance-identifier nhc-portals-db \
  --db-instance-class db.t3.medium \
  --engine mysql \
  --engine-version 8.0.35 \
  --master-username nhcadmin \
  --master-user-password YOUR_DB_PASSWORD \
  --db-name novahomecareapi \
  --db-subnet-group-name nhc-portals-db-subnet \
  --vpc-security-group-ids sg-XXXXXXXX \
  --backup-retention-period 7 \
  --storage-type gp3 \
  --allocated-storage 20 \
  --no-publicly-accessible

Note the RDS endpoint once the instance is available — it becomes DATABASE_URL.
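One gotcha when assembling DATABASE_URL: a database password containing URL metacharacters (@, :, /) will break parsing unless it is percent-encoded. A quick sketch with made-up values — substitute your real endpoint and password:

```python
from urllib.parse import quote

# Hypothetical values — substitute your real RDS endpoint and password.
user = "nhcadmin"
password = "p@ss:word/1"  # contains characters that would break a raw URL
endpoint = "nhc-portals-db.abc123.us-east-1.rds.amazonaws.com"

# Percent-encode the password so '@', ':' and '/' survive URL parsing
database_url = f"mysql://{user}:{quote(password, safe='')}@{endpoint}:3306/novahomecareapi"
print(database_url)
```

If the password is plain alphanumeric, the encoding is a no-op, so it is safe to apply it unconditionally.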

1d. ElastiCache Redis

bash
# Create a cache subnet group first (use your VPC's private subnets)
aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name nhc-portals-cache-subnet \
  --cache-subnet-group-description "NHC Portals cache subnet group" \
  --subnet-ids subnet-XXXXXXXX subnet-YYYYYYYY

aws elasticache create-cache-cluster \
  --cache-cluster-id nhc-portals-redis \
  --cache-node-type cache.t3.micro \
  --engine redis \
  --engine-version 7.2 \
  --num-cache-nodes 1 \
  --cache-subnet-group-name nhc-portals-cache-subnet \
  --security-group-ids sg-XXXXXXXX

Note the primary endpoint — it becomes CELERY_BROKER_URL and CELERY_RESULT_BACKEND.

1e. Security groups

Group             | Inbound rules
nhc-portals-ec2   | 22 (SSH, your IP only), 80 (HTTP, 0.0.0.0/0), 443 (HTTPS, 0.0.0.0/0)
nhc-portals-db    | 3306 from nhc-portals-ec2 only
nhc-portals-redis | 6379 from nhc-portals-ec2 only

1f. EC2 instance

bash
# Use the current Amazon Linux 2023 AMI ID for your region
aws ec2 run-instances \
  --image-id ami-XXXXXXXX \
  --instance-type t3.medium \
  --key-name your-key-pair \
  --security-group-ids sg-XXXXXXXX \
  --subnet-id subnet-XXXXXXXX \
  --iam-instance-profile Name=nhc-portals-ec2-profile \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp3"}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=nhc-portals}]'

2. EC2 server setup

SSH into the instance and run:

bash
# Docker
sudo dnf update -y
sudo dnf install -y docker git
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user

# Docker Compose plugin
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 \
  -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

# Caddy (TLS + reverse proxy — handles Let's Encrypt automatically)
sudo dnf install -y 'dnf-command(copr)'
sudo dnf copr enable -y @caddy/caddy
sudo dnf install -y caddy
sudo systemctl enable caddy

Log out and back in so the docker group membership takes effect (or run newgrp docker in the current shell).


3. Application setup

bash
# Make the target directory writable by ec2-user, then clone with submodules
sudo mkdir -p /opt/nhc-portals && sudo chown ec2-user: /opt/nhc-portals
git clone --recurse-submodules git@gitlab.com:nova-hc/mwe/v2/pod-portals.git /opt/nhc-portals
cd /opt/nhc-portals

3a. Environment files

bash
# Django
cp django-api/.env.development.example django-api/.env
nano django-api/.env   # fill in production values — see table below

Variable                          | Production value
DATABASE_URL                      | mysql://nhcadmin:PASSWORD@RDS_ENDPOINT:3306/novahomecareapi
CELERY_BROKER_URL                 | redis://ELASTICACHE_ENDPOINT:6379/0
CELERY_RESULT_BACKEND             | redis://ELASTICACHE_ENDPOINT:6379/0
DJANGO_SECRET_KEY                 | Generate with python -c "import secrets; print(secrets.token_urlsafe(50))"
DJANGO_ALLOWED_HOSTS              | api.your-domain.com
DJANGO_ALLOWED_ORIGINS            | https://app.your-domain.com
DJANGO_AWS_S3_REGION_NAME         | us-east-1
DJANGO_AWS_S3_PUBLIC_BUCKET_NAME  | nhc-portals-public
DJANGO_AWS_S3_PRIVATE_BUCKET_NAME | nhc-portals-private
SENDGRID_API_KEY                  | From SendGrid dashboard
DEFAULT_FROM_EMAIL                | Nova Home Care <noreply@your-domain.com>
DJANGO_GOOGLE_RECAPTCHA_TOKEN     | From Google reCAPTCHA console
PYHANKO_P12_PASSWORD              | Certificate password
DJANGO_SECURE_SSL_REDIRECT        | True
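The DJANGO_SECRET_KEY one-liner above expands to the following: token_urlsafe(50) draws 50 random bytes and base64url-encodes them, yielding a 67-character key safe to paste into .env unquoted.

```python
import secrets

# 50 random bytes -> 67-character URL-safe string (A-Z, a-z, 0-9, '-', '_')
key = secrets.token_urlsafe(50)
print(key)
```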

Since the EC2 instance uses an IAM role, omit DJANGO_AWS_ACCESS_KEY_ID and DJANGO_AWS_SECRET_ACCESS_KEY. boto3 resolves credentials from the instance metadata automatically.

3b. Signing certificate

bash
sudo mkdir -p /opt/nhc-portals/django-api/secrets
# Copy nova.p12 from secure storage (e.g. SSM, 1Password, encrypted transfer)
sudo cp /path/to/nova.p12 /opt/nhc-portals/django-api/secrets/nova.p12
sudo chmod 600 /opt/nhc-portals/django-api/secrets/nova.p12

Verify the django-api/Dockerfile mounts secrets/ correctly — the app expects the file at ROOT_DIR/secrets/nova.p12 inside the container.
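Since a missing or unreadable certificate only surfaces later as a failed signing task (see the smoke test checklist), it is worth failing fast at deploy time. A stand-alone sketch — the check_p12 helper is hypothetical, not part of the app; the demo runs it against a temp file standing in for secrets/nova.p12:

```python
import os
import stat
import tempfile

def check_p12(path):
    """Fail fast if the signing certificate is absent or too permissive."""
    st = os.stat(path)  # raises FileNotFoundError if the file is missing
    mode = stat.S_IMODE(st.st_mode)
    if mode & 0o077:
        raise PermissionError(f"{path} should be chmod 600, got {oct(mode)}")

# Demo against a temp file standing in for secrets/nova.p12
with tempfile.NamedTemporaryFile(delete=False) as f:
    demo_path = f.name
os.chmod(demo_path, 0o600)
check_p12(demo_path)
print("certificate check passed")
```

This only verifies presence and permissions; it does not validate the certificate contents or password.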

3c. Update docker-compose.yml for production

The root docker-compose.yml runs the Vite dev server for the portals frontend — not appropriate for production. For Phase 1, either serve the frontend as a static build from the same EC2 instance via Caddy, or skip the portals service and deploy it to Cloudflare Pages separately.

To disable the portals dev server:

bash
# Start everything except portals (serve frontend separately)
docker compose up -d django celeryworker celerybeat db redis
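If you choose to serve the static frontend from the same instance, the Caddyfile in section 5 can gain a second site block along these lines (a sketch — the portals/dist path is an assumption about where your static build lands):

```caddy
app.your-domain.com {
    root * /opt/nhc-portals/portals/dist
    encode gzip
    try_files {path} /index.html
    file_server
}
```

The try_files fallback routes client-side paths back to index.html, which single-page apps need for deep links.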

4. First-run commands

Run these once after the containers are up:

bash
# Migrations
docker compose exec django python manage.py migrate

# Static files to S3
docker compose exec django python manage.py collectstatic --noinput

# Load fixtures (seed organisations, roles, etc.)
docker compose exec django bash -c "python manage.py loaddata novahomecareapi/fixtures/*.json"

# Create superuser
docker compose exec django python manage.py createsuperuser

5. TLS + reverse proxy (Caddy)

Create /etc/caddy/Caddyfile:

caddy
api.your-domain.com {
    reverse_proxy localhost:8000
}
bash
sudo systemctl restart caddy

Caddy automatically obtains and renews a Let's Encrypt certificate. Point the DNS A record for api.your-domain.com to the EC2 public IP before restarting.


6. Smoke test checklist

Run through these after the first deploy:

  • [ ] https://api.your-domain.com/api/schema/ returns the OpenAPI schema
  • [ ] https://api.your-domain.com/api/schema/swagger-ui/ loads Swagger UI
  • [ ] Django admin accessible at the custom admin URL
  • [ ] Login via the portals frontend succeeds
  • [ ] A document can be generated and signed (tests nova.p12 is working) — do not skip this. A missing or incorrect certificate does not crash the app on startup; it silently fails only when a form submission triggers PDF signing. The Celery task will error but the user just sees the document stuck in a pending state.
  • [ ] An email is sent and received (tests SendGrid)
  • [ ] A file upload lands in the private S3 bucket
  • [ ] Celery tasks are processing (check Django admin → Periodic Tasks)
  • [ ] Run the Cypress login suite against the production URL

7. Ongoing operations

Deploy a new version

bash
cd /opt/nhc-portals
git pull --recurse-submodules
docker compose build django
docker compose up -d --no-deps django celeryworker celerybeat
docker compose exec django python manage.py migrate
docker compose exec django python manage.py collectstatic --noinput

View logs

bash
docker compose logs -f django
docker compose logs -f celeryworker

Database backup

RDS automated backups are enabled (7-day retention). For a manual snapshot:

bash
aws rds create-db-snapshot \
  --db-instance-identifier nhc-portals-db \
  --db-snapshot-identifier nhc-portals-manual-$(date +%Y%m%d)

Nova Home Care — Internal Developer Docs