Deployment
This section covers the strategy and architecture for deploying NHC Portals to AWS.
For the hands-on execution steps, see the Migration → section.
Architecture at a glance
```
             ┌──────────────────────────────────────────────┐
             │ AWS (NHC account)                            │
             │                                              │
Internet ──► │ Caddy (TLS) ──► EC2 (Docker Compose)         │
             │                 ├── django (gunicorn :8000)  │
             │                 ├── celeryworker             │
             │                 └── celerybeat               │
             │                                              │
             │ RDS MySQL 8                                  │
             │ ElastiCache Redis                            │
             │                                              │
             │ S3 public  (static files)                    │
             │ S3 private (documents, media)                │
             └──────────────────────────────────────────────┘

Portals frontend ──► Static build on Cloudflare Pages / S3+CloudFront
```

The `db` and `redis` containers from the local `docker-compose.yml` are replaced by RDS and ElastiCache in production. The portals dev server is replaced by a static build.
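One way to express that swap is a Compose override file that layers production settings on top of the local one. The sketch below is illustrative only: the file name, variable names, and endpoint hostnames are assumptions, not taken from the repo — substitute the real RDS and ElastiCache endpoints and whatever settings names the Django config actually reads.

```yaml
# docker-compose.production.yml — sketch; hostnames are placeholders.
# The local db and redis services are simply not started in production:
# the app containers point at RDS and ElastiCache via environment variables.
services:
  django:
    environment:
      DATABASE_URL: mysql://portals:${DB_PASSWORD}@<rds-endpoint>:3306/portals
      REDIS_URL: redis://<elasticache-endpoint>:6379/0
  celeryworker:
    environment:
      CELERY_BROKER_URL: redis://<elasticache-endpoint>:6379/0
  celerybeat:
    environment:
      CELERY_BROKER_URL: redis://<elasticache-endpoint>:6379/0
```

Compose merges files passed with repeated `-f` flags left to right, so the override only needs to state what differs from local dev.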
Phased approach
| Phase | Goal | Status |
|---|---|---|
| 1 — EC2 + Compose | Get the app running in AWS. Move stateful services to RDS and ElastiCache. Prove the app works end-to-end against real AWS infrastructure. | Planned |
| 2 — Stabilise | Run the Cypress suite against the live environment. Fix any environment-specific issues. Set up monitoring and alerting. | Planned |
| 3 — ECS Fargate | Migrate containers to ECS for rolling deploys, per-service scaling, and CloudWatch integration. EC2 phase proves the containers first — ECS is the production-grade finish line. | Planned |
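For Phase 2, the existing suite can be pointed at the live environment rather than localhost. A sketch, assuming the tests honour Cypress's standard `CYPRESS_BASE_URL` environment override; the domain is a placeholder, not the real one:

```shell
# Run the existing Cypress suite against the deployed environment.
# URL is a placeholder — substitute the real Caddy-fronted domain.
CYPRESS_BASE_URL="https://portals.example.org" npx cypress run
```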
Why EC2 first, then ECS?
EC2 + Compose is nearly identical to the local dev setup — the same docker-compose.yml, the same commands, the same mental model. This means the first migration focuses on one problem: making the app work against real AWS infrastructure (RDS, ElastiCache, S3, SendGrid). No new orchestration concepts.
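Concretely, the operational loop on the instance is the same Compose workflow as local dev, plus a production override. A sketch only — the override file name is an assumption, and the commands presume images are pulled from a registry rather than built on the host:

```shell
# On the EC2 host — same mental model as local dev.
docker compose -f docker-compose.yml -f docker-compose.production.yml pull
docker compose -f docker-compose.yml -f docker-compose.production.yml up -d
docker compose logs -f django   # tail the app while verifying the deploy
```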
Once that's confirmed working and the Cypress suite passes against it, migrating to ECS becomes an infrastructure-only change — the containers are already proven. The two problems (app correctness + production orchestration) are solved separately rather than simultaneously.