Deploy Listmonk on AWS ECS (EC2) with the CDK
February 6, 2026
This post is a walkthrough of the CDK stack in `infra/` that deploys Listmonk on ECS (EC2 launch type). It’s a single-box setup (Listmonk and Postgres on the same instance) with persistent storage, CloudFront, and Route53. I’ll follow the code top to bottom and explain what each part does so you can tweak it later.
App entrypoint and region
The CDK app is defined in `infra/bin/listmonk-ecs-ec2.ts`. It instantiates the stack with an explicit `env`:
- Account comes from `CDK_DEFAULT_ACCOUNT`
- Region is `CDK_DEFAULT_REGION`, or defaults to `eu-west-1`
So you can deploy without hard-coding the account/region, but if you don’t set `CDK_DEFAULT_REGION`, the stack is going to land in `eu-west-1`.
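Condensed into a sketch, the entrypoint looks roughly like this (the stack class name and construct ID are my assumptions, not copied from the repo):

```typescript
#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
// Hypothetical stack class name; the actual export in infra/lib may differ.
import { ListmonkEcsEc2Stack } from '../lib/listmonk-ecs-ec2-stack';

const app = new cdk.App();
new ListmonkEcsEc2Stack(app, 'ListmonkEcsEc2Stack', {
  env: {
    // Resolved by the CDK CLI from your current credentials/profile.
    account: process.env.CDK_DEFAULT_ACCOUNT,
    // Falls back to eu-west-1 if no region is set in the environment.
    region: process.env.CDK_DEFAULT_REGION ?? 'eu-west-1',
  },
});
```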
Stack parameters
The stack starts by defining CloudFormation parameters that you’ll pass at deploy time:
- `DbPassword`: Postgres password (min 8 chars, hidden)
- `AppSecret`: Listmonk app secret (min 16 chars, hidden)
- `AdminUser` / `AdminPassword`: optional super-admin credentials (can be blank)
- `DataVolumeSizeGiB`: size of the EBS data volume (default 20 GiB, min 8)
- `HostedZoneId` / `HostedZoneName`: Route53 hosted zone info (use your own zone ID and domain)
- `CloudFrontPrefixListId`: AWS-managed CloudFront prefix list ID (used to lock down ingress to the app port)
These parameters are the “knobs” that control credentials, storage, and DNS.
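As a sketch, two of these parameters would be declared like so inside the stack constructor (the exact property values beyond what the walkthrough states are assumptions):

```typescript
import * as cdk from 'aws-cdk-lib';

// Inside the stack constructor. noEcho hides the value in the
// CloudFormation console and in describe-stacks output.
const dbPassword = new cdk.CfnParameter(this, 'DbPassword', {
  type: 'String',
  minLength: 8,
  noEcho: true,
});

const dataVolumeSize = new cdk.CfnParameter(this, 'DataVolumeSizeGiB', {
  type: 'Number',
  default: 20,
  minValue: 8,
});
```

At deploy time you reference these with `dbPassword.valueAsString` and `dataVolumeSize.valueAsNumber`; CloudFormation substitutes the values you pass via `--parameters`.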
VPC layout
The VPC in the stack is intentionally minimal:
- 1 Availability Zone
- No NAT gateways
- Only public subnets
That keeps cost and complexity low, which is fine for a single-instance Listmonk setup.
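A minimal sketch of that VPC (construct IDs are mine):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// One AZ, no NAT, public subnets only — cheap, and everything
// that needs internet access gets a public IP instead.
const vpc = new ec2.Vpc(this, 'Vpc', {
  maxAzs: 1,
  natGateways: 0,
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
  ],
});
```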
Security group for the instance
The EC2 instances get a security group that allows only CloudFront to reach the app port:
- Ingress: the CloudFront managed prefix list to TCP 9000
- Egress: all outbound traffic allowed
So the Listmonk UI is only reachable through CloudFront (and whatever else you later allow), not directly from the public internet.
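A sketch of that security group, assuming `vpc` from the previous step and a `CloudFrontPrefixListId` parameter held in `cfPrefixListId`:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const sg = new ec2.SecurityGroup(this, 'InstanceSg', {
  vpc,
  allowAllOutbound: true,
});

// Only sources in the AWS-managed CloudFront prefix list may hit port 9000.
sg.addIngressRule(
  ec2.Peer.prefixList(cfPrefixListId.valueAsString),
  ec2.Port.tcp(9000),
  'CloudFront edge locations to Listmonk',
);
```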
ECS cluster and EC2 user data
The cluster is a standard ECS cluster attached to that VPC. The EC2 launch user data does the heavy lifting for persistent storage:
- Attaches a secondary EBS volume at `/dev/xvdf` (or `/dev/nvme1n1`)
- Formats it as XFS if it’s new
- Mounts it at `/var/lib/listmonk`
- Creates `/var/lib/listmonk/pgdata` and `/var/lib/listmonk/uploads`
- Grants write permissions on both directories
This is how Postgres and uploads survive container restarts.
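The steps above can be sketched as CDK user data (the shell commands are my reconstruction of the described behavior, not the exact script from the repo):

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const userData = ec2.UserData.forLinux();
userData.addCommands(
  // Nitro instances expose EBS as nvme devices, older types as xvd*.
  'DEV=/dev/xvdf; [ -e /dev/nvme1n1 ] && DEV=/dev/nvme1n1',
  // blkid fails when the volume has no filesystem yet — only then format.
  'blkid "$DEV" || mkfs -t xfs "$DEV"',
  'mkdir -p /var/lib/listmonk',
  'mount "$DEV" /var/lib/listmonk',
  'mkdir -p /var/lib/listmonk/pgdata /var/lib/listmonk/uploads',
  // Broad write perms so the postgres and listmonk container users can write.
  'chmod 777 /var/lib/listmonk/pgdata /var/lib/listmonk/uploads',
);
```

The format-only-if-new check is what makes the volume survive instance replacement: a re-attached volume already has an XFS signature, so `mkfs` is skipped and the data stays intact.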
EC2 capacity
The stack uses a Launch Template and Auto Scaling Group:
- Instance type: `t3.micro`
- AMI: ECS-optimized Amazon Linux 2
- Public IPs enabled
- Uses the security group above
- EBS data volume size driven by `DataVolumeSizeGiB`
- ASG fixed at 1 instance (min/max/desired all 1)
Then the ASG is connected to the ECS cluster through an ECS capacity provider.
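Wiring that together looks roughly like this, assuming `vpc`, `sg`, and `userData` from the earlier steps (IDs and the memory sizing are my assumptions):

```typescript
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

const asg = new autoscaling.AutoScalingGroup(this, 'Asg', {
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
  instanceType: new ec2.InstanceType('t3.micro'),
  machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
  securityGroup: sg,
  associatePublicIpAddress: true,
  userData,
  minCapacity: 1,
  maxCapacity: 1,
  desiredCapacity: 1,
});

// The capacity provider registers the ASG's instances with the cluster
// so ECS can place tasks on them.
const capacityProvider = new ecs.AsgCapacityProvider(this, 'CapacityProvider', {
  autoScalingGroup: asg,
});
cluster.addAsgCapacityProvider(capacityProvider);
```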
Task definition and volumes
The ECS task definition uses NetworkMode.HOST so the containers bind directly to the instance’s network stack. Two host volumes are defined:
- `pgdata` -> `/var/lib/listmonk/pgdata`
- `uploads` -> `/var/lib/listmonk/uploads`
This matches the user data mount points from the previous step.
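A sketch of the task definition and its host volumes:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';

// HOST networking: containers share the instance's network namespace,
// so Listmonk reaches Postgres on 127.0.0.1 with no port mapping tricks.
const taskDef = new ecs.Ec2TaskDefinition(this, 'TaskDef', {
  networkMode: ecs.NetworkMode.HOST,
});

// Host volumes point at the EBS mount created in user data.
taskDef.addVolume({ name: 'pgdata', host: { sourcePath: '/var/lib/listmonk/pgdata' } });
taskDef.addVolume({ name: 'uploads', host: { sourcePath: '/var/lib/listmonk/uploads' } });
```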
Postgres container
The Postgres container is simple and local:
- Image:
postgres:15-alpine - Port mapping: 5432
- Environment:
POSTGRES_DB=listmonk,POSTGRES_USER=listmonk,POSTGRES_PASSWORDfromDbPassword - Mounts
/var/lib/postgresql/datato thepgdatahost volume - Logs to CloudWatch (
listmonk-dbprefix)
Since it runs in the same task definition, it shares the host network.
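A sketch of that container, assuming `taskDef` and the `dbPassword` parameter from earlier (the memory reservation is my placeholder; a t3.micro has ~1 GiB to split between the two containers):

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';

const db = taskDef.addContainer('db', {
  image: ecs.ContainerImage.fromRegistry('postgres:15-alpine'),
  memoryReservationMiB: 256, // placeholder sizing
  environment: {
    POSTGRES_DB: 'listmonk',
    POSTGRES_USER: 'listmonk',
    POSTGRES_PASSWORD: dbPassword.valueAsString,
  },
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'listmonk-db' }),
});
db.addPortMappings({ containerPort: 5432 });
db.addMountPoints({
  containerPath: '/var/lib/postgresql/data',
  sourceVolume: 'pgdata',
  readOnly: false,
});
```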
Listmonk container
The Listmonk container is wired to Postgres on localhost:
- Image:
listmonk/listmonk:latest - Port mapping: 9000
- Environment config sets DB host to
127.0.0.1:5432 - App secret from
AppSecret - Optional admin user/password from parameters
- Mounts
/listmonk/uploadsto theuploadsvolume - Logs to CloudWatch (
listmonk-appprefix)
The container runs a boot sequence that ensures DB setup:
1. `./listmonk --install --idempotent --yes --config ''`
2. `./listmonk --upgrade --yes --config ''`
3. `./listmonk --config ''`
So a fresh deploy installs and upgrades the schema automatically.
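A sketch of the Listmonk container, assuming `taskDef`, `dbPassword`, and an `appSecret` parameter. The `LISTMONK_section__key` environment variable names follow Listmonk's documented config-override convention; whether the stack uses exactly these keys is an assumption:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';

const listmonk = taskDef.addContainer('app', {
  image: ecs.ContainerImage.fromRegistry('listmonk/listmonk:latest'),
  memoryReservationMiB: 256, // placeholder sizing
  environment: {
    LISTMONK_app__address: '0.0.0.0:9000',
    LISTMONK_db__host: '127.0.0.1', // same host network as Postgres
    LISTMONK_db__port: '5432',
    LISTMONK_db__user: 'listmonk',
    LISTMONK_db__password: dbPassword.valueAsString,
    LISTMONK_db__database: 'listmonk',
  },
  // Chain the boot sequence: idempotent install, schema upgrade, then run.
  entryPoint: ['/bin/sh', '-c'],
  command: [
    "./listmonk --install --idempotent --yes --config '' && " +
      "./listmonk --upgrade --yes --config '' && " +
      "./listmonk --config ''",
  ],
  logging: ecs.LogDrivers.awsLogs({ streamPrefix: 'listmonk-app' }),
});
listmonk.addPortMappings({ containerPort: 9000 });
listmonk.addMountPoints({
  containerPath: '/listmonk/uploads',
  sourceVolume: 'uploads',
  readOnly: false,
});
```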
ECS service
The service is a single-instance `Ec2Service` with:
- Desired count: 1
- `minHealthyPercent: 0`, `maxHealthyPercent: 100`
With min 0% / max 100%, ECS is allowed to stop the old task before starting the replacement. On a single host-networked node that’s required, since two tasks can’t bind port 9000 at once, so expect a brief blip during deploys rather than zero-downtime rollovers.
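As a sketch, assuming `cluster` and `taskDef` from the earlier steps:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';

const service = new ecs.Ec2Service(this, 'Service', {
  cluster,
  taskDefinition: taskDef,
  desiredCount: 1,
  // Let ECS drop to 0 running tasks during a deploy (single-node constraint).
  minHealthyPercent: 0,
  maxHealthyPercent: 100,
});
```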
NLB in front of the service
An internet-facing Network Load Balancer exposes Listmonk:
- Listener: TCP 80
- Target: ECS service on port 9000
CloudFront will sit in front of this, but the NLB provides the stable origin endpoint.
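A sketch of the NLB wiring, assuming `vpc` and `service` from above (target health-check details omitted):

```typescript
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new elbv2.NetworkLoadBalancer(this, 'Nlb', {
  vpc,
  internetFacing: true,
});

// TCP 80 on the NLB forwards to the ECS service on port 9000.
const listener = nlb.addListener('Http', { port: 80 });
listener.addTargets('Listmonk', {
  port: 9000,
  targets: [service],
});
```

Note that the security group on the instances still applies: even though the NLB is internet-facing, only traffic matching the CloudFront prefix list reaches port 9000.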
CloudFront and ACM certificate
The stack creates a DNS-validated certificate in us-east-1 (required by CloudFront). Then it provisions a CloudFront distribution:
- Origin: the NLB
- Origin protocol: HTTP
- Viewer protocol: HTTPS redirect
- Custom domain: `listmonk.example.com`
Finally, it creates a Route53 A record pointing `listmonk.example.com` at the CloudFront distribution.
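Sketched out, assuming `nlb` from the previous step and the two hosted-zone parameters (`DnsValidatedCertificate` is deprecated in newer CDK releases but is the classic way to mint a us-east-1 cert from a stack in another region):

```typescript
import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

const zone = route53.HostedZone.fromHostedZoneAttributes(this, 'Zone', {
  hostedZoneId: hostedZoneId.valueAsString,
  zoneName: hostedZoneName.valueAsString,
});

// CloudFront only accepts certificates from us-east-1,
// regardless of where the rest of the stack lives.
const cert = new acm.DnsValidatedCertificate(this, 'Cert', {
  domainName: 'listmonk.example.com',
  hostedZone: zone,
  region: 'us-east-1',
});

const dist = new cloudfront.Distribution(this, 'Dist', {
  defaultBehavior: {
    // Plain HTTP from CloudFront to the NLB origin; TLS terminates at the edge.
    origin: new origins.HttpOrigin(nlb.loadBalancerDnsName, {
      protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
    }),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  domainNames: ['listmonk.example.com'],
  certificate: cert,
});

new route53.ARecord(this, 'AliasRecord', {
  zone,
  recordName: 'listmonk',
  target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(dist)),
});
```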
Stack outputs
Two helpful outputs are included:
- `ListmonkPort`: `9000`
- `ListmonkCloudFrontDomain`: the CloudFront distribution domain name
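Which, assuming the `dist` distribution from the previous step, is just:

```typescript
import * as cdk from 'aws-cdk-lib';

new cdk.CfnOutput(this, 'ListmonkPort', { value: '9000' });
new cdk.CfnOutput(this, 'ListmonkCloudFrontDomain', {
  value: dist.distributionDomainName,
});
```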
Deploying the stack
From the `infra/` folder you can deploy with parameters like:
```shell
cd infra
cdk deploy \
  --parameters DbPassword='supersecretpass' \
  --parameters AppSecret='some-long-random-secret' \
  --parameters AdminUser='admin' \
  --parameters AdminPassword='anothersecret' \
  --parameters DataVolumeSizeGiB=20 \
  --parameters HostedZoneId='ZXXXXXXXXXXXX' \
  --parameters HostedZoneName='example.com' \
  --parameters CloudFrontPrefixListId='pl-xxxxxxxx'
```
If you want to customize this setup, the most common tweaks are:
- Use a larger instance type for heavier workloads
- Increase the EBS volume size for Postgres/uploads
- Swap to a private VPC layout with NAT
- Replace the NLB with an ALB if you want HTTP routing rules
That’s it. The stack is intentionally compact and all the moving parts are right there if you want to adjust it.