Deploying a Simple Web Application to AWS ECS with GitHub Actions and Terraform

Shrihari Haridas

--

Introduction:
In this blog, we will walk through the process of deploying a simple “Hello World” web application to AWS ECS (Elastic Container Service) using GitHub Actions for Continuous Deployment (CD). By the end of this guide, you will have a fully automated pipeline that pushes your Dockerized application to AWS and runs it on ECS, making your deployment process seamless and efficient. We will cover everything from creating the infrastructure with Terraform to pushing your code to GitHub and verifying the successful deployment in AWS.

  1. Today, we will set up a CI/CD pipeline: GitHub Actions is the CI/CD platform, the infrastructure runs on AWS, and Terraform is used to create and manage that infrastructure.

Problem: Create a simple website with automated deployment using AWS services.

Solution: We will build a complete CI/CD pipeline where:

  • Developer writes code and pushes to GitHub
  • GitHub automatically builds and deploys the website
  • Website runs on AWS cloud infrastructure

Key Components:

  • Source Code: Simple “Hello World” website
  • Container: Website packaged in Docker
  • Cloud Storage: AWS ECR holds our Docker image
  • Cloud Hosting: AWS ECS runs our website
  • Automation: GitHub Actions handles deployment

End Result:

  • Push code → Website automatically goes live
  • No manual deployment steps needed
  • Everything runs in AWS cloud

It’s like having a robot that takes your code, packages it, and puts it live on the internet automatically whenever you update it.

2. The folder structure of our code is shown below.
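The original post shows the structure as an image; for reference, the layout assumed throughout this guide looks roughly like this (the repository name is hypothetical, but the file names match what we create below):

```
hello-world-ecs/
├── .github/
│   └── workflows/
│       └── deploy.yaml      # GitHub Actions pipeline
├── terraform/
│   ├── main.tf              # Core AWS infrastructure
│   ├── outputs.tf           # Useful resource identifiers
│   ├── provider.tf          # AWS provider configuration
│   └── variables.tf         # Region, project name, environment
├── src/
│   └── index.html           # The "Hello World" page
├── Dockerfile               # Packages index.html into an Nginx image
└── .gitignore               # Keeps Terraform state out of Git
```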

3. Before we proceed, create a GitHub repository — whether public or private, depending on your preference. Then, clone the empty repository to your local machine to start writing the code.

4. Now, let’s proceed with writing the code.

Now, we will write the following code inside the terraform folder.

A. main.tf

# terraform/main.tf

# Get available AZs
data "aws_availability_zones" "available" {
  state = "available"
}

# VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "${var.project_name}-vpc"
    Environment = var.environment
  }
}

# Internet Gateway
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name        = "${var.project_name}-igw"
    Environment = var.environment
  }
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.${count.index + 1}.0/24"
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name        = "${var.project_name}-public-subnet-${count.index + 1}"
    Environment = var.environment
  }
}

# Route Table for Public Subnets
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name        = "${var.project_name}-public-rt"
    Environment = var.environment
  }
}

# Route Table Association
resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Security Group for ECS Tasks
resource "aws_security_group" "ecs_tasks" {
  name        = "${var.project_name}-ecs-tasks-sg"
  description = "Allow inbound traffic for ECS tasks"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "Allow HTTP inbound"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "${var.project_name}-ecs-tasks-sg"
    Environment = var.environment
  }
}

# ECR Repository
resource "aws_ecr_repository" "app" {
  name = "${var.project_name}-app"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    Name        = "${var.project_name}-ecr"
    Environment = var.environment
  }
}

# ECS Cluster
resource "aws_ecs_cluster" "main" {
  name = "${var.project_name}-cluster"

  tags = {
    Name        = "${var.project_name}-ecs-cluster"
    Environment = var.environment
  }
}

# ECS Task Execution Role
resource "aws_iam_role" "ecs_task_execution_role" {
  name = "${var.project_name}-ecs-task-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      }
    ]
  })
}

# Attach the AWS managed policy for ECS task execution
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_policy" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# ECS Task Definition
resource "aws_ecs_task_definition" "app" {
  family                   = "${var.project_name}-task"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn

  container_definitions = jsonencode([
    {
      name  = "${var.project_name}-container"
      image = "${aws_ecr_repository.app.repository_url}:latest"
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
          protocol      = "tcp"
        }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "/ecs/${var.project_name}"
          "awslogs-region"        = var.aws_region
          "awslogs-stream-prefix" = "ecs"
        }
      }
    }
  ])

  tags = {
    Name        = "${var.project_name}-task-definition"
    Environment = var.environment
  }
}

# CloudWatch Log Group
resource "aws_cloudwatch_log_group" "ecs_logs" {
  name              = "/ecs/${var.project_name}"
  retention_in_days = 30

  tags = {
    Name        = "${var.project_name}-logs"
    Environment = var.environment
  }
}

# ECS Service
resource "aws_ecs_service" "app" {
  name            = "${var.project_name}-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.public[*].id
    security_groups  = [aws_security_group.ecs_tasks.id]
    assign_public_ip = true
  }

  tags = {
    Name        = "${var.project_name}-ecs-service"
    Environment = var.environment
  }
}

Summary:

Data Source for Availability Zones

  • Fetches the list of available AWS Availability Zones.

VPC Creation

  • Defines a Virtual Private Cloud (VPC) with DNS support.

Internet Gateway

  • Creates an Internet Gateway to enable internet access.

Public Subnets

  • Creates two public subnets across different Availability Zones.

Route Table and Association

  • Sets up a route table for public subnets with internet access.

Security Group for ECS Tasks

  • Allows inbound HTTP (port 80) traffic and outbound traffic for ECS containers.

ECR Repository

  • Creates an AWS Elastic Container Registry (ECR) to store container images.

ECS Cluster

  • Creates an ECS cluster for running containerized applications.

IAM Role for ECS Task Execution

  • Defines an IAM role and attaches the necessary policy for ECS task execution.

ECS Task Definition

  • Defines the ECS task with Fargate as the launch type, specifying container details.

CloudWatch Log Group

  • Creates a CloudWatch log group for logging ECS container activities.

ECS Service

  • Deploys an ECS service using the Fargate launch type, ensuring networking and security configurations.

This setup provisions an AWS ECS Fargate-based infrastructure using Terraform for a containerized application deployment.

B. outputs.tf

output "ecr_repository_url" {
  value = aws_ecr_repository.app.repository_url
}

output "ecs_cluster_name" {
  value = aws_ecs_cluster.main.name
}

output "ecs_service_name" {
  value = aws_ecs_service.app.name
}

ECR Repository URL (ecr_repository_url)

  • Outputs the repository URL of the AWS Elastic Container Registry (ECR).
  • Used to push and pull Docker images for deployment.

ECS Cluster Name (ecs_cluster_name)

  • Outputs the name of the created Amazon ECS cluster.
  • Useful for managing and deploying services within the cluster.

ECS Service Name (ecs_service_name)

  • Outputs the name of the ECS service.
  • Helps track and manage the running containerized application.

These outputs make it easier to reference critical AWS resources in deployments and automation workflows.
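As a quick sketch of how these outputs are consumed (assuming you have already run terraform apply from the terraform folder), they can be read back on the command line and wired into other tooling:

```shell
# Print all outputs defined in outputs.tf
terraform output

# Print a single output without quotes, e.g. the ECR repository URL
terraform output -raw ecr_repository_url

# Example: authenticate Docker against ECR using the output value
# (region assumed to be the us-west-2 default from variables.tf)
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin "$(terraform output -raw ecr_repository_url)"
```

The -raw flag is what makes outputs usable inside shell substitutions, since the default output format wraps strings in quotes.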

C. provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

Required Providers Block

  • Specifies that Terraform will use the AWS provider from HashiCorp.
  • Defines the version constraint (~> 4.0), ensuring compatibility with AWS provider versions 4.x.

AWS Provider Block

  • Configures the AWS provider.
  • Uses a variable (var.aws_region) to dynamically set the AWS region.

This setup ensures Terraform knows which provider to use and where to deploy AWS resources.

D. variables.tf

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "project_name" {
  description = "Project name to be used for tagging"
  type        = string
  default     = "hello-world"
}

variable "environment" {
  description = "Environment (dev/staging/prod)"
  type        = string
  default     = "dev"
}

AWS Region (aws_region)

  • Defines the AWS region where resources will be deployed.
  • Default value is set to "us-west-2", but it can be customized.

Project Name (project_name)

  • Defines the name of the project, used for resource tagging.
  • Default value is "hello-world", which can be changed for different projects.

Environment (environment)

  • Specifies the environment (e.g., dev, staging, prod).
  • Default value is set to "dev", but it can be adjusted based on deployment needs.

These variables allow for flexible, reusable Terraform code with different configurations based on region, project, and environment.

5. Now that we’ve added the Terraform code, let’s create a .gitignore file because we don't want to push the tfstate and other state files to GitHub.

# Terraform files
**/.terraform/*
*.tfstate
*.tfstate.*
crash.log
*.tfvars
override.tf
override.tf.json
*_override.tf
*_override.tf.json
.terraformrc
terraform.rc

**/.terraform/*

  • Ignores all files and directories in the .terraform folder, which contains provider plugins and cached data.

*.tfstate

  • Ignores Terraform state files, which store resource state and should not be shared.

*.tfstate.*

  • Ignores backup state files created by Terraform.

crash.log

  • Ignores crash logs generated by Terraform when an operation fails.

*.tfvars

  • Ignores variable files containing sensitive or environment-specific information.

override.tf / override.tf.json

  • Ignores Terraform override files, which may contain custom configurations.

*_override.tf / *_override.tf.json

  • Ignores any files with custom overrides for Terraform configurations.

.terraformrc / terraform.rc

  • Ignores Terraform configuration files, which store settings for the Terraform CLI.

These rules help prevent sensitive or configuration files from being pushed to the repository.

6. Now let’s create a Dockerfile.

FROM nginx:alpine
COPY src/index.html /usr/share/nginx/html/index.html
EXPOSE 80

FROM nginx:alpine

  • Uses the lightweight Alpine-based Nginx image as the base image.
  • This ensures a minimal footprint while running an Nginx web server.

COPY src/index.html /usr/share/nginx/html/index.html

  • Copies the index.html file from the src/ directory in your project into the Nginx web server’s default HTML directory (/usr/share/nginx/html/).
  • This replaces the default Nginx welcome page with your custom HTML file.

EXPOSE 80

  • Informs Docker that the container will listen on port 80 (the default HTTP port).
  • This allows external traffic to reach the web server when the container is running.
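Before wiring this into the pipeline, you can sanity-check the image locally — a sketch, assuming Docker is installed and you run it from the project root (the image and container names here are arbitrary):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t hello-world-app .

# Run it detached, mapping host port 8080 to the container's port 80
docker run -d --name hello-world -p 8080:80 hello-world-app

# The custom index.html should now be served locally
curl http://localhost:8080

# Clean up the test container
docker rm -f hello-world
```

If the curl output shows your “Hello World from ECS!” page, the same image will serve it once ECS pulls it from ECR.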

7. Next, create index.html inside the src folder.

<!DOCTYPE html>
<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <h1>Hello World from ECS!</h1>
    <p>This is a simple website deployed using GitHub Actions and AWS ECS.</p>
  </body>
</html>

This is a simple HTML file that serves as the homepage for a website deployed on AWS ECS.

  • <!DOCTYPE html>: Declares the document type as HTML5.
  • <html>: Root element of the HTML document.
  • <head>: Contains metadata, including the page title.
  • <title>Hello World</title>: Sets the title of the webpage.
  • <body>: Contains the visible content of the webpage.
  • <h1>: Displays a main heading "Hello World from ECS!".
  • <p>: Displays a paragraph describing the deployment using GitHub Actions and AWS ECS.

8. Now, we have written all the application and infrastructure files. The last file remaining is the GitHub Actions workflow, which must live at .github/workflows/deploy.yaml so that GitHub can automatically trigger our pipeline.

name: Deploy to AWS ECS

on:
  push:
    branches: [ main ]

env:
  AWS_REGION: us-west-2

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build and push Docker image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: hello-world-app
          IMAGE_TAG: latest
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      - name: Force new deployment
        run: |
          aws ecs update-service --cluster hello-world-cluster --service hello-world-service --force-new-deployment

  • Triggers on push to main branch.
  • Sets AWS region as an environment variable.
  • Runs the deployment job on ubuntu-latest.

Steps:

A. Checkout the repository using GitHub Actions.

B. Configure AWS credentials from GitHub Secrets.

C. Login to Amazon ECR to authenticate Docker push.

D. Build and push the Docker image to the ECR repository.

E. Force a new ECS deployment to update the service with the latest image.

9. Now we have written all the essential code and files for our project. You can also see the image below. It’s time to deploy what we have created.

But please note that we need to create the infrastructure first before pushing the code to the GitHub repository. The reason behind this is that if the infrastructure is not created, our pipeline will fail.

So, navigate to the Terraform folder and run the following commands:

terraform init
terraform plan
terraform apply

10. You can see that our infrastructure has been created successfully. If you want to verify, you can log in and check whether the services have been created or not.

11. Before pushing our code to GitHub, go to the specific repository → Settings → On the left side, scroll down to “Secrets and variables” → Click on “Actions” → Add your AWS Access Key and Secret Key.

This is required to deploy the code to the specified AWS account. Remember, in the GitHub Actions workflow, we only declared the variables but did not set their values. So, configure them first and then push the code.
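If you prefer the command line, the same secrets can be added with the GitHub CLI — a sketch, assuming gh is installed and authenticated against your repository; the secret names must match the workflow exactly:

```shell
# Each command prompts for the secret value (or pipe it in via stdin)
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
```

Either way, the values never appear in the repository itself; GitHub Actions injects them at run time through the secrets context.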

12. It’s time to push our code to the GitHub repository. Navigate to the root of the repository (not the terraform subfolder, so that all files are staged) and run the following commands:

git add .
git commit -m "Add Terraform code and GitHub Actions workflow"
git push

This will push the code to the respective repository. After that, go to GitHub → Actions to check whether the pipeline has been triggered or not.

13. You can see above that your code has been successfully deployed. Now, it’s time to check in AWS.

Go back to the AWS Console and:

  1. Open Amazon ECR to verify if the Docker image has been pushed successfully.
  2. Open Amazon ECS to check if the ECS service is running correctly.

This will confirm that your deployment process is working as expected.

14. Okay, now to access your website:

  1. Go to your running ECS cluster.
  2. Navigate to the Tasks section.
  3. Click on the running task.
  4. Under the Configuration section, you will find the public IP.
  5. Copy the public IP and paste it into a new browser tab to check if the website is accessible.

You should now see the index.html page displaying successfully.
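The same public IP can also be retrieved from the command line instead of clicking through the console — a sketch, assuming the AWS CLI is configured and using the cluster and service names produced by the Terraform defaults:

```shell
# Find the ARN of the running task in the service
TASK_ARN=$(aws ecs list-tasks \
  --cluster hello-world-cluster \
  --service-name hello-world-service \
  --query 'taskArns[0]' --output text)

# Get the elastic network interface (ENI) attached to the task
ENI_ID=$(aws ecs describe-tasks \
  --cluster hello-world-cluster \
  --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" \
  --output text)

# Resolve the ENI to its public IP
aws ec2 describe-network-interfaces \
  --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' \
  --output text
```

Note that Fargate tasks get a fresh public IP on every redeployment; a load balancer in front of the service would give you a stable address, but that is beyond the scope of this guide.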

Conclusion:
Congratulations! You have successfully set up an automated CI/CD pipeline using GitHub Actions to deploy a Dockerized web application to AWS ECS. You learned how to configure Terraform to create the necessary AWS resources, push your Docker image to ECR, and use GitHub Actions to handle the deployment process. This workflow not only streamlines the deployment process but also ensures that your application can be updated and scaled efficiently in AWS ECS. With this setup, you can confidently deploy and manage applications on AWS with minimal manual intervention.
