DevOps with AWS

Deploying to ECS with the AWS Cloud Development Kit, CodeBuild and CodePipeline

James Collerton
8 min read · Sep 10, 2021

Audience

This article is aimed at engineers looking to understand a little bit more about the DevOps offerings from AWS. It assumes a small amount of familiarity with CI/CD, AWS, Docker, containerisation frameworks, Infrastructure as Code (IaC), and React. It isn’t necessary to have actually employed them, only to conceptually understand how they are used.

In this post we will be creating a small React application, which we will build and deploy using AWS CodeBuild and AWS CodePipeline, hosting it on an ECS cluster. Some of the simpler infrastructure will be provisioned using the AWS Cloud Development Kit (CDK). Along the way we will explain the use of each of the components, and how they fit together.

Argument

The first thing we will do is introduce each of the different technologies we will be focussing on: CodeBuild, CodePipeline, ECS and the AWS CDK. React is not deemed to be one of these, as we will be recycling a Docker-based solution from a previous article.

AWS CodeBuild

CodeBuild is Amazon’s build tool offering. It allows us to compile, unit test and package software. It is fully managed and scales automatically, and it’s part of the wider suite of development tools, including CodePipeline. One of the most useful facets of CodeBuild is that you only pay for the time you are building, unlike a Jenkins server, which must remain up and ready for whenever you need it.

AWS CodePipeline

CodePipeline is Amazon’s CI/CD offering. It can orchestrate the CodeBuild component, as well as handle things like running automated tests and deploying your code to various environments. These steps are modelled as a pipeline (hence the name). It is fully managed and costs $1 per active pipeline a month.

ECS

ECS is Amazon’s fully managed container orchestration framework. It’s similar to EKS in many ways, but uses Amazon’s own orchestration engine rather than Kubernetes. It is used to deploy, manage, and scale containers. I would recommend reading through the linked article if you aren’t already familiar with container orchestration.

AWS CDK

The AWS CDK is one of Amazon’s IaC offerings. It allows us to use familiar programming languages (TypeScript, Python, Java, .NET, and Go) to define our CloudFormation stacks, as well as offering helpful tools like the VS Code visualiser.

With all of that out of the way, let’s get on to our example!

Our Example

The aim of this article is to create an S3 bucket and ECR repository using the AWS CDK, then to push a React app to the S3 bucket, triggering CodePipeline, running a CodeBuild job, then finally deploying to a newly created ECS cluster.

The first thing we will do is examine our existing React project, which can be found at this repository (disclaimer: this was originally written for EKS applications, so there may be small components of it that refer solely to EKS, but it works for both). The overall outcome is a simple site displaying the below. You can run it locally to check!

Our example React application!

Once we’ve checked over this we would like to set up our S3 bucket and ECR repository using the AWS CDK.

We will be using the CDK with TypeScript. All of the code for our example can be found in the repository here. We’ll need to install the AWS CDK globally using the below command.

npm install -g aws-cdk

We then need to initialise our repository using:

cdk init app --language typescript

Let’s start off simple and create an S3 bucket. We will be pushing our code there in order to trigger the build.

We add the S3 dependency to our project.

npm install @aws-cdk/aws-s3

We then add a TypeScript file defining our stack, as below.
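
The file itself lives in the linked repository, but as a rough sketch (using the v1 @aws-cdk packages installed above, and the bucket and stack names this article relies on) it looks something like the below. The construct ID and comments are mine, not necessarily the repository’s.

import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';

export class CodeS3BucketStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The bucket our zipped source code will be pushed to.
    new s3.Bucket(this, 'ReactEcsBucket', {
      bucketName: 'react-ecs-bucket',
      // CodePipeline S3 source actions need versioning enabled.
      versioned: true,
      // 'Delete' rather than the default 'Retain', so cdk destroy removes it.
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
  }
}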

We can generate our CloudFormation YAML file using the command:

cdk synth

We can then deploy our new bucket using:

cdk deploy CodeS3BucketStack

This brings up our bucket! If we want to get rid of it, we can do so using cdk destroy.

The other exciting thing about the AWS CDK is the ability to write tests for our infrastructure. There are three main types:

  • Snapshot tests: These test your new stack against the previous one to check for changes.
  • Fine-grained assertions: These test specific parts of your new infrastructure, such as making sure they have certain properties with certain values.
  • Validation tests: These make sure your code fails when it is passed invalid data.

We will only cover the first type of test in this article, so as not to dilute the message. In the S3 bucket CDK code we have set the deletion policy to ‘Delete’. However, the default is ‘Retain’. Let’s add a test to make sure that we don’t accidentally regress to the default.
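
The test lives alongside the stack in the repository; a minimal sketch using CDK v1’s @aws-cdk/assert package (included by cdk init) might look like the below, assuming the stack file sits at lib/code-s3-bucket-stack.ts.

import { SynthUtils } from '@aws-cdk/assert';
import * as cdk from '@aws-cdk/core';
import { CodeS3BucketStack } from '../lib/code-s3-bucket-stack';

test('S3 bucket stack matches the stored snapshot', () => {
  const app = new cdk.App();
  const stack = new CodeS3BucketStack(app, 'CodeS3BucketStack');
  // Fails whenever the synthesised template drifts from the committed
  // snapshot, e.g. if the deletion policy silently reverts to 'Retain'.
  expect(SynthUtils.toCloudFormation(stack)).toMatchSnapshot();
});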

If we were to run the commands to build and test the project (npm run build && npm test) we would receive an error message similar to the below.

An example AWS CDK error message, highlighting the issues

We can use this to sense-check the changes, and update the snapshot once we deem the new stack acceptable using npm test -- -u (the extra -- passes the -u flag through npm to Jest).

Adding an ECR repository using the CDK is very similar and has been omitted for brevity. The code can be found in the linked repository.

Now we need to look into our CodeBuild job. Although we could do this using the CDK, to make things a bit more straightforward we’ll do it through the console. A lot of the information we’re using comes from the document here.

Initially we navigate to the CodeBuild section of the console and hit ‘Create build project’, which takes us to the job creation wizard. We enter a name, and set the source provider to Amazon S3 (where the code we are building comes from). The bucket name must be react-ecs-bucket in order to match the one we created, and I have chosen react-ecs-code.zip as the name of the zipped code we will be building.

We configure our build environment as below. Notice also how we have given the build privileged access, which it needs in order to run Docker and create images in our new ECR repository.

At the very bottom of the section we also create a new service role for the job (mentioned later). We will need to create some environment variables, which the buildspec (sketched after this list) uses during the build.

  • AWS_DEFAULT_REGION with whichever region you are using.
  • AWS_ACCOUNT_ID with your account ID.
  • IMAGE_TAG with a value of ‘latest’.
  • IMAGE_REPO_NAME with react-ecs-ecr.
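
The buildspec itself ships with the code in the repository. As a sketch rather than the exact file, a buildspec using these variables (based on the AWS Docker sample referenced above) might look like the below. The my-app folder and the react-ecs-container name are my assumptions; the container name must match the one we give our ECS task definition later, since the deploy stage reads it from imagedefinitions.json.

version: 0.2

phases:
  pre_build:
    commands:
      # Log Docker in to our ECR registry.
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      # Build and tag the image (assumes the Dockerfile sits in my-app).
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG ./my-app
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      # Push the image, then write the file the ECS deploy stage consumes.
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - printf '[{"name":"react-ecs-container","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json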

Once all of this is done, hit create and you should be met with a screen similar to the below.

As part of the wizard creation we generated a service role for our build. We will be pushing to an ECR repository, so we need to add the below ECR permissions to the role.

{
  "Statement": [
    ...
    {
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetAuthorizationToken",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    ...
  ],
  "Version": "2012-10-17"
}

Once all of this is done we have a build linked to our S3 bucket with all the requisite permissions to push to ECR! We can test it by zipping our code and pushing it to the S3 bucket, running the build manually, and confirming it creates a new image in our repository.

Our newly built image!

Next we set up our ECS cluster. Again, this is done through the console. The wizard allows us to set up a separate VPC, which is very useful. We use the ‘Networking only’ (Fargate) option, give our cluster a name and hit create. The cluster itself contains very little information; most of this is held in the task and service definitions.

Our task definition allows us to specify information for containers being run. This includes the role, the memory and CPU allocations, and the container name.

A task definition and its revisions after we have made image updates
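
For reference, a task definition registered through the wizard boils down to JSON along these lines. This is a sketch, not the exact definition: the family, container name, port and sizing are my assumptions, but the container name must match the one the buildspec above writes into imagedefinitions.json.

{
  "family": "react-ecs-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "react-ecs-container",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/react-ecs-ecr:latest",
      "portMappings": [{ "containerPort": 3000 }]
    }
  ]
}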

We then use a service to deploy a task on our cluster. We specify which task definition we would like to deploy (i.e. which image we would like to run as containers on our cluster), as well as details such as health checks, and how we would like updates and deployments to be rolled out.

As a brief summary:

  1. The cluster holds all of the resources we use to run our containers, whether this be controlling Fargate allocations or provisioning EC2 instances.
  2. The task definition says which image we would like to use, and how much resource we should give it.
  3. The service is used to deploy the task definitions onto the cluster.

Finally we implement the pipeline in order to tie it all together. The pipeline will have three main stages.

  1. Source stage: Triggering the pipeline when we update our code in S3.
  2. Build stage: Running the CodeBuild job we declared previously.
  3. Deploy stage: Taking the image created by our build and running it on ECS.

Luckily CodePipeline is well set up to cater to each of these three steps! We give the pipeline a name and a role, then set up each of the three stages.

The source stage of our pipeline
The build stage of our pipeline
The deploy stage of our pipeline

It really is that easy! I was very impressed. At this point we’re pretty much done, and all we need to do is test it. First of all, let’s make a change: I’m going to change the bottom text of the site to ‘This is our example AWS EKS/ECS application, with a change!’

I then run the commands on my local machine to zip up the new folder and push it to S3.

zip -r react-ecs-code.zip . -x 'my-app/node_modules*' '.git*'
aws s3 cp react-ecs-code.zip s3://react-ecs-bucket

This triggers our pipeline, which runs as below.

Our successful pipeline

To get the public IP address of our application we need to go to EC2 network interfaces and find the Elastic Network Interface for our newly created service. This should have a public IP address assigned to it that we can use. Visiting that address we receive the below!

Our updated application!
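
If hunting through the console feels laborious, the same lookup can be scripted with the AWS CLI. This is a sketch: the cluster and service names (react-ecs-cluster, react-ecs-service) are my assumptions and should match whatever you chose in the wizard.

# Find the running task, its network interface, then the public IP.
TASK_ARN=$(aws ecs list-tasks --cluster react-ecs-cluster --service-name react-ecs-service --query 'taskArns[0]' --output text)
ENI_ID=$(aws ecs describe-tasks --cluster react-ecs-cluster --tasks $TASK_ARN --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)
aws ec2 describe-network-interfaces --network-interface-ids $ENI_ID --query 'NetworkInterfaces[0].Association.PublicIp' --output text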

Conclusion

In conclusion, we have explored a number of the AWS DevOps tools available to us, and worked through an example covering each of them.


James Collerton

Senior Software Engineer at Spotify, Ex-Principal Engineer at the BBC