Introduction to Terraform

Everyone keeps talking about it, but what the hell is it?

James Collerton
6 min read · Dec 24, 2019

Audience and Aim

This article is aimed at any engineer who has heard of Infrastructure as Code or Terraform and has at least a passing familiarity with AWS.

We aim to answer the following questions:

  • What is Infrastructure as Code?
  • What is Terraform?
  • How do we use Terraform?

The final section will walk through an example where we generate our own S3 bucket in AWS by utilising the Terraform module and workspace functionalities.

Argument

What is Infrastructure as Code?

Infrastructure as Code (IaC) is one of the key DevOps practices and relates to the management of infrastructure (networks, virtual machines, load balancers, etc.) in a descriptive model.

For example we may write some code to say ‘we would like a server of this type in this location’. We would then run it against our cloud provider and they will provision us the described infrastructure.

Provisioning infrastructure in this way (as opposed to manually) means:

  1. It is easy to bring up our desired configurations.
  2. Our infrastructure definitions can be version controlled.
  3. Our infrastructure is easily reproducible between environments.
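As a taster, the 'we would like a server of this type in this location' description above might look something like the below in Terraform's configuration language. This is purely a hypothetical sketch — the AMI ID is a placeholder, not a real image:

```hcl
# A hypothetical declarative description of a single server.
provider "aws" {
  region = "eu-west-1" # 'this location'
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t2.micro"              # 'a server of this type'
}
```

Running this against AWS would provision one small virtual machine in the Ireland region — no clicking around a console required.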

What is Terraform?

Terraform lets you use high-level syntax to declare infrastructure in configuration files. We will explore this concept shortly.

The key features of Terraform are:

  • The generation of execution plans to show the infrastructure to be created.
  • A resource graph which calculates which parts of your desired infrastructure are independent, allowing you to parallelise their modification.
  • The intelligent diffing of your IaC and existing infrastructure, allowing only updates to be run.
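To make the first point concrete, an execution plan for a new S3 bucket looks roughly like the below (abridged, and the exact formatting varies between Terraform versions):

```
Terraform will perform the following actions:

  # aws_s3_bucket.aws_s3_bucket will be created
  + resource "aws_s3_bucket" "aws_s3_bucket" {
      + bucket        = "terraform-s3-bucket-your-name-here"
      + force_destroy = true
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

Nothing is created at this point — the plan simply shows what would change if you applied it.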

There are other IaC providers (for example Ansible or CloudFormation) which offer similar concepts, but for the purposes of this article we will remain with Terraform.

How do we use Terraform?

In the next few sections we will generate an S3 bucket in our AWS account. We will do this using two different methods, allowing us to demonstrate some of the key functionalities of Terraform.

Prerequisites

Initially, you will need an AWS account. If you do not already have one, sign up for the free tier here. From there you will need Terraform installed. If you’re using MacOS and have Homebrew you should be able to use that as demonstrated here. Otherwise, follow the instructions from here. If it is all correctly installed you should be able to run:

terraform -v

To display your Terraform version.

Once Terraform is installed we will need to set up a default AWS profile, the instructions can be found here.
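For reference, a default profile lives in ~/.aws/credentials (with the region optionally set in ~/.aws/config). The keys below are placeholders — substitute your own from the AWS console:

```ini
# ~/.aws/credentials — placeholder values, use your own keys
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Terraform's AWS provider will pick this profile up automatically when we reference it in our configuration.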

I appreciate this is a little bit of setup, but it’s all stuff worth learning!

Making Your First S3 Bucket

Once we have this, go to a convenient location and make a directory with any name you like, for the sake of example I will be using:

mkdir shared-remote-state-s3

Once we are inside the directory we will write our first configuration file! I will call mine main.tf, but you can call it anything with the .tf file extension. Inside that file write the below, making sure to replace the your-name-here section with your name:

# This tells us to use AWS as our cloud provider and specifies
# the AWS profile and region we would like to use.
provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

# Now we provision an S3 bucket.
resource "aws_s3_bucket" "aws_s3_bucket" {
  bucket        = "terraform-s3-bucket-your-name-here"
  acl           = "private"
  force_destroy = true
}

The file itself is quite self-explanatory. As Terraform is cloud agnostic we tell it which provider we would like to use; the resource section defines our infrastructure.

We then use:

terraform init

In order to initialise our Terraform repository. Hopefully you should receive something containing the below:

Terraform has been successfully initialized!

Try doing ls -a — a new .terraform folder should have appeared. Similar to git, this folder contains information around the current state of the Terraform workspace.

The next thing we would like to do is run a plan to see if what we have written is correct, then apply the changes and make sure it is working.

Run:

terraform plan

To check that you are generating an S3 bucket as expected. In the plan output the bucket name should be terraform-s3-bucket-your-name-here and force_destroy should be true. Run:

terraform apply

To apply the changes and create the bucket. When prompted type yes to confirm. You should get a message containing the below:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed

Let’s go to our AWS management console and check — you should now be able to see your empty bucket!

Bootstrap your Remote State Backend

Navigate back to the folder we were just in. You should notice there are terraform.tfstate and terraform.tfstate.backup files. These files are responsible for telling you the current state of your infrastructure (is it up, is it down…). However, it doesn’t make sense to keep them locally. We would like to store them in S3 so that other people can have access and we can share the current state of our infrastructure. This allows multiple developers to work on the same infrastructure simultaneously.

Go back to the main.tf file and add the following at the top, making sure to use the name of your own bucket:

# This code tells Terraform to copy your current state into the
# remote state backend in order to share it with other people.
terraform {
  backend "s3" {
    bucket  = "terraform-s3-bucket-your-name-here"
    key     = "shared-remote-state/terraform.tfstate"
    profile = "default"
    region  = "eu-west-1"
  }
}

Notice this is a bit of a Catch-22: we need an S3 bucket before we can use one as a remote state backend! Now rerun:

terraform init

You should get a message asking if you would like to copy over existing state to the remote backend. Type yes and you will hopefully receive a success message. Now navigate to the S3 bucket in the AWS management console. Your state file should now be there!

Using Modules and Workspaces

We now introduce some slightly more complex concepts:

  • Modules: These allow us to store infrastructure definitions centrally (for example in a GitHub repository) and then reuse them elsewhere.
  • Workspaces: These allow us to map the same infrastructure to different ‘workspaces’. For example, we may want to use the same folder and create three different versions of the infrastructure: dev, test and live.
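To give a feel for what lives behind a module, a minimal S3 module might contain something like the below. This is a sketch — the actual contents of the repository we pull in shortly may differ:

```hcl
# variables.tf — the module's input
variable "aws_s3_bucket_name" {
  description = "Name of the S3 bucket to create"
}

# main.tf — the infrastructure the module encapsulates
resource "aws_s3_bucket" "module_bucket" {
  bucket        = var.aws_s3_bucket_name
  acl           = "private"
  force_destroy = true
}
```

The consuming configuration only needs to provide the variable; everything else is hidden inside the module.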

Go to a convenient location and make a directory with any name you like. For the sake of example I will be using:

mkdir modules-and-workspaces

Now we need to write a file for pulling in a module and applying the code within a workspace. Put the below in a main.tf file, making sure to change the your-name-here section to match the bucket from previously.

terraform {
  backend "s3" {
    bucket  = "terraform-s3-bucket-your-name-here"
    key     = "modules-and-workspaces/terraform.tfstate"
    profile = "default"
    region  = "eu-west-1"
  }
}

# This tells us to use AWS as our cloud provider and specifies
# the AWS profile and region we would like to use.
provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

# Now we provision an S3 bucket using the code at the GitHub link.
module "s3" {
  source             = "github.com/JamesCollerton/Terraform_Modules//s3"
  aws_s3_bucket_name = "${terraform.workspace}-module-s3-bucket-your-name-here"
}

We can see we are using a module which sources code from another location and takes in a variable for some settings. This means we can do things like write code for complex infrastructure, store it as a module and reuse it in other places.

Now we can create our workspaces. Run the following:

terraform init

You should receive a message saying it is getting the module code and has been successfully initialised. From here we need to create workspaces:

terraform workspace list

This should show only one workspace: default. Let’s create dev and test:

terraform workspace new dev 
terraform workspace new test

You can check they’re there by using the previous command.

Let’s swap to dev:

terraform workspace select dev

Now let’s plan and apply the changes as previously, using the plan and apply commands:

terraform plan
terraform apply

You should see that the name of the bucket has the environment interpolated into it thanks to ${terraform.workspace}. Swap into the test workspace and reapply. Now go to the AWS management console and navigate to S3. You should see the two buckets there; also go into the remote state bucket to check that the remote state has been correctly written.

Now we’re done mucking around, let’s tear down our infrastructure. In both workspaces for the modules example, and in the shared-remote-state directory, run:

terraform destroy

Go to the management console and check everything has gone.
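For clarity, the full teardown might look something like the below, assuming you start in the modules-and-workspaces directory and used the example directory names from earlier:

```
# Destroy the module buckets in each workspace
terraform workspace select dev
terraform destroy
terraform workspace select test
terraform destroy

# Then destroy the original bucket. This bucket also holds the
# remote state, so it makes sense to remove it last.
cd ../shared-remote-state-s3
terraform destroy
```

As with apply, each destroy will show a plan and prompt for a yes before deleting anything.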

Conclusion

To summarise, we have explored:

  • The definition and uses of ‘Infrastructure as Code’.
  • The offering that Terraform makes.
  • A small series of examples of using Terraform.

I hope it was useful!


James Collerton

Senior Software Engineer at Spotify, Ex-Principal Engineer at the BBC