Learning HashiCorp Terraform on AWS

Aug 22, 2022
I have been working as a consultant for over a year now, and I've noticed that a lot of our clients ask for experience with HashiCorp Terraform and AWS. So I figured it might be time to learn something new!

When I started looking at Terraform I had never used it. What I did have was extensive knowledge of AWS. And as with any other infrastructure-as-code tool, you need to know the cloud provider to actually build proper infrastructure. Tools like Terraform, CloudFormation, and CDK are just that: tools to build infrastructure. They can be learned, and you can switch between them depending on the project's needs.

HashiCorp Terraform on AWS

Terraform introduction

I started out by reaching out to my colleagues Dean Shanahan and Bruno Schaatsbergen. They explained the basics of how Terraform works. Let me recap that for you in a short and simple story:

In Terraform you declare your infrastructure. The declarations are translated into API calls to the cloud provider. So if you know how to build infrastructure in your cloud provider, you already know what is needed in Terraform; you only need to look up the syntax.
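As a sketch of what such a declaration looks like, here is a minimal configuration that creates an S3 bucket (the region and bucket name are placeholders):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # placeholder region
}

# On `terraform apply` this declaration is translated into
# an S3 CreateBucket API call.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # bucket names must be globally unique
}
```

Running `terraform init` followed by `terraform apply` is enough to bring this bucket to life.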

So what is the difference between Terraform and CloudFormation then? When you strip away the actual DSL (Domain Specific Language), the main difference is that CloudFormation manages the state for you, while with Terraform you manage the state yourself. Meaning you have the responsibility of storing it somewhere.

So what is this state that I mentioned?

Imagine that we want to build an S3 bucket. When you do this using the AWS Console, you follow the wizard, and the console uses the CreateBucket API call to create the bucket. When you create a bucket using CloudFormation, you hand over a template to the CloudFormation service. The service reads the template and checks whether the bucket we added already exists in the current version. Since we just added it, it does not, so CloudFormation knows it needs to perform a CreateBucket. This lookup is done against the state.

The same principle applies to Terraform: when you add a resource that is not known in the state, Terraform creates it; when the resource is already known in the state, Terraform updates it.

When you are just playing around with Terraform, you can keep the state in a local file. But when you are dealing with teams and pipelines, you do want your state to be stored somewhere central. You can do this using an S3 bucket and a DynamoDB table: the S3 bucket contains the state file, and the DynamoDB table is used for state locking. State locking prevents two deployments from happening at the same time.

Remote state using a S3 Bucket and DynamoDB
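A remote state setup like this is configured in the `backend` block. Here is a minimal sketch; the bucket, key, and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # S3 bucket holding the state file
    key            = "project/terraform.tfstate" # path of the state file in the bucket
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"          # DynamoDB table used for state locking
    encrypt        = true
  }
}
```

For the locking to work, the DynamoDB table must have a partition key named `LockID` of type String.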

Learning Terraform

I followed the Terraform Associate Certification Exam Preparation course on Cloud Academy, but HashiCorp Certified Terraform Associate from A Cloud Guru also works. Or you can just play around while reading the documentation.

Within a couple of hours I already became comfortable enough to state: I know Terraform! There are so many providers available that nobody knows them all. You shouldn't learn them by heart anyway, because the providers receive updates. So it's always better to rely on the documentation than on your memory.

In the end, when you join an existing team, you need to learn what they are actually building. Once you know that, you can have a look at how they did it in Terraform.

Using Terraform

One thing that I really liked about Terraform: you can run pre and post actions when you create resources. This is super useful when, for example, you create an Elastic Container Registry repository. After the repository is created, you can build a Docker image and upload it into the repository.
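One way to express this is a `local-exec` provisioner on a `null_resource`; a minimal sketch, assuming Docker and the AWS CLI are available where Terraform runs, and with a hypothetical repository name:

```hcl
resource "aws_ecr_repository" "app" {
  name = "my-app" # hypothetical repository name
}

# Runs after the repository exists: log in, build, and push an image.
resource "null_resource" "build_and_push" {
  triggers = {
    repository_url = aws_ecr_repository.app.repository_url
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ecr get-login-password | docker login --username AWS --password-stdin ${aws_ecr_repository.app.repository_url}
      docker build -t ${aws_ecr_repository.app.repository_url}:latest .
      docker push ${aws_ecr_repository.app.repository_url}:latest
    EOT
  }
}
```

The `triggers` block ties the provisioner to the repository URL, so the build reruns if the repository is ever replaced.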

The same goes for deleting resources. For example, when you delete an S3 bucket, the bucket needs to be empty. With Terraform you can first remove all the objects and then let Terraform remove the bucket.
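For the S3 case, the AWS provider supports this directly through the `force_destroy` argument, which tells Terraform to delete all objects before deleting the bucket (the bucket name is a placeholder):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket        = "my-log-bucket" # placeholder name
  force_destroy = true            # empty the bucket first on `terraform destroy`
}
```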

Terraform also has native drift detection. For example, when you add a tag to a resource using the console and then provision an update using Terraform, you will get a notification that the tag will be removed.

That being said, I usually use CodePipeline for my deployments. CodePipeline has good integration with CloudFormation. If you want to deploy your Terraform code in a CodePipeline, you will need to use CodeBuild to perform the Terraform deployment, and there is some additional work needed to manage the state properly.

Conclusion

CloudFormation and Terraform are really comparable products in what they do. If you know how to build your infrastructure in your cloud provider, learning Terraform and/or CloudFormation is easy: it's just declaring what you want to build.

The more you use it, the easier it gets, because you gain experience with the tool. But in the end it's just a tool, and if a better one comes along you can easily switch, because you know the underlying cloud provider.

Photo by Pixabay

Joris has been working with the AWS cloud since 2009, focusing on building event-driven architectures. Having worked with the cloud from (almost) the start, he has seen most of the services being launched. Joris strongly believes in automation and infrastructure as code, and is open to learning new things and experimenting with them, because that is the way to learn and grow. In his spare time he enjoys running and runs a small micro brewery from his home.