We have all been there: testing the Infrastructure-as-Code (IaC) a fellow engineer wrote just last week, only to discover that our local Terraform version is not compatible with their code. ARRRGGGHHH! There seems to be an ever-increasing stack of software tooling required to run and maintain infrastructure these days: cloud vendor CLIs, Ansible, Terraform, kubectl, and much, much more. Furthermore, version conflicts are everywhere. Good luck managing two different cloud environments if they were deployed with different versions of the same tooling! That’s why last year I made the decision to stop installing tooling locally, and I now run everything in containers; I call these tooling containers toolboxes.
Every time I start a new project on a cloud provider, I piece together a tooling container that I will use to deploy and configure the infrastructure. I typically build it from a toolbox template stored on the Binx.io GitHub repository; this offers a good starting point as I can take an existing Dockerfile for installing the most common tools. To keep the containers as light as possible I only install the tools I need. So typically this is just the CLI tool of the cloud provider I am using and one or two other things. This Dockerfile can then be pushed to a git repository and distributed to all members of the team to ensure that we are all using the same tooling and versions. From here, the only issue that needs to be taken care of is how to authenticate to the cloud provider.
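As a sketch, a minimal toolbox Dockerfile along these lines might look like this. The base image, the pinned Terraform version, and the choice of the AWS CLI are illustrative assumptions, not the actual contents of the template:

```dockerfile
# Illustrative toolbox: install only the tools the project needs, every version pinned.
FROM alpine:3.19

# Hypothetical version; pin whatever your project actually uses.
ARG TERRAFORM_VERSION=1.5.7

RUN apk add --no-cache curl unzip aws-cli \
 && curl -fsSLo /tmp/terraform.zip \
      "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && rm /tmp/terraform.zip

WORKDIR /home
```

Because every version is an explicit `ARG` or package pin, rebuilding the image from the same commit always yields the same tooling.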
For authentication, I take a different approach with each cloud provider. Authentication is not a big issue for CI pipelines as you can just code the couple of commands needed and store the secrets as environment variables. However, having to type out the same commands every time you launch the toolbox can get old pretty quickly. In order to make it easier I do one of two things:
- Take advantage of the fact that CI tooling typically overrides the `--entrypoint` flag for a container, so we can use the `ENTRYPOINT` instruction in the Dockerfile to script the lines needed for authentication, then pass the secrets to the container as environment variables.
- Write a shell script that executes the lines required for logging into the tooling container, then alias the execution of this file so it can be quickly run.
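The first option can be sketched as follows; `login.sh` is a placeholder name for a script containing your provider’s login commands, not part of the original template:

```dockerfile
# Bake the authentication steps into the image; secrets arrive at run time as
# environment variables (docker run -e ...). CI systems that override
# --entrypoint simply bypass this script and run their own commands.
COPY login.sh /usr/local/bin/login.sh
ENTRYPOINT ["/usr/local/bin/login.sh"]
```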
Of these two methods, I prefer the second, as it keeps the container generic for others to make use of. It also allows me to take care of the volume mounts without having to add `-v $PWD:/home` each time. I use the format clientname-environment as the alias for my script, so I know exactly which tooling container I am running, and for whom. Bye, bye, authentication issues!
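A sketch of the wrapper-plus-alias approach, assuming an AWS-style toolbox; the client name, image name, and credential variables are all illustrative, not from the original scripts:

```shell
# Compose the docker run command for a toolbox so the volume mount and the
# credential environment variables never have to be typed out by hand.
# "acme-toolbox" and the AWS_* variables are illustrative placeholders.
toolbox_cmd() {
  image="$1"
  printf 'docker run --rm -it -v %s:/home -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY %s' \
    "$PWD" "$image"
}

# In ~/.bashrc, define one alias per client and environment
# (the clientname-environment naming scheme), e.g.:
#   alias acme-prod="$(toolbox_cmd acme-toolbox:0.2.3)"
```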
Now that I can easily authenticate my tooling containers to their respective cloud environments, the only thing left to worry about is how to manage versions. It has become a cliché in the IT industry at this point, but you should not rely upon containers that use the tag `latest`. To make it easy to manage tooling versions and update the containers, I use a GitOps strategy. The strategy is simple:
- A standard git flow with branches is used for features; branches are merged to the master branch when approved.
- A CI pipeline file is used to build the image.
- Each time a push is made to the master branch, the resulting container image is pushed to the container registry with the tag `latest`.
- When a commit has been tagged using `git tag`, the resulting container image is pushed to the container registry using both the tag reference and the tag `latest`.
- The changes between tagged versions are documented in a CHANGELOG.md.
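The push logic above can be sketched as a small helper that maps the git ref triggering a CI run to the image tags to publish. The `refs/…` formats follow the common git convention; the function name and the registry in the comment are illustrative:

```shell
# Decide which image tags to publish for a CI run, based on the git ref that
# triggered it. A tagged commit publishes both the version tag and latest;
# a push to master publishes latest only; feature branches publish nothing.
tags_for_ref() {
  ref="$1"
  case "$ref" in
    refs/tags/*)       echo "${ref#refs/tags/} latest" ;;
    refs/heads/master) echo "latest" ;;
    *)                 echo "" ;;
  esac
}

# The pipeline would then, for each tag T in $(tags_for_ref "$CI_REF"), run
# something like: docker build -t "registry.example.com/toolbox:$T" . && docker push ...
```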
So now all I need to do is tell my team that we are using version 0.2.3 of the toolbox, and we can all be certain that we, and our CI pipelines, are deploying with the same version of each tool. As the old adage goes, “consistency is key”!