How to Deploy ElasticSearch on GKE using Terraform and Helm

02 May, 2021

I was recently tasked with deploying ElasticSearch on GKE using Terraform and Helm, and with doing so in the most readable way possible. I wasn’t very familiar with Helm before, so I did some research to find an approach that would fulfill the requirements. In this post I will share with you the Terraform configuration I used to achieve a successful deployment.

What is Helm?

Helm is, at its most basic, a templating engine that helps you define, install, and upgrade applications running on Kubernetes. Its key feature is Charts: Kubernetes YAML configuration files (which can be further configured and extended) combined into a single package that can be used to deploy an application on a Kubernetes cluster. To use Helm via Terraform, we need to define the corresponding provider and pass it the credentials needed to connect to the GKE cluster.

provider "helm" {
  kubernetes {
    token                  = data.google_client_config.client.access_token
    host                   = data.google_container_cluster.gke.endpoint
    cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
  }
}
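The provider block above references two data sources that are not shown in the post. A minimal sketch of what they could look like follows; the cluster name and location are placeholders you would replace with your own values.

# Credentials of the identity running Terraform, used to obtain the access token.
data "google_client_config" "client" {}

# Connection details of the existing GKE cluster; name and location are placeholders.
data "google_container_cluster" "gke" {
  name     = "my-gke-cluster"
  location = "europe-west4"
}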

Terraform configuration

I am defining a helm_release resource with Terraform, which will deploy the ElasticSearch cluster when applied. Since I am using an existing Helm chart for the cluster, doing so is incredibly easy: all I had to do was tell Helm the name of the chart to use, the repository where it is located, and the version of ElasticSearch I would like to run.
With the set blocks, I can then override the chart’s default values. This makes it easy to select an appropriate storage class, the amount of storage, and in general any other piece of configuration that can be changed (refer to the chart’s documentation to see which values can be overridden), directly from Terraform.

resource "helm_release" "elasticsearch" {
  name       = "elasticsearch"
  repository = "https://helm.elastic.co"
  chart      = "elasticsearch"
  version    = "6.8.14"
  timeout    = 900

  set {
    name  = "volumeClaimTemplate.storageClassName"
    value = "elasticsearch-ssd"
  }

  set {
    name  = "volumeClaimTemplate.resources.requests.storage"
    value = "5Gi"
  }

  set {
    name  = "imageTag"
    value = "6.8.14"
  }
}
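If you end up overriding many values, the helm_release resource also accepts a values argument that takes raw YAML. The sketch below is an alternative to the set blocks above (not an addition to the same configuration), and it assumes a hypothetical elasticsearch-values.yaml file next to the Terraform code containing the same overrides.

resource "helm_release" "elasticsearch" {
  name       = "elasticsearch"
  repository = "https://helm.elastic.co"
  chart      = "elasticsearch"
  version    = "6.8.14"
  timeout    = 900

  # Hypothetical values file holding the same overrides as the set blocks above.
  values = [file("${path.module}/elasticsearch-values.yaml")]
}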

Creating a new storage class

I then had to provision a new storage class, which the ElasticSearch cluster will use to store its data. The configuration below sets up the SSD-backed persistent disks (SSD is recommended for this purpose, since it’s faster than a regular HDD) that I referenced in the main configuration above.

resource "kubernetes_storage_class" "elasticsearch_ssd" {
  metadata {
    name = "elasticsearch-ssd"
  }
  storage_provisioner = "kubernetes.io/gce-pd"
  reclaim_policy      = "Retain"
  parameters = {
    type = "pd-ssd"
  }
  allow_volume_expansion = true
}
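Note that the kubernetes_storage_class resource is managed by the Terraform Kubernetes provider, so that provider also needs to be pointed at the GKE cluster. A minimal sketch, reusing the same data sources as the Helm provider above:

provider "kubernetes" {
  token                  = data.google_client_config.client.access_token
  host                   = data.google_container_cluster.gke.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
}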

Summary

In this blog post I have shown you how to deploy ElasticSearch on GKE using Terraform and Helm. The required configuration is simple and very readable, lowering the barrier to handling all of your infrastructure via Terraform, rather than, for example, using Cloud Marketplace, managed services, or other custom solutions.
Credits: Header image by Luca Cavallin on nylavak.com

Luca Cavallin
Luca is a Software Engineer and Trainer with full-stack experience ranging from distributed systems to cross-platform apps. He is currently interested in building modern, serverless solutions on Google Cloud using Golang, Rust, and React, and in leveraging SRE and Agile practices. Luca holds 3 Google Cloud certifications, is part of the Google Developers Experts community, and is the co-organizer of the Google Cloud User Group that Binx.io holds with Google.