How to use Terraform to set up kind with localhost

Welcome to this exploration of containerized applications, where I walk through the tools I use to manage and orchestrate my development environment, all from the comfort of my M1 MacBook.

Our journey begins with Terraform, the open-source Infrastructure as Code (IaC) tool. Terraform lets us define and provision the entire local infrastructure declaratively, thanks to its intuitive, high-level configuration language.

In my tech stack, I’ve also incorporated Kind (Kubernetes in Docker). Kind runs local Kubernetes clusters inside Docker containers, creating a seamless, sandboxed environment that serves as the perfect test bed for my applications.

To manage service exposure within these Kubernetes clusters, I’ve relied on MetalLB, a load balancer designed specifically for bare-metal Kubernetes clusters. It plays a crucial role in managing external access to my services.

The primary focus of this article is a specific issue encountered when using Docker on macOS and Windows: the Kind cluster’s network is not directly reachable from the host. Because of this limitation, we have to resort to port mappings to reach the cluster’s services on localhost. Through the course of this piece, we will delve into this challenge, unravel its implications, and explore an effective way to work around it.

In Terraform you can put all resources in one big file or split them up into multiple files. I prefer the latter, so my Terraform folder contains several files.
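A layout like the following works well. The file names here are only a suggestion; Terraform picks up every .tf file in the directory regardless of how you name them:

```
terraform/
├── providers.tf    # required_providers and provider blocks
├── cluster.tf      # the kind_cluster resource
├── metallb.tf      # MetalLB Helm release and address pool
├── contour.tf      # Contour ingress controller
└── echoserver.tf   # demo deployment, service, and HTTPProxy
```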

Terraform providers

In order to realize our end goal, we’ll need to leverage various Terraform providers. Each of these providers plays a unique role in managing our Kind cluster and Kubernetes resources.

terraform {
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = "~> 0.1.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}

provider "kind" {}

provider "kubectl" {
  host                   = kind_cluster.default.endpoint
  cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
  client_certificate     = kind_cluster.default.client_certificate
  client_key             = kind_cluster.default.client_key
  load_config_file       = false
}

provider "kubernetes" {
  host                   = kind_cluster.default.endpoint
  cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
  client_certificate     = kind_cluster.default.client_certificate
  client_key             = kind_cluster.default.client_key
}

provider "helm" {
  kubernetes {
    host                   = kind_cluster.default.endpoint
    cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
    client_certificate     = kind_cluster.default.client_certificate
    client_key             = kind_cluster.default.client_key
  }
}

You may notice the kind_cluster.default… references in the above code. They point to the kind_cluster resource named default, which we will define in the next section; Terraform resolves references like these regardless of the order in which resources appear.

Kind cluster

Now let's spin up our local Kubernetes cluster using KinD.

resource "kind_cluster" "default" {
  name           = "test-cluster"
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"
    node {
      role = "control-plane"
    }
    node {
      role = "worker"
      extra_port_mappings {
        container_port = 80
        host_port      = 80
      }
      extra_port_mappings {
        container_port = 443
        host_port      = 443
      }
    }
  }
}

Most of the KinD configuration is straightforward, but the extra_port_mappings deserve a closer look. Because the KinD nodes run inside Docker containers, we need to map ports 80 and 443 from the worker node's container to the host; this is what ultimately makes the ingress controller reachable on localhost.
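For reference, the kind_config block above corresponds one-to-one to the following native kind configuration file, the same thing you would pass to kind create cluster --config:

```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
```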

MetalLB

To set up MetalLB we first have to determine the subnet of the Docker network that kind created. This is a bit tricky because Terraform cannot inspect Docker networks on its own, so we run a small bash script through the external data source and pass its output to the Terraform resources.

data "external" "kind_network_inspect" {
  depends_on = [kind_cluster.default]
  program    = ["bash", "${path.module}/get_docker_network_ip.sh"]
}

resource "helm_release" "metallb" {
  depends_on = [kind_cluster.default]
  name       = "metallb"

  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "metallb"
  namespace        = "metallb"
  version          = "4.5.1"
  create_namespace = true
}

resource "kubectl_manifest" "kind_address_pool" {
  depends_on = [helm_release.metallb]
  yaml_body  = <<YAML
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: main-address
  namespace: metallb
spec:
  addresses:
  - ${data.external.kind_network_inspect.result.Subnet}
YAML
}
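One caveat worth knowing about: with the CRD-based configuration that recent MetalLB versions use, addresses from an IPAddressPool are only announced if a matching L2Advertisement exists. If your load balancer IPs stay pending, adding one alongside the pool should fix it (resource names here are my own choice):

```
resource "kubectl_manifest" "kind_l2_advertisement" {
  depends_on = [kubectl_manifest.kind_address_pool]
  yaml_body  = <<YAML
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: main-advertisement
  namespace: metallb
spec:
  ipAddressPools:
  - main-address
YAML
}
```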

The bash script is pretty simple and just uses jq to extract the IPAM configuration (including the subnet) of the kind Docker network.

#!/bin/bash
# Print the IPv4 IPAM config of the "kind" Docker network as a JSON object.
# IPv6 entries are filtered out because their position in Config is not guaranteed.
docker network inspect kind |
  jq '[.[0].IPAM.Config[] | select(.Subnet | contains(":") | not)][0]'
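On a typical Docker installation the script prints something along these lines (the exact subnet varies). The external data source requires a flat JSON object of string values, which this satisfies, and exposes the fields under .result:

```
{
  "Subnet": "172.18.0.0/16",
  "Gateway": "172.18.0.1"
}
```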

Ingress controller

To expose our services to the outside world we need an ingress controller. I prefer Contour because it is easy to set up.

resource "helm_release" "projectcontour" {
  depends_on = [kind_cluster.default]
  name       = "contour"

  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "contour"
  namespace        = "projectcontour"
  version          = "12.1.0"
  create_namespace = true
}

Echoserver

To test our setup we will deploy the echoserver, a simple application that echoes each request back, including its headers and body.

resource "kubernetes_namespace" "echoserver" {
  metadata {
    name = "echoserver"
  }
}

resource "kubernetes_deployment" "echoserver" {
  depends_on = [kubernetes_namespace.echoserver]
  metadata {
    name      = "echoserver"
    namespace = "echoserver"
    labels = {
      App = "echoserver"
    }
  }

  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "echoserver"
      }
    }
    template {
      metadata {
        labels = {
          App = "echoserver"
        }
      }
      spec {
        container {
          image = "jmalloc/echo-server"
          name  = "echoserver"
          port {
            container_port = 8080
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "echoserver" {
  depends_on = [kubernetes_deployment.echoserver]

  metadata {
    name      = "echoserver"
    namespace = "echoserver"
  }

  spec {
    selector = {
      App = "echoserver"
    }
    port {
      port        = 8080
      target_port = 8080
    }
  }
}

resource "kubectl_manifest" "echoserver_httpproxy" {
  depends_on = [helm_release.projectcontour, kubernetes_service.echoserver]
  yaml_body  = <<YAML
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: echoserver
  namespace: echoserver
spec:
  virtualhost:
    fqdn: localhost
  routes:
    - services:
      - name: echoserver
        port: 8080
YAML
}
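Once everything is applied, the HTTPProxy routes requests for the localhost virtual host through Contour to the echoserver. Assuming terraform apply succeeded and the pods are ready, a plain request from the host should come back echoed, roughly like this (the pod name will differ):

```
$ curl http://localhost/
Request served by echoserver-<pod-hash>

GET / HTTP/1.1
Host: localhost
...
```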

Conclusion

In this article we have seen how to set up a local Kubernetes cluster using KinD and how to expose its services on localhost using MetalLB and Contour. This setup is very useful for local development and testing. The complete code can be found on GitHub.