Initial tooling setup: gcloud, kubectl and terraform


Download the Terraform release archive (0.12.13 for linux_amd64 shown here; adjust the version and OS/arch as needed) and unzip the binary onto your PATH:

curl -o terraform.zip \
  https://releases.hashicorp.com/terraform/0.12.13/terraform_0.12.13_linux_amd64.zip

unzip terraform.zip -d /usr/local/bin/


Verify terraform version 0.12.13 or higher is installed:

terraform version

Installing kubectl


brew install kubernetes-cli


kubectl version --client

Install gcloud sdk

curl https://sdk.cloud.google.com | bash

Authenticate to gcloud

Before configuring the gcloud CLI, you can list the available regions and zones and pick the ones nearest to your location:

gcloud auth login
gcloud compute regions list
gcloud compute zones list

Run gcloud init and follow the prompts to select a default region and zone, e.g. us-east1:

gcloud init

Creating Google Cloud project and service account for terraform

It is best practice to use a separate "technical account" to manage infrastructure; this account can then be used for automated code deployment in TravisCI or any other tool we may choose.

Set up environment

export TF_VAR_org_id=46xxxxxxxxx
export TF_VAR_billing_account=010xxxxxxxxxxx
export TF_ADMIN=terraform-admin
export TF_CREDS=~/.config/gcloud/terraform-admin.json

NOTE: you can find the values for TF_VAR_org_id and TF_VAR_billing_account by running:

gcloud organizations list

gcloud beta billing accounts list

Create the Terraform Admin Project

Create a new project and link it to our billing account

gcloud projects create ${TF_ADMIN} --organization ${TF_VAR_org_id} --set-as-default

gcloud beta billing projects link ${TF_ADMIN} --billing-account ${TF_VAR_billing_account}

Create the Terraform service account

Create the service account in the Terraform admin project and download the JSON credentials:

gcloud iam service-accounts create terraform --display-name "Terraform admin account"

gcloud iam service-accounts keys create ${TF_CREDS} --iam-account terraform@${TF_ADMIN}.iam.gserviceaccount.com

Grant the service account permission to view the Admin Project and manage Cloud Storage:

gcloud projects add-iam-policy-binding ${TF_ADMIN} \
  --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
  --role roles/viewer

gcloud projects add-iam-policy-binding ${TF_ADMIN} \
  --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
  --role roles/storage.admin

Enable the APIs this setup will use (resource management, billing, IAM, Compute Engine, GKE and Cloud SQL):

gcloud services enable cloudresourcemanager.googleapis.com && \
gcloud services enable cloudbilling.googleapis.com && \
gcloud services enable iam.googleapis.com && \
gcloud services enable compute.googleapis.com && \
gcloud services enable container.googleapis.com && \
gcloud services enable sqladmin.googleapis.com

Add organization/folder-level permissions

Grant the service account permission to create projects and assign billing accounts

gcloud organizations add-iam-policy-binding ${TF_VAR_org_id} \
  --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
  --role roles/resourcemanager.projectCreator

gcloud organizations add-iam-policy-binding ${TF_VAR_org_id} \
  --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
  --role roles/billing.user

Creating backend storage for the tfstate file in Cloud Storage

By default, Terraform stores state about your infrastructure and configuration in a local file called "terraform.tfstate". Terraform uses this state to map resources to configuration and to track metadata.

Terraform also allows the state file to be stored remotely, which works better in a team environment or for automated deployments. We will use Google Cloud Storage and create a new bucket where we can store state files.

Create the remote backend bucket in Cloud Storage to store the terraform.tfstate file:

gsutil mb -p ${TF_ADMIN} -l us-east1 gs://${TF_ADMIN}

Enable object versioning on the bucket so previous versions of the state file are retained:

gsutil versioning set on gs://${TF_ADMIN}

Configure your environment for the Google Cloud terraform provider
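One way to do this (assuming the service-account key file created earlier) is via environment variables that the Google provider recognizes:

export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS}
export GOOGLE_PROJECT=${TF_ADMIN}

With GOOGLE_APPLICATION_CREDENTIALS set, the provider authenticates as the Terraform service account without credentials appearing in the code.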


Setting up separate projects for Development and Production environments

In order to segregate the Development environment we will use Google Cloud projects, which allow us to separate infrastructure while keeping the same Terraform code base.

Terraform lets us keep a separate tfstate file per environment through its workspaces functionality. Let's look at the current file structure:

├── terraform.tfvars

Initialize and pull Terraform's cloud-specific dependencies

Terraform uses a modular setup, and in order to download the plugin for a specific cloud provider, Terraform first needs to be initialized:

terraform init

Workspace creation for dev and prod

Once we have our project code and our tfvars secrets secured, we can create workspaces for Terraform.

NOTE: in the example below we will use only the dev workspace, but both can be used following the same logic.
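The workspace commands themselves are straightforward (the names dev and prod are this tutorial's convention):

terraform workspace new dev
terraform workspace new prod
terraform workspace select dev

Running terraform workspace list afterwards shows all workspaces, with the current one marked by an asterisk.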

Terraform plan

terraform plan simulates the changes Terraform will make on the cloud provider:

terraform plan

Apply the Terraform plan for the selected environment:

terraform apply

Creating Kubernetes cluster on GKE and PostgreSQL on Cloud SQL

Once the dev and prod projects are ready, we can move on to deploying our GKE and Cloud SQL infrastructure.

Code structure

├── backend
│   ├── firewall
│   ├── subnet
│   └── vpc
├── cloudsql
└── gke

Each module directory contains its own Terraform files (typically main.tf, variables.tf and outputs.tf).

Now we deploy our infrastructure; the noticeable differences between the prod and dev workspaces can be found in the terraform files.

It's best to use modules for every segment: networking (vpc, subnets and firewall), cloudsql and gke.
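As a sketch, the vpc module could be as small as the following (the resource definition here is an assumption for illustration, not taken from the repo):

# ./backend/vpc/main.tf -- hypothetical minimal VPC module
resource "google_compute_network" "vpc" {
  name                    = "${terraform.workspace}-vpc"
  auto_create_subnetworks = false
}

# ./backend/vpc/outputs.tf -- exposes vpc_name, consumed by the subnet and firewall modules
output "vpc_name" {
  value = "${google_compute_network.vpc.name}"
}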

# Configure the Google Cloud provider

data "terraform_remote_state" "project_id" {
  backend   = "gcs"
  workspace = "${terraform.workspace}"

  config = {
    bucket = "${var.bucket_name}"
    prefix = "terraform-project"
  }
}

provider "google" {
  version = "~> 2.5"
  project = "${data.terraform_remote_state.project_id.outputs.project_id}"
  region  = "${var.region}"
}

module "vpc" {
  source = "./backend/vpc"
}

module "subnet" {
  source      = "./backend/subnet"
  region      = "${var.region}"
  vpc_name    = "${module.vpc.vpc_name}"
  subnet_cidr = "${var.subnet_cidr}"
}

module "firewall" {
  source        = "./backend/firewall"
  vpc_name      = "${module.vpc.vpc_name}"
  ip_cidr_range = "${module.subnet.ip_cidr_range}"
}

module "cloudsql" {
  source                     = "./cloudsql"
  region                     = "${var.region}"
  availability_type          = "${var.availability_type}"
  sql_instance_size          = "${var.sql_instance_size}"
  sql_disk_type              = "${var.sql_disk_type}"
  sql_disk_size              = "${var.sql_disk_size}"
  sql_require_ssl            = "${var.sql_require_ssl}"
  sql_master_zone            = "${var.sql_master_zone}"
  sql_connect_retry_interval = "${var.sql_connect_retry_interval}"
  sql_replica_zone           = "${var.sql_replica_zone}"
  sql_user                   = "${var.sql_user}"
  sql_pass                   = "${var.sql_pass}"
}

module "gke" {
  source                = "./gke"
  region                = "${var.region}"
  min_master_version    = "${var.min_master_version}"
  node_version          = "${var.node_version}"
  gke_num_nodes         = "${var.gke_num_nodes}"
  vpc_name              = "${module.vpc.vpc_name}"
  subnet_name           = "${module.subnet.subnet_name}"
  gke_master_user       = "${var.gke_master_user}"
  gke_master_pass       = "${var.gke_master_pass}"
  gke_node_machine_type = "${var.gke_node_machine_type}"
  gke_label             = "${var.gke_label}"
}

I keep all variables consumed by the modules in a single file.
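For instance, a variables.tf along these lines (the defaults shown are illustrative assumptions, not values from the repo):

# variables.tf -- a subset of the variables the modules consume
variable "region" {
  default = "us-east1"
}

variable "bucket_name" {
  description = "GCS bucket that stores the remote tfstate"
}

variable "subnet_cidr" {
  default = "10.10.0.0/24"
}

variable "sql_user" {}
variable "sql_pass" {}
variable "gke_master_user" {}
variable "gke_master_pass" {}

Secrets such as sql_pass belong in terraform.tfvars, which should stay out of version control.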

We will use the same Google Cloud Storage bucket, but with a different prefix so as not to conflict with the project-creation terraform plan.

# Configure the Google Cloud tfstate file location
terraform {
  backend "gcs" {
    bucket = "gke-service"
    prefix = "source-tfstate"
  }
}

Running terraform changes for infrastructure

As we are in a separate code base, we will need to follow the same sequence as in project creation.

NOTE: just make sure we have a new terraform.tfvars:

bucket_name         = "gke-service"
gke_master_pass     = "your-gke-password"
sql_pass            = "your-sql-password"

Terraform Tips

For more info, please check the GitHub repo