⚙️ Getting Started

This document will walk through each step in order. After following it you will have initialized and provisioned a GKE cluster, deployed OTFE onto it, and updated routing for encoding.

Prerequisites

  • Your Google account must have the otfe-staging role assigned
  • You must have Ansible installed
      • Mac users: brew install ansible
  • You must have Terraform installed
      • Mac users: brew install terraform
  • You must have Helm installed
      • Mac users: brew install helm
  • You must have the GKE Auth Plugin installed
      • All users: gcloud components install gke-gcloud-auth-plugin
  • You must have the .staging.env & .production.env environment files at the root of the repo (pulled from Keeper)

Warning

The naming convention and location of the *.env files are important.

  • .staging.env
  • .production.env

They must be placed at the root of the repository.

  • You must have an otfe.json Google credentials JSON file at creds/otfe.json at the root of the repo (pulled from Keeper)

Warning

The naming convention and location of the credentials file are important.

  • creds/otfe.json

They must be placed at the root of the repository underneath the creds directory.
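
As a quick pre-flight check, a small shell snippet (a sketch; the file names come from the prerequisites above) can confirm all three files are in place before running anything:

```shell
# Pre-flight check for the required files (names taken from the prerequisites above).
# Run from the root of the repository, or pass the repo root as the first argument.
check_required_files() {
  local root="${1:-.}"
  local missing=0
  for f in .staging.env .production.env creds/otfe.json; do
    if [ ! -f "$root/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return "$missing"
}
```

If anything is reported as missing, pull it from Keeper before continuing.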

Initializing the GKE cluster

The first step is to initialize the actual Kubernetes cluster on Google Cloud Platform. This is done using Terraform.

What does this step do?

  • Selects the Terraform workspace so managed state is used for the proper environment
  • Lets you view and validate the Terraform plan
  • Spins up the GKE cluster
  • Creates the proper DNS records on Cloudflare for the respective zone
  • Ensures networking, NAT, router, and firewall rules are created correctly for the cluster

Initialization

  1. Run make init to pull down the Terraform state from Google Cloud Storage
  2. Select one of the 4 workspaces:
make use-staging
make use-prod
make use-staging-aus
make use-prod-aus
  3. Preview the plan with make plan

  4. Apply the plan with make apply

Info

You will need to manually approve this step before the provisioning continues.
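
Under the hood these make targets presumably wrap the standard Terraform workflow. A dry-run sketch (the exact commands and flags are an assumption, not read from the Makefile; this only echoes the likely sequence):

```shell
# Dry-run sketch of the Terraform flow the make targets appear to wrap.
# The real Makefile may use different flags; nothing here touches infrastructure.
tf_flow() {
  local workspace="$1"   # e.g. staging, prod, staging-aus, prod-aus
  echo "terraform init"                          # make init: pulls state from GCS
  echo "terraform workspace select $workspace"   # make use-<env>
  echo "terraform plan"                          # make plan
  echo "terraform apply"                         # make apply: prompts for approval
}
```

The interactive prompt on terraform apply is the manual approval step mentioned above.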

Provisioning the GKE cluster

Warning

Ensure WARP is disabled to avoid certificate failures during the bootstrapping process.

Once the cluster is running, we need to bootstrap it with the proper dependencies, TLS certificates, ingress resources for routing, secrets & more. This is done using Ansible.

What does this step do?

  • Creates Kubernetes secrets
  • Creates the ingress for routing
  • Generates TLS certificates for the proper domain
  • Validates health of the data plane on K8s

Initialization

  1. Select one of the 4 workspaces (if you didn't already do so in the previous section):
make use-staging
make use-prod
make use-staging-aus
make use-prod-aus

Next, we need to bootstrap the cluster with all the resources, secrets, controllers, certs, etc.

Provision the GKE cluster with Ansible:
make bootstrap

Deploying OTFE

The k8s/ directory in the OTFE repo contains deployment YAML specs for both environments.

Configuring Deployment

Note

Because this is used as an emergency fallback, the images are pulled from GCR and hardcoded into the YAML. If you must change or update the images, you can pull them from GCR and update the YAML accordingly.

Staging (US):

export CLUSTER_NAME=stream-k8s-cluster-staging
export PROJECT_NAME=cf-video
export REGION=us-west2

gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT_NAME --region $REGION

# Validate access is successful
kubectl get nodes

Production (US):

export CLUSTER_NAME=stream-k8s-cluster-production
export PROJECT_NAME=cf-video
export REGION=us-west2

gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT_NAME --region $REGION

# Validate access is successful
kubectl get nodes

Staging (AUS):

export CLUSTER_NAME=stream-k8s-cluster-staging-aus
export PROJECT_NAME=cf-video
export REGION=australia-southeast2

gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT_NAME --region $REGION

# Validate access is successful
kubectl get nodes

Production (AUS):

export CLUSTER_NAME=stream-k8s-cluster-production-aus
export PROJECT_NAME=cf-video
export REGION=australia-southeast2

gcloud container clusters get-credentials $CLUSTER_NAME --project $PROJECT_NAME --region $REGION

# Validate access is successful
kubectl get nodes
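
The four blocks above differ only in cluster name and region (PROJECT_NAME is cf-video everywhere). A small helper, sketched from the values shown above, collapses the repetition:

```shell
# Maps an environment label to "<cluster-name> <region>", using the values above.
cluster_info() {
  case "$1" in
    staging)     echo "stream-k8s-cluster-staging us-west2" ;;
    prod)        echo "stream-k8s-cluster-production us-west2" ;;
    staging-aus) echo "stream-k8s-cluster-staging-aus australia-southeast2" ;;
    prod-aus)    echo "stream-k8s-cluster-production-aus australia-southeast2" ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

# Usage:
#   set -- $(cluster_info staging)
#   gcloud container clusters get-credentials "$1" --project cf-video --region "$2"
#   kubectl get nodes
```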

Once successful, we can now deploy OTFE.

# Staging (US)
kubectl apply -f k8s/gcp/staging-gcp.yaml
# Production (US)
kubectl apply -f k8s/gcp/prod-gcp.yaml
# Staging (AUS)
kubectl apply -f k8s/gcp/staging-gcp-aus.yaml
# Production (AUS)
kubectl apply -f k8s/gcp/prod-gcp-aus.yaml

Validate the service

Note

It may take a couple of minutes for the GCP TCP load balancer to bind to the ingress. You can validate this with kubectl get ing -n otfe-staging and ensure a public IPv4 address has been assigned.
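
Rather than re-running that check by hand, a generic retry helper can poll until the address appears. This is a sketch; the jsonpath expression in the usage comment is an assumption, though the namespace comes from the note above:

```shell
# Retry a command up to N times, one second apart, until it succeeds.
wait_for() {
  local tries="$1"; shift
  local i
  for i in $(seq 1 "$tries"); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage (the jsonpath below is an assumption about the ingress status shape):
#   wait_for 60 sh -c \
#     'kubectl get ing -n otfe-staging -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}" | grep -q .'
```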

make healthcheck-staging
make healthcheck-prod
make healthcheck-aus-staging
make healthcheck-aus-prod

Updating routing for encoding

Once the cluster is running and properly bootstrapped, we need to adjust the routing from the delivery worker. This is done via the OTFE active-active experiment. See this doc for detailed instructions.