Deploying VMware Tanzu Data for K8S on GCE Part 1: Postgres


This article is a demo of running Tanzu Data (SCDF/GemFire/Postgres/MySQL) for K8S on a “non-supported” platform. Why bother? If you have a license to run Tanzu Postgres on a supported platform, you don’t need to worry about the K8S control plane. But I want to understand how Tanzu Data interacts with the K8S control plane, e.g., how to set up network policies or a service mesh to secure Tanzu Data, and how to automate Tanzu Data releases in my K8S operational lifecycle.

First, if you don’t have a K8S cluster (1.16+), you can follow my GitHub repo https://github.com/vmware-ysung/cks-centos to create one in GCE, or consider using kubespray, kind, or kops.

Once the cluster is ready, prepare your K8S environment for Tanzu Data. The prerequisites are: 1) network access from your cluster to GCR, Nexus, Harbor, or Docker Hub so that K8S can pull images from those registries; 2) cert-manager installed in the cluster; 3) Helm v3 in your local environment.

I use my Terraform/Ansible/kubeadm setup to deploy an environment with one control-plane node and three worker nodes, plus cert-manager and NGINX ingress. It should look like this:
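If the deployment succeeded, `kubectl get nodes` should report one control-plane node and three workers. A sketch of the expected output (node names and versions below are illustrative, not from my actual cluster):

```shell
kubectl get nodes
# NAME       STATUS   ROLES           AGE   VERSION
# master-1   Ready    control-plane   12m   v1.20.x
# worker-1   Ready    <none>          10m   v1.20.x
# worker-2   Ready    <none>          10m   v1.20.x
# worker-3   Ready    <none>          10m   v1.20.x
```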

Next, let’s get the images from network.pivotal.io (you need to register and accept the license). On network.pivotal.io, search for “Postgres for Kubernetes”, download the release file (postgres-for-kubernetes-v1.0.0.tar.gz) to your desktop, then extract the archive.
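Extracting the archive is a one-liner (the extracted directory name is my assumption based on the archive name; check what `tar` actually produces):

```shell
# Extract the release bundle downloaded from network.pivotal.io
tar xzf postgres-for-kubernetes-v1.0.0.tar.gz
cd postgres-for-kubernetes-v1.0.0
```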

Inside the extracted directory, load the images into your local Docker daemon, re-tag them for your remote registry, then push them to GCR, Nexus, Harbor, or Docker Hub.
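The load/tag/push cycle looks roughly like this for GCR. The image file paths, image names, and tags below are illustrative; use the ones actually shipped in the bundle, and replace `<project>` with your GCP project ID:

```shell
# Load the bundled image archives into the local Docker daemon
docker load -i ./images/postgres-instance
docker load -i ./images/postgres-operator

# Re-tag the loaded images for your GCR registry
docker tag postgres-instance:v1.0.0 gcr.io/<project>/postgres-instance:v1.0.0
docker tag postgres-operator:v1.0.0 gcr.io/<project>/postgres-operator:v1.0.0

# Push to GCR (requires prior "gcloud auth configure-docker" or similar)
docker push gcr.io/<project>/postgres-instance:v1.0.0
docker push gcr.io/<project>/postgres-operator:v1.0.0
```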

Once the images are in your GCR repository, you need a “docker-registry” type secret in your K8S cluster so your workloads can pull the images. Here is an example:
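A minimal sketch for GCR using a service-account JSON key (the secret name `regsecret`, the key file path, and the email are placeholders you should substitute):

```shell
# Create an image-pull secret for GCR; "_json_key" is GCR's
# username convention for service-account key authentication
kubectl create secret docker-registry regsecret \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-service-account-key.json)" \
  --docker-email=you@example.com
```

Remember the secret name you choose here; it must match the `dockerRegistrySecretName` value referenced later in the operator's values.yaml.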

Now we are ready to deploy Tanzu SQL Postgres. Postgres for K8S has two components: the Postgres operator and the Postgres instance. We first deploy the operator, then tell the operator what our Postgres instance should look like using a YAML manifest.

Go back to the Tanzu SQL Postgres folder you just extracted.

Review the “values.yaml” file in the operator subdirectory. Ensure that dockerRegistrySecretName matches the secret you just created, and that operatorImageRepository and postgresImageRepository match the URIs you pushed to.
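The check is a quick inspection; you should see entries resembling the commented lines below (the exact key layout in your release may differ, so treat this as a sketch rather than the file's authoritative schema):

```shell
# Inspect the operator chart's values file
cat operator/values.yaml
# Expect entries similar to:
#   operatorImageRepository: gcr.io/<project>/postgres-operator
#   postgresImageRepository: gcr.io/<project>/postgres-instance
#   dockerRegistrySecretName: regsecret
```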

Now we are ready to deploy the Postgres Operator.
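With Helm v3, deploying the operator is a single install from the chart subdirectory (the release name `postgres-operator` is my choice; consult the README in the bundle for the command your release documents):

```shell
# Install the operator chart from the extracted release directory
helm install postgres-operator operator/
```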

Wait a couple of seconds for the operator pod to become “READY”.
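You can watch the rollout, or block until the pod is ready (the `app=postgres-operator` label selector is an assumption; check the operator pod's actual labels with `kubectl get pods --show-labels`):

```shell
# Watch the operator pod come up
kubectl get pods -w

# Or block until it reports Ready, with a timeout
kubectl wait --for=condition=Ready pod \
  -l app=postgres-operator --timeout=120s
```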

Once the Postgres Operator is ready, we can create a Postgres instance using a YAML manifest like this:
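A minimal sketch of such a manifest, applied inline via a heredoc. The apiVersion, kind, and spec fields below are my approximation of the v1.0.0 CRD; the release bundle ships a sample instance manifest you should use as the authoritative template:

```shell
# Create a Postgres instance by handing the operator a custom resource
kubectl apply -f - <<'EOF'
apiVersion: sql.tanzu.vmware.com/v1
kind: Postgres
metadata:
  name: pg-instance-1
spec:
  memory: 800Mi        # instance memory limit
  cpu: "0.8"           # instance CPU limit
  storageSize: 800M    # persistent volume size
EOF
```

The operator watches for this custom resource and creates the pod, persistent volume claim, and services on your behalf.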

For testing, you can use “kubectl exec” to run psql in the pod once the pod is ready.
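For example (the pod name below is illustrative; list the pods first to find the one the operator created for your instance):

```shell
# Find the instance pod the operator created
kubectl get pods

# Open an interactive psql session inside it
kubectl exec -it pg-instance-1-0 -- psql
```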

As you can see below, there is a ClusterIP service, pg-instance-1. Other resources can use this service to connect to our Postgres instance. In the next article, I will show you how to set up another deployment that connects to the Postgres instance. Stay tuned.

A data nerd who went from data center field engineer to cloud database reliability engineer.
