My Terraform/Ansible script stopped working after I turned on GCP OS Login. I didn’t know what OS Login meant and just turned it on. Then I spent a couple of hours figuring out whether my custom image (OEL7) was the cause. It turns out it was not. OS Login is a stronger, OAuth2-based authentication option aimed at enterprise customers. In short, GCP OS Login lets you use your own desktop SSH key to log in to all the GCE instances you are allowed to access (limited by service account). It is pretty straightforward if you use “gcloud compute ssh” like this.
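A minimal sketch of that command (the instance, project, and zone names here are placeholders, not from the original setup):

```shell
# With OS Login enabled, gcloud uploads your desktop SSH public key
# to your Google account and logs in as your OS Login username.
# "my-instance", "my-project", and the zone are placeholder values.
gcloud compute ssh my-instance \
    --project=my-project \
    --zone=us-central1-a
```

Because OS Login ties the key to your Google identity rather than to instance metadata, the same key works on every instance your IAM roles allow.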
In an “enterprise” context, it is common to block users from pulling images from public container registries. Harbor is a private registry for k8s that provides many security features, such as content signing and vulnerability scanning. For more on why you might run Harbor in your k8s environment, visit the Harbor website. In this article, I describe how I set up Harbor, run a vulnerability scan on an example image, and, of course, troubleshoot along the way.
Long story short, I left my 8-year job and moved to a startup. As a result, I lost the company-sponsored GCP account I had used as my lab. So I picked up my home lab equipment and built my first bare-metal K8S cluster at home. This is what I have learned so far.
Before the home lab project, I used git, GitHub, Ansible, Terraform, Visual Studio Code, and kubeadm to quickly bring up a cluster and automate an environment for experimenting with microservices. The learning path is bumpy, but I think I picked the right tools to make it less frustrating. I want…
This article is the last part of pg_hba.conf explained. Note that pg_hba.conf only covers authentication. Most auth methods secure the exchange between the client and the postmaster during that phase, for example ldap with tls, krb, pam_sss, and scram-sha-256. In other words, the password is secure in transit. But what about encrypting the data in transit? Could someone turn on a network sniffer and capture all the query result sets (network packets) I send to the postmaster? Yes, it is possible. That is the topic I want to explore: TLS/SSL. Let’s turn on TLS on pg-master. First, you need a server certificate from…
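The server-side switch looks roughly like this — a minimal sketch assuming the certificate and key files already exist in the data directory on pg-master (the file names are PostgreSQL’s defaults, not specifics from this setup):

```
# postgresql.conf — enable TLS (requires a restart or reload)
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
```

On the pg_hba.conf side, the hostssl connection type can then require that remote clients actually use TLS:

```
# pg_hba.conf — only accept TLS-encrypted remote connections
hostssl  all  all  0.0.0.0/0  scram-sha-256
```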
In this part, I explain pam authentication in pg_hba.conf. PAM stands for “pluggable authentication modules.” PAM supports four types of services — auth, account, password, and session — but PostgreSQL pam only supports two: auth and account. In the last part, we installed ipa-client on pg-master; it should have set up sssd/krb/ldap/pki there already. After installing PostgreSQL, you should have a default pam configuration in /etc/pam.d/postgresql.
[root@master1 pam.d]# cat postgresql
auth include password-auth
account include password-auth
Using pam in PostgreSQL is as easy as making pg_hba.conf like the following and reloading the configuration.
host all all 0.0.0.0/0 pam pamservice=postgresql
In part 1, we covered the basic rules of pg_hba.conf. Let’s review the entry I put in pg_hba.conf in part 1. It was:
host all all 192.168.20.0/24 scram-sha-256
Translation: all clients (users) connecting from the 192.168.20.0/24 subnet to ALL databases must authenticate with a scram-sha-256 password. From a DBA perspective, this entry is still too open. I would suggest narrowing it down to something like the following:
host dvdrental remote_user1 192.168.20.21/32 scram-sha-256
As you can see, the entry is narrowed down to a single database (dvdrental) from a single IP. This is more rigid, but well… you start to think: oh dear, maintaining this pg_hba.conf…
This article attempts to demystify how to configure pg_hba.conf and integrate “enterprise systems” for different use cases.
The GCE environment I demo contains three VMs: ipa-server, pg-master, and pg-client (you can git clone and deploy the same environment from my GitHub repo https://github.com/vmware-ysung/pg_hba_explained). FreeIPA is like “MS Active Directory”: it integrates 389 Directory Server, MIT Kerberos, NTP, DNS, and Dogtag (PKI).
In PostgreSQL, hba stands for “host-based authentication.” pg_hba.conf contains a set of rules. The first field is the connection type. In the beginning, you need to know two basic types, local and host. “local” means a local Unix-domain socket…
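As a hedged illustration, the two connection types look like this in pg_hba.conf (the databases, users, and auth methods here are just example values):

```
# "local" matches Unix-domain socket connections (no address column)
local   all   all                   peer
# "host" matches TCP/IP connections, plain or TLS
host    all   all   127.0.0.1/32    scram-sha-256
```

Rules are matched top to bottom, and the first rule whose type, database, user, and address all match decides the auth method.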
This article is part 2 of Tanzu SQL Postgres. I demo how to access the instance through a service and kube-proxy from DBeaver on your desktop. The whole idea is to put some data into the pg-instance so that other deployments can consume it.
In Part1, I used “kubectl exec” to test the postgres connection. Does anyone really use that to run CRUD?
ysung@ysung-a01 postgres-for-kubernetes-v1.0.0 % k exec -it pg-instance-1-0 -- psql
psql (11.9 (VMware Postgres 11.9.3))
Type "help" for help.

postgres=# \q
Let me show you a “little” better approach first. I can use kubectl port-forward to proxy the pg-instance-1 service to…
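A minimal sketch of that approach — the service name follows the naming used above, and the database user is a placeholder, so adjust both for your environment:

```shell
# Forward local port 5432 to the Postgres service inside the cluster.
# "pg-instance-1" matches the naming above; yours may differ.
kubectl port-forward svc/pg-instance-1 5432:5432

# In another terminal, point psql (or DBeaver) at localhost;
# "pgadmin" is a placeholder username.
psql -h 127.0.0.1 -p 5432 -U pgadmin postgres
```

This keeps the database unexposed to the outside world: the tunnel lives only as long as the kubectl process, which makes it handy for ad-hoc desktop access.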
This article is a demo of running Tanzu Data (SCDF/Gemfire/PostgreSQL/MySQL) for K8S on a “non-supported” platform. Why? If you have a license to run Tanzu PostgreSQL, you don’t need to worry about the k8s control plane. So why bother? Because I want to understand how Tanzu Data interacts with the K8S control plane, e.g., how to set up network policy/service mesh to secure Tanzu Data and automate Tanzu Data releases within my K8S operation lifecycle.
First, if you don’t have a K8S cluster (1.16+), you can follow my GitHub repo https://github.com/vmware-ysung/cks-centos to create one in GCE, or consider using kubespray, kind, or kops.
Once the cluster…
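If you go the kind route mentioned above, a throwaway cluster at a pinned version is a one-liner (the cluster name and node-image tag are examples — pick any 1.16+ node image):

```shell
# Create a local cluster; the node image pins the Kubernetes version.
kind create cluster --name tanzu-lab --image kindest/node:v1.21.1

# Verify the cluster is reachable via the context kind creates.
kubectl cluster-info --context kind-tanzu-lab
```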
A data nerd who started as a data center field engineer and grew into a cloud database reliability engineer.