The issue with the persistent volume on GCE was solved. The problem was basically my misunderstanding of what a Persistent Volume Claim really does: it automagically provisions the volume for you. I had a k8s configuration to create the GCE PV explicitly, which clearly was not necessary and was duplicating the same volume, thus the error.
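A minimal sketch of the idea, assuming dynamic provisioning on GKE (the claim name and size below are hypothetical, not the actual SQuaSH configuration): the claim alone is enough, GKE creates the backing GCE persistent disk, so a separate PersistentVolume manifest like `kubernetes/gke_volume.yaml` becomes redundant.

```yaml
# Hypothetical sketch: on GKE, submitting only this claim triggers
# dynamic provisioning of the backing GCE PD; no explicit
# PersistentVolume object is needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-volume-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # hypothetical size
```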
Debugging the last remaining issue in the deployment: I had intermittent failures connecting to the database pod, and after narrowing down the problem I found two causes:
- A bad configmap used to add customized DB configuration. When mounted, it removed the configuration files that already existed in the directory where the mount point is created. That's fixed now by adding all the required files to the configmap (see the first sketch after this list).
- The squash-db service had bad labels and selector: it was selecting both the db pod and the api pod because they shared the same labels. The service was therefore load-balancing traffic across both pods, which caused the intermittent behaviour. Labels were reviewed and it is working fine now (see the second sketch below).
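On the configmap point: mounting a configmap as a volume replaces the entire contents of the target directory, so any pre-existing files there disappear. A minimal sketch of the fix, assuming a MySQL-style config directory (the names, paths, and settings below are hypothetical):

```yaml
# Hypothetical sketch: every file the directory needs must live in the
# configmap, because the volume mount hides whatever was there before.
apiVersion: v1
kind: ConfigMap
metadata:
  name: squash-db-config        # hypothetical name
data:
  custom.cnf: |
    [mysqld]
    max_connections = 200
  # ...plus every other file the pod expects at the mount point
```

Kubernetes can also mount individual configmap keys via `subPath`, which leaves the rest of the directory intact, but shipping all the files in the configmap works too.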
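And on the labels: a Service routes to every pod matching its selector, so when the db and api pods shared labels, both showed up as endpoints and requests were split between them. A sketch of a selector scoped to just the db pod (the label keys/values and port are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: squash-db
spec:
  selector:
    app: squash               # hypothetical labels; make sure only the
    role: db                  # db pod carries this combination
  ports:
  - port: 3306                # hypothetical MySQL port
```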
For the record, this was the error when creating the GCE PV explicitly:

```
Creating a GKE persistent volume...
Error from server (Forbidden): error when creating "kubernetes/gke_volume.yaml": persistentvolumes "mysql-volume-1" is forbidden: error querying GCE PD volume mysql-volume-1: disk is not found
make[1]: *** [deployment] Error 1
```
Implemented a LOCAL_VOLUME option for now; a sketch of what it might expand to is below.
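This is not the actual template, just a sketch assuming LOCAL_VOLUME swaps the GCE PD for a hostPath volume (name, size, and path are hypothetical):

```yaml
# Hypothetical sketch of a local volume for development: data lives on
# a single node's filesystem, so this is not suitable for production.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-volume-local     # hypothetical name
spec:
  capacity:
    storage: 10Gi              # hypothetical size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mysql          # hypothetical path on the node
```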
- `--allowed-hosts`
- `--allow-websocket-origin`
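Presumably these flags get wired in as container args in the deployment manifests. A hypothetical fragment (the container name, image, and host values are all assumptions; `--allow-websocket-origin` is likely the bokeh serve flag of the same name):

```yaml
# Hypothetical pod-spec fragment showing where such flags would go.
containers:
- name: squash-app                 # hypothetical container
  image: example/squash:latest     # hypothetical image
  args:
  - "--allowed-hosts=squash.example.com"          # hypothetical value
  - "--allow-websocket-origin=squash.example.com" # hypothetical value
```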
Pending: