kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.

kustomize targets kubernetes; it understands and can patch kubernetes-style API objects. It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.

This tool is sponsored by sig-cli (KEP).
This fork of Kustomize integrates with Hashicorp Vault by reading secrets from Vault and dropping them into a ConfigMap. It does so by exposing a vaultSecretGenerator as an option in your kustomization.yml. Each entry in the generator corresponds to a secret in an instance of Hashicorp Vault that you provision yourself, which then becomes accessible as a ConfigMap in your base or overlays.
For instance, let's say you have a base with an application that wants access to an instance of MongoDB, and you want to mount the secret in your kubernetes manifest:
~/base/manifest.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: some-app
  name: some-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
      - command: [ "/bin/sh", "-c", "--" ]  # busybox ships /bin/sh, not bash
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
        - name: example-secret
          mountPath: /etc/config/secrets
          readOnly: true
        image: busybox
        imagePullPolicy: Always
        name: some-app
      volumes:
      - name: example-secret
        configMap:
          name: mongo
Then you would reference it in your overlay kustomization file as follows:
~/overlay/kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
vaultSecretGenerator:
- name: mongo
  path: secrets/environment/mongo
  secretKey: value
NOTE: this assumes that you have a secret mounted in your Vault cluster at the path secrets/environment/mongo, with the secret data keyed under value (the secretKey above). It also assumes that the engine enabled at that path is the V1 Secrets Engine. Read more about Vault secrets engines here.
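For a concrete setup, here is one way to provision such a secret with the standard Vault CLI; the mount point and values are assumptions chosen to mirror the example above:

# Enable a KV v1 secrets engine at secrets/ (skip if one is already mounted there)
vault secrets enable -version=1 -path=secrets kv
# Write the secret that the generator will read
vault kv put secrets/environment/mongo value="this is a vault secret!"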
This version of kustomize relies on a few environment variables being set:

- VAULT_ADDR: public/private address where your Vault cluster is located
- VAULT_USERNAME: username to access the cluster
- VAULT_PASSWORD: password to access the cluster

Without these variables set, this plugin will fail.
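For example, with placeholder values (the username/password pair assumes a userpass-style login is enabled on your Vault cluster):

export VAULT_ADDR="https://vault.example.com:8200"  # address of your Vault cluster
export VAULT_USERNAME="kustomize-user"              # placeholder credentials
export VAULT_PASSWORD="s3cr3t"
kustomize build ~/overlay | kubectl apply -f -      # generators can now reach Vault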
After applying this in my own kubernetes cluster, I was then able to exec into the kubernetes Pod and check the contents mounted into it from Hashicorp Vault:
(⎈ |stage-eks:default)➜ staging git:(master) ✗ kubectl exec -it some-app-deployment-84bfc64b8d-vvvjq -- sh
$ cat /etc/config/secrets/mongo
this is a vault secret!
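For reference, the mounted file above comes from the ConfigMap the generator emits. Assuming vaultSecretGenerator follows the same conventions as the builtin configMapGenerator (a data key named after the generator entry, plus a content-hash name suffix that kustomize propagates to references like the Deployment's volume), the build output for the mongo entry would look roughly like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-5h9g2k7m4t  # hash suffix is illustrative; kustomize derives it from content
data:
  mongo: this is a vault secret!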
- Secrets are no longer a runtime dependency. They are either pulled, read, and mounted into your k8s manifest at build time, or the deployment isn't updated because the external connection to your secrets manager is failing. Secrets management shouldn't bring down your live deployments!
- Whenever you have any doubt about whether a secret is being loaded, you can just exec into the pod and see exactly what is there. This greatly improves debugging in production environments, at the cost of having to enforce access restrictions on your k8s cluster.
- Go plugins are notoriously hard for developers to work with. With this approach, the Vault integration sits right alongside the other builtin generators, so you won't need to worry about the build issues and other pains that normally come with kustomize plugins; Vault support is native to this version of kustomize.
The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which updated it to v4.0.5. It will be updated on a regular basis going forward, and such updates will be reflected in the Kubernetes release notes.
| Kubectl version | Kustomize version |
|---|---|
| < v1.14 | n/a |
| v1.14-v1.20 | v2.0.3 |
| v1.21 | v4.0.5 |
| v1.22 | v4.2.0 |
For examples and guides for using the kubectl integration please see the kubectl book or the kubernetes documentation.
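With kustomize built into kubectl, the same flow is available without a separate binary via the -k flag (somedir here is any directory containing a kustomization file):

kubectl kustomize somedir/   # equivalent to: kustomize build somedir/
kubectl apply -k somedir/    # build and apply in one step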
1) Make a kustomization file
In some directory containing your YAML resource files (deployments, services, configmaps, etc.), create a kustomization file.
This file should declare those resources, and any customization to apply to them, e.g. add a common label.
File structure:
~/someApp
├── deployment.yaml
├── kustomization.yaml
└── service.yaml
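A minimal kustomization.yaml for this layout might look like the following sketch (the resource list matches the files above; the commonLabels value is illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
commonLabels:
  app: someApp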
The resources in this directory could be a fork of someone else's configuration. If so, you can easily rebase from the source material to capture improvements, because you don't modify the resources directly.
Generate customized YAML with:
kustomize build ~/someApp
The YAML can be directly applied to a cluster:
kustomize build ~/someApp | kubectl apply -f -
2) Create variants using overlays

Manage traditional variants of a configuration - like development, staging and production - using overlays that modify a common base.
File structure:
~/someApp
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml
Take the work from step (1) above, move it into a someApp subdirectory called base, then place overlays in a sibling directory.

An overlay is just another kustomization, referring to the base, and referring to patches to apply to that base.
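As a sketch, the production overlay's kustomization.yaml from the tree above might contain the following, assuming cpu_count.yaml and replica_count.yaml are strategic-merge patches over the base Deployment:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- cpu_count.yaml
- replica_count.yaml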
This arrangement makes it easy to manage your configuration with git. The base could have files from an upstream repository managed by someone else. The overlays could be in a repository you own. Arranging the repo clones as siblings on disk avoids the need for git submodules (though that works fine, if you are a submodule fan).
Generate customized YAML with:
kustomize build ~/someApp/overlays/production
The YAML can be directly applied to a cluster:
kustomize build ~/someApp/overlays/production | kubectl apply -f -
- file a bug: see the bug-filing instructions
- contribute a feature: see the feature contribution instructions
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.