This directory contains a Terraform configuration to deploy a bunch of Kubernetes clusters on various cloud providers, using their respective managed Kubernetes products.

This is the recommended use. It makes it easy to start N clusters on any provider. It will create a directory with a name like `tag-YYYY-MM-DD-HH-MM-SS-SEED-PROVIDER`, copy the Terraform configuration to that directory, then create the clusters using that configuration.
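To get a feel for the naming pattern, here is a sketch that builds a tag-style name by hand. The SEED scheme (6 random hex characters) and the provider name are assumptions for illustration; `run.sh` may build the name differently.

```shell
# Build a tag-style directory name like the ones run.sh generates.
# SEED here is 6 random hex characters (an assumption for illustration).
SEED=$(head -c 3 /dev/urandom | od -An -tx1 | tr -d ' \n')
TAG="tag-$(date +%Y-%m-%d-%H-%M-%S)-$SEED-googlecloud"
echo "$TAG"
```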
- One-time setup: configure provider authentication for the provider(s) that you wish to use.
  - Digital Ocean: `doctl auth init`
  - Google Cloud Platform: you will need to create a project named `prepare-tf` and enable the relevant APIs for this project. (Sorry, if you're new to GCP, this sounds vague; but if you're familiar with it, you know what to do. If you want to change the project name, you can edit the Terraform configuration.)
  - Linode: `linode-cli configure`
  - Oracle Cloud: FIXME (set up `oci` through the `oci-cli` Python package)
  - Scaleway: run `scw init`
- Run!
  `./run.sh <providername> <location> [number of clusters] [min nodes] [max nodes]`
  If you don't specify a provider name, it will list the available providers.
  If you don't specify a location, it will list the locations available for that provider.
  You can also specify multiple locations; they will then be used in round-robin fashion.
For example, with Google Cloud, since the default quotas are very low (my account is limited to 8 public IP addresses per zone, and my requests to increase that quota were denied), you can do the following:
  `LOCATIONS=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)`
  `./run.sh googlecloud "$LOCATIONS"`
  Then when you apply, clusters will be created across all available zones in Europe. (As I write this, there are 20+ zones in Europe, so even with my quota, I can create 40 clusters.)
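To see what the `grep ^europe` filter leaves in `LOCATIONS`, here is a self-contained sketch that replaces the real `gcloud`/`jq` pipeline with a hardcoded list of zone names (the names are made up):

```shell
# Stand-in for `gcloud compute zones list --format=json | jq -r .[].name`:
# a hardcoded list of (fictional) zone names, one per line.
ALL_ZONES='europe-west1-b
us-central1-a
europe-north1-a'
# Keep only the zones whose names start with "europe":
LOCATIONS=$(echo "$ALL_ZONES" | grep ^europe)
echo "$LOCATIONS"
# prints:
# europe-west1-b
# europe-north1-a
```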
- Shutting down.
  Go to the directory that was created by the previous step (`tag-YYYY-MM...`) and run `terraform destroy`.
  You can also run `./clean.sh`, which will destroy ALL the clusters deployed by previous runs of the script.
Expert mode.
Useful to run steps separately, and/or when working on the Terraform configurations.
- Select the provider you wish to use.
  Go to the `source` directory and edit `main.tf`.
  Change the `source` attribute of the `module "clusters"` section.
  Check the content of the `modules` directory to see the available choices.
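The block you're editing looks roughly like the fragment below; the module path shown is an assumption for illustration, so check the `modules` directory for the names that actually exist.

```hcl
module "clusters" {
  # Point source at one of the subdirectories of `modules`.
  # "./modules/linode" is an illustrative path, not necessarily
  # one that exists in your checkout.
  source = "./modules/linode"
}
```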
- Initialize the provider.
  `terraform init`
- Configure provider authentication.
  See the steps above, and add the following extra steps:
  - Digital Ocean: `export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")`
  - Linode: `export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")`
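Those `grep | cut | tr` pipelines just pull one value out of a `key: value` (or `key=value`) config file. Here is a self-contained sketch run against a fabricated config file (the token value is a dummy placeholder, not a real credential):

```shell
# Fabricated doctl-style config file; the access-token value is a dummy.
cat > /tmp/doctl-config-sample.yaml <<'EOF'
context: default
access-token: abc123sampletoken
EOF
# Same pipeline shape as the export above, pointed at the sample file:
# grab the line, take the field after the first colon, strip spaces.
TOKEN=$(grep ^access-token /tmp/doctl-config-sample.yaml | cut -d: -f2 | tr -d " ")
echo "$TOKEN"   # abc123sampletoken
```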
- Decide how many clusters and how many nodes per cluster you want.
- Provision clusters.
  `terraform apply`
- Perform second-stage provisioning.
  This will install an SSH server on the clusters.
  `cd stage2`
  `terraform init`
  `terraform apply`
- Obtain cluster connection information.
  The following command shows connection information, one cluster per line, ready to copy-paste into a shared document or spreadsheet.
  `terraform output -json | jq -r 'to_entries[].value.value'`
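To see what that `jq` filter extracts, here is a self-contained sketch run on a fabricated `terraform output -json` payload (the output names and connection strings are made up):

```shell
# Fabricated `terraform output -json` payload: two outputs whose
# `value` fields hold (made-up) connection strings.
SAMPLE='{"cluster_1":{"sensitive":false,"value":"ssh k8s@203.0.113.10"},"cluster_2":{"sensitive":false,"value":"ssh k8s@203.0.113.11"}}'
# to_entries turns the object into [{key,value},...]; the trailing
# .value.value then digs each output's actual value out of its wrapper.
echo "$SAMPLE" | jq -r 'to_entries[].value.value'
# prints:
# ssh k8s@203.0.113.10
# ssh k8s@203.0.113.11
```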
- Destroy clusters.
  `cd ..`
  `terraform destroy`
- Clean up stage2.
  `rm stage2/terraform.tfstate*`