This directory holds code and related artifacts that support API-related integration tests.
The diagram below summarizes the system design. Integration tests use an API client that calls a backend service. Before fulfilling a request, the service checks and decrements a quota. The quota is persisted in a backend redis instance and refreshed on an interval by the Refresher.
The Echo Service implements a simple gRPC service that echoes a payload. See echo.proto for details.
```mermaid
flowchart LR
    echoClient --> echoSvc
    subgraph "Integration Tests"
        echoClient[Echo Client]
    end
    subgraph Backend
        echoSvc[Echo Service<./src/main/go/cmd/service/echo>]
        refresher[Refresher<./src/main/go/cmd/service/refresher>]
        redis[redis://:6379]
        refresher -- SetQuota(<string>,<int64>,<time.Duration>) --> redis
        echoSvc -- DecrementQuota(<string>) --> redis
    end
```
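To make the quota flow concrete, below is a minimal sketch of the two redis interactions shown in the diagram, using the github.com/redis/go-redis/v9 client. The function names, key layout, and rejection behavior are illustrative assumptions; the actual implementation lives under ./src/main/go/internal.

```go
package main

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

// SetQuota sketches what the Refresher does on each interval:
// it (re)writes the allocated quota for a Quota ID with an expiry.
// (Illustrative only; see ./src/main/go/internal for the real code.)
func SetQuota(ctx context.Context, rdb *redis.Client, quotaID string, size int64, interval time.Duration) error {
	return rdb.Set(ctx, quotaID, size, interval).Err()
}

// DecrementQuota sketches what the Echo Service does before fulfilling a
// request: it decrements the remaining quota and reports whether the
// caller has exhausted it.
func DecrementQuota(ctx context.Context, rdb *redis.Client, quotaID string) error {
	remaining, err := rdb.Decr(ctx, quotaID).Result()
	if err != nil {
		return err
	}
	if remaining < 0 {
		return errors.New("quota exceeded for " + quotaID)
	}
	return nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	_ = SetQuota(ctx, rdb, "example-quota-id", 100, 10*time.Second)
	_ = DecrementQuota(ctx, rdb, "example-quota-id")
}
```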
| Dependency | Reason |
| --- | --- |
| go | For making code changes in this directory. See go.mod for the required version. |
| buf | Optional, for when making changes to proto. |
| ko | To easily build Go container images. |
| poetry | To manage python dependencies. |
To run unit tests in this project, execute the following command:
```
go test ./src/main/go/internal/...
```
Integration tests require the following values.
Each allocated quota corresponds to a unique ID known as the Quota ID. There is a one-to-one relationship between an allocated quota and an overlay in infrastructure/kubernetes/refresher/overlays.
To query the Kubernetes cluster for allocated Quota IDs:
```
kubectl get deploy --selector=app.kubernetes.io/name=refresher -o custom-columns='QUOTA_ID:.metadata.labels.quota-id'
```
To list available endpoints, run:
```
kubectl get svc -o=custom-columns='NAME:.metadata.name,HOST:.status.loadBalancer.ingress[*].ip,PORT_NAME:.spec.ports[*].name,PORT:.spec.ports[*].port'
```
You should see something similar to:
```
NAME   HOST       PORT_NAME   PORT
echo   10.n.n.n   grpc,http   50051,8080
```
When running tests locally, you will first need to run:

```
kubectl port-forward service/echo 50051:50051 8080:8080
```

which allows you to access the gRPC service via `localhost:50051` and the HTTP service via `http://localhost:8080/v1/echo`.

When running tests on Dataflow, you supply `10.n.n.n:50051` for gRPC and `http://10.n.n.n:8080/v1/echo` for HTTP.
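As a rough illustration of how an integration test might consume these values, the sketch below dials the gRPC endpoint and issues a request against the HTTP endpoint. The JSON field names are assumptions made for illustration only; the authoritative request and response definitions are in echo.proto and its generated stubs.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Endpoints as described above: localhost when port-forwarding,
	// or the service's load balancer IP when running on Dataflow.
	grpcEndpoint := "localhost:50051"
	httpEndpoint := "http://localhost:8080/v1/echo"

	// Dial the gRPC endpoint. The generated client from echo.proto would be
	// created from this connection; it is omitted here.
	conn, err := grpc.NewClient(grpcEndpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Call the HTTP endpoint. The body shape below is a guess for
	// illustration; see echo.proto for the actual request fields.
	body := []byte(`{"id": "echo-should-never-exceed-quota", "payload": "aGVsbG8="}`)
	resp, err := http.Post(httpEndpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("HTTP status:", resp.Status)
}
```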
To execute the services on your local machine, you'll need redis.

Follow these steps to run the services on your local machine. A sketch of how the services might consume the environment variables used below follows this list.

1. Start redis using the following command.

   ```
   redis-server
   ```

2. Start the refresher service in a new terminal.

   ```
   export CACHE_HOST=localhost:6379; \
   export QUOTA_ID=$(uuidgen); \
   export QUOTA_REFRESH_INTERVAL=10s; \
   export QUOTA_SIZE=100; \
   go run ./src/main/go/cmd/service/refresher
   ```

3. Start the echo service in a new terminal.

   ```
   export HTTP_PORT=8080; \
   export GRPC_PORT=50051; \
   export CACHE_HOST=localhost:6379; \
   go run ./src/main/go/cmd/service/echo
   ```
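The sketch below shows one way a service could load this environment-variable configuration. The variable names come from the commands above; the struct and parsing helpers are illustrative assumptions rather than the actual code under ./src/main/go/internal.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// refresherConfig mirrors the environment variables exported above.
// (Illustrative only; the real services define their own configuration types.)
type refresherConfig struct {
	CacheHost       string
	QuotaID         string
	RefreshInterval time.Duration
	QuotaSize       int64
}

func loadRefresherConfig() (refresherConfig, error) {
	interval, err := time.ParseDuration(os.Getenv("QUOTA_REFRESH_INTERVAL")) // e.g. "10s"
	if err != nil {
		return refresherConfig{}, fmt.Errorf("QUOTA_REFRESH_INTERVAL: %w", err)
	}
	size, err := strconv.ParseInt(os.Getenv("QUOTA_SIZE"), 10, 64) // e.g. "100"
	if err != nil {
		return refresherConfig{}, fmt.Errorf("QUOTA_SIZE: %w", err)
	}
	return refresherConfig{
		CacheHost:       os.Getenv("CACHE_HOST"),
		QuotaID:         os.Getenv("QUOTA_ID"),
		RefreshInterval: interval,
		QuotaSize:       size,
	}, nil
}

func main() {
	cfg, err := loadRefresherConfig()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```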
The following has already been performed for the apache-beam-testing project and only needs to be done for a different Google Cloud project.
To deploy the APIs and dependent services, run the following commands.
```
terraform -chdir=infrastructure/terraform init
terraform -chdir=infrastructure/terraform apply -var-file=apache-beam-testing.tfvars
```
After the terraform module completes, you will need to set the following:

```
export KO_DOCKER_REPO=<region>-docker.pkg.dev/<project>/<repository>
```

where:

- `<region>` - the GCP compute region
- `<project>` - the GCP project id, e.g. `apache-beam-testing`
- `<repository>` - the repository name created by the terraform module. To find this, run `gcloud artifacts repositories list --project=<project> --location=<region>`. For example, `gcloud artifacts repositories list --project=apache-beam-testing --location=us-west1`
Run the following command to set up credentials for the Kubernetes cluster.

```
gcloud container clusters get-credentials <cluster> --region <region> --project <project>
```

where:

- `<region>` - the GCP compute region
- `<project>` - the GCP project id, e.g. `apache-beam-testing`
- `<cluster>` - the name of the cluster created by the terraform module. You can find this by running `gcloud container clusters list --project=<project> --region=<region>`
Run the following command to provision the redis instance.

```
kubectl kustomize --enable-helm infrastructure/kubernetes/redis | kubectl apply -f -
```

**You will initially see "Unschedulable" while the cluster is applying the helm chart. It's important to wait until the helm chart has completely provisioned its resources before proceeding. When using Google Kubernetes Engine (GKE) Autopilot, it may take some time for the cluster to autoscale appropriately.**
Run the following command to provision the Echo service.
```
kubectl kustomize infrastructure/kubernetes/echo | ko resolve -f - | kubectl apply -f -
```
As with the redis deployment, you may see a "Does not have minimum availability" message in the status. It may take some time for GKE Autopilot to scale the node pool.
The Refresher service relies on kustomize overlays, which are located at infrastructure/kubernetes/refresher/overlays.

Each folder in infrastructure/kubernetes/refresher/overlays corresponds to an individual Refresher instance identified by a unique string id. You will need to deploy each one individually.
For example:
```
kubectl kustomize infrastructure/kubernetes/refresher/overlays/echo-should-never-exceed-quota | ko resolve -f - | kubectl apply -f -
```
As before, you may see a "Does not have minimum availability" message in the status. It may take some time for GKE Autopilot to scale the node pool.