Locust tests.
Adding Locust tests.

Adding command to start the k8s proxy to the README file.

Changing the Dockerfile and the directories so that we have a single
image and a flatter structure.

Small improvements to Python files for better readability.

Fixed indentation.

Fixed pep8 errors.
pm7h authored and jkowalski committed Mar 8, 2019
1 parent a625fc9 commit 8002ed8
Showing 5 changed files with 539 additions and 0 deletions.
24 changes: 24 additions & 0 deletions test/load/Dockerfile
@@ -0,0 +1,24 @@
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM hopsoft/graphite-statsd

# Locust
RUN pip install locustio
EXPOSE 8089 5557 5558

RUN mkdir /etc/service/locust
COPY /locust-files ./
COPY /run.sh /etc/service/locust/run
RUN chmod +x /etc/service/locust/run
51 changes: 51 additions & 0 deletions test/load/README.md
@@ -0,0 +1,51 @@
# Load and performance tests

Load tests aim to exercise the system under heavy load. For Agones, game server allocation is an example where heavy load and many parallel operations can be expected. Locust is a good framework for this: it offers a lightweight mechanism to launch thousands of workers that run a given test.

The goal of performance tests is to provide metrics on various operations. For
Agones, fleet scaling is a good example where performance metrics are useful.
Similar to load tests, Locust can be used for performance tests with the main
difference being the number of workers that are launched.

## Build and run tests

Prerequisites:
- Docker.
- A running k8s cluster.

Load tests are written using Locust. These tests are also integrated with Graphite and Grafana
for storage and visualization of the results.

### Running load tests using Locust on your local machine

This test uses the HTTP proxy on the local machine to access the k8s API. The default port for the proxy is 8001. To start a proxy to the Kubernetes API
server:

```
kubectl proxy [--port=PORT] &
```
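With the proxy running, the Kubernetes API is reachable on localhost, and the Locust tasks in this change address Agones resources through it. A minimal sketch of composing the Fleet endpoint URL (the port is the proxy default; the path is the one used by `fleet_autoscaling.py` below):

```python
# Compose the Agones Fleet endpoint as exposed through `kubectl proxy`.
PROXY_BASE = "http://127.0.0.1:8001"  # default `kubectl proxy` port
FLEETS_PATH = "/apis/stable.agones.dev/v1alpha1/namespaces/default/fleets"

def fleet_url(base=PROXY_BASE, path=FLEETS_PATH):
    """Return the full URL for the Fleet collection behind the proxy."""
    return base.rstrip("/") + path
```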

Next, we need to build the Docker image:

```
docker build -t locust-files .
```

The above builds a Docker image with Locust, Grafana, and Graphite installed and
configured. To run the Locust tests for game server allocation:

```
docker run --rm --network="host" -e "LOCUST_FILE=gameserver_allocation.py" -e "TARGET_HOST=http://127.0.0.1:8001" locust-files:latest
```

To run Locust tests for fleet autoscaling:

```
docker run --rm --network="host" -e "LOCUST_FILE=fleet_autoscaling.py" -e "TARGET_HOST=http://127.0.0.1:8001" locust-files:latest
```

NOTE: `--network="host"` only works on Linux. On macOS and Windows, use the special DNS name `host.docker.internal` instead.
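The container is parameterized through the `LOCUST_FILE` and `TARGET_HOST` environment variables shown in the `docker run` commands above. A hedged sketch of how such variables are typically consumed on the Python side (the defaults here are illustrative fallbacks; the actual `run.sh` wiring is not shown in this diff):

```python
import os

def read_test_config(env=os.environ):
    """Resolve the knobs passed to the container via `docker run -e ...`.

    The default values are illustrative, not taken from the repository.
    """
    return {
        "locust_file": env.get("LOCUST_FILE", "gameserver_allocation.py"),
        "target_host": env.get("TARGET_HOST", "http://127.0.0.1:8001"),
    }
```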

After running the Docker container you can access the Locust web UI on port 8089 of your local machine. When running Locust tests, it is recommended to use the same value for the number of users and the hatch rate. For game server allocation these numbers can be large, but for fleet autoscaling a single user is sufficient.

Grafana will be available on port 80, and Graphite on port 81.
254 changes: 254 additions & 0 deletions test/load/locust-files/fleet_autoscaling.py
@@ -0,0 +1,254 @@
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from locust import HttpLocust, TaskSet, events, task
import locust.events
import json
import os
import time
import socket
import atexit

FLEET_SIZE = 100
DEADLINE = 30 * 60


class UserBehavior(TaskSet):
@task
def scaleUpFleet(self):
# Create a fleet.
initial_size = 1
start_time = time.time()
payload = {
"apiVersion": "stable.agones.dev/v1alpha1",
"kind": "Fleet",
"metadata": {
"generateName": "fleet-simple-udp",
"namespace": "default"
},
"spec": {
"replicas": initial_size,
"scheduling": "Packed",
"strategy": {
"type": "RollingUpdate"
},
"template": {
"spec": {
"ports": [
{
"name": "default",
"portPolicy": "dynamic",
"containerPort": 26000
}
],
"template": {
"spec": {
"containers": [
{
"name": "simple-udp",
"image": (
"gcr.io/agones-images"
"/udp-server:0.5")
}
]
}
}
}
}
}
}
headers = {'content-type': 'application/json'}
response = self.client.post(
"/apis/stable.agones.dev/v1alpha1/namespaces/default/fleets",
data=json.dumps(payload),
headers=headers)
response_json = response.json()
name = response_json['metadata']['name']
selfLink = response_json['metadata']['selfLink']

# Wait until the fleet is up.
self.waitForScaling(selfLink, initial_size)
total_time = int((time.time() - start_time) * 1000)
events.request_success.fire(
request_type="fleet_spawn_up",
name="fleet_spawn_up",
response_time=total_time,
response_length=0)

# Scale up the fleet.
fleet_size = FLEET_SIZE
resource_version = self.getResourceVersion(selfLink)
start_time = time.time()
payload = {
"apiVersion": "stable.agones.dev/v1alpha1",
"kind": "Fleet",
"metadata": {
"name": str(name),
"namespace": "default",
"resourceVersion": str(resource_version)
},
"spec": {
"replicas": fleet_size,
"scheduling": "Packed",
"strategy": {
"type": "RollingUpdate"
},
"template": {
"spec": {
"ports": [
{
"name": "default",
"portPolicy": "dynamic",
"containerPort": 26000
}
],
"template": {
"spec": {
"containers": [
{
"name": "simple-udp",
"image": (
"gcr.io/agones-images"
"/udp-server:0.5")
}
]
}
}
}
}
}
}
response = self.client.put(
selfLink,
data=json.dumps(payload),
headers=headers)
self.waitForScaling(selfLink, fleet_size)
total_time = int((time.time() - start_time) * 1000)
events.request_success.fire(
request_type="fleet_scaling_up",
name="fleet_scaling_up",
response_time=total_time,
response_length=0)

# Scale down the fleet.
resource_version = self.getResourceVersion(selfLink)
start_time = time.time()
payload = {
"apiVersion": "stable.agones.dev/v1alpha1",
"kind": "Fleet",
"metadata": {
"name": str(name),
"namespace": "default",
"resourceVersion": str(resource_version)
},
"spec": {
"replicas": 0,
"scheduling": "Packed",
"strategy": {
"type": "RollingUpdate"
},
"template": {
"spec": {
"ports": [
{
"name": "default",
"portPolicy": "dynamic",
"containerPort": 26000
}
],
"template": {
"spec": {
"containers": [
{
"name": "simple-udp",
"image": (
"gcr.io/agones-images"
"/udp-server:0.5")
}
]
}
}
}
}
}
}
response = self.client.put(
selfLink,
data=json.dumps(payload),
headers=headers)
self.waitForScaling(selfLink, 0)
total_time = int((time.time() - start_time) * 1000)
events.request_success.fire(
request_type="fleet_scaling_down",
name="fleet_scaling_down",
response_time=total_time,
response_length=0)

# Delete the fleet.
response = self.client.delete(selfLink, headers=headers)

def waitForScaling(self, selfLink, fleet_size):
global ready_replicas
start_time = time.time()
while True:
total_time = time.time() - start_time
response = self.client.get(selfLink)
response_json = response.json()
status = response_json.get('status')
if status is not None:
ready_replicas = response_json['status']['readyReplicas']
if (ready_replicas is not None and ready_replicas == fleet_size):
print("Fleet is scaled to: " + str(fleet_size))
break
if (total_time > DEADLINE):
print("Fleet did not scale up in time")
events.request_success.fire(
request_type="fleet_scaling_timeout",
name="fleet_scaling_timeout",
response_time=total_time * 1000,
response_length=0)
break

def getResourceVersion(self, selfLink):
response = self.client.get(selfLink)
response_json = response.json()
return response_json['metadata']['resourceVersion']


class AgonesUser(HttpLocust):
task_set = UserBehavior
min_wait = 500
max_wait = 900

def __init__(self):
super(AgonesUser, self).__init__()
self.sock = socket.socket()
self.sock.connect(("localhost", 2003))
locust.events.request_success += self.hook_request_success
atexit.register(self.exit_handler)

def hook_request_success(self,
request_type,
name,
response_time,
response_length):
self.sock.send(
"%s %d %d\n" % (
"performance." + name.replace('.', '-'),
response_time,
time.time()))

def exit_handler(self):
self.sock.shutdown(socket.SHUT_RDWR)
self.sock.close()
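The `waitForScaling` method above polls the Fleet status until `readyReplicas` reaches the target or a deadline passes. The same poll-with-deadline pattern, extracted into a self-contained sketch (the `get_ready_replicas` callable stands in for the HTTP GET and is an assumption of this example):

```python
import time

def wait_for_scaling(get_ready_replicas, target, deadline_s=30 * 60,
                     poll_interval_s=0.0, clock=time.time, sleep=time.sleep):
    """Poll until get_ready_replicas() == target.

    Returns True on success, False if deadline_s elapses first. The clock
    and sleep parameters are injectable to keep the sketch testable.
    """
    start = clock()
    while True:
        if get_ready_replicas() == target:
            return True
        if clock() - start > deadline_s:
            return False
        if poll_interval_s:
            sleep(poll_interval_s)

# Example with a fake status source that becomes ready on the third poll.
replies = iter([0, 50, 100])
ok = wait_for_scaling(lambda: next(replies), 100)
```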