
Konvoy + Kudo Studio

NB: This is very much a work in progress. Suggestions, constructive criticism, and (especially) PRs are most welcome!

Tested with Konvoy v1.2.1 and Kudo v0.7.5 on AWS.

[Screen capture GIF of the demo]

Prerequisites

This demo assumes you have kubectl installed and connected to a Konvoy cluster running the default configuration in AWS.

The commands below also assume you have the kudo CLI plugin installed: brew install kudo-cli (or brew upgrade kudo-cli if it is already installed).
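
Before starting, a quick sanity check of both prerequisites can help; the commands below are standard kubectl and KUDO CLI calls and should all succeed against your Konvoy cluster:

```sh
# Confirm kubectl is pointed at the intended Konvoy cluster
kubectl config current-context
kubectl get nodes

# Confirm the KUDO CLI plugin is installed
kubectl kudo version
```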

Initial setup

Import Kafka dashboard

  1. Navigate to Grafana (see the sketch after this list if you need to find the Grafana URL)
  2. Hover over + in the left-hand nav bar
  3. Select Import
  4. Copy and paste the JSON found here
  5. Click Upload
  6. Select Prometheus as data source
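
Step 1 assumes you already know where Grafana is exposed on your cluster. If you don't, one generic way to find and reach it from kubectl is sketched below; the namespace, service name, and port depend on your Konvoy addon configuration, so treat them as placeholders:

```sh
# Locate the Grafana service deployed by Konvoy's monitoring addon
kubectl get svc --all-namespaces | grep -i grafana

# Port-forward to it locally, substituting the namespace/service/port found above,
# then open http://localhost:3000
kubectl port-forward -n <namespace> svc/<grafana-service> 3000:<service-port>
```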

Deploy Kudo Kafka

  1. kubectl kudo init
  2. kubectl kudo install zookeeper --instance=zk
  3. Wait for all 3 Zookeeper pods to be RUNNING and READY (see the watch sketch after this list)
  4. kubectl kudo install kafka --instance=kafka
  5. Wait for all 3 Kafka brokers to be RUNNING and READY (same sketch as step 3)
  6. kubectl create -f https://raw.githubusercontent.com/kudobuilder/operators/master/repository/kafka/docs/v0.2/resources/service-monitor.yaml
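
For steps 3 and 5, the sketch below is one way to watch the pods come up. The pod names (zk-zookeeper-N, kafka-kafka-N) are an assumption about how these KUDO operators name instance pods, so adjust them to whatever kubectl get pods shows:

```sh
# Watch until all three zk-* and all three kafka-* pods show READY 1/1 and STATUS Running
kubectl get pods -w

# Or block until a specific pod reports Ready (repeat per pod)
kubectl wait --for=condition=ready pod/zk-zookeeper-0 --timeout=300s
```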

Deploy Kafka Client API, Svelte front-end, and Node.js WebSocket server

  1. kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-python-api/kafka-client-api.yaml
  2. kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/svelte-ui/svelte-client.yaml
  3. kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-node-js-api/kafka-node-js-api.yaml
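
Before moving on, it's worth confirming the three workloads just applied have finished rolling out. kafka-client-api is the deployment name this demo scales later; the other two names below are placeholders, so check kubectl get deploy for the names the manifests actually create:

```sh
# List the deployments created by the three manifests
kubectl get deploy

# Wait for each rollout to finish (substitute the real names from the output above)
kubectl rollout status deploy/kafka-client-api
kubectl rollout status deploy/<svelte-ui-deployment>
kubectl rollout status deploy/<node-js-websocket-deployment>
```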

Navigate to Svelte UI

  1. Visit <your AWS ELB>/svelte (see the sketch below for how to find the ELB hostname)
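
The demo is exposed through Traefik, so <your AWS ELB> is the external hostname of Traefik's LoadBalancer service. The namespace and service name vary with the Konvoy addon configuration, so the sketch below leaves them as placeholders:

```sh
# Find the Traefik LoadBalancer service and note its EXTERNAL-IP (the AWS ELB hostname)
kubectl get svc --all-namespaces | grep -i traefik

# Or print just the hostname, substituting the namespace/service found above
kubectl get svc -n <namespace> <traefik-service> \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```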

Begin demonstration

Manufacturing and IoT example

  1. Click 'Manufacturing & IoT' in the Nav bar
  2. Explain demo architecture
  3. Click '-' button to collapse architecture diagram
  4. Click button 'Click me to start fetch'
  5. kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-dummy-actors/kafka-dummy-actor.yaml
  6. Observe a single actor on the map (left) and in the actor list (right).
  7. Run kubectl scale deploy kafka-dummy-actor --replicas=7 to watch the list fill in real time and the actors move around the map (see the watch sketch after this list).
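
While step 7 runs, keeping a second terminal watching the deployment makes the scaling visible alongside the UI. The kafka-dummy-actor deployment name comes from the manifest applied in step 5:

```sh
# Watch the dummy-actor replicas go from 1 to 7 as the scale command takes effect
kubectl get deploy kafka-dummy-actor -w
```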

User Research example

  1. Click 'User Research' in the Nav bar
  2. Explain demo architecture
  3. Click '-' button to collapse architecture diagram
  4. Open the browser's 'Network' panel and reload the page. (Right-click on the page and select 'Inspect Element', then select the 'Network' panel tab. Reload the page to start capturing network traffic.)
  5. Move mouse across left-hand screenshot
  6. Explain that each mouse movement captured by the browser is posted directly to the Python Kafka API server via an endpoint exposed through Traefik
  7. Observe the Node.js Kafka API reading from the Kafka topic and returning the mouse movements in the right-hand screenshot
  8. Observe the POST request duration (should be ~500ms); a command-line version is sketched after this list
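
For step 8, the same latency can be shown from the command line with curl. The path and JSON body below are placeholders rather than the API's real contract; copy the actual request out of the browser's Network panel ("Copy as cURL") to get the correct endpoint and payload:

```sh
# Time a POST to the Python Kafka API through the ELB (path and body are placeholders)
curl -o /dev/null -s -w 'total: %{time_total}s\n' \
  -X POST -H 'Content-Type: application/json' \
  -d '{"x": 100, "y": 200}' \
  http://<your AWS ELB>/<api-path>
```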

Demonstrate the power of horizontal scaling

To demonstrate granular microservice scaling, we first generate more load on the Python Kafka API and watch POST request times increase. We then scale out the Python Kafka API and watch POST request times return to normal.

From User Research screen (assumes above demo steps completed):

  1. kubectl scale deploy kafka-dummy-actor --replicas=70
  2. Move mouse across left-hand panel
  3. Observe POST request duration in browser's Network panel (should be >1000ms)
  4. kubectl scale deploy kafka-client-api --replicas=5
  5. Observe POST request duration (should return to ~500ms)
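
During step 4, a watch on the kafka-client-api deployment (the same name used in the scale command above) makes the scale-out visible; once all 5 replicas are Ready, the POST durations in the Network panel should drop back toward ~500ms:

```sh
# Watch the Python Kafka API deployment scale out to 5 replicas
kubectl get deploy kafka-client-api -w
```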