Merge pull request Azure#273 from redkubes/master
Added Labs for Otomi
alicejgibbons authored Apr 21, 2022
2 parents 96b378b + add702d commit 4e1116f
Showing 11 changed files with 470 additions and 0 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -82,3 +82,6 @@ labs/security/secure-tiller/*.srl
*~
*.swp
*.swo

# Misc
.history/
2 changes: 2 additions & 0 deletions labs/paas/otomi/.gitignore
@@ -0,0 +1,2 @@
# Misc
.history/
126 changes: 126 additions & 0 deletions labs/paas/otomi/1_create_aks_cluster/README.md
@@ -0,0 +1,126 @@
# Lab 1: Creating an AKS cluster

In this lab, we'll be using the Azure CLI to create an Azure Kubernetes Service (AKS) cluster and configure `kubectl` access to it.

## Prerequisites

- An Azure account with the permissions required to create and manage Kubernetes clusters

## Instructions

1. Log in to the Azure Portal at <http://portal.azure.com>.
2. Open the Azure Cloud Shell and choose the Bash shell (do not choose PowerShell).

![Azure Cloud Shell](img-cloud-shell.png "Azure Cloud Shell")

3. The first time you start Cloud Shell, you will be asked to create a storage account.

4. Once Cloud Shell has started, clone the workshop repo into the Cloud Shell environment:

```bash
git clone https://github.com/Azure/kubernetes-hackfest
```

> Note: In the Cloud Shell, you are automatically logged into your Azure subscription.
5. Ensure you are using the Azure subscription you want to deploy AKS to:

```bash
# View subscriptions
az account list
```

```bash
# Verify selected subscription
az account show
```

```bash
# Set correct subscription (if needed)
az account set --subscription <subscription_id>

# Verify correct subscription is now set
az account show
```

6. Create a unique identifier suffix for resources to be created in this lab:

```bash
echo -e "\n# Start AKS Otomi Hackfest Lab Params">>~/.bashrc
UNIQUE_SUFFIX=$USER$RANDOM
# Remove Underscores and Dashes (Not Allowed in AKS and ACR Names)
UNIQUE_SUFFIX="${UNIQUE_SUFFIX//_}"
UNIQUE_SUFFIX="${UNIQUE_SUFFIX//-}"
# Check Unique Suffix Value (Should be No Underscores or Dashes)
echo $UNIQUE_SUFFIX
# Persist for Later Sessions in Case of Timeout
echo export UNIQUE_SUFFIX=$UNIQUE_SUFFIX >> ~/.bashrc
```

7. Create an Azure Resource Group in `East US`:

```bash
# Set Resource Group Name using the unique suffix
RGNAME=aks-rg-$UNIQUE_SUFFIX
# Persist for Later Sessions in Case of Timeout
echo export RGNAME=$RGNAME >> ~/.bashrc
# Set Region (Location)
LOCATION=eastus
# Persist for Later Sessions in Case of Timeout
echo export LOCATION=eastus >> ~/.bashrc
# Create Resource Group
az group create -n $RGNAME -l $LOCATION
```

8. Create an AKS cluster:

```bash
# Set AKS Cluster Name
CLUSTERNAME=aks${UNIQUE_SUFFIX}
# Look at AKS Cluster Name for Future Reference
echo $CLUSTERNAME
# Persist for Later Sessions in Case of Timeout
echo export CLUSTERNAME=aks${UNIQUE_SUFFIX} >> ~/.bashrc
```

```bash
# Create AKS cluster
az aks create --name $CLUSTERNAME \
--resource-group $RGNAME \
--location $LOCATION \
--zones 1 2 \
--vm-set-type VirtualMachineScaleSets \
--nodepool-name otomipool \
--node-count 2 \
--node-vm-size Standard_D5_v2 \
--kubernetes-version 1.21.9 \
--enable-cluster-autoscaler \
--min-count 2 \
--max-count 3 \
--max-pods 100 \
--network-plugin azure \
--network-policy calico \
--outbound-type loadBalancer \
--uptime-sla \
--generate-ssh-keys
```

9. Verify your cluster status. The `ProvisioningState` should be `Succeeded`:

```bash
az aks list -o table
```

10. Get the Kubernetes config files for your new AKS cluster:

```bash
az aks get-credentials -n $CLUSTERNAME -g $RGNAME
```

11. Verify you have API access to your new AKS cluster:

```bash
kubectl get nodes
```

Go to the [next lab](../2_install_otomi/README.md)
99 changes: 99 additions & 0 deletions labs/paas/otomi/2_install_otomi/README.md
@@ -0,0 +1,99 @@
# Lab 2: Installing Otomi on AKS

In this lab, we'll be installing [Otomi](https://github.com/redkubes/otomi-core) using `helm`.

## Instructions

1. The Helm CLI is installed by default in Azure Cloud Shell. Verify it:

```bash
helm version
```

2. Add the Otomi Helm chart repository:

```bash
helm repo add otomi https://otomi.io/otomi-core && \
helm repo update
```

3. Install Otomi with the following chart values:

```bash
helm install otomi otomi/otomi \
--set cluster.k8sVersion="1.21" \
--set cluster.name=$CLUSTERNAME \
--set cluster.provider=azure
```

4. Monitor the chart install:

```bash
# The chart deploys a Job (`otomi`) in the `default` namespace
# Monitor the status of the job
kubectl get job otomi -w
# watch the helm chart install status (optional)
watch helm list -Aa
```

5. When the installer job has finished, check the end of its logs:

```bash
kubectl logs jobs/otomi -n default -f
```

There you will see the following:

```bash
2022-04-01T10:01:59.239Z otomi:cmd:commit:commit:info
########################################################################################
# To start using Otomi, go to https://<your-ip>.nip.io and sign in to the web console
# with username "otomi-admin" and password "password".
# Then activate Drone. For more information see: https://otomi.io/docs/installation/activation
########################################################################################
```

6. Sign in to the web UI (Otomi Console)

Once Otomi is installed, go to the URL provided in the logs of the installer job and sign in to the web UI with the provided username and password.
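
If you lose the URL, you can usually reconstruct it from the public IP of the ingress gateway. A sketch, assuming Otomi uses the default Istio ingress gateway service in the `istio-system` namespace:

```bash
# Look up the public IP of the ingress gateway (service name and namespace are assumptions)
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# The console should then be reachable at https://<that-ip>.nip.io
```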

7. Add the auto-generated CA to your keychain (optional)

Since we installed Otomi without providing a custom CA or using Let's Encrypt, the installer generated a CA. This CA is of course not trusted on your local machine.
To prevent you from clicking away lots of security warnings in your browser, you can add the generated CA to your keychain:

- In the left menu of the console, click on "Download CA"
- Double-click the downloaded `ca.crt`, or add the CA to your keychain on your Mac using the following command:

```bash
# On Mac
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/Downloads/ca.crt
```

```powershell
# On Windows(PowerShell - Run as Administrator)
# Use certutil:
certutil.exe -addstore root <downloaded cert path>
# Or
Import-Certificate -FilePath "<downloaded cert path>" -CertStoreLocation Cert:\LocalMachine\Root
# Restart the browser
```

Alternatively, you could run Chrome (sorry Msft folks ;) in insecure mode:

```bash
alias chrome-insecure='/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --ignore-certificate-errors --ignore-urlfetcher-cert-requests &> /dev/null'
```

8. Activate Drone:

- In the side menu of Otomi Console under `Platform`, select `Apps` and click on the **Drone** app
- Click on the `play` button in the top right. A new tab will open for Drone
- Sign in locally as `otomi-admin` with the password provided in the logs of the installer job.
- Click on `Authorize Application`
- Click on `Submit` on the `Complete your Drone Registration` page. You don't need to fill in your Email, Full Name, or Company Name if you don't want to.
- Click on the `otomi/values` repository
- Click on `+ Activate Repository`

Go to the [next lab](../3_create_team/README.md)
41 changes: 41 additions & 0 deletions labs/paas/otomi/3_create_team/README.md
@@ -0,0 +1,41 @@
# Lab 3: Creating Teams in Otomi

In this lab, we are going to create a Team in Otomi. Teams in Otomi serve the following purposes:

- Create a namespace on the cluster, configure RBAC, and set default quotas

- Provide self-service options for team members in Otomi Console

- Isolate ingress traffic between teams

- Optionally, separate team metrics and logs. When multi-tenancy is not enabled (the default), metrics and logs are not separated (all users effectively get the admin role and can see cluster-wide metrics and logs)

Let's create a Team!

## Instructions

1. In the side menu, click on `Teams` under the `Platform` section.

2. Click on `Create team`.

3. Provide a name for the team.

4. Under NetworkPolicy, disable `Network policies` and `Egress control` (we will activate these later on).

5. Leave all other settings default.

6. Click on `Submit`.

7. Click on `Deploy Changes` and check the progress of the deployment in the `Drone` application.
Note: the `Deploy Changes` button becomes active in the side menu after you submit a change. Once the deployment has finished, you can verify the result from the command line, as shown below.
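
A quick check that the team namespace was created (a sketch: the `team-<TEAM-NAME>` naming follows the convention used in the next lab, and the ResourceQuota check assumes Otomi applies default quotas to the namespace):

```bash
# Verify the team namespace exists (replace <TEAM-NAME> with your team name)
kubectl get namespace team-<TEAM-NAME>

# Inspect the default quotas set for the team (assumed to be a ResourceQuota)
kubectl get resourcequota -n team-<TEAM-NAME>
```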

8. Select your team in the top bar. Here you can select your context (cluster and team).

9. In the side menu, the team section will now become visible.

Note:

- Because we did not enable Alertmanager, the Alerts section is disabled
- Because we did not enable Grafana, the Azure Monitor section is disabled

Go to the [next lab](../4_netpols/README.md)
74 changes: 74 additions & 0 deletions labs/paas/otomi/4_netpols/README.md
@@ -0,0 +1,74 @@
# Lab 4: Configuring network policies in Otomi

In this lab we are going to deploy a multi-tier web application called `guestbook`, map its three K8s services to Otomi, and configure public access to the front-end. Next, we are going to turn on the NetworkPolicies option for the team.

## Instructions

1. Install the Guestbook application resources in the `team-<TEAM-NAME>` namespace:

```bash
kubectl apply -f https://raw.githubusercontent.com/redkubes/workshops/main/netpol/manifests/guestbook.yaml -n team-<TEAM-NAME>
```

2. Get the names of the created ClusterIP services:

```bash
kubectl get svc -n team-<TEAM-NAME>
```

You will see three services:

```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend ClusterIP 10.0.183.235 <none> 80/TCP 6m44s
redis-follower ClusterIP 10.0.135.61 <none> 6379/TCP 6m44s
redis-leader ClusterIP 10.0.82.226 <none> 6379/TCP 6m44s
```

3. Go to Otomi Console. Make sure you have selected your team in the top bar, and then click the `Services` item under your team in the side menu.

4. First, add the created `frontend` service to Otomi. Click `Create Service`.

5. Fill in the name `frontend`.

6. Under `Exposure ingress`, select `Public`. Leave all other settings under exposure default.

7. Leave all other settings default and click `Submit`.

8. Click `Deploy Changes`.

After the changes have been deployed (this will take a couple of minutes), you will see that the service we just created has a host name. Click on the host name. What do you see? Submit a couple of messages.

9. Now add the other two services (`redis-follower` and `redis-leader`). Make sure to provide the correct port (6379) for both. Leave all other settings default (so no exposure) and Submit. You don't need to Deploy Changes after every Submit: just create the two services and then Deploy Changes.

When you create a service in Otomi with ingress `Cluster`, the K8s service is added to Otomi's service mesh. For every service you create, Otomi automatically configures the Istio gateway and creates the corresponding Istio virtual services.
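
If you are curious, you can inspect the generated Istio resources from the command line (a hedged peek; the exact names and namespaces follow Otomi's internal conventions):

```bash
# List the Istio gateways and virtual services across all namespaces
kubectl get gateways,virtualservices -A
```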

Notice that the guestbook front-end still works!

10. In Otomi Console go to your team and then click the `Settings` item.

11. Under NetworkPolicy, enable `Network Policies`. Click on `submit` and then `Deploy Changes`.

Now go to the Guestbook application and notice that your messages are gone and you can't submit new messages. This is because traffic between the frontend and the two Redis services is no longer permitted. Let's fix this.
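
You can see the enforcement from the command line: NetworkPolicy objects should now exist in the team namespace (a standard check; the policy names themselves are generated by Otomi):

```bash
# List the NetworkPolicies enforced in the team namespace
kubectl get networkpolicy -n team-<TEAM-NAME>
```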

12. Click on the `redis-leader` service.

13. Under Network Policies, select `Allow selected` and click `Add Item`. Add the following two items and Submit:

| Team name | Service Name |
| ----------- | ------------ |
| TEAM-NAME | frontend |
| TEAM-NAME | redis-follower |

Before deploying changes, go to the `redis-follower` service and do the same, in this case allowing the `frontend` and `redis-leader` services:

| Team name | Service Name |
| ----------- | ------------ |
| TEAM-NAME | frontend |
| TEAM-NAME | redis-leader |

Now `Deploy Changes` and check the progress of the deployment in the `Drone` application.

Note that the Guestbook app works again.

Go to the [next lab](../5_activate_apps/README.md)
25 changes: 25 additions & 0 deletions labs/paas/otomi/5_activate_apps/README.md
@@ -0,0 +1,25 @@
# Lab 5: Activate apps in Otomi

Otomi by default installs a minimal set of apps, called the Core apps. With the Core apps, Otomi offers an advanced ingress architecture using Nginx, Istio, Keycloak, cert-manager, and OAuth2, combined with developer self-service. Next to the Core apps, Otomi offers optional apps like Knative, Harbor, Vault, Kubeapps, Prometheus, Loki, Alertmanager, and more. These apps are all fully integrated and can be activated by dragging them to the active apps section in the Console.

In this lab we are going to activate Loki for logging. But first, a note on multi-tenancy: the multi-tenancy option in Otomi is not enabled by default. When multi-tenancy is enabled, team metrics and logs are separated per team. When it is disabled, all users effectively get the admin role for logs and metrics, including the metrics and logs of all platform services.

## Instructions

1. Go to `Settings` under the `Platform` section in the side menu and then select `Otomi`. At the bottom of the page you will see the `Multi-tenancy` flag. For this lab, we will not enable multi-tenancy.

2. Go to `Apps` under the `Platform` section in the side menu and drag and drop `Loki` from the `Disabled apps` to the `Enabled apps`. Notice that `Grafana` and `Prometheus` will also be enabled: Loki requires Grafana, and Grafana requires Prometheus, so both are installed as dependencies.

3. Click on `Deploy Changes`

4. To see the progress of the installation of Loki, go to `Apps` under the `Platform` section and click on `Drone`. In the top right you will see a play button; click on it. The Drone app will now open in a new tab. Click on the `otomi/values` repository and then on the last build execution. When the `apply` step is finished, Loki and Grafana will be installed and ready to use. You can also follow the rollout from the command line, as shown below.
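
A rough command-line check (the namespace Loki lands in depends on Otomi's defaults, so we search across all namespaces):

```bash
# Watch for the Loki and Grafana pods to come up
kubectl get pods -A | grep -iE 'loki|grafana'
```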

5. Go to the Apps section again and click on `Loki`. In the app bar, click on `Values`. The Loki chart has been installed with sane default values to support the most common use cases. Click on `Duration` to see the default value. All the defaults (specified in the Otomi values [schema](https://github.com/redkubes/otomi-core/blob/master/values-schema.yaml)) can be modified.

6. In the app bar, click on `Raw values`. Here you can set any values of the Loki chart that do not have defaults in the Otomi values schema.

7. Click on the play button. A new tab will open where you can execute queries to search for logs. Add the following query: `{namespace="team-<TEAM-NAME>"}`. Now you will see all the logs of containers running in the namespace of your team. Copy the path after .nip.io/ from the address bar in your browser. A couple of extra example queries are listed below.
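
A few more example queries to try (standard LogQL; the `container` label is an assumption about the labels Otomi's log shipper attaches):

- `{namespace="team-<TEAM-NAME>", container="frontend"}`: only logs from the guestbook frontend container (assumed label)
- `{namespace="team-<TEAM-NAME>"} |= "error"`: only log lines containing the word "error"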

8. Go back to the console and in the Loki app, click on `Shortcuts`. Click `Edit` and then `Add item`. Fill in a title (like "TEAM-NAME logs"), a description (like "The logs of TEAM-NAME"), and paste the copied path. Now click Submit. The shortcut you just created can be used to go directly to Loki and see the result of your query.

Go to the [next lab](../6_knative/README.md)