K8S Cluster

Setup for the Kubernetes cluster.

KinD: Local dev env

Prerequisites

Using KinD

To test a minimal setup of a K8s cluster in a local environment, it may be useful to use KinD (Kubernetes in Docker).

  • Build the Cluster with Kind

    kind create cluster --name gnolive --config=cluster/kind/kind.yaml
  • Get cluster info

    kubectl cluster-info --context kind-gnolive
  • (opt.) Delete the cluster

    kind delete cluster --name gnolive
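For reference, a minimal `cluster/kind/kind.yaml` with the port 3000 mapping described under Gotchas below might look like the following sketch (the actual file in the repo may differ):

```yaml
# Hypothetical sketch of cluster/kind/kind.yaml -- the real file may differ
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      # Expose port 3000 (Grafana) on the host
      - containerPort: 3000
        hostPort: 3000
        protocol: TCP
```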

Gotchas

  • It may take a little while for the cluster to deploy all the required resources. Be patient...

  • The cluster created using KinD is configured to expose port 3000 on the control-plane host using extraPortMappings. This method should work out of the box on any host, but it may be affected by the combination of Docker and Kubernetes versions.

  • (alternatively) Expose dashboard service manually

    kubectl port-forward service/grafana 3000:3000

Core services

  • Add Namespace
  kubectl apply -f cluster/namespaces/core.yaml

Tip: to switch the default namespace, see the K8s Tips section below.
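A namespace manifest such as `cluster/namespaces/core.yaml` typically looks like this (the namespace name `gno` is taken from the K8s Tips section; the actual file may differ):

```yaml
# Hypothetical sketch of cluster/namespaces/core.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gno
```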

Gnoland

  • Generate Persistent Volume

    kubectl apply -f core/gno-land/storage/
  • Spin up Gno service

    kubectl apply -f core/gno-land/deploys/
  • (opt.) Run Stress Tests - SUPERNOVA

    kubectl apply -f core/jobs/supernova.yaml
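As an illustration, the manifests under `core/gno-land/storage/` would typically define a PersistentVolumeClaim along these lines (the name, namespace, and size here are assumptions, not the repo's actual values):

```yaml
# Hypothetical PVC sketch -- name, namespace and size are illustrative only
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gnoland-data
  namespace: gno
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```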

All core services in a shot

  • Using skaffold. FULL STOP!

    skaffold run

OR

  • Generate Persistent Volumes (Gnoland and Tx-Indexer)

    kubectl apply -f core/gno-land/storage/ -f core/indexer/storage/
  • Spin up ALL services

    kubectl apply -f core/ -R
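For context, a `skaffold.yaml` that deploys all the raw manifests under `core/` could be sketched as follows (the apiVersion and paths are assumptions; the repo's actual Skaffold configuration may differ):

```yaml
# Hypothetical skaffold.yaml sketch -- apiVersion and paths are assumptions
apiVersion: skaffold/v4beta6
kind: Config
manifests:
  rawYaml:
    - core/**/*.yaml
deploy:
  kubectl: {}
```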

Traefik

  • Generate CRD and RBAC

    kubectl apply -f traefik/ingress-route/crd.yaml -f traefik/ingress-route/rbac.yaml
  • Spin up service (IngressRoute version)

    kubectl apply -f traefik/ingress-route/traefik.yaml
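As an example of what an IngressRoute in this setup might look like, here is a sketch routing one of the hostnames listed in the K8s Tips section (the entryPoint, namespace, service name, and port are assumptions):

```yaml
# Hypothetical IngressRoute sketch -- entryPoint, service and port are assumptions
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`grafana.gnoland.tech`)
      kind: Rule
      services:
        - name: grafana
          port: 3000
```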

Monitoring services

  • Add Namespace
  kubectl apply -f cluster/namespaces/monitoring.yaml

All monitoring services in a shot

  • Using skaffold. FULL STOP!

    skaffold run

Grafana

  • Create the file secrets/grafana.ini containing only a plain-text password for the Grafana dashboard (just a plain string: no key/value, no quotes)

  • Generate secrets

    kubectl apply -k monitoring/grafana/secrets/
  • Generate Config Map for static config files

    kubectl apply -k monitoring/grafana/configmaps/
  • Generate Volumes

    kubectl apply -f monitoring/grafana/storage/
  • Spin up Grafana service

    kubectl apply -f monitoring/grafana/deploys/
  • Check out the Grafana dashboard by visiting http://127.0.0.1:3000/dashboards; after logging in, navigate to the Gnoland Dashboard (log in as the admin user with the password defined in the grafana.ini file)
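Since the secrets are applied with `kubectl apply -k`, the `monitoring/grafana/secrets/` directory presumably contains a kustomization.yaml with a secretGenerator reading grafana.ini. A sketch (the secret name, namespace, and exact paths are assumptions):

```yaml
# Hypothetical monitoring/grafana/secrets/kustomization.yaml sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
secretGenerator:
  # Packages the plain-text password file into a Secret
  - name: grafana-admin
    files:
      - grafana.ini
```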


Cleaning up Cluster

Issues with PVC in AWS EKS

Generally speaking, in AWS EKS storage is backed by EBS volumes, which correspond to PVC resources in Kubernetes. When creating the cluster from scratch, the given configuration will also create:

  • the CSI (Container Storage Interface) resource for AWS EKS
  • the AWS Storage Class for EBS volumes

When resources are deleted via skaffold, it may blindly remove these specific AWS resources without waiting for the removal of the other resources. In this scenario, PVCs and in turn the corresponding Pods will remain in a pending-removal state, since there is no longer a storage class (it was already removed) able to satisfy the removal request. When this happens, the best way to clean up is to manually re-add and then re-remove the storage elements:

kubectl apply -k aws-eks/storage
# ... wait for full PVC/Pod removal
# kubectl get pvc -A -w
kubectl delete -k aws-eks/storage
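The storage elements under `aws-eks/storage` typically include an EBS-backed StorageClass; a representative sketch (the name and parameters are assumptions, not the repo's actual values):

```yaml
# Hypothetical EBS StorageClass sketch -- name and parameters are assumptions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```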

K8s Tips

  • Change current Namespace
kubectl config set-context --current --namespace=gno
  • Apply all the YAML files in a folder tree
kubectl apply -R -f dir
  • Get logs from init containers
kubectl logs --previous <pod_name> -c <init_container_name>
  • Disruptive and forced deletion of a Pod
kubectl delete pod <pod_name> -n <namespace> --grace-period=0 --force
  • Local DNS name resolution for Traefik: edit /etc/hosts on the local machine by appending:
127.0.0.1 gnoland.tech
127.0.0.1 rpc.gnoland.tech
127.0.0.1 web.gnoland.tech
127.0.0.1 indexer.gnoland.tech
127.0.0.1 faucet.gnoland.tech
127.0.0.1 grafana.gnoland.tech
  • Get specific information on resource capacity of a node
kubectl describe node <node-name>

and check the output:

Capacity:
  cpu:                2
  memory:             3922840Ki
  pods:               29
  ...

Resources

Security Context

Service Account

Downward API

AWS EKS

Kustomize

Skaffold

Log Rotation