
capitok

End-to-end demo environment using Cluster API to build a base Kubernetes cluster on Amazon EC2 for Tanzu Application Service for Kubernetes (TAS for K8s).

These instructions are designed to work on a Mac.

Prereqs:

  • kubectl
  • Docker
  • kind
  • AWS CLI
  • jq
  • Helm v3
  • A clone of this repo
  • A DNS domain you control
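
Most of these can be installed with Homebrew (a convenience, assuming you use brew; the formula and cask names below are the common ones):

    brew install kubectl kind jq awscli helm
    brew install --cask docker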

Building the Workload Cluster

  1. Install clusterctl binary

    curl -L  https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.3/clusterctl-darwin-amd64 -o clusterctl
    chmod +x ./clusterctl
    sudo mv ./clusterctl /usr/local/bin/clusterctl
    
  2. Install clusterawsadm binary

    curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.5.2/clusterawsadm-darwin-amd64 -o clusterawsadm
    chmod +x ./clusterawsadm
    sudo mv ./clusterawsadm /usr/local/bin/clusterawsadm
    
  3. Export AWS Variables

    source env-files/aws-exports.sh
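
    The repo's script is not reproduced here; a typical version exports the AWS credentials that clusterawsadm and the clusterctl templates expect (the variable names below are the standard CAPA v0.5.x ones; the region, key, and machine-type values are placeholders/assumptions):

    export AWS_REGION=us-east-1                     # example region
    export AWS_ACCESS_KEY_ID=<YOUR-ACCESS-KEY>
    export AWS_SECRET_ACCESS_KEY=<YOUR-SECRET-KEY>
    export AWS_SSH_KEY_NAME=tmc                     # an EC2 key pair that exists in your region
    export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.xlarge
    export AWS_NODE_MACHINE_TYPE=t3.xlarge
    export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)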
    
  4. Create local management cluster:

    kind create cluster --name clusterapi
    
  5. Create the CloudFormation Stack

    clusterawsadm alpha bootstrap create-stack
    
  6. Initialize the Management Cluster

    clusterctl init --infrastructure aws
    
  7. Back up the kubeconfig

    cp $HOME/.kube/config  $HOME/.kube/config.capi
    
  8. Build the cluster configuration

    clusterctl config cluster tas --kubernetes-version v1.15.7 --control-plane-machine-count=1 --worker-machine-count=6 --kubeconfig=$HOME/.kube/config.capi > tas.yaml
    
  9. Modify the tas.yaml file to adjust root disk sizing. Add these lines to the AWSMachineTemplate spec:

    rootVolume:
      size: 25
    

    The result should look like this:

    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    metadata:
      name: tas-md-0
      namespace: default
    spec:
      template:
        spec:
          iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
          instanceType: t3.xlarge
          sshKeyName: tmc
          rootVolume:
            size: 25
    
  10. Create your Workload cluster for your TAS deployment:

    kubectl --kubeconfig=$HOME/.kube/config.capi  apply -f ./tas.yaml
    

    Output:

    cluster.cluster.x-k8s.io/tas created
    awscluster.infrastructure.cluster.x-k8s.io/tas created
    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/tas-control-plane created
    awsmachinetemplate.infrastructure.cluster.x-k8s.io/tas-control-plane created
    machinedeployment.cluster.x-k8s.io/tas-md-0 created
    awsmachinetemplate.infrastructure.cluster.x-k8s.io/tas-md-0 created
    kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/tas-md-0 created
    
  11. Monitor the status until complete (it will take a while). The kubeadmcontrolplane will report as initialized:

    kubectl --kubeconfig=$HOME/.kube/config.capi  get cluster --all-namespaces
    kubectl --kubeconfig=$HOME/.kube/config.capi  get machines --all-namespaces
    kubectl --kubeconfig=$HOME/.kube/config.capi  get kubeadmcontrolplane --all-namespaces
    

    (Screenshot: Cluster Ready)

Preparing the Workload Cluster for TAS

  1. Get the kubeconfig for the Workload cluster:
    kubectl --kubeconfig=$HOME/.kube/config.capi --namespace=default get secret/tas-kubeconfig -o jsonpath={.data.value} | base64 --decode > $HOME/.kube/config.tas
    
  2. Make the kubeconfig the default:
    cp $HOME/.kube/config.tas $HOME/.kube/config
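
    If you would rather not overwrite your default kubeconfig, an equivalent alternative (not what the repo's steps assume) is to point KUBECONFIG at the new file for the current shell:

    export KUBECONFIG=$HOME/.kube/config.tas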
    
  3. Install the Calico Networking CNI into the cluster
    kubectl --kubeconfig=$HOME/.kube/config.tas apply -f https://docs.projectcalico.org/v3.12/manifests/calico.yaml
    
  4. Add AWS EBS Storage Class for Dynamic Volume Provisioning
    kubectl create -f yaml/aws-ebs-storageclass.yaml
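
    The repo's yaml/aws-ebs-storageclass.yaml is not shown here; a typical default gp2 StorageClass for the in-tree EBS provisioner looks something like this (the name and parameters in the repo's file may differ):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gp2
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2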
    

Set up cert-manager and Let's Encrypt for TLS certs

  1. Install NGINX Ingress Controller.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
    
    
  2. Make a note of the AWS ELB assigned to NGINX

    kubectl get svc ingress-nginx --namespace=ingress-nginx -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
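
    If you want to reuse this hostname in the DNS steps below, you can capture it in a shell variable (a convenience, not part of the original steps):

    ELB_ADDRESS=$(kubectl get svc ingress-nginx --namespace=ingress-nginx -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo $ELB_ADDRESS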
    
  3. Install cert-manager.

    kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.2/cert-manager.yaml
    
  4. Edit the yaml/staging-issuer.yaml file and add your email address. Then create the staging issuer object:

    kubectl create -f yaml/staging-issuer.yaml
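
    The repo's yaml/staging-issuer.yaml is not reproduced here; for cert-manager v0.14 a Let's Encrypt staging issuer typically looks like this (the issuer name and ingress class are assumptions):

    apiVersion: cert-manager.io/v1alpha2
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        email: <YOUR-EMAIL>
        privateKeySecretRef:
          name: letsencrypt-staging
        solvers:
        - http01:
            ingress:
              class: nginx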
    
  5. Create the simple echo test services

    kubectl apply -f yaml/echo1.yaml
    kubectl apply -f yaml/echo2.yaml
    
  6. Create the ingress for the test services

    kubectl apply -f yaml/echo-ingress-staging.yaml
    
  7. Create a Route53 zone that matches your personal DNS domain. Edit your personal DNS domain at its registrar to use the same NS records as this new Route53 zone (a CLI sketch follows). You could also do this as a subdomain, but that's not covered here.
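
    If you prefer the AWS CLI to the console, the zone can be created and its NS records listed like this (the caller reference just needs to be unique):

    aws route53 create-hosted-zone --name <YOUR-DOMAIN> --caller-reference $(date +%s)
    aws route53 get-hosted-zone --id <ZONE-ID> | jq '.DelegationSet.NameServers'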

  8. Create Route53 CNAME Records for Test Services

    echo1.<YOUR-DOMAIN>   CNAME <ELB Address from Step Above>
    echo2.<YOUR-DOMAIN>   CNAME <ELB Address from Step Above>
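
    These records can also be created with the AWS CLI (shown for echo1; repeat for echo2, substituting the ELB hostname noted earlier):

    aws route53 change-resource-record-sets --hosted-zone-id <ZONE-ID> --change-batch '{
      "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "echo1.<YOUR-DOMAIN>", "Type": "CNAME", "TTL": 300,
        "ResourceRecords": [{"Value": "<ELB-ADDRESS>"}]}}]}'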
    
  9. Check that the cert was created. The output should include Successfully created Certificate echo-tls

    kubectl describe ingress
    
  10. Test the service:

    curl https://echo1.<YOUR-DOMAIN>
    curl https://echo2.<YOUR-DOMAIN>
    

    NOTE: The curl may fail certificate validation, since Let's Encrypt staging certificates are not trusted by default. In this case, ignore it and proceed.

  11. Edit the yaml/prod-issuer.yaml file and add your email address. Then create the production issuer object:

    kubectl create -f yaml/prod-issuer.yaml
    
  12. Create the ingress for the production issuer

    kubectl apply -f yaml/echo-ingress-prod.yaml
    
  13. Verify certificate was created properly

    kubectl describe certificate
    

Install Harbor Container Registry

  1. Add the Bitnami repo to Helm
    helm repo add bitnami https://charts.bitnami.com/bitnami
    
  2. Edit yaml/harbor-values.yaml and insert your domain name
  3. Install Harbor via Helm
    helm install harbor-release bitnami/harbor -f yaml/harbor-values.yaml
    
  4. When the install is complete, use kubectl to get the ELB address (it should be the same as for the echo tests). Then create a Route53 CNAME record that points harbor.<YOUR-DOMAIN> at that ELB address.
    kubectl get ingress harbor-release-ingress
    
  5. You can now log in to Harbor as the admin user. Run this command to get the password:
    kubectl get secret --namespace default harbor-release-core-envvars -o=jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 --decode
    
  6. Create a project in Harbor called tas-workloads. (Screenshot: Create Project)

Install TAS for Kubernetes

  1. Obtain the tar file release of TAS for Kubernetes and extract it.
  2. Remove the custom overlay that replaces the LoadBalancer with a ClusterIP (we want a load balancer here)
    rm -f ./custom-overlays/replace-loadbalancer-with-clusterip.yaml
    
  3. Edit env-files/tas-exports.sh and then source it.
    source env-files/tas-exports.sh
    
  4. Generate Deployment defaults
    ./bin/generate-values.sh -d "tas.<YOUR-DOMAIN>" > /tmp/deployment-values.yml
    
  5. Install TAS for K8s
    ./bin/install-tas.sh /tmp/deployment-values.yml
    
  6. Get the name of the AWS ELB created for the Istio Gateway.
    kubectl get svc istio-ingressgateway --namespace=istio-system
    
  7. Create a Route53 CNAME record in your DNS zone that points a wildcard TAS domain at the ELB from the previous step.
    *.tas.<YOUR_DOMAIN>.  CNAME <ELB-ADDRESS>
    

Log in to TAS for K8s and Test

  1. Set the API Target
    cf api --skip-ssl-validation https://api.tas.<YOUR-DOMAIN>
    
  2. Get the admin password from the deployment file
    cat /tmp/deployment-values.yml | grep cf_admin_password
    
  3. Login as admin
    cf auth admin <password>
    
  4. Enable Docker container support (this is a temporary step)
    cf enable-feature-flag diego_docker
    
  5. Create Test Org and Space
    cf create-org test-org
    cf create-space -o test-org test-space
    cf target -o test-org -s test-space
    
  6. Clone the sample application for deployment and build it
    git clone https://github.com/cloudfoundry-samples/spring-music.git
    cd spring-music
    ./gradlew clean assemble
    
  7. Push application to TAS
    cf push
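
    Once the push completes, you can confirm the app is running and find its route (the app name comes from the sample's manifest):

    cf apps
    cf app spring-music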
    
