Abstract:
Although AWS EKS has been generally available for quite a while, with AWS EKS Fargate on the
roadmap but not yet available, it still requires a fair amount of manual effort to create the
worker nodes and configure kubectl to talk to the cluster. In this QuickStart we'll build on
CloudFormation scripts AWS has already provided to fully automate the creation of the EKS Cluster.
We'll also use some basic shell scripts to configure kubectl on the EC2 Instance to talk to the cluster.
This solution shows how to create an AWS EKS Cluster and deploy a simple web application with an external Load Balancer. This README updates the article "Getting Started with Amazon EKS" referenced below and provides a more basic step-by-step process. It uses CloudFormation and cloud-init scripts we created to do more of the heavy lifting required to set up the cluster.
Note: This how-to assumes you are creating the EKS cluster in us-east-1, you have access to your AWS Root Account, and you can log in to an EC2 Instance remotely.
Steps:
- Create AWS EKS Cluster and EC2 Instance for Kubectl Console using AWS CloudFormation
- Configure kubectl on Your EC2 Instance
- Deploy WebApp to Your Cluster
- Configure the Kubernetes Dashboard (Optional)
- Remove Your AWS EKS Cluster
To make this first microservice easy to deploy, we'll use a Docker image located on DockerHub at kskalvar/web. This image is nothing more than a simple web app that returns the current IP address of the container it's running in. We'll create an external AWS Load Balancer, and you should see a unique IP address as requests are load balanced across containers.
The project also includes the Dockerfile for those interested in the configuration of the actual application, or who want to build their own image and deploy it using ECR.
We'll use CloudFormation to create the EKS Cluster, Worker Nodes, and the EC2 Instance in which to run kubectl. This is a step-by-step process.
Click on "Create Stack"
Select "Specify an Amazon S3 template URL"
https://998551034662-aws-eks-cluster.s3.amazonaws.com/eks-cluster-demo.json
Click on "Next"
Specify Details
Stack name: eks-cluster-demo
KeyName: <Your AWS KeyName>
Click on "Next"
Click on "Next"
Select "I acknowledge that AWS CloudFormation might create IAM resources with custom names"
Select "I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND"
Click on "Create"
Wait for Status CREATE_COMPLETE before proceeding
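If you prefer the command line, the same stack can be created with the AWS CLI. The following is a sketch using the stack name and template URL above; substitute your own key pair name:

# Create the stack from the same template used in the console steps
aws cloudformation create-stack \
  --stack-name eks-cluster-demo \
  --template-url https://998551034662-aws-eks-cluster.s3.amazonaws.com/eks-cluster-demo.json \
  --parameters ParameterKey=KeyName,ParameterValue=<Your AWS KeyName> \
  --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete --stack-name eks-cluster-demo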
You will need to ssh into the AWS EC2 Instance you created above. This is a step-by-step process.
NOTE: The EC2 Instance used for kubectl, created by CloudFormation, references an image
we previously created using the cloud-init script in this project. The image can be found
under Images/AMIs in the EC2 Dashboard once you create it. Simply update the default
ConsoleImageId in the CloudFormation form when creating the EKS cluster.
Using ssh from your local machine, connect to your AWS EC2 Instance
ssh -i <AWS EC2 Private Key> ec2-user@<AWS EC2 Instance IP Address>
Check the contents of "/tmp/install-eks-support"; it should say "installation complete".
Use the AWS CLI to set Access Key, Secret Key, and Region Name
aws configure
AWS Access Key ID []: <Your Access Key ID>
AWS Secret Access Key []: <Your Secret Access Key>
Default region name []: us-east-1
Test aws cli
aws s3 ls
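If the bucket listing fails, a quick way to confirm which identity the CLI is using (not part of the original steps, but a standard AWS CLI call):

# Prints the account ID and ARN the CLI is authenticated as
aws sts get-caller-identity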
Configure kubectl to access the cluster
NOTE: There is a script in /home/ec2-user called "configure-kube-control".
You may run this script to automate the creation and population of the environment
variables in .kube/aws-auth-cm.yaml and .kube/control-kubeconfig. It
uses the naming convention specified in this HOW-TO, so if you didn't
use that naming convention it won't work. If you do use the script, all
you need to do afterward is run the "Test Cluster" and "Test Cluster Nodes" steps.
./configure-kube-control
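For reference, a minimal sketch of the kind of work a script like configure-kube-control performs, assuming the cluster is also named eks-cluster-demo; the actual script on the image may differ:

# Look up the cluster endpoint and certificate data used to populate the kubeconfig
aws eks describe-cluster --name eks-cluster-demo --query cluster.endpoint --output text
aws eks describe-cluster --name eks-cluster-demo --query cluster.certificateAuthority.data --output text

# A modern shortcut that writes an equivalent kubeconfig for you
aws eks update-kubeconfig --name eks-cluster-demo

# Apply the aws-auth ConfigMap so the worker nodes can join the cluster
kubectl apply -f ~/.kube/aws-auth-cm.yaml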
Use kubectl to test the cluster status
kubectl get svc
Use kubectl to test status of cluster nodes
kubectl get nodes
Wait until you see all nodes appear with "STATUS Ready"
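If you'd rather not re-run the command by hand, kubectl can watch for changes:

kubectl get nodes --watch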
You will need to ssh into the AWS EC2 Instance you created above. This is a step-by-step process.
Use kubectl to create the web service
kubectl apply -f ~/aws-eks-cluster-quickstart/kube-deployment/web-deployment-service.yaml
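The actual manifest lives in the repository; as an illustrative sketch (not copied from the repo), a file like web-deployment-service.yaml pairs a Deployment of the kskalvar/web image with a Service of type LoadBalancer, roughly as follows, assuming the container listens on port 80:

# Equivalent inline apply; replica count and ports here are assumptions
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: kskalvar/web
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
EOF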
Use kubectl to display pods
kubectl get pods --output wide
Wait until you see all pods appear with "STATUS Running"
Capture EXTERNAL-IP for use below
kubectl get service web --output wide
Using your client-side browser, enter the following URL
http://<EXTERNAL-IP>
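You can also exercise the load balancer from the command line; because requests are balanced across containers, the returned IP address should vary between calls:

# Issue several requests; the reported container IP should change
for i in 1 2 3 4 5; do curl -s http://<EXTERNAL-IP>; echo; done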
Use kubectl to delete application
kubectl delete -f ~/aws-eks-cluster-quickstart/kube-deployment/web-deployment-service.yaml
You will need to configure the dashboard from the AWS EC2 Instance you created, as well as use ssh to create a tunnel on port 8001 from your local machine. This is a step-by-step process.
Configure Kubernetes Dashboard
NOTE: There is a script in /home/ec2-user called "configure-kube-dashboard".
You may run this script to automate the installation of the dashboard components into the cluster,
configure the service role, and start the kubectl proxy.
./configure-kube-dashboard
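As a rough sketch of what a script like configure-kube-dashboard automates (the manifest URL, account name, and role binding below are assumptions; the actual script may differ):

# Install the dashboard components into the cluster (pin a specific release in practice)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# Create an admin service account and bind it to the cluster-admin role
kubectl -n kube-system create serviceaccount eks-admin
kubectl create clusterrolebinding eks-admin --clusterrole=cluster-admin --serviceaccount=kube-system:eks-admin

# Print the "Security Token" used to log in to the dashboard
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

# Start the local proxy on port 8001
kubectl proxy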
Using ssh from your local machine, open a tunnel to your AWS EC2 Instance
ssh -i <AWS EC2 Private Key> ec2-user@<AWS EC2 Instance IP Address> -L 8001:localhost:8001
Using your local client-side browser, enter the following URL. The configure-kube-dashboard script also generated a "Security Token" required to log in to the dashboard.
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Use the AWS Console to delete all resources used by the AWS EKS Cluster
Note: Before proceeding, be sure you delete the web deployment and service as instructed above.
Failure to do so will cause the CloudFormation delete to fail.
Delete "eks-cluster-demo" Stack
AWS EKS QuickStart
https://github.com/kskalvar/aws-eks-cluster-quickstart
AWS Summit Slides for EKS
https://www.slideshare.net/AmazonWebServices/srv318-running-kubernetes-with-amazon-eks
Kubernetes
https://kubernetes.io
AWS EKS Getting Started
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html