Commit: Changed the instructions to start using EKS Blueprints for Terraform

Showing 8 changed files with 386 additions and 54 deletions.
15 changes: 15 additions & 0 deletions
content/using_ec2_spot_instances_with_eks/021_terraform/_index.md

@@ -0,0 +1,15 @@
---
title: "Launch using Terraform"
chapter: true
weight: 25
---

# Launch using [Terraform](https://www.terraform.io/)

[Terraform](https://www.terraform.io/) is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently in AWS.

[EKS Blueprints for Terraform](https://github.com/aws-ia/terraform-aws-eks-blueprints) helps you compose complete EKS clusters that are fully bootstrapped with the operational software needed to deploy and operate workloads. With EKS Blueprints, you describe the configuration for the desired state of your EKS environment, such as the control plane, worker nodes, and Kubernetes add-ons, as an IaC blueprint. Once a blueprint is configured, you can use it to stamp out consistent environments across multiple AWS accounts and Regions using continuous deployment automation.

In this module, we will use EKS Blueprints for Terraform to launch and configure our EKS cluster and nodes.

{{< youtube DhoZMbqwwsw >}}
282 changes: 282 additions & 0 deletions
...2_spot_instances_with_eks/021_terraform/create_eks_cluster_terraform_command.md

@@ -0,0 +1,282 @@
---
title: "Create EKS cluster Command"
chapter: false
disableToc: true
hidden: true
---

Create a Terraform template file (eksblueprints.tf) that defines the EKS cluster. Note the quoted heredoc delimiter ('EOF'): it stops the shell from expanding the `${...}` references, which belong to Terraform, and the `.tf` extension ensures Terraform picks the file up:

```
cat << 'EOF' > eksblueprints.tf
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.72"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.10"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

provider "kubernetes" {
  host                   = module.eks_blueprints.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks_blueprints.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_blueprints.eks_cluster_id
}

data "aws_ami" "amazonlinux2eks" {
  most_recent = true
  filter {
    name   = "name"
    values = ["amazon-eks-node-${local.cluster_version}-*"]
  }
  owners = ["amazon"]
}

data "aws_availability_zones" "available" {}

locals {
  name            = "eksspotworkshop"
  cluster_version = "1.24"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint = local.name
  }
}

#---------------------------------------------------------------
# EKS Blueprints
#---------------------------------------------------------------
module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.21.0"

  cluster_name    = local.name
  cluster_version = local.cluster_version

  vpc_id             = module.vpc.vpc_id
  private_subnet_ids = module.vpc.private_subnets

  node_security_group_additional_rules = {
    # Extend node-to-node security group rules. Recommended and required for the add-ons
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    # Recommended outbound traffic for node groups
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
    # Allows the control plane to talk to worker nodes on all ports. Added to simplify
    # the example and avoid issues with add-ons communicating with the control plane.
    # This can be restricted to specific ports per add-on, e.g. metrics-server 4443,
    # spark-operator 8080, karpenter 8443. Change this according to your security requirements.
    ingress_cluster_to_node_all_traffic = {
      description                   = "Cluster API to Nodegroup all traffic"
      protocol                      = "-1"
      from_port                     = 0
      to_port                       = 0
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }

  managed_node_groups = {
    # Managed node group with minimum config
    mg5 = {
      node_group_name = "mng-od-m5large"
      instance_types  = ["m5.large"]
      max_size        = 3
      desired_size    = 2
      min_size        = 0
      create_iam_role = false # Set `create_iam_role = false` to bring your own IAM role
      iam_role_arn    = aws_iam_role.managed_ng.arn
      disk_size       = 100 # Disk size is used only with managed node groups without launch templates
      update_config = [{
        max_unavailable_percentage = 30
      }]
      launch_template_os = "amazonlinux2eks" # amazonlinux2eks or bottlerocket
      kubelet_extra_args = "--node-labels=intent=control-apps"
    },
    // ### -->> SPOT NODE GROUPS GO HERE <<--- ###
  }

  tags = local.tags
}

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.21.0"

  eks_cluster_id       = module.eks_blueprints.eks_cluster_id
  eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint
  eks_oidc_provider    = module.eks_blueprints.oidc_provider
  eks_cluster_version  = module.eks_blueprints.eks_cluster_version

  enable_metrics_server = true

  tags = local.tags

  depends_on = [
    module.eks_blueprints
  ]
}

#---------------------------------------------------------------
# Supporting Resources
#---------------------------------------------------------------
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # Manage the default resources so we can name them
  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${local.name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${local.name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${local.name}-default" }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.name}" = "shared"
    "kubernetes.io/role/elb"              = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.name}" = "shared"
    "kubernetes.io/role/internal-elb"     = 1
  }

  tags = local.tags
}

#---------------------------------------------------------------
# Custom IAM roles for Node Groups
#---------------------------------------------------------------
data "aws_iam_policy_document" "managed_ng_assume_role_policy" {
  statement {
    sid = "EKSWorkerAssumeRole"
    actions = [
      "sts:AssumeRole",
    ]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "managed_ng" {
  name                  = "managed-node-role"
  description           = "EKS Managed Node group IAM Role"
  assume_role_policy    = data.aws_iam_policy_document.managed_ng_assume_role_policy.json
  path                  = "/"
  force_detach_policies = true
  managed_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  ]
  tags = local.tags
}

resource "aws_iam_instance_profile" "managed_ng" {
  name = "managed-node-instance-profile"
  role = aws_iam_role.managed_ng.name
  path = "/"

  lifecycle {
    create_before_destroy = true
  }

  tags = local.tags
}

output "configure_kubectl" {
  description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig"
  value       = module.eks_blueprints.configure_kubectl
}
EOF
```
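The `SPOT NODE GROUPS GO HERE` placeholder in the template is filled in during a later step of the workshop. Purely as a hedged illustration (the group name and instance types below are hypothetical; verify key names against the EKS Blueprints v4 `managed_node_groups` schema), a Spot-backed entry could look like:

```hcl
# Hypothetical example only — not the workshop's actual Spot node group.
# It reuses the same key names as the mg5 group above; capacity_type is the
# standard EKS managed node group setting that requests Spot capacity.
mng_spot = {
  node_group_name = "mng-spot-m5-xlarge"
  capacity_type   = "SPOT"
  # Several similarly sized instance types give the Spot allocation
  # strategy more capacity pools to draw from.
  instance_types = ["m5.xlarge", "m5a.xlarge", "m5d.xlarge"]
  max_size       = 3
  desired_size   = 1
  min_size       = 0

  launch_template_os = "amazonlinux2eks"
  kubelet_extra_args = "--node-labels=intent=apps"
}
```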
Next, run the following command to download the Terraform providers and modules that the template requires:

```
terraform init
```
You should see a message saying that the initialization has completed:

```
Terraform has been successfully initialized!
```
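Optionally, you can preview the changes before applying them. `terraform plan` is a standard part of the Terraform workflow (not a required workshop step) and makes no changes to your account:

```shell
# Show what Terraform would create, without changing anything
terraform plan
```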
Next, run the following command to create the cluster and all its dependencies:

```
terraform apply --auto-approve
```
Once the cluster creation is complete, you should see an output like this:

```
Apply complete! Resources: 55 added, 0 changed, 0 destroyed.
```
Run the following command to set the kubectl context to the new EKS cluster:

```
$(terraform output -raw configure_kubectl)
```
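After updating the kubeconfig, you can sanity-check the cluster. Assuming the `mg5` on-demand node group has finished launching, its two nodes should appear, carrying the `intent=control-apps` label set via `kubelet_extra_args`:

```shell
# List the worker nodes registered with the cluster, including their labels
kubectl get nodes --show-labels
```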
36 changes: 36 additions & 0 deletions
content/using_ec2_spot_instances_with_eks/021_terraform/launchterraform.md

@@ -0,0 +1,36 @@
---
title: "Launch EKS"
weight: 20
---

{{% notice warning %}}
**DO NOT PROCEED** with this step unless you have [validated the IAM role]({{< relref "../010_prerequisites/update_workspaceiam.md#validate_iam" >}}) in use by the Cloud9 IDE. You will not be able to run the necessary kubectl commands in the later modules unless the EKS cluster is built using the IAM role.
{{% /notice %}}

#### Challenge:
**How do I check the IAM role on the workspace?**

{{%expand "Expand here to see the solution" %}}

### Validate the IAM role {#validate_iam}

Use the [GetCallerIdentity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) CLI command to validate that the Cloud9 IDE is using the correct IAM role.

```
aws sts get-caller-identity
```
You can verify what the output and the correct role should be in the **[validate the IAM role section]({{< relref "../010_prerequisites/update_workspaceiam.md" >}})**. If you see the correct role, proceed to the next step to create an EKS cluster.
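If you only want the identity's ARN, which contains the role name, the AWS CLI's built-in `--query` filter can trim the JSON output:

```shell
# Print just the ARN of the current caller identity
aws sts get-caller-identity --query Arn --output text
```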
{{% /expand %}}

### Create an EKS cluster

{{% insert-md-from-file file="using_ec2_spot_instances_with_eks/021_terraform/create_eks_cluster_terraform_command.md" %}}

{{% notice info %}}
Launching EKS and all the dependencies will take approximately 20 minutes.
{{% /notice %}}
18 changes: 18 additions & 0 deletions
content/using_ec2_spot_instances_with_eks/021_terraform/prerequisites.md

@@ -0,0 +1,18 @@
---
title: "Prerequisites"
weight: 10
---

For this module, we need to download and install the [Terraform](https://www.terraform.io/) binary:

```
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform-1.3.7-1
```

Confirm the Terraform command works:

```
terraform version
```