Refactor code into modules
brikis98 committed Apr 4, 2017
1 parent 49cf80c commit a531cef
Showing 20 changed files with 595 additions and 343 deletions.
3 changes: 1 addition & 2 deletions .gitignore
@@ -20,8 +20,7 @@ rails-frontend/tmp
# deployment. You may want to remove these gitignore entries for a real app.
terraform.tfvars
.terraform
*.tfstate
*.tfstate.backup
*.tfstate*

# Ignore OS X files
.DS_Store
28 changes: 17 additions & 11 deletions README.md
@@ -2,27 +2,33 @@

This repo contains the sample code for the talk [Infrastructure-as-code: running microservices on AWS with Docker,
Terraform, and ECS](http://www.ybrikman.com/writing/2016/03/31/infrastructure-as-code-microservices-aws-docker-terraform-ecs/).
It includes a couple sample Dockerized microservices and the Terraform code to deploy them on AWS:

It consists of:
![Architecture](/_docs/architecture.png)

Here's an overview of the code:

1. An example [sinatra-backend microservice](./sinatra-backend) that just returns the text "Hello, World". This app
includes a [Dockerfile](./sinatra-backend/Dockerfile) to package it as a Docker container.
2. An example [rails-frontend microservice](./rails-frontend) that makes an HTTP call to the sinatra-backend and

1. An example [rails-frontend microservice](./rails-frontend) that makes an HTTP call to the sinatra-backend and
renders the result as HTML. This app includes a [Dockerfile](./rails-frontend/Dockerfile) to package it as a Docker
container.
3. A [docker-compose.yml](./docker-compose.yml) file to deploy both Docker containers so you can see how the two

1. A [docker-compose.yml](./docker-compose.yml) file to deploy both Docker containers so you can see how the two
microservices work together in the development environment. To allow the services to talk to each other, we are
using [Docker Links](https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/) as a simple
"service discovery" mechanism.
4. [Terraform templates](./terraform-templates) to deploy both Docker containers on Amazon's

1. [Terraform configurations](./terraform-configurations) to deploy both Docker containers on Amazon's
[EC2 Container Service (ECS)](https://aws.amazon.com/ecs/) so you can see how the two microservices work together in
the production environment. To allow the services to talk to each other, we deploy an [Elastic Load Balancer
(ELB)](https://aws.amazon.com/elasticloadbalancing/) in front of each service and use Terraform to pass the ELB
URLs between services. We are using the same environment variables as Docker Links, so this acts as a simple
"service discovery" mechanism that works in both dev and prod.

**Note**: This repo is for demonstration purposes only and should NOT be used to run anything important. For
production-ready version of these templates and many other types of infrastructure (e.g. using a more robust service
production-ready version of this code and many other types of infrastructure (e.g. using a more robust service
discovery mechanism such as [Consul](https://www.consul.io/)), check out [Gruntwork](http://www.gruntwork.io/).

## How to run the microservices locally
@@ -45,20 +51,20 @@ lets you do iterative "make-a-change-and-refresh" style development.

## How to deploy the microservices to production

To deploy the microservices to your AWS account, see the [terraform-templates README](./terraform-templates).
To deploy the microservices to your AWS account, see the [terraform-configurations README](./terraform-configurations).

## How to use your own Docker images

By default, [docker-compose.yml](./docker-compose.yml) and the [terraform-templates](./terraform-templates) are using
the `gruntwork/rails-frontend` and `gruntwork/sinatra-backend` Docker images. These are images I pushed to the [Gruntwork Docker
Hub account](https://hub.docker.com/r/gruntwork/rails-example-app/) to make it easy for you to try this repo quickly.
Obviously, in the real world, you'll want to use your own images instead.
By default, [docker-compose.yml](./docker-compose.yml) and the [terraform-configurations](./terraform-configurations)
are using the `gruntwork/rails-frontend` and `gruntwork/sinatra-backend` Docker images. These are images I pushed to
the [Gruntwork Docker Hub account](https://hub.docker.com/r/gruntwork/rails-example-app/) to make it easy for you to
try this repo quickly. Obviously, in the real world, you'll want to use your own images instead.

Follow Docker's documentation to [create your own Docker
images](https://docs.docker.com/engine/userguide/containers/dockerimages/) and fill in the new image id and tag in:

1. `docker-compose.yml`: the `image` attribute for `rails_frontend` or `sinatra_backend`.
2. `terraform-templates/terraform.tfvars`: the `rails_frontend_image` and `rails_frontend_version` or
2. `terraform-configurations/terraform.tfvars`: the `rails_frontend_image` and `rails_frontend_version` or
`sinatra_backend_image` and `sinatra_backend_version` variables.
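For example, a hypothetical `terraform-configurations/terraform.tfvars` pointing at your own images might look like the following; the image names and tags are placeholders, not real published images:

```hcl
rails_frontend_image    = "my-dockerhub-user/rails-frontend"
rails_frontend_version  = "v1"
sinatra_backend_image   = "my-dockerhub-user/sinatra-backend"
sinatra_backend_version = "v1"
```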

## More info
Binary file added _docs/architecture.png
@@ -1,33 +1,54 @@
# Terraform templates
# Terraform configurations

This folder contains [Terraform](https://www.terraform.io/) templates to deploy [Docker](https://www.docker.com/)
This folder contains [Terraform](https://www.terraform.io/) configurations to deploy [Docker](https://www.docker.com/)
images of the [rails-frontend](../rails-frontend) and [sinatra-backend](../sinatra-backend) example microservices using
Amazon's [EC2 Container Service (ECS)](https://aws.amazon.com/ecs/).

![Architecture](/_docs/architecture.png)





## Quick start

**NOTE**: Following these instructions will deploy code into your AWS account, including four `t2.micro` instances and
two ELBs. All of this qualifies for the [AWS Free Tier](https://aws.amazon.com/free/), but if you've already used up
your credits, running this code may cost you money.


### Initial setup

1. Sign up for an [AWS account](https://aws.amazon.com/). If this is your first time using AWS Marketplace, head over
to the [ECS AMI Marketplace page](https://aws.amazon.com/marketplace/pp/B00U6QTYI2) and accept the terms of service.
1. Install [Terraform](https://www.terraform.io/). Minimum version 0.7.0.
1. `cd terraform-templates`
1. Open `vars.tf`, set the environment variables specified at the top of the file, and fill in any other variables that
don't have a default.
1. Install [Terraform](https://www.terraform.io/).
1. `cd terraform-configurations`
1. Open `vars.tf`, set the environment variables specified at the top of the file, and feel free to tweak any of the
other variables to your liking.
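One way to supply the values for variables without defaults is Terraform's `TF_VAR_` environment variable convention; a minimal sketch, assuming the `key_pair_name` variable used by the launch configuration (the value is a placeholder for your own EC2 key pair):

```shell
# Terraform reads variables named TF_VAR_<variable_name> from the
# environment, so you can supply values without editing vars.tf.
export TF_VAR_key_pair_name="my-ec2-key-pair"
```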


### Deploying

1. Configure your AWS credentials as [environment
variables](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment):

```
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
```
1. `terraform init`
1. `terraform plan`
1. If the plan looks good, run `terraform apply` to deploy the code into your AWS account.
1. Wait a few minutes for everything to deploy. You can monitor the state of the ECS cluster using the [ECS
Console](https://console.aws.amazon.com/ecs/home).
After `terraform apply` completes, it will output the URLs of the ELBs of the rails-frontend and sinatra-backend apps.

### Deploying new versions

Every time you want to deploy a new version of one of the microservices, you need to:
@@ -1,25 +1,26 @@
# Configure the AWS Provider
provider "aws" {
region = "${var.region}"
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN ECS CLUSTER
# ---------------------------------------------------------------------------------------------------------------------

# The ECS Cluster
resource "aws_ecs_cluster" "example_cluster" {
name = "${var.ecs_cluster_name}"
name = "${var.name}"
}

# The Auto Scaling Group that determines how many EC2 Instances we will be
# running
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY AN AUTO SCALING GROUP (ASG)
# Each EC2 Instance in the ASG will register as an ECS Cluster Instance.
# ---------------------------------------------------------------------------------------------------------------------

resource "aws_autoscaling_group" "ecs_cluster_instances" {
name = "ecs-cluster-instances"
min_size = 5
max_size = 5
name = "${var.name}"
min_size = "${var.size}"
max_size = "${var.size}"
launch_configuration = "${aws_launch_configuration.ecs_instance.name}"
vpc_zone_identifier = ["${var.ecs_cluster_subnet_ids}"]
vpc_zone_identifier = ["${var.subnet_ids}"]

tag {
key = "Name"
value = "ecs-cluster-instances"
value = "${var.name}"
propagate_at_launch = true
}
}
@@ -39,8 +40,8 @@ data "aws_ami" "ecs" {
# The launch configuration for each EC2 Instance that will run in the ECS
# Cluster
resource "aws_launch_configuration" "ecs_instance" {
name_prefix = "ecs-instance-"
instance_type = "t2.micro"
name_prefix = "${var.name}-"
instance_type = "${var.instance_type}"
key_name = "${var.key_pair_name}"
iam_instance_profile = "${aws_iam_instance_profile.ecs_instance.name}"
security_groups = ["${aws_security_group.ecs_instance.id}"]
@@ -50,7 +51,7 @@ resource "aws_launch_configuration" "ecs_instance" {
# to the right ECS cluster
user_data = <<EOF
#!/bin/bash
echo "ECS_CLUSTER=${var.ecs_cluster_name}" >> /etc/ecs/ecs.config
echo "ECS_CLUSTER=${var.name}" >> /etc/ecs/ecs.config
EOF

# Important note: whenever using a launch configuration with an auto scaling
@@ -66,21 +67,13 @@ EOF
}
}

# An IAM instance profile we can attach to an EC2 instance
resource "aws_iam_instance_profile" "ecs_instance" {
name = "ecs-instance"
roles = ["${aws_iam_role.ecs_instance.name}"]

# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN IAM ROLE FOR EACH INSTANCE IN THE CLUSTER
# We export the IAM role ID as an output variable so users of this module can attach custom policies.
# ---------------------------------------------------------------------------------------------------------------------

# An IAM role that we attach to the EC2 Instances in ECS.
resource "aws_iam_role" "ecs_instance" {
name = "ecs-instance"
name = "${var.name}"
assume_role_policy = "${data.aws_iam_policy_document.ecs_instance.json}"

# aws_iam_instance_profile.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
@@ -101,8 +94,24 @@ data "aws_iam_policy_document" "ecs_instance" {
}
}

# IAM policy we add to our EC2 Instance Role that allows an ECS Agent running
# on the EC2 Instance to communicate with the ECS cluster
# To attach an IAM Role to an EC2 Instance, you use an IAM Instance Profile
resource "aws_iam_instance_profile" "ecs_instance" {
name = "${var.name}"
roles = ["${aws_iam_role.ecs_instance.name}"]

# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}


# ---------------------------------------------------------------------------------------------------------------------
# ATTACH IAM POLICIES TO THE IAM ROLE
# The IAM policy allows an ECS Agent running on each EC2 Instance to communicate with the ECS scheduler.
# ---------------------------------------------------------------------------------------------------------------------

resource "aws_iam_role_policy" "ecs_cluster_permissions" {
name = "ecs-cluster-permissions"
role = "${aws_iam_role.ecs_instance.id}"
@@ -125,86 +134,39 @@ data "aws_iam_policy_document" "ecs_cluster_permissions" {
}
}

# Security group that controls what network traffic is allowed to go in and out of each EC2 instance in the cluster
# ---------------------------------------------------------------------------------------------------------------------
# CREATE A SECURITY GROUP THAT CONTROLS WHAT TRAFFIC CAN GO IN AND OUT OF THE CLUSTER
# Note that we only attach a few rules to this Security Group. However, we export the ID of the group as an output
# variable so users of this module can attach custom rules.
# ---------------------------------------------------------------------------------------------------------------------

resource "aws_security_group" "ecs_instance" {
name = "ecs-instance"
description = "Security group for the EC2 instances in the ECS cluster"
name = "${var.name}"
description = "Security group for the EC2 instances in the ECS cluster ${var.name}"
vpc_id = "${var.vpc_id}"

# Outbound Everything
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

# Inbound HTTP for the rails-frontend from anywhere
ingress {
from_port = "${var.rails_frontend_port}"
to_port = "${var.rails_frontend_port}"
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

# Inbound HTTP for the sinatra-backend from anywhere
ingress {
from_port = "${var.sinatra_backend_port}"
to_port = "${var.sinatra_backend_port}"
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

# Inbound SSH from anywhere
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

# aws_launch_configuration.ecs_instance sets create_before_destroy to true, which means every resource it depends on,
# including this one, must also set the create_before_destroy flag to true, or you'll get a cyclic dependency error.
lifecycle {
create_before_destroy = true
}
}

# An IAM Role that we attach to ECS Services. See the
# aws_aim_role_policy below to see what permissions this role has.
resource "aws_iam_role" "ecs_service_role" {
name = "ecs-service-role"
assume_role_policy = "${data.aws_iam_policy_document.ecs_service_role.json}"
}

data "aws_iam_policy_document" "ecs_service_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs.amazonaws.com"]
}
}
resource "aws_security_group_rule" "all_outbound_all" {
type = "egress"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}

# IAM Policy that allows an ECS Service to communicate with EC2 Instances.
resource "aws_iam_role_policy" "ecs_service_policy" {
name = "ecs-service-policy"
role = "${aws_iam_role.ecs_service_role.id}"
policy = "${data.aws_iam_policy_document.ecs_service_policy.json}"
resource "aws_security_group_rule" "all_inbound_ssh" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.allow_ssh_from_cidr_blocks}"]
security_group_id = "${aws_security_group.ecs_instance.id}"
}

data "aws_iam_policy_document" "ecs_service_policy" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"elasticloadbalancing:Describe*",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"ec2:Describe*",
"ec2:AuthorizeSecurityGroupIngress"
]
}
}
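Because the security group's ID is exported as an output variable, a caller could attach custom rules from outside the module. A hedged sketch of that usage — the module name, port, and CIDR block below are hypothetical:

```hcl
module "ecs_cluster" {
  source = "./ecs-cluster"
  # ... the module's input variables go here ...
}

# Attach a custom ingress rule to the cluster's security group via the
# module's exported security_group_id output (values are illustrative).
resource "aws_security_group_rule" "custom_inbound" {
  type              = "ingress"
  from_port         = 8080
  to_port           = 8080
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]
  security_group_id = "${module.ecs_cluster.security_group_id}"
}
```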
19 changes: 19 additions & 0 deletions terraform-configurations/ecs-cluster/outputs.tf
@@ -0,0 +1,19 @@
output "ecs_cluster_id" {
value = "${aws_ecs_cluster.example_cluster.id}"
}

output "security_group_id" {
value = "${aws_security_group.ecs_instance.id}"
}

output "iam_role_id" {
value = "${aws_iam_role.ecs_instance.id}"
}

output "iam_role_name" {
value = "${aws_iam_role.ecs_instance.name}"
}

output "asg_name" {
value = "${aws_autoscaling_group.ecs_cluster_instances.name}"
}
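Similarly, exporting `iam_role_id` lets users of this module attach custom IAM policies to the cluster instances' role. A sketch under the assumption that the module is instantiated as `ecs_cluster`; the policy name, actions, and bucket ARN are illustrative:

```hcl
resource "aws_iam_role_policy" "custom_permissions" {
  name   = "custom-permissions"
  role   = "${module.ecs_cluster.iam_role_id}"
  policy = "${data.aws_iam_policy_document.custom_permissions.json}"
}

data "aws_iam_policy_document" "custom_permissions" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my-app-config/*"]
  }
}
```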
