Added latest docs from Editions AWS & Azure (docker#887)

* Added Docker for AWS and Azure and moved navigation. Signed-off-by: French Ben <[email protected]>
* Fixed image links. Signed-off-by: French Ben <[email protected]>

---
description: Deploying Apps on Docker for AWS
keywords: aws, amazon, iaas, deploy
title: Deploy your app on Docker for AWS
---

## Connecting to your manager nodes

This section walks you through connecting to your installation and deploying
applications. Instructions are included for both AWS and Azure, so be sure to
follow the instructions for the cloud provider of your choice in each section.

First, obtain the public IP address of a manager node. Any manager node can be
used to administer the swarm.

##### Manager public IP on AWS

Once you've deployed Docker on AWS, go to the "Outputs" tab for the stack in
CloudFormation.

The "Managers" output is a URL you can use to see the available manager nodes of
the swarm in your AWS console. On that page, you can see the "Public IP" of each
manager node in the table, or on the "Description" tab if you click on the
instance.

![](/img/managers.png)

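If you prefer the command line, here is a rough equivalent. This is a sketch that assumes the AWS CLI is configured and relies on the `aws:cloudformation:stack-name` tag that CloudFormation applies to the instances it creates; replace `<stack-name>` with the name of your stack. It lists the public IP addresses of every running instance in the stack:

```
$ aws ec2 describe-instances \
    --filters "Name=tag:aws:cloudformation:stack-name,Values=<stack-name>" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" \
    --output text
```
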
## Connecting via SSH

#### Manager nodes

Obtain the public IP and/or port for the manager node as instructed above, and
use the provided SSH key to begin administering your swarm:

    $ ssh -i <path-to-ssh-key> docker@<ssh-host>
    Welcome to Docker!

Once you are logged in to the container, you can run Docker commands on the swarm:

    $ docker info
    $ docker node ls

You can also tunnel the Docker socket over SSH to remotely run commands on the cluster (requires [OpenSSH 6.7](https://lwn.net/Articles/609321/) or later):

    $ ssh -NL localhost:2374:/var/run/docker.sock docker@<ssh-host> &
    $ docker -H localhost:2374 info

If you don't want to pass `-H` on every command when using the tunnel, you can set the `DOCKER_HOST` environment variable to point to the local end of the tunnel.

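For example, with the tunnel from the previous step still running, you can point the client at it for the rest of the shell session:

    # uses the SSH tunnel opened above on localhost:2374
    $ export DOCKER_HOST=localhost:2374
    $ docker info
    $ docker node ls
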
### Worker nodes

As of Beta 13, the worker nodes also have SSH enabled when connecting from
manager nodes. SSH access to the worker nodes is not possible from the public
Internet. To access the worker nodes, you need to first connect to a manager
node (see above).

On the manager node you can then `ssh` to the worker node over the private
network. Make sure you have SSH agent forwarding enabled (see below). If you run
the `docker node ls` command you can see the full list of nodes in your swarm.
You can then `ssh docker@<worker-host>` to get access to that node.

##### AWS

Use the `HOSTNAME` reported in `docker node ls` directly.

```
$ docker node ls
ID                           HOSTNAME                                     STATUS  AVAILABILITY  MANAGER STATUS
a3d4vdn9b277p7bszd0lz8grp *  ip-172-31-31-40.us-east-2.compute.internal   Ready   Active        Reachable
...
$ ssh docker@ip-172-31-31-40.us-east-2.compute.internal
```

##### Using SSH agent forwarding

SSH agent forwarding allows you to forward your SSH keys along when connecting from one node to another. This eliminates the need to install your private key on every node you might want to connect from.

You can use this feature to SSH into worker nodes from a manager node without
installing keys directly on the manager.

If you haven't added your SSH key to the `ssh-agent`, you will need to do this first.

To see the keys already in the agent, run:

```
$ ssh-add -L
```

If you don't see your key, add it like this:

```
$ ssh-add ~/.ssh/your_key
```

On Mac OS X, the `ssh-agent` forgets this key once it is restarted, but you can import your SSH key into your Keychain so that it survives restarts:

```
$ ssh-add -K ~/.ssh/your_key
```

You can then enable SSH agent forwarding per-session using the `-A` flag for the `ssh` command.

Connecting to the manager:

```
$ ssh -A docker@<manager ip>
```

To always have it turned on for a given host, you can edit your SSH config file
(`/etc/ssh_config`, `~/.ssh/config`, etc.) to add the `ForwardAgent yes` option.

Example configuration:

```
Host manager0
  HostName <manager ip>
  ForwardAgent yes
```

To SSH into the manager with the above settings:

```
$ ssh docker@manager0
```

## Running apps

You can now start creating containers and services.

    $ docker run hello-world

You can run websites too. Ports exposed with `-p` are automatically exposed through the platform load balancer:

    $ docker service create --name nginx -p 80:80 nginx

Once the service is up, find the `DefaultDNSTarget` output in either the AWS or Azure portal to access the site.

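On AWS, you can also read that output from the command line. A minimal sketch, assuming the AWS CLI is configured and `<stack-name>` is the name of your CloudFormation stack:

```
$ aws cloudformation describe-stacks \
    --stack-name <stack-name> \
    --query "Stacks[0].Outputs[?OutputKey=='DefaultDNSTarget'].OutputValue" \
    --output text
```
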
### Execute docker commands in all swarm nodes

There are cases (such as installing a volume plugin) where a docker command needs to be executed on all the nodes across the cluster. You can use the `swarm-exec` tool to achieve that.

Usage: `swarm-exec {Docker command}`

The following example installs a test plugin on all the nodes in the cluster:

Example: `swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin`

This tool internally makes use of a Docker global-mode service that runs a task on each of the nodes in the cluster. Each task in turn executes your docker command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node and the docker command is run there automatically.

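As a rough illustration of that idea (this is not the actual `swarm-exec` implementation; the service name is a placeholder), a global-mode service can run a one-shot command on every node by talking to each node's Docker socket:

```
$ docker service create \
    --mode global \
    --restart-condition none \
    --name plugin-install-example \
    --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
    docker \
    docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin
```

Because the service is global, the swarm schedules one task on every current node and on any node that joins later.
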
### Distributed Application Bundles

To deploy complex multi-container apps, you can use [distributed application bundles](https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md). You can either run `docker deploy` to deploy a bundle on your machine over an SSH tunnel, or copy the bundle (for example using `scp`) to a manager node, SSH into the manager, and then run `docker deploy` there (if you have multiple managers, make sure your session is on the one that has the bundle file).

A good sample app for testing application bundles is the [Docker voting app](https://github.com/docker/example-voting-app).

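As a sketch of the workflow, assuming Docker Compose 1.8 or later and an engine with experimental features enabled (the exact `docker deploy` invocation can vary between experimental releases):

```
$ git clone https://github.com/docker/example-voting-app.git
$ cd example-voting-app
$ docker-compose bundle            # writes examplevotingapp.dab; locally built services may need `docker-compose push` first
$ docker deploy examplevotingapp   # stack name matches the bundle file name
```
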
By default, apps deployed with bundles do not have ports publicly exposed. Update the port mappings for services, and Docker automatically wires up the underlying platform load balancers:

    docker service update --publish-add 80:80 <example-service>

### Images in private repos

To create swarm services using images in private repos, first make sure you're authenticated and have access to the private repo, then create the service with the `--with-registry-auth` flag (the example below assumes you're using Docker Hub):

    docker login
    ...
    docker service create --with-registry-auth user/private-repo
    ...

This causes swarm to cache the registry credentials and use the cached credentials when creating containers for the service.

---
description: Frequently asked questions
keywords: aws faqs
title: Docker for AWS Frequently asked questions (FAQ)
---

## Can I use my own AMI?

No, at this time we only support the default Docker for AWS AMI.

## How to use Docker for AWS with an AWS account in an EC2-Classic region

If you have an AWS account that was created before **December 4th, 2013**, you have what is known as an **EC2-Classic** account in regions where you have previously deployed resources. **EC2-Classic** accounts don't have default VPCs or the associated subnets, etc. This causes a problem when using our CloudFormation template, because we use the [Fn::GetAZs](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html) function to determine which availability zones you have access to. In a region where you have **EC2-Classic**, this function returns all availability zones for the region, even ones you don't have access to. With an **EC2-VPC** account, it returns only the availability zones you have access to.

This causes an error like the following:

> "Value (us-east-1a) for parameter availabilityZone is invalid. Subnets can currently only be created in the following availability zones: us-east-1d, us-east-1c, us-east-1b, us-east-1e."

You will see this error if you have an **EC2-Classic** account and don't have access to the `a` and `b` availability zones for that region.

There isn't anything we can do right now to fix this issue. We have contacted Amazon, and we are hoping they will provide us with a way to determine whether an account is **EC2-Classic** or **EC2-VPC**, so we can act accordingly.

#### How to tell if you have this issue

This AWS documentation page describes how you can tell if you have EC2-Classic, EC2-VPC, or both: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html

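You can also check from the command line. A minimal sketch, assuming the AWS CLI is configured; if the returned values include `EC2`, the account has EC2-Classic in that region:

```
$ aws ec2 describe-account-attributes \
    --attribute-names supported-platforms \
    --region us-east-1
```
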
#### How to fix:

There are a few workarounds that you can try to get Docker for AWS up and running:

1. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`, so try another region: `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions are more than likely set up with **EC2-VPC**, and you will no longer have this issue.
2. Create a new AWS account. All new accounts are set up using **EC2-VPC** and will not have this problem.
3. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, see the answer to **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** at https://aws.amazon.com/vpc/faqs/#Default_VPCs

#### Helpful links:

- http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-platforms.html
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html
- https://aws.amazon.com/vpc/faqs/#Default_VPCs
- https://aws.amazon.com/blogs/aws/amazon-ec2-update-virtual-private-clouds-for-everyone/

## Can I use my existing VPC?

Not at this time, but it is on our roadmap for future releases.

## Which AWS regions will this work with?

Docker for AWS should work with all regions except AWS China, which is a little different from the other regions.

## How many Availability Zones does Docker for AWS use?

All of Amazon's regions have at least 2 AZs, and some have more. To make sure Docker for AWS works in all regions, only 2 AZs are used even if more are available.

## What do I do if I get "KeyPair error" on AWS?

As part of the prerequisites, you need to have an SSH key pair uploaded to the AWS region you are trying to deploy to.
For more information about adding an SSH key pair to your account, please refer to the [Amazon EC2 Key Pairs docs](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).

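As a minimal sketch using the AWS CLI (the key name, key path, and region below are placeholders):

```
# On AWS CLI v2, use fileb:// instead of file:// for the public key material.
$ aws ec2 import-key-pair \
    --key-name docker-for-aws \
    --public-key-material file://~/.ssh/id_rsa.pub \
    --region us-east-2
```
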
## I have a problem/bug, where do I report it?

Send an email to <[email protected]> or post to the [Docker for AWS](https://github.com/docker/for-aws) GitHub repository.

In AWS, if your stack is misbehaving, please run the following diagnostic tool from one of the managers. It collects your docker logs and sends them to Docker:

```
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```

_Please note that your output will be slightly different from the above, depending on your swarm configuration._

## Analytics

The beta versions of Docker for AWS and Azure send anonymized analytics to Docker. These analytics are used to monitor beta adoption and are critical to improving Docker for AWS and Azure.

## How do I run administrative commands?

By default, when you SSH into a manager you are logged in as the regular user `docker`. It is possible, however, to run commands with elevated privileges by using `sudo`.
For example, to ping one of the nodes after finding its IP via the Azure or AWS portal (e.g. 10.0.0.4), you could run:

```
$ sudo ping 10.0.0.4
```

Note that access to Docker for AWS and Azure happens through a shell container that itself runs on Docker.
