Commit: Fix MD errors

iblancasa committed Apr 27, 2017
1 parent 9b92c02 commit d2c2c08

Showing 11 changed files with 35 additions and 37 deletions.
12 changes: 6 additions & 6 deletions networking/concepts/03-linux-networking.md
@@ -1,23 +1,23 @@

-##<a name="drivers"></a><a name="linuxnetworking"></a>Linux Network Fundamentals
+## <a name="drivers"></a><a name="linuxnetworking"></a>Linux Network Fundamentals

The Linux kernel features an extremely mature and performant implementation of the TCP/IP stack (in addition to other native kernel features like DNS and VXLAN). Docker networking uses the kernel's networking stack as low level primitives to create higher level network drivers. Simply put, _Docker networking <b>is</b> Linux networking._

This implementation of existing Linux kernel features ensures high performance and robustness. Most importantly, it provides portability across many distributions and versions which enhances application portability.

There are several Linux networking building blocks which Docker uses to implement its built-in CNM network drivers. This list includes **Linux bridges**, **network namespaces**, **veth pairs**, and **iptables**. The combination of these tools implemented as network drivers provide the forwarding rules, network segmentation, and management tools for complex network policy.

-###<a name="linuxbridge"></a>The Linux Bridge
+### <a name="linuxbridge"></a>The Linux Bridge
A **Linux bridge** is a Layer 2 device that is the virtual implementation of a physical switch inside the Linux kernel. It forwards traffic based on MAC addresses which it learns dynamically by inspecting traffic. Linux bridges are used extensively in many of the Docker network drivers. A Linux bridge is not to be confused with the `bridge` Docker network driver which is a higher level implementation of the Linux bridge.
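
As a quick illustration of the underlying primitive (independent of Docker), a Linux bridge can be created and inspected with the `iproute2` tools. This is only a sketch; the interface name `eth1` is a placeholder for whatever interface you want to attach:

```bash
# Create a bridge and bring it up (requires root)
ip link add name br0 type bridge
ip link set dev br0 up

# Attach an interface to the bridge (eth1 is a placeholder)
ip link set dev eth1 master br0

# List the bridge's ports and the MAC addresses it has learned
ip link show master br0
bridge fdb show br br0
```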


-###Network Namespaces
+### Network Namespaces
A Linux **network namespace** is an isolated network stack in the kernel with its own interfaces, routes, and firewall rules. It is a security aspect of containers and Linux, used to isolate containers. In networking terminology, they are akin to a VRF that segments the network control and data plane inside the host. Network namespaces ensure that two containers on the same host will not be able to communicate with each other or even the host itself unless configured to do so via Docker networks. Typically, CNM network drivers implement separate namespaces for each container. However, containers can share the same network namespace or even be a part of the host's network namespace. The host network namespace contains the host interfaces and host routing table. This network namespace is called the global network namespace.
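
To see the isolation a network namespace provides outside of Docker, you can create one by hand with `ip netns` (a minimal sketch; the namespace name `demo` is arbitrary):

```bash
# Create an isolated network namespace
ip netns add demo

# Inside it there is only a (down) loopback interface and an empty routing table
ip netns exec demo ip addr show
ip netns exec demo ip route show

# Clean up
ip netns delete demo
```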

-###Virtual Ethernet Devices
+### Virtual Ethernet Devices
A **virtual ethernet device** or **veth** is a Linux networking interface that acts as a connecting wire between two network namespaces. A veth is a full duplex link that has a single interface in each namespace. Traffic in one interface is directed out the other interface. Docker network drivers utilize veths to provide explicit connections between namespaces when Docker networks are created. When a container is attached to a Docker network, one end of the veth is placed inside the container (usually seen as the `ethX` interface) while the other is attached to the Docker network.
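
A rough sketch of what a network driver does under the hood, using plain `iproute2` commands (namespace, interface names, and addresses below are arbitrary examples, not Docker's actual values):

```bash
ip netns add demo

# Create the veth pair: one end stays in the global namespace, the other moves into "demo"
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo

# Address and bring up both ends
ip addr add 172.31.100.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 172.31.100.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up

# Traffic entering one end of the "wire" exits the other
ping -c 1 172.31.100.2
```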

-###iptables
+### iptables
**`iptables`** is the native packet filtering system that has been a part of the Linux kernel since version 2.4. It's a feature-rich L3/L4 firewall that provides rule chains for packet marking, masquerading, and dropping. The built-in Docker network drivers utilize `iptables` extensively to segment network traffic, provide host port mapping, and mark traffic for load balancing decisions.
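
You can inspect the rules Docker installs on a host with standard `iptables` commands; the exact output varies with Docker version, networks, and published ports:

```bash
# NAT rules: MASQUERADE for outbound container traffic, DNAT for published ports
sudo iptables -t nat -L -n -v

# Filter rules: the FORWARD chain jumps into Docker-managed chains that
# isolate networks from each other and admit traffic to published ports
sudo iptables -L FORWARD -n -v
```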

-Next: **[Docker Network Control Plane](04-docker-network-cp.md)**
+Next: **[Docker Network Control Plane](04-docker-network-cp.md)**
6 changes: 2 additions & 4 deletions networking/concepts/04-docker-network-cp.md
@@ -1,11 +1,9 @@
-##<a name="controlplane"></a>Docker Network Control Plane
+## <a name="controlplane"></a>Docker Network Control Plane
Docker's distributed network control plane manages the state of Swarm-scoped Docker networks in addition to propagating control plane data. It is a built-in capability of Docker Swarm clusters and does not require any extra components such as an external KV store. The control plane uses a [Gossip](https://en.wikipedia.org/wiki/Gossip_protocol) protocol based on [SWIM](https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf) to propagate network state information and topology across Docker container clusters. The Gossip protocol is highly efficient at reaching eventual consistency within the cluster while maintaining constant rates of message size, failure detection times, and convergence time across very large-scale clusters. This ensures that the network is able to scale across many nodes without introducing scaling issues such as slow convergence or false positive node failures.

The control plane is highly secure, providing confidentiality, integrity, and authentication through encrypted channels. It is also scoped per network which greatly reduces the updates that any given host will receive.

-<span class="float-right">
![Docker Network Control Plane](./img/gossip.png)
-</span>

It is composed of several components that work together to achieve fast convergence across large scale networks. The distributed nature of the control plane ensures that cluster controller failures don't affect network performance.

@@ -19,4 +17,4 @@ The Docker network control plane components are as follows:

> The Docker Network Control Plane is a component of [Swarm](https://docs.docker.com/engine/swarm/) and requires a Swarm cluster to operate.
-Next: **[Docker Bridge Network Driver Architecture](05-bridge-networks.md)**
+Next: **[Docker Bridge Network Driver Architecture](05-bridge-networks.md)**
10 changes: 5 additions & 5 deletions networking/concepts/05-bridge-networks.md
@@ -1,8 +1,8 @@
-##<a name="drivers"></a>Docker Bridge Network Driver Architecture
+## <a name="drivers"></a>Docker Bridge Network Driver Architecture

This section explains the default Docker bridge network as well as user-defined bridge networks.

-###Default Docker Bridge Network
+### Default Docker Bridge Network
On any host running Docker Engine, there will, by default, be a local Docker network named `bridge`. This network is created using a `bridge` network driver which instantiates a Linux bridge called `docker0`. This may sound confusing.

- `bridge` is the name of the Docker network
@@ -57,7 +57,7 @@ By default `bridge` will be assigned one subnet from the ranges 172.[17-31].0.0/


-###<a name="userdefined"></a>User-Defined Bridge Networks
+### <a name="userdefined"></a>User-Defined Bridge Networks
In addition to the default networks, users can create their own networks called **user-defined networks** of any network driver type. In the case of user-defined `bridge` networks, Docker will create a new Linux bridge on the host. Unlike the default `bridge` network, user-defined networks support manual IP address and subnet assignment. If an assignment isn't given, then Docker's default IPAM driver will assign the next subnet available in the private IP space.
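
For example, a user-defined bridge network with an explicit subnet and a container given a static address might look like this (a sketch; the network name, subnet, container name, and image are arbitrary choices):

```bash
# Create a user-defined bridge network with a manually assigned subnet and gateway
docker network create -d bridge \
  --subnet 10.0.100.0/24 \
  --gateway 10.0.100.1 \
  my_bridge

# Attach a container with a static IP from that subnet
docker run -d --name web --network my_bridge --ip 10.0.100.10 nginx
```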

![User-Defined Bridge Network](./img/bridge2.png)
@@ -101,7 +101,7 @@ $ ip link
...
```

-###External and Internal Connectivity
+### External and Internal Connectivity
By default all containers on the same `bridge` driver network will have connectivity with each other without extra configuration. This is an aspect of most types of Docker networks. By virtue of the Docker network the containers are able to communicate across their network namespaces and (for multi-host drivers) across external networks as well. **Communication between different Docker networks is firewalled by default.** This is a fundamental security aspect that allows us to provide network policy using Docker networks. For example, in the figure above containers `c2` and `c3` have reachability but they cannot reach `c1`.

Docker `bridge` networks are not exposed on the external (underlay) host network by default. Container interfaces are given IPs on the private subnets of the bridge network. Containers communicating with the external network are port mapped or masqueraded so that their traffic uses an IP address of the host. The example below shows outbound and inbound container traffic passing between the host interface and a user-defined `bridge` network.
@@ -117,4 +117,4 @@ This previous diagram shows how port mapping and masquerading takes place on a h
Exposed ports can be configured using `--publish` in the Docker CLI or UCP. The diagram shows an exposed port with the container port `80` mapped to the host interface on port `5000`. The exposed container would be advertised at `192.168.0.2:5000`, and all traffic going to this interface:port would be sent to the container at `10.0.0.2:80`.
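
A minimal example of the mapping described above, publishing container port `80` on host port `5000` (the container name and the `nginx` image are just convenient stand-ins):

```bash
# -p <host-port>:<container-port>
docker run -d --name web -p 5000:80 nginx

# Confirm the mapping; expect output similar to: 80/tcp -> 0.0.0.0:5000
docker port web
```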


-Next: **[Overlay Driver Network Architecture](06-overlay-networks.md)**
+Next: **[Overlay Driver Network Architecture](06-overlay-networks.md)**
4 changes: 2 additions & 2 deletions networking/concepts/07-macvlan.md
@@ -28,7 +28,7 @@ PING 127.0.0.1 (127.0.0.1): 56 data bytes

As you can see in this diagram, `c1` and `c2` are attached to the MACVLAN network called `macvlan`, which is bound to `eth0` on the host.

-###VLAN Trunking with MACVLAN
+### VLAN Trunking with MACVLAN

Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes in order to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge, and the bridge then gets the IP address. The `macvlan` driver completely manages sub-interfaces and other components of the MACVLAN network through creation, destruction, and host reboots.
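
As a sketch, creating a MACVLAN network on an 802.1q sub-interface lets the driver create and manage the sub-interface itself; the parent interface, VLAN ID, and subnet below are placeholders for your environment:

```bash
# The macvlan driver creates the 802.1q sub-interface eth0.10 if it does not already exist
docker network create -d macvlan \
  --subnet 192.168.10.0/24 \
  --gateway 192.168.10.1 \
  -o parent=eth0.10 \
  macvlan10
```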

@@ -55,4 +55,4 @@ In the preceding configuration we've created two separate networks using the `ma

> Because multiple MAC addresses live behind a single host interface, you might need to enable promiscuous mode on the interface, depending on the NIC's support for MAC filtering.
-Next: **[Host (Native) Network Driver](08-host-networking.md)**
+Next: **[Host (Native) Network Driver](08-host-networking.md)**
6 changes: 3 additions & 3 deletions networking/concepts/08-host-networking.md
@@ -1,5 +1,5 @@

-##<a name="hostdriver"></a>Host (Native) Network Driver
+## <a name="hostdriver"></a>Host (Native) Network Driver

The `host` network driver connects a container directly to the host networking stack. Containers using the `host` driver reside in the same network namespace as the host itself. Thus, containers will have native bare-metal network performance at the cost of namespace isolation.
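
For instance, a container started with `--net=host` sees exactly the host's interfaces; a quick check using a throwaway `alpine` container (a sketch, not the only way to verify this):

```bash
# The container shares the host's network namespace, so this lists the host's interfaces
docker run --rm --net=host alpine ip addr show
```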

@@ -53,7 +53,7 @@ Every container using the `host` network will all share the same host interfaces

Full host access and no automated policy management may make the `host` driver a difficult fit as a general network driver. However, `host` does have some interesting properties that may be applicable for use cases such as ultra high performance applications, troubleshooting, or monitoring.

-##<a name="nonedriver"></a>None (Isolated) Network Driver
+## <a name="nonedriver"></a>None (Isolated) Network Driver

Similar to the `host` network driver, the `none` network driver is essentially an unmanaged networking option. Docker Engine will not create interfaces inside the container, establish port mapping, or install routes for connectivity. A container using `--net=none` will be completely isolated from other containers and the host. The network administrator or external tools are responsible for providing this plumbing. In the following example we see that a container using `none` only has a loopback interface and no other interfaces.

@@ -75,4 +75,4 @@ Unlike the `host` driver, the `none` driver will create a separate namespace for

> Containers using `--net=none` or `--net=host` cannot be connected to any other Docker networks.
-Next: **[Physical Network Design Requirements](09-physical-networking.md)**
+Next: **[Physical Network Design Requirements](09-physical-networking.md)**
6 changes: 3 additions & 3 deletions networking/concepts/09-physical-networking.md
@@ -1,4 +1,4 @@
-##<a name="requirements"></a>Physical Network Design Requirements
+## <a name="requirements"></a>Physical Network Design Requirements
Docker Datacenter and Docker networking are designed to run over common data center network infrastructure and topologies. Its centralized controller and fault-tolerant cluster guarantee compatibility across a wide range of network environments. The components that provide networking functionality (network provisioning, MAC learning, overlay encryption) are either a part of Docker Engine, UCP, or the Linux kernel itself. No extra components or special networking features are required to run any of the built-in Docker networking drivers.

More specifically, the Docker built-in network drivers have NO requirements for:
@@ -11,7 +11,7 @@ More specifically, the Docker built-in network drivers have NO requirements for:

This is in line with the Container Networking Model which promotes application portability across all environments while still achieving the performance and policy required of applications.

-##<a name="sd"></a>Service Discovery Design Considerations
+## <a name="sd"></a>Service Discovery Design Considerations

Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and `tasks` running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks. Each Docker container (or `task` in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server. Docker Engine then checks if the DNS query belongs to a container or `service` on the network(s) that the requesting container belongs to. If it does, then Docker Engine looks up the IP address that matches the container, `task`, or `service`'s **name** in its key-value store and returns that IP or the `service`'s Virtual IP (VIP) back to the requester.
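
A small demonstration of the embedded DNS server on a single engine (the network and container names here are arbitrary):

```bash
docker network create -d bridge app_net
docker run -d --name db --network app_net redis

# The name "db" resolves via Docker's embedded DNS (listening at 127.0.0.11 inside the container)
docker run --rm --network app_net alpine ping -c 1 db
```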

@@ -23,7 +23,7 @@ If the destination container or `service` does not belong on same network(s) as

In this example there is a service called `myservice` that consists of two containers. A second service (`client`) exists on the same network. The `client` executes two `curl` operations for `docker.com` and `myservice`. These are the resulting actions:


- DNS queries are initiated by `client` for `docker.com` and `myservice`.
- The container's built-in resolver intercepts the DNS queries on `127.0.0.11:53` and sends them to Docker Engine's DNS server.
- `myservice` resolves to the Virtual IP (VIP) of that service which is internally load balanced to the individual task IP addresses. Container names will be resolved as well, albeit directly to their IP address.
6 changes: 3 additions & 3 deletions networking/concepts/10-load-balancing.md
@@ -1,8 +1,8 @@
-##<a name="lb"></a>Load Balancing Design Considerations
+## <a name="lb"></a>Load Balancing Design Considerations

Load balancing is a major requirement in modern, distributed applications. Docker Swarm mode, introduced in Docker 1.12, comes with native internal and external load balancing functionality that utilizes both `iptables` and `ipvs`, a transport-layer load balancer inside the Linux kernel.

-###Internal Load Balancing
+### Internal Load Balancing
When services are created in a Docker Swarm cluster, they are automatically assigned a Virtual IP (VIP) that is part of the service's network. The VIP is returned when resolving the service's name. Traffic to that VIP will be automatically sent to all healthy tasks of that service across the overlay network. This approach avoids any client-side load balancing because only a single IP is returned to the client. Docker takes care of routing and equally distributing the traffic across the healthy service tasks.
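
A sketch of observing this on a Swarm cluster; the service and network names are arbitrary, and it assumes Swarm mode plus an attachable overlay network (available on recent engine versions):

```bash
docker network create -d overlay --attachable demo_net
docker service create --name myservice --replicas 2 --network demo_net nginx

# The service name resolves to a single Virtual IP...
docker run --rm --network demo_net alpine nslookup myservice

# ...while tasks.<service> returns the individual task IPs that sit behind the VIP
docker run --rm --network demo_net alpine nslookup tasks.myservice
```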


@@ -57,4 +57,4 @@ This diagram illustrates how the Routing Mesh works.
- Traffic destined for the `app` can enter on any host. In this case the external LB sends the traffic to a host without a service replica.
- The kernel's IPVS load balancer redirects traffic on the `ingress` overlay network to a healthy service replica.
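
A hedged sketch of exercising the routing mesh; the service name, replica count, image, and node address below are placeholders:

```bash
# Publishing a port attaches the service to the ingress network on every Swarm node
docker service create --name app --replicas 2 --publish 8080:80 nginx

# A request to ANY node's IP on port 8080 reaches a healthy replica,
# even if that node runs no replica itself
curl http://<any-node-ip>:8080
```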

-Next: **[Network Security and Encryption Design Considerations](11-security.md)**
+Next: **[Network Security and Encryption Design Considerations](11-security.md)**
10 changes: 5 additions & 5 deletions networking/concepts/11-security.md
@@ -1,16 +1,16 @@
-##<a name="security"></a>Network Security and Encryption Design Considerations
+## <a name="security"></a>Network Security and Encryption Design Considerations

Network security is a top-of-mind consideration when designing and implementing containerized workloads with Docker. In this section, we will go over three key design considerations that are typically raised around Docker network security and how you can utilize Docker features and best practices to address them.

-###Container Networking Segmentation
+### Container Networking Segmentation

Docker allows you to create an isolated network per application using the `overlay` driver. By default, different Docker networks are firewalled from each other. This approach provides true network isolation at Layer 3. No malicious container can communicate with your application's containers unless it is on the same network or your application's containers expose services on a host port. Therefore, creating a network for each application adds another layer of security. The principle of "Defense in Depth" still recommends application-level security to protect at L3 and L7.

-###Securing the Control Plane
+### Securing the Control Plane

Docker Swarm comes with integrated PKI. All managers and nodes in the Swarm have a cryptographically signed identity in the form of a signed certificate. All manager-to-manager and manager-to-node control communication is secured out of the box with TLS. There is no need to generate certificates externally or set up any CAs manually to get end-to-end control plane traffic secured in Docker Swarm mode. Certificates are periodically and automatically rotated.

-###Securing the Data Plane
+### Securing the Data Plane

In Docker Swarm mode the data path (i.e., application traffic) can be encrypted out of the box. This feature uses IPSec tunnels to encrypt network traffic as it leaves the source container and decrypts it as it enters the destination container. This ensures that your application traffic is highly secure when it's in transit, regardless of the underlying networks. In a hybrid, multi-tenant, or multi-cloud environment, it is crucial to ensure data is secure as it traverses networks you might not have control over.
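
As a minimal sketch, data plane encryption is requested per network at creation time (the network name is arbitrary; this must be run against a Swarm manager):

```bash
# Application traffic on this overlay network is carried inside IPSec tunnels
docker network create -d overlay --opt encrypted secure_net
```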

@@ -22,4 +22,4 @@ This feature works with the `overlay` driver in Swarm mode only and can be enabl

The Swarm leader periodically regenerates a symmetric key and distributes it securely to all cluster nodes. This key is used by IPSec to encrypt and decrypt data plane traffic. The encryption is implemented via IPSec in host-to-host transport mode using AES-GCM.

-Next: **[IP Address Management](12-ipaddress-management.md)**
+Next: **[IP Address Management](12-ipaddress-management.md)**
4 changes: 2 additions & 2 deletions networking/concepts/12-ipaddress-management.md
@@ -1,4 +1,4 @@
-##<a name="ipam"></a>IP Address Management
+## <a name="ipam"></a>IP Address Management

The Container Networking Model (CNM) provides flexibility in how IP addresses are managed. There are two methods for IP address management.

@@ -9,4 +9,4 @@ Manual configuration of container IP addresses and network subnets can be done us

Subnet size and design are largely dependent on a given application and the specific network driver. IP address space design is covered in more depth for each [Network Deployment Model](#models) in the next section. The uses of port mapping, overlays, and MACVLAN all have implications for how IP addressing is arranged. In general, container addressing falls into two buckets. Internal container networks (bridge and overlay) address containers with IP addresses that are not routable on the physical network by default. MACVLAN networks provide IP addresses to containers that are on the subnet of the physical network. Thus, traffic from container interfaces can be routable on the physical network. It is important to note that subnets for internal networks (bridge, overlay) should not conflict with the IP space of the physical underlay network. Overlapping address space can cause traffic to not reach its destination.

-Next: **[Network Troubleshooting](13-troubleshooting.md)**
+Next: **[Network Troubleshooting](13-troubleshooting.md)**