
Commit

Merge pull request openshift#14991 from maxwelldb/modularizing-arch-content

Initial add of modularized arch guide content
kalexand-rh authored May 22, 2019
2 parents 9f7b6e1 + 6e1b894 commit b5f5d1d
Showing 43 changed files with 1,101 additions and 10 deletions.
4 changes: 4 additions & 0 deletions _topic_map.yml
@@ -50,11 +50,15 @@ Topics:
- Name: Product architecture
File: architecture
Distros: openshift-enterprise,openshift-origin,openshift-dedicated
- Name: Understanding OpenShift development
File: understanding-development
- Name: Abstraction layers and topology
File: architecture-topology
- Name: Installation and update
File: architecture-installation
Distros: openshift-enterprise,openshift-origin
- Name: The control plane
File: control-plane
- Name: Operators in OpenShift Container Platform
File: architecture-operators
Distros: openshift-enterprise,openshift-origin
8 changes: 8 additions & 0 deletions architecture/architecture.adoc
@@ -11,6 +11,14 @@ toc::[]
//becomes available.
//====
include::modules/platform-introduction.adoc[leveloffset=+1]
include::modules/kubernetes-introduction.adoc[leveloffset=+2]
include::modules/container-application-benefits.adoc[leveloffset=+2]
include::modules/platform-benefits.adoc[leveloffset=+2]
include::modules/architecture-overview.adoc[leveloffset=+1]
include::modules/architecture-components.adoc[leveloffset=+1]
16 changes: 16 additions & 0 deletions architecture/control-plane.adoc
@@ -0,0 +1,16 @@
[id="control-plane"]
= The {product-title} control plane
include::modules/common-attributes.adoc[]
:context: control-plane
toc::[]

include::modules/understanding-control-plane.adoc[leveloffset=+1]
include::modules/understanding-node-roles.adoc[leveloffset=+2]
include::modules/understanding-workers-masters.adoc[leveloffset=+2]
include::modules/defining-workers.adoc[leveloffset=+3]
include::modules/defining-masters.adoc[leveloffset=+3]
include::modules/understanding-operators.adoc[leveloffset=+2]
include::modules/understanding-cluster-version-operator.adoc[leveloffset=+3]
include::modules/understanding-machine-config-operator.adoc[leveloffset=+3]
include::modules/digging-into-machine-config.adoc[leveloffset=+3]
include::modules/looking-inside-nodes.adoc[leveloffset=+3]
17 changes: 17 additions & 0 deletions architecture/introduction-architecture.adoc
@@ -0,0 +1,17 @@
// [id="architecture"]
// = {product-title} architecture
// include::modules/common-attributes.adoc[]
// :context: architecture-intro
// toc::[]

// include::modules/platform-introduction.adoc[leveloffset=+1]
// include::modules/container-application-benefits.adoc[leveloffset=+2]
// include::modules/kubernetes-introduction.adoc[leveloffset=+2]
// include::modules/platform-benefits.adoc[leveloffset=+2]
// For install docs
// include::modules/understanding-installation.adoc[leveloffset=+1]
// include::modules/running-simple-installation.adoc[leveloffset=+2]
// include::modules/running-modified-installation.adoc[leveloffset=+2]
// include::modules/following-installation.adoc[leveloffset=+2]
// include::modules/completing-installation.adoc[leveloffset=+2]
// End install docs
45 changes: 45 additions & 0 deletions architecture/understanding-development.adoc
@@ -0,0 +1,45 @@
[id="understanding-openshift-development"]
= Understanding {product-title} development
include::modules/common-attributes.adoc[]
:context: container-development
toc::[]

For many people, their first experience building and running containers is awesome.

Often knowing little or nothing about containers, many people have been able to build a containerized application in just a few minutes, push it to a registry to make it available to anyone they choose, and run it from any Linux system with a container runtime. If they were just running individual applications on their local laptop, they might feel like containers give them everything they need.

But as thrilling as the first experience can be, building and running a single container manually just makes you want more. To make containers a viable option for developing and running enterprise-quality applications, they need to be surrounded by tools that allow them to be:

* Created as discrete microservices that can be connected with other containerized, and non-containerized, services. For example, you might want to join your application with a database or have a monitoring application go with it.

* Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start up on another node.

* Automated to pick up code changes and then spin up and deploy new versions of themselves.

* Scaled up (replicated) to have more instances serving clients as demand increases, then spun down to fewer instances as demand declines.

* Run in different ways, depending on the type of application. For example, one application may run once a month to produce a report, then exit. Another application might need to run all the time and be highly available to clients.

* Managed so you can watch the state of your application and react when something goes wrong.

Containers’ widespread acceptance, and the resulting hunger for tools and methods to make them enterprise-ready, led to an explosion of options for wrapping up and managing containers. At a glance, it might be hard to figure out which approaches to choose.

So, where do you start? The rest of this section lays out the different kinds of assets you can create as someone building and deploying containerized Kubernetes applications in OpenShift. It also describes which approaches are most appropriate for different kinds of applications and development requirements.


== Developing containerized applications

There are many ways to approach application development with containers. The goal of this section is to step through one approach, from developing a single container to ultimately deploying that container as a mission-critical application for a large enterprise. Along the way, you will see the different kinds of tools, formats, and methods you can employ in this journey. At a high level, this path includes:

* Building a simple container and storing it in a registry
* Creating a Kubernetes manifest and saving it to a git repository
* Making an Operator to share your application with others

Although we illustrate a particular path from a simple container to an enterprise-ready application, along the way you will see other tools and methods that you can incorporate, and reasons why you might want to choose them.

include::modules/building-simple-container.adoc[leveloffset=+1]
include::modules/choosing-container-build-tools.adoc[leveloffset=+2]
include::modules/choosing-base-image.adoc[leveloffset=+2]
include::modules/choosing-registry.adoc[leveloffset=+2]
include::modules/creating-kubernetes-manifest-openshift.adoc[leveloffset=+1]
include::modules/develop-for-operators.adoc[leveloffset=+1]
Binary file added images/create-nodes.png
Binary file added images/create-push-app.png
Binary file added images/developer-catalog.png
Binary file added images/image2.png
Binary file added images/image3.png
Binary file added images/image4.png
Binary file added images/installconfig.png
Binary file added images/overview.png
Binary file added images/targets-and-dependencies.png
2 changes: 1 addition & 1 deletion modules/architecture-components.adoc
@@ -5,7 +5,7 @@
[id="architecture-components_{context}"]
= Components

{product-title} 4 consists of a number of key components making up the product
{product-title} {product-version} consists of a number of key components making up the product
stack.

== Infrastructure
10 changes: 5 additions & 5 deletions modules/architecture-overview.adoc
@@ -3,18 +3,18 @@
// * architecture/architecture.adoc

[id="architecture-overview_{context}"]
= Architecture overview
= Changes to {product-title} {product-version}

With {product-title} 4, the core story remains unchanged: {product-title} offers
With {product-title} {product-version}, the core story remains unchanged: {product-title} offers
your developers a set of tools to evolve their applications under operational oversight
and using Kubernetes to provide application infrastructure. The key change to {product-title} 4 is
and using Kubernetes to provide application infrastructure. The key change to {product-title} {product-version} is
that the infrastructure and its management are flexible, automated, and self-managing.

A major difference between {product-title} 3 and {product-title} 4 is that {product-title} 4 uses Operators
A major difference between {product-title} 3 and {product-title} {product-version} is that {product-title} {product-version} uses Operators
as both the fundamental unit of the product and an option for easily deploying
and managing utilities that your apps use.

{product-title} 4 runs on top of a Kubernetes cluster, with data about the
{product-title} {product-version} runs on top of a Kubernetes cluster, with data about the
objects stored in etcd, a reliable clustered key-value store. The cluster is
enhanced with standard components that are required to run your cluster, including
network, Ingress, logging, and monitoring, that run as Operators to increase the
22 changes: 22 additions & 0 deletions modules/building-simple-container.adoc
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * architecture/understanding-openshift-development.adoc
[id="building-simple-container_{context}"]
= Building a simple container

You have an idea for an application and you want to containerize it. All you need to get started is a tool for building a container (buildah or docker) and a file that describes what will go into your container (typically, a https://docs.docker.com/engine/reference/builder/[Dockerfile]). Next you will want a place to push the resulting container image (a container registry) so you can pull it to run anywhere you want it to run.

Examples of each of these components come with most Linux systems, except for the Dockerfile, which you provide yourself. The following diagram shows what the process of building and pushing an image entails:

.Create a simple containerized application and push it to a registry
image::create-push-app.png[Creating and pushing a containerized application]

Using a Red Hat Enterprise Linux (RHEL) system as an example, here’s what the process of creating a containerized application might look like (a command sketch follows this list):

* Install container build tools: RHEL contains a set of tools (podman, buildah, skopeo, and others) for building and managing containers. In particular, the buildah build-using-dockerfile (bud) command can replace the common docker build command for turning your Dockerfile and software into a container image. You can also build a container without a Dockerfile by using buildah.
* Create a Dockerfile to combine a base image and software: Information about building your container goes into a file named Dockerfile. In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify settings such as the network ports you expose outside the container and the volumes you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system.
* Run buildah or docker build: Running buildah bud or docker build pulls your chosen base image to the local system and creates a container image that is stored locally.
* Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry with podman push or docker push.
* Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run podman run or docker run, followed by the name of your new container image (for example, quay.io/myrepo/myapp:latest). The registry might require credentials to push and pull images.
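
For example, the following commands are a minimal sketch of that flow on a RHEL system. The quay.io/myrepo/myapp:latest image name is a placeholder for your own registry, repository, and tag:

----
# Build an image from the Dockerfile in the current directory
$ buildah bud -t quay.io/myrepo/myapp:latest .

# Log in to the target registry and push the image
$ podman login quay.io
$ podman push quay.io/myrepo/myapp:latest

# From any system with a container client tool, pull and run the image
$ podman run -d quay.io/myrepo/myapp:latest
----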

For more details on the process of building container images, pushing them to registries, and running them, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/managing_containers/index%23using_podman_to_work_with_containers[Using podman to work with containers] and https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/managing_containers/index%23building_container_images_with_buildah[Building container images with buildah]. Along the way, you need to make some decisions about the tools and features that you use. Some of those choices are detailed here.
16 changes: 16 additions & 0 deletions modules/choosing-base-image.adoc
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * architecture/understanding-openshift-development.adoc
[id="choosing-base-image_{context}"]
= Choosing a base image

The base image you choose to build your application on contains a set of software that looks like a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has a major impact on how secure, efficient, and upgradeable your container is in the future.

Red Hat provides a new set of base images referred to as https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/getting_started_with_containers/index%23using_red_hat_base_container_images_standard_and_minimal[Red Hat Universal Base Images] (UBI). These RHEL-based images are similar to the base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared, and you do not need to create different images for different environments.
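
For example, because UBI images are freely redistributable, you can pull and inspect them without logging in to a Red Hat registry. The ubi8 repository name below is an assumption; use the UBI version that matches your needs:

----
# Pull the freely redistributable UBI base image
$ podman pull registry.access.redhat.com/ubi8/ubi

# Inspect the image's metadata before you build on it
$ skopeo inspect docker://registry.access.redhat.com/ubi8/ubi
----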

These UBI images have standard, init, and minimal versions. There is also a set of https://access.redhat.com/documentation/en-us/red_hat_software_collections/3/html-single/using_red_hat_software_collections_container_images/index[Red Hat Software Collections] images that you can use as a foundation for applications that rely on specific runtime environments, such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code.

S2I images are available for you to use directly from the {product-title} web UI by selecting Catalog → Developer Catalog, as shown in the following figure:

.Choose S2I base images for apps that need specific runtimes
image::developer-catalog.png[{product-title} Developer Catalog]
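
You can also use an S2I builder image from the command line. The following sketch assumes the nodejs builder image stream and a sample source repository; substitute your own Git repository:

----
# Build and deploy application source code on top of the Node.js S2I builder image
$ oc new-app nodejs~https://github.com/sclorg/nodejs-ex.git --name=my-nodejs-app
----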
12 changes: 12 additions & 0 deletions modules/choosing-container-build-tools.adoc
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * architecture/understanding-openshift-development.adoc

[id="choosing-container-build-tools_{context}"]
= Choosing container build tools

When containers first really took hold, most people used the Docker Container Engine and the docker command to work with containers. You can still use those tools to create containers that will run in {product-title} and any other container platform. However, with RHEL and many other Linux systems, you can instead choose a different set of container tools that includes podman, skopeo, and buildah.

Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features tuned specifically for ultimately deploying those containers in {product-title} or other Kubernetes environments. These tools are daemonless and can be run without root privileges, so there is less overhead in running them.
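
For example, where you might previously have run docker build and docker push, the daemonless tools can do the same work, and skopeo can copy an image between registries without pulling it through a local daemon. The image and registry names here are placeholders:

----
# Build and push with the daemonless tools
$ buildah bud -t quay.io/myrepo/myapp:latest .
$ podman push quay.io/myrepo/myapp:latest

# Copy an image directly between registries
$ skopeo copy docker://quay.io/myrepo/myapp:latest docker://registry.example.com/myrepo/myapp:latest
----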

When you ultimately run your containers in {product-title}, the https://cri-o.io/[CRI-O] container engine replaces Docker. CRI-O runs on every worker and master node in an {product-title} cluster, but CRI-O is not yet supported as a standalone runtime outside of {product-title}.
16 changes: 16 additions & 0 deletions modules/choosing-registry.adoc
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * architecture/understanding-openshift-development.adoc

[id="choosing-registry_{context}"]
= Choosing a registry

Container registries are where you store container images so that you can share them with others and make them available to the platform where they ultimately run. There are large, public container registries that offer free accounts, as well as premium versions that offer more storage and special features. You can also install your own registry, which can be exclusive to your organization or selectively shared with others.

To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com (unauthenticated and deprecated) and registry.redhat.io (requires authentication). You can learn about the Red Hat and partner images in the Red Hat Registry from the https://access.redhat.com/containers/[Red Hat Container Catalog]. Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores based on applied security updates.

Large, public registries include https://hub.docker.com/[Docker Hub] and https://quay.io/[Quay.io]. The Quay.io registry is owned and managed by Red Hat. Many of the components used in {product-title} are stored in Quay.io, including container images and Operators used to deploy {product-title} itself. Quay.io also offers the means of storing other types of content, including Helm Charts.

If you want your own private container registry, {product-title} itself includes one that is installed with the platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called https://access.redhat.com/products/red-hat-quay[Red Hat Quay]. Red Hat Quay includes geo-replication, Git build triggers, and Clair image scanning, among other features.

All of the registries mentioned here can require credentials to download images. Some of those credentials are presented on a cluster-wide basis from {product-title}, while others are individual credentials.
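
For example, an individual developer might authenticate to registry.redhat.io before pulling images from it, while the deprecated registry.access.redhat.com location still allows unauthenticated pulls. The ubi8/ubi repository is used here only as an example image name:

----
# registry.redhat.io requires authentication
$ podman login registry.redhat.io
$ podman pull registry.redhat.io/ubi8/ubi

# registry.access.redhat.com does not require authentication
$ podman pull registry.access.redhat.com/ubi8/ubi
----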
33 changes: 33 additions & 0 deletions modules/completing-installation.adoc
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * architecture/architecture.adoc

[id="completing-installation_{context}"]
= Completing and verifying the {product-title} installation

When the bootstrap node finishes its work and hands off control to the new {product-title} cluster, it is destroyed. The installer waits for the cluster to initialize, creates a route to the {product-title} console, and presents the information and credentials you need to log in to the cluster. Here’s an example:

----
INFO Install complete!                                
INFO Run 'export KUBECONFIG=/home/joe/ocp/auth/kubeconfig' to manage the cluster with 'oc', the {product-title} CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p 39RPg-y4c7V-n4bbn-vAF3M' succeeds (wait a few minutes).
INFO Access the {product-title} web-console here: https://console-openshift-console.apps.mycluster.devel.example.com
INFO Login to the console with user: kubeadmin, password: 39RPg-y4c7V-n4bbn-vAF3M
----

To access the {product-title} cluster from your web browser, log in as kubeadmin with the password (for example, 39RPg-y4c7V-n4bbn-vAF3M), using the URL shown:

     https://console-openshift-console.apps.mycluster.devel.example.com

To access the {product-title} cluster from the command line, identify the location of the credentials file (export the KUBECONFIG variable) and log in as kubeadmin with the provided password:
----
$ export KUBECONFIG=/home/joe/ocp/auth/kubeconfig
$ oc login -u kubeadmin -p 39RPg-y4c7V-n4bbn-vAF3M
----

At this point, you can begin using the {product-title} cluster. To understand the management of your {product-title} cluster going forward, you should explore the {product-title} control plane.
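
As a first look at the cluster you just installed, you might list its nodes and cluster Operators with standard oc commands; a short sketch:

----
# List the master and worker nodes in the cluster
$ oc get nodes

# Check the cluster Operators and the overall cluster version
$ oc get clusteroperators
$ oc get clusterversion
----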
