Website Cleanup Part 7 - Usage (gardener#7541)
* Fix "similar to" usage

* Removed etc. from lists with e.g.

* Proofread files

* Resolved comments

* Resolved "similar" comment

* Further changes to "similar" usage

* Fully resolved "similar" suggestion
n-boshnakov authored Feb 27, 2023
1 parent 6764bf9 commit 302b1a9
Showing 67 changed files with 554 additions and 550 deletions.
2 changes: 1 addition & 1 deletion docs/concepts/apiserver_admission_plugins.md
@@ -166,7 +166,7 @@ _(enabled by default)_

This admission controller reacts on `CREATE` and `UPDATE` operations for `ManagedSeed`s.
It validates certain configuration values in the specification against the referred `Shoot`, for example Seed provider, network ranges, DNS domain, etc.
-Similarly to `ShootValidator`, it performs validations that cannot be handled by the static API validation due to their dynamic nature.
+Similar to `ShootValidator`, it performs validations that cannot be handled by the static API validation due to their dynamic nature.
Additionally, it performs certain defaulting tasks, making sure that configuration values that are not specified are defaulted to the values of the referred `Shoot`, for example Seed provider, network ranges, DNS domain, etc.
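
For illustration, a minimal `ManagedSeed` sketch (all names are hypothetical) that this plugin would validate and default against the referred `Shoot`:

```yaml
apiVersion: seedmanagement.gardener.cloud/v1alpha1
kind: ManagedSeed
metadata:
  name: my-managed-seed   # hypothetical name
  namespace: garden
spec:
  shoot:
    name: crazy-botany    # the referred Shoot; provider, network ranges, and DNS domain are validated/defaulted from it
```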

## `ManagedSeedShoot`
6 changes: 3 additions & 3 deletions docs/concepts/gardenlet.md
@@ -178,7 +178,7 @@ endpoint returns HTTP status code 200.
If that is the case, the gardenlet renews the lease in the Garden cluster in the `gardener-system-seed-lease` namespace and updates
the `GardenletReady` condition in the `status.conditions` field of the `Seed` resource. For more information, see [this section](#lease-reconciler).

-Similarly to the `node-lifecycle-controller` inside the `kube-controller-manager`,
+Similar to the `node-lifecycle-controller` inside the `kube-controller-manager`,
the `gardener-controller-manager` features a `seed-lifecycle-controller` that sets
the `GardenletReady` condition to `Unknown` in case the gardenlet fails to renew the lease.
As a consequence, the `gardener-scheduler` doesn’t consider this seed cluster for newly created shoot clusters anymore.
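
As a sketch, the resulting condition on the `Seed` resource might then look as follows (reason, message, and timestamp are illustrative, not actual values emitted by the controller):

```yaml
status:
  conditions:
  - type: GardenletReady
    status: "Unknown"
    reason: SeedLeaseNotRenewed   # illustrative reason
    message: gardenlet stopped renewing its lease in the gardener-system-seed-lease namespace.   # illustrative message
    lastTransitionTime: "2023-02-27T12:00:00Z"
```
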
@@ -406,8 +406,8 @@ It maintains five conditions and performs the following checks:
- `APIServerAvailable`: The `/healthz` endpoint of the shoot's `kube-apiserver` is called and considered healthy when it responds with `200 OK`.
- `ControlPlaneHealthy`: The control plane is considered healthy when the respective `Deployment`s (for example `kube-apiserver`,`kube-controller-manager`), and `Etcd`s (for example `etcd-main`) exist and are healthy.
- `ObservabilityComponentsHealthy`: This condition is considered healthy when the respective `Deployment`s (for example `grafana`), `StatefulSet`s (for example `prometheus`,`loki`), exist and are healthy.
-- `EveryNodeReady`: The conditions of the worker nodes are checked (e.g., `Ready`, `MemoryPressure`, etc.). Also, it's checked whether the Kubernetes version of the installed `kubelet` matches the desired version specified in the `Shoot` resource.
-- `SystemComponentsHealthy`: The conditions of the `ManagedResource`s are checked (e.g. `ResourcesApplied`, etc.). Also, it is verified whether the VPN tunnel connection is established (which is required for the `kube-apiserver` to communicate with the worker nodes).
+- `EveryNodeReady`: The conditions of the worker nodes are checked (e.g., `Ready`, `MemoryPressure`). Also, it's checked whether the Kubernetes version of the installed `kubelet` matches the desired version specified in the `Shoot` resource.
+- `SystemComponentsHealthy`: The conditions of the `ManagedResource`s are checked (e.g., `ResourcesApplied`). Also, it is verified whether the VPN tunnel connection is established (which is required for the `kube-apiserver` to communicate with the worker nodes).

Sometimes, `ManagedResource`s can have both `Healthy` and `Progressing` conditions set to `True` (e.g., when a `DaemonSet` rolls out one-by-one on a large cluster with many nodes) while this is not reflected in the `Shoot` status. In order to catch issues where the rollout gets stuck, one can set `.controllers.shootCare.managedResourceProgressingThreshold` in the `gardenlet`'s component configuration. If the `Progressing` condition is still `True` for more than the configured duration, the `SystemComponentsHealthy` condition in the `Shoot` is set to `False`, eventually.
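
For example, a gardenlet component configuration snippet setting this threshold might look like the following sketch (the duration is an assumed value):

```yaml
apiVersion: gardenlet.config.gardener.cloud/v1alpha1
kind: GardenletConfiguration
controllers:
  shootCare:
    managedResourceProgressingThreshold: 1h   # assumed duration; pick what fits your rollout times
```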

2 changes: 1 addition & 1 deletion docs/concepts/resource-manager.md
@@ -668,7 +668,7 @@ The components in namespace `b` now need to be labeled with `networking.resource

#### `Service` Targets In Multiple Namespaces

-Finally, let's say there is a `Service` called `example` which exists in different namespaces whose names are not static (e.g., `foo-1`, `foo-2`, etc.), and a component in namespace `bar` wants to initiate connections with all of them.
+Finally, let's say there is a `Service` called `example` which exists in different namespaces whose names are not static (e.g., `foo-1`, `foo-2`), and a component in namespace `bar` wants to initiate connections with all of them.

The `example` `Service`s in these namespaces can now be annotated with `networking.resources.gardener.cloud/namespace-selectors='[{"matchLabels":{"kubernetes.io/metadata.name":"bar"}}]'`.
As a consequence, the component in namespace `bar` now needs to be labeled with `networking.resources.gardener.cloud/to-foo-1-example-tcp-8080=allowed`, `networking.resources.gardener.cloud/to-foo-2-example-tcp-8080=allowed`, etc.
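
Putting this together, one of the `example` `Service`s could look like the following sketch (the pod selector and the TCP port 8080 are assumptions derived from the label names above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: foo-1
  annotations:
    networking.resources.gardener.cloud/namespace-selectors: '[{"matchLabels":{"kubernetes.io/metadata.name":"bar"}}]'
spec:
  selector:
    app: example   # assumed pod selector
  ports:
  - protocol: TCP
    port: 8080     # matches the -tcp-8080 suffix in the labels above
```
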
4 changes: 2 additions & 2 deletions docs/concepts/scheduler.md
@@ -88,11 +88,11 @@ The `shoots/binding` subresource is used to bind a `Shoot` to a `Seed`. On creat
Only an operator with the necessary RBAC can update this binding manually. This can be done by changing the `.spec.seedName` of the shoot. However, if a different seed is already assigned to the shoot, this will trigger a control-plane migration. For required steps, please see [Triggering the Migration](../usage/control_plane_migration.md#triggering-the-migration).

## `spec.seedName` Field in the `Shoot` Specification
-Similarly to the `.spec.nodeName` field in `Pod`s, the `Shoot` specification has an optional `.spec.seedName` field. If this field is set on creation, the shoot will be scheduled to this seed. However, this field can only be set by users having RBAC for the `shoots/binding` subresource. If this field is not set, the `scheduler` will assign a suitable seed automatically and populate this field with the seed name.
+Similar to the `.spec.nodeName` field in `Pod`s, the `Shoot` specification has an optional `.spec.seedName` field. If this field is set on creation, the shoot will be scheduled to this seed. However, this field can only be set by users having RBAC for the `shoots/binding` subresource. If this field is not set, the `scheduler` will assign a suitable seed automatically and populate this field with the seed name.

## `seedSelector` Field in the `Shoot` Specification

-Similarly to the `.spec.nodeSelector` field in `Pod`s, the `Shoot` specification has an optional `.spec.seedSelector` field.
+Similar to the `.spec.nodeSelector` field in `Pod`s, the `Shoot` specification has an optional `.spec.seedSelector` field.
It allows the user to provide a label selector that must match the labels of the `Seed`s in order to be scheduled to one of them.
The labels on the `Seed`s are usually controlled by Gardener administrators/operators - end users cannot add arbitrary labels themselves.
If provided, the Gardener Scheduler will only consider as "suitable" those seeds whose labels match those provided in the `.spec.seedSelector` of the `Shoot`.
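
A minimal `Shoot` sketch combining both fields (names and label values are hypothetical; `Seed` labels are defined by operators):

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot
  namespace: garden-dev        # hypothetical project namespace
spec:
  seedName: my-seed            # optional; requires RBAC for shoots/binding, otherwise populated by the scheduler
  seedSelector:
    matchLabels:
      environment: production  # hypothetical operator-managed Seed label
```
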
2 changes: 1 addition & 1 deletion docs/deployment/gardenlet_api_access.md
@@ -27,7 +27,7 @@ When enabling the plugins, there is one additional step for each before the `gar
![Flow Diagram](content/gardenlet_api_access_flow.png)

Please note that the example shows a request to an object (`Shoot`) residing in one of the API groups served by `gardener-apiserver`.
-However, the `gardenlet` is also interacting with objects in API groups served by the `kube-apiserver` (e.g., `Secret`,`ConfigMap`, etc.).
+However, the `gardenlet` is also interacting with objects in API groups served by the `kube-apiserver` (e.g., `Secret`,`ConfigMap`).
In this case, the consultation of the `SeedRestriction` admission plugin is performed by the `kube-apiserver` itself before it forwards the request to the `gardener-apiserver`.

Today, the following rules are implemented:
2 changes: 1 addition & 1 deletion docs/deployment/setup_gardener.md
@@ -1,6 +1,6 @@
# Deploying Gardener into a Kubernetes Cluster

-Similarly to Kubernetes, Gardener consists out of control plane components (Gardener API server, Gardener controller manager, Gardener scheduler), and an agent component (gardenlet).
+Similar to Kubernetes, Gardener consists out of control plane components (Gardener API server, Gardener controller manager, Gardener scheduler), and an agent component (gardenlet).
The control plane is deployed in the so-called garden cluster, while the agent is installed into every seed cluster.
Please note that it is possible to use the garden cluster as seed cluster by simply deploying the gardenlet into it.

4 changes: 2 additions & 2 deletions docs/development/kubernetes-clients.md
@@ -21,7 +21,7 @@ For historical reasons, you will find different kinds of Kubernetes clients in G
### Client-Go Clients

[client-go](https://github.com/kubernetes/client-go) is the default/official client for talking to the Kubernetes API in Golang.
-It features the so called ["client sets"](https://github.com/kubernetes/client-go/blob/release-1.21/kubernetes/clientset.go#L72) for all built-in Kubernetes API groups and versions (e.g. `v1` (aka `core/v1`), `apps/v1`, etc.).
+It features the so called ["client sets"](https://github.com/kubernetes/client-go/blob/release-1.21/kubernetes/clientset.go#L72) for all built-in Kubernetes API groups and versions (e.g. `v1` (aka `core/v1`), `apps/v1`).
client-go clients are generated from the built-in API types using [client-gen](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/generating-clientset.md) and are composed of interfaces for every known API GroupVersionKind.
A typical client-go usage looks like this:
```go
// (snippet truncated in this diff view)
```

@@ -294,7 +294,7 @@ However, in any case, retrying on conflict is probably not the right option to s
As explained before, conflicts are actually important and prevent clients from doing wrongful concurrent updates. This means that conflicts are not something we generally want to avoid or ignore.
However, in many cases controllers are exclusive owners of the fields they want to update and thus it might be safe to run without optimistic locking.

-For example, the gardenlet is the exclusive owner of the `spec` section of the Extension resources it creates on behalf of a Shoot (e.g., the `Infrastructure` resource for creating VPC, etc.). Meaning, it knows the exact desired state and no other actor is supposed to update the Infrastructure's `spec` fields.
+For example, the gardenlet is the exclusive owner of the `spec` section of the Extension resources it creates on behalf of a Shoot (e.g., the `Infrastructure` resource for creating VPC). Meaning, it knows the exact desired state and no other actor is supposed to update the Infrastructure's `spec` fields.
When the gardenlet now updates the Infrastructures `spec` section as part of the Shoot reconciliation, it can simply issue a `PATCH` request that only updates the `spec` and runs without optimistic locking.
If another controller concurrently updated the object in the meantime (e.g., the `status` section), the `resourceVersion` got changed, which would cause a conflict error if running with optimistic locking.
However, concurrent `status` updates would not change the gardenlet's mind on the desired `spec` of the Infrastructure resource as it is determined only by looking at the Shoot's specification.
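
A minimal sketch of such a `spec`-only patch with a controller-runtime client (the `Region` mutation and the function name are placeholders for whatever desired state the gardenlet computes):

```go
import (
	"context"

	extensionsv1alpha1 "github.com/gardener/gardener/pkg/apis/extensions/v1alpha1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchInfrastructureSpec updates only the spec section without optimistic locking:
// client.MergeFrom (in contrast to MergeFromWithOptimisticLock) omits the
// resourceVersion from the patch, so concurrent status updates by other
// controllers cannot cause a conflict error.
func patchInfrastructureSpec(ctx context.Context, c client.Client, infra *extensionsv1alpha1.Infrastructure, desiredRegion string) error {
	patch := client.MergeFrom(infra.DeepCopy())
	infra.Spec.Region = desiredRegion // desired state derived from the Shoot specification
	return c.Patch(ctx, infra, patch)
}
```
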
2 changes: 1 addition & 1 deletion docs/development/secrets_management.md
@@ -1,6 +1,6 @@
# Secrets Management for Seed and Shoot Cluster

-The gardenlet needs to create quite some amount of credentials (certificates, private keys, passwords, etc.) for seed and shoot clusters in order to ensure secure deployments.
+The gardenlet needs to create quite some amount of credentials (certificates, private keys, passwords) for seed and shoot clusters in order to ensure secure deployments.
Such credentials typically should be renewed automatically when their validity expires, rotated regularly, and they potentially need to be persisted such that they don't get lost in case of a control plane migration or a lost seed cluster.

## SecretsManager Introduction
2 changes: 1 addition & 1 deletion docs/development/testing.md
@@ -535,7 +535,7 @@ Please see [Test Machinery Tests](testmachinery_tests.md).
- Testing against real infrastructure can cause flakes sometimes (e.g., in outage situations).
- Failures are hard to debug, because clusters are deleted after the test (for obvious cost reasons).
- Bugs can only be caught, once it's "too late", i.e., when code is merged and deployed.
-- Today, test machinery tests cover a bigger "test matrix" (e.g., Shoot creation across infrastructures, kubernetes versions, machine image versions, etc.).
+- Today, test machinery tests cover a bigger "test matrix" (e.g., Shoot creation across infrastructures, kubernetes versions, machine image versions).
- Test machinery also runs Kubernetes conformance tests.
- However, because of the listed drawbacks, we should rather focus on augmenting our e2e tests, as we can run them locally and in CI in order to catch bugs before they get merged.
- It's still a good idea to add test machinery tests if a feature that is depending on some installation-specific configuration needs to be tested.
2 changes: 1 addition & 1 deletion docs/extensions/containerruntime.md
@@ -42,7 +42,7 @@ Gardener would deploy four `ContainerRuntime` resources. For `worker-one`: one `

## Supporting a New Container Runtime Provider

-To add support for another container runtime (e.g., gvisor, kata-containers, etc.), a container runtime extension controller needs to be implemented. It should support Gardener's supported CRI plugins.
+To add support for another container runtime (e.g., gvisor, kata-containers), a container runtime extension controller needs to be implemented. It should support Gardener's supported CRI plugins.

The container runtime extension should install the necessary resources into the shoot cluster (e.g., `RuntimeClass`es), and it should copy the runtime binaries to the relevant worker machines in path: `spec.binaryPath`.
Gardener labels the shoot nodes according to the CRI configured: `worker.gardener.cloud/cri-name=<value>` (e.g `worker.gardener.cloud/cri-name=containerd`) and multiple labels for each of the container runtimes configured for the shoot Worker machine:
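
For illustration, the labels on such a node might look like the following sketch (the `containerruntime.worker.gardener.cloud/<name>=true` pattern for additional runtimes is an assumption here; `gvisor` is an example runtime):

```yaml
metadata:
  labels:
    worker.gardener.cloud/cri-name: containerd
    containerruntime.worker.gardener.cloud/gvisor: "true"   # assumed label pattern for an additional configured runtime
```
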
2 changes: 1 addition & 1 deletion docs/extensions/network.md
@@ -54,7 +54,7 @@ The above resources is divided into two parts (more information can be found at

## Supporting a New Network Extension Provider

-To add support for another networking provider (e.g., weave, Cilium, Flannel, etc.) a network extension controller needs to be implemented which would optionally have its own custom configuration specified in the `spec.providerConfig` in the `Network` resource. For example, if support for a network plugin named `gardenet` is required, the following `Network` resource would be created:
+To add support for another networking provider (e.g., weave, Cilium, Flannel) a network extension controller needs to be implemented which would optionally have its own custom configuration specified in the `spec.providerConfig` in the `Network` resource. For example, if support for a network plugin named `gardenet` is required, the following `Network` resource would be created:

```yaml
---
# (resource truncated in this diff view)
```
2 changes: 1 addition & 1 deletion docs/extensions/overview.md
@@ -84,7 +84,7 @@ status:

Gardener waits until the `.status.lastOperation` / `.status.lastError` indicates that the operation reached a final state and either continuous with the next step, or stops and reports the potential error.
-The extension-specific output in `.status.providerStatus` is - similarly to `.spec.providerConfig` - not evaluated, and simply forwarded to CRDs in subsequent steps.
+The extension-specific output in `.status.providerStatus` is - similar to `.spec.providerConfig` - not evaluated, and simply forwarded to CRDs in subsequent steps.

**Example 2**:

2 changes: 1 addition & 1 deletion docs/proposals/01-extensibility.md
@@ -45,7 +45,7 @@ This proposal aims to move out the cloud-specific implementations (called "(clou

Currently, it is too hard to support additional cloud providers or operation systems/distributions as everything must be done in-tree, which might affect the implementation of other cloud providers as well.
The various conditions and branches make the code hard to maintain and hard to test.
-Every change must be done centrally, requires to completely rebuild Gardener, and cannot be deployed individually. Similarly to the motivation for Kubernetes to extract their cloud-specifics into dedicated cloud-controller-managers or to extract the container/storage/network/... specifics into CRI/CSI/CNI/..., we aim to do the same right now.
+Every change must be done centrally, requires to completely rebuild Gardener, and cannot be deployed individually. Similar to the motivation for Kubernetes to extract their cloud-specifics into dedicated cloud-controller-managers or to extract the container/storage/network/... specifics into CRI/CSI/CNI/..., we aim to do the same right now.

### Goals

2 changes: 1 addition & 1 deletion docs/proposals/07-shoot-control-plane-migration.md
@@ -173,7 +173,7 @@ status:

Extensions which do not require state migration should set `status.state=nil` in their Custom Resources and trigger a normal reconciliation operation if the CR contains the `core.gardener.cloud/operation=restore` annotation.

-Similarly to the contract for the [reconcile operation](https://github.com/gardener/gardener/blob/master/docs/extensions/reconcile-trigger.md), the extension controller has to remove the `restore` annotation after the restoration operation has finished.
+Similar to the contract for the [reconcile operation](https://github.com/gardener/gardener/blob/master/docs/extensions/reconcile-trigger.md), the extension controller has to remove the `restore` annotation after the restoration operation has finished.

An additional annotation `gardener.cloud/operation=migrate` is added to the Custom Resources. It is used to tell the extension controllers in the **Source Seed** that they must stop reconciling resources (in case they are requeued due to errors) and should perform cleanup activities in the Shoot's control plane. These cleanup activities involve removing the finalizers on Custom Resources and deleting them without actually deleting any infrastructure resources.
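
As a sketch, an extension Custom Resource annotated for restoration might look like this (kind, name, and namespace are illustrative):

```yaml
apiVersion: extensions.gardener.cloud/v1alpha1
kind: Infrastructure
metadata:
  name: infrastructure
  namespace: shoot--foo--bar                # illustrative Shoot namespace
  annotations:
    core.gardener.cloud/operation: restore  # removed by the extension controller once restoration has finished
```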
