rename ContainerName to EnvSourceContainerName (kedacore#235)
* rename ContainerName to EnvSourceContainerName

Signed-off-by: Zbynek Roubalik <[email protected]>

* fix

Signed-off-by: Zbynek Roubalik <[email protected]>
zroubalik authored Sep 10, 2020
1 parent b4e20e4 commit 992ca55
Showing 4 changed files with 84 additions and 70 deletions.
86 changes: 43 additions & 43 deletions content/docs/2.0/concepts/authentication.md
@@ -67,7 +67,7 @@ spec:
      queueLength: '5'
```
If you have multiple containers in a deployment, you will need to include the name of the container that has the references in the `ScaledObject`. If you do not include a `containerName` it will default to the first container. KEDA will attempt to resolve references from secrets, config maps, and environment variables of the container.
If you have multiple containers in a deployment, you will need to include the name of the container that has the references in the `ScaledObject`. If you do not include an `envSourceContainerName`, it will default to the first container. KEDA will attempt to resolve references from secrets, config maps, and environment variables of the container.
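
For example, a minimal sketch (the Deployment and container names here are hypothetical) that points KEDA at the second container of a multi-container Deployment:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: multi-container-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: queue-consumer              # hypothetical Deployment with a sidecar as containers[0]
    envSourceContainerName: consumer  # resolve secret/config-map/env references from this container
  triggers:
    # {list of triggers that reference the consumer container's environment}
```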

While this method works for many scenarios, there are some downsides. This method makes it difficult to efficiently share auth config across `ScaledObjects`. It also doesn’t support referencing a secret directly, only secrets that are referenced by the container. This method also doesn't support a model where other types of authentication may work - namely "pod identity" where access to a source could be acquired with no secrets or connection strings. For these and other reasons, we also provide a `TriggerAuthentication` resource to define authentication as a separate resource to a `ScaledObject`, which can reference secrets directly or supply configuration like pod identity.

@@ -93,27 +93,27 @@ metadata:
namespace: default # must be same namespace as the ScaledObject
spec:
podIdentity:
provider: none | azure | gcp | spiffe | aws-eks | aws-kiam # Optional. Default: none
secretTargetRef: # Optional.
- parameter: {scaledObject-parameter-name} # Required.
name: {secret-name} # Required.
key: {secret-key-name} # Required.
env: # Optional.
- parameter: {scaledObject-parameter-name} # Required.
name: {env-name} # Required.
containerName: {container-name} # Optional. Default: scaleTargetRef.containerName of ScaledObject
hashiCorpVault: # Optional.
address: {hashicorp-vault-address} # Required.
authentication: token | kubernetes # Required.
role: {hashicorp-vault-role} # Optional.
mount: {hashicorp-vault-mount} # Optional.
credential: # Optional.
token: {hashicorp-vault-token} # Optional.
serviceAccount: {path-to-service-account-file} # Optional.
secrets: # Required.
- parameter: {scaledObject-parameter-name} # Required.
      key: {hashicorp-vault-secret-key-name}      # Required.
      path: {hashicorp-vault-secret-path}         # Required.
provider: none | azure | gcp | spiffe | aws-eks | aws-kiam # Optional. Default: none
secretTargetRef: # Optional.
- parameter: {scaledObject-parameter-name} # Required.
name: {secret-name} # Required.
key: {secret-key-name} # Required.
env: # Optional.
- parameter: {scaledObject-parameter-name} # Required.
name: {env-name} # Required.
containerName: {container-name} # Optional. Default: scaleTargetRef.envSourceContainerName of ScaledObject
hashiCorpVault: # Optional.
address: {hashicorp-vault-address} # Required.
authentication: token | kubernetes # Required.
role: {hashicorp-vault-role} # Optional.
mount: {hashicorp-vault-mount} # Optional.
credential: # Optional.
token: {hashicorp-vault-token} # Optional.
serviceAccount: {path-to-service-account-file} # Optional.
secrets: # Required.
- parameter: {scaledObject-parameter-name} # Required.
      key: {hashicorp-vault-secret-key-name}    # Required.
      path: {hashicorp-vault-secret-path}       # Required.
```

Depending on your requirements, you can mix and match the reference types and providers in order to configure all required parameters.
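
For instance, a sketch (all names hypothetical) of a `TriggerAuthentication` that mixes pod identity with a directly referenced Kubernetes Secret:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: trigger-auth-mixed
  namespace: default                       # must be same namespace as the ScaledObject
spec:
  podIdentity:
    provider: azure                        # the identity is acquired from the pod
  secretTargetRef:                         # one parameter still comes from a Secret
  - parameter: connectionString
    name: my-keda-secret-entity
    key: azure-storage-connectionstring
```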
@@ -136,10 +136,10 @@ Every parameter you define in `TriggerAuthentication` definition does not need t
You can pull information via one or more environment variables by providing the `name` of the variable for a given `containerName`.

```yaml
env: # Optional.
- parameter: region # Required - Defined by the scale trigger
name: my-env-var # Required.
containerName: my-container # Optional. Default: scaleTargetRef.containerName of ScaledObject
env: # Optional.
- parameter: region # Required - Defined by the scale trigger
name: my-env-var # Required.
containerName: my-container # Optional. Default: scaleTargetRef.envSourceContainerName of ScaledObject
```

**Assumptions:** The container referenced by `containerName` is in the resource referenced by `scaleTargetRef.name` in the ScaledObject, unless specified otherwise.
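
As a hypothetical illustration, the `region` parameter above would be resolved against an environment variable defined on the referenced container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container      # matches `containerName` above
        image: my-image:latest  # hypothetical image
        env:
        - name: my-env-var      # matches the referenced `name`
          value: eu-west-1      # this value is handed to the trigger's `region` parameter
```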
@@ -149,10 +149,10 @@ env: # Optional.
You can pull one or more secrets into the trigger by defining the `name` of the Kubernetes Secret and the `key` to use.

```yaml
secretTargetRef: # Optional.
- parameter: connectionString # Required - Defined by the scale trigger
name: my-keda-secret-entity # Required.
key: azure-storage-connectionstring # Required.
secretTargetRef: # Optional.
- parameter: connectionString # Required - Defined by the scale trigger
name: my-keda-secret-entity # Required.
key: azure-storage-connectionstring # Required.
```

**Assumptions:** The Secret is in the same namespace as the resource referenced by `scaleTargetRef.name` in the ScaledObject, unless specified otherwise.
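
The referenced Secret itself is an ordinary Kubernetes Secret; a sketch matching the names above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-keda-secret-entity                                 # matches `name` above
  namespace: default                                          # same namespace as the ScaledObject
type: Opaque
stringData:
  azure-storage-connectionstring: {connection-string-value}   # matches `key` above
```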
@@ -164,18 +164,18 @@ You can pull one or more Hashicorp Vault secrets into the trigger by defining th
The `secrets` list defines the mapping from the path and the key of the secret in Vault to the parameter.

```yaml
hashiCorpVault: # Optional.
address: {hashicorp-vault-address} # Required.
authentication: token | kubernetes # Required.
role: {hashicorp-vault-role} # Optional.
mount: {hashicorp-vault-mount} # Optional.
credential: # Optional.
token: {hashicorp-vault-token} # Optional.
serviceAccount: {path-to-service-account-file} # Optional.
secrets: # Required.
- parameter: {scaledObject-parameter-name} # Required.
    key: {hashicorp-vault-secret-key-name}       # Required.
    path: {hashicorp-vault-secret-path}          # Required.
hashiCorpVault: # Optional.
address: {hashicorp-vault-address} # Required.
authentication: token | kubernetes # Required.
role: {hashicorp-vault-role} # Optional.
mount: {hashicorp-vault-mount} # Optional.
credential: # Optional.
token: {hashicorp-vault-token} # Optional.
serviceAccount: {path-to-service-account-file} # Optional.
secrets: # Required.
- parameter: {scaledObject-parameter-name} # Required.
    key: {hashicorp-vault-secret-key-name}     # Required.
    path: {hashicorp-vault-secret-path}        # Required.
```
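
For example, a sketch (the address, role, mount, and secret path are hypothetical) using the `kubernetes` authentication method with the pod's service account token:

```yaml
hashiCorpVault:
  address: https://vault.default.svc:8200
  authentication: kubernetes
  role: keda
  mount: kubernetes
  credential:
    serviceAccount: /var/run/secrets/kubernetes.io/serviceaccount/token
  secrets:
  - parameter: connectionString
    key: connectionString
    path: secret/data/keda/storage
```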

### Pod Authentication Providers
@@ -186,7 +186,7 @@ Currently we support the following:

```yaml
podIdentity:
provider: none | azure | gcp | spiffe | aws-eks | aws-kiam # Optional. Default: none
provider: none | azure | gcp | spiffe | aws-eks | aws-kiam # Optional. Default: none
```
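
For example, to rely entirely on the identity assigned to the pod instead of secrets (a sketch; the chosen provider must already be configured in the cluster):

```yaml
podIdentity:
  provider: azure   # delegate authentication to the Azure pod identity infrastructure
```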

#### Azure Pod Identity
8 changes: 4 additions & 4 deletions content/docs/2.0/concepts/scaling-deployments.md
@@ -43,7 +43,7 @@ spec:
apiVersion: {api-version-of-target-resource} # Optional. Default: apps/v1
kind: {kind-of-target-resource} # Optional. Default: Deployment
name: {name-of-target-resource} # Mandatory. Must be in the same namespace as the ScaledObject
containerName: {container-name} # Optional. Default: .spec.template.spec.containers[0]
envSourceContainerName: {container-name} # Optional. Default: .spec.template.spec.containers[0]
pollingInterval: 30 # Optional. Default: 30 seconds
cooldownPeriod: 300 # Optional. Default: 300 seconds
minReplicaCount: 0 # Optional. Default: 0
@@ -77,16 +77,16 @@ You can find all supported triggers [here](/scalers).
apiVersion: {api-version-of-target-resource} # Optional. Default: apps/v1
kind: {kind-of-target-resource} # Optional. Default: Deployment
name: {name-of-target-resource} # Mandatory. Must be in the same namespace as the ScaledObject
containerName: {container-name} # Optional. Default: .spec.template.spec.containers[0]
envSourceContainerName: {container-name} # Optional. Default: .spec.template.spec.containers[0]
```
The reference to the resource this ScaledObject is configured for. This is the resource KEDA will scale up/down and set up an HPA for, based on the triggers defined in `triggers:`.

To scale Kubernetes Deployments, only `name` needs to be specified. To scale a different resource such as a StatefulSet or a Custom Resource (one that defines the `/scale` subresource), the appropriate `apiVersion` (following the standard Kubernetes convention, i.e. `{api}/{version}`) and `kind` need to be specified as well.

`containerName` is the name of the container in the target resource from which KEDA should try to get environment properties holding secrets etc.
`envSourceContainerName` is an optional property that specifies the name of the container in the target resource from which KEDA should try to get environment properties holding secrets etc. If it is not defined, KEDA will try to get environment properties from the first container, i.e. from `.spec.template.spec.containers[0]`.

**Assumptions:** Resource referenced by `name` (and `apiVersion`, `kind`) is in the same namespace as the scaledObject
**Assumptions:** Resource referenced by `name` (and `apiVersion`, `kind`) is in the same namespace as the ScaledObject
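
For example, a sketch (names hypothetical) of targeting a StatefulSet instead of a Deployment:

```yaml
scaleTargetRef:
  apiVersion: apps/v1
  kind: StatefulSet
  name: my-statefulset
  envSourceContainerName: worker   # env references are resolved from this container
```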

---

51 changes: 32 additions & 19 deletions content/docs/2.0/concepts/scaling-jobs.md
@@ -29,16 +29,17 @@ metadata:
name: {scaled-job-name}
spec:
jobTargetRef:
parallelism: 1 # [max number of desired pods](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism)
completions: 1 # [desired number of successfully finished pods](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism)
activeDeadlineSeconds: 600 # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
backoffLimit: 6 # Specifies the number of retries before marking this job failed. Defaults to 6
parallelism: 1 # [max number of desired pods](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism)
completions: 1 # [desired number of successfully finished pods](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism)
activeDeadlineSeconds: 600 # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
backoffLimit: 6 # Specifies the number of retries before marking this job failed. Defaults to 6
template:
# describes the [job template](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
pollingInterval: 30 # Optional. Default: 30 seconds
successfulJobsHistoryLimit: 5 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 5 # Optional. Default: 100. How many failed jobs should be kept.
maxReplicaCount: 100 # Optional. Default: 100
pollingInterval: 30 # Optional. Default: 30 seconds
successfulJobsHistoryLimit: 5 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 5 # Optional. Default: 100. How many failed jobs should be kept.
envSourceContainerName: {container-name} # Optional. Default: .spec.JobTargetRef.template.spec.containers[0]
maxReplicaCount: 100 # Optional. Default: 100
triggers:
# {list of triggers to create jobs}
```
@@ -49,24 +50,25 @@ You can find all supported triggers [here](../scalers).

```yaml
jobTargetRef:
parallelism: 1 # Max number of desired instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism))
completions: 1 # Desired number of successfully finished instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism))
activeDeadlineSeconds: 600 # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
backoffLimit: 6 # Specifies the number of retries before marking this job failed. Defaults to 6
parallelism: 1 # Max number of desired instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism))
completions: 1 # Desired number of successfully finished instances ([docs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#controlling-parallelism))
activeDeadlineSeconds: 600 # Specifies the duration in seconds relative to the startTime that the job may be active before the system tries to terminate it; value must be positive integer
backoffLimit: 6 # Specifies the number of retries before marking this job failed. Defaults to 6
```
These configuration parameters are optional. They are currently not implemented, but support is planned.
---
```yaml
pollingInterval: 30 # Optional. Default: 30 seconds
```
This is the interval to check each trigger on. By default, KEDA will check each trigger source on every ScaledJob every 30 seconds.
---
```yaml
successfulJobsHistoryLimit: 5 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 5 # Optional. Default: 100. How many failed jobs should be kept.
successfulJobsHistoryLimit: 5 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 5 # Optional. Default: 100. How many failed jobs should be kept.
```
The `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 100.
@@ -75,6 +77,17 @@ This concept is similar to [Jobs History Limits](https://kubernetes.io/docs/task

The actual number of jobs can exceed the limit for a short time; however, this is resolved during the cleanup period. Currently, the cleanup period is the same as the polling interval.

---


```yaml
envSourceContainerName: {container-name} # Optional. Default: .spec.JobTargetRef.template.spec.containers[0]
```

This optional property specifies the name of the container in the Job from which KEDA should try to get environment properties holding secrets etc. If it is not defined, KEDA will try to get environment properties from the first container, i.e. from `.spec.JobTargetRef.template.spec.containers[0]`.
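
For instance, a sketch of a job template where a sidecar is the first container, so KEDA must be told to read the worker's environment instead:

```yaml
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: log-shipper            # hypothetical sidecar; this is containers[0]
          image: my-sidecar-image
        - name: worker                 # holds the env references used by the triggers
          image: my-worker-image
  envSourceContainerName: worker       # without this, KEDA would read the sidecar's environment
```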

---

```yaml
maxReplicaCount: 100 # Optional. Default: 100
```
@@ -124,10 +137,10 @@ spec:
name: rabbitmq-consumer
restartPolicy: Never
backoffLimit: 4
pollingInterval: 10 # Optional. Default: 30 seconds
maxReplicaCount: 30 # Optional. Default: 100
successfulJobsHistoryLimit: 3 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 2 # Optional. Default: 100. How many failed jobs should be kept.
pollingInterval: 10 # Optional. Default: 30 seconds
maxReplicaCount: 30 # Optional. Default: 100
successfulJobsHistoryLimit: 3 # Optional. Default: 100. How many completed jobs should be kept.
failedJobsHistoryLimit: 2 # Optional. Default: 100. How many failed jobs should be kept.
triggers:
- type: rabbitmq
metadata:
9 changes: 5 additions & 4 deletions content/docs/2.0/migration.md
@@ -12,6 +12,7 @@ KEDA v2 is using a new API namespace for its Custom Resources Definitions (CRD
In order to scale `Deployments` with KEDA v2, you only need to make a few modifications to existing v1 `ScaledObject` definitions so they comply with v2:
- Change the value of `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
- Rename property `spec.scaleTargetRef.deploymentName` to `spec.scaleTargetRef.name`
- Rename property `spec.scaleTargetRef.containerName` to `spec.scaleTargetRef.envSourceContainerName`
- The `deploymentName` label (in `metadata.labels.`) no longer needs to be specified on a v2 ScaledObject (it was mandatory on older versions of v1)

Please see the examples below or refer to the full [v2 ScaledObject Specification](../concepts/scaling-deployments/#scaledobject-spec)
@@ -38,14 +39,14 @@ spec:

**Example of v2 ScaledObject**
```yaml
apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
kind: ScaledObject
metadata: # <--- labels.deploymentName is not needed
metadata: # <--- labels.deploymentName is not needed
name: {scaled-object-name}
spec:
scaleTargetRef:
name: {deployment-name} # <--- Property name was changed
containerName: {container-name}
name: {deployment-name} # <--- Property name was changed
envSourceContainerName: {container-name} # <--- Property name was changed
pollingInterval: 30
cooldownPeriod: 300
minReplicaCount: 0
