Add reference link for KubernetesPodOperator in kubernetes.rst (apache#11782)

This makes it easy to go to the class definition and find the arguments/params that can be passed to the Operator
kaxil authored Oct 23, 2020
1 parent 4d04bb6 commit cc9c4c6
Showing 1 changed file with 10 additions and 5 deletions.
15 changes: 10 additions & 5 deletions docs/howto/operator/kubernetes.rst
@@ -40,13 +40,15 @@ you to create and run Pods on a Kubernetes cluster.

How does this operator work?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` uses the
Kubernetes API to launch a pod in a Kubernetes cluster. By supplying an
image URL and a command with optional arguments, the operator uses the Kubernetes Python client to generate a Kubernetes API
request that dynamically launches those individual Pods.
Users can specify a kubeconfig file using the ``config_file`` parameter; otherwise, the operator defaults
to ``~/.kube/config``.

The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` enables task-level
resource configuration and is optimal for custom Python
dependencies that are not available through the public PyPI repository. It also allows users to supply a template
YAML file using the ``pod_template_file`` parameter.
Ultimately, it allows Airflow to act as a job orchestrator - no matter the language those jobs are written in.
@@ -73,15 +75,17 @@ and type safety. While we have removed almost all Kubernetes convenience classes

Difference between ``KubernetesPodOperator`` and Kubernetes object spec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` can be considered
a substitute for a Kubernetes object spec definition that can be run in the Airflow scheduler in the DAG context.
When using the operator, there is no need to create the
equivalent YAML/JSON object spec for the Pod you would like to run.
A YAML file can still be provided with the ``pod_template_file`` parameter, or the Pod spec can be constructed
in Python and passed via the ``full_pod_spec`` parameter, which requires a Kubernetes ``V1Pod``.

How to use private images (container registry)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, the :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` will
look for images hosted publicly on Docker Hub.
To pull images from a private registry (such as ECR, GCR, Quay, or others), you must create a
Kubernetes Secret that holds the credentials for that registry, and then reference the Secret
in the ``image_pull_secrets`` parameter.
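Such a Secret is typically created with ``kubectl``. A sketch with placeholder values - the Secret name (here ``my-registry-secret``) is what you would then reference via ``image_pull_secrets``:

```shell
# Create a docker-registry Secret holding the registry credentials.
# Server, username, and password below are placeholders.
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
```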
@@ -104,7 +108,8 @@ Then use it in your pod like so:

How does XCom work?
^^^^^^^^^^^^^^^^^^^
The :class:`~airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` handles
XCom values differently than other operators. To pass an XCom value from your Pod, you must set ``do_xcom_push``
to ``True``. This creates a sidecar container that runs alongside the Pod, and the Pod must write the XCom value
to the ``/airflow/xcom/return.json`` path.
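The contract is simply "write JSON to ``/airflow/xcom/return.json``"; the sidecar reads that file back and deserializes it into the XCom value. The round trip can be sketched outside of Kubernetes like this, using a temporary directory in place of ``/airflow/xcom``:

```python
import json
import os
import tempfile

# Simulate the XCom contract: the main container writes JSON to
# /airflow/xcom/return.json, and the sidecar reads it back before
# the Pod is torn down. A temp dir stands in for /airflow/xcom here.
xcom_dir = tempfile.mkdtemp()
return_path = os.path.join(xcom_dir, "return.json")

# What the Pod's main container would do:
with open(return_path, "w") as f:
    json.dump({"rows_processed": 42}, f)

# What the sidecar container does:
with open(return_path) as f:
    xcom_value = json.load(f)

print(xcom_value)  # {'rows_processed': 42}
```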

