The Datadog Agent collects many metrics from Airflow, including those for:
- DAGs (Directed Acyclic Graphs): Number of DAG processes, DAG bag size, etc.
- Tasks: Task failures, successes, killed tasks, etc.
- Pools: Open slots, used slots, etc.
- Executors: Open slots, queued tasks, running tasks, etc.
Metrics are collected through the Airflow StatsD plugin and sent to Datadog's DogStatsD.
In addition to metrics, the Datadog Agent also sends service checks related to Airflow's health.
All steps below are needed for the Airflow integration to work properly. Before you begin, install Datadog Agent version `>=6.17` or `>=7.17`, which includes the StatsD/DogStatsD mapping feature.
There are two parts to the Airflow integration: the Datadog Agent integration, which makes requests to a provided endpoint to report whether Airflow can connect and is healthy, and the Airflow StatsD portion, where Airflow is configured to send metrics to the Datadog Agent, which remaps the Airflow notation to Datadog notation.
Configure the Airflow check included in the Datadog Agent package to collect health metrics and service checks. Edit the `url` in the `airflow.d/conf.yaml` file, in the `conf.d/` folder at the root of your Agent's configuration directory, to start collecting your Airflow service checks. See the sample airflow.d/conf.yaml for all available configuration options.

Ensure that `url` matches your Airflow webserver `base_url`, the URL used to connect to your Airflow instance.
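A minimal `instances` entry might look like the following sketch, which assumes the Airflow webserver listens locally on the default port `8080`:

```yaml
init_config:

instances:
    ## Must match your Airflow webserver base_url
  - url: http://localhost:8080
```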
Connect Airflow to DogStatsD (included in the Datadog Agent) by using the Airflow `statsd` feature to collect metrics. For more information about the metrics reported by your Airflow version and additional configuration options, see the Airflow documentation.

Note: The presence or absence of StatsD metrics reported by Airflow can vary depending on the Airflow executor used. For example, `airflow.ti_failures/successes`, `airflow.operator_failures/successes`, and `airflow.dag.task.duration` are not reported for `KubernetesExecutor`.
- Install the Airflow StatsD plugin:

  ```shell
  pip install 'apache-airflow[statsd]'
  ```
- Update the Airflow configuration file `airflow.cfg` by adding the following configs:

  ```ini
  [scheduler]
  statsd_on = True
  # Hostname or IP of server running the Datadog Agent
  statsd_host = localhost
  # DogStatsD port configured in the Datadog Agent
  statsd_port = 8125
  statsd_prefix = airflow
  ```
- Update the Datadog Agent main configuration file `datadog.yaml` by adding the following configs:

  ```yaml
  # dogstatsd_mapper_cache_size: 1000  # default to 1000
  dogstatsd_mapper_profiles:
    - name: airflow
      prefix: "airflow."
      mappings:
        - match: "airflow.*_start"
          name: "airflow.job.start"
          tags:
            job_name: "$1"
        - match: "airflow.*_end"
          name: "airflow.job.end"
          tags:
            job_name: "$1"
        - match: "airflow.*_heartbeat_failure"
          name: airflow.job.heartbeat.failure
          tags:
            job_name: "$1"
        - match: "airflow.operator_failures_*"
          name: "airflow.operator_failures"
          tags:
            operator_name: "$1"
        - match: "airflow.operator_successes_*"
          name: "airflow.operator_successes"
          tags:
            operator_name: "$1"
        - match: 'airflow\.dag_processing\.last_runtime\.(.*)'
          match_type: "regex"
          name: "airflow.dag_processing.last_runtime"
          tags:
            dag_file: "$1"
        - match: 'airflow\.dag_processing\.last_run\.seconds_ago\.(.*)'
          match_type: "regex"
          name: "airflow.dag_processing.last_run.seconds_ago"
          tags:
            dag_file: "$1"
        - match: 'airflow\.dag\.loading-duration\.(.*)'
          match_type: "regex"
          name: "airflow.dag.loading_duration"
          tags:
            dag_file: "$1"
        - match: "airflow.dagrun.*.first_task_scheduling_delay"
          name: "airflow.dagrun.first_task_scheduling_delay"
          tags:
            dag_id: "$1"
        - match: "airflow.pool.open_slots.*"
          name: "airflow.pool.open_slots"
          tags:
            pool_name: "$1"
        - match: "airflow.pool.queued_slots.*"
          name: "airflow.pool.queued_slots"
          tags:
            pool_name: "$1"
        - match: "airflow.pool.running_slots.*"
          name: "airflow.pool.running_slots"
          tags:
            pool_name: "$1"
        - match: "airflow.pool.used_slots.*"
          name: "airflow.pool.used_slots"
          tags:
            pool_name: "$1"
        - match: "airflow.pool.starving_tasks.*"
          name: "airflow.pool.starving_tasks"
          tags:
            pool_name: "$1"
        - match: 'airflow\.dagrun\.dependency-check\.(.*)'
          match_type: "regex"
          name: "airflow.dagrun.dependency_check"
          tags:
            dag_id: "$1"
        - match: 'airflow\.dag\.(.*)\.([^.]*)\.duration'
          match_type: "regex"
          name: "airflow.dag.task.duration"
          tags:
            dag_id: "$1"
            task_id: "$2"
        - match: 'airflow\.dag_processing\.last_duration\.(.*)'
          match_type: "regex"
          name: "airflow.dag_processing.last_duration"
          tags:
            dag_file: "$1"
        - match: 'airflow\.dagrun\.duration\.success\.(.*)'
          match_type: "regex"
          name: "airflow.dagrun.duration.success"
          tags:
            dag_id: "$1"
        - match: 'airflow\.dagrun\.duration\.failed\.(.*)'
          match_type: "regex"
          name: "airflow.dagrun.duration.failed"
          tags:
            dag_id: "$1"
        - match: 'airflow\.dagrun\.schedule_delay\.(.*)'
          match_type: "regex"
          name: "airflow.dagrun.schedule_delay"
          tags:
            dag_id: "$1"
        - match: 'airflow.scheduler.tasks.running'
          name: "airflow.scheduler.tasks.running"
        - match: 'airflow.scheduler.tasks.starving'
          name: "airflow.scheduler.tasks.starving"
        - match: 'airflow.sla_email_notification_failure'
          name: 'airflow.sla_email_notification_failure'
        - match: 'airflow\.task_removed_from_dag\.(.*)'
          match_type: "regex"
          name: "airflow.dag.task_removed"
          tags:
            dag_id: "$1"
        - match: 'airflow\.task_restored_to_dag\.(.*)'
          match_type: "regex"
          name: "airflow.dag.task_restored"
          tags:
            dag_id: "$1"
        - match: "airflow.task_instance_created-*"
          name: "airflow.task.instance_created"
          tags:
            task_class: "$1"
        - match: "airflow.ti.start.*.*"
          name: "airflow.ti.start"
          tags:
            dag_id: "$1"
            task_id: "$2"
        - match: "airflow.ti.finish.*.*.*"
          name: "airflow.ti.finish"
          tags:
            dag_id: "$1"
            task_id: "$2"
            state: "$3"
  ```
- Restart the Agent.
- Restart Airflow to start sending your Airflow metrics to the Agent DogStatsD endpoint.
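To sanity-check the pipeline, you can hand-craft a StatsD packet; this sketch assumes `netcat` is available and the Agent listens on the default port `8125`. The metric name is made up, and the mapper profile above remaps it to `airflow.job.start` with tag `job_name:test`:

```shell
# Send a fake Airflow counter in StatsD wire format to DogStatsD over UDP
echo -n "airflow.test_start:1|c" | nc -u -w1 localhost 8125
```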
Use the default configuration in your `airflow.d/conf.yaml` file to activate your Airflow service checks. See the sample airflow.d/conf.yaml for all available configuration options.
Available for Agent versions >6.0
- Collecting logs is disabled by default in the Datadog Agent. Enable it in your `datadog.yaml` file:

  ```yaml
  logs_enabled: true
  ```
- Uncomment and edit this configuration block at the bottom of your `airflow.d/conf.yaml`. Change the `path` and `service` parameter values to match your environment.

  - Configuration for DAG processor manager and scheduler logs:

    ```yaml
    logs:
      - type: file
        path: "<PATH_TO_AIRFLOW>/logs/dag_processor_manager/dag_processor_manager.log"
        source: airflow
        log_processing_rules:
          - type: multi_line
            name: new_log_start_with_date
            pattern: \[\d{4}\-\d{2}\-\d{2}
      - type: file
        path: "<PATH_TO_AIRFLOW>/logs/scheduler/latest/*.log"
        source: airflow
        log_processing_rules:
          - type: multi_line
            name: new_log_start_with_date
            pattern: \[\d{4}\-\d{2}\-\d{2}
    ```
    Regular cleanup is recommended for scheduler logs, with daily log rotation.
  - Additional configuration for DAG task logs:

    ```yaml
    logs:
      - type: file
        path: "<PATH_TO_AIRFLOW>/logs/!(scheduler)/*/*.log"
        source: airflow
        log_processing_rules:
          - type: multi_line
            name: new_log_start_with_date
            pattern: \[\d{4}\-\d{2}\-\d{2}
    ```
    Caveat: By default, Airflow uses this log file template for tasks: `log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log`. The number of log files grows quickly if they are not cleaned regularly. This pattern is used by the Airflow UI to display logs individually for each executed task.

    If you do not view logs in the Airflow UI, Datadog recommends this configuration in `airflow.cfg`: `log_filename_template = dag_tasks.log`. Then log-rotate this file and use this configuration:

    ```yaml
    logs:
      - type: file
        path: "<PATH_TO_AIRFLOW>/logs/dag_tasks.log"
        source: airflow
        log_processing_rules:
          - type: multi_line
            name: new_log_start_with_date
            pattern: \[\d{4}\-\d{2}\-\d{2}
    ```
For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.
| Parameter | Value |
| --------- | ----- |
| `<INTEGRATION_NAME>` | `airflow` |
| `<INIT_CONFIG>` | blank or `{}` |
| `<INSTANCE_CONFIG>` | `{"url": "http://%%host%%:8080"}` |
Ensure that `url` matches your Airflow webserver `base_url`, the URL used to connect to your Airflow instance. Replace `localhost` with the template variable `%%host%%`.
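As a sketch, these parameters map onto Kubernetes pod annotations like the following; the pod name, container name, and image are assumptions, so match the container name to your own spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: airflow-webserver
  annotations:
    ad.datadoghq.com/airflow.check_names: '["airflow"]'
    ad.datadoghq.com/airflow.init_configs: '[{}]'
    ad.datadoghq.com/airflow.instances: '[{"url": "http://%%host%%:8080"}]'
spec:
  containers:
    - name: airflow          # must match the annotation prefix
      image: apache/airflow  # assumed image
```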
Connect Airflow to DogStatsD (included in the Datadog Agent) by using the Airflow `statsd` feature to collect metrics. For more information about the metrics reported by your Airflow version and additional configuration options, see the Airflow documentation.

Note: The presence or absence of StatsD metrics reported by Airflow can vary depending on the Airflow executor used. For example, `airflow.ti_failures/successes`, `airflow.operator_failures/successes`, and `airflow.dag.task.duration` are not reported for `KubernetesExecutor`.
Note: The environment variables used for Airflow may differ between versions. For example, Airflow `2.0.0` uses the environment variable `AIRFLOW__METRICS__STATSD_HOST`, whereas Airflow `1.10.15` uses `AIRFLOW__SCHEDULER__STATSD_HOST`.
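For Airflow 2.x, the StatsD settings live under the `[metrics]` section, so the variable names use `METRICS` instead of `SCHEDULER`. The following is a sketch mirroring the deployment example below; names other than `AIRFLOW__METRICS__STATSD_HOST` are inferred from Airflow's section-based environment variable naming:

```yaml
env:
  - name: AIRFLOW__METRICS__STATSD_ON
    value: "True"
  - name: AIRFLOW__METRICS__STATSD_PORT
    value: "8125"
  - name: AIRFLOW__METRICS__STATSD_PREFIX
    value: "airflow"
  - name: AIRFLOW__METRICS__STATSD_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```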
The Airflow StatsD configuration can be enabled with the following environment variables in a Kubernetes Deployment (shown here with the Airflow `1.10`-era `SCHEDULER`-style names):

```yaml
env:
  - name: AIRFLOW__SCHEDULER__STATSD_ON
    value: "True"
  - name: AIRFLOW__SCHEDULER__STATSD_PORT
    value: "8125"
  - name: AIRFLOW__SCHEDULER__STATSD_PREFIX
    value: "airflow"
  - name: AIRFLOW__SCHEDULER__STATSD_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```
The environment variable for the host endpoint, `AIRFLOW__SCHEDULER__STATSD_HOST`, is supplied with the node's host IP address to route the StatsD data to the Datadog Agent pod on the same node as the Airflow pod. This setup also requires the Agent to have a `hostPort` open for port `8125` and to accept non-local StatsD traffic. For more information, see DogStatsD on Kubernetes Setup.
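On the Agent side, a sketch of the matching container settings (assuming a DaemonSet-managed Agent; `DD_DOGSTATSD_NON_LOCAL_TRAFFIC` is the Agent's switch for accepting StatsD traffic from other pods):

```yaml
# Excerpt from the Datadog Agent container spec
ports:
  - containerPort: 8125
    hostPort: 8125        # expose DogStatsD on the node IP
    name: dogstatsdport
    protocol: UDP
env:
  - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
    value: "true"         # accept StatsD packets from other pods
```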
This directs the StatsD traffic from the Airflow container to a Datadog Agent ready to accept the incoming data. The last step is to update the Datadog Agent with the corresponding `dogstatsd_mapper_profiles`. Do this either by copying the `dogstatsd_mapper_profiles` provided in the Host installation into your `datadog.yaml` file, or by deploying your Datadog Agent with the equivalent JSON configuration in the environment variable `DD_DOGSTATSD_MAPPER_PROFILES`. In Kubernetes, the equivalent environment variable notation is:
```yaml
env:
  - name: DD_DOGSTATSD_MAPPER_PROFILES
    value: >
      [{"prefix":"airflow.","name":"airflow","mappings":[{"name":"airflow.job.start","match":"airflow.*_start","tags":{"job_name":"$1"}},{"name":"airflow.job.end","match":"airflow.*_end","tags":{"job_name":"$1"}},{"name":"airflow.job.heartbeat.failure","match":"airflow.*_heartbeat_failure","tags":{"job_name":"$1"}},{"name":"airflow.operator_failures","match":"airflow.operator_failures_*","tags":{"operator_name":"$1"}},{"name":"airflow.operator_successes","match":"airflow.operator_successes_*","tags":{"operator_name":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_runtime","match":"airflow\\.dag_processing\\.last_runtime\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag_processing.last_run.seconds_ago","match":"airflow\\.dag_processing\\.last_run\\.seconds_ago\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dag.loading_duration","match":"airflow\\.dag\\.loading-duration\\.(.*)","tags":{"dag_file":"$1"}},{"name":"airflow.dagrun.first_task_scheduling_delay","match":"airflow.dagrun.*.first_task_scheduling_delay","tags":{"dag_id":"$1"}},{"name":"airflow.pool.open_slots","match":"airflow.pool.open_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.queued_slots","match":"airflow.pool.queued_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.running_slots","match":"airflow.pool.running_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.used_slots","match":"airflow.pool.used_slots.*","tags":{"pool_name":"$1"}},{"name":"airflow.pool.starving_tasks","match":"airflow.pool.starving_tasks.*","tags":{"pool_name":"$1"}},{"match_type":"regex","name":"airflow.dagrun.dependency_check","match":"airflow\\.dagrun\\.dependency-check\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task.duration","match":"airflow\\.dag\\.(.*)\\.([^.]*)\\.duration","tags":{"dag_id":"$1","task_id":"$2"}},{"match_type":"regex","name":"airflow.dag_processing.last_duration","match":"airflow\\.dag_processing\\.last_duration\\.(.*)","tags":{"dag_file":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.success","match":"airflow\\.dagrun\\.duration\\.success\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.duration.failed","match":"airflow\\.dagrun\\.duration\\.failed\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dagrun.schedule_delay","match":"airflow\\.dagrun\\.schedule_delay\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.scheduler.tasks.running","match":"airflow.scheduler.tasks.running"},{"name":"airflow.scheduler.tasks.starving","match":"airflow.scheduler.tasks.starving"},{"name":"airflow.sla_email_notification_failure","match":"airflow.sla_email_notification_failure"},{"match_type":"regex","name":"airflow.dag.task_removed","match":"airflow\\.task_removed_from_dag\\.(.*)","tags":{"dag_id":"$1"}},{"match_type":"regex","name":"airflow.dag.task_restored","match":"airflow\\.task_restored_to_dag\\.(.*)","tags":{"dag_id":"$1"}},{"name":"airflow.task.instance_created","match":"airflow.task_instance_created-*","tags":{"task_class":"$1"}},{"name":"airflow.ti.start","match":"airflow.ti.start.*.*","tags":{"dag_id":"$1","task_id":"$2"}},{"name":"airflow.ti.finish","match":"airflow.ti.finish.*.*.*","tags":{"dag_id":"$1","state":"$3","task_id":"$2"}}]}]
```
See the Datadog `integrations-core` repo for an example setup.
Available for Agent versions >6.0
Collecting logs is disabled by default in the Datadog Agent. To enable it, see Kubernetes Log Collection.
| Parameter | Value |
| --------- | ----- |
| `<LOG_CONFIG>` | `{"source": "airflow", "service": "<YOUR_APP_NAME>"}` |
Run the Agent's status subcommand and look for `airflow` under the Checks section.
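For example, on a host installation (a sketch; the binary path and required privileges vary by platform):

```shell
# Show the Agent's running checks and filter for the Airflow check
sudo datadog-agent status | grep -A 10 airflow
```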
In addition, the Airflow DatadogHook can be used to interact with Datadog (see the sketch after this list):
- Send Metric
- Query Metric
- Post Event
See metadata.csv for a list of metrics provided by this check.
The Airflow check does not include any events.
See service_checks.json for a list of service checks provided by this integration.
Need help? Contact Datadog support.