This check monitors Flink. Datadog collects Flink metrics through Flink's Datadog HTTP Reporter, which uses Datadog's HTTP API.
The Flink check is included in the Datadog Agent package. No additional installation is needed on your server.
- Configure the Datadog HTTP Reporter in Flink.

  Copy `<FLINK_HOME>/opt/flink-metrics-datadog-<DATADOG_REPORTER_VERSION>.jar` into your `<FLINK_HOME>/lib` folder. In your `<FLINK_HOME>/conf/flink-conf.yaml`, add these lines, replacing `<DATADOG_API_KEY>` with your Datadog API key:

  ```yaml
  metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
  metrics.reporter.dghttp.apikey: <DATADOG_API_KEY>
  metrics.reporter.dghttp.dataCenter: {{< region-param key="dd_datacenter" >}}
  ```
- Re-map system scopes in your `<FLINK_HOME>/conf/flink-conf.yaml`:

  ```yaml
  metrics.scope.jm: flink.jobmanager
  metrics.scope.jm.job: flink.jobmanager.job
  metrics.scope.tm: flink.taskmanager
  metrics.scope.tm.job: flink.taskmanager.job
  metrics.scope.task: flink.task
  metrics.scope.operator: flink.operator
  ```

  **Note**: The system scopes must be remapped for your Flink metrics to be supported; otherwise, they are submitted as custom metrics (see the examples after this list).
- Configure additional tags in `<FLINK_HOME>/conf/flink-conf.yaml`. Here is an example of custom tags:

  ```yaml
  metrics.reporter.dghttp.tags: <KEY1>:<VALUE1>, <KEY2>:<VALUE2>
  ```

  **Note**: By default, any variables in metric names are sent as tags, so there is no need to add custom tags for `job_id`, `task_id`, etc.

- Restart Flink to start sending your Flink metrics to Datadog.
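To make the scope remapping concrete, here is a sketch of how one JobManager system metric is named under Flink's default scope format versus the remapped one (this assumes Flink's documented defaults, and that scope variables such as `<host>` are sent as tags rather than kept in the name, per the note above):

```text
<host>.jobmanager   ->  jobmanager.Status.JVM.CPU.Load         (custom metric)
flink.jobmanager    ->  flink.jobmanager.Status.JVM.CPU.Load   (supported metric)
```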
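Once the reporter is configured, metrics registered in user code through Flink's metrics API are shipped to Datadog alongside the system metrics, under the remapped scopes. A minimal sketch of such an operator (the class and counter names are illustrative):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Illustrative operator: the counter registered here is picked up by the
// configured DatadogHttpReporter and reported under the remapped operator
// scope (flink.operator).
public class CountingMapper extends RichMapFunction<String, String> {
    private transient Counter recordsSeen;

    @Override
    public void open(Configuration parameters) {
        // Register a counter on this operator's metric group.
        recordsSeen = getRuntimeContext().getMetricGroup().counter("recordsSeen");
    }

    @Override
    public String map(String value) {
        recordsSeen.inc();
        return value;
    }
}
```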
*Available for Agent >6.0*
- Flink uses the `log4j` logger by default. To activate logging to a file and customize the format, edit the `log4j.properties`, `log4j-cli.properties`, `log4j-yarn-session.properties`, or `log4j-console.properties` file. See Flink's repository for default configurations. For example, `log4j.properties` contains this configuration by default:

  ```properties
  log4j.appender.file=org.apache.log4j.FileAppender
  log4j.appender.file.file=${log.file}
  log4j.appender.file.append=false
  log4j.appender.file.layout=org.apache.log4j.PatternLayout
  log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
  ```
- By default, the integration pipeline supports the following conversion pattern:

  ```text
  %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
  ```

  An example of a valid timestamp is: `2020-02-03 18:43:12,251` (a sample log line follows this list).

  Clone and edit the integration pipeline if you have a different format.
- Collecting logs is disabled by default in the Datadog Agent. Enable it in your `datadog.yaml` file:

  ```yaml
  logs_enabled: true
  ```
- Uncomment and edit the logs configuration block in your `flink.d/conf.yaml` file. Change the `path` and `service` parameter values based on your environment. See the sample flink.d/conf.yaml for all available configuration options.

  ```yaml
  logs:
    - type: file
      path: /var/log/flink/server.log
      source: flink
      service: myapp
      # To handle multi-line logs that start with yyyy-mm-dd, use the following pattern:
      # log_processing_rules:
      #   - type: multi_line
      #     pattern: \d{4}\-(0?[1-9]|1[012])\-(0?[1-9]|[12][0-9]|3[01])
      #     name: new_log_start_with_date
  ```
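For reference, a line written with the default conversion pattern above looks roughly like the following (the logger name and message are illustrative). Lines in this shape are parsed by the integration pipeline as-is:

```text
2020-02-03 18:43:12,251 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 - Starting execution of job WordCount.
```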
Run the Agent's `status` subcommand and look for `flink` under the Checks section.
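For example, on a Linux host the subcommand is invoked as shown below (the exact invocation varies by platform):

```shell
sudo datadog-agent status
```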
See metadata.csv for a list of metrics provided by this integration.
Flink does not include any service checks.
Flink does not include any events.
Need help? Contact Datadog support.