Monitor your primary and standby HDFS NameNodes to know when your cluster falls into a precarious state: when you're down to one NameNode remaining, or when it's time to add more capacity to the cluster. This Agent check collects metrics for remaining capacity, corrupt/missing blocks, dead DataNodes, filesystem load, under-replicated blocks, total volume failures (across all DataNodes), and many more.
Use this check (`hdfs_namenode`) and its counterpart check (`hdfs_datanode`), not the older two-in-one check (`hdfs`); that check is deprecated.
Follow the instructions below to install and configure this check for an Agent running on a host. For containerized environments, see the Autodiscovery Integration Templates for guidance on applying these instructions.
The HDFS NameNode check is included in the Datadog Agent package, so you don't need to install anything else on your NameNodes.
1. The Agent collects metrics from the NameNode's JMX remote interface. The interface is disabled by default, so enable it by setting the following option in `hadoop-env.sh` (usually found in `$HADOOP_HOME/conf`):

   ```shell
   export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=50070 $HADOOP_NAMENODE_OPTS"
   ```
2. Restart the NameNode process to enable the JMX interface.
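As noted in the configuration sample below, the check retrieves metrics over HTTP(S) from the NameNode's JMX interface, which returns beans as JSON. The following is a minimal Python sketch of parsing such a payload — the sample response is hypothetical and trimmed, and this is an illustration of the data shape, not the Agent's actual implementation:

```python
import json

# Hypothetical, trimmed sample resembling the JSON a NameNode's JMX
# HTTP endpoint returns (a real response contains many more beans).
SAMPLE_JMX_RESPONSE = json.dumps({
    "beans": [
        {
            "name": "Hadoop:service=NameNode,name=FSNamesystem",
            "MissingBlocks": 0,
            "CorruptBlocks": 0,
            "UnderReplicatedBlocks": 3,
            "CapacityRemaining": 1099511627776,
        }
    ]
})

def fs_namesystem_metrics(payload: str) -> dict:
    """Return the FSNamesystem bean's attributes from a JMX JSON payload."""
    for bean in json.loads(payload).get("beans", []):
        if bean.get("name") == "Hadoop:service=NameNode,name=FSNamesystem":
            return bean
    return {}

metrics = fs_namesystem_metrics(SAMPLE_JMX_RESPONSE)
print(metrics["UnderReplicatedBlocks"])  # 3
```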
Follow the instructions below to configure this check for an Agent running on a host. For containerized environments, see the Containerized section.
1. Edit the `hdfs_namenode.d/conf.yaml` file in the `conf.d/` folder at the root of your Agent's configuration directory. See the sample hdfs_namenode.d/conf.yaml for all available configuration options:

   ```yaml
   init_config:

   instances:
     ## @param hdfs_namenode_jmx_uri - string - required
     ## The HDFS NameNode check retrieves metrics from the HDFS NameNode's JMX
     ## interface via HTTP(S) (not a JMX remote connection). This check must be
     ## installed on an HDFS NameNode. The HDFS NameNode JMX URI is composed of
     ## the NameNode's hostname and port.
     ##
     ## The hostname and port can be found in the hdfs-site.xml conf file under
     ## the property dfs.namenode.http-address
     ## https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
     #
     - hdfs_namenode_jmx_uri: http://localhost:50070
   ```
For containerized environments, see the Autodiscovery Integration Templates for guidance on applying the parameters below.
| Parameter | Value |
|---|---|
| `<INTEGRATION_NAME>` | `hdfs_namenode` |
| `<INIT_CONFIG>` | blank or `{}` |
| `<INSTANCE_CONFIG>` | `{"hdfs_namenode_jmx_uri": "https://%%host%%:50070"}` |
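For example, on Docker these parameters can be supplied as container labels using Datadog's Autodiscovery label names. This is an illustrative sketch; adjust the port to your NameNode's actual HTTP address:

```yaml
# docker-compose snippet (illustrative) applying the template above as labels
labels:
  com.datadoghq.ad.check_names: '["hdfs_namenode"]'
  com.datadoghq.ad.init_configs: '[{}]'
  com.datadoghq.ad.instances: '[{"hdfs_namenode_jmx_uri": "https://%%host%%:50070"}]'
```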
Available for Agent >6.0
1. Collecting logs is disabled by default in the Datadog Agent. Enable it in the `datadog.yaml` file:

   ```yaml
   logs_enabled: true
   ```
2. Add this configuration block to your `hdfs_namenode.d/conf.yaml` file to start collecting your NameNode logs:

   ```yaml
   logs:
     - type: file
       path: /var/log/hadoop-hdfs/*.log
       source: hdfs_namenode
       service: <SERVICE_NAME>
   ```

   Change the `path` and `service` parameter values to match your environment.

3. Restart the Agent.
Run the Agent's status subcommand and look for `hdfs_namenode` under the Checks section.
See metadata.csv for a list of metrics provided by this integration.
The HDFS NameNode check does not include any events.
**hdfs.namenode.jmx.can_connect**

Returns `CRITICAL` if the Agent cannot connect to the NameNode's JMX interface for any reason (e.g. wrong port provided, timeout, unparseable JSON response).
Need help? Contact Datadog support.