---
title: Debug Apache Spark jobs running on Azure HDInsight
description: Use YARN UI, Spark UI, and Spark History server to track and debug jobs running on a Spark cluster in Azure HDInsight
services: hdinsight
author: hrasheed-msft
ms.reviewer: jasonh
ms.service: hdinsight
ms.custom: hdinsightactive
ms.topic: conceptual
ms.date: 12/05/2018
ms.author: hrasheed
---

Debug Apache Spark jobs running on Azure HDInsight

In this article, you learn how to track and debug Apache Spark jobs running on HDInsight clusters by using the Apache Hadoop YARN UI, the Spark UI, and the Spark History Server. You start a Spark job using a notebook available with the Spark cluster, Machine learning: Predictive analysis on food inspection data using MLLib. You can use the same steps to track an application that you submitted through any other approach as well, for example, spark-submit.
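If you want a standalone application to track instead of the notebook, the following is a minimal sketch of a PySpark script you could submit with spark-submit from an SSH session on the cluster head node. The file name and the sample input path are illustrative assumptions; adjust them for your cluster.

```python
# wordcount_sample.py - a minimal PySpark job to generate activity you can
# track in the YARN UI, Spark UI, and Spark History Server.
# Submit it from an SSH session on the head node, for example:
#   spark-submit --master yarn --deploy-mode cluster wordcount_sample.py
# (The file name and the input path below are illustrative assumptions.)
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("wordcount-sample").getOrCreate()

    # Read a sample text file; many HDInsight clusters include this Gutenberg
    # sample in default storage. Point this at any text file you have.
    lines = spark.read.text("/example/data/gutenberg/davinci.txt")

    # Count the number of times each word appears.
    counts = (lines.rdd
              .flatMap(lambda row: row.value.split())
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b))

    # Trigger execution and print a small sample of the results.
    for word, count in counts.take(10):
        print(word, count)

    spark.stop()
```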

Prerequisites

You must have the following:

Track an application in the YARN UI

  1. Launch the YARN UI. From the cluster's Overview blade in the Azure portal, click Yarn under Cluster dashboards.

    Launch YARN UI

    [!TIP] Alternatively, you can launch the YARN UI from the Ambari UI. To launch the Ambari UI, click Ambari home under Cluster dashboards. From the Ambari UI, click YARN, click Quick Links, click the active Resource Manager, and then click Resource Manager UI.

  2. Because you started the Spark job using Jupyter notebooks, the application has the name remotesparkmagics (this is the name of all applications started from the notebooks). Click the application ID next to the application name to get more information about the job. This launches the application view.

    Find Spark application ID

    For applications launched from Jupyter notebooks, the status is always RUNNING until you exit the notebook.

  3. From the application view, you can drill down further to find the containers associated with the application and the logs (stdout/stderr). You can also launch the Spark UI by clicking the link corresponding to the Tracking URL, as shown below. A sketch that lists the same applications through the YARN ResourceManager REST API follows these steps.

    Download container logs
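If you prefer to query YARN programmatically rather than through the UI, the ResourceManager REST API exposes the same application list. The following sketch assumes the HDInsight gateway proxies the YARN UI at the /yarnui path and accepts the cluster login (admin) credentials over basic auth; the cluster name and password are placeholders.

```python
# List running YARN applications through the ResourceManager REST API.
# Assumptions: the HDInsight gateway proxies the YARN UI at /yarnui and
# accepts the cluster login (admin) credentials over basic auth.
import requests

CLUSTER = "CLUSTERNAME"               # placeholder: your HDInsight cluster name
USER, PASSWORD = "admin", "PASSWORD"  # placeholder: cluster login credentials

url = f"https://{CLUSTER}.azurehdinsight.net/yarnui/ws/v1/cluster/apps"
resp = requests.get(url, params={"states": "RUNNING"}, auth=(USER, PASSWORD))
resp.raise_for_status()

apps = resp.json().get("apps") or {}
for app in apps.get("app", []):
    # Notebook-driven Spark jobs show up with the name "remotesparkmagics".
    print(app["id"], app["name"], app["state"], app["trackingUrl"])
```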

Track an application in the Spark UI

In the Spark UI, you can drill down into the Spark jobs that are spawned by the application you started earlier.
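Notebook cells often trigger several jobs, and their default descriptions can be hard to tell apart in the Spark UI. As an optional aid (not part of the walkthrough), you can label jobs from the notebook before triggering an action; the group ID and description strings below are arbitrary examples, and spark and sc are the session and context preconfigured in HDInsight Jupyter notebooks.

```python
# Optional: label the jobs a notebook cell generates so they are easier to
# spot in the Spark UI's Jobs and Stages tabs. Run this in a PySpark cell
# on the cluster, where `spark` and `sc` are preconfigured.
sc = spark.sparkContext

# Everything triggered after this call is grouped under "food-inspections"
# and shows the description text in the Jobs tab.
sc.setJobGroup("food-inspections", "Load and count the sample data")

df = spark.range(0, 1_000_000)   # stand-in for your real DataFrame
print(df.count())                # this action shows up as a labeled job
```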

  1. To launch the Spark UI, from the application view, click the link against the Tracking URL, as shown in the screen capture above. You can see all the Spark jobs that are launched by the application running in the Jupyter notebook.

    View Spark jobs

  2. Click the Executors tab to see processing and storage information for each executor. You can also retrieve the call stack by clicking on the Thread Dump link.

    View Spark executors

  3. Click the Stages tab to see the stages associated with the application.

    View Spark stages

    Each stage can have multiple tasks for which you can view execution statistics, as shown below.

    View Spark stages

  4. From the stage details page, you can launch DAG Visualization. Expand the DAG Visualization link at the top of the page, as shown below.

    View Spark stages DAG visualization

    A DAG, or Directed Acyclic Graph, represents the different stages in the application. Each blue box in the graph represents a Spark operation invoked from the application.

  5. From the stage details page, you can also launch the application timeline view. Expand the Event Timeline link at the top of the page, as shown below.

    View Spark stages event timeline

    This displays the Spark events in the form of a timeline. The timeline view is available at three levels: across jobs, within a job, and within a stage. The image above captures the timeline view for a given stage.

    [!TIP] If you select the Enable zooming check box, you can scroll left and right across the timeline view.

  6. Other tabs in the Spark UI provide useful information about the Spark instance as well.

    • Storage tab - If your application creates RDDs, you can find information about them in the Storage tab.
    • Environment tab - This tab provides useful information about your Spark instance, such as:
      • Scala version
      • Event log directory associated with the cluster
      • Number of executor cores for the application

    A sketch after this list shows how to read similar environment details from a notebook cell.
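Much of what the Environment tab shows can also be read from the notebook itself, which is handy when you want to capture it alongside your results. This is an optional sketch; it assumes a PySpark notebook cell where spark and sc are already defined, and that the cluster runs Spark 2.1 or later (for uiWebUrl).

```python
# Optional: inspect the same kind of information the Spark UI's Environment
# tab shows, from a PySpark notebook cell (spark and sc are preconfigured
# in HDInsight Jupyter notebooks).
sc = spark.sparkContext

print("Spark version:", sc.version)
print("Spark UI URL:", sc.uiWebUrl)   # the same UI the Tracking URL points to

# Spark configuration properties set for this application, including the
# event log directory and executor core/memory settings.
for key, value in sorted(sc.getConf().getAll()):
    if key.startswith(("spark.eventLog", "spark.executor")):
        print(key, "=", value)
```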

Find information about completed jobs using the Spark History Server

Once a job is completed, the information about the job is persisted in the Spark History Server.

  1. To launch the Spark History Server, from the Overview blade, click Spark history server under Cluster dashboards.

    Launch Spark History Server

    [!TIP] Alternatively, you can launch the Spark History Server UI from the Ambari UI. To launch the Ambari UI, from the Overview blade, click Ambari home under Cluster dashboards. From the Ambari UI, click Spark, click Quick Links, and then click Spark History Server UI.

  2. You see all the completed applications listed. Click an application ID to drill down into an application for more information. A sketch that retrieves the same list through the Spark History Server REST API follows these steps.

    Launch Spark History Server
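If you want to retrieve the list of completed applications programmatically, the Spark History Server exposes a REST API under /api/v1. The sketch below assumes the HDInsight gateway proxies the history server at the /sparkhistory path and accepts the cluster login (admin) credentials over basic auth; the cluster name and password are placeholders.

```python
# List completed Spark applications through the Spark History Server REST API.
# Assumptions: the HDInsight gateway proxies the history server at /sparkhistory
# and accepts the cluster login (admin) credentials over basic auth.
import requests

CLUSTER = "CLUSTERNAME"               # placeholder: your HDInsight cluster name
USER, PASSWORD = "admin", "PASSWORD"  # placeholder: cluster login credentials

url = f"https://{CLUSTER}.azurehdinsight.net/sparkhistory/api/v1/applications"
resp = requests.get(url, params={"status": "completed"}, auth=(USER, PASSWORD))
resp.raise_for_status()

for app in resp.json():
    # Each entry has an id you can use to drill into jobs, stages, and
    # executors, for example: <url>/<app id>/stages
    last_attempt = app["attempts"][-1]
    print(app["id"], app["name"], last_attempt["startTime"], last_attempt["duration"])
```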

See also

For data analysts

For Spark developers