How to Kill a Running Yarn Application (Spark, Hive, and Tez)


There are many situations in which you need to stop a running Yarn application, such as when the application is stuck or consuming too many resources. In this article, we will learn how to kill a running Yarn application.

Yarn is a popular open-source resource manager responsible for allocating cluster resources. You can kill a running Yarn application with the command “yarn application -kill <application ID>”, using the application ID obtained from “yarn application -list”. Be cautious, though: killing an application may result in data loss or corruption.

We will show you some of the easiest ways to kill a running Yarn application using the command-line interface.

  • Yarn kill command
  • A hard kill from the server (kill -9) – this is for a specific case

Using the Yarn kill command

Using the “yarn kill” command, we can kill a Spark application; the same applies to Hive, Tez, and MapReduce jobs, as all of them are managed by Yarn by default:

yarn application -kill <application ID>
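A small wrapper can guard against passing a malformed ID to the kill command. This is a sketch of ours, not part of Yarn; it only relies on the fact that Yarn application IDs follow the pattern application_&lt;clusterTimestamp&gt;_&lt;sequence&gt;:

```shell
# Sanity-check the ID format before invoking "yarn application -kill"
kill_yarn_app() {
  local app_id="$1"
  # Yarn application IDs look like application_1669782628024_0001
  if ! echo "$app_id" | grep -qE '^application_[0-9]+_[0-9]+$'; then
    echo "invalid application ID: $app_id" >&2
    return 1
  fi
  yarn application -kill "$app_id"
}
```

Calling it with anything that does not look like an application ID fails fast instead of handing garbage to Yarn.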

Before running it, we need to find the application ID.

How to find the application ID?

To kill an application, we first need to find its application ID. We can use any of the methods below.
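Besides the web UIs below, “yarn application -list” prints one row per application with the ID in the first column, so awk can pull the IDs out. The sample row here is inlined (matching the run shown later in this article) so the parsing can be demonstrated without a live cluster:

```shell
# On a live cluster you would run:
#   yarn application -list -appStates RUNNING
# Each data row starts with the application ID; awk extracts the first column.
sample_row='application_1669782628024_0001   Spark Pi   SPARK   systest   root.users.systest   RUNNING'
app_id=$(echo "$sample_row" | awk '{print $1}')
echo "$app_id"
```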

Using RM WebUI

You can get the application ID of a Spark (or any other) job from the Yarn ResourceManager web UI, as shown below.


application id = application_1669782628024_0001

Using the Spark history server

Similarly, for Spark, we can get the application ID from the Spark history server web UI, as shown below.

application id = application_1669782628024_0001

Spark driver or console logs

If you are running Spark in client mode, the application ID appears in the client log, and you can fetch it from there as shown below.


$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client  /opt/cloudera/parcels/CDH/jars/spark-examples*.jar 1 1
22/11/30 05:35:47 INFO spark.SparkContext: Running Spark version 2.4.0-cdh6.2.x-SNAPSHOT
22/11/30 05:35:48 INFO logging.DriverLogger: Added a local log appender at: /tmp/spark-75af077f-83b5-4e58-a156-a44bcbcfa7a2/__driver_logs__/driver.log
22/11/30 05:35:48 INFO spark.SparkContext: Submitted application: Spark Pi
22/11/30 05:35:58 INFO yarn.Client: 
	 client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
	 diagnostics: AM container is launched, waiting for AM container to Register with RM
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: root.users.systest
	 start time: 1669786556198
	 final status: UNDEFINED
	 tracking URL: https://:8090/proxy/application_1669782628024_0001/
	 user: systest
22/11/30 05:35:59 INFO yarn.Client: Application report for application_1669782628024_0001 (state: ACCEPTED)

application id = application_1669782628024_0001
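In scripts, the ID can be grepped straight out of the client log. Here the matching “Application report” line from the run above is inlined so the extraction is self-contained; in practice you would grep the actual driver log file:

```shell
# The "Application report" line from the client log contains the ID
log_line='22/11/30 05:35:59 INFO yarn.Client: Application report for application_1669782628024_0001 (state: ACCEPTED)'
# grep -o prints only the matching part; -E enables the extended regex
app_id=$(echo "$log_line" | grep -oE 'application_[0-9]+_[0-9]+')
echo "$app_id"
```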

Hard kill “kill -9”

– If you are running the Spark application in client mode, you can kill the Spark driver process; alternatively, pressing “Ctrl+C” while the driver is running will kill the Spark application.

– “Ctrl+C” cancels the ongoing Spark context, which results in the Spark application shutting down:


22/11/30 05:45:30 INFO scheduler.DAGScheduler: Job 0 failed: reduce at SparkPi.scala:38, took 272.194177 s
22/11/30 05:45:30 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) failed in 269.644 s due to Stage cancelled because SparkContext was shut down
Exception in thread "main" 22/11/30 05:45:30 INFO scheduler.TaskSetManager: Starting task 45797.0 in stage 0.0 (TID 45797,, executor 15, partition 45797, PROCESS_LOCAL, 7746 bytes)
22/11/30 05:45:30 INFO scheduler.TaskSetManager: Finished task 45775.0 in stage 0.0 (TID 45775) in 27 ms on (executor 15) (45775/1000000)
org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down

– Using “kill -9” on the Spark driver process kills it, causing the Spark context to fail and the application to shut down.

Note: Killing the AM container or an executor will not cause application failure, as Spark will recreate the container for a configured number of attempts.
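What “kill -9” (SIGKILL) does to the driver can be seen with any placeholder process. In this sketch a background “sleep” stands in for the Spark driver, and the shell reports exit status 137 (128 + signal 9) once it is killed:

```shell
# A long-running placeholder process stands in for the Spark driver
sleep 300 &
driver_pid=$!

# SIGKILL cannot be caught or ignored; the process dies immediately
kill -9 "$driver_pid"

# Reap the child; its exit status encodes the signal: 128 + 9 = 137
status=0
wait "$driver_pid" 2>/dev/null || status=$?
echo "exit status after SIGKILL: $status"
```

This is also why a SIGKILLed driver gets no chance to clean up, which is where the data-loss risk mentioned above comes from.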


In summary, killing a running Yarn application is a simple process that can be done from the command-line interface. By following the steps in this article, you can stop an application and free up the resources it was using. However, note that killing a Yarn application may result in data loss or corruption, so use these commands with caution.

Good Luck with your Learning !!

Related Topics:

Resolve the “Container killed by YARN for exceeding memory limits” Issue in Hive, Tez, and Spark jobs

Resolve the “java.lang.OutOfMemoryError: Java heap space” issue in Spark and Hive(Tez and MR)
