Understanding YARN Architecture: An In-Depth Overview – IQCode

YARN Architecture Overview

YARN, or Yet Another Resource Negotiator, is the Apache Hadoop component that manages resources and schedules tasks in Hadoop clusters. The YARN architecture comprises the Resource Manager, Node Managers, Containers, and Application Masters. The Resource Manager and Node Managers are the core components responsible for scheduling and managing Hadoop jobs on the cluster: the Resource Manager allocates cluster resources and schedules applications, while a Node Manager on each machine launches and monitors containers on behalf of the Resource Manager. YARN also offers high-availability modes that make it an efficient and reliable resource negotiator.

Understanding YARN Architecture

YARN operates as an operating system for a cluster of computers. The cluster is a group of connected computers that act together as a single system. YARN arbitrates the resources of the cluster, such as computing power, memory, disk space, and network bandwidth, among all the jobs running on the system. Just like an operating system allocates resources among processes, YARN allocates cluster resources among competing jobs.
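The operating-system analogy above can be illustrated with a minimal sketch. This is a hypothetical toy in Python, not real YARN code: it shares a fixed pool of cluster memory among competing jobs, granting each request only as long as the pool holds out, the way an OS arbitrates a finite resource among processes.

```python
# Toy sketch (not the real YARN API): arbitrate a fixed pool of cluster
# memory among competing jobs, much like an OS shares CPU among processes.

def arbitrate(total_memory_mb, requests):
    """Grant each job its request, capped by what remains in the pool."""
    grants = {}
    remaining = total_memory_mb
    for job, asked in requests.items():
        granted = min(asked, remaining)
        grants[job] = granted
        remaining -= granted
    return grants

# Three jobs compete for a 16 GB cluster; the last request is only
# partially satisfied because the pool is exhausted.
print(arbitrate(16384, {"etl": 8192, "report": 4096, "ml": 8192}))
# → {'etl': 8192, 'report': 4096, 'ml': 4096}
```

Real YARN is far more sophisticated (queues, priorities, preemption), but the core idea is the same: a single arbiter decides how much of the cluster each competing job receives.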

Hadoop YARN Architecture and Components

In Hadoop, YARN (Yet Another Resource Negotiator) manages cluster resources and schedules jobs. It relies on two long-running daemons: the Resource Manager (RM) and the Node Manager (NM). The RM is the master, and the Node Managers are the workers, with one per machine. Together, the two components form the data-computation framework.

Resource Manager

The Resource Manager is the ultimate authority that arbitrates resources among all applications in the system. It has two main components:

* Applications Manager: accepts job submissions and initiates a container for the ApplicationMaster entity. It also restarts an ApplicationMaster container if it fails.
* Scheduler: allocates resources such as CPU, memory, disk, and network to running applications, subject to the restrictions imposed by queues and capacities. It does not monitor applications and does not restart them after application or hardware failures.
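The Scheduler's capacity restrictions can be sketched with a small toy model. This is a hypothetical Python illustration, not Hadoop's actual CapacityScheduler: each queue owns a fraction of cluster memory, and a request is granted only while its queue has headroom.

```python
# Toy sketch of capacity-style scheduling (hypothetical; not Hadoop's
# CapacityScheduler): each queue owns a fraction of cluster memory, and a
# request is admitted only if its queue still has headroom.

class QueueScheduler:
    def __init__(self, cluster_memory_mb, queue_capacities):
        # queue_capacities maps queue name -> fraction of cluster memory
        self.limits = {q: int(cluster_memory_mb * frac)
                       for q, frac in queue_capacities.items()}
        self.used = {q: 0 for q in queue_capacities}

    def allocate(self, queue, memory_mb):
        """Grant the request if the queue stays within its capacity."""
        if self.used[queue] + memory_mb <= self.limits[queue]:
            self.used[queue] += memory_mb
            return True
        return False  # rejected; the Scheduler never retries or restarts apps

sched = QueueScheduler(10240, {"prod": 0.7, "dev": 0.3})
print(sched.allocate("prod", 4096))  # True: within prod's 7168 MB share
print(sched.allocate("dev", 4096))   # False: exceeds dev's 3072 MB share
```

Note how the rejected request is simply refused: as the bullet above says, the Scheduler enforces limits but takes no responsibility for monitoring or restarting the application.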

In YARN, a container is a Unix process (on Linux, its resource limits can be enforced with cgroups). Containers host map and reduce tasks, and a single cluster machine may run multiple containers.

The Resource Manager is a single point of failure: if the RM machine goes down, no new jobs can be scheduled. High availability for YARN, introduced in Hadoop 2.4, mitigates this. A pair of Resource Managers operates in an Active/Standby configuration, and the standby RM becomes active if the active RM fails. An administrator can trigger the transition from standby to active manually, or it can happen automatically via a ZooKeeper-based leader election.
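The Active/Standby failover described above can be sketched as a tiny state machine. This is a conceptual Python toy, not YARN's implementation (real RM HA performs leader election through ZooKeeper): exactly one RM is active, and the death of the active RM promotes a standby.

```python
# Toy sketch of Active/Standby failover (conceptual only; real YARN HA
# elects the active RM through ZooKeeper).

class ResourceManagerHA:
    def __init__(self, names):
        self.rms = {n: "standby" for n in names}
        self.activate(names[0])  # exactly one RM is active at a time

    def activate(self, name):
        for n in self.rms:
            self.rms[n] = "active" if n == name else "standby"

    def fail(self, name):
        """If the active RM dies, promote the first surviving standby."""
        dead = self.rms.pop(name)
        if dead == "active" and self.rms:
            self.activate(next(iter(self.rms)))

ha = ResourceManagerHA(["rm1", "rm2"])
ha.fail("rm1")         # the active RM crashes...
print(ha.rms["rm2"])   # → active   ...and the standby takes over
```

The invariant the sketch preserves is the important one: at most one active RM, so clients always have a single authority to talk to.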


Node Manager

The Node Manager is a per-machine daemon that runs on every node in the cluster. It launches containers, monitors their resource utilization, and reports usage statistics to the Scheduler component of the Resource Manager.

Each node contributes CPU, RAM, and other resources, which are allocated to containers. The life cycle of a YARN container is governed by its container launch context, a record describing the commands, environment, and resources the container needs on its host.

Application Master

The Application Master keeps track of a running application and oversees its execution. When a job is submitted to the framework, an Application Master is launched for it. The Application Master negotiates resources from the Resource Manager and works with the Node Managers, which launch the containers that execute and monitor the job's tasks.
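The negotiation loop above can be sketched in a few lines. This is a hypothetical Python toy, not Hadoop's real AM API (that would be the Java `AMRMClient`): the AM repeatedly asks the RM for containers and assigns a pending task to each container it is granted.

```python
# Toy sketch of an Application Master's negotiation loop (hypothetical;
# real AMs use Hadoop's Java AMRMClient): ask the RM for containers,
# then hand pending tasks to the containers that were granted.

def run_application(tasks, rm_grant):
    """rm_grant(n) stands in for the RM; it returns up to n container ids."""
    pending = list(tasks)
    assignments = {}
    while pending:
        containers = rm_grant(len(pending))    # request what we still need
        if not containers:
            break                              # cluster full; give up for now
        for cid in containers:
            assignments[cid] = pending.pop(0)  # an NM would launch the task
    return assignments

# Fake RM that grants at most two containers per request.
ids = iter(range(100))
grant_two = lambda n: [next(ids) for _ in range(min(n, 2))]
print(run_application(["map1", "map2", "reduce1"], gr_ := grant_two))
# → {0: 'map1', 1: 'map2', 2: 'reduce1'}
```

The key point the sketch captures is the division of labor: the AM tracks what the application still needs, while the RM decides how much of the cluster to grant on each round.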

Features of YARN Architecture

YARN is popular for the following reasons:

* YARN’s Resource Manager scales Hadoop to clusters of thousands of nodes.
* YARN maintains compatibility with Hadoop 1.0, so existing map-reduce applications run without disruption.
* YARN allocates cluster resources dynamically, resulting in better cluster utilization.
* YARN supports multi-tenancy, allowing organizations to run multiple processing engines simultaneously.

Hadoop YARN Application Workflow

To run a YARN application, a client asks the Resource Manager (RM) to create an Application Master (AM) process. The RM finds a suitable Node Manager to launch a container hosting the AM. The AM represents the client's job and can either execute the job itself or request additional resources from the RM. In the latter case, the RM directs other Node Managers to launch containers that run the distributed computation as close to the input data as possible, honoring data-locality preferences. YARN applications can run from seconds to days, and applications map to jobs in three ways: one job per application; several jobs per application (reusing containers and caching intermediate data between jobs); or a perpetually running application that continuously coordinates multiple jobs. An always-running Application Master reduces latency because it eliminates the overhead of starting a new one for each job.
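The data-locality preference mentioned in the workflow can be sketched as a simple placement rule. This is a conceptual Python toy, not YARN's placement logic (the hypothetical labels `node-local` and `off-switch` echo the locality levels YARN distinguishes): prefer a free node that already holds the input split, and fall back to any free node otherwise.

```python
# Toy sketch of data-locality preference (conceptual, not YARN's actual
# placement code): prefer a node that already stores the input split;
# otherwise fall back to any node with free capacity.

def place_container(split_locations, free_nodes):
    """Return (node, locality) for one container request."""
    for node in split_locations:         # try nodes holding the data first
        if node in free_nodes:
            return node, "node-local"
    if free_nodes:                       # data nodes busy: run anywhere free
        return next(iter(free_nodes)), "off-switch"
    return None, None                    # no capacity at all

free = {"nodeB", "nodeC"}
print(place_container(["nodeA", "nodeB"], free))  # → ('nodeB', 'node-local')
print(place_container(["nodeA"], free))           # falls back to a free node
```

Moving the computation to the data rather than the data to the computation is what keeps network traffic low in large clusters, which is why the RM weighs locality when it chooses Node Managers.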

YARN for Scalable, Fault-tolerant Distributed Applications

YARN is a distributed system designed to scale and balance loads. It’s perfect for parallel execution of large task sets. YARN facilitates application scaling with its resource allocation concept. You can allocate resources to tasks based on their importance. YARN also offers metrics, monitoring, and real-time alerts, giving you real-time insight into your application and notifying you of any task failures.
