A Discussion of the Hadoop YARN Configuration Property yarn.nodemanager.resource.local-dirs

Date: 2021-11-22 20:41:01

1. What is the recommended value for "yarn.nodemanager.resource.local-dirs"?

We only have one value (directory) configured for the above property, which has a size of 200GB.

Our Hive jobs' map/reduce output fills this folder up, and YARN places the node on the blocklist. Moving to the Tez engine and/or increasing the quota size may fix this, but we'd like to know the recommended value.

Best answer

Answer from Sourygna Luangsay · October 28, 2015, 08:04

If you use the same partitions for YARN intermediate data as for the HDFS blocks, then you might also consider setting the dfs.datanode.du.reserved property, which reserves some space on those partitions for non-HDFS use (such as intermediate YARN data).
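For illustration only (the 20 GB figure below is an assumption; size it to your own non-HDFS needs), such a reservation in hdfs-site.xml might look like this:

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- bytes reserved per volume for non-HDFS use; 20 GB is an arbitrary example -->
  <value>21474836480</value>
</property>

Note that the value is in bytes and applies per volume, so it is reserved on each configured data directory separately.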

One baseline recommendation I saw in my first Hadoop training, a long time ago, was to dedicate 25% of the "data disks" to that kind of intermediate data.

I guess the optimal answer should consider the maximum amount of intermediate data you can get at the same time (when launching a job, do you use all of the HDFS data as input?) and size the space for yarn.nodemanager.resource.local-dirs accordingly.

I would also recommend turning on the mapreduce.map.output.compress property in order to reduce the size of the intermediate data.
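A sketch of what that could look like in mapred-site.xml (the Snappy codec below is our assumption; use whichever compression codec is installed on your cluster):

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <!-- hypothetical choice; LZ4 or another installed codec works as well -->
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>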


Answer from Jean-Philippe Player · October 27, 2015, 20:58

You would assign one folder to each of the DataNode disks, closely mapping dfs.datanode.data.dir. On a 12-disk system you would have 12 yarn.nodemanager.local-dirs locations.

2. Though DataFlow can be used with an out-of-the-box Hadoop installation, there are a couple of configuration properties that may improve DataFlow/Hadoop performance.

Resolution

Using the O/S file system (e.g., /tmp or /var) can be problematic, especially if applications log a lot of information or require large local files. So we have two properties to overcome this bottleneck.
The first is yarn.nodemanager.local-dirs. This setting specifies the directories to use as base directories for the containers run within YARN.

For each application and container created in YARN, a set of directories will be created underneath these local directories. These are then cleaned up when the application completes.
 
Here’s the setting from the yarn-site.xml file on one of our clusters. Note we have eight data disks per node on these clusters and create a directory for YARN on each data filesystem.

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/hadoop/hdfs/data1/hadoop/yarn/local,/hadoop/hdfs/data2/hadoop/yarn/local,/hadoop/hdfs/data3/hadoop/yarn/local,/hadoop/hdfs/data4/hadoop/yarn/local,/hadoop/hdfs/data5/hadoop/yarn/local,/hadoop/hdfs/data6/hadoop/yarn/local,/hadoop/hdfs/data7/hadoop/yarn/local,/hadoop/hdfs/data8/hadoop/yarn/local</value>
  <source>yarn-site.xml</source>
</property>

The second is yarn.nodemanager.log-dirs. Much like the local-dirs property, this setting specifies where container log files should go on the local disk. YARN spreads the load around if you specify multiple directories.
And here’s a sample setting:

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/hadoop/hdfs/data1/hadoop/yarn/log,/hadoop/hdfs/data2/hadoop/yarn/log,/hadoop/hdfs/data3/hadoop/yarn/log,/hadoop/hdfs/data4/hadoop/yarn/log,/hadoop/hdfs/data5/hadoop/yarn/log,/hadoop/hdfs/data6/hadoop/yarn/log,/hadoop/hdfs/data7/hadoop/yarn/log,/hadoop/hdfs/data8/hadoop/yarn/log</value>
  <source>yarn-site.xml</source>
</property>

Another YARN property you want to validate is yarn.nodemanager.resource.memory-mb. This setting specifies the amount of memory YARN is allowed to allocate per worker node.

YARN will only allocate this much memory in total to containers. So it’s important to set this to some value less than the physical memory per worker node.

HDP appears to automatically pick 75% of the physical memory for this setting as our machines have 16GB of RAM each.
Here’s an example:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value></value>
  <source>yarn-site.xml</source>
</property>
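For a 16 GB worker at the 75% ratio mentioned above, the value works out to 16384 MB × 0.75 = 12288 MB, so a filled-in version might look like the following (the figure is derived from that ratio, not taken from the cluster's actual configuration):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- assumed: 75% of a 16 GB node = 12288 MB -->
  <value>12288</value>
</property>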

Of course, an NFS mount can also be considered; relevant material follows.

3. How can I change yarn.nodemanager.local-dirs to point to file:/// (a high-performance NFS mount point)?

Hi, I'm trying to change yarn.nodemanager.local-dirs to point to "file:///fast_nfs/yarn/local". This is a high-performance NFS mount point that all the nodes in my cluster share.

When I try to change it in Ambari I can't and the message "Must be a slash or drive at the start, and must not contain white spaces" is displayed.

If I manually change /etc/hadoop/conf/yarn-site.xml on all the nodes, then after restarting YARN the "file:///" prefix is removed from that option.

I want to have all the shuffle happening in my high-performance NFS array instead of in HDFS.

How can I change this behaviour in HDP?

@Raul Pingarrón

The culprit is the "file:///" prefix. You should instead create a mount point such as /fast_nfs/yarn/local and list the bare path, hence the message "Must be a slash or drive ........", like the list below:

/hadoop/yarn/local,/opt/hadoop/yarn/local,/usr/hadoop/yarn/local,/var/hadoop/yarn/local
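Assuming the NFS array really is mounted at the same path on every node, the property would then be set with the bare path rather than a file:/// URI; a sketch:

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- bare local path; no file:/// scheme prefix -->
  <value>/fast_nfs/yarn/local</value>
</property>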

Hope that helps

4. How to set yarn.nodemanager.local-dirs on an M3 cluster to write to MapR FS

We are running a four-node M3 cluster with one node running NFS. We are getting the following error:

1/1 local-dirs are bad: /mapr/clustername/tmp/host_name on the nodes that do not have NFS running.

What is the best way to set this property in yarn-site.xml to allow all nodes to use the MapR FS /tmp as the default location rather than the local file system's /tmp?

I believe the yarn.nodemanager.local-dirs property is meant to be a location on the local file system. It cannot be a location on the distributed file system (HDFS or MapR FS).

This property determines the location where the node manager maintains intermediate data (for example during the shuffle phase).

You can find the gory details here: http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/

The default location as you mentioned is /tmp. If you want to improve performance, you could provide multiple directories on separate disks for better I/O throughput.

But you should first ascertain that this is indeed a bottleneck and whether a separate disk is warranted for this purpose (or whether you are better off using it as a MapR data disk).

One other thing: the NFS-mounted location (/mapr/clustername/tmp/host_name) is not part of the distributed FS.

MapR makes it seamless to work between its distributed file system and the POSIX file system. But the files of the POSIX system are not stored in any containers/chunks/blocks, etc.

Since the path you specified is really a local directory on the node running NFS, you don't get an error message on that node. But on the other nodes, the system can't find a local directory by that name, hence the complaint.
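Putting that together, the fix is to point the property at a directory that exists locally on every node; the path below is purely an illustrative choice, not a MapR-prescribed location:

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- hypothetical local directory present on all nodes -->
  <value>/opt/mapr/tmp/yarn/local</value>
</property>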