Edward J. Yoon: Shuffle Error: MAX_FAILED_UNIQUE_FETCHES; bailing-out

First, long time no MapReduce! Today I wasted some time figuring out this error: "Shuffle Error: MAX_FAILED_UNIQUE_FETCHES; bailing-out".

If you hit this error message with a Hadoop version newer than 0.20.2, you should check the "mapred-site.xml" file in the ${HADOOP_HOME}/conf directory and "/etc/hosts", because this happens when the IP addresses are confused and things aren't on the right ports.

<property>
  <name>mapreduce.tasktracker.http.address</name>
  <value>0.0.0.0:50060</value>
  <description>
    The task tracker http server address and port.
    If the port is 0 then the server will start on a free port.
  </description>
</property>
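On the "/etc/hosts" side, the usual culprit is the node's hostname resolving to a loopback address, so the TaskTracker advertises 127.0.0.1 and reducers on other nodes can't fetch map output from it. A minimal sketch of what sane entries might look like (the hostnames and IPs here are made up for illustration):

# /etc/hosts -- each cluster hostname should map to a reachable IP,
# not to 127.0.0.1 or 127.0.1.1
127.0.0.1     localhost
192.168.0.10  hadoop-master
192.168.0.11  hadoop-slave1
192.168.0.12  hadoop-slave2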

Or, if you have a lot of map and reduce tasks in your cluster, check the "mapreduce.tasktracker.http.threads" property (formerly "tasktracker.http.threads").

<property>
  <name>mapreduce.tasktracker.http.threads</name>
  <value>400</value>
</property>
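After changing these settings and restarting the TaskTrackers, a quick way to confirm the HTTP server is actually listening on the expected port from another node might look like this (the hostname "hadoop-slave1" is just an example):

# check that port 50060 is bound on the TaskTracker node
$ netstat -tlnp | grep 50060
# and that it is reachable from a remote node
$ curl -I http://hadoop-slave1:50060/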
