Hadoop cluster: CentOS 5.8, JDK 1.7, Hadoop 1.0.1.
Master: 192.168.1.101; slaves: 192.168.1.102, 192.168.1.103, 192.168.1.104.
A development environment was configured separately:
OS: Windows Server 2003, eclipse-jee-helios-SR2-win32.zip, JDK 1.7.
After setting up the development environment and debugging the WordCount program, I got the following error:
12/04/24 15:32:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/04/24 15:32:44 ERROR security.UserGroupInformation: PriviledgedActionException as:Administrator cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator-519341271\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-Administrator\mapred\staging\Administrator-519341271\.staging to 0700
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:682)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:655)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
    at com.hadoop.learn.test.WordCountTest.main(WordCountTest.java:85)
Tips online say this is a file-permission issue when running under Windows, and that the problem does not occur under Linux. So I decided to set up a separate CentOS development environment, but the problem still exists.
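To illustrate why the permission call can fail on Windows, here is a minimal sketch (the class name and directory below are made up, and the helper only mirrors the failing pattern, it is not Hadoop code): Hadoop 1.0's FileUtil.checkReturnValue throws whenever one of java.io.File's permission setters returns false, which commonly happens on NTFS because POSIX-style permission bits cannot be fully applied there.

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch mirroring the failing pattern in Hadoop 1.0's FileUtil:
// setPermission() calls java.io.File permission setters, and
// checkReturnValue() throws whenever one of them returns false.
public class PermissionCheck {

    static void checkReturnValue(boolean rv, File p) throws IOException {
        if (!rv) {
            throw new IOException("Failed to set permissions of path: " + p);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "staging-demo");
        dir.mkdirs();
        // On POSIX file systems these setters normally succeed; on NTFS the
        // JVM often cannot apply the bits and returns false, which is what
        // triggers the IOException seen in the stack trace above.
        checkReturnValue(dir.setReadable(true, false), dir);
        checkReturnValue(dir.setWritable(true, false), dir);
        checkReturnValue(dir.setExecutable(true, false), dir);
        System.out.println("permission check passed");
        dir.delete();
    }
}
```

On a Linux machine this prints "permission check passed"; on Windows one of the setters typically returns false and the sketch throws, just like the job submission does.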
After adding conf.set("mapred.job.tracker", "192.168.1.155:9001"); to the code, the error went away, but the program runs very slowly (it does produce the correct result). Tracking the tasks at http://192.168.1.101:50030 shows that slave1 and slave2 have this error: state: FAILED. Error: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
Running WordCount directly on the master does not produce this error.
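For reference, the same jobtracker address can be kept out of the code by setting it in the client's mapred-site.xml instead (the address below simply copies the one from the conf.set line; adjust it to your own master):

```xml
<!-- mapred-site.xml on the client machine -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.1.155:9001</value>
  </property>
</configuration>
```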
Alternatively, in the Windows development environment, change checkReturnValue inside /hadoop-1.0.2/src/core/org/apache/hadoop/fs/FileUtil.java so that the body is commented out:

private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission
                                     ) throws IOException {
  /**
  if (!rv) {
    throw new IOException("Failed to set permissions of path: " + p +
                          " to " +
                          String.format("%04o", permission.toShort()));
  }
  **/
}
After commenting this out and recompiling hadoop-core-1.0.1.jar, there is no Shuffle Error either. This problem has puzzled me for a long time. I hope an expert can explain it.
------ Solution --------------------------------------------
It looks like Hadoop starts many threads by default; once the limit is exceeded it errors out. Try modifying the configuration file.
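A hedged reading of this reply: the shuffle phase fetches map output over each TaskTracker's embedded HTTP server, whose worker-thread pool is controlled by tasktracker.http.threads (default 40 in Hadoop 1.x). Raising it on every node is a commonly suggested mitigation for MAX_FAILED_UNIQUE_FETCHES, though this is a guess at what the reply intends rather than a confirmed fix:

```xml
<!-- mapred-site.xml on every TaskTracker node; restart the daemons after -->
<property>
  <name>tasktracker.http.threads</name>
  <value>100</value> <!-- default is 40 -->
</property>
```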
------ For reference only ---------------------------------
The error above appears after the map phase has completed, during the reduce phase, once reduce reaches about 16%.