While rolling Hive out to production recently, I hit MoveTask failures both online and offline. The error logs and messages looked identical in the two environments, but after some analysis the root causes turned out to be different.
First, the offline error log:
2015-05-18 18:53:09,679 ERROR [main]: exec.Task (SessionState.java:printError(833)) - Failed with exception Unable to rename: hdfs://hadoop-master:9000/tmp/hive/hadoop/4d905c9f-ee65-4b1f-be96-93115b3aad61/hive_2015-05-18_18-51-42_401_2711668916550397051-1/-ext-10000 to: /user/hive/partitions/users_statis/dt_user_statis_behavior/event=play/period=0
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename: hdfs://hadoop-master:9000/tmp/hive/hadoop/4d905c9f-ee65-4b1f-be96-93115b3aad61/hive_2015-05-18_18-51-42_401_2711668916550397051-1/-ext-10000 to: /user/hive/partitions/users_statis/dt_user_statis_behavior/event=play/period=0
    at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:111)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:213)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
This problem is described in detail at http://blog.csdn.net/lucien_zong/article/details/10198533.
When a SQL statement is executed, the last task is a MoveTask. Its job is to move the result files produced by the statement's MapReduce jobs into the directory the SQL designates for the query results, and it does so by renaming (moving) them.
Below is the piece of org.apache.hadoop.hive.ql.exec.MoveTask that renames the result files:

// sourcePath is the directory holding the MapReduce result files, e.g.
// hdfs://indigo:8020/tmp/hive-root/hive_2013-08-22_18-42-03_218_2856924886757165243/-ext-10000
if (fs.exists(sourcePath)) {
    Path deletePath = null;
    // If it multiple level of folder are there fs.rename is failing so first
    // create the targetpath.getParent() if it not exist
    if (HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_INSERT_INTO_MULTILEVEL_DIRS)) {
        deletePath = createTargetPath(targetPath, fs);
    }
    // targetPath is the directory designated for the result files, e.g.
    // result/userName154122639/4e574b5d9f894a70b074ccd3981ca0f1
    if (!fs.rename(sourcePath, targetPath)) {
        // the exception in the log above is thrown here when the rename fails
        try {
            if (deletePath != null) {
                fs.delete(deletePath, true);
            }
        } catch (IOException e) {
            LOG.info("Unable to delete the path created for facilitating rename" + deletePath);
        }
        throw new HiveException("Unable to rename: " + sourcePath + " to: " + targetPath);
    }
}
The rename can only succeed if the parent of targetPath already exists: fs.rename does not create missing intermediate directories, it just returns false.
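To see this failure mode in isolation, here is a minimal sketch against the Hadoop FileSystem API (the paths and class name are hypothetical, chosen just for the demo):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameParentDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path source = new Path("/tmp/rename-demo/src");        // hypothetical source directory
        Path target = new Path("/tmp/rename-demo/a/b/result"); // parent /tmp/rename-demo/a/b does not exist

        fs.mkdirs(source);

        // Prints false: rename() will not create the missing levels, and this
        // false return is exactly the branch that throws HiveException in MoveTask.
        System.out.println("without parent: " + fs.rename(source, target));

        // Succeeds once the parent exists, which is what createTargetPath
        // arranges when hive.insert.into.multilevel.dirs=true.
        fs.mkdirs(target.getParent());
        System.out.println("with parent:    " + fs.rename(source, target));
    }
}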
And in fact, the code shown earlier already checks for and creates targetPath's parent beforehand:
private Path createTargetPath(Path targetPath, FileSystem fs) throws IOException {
    Path deletePath = null;
    Path mkDirPath = targetPath.getParent();
    if (mkDirPath != null && !fs.exists(mkDirPath)) {
        Path actualPath = mkDirPath;
        while (actualPath != null && !fs.exists(actualPath)) {
            deletePath = actualPath;
            actualPath = actualPath.getParent();
        }
        fs.mkdirs(mkDirPath);
    }
    // returns the topmost directory that was newly created, so it can be
    // deleted for cleanup if the rename later fails
    return deletePath;
}
Apache Hive has seen this issue before, and it was fixed upstream.
CDH, remarkably, put the fix behind a parameter, hive.insert.into.multilevel.dirs, which defaults to false. In effect: "yes, I still ship this bug."
So when you get bitten and set out to write a patch, you discover a config change is all it takes.
In other words: the bug stays, but if it bites you, you can't call it a bug; just go change the configuration yourself.
I haven't found this parameter used anywhere else. Its only effect here is to limit how deep the non-existent part of the result directory specified in the SQL may be: no more than one level.
I don't see what that restriction buys anyone, either. After all that digging, one configuration entry is enough:
<property>
    <name>hive.insert.into.multilevel.dirs</name>
    <value>true</value>
</property>
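If editing hive-site.xml is inconvenient, the same flag should also be settable per session from the Hive CLI (assuming it is not blocked by hive.conf.restricted.list):

set hive.insert.into.multilevel.dirs=true;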
With the offline problem solved, I happily assumed the same fix would clear up production. It did not. A closer read of the log showed the production failure was actually caused by a missing jar:
2015-05-18 19:22:03,799 ERROR [main]: exec.Task (SessionState.java:printError(861)) - Failed with exception Unable to move source hdfs://hadoop1:9000/tmp/hive/statistics/dt_statistics_content_daily/.hive-staging_hive_2015-05-18_19-11-45_323_132664610162390564-1/-ext-10000 to destination /tmp/hive/statistics/dt_statistics_content_daily
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://hadoop1:9000/tmp/hive/statistics/dt_statistics_content_daily/.hive-staging_hive_2015-05-18_19-11-45_323_132664610162390564-1/-ext-10000 to destination /tmp/hive/statistics/dt_statistics_content_daily
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2483)
    at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:105)
    at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:222)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1638)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1397)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:754)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Cannot find DistCp class package: org.apache.hadoop.tools.DistCp
    at org.apache.hadoop.hive.shims.Hadoop23Shims.runDistCp(Hadoop23Shims.java:1123)
    at org.apache.hadoop.hive.common.FileUtils.copy(FileUtils.java:553)
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2461)
    ... 21 more
Reading the source shows that moveFile ends up invoking Hadoop's DistCp, loaded reflectively through the shim layer:
@Override
public boolean runDistCp(Path src, Path dst, Configuration conf) throws IOException {
    int rc;

    // Creates the command-line parameters for distcp
    String[] params = {"-update", "-skipcrccheck", src.toString(), dst.toString()};

    try {
        Class clazzDistCp = Class.forName("org.apache.hadoop.tools.DistCp");
        Constructor c = clazzDistCp.getConstructor();
        c.setAccessible(true);
        Tool distcp = (Tool) c.newInstance();
        distcp.setConf(conf);
        rc = distcp.run(params);
    } catch (ClassNotFoundException e) {
        throw new IOException("Cannot find DistCp class package: " + e.getMessage());
    } catch (NoSuchMethodException e) {
        throw new IOException("Cannot get DistCp constructor: " + e.getMessage());
    } catch (Exception e) {
        throw new IOException("Cannot execute DistCp process: " + e, e);
    }

    return (0 == rc);
}
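To reproduce that reflective lookup outside of Hive, a small probe class (the class name is hypothetical; run it with the same classpath Hive uses) shows whether DistCp resolves and from which jar:

import java.net.URL;

public class DistCpProbe {
    public static void main(String[] args) {
        try {
            Class<?> clazz = Class.forName("org.apache.hadoop.tools.DistCp");
            // getCodeSource() can be null for bootstrap classes, but for a
            // class loaded from a jar it reports the jar's location.
            URL location = clazz.getProtectionDomain().getCodeSource().getLocation();
            System.out.println("DistCp found in: " + location);
        } catch (ClassNotFoundException e) {
            // This is the case Hadoop23Shims wraps into "Cannot find DistCp class package"
            System.out.println("DistCp missing: " + e.getMessage());
        }
    }
}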
Running the following command confirms that hadoop-distcp-2.6.0.jar is nowhere in Hive's environment:

hive -e 'set' | grep distcp
So the fix is simply to take hadoop-distcp-2.6.0.jar from the hadoop/share/tools/ directory and put it on Hive's classpath.
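For example (the source path here assumes a standard Hadoop 2.6 layout, and $HIVE_HOME/lib as the destination; adjust both to your installation):

cp $HADOOP_HOME/share/hadoop/tools/lib/hadoop-distcp-2.6.0.jar $HIVE_HOME/lib/

A freshly started Hive session should then be able to load org.apache.hadoop.tools.DistCp.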
For the record, the production Hive is version 1.1.0 while the offline Hive is 0.14. Note also that offline the files are moved by a plain rename, whereas production goes through DistCp; which path is taken depends on the Hive version (as the stack trace above shows, in 1.1.0 Hive.moveFile can route the move through FileUtils.copy, which invokes DistCp, a code path 0.14 does not take).