Hadoop mapred-queue-acls configuration

When submitting a Hadoop job you can target a specific queue, for example: -Dmapred.job.queue.name=queue2
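
As a minimal command-line sketch (the example jar and the HDFS paths below are placeholders, not taken from this cluster), any job driver that goes through ToolRunner/GenericOptionsParser will pick up the -D option, e.g. the bundled wordcount example:

    # Hypothetical paths; substitute your own jar and HDFS directories.
    hadoop jar /opt/app/hadoop-0.20.2-cdh3u3/hadoop-examples-0.20.2-cdh3u3.jar wordcount \
        -Dmapred.job.queue.name=queue2 \
        /user/hadoop/input /user/hadoop/output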

By configuring mapred-queue-acls.xml and mapred-site.xml, different users can be granted submit permission on different queues.
First edit mapred-site.xml and add the following (this defines four extra queues):

    
    <property>
      <name>mapred.queue.names</name>
      <value>default,queue1,queue2,queue3,queue4</value>
      <description> Comma separated list of queues configured for this jobtracker.
        Jobs are added to queues and schedulers can configure different
        scheduling properties for the various queues. To configure a property
        for a queue, the name of the queue must match the name specified in this
        value. Queue properties that are common to all schedulers are configured
        here with the naming convention, mapred.queue.$QUEUE-NAME.$PROPERTY-NAME,
        for e.g. mapred.queue.default.submit-job-acl.
        The number of queues configured in this parameter could depend on the
        type of scheduler being used, as specified in
        mapred.jobtracker.taskScheduler. For example, the JobQueueTaskScheduler
        supports only a single queue, which is the default configured here.
        Before adding more queues, ensure that the scheduler you've configured
        supports multiple queues.
      </description>
    </property>

After the change takes effect, the configured queues show up in the JobTracker web UI.
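
If you prefer the shell to the web UI, the queue list can also be checked with the queue subcommand (a quick check, assuming the Hadoop 0.20.x / CDH3 release used in this post):

    # Lists the queues known to the JobTracker and their scheduling info.
    hadoop queue -list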


To control access to the queues, you also need to edit mapred-queue-acls.xml:

    
    <property>
      <name>mapred.queue.queue1.acl-submit-job</name>
      <value>''</value>
      <description> Comma separated list of user and group names that are allowed
        to submit jobs to the 'default' queue. The user list and the group list
        are separated by a blank. For e.g. user1,user2 group1,group2.
        If set to the special value '*', it means all users are allowed to
        submit jobs. If set to '' (i.e. space), no user will be allowed to submit
        jobs.
        It is only used if authorization is enabled in Map/Reduce by setting the
        configuration property mapred.acls.enabled to true.
        Irrespective of this ACL configuration, the user who started the cluster and
        cluster administrators configured via
        mapreduce.cluster.administrators can submit jobs.
      </description>
    </property>

To configure more queues, simply repeat the property above, changing the queue name and the value; a sketch for queue2 follows below. To make testing easy, queue1 denies job submission to all users.
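
For instance, a hypothetical ACL (the user and group names are made up, not the ones used in this post) that allows only user1, user2 and members of group1 to submit to queue2, using the "user list, blank, group list" format described above:

    <property>
      <name>mapred.queue.queue2.acl-submit-job</name>
      <value>user1,user2 group1</value>
    </property>
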
For the ACLs to take effect, you also need to set mapred.acls.enabled to true in mapred-site.xml:

    
    <property>
      <name>mapred.acls.enabled</name>
      <value>true</value>
      <description> Specifies whether ACLs should be checked
        for authorization of users for doing various queue and job level operations.
        ACLs are disabled by default. If enabled, access control checks are made by
        JobTracker and TaskTracker when requests are made by users for queue
        operations like submit job to a queue and kill a job in the queue and job
        operations like viewing the job-details (See mapreduce.job.acl-view-job)
        or for modifying the job (See mapreduce.job.acl-modify-job) using
        Map/Reduce APIs, RPCs or via the console and web user interfaces.
      </description>
    </property>

Restart Hadoop so the configuration takes effect, then test it with Hive.
First use queue2:

    
    hive> set mapred.job.queue.name=queue2;
    hive> select count(*) from t_aa_pc_log;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    Starting Job = job_201205211843_0002, Tracking URL = http://192.168.189.128:50030/jobdetails.jsp?jobid=job_201205211843_0002
    Kill Command = /opt/app/hadoop-0.20.2-cdh3u3/bin/hadoop job -Dmapred.job.tracker=192.168.189.128:9020 -kill job_201205211843_0002
    2012-05-21 18:45:01,593 Stage-1 map = 0%, reduce = 0%
    2012-05-21 18:45:04,613 Stage-1 map = 100%, reduce = 0%
    2012-05-21 18:45:12,695 Stage-1 map = 100%, reduce = 100%
    Ended Job = job_201205211843_0002
    OK
    136003
    Time taken: 14.674 seconds
    hive>

The job completed successfully.

Now submit a job to queue1:

    
    hive> set mapred.job.queue.name=queue1;
    hive> select count(*) from t_aa_pc_log;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
      set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
      set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
      set mapred.reduce.tasks=<number>
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User p_sdo_data_01 cannot perform operation SUBMIT_JOB on queue queue1.
     Please run "hadoop queue -showacls" command to find the queues you have access to.
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:179)
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:136)
        at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:113)
        at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3781)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

This time the job submission fails!

Finally, you can use the hadoop queue -showacls command to inspect the queue ACLs:

    
    [hadoop@localhost conf]$ hadoop queue -showacls
    Queue acls for user :  hadoop
    Queue  Operations
    =====================
    queue1  administer-jobs
    queue2  submit-job,administer-jobs
    queue3  submit-job,administer-jobs
    queue4  submit-job,administer-jobs
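
The listing above is for the hadoop user; to see what another account is allowed to do (for example the p_sdo_data_01 user that was rejected earlier), the same command can be run as that user, assuming the OS account exists and sudo is available:

    sudo -u p_sdo_data_01 hadoop queue -showacls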


