Saturday, August 17, 2013

[Urgent] Hadoop Capacity Scheduler queue scheduling problem, please help out, friends ~

This post was last edited by zhang__bing on 2013-08-01 11:28:49
Suppose three queues A, B, and C are configured with resource allocations of 30%, 40%, and 30%.
If two jobs (job1 and job2) are submitted to queue B, where job1 needs 40% of the resources and job2 needs 20%, then under the Capacity Scheduler job1 gets scheduled in B while job2 ends up being assigned to queue A.


I would expect that, when resources are insufficient, the second job should stay in a blocked state and wait for job1 to finish running before it runs.


Can the Capacity Scheduler queues be configured to strictly limit the resources allocated to each queue?

Or is there another Hadoop queue scheduler that can enforce this kind of strict resource allocation?

It's urgent, urgent ~


Please help out, friends ~

------ Solution --------------------------------------------
1. Within each queue, the Capacity Scheduler uses a FIFO scheduling policy.
2. By default the Capacity Scheduler does not support priorities, but this can be enabled in the configuration file; with priorities enabled, the scheduling algorithm becomes FIFO with priorities.
3. The Capacity Scheduler does not support priority preemption: once a job starts executing, its resources are not preempted by higher-priority jobs until it finishes.
4. Within a queue, the Capacity Scheduler can limit the percentage of resources obtained by jobs submitted by the same user, so that one user's jobs cannot monopolize the resources.
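
As a rough illustration, a capacity-scheduler.xml fragment for queue B might look like the following. This is only a sketch assuming the Hadoop 1.x contrib Capacity Scheduler; the queue name B and all values are examples, and the property names (including maximum-capacity, if your version supports it) should be checked against your version's capacity-scheduler.xml template.

<!-- guaranteed share of the cluster for queue B, in percent -->
<property>
  <name>mapred.capacity-scheduler.queue.B.capacity</name>
  <value>40</value>
</property>

<!-- optional hard cap; if supported by your version, keeping this equal to
     capacity stops B from borrowing idle slots of other queues -->
<property>
  <name>mapred.capacity-scheduler.queue.B.maximum-capacity</name>
  <value>40</value>
</property>

<!-- point 2: enable priority-aware FIFO inside the queue -->
<property>
  <name>mapred.capacity-scheduler.queue.B.supports-priority</name>
  <value>true</value>
</property>

<!-- point 4: per-user limit inside the queue; when several users compete,
     each is guaranteed at least this percentage of the queue -->
<property>
  <name>mapred.capacity-scheduler.queue.B.minimum-user-limit-percent</name>
  <value>50</value>
</property>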

Reconfigure points 2, 3 and 4 above according to your needs. Once the configuration is complete, restart the JobTracker:
stop-mapred.sh
start-mapred.sh
Note: when submitting the job, remember to call job.set() to specify the group or pool (i.e. the target queue).
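
For example, with the old mapred API in Hadoop 1.x a submission might look roughly like this. It is only a sketch: the class name, job name, identity mapper/reducer, paths and the queue name B are all illustrative, and on MRv2 the equivalent property is mapreduce.job.queuename.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class SubmitToQueueB {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitToQueueB.class);
        conf.setJobName("queue-b-example");

        // Direct the job at queue B; equivalent to conf.set("mapred.job.queue.name", "B")
        conf.setQueueName("B");

        // A trivial identity map/reduce just to make the example self-contained
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}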


------ For reference only ---------------------------------------
See: http://blog.csdn.net/jiedushi/article/details/7920455
------ For reference only ---------------------------------------
Although your answer doesn't fundamentally solve my problem, thank you anyway ~
