
Justice: A Deadline-aware, Fair-share Resource Allocator for Implementing Multi-analytics

Stratos Dimopoulos, Chandra Krintz, Rich Wolski
Department of Computer Science, University of California, Santa Barbara
{stratos, ckrintz, rich}

Abstract—In this paper, we present Justice, a fair-share, deadline-aware resource allocator for big data cluster managers. In resource-constrained environments, where resource contention introduces significant execution delays, Justice outperforms the popular existing fair-share allocator that is implemented as part of Mesos and YARN. Justice uses deadline information supplied with each job and historical job execution logs to implement admission control. It automatically adapts to changing workload conditions to assign enough resources for each job to meet its deadline "just in time." We use trace-based simulation of production YARN workloads to evaluate Justice under different deadline formulations. We compare Justice to the existing fair-share allocation policy deployed on cluster managers like YARN and Mesos and find that in resource-constrained settings, Justice improves fairness, satisfies significantly more deadlines, and utilizes resources more efficiently.

Keywords—resource-constrained clusters; deadlines; admission control; resource allocation; big data

I. INTRODUCTION

Scalable platforms such as Apache Hadoop [1] and Apache Spark [2] implement batch processing of distributed analytics applications, often using clusters (physical or virtual) as infrastructure. However, cluster administrators do not use space-sharing job schedulers (e.g. [16, 33, 34, 54]) to partition cluster resources for these platforms.
Instead, many "big data" systems are designed to work with a cluster manager such as Mesos [24] or YARN [44], which divides cluster resources (processors and memory) at a more fine-grained level to facilitate effective resource sharing and utilization. Cluster managers implement fair-share resource allocation [15, 20] via per-application negotiation. Each big data framework (e.g. a Hadoop or Spark instance) negotiates with the cluster manager to receive resources and uses these resources to run the tasks of the associated submitted jobs (i.e. user applications). The cluster manager tracks the current allocation of each job and the available cluster resources, and uses a fair-share algorithm to distribute resources to the frameworks as jobs arrive.

In this work, we investigate the efficacy of fair-share schedulers in cluster settings that are used increasingly by applications for the emerging "Internet of Things" (IoT) domain: distributed systems in which ordinary physical objects are equipped with Internet connectivity so that they may be accessed (in terms of their sensing capabilities) and actuated automatically. IoT applications combine data gathered via simple, low-power sensing devices with data analytics to make data-driven inferences, predictions, and actuation decisions. Increasingly, IoT analytics are offloaded to more capable "edge computing" systems that provide data aggregation, analyses, and decision support near where data is collected, both to provide low-latency (deadline-driven) actuation of edge devices and to reduce the bandwidth requirements of large-scale IoT deployments [14, 46]. Edge computing [19] (also termed fog computing [5]) hypothesizes that these edge systems will be small or medium-sized clusters capable of acting as an extension of more centralized infrastructure (i.e. as a kind of "content distribution network" for cloud-based analytics).

Edge analytics for IoT represents a shift in the requirements imposed on big data frameworks and cluster managers, both of which were designed for very large-scale, resource-rich clusters and e-commerce applications (which co-locate data fusion, analysis, and actuation in the cloud). In this paper, we investigate the suitability of existing fair-share resource allocators given this shift, in cluster or cloud settings where deadlines are necessary to provide low-latency actuation and resource contention imposes significant overhead on job turn-around time.

We compare existing fair-share algorithms employed by Mesos [24] and YARN [44] to a new approach, called Justice, which uses deadline information for each job and historical workload analysis to improve deadline satisfaction and fairness. Rather than using fairness as the allocation criterion, as is done for Mesos and YARN, Justice estimates the fraction of the requested resources that is necessary to complete each job just before its deadline expires. It makes this estimate when the job is submitted, using the requested number of resources as the number necessary to complete the job as soon as possible. It then relaxes this number according to a running tabulation of an expansion factor that is computed from an on-line post-mortem analysis of all previous jobs run on the cluster. Because the expansion factor is computed across jobs (i.e. globally for the cluster), each analytics framework receives a "fair penalty" for its jobs, which results in a better fair share, subject to the deadlines associated with each job. Further, Justice "risks" running some jobs with more or fewer resources than it computes they need so that it can adapt its allocations automatically to changing workload characteristics.

We describe the Justice approach and compare it to the baseline allocator employed by Mesos and YARN, to simple intuitive extensions to this allocator, and to a job workload "oracle," which knows precisely (i.e. without estimation error) the minimum number of resources needed for each job to meet its deadline. For evaluation, our work uses large production workload traces from an industry partner that provides commercial big data services using YARN¹. The original jobs in this trace were not resource constrained, nor did they require completion according to individual deadlines. For these reasons, we use a discrete-event, trace-driven simulation to represent how these workloads would execute with significantly fewer cluster resources under different deadline formulations found in related work [17, 32, 47, 48, 57].

Our results show that Justice performs similarly to the oracle in terms of fairness and deadline satisfaction, and significantly better than the baseline Mesos and YARN allocator. In addition, Justice achieves greater productivity and significantly better utilization than its counterparts when resources are severely constrained. We next describe Justice and its implementation. We then overview our empirical methodology (Section III) and the characteristics of the workload we evaluate (Section IV). We present our results in Section V. In Sections VI and VII, we discuss related work and conclude.

II. JUSTICE

Justice is a fair-share preserving and deadline-aware resource allocator with admission control for resource negotiators such as Mesos [24] and YARN [44]; it manages batch applications with deadlines on resource-constrained, shared clusters. Justice employs a black-box, framework-agnostic prediction technique (based on measurements of historical job execution times) to estimate the minimum number of CPUs (parallelism) that a job requires to meet its deadline.

Prior work has shown that most fair-share allocators (possibly designed for the "infinite" resources available in a cloud) fail to preserve fairness when resources are limited [11, 22, 53]. This shortfall is due, in part, to their greedy allocation (a result of their inability to predict future demand) and lack of corrective mechanisms (e.g. job preemption or dropping). Instead, Justice improves upon these techniques by proactively adapting to future demand and cluster conditions through its resource allocation and admission control mechanisms.

Existing fair-share allocators are deadline-insensitive. They assume that a job submitted by a user has value to that user regardless of how large the turn-around time may be. For Justice, we assume that because cluster resources may be scarce, each job is submitted with a "maximum runtime" parameter that tells the resource negotiator when the completion of each job is no longer valuable. Henceforth, we term this parameter the "deadline" for the job. The only assumption we make about this user-specified deadline is that it is feasible, i.e., an optimal allocation in an empty cluster is sufficient to complete the job before its deadline.

Note that current resource negotiators such as Mesos and YARN do not include a provision for specifying job maximum runtime.
Instead, many cluster administrators statically split their clusters with the use of a capacity scheduler [6], or require users to reserve resources in advance [8, 43] to create differentiated service classes with respect to turn-around time. However, such approaches are inefficient and impractical in resource-constrained clusters, as they further limit peak cluster capacity. In contrast, Justice incorporates deadline information to drive its resource allocation, admission-control, and job-dropping decisions.

¹ The partner wishes to remain anonymous for reasons of commercial competitiveness.

Algorithm 1 Justice TRACK_JOB Algorithm
 1: function TRACK_JOB(compTime, requestedTasks, deadline, numCPUsAllocd, success)
 2:     deadlineCPUs = compTime / deadline
 3:     maxCPUs = min(requestedTasks, cluster_capacity)
 4:     minReqRate = deadlineCPUs / maxCPUs
 5:     minReqRateList.add(minReqRate)
 6:     MinCPUFrac = min(minReqRateList)
 7:     MaxCPUFrac = max(minReqRateList)
 8:     LastCPUFrac = numCPUsAllocd / maxCPUs
 9:     LastSuccess = success
10: end function

A. Resource Allocation

To determine how many CPUs to allocate to a new job, Justice uses execution time data logged for previously executed jobs. Justice analyzes each completed job and uses this information to estimate the minimum number of CPUs that the job would have needed to have finished by its deadline "just in time" (represented as the deadlineCPUs variable in Algorithm 1). Justice assumes that this minimum required capacity utilizes perfect parallelism (speedup per CPU) and that the number of tasks for a job (the division of its input size by the HDFS block size) is the maximum parallelization possible for the job. We refer to this number as the requestedTasks for the job.
Therefore, the maximum number of CPUs that can be assigned to any job (maxCPUs) at any given time is the minimum of the requestedTasks and the total cluster capacity (cluster_capacity).

To bootstrap the system, Justice admits all jobs regardless of deadline, i.e., it allocates requestedTasks CPUs to the jobs. For any job for which there are insufficient resources for this allocation, Justice allocates the number of CPUs available. When a job completes, either by meeting or by exceeding its deadline, Justice invokes the pseudocode function TRACK_JOB shown in Algorithm 1.

TRACK_JOB calculates the minimum number of CPUs required (deadlineCPUs) if the job were to complete exactly by its deadline, using its execution profile available from cluster manager logs. Line 2 in the function is derived from the equality:

numCPUsAllocd * jobET = deadlineCPUs * deadline

On the left is the actual computation time by individual tasks, which we call compTime in the algorithm. numCPUsAllocd is the number of CPUs that the job used during execution and jobET is its execution time without queuing delay. The right side of the equation is the total computation time consumed across tasks if the job had been assigned deadlineCPUs, given this execution profile (compTime). deadline is the time (in seconds) specified in the job submission. By dividing compTime by deadline, we extract deadlineCPUs for this job.

Next, Justice divides deadlineCPUs by the maximum number of CPUs allocated to the job. The resulting minReqRate is the fraction of the maximum that Justice could have assigned to the job and still have it meet its deadline. Justice adds minReqRate to a list of fractions (minReqRateList) that contains the minimum required rates (fractions of deadlineCPUs over requestedTasks) across all completed jobs. It then calculates from this list the global minimum (MinCPUFrac) and maximum (MaxCPUFrac) fractions.
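Algorithm 1 translates almost line-for-line into executable form. The following is a minimal Python sketch (the class wrapper and the snake_case attribute names are our own; the paper presents this as cluster-level pseudocode, with times in seconds and one CPU per task):

```python
class JusticeTracker:
    """Per-cluster statistics updated on every job completion (Algorithm 1)."""

    def __init__(self, cluster_capacity):
        self.cluster_capacity = cluster_capacity
        self.min_req_rates = []   # minReqRateList: one fraction per completed job
        self.last_cpu_frac = None
        self.last_success = None

    def track_job(self, comp_time, requested_tasks, deadline,
                  num_cpus_allocd, success):
        # CPUs the job would have needed to finish exactly at its deadline,
        # assuming perfect parallelism: compTime = deadlineCPUs * deadline.
        deadline_cpus = comp_time / deadline
        # A job can never use more CPUs than it has tasks or the cluster holds.
        max_cpus = min(requested_tasks, self.cluster_capacity)
        # Fraction of the maximum allocation that would have sufficed.
        min_req_rate = deadline_cpus / max_cpus
        self.min_req_rates.append(min_req_rate)
        self.min_cpu_frac = min(self.min_req_rates)      # MinCPUFrac
        self.max_cpu_frac = max(self.min_req_rates)      # MaxCPUFrac
        self.last_cpu_frac = num_cpus_allocd / max_cpus  # LastCPUFrac
        self.last_success = success                      # LastSuccess
```

For example, a job with 3600 CPU-seconds of total computation, 100 tasks, and a 120-second deadline on a 1000-CPU cluster yields deadlineCPUs = 3600/120 = 30 and minReqRate = 30/100 = 0.3: the job could have met its deadline with 30% of its maximum allocation.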
Justice also tracks the observed fraction allocated to the last completed job (LastCPUFrac) and whether the job satisfied or exceeded its deadline (LastSuccess). Justice then uses MaxCPUFrac and MinCPUFrac to predict the allocatable fractions of future jobs. MaxCPUFrac and MinCPUFrac are always less than or equal to 1. The tighter the deadlines, the more conservative (nearer to 1) these fractions, and the more conservative Justice's resource provisioning, will be.

Then, to compute the CPU allocation fraction (alocCPUFrac) for each newly submitted job, Justice takes the average of LastCPUFrac and either MinCPUFrac or MaxCPUFrac, depending on whether the last completed job met or violated its deadline, respectively. In other words, consecutive successes make Justice more aggressive, and it allocates smaller resource fractions (alocCPUFrac converges to MinCPUFrac), while deadline violations make Justice more conservative, and it increases the allocated fraction to prevent future violations (alocCPUFrac converges to MaxCPUFrac).

Justice also utilizes a Kalman-filter mechanism to correct inaccuracies in its initial estimations. Every time a job completes its execution, Justice tracks the estimation error: the divergence of the given allocation fraction from the ideal minimum fraction (deadlineCPUs). To correct the allocation estimations, Justice calculates a weighted average of the historical errors. It can be configured to assign the same weight to all past errors or to use exponential smoothing and "trust" more recent errors. Lastly, a validation mechanism ensures that the corrected fraction is still within allowable limits (the fraction should not be less than the minimum observed MinCPUFrac or greater than 1).

After alocCPUFrac is calculated, corrected, and validated as described above, Justice multiplies alocCPUFrac by the number of tasks requested in the job submission (rounding up to the next largest integer value).
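The fraction computation and its correction can be sketched as follows. This is our own simplification: the Kalman-style correction is reduced here to a single exponentially smoothed error term, which is one of the configurations the text mentions, and the function names are hypothetical:

```python
def aloc_cpu_frac(last_cpu_frac, last_success, min_cpu_frac, max_cpu_frac,
                  error_estimate=0.0):
    """Estimate the CPU fraction (alocCPUFrac) for a newly submitted job.

    A success on the last job pulls the estimate toward MinCPUFrac
    (more aggressive); a miss pulls it toward MaxCPUFrac (more
    conservative). error_estimate is the weighted average of past
    estimation errors.
    """
    anchor = min_cpu_frac if last_success else max_cpu_frac
    frac = (last_cpu_frac + anchor) / 2.0
    frac += error_estimate  # correction from historical estimation errors
    # Validation: keep the corrected fraction within [MinCPUFrac, 1].
    return min(1.0, max(min_cpu_frac, frac))


def smoothed_error(prev_error, new_error, alpha=0.5):
    """Exponential smoothing of estimation errors ("trust" recent errors more)."""
    return alpha * new_error + (1 - alpha) * prev_error
```

The final CPU count for the job is then the ceiling of alocCPUFrac times requestedTasks, capped at the cluster capacity, as described in the text.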
Justice uses the resulting value (or the maximum cluster capacity, whichever is smaller) as the number of CPUs to assign to the job for execution. If this number of CPUs is not available, it enqueues the job. Justice performs this process each time a job is submitted or completes. It also updates the deadlines for jobs in the queue (reducing each by the time that has passed since submission), recomputes the CPU allocation of each enqueued job, and drops any enqueued jobs with infeasible deadlines.

B. Admission Control

After estimating the resources that jobs need to meet their deadlines, Justice implements proactive admission control so that it can prevent infeasible jobs (jobs likely to miss their deadlines) from ever entering the system and consuming resources wastefully. This way, Justice attempts to maximize the number of jobs that meet their deadlines even under severe resource constraints (i.e. limited cluster capacity or high utilization). Justice also tracks jobs that violate their deadlines and selectively drops some of them to avoid further waste of resources. It is selective in that it terminates jobs when their requestedTasks exceed a configurable threshold. Thus, it is still able to collect statistics on "misses" to improve its estimations by letting the smaller violating jobs complete their execution, while at the same time it prevents the bigger violators (which are expected to run longer) from wasting cluster resources.

Justice admits jobs based on a pluggable priority policy. We have considered various policies for Justice and use a policy that prioritizes minimizing the number of jobs that miss their deadlines. For this policy, Justice gives priority to jobs with a small number of tasks and the greatest time-to-deadline. However, all of the policies that we considered (including shortest time-to-deadline) perform similarly. Once Justice has selected a job for admission, it allocates the CPUs to the job and initiates execution.
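The queue maintenance and feasibility checks described in Sections II-A and II-B might be sketched together as follows. This is a hypothetical structure of our own: the job fields are assumptions, and the feasibility test (a job is infeasible when even the largest possible allocation cannot finish its remaining work in time) follows from the perfect-parallelism model above:

```python
def reevaluate_queue(queue, now, cluster_capacity):
    """Update queued jobs' deadlines, drop infeasible ones, order the rest.

    Each job is a dict with 'comp_time' (CPU-seconds of work),
    'requested_tasks', 'submit_time', and 'deadline' (seconds allowed
    from submission). Returns the still-feasible jobs in admission order.
    """
    feasible = []
    for job in queue:
        remaining = job['deadline'] - (now - job['submit_time'])
        if remaining <= 0:
            continue  # deadline already passed: drop the job
        max_cpus = min(job['requested_tasks'], cluster_capacity)
        if job['comp_time'] / max_cpus > remaining:
            continue  # cannot finish even with maxCPUs: drop the job
        job['remaining_deadline'] = remaining
        feasible.append(job)
    # Admission priority: fewer tasks first, then greatest time-to-deadline.
    feasible.sort(key=lambda j: (j['requested_tasks'],
                                 -j['remaining_deadline']))
    return feasible
```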
Once a job run commences, its CPU allocation does not change.

III. EXPERIMENTAL METHODOLOGY

We compare Justice to the fair-share allocator that currently ships with the open-source Mesos [24] and YARN [44] using trace-based simulation. Our system is based on SimPy [40] and replicates the execution behavior of industry-provided production traces of big data workloads (cf. Section IV).

The current Mesos and YARN fair-share allocator does not take into account the notion of a deadline. When making allocation decisions, it (tacitly) assumes that each job will use the resources allocated to it indefinitely and that there is no limit on the turn-around time a job's owner is willing to tolerate. We hypothesize a straightforward modification to the basic allocator that allows it to consider job deadlines (which would need to be submitted with each job) when making decisions.

Finally, we implement an "oracle" allocator that has perfect foreknowledge of the minimum resource requirements each job needs to meet its deadline exactly. Note that the oracle does not implement system-wide prescience; its prediction is perfect on a per-job basis. That is, the oracle does not try all possible combinations of job schedules to determine the optimal allocation. Instead, the oracle makes its decision based on a perfect prediction of each job's needs. These allocation policies are summarized as follows:

Baseline FS: This allocator employs a fair-sharing policy [4, 18, 20, 41, 50] without admission control. Its behavior is similar to that of the default allocator in Mesos and YARN and, as such, it runs all jobs submitted regardless of their deadlines and resource requirements.

Reactive FS: This allocator extends Baseline FS by allowing the allocator to terminate any job that has exceeded its deadline. That is, it "reacts" to a deadline miss by freeing the resources so that other jobs may use them.

Oracle: This allocator allocates the minimum number of resources that a job requires to meet its deadline.
If sufficient resources are unavailable, the Oracle queues the job until the resources become available or until its deadline has passed (or is no longer achievable). For the queued jobs, the Oracle gives priority to jobs with fewer required resources and longer time until the deadline.

Justice: As described in Section II, this allocator proactively drops, enqueues, or admits submitted jobs. It estimates the share of each job as a fraction of its maximum demand. This fraction is based on the historical performance of jobs in the cluster. For the queued jobs, Justice gives priority to jobs with fewer required resources and longer computation times. Justice drops any jobs that are infeasible based on a comparison of their deadlines with a prediction of the time to completion. Jobs that are predicted to miss their deadlines are not admitted (they are dropped immediately), as are any jobs that exceed their deadlines.

Fig. 1: Workload Characteristics: (a) CDFs of number of tasks per job, (b) CDFs of computation time per job, (c) job computation time vs. number of tasks. Small jobs are large in number but consume a very small proportion of trace computation.

A. Deadline Types

We evaluate the robustness of our approach by running experiments using deadline formulations from prior work [17, 32, 47, 48, 57] and interesting variations on them. In particular, we assign deadlines that are multiples of the optimal execution time of a job (which we extract from our workload trace). We use two types of multiples: fixed and variable.

Fixed Deadlines: With fixed deadlines, we use a deadline that is a multiple of the optimal execution time (a formulation found in [32, 57]). Each deadline is expressed as D_i = x · T_i, where T_i is the optimal runtime of the job and x >= 1.0 is some fixed multiplicative expansion factor.
In our experiments, we use constant factors of x = 1 and x = 2, which we refer to as Fixed1x and Fixed2x, respectively.

Variable Deadlines: For variable deadlines, we compute deadline multiples by sampling distributions. We consider the following variable deadline types:

• Jockey: We pick with equal probability a deadline expansion factor x from two possible values (a formulation described in [17]). In this study, we choose x from the value pairs (1, 2) and (2, 4) and, again, compute D_i = x · T_i, where T_i is the minimum possible execution time. We refer to these variable deadline formulations as Jockey1x2x and Jockey2x4x.

• 90loose: This is a variation of the Jockey1x2x deadlines, in which the deadlines take on the larger value (i.e. are loose) with a higher probability (0.9), while otherwise they take the smaller value.

• Aria: The deadline multiples of this type are uniformly distributed in the intervals [1, 3] and [2, 4] (as described in [47, 48]); we refer to these deadlines as Aria1x3x and Aria2x4x, respectively.

TABLE I: Trace Summary. Columns are peak cluster capacity, total number of jobs, total computation time in hours, percentage of 1-task jobs, and percentage of 1-task job computation time.

CPUs   Jobs     Comp. Time (Hours)   1-Task Pct   1-Task Time Pct
9345   159194   8585673              58%          0.1%

IV. WORKLOAD CHARACTERIZATION

To evaluate Justice, we use a 3-month trace from production Hadoop deployments executing over different YARN clusters. The trace was recently donated to the Justice effort by an industry partner on condition of anonymity. The trace contains a job ID, job category, number of map and reduce tasks, map and reduce time (computation time across tasks), and job runtime, among other data. It does not contain information about the scheduling policy or HDFS configuration used in each cluster.
Thus, we assume a minimum of one CPU per task and use this minimum to derive cluster capacity; we are considering sub-portions of CPUs (vcores) as part of future work. Justice uses the number of map tasks (as requestedTasks in Algorithm 1) and map time (as compTime in Algorithm 1).

Table I summarizes the job characteristics of the trace. The table shows the peak cluster capacity (total number of CPUs), the total number of jobs, the total computation time across all tasks in the jobs, the percentage of jobs that have only one task, and the percentage of computation time that single-task jobs consume across jobs. There are 159194 jobs submitted, and the peak observed capacity (maximum number of CPUs in use) is 9345².

The table also shows that even though there are many single-task jobs, they consume a small percentage of the total computation time. To understand this characteristic better, we present the cumulative distribution of the number of tasks per job in Fig. 1a and of computation time per job in Fig. 1b, in logarithmic scale. Approximately 60% of the jobs have a single task and 70% of the jobs have fewer than 10 tasks. Only 13% of the jobs have more than 1000 tasks. Also, the vast majority of jobs have short computation times. Approximately 70% of jobs have a computation time of less than 1000 CPU-seconds, i.e. their execution would take 1000 seconds if they were running on one CPU core.

The right graph in the figure compares job computation time with the number of tasks per job (both axes are on a logarithmic scale). 80% of the 1-task jobs and 60% of the 2-10 task jobs have computation times of fewer than 100 seconds. Their aggregate computation time is less than 1% of the total computation time of the trace. Jobs with more than 1000 tasks account for 98% of the total computation time.
Finally, job computation time varies significantly across jobs. We have considered leveraging the job ID and the number of map and reduce tasks to track repeated jobs, but find that for this real-world trace such jobs are small in number: 18% of the jobs repeat more than once and 12% of the jobs repeat more than 30 times. Moreover, we observe high performance variation within each job class. Previous research has reported similar findings and limited benefits from exploiting job repeats for production traces [17].

V. RESULTS

We evaluate Justice using the production trace for different resource-constrained cluster capacities (numbers of CPUs). We compare Justice against different fair-share schedulers and an Oracle using multiple deadline strategies: a fixed multiple (Fixed), a random multiple (Jockey), a uniform multiple (Aria) of the actual computation time, and mixed loose and strict deadlines (90loose), as described in Section III.

A. Fairness Evaluation

We use Jain's fairness index [28], applied to the fraction of demand each scheduler is able to achieve, as a measure of fairness. For each job i, among n total jobs, we define the fraction of demand as F_i = A_i / D_i, where D_i is the resource request for job i and A_i is the allocation given to job i. When A_i >= D_i, the fraction is defined to be 1. Jain's fairness index is then (Σ_{i=1}^{n} F_i)² / (n · Σ_{i=1}^{n} F_i²).

Figure 2 presents the fairness index averaged over 60-second intervals for all the allocation policies and deadlines considered in this study, and for two resource-constrained cluster sizes: a very constrained cluster with 2250 CPUs (left graph) and a moderately constrained cluster with 4500 CPUs (right graph). The results show that in resource-constrained settings, fair-share allocation policies generate substantially lower fairness indices compared to Justice.

In resource-constrained clusters, when CPU demands exceed available cluster resources, fair-share mechanisms can violate fairness. This occurs because these mechanisms do not anticipate the arrival of future workload. Thus, jobs that require large fractions of the total resource pool get their full allocations, causing jobs that arrive later to block or to be under-served [11]. Moreover, jobs that are waiting in the queue may miss their deadlines while waiting (i.e. receive an A_i value of zero) or receive an under-allocation once they are released.

Note that adding the ability to simply drop jobs that have missed their deadlines does not alleviate the fairness problem entirely. The Reactive FS policy (described in Section III) achieves better fairness than the Baseline fair-share scheduler, but does not achieve the same levels as Justice. When a large job (one with a large value of D_i) can meet its deadline (i.e. is not dropped by Reactive FS), it may only get a small fraction of its requested allocation (receiving a small value of A_i), thereby contributing to the fairness imbalance when compared to Justice. Because the confidence intervals between Reactive FS and Justice overlap, we also conducted a Welch's t-test [51] for all deadline types and cluster sizes. We find that in all cases, the P-value is very small (e.g. significantly smaller than 0.01).

² We have tested Justice on a second trace that contains more than 1 million job entries from the same industry partner. The distribution properties of job sizes are remarkably similar to those of the trace we have chosen to use. However, because many of the jobs in this larger trace repeat (creating more autocorrelation in the job series), we believe that the smaller trace is a greater challenge for Justice's predictive mechanisms. The results for this larger trace are, indeed, better than the results we present in this paper. We have omitted them for brevity.
Thus, the probability that the means are the same is also very small.

The reason Justice is able to achieve fairness is that it uses predictions of future demand to implement admission control. Justice uses a running tabulation of the average fraction A_i / D_i that was required by previous jobs to meet their deadlines to weight the value of A_i / D_i for each newly arriving job. Justice computes this fraction globally by performing an on-line "post mortem" of completed jobs. Then, for each new job, Justice allocates a fraction of the demand requested using this estimated fraction. Justice continuously updates its estimate of this fraction so that it can adapt to changing workload conditions. As a result, every requesting job gets the same share of resources as a fraction of its total demand, which is by definition the best possible fairness according to Jain's formula.

Interestingly, Justice achieves a better fairness index than the Oracle for variable deadlines (e.g. Aria1x3x). The Oracle allocates to every job the minimum amount of resources required to meet its deadline. Consequently, when the deadline tightness differs across jobs, the fraction of resources that each job gets compared to its maximum request will also differ. This leads to inequalities in terms of fairness. To avoid the paradox of an Oracle not giving perfect fairness, we could modify Jain's formula by replacing the maximum demand of a job with the minimum resources required to meet its deadline. However, we wish to use prior art when making comparisons to the existing fair-share allocators, and so the Oracle (under this previous definition) also does not achieve perfect fairness. In other words, the Oracle is an oracle with respect to the minimum resource requirements needed to satisfy each job's deadline.
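The fairness metric defined in Section V-A is straightforward to compute directly. A minimal sketch (the function name is ours; F_i is capped at 1 per the definition above):

```python
def jain_fairness(allocations, demands):
    """Jain's fairness index over demand fractions F_i = A_i / D_i.

    A_i is the allocation job i received and D_i its request; F_i is
    capped at 1 when a job got at least what it asked for. Returns a
    value in (0, 1], where 1 means every job achieved the same
    fraction of its demand.
    """
    fracs = [min(1.0, a / d) for a, d in zip(allocations, demands)]
    n = len(fracs)
    return sum(fracs) ** 2 / (n * sum(f * f for f in fracs))
```

Two jobs that each receive half of their requests score a perfect 1.0 (equal fractions), whereas giving one job its full request and another only 10% of its request scores roughly 0.6.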