
Commit 1e00040

vingu-linaro authored and pundiramit committed
UPSTREAM: sched/core: Fix find_idlest_group() for fork
During fork, the utilization of a task is initialized once the rq has been
selected, because the current utilization level of the rq is used to set the
utilization of the forked task. As the task's utilization is still 0 at this
step of the fork sequence, it doesn't make sense to look for spare capacity
that can fit the task's utilization. Furthermore, I can see perf regressions
for the test:

  hackbench -P -g 1

because the least-loaded policy is always bypassed and tasks are not spread
during fork.

With this patch and the fix below, we are back to the same performance as
v4.8. The fix below is only a temporary one, used for the test until a
smarter solution is found, because we can't simply remove the test, which is
useful for other benchmarks:

| @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
|
|	avg_cost = this_sd->avg_scan_cost;
|
| -	/*
| -	 * Due to large variance we need a large fuzz factor; hackbench in
| -	 * particularly is sensitive here.
| -	 */
| -	if ((avg_idle / 512) < avg_cost)
| -		return -1;
| -
|	time = local_clock();
|
|	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {

Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: kernellwp@gmail.com
Cc: umgwanakikbuti@gmail.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1481216215-24651-2-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f519a3f1c6b7a990e5aed37a8f853c6ecfdee945)
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Change-Id: I86cc2ad81af3467c0b2f82b995111f428248baa4
1 parent: 98ac5c4

1 file changed: kernel/sched/fair.c (8 additions, 0 deletions)
@@ -6051,13 +6051,21 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	 * utilized systems if we require spare_capacity > task_util(p),
 	 * so we allow for some task stuffing by using
 	 * spare_capacity > task_util(p)/2.
+	 *
+	 * Spare capacity can't be used for fork because the utilization has
+	 * not been set yet, we must first select a rq to compute the initial
+	 * utilization.
 	 */
+	if (sd_flag & SD_BALANCE_FORK)
+		goto skip_spare;
+
 	if (this_spare > task_util(p) / 2 &&
 	    imbalance*this_spare > 100*most_spare)
 		return NULL;
 	else if (most_spare > task_util(p) / 2)
 		return most_spare_sg;
 
+skip_spare:
 	if (!idlest || 100*this_load < imbalance*min_load)
 		return NULL;
 	return idlest;
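
To make the policy change concrete, here is a minimal userspace sketch (not
kernel code) of the selection logic this patch modifies. The
pick_remote_group() function, the group_stats struct, and the sample numbers
are hypothetical simplifications of find_idlest_group() in
kernel/sched/fair.c; imbalance stands in for the domain's imbalance_pct, and
only the SD_BALANCE_FORK value mirrors an actual kernel flag.

#include <stdio.h>
#include <stdbool.h>

#define SD_BALANCE_FORK 0x08	/* mirrors the kernel's sched-domain flag */

struct group_stats {
	unsigned long this_load;	/* load of the local group */
	unsigned long min_load;		/* load of the least loaded remote group */
	long this_spare;		/* spare capacity in the local group */
	long most_spare;		/* largest spare capacity in any group */
	long task_util;			/* utilization of the task being placed */
};

/* Returns true when the task should leave the local group. */
static bool pick_remote_group(const struct group_stats *s, int sd_flag)
{
	const long imbalance = 125;	/* plays the role of imbalance_pct */

	/*
	 * On the fork path the task's utilization is still 0, so the
	 * spare-capacity comparisons below are meaningless: skip them
	 * and fall through to the least-loaded policy.
	 */
	if (sd_flag & SD_BALANCE_FORK)
		goto skip_spare;

	if (s->this_spare > s->task_util / 2 &&
	    imbalance * s->this_spare > 100 * s->most_spare)
		return false;	/* enough spare capacity here: stay local */
	else if (s->most_spare > s->task_util / 2)
		return true;	/* another group has clearly more room */

skip_spare:
	/* Least-loaded policy: move only if a remote group is clearly lighter. */
	return 100 * s->this_load >= imbalance * s->min_load;
}

int main(void)
{
	/* A freshly forked task: task_util is still 0. */
	struct group_stats s = {
		.this_load = 400, .min_load = 100,
		.this_spare = 50, .most_spare = 60, .task_util = 0,
	};

	/* Without the fork bypass, the spare-capacity branch fires and the
	 * task stays local; with it, the least-loaded policy spreads it. */
	printf("wake path leaves local group: %d\n", pick_remote_group(&s, 0));
	printf("fork path leaves local group: %d\n",
	       pick_remote_group(&s, SD_BALANCE_FORK));
	return 0;
}

Note that the kernel's actual return convention is inverted relative to this
sketch (find_idlest_group() returns NULL to stay local and a sched_group
pointer to migrate); the sketch only illustrates why a zero task_util lets
the spare-capacity branches effectively short-circuit the least-loaded
comparison during fork.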
