
Commit 623b519

Brendan Jackman authored and pundiramit committed
UPSTREAM: sched/fair: Sync task util before slow-path wakeup
We use task_util() in find_idlest_group() via capacity_spare_wake().
This task_util() is updated in wake_cap(). However, wake_cap() is not
the only reason for ending up in find_idlest_group() - we could have
been sent there by wake_wide(). So explicitly sync the task util with
prev_cpu when we are about to head to find_idlest_group().

We could simply do this at the beginning of select_task_rq_fair()
(i.e. irrespective of whether we're heading to select_idle_sibling()
or find_idlest_group() & co), but I didn't want to slow down the
select_idle_sibling() path more than necessary.

Don't do this during fork balancing: we won't need the task_util, and
we'd just clobber the last_update_time, which is supposed to be 0.

Change-Id: I935f4bfdfec3e8b914457aac3387ce264d5fd484
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andres Oportus <andresoportus@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Link: http://lkml.kernel.org/r/20170808095519.10077-1-brendan.jackman@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit ea16f0ea6c3d tip:sched/core)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
1 parent eea0ea6 commit 623b519

1 file changed

Lines changed: 9 additions & 0 deletions

kernel/sched/fair.c

@@ -6757,6 +6757,15 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 			new_cpu = cpu;
 	}
 
+	if (sd && !(sd_flag & SD_BALANCE_FORK)) {
+		/*
+		 * We're going to need the task's util for capacity_spare_wake
+		 * in find_idlest_group. Sync it up to prev_cpu's
+		 * last_update_time.
+		 */
+		sync_entity_load_avg(&p->se);
+	}
+
 	if (!sd) {
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
