Commit 34bf123

Peter Zijlstra authored and gregkh committed
sched/fair: Fix effective_load() to consistently use smoothed load
commit 7dd4912594daf769a46744848b05bd5bc6d62469 upstream.

Starting with the following commit:

  fde7d22 ("sched/fair: Fix overly small weight for interactive group entities")

calc_tg_weight() doesn't compute the right value as expected by
effective_load().

The difference is in the 'correction' term. In order to ensure
\Sum rw_j >= rw_i we cannot use tg->load_avg directly, since that might
be lagging a correction on the current cfs_rq->avg.load_avg value.
Therefore we use:

  tg->load_avg - cfs_rq->tg_load_avg_contrib + cfs_rq->avg.load_avg

Now, per the referenced commit, calc_tg_weight() doesn't use
cfs_rq->avg.load_avg, as is later used in @w, but uses
cfs_rq->load.weight instead.

So stop using calc_tg_weight() and do it explicitly.

The effects of this bug are wake_affine() making randomly poor choices
in cgroup-intense workloads.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fde7d22 ("sched/fair: Fix overly small weight for interactive group entities")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
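The correction term in plain terms: tg->load_avg may still carry this
cfs_rq's stale contribution, so the fix subtracts the stale
tg_load_avg_contrib and adds back the current cfs_rq->avg.load_avg,
guaranteeing that W (the group total plus @wg) never falls below w
(this runqueue's share plus @wl). Below is a minimal user-space sketch
of that correction; the struct layouts and the compute_weights() helper
are simplified stand-ins for illustration (e.g. tg->load_avg is a plain
long here rather than an atomic_long_t), not the kernel's definitions.

	#include <stdio.h>

	struct task_group {
		long load_avg;			/* \Sum of contributions; may lag */
	};

	struct cfs_rq {
		struct task_group *tg;
		long avg_load_avg;		/* stand-in for cfs_rq->avg.load_avg */
		long tg_load_avg_contrib;	/* last value contributed to tg->load_avg */
	};

	/* One iteration of the fixed effective_load() weight setup. */
	static void compute_weights(struct cfs_rq *cfs_rq, long wl, long wg,
				    long *Wp, long *wp)
	{
		long w = cfs_rq->avg_load_avg;
		long W;

		/* W = @wg + \Sum rw_j, from the (possibly stale) tg sum */
		W = wg + cfs_rq->tg->load_avg;

		/*
		 * Ensure \Sum rw_j >= rw_i: swap the stale contribution
		 * for the current per-runqueue average.
		 */
		W -= cfs_rq->tg_load_avg_contrib;
		W += w;

		/* w = rw_i + @wl */
		w += wl;

		*Wp = W;
		*wp = w;
	}

	int main(void)
	{
		struct task_group tg = { .load_avg = 1024 };
		struct cfs_rq cfs_rq = {
			.tg = &tg,
			.avg_load_avg = 512,		/* current load */
			.tg_load_avg_contrib = 256,	/* stale contribution */
		};
		long W, w;

		compute_weights(&cfs_rq, 128, 0, &W, &w);
		printf("W=%ld w=%ld\n", W, w);
		return 0;
	}

With these example numbers, W = 0 + 1024 - 256 + 512 = 1280 and
w = 512 + 128 = 640, so W >= w holds even though the tg sum lagged the
per-runqueue average; using tg->load_avg directly could have produced
W < w.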
1 parent d29e5fa commit 34bf123

1 file changed: kernel/sched/fair.c

Lines changed: 9 additions & 6 deletions
@@ -687,8 +687,6 @@ void init_entity_runnable_average(struct sched_entity *se)
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
-static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
 #else
 void init_entity_runnable_average(struct sched_entity *se)
 {
@@ -4594,19 +4592,24 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		return wl;
 
 	for_each_sched_entity(se) {
-		long w, W;
+		struct cfs_rq *cfs_rq = se->my_q;
+		long W, w = cfs_rq_load_avg(cfs_rq);
 
-		tg = se->my_q->tg;
+		tg = cfs_rq->tg;
 
 		/*
 		 * W = @wg + \Sum rw_j
 		 */
-		W = wg + calc_tg_weight(tg, se->my_q);
+		W = wg + atomic_long_read(&tg->load_avg);
+
+		/* Ensure \Sum rw_j >= rw_i */
+		W -= cfs_rq->tg_load_avg_contrib;
+		W += w;
 
 		/*
 		 * w = rw_i + @wl
 		 */
-		w = cfs_rq_load_avg(se->my_q) + wl;
+		w += wl;
 
 		/*
 		 * wl = S * s'_i; see (2)
