Commit 926b5b1

credppundiramit authored and committed
BACKPORT: sched/fair: Make it possible to account fair load avg consistently
While set_task_rq_fair() was introduced in mainline by commit ad936d8658fd ("sched/fair: Make it possible to account fair load avg consistently"), in this tree the function ended up being introduced by the backport of commit 09a43ace1f98 ("sched/fair: Propagate load during synchronous attach/detach"). The problem (apart from the confusion introduced by the backport) is that set_task_rq_fair() is currently not called at all. Fix the problem by backporting commit ad936d8658fd ("sched/fair: Make it possible to account fair load avg consistently") again.

Original change log:

The current code accounts for the time a task was absent from the fair class (per ATTACH_AGE_LOAD). However, it does not work correctly when a task got migrated or moved to another cgroup while outside of the fair class.

This patch tries to address that by aging on migration. We locklessly read the 'last_update_time' stamp from both the old and new cfs_rq, age the load up to the old time, and set it to the new time. These timestamps should in general not be more than 1 tick apart from one another, so there is a definite bound on things.

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Changelog, a few edits and !SMP build fix ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1445616981-29904-2-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry-picked from ad936d8658fd348338cb7d42c577dac77892b074)
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Change-Id: I17294ab0ada3901d35895014715fd60952949358
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
1 parent 6c70907 commit 926b5b1
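
For context, the function this change finally wires up already exists in kernel/sched/fair.c in this tree (brought in by the backport of 09a43ace1f98). The following is a simplified sketch of the aging logic as mainline ad936d8658fd implements it, showing only the CONFIG_64BIT path; on 32-bit, both stamps are reread through a copy/smp_rmb() loop until a consistent pair is observed. Treat this as orientation, not necessarily the exact code in this tree.

/* Sketch of set_task_rq_fair() per mainline ad936d8658fd (simplified). */
void set_task_rq_fair(struct sched_entity *se,
                      struct cfs_rq *prev, struct cfs_rq *next)
{
        u64 p_last_update_time, n_last_update_time;

        if (!sched_feat(ATTACH_AGE_LOAD))
                return;

        /*
         * Nothing to age for a brand-new task (last_update_time == 0)
         * or when there is no previous cfs_rq (prev == NULL).
         */
        if (!(se->avg.last_update_time && prev))
                return;

        /* Lockless reads; the two stamps are at most ~1 tick apart. */
        p_last_update_time = prev->avg.last_update_time;
        n_last_update_time = next->avg.last_update_time;

        /* Age (decay) the entity's load average up to the old cfs_rq's clock... */
        __update_load_avg(p_last_update_time, cpu_of(rq_of(prev)),
                          &se->avg, 0, 0, NULL);
        /* ...and rebase it onto the new cfs_rq's clock. */
        se->avg.last_update_time = n_last_update_time;
}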

2 files changed: 14 additions & 1 deletion

kernel/sched/core.c

Lines changed: 4 additions & 0 deletions
@@ -2173,6 +2173,10 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	INIT_LIST_HEAD(&p->se.group_node);
 	walt_init_new_task_load(p);
 
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	p->se.cfs_rq = NULL;
+#endif
+
 #ifdef CONFIG_SCHEDSTATS
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
 #endif
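
A note on the hunk above: initializing p->se.cfs_rq to NULL in __sched_fork() means the first set_task_rq() call for a new task passes prev == NULL to set_task_rq_fair(), which then skips the aging step entirely, since a fresh task has no load history to age.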

kernel/sched/sched.h

Lines changed: 10 additions & 1 deletion
@@ -335,7 +335,15 @@ extern void sched_move_task(struct task_struct *tsk);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
-#endif
+
+#ifdef CONFIG_SMP
+extern void set_task_rq_fair(struct sched_entity *se,
+			     struct cfs_rq *prev, struct cfs_rq *next);
+#else /* !CONFIG_SMP */
+static inline void set_task_rq_fair(struct sched_entity *se,
+			     struct cfs_rq *prev, struct cfs_rq *next) { }
+#endif /* CONFIG_SMP */
+#endif /* CONFIG_FAIR_GROUP_SCHED */
 
 #else /* CONFIG_CGROUP_SCHED */
 
@@ -987,6 +995,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
 #endif
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
+	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
 	p->se.cfs_rq = tg->cfs_rq[cpu];
 	p->se.parent = tg->se[cpu];
 #endif
