
Commit 7beab85

credppundiramit authored and committed
cpufreq/sched: Consider max cpu capacity when choosing frequencies
When using schedfreq on cpus with max capacity significantly smaller than 1024, the tick update uses non-normalised capacities. This leads to selecting an incorrect OPP, since we were scaling the frequency as if the max achievable capacity was 1024 rather than the max for that particular cpu or group. As a result, a cpu whose max capacity is significantly smaller than 1024 could get stuck at the lowest OPP, unable to generate enough utilisation to climb out.

Instead, normalise the capacity into the range 0-1024 in the tick, so that when we later select a frequency we get the correct one. Comments are also updated to make the normalisation requirement clearer.

Change-Id: Id84391c7ac015311002ada21813a353ee13bee60
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
1 parent 926b5b1 commit 7beab85

3 files changed

Lines changed: 10 additions & 2 deletions


kernel/sched/core.c

Lines changed: 4 additions & 0 deletions
```diff
@@ -2991,7 +2991,9 @@ static void sched_freq_tick_pelt(int cpu)
 	 * utilization and to harm its performance the least, request
 	 * a jump to a higher OPP as soon as the margin of free capacity
 	 * is impacted (specified by capacity_margin).
+	 * Remember CPU utilization in sched_capacity_reqs should be normalised.
 	 */
+	cpu_utilization = cpu_utilization * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
 	set_cfs_cpu_capacity(cpu, true, cpu_utilization);
 }
```

```diff
@@ -3018,7 +3020,9 @@ static void sched_freq_tick_walt(int cpu)
 	 * It is likely that the load is growing so we
 	 * keep the added margin in our request as an
 	 * extra boost.
+	 * Remember CPU utilization in sched_capacity_reqs should be normalised.
 	 */
+	cpu_utilization = cpu_utilization * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
 	set_cfs_cpu_capacity(cpu, true, cpu_utilization);
 }
```

kernel/sched/fair.c

Lines changed: 2 additions & 2 deletions
@@ -4671,7 +4671,7 @@ static void update_capacity_of(int cpu)
46714671
if (!sched_freq())
46724672
return;
46734673

4674-
/* Convert scale-invariant capacity to cpu. */
4674+
/* Normalize scale-invariant capacity to cpu. */
46754675
req_cap = boosted_cpu_util(cpu);
46764676
req_cap = req_cap * SCHED_CAPACITY_SCALE / capacity_orig_of(cpu);
46774677
set_cfs_cpu_capacity(cpu, true, req_cap);
@@ -4864,7 +4864,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
48644864
if (rq->cfs.nr_running)
48654865
update_capacity_of(cpu_of(rq));
48664866
else if (sched_freq())
4867-
set_cfs_cpu_capacity(cpu_of(rq), false, 0);
4867+
set_cfs_cpu_capacity(cpu_of(rq), false, 0); /* no normalization required for 0 */
48684868
}
48694869
}
48704870

kernel/sched/sched.h

Lines changed: 4 additions & 0 deletions
```diff
@@ -1630,6 +1630,10 @@ static inline bool sched_freq(void)
 	return static_key_false(&__sched_freq);
 }
 
+/*
+ * sched_capacity_reqs expects capacity requests to be normalised.
+ * All capacities should sum to the range of 0-1024.
+ */
 DECLARE_PER_CPU(struct sched_capacity_reqs, cpu_sched_capacity_reqs);
 void update_cpu_capacity_request(int cpu, bool request);
```
