With PELT we track some metrics across the various scheduler entities, from
individual tasks to task-group slices to CPU runqueues. As the basis for this
we use an Exponentially Weighted Moving Average (EWMA), each period (1024us)
is decayed such that y^32 = 0.5. That is, the most recent 32ms contribute
half, while the rest of history contributes the other half.
Note that blocked tasks still contribute to the aggregates (task-group slices
and CPU runqueues), which reflects their expected contribution when they
resume running.
For simple DVFS architectures (where software is in full control) we trivially
compute the ratio as::

            f_cur
  r_dvfs := -----
            f_max

For more dynamic systems where the hardware is in control of DVFS we use
hardware counters (Intel APERF/MPERF, ARMv8.4-AMU) to provide us this ratio.
In the case of Intel, we use::

           APERF
  f_cur := ----- * P0
           MPERF

             4C-turbo;  if available and turbo enabled
  f_max := { 1C-turbo;  if turbo enabled
             P0;        otherwise

                    f_cur
  r_dvfs := min( 1, ----- )
                    f_max
For more detail see:

 - kernel/sched/pelt.h:update_rq_clock_pelt()
 - arch/x86/kernel/smpboot.c:"APERF/MPERF frequency ratio computation."
 - Documentation/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utilization"
Because periodic tasks have their averages decayed while they sleep, even
though when running their expected utilization will be the same, they suffer a
(DVFS) ramp-up after they are running again.

To alleviate this (a default enabled option) UTIL_EST drives an Infinite
Impulse Response (IIR) EWMA with the 'running' value on dequeue -- when it is
highest. A further default enabled option UTIL_EST_FASTUP modifies the IIR
filter to instantly increase and only decay on decrease.
XXX IO-wait: when the update is due to a task wakeup from IO-completion we
boost 'util' so that IO-bound workloads see a higher frequency sooner.
This frequency is then used to select a P-state/OPP or directly munged into a
CPPC style request to the hardware.
Because these callbacks come straight from the scheduler, the DVFS hardware
interaction should be 'fast' and non-blocking. Schedutil supports
rate-limiting DVFS requests for when hardware interaction is slow and
expensive; this reduces quality.
 - In low-load scenarios, where DVFS is most relevant, the 'running' numbers
   will closely reflect utilization.
 - In saturated scenarios task movement will cause some transient dips;
   suppose we have a CPU saturated with 4 tasks, then when we migrate a task
   to an idle CPU, the old CPU will have a 'running' value of 0.75 while the
   new CPU will gain 0.25. This is inevitable and time progression will
   correct this. XXX do we still guarantee f_max due to no idle-time?
 - Much of the above is about avoiding DVFS dips, and independent DVFS domains
   having to re-learn / ramp-up when load shifts.