#
2a73991b |
| 24-Jan-2013 |
Li Zefan <lizefan@huawei.com> |
sched: remove redundant NULL cgroup check in task_group_path()
A task_group won't be online (thus no one can see it) until cpu_cgroup_css_online(), and at that time tg->css.cgroup has been initialized, so this NULL check is redundant.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
Revision tags: v3.8-rc4, v3.8-rc3, v3.8-rc2, v3.8-rc1, v3.7, v3.7-rc8, v3.7-rc7, v3.7-rc6, v3.7-rc5, v3.7-rc4, v3.7-rc3, v3.7-rc2, v3.7-rc1 |
|
#
82958366 |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Replace update_shares weight distribution with per-entity computation
Now that the machinery is in place to compute contributed load in a bottom-up fashion, replace the shares distribution code within update_shares() accordingly.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.061208672@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
#
bb17f655 |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Normalize tg load contributions against runnable time
Entities of equal weight should receive equitable distribution of cpu time. This is challenging in the case of a task_group's shares as execution may be occurring on multiple cpus simultaneously.
To handle this we divide up the shares into weights proportionate with the load on each cfs_rq. This does not, however, account for the fact that the sum of the parts may be less than one cpu, so we need to normalize:
  load(tg) = min(runnable_avg(tg), 1) * tg->shares
where runnable_avg is the aggregate time in which the task_group had runnable children.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.930124292@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
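The normalization above can be pictured with a small user-space sketch; the function name and the 1024-based fixed-point scale are illustrative stand-ins, not the kernel's actual helpers:

  #include <stdio.h>

  #define LOAD_SCALE 1024ULL   /* 1.0 in the illustrative fixed-point scale */

  /* load(tg) = min(runnable_avg(tg), 1) * tg->shares */
  static unsigned long long tg_normalized_load(unsigned long long runnable_avg,
                                               unsigned long long shares)
  {
          if (runnable_avg > LOAD_SCALE)
                  runnable_avg = LOAD_SCALE;          /* min(runnable_avg, 1) */
          return shares * runnable_avg / LOAD_SCALE;
  }

  int main(void)
  {
          /* a group runnable 25% of the time contributes a quarter of its shares */
          printf("%llu\n", tg_normalized_load(256, 1024));    /* prints 256 */
          return 0;
  }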
|
#
c566e8e9 |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Aggregate total task_group load
Maintain a global running sum of the average load seen on each cfs_rq belonging to each task group so that it may be used in calculating an appropriate shares:weight distribution.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.792901086@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
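A minimal sketch of the delta-based aggregation described above, with invented structure and field names (the kernel's actual data structures differ):

  #include <stdatomic.h>

  struct task_group { _Atomic long load_avg; };                   /* group-wide sum */
  struct cfs_rq     { long load_contrib; struct task_group *tg; };

  /* fold a cfs_rq's new average load into the global task_group sum as a delta */
  static void update_tg_load_contrib(struct cfs_rq *cfs_rq, long new_contrib)
  {
          long delta = new_contrib - cfs_rq->load_contrib;

          cfs_rq->load_contrib = new_contrib;
          atomic_fetch_add(&cfs_rq->tg->load_avg, delta);
  }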
|
#
9ee474f5 |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Maintain the load contribution of blocked entities
We are currently maintaining:
runnable_load(cfs_rq) = \Sum task_load(t)
For all running children t of cfs_rq. While this can be naturally updated for tasks in a runnable state (as they are scheduled), it does not account for the load contributed by blocked task entities.
This can be solved by introducing a separate accounting for blocked load:
blocked_load(cfs_rq) = \Sum runnable(b) * weight(b)
Obviously we do not want to iterate over all blocked entities to account for their decay; we instead observe that:
runnable_load(t) = \Sum p_i*y^i
and that to account for an additional idle period we only need to compute:
y*runnable_load(t).
This means that we can compute all blocked entities at once by evaluating:
blocked_load(cfs_rq)' = y * blocked_load(cfs_rq)
Finally we maintain a decay counter so that when a sleeping entity re-awakens we can determine how much of its load should be removed from the blocked sum.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.585389902@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
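One way to picture the decay described above is a small floating-point sketch; the kernel works in fixed point with precomputed decay tables rather than doubles, and the names and values here are illustrative only:

  #include <stdio.h>

  /* y chosen so that y^32 = 1/2; approximated here as 2^(-1/32) */
  static double decay_load(double load, unsigned int periods)
  {
          while (periods--)
                  load *= 0.978572;
          return load;
  }

  int main(void)
  {
          double blocked_load = 2048.0;       /* \Sum runnable(b) * weight(b) */
          unsigned int decay_count = 8;       /* idle periods elapsed */

          /* all blocked entities decay at once: blocked_load' = y^periods * blocked_load */
          blocked_load = decay_load(blocked_load, decay_count);

          /* on wakeup, decay the sleeper's stale contribution by the same count
             before removing it from the blocked sum */
          double contrib = decay_load(512.0, decay_count);
          blocked_load -= contrib;

          printf("%.1f %.1f\n", blocked_load, contrib);
          return 0;
  }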
|
#
2dac754e |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Aggregate load contributed by task entities on parenting cfs_rq
For a given task t, we can compute its contribution to load as:
task_load(t) = runnable_avg(t) * weight(t)
On a parenting cfs_rq we can then aggregate:
runnable_load(cfs_rq) = \Sum task_load(t), for all runnable children t
Maintain this bottom up, with task entities adding their contributed load to the parenting cfs_rq sum. When a task entity's load changes we add the same delta to the maintained sum.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.514678907@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
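As a rough user-space sketch of the bottom-up maintenance described above (structure and field names are invented for illustration):

  /* trimmed-down stand-ins for the kernel's sched_entity / cfs_rq */
  struct entity { unsigned long weight, runnable_avg, load_contrib; };
  struct cfs_rq { unsigned long runnable_load_avg; };

  /* task_load(t) = runnable_avg(t) * weight(t); only the delta is propagated upward */
  static void update_entity_load_contrib(struct cfs_rq *cfs_rq, struct entity *se)
  {
          unsigned long new_contrib = se->weight * se->runnable_avg / 1024;

          cfs_rq->runnable_load_avg += new_contrib - se->load_contrib;
          se->load_contrib = new_contrib;
  }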
|
#
18bf2805 |
| 04-Oct-2012 |
Ben Segall <bsegall@google.com> |
sched: Maintain per-rq runnable averages
Since runqueues do not have a corresponding sched_entity we instead embed a sched_avg structure directly.
Signed-off-by: Ben Segall <bsegall@google.com>
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.442637130@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
#
9d85f21c |
| 04-Oct-2012 |
Paul Turner <pjt@google.com> |
sched: Track the runnable average on a per-task entity basis
Instead of tracking the averaged load parented by a cfs_rq, we can track entity load directly, with the load for a given cfs_rq then being the sum of its children.
To do this we represent the historical contribution to runnable average within each trailing 1024us of execution as the coefficients of a geometric series.
We can express this for a given task t as:
runnable_sum(t) = \Sum u_i * y^i
runnable_avg_period(t) = \Sum 1024 * y^i
load(t) = weight_t * runnable_sum(t) / runnable_avg_period(t)
where u_i is the usage in the i-th most recent 1024us period (approximately 1ms) and y is chosen such that y^k = 1/2. We currently choose k to be 32, which roughly translates to about a sched period.
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.372695337@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
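A compact floating-point sketch of the geometric series above; the kernel uses fixed point with precomputed decay tables, so everything here (names, the use of doubles) is illustrative only:

  #include <stdio.h>

  #define PERIOD_US 1024          /* one averaging period, ~1ms */
  #define DECAY_Y   0.978572      /* y such that y^32 ~= 1/2 */

  struct runnable_avg { double runnable_sum, period_sum; };

  /* fold one elapsed period into the series: decay the history, add new usage u_i */
  static void accumulate_period(struct runnable_avg *ra, unsigned int usage_us)
  {
          ra->runnable_sum = ra->runnable_sum * DECAY_Y + usage_us;
          ra->period_sum   = ra->period_sum   * DECAY_Y + PERIOD_US;
  }

  int main(void)
  {
          struct runnable_avg ra = { 0.0, 0.0 };
          unsigned long weight = 1024;

          for (int i = 0; i < 64; i++)
                  accumulate_period(&ra, 512);    /* 50% busy in each period */

          /* load(t) = weight_t * runnable_sum(t) / runnable_avg_period(t) */
          printf("%.0f\n", weight * ra.runnable_sum / ra.period_sum);   /* ~512 */
          return 0;
  }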
|
Revision tags: v3.6, v3.6-rc7, v3.6-rc6, v3.6-rc5, v3.6-rc4, v3.6-rc3, v3.6-rc2, v3.6-rc1, v3.5, v3.5-rc7, v3.5-rc6, v3.5-rc5, v3.5-rc4, v3.5-rc3, v3.5-rc2, v3.5-rc1, v3.4 |
|
#
13e099d2 |
| 14-May-2012 |
Peter Zijlstra <a.p.zijlstra@chello.nl> |
sched/debug: Fix printing large integers on 32-bit platforms
Some numbers like nr_running and nr_uninterruptible are fundamentally unsigned, since it's impossible to have a negative number of tasks, yet we still print them as signed to easily recognise the underflow condition.
rq->nr_uninterruptible has 'special' accounting and can in fact very easily become negative on a per-cpu basis.
It was noted that since the P() macro assumes things are long long, and the promotion of unsigned 'int/long' to long long on 32-bit doesn't sign extend, we print silly large numbers instead of the easier to read signed numbers.
Therefore extend the P() macro to not require the sign extension.
Reported-by: Diwakar Tundlam <dtundlam@nvidia.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-gk5tm8t2n4ix2vkpns42uqqp@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
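The promotion pitfall can be reproduced in plain C; the variable below merely mimics an unsigned per-cpu counter that has underflowed:

  #include <stdio.h>

  int main(void)
  {
          unsigned long nr_uninterruptible = -5UL;    /* underflowed unsigned counter */

          /* converting unsigned long to long long zero-extends on 32-bit,
             so this prints 4294967291 there instead of -5 */
          printf("%lld\n", (long long)nr_uninterruptible);

          /* going through the signed type of the same width first keeps the
             readable -5 on both 32-bit and 64-bit */
          printf("%lld\n", (long long)(long)nr_uninterruptible);
          return 0;
  }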
|
Revision tags: v3.4-rc7, v3.4-rc6, v3.4-rc5 |
|
#
c82513e5 |
| 26-Apr-2012 |
Peter Zijlstra <a.p.zijlstra@chello.nl> |
sched: Change rq->nr_running to unsigned int
Since there's a PID space limit of 30 bits (see futex.h:FUTEX_TID_MASK) and allocating that many tasks (assuming a lower bound of 2 pages per task) would still take 8T of memory, it seems reasonable to say that unsigned int is sufficient for rq->nr_running.
When we do get anywhere near that number of tasks I suspect other things would go funny; load-balancer load computations would really need to be hoisted to 128-bit, etc.
So save a few bytes and convert rq->nr_running and friends to unsigned int.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-y3tvyszjdmbibade5bw8zl81@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
Revision tags: v3.4-rc4, v3.4-rc3, v3.4-rc2, v3.4-rc1, v3.3, v3.3-rc7, v3.3-rc6, v3.3-rc5, v3.3-rc4, v3.3-rc3, v3.3-rc2 |
|
#
30fd049a |
| 24-Jan-2012 |
Rakib Mullick <rakib.mullick@gmail.com> |
sched: Remove sched_switch
Currently we don't utilize the sched_switch field anymore.
But, simply removing sched_switch field from the middle of the sched_stat output will break tools.
So, to stay compatible we hardcode it to zero and remove the field from the scheduler data structures.
Update the schedstat documentation accordingly.
Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1327422836.27181.5.camel@localhost.localdomain
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
Revision tags: v3.3-rc1, v3.2, v3.2-rc7, v3.2-rc6, v3.2-rc5, v3.2-rc4, v3.2-rc3, v3.2-rc2 |
|
#
391e43da |
| 15-Nov-2011 |
Peter Zijlstra <a.p.zijlstra@chello.nl> |
sched: Move all scheduler bits into kernel/sched/
There are too many sched*.[ch] files in kernel/, give them their own directory.
(No code changed, other than Makefile glue added.)
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
#
190a7f90 |
| 02-Jun-2021 |
Dietmar Eggemann <dietmar.eggemann@arm.com> |
sched/fair: Fix util_est UTIL_AVG_UNCHANGED handling
commit 68d7a190682aa4eb02db477328088ebad15acc83 upstream.
The util_est internal UTIL_AVG_UNCHANGED flag which is used to prevent unnecessary util_est updates uses the LSB of util_est.enqueued. It is exposed via _task_util_est() (and task_util_est()).
Commit 92a801e5d5b7 ("sched/fair: Mask UTIL_AVG_UNCHANGED usages") mentions that the LSB is lost for util_est resolution but find_energy_efficient_cpu() checks if task_util_est() returns 0 to return prev_cpu early.
_task_util_est() returns the max value of util_est.ewma and util_est.enqueued or'ed with UTIL_AVG_UNCHANGED. So task_util_est(), returning the max of task_util() and _task_util_est(), will never return 0 under the default SCHED_FEAT(UTIL_EST, true).
To fix this, use the MSB of util_est.enqueued instead and keep the flag util_est internal, i.e. don't export it via _task_util_est(). The maximal possible util_avg value for a task is 1024, so the MSB of 'unsigned int util_est.enqueued' isn't used to store a util value.
As a caveat, the code behind the util_est_se trace point has to filter UTIL_AVG_UNCHANGED to see the real util_est.enqueued value, which should be easy to do.
This also fixes an issue reported by Xuewen Yan that util_est_update() only used UTIL_AVG_UNCHANGED for the subtrahend of the equation:
  last_enqueued_diff = ue.enqueued - (task_util() | UTIL_AVG_UNCHANGED)
Fixes: b89997aa88f0b ("sched/pelt: Fix task util_est update filtering")
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Xuewen Yan <xuewen.yan@unisoc.com>
Reviewed-by: Vincent Donnefort <vincent.donnefort@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lore.kernel.org/r/20210602145808.1562603-1-dietmar.eggemann@arm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
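A tiny illustration of the flag placement discussed above; the constant value and helper are stand-ins, not the kernel's definitions:

  #include <stdio.h>

  /* task util never exceeds 1024, so the MSB of a 32-bit field is free for the flag */
  #define UTIL_AVG_UNCHANGED 0x80000000u

  static unsigned int util_value(unsigned int enqueued)
  {
          return enqueued & ~UTIL_AVG_UNCHANGED;      /* strip the flag, keep the util */
  }

  int main(void)
  {
          unsigned int enqueued = 0u | UTIL_AVG_UNCHANGED;    /* util 0 with flag set */

          /* with the flag in the LSB a zero util read back as 1; with it in the
             MSB callers see the true 0 and can take the early-return path */
          printf("%u\n", util_value(enqueued));
          return 0;
  }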
|
#
94f1bdf0 |
| 15-Apr-2021 |
Waiman Long <longman@redhat.com> |
sched/debug: Fix cgroup_path[] serialization
[ Upstream commit ad789f84c9a145f8a18744c0387cec22ec51651e ]
The handling of a sysrq key can be activated by echoing the key to /proc/sysrq-trigger or via the magic key sequence typed into a terminal that is connected to the system in some way (serial, USB or other means). In the former case, the handling is done in a user context. In the latter case, it is likely to be in an interrupt context.
Currently in print_cpu() of kernel/sched/debug.c, sched_debug_lock is taken with interrupts disabled for the whole duration of the calls to print_*_stats() and print_rq(), which could last for quite some time if the information dump happens on the serial console. If the system has many cpus and the sched_debug_lock is somehow busy (e.g. parallel sysrq-t), the system may hit a hard lockup panic depending on the actual serial console implementation of the system.
The purpose of sched_debug_lock is to serialize the use of the global cgroup_path[] buffer in print_cpu(). The rest of the printk calls don't need serialization from sched_debug_lock.
Calling printk() with interrupts disabled can still be problematic if multiple instances are running. Allocating a stack buffer of PATH_MAX bytes is not feasible because of the limited size of the kernel stack.
The solution implemented in this patch is to allow only one caller at a time to use the full size group_path[], while other simultaneous callers will have to use shorter stack buffers with the possibility of path name truncation. A "..." suffix will be printed if truncation may have happened.
The cgroup path name is provided for informational purpose only, so occasional path name truncation should not be a big problem.
Fixes: efe25c2c7b3a ("sched: Reinstate group names in /proc/sched_debug")
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210415195426.6677-1-longman@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
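The one-writer-with-fallback scheme can be sketched in user space with a trylock; the buffer sizes, names and the pthread locking are illustrative, not the kernel's code:

  #include <pthread.h>
  #include <stdio.h>

  static char group_path[4096];                       /* the one big shared buffer */
  static pthread_mutex_t group_path_lock = PTHREAD_MUTEX_INITIALIZER;

  static void print_group_path(const char *full_path)
  {
          char stack_buf[128];                        /* small per-caller fallback */
          int n;

          if (pthread_mutex_trylock(&group_path_lock) == 0) {
                  snprintf(group_path, sizeof(group_path), "%s", full_path);
                  printf("%s\n", group_path);
                  pthread_mutex_unlock(&group_path_lock);
          } else {
                  n = snprintf(stack_buf, sizeof(stack_buf), "%s", full_path);
                  printf("%s%s\n", stack_buf,
                         n >= (int)sizeof(stack_buf) ? "..." : "");   /* mark truncation */
          }
  }

  int main(void)
  {
          print_group_path("/sys/fs/cgroup/cpu/some/deeply/nested/group");
          return 0;
  }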
|
#
8d4d9c7b |
| 29-Oct-2020 |
Colin Ian King <colin.king@canonical.com> |
sched/debug: Fix memory corruption caused by multiple small reads of flags
Reading /proc/sys/kernel/sched_domain/cpu*/domain0/flags multiple times with small reads causes oopses with slub corruption issues because the kfree is freeing an offset from a previous allocation.
Fix this by adding in a new pointer 'buf' for the allocation and kfree, and use the temporary pointer tmp to handle memory copies of the buf offsets.
Fixes: 5b9f8ff7b320 ("sched/debug: Output SD flag names rather than their values")
Reported-by: Jeff Bastian <jbastian@redhat.com>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201029151103.373410-1-colin.king@canonical.com
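The bug pattern being fixed translates directly to user space: advancing the only pointer to an allocation and then freeing the advanced value corrupts the allocator. The sketch below uses invented names to show the "keep buf for free(), walk tmp" idea:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static void copy_flags(const char *flags)
  {
          char *buf = malloc(strlen(flags) + 1);      /* kept untouched for free() */
          char *tmp = buf;                            /* cursor for the copies */

          if (!buf)
                  return;
          strcpy(tmp, flags);
          tmp += strlen(tmp);                         /* tmp now points mid-buffer */

          printf("%s\n", buf);
          free(buf);                                  /* free the original, never tmp */
  }

  int main(void)
  {
          copy_flags("SD_BALANCE_FORK SD_BALANCE_EXEC");
          return 0;
  }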
|
Revision tags: v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8 |
|
#
848785df |
| 08-Sep-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/topology: Move sd_flag_debug out of #ifdef CONFIG_SYSCTL
The last sd_flag_debug shuffle inadvertently moved its definition within an #ifdef CONFIG_SYSCTL region. While CONFIG_SYSCTL is indeed required to produce the sched domain ctl interface (which uses sd_flag_debug to output flag names), it isn't required to run any assertion on the sched_domain hierarchy itself.
Move the definition of sd_flag_debug to a CONFIG_SCHED_DEBUG region of topology.c.
Now at long last we have:
- sd_flag_debug declared in include/linux/sched/topology.h iff CONFIG_SCHED_DEBUG=y
- sd_flag_debug defined in kernel/sched/topology.c, conditioned by:
  - CONFIG_SCHED_DEBUG, with an explicit #ifdef block
  - CONFIG_SMP, as a requirement to compile topology.c
With this change, all symbols pertaining to SD flag metadata (with the exception of __SD_FLAG_CNT) are now defined exclusively within topology.c
Fixes: 8fca9494d4b4 ("sched/topology: Move sd_flag_debug out of linux/sched/topology.h")
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200908184956.23369-1-valentin.schneider@arm.com
|
Revision tags: v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61 |
|
#
8fca9494 |
| 25-Aug-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/topology: Move sd_flag_debug out of linux/sched/topology.h
Defining an array in a header imported all over the place clearly is a daft idea, that still didn't stop me from doing it.
Leave a declaration of sd_flag_debug in topology.h and move its definition to sched/debug.c.
Fixes: b6e862f38672 ("sched/topology: Define and assign sched_domain flag metadata")
Reported-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200825133216.9163-1-valentin.schneider@arm.com
|
Revision tags: v5.8.3, v5.4.60, v5.8.2, v5.4.59 |
|
#
5b9f8ff7 |
| 17-Aug-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/debug: Output SD flag names rather than their values
Decoding the output of /proc/sys/kernel/sched_domain/cpu*/domain*/flags has always been somewhat annoying, as one needs to go fetch the bit -> name mapping from the source code itself. This encoding can be saved in a script somewhere, but that isn't safe from flags being added, removed or even shuffled around.
What matters for debugging purposes is to get *which* flags are set in a given domain; their associated value is pretty much meaningless.
Make the sd flags debug file output flag names.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: https://lore.kernel.org/r/20200817113003.20802-7-valentin.schneider@arm.com
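The idea of printing names instead of a bitmask can be shown with a short sketch; the flag list here is a hand-written subset for illustration and the bit positions are assumed, not the kernel's generated table:

  #include <stdio.h>

  static const char * const sd_flag_names[] = {
          "SD_BALANCE_NEWIDLE", "SD_BALANCE_EXEC", "SD_BALANCE_FORK", "SD_BALANCE_WAKE",
  };

  /* emit the names of the set bits instead of the raw value */
  static void print_sd_flags(unsigned long flags)
  {
          for (unsigned int i = 0; i < sizeof(sd_flag_names) / sizeof(sd_flag_names[0]); i++)
                  if (flags & (1UL << i))
                          printf("%s ", sd_flag_names[i]);
          printf("\n");
  }

  int main(void)
  {
          print_sd_flags(0x5);    /* prints SD_BALANCE_NEWIDLE SD_BALANCE_FORK */
          return 0;
  }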
|
Revision tags: v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43 |
|
#
126c2092 |
| 26-May-2020 |
Peter Zijlstra <peterz@infradead.org> |
sched: Add rq::ttwu_pending
In preparation of removing rq->wake_list, replace the !list_empty(rq->wake_list) with rq->ttwu_pending. This is not fully equivalent as this new variable is racy.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200526161908.070399698@infradead.org
|
#
9013196a |
| 19-May-2020 |
Peter Zijlstra <peterz@infradead.org> |
Merge branch 'sched/urgent'
|
Revision tags: v5.4.42, v5.4.41 |
|
#
ad32bb41 |
| 10-May-2020 |
Pavankumar Kondeti <pkondeti@codeaurora.org> |
sched/debug: Fix requested task uclamp values shown in procfs
The intention of commit 96e74ebf8d59 ("sched/debug: Add task uclamp values to SCHED_DEBUG procfs") was to print requested and effective task uclamp values. The requested values printed are read from p->uclamp, which holds the last effective values. Fix this by printing the values from p->uclamp_req.
Fixes: 96e74ebf8d59 ("sched/debug: Add task uclamp values to SCHED_DEBUG procfs")
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/1589115401-26391-1-git-send-email-pkondeti@codeaurora.org
|
Revision tags: v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33 |
|
#
9818427c |
| 15-Apr-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/debug: Make sd->flags sysctl read-only
Writing to the sysctl of a sched_domain->flags directly updates the value of the field, and goes nowhere near update_top_cache_domain(). This means that the cached domain pointers can end up containing stale data (e.g. the domain pointed to doesn't have the relevant flag set anymore).
Explicit domain walks that check for flags will be affected by the write, but this won't be in sync with the cached pointers, which will still point to the domains that were cached at the last sched_domain build.
In other words, writing to this interface is playing a dangerous game. It could be made to trigger an update of the cached sched_domain pointers when written to, but this does not seem to be worth the trouble. Make it read-only.
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200415210512.805-3-valentin.schneider@arm.com
|
#
f080d93e |
| 14-Apr-2020 |
Xie XiuQi <xiexiuqi@huawei.com> |
sched/debug: Fix trivial print_task() format
Ensure there is one space between state and task name.
w/o patch:
  runnable tasks:
  S task PID tree-key switches prio wait
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200414125721.195801-1-xiexiuqi@huawei.com
|
Revision tags: v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23 |
|
#
96e74ebf |
| 26-Feb-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/debug: Add task uclamp values to SCHED_DEBUG procfs
Requested and effective uclamp values can be a bit tricky to decipher when playing with cgroup hierarchies. Add them to a task's procfs when SCHED_DEBUG is enabled.
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20200226124543.31986-4-valentin.schneider@arm.com
|
#
9e3bf946 |
| 26-Feb-2020 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/debug: Factor out printing formats into common macros
The printing macros in debug.c keep redefining the same output format. Collect each output format in a single definition, and reuse that definition in the other macros.
While at it, add a layer of parentheses and replace printf's with the newly introduced macros.
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20200226124543.31986-3-valentin.schneider@arm.com
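A minimal sketch of the "one format definition, reused by the other macros" idea; the macro names below are made up for illustration and do not claim to match debug.c:

  #include <stdio.h>

  /* the single place the format string lives */
  #define __PS(name, val)  printf("%-45s:%21lld\n", (name), (long long)(val))
  /* the convenience macros all reuse it */
  #define PS(field)        __PS(#field, field)

  int main(void)
  {
          long long nr_switches = 12345;

          PS(nr_switches);        /* one format definition, many call sites */
          return 0;
  }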
|