History log of /openbmc/linux/kernel/sched/core.c (Results 1 – 25 of 4015)
Revision  Date  Author  Comments
# 0f9b4c3c 11-Mar-2025 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.81' into for/openbmc/dev-6.6

This is the 6.6.81 stable release

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmfLFMMACgkQONu9yGCS
# aT6riRAAjyTT+IaZXhdNjx56GXicO4l4Spr9VmgecZWBJceNkDETDMof3dhhIHeV
# u+wKZKyL/ZBfElLqvMbOUKUK5aw4odI5d1Ri8C3ViYYrCsSTJNXUZ27YnF2xHyql
# Aw+otwwNTqDEJENFCLtdXhFBPinBXop5piv/eQeP5qHa1D8OGzl0Ntb1Lan5OIf3
# PIMoX7T5CQUd9cuUiJE9uX8t/VD18hqOrivOUGy7avyIOvaE7GaDt3xEz5Pw+b4l
# D+VOaskhFoLcehDlOmLxbKmdUFYMG/h1eSVE9azd9scFRjU5EEbRtOfkMFb8XE6D
# rgLY39Zol7dqdxBerqNvmbj1CRFIQfxoOSadKjiSRyZ2POttfSs3IADGOTRj/T/k
# +Hj1kFs3wLFoQ5RDRKDrzCq91pC2RcXFe3zQZWxla19Dhx3DwZZa5WlfHzIuFEZX
# mfJ7VR4k/ue9EFyvNGHZuAWfB0OcCCZ2WAlc8mXfvp3cIA5arEGvPGnffr/3/jCs
# gXxt186um11iSNYzVT1Ia6b7xz33EtHjMXUAnixlyuDK9dUPg6m48KIcTta5WPoP
# 7cqPNKDb59sua26TVyf91nb7r+7jp4Zfb9Be/7BxZoLTLxwPeD4gl7Sk7PYDycIk
# xmWetnoFhE8aNHJ5hId2LC7zd6jxn8xYHJQffJJ4yCU2ZmRBKI8=
# =VWtK
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 08 Mar 2025 02:16:11 ACDT
# gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E
# gpg: Good signature from "Greg Kroah-Hartman <gregkh@kernel.org>" [marginal]
# gpg: gregkh@kernel.org: Verified 10 signatures in the past 6 weeks. Encrypted
# 0 messages.
# gpg: Warning: you have yet to encrypt a message to this key!
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 6092 693E


Revision tags: v6.6.81, v6.6.80, v6.6.79, v6.6.78, v6.6.77, v6.6.76, v6.6.75, v6.6.74, v6.6.73, v6.6.72, v6.6.71, v6.12.9, v6.6.70, v6.12.8, v6.6.69, v6.12.7, v6.6.68, v6.12.6, v6.6.67
# 68786ab0 16-Dec-2024 Thomas Gleixner <tglx@linutronix.de>

sched/core: Prevent rescheduling when interrupts are disabled

commit 82c387ef7568c0d96a918a5a78d9cad6256cfa15 upstream.

David reported a warning observed while loop testing kexec jump:

Interrupts enabled after irqrouter_resume+0x0/0x50
WARNING: CPU: 0 PID: 560 at drivers/base/syscore.c:103 syscore_resume+0x18a/0x220
kernel_kexec+0xf6/0x180
__do_sys_reboot+0x206/0x250
do_syscall_64+0x95/0x180

The corresponding interrupt flag trace:

hardirqs last enabled at (15573): [<ffffffffa8281b8e>] __up_console_sem+0x7e/0x90
hardirqs last disabled at (15580): [<ffffffffa8281b73>] __up_console_sem+0x63/0x90

That means __up_console_sem() was invoked with interrupts enabled. Further
instrumentation revealed that in the interrupt disabled section of kexec
jump one of the syscore_suspend() callbacks woke up a task, which set the
NEED_RESCHED flag. A later callback in the resume path invoked
cond_resched() which in turn led to the invocation of the scheduler:

__cond_resched+0x21/0x60
down_timeout+0x18/0x60
acpi_os_wait_semaphore+0x4c/0x80
acpi_ut_acquire_mutex+0x3d/0x100
acpi_ns_get_node+0x27/0x60
acpi_ns_evaluate+0x1cb/0x2d0
acpi_rs_set_srs_method_data+0x156/0x190
acpi_pci_link_set+0x11c/0x290
irqrouter_resume+0x54/0x60
syscore_resume+0x6a/0x200
kernel_kexec+0x145/0x1c0
__do_sys_reboot+0xeb/0x240
do_syscall_64+0x95/0x180

This is a long standing problem, which probably got more visible with
the recent printk changes. Something does a task wakeup and the
scheduler sets the NEED_RESCHED flag. cond_resched() sees it set and
invokes schedule() from a completely bogus context. The scheduler
enables interrupts after context switching, which causes the above
warning at the end.

Quite a few of the code paths in syscore_suspend()/resume() can result in
triggering a wakeup with exactly the same consequences. They might not
have done so yet, but as they share a lot of code with normal operations
it's just a question of time.

The problem only affects the PREEMPT_NONE and PREEMPT_VOLUNTARY scheduling
models. Full preemption is not affected as cond_resched() is disabled and
the preemption check preemptible() takes the interrupt disabled flag into
account.

Cure the problem by adding a corresponding check into cond_resched().
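
The shape of that check is roughly the following (a sketch, not the
verbatim patch; the surrounding function is abridged):

    int __sched __cond_resched(void)
    {
            /*
             * Scheduling from an interrupts-disabled region is bogus: the
             * context-switch path re-enables interrupts, which produces the
             * warning above. Refuse to reschedule in that case.
             */
            if (should_resched(0) && !irqs_disabled()) {
                    preempt_schedule_common();
                    return 1;
            }
            return 0;
    }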

Reported-by: David Woodhouse <dwmw@amazon.co.uk>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Closes: https://lore.kernel.org/all/7717fe2ac0ce5f0a2c43fdab8b11f4483d54a2a4.camel@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 360823a0 17-Feb-2025 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.78' into for/openbmc/dev-6.6

This is the 6.6.78 stable release

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmey9hUACgkQONu9yGCS
# aT7Ecw//Ts3+DVyM1iMAUj6zZHQ7+UVqRxvVQ0yJwe1gzECrasxhu+ack0MDuRXb
# RTOHzrVkpHrOZ58T0kkkp4DVea4bq8kpq9wnnOxpta4SzQYuwxuypxw9ZML2u8kR
# A77akcb4MPBpeTwlLUTEX1K2CrF+Wfz9ZGauJRTmrnWogJe1hZWTxr3tc9TqGeMA
# tk93g9kWy7hxxubPJpAUbNVmWbpm/TfZuMAEyktpNf8E0DLukHjr0If85t3BC0KZ
# kxLSCN05ZmWoZVQjmaerS8pXFvwj08OeRbUtW+b4oaraUV7vsrwxW/WcOqb6vIBn
# AEohV3w7CpFj0moRPXJO+UuxmP5TrSCIGUaEGjnrMCPJfjxwnmFYaf+9DYi3bR4H
# U8UyU55PhGTWlWg238Qp64KsDn41M/rlNKOiPEGq08+1Qnhoj4LWfFFHzLhO8y4R
# xLfsOzu6cHgEUnMKPTV6TnkWSCEL9t51wgzsqa7iKdO7kyAL1YCb4+LkskJAqUzW
# t3i8Sw8nygE7cKQ5eHzG6CClKEfgxtMGiR63gan9npEUgcFbzoVP0uz9RYz7+0Vz
# 5oE2ZSGXSoiJNWhdjJVrr1gqg/TwrzmVjsmUEnf4uTDABh9GXL+g+UZHGSMvvvYi
# T8gUY4aFwXO5fGKN1RW8RXJSbJr4nKYde2s/h4ZT1EwRVdj5Zcc=
# =+i1A
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 17 Feb 2025 19:10:53 ACDT
# gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E
# gpg: Good signature from "Greg Kroah-Hartman <gregkh@kernel.org>" [marginal]
# gpg: gregkh@kernel.org: Verified 7 signatures in the past 3 weeks. Encrypted
# 0 messages.
# gpg: Warning: you have yet to encrypt a message to this key!
# gpg: Warning: if you think you've seen more signatures by this key and user
# id, then this key might be a forgery! Carefully examine the email address
# for small variations. If the key is suspect, then use
# gpg --tofu-policy bad 647F28654894E3BD457199BE38DBBDC86092693E
# to mark it as being bad.
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 647F 2865 4894 E3BD 4571 99BE 38DB BDC8 6092 693E


Revision tags: v6.12.5, v6.6.66, v6.6.65, v6.12.4, v6.6.64, v6.12.3, v6.12.2, v6.6.63, v6.12.1
# 73808199 17-Nov-2024 Suleiman Souhlal <suleiman@google.com>

sched: Don't try to catch up excess steal time.

[ Upstream commit 108ad0999085df2366dd9ef437573955cb3f5586 ]

When steal time exceeds the measured delta when updating clock_task, we
currently try to catch up the excess in future updates.
However, in some situations this results in inaccurate run times for
whatever uses clock_task later, as it ends up being charged additional
steal time that did not actually happen.
This is because there is a window between reading the elapsed time in
update_rq_clock() and sampling the steal time in update_rq_clock_task().
If the VCPU gets preempted between those two points, any additional
steal time is accounted to the outgoing task even though the calculated
delta did not actually contain any of that "stolen" time.
When this race happens, we can end up with steal time that exceeds the
calculated delta, and the previous code would try to catch up that excess
steal time in future clock updates, which is given to the next,
incoming task, even though it did not actually have any time stolen.

This behavior is particularly bad when steal time can be very long,
which we've seen when trying to extend steal time to cover the duration
that the host was suspended [0]. When this happens, clock_task stays
frozen and the running task stays on the CPU for the whole duration,
since its measured run time doesn't increase.
However, the race can happen even under normal operation.

Ideally we would read the elapsed cpu time and the steal time atomically,
to prevent this race from happening in the first place, but doing so
is non-trivial.

Since the time between those two points isn't otherwise accounted anywhere,
neither to the outgoing task nor the incoming task (because the "end of
outgoing task" and "start of incoming task" timestamps are the same),
I would argue that the right thing to do is to simply drop any excess steal
time, in order to prevent these issues.

[0] https://lore.kernel.org/kvm/20240820043543.837914-1-suleiman@google.com/
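
The gist of the change in update_rq_clock_task() is sketched below
(illustrative; variable and field names follow the 6.6 code only
approximately):

    u64 prev_steal, steal;

    prev_steal = steal = paravirt_steal_clock(cpu_of(rq));
    steal -= rq->prev_steal_time_rq;
    if (unlikely(steal > delta))
            steal = delta;                  /* clamp: drop the excess ...   */
    rq->prev_steal_time_rq = prev_steal;    /* ... instead of deferring it  */
    delta -= steal;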

Signed-off-by: Suleiman Souhlal <suleiman@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241118043745.1857272-1-suleiman@google.com
Signed-off-by: Sasha Levin <sashal@kernel.org>


# 9144f784 09-Jan-2025 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.70' into for/openbmc/dev-6.6

This is the 6.6.70 stable release

Conflicts:
include/linux/usb/chipidea.h

Conflict was a trivial addition.

Signed-off-by: Andrew Jeffery <andrew@codeconstruct.com.au>


Revision tags: v6.12, v6.6.62, v6.6.61, v6.6.60, v6.6.59
# 3adf89f1 28-Oct-2024 Thomas Gleixner <tglx@linutronix.de>

sched: Initialize idle tasks only once

[ Upstream commit b23decf8ac9102fc52c4de5196f4dc0a5f3eb80b ]

Idle tasks are initialized via __sched_fork() twice:

fork_idle()
    copy_process()
        sched_fork()
            __sched_fork()
    init_idle()
        __sched_fork()

Instead of cleaning this up, sched_ext hacked around it. Even when an
analysis and a solution were provided in a discussion, nobody cared to
clean this up.

init_idle() is also invoked from sched_init() to initialize the boot CPU's
idle task, which requires the __sched_fork() invocation. But this can be
trivially solved by invoking __sched_fork() before init_idle() in
sched_init() and removing the __sched_fork() invocation from init_idle().

Do so and clean up the comments explaining this historical leftover.
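
For the boot CPU this ends up looking roughly like the sketch below
(not the exact diff):

    void __init sched_init(void)
    {
            ...
            /*
             * The boot CPU's idle task is 'current'; give it its one and
             * only __sched_fork() pass here, so init_idle() no longer has
             * to call __sched_fork() itself.
             */
            __sched_fork(0, current);
            init_idle(current, smp_processor_id());
            ...
    }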

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241028103142.359584747@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>


# 278002ed 15-Dec-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.66' into for/openbmc/dev-6.6

This is the 6.6.66 stable release


Revision tags: v6.6.58, v6.6.57, v6.6.56, v6.6.55, v6.6.54, v6.6.53, v6.6.52, v6.6.51, v6.6.50, v6.6.49, v6.6.48, v6.6.47, v6.6.46, v6.6.45, v6.6.44, v6.6.43, v6.6.42, v6.6.41, v6.6.40, v6.6.39, v6.6.38, v6.6.37, v6.6.36, v6.6.35, v6.6.34, v6.6.33, v6.6.32, v6.6.31, v6.6.30, v6.6.29, v6.6.28, v6.6.27, v6.6.26, v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1
# 842010e3 04-Nov-2023 Peter Zijlstra <peterz@infradead.org>

sched/deadline: Collect sched_dl_entity initialization

[ Upstream commit 9e07d45c5210f5dd6701c00d55791983db7320fa ]

Create a single function that initializes a sched_dl_entity.
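
A sketch of what such a collected initializer can look like (helper and
callee names here are close to, but not guaranteed to match, the
upstream patch):

    void init_dl_entity(struct sched_dl_entity *dl_se)
    {
            RB_CLEAR_NODE(&dl_se->rb_node);
            init_dl_task_timer(dl_se);
            init_dl_inactive_task_timer(dl_se);
            __dl_clear_params(dl_se);
    }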

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Phil Auld <pauld@redhat.com>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lkml.kernel.org/r/51acc695eecf0a1a2f78f9a044e11ffd9b316bcf.1699095159.git.bristot@kernel.org
Stable-dep-of: 0664e2c311b9 ("sched/deadline: Fix warning in migrate_enable for boosted tasks")
Signed-off-by: Sasha Levin <sashal@kernel.org>


Revision tags: v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4
# b2f7d750 19-Sep-2023 Ingo Molnar <mingo@kernel.org>

sched/fair: Rename check_preempt_curr() to wakeup_preempt()

[ Upstream commit e23edc86b09df655bf8963bbcb16647adc787395 ]

The name is a bit opaque - make it clear that this is about wakeup
preemption.

Also rename the ->check_preempt_curr() methods similarly.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Stable-dep-of: 0664e2c311b9 ("sched/deadline: Fix warning in migrate_enable for boosted tasks")
Signed-off-by: Sasha Levin <sashal@kernel.org>


# b607a388 18-Nov-2024 K Prateek Nayak <kprateek.nayak@amd.com>

sched/core: Prevent wakeup of ksoftirqd during idle load balance

[ Upstream commit e932c4ab38f072ce5894b2851fea8bc5754bb8e5 ]

The scheduler raises a SCHED_SOFTIRQ to trigger a load balancing event
from the IPI handler on the idle CPU. If the SMP function is invoked
from an idle CPU via flush_smp_call_function_queue() then the HARD-IRQ
flag is not set and raise_softirq_irqoff() needlessly wakes ksoftirqd,
because soft interrupts are handled before ksoftirqd gets on the CPU.

Adding a trace_printk() in nohz_csd_func() at the spot of raising
SCHED_SOFTIRQ and enabling trace events for sched_switch, sched_wakeup,
and softirq_entry (for SCHED_SOFTIRQ vector alone) helps observing the
current behavior:

<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ from nohz_csd_func
<idle>-0 [000] dN.4.: sched_wakeup: comm=ksoftirqd/0 pid=16 prio=120 target_cpu=000
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_exit: vec=7 [action=SCHED]
<idle>-0 [000] d..2.: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/0 next_pid=16 next_prio=120
ksoftirqd/0-16 [000] d..2.: sched_switch: prev_comm=ksoftirqd/0 prev_pid=16 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
...

Use __raise_softirq_irqoff() to raise the softirq. The SMP function call
is always invoked on the requested CPU in an interrupt handler. It is
guaranteed that soft interrupts are handled at the end.
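
The resulting nohz_csd_func() ends up roughly as below (a sketch of the
post-fix shape; context trimmed):

    static void nohz_csd_func(void *info)
    {
            struct rq *rq = info;
            int cpu = cpu_of(rq);
            unsigned int flags;

            /* Release rq::nohz_csd. */
            flags = atomic_fetch_andnot(NOHZ_KICK_MASK | NOHZ_NEWILB_KICK,
                                        nohz_flags(cpu));
            WARN_ON(!(flags & NOHZ_KICK_MASK));

            rq->idle_balance = idle_cpu(cpu);
            if (rq->idle_balance) {
                    rq->nohz_idle_balance = flags;
                    /*
                     * Runs from the IPI handler on the target CPU, so pending
                     * softirqs are processed on IRQ exit; no ksoftirqd wakeup
                     * is needed.
                     */
                    __raise_softirq_irqoff(SCHED_SOFTIRQ);
            }
    }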

Following are the observations with the changes when enabling the same
set of events:

<idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ for nohz_idle_balance
<idle>-0 [000] dN.1.: softirq_raise: vec=7 [action=SCHED]
<idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]

No unnecessary ksoftirqd wakeups are seen from idle task's context to
service the softirq.

Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
Closes: https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@inria.fr/ [1]
Reported-by: Julia Lawall <julia.lawall@inria.fr>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/r/20241119054432.6405-5-kprateek.nayak@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>


# f163cf9c 18-Nov-2024 K Prateek Nayak <kprateek.nayak@amd.com>

sched/core: Remove the unnecessary need_resched() check in nohz_csd_func()

[ Upstream commit ea9cffc0a154124821531991d5afdd7e8b20d7aa ]

The need_resched() check currently in nohz_csd_func() can be traced back
to its addition in scheduler_ipi() in 2011 via commit
ca38062e57e9 ("sched: Use resched IPI to kick off the nohz idle balance").

Since then, it has travelled quite a bit, but it seems an idle_cpu()
check is now sufficient to detect the need to bail out from an
idle load balance. To justify this removal, consider all the following
cases where an idle load balance could race with a task wakeup:

o Since commit f3dd3f674555b ("sched: Remove the limitation of WF_ON_CPU
on wakelist if wakee cpu is idle") a target perceived to be idle
(target_rq->nr_running == 0) will return true for
ttwu_queue_cond(target) which will offload the task wakeup to the idle
target via an IPI.

In all such cases target_rq->ttwu_pending will be set to 1 before
queuing the wake function.

If an idle load balance races here, following scenarios are possible:

- The CPU is not in TIF_POLLING_NRFLAG mode in which case an actual
IPI is sent to the CPU to wake it out of idle. If the
nohz_csd_func() queues before sched_ttwu_pending(), the idle load
balance will bail out since idle_cpu(target) returns 0 since
target_rq->ttwu_pending is 1. If the nohz_csd_func() is queued after
sched_ttwu_pending() it should see rq->nr_running to be non-zero and
bail out of idle load balancing.

- The CPU is in TIF_POLLING_NRFLAG mode and instead of an actual IPI,
the sender will simply set TIF_NEED_RESCHED for the target to put it
out of idle and flush_smp_call_function_queue() in do_idle() will
execute the call function. Depending on the ordering of the queuing
of nohz_csd_func() and sched_ttwu_pending(), the idle_cpu() check in
nohz_csd_func() should either see target_rq->ttwu_pending = 1 or
target_rq->nr_running to be non-zero if there is a genuine task
wakeup racing with the idle load balance kick.

o The waker CPU perceives the target CPU to be busy
(target_rq->nr_running != 0) but the CPU is in fact going idle, and due
to a series of unfortunate events, the system reaches a case where the
waker CPU decides to perform the wakeup by itself in ttwu_queue() on
the target CPU, but the target is concurrently selected for idle load
balance (XXX: Can this happen? I'm not sure, but we'll consider the
mother of all coincidences to estimate the worst case scenario).

ttwu_do_activate() calls enqueue_task(), which increments
"rq->nr_running", after which it calls wakeup_preempt() which is
responsible for setting TIF_NEED_RESCHED (via a resched IPI or by
setting TIF_NEED_RESCHED on a TIF_POLLING_NRFLAG idle CPU). The key
thing to note in this case is that rq->nr_running is already non-zero
in case of a wakeup before TIF_NEED_RESCHED is set, which leads the
idle_cpu() check to return false.

In all cases, it seems that the need_resched() check is unnecessary when
checking idle_cpu() first, since an impending wakeup racing with the idle
load balancer will either set "rq->ttwu_pending" or indicate a newly
woken task via "rq->nr_running".

Chasing the reason why this check might have existed in the first place,
I came across Peter's suggestion on the first iteration of Suresh's
patch from 2011 [1] where the condition to raise the SCHED_SOFTIRQ was:

    sched_ttwu_do_pending(list);

    if (unlikely((rq->idle == current) &&
                 rq->nohz_balance_kick &&
                 !need_resched()))
            raise_softirq_irqoff(SCHED_SOFTIRQ);

Since the condition to raise the SCHED_SOFTIRQ was preceded by
sched_ttwu_do_pending() (the equivalent of sched_ttwu_pending() in
the current upstream kernel), the need_resched() check was necessary to
catch a newly queued task. Peter suggested modifying it to:

    if (idle_cpu() && rq->nohz_balance_kick && !need_resched())
            raise_softirq_irqoff(SCHED_SOFTIRQ);

where idle_cpu() seems to have replaced the "rq->idle == current" check.

Even back then, the idle_cpu() check would have been sufficient to catch
a new task being enqueued. Since commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") overloads the interpretation of
TIF_NEED_RESCHED for TIF_POLLING_NRFLAG idling, remove the
need_resched() check in nohz_csd_func() to raise SCHED_SOFTIRQ based
on Peter's suggestion.
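
Concretely, the guard in nohz_csd_func() drops the extra condition,
roughly as follows (a sketch of the intent, not the literal diff):

    rq->idle_balance = idle_cpu(cpu);
    /* Previously: if (rq->idle_balance && !need_resched()) */
    if (rq->idle_balance) {
            rq->nohz_idle_balance = flags;
            raise_softirq_irqoff(SCHED_SOFTIRQ);
    }

A wakeup racing with the kick makes idle_cpu() return 0 via
rq->ttwu_pending or a non-zero rq->nr_running, so the separate
need_resched() test adds nothing.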

Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20241119054432.6405-3-kprateek.nayak@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>


# e50e86db 03-Nov-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.59' into for/openbmc/dev-6.6

This is the 6.6.59 stable release


# 509c29d0 09-Oct-2024 Waiman Long <longman@redhat.com>

sched/core: Disable page allocation in task_tick_mm_cid()

[ Upstream commit 73ab05aa46b02d96509cb029a8d04fca7bbde8c7 ]

With KASAN and PREEMPT_RT enabled, calling task_work_add() in
task_tick_mm_cid() may cause the following splat.

[ 63.696416] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
[ 63.696416] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 610, name: modprobe
[ 63.696416] preempt_count: 10001, expected: 0
[ 63.696416] RCU nest depth: 1, expected: 1

This problem is caused by the following call trace.

sched_tick() [ acquire rq->__lock ]
-> task_tick_mm_cid()
-> task_work_add()
-> __kasan_record_aux_stack()
-> kasan_save_stack()
-> stack_depot_save_flags()
-> alloc_pages_mpol_noprof()
-> __alloc_pages_noprof()
-> get_page_from_freelist()
-> rmqueue()
-> rmqueue_pcplist()
-> __rmqueue_pcplist()
-> rmqueue_bulk()
-> rt_spin_lock()

The rq lock is a raw_spinlock_t. We can't sleep while holding
it. IOW, we can't call alloc_pages() in stack_depot_save_flags().

The task_tick_mm_cid() function with its task_work_add() call was
introduced by commit 223baf9d17f2 ("sched: Fix performance regression
introduced by mm_cid") in v6.4 kernel.

Fortunately, there is a kasan_record_aux_stack_noalloc() variant that
calls stack_depot_save_flags() while not allowing it to allocate
new pages. To allow task_tick_mm_cid() to use task_work without
page allocation, a new TWAF_NO_ALLOC flag is added to enable calling
kasan_record_aux_stack_noalloc() instead of kasan_record_aux_stack()
if set. The task_tick_mm_cid() function is modified to add this new flag.

The possible downside is the missing stack trace in a KASAN report due
to new page allocation being required when task_work_add_noalloc() is
called, which should be rare.
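
Conceptually, the flag gates the KASAN hook along these lines (a sketch;
the flag name comes from the changelog, while the surrounding plumbing
shown here is illustrative):

    /* Inside the task_work queueing path: */
    if (flags & TWAF_NO_ALLOC)
            kasan_record_aux_stack_noalloc(work);   /* no page allocation */
    else
            kasan_record_aux_stack(work);           /* may allocate pages */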

Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20241010014432.194742-1-longman@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>


# ca2478a7 12-Sep-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.51' into for/openbmc/dev-6.6

This is the 6.6.51 stable release


# a2977c0c 31-Jan-2024 Andrea Parri <parri.andrea@gmail.com>

membarrier: riscv: Add full memory barrier in switch_mm()

commit d6cfd1770f20392d7009ae1fdb04733794514fa9 upstream.

The membarrier system call requires a full memory barrier after storing
to rq->curr, before going back to user-space. The barrier is only
needed when switching between processes: the barrier is implied by
mmdrop() when switching from kernel to userspace, and it's not needed
when switching from userspace to kernel.

Rely on the feature/mechanism ARCH_HAS_MEMBARRIER_CALLBACKS and on the
primitive membarrier_arch_switch_mm(), already adopted by the PowerPC
architecture, to insert the required barrier.
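
On RISC-V the callback ends up along the lines of the existing PowerPC
helper (sketch; the exact guard may differ):

    static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
                                                 struct mm_struct *next,
                                                 struct task_struct *tsk)
    {
            /*
             * Only the process-switch case, for an mm registered for
             * expedited membarrier, needs the full barrier before
             * returning to user-space.
             */
            if (likely(!(atomic_read(&next->membarrier_state) &
                         (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
                          MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
                    return;

            smp_mb();
    }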

Fixes: fab957c11efe2f ("RISC-V: Atomic and Locking Code")
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/r/20240131144936.29190-2-parri.andrea@gmail.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: WangYuli <wangyuli@uniontech.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# c005e2f6 14-Aug-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.46' into for/openbmc/dev-6.6

This is the 6.6.46 stable release


# 78f1990b 02-Jul-2024 Yang Yingliang <yangyingliang@huawei.com>

sched/core: Fix unbalance set_rq_online/offline() in sched_cpu_deactivate()

commit fe7a11c78d2a9bdb8b50afc278a31ac177000948 upstream.

If cpuset_cpu_inactive() fails, set_rq_online() needs to be called to roll back.
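
Together with the helpers introduced in the companion patches, the error
path in sched_cpu_deactivate() becomes roughly the following (sketch;
unrelated rollback steps elided):

    ret = cpuset_cpu_inactive(cpu);
    if (ret) {
            /* Undo what was done before the failing call. */
            sched_smt_present_inc(cpu);
            sched_set_rq_online(rq, cpu);
            balance_push_set(cpu, false);
            set_cpu_active(cpu, true);
            ...
            return ret;
    }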

Fixes: 120455c514f7 ("sched: Fix hotplug vs CPU bandwidth control")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-5-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 4c15b20c 02-Jul-2024 Yang Yingliang <yangyingliang@huawei.com>

sched/core: Introduce sched_set_rq_on/offline() helper

commit 2f027354122f58ee846468a6f6b48672fff92e9b upstream.

Introduce the sched_set_rq_on/offline() helpers so they can be called
simply from either the normal or the error path. No functional change.
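
Such a helper can be sketched as below (an approximation of the
factored-out code; sched_set_rq_offline() mirrors it with
set_rq_offline()):

    static inline void sched_set_rq_online(struct rq *rq, int cpu)
    {
            struct rq_flags rf;

            rq_lock_irqsave(rq, &rf);
            if (rq->rd) {
                    BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
                    set_rq_online(rq);
            }
            rq_unlock_irqrestore(rq, &rf);
    }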

Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-4-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 65727331 02-Jul-2024 Yang Yingliang <yangyingliang@huawei.com>

sched/smt: Fix unbalance sched_smt_present dec/inc

commit e22f910a26cc2a3ac9c66b8e935ef2a7dd881117 upstream.

I got the following warning report while doing a stress test:

jump label: negative count!
WARNING: CPU: 3 PID: 38 at kernel/jump_label.c:263 static_key_slow_try_dec+0x9d/0xb0
Call Trace:
<TASK>
__static_key_slow_dec_cpuslocked+0x16/0x70
sched_cpu_deactivate+0x26e/0x2a0
cpuhp_invoke_callback+0x3ad/0x10d0
cpuhp_thread_fun+0x3f5/0x680
smpboot_thread_fn+0x56d/0x8d0
kthread+0x309/0x400
ret_from_fork+0x41/0x70
ret_from_fork_asm+0x1b/0x30
</TASK>

When cpuset_cpu_inactive() fails in sched_cpu_deactivate(), the CPU
offline fails, but sched_smt_present has already been decremented
earlier in sched_cpu_deactivate(). This leads to an unbalanced dec/inc,
so fix it by incrementing sched_smt_present in the error path.

Fixes: c5511d03ec09 ("sched/smt: Make sched_smt_present track topology")
Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Link: https://lore.kernel.org/r/20240703031610.587047-3-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 41d85656 02-Jul-2024 Yang Yingliang <yangyingliang@huawei.com>

sched/smt: Introduce sched_smt_present_inc/dec() helper

commit 31b164e2e4af84d08d2498083676e7eeaa102493 upstream.

Introduce the sched_smt_present_inc/dec() helpers so they can be called
simply from either the normal or the error path. No functional change.
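
A sketch of the helper pair (an approximation of the factored-out SMT
accounting):

    static inline void sched_smt_present_inc(int cpu)
    {
    #ifdef CONFIG_SCHED_SMT
            if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                    static_branch_inc_cpuslocked(&sched_smt_present);
    #endif
    }

    static inline void sched_smt_present_dec(int cpu)
    {
    #ifdef CONFIG_SCHED_SMT
            if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                    static_branch_dec_cpuslocked(&sched_smt_present);
    #endif
    }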

Cc: stable@kernel.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20240703031610.587047-2-yangyingliang@huaweicloud.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 7e24a55b 04-Aug-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.44' into for/openbmc/dev-6.6

This is the 6.6.44 stable release


# 7ca52974 25-Jun-2024 Tejun Heo <tj@kernel.org>

sched/fair: set_load_weight() must also call reweight_task() for SCHED_IDLE tasks

commit d329605287020c3d1c3b0dadc63d8208e7251382 upstream.

When a task's weight is being changed, set_load_weight() is called with
@update_load set. As weight changes aren't trivial for the fair class,
set_load_weight() calls fair.c::reweight_task() for fair class tasks.

However, set_load_weight() first tests task_has_idle_policy() on entry and
skips calling reweight_task() for SCHED_IDLE tasks. This is buggy as
SCHED_IDLE tasks are just fair tasks with a very low weight and they would
incorrectly skip load, vlag and position updates.

Fix it by updating reweight_task() to take struct load_weight, as the
idle weight can't be expressed with prio, and making set_load_weight()
call reweight_task() for SCHED_IDLE tasks too when @update_load is set.
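
The resulting set_load_weight() is roughly the following (a sketch close
to the 6.6 backport, not guaranteed verbatim):

    static void set_load_weight(struct task_struct *p, bool update_load)
    {
            int prio = p->static_prio - MAX_RT_PRIO;
            struct load_weight lw;

            if (task_has_idle_policy(p)) {
                    lw.weight = scale_load(WEIGHT_IDLEPRIO);
                    lw.inv_weight = WMULT_IDLEPRIO;
            } else {
                    lw.weight = scale_load(sched_prio_to_weight[prio]);
                    lw.inv_weight = sched_prio_to_wmult[prio];
            }

            /*
             * SCHED_IDLE is no longer special-cased out: when the load is
             * being updated, fair tasks (including SCHED_IDLE) go through
             * reweight_task() so load, vlag and position stay consistent.
             */
            if (update_load && p->sched_class == &fair_sched_class)
                    reweight_task(p, &lw);
            else
                    p->se.load = lw;
    }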

Fixes: 9059393e4ec1 ("sched/fair: Use reweight_entity() for set_user_nice()")
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org # v4.15+
Link: http://lkml.kernel.org/r/20240624102331.GI31592@noisy.programming.kicks-ass.net
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# 43b75d54 17-May-2024 Frederic Weisbecker <frederic@kernel.org>

rcu/tasks: Fix stale task snapshot for Tasks Trace

[ Upstream commit 399ced9594dfab51b782798efe60a2376cd5b724 ]

When RCU-TASKS-TRACE pre-gp takes a snapshot of the current task running
on all online CPUs, no explicit ordering synchronizes properly with a
context switch. This lack of ordering can permit the new task to miss
pre-grace-period update-side accesses. The following diagram, courtesy
of Paul, shows the possible bad scenario:

CPU 0                                      CPU 1
-----                                      -----

// Pre-GP update side access
WRITE_ONCE(*X, 1);
smp_mb();
r0 = rq->curr;
                                           RCU_INIT_POINTER(rq->curr, TASK_B)
                                           spin_unlock(rq)
                                           rcu_read_lock_trace()
                                           r1 = X;
/* ignore TASK_B */

Either r0==TASK_B or r1==1 is needed but neither is guaranteed.

One possible solution is to wait for an RCU grace period
at the beginning of the RCU-tasks-trace grace period before taking the
current tasks snapshot. However, this would introduce large additional
latencies to RCU-tasks-trace grace periods.

Another solution is to lock the target runqueue while taking the current
task snapshot. This ensures that the update side sees the latest context
switch and subsequent context switches will see the pre-grace-period
update side accesses.

This commit therefore adds runqueue locking to cpu_curr_snapshot().
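
The locked snapshot ends up shaped roughly like this (sketch; the
pairing comments on the barriers are abridged):

    struct task_struct *cpu_curr_snapshot(int cpu)
    {
            struct rq *rq = cpu_rq(cpu);
            struct task_struct *t;
            struct rq_flags rf;

            rq_lock_irqsave(rq, &rf);       /* serialize with context switch */
            smp_mb__after_spinlock();
            t = rcu_dereference(rq->curr);
            rq_unlock_irqrestore(rq, &rf);
            smp_mb();

            return t;
    }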

Fixes: e386b6725798 ("rcu-tasks: Eliminate RCU Tasks Trace IPIs to online CPUs")
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>


# 6e4f6b5e 18-Jul-2024 Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.41' into dev-6.6

This is the 6.6.41 stable release


# 448a2500 18-Jun-2024 John Stultz <jstultz@google.com>

sched: Move psi_account_irqtime() out of update_rq_clock_task() hotpath

commit ddae0ca2a8fe12d0e24ab10ba759c3fbd755ada8 upstream.

It was reported that in moving to 6.1, a larger than 10%
regression was seen in the performance of
clock_gettime(CLOCK_THREAD_CPUTIME_ID,...).

Using a simple reproducer, I found:
5.10:
100000000 calls in 24345994193 ns => 243.460 ns per call
100000000 calls in 24288172050 ns => 242.882 ns per call
100000000 calls in 24289135225 ns => 242.891 ns per call

6.1:
100000000 calls in 28248646742 ns => 282.486 ns per call
100000000 calls in 28227055067 ns => 282.271 ns per call
100000000 calls in 28177471287 ns => 281.775 ns per call

The cause of this was finally narrowed down to the addition of
psi_account_irqtime() in update_rq_clock_task(), in commit
52b1364ba0b1 ("sched/psi: Add PSI_IRQ to track IRQ/SOFTIRQ
pressure").

In my initial attempt to resolve this, I leaned towards moving
all accounting work out of the clock_gettime() call path, but it
wasn't very pretty, so it will have to wait for a later deeper
rework. Instead, Peter shared this approach:

Rework psi_account_irqtime() to use its own psi_irq_time base
for accounting, and move it out of the hotpath, calling it
instead from sched_tick() and __schedule().

In testing this, we found the importance of ensuring
psi_account_irqtime() is run under the rq_lock, which Johannes
Weiner helpfully explained, so also add some lockdep annotations
to make that requirement clear.
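
The reworked accounting is sketched below (illustrative; the psi_irq_time
base and the signature follow the changelog, not necessarily the final
backport line for line):

    /*
     * Called from sched_tick() and __schedule() with rq->__lock held,
     * instead of from the update_rq_clock_task() hotpath.
     */
    void psi_account_irqtime(struct rq *rq, struct task_struct *curr,
                             struct task_struct *prev)
    {
            u64 irq, delta;

            lockdep_assert_rq_held(rq);

            irq = irq_time_read(cpu_of(rq));
            delta = irq - rq->psi_irq_time;
            if (!delta)
                    return;
            rq->psi_irq_time = irq;

            /* ... charge PSI_IRQ pressure for 'delta' to curr's psi groups ... */
    }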

With this change the performance is back in-line with 5.10:
6.1+fix:
100000000 calls in 24297324597 ns => 242.973 ns per call
100000000 calls in 24318869234 ns => 243.189 ns per call
100000000 calls in 24291564588 ns => 242.916 ns per call

Reported-by: Jimmy Shiu <jimmyshiu@google.com>
Originally-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Qais Yousef <qyousef@layalina.io>
Link: https://lore.kernel.org/r/20240618215909.4099720-1-jstultz@google.com
Fixes: 52b1364ba0b1 ("sched/psi: Add PSI_IRQ to track IRQ/SOFTIRQ pressure")
[jstultz: Fixed up minor collisions w/ 6.6-stable]
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
