Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46 |
|
#
4d369680 |
| 12-Aug-2023 |
Vineet Gupta <vgupta@kernel.org> |
ARC: -Wmissing-prototype warning fixes
Arnd reported [1] new compiler warnings due to -Wmissing-prototypes. These are for non-static functions mostly used in asm code and hence not exported already. Fix this by adding the prototypes.
[1] https://lore.kernel.org/lkml/20230810141947.1236730-1-arnd@kernel.org
Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
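A hedged illustration of the fix pattern (the function name here is hypothetical, not from the patch): a function defined in C but called only from assembly has no C callers, so it needs an explicit prototype to satisfy -Wmissing-prototypes:

    /* called only from asm; the prototype exists purely for the compiler */
    void arc_asm_entry_stub(void);      /* hypothetical name */

    void arc_asm_entry_stub(void)
    {
            /* ... */
    }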
|
Revision tags: v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16 |
|
#
4c8c3c7f |
| 07-Mar-2023 |
Valentin Schneider <vschneid@redhat.com> |
treewide: Trace IPIs sent via smp_send_reschedule()
To be able to trace invocations of smp_send_reschedule(), rename the arch-specific definitions of it to arch_smp_send_reschedule() and wrap it into an smp_send_reschedule() that contains a tracepoint.
Changes to include the declaration of the tracepoint were driven by the following coccinelle script:
@func_use@
@@
smp_send_reschedule(...);

@include@
@@
#include <trace/events/ipi.h>

@no_include depends on func_use && !include@
@@
  #include <...>
+
+ #include <trace/events/ipi.h>
[csky bits] [riscv bits] Signed-off-by: Valentin Schneider <vschneid@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Guo Ren <guoren@kernel.org> Acked-by: Palmer Dabbelt <palmer@rivosinc.com> Link: https://lore.kernel.org/r/20230307143558.294354-6-vschneid@redhat.com
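A minimal sketch of the resulting shape (hedged: the exact tracepoint name and signature are assumptions based on the commit subject, not copied from the patch):

    /* the arch hook does the real work; the generic wrapper adds tracing */
    extern void arch_smp_send_reschedule(int cpu);

    static inline void smp_send_reschedule(int cpu)
    {
            arch_smp_send_reschedule(cpu);
            trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);  /* assumed name */
    }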
|
Revision tags: v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38 |
|
#
63d1dfd0 |
| 07-May-2022 |
Jilin Yuan <yuanjilin@cdjrlc.com> |
ARC: Fix comment typo
- Remove one of the repeated 'call' words in the comment at line 396.
- Delete the redundant words 'to' and 'since'.
Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
#
787dbea1 |
| 21-Jul-2022 |
Ben Dooks <ben-linux@fluff.org> |
profile: setup_profiling_timer() is moslty not implemented
The setup_profiling_timer() is mostly un-implemented by many architectures. In many places it isn't guarded by CONFIG_PROFILE which is needed for it to be used. Make it a weak symbol in kernel/profile.c and remove the 'return -EINVAL' implementations from the kernel.
There are a couple of architectures which do return 0 from the setup_profiling_timer() function but they don't seem to do anything else with it. To keep the /proc compatibility for now, leave these for a future update or removal.
On ARM, this fixes the following sparse warning: arch/arm/kernel/smp.c:793:5: warning: symbol 'setup_profiling_timer' was not declared. Should it be static?
Link: https://lkml.kernel.org/r/20220721195509.418205-1-ben-linux@fluff.org Signed-off-by: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
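The weak-symbol pattern being described, sketched (this mirrors what kernel/profile.c would carry; architectures with a real implementation simply override it):

    /* default just rejects the request, so the per-arch
     * 'return -EINVAL' stubs can all be deleted */
    int __weak setup_profiling_timer(unsigned int multiplier)
    {
            return -EINVAL;
    }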
|
Revision tags: v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30 |
|
#
c6ed4d84 |
| 18-Mar-2022 |
Bang Li <libang.linuxer@gmail.com> |
ARC: remove redundant READ_ONCE() in cmpxchg loop
This patch reverts commit 7082a29c22ac ("ARC: use ACCESS_ONCE in cmpxchg loop").
It is not necessary to use READ_ONCE() because cmpxchg() already contains a compiler barrier; see commit d57f727264f1 ("ARC: add compiler barrier to LLSC based cmpxchg").
Signed-off-by: Bang Li <libang.linuxer@gmail.com> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
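The loop shape in question, roughly (a hedged sketch, not the ARC source): cmpxchg() implies a compiler barrier, so the plain load is re-issued on every retry and READ_ONCE() adds nothing:

    static inline void fetch_or_sketch(unsigned long mask, unsigned long *p)
    {
            unsigned long old, new;

            do {
                    old = *p;       /* plain load: cmpxchg() below is a barrier */
                    new = old | mask;
            } while (cmpxchg(p, old, new) != old);
    }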
|
#
ecaa054f |
| 18-Mar-2022 |
Julia Lawall <Julia.Lawall@inria.fr> |
ARC: fix typos in comments
Various spelling mistakes in comments. Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
Revision tags: v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10, v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8, v5.1.7, v5.1.6, v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10, v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3, v4.18.19, v4.19.2, v4.18.18, v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9, v4.18.7, v4.18.6 |
|
#
cea43147 |
| 04-Sep-2018 |
Vineet Gupta <vgupta@kernel.org> |
ARC: switch to generic bitops
- !LLSC now only needs a single spinlock for atomics and bitops
- Some codegen changes (slight bloat) with generic bitops
1. code increase due to the LD-check-atomic paradigm vs. unconditional atomic (which dirties the cache line even if the bit is already set). So despite the increase, generic is the right thing to do.
2. code decrease (but use of costlier instructions such as DIV vs. shift-based math) due to signed arithmetic. This needs to be revisited separately.
arc:     static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                                    ^^^^^^^^^^^^
generic: static inline int test_bit(int nr, const volatile unsigned long *addr)
                                    ^^^
Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com Signed-off-by: Will Deacon <will@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> [vgupta: wrote patch based on Will's poc, analysed codegen diffs] Signed-off-by: Vineet Gupta <vgupta@kernel.org>
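A hedged illustration of the "LD-check-atomic" idea from point 1 (a sketch of the paradigm, not the actual generic implementation): read first and skip the atomic RMW when the bit is already set, trading a few extra instructions for not dirtying the cache line:

    static inline void set_bit_check_sketch(unsigned int nr, volatile unsigned long *p)
    {
            unsigned long mask = BIT_MASK(nr);

            p += BIT_WORD(nr);
            if (READ_ONCE(*p) & mask)       /* already set: skip the RMW */
                    return;
            atomic_long_or(mask, (atomic_long_t *)p);
    }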
|
#
82a42305 |
| 13-Aug-2021 |
Changcheng Deng <deng.changcheng@zte.com.cn> |
arch/arc/kernel/: fix misspellings using codespell tool
Some typos were found by the codespell tool:
./intc-compact.c:145: prioity ==> priority
./smp.c:286: recevier ==> receiver
./stacktrace.c:152: prelogue ==> prologue
Fix typos found by codespell.
Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn> Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> Signed-off-by: Vineet Gupta <vgupta@kernel.org>
|
#
f1a0a376 |
| 12-May-2021 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/core: Initialize the idle task with preemption disabled
As pointed out by commit
de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread")
init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them.
As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().
Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required. Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> init_idle().
Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().
Secondary startups were patched via coccinelle:
@begone@
@@

-preempt_disable();
...
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
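A hedged sketch of what the coccinelle rule does to an arch's secondary-startup path (ARC's entry function name shown; the surrounding code is elided):

    void start_kernel_secondary(void)
    {
            /* ... per-CPU init ... */

            /* no preempt_disable() here any more: the idle task is born
             * with preempt_count == PREEMPT_DISABLED */
            cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
    }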
|
#
3c51d82d |
| 12-May-2021 |
Valentin Schneider <valentin.schneider@arm.com> |
sched/core: Initialize the idle task with preemption disabled
[ Upstream commit f1a0a376ca0c4ef1fc3d24e3e502acbb5b795674 ]
As pointed out by commit
de9b8f5dcbd9 ("sched: Fix crash trying to dequeue/enqueue the idle thread")
init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them.
As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().
Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required. Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> init_idle().
Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().
Secondary startups were patched via coccinelle:
@begone@
@@

-preempt_disable();
...
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
|
#
937cf85f |
| 05-Oct-2020 |
Mike Rapoport <rppt@linux.ibm.com> |
ARC: SMP: fix typo and use "come up" instead of "comeup"
When a secondary CPU fails to come up, there is a missing space in the log:
Timeout: CPU1 FAILED to comeup !!!
Fix it.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
#
d2912cb1 |
| 04-Jun-2019 |
Thomas Gleixner <tglx@linutronix.de> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
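In practice each affected file's multi-line GPL boilerplate collapses to a single identifier line, e.g. for a C source file:

    // SPDX-License-Identifier: GPL-2.0-only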
|
Revision tags: v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12, v4.17.11, v4.17.10, v4.17.9, v4.17.8, v4.17.7, v4.17.6, v4.17.5, v4.17.4, v4.17.3, v4.17.2, v4.17.1, v4.17, v4.16 |
|
#
a29a2527 |
| 23-Feb-2018 |
Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> |
ARC: setup cpu possible mask according to possible-cpus dts property
As we have an option in U-Boot to set the CPU mask for running Linux, we want to pass information to the kernel about which CPU cores should be brought up. So we patch the kernel dtb in U-Boot to set the possible-cpus property.
This also allows us to have a correctly set up MCIP debug mask.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
Revision tags: v4.15, v4.13.16, v4.14 |
|
#
6aa7de05 |
| 23-Oct-2017 |
Mark Rutland <mark.rutland@arm.com> |
locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()
Please do not apply this to mainline directly, instead please re-run the coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in preference to ACCESS_ONCE(), and new code is expected to use one of the former. So far, there's been no reason to change most existing uses of ACCESS_ONCE(), as these aren't harmful, and changing them results in churn.
However, for some features, the read/write distinction is critical to correct operation. To distinguish these cases, separate read/write accessors must be used. This patch migrates (most) remaining ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()

// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: davem@davemloft.net Cc: linux-arch@vger.kernel.org Cc: mpe@ellerman.id.au Cc: shuah@kernel.org Cc: snitzer@redhat.com Cc: thor.thayer@linux.intel.com Cc: tj@kernel.org Cc: viro@zeniv.linux.org.uk Cc: will.deacon@arm.com Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
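What the script's two rules produce in C terms (an illustrative pair; the variable is a placeholder, not from a specific file):

    /* store conversion */
    - ACCESS_ONCE(shared->flag) = 1;
    + WRITE_ONCE(shared->flag, 1);

    /* load conversion */
    - val = ACCESS_ONCE(shared->flag);
    + val = READ_ONCE(shared->flag);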
|
#
fdbed196 |
| 11-Oct-2017 |
Vineet Gupta <vgupta@synopsys.com> |
ARC: unbork module link errors with !CONFIG_ARC_HAS_LLSC
| SYSMAP  System.map
| Building modules, stage 2.
| MODPOST 18 modules
| ERROR: "smp_atomic_ops_lock" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
| ERROR: "smp_bitops_lock" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
| ERROR: "smp_atomic_ops_lock" [drivers/gpu/drm/drm.ko] undefined!
| ERROR: "smp_bitops_lock" [drivers/gpu/drm/drm.ko] undefined!
| ../scripts/Makefile.modpost:91: recipe for target '__modpost' failed
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
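The usual remedy for such MODPOST undefined-symbol errors (hedged: assuming that is what this commit does) is to export the locks that the inlined !LLSC atomics reference:

    /* arch/arc/kernel/smp.c (sketch) */
    EXPORT_SYMBOL(smp_atomic_ops_lock);
    EXPORT_SYMBOL(smp_bitops_lock);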
|
Revision tags: v4.13.5, v4.13, v4.12, v4.10.17, v4.10.16, v4.10.15, v4.10.14, v4.10.13, v4.10.12, v4.10.11, v4.10.10, v4.10.9, v4.10.8, v4.10.7, v4.10.6, v4.10.5, v4.10.4, v4.10.3, v4.10.2, v4.10.1, v4.10 |
|
#
68e21be2 |
| 01-Feb-2017 |
Ingo Molnar <mingo@kernel.org> |
sched/headers: Move task->mm handling methods to <linux/sched/mm.h>
Move the following task->mm helper APIs into a new header file, <linux/sched/mm.h>, to further reduce the size and complexity of <linux/sched.h>.
Here are how the APIs are used in various kernel files:
# mm_alloc(): arch/arm/mach-rpc/ecard.c fs/exec.c include/linux/sched/mm.h kernel/fork.c
# __mmdrop(): arch/arc/include/asm/mmu_context.h include/linux/sched/mm.h kernel/fork.c
# mmdrop(): arch/arm/mach-rpc/ecard.c arch/m68k/sun3/mmu_emu.c arch/x86/mm/tlb.c drivers/gpu/drm/amd/amdkfd/kfd_process.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/hw/hfi1/file_ops.c drivers/vfio/vfio_iommu_spapr_tce.c fs/exec.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/mmu_notifier.h include/linux/sched/mm.h kernel/fork.c kernel/futex.c kernel/sched/core.c mm/khugepaged.c mm/ksm.c mm/mmu_context.c mm/mmu_notifier.c mm/oom_kill.c virt/kvm/kvm_main.c
# mmdrop_async_fn(): include/linux/sched/mm.h
# mmdrop_async(): include/linux/sched/mm.h kernel/fork.c
# mmget_not_zero(): fs/userfaultfd.c include/linux/sched/mm.h mm/oom_kill.c
# mmput(): arch/arc/include/asm/mmu_context.h arch/arc/kernel/troubleshoot.c arch/frv/mm/mmu-context.c arch/powerpc/platforms/cell/spufs/context.c arch/sparc/include/asm/mmu_context_32.h drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/gpu/drm/i915/i915_gem_userptr.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/core/uverbs_main.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/exec.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c fs/proc/task_nommu.c fs/userfaultfd.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/events/uprobes.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/oom_kill.c mm/process_vm_access.c mm/rmap.c mm/swapfile.c mm/util.c virt/kvm/async_pf.c
# mmput_async(): include/linux/sched/mm.h kernel/fork.c mm/oom_kill.c
# get_task_mm(): arch/arc/kernel/troubleshoot.c arch/powerpc/platforms/cell/spufs/context.c drivers/android/binder.c drivers/gpu/drm/etnaviv/etnaviv_gem.c drivers/infiniband/core/umem.c drivers/infiniband/core/umem_odp.c drivers/infiniband/hw/mlx4/main.c drivers/infiniband/hw/mlx5/main.c drivers/infiniband/hw/usnic/usnic_uiom.c drivers/iommu/amd_iommu_v2.c drivers/iommu/intel-svm.c drivers/lguest/lguest_user.c drivers/misc/cxl/fault.c drivers/misc/mic/scif/scif_rma.c drivers/oprofile/buffer_sync.c drivers/vfio/vfio_iommu_type1.c drivers/vhost/vhost.c drivers/xen/gntdev.c fs/proc/array.c fs/proc/base.c fs/proc/task_mmu.c include/linux/sched/mm.h kernel/cpuset.c kernel/events/core.c kernel/exit.c kernel/fork.c kernel/ptrace.c kernel/sys.c kernel/trace/trace_output.c kernel/tsacct.c mm/memcontrol.c mm/memory.c mm/mempolicy.c mm/migrate.c mm/mmu_notifier.c mm/nommu.c mm/util.c
# mm_access(): fs/proc/base.c include/linux/sched/mm.h kernel/fork.c mm/process_vm_access.c
# mm_release(): arch/arc/include/asm/mmu_context.h fs/exec.c include/linux/sched/mm.h include/uapi/linux/sched.h kernel/exit.c kernel/fork.c
Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
#
3fce371b |
| 27-Feb-2017 |
Vegard Nossum <vegard.nossum@oracle.com> |
mm: add new mmget() helper
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:
git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_users);/mmget\(\1\);/'
git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_users);/mmget\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own.
(Michal Hocko provided most of the kerneldoc comment.)
Link: http://lkml.kernel.org/r/20161218123229.22952-2-vegard.nossum@oracle.com Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
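The helper itself is tiny; a sketch consistent with the description above (kerneldoc omitted; mm_users was a plain atomic_t at the time):

    static inline void mmget(struct mm_struct *mm)
    {
            atomic_inc(&mm->mm_users);
    }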
|
#
f1f10076 |
| 27-Feb-2017 |
Vegard Nossum <vegard.nossum@oracle.com> |
mm: add new mmgrab() helper
Apart from adding the helper function itself, the rest of the kernel is converted mechanically using:
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might be a worthwhile cleanup on its own.
(Michal Hocko provided most of the kerneldoc comment.)
Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
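And the companion helper for mm_count, sketched the same way:

    static inline void mmgrab(struct mm_struct *mm)
    {
            atomic_inc(&mm->mm_count);
    }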
|
Revision tags: v4.9, openbmc-4.4-20161121-1, v4.4.33, v4.4.32, v4.4.31, v4.4.30, v4.4.29, v4.4.28, v4.4.27, v4.7.10, openbmc-4.4-20161021-1, v4.7.9, v4.4.26, v4.7.8, v4.4.25, v4.4.24, v4.7.7, v4.8, v4.4.23, v4.7.6, v4.7.5, v4.4.22, v4.4.21, v4.7.4, v4.7.3, v4.4.20, v4.7.2, v4.4.19, openbmc-4.4-20160819-1, v4.7.1, v4.4.18, v4.4.17, openbmc-4.4-20160804-1, v4.4.16, v4.7, openbmc-4.4-20160722-1, openbmc-20160722-1, openbmc-20160713-1, v4.4.15, v4.6.4, v4.6.3, v4.4.14 |
|
#
78f824d4 |
| 21-Jun-2016 |
Vineet Gupta <vgupta@synopsys.com> |
ARCv2: smp-boot: wake_flag polling by non-Masters needs to be uncached
This is needed on HS38 cores, for setting up IO-Coherency aperture properly
The polling could perturb the caches and coherency fabric, which could go wrong in the small window when the Master is setting up the IOC aperture etc. in arc_cache_init().
We do it only for ARCv2 based builds to not affect EZChip ARCompact based platform.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
#
bf02454a |
| 12-Jan-2017 |
Vineet Gupta <vgupta@synopsys.com> |
ARC: smp-boot: Decouple Non masters waiting API from jump to entry point
For run-on-reset SMP configs, non-Master cores call a routine which waits until the Master gives them a "go" signal (currently using a shared mem flag). The same routine then jumps off to the well-known entry point of all non-Master cores, i.e. @first_lines_of_secondary.
This patch moves out the last part into one single place in early boot code.
This is better in terms of abstraction (the wait API only waits and returns), leaving out the "jump off to" part.
In the actual implementation this requires some restructuring of the early boot code, as the Master now jumps to BSS setup explicitly, vs. falling thru into it before.
Technically this patch doesn't cause any functional change; it just moves the ugly #ifdef'ry from assembly code to "C".
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
#
34e71e4c |
| 08-Nov-2016 |
Yuriy Kolerov <yuriy.kolerov@synopsys.com> |
ARC: IRQ: Do not use hwirq as virq and vice versa
This came up when reviewing code to address missing IRQ affinity setting in AXS103 platform and/or implementing hierarchical IRQ domains
- smp_ipi_irq_setup() callers pass a hwirq but it in turn calls request_percpu_irq(), which expects a Linux virq. So invoke irq_find_mapping() to do the conversion (and make this explicit in the code by renaming the args appropriately)
- idu_of_init()/idu_cascade_isr() were similarly using a Linux virq where a hwirq is expected, so do the conversion using the irqd_to_hwirq() helper
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com> [vgupta: made changelog a bit concise a bit] Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
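A hedged sketch of the first conversion (handler and dev-id names are placeholders):

    /* callers hand us a hwirq; translate to the Linux virq first */
    unsigned int virq = irq_find_mapping(NULL, hwirq);  /* NULL: default domain */

    request_percpu_irq(virq, ipi_handler, "IPI", &percpu_dev_id);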
|
#
8f6d9eb2 |
| 30-Oct-2016 |
Noam Camus <noamca@mellanox.com> |
ARC: [SMP] avoid overriding present cpumask
At smp_prepare_cpus() we set the present cpu mask as part of init for all CPUs in the range [0-max_cpus]. This is done without checking whether the mask was already set. On the eznps platform this mask is already initialized at smp_init_cpus(), via the plat_smp_ops.init_early_smp() hook. So to avoid overriding the present cpu mask, check the number of bits set in it: at the beginning only the boot CPU's bit is set, so if no more than one bit is set we can be sure the mask is not being overridden.
Signed-off-by: Noam Camus <noamca@mellanox.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
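A hedged sketch of the guard being described (the exact mask used to refill may differ in the real patch):

    /* smp_prepare_cpus() sketch: only (re)build the present mask when no
     * platform hook has populated it beyond the boot CPU already */
    if (num_present_cpus() <= 1)
            init_cpu_present(cpu_possible_mask);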
|
Revision tags: v4.6.2, v4.4.13, openbmc-20160606-1, v4.6.1, v4.4.12, openbmc-20160521-1, v4.4.11, openbmc-20160518-1, v4.6, v4.4.10, openbmc-20160511-1, openbmc-20160505-1, v4.4.9, v4.4.8, v4.4.7, openbmc-20160329-2, openbmc-20160329-1, openbmc-20160321-1, v4.4.6, v4.5, v4.4.5, v4.4.4, v4.4.3, openbmc-20160222-1, v4.4.2, openbmc-20160212-1, openbmc-20160210-1, openbmc-20160202-2, openbmc-20160202-1, v4.4.1, openbmc-20160127-1, openbmc-20160120-1, v4.4, openbmc-20151217-1, openbmc-20151210-1, openbmc-20151202-1, openbmc-20151123-1, openbmc-20151118-1 |
|
#
71f9cf8f |
| 07-Nov-2015 |
Noam Camus <noamc@ezchip.com> |
ARC: Mark secondary cpu online only after all HW setup is done
In SMP setup, the master loops over each present CPU calling cpu_up(). For ARC it returns as soon as the new cpu's status becomes online; however, the secondary may still be doing HW initialization at the machine or platform hook level.
So mark the secondary online only after all HW setup is done.
Signed-off-by: Noam Camus <noamc@ezchip.com> Acked-by: Vineet Gupta <vgupta@synopsys.com>
|
#
eec3c58e |
| 01-Jan-2016 |
Noam Camus <noamc@ezchip.com> |
ARC: clockevent: switch to cpu notifier for clockevent setup
ARC timers have so far been handled as "legacy", w/o explicit description in DT. This poses a challenge for newer platforms wanting to use them. This series will eventually help move timers over to DT.
This patch does a small change of using a CPU notifier to set clockevent on non-boot CPUs. So explicit setup is done only on boot CPU (which will later be done by DT)
Signed-off-by: Noam Camus <noamc@ezchip.com> [vgupta: broken off from a bigger patch] Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
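A hedged sketch of the old-style CPU notifier pattern this refers to (pre-dating the hotplug state machine; the setup function name is a placeholder):

    static int arc_timer_cpu_notify(struct notifier_block *self,
                                    unsigned long action, void *hcpu)
    {
            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_STARTING:
                    arc_local_timer_setup();        /* placeholder per-cpu setup */
                    break;
            }
            return NOTIFY_OK;
    }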
|
#
56957940 |
| 28-Jan-2016 |
Vineet Gupta <vgupta@synopsys.com> |
ARC: opencode arc_request_percpu_irq
- The idea is to remove the API usage since it has a subtle design flaw - it relies on being called on cpu0 first. This is true for some early per cpu irqs such as TIMER/IPI, but not for late probed per cpu peripherals such as perf. And its usage in perf has already bitten us once: see c6317bc7c5ab ("ARCv2: perf: Ensure perf intr gets enabled on all cores") where we ended up open coding it anyways
- The seeming duplication will go away once we start using cpu notifier for timer setup
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
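Roughly what the open-coded call sites look like (a hedged sketch; names are placeholders):

    /* boot CPU registers the per-cpu IRQ once; every CPU, including the
     * boot CPU, then enables its local copy */
    if (smp_processor_id() == 0)
            request_percpu_irq(virq, handler, "percpu-dev", &dev_id);
    enable_percpu_irq(virq, 0);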
|