Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8

# 06855063 | 18-Jan-2023 | Andrzej Hajda <andrzej.hajda@intel.com>
locking/arch: Rename all internal __xchg() names to __arch_xchg()
Decrease the probability of this internal facility being used by driver code.
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Palmer Dabbelt <palmer@rivosinc.com> [riscv]
Link: https://lore.kernel.org/r/20230118154450.73842-1-andrzej.hajda@intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>

Revision tags: v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9

# f0b7ddbd | 15-Dec-2021 | Huang Pei <huangpei@loongson.cn>
MIPS: retire "asm/llsc.h"
all that "asm/llsc.h" does is just to help inline asm, which can be stringifyed from "asm/asm.h"
+. Since "asm/asm.h" has all we need, retire "asm/llsc.h"
+. remove unused header file
Inspired-by: Maciej W. Rozycki <macro@orcam.me.uk>
Signed-off-by: Huang Pei <huangpei@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

Revision tags: v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15

# a923a267 | 21-Oct-2021 | Maciej W. Rozycki <macro@orcam.me.uk>
MIPS: Fix assembly error from MIPSr2 code used within MIPS_ISA_ARCH_LEVEL
Fix assembly errors like:
{standard input}: Assembler messages:
{standard input}:287: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
{standard input}:680: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
{standard input}:1274: Error: opcode not supported on this processor: mips3 (mips3) `dins $12,$9,32,32'
{standard input}:2175: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
make[1]: *** [scripts/Makefile.build:277: mm/highmem.o] Error 1
with code produced from `__cmpxchg64' for MIPS64r2 CPU configurations using CONFIG_32BIT and CONFIG_PHYS_ADDR_T_64BIT.
This is due to MIPS_ISA_ARCH_LEVEL downgrading the assembly architecture to `r4000' i.e. MIPS III for MIPS64r2 configurations, while there is a block of code containing a DINS MIPS64r2 instruction conditionalized on MIPS_ISA_REV >= 2 within the scope of the downgrade.
The assembly architecture override code pattern has been put there for LL/SC instructions, so that code compiles for configurations that select a processor to build for that does not support these instructions while still providing run-time support for processors that do, dynamically switched by non-constant `cpu_has_llsc'. It went in with linux-mips.org commit aac8aa7717a2 ("Enable a suitable ISA for the assembler around ll/sc so that code builds even for processors that don't support the instructions. Plus minor formatting fixes.") back in 2005.
Fix the problem by wrapping these instructions along with the adjacent SYNC instructions only, following the practice established with commit cfd54de3b0e4 ("MIPS: Avoid move psuedo-instruction whilst using MIPS_ISA_LEVEL") and commit 378ed6f0e3c5 ("MIPS: Avoid using .set mips0 to restore ISA"). Strictly speaking the SYNC instructions do not have to be wrapped as they are only used as a Loongson3 erratum workaround, so they will be enabled in the assembler by default, but do this so as to keep code consistent with other places.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: c7e2d71dda7a ("MIPS: Fix set_pte() for Netlogic XLR using cmpxchg64()")
Cc: stable@vger.kernel.org # v5.1+
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
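
To make the wrapping concrete, here is a minimal sketch of the pattern (placeholder operands and a literal arch=r4000 override; the kernel spells the override through its ISA macros, and the real __cmpxchg64() asm is more involved):

    /*
     * Sketch: confine the ISA override to the LL/SC instructions, so the
     * MIPSr2 DINS assembles against the architecture selected on the
     * command line rather than the downgraded one.
     */
    static inline void cmpxchg64_sketch(volatile unsigned long long *p,
                                        unsigned long long bits)
    {
        unsigned long long tmp;

        __asm__ __volatile__(
        "       .set    push\n"
        "       .set    arch=r4000              # LL only\n"
        "1:     lld     %0, %1\n"
        "       .set    pop\n"
        "       dins    %0, %2, 32, 32          # MIPSr2 insn, outside the override\n"
        "       .set    push\n"
        "       .set    arch=r4000              # SC only\n"
        "       scd     %0, %1\n"
        "       beqz    %0, 1b\n"
        "       .set    pop\n"
        : "=&r" (tmp), "+m" (*p)
        : "r" (bits));
    }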

# 3cd12b61 | 21-Oct-2021 | Maciej W. Rozycki <macro@orcam.me.uk>
MIPS: Fix assembly error from MIPSr2 code used within MIPS_ISA_ARCH_LEVEL
commit a923a2676e60683aee46aa4b93c30aff240ac20d upstream.
Fix assembly errors like:
{standard input}: Assembler messages:
{standard input}:287: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
{standard input}:680: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
{standard input}:1274: Error: opcode not supported on this processor: mips3 (mips3) `dins $12,$9,32,32'
{standard input}:2175: Error: opcode not supported on this processor: mips3 (mips3) `dins $10,$7,32,32'
make[1]: *** [scripts/Makefile.build:277: mm/highmem.o] Error 1
with code produced from `__cmpxchg64' for MIPS64r2 CPU configurations using CONFIG_32BIT and CONFIG_PHYS_ADDR_T_64BIT.
This is due to MIPS_ISA_ARCH_LEVEL downgrading the assembly architecture to `r4000' i.e. MIPS III for MIPS64r2 configurations, while there is a block of code containing a DINS MIPS64r2 instruction conditionalized on MIPS_ISA_REV >= 2 within the scope of the downgrade.
The assembly architecture override code pattern has been put there for LL/SC instructions, so that code compiles for configurations that select a processor to build for that does not support these instructions while still providing run-time support for processors that do, dynamically switched by non-constant `cpu_has_llsc'. It went in with linux-mips.org commit aac8aa7717a2 ("Enable a suitable ISA for the assembler around ll/sc so that code builds even for processors that don't support the instructions. Plus minor formatting fixes.") back in 2005.
Fix the problem by wrapping these instructions along with the adjacent SYNC instructions only, following the practice established with commit cfd54de3b0e4 ("MIPS: Avoid move psuedo-instruction whilst using MIPS_ISA_LEVEL") and commit 378ed6f0e3c5 ("MIPS: Avoid using .set mips0 to restore ISA"). Strictly speaking the SYNC instructions do not have to be wrapped as they are only used as a Loongson3 erratum workaround, so they will be enabled in the assembler by default, but do this so as to keep code consistent with other places.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Fixes: c7e2d71dda7a ("MIPS: Fix set_pte() for Netlogic XLR using cmpxchg64()")
Cc: stable@vger.kernel.org # v5.1+
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Revision tags: v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40

# c7b5fd6f | 25-May-2021 | Mark Rutland <mark.rutland@arm.com>
locking/atomic: mips: move to ARCH_ATOMIC
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers).
As a step towards that, this patch migrates mips to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-23-mark.rutland@arm.com
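
The shape of the split is roughly the following (a simplified sketch of how the instrumented wrappers in common code delegate to the arch_*() ops; not the generated kernel header verbatim):

    /* The architecture provides the raw operation... */
    static __always_inline void arch_atomic_inc(atomic_t *v);

    /* ...and common code wraps it, layering optional debug instrumentation
     * (e.g. KASAN/KCSAN checks) on top before calling the arch op. */
    static __always_inline void atomic_inc(atomic_t *v)
    {
        instrument_atomic_read_write(v, sizeof(*v));
        arch_atomic_inc(v);
    }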

# 6988631b | 25-May-2021 | Mark Rutland <mark.rutland@arm.com>
locking/atomic: cmpxchg: make `generic` a prefix
The asm-generic implementations of cmpxchg_local() and cmpxchg64_local() use a `_generic` suffix to distinguish themselves from arch code or wrappers used elsewhere.
Subsequent patches will add ARCH_ATOMIC support to these implementations, and will distinguish more functions with a `generic` portion. To align with how ARCH_ATOMIC uses an `arch_` prefix, it would be helpful to use a `generic_` prefix rather than a `_generic` suffix.
In preparation for this, this patch renames the existing functions to make `generic` a prefix rather than a suffix. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-12-mark.rutland@arm.com

Revision tags: v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14

# 8790ccf8 | 14-Jan-2021 | Nathan Chancellor <natechancellor@gmail.com>
MIPS: Compare __SYNC_loongson3_war against 0
When building with clang and CONFIG_CPU_LOONGSON3_WORKAROUNDS enabled:
In file included from lib/errseq.c:4:
In file included from ./include/linux/atomic.h:7:
./arch/mips/include/asm/atomic.h:52:1: warning: converting the result of '<<' to a boolean always evaluates to true [-Wtautological-constant-compare]
ATOMIC_OPS(atomic64, s64)
^
./arch/mips/include/asm/atomic.h:40:9: note: expanded from macro 'ATOMIC_OPS'
        return cmpxchg(&v->counter, o, n);
               ^
./arch/mips/include/asm/cmpxchg.h:194:7: note: expanded from macro 'cmpxchg'
        if (!__SYNC_loongson3_war)
             ^
./arch/mips/include/asm/sync.h:147:34: note: expanded from macro '__SYNC_loongson3_war'
# define __SYNC_loongson3_war   (1 << 31)
                                 ^
While it is not wrong that the result of this shift is always true in a boolean context, it is not a problem here. Regardless, the warning is really noisy so rather than making the shift a boolean implicitly, use it in an equality comparison so the shift is used as an integer value.
Fixes: 4d1dbfe6cbec ("MIPS: atomic: Emit Loongson3 sync workarounds within asm")
Fixes: a91f2a1dba44 ("MIPS: cmpxchg: Omit redundant barriers for Loongson3")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
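
The change boils down to this pattern (a sketch of the idiom rather than the verbatim diff; the guarded barrier call stands in for the real cmpxchg.h code):

    #define __SYNC_loongson3_war    (1 << 31)

    /* Before: the shift result is implicitly converted to bool, which
     * clang flags with -Wtautological-constant-compare. */
    if (!__SYNC_loongson3_war)
        smp_mb__before_llsc();

    /* After: an explicit comparison uses the shift as an integer value;
     * same behaviour, no warning. */
    if (__SYNC_loongson3_war == 0)
        smp_mb__before_llsc();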

# cc1c1fe7 | 14-Jan-2021 | Nathan Chancellor <natechancellor@gmail.com>
MIPS: Compare __SYNC_loongson3_war against 0
[ Upstream commit 8790ccf8daf1a8c53b6cb8ce0c9a109274bd3fa8 ]
When building with clang and CONFIG_CPU_LOONGSON3_WORKAROUNDS enabled:
In file included from lib/errseq.c:4:
In file included from ./include/linux/atomic.h:7:
./arch/mips/include/asm/atomic.h:52:1: warning: converting the result of '<<' to a boolean always evaluates to true [-Wtautological-constant-compare]
ATOMIC_OPS(atomic64, s64)
^
./arch/mips/include/asm/atomic.h:40:9: note: expanded from macro 'ATOMIC_OPS'
        return cmpxchg(&v->counter, o, n);
               ^
./arch/mips/include/asm/cmpxchg.h:194:7: note: expanded from macro 'cmpxchg'
        if (!__SYNC_loongson3_war)
             ^
./arch/mips/include/asm/sync.h:147:34: note: expanded from macro '__SYNC_loongson3_war'
# define __SYNC_loongson3_war   (1 << 31)
                                 ^
While it is not wrong that the result of this shift is always true in a boolean context, it is not a problem here. Regardless, the warning is really noisy so rather than making the shift a boolean implicitly, use it in an equality comparison so the shift is used as an integer value.
Fixes: 4d1dbfe6cbec ("MIPS: atomic: Emit Loongson3 sync workarounds within asm")
Fixes: a91f2a1dba44 ("MIPS: cmpxchg: Omit redundant barriers for Loongson3")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

Revision tags: v5.10, v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7, v5.3.6

# 46f16195 | 09-Oct-2019 | Thomas Bogendoerfer <tbogendoerfer@suse.de>
MIPS: include: Mark __xchg as __always_inline
Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") allows the compiler to uninline functions marked as 'inline'. In the case of __xchg this would cause a reference to the function __xchg_called_with_bad_pointer, which is an error case for catching bugs and will not happen for correct code if __xchg is inlined.
Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
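
The idiom being preserved looks roughly like this (a condensed sketch assuming kernel-style u32/u64 types; the real header handles more sizes and the __xchg_asm() details are elided):

    /* Deliberately declared but never defined: if a call survives into
     * the final object -- e.g. because __xchg was not inlined and `size'
     * was not folded to a constant -- the build fails at link time. */
    extern unsigned long __xchg_called_with_bad_pointer(void);

    static __always_inline unsigned long
    __xchg(volatile void *ptr, unsigned long x, int size)
    {
        switch (size) {
        case 4:
            return __xchg_asm("ll", "sc", (volatile u32 *)ptr, x);
        case 8:
            return __xchg_asm("lld", "scd", (volatile u64 *)ptr, x);
        default:
            /* dead code when inlined with a compile-time constant size */
            return __xchg_called_with_bad_pointer();
        }
    }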

Revision tags: v5.3.5, v5.3.4, v5.3.3

# a91f2a1d | 01-Oct-2019 | Paul Burton <paul.burton@mips.com>
MIPS: cmpxchg: Omit redundant barriers for Loongson3
When building a kernel configured to support Loongson3 LL/SC workarounds (ie. CONFIG_CPU_LOONGSON3_WORKAROUNDS=y) the inline assembly in __xchg_asm() & __cmpxchg_asm() already emits completion barriers, and as such we don't need to emit extra barriers from the xchg() or cmpxchg() macros. Add compile-time constant checks causing us to omit the redundant memory barriers.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org

# 6a57d2d1 | 01-Oct-2019 | Paul Burton <paul.burton@mips.com>
MIPS: cmpxchg: Emit Loongson3 sync workarounds within asm
Generate the sync instructions required to workaround Loongson3 LL/SC errata within inline asm blocks, which feels a little safer than doing it from C where strictly speaking the compiler would be well within its rights to insert a memory access between the separate asm statements we previously had, containing sync & ll instructions respectively.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org
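
Conceptually the change is the following (a much-simplified sketch showing only the placement of the barrier; the SC retry loop and the real operand constraints are elided):

    /* Before: two asm statements -- the compiler may legally schedule a
     * memory access between the sync and the ll. */
    __asm__ __volatile__("sync" : : : "memory");
    __asm__ __volatile__("1:    ll      %0, %1" : "=&r" (old) : "m" (*p));

    /* After: the sync is emitted within the same asm block as the ll,
     * so nothing can be inserted between them. */
    __asm__ __volatile__(
    "       sync\n"
    "1:     ll      %0, %1\n"
    : "=&r" (old)
    : "m" (*p));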

# 878f75c7 | 01-Oct-2019 | Paul Burton <paul.burton@mips.com>
MIPS: Unify sc beqz definition
We currently duplicate the definition of __scbeqz in asm/atomic.h & asm/cmpxchg.h. Move it to asm/llsc.h & rename it to __SC_BEQZ to fit better with the existing __SC macro provided there.
We include a tab in the string in order to avoid the need for users to indent code any further to include whitespace of their own after the instruction mnemonic.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org

# 88356d09 | 06-Oct-2019 | Thomas Bogendoerfer <tbogendoerfer@suse.de>
MIPS: include: Mark __cmpxchg as __always_inline
Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") allows the compiler to uninline functions marked as 'inline'. In the case of cmpxchg this would cause a reference to the function __cmpxchg_called_with_bad_pointer, which is an error case for catching bugs and will not happen for correct code, if __cmpxchg is inlined.
Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
[paul.burton@mips.com: s/__cmpxchd/__cmpxchg in subject]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Revision tags: v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10

# 42344113 | 13-Jun-2019 | Peter Zijlstra <peterz@infradead.org>
mips/atomic: Fix smp_mb__{before,after}_atomic()
Recent probing at the Linux Kernel Memory Model uncovered a 'surprise'. Strongly ordered architectures where the atomic RmW primitive implies full memory ordering and smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS without WEAK_REORDERING_BEYOND_LLSC) fail for:
    *x = 1;
    atomic_inc(u);
    smp_mb__after_atomic();
    r0 = *y;
Because, while the atomic_inc() implies memory order, it (surprisingly) does not provide a compiler barrier. This then allows the compiler to re-order like so:
    atomic_inc(u);
    *x = 1;
    smp_mb__after_atomic();
    r0 = *y;
Which the CPU is then allowed to re-order (under TSO rules) like:
    atomic_inc(u);
    r0 = *y;
    *x = 1;
And this very much was not intended. Therefore strengthen the atomic RmW ops to include a compiler barrier.
Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
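
In inline-asm terms a compiler barrier is a "memory" clobber, so the strengthening looks roughly like this (a sketch with simplified constraints, not the kernel's generated macros):

    static __always_inline void atomic_add_sketch(int i, atomic_t *v)
    {
        int temp;

        __asm__ __volatile__(
        "1:     ll      %0, %1\n"
        "       addu    %0, %2\n"
        "       sc      %0, %1\n"
        "       beqz    %0, 1b\n"
        : "=&r" (temp), "+m" (v->counter)
        : "Ir" (i)
        : "memory");    /* the added compiler barrier */
    }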

# 1c6c1ca3 | 13-Jun-2019 | Peter Zijlstra <peterz@infradead.org>
mips/atomic: Fix loongson_llsc_mb() wreckage
The comment describing the loongson_llsc_mb() reorder case doesn't make any sense whatsoever. Instruction re-ordering is not an SMP artifact, but rather a CPU-local phenomenon. Clarify the comment by explaining that these issues cause a coherence failure.
For the branch speculation case: if futex_atomic_cmpxchg_inatomic() needs one at the bne branch target, then surely the normal __cmpxchg_asm() implementation does too. We cannot rely on the barriers from cmpxchg() because cmpxchg_local() is implemented with the same macro, and branch prediction and speculation are, too, CPU-local.
Fixes: e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Huang Pei <huangpei@loongson.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>

# dfc8d8de | 13-Jun-2019 | Peter Zijlstra <peterz@infradead.org>
mips/atomic: Fix cmpxchg64 barriers
There were no memory barriers on the 32bit implementation of cmpxchg64(). Fix this.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>

Revision tags: v5.1.9, v5.1.8, v5.1.7, v5.1.6, v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21

# c7e2d71d | 06-Feb-2019 | Paul Burton <paul.burton@mips.com>
MIPS: Fix set_pte() for Netlogic XLR using cmpxchg64()
Commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe.") introduced an open-coded version of cmpxchg() within set_pte(), that always operated on a value the size of an unsigned long. That is, it used ll/sc instructions when CONFIG_32BIT=y or lld/scd instructions when CONFIG_64BIT=y.
This was broken for configurations in which pte_t is larger than an unsigned long (with the exception of XPA configurations which have a different implementation of set_pte()), because we no longer update the whole PTE. Indeed commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe.") notes:
> The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
> handled.
In practice this affects Netlogic XLR/XLS systems including nlm_xlr_defconfig.
Commit 82f4f66ddf11 ("MIPS: Remove open-coded cmpxchg() in set_pte()") then replaced this open-coded version of cmpxchg() with an actual call to cmpxchg(). Unfortunately the configurations mentioned above then fail to build because cmpxchg() can only operate on values 32 bits or smaller in size, resulting in:
arch/mips/include/asm/cmpxchg.h:166:11: error: call to '__cmpxchg_called_with_bad_pointer' declared with attribute error: Bad argument size for cmpxchg
One option that would fix the build failure & restore the previous behaviour would be to cast the pte pointer to a pointer to unsigned long, so that cmpxchg() would operate on just 32 bits of the PTE as it has been since commit 46011e6ea392 ("MIPS: Make set_pte() SMP safe."). That feels like an ugly hack though, and the behaviour of set_pte() is likely a little broken.
Instead we take advantage of the fact that the affected configurations already know at compile time that the CPU will support 64 bits (ie. have hardcoded cpu_has_64bits in cpu-feature-overrides.h) in order to allow cmpxchg64() to be used in these configurations. set_pte() then makes use of cmpxchg64() when necessary.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 46011e6ea392 ("MIPS: Make set_pte() SMP safe.")
Fixes: 82f4f66ddf11 ("MIPS: Remove open-coded cmpxchg() in set_pte()")

Revision tags: v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10, v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3, v4.18.19, v4.19.2, v4.18.18

# 378ed6f0 | 08-Nov-2018 | Paul Burton <paul.burton@mips.com>
MIPS: Avoid using .set mips0 to restore ISA
We currently have 2 commonly used methods for switching ISA within assembly code, then restoring the original ISA.
1) Using a pair of .set push & .set pop directives. For example:
    .set    push
    .set    mips32r2
    <some_insn>
    .set    pop
2) Using .set mips0 to restore the ISA originally specified on the command line. For example:
    .set    mips32r2
    <some_insn>
    .set    mips0
Unfortunately method 2 does not work with nanoMIPS toolchains, where the assembler rejects the .set mips0 directive like so:
Error: cannot change ISA from nanoMIPS to mips0
In preparation for supporting nanoMIPS builds, switch all instances of method 2 in generic non-platform-specific code to use push & pop as in method 1 instead. The .set push & .set pop is arguably cleaner anyway, and if nothing else it's good to consistently use one method.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21037/
Cc: linux-mips@linux-mips.org

Revision tags: v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9, v4.18.7, v4.18.6, v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12, v4.17.11, v4.17.10, v4.17.9, v4.17.8, v4.17.7, v4.17.6, v4.17.5, v4.17.4, v4.17.3, v4.17.2, v4.17.1, v4.17, v4.16, v4.15, v4.13.16, v4.14, v4.13.5

# a3f14310 | 03-Oct-2017 | Ben Hutchings <ben@decadent.org.uk>
MIPS: cmpxchg64() and HAVE_VIRT_CPU_ACCOUNTING_GEN don't work for 32-bit SMP
__cmpxchg64_local_generic() is atomic only w.r.t tasks and interrupts on the same CPU (that's what the 'local' means). We can't use it to implement cmpxchg64() in SMP configurations.
So, for 32-bit SMP configurations:
- Don't define cmpxchg64()
- Don't enable HAVE_VIRT_CPU_ACCOUNTING_GEN, which requires it
Fixes: e2093c7b03c1 ("MIPS: Fall back to generic implementation of ...")
Fixes: bb877e96bea1 ("MIPS: Add support for full dynticks CPU time accounting")
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.1+
Patchwork: https://patchwork.linux-mips.org/patch/17413/
Signed-off-by: James Hogan <jhogan@kernel.org>

Revision tags: v4.13

# 133d68e0 | 01-Sep-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: Fix cmpxchg on 32b signed ints for 64b kernel with !kernel_uses_llsc
Commit 8263db4d7768 ("MIPS: cmpxchg: Implement __cmpxchg() as a function") refactored our implementation of __cmpxchg() to be a function rather than a macro, with the aim of making it easier to read & modify. Unfortunately the commit breaks use of cmpxchg() for signed 32 bit values when we have a 64 bit kernel with kernel_uses_llsc == false, because:
- In cmpxchg_local() we cast the old value to the type the pointer points to, and then to an unsigned long. If the pointer points to a signed type smaller than 64 bits then the old value will be sign extended to 64 bits. That is, bits beyond the size of the pointed to type will be set to 1 if the old value is negative. In the case of a signed 32 bit integer with a negative value, bits 63:32 will all be set.
- In __cmpxchg_asm() we load the value from memory, ie. dereference the pointer, and store the value as an unsigned integer (__ret) whose size matches the pointer. For a 32 bit cmpxchg() this means we store the value in a u32, because the pointer provided to __cmpxchg_asm() by __cmpxchg() is of type volatile u32 *.
- __cmpxchg_asm() then checks whether the value in memory (__ret) matches the provided old value, by comparing the two values. This results in the u32 being promoted to a 64 bit unsigned long to match the old argument - however because both types are unsigned the value is zero extended, which does not match the sign extension performed on the old value in cmpxchg_local() earlier.
This mismatch means that unfortunate cmpxchg() calls can incorrectly fail for 64 bit kernels with kernel_uses_llsc == false. This is the case on at least non-SMP Cavium Octeon kernels, which hardcode kernel_uses_llsc in their cpu-feature-overrides.h header. Using a v4.13-rc7 kernel configured using cavium_octeon_defconfig with SMP manually disabled, this presents itself as an oddity when we reach userland - for example:
can't run '/bin/mount': Text file busy
can't run '/bin/mkdir': Text file busy
can't run '/bin/mkdir': Text file busy
can't run '/bin/mount': Text file busy
can't run '/bin/hostname': Text file busy
can't run '/etc/init.d/rcS': Text file busy
can't run '/sbin/getty': Text file busy
can't run '/sbin/getty': Text file busy
It appears that some part of the init process, which is in this case buildroot's busybox init, is running successfully. It never manages to reach the login prompt though, and complains about /sbin/getty being busy repeatedly and indefinitely.
Fix this by casting the old value provided to __cmpxchg_asm() to an appropriately sized unsigned integer, such that we consistently zero-extend avoiding the mismatch. The __cmpxchg_small() case for 8 & 16 bit values is unaffected because __cmpxchg_small() already masks provided values appropriately.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 8263db4d7768 ("MIPS: cmpxchg: Implement __cmpxchg() as a function")
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17226/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
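
The extension mismatch is easy to demonstrate in plain C (an illustrative user-space program, not kernel code):

    #include <stdio.h>

    int main(void)
    {
        int old = -1;                   /* signed 32-bit value given to cmpxchg() */
        unsigned int mem = 0xffffffffu; /* the same bit pattern read back as a u32 */

        unsigned long a = (unsigned long)old; /* sign-extended: 0xffffffffffffffff */
        unsigned long b = (unsigned long)mem; /* zero-extended: 0x00000000ffffffff */

        /* On a 64-bit target the logically equal values no longer compare
         * equal, which is why the cmpxchg() comparison could fail. */
        printf("%d\n", a == b); /* prints 0 */
        return 0;
    }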

Revision tags: v4.12

# 4843cf8d | 09-Jun-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: cmpxchg: Rearrange __xchg() arguments to match xchg()
The __xchg() function declares its first 2 arguments in reverse order compared to the xchg() macro, which is confusing & serves no purpose. Reorder the arguments such that __xchg() & xchg() match.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16356/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

# 3ba7f44d | 09-Jun-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()
Implement support for 1 & 2 byte cmpxchg() using read-modify-write atop a 4 byte cmpxchg(). This allows us to support these atomic operations despite the MIPS ISA only providing 4 & 8 byte atomic operations.
This is required in order to support queued rwlocks (qrwlock) in a later patch, since these make use of a 1 byte cmpxchg() in their slow path.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
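
The read-modify-write construction works along these lines (a portable sketch that assumes little-endian byte numbering and a hypothetical 4-byte cmpxchg_u32() primitive; the kernel computes the shift endian-aware):

    #include <stdint.h>

    static uint8_t cmpxchg_u8(volatile uint8_t *ptr, uint8_t old, uint8_t new)
    {
        /* Align down to the containing 32-bit word and locate the byte. */
        volatile uint32_t *word =
            (volatile uint32_t *)((uintptr_t)ptr & ~(uintptr_t)3);
        unsigned int shift = ((uintptr_t)ptr & 3) * 8; /* little-endian */
        uint32_t mask = 0xffu << shift;
        uint32_t loaded, expect, desire;

        do {
            loaded = *word;
            if ((uint8_t)((loaded & mask) >> shift) != old)
                return (uint8_t)((loaded & mask) >> shift); /* mismatch */
            expect = loaded;
            /* Splice in the new byte, leaving neighbouring bytes intact. */
            desire = (loaded & ~mask) | ((uint32_t)new << shift);
            /* Retry if any other byte of the word changed under our feet. */
        } while (cmpxchg_u32(word, expect, desire) != expect);

        return old;
    }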

# b70eb300 | 09-Jun-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: cmpxchg: Implement 1 byte & 2 byte xchg()
Implement 1 & 2 byte xchg() using read-modify-write atop a 4 byte cmpxchg(). This allows us to support these atomic operations despite the MIPS ISA only providing for 4 & 8 byte atomic operations.
This is required in order to support queued spinlocks (qspinlock) in a later patch, since these make use of a 2 byte xchg() in their slow path.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

# 8263db4d | 09-Jun-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: cmpxchg: Implement __cmpxchg() as a function
Replace the macro definition of __cmpxchg() with an inline function, which is easier to read & modify. The cmpxchg() & cmpxchg_local() macros are adjusted to call the new __cmpxchg() function.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16353/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

# 62c6081d | 09-Jun-2017 | Paul Burton <paul.burton@imgtec.com>
MIPS: cmpxchg: Drop __xchg_u{32,64} functions
The __xchg_u32() & __xchg_u64() functions now add very little value. This patch therefore removes them, by:
- Moving memory barriers out of them & into xchg(), which also removes the duplication & readies us to support xchg_relaxed() if we wish to.
- Calling __xchg_asm() directly from __xchg().
- Performing the check for CONFIG_64BIT being enabled in the size=8 case of __xchg().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16352/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>