Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39 |
|
#
6b289911 |
| 11-Jul-2023 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/features: Add capability to update mmu features later
On powerpc32, feature fixups are performed very early, which is too early to read the cmdline and take the 'nosmap' parameter into account.
On the other hand, no userspace access is performed that early and KUAP feature fixup can be performed later.
Add a function to update mmu features. The function is passed a mask with the features that can be updated.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/31b27ee2c9d338f4f82cd8cd69d6bff979495290.1689091022.git.christophe.leroy@csgroup.eu
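As a rough sketch (assuming the existing fixup table symbols and helpers in feature-fixups.c; the exact body may differ), such an update function can look like this:

    void __init update_mmu_feature_fixups(unsigned long mask)
    {
        /* Re-apply only the MMU feature fixups selected by 'mask'
         * (e.g. the KUAP feature bit), now that the cmdline has been
         * parsed.  Everything else was patched early and stays as-is.
         */
        do_feature_fixups(cur_cpu_spec->mmu_features & mask,
                          PTRRELOC(&__start___mmu_ftr_fixup),
                          PTRRELOC(&__stop___mmu_ftr_fixup));
    }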
|
Revision tags: v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12, v6.0.11 |
|
#
b988e779 |
| 02-Dec-2022 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/feature-fixups: Do not patch init section after init
Once the init section is freed, attempting to patch init code ends up in the weeds.
Commit 51c3c62b58b3 ("powerpc: Avoid code patching freed init sections") protected patch_instruction() against that, but it is the responsibility of the caller to ensure that the patched memory is valid.
In the same spirit as jump_label with its jump_label_can_update() function, add an is_fixup_addr_valid() function to skip patching of freed init sections.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8e9311fc1b057e4e6a2a3a0701ebcc74b787affe.1669969781.git.christophe.leroy@csgroup.eu
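The check itself is short; roughly, using the generic section helpers:

    static bool is_fixup_addr_valid(void *dest, size_t size)
    {
        /* Init text is only off-limits once it has been freed. */
        return system_state < SYSTEM_FREEING_INITMEM ||
               !init_section_contains(dest, size);
    }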
|
#
3d1dbbca |
| 02-Dec-2022 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/feature-fixups: Refactor other fixups patching
Several functions have the same loop for patching instructions.
Introduce function do_patch_fixups() to refactor those loops.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/58ab36949c18f94d466fc98d6c085783b0cd474f.1669969781.git.christophe.leroy@csgroup.eu
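A sketch of the shared loop (simplified; each fixup-table entry stores a relative offset to its patch site):

    static int do_patch_fixups(long *start, long *end, unsigned int *instrs, int num)
    {
        int i;

        for (i = 0; start < end; start++, i++) {
            unsigned int *dest = (void *)start + *start;    /* relative entry */
            int j;

            for (j = 0; j < num; j++)
                patch_instruction(dest + j, ppc_inst(instrs[j]));
        }
        return i;
    }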
|
#
6076dc34 |
| 02-Dec-2022 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/feature-fixups: Refactor entry fixups patching
Several functions have the same loop for patching instructions.
Introduce function do_patch_entry_fixups() to refactor those loops.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/79eeff7b20a98f7136da5f79b1f7c436928f27f3.1669969781.git.christophe.leroy@csgroup.eu
|
Revision tags: v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69 |
|
#
3e731858 |
| 19-Sep-2022 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc: Remove CONFIG_PPC_FSL_BOOK3E
CONFIG_PPC_FSL_BOOK3E is redundant with CONFIG_PPC_E500.
Remove it.
And rename five files accordingly.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Rename include guards to match new file names]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/795cb93b88c9a0279289712e674f39e3b108a1b4.1663606876.git.christophe.leroy@csgroup.eu
|
Revision tags: v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38 |
|
#
4390a58e |
| 09-May-2022 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/inst: Remove PPC_INST_BRANCH
Convert the last users of PPC_INST_BRANCH to PPC_RAW_BRANCH(), and remove PPC_INST_BRANCH.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fa8807108a2ef2287a2c9651d6e1ff7c051923d9.1652074503.git.christophe.leroy@csgroup.eu
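The shape of the conversion, illustratively:

    /* before: open-coded opcode arithmetic */
    ppc_inst(PPC_INST_BRANCH | (offset & 0x03FFFFFC));

    /* after: the raw-opcode macro hides the encoding */
    ppc_inst(PPC_RAW_BRANCH(offset));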
|
Revision tags: v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10 |
|
#
ce0c6be9 |
| 16-Dec-2021 |
Nick Child <nick.child@ibm.com> |
powerpc/lib: Add __init attribute to eligible functions
Some functions defined in 'arch/powerpc/lib' deserve an `__init` attribute. These functions are only called by other initialization functions and therefore should inherit the attribute. Also, change the function declarations in header files to include `__init`.
Signed-off-by: Nick Child <nick.child@ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211216220035.605465-3-nick.child@ibm.com
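The pattern, using a hypothetical function name for illustration:

    /* header: the declaration carries the attribute too */
    void __init apply_lib_fixups(void);

    /* arch/powerpc/lib/...: only called during boot, so the text can
     * live in .init.text and be freed once init completes */
    void __init apply_lib_fixups(void)
    {
    }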
|
Revision tags: v5.15.9, v5.15.8, v5.15.7, v5.15.6 |
|
#
c545b9f0 |
| 29-Nov-2021 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/inst: Define ppc_inst_t
In order to stop using 'struct ppc_inst' on PPC32, define a ppc_inst_t typedef.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fe5baa2c66fea9db05a8b300b3e8d2880a42596c.1638208156.git.christophe.leroy@csgroup.eu
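Conceptually (a sketch, not the exact preprocessor structure), the typedef lets the two word sizes diverge:

    #ifdef CONFIG_PPC64
    typedef struct ppc_inst ppc_inst_t;  /* may carry a prefixed (8-byte) insn */
    #else
    typedef u32 ppc_inst_t;              /* PPC32: always a single word */
    #endif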
|
Revision tags: v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15 |
|
#
3c12b4df |
| 27-Oct-2021 |
Russell Currey <ruscur@russell.cc> |
powerpc/security: Use a mutex for interrupt exit code patching
The mitigation-patching.sh script in the powerpc selftests toggles all mitigations on and off simultaneously, revealing that rfi_flush and stf_barrier cannot safely operate at the same time due to races in updating the static key.
On some systems, the static key code throws a warning and the kernel remains functional. On others, the kernel will hang or crash.
Fix this by slapping on a mutex.
Fixes: 13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211027072410.40950-1-ruscur@russell.cc
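Conceptually the fix is this (a sketch; the real patch takes the mutex in both the stf_barrier and rfi_flush toggle paths):

    static DEFINE_MUTEX(fixup_lock);

    void do_stf_barrier_fixups(enum stf_barrier_type types)
    {
        /* Serialise with the rfi_flush toggle so the shared static
         * keys are never updated concurrently. */
        mutex_lock(&fixup_lock);
        /* ... patch the entry/exit code and flip the static keys ... */
        mutex_unlock(&fixup_lock);
    }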
|
#
02da3632 |
| 27-Oct-2021 |
Russell Currey <ruscur@russell.cc> |
powerpc/security: Use a mutex for interrupt exit code patching
commit 3c12b4df8d5e026345a19886ae375b3ebc33c0b6 upstream.
The mitigation-patching.sh script in the powerpc selftests toggles all mitigations on and off simultaneously, revealing that rfi_flush and stf_barrier cannot safely operate at the same time due to races in updating the static key.
On some systems, the static key code throws a warning and the kernel remains functional. On others, the kernel will hang or crash.
Fix this by slapping on a mutex.
Fixes: 13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt")
Cc: stable@vger.kernel.org # v5.14+
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20211027072410.40950-1-ruscur@russell.cc
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
Revision tags: v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46 |
|
#
13799748 |
| 17-Jun-2021 |
Nicholas Piggin <npiggin@gmail.com> |
powerpc/64: use interrupt restart table to speed up return from interrupt
Use the restart table facility to return from interrupt or system calls without disabling MSR[EE] or MSR[RI].
The interrupt return asm is placed in the low soft-masked region so that interrupts are not processed there; they are still taken as masked interrupts, however, which clobbers the SRRs and leaves a pending soft-masked interrupt to be replayed.
If such an interrupt happens, the return code uses the restart table regions to redirect to a fixup handler rather than continuing with the exit. The fixup handler reloads r1 for the interrupt stack, reloads the registers, sets up state to replay the soft-masked interrupt, and tries the exit again.
Some types of security exit fallback flushes and barriers are currently unable to cope with reentrant interrupts, e.g., because they store some state in the scratch SPR which would be clobbered even by masked interrupts. For now the interrupts-enabled exits are disabled when these flushes are used.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Guard unused exit_must_hard_disable() as reported by lkp]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-13-npiggin@gmail.com
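Conceptually, each restart-table entry names a window of code and where to divert if an interrupt lands inside it (a sketch of the idea, not the exact kernel layout):

    struct restart_entry {
        unsigned long start;   /* first insn of the protected window */
        unsigned long end;     /* one past the last insn */
        unsigned long fixup;   /* handler: reload r1, replay, retry exit */
    };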
|
Revision tags: v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39 |
|
#
69d4d6e5 |
| 20-May-2021 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc: Don't use 'struct ppc_inst' to reference instruction location
'struct ppc_inst' is an internal representation of an instruction, but in-memory instructions are and will remain a table of 'u32' forever.
Replace all 'struct ppc_inst *' used for locating an instruction in memory by 'u32 *'. This removes a lot of undue casts to 'struct ppc_inst *'.
It also helps locate abuse of 'struct ppc_inst' dereferences.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix ppc_inst_next(), use u32 instead of unsigned int]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7062722b087228e42cbd896e39bfdf526d6a340a.1621516826.git.christophe.leroy@csgroup.eu
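The flavour of the conversion, illustratively:

    /* before: code locations forced through the internal type */
    struct ppc_inst *dest = (struct ppc_inst *)addr;

    /* after: in-memory code is just a table of 32-bit words */
    u32 *dest = addr;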
|
#
18c85964 |
| 20-May-2021 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc: Do not dereference code as 'struct ppc_inst' (uprobe, code-patching, feature-fixups)
'struct ppc_inst' is an internal structure to represent an instruction; it is not directly the representation of that instruction in the text, and it is not meant for mapping and dereferencing code.
Dereferencing code directly through 'struct ppc_inst' has two main issues:
- On powerpc, structs are expected to be 8-byte aligned, while code is laid out every 4 bytes.
- Should a non-prefixed instruction lie at the end of a page with the following page unmapped, dereferencing it would generate a page fault.
In-memory code must be accessed with ppc_inst_read().
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c9a1201dd0a66b4a0f91f0fb46d9385cbf030feb.1621516826.git.christophe.leroy@csgroup.eu
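Reading an instruction then looks roughly like this (spelled with the later ppc_inst_t name; at the time the type was still 'struct ppc_inst'):

    u32 *p = (u32 *)addr;
    ppc_inst_t insn = ppc_inst_read(p);  /* copes with 4-byte alignment and,
                                          * on PPC64, prefixed instructions */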
|
#
ef909ba9 |
| 20-May-2021 |
Christophe Leroy <christophe.leroy@csgroup.eu> |
powerpc/lib/feature-fixups: Use PPC_RAW_xxx() macros
Use PPC_RAW_xxx() macros instead of open coding assembly opcodes.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Fix bad conversion in do_stf_exit_barrier_fixups()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e79cd8e111ca13bf8c61a384bac365aa7e207647.1621506159.git.christophe.leroy@csgroup.eu
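Illustratively (opcodes shown for a nop and 'mflr r10'):

    /* before: raw opcodes, meaning buried in hex */
    instrs[0] = 0x60000000;    /* nop */
    instrs[1] = 0x7d4802a6;    /* mflr r10 */

    /* after: self-describing macros */
    instrs[0] = PPC_RAW_NOP();
    instrs[1] = PPC_RAW_MFLR(_R10);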
|
Revision tags: v5.4.119 |
|
#
5b48ba2f |
| 13-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix stf mitigation patching w/strict RWX & hash
The stf entry barrier fallback is unsafe to execute in a semi-patched state, which can happen when enabling/disabling the mitigation with strict kernel RWX enabled and using the hash MMU.
See the previous commit for more details.
Fix it by changing the order in which we patch the instructions.
Note the stf barrier fallback is only used on Power6 or earlier.
Fixes: bd573a81312f ("powerpc/mm/64s: Allow STRICT_KERNEL_RWX again")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210513140800.1391706-2-mpe@ellerman.id.au
|
#
49b39ec2 |
| 13-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix entry flush patching w/strict RWX & hash
The entry flush mitigation can be enabled/disabled at runtime. When this happens it results in the kernel patching its own instructions to enable/disable the mitigation sequence.
With strict kernel RWX enabled instruction patching happens via a secondary mapping of the kernel text, so that we don't have to make the primary mapping writable. With the hash MMU this leads to a hash fault, which causes us to execute the exception entry which contains the entry flush mitigation.
This means we end up executing the entry flush in a semi-patched state, ie. after we have patched the first instruction but before we patch the second or third instruction of the sequence.
On machines with updated firmware the entry flush is a series of special nops, and it's safe to execute in a semi-patched state.
However, when using the fallback flush the sequence is mflr/branch/mtlr, so it is not safe to execute if we have patched out the mflr but not the other two instructions. Doing so corrupts the LR and leads to an oops, for example:
# echo 0 > /sys/kernel/debug/powerpc/entry_flush
kernel tried to execute exec-protected page (c000000002971000) - exploit attempt? (uid: 0)
BUG: Unable to handle kernel instruction fetch
Faulting instruction address: 0xc000000002971000
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
CPU: 0 PID: 2215 Comm: bash Not tainted 5.13.0-rc1-00010-gda3bb206c9ce #1
NIP: c000000002971000 LR: c000000002971000 CTR: c000000000120c40
REGS: c000000013243840 TRAP: 0400 Not tainted (5.13.0-rc1-00010-gda3bb206c9ce)
MSR: 8000000010009033 <SF,EE,ME,IR,DR,RI,LE> CR: 48428482 XER: 00000000
...
NIP 0xc000000002971000
LR 0xc000000002971000
Call Trace:
  do_patch_instruction+0xc4/0x340 (unreliable)
  do_entry_flush_fixups+0x100/0x3b0
  entry_flush_set+0x50/0xe0
  simple_attr_write+0x160/0x1a0
  full_proxy_write+0x8c/0x110
  vfs_write+0xf0/0x340
  ksys_write+0x84/0x140
  system_call_exception+0x164/0x2d0
  system_call_common+0xec/0x278
The simplest fix is to change the order in which we patch the instructions, so that the sequence is always safe to execute. For the non-fallback flushes it doesn't matter what order we patch in.
Fixes: bd573a81312f ("powerpc/mm/64s: Allow STRICT_KERNEL_RWX again")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210513140800.1391706-1-mpe@ellerman.id.au
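For the three-instruction fallback (mflr/branch/mtlr) the safe order is: body first, first instruction last. A sketch:

    /* Never leave mflr live without its matching branch/mtlr:
     * patch the 2nd and 3rd instructions before the 1st. */
    patch_instruction(dest + 1, ppc_inst(instrs[1]));    /* branch */
    patch_instruction(dest + 2, ppc_inst(instrs[2]));    /* mtlr   */
    patch_instruction(dest,     ppc_inst(instrs[0]));    /* mflr   */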
|
Revision tags: v5.10.36, v5.10.35 |
|
#
aec86b05 |
| 05-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix crashes when toggling entry flush barrier
The entry flush mitigation can be enabled/disabled at runtime via a debugfs file (entry_flush), which causes the kernel to patch itself to enable/disable the relevant mitigations.
However depending on which mitigation we're using, it may not be safe to do that patching while other CPUs are active. For example the following crash:
sleeper[15639]: segfault (11) at c000000000004c20 nip c000000000004c20 lr c000000000004c20
Shows that we returned to userspace with a corrupted LR that points into the kernel, due to executing the partially patched call to the fallback entry flush (ie. we missed the LR restore).
Fix it by doing the patching under stop machine. The CPUs that aren't doing the patching will be spinning in the core of the stop machine logic. That is currently sufficient for our purposes, because none of the patching we do is to that code or anywhere in the vicinity.
Fixes: f79643787e0a ("powerpc/64s: flush L1D on kernel entry")
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210506044959.1298123-2-mpe@ellerman.id.au
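The shape of the fix (a sketch; the stf barrier paths get the same treatment):

    static int __do_entry_flush_fixups(void *data)
    {
        enum l1d_flush_type types = *(enum l1d_flush_type *)data;

        /* ... patch the entry flush sites for 'types' ... */
        return 0;
    }

    void do_entry_flush_fixups(enum l1d_flush_type types)
    {
        /* All other CPUs spin in stop_machine()'s core loop, safely
         * away from the code being patched. */
        stop_machine(__do_entry_flush_fixups, &types, NULL);
    }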
|
#
8ec7791b |
| 05-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix crashes when toggling stf barrier
The STF (store-to-load forwarding) barrier mitigation can be enabled/disabled at runtime via a debugfs file (stf_barrier), which causes the kernel to patch itself to enable/disable the relevant mitigations.
However depending on which mitigation we're using, it may not be safe to do that patching while other CPUs are active. For example the following crash:
User access of kernel address (c00000003fff5af0) - exploit attempt? (uid: 0)
segfault (11) at c00000003fff5af0 nip 7fff8ad12198 lr 7fff8ad121f8 code 1
code: 40820128 e93c00d0 e9290058 7c292840 40810058 38600000 4bfd9a81 e8410018
code: 2c030006 41810154 3860ffb6 e9210098 <e94d8ff0> 7d295279 39400000 40820a3c
Shows that we returned to userspace without restoring the user r13 value, due to executing the partially patched STF exit code.
Fix it by doing the patching under stop machine. The CPUs that aren't doing the patching will be spinning in the core of the stop machine logic. That is currently sufficient for our purposes, because none of the patching we do is to that code or anywhere in the vicinity.
Fixes: a048a07d7f45 ("powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210506044959.1298123-1-mpe@ellerman.id.au
|
Revision tags: v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14 |
|
#
08685be7 |
| 11-Jan-2021 |
Nicholas Piggin <npiggin@gmail.com> |
powerpc/64s: fix scv entry fallback flush vs interrupt
The L1D flush fallback functions are not recoverable vs interrupts, yet the scv entry flush runs with MSR[EE]=1. This can result in a timer (soft-NMI) or MCE or SRESET interrupt hitting here and overwriting the EXRFI save area, which ends up corrupting userspace registers for scv return.
Fix this by disabling RI and EE for the scv entry fallback flush.
Fixes: f79643787e0a0 ("powerpc/64s: flush L1D on kernel entry")
Cc: stable@vger.kernel.org # 5.9+ which also have flush L1D patch backport
Reported-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210111062408.287092-1-npiggin@gmail.com
|
Revision tags: v5.10 |
|
#
1fc0c27b |
| 01-Dec-2020 |
Daniel Axtens <dja@axtens.net> |
powerpc/feature-fixups: use a semicolon rather than a comma
In a bunch of our security flushes, we use a comma rather than a semicolon to 'terminate' an assignment. Nothing breaks, but checkpatch picks it up if you copy it into another flush.
Switch to semicolons for ending statements.
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20201201144344.1228421-1-dja@axtens.net
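The change is one character per site; illustratively (macro names for the flush instructions are simplified here):

    /* before: the comma operator quietly chains the assignments */
    instrs[0] = PPC_RAW_NOP(),
    instrs[1] = PPC_RAW_NOP();

    /* after */
    instrs[0] = PPC_RAW_NOP();
    instrs[1] = PPC_RAW_NOP();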
|
#
d2e3590c |
| 05-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix crashes when toggling entry flush barrier
commit aec86b052df6541cc97c5fca44e5934cbea4963b upstream.
The entry flush mitigation can be enabled/disabled at runtime via a debugfs file (entry_flush), which causes the kernel to patch itself to enable/disable the relevant mitigations.
However depending on which mitigation we're using, it may not be safe to do that patching while other CPUs are active. For example the following crash:
sleeper[15639]: segfault (11) at c000000000004c20 nip c000000000004c20 lr c000000000004c20
Shows that we returned to userspace with a corrupted LR that points into the kernel, due to executing the partially patched call to the fallback entry flush (ie. we missed the LR restore).
Fix it by doing the patching under stop machine. The CPUs that aren't doing the patching will be spinning in the core of the stop machine logic. That is currently sufficient for our purposes, because none of the patching we do is to that code or anywhere in the vicinity.
Fixes: f79643787e0a ("powerpc/64s: flush L1D on kernel entry")
Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210506044959.1298123-2-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
51570bee |
| 05-May-2021 |
Michael Ellerman <mpe@ellerman.id.au> |
powerpc/64s: Fix crashes when toggling stf barrier
commit 8ec7791bae1327b1c279c5cd6e929c3b12daaf0a upstream.
The STF (store-to-load forwarding) barrier mitigation can be enabled/disabled at runtime via a debugfs file (stf_barrier), which causes the kernel to patch itself to enable/disable the relevant mitigations.
However depending on which mitigation we're using, it may not be safe to do that patching while other CPUs are active. For example the following crash:
User access of kernel address (c00000003fff5af0) - exploit attempt? (uid: 0)
segfault (11) at c00000003fff5af0 nip 7fff8ad12198 lr 7fff8ad121f8 code 1
code: 40820128 e93c00d0 e9290058 7c292840 40810058 38600000 4bfd9a81 e8410018
code: 2c030006 41810154 3860ffb6 e9210098 <e94d8ff0> 7d295279 39400000 40820a3c
Shows that we returned to userspace without restoring the user r13 value, due to executing the partially patched STF exit code.
Fix it by doing the patching under stop machine. The CPUs that aren't doing the patching will be spinning in the core of the stop machine logic. That is currently sufficient for our purposes, because none of the patching we do is to that code or anywhere in the vicinity.
Fixes: a048a07d7f45 ("powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210506044959.1298123-1-mpe@ellerman.id.au
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
062dea90 |
| 11-Jan-2021 |
Nicholas Piggin <npiggin@gmail.com> |
powerpc/64s: fix scv entry fallback flush vs interrupt
commit 08685be7761d69914f08c3d6211c543a385a5b9c upstream.
The L1D flush fallback functions are not recoverable vs interrupts, yet the scv entry flush runs with MSR[EE]=1. This can result in a timer (soft-NMI) or MCE or SRESET interrupt hitting here and overwriting the EXRFI save area, which ends up corrupting userspace registers for scv return.
Fix this by disabling RI and EE for the scv entry fallback flush.
Fixes: f79643787e0a0 ("powerpc/64s: flush L1D on kernel entry")
Cc: stable@vger.kernel.org # 5.9+ which also have flush L1D patch backport
Reported-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210111062408.287092-1-npiggin@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
9a32a7e7 |
| 16-Nov-2020 |
Nicholas Piggin <npiggin@gmail.com> |
powerpc/64s: flush L1D after user accesses
IBM Power9 processors can speculatively operate on data in the L1 cache before it has been completely validated, via a way-prediction mechanism. It is not possible for an attacker to determine the contents of impermissible memory using this method, since these systems implement a combination of hardware and software security measures to prevent scenarios where protected data could be leaked.
However these measures don't address the scenario where an attacker induces the operating system to speculatively execute instructions using data that the attacker controls. This can be used for example to speculatively bypass "kernel user access prevention" techniques, as discovered by Anthony Steinhauser of Google's Safeside Project. This is not an attack by itself, but there is a possibility it could be used in conjunction with side-channels or other weaknesses in the privileged code to construct an attack.
This issue can be mitigated by flushing the L1 cache between privilege boundaries of concern. This patch flushes the L1 cache after user accesses.
This is part of the fix for CVE-2020-4788.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
#
f7964378 |
| 16-Nov-2020 |
Nicholas Piggin <npiggin@gmail.com> |
powerpc/64s: flush L1D on kernel entry
IBM Power9 processors can speculatively operate on data in the L1 cache before it has been completely validated, via a way-prediction mechanism. It is not possible for an attacker to determine the contents of impermissible memory using this method, since these systems implement a combination of hardware and software security measures to prevent scenarios where protected data could be leaked.
However these measures don't address the scenario where an attacker induces the operating system to speculatively execute instructions using data that the attacker controls. This can be used for example to speculatively bypass "kernel user access prevention" techniques, as discovered by Anthony Steinhauser of Google's Safeside Project. This is not an attack by itself, but there is a possibility it could be used in conjunction with side-channels or other weaknesses in the privileged code to construct an attack.
This issue can be mitigated by flushing the L1 cache between privilege boundaries of concern. This patch flushes the L1 cache on kernel entry.
This is part of the fix for CVE-2020-4788.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|