2c08b9b3 | 06-Jul-2022 | Peter Zijlstra <peterz@infradead.org>
x86/entry: Move PUSH_AND_CLEAR_REGS() back into error_entry
Commit
ee774dac0da1 ("x86/entry: Move PUSH_AND_CLEAR_REGS out of error_entry()")
moved PUSH_AND_CLEAR_REGS out of error_entry, into its own function, in part to avoid calling error_entry() for XenPV.
However, commit
7c81c0c9210c ("x86/entry: Avoid very early RET")
had to change that because the 'ret' was too early and moved it into idtentry, bloating the text size, since idtentry is expanded for every exception vector.
However, with the advent of xen_error_entry() in commit
d147553b64bad ("x86/xen: Add UNTRAIN_RET")
it became possible to move PUSH_AND_CLEAR_REGS out of idtentry and back into *error_entry().
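As a toy illustration of the text-size argument (hypothetical C, not the actual entry asm): a macro body is duplicated at every expansion site, one per exception vector, while a called helper is emitted once.

    /* Hypothetical sketch only; the real code is x86-64 entry asm. */
    #define PUSH_AND_CLEAR_SKETCH() \
            do { /* imagine ~100 bytes of register pushes and clears */ } while (0)

    static void error_entry_sketch(void)
    {
            PUSH_AND_CLEAR_SKETCH();   /* one shared copy, behind a CALL */
    }

    /* idtentry is instantiated per vector: with the macro inlined, each
     * stub would carry its own copy; with the helper, each stub is tiny. */
    void vector_stub_a(void) { error_entry_sketch(); }
    void vector_stub_b(void) { error_entry_sketch(); }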
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
b2620fac | 14-Jun-2022 | Josh Poimboeuf <jpoimboe@kernel.org>
x86/speculation: Fix RSB filling with CONFIG_RETPOLINE=n
If a kernel is built with CONFIG_RETPOLINE=n, but the user still wants to mitigate Spectre v2 using IBRS or eIBRS, the RSB filling will be silently disabled.
There's nothing retpoline-specific about RSB buffer filling. Remove the CONFIG_RETPOLINE guards around it.
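A rough C rendering of what is being un-gated (an assumption-laden sketch; the real code is the FILL_RETURN_BUFFER asm macro in arch/x86/include/asm/nospec-branch.h, which issues CALLs without executing the matching RETs):

    #define RSB_CLEAR_LOOPS 32      /* matches the kernel's loop count */

    static void __attribute__((noinline)) rsb_benign_target(void) { }

    void rsb_fill_sketch(void)
    {
            /* Previously the whole body was under #ifdef CONFIG_RETPOLINE;
             * the fix drops that guard so IBRS/eIBRS kernels also stuff
             * the RSB with benign return targets. */
            for (int i = 0; i < RSB_CLEAR_LOOPS; i++)
                    rsb_benign_target();    /* each CALL pushes an RSB entry */
    }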
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
a09a6e23 | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
objtool: Add entry UNRET validation
Since entry asm is tricky, add a validation pass that ensures the retbleed mitigation has been done before the first actual RET instruction.
Entry points are those that either have UNWIND_HINT_ENTRY, which acts as UNWIND_HINT_EMPTY but marks the instruction as an entry point, or those that have UNWIND_HINT_IRET_REGS at +0.
This is basically an intra-function variant of validate_branch(): it simply follows all branches from marked entry points and ensures that all paths lead to ANNOTATE_UNRET_END.
If a path hits a RET or an indirect branch first, that path is a failure and will be reported.
There are 3 ANNOTATE_UNRET_END instances:
 - UNTRAIN_RET itself
 - exception from-kernel; this path doesn't need UNTRAIN_RET
 - all early exceptions; these also don't need UNTRAIN_RET
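A condensed, self-contained sketch of such a pass (hypothetical types and names, not objtool's actual code, which lives in tools/objtool/check.c):

    #include <stdbool.h>

    enum insn_type { INSN_OTHER, INSN_RET, INSN_INDIRECT, INSN_UNRET_END };

    struct insn {
            enum insn_type type;
            bool visited;
            struct insn *next;          /* fall-through successor */
            struct insn *jump_dest;     /* branch target, if any  */
    };

    /* Returns true if a RET or indirect branch is reachable from 'insn'
     * before an ANNOTATE_UNRET_END marker, i.e. the entry path fails. */
    static bool unret_unsafe(struct insn *insn)
    {
            for (; insn && !insn->visited; insn = insn->next) {
                    insn->visited = true;   /* already-checked paths are skipped */
                    if (insn->type == INSN_UNRET_END)
                            return false;   /* mitigation done; path is safe */
                    if (insn->type == INSN_RET || insn->type == INSN_INDIRECT)
                            return true;    /* early RET/indirection: report */
                    if (insn->jump_dest && unret_unsafe(insn->jump_dest))
                            return true;    /* follow the branched path too */
            }
            return false;
    }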
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
3ebc1700 | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
x86/bugs: Add retbleed=ibpb
jmp2ret mitigates the easy-to-attack case at relatively low overhead. It mitigates the long speculation windows after a mispredicted RET, but it does not mitigate the short speculation window from arbitrary instruction boundaries.
On Zen2, there is a chicken bit which needs setting, which mitigates "arbitrary instruction boundaries" down to just "basic block boundaries".
But there is no fix for the short speculation window on basic block boundaries, other than to flush the entire BTB to evict all attacker predictions.
On the spectrum of "fast & blurry" -> "safe", there is (on top of STIBP or no-SMT):
 1) Nothing               System wide open
 2) jmp2ret               May stop a script kiddy
 3) jmp2ret+chickenbit    Raises the bar rather further
 4) IBPB                  Only thing which can count as "safe".
Tentative numbers put IBPB-on-entry at a 2.5x hit on Zen2, and a 10x hit on Zen1 according to lmbench.
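For illustration, a simplified, hypothetical rendering of how such a boot option is typically wired up (the real parsing sits in arch/x86/kernel/cpu/bugs.c and differs in detail):

    #include <linux/init.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    enum retbleed_cmd { RETBLEED_CMD_AUTO, RETBLEED_CMD_OFF,
                        RETBLEED_CMD_UNRET, RETBLEED_CMD_IBPB };

    static enum retbleed_cmd retbleed_cmd __ro_after_init = RETBLEED_CMD_AUTO;

    static int __init retbleed_parse_cmdline(char *str)
    {
            if (!str)
                    return -EINVAL;
            if (!strcmp(str, "off"))
                    retbleed_cmd = RETBLEED_CMD_OFF;
            else if (!strcmp(str, "unret"))         /* the jmp2ret thunk */
                    retbleed_cmd = RETBLEED_CMD_UNRET;
            else if (!strcmp(str, "ibpb"))          /* flush the BTB on entry */
                    retbleed_cmd = RETBLEED_CMD_IBPB;
            return 0;
    }
    early_param("retbleed", retbleed_parse_cmdline);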
[ bp: Fixup feature bit comments, document option, 32-bit build fix. ]
Suggested-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
2dbb887e | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
x86/entry: Add kernel IBRS implementation
Implement Kernel IBRS - currently the only known option to mitigate RSB underflow speculation issues on Skylake hardware.
Note: IBRS_ENTER requires a fuller context to be established than UNTRAIN_RET does, which argues for placing it after UNTRAIN_RET; however, UNTRAIN_RET itself implies a RET, which must not run before IBRS is enabled. Resolving these conflicting constraints means that adding IBRS_ENTER also requires moving UNTRAIN_RET.
Note 2: KERNEL_IBRS is sub-optimal for XenPV.
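Conceptually, a hedged C sketch of what the IBRS_ENTER/IBRS_EXIT asm macros amount to (the real versions are alternatives-patched asm in arch/x86/entry/calling.h; the helper names here are invented):

    #include <asm/msr.h>
    #include <asm/nospec-branch.h>

    static __always_inline void ibrs_enter_sketch(void)
    {
            /* On kernel entry, set SPEC_CTRL.IBRS so indirect branch and
             * RET predictions cannot be steered by user-mode training. */
            native_wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
    }

    static __always_inline void ibrs_exit_sketch(u64 saved_spec_ctrl)
    {
            /* On return to user, restore the previously saved value. */
            native_wrmsrl(MSR_IA32_SPEC_CTRL, saved_spec_ctrl);
    }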
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
a149180f | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
x86: Add magic AMD return-thunk
Note: needs to be in a section distinct from Retpolines such that the Retpoline RET substitution cannot possibly use immediate jumps.
ORC unwinding for zen_untrain_ret() and __x86_return_thunk() is a little tricky, but it works because zen_untrain_ret() has no stack ops and as such emits a single ORC entry at its start (+0x3f).
Meanwhile, unwinding any IP, including the __x86_return_thunk() one (+0x40), searches for the largest ORC entry smaller than or equal to that IP; such lookups find that one ORC entry (+0x3f) and all works.
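The lookup property relied on here, as a simplified sketch (the ORC unwinder's real binary search is in arch/x86/kernel/unwind_orc.c; names here are illustrative):

    struct orc_entry;       /* opaque for this sketch */

    /* Return the ORC entry whose start address is the largest one <= ip.
     * Any ip inside zen_untrain_ret, and __x86_return_thunk itself,
     * therefore resolves to the single entry emitted at the start of
     * the region, which is why unwinding works for both. */
    static struct orc_entry *orc_find_sketch(unsigned long ip,
                                             const unsigned long *addrs,
                                             struct orc_entry *orcs,
                                             int num)
    {
            int lo = 0, hi = num - 1, best = -1;

            while (lo <= hi) {
                    int mid = lo + (hi - lo) / 2;

                    if (addrs[mid] <= ip) {
                            best = mid;         /* candidate; look right */
                            lo = mid + 1;
                    } else {
                            hi = mid - 1;       /* too big; look left */
                    }
            }
            return best >= 0 ? &orcs[best] : NULL;
    }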
[ Alexandre: SVM part. ] [ bp: Build fix, massages. ]
Suggested-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
7c81c0c9 | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
x86/entry: Avoid very early RET
Commit
ee774dac0da1 ("x86/entry: Move PUSH_AND_CLEAR_REGS out of error_entry()")
manages to introduce a CALL/RET pair that is before SWITCH_TO_KERNEL_CR3, which means it is before RETBleed can be mitigated.
Revert to an earlier version of the commit named in Fixes. The downside is that this will bloat the .text size somewhat; the alternative is fully reverting it.
The purpose of this patch was to allow migrating error_entry() to C, including the whole of kPTI. Much care needs to be taken moving that forward to not re-introduce this problem of early RETs.
Fixes: ee774dac0da1 ("x86/entry: Move PUSH_AND_CLEAR_REGS out of error_entry()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
aa3d4803 | 14-Jun-2022 | Peter Zijlstra <peterz@infradead.org>
x86: Use return-thunk in asm code
Use the return thunk in asm code. If the thunk isn't needed, it will get patched into a RET instruction during boot by apply_returns().
Since alternatives can't handle relocations outside of the first instruction, putting a 'jmp __x86_return_thunk' in one is not valid; therefore, carve out the memmove ERMS path into a separate label and jump to it.
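The underlying issue can be demonstrated with plain arithmetic (a self-contained toy with made-up addresses): a rel32 JMP encodes its target relative to the next instruction's address, so when alternatives patching copies replacement bytes to the patch site, a branch that is not the first instruction lands in the wrong place.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t site   = 0x1000;   /* where the bytes get patched in */
            uint64_t repl   = 0x2000;   /* where they were assembled      */
            uint64_t target = 0x3000;   /* intended jump destination      */

            /* 'jmp target' assembled 5 bytes into the replacement, i.e.
             * not as the first instruction; rel32 is relative to the end
             * of the 5-byte JMP itself: */
            int32_t disp = (int32_t)(target - (repl + 5 + 5));

            /* After the raw bytes are copied to 'site', the CPU computes: */
            uint64_t lands = site + 5 + 5 + (int64_t)disp;

            printf("intended %#llx, actually jumps to %#llx\n",
                   (unsigned long long)target, (unsigned long long)lands);
            return 0;
    }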
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
d6ecaa00 | 23-May-2022 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'x86_vdso_for_v5.19_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 vdso update from Borislav Petkov:
- Get rid of CONFIG_LEGACY_VSYSCALL_EMULATE as nothing should be using it anymore
* tag 'x86_vdso_for_v5.19_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/vsyscall: Remove CONFIG_LEGACY_VSYSCALL_EMULATE