Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24

# ca708599 | 12-Apr-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: use XPACLRI to strip PAC
Currently we strip the PAC from pointers using C code, which requires generating bitmasks, and conditionally clearing/setting bits depending on bit 55. We can do better by using XPACLRI directly.
When the logic was originally written to strip PACs from user pointers, contemporary toolchains used for the kernel had assemblers which were unaware of the PAC instructions. As stripping the PAC from userspace pointers required unconditional clearing of a fixed set of bits (which could be performed with a single instruction), it was simpler to implement the masking in C than it was to make use of XPACI or XPACLRI.
When support for in-kernel pointer authentication was added, the stripping logic was extended to cover TTBR1 pointers, requiring several instructions to handle whether to clear/set bits dependent on bit 55 of the pointer.
This patch simplifies the stripping of PACs by using XPACLRI directly, as contemporary toolchains do within __builtin_return_address(). This saves a number of instructions, especially where __builtin_return_address() does not implicitly strip the PAC but is heavily used (e.g. with tracepoints). As the kernel might be compiled with an assembler without knowledge of XPACLRI, it is assembled using the 'HINT #7' alias, which results in an identical opcode.
At the same time, I've split ptrauth_strip_insn_pac() into ptrauth_strip_user_insn_pac() and ptrauth_strip_kernel_insn_pac() helpers so that we can avoid unnecessary PAC stripping when pointer authentication is not in use in userspace or kernel respectively.
The underlying xpaclri() macro uses inline assembly which clobbers x30. The clobber causes the compiler to save/restore the original x30 value in a frame record (protected with PACIASP and AUTIASP when in-kernel authentication is enabled), so this does not provide a gadget to alter the return address. Similarly this does not adversely affect unwinding due to the presence of the frame record.
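As a concrete illustration, a PAC-stripping helper along these lines can be written as follows. This is a minimal sketch assuming GCC/Clang extended inline asm, not the kernel's exact macro:

/*
 * Minimal sketch of an XPACLRI-based strip helper. XPACLRI operates
 * only on x30, so the pointer is forced into that register. "hint #7"
 * is the XPACLRI alias in the HINT space, which pre-PAC assemblers
 * accept and which PAC-unaware CPUs execute as a NOP.
 */
static inline unsigned long xpaclri(unsigned long ptr)
{
	register unsigned long x30 asm("x30") = ptr;

	asm("hint #7" : "+r" (x30));	/* XPACLRI */

	return x30;
}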
The ptrauth_user_pac_mask() and ptrauth_kernel_pac_mask() macros are exported from the kernel in ptrace and core dumps, so these are retained. A subsequent patch will move them out of <asm/compiler.h>.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230412160134.306148-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

# b5ecc19e | 11-Apr-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: always inline core stacktrace functions
The arm64 stacktrace code can be used in kprobe context, and so cannot be safely probed. Some (but not all) of the unwind functions are annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked as `__always_inline`, relying on the top-level unwind function being marked as `noinstr`.
This patch has stacktrace.c consistently mark the internal stacktrace functions as `__always_inline`, removing the need for NOKPROBE_SYMBOL() as the top-level unwind function (arch_stack_walk()) is marked as `noinstr`. This is more consistent and is a simpler pattern to follow for future additions to stacktrace.c.
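The resulting shape of stacktrace.c, sketched with abridged bodies (the consume-callback type is the generic stack_trace_consume_fn):

/* every internal step is inlined into the noinstr entry point */
static __always_inline int unwind_next(struct unwind_state *state)
{
	/* ... unwind a single frame record ... */
	return 0;
}

static __always_inline void unwind(struct unwind_state *state,
				   stack_trace_consume_fn consume_entry,
				   void *cookie)
{
	/* ... loop over unwind_next() until it fails ... */
}

/* the only non-inlined symbol; noinstr keeps probes and tracing away */
noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
			     void *cookie, struct task_struct *task,
			     struct pt_regs *regs)
{
	/* ... initialise state, then call unwind() ... */
}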
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230411162943.203199-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

# ead6122c | 11-Apr-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: move dump functions to end of file
For historical reasons, the backtrace dumping functions are placed in the middle of stacktrace.c, despite using functions defined later. For clarity, and to make subsequent refactoring easier, move the dumping functions to the end of stacktrace.c.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230411162943.203199-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

# 9e09d445 | 11-Apr-2023 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: recover return address for first entry
The function which calls the top-level backtracing function may have been instrumented with ftrace and/or kprobes, and hence the first return address may have been rewritten.
Factor out the existing fgraph / kretprobes address recovery, and use this for the first address. As the comment for the fgraph case isn't all that helpful, I've also dropped that.
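A sketch of what the factored-out helper might look like, using the generic ftrace_graph_ret_addr() and kretprobe_find_ret_addr() interfaces (the state-field names here are illustrative):

static __always_inline void
unwind_recover_return_address(struct unwind_state *state)
{
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	/* fgraph rewrites return addresses to return_to_handler */
	if (state->task->ret_stack &&
	    state->pc == (unsigned long)return_to_handler)
		state->pc = ftrace_graph_ret_addr(state->task, NULL,
						  state->pc,
						  (void *)state->fp);
#endif
#ifdef CONFIG_KRETPROBES
	/* kretprobes rewrites them to its trampoline */
	if (is_kretprobe_trampoline(state->pc))
		state->pc = kretprobe_find_ret_addr(state->task,
						    (void *)state->fp,
						    &state->kr_cur);
#endif
}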
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230411162943.203199-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

Revision tags: v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1

# 7ea55715 | 09-Dec-2022 | Ard Biesheuvel <ardb@kernel.org>
arm64: efi: Account for the EFI runtime stack in stack unwinder
The EFI runtime services run from a dedicated stack now, and so the stack unwinder needs to be informed about this.
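In terms of the stackinfo helpers used by the unwinder at this point, the addition amounts to something like the following sketch (assuming an efi_rt_stack_top pointer exported by the EFI code and a THREAD_SIZE-sized runtime stack):

static inline struct stack_info stackinfo_get_efi(void)
{
	unsigned long high = (unsigned long)efi_rt_stack_top;
	unsigned long low = high - THREAD_SIZE;

	return (struct stack_info) {
		.low = low,
		.high = high,
	};
}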
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Revision tags: v6.0.12, v6.0.11

# 0fbcd8ab | 01-Dec-2022 | Masami Hiramatsu (Google) <mhiramat@kernel.org>
arm64: Prohibit instrumentation on arch_stack_walk()
Mark arch_stack_walk() as noinstr instead of notrace, and mark the inline functions called from arch_stack_walk() as __always_inline, so that no instrumentation can be placed on them: arch_stack_walk() can be called from return_address(), which is used by lockdep.
Without this, if the kernel is built with CONFIG_LOCKDEP=y, merely probing arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on arm64:
# echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
# echo 1 > ${TRACEFS}/events/kprobes/enable
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
PREEMPT SMP
Modules linked in:
CPU: 0 PID: 17 Comm: migration/0 Tainted: G N 6.1.0-rc5+ #6
Hardware name: linux,dummy-virt (DT)
Stopper: 0x0 <- 0x0
pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : kprobe_breakpoint_handler+0x178/0x17c
lr : kprobe_breakpoint_handler+0x178/0x17c
sp : ffff8000080d3090
x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
Call trace:
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/166994751368.439920.3236636557520824664.stgit@devnote3
Signed-off-by: Will Deacon <will@kernel.org>

Revision tags: v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65

# 4b5e694e | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: track hyp stacks in unwinder's address space
Currently unwind_next_frame_record() has an optional callback to convert the address space of the FP. This is necessary for the NVHE unwinder, which tracks the stacks in the hyp VA space, but accesses the frame records in the kernel VA space.
This is a bit unfortunate since it clutters unwind_next_frame_record(), which will get in the way of future rework.
Instead, this patch changes the NVHE unwinder to track the stacks in the kernel's VA space and translate the FP prior to calling unwind_next_frame_record(). This removes the need for the translate_fp() callback, as all unwinders consistently track stacks in the native address space of the unwinder.
At the same time, this patch consolidates the generation of the stack addresses behind the stackinfo_get_*() helpers.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-10-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# 8df13730 | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: track all stack boundaries explicitly
Currently we call an on_accessible_stack() callback for each step of the unwinder, requiring redundant work to be performed in the core of the unwind loop (e.g. disabling preemption around accesses to per-cpu variables containing stack boundaries). To prevent unwind loops which go through a stack multiple times, we have to track the set of unwound stacks, requiring a stack_type enum which needs to cater for all the stacks of all possible callees. To prevent loops within a stack, we must track the prior FP values.
This patch reworks the unwinder to minimize the work in the core of the unwinder, and to remove the need for the stack_type enum. The set of accessible stacks (and their boundaries) are determined at the start of the unwind, and the current stack is tracked during the unwind, with completed stacks removed from the set of accessible stacks. This makes the boundary checks more accurate (e.g. detecting overlapped frame records), and removes the need for separate tracking of the prior FP and visited stacks.
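Sketched as data, the reworked state carries the whole set of candidate stacks up front (field names are illustrative of the approach):

struct unwind_state {
	unsigned long fp;
	unsigned long pc;
	struct task_struct *task;

	struct stack_info stack;	/* stack currently being unwound */
	struct stack_info *stacks;	/* remaining accessible stacks */
	int nr_stacks;
};

/*
 * On transition to a new stack, the old entry is removed from
 * state->stacks, so a stack can never be unwound through twice and
 * unwind loops become impossible by construction.
 */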
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# d1f684e4 | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: rework stack boundary discovery
In subsequent patches we'll want to acquire the stack boundaries ahead-of-time, and we'll need to be able to acquire the relevant stack_info regardless of whether we have an object that happens to be on the stack.
This patch replaces the on_XXX_stack() helpers with stackinfo_get_XXX() helpers, with the caller being responsible for checking whether an object is on a relevant stack. For the moment this is moved into the on_accessible_stack() functions, making these slightly larger; subsequent patches will remove the on_accessible_stack() functions and simplify the logic.
The on_irq_stack() and on_task_stack() helpers are kept as these are used by IRQ entry sequences and stackleak respectively. As they're only used as predicates, the stack_info pointer parameter is removed in both cases.
As the on_accessible_stack() functions are always passed a non-NULL info pointer, these now update info unconditionally. When updating the type to STACK_TYPE_UNKNOWN, the low/high bounds are also modified, but as these will not be consumed this should have no adverse effect.
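For example, the task stack's boundary helper reduces to a pure query, with membership checks left to the caller. A sketch, assuming task_stack_page() and THREAD_SIZE:

static inline struct stack_info stackinfo_get_task(const struct task_struct *tsk)
{
	unsigned long low = (unsigned long)task_stack_page(tsk);
	unsigned long high = low + THREAD_SIZE;

	return (struct stack_info) {
		.low = low,
		.high = high,
	};
}

/* callers perform the membership check themselves */
static inline bool on_task_stack(const struct task_struct *tsk, unsigned long sp)
{
	struct stack_info info = stackinfo_get_task(tsk);

	return info.low <= sp && sp < info.high;
}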
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# 75758d51 | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: move SDEI stack helpers to stacktrace code
For clarity and ease of maintenance, it would be helpful for all the stack helpers to be in the same place.
Move the SDEI stack helpers into the stacktrace code where all the other stack helpers live.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# b532ab5f | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()
The unwind_next_common() function unwinds a single frame record. There are other unwind steps (e.g. unwinding through trampolines) which are handled in the regular kernel unwinder, and in future there may be other common unwind helpers.
Clarify the purpose of unwind_next_common() by renaming it to unwind_next_frame_record(). At the same time, add commentary, and delete the redundant comment at the top of asm/stacktrace/common.h.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# bc8d7521 | 01-Sep-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: simplify unwind_next_common()
Currently unwind_next_common() takes a pointer to a stack_info which is only ever used within unwind_next_common().
Make it a local variable and simplify callers.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220901130646.1316937-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Revision tags: v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58

# 4e00532f | 27-Jul-2022 | Marc Zyngier <maz@kernel.org>
KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
Having multiple versions of on_accessible_stack() (one per unwinder) makes it very hard to reason about what is used where due to the complexity of the various includes, the forward declarations, and the reliance on everything being 'inline'.
Instead, move the code back where it should be. Each unwinder implements:
- on_accessible_stack() as well as the helpers it depends on,
- unwind()/unwind_next(), as they pass on_accessible_stack as a parameter to unwind_next_common() (which is the only common code here)
This hardly results in any duplication, and makes it much easier to reason about the code.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20220727142906.1856759-4-maz@kernel.org

# f51e7146 | 26-Jul-2022 | Kalesh Singh <kaleshsingh@google.com>
arm64: stacktrace: Factor out common unwind()
Move unwind() to stacktrace/common.h, and as a result the kernel unwind_next() to asm/stacktrace.h. This allows reusing unwind() in the implementation of the nVHE HYP stack unwinder, later in the series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-6-kaleshsingh@google.com

# 5b1b0861 | 26-Jul-2022 | Kalesh Singh <kaleshsingh@google.com>
arm64: stacktrace: Handle frame pointer from different address spaces
The unwinder code is made reusable so that it can be used to unwind various types of stacks. One use case is unwinding the nVHE hyp stack from the host (EL1) in non-protected mode. This means that the unwinder must be able to translate HYP stack addresses to kernel addresses.
Add a callback (stack_trace_translate_fp_fn) to allow specifying the translation function.
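The shape of the callback, per this patch (returning false terminates the unwind when an FP cannot be translated; the nVHE implementation itself is elided):

/*
 * Translate a frame pointer from one address space to another, e.g.
 * from the nVHE hyp VA space to the kernel VA space.
 */
typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
					    enum stack_type type);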
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-5-kaleshsingh@google.com

# be63c647 | 26-Jul-2022 | Kalesh Singh <kaleshsingh@google.com>
arm64: stacktrace: Factor out unwind_next_common()
Move common unwind_next logic to stacktrace/common.h. This allows reusing the code in the implementation of the nVHE hypervisor stack unwinder, later in this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-4-kaleshsingh@google.com

# 6bf212c8 | 26-Jul-2022 | Kalesh Singh <kaleshsingh@google.com>
arm64: stacktrace: Add shared header for common stack unwinding code
In order to reuse the arm64 stack unwinding logic for the nVHE hypervisor stack, move the common code to a shared header (arch/arm64/include/asm/stacktrace/common.h).
The nVHE hypervisor cannot safely link against kernel code, so we make use of the shared header to avoid duplicated logic later in this series.
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220726073750.3219117-2-kaleshsingh@google.com

Revision tags: v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52, v5.15.51, v5.15.50, v5.15.49

# 82a592c1 | 17-Jun-2022 | Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
arm64: Copy the task argument to unwind_state
Copy the task argument passed to arch_stack_walk() to unwind_state so that it can be passed to unwind functions via unwind_state rather than as a separate argument. The task is a fundamental part of the unwind state.
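A sketch of the change (abridged):

struct unwind_state {
	unsigned long fp;
	unsigned long pc;
	/* ... */
	struct task_struct *task;	/* the task being unwound */
};

static inline void unwind_init(struct unwind_state *state,
			       struct task_struct *task)
{
	state->task = task;
	/* ... remaining initialisation ... */
}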
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220617180219.20352-3-madvenka@linux.microsoft.com
Signed-off-by: Will Deacon <will@kernel.org>

# a019d8a2 | 17-Jun-2022 | Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
arm64: Split unwind_init()
unwind_init() is currently a single function that initializes all of the unwind state. Split it into the following functions and call them appropriately:
- unwind_init_from_regs() - initialize from regs passed by caller.
- unwind_init_from_caller() - initialize for the current task from the caller of arch_stack_walk().
- unwind_init_from_task() - initialize from the saved state of a task other than the current task. In this case, the other task must not be running.
This is done for two reasons:
- the different ways of initializing are clear
- specialized code can be added to each initializer in the future.
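A sketch of the resulting initializers (bodies abridged; thread_saved_fp()/thread_saved_pc() read the blocked task's saved context):

static inline void unwind_init_common(struct unwind_state *state,
				      struct task_struct *task)
{
	state->task = task;
	/* ... common bookkeeping ... */
}

static inline void unwind_init_from_regs(struct unwind_state *state,
					 struct pt_regs *regs)
{
	unwind_init_common(state, current);
	state->fp = regs->regs[29];
	state->pc = regs->pc;
}

static __always_inline void unwind_init_from_caller(struct unwind_state *state)
{
	unwind_init_common(state, current);
	state->fp = (unsigned long)__builtin_frame_address(1);
	state->pc = (unsigned long)__builtin_return_address(0);
}

static inline void unwind_init_from_task(struct unwind_state *state,
					 struct task_struct *task)
{
	unwind_init_common(state, task);
	state->fp = thread_saved_fp(task);
	state->pc = thread_saved_pc(task);
}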
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220617180219.20352-2-madvenka@linux.microsoft.com
Signed-off-by: Will Deacon <will@kernel.org>

Revision tags: v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42

# 446297b2 | 23-May-2022 | Andrey Konovalov <andreyknvl@google.com>
arm64: stacktrace: use non-atomic __set_bit
Use the non-atomic version of set_bit() in arch/arm64/kernel/stacktrace.c, as there are no concurrent accesses to frame->prev_type.
This speeds up stack trace collection and improves the boot time of Generic KASAN by 2-5%.
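The change itself is a one-liner; stacks_done is private to the unwinder, so the atomic read-modify-write of set_bit() buys nothing here:

/* before: atomic RMW, needlessly serialising on arm64 */
set_bit(frame->prev_type, frame->stacks_done);

/* after: plain load/store variant, same result for unshared data */
__set_bit(frame->prev_type, frame->stacks_done);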
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Link: https://lore.kernel.org/r/23dfa36d1cc91e4a1059945b7834eac22fb9854d.1653317461.git.andreyknvl@google.com
Signed-off-by: Will Deacon <will@kernel.org>

# 802b9111 | 23-May-2022 | Andrey Konovalov <andreyknvl@google.com>
arm64: kasan: do not instrument stacktrace.c
Disable KASAN instrumentation of arch/arm64/kernel/stacktrace.c.
This speeds up Generic KASAN by 5-20%.
As a side-effect, KASAN is now unable to detect bugs in the stack trace collection code. This is taken as an acceptable downside.
Also replace READ_ONCE_NOCHECK() with READ_ONCE() in stacktrace.c. As the file is now not instrumented, there is no need to use the NOCHECK version of READ_ONCE().
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Link: https://lore.kernel.org/r/c4c944a2a905e949760fbeb29258185087171708.1653317461.git.andreyknvl@google.com
Signed-off-by: Will Deacon <will@kernel.org>

Revision tags: v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34

# bd5552bc | 13-Apr-2022 | Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
arm64: stacktrace: align with common naming
For historical reasons, the naming of parameters and their types in the arm64 stacktrace code differs from that used in generic code and other architectures, even though the types are equivalent.
For consistency and clarity, use the generic names.
There should be no functional change as a result of this patch.
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com> for the series.
Link: https://lore.kernel.org/r/20220413145910.3060139-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# e9d75a0b | 13-Apr-2022 | Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
arm64: stacktrace: rename stackframe to unwind_state
Rename "struct stackframe" to "struct unwind_state" for consistency and better naming. Accordingly, rename variable/argument "frame" to "state".
arm64: stacktrace: rename stackframe to unwind_state
Rename "struct stackframe" to "struct unwind_state" for consistency and better naming. Accordingly, rename variable/argument "frame" to "state".
There should be no functional change as a result of this patch.
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com> for the series.
Link: https://lore.kernel.org/r/20220413145910.3060139-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# c797bd45 | 13-Apr-2022 | Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
arm64: stacktrace: rename unwinder functions
Rename unwinder functions for consistency and better naming.
- Rename start_backtrace() to unwind_init().
- Rename unwind_frame() to unwind_next().
- Rename walk_stackframe() to unwind().
There should be no functional change as a result of this patch.
Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com> for the series.
Link: https://lore.kernel.org/r/20220413145910.3060139-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

# 96bb1530 | 13-Apr-2022 | Mark Rutland <mark.rutland@arm.com>
arm64: stacktrace: make struct stackframe private to stacktrace.c
Now that arm64 uses arch_stack_walk() consistently, struct stackframe is only used within stacktrace.c. To make it easier to read and maintain this code, it would be nicer if the definition were there too.
Move the definition into stacktrace.c.
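For reference, a sketch of the structure as it stood at roughly this point in the series (fields abridged):

struct stackframe {
	unsigned long fp;
	unsigned long pc;
	DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
	unsigned long prev_fp;
	enum stack_type prev_type;
#ifdef CONFIG_KRETPROBES
	struct llist_node *kr_cur;	/* kretprobe return-address cursor */
#endif
};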
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kalesh Singh <kaleshsingh@google.com> for the series.
Link: https://lore.kernel.org/r/20220413145910.3060139-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>