History log of /openbmc/linux/arch/arm/include/asm/ptrace.h (Results 1 – 25 of 83)
Revision    Date    Author    Comments
Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3, v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46, v6.1.45, v6.1.44, v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39, v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32
# ae1f8d79 02-Jun-2023 Arnd Bergmann <arnd@arndb.de>

ARM: 9304/1: add prototype for function called only from asm

When building with 'make W=1', the compiler warns about any function
definition that does not come with a prototype in a header, to ensure
it matches what the caller expects.

This includes functions that are only ever called from assembly
code and don't technically need a declaration:

arch/arm/kernel/ftrace.c:227:6: error: no previous prototype for 'prepare_ftrace_return'
arch/arm/kernel/ptrace.c:850:16: error: no previous prototype for 'syscall_trace_enter'
arch/arm/kernel/ptrace.c:878:17: error: no previous prototype for 'syscall_trace_exit'
arch/arm/kernel/signal.c:601:1: error: no previous prototype for 'do_work_pending'
arch/arm/kernel/signal.c:672:17: error: no previous prototype for 'do_rseq_syscall'
arch/arm/kernel/suspend.c:75:6: error: no previous prototype for '__cpu_suspend_save'
arch/arm/kernel/traps.c:451:17: error: no previous prototype for 'do_undefinstr'
arch/arm/kernel/traps.c:516:39: error: no previous prototype for 'handle_fiq_as_nmi'
arch/arm/kernel/traps.c:535:17: error: no previous prototype for 'bad_mode'
arch/arm/kernel/traps.c:608:16: error: no previous prototype for 'arm_syscall'
arch/arm/kernel/traps.c:734:1: error: no previous prototype for 'baddataabort'
arch/arm/kernel/traps.c:774:17: error: no previous prototype for '__div0'
arch/arm/kernel/traps.c:97:6: error: no previous prototype for 'dump_backtrace_stm'
arch/arm/kernel/unwind.c:40:6: error: no previous prototype for '__aeabi_unwind_cpp_pr0'
arch/arm/kernel/unwind.c:45:6: error: no previous prototype for '__aeabi_unwind_cpp_pr1'
arch/arm/kernel/unwind.c:50:6: error: no previous prototype for '__aeabi_unwind_cpp_pr2'
arch/arm/mm/fault.c:554:1: error: no previous prototype for 'do_DataAbort'
arch/arm/mm/fault.c:584:1: error: no previous prototype for 'do_PrefetchAbort'
arch/arm/mm/proc-v7-bugs.c:280:6: error: no previous prototype for 'cpu_v7_ca8_ibe'
arch/arm/mm/proc-v7-bugs.c:293:6: error: no previous prototype for 'cpu_v7_bugs_init'
arch/arm/vdso/vgettimeofday.c:36:6: error: no previous prototype for '__aeabi_unwind_cpp_pr0'
arch/arm/vdso/vgettimeofday.c:40:6: error: no previous prototype for '__aeabi_unwind_cpp_pr1'
arch/arm/vdso/vgettimeofday.c:44:6: error: no previous prototype for '__aeabi_unwind_cpp_pr2'
arch/arm/vfp/vfpmodule.c:323:6: error: no previous prototype for 'VFP_bounce'

Add the prototypes anyway, to allow enabling this warning by default in
the future.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
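
As an illustration of the pattern (not the exact hunk from this commit), a
definition that is only reached from assembly just needs a declaration in a
header that the defining file includes; handle_fiq_as_nmi from the list above
is used here with a plausible signature:

    /* asm/traps.h (illustrative): declare the asm-called entry point */
    asmlinkage void handle_fiq_as_nmi(struct pt_regs *regs);

    /* kernel/traps.c: the definition now has a previous prototype */
    asmlinkage void handle_fiq_as_nmi(struct pt_regs *regs)
    {
            /* reached only from the FIQ entry assembly */
    }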

Revision tags: v6.1.31, v6.1.30, v6.1.29, v6.1.28, v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23, v6.1.22, v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17, v6.1.16, v6.1.15, v6.1.14, v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10, v6.1.9, v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1, v6.0.12
# aaa4dd1b 03-Dec-2022 Wang Kefeng <wangkefeng.wang@huawei.com>

ARM: 9279/1: support function error injection

This enables HAVE_FUNCTION_ERROR_INJECTION by adding necessary
regs_set_return_value() and override_function_with_return().

Simply tested according to Documentation/fault-injection/fault-injection.rst.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
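
As a rough sketch of what the two hooks amount to on 32-bit ARM (simplified,
not the literal definitions added by the commit): the return value lives in
r0 per the AAPCS, and overriding a function means pointing the PC back at the
saved link register:

    /* Illustrative, simplified helpers for function error injection */
    static inline void regs_set_return_value(struct pt_regs *regs,
                                             unsigned long rc)
    {
            regs->ARM_r0 = rc;                      /* AAPCS return register */
    }

    static inline void override_function_with_return(struct pt_regs *regs)
    {
            instruction_pointer_set(regs, regs->ARM_lr);    /* "return" at once */
    }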

Revision tags: v6.0.11, v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0, v5.15.71, v5.15.70, v5.15.69, v5.15.68, v5.15.67, v5.15.66, v5.15.65, v5.15.64, v5.15.63, v5.15.62, v5.15.61, v5.15.60, v5.15.59, v5.19, v5.15.58, v5.15.57, v5.15.56, v5.15.55, v5.15.54, v5.15.53, v5.15.52
# e5c46fde 30-Jun-2022 Ard Biesheuvel <ardb@kernel.org>

ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction

After emulating a misaligned load or store issued in Thumb mode, we have
to advance the IT state by hand, or it will get out of sync with the
actual instruction stream, which means we'll end up applying the wrong
condition code to subsequent instructions. This might corrupt the
program state rather catastrophically.

So borrow the it_advance() helper from the probing code, and use it on
CPSR if the emulated instruction is Thumb.

Cc: <stable@vger.kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
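
For background, ITSTATE packs a base condition and a mask covering up to four
following instructions; advancing it follows the ARM ARM's ITAdvance()
pseudocode. The sketch below operates on the raw 8-bit ITSTATE value rather
than on CPSR, so it illustrates the rule, not the kernel's it_advance()
helper:

    /* itstate: IT[7:5] hold the base condition, IT[4:0] the remaining
     * per-instruction condition/mask bits of the block.
     */
    static unsigned int itstate_advance(unsigned int itstate)
    {
            if ((itstate & 0x07) == 0)
                    return 0;               /* last conditional insn: block ends */
            /* keep bits [7:5], shift the low five bits left by one */
            return (itstate & 0xe0) | ((itstate << 1) & 0x1f);
    }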

# 2a098504 30-Jun-2022 Ard Biesheuvel <ardb@kernel.org>

ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction

commit e5c46fde75e43c15a29b40e5fc5641727f97ae47 upstream.

After emulating a misaligned load or store issued in Thumb mode, we have
to advance the IT state by hand, or it will get out of sync with the
actual instruction stream, which means we'll end up applying the wrong
condition code to subsequent instructions. This might corrupt the
program state rather catastrophically.

So borrow the it_advance() helper from the probing code, and use it on
CPSR if the emulated instruction is Thumb.

Cc: <stable@vger.kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Revision tags: v5.15.51, v5.15.50, v5.15.49, v5.15.48, v5.15.47, v5.15.46, v5.15.45, v5.15.44, v5.15.43, v5.15.42, v5.18, v5.15.41, v5.15.40, v5.15.39, v5.15.38, v5.15.37, v5.15.36, v5.15.35, v5.15.34, v5.15.33, v5.15.32, v5.15.31, v5.17, v5.15.30, v5.15.29, v5.15.28, v5.15.27, v5.15.26, v5.15.25, v5.15.24, v5.15.23, v5.15.22, v5.15.21, v5.15.20, v5.15.19, v5.15.18, v5.15.17, v5.4.173, v5.15.16, v5.15.15, v5.16, v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60
# 8ac6f5d7 11-Aug-2021 Arnd Bergmann <arnd@arndb.de>

ARM: 9113/1: uaccess: remove set_fs() implementation

There are no remaining callers of set_fs(), so just remove it
along with all associated code that operates on
thread_info->addr_limit.

There are still further optimizations that can be done:

- In get_user(), the address check could be moved entirely
into the out of line code, rather than passing a constant
as an argument,

- I assume the DACR handling can be simplified as we now
only change it during user access when CONFIG_CPU_SW_DOMAIN_PAN
is set, but not during set_fs().

Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
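
For context, this retires the long-standing idiom in which code temporarily
widened the access limit around a uaccess call; something along these lines
(illustrative, using the old API) is what can no longer be written:

    /* Historical pattern made obsolete by removing set_fs() (sketch only) */
    static int read_with_kernel_ds(void *dst, const void __user *src, size_t len)
    {
            mm_segment_t old_fs = get_fs();
            int ret;

            set_fs(KERNEL_DS);              /* let uaccess accept kernel addresses */
            ret = copy_from_user(dst, src, len) ? -EFAULT : 0;
            set_fs(old_fs);                 /* restore the caller's limit */
            return ret;
    }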

Revision tags: v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10, v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12, v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41, v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24, v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14, v5.4.13, v5.4.12, v5.4.11, v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13, v5.3.12, v5.3.11, v5.3.10, v5.3.9, v5.3.8, v5.3.7, v5.3.6, v5.3.5, v5.3.4, v5.3.3, v5.3.2, v5.3.1, v5.3, v5.2.14, v5.3-rc8, v5.2.13, v5.2.12, v5.2.11, v5.2.10, v5.2.9, v5.2.8, v5.2.7, v5.2.6, v5.2.5, v5.2.4, v5.2.3, v5.2.2, v5.2.1, v5.2, v5.1.16, v5.1.15, v5.1.14, v5.1.13, v5.1.12, v5.1.11, v5.1.10, v5.1.9, v5.1.8
# d2912cb1 04-Jun-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500

Based on 2 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
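
Concretely, each converted file now carries a single machine-readable tag in
place of the boilerplate block, e.g.:

    /* SPDX-License-Identifier: GPL-2.0-only */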

Revision tags: v5.1.7, v5.1.6, v5.1.5, v5.1.4, v5.1.3, v5.1.2, v5.1.1, v5.0.14, v5.1, v5.0.13, v5.0.12, v5.0.11, v5.0.10, v5.0.9, v5.0.8, v5.0.7, v5.0.6, v5.0.5, v5.0.4, v5.0.3, v4.19.29, v5.0.2, v4.19.28, v5.0.1, v4.19.27, v5.0, v4.19.26, v4.19.25, v4.19.24, v4.19.23, v4.19.22, v4.19.21, v4.19.20, v4.19.19, v4.19.18, v4.19.17, v4.19.16, v4.19.15, v4.19.14, v4.19.13, v4.19.12, v4.19.11, v4.19.10, v4.19.9, v4.19.8, v4.19.7, v4.19.6, v4.19.5, v4.19.4, v4.18.20, v4.19.3, v4.18.19, v4.19.2, v4.18.18, v4.18.17, v4.19.1, v4.19, v4.18.16, v4.18.15, v4.18.14, v4.18.13, v4.18.12, v4.18.11, v4.18.10, v4.18.9, v4.18.7, v4.18.6, v4.18.5, v4.17.18, v4.18.4, v4.18.3, v4.17.17, v4.18.2, v4.17.16, v4.17.15, v4.18.1, v4.18, v4.17.14, v4.17.13, v4.17.12, v4.17.11, v4.17.10, v4.17.9, v4.17.8, v4.17.7, v4.17.6, v4.17.5, v4.17.4, v4.17.3, v4.17.2, v4.17.1, v4.17, v4.16, v4.15, v4.13.16, v4.14
# d1515582 24-Oct-2017 Will Deacon <will.deacon@arm.com>

linux/compiler.h: Split into compiler.h and compiler_types.h

linux/compiler.h is included indirectly by linux/types.h via
uapi/linux/types.h -> uapi/linux/posix_types.h -> linux/stddef.h
-> uapi/linux/stddef.h and is needed to provide a proper definition of
offsetof.

Unfortunately, compiler.h requires a definition of
smp_read_barrier_depends() for defining lockless_dereference() and soon
for defining READ_ONCE(), which means that all
users of READ_ONCE() will need to include asm/barrier.h to avoid splats
such as:

In file included from include/uapi/linux/stddef.h:1:0,
from include/linux/stddef.h:4,
from arch/h8300/kernel/asm-offsets.c:11:
include/linux/list.h: In function 'list_empty':
>> include/linux/compiler.h:343:2: error: implicit declaration of function 'smp_read_barrier_depends' [-Werror=implicit-function-declaration]
smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
^

A better alternative is to include asm/barrier.h in linux/compiler.h,
but this requires a type definition for "bool" on some architectures
(e.g. x86), which is defined later by linux/types.h. Type "bool" is also
used directly in linux/compiler.h, so the whole thing is pretty fragile.

This patch splits compiler.h in two: compiler_types.h contains type
annotations, definitions and the compiler-specific parts, whereas
compiler.h #includes compiler_types.h and additionally defines macros
such as {READ,WRITE,ACCESS}_ONCE().

uapi/linux/stddef.h and linux/linkage.h are then moved over to include
linux/compiler_types.h, which fixes the build for h8 and blackfin.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
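
The same splitting pattern, reduced to a toy example with hypothetical header
names: type-level material moves into a *_types.h header with no heavyweight
includes, and the original header layers the macros that need other headers on
top of it:

    /* widget_types.h (hypothetical): types and annotations only */
    #ifndef WIDGET_TYPES_H
    #define WIDGET_TYPES_H
    struct widget { int id; };
    #define __widget_aligned __attribute__((aligned(8)))
    #endif

    /* widget.h (hypothetical): includes the types, then the macro machinery */
    #ifndef WIDGET_H
    #define WIDGET_H
    #include "widget_types.h"
    #include "widget_barrier.h"         /* heavy dependency kept out of the types header */
    #define WIDGET_READ_ONCE(w) ({ widget_rmb(); (w); })
    #endif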

Revision tags: v4.13.5, v4.13, v4.12, v4.10.17, v4.10.16, v4.10.15, v4.10.14, v4.10.13, v4.10.12, v4.10.11, v4.10.10, v4.10.9, v4.10.8, v4.10.7, v4.10.6, v4.10.5, v4.10.4, v4.10.3, v4.10.2, v4.10.1, v4.10, v4.9, openbmc-4.4-20161121-1, v4.4.33, v4.4.32, v4.4.31, v4.4.30, v4.4.29, v4.4.28, v4.4.27, v4.7.10, openbmc-4.4-20161021-1, v4.7.9, v4.4.26, v4.7.8, v4.4.25, v4.4.24, v4.7.7, v4.8, v4.4.23, v4.7.6, v4.7.5, v4.4.22, v4.4.21, v4.7.4, v4.7.3, v4.4.20, v4.7.2, v4.4.19, openbmc-4.4-20160819-1, v4.7.1, v4.4.18, v4.4.17, openbmc-4.4-20160804-1, v4.4.16
# fd6380b7 27-Jul-2016 Catalin Marinas <catalin.marinas@arm.com>

arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype

Commit 0a8ea52c3eb1 ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API
feature") inadvertently removed the arch/arm prototype instead of the
arm64 one introduced by the original patch. There should not be any
bisection issues since this function is not called from anywhere else
(it could as well be removed from arch/arm at some point).

Fixes: 0a8ea52c3eb1 ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Revision tags: v4.7, openbmc-4.4-20160722-1, openbmc-20160722-1, openbmc-20160713-1, v4.4.15, v4.6.4
# 0a8ea52c 08-Jul-2016 David A. Long <dave.long@linaro.org>

arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature

Add HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting
functions and defines.

Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: Remove unused functions]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
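
For a sense of what the feature provides, a probe or trace handler can look
registers up by name and peek at the kernel stack through the generic
accessors; a sketch (error handling trimmed, assuming the usual
regs_query_register_offset()/regs_get_register() helpers):

    #include <linux/ptrace.h>
    #include <linux/printk.h>

    /* Read the first argument register and the top kernel stack slot (sketch) */
    static void dump_probe_state(struct pt_regs *regs)
    {
            int offs = regs_query_register_offset("x0");
            unsigned long arg0 = regs_get_register(regs, offs);
            unsigned long slot0 = regs_get_kernel_stack_nth(regs, 0);

            pr_info("x0=%lx stack[0]=%lx\n", arg0, slot0);
    }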

Revision tags: v4.6.3, v4.4.14, v4.6.2, v4.4.13, openbmc-20160606-1, v4.6.1, v4.4.12, openbmc-20160521-1, v4.4.11, openbmc-20160518-1, v4.6
# e6978e4b 13-May-2016 Russell King <rmk+kernel@armlinux.org.uk>


When we enter an exception, the current address limit should not apply
to the exception context: if the exception context wishes to access
kernel space via the user accessors (eg, perf code), it must explicitly
request such access.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>

# 5fa9da50 13-May-2016 Russell King <rmk+kernel@armlinux.org.uk>


Get rid of the horrible "*(unsigned int *)(regs + 1)" to get at the
parent context domain access register value, instead using the newly
introduced svc_pt_regs structure.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
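
The shape of the change, as a simplified sketch rather than the exact
definition: SVC-mode exception entries push a wrapper around pt_regs so the
extra per-context data gets a named field instead of living just past the end
of the struct:

    /* Simplified sketch of a wrapper for SVC-mode exception frames */
    struct svc_pt_regs {
            struct pt_regs  regs;
            u32             dacr;       /* parent context's domain access register */
    };

    #define to_svc_pt_regs(r) container_of(r, struct svc_pt_regs, regs)

    /* usage: to_svc_pt_regs(regs)->dacr replaces *(unsigned int *)(regs + 1) */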
