Revision tags: v6.6.25, v6.6.24, v6.6.23, v6.6.16, v6.6.15, v6.6.14, v6.6.13, v6.6.12, v6.6.11, v6.6.10, v6.6.9, v6.6.8, v6.6.7, v6.6.6, v6.6.5, v6.6.4, v6.6.3, v6.6.2, v6.5.11, v6.6.1, v6.5.10, v6.6, v6.5.9, v6.5.8, v6.5.7, v6.5.6, v6.5.5, v6.5.4, v6.5.3 |
|
#
2d71340f |
| 08-Sep-2023 |
David Howells <dhowells@redhat.com> |
iov_iter: Kunit tests for copying to/from an iterator
Add some kunit tests for page extraction for ITER_BVEC, ITER_KVEC and ITER_XARRAY type iterators. ITER_UBUF and ITER_IOVEC aren't dealt with as they require userspace VM interaction. ITER_DISCARD isn't dealt with either as that does nothing.
Signed-off-by: David Howells <dhowells@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Christian Brauner <brauner@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: David Hildenbrand <david@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
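To give a feel for what these iterator tests look like, here is a minimal KUnit sketch for copying into an ITER_KVEC iterator. This is illustrative only; the test name, buffer size, and pattern byte are invented, not the commit's actual code:

    #include <kunit/test.h>
    #include <linux/uio.h>

    static void iov_kunit_copy_to_kvec(struct kunit *test)
    {
        struct iov_iter iter;
        struct kvec kvec[1];
        u8 *src, *dst;
        size_t copied;

        src = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
        dst = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, src);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dst);
        memset(src, 0xa5, PAGE_SIZE);

        /* Describe the destination buffer with a one-segment ITER_KVEC. */
        kvec[0].iov_base = dst;
        kvec[0].iov_len = PAGE_SIZE;
        iov_iter_kvec(&iter, ITER_DEST, kvec, 1, PAGE_SIZE);

        /* Copy into the iterator, then verify the bytes arrived intact. */
        copied = copy_to_iter(src, PAGE_SIZE, &iter);
        KUNIT_EXPECT_EQ(test, copied, (size_t)PAGE_SIZE);
        KUNIT_EXPECT_EQ(test, memcmp(src, dst, PAGE_SIZE), 0);
    }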
|
Revision tags: v6.5.2, v6.1.51, v6.5.1, v6.1.50, v6.5, v6.1.49, v6.1.48, v6.1.46 |
|
#
aebc7b0d |
| 11-Aug-2023 |
Marco Elver <elver@google.com> |
list: Introduce CONFIG_LIST_HARDENED
Numerous production kernel configs (see [1, 2]) are choosing to enable CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened configs [3]. The motivation behind this is that the option can be used as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025 are mitigated by the option [4]).
The feature was never designed with performance in mind, yet common list manipulation happens in hot paths all over the kernel.
Introduce CONFIG_LIST_HARDENED, which performs list pointer checking inline, and only upon list corruption calls the reporting slow path.
To generate optimal machine code with CONFIG_LIST_HARDENED:
1. Elide checking for pointer values which upon dereference would result in an immediate access fault (i.e. minimal hardening checks). The trade-off is lower-quality error reports.
2. Use the __preserve_most function attribute (available with Clang, but not yet with GCC) to minimize the code footprint for calling the reporting slow path. As a result, function size of callers is reduced by avoiding saving registers before calling the rarely called reporting slow path.
Note that all TUs in lib/Makefile already disable function tracing, including list_debug.c, and __preserve_most's implied notrace has no effect in this case.
3. Because the inline checks are a subset of the full set of checks in __list_*_valid_or_report(), always return false if the inline checks failed. This avoids redundant compare and conditional branch right after return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove some condition to always be true, it can completely elide some checks.
Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the Kconfig variables are changed to reflect that: DEBUG_LIST selects LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on DEBUG_LIST.
Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with "preserve_most") shows throughput improvements, in my case of ~7% on average (up to 20-30% on some test cases).
Link: https://r.android.com/1266735 [1] Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2] Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3] Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4] Signed-off-by: Marco Elver <elver@google.com> Link: https://lore.kernel.org/r/20230811151847.1594958-3-elver@google.com Signed-off-by: Kees Cook <keescook@chromium.org>
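The inline fast-path/slow-path split described in points 1-3 comes down to roughly this shape. This is a simplified sketch of the pattern, not the verbatim include/linux/list.h code:

    /* Rarely called reporting slow path; __preserve_most keeps the call cheap. */
    extern bool __preserve_most
    __list_add_valid_or_report(struct list_head *new, struct list_head *prev,
                               struct list_head *next);

    static __always_inline bool __list_add_valid(struct list_head *new,
                                                 struct list_head *prev,
                                                 struct list_head *next)
    {
        if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
            /*
             * Minimal inline checks: only pointers whose corruption
             * would cause an immediate access fault on dereference.
             */
            if (likely(next->prev == prev && prev->next == next &&
                       new != prev && new != next))
                return true;
            /* Report, and always return false so the caller's branch folds. */
            __list_add_valid_or_report(new, prev, next);
            return false;
        }
        return __list_add_valid_or_report(new, prev, next);
    }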
|
Revision tags: v6.1.45, v6.1.44 |
|
#
2a598d0b |
| 04-Aug-2023 |
Herbert Xu <herbert@gondor.apana.org.au> |
crypto: lib - Move mpi into lib/crypto
As lib/mpi is mostly used by crypto code, move it under lib/crypto so that patches touching it get directed to the right mailing list.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Revision tags: v6.1.43, v6.1.42, v6.1.41, v6.1.40, v6.1.39 |
|
#
2356d198 |
| 17-Jul-2023 |
Yury Norov <yury.norov@gmail.com> |
lib/bitmap: workaround const_eval test build failure
When building with Clang, and when KASAN and GCOV_PROFILE_ALL are both enabled, the test fails to build [1]:
>> lib/test_bitmap.c:920:2: error: call to '__compiletime_assert_239' declared with 'error' attribute: BUILD_BUG_ON failed: !__builtin_constant_p(res)
           BUILD_BUG_ON(!__builtin_constant_p(res));
           ^
   include/linux/build_bug.h:50:2: note: expanded from macro 'BUILD_BUG_ON'
           BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
           ^
   include/linux/build_bug.h:39:37: note: expanded from macro 'BUILD_BUG_ON_MSG'
   #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                       ^
   include/linux/compiler_types.h:352:2: note: expanded from macro 'compiletime_assert'
           _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
           ^
   include/linux/compiler_types.h:340:2: note: expanded from macro '_compiletime_assert'
           __compiletime_assert(condition, msg, prefix, suffix)
           ^
   include/linux/compiler_types.h:333:4: note: expanded from macro '__compiletime_assert'
                   prefix ## suffix();                             \
                   ^
   <scratch space>:185:1: note: expanded from here
   __compiletime_assert_239
Originally the failure was attributed to s390, but that now looks wrong. The issue is not in the bitmap code itself, but it breaks the build for this configuration.
Disabling the const_eval test under that config could hide other bugs. Instead, work around it by disabling GCOV for test_bitmap until the compiler is fixed.
[1] https://github.com/ClangBuiltLinux/linux/issues/1874
Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202307171254.yFcH97ej-lkp@intel.com/ Fixes: dc34d5036692 ("lib: test_bitmap: add compile-time optimization/evaluations assertions") Co-developed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Yury Norov <yury.norov@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
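For reference, the kind of compile-time evaluation the const_eval test asserts looks roughly like this (a sketch, not the exact test body); GCOV instrumentation keeps Clang from const-folding 'res', so the BUILD_BUG_ON fires:

    static void const_eval_sketch(void)
    {
        /* A compile-time-constant bitmap... */
        const unsigned long mask = BIT(6) | BIT(8);
        /* ...whose weight the compiler is expected to const-fold. */
        unsigned int res = bitmap_weight(&mask, BITS_PER_LONG);

        BUILD_BUG_ON(!__builtin_constant_p(res));
    }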
|
Revision tags: v6.1.38, v6.1.37, v6.1.36, v6.4, v6.1.35, v6.1.34, v6.1.33, v6.1.32, v6.1.31, v6.1.30, v6.1.29 |
|
#
e9aae170 |
| 16-May-2023 |
Kefeng Wang <wangkefeng.wang@huawei.com> |
mm: page_alloc: collect mem statistic into show_mem.c
Let's move show_mem.c from lib to mm, as it belongs to the memory subsystem; also split some memory-statistics helpers out of page_alloc.c into show_mem.c, and clean up some unneeded includes.
There is no functional change.
Link: https://lkml.kernel.org/r/20230516063821.121844-5-wangkefeng.wang@huawei.com Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: David Hildenbrand <david@redhat.com> Cc: "Huang, Ying" <ying.huang@intel.com> Cc: Iurii Zaikin <yzaikin@google.com> Cc: Kees Cook <keescook@chromium.org> Cc: Len Brown <len.brown@intel.com> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Mike Rapoport (IBM) <rppt@kernel.org> Cc: Oscar Salvador <osalvador@suse.de> Cc: Pavel Machek <pavel@ucw.cz> Cc: Rafael J. Wysocki <rafael@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
Revision tags: v6.1.28 |
|
#
688eb819 |
| 10-May-2023 |
Noah Goldstein <goldstein.w.n@gmail.com> |
x86/csum: Improve performance of `csum_partial`
1) Add a special case for len == 40, as that is the hottest value. This nets a ~8-9% latency improvement and a ~30% throughput improvement in the len == 40 case.
2) Use multiple accumulators in the 64-byte loop. This dramatically improves ILP and results in up to a 40% latency/throughput improvement (better for more iterations).
Results from benchmarking on Icelake. Times measured with rdtsc():

  len  lat_new  lat_old      r  tput_new  tput_old      r
    8     3.58     3.47  1.032      3.58      3.51  1.021
   16     4.14     4.02  1.028      3.96      3.78  1.046
   24     4.99     5.03  0.992      4.23      4.03  1.050
   32     5.09     5.08  1.001      4.68      4.47  1.048
   40     5.57     6.08  0.916      3.05      4.43  0.690
   48     6.65     6.63  1.003      4.97      4.69  1.059
   56     7.74     7.72  1.003      5.22      4.95  1.055
   64     6.65     7.22  0.921      6.38      6.42  0.994
   96     9.43     9.96  0.946      7.46      7.54  0.990
  128     9.39    12.15  0.773      8.90      8.79  1.012
  200    12.65    18.08  0.699     11.63     11.60  1.002
  272    15.82    23.37  0.677     14.43     14.35  1.005
  440    24.12    36.43  0.662     21.57     22.69  0.951
  952    46.20    74.01  0.624     42.98     53.12  0.809
 1024    47.12    78.24  0.602     46.36     58.83  0.788
 1552    72.01   117.30  0.614     71.92     96.78  0.743
 2048    93.07   153.25  0.607     93.28    137.20  0.680
 2600   114.73   194.30  0.590    114.28    179.32  0.637
 3608   156.34   268.41  0.582    154.97    254.02  0.610
 4096   175.01   304.03  0.576    175.89    292.08  0.602
There is no such thing as a free lunch, however, and the special case for len == 40 does add overhead to the len != 40 cases. This seems to amount to be ~5% throughput and slightly less in terms of latency.
Testing: Part of this change is a new kunit test. The tests check all alignment X length pairs in [0, 64) X [0, 512). There are three cases:
1) Precomputed random inputs/seed. The expected results were generated using the generic implementation (which is assumed to be non-buggy).
2) An input of all 1s. The goal of this test is to catch any case where a carry is missing.
3) An input that never carries. The goal of this test is to catch any case of incorrectly carrying.
More exhaustive tests that test all alignment X length pairs in [0, 8192) X [0, 8192] on random data are also available here: https://github.com/goldsteinn/csum-reproduction
The repository also has the code for reproducing the above benchmark numbers.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Link: https://lore.kernel.org/all/20230511011002.935690-1-goldstein.w.n%40gmail.com
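A portable C sketch of the multiple-accumulator idea from point 2 (the real implementation uses x86 adcq assembly, and end-around carry folding is omitted here): independent sums break the serial carry dependency chain, so the CPU can issue the additions in parallel.

    static u64 csum_64byte_loop_sketch(const u64 *p, unsigned int blocks)
    {
        u64 s0 = 0, s1 = 0, s2 = 0, s3 = 0;

        while (blocks--) {
            /* Four independent accumulators -> four parallel dep chains. */
            s0 += p[0]; s1 += p[2]; s2 += p[4]; s3 += p[6];
            s0 += p[1]; s1 += p[3]; s2 += p[5]; s3 += p[7];
            p += 8;
        }
        /* A real csum would also fold the carries out of each sum. */
        return s0 + s1 + s2 + s3;
    }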
|
Revision tags: v6.1.27, v6.1.26, v6.3, v6.1.25, v6.1.24, v6.1.23 |
|
#
3bf301e1 |
| 04-Apr-2023 |
Kees Cook <keescook@chromium.org> |
string: Add Kunit tests for strcat() family
Add tests to make sure the strcat() family of functions behave correctly.
Signed-off-by: Kees Cook <keescook@chromium.org>
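The checks are of this general shape (an illustrative sketch; the actual test names and strings may differ):

    #include <kunit/test.h>
    #include <linux/string.h>

    static void strcat_sketch_test(struct kunit *test)
    {
        char dest[8];

        strcpy(dest, "fo");
        /* strcat() must append and return the destination pointer. */
        KUNIT_EXPECT_PTR_EQ(test, strcat(dest, "rty"), dest);
        KUNIT_EXPECT_STREQ(test, dest, "forty");
    }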
|
Revision tags: v6.1.22 |
|
#
ee1ee6db |
| 23-Mar-2023 |
Thomas Gleixner <tglx@linutronix.de> |
atomics: Provide rcuref - scalable reference counting
atomic_t based reference counting, including refcount_t, uses atomic_inc_not_zero() for acquiring a reference. atomic_inc_not_zero() is implemented with an atomic_try_cmpxchg() loop. High contention of the reference count leads to retry loops and scales badly. There is nothing to improve in this implementation, as the semantics have to be preserved.
Provide rcuref as a scalable alternative solution which is suitable for RCU managed objects. Similar to refcount_t it comes with overflow and underflow detection and mitigation.
rcuref treats the underlying atomic_t as an unsigned integer and partitions this space into zones:
  0x00000000 - 0x7FFFFFFF  valid zone (1 .. (INT_MAX + 1) references)
  0x80000000 - 0xBFFFFFFF  saturation zone
  0xC0000000 - 0xFFFFFFFE  dead zone
  0xFFFFFFFF               no reference
rcuref_get() unconditionally increments the reference count with atomic_add_negative_relaxed(). rcuref_put() unconditionally decrements the reference count with atomic_add_negative_release().
This unconditional increment avoids the inc_not_zero() problem, but requires a more complex implementation on the put() side when the count drops from 0 to -1.
When this transition is detected, an attempt is made to mark the reference count dead, by setting it to the midpoint of the dead zone with a single atomic_cmpxchg_release() operation. This operation can fail due to a concurrent rcuref_get() elevating the reference count from -1 to 0 again.
If the unconditional increment in rcuref_get() hits a reference count which is marked dead (or saturated) it will detect it after the fact and bring back the reference count to the midpoint of the respective zone. The zones provide enough tolerance which makes it practically impossible to escape from a zone.
The racy implementation of rcuref_put() requires protecting rcuref_put() against a grace period ending, in order to prevent a subtle use-after-free. As RCU is the only mechanism which can protect against that, it is not possible to fully replace the atomic_inc_not_zero() based implementation of refcount_t with this scheme.
The final drop is slightly more expensive than the atomic_dec_return() counterpart, but that is not the case this is optimized for. The optimization is for the high-frequency get()/put() pairs and their scalability.
The performance of an uncontended rcuref_get()/put() pair where the put() is not dropping the last reference is still on par with the plain atomic operations, while at the same time providing overflow and underflow detection and mitigation.
The performance of rcuref compared to plain atomic_inc_not_zero() and atomic_dec_return() based reference counting under contention:
- Micro benchmark: All CPUs running an increment/decrement loop on an elevated reference count, which means the 0 to -1 transition never happens.
The performance gain depends on microarchitecture and the number of CPUs and has been observed in the range of 1.3X to 4.7X
- Conversion of dst_entry::__refcnt to rcuref and testing with the localhost memtier/memcached benchmark. That benchmark shows the reference count contention prominently.
The performance gain depends on microarchitecture and the number of CPUs and has been observed in the range of 1.1X to 2.6X over the previous fix for the false sharing issue vs. struct dst_entry::__refcnt.
When memtier is run over a real 1Gb network connection, there is a small gain on top of the false sharing fix. The two changes combined result in a 2%-5% total gain for that networked test.
Reported-by: Wangyang Guo <wangyang.guo@intel.com> Reported-by: Arjan Van De Ven <arjan.van.de.ven@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20230323102800.158429195@linutronix.de
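The fast paths described above come down to a single atomic add each; roughly (a simplified sketch with the slow-path helpers only declared; see lib/rcuref.c for the real code, whose structure differs in detail):

    typedef struct {
        atomic_t refcnt;
    } rcuref_t;

    extern bool rcuref_get_slowpath(rcuref_t *ref);
    extern bool rcuref_put_slowpath(rcuref_t *ref);

    static inline bool rcuref_get_sketch(rcuref_t *ref)
    {
        /* Unconditional increment: no cmpxchg retry loop under contention. */
        if (likely(!atomic_add_negative_relaxed(1, &ref->refcnt)))
            return true;
        /* Negative result: the count was in the saturation or dead zone. */
        return rcuref_get_slowpath(ref);
    }

    static inline bool rcuref_put_sketch(rcuref_t *ref)
    {
        /* Unconditional decrement; negative signals the 0 -> -1 transition. */
        if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
            return false;
        /* Try to mark the count dead; true means the object can be freed. */
        return rcuref_put_slowpath(ref);
    }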
|
Revision tags: v6.1.21, v6.1.20, v6.1.19, v6.1.18, v6.1.17 |
|
#
7ce93729 |
| 10-Mar-2023 |
Jason Baron <jbaron@akamai.com> |
dyndbg: cleanup dynamic usage in ib_srp.c
Currently, in dynamic_debug.h we only provide DEFINE_DYNAMIC_DEBUG_METADATA() and DYNAMIC_DEBUG_BRANCH() definitions if CONFIG_DYNAMIC_DEBUG_CORE is enabled. Thus, drivers such as infiniband srp (see: drivers/infiniband/ulp/srp/ib_srp.c) must provide their own definitions for !CONFIG_DYNAMIC_DEBUG_CORE.
Thus, let's move this !CONFIG_DYNAMIC_DEBUG_CORE case into dynamic_debug.h. However, the dynamic debug interfaces should really only be defined if CONFIG_DYNAMIC_DEBUG is set, or if CONFIG_DYNAMIC_DEBUG_CORE is set along with DYNAMIC_DEBUG_MODULE (see: Documentation/admin-guide/dynamic-debug-howto.rst). Thus, the undefined case becomes: !(CONFIG_DYNAMIC_DEBUG || (CONFIG_DYNAMIC_DEBUG_CORE && DYNAMIC_DEBUG_MODULE)). With those changes in place, we can remove the !CONFIG_DYNAMIC_DEBUG_CORE case from ib_srp.c.
This change was prompted by a build breakage in ib_srp.c stemming from the unconditional inclusion of dynamic_debug.h in module.h, due to commit 7deabd674988 ("dyndbg: use the module notifier callbacks"). In that case, if we have CONFIG_DYNAMIC_DEBUG_CORE=y and CONFIG_DYNAMIC_DEBUG=n, then the definitions for DEFINE_DYNAMIC_DEBUG_METADATA() and DYNAMIC_DEBUG_BRANCH() are defined once in ib_srp.c and then again in dynamic_debug.h. This had been working prior to the above referenced commit because dynamic_debug.h was only pulled into ib_srp.c conditionally, via printk.h, if CONFIG_DYNAMIC_DEBUG was set.
Also, the exported functions in lib/dynamic_debug.c itself may not have a prototype if CONFIG_DYNAMIC_DEBUG=n and CONFIG_DYNAMIC_DEBUG_CORE=y. This would trigger the -Wmissing-prototypes warning.
The exported functions are behind (include/linux/dynamic_debug.h):
#if defined(CONFIG_DYNAMIC_DEBUG) || \
    (defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
Thus, by adding -DDYNAMIC_DEBUG_MODULE to the lib/Makefile we can ensure that the exported functions have a prototype in all cases, since lib/dynamic_debug.c is built whenever CONFIG_DYNAMIC_DEBUG_CORE=y.
Fixes: 7deabd674988 ("dyndbg: use the module notifier callbacks") Reported-by: kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/oe-kbuild-all/202303071444.sIbZTDCy-lkp@intel.com/ Signed-off-by: Jason Baron <jbaron@akamai.com> [mcgrof: adjust commit log, and remove urldefense from URL] Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
|
Revision tags: v6.1.16, v6.1.15, v6.1.14 |
|
#
32ff6831 |
| 24-Feb-2023 |
David Gow <davidgow@google.com> |
kunit: Fix 'hooks.o' build by recursing into kunit
KUnit's 'hooks.o' file needs to be built in whenever KUnit is enabled (even if CONFIG_KUNIT=m). We'd previously attempted to do this by adding 'kunit/hooks.o' to obj-y in lib/Makefile, but this caused hooks.c to be rebuilt even when it was unchanged.
Instead, always recurse into lib/kunit using obj-y when KUnit is enabled, and add the hooks there.
Fixes: 7170b7ed6acb ("kunit: Add "hooks" to call into KUnit when it's built as a module"). Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/linux-kselftest/CAHk-=wiEf7irTKwPJ0jTMOF3CS-13UXmF6Fns3wuWpOZ_wGyZQ@mail.gmail.com/ Signed-off-by: David Gow <davidgow@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
Revision tags: v6.1.13, v6.2, v6.1.12, v6.1.11, v6.1.10 |
|
#
25b84002 |
| 02-Feb-2023 |
Kees Cook <keescook@chromium.org> |
arm64: Support Clang UBSAN trap codes for better reporting
When building with CONFIG_UBSAN_TRAP=y on arm64, Clang encodes the UBSAN check (handler) type in the esr. Extract this and actually report these traps as coming from the specific UBSAN check that tripped.
Before:
Internal error: BRK handler: 00000000f20003e8 [#1] PREEMPT SMP
After:
Internal error: UBSAN: shift out of bounds: 00000000f2005514 [#1] PREEMPT SMP
Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mukesh Ojha <quic_mojha@quicinc.com> Reviewed-by: Fangrui Song <maskray@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: John Stultz <jstultz@google.com> Cc: Yongqin Liu <yongqin.liu@linaro.org> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Yury Norov <yury.norov@gmail.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Marco Elver <elver@google.com> Cc: linux-arm-kernel@lists.infradead.org Cc: llvm@lists.linux.dev Signed-off-by: Kees Cook <keescook@chromium.org>
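Conceptually, the decode step looks like this. This is a sketch: the 0x5500 base and the masks are assumptions for illustration, inferred from the 0x...5514 example above rather than taken from the arm64 sources.

    #define UBSAN_BRK_IMM_BASE  0x5500  /* assumed Clang BRK immediate base */

    static const char *ubsan_brk_decode_sketch(unsigned long esr)
    {
        unsigned int comment = esr & 0xffff;    /* BRK imm16 from the ESR ISS */

        if ((comment & 0xff00) != UBSAN_BRK_IMM_BASE)
            return "not a UBSAN trap";

        switch (comment & 0xff) {   /* low byte: UBSAN handler type */
        case 0x14:
            return "UBSAN: shift out of bounds"; /* the 0x...5514 example */
        default:
            return "UBSAN: unknown check";
        }
    }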
|
Revision tags: v6.1.9 |
|
#
789538c6 |
| 25-Jan-2023 |
Rae Moar <rmoar@google.com> |
lib/hashtable_test.c: add test for the hashtable structure
Add a KUnit test for the kernel hashtable implementation in include/linux/hashtable.h.
Note that this version does not yet test each of the rcu alternative versions of functions.
Signed-off-by: Rae Moar <rmoar@google.com> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
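A flavor of what such a test covers, as an illustrative sketch against the include/linux/hashtable.h API (the real test cases differ):

    #include <kunit/test.h>
    #include <linux/hashtable.h>

    struct ht_test_entry {
        int key;
        struct hlist_node node;
    };

    static void hashtable_sketch_test(struct kunit *test)
    {
        DEFINE_HASHTABLE(ht, 3);    /* 2^3 = 8 buckets */
        struct ht_test_entry a = { .key = 1 };
        struct ht_test_entry *cur;
        bool found = false;

        hash_add(ht, &a.node, a.key);
        KUNIT_EXPECT_FALSE(test, hash_empty(ht));

        /* Walk only the bucket the key hashes to, looking for our entry. */
        hash_for_each_possible(ht, cur, node, 1) {
            if (cur->key == 1)
                found = true;
        }
        KUNIT_EXPECT_TRUE(test, found);

        hash_del(&a.node);
        KUNIT_EXPECT_TRUE(test, hash_empty(ht));
    }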
|
#
7170b7ed |
| 28-Jan-2023 |
David Gow <davidgow@google.com> |
kunit: Add "hooks" to call into KUnit when it's built as a module
KUnit has several macros and functions intended for use from non-test code. These hooks, currently the kunit_get_current_test() and
kunit: Add "hooks" to call into KUnit when it's built as a module
KUnit has several macros and functions intended for use from non-test code. These hooks, currently the kunit_get_current_test() and kunit_fail_current_test() macros, didn't work when CONFIG_KUNIT=m.
In order to support this case, the required functions and static data need to be available unconditionally, even when KUnit itself is not built-in. The new 'hooks.c' file is therefore always included, and has both the static key required for kunit_get_current_test(), and a table of function pointers in struct kunit_hooks_table. This is filled in with the real implementations by kunit_install_hooks(), which is kept in hooks-impl.h and called when the kunit module is loaded.
This can be extended for future features which require similar "hook" behaviour, such as static stubs, by simply adding new entries to the struct, and the appropriate code to set them.
Fixed white-space errors during commit: Shuah Khan <skhan@linuxfoundation.org>
Resolved merge conflicts with: db105c37a4d6 ("kunit: Export kunit_running()") This patch supersedes the above. Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: David Gow <davidgow@google.com> Reviewed-by: Rae Moar <rmoar@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
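The mechanism has roughly this shape (a sketch; the field and function names here are illustrative rather than the exact upstream ones):

    /* Built in unconditionally, even when KUnit itself is built as a module. */
    struct kunit_hooks_table {
        void (*fail_current_test)(const char *file, int line, const char *fmt, ...);
        void *(*get_static_stub_address)(struct kunit *test, void *real_fn_addr);
    };

    struct kunit_hooks_table kunit_hooks;

    /* Called from the kunit module's init to wire up the real implementations. */
    void kunit_install_hooks(struct kunit_hooks_table *hooks)
    {
        kunit_hooks = *hooks;
    }

    /* Non-test code then calls through the table only when it is populated. */
    static inline void kunit_fail_current_test_sketch(const char *file, int line)
    {
        if (kunit_hooks.fail_current_test)
            kunit_hooks.fail_current_test(file, line, "test failed");
    }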
|
Revision tags: v6.1.8, v6.1.7, v6.1.6, v6.1.5, v6.0.19, v6.0.18, v6.1.4, v6.1.3, v6.0.17, v6.1.2, v6.0.16, v6.1.1, v6.0.15, v6.0.14, v6.0.13, v6.1 |
|
#
d5528cc1 |
| 08-Dec-2022 |
Geert Uytterhoeven <geert+renesas@glider.be> |
lib: add Dhrystone benchmark test
When working on SoC bring-up, (a full) userspace may not be available, making it hard to benchmark the CPU performance of the system under development. Still, one may want to have a rough idea of the (relative) performance of one or more CPU cores, especially when working on e.g. the clock driver that controls the CPU core clock(s).
Hence make the classical Dhrystone 2.1 benchmark available as a Linux kernel test module, based on [1].
When built-in, this benchmark can be run without any userspace present.
Parallel runs (on multiple CPU cores) are supported; just kick the "run" file multiple times.
Note that the actual figures depend on the configuration options that control compiler optimization (e.g. CONFIG_CC_OPTIMIZE_FOR_SIZE vs. CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE), and on the compiler options used when building the kernel in general. Hence numbers may differ from those obtained by running similar benchmarks in userspace.
[1] https://github.com/qris/dhrystone-deb.git
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lkml.kernel.org/r/4d07ad990740a5f1e426ce4566fb514f60ec9bdd.1670509558.git.geert+renesas@glider.be Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Brendan Higgins <brendanhiggins@google.com> Cc: David Gow <davidgow@google.com> [geert+renesas@glider.be: fix uninitialized use of ret] Link: https://lkml.kernel.org/r/alpine.DEB.2.22.394.2212190857310.137329@ramsan.of.borg Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
#
f7b3ea8c |
| 26-Dec-2022 |
Ming Lei <ming.lei@redhat.com> |
genirq/affinity: Move group_cpus_evenly() into lib/
group_cpus_evenly() has become a generic function which can be used for other subsystems than the interrupt subsystem, so move it into lib/.
Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20221227022905.352674-6-ming.lei@redhat.com
|
Revision tags: v6.0.12, v6.0.11 |
|
#
5abf6987 |
| 28-Nov-2022 |
Anders Roxell <anders.roxell@linaro.org> |
lib: fortify_kunit: build without structleak plugin
Building allmodconfig with aarch64-linux-gnu-gcc (Debian 11.3.0-6), fortify_kunit with the structleak plugin enabled makes the stack frame grow too large:
lib/fortify_kunit.c:140:1: error: the frame size of 2368 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Turn off the structleak plugin checks for fortify_kunit.
Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org>
|
Revision tags: v6.0.10, v5.15.80, v6.0.9, v5.15.79, v6.0.8, v5.15.78, v6.0.7, v5.15.77, v5.15.76, v6.0.6, v6.0.5, v5.15.75, v6.0.4, v6.0.3, v6.0.2, v5.15.74, v5.15.73, v6.0.1, v5.15.72, v6.0 |
|
#
9124a264 |
| 29-Sep-2022 |
Kees Cook <keescook@chromium.org> |
kunit/fortify: Validate __alloc_size attribute results
Validate the effect of the __alloc_size attribute on allocators. If the compiler doesn't support __builtin_dynamic_object_size(), skip the associated tests.
(For GCC, just remove the "--make_options" line below...)
$ ./tools/testing/kunit/kunit.py run --arch x86_64 \
        --kconfig_add CONFIG_FORTIFY_SOURCE=y \
        --make_options LLVM=1 fortify
...
[15:16:30] ================== fortify (10 subtests) ===================
[15:16:30] [PASSED] known_sizes_test
[15:16:30] [PASSED] control_flow_split_test
[15:16:30] [PASSED] alloc_size_kmalloc_const_test
[15:16:30] [PASSED] alloc_size_kmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_vmalloc_const_test
[15:16:30] [PASSED] alloc_size_vmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_kvmalloc_const_test
[15:16:30] [PASSED] alloc_size_kvmalloc_dynamic_test
[15:16:30] [PASSED] alloc_size_devm_kmalloc_const_test
[15:16:30] [PASSED] alloc_size_devm_kmalloc_dynamic_test
[15:16:30] ===================== [PASSED] fortify =====================
[15:16:30] ============================================================
[15:16:30] Testing complete. Ran 10 tests: passed: 10
[15:16:31] Elapsed time: 8.348s total, 0.002s configuring, 6.923s building, 1.075s running
For GCC prior to version 12, the dynamic tests will be skipped:
[15:18:59] ================== fortify (10 subtests) ===================
[15:18:59] [PASSED] known_sizes_test
[15:18:59] [PASSED] control_flow_split_test
[15:18:59] [PASSED] alloc_size_kmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_kmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_vmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_vmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_kvmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_kvmalloc_dynamic_test
[15:18:59] [PASSED] alloc_size_devm_kmalloc_const_test
[15:18:59] [SKIPPED] alloc_size_devm_kmalloc_dynamic_test
[15:18:59] ===================== [PASSED] fortify =====================
[15:18:59] ============================================================
[15:18:59] Testing complete. Ran 10 tests: passed: 6, skipped: 4
[15:18:59] Elapsed time: 11.965s total, 0.002s configuring, 10.540s building, 1.068s running
Cc: David Gow <davidgow@google.com> Cc: linux-hardening@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org>
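The essence of each alloc_size test is a dynamic-size check of this form (a sketch; my_alloc stands in for kmalloc/vmalloc/kvmalloc/devm_kmalloc and is hypothetical):

    void *my_alloc(size_t len) __alloc_size(1);  /* hypothetical allocator */

    static void alloc_size_sketch_test(struct kunit *test)
    {
        size_t n = 128;    /* size known only at runtime in the dynamic tests */
        char *p = my_alloc(n);

        /* Without compiler support, bdos returns SIZE_MAX and we skip. */
        if (__builtin_dynamic_object_size(p, 1) == (size_t)-1)
            kunit_skip(test, "need __builtin_dynamic_object_size()");

        /* __alloc_size(1) lets the compiler track the allocation size. */
        KUNIT_EXPECT_EQ(test, __builtin_dynamic_object_size(p, 1), n);
    }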
|
#
120b1162 |
| 28-Oct-2022 |
Liam Howlett <liam.howlett@oracle.com> |
maple_tree: reorganize testing to restore module testing
During the development cycle, support for building the test code both as a module and in-kernel was removed. Restore this functionality by moving the internal API tests, as well as the threading tests, to the userspace side. Fix the lockdep issues and add a way to reduce memory usage so the tests can complete with KASAN + memleak detection. Make the tests work on 32-bit hosts where possible and detect 32-bit hosts in the radix test suite.
[akpm@linux-foundation.org: fix module export] [akpm@linux-foundation.org: fix it some more] [liam.howlett@oracle.com: fix compile warnings on 32bit build in check_find()] Link: https://lkml.kernel.org/r/20221107203816.1260327-1-Liam.Howlett@oracle.com Link: https://lkml.kernel.org/r/20221028180415.3074673-1-Liam.Howlett@oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
#
4b21d25b |
| 24-Oct-2022 |
Kees Cook <keescook@chromium.org> |
overflow: Introduce overflows_type() and castable_to_type()
Implement a robust overflows_type() macro to test if a variable or constant value would overflow another variable or type. This can be used as a constant expression for static_assert() (which requires a constant expression[1][2]) when used on constant values. This must be constructed manually, since __builtin_add_overflow() does not produce a constant expression[3].
Additionally adds castable_to_type(), similar to __same_type(), but for checking if a constant value would overflow if cast to a given type.
Add unit tests for overflows_type(), __same_type(), and castable_to_type() to the existing KUnit "overflow" test:
[16:03:33] ================== overflow (21 subtests) ==================
...
[16:03:33] [PASSED] overflows_type_test
[16:03:33] [PASSED] same_type_test
[16:03:33] [PASSED] castable_to_type_test
[16:03:33] ==================== [PASSED] overflow =====================
[16:03:33] ============================================================
[16:03:33] Testing complete. Ran 21 tests: passed: 21
[16:03:33] Elapsed time: 24.022s total, 0.002s configuring, 22.598s building, 0.767s running
[1] https://en.cppreference.com/w/c/language/_Static_assert
[2] C11 standard (ISO/IEC 9899:2011): 6.7.10 Static assertions
[3] https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
    6.56 Built-in Functions to Perform Arithmetic with Overflow Checking
    Built-in Function: bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
Cc: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Tom Rix <trix@redhat.com> Cc: Daniel Latypov <dlatypov@google.com> Cc: Vitor Massaru Iha <vitor@massaru.org> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Jani Nikula <jani.nikula@intel.com> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: linux-hardening@vger.kernel.org Cc: llvm@lists.linux.dev Co-developed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20221024201125.1416422-1-gwan-gyeong.mun@intel.com
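Usage is then straightforward; since the macros are constant expressions for constant arguments, they compose with static_assert(). A sketch with illustrative values:

    /* overflows_type(): would the value overflow the target type? */
    static_assert(!overflows_type(65535, u16));  /* U16_MAX fits */
    static_assert(overflows_type(65536, u16));   /* one past U16_MAX does not */

    /* castable_to_type(): can the constant be cast to the type losslessly? */
    static_assert(castable_to_type(-128, s8));
    static_assert(!castable_to_type(255, s8));   /* 255 > S8_MAX */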
|
#
fb3d88ab |
| 02-Oct-2022 |
Kees Cook <keescook@chromium.org> |
siphash: Convert selftest to KUnit
Convert the siphash self-test to KUnit so it will be included in "all KUnit tests" coverage, and can be run individually still:
$ ./tools/testing/kunit/kunit.py run siphash
...
[02:58:45] Starting KUnit Kernel (1/1)...
[02:58:45] ============================================================
[02:58:45] =================== siphash (1 subtest) ====================
[02:58:45] [PASSED] siphash_test
[02:58:45] ===================== [PASSED] siphash =====================
[02:58:45] ============================================================
[02:58:45] Testing complete. Ran 1 tests: passed: 1
[02:58:45] Elapsed time: 21.421s total, 4.306s configuring, 16.947s building, 0.148s running
Cc: Vlastimil Babka <vbabka@suse.cz> Cc: "Steven Rostedt (Google)" <rostedt@goodmis.org> Cc: Yury Norov <yury.norov@gmail.com> Cc: Sander Vanheule <sander@svanheule.net> Acked-by: "Jason A. Donenfeld" <Jason@zx2c4.com> Link: https://lore.kernel.org/lkml/CAHmME9r+9MPH6zk3Vn=buEMSbQiWMFryqqzerKarmjYk+tHLJA@mail.gmail.com Tested-by: David Gow <davidgow@google.com> Signed-off-by: Kees Cook <keescook@chromium.org>
|
#
41eefc46 |
| 02-Oct-2022 |
Kees Cook <keescook@chromium.org> |
string: Convert strscpy() self-test to KUnit
Convert the strscpy() self-test to a KUnit test.
Cc: David Gow <davidgow@google.com> Cc: Tobin C. Harding <tobin@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/lkml/Y072ZMk/hNkfwqMv@dev-arch.thelio-3990X Signed-off-by: Kees Cook <keescook@chromium.org>
|
Revision tags: v5.15.71, v5.15.70, v5.15.69 |
|
#
79dbd006 |
| 15-Sep-2022 |
Alexander Potapenko <glider@google.com> |
kmsan: disable instrumentation of unsupported common kernel code
EFI stub cannot be linked with KMSAN runtime, so we disable instrumentation for it.
Instrumenting kcov, stackdepot or lockdep leads to infinite recursion caused by instrumentation hooks calling instrumented code again.
Link: https://lkml.kernel.org/r/20220915150417.722975-13-glider@google.com Signed-off-by: Alexander Potapenko <glider@google.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Eric Biggers <ebiggers@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Eric Dumazet <edumazet@google.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vegard Nossum <vegard.nossum@oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
Revision tags: v5.15.68, v5.15.67, v5.15.66 |
|
#
f7e01ab8 |
| 05-Sep-2022 |
Andrey Konovalov <andreyknvl@google.com> |
kasan: move tests to mm/kasan/
Move KASAN tests to mm/kasan/ to keep the test code alongside the implementation.
Link: https://lkml.kernel.org/r/676398f0aeecd47d2f8e3369ea0e95563f641a36.1662416260.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Marco Elver <elver@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
#
54a611b6 |
| 06-Sep-2022 |
Liam R. Howlett <Liam.Howlett@Oracle.com> |
Maple Tree: add new data structure
Patch series "Introducing the Maple Tree"
The maple tree is an RCU-safe range based B-tree designed to use modern processor cache efficiently. There are a number of places in the kernel that a non-overlapping range-based tree would be beneficial, especially one with a simple interface. If you use an rbtree with other data structures to improve performance or an interval tree to track non-overlapping ranges, then this is for you.
The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. With the increased branching factor, it is significantly shorter than the rbtree so it has fewer cache misses. The removal of the linked list between subsequent entries also reduces the cache misses and the need to pull in the previous and next VMA during many tree alterations.
The first user that is covered in this patch set is the vm_area_struct, where three data structures are replaced by the maple tree: the augmented rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The long term goal is to reduce or remove the mmap_lock contention.
The plan is to get to the point where we use the maple tree in RCU mode. Readers will not block for writers. A single write operation will be allowed at a time. A reader re-walks if stale data is encountered. VMAs would be RCU enabled and this mode would be entered once multiple tasks are using the mm_struct.
Davidlohr said
: Yes I like the maple tree, and at this stage I don't think we can ask for : more from this series wrt the MM - albeit there seems to still be some : folks reporting breakage. Fundamentally I see Liam's work to (re)move : complexity out of the MM (not to say that the actual maple tree is not : complex) by consolidating the three complimentary data structures very : much worth it considering performance does not take a hit. This was very : much a turn off with the range locking approach, which worst case scenario : incurred in prohibitive overhead. Also as Liam and Matthew have : mentioned, RCU opens up a lot of nice performance opportunities, and in : addition academia[1] has shown outstanding scalability of address spaces : with the foundation of replacing the locked rbtree with RCU aware trees.
A similar work has been discovered in the academic press
https://pdos.csail.mit.edu/papers/rcuvm:asplos12.pdf
Sheer coincidence. We designed our tree with the intention of solving the hardest problem first. Upon settling on a b-tree variant and a rough outline, we researched ranged based b-trees and RCU b-trees and did find that article. So it was nice to find reassurances that we were on the right path, but our design choice of using ranges made that paper unusable for us.
This patch (of 70):
The maple tree is an RCU-safe range based B-tree designed to use modern processor cache efficiently. There are a number of places in the kernel that a non-overlapping range-based tree would be beneficial, especially one with a simple interface. If you use an rbtree with other data structures to improve performance or an interval tree to track non-overlapping ranges, then this is for you.
The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. With the increased branching factor, it is significantly shorter than the rbtree so it has fewer cache misses. The removal of the linked list between subsequent entries also reduces the cache misses and the need to pull in the previous and next VMA during many tree alterations.
The first user that is covered in this patch set is the vm_area_struct, where three data structures are replaced by the maple tree: the augmented rbtree, the vma cache, and the linked list of VMAs in the mm_struct. The long term goal is to reduce or remove the mmap_lock contention.
The plan is to get to the point where we use the maple tree in RCU mode. Readers will not block for writers. A single write operation will be allowed at a time. A reader re-walks if stale data is encountered. VMAs would be RCU enabled and this mode would be entered once multiple tasks are using the mm_struct.
There are additional BUG_ON() calls added within the tree, most of which are in debug code. These will be replaced with WARN_ON() calls in the future. There are also additional BUG_ON() calls within the code which will be reduced in number at a later date. These exist to catch things such as out-of-range accesses which would crash anyway.
Link: https://lkml.kernel.org/r/20220906194824.2110408-1-Liam.Howlett@oracle.com Link: https://lkml.kernel.org/r/20220906194824.2110408-2-Liam.Howlett@oracle.com Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Tested-by: David Howells <dhowells@redhat.com> Tested-by: Sven Schnelle <svens@linux.ibm.com> Tested-by: Yu Zhao <yuzhao@google.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: David Hildenbrand <david@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: SeongJae Park <sj@kernel.org> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
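The "simple interface" in practice, as a short usage sketch (illustrative range and values; error handling trimmed):

    #include <linux/maple_tree.h>

    static DEFINE_MTREE(sketch_mt);

    static int maple_usage_sketch(void)
    {
        static int data = 42;
        int ret;

        /* Associate the whole index range [10, 20] with &data. */
        ret = mtree_store_range(&sketch_mt, 10, 20, &data, GFP_KERNEL);
        if (ret)
            return ret;

        /* Any index inside the range resolves to the same entry. */
        WARN_ON(mtree_load(&sketch_mt, 15) != &data);

        /* Erasing by any covered index drops the whole range. */
        mtree_erase(&sketch_mt, 12);
        return 0;
    }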
|
Revision tags: v5.15.65 |
|
#
875bfd52 |
| 02-Sep-2022 |
Kees Cook <keescook@chromium.org> |
fortify: Add KUnit test for FORTIFY_SOURCE internals
Add lib/fortify_kunit.c KUnit test for checking the expected behavioral characteristics of FORTIFY_SOURCE internals.
Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Tom Rix <trix@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: "Steven Rostedt (Google)" <rostedt@goodmis.org> Cc: Yury Norov <yury.norov@gmail.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Sander Vanheule <sander@svanheule.net> Cc: linux-hardening@vger.kernel.org Cc: llvm@lists.linux.dev Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Kees Cook <keescook@chromium.org>
|