#
7c50b1cb |
| 15-Apr-2023 |
Dave Marchevsky <davemarchevsky@fb.com> |
bpf: Add bpf_refcount_acquire kfunc
Currently, BPF programs can interact with the lifetime of refcounted local kptrs in the following ways:
* bpf_obj_new - Initialize refcount to 1 as part of new object creation
* bpf_obj_drop - Decrement refcount and free object if it's 0
* collection add - Pass ownership to the collection. No change to refcount
  but collection is responsible for bpf_obj_dropping it
In order to be able to add a refcounted local kptr to multiple collections we need to be able to increment the refcount and acquire a new owning reference. This patch adds a kfunc, bpf_refcount_acquire, implementing such an operation.
bpf_refcount_acquire takes a refcounted local kptr and returns a new owning reference to the same underlying memory as the input. The input can be either owning or non-owning. To reinforce why this is safe, consider the following code snippets:
  struct node *n = bpf_obj_new(typeof(*n)); // A
  struct node *m = bpf_refcount_acquire(n); // B
In the above snippet, n will be alive with refcount=1 after (A), and since nothing changes that state before (B), it's obviously safe. If n is instead added to some rbtree, we can still safely refcount_acquire it:
  struct node *n = bpf_obj_new(typeof(*n));
  struct node *m;

  bpf_spin_lock(&glock);
  bpf_rbtree_add(&groot, &n->node, less); // A
  m = bpf_refcount_acquire(n);            // B
  bpf_spin_unlock(&glock);
In the above snippet, after (A) n is a non-owning reference, and after (B) m is an owning reference pointing to the same memory as n. Although n has no ownership of that memory's lifetime, it's guaranteed to be alive until the end of the critical section, and n would be clobbered if we were past the end of the critical section, so it's safe to bump refcount.
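Putting these pieces together, the motivating use case is a single allocation living in two collections at once. A minimal sketch, assuming a node type with two bpf_rb_node fields plus a bpf_refcount field, and a second tree groot2 guarded by glock2 (all names here are illustrative, not taken from the patch):

  struct node_data {
          long key;
          struct bpf_rb_node node;
          struct bpf_rb_node node2;
          struct bpf_refcount ref;
  };

  struct node_data *n, *m;

  n = bpf_obj_new(typeof(*n));
  if (!n)
          return 0;

  m = bpf_refcount_acquire(n); /* refcount 1 -> 2, m is a second owning ref */

  bpf_spin_lock(&glock);
  bpf_rbtree_add(&groot, &n->node, less);   /* first tree takes one ref */
  bpf_spin_unlock(&glock);

  bpf_spin_lock(&glock2);
  bpf_rbtree_add(&groot2, &m->node2, less); /* second tree takes the other */
  bpf_spin_unlock(&glock2);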
Implementation details:
* From verifier's perspective, bpf_refcount_acquire handling is similar to bpf_obj_new and bpf_obj_drop. Like the former, it returns a new owning reference matching input type, although like the latter, type can be inferred from concrete kptr input. Verifier changes in {check,fixup}_kfunc_call and check_kfunc_args are largely copied from aforementioned functions' verifier changes.
* An exception to the above is the new KF_ARG_PTR_TO_REFCOUNTED_KPTR arg, indicated by new "__refcounted_kptr" kfunc arg suffix. This is necessary in order to handle both owning and non-owning input without adding special-casing to "__alloc" arg handling. Also a convenient place to confirm that input type has bpf_refcount field.
* The implemented kfunc is actually bpf_refcount_acquire_impl, with 'hidden' second arg that the verifier sets to the type's struct_meta in fixup_kfunc_call.
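On the BPF program side, a hidden arg like this is typically papered over by a thin wrapper; a sketch of how a bpf_experimental.h-style header might expose it (illustrative, not quoted from the patch):

  extern void *bpf_refcount_acquire_impl(void *kptr, void *meta) __ksym;

  /* the verifier fills in the struct_meta arg, so callers just pass NULL */
  #define bpf_refcount_acquire(kptr) \
          bpf_refcount_acquire_impl((void *)(kptr), NULL)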
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20230415201811.343116-5-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
1cf3bfc6 |
| 12-Apr-2023 |
Ilya Leoshkevich <iii@linux.ibm.com> |
bpf: Support 64-bit pointers to kfuncs
test_ksyms_module fails to emit a kfunc call targeting a module on s390x, because the verifier stores the difference between kfunc address and __bpf_call_base in bpf_insn.imm, which is s32, and modules are roughly (1 << 42) bytes away from the kernel on s390x.
Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs, and storing the absolute address in bpf_kfunc_desc.
Introduce bpf_jit_supports_far_kfunc_call() in order to limit this new behavior to the s390x JIT. Otherwise other JITs need to be modified, which is not desired.
Introduce bpf_get_kfunc_addr() instead of exposing both find_kfunc_desc() and struct bpf_kfunc_desc.
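A sketch of the shape such a helper could take, per the description above (the exact signature in the patch may differ):

  /* Resolve the call target for a kfunc call insn: insn->imm carries
   * the BTF id, and the offset selects which BTF (module) it came from.
   */
  int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
                         u16 offset, u8 **func_addr);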
In addition to sorting kfuncs by imm, also sort them by offset, in order to handle conflicting imms from different modules. Do this on all architectures in order to simplify code.
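A hedged sketch of what an (imm, offset) sort comparator looks like (name and field layout illustrative):

  static int kfunc_desc_cmp_by_imm_off(const void *a, const void *b)
  {
          const struct bpf_kfunc_desc *d0 = a, *d1 = b;

          if (d0->imm != d1->imm)
                  return d0->imm < d1->imm ? -1 : 1;
          if (d0->offset != d1->offset)
                  return d0->offset < d1->offset ? -1 : 1;
          return 0;
  }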
Factor out resolving specialized kfuncs (XPD and dynptr) from fixup_kfunc_call(). This was required in the first place, because fixup_kfunc_call() uses find_kfunc_desc(), which returns a const pointer, so it's not possible to modify kfunc addr without stripping const, which is not nice. It also removes repetition of code like:
  if (bpf_jit_supports_far_kfunc_call())
          desc->addr = func;
  else
          insn->imm = BPF_CALL_IMM(func);
and separates kfunc_desc_tab fixups from kfunc_call fixups.
Suggested-by: Jiri Olsa <olsajiri@gmail.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230412230632.885985-1-iii@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
c11bd046 |
| 12-Apr-2023 |
Yafang <laoar.shao@gmail.com> |
bpf: Add preempt_count_{sub,add} into btf id deny list
The recursion check in __bpf_prog_enter* and __bpf_prog_exit* leaves preempt_count_{sub,add} unprotected. When attaching a trampoline to them we get a panic as follows:
[ 867.843050] BUG: TASK stack guard page was hit at 0000000009d325cf (stack is 0000000046a46a15..00000000537e7b28)
[ 867.843064] stack guard page: 0000 [#1] PREEMPT SMP NOPTI
[ 867.843067] CPU: 8 PID: 11009 Comm: trace Kdump: loaded Not tainted 6.2.0+ #4
[ 867.843100] Call Trace:
[ 867.843101]  <TASK>
[ 867.843104]  asm_exc_int3+0x3a/0x40
[ 867.843108] RIP: 0010:preempt_count_sub+0x1/0xa0
[ 867.843135]  __bpf_prog_enter_recur+0x17/0x90
[ 867.843148]  bpf_trampoline_6442468108_0+0x2e/0x1000
[ 867.843154]  ? preempt_count_sub+0x1/0xa0
[ 867.843157]  preempt_count_sub+0x5/0xa0
[ 867.843159]  ? migrate_enable+0xac/0xf0
[ 867.843164]  __bpf_prog_exit_recur+0x2d/0x40
[ 867.843168]  bpf_trampoline_6442468108_0+0x55/0x1000
...
[ 867.843788]  preempt_count_sub+0x5/0xa0
[ 867.843793]  ? migrate_enable+0xac/0xf0
[ 867.843829]  __bpf_prog_exit_recur+0x2d/0x40
[ 867.843837] BUG: IRQ stack guard page was hit at 0000000099bd8228 (stack is 00000000b23e2bc4..000000006d95af35)
[ 867.843841] BUG: IRQ stack guard page was hit at 000000005ae07924 (stack is 00000000ffd69623..0000000014eb594c)
[ 867.843843] BUG: IRQ stack guard page was hit at 00000000028320f0 (stack is 00000000034b6438..0000000078d1bcec)
[ 867.843842]  bpf_trampoline_6442468108_0+0x55/0x1000
...
That is because in __bpf_prog_exit_recur, the preempt_count_{sub,add} are called after prog->active is decreased.
Fix this by adding these two functions into the BTF id deny list.
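The deny list is a BTF id set in the verifier; a sketch of the addition (surrounding entries and any config guards are illustrative):

  BTF_SET_START(btf_id_deny)
  BTF_ID_UNUSED
  #ifdef CONFIG_SMP
  BTF_ID(func, migrate_disable)
  BTF_ID(func, migrate_enable)
  #endif
  /* ... */
  BTF_ID(func, preempt_count_add)
  BTF_ID(func, preempt_count_sub)
  BTF_SET_END(btf_id_deny)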
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yafang <laoar.shao@gmail.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Jiri Olsa <olsajiri@gmail.com>
Acked-by: Hao Luo <haoluo@google.com>
Link: https://lore.kernel.org/r/20230413025248.79764-1-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
c2865b11 |
| 13-Apr-2023 |
Jakub Kicinski <kuba@kernel.org> |
Daniel Borkmann says:
====================
pull-request: bpf-next 2023-04-13
We've added 260 non-merge commits during the last 36 day(s) which contain a total of 356 files changed, 21786 insertions(+), 11275 deletions(-).
The main changes are:
1) Rework BPF verifier log behavior and implement it as a rotating log by default with the option to retain old-style fixed log behavior, from Andrii Nakryiko.
2) Adds support for using {FOU,GUE} encap with an ipip device operating in collect_md mode and add a set of BPF kfuncs for controlling encap params, from Christian Ehrig.
3) Allow BPF programs to detect at load time whether a particular kfunc exists or not, and also add support for this in light skeleton, from Alexei Starovoitov.
4) Optimize hashmap lookups when key size is multiple of 4, from Anton Protopopov.
5) Enable RCU semantics for task BPF kptrs and allow referenced kptr tasks to be stored in BPF maps, from David Vernet.
6) Add support for stashing local BPF kptr into a map value via bpf_kptr_xchg(). This is useful e.g. for rbtree node creation for new cgroups, from Dave Marchevsky.
7) Fix BTF handling of is_int_ptr to skip modifiers to work around tracing issues where a program cannot be attached, from Feng Zhou.
8) Migrate a big portion of test_verifier unit tests over to test_progs -a verifier_* via inline asm to ease {read,debug}ability, from Eduard Zingerman.
9) Several updates to the instruction-set.rst documentation which is subject to future IETF standardization (https://lwn.net/Articles/926882/), from Dave Thaler.
10) Fix BPF verifier in the __reg_bound_offset's 64->32 tnum sub-register known bits information propagation, from Daniel Borkmann.
11) Add skb bitfield compaction work related to BPF with the overall goal to make more of the sk_buff bits optional, from Jakub Kicinski.
12) BPF selftest cleanups for build id extraction which stand on their own from the upcoming integration work of build id into struct file object, from Jiri Olsa.
13) Add fixes and optimizations for xsk descriptor validation and several selftest improvements for xsk sockets, from Kal Conley.
14) Add BPF links for struct_ops and enable switching implementations of BPF TCP cong-ctls under a given name by replacing backing struct_ops map, from Kui-Feng Lee.
15) Remove a misleading BPF verifier env->bypass_spec_v1 check on variable offset stack read as earlier Spectre checks cover this, from Luis Gerhorst.
16) Fix issues in copy_from_user_nofault() for BPF and other tracers to resemble copy_from_user_nmi() from safety PoV, from Florian Lehner and Alexei Starovoitov.
17) Add --json-summary option to test_progs in order for CI tooling to ease parsing of test results, from Manu Bretelle.
18) Batch of improvements and refactoring to prep for upcoming bpf_local_storage conversion to bpf_mem_cache_{alloc,free} allocator, from Martin KaFai Lau.
19) Improve bpftool's visual program dump which produces the control flow graph in a DOT format by adding C source inline annotations, from Quentin Monnet.
20) Fix attaching fentry/fexit/fmod_ret/lsm to modules by extracting the module name from BTF of the target and searching kallsyms of the correct module, from Viktor Malik.
21) Improve BPF verifier handling of '<const> <cond> <non_const>' to better detect whether in particular jmp32 branches are taken, from Yonghong Song.
22) Allow BPF TCP cong-ctls to write app_limited of struct tcp_sock. A built-in cc or one from a kernel module is already able to write to app_limited, from Yixin Shen.
Conflicts:
Documentation/bpf/bpf_devel_QA.rst
  b7abcd9c656b ("bpf, doc: Link to submitting-patches.rst for general patch submission info")
  0f10f647f455 ("bpf, docs: Use internal linking for link to netdev subsystem doc")
https://lore.kernel.org/all/20230307095812.236eb1be@canb.auug.org.au/
include/net/ip_tunnels.h
  bc9d003dc48c3 ("ip_tunnel: Preserve pointer const in ip_tunnel_info_opts")
  ac931d4cdec3d ("ipip,ip_tunnel,sit: Add FOU support for externally controlled ipip devices")
https://lore.kernel.org/all/20230413161235.4093777-1-broonie@kernel.org/
net/bpf/test_run.c
  e5995bc7e2ba ("bpf, test_run: fix crashes due to XDP frame overwriting/corruption")
  294635a8165a ("bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES")
https://lore.kernel.org/all/20230320102619.05b80a98@canb.auug.org.au/
====================
Link: https://lore.kernel.org/r/20230413191525.7295-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
5603effb |
| 12-Apr-2023 |
Maarten Lankhorst <maarten.lankhorst@linux.intel.com> |
Merge remote-tracking branch 'drm/drm-fixes' into drm-misc-fixes
We were stuck on rc2, should at least attempt to track drm-fixes slightly.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
|
#
60261442 |
| 12-Apr-2023 |
Russell King (Oracle) <rmk+kernel@armlinux.org.uk> |
Merge branches 'misc' and 'fixes' into for-next
|
#
255f0e14 |
| 11-Apr-2023 |
Daniel Borkmann <daniel@iogearbox.net> |
Merge branch 'bpf-verifier-log-rotation'
Andrii Nakryiko says:
====================
This patch set changes BPF verifier log behavior to behave as a rotating log, by default. If the user-supplied log buffer is big enough to contain the entire verifier log output, there is no effective difference. But where previously a user who supplied a too-small log buffer would get an -ENOSPC error and the beginning part of the verifier log, now there will be no error and the user will get the ending part of the verifier log, filling up the user-supplied log buffer. That is, in the absolute majority of cases, exactly what's useful, relevant, and what users want and need, as the end of the verifier log contains the details of the verifier failure and the relevant state that got us to that failure. So this rotating mode is made the default, but for some niche advanced debugging scenarios it's possible to request the old behavior by specifying the additional BPF_LOG_FIXED (8) flag.
This patch set adjusts libbpf to allow specifying flags beyond 1 | 2 | 4. We also add --log-size and --log-fixed options to veristat, both to be able to test this functionality manually and for use in various debugging scenarios. We also add selftests that try many variants of log buffer size to stress-test the correctness of the internal verifier log bookkeeping code.
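For instance, the two modes could be compared along these lines (using the option names above; the exact veristat invocation may differ):

  # rotating (default): keep the tail of an oversized log
  veristat -v --log-size 4096 prog.bpf.o

  # fixed: keep the head, as before
  veristat -v --log-size 4096 --log-fixed prog.bpf.o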
Further, this patch set is merged with log_size_actual v1 patchset ([0]), which adds ability to get required log buffer size to fit entire verifier log output.
This addresses a long-standing limitation, which causes users and BPF loader library writers to guess and pre-size log buffer, often allocating unnecessary extra memory for this or doing extra program verifications just to size logs better, ultimately wasting resources. This was requested most recently by Go BPF library maintainers ([1]).
See respective patches for details. A bunch of them are drive-by fixes detected while working with the code. Some others further refactor and compartmentalize verifier log handling code into kernel/bpf/log.c, which should also make it simpler to integrate such a verbose log into other complicated bpf() syscall commands, if necessary. The rest are the actual logic to calculate the maximum log buffer size needed and return it to user-space. A few patches wire this up on the libbpf side, and the rest add selftests to test proper log truncation and log_buf==NULL handling.
This turned into a pretty sizable patch set with lots of arithmetic, but hopefully the set of features added to the verifier log in this patch set is both useful for BPF users and self-contained and isolated enough to not cause trouble going forward.
v3->v4:
- s/log_size_actual/log_true_size/ (Alexei);
- log_buf==NULL && log_size==0 don't trigger -ENOSPC (Lorenz);
- added WARN_ON_ONCE if we try bpf_vlog_reset() forward (Lorenz);
- added selftests for truncation in BPF_LOG_FIXED mode;
- fixed edge case in BPF_LOG_FIXED when log_size==1, leaving buf not zero terminated;
v2->v3:
- typos and comment improvement (Lorenz);
- merged with log_size_actual v1 ([0]) patch set (Alexei);
- added log_buf==NULL condition allowed (Lorenz);
- added BPF_BTF_LOAD logs tests (Lorenz);
- more clean up and refactoring of internal verifier log API;
v1->v2:
- return -ENOSPC even in rotating log mode for preserving backwards compatibility (Lorenz);
[0] https://patchwork.kernel.org/project/netdevbpf/list/?series=735213&state=*
[1] https://lore.kernel.org/bpf/CAN+4W8iNoEbQzQVbB_o1W0MWBDV4xCJAq7K3f6psVE-kkCfMqg@mail.gmail.com/
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
|
#
bdcab414 |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Simplify internal verifier log interface
Simplify internal verifier log API down to bpf_vlog_init() and bpf_vlog_finalize(). The former handles input argument validation in one place and makes it easier to change. The latter subsumes -ENOSPC (truncation) and -EFAULT handling and simplifies both callers' code (bpf_check() and btf_parse()).
For btf_parse(), this patch also makes sure that verifier log finalization happens even if there is some error condition during BTF verification process prior to normal finalization step.
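A hedged sketch of the resulting two-function surface (signatures approximate):

  /* validate the log_level/log_buf/log_size combination and set up the log */
  int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level,
                    char __user *log_buf, u32 log_size);

  /* rotate contents into place, report the true log size, and map
   * truncation / copy-to-user failures to -ENOSPC / -EFAULT
   */
  int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual);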
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-14-andrii@kernel.org
|
#
47a71c1f |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Add log_true_size output field to return necessary log buffer size
Add output-only log_true_size and btf_log_true_size fields to the BPF_PROG_LOAD and BPF_BTF_LOAD commands, respectively. They will return the size of the log buffer necessary to fit all the log contents at the specified log_level. This is very useful for BPF loader libraries like libbpf to be able to size the log buffer correctly, but could be used by users directly, if necessary, as well.
This patch plumbs all this through the code, taking into account actual bpf_attr size provided by user to determine if these new fields are expected by users. And if they are, set them from kernel on return.
We refactor the btf_parse() function to accommodate this, moving attr and uattr handling inside it. The rest is very straightforward code, which is split from the logging accounting changes in the previous patch to make it simpler to review the logic vs the UAPI changes.
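The intended user-space pattern is a size-discovery pass followed by an exact-size retry; a minimal sketch against the raw syscall (field names from this patch, error handling elided):

  union bpf_attr attr = {};
  /* ... fill insns, insn_cnt, license, prog_type ... */
  attr.log_level = 1;
  attr.log_buf = 0;    /* NULL buffer: just measure */
  attr.log_size = 0;

  int fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
  if (fd < 0 && attr.log_true_size) {
          char *buf = malloc(attr.log_true_size);

          attr.log_buf = (__u64)(unsigned long)buf;
          attr.log_size = attr.log_true_size;
          fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
  }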
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-13-andrii@kernel.org
|
#
8a6ca6bc |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Simplify logging-related error conditions handling
Move log->level == 0 check into bpf_vlog_truncated() instead of doing it explicitly. Also remove unnecessary goto in kernel/bpf/verifier.c.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-11-andrii@kernel.org
|
#
cbedb42a |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Avoid incorrect -EFAULT error in BPF_LOG_KERNEL mode
If verifier log is in BPF_LOG_KERNEL mode, no log->ubuf is expected and it stays NULL throughout entire verification process. Don't erroneously return -EFAULT in such case.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-10-andrii@kernel.org
|
#
12166409 |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Switch BPF verifier log to be a rotating log by default
Currently, if user-supplied log buffer to collect BPF verifier log turns out to be too small to contain full log, bpf() syscall returns -ENOSPC, fails BPF program verification/load, and preserves first N-1 bytes of the verifier log (where N is the size of user-supplied buffer).
This is problematic in a bunch of common scenarios, especially when working with real-world BPF programs that tend to be pretty complex as far as verification goes and require big log buffers. Typically, it's when debugging tricky cases at log level 2 (verbose). Also, when BPF program is successfully validated, log level 2 is the only way to actually see verifier state progression and all the important details.
Even with log level 1, it's possible to get -ENOSPC even if the final verifier log fits in log buffer, if there is a code path that's deep enough to fill up entire log, even if normally it would be reset later on (there is a logic to chop off successfully validated portions of BPF verifier log).
In short, it's not always possible to pre-size log buffer. Also, what's worse, in practice, the end of the log most often is way more important than the beginning, but verifier stops emitting log as soon as initial log buffer is filled up.
This patch switches BPF verifier log behavior to effectively behave as a rotating log. That is, if the user-supplied log buffer turns out to be too short, the verifier will keep overwriting previously written log, effectively treating the user's log buffer as a ring buffer. -ENOSPC is still going to be returned at the end, to notify the user that the log contents were truncated, but the important last N bytes of the log will be returned, which might be all that the user really needs. This consistent -ENOSPC behavior, regardless of rotating or fixed log behavior, prevents backwards compatibility breakage. The only user-visible change is which portion of the verifier log the user ends up seeing *if the buffer is too small*. Given the contents of the verifier log itself are not an ABI, there is no breakage due to this behavior change. Specialized tools that rely on specific contents of the verifier log in the -ENOSPC scenario are expected to be easily adapted to accommodate the old and new behaviors.
Importantly, though, to preserve a good user experience and not require every user-space application to adapt to this new behavior, before exiting to user-space the verifier will rotate the log (in place) to make it start at the very beginning of the user buffer as a continuous zero-terminated string. The contents will be the last N-1 bytes of the full verifier log, of course.
Given beginning of log is sometimes important as well, we add BPF_LOG_FIXED (which equals 8) flag to force old behavior, which allows tools like veristat to request first part of verifier log, if necessary. BPF_LOG_FIXED flag is also a simple and straightforward way to check if BPF verifier supports rotating behavior.
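In other words, a loader wanting the old semantics would request it alongside the level, along these lines (BPF_LOG_FIXED == 8 per the above):

  attr.log_level = 1 | BPF_LOG_FIXED; /* level 1, fixed (non-rotating) log */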
On the implementation side, conceptually, it's all simple. We maintain 64-bit logical start and end positions. If we need to truncate the log, start position will be adjusted accordingly to lag end position by N bytes. We then use those logical positions to calculate their matching actual positions in user buffer and handle wrap around the end of the buffer properly. Finally, right before returning from bpf_check(), we rotate user log buffer contents in-place as necessary, to make log contents contiguous. See comments in relevant functions for details.
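A sketch of the bookkeeping described above (illustrative, not the exact kernel code):

  /* start_pos/end_pos are ever-growing u64 logical offsets; the byte at
   * logical position pos lives at pos % len_total once the log wraps
   */
  static u32 log_phys_pos(const struct bpf_verifier_log *log, u64 pos)
  {
          return pos % log->len_total;
  }

  /* on overflow, drop the oldest bytes by advancing start_pos */
  if (log->end_pos - log->start_pos > log->len_total)
          log->start_pos = log->end_pos - log->len_total;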
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-4-andrii@kernel.org
|
#
4294a0a7 |
| 06-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
bpf: Split off basic BPF verifier log into separate file
kernel/bpf/verifier.c file is large and growing larger all the time. So it's good to start splitting off more or less self-contained parts into separate files to keep source code size somewhat under control.
This patch is one step in this direction, moving some of the BPF verifier log routines into a separate kernel/bpf/log.c. Right now it's the most low-level and isolated routines to append data to the log, reset the log to a previous position, etc. Eventually we could probably move verifier state printing logic here as well, but this patch doesn't attempt to do that yet.
Subsequent patches will add more logic to verifier log management, so having basics in a separate file will make sure verifier.c doesn't grow more with new changes.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Lorenz Bauer <lmb@isovalent.com>
Link: https://lore.kernel.org/bpf/20230406234205.323208-2-andrii@kernel.org
|
#
53e59b5c |
| 11-Apr-2023 |
Mark Brown <broonie@kernel.org> |
Add support for Rockchip RK860X regulators
Merge series from Cristian Ciocaltea <cristian.ciocaltea@collabora.com>:
This patch series introduces support for the Rockchip RK860X regulators, while also providing a few fixes and improvements to the existing fan53555 driver.
RK8600/RK8601 are quite similar to the FAN53555 regulators.
RK8602/RK8603 are a bit different, having a wider output voltage selection range, from 0.5 V to 1.5 V in 6.25 mV steps. They are used in the Rock 5B board to power the ARM Cortex-A76 cores and the NPU.
|
#
ea68a3e9 |
| 11-Apr-2023 |
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> |
Merge drm/drm-next into drm-intel-gt-next
Need to pull in commit from drm-next (earlier in drm-intel-next):
1eca0778f4b3 ("drm/i915: add struct i915_dsm to wrap dsm members together")
In order to merge the following patch to drm-intel-gt-next:
https://patchwork.freedesktop.org/patch/530942/?series=114925&rev=6
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
|
#
039535ec |
| 10-Apr-2023 |
Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
Merge 6.3-rc6 into tty-next
We need the tty/serial fixes in here for testing.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
5790d407 |
| 10-Apr-2023 |
Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
Merge 6.3-rc6 into char-misc-next
We need it here to apply other char/misc driver changes to.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
#
4daf0b32 |
| 06-Apr-2023 |
Alexei Starovoitov <ast@kernel.org> |
Merge branch 'bpf: Improve verifier for cond_op and spilled loop index variables'
Yonghong Song says:
====================
LLVM commit [1] introduced a hoistMinMax optimization that rewrites

  (i < VIRTIO_MAX_SGS) && (i < out_sgs)

to

  upper = MIN(VIRTIO_MAX_SGS, out_sgs)
  ... i < upper ...

and caused the verification failure. Commit [2] worked around the issue by adding some bpf assembly code to prohibit the above optimization. This patch set improves the verifier such that verification can succeed without the above workaround.
Without [2], the current verifier will hit the following failures:

  ...
  119: (15) if r1 == 0x0 goto pc+1
  The sequence of 8193 jumps is too complex.
  verification time 525829 usec
  stack depth 64
  processed 156616 insns (limit 1000000) max_states_per_insn 8 total_states 1754 peak_states 1712 mark_read 12
  -- END PROG LOAD LOG --
  libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -14
  libbpf: failed to load object 'loop6.bpf.o'
  ...

The failure is due to the verifier inadequately handling '<const> <cond_op> <non_const>', which will go through both paths and generate the following verification states:

  ...
  89: (07) r2 += 1                      ; R2_w=5
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0_w=scalar() R1_w=scalar(umin=6,umax=5,var_off=(0x4; 0x3)) R2_w=5 R6_w=scalar(id=385) R7_w=0 R8_w=scalar() R9_w=scalar(umax=21474836475,var_off=(0x0; 0x7ffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40_w=4 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm
  ...
  89: (07) r2 += 1                      ; R2_w=6
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0_w=scalar() R1_w=scalar(umin=7,umax=5,var_off=(0x4; 0x3)) R2_w=6 R6=scalar(id=388) R7=0 R8_w=scalar() R9_w=scalar(umax=25769803770,var_off=(0x0; 0x7ffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=5 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm
  ...
  89: (07) r2 += 1                      ; R2_w=4088
  90: (79) r8 = *(u64 *)(r10 -48)       ; R8_w=scalar() R10=fp0
  91: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  92: (ad) if r2 < r1 goto pc+41        ; R0=scalar() R1=scalar(umin=4089,umax=5,var_off=(0x0; 0x7)) R2=4088 R6=scalar(id=12634) R7=0 R8=scalar() R9=scalar(umax=17557826301960,var_off=(0x0; 0xfffffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=4087 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm
Patch 3 fixed the above issue by handling '<const> <cond_op> <non_const>' properly. While developing selftests for Patch 3, I found some issues with bound deduction for BPF_EQ/BPF_NE and fixed them in Patch 1.
After the above issue is fixed, the second issue shows up:

  ...
  67: (07) r1 += -16                    ; R1_w=fp-16
  ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i);
  68: (b4) w2 = 8                       ; R2_w=8
  69: (85) call bpf_probe_read_kernel#113       ; R0_w=scalar() fp-16=mmmmmmmm
  ; return sgp;
  70: (79) r6 = *(u64 *)(r10 -16)       ; R6=scalar() R10=fp0
  ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX);
  71: (15) if r6 == 0x0 goto pc-49      ; R6=scalar()
  72: (b4) w1 = 0                       ; R1_w=0
  73: (05) goto pc-46
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  28: (bc) w7 = w1                      ; R1_w=0 R7_w=0
  ; bpf_probe_read_kernel(&len, sizeof(len), &sgp->length);
  ...
  23: (79) r3 = *(u64 *)(r10 -40)       ; R3_w=2 R10=fp0
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  24: (07) r3 += 1                      ; R3_w=3
  ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) {
  25: (79) r1 = *(u64 *)(r10 -56)       ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0
  26: (ad) if r3 < r1 goto pc+34
  61: R0=scalar() R1_w=scalar(umin=4,umax=5,var_off=(0x4; 0x1)) R3_w=3 R6=scalar(id=1658) R7=0 R8=scalar(id=1653) R9=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=2 fp-56= fp-64=mmmmmmmm
  ; if (sg_is_chain(&sg))
  61: (7b) *(u64 *)(r10 -40) = r3       ; R3_w=3 R10=fp0 fp-40_w=3
  ...
  67: (07) r1 += -16                    ; R1_w=fp-16
  ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i);
  68: (b4) w2 = 8                       ; R2_w=8
  69: (85) call bpf_probe_read_kernel#113       ; R0_w=scalar() fp-16=mmmmmmmm
  ; return sgp;
  70: (79) r6 = *(u64 *)(r10 -16)
  ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX);
  infinite loop detected at insn 71
  verification time 90800 usec
  stack depth 64
  processed 25017 insns (limit 1000000) max_states_per_insn 20 total_states 491 peak_states 169 mark_read 12
  -- END PROG LOAD LOG --
  libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -22
Further analysis found that the index variable 'i' is spilled but not marked as precise. This is more tricky, as identifying an induction variable is not easy in the verifier. Although a heuristic is possible, let us leave it for now.
[1] https://reviews.llvm.org/D143726
[2] Commit 3c2611bac08a ("selftests/bpf: Fix trace_virtqueue_add_sgs test issue with LLVM 17.")
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
953d9f5b |
| 06-Apr-2023 |
Yonghong Song <yhs@fb.com> |
bpf: Improve handling of pattern '<const> <cond_op> <non_const>' in verifier
Currently, the verifier does not handle '<const> <cond_op> <non_const>' well. For example:

  ...
  10: (79) r1 = *(u64 *)(r10 -16)       ; R1_w=scalar() R10=fp0
  11: (b7) r2 = 0                       ; R2_w=0
  12: (2d) if r2 > r1 goto pc+2
  13: (b7) r0 = 0
  14: (95) exit
  15: (65) if r1 s> 0x1 goto pc+3
  16: (0f) r0 += r1
  ...

At insn 12, the verifier decides that both the true and false branches are possible, but actually only the false branch is possible.
Currently, the verifier already supports the pattern '<non_const> <cond_op> <const>'. Add support for the pattern '<const> <cond_op> <non_const>' in a similar way.
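The natural way to reuse the existing logic is to swap the operands and flip the comparison, so '<const> <cond_op> <non_const>' becomes '<non_const> <flipped_op> <const>'; a sketch of the mapping (helper name illustrative):

  /* 'K <op> rX'  ==>  'rX <flip(op)> K' */
  static u8 flip_cond_op(u8 op)
  {
          switch (op) {
          case BPF_JGT:  return BPF_JLT;
          case BPF_JLT:  return BPF_JGT;
          case BPF_JGE:  return BPF_JLE;
          case BPF_JLE:  return BPF_JGE;
          case BPF_JSGT: return BPF_JSLT;
          case BPF_JSLT: return BPF_JSGT;
          case BPF_JSGE: return BPF_JSLE;
          case BPF_JSLE: return BPF_JSGE;
          default:       return op; /* JEQ/JNE/JSET are symmetric */
          }
  }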
Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10' due to this change.
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230406164505.1046801-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
13fbcee5 |
| 06-Apr-2023 |
Yonghong Song <yhs@fb.com> |
bpf: Improve verifier JEQ/JNE insn branch taken checking
Currently, for BPF_JEQ/BPF_JNE insns, the verifier determines whether the branch is taken only if both operands are constants. Therefore, for the following code snippet,

  0: (85) call bpf_ktime_get_ns#5       ; R0_w=scalar()
  1: (a5) if r0 < 0x3 goto pc+2         ; R0_w=scalar(umin=3)
  2: (b7) r2 = 2                        ; R2_w=2
  3: (1d) if r0 == r2 goto pc+2
  6
At insn 3, since r0 is not a constant, the verifier assumes both branches can be taken, which may lead to an improper verification failure.
Add a comparison between the umin/umax values and the constant. If the umin value is greater than the constant, or the umax value is smaller than the constant, for JEQ the branch must not be taken, and for JNE the branch must be taken. The jmp32 mode JEQ/JNE branch-taken checking is handled similarly.
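A sketch of the added range check (illustrative; the real logic lives in the verifier's branch-taken helpers):

  /* 'if rX == K' with rX in [umin, umax]: K outside the range means the
   * branch can never be taken; for 'if rX != K' it must then be taken
   */
  if (reg->umin_value > val || reg->umax_value < val)
          return opcode == BPF_JEQ ? 0 : 1;
  return -1; /* otherwise unknown: explore both branches */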
The following lists the veristat results w.r.t. the changed number of processed insns during verification:
File                                         Program               Insns (A)  Insns (B)  Insns (DIFF)
-------------------------------------------  --------------------  ---------  ---------  ---------------
test_cls_redirect.bpf.linked3.o              cls_redirect              64980      73472  +8492 (+13.07%)
test_seg6_loop.bpf.linked3.o                 __add_egr_x               12425      12423      -2 (-0.02%)
test_tcp_hdr_options.bpf.linked3.o           estab                      2634       2558     -76 (-2.89%)
test_parse_tcp_hdr_opt.bpf.linked3.o         xdp_ingress_v6             1421       1420      -1 (-0.07%)
test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o  xdp_ingress_v6             1238       1237      -1 (-0.08%)
test_tc_dtime.bpf.linked3.o                  egress_fwdns_prio100        414        411      -3 (-0.72%)
Mostly a small improvement, but test_cls_redirect.bpf.linked3.o has a 13% regression. I checked the verifier log and found this is due to pruning. For some JEQ/JNE branches impacted by this patch, one branch is explored and the other is pruned due to state equivalence.
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230406164455.1045294-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
Revision tags: v6.1.23 |
|
#
e8f59d84 |
| 04-Apr-2023 |
Andrii Nakryiko <andrii@kernel.org> |
Merge branch 'bpf: Follow up to RCU enforcement in the verifier.'
Alexei Starovoitov says:
====================
From: Alexei Starovoitov <ast@kernel.org>
The patch set is addressing a fallout from commit 6fcd486b3a0a ("bpf: Refactor RCU enforcement in the verifier."). It was too aggressive with PTR_UNTRUSTED marks. Patches 1-6 are cleanups and add verifier smartness to address real use cases in bpf programs that broke with too aggressive PTR_UNTRUSTED. The partial revert is done in patch 7 anyway.
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
#
afeebf9f |
| 03-Apr-2023 |
Alexei Starovoitov <ast@kernel.org> |
bpf: Undo strict enforcement for walking untagged fields.
The commit 6fcd486b3a0a ("bpf: Refactor RCU enforcement in the verifier.") broke several tracing bpf programs. Even in clang compiled kernels there are many fields that are not marked with __rcu that are safe to read and pass into helpers, but the verifier doesn't know that they're safe. Aggressively marking them as PTR_UNTRUSTED was premature.
Fixes: 6fcd486b3a0a ("bpf: Refactor RCU enforcement in the verifier.")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230404045029.82870-8-alexei.starovoitov@gmail.com
|
#
30ee9821 |
| 03-Apr-2023 |
Alexei Starovoitov <ast@kernel.org> |
bpf: Allowlist few fields similar to __rcu tag.
Allow bpf programs to access cgrp->kn, mm->exe_file, skb->sk, req->sk.
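These allowlists are expressed as BTF-typed field sets in the verifier; a sketch of the shape (macro names and per-field groupings here are illustrative, not quoted from the patch):

  BTF_TYPE_SAFE_RCU(struct cgroup) {
          struct kernfs_node *kn;
  };

  BTF_TYPE_SAFE_RCU_OR_NULL(struct mm_struct) {
          struct file __rcu *exe_file;
  };

  /* skb->sk and req->sk can be treated similarly because sockets
   * reachable by bpf progs are SOCK_RCU_FREE (assumption)
   */
  BTF_TYPE_SAFE_RCU_OR_NULL(struct sk_buff) {
          struct sock *sk;
  };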
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230404045029.82870-7-alexei.starovoitov@gmail.com
|
#
add68b84 |
| 03-Apr-2023 |
Alexei Starovoitov <ast@kernel.org> |
bpf: Refactor NULL-ness check in check_reg_type().
check_reg_type() unconditionally disallows PTR_TO_BTF_ID | PTR_MAYBE_NULL. It's problematic for helpers that allow ARG_PTR_TO_BTF_ID_OR_NULL like bpf_sk_storage_get(). Allow passing PTR_TO_BTF_ID | PTR_MAYBE_NULL into such helpers. That technically includes bpf_kptr_xchg() helper, but in practice: bpf_kptr_xchg(..., bpf_cpumask_create()); is still disallowed because bpf_cpumask_create() returns ref counted pointer with ref_obj_id > 0.
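A sketch of the relaxed check (illustrative):

  /* in check_reg_type(): previously PTR_TO_BTF_ID | PTR_MAYBE_NULL was
   * rejected outright; instead, only reject a NULL-able register when
   * the helper's arg type does not itself permit NULL
   */
  if (base_type(reg->type) == PTR_TO_BTF_ID &&
      (reg->type & PTR_MAYBE_NULL) && !(arg_type & PTR_MAYBE_NULL))
          goto reject;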
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230404045029.82870-6-alexei.starovoitov@gmail.com
|
#
63260df1 |
| 03-Apr-2023 |
Alexei Starovoitov <ast@kernel.org> |
bpf: Refactor btf_nested_type_is_trusted().
btf_nested_type_is_trusted() tries to find a struct member at corresponding offset. It works for flat structures and falls apart in more complex structs with nested structs. The offset->member search is already performed by btf_struct_walk() including nested structs. Reuse this work and pass {field name, field btf id} into btf_nested_type_is_trusted() instead of offset to make BTF_TYPE_SAFE*() logic more robust.
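A sketch of the interface change as described (signatures approximate):

  /* before: search members by byte offset; flat structs only */
  bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
                                  const struct bpf_reg_state *reg,
                                  int off);

  /* after: btf_struct_walk() already resolved the member, so take its
   * name and BTF id directly, nested structs included
   */
  bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
                                  const struct bpf_reg_state *reg,
                                  const char *field_name, u32 btf_id);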
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230404045029.82870-4-alexei.starovoitov@gmail.com
|