7cad3174 | 12-Aug-2024 | Yonghong Song <yonghong.song@linux.dev>

bpf: Fix a kernel verifier crash in stacksafe()

commit bed2eb964c70b780fb55925892a74f26cb590b25 upstream.
Daniel Hodges reported a kernel verifier crash when playing with sched-ext. Further investigation shows that the crash is due to invalid memory access in stacksafe(). More specifically, it is the following code:

  if (exact != NOT_EXACT &&
      old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
      cur->stack[spi].slot_type[i % BPF_REG_SIZE])
          return false;

The 'i' iterates old->allocated_stack. If cur->allocated_stack < old->allocated_stack, an out-of-bounds access will happen.

To fix the issue, add an 'i >= cur->allocated_stack' check such that if the condition is true, stacksafe() fails. Otherwise, the cur->stack[spi].slot_type[i % BPF_REG_SIZE] memory access is legal.
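
A minimal sketch of the resulting check, assuming the loop shape quoted above (the exact upstream hunk may differ slightly):

  /* stacksafe() -- sketch of the guard described above */
  for (i = 0; i < old->allocated_stack; i++) {
          spi = i / BPF_REG_SIZE;

          if (exact != NOT_EXACT &&
              (i >= cur->allocated_stack ||   /* new: cur's stack may be smaller */
               old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
               cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
                  return false;
          /* ... */
  }
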

Fixes: 2793a8b015f7 ("bpf: exact states comparison for iterator convergence checks")
Cc: Eduard Zingerman <eddyz87@gmail.com>
Reported-by: Daniel Hodges <hodgesd@meta.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240812214847.213612-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
[ shung-hsi.yu: "exact" variable is bool instead of enum because commit 4f81c16f50ba ("bpf: Recognize that two registers are safe when their ranges match") is not present. ]
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

d60c7ab6 | 10-Jul-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.38' into dev-6.6

This is the 6.6.38 stable release

f91ca89e | 10-Jul-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.37' into dev-6.6

This is the 6.6.37 stable release

e3540e5a | 09-Jul-2024 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Revert "bpf: Take return from set_memory_ro() into account with bpf_prog_lock_ro()"

This reverts commit fdd411af8178edc6b7bf260f8fa4fba1bedd0a6d which is commit 7d2cc63eca0c993c99d18893214abf8f85d566d8 upstream.
It is part of a series that is reported to both break the arm64 builds and instantly crash powerpc systems at the first load of a bpf program. So revert it for now until it can come back in a safe way.

Reported-by: matoro <matoro_mailinglist_kernel@matoro.tk>
Reported-by: Vitaly Chikunov <vt@altlinux.org>
Reported-by: WangYuli <wangyuli@uniontech.com>
Link: https://lore.kernel.org/r/5A29E00D83AB84E3+20240706031101.637601-1-wangyuli@uniontech.com
Link: https://lore.kernel.org/r/cf736c5e37489e7dc7ffd67b9de2ab47@matoro.tk
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Song Liu <song@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Kees Cook <keescook@chromium.org>
Cc: Puranjay Mohan <puranjay12@gmail.com>
Cc: Ilya Leoshkevich <iii@linux.ibm.com> # s390x
Cc: Tiezhu Yang <yangtiezhu@loongson.cn> # LoongArch
Cc: Johan Almbladh <johan.almbladh@anyfinetworks.com> # MIPS Part
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

fdd411af | 07-Mar-2024 | Christophe Leroy <christophe.leroy@csgroup.eu>

bpf: Take return from set_memory_ro() into account with bpf_prog_lock_ro()

[ Upstream commit 7d2cc63eca0c993c99d18893214abf8f85d566d8 ]
set_memory_ro() can fail, leaving memory unprotected.
Check its return and take it into account as an error.
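
A minimal sketch of the idea, assuming the helper keeps its current shape (CONFIG guards and the matching caller updates are omitted; treat the exact body as an assumption):

  /* include/linux/filter.h -- sketch: surface the set_memory_ro() error */
  static inline int bpf_prog_lock_ro(struct bpf_prog *fp)
  {
          if (!fp->jited) {
                  set_vm_flush_reset_perms(fp);
                  return set_memory_ro((unsigned long)fp, fp->pages);
          }
          return 0;
  }

Callers can then fail program finalization instead of silently running with writable program memory.
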

Link: https://github.com/KSPP/linux/issues/7
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: linux-hardening@vger.kernel.org
Reviewed-by: Kees Cook <keescook@chromium.org>
Message-ID: <286def78955e04382b227cb3e4b6ba272a7442e3.1709850515.git.christophe.leroy@csgroup.eu>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

8d02ead6 | 15-Jun-2024 | Yonghong Song <yonghong.song@linux.dev>

bpf: Add missed var_off setting in coerce_subreg_to_size_sx()

[ Upstream commit 44b7f7151dfc2e0947f39ed4b9bc4b0c2ccd46fc ]
In coerce_subreg_to_size_sx(), for the case where the upper sign extension bits are the same for the smax32 and smin32 values, we missed setting var_off properly. This is especially problematic if both smax32's and smin32's sign extension bits are 1.
The following is a simple example illustrating the inconsistent verifier states due to missed var_off:

  0: (85) call bpf_get_prandom_u32#7 ; R0_w=scalar()
  1: (bf) r3 = r0                    ; R0_w=scalar(id=1) R3_w=scalar(id=1)
  2: (57) r3 &= 15                   ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=15,var_off=(0x0; 0xf))
  3: (47) r3 |= 128                  ; R3_w=scalar(smin=umin=smin32=umin32=128,smax=umax=smax32=umax32=143,var_off=(0x80; 0xf))
  4: (bc) w7 = (s8)w3
  REG INVARIANTS VIOLATION (alu): range bounds violation u64=[0xffffff80, 0x8f] s64=[0xffffff80, 0x8f] u32=[0xffffff80, 0x8f] s32=[0x80, 0xffffff8f] var_off=(0x80, 0xf)

The var_off=(0x80, 0xf) is not correct; the correct one should be var_off=(0xffffff80; 0xf) since from insn 3 we know that at insn 4 the sign extension bits will be 1. This patch fixes the issue by setting var_off properly.
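
A sketch of the missing assignment for the equal-sign-bits case, using the kernel's tnum helpers (surrounding bounds updates abridged; the exact placement is an assumption):

  /* coerce_subreg_to_size_sx() -- sketch */
  if ((s32_min >= 0) == (s32_max >= 0)) {
          reg->s32_min_value = s32_min;
          reg->s32_max_value = s32_max;
          /* ... u32/u64 bounds updated accordingly ... */
          /* new: derive var_off from the known 32-bit range, so that
           * the sign extension bits (e.g. 0xffffff80 above) are reflected
           */
          reg->var_off = tnum_subreg(tnum_range((u32)s32_min, (u32)s32_max));
          return;
  }
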

Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240615174632.3995278-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

185dca87 | 15-Jun-2024 | Yonghong Song <yonghong.song@linux.dev>

bpf: Add missed var_off setting in set_sext32_default_val()

[ Upstream commit 380d5f89a4815ff88461a45de2fb6f28533df708 ]
Zac reported a verification failure and Alexei reproduced the issue with a simple reproducer ([1]). The verification failure is due to a missed var_off setting.
The following is the reproducer in [1]:

  0: R1=ctx() R10=fp0
  0: (71) r3 = *(u8 *)(r10 -387) ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff)) R10=fp0
  1: (bc) w7 = (s8)w3            ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff)) R7_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))
  2: (36) if w7 >= 0x2533823b goto pc-3
  mark_precise: frame0: last_idx 2 first_idx 0 subseq_idx -1
  mark_precise: frame0: regs=r7 stack= before 1: (bc) w7 = (s8)w3
  mark_precise: frame0: regs=r3 stack= before 0: (71) r3 = *(u8 *)(r10 -387)
  2: R7_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=127,var_off=(0x0; 0x7f))
  3: (b4) w0 = 0                 ; R0_w=0
  4: (95) exit

Note that after insn 1, the var_off for R7 is (0x0; 0x7f). This is not correct since the upper 24 bits of w7 could be 0 or 1. So the correct var_off should be (0x0; 0xffffffff). The missing var_off setting in set_sext32_default_val() caused later incorrect analysis in zext_32_to_64(dst_reg) and reg_bounds_sync(dst_reg).
To fix the issue, set var_off correctly in set_sext32_default_val(). The correct reg state after insn 1 becomes:

  1: (bc) w7 = (s8)w3 ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff)) R7_w=scalar(smin=0,smax=umax=0xffffffff,smin32=-128,smax32=127,var_off=(0x0; 0xffffffff))

and at insn 2, the verifier correctly determines that either branch is possible.
[1] https://lore.kernel.org/bpf/CAADnVQLPU0Shz7dWV4bn2BgtGdxN3uFHPeobGBA72tpg5Xoykw@mail.gmail.com/
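
A sketch of the fix under the same assumption: when nothing better is known about the sign-extended subregister, the lower 32 bits must be treated as fully unknown rather than keeping a stale var_off (surrounding bounds setup abridged):

  /* set_sext32_default_val() -- sketch */
  static void set_sext32_default_val(struct bpf_reg_state *reg, int size)
  {
          /* ... smin32/smax32/umin32/umax32 set to the widest s8/s16 range ... */
          reg->var_off = tnum_subreg(tnum_unknown); /* new: (0x0; 0xffffffff) */
  }
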

Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Reported-by: Zac Ecob <zacecob@protonmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240615174626.3994813-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

6c71a057 | 23-Jun-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.35' into dev-6.6

This is the 6.6.35 stable release

Revision tags: v6.6.8, v6.6.7, v6.6.6, v6.6.5

2ad2f2ed | 04-Dec-2023 | Hou Tao <houtao1@huawei.com>

bpf: Optimize the free of inner map

[ Upstream commit af66bfd3c8538ed21cf72af18426fc4a408665cf ]
When removing the inner map from the outer map, the inner map will be freed after one RCU grace period and one RCU tasks trace grace period, so it is certain that the bpf program, which may access the inner map, has exited before the inner map is freed.
However, there is no need to wait for one RCU tasks trace grace period if the outer map is only accessed by non-sleepable programs. So add sleepable_refcnt in bpf_map and increase sleepable_refcnt when adding the outer map into env->used_maps for a sleepable program. Although the max number of bpf programs is INT_MAX - 1, the number of bpf programs that are being loaded may be greater than INT_MAX, so use atomic64_t instead of atomic_t for sleepable_refcnt. When removing the inner map from the outer map, use sleepable_refcnt to decide whether or not an RCU tasks trace grace period is needed before freeing the inner map.
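
A sketch of the scheme, with the field name taken from the text above and the free-path helpers treated as placeholders:

  struct bpf_map {
          /* ... */
          atomic64_t sleepable_refcnt; /* outer-map uses by sleepable progs */
  };

  /* on inner-map removal from the outer map -- sketch */
  if (atomic64_read(&outer_map->sleepable_refcnt))
          /* sleepable progs may still walk the inner map: wait for both a
           * normal RCU GP and an RCU tasks trace GP before freeing
           */
          free_after_rcu_and_tasks_trace_gp(inner_map); /* placeholder */
  else
          free_after_rcu_gp(inner_map);                 /* placeholder */
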

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20231204140425.1480317-6-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Stable-dep-of: 2884dc7d08d9 ("bpf: Fix a potential use-after-free in bpf_link_free()")
Signed-off-by: Sasha Levin <sashal@kernel.org>

b181f702 | 12-Jun-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.33' into dev-6.6

This is the 6.6.33 stable release

000a65bf | 27-May-2024 | Jakub Sitnicki <jakub@cloudflare.com>

bpf: Allow delete from sockmap/sockhash only if update is allowed

[ Upstream commit 98e948fb60d41447fd8d2d0c3b8637fc6b6dc26d ]
We have seen an influx of syzkaller reports where a BPF program attached to a tracepoint triggers a locking rule violation by performing a map_delete on a sockmap/sockhash.
We don't intend to support this artificial use scenario. Extend the existing verifier allowed-program-type check for updating sockmap/sockhash to also cover deleting from a map.
From now on only BPF programs which were previously allowed to update sockmap/sockhash can delete from these map types.
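
A sketch of the extended verifier check, reusing the existing update-path predicate (may_update_sockmap() already exists in the verifier; its exact use for deletes here is an assumption):

  /* check_map_func_compatibility() -- sketch */
  case BPF_FUNC_map_delete_elem:
          if ((map->map_type == BPF_MAP_TYPE_SOCKMAP ||
               map->map_type == BPF_MAP_TYPE_SOCKHASH) &&
              !may_update_sockmap(env, func_id))
                  goto error; /* same rejection path as updates */
          break;
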

Fixes: ff9105993240 ("bpf, sockmap: Prevent lock inversion deadlock in map delete elem")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Reported-by: syzbot+ec941d6e24f633a59172@syzkaller.appspotmail.com
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: syzbot+ec941d6e24f633a59172@syzkaller.appspotmail.com
Acked-by: John Fastabend <john.fastabend@gmail.com>
Closes: https://syzkaller.appspot.com/bug?extid=ec941d6e24f633a59172
Link: https://lore.kernel.org/bpf/20240527-sockmap-verify-deletes-v1-1-944b372f2101@cloudflare.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

39f8a293 | 26-Apr-2024 | Alexei Starovoitov <ast@kernel.org>

bpf: Fix verifier assumptions about socket->sk

[ Upstream commit 0db63c0b86e981a1e97d2596d64ceceba1a5470e ]
The verifier assumes that 'sk' field in 'struct socket' is valid and non-NULL when 'socket' pointer itself is trusted and non-NULL. That may not be the case when socket was just created and passed to LSM socket_accept hook. Fix this verifier assumption and adjust tests.
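
From the BPF-program side, the practical consequence is that a trusted 'socket' pointer no longer implies a non-NULL 'sk'. An illustrative example of the check a program now needs (hook choice and surrounding details are assumptions):

  /* illustrative LSM program -- socket->sk must be NULL-checked */
  SEC("lsm/socket_accept")
  int BPF_PROG(on_accept, struct socket *sock, struct socket *newsock)
  {
          struct sock *sk = newsock->sk;

          if (!sk) /* newsock->sk may still be NULL at this point */
                  return 0;
          /* ... safe to dereference sk here ... */
          return 0;
  }
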

Reported-by: Liam Wisehart <liamwisehart@meta.com>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Fixes: 6fcd486b3a0a ("bpf: Refactor RCU enforcement in the verifier.")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20240427002544.68803-1-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

b1759238 | 04-Apr-2024 | Andrii Nakryiko <andrii@kernel.org>

bpf: prevent r10 register from being marked as precise

[ Upstream commit 1f2a74b41ea8b902687eb97c4e7e3f558801865b ]
r10 is a special register that is not under the BPF program's control and is always effectively precise. The rest of the precision logic assumes that only r0-r9 SCALAR registers are marked as precise, so prevent r10 from being marked precise.
This can happen due to the signed cast instruction, which allows something like `r0 = (s8)r10;`; if r0 later needs to be precise, this would lead to an attempt to mark r10 as precise.
Prevent this with an extra check during instruction backtracking.
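
A sketch of the extra guard; the exact placement inside the backtracking code and the error handling are assumptions:

  /* backtrack_insn() -- sketch: never mark r10 precise */
  if (sreg == BPF_REG_FP) {
          /* r10 is read-only and always effectively precise; refuse to
           * track it instead of corrupting the precision state
           */
          return -ENOTSUPP;
  }
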

Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Reported-by: syzbot+148110ee7cf72f39f33e@syzkaller.appspotmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240404214536.3551295-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

e0d77d0f | 19-May-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.31' into dev-6.6

This is the 6.6.31 stable release

fe4bfff1 | 12-Apr-2024 | Anton Protopopov <aspsk@isovalent.com>

bpf: Fix a verifier verbose message

[ Upstream commit 37eacb9f6e89fb399a79e952bc9c78eb3e16290e ]
Long ago, a map file descriptor in a pseudo ldimm64 instruction could only be present as an immediate value in insn[0].imm, and thus this value was used in the verbose verifier message printed when the file descriptor wasn't valid. Since the addition of BPF_PSEUDO_MAP_IDX_VALUE/BPF_PSEUDO_MAP_IDX, the insn[0].imm field can also contain an index pointing to the file descriptor in the attr.fd_array array. However, if the file descriptor is invalid, the verifier still prints a verbose message containing the value of insn[0].imm. Patch the verifier message to always print the actual file descriptor value.
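
A sketch of the message fix: print the resolved descriptor instead of the raw immediate (variable names and the user-copy details for fd_array are elided/assumed; in the fd_array case insn[0].imm is only an index):

  /* resolve_pseudo_ldimm64() -- sketch */
  if (insn[0].src_reg == BPF_PSEUDO_MAP_IDX ||
      insn[0].src_reg == BPF_PSEUDO_MAP_IDX_VALUE)
          fd = fd_array[insn[0].imm]; /* imm is an index, not an fd */
  else
          fd = insn[0].imm;           /* imm is the fd itself */

  map = __bpf_map_get(f);
  if (IS_ERR(map)) {
          verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
          return PTR_ERR(map);
  }
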

Fixes: 387544bfa291 ("bpf: Introduce fd_idx")
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240412141100.3562942-1-aspsk@isovalent.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

86aa961b | 10-Apr-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.26' into dev-6.6

This is the 6.6.26 stable release

3f0784b2 | 26-Mar-2024 | Andrei Matei <andreimatei1@gmail.com>

bpf: Protect against int overflow for stack access size

[ Upstream commit ecc6a2101840177e57c925c102d2d29f260d37c8 ]
This patch re-introduces protection against the size of access to stack memory being negative; the access size can appear negative as a result of overflowing its signed int representation. This should not actually happen, as there are other protections along the way, but we should protect against it anyway. One code path was missing such protections (fixed in the previous patch in the series), causing out-of-bounds array accesses in check_stack_range_initialized(). This patch causes the verification of a program with such a non-sensical access size to fail.
This check used to exist in a more indirect way, but was inadvertently removed in a833a17aeac7.
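
A sketch of the re-introduced guard (the message wording and error code are assumptions):

  /* check_stack_range_initialized() -- sketch */
  if (access_size < 0) {
          /* the size overflowed its signed int representation; reject
           * instead of indexing the stack with a negative length
           */
          verbose(env, "invalid negative access size %d\n", access_size);
          return -EINVAL;
  }
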

Fixes: a833a17aeac7 ("bpf: Fix verification of indirect var-off stack access")
Reported-by: syzbot+33f4297b5f927648741a@syzkaller.appspotmail.com
Reported-by: syzbot+aafd0513053a1cbf52ef@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/bpf/CAADnVQLORV5PT0iTAhRER+iLBTkByCYNBYyvBSgjN1T31K+gOw@mail.gmail.com/
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Andrei Matei <andreimatei1@gmail.com>
Link: https://lore.kernel.org/r/20240327024245.318299-3-andreimatei1@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

f8b04488 | 17-Mar-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.22' into dev-6.6

Linux 6.6.22

ff4d6006 | 22-Feb-2024 | Eduard Zingerman <eddyz87@gmail.com>

bpf: check bpf_func_state->callback_depth when pruning states

[ Upstream commit e9a8e5a587ca55fec6c58e4881742705d45bee54 ]
When comparing current and cached states, the verifier should consider bpf_func_state->callback_depth. The current state cannot be pruned against a cached state when the current state has more iterations left than the cached one. The current state has more iterations left when its callback_depth is smaller.
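
A sketch of the check in the pruning path ('old' is the cached state, 'cur' the current one; placement in func_states_equal() is an assumption):

  /* sketch: don't prune while cur still has more iterations to explore */
  if (old->callback_depth > cur->callback_depth)
          return false;
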
Below is an example illustrating this bug, minimized from mailing list discussion [0] (assume that BPF_F_TEST_STATE_FREQ is set). The example is not a safe program: if loop_cb point (1) is followed by loop_cb point (2), then division by zero is possible at point (4).

  struct ctx { __u64 a; __u64 b; __u64 c; };

  static void loop_cb(int i, struct ctx *ctx)
  {
          /* assume that generated code is "fallthrough-first":
           * if ... == 1 goto
           * if ... == 2 goto
           * <default>
           */
          switch (bpf_get_prandom_u32()) {
          case 1: /* 1 */ ctx->a = 42; break;
          case 2: /* 2 */ ctx->b = 42; break;
          default: /* 3 */ ctx->c = 42; break;
          }
  }

  SEC("tc")
  __failure __flag(BPF_F_TEST_STATE_FREQ)
  int test(struct __sk_buff *skb)
  {
          struct ctx ctx = { 7, 7, 7 };

          bpf_loop(2, loop_cb, &ctx, 0);           /* 0 */
          /* assume generated checks are in-order: .a first */
          if (ctx.a == 42 && ctx.b == 42 && ctx.c == 7)
                  asm volatile("r0 /= 0;":::"r0"); /* 4 */
          return 0;
  }

Prior to this commit, the verifier built the following checkpoint tree for this example:

  .------------------------------------- Checkpoint / State name
  |    .-------------------------------- Code point number
  |    |   .---------------------------- Stack state {ctx.a,ctx.b,ctx.c}
  |    |   |             .-------------- Callback depth in frame #0
  v    v   v             v
     - (0) {7P,7P,7},depth=0
       - (3) {7P,7P,7},depth=1
         - (0) {7P,7P,42},depth=1
           - (3) {7P,7,42},depth=2
             - (0) {7P,7,42},depth=2    loop terminates because of depth limit
               - (4) {7P,7,42},depth=0  predicted false, ctx.a marked precise
                 - (6) exit
  (a)      - (2) {7P,7,42},depth=2
             - (0) {7P,42,42},depth=2   loop terminates because of depth limit
               - (4) {7P,42,42},depth=0 predicted false, ctx.a marked precise
                 - (6) exit
  (b)    - (1) {7P,7P,42},depth=2
           - (0) {42P,7P,42},depth=2    loop terminates because of depth limit
             - (4) {42P,7P,42},depth=0  predicted false, ctx.{a,b} marked precise
               - (6) exit
       - (2) {7P,7,7},depth=1   considered safe, pruned using checkpoint (a)
  (c)  - (1) {7P,7P,7},depth=1  considered safe, pruned using checkpoint (b)

Here checkpoint (b) has a callback_depth of 2, meaning that it would never reach state {42,42,7}, while checkpoint (c) has a callback_depth of 1 and thus could yet explore the state {42,42,7} if not pruned prematurely. This commit forbids such premature pruning, allowing the verifier to explore the states sub-tree starting at (c):

  (c)  - (1) {7,7,7P},depth=1
         - (0) {42P,7,7P},depth=1
         ...
         - (2) {42,7,7},depth=2
           - (0) {42,42,7},depth=2   loop terminates because of depth limit
             - (4) {42,42,7},depth=0 predicted true, ctx.{a,b,c} marked precise
               - (5) division by zero

[0] https://lore.kernel.org/bpf/9b251840-7cb8-4d17-bd23-1fc8071d8eef@linux.dev/

Fixes: bb124da69c47 ("bpf: keep track of max number of bpf_loop callback iterations")
Suggested-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240222154121.6991-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

7d7ae873 | 10-Feb-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.15' into dev-6.6

This is the 6.6.15 stable release

1188f7f1 | 10-Feb-2024 | Andrew Jeffery <andrew@codeconstruct.com.au>

Merge tag 'v6.6.14' into dev-6.6

This is the 6.6.14 stable release

Revision tags: v6.6.4, v6.6.3

bfc5c19b | 20-Nov-2023 | Eduard Zingerman <eddyz87@gmail.com>

bpf: keep track of max number of bpf_loop callback iterations

commit bb124da69c47dd98d69361ec13244ece50bec63e upstream.
In some cases the verifier can't infer convergence of the bpf_loop() iteration, e.g. for the following program:

  static int cb(__u32 idx, struct num_context* ctx)
  {
          ctx->i++;
          return 0;
  }

  SEC("?raw_tp")
  int prog(void *_)
  {
          struct num_context ctx = { .i = 0 };
          __u8 choice_arr[2] = { 0, 1 };

          bpf_loop(2, cb, &ctx, 0);
          return choice_arr[ctx.i];
  }

Each 'cb' simulation would eventually return to 'prog' and reach the 'return choice_arr[ctx.i]' statement. At that point ctx.i would be marked precise, thus forcing the verifier to track a multitude of separate states with {.i=0}, {.i=1}, ... at the bpf_loop() callback entry.
This commit allows "brute force" handling for such cases by limiting the number of callback body simulations using the 'umax' value of the first bpf_loop() parameter.
For this, extend bpf_func_state with a 'callback_depth' field. Increment this field when a callback-visiting state is pushed to the states traversal stack. For frame #N, its 'callback_depth' field counts how many times the callback with frame depth N+1 has been executed. Use bpf_func_state specifically to allow independent tracking of callback depths when multiple nested bpf_loop() calls are present.
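
A sketch of the bookkeeping described above (the field name comes from the text; the push-site logic is paraphrased):

  struct bpf_func_state {
          /* ... */
          /* how many times the callback called from this frame has been
           * simulated on the current verification path
           */
          u32 callback_depth;
  };

  /* when scheduling another simulation of the callback body -- sketch */
  if (caller->callback_depth < nr_loops_umax /* umax of bpf_loop() arg #1 */) {
          callee->callback_depth = caller->callback_depth + 1;
          /* push the callback state onto the traversal stack */
  } else {
          /* depth limit hit: assume the loop has terminated */
  }
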

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-11-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

1a5a0361 | 20-Nov-2023 | Eduard Zingerman <eddyz87@gmail.com>

bpf: widening for callback iterators

commit cafe2c21508a38cdb3ed22708842e957b2572c3e upstream.
Callbacks are similar to open coded iterators, so add imprecise widening logic for callback body processing. This makes callback-based loops behave identically to open coded iterators, e.g. allowing verification of programs like the one below:

  struct ctx { u32 i; };

  int cb(u32 idx, struct ctx* ctx)
  {
          ++ctx->i;
          return 0;
  }

  ...
  struct ctx ctx = { .i = 0 };
  bpf_loop(100, cb, &ctx, 0);
  ...


Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-9-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

b43550d7 | 20-Nov-2023 | Eduard Zingerman <eddyz87@gmail.com>

bpf: verify callbacks as if they are called unknown number of times

commit ab5cfac139ab8576fb54630d4cca23c3e690ee90 upstream.
Prior to this patch callbacks were handled as regular function calls, and execution of the callback body was modeled exactly once. This patch updates the callbacks handling logic as follows:
- introduces a function push_callback_call() that schedules callback body verification in the env->head stack;
- updates prepare_func_exit() to reschedule callback body verification upon BPF_EXIT;
- as with calls to bpf_*_iter_next(), calls to callback-invoking functions are marked as checkpoints;
- is_state_visited() is updated to stop callback-based iteration when some identical parent state is found.
Paths with the callback function invoked zero times are now verified first, which made it necessary to modify some selftests:
- the following negative tests required adding release/unlock/drop calls to avoid previously masked unrelated error reports:
  - cb_refs.c:underflow_prog
  - exceptions_fail.c:reject_rbtree_add_throw
  - exceptions_fail.c:reject_with_cp_reference
- the following precision tracking selftests needed a change in the expected log trace:
  - verifier_subprog_precision.c:callback_result_precise (note: r0 precision is no longer propagated inside the callback and I think this is correct behavior)
  - verifier_subprog_precision.c:parent_callee_saved_reg_precise_with_callback
  - verifier_subprog_precision.c:parent_stack_slot_precise_with_callback

Reported-by: Andrew Werner <awerner32@gmail.com>
Closes: https://lore.kernel.org/bpf/CA+vRuzPChFNXmouzGG+wsy=6eMcfr1mFG0F3g7rbg-sedGKW3w@mail.gmail.com/
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-7-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

f661df8f | 20-Nov-2023 | Eduard Zingerman <eddyz87@gmail.com>

bpf: extract setup_func_entry() utility function

commit 58124a98cb8eda69d248d7f1de954c8b2767c945 upstream.
Move code for simulated stack frame creation to a separate utility function. This function would be used in the follow-up change for callbacks handling.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231121020701.26440-6-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>