History log of /openbmc/linux/tools/testing/selftests/bpf/progs/test_overhead.c
Revision  Date  Author  Comments
Revision tags: v5.15.10, v5.15.9, v5.15.8, v5.15.7, v5.15.6, v5.15.5, v5.15.4, v5.15.3, v5.15.2, v5.15.1, v5.15, v5.14.14, v5.14.13, v5.14.12, v5.14.11, v5.14.10, v5.14.9, v5.14.8, v5.14.7, v5.14.6, v5.10.67, v5.10.66, v5.14.5, v5.14.4, v5.10.65, v5.14.3, v5.10.64, v5.14.2, v5.10.63, v5.14.1, v5.10.62, v5.14, v5.10.61, v5.10.60, v5.10.53, v5.10.52, v5.10.51, v5.10.50, v5.10.49, v5.13, v5.10.46, v5.10.43, v5.10.42, v5.10.41, v5.10.40, v5.10.39, v5.4.119, v5.10.36, v5.10.35, v5.10.34, v5.4.116, v5.10.33, v5.12, v5.10.32, v5.10.31, v5.10.30, v5.10.27, v5.10.26, v5.10.25, v5.10.24, v5.10.23, v5.10.22, v5.10.21, v5.10.20, v5.10.19, v5.4.101, v5.10.18, v5.10.17, v5.11, v5.10.16, v5.10.15, v5.10.14, v5.10, v5.8.17, v5.8.16, v5.8.15, v5.9, v5.8.14, v5.8.13, v5.8.12
# b000def2 25-Sep-2020 Toke Høiland-Jørgensen <toke@redhat.com>

selftests: Remove fmod_ret from test_overhead

The test_overhead prog_test included an fmod_ret program that attached to
__set_task_comm() in the kernel. However, this function was never listed as
allowed for return modification, so this only worked because of the
verifier skipping tests when a trampoline already existed for the attach
point. Now that the verifier checks have been fixed, remove fmod_ret from
the test so it works again.

Fixes: 4eaf0b5c5e04 ("selftest/bpf: Fmod_ret prog and implement test_overhead as part of bench")
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
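
For reference, a minimal sketch of the kind of program this change removes; the program name and trivial body are illustrative, not the exact removed code:

  /* fmod_ret programs may only attach to functions allowed for return-value
   * modification; __set_task_comm() never was, which is why this attach type
   * is dropped from the selftest. */
  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_struct;

  SEC("fmod_ret/__set_task_comm")
  int BPF_PROG(prog_fmod_ret, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";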



Revision tags: v5.8.11, v5.8.10, v5.8.9, v5.8.8, v5.8.7, v5.8.6, v5.4.62, v5.8.5, v5.8.4, v5.4.61, v5.8.3, v5.4.60, v5.8.2, v5.4.59, v5.8.1, v5.4.58, v5.4.57, v5.4.56, v5.8, v5.7.12, v5.4.55, v5.7.11, v5.4.54, v5.7.10, v5.4.53, v5.4.52, v5.7.9, v5.7.8, v5.4.51, v5.4.50, v5.7.7, v5.4.49, v5.7.6, v5.7.5, v5.4.48, v5.7.4, v5.7.3, v5.4.47, v5.4.46, v5.7.2, v5.4.45, v5.7.1, v5.4.44, v5.7, v5.4.43, v5.4.42, v5.4.41
# 6d74f64b 14-May-2020 Yonghong Song <yhs@fb.com>

selftests/bpf: Enforce returning 0 for fentry/fexit programs

There are a few fentry/fexit programs returning non-0.
The tests with these programs will break with the previous
patch, which enforced the return-0 rule. Fix them properly.

Fixes: ac065870d928 ("selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200514053207.1298479-1-yhs@fb.com
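
A minimal sketch of what the rule means in practice (program name illustrative; __set_task_comm() returns void, so there is no return value to inspect on fexit here):

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_struct;

  /* With the verifier change referenced above, fentry/fexit programs must
   * return 0; any other value is rejected at load time. */
  SEC("fexit/__set_task_comm")
  int BPF_PROG(prog_fexit, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";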



# 4eaf0b5c 12-May-2020 Andrii Nakryiko <andriin@fb.com>

selftest/bpf: Fmod_ret prog and implement test_overhead as part of bench

Add an fmod_ret BPF program to the existing test_overhead selftest. Also
re-implement the user-space benchmarking part as a benchmark runner to compare
results. Results with ./bench are consistently somewhat lower than
test_overhead's, but the relative performance of the various types of BPF
programs stays consistent (e.g., kretprobe is noticeably slower). This slowdown
seems to come from the fact that test_overhead is single-threaded, while the
benchmark always spins off at least one thread for the producer. This has been
confirmed by hacking up a multi-threaded test_overhead variant and a
single-threaded bench variant. Results are below. The run_bench_rename.sh
script from the benchs/ subdirectory was used to produce the ./bench results.

Single-threaded implementations
===============================

/* bench: single-threaded, atomics */
base : 4.622 ± 0.049M/s
kprobe : 3.673 ± 0.052M/s
kretprobe : 2.625 ± 0.052M/s
rawtp : 4.369 ± 0.089M/s
fentry : 4.201 ± 0.558M/s
fexit : 4.309 ± 0.148M/s
fmodret : 4.314 ± 0.203M/s

/* selftest: single-threaded, no atomics */
task_rename base 4555K events per sec
task_rename kprobe 3643K events per sec
task_rename kretprobe 2506K events per sec
task_rename raw_tp 4303K events per sec
task_rename fentry 4307K events per sec
task_rename fexit 4010K events per sec
task_rename fmod_ret 3984K events per sec

Multi-threaded implementations
==============================

/* bench: multi-threaded w/ atomics */
base : 3.910 ± 0.023M/s
kprobe : 3.048 ± 0.037M/s
kretprobe : 2.300 ± 0.015M/s
rawtp : 3.687 ± 0.034M/s
fentry : 3.740 ± 0.087M/s
fexit : 3.510 ± 0.009M/s
fmodret : 3.485 ± 0.050M/s

/* selftest: multi-threaded w/ atomics */
task_rename base 3872K events per sec
task_rename kprobe 3068K events per sec
task_rename kretprobe 2350K events per sec
task_rename raw_tp 3731K events per sec
task_rename fentry 3639K events per sec
task_rename fexit 3558K events per sec
task_rename fmod_ret 3511K events per sec

/* selftest: multi-threaded, no atomics */
task_rename base 3945K events per sec
task_rename kprobe 3298K events per sec
task_rename kretprobe 2451K events per sec
task_rename raw_tp 3718K events per sec
task_rename fentry 3782K events per sec
task_rename fexit 3543K events per sec
task_rename fmod_ret 3526K events per sec

Note that ./bench always uses atomic increments for counting while
test_overhead does not, but this difference does not influence the test
results all that much.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-4-andriin@fb.com
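
For intuition about what is being measured, here is a hedged sketch of a single-threaded "task rename" producer loop; the actual selftest and bench code differ in details (threading, timing, and the exact mechanism used to trigger __set_task_comm()):

  #include <stdio.h>
  #include <sys/prctl.h>
  #include <time.h>

  int main(void)
  {
      const long iters = 1000000;
      struct timespec start, end;

      clock_gettime(CLOCK_MONOTONIC, &start);
      /* Each rename ends up in __set_task_comm(), so every attached BPF
       * program (kprobe, fentry, fmod_ret, ...) runs once per iteration. */
      for (long i = 0; i < iters; i++)
          prctl(PR_SET_NAME, (i & 1) ? "prog-a" : "prog-b");
      clock_gettime(CLOCK_MONOTONIC, &end);

      double secs = (end.tv_sec - start.tv_sec) +
                    (end.tv_nsec - start.tv_nsec) / 1e9;
      printf("%.0fK renames per sec\n", iters / secs / 1000.0);
      return 0;
  }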



Revision tags: v5.4.40, v5.4.39, v5.4.38, v5.4.37, v5.4.36, v5.4.35, v5.4.34, v5.4.33, v5.4.32, v5.4.31, v5.4.30, v5.4.29, v5.6, v5.4.28, v5.4.27, v5.4.26, v5.4.25, v5.4.24
# df8ff353 29-Feb-2020 Andrii Nakryiko <andriin@fb.com>

libbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h

Move the BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros into libbpf's
bpf_tracing.h header to make them available to non-selftests users.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-5-andriin@fb.com
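
After this move, programs outside the selftests can pick the macros up straight from the installed libbpf headers; a minimal sketch (program name illustrative):

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <linux/ptrace.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>   /* BPF_PROG, BPF_KPROBE, BPF_KRETPROBE now live here */

  struct task_struct;

  SEC("kprobe/__set_task_comm")
  int BPF_KPROBE(prog_kprobe, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";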



# 396f544e 29-Feb-2020 Andrii Nakryiko <andriin@fb.com>

selftests/bpf: Fix BPF_KRETPROBE macro and use it in attach_probe test

For kretprobes, there is no point in capturing input arguments from pt_regs,
as they will most probably have been clobbered by the time the probed kernel
function returns. So switch BPF_KRETPROBE to accept zero or one argument
(the optional return result).

Fixes: ac065870d928 ("selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-4-andriin@fb.com
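
A minimal sketch of the two allowed forms (program names illustrative); __set_task_comm() returns void, so the selftest's kretprobe takes zero arguments, while a probe on a value-returning function could take a single argument for the return value:

  #include <linux/bpf.h>
  #include <linux/ptrace.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* Zero-argument form: nothing useful to capture, since the input
   * registers are likely clobbered by the time the function returns. */
  SEC("kretprobe/__set_task_comm")
  int BPF_KRETPROBE(on_comm_set)
  {
      return 0;
  }

  /* One-argument form (hypothetical target): the single argument receives
   * the probed function's return value. */
  SEC("kretprobe/do_sys_open")
  int BPF_KRETPROBE(on_open_ret, long ret)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";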



Revision tags: v5.4.23, v5.4.22, v5.4.21, v5.4.20, v5.4.19, v5.4.18, v5.4.17, v5.4.16, v5.5, v5.4.15, v5.4.14
# 3e689141 20-Jan-2020 Toke Høiland-Jørgensen <toke@redhat.com>

selftests: Use consistent include paths for libbpf

Fix all selftests to include libbpf header files with the bpf/ prefix, to
be consistent with external users of the library. Also ensure that all
includes of exported libbpf header files (those that are exported on 'make
install' of the library) use bracketed includes instead of quoted.

To not break the build, keep the old include path until everything has been
changed to the new one; a subsequent patch will remove that.

Fixes: 6910d7d3867a ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157952560568.1683545.9649335788846513446.stgit@toke.dk
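
On the BPF-program side the change looks roughly like this (a sketch, not a literal diff from the patch; the user-space tests get the same treatment for headers such as libbpf.h):

  /* Old, selftest-local include style:
   *     #include "bpf_helpers.h"
   * New, with the bpf/ prefix and angle brackets, matching external users
   * of the installed library: */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("raw_tp/task_rename")
  int prog_raw_tp(void *ctx)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";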



Revision tags: v5.4.13, v5.4.12, v5.4.11
# ac065870 10-Jan-2020 Andrii Nakryiko <andriin@fb.com>

selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros

Streamline the BPF_TRACE_x macro by moving the return type and section
attribute definition out of the macro itself. That makes those functions look
similar to other BPF programs in source code. Additionally, simplify its usage
by determining the number of arguments automatically (so just a single
BPF_TRACE vs a family of BPF_TRACE_1, BPF_TRACE_2, etc.). Also, allow a more
natural function argument syntax, without commas in between argument type and
name.

Given this helper is useful not only for tracing tp_btf/fentry/fexit programs,
but could be used for LSM programs and others following the same pattern,
rename the BPF_TRACE macro to the more generic BPF_PROG. Existing BPF_TRACE_x
usages in selftests are converted to the new BPF_PROG macro.

Following the same pattern, define BPF_KPROBE and BPF_KRETPROBE macros for
nicer usage of kprobe/kretprobe arguments, respectively. BPF_KRETPROBE adopts
the same convention used by fexit programs: the last defined argument is the
probed function's return result.

v4->v5:
- fix test_overhead test (__set_task_comm is void) (Alexei);

v3->v4:
- rebased and fixed one more BPF_TRACE_x occurrence (Alexei);

v2->v3:
- rename to shorter and as generic BPF_PROG (Alexei);

v1->v2:
- verified GCC handles pragmas as expected;
- added descriptions to macros;
- converted new STRUCT_OPS selftest to BPF_HANDLER (worked as expected);
- added the original context as a 'ctx' parameter, for cases where it has to be
passed into BPF helpers. This might cause an accidental naming collision,
unfortunately, but at least it's easy to work around. Fortunately, this
situation produces a quite legible compilation error:

progs/bpf_dctcp.c:46:6: error: redefinition of 'ctx' with a different type: 'int' vs 'unsigned long long *'
int ctx = 123;
^
progs/bpf_dctcp.c:42:6: note: previous definition is here
void BPF_HANDLER(dctcp_init, struct sock *sk)
^
./bpf_trace_helpers.h:58:32: note: expanded from macro 'BPF_HANDLER'
____##name(unsigned long long *ctx, ##args)

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200110211634.1614739-1-andriin@fb.com
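
A minimal sketch of the resulting usage (program name illustrative, and shown with today's header location <bpf/bpf_tracing.h>; at the time of this commit the macros lived in the selftests' bpf_trace_helpers.h). Compare with the older BPF_TRACE_x style, which needed the argument count in the macro name and commas between each argument's type and name:

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_struct;

  /* Argument count is deduced automatically and arguments are written as
   * natural C parameters; the raw context stays available as 'ctx'. */
  SEC("fentry/__set_task_comm")
  int BPF_PROG(prog_fentry, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";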



Revision tags: v5.4.10, v5.4.9, v5.4.8, v5.4.7, v5.4.6, v5.4.5, v5.4.4, v5.4.3, v5.3.15, v5.4.2, v5.4.1, v5.3.14, v5.4, v5.3.13
# f9a7cf6e 23-Nov-2019 Martin KaFai Lau <kafai@fb.com>

bpf: Introduce BPF_TRACE_x helper for the tracing tests

For BPF_PROG_TYPE_TRACING, the bpf_prog's ctx is an array of u64.
This patch borrows the idea from BPF_CALL_x in filter.h to
convert a u64 to the arg type of the traced function.

The new BPF_TRACE_x has an arg to specify the return type of a bpf_prog.
It will be used in the future TCP-ops bpf_prog that may return "void".

The new macros are defined in the new header file "bpf_trace_helpers.h".
It is under selftests/bpf/ for now. It could be moved to libbpf later
after seeing more upcoming non-tracing use cases.

The tests are changed to use these new macros as well. Hence, the
k[s]u8/16/32/64 types are no longer needed and are removed from
bpf_helpers.h.

Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191123202504.1502696-1-kafai@fb.com
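
The idea the macro wraps, sketched by hand (illustrative only; the real macro also handles the return type and the diagnostics pragmas):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct task_struct;

  /* For BPF_PROG_TYPE_TRACING programs the context is an array of u64, one
   * slot per argument of the traced function; each slot is cast back to the
   * argument's real type, much like BPF_CALL_x does in filter.h. */
  SEC("fentry/__set_task_comm")
  int prog_fentry_raw(unsigned long long *ctx)
  {
      struct task_struct *tsk = (struct task_struct *)ctx[0];
      const char *buf = (const char *)ctx[1];

      (void)tsk;
      (void)buf;
      return 0;   /* stub body; no dereference here */
  }

  char _license[] SEC("license") = "GPL";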



# c4781e37 21-Nov-2019 Alexei Starovoitov <ast@kernel.org>

selftests/bpf: Add BPF trampoline performance test

Add a test that benchmarks different ways of attaching a BPF program to a kernel function.
Here are the results for a 2.4GHz x86 CPU on a kernel without mitigations:
$ ./test_progs -n 49 -v|grep events
task_rename base 2743K events per sec
task_rename kprobe 2419K events per sec
task_rename kretprobe 1876K events per sec
task_rename raw_tp 2578K events per sec
task_rename fentry 2710K events per sec
task_rename fexit 2685K events per sec

On a kernel with retpoline:
$ ./test_progs -n 49 -v|grep events
task_rename base 2401K events per sec
task_rename kprobe 1930K events per sec
task_rename kretprobe 1485K events per sec
task_rename raw_tp 2053K events per sec
task_rename fentry 2351K events per sec
task_rename fexit 2185K events per sec

All 5 approaches:
- kprobe/kretprobe in __set_task_comm()
- raw tracepoint in trace_task_rename()
- fentry/fexit in __set_task_comm()
are roughly equivalent.

__set_task_comm() by itself is quite fast, so any extra instructions add up.
Until BPF trampoline was introduced the fastest mechanism was raw tracepoint.
kprobe via ftrace was second best. kretprobe is slow due to trap. New
fentry/fexit methods via BPF trampoline are clearly the fastest and the
difference is more pronounced with retpoline on, since BPF trampoline doesn't
use indirect jumps.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191122011515.255371-1-ast@kernel.org
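
The five attach points being compared, sketched as stubs in today's macro style (the original file predates BPF_PROG/BPF_KPROBE, and the program names here are illustrative):

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <linux/ptrace.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct task_struct;

  SEC("kprobe/__set_task_comm")
  int BPF_KPROBE(prog_kprobe, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  SEC("kretprobe/__set_task_comm")
  int BPF_KRETPROBE(prog_kretprobe)
  {
      return 0;
  }

  SEC("raw_tp/task_rename")
  int prog_raw_tp(void *ctx)
  {
      return 0;
  }

  SEC("fentry/__set_task_comm")
  int BPF_PROG(prog_fentry, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  SEC("fexit/__set_task_comm")
  int BPF_PROG(prog_fexit, struct task_struct *tsk, const char *buf, bool exec)
  {
      return 0;
  }

  char _license[] SEC("license") = "GPL";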



# 148a2543 25-Sep-2020 Toke Høiland-Jørgensen <toke@redhat.com>

selftests: Remove fmod_ret from test_overhead

[ Upstream commit b000def2e052fc8ddea31a18019f6ebe044defb3 ]

The test_overhead prog_test included an fmod_ret program that attached to
__set_task_comm() in the kernel. However, this function was never listed as
allowed for return modification, so this only worked because of the
verifier skipping tests when a trampoline already existed for the attach
point. Now that the verifier checks have been fixed, remove fmod_ret from
the test so it works again.

Fixes: 4eaf0b5c5e04 ("selftest/bpf: Fmod_ret prog and implement test_overhead as part of bench")
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>



# da07f52d 15-May-2020 David S. Miller <davem@davemloft.net>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Move the bpf verifier trace check into the new switch statement in
HEAD.

Resolve the overlapping changes in hinic, where bug fixes overlap
the addition of VF support.

Signed-off-by: David S. Miller <davem@davemloft.net>
