Lines Matching +full:auto +full:- +full:detects

1 /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org>
100 * @brief **libbpf_set_print()** sets user-provided log callback function to
105 * This function is thread-safe.
116 * - for object open from file, this will override setting object
118 * - for object open from memory buffer, this will specify an object
119 * name and will override default "<addr>-<buf-size>" name;
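The default `"<addr>-<buf-size>"` object name mentioned above can be illustrated with a small standalone sketch. The helper name is hypothetical and the exact formatting inside libbpf may differ; this only shows the documented shape of the default name.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper sketching the documented "<addr>-<buf-size>"
 * default name used when a BPF object is opened from a memory buffer.
 * Not libbpf API; formatting details inside libbpf may differ. */
static void default_obj_name(char *dst, size_t dst_sz,
                             const void *buf, size_t buf_sz)
{
    snprintf(dst, dst_sz, "%lx-%zx",
             (unsigned long)(uintptr_t)buf, buf_sz);
}
```

Passing a non-NULL object name via open options overrides this default entirely.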
122 /* parse map definitions non-strictly, allowing extra attributes/data */
126 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
136 /* Path to the custom BTF to be used for BPF CO-RE relocations.
138 * for the purpose of CO-RE relocations.
145 * passed through to the bpf() syscall. Keep in mind that the kernel might
146 * fail the operation with a -ENOSPC error if the provided buffer is too small
152 * - each BPF program load (BPF_PROG_LOAD) attempt, unless overridden
153 * with bpf_program__set_log() on per-program level, to get
155 * - during BPF object's BTF load into kernel (BPF_BTF_LOAD) to get
159 * previous contents, so if you need more fine-grained control, set
160 * per-program buffer with bpf_program__set_log_buf() to preserve each
174 * could be either libbpf's own auto-allocated log buffer, if
175 * kernel_log_buffer is NULL, or user-provided custom kernel_log_buf.
318 * @brief **bpf_program__insns()** gives read-only access to BPF program's
332 * instructions will be CO-RE-relocated, BPF subprograms instructions will be
435 * a BPF program based on auto-detection of program type, attach type,
443 * - kprobe/kretprobe (depends on SEC() definition)
444 * - uprobe/uretprobe (depends on SEC() definition)
445 * - tracepoint
446 * - raw tracepoint
447 * - tracing programs (typed raw TP/fentry/fexit/fmod_ret)
455 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
471 * enum probe_attach_mode - the mode to attach kprobe/uprobe
473 * force libbpf to attach kprobe/uprobe in a specific mode, -ENOTSUP will
490 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
517 /* array of user-provided values fetchable through bpf_get_attach_cookie */
564 * - syms and offsets are mutually exclusive
565 * - ref_ctr_offsets and cookies are optional
570 * -1 for all processes
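The mutual-exclusivity rule above (symbols vs. raw offsets) can be sketched as a standalone validation check. The struct and function names here are illustrative, not libbpf API; they only encode the documented constraint.

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative sketch of the documented constraint: callers supply
 * either symbol names or raw offsets, never both and never neither.
 * Not libbpf's actual implementation. */
struct uprobe_multi_args {
    const char **syms;            /* symbol names, or NULL */
    const unsigned long *offsets; /* raw offsets, or NULL  */
};

static int validate_uprobe_multi(const struct uprobe_multi_args *a)
{
    if (a->syms && a->offsets)
        return -EINVAL; /* syms and offsets are mutually exclusive */
    if (!a->syms && !a->offsets)
        return -EINVAL; /* one of the two must be provided */
    return 0;           /* ref_ctr_offsets and cookies stay optional */
}
```

A check like this rejects ambiguous attach requests before any syscall is attempted.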
587 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
612 * system supports compat syscalls or defines 32-bit syscalls in 64-bit
617 * compat and 32-bit interfaces is required.
634 * a6ca88b241d5 ("trace_uprobe: support reference counter in fd-based uprobe")
637 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
641 /* Function name to attach to. Could be an unqualified ("abc") or library-qualified
665 * -1 for all processes
683 * -1 for all processes
698 /* custom user-provided value accessible through usdt_cookie() */
706 * bpf_program__attach_uprobe_opts() except it covers USDT (User-space
708 * user-space function entry or exit.
712 * -1 for all processes
729 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
751 /* custom user-provided value fetchable through bpf_get_attach_cookie() */
841 * auto-detection of attachment when programs are loaded.
857 /* Per-program log level and log buffer getters/setters.
867 * @brief **bpf_program__set_attach_target()** sets BTF-based attach target
869 * - BTF-aware raw tracepoints (tp_btf);
870 * - fentry/fexit/fmod_ret;
871 * - lsm;
872 * - freplace.
908 * @brief **bpf_map__set_autocreate()** sets whether libbpf has to auto-create
912 * @return 0 on success; -EBUSY if BPF object was already loaded
914 * **bpf_map__set_autocreate()** allows opting out of libbpf auto-creating
919 * This API allows opting out of this process for a specific map instance. This
923 * BPF-side code that expects to use such missing BPF map is recognized by BPF
933 * @return the file descriptor; or -EINVAL in case of an error
961 * There is a special case for maps with associated memory-mapped regions, like
964 * adjust the corresponding BTF info. This attempt is best-effort and can only
1057 * definition's **value_size**. For per-CPU BPF maps value size has to be
1060 * per-CPU values, the value size has to be aligned up to the closest 8 bytes for
1066 * **bpf_map__lookup_elem()** is high-level equivalent of
1081 * definition's **value_size**. For per-CPU BPF maps value size has to be
1084 * per-CPU values, the value size has to be aligned up to the closest 8 bytes for
1090 * **bpf_map__update_elem()** is high-level equivalent of
1106 * **bpf_map__delete_elem()** is high-level equivalent of
1120 * definition's **value_size**. For per-CPU BPF maps value size has to be
1123 * per-CPU values, the value size has to be aligned up to the closest 8 bytes for
1129 * **bpf_map__lookup_and_delete_elem()** is high-level equivalent of
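The per-CPU sizing rule repeated above (each per-CPU value slot rounded up to 8 bytes, one slot per possible CPU) can be captured in a small standalone helper. The function names are illustrative, not libbpf API; in real code the CPU count would come from **libbpf_num_possible_cpus()**.

```c
#include <stddef.h>

/* Illustrative helpers for the documented per-CPU map sizing rule:
 * each per-CPU value occupies its size rounded up to 8 bytes, and the
 * user-supplied buffer must hold one such slot per possible CPU. */
static size_t percpu_value_slot(size_t value_size)
{
    return (value_size + 7) & ~(size_t)7; /* round up to 8 bytes */
}

static size_t percpu_buffer_size(size_t value_size, unsigned int ncpus)
{
    return percpu_value_slot(value_size) * ncpus;
}
```

For example, a 12-byte value on a 4-CPU system needs a 64-byte buffer, not 48 bytes.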
1144 * @return 0, on success; -ENOENT if **cur_key** is the last key in BPF map;
1147 * **bpf_map__get_next_key()** is high-level equivalent of
1275 * @return A pointer to an 8-byte aligned reserved region of the user ring
1298 * should block when waiting for a sample. -1 causes the caller to block
1300 * @return A pointer to an 8-byte aligned reserved region of the user ring
1306 * If **timeout_ms** is -1, the function will block indefinitely until a sample
1307 * becomes available. Otherwise, **timeout_ms** must be non-negative, or errno
1383 * code to send data over to user-space
1384 * @param page_cnt number of memory pages allocated for each per-CPU buffer
1387 * @param ctx user-provided extra context passed into *sample_cb* and *lost_cb*
1398 LIBBPF_PERF_EVENT_ERROR = -1,
1399 LIBBPF_PERF_EVENT_CONT = -2,
1418 /* if cpu_cnt > 0, map_keys specify map keys to set per-CPU FDs for */
1438 * @brief **perf_buffer__buffer()** returns the per-CPU raw mmap()'ed underlying
1477 * @brief **libbpf_probe_bpf_prog_type()** detects if host kernel supports
1490 * @brief **libbpf_probe_bpf_map_type()** detects if host kernel supports
1503 * @brief **libbpf_probe_bpf_helper()** detects if host kernel supports the
1659 * auto-attach is not supported, callback should return 0 and set link to
1671 /* User-provided value that is passed to prog_setup_fn,
1701 * @return Non-negative handler ID is returned on success. This handler ID has
1707 * - if *sec* is just a plain string (e.g., "abc"), it will match only
1710 * - if *sec* is of the form "abc/", proper SEC() form is
1713 * - if *sec* is of the form "abc+", it will successfully match both
1715 * - if *sec* is NULL, custom handler is registered for any BPF program that
1723 * (i.e., it's possible to have custom SEC("perf_event/LLC-load-misses")
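The four *sec* matching rules listed above can be mirrored in a standalone sketch. This is not libbpf's implementation, only the documented semantics: a plain string matches exactly, a trailing `/` matches any `"abc/..."` form, and a trailing `+` matches both the bare name and any slash-qualified variant.

```c
#include <stdbool.h>
#include <string.h>

/* Standalone sketch of the documented SEC() name matching rules for
 * custom section handlers; not libbpf's actual matcher. */
static bool sec_matches(const char *pattern, const char *sec)
{
    size_t n = strlen(pattern);

    if (n && pattern[n - 1] == '/')   /* "abc/": slash-qualified only */
        return strncmp(sec, pattern, n) == 0;
    if (n && pattern[n - 1] == '+') { /* "abc+": bare or qualified */
        if (strncmp(sec, pattern, n - 1) != 0)
            return false;
        return sec[n - 1] == '\0' || sec[n - 1] == '/';
    }
    return strcmp(sec, pattern) == 0; /* plain string: exact match */
}
```

Under these rules a handler registered for `"perf_event+"` would accept both `SEC("perf_event")` and `SEC("perf_event/LLC-load-misses")`.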
1727 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs
1743 * libbpf_set_strict_mode(), etc)) these APIs are not thread-safe. User needs