path: root/tools/lib/bpf
9 days ago  libbpf: Prevent double close and leak of btf objects  (Jiri Olsa)  -10/+11
Sashiko found a possible double close of the btf object fd [1], which happens when strdup() in load_module_btfs() fails after obj->btf_module_cnt has already been incremented: the error path closes the btf fd, and so does the later cleanup code in bpf_object_post_load_cleanup(). Also, a libbpf_ensure_mem() failure leaves the btf object unassigned, so it is leaked. Replace the err_out label with break to make the error path less confusing, as suggested by Alan. Increment obj->btf_module_cnt only if there is no failure, and release the btf object in the error path. Fixes: 91abb4a6d79d ("libbpf: Support attachment of BPF tracing programs to kernel modules") [1] https://sashiko.dev/#/patchset/20260324081846.2334094-1-jolsa%40kernel.org Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20260416100034.1610852-1-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-04-10  libbpf: Allow use of feature cache for non-token cases  (Alan Maguire)  -3/+12
Allow bpf object feat_cache assignment in BPF selftests to simulate missing features via inclusion of libbpf_internal.h and use of bpf_object_set_feat_cache() and bpf_object__sanitize_btf() to test BTF sanitization for cases where missing features are simulated. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/r/20260408165735.843763-2-alan.maguire@oracle.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-04-07  bpf: reject negative CO-RE accessor indices in bpf_core_parse_spec()  (Weiming Shi)  -0/+2
CO-RE accessor strings are colon-separated indices that describe a path from a root BTF type to a target field, e.g. "0:1:2" walks through nested struct members. bpf_core_parse_spec() parses each component with sscanf("%d"), so negative values like -1 are silently accepted. The subsequent bounds checks (access_idx >= btf_vlen(t)) only guard the upper bound and always pass for negative values because C integer promotion converts the __u16 btf_vlen result to int, making the comparison (int)(-1) >= (int)(N) false for any positive N. When -1 reaches btf_member_bit_offset() it gets cast to u32 0xffffffff, producing an out-of-bounds read far past the members array. A crafted BPF program with a negative CO-RE accessor on any struct that exists in vmlinux BTF (e.g. task_struct) crashes the kernel deterministically during BPF_PROG_LOAD on any system with CONFIG_DEBUG_INFO_BTF=y (default on major distributions). The bug is reachable with CAP_BPF:

  BUG: unable to handle page fault for address: ffffed11818b6626
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  Oops: Oops: 0000 [#1] SMP KASAN NOPTI
  CPU: 0 UID: 0 PID: 85 Comm: poc Not tainted 7.0.0-rc6 #18 PREEMPT(full)
  RIP: 0010:bpf_core_parse_spec (tools/lib/bpf/relo_core.c:354)
  RAX: 00000000ffffffff
  Call Trace:
   <TASK>
   bpf_core_calc_relo_insn (tools/lib/bpf/relo_core.c:1321)
   bpf_core_apply (kernel/bpf/btf.c:9507)
   check_core_relo (kernel/bpf/verifier.c:19475)
   bpf_check (kernel/bpf/verifier.c:26031)
   bpf_prog_load (kernel/bpf/syscall.c:3089)
   __sys_bpf (kernel/bpf/syscall.c:6228)
   </TASK>

CO-RE accessor indices are inherently non-negative (struct member index, array element index, or enumerator index), so reject them immediately after parsing. Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") Reported-by: Xiang Mei <xmei5@asu.edu> Signed-off-by: Weiming Shi <bestswngs@gmail.com> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Acked-by: Paul Chaignon <paul.chaignon@gmail.com> Link: https://lore.kernel.org/r/20260404161221.961828-2-bestswngs@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-04-02  libbpf: Clarify raw-address single kprobe attach behavior  (Hoyeon Lee)  -7/+34
bpf_program__attach_kprobe_opts() documents single-kprobe attach through func_name, with an optional offset. For the PMU-based path, func_name = NULL with an absolute address in offset already works as well, but that is not described in the API. This commit clarifies this existing non-legacy behavior. For PMU-based attach, callers can use func_name = NULL with an absolute address in offset as the raw-address form. For legacy tracefs/debugfs kprobes, reject this form explicitly. Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20260401143116.185049-3-hoyeon.lee@suse.com
2026-04-02  libbpf: Use direct error codes for kprobe/uprobe attach  (Hoyeon Lee)  -2/+2
perf_event_open_probe() and perf_event_{k,u}probe_open_legacy() return negative error codes directly on failure. This commit changes bpf_program__attach_{k,u}probe_opts() to use those return values directly instead of re-reading a possibly clobbered errno. Signed-off-by: Hoyeon Lee <hoyeon.lee@suse.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20260401143116.185049-2-hoyeon.lee@suse.com
2026-04-02  libbpf: Fix BTF handling in bpf_program__clone()  (Mykyta Yatsenko)  -17/+44
Align bpf_program__clone() with bpf_object_load_prog() by gating BTF func/line info on FEAT_BTF_FUNC kernel support, and resolve caller-provided prog_btf_fd before checking obj->btf so that callers with their own BTF can use clone() even when the object has no BTF loaded. While at it, treat func_info and line_info fields as atomic groups to prevent mismatches between pointer and count from different sources. Move bpf_program__clone() to libbpf 1.8. Fixes: 970bd2dced35 ("libbpf: Introduce bpf_program__clone()") Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260401151640.356419-1-mykyta.yatsenko5@gmail.com
2026-03-26  libbpf: Support sanitization of BTF layout for older kernels  (Alan Maguire)  -36/+127
Add a FEAT_BTF_LAYOUT feature check which checks if the kernel supports BTF layout information. Also sanitize BTF if it contains layout data but the kernel does not support it. The sanitization requires rewriting raw BTF data to update the header and eliminate the layout section (since it lies between the types and strings), so refactor sanitization to do the raw BTF retrieval and creation of updated BTF, returning that new BTF on success. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260326145444.2076244-7-alan.maguire@oracle.com
2026-03-26  libbpf: BTF validation can use layout for unknown kinds  (Alan Maguire)  -2/+6
BTF parsing can use layout information to navigate unknown kinds, so btf_validate_type() should take it into account to avoid failing when an unrecognized kind is encountered. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260326145444.2076244-6-alan.maguire@oracle.com
2026-03-26  libbpf: Add layout encoding support  (Alan Maguire)  -2/+83
Support encoding of BTF layout data via btf__new_empty_opts(). Current supported opts are base_btf and add_layout. Layout information is maintained in btf.c in the layouts[] array; when BTF is created with the add_layout option it represents the current view of supported BTF kinds. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260326145444.2076244-5-alan.maguire@oracle.com
2026-03-26  libbpf: Use layout to compute an unknown kind size  (Alan Maguire)  -9/+43
This allows BTF parsing to proceed even if we do not know the kind. Fall back to base BTF layout if layout information is not in split BTF. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260326145444.2076244-4-alan.maguire@oracle.com
2026-03-26  libbpf: Support layout section handling in BTF  (Alan Maguire)  -153/+320
Support reading in the layout section (fixing endianness issues on read); also support writing the layout section out to a raw BTF object. There is not yet an API to populate the layout with meaningful information. As part of this, we need to consider multiple valid BTF header sizes: the original header and the layout-extended one. To support this, the "struct btf" representation is modified to contain a "struct btf_header" and we copy the valid portion of the raw data into it; this means fields like btf->hdr.layout_len can always be checked safely. Note that if parsed-in BTF has extra header information beyond sizeof(struct btf_header), we make that BTF ineligible for modification by setting btf->has_hdr_extra. Ensure that endianness issues are handled for the BTF layout section, though the only field that currently needs this (flags) is unused. Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260326145444.2076244-3-alan.maguire@oracle.com
2026-03-21  libbpf: Introduce bpf_program__clone()  (Mykyta Yatsenko)  -0/+96
Add bpf_program__clone() API that loads a single BPF program from a prepared BPF object into the kernel, returning a file descriptor owned by the caller. After bpf_object__prepare(), callers can use bpf_program__clone() to load individual programs with custom bpf_prog_load_opts, instead of loading all programs at once via bpf_object__load(). Non-zero fields in opts override the defaults derived from the program and object internals; passing NULL opts populates everything automatically. Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Link: https://lore.kernel.org/r/20260317-veristat_prepare-v4-1-74193d4cc9d9@meta.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-16  libbpf: Start v1.8 development cycle  (Ihor Solodrai)  -1/+4
libbpf 1.7.0 has been released [1]. Update libbpf.map and libbpf_version.h to start the v1.8 development cycle. [1] https://github.com/libbpf/libbpf/releases/tag/v1.7.0 Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260316163916.2822081-1-ihor.solodrai@linux.dev
2026-03-05  libbpf: Optimize kprobe.session attachment for exact function names  (Andrey Grodzovsky)  -1/+18
Detect exact function names (no wildcards) in bpf_program__attach_kprobe_multi_opts() and bypass kallsyms parsing, passing the symbol directly to the kernel via syms[] array. This benefits all callers, not just kprobe.session. When the pattern contains no '*' or '?' characters, set syms to point directly at the pattern string and cnt to 1, skipping the expensive /proc/kallsyms or available_filter_functions parsing (~150ms per function). Error code normalization: the fast path returns ESRCH from kernel's ftrace_lookup_symbols(), while the slow path returns ENOENT from userspace kallsyms parsing. Convert ESRCH to ENOENT in the bpf_link_create error path to maintain API consistency - both paths now return identical error codes for "symbol not found". Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@crowdstrike.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260302200837.317907-2-andrey.grodzovsky@crowdstrike.com
2026-03-05  libbpf: Support appending split BTF in btf__add_btf()  (Josef Bacik)  -9/+18
btf__add_btf() currently rejects split BTF sources with -ENOTSUP. This prevents merging types from multiple kernel module BTFs that are all split against the same vmlinux base. Extend btf__add_btf() to handle split BTF sources by:

 - Replacing the blanket -ENOTSUP with a validation that src and dst share the same base BTF pointer when both are split, returning -EOPNOTSUPP on mismatch.
 - Computing src_start_id from the source's base to distinguish base type ID references (which must remain unchanged) from split type IDs (which must be remapped to new positions in the destination).
 - Using src_btf->nr_types instead of btf__type_cnt()-1 for the type count, which is correct for both split and non-split sources.
 - Skipping base string offsets (< start_str_off) during the string rewrite loop, mirroring the type ID skip pattern. Since src and dst share the same base BTF, base string offsets are already valid and need no remapping.

For non-split sources the behavior is identical: src_start_id is 1, the type_id < 1 guard is never true (VOID is already skipped), and the remapping formula reduces to the original. start_str_off is 0 so no string offsets are skipped. Assisted-by: Claude:claude-opus-4-6 Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/c00216ed48cf7897078d9645679059d5ebf42738.1772657690.git.josef@toxicpanda.com
2026-03-03  libbpf: Add support to detect nop,nop5 instructions combo for usdt probe  (Jiri Olsa)  -4/+43
Add support for detecting the nop,nop5 instruction combo for USDT probes by checking for a nop5 instruction directly following the probe's nop. When the nop,nop5 combo is detected together with the uprobe syscall, the probe can be placed on top of the nop5 and get optimized. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20260224103915.1369690-3-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-03  libbpf: Add uprobe syscall feature detection  (Jiri Olsa)  -0/+26
Adding uprobe syscall feature detection that will be used in following changes. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20260224103915.1369690-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-02-18  libbpf: Remove extern declaration of bpf_stream_vprintk()  (Ihor Solodrai)  -3/+0
An issue was reported that building BPF program which includes both vmlinux.h and bpf_helpers.h from libbpf fails due to conflicting declarations of bpf_stream_vprintk(). Remove the extern declaration from bpf_helpers.h to address this. In order to use bpf_stream_printk() macro, BPF programs are expected to either include vmlinux.h of the kernel they are targeting, or add their own extern declaration. Reported-by: Luca Boccassi <luca.boccassi@gmail.com> Closes: https://github.com/libbpf/libbpf/issues/947 Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev> Link: https://lore.kernel.org/r/20260218215651.2057673-3-ihor.solodrai@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-02-17  bpftool: Fix truncated netlink dumps  (Jakub Kicinski)  -1/+3
Netlink requires that the recv buffer used during dumps is at least min(PAGE_SIZE, 8k) bytes (see the man page); otherwise messages will get truncated. Make sure bpftool follows this requirement, avoiding missing information on systems with large pages. Acked-by: Quentin Monnet <qmo@kernel.org> Fixes: 7084566a236f ("tools/bpftool: Remove libbpf_internal.h usage in bpftool") Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://lore.kernel.org/r/20260217194150.734701-1-kuba@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-02-17  libbpf: Delay feature gate check until object prepare time  (Emil Tsalapatis)  -7/+13
Commit 728ff167910e ("libbpf: Add gating for arena globals relocation feature") adds a feature gate check that loads a map and BPF program to test the running kernel supports large direct offsets for LDIMM64 instructions. This check is currently used to calculate arena symbol offsets during bpf_object__collect_relos, itself called by bpf_object_open. However, the program calling bpf_object_open may not have the permissions to load maps and programs. This is the case with the BPF selftests, where bpftool is invoked at compilation time during skeleton generation. This causes errors as the feature gate unexpectedly fails with -EPERM. Avoid this by moving all the use of the FEAT_LDIMM64_FULL_RANGE_OFF feature gate to BPF object preparation time instead. Fixes: 728ff167910e ("libbpf: Add gating for arena globals relocation feature") Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260217204345.548648-3-emil@etsalapatis.com
2026-02-17  libbpf: Do not use PROG_TYPE_TRACEPOINT program for feature gating  (Emil Tsalapatis)  -2/+3
Commit 728ff167910e uses a PROG_TYPE_TRACEPOINT BPF test program to check whether the running kernel supports large LDIMM64 offsets. The feature gate incorrectly assumes that the program will fail at verification time with one of two messages, depending on whether the feature is supported by the running kernel. However, PROG_TYPE_TRACEPOINT programs may fail to load before verification even starts, e.g., if the shell does not have the appropriate capabilities. Use a BPF_PROG_TYPE_SOCKET_FILTER program for the feature gate instead. Also fix two minor issues. First, ensure the log buffer for the test is initialized: Failing program load before verification led to libbpf dumping uninitialized data to stdout. Also, ensure that close() is only called for program_fd in the probe if the program load actually succeeded. The call was currently failing silently with -EBADF most of the time. Fixes: 728ff167910e ("libbpf: Add gating for arena globals relocation feature") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260217204345.548648-2-emil@etsalapatis.com
2026-02-13  libbpf: Fix invalid write loop logic in bpf_linker__add_buf()  (Amery Hung)  -1/+1
Fix bpf_linker__add_buf()'s logic of copying data from memory buffer into memfd. In the event of short write not writing entire buf_sz bytes into memfd file, we'll append bytes from the beginning of buf *again* (corrupting ELF file contents) instead of correctly appending the rest of not-yet-read buf contents. Closes: https://github.com/libbpf/libbpf/issues/945 Fixes: 6d5e5e5d7ce1 ("libbpf: Extend linker API to support in-memory ELF files") Signed-off-by: Amery Hung <ameryhung@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20260209230134.3530521-1-ameryhung@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-02-13  libbpf: Add gating for arena globals relocation feature  (Emil Tsalapatis)  -2/+71
Add feature gating for the arena globals relocation introduced in commit c1f61171d44b. The commit depends on a previous commit in the same patchset that is absent from older kernels (12a1fe6e12db "bpf/verifier: Do not limit maximum direct offset into arena map"). Without this commit, arena globals relocation with arenas >= 512MiB fails to load and breaks libbpf's backwards compatibility. Introduce a libbpf feature to check whether the running kernel allows for full range ldimm64 offset, and only relocate arena globals if it does. Fixes: c1f61171d44b ("libbpf: Move arena globals to the end of the arena") Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260210184532.255475-1-emil@etsalapatis.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-01-24  libbpf: add fsession support  (Menglong Dong)  -0/+4
Add BPF_TRACE_FSESSION to libbpf. Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Link: https://lore.kernel.org/r/20260124062008.8657-9-dongml2@chinatelecom.cn Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-01-20  bpf: Migrate bpf_stream_vprintk() to KF_IMPLICIT_ARGS  (Ihor Solodrai)  -3/+3
Implement bpf_stream_vprintk with an implicit bpf_prog_aux argument, and remove bpf_stream_vprintk_impl from the kernel. Update the selftests to use the new API with the implicit argument. The bpf_stream_vprintk macro is changed to use the new bpf_stream_vprintk kfunc, and the extern definition of bpf_stream_vprintk_impl is replaced accordingly. Reviewed-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev> Link: https://lore.kernel.org/r/20260120222638.3976562-11-ihor.solodrai@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-01-13  btf: Refactor the code by calling str_is_empty  (Donglin Peng)  -19/+19
Call the str_is_empty function to clarify the code; no functional changes are introduced. Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20260109130003.3313716-12-dolinux.peng@gmail.com
2026-01-13  libbpf: Verify BTF sorting  (Donglin Peng)  -0/+25
This patch checks whether the BTF is sorted by name in ascending order. If sorted, binary search will be used when looking up types. Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20260109130003.3313716-6-dolinux.peng@gmail.com
2026-01-13  libbpf: Optimize type lookup with binary search for sorted BTF  (Donglin Peng)  -25/+65
This patch introduces binary search optimization for BTF type lookups when the BTF instance contains sorted types. The optimization significantly improves performance when searching for types in large BTF instances with sorted types. For unsorted BTF, the implementation falls back to the original linear search. Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260109130003.3313716-5-dolinux.peng@gmail.com
2026-01-13  libbpf: Add BTF permutation support for type reordering  (Donglin Peng)  -0/+176
Introduce btf__permute() API to allow in-place rearrangement of BTF types. This function reorganizes BTF type order according to a provided array of type IDs, updating all type references to maintain consistency. Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20260109130003.3313716-2-dolinux.peng@gmail.com
2026-01-09  libbpf: Fix OOB read in btf_dump_get_bitfield_value  (Varun R Mallya)  -0/+9
When dumping bitfield data, btf_dump_get_bitfield_value() reads data based on the underlying type's size (t->size). However, it does not verify that the provided data buffer (data_sz) is large enough to contain these bytes. If btf_dump__dump_type_data() is called with a buffer smaller than the type's size, this leads to an out-of-bounds read. This was confirmed by AddressSanitizer in the linked issue. Fix this by ensuring we do not read past the provided data_sz limit. Fixes: a1d3cc3c5eca ("libbpf: Avoid use of __int128 in typed dump display") Reported-by: Harrison Green <harrisonmichaelgreen@gmail.com> Suggested-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Varun R Mallya <varunrmallya@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20260106233527.163487-1-varunrmallya@gmail.com Closes: https://github.com/libbpf/libbpf/issues/928
2026-01-06  libbpf: Add BPF_F_CPU and BPF_F_ALL_CPUS flags support for percpu maps  (Leon Hwang)  -19/+36
Add libbpf support for the BPF_F_CPU flag for percpu maps by embedding the cpu info into the high 32 bits of:

 1. flags: bpf_map_lookup_elem_flags(), bpf_map__lookup_elem(), bpf_map_update_elem() and bpf_map__update_elem()
 2. opts->elem_flags: bpf_map_lookup_batch() and bpf_map_update_batch()

The flag can instead be BPF_F_ALL_CPUS, but cannot be 'BPF_F_CPU | BPF_F_ALL_CPUS'. Behavior:

 * If the flag is BPF_F_ALL_CPUS, the update is applied across all CPUs.
 * If the flag is BPF_F_CPU, the update is applied only to the specified CPU.
 * If the flag is BPF_F_CPU, lookup reads the value only from the specified CPU.
 * Lookup does not support BPF_F_ALL_CPUS.

Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Leon Hwang <leon.hwang@linux.dev> Link: https://lore.kernel.org/r/20260107022022.12843-7-leon.hwang@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-12-16  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after 6.19-rc1  (Alexei Starovoitov)  -3/+4
Cross-merge BPF and other fixes after downstream PR. Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-12-16  libbpf: Move arena globals to the end of the arena  (Emil Tsalapatis)  -4/+13
Arena globals are currently placed at the beginning of the arena by libbpf. This is convenient, but prevents users from reserving guard pages in the beginning of the arena to identify NULL pointer dereferences. Adjust the load logic to place the globals at the end of the arena instead. Also modify bpftool to set the arena pointer in the program's BPF skeleton to point to the globals. Users now call bpf_map__initial_value() to find the beginning of the arena mapping and use the arena pointer in the skeleton to determine which part of the mapping holds the arena globals and which part is free. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20251216173325.98465-5-emil@etsalapatis.com
2025-12-16  libbpf: Turn relo_core->sym_off unsigned  (Emil Tsalapatis)  -7/+7
The symbols' relocation offsets in BPF are stored in an int field, but cannot actually be negative. When in the next patch libbpf relocates globals to the end of the arena, it is also possible to have valid offsets > 2GiB that are used to calculate the final relo offsets. Avoid accidentally interpreting large offsets as negative by turning the sym_off field unsigned. Signed-off-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/20251216173325.98465-4-emil@etsalapatis.com
2025-12-09  libbpf: Fix -Wdiscarded-qualifiers under C23  (Mikhail Gavrilov)  -3/+4
glibc ≥ 2.42 (GCC 15) defaults to -std=gnu23, which promotes -Wdiscarded-qualifiers to an error. In C23, strstr() and strchr() return "const char *". Change variable types to const char * where the pointers are never modified (res, sym_sfx, next_path). Suggested-by: Florian Weimer <fweimer@redhat.com> Suggested-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Link: https://lore.kernel.org/r/20251206092825.1471385-1-mikhail.v.gavrilov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-12-05  libbpf: Add support for associating BPF program with struct_ops  (Amery Hung)  -0/+89
Add low-level wrapper and libbpf API for BPF_PROG_ASSOC_STRUCT_OPS command in the bpf() syscall. Signed-off-by: Amery Hung <ameryhung@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251203233748.668365-4-ameryhung@gmail.com
2025-12-05  libbpf: Add debug messaging in dedup equivalence/identity matching  (Alan Maguire)  -5/+24
We have seen a number of issues like [1]; failures to deduplicate key kernel data structures like task_struct. These are often hard to debug from pahole even with verbose output, especially when identity/equivalence checks fail deep in a nested struct comparison. Here we add debug messages of the form libbpf: STRUCT 'task_struct' size=2560 vlen=194 cand_id[54222] canon_id[102820] shallow-equal but not equiv for field#23 'sched_class': 0 These will be emitted during dedup from pahole when --verbose/-V is specified. This greatly helps identify exactly where dedup failures are experienced. [1] https://lore.kernel.org/bpf/b8e8b560-bce5-414b-846d-0da6d22a9983@oracle.com/ Changes since v1: - updated debug messages to refer to shallow-equal, added ids (Andrii) Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251203191507.55565-1-alan.maguire@oracle.com
2025-12-03  Merge tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (Linus Torvalds)  -55/+362
Pull bpf updates from Alexei Starovoitov:

 - Convert selftests/bpf/test_tc_edt and test_tc_tunnel from .sh to test_progs runner (Alexis Lothoré)
 - Convert selftests/bpf/test_xsk to test_progs runner (Bastien Curutchet)
 - Replace bpf memory allocator with kmalloc_nolock() in bpf_local_storage (Amery Hung), and in bpf streams and range tree (Puranjay Mohan)
 - Introduce support for indirect jumps in BPF verifier and x86 JIT (Anton Protopopov) and arm64 JIT (Puranjay Mohan)
 - Remove runqslower bpf tool (Hoyeon Lee)
 - Fix corner cases in the verifier to close several syzbot reports (Eduard Zingerman, KaFai Wan)
 - Several improvements in deadlock detection in rqspinlock (Kumar Kartikeya Dwivedi)
 - Implement "jmp" mode for BPF trampoline and corresponding DYNAMIC_FTRACE_WITH_JMP. It improves "fexit" program type performance from 80 M/s to 136 M/s. With Steven's Ack. (Menglong Dong)
 - Add ability to test non-linear skbs in BPF_PROG_TEST_RUN (Paul Chaignon)
 - Do not let BPF_PROG_TEST_RUN emit invalid GSO types to stack (Daniel Borkmann)
 - Generalize buildid reader into bpf_dynptr (Mykyta Yatsenko)
 - Optimize bpf_map_update_elem() for map-in-map types (Ritesh Oedayrajsingh Varma)
 - Introduce overwrite mode for BPF ring buffer (Xu Kuohai)

* tag 'bpf-next-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (169 commits)
  bpf: optimize bpf_map_update_elem() for map-in-map types
  bpf: make kprobe_multi_link_prog_run always_inline
  selftests/bpf: do not hardcode target rate in test_tc_edt BPF program
  selftests/bpf: remove test_tc_edt.sh
  selftests/bpf: integrate test_tc_edt into test_progs
  selftests/bpf: rename test_tc_edt.bpf.c section to expose program type
  selftests/bpf: Add success stats to rqspinlock stress test
  rqspinlock: Precede non-head waiter queueing with AA check
  rqspinlock: Disable spinning for trylock fallback
  rqspinlock: Use trylock fallback when per-CPU rqnode is busy
  rqspinlock: Perform AA checks immediately
  rqspinlock: Enclose lock/unlock within lock entry acquisitions
  bpf: Remove runqslower tool
  selftests/bpf: Remove usage of lsm/file_alloc_security in selftest
  bpf: Disable file_alloc_security hook
  bpf: check for insn arrays in check_ptr_alignment
  bpf: force BPF_F_RDONLY_PROG on insn array creation
  bpf: Fix exclusive map memory leak
  selftests/bpf: Make CS length configurable for rqspinlock stress test
  selftests/bpf: Add lock wait time stats to rqspinlock stress test
  ...
2025-12-02  Merge tag 's390-6.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds)  -6/+0
Pull s390 updates from Heiko Carstens:

 - Provide a new interface for dynamic configuration and deconfiguration of hotplug memory, allowing with and without memmap_on_memory support. This makes the way memory hotplug is handled on s390 much more similar to other architectures
 - Remove compat support. There shouldn't be any compat user space around anymore, therefore get rid of a lot of code which also doesn't need to be tested anymore
 - Add stackprotector support. GCC 16 will get new compiler options, which allow to generate code required for kernel stackprotector support
 - Merge pai_crypto and pai_ext PMU drivers into a new driver. This removes a lot of duplicated code. The new driver is also extendable and allows to support new PMUs
 - Add driver override support for AP queues
 - Rework and extend zcrypt and AP trace events to allow for tracing of crypto requests
 - Support block sizes larger than 65535 bytes for CCW tape devices
 - Since the rework of the virtual kernel address space the module area and the kernel image are within the same 4GB area. This eliminates the need of weak per cpu variables. Get rid of ARCH_MODULE_NEEDS_WEAK_PER_CPU
 - Various other small improvements and fixes

* tag 's390-6.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (92 commits)
  watchdog: diag288_wdt: Remove KMSG_COMPONENT macro
  s390/entry: Use lay instead of aghik
  s390/vdso: Get rid of -m64 flag handling
  s390/vdso: Rename vdso64 to vdso
  s390: Rename head64.S to head.S
  s390/vdso: Use common STABS_DEBUG and DWARF_DEBUG macros
  s390: Add stackprotector support
  s390/modules: Simplify module_finalize() slightly
  s390: Remove KMSG_COMPONENT macro
  s390/percpu: Get rid of ARCH_MODULE_NEEDS_WEAK_PER_CPU
  s390/ap: Restrict driver_override versus apmask and aqmask use
  s390/ap: Rename mutex ap_perms_mutex to ap_attr_mutex
  s390/ap: Support driver_override for AP queue devices
  s390/ap: Use all-bits-one apmask/aqmask for vfio in_use() checks
  s390/debug: Update description of resize operation
  s390/syscalls: Switch to generic system call table generation
  s390/syscalls: Remove system call table pointer from thread_struct
  s390/uapi: Remove 31 bit support from uapi header files
  s390: Remove compat support
  tools: Remove s390 compat support
  ...
2025-11-25libbpf: Fix some incorrect @param descriptions in the comment of libbpf.hJianyun Gao-11/+16
Fix up some missing or incorrect @param descriptions for libbpf public APIs in libbpf.h. Signed-off-by: Jianyun Gao <jianyungao89@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251118033025.11804-1-jianyungao89@gmail.com
2025-11-17tools: Remove s390 compat supportHeiko Carstens-6/+0
Remove s390 compat support from everything within tools, since s390 compat support will be removed from the kernel. Reviewed-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Weißschuh <linux@weissschuh.net> # tools/nolibc selftests/nolibc Reviewed-by: Thomas Weißschuh <linux@weissschuh.net> # selftests/vDSO Acked-by: Alexei Starovoitov <ast@kernel.org> # bpf bits Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2025-11-14Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after 6.18-rc5+Alexei Starovoitov-14/+14
Cross-merge BPF and other fixes after downstream PR. Minor conflict in kernel/bpf/helpers.c Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-11-14libbpf: Fix BTF dedup to support recursive typedef definitionsPaul Houssel-16/+55
Handle recursive typedefs in BTF deduplication.

Pahole fails to encode BTF for some Go projects (e.g. Kubernetes and Podman) due to recursive type definitions that create reference loops not representable in C. These recursive typedefs trigger a failure in the BTF deduplication algorithm.

This patch extends btf_dedup_ref_type() to properly handle potential recursion for BTF_KIND_TYPEDEF, similar to how recursion is already handled for BTF_KIND_STRUCT. This allows pahole to successfully generate BTF for Go binaries using recursive types without impacting existing C-based workflows.

Suggested-by: Tristan d'Audibert <tristan.daudibert@gmail.com> Co-developed-by: Martin Horth <martin.horth@telecom-sudparis.eu> Co-developed-by: Ouail Derghal <ouail.derghal@imt-atlantique.fr> Co-developed-by: Guilhem Jazeron <guilhem.jazeron@inria.fr> Co-developed-by: Ludovic Paillat <ludovic.paillat@inria.fr> Co-developed-by: Robin Theveniaut <robin.theveniaut@irit.fr> Signed-off-by: Martin Horth <martin.horth@telecom-sudparis.eu> Signed-off-by: Ouail Derghal <ouail.derghal@imt-atlantique.fr> Signed-off-by: Guilhem Jazeron <guilhem.jazeron@inria.fr> Signed-off-by: Ludovic Paillat <ludovic.paillat@inria.fr> Signed-off-by: Robin Theveniaut <robin.theveniaut@irit.fr> Signed-off-by: Paul Houssel <paul.houssel@orange.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/bpf/bf00857b1e06f282aac12f6834de7396a7547ba6.1763037045.git.paul.houssel@orange.com
2025-11-05libbpf: support llvm-generated indirect jumpsAnton Protopopov-1/+251
For the v4 instruction set, LLVM is allowed to generate indirect jumps for switch statements and for 'goto *rX' assembly. Every such jump will be accompanied by the necessary metadata, e.g. (`llvm-objdump -Sr ...`):

       0: r2 = 0x0 ll
          0000000000000030: R_BPF_64_64 BPF.JT.0.0

Here BPF.JT.0.0 is a symbol residing in the .jumptables section:

Symbol table:
    4: 0000000000000000   240 OBJECT GLOBAL DEFAULT     4 BPF.JT.0.0

The -bpf-min-jump-table-entries llvm option may be used to control the minimal size of a switch which will be converted to indirect jumps.

Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20251105090410.1250500-11-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-11-05libbpf: Recognize insn_array map typeAnton Protopopov-0/+5
Teach libbpf about the existence of the new instruction array map. Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com> Link: https://lore.kernel.org/r/20251105090410.1250500-4-a.s.protopopov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-11-04bpf: add _impl suffix for bpf_stream_vprintk() kfuncMykyta Yatsenko-14/+14
Rename bpf_stream_vprintk() to bpf_stream_vprintk_impl(). This makes the kfunc follow the already established "_impl" suffix-based naming convention for kfuncs whose bpf_prog_aux argument is provided implicitly by the verifier. This convention will be taken advantage of by the upcoming KF_IMPLICIT_ARGS feature to preserve backwards compatibility for BPF programs. Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com> Link: https://lore.kernel.org/r/20251104-implv2-v3-2-4772b9ae0e06@meta.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Ihor Solodrai <ihor.solodrai@linux.dev>
2025-11-04libbpf: Fix parsing of multi-split BTFAlan Maguire-2/+2
When creating multi-split BTF we correctly set the start string offset to the size of the base string section plus the base BTF's start string offset; the latter is needed for multi-split BTF since that offset is non-zero there. Unfortunately, the BTF parsing path needs the same logic, and it was missed. Fixes: 4e29128a9ace ("libbpf/btf: Fix string handling to support multi-split BTF") Signed-off-by: Alan Maguire <alan.maguire@oracle.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251104203309.318429-2-alan.maguire@oracle.com
2025-11-04libbpf: Update the comment to remove the reference to the deprecated ↵Jianyun Gao-2/+2
interface bpf_program__load(). Commit be2f2d1680df ("libbpf: Deprecate bpf_program__load() API") marked bpf_program__load() as deprecated starting with libbpf v0.6. Later, commit 146bf811f5ac ("libbpf: remove most other deprecated high-level APIs") removed the bpf_program__load() implementation and the related old high-level APIs. This patch updates the comment in bpf_program__set_attach_target() to remove the reference to the deprecated interface bpf_program__load(). Signed-off-by: Jianyun Gao <jianyungao89@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251103120727.145965-1-jianyungao89@gmail.com
2025-11-04libbpf: Complete the missing @param and @return tags in btf.hJianyun Gao-0/+8
Complete the missing @param and @return tags in the Doxygen comments of the btf.h file. Signed-off-by: Jianyun Gao <jianyungao89@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20251103115836.144339-1-jianyungao89@gmail.com
2025-11-03Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after 6.18-rc4Alexei Starovoitov-1/+1
Cross-merge BPF and other fixes after downstream PR. No conflicts. Signed-off-by: Alexei Starovoitov <ast@kernel.org>