linux/kernel/bpf
Yonghong Song 178c54666f bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly
Currently, tracing of the bpf_spin_{lock,unlock}() helpers is supposed to be
disallowed. This is to prevent deadlocks in cases like the following:
  - there is a prog (prog-A) calling bpf_spin_{lock,unlock}().
  - there is a tracing program (prog-B), e.g., fentry, attached
    to bpf_spin_lock() and/or bpf_spin_unlock().
  - prog-B calls bpf_spin_{lock,unlock}().
In such a case, when prog-A calls bpf_spin_{lock,unlock}(), prog-B runs in the
middle of that call; if prog-B then tries to take a lock that prog-A already
holds (e.g., prog-B attached to bpf_spin_unlock() taking the same lock),
a deadlock will happen.
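
As an illustration only (this is not the reproducer from [2]), prog-B could
look roughly like the sketch below. The map lock_map, the value type
struct val and the program name prog_b are made up for this sketch; it
assumes prog-A locks/unlocks the bpf_spin_lock in the same (e.g., pinned and
shared) map value:

  /* Hypothetical prog-B: an fentry prog attached to bpf_spin_unlock()
   * that itself takes a bpf_spin_lock shared with prog-A.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct val {
          int cnt;
          struct bpf_spin_lock lock;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct val);
  } lock_map SEC(".maps");

  SEC("fentry/bpf_spin_unlock")
  int BPF_PROG(prog_b)
  {
          int key = 0;
          struct val *v = bpf_map_lookup_elem(&lock_map, &key);

          if (!v)
                  return 0;
          /* prog-A is still holding the same lock while this fentry
           * prog runs, so this call spins forever -> deadlock.
           */
          bpf_spin_lock(&v->lock);
          v->cnt++;
          bpf_spin_unlock(&v->lock);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";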

The relevant code in kernel/bpf/helpers.c is:
  notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
  notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
The notrace attribute is supposed to prevent fentry progs from attaching to
bpf_spin_{lock,unlock}().

But in fact this is not the case, and an fentry prog can successfully
attach to bpf_spin_lock(). Siddharth Chintamaneni reported
the issue in [1]. The macro definition behind the
above BPF_CALL_1 is:
  #define BPF_CALL_x(x, name, ...)                                               \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
        typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
        u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__));         \
        u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))          \
        {                                                                      \
                return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
        }                                                                      \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

  #define BPF_CALL_1(name, ...)   BPF_CALL_x(1, name, __VA_ARGS__)

The notrace attribute actually ends up applied to the static __always_inline
function ____bpf_spin_{lock,unlock}(). The real callback function
bpf_spin_{lock,unlock}() is not marked with notrace, which allows
fentry progs to attach to the two helpers and can cause the
deadlock described above. Siddharth Chintamaneni provided
a reproducer in [2].
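
For illustration, a simplified expansion of
notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock) looks roughly
as follows (the __force casts are collapsed); notrace binds to the first
declaration the macro emits, i.e. the inner __always_inline function:

  /* Roughly what the preprocessor produces; casts are simplified. */
  notrace static __always_inline
  u64 ____bpf_spin_lock(struct bpf_spin_lock *lock);

  typedef u64 (*btf_bpf_spin_lock)(struct bpf_spin_lock *lock);

  /* No notrace on the traceable symbol, so an fentry prog can attach here. */
  u64 bpf_spin_lock(u64 lock, u64 __ur_1, u64 __ur_2, u64 __ur_3, u64 __ur_4);
  u64 bpf_spin_lock(u64 lock, u64 __ur_1, u64 __ur_2, u64 __ur_3, u64 __ur_4)
  {
          return ((btf_bpf_spin_lock)____bpf_spin_lock)((struct bpf_spin_lock *)(unsigned long)lock);
  }

  static __always_inline
  u64 ____bpf_spin_lock(struct bpf_spin_lock *lock)
  {
          /* helper body from kernel/bpf/helpers.c */
          __bpf_spin_lock_irqsave(lock);
          return 0;
  }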

To fix the issue, a new macro NOTRACE_BPF_CALL_1 is introduced, which
applies the notrace attribute to the outer helper function itself rather
than to the hidden __always_inline function. This fixes the problem.
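
A sketch of what such a macro looks like, mirroring BPF_CALL_x but with
notrace moved onto the outer function (the in-tree implementation may be
factored differently):

  #define NOTRACE_BPF_CALL_x(x, name, ...)                                       \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__));   \
        typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
        notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)); \
        notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__))  \
        {                                                                      \
                return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
        }                                                                      \
        static __always_inline                                                 \
        u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

  #define NOTRACE_BPF_CALL_1(name, ...)   NOTRACE_BPF_CALL_x(1, name, __VA_ARGS__)

The helpers in kernel/bpf/helpers.c are then defined with
NOTRACE_BPF_CALL_1(bpf_spin_lock, ...) and NOTRACE_BPF_CALL_1(bpf_spin_unlock, ...),
so the traceable bpf_spin_{lock,unlock}() symbols themselves carry notrace and
fentry progs can no longer attach to them.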

  [1] https://lore.kernel.org/bpf/CAE5sdEigPnoGrzN8WU7Tx-h-iFuMZgW06qp0KHWtpvoXxf1OAQ@mail.gmail.com/
  [2] https://lore.kernel.org/bpf/CAE5sdEg6yUc_Jz50AnUXEEUh6O73yQ1Z6NV2srJnef0ZrQkZew@mail.gmail.com/

Fixes: d83525ca62 ("bpf: introduce bpf_spin_lock")
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20240207070102.335167-1-yonghong.song@linux.dev
2024-02-13 11:11:25 -08:00
preload bpf: make preloaded map iterators to display map elements count 2023-07-06 12:42:25 -07:00
arraymap.c bpf: Consistently use BPF token throughout BPF verifier logic 2024-01-24 16:21:01 -08:00
bloom_filter.c bpf: Centralize permissions checks for all BPF map types 2023-06-19 14:04:04 +02:00
bpf_cgrp_storage.c bpf: Enable bpf_cgrp_storage for cgroup1 non-attach case 2023-12-08 17:08:18 -08:00
bpf_inode_storage.c Networking changes for 6.4. 2023-04-26 16:07:23 -07:00
bpf_iter.c bpf: Add __bpf_kfunc_{start,end}_defs macros 2023-11-01 22:33:53 -07:00
bpf_local_storage.c bpf: Allow compiler to inline most of bpf_local_storage_lookup() 2024-02-11 14:06:24 -08:00
bpf_lru_list.c bpf: Address KCSAN report on bpf_lru_list 2023-05-12 12:01:03 -07:00
bpf_lru_list.h bpf: lru: Remove unused declaration bpf_lru_promote() 2023-08-08 17:21:42 -07:00
bpf_lsm.c bpf: Minor clean-up to sleepable_lsm_hooks BTF set 2024-02-01 18:37:45 +01:00
bpf_struct_ops.c bpf: Remove an unnecessary check. 2024-02-05 10:25:08 -08:00
bpf_task_storage.c bpf: Teach verifier that certain helpers accept NULL pointer. 2023-04-04 16:57:16 -07:00
btf.c bpf, btf: Check btf for register_bpf_struct_ops 2024-02-08 11:37:49 -08:00
cgroup.c bpf: Take into account BPF token when fetching helper protos 2024-01-24 16:21:01 -08:00
cgroup_iter.c bpf: Let verifier consider {task,cgroup} is trusted in bpf_iter_reg 2023-11-07 15:24:25 -08:00
core.c bpf: Consistently use BPF token throughout BPF verifier logic 2024-01-24 16:21:01 -08:00
cpumap.c net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). 2023-10-17 15:02:03 +02:00
cpumask.c bpf: treewide: Annotate BPF kfuncs in BTF 2024-01-31 20:40:56 -08:00
devmap.c net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). 2023-10-17 15:02:03 +02:00
disasm.c bpf: change bpf_alu_sign_string and bpf_movsx_string to static 2023-08-04 16:15:50 -07:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Use arch_bpf_trampoline_size 2023-12-06 17:17:20 -08:00
hashtab.c Networking changes for 6.8. 2024-01-11 10:07:29 -08:00
helpers.c bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly 2024-02-13 11:11:25 -08:00
inode.c bpf: Support symbolic BPF FS delegation mount options 2024-01-24 16:21:02 -08:00
Kconfig bpf: Merge two CONFIG_BPF entries 2024-02-07 16:38:20 -08:00
link_iter.c bpf: Add bpf_link iterator 2022-05-10 11:20:45 -07:00
local_storage.c cgroup changes for v6.4-rc1 2023-04-29 10:05:22 -07:00
log.c bpf: emit more dynptr information in verifier log 2023-12-11 19:21:22 -08:00
lpm_trie.c bpf, lpm: Fix check prefixlen before walking trie 2023-11-09 19:07:38 -08:00
Makefile bpf: Introduce BPF token object 2024-01-24 16:21:01 -08:00
map_in_map.c bpf: Optimize the free of inner map 2023-12-04 17:50:26 -08:00
map_in_map.h bpf: Add map and need_defer parameters to .map_fd_put_ptr() 2023-12-04 17:50:26 -08:00
map_iter.c bpf: treewide: Annotate BPF kfuncs in BTF 2024-01-31 20:40:56 -08:00
memalloc.c bpf: Remove unnecessary cpu == 0 check in memalloc 2024-01-04 10:18:14 -08:00
mmap_unlock_work.h bpf: Introduce helper bpf_find_vma 2021-11-07 11:54:51 -08:00
mprog.c bpf: Handle bpf_mprog_query with NULL entry 2023-10-06 17:11:20 -07:00
net_namespace.c net: Add includes masked by netdevice.h including uapi/bpf.h 2021-12-29 20:03:05 -08:00
offload.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2023-09-21 21:49:45 +02:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-11 12:05:14 -08:00
percpu_freelist.h
prog_iter.c
queue_stack_maps.c bpf: Avoid deadlock when using queue and stack maps from NMI 2023-09-11 19:04:49 -07:00
reuseport_array.c bpf: Centralize permissions checks for all BPF map types 2023-06-19 14:04:04 +02:00
ringbuf.c bpf: Fold smp_mb__before_atomic() into atomic_set_release() 2023-10-24 14:26:07 +02:00
stackmap.c bpf: Add crosstask check to __bpf_get_stack 2023-11-10 11:06:10 -08:00
syscall.c bpf,lsm: Refactor bpf_map_alloc/bpf_map_free LSM hooks 2024-01-24 16:21:01 -08:00
sysfs_btf.c
task_iter.c bpf: bpf_iter_task_next: use next_task(kit->task) rather than next_task(kit->pos) 2023-11-19 11:43:44 -08:00
tcx.c bpf, tcx: Get rid of tcx_link_const 2023-10-23 15:01:53 -07:00
tnum.c bpf: simplify tnum output if a fully known constant 2023-12-02 11:36:51 -08:00
token.c bpf,token: Use BIT_ULL() to convert the bit mask 2024-01-29 20:04:55 -08:00
trampoline.c bpf: Use arch_bpf_trampoline_size 2023-12-06 17:17:20 -08:00
verifier.c bpf: Transfer RCU lock state between subprog calls 2024-02-05 20:00:14 -08:00