linux/kernel/bpf
Eduard Zingerman 2793a8b015 bpf: exact states comparison for iterator convergence checks
Convergence for open-coded iterators is computed in is_state_visited()
by examining states with a branches count > 1, using states_equal().
states_equal() computes the sub-state relation using read and precision
marks. Read and precision marks are propagated from child states and
thus are not guaranteed to be complete inside a loop while the branches
count is > 1. This can be demonstrated with the following unsafe program:

     1. r7 = -16
     2. r6 = bpf_get_prandom_u32()
     3. while (bpf_iter_num_next(&fp[-8])) {
     4.   if (r6 != 42) {
     5.     r7 = -32
     6.     r6 = bpf_get_prandom_u32()
     7.     continue
     8.   }
     9.   r0 = r10
    10.   r0 += r7
    11.   r8 = *(u64 *)(r0 + 0)
    12.   r6 = bpf_get_prandom_u32()
    13. }

Here the verifier would first visit path 1-3, create a checkpoint at 3
with r7=-16, and then continue along 4-7 back to 3 with r7=-32.

Because instructions 9-12 have not been visited yet, the existing
checkpoint at 3 does not have a read or precision mark for r7.
Thus states_equal() would return true, the verifier would discard the
current state, and the unsafe memory access at 11 would not be caught.

This commit fixes this loophole by introducing exact state comparisons
for iterator convergence logic:
- registers are compared using regs_exact() regardless of read or
  precision marks (sketched below);
- stack slots have to have identical types.
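
For illustration, a minimal sketch of such an exact register
comparison follows. It is modeled on verifier.c's regs_exact() and
check_ids(), but the memcmp() boundary and surrounding types are
simplifying assumptions, not the verbatim upstream code:

    /* Sketch: "exact" comparison ignores read/precision marks entirely
     * and requires the register states to be byte-for-byte identical,
     * modulo consistent ID remapping between the two states.
     */
    static bool regs_exact(const struct bpf_reg_state *rold,
                           const struct bpf_reg_state *rcur,
                           struct bpf_id_pair *idmap)
    {
            /* compare everything stored before the id fields ... */
            return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
                   /* ... and require ids to map 1:1 across the states */
                   check_ids(rold->id, rcur->id, idmap) &&
                   check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
    }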

Unfortunately, this is too strict even for simple programs like below:

    i = 0;
    while (iter_next(&it))
      i++;

At each iteration 'i++' would produce a new, distinct state, and
eventually the instruction processing limit would be reached.

To avoid such behavior, speculatively forget (widen) the range of
imprecise scalar registers, if those registers were not precise at the
end of the previous iteration and do not match exactly.
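
A sketch of this widening step is below; the helper name and the
"match exactly" check are illustrative assumptions, not the verbatim
verifier.c code:

    /* Sketch: when looping back to a checkpoint, scalar registers that
     * were not marked precise and that differ from the checkpointed
     * value are reset to an unknown range, so that a sequence of states
     * like i=0, i=1, i=2, ... can converge.
     */
    static void maybe_widen_reg(struct bpf_verifier_env *env,
                                struct bpf_reg_state *rold,
                                struct bpf_reg_state *rcur)
    {
            if (rold->type != SCALAR_VALUE || rcur->type != SCALAR_VALUE)
                    return;         /* only scalar ranges are widened */
            if (rold->precise || rcur->precise)
                    return;         /* precise values must stay exact */
            if (memcmp(rold, rcur, sizeof(*rold)) == 0)
                    return;         /* identical already, nothing to do */
            /* forget the tracked range: rcur becomes an unknown scalar */
            __mark_reg_unknown(env, rcur);
    }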

This is a conservative heuristic that allows a wide range of programs
to be verified; however, it precludes verification of programs that
conjure an imprecise value on the first loop iteration and use it as
a precise one on the second.

Test case iter_task_vma_for_each() presents one such case:

        unsigned int seen = 0;
        ...
        bpf_for_each(task_vma, vma, task, 0) {
                if (seen >= 1000)
                        break;
                ...
                seen++;
        }

Here clang generates the following code:

<LBB0_4>:
      24:       r8 = r6                          ; stash current value of 'seen'
                ... body ...
      29:       r1 = r10
      30:       r1 += -0x8
      31:       call bpf_iter_task_vma_next
      32:       r6 += 0x1                        ; seen++;
      33:       if r0 == 0x0 goto +0x2 <LBB0_6>  ; exit on next() == NULL
      34:       r7 += 0x10
      35:       if r8 < 0x3e7 goto -0xc <LBB0_4> ; loop on seen < 1000

<LBB0_6>:
      ... exit ...

Note that the counter in r6 is copied to r8 and only then incremented,
and the conditional jump is done using r8. Because of this, the
precision mark for r6 lags one state behind the precision mark on r8,
and the widening logic kicks in.

Adding barrier_var(seen) after the conditional is sufficient to force
clang to use the same register for both counting and the conditional
jump.
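
For reference, a sketch of the adjusted loop; barrier_var() here is
an empty-asm macro along the lines of the one in the BPF selftests'
bpf_misc.h, which pins 'seen' to a single register across that point:

    #define barrier_var(var) asm volatile("" : "+r"(var))

    unsigned int seen = 0;
    ...
    bpf_for_each(task_vma, vma, task, 0) {
            if (seen >= 1000)
                    break;
            barrier_var(seen); /* keep counter and compare in one reg */
            ...
            seen++;
    }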

This issue was discussed in thread [1], which was started by
Andrew Werner <awerner32@gmail.com> demonstrating a similar bug
in callback function handling. Callbacks will be addressed in a
followup patch.

[1] https://lore.kernel.org/bpf/97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com/

Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20231024000917.12153-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-10-23 21:49:31 -07:00
preload bpf: make preloaded map iterators to display map elements count 2023-07-06 12:42:25 -07:00
arraymap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
bloom_filter.c bpf: Centralize permissions checks for all BPF map types 2023-06-19 14:04:04 +02:00
bpf_cgrp_storage.c bpf: Teach verifier that certain helpers accept NULL pointer. 2023-04-04 16:57:16 -07:00
bpf_inode_storage.c Networking changes for 6.4. 2023-04-26 16:07:23 -07:00
bpf_iter.c bpf: Don't explicitly emit BTF for struct btf_iter_num 2023-10-13 15:48:58 -07:00
bpf_local_storage.c bpf: bpf_sk_storage: Fix the missing uncharge in sk_omem_alloc 2023-09-06 11:08:14 +02:00
bpf_lru_list.c bpf: Address KCSAN report on bpf_lru_list 2023-05-12 12:01:03 -07:00
bpf_lru_list.h bpf: lru: Remove unused declaration bpf_lru_promote() 2023-08-08 17:21:42 -07:00
bpf_lsm.c bpf: Fix the kernel crash caused by bpf_setsockopt(). 2023-01-26 23:26:40 -08:00
bpf_struct_ops.c bpf: Charge modmem for struct_ops trampoline 2023-09-14 15:30:45 -07:00
bpf_struct_ops_types.h
bpf_task_storage.c bpf: Teach verifier that certain helpers accept NULL pointer. 2023-04-04 16:57:16 -07:00
btf.c bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf 2023-10-11 16:29:25 -07:00
cgroup.c bpf: Implement cgroup sockaddr hooks for unix sockets 2023-10-11 17:27:47 -07:00
cgroup_iter.c bpf: Introduce css open-coded iterator kfuncs 2023-10-19 17:02:46 -07:00
core.c Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next 2023-09-17 15:12:06 +01:00
cpumap.c net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). 2023-10-17 15:02:03 +02:00
cpumask.c bpf: Convert bpf_cpumask to bpf_mem_cache_free_rcu. 2023-07-12 23:45:23 +02:00
devmap.c net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). 2023-10-17 15:02:03 +02:00
disasm.c bpf: change bpf_alu_sign_string and bpf_movsx_string to static 2023-08-04 16:15:50 -07:00
disasm.h
dispatcher.c
hashtab.c bpf: populate the per-cpu insertions/deletions counters for hashmaps 2023-07-06 12:42:25 -07:00
helpers.c bpf: Use bpf_global_percpu_ma for per-cpu kptr in __bpf_obj_drop_impl() 2023-10-20 14:15:13 -07:00
inode.c bpf: convert to ctime accessor functions 2023-07-24 10:30:07 +02:00
Kconfig bpf: Add fd-based tcx multi-prog infra with link support 2023-07-19 10:07:27 -07:00
link_iter.c
local_storage.c cgroup changes for v6.4-rc1 2023-04-29 10:05:22 -07:00
log.c bpf: drop unnecessary user-triggerable WARN_ONCE in verifierl log 2023-05-16 22:34:50 -07:00
lpm_trie.c bpf: Centralize permissions checks for all BPF map types 2023-06-19 14:04:04 +02:00
Makefile bpf: Add fd-based tcx multi-prog infra with link support 2023-07-19 10:07:27 -07:00
map_in_map.c bpf: Fix elem_size not being set for inner maps 2023-06-02 16:22:12 -07:00
map_in_map.h
map_iter.c bpf: allow any program to use the bpf_map_sum_elem_count kfunc 2023-07-19 09:48:53 -07:00
memalloc.c bpf: Use pcpu_alloc_size() in bpf_mem_free{_rcu}() 2023-10-20 14:15:13 -07:00
mmap_unlock_work.h
mprog.c bpf: Handle bpf_mprog_query with NULL entry 2023-10-06 17:11:20 -07:00
net_namespace.c
offload.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2023-09-21 21:49:45 +02:00
percpu_freelist.c
percpu_freelist.h
prog_iter.c
queue_stack_maps.c bpf: Avoid deadlock when using queue and stack maps from NMI 2023-09-11 19:04:49 -07:00
reuseport_array.c bpf: Centralize permissions checks for all BPF map types 2023-06-19 14:04:04 +02:00
ringbuf.c bpf: Remove unnecessary ring buffer size check 2023-07-05 14:09:45 +02:00
stackmap.c bpf: Annotate struct bpf_stack_map with __counted_by 2023-10-06 23:44:35 +02:00
syscall.c bpf: Use bpf_global_percpu_ma for per-cpu kptr in __bpf_obj_drop_impl() 2023-10-20 14:15:13 -07:00
sysfs_btf.c
task_iter.c bpf: Let bpf_iter_task_new accept null task ptr 2023-10-19 17:02:46 -07:00
tcx.c bpf, tcx: Get rid of tcx_link_const 2023-10-23 15:01:53 -07:00
tnum.c
trampoline.c bpf, x64: Fix tailcall infinite loop 2023-09-12 13:06:12 -07:00
verifier.c bpf: exact states comparison for iterator convergence checks 2023-10-23 21:49:31 -07:00