linux/kernel/locking
Qian Cai 33190b675c locking/osq_lock: Annotate a data race in osq_lock
The prev->next pointer can be accessed concurrently as noticed by KCSAN:

 write (marked) to 0xffff9d3370dbbe40 of 8 bytes by task 3294 on cpu 107:
  osq_lock+0x25f/0x350
  osq_wait_next at kernel/locking/osq_lock.c:79
  (inlined by) osq_lock at kernel/locking/osq_lock.c:185
  rwsem_optimistic_spin
  <snip>

 read to 0xffff9d3370dbbe40 of 8 bytes by task 3398 on cpu 100:
  osq_lock+0x196/0x350
  osq_lock at kernel/locking/osq_lock.c:157
  rwsem_optimistic_spin
  <snip>

The write only stores NULL to prev->next, and the read merely tests whether
prev->next equals this_cpu_ptr(&osq_node). Even if the loaded value is torn,
the code still works correctly. Thus, mark it as an intentional data race
using the data_race() macro (see the annotated sketch below).

Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-06-29 12:04:48 -07:00
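
For context, the annotation wraps the unsynchronized read of prev->next in
osq_lock()'s unqueue path in data_race(). The snippet below is a reconstructed
sketch for illustration only; identifiers follow kernel/locking/osq_lock.c,
but it is not the verbatim hunk from commit 33190b675c:

  /*
   * Step - A -- stabilize @prev (sketch, reconstructed for illustration).
   *
   * The plain load of prev->next is wrapped in data_race() so that KCSAN
   * treats this unsynchronized read as intentional; correctness does not
   * depend on the loaded value, since the cmpxchg() below re-checks it and
   * performs the actual update.
   */
  for (;;) {
          if (data_race(prev->next) == node &&
              cmpxchg(&prev->next, node, NULL) == node)
                  break;

          /*
           * The cmpxchg() can only fail against a concurrent unlock(),
           * in which case @node->locked will be observed as true.
           */
          if (smp_load_acquire(&node->locked))
                  return true;

          cpu_relax();

          /* A concurrent unqueue() may have handed us a new @prev. */
          prev = READ_ONCE(node->prev);
  }
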
lock_events.c locking/lock_events: Don't show pvqspinlock events on bare metal 2019-04-10 10:56:05 +02:00
lock_events.h locking/lock_events: Use raw_cpu_{add,inc}() for stats 2019-06-03 12:32:56 +02:00
lock_events_list.h locking/rwsem: Adaptive disabling of reader optimistic spinning 2019-06-17 12:28:09 +02:00
lockdep.c lockdep: __always_inline more for noinstr 2020-06-11 15:15:28 +02:00
lockdep_internals.h locking/lockdep: Reuse freed chain_hlocks entries 2020-02-11 13:10:52 +01:00
lockdep_proc.c locking/lockdep: Reuse freed chain_hlocks entries 2020-02-11 13:10:52 +01:00
lockdep_states.h locking/lockdep: Rework FS_RECLAIM annotation 2017-08-10 12:29:03 +02:00
locktorture.c locktorture: Forgive apparent unfairness if CPU hotplug 2020-02-20 15:59:59 -08:00
Makefile kcsan: Make KCSAN compatible with lockdep 2020-03-21 09:41:16 +01:00
mcs_spinlock.h locking/mcs: Use smp_cond_load_acquire() in MCS spin loop 2018-04-27 09:48:49 +02:00
mutex-debug.c lockdep: Introduce wait-type checks 2020-03-21 16:00:24 +01:00
mutex-debug.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
mutex.c Revert "locking/mutex: Complain upon mutex API misuse in IRQ contexts" 2019-12-11 00:27:43 +01:00
mutex.h mutex: Fix up mutex_waiter usage 2019-08-08 09:09:25 +02:00
osq_lock.c locking/osq_lock: Annotate a data race in osq_lock 2020-06-29 12:04:48 -07:00
percpu-rwsem.c locking/percpu-rwsem: Fix a task_struct refcount 2020-04-08 12:05:06 +02:00
qrwlock.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
qspinlock.c locking/qspinlock: Fix inaccessible URL of MCS lock paper 2020-01-17 10:19:30 +01:00
qspinlock_paravirt.h Revert "locking/pvqspinlock: Don't wait if vCPU is preempted" 2019-09-25 10:22:37 +02:00
qspinlock_stat.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 2019-05-30 11:26:37 -07:00
rtmutex-debug.c kernel: rename show_stack_loglvl() => show_stack() 2020-06-09 09:39:13 -07:00
rtmutex-debug.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
rtmutex.c locking/rtmutex: Remove unused rt_mutex_cmpxchg_relaxed() 2020-04-27 12:26:40 +02:00
rtmutex.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
rtmutex_common.h locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter() 2018-03-28 23:01:30 +02:00
rwsem.c lockdep: Introduce wait-type checks 2020-03-21 16:00:24 +01:00
rwsem.h locking/percpu-rwsem: Remove the embedded rwsem 2020-02-11 13:10:56 +01:00
semaphore.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 436 2019-06-05 17:37:17 +02:00
spinlock.c asm-generic/mmiowb: Add generic implementation of mmiowb() tracking 2019-04-08 11:59:39 +01:00
spinlock_debug.c lockdep: Introduce wait-type checks 2020-03-21 16:00:24 +01:00
test-ww_mutex.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 9 2019-05-21 11:28:40 +02:00