linux/kernel/locking
Jason Low 90631822c5 locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead
The cancellable MCS spinlock is currently used to queue threads that are
doing optimistic spinning. It uses per-cpu nodes: a thread obtaining the
lock accesses and queues the local node corresponding to the CPU it is
running on. The lock itself is currently implemented using pointers to
these nodes.
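
A minimal sketch of the pointer-based form being replaced (names
simplified and illustrative, not the exact in-tree code):

	/* One queue node per CPU. */
	struct optimistic_spin_node {
		struct optimistic_spin_node *next, *prev;
		int locked;		/* 1 if lock acquired */
	};

	static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node,
					     osq_node);

	/*
	 * The lock embedded in a mutex/rwsem is a raw tail pointer,
	 * costing a full 64 bits on 64-bit kernels:
	 */
	struct optimistic_spin_node *osq;	/* NULL when unqueued */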

In this patch, instead of operating on pointers to the per-cpu nodes, we
store in an atomic_t the CPU number that each per-cpu node corresponds to.
A similar concept is used by the qspinlock.

By operating on the CPU numbers of the nodes in an atomic_t instead of on
pointers to those nodes, we reduce the size of the cancellable MCS
spinlock by 32 bits on 64-bit systems (a 32-bit atomic_t replaces a
64-bit pointer).
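
A sketch of the atomic_t form along the lines of this patch (0 is
reserved to mean "queue empty", so CPU numbers are stored shifted up by
one; helper names follow the patch but are shown here illustratively):

	struct optimistic_spin_queue {
		/*
		 * Stores an encoded value of the CPU # of the tail node
		 * in the queue. If the queue is empty, OSQ_UNLOCKED_VAL
		 * is stored.
		 */
		atomic_t tail;
	};

	#define OSQ_UNLOCKED_VAL (0)

	/* Shift CPU numbers up by one so 0 can mean "unlocked". */
	static inline int encode_cpu(int cpu_nr)
	{
		return cpu_nr + 1;
	}

	static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
	{
		int cpu_nr = encoded_cpu_val - 1;

		return per_cpu_ptr(&osq_node, cpu_nr);
	}

Readers of lock->tail then recover the node pointer via decode_cpu(),
and the queueing cmpxchg/xchg operations work on the 32-bit tail value
instead of a 64-bit pointer.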

Signed-off-by: Jason Low <jason.low2@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Scott Norton <scott.norton@hp.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Chris Mason <clm@fb.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Josef Bacik <jbacik@fusionio.com>
Link: http://lkml.kernel.org/r/1405358872-3732-3-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 13:28:04 +02:00
lglock.c locking: Move the lglocks code to kernel/locking/ 2013-11-06 09:24:20 +01:00
lockdep.c asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* 2014-05-05 16:07:46 -07:00
lockdep_internals.h lockdep: Increase static allocations 2014-04-18 14:20:50 +02:00
lockdep_proc.c lockdep/proc: Fix lock-time avg computation 2013-11-11 12:41:34 +01:00
lockdep_states.h
locktorture.c Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next 2014-06-03 14:00:15 -07:00
Makefile locking/rwlocks: Introduce 'qrwlocks' - fair, queued rwlocks 2014-06-06 07:58:28 +02:00
mcs_spinlock.c locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead 2014-07-16 13:28:04 +02:00
mcs_spinlock.h locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead 2014-07-16 13:28:04 +02:00
mutex-debug.c locking/mutex: Fix debug_mutexes 2014-04-11 10:40:35 +02:00
mutex-debug.h
mutex.c locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead 2014-07-16 13:28:04 +02:00
mutex.h
percpu-rwsem.c locking: Move the percpu-rwsem code to kernel/locking/ 2013-11-06 09:24:22 +01:00
qrwlock.c locking/rwlocks: Introduce 'qrwlocks' - fair, queued rwlocks 2014-06-06 07:58:28 +02:00
rtmutex-debug.c rtmutex: Turn the plist into an rb-tree 2014-01-13 13:41:50 +01:00
rtmutex-debug.h rtmutex: Handle deadlock detection smarter 2014-06-07 14:55:40 +02:00
rtmutex-tester.c locking: Move the rtmutex code to kernel/locking/ 2013-11-06 09:23:59 +01:00
rtmutex.c rtmutex: Plug slow unlock race 2014-06-16 10:03:09 +02:00
rtmutex.h rtmutex: Handle deadlock detection smarter 2014-06-07 14:55:40 +02:00
rtmutex_common.h sched/deadline: Add SCHED_DEADLINE inheritance logic 2014-01-13 13:42:56 +01:00
rwsem-spinlock.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
rwsem-xadd.c locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead 2014-07-16 13:28:04 +02:00
rwsem.c locking/rwsem: Support optimistic spinning 2014-06-05 10:38:21 +02:00
semaphore.c locking: Move the semaphore core to kernel/locking/ 2013-11-06 07:55:22 +01:00
spinlock.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00
spinlock_debug.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00