kernel-fxtec-pro1x/kernel/locking
Jason Low 47667fa150 locking/mutexes: Modify the way optimistic spinners are queued
The mutex->spin_mlock was introduced to ensure that only one thread spins
for lock acquisition at a time, in order to reduce cache line contention.
When lock->owner is NULL and lock->count is still not 1, the spinner(s)
will repeatedly release and re-acquire lock->spin_mlock. This can generate
quite a bit of overhead/contention, and can also delay the spinner from
getting the lock.

This patch modifies the way optimistic spinners are queued: a spinner now
queues once before entering the optimistic spinning loop, as opposed to
acquiring the spin_mlock before every call to mutex_spin_on_owner(). In
situations where the spinner requires a few extra spins before obtaining
the lock, there will still be only one spinner trying to get the lock, and
the overhead of unnecessarily unlocking and re-locking the spin_mlock is
avoided.
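
For illustration only, below is a minimal user-space sketch of the
post-patch shape of this change, using C11 atomics and POSIX threads.
The names (demo_mutex, spin_mlock_acquire(), optimistic_spin(), worker())
are invented for this sketch, and a plain test-and-set gate stands in for
the kernel's MCS-style spin_mlock; this is not the kernel implementation.

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_mutex {
	atomic_int count;	/* 1: unlocked, 0: locked (mirrors mutex->count) */
	atomic_int spin_mlock;	/* stand-in for the MCS-style mutex->spin_mlock */
};

static void spin_mlock_acquire(struct demo_mutex *m)
{
	/* Plain test-and-set gate; the kernel uses an MCS queue here. */
	while (atomic_exchange_explicit(&m->spin_mlock, 1, memory_order_acquire))
		sched_yield();
}

static void spin_mlock_release(struct demo_mutex *m)
{
	atomic_store_explicit(&m->spin_mlock, 0, memory_order_release);
}

/*
 * Post-patch shape: queue up once, then keep retrying the fast path,
 * instead of re-taking the queue lock around every single attempt.
 */
static bool optimistic_spin(struct demo_mutex *m, int max_tries)
{
	bool acquired = false;

	spin_mlock_acquire(m);			/* queue once, before the loop */
	for (int i = 0; i < max_tries && !acquired; i++) {
		int expected = 1;

		/* Try to take the mutex itself (count: 1 -> 0). */
		acquired = atomic_compare_exchange_strong(&m->count,
							  &expected, 0);
		if (!acquired)
			sched_yield();		/* keep spinning while queued */
	}
	spin_mlock_release(m);			/* leave the queue once, win or lose */

	return acquired;
}

static struct demo_mutex m = { .count = 1, .spin_mlock = 0 };

static void *worker(void *arg)
{
	if (optimistic_spin(&m, 1000)) {
		printf("thread %ld acquired the demo mutex\n", (long)arg);
		atomic_store(&m.count, 1);	/* unlock */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[2];

	for (long i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	return 0;
}

The key point is the placement of spin_mlock_acquire()/spin_mlock_release()
outside the retry loop: a spinner that needs a few extra attempts keeps its
place in the queue instead of releasing and re-contending for the queue lock
on every attempt.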

Signed-off-by: Jason Low <jason.low2@hp.com>
Cc: tglx@linutronix.de
Cc: riel@redhat.com
Cc: akpm@linux-foundation.org
Cc: davidlohr@hp.com
Cc: hpa@zytor.com
Cc: andi@firstfloor.org
Cc: aswin@hp.com
Cc: scott.norton@hp.com
Cc: chegu_vinod@hp.com
Cc: Waiman.Long@hp.com
Cc: paulmck@linux.vnet.ibm.com
Cc: torvalds@linux-foundation.org
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1390936396-3962-3-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-03-11 12:14:53 +01:00
lglock.c locking: Move the lglocks code to kernel/locking/ 2013-11-06 09:24:20 +01:00
lockdep.c lockdep: Change mark_held_locks() to check hlock->check instead of lockdep_no_validate 2014-02-09 21:18:59 +01:00
lockdep_internals.h locking: Move the lockdep code to kernel/locking/ 2013-11-06 07:55:08 +01:00
lockdep_proc.c lockdep/proc: Fix lock-time avg computation 2013-11-11 12:41:34 +01:00
lockdep_states.h locking: Move the lockdep code to kernel/locking/ 2013-11-06 07:55:08 +01:00
Makefile locking: Move the percpu-rwsem code to kernel/locking/ 2013-11-06 09:24:22 +01:00
mcs_spinlock.h locking: Move mcs_spinlock.h into kernel/locking/ 2014-03-11 12:14:52 +01:00
mutex-debug.c mutexes: Give more informative mutex warning in the !lock->owner case 2013-12-17 15:35:10 +01:00
mutex-debug.h
mutex.c locking/mutexes: Modify the way optimistic spinners are queued 2014-03-11 12:14:53 +01:00
mutex.h
percpu-rwsem.c locking: Move the percpu-rwsem code to kernel/locking/ 2013-11-06 09:24:22 +01:00
rtmutex-debug.c rtmutex: Turn the plist into an rb-tree 2014-01-13 13:41:50 +01:00
rtmutex-debug.h locking: Move the rtmutex code to kernel/locking/ 2013-11-06 09:23:59 +01:00
rtmutex-tester.c locking: Move the rtmutex code to kernel/locking/ 2013-11-06 09:23:59 +01:00
rtmutex.c sched/deadline: Add SCHED_DEADLINE inheritance logic 2014-01-13 13:42:56 +01:00
rtmutex.h locking: Move the rtmutex code to kernel/locking/ 2013-11-06 09:23:59 +01:00
rtmutex_common.h sched/deadline: Add SCHED_DEADLINE inheritance logic 2014-01-13 13:42:56 +01:00
rwsem-spinlock.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
rwsem-xadd.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
rwsem.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
semaphore.c locking: Move the semaphore core to kernel/locking/ 2013-11-06 07:55:22 +01:00
spinlock.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00
spinlock_debug.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00