path: root/kernel/locking/qspinlock.c
Age         Commit message                                                                 Author                  Lines
2016-06-27  locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec()  Pan Xinhui              -1/+1
2016-06-14  locking/barriers: Introduce smp_acquire__after_ctrl_dep()                     Peter Zijlstra          -1/+1
2016-06-14  locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()     Peter Zijlstra          -6/+6
2016-06-08  locking/qspinlock: Add comments                                               Peter Zijlstra          -0/+57
2016-06-08  locking/qspinlock: Clarify xchg_tail() ordering                               Peter Zijlstra          -2/+13
2016-06-08  locking/qspinlock: Fix spin_unlock_wait() some more                           Peter Zijlstra          -0/+60
2016-02-29  locking/qspinlock: Use smp_cond_acquire() in pending code                     Waiman Long             -4/+3
2015-12-04  locking/pvqspinlock: Queue node adaptive spinning                             Waiman Long             -2/+3
2015-12-04  locking/pvqspinlock: Allow limited lock stealing                              Waiman Long             -6/+20
2015-12-04  locking, sched: Introduce smp_cond_acquire() and use it                       Peter Zijlstra          -2/+1
2015-11-23  locking/qspinlock: Avoid redundant read of next pointer                       Waiman Long             -3/+6
2015-11-23  locking/qspinlock: Prefetch the next node cacheline                           Waiman Long             -0/+10
2015-11-23  locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()     Waiman Long             -5/+24
2015-09-11  locking/qspinlock/x86: Fix performance regression under unaccelerated VMs     Peter Zijlstra          -1/+1
2015-08-03  locking/pvqspinlock: Only kick CPU at unlock time                             Waiman Long             -3/+3
2015-05-08  locking/pvqspinlock: Implement simple paravirt support for the qspinlock      Waiman Long             -1/+67
2015-05-08  locking/qspinlock: Revert to test-and-set on hypervisors                      Peter Zijlstra (Intel)  -0/+3
2015-05-08  locking/qspinlock: Use a simple write to grab the lock                        Waiman Long             -16/+50
2015-05-08  locking/qspinlock: Optimize for smaller NR_CPUS                               Peter Zijlstra (Intel)  -1/+68
2015-05-08  locking/qspinlock: Extract out code snippets for the next patch               Waiman Long             -31/+48
2015-05-08  locking/qspinlock: Add pending bit                                            Peter Zijlstra (Intel)  -21/+98
2015-05-08  locking/qspinlock: Introduce a simple generic 4-byte queued spinlock          Waiman Long             -0/+209