Readers of rwbase can lock and unlock without taking any inner lock; when
that happens, we need the ordering provided by atomic operations to
satisfy the ordering semantics of lock/unlock. Without that, consider the
following case:
    { X = 0 initially }

    CPU 0                               CPU 1
    =====                               =====
    rt_write_lock();
    X = 1
    rt_write_unlock():
      atomic_add(READER_BIAS - WRITER_BIAS, ->readers);
      // ->readers is READER_BIAS.
                                        rt_read_lock():
                                          if ((r = atomic_read(->readers)) < 0) // True
                                            atomic_try_cmpxchg(->readers, r, r + 1); // succeed.
                                          <acquire the read lock via fast path>

                                        r1 = X; // r1 may be 0: nothing orders "X = 1"
                                                // before the atomic_add() on CPU 0, so
                                                // CPU 1 can observe the unlock first.
Therefore, audit every usage of atomic operations that may happen in a
fast path, and add the necessary barriers.
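For reference outside the kernel tree, the same release/acquire pairing
can be modeled with standard C11 atomics. Everything below (names, bias
values, threads) is illustrative and not kernel code.

```c
/* Minimal user-space model of the rwbase fastpath ordering fix. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define READER_BIAS	(1U << 31)
#define WRITER_BIAS	(1U << 30)

static atomic_int readers = (int)WRITER_BIAS;	/* writer holds the lock */
static int X;					/* data protected by the lock */

static void *writer(void *arg)
{
	X = 1;	/* store inside the write critical section */

	/*
	 * Unlock fast path: RELEASE orders "X = 1" before ->readers
	 * flips to READER_BIAS (negative when viewed as an int).
	 */
	atomic_fetch_add_explicit(&readers, (int)(READER_BIAS - WRITER_BIAS),
				  memory_order_release);
	return NULL;
}

static void *reader(void *arg)
{
	for (;;) {
		int r = atomic_load_explicit(&readers, memory_order_relaxed);

		/*
		 * Read-lock fast path: once the writer is gone (r < 0),
		 * take a reader reference with ACQUIRE. This pairs with
		 * the writer's RELEASE above.
		 */
		if (r < 0 &&
		    atomic_compare_exchange_strong_explicit(
				&readers, &r, r + 1,
				memory_order_acquire, memory_order_relaxed))
			break;
	}

	assert(X == 1);	/* guaranteed by the acquire/release pairing */
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}
```

If the release and acquire orderings are weakened to
memory_order_relaxed, the final assertion may fail on weakly ordered
hardware, which is exactly the reordering shown in the example above.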
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20210909110203.953991276@infradead.org
| File |
|---|
| irqflag-debug.c |
| lock_events.c |
| lock_events.h |
| lock_events_list.h |
| lockdep.c |
| lockdep_internals.h |
| lockdep_proc.c |
| lockdep_states.h |
| locktorture.c |
| Makefile |
| mcs_spinlock.h |
| mutex-debug.c |
| mutex.c |
| mutex.h |
| osq_lock.c |
| percpu-rwsem.c |
| qrwlock.c |
| qspinlock.c |
| qspinlock_paravirt.h |
| qspinlock_stat.h |
| rtmutex.c |
| rtmutex_api.c |
| rtmutex_common.h |
| rwbase_rt.c |
| rwsem.c |
| semaphore.c |
| spinlock.c |
| spinlock_debug.c |
| spinlock_rt.c |
| test-ww_mutex.c |
| ww_mutex.h |
| ww_rt_mutex.c |