atomics: Add header comment so spin_unlock_wait()

There is material describing the ordering guarantees provided by
spin_unlock_wait(), but it is not necessarily easy to find. This commit
therefore adds a docbook header comment to this function informally
describing its semantics.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
parent 79269ee3fa
commit 6016ffc387

1 changed file with 20 additions and 0 deletions
			
include/linux/spinlock.h

@@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_unlock_wait - Interpose between successive critical sections
+ * @lock: the spinlock whose critical sections are to be interposed.
+ *
+ * Semantically this is equivalent to a spin_lock() immediately
+ * followed by a spin_unlock().  However, most architectures have
+ * more efficient implementations in which the spin_unlock_wait()
+ * cannot block concurrent lock acquisition, and in some cases
+ * where spin_unlock_wait() does not write to the lock variable.
+ * Nevertheless, spin_unlock_wait() can have high overhead, so if
+ * you feel the need to use it, please check to see if there is
+ * a better way to get your job done.
+ *
+ * The ordering guarantees provided by spin_unlock_wait() are:
+ *
+ * 1.  All accesses preceding the spin_unlock_wait() happen before
+ *     any accesses in later critical sections for this same lock.
+ * 2.  All accesses following the spin_unlock_wait() happen after
+ *     any accesses in earlier critical sections for this same lock.
+ */
 static __always_inline void spin_unlock_wait(spinlock_t *lock)
 {
 	raw_spin_unlock_wait(&lock->rlock);
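As a minimal sketch of how the two documented guarantees are typically relied on (all names below — dev_lock, dev_gone, dev_use(), do_io(), dev_teardown(), dev_destroy() — are hypothetical and not part of this commit), a teardown path can publish a flag and then wait out any in-flight critical section:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(dev_lock);	/* hypothetical lock guarding a device */
static bool dev_gone;			/* hypothetical teardown flag */

static void do_io(void);		/* hypothetical device access */
static void dev_destroy(void);		/* hypothetical final teardown */

/* Reader side: an ordinary critical section under dev_lock. */
static void dev_use(void)
{
	spin_lock(&dev_lock);
	if (!dev_gone)
		do_io();
	spin_unlock(&dev_lock);
}

/* Teardown side: set the flag, then interpose on dev_lock.
 * Per guarantee 1, the dev_gone store is visible to every later
 * critical section; per guarantee 2, all accesses in earlier
 * critical sections have completed before dev_destroy() runs. */
static void dev_teardown(void)
{
	dev_gone = true;
	spin_unlock_wait(&dev_lock);
	dev_destroy();
}

The point of reaching for spin_unlock_wait() here rather than a spin_lock()/spin_unlock() pair is exactly what the new comment states: most architectures can implement it without blocking concurrent lock acquisition, and in some cases without writing to the lock variable at all.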