forked from mirrors/linux
	mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates
According to documentation on barriers, stores issued before a LOCK can complete after the lock, implying that it's possible tlb_flush_pending can be visible after a page table update. As per revised documentation, this patch adds a smp_mb__before_spinlock to guarantee the correct ordering.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 2084140594
commit af2c1401e6

1 changed file with 6 additions and 1 deletion
			
		| 
						 | 
				
			
			@ -482,7 +482,12 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 | 
			
		|||
static inline void set_tlb_flush_pending(struct mm_struct *mm)
 | 
			
		||||
{
 | 
			
		||||
	mm->tlb_flush_pending = true;
 | 
			
		||||
	barrier();
 | 
			
		||||
 | 
			
		||||
	/*
 | 
			
		||||
	 * Guarantee that the tlb_flush_pending store does not leak into the
 | 
			
		||||
	 * critical section updating the page tables
 | 
			
		||||
	 */
 | 
			
		||||
	smp_mb__before_spinlock();
 | 
			
		||||
}
 | 
			
		||||
/* Clearing is done after a TLB flush, which also provides a barrier. */
 | 
			
		||||
static inline void clear_tlb_flush_pending(struct mm_struct *mm)
 | 
			
		||||
| 