mm: do not lose dirty and accessed bits in pmdp_invalidate()

Vlastimil noted that pmdp_invalidate() is not atomic and we can lose dirty and accessed bits if the CPU sets them after the pmdp dereference, but before set_pmd_at().

This patch changes pmdp_invalidate() to make the entry non-present atomically and to return the previous value of the entry. That value can be used to check whether the CPU set dirty/accessed bits under us.

The race window is very small and I haven't seen any reports that can be attributed to the bug. For this reason, I don't think backporting to stable trees is needed.

Link: http://lkml.kernel.org/r/20171213105756.69879-11-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Daney <david.daney@cavium.com>
Cc: David Miller <davem@davemloft.net>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Nitin Gupta <nitin.m.gupta@oracle.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit d52605d7cb (parent 86fa949b05)
					 2 changed files with 4 additions and 4 deletions
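The diff below replaces the non-atomic read-then-write with pmdp_establish(), which installs the new entry and returns the old one in a single step. As a rough sketch only (not code from this commit, and with a hypothetical function name), an architecture whose CPUs set dirty/accessed bits in the page table directly could provide such a primitive with an atomic exchange, in the spirit of the x86 approach:

/*
 * Illustrative sketch only, not part of this commit: on an architecture
 * where the hardware updates dirty/accessed bits in page table entries,
 * an atomic exchange replaces the entry and returns the old value in a
 * single step, so no hardware A/D update can slip in between the read
 * and the write.
 */
static inline pmd_t example_pmdp_establish(struct vm_area_struct *vma,
		unsigned long address, pmd_t *pmdp, pmd_t pmd)
{
	/* xchg() atomically stores 'pmd' and returns the previous *pmdp. */
	return xchg(pmdp, pmd);
}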
				
			
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
@@ -325,7 +325,7 @@ static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
-extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 			    pmd_t *pmdp);
 #endif
 
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
@@ -181,12 +181,12 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
-void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 		     pmd_t *pmdp)
 {
-	pmd_t entry = *pmdp;
-	set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
+	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mknotpresent(*pmdp));
 	flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	return old;
 }
 #endif
 
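Because pmdp_invalidate() now returns the old entry, a caller can propagate any dirty/accessed bits the CPU set right up to the point of invalidation. The sketch below is illustrative only; the function name and its arguments (page, haddr, and the surrounding locking context) are assumptions for illustration, not code from this commit:

/*
 * Illustration only: how a caller might consume the pmd returned by
 * pmdp_invalidate(). The helper name and arguments are hypothetical;
 * assumes the usual mm headers and that the page table lock is held.
 */
static void example_collect_pmd_bits(struct vm_area_struct *vma,
				     unsigned long haddr, pmd_t *pmdp,
				     struct page *page)
{
	pmd_t old = pmdp_invalidate(vma, haddr, pmdp);

	/* Bits set by the CPU after the caller last read *pmdp are not lost. */
	if (pmd_dirty(old))
		SetPageDirty(page);
	if (pmd_young(old))
		SetPageReferenced(page);
}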