	rmap: recompute pgoff for unmapping huge page
We have to recompute pgoff if the given page is huge, since the result based
on HPAGE_SIZE is not appropriate for scanning the vma interval tree, as
shown by commit 36e4f20af8 ("hugetlb: do not use vma_hugecache_offset()
for vma_prio_tree_foreach").
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
			
			
This commit is contained in:

parent 8375ad98cc
commit 369a713e96

1 changed file with 3 additions and 0 deletions
@@ -1513,6 +1513,9 @@ static int try_to_unmap_file(struct page *page, enum ttu_flags flags)
 	unsigned long max_nl_size = 0;
 	unsigned int mapcount;
 
+	if (PageHuge(page))
+		pgoff = page->index << compound_order(page);
+
 	mutex_lock(&mapping->i_mmap_mutex);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
 		unsigned long address = vma_address(page, vma);