	mm/hugetlb: update nr_huge_pages and surplus_huge_pages together
In alloc_surplus_hugetlb_folio(), nr_huge_pages and surplus_huge_pages
are increased in two separate hugetlb_lock critical sections.  In the
window between the two updates, if nr_hugepages is lowered such that
count < persistent_huge_pages(h), adjust_pool_surplus() will increase
surplus_huge_pages as well, leaving the two counters inconsistent.
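
For illustration, a minimal sketch of the problematic ordering
(simplified; in the real code the first increment happens inside the
fresh-folio allocation path rather than being open-coded like this):

	/* CPU A: alloc_surplus_hugetlb_folio(), before this fix */
	spin_lock_irq(&hugetlb_lock);
	h->nr_huge_pages++;		/* page counted as persistent */
	spin_unlock_irq(&hugetlb_lock);

	/*
	 * Window: CPU B shrinks the pool here; adjust_pool_surplus()
	 * converts the new page to surplus and increments
	 * surplus_huge_pages itself.
	 */

	spin_lock_irq(&hugetlb_lock);
	h->surplus_huge_pages++;	/* surplus counted a second time */
	spin_unlock_irq(&hugetlb_lock);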
After adding a delay in this window, the problem can be reproduced
easily with the following steps:
 1. echo 3 > /proc/sys/vm/nr_overcommit_hugepages
 2. mmap two hugepages.  When nr_huge_pages=2 and surplus_huge_pages=1,
    proceed to step 3.
 3. echo 0 > /proc/sys/vm/nr_huge_pages
Finally, nr_huge_pages is less than surplus_huge_pages.
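
This state is harmful because persistent_huge_pages() is the unsigned
difference of the two counters, so it wraps around once nr_huge_pages <
surplus_huge_pages.  A self-contained userspace demo of the wraparound
(struct and helper paraphrased from mm/hugetlb.c, trimmed to the two
relevant fields):

	#include <stdio.h>

	/* Minimal model of the hugetlb pool counters. */
	struct hstate {
		unsigned long nr_huge_pages;
		unsigned long surplus_huge_pages;
	};

	/* Paraphrased from mm/hugetlb.c. */
	static unsigned long persistent_huge_pages(struct hstate *h)
	{
		return h->nr_huge_pages - h->surplus_huge_pages;
	}

	int main(void)
	{
		/* State reached by the reproducer: nr < surplus. */
		struct hstate h = {
			.nr_huge_pages = 0,
			.surplus_huge_pages = 1,
		};

		/* Wraps to ULONG_MAX, confusing the pool-resize logic. */
		printf("persistent = %lu\n", persistent_huge_pages(&h));
		return 0;
	}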
To fix the problem, call only_alloc_fresh_hugetlb_folio() instead and
move __prep_account_new_huge_page() down into the hugetlb_lock critical
section, so that both counters are updated under the same lock hold.
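
The resulting ordering, sketched (simplified; the surplus accounting
sits just below the hunk shown in the diff, and the pool-resize
double-check between the two updates is elided here):

	spin_lock_irq(&hugetlb_lock);
	__prep_account_new_huge_page(h, nid);	/* nr_huge_pages++ */
	/* (pool-resize race double-check elided) */
	h->surplus_huge_pages++;		/* same lock cycle */
	h->surplus_huge_pages_node[nid]++;
	spin_unlock_irq(&hugetlb_lock);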
Link: https://lkml.kernel.org/r/20250305035409.2391344-1-liushixin2@huawei.com
Fixes: 0c397daea1 ("mm, hugetlb: further simplify hugetlb allocation API")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Liu Shixin <liushixin2@huawei.com>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

This commit is contained in:
	parent 114b480877
	commit 2273dea6b1

1 changed file with 10 additions and 1 deletion
 mm/hugetlb.c | 11 ++++++++++-
@@ -2259,11 +2259,20 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 		goto out_unlock;
 	spin_unlock_irq(&hugetlb_lock);
 
-	folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask);
+	folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL);
 	if (!folio)
 		return NULL;
 
+	hugetlb_vmemmap_optimize_folio(h, folio);
+
 	spin_lock_irq(&hugetlb_lock);
+	/*
+	 * nr_huge_pages needs to be adjusted within the same lock cycle
+	 * as surplus_pages, otherwise it might confuse
+	 * persistent_huge_pages() momentarily.
+	 */
+	__prep_account_new_huge_page(h, nid);
+
 	/*
 	 * We could have raced with the pool size change.
 	 * Double check that and simply deallocate the new page