mirror of https://github.com/torvalds/linux.git
synced 2025-10-31 16:48:26 +02:00
	hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list
hugetlb_vmdelete_list() uses trylock to acquire VMA locks during truncate operations. Per the original design in commit 40549ba8f8 ("hugetlb: use new vma_lock for pmd sharing synchronization"), if the trylock fails or the VMA has no lock, that VMA should be skipped. Any remaining mapped pages are handled by remove_inode_hugepages(), which is called after hugetlb_vmdelete_list() and uses proper lock ordering to guarantee that the unmap succeeds.

Currently, when hugetlb_vma_trylock_write() returns success (1) for VMAs without shareable locks, the code proceeds to call unmap_hugepage_range(). This causes assertion failures in huge_pmd_unshare() → hugetlb_vma_assert_locked() because no lock is actually held:

WARNING: CPU: 1 PID: 6594 Comm: syz.0.28 Not tainted
Call Trace:
 hugetlb_vma_assert_locked+0x1dd/0x250
 huge_pmd_unshare+0x2c8/0x540
 __unmap_hugepage_range+0x6e3/0x1aa0
 unmap_hugepage_range+0x32e/0x410
 hugetlb_vmdelete_list+0x189/0x1f0

Fix by using goto to ensure locks acquired by trylock are always released, even when skipping VMAs without shareable locks.

Link: https://lkml.kernel.org/r/20250926033255.10930-1-kartikey406@gmail.com
Fixes: 40549ba8f8 ("hugetlb: use new vma_lock for pmd sharing synchronization")
Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
Reported-by: syzbot+f26d7c75c26ec19790e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f26d7c75c26ec19790e7
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:
parent fb552b2425
commit dd83609b88

1 changed file with 9 additions and 0 deletions
@@ -478,6 +478,14 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		if (!hugetlb_vma_trylock_write(vma))
 			continue;
 
+		/*
+		 * Skip VMAs without shareable locks. Per the design in commit
+		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
+		 * called after this function with proper locking.
+		 */
+		if (!__vma_shareable_lock(vma))
+			goto skip;
+
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
 
@@ -488,6 +496,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		 * vmas.  Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
+skip:
 		hugetlb_vma_unlock_write(vma);
 	}
 }