mm/page_alloc: find_large_buddy() from start_pfn aligned order

To find a large buddy, we iterate from order 0 up to MAX_PAGE_ORDER,
aligning pfn down to each order in turn. But for every order below the
alignment order of start_pfn, aligning down yields the same pfn, so we
repeat the same check on the same page.

Iterate from start_pfn's alignment order instead to avoid the
duplicated work.
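
The duplicated work comes from the alignment step: masking a pfn with
~0UL << order leaves it unchanged for every order up to __ffs(pfn),
because the bits being cleared are already zero. A minimal userspace
sketch of this property (plain C; the start_pfn value is hypothetical,
and __builtin_ctzl() stands in for the kernel's __ffs()):

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical pfn; lowest set bit is bit 6. */
		unsigned long start_pfn = 0x12340;
		int low_bit = __builtin_ctzl(start_pfn);

		for (int order = 0; order <= low_bit + 2; order++) {
			unsigned long pfn = start_pfn & (~0UL << order);

			/* Orders 0..low_bit all round down to the same pfn. */
			printf("order %2d: pfn %#lx%s\n", order, pfn,
			       pfn == start_pfn ? "  (same page re-checked)" : "");
		}
		return 0;
	}

Every order up to __ffs(start_pfn) would test PageBuddy() on the same
page, so starting the scan at that order skips only checks whose
outcome is already known.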

[richard.weiyang@gmail.com: add comment on assignment of order]
  Link: https://lkml.kernel.org/r/20250902025807.11467-1-richard.weiyang@gmail.com
Link: https://lkml.kernel.org/r/20250828091618.7869-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -2033,7 +2033,13 @@ static int move_freepages_block(struct zone *zone, struct page *page,
 /* Look for a buddy that straddles start_pfn */
 static unsigned long find_large_buddy(unsigned long start_pfn)
 {
-	int order = 0;
+	/*
+	 * If start_pfn is not an order-0 PageBuddy, next PageBuddy containing
+	 * start_pfn has minimal order of __ffs(start_pfn) + 1. Start checking
+	 * the order with __ffs(start_pfn). If start_pfn is order-0 PageBuddy,
+	 * the starting order does not matter.
+	 */
+	int order = start_pfn ? __ffs(start_pfn) : MAX_PAGE_ORDER;
 	struct page *page;
 	unsigned long pfn = start_pfn;
 
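
For context, the loop that consumes this starting order sits just below
the hunk and is unchanged. Paraphrased from the surrounding kernel
source (a sketch; details may differ slightly from the tree at this
commit), it clears one more low bit of pfn each time the buddy check
fails:

	while (!PageBuddy(page = pfn_to_page(pfn))) {
		/* Nothing found */
		if (++order > MAX_PAGE_ORDER)
			return start_pfn;
		pfn &= ~0UL << order;
	}

With order starting at __ffs(start_pfn), the first failed check on
start_pfn itself is immediately followed by a mask that clears the
lowest set bit, rather than by __ffs(start_pfn) no-op iterations.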