Merge tag 'mm-stable-2022-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"Most of the MM queue. A few things are still pending.
Liam's maple tree rework didn't make it. This has resulted in a few
other minor patch series being held over for next time.
Multi-gen LRU still isn't merged as we were waiting for mapletree to
stabilize. The current plan is to merge MGLRU into -mm soon and to
later reintroduce mapletree, with a view to hopefully getting both
into 6.1-rc1.
Summary:
- The usual batches of cleanups from Baoquan He, Muchun Song, Miaohe
Lin, Yang Shi, Anshuman Khandual and Mike Rapoport
- Some kmemleak fixes from Patrick Wang and Waiman Long
- DAMON updates from SeongJae Park
- memcg debug/visibility work from Roman Gushchin
- vmalloc speedup from Uladzislau Rezki
- more folio conversion work from Matthew Wilcox
- enhancements for coherent device memory mapping from Alex Sierra
- addition of shared pages tracking and CoW support for fsdax, from
Shiyang Ruan
- hugetlb optimizations from Mike Kravetz
- Mel Gorman has contributed some pagealloc changes to improve
latency and realtime behaviour.
- mprotect soft-dirty checking has been improved by Peter Xu
- Many other singleton patches all over the place"
[ XFS merge from hell as per Darrick Wong in
https://lore.kernel.org/all/YshKnxb4VwXycPO8@magnolia/ ]
* tag 'mm-stable-2022-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (282 commits)
tools/testing/selftests/vm/hmm-tests.c: fix build
mm: Kconfig: fix typo
mm: memory-failure: convert to pr_fmt()
mm: use is_zone_movable_page() helper
hugetlbfs: fix inaccurate comment in hugetlbfs_statfs()
hugetlbfs: cleanup some comments in inode.c
hugetlbfs: remove unneeded header file
hugetlbfs: remove unneeded hugetlbfs_ops forward declaration
hugetlbfs: use helper macro SZ_1{K,M}
mm: cleanup is_highmem()
mm/hmm: add a test for cross device private faults
selftests: add soft-dirty into run_vmtests.sh
selftests: soft-dirty: add test for mprotect
mm/mprotect: fix soft-dirty check in can_change_pte_writable()
mm: memcontrol: fix potential oom_lock recursion deadlock
mm/gup.c: fix formatting in check_and_migrate_movable_page()
xfs: fail dax mount if reflink is enabled on a partition
mm/memcontrol.c: remove the redundant updating of stats_flush_threshold
userfaultfd: don't fail on unrecognized features
hugetlb_cgroup: fix wrong hugetlb cgroup numa stat
...
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * include/linux/pagevec.h
 *
 * In many places it is efficient to batch an operation up against multiple
 * pages. A pagevec is a multipage container which is used for that.
 */

#ifndef _LINUX_PAGEVEC_H
#define _LINUX_PAGEVEC_H

#include <linux/xarray.h>

/* 15 pointers + header align the pagevec structure to a power of two */
#define PAGEVEC_SIZE	15

struct page;
struct folio;
struct address_space;

/* Layout must match folio_batch */
struct pagevec {
	unsigned char nr;
	bool percpu_pvec_drained;
	struct page *pages[PAGEVEC_SIZE];
};

void __pagevec_release(struct pagevec *pvec);
unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
		struct address_space *mapping, pgoff_t *index, pgoff_t end,
		xa_mark_t tag);
static inline unsigned pagevec_lookup_tag(struct pagevec *pvec,
		struct address_space *mapping, pgoff_t *index, xa_mark_t tag)
{
	return pagevec_lookup_range_tag(pvec, mapping, index, (pgoff_t)-1, tag);
}

static inline void pagevec_init(struct pagevec *pvec)
{
	pvec->nr = 0;
	pvec->percpu_pvec_drained = false;
}

static inline void pagevec_reinit(struct pagevec *pvec)
{
	pvec->nr = 0;
}

static inline unsigned pagevec_count(struct pagevec *pvec)
{
	return pvec->nr;
}

static inline unsigned pagevec_space(struct pagevec *pvec)
{
	return PAGEVEC_SIZE - pvec->nr;
}

/*
 * Add a page to a pagevec.  Returns the number of slots still available.
 */
static inline unsigned pagevec_add(struct pagevec *pvec, struct page *page)
{
	pvec->pages[pvec->nr++] = page;
	return pagevec_space(pvec);
}

static inline void pagevec_release(struct pagevec *pvec)
{
	if (pagevec_count(pvec))
		__pagevec_release(pvec);
}

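/*
 * Usage sketch (illustrative only, not part of the upstream header):
 * the classic pagevec loop for visiting every page in a mapping that
 * carries a given xarray mark, e.g. PAGECACHE_TAG_DIRTY.  handle_page()
 * is a hypothetical per-page operation.  pagevec_lookup_range_tag()
 * refills the vector, takes a reference on each page it returns and
 * advances *index, so the loop ends once no tagged pages remain;
 * pagevec_release() drops those references.
 */
void handle_page(struct page *page);	/* hypothetical consumer */

static inline void example_walk_tagged(struct address_space *mapping,
		xa_mark_t tag)
{
	struct pagevec pvec;
	pgoff_t index = 0;
	unsigned i;

	pagevec_init(&pvec);
	while (pagevec_lookup_range_tag(&pvec, mapping, &index,
					(pgoff_t)-1, tag)) {
		for (i = 0; i < pagevec_count(&pvec); i++)
			handle_page(pvec.pages[i]);
		pagevec_release(&pvec);
	}
}
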
/**
 * struct folio_batch - A collection of folios.
 *
 * The folio_batch is used to amortise the cost of retrieving and
 * operating on a set of folios.  The order of folios in the batch may be
 * significant (eg delete_from_page_cache_batch()).  Some users of the
 * folio_batch store "exceptional" entries in it which can be removed
 * by calling folio_batch_remove_exceptionals().
 */
struct folio_batch {
	unsigned char nr;
	bool percpu_pvec_drained;
	struct folio *folios[PAGEVEC_SIZE];
};

/* Layout must match pagevec */
static_assert(sizeof(struct pagevec) == sizeof(struct folio_batch));
static_assert(offsetof(struct pagevec, pages) ==
		offsetof(struct folio_batch, folios));

/**
 * folio_batch_init() - Initialise a batch of folios
 * @fbatch: The folio batch.
 *
 * A freshly initialised folio_batch contains zero folios.
 */
static inline void folio_batch_init(struct folio_batch *fbatch)
{
	fbatch->nr = 0;
	fbatch->percpu_pvec_drained = false;
}

static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
{
	return fbatch->nr;
}

static inline unsigned int fbatch_space(struct folio_batch *fbatch)
{
	return PAGEVEC_SIZE - fbatch->nr;
}

/**
 * folio_batch_add() - Add a folio to a batch.
 * @fbatch: The folio batch.
 * @folio: The folio to add.
 *
 * The folio is added to the end of the batch.
 * The batch must have previously been initialised using folio_batch_init().
 *
 * Return: The number of slots still available.
 */
static inline unsigned folio_batch_add(struct folio_batch *fbatch,
		struct folio *folio)
{
	fbatch->folios[fbatch->nr++] = folio;
	return fbatch_space(fbatch);
}

static inline void folio_batch_release(struct folio_batch *fbatch)
{
	pagevec_release((struct pagevec *)fbatch);
}

void folio_batch_remove_exceptionals(struct folio_batch *fbatch);
#endif /* _LINUX_PAGEVEC_H */
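
To show the folio side of the same pattern, here is a minimal usage sketch, not taken from the kernel tree: fill a folio_batch until folio_batch_add() reports no free slots, process the folios, release the batch, and repeat. get_next_folio() and process_folio() are hypothetical stand-ins for a real producer (such as a page-cache lookup) and a per-folio consumer; everything else is the folio_batch API from the header above.

void process_folio(struct folio *folio);	/* hypothetical consumer */
struct folio *get_next_folio(void);		/* hypothetical producer */

static void example_drain_source(void)
{
	struct folio_batch fbatch;
	struct folio *folio;
	unsigned int i;

	folio_batch_init(&fbatch);	/* batch starts with zero folios */

	while ((folio = get_next_folio()) != NULL) {
		/* folio_batch_add() returns the slots still free. */
		if (folio_batch_add(&fbatch, folio) == 0) {
			for (i = 0; i < folio_batch_count(&fbatch); i++)
				process_folio(fbatch.folios[i]);
			/* Drops the references and resets the count to 0. */
			folio_batch_release(&fbatch);
		}
	}

	/* Flush whatever remains in a partially filled batch. */
	for (i = 0; i < folio_batch_count(&fbatch); i++)
		process_folio(fbatch.folios[i]);
	folio_batch_release(&fbatch);
}

The cast inside folio_batch_release() is the reason for the paired "Layout must match" comments: the two static_assert() checks pin the struct layouts together so a folio_batch can be handed straight to the pagevec release path, making the folio API a type-safe veneer over the existing pagevec machinery rather than a second implementation.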