Merge tag 'mm-stable-2025-03-30-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

 - The series "Enable strict percpu address space checks" from Uros Bizjak uses x86 named address space qualifiers to provide compile-time checking of percpu area accesses. This has caused a small amount of fallout - two or three issues were reported. In all cases the calling code was found to be incorrect.
- The series "Some cleanup for memcg" from Chen Ridong implements some relatively monir cleanups for the memcontrol code. - The series "mm: fixes for device-exclusive entries (hmm)" from David Hildenbrand fixes a boatload of issues which David found then using device-exclusive PTE entries when THP is enabled. More work is needed, but this makes thins better - our own HMM selftests now succeed. - The series "mm: zswap: remove z3fold and zbud" from Yosry Ahmed remove the z3fold and zbud implementations. They have been deprecated for half a year and nobody has complained. - The series "mm: further simplify VMA merge operation" from Lorenzo Stoakes implements numerous simplifications in this area. No runtime effects are anticipated. - The series "mm/madvise: remove redundant mmap_lock operations from process_madvise()" from SeongJae Park rationalizes the locking in the madvise() implementation. Performance gains of 20-25% were observed in one MADV_DONTNEED microbenchmark. - The series "Tiny cleanup and improvements about SWAP code" from Baoquan He contains a number of touchups to issues which Baoquan noticed when working on the swap code. - The series "mm: kmemleak: Usability improvements" from Catalin Marinas implements a couple of improvements to the kmemleak user-visible output. - The series "mm/damon/paddr: fix large folios access and schemes handling" from Usama Arif provides a couple of fixes for DAMON's handling of large folios. - The series "mm/damon/core: fix wrong and/or useless damos_walk() behaviors" from SeongJae Park fixes a few issues with the accuracy of kdamond's walking of DAMON regions. - The series "expose mapping wrprotect, fix fb_defio use" from Lorenzo Stoakes changes the interaction between framebuffer deferred-io and core MM. No functional changes are anticipated - this is preparatory work for the future removal of page structure fields. - The series "mm/damon: add support for hugepage_size DAMOS filter" from Usama Arif adds a DAMOS filter which permits the filtering by huge page sizes. - The series "mm: permit guard regions for file-backed/shmem mappings" from Lorenzo Stoakes extends the guard region feature from its present "anon mappings only" state. The feature now covers shmem and file-backed mappings. - The series "mm: batched unmap lazyfree large folios during reclamation" from Barry Song cleans up and speeds up the unmapping for pte-mapped large folios. - The series "reimplement per-vma lock as a refcount" from Suren Baghdasaryan puts the vm_lock back into the vma. Our reasons for pulling it out were largely bogus and that change made the code more messy. This patchset provides small (0-10%) improvements on one microbenchmark. - The series "Docs/mm/damon: misc DAMOS filters documentation fixes and improves" from SeongJae Park does some maintenance work on the DAMON docs. - The series "hugetlb/CMA improvements for large systems" from Frank van der Linden addresses a pile of issues which have been observed when using CMA on large machines. - The series "mm/damon: introduce DAMOS filter type for unmapped pages" from SeongJae Park enables users of DMAON/DAMOS to filter my the page's mapped/unmapped status. - The series "zsmalloc/zram: there be preemption" from Sergey Senozhatsky teaches zram to run its compression and decompression operations preemptibly. - The series "selftests/mm: Some cleanups from trying to run them" from Brendan Jackman fixes a pile of unrelated issues which Brendan encountered while runnimg our selftests. 
- The series "fs/proc/task_mmu: add guard region bit to pagemap" from Lorenzo Stoakes permits userspace to use /proc/pid/pagemap to determine whether a particular page is a guard page. - The series "mm, swap: remove swap slot cache" from Kairui Song removes the swap slot cache from the allocation path - it simply wasn't being effective. - The series "mm: cleanups for device-exclusive entries (hmm)" from David Hildenbrand implements a number of unrelated cleanups in this code. - The series "mm: Rework generic PTDUMP configs" from Anshuman Khandual implements a number of preparatoty cleanups to the GENERIC_PTDUMP Kconfig logic. - The series "mm/damon: auto-tune aggregation interval" from SeongJae Park implements a feedback-driven automatic tuning feature for DAMON's aggregation interval tuning. - The series "Fix lazy mmu mode" from Ryan Roberts fixes some issues in powerpc, sparc and x86 lazy MMU implementations. Ryan did this in preparation for implementing lazy mmu mode for arm64 to optimize vmalloc. - The series "mm/page_alloc: Some clarifications for migratetype fallback" from Brendan Jackman reworks some commentary to make the code easier to follow. - The series "page_counter cleanup and size reduction" from Shakeel Butt cleans up the page_counter code and fixes a size increase which we accidentally added late last year. - The series "Add a command line option that enables control of how many threads should be used to allocate huge pages" from Thomas Prescher does that. It allows the careful operator to significantly reduce boot time by tuning the parallalization of huge page initialization. - The series "Fix calculations in trace_balance_dirty_pages() for cgwb" from Tang Yizhou fixes the tracing output from the dirty page balancing code. - The series "mm/damon: make allow filters after reject filters useful and intuitive" from SeongJae Park improves the handling of allow and reject filters. Behaviour is made more consistent and the documention is updated accordingly. - The series "Switch zswap to object read/write APIs" from Yosry Ahmed updates zswap to the new object read/write APIs and thus permits the removal of some legacy code from zpool and zsmalloc. - The series "Some trivial cleanups for shmem" from Baolin Wang does as it claims. - The series "fs/dax: Fix ZONE_DEVICE page reference counts" from Alistair Popple regularizes the weird ZONE_DEVICE page refcount handling in DAX, permittig the removal of a number of special-case checks. - The series "refactor mremap and fix bug" from Lorenzo Stoakes is a preparatoty refactoring and cleanup of the mremap() code. - The series "mm: MM owner tracking for large folios (!hugetlb) + CONFIG_NO_PAGE_MAPCOUNT" from David Hildenbrand reworks the manner in which we determine whether a large folio is known to be mapped exclusively into a single MM. - The series "mm/damon: add sysfs dirs for managing DAMOS filters based on handling layers" from SeongJae Park adds a couple of new sysfs directories to ease the management of DAMON/DAMOS filters. - The series "arch, mm: reduce code duplication in mem_init()" from Mike Rapoport consolidates many per-arch implementations of mem_init() into code generic code, where that is practical. - The series "mm/damon/sysfs: commit parameters online via damon_call()" from SeongJae Park continues the cleaning up of sysfs access to DAMON internal data. 
- The series "mm: page_ext: Introduce new iteration API" from Luiz Capitulino reworks the page_ext initialization to fix a boot-time crash which was observed with an unusual combination of compile and cmdline options. - The series "Buddy allocator like (or non-uniform) folio split" from Zi Yan reworks the code to split a folio into smaller folios. The main benefit is lessened memory consumption: fewer post-split folios are generated. - The series "Minimize xa_node allocation during xarry split" from Zi Yan reduces the number of xarray xa_nodes which are generated during an xarray split. - The series "drivers/base/memory: Two cleanups" from Gavin Shan performs some maintenance work on the drivers/base/memory code. - The series "Add tracepoints for lowmem reserves, watermarks and totalreserve_pages" from Martin Liu adds some more tracepoints to the page allocator code. - The series "mm/madvise: cleanup requests validations and classifications" from SeongJae Park cleans up some warts which SeongJae observed during his earlier madvise work. - The series "mm/hwpoison: Fix regressions in memory failure handling" from Shuai Xue addresses two quite serious regressions which Shuai has observed in the memory-failure implementation. - The series "mm: reliable huge page allocator" from Johannes Weiner makes huge page allocations cheaper and more reliable by reducing fragmentation. - The series "Minor memcg cleanups & prep for memdescs" from Matthew Wilcox is preparatory work for the future implementation of memdescs. - The series "track memory used by balloon drivers" from Nico Pache introduces a way to track memory used by our various balloon drivers. - The series "mm/damon: introduce DAMOS filter type for active pages" from Nhat Pham permits users to filter for active/inactive pages, separately for file and anon pages. - The series "Adding Proactive Memory Reclaim Statistics" from Hao Jia separates the proactive reclaim statistics from the direct reclaim statistics. - The series "mm/vmscan: don't try to reclaim hwpoison folio" from Jinjiang Tu fixes our handling of hwpoisoned pages within the reclaim code. * tag 'mm-stable-2025-03-30-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (431 commits) mm/page_alloc: remove unnecessary __maybe_unused in order_to_pindex() x86/mm: restore early initialization of high_memory for 32-bits mm/vmscan: don't try to reclaim hwpoison folio mm/hwpoison: introduce folio_contain_hwpoisoned_page() helper cgroup: docs: add pswpin and pswpout items in cgroup v2 doc mm: vmscan: split proactive reclaim statistics from direct reclaim statistics selftests/mm: speed up split_huge_page_test selftests/mm: uffd-unit-tests support for hugepages > 2M docs/mm/damon/design: document active DAMOS filter type mm/damon: implement a new DAMOS filter type for active pages fs/dax: don't disassociate zero page entries MM documentation: add "Unaccepted" meminfo entry selftests/mm: add commentary about 9pfs bugs fork: use __vmalloc_node() for stack allocation docs/mm: Physical Memory: Populate the "Zones" section xen: balloon: update the NR_BALLOON_PAGES state hv_balloon: update the NR_BALLOON_PAGES state balloon_compaction: update the NR_BALLOON_PAGES state meminfo: add a per node counter for balloon drivers mm: remove references to folio in __memcg_kmem_uncharge_page() ...
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * include/linux/writeback.h
 */
#ifndef WRITEBACK_H
#define WRITEBACK_H

#include <linux/sched.h>
#include <linux/workqueue.h>
#include <linux/fs.h>
#include <linux/flex_proportions.h>
#include <linux/backing-dev-defs.h>
#include <linux/blk_types.h>
#include <linux/pagevec.h>

struct bio;

DECLARE_PER_CPU(int, dirty_throttle_leaks);

/*
 * The global dirty threshold is normally equal to the global dirty limit,
 * except when the system suddenly allocates a lot of anonymous memory and
 * knocks down the global dirty threshold quickly, in which case the global
 * dirty limit will follow down slowly to prevent livelocking all dirtier tasks.
 */
#define DIRTY_SCOPE		8

struct backing_dev_info;

/*
 * fs/fs-writeback.c
 */
enum writeback_sync_modes {
	WB_SYNC_NONE,	/* Don't wait on anything */
	WB_SYNC_ALL,	/* Wait on every mapping */
};

/*
 * A control structure which tells the writeback code what to do.  These are
 * always on the stack, and hence need no locking.  They are always initialised
 * in a manner such that unspecified fields are set to zero.
 */
struct writeback_control {
	/* public fields that can be set and/or consumed by the caller: */
	long nr_to_write;		/* Write this many pages, and decrement
					   this for each page written */
	long pages_skipped;		/* Pages which were not written */

	/*
	 * For a_ops->writepages(): if start or end are non-zero then this is
	 * a hint that the filesystem need only write out the pages inside that
	 * byterange.  The byte at `end' is included in the writeout request.
	 */
	loff_t range_start;
	loff_t range_end;

	enum writeback_sync_modes sync_mode;

	unsigned for_kupdate:1;		/* A kupdate writeback */
	unsigned for_background:1;	/* A background writeback */
	unsigned tagged_writepages:1;	/* tag-and-write to avoid livelock */
	unsigned for_reclaim:1;		/* Invoked from the page allocator */
	unsigned range_cyclic:1;	/* range_start is cyclic */
	unsigned for_sync:1;		/* sync(2) WB_SYNC_ALL writeback */
	unsigned unpinned_netfs_wb:1;	/* Cleared I_PINNING_NETFS_WB */

	/*
	 * When writeback IOs are bounced through async layers, only the
	 * initial synchronous phase should be accounted towards inode
	 * cgroup ownership arbitration to avoid confusion.  Later stages
	 * can set the following flag to disable the accounting.
	 */
	unsigned no_cgroup_owner:1;

	/* To enable batching of swap writes to non-block-device backends,
	 * "plug" can be set to point to a 'struct swap_iocb *'.  When all
	 * swap writes have been submitted, if the swap_iocb is not NULL,
	 * swap_write_unplug() should be called.
	 */
	struct swap_iocb **swap_plug;

	/* Target list for splitting a large folio */
	struct list_head *list;

	/* internal fields used by the ->writepages implementation: */
	struct folio_batch fbatch;
	pgoff_t index;
	int saved_err;

#ifdef CONFIG_CGROUP_WRITEBACK
	struct bdi_writeback *wb;	/* wb this writeback is issued under */
	struct inode *inode;		/* inode being written out */

	/* foreign inode detection, see wbc_detach_inode() */
	int wb_id;			/* current wb id */
	int wb_lcand_id;		/* last foreign candidate wb id */
	int wb_tcand_id;		/* this foreign candidate wb id */
	size_t wb_bytes;		/* bytes written by current wb */
	size_t wb_lcand_bytes;		/* bytes written by last candidate */
	size_t wb_tcand_bytes;		/* bytes written by this candidate */
#endif
};

static inline blk_opf_t wbc_to_write_flags(struct writeback_control *wbc)
{
	blk_opf_t flags = 0;

	if (wbc->sync_mode == WB_SYNC_ALL)
		flags |= REQ_SYNC;
	else if (wbc->for_kupdate || wbc->for_background)
		flags |= REQ_BACKGROUND;

	return flags;
}
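
/*
 * Usage sketch (illustrative only; example_flush_range() is a hypothetical
 * caller, not part of this header).  Because a writeback_control lives on
 * the caller's stack and unspecified fields default to zero, a caller that
 * wants to push out a byte range synchronously can do something like:
 *
 *	static int example_flush_range(struct address_space *mapping,
 *				       loff_t start, loff_t end)
 *	{
 *		struct writeback_control wbc = {
 *			.sync_mode	= WB_SYNC_ALL,
 *			.nr_to_write	= LONG_MAX,
 *			.range_start	= start,
 *			.range_end	= end,
 *		};
 *
 *		return do_writepages(mapping, &wbc);
 *	}
 *
 * A block-based ->writepages implementation would typically feed
 * wbc_to_write_flags(&wbc) into its bio submission, so WB_SYNC_ALL maps to
 * REQ_SYNC and kupdate/background writeback maps to REQ_BACKGROUND.
 */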

#ifdef CONFIG_CGROUP_WRITEBACK
#define wbc_blkcg_css(wbc) \
	((wbc)->wb ? (wbc)->wb->blkcg_css : blkcg_root_css)
#else
#define wbc_blkcg_css(wbc)	(blkcg_root_css)
#endif /* CONFIG_CGROUP_WRITEBACK */

/*
 * A wb_domain represents a domain that wb's (bdi_writeback's) belong to
 * and are measured against each other in.  There always is one global
 * domain, global_wb_domain, that every wb in the system is a member of.
 * This allows measuring the relative bandwidth of each wb to distribute
 * dirtyable memory accordingly.
 */
struct wb_domain {
	spinlock_t lock;

	/*
	 * Scale the writeback cache size proportional to the relative
	 * writeout speed.
	 *
	 * We do this by keeping a floating proportion between BDIs, based
	 * on page writeback completions [end_page_writeback()]. Those
	 * devices that write out pages fastest will get the larger share,
	 * while the slower will get a smaller share.
	 *
	 * We use page writeout completions because we are interested in
	 * getting rid of dirty pages. Having them written out is the
	 * primary goal.
	 *
	 * We introduce a concept of time, a period over which we measure
	 * these events, because demand can/will vary over time. The length
	 * of this period itself is measured in page writeback completions.
	 */
	struct fprop_global completions;
	struct timer_list period_timer;	/* timer for aging of completions */
	unsigned long period_time;

	/*
	 * The dirtyable memory and dirty threshold could be suddenly
	 * knocked down by a large amount (eg. on the startup of KVM in a
	 * swapless system). This may throw the system into deep dirty
	 * exceeded state and throttle heavy/light dirtiers alike. To
	 * retain good responsiveness, maintain global_dirty_limit for
	 * tracking slowly down to the knocked down dirty threshold.
	 *
	 * Both fields are protected by ->lock.
	 */
	unsigned long dirty_limit_tstamp;
	unsigned long dirty_limit;
};

/**
 * wb_domain_size_changed - memory available to a wb_domain has changed
 * @dom: wb_domain of interest
 *
 * This function should be called when the amount of memory available to
 * @dom has changed.  It resets @dom's dirty limit parameters to prevent
 * the past values which don't match the current configuration from skewing
 * dirty throttling.  Without this, when memory size of a wb_domain is
 * greatly reduced, the dirty throttling logic may allow too many pages to
 * be dirtied leading to consecutive unnecessary OOMs and may get stuck in
 * that situation.
 */
static inline void wb_domain_size_changed(struct wb_domain *dom)
{
	spin_lock(&dom->lock);
	dom->dirty_limit_tstamp = jiffies;
	dom->dirty_limit = 0;
	spin_unlock(&dom->lock);
}

/*
 * fs/fs-writeback.c
 */
struct bdi_writeback;
void writeback_inodes_sb(struct super_block *, enum wb_reason reason);
void writeback_inodes_sb_nr(struct super_block *, unsigned long nr,
							enum wb_reason reason);
void try_to_writeback_inodes_sb(struct super_block *sb, enum wb_reason reason);
void sync_inodes_sb(struct super_block *);
void wakeup_flusher_threads(enum wb_reason reason);
void wakeup_flusher_threads_bdi(struct backing_dev_info *bdi,
				enum wb_reason reason);
void inode_wait_for_writeback(struct inode *inode);
void inode_io_list_del(struct inode *inode);

/* writeback.h requires fs.h; it, too, is not included from here. */
static inline void wait_on_inode(struct inode *inode)
{
	wait_var_event(inode_state_wait_address(inode, __I_NEW),
		       !(READ_ONCE(inode->i_state) & I_NEW));
}

#ifdef CONFIG_CGROUP_WRITEBACK

#include <linux/cgroup.h>
#include <linux/bio.h>

void __inode_attach_wb(struct inode *inode, struct folio *folio);
void wbc_detach_inode(struct writeback_control *wbc);
void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio,
			      size_t bytes);
int cgroup_writeback_by_id(u64 bdi_id, int memcg_id,
			   enum wb_reason reason, struct wb_completion *done);
void cgroup_writeback_umount(struct super_block *sb);
bool cleanup_offline_cgwb(struct bdi_writeback *wb);

/**
 * inode_attach_wb - associate an inode with its wb
 * @inode: inode of interest
 * @folio: folio being dirtied (may be NULL)
 *
 * If @inode doesn't have its wb, associate it with the wb matching the
 * memcg of @folio or, if @folio is NULL, %current.  May be called w/ or w/o
 * @inode->i_lock.
 */
static inline void inode_attach_wb(struct inode *inode, struct folio *folio)
{
	if (!inode->i_wb)
		__inode_attach_wb(inode, folio);
}

/**
 * inode_detach_wb - disassociate an inode from its wb
 * @inode: inode of interest
 *
 * @inode is being freed.  Detach from its wb.
 */
static inline void inode_detach_wb(struct inode *inode)
{
	if (inode->i_wb) {
		WARN_ON_ONCE(!(inode->i_state & I_CLEAR));
		wb_put(inode->i_wb);
		inode->i_wb = NULL;
	}
}

void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
		struct inode *inode);

/**
 * wbc_init_bio - writeback specific initialization of bio
 * @wbc: writeback_control for the writeback in progress
 * @bio: bio to be initialized
 *
 * @bio is a part of the writeback in progress controlled by @wbc.  Perform
 * writeback specific initialization.  This is used to apply the cgroup
 * writeback context.  Must be called after the bio has been associated with
 * a device.
 */
static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
{
	/*
	 * pageout() path doesn't attach @wbc to the inode being written
	 * out.  This is intentional as we don't want the function to block
	 * behind a slow cgroup.  Ultimately, we want pageout() to kick off
	 * regular writeback instead of writing things out itself.
	 */
	if (wbc->wb)
		bio_associate_blkg_from_css(bio, wbc->wb->blkcg_css);
}
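
/*
 * Usage sketch (illustrative only; example_write_inode_data() and its bio
 * are hypothetical, not part of this header).  A cgroup-writeback aware
 * path brackets the I/O with attach/detach so foreign-inode detection and
 * ownership accounting work, initialises each bio against the wbc, and
 * reports how many bytes were written on behalf of which folio:
 *
 *	static int example_write_inode_data(struct inode *inode,
 *					    struct writeback_control *wbc,
 *					    struct folio *folio,
 *					    struct bio *bio)
 *	{
 *		int ret;
 *
 *		wbc_attach_fdatawrite_inode(wbc, inode);
 *		wbc_init_bio(wbc, bio);		/* after bio has a device */
 *		wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
 *		ret = submit_bio_wait(bio);
 *		wbc_detach_inode(wbc);
 *		return ret;
 *	}
 */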

#else	/* CONFIG_CGROUP_WRITEBACK */

static inline void inode_attach_wb(struct inode *inode, struct folio *folio)
{
}

static inline void inode_detach_wb(struct inode *inode)
{
}

static inline void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
					       struct inode *inode)
{
}

static inline void wbc_detach_inode(struct writeback_control *wbc)
{
}

static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
{
}

static inline void wbc_account_cgroup_owner(struct writeback_control *wbc,
					    struct folio *folio, size_t bytes)
{
}

static inline void cgroup_writeback_umount(struct super_block *sb)
{
}

#endif	/* CONFIG_CGROUP_WRITEBACK */

/*
 * mm/page-writeback.c
 */
/* consolidated parameters for balance_dirty_pages() and its subroutines */
struct dirty_throttle_control {
#ifdef CONFIG_CGROUP_WRITEBACK
	struct wb_domain	*dom;
	struct dirty_throttle_control *gdtc;	/* only set in memcg dtc's */
#endif
	struct bdi_writeback	*wb;
	struct fprop_local_percpu *wb_completions;

	unsigned long		avail;		/* dirtyable */
	unsigned long		dirty;		/* file_dirty + write + nfs */
	unsigned long		thresh;		/* dirty threshold */
	unsigned long		bg_thresh;	/* dirty background threshold */
	unsigned long		limit;		/* hard dirty limit */

	unsigned long		wb_dirty;	/* per-wb counterparts */
	unsigned long		wb_thresh;
	unsigned long		wb_bg_thresh;

	unsigned long		pos_ratio;
	bool			freerun;
	bool			dirty_exceeded;
};

void laptop_io_completion(struct backing_dev_info *info);
void laptop_sync_completion(void);
void laptop_mode_timer_fn(struct timer_list *t);
bool node_dirty_ok(struct pglist_data *pgdat);
int wb_domain_init(struct wb_domain *dom, gfp_t gfp);
#ifdef CONFIG_CGROUP_WRITEBACK
void wb_domain_exit(struct wb_domain *dom);
#endif

extern struct wb_domain global_wb_domain;

/* These are exported to sysctl. */
extern unsigned int dirty_writeback_interval;
extern unsigned int dirty_expire_interval;
extern int laptop_mode;

void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty);
unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh);
unsigned long cgwb_calc_thresh(struct bdi_writeback *wb);

void wb_update_bandwidth(struct bdi_writeback *wb);

/* Invoke balance dirty pages in async mode. */
#define BDP_ASYNC 0x0001

void balance_dirty_pages_ratelimited(struct address_space *mapping);
int balance_dirty_pages_ratelimited_flags(struct address_space *mapping,
		unsigned int flags);
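
/*
 * Usage sketch (illustrative only; example_dirty_folio() is hypothetical).
 * A buffered-write path that has just dirtied page cache should call the
 * ratelimited throttle so the task is slowed down once the dirty
 * thresholds above are approached; callers that must not sleep can use
 * the _flags() variant with BDP_ASYNC instead:
 *
 *	static void example_dirty_folio(struct address_space *mapping,
 *					struct folio *folio)
 *	{
 *		folio_mark_dirty(folio);
 *		balance_dirty_pages_ratelimited(mapping);
 *	}
 */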

bool wb_over_bg_thresh(struct bdi_writeback *wb);

struct folio *writeback_iter(struct address_space *mapping,
		struct writeback_control *wbc, struct folio *folio, int *error);
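
/*
 * Usage sketch (illustrative only; example_writepages() and
 * example_write_one_folio() are hypothetical).  writeback_iter() is meant
 * to be driven in a loop from a ->writepages implementation, with the
 * folio pointer starting out NULL and the final error returned through
 * @error:
 *
 *	static int example_writepages(struct address_space *mapping,
 *				      struct writeback_control *wbc)
 *	{
 *		struct folio *folio = NULL;
 *		int error = 0;
 *
 *		while ((folio = writeback_iter(mapping, wbc, folio, &error)))
 *			error = example_write_one_folio(folio, wbc);
 *			/* writes and unlocks the folio */
 *		return error;
 *	}
 *
 * write_cache_pages() below wraps the same kind of loop around a
 * writepage_t callback for filesystems that prefer a callback style.
 */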

typedef int (*writepage_t)(struct folio *folio, struct writeback_control *wbc,
				void *data);

int write_cache_pages(struct address_space *mapping,
		      struct writeback_control *wbc, writepage_t writepage,
		      void *data);
int do_writepages(struct address_space *mapping, struct writeback_control *wbc);
void writeback_set_ratelimit(void);
void tag_pages_for_writeback(struct address_space *mapping,
			     pgoff_t start, pgoff_t end);
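
/*
 * Usage sketch (illustrative only).  tag_pages_for_writeback() implements
 * the tag-and-write scheme referenced by wbc->tagged_writepages: dirty
 * folios in the range are tagged PAGECACHE_TAG_TOWRITE up front and the
 * subsequent iteration only writes folios carrying that tag, so folios
 * dirtied while writeback is running cannot livelock it.
 * writeback_iter()/write_cache_pages() already do this internally for
 * WB_SYNC_ALL or tagged_writepages writeback; an open-coded ->writepages
 * iteration would mirror that with something like:
 *
 *	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
 *		tag_pages_for_writeback(mapping, index, end);
 */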

bool filemap_dirty_folio(struct address_space *mapping, struct folio *folio);
bool folio_redirty_for_writepage(struct writeback_control *, struct folio *);
bool redirty_page_for_writepage(struct writeback_control *, struct page *);

void sb_mark_inode_writeback(struct inode *inode);
void sb_clear_inode_writeback(struct inode *inode);

#endif /* WRITEBACK_H */