Merge tag 'mm-stable-2025-03-30-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

- The series "Enable strict percpu address space checks" from Uros Bizjak uses x86 named address space qualifiers to provide compile-time checking of percpu area accesses. This has caused a small amount of fallout - two or three issues were reported. In all cases the calling code was found to be incorrect.
- The series "Some cleanup for memcg" from Chen Ridong implements some relatively monir cleanups for the memcontrol code. - The series "mm: fixes for device-exclusive entries (hmm)" from David Hildenbrand fixes a boatload of issues which David found then using device-exclusive PTE entries when THP is enabled. More work is needed, but this makes thins better - our own HMM selftests now succeed. - The series "mm: zswap: remove z3fold and zbud" from Yosry Ahmed remove the z3fold and zbud implementations. They have been deprecated for half a year and nobody has complained. - The series "mm: further simplify VMA merge operation" from Lorenzo Stoakes implements numerous simplifications in this area. No runtime effects are anticipated. - The series "mm/madvise: remove redundant mmap_lock operations from process_madvise()" from SeongJae Park rationalizes the locking in the madvise() implementation. Performance gains of 20-25% were observed in one MADV_DONTNEED microbenchmark. - The series "Tiny cleanup and improvements about SWAP code" from Baoquan He contains a number of touchups to issues which Baoquan noticed when working on the swap code. - The series "mm: kmemleak: Usability improvements" from Catalin Marinas implements a couple of improvements to the kmemleak user-visible output. - The series "mm/damon/paddr: fix large folios access and schemes handling" from Usama Arif provides a couple of fixes for DAMON's handling of large folios. - The series "mm/damon/core: fix wrong and/or useless damos_walk() behaviors" from SeongJae Park fixes a few issues with the accuracy of kdamond's walking of DAMON regions. - The series "expose mapping wrprotect, fix fb_defio use" from Lorenzo Stoakes changes the interaction between framebuffer deferred-io and core MM. No functional changes are anticipated - this is preparatory work for the future removal of page structure fields. - The series "mm/damon: add support for hugepage_size DAMOS filter" from Usama Arif adds a DAMOS filter which permits the filtering by huge page sizes. - The series "mm: permit guard regions for file-backed/shmem mappings" from Lorenzo Stoakes extends the guard region feature from its present "anon mappings only" state. The feature now covers shmem and file-backed mappings. - The series "mm: batched unmap lazyfree large folios during reclamation" from Barry Song cleans up and speeds up the unmapping for pte-mapped large folios. - The series "reimplement per-vma lock as a refcount" from Suren Baghdasaryan puts the vm_lock back into the vma. Our reasons for pulling it out were largely bogus and that change made the code more messy. This patchset provides small (0-10%) improvements on one microbenchmark. - The series "Docs/mm/damon: misc DAMOS filters documentation fixes and improves" from SeongJae Park does some maintenance work on the DAMON docs. - The series "hugetlb/CMA improvements for large systems" from Frank van der Linden addresses a pile of issues which have been observed when using CMA on large machines. - The series "mm/damon: introduce DAMOS filter type for unmapped pages" from SeongJae Park enables users of DMAON/DAMOS to filter my the page's mapped/unmapped status. - The series "zsmalloc/zram: there be preemption" from Sergey Senozhatsky teaches zram to run its compression and decompression operations preemptibly. - The series "selftests/mm: Some cleanups from trying to run them" from Brendan Jackman fixes a pile of unrelated issues which Brendan encountered while runnimg our selftests. 
- The series "fs/proc/task_mmu: add guard region bit to pagemap" from Lorenzo Stoakes permits userspace to use /proc/pid/pagemap to determine whether a particular page is a guard page. - The series "mm, swap: remove swap slot cache" from Kairui Song removes the swap slot cache from the allocation path - it simply wasn't being effective. - The series "mm: cleanups for device-exclusive entries (hmm)" from David Hildenbrand implements a number of unrelated cleanups in this code. - The series "mm: Rework generic PTDUMP configs" from Anshuman Khandual implements a number of preparatoty cleanups to the GENERIC_PTDUMP Kconfig logic. - The series "mm/damon: auto-tune aggregation interval" from SeongJae Park implements a feedback-driven automatic tuning feature for DAMON's aggregation interval tuning. - The series "Fix lazy mmu mode" from Ryan Roberts fixes some issues in powerpc, sparc and x86 lazy MMU implementations. Ryan did this in preparation for implementing lazy mmu mode for arm64 to optimize vmalloc. - The series "mm/page_alloc: Some clarifications for migratetype fallback" from Brendan Jackman reworks some commentary to make the code easier to follow. - The series "page_counter cleanup and size reduction" from Shakeel Butt cleans up the page_counter code and fixes a size increase which we accidentally added late last year. - The series "Add a command line option that enables control of how many threads should be used to allocate huge pages" from Thomas Prescher does that. It allows the careful operator to significantly reduce boot time by tuning the parallalization of huge page initialization. - The series "Fix calculations in trace_balance_dirty_pages() for cgwb" from Tang Yizhou fixes the tracing output from the dirty page balancing code. - The series "mm/damon: make allow filters after reject filters useful and intuitive" from SeongJae Park improves the handling of allow and reject filters. Behaviour is made more consistent and the documention is updated accordingly. - The series "Switch zswap to object read/write APIs" from Yosry Ahmed updates zswap to the new object read/write APIs and thus permits the removal of some legacy code from zpool and zsmalloc. - The series "Some trivial cleanups for shmem" from Baolin Wang does as it claims. - The series "fs/dax: Fix ZONE_DEVICE page reference counts" from Alistair Popple regularizes the weird ZONE_DEVICE page refcount handling in DAX, permittig the removal of a number of special-case checks. - The series "refactor mremap and fix bug" from Lorenzo Stoakes is a preparatoty refactoring and cleanup of the mremap() code. - The series "mm: MM owner tracking for large folios (!hugetlb) + CONFIG_NO_PAGE_MAPCOUNT" from David Hildenbrand reworks the manner in which we determine whether a large folio is known to be mapped exclusively into a single MM. - The series "mm/damon: add sysfs dirs for managing DAMOS filters based on handling layers" from SeongJae Park adds a couple of new sysfs directories to ease the management of DAMON/DAMOS filters. - The series "arch, mm: reduce code duplication in mem_init()" from Mike Rapoport consolidates many per-arch implementations of mem_init() into code generic code, where that is practical. - The series "mm/damon/sysfs: commit parameters online via damon_call()" from SeongJae Park continues the cleaning up of sysfs access to DAMON internal data. 
- The series "mm: page_ext: Introduce new iteration API" from Luiz Capitulino reworks the page_ext initialization to fix a boot-time crash which was observed with an unusual combination of compile and cmdline options. - The series "Buddy allocator like (or non-uniform) folio split" from Zi Yan reworks the code to split a folio into smaller folios. The main benefit is lessened memory consumption: fewer post-split folios are generated. - The series "Minimize xa_node allocation during xarry split" from Zi Yan reduces the number of xarray xa_nodes which are generated during an xarray split. - The series "drivers/base/memory: Two cleanups" from Gavin Shan performs some maintenance work on the drivers/base/memory code. - The series "Add tracepoints for lowmem reserves, watermarks and totalreserve_pages" from Martin Liu adds some more tracepoints to the page allocator code. - The series "mm/madvise: cleanup requests validations and classifications" from SeongJae Park cleans up some warts which SeongJae observed during his earlier madvise work. - The series "mm/hwpoison: Fix regressions in memory failure handling" from Shuai Xue addresses two quite serious regressions which Shuai has observed in the memory-failure implementation. - The series "mm: reliable huge page allocator" from Johannes Weiner makes huge page allocations cheaper and more reliable by reducing fragmentation. - The series "Minor memcg cleanups & prep for memdescs" from Matthew Wilcox is preparatory work for the future implementation of memdescs. - The series "track memory used by balloon drivers" from Nico Pache introduces a way to track memory used by our various balloon drivers. - The series "mm/damon: introduce DAMOS filter type for active pages" from Nhat Pham permits users to filter for active/inactive pages, separately for file and anon pages. - The series "Adding Proactive Memory Reclaim Statistics" from Hao Jia separates the proactive reclaim statistics from the direct reclaim statistics. - The series "mm/vmscan: don't try to reclaim hwpoison folio" from Jinjiang Tu fixes our handling of hwpoisoned pages within the reclaim code. * tag 'mm-stable-2025-03-30-16-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (431 commits) mm/page_alloc: remove unnecessary __maybe_unused in order_to_pindex() x86/mm: restore early initialization of high_memory for 32-bits mm/vmscan: don't try to reclaim hwpoison folio mm/hwpoison: introduce folio_contain_hwpoisoned_page() helper cgroup: docs: add pswpin and pswpout items in cgroup v2 doc mm: vmscan: split proactive reclaim statistics from direct reclaim statistics selftests/mm: speed up split_huge_page_test selftests/mm: uffd-unit-tests support for hugepages > 2M docs/mm/damon/design: document active DAMOS filter type mm/damon: implement a new DAMOS filter type for active pages fs/dax: don't disassociate zero page entries MM documentation: add "Unaccepted" meminfo entry selftests/mm: add commentary about 9pfs bugs fork: use __vmalloc_node() for stack allocation docs/mm: Physical Memory: Populate the "Zones" section xen: balloon: update the NR_BALLOON_PAGES state hv_balloon: update the NR_BALLOON_PAGES state balloon_compaction: update the NR_BALLOON_PAGES state meminfo: add a per node counter for balloon drivers mm: remove references to folio in __memcg_kmem_uncharge_page() ...
		
			
				
	
	
		
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * linux/percpu-defs.h - basic definitions for percpu areas
 *
 * DO NOT INCLUDE DIRECTLY OUTSIDE PERCPU IMPLEMENTATION PROPER.
 *
 * This file is separate from linux/percpu.h to avoid cyclic inclusion
 * dependency from arch header files.  Only to be included from
 * asm/percpu.h.
 *
 * This file includes macros necessary to declare percpu sections and
 * variables, and definitions of percpu accessors and operations.  It
 * should provide enough percpu features to arch header files even when
 * they can only include asm/percpu.h to avoid cyclic inclusion dependency.
 */

#ifndef _LINUX_PERCPU_DEFS_H
#define _LINUX_PERCPU_DEFS_H

#ifdef CONFIG_SMP

#ifdef MODULE
#define PER_CPU_SHARED_ALIGNED_SECTION ""
#define PER_CPU_ALIGNED_SECTION ""
#else
#define PER_CPU_SHARED_ALIGNED_SECTION "..shared_aligned"
#define PER_CPU_ALIGNED_SECTION "..shared_aligned"
#endif

#else

#define PER_CPU_SHARED_ALIGNED_SECTION ""
#define PER_CPU_ALIGNED_SECTION "..shared_aligned"

#endif

/*
 * Base implementations of per-CPU variable declarations and definitions, where
 * the section in which the variable is to be placed is provided by the
 * 'sec' argument.  This may be used to affect the parameters governing the
 * variable's storage.
 *
 * NOTE!  The sections for the DECLARE and for the DEFINE must match, lest
 * linkage errors occur due the compiler generating the wrong code to access
 * that section.
 */
#define __PCPU_ATTRS(sec)						\
	__percpu __attribute__((section(PER_CPU_BASE_SECTION sec)))	\
	PER_CPU_ATTRIBUTES

#define __PCPU_DUMMY_ATTRS						\
	__section(".discard") __attribute__((unused))

/*
 * s390 and alpha modules require percpu variables to be defined as
 * weak to force the compiler to generate GOT based external
 * references for them.  This is necessary because percpu sections
 * will be located outside of the usually addressable area.
 *
 * This definition puts the following two extra restrictions when
 * defining percpu variables.
 *
 * 1. The symbol must be globally unique, even the static ones.
 * 2. Static percpu variables cannot be defined inside a function.
 *
 * Archs which need weak percpu definitions should define
 * ARCH_NEEDS_WEAK_PER_CPU in asm/percpu.h when necessary.
 *
 * To ensure that the generic code observes the above two
 * restrictions, if CONFIG_DEBUG_FORCE_WEAK_PER_CPU is set weak
 * definition is used for all cases.
 */
#if defined(ARCH_NEEDS_WEAK_PER_CPU) || defined(CONFIG_DEBUG_FORCE_WEAK_PER_CPU)
/*
 * __pcpu_scope_* dummy variable is used to enforce scope.  It
 * receives the static modifier when it's used in front of
 * DEFINE_PER_CPU() and will trigger build failure if
 * DECLARE_PER_CPU() is used for the same variable.
 *
 * __pcpu_unique_* dummy variable is used to enforce symbol uniqueness
 * such that hidden weak symbol collision, which will cause unrelated
 * variables to share the same address, can be detected during build.
 */
#define DECLARE_PER_CPU_SECTION(type, name, sec)			\
	extern __PCPU_DUMMY_ATTRS char __pcpu_scope_##name;		\
	extern __PCPU_ATTRS(sec) __typeof__(type) name

#define DEFINE_PER_CPU_SECTION(type, name, sec)				\
	__PCPU_DUMMY_ATTRS char __pcpu_scope_##name;			\
	extern __PCPU_DUMMY_ATTRS char __pcpu_unique_##name;		\
	__PCPU_DUMMY_ATTRS char __pcpu_unique_##name;			\
	extern __PCPU_ATTRS(sec) __typeof__(type) name;			\
	__PCPU_ATTRS(sec) __weak __typeof__(type) name
#else
/*
 * Normal declaration and definition macros.
 */
#define DECLARE_PER_CPU_SECTION(type, name, sec)			\
	extern __PCPU_ATTRS(sec) __typeof__(type) name

#define DEFINE_PER_CPU_SECTION(type, name, sec)				\
	__PCPU_ATTRS(sec) __typeof__(type) name
#endif

/*
 * Variant on the per-CPU variable declaration/definition theme used for
 * ordinary per-CPU variables.
 */
#define DECLARE_PER_CPU(type, name)					\
	DECLARE_PER_CPU_SECTION(type, name, "")

#define DEFINE_PER_CPU(type, name)					\
	DEFINE_PER_CPU_SECTION(type, name, "")

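/*
 * Example of the ordinary declare/define pairing (the variable name below is
 * illustrative, not one defined elsewhere in the tree): declare in a header
 * with DECLARE_PER_CPU() and define exactly once in a .c file with
 * DEFINE_PER_CPU(); both must use the same section variant.
 *
 *	// some_header.h
 *	DECLARE_PER_CPU(int, cpu_hit_count);
 *
 *	// some_file.c
 *	DEFINE_PER_CPU(int, cpu_hit_count);
 *	EXPORT_PER_CPU_SYMBOL(cpu_hit_count);
 */
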
/*
 * Declaration/definition used for per-CPU variables that are frequently
 * accessed and should be in a single cacheline.
 *
 * For use only by architecture and core code.  Only use scalar or pointer
 * types to maximize density.
 */
#define DECLARE_PER_CPU_CACHE_HOT(type, name)				\
	DECLARE_PER_CPU_SECTION(type, name, "..hot.." #name)

#define DEFINE_PER_CPU_CACHE_HOT(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, "..hot.." #name)

/*
 * Declaration/definition used for per-CPU variables that must be cacheline
 * aligned under SMP conditions so that, whilst a particular instance of the
 * data corresponds to a particular CPU, inefficiencies due to direct access by
 * other CPUs are reduced by preventing the data from unnecessarily spanning
 * cachelines.
 *
 * An example of this would be statistical data, where each CPU's set of data
 * is updated by that CPU alone, but the data from across all CPUs is collated
 * by a CPU processing a read from a proc file.
 */
#define DECLARE_PER_CPU_SHARED_ALIGNED(type, name)			\
	DECLARE_PER_CPU_SECTION(type, name, PER_CPU_SHARED_ALIGNED_SECTION) \
	____cacheline_aligned_in_smp

#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)			\
	DEFINE_PER_CPU_SECTION(type, name, PER_CPU_SHARED_ALIGNED_SECTION) \
	____cacheline_aligned_in_smp

#define DECLARE_PER_CPU_ALIGNED(type, name)				\
	DECLARE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION)	\
	____cacheline_aligned

#define DEFINE_PER_CPU_ALIGNED(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, PER_CPU_ALIGNED_SECTION)	\
	____cacheline_aligned

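/*
 * Sketch of the statistics pattern described above (the counter name is
 * illustrative; for_each_possible_cpu() is assumed from linux/cpumask.h):
 *
 *	DEFINE_PER_CPU_SHARED_ALIGNED(unsigned long, pkt_count);
 *
 *	// hot path: each CPU only updates its own copy
 *	this_cpu_inc(pkt_count);
 *
 *	// slow path: one CPU collates the totals
 *	unsigned long total = 0;
 *	int cpu;
 *
 *	for_each_possible_cpu(cpu)
 *		total += per_cpu(pkt_count, cpu);
 */
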
/*
 * Declaration/definition used for per-CPU variables that must be page aligned.
 */
#define DECLARE_PER_CPU_PAGE_ALIGNED(type, name)			\
	DECLARE_PER_CPU_SECTION(type, name, "..page_aligned")		\
	__aligned(PAGE_SIZE)

#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, "..page_aligned")		\
	__aligned(PAGE_SIZE)

/*
 * Declaration/definition used for per-CPU variables that must be read mostly.
 */
#define DECLARE_PER_CPU_READ_MOSTLY(type, name)			\
	DECLARE_PER_CPU_SECTION(type, name, "..read_mostly")

#define DEFINE_PER_CPU_READ_MOSTLY(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, "..read_mostly")

/*
 * Declaration/definition used for per-CPU variables that should be accessed
 * as decrypted when memory encryption is enabled in the guest.
 */
#ifdef CONFIG_AMD_MEM_ENCRYPT
#define DECLARE_PER_CPU_DECRYPTED(type, name)				\
	DECLARE_PER_CPU_SECTION(type, name, "..decrypted")

#define DEFINE_PER_CPU_DECRYPTED(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, "..decrypted")
#else
#define DEFINE_PER_CPU_DECRYPTED(type, name)	DEFINE_PER_CPU(type, name)
#endif

/*
 * Intermodule exports for per-CPU variables.  sparse forgets about
 * address space across EXPORT_SYMBOL(), change EXPORT_SYMBOL() to
 * noop if __CHECKER__.
 */
#ifndef __CHECKER__
#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(var)
#define EXPORT_PER_CPU_SYMBOL_GPL(var) EXPORT_SYMBOL_GPL(var)
#else
#define EXPORT_PER_CPU_SYMBOL(var)
#define EXPORT_PER_CPU_SYMBOL_GPL(var)
#endif

/*
 * Accessors and operations.
 */
#ifndef __ASSEMBLY__

/*
 * __verify_pcpu_ptr() verifies @ptr is a percpu pointer without evaluating
 * @ptr and is invoked once before a percpu area is accessed by all
 * accessors and operations.  This is performed in the generic part of
 * percpu and arch overrides don't need to worry about it; however, if an
 * arch wants to implement an arch-specific percpu accessor or operation,
 * it may use __verify_pcpu_ptr() to verify the parameters.
 *
 * + 0 is required in order to convert the pointer type from a
 * potential array type to a pointer to a single item of the array.
 */
#define __verify_pcpu_ptr(ptr)						\
do {									\
	const void __percpu *__vpp_verify = (typeof((ptr) + 0))NULL;	\
	(void)__vpp_verify;						\
} while (0)

#define PERCPU_PTR(__p)							\
	(TYPEOF_UNQUAL(*(__p)) __force __kernel *)((__force unsigned long)(__p))

#ifdef CONFIG_SMP

/*
 * Add an offset to a pointer.  Use RELOC_HIDE() to prevent the compiler
 * from making incorrect assumptions about the pointer value.
 */
#define SHIFT_PERCPU_PTR(__p, __offset)					\
	RELOC_HIDE(PERCPU_PTR(__p), (__offset))

#define per_cpu_ptr(ptr, cpu)						\
({									\
	__verify_pcpu_ptr(ptr);						\
	SHIFT_PERCPU_PTR((ptr), per_cpu_offset((cpu)));			\
})

#define raw_cpu_ptr(ptr)						\
({									\
	__verify_pcpu_ptr(ptr);						\
	arch_raw_cpu_ptr(ptr);						\
})

#ifdef CONFIG_DEBUG_PREEMPT
#define this_cpu_ptr(ptr)						\
({									\
	__verify_pcpu_ptr(ptr);						\
	SHIFT_PERCPU_PTR(ptr, my_cpu_offset);				\
})
#else
#define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)
#endif

#else	/* CONFIG_SMP */

#define per_cpu_ptr(ptr, cpu)						\
({									\
	(void)(cpu);							\
	__verify_pcpu_ptr(ptr);						\
	PERCPU_PTR(ptr);						\
})

#define raw_cpu_ptr(ptr)	per_cpu_ptr(ptr, 0)
#define this_cpu_ptr(ptr)	raw_cpu_ptr(ptr)

#endif	/* CONFIG_SMP */

#define per_cpu(var, cpu)	(*per_cpu_ptr(&(var), cpu))

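/*
 * Sketch of per_cpu_ptr() on dynamically allocated percpu memory (the names
 * are illustrative; alloc_percpu()/free_percpu() are assumed from
 * linux/percpu.h):
 *
 *	struct my_stats __percpu *stats = alloc_percpu(struct my_stats);
 *	int cpu;
 *
 *	if (!stats)
 *		return -ENOMEM;
 *	for_each_possible_cpu(cpu)
 *		per_cpu_ptr(stats, cpu)->errors = 0;
 *	...
 *	free_percpu(stats);
 */
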
/*
 * Must be an lvalue. Since @var must be a simple identifier,
 * we force a syntax error here if it isn't.
 */
#define get_cpu_var(var)						\
(*({									\
	preempt_disable();						\
	this_cpu_ptr(&var);						\
}))

/*
 * The weird & is necessary because sparse considers (void)(var) to be
 * a direct dereference of percpu variable (var).
 */
#define put_cpu_var(var)						\
do {									\
	(void)&(var);							\
	preempt_enable();						\
} while (0)

#define get_cpu_ptr(var)						\
({									\
	preempt_disable();						\
	this_cpu_ptr(var);						\
})

#define put_cpu_ptr(var)						\
do {									\
	(void)(var);							\
	preempt_enable();						\
} while (0)

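/*
 * Sketch of the get/put pairing (the variable name is illustrative):
 * preemption stays disabled between the two calls, so the task cannot
 * migrate away from the CPU whose copy it is working on.
 *
 *	DEFINE_PER_CPU(int, pending_count);
 *
 *	int *cnt = &get_cpu_var(pending_count);
 *	(*cnt)++;
 *	put_cpu_var(pending_count);
 */
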
/*
 * Branching function to split up a function into a set of functions that
 * are called for different scalar sizes of the objects handled.
 */

extern void __bad_size_call_parameter(void);

#ifdef CONFIG_DEBUG_PREEMPT
extern void __this_cpu_preempt_check(const char *op);
#else
static __always_inline void __this_cpu_preempt_check(const char *op) { }
#endif

#define __pcpu_size_call_return(stem, variable)				\
({									\
	TYPEOF_UNQUAL(variable) pscr_ret__;				\
	__verify_pcpu_ptr(&(variable));					\
	switch(sizeof(variable)) {					\
	case 1: pscr_ret__ = stem##1(variable); break;			\
	case 2: pscr_ret__ = stem##2(variable); break;			\
	case 4: pscr_ret__ = stem##4(variable); break;			\
	case 8: pscr_ret__ = stem##8(variable); break;			\
	default:							\
		__bad_size_call_parameter(); break;			\
	}								\
	pscr_ret__;							\
})

#define __pcpu_size_call_return2(stem, variable, ...)			\
({									\
	TYPEOF_UNQUAL(variable) pscr2_ret__;				\
	__verify_pcpu_ptr(&(variable));					\
	switch(sizeof(variable)) {					\
	case 1: pscr2_ret__ = stem##1(variable, __VA_ARGS__); break;	\
	case 2: pscr2_ret__ = stem##2(variable, __VA_ARGS__); break;	\
	case 4: pscr2_ret__ = stem##4(variable, __VA_ARGS__); break;	\
	case 8: pscr2_ret__ = stem##8(variable, __VA_ARGS__); break;	\
	default:							\
		__bad_size_call_parameter(); break;			\
	}								\
	pscr2_ret__;							\
})

#define __pcpu_size_call_return2bool(stem, variable, ...)		\
({									\
	bool pscr2_ret__;						\
	__verify_pcpu_ptr(&(variable));					\
	switch(sizeof(variable)) {					\
	case 1: pscr2_ret__ = stem##1(variable, __VA_ARGS__); break;	\
	case 2: pscr2_ret__ = stem##2(variable, __VA_ARGS__); break;	\
	case 4: pscr2_ret__ = stem##4(variable, __VA_ARGS__); break;	\
	case 8: pscr2_ret__ = stem##8(variable, __VA_ARGS__); break;	\
	default:							\
		__bad_size_call_parameter(); break;			\
	}								\
	pscr2_ret__;							\
})

#define __pcpu_size_call(stem, variable, ...)				\
do {									\
	__verify_pcpu_ptr(&(variable));					\
	switch(sizeof(variable)) {					\
		case 1: stem##1(variable, __VA_ARGS__);break;		\
		case 2: stem##2(variable, __VA_ARGS__);break;		\
		case 4: stem##4(variable, __VA_ARGS__);break;		\
		case 8: stem##8(variable, __VA_ARGS__);break;		\
		default: 						\
			__bad_size_call_parameter();break;		\
	}								\
} while (0)

/*
 * this_cpu operations (C) 2008-2013 Christoph Lameter <cl@linux.com>
 *
 * Optimized manipulation for memory allocated through the per cpu
 * allocator or for addresses of per cpu variables.
 *
 * These operation guarantee exclusivity of access for other operations
 * on the *same* processor. The assumption is that per cpu data is only
 * accessed by a single processor instance (the current one).
 *
 * The arch code can provide optimized implementation by defining macros
 * for certain scalar sizes. F.e. provide this_cpu_add_2() to provide per
 * cpu atomic operations for 2 byte sized RMW actions. If arch code does
 * not provide operations for a scalar size then the fallback in the
 * generic code will be used.
 *
 * cmpxchg_double replaces two adjacent scalars at once.  The first two
 * parameters are per cpu variables which have to be of the same size.  A
 * truth value is returned to indicate success or failure (since a double
 * register result is difficult to handle).  There is very limited hardware
 * support for these operations, so only certain sizes may work.
 */

/*
 * Operations for contexts where we do not want to do any checks for
 * preemptions.  Unless strictly necessary, always use [__]this_cpu_*()
 * instead.
 *
 * If there is no other protection through preempt disable and/or disabling
 * interrupts then one of these RMW operations can show unexpected behavior
 * because the execution thread was rescheduled on another processor or an
 * interrupt occurred and the same percpu variable was modified from the
 * interrupt context.
 */
#define raw_cpu_read(pcp)		__pcpu_size_call_return(raw_cpu_read_, pcp)
#define raw_cpu_write(pcp, val)		__pcpu_size_call(raw_cpu_write_, pcp, val)
#define raw_cpu_add(pcp, val)		__pcpu_size_call(raw_cpu_add_, pcp, val)
#define raw_cpu_and(pcp, val)		__pcpu_size_call(raw_cpu_and_, pcp, val)
#define raw_cpu_or(pcp, val)		__pcpu_size_call(raw_cpu_or_, pcp, val)
#define raw_cpu_add_return(pcp, val)	__pcpu_size_call_return2(raw_cpu_add_return_, pcp, val)
#define raw_cpu_xchg(pcp, nval)		__pcpu_size_call_return2(raw_cpu_xchg_, pcp, nval)
#define raw_cpu_cmpxchg(pcp, oval, nval) \
	__pcpu_size_call_return2(raw_cpu_cmpxchg_, pcp, oval, nval)
#define raw_cpu_try_cmpxchg(pcp, ovalp, nval) \
	__pcpu_size_call_return2bool(raw_cpu_try_cmpxchg_, pcp, ovalp, nval)
#define raw_cpu_sub(pcp, val)		raw_cpu_add(pcp, -(val))
#define raw_cpu_inc(pcp)		raw_cpu_add(pcp, 1)
#define raw_cpu_dec(pcp)		raw_cpu_sub(pcp, 1)
#define raw_cpu_sub_return(pcp, val)	raw_cpu_add_return(pcp, -(typeof(pcp))(val))
#define raw_cpu_inc_return(pcp)		raw_cpu_add_return(pcp, 1)
#define raw_cpu_dec_return(pcp)		raw_cpu_add_return(pcp, -1)

/*
 * Operations for contexts that are safe from preemption/interrupts.  These
 * operations verify that preemption is disabled.
 */
#define __this_cpu_read(pcp)						\
({									\
	__this_cpu_preempt_check("read");				\
	raw_cpu_read(pcp);						\
})

#define __this_cpu_write(pcp, val)					\
({									\
	__this_cpu_preempt_check("write");				\
	raw_cpu_write(pcp, val);					\
})

#define __this_cpu_add(pcp, val)					\
({									\
	__this_cpu_preempt_check("add");				\
	raw_cpu_add(pcp, val);						\
})

#define __this_cpu_and(pcp, val)					\
({									\
	__this_cpu_preempt_check("and");				\
	raw_cpu_and(pcp, val);						\
})

#define __this_cpu_or(pcp, val)						\
({									\
	__this_cpu_preempt_check("or");					\
	raw_cpu_or(pcp, val);						\
})

#define __this_cpu_add_return(pcp, val)					\
({									\
	__this_cpu_preempt_check("add_return");				\
	raw_cpu_add_return(pcp, val);					\
})

#define __this_cpu_xchg(pcp, nval)					\
({									\
	__this_cpu_preempt_check("xchg");				\
	raw_cpu_xchg(pcp, nval);					\
})

#define __this_cpu_cmpxchg(pcp, oval, nval)				\
({									\
	__this_cpu_preempt_check("cmpxchg");				\
	raw_cpu_cmpxchg(pcp, oval, nval);				\
})

#define __this_cpu_try_cmpxchg(pcp, ovalp, nval)			\
({									\
	__this_cpu_preempt_check("try_cmpxchg");			\
	raw_cpu_try_cmpxchg(pcp, ovalp, nval);				\
})

#define __this_cpu_sub(pcp, val)	__this_cpu_add(pcp, -(typeof(pcp))(val))
#define __this_cpu_inc(pcp)		__this_cpu_add(pcp, 1)
#define __this_cpu_dec(pcp)		__this_cpu_sub(pcp, 1)
#define __this_cpu_sub_return(pcp, val)	__this_cpu_add_return(pcp, -(typeof(pcp))(val))
#define __this_cpu_inc_return(pcp)	__this_cpu_add_return(pcp, 1)
#define __this_cpu_dec_return(pcp)	__this_cpu_add_return(pcp, -1)

/*
 * Operations with implied preemption/interrupt protection.  These
 * operations can be used without worrying about preemption or interrupt.
 */
#define this_cpu_read(pcp)		__pcpu_size_call_return(this_cpu_read_, pcp)
#define this_cpu_write(pcp, val)	__pcpu_size_call(this_cpu_write_, pcp, val)
#define this_cpu_add(pcp, val)		__pcpu_size_call(this_cpu_add_, pcp, val)
#define this_cpu_and(pcp, val)		__pcpu_size_call(this_cpu_and_, pcp, val)
#define this_cpu_or(pcp, val)		__pcpu_size_call(this_cpu_or_, pcp, val)
#define this_cpu_add_return(pcp, val)	__pcpu_size_call_return2(this_cpu_add_return_, pcp, val)
#define this_cpu_xchg(pcp, nval)	__pcpu_size_call_return2(this_cpu_xchg_, pcp, nval)
#define this_cpu_cmpxchg(pcp, oval, nval) \
	__pcpu_size_call_return2(this_cpu_cmpxchg_, pcp, oval, nval)
#define this_cpu_try_cmpxchg(pcp, ovalp, nval) \
	__pcpu_size_call_return2bool(this_cpu_try_cmpxchg_, pcp, ovalp, nval)
#define this_cpu_sub(pcp, val)		this_cpu_add(pcp, -(typeof(pcp))(val))
#define this_cpu_inc(pcp)		this_cpu_add(pcp, 1)
#define this_cpu_dec(pcp)		this_cpu_sub(pcp, 1)
#define this_cpu_sub_return(pcp, val)	this_cpu_add_return(pcp, -(typeof(pcp))(val))
#define this_cpu_inc_return(pcp)	this_cpu_add_return(pcp, 1)
#define this_cpu_dec_return(pcp)	this_cpu_add_return(pcp, -1)

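/*
 * Sketch of when to use which flavour (the counter name is illustrative):
 *
 *	DEFINE_PER_CPU(unsigned long, nr_events);
 *
 *	// Preemptible context: this_cpu_*() brings its own protection.
 *	this_cpu_inc(nr_events);
 *
 *	// Caller already disabled preemption (or runs with IRQs off):
 *	// __this_cpu_*() is cheaper and still warns under
 *	// CONFIG_DEBUG_PREEMPT if preemption is in fact enabled.
 *	preempt_disable();
 *	__this_cpu_add(nr_events, 2);
 *	preempt_enable();
 *
 *	// raw_cpu_*() performs no checks at all; only for contexts where the
 *	// caller provides protection by other means or a racy update is fine.
 *	raw_cpu_inc(nr_events);
 */
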
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_PERCPU_DEFS_H */