This pull request contains the following branches:
docs.2025.02.04a:
- Add broken-timing possibility to stallwarn.rst.
- Improve discussion of this_cpu_ptr(), add raw_cpu_ptr().
- Document self-propagating callbacks.
- Point call_srcu() to call_rcu() for detailed memory ordering.
- Add CONFIG_RCU_LAZY delays to call_rcu() kernel-doc header.
- Clarify RCU_LAZY and RCU_LAZY_DEFAULT_OFF help text.
- Remove references to old grace-period-wait primitives.
srcu.2025.02.05a:
- Introduce srcu_read_{un,}lock_fast(), which is similar to
srcu_read_{un,}lock_lite(): they avoid smp_mb()s in lock and unlock at
the cost of calling synchronize_rcu() in synchronize_srcu(). Moreover, by
returning the percpu offset of the counter at srcu_read_lock_fast()
time, srcu_read_unlock_fast() can save extra pointer dereferencing,
which makes it faster than srcu_read_{un,}lock_lite().
srcu_read_{un,}lock_fast() are intended to replace
rcu_read_{un,}lock_trace() if possible.
torture.2025.02.05a:
- Add get_torture_init_jiffies() to return the start time of the test.
- Add a test_boost_holdoff module parameter to allow delaying boosting
tests when building rcutorture as built-in.
- Add grace period sequence number logging at the beginning and end of
failure/close-call results.
- Switch to hexadecimal for the expedited grace period sequence number
in the rcu_exp_grace_period trace point.
- Make cur_ops->format_gp_seqs take buffer length.
- Move RCU_TORTURE_TEST_{CHK_RDR_STATE,LOG_CPU} to bool.
- Complain when invalid SRCU reader_flavor is specified.
- Add FORCE_NEED_SRCU_NMI_SAFE Kconfig for testing, which forces SRCU
to use atomics even when percpu ops are NMI safe, and use the Kconfig
for SRCU lockdep testing.
misc.2025.03.04a:
- Split rcu_report_exp_cpu_mult() mask parameter and use for tracing.
- Remove READ_ONCE() for rdp->gpwrap access in __note_gp_changes().
- Fix get_state_synchronize_rcu_full() GP-start detection.
- Move RCU Tasks self-tests to core_initcall().
- Print segment lengths in show_rcu_nocb_gp_state().
- Make RCU watch ct_kernel_exit_state() warning.
- Flush console log from kernel_power_off().
- rcutorture: Allow a negative value for nfakewriters.
- rcu: Update TREE05.boot to test normal synchronize_rcu().
- rcu: Use _full() API to debug synchronize_rcu().
lazypreempt.2025.03.04a: Make RCU handle PREEMPT_LAZY better:
- Fix header guard for rcu_all_qs().
- rcu: Rename PREEMPT_AUTO to PREEMPT_LAZY.
- Update __cond_resched comment about RCU quiescent states.
- Handle unstable rdp in rcu_read_unlock_strict().
- Handle quiescent states for PREEMPT_RCU=n, PREEMPT_COUNT=y.
- osnoise: Provide quiescent states.
- Adjust rcutorture with possible PREEMPT_RCU=n && PREEMPT_COUNT=y
combination.
- Limit PREEMPT_RCU configurations.
- Make rcutorture scenario TREE07 and scenario TREE10 use PREEMPT_LAZY=y.
-----BEGIN PGP SIGNATURE-----
iQEzBAABCAAdFiEEj5IosQTPz8XU1wRHSXnow7UH+rgFAmfeBLQACgkQSXnow7UH
+rh11Qf/Rt6IZJ/YT/V9Sd+8hMx4O0BMh779pr9cD6mbAG+FDk2Yeva1m8vIdFOb
qId6oc8K/ef2JfFjSn0oHMzQP2D3XUyiJWPNbBDHv/D8Os8GZgjzu8dkxVkSbdbY
OxtvIflbcqFN1JDJfGKZnTEW0/YxGqfnS9b6R7iyyA7SOGQ/WubGOE5qNCqPufc9
zJiP+qTUFYQzCIiPlEJul39o9KboPogbt3QAAQjWmi3utd77ehJnm/15FvAjyau4
uhC2cnGfMY535rQaiaQeBQ/IHIowKripCq0JQFvcUNdyArZM3HOI2x79+2II6ft7
mjHskNODOIJHfW2o1RzQ0yRYAywFIg==
=J+mH
-----END PGP SIGNATURE-----
Merge tag 'rcu-next-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux
Pull RCU updates from Boqun Feng:
"Documentation:
- Add broken-timing possibility to stallwarn.rst
- Improve discussion of this_cpu_ptr(), add raw_cpu_ptr()
- Document self-propagating callbacks
- Point call_srcu() to call_rcu() for detailed memory ordering
- Add CONFIG_RCU_LAZY delays to call_rcu() kernel-doc header
- Clarify RCU_LAZY and RCU_LAZY_DEFAULT_OFF help text
- Remove references to old grace-period-wait primitives
srcu:
- Introduce srcu_read_{un,}lock_fast(), which is similar to
srcu_read_{un,}lock_lite(): they avoid smp_mb()s in lock and unlock
at the cost of calling synchronize_rcu() in synchronize_srcu()
Moreover, by returning the percpu offset of the counter at
srcu_read_lock_fast() time, srcu_read_unlock_fast() can avoid
extra pointer dereferencing, which makes it faster than
srcu_read_{un,}lock_lite()
srcu_read_{un,}lock_fast() are intended to replace
rcu_read_{un,}lock_trace() if possible
RCU torture:
- Add get_torture_init_jiffies() to return the start time of the test
- Add a test_boost_holdoff module parameter to allow delaying
boosting tests when building rcutorture as built-in
- Add grace period sequence number logging at the beginning and end
of failure/close-call results
- Switch to hexadecimal for the expedited grace period sequence
number in the rcu_exp_grace_period trace point
- Make cur_ops->format_gp_seqs take buffer length
- Move RCU_TORTURE_TEST_{CHK_RDR_STATE,LOG_CPU} to bool
- Complain when invalid SRCU reader_flavor is specified
- Add FORCE_NEED_SRCU_NMI_SAFE Kconfig for testing, which forces SRCU
to use atomics even when percpu ops are NMI safe, and use the Kconfig
for SRCU lockdep testing
Misc:
- Split rcu_report_exp_cpu_mult() mask parameter and use for tracing
- Remove READ_ONCE() for rdp->gpwrap access in __note_gp_changes()
- Fix get_state_synchronize_rcu_full() GP-start detection
- Move RCU Tasks self-tests to core_initcall()
- Print segment lengths in show_rcu_nocb_gp_state()
- Make RCU watch ct_kernel_exit_state() warning
- Flush console log from kernel_power_off()
- rcutorture: Allow a negative value for nfakewriters
- rcu: Update TREE05.boot to test normal synchronize_rcu()
- rcu: Use _full() API to debug synchronize_rcu()
Make RCU handle PREEMPT_LAZY better:
- Fix header guard for rcu_all_qs()
- rcu: Rename PREEMPT_AUTO to PREEMPT_LAZY
- Update __cond_resched comment about RCU quiescent states
- Handle unstable rdp in rcu_read_unlock_strict()
- Handle quiescent states for PREEMPT_RCU=n, PREEMPT_COUNT=y
- osnoise: Provide quiescent states
- Adjust rcutorture with possible PREEMPT_RCU=n && PREEMPT_COUNT=y
combination
- Limit PREEMPT_RCU configurations
- Make rcutorture scenario TREE07 and scenario TREE10 use
PREEMPT_LAZY=y"
* tag 'rcu-next-v6.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux: (59 commits)
rcutorture: Make scenario TREE07 build CONFIG_PREEMPT_LAZY=y
rcutorture: Make scenario TREE10 build CONFIG_PREEMPT_LAZY=y
rcu: limit PREEMPT_RCU configurations
rcutorture: Update ->extendables check for lazy preemption
rcutorture: Update rcutorture_one_extend_check() for lazy preemption
osnoise: provide quiescent states
rcu: Use _full() API to debug synchronize_rcu()
rcu: Update TREE05.boot to test normal synchronize_rcu()
rcutorture: Allow a negative value for nfakewriters
Flush console log from kernel_power_off()
context_tracking: Make RCU watch ct_kernel_exit_state() warning
rcu/nocb: Print segment lengths in show_rcu_nocb_gp_state()
rcu-tasks: Move RCU Tasks self-tests to core_initcall()
rcu: Fix get_state_synchronize_rcu_full() GP-start detection
torture: Make SRCU lockdep testing use srcu_read_lock_nmisafe()
srcu: Add FORCE_NEED_SRCU_NMI_SAFE Kconfig for testing
rcutorture: Complain when invalid SRCU reader_flavor is specified
rcutorture: Move RCU_TORTURE_TEST_{CHK_RDR_STATE,LOG_CPU} to bool
rcutorture: Make cur_ops->format_gp_seqs take buffer length
rcutorture: Add ftrace-compatible timestamp to GP# failure/close-call output
...
/* SPDX-License-Identifier: GPL-2.0+ */
/*
 * Read-Copy Update mechanism for mutual exclusion (tree-based version)
 *
 * Copyright IBM Corporation, 2008
 *
 * Author: Dipankar Sarma <dipankar@in.ibm.com>
 *	   Paul E. McKenney <paulmck@linux.ibm.com> Hierarchical algorithm
 *
 * Based on the original work by Paul McKenney <paulmck@linux.ibm.com>
 * and inputs from Rusty Russell, Andrea Arcangeli and Andi Kleen.
 *
 * For detailed explanation of Read-Copy Update mechanism see -
 *	Documentation/RCU
 */

#ifndef __LINUX_RCUTREE_H
#define __LINUX_RCUTREE_H

void rcu_softirq_qs(void);
void rcu_note_context_switch(bool preempt);
int rcu_needs_cpu(void);
void rcu_cpu_stall_reset(void);
void rcu_request_urgent_qs_task(struct task_struct *t);

/*
 * Note a virtualization-based context switch. This is simply a
 * wrapper around rcu_note_context_switch(), which allows TINY_RCU
 * to save a few bytes. The caller must have disabled interrupts.
 */
static inline void rcu_virt_note_context_switch(void)
{
	rcu_note_context_switch(false);
}

void synchronize_rcu_expedited(void);

void rcu_barrier(void);
void rcu_momentary_eqs(void);

struct rcu_gp_oldstate {
	unsigned long rgos_norm;
	unsigned long rgos_exp;
};

// Maximum number of rcu_gp_oldstate values corresponding to
// not-yet-completed RCU grace periods.
#define NUM_ACTIVE_RCU_POLL_FULL_OLDSTATE 4

/**
 * same_state_synchronize_rcu_full - Are two old-state values identical?
 * @rgosp1: First old-state value.
 * @rgosp2: Second old-state value.
 *
 * The two old-state values must have been obtained from either
 * get_state_synchronize_rcu_full(), start_poll_synchronize_rcu_full(),
 * or get_completed_synchronize_rcu_full(). Returns @true if the two
 * values are identical and @false otherwise. This allows structures
 * whose lifetimes are tracked by old-state values to push these values
 * to a list header, allowing those structures to be slightly smaller.
 *
 * Note that equality is judged on a bitwise basis, so that an
 * @rcu_gp_oldstate structure with an already-completed state in one field
 * will compare not-equal to a structure with an already-completed state
 * in the other field. After all, the @rcu_gp_oldstate structure is opaque
 * so how did such a situation come to pass in the first place?
 */
static inline bool same_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp1,
						   struct rcu_gp_oldstate *rgosp2)
{
	return rgosp1->rgos_norm == rgosp2->rgos_norm && rgosp1->rgos_exp == rgosp2->rgos_exp;
}

unsigned long start_poll_synchronize_rcu_expedited(void);
void start_poll_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp);
void cond_synchronize_rcu_expedited(unsigned long oldstate);
void cond_synchronize_rcu_expedited_full(struct rcu_gp_oldstate *rgosp);
unsigned long get_state_synchronize_rcu(void);
void get_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
unsigned long start_poll_synchronize_rcu(void);
void start_poll_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
bool poll_state_synchronize_rcu(unsigned long oldstate);
bool poll_state_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);
void cond_synchronize_rcu(unsigned long oldstate);
void cond_synchronize_rcu_full(struct rcu_gp_oldstate *rgosp);

#ifdef CONFIG_PROVE_RCU
void rcu_irq_exit_check_preempt(void);
#else
static inline void rcu_irq_exit_check_preempt(void) { }
#endif

struct task_struct;
void rcu_preempt_deferred_qs(struct task_struct *t);

void exit_rcu(void);

void rcu_scheduler_starting(void);
extern int rcu_scheduler_active;
void rcu_end_inkernel_boot(void);
bool rcu_inkernel_boot_has_ended(void);
bool rcu_is_watching(void);
#ifndef CONFIG_PREEMPT_RCU
void rcu_all_qs(void);
#endif

/* RCUtree hotplug events */
int rcutree_prepare_cpu(unsigned int cpu);
int rcutree_online_cpu(unsigned int cpu);
void rcutree_report_cpu_starting(unsigned int cpu);

#ifdef CONFIG_HOTPLUG_CPU
int rcutree_dead_cpu(unsigned int cpu);
int rcutree_dying_cpu(unsigned int cpu);
int rcutree_offline_cpu(unsigned int cpu);
#else
#define rcutree_dead_cpu NULL
#define rcutree_dying_cpu NULL
#define rcutree_offline_cpu NULL
#endif

void rcutree_migrate_callbacks(int cpu);

/* Called from hotplug and also arm64 early secondary boot failure */
void rcutree_report_cpu_dead(void);

#endif /* __LINUX_RCUTREE_H */