bpf: Move out synchronize_rcu_tasks_trace from mutex CS

Commit ef1b808e3b ("bpf: Fix UAF via mismatching bpf_prog/attachment
RCU flavors") resolved a possible UAF in uprobes that attach a
non-sleepable bpf prog by explicitly waiting for a tasks-trace-RCU
grace period. However, synchronize_rcu_tasks_trace() is currently
called inside the bpf_event_mutex critical section, which lengthens
the critical section and may hurt performance. Move
synchronize_rcu_tasks_trace() out of the mutex critical section.

Signed-off-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20250104013946.1111785-1-pulehui@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Pu Lehui 2025-01-04 01:39:46 +00:00 committed by Alexei Starovoitov
parent b8b1e30016
commit ca3c4f646a

@@ -2245,6 +2245,7 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 {
 	struct bpf_prog_array *old_array;
 	struct bpf_prog_array *new_array;
+	struct bpf_prog *prog = NULL;
 	int ret;
 
 	mutex_lock(&bpf_event_mutex);
@@ -2265,18 +2266,22 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
 	}
 
 put:
-	/*
-	 * It could be that the bpf_prog is not sleepable (and will be freed
-	 * via normal RCU), but is called from a point that supports sleepable
-	 * programs and uses tasks-trace-RCU.
-	 */
-	synchronize_rcu_tasks_trace();
-	bpf_prog_put(event->prog);
+	prog = event->prog;
 	event->prog = NULL;
 
 unlock:
 	mutex_unlock(&bpf_event_mutex);
 
+	if (prog) {
+		/*
+		 * It could be that the bpf_prog is not sleepable (and will be freed
+		 * via normal RCU), but is called from a point that supports sleepable
+		 * programs and uses tasks-trace-RCU.
+		 */
+		synchronize_rcu_tasks_trace();
+		bpf_prog_put(prog);
+	}
 }
 
 int perf_event_query_prog_array(struct perf_event *event, void __user *info)