tracing: Fix trace_marker triggering a page fault during preempt_disable

Both tracing_mark_write and tracing_mark_raw_write call
__copy_from_user_inatomic while preemption is disabled. In some cases
__copy_from_user_inatomic can trigger a page fault, and handling the
fault may subtly end up calling schedule(). If the task is then
migrated to another CPU, the following warning triggers:
        if (RB_WARN_ON(cpu_buffer,
                       !local_read(&cpu_buffer->committing)))

The following example illustrates the issue:

process flow						CPU
---------------------------------------------------------------------

tracing_mark_raw_write():				cpu:0
   ...
   ring_buffer_lock_reserve():				cpu:0
      ...
      cpu = raw_smp_processor_id()			cpu:0
      cpu_buffer = buffer->buffers[cpu]			cpu:0
      ...
   ...
   __copy_from_user_inatomic():				cpu:0
      ...
      # page fault
      do_mem_abort():					cpu:0
         ...
         # Call schedule
         schedule()					cpu:0
	 ...
   # the task is migrated to cpu1
   __buffer_unlock_commit():				cpu:1
      ...
      ring_buffer_unlock_commit():			cpu:1
	 ...
	 cpu = raw_smp_processor_id()			cpu:1
	 cpu_buffer = buffer->buffers[cpu]		cpu:1

As shown above, the process reads the CPU id twice and gets two
different values: the event is reserved on cpu0's per-CPU buffer, but
the commit path looks up cpu1's buffer, whose 'committing' counter was
never raised.
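
To make the failure mode concrete, here is a self-contained userspace
model of the reserve/commit invariant (illustrative only; the names
lock_reserve/unlock_commit and the two-element buffers array are
hypothetical, not kernel code): the reserve marks the per-CPU buffer of
the CPU it runs on, and a commit issued from a different CPU checks a
buffer that was never marked, which is exactly what the RB_WARN_ON()
catches.

	/* Illustrative model, not kernel code. */
	#include <assert.h>

	struct per_cpu_buffer {
		int committing;		/* models cpu_buffer->committing */
	};

	static struct per_cpu_buffer buffers[2];	/* "cpu0" and "cpu1" */

	static void lock_reserve(int cpu)
	{
		buffers[cpu].committing++;	/* reserve marks this CPU's buffer */
	}

	static void unlock_commit(int cpu)
	{
		/* models RB_WARN_ON(cpu_buffer, !local_read(&cpu_buffer->committing)) */
		assert(buffers[cpu].committing);
		buffers[cpu].committing--;
	}

	int main(void)
	{
		lock_reserve(0);	/* event reserved while running on cpu0 */
		/* page fault -> schedule() -> task migrates to cpu1 */
		unlock_commit(1);	/* cpu1's buffer was never marked */
		return 0;
	}

Running this aborts at the assert, mirroring the RB_WARN_ON() above.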

Fix this by using copy_from_user_nofault instead of
__copy_from_user_inatomic. Besides performing 'access_ok' before
copying, copy_from_user_nofault disables page faults around the copy,
so a faulting access fails with -EFAULT via the exception fixup instead
of entering the fault handler and sleeping.
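
For reference, copy_from_user_nofault is essentially the following
wrapper (a simplified sketch of the mm/maccess.c implementation;
details may vary by kernel version):

	long copy_from_user_nofault(void *dst, const void __user *src, size_t size)
	{
		long ret = -EFAULT;

		if (!__access_ok(src, size))
			return ret;

		if (!nmi_uaccess_okay())
			return ret;

		pagefault_disable();	/* faults now fail fast instead of sleeping */
		ret = __copy_from_user_inatomic(dst, src, size);
		pagefault_enable();

		return ret ? -EFAULT : 0;
	}

The pagefault_disable()/pagefault_enable() pair is what makes the copy
safe to issue inside the preempt-disabled reserve/commit window.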

Link: https://lore.kernel.org/20250819105152.2766363-1-luogengkun@huaweicloud.com
Fixes: 656c7f0d2d ("tracing: Replace kmap with copy_from_user() in trace_marker writing")
Signed-off-by: Luo Gengkun <luogengkun@huaweicloud.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -7209,7 +7209,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
 	entry = ring_buffer_event_data(event);
 	entry->ip = ip;
-	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
 	if (len) {
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
 		cnt = FAULTED_SIZE;
@@ -7306,7 +7306,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
 	entry = ring_buffer_event_data(event);
-	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
 	if (len) {
 		entry->id = -1;
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);