	sched/dl: Fix race in dl_task_timer()
A throttled task is still on the rq, and it may be moved to another CPU
if the user is playing with sched_setaffinity(). Therefore, the unlocked
task_rq() access creates the race.
Juri Lelli reports he got this race when dl_bandwidth_enabled()
was not set.
Another issue, pointed out by Peter Zijlstra:
   "Now I suppose the problem can still actually happen when
    you change the root domain and trigger a effective affinity
    change that way".
To fix this we do the same as is done in __task_rq_lock(). We do not
use __task_rq_lock() itself, because it has a useful lockdep check
which is not correct in the case of dl_task_timer(): we do not need
pi_lock locked here. This case is an exception (PeterZ):
   "The only reason we don't strictly need ->pi_lock now is because
    we're guaranteed to have p->state == TASK_RUNNING here and are
    thus free of ttwu races".
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v3.14+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/3056991400578422@web14g.yandex.ru
Signed-off-by: Ingo Molnar <mingo@kernel.org>
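For reference, the retry loop added by this patch open-codes the pattern used
by __task_rq_lock() (kernel/sched/core.c at the time), minus its pi_lock
assertion. The sketch below is a paraphrase of that helper for illustration,
not verbatim kernel source; its lockdep_assert_held(&p->pi_lock) line is the
"useful lockdep check" mentioned above, which dl_task_timer() cannot satisfy
because it runs without pi_lock held.

	/*
	 * Sketch (paraphrased): lock the rq @p resides on, retrying if the
	 * task migrates between reading task_rq(p) and taking the lock.
	 */
	static inline struct rq *__task_rq_lock(struct task_struct *p)
		__acquires(rq->lock)
	{
		struct rq *rq;

		/* The check dl_task_timer() cannot use: caller must hold p->pi_lock. */
		lockdep_assert_held(&p->pi_lock);

		for (;;) {
			rq = task_rq(p);
			raw_spin_lock(&rq->lock);
			if (likely(rq == task_rq(p)))
				return rq;
			raw_spin_unlock(&rq->lock);
		}
	}

The patch performs the same recheck with a goto-based retry instead of a
for (;;) loop, which is equivalent.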
			
			
parent b14ed2c273
commit 0f397f2c90

1 changed file with 9 additions and 1 deletion
@@ -513,9 +513,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 						     struct sched_dl_entity,
 						     dl_timer);
 	struct task_struct *p = dl_task_of(dl_se);
-	struct rq *rq = task_rq(p);
+	struct rq *rq;
+again:
+	rq = task_rq(p);
 	raw_spin_lock(&rq->lock);
 
+	if (rq != task_rq(p)) {
+		/* Task was moved, retrying. */
+		raw_spin_unlock(&rq->lock);
+		goto again;
+	}
+
 	/*
 	 * We need to take care of a possible races here. In fact, the
 	 * task might have changed its scheduling policy to something