mirror of https://github.com/torvalds/linux.git
synced 2025-11-04 10:40:15 +02:00
	sched/numa: Continue PTE scanning even if migrate rate limited
Avoiding marking PTEs pte_numa because a particular NUMA node is migrate rate
limited seems like a bad idea. Even if this node can't migrate any more, other
nodes might, and we want up-to-date information to make balancing decisions.
We already rate limit the actual migrations; this should leave enough bandwidth
to allow the non-migrating scanning. I think it's important we keep up-to-date
information if we're going to do placement based on it.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-15-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This commit is contained in:

parent 19a78d110d
commit 9e645ab6d0

1 changed file with 0 additions and 8 deletions
@@ -951,14 +951,6 @@ void task_numa_work(struct callback_head *work)
 	 */
 	p->node_stamp += 2 * TICK_NSEC;
 
-	/*
-	 * Do not set pte_numa if the current running node is rate-limited.
-	 * This loses statistics on the fault but if we are unwilling to
-	 * migrate to this node, it is less likely we can do useful work
-	 */
-	if (migrate_ratelimited(numa_node_id()))
-		return;
-
 	start = mm->numa_scan_offset;
 	pages = sysctl_numa_balancing_scan_size;
 	pages <<= 20 - PAGE_SHIFT; /* MB in pages */
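For reference, a minimal sketch (not the full function) of how the start of the
scanning setup in task_numa_work() reads once the rate-limit check is removed;
the identifiers are taken from the hunk above, and the rest of the function
body is elided:

	/* Scan-period bookkeeping runs as before. */
	p->node_stamp += 2 * TICK_NSEC;

	/*
	 * With the migrate_ratelimited() early return gone, PTE scanning
	 * always proceeds, so placement decisions keep seeing fresh NUMA
	 * fault statistics even while migrations to this node are throttled.
	 */
	start = mm->numa_scan_offset;
	pages = sysctl_numa_balancing_scan_size;	/* in MB */
	pages <<= 20 - PAGE_SHIFT;	/* MB in pages, e.g. x256 with 4K pages */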