[PATCH] sched: fix newly idle load balance in case of SMT
In the presence of SMT, newly idle balance was never happening for the
multi-core and SMP domains (even when both the logical siblings are idle).

If thread 0 is already idle and thread 1 is about to go idle, the newly idle
load balance always thinks that one of the threads is not idle and skips the
newly idle load balance for the multi-core and SMP domains. This is because of
the idle_cpu() macro, which checks whether the task currently running on a cpu
is that cpu's idle task. That is not the case for the thread doing
load_balance_newidle(): it is still running its outgoing task at that point.

Fix this by using the runqueue's nr_running field instead of idle_cpu(). Also
skip the 'only one idle cpu in the group will be doing load balancing' logic
in the newly idle case.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent c41917df8a
commit 9439aab8db
1 changed file with 5 additions and 3 deletions
kernel/sched.c
@@ -2235,7 +2235,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 
 			rq = cpu_rq(i);
 
-			if (*sd_idle && !idle_cpu(i))
+			if (*sd_idle && rq->nr_running)
 				*sd_idle = 0;
 
 			/* Bias balancing toward cpus of our domain */
@@ -2257,9 +2257,11 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		/*
 		 * First idle cpu or the first cpu(busiest) in this sched group
 		 * is eligible for doing load balancing at this and above
-		 * domains.
+		 * domains. In the newly idle case, we will allow all the cpu's
+		 * to do the newly idle load balance.
 		 */
-		if (local_group && balance_cpu != this_cpu && balance) {
+		if (idle != CPU_NEWLY_IDLE && local_group &&
+		    balance_cpu != this_cpu && balance) {
 			*balance = 0;
 			goto ret;
 		}
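For illustration only, here is a minimal userspace sketch of the reasoning
behind the first hunk; it is not kernel code, and toy_rq, toy_task and
toy_idle_cpu() are simplified stand-ins for the scheduler's rq, task and
idle_cpu() (any resemblance is schematic). The point it models: the sibling
entering load_balance_newidle() is still running its outgoing task, so an
idle_cpu()-style check reports it as busy and clears sd_idle, while its
runqueue's nr_running already shows it has nothing runnable.

/* Toy model (userspace, not kernel code) of the check this patch changes. */
#include <stdbool.h>
#include <stdio.h>

struct toy_task { const char *comm; bool is_idle_task; };
struct toy_rq   { struct toy_task *curr; unsigned int nr_running; };

static struct toy_task idle_task = { "swapper", true  };
static struct toy_task worker    = { "worker",  false };

/* Old style of check: "is the task currently on this cpu the idle task?" */
static bool toy_idle_cpu(struct toy_rq *rq)
{
	return rq->curr->is_idle_task;
}

int main(void)
{
	/* Thread 0 of the core: fully idle, already running the idle task. */
	struct toy_rq thread0 = { .curr = &idle_task, .nr_running = 0 };

	/*
	 * Thread 1: its last task just blocked, nr_running is 0, and it is
	 * about to run the newly idle balance -- but it has not switched to
	 * the idle task yet, so an idle_cpu()-style check still calls it busy.
	 */
	struct toy_rq thread1 = { .curr = &worker, .nr_running = 0 };

	struct toy_rq *group[] = { &thread0, &thread1 };
	int sd_idle_old = 1, sd_idle_new = 1;

	for (int i = 0; i < 2; i++) {
		if (sd_idle_old && !toy_idle_cpu(group[i]))	/* old check */
			sd_idle_old = 0;
		if (sd_idle_new && group[i]->nr_running)	/* new check */
			sd_idle_new = 0;
	}

	printf("old idle_cpu()-style check: sd_idle = %d (cleared although both siblings are idle)\n",
	       sd_idle_old);
	printf("new nr_running check:       sd_idle = %d (both siblings correctly seen as idle)\n",
	       sd_idle_new);
	return 0;
}

Running the sketch shows the old check clearing sd_idle even though both
siblings have empty runqueues, which is the situation the changelog describes;
the nr_running form keeps sd_idle set in that case.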