numa,sched: fix load_too_imbalanced logic inversion
This function is supposed to return true if the new load imbalance is
worse than the old one.  It didn't.  I can only hope brown paper bags
are in style.

Now things converge much better on both the 4-node and 8-node systems.

I am not sure why this did not seem to impact SPECjbb performance on the
4-node system, which is the system I have full-time access to.

This bug was introduced recently, by commit e63da03 ("sched/numa:
Allow task switch if load imbalance improves").

Signed-off-by: Rik van Riel <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Rik van Riel authored and torvalds committed Jun 8, 2014
1 parent b738d76 commit 1662867
Showing 1 changed file with 1 addition and 1 deletion.
kernel/sched/fair.c
@@ -1120,7 +1120,7 @@ static bool load_too_imbalanced(long orig_src_load, long orig_dst_load,
 	old_imb = orig_dst_load * 100 - orig_src_load * env->imbalance_pct;
 
 	/* Would this change make things worse? */
-	return (old_imb > imb);
+	return (imb > old_imb);
 }

