sched/core: Correct off by one bug in load migration calculation
The move of calc_load_migrate() from CPU_DEAD to CPU_DYING did not take into
account that the function is now called from a thread running on the outgoing
CPU. As a result, a CPU unplug leaks a load of 1 into the global load
accounting mechanism.

Fix it by adjusting for the currently running thread which calls
calc_load_migrate().
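
To make the arithmetic concrete, here is a minimal user-space sketch.
This is illustrative only, not kernel code: struct rq is reduced to the
three fields involved, and the function mirrors the fixed
calc_load_fold_active().

    #include <stdio.h>

    /* Reduced stand-in for the scheduler's per-CPU runqueue. */
    struct rq {
            long nr_running;          /* includes the caller itself */
            long nr_uninterruptible;
            long calc_load_active;    /* value folded at the last update */
    };

    /* 'adjust' discounts threads that must not contribute to the load
     * average, e.g. the teardown thread on the outgoing CPU. */
    static long calc_load_fold_active(struct rq *rq, long adjust)
    {
            long nr_active = rq->nr_running - adjust;
            long delta = 0;

            nr_active += rq->nr_uninterruptible;
            if (nr_active != rq->calc_load_active) {
                    delta = nr_active - rq->calc_load_active;
                    rq->calc_load_active = nr_active;
            }
            return delta;
    }

    int main(void)
    {
            /* After migrate_tasks() only the teardown thread remains. */
            struct rq rq = { .nr_running = 1, .nr_uninterruptible = 0,
                             .calc_load_active = 0 };

            /* Old behaviour: the caller counts itself, so a phantom
             * delta of 1 is folded into the global task count. */
            printf("adjust=0: delta=%ld\n", calc_load_fold_active(&rq, 0));

            rq.calc_load_active = 0;  /* reset for the second run */

            /* Fixed behaviour: the caller discounts itself. */
            printf("adjust=1: delta=%ld\n", calc_load_fold_active(&rq, 1));
            return 0;
    }

With adjust = 0 the run folds a delta of 1 into calc_load_tasks even
though no real task is left on the CPU; with adjust = 1 nothing leaks.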

Reported-by: Anton Blanchard <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Vaidyanathan Srinivasan <[email protected]>
Cc: [email protected]
Cc: [email protected]
Fixes: e9cd8fa ("sched/migration: Move calc_load_migrate() into CPU_DYING")
Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1607121744350.4083@nanos
Signed-off-by: Ingo Molnar <[email protected]>
KAGA-KOKO authored and Ingo Molnar committed Jul 13, 2016
1 parent 92d21ac commit d60585c
Showing 3 changed files with 9 additions and 7 deletions.
6 changes: 4 additions & 2 deletions kernel/sched/core.c
@@ -5394,13 +5394,15 @@ void idle_task_exit(void)
 /*
  * Since this CPU is going 'away' for a while, fold any nr_active delta
  * we might have. Assumes we're called after migrate_tasks() so that the
- * nr_active count is stable.
+ * nr_active count is stable. We need to take the teardown thread which
+ * is calling this into account, so we hand in adjust = 1 to the load
+ * calculation.
  *
  * Also see the comment "Global load-average calculations".
  */
 static void calc_load_migrate(struct rq *rq)
 {
-	long delta = calc_load_fold_active(rq);
+	long delta = calc_load_fold_active(rq, 1);
 	if (delta)
 		atomic_long_add(delta, &calc_load_tasks);
 }
8 changes: 4 additions & 4 deletions kernel/sched/loadavg.c
@@ -78,11 +78,11 @@ void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
 	loads[2] = (avenrun[2] + offset) << shift;
 }
 
-long calc_load_fold_active(struct rq *this_rq)
+long calc_load_fold_active(struct rq *this_rq, long adjust)
 {
 	long nr_active, delta = 0;
 
-	nr_active = this_rq->nr_running;
+	nr_active = this_rq->nr_running - adjust;
 	nr_active += (long)this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
@@ -188,7 +188,7 @@ void calc_load_enter_idle(void)
 	 * We're going into NOHZ mode, if there's any pending delta, fold it
	 * into the pending idle delta.
	 */
-	delta = calc_load_fold_active(this_rq);
+	delta = calc_load_fold_active(this_rq, 0);
 	if (delta) {
 		int idx = calc_load_write_idx();
 
@@ -389,7 +389,7 @@ void calc_global_load_tick(struct rq *this_rq)
 	if (time_before(jiffies, this_rq->calc_load_update))
 		return;
 
-	delta = calc_load_fold_active(this_rq);
+	delta = calc_load_fold_active(this_rq, 0);
 	if (delta)
 		atomic_long_add(delta, &calc_load_tasks);
 
2 changes: 1 addition & 1 deletion kernel/sched/sched.h
@@ -28,7 +28,7 @@ extern unsigned long calc_load_update;
 extern atomic_long_t calc_load_tasks;
 
 extern void calc_global_load_tick(struct rq *this_rq);
-extern long calc_load_fold_active(struct rq *this_rq);
+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 
 #ifdef CONFIG_SMP
 extern void cpu_load_update_active(struct rq *this_rq);
