sched/fair: Sync load_sum with load_avg after dequeue
commit 9e077b5 ("sched/pelt: Check that *_avg are null when *_sum are")
reported some inconsistencies between *_avg and *_sum.

commit 1c35b07 ("sched/fair: Ensure _sum and _avg values stay consistent")
fixed some of them, but one inconsistency remains when dequeuing load.

Sync the cfs_rq's load_sum with its load_avg after dequeuing the load of a
sched_entity.

Fixes: 9e077b5 ("sched/pelt: Check that *_avg are null when *_sum are")
Reported-by: Sachin Sant <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Odin Ugedal <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
vingu-linaro authored and Peter Zijlstra committed Jul 2, 2021
1 parent a22a5cb commit ceb6ba4
Showing 1 changed file with 2 additions and 1 deletion.
3 changes: 2 additions & 1 deletion kernel/sched/fair.c
@@ -3037,8 +3037,9 @@ enqueue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static inline void
 dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
+	u32 divider = get_pelt_divider(&se->avg);
 	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
-	sub_positive(&cfs_rq->avg.load_sum, se_weight(se) * se->avg.load_sum);
+	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
 }
 #else
 static inline void
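
For context, PELT keeps load_sum and load_avg related by roughly load_avg ~= load_sum / divider. The sketch below is a minimal user-space model of the fixed dequeue path, not the kernel code: the struct, the sub_positive() helper, and the divider value are simplified stand-ins chosen only to illustrate why load_sum is recomputed from load_avg instead of being decremented independently.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's PELT state (illustration only). */
struct pelt_avg {
	uint64_t load_sum;		/* geometric series of load contributions */
	unsigned long load_avg;		/* load_sum scaled down by the divider */
};

/* Subtract and clamp at zero, like the kernel's sub_positive() helper. */
static void sub_positive(unsigned long *ptr, unsigned long val)
{
	*ptr = (*ptr > val) ? (*ptr - val) : 0;
}

/*
 * Model of the fixed dequeue path: after removing the entity's
 * contribution from load_avg, rebuild load_sum from load_avg so the
 * two values cannot drift apart through independent rounding.
 */
static void dequeue_load(struct pelt_avg *cfs, unsigned long se_load_avg,
			 uint32_t divider)
{
	sub_positive(&cfs->load_avg, se_load_avg);
	cfs->load_sum = (uint64_t)cfs->load_avg * divider;
}

int main(void)
{
	/* Hypothetical values; 47742 is roughly the maximum PELT divider. */
	struct pelt_avg cfs = { .load_sum = 10ULL * 47742, .load_avg = 10 };

	dequeue_load(&cfs, 10, 47742);

	/*
	 * Both fields reach zero together, which is the invariant the
	 * commit restores; subtracting a separately rounded per-entity
	 * load_sum could leave load_sum non-zero while load_avg is zero.
	 */
	printf("load_avg=%lu load_sum=%llu\n",
	       cfs.load_avg, (unsigned long long)cfs.load_sum);
	return 0;
}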
