sched/pelt: Relax the sync of util_sum with util_avg
Rick reported performance regressions in bugzilla because the cpu frequency
was lower than before:
    https://bugzilla.kernel.org/show_bug.cgi?id=215045

He bisected the problem to:
commit 1c35b07 ("sched/fair: Ensure _sum and _avg values stay consistent")

This commit forces util_sum to be synced with the new util_avg after
removing the contribution of a task and before the next periodic sync. By
doing so, util_sum is rounded down to its lower bound and might lose up to
LOAD_AVG_MAX-1 of accumulated contribution which has not yet been
reflected in util_avg.
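
As a rough illustration of that loss, here is a standalone userspace sketch
with made-up numbers (not kernel code; only LOAD_AVG_MAX and the divider
formula are taken from PELT):

  #include <stdio.h>

  #define LOAD_AVG_MAX 47742  /* maximum possible PELT sum */

  int main(void)
  {
      unsigned int period_contrib = 1023; /* just under a full 1024us period */
      unsigned int divider = LOAD_AVG_MAX - 1024 + period_contrib;
      /* contribution accumulated since the last decay, not yet in util_avg */
      unsigned int util_sum = 10 * divider + (divider - 1);
      unsigned int util_avg = util_sum / divider;  /* == 10 */

      /* the reverted sync: snap util_sum to the lower bound of util_avg */
      unsigned int synced = util_avg * divider;

      printf("lost: %u (up to divider - 1, i.e. ~LOAD_AVG_MAX-1)\n",
             util_sum - synced);
      return 0;
  }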

Instead of always setting util_sum to the lower bound of util_avg, which can
significantly lower the utilization of the root cfs_rq after propagating the
change down into the hierarchy, we revert the change of util_sum and
propagate the difference.
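
A minimal sketch of the before/after behaviour on task removal, using
simplified local stand-ins for sched_avg and the kernel's sub_positive()
(the real change is in the fair.c hunk below):

  #include <stdint.h>

  /* simplified stand-ins, for illustration only */
  struct sa {
      unsigned long util_avg;
      uint32_t      util_sum;
  };

  /* subtract without letting the value wrap below zero */
  #define sub_positive(ptr, val) do {               \
      typeof(*(ptr)) _v = (val);                    \
      *(ptr) = (*(ptr) > _v) ? *(ptr) - _v : 0;     \
  } while (0)

  /* before the fix: snap util_sum to the lower bound of the new util_avg */
  static inline void remove_util_old(struct sa *sa, unsigned long r, uint32_t divider)
  {
      sub_positive(&sa->util_avg, r);
      sa->util_sum = sa->util_avg * divider;
  }

  /* after the fix: only revert the contribution that is actually removed */
  static inline void remove_util_new(struct sa *sa, unsigned long r, uint32_t divider)
  {
      sub_positive(&sa->util_avg, r);
      sub_positive(&sa->util_sum, r * divider);
  }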

In addition, we also check that a cfs_rq's util_sum always stays above the
lower bound for a given util_avg, as it has been observed that a
sched_entity's util_sum is sometimes above the cfs_rq's.
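
Since divider = PELT_MIN_DIVIDER + period_contrib and period_contrib is
always below 1024, util_sum can never legitimately drop below
util_avg * PELT_MIN_DIVIDER. A compact sketch of that clamp (constants as
in the hunks below; the helper name is made up here):

  #include <stdint.h>

  #define LOAD_AVG_MAX     47742
  #define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)

  /* lowest util_sum consistent with a given util_avg, whatever period_contrib is */
  static inline uint32_t util_sum_floor(uint32_t util_sum, unsigned long util_avg)
  {
      uint32_t lo = util_avg * PELT_MIN_DIVIDER;

      return util_sum < lo ? lo : util_sum;
  }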

Fixes: 1c35b07 ("sched/fair: Ensure _sum and _avg values stay consistent")
Reported-by: Rick Yiu <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Dietmar Eggemann <[email protected]>
Tested-by: Sachin Sant <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
vingu-linaro authored and Peter Zijlstra committed Jan 18, 2022
1 parent a06247c commit 98b0d89
Showing 2 changed files with 16 additions and 4 deletions.
16 changes: 13 additions & 3 deletions kernel/sched/fair.c
@@ -3381,7 +3381,6 @@ void set_task_rq_fair(struct sched_entity *se,
 	se->avg.last_update_time = n_last_update_time;
 }
 
-
 /*
  * When on migration a sched_entity joins/leaves the PELT hierarchy, we need to
  * propagate its contribution. The key to this propagation is the invariant
@@ -3449,7 +3448,6 @@ void set_task_rq_fair(struct sched_entity *se,
  * XXX: only do this for the part of runnable > running ?
  *
  */
-
 static inline void
 update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
 {
@@ -3681,7 +3679,19 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 
 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sa->util_sum = sa->util_avg * divider;
+		sub_positive(&sa->util_sum, r * divider);
+		/*
+		 * Because of rounding, se->util_sum might end up being +1 more than
+		 * cfs->util_sum. Although this is not a problem by itself, detaching
+		 * a lot of tasks with the rounding problem between 2 updates of
+		 * util_avg (~1ms) can make cfs->util_sum become null whereas
+		 * cfs->util_avg is not.
+		 * Check that util_sum is still above its lower bound for the new
+		 * util_avg. Given that period_contrib might have moved since the last
+		 * sync, we are only sure that util_sum must be above or equal to
+		 *    util_avg * minimum possible divider
+		 */
+		sa->util_sum = max_t(u32, sa->util_sum, sa->util_avg * PELT_MIN_DIVIDER);
 
 		r = removed_runnable;
 		sub_positive(&sa->runnable_avg, r);
4 changes: 3 additions & 1 deletion kernel/sched/pelt.h
@@ -37,9 +37,11 @@ update_irq_load_avg(struct rq *rq, u64 running)
 }
 #endif
 
+#define PELT_MIN_DIVIDER (LOAD_AVG_MAX - 1024)
+
 static inline u32 get_pelt_divider(struct sched_avg *avg)
 {
-	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
+	return PELT_MIN_DIVIDER + avg->period_contrib;
 }
 
 static inline void cfs_se_util_change(struct sched_avg *avg)
