mm: don't avoid high-priority reclaim on unreclaimable nodes
Commit 246e87a ("memcg: fix get_scan_count() for small targets")
sought to avoid high reclaim priorities for kswapd by forcing it to scan
a minimum number of pages when lru_pages >> priority yielded nothing.
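
For illustration, a minimal standalone C sketch of that calculation; the
shift-by-priority scan target and the DEF_PRIORITY value of 12 match the
kernel, while the LRU size here is a hypothetical small memcg:

#include <stdio.h>

#define DEF_PRIORITY 12	/* the kernel's default (lowest-pressure) reclaim priority */

int main(void)
{
	unsigned long lru_pages = 200;	/* hypothetical small memcg LRU */
	int priority;

	for (priority = DEF_PRIORITY; priority >= 0; priority--)
		printf("priority %2d: scan target %lu pages\n",
		       priority, lru_pages >> priority);
	/*
	 * Priorities 12 through 8 all yield a scan target of 0: nothing
	 * is scanned and the reclaimer has to drop to a more aggressive
	 * priority.  Commit 246e87a instead padded such zero targets to
	 * a minimum batch for kswapd.
	 */
	return 0;
}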

Commit b95a2f2 ("mm: vmscan: convert global reclaim to per-memcg
LRU lists"), due to switching global reclaim to a round-robin scheme
over all cgroups, had to restrict this forceful behavior to
unreclaimable zones in order to prevent massive overreclaim with many
cgroups.
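
To see the overreclaim risk concretely, a back-of-the-envelope sketch; the
cgroup count and the per-LRU forced minimum are illustrative assumptions,
while SWAP_CLUSTER_MAX really is 32 pages in the kernel:

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* the kernel's minimum scan batch */

int main(void)
{
	unsigned long cgroups = 1000;	/* hypothetical cgroup population */
	unsigned long lrus = 4;		/* anon/file x active/inactive lists */

	/*
	 * If every cgroup were forced to scan at least one batch per
	 * LRU, a single round-robin pass over the hierarchy would scan
	 * at least cgroups * lrus * SWAP_CLUSTER_MAX pages:
	 */
	printf("minimum scan per pass: %lu pages\n",
	       cgroups * lrus * SWAP_CLUSTER_MAX);	/* 128000 pages */
	return 0;
}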

The latter patch effectively neutered the behavior completely for all
but extreme memory pressure.  But in those situations we might as well
drop the reclaimers to lower priority levels.  Remove the check.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Hillf Danton <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Jia He <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
hnaz authored and torvalds committed May 3, 2017
1 parent 15038d0 commit a2d7f8e
19 changes: 5 additions & 14 deletions mm/vmscan.c
@@ -2130,22 +2130,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	int pass;
 
 	/*
-	 * If the zone or memcg is small, nr[l] can be 0. This
-	 * results in no scanning on this priority and a potential
-	 * priority drop. Global direct reclaim can go to the next
-	 * zone and tends to have no problems. Global kswapd is for
-	 * zone balancing and it needs to scan a minimum amount. When
+	 * If the zone or memcg is small, nr[l] can be 0. When
 	 * reclaiming for a memcg, a priority drop can cause high
-	 * latencies, so it's better to scan a minimum amount there as
-	 * well.
+	 * latencies, so it's better to scan a minimum amount. When a
+	 * cgroup has already been deleted, scrape out the remaining
+	 * cache forcefully to get rid of the lingering state.
 	 */
-	if (current_is_kswapd()) {
-		if (!pgdat_reclaimable(pgdat))
-			force_scan = true;
-		if (!mem_cgroup_online(memcg))
-			force_scan = true;
-	}
-	if (!global_reclaim(sc))
+	if (!global_reclaim(sc) || !mem_cgroup_online(memcg))
 		force_scan = true;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
