mm, compaction: abort free scanner if split fails
If the memory compaction free scanner cannot successfully split a free
page (only possible due to per-zone low watermark), terminate the free
scanner rather than continuing to scan memory needlessly.  If the
watermark is insufficient for a free page of order <= cc->order, then
terminate the scanner since all future splits will also likely fail.

This prevents the compaction freeing scanner from scanning all memory on
very large zones (very noticeable for zones > 128GB, for instance) when
all splits will likely fail while holding zone->lock.

compaction_alloc() iterating a 128GB zone has been benchmarked to take
over 400ms on some systems: every free page that is isolated and ready
to be split fails in split_free_page() because of the low watermark
check, so the iteration continues anyway.

The next time compaction occurs, the freeing scanner will likely start
at the end of the zone again since no progress was made previously, and
we get the same lengthy iteration until the zone is brought above the
low watermark.  All thp page faults can take >400ms in such a state
without this fix.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Rientjes <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
rientjes authored and torvalds committed Jun 25, 2016
1 parent 5c335fe commit a4f04f2
mm/compaction.c: 21 additions & 18 deletions

--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -441,25 +441,23 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 		/* Found a free page, break it into order-0 pages */
 		isolated = split_free_page(page);
+		if (!isolated)
+			break;
+
 		total_isolated += isolated;
+		cc->nr_freepages += isolated;
 		for (i = 0; i < isolated; i++) {
 			list_add(&page->lru, freelist);
 			page++;
 		}
-
-		/* If a page was split, advance to the end of it */
-		if (isolated) {
-			cc->nr_freepages += isolated;
-			if (!strict &&
-			    cc->nr_migratepages <= cc->nr_freepages) {
-				blockpfn += isolated;
-				break;
-			}
-
-			blockpfn += isolated - 1;
-			cursor += isolated - 1;
-			continue;
+		if (!strict && cc->nr_migratepages <= cc->nr_freepages) {
+			blockpfn += isolated;
+			break;
 		}
+		/* Advance to the end of split page */
+		blockpfn += isolated - 1;
+		cursor += isolated - 1;
+		continue;
 
 isolate_fail:
 		if (strict)
@@ -469,6 +467,9 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 
 	}
 
+	if (locked)
+		spin_unlock_irqrestore(&cc->zone->lock, flags);
+
 	/*
 	 * There is a tiny chance that we have read bogus compound_order(),
 	 * so be careful to not go outside of the pageblock.
@@ -490,9 +491,6 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (strict && blockpfn < end_pfn)
 		total_isolated = 0;
 
-	if (locked)
-		spin_unlock_irqrestore(&cc->zone->lock, flags);
-
 	/* Update the pageblock-skip if the whole pageblock was scanned */
 	if (blockpfn == end_pfn)
 		update_pageblock_skip(cc, valid_page, total_isolated, false);
@@ -1011,6 +1009,7 @@ static void isolate_freepages(struct compact_control *cc)
 				block_end_pfn = block_start_pfn,
 				block_start_pfn -= pageblock_nr_pages,
 				isolate_start_pfn = block_start_pfn) {
+		unsigned long isolated;
 
 		/*
 		 * This can iterate a massively long zone without finding any
@@ -1035,8 +1034,12 @@ static void isolate_freepages(struct compact_control *cc)
 			continue;
 
 		/* Found a block suitable for isolating free pages from. */
-		isolate_freepages_block(cc, &isolate_start_pfn,
-					block_end_pfn, freelist, false);
+		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
+						block_end_pfn, freelist, false);
+		/* If isolation failed early, do not continue needlessly */
+		if (!isolated && isolate_start_pfn < block_end_pfn &&
+		    cc->nr_migratepages > cc->nr_freepages)
+			break;
 
 		/*
 		 * If we isolated enough freepages, or aborted due to async
