mm, compaction: restrict async compaction to pageblocks of same migratetype

The migrate scanner in async compaction is currently limited to
MIGRATE_MOVABLE pageblocks.  This is a heuristic intended to reduce
latency, based on the assumption that non-MOVABLE pageblocks are
unlikely to contain movable pages.

However, with the exception of THPs, most high-order allocations are
not movable.  When async compaction succeeds for such an allocation, the
resulting free pages sit in a MOVABLE pageblock, so the non-MOVABLE
allocation is likely to fall back into it, making long-term fragmentation
worse.

This patch attempts to help the situation by changing async direct
compaction so that the migrate scanner only scans the pageblocks of the
requested migratetype.  If it's a non-MOVABLE type and there are such
pageblocks that do contain movable pages, chances are that the
allocation can succeed within one of them, removing the need for a
fallback.  If that fails, the subsequent sync attempt will ignore
this restriction.
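
For illustration, here is a minimal standalone sketch of the pageblock
check this patch introduces in suitable_migration_source() (see the
mm/compaction.c hunk below).  The enum and the helpers is_movable_type()
and async_scanner_accepts() are simplified stand-ins invented for this
sketch, not the kernel's actual definitions:

#include <stdbool.h>

/* Illustration only: simplified stand-ins for the kernel's migratetype values. */
enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_CMA };

/* Roughly mirrors is_migrate_movable(): MOVABLE and CMA blocks count as movable. */
static bool is_movable_type(enum migratetype mt)
{
	return mt == MIGRATE_MOVABLE || mt == MIGRATE_CMA;
}

/*
 * May the async direct compaction migrate scanner pick pages out of a
 * pageblock of type block_mt when the allocation asked for 'requested'?
 */
static bool async_scanner_accepts(enum migratetype requested, enum migratetype block_mt)
{
	if (requested == MIGRATE_MOVABLE)
		return is_movable_type(block_mt);   /* previous behavior, kept for MOVABLE */
	return block_mt == requested;               /* new: same-migratetype pageblocks only */
}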

In testing based on the 4.9 kernel, with stress-highalloc from mmtests
configured for order-4 GFP_KERNEL allocations, this patch reduced the
number of unmovable allocations falling back to movable pageblocks by
30%.  The number of movable allocations falling back is reduced by
12%.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
tehcaster authored and torvalds committed May 9, 2017
1 parent d39773a · commit 282722b
Showing 2 changed files with 22 additions and 9 deletions.
mm/compaction.c (11 changes: 9 additions & 2 deletions)
@@ -986,10 +986,17 @@ isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 static bool suitable_migration_source(struct compact_control *cc,
 							struct page *page)
 {
-	if (cc->mode != MIGRATE_ASYNC)
+	int block_mt;
+
+	if ((cc->mode != MIGRATE_ASYNC) || !cc->direct_compaction)
 		return true;
 
-	return is_migrate_movable(get_pageblock_migratetype(page));
+	block_mt = get_pageblock_migratetype(page);
+
+	if (cc->migratetype == MIGRATE_MOVABLE)
+		return is_migrate_movable(block_mt);
+	else
+		return block_mt == cc->migratetype;
 }
 
 /* Returns true if the page is within a block suitable for migration to */
mm/page_alloc.c (20 changes: 13 additions & 7 deletions)
@@ -3665,6 +3665,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
 {
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
+	const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
 	struct page *page = NULL;
 	unsigned int alloc_flags;
 	unsigned long did_some_progress;
@@ -3732,12 +3733,17 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * For costly allocations, try direct compaction first, as it's likely
-	 * that we have enough base pages and don't need to reclaim. Don't try
-	 * that for allocations that are allowed to ignore watermarks, as the
-	 * ALLOC_NO_WATERMARKS attempt didn't yet happen.
+	 * that we have enough base pages and don't need to reclaim. For non-
+	 * movable high-order allocations, do that as well, as compaction will
+	 * try prevent permanent fragmentation by migrating from blocks of the
+	 * same migratetype.
+	 * Don't try this for allocations that are allowed to ignore
+	 * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
 	 */
-	if (can_direct_reclaim && order > PAGE_ALLOC_COSTLY_ORDER &&
-		!gfp_pfmemalloc_allowed(gfp_mask)) {
+	if (can_direct_reclaim &&
+			(costly_order ||
+			   (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
+			&& !gfp_pfmemalloc_allowed(gfp_mask)) {
 		page = __alloc_pages_direct_compact(gfp_mask, order,
 						alloc_flags, ac,
 						INIT_COMPACT_PRIORITY,
@@ -3749,7 +3755,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 * Checks for costly allocations with __GFP_NORETRY, which
 		 * includes THP page fault allocations
 		 */
-		if (gfp_mask & __GFP_NORETRY) {
+		if (costly_order && (gfp_mask & __GFP_NORETRY)) {
 			/*
 			 * If compaction is deferred for high-order allocations,
 			 * it is because sync compaction recently failed. If
@@ -3830,7 +3836,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * Do not retry costly high order allocations unless they are
 	 * __GFP_REPEAT
 	 */
-	if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
+	if (costly_order && !(gfp_mask & __GFP_REPEAT))
 		goto nopage;
 
 	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
