mm, compaction: check early for huge pages encountered by the migration scanner

When scanning for migration sources or targets, PageCompound is checked so
that huge pages can be skipped quickly, but the check happens relatively
late, after a lot of setup and other checking.  This patch short-cuts the
check so it is made earlier.  The result may still change by the time the
lock is acquired, but overall this has less overhead.  The free scanner
advances past such blocks but the migration scanner does not.  Typically
the free scanner encounters more movable blocks that change state over the
lifetime of the system, and it also tends to scan more aggressively as it
actively fills its portion of the physical address space with data.  This
could change in the future but, for the moment, this behaviour worked
better in practice and incurred fewer scan restarts.

The impact on latency and allocation success rates is marginal, but the
free scan rate is reduced by 15% and system CPU usage by 3.3%.  The
2-socket results are not materially different.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mel Gorman <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Dan Carpenter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: YueHaibing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
  • Loading branch information
gormanm authored and torvalds committed Mar 6, 2019
1 parent cb2dcaf commit 9bebefd
Showing 1 changed file with 12 additions and 4 deletions.
16 changes: 12 additions & 4 deletions mm/compaction.c

@@ -1061,6 +1061,9 @@ static bool suitable_migration_source(struct compact_control *cc,
 {
 	int block_mt;
 
+	if (pageblock_skip_persistent(page))
+		return false;
+
 	if ((cc->mode != MIGRATE_ASYNC) || !cc->direct_compaction)
 		return true;
 
@@ -1697,12 +1700,17 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			continue;
 
 		/*
-		 * For async compaction, also only scan in MOVABLE blocks.
-		 * Async compaction is optimistic to see if the minimum amount
-		 * of work satisfies the allocation.
+		 * For async compaction, also only scan in MOVABLE blocks
+		 * without huge pages. Async compaction is optimistic to see
+		 * if the minimum amount of work satisfies the allocation.
+		 * The cached PFN is updated as it's possible that all
+		 * remaining blocks between source and target are unsuitable
+		 * and the compaction scanners fail to meet.
 		 */
-		if (!suitable_migration_source(cc, page))
+		if (!suitable_migration_source(cc, page)) {
+			update_cached_migrate(cc, block_end_pfn);
 			continue;
+		}
 
 		/* Perform the isolation */
 		low_pfn = isolate_migratepages_block(cc, low_pfn,
