mm: compaction: Use async migration for __GFP_NO_KSWAPD and enforce no writeback

__GFP_NO_KSWAPD allocations are usually very expensive and not mandatory
to succeed as they have a graceful fallback.  Waiting for I/O in those
tends to be overkill in terms of latency, so we can reduce their latency
by disabling sync migration.

Unfortunately, even with async migration it's still possible for the
process to be blocked waiting for a request slot (e.g.  get_request_wait
in the block layer) when ->writepage is called.  To prevent
__GFP_NO_KSWAPD allocations from blocking, this patch prevents ->writepage
from being called on dirty page cache during asynchronous migration.

Addresses https://bugzilla.kernel.org/show_bug.cgi?id=31142

[[email protected]: Avoid writebacks for NFS, retry locked pages, use bool]
Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Cc: Arthur Marsh <[email protected]>
Cc: Clemens Ladisch <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Minchan Kim <[email protected]>
Reported-by: Alex Villacis Lasso <[email protected]>
Tested-by: Alex Villacis Lasso <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
aagit authored and torvalds committed Mar 23, 2011
1 parent b2eef8c commit 11bc82d
Showing 2 changed files with 34 additions and 16 deletions.
48 changes: 33 additions & 15 deletions mm/migrate.c
@@ -564,7 +564,7 @@ static int fallback_migrate_page(struct address_space *mapping,
  *  == 0 - success
  */
 static int move_to_new_page(struct page *newpage, struct page *page,
-						int remap_swapcache)
+					int remap_swapcache, bool sync)
 {
 	struct address_space *mapping;
 	int rc;
@@ -586,18 +586,28 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 	mapping = page_mapping(page);
 	if (!mapping)
 		rc = migrate_page(mapping, newpage, page);
-	else if (mapping->a_ops->migratepage)
+	else {
 		/*
-		 * Most pages have a mapping and most filesystems
-		 * should provide a migration function. Anonymous
-		 * pages are part of swap space which also has its
-		 * own migration function. This is the most common
-		 * path for page migration.
+		 * Do not writeback pages if !sync and migratepage is
+		 * not pointing to migrate_page() which is nonblocking
+		 * (swapcache/tmpfs uses migratepage = migrate_page).
 		 */
-		rc = mapping->a_ops->migratepage(mapping,
-						newpage, page);
-	else
-		rc = fallback_migrate_page(mapping, newpage, page);
+		if (PageDirty(page) && !sync &&
+		    mapping->a_ops->migratepage != migrate_page)
+			rc = -EBUSY;
+		else if (mapping->a_ops->migratepage)
+			/*
+			 * Most pages have a mapping and most filesystems
+			 * should provide a migration function. Anonymous
+			 * pages are part of swap space which also has its
+			 * own migration function. This is the most common
+			 * path for page migration.
+			 */
+			rc = mapping->a_ops->migratepage(mapping,
+							newpage, page);
+		else
+			rc = fallback_migrate_page(mapping, newpage, page);
+	}
 
 	if (rc) {
 		newpage->mapping = NULL;
@@ -641,7 +651,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	rc = -EAGAIN;
 
 	if (!trylock_page(page)) {
-		if (!force)
+		if (!force || !sync)
 			goto move_newpage;
 
 		/*
@@ -686,7 +696,15 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	BUG_ON(charge);
 
 	if (PageWriteback(page)) {
-		if (!force || !sync)
+		/*
+		 * For !sync, there is no point retrying as the retry loop
+		 * is expected to be too short for PageWriteback to be cleared
+		 */
+		if (!sync) {
+			rc = -EBUSY;
+			goto uncharge;
+		}
+		if (!force)
 			goto uncharge;
 		wait_on_page_writeback(page);
 	}
@@ -757,7 +775,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 
 skip_unmap:
 	if (!page_mapped(page))
-		rc = move_to_new_page(newpage, page, remap_swapcache);
+		rc = move_to_new_page(newpage, page, remap_swapcache, sync);
 
 	if (rc && remap_swapcache)
 		remove_migration_ptes(page, page);
@@ -850,7 +868,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	try_to_unmap(hpage, TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
 
 	if (!page_mapped(hpage))
-		rc = move_to_new_page(new_hpage, hpage, 1);
+		rc = move_to_new_page(new_hpage, hpage, 1, sync);
 
 	if (rc)
 		remove_migration_ptes(hpage, hpage);
2 changes: 1 addition & 1 deletion mm/page_alloc.c
@@ -2103,7 +2103,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 					sync_migration);
 	if (page)
 		goto got_pg;
-	sync_migration = true;
+	sync_migration = !(gfp_mask & __GFP_NO_KSWAPD);
 
 	/* Try direct reclaim and then allocating */
 	page = __alloc_pages_direct_reclaim(gfp_mask, order,