mm: page migration use the put_new_page whenever necessary
I don't know of any problem from the way it's used in our current tree,
but there is one defect in page migration's custom put_new_page feature.

An unused newpage is expected to be released with the put_new_page(), but
there was one MIGRATEPAGE_SUCCESS (0) path which released it with
putback_lru_page(): which can be very wrong for a custom pool.

Fixed more easily by resetting put_new_page once it won't be needed, than
by adding a further flag to modify the rc test.

Signed-off-by: Hugh Dickins <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Rik van Riel <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Hugh Dickins authored and torvalds committed Nov 6, 2015
1 parent 14e0f9b commit 2def742
Showing 1 changed file with 11 additions and 8 deletions.
mm/migrate.c: 11 additions & 8 deletions

diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -938,10 +938,11 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 				   int force, enum migrate_mode mode,
 				   enum migrate_reason reason)
 {
-	int rc = 0;
+	int rc = MIGRATEPAGE_SUCCESS;
 	int *result = NULL;
-	struct page *newpage = get_new_page(page, private, &result);
+	struct page *newpage;
 
+	newpage = get_new_page(page, private, &result);
 	if (!newpage)
 		return -ENOMEM;
 
@@ -955,6 +956,8 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		goto out;
 
 	rc = __unmap_and_move(page, newpage, force, mode);
+	if (rc == MIGRATEPAGE_SUCCESS)
+		put_new_page = NULL;
 
 out:
 	if (rc != -EAGAIN) {
@@ -981,7 +984,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 	 * it. Otherwise, putback_lru_page() will drop the reference grabbed
 	 * during isolation.
 	 */
-	if (rc != MIGRATEPAGE_SUCCESS && put_new_page) {
+	if (put_new_page) {
 		ClearPageSwapBacked(newpage);
 		put_new_page(newpage, private);
 	} else if (unlikely(__is_movable_balloon_page(newpage))) {
@@ -1022,7 +1025,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 				struct page *hpage, int force,
 				enum migrate_mode mode)
 {
-	int rc = 0;
+	int rc = -EAGAIN;
 	int *result = NULL;
 	int page_was_mapped = 0;
 	struct page *new_hpage;
@@ -1044,8 +1047,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	if (!new_hpage)
 		return -ENOMEM;
 
-	rc = -EAGAIN;
-
 	if (!trylock_page(hpage)) {
 		if (!force || mode != MIGRATE_SYNC)
 			goto out;
@@ -1070,8 +1071,10 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 
-	if (rc == MIGRATEPAGE_SUCCESS)
+	if (rc == MIGRATEPAGE_SUCCESS) {
 		hugetlb_cgroup_migrate(hpage, new_hpage);
+		put_new_page = NULL;
+	}
 
 	unlock_page(hpage);
 out:
@@ -1083,7 +1086,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	 * it. Otherwise, put_page() will drop the reference grabbed during
 	 * isolation.
 	 */
-	if (rc != MIGRATEPAGE_SUCCESS && put_new_page)
+	if (put_new_page)
 		put_new_page(new_hpage, private);
 	else
 		putback_active_hugepage(new_hpage);
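
To make the fix concrete, here is a minimal userspace sketch (an editor's illustration, not kernel code) of the pattern the patch applies: clear the caller-supplied release callback once the new page has actually been consumed, so the shared cleanup path only needs to check whether the callback is still set. Every name in the sketch (fake_page, pool_alloc, pool_put, do_move, migrate_one) is a hypothetical stand-in for the mm/migrate.c machinery, and the early-return case models a path that reports success without ever using the new page.

/*
 * Editor's illustration, not part of the commit: a toy model of the
 * "reset the callback once it is no longer needed" approach.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_page { int dummy; };

/* Stand-in for a custom allocator's get_new_page() callback. */
static struct fake_page *pool_alloc(void)
{
	return calloc(1, sizeof(struct fake_page));
}

/* Stand-in for the caller-supplied put_new_page() callback. */
static void pool_put(struct fake_page *p)
{
	printf("unused newpage returned to the custom pool\n");
	free(p);
}

/* Stand-in for __unmap_and_move(): 0 plays the role of MIGRATEPAGE_SUCCESS. */
static int do_move(struct fake_page *newpage, int can_migrate)
{
	(void)newpage;
	return can_migrate ? 0 : -1;
}

static void migrate_one(int already_free, int can_migrate)
{
	void (*put_new_page)(struct fake_page *) = pool_put;
	struct fake_page *newpage = pool_alloc();
	int rc = 0;			/* success by default, like the patched code */

	if (!newpage)
		return;

	if (already_free)		/* "nothing to migrate" success path: */
		goto out;		/* rc is 0, yet newpage was never used */

	rc = do_move(newpage, can_migrate);
	if (rc == 0)
		put_new_page = NULL;	/* newpage consumed: must not be released */

out:
	/*
	 * Shared cleanup no longer keys off rc alone: any still-unused
	 * newpage goes back through the custom release callback.
	 */
	if (put_new_page)
		put_new_page(newpage);
	else {
		printf("newpage kept: it became the migrated page (rc=%d)\n", rc);
		free(newpage);		/* toy model only; the kernel keeps the page */
	}
}

int main(void)
{
	migrate_one(1, 0);	/* early success: unused page goes back to the pool */
	migrate_one(0, 0);	/* migration failed: unused page goes back to the pool */
	migrate_one(0, 1);	/* migration succeeded: callback was cleared, page kept */
	return 0;
}

In the sketch, the early-success case still returns the unused page through the custom pool, which is exactly the case the old "rc != MIGRATEPAGE_SUCCESS && put_new_page" test mishandled.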
