khugepaged: drain LRU add pagevec after swapin
collapse_huge_page() tries to swap in pages that are part of the PMD
range.  A just-swapped-in page goes through the LRU add cache (a per-CPU
pagevec), and the cache takes an extra reference on the page.
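For context, the extra pin comes from the per-CPU LRU add pagevec.  A
simplified sketch of that path, along the lines of mm/swap.c around this
kernel version (exact helpers and variable names differ between releases):

	void lru_cache_add(struct page *page)
	{
		struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

		/* The pagevec holds its own reference until it is drained */
		get_page(page);
		if (!pagevec_add(pvec, page) || PageCompound(page))
			__pagevec_lru_add(pvec);
		put_cpu_var(lru_add_pvec);
	}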

The extra reference can cause the collapse to fail: the subsequent
__collapse_huge_page_isolate() checks the refcount and aborts the
collapse when it sees an unexpected value.
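The refcount test that trips over the extra pin looks roughly like this
(simplified from __collapse_huge_page_isolate(); the exact expression
depends on the kernel version):

	/*
	 * The page must only be referenced by the scanned process
	 * and the swap cache; the pagevec's extra reference makes
	 * this check fail.
	 */
	if (page_count(page) != 1 + PageSwapCache(page)) {
		unlock_page(page);
		result = SCAN_PAGE_COUNT;
		goto out;
	}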

The fix is to drain the local LRU add cache in
__collapse_huge_page_swapin() if we successfully swapped in any pages.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Zi Yan <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Acked-by: Yang Shi <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Ralph Campbell <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
kiryl authored and torvalds committed Jun 4, 2020
1 parent a980df3 commit ae2c5d8
Showing 1 changed file with 5 additions and 0 deletions.
5 changes: 5 additions & 0 deletions mm/khugepaged.c
@@ -931,6 +931,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	}
 	vmf.pte--;
 	pte_unmap(vmf.pte);
+
+	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
+	if (swapped_in)
+		lru_add_drain();
+
 	trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 1);
 	return true;
 }