khugepaged: allow to collapse a page shared across fork
A page can be included in the collapse as long as it doesn't have extra
pins (from GUP or otherwise).

Logic to check the refcount is moved to a separate function.  For pages in
swap cache, add compound_nr(page) to the expected refcount, in order to
handle the compound page case.  This is in preparation for the following
patch.

The VM_BUG_ON_PAGE() in __collapse_huge_page_copy() was removed, as the
invariant it checked is no longer valid: the source page can now be mapped
multiple times.
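
For illustration, here is a minimal userspace model of the new refcount rule.
This is not kernel code: struct page_model and its fields are hypothetical
stand-ins for page_count(), total_mapcount(), compound_nr() and
PageSwapCache(), and refcount_suitable() only mirrors the arithmetic described
above.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the bits of struct page state that the
 * refcount check looks at. */
struct page_model {
	int page_count;      /* page_count(page): all references held     */
	int total_mapcount;  /* total_mapcount(page): PTE/PMD mappings    */
	int compound_nr;     /* compound_nr(page): 1, or 512 for a 2M THP */
	bool in_swap_cache;  /* PageSwapCache(page)                       */
};

/* Every reference must be explained by a mapping or by the swap cache;
 * anything beyond that is treated as an external (e.g. GUP) pin and
 * blocks the collapse. */
static bool refcount_suitable(const struct page_model *p)
{
	int expected = p->total_mapcount;

	if (p->in_swap_cache)
		expected += p->compound_nr;

	return p->page_count == expected;
}

int main(void)
{
	/* Base page shared copy-on-write by parent and child after fork():
	 * two mappings, two references -> collapsible. */
	struct page_model shared = { .page_count = 2, .total_mapcount = 2,
				     .compound_nr = 1, .in_swap_cache = false };

	/* Same page with one extra GUP pin -> skipped. */
	struct page_model pinned = { .page_count = 3, .total_mapcount = 2,
				     .compound_nr = 1, .in_swap_cache = false };

	printf("shared ok: %d, pinned ok: %d\n",
	       refcount_suitable(&shared), refcount_suitable(&pinned));
	return 0;
}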

[[email protected]: remove error message when checking external pins]
  Link: http://lkml.kernel.org/r/[email protected]
[[email protected]: fix set-but-not-used warning]
  Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Zi Yan <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
Acked-by: Yang Shi <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Ralph Campbell <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
kiryl authored and torvalds committed Jun 4, 2020
1 parent ae2c5d8 commit 9445689
Showing 1 changed file with 37 additions and 9 deletions.
46 changes: 37 additions & 9 deletions mm/khugepaged.c
@@ -526,6 +526,17 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte)
}
}

+ static bool is_refcount_suitable(struct page *page)
+ {
+ int expected_refcount;
+
+ expected_refcount = total_mapcount(page);
+ if (PageSwapCache(page))
+ expected_refcount += compound_nr(page);
+
+ return page_count(page) == expected_refcount;
+ }
+
static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
unsigned long address,
pte_t *pte)
@@ -578,11 +589,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
}

/*
- * cannot use mapcount: can't collapse if there's a gup pin.
- * The page must only be referenced by the scanned process
- * and page swap cache.
+ * Check if the page has any GUP (or other external) pins.
+ *
+ * The page table that maps the page has already been unlinked
+ * from the page table tree and this process cannot get
+ * an additional pin on the page.
+ *
+ * New pins can come later if the page is shared across fork,
+ * but not from this process. The other process cannot write to
+ * the page, only trigger CoW.
*/
- if (page_count(page) != 1 + PageSwapCache(page)) {
+ if (!is_refcount_suitable(page)) {
unlock_page(page);
result = SCAN_PAGE_COUNT;
goto out;
@@ -669,7 +686,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
} else {
src_page = pte_page(pteval);
copy_user_highpage(page, src_page, address, vma);
- VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
release_pte_page(src_page);
/*
* ptl mostly unnecessary, but preempt has to
@@ -1221,11 +1237,23 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
}

/*
- * cannot use mapcount: can't collapse if there's a gup pin.
- * The page must only be referenced by the scanned process
- * and page swap cache.
+ * Check if the page has any GUP (or other external) pins.
+ *
+ * Here the check is racy: it may see total_mapcount > refcount
+ * in some cases.
+ * For example, take one process with one forked child process.
+ * The parent has the PMD split due to MADV_DONTNEED, then
+ * the child tries to unmap the whole PMD, but khugepaged
+ * may be scanning the parent between the child clearing the
+ * PageDoubleMap flag and decrementing the mapcount, so
+ * khugepaged may see total_mapcount > refcount.
+ *
+ * But such a case is ephemeral and we can always retry the
+ * collapse later. However, it may report a false positive if the
+ * page has excessive GUP pins (i.e. 512). Anyway, the same check
+ * will be done again later and the risk seems low.
*/
- if (page_count(page) != 1 + PageSwapCache(page)) {
+ if (!is_refcount_suitable(page)) {
result = SCAN_PAGE_COUNT;
goto out_unmap;
}
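
To see when the new check matters in practice, here is a minimal sketch of the
scenario the patch targets: an anonymous THP-eligible range shared
copy-on-write across fork(). It assumes a kernel with this patch, khugepaged
running, and transparent hugepages set to madvise or always; the sleeps only
exist to give khugepaged time to scan the mappings.

#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define LEN (2UL << 20)		/* one PMD-sized (2 MiB) region */

int main(void)
{
	/* Anonymous mapping, hinted for THP so khugepaged will scan it. */
	char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	madvise(buf, LEN, MADV_HUGEPAGE);

	/* Populate the range with base pages. */
	for (size_t i = 0; i < LEN; i += 4096)
		buf[i] = 1;

	/* After fork() the pages are shared copy-on-write, so
	 * page_count == total_mapcount == 2.  The old
	 * page_count != 1 + PageSwapCache() test made khugepaged skip
	 * them with SCAN_PAGE_COUNT; with this patch the range stays
	 * eligible for collapse in both processes. */
	pid_t pid = fork();
	if (pid == 0) {
		sleep(60);	/* keep the shared mappings alive */
		_exit(0);
	}
	sleep(60);
	waitpid(pid, NULL, 0);
	munmap(buf, LEN);
	return 0;
}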
