mm: fix handling PTE-mapped THPs in page_referenced()
For PTE-mapped THPs, page_check_address_transhuge() is not adequate: it
cannot find all the relevant PTEs, only the first one.  This means we can
miss some references to the page, which can result in suboptimal
decisions by vmscan.

Let's switch it to page_vma_mapped_walk().

I don't think this needs to go to stable@: it's not fatal.  The only side
effect is that a THP can be swapped out when it shouldn't be.
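
For reference, the iteration pattern the function moves to looks roughly
like the minimal sketch below (not part of the patch; count_mappings() is
a hypothetical helper, shown only to illustrate how page_vma_mapped_walk()
reports every mapping of a PTE-mapped THP rather than just the first):

#include <linux/rmap.h>

/* Hypothetical helper: count how many times @page is mapped inside
 * @vma, starting the walk at @address.  Each successful call to
 * page_vma_mapped_walk() leaves pvmw.pte (or pvmw.pmd, for a PMD
 * mapping) pointing at one mapping of the page, with the relevant
 * page table lock held; the next call advances to the following
 * mapping, and the walk releases its locks when it returns false.
 */
static int count_mappings(struct page *page, struct vm_area_struct *vma,
                          unsigned long address)
{
        struct page_vma_mapped_walk pvmw = {
                .page = page,
                .vma = vma,
                .address = address,
        };
        int count = 0;

        while (page_vma_mapped_walk(&pvmw))
                count++;

        /* To bail out of the loop early, call
         * page_vma_mapped_walk_done(&pvmw) so the locks are dropped. */
        return count;
}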

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Srikar Dronamraju <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
kiryl authored and torvalds committed Feb 25, 2017
1 parent ace71a1 commit 8eaeded
Showing 1 changed file with 34 additions and 32 deletions.

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -886,45 +886,48 @@ struct page_referenced_arg {
 static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 			unsigned long address, void *arg)
 {
-	struct mm_struct *mm = vma->vm_mm;
 	struct page_referenced_arg *pra = arg;
-	pmd_t *pmd;
-	pte_t *pte;
-	spinlock_t *ptl;
+	struct page_vma_mapped_walk pvmw = {
+		.page = page,
+		.vma = vma,
+		.address = address,
+	};
 	int referenced = 0;
 
-	if (!page_check_address_transhuge(page, mm, address, &pmd, &pte, &ptl))
-		return SWAP_AGAIN;
+	while (page_vma_mapped_walk(&pvmw)) {
+		address = pvmw.address;
 
-	if (vma->vm_flags & VM_LOCKED) {
-		if (pte)
-			pte_unmap(pte);
-		spin_unlock(ptl);
-		pra->vm_flags |= VM_LOCKED;
-		return SWAP_FAIL; /* To break the loop */
-	}
+		if (vma->vm_flags & VM_LOCKED) {
+			page_vma_mapped_walk_done(&pvmw);
+			pra->vm_flags |= VM_LOCKED;
+			return SWAP_FAIL; /* To break the loop */
+		}
 
-	if (pte) {
-		if (ptep_clear_flush_young_notify(vma, address, pte)) {
-			/*
-			 * Don't treat a reference through a sequentially read
-			 * mapping as such.  If the page has been used in
-			 * another mapping, we will catch it; if this other
-			 * mapping is already gone, the unmap path will have
-			 * set PG_referenced or activated the page.
-			 */
-			if (likely(!(vma->vm_flags & VM_SEQ_READ)))
+		if (pvmw.pte) {
+			if (ptep_clear_flush_young_notify(vma, address,
+						pvmw.pte)) {
+				/*
+				 * Don't treat a reference through
+				 * a sequentially read mapping as such.
+				 * If the page has been used in another mapping,
+				 * we will catch it; if this other mapping is
+				 * already gone, the unmap path will have set
+				 * PG_referenced or activated the page.
+				 */
+				if (likely(!(vma->vm_flags & VM_SEQ_READ)))
+					referenced++;
+			}
+		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+			if (pmdp_clear_flush_young_notify(vma, address,
+						pvmw.pmd))
 				referenced++;
+		} else {
+			/* unexpected pmd-mapped page? */
+			WARN_ON_ONCE(1);
 		}
-		pte_unmap(pte);
-	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-		if (pmdp_clear_flush_young_notify(vma, address, pmd))
-			referenced++;
-	} else {
-		/* unexpected pmd-mapped page? */
-		WARN_ON_ONCE(1);
+
+		pra->mapcount--;
 	}
-	spin_unlock(ptl);
 
 	if (referenced)
 		clear_page_idle(page);
@@ -936,7 +939,6 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 		pra->vm_flags |= vma->vm_flags;
 	}
 
-	pra->mapcount--;
 	if (!pra->mapcount)
 		return SWAP_SUCCESS; /* To break the loop */
 
