mm: softdirty: respect VM_SOFTDIRTY in PTE holes
After a VMA is created with the VM_SOFTDIRTY flag set, /proc/pid/pagemap
should report that the VMA's virtual pages are soft-dirty until
VM_SOFTDIRTY is cleared (i.e., by the next write of "4" to
/proc/pid/clear_refs).  However, pagemap ignores the VM_SOFTDIRTY flag
for virtual addresses that fall in PTE holes (i.e., virtual addresses
that don't have a PMD, PUD, or PGD allocated yet).
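
For reference, the userspace half of this contract looks roughly like the sketch below (not part of this patch; clear_soft_dirty() is a hypothetical helper, while the "4" command is the one documented in Documentation/vm/soft-dirty.txt):

/*
 * Sketch only.  Writing "4" to /proc/<pid>/clear_refs clears VM_SOFTDIRTY
 * on the task's VMAs and the per-PTE soft-dirty bits, so /proc/<pid>/pagemap
 * reports pages as clean until they are written again.
 */
#include <stdio.h>
#include <sys/types.h>

static int clear_soft_dirty(pid_t pid)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/clear_refs", (int)pid);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs("4", f);		/* "4" selects soft-dirty clearing */
	return fclose(f) ? -1 : 0;
}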

To observe this bug, use mmap to create a VMA large enough such that
there's a good chance that the VMA will occupy an unused PMD, then test
the soft-dirty bit on its pages.  In practice, I found that a VMA that
covered a PMD's worth of address space was big enough.
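
A minimal reproducer along those lines might look like the following sketch (illustrative only, not part of the commit; the 2 MiB mapping size and the bit-55 soft-dirty position follow Documentation/vm/pagemap.txt). On an affected kernel it prints 0 for the untouched page; with this patch it prints 1.

/*
 * Illustrative reproducer: map a PMD-sized anonymous region, do not touch
 * it, and read the pagemap entry for its first page.  Bit 55 of a pagemap
 * entry is the soft-dirty flag.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL << 20;		/* ~one PMD's worth of address space */
	long page = sysconf(_SC_PAGESIZE);
	uint64_t entry;
	int fd;

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* One 64-bit entry per page, at offset (vaddr / PAGE_SIZE) * 8. */
	if (pread(fd, &entry, sizeof(entry),
		  (off_t)((uintptr_t)p / page) * sizeof(entry)) != sizeof(entry)) {
		perror("pread");
		return 1;
	}

	/* Expect 1 (new VMA => VM_SOFTDIRTY set); the bug made this print 0. */
	printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
	close(fd);
	return 0;
}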

This patch adds the necessary VMA lookup to the PTE hole callback in
/proc/pid/pagemap's page walk and sets soft-dirty according to the VMAs'
VM_SOFTDIRTY flag.

Signed-off-by: Peter Feiner <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
Cc: Pavel Emelyanov <[email protected]>
Cc: Hugh Dickins <[email protected]>
Acked-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
peterfeiner authored and torvalds committed Aug 7, 2014
1 parent 3a91053 commit 68b5a65
Showing 1 changed file with 21 additions and 6 deletions.
27 changes: 21 additions & 6 deletions fs/proc/task_mmu.c
@@ -925,15 +925,30 @@ static int pagemap_pte_hole(unsigned long start, unsigned long end,
 				struct mm_walk *walk)
 {
 	struct pagemapread *pm = walk->private;
-	unsigned long addr;
+	unsigned long addr = start;
 	int err = 0;
-	pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
 
-	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		err = add_to_pagemap(addr, &pme, pm);
-		if (err)
-			break;
+	while (addr < end) {
+		struct vm_area_struct *vma = find_vma(walk->mm, addr);
+		pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2));
+		unsigned long vm_end;
+
+		if (!vma) {
+			vm_end = end;
+		} else {
+			vm_end = min(end, vma->vm_end);
+			if (vma->vm_flags & VM_SOFTDIRTY)
+				pme.pme |= PM_STATUS2(pm->v2, __PM_SOFT_DIRTY);
+		}
+
+		for (; addr < vm_end; addr += PAGE_SIZE) {
+			err = add_to_pagemap(addr, &pme, pm);
+			if (err)
+				goto out;
+		}
 	}
 
+out:
 	return err;
 }
