mm: make swapoff more robust against soft dirty
Both s390 and powerpc have hit the issue of swapoff hanging when the
CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were not
quite as x86_64 had them.  I think it would be much clearer if
HAVE_ARCH_SOFT_DIRTY were just a Kconfig option set by architectures to
determine whether the MEM_SOFT_DIRTY option should be offered, and the
actual code depended upon CONFIG_MEM_SOFT_DIRTY alone.

But I won't embark on that change myself: instead, make swapoff more
robust by using pte_swp_clear_soft_dirty() on each pte it encounters,
without an explicit #ifdef CONFIG_MEM_SOFT_DIRTY.  That call is a no-op,
whether the bit in question is defined as 0 or the asm-generic fallback
is used, unless soft dirty is fully turned on.
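
For context, the fallback mentioned above makes the call an identity
function.  A minimal sketch of that fallback, modelled on
include/asm-generic/pgtable.h (the guard symbol shown is an assumption;
it has been spelled both CONFIG_MEM_SOFT_DIRTY and
CONFIG_HAVE_ARCH_SOFT_DIRTY across releases):

#ifndef CONFIG_HAVE_ARCH_SOFT_DIRTY
/* No soft dirty support: clearing the bit from a swap pte changes nothing. */
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
        return pte;
}
#endif

So the unconditional call in swapoff compiles away entirely when soft
dirty is not configured.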

Why "maybe" in maybe_same_pte()? Rename it pte_same_as_swp().

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
Cc: Laurent Dufour <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Hugh Dickins authored and torvalds committed Jan 16, 2016
1 parent 88f306b commit 9f8bdb3
Showing 1 changed file with 4 additions and 14 deletions: mm/swapfile.c
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1111,19 +1111,9 @@ unsigned int count_swap_pages(int type, int free)
 }
 #endif /* CONFIG_HIBERNATION */
 
-static inline int maybe_same_pte(pte_t pte, pte_t swp_pte)
+static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
-#ifdef CONFIG_MEM_SOFT_DIRTY
-        /*
-         * When pte keeps soft dirty bit the pte generated
-         * from swap entry does not has it, still it's same
-         * pte from logical point of view.
-         */
-        pte_t swp_pte_dirty = pte_swp_mksoft_dirty(swp_pte);
-        return pte_same(pte, swp_pte) || pte_same(pte, swp_pte_dirty);
-#else
-        return pte_same(pte, swp_pte);
-#endif
+        return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
 }
 
 /*
@@ -1152,7 +1142,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
         }
 
         pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-        if (unlikely(!maybe_same_pte(*pte, swp_entry_to_pte(entry)))) {
+        if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
                 mem_cgroup_cancel_charge(page, memcg, false);
                 ret = 0;
                 goto out;
@@ -1210,7 +1200,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
          * swapoff spends a _lot_ of time in this loop!
          * Test inline before going to call unuse_pte.
          */
-        if (unlikely(maybe_same_pte(*pte, swp_pte))) {
+        if (unlikely(pte_same_as_swp(*pte, swp_pte))) {
                 pte_unmap(pte);
                 ret = unuse_pte(vma, pmd, addr, entry, page);
                 if (ret)
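
One way to see that the rename-and-simplify is behaviour-preserving:
swp_entry_to_pte() never sets the swap soft-dirty bit, so the old
two-way comparison and the new clear-then-compare accept exactly the
same pte values.  A toy model in plain C (ptes as bare 64-bit masks,
with a made-up bit position; not kernel code) that checks the
equivalence exhaustively over a small range:

/* Toy model: treat pte values as plain 64-bit masks; not kernel code. */
#include <assert.h>
#include <stdint.h>

#define SWP_SOFT_DIRTY (1ULL << 7)      /* hypothetical bit position */

static uint64_t clear_soft_dirty(uint64_t pte)
{
        return pte & ~SWP_SOFT_DIRTY;
}

/* Old maybe_same_pte(): match the swap pte with or without soft dirty. */
static int old_same(uint64_t pte, uint64_t swp_pte)
{
        return pte == swp_pte || pte == (swp_pte | SWP_SOFT_DIRTY);
}

/* New pte_same_as_swp(): clear soft dirty first, then compare once. */
static int new_same(uint64_t pte, uint64_t swp_pte)
{
        return clear_soft_dirty(pte) == swp_pte;
}

int main(void)
{
        /* swp_entry_to_pte() never sets the soft-dirty bit itself. */
        uint64_t swp_pte = 0x1234ULL & ~SWP_SOFT_DIRTY;
        uint64_t pte;

        for (pte = 0; pte < (1ULL << 16); pte++)
                assert(old_same(pte, swp_pte) == new_same(pte, swp_pte));
        return 0;
}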
