mm/huge_memory.c: use helper function migration_entry_to_page()
Use the helper function migration_entry_to_page() to get the page from a
migration entry instead of open-coding pfn_to_page(swp_offset(entry)).  As a
bonus, the helper also performs a PageLocked() sanity check on the page.
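
For reference, the helper lives in include/linux/swapops.h; at the time of
this commit it looked roughly like the sketch below (paraphrased for
illustration, not a verbatim copy of the upstream header):

static inline struct page *migration_entry_to_page(swp_entry_t entry)
{
	struct page *p = pfn_to_page(swp_offset(entry));

	/*
	 * Any use of migration entries may only occur while the
	 * corresponding page is locked, so the helper can assert that.
	 */
	BUG_ON(!PageLocked(compound_head(p)));
	return p;
}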

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Miaohe Lin <[email protected]>
Reviewed-by: Peter Xu <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Thomas Hellström (Intel) <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: yuleixzhang <[email protected]>
Cc: Zi Yan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
MiaoheLin authored and torvalds committed May 5, 2021
1 parent d4afd60 commit a44f89d
Showing 1 changed file with 2 additions and 2 deletions.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1693,7 +1693,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 			entry = pmd_to_swp_entry(orig_pmd);
-			page = pfn_to_page(swp_offset(entry));
+			page = migration_entry_to_page(entry);
 			flush_needed = 0;
 		} else
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
@@ -2101,7 +2101,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		page = migration_entry_to_page(entry);
 		write = is_write_migration_entry(entry);
 		young = false;
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
