hugetlb: fix huge_pmd_unshare address update
The routine huge_pmd_unshare() is passed a pointer to an address
associated with an area which may be unshared.  If unshare is successful
this address is updated to 'optimize' callers iterating over huge page
addresses.  For the optimization to work correctly, address should be
updated to the last huge page in the unmapped/unshared area.  However, in
the common case where the passed address is PUD_SIZE aligned, the address
is incorrectly updated to the address of the preceding huge page.  That
wastes CPU cycles as the unmapped/unshared range is scanned twice.
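
A minimal sketch (not part of the kernel source) of the arithmetic, assuming typical x86_64 values where HPAGE_SIZE = PMD_SIZE = 2 MiB, PTRS_PER_PTE = 512 and PUD_SIZE = 1 GiB: for a PUD_SIZE aligned address the old update steps back to the huge page preceding the unmapped area, while the new update lands on the last huge page inside it.

#include <stdio.h>

#define HPAGE_SIZE   (2UL << 20)                    /* 2 MiB huge pages */
#define PMD_SIZE     HPAGE_SIZE
#define PTRS_PER_PTE 512UL
#define PUD_SIZE     (HPAGE_SIZE * PTRS_PER_PTE)    /* 1 GiB */
#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1)) /* round up to a */

int main(void)
{
	unsigned long addr = PUD_SIZE;  /* PUD_SIZE aligned (the common case) */

	/* Old update: ends up on the huge page *before* the unmapped area. */
	unsigned long old = ALIGN(addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;

	/* New update: last huge page inside the cleared PUD_SIZE area. */
	unsigned long new = addr | (PUD_SIZE - PMD_SIZE);

	printf("old: %#lx\n", old);     /* 0x3fe00000 */
	printf("new: %#lx\n", new);     /* 0x7fe00000 */
	return 0;
}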

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 39dde65 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz <[email protected]>
Acked-by: Muchun Song <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
mjkravetz authored and akpm00 committed May 27, 2022
1 parent 2505a98 commit 4838127
Showing 1 changed file with 8 additions and 1 deletion.
9 changes: 8 additions & 1 deletion mm/hugetlb.c
@@ -6562,7 +6562,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	pud_clear(pud);
 	put_page(virt_to_page(ptep));
 	mm_dec_nr_pmds(mm);
-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	/*
+	 * This update of passed address optimizes loops sequentially
+	 * processing addresses in increments of huge page size (PMD_SIZE
+	 * in this case). By clearing the pud, a PUD_SIZE area is unmapped.
+	 * Update address to the 'last page' in the cleared area so that
+	 * calling loop can move to first page past this area.
+	 */
+	*addr |= PUD_SIZE - PMD_SIZE;
 	return 1;
 }
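
The comment above refers to callers that walk addresses in huge-page-size steps. A hypothetical userspace simulation (same assumed x86_64 sizes as above; fake_unshare() and walk() are made up for illustration, not kernel functions) that counts how many loop iterations such a caller performs over a single 1 GiB area with the old and the new update:

#include <stdio.h>

#define PMD_SIZE    (2UL << 20)                 /* 2 MiB */
#define HPAGE_SIZE  PMD_SIZE
#define PUD_SIZE    (1UL << 30)                 /* 1 GiB */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Pretend the first call unshares the whole PUD area and updates *addr. */
static int fake_unshare(unsigned long *addr, int *done, int use_fix)
{
	if (*done)
		return 0;
	*done = 1;
	if (use_fix)
		*addr |= PUD_SIZE - PMD_SIZE;                   /* new update */
	else
		*addr = ALIGN(*addr, PUD_SIZE) - HPAGE_SIZE;    /* old update */
	return 1;
}

/* Caller-style loop: one huge page per iteration. */
static unsigned long walk(unsigned long start, unsigned long end, int use_fix)
{
	unsigned long addr, iters = 0;
	int done = 0;

	for (addr = start; addr < end; addr += HPAGE_SIZE) {
		iters++;
		if (fake_unshare(&addr, &done, use_fix))
			continue;       /* area cleared; loop should skip it */
		/* ... per huge page work would go here ... */
	}
	return iters;
}

int main(void)
{
	unsigned long start = PUD_SIZE, end = 2 * PUD_SIZE;     /* one 1 GiB area */

	printf("old update: %lu iterations\n", walk(start, end, 0));    /* 513 */
	printf("new update: %lu iterations\n", walk(start, end, 1));    /* 1 */
	return 0;
}

With the old update the loop walks the already-unmapped area a second time (513 iterations); with the fix it moves straight past it (1 iteration).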

