mm,madvise,hugetlb: fix unexpected data loss with MADV_DONTNEED on hugetlbfs

A common use case for hugetlbfs is for the application to create
memory pools backed by huge pages, which then get handed over to
some malloc library (e.g. jemalloc) for further management.

That malloc library may be doing MADV_DONTNEED calls on memory
that is no longer needed, expecting those calls to happen on
PAGE_SIZE boundaries.

However, the MADV_DONTNEED code currently rounds up any such
requests to HPAGE_PMD_SIZE boundaries. This leads to undesired
outcomes when jemalloc expects a 4kB MADV_DONTNEED, but 2MB of
memory gets zeroed out instead.

Use of pre-built shared libraries means that user code does not
always know the page size of every memory arena in use.

Avoid unexpected data loss with MADV_DONTNEED by rounding up
only to PAGE_SIZE (in do_madvise), and rounding down to huge
page granularity.

That way programs will only get as much memory zeroed out as
they requested.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 90e7e7f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
rikvanriel authored and akpm00 committed Oct 28, 2022
1 parent fba4eaf commit 8ebe0a5
Showing 1 changed file with 11 additions and 1 deletion.

mm/madvise.c
@@ -813,7 +813,14 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
 	if (start & ~huge_page_mask(hstate_vma(vma)))
 		return false;
 
-	*end = ALIGN(*end, huge_page_size(hstate_vma(vma)));
+	/*
+	 * Madvise callers expect the length to be rounded up to PAGE_SIZE
+	 * boundaries, and may be unaware that this VMA uses huge pages.
+	 * Avoid unexpected data loss by rounding down the number of
+	 * huge pages freed.
+	 */
+	*end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma)));
 
 	return true;
 }

@@ -828,6 +835,9 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
 	if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
 		return -EINVAL;
 
+	if (start == end)
+		return 0;
+
 	if (!userfaultfd_remove(vma, start, end)) {
 		*prev = NULL; /* mmap_lock has been dropped, prev is stale */