mm: tail page refcounting optimization for slab and hugetlbfs

This skips the _mapcount mangling for slab and hugetlbfs pages.

The main trouble in doing this is guaranteeing that PageSlab and
PageHeadHuge remain constant across every get_page/put_page pair run on
the tail of a slab or hugetlbfs compound page.  Otherwise, if either
flag is set during get_page but clear during put_page, the _mapcount of
the tail page would underflow.

PageHeadHuge will remain true until the compound page is released and
enters the buddy allocator, so there is no risk of it changing even if
the tail pin is the last reference left on the page.

PG_slab, by contrast, is cleared before the slab frees the head page
with put_page, so if a tail pin were released after the slab freed the
page, we would have a problem.  But in the slab case the tail pin
cannot be the last reference left on the page: the slab code is free to
reuse the compound page after a kfree/kmem_cache_free without having to
check whether any tail pin is left.  In turn, all tail pins must always
be released while the head is still pinned by the slab code, and so we
know PG_slab will still be set too.
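
As an illustration of the accounting, here is a minimal userspace model
of the scheme (an editor's sketch, not kernel code: every name is
hypothetical and the counters are simplified; the real kernel biases the
tail's _mapcount by -1, which this ignores).  get_page on a tail always
pins the head, and additionally pins the tail only when the compound
page is neither slab nor hugetlbfs:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct model_page {
	int count;      /* head: overall reference count, like _count */
	int tail_pins;  /* tail: per-tail pin count, like _mapcount   */
	bool slab;      /* PageSlab on the head                       */
	bool head_huge; /* PageHeadHuge on the head                   */
};

/* Mirrors __compound_tail_refcounted(): only THP tails keep pins. */
static bool tail_refcounted(const struct model_page *head)
{
	return !head->slab && !head->head_huge;
}

/* get_page() on a tail: always pin the head, pin the tail only for THP. */
static void get_tail(struct model_page *head, struct model_page *tail)
{
	head->count++;
	if (tail_refcounted(head))
		tail->tail_pins++;
}

/* put_page() on a tail: the mirror image of get_tail(). */
static void put_tail(struct model_page *head, struct model_page *tail)
{
	if (tail_refcounted(head)) {
		/* If the flags changed since get_tail(), the underflow
		 * would be caught here. */
		assert(tail->tail_pins > 0);
		tail->tail_pins--;
	}
	head->count--;
	/* Slab invariant: a tail pin is never the last reference. */
	if (head->count == 0)
		assert(!head->slab);
}

int main(void)
{
	struct model_page thp_head  = { .count = 1 };
	struct model_page thp_tail  = { 0 };
	struct model_page slab_head = { .count = 1, .slab = true };
	struct model_page slab_tail = { 0 };

	get_tail(&thp_head, &thp_tail);   /* both counters move            */
	put_tail(&thp_head, &thp_tail);

	get_tail(&slab_head, &slab_tail); /* only the head refcount moves  */
	put_tail(&slab_head, &slab_tail); /* head still pinned by the slab */

	printf("thp: head=%d tail=%d  slab: head=%d tail=%d\n",
	       thp_head.count, thp_tail.tail_pins,
	       slab_head.count, slab_tail.tail_pins);
	return 0;
}

The slab invariant from the previous paragraph appears here as the final
assert in put_tail(): a slab head can only hit a zero refcount after
every tail pin has already been dropped, which is the same condition the
new VM_BUG_ON(PageSlab(page_head)) in mm/swap.c checks after
put_page_testzero().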

Signed-off-by: Andrea Arcangeli <[email protected]>
Reviewed-by: Khalid Aziz <[email protected]>
Cc: Pravin Shelar <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Ben Hutchings <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Minchan Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

aagit authored and torvalds committed Jan 22, 2014
1 parent ca64151 · commit 44518d2

Showing 4 changed files with 60 additions and 14 deletions.

include/linux/hugetlb.h: 0 additions, 6 deletions

@@ -31,7 +31,6 @@ struct hugepage_subpool *hugepage_new_subpool(long nr_blocks);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
 int PageHuge(struct page *page);
-int PageHeadHuge(struct page *page_head);
 
 void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
 int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
@@ -104,11 +103,6 @@ static inline int PageHuge(struct page *page)
 	return 0;
 }
 
-static inline int PageHeadHuge(struct page *page_head)
-{
-	return 0;
-}
-
 static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
 }

include/linux/mm.h: 31 additions, 1 deletion

@@ -414,15 +414,45 @@ static inline int page_count(struct page *page)
 	return atomic_read(&compound_head(page)->_count);
 }
 
+#ifdef CONFIG_HUGETLB_PAGE
+extern int PageHeadHuge(struct page *page_head);
+#else /* CONFIG_HUGETLB_PAGE */
+static inline int PageHeadHuge(struct page *page_head)
+{
+	return 0;
+}
+#endif /* CONFIG_HUGETLB_PAGE */
+
+static inline bool __compound_tail_refcounted(struct page *page)
+{
+	return !PageSlab(page) && !PageHeadHuge(page);
+}
+
+/*
+ * This takes a head page as parameter and tells if the
+ * tail page reference counting can be skipped.
+ *
+ * For this to be safe, PageSlab and PageHeadHuge must remain true on
+ * any given page where they return true here, until all tail pins
+ * have been released.
+ */
+static inline bool compound_tail_refcounted(struct page *page)
+{
+	VM_BUG_ON(!PageHead(page));
+	return __compound_tail_refcounted(page);
+}
+
 static inline void get_huge_page_tail(struct page *page)
 {
 	/*
 	 * __split_huge_page_refcount() cannot run
 	 * from under us.
+	 * In turn no need of compound_trans_head here.
 	 */
 	VM_BUG_ON(page_mapcount(page) < 0);
 	VM_BUG_ON(atomic_read(&page->_count) != 0);
-	atomic_inc(&page->_mapcount);
+	if (compound_tail_refcounted(compound_head(page)))
+		atomic_inc(&page->_mapcount);
 }
 
 extern bool __get_page_tail(struct page *page);
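
The comment added above states the precondition for safety: once these
predicates have been sampled by get_page, PageSlab and PageHeadHuge must
not change value until every tail pin has been released.  A standalone
sketch (hypothetical names, ordinary userspace C rather than kernel
code) of the failure mode described in the commit message, with the flag
flipping between the get and the put:

#include <assert.h>
#include <stdbool.h>

static int tail_mapcount;  /* stands in for the tail page's pin count */

static void get_page_tail(bool tail_refcounted)
{
	if (tail_refcounted)       /* THP case: take a per-tail pin */
		tail_mapcount++;
}

static void put_page_tail(bool tail_refcounted)
{
	if (tail_refcounted) {
		tail_mapcount--;
		assert(tail_mapcount >= 0);  /* the underflow being guarded */
	}
}

int main(void)
{
	get_page_tail(false); /* PG_slab set at get time: pin skipped...      */
	put_page_tail(true);  /* ...PG_slab cleared by put time: pin dropped  */
	return 0;             /* never reached: the assert fires at -1        */
}

Compiled with asserts enabled this aborts at the decrement; without them
the counter silently goes negative, which is exactly the _mapcount
underflow the commit message warns about.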

mm/internal.h: 2 additions, 1 deletion

@@ -51,7 +51,8 @@ static inline void __get_page_tail_foll(struct page *page,
 	VM_BUG_ON(page_mapcount(page) < 0);
 	if (get_page_head)
 		atomic_inc(&page->first_page->_count);
-	atomic_inc(&page->_mapcount);
+	if (compound_tail_refcounted(page->first_page))
+		atomic_inc(&page->_mapcount);
 }
 
 /*

mm/swap.c: 27 additions, 6 deletions

@@ -88,8 +88,9 @@ static void put_compound_page(struct page *page)
 
 			/*
 			 * THP can not break up slab pages so avoid taking
-			 * compound_lock(). Slab performs non-atomic bit ops
-			 * on page->flags for better performance. In
+			 * compound_lock() and skip the tail page refcounting
+			 * (in _mapcount) too. Slab performs non-atomic bit
+			 * ops on page->flags for better performance. In
 			 * particular slab_unlock() in slub used to be a hot
 			 * path. It is still hot on arches that do not support
 			 * this_cpu_cmpxchg_double().
@@ -102,7 +103,7 @@ static void put_compound_page(struct page *page)
 			 * PageTail clear after smp_rmb() and we'll treat it
 			 * as a single page.
 			 */
-			if (PageSlab(page_head) || PageHeadHuge(page_head)) {
+			if (!__compound_tail_refcounted(page_head)) {
 				/*
 				 * If "page" is a THP tail, we must read the tail page
 				 * flags after the head page flags. The
@@ -117,10 +118,30 @@
 				 * cannot race here.
 				 */
				VM_BUG_ON(!PageHead(page_head));
-				VM_BUG_ON(page_mapcount(page) <= 0);
-				atomic_dec(&page->_mapcount);
-				if (put_page_testzero(page_head))
+				VM_BUG_ON(page_mapcount(page) != 0);
+				if (put_page_testzero(page_head)) {
+					/*
+					 * If this is the tail of a
+					 * slab compound page, the
+					 * tail pin must not be the
+					 * last reference held on the
+					 * page, because the PG_slab
+					 * cannot be cleared before
+					 * all tail pins (which skips
+					 * the _mapcount tail
+					 * refcounting) have been
+					 * released. For hugetlbfs the
+					 * tail pin may be the last
+					 * reference on the page
+					 * instead, because
+					 * PageHeadHuge will not go
+					 * away until the compound
+					 * page enters the buddy
+					 * allocator.
+					 */
+					VM_BUG_ON(PageSlab(page_head));
 					__put_compound_page(page_head);
+				}
 				return;
 			} else
 				/*
