mm: munlock: remove redundant get_page/put_page pair on the fast path
The performance of the fast path in munlock_vma_range() can be further
improved by avoiding atomic ops of a redundant get_page()/put_page() pair.

When calling get_page() during page isolation, we already have the pin
from follow_page_mask().  This pin will then be returned by
__pagevec_lru_add(), after which we do not reference the pages anymore.

After this patch, an 8% speedup was measured for munlocking a 56GB large
memory area with THP disabled.

Signed-off-by: Vlastimil Babka <[email protected]>
Reviewed-by: Jörn Engel <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
tehcaster authored and torvalds committed Sep 11, 2013
1 parent 56afe47 commit 5b40998
1 changed file: mm/mlock.c (14 additions, 12 deletions)
```diff
@@ -303,8 +303,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		if (PageLRU(page)) {
 			lruvec = mem_cgroup_page_lruvec(page, zone);
 			lru = page_lru(page);
-
-			get_page(page);
+			/*
+			 * We already have pin from follow_page_mask()
+			 * so we can spare the get_page() here.
+			 */
 			ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, lru);
 		} else {
@@ -336,25 +338,25 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 			lock_page(page);
 			if (!__putback_lru_fast_prepare(page, &pvec_putback,
 					&pgrescued)) {
-				/* Slow path */
+				/*
+				 * Slow path. We don't want to lose the last
+				 * pin before unlock_page()
+				 */
+				get_page(page); /* for putback_lru_page() */
 				__munlock_isolated_page(page);
 				unlock_page(page);
+				put_page(page); /* from follow_page_mask() */
 			}
 		}
 	}
 
-	/* Phase 3: page putback for pages that qualified for the fast path */
+	/*
+	 * Phase 3: page putback for pages that qualified for the fast path
+	 * This will also call put_page() to return pin from follow_page_mask()
+	 */
 	if (pagevec_count(&pvec_putback))
 		__putback_lru_fast(&pvec_putback, pgrescued);
 
-	/* Phase 4: put_page to return pin from follow_page_mask() */
-	for (i = 0; i < nr; i++) {
-		struct page *page = pvec->pages[i];
-
-		if (page)
-			put_page(page);
-	}
-
 	pagevec_reinit(pvec);
 }
```
