[PATCH] fix free swap cache latency
Lee Revell reported 28ms latency when a process with lots of swapped memory
exits.

2.6.15 introduced a latency regression when unmapping: in accounting the
zap_work latency breaker, pte_none counted 1, pte_present PAGE_SIZE, but a
swap entry counted nothing at all.  We think of pages present as the slow
case, but Lee's trace shows that free_swap_and_cache's radix tree lookup
can do a lot of work - and we could have been doing it many thousands of
times without a latency break.
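
For reference, here is a simplified sketch (not the actual kernel source;
locking, TLB gathering and the zap_details cases are omitted) of how
zap_pte_range()'s loop accounted its budget in 2.6.15, before this patch.
The caller, unmap_vmas(), hands in a *zap_work budget and breaks out
periodically so it can reschedule; note that the swap-entry path at the
bottom charges nothing:

	do {
		pte_t ptent = *pte;

		if (pte_none(ptent)) {
			(*zap_work)--;		/* empty pte: counts just 1 */
			continue;
		}
		if (pte_present(ptent)) {
			(*zap_work) -= PAGE_SIZE;	/* the "slow" case */
			/* ... unmap and free the present page ... */
			continue;
		}
		/*
		 * Swap entry: nothing charged before this patch, yet
		 * free_swap_and_cache() does a radix tree lookup, so
		 * thousands of these could run without a latency break.
		 */
		free_swap_and_cache(pte_to_swp_entry(ptent));
		pte_clear_full(mm, addr, pte, tlb->fullmm);
	} while (pte++, addr += PAGE_SIZE, (addr != end && *zap_work > 0));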

Move the zap_work update up to account swap entries like pages present.
This also charges non-linear pte_file entries, and the swap entries that
unmap_mapping_range skips over, by the same amount even though they're
quick: but neither of those cases deserves complicating the code (and
they're treated no worse than they were in 2.6.14).

Signed-off-by: Hugh Dickins <[email protected]>
Acked-by: Nick Piggin <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Hugh Dickins authored and Linus Torvalds committed Mar 17, 2006
1 parent 7670f02 commit 6f5e6b9
 mm/memory.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -623,11 +623,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				(*zap_work)--;
 				continue;
 			}
+
+			(*zap_work) -= PAGE_SIZE;
+
 			if (pte_present(ptent)) {
 				struct page *page;
 
-				(*zap_work) -= PAGE_SIZE;
-
 				page = vm_normal_page(vma, addr, ptent);
 				if (unlikely(details) && page) {
 					/*
