mm/page-writeback.c: do not count anon pages as dirtyable memory
The VM is currently heavily tuned to avoid swapping.  Whether that is
good or bad is a separate discussion, but as long as the VM won't swap
to make room for dirty cache, we cannot consider anonymous pages when
calculating the amount of dirtyable memory, the baseline to which
dirty_background_ratio and dirty_ratio are applied.
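
For context, a rough sketch of how these two ratios are applied to that
baseline (a simplified illustration of the ratio-based path in
global_dirty_limits(); the actual code also handles dirty_bytes,
dirty_background_bytes, highmem, and per-task adjustments):

	/* Simplified sketch only -- not the exact kernel code. */
	unsigned long dirtyable = global_dirtyable_memory();
	unsigned long dirty_thresh = dirtyable * vm_dirty_ratio / 100;
	unsigned long background_thresh = dirtyable * dirty_background_ratio / 100;

Anything counted into the baseline therefore raises both thresholds,
whether or not it can actually be reclaimed to make room for dirty cache.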

A simple workload that occupies a significant share (40+%, depending on
memory layout, storage speeds, etc.) of memory with anon/tmpfs pages and
uses the remainder for a streaming writer demonstrates this problem.  In
that case, the actual cache pages are a small fraction of what is
considered dirtyable overall, which results in a relatively large
portion of the cache pages being dirtied.  As kswapd starts rotating
these, random tasks enter direct reclaim and stall on IO.
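
As a rough illustration with assumed round numbers (not taken from the
report): with 10 GB of RAM, 4 GB of it in anon/tmpfs, and
vm_dirty_ratio=20, the old accounting permits roughly

	0.20 * (free + file + anon) ~= 0.20 * 10 GB = 2 GB

of dirty pages, while only about 6 GB of file cache exists to absorb
them, so up to a third of the cache may be dirty at once; with anon
excluded, the limit drops to about 0.20 * 6 GB ~= 1.2 GB.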

Only consider free pages and file pages dirtyable.

Signed-off-by: Johannes Weiner <[email protected]>
Reported-by: Tejun Heo <[email protected]>
Tested-by: Tejun Heo <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Wu Fengguang <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
hnaz authored and torvalds committed Jan 30, 2014
1 parent a804552 commit a1c3bfb
Showing 4 changed files with 5 additions and 27 deletions.
2 changes: 0 additions & 2 deletions include/linux/vmstat.h
@@ -142,8 +142,6 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 	return x;
 }
 
-extern unsigned long global_reclaimable_pages(void);
-
 #ifdef CONFIG_NUMA
 /*
  * Determine the per node value of a stat item. This function
1 change: 0 additions & 1 deletion mm/internal.h
@@ -83,7 +83,6 @@ extern unsigned long highest_memmap_pfn;
  */
 extern int isolate_lru_page(struct page *page);
 extern void putback_lru_page(struct page *page);
-extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern bool zone_reclaimable(struct zone *zone);
 
 /*
6 changes: 4 additions & 2 deletions mm/page-writeback.c
@@ -205,7 +205,8 @@ static unsigned long zone_dirtyable_memory(struct zone *zone)
 	nr_pages = zone_page_state(zone, NR_FREE_PAGES);
 	nr_pages -= min(nr_pages, zone->dirty_balance_reserve);
 
-	nr_pages += zone_reclaimable_pages(zone);
+	nr_pages += zone_page_state(zone, NR_INACTIVE_FILE);
+	nr_pages += zone_page_state(zone, NR_ACTIVE_FILE);
 
 	return nr_pages;
 }
@@ -258,7 +259,8 @@ static unsigned long global_dirtyable_memory(void)
 	x = global_page_state(NR_FREE_PAGES);
 	x -= min(x, dirty_balance_reserve);
 
-	x += global_reclaimable_pages();
+	x += global_page_state(NR_INACTIVE_FILE);
+	x += global_page_state(NR_ACTIVE_FILE);
 
 	if (!vm_highmem_is_dirtyable)
 		x -= highmem_dirtyable_memory(x);
23 changes: 1 addition & 22 deletions mm/vmscan.c
@@ -147,7 +147,7 @@ static bool global_reclaim(struct scan_control *sc)
 }
 #endif
 
-unsigned long zone_reclaimable_pages(struct zone *zone)
+static unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	int nr;
 
@@ -3315,27 +3315,6 @@ void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx)
 	wake_up_interruptible(&pgdat->kswapd_wait);
 }
 
-/*
- * The reclaimable count would be mostly accurate.
- * The less reclaimable pages may be
- * - mlocked pages, which will be moved to unevictable list when encountered
- * - mapped pages, which may require several travels to be reclaimed
- * - dirty pages, which is not "instantly" reclaimable
- */
-unsigned long global_reclaimable_pages(void)
-{
-	int nr;
-
-	nr = global_page_state(NR_ACTIVE_FILE) +
-	     global_page_state(NR_INACTIVE_FILE);
-
-	if (get_nr_swap_pages() > 0)
-		nr += global_page_state(NR_ACTIVE_ANON) +
-		      global_page_state(NR_INACTIVE_ANON);
-
-	return nr;
-}
-
 #ifdef CONFIG_HIBERNATION
 /*
  * Try to free `nr_to_reclaim' of memory, system-wide, and return the number of
