mm: memcontrol: rewrite charge API
These patches rework memcg charge lifetime to integrate more naturally
with the lifetime of user pages.  This drastically simplifies the code and
reduces charging and uncharging overhead.  The most expensive part of
charging and uncharging is the page_cgroup bit spinlock, which is removed
entirely after this series.

Here are the top-10 profile entries of a stress test that reads a 128G
sparse file on a freshly booted box, without even a dedicated cgroup
(i.e. executing in the root memcg).  Before:

    15.36%              cat  [kernel.kallsyms]   [k] copy_user_generic_string
    13.31%              cat  [kernel.kallsyms]   [k] memset
    11.48%              cat  [kernel.kallsyms]   [k] do_mpage_readpage
     4.23%              cat  [kernel.kallsyms]   [k] get_page_from_freelist
     2.38%              cat  [kernel.kallsyms]   [k] put_page
     2.32%              cat  [kernel.kallsyms]   [k] __mem_cgroup_commit_charge
     2.18%          kswapd0  [kernel.kallsyms]   [k] __mem_cgroup_uncharge_common
     1.92%          kswapd0  [kernel.kallsyms]   [k] shrink_page_list
     1.86%              cat  [kernel.kallsyms]   [k] __radix_tree_lookup
     1.62%              cat  [kernel.kallsyms]   [k] __pagevec_lru_add_fn

After:

    15.67%           cat  [kernel.kallsyms]   [k] copy_user_generic_string
    13.48%           cat  [kernel.kallsyms]   [k] memset
    11.42%           cat  [kernel.kallsyms]   [k] do_mpage_readpage
     3.98%           cat  [kernel.kallsyms]   [k] get_page_from_freelist
     2.46%           cat  [kernel.kallsyms]   [k] put_page
     2.13%       kswapd0  [kernel.kallsyms]   [k] shrink_page_list
     1.88%           cat  [kernel.kallsyms]   [k] __radix_tree_lookup
     1.67%           cat  [kernel.kallsyms]   [k] __pagevec_lru_add_fn
     1.39%       kswapd0  [kernel.kallsyms]   [k] free_pcppages_bulk
     1.30%           cat  [kernel.kallsyms]   [k] kfree

As you can see, the memcg footprint has shrunk quite a bit.

   text    data     bss     dec     hex filename
  37970    9892     400   48262    bc86 mm/memcontrol.o.old
  35239    9892     400   45531    b1db mm/memcontrol.o

This patch (of 4):

The memcg charge API charges pages before they are rmapped - i.e.  have an
actual "type" - and so every callsite needs its own set of charge and
uncharge functions to know what type is being operated on.  Worse,
uncharge has to happen from a context that is still type-specific, rather
than at the end of the page's lifetime with exclusive access, and so
requires a lot of synchronization.

Rewrite the charge API to provide a generic set of try_charge(),
commit_charge() and cancel_charge() transaction operations, much like
what's currently done for swap-in:

  mem_cgroup_try_charge() attempts to reserve a charge, reclaiming
  pages from the memcg if necessary.

  mem_cgroup_commit_charge() commits the page to the charge once it
  has a valid page->mapping and PageAnon() reliably tells the type.

  mem_cgroup_cancel_charge() aborts the transaction.

This reduces the charge API and enables subsequent patches to
drastically simplify uncharging.
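
A minimal sketch of how a callsite is expected to use the new transaction
(illustration only, not code from this series; the setup step and its
error handling are hypothetical placeholders):

    struct mem_cgroup *memcg;
    int error;

    /* Reserve the charge up front; this may reclaim from the memcg. */
    error = mem_cgroup_try_charge(page, mm, GFP_KERNEL, &memcg);
    if (error)
            return error;

    error = install_page(page);             /* hypothetical setup step */
    if (error) {
            /* Setup failed: hand the reservation back. */
            mem_cgroup_cancel_charge(page, memcg);
            return error;
    }

    /* page->mapping is valid now; bind the page to the memcg. */
    mem_cgroup_commit_charge(page, memcg, false);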

As pages need to be committed after rmap is established but before they
are added to the LRU, page_add_new_anon_rmap() must stop doing LRU
additions again.  Revive lru_cache_add_active_or_unevictable().
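
Concretely, the ordering on a newly faulted anonymous page becomes the
following (this mirrors the kernel/events/uprobes.c hunk below; the
surrounding local names are illustrative):

    page_add_new_anon_rmap(page, vma, address);     /* page is PageAnon() now */
    mem_cgroup_commit_charge(page, memcg, false);   /* type known, commit     */
    lru_cache_add_active_or_unevictable(page, vma); /* LRU only after commit  */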

[[email protected]: fix shmem_unuse]
[[email protected]: Add comments on the private use of -EAGAIN]
Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
hnaz authored and torvalds committed Aug 8, 2014
1 parent 4449a51 commit 00501b5
Showing 12 changed files with 338 additions and 395 deletions.
32 changes: 5 additions & 27 deletions Documentation/cgroups/memcg_test.txt
@@ -24,24 +24,7 @@ Please note that implementation details can be changed.

a page/swp_entry may be charged (usage += PAGE_SIZE) at

mem_cgroup_charge_anon()
Called at new page fault and Copy-On-Write.

mem_cgroup_try_charge_swapin()
Called at do_swap_page() (page fault on swap entry) and swapoff.
Followed by charge-commit-cancel protocol. (With swap accounting)
At commit, a charge recorded in swap_cgroup is removed.

mem_cgroup_charge_file()
Called at add_to_page_cache()

mem_cgroup_cache_charge_swapin()
Called at shmem's swapin.

mem_cgroup_prepare_migration()
Called before migration. "extra" charge is done and followed by
charge-commit-cancel protocol.
At commit, charge against oldpage or newpage will be committed.
mem_cgroup_try_charge()

2. Uncharge
a page/swp_entry may be uncharged (usage -= PAGE_SIZE) by
@@ -69,19 +52,14 @@ Please note that implementation details can be changed.
to new page is committed. At failure, charge to old page is committed.

3. charge-commit-cancel
In some case, we can't know this "charge" is valid or not at charging
(because of races).
To handle such case, there are charge-commit-cancel functions.
mem_cgroup_try_charge_XXX
mem_cgroup_commit_charge_XXX
mem_cgroup_cancel_charge_XXX
these are used in swap-in and migration.
Memcg pages are charged in two steps:
mem_cgroup_try_charge()
mem_cgroup_commit_charge() or mem_cgroup_cancel_charge()

At try_charge(), there are no flags to say "this page is charged".
at this point, usage += PAGE_SIZE.

At commit(), the function checks the page should be charged or not
and set flags or avoid charging.(usage -= PAGE_SIZE)
At commit(), the page is associated with the memcg.

At cancel(), simply usage -= PAGE_SIZE.

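As an illustration, not part of the patch: when a page may already be on
an LRU list by the time its type is known (the swap-in case the changelog
compares against), the commit step is passed lrucare=true so it can deal
with the existing LRU linkage.  A hypothetical swap-in style caller, with
local names assumed:

    struct mem_cgroup *memcg;

    if (mem_cgroup_try_charge(page, mm, GFP_KERNEL, &memcg))
            return VM_FAULT_OOM;            /* assumed error handling */

    page_add_anon_rmap(page, vma, address);
    /* A swapcache page may already sit on the LRU: lrucare = true. */
    mem_cgroup_commit_charge(page, memcg, true);
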
53 changes: 14 additions & 39 deletions include/linux/memcontrol.h
@@ -54,28 +54,11 @@ struct mem_cgroup_reclaim_cookie {
};

#ifdef CONFIG_MEMCG
/*
* All "charge" functions with gfp_mask should use GFP_KERNEL or
* (gfp_mask & GFP_RECLAIM_MASK). In current implementatin, memcg doesn't
* alloc memory but reclaims memory from all available zones. So, "where I want
* memory from" bits of gfp_mask has no meaning. So any bits of that field is
* available but adding a rule is better. charge functions' gfp_mask should
* be set to GFP_KERNEL or gfp_mask & GFP_RECLAIM_MASK for avoiding ambiguous
* codes.
* (Of course, if memcg does memory allocation in future, GFP_KERNEL is sane.)
*/

extern int mem_cgroup_charge_anon(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
/* for swap handling */
extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t mask, struct mem_cgroup **memcgp);
extern void mem_cgroup_commit_charge_swapin(struct page *page,
struct mem_cgroup *memcg);
extern void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg);

extern int mem_cgroup_charge_file(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask, struct mem_cgroup **memcgp);
void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
bool lrucare);
void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);

struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *);
@@ -233,30 +216,22 @@ void mem_cgroup_print_bad_page(struct page *page);
#else /* CONFIG_MEMCG */
struct mem_cgroup;

static inline int mem_cgroup_charge_anon(struct page *page,
struct mm_struct *mm, gfp_t gfp_mask)
{
return 0;
}

static inline int mem_cgroup_charge_file(struct page *page,
struct mm_struct *mm, gfp_t gfp_mask)
{
return 0;
}

static inline int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t gfp_mask, struct mem_cgroup **memcgp)
static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask,
struct mem_cgroup **memcgp)
{
*memcgp = NULL;
return 0;
}

static inline void mem_cgroup_commit_charge_swapin(struct page *page,
struct mem_cgroup *memcg)
static inline void mem_cgroup_commit_charge(struct page *page,
struct mem_cgroup *memcg,
bool lrucare)
{
}

static inline void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg)
static inline void mem_cgroup_cancel_charge(struct page *page,
struct mem_cgroup *memcg)
{
}

3 changes: 3 additions & 0 deletions include/linux/swap.h
@@ -320,6 +320,9 @@ extern void swap_setup(void);

extern void add_page_to_unevictable_list(struct page *page);

extern void lru_cache_add_active_or_unevictable(struct page *page,
struct vm_area_struct *vma);

/* linux/mm/vmscan.c */
extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
gfp_t gfp_mask, nodemask_t *mask);
15 changes: 8 additions & 7 deletions kernel/events/uprobes.c
@@ -167,6 +167,11 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
/* For mmu_notifiers */
const unsigned long mmun_start = addr;
const unsigned long mmun_end = addr + PAGE_SIZE;
struct mem_cgroup *memcg;

err = mem_cgroup_try_charge(kpage, vma->vm_mm, GFP_KERNEL, &memcg);
if (err)
return err;

/* For try_to_free_swap() and munlock_vma_page() below */
lock_page(page);
@@ -179,6 +184,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,

get_page(kpage);
page_add_new_anon_rmap(kpage, vma, addr);
mem_cgroup_commit_charge(kpage, memcg, false);
lru_cache_add_active_or_unevictable(kpage, vma);

if (!PageAnon(page)) {
dec_mm_counter(mm, MM_FILEPAGES);
@@ -200,6 +207,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,

err = 0;
unlock:
mem_cgroup_cancel_charge(kpage, memcg);
mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
unlock_page(page);
return err;
@@ -315,18 +323,11 @@ int uprobe_write_opcode(struct mm_struct *mm, unsigned long vaddr,
if (!new_page)
goto put_old;

if (mem_cgroup_charge_anon(new_page, mm, GFP_KERNEL))
goto put_new;

__SetPageUptodate(new_page);
copy_highpage(new_page, old_page);
copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);

ret = __replace_page(vma, vaddr, old_page, new_page);
if (ret)
mem_cgroup_uncharge_page(new_page);

put_new:
page_cache_release(new_page);
put_old:
put_page(old_page);
21 changes: 15 additions & 6 deletions mm/filemap.c
@@ -31,6 +31,7 @@
#include <linux/security.h>
#include <linux/cpuset.h>
#include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */
#include <linux/hugetlb.h>
#include <linux/memcontrol.h>
#include <linux/cleancache.h>
#include <linux/rmap.h>
@@ -548,19 +549,24 @@ static int __add_to_page_cache_locked(struct page *page,
pgoff_t offset, gfp_t gfp_mask,
void **shadowp)
{
int huge = PageHuge(page);
struct mem_cgroup *memcg;
int error;

VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapBacked(page), page);

error = mem_cgroup_charge_file(page, current->mm,
gfp_mask & GFP_RECLAIM_MASK);
if (error)
return error;
if (!huge) {
error = mem_cgroup_try_charge(page, current->mm,
gfp_mask, &memcg);
if (error)
return error;
}

error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
if (error) {
mem_cgroup_uncharge_cache_page(page);
if (!huge)
mem_cgroup_cancel_charge(page, memcg);
return error;
}

@@ -575,13 +581,16 @@ static int __add_to_page_cache_locked(struct page *page,
goto err_insert;
__inc_zone_page_state(page, NR_FILE_PAGES);
spin_unlock_irq(&mapping->tree_lock);
if (!huge)
mem_cgroup_commit_charge(page, memcg, false);
trace_mm_filemap_add_to_page_cache(page);
return 0;
err_insert:
page->mapping = NULL;
/* Leave page->index set: truncation relies upon it */
spin_unlock_irq(&mapping->tree_lock);
mem_cgroup_uncharge_cache_page(page);
if (!huge)
mem_cgroup_cancel_charge(page, memcg);
page_cache_release(page);
return error;
}
(Diffs for the remaining changed files are not shown.)
