mm: memcontrol: Use helpers to read page's memcg data
Patch series "mm: allow mapping accounted kernel pages to userspace", v6.

Currently a non-slab kernel page which has been charged to a memory cgroup
can't be mapped to userspace.  The underlying reason is simple: the
PageKmemcg flag is defined as a page type (like buddy, offline, etc), so it
takes a bit from the page->_mapcount counter.  Pages with a page type set
can't be mapped to userspace.

But in general the kmemcg flag has nothing to do with mapping to
userspace.  It only means that the page has been accounted by the page
allocator, so it has to be properly uncharged on release.

Some bpf maps are mapping the vmalloc-based memory to userspace, and their
memory can't be accounted because of this implementation detail.

This patchset removes this limitation by moving the PageKmemcg flag into
one of the free bits of the page->mem_cgroup pointer.  It also formalizes
accesses to page->mem_cgroup and page->obj_cgroups using new helpers,
adds several checks and removes a couple of obsolete functions.  As a
result, the code becomes more robust, with fewer open-coded bit tricks.
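The pointer-tagging trick the series relies on can be illustrated with a small userspace sketch. Because struct pointers are at least word-aligned, their low bits are always zero and can carry a flag; the names `MEMCG_DATA_OBJCGS`, `tag_objcgs()` and `memcg_from_data()` below are illustrative stand-ins, not the kernel's actual API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative low-bit flag: "this word holds an objcg vector,
 * not a memcg pointer" (hypothetical name). */
#define MEMCG_DATA_OBJCGS 0x1UL

struct mem_cgroup { int id; };

/* Fold the flag into the free low bit of an aligned pointer. */
static uintptr_t tag_objcgs(void *objcgs)
{
	return (uintptr_t)objcgs | MEMCG_DATA_OBJCGS;
}

/* Read side: a set low bit means there is no plain memcg pointer. */
static struct mem_cgroup *memcg_from_data(uintptr_t memcg_data)
{
	if (memcg_data & MEMCG_DATA_OBJCGS)
		return NULL;
	return (struct mem_cgroup *)memcg_data;
}
```

Since every access goes through helpers like these, callers never see the tag bit, which is exactly why the open-coded `page->mem_cgroup` reads have to go.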

This patch (of 4):

Currently there are many open-coded reads of the page->mem_cgroup pointer,
as well as a couple of read helpers, which are barely used.

This creates an obstacle to reusing some bits of the pointer to store
additional information.  In fact, we already do this for slab pages, where
the last bit indicates that the pointer has an attached vector of objcg
pointers instead of a regular memcg pointer.

This commit uses two existing helpers and introduces a new one, converting
all read sides to calls of these helpers:
  struct mem_cgroup *page_memcg(struct page *page);
  struct mem_cgroup *page_memcg_rcu(struct page *page);
  struct mem_cgroup *page_memcg_check(struct page *page);

page_memcg_check() is intended for cases where the page may be a slab page
whose memcg pointer actually points at an objcg vector.  It checks the
lowest bit and, if set, returns NULL.  page_memcg() instead contains a
VM_BUG_ON_PAGE() check to ensure the page is not a slab page.

To make sure nobody uses a direct access, struct page's
mem_cgroup/obj_cgroups is converted to unsigned long memcg_data.

Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Reviewed-by: Shakeel Butt <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lore.kernel.org/bpf/[email protected]
rgushchin authored and Alexei Starovoitov committed Dec 3, 2020
1 parent 9e83f54 commit bcfe06b
Showing 14 changed files with 184 additions and 120 deletions.
2 changes: 1 addition & 1 deletion fs/buffer.c
@@ -657,7 +657,7 @@ int __set_page_dirty_buffers(struct page *page)
} while (bh != head);
}
/*
* Lock out page->mem_cgroup migration to keep PageDirty
* Lock out page's memcg migration to keep PageDirty
* synchronized with per-memcg dirty page counters.
*/
lock_page_memcg(page);
2 changes: 1 addition & 1 deletion fs/iomap/buffered-io.c
@@ -650,7 +650,7 @@ iomap_set_page_dirty(struct page *page)
return !TestSetPageDirty(page);

/*
* Lock out page->mem_cgroup migration to keep PageDirty
* Lock out page's memcg migration to keep PageDirty
* synchronized with per-memcg dirty page counters.
*/
lock_page_memcg(page);
114 changes: 105 additions & 9 deletions include/linux/memcontrol.h
@@ -343,6 +343,79 @@ struct mem_cgroup {

extern struct mem_cgroup *root_mem_cgroup;

/*
* page_memcg - get the memory cgroup associated with a page
* @page: a pointer to the page struct
*
* Returns a pointer to the memory cgroup associated with the page,
* or NULL. This function assumes that the page is known to have a
* proper memory cgroup pointer. It's not safe to call this function
* against some type of pages, e.g. slab pages or ex-slab pages.
*
* Any of the following ensures page and memcg binding stability:
* - the page lock
* - LRU isolation
* - lock_page_memcg()
* - exclusive reference
*/
static inline struct mem_cgroup *page_memcg(struct page *page)
{
VM_BUG_ON_PAGE(PageSlab(page), page);
return (struct mem_cgroup *)page->memcg_data;
}

/*
* page_memcg_rcu - locklessly get the memory cgroup associated with a page
* @page: a pointer to the page struct
*
* Returns a pointer to the memory cgroup associated with the page,
* or NULL. This function assumes that the page is known to have a
* proper memory cgroup pointer. It's not safe to call this function
* against some types of pages, e.g. slab pages or ex-slab pages.
*/
static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
{
VM_BUG_ON_PAGE(PageSlab(page), page);
WARN_ON_ONCE(!rcu_read_lock_held());

return (struct mem_cgroup *)READ_ONCE(page->memcg_data);
}

/*
* page_memcg_check - get the memory cgroup associated with a page
* @page: a pointer to the page struct
*
* Returns a pointer to the memory cgroup associated with the page,
* or NULL. Unlike page_memcg(), this function can take any page
* as an argument. It has to be used in cases when it's not known if a page
* has an associated memory cgroup pointer or an object cgroups vector.
*
* Any of the following ensures page and memcg binding stability:
* - the page lock
* - LRU isolation
* - lock_page_memcg()
* - exclusive reference
*/
static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
/*
* Because page->memcg_data might be changed asynchronously
* for slab pages, READ_ONCE() should be used here.
*/
unsigned long memcg_data = READ_ONCE(page->memcg_data);

/*
* The lowest bit set means that memcg isn't a valid
* memcg pointer, but an obj_cgroups pointer.
* In this case the page is shared and doesn't belong
* to any specific memory cgroup.
*/
if (memcg_data & 0x1UL)
return NULL;

return (struct mem_cgroup *)memcg_data;
}

static __always_inline bool memcg_stat_item_in_bytes(int idx)
{
if (idx == MEMCG_PERCPU_B)
@@ -743,15 +816,19 @@ static inline void mod_memcg_state(struct mem_cgroup *memcg,
static inline void __mod_memcg_page_state(struct page *page,
int idx, int val)
{
if (page->mem_cgroup)
__mod_memcg_state(page->mem_cgroup, idx, val);
struct mem_cgroup *memcg = page_memcg(page);

if (memcg)
__mod_memcg_state(memcg, idx, val);
}

static inline void mod_memcg_page_state(struct page *page,
int idx, int val)
{
if (page->mem_cgroup)
mod_memcg_state(page->mem_cgroup, idx, val);
struct mem_cgroup *memcg = page_memcg(page);

if (memcg)
mod_memcg_state(memcg, idx, val);
}

static inline unsigned long lruvec_page_state(struct lruvec *lruvec,
@@ -834,16 +911,17 @@ static inline void __mod_lruvec_page_state(struct page *page,
enum node_stat_item idx, int val)
{
struct page *head = compound_head(page); /* rmap on tail pages */
struct mem_cgroup *memcg = page_memcg(head);
pg_data_t *pgdat = page_pgdat(page);
struct lruvec *lruvec;

/* Untracked pages have no memcg, no lruvec. Update only the node */
if (!head->mem_cgroup) {
if (!memcg) {
__mod_node_page_state(pgdat, idx, val);
return;
}

lruvec = mem_cgroup_lruvec(head->mem_cgroup, pgdat);
lruvec = mem_cgroup_lruvec(memcg, pgdat);
__mod_lruvec_state(lruvec, idx, val);
}

@@ -878,8 +956,10 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
static inline void count_memcg_page_event(struct page *page,
enum vm_event_item idx)
{
if (page->mem_cgroup)
count_memcg_events(page->mem_cgroup, idx, 1);
struct mem_cgroup *memcg = page_memcg(page);

if (memcg)
count_memcg_events(memcg, idx, 1);
}

static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -941,6 +1021,22 @@ void mem_cgroup_split_huge_fixup(struct page *head);

struct mem_cgroup;

static inline struct mem_cgroup *page_memcg(struct page *page)
{
return NULL;
}

static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
{
WARN_ON_ONCE(!rcu_read_lock_held());
return NULL;
}

static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
return NULL;
}

static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
{
return true;
@@ -1430,7 +1526,7 @@ static inline void mem_cgroup_track_foreign_dirty(struct page *page,
if (mem_cgroup_disabled())
return;

if (unlikely(&page->mem_cgroup->css != wb->memcg_css))
if (unlikely(&page_memcg(page)->css != wb->memcg_css))
mem_cgroup_track_foreign_dirty_slowpath(page, wb);
}

22 changes: 0 additions & 22 deletions include/linux/mm.h
@@ -1484,28 +1484,6 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
#endif
}

#ifdef CONFIG_MEMCG
static inline struct mem_cgroup *page_memcg(struct page *page)
{
return page->mem_cgroup;
}
static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
{
WARN_ON_ONCE(!rcu_read_lock_held());
return READ_ONCE(page->mem_cgroup);
}
#else
static inline struct mem_cgroup *page_memcg(struct page *page)
{
return NULL;
}
static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
{
WARN_ON_ONCE(!rcu_read_lock_held());
return NULL;
}
#endif

/*
* Some inline functions in vmstat.h depend on page_zone()
*/
5 changes: 1 addition & 4 deletions include/linux/mm_types.h
@@ -199,10 +199,7 @@ struct page {
atomic_t _refcount;

#ifdef CONFIG_MEMCG
union {
struct mem_cgroup *mem_cgroup;
struct obj_cgroup **obj_cgroups;
};
unsigned long memcg_data;
#endif

/*
2 changes: 1 addition & 1 deletion include/trace/events/writeback.h
@@ -257,7 +257,7 @@ TRACE_EVENT(track_foreign_dirty,
__entry->ino = inode ? inode->i_ino : 0;
__entry->memcg_id = wb->memcg_css->id;
__entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
__entry->page_cgroup_ino = cgroup_ino(page->mem_cgroup->css.cgroup);
__entry->page_cgroup_ino = cgroup_ino(page_memcg(page)->css.cgroup);
),

TP_printk("bdi %s[%llu]: ino=%lu memcg_id=%u cgroup_ino=%lu page_cgroup_ino=%lu",
7 changes: 4 additions & 3 deletions kernel/fork.c
@@ -404,9 +404,10 @@ static int memcg_charge_kernel_stack(struct task_struct *tsk)

for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
/*
* If memcg_kmem_charge_page() fails, page->mem_cgroup
* pointer is NULL, and memcg_kmem_uncharge_page() in
* free_thread_stack() will ignore this page.
* If memcg_kmem_charge_page() fails, page's
* memory cgroup pointer is NULL, and
* memcg_kmem_uncharge_page() in free_thread_stack()
* will ignore this page.
*/
ret = memcg_kmem_charge_page(vm->pages[i], GFP_KERNEL,
0);
4 changes: 2 additions & 2 deletions mm/debug.c
@@ -182,8 +182,8 @@ void __dump_page(struct page *page, const char *reason)
pr_warn("page dumped because: %s\n", reason);

#ifdef CONFIG_MEMCG
if (!page_poisoned && page->mem_cgroup)
pr_warn("page->mem_cgroup:%px\n", page->mem_cgroup);
if (!page_poisoned && page->memcg_data)
pr_warn("page's memcg:%lx\n", page->memcg_data);
#endif
}

4 changes: 2 additions & 2 deletions mm/huge_memory.c
@@ -470,7 +470,7 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
#ifdef CONFIG_MEMCG
static inline struct deferred_split *get_deferred_split_queue(struct page *page)
{
struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
struct mem_cgroup *memcg = page_memcg(compound_head(page));
struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));

if (memcg)
@@ -2765,7 +2765,7 @@ void deferred_split_huge_page(struct page *page)
{
struct deferred_split *ds_queue = get_deferred_split_queue(page);
#ifdef CONFIG_MEMCG
struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
struct mem_cgroup *memcg = page_memcg(compound_head(page));
#endif
unsigned long flags;
