mm/slob: Drop usage of page->private for storing page-sized allocations
This field was being used to store the allocation size so that it
could be retrieved by ksize(). However, it is bad practice to leave
a page unmarked as a slab page and yet use its fields for special
purposes. There is no need to store the allocated size: ksize() can
simply return PAGE_SIZE << compound_order(page).

Cc: Pekka Enberg <[email protected]>
Cc: Matt Mackall <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Signed-off-by: Ezequiel Garcia <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
ezequielgarcia authored and penberg committed Oct 31, 2012
1 parent 1b4f59e commit 999d879
Showing 1 changed file with 10 additions and 14 deletions.
24 changes: 10 additions & 14 deletions mm/slob.c
@@ -28,9 +28,8 @@
  * from kmalloc are prepended with a 4-byte header with the kmalloc size.
  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
  * alloc_pages() directly, allocating compound pages so the page order
- * does not have to be separately tracked, and also stores the exact
- * allocation size in page->private so that it can be used to accurately
- * provide ksize(). These objects are detected in kfree() because slob_page()
+ * does not have to be separately tracked.
+ * These objects are detected in kfree() because PageSlab()
  * is false for them.
  *
  * SLAB is emulated on top of SLOB by simply calling constructors and
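For orientation, the detection described in the comment above takes roughly this shape in kfree() (a simplified sketch, not a verbatim copy of mm/slob.c; sketch_kfree is a made-up name):

#include <linux/mm.h>
#include <linux/slab.h>

/* Simplified sketch: how a kmalloc'ed object is routed when it is freed. */
static void sketch_kfree(const void *block)
{
	struct page *sp = virt_to_page(block);

	if (!PageSlab(sp)) {
		/* Page-backed object: PG_slab was never set, so just hand
		 * the (possibly compound) pages back to the allocator. */
		put_page(sp);
		return;
	}

	/* Slab-managed object: its 4-byte size header sits just before the
	 * object, and it goes back onto the slob free lists from here. */
}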
@@ -455,11 +454,6 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		if (likely(order))
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
-		if (ret) {
-			struct page *page;
-			page = virt_to_page(ret);
-			page->private = size;
-		}
 
 		trace_kmalloc_node(caller, ret,
 				   size, PAGE_SIZE << order, gfp, node);
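One detail worth spelling out for this hunk: __GFP_COMP is only set when order > 0, yet the size derived later stays correct, because compound_order() returns 0 for a plain page. A sketch of that relationship (illustrative only, made-up helper name, not code from the commit):

#include <linux/mm.h>

/*
 * Sketch only: why skipping __GFP_COMP for order-0 allocations does not
 * break the PAGE_SIZE << compound_order() lookup.
 */
static void sketch_check_order(const void *ret, unsigned int order)
{
	struct page *page = virt_to_page(ret);

	if (order)
		/* __GFP_COMP was set: the compound page records its order,
		 * so the derived size is PAGE_SIZE << order. */
		BUG_ON(compound_order(page) != order);
	else
		/* Plain page: compound_order() is 0, so the derived size
		 * is exactly PAGE_SIZE. */
		BUG_ON(compound_order(page) != 0);
}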
@@ -514,18 +508,20 @@ EXPORT_SYMBOL(kfree);
 size_t ksize(const void *block)
 {
 	struct page *sp;
+	int align;
+	unsigned int *m;
 
 	BUG_ON(!block);
 	if (unlikely(block == ZERO_SIZE_PTR))
 		return 0;
 
 	sp = virt_to_page(block);
-	if (PageSlab(sp)) {
-		int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
-		unsigned int *m = (unsigned int *)(block - align);
-		return SLOB_UNITS(*m) * SLOB_UNIT;
-	} else
-		return sp->private;
+	if (unlikely(!PageSlab(sp)))
+		return PAGE_SIZE << compound_order(sp);
+
+	align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+	m = (unsigned int *)(block - align);
+	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
 EXPORT_SYMBOL(ksize);

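A small usage sketch of the two branches of the new ksize() (illustrative, not part of the commit; the function name is made up and the size comments assume a 4 KiB PAGE_SIZE):

#include <linux/kernel.h>
#include <linux/slab.h>

/* Usage sketch: page-backed vs. slab-managed kmalloc objects. */
static void sketch_ksize_usage(void)
{
	void *big   = kmalloc(PAGE_SIZE + 1, GFP_KERNEL); /* page-backed, order 1 */
	void *small = kmalloc(100, GFP_KERNEL);           /* slab-managed */

	if (big)
		pr_info("big:   %zu\n", ksize(big));   /* 8192 == PAGE_SIZE << 1 */
	if (small)
		pr_info("small: %zu\n", ksize(small)); /* SLOB_UNITS(100) * SLOB_UNIT */

	kfree(big);
	kfree(small);
}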
