net: Update alloc frag to reduce get/put page usage and recycle pages
This patch is meant to help improve performance by reducing the number of
locked operations required to allocate a frag on x86 and other platforms.
This is accomplished by using atomic_set operations on the page count
instead of calling get_page and put_page.  It is based on work originally
provided by Eric Dumazet.
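
For reference, a minimal sketch of that scheme. The cache struct, field names, and the NETDEV_PAGECNT_BIAS macro are taken from the patch below; the helper function itself is illustrative and not part of the patch:

	/*
	 * Illustrative helper (not in the patch): pre-charge the page
	 * refcount once with atomic_set(), then account for each fragment
	 * handed out by decrementing a per-CPU pagecnt_bias instead of
	 * taking a locked get_page() per allocation.  With 4K pages and
	 * 64-byte cache lines the bias is 4096 / 64 = 64, so one
	 * atomic_set() stands in for up to 64 get_page() calls.
	 */
	#define NETDEV_PAGECNT_BIAS (PAGE_SIZE / SMP_CACHE_BYTES)

	static void *frag_alloc_sketch(struct netdev_alloc_cache *nc,
				       unsigned int fragsz)
	{
		void *data;

		if (unlikely(!nc->page)) {
			nc->page = alloc_page(GFP_ATOMIC | __GFP_COLD);
			if (unlikely(!nc->page))
				return NULL;
			/* one locked op charges references for many fragments */
			atomic_set(&nc->page->_count, NETDEV_PAGECNT_BIAS);
			nc->pagecnt_bias = NETDEV_PAGECNT_BIAS;
			nc->offset = 0;
		}

		data = page_address(nc->page) + nc->offset;
		nc->offset += fragsz;
		nc->pagecnt_bias--;	/* plain decrement replaces get_page() */
		return data;
	}

The page-full path is omitted here; it is what drives the recycling described next.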

In addition it also helps to reduce memory overhead when using TCP.  This
is done by recycling the page if the only holder of the frame is the
netdev_alloc_frag call itself.  This can occur when skb heads are stolen by
either GRO or TCP and the driver providing the packets is using paged frags
to store all of the data for the packets.
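
When the current page cannot fit another fragment, the recycle decision in the patch below boils down to the following check, shown here out of context for illustration:

	/*
	 * _count started at NETDEV_PAGECNT_BIAS and pagecnt_bias was
	 * decremented once per fragment handed out, so the two are equal
	 * again only when every outstanding fragment has already been
	 * freed via put_page().  Otherwise the unused bias is dropped with
	 * atomic_sub_and_test(); a zero result means the last external
	 * reference vanished in the meantime and the page is still ours.
	 */
	if (nc->offset + fragsz > PAGE_SIZE) {
		if ((atomic_read(&nc->page->_count) == nc->pagecnt_bias) ||
		    atomic_sub_and_test(nc->pagecnt_bias, &nc->page->_count))
			goto recycle;	/* sole holder: re-arm the same page */
		goto refill;		/* still referenced elsewhere: allocate a new page */
	}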

Cc: Eric Dumazet <[email protected]>
Signed-off-by: Alexander Duyck <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Alexander Duyck authored and davem330 committed Jul 13, 2012
1 parent 391e5c2 commit 540eb7b
1 changed file: net/core/skbuff.c (20 additions, 8 deletions)
@@ -296,9 +296,12 @@ EXPORT_SYMBOL(build_skb);
 struct netdev_alloc_cache {
 	struct page *page;
 	unsigned int offset;
+	unsigned int pagecnt_bias;
 };
 static DEFINE_PER_CPU(struct netdev_alloc_cache, netdev_alloc_cache);
 
+#define NETDEV_PAGECNT_BIAS (PAGE_SIZE / SMP_CACHE_BYTES)
+
 /**
  * netdev_alloc_frag - allocate a page fragment
  * @fragsz: fragment size
@@ -317,17 +320,26 @@ void *netdev_alloc_frag(unsigned int fragsz)
 	if (unlikely(!nc->page)) {
 refill:
 		nc->page = alloc_page(GFP_ATOMIC | __GFP_COLD);
+		if (unlikely(!nc->page))
+			goto end;
+recycle:
+		atomic_set(&nc->page->_count, NETDEV_PAGECNT_BIAS);
+		nc->pagecnt_bias = NETDEV_PAGECNT_BIAS;
 		nc->offset = 0;
 	}
-	if (likely(nc->page)) {
-		if (nc->offset + fragsz > PAGE_SIZE) {
-			put_page(nc->page);
-			goto refill;
-		}
-		data = page_address(nc->page) + nc->offset;
-		nc->offset += fragsz;
-		get_page(nc->page);
+
+	if (nc->offset + fragsz > PAGE_SIZE) {
+		/* avoid unnecessary locked operations if possible */
+		if ((atomic_read(&nc->page->_count) == nc->pagecnt_bias) ||
+		    atomic_sub_and_test(nc->pagecnt_bias, &nc->page->_count))
+			goto recycle;
+		goto refill;
 	}
+
+	data = page_address(nc->page) + nc->offset;
+	nc->offset += fragsz;
+	nc->pagecnt_bias--;
+end:
 	local_irq_restore(flags);
 	return data;
 }
