slub: tid must be retrieved from the percpu area of the current processor

As Steven Rostedt has pointed out: rescheduling could occur on a
different processor after the determination of the per cpu pointer and
before the tid is retrieved. This could result in allocation from the
wrong node in slab_alloc().

The effect is much more severe in slab_free() where we could free to the
freelist of the wrong page.

The window for something like that occurring is pretty small but it is
possible.
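
[ For illustration only, not part of the patch: a sketch of the window
  in the old slab_alloc() path. The cpu numbers are invented for the
  example; nothing pins the task between the two statements. ]

	c = __this_cpu_ptr(s->cpu_slab);  /* per cpu area of, say, cpu 0 */
	                                  /* <-- reschedule onto cpu 1   */
	tid = c->tid;                     /* tid read from cpu 0's area  */

[ The task now runs on cpu 1 while holding cpu 0's tid, so the later
  freelist accesses and the cmpxchg can mix state from two processors;
  in slab_free() the same window lets the object end up on the freelist
  of the wrong page. ]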

Change-Id: I9e658e97ed2096f658d6e67739764a86151f13e3
Signed-off-by: Christoph Lameter <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
Git-commit: 7cccd80
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Matt Wagantall <[email protected]>
Christoph Lameter authored and Matt Wagantall committed Oct 7, 2013
parent 4291ee7, commit 7c613ee
 mm/slub.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2310,13 +2310,18 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 		return NULL;
 
 redo:
-
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * Preemption is disabled for the retrieval of the tid because that
+	 * must occur from the current processor. We cannot allow rescheduling
+	 * on a different processor between the determination of the pointer
+	 * and the retrieval of the tid.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	/*
@@ -2326,7 +2331,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	object = c->freelist;
 	if (unlikely(!object || !node_match(c, node)))
@@ -2572,10 +2577,11 @@ static __always_inline void slab_free(struct kmem_cache *s,
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succedd.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
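
[ Note, also for illustration: re-enabling preemption before the
  cmpxchg stays safe because the transaction id is verified again by
  the cmpxchg itself. Roughly, the slab_alloc() fast path of this era
  continues with: ]

	if (unlikely(!this_cpu_cmpxchg_double(
			s->cpu_slab->freelist, s->cpu_slab->tid,
			object, tid,
			get_freepointer_safe(s, object), next_tid(tid)))) {
		note_cmpxchg_failure("slab_alloc", s, tid);
		goto redo;
	}

[ If the task migrated to another processor after preempt_enable(),
  that cpu's tid cannot match the tid sampled above (tids are globally
  unique per cpu), the cmpxchg fails, and the operation retries from
  redo. ]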

0 comments on commit 7c613ee

Please sign in to comment.