kasan, slub: fix conflicts with CONFIG_SLAB_FREELIST_HARDENED
CONFIG_SLAB_FREELIST_HARDENED hashes the freelist pointer with the address of
the object where the pointer gets stored.  With tag-based KASAN we don't
account for that when building the freelist, as we call set_freepointer() with
the object pointer untagged.  This patch changes the code to properly
propagate tags throughout the loop.
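
For illustration, the following is a minimal user-space sketch (not kernel
code) of why the store address must be identical when the freelist pointer is
encoded and later decoded.  It assumes the hardening scheme of XOR-ing the
pointer with a per-cache secret and with the address of the slot it is stored
in, as freelist_ptr() does; the set_tag() and harden() helpers and all
constants below are illustrative stand-ins, not kernel functions.

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56

/* Put a software tag in the top byte, roughly what tag-based KASAN does. */
static uint64_t set_tag(uint64_t addr, uint8_t tag)
{
	return (addr & ~((uint64_t)0xff << TAG_SHIFT)) |
	       ((uint64_t)tag << TAG_SHIFT);
}

/* Hardened encoding: pointer ^ per-cache secret ^ address of the slot. */
static uint64_t harden(uint64_t ptr, uint64_t secret, uint64_t slot_addr)
{
	return ptr ^ secret ^ slot_addr;
}

int main(void)
{
	uint64_t secret = 0x5a5a5a5a5a5a5a5aULL;	/* s->random stand-in */
	uint64_t next   = 0xffff000012345678ULL;	/* next free object */
	uint64_t slot   = 0xffff0000deadbee0ULL;	/* where the pointer is stored */
	uint8_t  tag    = 0xab;

	/* Freelist entry written through a tagged slot address... */
	uint64_t stored = harden(next, secret, set_tag(slot, tag));

	/* ...decoding with the untagged address corrupts the top byte. */
	printf("decoded with untagged slot: %#llx\n",
	       (unsigned long long)harden(stored, secret, slot));
	printf("decoded with tagged slot:   %#llx\n",
	       (unsigned long long)harden(stored, secret, set_tag(slot, tag)));
	return 0;
}

In the removed loop below, p came from for_each_object_idx() and was therefore
untagged, while the freelist is later walked through tagged object pointers
returned by setup_object(), so the decode side saw a different slot address
than the encode side.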

Link: http://lkml.kernel.org/r/3df171559c52201376f246bf7ce3184fe21c1dc7.1549921721.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reported-by: Qian Cai <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: Evgeniy Stepanov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
xairy authored and torvalds committed Feb 21, 2019
1 parent a710122 commit 18e5066
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -303,11 +303,6 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 		__p < (__addr) + (__objects) * (__s)->size; \
 		__p += (__s)->size)
 
-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
-	for (__p = fixup_red_left(__s, __addr), __idx = 1; \
-		__idx <= __objects; \
-		__p += (__s)->size, __idx++)
-
 /* Determine object index from a given position */
 static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
@@ -1664,17 +1659,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	shuffle = shuffle_freelist(s, page);
 
 	if (!shuffle) {
-		for_each_object_idx(p, idx, s, start, page->objects) {
-			if (likely(idx < page->objects)) {
-				next = p + s->size;
-				next = setup_object(s, page, next);
-				set_freepointer(s, p, next);
-			} else
-				set_freepointer(s, p, NULL);
-		}
 		start = fixup_red_left(s, start);
 		start = setup_object(s, page, start);
 		page->freelist = start;
+		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+			next = p + s->size;
+			next = setup_object(s, page, next);
+			set_freepointer(s, p, next);
+			p = next;
+		}
+		set_freepointer(s, p, NULL);
 	}
 
 	page->inuse = page->objects;
