kasan, slub: fix HW_TAGS zeroing with slub_debug
Commit 946fa0d ("mm/slub: extend redzone check to extra allocated
kmalloc space than requested") added precise kmalloc redzone poisoning to
the slub_debug functionality.

However, this commit didn't account for HW_TAGS KASAN fully initializing
the object via its built-in memory initialization feature.  Even though
HW_TAGS KASAN memory initialization contains special memory initialization
handling for when slub_debug is enabled, it does not account for in-object
slub_debug redzones.  As a result, HW_TAGS KASAN can overwrite these
redzones and cause false-positive slub_debug reports.

To fix the issue, avoid HW_TAGS KASAN memory initialization when
slub_debug is enabled altogether.  Implement this by moving the
__slub_debug_enabled check to slab_post_alloc_hook.  Common slab code
seems like a more appropriate place for a slub_debug check anyway.
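
To make the failure mode concrete: HW_TAGS KASAN initializes memory in
16-byte granules, so the initialization size is rounded up past the
requested kmalloc size, and the in-object redzone bytes living in that
rounded-up tail get zeroed along with the object.  A minimal userspace
sketch of the arithmetic (the granule size, redzone pattern, and sizes
are illustrative assumptions, not the kernel's exact values):

#include <stdio.h>
#include <string.h>

#define GRANULE 16      /* HW_TAGS KASAN granule size (MTE) */
#define REDZONE 0xcc    /* illustrative redzone pattern */

int main(void)
{
        unsigned char object[GRANULE];  /* kmalloc(10) from a 16-byte slot */
        size_t orig_size = 10;

        /* slub_debug poisons the unused in-object tail as a redzone */
        memset(object + orig_size, REDZONE, sizeof(object) - orig_size);

        /*
         * Granule-based initialization rounds the size up, so zeroing
         * the object touches the full granule, redzone included...
         */
        size_t init_size = (orig_size + GRANULE - 1) & ~(size_t)(GRANULE - 1);
        memset(object, 0, init_size);

        /* ...and a later redzone check sees corruption. */
        for (size_t i = orig_size; i < sizeof(object); i++)
                if (object[i] != REDZONE)
                        printf("byte %zu: redzone clobbered\n", i);
        return 0;
}

With the fix below, kmalloc caches under slub_debug are zeroed by an
explicit memset of the requested size instead, leaving the tail intact.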

Link: https://lkml.kernel.org/r/678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com
Fixes: 946fa0d ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
Signed-off-by: Andrey Konovalov <[email protected]>
Reported-by: Will Deacon <[email protected]>
Acked-by: Marco Elver <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Feng Tang <[email protected]>
Cc: Hyeonggon Yoo <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: [email protected]
Cc: Pekka Enberg <[email protected]>
Cc: Peter Collingbourne <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
xairy authored and akpm00 committed Jul 8, 2023
1 parent 05c56e7 commit fdb54d9
Showing 2 changed files with 14 additions and 14 deletions.
mm/kasan/kasan.h (0 additions, 12 deletions)

@@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
-	/*
-	 * Explicitly initialize the memory with the precise object size to
-	 * avoid overwriting the slab redzone. This disables initialization in
-	 * the arch code and may thus lead to performance penalty. This penalty
-	 * does not affect production builds, as slab redzones are not enabled
-	 * there.
-	 */
-	if (__slub_debug_enabled() &&
-	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
-		init = false;
-		memzero_explicit((void *)addr, size);
-	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
mm/slab.h (14 additions, 2 deletions)

@@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
@@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	    (s->flags & SLAB_KMALLOC))
 		zero_size = orig_size;
 
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -747,8 +759,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, init);
-		if (p[i] && init && !kasan_has_integrated_init())
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
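
The two flags in the final hunk divide the zeroing work: init still
records whether the caller asked for zeroed memory, while kasan_init
says whether KASAN's integrated initialization is allowed to provide
it.  A standalone model of that decision logic (a sketch whose names
mirror the hunk above; it assumes the allocation succeeded):

#include <stdbool.h>
#include <stdio.h>

/*
 * Model of "who zeroes the object" in slab_post_alloc_hook()
 * after this patch; simplified to a single allocation.
 */
static const char *who_zeroes(bool init, bool slub_debug,
                              bool integrated_init)
{
        bool kasan_init = init && !slub_debug;  /* the patched logic */

        if (!init)
                return "nobody (caller did not ask for zeroed memory)";
        if (kasan_init && integrated_init)
                return "KASAN, while setting memory tags";
        return "the explicit memset(), using the redzone-safe zero_size";
}

int main(void)
{
        /* slub_debug on: the memset zeroes, sized to spare the redzone */
        printf("%s\n", who_zeroes(true, true, true));
        /* production HW_TAGS build: KASAN's integrated init zeroes */
        printf("%s\n", who_zeroes(true, false, true));
        return 0;
}

In the slub_debug case the memset uses the zero_size computed earlier
(the requested size for kmalloc caches), which is what preserves the
precise redzone poisoning introduced by commit 946fa0d.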
