kasan: don't assume percpu shadow allocations will succeed
syzkaller and the fault injector showed that I was wrong to assume that
we could ignore percpu shadow allocation failures.

Handle failures properly.  Merge all the allocated areas back into the
free list and release the shadow, then clean up and return NULL.  The
shadow is released unconditionally, which relies upon the fact that the
release function is able to tolerate pages not being present.

Also clean up shadows in the recovery path - currently they are not
released, which leaks a bit of memory.
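
In outline, the fix turns "populate and hope" into "populate or unwind
everything". The sketch below is a self-contained user-space analogue of
that pattern (illustrative only: populate_shadow() and release_shadow()
are hypothetical stand-ins, not the kernel API; the real change is in the
diff below). The key property is the one the commit relies on: the
release step runs for every area, populated or not, so it must tolerate
areas that were never set up.

/*
 * User-space analogue of the error-handling pattern in this commit.
 * populate_shadow() and release_shadow() are hypothetical stand-ins
 * for kasan_populate_vmalloc() and kasan_release_vmalloc().
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_AREAS 4

/* Stand-in for kasan_populate_vmalloc(): may fail, returning NULL. */
static void *populate_shadow(int area)
{
	if (area == 2)			/* simulate an allocation failure */
		return NULL;
	return malloc(16);
}

/*
 * Stand-in for kasan_release_vmalloc(): must tolerate areas that were
 * never populated. Here that works because free(NULL) is a no-op.
 */
static void release_shadow(void *shadow)
{
	free(shadow);
}

int main(void)
{
	void *shadow[NR_AREAS] = { NULL };
	int area;

	for (area = 0; area < NR_AREAS; area++) {
		shadow[area] = populate_shadow(area);
		if (!shadow[area])
			goto err_free_shadow;
	}
	puts("all shadow areas populated");
	return 0;

err_free_shadow:
	/*
	 * Release unconditionally: this covers both the areas that were
	 * populated and the ones that never were.
	 */
	for (area = 0; area < NR_AREAS; area++)
		release_shadow(shadow[area]);
	fputs("shadow population failed, cleaned up\n", stderr);
	return 1;
}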

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 3c5c3cf ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Daniel Axtens <[email protected]>
Reported-by: [email protected]
Reported-by: [email protected]
Reviewed-by: Andrey Ryabinin <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Qian Cai <[email protected]>
Cc: Uladzislau Rezki (Sony) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
daxtens authored and torvalds committed Dec 18, 2019
1 parent e218f1c commit 253a496
Showing 1 changed file with 38 additions and 10 deletions.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3288,7 +3288,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	struct vmap_area **vas, *va;
 	struct vm_struct **vms;
 	int area, area2, last_area, term_area;
-	unsigned long base, start, size, end, last_end;
+	unsigned long base, start, size, end, last_end, orig_start, orig_end;
 	bool purged = false;
 	enum fit_type type;
 
@@ -3418,6 +3418,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 
 	spin_unlock(&free_vmap_area_lock);
 
+	/* populate the kasan shadow space */
+	for (area = 0; area < nr_vms; area++) {
+		if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
+			goto err_free_shadow;
+
+		kasan_unpoison_vmalloc((void *)vas[area]->va_start,
+				       sizes[area]);
+	}
+
 	/* insert all vm's */
 	spin_lock(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
@@ -3428,13 +3437,6 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 	spin_unlock(&vmap_area_lock);
 
-	/* populate the shadow space outside of the lock */
-	for (area = 0; area < nr_vms; area++) {
-		/* assume success here */
-		kasan_populate_vmalloc(vas[area]->va_start, sizes[area]);
-		kasan_unpoison_vmalloc((void *)vms[area]->addr, sizes[area]);
-	}
-
 	kfree(vas);
 	return vms;
 
@@ -3446,8 +3448,12 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	 * and when pcpu_get_vm_areas() is success.
 	 */
 	while (area--) {
-		merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
-				       &free_vmap_area_list);
+		orig_start = vas[area]->va_start;
+		orig_end = vas[area]->va_end;
+		va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+					    &free_vmap_area_list);
+		kasan_release_vmalloc(orig_start, orig_end,
+				      va->va_start, va->va_end);
 		vas[area] = NULL;
 	}
 
@@ -3482,6 +3488,28 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	kfree(vas);
 	kfree(vms);
 	return NULL;
+
+err_free_shadow:
+	spin_lock(&free_vmap_area_lock);
+	/*
+	 * We release all the vmalloc shadows, even the ones for regions that
+	 * hadn't been successfully added. This relies on kasan_release_vmalloc
+	 * being able to tolerate this case.
+	 */
+	for (area = 0; area < nr_vms; area++) {
+		orig_start = vas[area]->va_start;
+		orig_end = vas[area]->va_end;
+		va = merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
+					    &free_vmap_area_list);
+		kasan_release_vmalloc(orig_start, orig_end,
+				      va->va_start, va->va_end);
+		vas[area] = NULL;
+		kfree(vms[area]);
+	}
+	spin_unlock(&free_vmap_area_lock);
+	kfree(vas);
+	kfree(vms);
+	return NULL;
 }
 
 /**
