arm64: mte: reset the page tag in page->flags
For compatibility with the other modes, the hardware tag-based KASAN stores
the tag associated with a page in page->flags.  Because of this, the kernel
faults on access when it allocates a page with an initial tag and the user
later changes the tags.

Reset the tag that the kernel associates with a page in all the relevant
places to prevent kernel faults on access.

Note: an alternative to this approach would be to modify page_to_virt().
However, that could end up being racy: if one CPU checks the PG_mte_tagged
bit and decides that the page is not tagged while another CPU maps the same
page with PROT_MTE, so that it becomes tagged, the subsequent kernel access
would fail.

Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com
Signed-off-by: Vincenzo Frascino <[email protected]>
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Vincenzo Frascino <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Branislav Rankov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Evgenii Stepanov <[email protected]>
Cc: Kevin Brodsky <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
fvincenzo authored and torvalds committed Dec 22, 2020
1 parent 85f49ca commit e5b8d92
Showing 4 changed files with 32 additions and 0 deletions.
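
Each of the run-time paths changed below — mte_sync_page_tags() in mte.c, copy_highpage() in copypage.c and mte_restore_tags() in mteswap.c — adds the same three-step pattern: reset the KASAN tag the kernel keeps in page->flags, order that update with smp_wmb(), and only then write the MTE tags in memory (hibernate.c only gains a comment explaining why the reset is not needed there). A minimal kernel-context sketch of the pattern follows; reset_tag_then_write_mte_tags() is a hypothetical helper, and the tag write stands in for whichever of mte_clear_page_tags(), mte_copy_page_tags() or mte_restore_page_tags() the real call site uses.

#include <linux/mm.h>      /* struct page, page_kasan_tag_reset(), page_address() */
#include <asm/barrier.h>   /* smp_wmb() */
#include <asm/mte.h>       /* mte_clear_page_tags() */

/* Hypothetical helper illustrating the pattern added by this commit. */
static void reset_tag_then_write_mte_tags(struct page *page)
{
	/* 1. Reset the KASAN tag stored in page->flags for this page. */
	page_kasan_tag_reset(page);

	/*
	 * 2. Make the new page->flags visible before the tags change, so a
	 *    thread that builds a tagged address from page->flags cannot see
	 *    the old tag paired with the new tags.
	 */
	smp_wmb();

	/* 3. Only now write the MTE tags (cleared here; copied or restored
	 *    at the other call sites). */
	mte_clear_page_tags(page_address(page));
}
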
5 changes: 5 additions & 0 deletions arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
unsigned long pfn = xa_state.xa_index;
struct page *page = pfn_to_online_page(pfn);

/*
* It is not required to invoke page_kasan_tag_reset(page)
* at this point since the tags stored in page->flags are
* already restored.
*/
mte_restore_page_tags(page_address(page), tags);

mte_free_tag_storage(tags);
9 changes: 9 additions & 0 deletions arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
return;
}

page_kasan_tag_reset(page);
/*
* We need smp_wmb() in between setting the flags and clearing the
* tags because if another thread reads page->flags and builds a
* tagged address out of it, there is an actual dependency to the
* memory access, but on the current thread we do not guarantee that
* the new page->flags are visible before the tags were updated.
*/
smp_wmb();
mte_clear_page_tags(page_address(page));
}

9 changes: 9 additions & 0 deletions arch/arm64/mm/copypage.c
@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)

if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
set_bit(PG_mte_tagged, &to->flags);
page_kasan_tag_reset(to);
/*
* We need smp_wmb() in between setting the flags and clearing the
* tags because if another thread reads page->flags and builds a
* tagged address out of it, there is an actual dependency to the
* memory access, but on the current thread we do not guarantee that
* the new page->flags are visible before the tags were updated.
*/
smp_wmb();
mte_copy_page_tags(kto, kfrom);
}
}
9 changes: 9 additions & 0 deletions arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
if (!tags)
return false;

page_kasan_tag_reset(page);
/*
* We need smp_wmb() in between setting the flags and clearing the
* tags because if another thread reads page->flags and builds a
* tagged address out of it, there is an actual dependency to the
* memory access, but on the current thread we do not guarantee that
* the new page->flags are visible before the tags were updated.
*/
smp_wmb();
mte_restore_page_tags(page_address(page), tags);

return true;
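
For context on why the ordering matters (an illustrative sketch, not part of the commit): with hardware tag-based KASAN, arm64's page_to_virt() derives the pointer tag from page->flags, roughly as below, so a kernel access through such an address only succeeds if the tag read from the flags matches the MTE tags currently stored in memory. The helpers used here (page_to_phys(), __va(), page_kasan_tag(), __tag_set()) are real kernel primitives, but the function itself is a simplified stand-in for the actual page_to_virt() definition.

/*
 * Simplified reader side, assuming CONFIG_KASAN_HW_TAGS: the linear-map
 * address handed out for a page carries, in its top byte, the KASAN tag
 * currently stored in page->flags.
 */
static void *page_to_virt_sketch(struct page *page)
{
	void *addr = __va(page_to_phys(page));  /* untagged linear address */
	u8 tag = page_kasan_tag(page);          /* tag kept in page->flags */

	/* Plant the tag in the top byte; MTE checks it against the memory tags. */
	return (void *)__tag_set(addr, tag);
}
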
