mm: remove flush_kernel_dcache_page
flush_kernel_dcache_page is a rather confusing interface that implements a
subset of flush_dcache_page by not being able to properly handle page
cache mapped pages.

The only callers left are in the exec code, as all other previous callers
were incorrect: they could have dealt with page cache pages.  Replace the
calls to flush_kernel_dcache_page with calls to flush_dcache_page, which
for all architectures either does exactly the same thing, or contains one
or more of the following (a caller-side sketch follows the list):

 1) an optimization to defer the cache flush for page cache pages not
    mapped into userspace
 2) additional flushing for mapped page cache pages if cache aliases
    are possible
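
A caller-side sketch of the resulting convention, under the assumption
that a page reference is already held; the helper name is made up, and
only the kmap/flush/kunmap ordering mirrors the fs/exec.c hunks below:

#include <linux/highmem.h>
#include <linux/string.h>

/* Hedged sketch of the post-commit calling pattern. */
static void fill_page_and_flush(struct page *page, unsigned int offset,
				const void *src, size_t len)
{
	void *kaddr = kmap_atomic(page);	/* temporary kernel mapping */

	memcpy(kaddr + offset, src, len);	/* kernel stores dirty the page */
	flush_dcache_page(page);		/* keep user mappings coherent */
	kunmap_atomic(kaddr);
}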

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Linus Torvalds <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Cc: Alex Shi <[email protected]>
Cc: Geoff Levand <[email protected]>
Cc: Greentime Hu <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Nick Hu <[email protected]>
Cc: Paul Cercueil <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Bogendoerfer <[email protected]>
Cc: Ulf Hansson <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Christoph Hellwig authored and torvalds committed Sep 3, 2021
1 parent 0e84f5d commit f358afc
Showing 17 changed files with 51 additions and 155 deletions.
86 changes: 37 additions & 49 deletions Documentation/core-api/cachetlb.rst
@@ -271,10 +271,15 @@ maps this page at its virtual address.

``void flush_dcache_page(struct page *page)``

Any time the kernel writes to a page cache page, _OR_
the kernel is about to read from a page cache page and
user space shared/writable mappings of this page potentially
exist, this routine is called.
This routine must be called when:

a) the kernel did write to a page that is in the page cache and/or
in high memory
b) the kernel is about to read from a page cache page and user space
shared/writable mappings of this page potentially exist.  Note
that {get,pin}_user_pages{_fast} already call flush_dcache_page
on any page found in the user address space and thus driver
code rarely needs to take this into account; a reader-side
sketch for this case follows the list.
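
For case (b), a hedged reader-side sketch (the helper name is made up;
the point is only the flush-before-read ordering):

/* Sketch: read data that userspace may have written through a
 * shared+writable mapping of this page cache page. */
static void read_shared_page(struct page *page, void *dst,
			     unsigned int offset, size_t len)
{
	void *kaddr;

	flush_dcache_page(page);	/* observe user stores despite D-cache aliasing */
	kaddr = kmap_atomic(page);
	memcpy(dst, kaddr + offset, len);
	kunmap_atomic(kaddr);
}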

.. note::

@@ -284,38 +289,34 @@ maps this page at its virtual address.
handling vfs symlinks in the page cache need not call
this interface at all.

The phrase "kernel writes to a page cache page" means,
specifically, that the kernel executes store instructions
that dirty data in that page at the page->virtual mapping
of that page. It is important to flush here to handle
D-cache aliasing, to make sure these kernel stores are
visible to user space mappings of that page.

The corollary case is just as important: if there are users
which have shared+writable mappings of this file, we must make
sure that kernel reads of these pages will see the most recent
stores done by the user.

If D-cache aliasing is not an issue, this routine may
simply be defined as a nop on that architecture.

There is a bit set aside in page->flags (PG_arch_1) as
"architecture private". The kernel guarantees that,
for pagecache pages, it will clear this bit when such
a page first enters the pagecache.

This allows these interfaces to be implemented much more
efficiently. It allows one to "defer" (perhaps indefinitely)
the actual flush if there are currently no user processes
mapping this page. See sparc64's flush_dcache_page and
update_mmu_cache implementations for an example of how to go
about doing this.

The idea is, first at flush_dcache_page() time, if
page->mapping->i_mmap is an empty tree, just mark the architecture
private page flag bit. Later, in update_mmu_cache(), a check is
made of this flag bit, and if set the flush is done and the flag
bit is cleared.
The phrase "kernel writes to a page cache page" means, specifically,
that the kernel executes store instructions that dirty data in that
page at the page->virtual mapping of that page. It is important to
flush here to handle D-cache aliasing, to make sure these kernel stores
are visible to user space mappings of that page.

The corollary case is just as important: if there are users which have
shared+writable mappings of this file, we must make sure that kernel
reads of these pages will see the most recent stores done by the user.

If D-cache aliasing is not an issue, this routine may simply be defined
as a nop on that architecture.

There is a bit set aside in page->flags (PG_arch_1) as "architecture
private". The kernel guarantees that, for pagecache pages, it will
clear this bit when such a page first enters the pagecache.

This allows these interfaces to be implemented much more efficiently.
It allows one to "defer" (perhaps indefinitely) the actual flush if
there are currently no user processes mapping this page. See sparc64's
flush_dcache_page and update_mmu_cache implementations for an example
of how to go about doing this.

The idea is, first at flush_dcache_page() time, if page_file_mapping()
returns a mapping, and mapping_mapped on that mapping returns %false,
just mark the architecture private page flag bit. Later, in
update_mmu_cache(), a check is made of this flag bit, and if set the
flush is done and the flag bit is cleared.
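
A condensed sketch of that deferral scheme, loosely modeled on sparc64
(__arch_flush_dcache_page() is an illustrative stand-in for the real
per-architecture flush primitive):

void flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_file_mapping(page);

	/* Page cache page with no user mappings yet: defer the flush
	 * by marking the architecture-private page flag bit. */
	if (mapping && !mapping_mapped(mapping)) {
		set_bit(PG_arch_1, &page->flags);
		return;
	}

	__arch_flush_dcache_page(page_address(page));
}

void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
		      pte_t *ptep)
{
	struct page *page = pte_page(*ptep);

	/* The page is becoming visible to userspace: perform the flush
	 * deferred in flush_dcache_page(), if any. */
	if (test_and_clear_bit(PG_arch_1, &page->flags))
		__arch_flush_dcache_page(page_address(page));
}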

.. important::

@@ -351,19 +352,6 @@ maps this page at its virtual address.
architectures). For incoherent architectures, it should flush
the cache of the page at vmaddr.

``void flush_kernel_dcache_page(struct page *page)``

When the kernel needs to modify a user page it has obtained
with kmap, it calls this function after all modifications are
complete (but before kunmapping it) to bring the underlying
page up to date. It is assumed here that the user has no
incoherent cached copies (i.e. the original page was obtained
from a mechanism like get_user_pages()). The default
implementation is a nop and should remain so on all coherent
architectures. On incoherent architectures, this should flush
the kernel cache for page (using page_address(page)).


``void flush_icache_range(unsigned long start, unsigned long end)``

When the kernel stores into addresses that it will execute
9 changes: 0 additions & 9 deletions Documentation/translations/zh_CN/core-api/cachetlb.rst
@@ -298,15 +298,6 @@ The HyperSparc cpu is one example of a cpu with this attribute.
…called. The default implementation is a nop (and should remain so on all
coherent architectures). On incoherent architectures, it should flush the
cache of the page at vmaddr.

``void flush_kernel_dcache_page(struct page *page)``

When the kernel needs to modify a user page it has obtained with kmap, it
calls this function after all modifications are complete (but before
kunmapping it) to bring the underlying page up to date. It is assumed here
that the user has no incoherent cached copies (i.e. the original page was
obtained from a mechanism like get_user_pages()). The default
implementation is a nop and should remain so on all coherent
architectures. On incoherent architectures, this should flush the kernel
cache for the page (using page_address(page)).


``void flush_icache_range(unsigned long start, unsigned long end)``

This function is called when the kernel stores into addresses that it will
execute (e.g. when loading modules).
4 changes: 1 addition & 3 deletions arch/arm/include/asm/cacheflush.h
@@ -291,6 +291,7 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *);

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
static inline void flush_kernel_vmap_range(void *addr, int size)
{
if ((cache_is_vivt() || cache_is_vipt_aliasing()))
@@ -312,9 +313,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
__flush_anon_page(vma, page, vmaddr);
}

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);

#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages)
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages)

33 changes: 0 additions & 33 deletions arch/arm/mm/flush.c
@@ -345,39 +345,6 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);

/*
* Ensure cache coherency for the kernel mapping of this page. We can
* assume that the page is pinned via kmap.
*
* If the page only exists in the page cache and there are no user
* space mappings, this is a no-op since the page was already marked
* dirty at creation. Otherwise, we need to flush the dirty kernel
* cache lines directly.
*/
void flush_kernel_dcache_page(struct page *page)
{
if (cache_is_vivt() || cache_is_vipt_aliasing()) {
struct address_space *mapping;

mapping = page_mapping_file(page);

if (!mapping || mapping_mapped(mapping)) {
void *addr;

addr = page_address(page);
/*
* kmap_atomic() doesn't set the page virtual
* address for highmem pages, and
* kunmap_atomic() takes care of cache
* flushing already.
*/
if (!IS_ENABLED(CONFIG_HIGHMEM) || addr)
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
}
}
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

/*
* Flush an anonymous page so that users of get_user_pages()
* can safely access the data. The expected sequence is:
6 changes: 0 additions & 6 deletions arch/arm/mm/nommu.c
@@ -166,12 +166,6 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);

void flush_kernel_dcache_page(struct page *page)
{
__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
unsigned long uaddr, void *dst, const void *src,
unsigned long len)
11 changes: 0 additions & 11 deletions arch/csky/abiv1/cacheflush.c
@@ -56,17 +56,6 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
}
}

void flush_kernel_dcache_page(struct page *page)
{
struct address_space *mapping;

mapping = page_mapping_file(page);

if (!mapping || mapping_mapped(mapping))
dcache_wbinv_all();
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
4 changes: 1 addition & 3 deletions arch/csky/abiv1/inc/abi/cacheflush.h
@@ -14,12 +14,10 @@ extern void flush_dcache_page(struct page *);
#define flush_cache_page(vma, page, pfn) cache_wbinv_all()
#define flush_cache_dup_mm(mm) cache_wbinv_all()

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);

#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages)
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages)

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
static inline void flush_kernel_vmap_range(void *addr, int size)
{
dcache_wbinv_all();
8 changes: 1 addition & 7 deletions arch/mips/include/asm/cacheflush.h
@@ -125,13 +125,7 @@ static inline void kunmap_noncoherent(void)
kunmap_coherent();
}

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
static inline void flush_kernel_dcache_page(struct page *page)
{
BUG_ON(cpu_has_dc_aliases && PageHighMem(page));
flush_dcache_page(page);
}

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
/*
* For now flush_kernel_vmap_range and invalidate_kernel_vmap_range both do a
* cache writeback and invalidate operation.
3 changes: 1 addition & 2 deletions arch/nds32/include/asm/cacheflush.h
@@ -36,8 +36,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
void flush_anon_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr);

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
void flush_kernel_dcache_page(struct page *page);
#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
void flush_kernel_vmap_range(void *addr, int size);
void invalidate_kernel_vmap_range(void *addr, int size);
#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&(mapping)->i_pages)
9 changes: 0 additions & 9 deletions arch/nds32/mm/cacheflush.c
@@ -318,15 +318,6 @@ void flush_anon_page(struct vm_area_struct *vma,
local_irq_restore(flags);
}

void flush_kernel_dcache_page(struct page *page)
{
unsigned long flags;
local_irq_save(flags);
cpu_dcache_wbinval_page((unsigned long)page_address(page));
local_irq_restore(flags);
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

void flush_kernel_vmap_range(void *addr, int size)
{
unsigned long flags;
8 changes: 2 additions & 6 deletions arch/parisc/include/asm/cacheflush.h
@@ -36,16 +36,12 @@ void flush_cache_all_local(void);
void flush_cache_all(void);
void flush_cache_mm(struct mm_struct *mm);

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
void flush_kernel_dcache_page_addr(void *addr);
static inline void flush_kernel_dcache_page(struct page *page)
{
flush_kernel_dcache_page_addr(page_address(page));
}

#define flush_kernel_dcache_range(start,size) \
flush_kernel_dcache_range_asm((start), (start)+(size));

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
void flush_kernel_vmap_range(void *vaddr, int size);
void invalidate_kernel_vmap_range(void *vaddr, int size);

@@ -59,7 +55,7 @@ extern void flush_dcache_page(struct page *page);
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages)

#define flush_icache_page(vma,page) do { \
flush_kernel_dcache_page(page); \
flush_kernel_dcache_page_addr(page_address(page)); \
flush_kernel_icache_page(page_address(page)); \
} while (0)

3 changes: 1 addition & 2 deletions arch/parisc/kernel/cache.c
@@ -334,7 +334,7 @@ void flush_dcache_page(struct page *page)
return;
}

flush_kernel_dcache_page(page);
flush_kernel_dcache_page_addr(page_address(page));

if (!mapping)
return;
@@ -375,7 +375,6 @@ EXPORT_SYMBOL(flush_dcache_page);

/* Defined in arch/parisc/kernel/pacache.S */
EXPORT_SYMBOL(flush_kernel_dcache_range_asm);
EXPORT_SYMBOL(flush_kernel_dcache_page_asm);
EXPORT_SYMBOL(flush_data_cache_local);
EXPORT_SYMBOL(flush_kernel_icache_range_asm);

8 changes: 2 additions & 6 deletions arch/sh/include/asm/cacheflush.h
@@ -63,6 +63,8 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
if (boot_cpu_data.dcache.n_aliases && PageAnon(page))
__flush_anon_page(page, vmaddr);
}

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
static inline void flush_kernel_vmap_range(void *addr, int size)
{
__flush_wback_region(addr, size);
@@ -72,12 +74,6 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
__flush_invalidate_region(addr, size);
}

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
static inline void flush_kernel_dcache_page(struct page *page)
{
flush_dcache_page(page);
}

extern void copy_to_user_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr, void *dst, const void *src,
unsigned long len);
2 changes: 1 addition & 1 deletion block/blk-map.c
@@ -309,7 +309,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,

static void bio_invalidate_vmalloc_pages(struct bio *bio)
{
#ifdef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
#ifdef ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE
if (bio->bi_private && !op_is_write(bio_op(bio))) {
unsigned long i, len = 0;

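
For context on this guard: flush_kernel_vmap_range() and
invalidate_kernel_vmap_range() bracket device I/O to vmalloc()ed buffers
whose vmap alias may hold dirty or stale cache lines. A hedged sketch of
the read-side pattern (do_device_read() is a made-up stand-in for the
actual I/O submission):

#include <linux/highmem.h>
#include <linux/vmalloc.h>

/* Sketch: device DMA into a vmalloc()ed buffer on an architecture
 * with aliasing caches. */
static int read_into_vmalloc_buffer(size_t len)
{
	void *buf = vmalloc(len);

	if (!buf)
		return -ENOMEM;

	flush_kernel_vmap_range(buf, len);	/* write back dirty lines before I/O */
	do_device_read(buf, len);		/* device writes the physical pages */
	invalidate_kernel_vmap_range(buf, len);	/* drop stale lines seen via the alias */

	vfree(buf);
	return 0;
}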
6 changes: 3 additions & 3 deletions fs/exec.c
@@ -574,7 +574,7 @@ static int copy_strings(int argc, struct user_arg_ptr argv,
}

if (kmapped_page) {
flush_kernel_dcache_page(kmapped_page);
flush_dcache_page(kmapped_page);
kunmap(kmapped_page);
put_arg_page(kmapped_page);
}
@@ -592,7 +592,7 @@ static int copy_strings(int argc, struct user_arg_ptr argv,
ret = 0;
out:
if (kmapped_page) {
flush_kernel_dcache_page(kmapped_page);
flush_dcache_page(kmapped_page);
kunmap(kmapped_page);
put_arg_page(kmapped_page);
}
@@ -634,7 +634,7 @@ int copy_string_kernel(const char *arg, struct linux_binprm *bprm)
kaddr = kmap_atomic(page);
flush_arg_page(bprm, pos & PAGE_MASK, page);
memcpy(kaddr + offset_in_page(pos), arg, bytes_to_copy);
flush_kernel_dcache_page(page);
flush_dcache_page(page);
kunmap_atomic(kaddr);
put_arg_page(page);
}
5 changes: 1 addition & 4 deletions include/linux/highmem.h
@@ -130,10 +130,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
}
#endif

#ifndef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
static inline void flush_kernel_dcache_page(struct page *page)
{
}
#ifndef ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE
static inline void flush_kernel_vmap_range(void *vaddr, int size)
{
}
1 change: 0 additions & 1 deletion tools/testing/scatterlist/linux/mm.h
@@ -127,7 +127,6 @@ kmalloc_array(unsigned int n, unsigned int size, unsigned int flags)
#define kmemleak_free(a)

#define PageSlab(p) (0)
#define flush_kernel_dcache_page(p)

#define MAX_ERRNO 4095

