Merge branch 'akpm' (patches from Andrew)
Merge updates from Andrew Morton:

 - a few misc bits

 - ocfs2 updates

 - almost all of MM

* emailed patches from Andrew Morton <[email protected]>: (131 commits)
  memory hotplug: fix comments when adding section
  mm: make alloc_node_mem_map a void call if we don't have CONFIG_FLAT_NODE_MEM_MAP
  mm: simplify nodemask printing
  mm,oom_reaper: remove pointless kthread_run() error check
  mm/page_ext.c: check if page_ext is not prepared
  writeback: remove unused function parameter
  mm: do not rely on preempt_count in print_vma_addr
  mm, sparse: do not swamp log with huge vmemmap allocation failures
  mm/hmm: remove redundant variable align_end
  mm/list_lru.c: mark expected switch fall-through
  mm/shmem.c: mark expected switch fall-through
  mm/page_alloc.c: broken deferred calculation
  mm: don't warn about allocations which stall for too long
  fs: fuse: account fuse_inode slab memory as reclaimable
  mm, page_alloc: fix potential false positive in __zone_watermark_ok
  mm: mlock: remove lru_add_drain_all()
  mm, sysctl: make NUMA stats configurable
  shmem: convert shmem_init_inodecache() to void
  Unify migrate_pages and move_pages access checks
  mm, pagevec: rename pagevec drained field
  ...
torvalds committed Nov 16, 2017
2 parents 6363b3f + 1b7176a commit 7c225c6
Showing 250 changed files with 2,276 additions and 4,084 deletions.
7 changes: 0 additions & 7 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -1864,13 +1864,6 @@
Built with CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y,
the default is off.

kmemcheck= [X86] Boot-time kmemcheck enable/disable/one-shot mode
Valid arguments: 0, 1, 2
kmemcheck=0 (disabled)
kmemcheck=1 (enabled)
kmemcheck=2 (one-shot mode)
Default: 2 (one-shot mode)

kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
Default is 0 (don't ignore, but inject #GP)

1 change: 0 additions & 1 deletion Documentation/dev-tools/index.rst
@@ -21,7 +21,6 @@ whole; patches welcome!
kasan
ubsan
kmemleak
kmemcheck
gdb-kernel-debugging
kgdb
kselftest
733 changes: 0 additions & 733 deletions Documentation/dev-tools/kmemcheck.rst

This file was deleted.

1 change: 0 additions & 1 deletion Documentation/filesystems/proc.txt
@@ -250,7 +250,6 @@ Table 1-2: Contents of the status files (as of 4.8)
VmExe size of text segment
VmLib size of shared library code
VmPTE size of page table entries
VmPMD size of second level page tables
VmSwap amount of swap used by anonymous private data
(shmem swap usage is not included)
HugetlbPages size of hugetlb memory portions
25 changes: 24 additions & 1 deletion Documentation/sysctl/vm.txt
@@ -58,6 +58,7 @@ Currently, these files are in /proc/sys/vm:
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
@@ -157,6 +158,10 @@ Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.

Note: dirty_bytes must also be set to a value greater than
dirty_background_bytes, or greater than the amount of memory corresponding to
dirty_background_ratio.

==============================================================

dirty_expire_centisecs
@@ -176,6 +181,9 @@ generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.

Note: dirty_ratio must be set to a value greater than dirty_background_ratio,
or greater than the ratio corresponding to dirty_background_bytes.

==============================================================

dirty_writeback_centisecs
@@ -622,7 +630,7 @@ oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, nr_ptes, nr_pmds, swapents, oom_score_adj
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name. This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.
@@ -792,6 +800,21 @@ with no ill effects: errors and warnings on these stats are suppressed.)

==============================================================

numa_stat

This interface allows runtime configuration of NUMA statistics.

When page allocation performance becomes a bottleneck and you can tolerate
some possible tool breakage and decreased NUMA counter precision, you can
do:
echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do:
echo 1 > /proc/sys/vm/numa_stat
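
As an illustration only, here is a minimal C sketch of flipping this knob from
a program; it assumes a kernel built with CONFIG_NUMA and a caller privileged
to write /proc/sys/vm/numa_stat (the echo commands above are the usual way):

	/* numa_stat_toggle.c -- sketch: disable exact NUMA counters */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/numa_stat", "w");

		if (!f) {
			perror("/proc/sys/vm/numa_stat");
			return 1;
		}
		fputs("0\n", f);	/* write "1" instead to re-enable them */
		return fclose(f) ? 1 : 0;
	}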

==============================================================

swappiness

This control is used to define how aggressive the kernel will swap
93 changes: 93 additions & 0 deletions Documentation/vm/mmu_notifier.txt
@@ -0,0 +1,93 @@
When do you need to notify inside the page table lock?

When clearing a pte/pmd we are given the choice to notify the event under the
page table lock (the notify variants of *_clear_flush call
mmu_notifier_invalidate_range). But that notification is not necessary in all
cases.

This concerns secondary TLBs (non-CPU TLBs) such as an IOMMU TLB or a device
TLB (when the device uses something like ATS/PASID to have the IOMMU walk the
CPU page tables to access a process virtual address space). There are only
two cases when you need to notify those secondary TLBs while holding the page
table lock when clearing a pte/pmd:

A) the page backing the address is freed before mmu_notifier_invalidate_range_end()
B) a page table entry is updated to point to a new page (COW, write fault
on zero page, __replace_page(), ...)

Case A is obvious: you do not want to take the risk of the device writing to
a page that might now be used by some completely different task.

Case B is more subtle. For correctness it requires the following sequence to
happen:
- take page table lock
- clear page table entry and notify ([pmd/pte]p_huge_clear_flush_notify())
- set page table entry to point to new page

If clearing the page table entry is not followed by a notify before setting
the new pte/pmd value, then you can break the memory model (such as C11 or
C++11) for the device.
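
As a rough sketch of that ordering (simplified from what a COW fault handler
does; variable names and locking details here are illustrative, not the exact
kernel code):

	/* context: mm, vma, address, ptep, pmd, new_page set up by the fault path */
	mmu_notifier_invalidate_range_start(mm, range_start, range_end);

	ptl = pte_lockptr(mm, pmd);
	spin_lock(ptl);
	/*
	 * Clear the old entry and notify the secondary TLB *before* the new
	 * page can become visible; ptep_clear_flush_notify() does both.
	 */
	ptep_clear_flush_notify(vma, address, ptep);
	/* only now is it safe to install the entry for the new page */
	set_pte_at(mm, address, ptep, mk_pte(new_page, vma->vm_page_prot));
	spin_unlock(ptl);

	mmu_notifier_invalidate_range_end(mm, range_start, range_end);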

Consider the following scenario (the device uses a feature similar to ATS/PASID):

Take two addresses addrA and addrB such that |addrA - addrB| >= PAGE_SIZE; we
assume they are write protected for COW (the other cases of B apply too).

[Time N] --------------------------------------------------------------------
CPU-thread-0 {try to write to addrA}
CPU-thread-1 {try to write to addrB}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {read addrA and populate device TLB}
DEV-thread-2 {read addrB and populate device TLB}
[Time N+1] ------------------------------------------------------------------
CPU-thread-0 {COW_step0: {mmu_notifier_invalidate_range_start(addrA)}}
CPU-thread-1 {COW_step0: {mmu_notifier_invalidate_range_start(addrB)}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+2] ------------------------------------------------------------------
CPU-thread-0 {COW_step1: {update page table to point to new page for addrA}}
CPU-thread-1 {COW_step1: {update page table to point to new page for addrB}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+3] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {preempted}
CPU-thread-2 {write to addrA which is a write to new page}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+4] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {preempted}
CPU-thread-2 {}
CPU-thread-3 {write to addrB which is a write to new page}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+5] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {COW_step3: {mmu_notifier_invalidate_range_end(addrB)}}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {}
DEV-thread-2 {}
[Time N+6] ------------------------------------------------------------------
CPU-thread-0 {preempted}
CPU-thread-1 {}
CPU-thread-2 {}
CPU-thread-3 {}
DEV-thread-0 {read addrA from old page}
DEV-thread-2 {read addrB from new page}

So here, because at time N+2 the clearing of the page table entry was not
paired with a notification to invalidate the secondary TLB, the device sees
the new value for addrB before seeing the new value for addrA. This breaks
total memory ordering for the device.

When changing a pte to write protect, or to point to a new write-protected
page with the same content (KSM), it is fine to delay the
mmu_notifier_invalidate_range call to mmu_notifier_invalidate_range_end()
outside the page table lock. This is true even if the thread doing the page
table update is preempted right after releasing the page table lock but
before calling mmu_notifier_invalidate_range_end().
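
A matching sketch for this delayed case (again simplified and illustrative):
the secondary TLB invalidation happens only at the _range_end() call, after
the page table lock has been dropped:

	/* context: mm, vma, address, ptep and ptl set up by the caller */
	pte_t entry;

	mmu_notifier_invalidate_range_start(mm, range_start, range_end);

	spin_lock(ptl);
	entry = ptep_clear_flush(vma, address, ptep);	/* no _notify needed */
	entry = pte_wrprotect(entry);
	set_pte_at(mm, address, ptep, entry);
	spin_unlock(ptl);

	/* the device may keep using the old, writable translation until here */
	mmu_notifier_invalidate_range_end(mm, range_start, range_end);
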
10 changes: 0 additions & 10 deletions MAINTAINERS
@@ -7692,16 +7692,6 @@ F: include/linux/kdb.h
F: include/linux/kgdb.h
F: kernel/debug/

KMEMCHECK
M: Vegard Nossum <[email protected]>
M: Pekka Enberg <[email protected]>
S: Maintained
F: Documentation/dev-tools/kmemcheck.rst
F: arch/x86/include/asm/kmemcheck.h
F: arch/x86/mm/kmemcheck/
F: include/linux/kmemcheck.h
F: mm/kmemcheck.c

KMEMLEAK
M: Catalin Marinas <[email protected]>
S: Maintained
1 change: 0 additions & 1 deletion arch/arm/include/asm/dma-iommu.h
@@ -7,7 +7,6 @@
#include <linux/mm_types.h>
#include <linux/scatterlist.h>
#include <linux/dma-debug.h>
#include <linux/kmemcheck.h>
#include <linux/kref.h>

#define ARM_MAPPING_ERROR (~(dma_addr_t)0x0)
2 changes: 1 addition & 1 deletion arch/arm/include/asm/pgalloc.h
@@ -57,7 +57,7 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
extern pgd_t *pgd_alloc(struct mm_struct *mm);
extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);

#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)

static inline void clean_pte_table(pte_t *pte)
{
2 changes: 1 addition & 1 deletion arch/arm/mm/pgd.c
@@ -141,7 +141,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
pte = pmd_pgtable(*pmd);
pmd_clear(pmd);
pte_free(mm, pte);
atomic_long_dec(&mm->nr_ptes);
mm_dec_nr_ptes(mm);
no_pmd:
pud_clear(pud);
pmd_free(mm, pmd);
2 changes: 1 addition & 1 deletion arch/arm64/Kconfig
@@ -85,7 +85,7 @@ config ARM64
select HAVE_ARCH_BITREVERSE
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
select HAVE_ARCH_KGDB
select HAVE_ARCH_MMAP_RND_BITS
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
2 changes: 1 addition & 1 deletion arch/arm64/include/asm/pgalloc.h
@@ -26,7 +26,7 @@

#define check_pgt_cache() do { } while (0)

#define PGALLOC_GFP (GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO)
#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO)
#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t))

#if CONFIG_PGTABLE_LEVELS > 2