mm: fix typos in comments
Fix ~94 single-word typos in locking code comments, plus a few
very obvious grammar mistakes.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Randy Dunlap <[email protected]>
Cc: Bhaskar Chowdhury <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Ingo Molnar authored and torvalds committed May 7, 2021
1 parent fa60ce2 commit f0953a1
Showing 39 changed files with 83 additions and 83 deletions.
2 changes: 1 addition & 1 deletion include/linux/mm.h
@@ -106,7 +106,7 @@ extern int mmap_rnd_compat_bits __read_mostly;
* embedding these tags into addresses that point to these memory regions, and
* checking that the memory and the pointer tags match on memory accesses)
* redefine this macro to strip tags from pointers.
- * It's defined as noop for arcitectures that don't support memory tagging.
+ * It's defined as noop for architectures that don't support memory tagging.
*/
#ifndef untagged_addr
#define untagged_addr(addr) (addr)
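As an aside on the hunk above (an illustrative sketch, not part of this commit): an architecture with top-byte pointer tagging might redefine untagged_addr() roughly as below. The 8-bit tag width and plain mask are assumptions for illustration; arm64's real definition is more involved.

/* Illustrative only: strip an assumed 8-bit top-byte tag so tagged
 * and untagged pointers refer to the same address. */
#include <assert.h>
#include <stdint.h>

#define TAG_SHIFT 56
#define untagged_addr(addr) \
	((uint64_t)(addr) & ((UINT64_C(1) << TAG_SHIFT) - 1))

int main(void)
{
	uint64_t addr   = 0x0000ffff12345678ULL;
	uint64_t tagged = addr | (UINT64_C(0xAB) << TAG_SHIFT);

	assert(untagged_addr(tagged) == addr); /* tag stripped */
	assert(untagged_addr(addr) == addr);   /* noop when untagged */
	return 0;
}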
4 changes: 2 additions & 2 deletions include/linux/vmalloc.h
@@ -33,7 +33,7 @@ struct notifier_block; /* in notifier.h */
*
* If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after
* shadow memory has been mapped. It's used to handle allocation errors so that
- * we don't try to poision shadow on free if it was never allocated.
+ * we don't try to poison shadow on free if it was never allocated.
*
* Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to
* determine which allocations need the module shadow freed.
@@ -43,7 +43,7 @@ struct notifier_block; /* in notifier.h */

/*
* Maximum alignment for ioremap() regions.
- * Can be overriden by arch-specific value.
+ * Can be overridden by arch-specific value.
*/
#ifndef IOREMAP_MAX_ORDER
#define IOREMAP_MAX_ORDER (7 + PAGE_SHIFT) /* 128 pages */
4 changes: 2 additions & 2 deletions mm/balloon_compaction.c
@@ -58,7 +58,7 @@ EXPORT_SYMBOL_GPL(balloon_page_list_enqueue);
/**
* balloon_page_list_dequeue() - removes pages from balloon's page list and
* returns a list of the pages.
- * @b_dev_info: balloon device decriptor where we will grab a page from.
+ * @b_dev_info: balloon device descriptor where we will grab a page from.
* @pages: pointer to the list of pages that would be returned to the caller.
* @n_req_pages: number of requested pages.
*
@@ -157,7 +157,7 @@ EXPORT_SYMBOL_GPL(balloon_page_enqueue);
/*
* balloon_page_dequeue - removes a page from balloon's page list and returns
* its address to allow the driver to release the page.
- * @b_dev_info: balloon device decriptor where we will grab a page from.
+ * @b_dev_info: balloon device descriptor where we will grab a page from.
*
* Driver must call this function to properly dequeue a previously enqueued page
* before definitively releasing it back to the guest system.
4 changes: 2 additions & 2 deletions mm/compaction.c
@@ -2012,8 +2012,8 @@ static unsigned int fragmentation_score_wmark(pg_data_t *pgdat, bool low)
unsigned int wmark_low;

/*
- * Cap the low watermak to avoid excessive compaction
- * activity in case a user sets the proactivess tunable
+ * Cap the low watermark to avoid excessive compaction
+ * activity in case a user sets the proactiveness tunable
* close to 100 (maximum).
*/
wmark_low = max(100U - sysctl_compaction_proactiveness, 5U);
2 changes: 1 addition & 1 deletion mm/filemap.c
@@ -2755,7 +2755,7 @@ unsigned int seek_page_size(struct xa_state *xas, struct page *page)
* entirely memory-based such as tmpfs, and filesystems which support
* unwritten extents.
*
- * Return: The requested offset on successs, or -ENXIO if @whence specifies
+ * Return: The requested offset on success, or -ENXIO if @whence specifies
* SEEK_DATA and there is no data after @start. There is an implicit hole
* after @end - 1, so SEEK_HOLE returns @end if all the bytes between @start
* and @end contain data.
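The kernel-doc in the hunk above describes the userspace-visible SEEK_DATA/SEEK_HOLE contract of lseek(2). A minimal sketch of how a caller sees it (assuming a filesystem that reports holes; the file path is whatever the user supplies):

/* Sketch: probe the first data and hole offsets of a file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2)
		return 1;

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	off_t data = lseek(fd, 0, SEEK_DATA); /* -1/ENXIO: no data after 0 */
	off_t hole = lseek(fd, 0, SEEK_HOLE); /* implicit hole at EOF */

	if (data == (off_t)-1)
		puts("no data after offset 0");
	else
		printf("data at %lld, hole at %lld\n",
		       (long long)data, (long long)hole);
	close(fd);
	return 0;
}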
2 changes: 1 addition & 1 deletion mm/gup.c
@@ -1575,7 +1575,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
* Returns NULL on any kind of failure - a hole must then be inserted into
* the corefile, to preserve alignment with its headers; and also returns
* NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
- * allowing a hole to be left in the corefile to save diskspace.
+ * allowing a hole to be left in the corefile to save disk space.
*
* Called without mmap_lock (takes and releases the mmap_lock by itself).
*/
2 changes: 1 addition & 1 deletion mm/highmem.c
@@ -519,7 +519,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)

/*
* Disable migration so resulting virtual address is stable
- * accross preemption.
+ * across preemption.
*/
migrate_disable();
preempt_disable();
6 changes: 3 additions & 3 deletions mm/huge_memory.c
@@ -1792,8 +1792,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
/*
* Returns
* - 0 if PMD could not be locked
- * - 1 if PMD was locked but protections unchange and TLB flush unnecessary
- * - HPAGE_PMD_NR is protections changed and TLB flush necessary
+ * - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
+ * - HPAGE_PMD_NR if protections changed and TLB flush necessary
*/
int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
@@ -2469,7 +2469,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
xa_lock(&swap_cache->i_pages);
}

-	/* lock lru list/PageCompound, ref freezed by page_ref_freeze */
+	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
lruvec = lock_page_lruvec(head);

for (i = nr - 1; i >= 1; i--) {
6 changes: 3 additions & 3 deletions mm/hugetlb.c
@@ -466,7 +466,7 @@ static int allocate_file_region_entries(struct resv_map *resv,
resv->region_cache_count;

/* At this point, we should have enough entries in the cache
- * for all the existings adds_in_progress. We should only be
+ * for all the existing adds_in_progress. We should only be
* needing to allocate for regions_needed.
*/
VM_BUG_ON(resv->region_cache_count < resv->adds_in_progress);
@@ -5536,8 +5536,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
v_end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);

/*
- * vma need span at least one aligned PUD size and the start,end range
- * must at least partialy within it.
+ * vma needs to span at least one aligned PUD size, and the range
+ * must be at least partially within it.
*/
if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) ||
(*end <= v_start) || (*start >= v_end))
2 changes: 1 addition & 1 deletion mm/internal.h
@@ -334,7 +334,7 @@ static inline bool is_exec_mapping(vm_flags_t flags)
}

/*
- * Stack area - atomatically grows in one direction
+ * Stack area - automatically grows in one direction
*
* VM_GROWSUP / VM_GROWSDOWN VMAs are always private anonymous:
* do_mmap() forbids all other combinations.
8 changes: 4 additions & 4 deletions mm/kasan/kasan.h
@@ -55,9 +55,9 @@ extern bool kasan_flag_async __ro_after_init;
#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */

#ifdef CONFIG_KASAN_HW_TAGS
-#define KASAN_TAG_MIN 0xF0 /* mimimum value for random tags */
+#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */
#else
-#define KASAN_TAG_MIN 0x00 /* mimimum value for random tags */
+#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */
#endif

#ifdef CONFIG_KASAN_GENERIC
@@ -403,7 +403,7 @@ static inline bool kasan_byte_accessible(const void *addr)
#else /* CONFIG_KASAN_HW_TAGS */

/**
- * kasan_poison - mark the memory range as unaccessible
+ * kasan_poison - mark the memory range as inaccessible
* @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
* @size - range size, must be aligned to KASAN_GRANULE_SIZE
* @value - value that's written to metadata for the range
@@ -434,7 +434,7 @@ bool kasan_byte_accessible(const void *addr);

/**
* kasan_poison_last_granule - mark the last granule of the memory range as
- * unaccessible
+ * inaccessible
* @addr - range start address, must be aligned to KASAN_GRANULE_SIZE
* @size - range size
*
4 changes: 2 additions & 2 deletions mm/kasan/quarantine.c
@@ -27,7 +27,7 @@
/* Data structure and operations for quarantine queues. */

/*
- * Each queue is a signle-linked list, which also stores the total size of
+ * Each queue is a single-linked list, which also stores the total size of
* objects inside of it.
*/
struct qlist_head {
@@ -138,7 +138,7 @@ static void qlink_free(struct qlist_node *qlink, struct kmem_cache *cache)
local_irq_save(flags);

/*
- * As the object now gets freed from the quaratine, assume that its
+ * As the object now gets freed from the quarantine, assume that its
* free track is no longer valid.
*/
*(u8 *)kasan_mem_to_shadow(object) = KASAN_KMALLOC_FREE;
4 changes: 2 additions & 2 deletions mm/kasan/shadow.c
@@ -316,7 +316,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
* // rest of vmalloc process <data dependency>
* STORE p, a LOAD shadow(x+99)
*
- * If there is no barrier between the end of unpoisioning the shadow
+ * If there is no barrier between the end of unpoisoning the shadow
* and the store of the result to p, the stores could be committed
* in a different order by CPU#0, and CPU#1 could erroneously observe
* poison in the shadow.
@@ -384,7 +384,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
* How does this work?
* -------------------
*
- * We have a region that is page aligned, labelled as A.
+ * We have a region that is page aligned, labeled as A.
* That might not map onto the shadow in a way that is page-aligned:
*
* start end
2 changes: 1 addition & 1 deletion mm/kfence/report.c
@@ -263,6 +263,6 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
if (panic_on_warn)
panic("panic_on_warn set ...\n");

-	/* We encountered a memory unsafety error, taint the kernel! */
+	/* We encountered a memory safety error, taint the kernel! */
add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
}
2 changes: 1 addition & 1 deletion mm/khugepaged.c
@@ -667,7 +667,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
*
* The page table that maps the page has been already unlinked
* from the page table tree and this process cannot get
- * an additinal pin on the page.
+ * an additional pin on the page.
*
* New pins can come later if the page is shared across fork,
* but not from this process. The other process cannot write to
4 changes: 2 additions & 2 deletions mm/ksm.c
@@ -1065,7 +1065,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
/*
* Ok this is tricky, when get_user_pages_fast() run it doesn't
* take any lock, therefore the check that we are going to make
- * with the pagecount against the mapcount is racey and
+ * with the pagecount against the mapcount is racy and
* O_DIRECT can happen right after the check.
* So we clear the pte and flush the tlb before the check
* this assure us that no O_DIRECT can happen after the check
@@ -1435,7 +1435,7 @@ static struct page *stable_node_dup(struct stable_node **_stable_node_dup,
*/
*_stable_node = found;
/*
- * Just for robustneess as stable_node is
+ * Just for robustness, as stable_node is
* otherwise left as a stable pointer, the
* compiler shall optimize it away at build
* time.
4 changes: 2 additions & 2 deletions mm/madvise.c
@@ -799,7 +799,7 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
if (end > vma->vm_end) {
/*
* Don't fail if end > vma->vm_end. If the old
- * vma was splitted while the mmap_lock was
+ * vma was split while the mmap_lock was
* released the effect of the concurrent
* operation may not cause madvise() to
* have an undefined result. There may be an
@@ -1039,7 +1039,7 @@ process_madvise_behavior_valid(int behavior)
* MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump.
* MADV_COLD - the application is not expected to use this memory soon,
* deactivate pages in this range so that they can be reclaimed
- * easily if memory pressure hanppens.
+ * easily if memory pressure happens.
* MADV_PAGEOUT - the application is not expected to use this memory soon,
* page out the pages in this range immediately.
*
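MADV_COLD and MADV_PAGEOUT, whose descriptions the hunk above touches, are userspace-visible madvise(2) hints (MADV_COLD since Linux 5.4). A minimal sketch of marking a mapping cold — the buffer size and access pattern here are arbitrary assumptions:

/* Sketch: hint that an anonymous mapping is cold so the kernel can
 * deactivate its pages and reclaim them easily under pressure. */
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return EXIT_FAILURE;

	buf[0] = 1;	/* touch so there is something to deactivate */

	if (madvise(buf, len, MADV_COLD) != 0)	/* not needed soon */
		return EXIT_FAILURE;

	munmap(buf, len);
	return EXIT_SUCCESS;
}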
18 changes: 9 additions & 9 deletions mm/memcontrol.c
@@ -215,7 +215,7 @@ enum res_type {
#define MEMFILE_PRIVATE(x, val) ((x) << 16 | (val))
#define MEMFILE_TYPE(val) ((val) >> 16 & 0xffff)
#define MEMFILE_ATTR(val) ((val) & 0xffff)
-/* Used for OOM nofiier */
+/* Used for OOM notifier */
#define OOM_CONTROL (0)

/*
@@ -786,7 +786,7 @@ void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
* __count_memcg_events - account VM events in a cgroup
* @memcg: the memory cgroup
* @idx: the event item
- * @count: the number of events that occured
+ * @count: the number of events that occurred
*/
void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
unsigned long count)
@@ -904,7 +904,7 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
rcu_read_lock();
do {
/*
- * Page cache insertions can happen withou an
+ * Page cache insertions can happen without an
* actual mm context, e.g. during disk probing
* on boot, loopback IO, acct() writes etc.
*/
@@ -1712,7 +1712,7 @@ static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg)
struct mem_cgroup *iter;

/*
- * Be careful about under_oom underflows becase a child memcg
+ * Be careful about under_oom underflows because a child memcg
* could have been added after mem_cgroup_mark_under_oom.
*/
spin_lock(&memcg_oom_lock);
@@ -1884,7 +1884,7 @@ bool mem_cgroup_oom_synchronize(bool handle)
/*
* There is no guarantee that an OOM-lock contender
* sees the wakeups triggered by the OOM kill
- * uncharges. Wake any sleepers explicitely.
+ * uncharges. Wake any sleepers explicitly.
*/
memcg_oom_recover(memcg);
}
@@ -4364,7 +4364,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
* Foreign dirty flushing
*
* There's an inherent mismatch between memcg and writeback. The former
- * trackes ownership per-page while the latter per-inode. This was a
+ * tracks ownership per-page while the latter per-inode. This was a
* deliberate design decision because honoring per-page ownership in the
* writeback path is complicated, may lead to higher CPU and IO overheads
* and deemed unnecessary given that write-sharing an inode across
@@ -4379,9 +4379,9 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
* triggering background writeback. A will be slowed down without a way to
* make writeback of the dirty pages happen.
*
- * Conditions like the above can lead to a cgroup getting repatedly and
+ * Conditions like the above can lead to a cgroup getting repeatedly and
* severely throttled after making some progress after each
- * dirty_expire_interval while the underyling IO device is almost
+ * dirty_expire_interval while the underlying IO device is almost
* completely idle.
*
* Solving this problem completely requires matching the ownership tracking
@@ -5774,7 +5774,7 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
return 0;

/*
- * We are now commited to this value whatever it is. Changes in this
+ * We are now committed to this value whatever it is. Changes in this
* tunable will only affect upcoming migrations, not the current one.
* So we need to save it, and keep it going.
*/
2 changes: 1 addition & 1 deletion mm/memory-failure.c
@@ -75,7 +75,7 @@ static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, boo
if (dissolve_free_huge_page(page) || !take_page_off_buddy(page))
/*
* We could fail to take off the target page from buddy
- * for example due to racy page allocaiton, but that's
+ * for example due to racy page allocation, but that's
* acceptable because soft-offlined page is not broken
* and if someone really want to use it, they should
* take it.
10 changes: 5 additions & 5 deletions mm/memory.c
@@ -3727,7 +3727,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
return ret;

/*
- * Archs like ppc64 need additonal space to store information
+ * Archs like ppc64 need additional space to store information
* related to pte entry. Use the preallocated table for that.
*/
if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) {
@@ -4503,7 +4503,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
}

/**
- * mm_account_fault - Do page fault accountings
+ * mm_account_fault - Do page fault accounting
*
* @regs: the pt_regs struct pointer. When set to NULL, will skip accounting
* of perf event counters, but we'll still do the per-task accounting to
@@ -4512,9 +4512,9 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
* @flags: the fault flags.
* @ret: the fault retcode.
*
- * This will take care of most of the page fault accountings. Meanwhile, it
+ * This will take care of most of the page fault accounting. Meanwhile, it
* will also include the PERF_COUNT_SW_PAGE_FAULTS_[MAJ|MIN] perf counter
- * updates. However note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
+ * updates. However, note that the handling of PERF_COUNT_SW_PAGE_FAULTS should
* still be in per-arch page fault handlers at the entry of page fault.
*/
static inline void mm_account_fault(struct pt_regs *regs,
@@ -4848,7 +4848,7 @@ int follow_phys(struct vm_area_struct *vma,
/**
* generic_access_phys - generic implementation for iomem mmap access
* @vma: the vma to access
- * @addr: userspace addres, not relative offset within @vma
+ * @addr: userspace address, not relative offset within @vma
* @buf: buffer to read/write
* @len: length of transfer
* @write: set to FOLL_WRITE when writing, otherwise reading
4 changes: 2 additions & 2 deletions mm/mempolicy.c
@@ -1867,7 +1867,7 @@ static int apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
* we apply policy when gfp_zone(gfp) = ZONE_MOVABLE only.
*
* policy->v.nodes is intersect with node_states[N_MEMORY].
- * so if the following test faile, it implies
+ * so if the following test fails, it implies
* policy->v.nodes has movable memory only.
*/
if (!nodes_intersects(policy->v.nodes, node_states[N_HIGH_MEMORY]))
@@ -2098,7 +2098,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
*
* If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
* policy. Otherwise, check for intersection between mask and the policy
- * nodemask for 'bind' or 'interleave' policy. For 'perferred' or 'local'
+ * nodemask for 'bind' or 'interleave' policy. For 'preferred' or 'local'
* policy, always return true since it may allocate elsewhere on fallback.
*
* Takes task_lock(tsk) to prevent freeing of its mempolicy.