mm: fix some comment errors
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Quanfa Fu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Quanfa Fu authored and torvalds committed Jan 15, 2022
1 parent 7f0d267 commit 0b8f0d8
Showing 4 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion mm/khugepaged.c
@@ -1303,7 +1303,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		/*
 		 * Record which node the original page is from and save this
 		 * information to khugepaged_node_load[].
-		 * Khupaged will allocate hugepage from the node has the max
+		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
 		node = page_to_nid(page);
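
For context on the khugepaged comment being fixed: while scanning the base pages behind a PMD, khugepaged tallies how many of them live on each NUMA node (khugepaged_node_load[]) and later allocates the collapsed hugepage on the node with the highest tally. A minimal standalone C sketch of that pick-the-max policy, using made-up node counts rather than the kernel's actual scan state:

```c
/*
 * Illustration only, not kernel code: pick the NUMA node that contributed
 * the most base pages, mirroring the policy the comment above describes.
 */
#include <stdio.h>

#define NR_NODES 4

static int pick_target_node(const int hits[NR_NODES])
{
	int node, target = 0;

	for (node = 1; node < NR_NODES; node++)
		if (hits[node] > hits[target])
			target = node;
	return target;
}

int main(void)
{
	/* Hypothetical per-node hit counts gathered while scanning one PMD range. */
	int node_load[NR_NODES] = { 3, 120, 7, 382 };

	printf("collapse target: node %d\n", pick_target_node(node_load));
	return 0;
}
```
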
2 changes: 1 addition & 1 deletion mm/memory-failure.c
@@ -1306,7 +1306,7 @@ static int __get_unpoison_page(struct page *page)
  *
  * get_hwpoison_page() takes a page refcount of an error page to handle memory
  * error on it, after checking that the error page is in a well-defined state
- * (defined as a page-type we can successfully handle the memor error on it,
+ * (defined as a page-type we can successfully handle the memory error on it,
  * such as LRU page and hugetlb page).
  *
  * Memory error handling could be triggered at any time on any type of page,
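
The kernel-doc touched here describes a two-step pattern: first check that the error page is in a state the handler knows how to deal with, then take a refcount on it. A generic userspace sketch of that "check the state, then pin only while the refcount is non-zero" idea, with an invented struct and field names (this is not the kernel's get_hwpoison_page()):

```c
/*
 * Illustration only: refuse pages in states we cannot handle, and take a
 * reference only while the count is still non-zero (get_page_unless_zero
 * style), so a page that has already been freed is never pinned.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	atomic_int refcount;
	bool well_defined;	/* stands in for the "LRU page or hugetlb page" checks */
};

static bool get_if_handleable(struct fake_page *p)
{
	int ref;

	if (!p->well_defined)
		return false;	/* not a page type we can handle */

	ref = atomic_load(&p->refcount);
	while (ref > 0) {
		if (atomic_compare_exchange_weak(&p->refcount, &ref, ref + 1))
			return true;	/* pinned: refcount went ref -> ref + 1 */
	}
	return false;		/* page had no users left */
}

int main(void)
{
	struct fake_page page = { .refcount = 2, .well_defined = true };

	printf("pinned: %d, refcount now %d\n",
	       (int)get_if_handleable(&page), atomic_load(&page.refcount));
	return 0;
}
```
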
2 changes: 1 addition & 1 deletion mm/slab_common.c
@@ -819,7 +819,7 @@ void __init setup_kmalloc_cache_index_table(void)
 
 	if (KMALLOC_MIN_SIZE >= 64) {
 		/*
-		 * The 96 byte size cache is not used if the alignment
+		 * The 96 byte sized cache is not used if the alignment
 		 * is 64 byte.
 		 */
 		for (i = 64 + 8; i <= 96; i += 8)
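
The comment above is about why the 96-byte kmalloc cache is unusable when the minimum allocation size and alignment is 64 bytes: objects laid out at a 96-byte stride are only 32-byte aligned, so requests between 65 and 96 bytes have to round up to the 128-byte cache instead, which is what the loop shown in the hunk arranges in the size-to-cache index table. A standalone sketch of that rounding rule (the bucket list and helper are illustrative, not the kernel's table-driven code):

```c
/*
 * Illustration only: round a request up to the smallest bucket that both
 * fits it and keeps every object MIN_ALIGN-aligned. With MIN_ALIGN == 64
 * the 96-byte bucket is skipped, so 65..96 byte requests land in 128.
 */
#include <stddef.h>
#include <stdio.h>

#define MIN_ALIGN 64	/* plays the role of KMALLOC_MIN_SIZE here */

static const unsigned int buckets[] = { 64, 96, 128, 192, 256 };

static unsigned int bucket_for(unsigned int size)
{
	size_t i;

	for (i = 0; i < sizeof(buckets) / sizeof(buckets[0]); i++) {
		/* 96 % 64 != 0: a 96-byte stride breaks 64-byte alignment. */
		if (buckets[i] % MIN_ALIGN)
			continue;
		if (buckets[i] >= size)
			return buckets[i];
	}
	return 0;	/* bigger requests would use larger caches */
}

int main(void)
{
	printf("72  -> %u\n", bucket_for(72));	/* 128 */
	printf("96  -> %u\n", bucket_for(96));	/* 128 */
	printf("128 -> %u\n", bucket_for(128));	/* 128 */
	printf("192 -> %u\n", bucket_for(192));	/* 192 */
	return 0;
}
```
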
2 changes: 1 addition & 1 deletion mm/swap.c
@@ -882,7 +882,7 @@ void lru_cache_disable(void)
 	 * all online CPUs so any calls of lru_cache_disabled wrapped by
 	 * local_lock or preemption disabled would be ordered by that.
 	 * The atomic operation doesn't need to have stronger ordering
-	 * requirements because that is enforeced by the scheduling
+	 * requirements because that is enforced by the scheduling
 	 * guarantees.
 	 */
 	__lru_add_drain_all(true);
