arm64: mm: reserve hugetlb CMA after numa_init
hugetlb_cma_reserve() is called at the wrong place: numa_init() has not been
done yet, so all reserved memory ends up on node 0.

Fixes: cf11e85 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Cc: Matthias Brugger <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
Barry Song authored and willdeacon committed Jun 18, 2020
1 parent b9249cb commit 618e078
Showing 1 changed file with 10 additions and 5 deletions.
15 changes: 10 additions & 5 deletions arch/arm64/mm/init.c
@@ -404,11 +404,6 @@ void __init arm64_memblock_init(void)
 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
 
 	dma_contiguous_reserve(arm64_dma32_phys_limit);
-
-#ifdef CONFIG_ARM64_4K_PAGES
-	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
-#endif
-
 }
 
 void __init bootmem_init(void)
@@ -424,6 +419,16 @@ void __init bootmem_init(void)
 	min_low_pfn = min;
 
 	arm64_numa_init();
+
+	/*
+	 * must be done after arm64_numa_init() which calls numa_init() to
+	 * initialize node_online_map that gets used in hugetlb_cma_reserve()
+	 * while allocating required CMA size across online nodes.
+	 */
+#ifdef CONFIG_ARM64_4K_PAGES
+	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+#endif
+
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
 	 * done after the fixed reservations.
