mm: cma: adjust address limit to avoid hitting low/high memory boundary
Russell King recently noticed that limiting default CMA region only to low
memory on ARM architecture causes serious memory management issues with
machines having a lot of memory (which is mainly available as high
memory).  More information can be found in the following thread:
http://thread.gmane.org/gmane.linux.ports.arm.kernel/348441/

These two patches remove this limit, letting the kernel put the default CMA
region into high memory when this is possible (i.e. there is enough high
memory available and the architecture-specific DMA limit allows it).

This should solve strange OOM issues on systems with lots of RAM (i.e.
>1GiB) and a large (>256M) CMA area.

This patch (of 2):

Automatically allocated regions should not cross the low/high memory
boundary, because such regions cannot later be correctly initialized, as
they would span two memory zones.  This patch adds a check for this case
and simple code that moves the region to low memory when the automatically
selected address would not let it fit completely into high memory.

Signed-off-by: Marek Szyprowski <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
Cc: Daniel Drake <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Russell King <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mszyprow authored and torvalds committed Oct 10, 2014
1 parent d4932f9 commit f7426b9
Showing 1 changed file with 21 additions and 0 deletions: mm/cma.c
@@ -32,6 +32,7 @@
 #include <linux/slab.h>
 #include <linux/log2.h>
 #include <linux/cma.h>
+#include <linux/highmem.h>
 
 struct cma {
         unsigned long base_pfn;
@@ -163,6 +164,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
                         bool fixed, struct cma **res_cma)
 {
         struct cma *cma;
+        phys_addr_t memblock_end = memblock_end_of_DRAM();
+        phys_addr_t highmem_start = __pa(high_memory);
         int ret = 0;
 
         pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -196,6 +199,24 @@ int __init cma_declare_contiguous(phys_addr_t base,
         if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
                 return -EINVAL;
 
+        /*
+         * adjust limit to avoid crossing low/high memory boundary for
+         * automatically allocated regions
+         */
+        if (((limit == 0 || limit > memblock_end) &&
+             (memblock_end - size < highmem_start &&
+              memblock_end > highmem_start)) ||
+            (!fixed && limit > highmem_start && limit - size < highmem_start)) {
+                limit = highmem_start;
+        }
+
+        if (fixed && base < highmem_start && base+size > highmem_start) {
+                ret = -EINVAL;
+                pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
+                        (unsigned long)base, (unsigned long)highmem_start);
+                goto err;
+        }
+
         /* Reserve memory */
         if (base && fixed) {
                 if (memblock_is_region_reserved(base, size) ||
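To make the limit-clamping rule concrete, here is a minimal userspace sketch of the same condition, using hypothetical values for highmem_start and memblock_end (roughly a 32-bit ARM machine with 2 GiB of RAM and about 760 MiB of low memory). The real kernel code derives these values from __pa(high_memory) and memblock_end_of_DRAM(); clamp_cma_limit() below is an illustrative helper, not part of the patch.

/* Standalone sketch of the limit adjustment added by this patch.
 * The memory layout values are hypothetical; the condition mirrors
 * the hunk above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;

static phys_addr_t clamp_cma_limit(phys_addr_t limit, phys_addr_t size,
                                   bool fixed,
                                   phys_addr_t memblock_end,
                                   phys_addr_t highmem_start)
{
        /* If an automatically placed region would straddle the low/high
         * memory boundary, pull the limit down so the whole region ends
         * up in low memory. */
        if (((limit == 0 || limit > memblock_end) &&
             (memblock_end - size < highmem_start &&
              memblock_end > highmem_start)) ||
            (!fixed && limit > highmem_start && limit - size < highmem_start))
                limit = highmem_start;
        return limit;
}

int main(void)
{
        phys_addr_t highmem_start = 760ULL << 20;   /* ~760 MiB of low memory */
        phys_addr_t memblock_end  = 2048ULL << 20;  /* 2 GiB of RAM */

        /* 1.5 GiB region with no caller-imposed limit: only ~1.25 GiB of
         * high memory exists, so top-down placement would cross the
         * boundary and the limit is clamped to highmem_start. */
        printf("1.5 GiB region: limit -> %#llx\n",
               (unsigned long long)clamp_cma_limit(0, 1536ULL << 20, false,
                                                   memblock_end, highmem_start));

        /* 256 MiB region: it fits entirely above highmem_start, so the
         * limit stays 0 (unclamped) and the region may live in high memory. */
        printf("256 MiB region: limit -> %#llx\n",
               (unsigned long long)clamp_cma_limit(0, 256ULL << 20, false,
                                                   memblock_end, highmem_start));
        return 0;
}

With these assumed values, the first call prints the low-memory boundary (0x2f800000) while the second prints 0 (no clamp), matching the intent of the patch: oversized regions are pulled below the boundary, while regions that fit entirely in high memory are left there.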
