mm, CMA: change cma_declare_contiguous() to obey coding convention
Conventionally, we put the output parameter at the end of the parameter list and put 'base' ahead of 'size', but cma_declare_contiguous() doesn't follow this convention, so change it.
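
In other words, the parameter order becomes (base, size, limit, alignment, order_per_bit, fixed, res_cma), with the output pointer last. For reference, the before and after of the prototype, as it appears in the mm/cma.c hunk below:

/* before */
int __init cma_declare_contiguous(phys_addr_t size,
            phys_addr_t base, phys_addr_t limit,
            phys_addr_t alignment, unsigned int order_per_bit,
            struct cma **res_cma, bool fixed);

/* after */
int __init cma_declare_contiguous(phys_addr_t base,
            phys_addr_t size, phys_addr_t limit,
            phys_addr_t alignment, unsigned int order_per_bit,
            bool fixed, struct cma **res_cma);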

Additionally, move the cma_areas reference down to the point where it is actually needed.

Signed-off-by: Joonsoo Kim <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
Reviewed-by: Aneesh Kumar K.V <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Gleb Natapov <[email protected]>
Acked-by: Marek Szyprowski <[email protected]>
Tested-by: Marek Szyprowski <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Zhang Yanfei <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
JoonsooKim authored and torvalds committed Aug 7, 2014
1 parent b7155e7 commit c1f733a
Showing 4 changed files with 11 additions and 10 deletions.
arch/powerpc/kvm/book3s_hv_builtin.c: 4 changes (2 additions, 2 deletions)
@@ -185,8 +185,8 @@ void __init kvm_cma_reserve(void)
         align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
         align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-        cma_declare_contiguous(selected_size, 0, 0, align_size,
-                KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, &kvm_cma, false);
+        cma_declare_contiguous(0, selected_size, 0, align_size,
+                KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
     }
 }

drivers/base/dma-contiguous.c: 2 changes (1 addition, 1 deletion)
@@ -165,7 +165,7 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 {
     int ret;
 
-    ret = cma_declare_contiguous(size, base, limit, 0, 0, res_cma, fixed);
+    ret = cma_declare_contiguous(base, size, limit, 0, 0, fixed, res_cma);
     if (ret)
         return ret;
 
include/linux/cma.h: 2 changes (1 addition, 1 deletion)
@@ -21,7 +21,7 @@ extern unsigned long cma_get_size(struct cma *cma);
 extern int __init cma_declare_contiguous(phys_addr_t size,
             phys_addr_t base, phys_addr_t limit,
             phys_addr_t alignment, unsigned int order_per_bit,
-            struct cma **res_cma, bool fixed);
+            bool fixed, struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
mm/cma.c: 13 changes (7 additions, 6 deletions)
@@ -141,13 +141,13 @@ core_initcall(cma_init_reserved_areas);
 
 /**
  * cma_declare_contiguous() - reserve custom contiguous area
- * @size: Size of the reserved area (in bytes),
  * @base: Base address of the reserved area optional, use 0 for any
+ * @size: Size of the reserved area (in bytes),
  * @limit: End address of the reserved memory (optional, 0 for any).
  * @alignment: Alignment for the CMA area, should be power of 2 or zero
  * @order_per_bit: Order of pages represented by one bit on bitmap.
- * @res_cma: Pointer to store the created cma region.
  * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
 *
 * This function reserves memory from early allocator. It should be
 * called by arch specific code once the early allocator (memblock or bootmem)
@@ -157,12 +157,12 @@ core_initcall(cma_init_reserved_areas);
 * If @fixed is true, reserve contiguous area at exactly @base. If false,
 * reserve in range from @base to @limit.
 */
-int __init cma_declare_contiguous(phys_addr_t size,
-            phys_addr_t base, phys_addr_t limit,
+int __init cma_declare_contiguous(phys_addr_t base,
+            phys_addr_t size, phys_addr_t limit,
             phys_addr_t alignment, unsigned int order_per_bit,
-            struct cma **res_cma, bool fixed)
+            bool fixed, struct cma **res_cma)
 {
-    struct cma *cma = &cma_areas[cma_area_count];
+    struct cma *cma;
     int ret = 0;
 
     pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
@@ -218,6 +218,7 @@ int __init cma_declare_contiguous(phys_addr_t size,
     * Each reserved area must be initialised later, when more kernel
     * subsystems (like slab allocator) are available.
     */
+    cma = &cma_areas[cma_area_count];
     cma->base_pfn = PFN_DOWN(base);
     cma->count = size >> PAGE_SHIFT;
     cma->order_per_bit = order_per_bit;
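
The last mm/cma.c hunk moves the cma_areas[] reference down so that it is taken only at the point where the reserved area's entry is actually filled in. A minimal standalone sketch of that pattern follows; the types and names here are hypothetical, for illustration only, and are not the kernel code:

/*
 * Sketch: defer the array-slot reference until the entry is really needed.
 * Hypothetical example, not taken from mm/cma.c.
 */
#define MAX_REGIONS 8

struct region {
    unsigned long base_pfn;
    unsigned long count;
};

static struct region regions[MAX_REGIONS];
static unsigned int region_count;

static int declare_region(unsigned long base_pfn, unsigned long count)
{
    struct region *r;                   /* no slot referenced yet */

    if (!count || region_count == MAX_REGIONS)
        return -1;                      /* early exits never touch regions[] */

    r = &regions[region_count++];       /* take the reference only where it is needed */
    r->base_pfn = base_pfn;
    r->count = count;
    return 0;
}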
