Merge tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping updates from Christoph Hellwig:

 - add dma-mapping and block layer helpers to take care of IOMMU merging
   for mmc plus subsequent fixups (Yoshihiro Shimoda)

 - rework handling of the pgprot bits for remapping (me)

 - take care of the dma direct infrastructure for swiotlb-xen (me)

 - improve the dma noncoherent remapping infrastructure (me)

 - better defaults for ->mmap, ->get_sgtable and ->get_required_mask
   (me)

 - cleanup mmaping of coherent DMA allocations (me)

 - various misc cleanups (Andy Shevchenko, me)

* tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping: (41 commits)
  mmc: renesas_sdhi_internal_dmac: Add MMC_CAP2_MERGE_CAPABLE
  mmc: queue: Fix bigger segments usage
  arm64: use asm-generic/dma-mapping.h
  swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page
  swiotlb-xen: simplify cache maintainance
  swiotlb-xen: use the same foreign page check everywhere
  swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable
  xen: remove the exports for xen_{create,destroy}_contiguous_region
  xen/arm: remove xen_dma_ops
  xen/arm: simplify dma_cache_maint
  xen/arm: use dev_is_dma_coherent
  xen/arm: consolidate page-coherent.h
  xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintainance
  arm: remove wrappers for the generic dma remap helpers
  dma-mapping: introduce a dma_common_find_pages helper
  dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
  vmalloc: lift the arm flag for coherent mappings to common code
  dma-mapping: provide a better default ->get_required_mask
  dma-mapping: remove the dma_declare_coherent_memory export
  remoteproc: don't allow modular build
  ...
torvalds committed Sep 19, 2019
2 parents c9fe563 + c7d9ecc commit 671df18
Showing 73 changed files with 397 additions and 674 deletions.
19 changes: 8 additions & 11 deletions Documentation/DMA-API.txt
@@ -204,6 +204,14 @@ Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

unsigned long
dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any the DMA address
segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

@@ -595,17 +603,6 @@ For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.

::

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

Part III - Debug drivers use of the DMA-API
-------------------------------------------

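The dma_get_merge_boundary() helper documented in the hunk above is what lets the block layer merge segments on a boundary the IOMMU can actually coalesce. A minimal usage sketch, assuming a hypothetical setup helper; only dma_get_merge_boundary() and blk_queue_virt_boundary() are existing kernel interfaces:

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical queue-setup helper: if the DMA layer reports a merge
 * boundary, expose it to the block layer so requests are split and
 * merged on the boundary the IOMMU can coalesce.
 */
static void example_setup_queue_merging(struct request_queue *q,
					struct device *dma_dev)
{
	unsigned long boundary = dma_get_merge_boundary(dma_dev);

	/* 0 means the device cannot merge DMA address segments at all. */
	if (boundary)
		blk_queue_virt_boundary(q, boundary);
}

The series adds a block-layer helper along these lines so the mmc queue code can opt into IOMMU merging whenever the returned boundary is non-zero.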
2 changes: 1 addition & 1 deletion Documentation/x86/x86_64/boot-options.rst
@@ -230,7 +230,7 @@ IOMMU (input/output memory management unit)
===========================================
Multiple x86-64 PCI-DMA mapping implementations exist, for example:

1. <lib/dma-direct.c>: use no hardware/software IOMMU at all
1. <kernel/dma/direct.c>: use no hardware/software IOMMU at all
(e.g. because you have < 3 GB memory).
Kernel boot message: "PCI-DMA: Disabling IOMMU"

3 changes: 0 additions & 3 deletions arch/Kconfig
@@ -793,9 +793,6 @@ config COMPAT_32BIT_TIME
This is relevant on all 32-bit architectures, and 64-bit architectures
as part of compat syscall handling.

config ARCH_NO_COHERENT_DMA_MMAP
bool

config ARCH_NO_PREEMPT
bool

2 changes: 2 additions & 0 deletions arch/alpha/kernel/pci_iommu.c
@@ -955,5 +955,7 @@ const struct dma_map_ops alpha_pci_ops = {
.map_sg = alpha_pci_map_sg,
.unmap_sg = alpha_pci_unmap_sg,
.dma_supported = alpha_pci_supported,
.mmap = dma_common_mmap,
.get_sgtable = dma_common_get_sgtable,
};
EXPORT_SYMBOL(alpha_pci_ops);
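With .mmap and .get_sgtable wired to the common helpers, as the alpha hunk above now does, drivers can hand coherent allocations to user space through the generic path. A hedged sketch; struct example_buf and example_mmap() are invented for illustration, while dma_mmap_coherent() is the existing DMA API entry point:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Hypothetical per-device buffer state. */
struct example_buf {
	struct device	*dev;
	void		*cpu_addr;
	dma_addr_t	dma_addr;
	size_t		size;
};

/* Map a coherent DMA allocation into a user VMA via the generic helper. */
static int example_mmap(struct example_buf *buf, struct vm_area_struct *vma)
{
	return dma_mmap_coherent(buf->dev, vma, buf->cpu_addr,
				 buf->dma_addr, buf->size);
}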
6 changes: 0 additions & 6 deletions arch/arc/mm/dma.c
@@ -104,9 +104,3 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
dev_info(dev, "use %scoherent DMA ops\n",
dev->dma_coherent ? "" : "non");
}

static int __init atomic_pool_init(void)
{
return dma_atomic_pool_init(GFP_KERNEL, pgprot_noncached(PAGE_KERNEL));
}
postcore_initcall(atomic_pool_init);
2 changes: 1 addition & 1 deletion arch/arm/Kconfig
@@ -8,7 +8,7 @@ config ARM
select ARCH_HAS_DEBUG_VIRTUAL if MMU
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_COHERENT_TO_PFN if SWIOTLB
select ARCH_HAS_DMA_MMAP_PGPROT if SWIOTLB
select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_KEEPINITRD
3 changes: 0 additions & 3 deletions arch/arm/include/asm/device.h
@@ -14,9 +14,6 @@ struct dev_archdata {
#endif
#ifdef CONFIG_ARM_DMA_USE_IOMMU
struct dma_iommu_mapping *mapping;
#endif
#ifdef CONFIG_XEN
const struct dma_map_ops *dev_dma_ops;
#endif
unsigned int dma_coherent:1;
unsigned int dma_ops_setup:1;
6 changes: 0 additions & 6 deletions arch/arm/include/asm/dma-mapping.h
@@ -91,12 +91,6 @@ static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
}
#endif

/* do not use this function in a driver */
static inline bool is_device_dma_coherent(struct device *dev)
{
return dev->archdata.dma_coherent;
}

/**
* arm_dma_alloc - allocate consistent memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
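The ARM-private is_device_dma_coherent() removed in the hunk above is superseded in this series by the generic dev_is_dma_coherent() from <linux/dma-noncoherent.h> (see the "xen/arm: use dev_is_dma_coherent" commit in the list at the top). A minimal sketch of the generic idiom; the wrapper function is hypothetical:

#include <linux/device.h>
#include <linux/dma-noncoherent.h>

/*
 * Hypothetical helper: decide whether explicit cache maintenance is
 * needed, using the generic coherence query rather than the removed
 * arch-private is_device_dma_coherent().
 */
static bool example_needs_cache_sync(struct device *dev)
{
	return !dev_is_dma_coherent(dev);
}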
1 change: 0 additions & 1 deletion arch/arm/include/asm/pgtable-nommu.h
@@ -62,7 +62,6 @@ typedef pte_t *pte_addr_t;
*/
#define pgprot_noncached(prot) (prot)
#define pgprot_writecombine(prot) (prot)
#define pgprot_dmacoherent(prot) (prot)
#define pgprot_device(prot) (prot)


93 changes: 0 additions & 93 deletions arch/arm/include/asm/xen/page-coherent.h
@@ -1,95 +1,2 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_ARM_XEN_PAGE_COHERENT_H
#define _ASM_ARM_XEN_PAGE_COHERENT_H

#include <linux/dma-mapping.h>
#include <asm/page.h>
#include <xen/arm/page-coherent.h>

static inline const struct dma_map_ops *xen_get_dma_ops(struct device *dev)
{
if (dev && dev->archdata.dev_dma_ops)
return dev->archdata.dev_dma_ops;
return get_arch_dma_ops(NULL);
}

static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
{
return xen_get_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags, attrs);
}

static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
{
xen_get_dma_ops(hwdev)->free(hwdev, size, cpu_addr, dma_handle, attrs);
}

static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
dma_addr_t dev_addr, unsigned long offset, size_t size,
enum dma_data_direction dir, unsigned long attrs)
{
unsigned long page_pfn = page_to_xen_pfn(page);
unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
unsigned long compound_pages =
(1<<compound_order(page)) * XEN_PFN_PER_PAGE;
bool local = (page_pfn <= dev_pfn) &&
(dev_pfn - page_pfn < compound_pages);

/*
* Dom0 is mapped 1:1, while the Linux page can span across
* multiple Xen pages, it's not possible for it to contain a
* mix of local and foreign Xen pages. So if the first xen_pfn
* == mfn the page is local otherwise it's a foreign page
* grant-mapped in dom0. If the page is local we can safely
* call the native dma_ops function, otherwise we call the xen
* specific function.
*/
if (local)
xen_get_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
else
__xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, attrs);
}

static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
unsigned long pfn = PFN_DOWN(handle);
/*
* Dom0 is mapped 1:1, while the Linux page can be spanned accross
* multiple Xen page, it's not possible to have a mix of local and
* foreign Xen page. Dom0 is mapped 1:1, so calling pfn_valid on a
* foreign mfn will always return false. If the page is local we can
* safely call the native dma_ops function, otherwise we call the xen
* specific function.
*/
if (pfn_valid(pfn)) {
if (xen_get_dma_ops(hwdev)->unmap_page)
xen_get_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
} else
__xen_dma_unmap_page(hwdev, handle, size, dir, attrs);
}

static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
unsigned long pfn = PFN_DOWN(handle);
if (pfn_valid(pfn)) {
if (xen_get_dma_ops(hwdev)->sync_single_for_cpu)
xen_get_dma_ops(hwdev)->sync_single_for_cpu(hwdev, handle, size, dir);
} else
__xen_dma_sync_single_for_cpu(hwdev, handle, size, dir);
}

static inline void xen_dma_sync_single_for_device(struct device *hwdev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
unsigned long pfn = PFN_DOWN(handle);
if (pfn_valid(pfn)) {
if (xen_get_dma_ops(hwdev)->sync_single_for_device)
xen_get_dma_ops(hwdev)->sync_single_for_device(hwdev, handle, size, dir);
} else
__xen_dma_sync_single_for_device(hwdev, handle, size, dir);
}

#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
5 changes: 3 additions & 2 deletions arch/arm/mm/dma-mapping-nommu.c
@@ -68,8 +68,9 @@ static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,

if (dma_mmap_from_global_coherent(vma, cpu_addr, size, &ret))
return ret;

return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
return ret;
return -ENXIO;
}

