2009-01-05 14:51:22

by FUJITA Tomonori

Subject: [PATCH 0/13] x86: unifying ways to handle multiple sets of dma mapping ops

This patchset is the second part of the unification of the ways to handle
multiple sets of dma mapping API. The whole work consists of three
patchsets. This one is for X86 and can be applied independently (against
tip/master).

I've submitted the first part (for IA64):

http://marc.info/?l=linux-kernel&m=123116676006794&w=2

The dma_mapping_ops (or dma_ops) struct is used by X86, SPARC, and POWER
to handle multiple sets of dma mapping API. IA64 also handles multiple
sets of dma mapping API, but in a very different way (some #define
magic).

X86 and IA64 share the VT-d and SWIOTLB code. We need several
workarounds for it because of the difference in the ways to handle
multiple sets of dma mapping API (e.g., X86 people can't freely change
struct dma_mapping_ops in x86's dma-mapping.h now because it could
break IA64). It seems POWER will use the SWIOTLB code soon, too. I
think that it's time to unify the ways to handle multiple sets of dma
mapping API. After applying the whole work, we have struct dma_map_ops
in include/linux/dma-mapping.h (I also dream of changing all the archs
to use SWIOTLB in order to remove the bounce code in the block and
network stacks...).
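
For reference, the per-device dispatch on x86 currently looks roughly
like this (a simplified sketch of get_dma_ops() in x86's dma-mapping.h;
the 32-bit build simply returns the global dma_ops):

static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
{
	/* fall back to the global ops (nommu, swiotlb, gart, ...) if no
	 * per-device ops have been attached by an IOMMU driver */
	if (unlikely(!dev) || !dev->archdata.dma_ops)
		return dma_ops;
	return dev->archdata.dma_ops;
}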

This patchset doesn't include major changes; it just converts x86's
dma_mapping_ops to use map_page and unmap_page instead of map_single
and unmap_single. Currently, x86's dma_mapping_ops uses a physical
address as map_single's argument, but that's confusing since
dma_map_single takes a virtual address argument. So I chose POWER's
dma_mapping_ops scheme, which uses map_page to handle dma_map_single.
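
Concretely, after the conversion dma_map_single() sits on top of the
map_page hook by splitting the kernel virtual address into a page plus
an offset, roughly like this (see patch 8/8 for the real change; the
BUG_ON/kmemcheck bits are omitted here):

static inline dma_addr_t
dma_map_single(struct device *hwdev, void *ptr, size_t size, int direction)
{
	struct dma_mapping_ops *ops = get_dma_ops(hwdev);

	/* a kernel virtual address always has a struct page behind it */
	return ops->map_page(hwdev, virt_to_page(ptr),
			     (unsigned long)ptr & ~PAGE_MASK, size,
			     direction, NULL);
}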

=
arch/x86/include/asm/dma-mapping.h | 23 ++++++++++++++---------
arch/x86/kernel/amd_iommu.c | 16 ++++++++++------
arch/x86/kernel/pci-calgary_64.c | 23 +++++++++++++----------
arch/x86/kernel/pci-gart_64.c | 20 ++++++++++++--------
arch/x86/kernel/pci-nommu.c | 16 ++++++++--------
arch/x86/kernel/pci-swiotlb_64.c | 22 ++++++++++++++++------
drivers/pci/intel-iommu.c | 26 ++++++++++++++++++++++----
7 files changed, 95 insertions(+), 51 deletions(-)



2009-01-05 14:50:45

by FUJITA Tomonori

Subject: [PATCH 1/8] add map_page and unmap_page to struct dma_mapping_ops

This patch adds map_page and unmap_page to struct dma_mapping_ops.

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/include/asm/dma-mapping.h | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index e93265c..2f89d2e 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -8,6 +8,7 @@

#include <linux/kmemcheck.h>
#include <linux/scatterlist.h>
+#include <linux/dma-attrs.h>
#include <asm/io.h>
#include <asm/swiotlb.h>
#include <asm-generic/dma-coherent.h>
@@ -51,6 +52,13 @@ struct dma_mapping_ops {
void (*unmap_sg)(struct device *hwdev,
struct scatterlist *sg, int nents,
int direction);
+ dma_addr_t (*map_page)(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs);
+ void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs);
int (*dma_supported)(struct device *hwdev, u64 mask);
int is_phys;
};
--
1.6.0.6

2009-01-05 14:51:00

by FUJITA Tomonori

Subject: [PATCH 3/8] gart: add map_page and unmap_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/kernel/pci-gart_64.c | 27 +++++++++++++++++++++++----
1 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
index 00c2bcd..e49c6dd 100644
--- a/arch/x86/kernel/pci-gart_64.c
+++ b/arch/x86/kernel/pci-gart_64.c
@@ -255,10 +255,13 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
}

/* Map a single area into the IOMMU */
-static dma_addr_t
-gart_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir)
+static dma_addr_t gart_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
unsigned long bus;
+ phys_addr_t paddr = page_to_phys(page) + offset;

if (!dev)
dev = &x86_dma_fallback_dev;
@@ -272,11 +275,19 @@ gart_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir)
return bus;
}

+static dma_addr_t gart_map_single(struct device *dev, phys_addr_t paddr,
+ size_t size, int dir)
+{
+ return gart_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
+ paddr & ~PAGE_MASK, size, dir, NULL);
+}
+
/*
* Free a DMA mapping.
*/
-static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
- size_t size, int direction)
+static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
unsigned long iommu_page;
int npages;
@@ -295,6 +306,12 @@ static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
free_iommu(iommu_page, npages);
}

+static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
+ size_t size, int direction)
+{
+ gart_unmap_page(dev, dma_addr, size, direction, NULL);
+}
+
/*
* Wrapper for pci_unmap_single working with scatterlists.
*/
@@ -712,6 +729,8 @@ static struct dma_mapping_ops gart_dma_ops = {
.unmap_single = gart_unmap_single,
.map_sg = gart_map_sg,
.unmap_sg = gart_unmap_sg,
+ .map_page = gart_map_page,
+ .unmap_page = gart_unmap_page,
.alloc_coherent = gart_alloc_coherent,
.free_coherent = gart_free_coherent,
};
--
1.6.0.6

2009-01-05 14:51:40

by FUJITA Tomonori

Subject: [PATCH 2/8] swiotlb: add map_page and unmap_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

This is sort of a temporary workaround. We will move them to
lib/swiotlb.c later to enable x86's swiotlb code to use them directly.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/kernel/pci-swiotlb_64.c | 19 +++++++++++++++++++
1 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/pci-swiotlb_64.c b/arch/x86/kernel/pci-swiotlb_64.c
index d59c917..d1c0366 100644
--- a/arch/x86/kernel/pci-swiotlb_64.c
+++ b/arch/x86/kernel/pci-swiotlb_64.c
@@ -45,6 +45,23 @@ swiotlb_map_single_phys(struct device *hwdev, phys_addr_t paddr, size_t size,
return swiotlb_map_single(hwdev, phys_to_virt(paddr), size, direction);
}

+/* these will be moved to lib/swiotlb.c later on */
+
+static dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ return swiotlb_map_single(dev, page_address(page) + offset, size, dir);
+}
+
+static void swiotlb_unmap_page(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ swiotlb_unmap_single(dev, dma_handle, size, dir);
+}
+
static void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags)
{
@@ -71,6 +88,8 @@ struct dma_mapping_ops swiotlb_dma_ops = {
.sync_sg_for_device = swiotlb_sync_sg_for_device,
.map_sg = swiotlb_map_sg,
.unmap_sg = swiotlb_unmap_sg,
+ .map_page = swiotlb_map_page,
+ .unmap_page = swiotlb_unmap_page,
.dma_supported = NULL,
};

--
1.6.0.6

2009-01-05 14:51:55

by FUJITA Tomonori

Subject: [PATCH 8/8] remove map_single and unmap_single in struct dma_mapping_ops

This patch converts dma_map_single and dma_unmap_single to use
map_page and unmap_page respectively, and removes the now-unnecessary
map_single and unmap_single hooks from struct dma_mapping_ops.

This leaves intel-iommu's intel_map_single and intel_unmap_single
since IA64 uses them. They will be removed after the unification.

Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/include/asm/dma-mapping.h | 15 ++++++---------
arch/x86/kernel/amd_iommu.c | 15 ---------------
arch/x86/kernel/pci-calgary_64.c | 16 ----------------
arch/x86/kernel/pci-gart_64.c | 19 ++-----------------
arch/x86/kernel/pci-nommu.c | 8 --------
arch/x86/kernel/pci-swiotlb_64.c | 9 ---------
drivers/pci/intel-iommu.c | 2 --
7 files changed, 8 insertions(+), 76 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 2f89d2e..a0cb867 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -25,10 +25,6 @@ struct dma_mapping_ops {
dma_addr_t *dma_handle, gfp_t gfp);
void (*free_coherent)(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
- dma_addr_t (*map_single)(struct device *hwdev, phys_addr_t ptr,
- size_t size, int direction);
- void (*unmap_single)(struct device *dev, dma_addr_t addr,
- size_t size, int direction);
void (*sync_single_for_cpu)(struct device *hwdev,
dma_addr_t dma_handle, size_t size,
int direction);
@@ -105,7 +101,9 @@ dma_map_single(struct device *hwdev, void *ptr, size_t size,

BUG_ON(!valid_dma_direction(direction));
kmemcheck_mark_initialized(ptr, size);
- return ops->map_single(hwdev, virt_to_phys(ptr), size, direction);
+ return ops->map_page(hwdev, virt_to_page(ptr),
+ (unsigned long)ptr & ~PAGE_MASK, size,
+ direction, NULL);
}

static inline void
@@ -115,8 +113,8 @@ dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
struct dma_mapping_ops *ops = get_dma_ops(dev);

BUG_ON(!valid_dma_direction(direction));
- if (ops->unmap_single)
- ops->unmap_single(dev, addr, size, direction);
+ if (ops->unmap_page)
+ ops->unmap_page(dev, addr, size, direction, NULL);
}

static inline int
@@ -223,8 +221,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
struct dma_mapping_ops *ops = get_dma_ops(dev);

BUG_ON(!valid_dma_direction(direction));
- return ops->map_single(dev, page_to_phys(page) + offset,
- size, direction);
+ return ops->map_page(dev, page, offset, size, direction, NULL);
}

static inline void dma_unmap_page(struct device *dev, dma_addr_t addr,
diff --git a/arch/x86/kernel/amd_iommu.c b/arch/x86/kernel/amd_iommu.c
index 8570441..a5dedb6 100644
--- a/arch/x86/kernel/amd_iommu.c
+++ b/arch/x86/kernel/amd_iommu.c
@@ -1341,13 +1341,6 @@ out:
return addr;
}

-static dma_addr_t map_single(struct device *dev, phys_addr_t paddr,
- size_t size, int dir)
-{
- return map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
- paddr & ~PAGE_MASK, size, dir, NULL);
-}
-
/*
* The exported unmap_single function for dma_ops.
*/
@@ -1378,12 +1371,6 @@ static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
spin_unlock_irqrestore(&domain->lock, flags);
}

-static void unmap_single(struct device *dev, dma_addr_t dma_addr,
- size_t size, int dir)
-{
- return unmap_page(dev, dma_addr, size, dir, NULL);
-}
-
/*
* This is a special map_sg function which is used if we should map a
* device which is not handled by an AMD IOMMU in the system.
@@ -1664,8 +1651,6 @@ static void prealloc_protection_domains(void)
static struct dma_mapping_ops amd_iommu_dma_ops = {
.alloc_coherent = alloc_coherent,
.free_coherent = free_coherent,
- .map_single = map_single,
- .unmap_single = unmap_single,
.map_page = map_page,
.unmap_page = unmap_page,
.map_sg = map_sg,
diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index e33cfcf..756138b 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -461,14 +461,6 @@ static dma_addr_t calgary_map_page(struct device *dev, struct page *page,
return iommu_alloc(dev, tbl, vaddr, npages, dir);
}

-static dma_addr_t calgary_map_single(struct device *dev, phys_addr_t paddr,
- size_t size, int direction)
-{
- return calgary_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
- paddr & ~PAGE_MASK, size,
- direction, NULL);
-}
-
static void calgary_unmap_page(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
@@ -480,12 +472,6 @@ static void calgary_unmap_page(struct device *dev, dma_addr_t dma_addr,
iommu_free(tbl, dma_addr, npages);
}

-static void calgary_unmap_single(struct device *dev, dma_addr_t dma_handle,
- size_t size, int direction)
-{
- calgary_unmap_page(dev, dma_handle, size, direction, NULL);
-}
-
static void* calgary_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag)
{
@@ -535,8 +521,6 @@ static void calgary_free_coherent(struct device *dev, size_t size,
static struct dma_mapping_ops calgary_dma_ops = {
.alloc_coherent = calgary_alloc_coherent,
.free_coherent = calgary_free_coherent,
- .map_single = calgary_map_single,
- .unmap_single = calgary_unmap_single,
.map_sg = calgary_map_sg,
.unmap_sg = calgary_unmap_sg,
.map_page = calgary_map_page,
diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
index e49c6dd..9c557c0 100644
--- a/arch/x86/kernel/pci-gart_64.c
+++ b/arch/x86/kernel/pci-gart_64.c
@@ -275,13 +275,6 @@ static dma_addr_t gart_map_page(struct device *dev, struct page *page,
return bus;
}

-static dma_addr_t gart_map_single(struct device *dev, phys_addr_t paddr,
- size_t size, int dir)
-{
- return gart_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
- paddr & ~PAGE_MASK, size, dir, NULL);
-}
-
/*
* Free a DMA mapping.
*/
@@ -306,12 +299,6 @@ static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr,
free_iommu(iommu_page, npages);
}

-static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
- size_t size, int direction)
-{
- gart_unmap_page(dev, dma_addr, size, direction, NULL);
-}
-
/*
* Wrapper for pci_unmap_single working with scatterlists.
*/
@@ -324,7 +311,7 @@ gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
for_each_sg(sg, s, nents, i) {
if (!s->dma_length || !s->length)
break;
- gart_unmap_single(dev, s->dma_address, s->dma_length, dir);
+ gart_unmap_page(dev, s->dma_address, s->dma_length, dir, NULL);
}
}

@@ -538,7 +525,7 @@ static void
gart_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_addr)
{
- gart_unmap_single(dev, dma_addr, size, DMA_BIDIRECTIONAL);
+ gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, NULL);
free_pages((unsigned long)vaddr, get_order(size));
}

@@ -725,8 +712,6 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
}

static struct dma_mapping_ops gart_dma_ops = {
- .map_single = gart_map_single,
- .unmap_single = gart_unmap_single,
.map_sg = gart_map_sg,
.unmap_sg = gart_unmap_sg,
.map_page = gart_map_page,
diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index 5a73a82..d42b69c 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -38,13 +38,6 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
return bus;
}

-static dma_addr_t nommu_map_single(struct device *hwdev, phys_addr_t paddr,
- size_t size, int direction)
-{
- return nommu_map_page(hwdev, pfn_to_page(paddr >> PAGE_SHIFT),
- paddr & ~PAGE_MASK, size, direction, NULL);
-}
-
/* Map a set of buffers described by scatterlist in streaming
* mode for DMA. This is the scatter-gather version of the
* above pci_map_single interface. Here the scatter gather list
@@ -88,7 +81,6 @@ static void nommu_free_coherent(struct device *dev, size_t size, void *vaddr,
struct dma_mapping_ops nommu_dma_ops = {
.alloc_coherent = dma_generic_alloc_coherent,
.free_coherent = nommu_free_coherent,
- .map_single = nommu_map_single,
.map_sg = nommu_map_sg,
.map_page = nommu_map_page,
.is_phys = 1,
diff --git a/arch/x86/kernel/pci-swiotlb_64.c b/arch/x86/kernel/pci-swiotlb_64.c
index d1c0366..3ae354c 100644
--- a/arch/x86/kernel/pci-swiotlb_64.c
+++ b/arch/x86/kernel/pci-swiotlb_64.c
@@ -38,13 +38,6 @@ int __weak swiotlb_arch_range_needs_mapping(void *ptr, size_t size)
return 0;
}

-static dma_addr_t
-swiotlb_map_single_phys(struct device *hwdev, phys_addr_t paddr, size_t size,
- int direction)
-{
- return swiotlb_map_single(hwdev, phys_to_virt(paddr), size, direction);
-}
-
/* these will be moved to lib/swiotlb.c later on */

static dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
@@ -78,8 +71,6 @@ struct dma_mapping_ops swiotlb_dma_ops = {
.mapping_error = swiotlb_dma_mapping_error,
.alloc_coherent = x86_swiotlb_alloc_coherent,
.free_coherent = swiotlb_free_coherent,
- .map_single = swiotlb_map_single_phys,
- .unmap_single = swiotlb_unmap_single,
.sync_single_for_cpu = swiotlb_sync_single_for_cpu,
.sync_single_for_device = swiotlb_sync_single_for_device,
.sync_single_range_for_cpu = swiotlb_sync_single_range_for_cpu,
diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index 60258ec..da273e4 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -2582,8 +2582,6 @@ int intel_map_sg(struct device *hwdev, struct scatterlist *sglist, int nelems,
static struct dma_mapping_ops intel_dma_ops = {
.alloc_coherent = intel_alloc_coherent,
.free_coherent = intel_free_coherent,
- .map_single = intel_map_single,
- .unmap_single = intel_unmap_single,
.map_sg = intel_map_sg,
.unmap_sg = intel_unmap_sg,
#ifdef CONFIG_X86_64
--
1.6.0.6

2009-01-05 14:52:23

by FUJITA Tomonori

Subject: [PATCH 7/8] pci-nommu: add map_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

We will remove the map_single hook in the last patch of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
---
arch/x86/kernel/pci-nommu.c | 20 ++++++++++++++------
1 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index c70ab5a..5a73a82 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -25,18 +25,25 @@ check_addr(char *name, struct device *hwdev, dma_addr_t bus, size_t size)
return 1;
}

-static dma_addr_t
-nommu_map_single(struct device *hwdev, phys_addr_t paddr, size_t size,
- int direction)
+static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
- dma_addr_t bus = paddr;
+ dma_addr_t bus = page_to_phys(page) + offset;
WARN_ON(size == 0);
- if (!check_addr("map_single", hwdev, bus, size))
- return bad_dma_address;
+ if (!check_addr("map_single", dev, bus, size))
+ return bad_dma_address;
flush_write_buffers();
return bus;
}

+static dma_addr_t nommu_map_single(struct device *hwdev, phys_addr_t paddr,
+ size_t size, int direction)
+{
+ return nommu_map_page(hwdev, pfn_to_page(paddr >> PAGE_SHIFT),
+ paddr & ~PAGE_MASK, size, direction, NULL);
+}

/* Map a set of buffers described by scatterlist in streaming
* mode for DMA. This is the scatter-gather version of the
@@ -83,6 +90,7 @@ struct dma_mapping_ops nommu_dma_ops = {
.free_coherent = nommu_free_coherent,
.map_single = nommu_map_single,
.map_sg = nommu_map_sg,
+ .map_page = nommu_map_page,
.is_phys = 1,
};

--
1.6.0.6

2009-01-05 14:52:43

by FUJITA Tomonori

Subject: [PATCH 5/8] AMD IOMMU: add map_page and unmap_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
Cc: Joerg Roedel <[email protected]>
---
arch/x86/kernel/amd_iommu.c | 27 +++++++++++++++++++++++----
1 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/amd_iommu.c b/arch/x86/kernel/amd_iommu.c
index 5113c08..8570441 100644
--- a/arch/x86/kernel/amd_iommu.c
+++ b/arch/x86/kernel/amd_iommu.c
@@ -22,6 +22,7 @@
#include <linux/bitops.h>
#include <linux/debugfs.h>
#include <linux/scatterlist.h>
+#include <linux/dma-mapping.h>
#include <linux/iommu-helper.h>
#ifdef CONFIG_IOMMU_API
#include <linux/iommu.h>
@@ -1297,8 +1298,10 @@ static void __unmap_single(struct amd_iommu *iommu,
/*
* The exported map_single function for dma_ops.
*/
-static dma_addr_t map_single(struct device *dev, phys_addr_t paddr,
- size_t size, int dir)
+static dma_addr_t map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
unsigned long flags;
struct amd_iommu *iommu;
@@ -1306,6 +1309,7 @@ static dma_addr_t map_single(struct device *dev, phys_addr_t paddr,
u16 devid;
dma_addr_t addr;
u64 dma_mask;
+ phys_addr_t paddr = page_to_phys(page) + offset;

INC_STATS_COUNTER(cnt_map_single);

@@ -1337,11 +1341,18 @@ out:
return addr;
}

+static dma_addr_t map_single(struct device *dev, phys_addr_t paddr,
+ size_t size, int dir)
+{
+ return map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
+ paddr & ~PAGE_MASK, size, dir, NULL);
+}
+
/*
* The exported unmap_single function for dma_ops.
*/
-static void unmap_single(struct device *dev, dma_addr_t dma_addr,
- size_t size, int dir)
+static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
{
unsigned long flags;
struct amd_iommu *iommu;
@@ -1367,6 +1378,12 @@ static void unmap_single(struct device *dev, dma_addr_t dma_addr,
spin_unlock_irqrestore(&domain->lock, flags);
}

+static void unmap_single(struct device *dev, dma_addr_t dma_addr,
+ size_t size, int dir)
+{
+ return unmap_page(dev, dma_addr, size, dir, NULL);
+}
+
/*
* This is a special map_sg function which is used if we should map a
* device which is not handled by an AMD IOMMU in the system.
@@ -1649,6 +1666,8 @@ static struct dma_mapping_ops amd_iommu_dma_ops = {
.free_coherent = free_coherent,
.map_single = map_single,
.unmap_single = unmap_single,
+ .map_page = map_page,
+ .unmap_page = unmap_page,
.map_sg = map_sg,
.unmap_sg = unmap_sg,
.dma_supported = amd_iommu_dma_supported,
--
1.6.0.6

2009-01-05 14:53:00

by FUJITA Tomonori

Subject: [PATCH 6/8] intel-iommu: add map_page and unmap_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

This uses a temporary workaround, an #ifdef CONFIG_X86_64, to avoid
breaking the IA64 build. The workaround will be removed after the
unification. Well, the fact that changing x86's struct dma_mapping_ops
could break IA64 at all is just wrong; it's one of the problems that
this patchset fixes.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
Cc: David Woodhouse <[email protected]>
---
drivers/pci/intel-iommu.c | 24 ++++++++++++++++++++++--
1 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index 235fb7a..60258ec 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -2273,6 +2273,15 @@ error:
return 0;
}

+static dma_addr_t intel_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ return __intel_map_single(dev, page_to_phys(page) + offset, size,
+ dir, to_pci_dev(dev)->dma_mask);
+}
+
dma_addr_t intel_map_single(struct device *hwdev, phys_addr_t paddr,
size_t size, int dir)
{
@@ -2341,8 +2350,9 @@ static void add_unmap(struct dmar_domain *dom, struct iova *iova)
spin_unlock_irqrestore(&async_umap_flush_lock, flags);
}

-void intel_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
- int dir)
+static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct dmar_domain *domain;
@@ -2386,6 +2396,12 @@ void intel_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
}
}

+void intel_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
+ int dir)
+{
+ intel_unmap_page(dev, dev_addr, size, dir, NULL);
+}
+
void *intel_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags)
{
@@ -2570,6 +2586,10 @@ static struct dma_mapping_ops intel_dma_ops = {
.unmap_single = intel_unmap_single,
.map_sg = intel_map_sg,
.unmap_sg = intel_unmap_sg,
+#ifdef CONFIG_X86_64
+ .map_page = intel_map_page,
+ .unmap_page = intel_unmap_page,
+#endif
};

static inline int iommu_domain_cache_init(void)
--
1.6.0.6

2009-01-05 14:53:40

by FUJITA Tomonori

Subject: [PATCH 4/8] calgary: add map_page and unmap_page

This is in preparation for the struct dma_mapping_ops unification. We
use map_page and unmap_page instead of map_single and unmap_single.

We will remove the map_single and unmap_single hooks in the last patch
of this patchset.

Signed-off-by: FUJITA Tomonori <[email protected]>
Cc: Muli Ben-Yehuda <[email protected]>
---
arch/x86/kernel/pci-calgary_64.c | 35 +++++++++++++++++++++++++++--------
1 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index d28bbdc..e33cfcf 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -445,10 +445,12 @@ error:
return 0;
}

-static dma_addr_t calgary_map_single(struct device *dev, phys_addr_t paddr,
- size_t size, int direction)
+static dma_addr_t calgary_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
- void *vaddr = phys_to_virt(paddr);
+ void *vaddr = page_address(page) + offset;
unsigned long uaddr;
unsigned int npages;
struct iommu_table *tbl = find_iommu_table(dev);
@@ -456,17 +458,32 @@ static dma_addr_t calgary_map_single(struct device *dev, phys_addr_t paddr,
uaddr = (unsigned long)vaddr;
npages = iommu_num_pages(uaddr, size, PAGE_SIZE);

- return iommu_alloc(dev, tbl, vaddr, npages, direction);
+ return iommu_alloc(dev, tbl, vaddr, npages, dir);
}

-static void calgary_unmap_single(struct device *dev, dma_addr_t dma_handle,
- size_t size, int direction)
+static dma_addr_t calgary_map_single(struct device *dev, phys_addr_t paddr,
+ size_t size, int direction)
+{
+ return calgary_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
+ paddr & ~PAGE_MASK, size,
+ direction, NULL);
+}
+
+static void calgary_unmap_page(struct device *dev, dma_addr_t dma_addr,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
struct iommu_table *tbl = find_iommu_table(dev);
unsigned int npages;

- npages = iommu_num_pages(dma_handle, size, PAGE_SIZE);
- iommu_free(tbl, dma_handle, npages);
+ npages = iommu_num_pages(dma_addr, size, PAGE_SIZE);
+ iommu_free(tbl, dma_addr, npages);
+}
+
+static void calgary_unmap_single(struct device *dev, dma_addr_t dma_handle,
+ size_t size, int direction)
+{
+ calgary_unmap_page(dev, dma_handle, size, direction, NULL);
}

static void* calgary_alloc_coherent(struct device *dev, size_t size,
@@ -522,6 +539,8 @@ static struct dma_mapping_ops calgary_dma_ops = {
.unmap_single = calgary_unmap_single,
.map_sg = calgary_map_sg,
.unmap_sg = calgary_unmap_sg,
+ .map_page = calgary_map_page,
+ .unmap_page = calgary_unmap_page,
};

static inline void __iomem * busno_to_bbar(unsigned char num)
--
1.6.0.6

2009-01-05 15:01:43

by FUJITA Tomonori

Subject: Re: [PATCH 0/13] x86: unifying ways to handle multiple sets of dma mapping ops

On Mon, 5 Jan 2009 23:47:20 +0900
FUJITA Tomonori <[email protected]> wrote:

> This patchset is the second part of the unification of ways to handle
> multiple sets of dma mapping API. The whole work consists of three
> patchset. This is for X86 and can be applied independently (against
> tip/master).

Oops, I messed up the subject, should have been:

[PATCH 0/8] x86: unifying ways to handle multiple sets of dma mapping ops

2009-01-05 17:26:21

by Joerg Roedel

Subject: Re: [PATCH 1/8] add map_page and unmap_page to struct dma_mapping_ops

On Mon, Jan 05, 2009 at 11:47:21PM +0900, FUJITA Tomonori wrote:
> This patch adds map_page and unmap_page to struct dma_mapping_ops.
>
> This is a preparation of struct dma_mapping_ops unification. We use
> map_page and unmap_page instead of map_single and unmap_single.
>
> We will remove map_single and unmap_single hooks in the last patch in
> this patchset.
>
> Signed-off-by: FUJITA Tomonori <[email protected]>
> ---
> arch/x86/include/asm/dma-mapping.h | 8 ++++++++
> 1 files changed, 8 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
> index e93265c..2f89d2e 100644
> --- a/arch/x86/include/asm/dma-mapping.h
> +++ b/arch/x86/include/asm/dma-mapping.h
> @@ -8,6 +8,7 @@
>
> #include <linux/kmemcheck.h>
> #include <linux/scatterlist.h>
> +#include <linux/dma-attrs.h>
> #include <asm/io.h>
> #include <asm/swiotlb.h>
> #include <asm-generic/dma-coherent.h>
> @@ -51,6 +52,13 @@ struct dma_mapping_ops {
> void (*unmap_sg)(struct device *hwdev,
> struct scatterlist *sg, int nents,
> int direction);
> + dma_addr_t (*map_page)(struct device *dev, struct page *page,
> + unsigned long offset, size_t size,
> + enum dma_data_direction dir,
> + struct dma_attrs *attrs);
> + void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
> + size_t size, enum dma_data_direction dir,
> + struct dma_attrs *attrs);

Why do we need an offset into the page? The name suggests that this
function maps a whole page so the offset should be irrelevant.

Joerg

2009-01-05 17:29:17

by Joerg Roedel

Subject: Re: [PATCH 1/8] add map_page and unmap_page to struct dma_mapping_ops

On Mon, Jan 05, 2009 at 06:26:08PM +0100, Joerg Roedel wrote:
> On Mon, Jan 05, 2009 at 11:47:21PM +0900, FUJITA Tomonori wrote:
> > This patch adds map_page and unmap_page to struct dma_mapping_ops.
> >
> > This is a preparation of struct dma_mapping_ops unification. We use
> > map_page and unmap_page instead of map_single and unmap_single.
> >
> > We will remove map_single and unmap_single hooks in the last patch in
> > this patchset.
> >
> > Signed-off-by: FUJITA Tomonori <[email protected]>
> > ---
> > arch/x86/include/asm/dma-mapping.h | 8 ++++++++
> > 1 files changed, 8 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
> > index e93265c..2f89d2e 100644
> > --- a/arch/x86/include/asm/dma-mapping.h
> > +++ b/arch/x86/include/asm/dma-mapping.h
> > @@ -8,6 +8,7 @@
> >
> > #include <linux/kmemcheck.h>
> > #include <linux/scatterlist.h>
> > +#include <linux/dma-attrs.h>
> > #include <asm/io.h>
> > #include <asm/swiotlb.h>
> > #include <asm-generic/dma-coherent.h>
> > @@ -51,6 +52,13 @@ struct dma_mapping_ops {
> > void (*unmap_sg)(struct device *hwdev,
> > struct scatterlist *sg, int nents,
> > int direction);
> > + dma_addr_t (*map_page)(struct device *dev, struct page *page,
> > + unsigned long offset, size_t size,
> > + enum dma_data_direction dir,
> > + struct dma_attrs *attrs);
> > + void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
> > + size_t size, enum dma_data_direction dir,
> > + struct dma_attrs *attrs);
>
> Why do we need an offset into the page? The name suggests that this
> function maps a whole page so the offset should be irrelevant.

Ah, just saw it. Forget this stupid question :)

2009-01-05 18:00:52

by Joerg Roedel

Subject: Re: [PATCH 8/8] remove map_single and unmap_single in struct dma_mapping_ops

Is this the right way to implement map_single in terms of map_page?
Doing this, you optimize for the map_page case. But a grep in drivers/
shows:

linux/drivers $ grep -r _map_page *|wc -l
126
linux/drivers $ grep -r _map_single *|wc -l
613

There are a lot more users of map_single than of map_page. I think it's
better to optimize for the map_single case and implement map_page in
terms of map_single.

Joerg

On Mon, Jan 05, 2009 at 11:47:28PM +0900, FUJITA Tomonori wrote:
> This patch converts dma_map_single and dma_unmap_single to use
> map_page and unmap_page respectively and removes unnecessary
> map_single and unmap_single in struct dma_mapping_ops.
>
> This leaves intel-iommu's dma_map_single and dma_unmap_single since
> IA64 uses them. They will be removed after the unification.
>
> Signed-off-by: FUJITA Tomonori <[email protected]>
> ---
> arch/x86/include/asm/dma-mapping.h | 15 ++++++---------
> arch/x86/kernel/amd_iommu.c | 15 ---------------
> arch/x86/kernel/pci-calgary_64.c | 16 ----------------
> arch/x86/kernel/pci-gart_64.c | 19 ++-----------------
> arch/x86/kernel/pci-nommu.c | 8 --------
> arch/x86/kernel/pci-swiotlb_64.c | 9 ---------
> drivers/pci/intel-iommu.c | 2 --
> 7 files changed, 8 insertions(+), 76 deletions(-)
>
> diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
> index 2f89d2e..a0cb867 100644
> --- a/arch/x86/include/asm/dma-mapping.h
> +++ b/arch/x86/include/asm/dma-mapping.h
> @@ -25,10 +25,6 @@ struct dma_mapping_ops {
> dma_addr_t *dma_handle, gfp_t gfp);
> void (*free_coherent)(struct device *dev, size_t size,
> void *vaddr, dma_addr_t dma_handle);
> - dma_addr_t (*map_single)(struct device *hwdev, phys_addr_t ptr,
> - size_t size, int direction);
> - void (*unmap_single)(struct device *dev, dma_addr_t addr,
> - size_t size, int direction);
> void (*sync_single_for_cpu)(struct device *hwdev,
> dma_addr_t dma_handle, size_t size,
> int direction);
> @@ -105,7 +101,9 @@ dma_map_single(struct device *hwdev, void *ptr, size_t size,
>
> BUG_ON(!valid_dma_direction(direction));
> kmemcheck_mark_initialized(ptr, size);
> - return ops->map_single(hwdev, virt_to_phys(ptr), size, direction);
> + return ops->map_page(hwdev, virt_to_page(ptr),
> + (unsigned long)ptr & ~PAGE_MASK, size,
> + direction, NULL);
> }
>
> static inline void
> @@ -115,8 +113,8 @@ dma_unmap_single(struct device *dev, dma_addr_t addr, size_t size,
> struct dma_mapping_ops *ops = get_dma_ops(dev);
>
> BUG_ON(!valid_dma_direction(direction));
> - if (ops->unmap_single)
> - ops->unmap_single(dev, addr, size, direction);
> + if (ops->unmap_page)
> + ops->unmap_page(dev, addr, size, direction, NULL);
> }
>
> static inline int
> @@ -223,8 +221,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
> struct dma_mapping_ops *ops = get_dma_ops(dev);
>
> BUG_ON(!valid_dma_direction(direction));
> - return ops->map_single(dev, page_to_phys(page) + offset,
> - size, direction);
> + return ops->map_page(dev, page, offset, size, direction, NULL);
> }
>
> static inline void dma_unmap_page(struct device *dev, dma_addr_t addr,
> diff --git a/arch/x86/kernel/amd_iommu.c b/arch/x86/kernel/amd_iommu.c
> index 8570441..a5dedb6 100644
> --- a/arch/x86/kernel/amd_iommu.c
> +++ b/arch/x86/kernel/amd_iommu.c
> @@ -1341,13 +1341,6 @@ out:
> return addr;
> }
>
> -static dma_addr_t map_single(struct device *dev, phys_addr_t paddr,
> - size_t size, int dir)
> -{
> - return map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
> - paddr & ~PAGE_MASK, size, dir, NULL);
> -}
> -
> /*
> * The exported unmap_single function for dma_ops.
> */
> @@ -1378,12 +1371,6 @@ static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
> spin_unlock_irqrestore(&domain->lock, flags);
> }
>
> -static void unmap_single(struct device *dev, dma_addr_t dma_addr,
> - size_t size, int dir)
> -{
> - return unmap_page(dev, dma_addr, size, dir, NULL);
> -}
> -
> /*
> * This is a special map_sg function which is used if we should map a
> * device which is not handled by an AMD IOMMU in the system.
> @@ -1664,8 +1651,6 @@ static void prealloc_protection_domains(void)
> static struct dma_mapping_ops amd_iommu_dma_ops = {
> .alloc_coherent = alloc_coherent,
> .free_coherent = free_coherent,
> - .map_single = map_single,
> - .unmap_single = unmap_single,
> .map_page = map_page,
> .unmap_page = unmap_page,
> .map_sg = map_sg,
> diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
> index e33cfcf..756138b 100644
> --- a/arch/x86/kernel/pci-calgary_64.c
> +++ b/arch/x86/kernel/pci-calgary_64.c
> @@ -461,14 +461,6 @@ static dma_addr_t calgary_map_page(struct device *dev, struct page *page,
> return iommu_alloc(dev, tbl, vaddr, npages, dir);
> }
>
> -static dma_addr_t calgary_map_single(struct device *dev, phys_addr_t paddr,
> - size_t size, int direction)
> -{
> - return calgary_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
> - paddr & ~PAGE_MASK, size,
> - direction, NULL);
> -}
> -
> static void calgary_unmap_page(struct device *dev, dma_addr_t dma_addr,
> size_t size, enum dma_data_direction dir,
> struct dma_attrs *attrs)
> @@ -480,12 +472,6 @@ static void calgary_unmap_page(struct device *dev, dma_addr_t dma_addr,
> iommu_free(tbl, dma_addr, npages);
> }
>
> -static void calgary_unmap_single(struct device *dev, dma_addr_t dma_handle,
> - size_t size, int direction)
> -{
> - calgary_unmap_page(dev, dma_handle, size, direction, NULL);
> -}
> -
> static void* calgary_alloc_coherent(struct device *dev, size_t size,
> dma_addr_t *dma_handle, gfp_t flag)
> {
> @@ -535,8 +521,6 @@ static void calgary_free_coherent(struct device *dev, size_t size,
> static struct dma_mapping_ops calgary_dma_ops = {
> .alloc_coherent = calgary_alloc_coherent,
> .free_coherent = calgary_free_coherent,
> - .map_single = calgary_map_single,
> - .unmap_single = calgary_unmap_single,
> .map_sg = calgary_map_sg,
> .unmap_sg = calgary_unmap_sg,
> .map_page = calgary_map_page,
> diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
> index e49c6dd..9c557c0 100644
> --- a/arch/x86/kernel/pci-gart_64.c
> +++ b/arch/x86/kernel/pci-gart_64.c
> @@ -275,13 +275,6 @@ static dma_addr_t gart_map_page(struct device *dev, struct page *page,
> return bus;
> }
>
> -static dma_addr_t gart_map_single(struct device *dev, phys_addr_t paddr,
> - size_t size, int dir)
> -{
> - return gart_map_page(dev, pfn_to_page(paddr >> PAGE_SHIFT),
> - paddr & ~PAGE_MASK, size, dir, NULL);
> -}
> -
> /*
> * Free a DMA mapping.
> */
> @@ -306,12 +299,6 @@ static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr,
> free_iommu(iommu_page, npages);
> }
>
> -static void gart_unmap_single(struct device *dev, dma_addr_t dma_addr,
> - size_t size, int direction)
> -{
> - gart_unmap_page(dev, dma_addr, size, direction, NULL);
> -}
> -
> /*
> * Wrapper for pci_unmap_single working with scatterlists.
> */
> @@ -324,7 +311,7 @@ gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, int dir)
> for_each_sg(sg, s, nents, i) {
> if (!s->dma_length || !s->length)
> break;
> - gart_unmap_single(dev, s->dma_address, s->dma_length, dir);
> + gart_unmap_page(dev, s->dma_address, s->dma_length, dir, NULL);
> }
> }
>
> @@ -538,7 +525,7 @@ static void
> gart_free_coherent(struct device *dev, size_t size, void *vaddr,
> dma_addr_t dma_addr)
> {
> - gart_unmap_single(dev, dma_addr, size, DMA_BIDIRECTIONAL);
> + gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, NULL);
> free_pages((unsigned long)vaddr, get_order(size));
> }
>
> @@ -725,8 +712,6 @@ static __init int init_k8_gatt(struct agp_kern_info *info)
> }
>
> static struct dma_mapping_ops gart_dma_ops = {
> - .map_single = gart_map_single,
> - .unmap_single = gart_unmap_single,
> .map_sg = gart_map_sg,
> .unmap_sg = gart_unmap_sg,
> .map_page = gart_map_page,
> diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
> index 5a73a82..d42b69c 100644
> --- a/arch/x86/kernel/pci-nommu.c
> +++ b/arch/x86/kernel/pci-nommu.c
> @@ -38,13 +38,6 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
> return bus;
> }
>
> -static dma_addr_t nommu_map_single(struct device *hwdev, phys_addr_t paddr,
> - size_t size, int direction)
> -{
> - return nommu_map_page(hwdev, pfn_to_page(paddr >> PAGE_SHIFT),
> - paddr & ~PAGE_MASK, size, direction, NULL);
> -}
> -
> /* Map a set of buffers described by scatterlist in streaming
> * mode for DMA. This is the scatter-gather version of the
> * above pci_map_single interface. Here the scatter gather list
> @@ -88,7 +81,6 @@ static void nommu_free_coherent(struct device *dev, size_t size, void *vaddr,
> struct dma_mapping_ops nommu_dma_ops = {
> .alloc_coherent = dma_generic_alloc_coherent,
> .free_coherent = nommu_free_coherent,
> - .map_single = nommu_map_single,
> .map_sg = nommu_map_sg,
> .map_page = nommu_map_page,
> .is_phys = 1,
> diff --git a/arch/x86/kernel/pci-swiotlb_64.c b/arch/x86/kernel/pci-swiotlb_64.c
> index d1c0366..3ae354c 100644
> --- a/arch/x86/kernel/pci-swiotlb_64.c
> +++ b/arch/x86/kernel/pci-swiotlb_64.c
> @@ -38,13 +38,6 @@ int __weak swiotlb_arch_range_needs_mapping(void *ptr, size_t size)
> return 0;
> }
>
> -static dma_addr_t
> -swiotlb_map_single_phys(struct device *hwdev, phys_addr_t paddr, size_t size,
> - int direction)
> -{
> - return swiotlb_map_single(hwdev, phys_to_virt(paddr), size, direction);
> -}
> -
> /* these will be moved to lib/swiotlb.c later on */
>
> static dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
> @@ -78,8 +71,6 @@ struct dma_mapping_ops swiotlb_dma_ops = {
> .mapping_error = swiotlb_dma_mapping_error,
> .alloc_coherent = x86_swiotlb_alloc_coherent,
> .free_coherent = swiotlb_free_coherent,
> - .map_single = swiotlb_map_single_phys,
> - .unmap_single = swiotlb_unmap_single,
> .sync_single_for_cpu = swiotlb_sync_single_for_cpu,
> .sync_single_for_device = swiotlb_sync_single_for_device,
> .sync_single_range_for_cpu = swiotlb_sync_single_range_for_cpu,
> diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
> index 60258ec..da273e4 100644
> --- a/drivers/pci/intel-iommu.c
> +++ b/drivers/pci/intel-iommu.c
> @@ -2582,8 +2582,6 @@ int intel_map_sg(struct device *hwdev, struct scatterlist *sglist, int nelems,
> static struct dma_mapping_ops intel_dma_ops = {
> .alloc_coherent = intel_alloc_coherent,
> .free_coherent = intel_free_coherent,
> - .map_single = intel_map_single,
> - .unmap_single = intel_unmap_single,
> .map_sg = intel_map_sg,
> .unmap_sg = intel_unmap_sg,
> #ifdef CONFIG_X86_64
> --
> 1.6.0.6
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

2009-01-06 09:26:43

by Muli Ben-Yehuda

Subject: Re: [PATCH 4/8] calgary: add map_page and unmap_page

On Mon, Jan 05, 2009 at 11:47:24PM +0900, FUJITA Tomonori wrote:
> This is a preparation of struct dma_mapping_ops unification. We use
> map_page and unmap_page instead of map_single and unmap_single.
>
> We will remove map_single and unmap_single hooks in the last patch in
> this patchset.
>
> Signed-off-by: FUJITA Tomonori <[email protected]>
> Cc: Muli Ben-Yehuda <[email protected]>

Calgary bits look fine.

Acked-by: Muli Ben-Yehuda <[email protected]>

Cheers,
Muli
--
SYSTOR 2009---The Israeli Experimental Systems Conference
May 4-6, 2009, Haifa, Israel
http://www.haifa.il.ibm.com/conferences/systor2009/

2009-01-06 10:22:34

by FUJITA Tomonori

Subject: Re: [PATCH 8/8] remove map_single and unmap_single in struct dma_mapping_ops

On Mon, 5 Jan 2009 19:00:38 +0100
Joerg Roedel <[email protected]> wrote:

> Is it the right way to implement map_single in terms of map_page? Doing
> this you optimize for the map_page case. But a grep in drivers/ shows:
>
> linux/drivers $ grep -r _map_page *|wc -l
> 126
> linux/drivers $ grep -r _map_single *|wc -l
> 613

The comparison is irrelevant since dma_map_page and dma_map_single
have different purposes.

If passing a virtual memory address to an IOMMU is enough (and
convenient), then drivers use dma_map_single.

For some purposes, drivers need to pass a page frame and use
dma_map_page (or dma_map_sg).

We could have two hooks in the dma_map_ops struct for dma_map_single
and dma_map_page; say, map_single and map_page hooks. But the map_page
hook can be used to support both dma_map_single and dma_map_page. Note
that the map_single hook can't do that, since it uses a virtual address
as an argument. That's why I have only the map_page hook in the
dma_map_ops struct.

As X86 does now, we could have a map_single hook that uses a physical
address to handle both dma_map_single and dma_map_page. However, that's
confusing, since it means that the arguments of dma_map_single and its
hook (map_single) are inconsistent.
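
To illustrate the asymmetry (a sketch, not part of the patchset):
dma_map_page can forward the page straight to the map_page hook, with
no page_address() call, so it also works for pages that have no kernel
virtual address (e.g. highmem on 32-bit); dma_map_single only has to
split its virtual address into page + offset, as patch 8/8 does.

static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, size_t offset,
	     size_t size, int direction)
{
	struct dma_mapping_ops *ops = get_dma_ops(dev);

	/* no virtual address for the page is ever needed */
	return ops->map_page(dev, page, offset, size, direction, NULL);
}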


> There are a lot more users of map_single than of map_page. I think its
> better to optimize for the map_single case and implement map_page in
> terms of map_single.

As I wrote above, it doesn't make sense.

2009-01-06 13:08:10

by Ingo Molnar

Subject: Re: [PATCH 4/8] calgary: add map_page and unmap_page


* Muli Ben-Yehuda <[email protected]> wrote:

> On Mon, Jan 05, 2009 at 11:47:24PM +0900, FUJITA Tomonori wrote:
> > This is a preparation of struct dma_mapping_ops unification. We use
> > map_page and unmap_page instead of map_single and unmap_single.
> >
> > We will remove map_single and unmap_single hooks in the last patch in
> > this patchset.
> >
> > Signed-off-by: FUJITA Tomonori <[email protected]>
> > Cc: Muli Ben-Yehuda <[email protected]>
>
> Calgary bits look fine.
>
> Acked-by: Muli Ben-Yehuda <[email protected]>

thanks, added your ack to the changelog.

Ingo