This fixes the regression of the
6fee48cd330c68332f9712bc968d934a1a84a32a commit.
After the above commit, pci_set_consistent_dma_mask doesn't work for
dmabounce devices. It converts ARM to use the generic implementation
of pci_set_consistent_dma_mask.
The root cause is that ARM's CONFIG_DMABOUNCE uses dev->coherent_dma_mask
as the dma mask of both a device _AND_ a bus. dev->coherent_dma_mask is
supposed to represent the dma mask of the device only.
The proper solution is for the device and the bus to each have their own
dma mask (POWERPC does something similar). But the fix must go to the
stable trees, so this simple (and somewhat hacky) patch works better for now.
For further information: http://lkml.org/lkml/2010/8/10/272
Reported-by: Krzysztof Halasa <[email protected]>
Signed-off-by: FUJITA Tomonori <[email protected]>
Tested-by: Krzysztof Halasa <[email protected]>
Cc: [email protected]
---
arch/arm/mm/dma-mapping.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c704eed..2a3fc2e 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -77,6 +77,11 @@ static struct page *__dma_alloc_buffer(struct device *dev, size_t size, gfp_t gf
if (mask < 0xffffffffULL)
gfp |= GFP_DMA;
+#ifdef CONFIG_DMABOUNCE
+ if (dev->archdata.dmabounce)
+ gfp |= GFP_DMA;
+#endif
+
page = alloc_pages(gfp, order);
if (!page)
return NULL;
--
1.6.5
Russell, are you going to apply it as is, or is a better (short-term)
fix being made?
FUJITA Tomonori <[email protected]> writes:
> This fixes the regression of the
> 6fee48cd330c68332f9712bc968d934a1a84a32a commit.
>
> After the above commit, pci_set_consistent_dma_mask doesn't work for
> dmabounce devices. It converts ARM to use the generic implementation
> of pci_set_consistent_dma_mask.
>
> The root cause is that ARM's CONFIG_DMABOUNCE uses dev->coherent_dma_mask
> as the dma mask of both a device _AND_ a bus. dev->coherent_dma_mask is
> supposed to represent the dma mask of the device only.
>
> The proper solution is for the device and the bus to each have their own
> dma mask (POWERPC does something similar). But the fix must go to the
> stable trees, so this simple (and somewhat hacky) patch works better for now.
>
> For further information: http://lkml.org/lkml/2010/8/10/272
>
> Reported-by: Krzysztof Halasa <[email protected]>
> Signed-off-by: FUJITA Tomonori <[email protected]>
> Tested-by: Krzysztof Halasa <[email protected]>
> Cc: [email protected]
> ---
> arch/arm/mm/dma-mapping.c | 5 +++++
> 1 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index c704eed..2a3fc2e 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -77,6 +77,11 @@ static struct page *__dma_alloc_buffer(struct device *dev, size_t size, gfp_t gf
> if (mask < 0xffffffffULL)
> gfp |= GFP_DMA;
>
> +#ifdef CONFIG_DMABOUNCE
> + if (dev->archdata.dmabounce)
> + gfp |= GFP_DMA;
> +#endif
> +
> page = alloc_pages(gfp, order);
> if (!page)
> return NULL;
--
Krzysztof Halasa