2014-02-24 11:00:59

by Arnd Bergmann

Subject: DMABOUNCE in pci-rcar

Hi Magnus,

I noticed during randconfig testing that you enabled DMABOUNCE for the
pci-rcar-gen2 driver as posted in this patch https://lkml.org/lkml/2014/2/5/30

I didn't see the original post unfortunately, but I fear we have to
revert it and come up with a better solution, as your approach seems
wrong on a number of levels:

* We really don't want any new users of DMABOUNCE on ARM but instead leave
it to the PXA/IXP/SA1100 based platforms using it today.

* If your SoCs have an IOMMU, you should really use it, as that would
give you much better performance than bounce buffers anyway

* If the hardware is so screwed up that you have to use bounce buffers,
use the SWIOTLB code that is reasonably modern.

* The window base and size in the driver should not be hardcoded, as
this is likely not a device specific property but rather an artifact of
how it's connected. Use the standard "dma-ranges" property.

* You should not need your own dma_map_ops in the driver. What you
do there isn't really specific to your host but should live in some
place where it can be shared with other drivers.

* The implementation of the dma_map_ops is wrong: at the very least,
you cannot use the dma_mask of the pci host when translating
calls from the device, since each pci device can have a different
dma mask that is set by the driver if it needs larger or smaller
than 32-bit DMA space.

* The block bounce code can't be enabled if CONFIG_BLOCK is disabled,
this is how I noticed your changes.

* The block layer assumes that it will bounce any highmem pages,
but that isn't necessarily what you want here: if you have 2GB
of lowmem space (CONFIG_VMSPLIT_2G) or more, you will also have
lowmem pages that need bouncing.

* (completely unrelated, I noticed you set up a bogus I/O space
window. Don't do that, you will get a panic if a device happens
to use it. Not providing a resource should work fine though)

On the bright side, your other changes in the same series all look
good. Thanks especially for sorting out the probe() function, I was
already wondering if we could change that the way you did.

Arnd


2014-02-24 23:49:33

by Magnus Damm

Subject: Re: DMABOUNCE in pci-rcar

On Mon, Feb 24, 2014 at 8:00 PM, Arnd Bergmann <[email protected]> wrote:
> Hi Magnus,
>
> I noticed during randconfig testing that you enabled DMABOUNCE for the
> pci-rcar-gen2 driver as posted in this patch https://lkml.org/lkml/2014/2/5/30

Hi Arnd,

The patch that you are referring to has been updated a few times, but
I believe your comments are still valid for the series
[PATCH v2 00/08] PCI: rcar: Recent driver patches from Ben Dooks and me (V2)

> I didn't see the original post unfortunately, but I fear we have to
> revert it and come up with a better solution, as your approach seems
> wrong on a number of levels:
>
> * We really don't want any new users of DMABOUNCE on ARM but instead leave
> it to the PXA/IXP/SA1100 based platforms using it today.
>
> * If your SoCs have an IOMMU, you should really use it, as that would
> give you much better performance than bounce buffers anyway
>
> * If the hardware is so screwed up that you have to use bounce buffers,
> use the SWIOTLB code that is reasonably modern.

From my point of view we need some kind of bounce buffer unless we
have IOMMU support. I understand that an IOMMU would be much better
than a software-based implementation. Whether it is possible to use an
IOMMU with these devices remains to be seen.

I didn't know about the SWIOTLB code, nor did I know that
DMABOUNCE was supposed to be avoided. Now I do!

I do realize that my following patches madly mix potential bus code
and actual device support, however..

[PATCH v2 06/08] PCI: rcar: Add DMABOUNCE support
[PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM

.. without my patches the driver does not handle CONFIG_BOUNCE and
CONFIG_VMSPLIT_2G.

> * The window base and size in the driver should not be hardcoded, as
> this is likely not a device specific property but rather an artifact of
> how it's connected. Use the standard "dma-ranges" property.

I'm afraid that I may not understand your concern fully here. From my
view, the existing driver (without my patch) has a hard-coded
board-specific memory base and a hard-coded size in it, and with my
patches I try to rework that. Do you have any existing example code
you can recommend?

Regarding what the device can do and/or how it is connected - here is
some hardware information:

1) 3 on-chip PCI bridges per SoC with USB EHCI/OHCI controllers on
each of them (PCI bridge code is also shared between multiple SoCs and
of course multiple boards).

2) The systems have 40-bit physical address space and the CPU can
address this when LPAE is enabled.

3) System RAM is available in two banks - one bank in the lowest
32-bits (top 8 bits set to 0) and another bank in higher space.

4) The PCI bridge has a 32-bit base address for the windows with
alignment requirement (needs to be evenly aligned based on size)

5) Each PCI bridge instance has two windows, but the supported sizes
differ: one window supports up to 2 GiB, the other 256 MiB.

Without IOMMU available I came to the conclusion that I need both
BOUNCE and DMABOUNCE to support the above hardware.

> * You should not need your own dma_map_ops in the driver. What you
> do there isn't really specific to your host but should live in some
> place where it can be shared with other drivers.

I think this boils down to the less-than-32-bit bus master capability
and also the poor match to a DMA zone. Consider 24-bit ISA DMA address
space vs 31-bit DMA space on a 32-bit system. I may be wrong, but a
DMA zone in my mind is something suitable for the classic ISA DMA, not
so much for modern complex systems with multiple devices that come
with different bus master capabilities.

That said, of course it makes sense to share code whenever possible.
Can you explain a bit more what kind of code you would like to have
broken out?

> * The implementation of the dma_map_ops is wrong: at the very least,
> you cannot use the dma_mask of the pci host when translating
> calls from the device, since each pci device can have a different
> dma mask that is set by the driver if it needs larger or smaller
> than 32-bit DMA space.

The current version of the driver (with or without my patch) seems to
leave out all dma mask handling. I understand that this is poor
programming practice and I would like to see that fixed. As you
noticed, I did not try to fix this issue in my patch.

It may be worth noting that the PCI devices hanging off the PCI
bridge instances are all fixed on-chip OHCI and EHCI controllers, and the
drivers for these USB host controllers do not seem to try to set the
dma mask as it is today. So we are talking about a fixed set of PCI
devices here and not an external PCI bus with general-purpose PCI
drivers. I'm not sure if that makes things much better though...

So yes, I'd like the PCI bridge driver to be fixed with proper dma
mask handling. The question is just what that mask is supposed to
represent on a system where we a) may have IOMMU (and if so we can
address 40 bits), and if not we b) are limited to 31 bits but we also
have a non-zero base address. I'm not sure how to represent that
information with a single dma mask. Any suggestions?
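
Just to spell out the mismatch as I see it (a rough sketch, with
made-up helper names):

#include <linux/types.h>

/*
 * A dma_mask can only describe the range [0, mask]; for example
 * DMA_BIT_MASK(31) covers 0..0x7fffffff.  The bridge constraint is
 * really an interval with a non-zero start, i.e. a (base, size) pair,
 * which a single mask cannot encode.
 */
static bool reachable_by_mask(u64 bus_addr, u64 mask)
{
        return bus_addr <= mask;                /* all a mask can express */
}

static bool reachable_by_window(u64 bus_addr, u64 base, u64 size)
{
        return bus_addr >= base && bus_addr - base < size; /* what the HW needs */
}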

> * The block bounce code can't be enabled if CONFIG_BLOCK is disabled,
> this is how I noticed your changes.

Do you mean that the following patch is causing some kind of build error?
[PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM

If possible, can you please let me know which patches you want me
to rework?

> * The block layer assumes that it will bounce any highmem pages,
> but that isn't necessarily what you want here: if you have 2GB
> of lowmem space (CONFIG_VMSPLIT_2G) or more, you will also have
> lowmem pages that need bouncing.

Good to hear that you also came to the conclusion that the two cases
need to be handled separately. =)

The lowmem bounce case (CONFIG_VMSPLIT_2G) is implemented in:
[PATCH v2 06/08] PCI: rcar: Add DMABOUNCE support

The block layer bounce is also enabled in:
[PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM

> * (completely unrelated, I noticed you set up a bogus I/O space
> window. Don't do that, you will get a panic if a device happens
> to use it. Not providing a resource should work fine though)

Right. To be honest, I'm not sure why the original author implemented
this. Similar to the dma mask bits this is something that I would like
to fix, but it has been out of scope for my patches so far.

> On the bright side, your other changes in the same series all look
> good. Thanks especially for sorting out the probe() function, I was
> already wondering if we could change that the way you did.

Thanks. Ideally I'd like to support bind and unbind on the bridge too
(CARDBUS can hotplug PCI, so should we!), but I suppose that sorting
out the bounce bits is more important.

Also, about the SWIOTLB code - are you aware of any existing ARM users? I
also wonder if the code can work with multiple devices - the bits in
lib/swiotlb.c look like a single instance with global system-wide
support only - perhaps it needs rework to support 3 PCI bridges?

Thanks for your help, I appreciate your feedback.

Cheers,

/ magnus

2014-02-25 00:18:03

by Russell King - ARM Linux

Subject: Re: DMABOUNCE in pci-rcar

On Tue, Feb 25, 2014 at 08:49:28AM +0900, Magnus Damm wrote:
> On Mon, Feb 24, 2014 at 8:00 PM, Arnd Bergmann <[email protected]> wrote:
> From my point of view we need some kind of bounce buffer unless we
> have IOMMU support. I understand that an IOMMU would be much better
> than a software-based implementation. Whether it is possible to use an
> IOMMU with these devices remains to be seen.
>
> I didn't know about the SWIOTLB code, nor did I know that
> DMABOUNCE was supposed to be avoided. Now I do!

The reason DMABOUNCE should be avoided is because it is a known source
of OOMs, and that has never been investigated and fixed. You can read
about some of the kinds of problems this code creates here:

http://webcache.googleusercontent.com/search?q=cache:jwl4g8hqWa8J:comments.gmane.org/gmane.linux.ports.arm.kernel/15850+&cd=2&hl=en&ct=clnk&gl=uk&client=firefox-a

We never got to the bottom of that. I could harp on about not having
the hardware, the people with the hardware not being capable of debugging
it, or not willing to litter their kernels with printks when they've
found a reproducible way to trigger it, etc - but none of that really
matters.

What matters is the end result: nothing was ever done to investigate
the causes, so it remains "unsafe" to use.

> I do realize that my following patches madly mix potential bus code
> and actual device support, however..
>
> [PATCH v2 06/08] PCI: rcar: Add DMABOUNCE support
> [PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM
>
> .. without my patches the driver does not handle CONFIG_BOUNCE and
> CONFIG_VMSPLIT_2G.

Can we please kill the idea that CONFIG_VMSPLIT_* has something to do
with DMA? It doesn't. VMSPLIT sets where the boundary between userspace
and kernel space is placed in virtual memory. It doesn't really change
which memory is DMA-able.

There is the BLK_BOUNCE_HIGH option, but that's more to do with drivers
saying "I don't handle highmem pages because I'm old and no one's updated
me".

The same is true of highmem vs bouncing for DMA. Highmem is purely a
virtual memory concept and has /nothing/ to do with whether the memory
can be DMA'd to.

Let's take an extreme example. Let's say I set a 3G VM split, so kernel
memory starts at 0xc0000000. I then set the vmalloc space to be 1024M -
but the kernel shrinks that down to the maximum that can be accommodated,
which leaves something like 16MB of lowmem. Let's say I have 512MB of
RAM in the machine.

Now let's consider I do the same thing, but with a 2G VM split. Have the
memory pages which can be DMA'd to changed at all? Yes, the CPU's view
of pages has changed, but the DMA engine's view hasn't changed /one/ /bit/.

Now consider when vmalloc space isn't expanded to maximum and all that
RAM is mapped into the kernel direct mapped region. Again, any
difference as far as the DMA engine goes? No there isn't.

So, the idea that highmem or vmsplit has any kind of impact on whether
memory can be DMA'd to by the hardware is absolutely absurd.

VMsplit and highmem are CPU-visible concepts, and have very little to do
with whether the memory is DMA-able.

--
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.

2014-02-25 02:00:57

by Magnus Damm

Subject: Re: DMABOUNCE in pci-rcar

Hi Russell,

On Tue, Feb 25, 2014 at 9:17 AM, Russell King - ARM Linux
<[email protected]> wrote:
> On Tue, Feb 25, 2014 at 08:49:28AM +0900, Magnus Damm wrote:
>> On Mon, Feb 24, 2014 at 8:00 PM, Arnd Bergmann <[email protected]> wrote:
>> From my point of view we need some kind of bounce buffer unless we
>> have IOMMU support. I understand that an IOMMU would be much better
>> than a software-based implementation. Whether it is possible to use an
>> IOMMU with these devices remains to be seen.
>>
>> I didn't know about the SWIOTLB code, nor did I know that
>> DMABOUNCE was supposed to be avoided. Now I do!
>
> The reason DMABOUNCE should be avoided is because it is a known source
> of OOMs, and that has never been investigated and fixed. You can read
> about some of the kinds of problems this code creates here:
>
> http://webcache.googleusercontent.com/search?q=cache:jwl4g8hqWa8J:comments.gmane.org/gmane.linux.ports.arm.kernel/15850+&cd=2&hl=en&ct=clnk&gl=uk&client=firefox-a
>
> We never got to the bottom of that. I could harp on about not having
> the hardware, the people with the hardware not being capable of debugging
> it, or not willing to litter their kernels with printks when they've
> found a reproducible way to trigger it, etc - but none of that really
> matters.
>
> What matters is the end result: nothing was ever done to investigate
> the causes, so it remains "unsafe" to use.

Thanks for the pointer! It is good to know.

>> I do realize that my following patches madly mix potential bus code
>> and actual device support, however..
>>
>> [PATCH v2 06/08] PCI: rcar: Add DMABOUNCE support
>> [PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM
>>
>> .. without my patches the driver does not handle CONFIG_BOUNCE and
>> CONFIG_VMSPLIT_2G.
>
> Can we please kill the idea that CONFIG_VMSPLIT_* has something to do
> with DMA? It doesn't. VMSPLIT sets where the boundary between userspace
> and kernel space is placed in virtual memory. It doesn't really change
> which memory is DMA-able.
>
> There is the BLK_BOUNCE_HIGH option, but that's more to do with drivers
> saying "I don't handle highmem pages because I'm old and no one's updated
> me".

Spot on! =)

From my observations, drivers saying that they don't support HIGHMEM
may actually mean that they have a certain physical address
limitation. For instance, if you want to misuse the zones, then on a
32-bit system not supporting HIGHMEM will guarantee that your memory
is within 32 bits. I'm not saying anyone should do that, but I'm sure
that kind of stuff is all over the place. =)

> The same is true of highmem vs bouncing for DMA. Highmem is purely a
> virtual memory concept and has /nothing/ to do with whether the memory
> can be DMA'd to.
>
> Let's take an extreme example. Let's say I set a 3G VM split, so kernel
> memory starts at 0xc0000000. I then set the vmalloc space to be 1024M -
> but the kernel shrinks that down to the maximum that can be accommodated,
> which leaves something like 16MB of lowmem. Let's say I have 512MB of
> RAM in the machine.
>
> Now let's consider I do the same thing, but with a 2G VM split. Have the
> memory pages which can be DMA'd to changed at all? Yes, the CPU's view
> of pages has changed, but the DMA engine's view hasn't changed /one/ /bit/.
>
> Now consider when vmalloc space isn't expanded to maximum and all that
> RAM is mapped into the kernel direct mapped region. Again, any
> difference as far as the DMA engine goes? No there isn't.
>
> So, the idea that highmem or vmsplit has any kind of impact on whether
> memory can be DMA'd to by the hardware is absolutely absurd.
>
> VMsplit and highmem are CPU-visible concepts, and have very little to do
> with whether the memory is DMA-able.

I totally agree with what you are saying. Whether the memory is arranged as
DMA zone, lowmem or HIGHMEM does not matter much from a hardware
point of view. The hardware addressing limitations and the software
concepts of memory zones are often mixed together - I suppose the DMA
zone is the only case where they may have any relation.

The most basic hardware limitation we have with this particular PCI
bridge is that it can only do bus master memory access within the
lowest 32 bits of physical address space (no high LPAE memory banks).
And to make it more complicated, the hardware is even more restricted
than that - the physical address space where bus mastering can happen
is limited to 1 GiB. (The PCI bridge hardware itself can do a 2 GiB
window, but it must be mapped at 0x8... The on-board memory banks are
designed with 2 GiB of memory starting from 0x4.., so out of that only
1 GiB remains usable.)

The reason why VMSPLIT was brought up is that the existing
hard-coded 1 GiB window happens to work with the common 3G VM split
because lowmem also happens to be within 1 GiB. I suppose the code was
written with "luck", so to speak.

The issue is that when VMSPLIT is changed to 2G, the 1 GiB PCI bus
master limitation becomes visible and things bomb out. I solve
that by using DMABOUNCE to support any VMSPLIT configuration with 1 GiB
of bus mastering ability.
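
To put rough numbers on that (assumed round values only - the real
ranges are only hinted at above as 0x8.../0x4..):

#include <linux/types.h>

/*
 * Overlap of the bridge window with a RAM bank, i.e. the memory that
 * can be bus-mastered without bouncing.  With an assumed RAM bank of
 * [0x40000000, 0xC0000000) and an assumed 2 GiB window at
 * [0x80000000, 0x100000000), the overlap is [0x80000000, 0xC0000000),
 * i.e. 1 GiB.
 */
static u64 usable_dma_bytes(u64 ram_base, u64 ram_size,
                            u64 win_base, u64 win_size)
{
        u64 ram_end = ram_base + ram_size;
        u64 win_end = win_base + win_size;
        u64 start = ram_base > win_base ? ram_base : win_base;
        u64 end = ram_end < win_end ? ram_end : win_end;

        return end > start ? end - start : 0;
}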

And the DMABOUNCE code does not support HIGHMEM, so because of that
the block layer BOUNCE is also used.

Thanks for your help.

Cheers,

/ magnus

2014-02-25 12:15:49

by Arnd Bergmann

Subject: Re: DMABOUNCE in pci-rcar

On Tuesday 25 February 2014, Magnus Damm wrote:
> On Mon, Feb 24, 2014 at 8:00 PM, Arnd Bergmann <[email protected]> wrote:

> > * The window base and size in the driver should not be hardcoded, as
> > this is likely not a device specific property but rather an artifact of
> > how it's connected. Use the standard "dma-ranges" property.
>
> I'm afraid that I may not understand your concern fully here. From my
> view, the existing driver (without my patch) has a hard-coded
> board-specific memory base and a hard-coded size in it, and with my
> patches I try to rework that. Do you have any existing example code
> you can recommend?

You are right, this is a preexisting condition, and your patch somewhat
improves this, just not in the way I was hoping for.

At the moment, "dma-ranges" is only used by powerpc platforms, but
we have discussed using it on ARM a couple of times when similar
problems came up. Just today, Santosh Shilimkar posted patches for
mach-keystone, and the PCI support patches that are under review for
arm64 x-gene have a related requirement.

There are two parts to this problem: smaller-than-4GB windows and
windows that are offset to the start of memory. The dma-ranges
approach should handle both. In theory it can also deal with
multiple windows, but we have so far not needed that.
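
Conceptually, a parsed "dma-ranges" entry boils down to a (bus address,
CPU address, size) triple, and the per-buffer decision is a bounds
check plus a fixed offset - roughly like this (the struct and helper
are made up for illustration, not existing kernel code):

#include <linux/types.h>
#include <linux/errno.h>

struct example_dma_range {
        u64 bus_base;   /* first bus (DMA) address of the window */
        u64 cpu_base;   /* corresponding CPU physical address    */
        u64 size;       /* length of the window                  */
};

static int example_cpu_to_bus(const struct example_dma_range *r,
                              u64 cpu_addr, u64 *bus_addr)
{
        if (cpu_addr < r->cpu_base || cpu_addr - r->cpu_base >= r->size)
                return -EINVAL; /* outside the window: bounce or IOMMU needed */

        *bus_addr = cpu_addr - r->cpu_base + r->bus_base;
        return 0;
}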

> Regarding what the device can do and/or how it is connected - here is
> some hardware information:
>
> 1) 3 on-chip PCI bridges per SoC with USB EHCI/OHCI controllers on
> each of them (PCI bridge code is also shared between multiple SoCs and
> of course multiple boards).
>
> 2) The systems have 40-bit physical address space and the CPU can
> address this when LPAE is enabled.
>
> 3) System RAM is available in two banks - one bank in the lowest
> 32-bits (top 8 bits set to 0) and another bank in higher space.
>
> 4) The PCI bridge has a 32-bit base address for the windows with
> alignment requirement (needs to be evenly aligned based on size)
>
> 5) Each PCI bridge instance has two windows, but the supported sizes
> differ: one window supports up to 2 GiB, the other 256 MiB.

Thanks for the background. Now, the only real problem is the case where
the window doesn't span all of the RAM below the 4GiB boundary, but
this would be the case at least in the 256MiB window example.

For systems where you have e.g. 2GB of RAM visible below the 4GB
boundary, you shouldn't need any special code aside from refusing
to set a 64-bit dma mask from the driver.

> Without IOMMU available I came to the conclusion that I need both
> BOUNCE and DMABOUNCE to support the above hardware.

You definitely need either DMABOUNCE or SWIOTLB for this case, yes.
The reason for this is that the PCI code assumes that every DMA
master can access all memory below the 4GB boundary if the device
supports it.

I'm less sure about CONFIG_BOUNCE, and Russell already explained
a few things about it that I didn't know. Normally I would expect
the SWIOTLB code to kick in during dma_map_sg() for block devices
just like it does for network and other devices.
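
To sketch what "kicking in at map time" means (conceptual only, with
hypothetical names - this is not the real lib/swiotlb.c interface, and
it glosses over highmem kmap details):

#include <linux/types.h>
#include <linux/string.h>

struct example_bounce_pool {
        void *vaddr;            /* kernel mapping of the bounce area        */
        dma_addr_t bus_addr;    /* bus address the device uses for it       */
        phys_addr_t win_base;   /* start of the window the device can reach */
        size_t win_size;
};

static dma_addr_t example_map(struct example_bounce_pool *pool,
                              void *buf, phys_addr_t phys, size_t len)
{
        /* directly reachable by the device: no copy needed (assumes a
         * 1:1 phys-to-bus mapping inside the window) */
        if (phys >= pool->win_base &&
            phys + len <= pool->win_base + pool->win_size)
                return (dma_addr_t)phys;

        /* otherwise bounce: copy into the in-window area; highmem or
         * lowmem makes no difference here (no allocator shown) */
        memcpy(pool->vaddr, buf, len);
        return pool->bus_addr;
}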

> > * You should not need your own dma_map_ops in the driver. What you
> > do there isn't really specific to your host but should live in some
> > place where it can be shared with other drivers.
>
> I think this boils down to the less-than-32-bit bus master capability
> and also the poor match to a DMA zone. Consider 24-bit ISA DMA address
> space vs 31-bit DMA space on a 32-bit system. I may be wrong, but a
> DMA zone in my mind is something suitable for the classic ISA DMA, not
> so much for modern complex systems with multiple devices that come
> with different bus master capabilities.
>
> That said, of course it makes sense to share code whenever possible.
> Can you explain a bit more what kind of code you would like to have
> broken out?

I was hoping that we could get to a point where we automatically check
the dma-ranges property for platform devices and set the swiotlb
dma_map_ops if necessary. For PCI devices, I think each device should
inherit the map_ops from its parent.
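
Something along these lines, assuming the architecture provides
set_dma_ops()/get_dma_ops() as ARM does (a sketch only, not existing
core code):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Give a newly added PCI device the dma_map_ops of its parent, so that
 * devices behind the bridge pick up whatever bounce/IOMMU ops the
 * bridge itself was assigned from "dma-ranges". */
static void example_inherit_dma_ops(struct pci_dev *pdev)
{
        if (pdev->dev.parent)
                set_dma_ops(&pdev->dev, get_dma_ops(pdev->dev.parent));
}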

> > * The implementation of the dma_map_ops is wrong: at the very least,
> > you cannot use the dma_mask of the pci host when translating
> > calls from the device, since each pci device can have a different
> > dma mask that is set by the driver if it needs larger or smaller
> > than 32-bit DMA space.
>
> The current version of the driver (with or without my patch) seems to
> leave out all dma mask handling. I understand that this is poor
> programming practice and I would like to see that fixed. As you
> noticed, I did not try to fix this issue in my patch.
>
> It may be worth noting that the PCI devices hanging off the PCI
> bridge instances are all fixed on-chip OHCI and EHCI controllers, and the
> drivers for these USB host controllers do not seem to try to set the
> dma mask as it is today. So we are talking about a fixed set of PCI
> devices here and not an external PCI bus with general-purpose PCI
> drivers. I'm not sure if that makes things much better though...

You still have EHCI calling dma_set_mask(DMA_BIT_MASK(64)) to allow high
DMA, while OHCI doesn't allow it, so even with just two possible devices,
you have a conflict.
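
For illustration (a hypothetical driver, not the actual EHCI/OHCI
code), this is the kind of per-device negotiation that the host
bridge's single dma_mask cannot stand in for:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        /* an EHCI-like driver asks for 64-bit DMA first ... */
        if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) == 0)
                return 0;

        /* ... while an OHCI-like driver (or the fallback) stays at 32-bit */
        return dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
}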

> So yes, I'd like the PCI bridge driver to be fixed with proper dma
> mask handling. The question is just what that mask is supposed to
> represent on a system where we a) may have IOMMU (and if so we can
> address 40 bits), and if not we b) are limited to 31 bits but we also
> have a non-zero base address. I'm not sure how to represent that
> information with a single dma mask. Any suggestions?

With nonzero base address, do you mean you have to add a number to
the bus address to get to the CPU address? That case is handled by the
patches that Santosh just posted.

Do you always have a 31-bit mask when there is no IOMMU, and have an
IOMMU when there is a smaller mask? That may somewhat simplify the
problem space.

> > * The block bounce code can't be enabled if CONFIG_BLOCK is disabled,
> > this is how I noticed your changes.
>
> Do you mean that the following patch is causing some kind of build error?
> [PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM
>
> If possible, can you please let me know which patches you want me
> to rework?

The build error could be worked around by changing it to

select BOUNCE if BLOCK && MMU && HIGHMEM

but I really think you shouldn't need BOUNCE at all, so this needs more
investigation.

> > * The block layer assumes that it will bounce any highmem pages,
> > but that isn't necessarily what you want here: if you have 2GB
> > of lowmem space (CONFIG_VMSPLIT_2G) or more, you will also have
> > lowmem pages that need bouncing.
>
> Good to hear that you also came to the conclusion that the two cases
> need to be handled separately. =)
>
> The lowmem bounce case (CONFIG_VMSPLIT_2G) is implemented in:
> [PATCH v2 06/08] PCI: rcar: Add DMABOUNCE support
>
> The block layer bounce is also enabled in:
> [PATCH 07/08] PCI: rcar: Enable BOUNCE in case of HIGHMEM

But what you do here is only bounce highmem pages. My point is
that highmem is the wrong key here (as Russell also mentioned)
and that you may also have to bounce lowmem pages.

> > On the bright side, your other changes in the same series all look
> > good. Thanks especially for sorting out the probe() function, I was
> > already wondering if we could change that the way you did.
>
> Thanks. Ideally I'd like to support bind and unbind on the bridge too
> (CARDBUS can hotplug PCI, so should we!), but I suppose that sorting
> out the bounce bits is more important.

Agreed.

> Also, about the SWIOTLB code - are you aware of any existing ARM users? I
> also wonder if the code can work with multiple devices - the bits in
> lib/swiotlb.c look like a single instance with global system-wide
> support only - perhaps it needs rework to support 3 PCI bridges?

I haven't looked at the code much, but Xen seems to use it. We can definitely
extend it if you see problems. For instance, I think we may have to
wrap its functions to handle noncoherent buses, as the code was written for x86
and ia64, which are both cache-coherent.

Arnd

2014-02-25 15:44:34

by Arnd Bergmann

Subject: Re: DMABOUNCE in pci-rcar

On Tuesday 25 February 2014, Magnus Damm wrote:
> And the DMABOUNCE code does not support HIGHMEM, so because of that
> the block layer BOUNCE is also used.

Ah, I misunderstood this part previously. I understand better what's
going on now, but this also reinforces the impression that both BOUNCE
and DMABOUNCE are not what you should be doing here.

On a related note, I've had some more discussions with Santosh on IRC,
and I think he's in the exact same position on mach-keystone, so we
should make sure that whatever solution either of you comes up with
also works for the other one.

The situation on keystone may be a little worse even, because all their
DMA masters have a 2GB limit, but it's also possible that the same
is true for you. Which categories of DMA masters do you have on R-Car?

a) less than 32-bit mask, with IOMMU
b) less than 32-bit mask, without IOMMU
c) 32-bit mask
d) 64-bit mask

Arnd

2014-02-26 19:48:41

by Bjorn Helgaas

Subject: Re: DMABOUNCE in pci-rcar

On Mon, Feb 24, 2014 at 4:00 AM, Arnd Bergmann <[email protected]> wrote:
> Hi Magnus,
>
> I noticed during randconfig testing that you enabled DMABOUNCE for the
> pci-rcar-gen2 driver as posted in this patch https://lkml.org/lkml/2014/2/5/30
>
> I didn't see the original post unfortunately, but I fear we have to
> revert it and come up with a better solution, ...

Sounds like I should drop the following patches from my pci/host-rcar
branch for now?

PCI: rcar: Add DMABOUNCE support
PCI: rcar: Enable BOUNCE in case of HIGHMEM
PCI: rcar: Make the Kconfig dependencies more generic

Bjorn

2014-02-26 19:58:17

by Arnd Bergmann

Subject: Re: DMABOUNCE in pci-rcar

On Wednesday 26 February 2014 12:48:17 Bjorn Helgaas wrote:
> On Mon, Feb 24, 2014 at 4:00 AM, Arnd Bergmann <[email protected]> wrote:
> > Hi Magnus,
> >
> > I noticed during randconfig testing that you enabled DMABOUNCE for the
> > pci-rcar-gen2 driver as posted in this patch https://lkml.org/lkml/2014/2/5/30
> >
> > I didn't see the original post unfortunately, but I fear we have to
> > revert it and come up with a better solution, ...
>
> Sounds like I should drop the following patches from my pci/host-rcar
> branch for now?
>
> PCI: rcar: Add DMABOUNCE support
> PCI: rcar: Enable BOUNCE in case of HIGHMEM
> PCI: rcar: Make the Kconfig dependencies more generic

Sounds good to me. The last patch is actually fine, but you'll have to
fix the context to apply it without the other two.

Arnd

2014-02-26 21:03:19

by Bjorn Helgaas

Subject: Re: DMABOUNCE in pci-rcar

On Wed, Feb 26, 2014 at 12:57 PM, Arnd Bergmann <[email protected]> wrote:
> On Wednesday 26 February 2014 12:48:17 Bjorn Helgaas wrote:
>> On Mon, Feb 24, 2014 at 4:00 AM, Arnd Bergmann <[email protected]> wrote:
>> > Hi Magnus,
>> >
>> > I noticed during randconfig testing that you enabled DMABOUNCE for the
>> > pci-rcar-gen2 driver as posted in this patch https://lkml.org/lkml/2014/2/5/30
>> >
>> > I didn't see the original post unfortunately, but I fear we have to
>> > revert it and come up with a better solution, ...
>>
>> Sounds like I should drop the following patches from my pci/host-rcar
>> branch for now?
>>
>> PCI: rcar: Add DMABOUNCE support
>> PCI: rcar: Enable BOUNCE in case of HIGHMEM
>> PCI: rcar: Make the Kconfig dependencies more generic
>
> Sounds good to me. The last patch is actually fine, but you'll have to
> fix the context to apply it without the other two.

OK, I dropped the DMABOUNCE and BOUNCE patches and force-updated my
"next" branch.

Bjorn

2014-02-26 21:52:57

by Arnd Bergmann

Subject: Re: DMABOUNCE in pci-rcar

On Wednesday 26 February 2014, Bjorn Helgaas wrote:
> OK, I dropped the DMABOUNCE and BOUNCE patches and force-updated my
> "next" branch.

Thanks!

Arnd