2002-12-18 02:54:21

by James Bottomley

[permalink] [raw]
Subject: [RFT][PATCH] generic device DMA implementation

The attached should represent close to final form for the generic DMA API. It
includes documentation (surprise!) and an implementation in terms of the pci_
API for every arch (apart from parisc, which will be submitted later).

I've folded in the feedback from the previous thread. Hopefully, this should
be ready for inclusion. If people could test it on x86 and other
architectures, I'd be grateful.

comments and feedback from testing welcome.

James


Attachments:
tmp.diff (38.20 kB)
tmp.diff

2002-12-18 03:05:48

by David Mosberger

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation


James> The attached should represent close to final form for the
James> generic DMA API. It includes documentation (surprise!) and
James> an implementation in terms of the pci_ API for every arch
James> (apart from parisc, which will be submitted later).

James> I've folded in the feedback from the previous thread.
James> Hopefully, this should be ready for inclusion. If people
James> could test it on x86 and other architectures, I'd be
James> grateful.

James> comments and feedback from testing welcome.

Would you mind doing a s/consistent/coherent/g? This has been
misnamed in the PCI DMA interface all along, but I didn't think it was
worth breaking drivers because of it. But since this is a new
interface, there is no such issue.

(Consistency says something about memory access ordering; coherency
only talks about there not being multiple values for a given memory
location. On DMA-coherent platforms with weakly-ordered memory
systems, the returned memory really is only coherent, not consistent,
i.e., you have to use memory barriers if you want to enforce
ordering.)
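
To make that concrete: even with dma_alloc_coherent() memory, a driver
on a weakly-ordered machine still needs barriers between its writes.
A minimal sketch (the descriptor fields and DESC_OWN bit are made up
for illustration):

	/* "ring" points into memory obtained from dma_alloc_coherent() */
	ring[i].addr   = cpu_to_le32(buf_dma);
	ring[i].len    = cpu_to_le32(buf_len);
	wmb();	/* the device must see the descriptor body before it
		   sees ownership change hands */
	ring[i].status = cpu_to_le32(DESC_OWN);	/* hypothetical OWN bit */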

Thanks,

--david

2002-12-27 20:07:42

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

I think you saw that patch to let the new 2.5.53 generic dma code
replace one of the two indirections USB needs. Here are some of
the key open issues I'm thinking of:

- DMA mapping calls still return no errors; so BUG() out instead?

Consider systems where DMA-able memory is limited (like SA-1111,
to 1 MByte); clearly it should be possible for these calls to
fail, when they can't allocate a bounce buffer. Or (see below)
when an invalid argument is provided to a dma mapping call.

Fix by defining fault returns for the current signatures,
starting with the api specs:

* dma_map_sg() returns negative errno (or zero?) when it
fails. (Those are illegal sglist lengths.)

* dma_map_single() returns an arch-specific value, like
DMA_ADDR_INVALID, when it fails. (DaveM's suggestion,
from a while back; it's seemingly arch-specific.)

Yes, the PCI dma calls would benefit from those bugfixes too.
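
Driver code could then look like this sketch (DMA_ADDR_INVALID is the
proposed value, nothing defines it yet; the direction names are assumed
to mirror the pci_ ones):

	dma_addr_t handle;
	int n;

	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (handle == DMA_ADDR_INVALID)		/* proposed failure value */
		return -ENOMEM;

	n = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
	if (n <= 0)				/* proposed failure return */
		goto fail_unmap_single;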

- Implementation-wise, I'm rather surprised that the generic
version doesn't just add new bus driver methods rather than
still insisting everything be PCI underneath.

It's not clear to me how I'd make, for example, a USB device
or interface work with dma_map_sg() ... those "generic" calls
are going to fail (except on x86, where all memory is DMA-able)
since USB != PCI. Even when usb_buffer_map_sg() would succeed.
(The second indirection: the usb controller hardware does the
mapping, not the device or hcd. That's usually PCI.)

Hmm, I suppose there'd need to be a default implementation
of the mapping operations (for all non-pci busses) that'd
fail cleanly ... :)

- There's no analogue to pci_pool, and there's nothing like
"kmalloc" (likely built from N dma-coherent pools).

That forces drivers to write and maintain memory allocators, which
is a waste of energy as well as being bug-prone. So in that
sense this API isn't a complete functional replacement of
the current PCI (has pools, ~= kmem_cache_t) or USB (with
simpler buffer_alloc ~= kmalloc) APIs for dma.
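
For illustration, a dma_-level pool could mirror the pci_pool calls
like this (every dma_pool_* name here is hypothetical):

	struct dma_pool *pool;
	dma_addr_t handle;
	void *td;

	pool = dma_pool_create("td", dev, sizeof (struct td), 16, 0);
	td = dma_pool_alloc(pool, GFP_KERNEL, &handle);
	/* ... hand "handle" to the device, touch "td" from the cpu ... */
	dma_pool_free(pool, td, handle);
	dma_pool_destroy(pool);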

- The API says drivers "must" satisfy dma_get_cache_alignment(),
yet both implementations, asm-{i386,generic}/dma-mapping.h,
ignore that rule.

Are you certain of that rule, for all cache coherency models?
I thought only some machines (with dma-incoherent caches) had
that as a hard constraint. (Otherwise it's a soft one: even
if there's cacheline contention, the hardware won't lose data
when drivers use memory barriers correctly.)

I expect that combination is likely to be problematic, since the
previous rule has been (wrongly?) that kmalloc or kmem_cache
memory is fine for DMA mappings, no size restrictions. Yet, for
one example, on x86 dma_get_cache_alignment() returns 128 bytes,
but kmalloc has several smaller pool sizes ... and lately will
align to L1_CACHE_BYTES (wasting memory on small allocs?) even
when that's smaller than L1_CACHE_MAX (in the new dma calls).

All the more reason to have a drop-in kmalloc alternative for
dma-aware code to use, handling such details transparently!
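
As it stands, the rule would force every driver making small
allocations into something like this sketch ... and it only helps
if the allocator also aligns the *start* of the buffer, which is
exactly the contentious part:

	int align = dma_get_cache_alignment();
	size_t len = (size + align - 1) & ~(align - 1);
	void *buf = kmalloc(len, GFP_KERNEL);	/* tail now fills whole
						   cache lines; the start
						   is only safe if kmalloc
						   aligns it too */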

- Isn't arch/i386/kernel/pci-dma.c handling DMA masks wrong?
It's passing GFP_DMA in cases where GFP_HIGHMEM is correct ...
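
The usual rule of thumb: GFP_DMA is only for ISA-style masks below
16 MBytes. A sketch of the test I'd expect:

	int gfp = GFP_KERNEL;

	if (dev->dma_mask && *dev->dma_mask <= 0x00ffffff)
		gfp |= GFP_DMA;		/* device really can't reach
					   past 16 MBytes */
	/* a mask covering all physical memory could allow GFP_HIGHMEM
	   for streaming mappings, instead of needlessly forcing GFP_DMA */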

I'm glad to see progress on making DMA more generic, thanks!

- Dave

2002-12-27 21:32:10

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> - Implementation-wise, I'm rather surprised that the generic
> version doesn't just add new bus driver methods rather than
> still insisting everything be PCI underneath.

You mean dma-mapping.h in asm-generic? The reason for that is to provide an
implementation that functions now for non-x86 (and non-parisc) archs without
having to write specific code for them all. Since all the other archs now
function in terms of the pci_ API, that was the only way of sliding the dma_
API in without breaking them all.

Bus driver methods have been advocated before, but it's not clear to me that
they should be exposed in the *generic* API.

> It's not clear to me how I'd make, for example, a USB device
> or interface work with dma_map_sg() ... those "generic" calls
> are going to fail (except on x86, where all memory is DMA-able)
> since USB != PCI.

Actually, they should work on parisc out of the box as well because of the way
its DMA implementation is built in terms of the generic dma_ API.

As far as implementing this generically, just adding a case for the
usb_bus_type in asm-generic/dma-mapping.h will probably get you where you need
to be. (The asm-generic version is, after all, only intended as a stopgap.
Fully coherent platforms with no IOMMUs will probably take the x86 route to
implementing the dma_ API; platforms with IOMMUs will probably (eventually) do
similar things to parisc.)
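
Something along these lines, presumably (a sketch of the stopgap
style; usb_to_pci() is a hypothetical helper that digs the controlling
PCI device out of the USB device):

static inline void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, int flag)
{
	if (dev->bus == &pci_bus_type)
		return pci_alloc_consistent(to_pci_dev(dev),
					    size, dma_handle);
	if (dev->bus == &usb_bus_type)	/* the suggested addition */
		return pci_alloc_consistent(usb_to_pci(dev),
					    size, dma_handle);
	BUG();
	return NULL;
}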


> (The second indirection: the usb controller hardware does the
> mapping, not the device or hcd. That's usually PCI.)

Could you clarify this a little? I tend to think of "mapping" as something
done by the IOMMU managing the bus. I think you mean that the usb controller
will mark a region of memory to be accessed by the device. If such a region
were also "mapped" by an IOMMU, it would be done outside the control of the
USB controller, correct? (the IOMMU would translate between the address the
processor sees and the address the USB controller thinks it's responding to)

Is the problem actually that the USB controller needs to be able to allocate
coherent memory in a range much more narrowly defined than the current
dma_mask allows?

> - There's no analogue to pci_pool, and there's nothing like
> "kmalloc" (likely built from N dma-coherent pools).

I didn't want to build another memory pool re-implementation. The mempool API
seems to me to be flexible enough for this; is there some reason it won't work?

I did consider wrapping mempool to make it easier, but I couldn't really
find a simplifying wrapper that wouldn't lose flexibility.

James


2002-12-27 21:39:16

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> - DMA mapping calls still return no errors; so BUG() out instead?

That's actually an open question. The line of least resistance (which is what
I followed) is to do what the pci_ API does (i.e. BUG()).

It's not clear to me that adding error returns rather than BUGging would buy
us anything (because now all the drivers have to know about the errors and
process them).

> Consider systems where DMA-able memory is limited (like SA-1111,
> to 1 MByte); clearly it should be possible for these calls to
> fail, when they can't allocate a bounce buffer. Or (see below)
> when an invalid argument is provided to a dma mapping call.

That's pretty much an edge case. I'm not opposed to putting edge cases in the
api (I did it for dma_alloc_noncoherent() to help parisc), but I don't think
the main line should be affected unless there's a good case for it.

Perhaps there is a compromise where the driver flags in the struct
device_driver that it wants error returns otherwise it takes the default
behaviour (i.e. no error return checking and BUG if there's a problem).

James


2002-12-27 23:47:25

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> This is not entirely correct: The driver must use the normal memory
> barrier instructions even in coherent memory. Could you copy the
> section about wmb() from DMA-mapping into your new documentation?

I made the name change from consistent to coherent (at David Mosberger's
request) to address at least some of this.

I suppose I can add it as a note to dma_alloc_coherent too.

> No one obeys that rule, and it's not trivial to fix it.

Any driver that disobeys this rule today with the pci_ API is prone to cache
related corruption on non-coherent architectures.

> Is it really impossible to work around that in the platform specific
> code? In the worst case, the arch code could memcopy to/from a
> cacheline aligned buffer.

Well, it's not impossible, but I don't believe it can be done efficiently.
And since it can't be done efficiently, I don't believe it's right to impact
the drivers that are properly written to take caching effects into account.

Isn't the better solution to let the platform maintainers negotiate with the
driver maintainers to get those drivers they care about fixed?

James


2002-12-28 00:12:11

by Manfred Spraul

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:

>>No one obeys that rule, and it's not trivial to fix it.
>
>Any driver that disobeys this rule today with the pci_ API is prone to cache
>related corruption on non-coherent architectures.
The networking core disobeys the rule in the sendfile implementation.
Depending on the cacheline size, even small TCP packets might disobey
the rule. The problem is not restricted to drivers.

>>Is it really impossible to work around that in the platform specific
>>code? In the worst case, the arch code could memcopy to/from a
>>cacheline aligned buffer.
>
>Well, it's not impossible, but I don't believe it can be done efficiently.
>And since it can't be done efficiently, I don't believe it's right to impact
>the drivers that are properly written to take caching effects into account.
>
>Isn't the better solution to let the platform maintainers negotiate with the
>driver maintainers to get those drivers they care about fixed?
>
>
I agree that the performance will be bad, but like misaligned memory
access, the arch code should support it. Leave the warning about bad
performance in the documentation, and modify the drivers where it
actually matters.
Your new documentation disagrees with the current implementation, and
that is just wrong.
And in the case of networking, someone must do the double buffering.
Doing it within dma_map_single() would avoid modifying every pci driver.

--
Manfred

2002-12-28 01:15:39

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:
> [email protected] said:
>
>>- Implementation-wise, I'm rather surprised that the generic
>> version doesn't just add new bus driver methods rather than
>> still insisting everything be PCI underneath.
>
>
> You mean dma-mapping.h in asm-generic? ..

Yes. As noted, it can't work for USB directly. And your
suggestion of more "... else if (bus is USB) ... else ... "
logic (#including each bus type's headers?) bothers me.


> Bus driver methods have been advocated before, but it's not clear to me that
> they should be exposed in the *generic* API.

Isn't the goal to make sure that for every kind of "struct device *"
it should be possible to use those dma_*() calls, without BUGging
out. If that's not true ... then why were they defined?

That's certainly the notion I was talking about when this "generic
dma" API idea came up this summer [1]. (That discussion led to
the USB DMA APIs, and then the usb_sg_* calls that let usb-storage
queue scatterlists directly to disk: performance wins, including
DaveM's "USB keyboards don't allocate IOMMU pages", but structures
looking ahead to having real generic DMA calls.)


>> It's not clear to me how I'd make, for example, a USB device
>> or interface work with dma_map_sg() ... those "generic" calls
>> are going to fail (except on x86, where all memory is DMA-able)
>> since USB != PCI.
>
>
> Actually, they should work on parisc out of the box as well because of the way
> its DMA implementation is built in terms of the generic dma_ API.

Most of us haven't seen your PARISC code; it's not in Linus' tree. :)

>> (The second indirection: the usb controller hardware does the
>> mapping, not the device or hcd. That's usually PCI.)
>
>
> Could you clarify this a little?

Actually, make that "hardware-specific code".

The USB controller is what does the DMA. But USB device drivers don't
talk to USB controllers, at least not directly. Instead they talk to a
"struct usb_interface *", or a "struct usb_device *" ... those are more
or less software proxies for the real devices with usbcore and some
HCD turning proxy i/o requests into USB controller operations.

The indirection is getting from the USB device (or interface) to the
object representing the USB controller. All USB calls need that, at
least for host-side APIs, since the controller driver is multiplexing
up to almost 4000 I/O channels. (127 devices * 31 endpoints, max; and
of course typical usage is more like dozens of channels.)


> Is the problem actually that the USB controller needs to be able to allocate
> coherent memory in a range much more narrowly defined than the current
> dma_mask allows?

Nope, it's just an indirection issue. Even on a PCI based system, the "struct
device" used by a USB driver (likely usb_interface->dev) will never be a USB
controller. Since it's the USB controller actually doing the I/O, something
needs to use that controller to do the DMA mapping(s).

So any generic DMA logic needs to be able to drill down a level or so before
doing DMA mappings (or allocations) for a USB "struct device *".

- Dave

[1] http://marc.theaimsgroup.com/?l=linux-kernel&m=102389137402497&w=2


2002-12-28 01:42:26

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation


>>- There's no analogue to pci_pool, and there's nothing like
>> "kmalloc" (likely built from N dma-coherent pools).
>
>
> I didn't want to build another memory pool re-implementation. The mempool API
> seems to me to be flexible enough for this, is there some reason it won't work?

I didn't notice any way it would track, and return, DMA addresses.
It's much like a kmem_cache in that way.


> I did consider wrappering mempool to make it easier, but I couldn't really
> find a simplifying wrapper that wouldn't lose flexibility.

In My Ideal World (tm) Linux would have some kind of memory allocator
that'd be configured to use __get_free_pages() or dma_alloc_coherent()
as appropriate. Fast, efficient; caching pre-initted objects; etc.

I'm not sure how realistic that is. So long as APIs keep getting written
so that drivers _must_ re-invent the "memory allocator" wheel, it's not.

But ... if the generic DMA API includes such stuff, it'd be easy to replace
a dumb implementation (have you seen pci_pool, or how usb_buffer_alloc
works? :) with something more intelligent than any driver could justify
writing for its own use.

- Dave

2002-12-28 02:16:58

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Hi,

>>- DMA mapping calls still return no errors; so BUG() out instead?
>
>
> That's actually an open question. The line of least resistance (which is what
> I followed) is to do what the pci_ API does (i.e. BUG()).

That might have been appropriate for PCI-mostly APIs, since those tend to
be resource-rich. Maybe. (It always seemed like an API bug to me.)

I can't buy that logic in the "generic" case though. Heck, haven't all
the address space allocation calls in Linux always exposed ENOMEM type
faults ... except PCI? This one is _really_ easy to fix now. Resources
are never infinite.


> It's not clear to me that adding error returns rather than BUGging would buy
> us anything (because now all the drivers have to know about the errors and
> process them).

For me, designing any "generic" API to handle common cases (like allocation
failures) reasonably (no BUGging!) is a fundamental design requirement.

Robust drivers are aware of things like allocation faults, and handle them.
If they do so poorly, that can be fixed like any other driver bug.


>> Consider systems where DMA-able memory is limited (like SA-1111,
>> to 1 MByte); clearly it should be possible for these calls to
>> fail, when they can't allocate a bounce buffer. Or (see below)
>> when an invalid argument is provided to a dma mapping call.
>
>
> That's pretty much an edge case. I'm not opposed to putting edge cases in the
> api (I did it for dma_alloc_noncoherent() to help parisc), but I don't think
> the main line should be affected unless there's a good case for it.

Absolutely *any* system can have situations where the relevant address space
(or memory) was all in use, or wasn't available to a non-blocking request
without blocking, etc. Happens more often on some systems than others; I
just chose SA-1111 since your approach would seem to make that unusable.

If that isn't a "good case", why not? And what could ever be a "good case"?


> Perhaps there is a compromise where the driver flags in the struct
> device_driver that it wants error returns otherwise it takes the default
> behaviour (i.e. no error return checking and BUG if there's a problem).

IMO that's the worst of all possible worlds. The error paths would get
even less testing than they do today. If there's a fault path defined,
use it in all cases: don't just BUG() in some modes, and some drivers.

- Dave

2002-12-28 02:40:36

by Adam J. Richter

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

At 2002-12-28 1:29:54 GMT, David Brownell wrote:
>Isn't the goal to make sure that for every kind of "struct device *"
>it should be possible to use those dma_*() calls, without BUGging
>out.

No.

>If that's not true ... then why were they defined?

So that other memory mapped busses such as ISA and sbus
can use them.

USB devices should do DMA operations with respect to their USB
host adapters, typically a PCI device. For example, imagine a machine
with two USB controllers on different bus instances.

One of these days, I'd like to add "struct device *dma_dev;" to
struct request_queue to facilitate optional centralization of
dma_{,un}map_sg for most hardware drivers. PCI SCSI controllers, for
example, would set dma_dev to the PCI device. USB SCSI controllers
would set it to the USB host adapter.

Adam J. Richter __ ______________ 575 Oroville Road
[email protected] \ / Milpitas, California 95035
+1 408 309-6081 | g g d r a s i l United States of America
"Free Software For The Rest Of Us."

2002-12-28 14:51:15

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Adam J. Richter wrote:
> At 2002-12-28 1:29:54 GMT, David Brownell wrote:
>
>>Isn't the goal to make sure that for every kind of "struct device *"
>>it should be possible to use those dma_*() calls, without BUGging
>>out.
>
> No.
>
>>If that's not true ... then why were they defined?
>
> So that other memory mapped busses such as ISA and sbus
> can use them.

That sounds like a "yes", not a "no" ... except for devices on busses
that don't do memory mapped I/O. It's what I described: dma_*()
calls being used with struct device, without BUGging out.


> USB devices should do DMA operations with respect to their USB
> host adapters, typically a PCI device. For example, imagine a machine
> with two USB controllers on different bus instances.

USB already does that ... you write as if it didn't. It's done that
since pretty early in the 2.4 series, when conversions to the "new" PCI
DMA APIs replaced usage of virt_to_bus and bus_to_virt and USB became
usable on platforms that weren't very PC-like. Controllers that don't
use PCI also need Linux support, in embedded configs.

However, the device drivers can't do that using the not-yet-generic DMA
calls. It's handled by usbcore, or by the USB DMA calls. That works,
and will continue to do so ... but it does highlight how "generic" the
implementation of those APIs is (not very).

- Dave

2002-12-28 15:32:59

by Adam J. Richter

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

David Brownell wrote:
>Adam J. Richter wrote:
>> At 2002-12-28 1:29:54 GMT, David Brownell wrote:
>>
>>>Isn't the goal to make sure that for every kind of "struct device *"
^^^^^
>>>it should be possible to use those dma_*() calls, without BUGging
>>>out.
>>
>> No.
>>
>>>If that's not true ... then why were they defined?
>>
>> So that other memory mapped busses such as ISA and sbus
>> can use them.

>That sounds like a "yes", not a "no" ... except for devices on busses
^^^^^^
Then it's not "every", which was your question. I guess I need
to understand better how to interpret your questions.

>that don't do memory mapped I/O. It's what I described: dma_*()
>calls being used with struct device, without BUGging out.

Let's see if we agree. The behavior I expect is:

addr = dma_malloc(&some_usb_device->dev, size, &dma_addr, DMA_CONSISTENT)
===> BUG()

addr = dma_malloc(host_dev(some_usb_device), size, &dma_addr, DMA_CONSISTENT)
===> some consistent memory (or NULL).

where host_dev would be something like:

struct device *host_dev(struct usb_device *usbdev)
{
	struct usb_hcd *hcd = usbdev->bus->hcpriv;
	return &hcd->pdev->dev; /* actually, this would become &hcd->dev */
}


>> USB devices should do DMA operations with respect to their USB
>> host adapters, typically a PCI device. For example, imagine a machine
>> with two USB controllers on different bus instances.

>USB already does that ... you write as if it didn't.

I wasn't aware of it. I've looked up the code now. Thanks
for the pointer. With generic DMA operations, we can delete most of
drivers/usb/core/buffer.c, specifically
hcd_buffer_{alloc,free,{,un}map{,_sg},dmasync,sync_sg}, and their
method pointers in struct usb_operations in drivers/usb/core/hcd.[ch]
without introducing PCI dependency.

I hope this clarifies things. Please let me know if you think
we still disagree on something.

Adam J. Richter __ ______________ 575 Oroville Road
[email protected] \ / Milpitas, California 95035
+1 408 309-6081 | g g d r a s i l United States of America
"Free Software For The Rest Of Us."

2002-12-28 16:05:17

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

OK, the attached is a sketch of an implementation of bus_type operations.

It renames all the platform dma_ operations to platform_dma_ and, if a
bus-specific operation exists, calls only that. Thus it will be the
responsibility of the bus to call the platform_dma_ functions correctly (this
one is a large loaded gun).
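
In outline, each dma_ entry point becomes a thin dispatcher; a sketch
(the dma_ops field name on struct bus_type is illustrative, not final):

static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size, int direction)
{
	if (dev->bus->dma_ops && dev->bus->dma_ops->map_single)
		return dev->bus->dma_ops->map_single(dev, ptr, size,
						     direction);
	return platform_dma_map_single(dev, ptr, size, direction);
}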

The answer to error handling in the general case is still no (because I don't
want to impact the main line code for a specific problem, and the main line is
x86 which effectively has infinite mapping resources), but I don't see why the
platforms can't export a set of values they guarantee not to return as
dma_addr_t's that you can use for errors in the bus implementations.

Would this solve most of your problems?

James


Attachments:
tmp.diff (4.97 kB)
tmp.diff

2002-12-28 16:09:52

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> The indirection is getting from the USB device (or interface) to the
> object representing the USB controller. All USB calls need that, at
> least for host-side APIs, since the controller driver is multiplexing
> up to almost 4000 I/O channels. (127 devices * 31 endpoints, max; and
> of course typical usage is more like dozens of channels.)

This sounds like a mirror of the problem of finding the IOMMU on parisc (there
can be more than one).

The way parisc solves this is to look in dev->platform_data and, if that's
NULL, walk up dev->parent until the IOMMU is found, then cache the IOMMU ops
in the current dev->platform_data. Obviously, you can't use platform_data,
but you could use driver_data for this. The IOMMUs actually lie on a
parisc-specific bus, so the ability to walk up the device tree without having
to know the device types was crucial to implementing this.
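
In outline (a sketch; is_iommu() and iommu_ops_of() stand in for the
parisc-specific tests):

void *find_iommu_ops(struct device *dev)
{
	struct device *p;

	if (dev->platform_data)
		return dev->platform_data;	/* cached on first lookup */
	for (p = dev->parent; p; p = p->parent) {
		if (is_iommu(p)) {
			dev->platform_data = iommu_ops_of(p);
			break;
		}
	}
	return dev->platform_data;
}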

James


2002-12-28 16:18:12

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> Your new documentation disagrees with the current implementation, and
> that is just wrong.

I don't agree that protecting users from cache line overlap misuse is part of
the current implementation. It's certainly not on parisc, which was the
non-coherent platform I chose to model this on; which platforms do it now for
the pci_ API?

James


2002-12-28 16:45:39

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Hi,

>>>>Isn't the goal to make sure that for every kind of "struct device *"
> ^^^^^
>>>>it should be possible to use those dma_*() calls, without BUGging
>>>>out.
>>That sounds like a "yes", not a "no" ... except for devices on busses
> ^^^^^^
> Then it's not "every", which was your question. I guess I need
> to understand better how to interpret your questions.

My bad ... I meant "every" kind of device doing memory mapped I/O, which
is all that had been mentioned so far. This needs to work on more than
just the "platform bus"; "layered busses" shouldn't need special casing,
they are just as common. You had mentioned SCSI busses; USB, FireWire,
and others also exist. (Though I confess I still don't like BUG() as a
way to fail much of anything!)

That's likely a better way to present one of my points: if it's "generic",
then it must work on more than just the lowest level platform bus.


>>that don't do memory mapped I/O. It's what I described: dma_*()
>>calls being used with struct device, without BUGging out.
>
>
> Let's see if we agree. The behavior I expect is:
>
> addr = dma_malloc(&some_usb_device->dev,size, &dma_addr,DMA_CONSISTENT)
> ===> BUG()

That's not consistent with what I thought you said. USB devices
use memory mapped I/O, so this should work. (And using BUG instead
of returning null still seems wrong...)

However, we could agree that some kind of dma_malloc() should exist !!


> addr = dma_malloc(host_dev(some_usb_device), size, &dma_addr, DMA_CONSISTENT)
> ===> some consistent memory (or NULL).
>
> where host_dev would be something like:
>
> struct device *host_dev(struct usb_device *usbdev)
> {
> 	struct usb_hcd *hcd = usbdev->bus->hcpriv;
> 	return &hcd->pdev->dev; /* actually, this would become &hcd->dev */
> }

Please look at the 2.5.53 tree with my "usbcore dma updates (and doc)"
patch, which Greg has now merged and submitted to Linus.

The pre-existing USB DMA API syntax did not change, so it looks only
"something" like what you wrote there.

- Dave

2002-12-28 17:27:29

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:
> OK, the attached is a sketch of an implementation of bus_type operations.

Quick reaction ....

Those signatures look more or less right, at a quick glance,
except that allocating N bytes should pass a __GFP_WAIT flag.
(And of course, allocating a mapping needs a failure return.)

That bus_dma_ops is more of a "vtable" approach, and I confess
I'd been thinking of hanging some object that had internal state
as well as method pointers. (Call it a "whatsit" for the moment.)

That'd make it possible for layered busses like USB and SCSI
to just reference the "whatsit" from the parent bus in their
layered "struct device" objects. [1]

In many cases that'd just end up being a ref to the "platform
whatsit", eliminating a conditional test from the hot path from
your sketch as well as an entire set of new "platform_*()" APIs.

- Dave

[1] That is, resembling what Benjamin Herrenschmidt suggested:

http://marc.theaimsgroup.com/?l=linux-kernel&m=102389432006266&w=2

2002-12-28 17:46:37

by Manfred Spraul

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:

>[email protected] said:
>
>>Your new documentation disagrees with the current implementation, and
>>that is just wrong.
>
>I don't agree that protecting users from cache line overlap misuse is part of
>the current implementation. It's certainly not on parisc, which was the
>non-coherent platform I chose to model this on; which platforms do it now for
>the pci_ API?
You are aware that "users" is not one or two drivers that no one uses;
it's the whole networking stack.
What do you propose to fix sendfile() and networking with small network
packets [e.g. two 64 byte packets within a 128 byte cache line]?

One platform that handles it is Miles Bader's memcpy-based
dma_map_single() implementation.
http://marc.theaimsgroup.com/?l=linux-kernel&m=103907087825616&w=2

And obviously i386, i.e. all archs with empty dma_map_single() functions.

I see three options:
- modify the networking core, and enforce that a cache line is never
shared between users for such archs. Big change. Often not necessary -
some nics must double buffer internally anyway.
- modify every driver that doesn't do double buffering, and enable
double buffering on the affected archs. Even larger change.
- do the double buffering in dma_map_single() & co.

One problem for double buffering in dma_map_single() is that it would
double buffer too often: for example, the start of the rx buffers is
usually misaligned by the driver, to ensure that the IP headers are
aligned. The rest of the cacheline is unused, but it's not possible to
give that information to dma_map_single().
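
For reference, the typical rx setup that produces those misaligned
buffers (real pci_ calls; PKT_BUF_SZ is a driver-local constant):

	skb = dev_alloc_skb(PKT_BUF_SZ + 2);
	skb_reserve(skb, 2);	/* after the 14-byte ethernet header,
				   the IP header is longword aligned */
	mapping = pci_map_single(pdev, skb->data, PKT_BUF_SZ,
				 PCI_DMA_FROMDEVICE);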

--
Manfred


2002-12-28 18:06:23

by Russell King

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

I've just been working through the ARM dma stuff, converting it to the
new API, and I found this:

> +static inline int
> +pci_dma_supported(struct pci_dev *hwdev, u64 mask)
> +{
> + return dma_supported(&hwdev->dev, mask);
> +}
> (etc)

I'll now pull out a bit from DMA-mapping.txt:

| Using Consistent DMA mappings.
|
| To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
| you should do:
|
| dma_addr_t dma_handle;
|
| cpu_addr = pci_alloc_consistent(dev, size, &dma_handle);
|
| where dev is a struct pci_dev *. You should pass NULL for PCI like buses
| where devices don't have struct pci_dev (like ISA, EISA). This may be
| called in interrupt context.

What happens to &hwdev->dev when you do as detailed there and pass NULL
into these "compatibility" functions? Probably an oops.

I think these "compatibility" functions need to do:

static inline xxx
pci_xxx(struct pci_dev *hwdev, ...)
{
	dma_xxx(hwdev ? &hwdev->dev : NULL, ...);
}

so they remain correct to existing API users' expectations.

--
Russell King ([email protected]) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html

2002-12-28 18:02:19

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:
> [email protected] said:
>
>>The indirection is getting from the USB device (or interface) to the
>>object representing the USB controller. ...
>
> This sounds like a mirror of the problem of finding the IOMMU on parisc (there
> can be more than one).

Wouldn't it be straightforward to package that IOMMU solution using the
"call dev->whatsit->dma_op()" approach I mentioned? Storing data in
the "whatsit" seems more practical than saying driver_data is no longer
available to the device's driver. (I'll be agnostic on platform_data.)

This problem seems to me to be a common layering requirement. All the
indirections are known when the device structure is being initted, so it
might as well be set up then. True for PARISC (right?), as well as USB,
SCSI, and most other driver stacks. I suspect it'd even allow complex
voodoo for multi-path I/O too...

- Dave





2002-12-28 18:05:08

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> You are aware that "users" is not one or two drivers that noone uses,
> it's the whole networking stack.

I am aware of this. I'm also aware that it is *currently* broken with the old
API on all non-coherent archs bar the one you point out.

All I actually did was document the existing problem, I think.

How bad actually is it? Networking seems to work fine for me on non-coherent
parisc. Whereas, when I had this cache line overlap problem in a SCSI driver,
I was seeing corruption all over the place.

The problem really only occurs if the CPU can modify part of a cache line
while a device has modified memory belonging to another part. Now a flush
from the CPU will destroy the device data (or an invalidate from the driver
will destroy the CPU's data). The problem is effectively rendered harmless if only
data going in the same direction shares a cache line (even if it is for
different devices). It strikes me that this is probably true for network data
and would explain the fact that I haven't seen any obvious network related
corruption.

James


2002-12-28 18:11:06

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> What happens to &hwdev->dev when you do as detailed there and pass
> NULL into these "compatibility" functions? Probably an oops.

Yes. Already found by Udo Steinberg and fixed in bk latest...

James


2002-12-28 18:16:54

by Manfred Spraul

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:

>The problem really only occurs if the CPU can modify part of a cache line
>while a device has modified memory belonging to another part. Now a flush
>from the CPU will destroy the device data (or an invalidate from the driver
>destroy the CPU's data). The problem is effectively rendered harmless if only
>data going in the same direction shares a cache line (even if it is for
>different devices). It strikes me that this is probably true for network data
>and would explain the fact that I haven't seen any obvious network related
>corruption.
>
>
Yes. Networking usually generates exclusive cachelines.
I'm aware of two special cases:
If multiple kmalloc buffers fit into one cacheline, then it can happen
all the time. But the smallest kmalloc buffer is 64 bytes [assuming page
size > 4096].
Is your cache line >= 128 bytes?

Or sendfile() of a mmap'ed file that is modified by userspace. That is
the recommended approach for zerocopy tx, but I'm not sure which apps
actually use that. IIRC DaveM mentioned the approach.

Additionally, the TCP checksum could catch the corruption and resend the
packet - you wouldn't notice the corruption, unless you use hw checksums.

--
Manfred

2002-12-28 18:31:55

by James Bottomley

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

[email protected] said:
> If multiple kmalloc buffers fit into one cacheline, then it can happen
> all the time. But the smallest kmalloc buffer is 64 bytes [assuming
> page size > 4096].

Actually, I did forget to mention that on parisc non-coherent, the minimum
kmalloc allocation is the cache line width, so that problem cannot occur.

Hmm, perhaps that is an easier (and faster) approach to fixing the problems on
non-coherent platforms?

James


2002-12-28 19:57:24

by Manfred Spraul

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

James Bottomley wrote:

>[email protected] said:
>
>>If multiple kmalloc buffers fit into one cacheline, then it can happen
>>all the time. But the smallest kmalloc buffer is 64 bytes [assuming
>>page size > 4096].
>
>Actually, I did forget to mention that on parisc non-coherent, the minimum
>kmalloc allocation is the cache line width, so that problem cannot occur.
>
>Hmm, perhaps that is an easier (and faster) approach to fixing the problems on
>non-coherent platforms?
How do you want to fix sendfile()?
Note that I'm thinking along the same lines as reading an unaligned
integer: the arch must provide a trap handler that emulates misaligned
reads, but it should never happen, except if someone manually creates an
IP packet with odd options to perform a DoS attack. Restricting kmalloc
is obviously faster, but I'm not convinced that this really catches all
corner cases.

A memcpy() based dma_map implementation would have another advantage:
enable it on i386, and you'll catch everyone that violates the dma spec
immediately.

The only problem is that the API is bad - networking buffers are usually
2 kB allocations, 2 kB aligned.
The actual data area that is passed to pci_map_single starts at offset 2
and has an odd length. It's not possible to pass the information to
dma_map_single() that the rest of the cacheline is unused and that
double buffering is not required despite the misalignment.
Except for sendfile(), where double buffering is necessary.

--
Manfred

2002-12-28 20:03:05

by Adam J. Richter

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Regarding the problem of multiple users of inconsistent
streaming memory potentially sharing the same cache line, I suspect
that the number of places that need to be fixed (or even just ought to
be commented) is probably small.

First of all, I believe it is sufficient if we just ensure
that the memory used for device --> cpu transfers does not share a cache
line (at least not with anything that the CPU may write to), as that is
the only direction which involves invalidating data in the CPU cache.

So, for network packets, we should only be concerned about
inbound ones, in which case the maximum packet size has usually been
allocated anyhow.

I believe all of the current block device IO generators
generate transfers that are aligned and sized in units of at least 512
(although struct bio_vec does not require this), which I think is a
multiple of cache line size of all current architectures.

I haven't checked, but I would suspect that even for the remaining
non-network non-block devices that do large amounts of input via DMA,
such as scanners (typically via SCSI or USB), the input buffers
allocated for these transfers happen to be a multiple of cache line
size, just because they're large and because programmers like to use
powers of two and the memory allocators usually end up aligning them
on such a power of two.

Just to be clear: I'm only talking about corruption due to
streaming mappings overlapping other data in the same CPU cache line.
I am not talking about using inconsistent memory to map control
structures on architectures that lack consistent memory.

Adam J. Richter __ ______________ 575 Oroville Road
[email protected] \ / Milpitas, California 95035
+1 408 309-6081 | g g d r a s i l United States of America
"Free Software For The Rest Of Us."

2002-12-28 22:11:31

by Adam J. Richter

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Odd number of ">" = David Brownell
Even number of ">>" = Adam Richter

>[...] This [DMA mapping operations] needs to work on more than
>just the "platform bus"; "layered busses" shouldn't need special casing,
>they are just as common.

It is not necessarily the case that every non-DMA device has
exactly one DMA device "parent." You can have SCSI devices that are
multipathed from multiple scsi controllers. Software RAID devices can
drive multiple controllers. Some devices have no DMA in their IO
paths, such as some parallel port devices (and, yes, there is a
parallel bus in that you can daisy chain multiple devices off of one
port, each device having a distinct ID). If drivers are explicit
about which DMA mapped device they are referring to, it will simplify
things like adding support for multipath failover.

On the other hand, it might be a convenient shorthand to be able to
say dma_malloc(usb_device,....) instead of
dma_malloc(usb_device->controller, ...). It's just that the number of
callers is small enough so that I don't think that the resulting code
shrink would make up for the size of the extra wrapper routines. So,
I'd rather have more clarity about exactly which device's DMA
constraints are being used.

By the way, one idea that I've mentioned before that might
help catch some memory allocation bugs would be a type scheme so that
the compiler could catch some of the mistakes, like so:

/* PCI, ISA, sbus, etc. devices embed this instead of struct device: */
struct dma_device {
	u64 dma_mask;
	/* other stuff? */

	struct device dev;
};

void *dma_malloc(struct dma_device *dma_dev, size_t nbytes,
		 dma_addr_t *dma_addr, unsigned int flags);


Also, another separate feature that might be handy would be
a new field in struct device:

struct device {
	....
	struct dma_device *dma_dev;
};

device.dma_dev would point back to the device in the case of PCI,
ISA and other memory mapped devices, and it would point to the host
controller for USB devices, the SCSI host adapter for SCSI devices, etc.
Devices that do fail over might implement device-specific spinlock to
guard access to this field so that it could be changed on the fly.

So, for example, the high level networking code could consolidate
mapping of outgoing packets by doing something like:

	skbuff->dma_addr = dma_map_single(netdev->dev->dma_dev,
					  skbuff->data, ...);

...and that would even work for USB or SCSI ethernet adapters.



>You had mentioned SCSI busses; USB, FireWire,
>and others also exist. (Though I confess I still don't like BUG() as a
>way to fail much of anything!)

BUG() is generally the optimal way to fail due to programmer
error, as opposed to program error. You want to catch the bug as
early as possible. If you have a system where you want to do
something other than exiting the current process with a fake SIGSEGV (say
you want to try to invoke a debugger or do a more graceful system call
return), you can redefine BUG() to your liking. Writing lots of code
to carefully unwind programmer error usually leads to so much more
complexity that the overall effect is a reduction in reliability, and,
besides, you get into an infinite development cycle of trying to
recover from all possible programmer errors in the recovery code, and
then in the recovery code for the recovery code and so on.

>Please look at the 2.5.53 tree with my "usbcore dma updates (and doc)"
>patch, which Greg has now merged and submitted to Linus.

This looks great. Notice that you're only doing DMA
operations on usb_device->controller, which is a memory-mapped device
(typically PCI).

Adam J. Richter __ ______________ 575 Oroville Road
[email protected] \ / Milpitas, California 95035
+1 408 309-6081 | g g d r a s i l United States of America
"Free Software For The Rest Of Us."

2002-12-30 23:09:29

by David Brownell

[permalink] [raw]
Subject: Re: [RFT][PATCH] generic device DMA implementation

Adam J. Richter wrote:
> Odd number of ">" = David Brownell
> Even number of ">>" = Adam Richter

Toggle that one more time ... ;)

>
> On the other hand, it might be a convenient shorthand be able to
> say dma_malloc(usb_device,....) instead of
> dma_malloc(usb_device->controller, ...). It's just that the number of
> callers is small enough so that I don't think that the resulting code
> shrink would make up for the size of the extra wrapper routines. So,

Since about 2.5.32 that API has been

void *usb_buffer_alloc(struct usb_device *, size_t, int mem_flags, dma_addr_t *)

Sure -- when dma_alloc() is available, we should be able to make it
inline completely. Done correctly it should be an object code shrink.
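
A sketch of that inlining, assuming the controller's struct device
ends up reachable as dev->bus->controller (a field name from the
discussion elsewhere in this thread, not something merged yet):

static inline void *
usb_buffer_alloc(struct usb_device *dev, size_t size,
		 int mem_flags, dma_addr_t *dma)
{
	return dma_alloc_coherent(dev->bus->controller, size,
				  dma, mem_flags);
}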


> struct device {
> 	....
> 	struct dma_device *dma_dev;
> };
>
> device.dma_dev would point back to the device in the case of PCI,
> ISA and other memory mapped devices, and it would point to the host
> controller for USB devices, the SCSI host adapter for SCSI devices, etc.

With 'dma_device' being pretty much the 'whatsit' I mentioned: some state
(from platforms that need it, like u64 dma_mask and maybe a list of pci
pools to use with dma_malloc), plus methods basically like James' signatures
from 'struct bus_dma_ops'.

Yes, that'd be something that might be the platform implementation (often
pci, if it doesn't vanish like on x86), something customized (choose dma
paths on the fly) or just BUG() out.


> BUG() is generally the optimal way to fail due to programmer
> error, as opposed to program error. You want to catch the bug as
> early as possible.

I can agree to that in scenarios like relying on DMA ops with hardware
known not to support them. If it ever happens, there's deep confusion.

But not in the case of generic dma "map this buffer" operations failing
because of issues like temporary resource starvation; or almost any
other temporary allocation failure that appears after the system booted.


>>Please look at the 2.5.53 tree with my "usbcore dma updates (and doc)"
>>patch, which Greg has now merged and submitted to Linus.
>
> This looks great. Notice that you're only doing DMA
> operations on usb_device->controller, which is a memory-mapped device
> (typically PCI).

Actually it isn't necessarily ... some host controllers talk I/O space
using FIFOs for commands and data, rather than memory mapping registers,
shared memory request schedules, and DMAing to/from the kernel buffers.
Linux would want a small tweak to support those controllers; maybe it'd
be as simple as testing whether there's a dma_whatsit object pointer.

The usb_buffer_*map*() calls could now be inlined, but I thought I'd rather
only leave one copy of all the "don't go through null pointer" checking.
If we ever reduce such checking in USB, those routines would all be
good candidates for turning into inlined calls to dma_*() calls.

- Dave

2002-12-30 23:33:57

by Scott Robert Ladd

[permalink] [raw]
Subject: [2.5.52] NFS works with 2.4.20, not with Win2K/SFU

Heck if I know whose bug this is...

One of my systems runs kernel 2.5.52; its NFS shares mount fine with my
2.4.20 system, and the 2.4.20 shares mount properly on the 2.5.52 system.
All's happy in Linuxland.

Unfortunately, Windows is *not* happy. My system using Windows 2000
w/"Services for Unix" can mount the NFS exports from the 2.4.20 machine --
but while the Win2K box can *see* the 2.5.52 shares, it suffers terribly
when trying to mount them -- sometimes locking up, sometimes telling me the
share can't be found.

Another oddity: The Win2k machine sees the 2.4.20 system by IP address, and
the 2.5.52 system by name.

I have the 2.5.52 kernel compiled for NFS4. Both the 2.5.52 system and
2.4.20 have identical /etc/exports and /etc/hosts.

I'm quite willing to lay this in the lap of those jolly folk in Redmond, but
I was wondering if anyone knew of incompatibility between 2.5.52 NFS and
Win2K/SFU.

..Scott

--
Scott Robert Ladd
Coyote Gulch Productions, http://www.coyotegulch.com
No ads -- just very free (and somewhat unusual) code.

2002-12-31 02:34:31

by Joe

[permalink] [raw]
Subject: Re: [2.5.52] NFS works with 2.4.20, not with Win2K/SFU

No telling what kind of odd bugs could be in
MS Services for Unix; I was not at all impressed
with what I saw of it at LinuxWorld last Aug -
it seemed to be mostly a broken toy...

My experience suggests that you'll have much
better luck using samba for your unix-to-pc
connectivity needs.

Just a thought,

Joe



2002-12-31 04:23:48

by Scott Robert Ladd

[permalink] [raw]
Subject: RE: [2.5.52] NFS works with 2.4.20, not with Win2K/SFU

> My experience suggests that you'll have much
> better luck using samba for your unix-to-pc
> connectivity needs.

I'm already using Samba; however, since most of my network is Linux-based
with NFS shares, it seemed reasonable to try and use NFS for everything. I
dislike creating two access points (NFS and Samba) for the same share -- but
then again, I probably over-estimated the ability of Windows.

..Scott