On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
> We make the dma_mapping_ops structure to point to our structure so
> that every DMA access goes through us. (This is the reason this only
> works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
> struct.)
I need the same facility for Calgary for falling back to swiotlb if a
translation is disabled on some slot, and IB needs the same facility
for some IB adapters (e.g., ipath). Perhaps it's time to consider
stackable dma-ops (unless someone has a better idea...).
Cheers,
Muli
On Monday 12 November 2007 16:20:01 Muli Ben-Yehuda wrote:
> On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
> > We make the dma_mapping_ops structure to point to our structure so
> > that every DMA access goes through us. (This is the reason this only
> > works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
> > struct.)
>
> I need the same facility for Calgary for falling back to swiotlb if a
> translation is disabled on some slot, and IB needs the same facility
> for some IB adapters (e.g., ipath). Perhaps it's time to consider
> stackable dma-ops (unless someone has a better idea...).
That would make great sense and simplify implementations.
How do you propose such an implementation? An array of function pointers for
each possible call?
>
> Cheers,
> Muli
Amit.
On Mon, Nov 12, 2007 at 05:26:24PM +0530, Amit Shah wrote:
> On Monday 12 November 2007 16:20:01 Muli Ben-Yehuda wrote:
> > On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
> > > We make the dma_mapping_ops structure to point to our structure so
> > > that every DMA access goes through us. (This is the reason this only
> > > works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
> > > struct.)
> >
> > I need the same facility for Calgary for falling back to swiotlb if a
> > translation is disabled on some slot, and IB needs the same facility
> > for some IB adapters (e.g., ipath). Perhaps it's time to consider
> > stackable dma-ops (unless someone has a better idea...).
>
> That would make great sense and simplify implementations.
>
> How do you propose such an implementation? An array of function
> pointers for each possible call?
I was thinking of simply a chain of dma_ops (via dma_ops->prev_ops),
where it's the responsibility of each dma_ops implementation to call
or not call the corresponding entry in the chain (prev_ops->op()). This
works well for Calgary (which will only use prev_ops selectively), and
I think it will work well for the IB folks. Will it work for you?
Cheers,
Muli
On Monday 12 November 2007 19:02:07 Muli Ben-Yehuda wrote:
> On Mon, Nov 12, 2007 at 05:26:24PM +0530, Amit Shah wrote:
> > On Monday 12 November 2007 16:20:01 Muli Ben-Yehuda wrote:
> > > On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
> > > > We make the dma_mapping_ops structure to point to our structure so
> > > > that every DMA access goes through us. (This is the reason this only
> > > > works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
> > > > struct.)
> > >
> > > I need the same facility for Calgary for falling back to swiotlb if a
> > > translation is disabled on some slot, and IB needs the same facility
> > > for some IB adapters (e.g., ipath). Perhaps it's time to consider
> > > stackable dma-ops (unless someone has a better idea...).
> >
> > That would make great sense and simplify implementations.
> >
> > How do you propose such an implementation? An array of function
> > pointers for each possible call?
>
> I was thinking of simply a chain of dma_ops (via dma_ops->prev_ops),
> where it's the responsibility of each dma_ops implementation to call
> or not call the corresponding entry in the chain (prev_ops->op()). This
> works well for Calgary (which will only use prev_ops selectively), and
> I think it will work well for the IB folks. Will it work for you?
Selectively? What happens in the case when some iommu doesn't want to invoke
the prev_op, but the mapping depends on it being called (e.g., the hypercall
op is embedded somewhere in the prev_op chain)?
Hmm, also, a hypercall should be the last operation to be called in a few
cases, but also the first (and the last) to be called in several other cases.
For example, in a guest, you can register any number of iotlbs, but you
don't actually want to do anything there -- you just want to do a hypercall
and get the mapping from the host.
But in any case, what ensures that the hypercall op always gets called, and
also that it's the last one?
Also, I'm thinking of implementations where, let's say, sg_map_free is not
defined for a particular iotlb, but it was defined in the previously
registered one. How do we handle this?
A small dispatcher which takes care of this seems the likely choice
here, but avoiding it (or at least caching its decisions) is something that
needs more thought.
On Mon, Nov 12, 2007 at 07:25:27PM +0530, Amit Shah wrote:
> Selectively? What happens in the case when some iommu doesn't want
> to invoke the prev_op, but the mapping depends on it being called
> (eg, the hypercalling op is embedded somewhere in the prev_op chain)
Bad things :-)
There needs to be a hierarchy of dma-ops, e.g., nommu/swiotlb, then a
hardware iommu, then pvdma. Not sure where IB fits in here. The
calling order would be the reverse of the initialization order, so
pvdma->hardware->nommu/swiotlb.
> Hmm, also, a hypercall should be the last operation to be called in
> a few cases, but also the first (and the last) to be called in
> several other cases. For example, in a guest, you can go register
> any number of iotlbs, but you don't actually want to do anything
> there -- you just want to do a hypercall and get the mapping from
> the host.
>
> But in any case, what ensures that the hypercall op always gets
> called and also that it's the last one?
If it gets called first it can ensure that it runs either first or
last, or both, since it controls when to run the other hooks, before
or after it does what it needs to do.
> Also, I'm thinking of implementations where let's say sg_map_free is
> not defined for a particular iotlb, but it was defined in the
> previously registered one. How to handle this?
Good point; this will require all dma ops implementations to provide
stubs that just call prev_ops->op() if it's set.
> It seems a small dispatcher which takes care of this seems the
> likely choice here, but avoiding it (or at least caching the
> decisions) is something that needs more thought.
Yeah, I'm not too enthusiastic about it, but we do need such a generic
mechanism or we will each end up implementing our own versions of
it...
Cheers,
Muli
Muli Ben-Yehuda wrote:
> On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
>
>> We make the dma_mapping_ops structure to point to our structure so
>> that every DMA access goes through us. (This is the reason this only
>> works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
>> struct.)
>
> I need the same facility for Calgary for falling back to swiotlb if a
> translation is disabled on some slot, and IB needs the same facility
> for some IB adapters (e.g., ipath). Perhaps it's time to consider
> stackable dma-ops (unless someone has a better idea...).
Hmm, at least the latter sounds like per-device dma_ops would be more
useful than stackable ones, as each stack instance just checks "should I
do something for device $foo? If not, call the next one ...".
cheers,
Gerd
On Mon, Nov 12, 2007 at 12:50:01PM +0200, Muli Ben-Yehuda wrote:
> On Wed, Nov 07, 2007 at 04:21:04PM +0200, Amit Shah wrote:
>
> > We make the dma_mapping_ops structure to point to our structure so
> > that every DMA access goes through us. (This is the reason this only
> > works for 64-bit guest. 32-bit guest doesn't yet have a dma_ops
> > struct.)
>
> I need the same facility for Calgary for falling back to swiotlb if a
> translation is disabled on some slot, and IB needs the same facility
> for some IB adapters (e.g., ipath). Perhaps it's time to consider
> stackable dma-ops (unless someone has a better idea...).
Stackable dma-ops sounds good to me. The only problem is that there is a
performance penalty for devices handled on the bottom of the stack. But
the alternative I can think of, a per-device dma-ops structure, uses more
memory and is much more intrusive to the driver core. So I am fine with
a stackable solution.
Joerg