2009-04-08 20:54:46

by Kumar Gala

Subject: tracking of PCI address space

I was wondering if we have anything that tracks regions associated
with the "inbound" side of a pci_bus.

What I mean is, on embedded PPC we have window/mapping registers for
both inbound (accessing memory on the SoC) and outbound (accessing PCI
device MMIO, IO, etc.). The combination of the inbound & outbound
windows conveys what exists in the PCI address space vs. the CPU
physical address space (and how to map from one to the other). Today
in PPC land we only attach outbound windows to the pci_bus, so
technically the inbound-side information (like what subset of physical
memory is visible on the PCI bus) seems to be lost.
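
To make the inbound direction concrete, here is a minimal sketch (the
names are made up, none of this exists in the kernel): a window maps a
range of PCI bus addresses onto CPU physical memory, an outbound window
is the same shape in the other direction, and translating within a
window is just a fixed offset.

#include <linux/types.h>

struct pci_addr_window {
	u64 pci_addr;	/* start, in PCI bus address space */
	u64 cpu_addr;	/* start, in CPU physical address space */
	u64 size;
};

/* PCI bus address -> CPU physical address through an inbound window;
 * returns true iff the address falls inside the window. */
static bool pci_to_cpu(const struct pci_addr_window *w, u64 pci, u64 *cpu)
{
	if (pci < w->pci_addr || pci - w->pci_addr >= w->size)
		return false;
	*cpu = w->cpu_addr + (pci - w->pci_addr);
	return true;
}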

- k


2009-04-08 21:54:29

by Kumar Gala

Subject: Re: tracking of PCI address space


On Apr 8, 2009, at 4:49 PM, Ira Snyder wrote:

> On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
>> I was wondering if we have anything that tracks regions associated
>> with
>> the "inbound" side of a pci_bus.
>>
>> What I mean is, on embedded PPC we have window/mapping registers for
>> both inbound (accessing memory on the SoC) and outbound (accessing
>> PCI device MMIO, IO, etc.). The combination of the inbound & outbound
>> windows conveys what exists in the PCI address space vs. the CPU
>> physical address space (and how to map from one to the other). Today
>> in PPC land we only attach outbound windows to the pci_bus, so
>> technically the inbound-side information (like what subset of
>> physical memory is visible on the PCI bus) seems to be lost.
>>
>
> To the best of my knowledge there is no API to set inbound windows in
> Linux. I've been implementing a virtio-over-PCI driver, which needs
> the inbound windows. I set them up myself during driver probe, using
> get_immrbase() to get the IMMR registers. This board is a PCI Slave /
> Agent; it doesn't even have PCI support compiled into the kernel.

I'm not explicitly concerned about setting up inbound windows; it's
more about having a consistent view of the PCI address space, which
may be different from the CPU physical address space.

I'm working on code to actually set up the inbound windows on
85xx/86xx-class devices (based on the dma-ranges property in the
device tree). As I was thinking about this I realized that the sense
of ranges/dma-ranges in the .dts, and what we map to outbound vs.
inbound, changes depending on whether we are an agent or a host.
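
For reference, dma-ranges on a PCI node is a list of (PCI address, CPU
address, size) triples. A rough sketch of walking it, assuming the
usual cell counts (3 PCI address cells, 2 parent address cells, 2 size
cells; real code should read #address-cells/#size-cells from the tree,
and show_dma_ranges() is a made-up name):

#include <linux/kernel.h>
#include <linux/of.h>
#include <asm/prom.h>	/* of_read_number() on powerpc */

static void show_dma_ranges(struct device_node *pci_node)
{
	const u32 *r;
	int len;
	const int stride = 3 + 2 + 2;	/* pci addr + cpu addr + size */

	r = of_get_property(pci_node, "dma-ranges", &len);
	if (!r)
		return;			/* no inbound description at all */

	for (len /= 4; len >= stride; len -= stride, r += stride) {
		u64 pci_addr = of_read_number(r + 1, 2);  /* skip phys.hi */
		u64 cpu_addr = of_read_number(r + 3, 2);
		u64 size     = of_read_number(r + 5, 2);

		pr_info("inbound: PCI %#llx -> CPU %#llx, %#llx bytes\n",
			(unsigned long long)pci_addr,
			(unsigned long long)cpu_addr,
			(unsigned long long)size);
	}
}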

- k

2009-04-08 22:14:55

by Ira W. Snyder

Subject: Re: tracking of PCI address space

On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
> I was wondering if we have anything that tracks regions associated with
> the "inbound" side of a pci_bus.
>
> What I mean is, on embedded PPC we have window/mapping registers for
> both inbound (accessing memory on the SoC) and outbound (accessing PCI
> device MMIO, IO, etc.). The combination of the inbound & outbound
> windows conveys what exists in the PCI address space vs. the CPU
> physical address space (and how to map from one to the other). Today
> in PPC land we only attach outbound windows to the pci_bus, so
> technically the inbound-side information (like what subset of physical
> memory is visible on the PCI bus) seems to be lost.
>

To the best of my knowledge there is no API to set inbound windows in
Linux. I've been implementing a virtio-over-PCI driver, which needs
the inbound windows. I set them up myself during driver probe, using
get_immrbase() to get the IMMR registers. This board is a PCI Slave /
Agent; it doesn't even have PCI support compiled into the kernel.
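
For the curious, the probe-time setup boils down to something like the
following (a heavily abridged sketch, not the actual driver: the
PITAR/PIWBAR/PIWBEAR/PIWAR layout follows the Freescale 85xx reference
manuals, but ATMU_IN_OFFSET is a placeholder you have to look up for
your SoC and controller, and map_inbound_window() is a made-up name):

#include <linux/io.h>
#include <linux/types.h>
#include <sysdev/fsl_soc.h>	/* get_immrbase() */

/* One inbound ATMU window register set, per the 85xx manuals. */
struct pci_inbound_window {
	__be32 pitar;	/* translation (CPU physical) address >> 12 */
	__be32 res;
	__be32 piwbar;	/* window base (PCI bus) address >> 12 */
	__be32 piwbear;	/* upper bits of the PCI bus address */
	__be32 piwar;	/* attributes: enable, size, snoop, target */
};

#define ATMU_IN_OFFSET	0x0	/* placeholder: see your SoC manual */

/* Make 2^order bytes of CPU memory at cpu_addr visible at pci_addr on
 * the bus.  The snoop/target attribute bits in PIWAR are omitted
 * here; real code must fill them in from the manual. */
static int map_inbound_window(phys_addr_t cpu_addr, u64 pci_addr,
			      unsigned int order)
{
	struct pci_inbound_window __iomem *win;

	win = ioremap(get_immrbase() + ATMU_IN_OFFSET, sizeof(*win));
	if (!win)
		return -ENOMEM;

	out_be32(&win->pitar, cpu_addr >> 12);
	out_be32(&win->piwbar, (u32)(pci_addr >> 12));
	out_be32(&win->piwbear, (u32)(pci_addr >> 44));
	out_be32(&win->piwar, 0x80000000 | (order - 1));  /* EN | size */

	iounmap(win);
	return 0;
}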

Ira

2009-04-09 06:21:51

by Benjamin Herrenschmidt

Subject: Re: tracking of PCI address space

On Wed, 2009-04-08 at 15:53 -0500, Kumar Gala wrote:
> I was wondering if we have anything that tracks regions associated
> with the "inbound" side of a pci_bus.
>
> What I mean is, on embedded PPC we have window/mapping registers for
> both inbound (accessing memory on the SoC) and outbound (accessing PCI
> device MMIO, IO, etc.). The combination of the inbound & outbound
> windows conveys what exists in the PCI address space vs. the CPU
> physical address space (and how to map from one to the other). Today
> in PPC land we only attach outbound windows to the pci_bus, so
> technically the inbound-side information (like what subset of physical
> memory is visible on the PCI bus) seems to be lost.

On powerpc, we do keep track of the offset, but that's about it.

Tracking inbound ranges is very platform-specific, though. You can
have multiple inbound windows with different translations, in some
cases some going through an iommu and some not, or windows aliasing
the same target memory but with different attributes, etc.

I don't think there's that much interest in trying to create generic
code to keep track of all this.
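
To illustrate: even describing a single window generically would need
something like the following per range (purely hypothetical field
names, just to show the variability):

#include <linux/types.h>

struct inbound_range {
	u64	pci_addr;	/* where the window sits on the bus */
	u64	cpu_addr;	/* target memory, if direct-mapped */
	u64	size;
	bool	via_iommu;	/* per-page translation, no fixed offset */
	u32	attrs;		/* snoop/prefetch/ordering attributes */
};

and two entries could alias the same cpu_addr with different attrs,
which is exactly the aliasing case above.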

Ben.

2009-04-09 06:56:12

by Grant Grundler

Subject: Re: tracking of PCI address space

On Wed, Apr 08, 2009 at 03:53:55PM -0500, Kumar Gala wrote:
> I was wondering if we have anything that tracks regions associated with
> the "inbound" side of a pci_bus.
>
> What I mean is, on embedded PPC we have window/mapping registers for
> both inbound (accessing memory on the SoC) and outbound (accessing PCI
> device MMIO, IO, etc.). The combination of the inbound & outbound
> windows conveys what exists in the PCI address space vs. the CPU
> physical address space (and how to map from one to the other).

Most PCI host bus controllers will negatively decode the outbound
ranges for inbound traffic (i.e., anything not claimed by an outbound
window is treated as inbound).

PARISC and IA64 have extra registers to play some games with that, but
the main intent was routing between PCI bus controllers to make them
look like a single PCI segment. I've not seen them subverted for any
other use.

> Today in PPC land we only attach outbound windows to the pci_bus, so
> technically the inbound-side information (like what subset of physical
> memory is visible on the PCI bus) seems to be lost.

What did you need an inbound "routing map" for?

thanks,
grant