2019-05-01 17:38:11

by Srinath Mannam

Subject: [PATCH v5 0/3] PCIe Host request to reserve IOVA

A few SoCs have the limitation that their PCIe host cannot accept certain
inbound address ranges. The allowed inbound address ranges are listed in
the dma-ranges DT property, and only these ranges can be used for IOVA
mapping; the remaining address ranges have to be reserved from IOVA
allocation.

The PCIe host driver of those SoCs has to list the resource entries of the
allowed address ranges given in the dma-ranges DT property in sorted
order. This sorted resource list is then processed while the IOMMU domain
is initialized, and IOVA space is reserved for the inaccessible address
holes between the entries.

This patch set is based on Linux-5.1-rc3.

Changes from v4:
- Addressed Bjorn, Robin Murphy and Auger Eric review comments.
- Commit message modification.
- Change DMA_BIT_MASK to "~(dma_addr_t)0".

Changes from v3:
- Addressed Robin Murphy review comments.
- pcie-iproc: parse dma-ranges and make sorted resource list.
- dma-iommu: process list and reserve gaps between entries

Changes from v2:
- Patch set rebased to Linux-5.0-rc2

Changes from v1:
- Addressed Oza review comments.

Srinath Mannam (3):
PCI: Add dma_ranges window list
iommu/dma: Reserve IOVA for PCIe inaccessible DMA address
PCI: iproc: Add sorted dma ranges resource entries to host bridge

drivers/iommu/dma-iommu.c | 19 ++++++++++++++++
drivers/pci/controller/pcie-iproc.c | 44 ++++++++++++++++++++++++++++++++++++-
drivers/pci/probe.c | 3 +++
include/linux/pci.h | 1 +
4 files changed, 66 insertions(+), 1 deletion(-)

--
2.7.4


2019-05-01 17:38:22

by Srinath Mannam

Subject: [PATCH v5 1/3] PCI: Add dma_ranges window list

Add a dma_ranges field to the PCI host bridge structure to hold the list
of resource entries for the memory regions given, in sorted order, through
the dma-ranges DT property.

While initializing the IOMMU domain of the PCI EPs connected to that host
bridge, this list of resources will be processed and the IOVAs for the
address holes will be reserved.

Signed-off-by: Srinath Mannam <[email protected]>
Based-on-patch-by: Oza Pawandeep <[email protected]>
Reviewed-by: Oza Pawandeep <[email protected]>
Acked-by: Bjorn Helgaas <[email protected]>
---
drivers/pci/probe.c | 3 +++
include/linux/pci.h | 1 +
2 files changed, 4 insertions(+)

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index 7e12d01..72563c1 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -595,6 +595,7 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
return NULL;

INIT_LIST_HEAD(&bridge->windows);
+ INIT_LIST_HEAD(&bridge->dma_ranges);
bridge->dev.release = pci_release_host_bridge_dev;

/*
@@ -623,6 +624,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
return NULL;

INIT_LIST_HEAD(&bridge->windows);
+ INIT_LIST_HEAD(&bridge->dma_ranges);
bridge->dev.release = devm_pci_release_host_bridge_dev;

return bridge;
@@ -632,6 +634,7 @@ EXPORT_SYMBOL(devm_pci_alloc_host_bridge);
void pci_free_host_bridge(struct pci_host_bridge *bridge)
{
pci_free_resource_list(&bridge->windows);
+ pci_free_resource_list(&bridge->dma_ranges);

kfree(bridge);
}
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 7744821..bba0a29 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -490,6 +490,7 @@ struct pci_host_bridge {
void *sysdata;
int busnr;
struct list_head windows; /* resource_entry */
+ struct list_head dma_ranges; /* dma ranges resource list */
u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
int (*map_irq)(const struct pci_dev *, u8, u8);
void (*release_fn)(struct pci_host_bridge *);
--
2.7.4

2019-05-01 17:40:05

by Srinath Mannam

Subject: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

The dma_ranges field of the PCI host bridge structure holds resource
entries, in sorted order, for the address ranges given through the
dma-ranges DT property. This list describes the DMA address ranges that
are accessible to the host. Process this resource list and reserve IOVA
space for the inaccessible address holes between the entries.

This is similar to the way PCI I/O resource address ranges are reserved in
the IOMMU for each EP connected to a host bridge.

Signed-off-by: Srinath Mannam <[email protected]>
Based-on-patch-by: Oza Pawandeep <[email protected]>
Reviewed-by: Oza Pawandeep <[email protected]>
Acked-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 77aabe6..da94844 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
struct resource_entry *window;
unsigned long lo, hi;
+ phys_addr_t start = 0, end;

resource_list_for_each_entry(window, &bridge->windows) {
if (resource_type(window->res) != IORESOURCE_MEM)
@@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
hi = iova_pfn(iovad, window->res->end - window->offset);
reserve_iova(iovad, lo, hi);
}
+
+ /* Get reserved DMA windows from host bridge */
+ resource_list_for_each_entry(window, &bridge->dma_ranges) {
+ end = window->res->start - window->offset;
+resv_iova:
+ if (end - start) {
+ lo = iova_pfn(iovad, start);
+ hi = iova_pfn(iovad, end);
+ reserve_iova(iovad, lo, hi);
+ }
+ start = window->res->end - window->offset + 1;
+ /* If window is last entry */
+ if (window->node.next == &bridge->dma_ranges &&
+ end != ~(dma_addr_t)0) {
+ end = ~(dma_addr_t)0;
+ goto resv_iova;
+ }
+ }
}

static int iova_reserve_iommu_regions(struct device *dev,
--
2.7.4

2019-05-01 17:40:08

by Srinath Mannam

Subject: [PATCH v5 3/3] PCI: iproc: Add sorted dma ranges resource entries to host bridge

The IPROC host has the limitation that it can use only the address ranges
given by the dma-ranges property as inbound addresses, so the memory
address holes between those ranges must be reserved from DMA (IOVA)
allocation.

Inbound host addresses accessed by PCIe devices are not translated before
they reach the IOMMU (or the PE directly), but this host ignores accesses
to a few address ranges. The IOVA ranges corresponding to those address
ranges therefore have to be reserved.

All allowed address ranges are listed in the dma-ranges DT property.
Convert these address ranges into resource entries, list them in sorted
order and add them to the dma_ranges list of the PCI host bridge
structure.

Example:
dma-ranges = < \
0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>

The above dma-ranges allow inbound accesses to
0x80000000 - 0xffffffff,
0x800000000 - 0xfffffffff and
0x8000000000 - 0xbfffffffff.
The remaining memory address ranges,
0x0 - 0x7fffffff,
0x100000000 - 0x7ffffffff,
0x1000000000 - 0x7fffffffff and
0xc000000000 - 0xffffffffffffffff,
are not allowed to be used as inbound addresses.

Signed-off-by: Srinath Mannam <[email protected]>
Based-on-patch-by: Oza Pawandeep <[email protected]>
Reviewed-by: Oza Pawandeep <[email protected]>
Reviewed-by: Eric Auger <[email protected]>
---
drivers/pci/controller/pcie-iproc.c | 44 ++++++++++++++++++++++++++++++++++++-
1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
index c20fd6b..94ba5c0 100644
--- a/drivers/pci/controller/pcie-iproc.c
+++ b/drivers/pci/controller/pcie-iproc.c
@@ -1146,11 +1146,43 @@ static int iproc_pcie_setup_ib(struct iproc_pcie *pcie,
return ret;
}

+static int
+iproc_pcie_add_dma_range(struct device *dev, struct list_head *resources,
+ struct of_pci_range *range)
+{
+ struct resource *res;
+ struct resource_entry *entry, *tmp;
+ struct list_head *head = resources;
+
+ res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ resource_list_for_each_entry(tmp, resources) {
+ if (tmp->res->start < range->cpu_addr)
+ head = &tmp->node;
+ }
+
+ res->start = range->cpu_addr;
+ res->end = res->start + range->size - 1;
+
+ entry = resource_list_create_entry(res, 0);
+ if (!entry)
+ return -ENOMEM;
+
+ entry->offset = res->start - range->cpu_addr;
+ resource_list_add(entry, head);
+
+ return 0;
+}
+
static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
{
+ struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct of_pci_range range;
struct of_pci_range_parser parser;
int ret;
+ LIST_HEAD(resources);

/* Get the dma-ranges from DT */
ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
@@ -1158,13 +1190,23 @@ static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
return ret;

for_each_of_pci_range(&parser, &range) {
+ ret = iproc_pcie_add_dma_range(pcie->dev,
+ &resources,
+ &range);
+ if (ret)
+ goto out;
/* Each range entry corresponds to an inbound mapping region */
ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
if (ret)
- return ret;
+ goto out;
}

+ list_splice_init(&resources, &host->dma_ranges);
+
return 0;
+out:
+ pci_free_resource_list(&resources);
+ return ret;
}

static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
--
2.7.4

2019-05-02 11:05:20

by Lorenzo Pieralisi

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> dma_ranges field of PCI host bridge structure has resource entries in
> sorted order of address range given through dma-ranges DT property. This
> list is the accessible DMA address range. So that this resource list will
> be processed and reserve IOVA address to the inaccessible address holes in
> the list.
>
> This method is similar to PCI IO resources address ranges reserving in
> IOMMU for each EP connected to host bridge.
>
> Signed-off-by: Srinath Mannam <[email protected]>
> Based-on-patch-by: Oza Pawandeep <[email protected]>
> Reviewed-by: Oza Pawandeep <[email protected]>
> Acked-by: Robin Murphy <[email protected]>
> ---
> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 77aabe6..da94844 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> struct resource_entry *window;
> unsigned long lo, hi;
> + phys_addr_t start = 0, end;
>
> resource_list_for_each_entry(window, &bridge->windows) {
> if (resource_type(window->res) != IORESOURCE_MEM)
> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> hi = iova_pfn(iovad, window->res->end - window->offset);
> reserve_iova(iovad, lo, hi);
> }
> +
> + /* Get reserved DMA windows from host bridge */
> + resource_list_for_each_entry(window, &bridge->dma_ranges) {

If this list is not sorted it seems to me the logic in this loop is
broken and you can't rely on callers to sort it because it is not a
written requirement and it is not enforced (you know because you
wrote the code but any other developer is not supposed to guess
it).

Can't we rewrite this loop so that it does not rely on list
entries order ?

I won't merge this series unless you sort it, no pun intended.

Lorenzo

> + end = window->res->start - window->offset;
> +resv_iova:
> + if (end - start) {
> + lo = iova_pfn(iovad, start);
> + hi = iova_pfn(iovad, end);
> + reserve_iova(iovad, lo, hi);
> + }
> + start = window->res->end - window->offset + 1;
> + /* If window is last entry */
> + if (window->node.next == &bridge->dma_ranges &&
> + end != ~(dma_addr_t)0) {
> + end = ~(dma_addr_t)0;
> + goto resv_iova;
> + }
> + }
> }
>
> static int iova_reserve_iommu_regions(struct device *dev,
> --
> 2.7.4
>

2019-05-02 11:28:36

by Robin Murphy

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

Hi Lorenzo,

On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
>> dma_ranges field of PCI host bridge structure has resource entries in
>> sorted order of address range given through dma-ranges DT property. This
>> list is the accessible DMA address range. So that this resource list will
>> be processed and reserve IOVA address to the inaccessible address holes in
>> the list.
>>
>> This method is similar to PCI IO resources address ranges reserving in
>> IOMMU for each EP connected to host bridge.
>>
>> Signed-off-by: Srinath Mannam <[email protected]>
>> Based-on-patch-by: Oza Pawandeep <[email protected]>
>> Reviewed-by: Oza Pawandeep <[email protected]>
>> Acked-by: Robin Murphy <[email protected]>
>> ---
>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
>> 1 file changed, 19 insertions(+)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 77aabe6..da94844 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
>> struct resource_entry *window;
>> unsigned long lo, hi;
>> + phys_addr_t start = 0, end;
>>
>> resource_list_for_each_entry(window, &bridge->windows) {
>> if (resource_type(window->res) != IORESOURCE_MEM)
>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>> hi = iova_pfn(iovad, window->res->end - window->offset);
>> reserve_iova(iovad, lo, hi);
>> }
>> +
>> + /* Get reserved DMA windows from host bridge */
>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
>
> If this list is not sorted it seems to me the logic in this loop is
> broken and you can't rely on callers to sort it because it is not a
> written requirement and it is not enforced (you know because you
> wrote the code but any other developer is not supposed to guess
> it).
>
> Can't we rewrite this loop so that it does not rely on list
> entries order ?

The original idea was that callers should be required to provide a
sorted list, since it keeps things nice and simple...

> I won't merge this series unless you sort it, no pun intended.
>
> Lorenzo
>
>> + end = window->res->start - window->offset;

...so would you consider it sufficient to add

if (end < start)
dev_err(...);

here, plus commenting the definition of pci_host_bridge::dma_ranges that
it must be sorted in ascending order?

[ I guess it might even make sense to factor out the parsing and list
construction from patch #3 into an of_pci core helper from the
beginning, so that there's even less chance of another driver
reimplementing it incorrectly in future. ]
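
For illustration only, such a helper might look roughly like the sketch
below; the name of_pci_get_dma_ranges() is made up here, and the body
simply lifts the parse-and-insert-sorted logic from
iproc_pcie_add_dma_range()/iproc_pcie_map_dma_ranges() in patch 3:

static int of_pci_get_dma_ranges(struct device *dev, struct device_node *np,
				 struct list_head *resources)
{
	struct of_pci_range_parser parser;
	struct of_pci_range range;
	int err;

	err = of_pci_dma_range_parser_init(&parser, np);
	if (err)
		return err;

	for_each_of_pci_range(&parser, &range) {
		struct resource_entry *entry, *tmp;
		struct list_head *head = resources;
		struct resource *res;

		res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
		if (!res)
			return -ENOMEM;

		res->start = range.cpu_addr;
		res->end = range.cpu_addr + range.size - 1;

		/* Insert after the last entry with a smaller start address */
		resource_list_for_each_entry(tmp, resources)
			if (tmp->res->start < range.cpu_addr)
				head = &tmp->node;

		entry = resource_list_create_entry(res, 0);
		if (!entry)
			return -ENOMEM;

		resource_list_add(entry, head);
	}

	return 0;
}

On failure the caller would free the partially built list with
pci_free_resource_list(), as the iproc driver does in patch 3.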

Failing that, although I do prefer the "simple by construction"
approach, I'd have no objection to just sticking a list_sort() call in
here instead, if you'd rather it be entirely bulletproof.
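
If it came to that, the list_sort() option might look something like the
sketch below (untested; it assumes the current non-const list_sort()
prototype, and dma_range_cmp is a made-up name):

#include <linux/list_sort.h>

/* Order resource entries by ascending start address */
static int dma_range_cmp(void *priv, struct list_head *a, struct list_head *b)
{
	struct resource_entry *ra = list_entry(a, struct resource_entry, node);
	struct resource_entry *rb = list_entry(b, struct resource_entry, node);

	if (ra->res->start < rb->res->start)
		return -1;
	if (ra->res->start > rb->res->start)
		return 1;
	return 0;
}

	/* ...before walking bridge->dma_ranges in iova_reserve_pci_windows() */
	list_sort(NULL, &bridge->dma_ranges, dma_range_cmp);

Sorting up front would make the hole computation independent of the order
in which the host driver happened to add the entries.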

Robin.

>> +resv_iova:
>> + if (end - start) {
>> + lo = iova_pfn(iovad, start);
>> + hi = iova_pfn(iovad, end);
>> + reserve_iova(iovad, lo, hi);
>> + }
>> + start = window->res->end - window->offset + 1;
>> + /* If window is last entry */
>> + if (window->node.next == &bridge->dma_ranges &&
>> + end != ~(dma_addr_t)0) {
>> + end = ~(dma_addr_t)0;
>> + goto resv_iova;
>> + }
>> + }
>> }
>>
>> static int iova_reserve_iommu_regions(struct device *dev,
>> --
>> 2.7.4
>>

2019-05-02 13:08:17

by Lorenzo Pieralisi

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> Hi Lorenzo,
>
> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> > On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> > > dma_ranges field of PCI host bridge structure has resource entries in
> > > sorted order of address range given through dma-ranges DT property. This
> > > list is the accessible DMA address range. So that this resource list will
> > > be processed and reserve IOVA address to the inaccessible address holes in
> > > the list.
> > >
> > > This method is similar to PCI IO resources address ranges reserving in
> > > IOMMU for each EP connected to host bridge.
> > >
> > > Signed-off-by: Srinath Mannam <[email protected]>
> > > Based-on-patch-by: Oza Pawandeep <[email protected]>
> > > Reviewed-by: Oza Pawandeep <[email protected]>
> > > Acked-by: Robin Murphy <[email protected]>
> > > ---
> > > drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> > > 1 file changed, 19 insertions(+)
> > >
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index 77aabe6..da94844 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > > struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> > > struct resource_entry *window;
> > > unsigned long lo, hi;
> > > + phys_addr_t start = 0, end;
> > > resource_list_for_each_entry(window, &bridge->windows) {
> > > if (resource_type(window->res) != IORESOURCE_MEM)
> > > @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > > hi = iova_pfn(iovad, window->res->end - window->offset);
> > > reserve_iova(iovad, lo, hi);
> > > }
> > > +
> > > + /* Get reserved DMA windows from host bridge */
> > > + resource_list_for_each_entry(window, &bridge->dma_ranges) {
> >
> > If this list is not sorted it seems to me the logic in this loop is
> > broken and you can't rely on callers to sort it because it is not a
> > written requirement and it is not enforced (you know because you
> > wrote the code but any other developer is not supposed to guess
> > it).
> >
> > Can't we rewrite this loop so that it does not rely on list
> > entries order ?
>
> The original idea was that callers should be required to provide a sorted
> list, since it keeps things nice and simple...

I understand, if it was self-contained in driver code that would be fine
but in core code with possible multiple consumers this must be
documented/enforced, somehow.

> > I won't merge this series unless you sort it, no pun intended.
> >
> > Lorenzo
> >
> > > + end = window->res->start - window->offset;
>
> ...so would you consider it sufficient to add
>
> if (end < start)
> dev_err(...);

We should also revert any IOVA reservation we did prior to this
error, right ?

Anyway, I think it is best to ensure it *is* sorted.

> here, plus commenting the definition of pci_host_bridge::dma_ranges
> that it must be sorted in ascending order?

I don't think that commenting dma_ranges would help much, I am more
keen on making it work by construction.

> [ I guess it might even make sense to factor out the parsing and list
> construction from patch #3 into an of_pci core helper from the beginning, so
> that there's even less chance of another driver reimplementing it
> incorrectly in future. ]

This makes sense IMO and I would like to take this approach if you
don't mind.

Either this or we move the whole IOVA reservation and dma-ranges
parsing into PCI IProc.

> Failing that, although I do prefer the "simple by construction"
> approach, I'd have no objection to just sticking a list_sort() call in
> here instead, if you'd rather it be entirely bulletproof.

I think what you outline above is a sensible way forward - if we
miss the merge window so be it.

Thanks,
Lorenzo

> Robin.
>
> > > +resv_iova:
> > > + if (end - start) {
> > > + lo = iova_pfn(iovad, start);
> > > + hi = iova_pfn(iovad, end);
> > > + reserve_iova(iovad, lo, hi);
> > > + }
> > > + start = window->res->end - window->offset + 1;
> > > + /* If window is last entry */
> > > + if (window->node.next == &bridge->dma_ranges &&
> > > + end != ~(dma_addr_t)0) {
> > > + end = ~(dma_addr_t)0;
> > > + goto resv_iova;
> > > + }
> > > + }
> > > }
> > > static int iova_reserve_iommu_regions(struct device *dev,
> > > --
> > > 2.7.4
> > >

2019-05-02 14:16:46

by Robin Murphy

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
> On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
>> Hi Lorenzo,
>>
>> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
>>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
>>>> dma_ranges field of PCI host bridge structure has resource entries in
>>>> sorted order of address range given through dma-ranges DT property. This
>>>> list is the accessible DMA address range. So that this resource list will
>>>> be processed and reserve IOVA address to the inaccessible address holes in
>>>> the list.
>>>>
>>>> This method is similar to PCI IO resources address ranges reserving in
>>>> IOMMU for each EP connected to host bridge.
>>>>
>>>> Signed-off-by: Srinath Mannam <[email protected]>
>>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
>>>> Reviewed-by: Oza Pawandeep <[email protected]>
>>>> Acked-by: Robin Murphy <[email protected]>
>>>> ---
>>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
>>>> 1 file changed, 19 insertions(+)
>>>>
>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>> index 77aabe6..da94844 100644
>>>> --- a/drivers/iommu/dma-iommu.c
>>>> +++ b/drivers/iommu/dma-iommu.c
>>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
>>>> struct resource_entry *window;
>>>> unsigned long lo, hi;
>>>> + phys_addr_t start = 0, end;
>>>> resource_list_for_each_entry(window, &bridge->windows) {
>>>> if (resource_type(window->res) != IORESOURCE_MEM)
>>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>>>> hi = iova_pfn(iovad, window->res->end - window->offset);
>>>> reserve_iova(iovad, lo, hi);
>>>> }
>>>> +
>>>> + /* Get reserved DMA windows from host bridge */
>>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
>>>
>>> If this list is not sorted it seems to me the logic in this loop is
>>> broken and you can't rely on callers to sort it because it is not a
>>> written requirement and it is not enforced (you know because you
>>> wrote the code but any other developer is not supposed to guess
>>> it).
>>>
>>> Can't we rewrite this loop so that it does not rely on list
>>> entries order ?
>>
>> The original idea was that callers should be required to provide a sorted
>> list, since it keeps things nice and simple...
>
> I understand, if it was self-contained in driver code that would be fine
> but in core code with possible multiple consumers this must be
> documented/enforced, somehow.
>
>>> I won't merge this series unless you sort it, no pun intended.
>>>
>>> Lorenzo
>>>
>>>> + end = window->res->start - window->offset;
>>
>> ...so would you consider it sufficient to add
>>
>> if (end < start)
>> dev_err(...);
>
> We should also revert any IOVA reservation we did prior to this
> error, right ?

I think it would be enough to propagate an error code back out through
iommu_dma_init_domain(), which should then end up aborting the whole
IOMMU setup - reserve_iova() isn't really designed to be undoable, but
since this is the kind of error that should only ever be hit during
driver or DT development, as long as we continue booting such that the
developer can clearly see what's gone wrong, I don't think we need
bother spending too much effort tidying up inside the unused domain.

> Anyway, I think it is best to ensure it *is* sorted.
>
>> here, plus commenting the definition of pci_host_bridge::dma_ranges
>> that it must be sorted in ascending order?
>
> I don't think that commenting dma_ranges would help much, I am more
> keen on making it work by construction.
>
>> [ I guess it might even make sense to factor out the parsing and list
>> construction from patch #3 into an of_pci core helper from the beginning, so
>> that there's even less chance of another driver reimplementing it
>> incorrectly in future. ]
>
> This makes sense IMO and I would like to take this approach if you
> don't mind.

Sure - at some point it would be nice to wire this up to
pci-host-generic for Juno as well (with a parallel version for ACPI
_DMA), so from that viewpoint, the more groundwork in place the better :)

Thanks,
Robin.

>
> Either this or we move the whole IOVA reservation and dma-ranges
> parsing into PCI IProc.
>
>> Failing that, although I do prefer the "simple by construction"
>> approach, I'd have no objection to just sticking a list_sort() call in
>> here instead, if you'd rather it be entirely bulletproof.
>
> I think what you outline above is a sensible way forward - if we
> miss the merge window so be it.
>
> Thanks,
> Lorenzo
>
>> Robin.
>>
>>>> +resv_iova:
>>>> + if (end - start) {
>>>> + lo = iova_pfn(iovad, start);
>>>> + hi = iova_pfn(iovad, end);
>>>> + reserve_iova(iovad, lo, hi);
>>>> + }
>>>> + start = window->res->end - window->offset + 1;
>>>> + /* If window is last entry */
>>>> + if (window->node.next == &bridge->dma_ranges &&
>>>> + end != ~(dma_addr_t)0) {
>>>> + end = ~(dma_addr_t)0;
>>>> + goto resv_iova;
>>>> + }
>>>> + }
>>>> }
>>>> static int iova_reserve_iommu_regions(struct device *dev,
>>>> --
>>>> 2.7.4
>>>>

2019-05-03 06:31:38

by Srinath Mannam

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

Hi Robin, Lorenzo,

Thanks for the review and guidance.
AFAIU, the conclusion of the discussion is to return an error if the
dma-ranges list is not sorted.

So, can I send a new patch with the change below, which returns an error
if the dma-ranges list is not sorted?

-static void iova_reserve_pci_windows(struct pci_dev *dev,
+static int iova_reserve_pci_windows(struct pci_dev *dev,
		struct iova_domain *iovad)
{
	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
@@ -227,11 +227,15 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
	resource_list_for_each_entry(window, &bridge->dma_ranges) {
		end = window->res->start - window->offset;
resv_iova:
-		if (end - start) {
+		if (end > start) {
			lo = iova_pfn(iovad, start);
			hi = iova_pfn(iovad, end);
			reserve_iova(iovad, lo, hi);
+		} else {
+			dev_err(&dev->dev, "Unsorted dma_ranges list\n");
+			return -EINVAL;
		}
+

Please provide your inputs if any more changes are required. Thank you.

Regards,
Srinath.

On Thu, May 2, 2019 at 7:45 PM Robin Murphy <[email protected]> wrote:
>
> On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
> > On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> >> Hi Lorenzo,
> >>
> >> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> >>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> >>>> dma_ranges field of PCI host bridge structure has resource entries in
> >>>> sorted order of address range given through dma-ranges DT property. This
> >>>> list is the accessible DMA address range. So that this resource list will
> >>>> be processed and reserve IOVA address to the inaccessible address holes in
> >>>> the list.
> >>>>
> >>>> This method is similar to PCI IO resources address ranges reserving in
> >>>> IOMMU for each EP connected to host bridge.
> >>>>
> >>>> Signed-off-by: Srinath Mannam <[email protected]>
> >>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
> >>>> Reviewed-by: Oza Pawandeep <[email protected]>
> >>>> Acked-by: Robin Murphy <[email protected]>
> >>>> ---
> >>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> >>>> 1 file changed, 19 insertions(+)
> >>>>
> >>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> >>>> index 77aabe6..da94844 100644
> >>>> --- a/drivers/iommu/dma-iommu.c
> >>>> +++ b/drivers/iommu/dma-iommu.c
> >>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> >>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> >>>> struct resource_entry *window;
> >>>> unsigned long lo, hi;
> >>>> + phys_addr_t start = 0, end;
> >>>> resource_list_for_each_entry(window, &bridge->windows) {
> >>>> if (resource_type(window->res) != IORESOURCE_MEM)
> >>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> >>>> hi = iova_pfn(iovad, window->res->end - window->offset);
> >>>> reserve_iova(iovad, lo, hi);
> >>>> }
> >>>> +
> >>>> + /* Get reserved DMA windows from host bridge */
> >>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
> >>>
> >>> If this list is not sorted it seems to me the logic in this loop is
> >>> broken and you can't rely on callers to sort it because it is not a
> >>> written requirement and it is not enforced (you know because you
> >>> wrote the code but any other developer is not supposed to guess
> >>> it).
> >>>
> >>> Can't we rewrite this loop so that it does not rely on list
> >>> entries order ?
> >>
> >> The original idea was that callers should be required to provide a sorted
> >> list, since it keeps things nice and simple...
> >
> > I understand, if it was self-contained in driver code that would be fine
> > but in core code with possible multiple consumers this must be
> > documented/enforced, somehow.
> >
> >>> I won't merge this series unless you sort it, no pun intended.
> >>>
> >>> Lorenzo
> >>>
> >>>> + end = window->res->start - window->offset;
> >>
> >> ...so would you consider it sufficient to add
> >>
> >> if (end < start)
> >> dev_err(...);
> >
> > We should also revert any IOVA reservation we did prior to this
> > error, right ?
>
> I think it would be enough to propagate an error code back out through
> iommu_dma_init_domain(), which should then end up aborting the whole
> IOMMU setup - reserve_iova() isn't really designed to be undoable, but
> since this is the kind of error that should only ever be hit during
> driver or DT development, as long as we continue booting such that the
> developer can clearly see what's gone wrong, I don't think we need
> bother spending too much effort tidying up inside the unused domain.
>
> > Anyway, I think it is best to ensure it *is* sorted.
> >
> >> here, plus commenting the definition of pci_host_bridge::dma_ranges
> >> that it must be sorted in ascending order?
> >
> > I don't think that commenting dma_ranges would help much, I am more
> > keen on making it work by construction.
> >
> >> [ I guess it might even make sense to factor out the parsing and list
> >> construction from patch #3 into an of_pci core helper from the beginning, so
> >> that there's even less chance of another driver reimplementing it
> >> incorrectly in future. ]
> >
> > This makes sense IMO and I would like to take this approach if you
> > don't mind.
>
> Sure - at some point it would be nice to wire this up to
> pci-host-generic for Juno as well (with a parallel version for ACPI
> _DMA), so from that viewpoint, the more groundwork in place the better :)
>
> Thanks,
> Robin.
>
> >
> > Either this or we move the whole IOVA reservation and dma-ranges
> > parsing into PCI IProc.
> >
> >> Failing that, although I do prefer the "simple by construction"
> >> approach, I'd have no objection to just sticking a list_sort() call in
> >> here instead, if you'd rather it be entirely bulletproof.
> >
> > I think what you outline above is a sensible way forward - if we
> > miss the merge window so be it.
> >
> > Thanks,
> > Lorenzo
> >
> >> Robin.
> >>
> >>>> +resv_iova:
> >>>> + if (end - start) {
> >>>> + lo = iova_pfn(iovad, start);
> >>>> + hi = iova_pfn(iovad, end);
> >>>> + reserve_iova(iovad, lo, hi);
> >>>> + }
> >>>> + start = window->res->end - window->offset + 1;
> >>>> + /* If window is last entry */
> >>>> + if (window->node.next == &bridge->dma_ranges &&
> >>>> + end != ~(dma_addr_t)0) {
> >>>> + end = ~(dma_addr_t)0;
> >>>> + goto resv_iova;
> >>>> + }
> >>>> + }
> >>>> }
> >>>> static int iova_reserve_iommu_regions(struct device *dev,
> >>>> --
> >>>> 2.7.4
> >>>>

2019-05-03 09:55:30

by Lorenzo Pieralisi

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

On Fri, May 03, 2019 at 10:53:23AM +0530, Srinath Mannam wrote:
> Hi Robin, Lorenzo,
>
> Thanks for review and guidance.
> AFAIU, conclusion of discussion is, to return error if dma-ranges list
> is not sorted.
>
> So that, Can I send a new patch with below change to return error if
> dma-ranges list is not sorted?

You can but I can't guarantee it will make it for v5.2.

We will have to move the DT parsing and dma-ranges list creation
to core code anyway because I want this to work by construction,
so even if we manage to make v5.2 you will have to do that.

I pushed a branch out:

not-to-merge/iova-dma-ranges

where I rewrote all commit logs and I am not willing to do it again
so please use them for your v6 posting if you manage to make it
today.

Lorenzo

> -static void iova_reserve_pci_windows(struct pci_dev *dev,
> +static int iova_reserve_pci_windows(struct pci_dev *dev,
> struct iova_domain *iovad)
> {
> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> @@ -227,11 +227,15 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> resource_list_for_each_entry(window, &bridge->dma_ranges) {
> end = window->res->start - window->offset;
> resv_iova:
> - if (end - start) {
> + if (end > start) {
> lo = iova_pfn(iovad, start);
> hi = iova_pfn(iovad, end);
> reserve_iova(iovad, lo, hi);
> + } else {
> + dev_err(&dev->dev, "Unsorted dma_ranges list\n");
> + return -EINVAL;
> }
> +
>
> Please provide your inputs if any more changes required. Thank you,
>
> Regards,
> Srinath.
>
> On Thu, May 2, 2019 at 7:45 PM Robin Murphy <[email protected]> wrote:
> >
> > On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
> > > On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> > >> Hi Lorenzo,
> > >>
> > >> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> > >>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> > >>>> dma_ranges field of PCI host bridge structure has resource entries in
> > >>>> sorted order of address range given through dma-ranges DT property. This
> > >>>> list is the accessible DMA address range. So that this resource list will
> > >>>> be processed and reserve IOVA address to the inaccessible address holes in
> > >>>> the list.
> > >>>>
> > >>>> This method is similar to PCI IO resources address ranges reserving in
> > >>>> IOMMU for each EP connected to host bridge.
> > >>>>
> > >>>> Signed-off-by: Srinath Mannam <[email protected]>
> > >>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
> > >>>> Reviewed-by: Oza Pawandeep <[email protected]>
> > >>>> Acked-by: Robin Murphy <[email protected]>
> > >>>> ---
> > >>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> > >>>> 1 file changed, 19 insertions(+)
> > >>>>
> > >>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > >>>> index 77aabe6..da94844 100644
> > >>>> --- a/drivers/iommu/dma-iommu.c
> > >>>> +++ b/drivers/iommu/dma-iommu.c
> > >>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > >>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> > >>>> struct resource_entry *window;
> > >>>> unsigned long lo, hi;
> > >>>> + phys_addr_t start = 0, end;
> > >>>> resource_list_for_each_entry(window, &bridge->windows) {
> > >>>> if (resource_type(window->res) != IORESOURCE_MEM)
> > >>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > >>>> hi = iova_pfn(iovad, window->res->end - window->offset);
> > >>>> reserve_iova(iovad, lo, hi);
> > >>>> }
> > >>>> +
> > >>>> + /* Get reserved DMA windows from host bridge */
> > >>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
> > >>>
> > >>> If this list is not sorted it seems to me the logic in this loop is
> > >>> broken and you can't rely on callers to sort it because it is not a
> > >>> written requirement and it is not enforced (you know because you
> > >>> wrote the code but any other developer is not supposed to guess
> > >>> it).
> > >>>
> > >>> Can't we rewrite this loop so that it does not rely on list
> > >>> entries order ?
> > >>
> > >> The original idea was that callers should be required to provide a sorted
> > >> list, since it keeps things nice and simple...
> > >
> > > I understand, if it was self-contained in driver code that would be fine
> > > but in core code with possible multiple consumers this must be
> > > documented/enforced, somehow.
> > >
> > >>> I won't merge this series unless you sort it, no pun intended.
> > >>>
> > >>> Lorenzo
> > >>>
> > >>>> + end = window->res->start - window->offset;
> > >>
> > >> ...so would you consider it sufficient to add
> > >>
> > >> if (end < start)
> > >> dev_err(...);
> > >
> > > We should also revert any IOVA reservation we did prior to this
> > > error, right ?
> >
> > I think it would be enough to propagate an error code back out through
> > iommu_dma_init_domain(), which should then end up aborting the whole
> > IOMMU setup - reserve_iova() isn't really designed to be undoable, but
> > since this is the kind of error that should only ever be hit during
> > driver or DT development, as long as we continue booting such that the
> > developer can clearly see what's gone wrong, I don't think we need
> > bother spending too much effort tidying up inside the unused domain.
> >
> > > Anyway, I think it is best to ensure it *is* sorted.
> > >
> > >> here, plus commenting the definition of pci_host_bridge::dma_ranges
> > >> that it must be sorted in ascending order?
> > >
> > > I don't think that commenting dma_ranges would help much, I am more
> > > keen on making it work by construction.
> > >
> > >> [ I guess it might even make sense to factor out the parsing and list
> > >> construction from patch #3 into an of_pci core helper from the beginning, so
> > >> that there's even less chance of another driver reimplementing it
> > >> incorrectly in future. ]
> > >
> > > This makes sense IMO and I would like to take this approach if you
> > > don't mind.
> >
> > Sure - at some point it would be nice to wire this up to
> > pci-host-generic for Juno as well (with a parallel version for ACPI
> > _DMA), so from that viewpoint, the more groundwork in place the better :)
> >
> > Thanks,
> > Robin.
> >
> > >
> > > Either this or we move the whole IOVA reservation and dma-ranges
> > > parsing into PCI IProc.
> > >
> > >> Failing that, although I do prefer the "simple by construction"
> > >> approach, I'd have no objection to just sticking a list_sort() call in
> > >> here instead, if you'd rather it be entirely bulletproof.
> > >
> > > I think what you outline above is a sensible way forward - if we
> > > miss the merge window so be it.
> > >
> > > Thanks,
> > > Lorenzo
> > >
> > >> Robin.
> > >>
> > >>>> +resv_iova:
> > >>>> + if (end - start) {
> > >>>> + lo = iova_pfn(iovad, start);
> > >>>> + hi = iova_pfn(iovad, end);
> > >>>> + reserve_iova(iovad, lo, hi);
> > >>>> + }
> > >>>> + start = window->res->end - window->offset + 1;
> > >>>> + /* If window is last entry */
> > >>>> + if (window->node.next == &bridge->dma_ranges &&
> > >>>> + end != ~(dma_addr_t)0) {
> > >>>> + end = ~(dma_addr_t)0;
> > >>>> + goto resv_iova;
> > >>>> + }
> > >>>> + }
> > >>>> }
> > >>>> static int iova_reserve_iommu_regions(struct device *dev,
> > >>>> --
> > >>>> 2.7.4
> > >>>>

2019-05-03 10:28:46

by Srinath Mannam

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

Hi Lorenzo,

Thank you so much. Please see my reply below.

On Fri, May 3, 2019 at 3:23 PM Lorenzo Pieralisi
<[email protected]> wrote:
>
> On Fri, May 03, 2019 at 10:53:23AM +0530, Srinath Mannam wrote:
> > Hi Robin, Lorenzo,
> >
> > Thanks for review and guidance.
> > AFAIU, conclusion of discussion is, to return error if dma-ranges list
> > is not sorted.
> >
> > So that, Can I send a new patch with below change to return error if
> > dma-ranges list is not sorted?
>
> You can but I can't guarantee it will make it for v5.2.
>
> We will have to move the DT parsing and dma list ranges creation
> to core code anyway because I want this to work by construction,
> so even if we manage to make v5.2 you will have to do that.
Yes, later I will work on it and make the required core code changes.
>
> I pushed a branch out:
>
> not-to-merge/iova-dma-ranges
>
> where I rewrote all commit logs and I am not willing to do it again
> so please use them for your v6 posting if you manage to make it
> today.
Thank you, I will take all the commit log changes and post the v6 version today.

Regards,
Srinath.
>
> Lorenzo
>
> > -static void iova_reserve_pci_windows(struct pci_dev *dev,
> > +static int iova_reserve_pci_windows(struct pci_dev *dev,
> > struct iova_domain *iovad)
> > {
> > struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> > @@ -227,11 +227,15 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > resource_list_for_each_entry(window, &bridge->dma_ranges) {
> > end = window->res->start - window->offset;
> > resv_iova:
> > - if (end - start) {
> > + if (end > start) {
> > lo = iova_pfn(iovad, start);
> > hi = iova_pfn(iovad, end);
> > reserve_iova(iovad, lo, hi);
> > + } else {
> > + dev_err(&dev->dev, "Unsorted dma_ranges list\n");
> > + return -EINVAL;
> > }
> > +
> >
> > Please provide your inputs if any more changes required. Thank you,
> >
> > Regards,
> > Srinath.
> >
> > On Thu, May 2, 2019 at 7:45 PM Robin Murphy <[email protected]> wrote:
> > >
> > > On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
> > > > On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> > > >> Hi Lorenzo,
> > > >>
> > > >> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> > > >>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> > > >>>> dma_ranges field of PCI host bridge structure has resource entries in
> > > >>>> sorted order of address range given through dma-ranges DT property. This
> > > >>>> list is the accessible DMA address range. So that this resource list will
> > > >>>> be processed and reserve IOVA address to the inaccessible address holes in
> > > >>>> the list.
> > > >>>>
> > > >>>> This method is similar to PCI IO resources address ranges reserving in
> > > >>>> IOMMU for each EP connected to host bridge.
> > > >>>>
> > > >>>> Signed-off-by: Srinath Mannam <[email protected]>
> > > >>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
> > > >>>> Reviewed-by: Oza Pawandeep <[email protected]>
> > > >>>> Acked-by: Robin Murphy <[email protected]>
> > > >>>> ---
> > > >>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> > > >>>> 1 file changed, 19 insertions(+)
> > > >>>>
> > > >>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > >>>> index 77aabe6..da94844 100644
> > > >>>> --- a/drivers/iommu/dma-iommu.c
> > > >>>> +++ b/drivers/iommu/dma-iommu.c
> > > >>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > > >>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> > > >>>> struct resource_entry *window;
> > > >>>> unsigned long lo, hi;
> > > >>>> + phys_addr_t start = 0, end;
> > > >>>> resource_list_for_each_entry(window, &bridge->windows) {
> > > >>>> if (resource_type(window->res) != IORESOURCE_MEM)
> > > >>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > > >>>> hi = iova_pfn(iovad, window->res->end - window->offset);
> > > >>>> reserve_iova(iovad, lo, hi);
> > > >>>> }
> > > >>>> +
> > > >>>> + /* Get reserved DMA windows from host bridge */
> > > >>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
> > > >>>
> > > >>> If this list is not sorted it seems to me the logic in this loop is
> > > >>> broken and you can't rely on callers to sort it because it is not a
> > > >>> written requirement and it is not enforced (you know because you
> > > >>> wrote the code but any other developer is not supposed to guess
> > > >>> it).
> > > >>>
> > > >>> Can't we rewrite this loop so that it does not rely on list
> > > >>> entries order ?
> > > >>
> > > >> The original idea was that callers should be required to provide a sorted
> > > >> list, since it keeps things nice and simple...
> > > >
> > > > I understand, if it was self-contained in driver code that would be fine
> > > > but in core code with possible multiple consumers this must be
> > > > documented/enforced, somehow.
> > > >
> > > >>> I won't merge this series unless you sort it, no pun intended.
> > > >>>
> > > >>> Lorenzo
> > > >>>
> > > >>>> + end = window->res->start - window->offset;
> > > >>
> > > >> ...so would you consider it sufficient to add
> > > >>
> > > >> if (end < start)
> > > >> dev_err(...);
> > > >
> > > > We should also revert any IOVA reservation we did prior to this
> > > > error, right ?
> > >
> > > I think it would be enough to propagate an error code back out through
> > > iommu_dma_init_domain(), which should then end up aborting the whole
> > > IOMMU setup - reserve_iova() isn't really designed to be undoable, but
> > > since this is the kind of error that should only ever be hit during
> > > driver or DT development, as long as we continue booting such that the
> > > developer can clearly see what's gone wrong, I don't think we need
> > > bother spending too much effort tidying up inside the unused domain.
> > >
> > > > Anyway, I think it is best to ensure it *is* sorted.
> > > >
> > > >> here, plus commenting the definition of pci_host_bridge::dma_ranges
> > > >> that it must be sorted in ascending order?
> > > >
> > > > I don't think that commenting dma_ranges would help much, I am more
> > > > keen on making it work by construction.
> > > >
> > > >> [ I guess it might even make sense to factor out the parsing and list
> > > >> construction from patch #3 into an of_pci core helper from the beginning, so
> > > >> that there's even less chance of another driver reimplementing it
> > > >> incorrectly in future. ]
> > > >
> > > > This makes sense IMO and I would like to take this approach if you
> > > > don't mind.
> > >
> > > Sure - at some point it would be nice to wire this up to
> > > pci-host-generic for Juno as well (with a parallel version for ACPI
> > > _DMA), so from that viewpoint, the more groundwork in place the better :)
> > >
> > > Thanks,
> > > Robin.
> > >
> > > >
> > > > Either this or we move the whole IOVA reservation and dma-ranges
> > > > parsing into PCI IProc.
> > > >
> > > >> Failing that, although I do prefer the "simple by construction"
> > > >> approach, I'd have no objection to just sticking a list_sort() call in
> > > >> here instead, if you'd rather it be entirely bulletproof.
> > > >
> > > > I think what you outline above is a sensible way forward - if we
> > > > miss the merge window so be it.
> > > >
> > > > Thanks,
> > > > Lorenzo
> > > >
> > > >> Robin.
> > > >>
> > > >>>> +resv_iova:
> > > >>>> + if (end - start) {
> > > >>>> + lo = iova_pfn(iovad, start);
> > > >>>> + hi = iova_pfn(iovad, end);
> > > >>>> + reserve_iova(iovad, lo, hi);
> > > >>>> + }
> > > >>>> + start = window->res->end - window->offset + 1;
> > > >>>> + /* If window is last entry */
> > > >>>> + if (window->node.next == &bridge->dma_ranges &&
> > > >>>> + end != ~(dma_addr_t)0) {
> > > >>>> + end = ~(dma_addr_t)0;
> > > >>>> + goto resv_iova;
> > > >>>> + }
> > > >>>> + }
> > > >>>> }
> > > >>>> static int iova_reserve_iommu_regions(struct device *dev,
> > > >>>> --
> > > >>>> 2.7.4
> > > >>>>

2019-05-03 10:46:37

by Robin Murphy

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

On 03/05/2019 06:23, Srinath Mannam wrote:
> Hi Robin, Lorenzo,
>
> Thanks for review and guidance.
> AFAIU, conclusion of discussion is, to return error if dma-ranges list
> is not sorted.
>
> So that, Can I send a new patch with below change to return error if
> dma-ranges list is not sorted?
>
> -static void iova_reserve_pci_windows(struct pci_dev *dev,
> +static int iova_reserve_pci_windows(struct pci_dev *dev,
> struct iova_domain *iovad)
> {
> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> @@ -227,11 +227,15 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> resource_list_for_each_entry(window, &bridge->dma_ranges) {
> end = window->res->start - window->offset;
> resv_iova:
> - if (end - start) {
> + if (end > start) {
> lo = iova_pfn(iovad, start);
> hi = iova_pfn(iovad, end);
> reserve_iova(iovad, lo, hi);
> + } else {
> + dev_err(&dev->dev, "Unsorted dma_ranges list\n");
> + return -EINVAL;
> }
> +
>
> Please provide your inputs if any more changes required. Thank you,

You also need to handle and return this error where
iova_reserve_pci_windows() is called from iova_reserve_iommu_regions().
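
Put together, the reworked function and its caller might look roughly like
this (a sketch only; the error message and the treatment of end == start
as "adjacent ranges, nothing to reserve" are assumptions here):

static int iova_reserve_pci_windows(struct pci_dev *dev,
		struct iova_domain *iovad)
{
	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
	struct resource_entry *window;
	unsigned long lo, hi;
	phys_addr_t start = 0, end;

	resource_list_for_each_entry(window, &bridge->windows) {
		if (resource_type(window->res) != IORESOURCE_MEM)
			continue;

		lo = iova_pfn(iovad, window->res->start - window->offset);
		hi = iova_pfn(iovad, window->res->end - window->offset);
		reserve_iova(iovad, lo, hi);
	}

	/* Get reserved DMA windows from host bridge */
	resource_list_for_each_entry(window, &bridge->dma_ranges) {
		end = window->res->start - window->offset;
resv_iova:
		if (end > start) {
			lo = iova_pfn(iovad, start);
			hi = iova_pfn(iovad, end);
			reserve_iova(iovad, lo, hi);
		} else if (end < start) {
			/* dma_ranges list must be sorted */
			dev_err(&dev->dev, "Unsorted dma_ranges list\n");
			return -EINVAL;
		}
		/* end == start: adjacent ranges, no hole to reserve */

		start = window->res->end - window->offset + 1;
		/* If window is last entry */
		if (window->node.next == &bridge->dma_ranges &&
		    end != ~(dma_addr_t)0) {
			end = ~(dma_addr_t)0;
			goto resv_iova;
		}
	}

	return 0;
}

and, in iova_reserve_iommu_regions(), something like:

	if (dev_is_pci(dev)) {
		ret = iova_reserve_pci_windows(to_pci_dev(dev), iovad);
		if (ret)
			return ret;
	}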

Robin.

> Regards,
> Srinath.
>
> On Thu, May 2, 2019 at 7:45 PM Robin Murphy <[email protected]> wrote:
>>
>> On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
>>> On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
>>>> Hi Lorenzo,
>>>>
>>>> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
>>>>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
>>>>>> dma_ranges field of PCI host bridge structure has resource entries in
>>>>>> sorted order of address range given through dma-ranges DT property. This
>>>>>> list is the accessible DMA address range. So that this resource list will
>>>>>> be processed and reserve IOVA address to the inaccessible address holes in
>>>>>> the list.
>>>>>>
>>>>>> This method is similar to PCI IO resources address ranges reserving in
>>>>>> IOMMU for each EP connected to host bridge.
>>>>>>
>>>>>> Signed-off-by: Srinath Mannam <[email protected]>
>>>>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
>>>>>> Reviewed-by: Oza Pawandeep <[email protected]>
>>>>>> Acked-by: Robin Murphy <[email protected]>
>>>>>> ---
>>>>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
>>>>>> 1 file changed, 19 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>>>> index 77aabe6..da94844 100644
>>>>>> --- a/drivers/iommu/dma-iommu.c
>>>>>> +++ b/drivers/iommu/dma-iommu.c
>>>>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>>>>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
>>>>>> struct resource_entry *window;
>>>>>> unsigned long lo, hi;
>>>>>> + phys_addr_t start = 0, end;
>>>>>> resource_list_for_each_entry(window, &bridge->windows) {
>>>>>> if (resource_type(window->res) != IORESOURCE_MEM)
>>>>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
>>>>>> hi = iova_pfn(iovad, window->res->end - window->offset);
>>>>>> reserve_iova(iovad, lo, hi);
>>>>>> }
>>>>>> +
>>>>>> + /* Get reserved DMA windows from host bridge */
>>>>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
>>>>>
>>>>> If this list is not sorted it seems to me the logic in this loop is
>>>>> broken and you can't rely on callers to sort it because it is not a
>>>>> written requirement and it is not enforced (you know because you
>>>>> wrote the code but any other developer is not supposed to guess
>>>>> it).
>>>>>
>>>>> Can't we rewrite this loop so that it does not rely on list
>>>>> entries order ?
>>>>
>>>> The original idea was that callers should be required to provide a sorted
>>>> list, since it keeps things nice and simple...
>>>
>>> I understand, if it was self-contained in driver code that would be fine
>>> but in core code with possible multiple consumers this must be
>>> documented/enforced, somehow.
>>>
>>>>> I won't merge this series unless you sort it, no pun intended.
>>>>>
>>>>> Lorenzo
>>>>>
>>>>>> + end = window->res->start - window->offset;
>>>>
>>>> ...so would you consider it sufficient to add
>>>>
>>>> if (end < start)
>>>> dev_err(...);
>>>
>>> We should also revert any IOVA reservation we did prior to this
>>> error, right ?
>>
>> I think it would be enough to propagate an error code back out through
>> iommu_dma_init_domain(), which should then end up aborting the whole
>> IOMMU setup - reserve_iova() isn't really designed to be undoable, but
>> since this is the kind of error that should only ever be hit during
>> driver or DT development, as long as we continue booting such that the
>> developer can clearly see what's gone wrong, I don't think we need
>> bother spending too much effort tidying up inside the unused domain.
>>
>>> Anyway, I think it is best to ensure it *is* sorted.
>>>
>>>> here, plus commenting the definition of pci_host_bridge::dma_ranges
>>>> that it must be sorted in ascending order?
>>>
>>> I don't think that commenting dma_ranges would help much, I am more
>>> keen on making it work by construction.
>>>
>>>> [ I guess it might even make sense to factor out the parsing and list
>>>> construction from patch #3 into an of_pci core helper from the beginning, so
>>>> that there's even less chance of another driver reimplementing it
>>>> incorrectly in future. ]
>>>
>>> This makes sense IMO and I would like to take this approach if you
>>> don't mind.
>>
>> Sure - at some point it would be nice to wire this up to
>> pci-host-generic for Juno as well (with a parallel version for ACPI
>> _DMA), so from that viewpoint, the more groundwork in place the better :)
>>
>> Thanks,
>> Robin.
>>
>>>
>>> Either this or we move the whole IOVA reservation and dma-ranges
>>> parsing into PCI IProc.
>>>
>>>> Failing that, although I do prefer the "simple by construction"
>>>> approach, I'd have no objection to just sticking a list_sort() call in
>>>> here instead, if you'd rather it be entirely bulletproof.
>>>
>>> I think what you outline above is a sensible way forward - if we
>>> miss the merge window so be it.
>>>
>>> Thanks,
>>> Lorenzo
>>>
>>>> Robin.
>>>>
>>>>>> +resv_iova:
>>>>>> + if (end - start) {
>>>>>> + lo = iova_pfn(iovad, start);
>>>>>> + hi = iova_pfn(iovad, end);
>>>>>> + reserve_iova(iovad, lo, hi);
>>>>>> + }
>>>>>> + start = window->res->end - window->offset + 1;
>>>>>> + /* If window is last entry */
>>>>>> + if (window->node.next == &bridge->dma_ranges &&
>>>>>> + end != ~(dma_addr_t)0) {
>>>>>> + end = ~(dma_addr_t)0;
>>>>>> + goto resv_iova;
>>>>>> + }
>>>>>> + }
>>>>>> }
>>>>>> static int iova_reserve_iommu_regions(struct device *dev,
>>>>>> --
>>>>>> 2.7.4
>>>>>>

2019-05-03 10:48:01

by Srinath Mannam

Subject: Re: [PATCH v5 2/3] iommu/dma: Reserve IOVA for PCIe inaccessible DMA address

Hi Robin,


On Fri, May 3, 2019 at 3:58 PM Robin Murphy <[email protected]> wrote:
>
> On 03/05/2019 06:23, Srinath Mannam wrote:
> > Hi Robin, Lorenzo,
> >
> > Thanks for review and guidance.
> > AFAIU, conclusion of discussion is, to return error if dma-ranges list
> > is not sorted.
> >
> > So that, Can I send a new patch with below change to return error if
> > dma-ranges list is not sorted?
> >
> > -static void iova_reserve_pci_windows(struct pci_dev *dev,
> > +static int iova_reserve_pci_windows(struct pci_dev *dev,
> > struct iova_domain *iovad)
> > {
> > struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> > @@ -227,11 +227,15 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> > resource_list_for_each_entry(window, &bridge->dma_ranges) {
> > end = window->res->start - window->offset;
> > resv_iova:
> > - if (end - start) {
> > + if (end > start) {
> > lo = iova_pfn(iovad, start);
> > hi = iova_pfn(iovad, end);
> > reserve_iova(iovad, lo, hi);
> > + } else {
> > + dev_err(&dev->dev, "Unsorted dma_ranges list\n");
> > + return -EINVAL;
> > }
> > +
> >
> > Please provide your inputs if any more changes required. Thank you,
>
> You also need to handle and return this error where
> iova_reserve_pci_windows() is called from iova_reserve_iommu_regions().
Thank you. I am doing this.

Regards,
Srinath.
>
> Robin.
>
> > Regards,
> > Srinath.
> >
> > On Thu, May 2, 2019 at 7:45 PM Robin Murphy <[email protected]> wrote:
> >>
> >> On 02/05/2019 14:06, Lorenzo Pieralisi wrote:
> >>> On Thu, May 02, 2019 at 12:27:02PM +0100, Robin Murphy wrote:
> >>>> Hi Lorenzo,
> >>>>
> >>>> On 02/05/2019 12:01, Lorenzo Pieralisi wrote:
> >>>>> On Wed, May 01, 2019 at 11:06:25PM +0530, Srinath Mannam wrote:
> >>>>>> dma_ranges field of PCI host bridge structure has resource entries in
> >>>>>> sorted order of address range given through dma-ranges DT property. This
> >>>>>> list is the accessible DMA address range. So that this resource list will
> >>>>>> be processed and reserve IOVA address to the inaccessible address holes in
> >>>>>> the list.
> >>>>>>
> >>>>>> This method is similar to PCI IO resources address ranges reserving in
> >>>>>> IOMMU for each EP connected to host bridge.
> >>>>>>
> >>>>>> Signed-off-by: Srinath Mannam <[email protected]>
> >>>>>> Based-on-patch-by: Oza Pawandeep <[email protected]>
> >>>>>> Reviewed-by: Oza Pawandeep <[email protected]>
> >>>>>> Acked-by: Robin Murphy <[email protected]>
> >>>>>> ---
> >>>>>> drivers/iommu/dma-iommu.c | 19 +++++++++++++++++++
> >>>>>> 1 file changed, 19 insertions(+)
> >>>>>>
> >>>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> >>>>>> index 77aabe6..da94844 100644
> >>>>>> --- a/drivers/iommu/dma-iommu.c
> >>>>>> +++ b/drivers/iommu/dma-iommu.c
> >>>>>> @@ -212,6 +212,7 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> >>>>>> struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> >>>>>> struct resource_entry *window;
> >>>>>> unsigned long lo, hi;
> >>>>>> + phys_addr_t start = 0, end;
> >>>>>> resource_list_for_each_entry(window, &bridge->windows) {
> >>>>>> if (resource_type(window->res) != IORESOURCE_MEM)
> >>>>>> @@ -221,6 +222,24 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
> >>>>>> hi = iova_pfn(iovad, window->res->end - window->offset);
> >>>>>> reserve_iova(iovad, lo, hi);
> >>>>>> }
> >>>>>> +
> >>>>>> + /* Get reserved DMA windows from host bridge */
> >>>>>> + resource_list_for_each_entry(window, &bridge->dma_ranges) {
> >>>>>
> >>>>> If this list is not sorted it seems to me the logic in this loop is
> >>>>> broken and you can't rely on callers to sort it because it is not a
> >>>>> written requirement and it is not enforced (you know because you
> >>>>> wrote the code but any other developer is not supposed to guess
> >>>>> it).
> >>>>>
> >>>>> Can't we rewrite this loop so that it does not rely on list
> >>>>> entries order ?
> >>>>
> >>>> The original idea was that callers should be required to provide a sorted
> >>>> list, since it keeps things nice and simple...
> >>>
> >>> I understand, if it was self-contained in driver code that would be fine
> >>> but in core code with possible multiple consumers this must be
> >>> documented/enforced, somehow.
> >>>
> >>>>> I won't merge this series unless you sort it, no pun intended.
> >>>>>
> >>>>> Lorenzo
> >>>>>
> >>>>>> + end = window->res->start - window->offset;
> >>>>
> >>>> ...so would you consider it sufficient to add
> >>>>
> >>>> if (end < start)
> >>>> dev_err(...);
> >>>
> >>> We should also revert any IOVA reservation we did prior to this
> >>> error, right ?
> >>
> >> I think it would be enough to propagate an error code back out through
> >> iommu_dma_init_domain(), which should then end up aborting the whole
> >> IOMMU setup - reserve_iova() isn't really designed to be undoable, but
> >> since this is the kind of error that should only ever be hit during
> >> driver or DT development, as long as we continue booting such that the
> >> developer can clearly see what's gone wrong, I don't think we need
> >> bother spending too much effort tidying up inside the unused domain.
> >>
> >>> Anyway, I think it is best to ensure it *is* sorted.
> >>>
> >>>> here, plus commenting the definition of pci_host_bridge::dma_ranges
> >>>> that it must be sorted in ascending order?
> >>>
> >>> I don't think that commenting dma_ranges would help much, I am more
> >>> keen on making it work by construction.
> >>>
> >>>> [ I guess it might even make sense to factor out the parsing and list
> >>>> construction from patch #3 into an of_pci core helper from the beginning, so
> >>>> that there's even less chance of another driver reimplementing it
> >>>> incorrectly in future. ]
> >>>
> >>> This makes sense IMO and I would like to take this approach if you
> >>> don't mind.
> >>
> >> Sure - at some point it would be nice to wire this up to
> >> pci-host-generic for Juno as well (with a parallel version for ACPI
> >> _DMA), so from that viewpoint, the more groundwork in place the better :)
> >>
> >> Thanks,
> >> Robin.
> >>
> >>>
> >>> Either this or we move the whole IOVA reservation and dma-ranges
> >>> parsing into PCI IProc.
> >>>
> >>>> Failing that, although I do prefer the "simple by construction"
> >>>> approach, I'd have no objection to just sticking a list_sort() call in
> >>>> here instead, if you'd rather it be entirely bulletproof.
> >>>
> >>> I think what you outline above is a sensible way forward - if we
> >>> miss the merge window so be it.
> >>>
> >>> Thanks,
> >>> Lorenzo
> >>>
> >>>> Robin.
> >>>>
> >>>>>> +resv_iova:
> >>>>>> + if (end - start) {
> >>>>>> + lo = iova_pfn(iovad, start);
> >>>>>> + hi = iova_pfn(iovad, end);
> >>>>>> + reserve_iova(iovad, lo, hi);
> >>>>>> + }
> >>>>>> + start = window->res->end - window->offset + 1;
> >>>>>> + /* If window is last entry */
> >>>>>> + if (window->node.next == &bridge->dma_ranges &&
> >>>>>> + end != ~(dma_addr_t)0) {
> >>>>>> + end = ~(dma_addr_t)0;
> >>>>>> + goto resv_iova;
> >>>>>> + }
> >>>>>> + }
> >>>>>> }
> >>>>>> static int iova_reserve_iommu_regions(struct device *dev,
> >>>>>> --
> >>>>>> 2.7.4
> >>>>>>