Some SoCs have the limitation that their PCIe host cannot
accept a few inbound address ranges.
The allowed inbound address ranges are listed in the
dma-ranges DT property, and only these address ranges
may be used for IOVA mapping; the remaining address
ranges have to be reserved in the IOVA space.
The PCIe host driver of such SoCs has to add all address
ranges whose IOVAs must be reserved to the PCIe host
bridge resource entry list.
The IOMMU framework will then reserve these IOVAs while
initializing the IOMMU domain.
This patch set is based on Linux-4.19-rc1.
Srinath Mannam (3):
PCI: Add dma-resv window list
iommu/dma: IOVA reserve for PCI host reserve address list
PCI: iproc: Add dma reserve resources to host
drivers/iommu/dma-iommu.c | 8 ++++++
drivers/pci/controller/pcie-iproc.c | 49 +++++++++++++++++++++++++++++++++++++
drivers/pci/probe.c | 3 +++
include/linux/pci.h | 1 +
4 files changed, 61 insertions(+)
--
2.7.4
Add a dma_resv list to the PCI host bridge structure to
hold resource entries for memory regions whose IOVAs
have to be reserved.
The IOMMU framework reserves IOVAs for this list of
address ranges while initializing the IOMMU domain of
each PCI EP connected to the host.
The PCIe host driver adds resource entries to this list
as required: some hosts cannot use all inbound addresses,
so those address regions are added to this list to keep
them out of IOMMU mapping.
Signed-off-by: Srinath Mannam <[email protected]>
---
drivers/pci/probe.c | 3 +++
include/linux/pci.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index ec78400..bbed0e7 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -544,6 +544,7 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
return NULL;
INIT_LIST_HEAD(&bridge->windows);
+ INIT_LIST_HEAD(&bridge->dma_resv);
bridge->dev.release = pci_release_host_bridge_dev;
/*
@@ -572,6 +573,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
return NULL;
INIT_LIST_HEAD(&bridge->windows);
+ INIT_LIST_HEAD(&bridge->dma_resv);
bridge->dev.release = devm_pci_release_host_bridge_dev;
return bridge;
@@ -581,6 +583,7 @@ EXPORT_SYMBOL(devm_pci_alloc_host_bridge);
void pci_free_host_bridge(struct pci_host_bridge *bridge)
{
pci_free_resource_list(&bridge->windows);
+ pci_free_resource_list(&bridge->dma_resv);
kfree(bridge);
}
diff --git a/include/linux/pci.h b/include/linux/pci.h
index e72ca8d..1f0a32a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -472,6 +472,7 @@ struct pci_host_bridge {
void *sysdata;
int busnr;
struct list_head windows; /* resource_entry */
+ struct list_head dma_resv; /* reserved DMA ranges */
u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
int (*map_irq)(const struct pci_dev *, u8, u8);
void (*release_fn)(struct pci_host_bridge *);
--
2.7.4
The PCI host has a list of resource entries containing the
memory address ranges for which IOVA mappings have to be
reserved.
These address ranges are the address holes in the
dma-ranges property.
This is similar to how PCI I/O resource address ranges are
reserved in the IOMMU for each EP connected to the
corresponding host.
Signed-off-by: Srinath Mannam <[email protected]>
---
drivers/iommu/dma-iommu.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 511ff9a..346da81 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -220,6 +220,14 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
hi = iova_pfn(iovad, window->res->end - window->offset);
reserve_iova(iovad, lo, hi);
}
+
+ /* Get reserved DMA windows from host bridge */
+ resource_list_for_each_entry(window, &bridge->dma_resv) {
+
+ lo = iova_pfn(iovad, window->res->start - window->offset);
+ hi = iova_pfn(iovad, window->res->end - window->offset);
+ reserve_iova(iovad, lo, hi);
+ }
}
static int iova_reserve_iommu_regions(struct device *dev,
--
2.7.4
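The lo/hi computation in the iova_reserve_pci_windows() hunk above can
be restated in isolation (simplified stand-in, not the kernel's
iova_domain structs; 'shift' plays the role of the IOVA granule shift):

```c
#include <stdint.h>

/*
 * An IOVA allocator hands out granule-sized pages, so a byte range is
 * converted to an inclusive PFN range before being reserved. This
 * mirrors lo = iova_pfn(iovad, start - offset) / hi = iova_pfn(iovad,
 * end - offset) in the patch, with the granule shift made explicit
 * (12 for 4K pages).
 */
static inline uint64_t iova_pfn(uint64_t addr, unsigned int shift)
{
	return addr >> shift;
}

void pfn_range(uint64_t start, uint64_t end, uint64_t offset,
	       unsigned int shift, uint64_t *lo, uint64_t *hi)
{
	*lo = iova_pfn(start - offset, shift);
	*hi = iova_pfn(end - offset, shift);
}
```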
The IPROC host has the limitation that it can use only
those address ranges given by the dma-ranges property as
inbound addresses, so the memory address holes in
dma-ranges have to be reserved so that they are never
allocated as DMA addresses.
All such reserved address ranges are created as resource
entries and added to the dma_resv list of the PCI host
bridge.
These DMA reserve resources are created by parsing the
dma-ranges property.
Ex:
dma-ranges = < \
0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>
In the above example of dma-ranges, the memory address ranges
0x0 - 0x80000000,
0x100000000 - 0x800000000,
0x1000000000 - 0x8000000000 and
0x10000000000 - 0xffffffffffffffff
are not allowed to be used as inbound addresses, so these
address ranges need to be added to the dma_resv list to
reserve their IOVA address ranges.
Signed-off-by: Srinath Mannam <[email protected]>
---
drivers/pci/controller/pcie-iproc.c | 49 +++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/drivers/pci/controller/pcie-iproc.c b/drivers/pci/controller/pcie-iproc.c
index 3160e93..43e465a 100644
--- a/drivers/pci/controller/pcie-iproc.c
+++ b/drivers/pci/controller/pcie-iproc.c
@@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct iproc_pcie *pcie,
return ret;
}
+static int
+iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head *resources,
+ uint64_t start, uint64_t end)
+{
+ struct resource *res;
+
+ res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ res->start = (resource_size_t)start;
+ res->end = (resource_size_t)end;
+ pci_add_resource_offset(resources, res, 0);
+
+ return 0;
+}
+
static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
{
+ struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
struct of_pci_range range;
struct of_pci_range_parser parser;
int ret;
+ uint64_t start, end;
+ LIST_HEAD(resources);
/* Get the dma-ranges from DT */
ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
if (ret)
return ret;
+ start = 0;
for_each_of_pci_range(&parser, &range) {
+ end = range.pci_addr;
+ /* dma-ranges list expected in sorted order */
+ if (end < start) {
+ ret = -EINVAL;
+ goto out;
+ }
/* Each range entry corresponds to an inbound mapping region */
ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
if (ret)
return ret;
+
+ if (end - start) {
+ ret = iproc_pcie_add_dma_resv_range(pcie->dev,
+ &resources,
+ start, end);
+ if (ret)
+ goto out;
+ }
+ start = range.pci_addr + range.size;
}
+ end = ~0;
+ if (end - start) {
+ ret = iproc_pcie_add_dma_resv_range(pcie->dev, &resources,
+ start, end);
+ if (ret)
+ goto out;
+ }
+
+ list_splice_init(&resources, &host->dma_resv);
+
return 0;
+out:
+ pci_free_resource_list(&resources);
+ return ret;
}
static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
--
2.7.4
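Each dma-ranges entry in the example above is a tuple of 32-bit cells:
a PCI flags cell, a two-cell PCI address, a two-cell parent (CPU)
address, and a two-cell size. A minimal illustrative parser for that
layout (hypothetical names, not the kernel's of_pci_range helpers):

```c
#include <stdint.h>

/*
 * One dma-ranges entry: <flags pci_hi pci_lo cpu_hi cpu_lo size_hi
 * size_lo>, matching the 3-cell PCI address / 2-cell parent address /
 * 2-cell size layout used in the example. Illustrative only.
 */
struct pci_dma_range {
	uint32_t flags;
	uint64_t pci_addr;
	uint64_t cpu_addr;
	uint64_t size;
};

static uint64_t cells64(const uint32_t *c)
{
	/* big-value-first: hi cell then lo cell */
	return ((uint64_t)c[0] << 32) | c[1];
}

struct pci_dma_range parse_dma_range(const uint32_t cells[7])
{
	struct pci_dma_range r;

	r.flags = cells[0];
	r.pci_addr = cells64(&cells[1]);
	r.cpu_addr = cells64(&cells[3]);
	r.size = cells64(&cells[5]);
	return r;
}
```

With the second entry of the example (`0x43000000 0x08 0x00000000 0x08
0x00000000 0x08 0x00000000`), this yields a PCI address of 0x800000000
and a size of 0x800000000.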
On 2018-12-12 11:16, Srinath Mannam wrote:
> IPROC host has the limitation that it can use
> only those address ranges given by dma-ranges
> property as inbound address.
> So that the memory address holes in dma-ranges
> should be reserved to allocate as DMA address.
>
> All such reserved addresses are created as resource
> entries and add to dma_resv list of pci host bridge.
>
> These dma reserve resources created by parsing
> dma-ranges parameter.
>
> Ex:
> dma-ranges = < \
> 0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
> 0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
> 0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>
>
> In the above example of dma-ranges, memory address from
> 0x0 - 0x80000000,
> 0x100000000 - 0x800000000,
> 0x1000000000 - 0x8000000000 and
> 0x10000000000 - 0xffffffffffffffff.
> are not allowed to use as inbound addresses.
> So that we need to add these address range to dma_resv
> list to reserve their IOVA address ranges.
>
> Signed-off-by: Srinath Mannam <[email protected]>
> ---
> drivers/pci/controller/pcie-iproc.c | 49
> +++++++++++++++++++++++++++++++++++++
> 1 file changed, 49 insertions(+)
>
> diff --git a/drivers/pci/controller/pcie-iproc.c
> b/drivers/pci/controller/pcie-iproc.c
> index 3160e93..43e465a 100644
> --- a/drivers/pci/controller/pcie-iproc.c
> +++ b/drivers/pci/controller/pcie-iproc.c
> @@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct
> iproc_pcie *pcie,
> return ret;
> }
>
> +static int
> +iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head
> *resources,
> + uint64_t start, uint64_t end)
> +{
> + struct resource *res;
> +
> + res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
> + if (!res)
> + return -ENOMEM;
> +
> + res->start = (resource_size_t)start;
> + res->end = (resource_size_t)end;
> + pci_add_resource_offset(resources, res, 0);
> +
> + return 0;
> +}
> +
> static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
> {
> + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
> struct of_pci_range range;
> struct of_pci_range_parser parser;
> int ret;
> + uint64_t start, end;
> + LIST_HEAD(resources);
>
> /* Get the dma-ranges from DT */
> ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
> if (ret)
> return ret;
>
> + start = 0;
> for_each_of_pci_range(&parser, &range) {
> + end = range.pci_addr;
> + /* dma-ranges list expected in sorted order */
> + if (end < start) {
> + ret = -EINVAL;
> + goto out;
> + }
> /* Each range entry corresponds to an inbound mapping region */
> ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
> if (ret)
> return ret;
> +
> + if (end - start) {
> + ret = iproc_pcie_add_dma_resv_range(pcie->dev,
> + &resources,
> + start, end);
> + if (ret)
> + goto out;
> + }
> + start = range.pci_addr + range.size;
> }
>
> + end = ~0;
Hi Srinath,
this series is based on the following patch sets:
https://lkml.org/lkml/2017/5/16/19
https://lkml.org/lkml/2017/5/16/23
https://lkml.org/lkml/2017/5/16/21
Some comments should be adapted from the patch set I did.
end = ~0;
You should consider the DMA mask to see whether the iproc
controller is in a 32-bit or a 64-bit system.
Please check the following code snippet:
if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
+ lo = iova_pfn(iovad, tmp_dma_addr);
+ hi = iova_pfn(iovad,
+ DMA_BIT_MASK(sizeof(dma_addr_t) * 8) - 1);
+ reserve_iova(iovad, lo, hi);
+ }
Also, if this controller is integrated into a 64-bit platform
but decides to restrict DMA to 32 bits for some reason, the
code should handle such scenarios, so it is always safe to do:
#define BITS_PER_BYTE 8
DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE)
So please use the kernel macro to find the end of the DMA
region.
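For reference, the suggestion above amounts to the following
(simplified restatement of the kernel macros, with uint64_t standing in
for dma_addr_t; the real type may be 32-bit on some configurations):

```c
#include <stdint.h>

#define BITS_PER_BYTE	8
/* Same shape as the kernel's DMA_BIT_MASK: n low bits set */
#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1ULL))

/*
 * End of the DMA-addressable region derived from the width of the DMA
 * address type, instead of a bare ~0. uint64_t is an assumption here;
 * in the kernel this would be sizeof(dma_addr_t).
 */
static inline uint64_t dma_region_end(void)
{
	return DMA_BIT_MASK(sizeof(uint64_t) * BITS_PER_BYTE);
}
```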
Also, ideally, according to SBSA v5, section 8.3 "PCI Express
device view of memory":
Transactions from a PCI Express device will either directly
address the memory system of the base server system or be
presented to an SMMU for optional address translation and
permission policing.
In systems that are compatible with level 3 or above of the
SBSA, the addresses sent by PCI Express devices must be
presented to the memory system or SMMU unmodified. In a
system where the PCI Express does not use an SMMU, the PCI
Express devices have the same view of physical memory as the
PEs. In a system with an SMMU for PCI Express there are no
transformations to addresses being sent by PCI Express
devices before they are presented as an input address to the
SMMU.
This is a limitation of the iproc controller; please describe
the limitation in detail, as it violates the SBSA.
Regards,
Oza.
> + if (end - start) {
> + ret = iproc_pcie_add_dma_resv_range(pcie->dev, &resources,
> + start, end);
> + if (ret)
> + goto out;
> + }
> +
> + list_splice_init(&resources, &host->dma_resv);
> +
> return 0;
> +out:
> + pci_free_resource_list(&resources);
> + return ret;
> }
>
> static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
Hi Oza,
Thank you for the review.
Please find my comments inline.
On Thu, Dec 13, 2018 at 11:33 AM <[email protected]> wrote:
>
> On 2018-12-12 11:16, Srinath Mannam wrote:
> > IPROC host has the limitation that it can use
> > only those address ranges given by dma-ranges
> > property as inbound address.
> > So that the memory address holes in dma-ranges
> > should be reserved to allocate as DMA address.
> >
> > All such reserved addresses are created as resource
> > entries and add to dma_resv list of pci host bridge.
> >
> > These dma reserve resources created by parsing
> > dma-ranges parameter.
> >
> > Ex:
> > dma-ranges = < \
> > 0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
> > 0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
> > 0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>
> >
> > In the above example of dma-ranges, memory address from
> > 0x0 - 0x80000000,
> > 0x100000000 - 0x800000000,
> > 0x1000000000 - 0x8000000000 and
> > 0x10000000000 - 0xffffffffffffffff.
> > are not allowed to use as inbound addresses.
> > So that we need to add these address range to dma_resv
> > list to reserve their IOVA address ranges.
> >
> > Signed-off-by: Srinath Mannam <[email protected]>
> > ---
> > drivers/pci/controller/pcie-iproc.c | 49
> > +++++++++++++++++++++++++++++++++++++
> > 1 file changed, 49 insertions(+)
> >
> > diff --git a/drivers/pci/controller/pcie-iproc.c
> > b/drivers/pci/controller/pcie-iproc.c
> > index 3160e93..43e465a 100644
> > --- a/drivers/pci/controller/pcie-iproc.c
> > +++ b/drivers/pci/controller/pcie-iproc.c
> > @@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct
> > iproc_pcie *pcie,
> > return ret;
> > }
> >
> > +static int
> > +iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head
> > *resources,
> > + uint64_t start, uint64_t end)
> > +{
> > + struct resource *res;
> > +
> > + res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
> > + if (!res)
> > + return -ENOMEM;
> > +
> > + res->start = (resource_size_t)start;
> > + res->end = (resource_size_t)end;
> > + pci_add_resource_offset(resources, res, 0);
> > +
> > + return 0;
> > +}
> > +
> > static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
> > {
> > + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
> > struct of_pci_range range;
> > struct of_pci_range_parser parser;
> > int ret;
> > + uint64_t start, end;
> > + LIST_HEAD(resources);
> >
> > /* Get the dma-ranges from DT */
> > ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
> > if (ret)
> > return ret;
> >
> > + start = 0;
> > for_each_of_pci_range(&parser, &range) {
> > + end = range.pci_addr;
> > + /* dma-ranges list expected in sorted order */
> > + if (end < start) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> > /* Each range entry corresponds to an inbound mapping region */
> > ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
> > if (ret)
> > return ret;
> > +
> > + if (end - start) {
> > + ret = iproc_pcie_add_dma_resv_range(pcie->dev,
> > + &resources,
> > + start, end);
> > + if (ret)
> > + goto out;
> > + }
> > + start = range.pci_addr + range.size;
> > }
> >
> > + end = ~0;
> Hi Srinath,
>
> this series is based on following patch sets.
>
> https://lkml.org/lkml/2017/5/16/19
> https://lkml.org/lkml/2017/5/16/23
> https://lkml.org/lkml/2017/5/16/21,
>
Yes, this patch series is based on the inputs from the patches
you sent earlier.
> some comments to be adapted from the patch-set I did.
>
> end = ~0;
> you should consider DMA_MASK, to see iproc controller is in 32 bit or 64
> bit system.
> please check following code snippet.
>
> if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
> + lo = iova_pfn(iovad, tmp_dma_addr);
> + hi = iova_pfn(iovad,
> + DMA_BIT_MASK(sizeof(dma_addr_t) * 8) - 1);
> + reserve_iova(iovad, lo, hi);
> + }
>
> Also if this controller is integrated to 64bit platform, but decide to
> restrict DMA to 32 bit for some reason, the code should address such
> scenarios.
> so it is always safe to do
>
> #define BITS_PER_BYTE 8
> DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE)
> so please use kernle macro to find the end of DMA region.
>
This change was made with the assumption that the end address
is the maximum bus address (~0) rather than the PCIe RC DMA
mask.
Even when dma-ranges has a 64-bit size, the DMA mask of the
PCIe host is forced to 32 bits:
// in of_dma_configure function
dev->coherent_dma_mask = DMA_BIT_MASK(32);
Meanwhile, the DMA mask of the endpoint is set to 64 bits in
its drivers, and the SMMU-supported DMA mask is 48 bits.
But the requirement here is that all address ranges between
0x0 and ~0x0 except the dma-ranges addresses have to be
reserved.
As you suggested, I will change ~0 to the DMA_BIT_MASK macro;
it will look cleaner.
>
> Also ideally according to SBSA v5
>
> 8.3 PCI Express device view of memory
>
> Transactions from a PCI express device will either directly address the
> memory system of the base server system
> or be presented to a SMMU for optional address translation and
> permission policing.
> In systems that are compatible with level 3 or above of the SBSA, the
> addresses sent by PCI express devices
> must be presented to the memory system or SMMU unmodified. In a system
> where the PCI express does not use
> an SMMU, the PCI express devices have the same view of physical memory
> as the PEs. In a system with a
> SMMU for PCI express there are no transformations to addresses being
> sent by PCI express devices before they
> are presented as an input address to the SMMU.
>
> which is a limitation of iproc controller, please list the limitation in
> detail as it violates SBSA.
Inbound addresses sent by PCIe devices to the host are not
translated (they arrive unchanged) before reaching the SMMU
or the PE directly, but there is a limitation: a few address
ranges are not allowed by the IPROC PCIe RC and are filtered
out. That is what forces this change request.
I will append these details to the commit message.
Regards,
Srinath.
>
> Regards,
> Oza.
>
> > + if (end - start) {
> > + ret = iproc_pcie_add_dma_resv_range(pcie->dev, &resources,
> > + start, end);
> > + if (ret)
> > + goto out;
> > + }
> > +
> > + list_splice_init(&resources, &host->dma_resv);
> > +
> > return 0;
> > +out:
> > + pci_free_resource_list(&resources);
> > + return ret;
> > }
> >
> > static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
On 2018-12-13 14:47, Srinath Mannam wrote:
> Hi Oza,
>
> Thank you for the review.
> Please find my comments in lined.
>
> On Thu, Dec 13, 2018 at 11:33 AM <[email protected]> wrote:
>>
>> On 2018-12-12 11:16, Srinath Mannam wrote:
>> > IPROC host has the limitation that it can use
>> > only those address ranges given by dma-ranges
>> > property as inbound address.
>> > So that the memory address holes in dma-ranges
>> > should be reserved to allocate as DMA address.
>> >
>> > All such reserved addresses are created as resource
>> > entries and add to dma_resv list of pci host bridge.
>> >
>> > These dma reserve resources created by parsing
>> > dma-ranges parameter.
>> >
>> > Ex:
>> > dma-ranges = < \
>> > 0x43000000 0x00 0x80000000 0x00 0x80000000 0x00 0x80000000 \
>> > 0x43000000 0x08 0x00000000 0x08 0x00000000 0x08 0x00000000 \
>> > 0x43000000 0x80 0x00000000 0x80 0x00000000 0x40 0x00000000>
>> >
>> > In the above example of dma-ranges, memory address from
>> > 0x0 - 0x80000000,
>> > 0x100000000 - 0x800000000,
>> > 0x1000000000 - 0x8000000000 and
>> > 0x10000000000 - 0xffffffffffffffff.
>> > are not allowed to use as inbound addresses.
>> > So that we need to add these address range to dma_resv
>> > list to reserve their IOVA address ranges.
>> >
>> > Signed-off-by: Srinath Mannam <[email protected]>
>> > ---
>> > drivers/pci/controller/pcie-iproc.c | 49
>> > +++++++++++++++++++++++++++++++++++++
>> > 1 file changed, 49 insertions(+)
>> >
>> > diff --git a/drivers/pci/controller/pcie-iproc.c
>> > b/drivers/pci/controller/pcie-iproc.c
>> > index 3160e93..43e465a 100644
>> > --- a/drivers/pci/controller/pcie-iproc.c
>> > +++ b/drivers/pci/controller/pcie-iproc.c
>> > @@ -1154,25 +1154,74 @@ static int iproc_pcie_setup_ib(struct
>> > iproc_pcie *pcie,
>> > return ret;
>> > }
>> >
>> > +static int
>> > +iproc_pcie_add_dma_resv_range(struct device *dev, struct list_head
>> > *resources,
>> > + uint64_t start, uint64_t end)
>> > +{
>> > + struct resource *res;
>> > +
>> > + res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
>> > + if (!res)
>> > + return -ENOMEM;
>> > +
>> > + res->start = (resource_size_t)start;
>> > + res->end = (resource_size_t)end;
>> > + pci_add_resource_offset(resources, res, 0);
>> > +
>> > + return 0;
>> > +}
>> > +
>> > static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
>> > {
>> > + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
>> > struct of_pci_range range;
>> > struct of_pci_range_parser parser;
>> > int ret;
>> > + uint64_t start, end;
>> > + LIST_HEAD(resources);
>> >
>> > /* Get the dma-ranges from DT */
>> > ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
>> > if (ret)
>> > return ret;
>> >
>> > + start = 0;
>> > for_each_of_pci_range(&parser, &range) {
>> > + end = range.pci_addr;
>> > + /* dma-ranges list expected in sorted order */
>> > + if (end < start) {
>> > + ret = -EINVAL;
>> > + goto out;
>> > + }
>> > /* Each range entry corresponds to an inbound mapping region */
>> > ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
>> > if (ret)
>> > return ret;
>> > +
>> > + if (end - start) {
>> > + ret = iproc_pcie_add_dma_resv_range(pcie->dev,
>> > + &resources,
>> > + start, end);
>> > + if (ret)
>> > + goto out;
>> > + }
>> > + start = range.pci_addr + range.size;
>> > }
>> >
>> > + end = ~0;
>> Hi Srinath,
>>
>> this series is based on following patch sets.
>>
>> https://lkml.org/lkml/2017/5/16/19
>> https://lkml.org/lkml/2017/5/16/23
>> https://lkml.org/lkml/2017/5/16/21,
>>
> Yes, this patch series is done based on the inputs of the patches you
> sent earlier.
>
>> some comments to be adapted from the patch-set I did.
>>
>> end = ~0;
>> you should consider DMA_MASK, to see iproc controller is in 32 bit or
>> 64
>> bit system.
>> please check following code snippet.
>>
>> if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
>> + lo = iova_pfn(iovad, tmp_dma_addr);
>> + hi = iova_pfn(iovad,
>> + DMA_BIT_MASK(sizeof(dma_addr_t)
>> * 8) - 1);
>> + reserve_iova(iovad, lo, hi);
>> + }
>>
>> Also if this controller is integrated to 64bit platform, but decide to
>> restrict DMA to 32 bit for some reason, the code should address such
>> scenarios.
>> so it is always safe to do
>>
>> #define BITS_PER_BYTE 8
>> DMA_BIT_MASK(sizeof(dma_addr_t) * BITS_PER_BYTE)
>> so please use kernle macro to find the end of DMA region.
>>
> this change done with the assumption, that end_address is max bus
> address(~0) instead
> pcie RC dma mask.
> Even dma-ranges has 64bit size dma-mask of PCIe host is forced to
> 32bit.
> // in of_dma_configure function
> dev->coherent_dma_mask = DMA_BIT_MASK(32);
> And dma-mask of endpoint was set to 64bit in their drivers. also SMMU
> supported
> dma mask is 48-bit.
> But here requirement is all address ranges except dma-ranges address
> between 0x0 - ~0x0
> have to be reserved.
> as you suggested I will change ~0 to DMA_BIT_MASK macro it will look
> more cleaner.
>>
Agreed, makes sense.
>> Also ideally according to SBSA v5
>>
>> 8.3 PCI Express device view of memory
>>
>> Transactions from a PCI express device will either directly address
>> the
>> memory system of the base server system
>> or be presented to a SMMU for optional address translation and
>> permission policing.
>> In systems that are compatible with level 3 or above of the SBSA, the
>> addresses sent by PCI express devices
>> must be presented to the memory system or SMMU unmodified. In a system
>> where the PCI express does not use
>> an SMMU, the PCI express devices have the same view of physical memory
>> as the PEs. In a system with a
>> SMMU for PCI express there are no transformations to addresses being
>> sent by PCI express devices before they
>> are presented as an input address to the SMMU.
>>
>> which is a limitation of iproc controller, please list the limitation
>> in
>> detail as it violates SBSA.
> Inbound address sent by pcie devices to the host will not be translate
> (unchanged) before it comes to
> SMMU or directly to PE. but the there is limitation that, few address
> ranges are not allowed by
> IPROC PCIe RC and those were filtered out. That is the reason forces
> this change request.
> I will append these details in commit message.
>
My intention was just to make sure that the commit message
clearly mentions the limitation.
So yes: since the controller has to be programmed for
incoming addresses, and there is no way around it to disable
those 1:1 translations and allow everything incoming, that is
the HW limitation.
Thanks for addressing it.
> Regards,
> Srinath.
>>
>> Regards,
>> Oza.
>>
>> > + if (end - start) {
>> > + ret = iproc_pcie_add_dma_resv_range(pcie->dev, &resources,
>> > + start, end);
>> > + if (ret)
>> > + goto out;
>> > + }
>> > +
>> > + list_splice_init(&resources, &host->dma_resv);
>> > +
>> > return 0;
>> > +out:
>> > + pci_free_resource_list(&resources);
>> > + return ret;
>> > }
>> >
>> > static int iproce_pcie_get_msi(struct iproc_pcie *pcie,