2020-09-24 11:11:21

by Jisheng Zhang

Subject: [PATCH v2 0/5] PCI: dwc: improve msi handling

Improve the MSI code:
1. Add proper error handling.
2. Move dw_pcie_msi_init() from each user to the designware host to fix
the MSI page leak in the resume path.

Since v1:
- add proper error handling patches.
- fix the MSI page leak by moving dw_pcie_msi_init() from each
user to the designware host
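
To illustrate the leak being fixed, here is a minimal sketch (a hypothetical
"example_pcie" glue driver, not code taken from any real user): a driver that
calls dw_pcie_msi_init() from its resume path allocates and maps a fresh MSI
page on every resume, and nothing releases the page from the previous init.

static int example_pcie_resume(struct device *dev)
{
	/* hypothetical; real drivers keep pcie_port inside their own state */
	struct pcie_port *pp = dev_get_drvdata(dev);

	dw_pcie_setup_rc(pp);
	/*
	 * dw_pcie_msi_init() calls alloc_page() + dma_map_page() again,
	 * so one page and its DMA mapping leak per suspend/resume cycle.
	 */
	dw_pcie_msi_init(pp);

	return 0;
}

With the init moved into the designware host code, the page is set up once
and only the PCIE_MSI_ADDR_* registers need reprogramming on resume.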


Jisheng Zhang (5):
PCI: dwc: Call dma_unmap_page() before freeing the msi page
PCI: dwc: Check alloc_page() return value
PCI: dwc: Rename dw_pcie_free_msi to dw_pcie_msi_deinit
PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled
PCI: dwc: Move dw_pcie_msi_init() from each user to designware host

drivers/pci/controller/dwc/pci-dra7xx.c | 1 +
drivers/pci/controller/dwc/pci-exynos.c | 2 -
drivers/pci/controller/dwc/pci-imx6.c | 3 --
drivers/pci/controller/dwc/pci-meson.c | 8 ----
drivers/pci/controller/dwc/pcie-artpec6.c | 10 -----
.../pci/controller/dwc/pcie-designware-host.c | 43 +++++++++++++------
.../pci/controller/dwc/pcie-designware-plat.c | 3 --
drivers/pci/controller/dwc/pcie-designware.h | 9 +++-
drivers/pci/controller/dwc/pcie-histb.c | 3 --
drivers/pci/controller/dwc/pcie-kirin.c | 3 --
drivers/pci/controller/dwc/pcie-qcom.c | 3 --
drivers/pci/controller/dwc/pcie-spear13xx.c | 1 -
drivers/pci/controller/dwc/pcie-tegra194.c | 2 -
drivers/pci/controller/dwc/pcie-uniphier.c | 9 +---
14 files changed, 38 insertions(+), 62 deletions(-)

--
2.28.0


2020-09-24 11:12:53

by Jisheng Zhang

Subject: [PATCH v2 2/5] PCI: dwc: Check alloc_page() return value

We need to check whether alloc_page() succeeded before continuing.

Signed-off-by: Jisheng Zhang <[email protected]>
---
drivers/pci/controller/dwc/pcie-designware-host.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 0a19de946351..9e04e8ef3aa4 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -303,6 +303,11 @@ void dw_pcie_msi_init(struct pcie_port *pp)
u64 msi_target;

pp->msi_page = alloc_page(GFP_KERNEL);
+ if (!pp->msi_page) {
+ dev_err(dev, "Failed to alloc MSI page\n");
+ return;
+ }
+
pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
DMA_FROM_DEVICE);
if (dma_mapping_error(dev, pp->msi_data)) {
--
2.28.0

2020-09-24 11:12:53

by Jisheng Zhang

Subject: [PATCH v2 1/5] PCI: dwc: Call dma_unmap_page() before freeing the msi page

In dw_pcie_free_msi(), call dma_unmap_page() before freeing the MSI page;
otherwise the mapping created in dw_pcie_msi_init() is leaked.

Signed-off-by: Jisheng Zhang <[email protected]>
---
drivers/pci/controller/dwc/pcie-designware-host.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 9dafecba347f..0a19de946351 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -288,8 +288,12 @@ void dw_pcie_free_msi(struct pcie_port *pp)
irq_domain_remove(pp->msi_domain);
irq_domain_remove(pp->irq_domain);

- if (pp->msi_page)
+ if (pp->msi_page) {
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct device *dev = pci->dev;
+ dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
__free_page(pp->msi_page);
+ }
}

void dw_pcie_msi_init(struct pcie_port *pp)
--
2.28.0

2020-09-24 11:14:28

by Jisheng Zhang

Subject: [PATCH v2 4/5] PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled

If MSI is disabled, there's no need to program the PCIE_MSI_INTR0_MASK
and PCIE_MSI_INTR0_ENABLE registers.

Signed-off-by: Jisheng Zhang <[email protected]>
---
drivers/pci/controller/dwc/pcie-designware-host.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index d2de8bc5db91..7a8adf597803 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -641,7 +641,7 @@ void dw_pcie_setup_rc(struct pcie_port *pp)

dw_pcie_setup(pci);

- if (!pp->ops->msi_host_init) {
+ if (pci_msi_enabled() && !pp->ops->msi_host_init) {
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;

/* Initialize IRQ Status array */
--
2.28.0

2020-09-24 11:57:56

by Gustavo Pimentel

Subject: RE: [PATCH v2 1/5] PCI: dwc: Call dma_unmap_page() before freeing the msi page

On Thu, Sep 24, 2020 at 12:05:57, Jisheng Zhang
<[email protected]> wrote:

> In dw_pcie_free_msi(), call dma_unmap_page() before freeing the MSI page;
> otherwise the mapping created in dw_pcie_msi_init() is leaked.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-designware-host.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 9dafecba347f..0a19de946351 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -288,8 +288,12 @@ void dw_pcie_free_msi(struct pcie_port *pp)
> irq_domain_remove(pp->msi_domain);
> irq_domain_remove(pp->irq_domain);
>
> - if (pp->msi_page)
> + if (pp->msi_page) {
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + struct device *dev = pci->dev;
> + dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE);
> __free_page(pp->msi_page);
> + }
> }
>
> void dw_pcie_msi_init(struct pcie_port *pp)
> --
> 2.28.0

Acked-by: Gustavo Pimentel <[email protected]>


2020-09-24 11:59:10

by Gustavo Pimentel

Subject: RE: [PATCH v2 4/5] PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled

On Thu, Sep 24, 2020 at 12:07:13, Jisheng Zhang
<[email protected]> wrote:

> If MSI is disabled, there's no need to program the PCIE_MSI_INTR0_MASK
> and PCIE_MSI_INTR0_ENABLE registers.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-designware-host.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index d2de8bc5db91..7a8adf597803 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -641,7 +641,7 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
>
> dw_pcie_setup(pci);
>
> - if (!pp->ops->msi_host_init) {
> + if (pci_msi_enabled() && !pp->ops->msi_host_init) {
> num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
>
> /* Initialize IRQ Status array */
> --
> 2.28.0

Acked-by: Gustavo Pimentel <[email protected]>


2020-09-24 11:59:34

by Gustavo Pimentel

Subject: RE: [PATCH v2 2/5] PCI: dwc: Check alloc_page() return value

On Thu, Sep 24, 2020 at 12:06:23, Jisheng Zhang
<[email protected]> wrote:

> We need to check whether alloc_page() succeeded before continuing.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-designware-host.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 0a19de946351..9e04e8ef3aa4 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -303,6 +303,11 @@ void dw_pcie_msi_init(struct pcie_port *pp)
> u64 msi_target;
>
> pp->msi_page = alloc_page(GFP_KERNEL);
> + if (!pp->msi_page) {
> + dev_err(dev, "Failed to alloc MSI page\n");
> + return;
> + }
> +
> pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
> DMA_FROM_DEVICE);
> if (dma_mapping_error(dev, pp->msi_data)) {
> --
> 2.28.0

Acked-by: Gustavo Pimentel <[email protected]>


2020-09-25 08:55:35

by Jon Hunter

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling


On 24/09/2020 12:05, Jisheng Zhang wrote:
> Improve the MSI code:
> 1. Add proper error handling.
> 2. Move dw_pcie_msi_init() from each user to the designware host to fix
> the MSI page leak in the resume path.

Apologies if this is slightly off topic, but I have been meaning to ask
about MSIs and PCI. On Tegra194 which uses the DWC PCI driver, whenever we
hotplug CPUs we see the following warnings ...

[ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
[ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).

These interrupts are the MSIs ...

70: 0 0 0 0 0 0 0 0 PCI-MSI 134217728 Edge PCIe PME, aerdrv
71: 0 0 0 0 0 0 0 0 PCI-MSI 134742016 Edge ahci[0001:01:00.0]

This is caused by ...

static int dw_pci_msi_set_affinity(struct irq_data *d,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}

Now the above is not unique to the DWC PCI host driver; it appears that
most PCIe drivers do the same. However, I am curious whether there is
any way to avoid the above warnings, given that setting the affinity does
not appear to be supported in any way AFAICT.

Cheers
Jon

--
nvpublic

2020-09-25 09:19:10

by Jisheng Zhang

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi Jon,

On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:


>
> On 24/09/2020 12:05, Jisheng Zhang wrote:
> > Improve the msi code:
> > 1. Add proper error handling.
> > 2. Move dw_pcie_msi_init() from each users to designware host to solve
> > msi page leakage in resume path.
>
> Apologies if this is slightly off topic, but I have been meaning to ask
> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver, whenever we
> hotplug CPUs we see the following warnings ...
>
> [ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
> [ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>
> These interrupts are the MSIs ...
>
> 70: 0 0 0 0 0 0 0 0 PCI-MSI 134217728 Edge PCIe PME, aerdrv
> 71: 0 0 0 0 0 0 0 0 PCI-MSI 134742016 Edge ahci[0001:01:00.0]
>
> This is caused by ...
>
> static int dw_pci_msi_set_affinity(struct irq_data *d,
> const struct cpumask *mask, bool force)
> {
> return -EINVAL;
> }
>
> Now the above is not unique to the DWC PCI host driver; it appears that
> most PCIe drivers do the same. However, I am curious whether there is
> any way to avoid the above warnings, given that setting the affinity does
> not appear to be supported in any way AFAICT.
>


Could you please try the patch below?


diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index bf25d783b5c5..7e5dc54d060e 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -197,7 +197,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
.name = "DWPCI-MSI",
.irq_ack = dw_pci_bottom_ack,
.irq_compose_msi_msg = dw_pci_setup_msi_msg,
- .irq_set_affinity = dw_pci_msi_set_affinity,
.irq_mask = dw_pci_bottom_mask,
.irq_unmask = dw_pci_bottom_unmask,
};

2020-09-25 09:29:49

by Jisheng Zhang

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

On Fri, 25 Sep 2020 17:17:12 +0800
Jisheng Zhang <[email protected]> wrote:

>
> Hi Jon,
>
> On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:
>
>
> >
> > On 24/09/2020 12:05, Jisheng Zhang wrote:
> > > Improve the msi code:
> > > 1. Add proper error handling.
> > > 2. Move dw_pcie_msi_init() from each users to designware host to solve
> > > msi page leakage in resume path.
> >
> > Apologies if this is slightly off topic, but I have been meaning to ask
> > about MSIs and PCI. On Tegra194 which uses the DWC PCI driver, whenever we
> > hotplug CPUs we see the following warnings ...
> >
> > [ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
> > [ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).
> >
> > These interrupts are the MSIs ...
> >
> > 70: 0 0 0 0 0 0 0 0 PCI-MSI 134217728 Edge PCIe PME, aerdrv
> > 71: 0 0 0 0 0 0 0 0 PCI-MSI 134742016 Edge ahci[0001:01:00.0]
> >
> > This caused because ...
> >
> > static int dw_pci_msi_set_affinity(struct irq_data *d,
> > const struct cpumask *mask, bool force)
> > {
> > return -EINVAL;
> > }
> >
> > Now the above is not unique to the DWC PCI host driver, it appears that
> > most PCIe drivers also do the same. However, I am curious if there is
> > any way to avoid the above warnings given that setting the affinity does
> > not appear to be supported in anyway AFAICT.
> >
>
>
> Could you please try the patch below?
>
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index bf25d783b5c5..7e5dc54d060e 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -197,7 +197,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> .name = "DWPCI-MSI",
> .irq_ack = dw_pci_bottom_ack,
> .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> - .irq_set_affinity = dw_pci_msi_set_affinity,
> .irq_mask = dw_pci_bottom_mask,
> .irq_unmask = dw_pci_bottom_unmask,
> };

A complete patch w/o compiler warning:

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index bf25d783b5c5..18f719cfed0b 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -137,12 +137,6 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
(int)d->hwirq, msg->address_hi, msg->address_lo);
}

-static int dw_pci_msi_set_affinity(struct irq_data *d,
- const struct cpumask *mask, bool force)
-{
- return -EINVAL;
-}
-
static void dw_pci_bottom_mask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
@@ -197,7 +191,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
.name = "DWPCI-MSI",
.irq_ack = dw_pci_bottom_ack,
.irq_compose_msi_msg = dw_pci_setup_msi_msg,
- .irq_set_affinity = dw_pci_msi_set_affinity,
.irq_mask = dw_pci_bottom_mask,
.irq_unmask = dw_pci_bottom_unmask,
};

2020-09-25 15:15:13

by Jon Hunter

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi Jisheng,

On 25/09/2020 10:27, Jisheng Zhang wrote:

...

>> Could you please try below patch?
>>
>>
>> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
>> index bf25d783b5c5..7e5dc54d060e 100644
>> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
>> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
>> @@ -197,7 +197,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
>> .name = "DWPCI-MSI",
>> .irq_ack = dw_pci_bottom_ack,
>> .irq_compose_msi_msg = dw_pci_setup_msi_msg,
>> - .irq_set_affinity = dw_pci_msi_set_affinity,
>> .irq_mask = dw_pci_bottom_mask,
>> .irq_unmask = dw_pci_bottom_unmask,
>> };
>
> A complete patch w/o compiler warning:
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index bf25d783b5c5..18f719cfed0b 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -137,12 +137,6 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
> (int)d->hwirq, msg->address_hi, msg->address_lo);
> }
>
> -static int dw_pci_msi_set_affinity(struct irq_data *d,
> - const struct cpumask *mask, bool force)
> -{
> - return -EINVAL;
> -}
> -
> static void dw_pci_bottom_mask(struct irq_data *d)
> {
> struct pcie_port *pp = irq_data_get_irq_chip_data(d);
> @@ -197,7 +191,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> .name = "DWPCI-MSI",
> .irq_ack = dw_pci_bottom_ack,
> .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> - .irq_set_affinity = dw_pci_msi_set_affinity,
> .irq_mask = dw_pci_bottom_mask,
> .irq_unmask = dw_pci_bottom_unmask,
> };
>


Thanks. I was not expecting this to work because ...

int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
bool force)
{
struct irq_desc *desc = irq_data_to_desc(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
int ret;

if (!chip || !chip->irq_set_affinity)
return -EINVAL;

However, with your patch Tegra crashes on boot ...

[ 11.613853] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
[ 11.622500] Mem abort info:
[ 11.622515] ESR = 0x86000004
[ 11.622524] EC = 0x21: IABT (current EL), IL = 32 bits
[ 11.622540] SET = 0, FnV = 0
[ 11.636544] EA = 0, S1PTW = 0
[ 11.636554] user pgtable: 4k pages, 48-bit VAs, pgdp=000000046a28e000
[ 11.636559] [0000000000000000] pgd=0000000000000000, p4d=0000000000000000
[ 11.652652] Internal error: Oops: 86000004 [#1] PREEMPT SMP
[ 11.652658] Modules linked in: pwm_tegra phy_tegra194_p2u crct10dif_ce lm90 pwm_fan tegra_bpmp_thermal pcie_tegra194 ip_tables x_tables ipv6
[ 11.670525] CPU: 3 PID: 138 Comm: kworker/3:3 Not tainted 5.9.0-rc4-dirty #12
[ 11.670534] Hardware name: NVIDIA Jetson AGX Xavier Developer Kit (DT)
[ 11.683967] Workqueue: events deferred_probe_work_func
[ 11.683974] pstate: 60c00089 (nZCv daIf +PAN +UAO BTYPE=--)
[ 11.683985] pc : 0x0
[ 11.696669] lr : msi_domain_set_affinity+0x44/0xc0
[ 11.696672] sp : ffff800012bcb390
[ 11.696680] x29: ffff800012bcb390 x28: ffff0003e3033c20
[ 11.709891] x27: ffff0003e76cfe58 x26: 0000000000000000
[ 11.709900] x25: ffff800011d7e850 x24: ffff800011d7e878
[ 11.709908] x23: 0000000000000000 x22: ffff0003e76cfe00
[ 11.709914] x21: ffff0003e76cfe58 x20: ffff0003e76cfe58
[ 11.709921] x19: ffff800011b19000 x18: ffffffffffffffff
[ 11.709927] x17: 0000000000000000 x16: 0000000000000000
[ 11.741262] x15: ffff800011b19948 x14: 0000000000000040
[ 11.741267] x13: 0000000000000228 x12: 0000000000000030
[ 11.741272] x11: 0101010101010101 x10: 0000000000000040
[ 11.741277] x9 : 0000000000000000 x8 : 0000000000000004
[ 11.741281] x7 : ffffffffffffffff x6 : 00000000000000ff
[ 11.767374] x5 : 0000000000000000 x4 : 0000000000000000
[ 11.767379] x3 : 0000000000000000 x2 : 0000000000000000
[ 11.767384] x1 : ffff800011d7e898 x0 : ffff0003e262bf00
[ 11.767406] Call trace:
[ 11.767410] 0x0
[ 11.767424] irq_do_set_affinity+0x4c/0x178
[ 11.791400] irq_setup_affinity+0x124/0x1b0
[ 11.791423] irq_startup+0x6c/0x118
[ 11.791434] __setup_irq+0x810/0x8a0
[ 11.802510] request_threaded_irq+0xdc/0x188
[ 11.802517] pcie_pme_probe+0x98/0x110
[ 11.802536] pcie_port_probe_service+0x34/0x60
[ 11.814799] really_probe+0x110/0x400
[ 11.814809] driver_probe_device+0x54/0xb8
[ 11.822438] __device_attach_driver+0x90/0xc0
[ 11.822463] bus_for_each_drv+0x70/0xc8
[ 11.822471] __device_attach+0xec/0x150
[ 11.834307] device_initial_probe+0x10/0x18
[ 11.834311] bus_probe_device+0x94/0xa0
[ 11.834315] device_add+0x464/0x730
[ 11.834338] device_register+0x1c/0x28
[ 11.834349] pcie_port_device_register+0x2d0/0x3e8
[ 11.854056] pcie_portdrv_probe+0x34/0xd8
[ 11.854063] local_pci_probe+0x3c/0xa0
[ 11.854088] pci_device_probe+0x128/0x1c8
[ 11.854103] really_probe+0x110/0x400
[ 11.869283] driver_probe_device+0x54/0xb8
[ 11.869311] __device_attach_driver+0x90/0xc0
[ 11.877638] bus_for_each_drv+0x70/0xc8
[ 11.877645] __device_attach+0xec/0x150
[ 11.877669] device_attach+0x10/0x18
[ 11.877680] pci_bus_add_device+0x4c/0xb0
[ 11.892642] pci_bus_add_devices+0x44/0x90
[ 11.892646] dw_pcie_host_init+0x370/0x4f8
[ 11.892653] tegra_pcie_dw_probe+0x5e8/0xb50 [pcie_tegra194]
[ 11.892661] platform_drv_probe+0x50/0xa8
[ 11.910179] really_probe+0x110/0x400
[ 11.910183] driver_probe_device+0x54/0xb8
[ 11.910186] __device_attach_driver+0x90/0xc0
[ 11.910213] bus_for_each_drv+0x70/0xc8
[ 11.910240] __device_attach+0xec/0x150
[ 11.929689] device_initial_probe+0x10/0x18
[ 11.929694] bus_probe_device+0x94/0xa0
[ 11.929719] deferred_probe_work_func+0x6c/0xa0
[ 11.929730] process_one_work+0x1cc/0x360
[ 11.946008] worker_thread+0x48/0x450
[ 11.949602] kthread+0x120/0x150
[ 11.952803] ret_from_fork+0x10/0x1c
[ 11.956332] Code: bad PC value
[ 11.959360] ---[ end trace 03c30e252fe4e40b ]---

To be honest, I am not sure I completely understand why it crashes here.

Cheers
Jon

--
nvpublic

2020-09-27 08:30:28

by Jisheng Zhang

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi,

On Fri, 25 Sep 2020 16:13:02 +0100 Jon Hunter wrote:

>
> Hi Jisheng,
>
> On 25/09/2020 10:27, Jisheng Zhang wrote:
>
> ...
>
> >> Could you please try below patch?
> >>
> >>
> >> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> >> index bf25d783b5c5..7e5dc54d060e 100644
> >> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> >> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> >> @@ -197,7 +197,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> >> .name = "DWPCI-MSI",
> >> .irq_ack = dw_pci_bottom_ack,
> >> .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> >> - .irq_set_affinity = dw_pci_msi_set_affinity,
> >> .irq_mask = dw_pci_bottom_mask,
> >> .irq_unmask = dw_pci_bottom_unmask,
> >> };
> >
> > A complete patch w/o compiler warning:
> >
> > diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> > index bf25d783b5c5..18f719cfed0b 100644
> > --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> > +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> > @@ -137,12 +137,6 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
> > (int)d->hwirq, msg->address_hi, msg->address_lo);
> > }
> >
> > -static int dw_pci_msi_set_affinity(struct irq_data *d,
> > - const struct cpumask *mask, bool force)
> > -{
> > - return -EINVAL;
> > -}
> > -
> > static void dw_pci_bottom_mask(struct irq_data *d)
> > {
> > struct pcie_port *pp = irq_data_get_irq_chip_data(d);
> > @@ -197,7 +191,6 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> > .name = "DWPCI-MSI",
> > .irq_ack = dw_pci_bottom_ack,
> > .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> > - .irq_set_affinity = dw_pci_msi_set_affinity,
> > .irq_mask = dw_pci_bottom_mask,
> > .irq_unmask = dw_pci_bottom_unmask,
> > };
> >
>
>
> Thanks. I was not expecting this to work because ...
>
> int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
> bool force)
> {
> struct irq_desc *desc = irq_data_to_desc(data);
> struct irq_chip *chip = irq_data_get_irq_chip(data);
> int ret;
>
> if (!chip || !chip->irq_set_affinity)
> return -EINVAL;
>
> However, with your patch Tegra crashes on boot ...
>
> [ 11.613853] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
> [ 11.622500] Mem abort info:
> [ 11.622515] ESR = 0x86000004
> [ 11.622524] EC = 0x21: IABT (current EL), IL = 32 bits
> [ 11.622540] SET = 0, FnV = 0
> [ 11.636544] EA = 0, S1PTW = 0
> [ 11.636554] user pgtable: 4k pages, 48-bit VAs, pgdp=000000046a28e000
> [ 11.636559] [0000000000000000] pgd=0000000000000000, p4d=0000000000000000
> [ 11.652652] Internal error: Oops: 86000004 [#1] PREEMPT SMP
> [ 11.652658] Modules linked in: pwm_tegra phy_tegra194_p2u crct10dif_ce lm90 pwm_fan tegra_bpmp_thermal pcie_tegra194 ip_tables x_tables ipv6
> [ 11.670525] CPU: 3 PID: 138 Comm: kworker/3:3 Not tainted 5.9.0-rc4-dirty #12
> [ 11.670534] Hardware name: NVIDIA Jetson AGX Xavier Developer Kit (DT)
> [ 11.683967] Workqueue: events deferred_probe_work_func
> [ 11.683974] pstate: 60c00089 (nZCv daIf +PAN +UAO BTYPE=--)
> [ 11.683985] pc : 0x0
> [ 11.696669] lr : msi_domain_set_affinity+0x44/0xc0
> [ 11.696672] sp : ffff800012bcb390
> [ 11.696680] x29: ffff800012bcb390 x28: ffff0003e3033c20
> [ 11.709891] x27: ffff0003e76cfe58 x26: 0000000000000000
> [ 11.709900] x25: ffff800011d7e850 x24: ffff800011d7e878
> [ 11.709908] x23: 0000000000000000 x22: ffff0003e76cfe00
> [ 11.709914] x21: ffff0003e76cfe58 x20: ffff0003e76cfe58
> [ 11.709921] x19: ffff800011b19000 x18: ffffffffffffffff
> [ 11.709927] x17: 0000000000000000 x16: 0000000000000000
> [ 11.741262] x15: ffff800011b19948 x14: 0000000000000040
> [ 11.741267] x13: 0000000000000228 x12: 0000000000000030
> [ 11.741272] x11: 0101010101010101 x10: 0000000000000040
> [ 11.741277] x9 : 0000000000000000 x8 : 0000000000000004
> [ 11.741281] x7 : ffffffffffffffff x6 : 00000000000000ff
> [ 11.767374] x5 : 0000000000000000 x4 : 0000000000000000
> [ 11.767379] x3 : 0000000000000000 x2 : 0000000000000000
> [ 11.767384] x1 : ffff800011d7e898 x0 : ffff0003e262bf00
> [ 11.767406] Call trace:
> [ 11.767410] 0x0
> [ 11.767424] irq_do_set_affinity+0x4c/0x178
> [ 11.791400] irq_setup_affinity+0x124/0x1b0
> [ 11.791423] irq_startup+0x6c/0x118
> [ 11.791434] __setup_irq+0x810/0x8a0
> [ 11.802510] request_threaded_irq+0xdc/0x188
> [ 11.802517] pcie_pme_probe+0x98/0x110
> [ 11.802536] pcie_port_probe_service+0x34/0x60
> [ 11.814799] really_probe+0x110/0x400
> [ 11.814809] driver_probe_device+0x54/0xb8
> [ 11.822438] __device_attach_driver+0x90/0xc0
> [ 11.822463] bus_for_each_drv+0x70/0xc8
> [ 11.822471] __device_attach+0xec/0x150
> [ 11.834307] device_initial_probe+0x10/0x18
> [ 11.834311] bus_probe_device+0x94/0xa0
> [ 11.834315] device_add+0x464/0x730
> [ 11.834338] device_register+0x1c/0x28
> [ 11.834349] pcie_port_device_register+0x2d0/0x3e8
> [ 11.854056] pcie_portdrv_probe+0x34/0xd8
> [ 11.854063] local_pci_probe+0x3c/0xa0
> [ 11.854088] pci_device_probe+0x128/0x1c8
> [ 11.854103] really_probe+0x110/0x400
> [ 11.869283] driver_probe_device+0x54/0xb8
> [ 11.869311] __device_attach_driver+0x90/0xc0
> [ 11.877638] bus_for_each_drv+0x70/0xc8
> [ 11.877645] __device_attach+0xec/0x150
> [ 11.877669] device_attach+0x10/0x18
> [ 11.877680] pci_bus_add_device+0x4c/0xb0
> [ 11.892642] pci_bus_add_devices+0x44/0x90
> [ 11.892646] dw_pcie_host_init+0x370/0x4f8
> [ 11.892653] tegra_pcie_dw_probe+0x5e8/0xb50 [pcie_tegra194]
> [ 11.892661] platform_drv_probe+0x50/0xa8
> [ 11.910179] really_probe+0x110/0x400
> [ 11.910183] driver_probe_device+0x54/0xb8
> [ 11.910186] __device_attach_driver+0x90/0xc0
> [ 11.910213] bus_for_each_drv+0x70/0xc8
> [ 11.910240] __device_attach+0xec/0x150
> [ 11.929689] device_initial_probe+0x10/0x18
> [ 11.929694] bus_probe_device+0x94/0xa0
> [ 11.929719] deferred_probe_work_func+0x6c/0xa0
> [ 11.929730] process_one_work+0x1cc/0x360
> [ 11.946008] worker_thread+0x48/0x450
> [ 11.949602] kthread+0x120/0x150
> [ 11.952803] ret_from_fork+0x10/0x1c
> [ 11.956332] Code: bad PC value
> [ 11.959360] ---[ end trace 03c30e252fe4e40b ]---
>
> To be honest, I am not sure I completely understand why it crashes here.
>

I see: msi_domain_set_affinity() calls parent->chip->irq_set_affinity
without checking it. Grepping the irqchip and PCI directories, I found that
if the MSI is based on some cascaded interrupt mechanism, they all
point irq_set_affinity to irq_chip_set_affinity_parent(), so I believe
the patch below works:

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index bf25d783b5c5..093fba616736 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -137,12 +137,6 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
(int)d->hwirq, msg->address_hi, msg->address_lo);
}

-static int dw_pci_msi_set_affinity(struct irq_data *d,
- const struct cpumask *mask, bool force)
-{
- return -EINVAL;
-}
-
static void dw_pci_bottom_mask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
@@ -197,7 +191,7 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
.name = "DWPCI-MSI",
.irq_ack = dw_pci_bottom_ack,
.irq_compose_msi_msg = dw_pci_setup_msi_msg,
- .irq_set_affinity = dw_pci_msi_set_affinity,
+ .irq_set_affinity = irq_chip_set_affinity_parent,
.irq_mask = dw_pci_bottom_mask,
.irq_unmask = dw_pci_bottom_unmask,
};
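
For context, the crash in the first attempt (pc : 0x0) comes from the MSI
core forwarding the affinity request to the parent chip with no NULL check.
Paraphrased from kernel/irq/msi.c (from memory, not verbatim):

int msi_domain_set_affinity(struct irq_data *irq_data,
			    const struct cpumask *mask, bool force)
{
	struct irq_data *parent = irq_data->parent_data;
	struct msi_msg msg[2] = { [1] = { }, };
	int ret;

	/* no check that parent->chip->irq_set_affinity is non-NULL */
	ret = parent->chip->irq_set_affinity(parent, mask, force);
	if (ret >= 0 && ret != IRQ_SET_MASK_OK_DONE) {
		BUG_ON(irq_chip_compose_msi_msg(irq_data, msg));
		irq_chip_write_msi_msg(irq_data, msg);
	}

	return ret;
}

Deleting the callback from the bottom chip therefore leaves a NULL function
pointer to be called, which is the branch to address zero seen in the trace.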

2020-09-28 17:48:29

by Jon Hunter

[permalink] [raw]
Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling


On 27/09/2020 09:28, Jisheng Zhang wrote:

...

> I see: msi_domain_set_affinity() calls parent->chip->irq_set_affinity
> without checking it. Grepping the irqchip and PCI directories, I found that
> if the MSI is based on some cascaded interrupt mechanism, they all
> point irq_set_affinity to irq_chip_set_affinity_parent(), so I believe
> the patch below works:
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index bf25d783b5c5..093fba616736 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -137,12 +137,6 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
> (int)d->hwirq, msg->address_hi, msg->address_lo);
> }
>
> -static int dw_pci_msi_set_affinity(struct irq_data *d,
> - const struct cpumask *mask, bool force)
> -{
> - return -EINVAL;
> -}
> -
> static void dw_pci_bottom_mask(struct irq_data *d)
> {
> struct pcie_port *pp = irq_data_get_irq_chip_data(d);
> @@ -197,7 +191,7 @@ static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> .name = "DWPCI-MSI",
> .irq_ack = dw_pci_bottom_ack,
> .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> - .irq_set_affinity = dw_pci_msi_set_affinity,
> + .irq_set_affinity = irq_chip_set_affinity_parent,
> .irq_mask = dw_pci_bottom_mask,
> .irq_unmask = dw_pci_bottom_unmask,
> };
>


Unfortunately, this still crashes ...

[ 11.521674] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000018
[ 11.530324] Mem abort info:
[ 11.533089] ESR = 0x96000004
[ 11.536105] EC = 0x25: DABT (current EL), IL = 32 bits
[ 11.541333] SET = 0, FnV = 0
[ 11.544344] EA = 0, S1PTW = 0
[ 11.547441] Data abort info:
[ 11.550279] ISV = 0, ISS = 0x00000004
[ 11.554056] CM = 0, WnR = 0
[ 11.557007] user pgtable: 4k pages, 48-bit VAs, pgdp=0000000467341000
[ 11.563333] [0000000000000018] pgd=0000000000000000, p4d=0000000000000000
[ 11.570024] Internal error: Oops: 96000004 [#1] PREEMPT SMP
[ 11.575517] Modules linked in: crct10dif_ce pwm_tegra snd_hda_core phy_tegra194_p2u lm90 pcie_tegra194 tegra_bpmp_thermal pwm_fan ip_tables x_tables ipv6
[ 11.589046] CPU: 3 PID: 148 Comm: kworker/3:1 Not tainted 5.9.0-rc4-00009-g6fdf18edb995-dirty #7
[ 11.597669] Hardware name: NVIDIA Jetson AGX Xavier Developer Kit (DT)
[ 11.604110] Workqueue: events deferred_probe_work_func
[ 11.609174] pstate: 60c00089 (nZCv daIf +PAN +UAO BTYPE=--)
[ 11.614657] pc : irq_chip_set_affinity_parent+0x4/0x30
[ 11.619735] lr : msi_domain_set_affinity+0x44/0xc0
[ 11.624448] sp : ffff800012d4b390
[ 11.627744] x29: ffff800012d4b390 x28: ffff0003e7234c20
[ 11.632983] x27: ffff0003e913e460 x26: 0000000000000000
[ 11.638231] x25: ffff800011d7e890 x24: ffff800011d7e8b8
[ 11.643466] x23: 0000000000000000 x22: ffff0003e913e400
[ 11.648701] x21: ffff0003e913e460 x20: ffff0003e913e460
[ 11.653932] x19: ffff800011b19000 x18: ffffffffffffffff
[ 11.659160] x17: 0000000000000000 x16: 0000000000000000
[ 11.664390] x15: 0000000000000001 x14: 0000000000000040
[ 11.669636] x13: 0000000000000228 x12: 0000000000000030
[ 11.674864] x11: 0101010101010101 x10: 0000000000000040
[ 11.680111] x9 : 0000000000000000 x8 : 0000000000000004
[ 11.685363] x7 : ffffffffffffffff x6 : 00000000000000ff
[ 11.690596] x5 : 0000000000000000 x4 : 0000000000000000
[ 11.695843] x3 : ffff8000100d89a8 x2 : 0000000000000000
[ 11.701058] x1 : ffff800011d7e8d8 x0 : 0000000000000000
[ 11.706288] Call trace:
[ 11.708708] irq_chip_set_affinity_parent+0x4/0x30
[ 11.713431] irq_do_set_affinity+0x4c/0x178
[ 11.717540] irq_setup_affinity+0x124/0x1b0
[ 11.721650] irq_startup+0x6c/0x118
[ 11.725081] __setup_irq+0x810/0x8a0
[ 11.728580] request_threaded_irq+0xdc/0x188
[ 11.732793] pcie_pme_probe+0x98/0x110
[ 11.736481] pcie_port_probe_service+0x34/0x60
[ 11.740848] really_probe+0x110/0x400
[ 11.744445] driver_probe_device+0x54/0xb8
[ 11.748482] __device_attach_driver+0x90/0xc0
[ 11.752758] bus_for_each_drv+0x70/0xc8
[ 11.756526] __device_attach+0xec/0x150
[ 11.760306] device_initial_probe+0x10/0x18
[ 11.764413] bus_probe_device+0x94/0xa0
[ 11.768203] device_add+0x464/0x730
[ 11.771630] device_register+0x1c/0x28
[ 11.775311] pcie_port_device_register+0x2d0/0x3e8
[ 11.780017] pcie_portdrv_probe+0x34/0xd8
[ 11.783957] local_pci_probe+0x3c/0xa0
[ 11.787647] pci_device_probe+0x128/0x1c8
[ 11.791588] really_probe+0x110/0x400
[ 11.795179] driver_probe_device+0x54/0xb8
[ 11.799202] __device_attach_driver+0x90/0xc0
[ 11.803480] bus_for_each_drv+0x70/0xc8
[ 11.807244] __device_attach+0xec/0x150
[ 11.811009] device_attach+0x10/0x18
[ 11.814518] pci_bus_add_device+0x4c/0xb0
[ 11.818456] pci_bus_add_devices+0x44/0x90
[ 11.822478] dw_pcie_host_init+0x370/0x4f8
[ 11.826504] tegra_pcie_dw_probe+0x5e8/0xb50 [pcie_tegra194]
[ 11.832044] platform_drv_probe+0x50/0xa8
[ 11.835984] really_probe+0x110/0x400
[ 11.839580] driver_probe_device+0x54/0xb8
[ 11.843608] __device_attach_driver+0x90/0xc0
[ 11.847887] bus_for_each_drv+0x70/0xc8
[ 11.851655] __device_attach+0xec/0x150
[ 11.855424] device_initial_probe+0x10/0x18
[ 11.859548] bus_probe_device+0x94/0xa0
[ 11.863317] deferred_probe_work_func+0x6c/0xa0
[ 11.867781] process_one_work+0x1cc/0x360
[ 11.871720] worker_thread+0x48/0x450
[ 11.875318] kthread+0x120/0x150
[ 11.878495] ret_from_fork+0x10/0x1c
[ 11.882027] Code: a8c17bfd d65f03c0 d503201f f9401400 (f9400c03)
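
Presumably the fault address is the clue: irq_chip_set_affinity_parent()
(paraphrased below from kernel/irq/chip.c) first steps to data->parent_data,
and the DWC MSI bottom domain is the root of its hierarchy, so parent_data is
NULL and the following load of data->chip faults at a small offset:

int irq_chip_set_affinity_parent(struct irq_data *data,
				 const struct cpumask *dest, bool force)
{
	data = data->parent_data;	/* NULL for the DWC bottom domain */
	if (data->chip->irq_set_affinity)
		return data->chip->irq_set_affinity(data, dest, force);

	return -ENOSYS;
}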

Cheers
Jon

--
nvpublic

2020-09-29 10:51:03

by Jisheng Zhang

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi Jon,

On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:

>
> On 24/09/2020 12:05, Jisheng Zhang wrote:
> > Improve the msi code:
> > 1. Add proper error handling.
> > 2. Move dw_pcie_msi_init() from each users to designware host to solve
> > msi page leakage in resume path.
>
> Apologies if this is slightly off topic, but I have been meaning to ask
> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver, whenever we
> hotplug CPUs we see the following warnings ...
>
> [ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
> [ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>

I tried to reproduce this issue on a Synaptics SoC, but couldn't.
Per my understanding of the code in kernel/irq/cpuhotplug.c, this warning
happens when we migrate IRQs away from the offline CPU. That implicitly
means the IRQ was bound to the offline CPU before that point, but how
could that happen, given that the current dw_pci_msi_set_affinity()
implementation always returns -EINVAL?
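
For reference, the path being described looks roughly like this (paraphrased
from kernel/irq/cpuhotplug.c, with checks and managed-IRQ handling elided):

static bool migrate_one_irq(struct irq_desc *desc)
{
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity;
	int err;

	affinity = irq_data_get_affinity_mask(d);
	err = irq_do_set_affinity(d, affinity, false);
	if (err) {
		pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
				    d->irq, err);
		return false;
	}
	return true;
}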

thanks

2020-09-29 13:24:55

by Jon Hunter

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi Jisheng,

On 29/09/2020 11:48, Jisheng Zhang wrote:
> Hi Jon,
>
> On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:
>
>>
>> On 24/09/2020 12:05, Jisheng Zhang wrote:
>>> Improve the msi code:
>>> 1. Add proper error handling.
>>> 2. Move dw_pcie_msi_init() from each users to designware host to solve
>>> msi page leakage in resume path.
>>
>> Apologies if this is slightly off topic, but I have been meaning to ask
>> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver, whenever we
>> hotplug CPUs we see the following warnings ...
>>
>> [ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
>> [ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>>
>
> I tried to reproduce this issue on a Synaptics SoC, but couldn't.
> Per my understanding of the code in kernel/irq/cpuhotplug.c, this warning
> happens when we migrate IRQs away from the offline CPU. That implicitly
> means the IRQ was bound to the offline CPU before that point, but how
> could that happen, given that the current dw_pci_msi_set_affinity()
> implementation always returns -EINVAL?

By default the smp_affinity should be set so that all CPUs can be
interrupted ...

$ cat /proc/irq/70/smp_affinity
0xff

In my case there are 8 CPUs and so 0xff implies that the interrupt can
be triggered on any of the 8 CPUs.

Do you see the set_affinity callback being called for the DWC irqchip in
migrate_one_irq()?

Cheers
Jon

--
nvpublic

2020-09-29 17:28:04

by Marc Zyngier

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

On 2020-09-29 14:22, Jon Hunter wrote:
> Hi Jisheng,
>
> On 29/09/2020 11:48, Jisheng Zhang wrote:
>> Hi Jon,
>>
>> On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:
>>
>>>
>>> On 24/09/2020 12:05, Jisheng Zhang wrote:
>>>> Improve the msi code:
>>>> 1. Add proper error handling.
>>>> 2. Move dw_pcie_msi_init() from each users to designware host to
>>>> solve
>>>> msi page leakage in resume path.
>>>
>>> Apologies if this is slightly off topic, but I have been meaning to
>>> ask
>>> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver,
>>> whenever we
>>> hotplug CPUs we see the following warnings ...
>>>
>>> [ 79.068351] WARNING KERN IRQ70: set affinity failed(-22).
>>> [ 79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>>>
>>
>> I tried to reproduce this issue on Synaptics SoC, but can't reproduce
>> it.
>> Per my understanding of the code in kernel/irq/cpuhotplug.c, this
>> warning
>> happened when we migrate irqs away from the offline cpu, this
>> implicitly
>> implies that before this point the irq has bind to the offline cpu,
>> but how
>> could this happen given current dw_pci_msi_set_affinity()
>> implementation
>> always return -EINVAL
>
> By default the smp_affinity should be set so that all CPUs can be
> interrupted ...
>
> $ cat /proc/irq/70/smp_affinity
> 0xff
>
> In my case there are 8 CPUs and so 0xff implies that the interrupt can
> be triggered on any of the 8 CPUs.
>
> Do you see the set_affinity callback being called for the DWC irqchip
> in
> migrate_one_irq()?

The problem is common to all MSI implementations that end up muxing
all the end-point MSIs into a single interrupt. With these systems,
you cannot set the affinity of individual MSIs (they don't target a
CPU, they target another interrupt... braindead). Only the mux
interrupt can have its affinity changed.

So returning -EINVAL is the right thing to do.
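
In the DWC driver, that mux is the chained handler (condensed from
pcie-designware-host.c): all endpoint MSIs arrive on one parent interrupt,
whose handler demultiplexes by reading the controller's status registers,
so the parent interrupt is the only place a CPU affinity really exists.

static void dw_chained_msi_isr(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct pcie_port *pp;

	chained_irq_enter(chip, desc);

	pp = irq_desc_get_handler_data(desc);
	dw_handle_msi_irq(pp);	/* generic_handle_irq() per pending bit */

	chained_irq_exit(chip, desc);
}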

M.
--
Jazz is not dead. It just smells funny...

2020-09-29 17:32:26

by Marc Zyngier

Subject: Re: [PATCH v2 2/5] PCI: dwc: Check alloc_page() return value

On 2020-09-24 12:06, Jisheng Zhang wrote:
> We need to check whether alloc_page() succeeded before continuing.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> drivers/pci/controller/dwc/pcie-designware-host.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c
> b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 0a19de946351..9e04e8ef3aa4 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -303,6 +303,11 @@ void dw_pcie_msi_init(struct pcie_port *pp)
> u64 msi_target;
>
> pp->msi_page = alloc_page(GFP_KERNEL);
> + if (!pp->msi_page) {
> + dev_err(dev, "Failed to alloc MSI page\n");

A failing allocation will already scream, so there is no need to
add insult to injury.

More importantly, what is this MSI page ever used for? If I remember
well, this is just a random address that never gets written to.

So why do we allocate a page here? Why do we bother with this DMA
mapping?
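
For reference, the mapped address is only ever used to program the
controller's MSI target registers (condensed from dw_pcie_msi_init());
the CPU never reads or writes the page itself:

	msi_target = (u64)pp->msi_data;

	dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, lower_32_bits(msi_target));
	dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));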

M.
--
Who you jivin' with that Cosmik Debris?

2020-09-29 18:05:24

by Jon Hunter

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling


On 29/09/2020 18:25, Marc Zyngier wrote:
> On 2020-09-29 14:22, Jon Hunter wrote:
>> Hi Jisheng,
>>
>> On 29/09/2020 11:48, Jisheng Zhang wrote:
>>> Hi Jon,
>>>
>>> On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:
>>>
>>>>
>>>> On 24/09/2020 12:05, Jisheng Zhang wrote:
>>>>> Improve the msi code:
>>>>> 1. Add proper error handling.
>>>>> 2. Move dw_pcie_msi_init() from each users to designware host to solve
>>>>> msi page leakage in resume path.
>>>>
>>>> Apologies if this is slightly off topic, but I have been meaning to ask
>>>> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver,
>>>> whenever we
>>>> hotplug CPUs we see the following warnings ...
>>>>
>>>>  [      79.068351] WARNING KERN IRQ70: set affinity failed(-22).
>>>>  [      79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>>>>
>>>
>>> I tried to reproduce this issue on Synaptics SoC, but can't reproduce
>>> it.
>>> Per my understanding of the code in kernel/irq/cpuhotplug.c, this
>>> warning
>>> happened when we migrate irqs away from the offline cpu, this implicitly
>>> implies that before this point the irq has bind to the offline cpu,
>>> but how
>>> could this happen given current dw_pci_msi_set_affinity() implementation
>>> always return -EINVAL
>>
>> By default the smp_affinity should be set so that all CPUs can be
>> interrupted ...
>>
>> $ cat /proc/irq/70/smp_affinity
>> 0xff
>>
>> In my case there are 8 CPUs and so 0xff implies that the interrupt can
>> be triggered on any of the 8 CPUs.
>>
>> Do you see the set_affinity callback being called for the DWC irqchip in
>> migrate_one_irq()?
>
> The problem is common to all MSI implementations that end up muxing
> all the end-point MSIs into a single interrupt. With these systems,
> you cannot set the affinity of individual MSIs (they don't target a
> CPU, they target another interrupt... braindead). Only the mux
> interrupt can have its affinity changed.
>
> So returning -EINVAL is the right thing to do.

Right, so if that is the case, then surely there should be some way to
avoid these warnings because they are not relevant?

Cheers
Jon

--
nvpublic

2020-09-29 18:14:40

by Marc Zyngier

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

On 2020-09-29 19:02, Jon Hunter wrote:
> On 29/09/2020 18:25, Marc Zyngier wrote:
>> On 2020-09-29 14:22, Jon Hunter wrote:
>>> Hi Jisheng,
>>>
>>> On 29/09/2020 11:48, Jisheng Zhang wrote:
>>>> Hi Jon,
>>>>
>>>> On Fri, 25 Sep 2020 09:53:45 +0100 Jon Hunter wrote:
>>>>
>>>>>
>>>>> On 24/09/2020 12:05, Jisheng Zhang wrote:
>>>>>> Improve the msi code:
>>>>>> 1. Add proper error handling.
>>>>>> 2. Move dw_pcie_msi_init() from each users to designware host to
>>>>>> solve
>>>>>> msi page leakage in resume path.
>>>>>
>>>>> Apologies if this is slightly off topic, but I have been meaning to
>>>>> ask
>>>>> about MSIs and PCI. On Tegra194 which uses the DWC PCI driver,
>>>>> whenever we
>>>>> hotplug CPUs we see the following warnings ...
>>>>>
>>>>>  [      79.068351] WARNING KERN IRQ70: set affinity failed(-22).
>>>>>  [      79.068362] WARNING KERN IRQ71: set affinity failed(-22).
>>>>>
>>>>
>>>> I tried to reproduce this issue on Synaptics SoC, but can't
>>>> reproduce
>>>> it.
>>>> Per my understanding of the code in kernel/irq/cpuhotplug.c, this
>>>> warning
>>>> happened when we migrate irqs away from the offline cpu, this
>>>> implicitly
>>>> implies that before this point the irq has bind to the offline cpu,
>>>> but how
>>>> could this happen given current dw_pci_msi_set_affinity()
>>>> implementation
>>>> always return -EINVAL
>>>
>>> By default the smp_affinity should be set so that all CPUs can be
>>> interrupted ...
>>>
>>> $ cat /proc/irq/70/smp_affinity
>>> 0xff
>>>
>>> In my case there are 8 CPUs and so 0xff implies that the interrupt
>>> can
>>> be triggered on any of the 8 CPUs.
>>>
>>> Do you see the set_affinity callback being called for the DWC irqchip
>>> in
>>> migrate_one_irq()?
>>
>> The problem is common to all MSI implementations that end up muxing
>> all the end-point MSIs into a single interrupt. With these systems,
>> you cannot set the affinity of individual MSIs (they don't target a
>> CPU, they target another interrupt... braindead). Only the mux
>> interrupt can have its affinity changed.
>>
>> So returning -EINVAL is the right thing to do.
>
> Right, so if that is the case, then surely there should be some way to
> avoid these warnings because they are not relevant?

I don't think there is a way to do this, because the core code
doesn't (and cannot) know the exact interrupt topology.

The only alternative would be to change the affinity of the mux
interrupt when an MSI affinity changes, but that tends to break
userspace (irqbalance, for example).

M.
--
Jazz is not dead. It just smells funny...

2020-09-30 01:27:55

by Jisheng Zhang

Subject: Re: [PATCH v2 2/5] PCI: dwc: Check alloc_page() return value

Hi Marc,

On Tue, 29 Sep 2020 18:29:52 +0100 Marc Zyngier wrote:

>
>
> On 2020-09-24 12:06, Jisheng Zhang wrote:
> > We need to check alloc_page() succeed or not before continuing.
> >
> > Signed-off-by: Jisheng Zhang <[email protected]>
> > ---
> > drivers/pci/controller/dwc/pcie-designware-host.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c
> > b/drivers/pci/controller/dwc/pcie-designware-host.c
> > index 0a19de946351..9e04e8ef3aa4 100644
> > --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> > +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> > @@ -303,6 +303,11 @@ void dw_pcie_msi_init(struct pcie_port *pp)
> > u64 msi_target;
> >
> > pp->msi_page = alloc_page(GFP_KERNEL);
> > + if (!pp->msi_page) {
> > + dev_err(dev, "Failed to alloc MSI page\n");
>
> A failing allocation will already scream, so there is no need to
> add insult to injury.
>
> More importantly, what is this MSI page ever used for? If I remember
> well, this is just a random address that never gets written to.
>
> So why do we allocate a page here? Why do we bother with this DMA
> mapping?
>

Ard and Rob also pointed out that there's no need to allocate a page;
instead, we can use an address in the driver data as the MSI address.
So I refactored the patches and verified this solution works fine.
Could you please review the V5?
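
A minimal sketch of that idea (assuming an msi_msg member added to struct
pcie_port; the field name and exact mapping attributes in the real V5 may
differ): map a dummy location inside the driver data instead of allocating
a page. Nothing ever writes to the MSI target address (the write transaction
is terminated by the controller), so any DMA-able address works:

	pp->msi_data = dma_map_single_attrs(dev, &pp->msi_msg,
					    sizeof(pp->msi_msg),
					    DMA_FROM_DEVICE,
					    DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, pp->msi_data)) {
		dev_err(dev, "Failed to map MSI data\n");
		return -ENOMEM;
	}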


Thanks

2020-10-06 06:28:29

by Vidya Sagar

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

Hi,
I would like to verify this series along with the other series "PCI:
dwc: fix two MSI issues" on Tegra194. I tried to apply these series on
both linux-next and Lorenzo's pci/dwc branches, but there seem to be
non-trivial conflicts. Could you please tell me which branch I can use
to apply these series cleanly?
FWIW, I acknowledge that the existing code does leak the MSI target page
every time the system goes through a suspend-resume sequence on Tegra194.

Thanks,
Vidya Sagar

On 9/24/2020 4:35 PM, Jisheng Zhang wrote:
>
> Improve the MSI code:
> 1. Add proper error handling.
> 2. Move dw_pcie_msi_init() from each user to the designware host to fix
> the MSI page leak in the resume path.
>
> Since v1:
> - add proper error handling patches.
> - fix the MSI page leak by moving dw_pcie_msi_init() from each
> user to the designware host
>
>
> Jisheng Zhang (5):
> PCI: dwc: Call dma_unmap_page() before freeing the msi page
> PCI: dwc: Check alloc_page() return value
> PCI: dwc: Rename dw_pcie_free_msi to dw_pcie_msi_deinit
> PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled
> PCI: dwc: Move dw_pcie_msi_init() from each user to designware host
>
> drivers/pci/controller/dwc/pci-dra7xx.c | 1 +
> drivers/pci/controller/dwc/pci-exynos.c | 2 -
> drivers/pci/controller/dwc/pci-imx6.c | 3 --
> drivers/pci/controller/dwc/pci-meson.c | 8 ----
> drivers/pci/controller/dwc/pcie-artpec6.c | 10 -----
> .../pci/controller/dwc/pcie-designware-host.c | 43 +++++++++++++------
> .../pci/controller/dwc/pcie-designware-plat.c | 3 --
> drivers/pci/controller/dwc/pcie-designware.h | 9 +++-
> drivers/pci/controller/dwc/pcie-histb.c | 3 --
> drivers/pci/controller/dwc/pcie-kirin.c | 3 --
> drivers/pci/controller/dwc/pcie-qcom.c | 3 --
> drivers/pci/controller/dwc/pcie-spear13xx.c | 1 -
> drivers/pci/controller/dwc/pcie-tegra194.c | 2 -
> drivers/pci/controller/dwc/pcie-uniphier.c | 9 +---
> 14 files changed, 38 insertions(+), 62 deletions(-)
>
> --
> 2.28.0
>

2020-10-06 06:39:28

by Jisheng Zhang

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling

On Tue, 6 Oct 2020 11:56:34 +0530 Vidya Sagar wrote:

>
>
> Hi,

Hi,

> I would like to verify this series along with the other series "PCI:
> dwc: fix two MSI issues" on Tegra194. I tried to apply these series on
> both linux-next and Lorenzo's pci/dwc branches, but there seem to be
> non-trivial conflicts. Could you please tell me which branch I can use
> to apply these series cleanly?

This is a fix, so I thought the series would be picked up for v5.9, so
the series is based on v5.9-rcN.

Could you please try v5 (https://lkml.org/lkml/2020/9/29/2511) on v5.9-rc7?


Thanks

> FWIW, I acknowledge that the existing code does leak the MSI target page
> every time the system goes through a suspend-resume sequence on Tegra194.
>
> Thanks,
> Vidya Sagar
>
> On 9/24/2020 4:35 PM, Jisheng Zhang wrote:
> >
> > Improve the msi code:
> > 1. Add proper error handling.
> > 2. Move dw_pcie_msi_init() from each users to designware host to solve
> > msi page leakage in resume path.
> >
> > Since v1:
> > - add proper error handling patches.
> > - solve the msi page leakage by moving dw_pcie_msi_init() from each
> > users to designware host
> >
> >
> > Jisheng Zhang (5):
> > PCI: dwc: Call dma_unmap_page() before freeing the msi page
> > PCI: dwc: Check alloc_page() return value
> > PCI: dwc: Rename dw_pcie_free_msi to dw_pcie_msi_deinit
> > PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled
> > PCI: dwc: Move dw_pcie_msi_init() from each users to designware host
> >
> > drivers/pci/controller/dwc/pci-dra7xx.c | 1 +
> > drivers/pci/controller/dwc/pci-exynos.c | 2 -
> > drivers/pci/controller/dwc/pci-imx6.c | 3 --
> > drivers/pci/controller/dwc/pci-meson.c | 8 ----
> > drivers/pci/controller/dwc/pcie-artpec6.c | 10 -----
> > .../pci/controller/dwc/pcie-designware-host.c | 43 +++++++++++++------
> > .../pci/controller/dwc/pcie-designware-plat.c | 3 --
> > drivers/pci/controller/dwc/pcie-designware.h | 9 +++-
> > drivers/pci/controller/dwc/pcie-histb.c | 3 --
> > drivers/pci/controller/dwc/pcie-kirin.c | 3 --
> > drivers/pci/controller/dwc/pcie-qcom.c | 3 --
> > drivers/pci/controller/dwc/pcie-spear13xx.c | 1 -
> > drivers/pci/controller/dwc/pcie-tegra194.c | 2 -
> > drivers/pci/controller/dwc/pcie-uniphier.c | 9 +---
> > 14 files changed, 38 insertions(+), 62 deletions(-)
> >
> > --
> > 2.28.0
> >

2020-10-08 05:35:22

by Vidya Sagar

Subject: Re: [PATCH v2 0/5] PCI: dwc: improve msi handling



On 10/6/2020 12:06 PM, Jisheng Zhang wrote:
>
> On Tue, 6 Oct 2020 11:56:34 +0530 Vidya Sagar wrote:
>
>>
>>
>> Hi,
>
> Hi,
>
>> I would like to verify this series along with the other series "PCI:
>> dwc: fix two MSI issues" on Tegra194. I tried to apply these series on
>> both linux-next and Lorenzo's pci/dwc branches but there seem to be non
>> trivial conflicts. Could you please tell me which branch I can use and
>> apply these series cleanly?
>
> This is a fix, so I thought the series would be picked up for v5.9, so
> the series is based on v5.9-rcN.
>
> Could you please try v5 (https://lkml.org/lkml/2020/9/29/2511) on v5.9-rc7?
I tried this series on top of v5.9-rc7 and it worked as expected on the
Tegra194 platform. Also, I couldn't cleanly apply the other series "PCI:
dwc: fix two MSI issues" on top. Could you please rebase them?

Thanks,
Vidya Sagar
>
>
> Thanks
>
>> FWIW, I acknowledge that the existing code does leak MSI target page
>> every time system goes through suspend-resume sequence on Tegra194.
>>
>> Thanks,
>> Vidya Sagar
>>
>> On 9/24/2020 4:35 PM, Jisheng Zhang wrote:
>>>
>>> Improve the msi code:
>>> 1. Add proper error handling.
>>> 2. Move dw_pcie_msi_init() from each users to designware host to solve
>>> msi page leakage in resume path.
>>>
>>> Since v1:
>>> - add proper error handling patches.
>>> - solve the msi page leakage by moving dw_pcie_msi_init() from each
>>> users to designware host
>>>
>>>
>>> Jisheng Zhang (5):
>>> PCI: dwc: Call dma_unmap_page() before freeing the msi page
>>> PCI: dwc: Check alloc_page() return value
>>> PCI: dwc: Rename dw_pcie_free_msi to dw_pcie_msi_deinit
>>> PCI: dwc: Skip PCIE_MSI_INTR0* programming if MSI is disabled
>>> PCI: dwc: Move dw_pcie_msi_init() from each users to designware host
>>>
>>> drivers/pci/controller/dwc/pci-dra7xx.c | 1 +
>>> drivers/pci/controller/dwc/pci-exynos.c | 2 -
>>> drivers/pci/controller/dwc/pci-imx6.c | 3 --
>>> drivers/pci/controller/dwc/pci-meson.c | 8 ----
>>> drivers/pci/controller/dwc/pcie-artpec6.c | 10 -----
>>> .../pci/controller/dwc/pcie-designware-host.c | 43 +++++++++++++------
>>> .../pci/controller/dwc/pcie-designware-plat.c | 3 --
>>> drivers/pci/controller/dwc/pcie-designware.h | 9 +++-
>>> drivers/pci/controller/dwc/pcie-histb.c | 3 --
>>> drivers/pci/controller/dwc/pcie-kirin.c | 3 --
>>> drivers/pci/controller/dwc/pcie-qcom.c | 3 --
>>> drivers/pci/controller/dwc/pcie-spear13xx.c | 1 -
>>> drivers/pci/controller/dwc/pcie-tegra194.c | 2 -
>>> drivers/pci/controller/dwc/pcie-uniphier.c | 9 +---
>>> 14 files changed, 38 insertions(+), 62 deletions(-)
>>>
>>> --
>>> 2.28.0
>>>
>

2020-10-09 08:42:46

by Jisheng Zhang

Subject: [PATCH] PCI: dwc: Move dw_pcie_msi_init() from each user to designware host

Let the designware host take care of the integrated MSI init rather
than duplicating dw_pcie_msi_init() in each user.

Signed-off-by: Jisheng Zhang <[email protected]>
---

Hi Vidya,

After V7, only this patch is left; the others in v2 are no longer needed.
There's one more cleanup opportunity: we can also move dw_pcie_free_msi()
to the designware host and make it static if we can clean up dra7xx. I see
Rob is working on some larger MSI clean-ups; maybe this will be done there.

Thanks

drivers/pci/controller/dwc/pci-dra7xx.c | 1 -
drivers/pci/controller/dwc/pci-exynos.c | 2 --
drivers/pci/controller/dwc/pci-imx6.c | 1 -
drivers/pci/controller/dwc/pci-meson.c | 2 --
drivers/pci/controller/dwc/pcie-artpec6.c | 1 -
drivers/pci/controller/dwc/pcie-designware-host.c | 5 +++--
drivers/pci/controller/dwc/pcie-designware-plat.c | 1 -
drivers/pci/controller/dwc/pcie-designware.h | 5 -----
drivers/pci/controller/dwc/pcie-histb.c | 1 -
drivers/pci/controller/dwc/pcie-kirin.c | 1 -
drivers/pci/controller/dwc/pcie-qcom.c | 1 -
drivers/pci/controller/dwc/pcie-spear13xx.c | 4 +---
drivers/pci/controller/dwc/pcie-tegra194.c | 2 --
drivers/pci/controller/dwc/pcie-uniphier.c | 2 --
14 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/drivers/pci/controller/dwc/pci-dra7xx.c b/drivers/pci/controller/dwc/pci-dra7xx.c
index 6d012d2b1e90..a5edaa6b6b58 100644
--- a/drivers/pci/controller/dwc/pci-dra7xx.c
+++ b/drivers/pci/controller/dwc/pci-dra7xx.c
@@ -185,7 +185,6 @@ static int dra7xx_pcie_host_init(struct pcie_port *pp)

dra7xx_pcie_establish_link(pci);
dw_pcie_wait_for_link(pci);
- dw_pcie_msi_init(pp);
dra7xx_pcie_enable_interrupts(dra7xx);

return 0;
diff --git a/drivers/pci/controller/dwc/pci-exynos.c b/drivers/pci/controller/dwc/pci-exynos.c
index 242683cde04a..97c166885277 100644
--- a/drivers/pci/controller/dwc/pci-exynos.c
+++ b/drivers/pci/controller/dwc/pci-exynos.c
@@ -298,8 +298,6 @@ static void exynos_pcie_msi_init(struct exynos_pcie *ep)
struct pcie_port *pp = &pci->pp;
u32 val;

- dw_pcie_msi_init(pp);
-
/* enable MSI interrupt */
val = exynos_pcie_readl(ep->mem_res->elbi_base, PCIE_IRQ_EN_LEVEL);
val |= IRQ_MSI_ENABLE;
diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
index 337c74cbdfdb..cf52eb5d7d2e 100644
--- a/drivers/pci/controller/dwc/pci-imx6.c
+++ b/drivers/pci/controller/dwc/pci-imx6.c
@@ -836,7 +836,6 @@ static int imx6_pcie_host_init(struct pcie_port *pp)
imx6_setup_phy_mpll(imx6_pcie);
dw_pcie_setup_rc(pp);
imx6_pcie_establish_link(imx6_pcie);
- dw_pcie_msi_init(pp);

return 0;
}
diff --git a/drivers/pci/controller/dwc/pci-meson.c b/drivers/pci/controller/dwc/pci-meson.c
index 33deb290c4e7..11bfc42fac1c 100644
--- a/drivers/pci/controller/dwc/pci-meson.c
+++ b/drivers/pci/controller/dwc/pci-meson.c
@@ -387,8 +387,6 @@ static int meson_pcie_host_init(struct pcie_port *pp)
if (ret)
return ret;

- dw_pcie_msi_init(pp);
-
return 0;
}

diff --git a/drivers/pci/controller/dwc/pcie-artpec6.c b/drivers/pci/controller/dwc/pcie-artpec6.c
index 929448e9e0bc..73d4bf99c737 100644
--- a/drivers/pci/controller/dwc/pcie-artpec6.c
+++ b/drivers/pci/controller/dwc/pcie-artpec6.c
@@ -331,7 +331,6 @@ static int artpec6_pcie_host_init(struct pcie_port *pp)
dw_pcie_setup_rc(pp);
artpec6_pcie_establish_link(pci);
dw_pcie_wait_for_link(pci);
- dw_pcie_msi_init(pp);

return 0;
}
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index d02c7e74738d..7622f114223e 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -275,7 +275,7 @@ void dw_pcie_free_msi(struct pcie_port *pp)
}
}

-void dw_pcie_msi_init(struct pcie_port *pp)
+static void dw_pcie_msi_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u64 msi_target = (u64)pp->msi_data;
@@ -287,7 +287,6 @@ void dw_pcie_msi_init(struct pcie_port *pp)
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, lower_32_bits(msi_target));
dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));
}
-EXPORT_SYMBOL_GPL(dw_pcie_msi_init);

int dw_pcie_host_init(struct pcie_port *pp)
{
@@ -545,6 +544,8 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
~0);
}
}
+ if (pci_msi_enabled() && pp->msi_data)
+ dw_pcie_msi_init(pp);

/* Setup RC BARs */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
diff --git a/drivers/pci/controller/dwc/pcie-designware-plat.c b/drivers/pci/controller/dwc/pcie-designware-plat.c
index e3e300669ed5..9ccf69a3dcf4 100644
--- a/drivers/pci/controller/dwc/pcie-designware-plat.c
+++ b/drivers/pci/controller/dwc/pcie-designware-plat.c
@@ -39,7 +39,6 @@ static int dw_plat_pcie_host_init(struct pcie_port *pp)

dw_pcie_setup_rc(pp);
dw_pcie_wait_for_link(pci);
- dw_pcie_msi_init(pp);

return 0;
}
diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h
index 9d2f511f13fa..f9f6b276a11a 100644
--- a/drivers/pci/controller/dwc/pcie-designware.h
+++ b/drivers/pci/controller/dwc/pcie-designware.h
@@ -365,7 +365,6 @@ static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci)

#ifdef CONFIG_PCIE_DW_HOST
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
-void dw_pcie_msi_init(struct pcie_port *pp);
void dw_pcie_free_msi(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
@@ -379,10 +378,6 @@ static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
return IRQ_NONE;
}

-static inline void dw_pcie_msi_init(struct pcie_port *pp)
-{
-}
-
static inline void dw_pcie_free_msi(struct pcie_port *pp)
{
}
diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
index afc1abbe49aa..aa9eaee2c4bd 100644
--- a/drivers/pci/controller/dwc/pcie-histb.c
+++ b/drivers/pci/controller/dwc/pcie-histb.c
@@ -202,7 +202,6 @@ static int histb_pcie_host_init(struct pcie_port *pp)
pp->bridge->ops = &histb_pci_ops;

histb_pcie_establish_link(pp);
- dw_pcie_msi_init(pp);

return 0;
}
diff --git a/drivers/pci/controller/dwc/pcie-kirin.c b/drivers/pci/controller/dwc/pcie-kirin.c
index 6f01ae013326..dc30e43a6be9 100644
--- a/drivers/pci/controller/dwc/pcie-kirin.c
+++ b/drivers/pci/controller/dwc/pcie-kirin.c
@@ -429,7 +429,6 @@ static int kirin_pcie_host_init(struct pcie_port *pp)
pp->bridge->ops = &kirin_pci_ops;

kirin_pcie_establish_link(pp);
- dw_pcie_msi_init(pp);

return 0;
}
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 5eb28251dbee..4f66e534e011 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -1271,7 +1271,6 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
}

dw_pcie_setup_rc(pp);
- dw_pcie_msi_init(pp);

qcom_ep_reset_deassert(pcie);

diff --git a/drivers/pci/controller/dwc/pcie-spear13xx.c b/drivers/pci/controller/dwc/pcie-spear13xx.c
index e348225f651f..c75550573a1e 100644
--- a/drivers/pci/controller/dwc/pcie-spear13xx.c
+++ b/drivers/pci/controller/dwc/pcie-spear13xx.c
@@ -129,11 +129,9 @@ static void spear13xx_pcie_enable_interrupts(struct spear13xx_pcie *spear13xx_pc
struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;

/* Enable MSI interrupt */
- if (IS_ENABLED(CONFIG_PCI_MSI)) {
- dw_pcie_msi_init(pp);
+ if (IS_ENABLED(CONFIG_PCI_MSI))
writel(readl(&app_reg->int_mask) |
MSI_CTRL_INT, &app_reg->int_mask);
- }
}

static int spear13xx_pcie_link_up(struct dw_pcie *pci)
diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
index aa511ec0d800..b093be21cab2 100644
--- a/drivers/pci/controller/dwc/pcie-tegra194.c
+++ b/drivers/pci/controller/dwc/pcie-tegra194.c
@@ -772,8 +772,6 @@ static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val;

- dw_pcie_msi_init(pp);
-
/* Enable MSI interrupt generation */
val = appl_readl(pcie, APPL_INTR_EN_L0_0);
val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
diff --git a/drivers/pci/controller/dwc/pcie-uniphier.c b/drivers/pci/controller/dwc/pcie-uniphier.c
index 48176265c867..c19bdfed4337 100644
--- a/drivers/pci/controller/dwc/pcie-uniphier.c
+++ b/drivers/pci/controller/dwc/pcie-uniphier.c
@@ -322,8 +322,6 @@ static int uniphier_pcie_host_init(struct pcie_port *pp)
if (ret)
return ret;

- dw_pcie_msi_init(pp);
-
return 0;
}

--
2.28.0