2023-12-13 00:36:34

by Igor Mammedov

Subject: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

A previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already")
introduced a workaround to avoid a race between the SCSI_SCAN_ASYNC job and
bridge reconfiguration in the case of a single HBA hotplug.
However, in a virt environment it's possible to pause the machine, hotplug several
HBAs, and let the machine run. That can hit the same race when the 2nd hotplugged
HBA starts re-configuring the bridge.
Do the same thing as SHPC and throttle down hotplug of the 2nd and subsequent
devices within a single hotplug event.

Signed-off-by: Igor Mammedov <[email protected]>
---
drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index 6b11609927d6..30bca2086b24 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -37,6 +37,7 @@
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/acpi.h>
+#include <linux/delay.h>

#include "../pci.h"
#include "acpiphp.h"
@@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
{
struct acpiphp_slot *slot;
+ int nr_hp_slots = 0;

/* Bail out if the bridge is going away. */
if (bridge->is_going_away)
@@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)

/* configure all functions */
if (slot->flags != SLOT_ENABLED) {
+ if (nr_hp_slots)
+ msleep(1000);
+
+ ++nr_hp_slots;
enable_slot(slot, true);
}
} else {
--
2.39.3


2023-12-13 07:26:41

by Greg Kroah-Hartman

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, Dec 13, 2023 at 01:36:14AM +0100, Igor Mammedov wrote:
> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> bridge reconfiguration in case of single HBA hotplug.
> However in virt environment it's possible to pause machine hotplug several
> HBAs and let machine run. That can hit the same race when 2nd hotplugged
> HBA will start re-configuring bridge.
> Do the same thing as SHPC and throttle down hotplug of 2nd and up
> devices within single hotplug event.
>
> Signed-off-by: Igor Mammedov <[email protected]>
> ---
> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> index 6b11609927d6..30bca2086b24 100644
> --- a/drivers/pci/hotplug/acpiphp_glue.c
> +++ b/drivers/pci/hotplug/acpiphp_glue.c
> @@ -37,6 +37,7 @@
> #include <linux/mutex.h>
> #include <linux/slab.h>
> #include <linux/acpi.h>
> +#include <linux/delay.h>
>
> #include "../pci.h"
> #include "acpiphp.h"
> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> {
> struct acpiphp_slot *slot;
> + int nr_hp_slots = 0;
>
> /* Bail out if the bridge is going away. */
> if (bridge->is_going_away)
> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>
> /* configure all functions */
> if (slot->flags != SLOT_ENABLED) {
> + if (nr_hp_slots)
> + msleep(1000);
> +
> + ++nr_hp_slots;
> enable_slot(slot, true);
> }
> } else {
> --
> 2.39.3
>
>

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>

2023-12-13 08:14:30

by Dongli Zhang

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

Hi Igor,


On 12/12/23 16:36, Igor Mammedov wrote:
> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> bridge reconfiguration in case of single HBA hotplug.
> However in virt environment it's possible to pause machine hotplug several
> HBAs and let machine run. That can hit the same race when 2nd hotplugged

Would you mind explaining what "pause machine hotplug several HBAs and
let machine run" indicates?

Thank you very much!

Dongli Zhang

> HBA will start re-configuring bridge.
> Do the same thing as SHPC and throttle down hotplug of 2nd and up
> devices within single hotplug event.
>
> Signed-off-by: Igor Mammedov <[email protected]>
> ---
> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> index 6b11609927d6..30bca2086b24 100644
> --- a/drivers/pci/hotplug/acpiphp_glue.c
> +++ b/drivers/pci/hotplug/acpiphp_glue.c
> @@ -37,6 +37,7 @@
> #include <linux/mutex.h>
> #include <linux/slab.h>
> #include <linux/acpi.h>
> +#include <linux/delay.h>
>
> #include "../pci.h"
> #include "acpiphp.h"
> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> {
> struct acpiphp_slot *slot;
> + int nr_hp_slots = 0;
>
> /* Bail out if the bridge is going away. */
> if (bridge->is_going_away)
> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>
> /* configure all functions */
> if (slot->flags != SLOT_ENABLED) {
> + if (nr_hp_slots)
> + msleep(1000);
> +
> + ++nr_hp_slots;
> enable_slot(slot, true);
> }
> } else {

2023-12-13 09:47:54

by Fiona Ebner

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

Am 13.12.23 um 01:36 schrieb Igor Mammedov:
> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> bridge reconfiguration in case of single HBA hotplug.
> However in virt environment it's possible to pause machine hotplug several
> HBAs and let machine run. That can hit the same race when 2nd hotplugged
> HBA will start re-configuring bridge.
> Do the same thing as SHPC and throttle down hotplug of 2nd and up
> devices within single hotplug event.
>
> Signed-off-by: Igor Mammedov <[email protected]>

With only the first patch applied, I could reproduce the issue described
here (i.e. pausing the vCPUs while doing multiple hotplugs), and this
patch makes that scenario work too:

Tested-by: Fiona Ebner <[email protected]>

2023-12-13 10:06:16

by Igor Mammedov

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, 13 Dec 2023 00:13:37 -0800
Dongli Zhang <[email protected]> wrote:

> Hi Igor,
>
>
> On 12/12/23 16:36, Igor Mammedov wrote:
> > previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> > introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> > bridge reconfiguration in case of single HBA hotplug.
> > However in virt environment it's possible to pause machine hotplug several
> > HBAs and let machine run. That can hit the same race when 2nd hotplugged
>
> Would you mind helping explain what does "pause machine hotplug several HBAs and
> let machine run" indicate?

A qemu example would be:
(qemu) stop
(qemu) device_add vhost-scsi-pci,wwpn=naa.5001405324af0985,id=vhost01,bus=bridge1,addr=8
(qemu) device_add vhost-scsi-pci,wwpn=naa.5001405324af0986,id=vhost02,bus=bridge1,addr=0
(qemu) cont

this way, when the machine continues to run, the acpiphp code will see 2 HBAs at once
and try to process one right after another. So the [1/2] patch is not enough
to cover the above case, hence the same hack SHPC employs, i.e. adding a delay.
However, 2 separate hotplug events, as in your reproducer, should be covered
by the 1st patch.

> Thank you very much!
>
> Dongli Zhang
>
> > HBA will start re-configuring bridge.
> > Do the same thing as SHPC and throttle down hotplug of 2nd and up
> > devices within single hotplug event.
> >
> > Signed-off-by: Igor Mammedov <[email protected]>
> > ---
> > drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> > index 6b11609927d6..30bca2086b24 100644
> > --- a/drivers/pci/hotplug/acpiphp_glue.c
> > +++ b/drivers/pci/hotplug/acpiphp_glue.c
> > @@ -37,6 +37,7 @@
> > #include <linux/mutex.h>
> > #include <linux/slab.h>
> > #include <linux/acpi.h>
> > +#include <linux/delay.h>
> >
> > #include "../pci.h"
> > #include "acpiphp.h"
> > @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> > static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > {
> > struct acpiphp_slot *slot;
> > + int nr_hp_slots = 0;
> >
> > /* Bail out if the bridge is going away. */
> > if (bridge->is_going_away)
> > @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> >
> > /* configure all functions */
> > if (slot->flags != SLOT_ENABLED) {
> > + if (nr_hp_slots)
> > + msleep(1000);
> > +
> > + ++nr_hp_slots;
> > enable_slot(slot, true);
> > }
> > } else {
>

2023-12-13 13:08:32

by Rafael J. Wysocki

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
>
> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> bridge reconfiguration in case of single HBA hotplug.
> However in virt environment it's possible to pause machine hotplug several
> HBAs and let machine run. That can hit the same race when 2nd hotplugged
> HBA will start re-configuring bridge.
> Do the same thing as SHPC and throttle down hotplug of 2nd and up
> devices within single hotplug event.
>
> Signed-off-by: Igor Mammedov <[email protected]>
> ---
> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> index 6b11609927d6..30bca2086b24 100644
> --- a/drivers/pci/hotplug/acpiphp_glue.c
> +++ b/drivers/pci/hotplug/acpiphp_glue.c
> @@ -37,6 +37,7 @@
> #include <linux/mutex.h>
> #include <linux/slab.h>
> #include <linux/acpi.h>
> +#include <linux/delay.h>
>
> #include "../pci.h"
> #include "acpiphp.h"
> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> {
> struct acpiphp_slot *slot;
> + int nr_hp_slots = 0;
>
> /* Bail out if the bridge is going away. */
> if (bridge->is_going_away)
> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>
> /* configure all functions */
> if (slot->flags != SLOT_ENABLED) {
> + if (nr_hp_slots)
> + msleep(1000);

Why is 1000 considered the most suitable number here? Any chance to
define a symbol for it?

And won't this affect the cases when the race in question is not a concern?

Also, adding arbitrary timeouts is not the most robust way of
addressing race conditions IMV. Wouldn't it be better to add some
proper synchronization between the pieces of code that can race with
each other?

> +
> + ++nr_hp_slots;
> enable_slot(slot, true);
> }
> } else {
> --

2023-12-13 16:50:00

by Igor Mammedov

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
>
> On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
> >
> > previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> > introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> > bridge reconfiguration in case of single HBA hotplug.
> > However in virt environment it's possible to pause machine hotplug several
> > HBAs and let machine run. That can hit the same race when 2nd hotplugged
> > HBA will start re-configuring bridge.
> > Do the same thing as SHPC and throttle down hotplug of 2nd and up
> > devices within single hotplug event.
> >
> > Signed-off-by: Igor Mammedov <[email protected]>
> > ---
> > drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> > index 6b11609927d6..30bca2086b24 100644
> > --- a/drivers/pci/hotplug/acpiphp_glue.c
> > +++ b/drivers/pci/hotplug/acpiphp_glue.c
> > @@ -37,6 +37,7 @@
> > #include <linux/mutex.h>
> > #include <linux/slab.h>
> > #include <linux/acpi.h>
> > +#include <linux/delay.h>
> >
> > #include "../pci.h"
> > #include "acpiphp.h"
> > @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> > static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > {
> > struct acpiphp_slot *slot;
> > + int nr_hp_slots = 0;
> >
> > /* Bail out if the bridge is going away. */
> > if (bridge->is_going_away)
> > @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> >
> > /* configure all functions */
> > if (slot->flags != SLOT_ENABLED) {
> > + if (nr_hp_slots)
> > + msleep(1000);
>
> Why is 1000 considered the most suitable number here? Any chance to
> define a symbol for it?

The timeout was borrowed from the SHPC hotplug workflow, where it apparently
makes the race harder to reproduce
(though that's no excuse to add more timeouts elsewhere).

> And won't this affect the cases when the race in question is not a concern?

In practice it's not likely, since even in a virt scenario the hypervisor won't
stop the VM to hotplug a device (which defeats the whole purpose of hotplug).

But in the case of a very slow VM (the overcommit case) it's possible for
several HBAs to be hotplugged by the time acpiphp gets a chance
to handle the 1st hotplug event. SHPC is more or less 'safe' with its
1 sec delay.

> Also, adding arbitrary timeouts is not the most robust way of
> addressing race conditions IMV. Wouldn't it be better to add some
> proper synchronization between the pieces of code that can race with
> each other?

I don't like it either; it's a stop-gap measure to hide the regression on
short notice, which I can fix up without much risk in the short time left
before folks leave on holidays.
It's fine to drop the patch, as the chances of this happening are small.
[1/2] should cover the reported cases.

Since it's an RFC, I'm basically asking for opinions on a proper way to fix
SCSI_ASYNC_SCAN running wild while the hotplug is in progress (and maybe
SCSI is not the only user that schedules an async job from device probe).
Adding synchronisation and testing would take time (not something I'd do
this late in the cycle).

So far I'm thinking about adding an rw mutex to the bridge, with the PCI
hotplug subsystem being the writer while SCSI scan jobs would be readers
that wait until the hotplug code says it's safe to proceed.
I plan to work in this direction and give it some testing, unless
someone has a better idea.

>
> > +
> > + ++nr_hp_slots;
> > enable_slot(slot, true);
> > }
> > } else {
> > --
>

2023-12-13 16:55:05

by Michael S. Tsirkin

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, Dec 13, 2023 at 05:49:39PM +0100, Igor Mammedov wrote:
> On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
> >
> > On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
> > >
> > > previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> > > introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> > > bridge reconfiguration in case of single HBA hotplug.
> > > However in virt environment it's possible to pause machine hotplug several
> > > HBAs and let machine run. That can hit the same race when 2nd hotplugged
> > > HBA will start re-configuring bridge.
> > > Do the same thing as SHPC and throttle down hotplug of 2nd and up
> > > devices within single hotplug event.
> > >
> > > Signed-off-by: Igor Mammedov <[email protected]>
> > > ---
> > > drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> > > 1 file changed, 6 insertions(+)
> > >
> > > diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> > > index 6b11609927d6..30bca2086b24 100644
> > > --- a/drivers/pci/hotplug/acpiphp_glue.c
> > > +++ b/drivers/pci/hotplug/acpiphp_glue.c
> > > @@ -37,6 +37,7 @@
> > > #include <linux/mutex.h>
> > > #include <linux/slab.h>
> > > #include <linux/acpi.h>
> > > +#include <linux/delay.h>
> > >
> > > #include "../pci.h"
> > > #include "acpiphp.h"
> > > @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> > > static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > > {
> > > struct acpiphp_slot *slot;
> > > + int nr_hp_slots = 0;
> > >
> > > /* Bail out if the bridge is going away. */
> > > if (bridge->is_going_away)
> > > @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > >
> > > /* configure all functions */
> > > if (slot->flags != SLOT_ENABLED) {
> > > + if (nr_hp_slots)
> > > + msleep(1000);
> >
> > Why is 1000 considered the most suitable number here? Any chance to
> > define a symbol for it?
>
> Timeout was borrowed from SHPC hotplug workflow where it apparently
> makes race harder to reproduce.
> (though it's not excuse to add more timeouts elsewhere)
>
> > And won't this affect the cases when the race in question is not a concern?
>
> In practice it's not likely, since even in virt scenario hypervisor won't
> stop VM to hotplug device (which beats whole purpose of hotplug).
>
> But in case of a very slow VM (overcommit case) it's possible for
> several HBA's to be hotplugged by the time acpiphp gets a chance
> to handle the 1st hotplug event. SHPC is more or less 'safe' with its
> 1sec delay.
>
> > Also, adding arbitrary timeouts is not the most robust way of
> > addressing race conditions IMV. Wouldn't it be better to add some
> > proper synchronization between the pieces of code that can race with
> > each other?
>
> I don't like it either, it's a stop gap measure to hide regression on
> short notice,
> which I can fixup without much risk in short time left, before folks
> leave on holidays.
> It's fine to drop the patch as chances of this happening are small.
> [1/2] should cover reported cases.
>
> Since it's RFC, I basically ask for opinions on a proper way to fix
> SCSI_ASYNC_SCAN
> running wild while the hotplug is in progress (and maybe SCSI is not
> the only user that
> schedules async job from device probe).

Of course not. And things don't have to be scheduled from probe, right?
They can be triggered by an interrupt or userspace activity.

> So adding synchronisation and testing
> would take time (not something I'd do this late in the cycle).
>
> So far I'm thinking about adding rw mutex to bridge with the PCI
> hotplug subsystem
> being a writer while scsi scan jobs would be readers and wait till hotplug code
> says it's safe to proceed.
> I plan to work in this direction and give it some testing, unless
> someone has a better idea.

> >
> > > +
> > > + ++nr_hp_slots;
> > > enable_slot(slot, true);
> > > }
> > > } else {
> > > --
> >

2023-12-13 17:10:10

by Dongli Zhang

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

Hi Igor,

On 12/13/23 08:54, Michael S. Tsirkin wrote:
> On Wed, Dec 13, 2023 at 05:49:39PM +0100, Igor Mammedov wrote:
>> On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
>>>
>>> On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
>>>>
>>>> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
>>>> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
>>>> bridge reconfiguration in case of single HBA hotplug.
>>>> However in virt environment it's possible to pause machine hotplug several
>>>> HBAs and let machine run. That can hit the same race when 2nd hotplugged
>>>> HBA will start re-configuring bridge.
>>>> Do the same thing as SHPC and throttle down hotplug of 2nd and up
>>>> devices within single hotplug event.
>>>>
>>>> Signed-off-by: Igor Mammedov <[email protected]>
>>>> ---
>>>> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
>>>> 1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
>>>> index 6b11609927d6..30bca2086b24 100644
>>>> --- a/drivers/pci/hotplug/acpiphp_glue.c
>>>> +++ b/drivers/pci/hotplug/acpiphp_glue.c
>>>> @@ -37,6 +37,7 @@
>>>> #include <linux/mutex.h>
>>>> #include <linux/slab.h>
>>>> #include <linux/acpi.h>
>>>> +#include <linux/delay.h>
>>>>
>>>> #include "../pci.h"
>>>> #include "acpiphp.h"
>>>> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
>>>> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>>> {
>>>> struct acpiphp_slot *slot;
>>>> + int nr_hp_slots = 0;
>>>>
>>>> /* Bail out if the bridge is going away. */
>>>> if (bridge->is_going_away)
>>>> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>>>
>>>> /* configure all functions */
>>>> if (slot->flags != SLOT_ENABLED) {
>>>> + if (nr_hp_slots)
>>>> + msleep(1000);
>>>
>>> Why is 1000 considered the most suitable number here? Any chance to
>>> define a symbol for it?
>>
>> Timeout was borrowed from SHPC hotplug workflow where it apparently
>> makes race harder to reproduce.
>> (though it's not excuse to add more timeouts elsewhere)
>>
>>> And won't this affect the cases when the race in question is not a concern?
>>
>> In practice it's not likely, since even in virt scenario hypervisor won't
>> stop VM to hotplug device (which beats whole purpose of hotplug).
>>
>> But in case of a very slow VM (overcommit case) it's possible for
>> several HBA's to be hotplugged by the time acpiphp gets a chance
>> to handle the 1st hotplug event. SHPC is more or less 'safe' with its
>> 1sec delay.
>>
>>> Also, adding arbitrary timeouts is not the most robust way of
>>> addressing race conditions IMV. Wouldn't it be better to add some
>>> proper synchronization between the pieces of code that can race with
>>> each other?
>>
>> I don't like it either, it's a stop gap measure to hide regression on
>> short notice,
>> which I can fixup without much risk in short time left, before folks
>> leave on holidays.
>> It's fine to drop the patch as chances of this happening are small.
>> [1/2] should cover reported cases.
>>
>> Since it's RFC, I basically ask for opinions on a proper way to fix
>> SCSI_ASYNC_SCAN
>> running wild while the hotplug is in progress (and maybe SCSI is not
>> the only user that
>> schedules async job from device probe).
>
> Of course not. And things don't have to be scheduled from probe right?
> Can be triggered by an interrupt or userspace activity.

I agree with Michael. TBH, I am curious whether the two patches can
work around/resolve the issue.

Would you mind helping explain whether running enable_slot() for a new PCI
device can impact the other PCI devices already on the bridge?

E.g.,:

1. Attach several virtio-scsi or virtio-net on the same bridge.

2. Trigger workload for those PCI devices. They may do mmio write to kick the
doorbell (to trigger KVM/QEMU ioeventfd) very frequently.

3. Now hot-add an extra PCI device. Since the slot is never enabled, it enables
the slot via enable_slot().

Can I assume the last enable_slot() will temporarily re-configure the bridge
window, so that all the other PCI devices' MMIO loses effect at that point?

Since drivers only kick the doorbell conditionally, they may hang forever.

As I have reported, we used to have the similar issue.

PCI: Probe bridge window attributes once at enumeration-time
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=51c48b310183ab6ba5419edfc6a8de889cc04521


Therefore, can I assume the issue is not caused by re-enabling an
already-enabled slot, but by touching the bridge window more than once?

Thank you very much!

Dongli Zhang

>
>> So adding synchronisation and testing
>> would take time (not something I'd do this late in the cycle).
>>
>> So far I'm thinking about adding rw mutex to bridge with the PCI
>> hotplug subsystem
>> being a writer while scsi scan jobs would be readers and wait till hotplug code
>> says it's safe to proceed.
>> I plan to work in this direction and give it some testing, unless
>> someone has a better idea.
>
>>>
>>>> +
>>>> + ++nr_hp_slots;
>>>> enable_slot(slot, true);
>>>> }
>>>> } else {
>>>> --
>>>
>

2023-12-13 17:26:06

by Dongli Zhang

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

Hi Igor,

On 12/13/23 02:05, Igor Mammedov wrote:
> On Wed, 13 Dec 2023 00:13:37 -0800
> Dongli Zhang <[email protected]> wrote:
>
>> Hi Igor,
>>
>>
>> On 12/12/23 16:36, Igor Mammedov wrote:
>>> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
>>> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
>>> bridge reconfiguration in case of single HBA hotplug.
>>> However in virt environment it's possible to pause machine hotplug several
>>> HBAs and let machine run. That can hit the same race when 2nd hotplugged
>>
>> Would you mind helping explain what does "pause machine hotplug several HBAs and
>> let machine run" indicate?
>
> qemu example would be:
> (qemu) stop
> (qemu) device_add vhost-scsi-pci,wwpn=naa.5001405324af0985,id=vhost01,bus=bridge1,addr=8
> (qemu) device_add vhost-scsi-pci,wwpn=naa.5001405324af0986,id=vhost02,bus=bridge1,addr=0
> (qemu) cont
>
> this way when machine continues to run acpiphp code will see 2 HBAs at once
> and try to process one right after another. So [1/2] patch is not enough
> to cover above case, and hence the same hack SHPC employs by adding delay.
> However 2 separate hotplug events as in your reproducer should be covered
> by the 1st patch.

Thank you very much for the explanation.

That indicates the two PCI devices will be detected and enabled in the same
event. Neither of the two PCI devices was previously enabled.

As mentioned in another email, I do not think this is the way to even work
around the issue, because there are other ways to do MMIO at the same point
in time.

Dongli Zhang

>
>> Thank you very much!
>>
>> Dongli Zhang
>>
>>> HBA will start re-configuring bridge.
>>> Do the same thing as SHPC and throttle down hotplug of 2nd and up
>>> devices within single hotplug event.
>>>
>>> Signed-off-by: Igor Mammedov <[email protected]>
>>> ---
>>> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
>>> 1 file changed, 6 insertions(+)
>>>
>>> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
>>> index 6b11609927d6..30bca2086b24 100644
>>> --- a/drivers/pci/hotplug/acpiphp_glue.c
>>> +++ b/drivers/pci/hotplug/acpiphp_glue.c
>>> @@ -37,6 +37,7 @@
>>> #include <linux/mutex.h>
>>> #include <linux/slab.h>
>>> #include <linux/acpi.h>
>>> +#include <linux/delay.h>
>>>
>>> #include "../pci.h"
>>> #include "acpiphp.h"
>>> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
>>> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>> {
>>> struct acpiphp_slot *slot;
>>> + int nr_hp_slots = 0;
>>>
>>> /* Bail out if the bridge is going away. */
>>> if (bridge->is_going_away)
>>> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>>
>>> /* configure all functions */
>>> if (slot->flags != SLOT_ENABLED) {
>>> + if (nr_hp_slots)
>>> + msleep(1000);
>>> +
>>> + ++nr_hp_slots;
>>> enable_slot(slot, true);
>>> }
>>> } else {
>>
>

2023-12-13 18:51:21

by Igor Mammedov

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, Dec 13, 2023 at 5:54 PM Michael S. Tsirkin <[email protected]> wrote:
>
> On Wed, Dec 13, 2023 at 05:49:39PM +0100, Igor Mammedov wrote:
> > On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
> > >
> > > On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
> > > >
> > > > previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> > > > introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> > > > bridge reconfiguration in case of single HBA hotplug.
> > > > However in virt environment it's possible to pause machine hotplug several
> > > > HBAs and let machine run. That can hit the same race when 2nd hotplugged
> > > > HBA will start re-configuring bridge.
> > > > Do the same thing as SHPC and throttle down hotplug of 2nd and up
> > > > devices within single hotplug event.
> > > >
> > > > Signed-off-by: Igor Mammedov <[email protected]>
> > > > ---
> > > > drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> > > > 1 file changed, 6 insertions(+)
> > > >
> > > > diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> > > > index 6b11609927d6..30bca2086b24 100644
> > > > --- a/drivers/pci/hotplug/acpiphp_glue.c
> > > > +++ b/drivers/pci/hotplug/acpiphp_glue.c
> > > > @@ -37,6 +37,7 @@
> > > > #include <linux/mutex.h>
> > > > #include <linux/slab.h>
> > > > #include <linux/acpi.h>
> > > > +#include <linux/delay.h>
> > > >
> > > > #include "../pci.h"
> > > > #include "acpiphp.h"
> > > > @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> > > > static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > > > {
> > > > struct acpiphp_slot *slot;
> > > > + int nr_hp_slots = 0;
> > > >
> > > > /* Bail out if the bridge is going away. */
> > > > if (bridge->is_going_away)
> > > > @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> > > >
> > > > /* configure all functions */
> > > > if (slot->flags != SLOT_ENABLED) {
> > > > + if (nr_hp_slots)
> > > > + msleep(1000);
> > >
> > > Why is 1000 considered the most suitable number here? Any chance to
> > > define a symbol for it?
> >
> > Timeout was borrowed from SHPC hotplug workflow where it apparently
> > makes race harder to reproduce.
> > (though it's not excuse to add more timeouts elsewhere)
> >
> > > And won't this affect the cases when the race in question is not a concern?
> >
> > In practice it's not likely, since even in virt scenario hypervisor won't
> > stop VM to hotplug device (which beats whole purpose of hotplug).
> >
> > But in case of a very slow VM (overcommit case) it's possible for
> > several HBA's to be hotplugged by the time acpiphp gets a chance
> > to handle the 1st hotplug event. SHPC is more or less 'safe' with its
> > 1sec delay.
> >
> > > Also, adding arbitrary timeouts is not the most robust way of
> > > addressing race conditions IMV. Wouldn't it be better to add some
> > > proper synchronization between the pieces of code that can race with
> > > each other?
> >
> > I don't like it either, it's a stop gap measure to hide regression on
> > short notice,
> > which I can fixup without much risk in short time left, before folks
> > leave on holidays.
> > It's fine to drop the patch as chances of this happening are small.
> > [1/2] should cover reported cases.
> >
> > Since it's RFC, I basically ask for opinions on a proper way to fix
> > SCSI_ASYNC_SCAN
> > running wild while the hotplug is in progress (and maybe SCSI is not
> > the only user that
> > schedules async job from device probe).
>
> Of course not. And things don't have to be scheduled from probe right?
> Can be triggered by an interrupt or userspace activity.

Maybe, but that would probably depend on the driver/device.

In the HBA case, we probably can't depend on an irq or on userspace activity.
The current expectation is that after hotplug the HBA shows up along with
the drives attached to it. I suppose udev could kick off a scan on the HBA
after the device appears, but that just postpones the same race elsewhere,
not to mention making the whole system more complicated/fragile.

The async scan during hotplug also begs a question: it does speed up the
boot process with several HBAs, but how much sense does it make at
hotplug time, where resources are plugged on demand?
(A synchronous scan might even be better.)
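For reference, SCSI already exposes a switch that forces the synchronous
behaviour mentioned above. It is a global workaround rather than a fix for
the race, but it shows the trade-off; this is the stock `scsi_mod.scan`
module parameter, not something new:

```
# kernel command line (or modprobe.d option) forcing synchronous SCSI scans
scsi_mod.scan=sync
```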


> > So adding synchronisation and testing
> > would take time (not something I'd do this late in the cycle).
> >
> > So far I'm thinking about adding rw mutex to bridge with the PCI
> > hotplug subsystem
> > being a writer while scsi scan jobs would be readers and wait till hotplug code
> > says it's safe to proceed.
> > I plan to work in this direction and give it some testing, unless
> > someone has a better idea.
>
> > >
> > > > +
> > > > + ++nr_hp_slots;
> > > > enable_slot(slot, true);
> > > > }
> > > > } else {
> > > > --
> > >
>

2024-01-03 09:55:23

by Igor Mammedov

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

On Wed, 13 Dec 2023 09:09:18 -0800
Dongli Zhang <[email protected]> wrote:

> Hi Igor,
>
> On 12/13/23 08:54, Michael S. Tsirkin wrote:
> > On Wed, Dec 13, 2023 at 05:49:39PM +0100, Igor Mammedov wrote:
> >> On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
> >>>
> >>> On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
> >>>>
> >>>> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
> >>>> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
> >>>> bridge reconfiguration in case of single HBA hotplug.
> >>>> However in virt environment it's possible to pause machine hotplug several
> >>>> HBAs and let machine run. That can hit the same race when 2nd hotplugged
> >>>> HBA will start re-configuring bridge.
> >>>> Do the same thing as SHPC and throttle down hotplug of 2nd and up
> >>>> devices within single hotplug event.
> >>>>
> >>>> Signed-off-by: Igor Mammedov <[email protected]>
> >>>> ---
> >>>> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
> >>>> 1 file changed, 6 insertions(+)
> >>>>
> >>>> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
> >>>> index 6b11609927d6..30bca2086b24 100644
> >>>> --- a/drivers/pci/hotplug/acpiphp_glue.c
> >>>> +++ b/drivers/pci/hotplug/acpiphp_glue.c
> >>>> @@ -37,6 +37,7 @@
> >>>> #include <linux/mutex.h>
> >>>> #include <linux/slab.h>
> >>>> #include <linux/acpi.h>
> >>>> +#include <linux/delay.h>
> >>>>
> >>>> #include "../pci.h"
> >>>> #include "acpiphp.h"
> >>>> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
> >>>> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> >>>> {
> >>>> struct acpiphp_slot *slot;
> >>>> + int nr_hp_slots = 0;
> >>>>
> >>>> /* Bail out if the bridge is going away. */
> >>>> if (bridge->is_going_away)
> >>>> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
> >>>>
> >>>> /* configure all functions */
> >>>> if (slot->flags != SLOT_ENABLED) {
> >>>> + if (nr_hp_slots)
> >>>> + msleep(1000);
> >>>
> >>> Why is 1000 considered the most suitable number here? Any chance to
> >>> define a symbol for it?
> >>
> >> Timeout was borrowed from SHPC hotplug workflow where it apparently
> >> makes race harder to reproduce.
> >> (though it's not excuse to add more timeouts elsewhere)
> >>
> >>> And won't this affect the cases when the race in question is not a concern?
> >>
> >> In practice it's not likely, since even in virt scenario hypervisor won't
> >> stop VM to hotplug device (which beats whole purpose of hotplug).
> >>
> >> But in case of a very slow VM (overcommit case) it's possible for
> >> several HBA's to be hotplugged by the time acpiphp gets a chance
> >> to handle the 1st hotplug event. SHPC is more or less 'safe' with its
> >> 1sec delay.
> >>
> >>> Also, adding arbitrary timeouts is not the most robust way of
> >>> addressing race conditions IMV. Wouldn't it be better to add some
> >>> proper synchronization between the pieces of code that can race with
> >>> each other?
> >>
> >> I don't like it either, it's a stop gap measure to hide regression on
> >> short notice,
> >> which I can fixup without much risk in short time left, before folks
> >> leave on holidays.
> >> It's fine to drop the patch as chances of this happening are small.
> >> [1/2] should cover reported cases.
> >>
> >> Since it's RFC, I basically ask for opinions on a proper way to fix
> >> SCSI_ASYNC_SCAN
> >> running wild while the hotplug is in progress (and maybe SCSI is not
> >> the only user that
> >> schedules async job from device probe).
> >
> > Of course not. And things don't have to be scheduled from probe right?
> > Can be triggered by an interrupt or userspace activity.
>
> I agree with Michael. TBH, I am curious if the two patches can
> workaround/resolve the issue.
>
> Would you mind helping explain if to run enable_slot() for a new PCI device can
> impact the other PCI devices existing on the bridge?
>
> E.g.,:
>
> 1. Attach several virtio-scsi or virtio-net on the same bridge.
>
> 2. Trigger workload for those PCI devices. They may do mmio write to kick the
> doorbell (to trigger KVM/QEMU ioeventfd) very frequently.
>
> 3. Now hot-add an extra PCI device. Since the slot is never enabled, it enables
> the slot via enable_slot().
>
> Can I assume the last enable_slot() will temporarily re-configure the bridge
> window so that all other PCI devices' mmio will lose effect at that time point?

That's likely what would happen.
The same issue should apply to native PCIe and SHPC hotplug, as they also use
pci_assign_unassigned_bridge_resources().

Perhaps drivers have to be taught that the PCI tree is being reconfigured, or
some other approach has to be found to deal with it.
Do you have any ideas?

I'm comparing with a Windows guest, which manages to reconfigure the PCI
hierarchy on the fly (though I haven't tested that under heavy load with
several devices on a bridge).
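One conceivable way to "teach" drivers is a notifier chain on the PCI bus
type. The sketch below is only an illustration of the shape: the
BRIDGE_RECONFIG events do not exist in the kernel today, and my_drv and its
quiesce/resume helpers are invented.

```c
/* Sketch only: the BRIDGE_RECONFIG events below are hypothetical; they
 * illustrate how a driver could be told to quiesce MMIO while the
 * bridge window is being reassigned, then resume afterwards. */
static int my_drv_bus_event(struct notifier_block *nb,
			    unsigned long event, void *data)
{
	struct my_drv *drv = container_of(nb, struct my_drv, nb);

	switch (event) {
	case PCI_BUS_NOTIFY_BRIDGE_RECONFIG_BEGIN:	/* hypothetical */
		my_drv_quiesce(drv);	/* stop doorbell kicks / MMIO */
		break;
	case PCI_BUS_NOTIFY_BRIDGE_RECONFIG_END:	/* hypothetical */
		my_drv_resume(drv);
		break;
	}
	return NOTIFY_OK;
}
```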

> Since drivers always kick the doorbell conditionally, they may hang forever.
>
> As I have reported, we used to have the similar issue.
>
> PCI: Probe bridge window attributes once at enumeration-time
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=51c48b310183ab6ba5419edfc6a8de889cc04521
>
>
> Therefore, can I assume the issue is not because to re-enable an already-enabled
> slot, but to touch the bridge window for more than once?
>
> Thank you very much!
>
> Dongli Zhang
>
> >
> >> So adding synchronisation and testing
> >> would take time (not something I'd do this late in the cycle).
> >>
> >> So far I'm thinking about adding rw mutex to bridge with the PCI
> >> hotplug subsystem
> >> being a writer while scsi scan jobs would be readers and wait till hotplug code
> >> says it's safe to proceed.
> >> I plan to work in this direction and give it some testing, unless
> >> someone has a better idea.
> >
> >>>
> >>>> +
> >>>> + ++nr_hp_slots;
> >>>> enable_slot(slot, true);
> >>>> }
> >>>> } else {
> >>>> --
> >>>
> >
>


2024-01-03 16:20:22

by Dongli Zhang

Subject: Re: [RFC 2/2] PCI: acpiphp: slowdown hotplug if hotplugging multiple devices at a time

Hi Igor,

On 1/3/24 01:54, Igor Mammedov wrote:
> On Wed, 13 Dec 2023 09:09:18 -0800
> Dongli Zhang <[email protected]> wrote:
>
>> Hi Igor,
>>
>> On 12/13/23 08:54, Michael S. Tsirkin wrote:
>>> On Wed, Dec 13, 2023 at 05:49:39PM +0100, Igor Mammedov wrote:
>>>> On Wed, Dec 13, 2023 at 2:08 PM Rafael J. Wysocki <[email protected]> wrote:
>>>>>
>>>>> On Wed, Dec 13, 2023 at 1:36 AM Igor Mammedov <[email protected]> wrote:
>>>>>>
>>>>>> previous commit ("PCI: acpiphp: enable slot only if it hasn't been enabled already"
>>>>>> introduced a workaround to avoid a race between SCSI_SCAN_ASYNC job and
>>>>>> bridge reconfiguration in case of single HBA hotplug.
>>>>>> However in virt environment it's possible to pause machine hotplug several
>>>>>> HBAs and let machine run. That can hit the same race when 2nd hotplugged
>>>>>> HBA will start re-configuring bridge.
>>>>>> Do the same thing as SHPC and throttle down hotplug of 2nd and up
>>>>>> devices within single hotplug event.
>>>>>>
>>>>>> Signed-off-by: Igor Mammedov <[email protected]>
>>>>>> ---
>>>>>> drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
>>>>>> 1 file changed, 6 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
>>>>>> index 6b11609927d6..30bca2086b24 100644
>>>>>> --- a/drivers/pci/hotplug/acpiphp_glue.c
>>>>>> +++ b/drivers/pci/hotplug/acpiphp_glue.c
>>>>>> @@ -37,6 +37,7 @@
>>>>>> #include <linux/mutex.h>
>>>>>> #include <linux/slab.h>
>>>>>> #include <linux/acpi.h>
>>>>>> +#include <linux/delay.h>
>>>>>>
>>>>>> #include "../pci.h"
>>>>>> #include "acpiphp.h"
>>>>>> @@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
>>>>>> static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>>>>> {
>>>>>> struct acpiphp_slot *slot;
>>>>>> + int nr_hp_slots = 0;
>>>>>>
>>>>>> /* Bail out if the bridge is going away. */
>>>>>> if (bridge->is_going_away)
>>>>>> @@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
>>>>>>
>>>>>> /* configure all functions */
>>>>>> if (slot->flags != SLOT_ENABLED) {
>>>>>> + if (nr_hp_slots)
>>>>>> + msleep(1000);
>>>>>
>>>>> Why is 1000 considered the most suitable number here? Any chance to
>>>>> define a symbol for it?
>>>>
>>>> Timeout was borrowed from SHPC hotplug workflow where it apparently
>>>> makes race harder to reproduce.
>>>> (though it's not excuse to add more timeouts elsewhere)
>>>>
>>>>> And won't this affect the cases when the race in question is not a concern?
>>>>
>>>> In practice it's not likely, since even in virt scenario hypervisor won't
>>>> stop VM to hotplug device (which beats whole purpose of hotplug).
>>>>
>>>> But in case of a very slow VM (overcommit case) it's possible for
>>>> several HBA's to be hotplugged by the time acpiphp gets a chance
>>>> to handle the 1st hotplug event. SHPC is more or less 'safe' with its
>>>> 1sec delay.
>>>>
>>>>> Also, adding arbitrary timeouts is not the most robust way of
>>>>> addressing race conditions IMV. Wouldn't it be better to add some
>>>>> proper synchronization between the pieces of code that can race with
>>>>> each other?
>>>>
>>>> I don't like it either, it's a stop gap measure to hide regression on
>>>> short notice,
>>>> which I can fixup without much risk in short time left, before folks
>>>> leave on holidays.
>>>> It's fine to drop the patch as chances of this happening are small.
>>>> [1/2] should cover reported cases.
>>>>
>>>> Since it's RFC, I basically ask for opinions on a proper way to fix
>>>> SCSI_ASYNC_SCAN
>>>> running wild while the hotplug is in progress (and maybe SCSI is not
>>>> the only user that
>>>> schedules async job from device probe).
>>>
>>> Of course not. And things don't have to be scheduled from probe right?
>>> Can be triggered by an interrupt or userspace activity.
>>
>> I agree with Michael. TBH, I am curious if the two patches can
>> workaround/resolve the issue.
>>
>> Would you mind helping explain if to run enable_slot() for a new PCI device can
>> impact the other PCI devices existing on the bridge?
>>
>> E.g.,:
>>
>> 1. Attach several virtio-scsi or virtio-net on the same bridge.
>>
>> 2. Trigger workload for those PCI devices. They may do mmio write to kick the
>> doorbell (to trigger KVM/QEMU ioeventfd) very frequently.
>>
>> 3. Now hot-add an extra PCI device. Since the slot is never enabled, it enables
>> the slot via enable_slot().
>>
>> Can I assume the last enable_slot() will temporarily re-configure the bridge
>> window so that all other PCI devices' mmio will lose effect at that time point?
>
> That's likely what would happen.
> The same issue should apply to native PCIe and SHPC hotplug, as they also use
> pci_assign_unassigned_bridge_resources().
>
> Perhaps drivers have to be taught that the PCI tree is being reconfigured, or
> some other approach has to be found to deal with it.
> Do you have any ideas?

This is not limited to kernel space. The kernel may remap/expose the MMIO
region to userspace, and a userspace program may interact with the PCI
device directly as well (DPDK?).

How about using the stop_machine mechanism when we need to touch the PCI
bridge window, to guarantee that no CPU is actively accessing the MMIO?
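For illustration, a stop_machine()-based variant might be shaped like the
sketch below. Caveat: a stop_machine() callback runs with interrupts off and
must not sleep, while pci_assign_unassigned_bridge_resources() can allocate
memory, so the resource computation would have to be split out beforehand.
Aside from those two kernel APIs, the names here are invented.

```c
#include <linux/stop_machine.h>
#include <linux/pci.h>

/* Runs while every other CPU spins in a known state, so no CPU can be
 * in the middle of an MMIO access behind this bridge while its window
 * moves.  NB: must not sleep -- see the caveat above. */
static int __reassign_bridge_resources(void *data)
{
	struct pci_dev *bridge = data;

	pci_assign_unassigned_bridge_resources(bridge);
	return 0;
}

static void reconfigure_bridge_stopped(struct pci_dev *bridge)
{
	/* NULL cpumask: run the callback on one CPU, park the rest */
	stop_machine(__reassign_bridge_resources, bridge, NULL);
}
```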

Thank you very much!

Dongli Zhang

>
> I'm comparing with Windows guest, which manages to reconfigure PCI hierarchy
> on the fly. (though I haven't tested that under heavy load with several
> devices on a bridge).
>
>> Since drivers always kick the doorbell conditionally, they may hang forever.
>>
>> As I have reported, we used to have the similar issue.
>>
>> PCI: Probe bridge window attributes once at enumeration-time
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=51c48b310183ab6ba5419edfc6a8de889cc04521
>>
>>
>> Therefore, can I assume the issue is not because to re-enable an already-enabled
>> slot, but to touch the bridge window for more than once?
>>
>> Thank you very much!
>>
>> Dongli Zhang
>>
>>>
>>>> So adding synchronisation and testing
>>>> would take time (not something I'd do this late in the cycle).
>>>>
>>>> So far I'm thinking about adding rw mutex to bridge with the PCI
>>>> hotplug subsystem
>>>> being a writer while scsi scan jobs would be readers and wait till hotplug code
>>>> says it's safe to proceed.
>>>> I plan to work in this direction and give it some testing, unless
>>>> someone has a better idea.
>>>
>>>>>
>>>>>> +
>>>>>> + ++nr_hp_slots;
>>>>>> enable_slot(slot, true);
>>>>>> }
>>>>>> } else {
>>>>>> --
>>>>>
>>>
>>
>