This patch set fixes some serious issues found while running DPC error
injection and NVMe SSD brute-force hotplug tests: race conditions between
the DPC handler and the pciehp and AER interrupt handlers caused system
hangs, and a system with the DPC feature could not recover to its normal
working state as expected (NVMe instance lost, mount operations hanging,
racing PCIe accesses causing uncorrectable errors to be reported over and
over, etc.).

With this patch set applied, stable 5.9-rc6 on ICS (Ice Lake SP platform,
see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server))
passes the PCIe Gen4 NVMe SSD brute-force hotplug test (tens of
iterations, with arbitrary intervals between hot-remove and plug-in)
without any errors, and the system keeps working normally.

With this patch set applied, a system with the DPC feature recovers from
NON-FATAL and FATAL error injection tests and works as expected.
The system also behaves correctly when errors happen while a hotplug
operation is in progress; no uncorrectable errors were observed.
Brute-force DPC error injection script:
for i in {0..100}
do
setpci -s 64:02.0 0x196.w=000a
setpci -s 65:00.0 0x04.w=0544
mount /dev/nvme0n1p1 /root/nvme
sleep 1
done
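For reference, the 64:02.0/65:00.0 addresses above are specific to the
test machine; a quick way to locate the DPC-capable root port and the
device under it on another system (lspci only decodes the DPC capability
with a reasonably recent pciutils):

	# show the PCIe topology to find the root port / endpoint pair
	lspci -t
	# confirm the root port advertises Downstream Port Containment
	lspci -s 64:02.0 -vvv | grep -i "downstream port containment"
	# confirm the NVMe endpoint below it
	lspci -s 65:00.0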
For other details, see the description of each commit.
This patch set applies directly to stable 5.9-rc6.
Please help review and test.
V2: changes according to review comments from Andy Shevchenko.
V3: simplified the code in patch 4/5.
V4: moved pci_wait_port_outdpc() to the DPC driver and its declaration
    to pci.h (suggested by Christoph Hellwig <[email protected]>).
Thanks,
Ethan
Ethan Zhao (5):
PCI: define a function to check and wait till port finish DPC handling
PCI: pciehp: check and wait port status out of DPC before handling
DLLSC and PDC
PCI/ERR: get device before call device driver to avoid NULL pointer
reference
PCI: only return true when dev io state is really changed
PCI/ERR: don't mix io state not changed and no driver together
drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
drivers/pci/pci.h | 34 +++++---------------------------
drivers/pci/pcie/err.c | 18 +++++++++++++++--
include/linux/pci.h | 31 +++++++++++++++++++++++++++++
4 files changed, 55 insertions(+), 32 deletions(-)
--
2.18.4
Once a root port's DPC capability is enabled and DPC is triggered, the
hardware sets the DPC status bits and then sends DPC/DLLSC/PDC interrupts
to the OS DPC and pciehp drivers. It takes the port and the software DPC
interrupt handler 10 ms to 50 ms (test data on ICS (Ice Lake SP platform,
see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server))
with stable 5.9-rc6) to complete the DPC containment procedure, until the
DPC status is cleared at the end of the DPC interrupt handler.

Add pci_wait_port_outdpc() to check whether the root port is still in DPC
handling and to wait until both hardware and software have completed the
procedure.
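For illustration, a minimal usage sketch (patch 2/5 adds a plain call to
this helper in pciehp_ist(); checking the return value and the pci_warn()
message below are only an example, not part of the series):

	/*
	 * Wait for the root port to leave DPC containment before touching
	 * the downstream device; note a timeout instead of proceeding blindly.
	 */
	if (!pci_wait_port_outdpc(pdev))
		pci_warn(pdev, "port still in DPC containment after timeout\n");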
Signed-off-by: Ethan Zhao <[email protected]>
Tested-by: Wen Jin <[email protected]>
Tested-by: Shanshan Zhang <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
Changes:
V2: aligned the ICS code name with the public doc.
V3: no change.
V4: in response to Christoph Hellwig's (<[email protected]>) suggestion,
    moved pci_wait_port_outdpc() to the DPC driver and its declaration
    to pci.h.
drivers/pci/pci.h | 2 ++
drivers/pci/pcie/dpc.c | 27 +++++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index fa12f7cbc1a0..8fdb0d823d5a 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -455,10 +455,12 @@ void pci_restore_dpc_state(struct pci_dev *dev);
void pci_dpc_init(struct pci_dev *pdev);
void dpc_process_error(struct pci_dev *pdev);
pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
+bool pci_wait_port_outdpc(struct pci_dev *pdev);
#else
static inline void pci_save_dpc_state(struct pci_dev *dev) {}
static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
static inline void pci_dpc_init(struct pci_dev *pdev) {}
+static inline bool pci_wait_port_outdpc(struct pci_dev *pdev) { return false; }
#endif
#ifdef CONFIG_PCI_ATS
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index daa9a4153776..2e0e091ce923 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -71,6 +71,33 @@ void pci_restore_dpc_state(struct pci_dev *dev)
pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
}
+bool pci_wait_port_outdpc(struct pci_dev *pdev)
+{
+ u16 cap = pdev->dpc_cap, status;
+ u16 loop = 0;
+
+ if (!cap) {
+ pci_WARN_ONCE(pdev, !cap, "No DPC capability initiated\n");
+ return false;
+ }
+ pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
+ pci_dbg(pdev, "DPC status %x, cap %x\n", status, cap);
+
+ while (status & PCI_EXP_DPC_STATUS_TRIGGER && loop < 100) {
+ msleep(10);
+ loop++;
+ pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
+ }
+
+ if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) {
+ pci_dbg(pdev, "Out of DPC %x, cost %d ms\n", status, loop*10);
+ return true;
+ }
+
+ pci_dbg(pdev, "Timeout to wait port out of DPC status\n");
+ return false;
+}
+
static int dpc_wait_rp_inactive(struct pci_dev *pdev)
{
unsigned long timeout = jiffies + HZ;
--
2.18.4
When a root port has the DPC capability enabled and errors trigger it,
the DPC, DLLSC and PDC interrupts are delivered to the DPC driver and the
pciehp driver at the same time.
That causes the following results:

1. The link and device are recovered by hardware DPC and the software DPC
   driver and the device is not removed, but pciehp might treat it as if
   the device had been hot-removed.

2. A race condition occurs between pciehp_unconfigure_device(), called
   from pciehp_ist() in the pciehp driver, and pcie_do_recovery(), called
   from dpc_handler() in the DPC driver. Unfortunately, there is no lock
   that protects pci_stop_and_remove_bus_device() against pci_walk_bus();
   they hold different locks: pci_stop_and_remove_bus_device() holds
   pci_rescan_remove_lock, while pci_walk_bus() holds pci_bus_sem (see
   the sketch below).
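An illustrative sketch of the two racing code paths (call chains
simplified; this is a comment-style diagram, not actual kernel code):

/*
 *  DPC driver (IRQ thread)            pciehp driver (IRQ thread)
 *  -----------------------            --------------------------
 *  dpc_handler()                      pciehp_ist()
 *    pcie_do_recovery()                 pciehp_handle_presence_or_link_change()
 *      pci_walk_bus()                     pciehp_unconfigure_device()
 *        // holds pci_bus_sem               pci_stop_and_remove_bus_device()
 *        report_frozen_detected(dev)        // holds pci_rescan_remove_lock
 *          dev->driver ...                  pci_dev_put(dev)  // dev may be freed
 *
 *  Nothing orders the two sides, so pci_walk_bus() can dereference a
 *  pci_dev that pciehp has already removed and released.
 */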
This race condition is not just a result of code analysis; it can be
triggered by the following command series:
# setpci -s 64:02.0 0x196.w=000a // 64:02.0: root port with DPC capability
# setpci -s 65:00.0 0x04.w=0544  // 65:00.0: NVMe SSD populated under the port
# mount /dev/nvme0n1p1 nvme
A single shot causes a system panic from a NULL pointer dereference
(tested on stable 5.8 and ICS (Ice Lake SP platform, see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server))):
Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
BUG: kernel NULL pointer dereference, address: 0000000000000050
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0
Oops: 0000 [#1] SMP NOPTI
CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
FS: 0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
GS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
? report_normal_detected+0x20/0x20
report_frozen_detected+0x16/0x20
pci_walk_bus+0x75/0x90
? dpc_irq+0x90/0x90
pcie_do_recovery+0x157/0x201
? irq_finalize_oneshot.part.47+0xe0/0xe0
dpc_handler+0x29/0x40
irq_thread_fn+0x24/0x60
irq_thread+0xea/0x170
? irq_forced_thread_fn+0x80/0x80
? irq_thread_check_affinity+0xf0/0xf0
kthread+0x124/0x140
? kthread_park+0x90/0x90
ret_from_fork+0x1f/0x30
Modules linked in: nft_fib_inet.........
CR2: 0000000000000050
With this patch, the handling flow of DPC containment and hotplug is
partly ordered and serialized: hardware DPC performs the controller reset
and other recovery actions first, then the DPC driver runs the error
callbacks of the device drivers and clears the DPC status, and finally
pciehp handles the DLLSC and PDC events.
Signed-off-by: Ethan Zhao <[email protected]>
Tested-by: Wen Jin <[email protected]>
Tested-by: Shanshan Zhang <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
Changes:
V2: revised the description according to Andy's suggestion.
V3: no change.
V4: no change.
drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 53433b37e181..6f271160f18d 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
down_read(&ctrl->reset_lock);
if (events & DISABLE_SLOT)
pciehp_handle_disable_request(ctrl);
- else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
+ else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
+ pci_wait_port_outdpc(pdev);
pciehp_handle_presence_or_link_change(ctrl, events);
+ }
up_read(&ctrl->reset_lock);
ret = IRQ_HANDLED;
--
2.18.4
When an uncorrectable error happens, the AER and DPC driver interrupt
handlers are both likely to call
pcie_do_recovery()
->pci_walk_bus()
->report_frozen_detected()
with pci_channel_io_frozen at the same time.
If pci_dev_set_io_state() returns true even though the original state is
already pci_channel_io_frozen, the AER and DPC handlers will re-enter the
error detection and recovery procedure one after the other, and the AER
and DPC recovery flows get mixed together.
So simplify pci_dev_set_io_state() to return true only when
dev->error_state is actually changed.
Signed-off-by: Ethan Zhao <[email protected]>
Tested-by: Wen Jin <[email protected]>
Tested-by: Shanshan Zhang <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Alexandru Gagniuc <[email protected]>
Reviewed-by: Joe Perches <[email protected]>
---
Changes:
V2: revised description and code according to Andy's suggestions.
V3: simplified the code.
V4: no change.
drivers/pci/pci.h | 37 +++++--------------------------------
1 file changed, 5 insertions(+), 32 deletions(-)
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index fa12f7cbc1a0..a2c1c7d5f494 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -359,39 +359,12 @@ struct pci_sriov {
static inline bool pci_dev_set_io_state(struct pci_dev *dev,
pci_channel_state_t new)
{
- bool changed = false;
-
device_lock_assert(&dev->dev);
- switch (new) {
- case pci_channel_io_perm_failure:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- case pci_channel_io_perm_failure:
- changed = true;
- break;
- }
- break;
- case pci_channel_io_frozen:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- changed = true;
- break;
- }
- break;
- case pci_channel_io_normal:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- changed = true;
- break;
- }
- break;
- }
- if (changed)
- dev->error_state = new;
- return changed;
+ if (dev->error_state == new)
+ return false;
+
+ dev->error_state = new;
+ return true;
}
static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
--
2.18.4
When 'can't recover (no error_detected callback)' shows up on the
console, the real reason may be that the I/O state was not changed by
pci_dev_set_io_state(), which is confusing. Distinguish that case so the
log reflects what actually happened.
Signed-off-by: Ethan Zhao <[email protected]>
Tested-by: Wen Jin <[email protected]>
Tested-by: Shanshan Zhang <[email protected]>
---
Changes:
V2: no change.
V3: no change.
V4: no change.
drivers/pci/pcie/err.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index e35c4480c86b..d85f27c90c26 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -55,8 +55,10 @@ static int report_error_detected(struct pci_dev *dev,
if (!pci_dev_get(dev))
return 0;
device_lock(&dev->dev);
- if (!pci_dev_set_io_state(dev, state) ||
- !dev->driver ||
+ if (!pci_dev_set_io_state(dev, state)) {
+ pci_dbg(dev, "Device may already be in error handling ...\n");
+ vote = PCI_ERS_RESULT_NONE;
+ } else if (!dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->error_detected) {
/*
--
2.18.4
During DPC error injection testing we found a race condition between the
pciehp and DPC drivers; a NULL pointer dereference caused the panic below.
# setpci -s 64:02.0 0x196.w=000a
// 64:02.0 is the root port with DPC capability
# setpci -s 65:00.0 0x04.w=0544
// 65:00.0 is the NVMe SSD populated under that port
# mount /dev/nvme0n1p1 nvme
(tested on stable 5.8 and ICS (Ice Lake SP platform, see
https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server)))
Buffer I/O error on dev nvme0n1p1, logical block 468843328,
async page read
BUG: kernel NULL pointer dereference, address: 0000000000000050
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0
Oops: 0000 [#1] SMP NOPTI
CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0-0.0.7.el8.x86_64+ #1
RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3 65 ff
ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50 50 48 83 3a 00
41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
FS: 0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
? report_normal_detected+0x20/0x20
report_frozen_detected+0x16/0x20
pci_walk_bus+0x75/0x90
? dpc_irq+0x90/0x90
pcie_do_recovery+0x157/0x201
? irq_finalize_oneshot.part.47+0xe0/0xe0
dpc_handler+0x29/0x40
irq_thread_fn+0x24/0x60
irq_thread+0xea/0x170
? irq_forced_thread_fn+0x80/0x80
? irq_thread_check_affinity+0xf0/0xf0
kthread+0x124/0x140
? kthread_park+0x90/0x90
ret_from_fork+0x1f/0x30
Modules linked in: nft_fib_inet.........
CR2: 0000000000000050
Though the patch 'PCI: pciehp: check and wait port status out of DPC
before handling DLLSC and PDC' partly closes the race window, neither the
hardware spec nor the software sequence guarantees whether pciehp_ist()
reaches pci_wait_port_outdpc() first or the DPC trigger status bits are
set first when an error starts the DPC containment procedure. So the
device can still be removed by pci_stop_and_remove_bus_device() and then
freed by pci_dev_put() in the pciehp driver while pcie_do_recovery()/
pci_walk_bus() is being run by dpc_handler() in the DPC driver.

Unifying pci_bus_sem and pci_rescan_remove_lock to serialize device
removal against bus walking might be the right long-term fix, but here we
use pci_dev_get() to take a reference on the device before using it, so
it cannot be freed while it is still in use.
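Schematically, each report_*() bus-walk callback is changed as follows
(fragment for illustration only; the actual hunks are below):

	/* Pin the device so a concurrent hot-remove cannot free it under us. */
	if (!pci_dev_get(dev))
		return 0;		/* already being released, skip it */

	device_lock(&dev->dev);
	/* ... existing callback body: check the driver, call its error
	 * handler, merge the result vote ...
	 */
	device_unlock(&dev->dev);

	pci_dev_put(dev);		/* drop the reference taken above */
	return 0;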
With this patch and the patch 'PCI: pciehp: check and wait port status
out of DPC before handling DLLSC and PDC', stable 5.9-rc6 passes the
error injection test without any panic.
Brute-force DPC error injection script:
for i in {0..100}
do
setpci -s 64:02.0 0x196.w=000a
setpci -s 65:00.0 0x04.w=0544
mount /dev/nvme0n1p1 /root/nvme
sleep 1
done
Signed-off-by: Ethan Zhao <[email protected]>
Tested-by: Wen Jin <[email protected]>
Tested-by: Shanshan Zhang <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
Changes:
V2: revise doc according to Andy's suggestion.
V3: no change.
V4: no change.
drivers/pci/pcie/err.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/pci/pcie/err.c b/drivers/pci/pcie/err.c
index c543f419d8f9..e35c4480c86b 100644
--- a/drivers/pci/pcie/err.c
+++ b/drivers/pci/pcie/err.c
@@ -52,6 +52,8 @@ static int report_error_detected(struct pci_dev *dev,
pci_ers_result_t vote;
const struct pci_error_handlers *err_handler;
+ if (!pci_dev_get(dev))
+ return 0;
device_lock(&dev->dev);
if (!pci_dev_set_io_state(dev, state) ||
!dev->driver ||
@@ -76,6 +78,7 @@ static int report_error_detected(struct pci_dev *dev,
pci_uevent_ers(dev, vote);
*result = merge_result(*result, vote);
device_unlock(&dev->dev);
+ pci_dev_put(dev);
return 0;
}
@@ -94,6 +97,8 @@ static int report_mmio_enabled(struct pci_dev *dev, void *data)
pci_ers_result_t vote, *result = data;
const struct pci_error_handlers *err_handler;
+ if (!pci_dev_get(dev))
+ return 0;
device_lock(&dev->dev);
if (!dev->driver ||
!dev->driver->err_handler ||
@@ -105,6 +110,7 @@ static int report_mmio_enabled(struct pci_dev *dev, void *data)
*result = merge_result(*result, vote);
out:
device_unlock(&dev->dev);
+ pci_dev_put(dev);
return 0;
}
@@ -113,6 +119,8 @@ static int report_slot_reset(struct pci_dev *dev, void *data)
pci_ers_result_t vote, *result = data;
const struct pci_error_handlers *err_handler;
+ if (!pci_dev_get(dev))
+ return 0;
device_lock(&dev->dev);
if (!dev->driver ||
!dev->driver->err_handler ||
@@ -124,6 +132,7 @@ static int report_slot_reset(struct pci_dev *dev, void *data)
*result = merge_result(*result, vote);
out:
device_unlock(&dev->dev);
+ pci_dev_put(dev);
return 0;
}
@@ -131,6 +140,8 @@ static int report_resume(struct pci_dev *dev, void *data)
{
const struct pci_error_handlers *err_handler;
+ if (!pci_dev_get(dev))
+ return 0;
device_lock(&dev->dev);
if (!pci_dev_set_io_state(dev, pci_channel_io_normal) ||
!dev->driver ||
@@ -143,6 +154,7 @@ static int report_resume(struct pci_dev *dev, void *data)
out:
pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
device_unlock(&dev->dev);
+ pci_dev_put(dev);
return 0;
}
--
2.18.4
On Sun, Sep 27, 2020 at 11:33 AM Ethan Zhao <[email protected]> wrote:
>
> Once root port DPC capability is enabled and triggered, at the beginning
> of DPC is triggered, the DPC status bits are set by hardware and then
> sends DPC/DLLSC/PDC interrupts to OS DPC and pciehp drivers, it will
> take the port and software DPC interrupt handler 10ms to 50ms (test data
> on ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server)
> & stable 5.9-rc6) to complete the DPC containment procedure
> till the DPC status is cleared at the end of the DPC interrupt handler.
>
> We use this function to check if the root port is in DPC handling status
> and wait till the hardware and software completed the procedure.
>
> Signed-off-by: Ethan Zhao <[email protected]>
> Tested-by: Wen Jin <[email protected]>
> Tested-by: Shanshan Zhang <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
I haven't given you this tag. Where did you get it from?
(Dave, that's the case where we need to push the [internal review] process)
> Reviewed-by: Christoph Hellwig <[email protected]>
> ---
> changes:
> V2:align ICS code name to public doc.
> V3: no change.
> V4: response to Christoph's (Christoph Hellwig <[email protected]>)
> tip, move pci_wait_port_outdpc() to DPC driver and its declaration
> to pci.h.
>
> drivers/pci/pci.h | 2 ++
> drivers/pci/pcie/dpc.c | 27 +++++++++++++++++++++++++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index fa12f7cbc1a0..8fdb0d823d5a 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -455,10 +455,12 @@ void pci_restore_dpc_state(struct pci_dev *dev);
> void pci_dpc_init(struct pci_dev *pdev);
> void dpc_process_error(struct pci_dev *pdev);
> pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
> +bool pci_wait_port_outdpc(struct pci_dev *pdev);
> #else
> static inline void pci_save_dpc_state(struct pci_dev *dev) {}
> static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
> static inline void pci_dpc_init(struct pci_dev *pdev) {}
> +inline bool pci_wait_port_outdpc(struct pci_dev *pdev) { return false; }
> #endif
>
> #ifdef CONFIG_PCI_ATS
> diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
> index daa9a4153776..2e0e091ce923 100644
> --- a/drivers/pci/pcie/dpc.c
> +++ b/drivers/pci/pcie/dpc.c
> @@ -71,6 +71,33 @@ void pci_restore_dpc_state(struct pci_dev *dev)
> pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
> }
>
> +bool pci_wait_port_outdpc(struct pci_dev *pdev)
> +{
> + u16 cap = pdev->dpc_cap, status;
> + u16 loop = 0;
> +
> + if (!cap) {
> + pci_WARN_ONCE(pdev, !cap, "No DPC capability initiated\n");
> + return false;
> + }
> + pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
> + pci_dbg(pdev, "DPC status %x, cap %x\n", status, cap);
> +
> + while (status & PCI_EXP_DPC_STATUS_TRIGGER && loop < 100) {
> + msleep(10);
> + loop++;
> + pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
> + }
> +
> + if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) {
> + pci_dbg(pdev, "Out of DPC %x, cost %d ms\n", status, loop*10);
> + return true;
> + }
> +
> + pci_dbg(pdev, "Timeout to wait port out of DPC status\n");
> + return false;
> +}
> +
> static int dpc_wait_rp_inactive(struct pci_dev *pdev)
> {
> unsigned long timeout = jiffies + HZ;
> --
> 2.18.4
>
--
With Best Regards,
Andy Shevchenko
On Sun, Sep 27, 2020 at 11:31 AM Ethan Zhao <[email protected]> wrote:
>
> When root port has DPC capability and it is enabled, then triggered by
> errors, DPC DLLSC and PDC interrupts will be sent to DPC driver, pciehp
> driver at the same time.
> That will cause following result:
>
> 1. Link and device are recovered by hardware DPC and software DPC driver,
> device
> isn't removed, but the pciehp might treat it as device was hot removed.
>
> 2. Race condition happens bettween pciehp_unconfigure_device() called by
> pciehp_ist() in pciehp driver and pci_do_recovery() called by
> dpc_handler in DPC driver. no luck, there is no lock to protect
> pci_stop_and_remove_bus_device()
> against pci_walk_bus(), they hold different samphore and mutex,
> pci_stop_and_remove_bus_device holds pci_rescan_remove_lock, and
> pci_walk_bus() holds pci_bus_sem.
>
> This race condition is not purely code analysis, it could be triggered by
> following command series:
>
> # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
> # setpci -s 65:00.0 0x04.w=0544 // 65:00.0 NVMe SSD populated in port
> # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server))
>
> Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 0
Seems like you randomly did something about the series and would like
it to be applied?! It's no go!
Please, read my comments again v1 one more time and carefully comment
or address.
Why do you still have these (some above, some below this comment)
non-relevant lines of oops?
> Oops: 0000 [#1] SMP NOPTI
> CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
> RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
> Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
> 65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
> 50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
> RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
> RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
> RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
> RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
> R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
> R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
> FS: 0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
> GS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
> ? report_normal_detected+0x20/0x20
> report_frozen_detected+0x16/0x20
> pci_walk_bus+0x75/0x90
> ? dpc_irq+0x90/0x90
> pcie_do_recovery+0x157/0x201
> ? irq_finalize_oneshot.part.47+0xe0/0xe0
> dpc_handler+0x29/0x40
> irq_thread_fn+0x24/0x60
> irq_thread+0xea/0x170
> ? irq_forced_thread_fn+0x80/0x80
> ? irq_thread_check_affinity+0xf0/0xf0
> kthread+0x124/0x140
> ? kthread_park+0x90/0x90
> ret_from_fork+0x1f/0x30
> Modules linked in: nft_fib_inet.........
> CR2: 0000000000000050
> Reviewed-by: Andy Shevchenko <[email protected]>
And no, this is not how the tags are being applied.
--
With Best Regards,
Andy Shevchenko
On Sun, 2020-09-27 at 04:27 -0400, Ethan Zhao wrote:
> When uncorrectable error happens, AER driver and DPC driver interrupt
> handlers likely call
>
> pcie_do_recovery()
> ->pci_walk_bus()
> ->report_frozen_detected()
>
> with pci_channel_io_frozen the same time.
> If pci_dev_set_io_state() return true even if the original state is
> pci_channel_io_frozen, that will cause AER or DPC handler re-enter
> the error detecting and recovery procedure one after another.
> The result is the recovery flow mixed between AER and DPC.
> So simplify the pci_dev_set_io_state() function to only return true
> when dev->error_state is changed.
>
> Signed-off-by: Ethan Zhao <[email protected]>
> Tested-by: Wen Jin <[email protected]>
> Tested-by: Shanshan Zhang <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
> Reviewed-by: Alexandru Gagniuc <[email protected]>
> Reviewed-by: Joe Perches <[email protected]>
Hi Ethan/Haifeng.
Like Andy, I did not "review" this patch and sign it.
I merely suggested another simplification.
Please do not add -by: lines unless actually received by you.
Sorry for the offence; I should have asked for your permission first.
-----Original Message-----
From: Joe Perches <[email protected]>
Sent: Sunday, September 27, 2020 5:14 PM
To: Zhao, Haifeng <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
Cc: [email protected]; [email protected]; Jia, Pei P <[email protected]>; [email protected]; Kuppuswamy, Sathyanarayanan <[email protected]>; [email protected]
Subject: Re: [PATCH 4/5 V4] PCI: only return true when dev io state is really changed
On Sun, 2020-09-27 at 04:27 -0400, Ethan Zhao wrote:
> When uncorrectable error happens, AER driver and DPC driver interrupt
> handlers likely call
>
> pcie_do_recovery()
> ->pci_walk_bus()
> ->report_frozen_detected()
>
> with pci_channel_io_frozen the same time.
> If pci_dev_set_io_state() return true even if the original state is
> pci_channel_io_frozen, that will cause AER or DPC handler re-enter the
> error detecting and recovery procedure one after another.
> The result is the recovery flow mixed between AER and DPC.
> So simplify the pci_dev_set_io_state() function to only return true
> when dev->error_state is changed.
>
> Signed-off-by: Ethan Zhao <[email protected]>
> Tested-by: Wen Jin <[email protected]>
> Tested-by: Shanshan Zhang <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
> Reviewed-by: Alexandru Gagniuc <[email protected]>
> Reviewed-by: Joe Perches <[email protected]>
Hi Ethan/Haifeng.
Like Andy, I did not "review" this patch and sign it.
I merely suggested another simplification.
Please do not add -by: lines unless actually received by you.
Andy,
May I ask which lines of the Oops you consider non-relevant and should be removed?
Thanks,
Ethan
-----Original Message-----
From: Andy Shevchenko <[email protected]>
Sent: Sunday, September 27, 2020 5:10 PM
To: Zhao, Haifeng <[email protected]>; Hansen, Dave <[email protected]>
Cc: Bjorn Helgaas <[email protected]>; Oliver <[email protected]>; [email protected]; Lukas Wunner <[email protected]>; Andy Shevchenko <[email protected]>; Stuart Hayes <[email protected]>; Alexandru Gagniuc <[email protected]>; Mika Westerberg <[email protected]>; linux-pci <[email protected]>; Linux Kernel Mailing List <[email protected]>; Jia, Pei P <[email protected]>; [email protected]; Kuppuswamy, Sathyanarayanan <[email protected]>; Christoph Hellwig <[email protected]>; Joe Perches <[email protected]>
Subject: Re: [PATCH 2/5 V4] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC
On Sun, Sep 27, 2020 at 11:31 AM Ethan Zhao <[email protected]> wrote:
>
> When root port has DPC capability and it is enabled, then triggered by
> errors, DPC DLLSC and PDC interrupts will be sent to DPC driver,
> pciehp driver at the same time.
> That will cause following result:
>
> 1. Link and device are recovered by hardware DPC and software DPC driver,
> device
> isn't removed, but the pciehp might treat it as device was hot removed.
>
> 2. Race condition happens bettween pciehp_unconfigure_device() called by
> pciehp_ist() in pciehp driver and pci_do_recovery() called by
> dpc_handler in DPC driver. no luck, there is no lock to protect
> pci_stop_and_remove_bus_device()
> against pci_walk_bus(), they hold different samphore and mutex,
> pci_stop_and_remove_bus_device holds pci_rescan_remove_lock, and
> pci_walk_bus() holds pci_bus_sem.
>
> This race condition is not purely code analysis, it could be triggered
> by following command series:
>
> # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
> # setpci -s 65:00.0 0x04.w=0544 // 65:00.0 NVMe SSD populated in port
> # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server
> ))
>
> Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 0
Seems like you randomly did something about the series and would like it to be applied?! It's no go!
Please, read my comments again v1 one more time and carefully comment or address.
Why do you still have these (some above, some below this comment) non-relevant lines of oops?
> Oops: 0000 [#1] SMP NOPTI
> CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
> RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
> Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
> 65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
> 50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
> RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
> RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
> RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
> RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
> R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
> R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
> FS: 0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
> GS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
> ? report_normal_detected+0x20/0x20
> report_frozen_detected+0x16/0x20
> pci_walk_bus+0x75/0x90
> ? dpc_irq+0x90/0x90
> pcie_do_recovery+0x157/0x201
> ? irq_finalize_oneshot.part.47+0xe0/0xe0
> dpc_handler+0x29/0x40
> irq_thread_fn+0x24/0x60
> irq_thread+0xea/0x170
> ? irq_forced_thread_fn+0x80/0x80
> ? irq_thread_check_affinity+0xf0/0xf0
> kthread+0x124/0x140
> ? kthread_park+0x90/0x90
> ret_from_fork+0x1f/0x30
> Modules linked in: nft_fib_inet.........
> CR2: 0000000000000050
> Reviewed-by: Andy Shevchenko <[email protected]>
And no, this is not how the tags are being applied.
--
With Best Regards,
Andy Shevchenko
Sathyanarayanan,
-----Original Message-----
From: Kuppuswamy, Sathyanarayanan <[email protected]>
Sent: Monday, September 28, 2020 2:59 AM
To: Zhao, Haifeng <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
Cc: [email protected]; [email protected]; Jia, Pei P <[email protected]>; [email protected]; [email protected]; [email protected]
Subject: Re: [PATCH 2/5 V4] PCI: pciehp: check and wait port status out of DPC before handling DLLSC and PDC
On 9/27/20 1:27 AM, Ethan Zhao wrote:
> When root port has DPC capability and it is enabled, then triggered by
> errors, DPC DLLSC and PDC interrupts will be sent to DPC driver,
> pciehp driver at the same time.
> That will cause following result:
>
> 1. Link and device are recovered by hardware DPC and software DPC driver,
> device
> isn't removed, but the pciehp might treat it as device was hot removed.
>
> 2. Race condition happens bettween pciehp_unconfigure_device() called by
> pciehp_ist() in pciehp driver and pci_do_recovery() called by
> dpc_handler in DPC driver. no luck, there is no lock to protect
> pci_stop_and_remove_bus_device()
> against pci_walk_bus(), they hold different samphore and mutex,
> pci_stop_and_remove_bus_device holds pci_rescan_remove_lock, and
> pci_walk_bus() holds pci_bus_sem.
Why not address the locking issue? Maybe a common lock?
>
> This race condition is not purely code analysis, it could be triggered
> by following command series:
>
> # setpci -s 64:02.0 0x196.w=000a // 64:02.0 rootport has DPC capability
> # setpci -s 65:00.0 0x04.w=0544 // 65:00.0 NVMe SSD populated in port
> # mount /dev/nvme0n1p1 nvme
>
> One shot will cause system panic and NULL pointer reference happened.
> (tested on stable 5.8 & ICS(Ice Lake SP platform, see
> https://en.wikichip.org/wiki/intel/microarchitectures/ice_lake_(server
> ))
>
> Buffer I/O error on dev nvme0n1p1, logical block 3328, async page read
> BUG: kernel NULL pointer dereference, address: 0000000000000050
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 0
> Oops: 0000 [#1] SMP NOPTI
> CPU: 12 PID: 513 Comm: irq/124-pcie-dp Not tainted 5.8.0 el8.x86_64+ #1
> RIP: 0010:report_error_detected.cold.4+0x7d/0xe6
> Code: b6 d0 e8 e8 fe 11 00 e8 16 c5 fb ff be 06 00 00 00 48 89 df e8 d3
> 65 ff ff b8 06 00 00 00 e9 75 fc ff ff 48 8b 43 68 45 31 c9 <48> 8b 50
> 50 48 83 3a 00 41 0f 94 c1 45 31 c0 48 85 d2 41 0f 94 c0
> RSP: 0018:ff8e06cf8762fda8 EFLAGS: 00010246
> RAX: 0000000000000000 RBX: ff4e3eaacf42a000 RCX: ff4e3eb31f223c01
> RDX: ff4e3eaacf42a140 RSI: ff4e3eb31f223c00 RDI: ff4e3eaacf42a138
> RBP: ff8e06cf8762fdd0 R08: 00000000000000bf R09: 0000000000000000
> R10: 000000eb8ebeab53 R11: ffffffff93453258 R12: 0000000000000002
> R13: ff4e3eaacf42a130 R14: ff8e06cf8762fe2c R15: ff4e3eab44733828
> FS: 0000000000000000(0000) GS:ff4e3eab1fd00000(0000) knl
> GS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000050 CR3: 0000000f8f80a004 CR4: 0000000000761ee0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
> ? report_normal_detected+0x20/0x20
> report_frozen_detected+0x16/0x20
> pci_walk_bus+0x75/0x90
> ? dpc_irq+0x90/0x90
> pcie_do_recovery+0x157/0x201
> ? irq_finalize_oneshot.part.47+0xe0/0xe0
> dpc_handler+0x29/0x40
> irq_thread_fn+0x24/0x60
> irq_thread+0xea/0x170
> ? irq_forced_thread_fn+0x80/0x80
> ? irq_thread_check_affinity+0xf0/0xf0
> kthread+0x124/0x140
> ? kthread_park+0x90/0x90
> ret_from_fork+0x1f/0x30
> Modules linked in: nft_fib_inet.........
> CR2: 0000000000000050
>
> With this patch, the handling flow of DPC containment and hotplug is
> partly ordered and serialized,
If it's a partial fix, what scenario is not covered?
: See patch 1/5.
> let hardware DPC do the controller reset etc recovery action first,
> then DPC driver handling the call-back from device drivers, clear the
> DPC status, at the end, pciehp handle the DLLSC and PDC etc.
>
> Signed-off-by: Ethan Zhao <[email protected]>
> Tested-by: Wen Jin <[email protected]>
> Tested-by: Shanshan Zhang <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
> ---
> Changes:
> V2: revise doc according to Andy's suggestion.
> V3: no change.
> V4: no change.
>
> drivers/pci/hotplug/pciehp_hpc.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/hotplug/pciehp_hpc.c
> b/drivers/pci/hotplug/pciehp_hpc.c
> index 53433b37e181..6f271160f18d 100644
> --- a/drivers/pci/hotplug/pciehp_hpc.c
> +++ b/drivers/pci/hotplug/pciehp_hpc.c
> @@ -710,8 +710,10 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
> down_read(&ctrl->reset_lock);
> if (events & DISABLE_SLOT)
> pciehp_handle_disable_request(ctrl);
> - else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
> + else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) {
> + pci_wait_port_outdpc(pdev);
This would add worst case 1s delay in handling the DLLSC events. This does not distinguish between DLLSC event triggered by DPC or hotplug. Also additional delay may violate the timing requirements.
: It only waits when DPC is enabled and has been triggered (worst case
100 x 10 ms = 1 s); otherwise it skips the wait.
Tested with different time intervals between hot-remove and hot-plug; no
spec says this violates a timing requirement. It works.
Thanks,
Ethan
> pciehp_handle_presence_or_link_change(ctrl, events);
> + }
> up_read(&ctrl->reset_lock);
>
> ret = IRQ_HANDLED;
--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer