2024-01-16 12:36:52

by Lennert Buytenhek

Subject: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

Hi,

On kernel 6.6.x, with an ASMedia ASM1062 (AHCI) controller, on an
ASUSTeK Pro WS WRX80E-SAGE SE WIFI mainboard, PCI ID 1b21:0612 and
subsystem ID 1043:858d, I got an apparent total controller hang,
rendering the two attached SATA devices unavailable, immediately
preceded by the following kernel messages:

[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00400 flags=0x0000]
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00680 flags=0x0000]
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00700 flags=0x0000]

It seems as if the controller has problems with 64-bit DMA addresses,
and the comments around the source of the message in
drivers/iommu/dma-iommu.c seem to point in the same direction:

	/*
	 * Try to use all the 32-bit PCI addresses first. The original SAC vs.
	 * DAC reasoning loses relevance with PCIe, but enough hardware and
	 * firmware bugs are still lurking out there that it's safest not to
	 * venture into the 64-bit space until necessary.
	 *
	 * If your device goes wrong after seeing the notice then likely either
	 * its driver is not setting DMA masks accurately, the hardware has
	 * some inherent bug in handling >32-bit addresses, or not all the
	 * expected address bits are wired up between the device and the IOMMU.
	 */
	if (dma_limit > DMA_BIT_MASK(32) && dev->iommu->pci_32bit_workaround) {
		iova = alloc_iova_fast(iovad, iova_len,
				       DMA_BIT_MASK(32) >> shift, false);
		if (iova)
			goto done;

		dev->iommu->pci_32bit_workaround = false;
		dev_notice(dev, "Using %d-bit DMA addresses\n", bits_per(dma_limit));
	}

Are there any tests you can think of that I can run to further narrow
down this issue? By itself, the issue reproduces only rarely.

Thank you in advance.

Kind regards,
Lennert


2024-01-16 14:20:38

by Niklas Cassel

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

Hello Lennert,

On Tue, Jan 16, 2024 at 02:27:40PM +0200, Lennert Buytenhek wrote:
> Hi,
>
> On kernel 6.6.x, with an ASMedia ASM1062 (AHCI) controller, on an
> ASUSTeK Pro WS WRX80E-SAGE SE WIFI mainboard, PCI ID 1b21:0612 and
> subsystem ID 1043:858d, I got a total apparent controller hang,
> rendering the two attached SATA devices unavailable, that was
> immediately preceded by the following kernel messages:
>
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00400 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00680 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00700 flags=0x0000]
>
> It seems as if the controller has problems with 64-bit DMA addresses,
> and the comments around the source of the message in
> drivers/iommu/dma-iommu.c seem to point into that same direction:
>
> /*
> * Try to use all the 32-bit PCI addresses first. The original SAC vs.
> * DAC reasoning loses relevance with PCIe, but enough hardware and
> * firmware bugs are still lurking out there that it's safest not to
> * venture into the 64-bit space until necessary.
> *
> * If your device goes wrong after seeing the notice then likely either
> * its driver is not setting DMA masks accurately, the hardware has
> * some inherent bug in handling >32-bit addresses, or not all the
> * expected address bits are wired up between the device and the IOMMU.
> */
> if (dma_limit > DMA_BIT_MASK(32) && dev->iommu->pci_32bit_workaround) {
> iova = alloc_iova_fast(iovad, iova_len,
> DMA_BIT_MASK(32) >> shift, false);
> if (iova)
> goto done;
>
> dev->iommu->pci_32bit_workaround = false;
> dev_notice(dev, "Using %d-bit DMA addresses\n", bits_per(dma_limit));
> }

The DMA mask is set here:
https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L967

And it should be called with:
hpriv->cap & HOST_CAP_64
https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L1929

Where hpriv->cap holds the capabilities reported by the AHCI controller itself.
So it definitely seems like your controller supports 64-bit addressing.
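
For reference, the mask selection boils down to roughly the following
(a simplified sketch, not a copy of the actual helper in drivers/ata/ahci.c):

#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include "ahci.h"	/* HOST_CAP_64, struct ahci_host_priv */

/* Sketch only: if the HBA advertises S64A (HOST_CAP_64), ask for a
 * 64-bit DMA mask, otherwise fall back to 32 bits. */
static int ahci_dma_mask_sketch(struct pci_dev *pdev,
				struct ahci_host_priv *hpriv)
{
	int bits = (hpriv->cap & HOST_CAP_64) ? 64 : 32;

	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bits));
}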

I guess it could be some problem with your BIOS.
Have you tried updating your BIOS?


If that does not work, perhaps you could try this (completely untested) patch:
(You might need to modify the strings to match the exact strings reported by
your BIOS.)

If it works, we need to add a specific BIOS version too, see e.g.
https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L1310


diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 3a5f3255f51b..35dead43142c 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1034,6 +1034,30 @@ static void ahci_p5wdh_workaround(struct ata_host *host)
 	}
 }
 
+static bool ahci_broken_64_bit(struct pci_dev *pdev)
+{
+	static const struct dmi_system_id sysids[] = {
+		{
+			.ident = "ASUS Pro WS WRX80E-SAGE",
+			.matches = {
+				DMI_MATCH(DMI_BOARD_VENDOR,
+					  "ASUSTeK Computer INC."),
+				DMI_MATCH(DMI_BOARD_NAME, "Pro WS WRX80E-SAGE"),
+			},
+		},
+		{ }
+	};
+	const struct dmi_system_id *dmi = dmi_first_match(sysids);
+
+	if (!dmi)
+		return false;
+
+	dev_warn(&pdev->dev, "%s: forcing 32bit DMA, update BIOS\n",
+		 dmi->ident);
+
+	return true;
+}
+
 /*
  * Macbook7,1 firmware forcibly disables MCP89 AHCI and changes PCI ID when
  * booting in BIOS compatibility mode. We restore the registers but not ID.
@@ -1799,6 +1823,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (ahci_broken_devslp(pdev))
 		hpriv->flags |= AHCI_HFLAG_NO_DEVSLP;
 
+	/* must set flag prior to save config in order to take effect */
+	if (ahci_broken_64_bit(pdev))
+		hpriv->flags |= AHCI_HFLAG_32BIT_ONLY;
+
 #ifdef CONFIG_ARM64
 	if (pdev->vendor == PCI_VENDOR_ID_HUAWEI &&
 	    pdev->device == 0xa235 &&
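
For reference, narrowing the quirk to specific (broken) BIOS versions, as
mentioned above, could look roughly like the following extra entry in the
sysids[] table -- this is only a sketch, and the "0403" version string is a
placeholder that would have to be replaced with whatever versions turn out
to be affected:

	/* Sketch only: also match on DMI_BIOS_VERSION, so the quirk stops
	 * applying once a fixed BIOS ships. Version string is a placeholder. */
	{
		.ident = "ASUS Pro WS WRX80E-SAGE (broken BIOS)",
		.matches = {
			DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
			DMI_MATCH(DMI_BOARD_NAME, "Pro WS WRX80E-SAGE"),
			DMI_MATCH(DMI_BIOS_VERSION, "0403"),
		},
	},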

2024-01-17 21:14:48

by Lennert Buytenhek

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On Tue, Jan 16, 2024 at 03:20:23PM +0100, Niklas Cassel wrote:

> Hello Lennert,

Hi Niklas,

Thanks for your reply!


> > On kernel 6.6.x, with an ASMedia ASM1062 (AHCI) controller, on an

Minor correction to this: lspci says that this is an ASM1062, but it's
actually an ASM1061. I think that the two parts share a PCI device ID,
and I've submitted a PCI ID DB change here:

https://admin.pci-ids.ucw.cz/read/PC/1b21/0612


> > ASUSTeK Pro WS WRX80E-SAGE SE WIFI mainboard, PCI ID 1b21:0612 and
> > subsystem ID 1043:858d, I got a total apparent controller hang,
> > rendering the two attached SATA devices unavailable, that was
> > immediately preceded by the following kernel messages:
> >
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00400 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00680 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00700 flags=0x0000]
> >
> > It seems as if the controller has problems with 64-bit DMA addresses,
> > and the comments around the source of the message in
> > drivers/iommu/dma-iommu.c seem to point into that same direction:
> >
> > /*
> > * Try to use all the 32-bit PCI addresses first. The original SAC vs.
> > * DAC reasoning loses relevance with PCIe, but enough hardware and
> > * firmware bugs are still lurking out there that it's safest not to
> > * venture into the 64-bit space until necessary.
> > *
> > * If your device goes wrong after seeing the notice then likely either
> > * its driver is not setting DMA masks accurately, the hardware has
> > * some inherent bug in handling >32-bit addresses, or not all the
> > * expected address bits are wired up between the device and the IOMMU.
> > */
> > if (dma_limit > DMA_BIT_MASK(32) && dev->iommu->pci_32bit_workaround) {
> > iova = alloc_iova_fast(iovad, iova_len,
> > DMA_BIT_MASK(32) >> shift, false);
> > if (iova)
> > goto done;
> >
> > dev->iommu->pci_32bit_workaround = false;
> > dev_notice(dev, "Using %d-bit DMA addresses\n", bits_per(dma_limit));
> > }
>
> The DMA mask is set here:
> https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L967
>
> And should be called using:
> hpriv->cap & HOST_CAP_64
> https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L1929
>
> Where hpriv->cap is capabilities reported by the AHCI controller itself.
> So it definitely seems like your controller supports 64-bit addressing.

Perhaps, or maybe it's misreporting its capabilities: it is an old part
(from 2011 or before), and it doesn't seem to support 64-bit MSI
addressing either, which would be an odd restriction for a part with a
64-bit DMA engine:

# lspci -s 28:00.0 -vv | grep -A1 MSI:
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit-
Address: fee00000 Data: 0000
#

(I checked the available datasheets, but there is no mention of whether
or not the part supports 64-bit DMA.)


> I guess it could be some problem with your BIOS.
> Have you tried updating your BIOS?

The machine is running the latest BIOS available from the vendor at
the time of this writing, version 1201:

# dmidecode | grep -A2 "^BIOS Information"
BIOS Information
Vendor: American Megatrends Inc.
Version: 1201
#

Per:

https://www.asus.com/motherboards-components/motherboards/workstation/pro-ws-wrx80e-sage-se-wifi/helpdesk_bios?model2Name=Pro-WS-WRX80E-SAGE-SE-WIFI

However, some Googling suggests that the ASM106x loads its own firmware
from a directly attached SPI flash chip, and there are several versions
of this firmware available in the wild, with different versions of the
firmware apparently available for legacy IDE mode and for AHCI mode. If
(some of) the AHCI logic is indeed contained inside the firmware, I
could see a firmware bug leading to the controller incorrectly presenting
itself as being 64-bit DMA capable.

Some poking around in the BIOS image suggests that there is no copy of
the ASM106x firmware inside the BIOS image. In other words, it could be
that, even though the machine is running the latest available BIOS, the
ASM1061 might be running an older firmware version.

The ASM1061 firmware does not seem to be readable from software via a
ROM BAR, and it doesn't seem to be readable from software in general (the
vendor-supplied DOS .exe updater tool only allows you to erase or
update the SPI flash), so I can't check which firmware version it is
currently using.


> If that does not work, perhaps you could try this (completely untested) patch:
> (You might need to modify the strings to match the exact strings reported by
> your BIOS.)

Thanks for the patch!

I will do some tests with PCI passthrough to a VM, to see whether, and
if so exactly how, the controller mangles DMA addresses.

I've also ordered a discrete PCIe card with an ASM1061 chip on it, and I
will perform similar tests with that card, to see exactly where the issue
is, i.e. whether it is specific to this mainboard or not.

I will follow up once I have more information.

Kind regards,
Lennert

2024-01-17 22:58:19

by Niklas Cassel

[permalink] [raw]
Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

Hello Lennert,

On Wed, Jan 17, 2024 at 11:14:30PM +0200, Lennert Buytenhek wrote:
> On Tue, Jan 16, 2024 at 03:20:23PM +0100, Niklas Cassel wrote:
>
> > > On kernel 6.6.x, with an ASMedia ASM1062 (AHCI) controller, on an
>
> Minor correction to this: lspci says that this is an ASM1062, but it's
> actually an ASM1061. I think that the two parts share a PCI device ID,
> and I've submitted a PCI ID DB change here:
>
> https://admin.pci-ids.ucw.cz/read/PC/1b21/0612

FWIW, the kernel states that 0x0612 is ASM1062, and 0x0611 is ASM1061:
https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L603-L604

But that could of course be incorrect.


When you are dumping the LnkCap in the PCI ID DB change request,
are you dumping the LnkCap for the AHCI controller or the PCI bridge?

(Because you use # lspci -s 27:00.0 in the PCI ID DB change request,
but # lspci -s 28:00.0 further down in this email.)

(Perhaps the PCI bridge only has one PCIe lane, but the AHCI controller has two?)


> > The DMA mask is set here:
> > https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L967
> >
> > And should be called using:
> > hpriv->cap & HOST_CAP_64
> > https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L1929
> >
> > Where hpriv->cap is capabilities reported by the AHCI controller itself.
> > So it definitely seems like your controller supports 64-bit addressing.
>
> Perhaps, or maybe it's misreporting its capabilities, as it is an old
> part (from 2011 or before), and given that it doesn't seem to support
> 64-bit MSI addressing, either, which for a part with a 64-bit DMA engine
> would be an odd restriction:
>
> # lspci -s 28:00.0 -vv | grep -A1 MSI:
> Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit-
> Address: fee00000 Data: 0000
> #

That just claims that MSIs have to use a 32-bit PCI address.

See e.g.:
00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b) (prog-if 00 [VGA controller])
Subsystem: Lenovo Device 3978
Flags: bus master, fast devsel, latency 0, IRQ 58
Memory at b0000000 (64-bit, non-prefetchable) [size=4M]
Memory at a0000000 (64-bit, prefetchable) [size=256M]
I/O ports at 4000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
Capabilities: [d0] Power Management version 2
Capabilities: [a4] PCI Advanced Features
Kernel driver in use: i915

It has 64-bit BARs, but does not support 64-bit MSIs.
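
In other words, the "64bit-" that lspci prints only reflects the 64-bit
address capable bit in the MSI Message Control register, which is
independent of the device's DMA mask. A hedged sketch of what that check
looks like (the helper is illustrative, the register bits are from the PCI spec):

#include <linux/pci.h>

static bool msi_is_64bit_capable(struct pci_dev *pdev)
{
	u16 ctrl;

	if (!pdev->msi_cap)		/* no MSI capability at all */
		return false;

	pci_read_config_word(pdev, pdev->msi_cap + PCI_MSI_FLAGS, &ctrl);
	return !!(ctrl & PCI_MSI_FLAGS_64BIT);
}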


>
> (I checked the available datasheets, but there is no mention of whether
> or not the part supports 64-bit DMA.)

If you are curious, hpriv->cap is the HBA capabilities reported by the
device, see:
https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/serial-ata-ahci-spec-rev1-3-1.pdf

3.1.1 Offset 00h: CAP – HBA Capabilities

Bit 31 - Supports 64-bit Addressing (S64A).

It seems a bit silly that the AHCI controller vendor accidentally set this
bit to 1.
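
For completeness, this is the bit the ahci driver reads out of the HBA's
MMIO registers when it fills in hpriv->cap; a minimal sketch (the register
offsets match drivers/ata/ahci.h, the helper itself is illustrative):

#include <linux/io.h>

#define HOST_CAP	0x00		/* HBA capabilities */
#define HOST_CAP_64	(1U << 31)	/* S64A: supports 64-bit addressing */

static bool hba_claims_64bit_dma(void __iomem *abar)
{
	u32 cap = readl(abar + HOST_CAP);

	return !!(cap & HOST_CAP_64);
}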


> Per:
>
> https://www.asus.com/motherboards-components/motherboards/workstation/pro-ws-wrx80e-sage-se-wifi/helpdesk_bios?model2Name=Pro-WS-WRX80E-SAGE-SE-WIFI
>
> However, some Googling suggests that the ASM106x loads its own firmware
> from a directly attached SPI flash chip, and there are several versions
> of this firmware available in the wild, with different versions of the
> firmware apparently available for legacy IDE mode and for AHCI mode. If
> (some of) the AHCI logic is indeed contained inside the firmware, I
> could see a firmware bug leading to the controller incorrectly presenting
> itself as being 64-bit DMA capable.
>
> Some poking around in the BIOS image suggests that there is no copy of
> the ASM106x firmware inside the BIOS image. In other words, it could be
> that, even though the machine is running the latest available BIOS, the
> ASM1061 might be running an older firmware version.
>
> The ASM1061 firmware does not seem to be readable from software via a
> ROM BAR, and it doesn't seem to readable from software in general (the
> vendor-supplied DOS .exe updater tool only allows you to erase or
> update the SPI flash), so I can't check which firmware version it is
> currently using.
>
>
> > If that does not work, perhaps you could try this (completely untested) patch:
> > (You might need to modify the strings to match the exact strings reported by
> > your BIOS.)
>
> Thanks for the patch!

Assuming that you only have ASM106x controllers in your system,
you could also replace it with something simpler:

@@ -1799,6 +1823,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (ahci_broken_devslp(pdev))
 		hpriv->flags |= AHCI_HFLAG_NO_DEVSLP;
 
+	/* must set flag prior to save config in order to take effect */
+	hpriv->flags |= AHCI_HFLAG_32BIT_ONLY;
+
 #ifdef CONFIG_ARM64
 	if (pdev->vendor == PCI_VENDOR_ID_HUAWEI &&
 	    pdev->device == 0xa235 &&


Just for testing.


>
> I will do some tests with PCI passthrough to a VM, to see whether, and if
> it does, exactly how the controller mangles DMA addresses.

Were you running in a VM when testing this?
(Usually you need to pass through all PCI devices in the same iommu group.)

The errors from your previous email:
[Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]

could also suggest an iommu issue. Have you tried booting with iommu=off
and/or amd_iommu=off on the kernel command line?
(Or temporarily disable the iommu in BIOS.)


>
> I've also ordered a discrete PCIe card with an ASM1061 chip on it, and I
> will perform similar tests with that card, to see exactly where the issue
> is, i.e. whether it is specific to this mainboard or not.
>
> I will follow up once I will have more information.

Looking forward to hearing your findings :)


Kind regards,
Niklas

2024-01-23 21:01:20

by Lennert Buytenhek

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On Wed, Jan 17, 2024 at 11:52:25PM +0100, Niklas Cassel wrote:

> Hello Lennert,

Hi Niklas,

Thank you for your patience. I think that I have gotten to the bottom
of the issue. See below.


> > > > On kernel 6.6.x, with an ASMedia ASM1062 (AHCI) controller, on an
> >
> > Minor correction to this: lspci says that this is an ASM1062, but it's
> > actually an ASM1061. I think that the two parts share a PCI device ID,
> > and I've submitted a PCI ID DB change here:
> >
> > https://admin.pci-ids.ucw.cz/read/PC/1b21/0612
>
> FWIW, the kernel states that 0x0612 is ASM1062, and 0x0611 is ASM1061:
> https://github.com/torvalds/linux/blob/v6.7/drivers/ata/ahci.c#L603-L604
>
> But that could of course be incorrect.

I think that is incorrect.

FWIW, I bought the following PCIe x1 2-port SATA controller from Amazon.
The brand seems to be "10Gtek", and there does not seem to be a brand
model number for this card.

https://www.amazon.de/dp/B09Y5FDCGX

The card has an ASM1061 on it according to the product page, and the
main chip on the card that I got indeed says "ASM1061" on it, along
with some other markings -- but lspci says 0x612:

# lspci -s 07:00.0
07:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
# lspci -s 07:00.0 -n
07:00.0 0106: 1b21:0612 (rev 02)
#

Digging into the git history, the commit that added the note that
claims that 0x0612 is ASM1062 and 0x0611 is ASM1061 is this one:

https://github.com/torvalds/linux/commit/7b4f6ecacb14f384adc1a5a67ad95eb082c02bd1

Which does:

 	/* Asmedia */
-	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1061 */
+	{ PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */
+	{ PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci }, /* ASM1060 */
+	{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */
+	{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */

Note that it removes the line that says that 0x612 is ASM1061!

This commit's description references:

https://bugzilla.kernel.org/show_bug.cgi?id=42804

This bug report is not 100% clear to me, but the reporter of that bug
seems to say that they have a PCIe card with a 0x611 device ID that
reports itself as an IDE controller but can nevertheless operate in
AHCI mode. And indeed, their 0x611 ASM1061 reports a prog-if of 85:

04:00.0 IDE interface: ASMedia Technology Inc. ASM1061 SATA IDE Controller (rev 01) (prog-if 85 [Master SecO PriO])

While my 0x612 ASM1061s have prog-ifs of 01. The two controllers on the
Asus Pro WS WRX80E SAGE SE WIFI mainboard:

# lspci -v | grep "^2[78]"
27:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02) (prog-if 01 [AHCI 1.0])
28:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02) (prog-if 01 [AHCI 1.0])

And the discrete PCIe card (in another machine):

# lspci -v | grep ^07
07:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02) (prog-if 01 [AHCI 1.0])

Therefore, I suspect that 0x611 is "ASM1061/ASM1062 in IDE mode
presenting with a prog-if of 85" and 0x612 is "ASM1061/ASM1062 in AHCI
mode presenting with a prog-if of 01", and that the bug reporter wanted
to run their IDE mode controller in AHCI mode, which the ASM106x seems
to allow as it seems to present the same BARs (legacy I/O port ranges
as well as an AHCI register memory BAR) in both modes.
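
As a side note, the prog-if that lspci prints is the low byte of the
24-bit PCI class code; class 0x010601 is the SATA/AHCI class that both of
my 0x612 devices report, while the bug reporter's 0x611 device reported an
IDE class with prog-if 85. A small illustration of the corresponding check
-- the helper is mine, only the constant comes from the kernel headers:

#include <linux/pci.h>

static bool presents_as_ahci(struct pci_dev *pdev)
{
	/* pdev->class is (base class << 16) | (subclass << 8) | prog-if */
	return pdev->class == PCI_CLASS_STORAGE_SATA_AHCI;	/* 0x010601 */
}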


> When you are dumping the LnkCap in the PCI ID DB change request,
> are you dumping the LnkCap for the AHCI controller or the PCI bridge?
>
> (Because you use # lspci -s 27:00.0 in the PCI ID DB change request,
> but # lspci -s 28:00.0 further down in this email.)
>
> (Perhaps the PCI bride only has one PCI lane, but the AHCI controller
> has two?)

There was more information in the PCI ID DB change request at first, but
I had to trim the comment because there is a 1024 character limit. :-(

The Asus Pro WS WRX80E SAGE SE WIFI mainboard has two ASM1061 controllers:

27:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)
28:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 02)

The one throwing the I/O page faults that led to this email thread was
the 28:00.0 one. For the PCI ID DB change request, I had to trim one of
the controllers out of the comment field for it to fit, and I decided to
trim the second one and leave the first one.

The section of PCIe topology that pertains to these controllers is:

+-[0000:20]-+-00.0
| +-00.2
| +-01.0
| +-01.1-[21-2c]----00.0-[22-2c]--+-01.0-[23]----00.0
| | +-02.0-[24-25]--+-00.0
| | | \-00.1
| | +-03.0-[26]----00.0
| | +-04.0-[27]----00.0
| | +-05.0-[28]----00.0
| | +-06.0-[29-2a]----00.0-[2a]----00.0
| | +-08.0-[2b]--+-00.0
| | | +-00.1
| | | \-00.3
| | \-0a.0-[2c]----00.0

Where the controllers themselves both report 5GT/s x1:

# lspci -s 27:00.0 -vv | egrep -e "Lnk(Cap|Sta):"
LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s, Exit Latency L0s unlimited
LnkSta: Speed 5GT/s, Width x1
# lspci -s 28:00.0 -vv | egrep -e "Lnk(Cap|Sta):"
LnkCap: Port #1, Speed 5GT/s, Width x1, ASPM L0s, Exit Latency L0s unlimited
LnkSta: Speed 5GT/s, Width x1
#

And their upstream ports both report 16GT/s x1:

# lspci -s 22:04.0 -vv | egrep -e "Lnk(Cap|Sta):"
LnkCap: Port #4, Speed 16GT/s, Width x1, ASPM L1, Exit Latency L1 <32us
LnkSta: Speed 5GT/s, Width x1
# lspci -s 22:05.0 -vv | egrep -e "Lnk(Cap|Sta):"
LnkCap: Port #5, Speed 16GT/s, Width x1, ASPM L1, Exit Latency L1 <32us
LnkSta: Speed 5GT/s, Width x1
#


> > (I checked the available datasheets, but there is no mention of whether
> > or not the part supports 64-bit DMA.)
>
> If you are curious, hpriv->cap is the HBA capabilities reported by the
> device, see:
> https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/serial-ata-ahci-spec-rev1-3-1.pdf
>
> 3.1.1 Offset 00h: CAP – HBA Capabilities
>
> Bit 31 - Supports 64-bit Addressing (S64A).
>
> It seems a bit silly that the AHCI controller vendor accidentally set this
> bit to 1.

It sets S64A, but from some testing with this discrete 10Gtek ASM1061
PCIe card, it seems that while the ASM106x supports more than 32 DMA
address bits, it doesn't support the full 64 DMA address bits -- it
only seems to support 43 DMA address bits, and this is likely what
tripped it up here. See below.


> > I will do some tests with PCI passthrough to a VM, to see whether, and if
> > it does, exactly how the controller mangles DMA addresses.
>
> Were you running in a VM when testing this?
> (Usually you need to pass through all PCI devices in the same iommu group.)

There are some VMs, but the I/O page faults that started this email
thread happened on the host, i.e. no PCI device passthrough involved.


> The errors from your previous email:
> [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged
>
> could also suggest an iommu issue. Have you tried booting with iommu=off
> and/or amd_iommu=off on the kernel command line?
> (Or temporarily disable the iommu in BIOS.)

If the card has a DMA addressing problem, then disabling the IOMMU would
likely lead to memory and/or disk corruption.


For my testing, I used a different, random X570 based test system I
had lying around, as the Asus Pro WS WRX80E SAGE SE WIFI based one is
currently being used in production.

On the X570 machine, I passed the ASM1061 PCIe card as well as the
X570 chipset's AHCI controller through to a virtual machine, and I used
the following kernel patch in the virtual machine to force I/O page
faults on writes. (This works because the virtual machine boots from a
virtio block device.)

--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -1667,6 +1667,12 @@ static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
 		dma_addr_t addr = sg_dma_address(sg);
 		u32 sg_len = sg_dma_len(sg);
 
+		printk(KERN_INFO "mapping dma_address=%.16lx sg_len=%.8lx dma_dir=%d\n",
+		       (unsigned long)addr, (unsigned long)sg_len, qc->dma_dir);
+
+		if (qc->dma_dir == DMA_TO_DEVICE)
+			addr = 0xffffffff00000000;
+
 		ahci_sg[si].addr = cpu_to_le32(addr & 0xffffffff);
 		ahci_sg[si].addr_hi = cpu_to_le32((addr >> 16) >> 16);
 		ahci_sg[si].flags_size = cpu_to_le32(sg_len - 1);

When trying to write to the first sector of a disk on the X570 chipset's
AHCI controller from the virtual machine, I then get these sorts of I/O
page faults on the host, entirely as expected:

[Tue Jan 23 21:34:24 2024] vfio-pci 0000:0a:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0xffffffff00000000 flags=0x0010]
[Tue Jan 23 21:34:25 2024] vfio-pci 0000:0a:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0xffffffff00000000 flags=0x0010]
[Tue Jan 23 21:34:25 2024] vfio-pci 0000:0a:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0xffffffff00000000 flags=0x0010]

However, if I write to the first sector of a disk on the ASM1061 PCIe
card, I get very different I/O page faults:

[Tue Jan 23 21:31:55 2024] vfio-pci 0000:07:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0x7ff00000000 flags=0x0010]
[Tue Jan 23 21:31:55 2024] vfio-pci 0000:07:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0x7ff00000500 flags=0x0010]

Note how the upper 21 bits in the reported DMA addresses in the ASM1061
case are all clear.
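
To double-check the arithmetic: keeping only the low 43 address bits of
the test address reproduces exactly the fault address logged above. A
trivial standalone illustration (plain userspace C):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t test_addr = 0xffffffff00000000ULL;	/* address forced by the patch */
	uint64_t mask_43   = (1ULL << 43) - 1;		/* keep bits 42:0 */

	/* prints: 0xffffffff00000000 -> 0x7ff00000000 */
	printf("0x%llx -> 0x%llx\n",
	       (unsigned long long)test_addr,
	       (unsigned long long)(test_addr & mask_43));
	return 0;
}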

The host PCIe topology shows that the chipset controller and ASM1061
are on sibling buses, suggesting that they are in a similar IOMMU
environment:

+-01.2-[02-0a]----00.0-[03-0a]--+-03.0-[04]--+-00.0
| | \-00.1
| +-04.0-[05]----00.0
| +-05.0-[06]----00.0
| +-06.0-[07]----00.0 <== ASM1061
| +-08.0-[08]--+-00.0
| | +-00.1
| | \-00.3
| +-09.0-[09]----00.0 <== X570 AHCI
| \-0a.0-[0a]----00.0 <== X570 AHCI

I did another test where I forced the upper 21 DMA address bits to 1
for all writes:

--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -1667,6 +1667,12 @@ static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
 		dma_addr_t addr = sg_dma_address(sg);
 		u32 sg_len = sg_dma_len(sg);
 
+		printk(KERN_INFO "mapping dma_address=%.16lx sg_len=%.8lx dma_dir=%d\n",
+		       (unsigned long)addr, (unsigned long)sg_len, qc->dma_dir);
+
+		if (qc->dma_dir == DMA_TO_DEVICE)
+			addr |= 0xfffff80000000000;
+
 		ahci_sg[si].addr = cpu_to_le32(addr & 0xffffffff);
 		ahci_sg[si].addr_hi = cpu_to_le32((addr >> 16) >> 16);
 		ahci_sg[si].flags_size = cpu_to_le32(sg_len - 1);

As expected, this breaks writes to the disk on the X570 AHCI controller,
which now all give I/O page faults on the host. I/O to the disk on the
ASM1061 controller, however, seems completely unaffected by this patch:
I can create a btrfs filesystem, clone linux.git onto it, scrub it with
no errors, unmount and remount, scrub it again with no errors, reboot
and mount it again, and again scrub it with no errors, etc.

This all suggests to me that the ASM1061 drops the upper 21 bits of all
DMA addresses. Going back to the original report, on the Asus Pro WS
WRX80E-SAGE SE WIFI, we also see DMA addresses that seem to have been
capped to 43 bits:

> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
> [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]

Since in this test the X570 AHCI controller is inside the chipset and
the ASM1061 in a PCIe slot, this doesn't 100% prove that the ASM1061 is
at fault (e.g. the upstream IOMMUs for the X570 AHCI controller and the
ASM1061 could be behaving differently), and to 100% prove this theory I
would have to find a non-ASM1061 AHCI controller and put it in the same
PCIe slot as the ASM1061 is currently in, and try to make it DMA to
address 0xffffffff00000000, and verify that the I/O page faults on the
host report 0xffffffff00000000 and not 0x7fffff00000 -- but I think that
the current evidence is perhaps good enough?

There are two ways to handle this -- either set the DMA mask for ASM106x
parts to 43 bits, or take the lazy route and just use AHCI_HFLAG_32BIT_ONLY
for these parts. I feel that the former would be more appropriate, as
there seem to be plenty of bits beyond bit 31 that do work, but I will
defer to your judgement on this matter. What do you think the right way
to handle this apparent hardware quirk is?

Thanks again for your consideration.

Kind regards,
Lennert

2024-01-24 10:15:57

by Niklas Cassel

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On Tue, Jan 23, 2024 at 11:00:44PM +0200, Lennert Buytenhek wrote:
> On Wed, Jan 17, 2024 at 11:52:25PM +0100, Niklas Cassel wrote:

(snip)

> This all suggests to me that the ASM1061 drops the upper 21 bits of all
> DMA addresses. Going back to the original report, on the Asus Pro WS
> WRX80E-SAGE SE WIFI, we also see DMA addresses that seem to have been
> capped to 43 bits:
>
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
> > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]
>
> Since in this test the X570 AHCI controller is inside the chipset and
> the ASM1061 in a PCIe slot, this doesn't 100% prove that the ASM1061 is
> at fault (e.g. the upstream IOMMUs for the X570 AHCI controller and the
> ASM1061 could be behaving differently), and to 100% prove this theory I
> would have to find a non-ASM1061 AHCI controller and put it in the same
> PCIe slot as the ASM1061 is currently in, and try to make it DMA to
> address 0xffffffff00000000, and verify that the I/O page faults on the
> host report 0xffffffff00000000 and not 0x7fffff00000 -- but I think that
> the current evidence is perhaps good enough?

It does indeed look like the same issue on the internal ASMedia ASM1061 on
your Asus Pro WS WRX80E-SAGE SE WIFI and the standalone ASMedia ASM1061
PCIe card connected to your other X570 based motherboard.

However, the ASMedia ASM1061 seems to be quite common, so I'm surprised that
no one has reported this problem before. What has changed?
Perhaps there is some recent kernel patch that introduced this?

The commit that added this behavior was first introduced here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bf7fda4dce22214c70c49960b1b6438e6260b67
then reverted here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=af3e9579ecfbe1796334bb25a2f0a6437983673a
and then reintroduced in a new form here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=791c2b17fb4023f21c3cbf5f268af01d9b8cb7cc

I suppose that these commits might be recent enough that we have not received
any bug reports for ASMedia ASM1061 since then.


If you can find another PCIe card (e.g. an AHCI controller or an NVMe
controller) that you can plug into the same slot on the X570 motherboard,
I agree that it would confirm your theory.


If you don't have any other PCIe card, do you possibly have another
system, with an IOMMU and a free PCIe slot, into which you can plug your
ASMedia ASM1061 PCIe card and perform the same test?

(Preferably something that is not AMD, to rule out an amd_iommu issue,
since both the Asus Pro WS WRX80E-SAGE SE WIFI and the X570 system use
amd_iommu.)

If we see the same behavior there -- the device dropping the upper 21
bits -- when using the trick in your test patch, that would also confirm
your theory.


>
> There are two ways to handle this -- either set the DMA mask for ASM106x
> parts to 43 bits, or take the lazy route and just use AHCI_HFLAG_32BIT_ONLY
> for these parts. I feel that the former would be more appropriate, as
> there seem to be plenty of bits beyond bit 31 that do work, but I will
> defer to your judgement on this matter. What do you think the right way
> to handle this apparent hardware quirk is?

I've seen something similar for NVMe, where some NVMe controllers from
Amazon were violating the spec and only supported 48-bit DMA addresses,
even though the NVMe spec requires support for 64-bit DMA addresses, see:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bdf260362b3be529d170b04662638fd6dc52241

It is possible that ASMedia ASM1061 has a similar problem (but for AHCI)
and only supports 43-bit DMA addresses, even though it sets AHCI CAP.S64A,
which says "Indicates whether the HBA can access 64-bit data structures.".

I think the best thing is to do a similar quirk, where we set the dma_mask
accordingly.
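
Roughly along these lines -- a sketch only, assuming that a plain 43-bit
mask is what the hardware actually honors, and with the way the affected
parts are detected (a bare vendor/device check) purely illustrative:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int asm106x_dma_mask_sketch(struct pci_dev *pdev)
{
	/* 1b21:0612 is the ASMedia ASM1061/ASM1062 discussed in this thread */
	if (pdev->vendor == 0x1b21 && pdev->device == 0x0612)
		return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(43));

	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
}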


Kind regards,
Niklas

2024-01-24 12:41:13

by Lennert Buytenhek

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On Wed, Jan 24, 2024 at 11:15:11AM +0100, Niklas Cassel wrote:

> > This all suggests to me that the ASM1061 drops the upper 21 bits of all
> > DMA addresses. Going back to the original report, on the Asus Pro WS
> > WRX80E-SAGE SE WIFI, we also see DMA addresses that seem to have been
> > capped to 43 bits:
> >
> > > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: Using 64-bit DMA addresses
> > > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00000 flags=0x0000]
> > > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00300 flags=0x0000]
> > > [Thu Jan 4 23:12:54 2024] ahci 0000:28:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0035 address=0x7fffff00380 flags=0x0000]
> >
> > Since in this test the X570 AHCI controller is inside the chipset and
> > the ASM1061 in a PCIe slot, this doesn't 100% prove that the ASM1061 is
> > at fault (e.g. the upstream IOMMUs for the X570 AHCI controller and the
> > ASM1061 could be behaving differently), and to 100% prove this theory I
> > would have to find a non-ASM1061 AHCI controller and put it in the same
> > PCIe slot as the ASM1061 is currently in, and try to make it DMA to
> > address 0xffffffff00000000, and verify that the I/O page faults on the
> > host report 0xffffffff00000000 and not 0x7fffff00000 -- but I think that
> > the current evidence is perhaps good enough?
>
> It does indeed look like the same issue on the internal ASMedia ASM1061 on
> your Asus Pro WS WRX80E-SAGE SE WIFI and the stand alone ASMedia ASM1061
> PCI card connected to your other X570 based motherboard.
>
> However, ASMedia ASM1061 seems to be quite common, so I'm surprised that
> no one has ever reported this problem before, so what has changed?
> Perhaps there is some recent kernel patch that introduced this?
>
> The commit was introduced:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bf7fda4dce22214c70c49960b1b6438e6260b67
> was reverted:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=af3e9579ecfbe1796334bb25a2f0a6437983673a
> and was then introduced in a new form:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=791c2b17fb4023f21c3cbf5f268af01d9b8cb7cc
>
> I suppose that these commits might be recent enough that we have not received
> any bug reports for ASMedia ASM1061 since then.

I don't know exactly what triggered the I/O page faults that started
this email thread, but note that it was working OK for a week or two
before this happened. When the issue originally triggered, there was
a lot of write I/O going on to a pair of (slow) QLC SSDs connected to
the ASM1061, so there may have been timeouts or I/O errors involved.

This system is new and it has never run any kernel older than 6.6, so
I don't have data for older kernels.

Also note that this is a 2022 CPU on a 2023 mainboard with 256 GiB of
RAM and a 2011 PCIe 2.0 SATA controller, which might not be the most
common of combinations.


> If you can find another PCIe card (e.g. a AHCI controller or NVMe
> controller) that you can plug in to the same slot on the X570
> motherboard, I agree that it would confirm your theory.

I don't have another PCIe AHCI controller handy right now, but I do
have another PCIe card that I can get to DMA to arbitrary addresses: a
SuperMicro AOC-SAS2LP-MV8 PCIe SAS controller.

However, since the ASM1061 card was in a mechanically x1 slot, and
this SAS controller is an x8 card, I had to both re-test the ASM1061
in another, larger slot and test the SAS controller in that same slot.

So, with the ASM1061 moved to another (x16) slot (where it is now
04:00.0, instead of 07:00.0 which it was before), with the same patch
as before:

+		if (qc->dma_dir == DMA_TO_DEVICE)
+			addr = 0xffffffff00000000;

In this new slot, the I/O page faults generated look the same as before,
that is, with the upper 21 bits of the DMA addresses dropped:

[Wed Jan 24 13:59:45 2024] vfio-pci 0000:04:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0x7ff00000000 flags=0x0010]
[Wed Jan 24 13:59:45 2024] vfio-pci 0000:04:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0x7ff00000700 flags=0x0010]

After replacing the ASM1061 card with the SAS controller, which is:

04:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev c3)

And with the following patch applied to the mvsas driver:

diff --git a/drivers/scsi/mvsas/mv_94xx.c b/drivers/scsi/mvsas/mv_94xx.c
index fc0b8eb68204..11886e73a625 100644
--- a/drivers/scsi/mvsas/mv_94xx.c
+++ b/drivers/scsi/mvsas/mv_94xx.c
@@ -788,7 +788,7 @@ static void mvs_94xx_make_prd(struct scatterlist *scatter, int nr, void *prd)
 	struct mvs_prd_imt im_len;
 	*(u32 *)&im_len = 0;
 	for_each_sg(scatter, sg, nr, i) {
-		buf_prd->addr = cpu_to_le64(sg_dma_address(sg));
+		buf_prd->addr = cpu_to_le64(0xffffffff00000000UL);
 		im_len.len = sg_dma_len(sg);
 		buf_prd->im_len = cpu_to_le32(*(u32 *)&im_len);
 		buf_prd++;

The corresponding I/O page faults in the host look like this:

[Wed Jan 24 14:08:33 2024] vfio-pci 0000:04:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0xffffffff00000000 flags=0x0030]
[Wed Jan 24 14:08:33 2024] vfio-pci 0000:04:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x0021 address=0xffffffff00000180 flags=0x0030]

That is, with the upper 32 bits of the DMA address fully intact.

I think this shows pretty conclusively that the ASM1061 is dropping
the upper 21 bits of DMA addresses.


> > There are two ways to handle this -- either set the DMA mask for ASM106x
> > parts to 43 bits, or take the lazy route and just use AHCI_HFLAG_32BIT_ONLY
> > for these parts. I feel that the former would be more appropriate, as
> > there seem to be plenty of bits beyond bit 31 that do work, but I will
> > defer to your judgement on this matter. What do you think the right way
> > to handle this apparent hardware quirk is?
>
> I've seen something similar for NVMe, where some NVMe controllers from
> Amazon was violating the spec, and only supported 48-bit DMA addresses,
> even though NVMe spec requires you to support 64-bit DMA addresses, see:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bdf260362b3be529d170b04662638fd6dc52241
>
> It is possible that ASMedia ASM1061 has a similar problem (but for AHCI)
> and only supports 43-bit DMA addresses, even though it sets AHCI CAP.S64A,
> which says "Indicates whether the HBA can access 64-bit data structures.".
>
> I think the best thing is to do a similar quirk, where we set the dma_mask
> accordingly.

I'll give that a try.


Kind regards,
Lennert

2024-01-24 13:58:21

by Lennert Buytenhek

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On Wed, Jan 24, 2024 at 02:40:51PM +0200, Lennert Buytenhek wrote:

> > > There are two ways to handle this -- either set the DMA mask for ASM106x
> > > parts to 43 bits, or take the lazy route and just use AHCI_HFLAG_32BIT_ONLY
> > > for these parts. I feel that the former would be more appropriate, as
> > > there seem to be plenty of bits beyond bit 31 that do work, but I will
> > > defer to your judgement on this matter. What do you think the right way
> > > to handle this apparent hardware quirk is?
> >
> > I've seen something similar for NVMe, where some NVMe controllers from
> > Amazon was violating the spec, and only supported 48-bit DMA addresses,
> > even though NVMe spec requires you to support 64-bit DMA addresses, see:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bdf260362b3be529d170b04662638fd6dc52241
> >
> > It is possible that ASMedia ASM1061 has a similar problem (but for AHCI)
> > and only supports 43-bit DMA addresses, even though it sets AHCI CAP.S64A,
> > which says "Indicates whether the HBA can access 64-bit data structures.".
> >
> > I think the best thing is to do a similar quirk, where we set the dma_mask
> > accordingly.
>
> I'll give that a try.

I've sent out a patch that appears (from printk debugging) to do the
right thing, but I haven't validated that that patch fixes the original
issue, as the original issue is not trivial to trigger, and the hardware
that it triggered on is currently unavailable.

I've also made the quirk apply to all ASMedia ASM106x parts, because I
expect them to be affected by the same issue, but let's see what the
ASMedia folks have to say about that.

Thanks for your help!


Kind regards,
Lennert

2024-01-24 16:15:27

by Robin Murphy

Subject: Re: ASMedia ASM1062 (AHCI) hang after "ahci 0000:28:00.0: Using 64-bit DMA addresses"

On 24/01/2024 1:58 pm, Lennert Buytenhek wrote:
> On Wed, Jan 24, 2024 at 02:40:51PM +0200, Lennert Buytenhek wrote:
>
>>>> There are two ways to handle this -- either set the DMA mask for ASM106x
>>>> parts to 43 bits, or take the lazy route and just use AHCI_HFLAG_32BIT_ONLY
>>>> for these parts. I feel that the former would be more appropriate, as
>>>> there seem to be plenty of bits beyond bit 31 that do work, but I will
>>>> defer to your judgement on this matter. What do you think the right way
>>>> to handle this apparent hardware quirk is?
>>>
>>> I've seen something similar for NVMe, where some NVMe controllers from
>>> Amazon was violating the spec, and only supported 48-bit DMA addresses,
>>> even though NVMe spec requires you to support 64-bit DMA addresses, see:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bdf260362b3be529d170b04662638fd6dc52241
>>>
>>> It is possible that ASMedia ASM1061 has a similar problem (but for AHCI)
>>> and only supports 43-bit DMA addresses, even though it sets AHCI CAP.S64A,
>>> which says "Indicates whether the HBA can access 64-bit data structures.".
>>>
>>> I think the best thing is to do a similar quirk, where we set the dma_mask
>>> accordingly.
>>
>> I'll give that a try.
>
> I've sent out a patch that appears (from printk debugging) to do the
> right thing, but I haven't validated that that patch fixes the original
> issue, as the original issue is not trivial to trigger, and the hardware
> that it triggered on is currently unavailable.

The missing piece of the puzzle is that *something* has to use up all
the available 32-bit IOVA space to make you spill over into the 64-bit
space to begin with. It can happen just from having many large buffers
mapped simultaneously (particularly if there are several devices in the
same IOMMU group), or it could be that something is leaking DMA mappings
over time.

An easy way to confirm the device behaviour should be to boot with
"iommu.forcedac=1", then all devices will have their full DMA mask
exercised straight away.

Cheers,
Robin.

> I've also made the quirk apply to all ASMedia ASM106x parts, because I
> expect them to be affected by the same issue, but let's see what the
> ASMedia folks have to say about that.
>
> Thanks for your help!
>
>
> Kind regards,
> Lennert
>