2007-05-25 04:21:19

by Eric W. Biederman

Subject: [PATCH 1/2] msi: Invert the sense of the MSI enables.


Currently we blacklist known bad MSI configurations, which means we
keep getting MSI enabled on chipsets that either do not support MSI
or implement it improperly. Since the normal IRQ routing mechanism
seems to work even when MSI does not, this is a bad default and
causes non-functioning systems for no good reason.

So this patch inverts the sense of the MSI bus flag to only enable
MSI on known good systems. I am seeding that list with the set of
chipsets that have an enabled hypertransport MSI mapping capability,
which is as close as I can come to a generic MSI enable. So for
actually using MSI this patch is a regression, but for just having
MSI enabled in the kernel by default things should just work with
this patch applied.

People can still enable MSI on a per-bus level for testing by writing
to sysfs, so discovering chipsets that actually work (assuming we are
using modular drivers) should be pretty straightforward.
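
For illustration, a minimal userspace sketch of that sysfs write (the
msi_bus attribute is the one created by pci-sysfs.c below; the bridge
address 0000:00:0e.0 is only an example, substitute your own):

/* Hypothetical helper: enable MSI for devices behind one bridge. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/sys/bus/pci/devices/0000:00:0e.0/msi_bus";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror(path);
                return 1;
        }
        if (write(fd, "1", 1) != 1)     /* writing "0" disables MSI again */
                perror("write");
        close(fd);
        return 0;
}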

Signed-off-by: Eric W. Biederman <[email protected]>
---
Documentation/MSI-HOWTO.txt | 30 +++++++++++++------
drivers/pci/msi.c | 19 ++++++++----
drivers/pci/pci-sysfs.c | 6 ++--
drivers/pci/quirks.c | 66 ++++---------------------------------------
include/linux/pci.h | 2 +-
5 files changed, 42 insertions(+), 81 deletions(-)

diff --git a/Documentation/MSI-HOWTO.txt b/Documentation/MSI-HOWTO.txt
index 0d82407..81f4e18 100644
--- a/Documentation/MSI-HOWTO.txt
+++ b/Documentation/MSI-HOWTO.txt
@@ -485,21 +485,31 @@ single device. This may be achieved by either not calling pci_enable_msi()
or all, or setting the pci_dev->no_msi flag before (most of the time
in a quirk).

-6.2. Disabling MSI below a bridge
-
-The vast majority of MSI quirks are required by PCI bridges not
-being able to route MSI between busses. In this case, MSI have to be
-disabled on all devices behind this bridge. It is achieves by setting
-the PCI_BUS_FLAGS_NO_MSI flag in the pci_bus->bus_flags of the bridge
+6.2. Enabling MSI below a bridge
+
+Despite being standard, mandated by the pci-express spec, supported
+by plugin cards, and less hassle to deal with than irq routing tables,
+not all hardware supports MSI: not all chipsets enable MSI, and not
+all motherboards enable MSI support in MSI-supporting hardware. So
+until a hardware-specific test is implemented to check whether MSI is
+supported and enabled on a piece of hardware, we disable MSI support
+by default. This at least ensures users will have a working system
+using the older mechanisms.
+
+So to enable MSI support on a particular chipset we need an MSI quirk
+that recognizes the chipset and tests to see if MSI is enabled. In
+this case, MSI has to be enabled on all bridges that are capable of
+transforming MSI messages into irqs. It is achieved by setting
+the PCI_BUS_FLAGS_MSI flag in the pci_bus->bus_flags of the bridge
subordinate bus. There is no need to set the same flag on bridges that
-are below the broken bridge. When pci_enable_msi() is called to enable
-MSI on a device, pci_msi_supported() takes care of checking the NO_MSI
-flag in all parent busses of the device.
+are below the working bridge. When pci_enable_msi() is called to enable
+MSI on a device, pci_msi_supported() takes care of checking to ensure
+at least one parent bus supports MSI.

Some bridges actually support dynamic MSI support enabling/disabling
by changing some bits in their PCI configuration space (especially
the Hypertransport chipsets such as the nVidia nForce and Serverworks
-HT2000). It may then be required to update the NO_MSI flag on the
+HT2000). It may then be required to update the MSI flag on the
corresponding devices in the sysfs hierarchy. To enable MSI support
on device "0000:00:0e", do:

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index d9cbd58..000c9ae 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -467,15 +467,20 @@ static int pci_msi_check_device(struct pci_dev* dev, int nvec, int type)
if (nvec < 1)
return -ERANGE;

- /* Any bridge which does NOT route MSI transactions from it's
- * secondary bus to it's primary bus must set NO_MSI flag on
- * the secondary pci_bus.
- * We expect only arch-specific PCI host bus controller driver
- * or quirks for specific PCI bridges to be setting NO_MSI.
+ /* For pure pci bridges routing MSI traffic is just another
+ * example of routing a small DMA transaction so it should be
+ * no problem. However getting MSI traffic from PCI to the
+ * non PCI part of the chipset is a problem. So this
+ * code checks to see if we have an upstream bus where
+ * MSI is known to work.
+ *
+ * Only non pci to pci bridges are expected to set this flag.
*/
for (bus = dev->bus; bus; bus = bus->parent)
- if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
- return -EINVAL;
+ if (bus->bus_flags & PCI_BUS_FLAGS_MSI)
+ break;
+ if (!bus)
+ return -EINVAL;

ret = arch_msi_check_device(dev, nvec, type);
if (ret)
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index 284e83a..4cebb60 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -160,7 +160,7 @@ msi_bus_show(struct device *dev, struct device_attribute *attr, char *buf)
return 0;

return sprintf (buf, "%u\n",
- !(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI));
+ !!(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_MSI));
}

static ssize_t
@@ -178,13 +178,13 @@ msi_bus_store(struct device *dev, struct device_attribute *attr,
return count;

if (*buf == '0') {
- pdev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
+ pdev->subordinate->bus_flags &= ~PCI_BUS_FLAGS_MSI;
dev_warn(&pdev->dev, "forced subordinate bus to not support MSI,"
" bad things could happen.\n");
}

if (*buf == '1') {
- pdev->subordinate->bus_flags &= ~PCI_BUS_FLAGS_NO_MSI;
+ pdev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
dev_warn(&pdev->dev, "forced subordinate bus to support MSI,"
" bad things could happen.\n");
}
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 6ccc2e9..f60c6c6 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1625,33 +1625,6 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
quirk_nvidia_ck804_pcie_aer_ext_cap);

#ifdef CONFIG_PCI_MSI
-/* The Serverworks PCI-X chipset does not support MSI. We cannot easily rely
- * on setting PCI_BUS_FLAGS_NO_MSI in its bus flags because there are actually
- * some other busses controlled by the chipset even if Linux is not aware of it.
- * Instead of setting the flag on all busses in the machine, simply disable MSI
- * globally.
- */
-static void __init quirk_svw_msi(struct pci_dev *dev)
-{
- pci_no_msi();
- printk(KERN_WARNING "PCI: MSI quirk detected. MSI deactivated.\n");
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_GCNB_LE, quirk_svw_msi);
-
-/* Disable MSI on chipsets that are known to not support it */
-static void __devinit quirk_disable_msi(struct pci_dev *dev)
-{
- if (dev->subordinate) {
- printk(KERN_WARNING "PCI: MSI quirk detected. "
- "PCI_BUS_FLAGS_NO_MSI set for %s subordinate bus.\n",
- pci_name(dev));
- dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
- }
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS400_200, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS480, quirk_disable_msi);
-
/* Go through the list of Hypertransport capabilities and
* return 1 if a HT MSI capability is found and enabled */
static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
@@ -1677,43 +1650,16 @@ static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
return 0;
}

-/* Check the hypertransport MSI mapping to know whether MSI is enabled or not */
+/* Enable MSI on hypertransport chipsets supporting MSI */
static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
{
- if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
- printk(KERN_WARNING "PCI: MSI quirk detected. "
- "MSI disabled on chipset %s.\n",
- pci_name(dev));
- dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
+ if (dev->subordinate && msi_ht_cap_enabled(dev)) {
+ printk(KERN_INFO "PCI: Enabled MSI on chipset %s.\n",
+ pci_name(dev));
+ dev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
}
}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
- quirk_msi_ht_cap);
+DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);

-/* The nVidia CK804 chipset may have 2 HT MSI mappings.
- * MSI are supported if the MSI capability set in any of these mappings.
- */
-static void __devinit quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
-{
- struct pci_dev *pdev;
-
- if (!dev->subordinate)
- return;

- /* check HT MSI cap on this chipset and the root one.
- * a single one having MSI is enough to be sure that MSI are supported.
- */
- pdev = pci_get_slot(dev->bus, 0);
- if (!pdev)
- return;
- if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
- printk(KERN_WARNING "PCI: MSI quirk detected. "
- "MSI disabled on chipset %s.\n",
- pci_name(dev));
- dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
- }
- pci_dev_put(pdev);
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
- quirk_nvidia_ck804_msi_ht_cap);
#endif /* CONFIG_PCI_MSI */
diff --git a/include/linux/pci.h b/include/linux/pci.h
index fbf3766..468d761 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -111,7 +111,7 @@ enum pcie_reset_state {

typedef unsigned short __bitwise pci_bus_flags_t;
enum pci_bus_flags {
- PCI_BUS_FLAGS_NO_MSI = (__force pci_bus_flags_t) 1,
+ PCI_BUS_FLAGS_MSI = (__force pci_bus_flags_t) 1,
};

struct pci_cap_saved_state {
--
1.5.1.1.181.g2de0


2007-05-25 04:28:32

by Eric W. Biederman

Subject: [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI.


This patch is the result of a quick survey of the Intel chipset
documents. I took a quick look in each document to see if the chipset
supported MSI and, if so, I looked through to find the vendor and
device id of device 0 function 0 of the chipset and added a quirk for
that device id if it was not a duplicate.

I happen to have one of the systems on the list, so the patch is
tested and seems to work fine.

This patch should be safe. The anecdotal evidence is that when dealing
with MSI the Intel chipsets just work. If we find some buggy ones
changing the list won't be hard.

This patch should also serve as an example of how simple it is to
enable MSI on a chipset or platform configuration where it is known
to work.

This patch does not use the defines from pci_ids.h because there are
so many defines and so many duplicate host-bridge device ids in
Intel's documentation that I could not keep any of it straight without
using the raw device ids, which allowed me to order the fixups and
quickly detect duplicates. Unfortunately the good names were a
confusing layer of abstraction. I have still updated pci_ids.h so
that it is possible to track the numbers back to their chipset.

Signed-off-by: Eric W. Biederman <[email protected]>
---
drivers/pci/quirks.c | 27 +++++++++++++++++++++++++++
include/linux/pci_ids.h | 10 ++++++++++
2 files changed, 37 insertions(+), 0 deletions(-)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index f60c6c6..40fb499 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1661,5 +1661,32 @@ static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
}
DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);

+static void __devinit quirk_msi_chipset(struct pci_dev *dev)
+{
+ dev->bus->bus_flags |= PCI_BUS_FLAGS_MSI;
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x254C, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2550, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2558, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2560, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2570, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2578, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2580, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2581, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2588, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2590, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25C8, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25D0, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25D4, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2600, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2770, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2774, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2778, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x277C, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x27A0, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2990, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2A00, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x359E, quirk_msi_chipset);
+

#endif /* CONFIG_PCI_MSI */
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 62b3e00..71df8c0 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2232,11 +2232,20 @@
#define PCI_DEVICE_ID_INTEL_82865_IG 0x2572
#define PCI_DEVICE_ID_INTEL_82875_HB 0x2578
#define PCI_DEVICE_ID_INTEL_82915G_HB 0x2580
+#define PCI_DEVICE_ID_INTEL_82915_HB 0x2581
#define PCI_DEVICE_ID_INTEL_82915G_IG 0x2582
+#define PCI_DEVICE_ID_INTEL_E7221_HB 0x2588
#define PCI_DEVICE_ID_INTEL_82915GM_HB 0x2590
#define PCI_DEVICE_ID_INTEL_82915GM_IG 0x2592
+#define PCI_DEVICE_ID_INTEL_5000P_HB 0x25C8
+#define PCI_DEVICE_ID_INTEL_5000Z_HB 0x25D0
+#define PCI_DEVICE_ID_INTEL_5000V_HB 0x25D4
+#define PCI_DEVICE_ID_INTEL_E8500_HB 0x2600
#define PCI_DEVICE_ID_INTEL_82945G_HB 0x2770
#define PCI_DEVICE_ID_INTEL_82945G_IG 0x2772
+#define PCI_DEVICE_ID_INTEL_82955X_HB 0x2774
+#define PCI_DEVICE_ID_INTEL_3000_HB 0x2778
+#define PCI_DEVICE_ID_INTEL_82975X_HB 0x277C
#define PCI_DEVICE_ID_INTEL_82945GM_HB 0x27A0
#define PCI_DEVICE_ID_INTEL_82945GM_IG 0x27A2
#define PCI_DEVICE_ID_INTEL_ICH6_0 0x2640
@@ -2272,6 +2281,7 @@
#define PCI_DEVICE_ID_INTEL_ICH9_4 0x2914
#define PCI_DEVICE_ID_INTEL_ICH9_5 0x2915
#define PCI_DEVICE_ID_INTEL_ICH9_6 0x2930
+#define PCI_DEVICE_ID_INTEL_82946_HB 0x2990
#define PCI_DEVICE_ID_INTEL_82855PM_HB 0x3340
#define PCI_DEVICE_ID_INTEL_82830_HB 0x3575
#define PCI_DEVICE_ID_INTEL_82830_CGC 0x3577
--
1.5.1.1.181.g2de0

2007-05-25 04:33:06

by Andrew Morton

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Thu, 24 May 2007 22:19:09 -0600 [email protected] (Eric W. Biederman) wrote:

> Currently we blacklist known bad msi configurations which means we
> keep getting MSI enabled on chipsets that either do not support MSI,
> or MSI is implemented improperly. Since the normal IRQ routing
> mechanism seems to works even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
>
> So this patch inverts the sense of the MSI bus flag to only enable
> MSI on known good systems. I am seeding that list with the set of
> chipsets with an enabled hypertransport MSI mapping capability. Which
> is as close as I can come to an generic MSI enable. So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.
>
> People can still enable MSI on a per bus level for testing by writing
> to sysfs so discovering chipsets that actually work (assuming we are
> using modular drivers) should be pretty straight forward.

Yup.

Do we have a feel for how much performance we're losing on those
systems which _could_ do MSI, but which will end up defaulting
to not using it?

2007-05-25 05:14:22

by Michael Ellerman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
> Currently we blacklist known bad msi configurations which means we
> keep getting MSI enabled on chipsets that either do not support MSI,
> or MSI is implemented improperly. Since the normal IRQ routing
> mechanism seems to works even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
>
> So this patch inverts the sense of the MSI bus flag to only enable
> MSI on known good systems. I am seeding that list with the set of
> chipsets with an enabled hypertransport MSI mapping capability. Which
> is as close as I can come to an generic MSI enable. So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.

I guess this is a good idea for random x86 machines. On powerpc I think
we'll just turn it on for every bus, and let the existing per-platform
logic decide.

cheers

--
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person



2007-05-25 05:19:38

by Greg KH

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Thu, May 24, 2007 at 10:19:09PM -0600, Eric W. Biederman wrote:
>
> Currently we blacklist known bad msi configurations which means we
> keep getting MSI enabled on chipsets that either do not support MSI,
> or MSI is implemented improperly. Since the normal IRQ routing
> mechanism seems to works even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
>
> So this patch inverts the sense of the MSI bus flag to only enable
> MSI on known good systems. I am seeding that list with the set of
> chipsets with an enabled hypertransport MSI mapping capability. Which
> is as close as I can come to an generic MSI enable. So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.
>
> People can still enable MSI on a per bus level for testing by writing
> to sysfs so discovering chipsets that actually work (assuming we are
> using modular drivers) should be pretty straight forward.

Originally I would have thought this would be a good idea, but now that
Vista is out, which supports MSI, I don't think we are going to need
this in the future. All new chipsets should support MSI fine and this
table will only grow in the future, while the blacklist should not need
to have many new entries added to it.

So I don't think this is a good idea, sorry.

greg k-h

2007-05-25 05:23:22

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Andrew Morton <[email protected]> writes:

> On Thu, 24 May 2007 22:19:09 -0600 [email protected] (Eric W. Biederman)
> wrote:
>
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly. Since the normal IRQ routing
>> mechanism seems to works even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>>
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known good systems. I am seeding that list with the set of
>> chipsets with an enabled hypertransport MSI mapping capability. Which
>> is as close as I can come to an generic MSI enable. So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>>
>> People can still enable MSI on a per bus level for testing by writing
>> to sysfs so discovering chipsets that actually work (assuming we are
>> using modular drivers) should be pretty straight forward.
>
> Yup.
>
> Do we have a feel for how much performace we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

I don't have any good numbers, although it is enough that you can
measure the difference. I think Roland Dreier and the other IB
guys have actually made the measurements. For low-end hardware
I expect the performance difference is currently in the noise.

However because MSI irqs are not shared a lot of things are
simplified. In particular it means that you should be able to have an
irq fastpath that does not read the hardware at all, and hardware
register reads can be comparatively very slow.

Further, because MSI irqs are unshared and edge triggered, we don't
have a danger of screaming irqs or drivers not handling a shared
irq properly.

The big performance win is most likely to be for fast I/O devices
where the multiple IRQs per card will allow the irqs for different
flows of data to be directed to different cpus.

So for the short term I don't think there is much downside in having
MSI disabled. For the long term I think MSI will be quite useful.

Further my gut feel is that my two patches together will enable MSI on
most of the x86 chipsets where it currently works. Because most
chipsets are either from Intel or are hypertransport based.

Eric

2007-05-25 05:39:17

by Andi Kleen

Subject: Re: [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI.

On Friday 25 May 2007 06:26:50 Eric W. Biederman wrote:
>
> This patch is the result of a quick survey of the Intel chipset
> documents. I took a quick look in the document to see if the chipset
> supported MSI and if so I looked through to find the vendor and device
> id of device 0 function 0 of the chipset and added a quirk for that
> device id if I it was not a duplicate.

It would be better to look for any PCI bridge. Sometimes there are
different PCI bridges around (e.g. external PCI-X bridges on HT systems)
which might need their own quirks.

Also, in the x86 world Microsoft defined a FADT ACPI flag telling
Vista that MSI doesn't work. It might make sense to do

if (dmi year >= 2007 && FADT.msi_disable not set) assume it works
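
A rough, untested sketch of that heuristic, assuming the DMI helper
and the FADT boot-flag definition (ACPI_FADT_NO_MSI) as they exist
around 2.6.22; the names here are illustrative, not code from these
patches:

#include <linux/acpi.h>
#include <linux/dmi.h>
#include <linux/kernel.h>
#include <linux/string.h>

static int __init msi_probably_works(void)
{
        /* DMI BIOS date is "MM/DD/YYYY"; pull out the year. */
        const char *date = dmi_get_system_info(DMI_BIOS_DATE);
        unsigned long year = 0;

        if (date && strlen(date) >= 10)
                year = simple_strtoul(date + 6, NULL, 10);

        /* Vista-era firmware that does not explicitly veto MSI. */
        return year >= 2007 &&
               !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_MSI);
}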

> This patch should be safe. The anecdotal evidence is that when dealing
> with MSI the Intel chipsets just work. If we find some buggy ones
> changing the list won't be hard.

The FADT bit should probably be checked anyway.

-Andi


2007-05-25 05:44:38

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Thu, May 24, 2007 at 09:31:57PM -0700, Andrew Morton wrote:
> On Thu, 24 May 2007 22:19:09 -0600 [email protected] (Eric W. Biederman) wrote:
>
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> > or MSI is implemented improperly. Since the normal IRQ routing
> > mechanism seems to works even when MSI does not, this is a bad default
> > and causes non-functioning systems for no good reason.
...
> Yup.
>
> Do we have a feel for how much performace we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

Rick Jones (HP, aka Mr Netperf.org) just recently posted some data
that happened to compare the two. I've clipped out the two relevant lines below:

http://lists.openfabrics.org/pipermail/general/2007-May/035709.html


                    Bulk Transfer                        "Latency"
                    Unidir               Bidir
Card                Mbit/s  SDx   SDr    Mbit/s  SDx   SDr    Tran/s  SDx    SDr
---------------------------------------------------------------------------
Myri10G IP 9k         9320  0.862 0.949   10950  1.00  0.86    19260  19.67  16.18 *
Myri10G IP 9k msi     9320  0.449 0.672   10840  0.63  0.62    19430  11.68  11.56

original posting explains the fields.
SDx (Service Demand on Transmit) is 2x more with MSI disabled.
SDr (Service Demand on RX) is ~50% higher with MSI disabled.
Ditto for latency metrics.

ISTR seeing ~5-10% difference on tg3 NICs and ~20% with PCI-X
infiniband (all on HP ZX1 chip, bottleneck was PCI-X bus). When I posted
a tg3 patch to linux-net (which got rejected because of tg3 HW bugs), I
did NOT include any performance numbers like I thought I did. :(

hth,
grant

2007-05-25 05:51:59

by Andi Kleen

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.


> Do we have a feel for how much performace we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

At least on 10Gb ethernet it is a significant difference; you usually
cannot go anywhere near line speed without MSI.

I suspect it is visible on high performance / multi-gigabit NICs too.

-Andi

2007-05-25 05:58:57

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Greg KH <[email protected]> writes:

> On Thu, May 24, 2007 at 10:19:09PM -0600, Eric W. Biederman wrote:
>>
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly. Since the normal IRQ routing
>> mechanism seems to works even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>>
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known good systems. I am seeding that list with the set of
>> chipsets with an enabled hypertransport MSI mapping capability. Which
>> is as close as I can come to an generic MSI enable. So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>>
>> People can still enable MSI on a per bus level for testing by writing
>> to sysfs so discovering chipsets that actually work (assuming we are
>> using modular drivers) should be pretty straight forward.
>
> Originally I would have thought this would be a good idea, but now that
> Vista is out, which supports MSI, I don't think we are going to need
> this in the future. All new chipsets should support MSI fine and this
> table will only grow in the future, while the blacklist should not need
> to have many new entries added to it.
>
> So I don't think this is a good idea, sorry.

For me seeing is believing. I don't see that yet.

What I see is myself pointed at a growing pile of bug reports
of chipsets where MSI does not work.

What I see is a steadily growing black list that looks like it should
include every non-Intel x86 pci-express chipset, despite the fact
that it requires skill to show a chipset does not support MSI
properly and to generate the patch, and that MSI I/O devices are
still fairly rare.

Meanwhile I have written in a day something that does nearly the
equivalent of our current black list. A white list looks
much easier to maintain.

Further, if we want to invert the list for a given vendor that we trust,
I don't think it would be hard to write an inverse quirk that enables
MSI on chipsets that are new enough, for some definition of new enough.
In particular it probably makes sense to do that for Intel pci-express
chipsets.


In part I have a problem with the current black list because a number
of the entries appear to be flat out hacks.

When you add to this the fact that MSI can be implemented on simple
pci devices, and those pci devices can potentially be plugged into
old machines (say someone buys a new pci card as an upgrade), I
don't see how we can successfully set up a black list of every
historical chipset that supports pci.

Further there are a number of outstanding bugs that are there
simply because msi is currently enabled by default on chipsets
that don't support it.


So I see the current situation as a maintenance disaster. People are
not stepping up to the plate fast enough to grow the black list to the
set of chipsets and motherboards that don't implement MSI, don't
implement MSI correctly, or only sometimes implement MSI correctly.

So Greg, if you want to volunteer to comb through all of the existing
pci express chipsets, figure out what is needed to test to see
if they are set up correctly, black list them if they are not
set up correctly, and unconditionally black list them if we don't
support MSI, more power to you.

Right now I want code that works, and this is the best path I can
see to that.

Eric

2007-05-25 06:01:29

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Michael Ellerman <[email protected]> writes:

> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly. Since the normal IRQ routing
>> mechanism seems to works even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>>
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known good systems. I am seeding that list with the set of
>> chipsets with an enabled hypertransport MSI mapping capability. Which
>> is as close as I can come to an generic MSI enable. So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>
> I guess this is a good idea for random x86 machines. On powerpc I think
> we'll just turn it on for every bus, and let the existing per-platform
> logic decide.

Yep. That is pretty much what I expected.

Since you already have to detect how to implement the MSI methods
you need a separate white list anyway.

Just a side note: this only needs to be enabled for the pci root
busses.
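
As a minimal sketch of what that could look like for an arch that
already trusts its platform MSI setup (illustrative only, not part of
these patches): pci_find_next_bus() walks exactly the root buses, and
pci_msi_check_device() stops at the first parent bus that has the
flag set, so flagging the roots is enough.

#include <linux/init.h>
#include <linux/pci.h>

static void __init arch_mark_root_buses_msi(void)
{
        struct pci_bus *bus = NULL;

        /* Mark every root bus as able to deliver MSI. */
        while ((bus = pci_find_next_bus(bus)) != NULL)
                bus->bus_flags |= PCI_BUS_FLAGS_MSI;
}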

Eric

2007-05-25 06:12:04

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI.

Andi Kleen <[email protected]> writes:

> On Friday 25 May 2007 06:26:50 Eric W. Biederman wrote:
>>
>> This patch is the result of a quick survey of the Intel chipset
>> documents. I took a quick look in the document to see if the chipset
>> supported MSI and if so I looked through to find the vendor and device
>> id of device 0 function 0 of the chipset and added a quirk for that
>> device id if I it was not a duplicate.
>
> It would be better to look for any PCI bridge. Sometimes there are
> different PCI bridges around (e.g. external PCI-X bridges on HT systems)
> which might need own quirks

Maybe, but pci-pci bridges should have no problems with MSI traffic
because at that level MSI is just a normal DMA write going upstream.

The AMD 8131 hypertransport to PCI-X bridge only failed because at the
connection to hypertransport it did not implement the hypertransport
msi mapping capability and the associated logic to convert an MSI
irq into a hypertransport IRQ.

I currently have a quirk that looks for any hypertransport msi mapping
capability and only enables MSI on the downstream bus of the bridge.
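
For reference, a sketch of that capability test, assuming the generic
HT capability walker and the HT_MSI_* offsets from <linux/pci_regs.h>;
the real msi_ht_cap_enabled() in the patch may differ in detail (for
instance by walking all mapping capabilities rather than just the
first):

#include <linux/pci.h>
#include <linux/pci_regs.h>

static int ht_msi_mapping_enabled(struct pci_dev *dev)
{
        int pos = pci_find_ht_capability(dev, HT_CAPTYPE_MSI_MAPPING);
        u8 flags;

        if (!pos)
                return 0;
        if (pci_read_config_byte(dev, pos + HT_MSI_FLAGS, &flags))
                return 0;
        /* The mapping capability exists; is its enable bit set? */
        return !!(flags & HT_MSI_FLAGS_ENABLE);
}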

> Also in the x86 world Microsoft defined a FADT ACPI flag that MSI doesn't
> work for Vista. It might make sense to do
>
> if (dmi year >= 2007 && FADT.msi_disable not set) assume it works

Sounds reasonable to me, if it happens to work reliably.

>> This patch should be safe. The anecdotal evidence is that when dealing
>> with MSI the Intel chipsets just work. If we find some buggy ones
>> changing the list won't be hard.
>
> The FADT bit should be probably checked anyways.

Sure, if we have a way to check I have no problem, although I tend to
trust the hardware more.

Eric

2007-05-25 06:40:35

by David Miller

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

From: Michael Ellerman <[email protected]>
Date: Fri, 25 May 2007 15:14:10 +1000

> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> > or MSI is implemented improperly. Since the normal IRQ routing
> > mechanism seems to works even when MSI does not, this is a bad default
> > and causes non-functioning systems for no good reason.
> >
> > So this patch inverts the sense of the MSI bus flag to only enable
> > MSI on known good systems. I am seeding that list with the set of
> > chipsets with an enabled hypertransport MSI mapping capability. Which
> > is as close as I can come to an generic MSI enable. So for actually
> > using MSI this patch is a regression, but for just having MSI enabled
> > in the kernel by default things should just work with this patch
> > applied.
>
> I guess this is a good idea for random x86 machines. On powerpc I think
> we'll just turn it on for every bus, and let the existing per-platform
> logic decide.

I think I'll turn it on always on sparc64.

2007-05-25 14:42:46

by Chuck Ebbert

[permalink] [raw]
Subject: Re: [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI.

On 05/25/2007 02:10 AM, Eric W. Biederman wrote:
>>> This patch should be safe. The anecdotal evidence is that when dealing
>>> with MSI the Intel chipsets just work. If we find some buggy ones
>>> changing the list won't be hard.
>> The FADT bit should be probably checked anyways.
>
> Sure if we have a way to check I have no problem, although I tend to
> trust the hardware more.
>

Already in 2.6.22:

http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=f8993aff8b4de0317c6e081802ca5c86c449fef2
Commit: f8993aff8b4de0317c6e081802ca5c86c449fef2
Parent: a23cf14b161b8deeb0f701d577a0e8be6365e247
Author: Shaohua Li <[email protected]>
AuthorDate: Wed Apr 25 11:05:12 2007 +0800
Committer: Len Brown <[email protected]>
CommitDate: Wed Apr 25 01:13:47 2007 -0400

ACPI: Disable MSI on request of FADT

2007-05-25 15:20:02

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Greg KH <[email protected]> writes:

> Originally I would have thought this would be a good idea, but now that
> Vista is out, which supports MSI, I don't think we are going to need
> this in the future. All new chipsets should support MSI fine and this
> table will only grow in the future, while the blacklist should not need
> to have many new entries added to it.
>
> So I don't think this is a good idea, sorry.

- The current situation is broken.
- In-spec hardware does not require MSI to generate interrupts,
  which leaves enabling MSI optional.

Do you have a better idea to solve the current brokenness?

MSI appears to have enough problems that enabling it in a kernel
that is supposed to run lots of different hardware (like a distro
kernel) is a recipe for disaster.

Eric

2007-05-25 15:28:54

by Chuck Ebbert

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On 05/25/2007 11:17 AM, Eric W. Biederman wrote:
>
> MSI appears to have enough problems that enabling it in a kernel
> that is supposed to run lots of different hardware (like a distro
> kernel) is a recipe for disaster.

Ubuntu and Fedora have disabled it and added a "pci=msi" option
to enable it if required.

2007-05-25 15:40:48

by Roland Dreier

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

> - In spec hardware does not require MSI to generate interrupts
> Which leaves enabling MSI optional.

Actually at least the Qlogic/Pathscale PCI Express ipath adapters
cannot generate INTx interrupts -- they definitely do require MSI to
operate.

- R.

2007-05-25 15:42:48

by Roland Dreier

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

> > - In spec hardware does not require MSI to generate interrupts
> > Which leaves enabling MSI optional.
>
> Actually at least the Qlogic/Pathscale PCI Express ipath adapters
> cannot generate INTx interrupts -- they definitely do require MSI to
> operate.

Oh yeah... when I first found out about this, I rechecked the PCI
Express spec and found that in fact legacy INTx interrupts are
optional. So the ipath adapters that require MSI do conform to the
spec.

- R.

2007-05-25 16:12:13

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Roland Dreier <[email protected]> writes:

> > > - In spec hardware does not require MSI to generate interrupts
> > > Which leaves enabling MSI optional.
> >
> > Actually at least the Qlogic/Pathscale PCI Express ipath adapters
> > cannot generate INTx interrupts -- they definitely do require MSI to
> > operate.
>
> Oh yeah... when I first found out about this, I rechecked the PCI
> Express spec and found that in fact legacy INTx interrupts are
> optional. So the ipath adapters that require MSI do conform to the
> spec.

Hmm...

I find in section 6.1:
> In addition to PCI INTx compatible interrupt emulation, PCI Express
> requires support of MSI or MSI-X or both.
Which suggests that INTx support is required.

I do not find any wording that suggests the opposite.
I do see it stated that it is intended to EOL support for INTx at
some point.

Where did you see it mentioned that INTx was optional?

I do see it clearly stating that MSI is the preferred mechanism for
pci express.

Eric

2007-05-25 16:55:04

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI.

Chuck Ebbert <[email protected]> writes:

> http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=f8993aff8b4de0317c6e081802ca5c86c449fef2
> Commit: f8993aff8b4de0317c6e081802ca5c86c449fef2
> Parent: a23cf14b161b8deeb0f701d577a0e8be6365e247
> Author: Shaohua Li <[email protected]>
> AuthorDate: Wed Apr 25 11:05:12 2007 +0800
> Committer: Len Brown <[email protected]>
> CommitDate: Wed Apr 25 01:13:47 2007 -0400
>
> ACPI: Disable MSI on request of FADT

Interesting. I'm going to have to read up on this bit.
Perhaps it is enough.

Eric

2007-05-25 20:09:34

by David Schwartz

Subject: RE: [PATCH 1/2] msi: Invert the sense of the MSI enables.


> Hmm...

> I find in section 6.1:
> > In addition to PCI INTx compatible interrupt emulation, PCI Express
> > requires support of MSI or MSI-X or both.

> Which suggests that INTx support is required.

Unfortunately, this can be equally well read to suggest that MSI/MSI-X is
not required, but that MSI/MSI-X is required in addition if you choose to
support PCI INTx. Section 6.1 is ambiguous and you have to look elsewhere to
figure out if PCI INTx is required or not.

I think the most natural reading of this is that if you choose to support
PCI INTx compatible interrupt emulation, PCI Express requires support of MSI
or MSI-X or both in addition.

However, other sections are not ambiguous:

"For legacy compatibility, PCI Express provides a PCI INTx emulation
mechanism to signal
interrupts to the system interrupt controller (typically part of the Root
Complex). This
mechanism is compatible with existing PCI software, and provides the same
level and type
service as corresponding PCI interrupt signaling mechanism and is
independent of system
interrupt controller specifics. This legacy compatibility mechanism allows
boot device
support without requiring complex BIOS-level interrupt configuration/control
service stacks.
It virtualizes PCI physical interrupt signals by using an in-band signaling
mechanism."

This seems to make it pretty clear that if a device requires MSI/MSI-X, it
is broken. Devices are supposed to work even with PCI drivers that are not
smart enough to support MSI/MSI-X.

DS


2007-05-25 20:28:59

by Jonathan Lundell

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On May 24, 2007, at 10:51 PM, Andi Kleen wrote:

>> Do we have a feel for how much performace we're losing on those
>> systems which _could_ do MSI, but which will end up defaulting
>> to not using it?
>
> At least on 10GB ethernet it is a significant difference; you usually
> cannot go anywhere near line speed without MSI
>
> I suspect it is visible on high performance / multiple GB NICs too.

Why would that be? As the packet rate goes up and NAPI polling kicks
in, wouldn't MSI make less and less difference?

I like the fact that MSI gives us finer control over CPU affinity
than many INTx implementations, but that's a different issue.

2007-05-25 20:35:28

by Greg KH

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Fri, May 25, 2007 at 09:17:35AM -0600, Eric W. Biederman wrote:
> Greg KH <[email protected]> writes:
>
> > Originally I would have thought this would be a good idea, but now that
> > Vista is out, which supports MSI, I don't think we are going to need
> > this in the future. All new chipsets should support MSI fine and this
> > table will only grow in the future, while the blacklist should not need
> > to have many new entries added to it.
> >
> > So I don't think this is a good idea, sorry.
>
> - The current situation is broken
> - In spec hardware does not require MSI to generate interrupts
> Which leaves enabling MSI optional.
>
> Do you have a better idea to solve the current brokenness?
>
> MSI appears to have enough problems that enabling it in a kernel
> that is supposed to run lots of different hardware (like a distro
> kernel) is a recipe for disaster.

Oh, I agree it's a major pain in the ass at times...

But I'm real hesitant to change things this way. We'll get reports of
people who used to have MSI working, and now it will not (like all AMD
chipsets). That's a regression...

Perhaps we can trigger off of the same flag that Vista uses like Andi
suggested? That's one way to "know" that the hardware works, right?

For non-x86 arches, they all seem to want to always enable MSI as they
don't have to deal with as many broken chipsets, if any at all. So for
them, we'd have to just whitelist the whole arch. Does that really make
sense?

And again, over time, like years, this list is going to grow way beyond
a manageable thing, especially as any new chipset that comes out in 2009
is going to have working MSI, right? I think our blacklist is easier to
manage over time, while causing a problem for some users in trying to
determine their broken hardware that they currently have.

It's a trade off, and I'd like to choose the one that, over the long
term, causes the least amount of work and maintenance. I think the
current blacklist meets that goal.

thanks,

greg k-h

2007-05-25 21:07:57

by Roland Dreier

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

> > In addition to PCI INTx compatible interrupt emulation, PCI Express
> > requires support of MSI or MSI-X or both.
> Which suggests that INTx support is required.
>
> I do not find any wording that suggest the opposite.
> I do see it stated that it is intended to EOL support for INTx at
> some point.
>
> Where did you see it mentioned that INTx was optional?

I don't see any requirement that a device that generates MSI
interrupts must also be able to signal the same interrupts via INTx.
The spec explicitly says:

"All PCI Express device Functions that are capable of generating
interrupts must support MSI or MSI-X or both."

but there is no corresponding explicit requirement that legacy INTx
mode be supported, so it certainly seems permitted for a device not to
generate INTx interrupts. In fact as you alluded to, the spec says,

"The legacy INTx emulation mechanism may be deprecated in a future
version of this specification."

and I wouldn't think the intention would be for one version of the
spec to *require* something that is planned on being deprecated later.

And the Pathscale guys were pretty confident that their device was
compliant with the PCIe spec.

- R.

2007-05-25 21:09:15

by Eric W. Biederman

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Greg KH <[email protected]> writes:
>> MSI appears to have enough problems that enabling it in a kernel
>> that is supposed to run lots of different hardware (like a distro
>> kernel) is a recipe for disaster.
>
> Oh, I agree it's a major pain in the ass at times...
>
> But I'm real hesitant to change things this way. We'll get reports of
> people who used to have MSI working, and now it will not (like all AMD
> chipsets). That's a regression...

You saw my quirk that enables MSI if you happen to have a
hypertransport msi mapping capability. That should take care of
all hypertransport-compliant AMD chipsets.

So at least 90% of what we have now should work.

> Perhaps we can trigger off of the same flag that Vista uses like Andi
> suggested? That's one way to "know" that the hardware works, right?

A little. It is redefining the problem as squarely a BIOS writer's
problem, and possibly we want to do that in the presence of pci-express.

> For non-x86 arches, they all seem to want to always enable MSI as they
> don't have to deal with as many broken chipsets, if any at all. So for
> them, we'd have to just whitelist the whole arch. Does that really make
> sense?

Non-x86 (unless I'm confused) already has white lists or some
equivalent, because they can't use generic code for configuring MSI:
non-x86 does not have a standard MSI target window with a standard
meaning for the bits. So non-x86 has its own set of arch-specific
mechanisms to handle this, and they just don't want generic code
getting in the way.

So it may make sense to make the default x86-specific.

> And again, over time, like years, this list is going to grow way beyond
> a managable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right?

I haven't a clue. I know we are in the teething pain stage
now, which just generally makes things difficult.

> I think our blacklist is easier to
> manage over time, while causing a problem for some users in trying to
> determine their broken hardware that they currently have.

Possibly. It just doesn't seem straightforward to add something
safely to our blacklist.

> It's a trade off, and I'd like to choose the one that over the long
> term, causes the least ammount of work and maintaiblity. I think the
> current blacklist meets that goal.

A reasonable goal. I will come back to this after the long holiday
weekend here and see what makes sense.

I think for most of Intel I can reduce my test to:
        if (bus == 0 && device == 0 && function == 0 && vendor == Intel &&
            has a pci express capability) {
                enable msi on all busses();
        }

At which point I don't think we will need to do much maintenance.
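
Rendered as a quirk, that test might look roughly like the sketch
below. This is only an illustration of the idea under the assumptions
stated above (an Intel host bridge at 00:00.0 carrying a PCI Express
capability); "enable msi on all busses" is approximated by flagging
the root buses, which is all pci_msi_check_device() needs:

#include <linux/pci.h>

static void __devinit quirk_intel_pcie_msi(struct pci_dev *dev)
{
        struct pci_bus *bus = NULL;

        /* Only act on the host bridge at 00:00.0. */
        if (dev->bus->number != 0 || dev->devfn != PCI_DEVFN(0, 0))
                return;
        if (!pci_find_capability(dev, PCI_CAP_ID_EXP))
                return;

        while ((bus = pci_find_next_bus(bus)) != NULL)
                bus->bus_flags |= PCI_BUS_FLAGS_MSI;
}
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, quirk_intel_pcie_msi);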

Eric

2007-05-25 21:17:25

by Roland Dreier

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

> I think for most of Intel I can reduce my test to:
> If (bus == 0 , device == 0, function == 0 && vendor == Intel &&
> has a pci express capability) {
> Enable msi on all busses().
> }

MSI was working on every Intel PCI-X chipset I ever saw too...

- R.

2007-05-25 21:47:04

by Greg KH

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Fri, May 25, 2007 at 03:06:22PM -0600, Eric W. Biederman wrote:
> Greg KH <[email protected]> writes:
> > It's a trade off, and I'd like to choose the one that over the long
> > term, causes the least ammount of work and maintaiblity. I think the
> > current blacklist meets that goal.
>
> A reasonable goal. I will come back to this after the long holiday
> weekend here and see what makes sense.

Ok, if you still think after looking into it some more that it's still
needed, please resend the patches, and I'll reconsider it.

thanks,

greg k-h

2007-05-25 21:47:55

by Brice Goglin

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

Eric W. Biederman wrote:
> @@ -1677,43 +1650,16 @@ static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
> return 0;
> }
>
> -/* Check the hypertransport MSI mapping to know whether MSI is enabled or not */
> +/* Enable MSI on hypertransport chipsets supporting MSI */
> static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
> {
> - if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
> - printk(KERN_WARNING "PCI: MSI quirk detected. "
> - "MSI disabled on chipset %s.\n",
> - pci_name(dev));
> - dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
> + if (dev->subordinate && msi_ht_cap_enabled(dev)) {
> + printk(KERN_INFO "PCI: Enabled MSI on chipset %s.\n",
> + pci_name(dev));
> + dev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
> }
> }
> -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
> - quirk_msi_ht_cap);
> +DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);
>
> -/* The nVidia CK804 chipset may have 2 HT MSI mappings.
> - * MSI are supported if the MSI capability set in any of these mappings.
> - */
> -static void __devinit quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
> -{
> - struct pci_dev *pdev;
> -
> - if (!dev->subordinate)
> - return;
>
> - /* check HT MSI cap on this chipset and the root one.
> - * a single one having MSI is enough to be sure that MSI are supported.
> - */
> - pdev = pci_get_slot(dev->bus, 0);
> - if (!pdev)
> - return;
> - if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
> - printk(KERN_WARNING "PCI: MSI quirk detected. "
> - "MSI disabled on chipset %s.\n",
> - pci_name(dev));
> - dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
> - }
> - pci_dev_put(pdev);
> -}
> -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
> - quirk_nvidia_ck804_msi_ht_cap);
>

Are you sure that calling quirk_msi_ht_cap() on all devices will really
replace my nVidia CK804 specific quirk above?

I haven't looked at all this for a while, but if I remember correctly,
the PCI hierarchy with an AMD8131 and a CK804 looks like the following.

-+-[08]-+-0a.0-[09]--
 \-[00]-+-00.0
        +-0e.0-[02]--


The HT MSI mapping of the CK804 may be on either device 00.0 (10de:005e)
or 0e.0 (10de:005d). The devices that are physically behind the CK804
chipset are on bus 02.

To get MSI enabled for these devices, the MSI flag should be set either
on bus 00 (looks impossible here) or on bus 02 (if the HT MSI mapping is
found on 0e.0). However, if the MSI mapping is found on device 00.0, I
don't see how your code could enable MSI on bus 02.

Brice

2007-05-26 06:43:25

by Grant Grundler

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Fri, May 25, 2007 at 01:35:01PM -0700, Greg KH wrote:
...
> And again, over time, like years, this list is going to grow way beyond
> a managable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right? I think our blacklist is easier to
> manage over time, while causing a problem for some users in trying to
> determine their broken hardware that they currently have.
>
> It's a trade off, and I'd like to choose the one that over the long
> term, causes the least ammount of work and maintaiblity. I think the
> current blacklist meets that goal.

Before you change your mind again, I agree with the above.
I'm convinced MSI support will only get better and not worse.

The reason is that all high speed interconnects (IB, 10gige) and many
storage controllers already support MSI because it's more efficient
and will benchmark better at the very least. Server vendors won't be
able to sell their boxes if MSI isn't working.

Desktop has never been as sensitive to IO performance and I don't
know what the situation currently is with laptop gige NICs and
gfx cards (gfx cards use MSI? I don't see it in drivers/video).

thanks,
grant

> thanks,
>
> greg k-h

2007-05-26 06:52:37

by Grant Grundler

Subject: Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.

On Fri, May 25, 2007 at 01:16:57PM -0700, Jonathan Lundell wrote:
> On May 24, 2007, at 10:51 PM, Andi Kleen wrote:
>
> >>Do we have a feel for how much performace we're losing on those
> >>systems which _could_ do MSI, but which will end up defaulting
> >>to not using it?
> >
> >At least on 10GB ethernet it is a significant difference; you usually
> >cannot go anywhere near line speed without MSI
> >
> >I suspect it is visible on high performance / multiple GB NICs too.
>
> Why would that be? As the packet rate goes up and NAPI polling kicks
> in, wouldn't MSI make less and less difference?

CPUs are so fast now that we never even get into polling mode.
So MSI makes even more of a difference.

davem and jamal hadi salim were, years ago, already seeing workloads
(packet rates) where the CPU utilization would peak at packet rates
that were just high enough for NAPI to occasionally be used.
IIRC, Jamal's OLS 2005 or 2006 paper talks about this behavior.


> I like the fact that MSI gives us finer control over CPU affinity
> than many INTx implementations, but that's a different issue.

Yes, I agree.

thanks,
grant