2024-04-04 16:48:43

by kernel test robot

Subject: Re: [PATCH v9 3/4] PCI: brcmstb: Set downstream maximum {no-}snoop LTR values

Hi Jim,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e]

url: https://github.com/intel-lab-lkp/linux/commits/Jim-Quinlan/dt-bindings-PCI-brcmstb-Add-property-brcm-clkreq-mode/20240404-054118
base: 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e
patch link: https://lore.kernel.org/r/20240403213902.26391-4-james.quinlan%40broadcom.com
patch subject: [PATCH v9 3/4] PCI: brcmstb: Set downstream maximum {no-}snoop LTR values
config: arm64-defconfig (https://download.01.org/0day-ci/archive/20240405/[email protected]/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240405/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> drivers/pci/controller/pcie-brcmstb.c:728:6: warning: no previous prototype for 'brcm_set_downstream_devs_ltr_max' [-Wmissing-prototypes]
728 | void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/brcm_set_downstream_devs_ltr_max +728 drivers/pci/controller/pcie-brcmstb.c

727
> 728 void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
729 {
730 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
731 u16 ltr_fmt = FIELD_PREP(GENMASK(9, 0), BRCM_LTR_MAX_VALUE)
732 | FIELD_PREP(GENMASK(12, 10), BRCM_LTR_MAX_SCALE)
733 | GENMASK(15, 15);
734
735 if (bridge->native_ltr)
736 pci_walk_bus(bridge->bus, brcm_set_dev_ltr_max, &ltr_fmt);
737 }
738

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-04-04 16:49:45

by kernel test robot

Subject: Re: [PATCH v9 3/4] PCI: brcmstb: Set downstream maximum {no-}snoop LTR values

Hi Jim,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e]

url: https://github.com/intel-lab-lkp/linux/commits/Jim-Quinlan/dt-bindings-PCI-brcmstb-Add-property-brcm-clkreq-mode/20240404-054118
base: 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e
patch link: https://lore.kernel.org/r/20240403213902.26391-4-james.quinlan%40broadcom.com
patch subject: [PATCH v9 3/4] PCI: brcmstb: Set downstream maximum {no-}snoop LTR values
config: arm-defconfig (https://download.01.org/0day-ci/archive/20240405/[email protected]/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project.git f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240405/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> drivers/pci/controller/pcie-brcmstb.c:728:6: warning: no previous prototype for function 'brcm_set_downstream_devs_ltr_max' [-Wmissing-prototypes]
void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
^
drivers/pci/controller/pcie-brcmstb.c:728:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
^
static
1 warning generated.


vim +/brcm_set_downstream_devs_ltr_max +728 drivers/pci/controller/pcie-brcmstb.c

727
> 728 void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
729 {
730 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
731 u16 ltr_fmt = FIELD_PREP(GENMASK(9, 0), BRCM_LTR_MAX_VALUE)
732 | FIELD_PREP(GENMASK(12, 10), BRCM_LTR_MAX_SCALE)
733 | GENMASK(15, 15);
734
735 if (bridge->native_ltr)
736 pci_walk_bus(bridge->bus, brcm_set_dev_ltr_max, &ltr_fmt);
737 }
738

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-04-05 01:09:07

by Cyril Brulebois

Subject: Re: [PATCH v9 0/4] PCI: brcmstb: Configure appropriate HW CLKREQ# mode

Hi Jim,

Jim Quinlan <[email protected]> (2024-04-03):
> v9 -- v8 was setting an internal bus timeout to accommodate large L1 exit
> latencies. After meeting the PCIe HW team it was revealed that the
> HW default timeout value was set low for the purposes of HW debugging
> convenience; for nominal operation it needs to be set to a higher
> value independent of this submission's purpose. This is now a
> separate commit.
>
> -- With v8, Bjorn asked what was preventing a device from exceeding the
> time required for the above internal bus timeout. The answer to this
> is for us to set the endpoints' max latency {no-,}snoop value to
> something below this internal timeout value. If the endpoint
> respects this value as it should, it will not send an LTR request
> with a larger latency value and not put itself in a situation
> that requires more latency than is possible for the platform.
>
> Typically, ACPI or FW sets these max latency values. In most of our
> systems we do not have this happening so it is up to the RC driver to
> set these values in the endpoint devices. If the endpoints already
> have non-zero values that are lower than what we are setting, we let
> them be, as it is possible ACPI or FW set them and knows something
> that we do not.
>
> -- The "clkreq" commit has only been changed to remove the code that was
> setting the timeout value, as this code is now its own commit.

Given the bot's feedback, I took the liberty of running tests on your
patch series, except with an extra “static” keyword added. On my build
system, gcc 12 wasn't complaining about it, but I didn't spend time
trying to find the right options, or trying a switch to clang to confirm
the before/after situation:

-void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)
+static void brcm_set_downstream_devs_ltr_max(struct brcm_pcie *pcie)


Anyway, this is still:

Tested-by: Cyril Brulebois <[email protected]>


Test setup:
-----------

- using a $CM with the 20230111 EEPROM
- on the same CM4 IO Board
- with a $PCIE board (PCIe to multiple USB ports)
- and the same Samsung USB flash drive + Logitech keyboard.

where $CM is one of:

- CM4 Lite Rev 1.0
- CM4 8/32 Rev 1.0
- CM4 4/32 Rev 1.1

and $PCIE is one of:

- SupaHub PCE6U1C-R02, VER 006
- SupaHub PCE6U1C-R02, VER 006S
- Waveshare VIA VL805/806-based


Results:
--------

1. Given this is already v9, and given I don't see how this could have
possibly changed, I didn't build or test an unpatched kernel, which
I would still expect to produce either a successful boot *without*
seeing the devices plugged into the PCIe-to-USB board, or the dreaded
SError, in most cases.

2. With a patched kernel (v6.7-562-g9f8413c4a66f2 + this series +
“static” in front of brcm_set_downstream_devs_ltr_max()), for all
$CM/$PCIE combinations, I'm getting a system that boots, sees the
flash drive, and gives decent read/write performance on it (plus a
functional keyboard).


Cheers,
--
Cyril Brulebois ([email protected]) <https://debamax.com/>
D-I release manager -- Release team member -- Freelance Consultant



2024-05-01 00:08:10

by Jim Quinlan

Subject: Re: [PATCH v9 0/4] PCI: brcmstb: Configure appropriate HW CLKREQ# mode

On Wed, Apr 3, 2024 at 5:39 PM Jim Quinlan <[email protected]> wrote:
>
> v9 -- v8 was setting an internal bus timeout to accommodate large L1 exit
> latencies. After meeting the PCIe HW team it was revealed that the
> HW default timeout value was set low for the purposes of HW debugging
> convenience; for nominal operation it needs to be set to a higher
> value independent of this submission's purpose. This is now a
> separate commit.

Bjorn,

Did you have some time to look at this? Do you have any comments or questions?

Regards,
Jim Quinlan
Broadcom STB/CM
>
> -- With v8, Bjorn asked what was preventing a device from exceeding the
> time required for the above internal bus timeout. The answer to this
> is for us to set the endpoints' max latency {no-,}snoop value to
> something below this internal timeout value. If the endpoint
> respects this value as it should, it will not send an LTR request
> with a larger latency value and not put itself in a situation
> that requires more latency than is possible for the platform.
>
> Typically, ACPI or FW sets these max latency values. In most of our
> systems we do not have this happening so it is up to the RC driver to
> set these values in the endpoint devices. If the endpoints already
> have non-zero values that are lower than what we are setting, we let
> them be, as it is possible ACPI or FW set them and knows something
> that we do not.
>
> -- The "clkreq" commit has only been changed to remove the code that was
> setting the timeout value, as this code is now its own commit.
>
> v8 -- Un-advertise L1SS capability when in "no-l1ss" mode (Bjorn)
> -- Squashed last two commits of v7 (Bjorn)
> -- Fix DT binding description text wrapping (Bjorn)
> -- Fix incorrect Spec reference (Bjorn)
> s/PCIe Spec/PCIe Express Mini CEM 2.1 specification/
> -- Text substitutions (Bjorn)
> s/WRT/With respect to/
> s/Tclron/T_CLRon/
>
> v7 -- Manivannan Sadhasivam suggested (a) making the property look like a
> network phy-mode and (b) keeping the code simple (not counting clkreq
> signal appearances, un-advertising capabilities, etc). This is
> what I have done. The property is now "brcm,clkreq-mode" and
> the values may be one of "safe", "default", and "no-l1ss". The
> default setting is to employ the most capable power savings mode.
>
> v6 -- No code has been changed.
> -- Changed commit subject and comment in "#PERST" commit (Bjorn, Cyril)
> -- Changed sign-off and author email address for all commits.
> This was due to a change in Broadcom's upstreaming policy.
>
> v5 -- Remove DT property "brcm,completion-timeout-us" from
> "DT bindings" commit. Although this error may be reported
> as a completion timeout, its cause was traced to an
> internal bus timeout which may occur even when there is
> no PCIe access being processed. We set a timeout of four
> seconds only if we are operating in "L1SS CLKREQ#" mode.
> -- Correct CEM 2.0 reference provided by HW engineer,
> s/3.2.5.2.5/3.2.5.2.2/ (Bjorn)
> -- Add newline to dev_info() string (Stefan)
> -- Change variable rval to unsigned (Stefan)
> -- s/implementaion/implementation/ (Bjorn)
> -- s/superpowersave/powersupersave/ (Bjorn)
> -- Slightly modify message on "PERST#" commit.
> -- Rebase to torvalds master
>
> v4 -- New commit that asserts PERST# for 2711/RPi SOCs at PCIe RC
> driver probe() time. This is done in Raspbian Linux and its
> absence may be the cause of a failing test case.
> -- New commit that removes stale comment.
>
> v3 -- Rewrote commit msgs and comments referring to panics if L1SS
> is enabled/disabled; the code snippet that unadvertises L1SS
> eliminates the panic scenario. (Bjorn)
> -- Add reference for "400ns of CLKREQ# assertion" blurb (Bjorn)
> -- Put binding names in DT commit Subject (Bjorn)
> -- Add a verb to a commit's subject line (Bjorn)
> -- s/accomodat(\w+)/accommodat$1/g (Bjorn)
> -- Rewrote commit msgs and comments referring to panics if L1SS
> is enabled/disabled; the code snippet that unadvertises L1SS
> eliminates the panic scenario. (Bjorn)
>
> v2 -- Changed binding property 'brcm,completion-timeout-msec' to
> 'brcm,completion-timeout-us'. (StefanW for standard suffix).
> -- Warn when clamping timeout value, and include clamped
> region in message. Also add min and max in YAML. (StefanW)
> -- Qualify description of "brcm,completion-timeout-us" so that
> it refers to PCIe transactions. (StefanW)
> -- Remove mention of Linux specifics in binding description. (StefanW)
> -- s/clkreq#/CLKREQ#/g (Bjorn)
> -- Refactor completion-timeout-us code to compare max and min to
> value given by the property (as opposed to the computed value).
>
> v1 -- The current driver assumes the downstream devices can
> provide CLKREQ# for ASPM. These commits accommodate devices
> w/ or w/o clkreq# and also handle L1SS-capable devices.
>
> -- The Raspbian Linux folks have already been using a PCIe RC
> property "brcm,enable-l1ss". These commits use the same
> property, in a backward-compatible manner, and the implementation
> adds more detail and also automatically identifies devices w/o
> a clkreq# signal, i.e. most devices plugged into an RPi CM4
> IO board.
>
> Jim Quinlan (4):
> dt-bindings: PCI: brcmstb: Add property "brcm,clkreq-mode"
> PCI: brcmstb: Set reasonable value for internal bus timeout
> PCI: brcmstb: Set downstream maximum {no-}snoop LTR values
> PCI: brcmstb: Configure HW CLKREQ# mode appropriate for downstream
> device
>
> .../bindings/pci/brcm,stb-pcie.yaml | 18 ++
> drivers/pci/controller/pcie-brcmstb.c | 161 +++++++++++++++++-
> 2 files changed, 170 insertions(+), 9 deletions(-)
>
>
> base-commit: 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e
> --
> 2.17.1
>



2024-05-06 22:33:39

by Bjorn Helgaas

Subject: Re: [PATCH v9 0/4] PCI: brcmstb: Configure appropriate HW CLKREQ# mode

On Tue, Apr 30, 2024 at 05:02:45PM -0400, Jim Quinlan wrote:
> On Wed, Apr 3, 2024 at 5:39 PM Jim Quinlan <[email protected]> wrote:
> >
> > v9 -- v8 was setting an internal bus timeout to accommodate large L1 exit
> > latencies. After meeting the PCIe HW team it was revealed that the
> > HW default timeout value was set low for the purposes of HW debugging
> > convenience; for nominal operation it needs to be set to a higher
> > value independent of this submission's purpose. This is now a
> > separate commit.
>
> Bjorn,
>
> Did you have some time to look at this? Do you have any comments or questions?

Sorry, I didn't realize you were waiting on me. I think Krzysztof W.
will ultimately take care of these.

I have some minor comments but overall I'm fine with this.

> > -- With v8, Bjorn asked what was preventing a device from exceeding the
> > time required for the above internal bus timeout. The answer to this
> > is for us to set the endpoints' max latency {no-,}snoop value to
> > something below this internal timeout value. If the endpoint
> > respects this value as it should, it will not send an LTR request
> > with a larger latency value and not put itself in a situation
> > that requires more latency than is possible for the platform.
> >
> > Typically, ACPI or FW sets these max latency values. In most of our
> > systems we do not have this happening so it is up to the RC driver to
> > set these values in the endpoint devices. If the endpoints already
> > have non-zero values that are lower than what we are setting, we let
> > them be, as it is possible ACPI or FW set them and knows something
> > that we do not.
> >
> > -- The "clkreq" commit has only been changed to remove the code that was
> > setting the timeout value, as this code is now its own commit.
> >
> > v8 -- Un-advertise L1SS capability when in "no-l1ss" mode (Bjorn)
> > -- Squashed last two commits of v7 (Bjorn)
> > -- Fix DT binding description text wrapping (Bjorn)
> > -- Fix incorrect Spec reference (Bjorn)
> > s/PCIe Spec/PCIe Express Mini CEM 2.1 specification/
> > -- Text substitutions (Bjorn)
> > s/WRT/With respect to/
> > s/Tclron/T_CLRon/
> >
> > v7 -- Manivannan Sadhasivam suggested (a) making the property look like a
> > network phy-mode and (b) keeping the code simple (not counting clkreq
> > signal appearances, un-advertising capabilities, etc). This is
> > what I have done. The property is now "brcm,clkreq-mode" and
> > the values may be one of "safe", "default", and "no-l1ss". The
> > default setting is to employ the most capable power savings mode.
> >
> > v6 -- No code has been changed.
> > -- Changed commit subject and comment in "#PERST" commit (Bjorn, Cyril)
> > -- Changed sign-off and author email address for all commits.
> > This was due to a change in Broadcom's upstreaming policy.
> >
> > v5 -- Remove DT property "brcm,completion-timeout-us" from
> > "DT bindings" commit. Although this error may be reported
> > as a completion timeout, its cause was traced to an
> > internal bus timeout which may occur even when there is
> > no PCIe access being processed. We set a timeout of four
> > seconds only if we are operating in "L1SS CLKREQ#" mode.
> > -- Correct CEM 2.0 reference provided by HW engineer,
> > s/3.2.5.2.5/3.2.5.2.2/ (Bjorn)
> > -- Add newline to dev_info() string (Stefan)
> > -- Change variable rval to unsigned (Stefan)
> > -- s/implementaion/implementation/ (Bjorn)
> > -- s/superpowersave/powersupersave/ (Bjorn)
> > -- Slightly modify message on "PERST#" commit.
> > -- Rebase to torvalds master
> >
> > v4 -- New commit that asserts PERST# for 2711/RPi SOCs at PCIe RC
> > driver probe() time. This is done in Raspbian Linux and its
> > absence may be the cause of a failing test case.
> > -- New commit that removes stale comment.
> >
> > v3 -- Rewrote commit msgs and comments referring to panics if L1SS
> > is enabled/disabled; the code snippet that unadvertises L1SS
> > eliminates the panic scenario. (Bjorn)
> > -- Add reference for "400ns of CLKREQ# assertion" blurb (Bjorn)
> > -- Put binding names in DT commit Subject (Bjorn)
> > -- Add a verb to a commit's subject line (Bjorn)
> > -- s/accomodat(\w+)/accommodat$1/g (Bjorn)
> > -- Rewrote commit msgs and comments referring to panics if L1SS
> > is enabled/disabled; the code snippet that unadvertises L1SS
> > eliminates the panic scenario. (Bjorn)
> >
> > v2 -- Changed binding property 'brcm,completion-timeout-msec' to
> > 'brcm,completion-timeout-us'. (StefanW for standard suffix).
> > -- Warn when clamping timeout value, and include clamped
> > region in message. Also add min and max in YAML. (StefanW)
> > -- Qualify description of "brcm,completion-timeout-us" so that
> > it refers to PCIe transactions. (StefanW)
> > -- Remove mention of Linux specifics in binding description. (StefanW)
> > -- s/clkreq#/CLKREQ#/g (Bjorn)
> > -- Refactor completion-timeout-us code to compare max and min to
> > value given by the property (as opposed to the computed value).
> >
> > v1 -- The current driver assumes the downstream devices can
> > provide CLKREQ# for ASPM. These commits accommodate devices
> > w/ or w/o clkreq# and also handle L1SS-capable devices.
> >
> > -- The Raspbian Linux folks have already been using a PCIe RC
> > property "brcm,enable-l1ss". These commits use the same
> > property, in a backward-compatible manner, and the implementation
> > adds more detail and also automatically identifies devices w/o
> > a clkreq# signal, i.e. most devices plugged into an RPi CM4
> > IO board.
> >
> > Jim Quinlan (4):
> > dt-bindings: PCI: brcmstb: Add property "brcm,clkreq-mode"
> > PCI: brcmstb: Set reasonable value for internal bus timeout
> > PCI: brcmstb: Set downstream maximum {no-}snoop LTR values
> > PCI: brcmstb: Configure HW CLKREQ# mode appropriate for downstream
> > device
> >
> > .../bindings/pci/brcm,stb-pcie.yaml | 18 ++
> > drivers/pci/controller/pcie-brcmstb.c | 161 +++++++++++++++++-
> > 2 files changed, 170 insertions(+), 9 deletions(-)
> >
> >
> > base-commit: 9f8413c4a66f2fb776d3dc3c9ed20bf435eb305e
> > --
> > 2.17.1
> >



2024-05-06 22:45:20

by Bjorn Helgaas

Subject: Re: [PATCH v9 2/4] PCI: brcmstb: Set reasonable value for internal bus timeout

On Wed, Apr 03, 2024 at 05:38:59PM -0400, Jim Quinlan wrote:
> HW initializes an internal bus timeout register to a small value for
> debugging convenience. Set this to something reasonable, i.e. in the
> vicinity of 10 msec.
>
> Signed-off-by: Jim Quinlan <[email protected]>
> ---
> drivers/pci/controller/pcie-brcmstb.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
> index f9dd6622fe10..e3480ca4cd57 100644
> --- a/drivers/pci/controller/pcie-brcmstb.c
> +++ b/drivers/pci/controller/pcie-brcmstb.c
> @@ -664,6 +664,21 @@ static int brcm_pcie_enable_msi(struct brcm_pcie *pcie)
> return 0;
> }
>
> +/*
> + * An internal HW bus timer value is set to a small value for debugging
> + * convenience. Set this to something reasonable, i.e. somewhere around
> + * 10ms.
> + */
> +static void brcm_extend_internal_bus_timeout(struct brcm_pcie *pcie, u32 nsec)
> +{
> + /* TIMEOUT register is two registers before RGR1_SW_INIT_1 */
> + const unsigned int REG_OFFSET = PCIE_RGR1_SW_INIT_1(pcie) - 8;
> + u32 timeout_us = nsec / 1000;
> +
> + /* Each unit in timeout register is 1/216,000,000 seconds */
> + writel(216 * timeout_us, pcie->base + REG_OFFSET);
> +}
> +
> /* The controller is capable of serving in both RC and EP roles */
> static bool brcm_pcie_rc_mode(struct brcm_pcie *pcie)
> {
> @@ -1059,6 +1074,9 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> return -ENODEV;
> }
>
> + /* Extend internal bus timeout to 8ms or so */
> + brcm_extend_internal_bus_timeout(pcie, SZ_8M);

The 216*usec is obviously determined by hardware, but the choice of
nsec for the interface, and converting to usec internally, seems
arbitrary; the caller could just as easily supply usec. Or do you
envision using this interface for timeouts < 1 usec?

"SZ_8M" seems a little unusual as a time measurement and doesn't give
a hint about the units. It's pretty common to use "8 * USEC_PER_MSEC"
or even "8 * NSEC_PER_MSEC" for things like this.

But it's fine with me as-is.

> if (pcie->gen)
> brcm_pcie_set_gen(pcie, pcie->gen);
>
> --
> 2.17.1
>



2024-05-06 23:20:42

by Bjorn Helgaas

Subject: Re: [PATCH v9 4/4] PCI: brcmstb: Configure HW CLKREQ# mode appropriate for downstream device

On Wed, Apr 03, 2024 at 05:39:01PM -0400, Jim Quinlan wrote:
> The Broadcom STB/CM PCIe HW core, which is also used in RPi SOCs, must be
> deliberately set by the PCIe RC HW into one of three mutually exclusive
> modes:
>
> "safe" -- No CLKREQ# expected or required, refclk is always provided. This
> mode should work for all devices but is not be capable of any refclk
> power savings.

s/refclk is always provided/the Root Port always supplies Refclk/

At least, I assume that's what this means? The Root Port always
supplies Refclk regardless of whether a downstream device deasserts
CLKREQ#?

The patch doesn't do anything to prevent aspm.c from setting
PCI_EXP_LNKCTL_CLKREQ_EN, so it looks like Linux may still set the
"Enable Clock Power Management" bit in downstream devices, but the
Root Port just ignores the CLKREQ# signal, right?

s/is not be/is not/

> "no-l1ss" -- CLKREQ# is expected to be driven by the downstream device for
> CPM and ASPM L0s and L1. Provides Clock Power Management, L0s, and L1,
> but cannot provide L1 substate (L1SS) power savings. If the downstream
> device connected to the RC is L1SS capable AND the OS enables L1SS, all
> PCIe traffic may abruptly halt, potentially hanging the system.

s/CPM/Clock Power Management (CPM)/ and then you can use "CPM" for the
*second* reference here.

It *looks* like we should never see this PCIe hang because with this
setting you don't advertise L1SS in the Root Port, so the OS should
never enable L1SS, at least for that link. Right?

If we never enable L1SS in the case where it could cause a hang, why
mention the possibility here?

I assume that if the downstream device is a Switch, L1SS is unsafe for
the Root Port to Switch link, but it could still be used for the link
between the Switch and whatever is below it?

> "default" -- Bidirectional CLKREQ# between the RC and downstream device.
> Provides ASPM L0s, L1, and L1SS, but not compliant to provide Clock
> Power Management; specifically, may not be able to meet the T_CLRon max
> timing of 400ns as specified in "Dynamic Clock Control", section
> 3.2.5.2.2 of the PCIe Express Mini CEM 2.1 specification. This
> situation is atypical and should happen only with older devices.

IIUC this T_CLRon timing issue is with the STB/CM *Root Port*, but the
last sentence refers to "older devices," which sounds like it means
"older devices that might be plugged into the Root Port." That would
suggest the issue is with those devices, not with the STB/CM Root
Port.

Or maybe this is meant to refer to older STB/CM Root Ports?

> Previously, this driver always set the mode to "no-l1ss", as almost all
> STB/CM boards operate in this mode. But now there is interest in
> activating L1SS power savings from STB/CM customers, which requires
> "default" mode. In addition, a bug was filed for RPi4 CM platform because
> most devices did not work in "no-l1ss" mode (see link below).

I'm having a hard time reconciling "almost all STB/CM boards operate
in 'no-l1ss' mode" with "most devices did not work in 'no-l1ss' mode."
They sound contradictory.

> Note that the mode is specified by the DT property "brcm,clkreq-mode". If
> this property is omitted, then "default" mode is chosen.

As a user, how do I determine which setting to use?

Trial and error? If so, how do I identify the errors?

Obviously "default" is the best, so I assume I would try that first.
If something is flaky (whatever that means), I would fall back to
"no-l1ss", which gets me Clock PM, L0s, and L1, right? In what
situation does "no-l1ss" fail, and how do I tell that it fails?

> Link: https://bugzilla.kernel.org/show_bug.cgi?id=217276
>
> Signed-off-by: Jim Quinlan <[email protected]>
> ---
> drivers/pci/controller/pcie-brcmstb.c | 79 ++++++++++++++++++++++++---
> 1 file changed, 70 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
> index 3d08b92d5bb8..3dc8511e6f58 100644
> --- a/drivers/pci/controller/pcie-brcmstb.c
> +++ b/drivers/pci/controller/pcie-brcmstb.c
> @@ -48,6 +48,9 @@
> #define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY 0x04dc
> #define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK 0xc00
>
> +#define PCIE_RC_CFG_PRIV1_ROOT_CAP 0x4f8
> +#define PCIE_RC_CFG_PRIV1_ROOT_CAP_L1SS_MODE_MASK 0xf8
> +
> #define PCIE_RC_DL_MDIO_ADDR 0x1100
> #define PCIE_RC_DL_MDIO_WR_DATA 0x1104
> #define PCIE_RC_DL_MDIO_RD_DATA 0x1108
> @@ -121,9 +124,12 @@
>
> #define PCIE_MISC_HARD_PCIE_HARD_DEBUG 0x4204
> #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK 0x2
> +#define PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK 0x200000
> #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK 0x08000000
> #define PCIE_BMIPS_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK 0x00800000
> -
> +#define PCIE_CLKREQ_MASK \
> + (PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK | \
> + PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK)
>
> #define PCIE_INTR2_CPU_BASE 0x4300
> #define PCIE_MSI_INTR2_BASE 0x4500
> @@ -1100,13 +1106,73 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
> return 0;
> }
>
> +static void brcm_config_clkreq(struct brcm_pcie *pcie)
> +{
> + static const char err_msg[] = "invalid 'brcm,clkreq-mode' DT string\n";
> + const char *mode = "default";
> + u32 clkreq_cntl;
> + int ret, tmp;
> +
> + ret = of_property_read_string(pcie->np, "brcm,clkreq-mode", &mode);
> + if (ret && ret != -EINVAL) {
> + dev_err(pcie->dev, err_msg);
> + mode = "safe";
> + }
> +
> + /* Start out assuming safe mode (both mode bits cleared) */
> + clkreq_cntl = readl(pcie->base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> + clkreq_cntl &= ~PCIE_CLKREQ_MASK;
> +
> + if (strcmp(mode, "no-l1ss") == 0) {
> + /*
> + * "no-l1ss" -- Provides Clock Power Management, L0s, and
> + * L1, but cannot provide L1 substate (L1SS) power
> + * savings. If the downstream device connected to the RC is
> + * L1SS capable AND the OS enables L1SS, all PCIe traffic
> + * may abruptly halt, potentially hanging the system.
> + */
> + clkreq_cntl |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK;
> + /*
> + * We want to un-advertise L1 substates because if the OS
> + * tries to configure the controller into using L1 substate
> + * power savings it may fail or hang when the RC HW is in
> + * "no-l1ss" mode.
> + */
> + tmp = readl(pcie->base + PCIE_RC_CFG_PRIV1_ROOT_CAP);
> + u32p_replace_bits(&tmp, 2, PCIE_RC_CFG_PRIV1_ROOT_CAP_L1SS_MODE_MASK);
> + writel(tmp, pcie->base + PCIE_RC_CFG_PRIV1_ROOT_CAP);
> +
> + } else if (strcmp(mode, "default") == 0) {
> + /*
> + * "default" -- Provides L0s, L1, and L1SS, but not
> + * compliant to provide Clock Power Management;
> + * specifically, may not be able to meet the Tclron max
> + * timing of 400ns as specified in "Dynamic Clock Control",
> + * section 3.2.5.2.2 of the PCIe spec. This situation is
> + * atypical and should happen only with older devices.
> + */
> + clkreq_cntl |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK;
> +
> + } else {
> + /*
> + * "safe" -- No power savings; refclk is driven by RC
> + * unconditionally.
> + */
> + if (strcmp(mode, "safe") != 0)
> + dev_err(pcie->dev, err_msg);
> + mode = "safe";
> + }
> + writel(clkreq_cntl, pcie->base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> +
> + dev_info(pcie->dev, "clkreq-mode set to %s\n", mode);
> +}
> +
> static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> {
> struct device *dev = pcie->dev;
> void __iomem *base = pcie->base;
> u16 nlw, cls, lnksta;
> bool ssc_good = false;
> - u32 tmp;
> int ret, i;
>
> /* Unassert the fundamental reset */
> @@ -1138,6 +1204,8 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> */
> brcm_extend_internal_bus_timeout(pcie, BRCM_LTR_MAX_NS + 1000);
>
> + brcm_config_clkreq(pcie);
> +
> if (pcie->gen)
> brcm_pcie_set_gen(pcie, pcie->gen);
>
> @@ -1156,13 +1224,6 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> pci_speed_string(pcie_link_speed[cls]), nlw,
> ssc_good ? "(SSC)" : "(!SSC)");
>
> - /*
> - * Refclk from RC should be gated with CLKREQ# input when ASPM L0s,L1
> - * is enabled => setting the CLKREQ_DEBUG_ENABLE field to 1.
> - */
> - tmp = readl(base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> - tmp |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK;
> - writel(tmp, base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
>
> return 0;
> }
> --
> 2.17.1
>



2024-05-08 22:04:28

by Jim Quinlan

Subject: Re: [PATCH v9 4/4] PCI: brcmstb: Configure HW CLKREQ# mode appropriate for downstream device

On Mon, May 6, 2024 at 7:20 PM Bjorn Helgaas <[email protected]> wrote:
>
> On Wed, Apr 03, 2024 at 05:39:01PM -0400, Jim Quinlan wrote:
> > The Broadcom STB/CM PCIe HW core, which is also used in RPi SOCs, must be
> > deliberately set by the PCIe RC HW into one of three mutually exclusive
> > modes:
> >
> > "safe" -- No CLKREQ# expected or required, refclk is always provided. This
> > mode should work for all devices but is not be capable of any refclk
> > power savings.
>
> s/refclk is always provided/the Root Port always supplies Refclk/
>
> At least, I assume that's what this means? The Root Port always
> supplies Refclk regardless of whether a downstream device deasserts
> CLKREQ#?
>
> The patch doesn't do anything to prevent aspm.c from setting
> PCI_EXP_LNKCTL_CLKREQ_EN, so it looks like Linux may still set the
> "Enable Clock Power Management" bit in downstream devices, but the
> Root Port just ignores the CLKREQ# signal, right?
>
> s/is not be/is not/
>
> > "no-l1ss" -- CLKREQ# is expected to be driven by the downstream device for
> > CPM and ASPM L0s and L1. Provides Clock Power Management, L0s, and L1,
> > but cannot provide L1 substate (L1SS) power savings. If the downstream
> > device connected to the RC is L1SS capable AND the OS enables L1SS, all
> > PCIe traffic may abruptly halt, potentially hanging the system.
>
> s/CPM/Clock Power Management (CPM)/ and then you can use "CPM" for the
> *second* reference here.
>
> It *looks* like we should never see this PCIe hang because with this
> setting you don't advertise L1SS in the Root Port, so the OS should
> never enable L1SS, at least for that link. Right?
>
> If we never enable L1SS in the case where it could cause a hang, why
> mention the possibility here?

Hello Bjorn,

I will remove this.

>
> I assume that if the downstream device is a Switch, L1SS is unsafe for
> the Root Port to Switch link, but it could still be used for the link
> between the Switch and whatever is below it?
Yes. The "brcm,clkreq-mode" property applies only to the Root Complex
and the device to which it is connected.

>
> > "default" -- Bidirectional CLKREQ# between the RC and downstream device.
> > Provides ASPM L0s, L1, and L1SS, but not compliant to provide Clock
> > Power Management; specifically, may not be able to meet the T_CLRon max
> > timing of 400ns as specified in "Dynamic Clock Control", section
> > 3.2.5.2.2 of the PCIe Express Mini CEM 2.1 specification. This
> > situation is atypical and should happen only with older devices.
>
> IIUC this T_CLRon timing issue is with the STB/CM *Root Port*, but the
> last sentence refers to "older devices," which sounds like it means
> "older devices that might be plugged into the Root Port." That would
> suggest the issue is with those devices, not with the STB/CM Root
> Port.
According to the PCIe HW designer, more modern chips have extra circuitry
to overcome this issue. I cannot confirm that, and I am not certain that
he knows for sure either. But the spec says that T_CLRon must meet a
certain maximum, and this RC cannot do that in some situations.

>
> Or maybe this is meant to refer to older STB/CM Root Ports?
>
> > Previously, this driver always set the mode to "no-l1ss", as almost all
> > STB/CM boards operate in this mode. But now there is interest in
> > activating L1SS power savings from STB/CM customers, which requires
> > "default" mode. In addition, a bug was filed for RPi4 CM platform because
> > most devices did not work in "no-l1ss" mode (see link below).
>
> I'm having a hard time reconciling "almost all STB/CM boards operate
> in 'no-l1ss' mode" with "most devices did not work in 'no-l1ss' mode."
> They sound contradictory.

I concur; it is no longer clear to me why some device+board+connector
combos work in "no-l1ss" mode and not in "default" mode, and vice versa.
Our existing boards work in "no-l1ss" mode, and the RPi CM HW works fine
with "default" mode (L1SS possible).

This is not just due to older devices, although I've noticed that a lot
of older devices have no trace connected to their CLKREQ# pin, leaving
the signal floating. Another thing that has recently surfaced is that
some of our board designs use a unidirectional level-shifter for
CLKREQ#, which is a bidirectional signal; this may be causing mayhem.
Another issue is that some, if not a majority, of the adapters I use to
test PCIe devices on a board with a socket interfere with the CLKREQ#
signal; e.g. some adapters ground it, leading me to believe that systems
are working when they would not be if CLKREQ# were not grounded.

I have not enumerated all of the reasons why a brcm,clkreq-mode setting
will make a device+board+connector combo work or not. But I do know that
being able to configure these modes is a must-have requirement. I also
know that the "default" setting I am proposing is the same configuration
used by the Raspberry Pi folks with Raspbian OS. The STB consumers have
no problem changing the DT property if required. Similarly, a Linux
enthusiast should be able to set the brcm,clkreq-mode property to "safe"
if they are having PCIe issues, just as they might configure
CONFIG_PCIE_BUS_SAFE=y.
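
For illustration, setting the mode in a board's devicetree would look
something like the fragment below. The node label, unit address, and
surrounding structure are placeholders; only the brcm,clkreq-mode
property (and its three string values) comes from this series:

```dts
/* Hypothetical board .dts overlay fragment -- &pcie0 is an example
 * label; brcm,clkreq-mode is the property added by this series.
 * Valid values: "safe", "no-l1ss", "default" (the default if omitted).
 */
&pcie0 {
        brcm,clkreq-mode = "safe";
};
```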

Please keep in mind that upstream Linux currently will not run on an
RPi CM board until this submission, or something like it, is accepted.

TL;DR Let me rewrite this text and resubmit.

>
> > Note that the mode is specified by the DT property "brcm,clkreq-mode". If
> > this property is omitted, then "default" mode is chosen.
>
> As a user, how do I determine which setting to use?
Using the "safe" mode will always work. In fact I considered making
this the default mode.
As I said, I cannot enumerate all of the reasons why one mode works and one does
not for a particular device+board+connector combo. The HW folks have not really
been forthcoming on the reasons as well.

>
> Trial and error? If so, how do I identify the errors?
Either PCIe link-up is not happening, or it is happening but the
device driver is non-functional and boot typically hangs.

>
> Obviously "default" is the best, so I assume I would try that first.
> If something is flaky (whatever that means), I would fall back to
> "no-l1ss", which gets me Clock PM, L0s, and L1, right? In what
> situation does "no-l1ss" fail, and how do I tell that it fails?

For example, "no-l1ss" fails on the RPi CM. Perhaps the reason for that
is that the CLKREQ# signal is left floating because some devices do not
connect their CLKREQ# pin. But I am not sure of that -- I do not have
access to the signals, and I do not have the requisite RPi CM design
info.

Regards,
Jim Quinlan
Broadcom STB/CM


>
> > Link: https://bugzilla.kernel.org/show_bug.cgi?id=217276
> >
> > Signed-off-by: Jim Quinlan <[email protected]>
> > ---
> > drivers/pci/controller/pcie-brcmstb.c | 79 ++++++++++++++++++++++++---
> > 1 file changed, 70 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
> > index 3d08b92d5bb8..3dc8511e6f58 100644
> > --- a/drivers/pci/controller/pcie-brcmstb.c
> > +++ b/drivers/pci/controller/pcie-brcmstb.c
> > @@ -48,6 +48,9 @@
> > #define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY 0x04dc
> > #define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK 0xc00
> >
> > +#define PCIE_RC_CFG_PRIV1_ROOT_CAP 0x4f8
> > +#define PCIE_RC_CFG_PRIV1_ROOT_CAP_L1SS_MODE_MASK 0xf8
> > +
> > #define PCIE_RC_DL_MDIO_ADDR 0x1100
> > #define PCIE_RC_DL_MDIO_WR_DATA 0x1104
> > #define PCIE_RC_DL_MDIO_RD_DATA 0x1108
> > @@ -121,9 +124,12 @@
> >
> > #define PCIE_MISC_HARD_PCIE_HARD_DEBUG 0x4204
> > #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK 0x2
> > +#define PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK 0x200000
> > #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK 0x08000000
> > #define PCIE_BMIPS_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK 0x00800000
> > -
> > +#define PCIE_CLKREQ_MASK \
> > + (PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK | \
> > + PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK)
> >
> > #define PCIE_INTR2_CPU_BASE 0x4300
> > #define PCIE_MSI_INTR2_BASE 0x4500
> > @@ -1100,13 +1106,73 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
> > return 0;
> > }
> >
> > +static void brcm_config_clkreq(struct brcm_pcie *pcie)
> > +{
> > + static const char err_msg[] = "invalid 'brcm,clkreq-mode' DT string\n";
> > + const char *mode = "default";
> > + u32 clkreq_cntl;
> > + int ret, tmp;
> > +
> > + ret = of_property_read_string(pcie->np, "brcm,clkreq-mode", &mode);
> > + if (ret && ret != -EINVAL) {
> > + dev_err(pcie->dev, err_msg);
> > + mode = "safe";
> > + }
> > +
> > + /* Start out assuming safe mode (both mode bits cleared) */
> > + clkreq_cntl = readl(pcie->base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> > + clkreq_cntl &= ~PCIE_CLKREQ_MASK;
> > +
> > + if (strcmp(mode, "no-l1ss") == 0) {
> > + /*
> > + * "no-l1ss" -- Provides Clock Power Management, L0s, and
> > + * L1, but cannot provide L1 substate (L1SS) power
> > + * savings. If the downstream device connected to the RC is
> > + * L1SS capable AND the OS enables L1SS, all PCIe traffic
> > + * may abruptly halt, potentially hanging the system.
> > + */
> > + clkreq_cntl |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK;
> > + /*
> > + * We want to un-advertise L1 substates because if the OS
> > + * tries to configure the controller into using L1 substate
> > + * power savings it may fail or hang when the RC HW is in
> > + * "no-l1ss" mode.
> > + */
> > + tmp = readl(pcie->base + PCIE_RC_CFG_PRIV1_ROOT_CAP);
> > + u32p_replace_bits(&tmp, 2, PCIE_RC_CFG_PRIV1_ROOT_CAP_L1SS_MODE_MASK);
> > + writel(tmp, pcie->base + PCIE_RC_CFG_PRIV1_ROOT_CAP);
> > +
> > + } else if (strcmp(mode, "default") == 0) {
> > + /*
> > + * "default" -- Provides L0s, L1, and L1SS, but not
> > + * compliant to provide Clock Power Management;
> > + * specifically, may not be able to meet the Tclron max
> > + * timing of 400ns as specified in "Dynamic Clock Control",
> > + * section 3.2.5.2.2 of the PCIe spec. This situation is
> > + * atypical and should happen only with older devices.
> > + */
> > + clkreq_cntl |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_L1SS_ENABLE_MASK;
> > +
> > + } else {
> > + /*
> > + * "safe" -- No power savings; refclk is driven by RC
> > + * unconditionally.
> > + */
> > + if (strcmp(mode, "safe") != 0)
> > + dev_err(pcie->dev, err_msg);
> > + mode = "safe";
> > + }
> > + writel(clkreq_cntl, pcie->base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> > +
> > + dev_info(pcie->dev, "clkreq-mode set to %s\n", mode);
> > +}
> > +
> > static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> > {
> > struct device *dev = pcie->dev;
> > void __iomem *base = pcie->base;
> > u16 nlw, cls, lnksta;
> > bool ssc_good = false;
> > - u32 tmp;
> > int ret, i;
> >
> > /* Unassert the fundamental reset */
> > @@ -1138,6 +1204,8 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> > */
> > brcm_extend_internal_bus_timeout(pcie, BRCM_LTR_MAX_NS + 1000);
> >
> > + brcm_config_clkreq(pcie);
> > +
> > if (pcie->gen)
> > brcm_pcie_set_gen(pcie, pcie->gen);
> >
> > @@ -1156,13 +1224,6 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
> > pci_speed_string(pcie_link_speed[cls]), nlw,
> > ssc_good ? "(SSC)" : "(!SSC)");
> >
> > - /*
> > - * Refclk from RC should be gated with CLKREQ# input when ASPM L0s,L1
> > - * is enabled => setting the CLKREQ_DEBUG_ENABLE field to 1.
> > - */
> > - tmp = readl(base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> > - tmp |= PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK;
> > - writel(tmp, base + PCIE_MISC_HARD_PCIE_HARD_DEBUG);
> >
> > return 0;
> > }
> > --
> > 2.17.1
> >
>
>



2024-05-08 23:33:13

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH v9 4/4] PCI: brcmstb: Configure HW CLKREQ# mode appropriate for downstream device

On Wed, May 08, 2024 at 01:55:24PM -0400, Jim Quinlan wrote:
> On Mon, May 6, 2024 at 7:20 PM Bjorn Helgaas <[email protected]> wrote:
> ...

> > As a user, how do I determine which setting to use?
>
> Using the "safe" mode will always work. In fact I considered making
> this the default mode.

> As I said, I cannot enumerate all of the reasons why one mode works
> and one does not for a particular device+board+connector combo. The
> HW folks have not really been forthcoming on the reasons as well.
>
> > Trial and error? If so, how do I identify the errors?
>
> Either PCIe link-up is not happening, or it is happening but the
> device driver is non-functional and boot typically hangs.

What I'm hearing is that it's trial and error.

If we can't tell users how to figure out which mode to use, I think we
have to explicitly say "try the modes in this order until you find one
that works."

That sucks, but if it's all we can do, I guess we don't have much
choice, and we should just own up to it.

There's no point in telling users "if your card drives CLKREQ# use X,
but if not and it can tolerate out-of-spec T_CLRon timing, use Y"
because nobody knows how to figure that out.

And we can say which features are enabled in each mode so they aren't
surprised, e.g., something like this:

"default" -- The Root Port supports ASPM L0s, L1, L1 Substates, and
Clock Power Management. This provides the best power savings but
some devices may not work correctly because the Root Port doesn't
comply with T_CLRon timing required for PCIe Mini Cards [1].

"no-l1ss" -- The Root Port supports ASPM L0s, L1 (but not L1
Substates), and Clock Power Management. [I assume there's some
other Root Port defect that causes issues with some devices in
this mode; I dunno. If we don't know exactly what it is, I guess
we can't really say anything.]

"safe" -- The Root Port supports ASPM L0, L1, L1 Substates, but not
Clock Power Management. All devices should work in this mode.

[1] PCIe Mini CEM r2.1, sec 3.2.5.2.2

(I'm not sure which features are *actually* enabled in each mode, I
just guessed.)

Bjorn