Date: Mon, 31 Jul 2023 18:12:23 -0500
From: Bjorn Helgaas
To: Kevin Xie
Cc: Minda Chen, Daire McNamara, Conor Dooley, Rob Herring,
    Krzysztof Kozlowski, Bjorn Helgaas, Lorenzo Pieralisi,
    Krzysztof Wilczyński, Emil Renner Berthing,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-pci@vger.kernel.org,
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Philipp Zabel,
    Mason Huo, Leyfoon Tan, Mika Westerberg, "Maciej W. Rozycki",
    Pali Rohár, Marek Behún
Subject: Re: [PATCH v1 8/9] PCI: PLDA: starfive: Add JH7110 PCIe controller
Message-ID: <20230731231223.GA14721@bhelgaas>
In-Reply-To: <66d794c5-3837-483e-87d1-4b745d7cb9c4@starfivetech.com>

[+cc Pali, Marek because I used f76b36d40bee ("PCI: aardvark: Fix link
training") as an example]

On Mon, Jul 31, 2023 at 01:52:35PM +0800, Kevin Xie wrote:
> On 2023/7/28 5:40, Bjorn Helgaas wrote:
> > On Tue, Jul 25, 2023 at 03:46:35PM -0500, Bjorn Helgaas wrote:
> >> On Mon, Jul 24, 2023 at 06:48:47PM +0800, Kevin Xie wrote:
> >> > On 2023/7/21 0:15, Bjorn Helgaas wrote:
> >> > > On Thu, Jul 20, 2023 at 06:11:59PM +0800, Kevin Xie wrote:
> >> > >> On 2023/7/20 0:48, Bjorn Helgaas wrote:
> >> > >> > On Wed, Jul 19, 2023 at 06:20:56PM +0800, Minda Chen wrote:
> >> > >> >> Add StarFive JH7110 SoC PCIe controller platform
> >> > >> >> driver codes.
> >> > >>
> >> > >> However, in the compatibility testing with several NVMe SSDs, we
> >> > >> found that the Lenovo Thinklife ST8000 NVMe can not get ready in
> >> > >> 100ms, and it actually needs almost 200ms. Thus, we increased the
> >> > >> T_PVPERL value to 300ms for better device compatibility.
> >> > > ...
> >> > >
> >> > > Thanks for this valuable information! This NVMe issue potentially
> >> > > affects many similar drivers, and we may need a more generic fix so
> >> > > this device works well with all of them.
> >> > >
> >> > > T_PVPERL is defined to start when power is stable. Do you have a way
> >> > > to accurately determine that point? I'm guessing this:
> >> > >
> >> > >   gpiod_set_value_cansleep(pcie->power_gpio, 1)
> >> > >
> >> > > turns the power on? But of course that doesn't mean it is instantly
> >> > > stable. Maybe your testing is telling you that your driver should
> >> > > have a hardware-specific 200ms delay to wait for power to become
> >> > > stable, followed by the standard 100ms for T_PVPERL?
> >> >
> >> > You are right, we did not take the power stable cost into account.
> >> > T_PVPERL is enough for the Lenovo Thinklife ST8000 NVMe SSD to get
> >> > ready, and the extra cost is from the power circuit of a PCIe to M.2
> >> > connector, which is used to verify M.2 SSDs with our EVB at an early
> >> > stage.
> >>
> >> Hmm. That sounds potentially interesting. I assume you're talking
> >> about something like this: https://www.amazon.com/dp/B07JKH5VTL
> >>
> >> I'm not familiar with the timing requirements for something like this.
> >> There is a PCIe M.2 spec with some timing requirements, but I don't
> >> know whether or how software is supposed to manage this. There is a
> >> T_PVPGL (power valid to PERST# inactive) parameter, but it's
> >> implementation specific, so I don't know what the point of that is.
> >> And I don't see a way for software to even detect the presence of such
> >> an adapter.
> >
> > I intended to ask about this on the PCI-SIG forum, but after reading
> > this thread [1], I don't think we would learn anything. The question
> > was:
> >
> >   The M.2 device has 5 voltage rails generated from the 3.3V input
> >   supply voltage
> >   -------------------------------------------
> >   This is re. Table 17 in PCI Express M.2 Specification Revision 1.1
> >   Power Valid* to PERST# input inactive : Implementation specific;
> >   recommended 50 ms
> >
> >   What exactly does this mean ?
> >
> >   The Note says
> >
> >   *Power Valid when all the voltage supply rails have reached their
> >   respective Vmin.
> >
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all *5 voltage rails* on the M.2 device have become "good" ?
> >
> > and the answer was:
> >
> >   You wrote;
> >   Does this mean that the 50ms to PERSTn is counted from the instant
> >   when all 5 voltage rails on the M.2 device have become "good" ?
> >
> >   Reply:
> >   This means that counting the recommended 50 ms begins from the time
> >   when the power rails coming to the device/module, from the host, are
> >   stable *at the device connector*.
> >
> >   As for the time it takes voltages derived inside the device from any
> >   of the host power rails (e.g., 3.3V rail) to become stable, that is
> >   part of the 50ms the host should wait before de-asserting PERST#, in
> >   order to ensure that most devices will be ready by then.
> >
> >   Strictly speaking, nothing disastrous happens if a host violates the
> >   50ms. If it de-asserts too soon, the device may not be ready, but
> >   most hosts will try again. If the host de-asserts too late, the
> >   device has even more time to stabilize. This is why the WG felt that
> >   an exact minimum number for Tpvpgl was not valid in practice, and
> >   we made it a recommendation.
> >
> > Since T_PVPGL is implementation-specific, we can't really base
> > anything in software on the 50ms recommendation. It sounds to me like
> > they are counting on software to retry config reads when enumerating.
> >
> > I guess the delays we *can* observe are:
> >
> >   100ms T_PVPERL "Power stable to PERST# inactive" (CEM 2.9.2)
> >   100ms software delay between reset and config request (Base 6.6.1)
>
> Refer to Figure 2-10 in CEM Spec V2.0; I guess these two delays are T2 & T4?
> In the PATCH v2 [4/4], T2 is the msleep(100) for T_PVPERL,
> and T4 is done by starfive_pcie_host_wait_for_link().

Yes, I think "T2" is T_PVPERL. The CEM r2.0 Figure 2-10 note is "2.
Minimum time from power rails within specified tolerance to PERST#
inactive (T_PVPERL)."

As far as T4 ("Minimum PERST# inactive to PCI Express link out of
electrical idle") goes, I don't see a name or a value for that
parameter, and I don't think it is the delay required by PCIe r6.0,
sec 6.6.1.

The delay required by sec 6.6.1 is a minimum of 100ms following exit
from reset or, for fast links, 100ms after link training completes.
The comment at the call of advk_pcie_wait_for_link() [2] says it is
the delay required by sec 6.6.1, but that doesn't seem right to me.

For one thing, I don't think 6.6.1 says anything about "link up" being
the end of a delay. So if we want to do the delay required by 6.6.1,
"wait_for_link()" doesn't seem like quite the right name.

For another, all the *_wait_for_link() functions can return success
after 0ms, 90ms, 180ms, etc. They're unlikely to return after 0ms,
but 90ms is quite possible. If we avoided the 0ms return and
LINK_WAIT_USLEEP_MIN were 100ms instead of 90ms, that should be
enough for slow links, where we need 100ms following "exit from
reset." But it's still not enough for fast links, where we need 100ms
"after link training completes", because we don't know when training
completed. If training completed 89ms into *_wait_for_link(), we only
delay 1ms after that.
> > The PCI core doesn't know how to assert PERST#, so the T_PVPERL delay
> > definitely has to be in the host controller driver.
> >
> > The PCI core observes the second 100ms delay after a reset in
> > pci_bridge_wait_for_secondary_bus(). But this 100ms delay does not
> > happen during initial enumeration. I think the assumption of the PCI
> > core is that when the host controller driver calls pci_host_probe(),
> > we can issue config requests immediately.
> >
> > So I think that to be safe, we probably need to do both of those 100ms
> > delays in the host controller driver. Maybe there's some hope of
> > supporting the latter one in the PCI core someday, but that's not
> > today.
> >
> > Bjorn
> >
> > [1] https://forum.pcisig.com/viewtopic.php?f=74&t=1037

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/controller/pci-aardvark.c?id=v6.4#n433