Subject: Re: [PATCH 09/10] PCI: tegra: Add Tegra194 PCIe support
From: Vidya Sagar
To: Thierry Reding
CC: Bjorn Helgaas, ...
Date: Wed, 3 Apr 2019 14:45:49 +0530
In-Reply-To: <20190402141424.GB8017@ulmo>
References: <1553613207-3988-1-git-send-email-vidyas@nvidia.com>
 <1553613207-3988-10-git-send-email-vidyas@nvidia.com>
 <20190329203159.GG24180@google.com>
 <5eb9599c-a6d6-d3a3-beef-5225ed7393f9@nvidia.com>
 <20190402141424.GB8017@ulmo>
X-Mailing-List: linux-kernel@vger.kernel.org

On 4/2/2019 7:44 PM, Thierry Reding wrote:
> On Tue, Apr 02, 2019 at 12:47:48PM +0530, Vidya Sagar wrote:
>> On 3/30/2019 2:22 AM, Bjorn Helgaas wrote:
> [...]
>>>> +static int tegra_pcie_dw_host_init(struct pcie_port *pp)
>>>> +{
> [...]
>>>> +	val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
>>>> +	while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
>>>> +		if (!count) {
>>>> +			val = readl(pcie->appl_base + APPL_DEBUG);
>>>> +			val &= APPL_DEBUG_LTSSM_STATE_MASK;
>>>> +			val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
>>>> +			tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
>>>> +			tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
>>>> +			if (val == 0x11 && !tmp) {
>>>> +				dev_info(pci->dev, "link is down in DLL");
>>>> +				dev_info(pci->dev,
>>>> +					 "trying again with DLFE disabled\n");
>>>> +				/* disable LTSSM */
>>>> +				val = readl(pcie->appl_base + APPL_CTRL);
>>>> +				val &= ~APPL_CTRL_LTSSM_EN;
>>>> +				writel(val, pcie->appl_base + APPL_CTRL);
>>>> +
>>>> +				reset_control_assert(pcie->core_rst);
>>>> +				reset_control_deassert(pcie->core_rst);
>>>> +
>>>> +				offset =
>>>> +					dw_pcie_find_ext_capability(pci,
>>>> +							PCI_EXT_CAP_ID_DLF)
>>>> +					+ PCI_DLF_CAP;
>>>
>>> This capability offset doesn't change, does it?  Could it be computed
>>> outside the loop?
>> This is the only place where the DLF offset is needed, and the scenario is
>> very rare: so far only a legacy ASMedia USB3.0 card requires DLF to be
>> disabled to get the PCIe link up. So I chose to calculate the offset here
>> instead of using a separate variable.
>>
>>>
>>>> +				val = dw_pcie_readl_dbi(pci, offset);
>>>> +				val &= ~DL_FEATURE_EXCHANGE_EN;
>>>> +				dw_pcie_writel_dbi(pci, offset, val);
>>>> +
>>>> +				tegra_pcie_dw_host_init(&pcie->pci.pp);
>>>
>>> This looks like some sort of "wait for link up" retry loop, but a
>>> recursive call seems a little unusual. My 5 second analysis is that
>>> the loop could run this 200 times, and you sure don't want the
>>> possibility of a 200-deep call chain. Is there a way to split out the
>>> host init from the link-up polling?
>> Again, this recursive call comes into the picture only for the legacy
>> ASMedia USB3.0 card, and it results in at most a 1-deep call chain because
>> the recursion is taken only once, depending on the condition. Apart from
>> the legacy ASMedia card, no other card out of the large number we have
>> tested hits this path.
>
> A more idiomatic way would be to add a "retry:" label somewhere and goto
> that after disabling DLFE. That way you achieve the same effect, but you
> can avoid the recursion, even if it is harmless in practice.
Initially I thought of using goto to keep it simple, but assumed it would be
discouraged and hence used recursion. But yes, I agree that goto keeps it
simple, so I'll switch to goto now (a rough sketch of what I have in mind is
at the end of this mail).

>
>>>> +static int tegra_pcie_dw_probe(struct platform_device *pdev)
>>>> +{
>>>> +	struct tegra_pcie_dw *pcie;
>>>> +	struct pcie_port *pp;
>>>> +	struct dw_pcie *pci;
>>>> +	struct phy **phy;
>>>> +	struct resource *dbi_res;
>>>> +	struct resource *atu_dma_res;
>>>> +	const struct of_device_id *match;
>>>> +	const struct tegra_pcie_of_data *data;
>>>> +	char *name;
>>>> +	int ret, i;
>>>> +
>>>> +	pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
>>>> +	if (!pcie)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	pci = &pcie->pci;
>>>> +	pci->dev = &pdev->dev;
>>>> +	pci->ops = &tegra_dw_pcie_ops;
>>>> +	pp = &pci->pp;
>>>> +	pcie->dev = &pdev->dev;
>>>> +
>>>> +	match = of_match_device(of_match_ptr(tegra_pcie_dw_of_match),
>>>> +				&pdev->dev);
>>>> +	if (!match)
>>>> +		return -EINVAL;
>>>
>>> Logically could be the first thing in the function since it doesn't
>>> depend on anything.
>> Done
>>
>>>
>>>> +	data = (struct tegra_pcie_of_data *)match->data;
>
> of_device_get_match_data() can help remove some of the above
> boilerplate. Also, there's no reason to check for a failure with these
> functions. The driver is OF-only and can only ever be probed if the
> device exists, in which case match (or data for that matter) will never
> be NULL.
Done. (A sketch of the reworked prologue is at the end of this mail.)

>
>>> I see that an earlier patch added "bus" to struct pcie_port. I think
>>> it would be better to somehow connect to the pci_host_bridge struct.
>>> Several other drivers already do this; see uses of
>>> pci_host_bridge_from_priv().
>> All non-DesignWare based implementations save their private data structure
>> in the 'private' pointer of struct pci_host_bridge and use
>> pci_host_bridge_from_priv() to get it back. But DesignWare based
>> implementations save pcie_port in 'sysdata' and nothing in the 'private'
>> pointer. So I'm not sure whether pci_host_bridge_from_priv() can be used
>> in this case. Please let me know if you think otherwise.
>
> If nothing is currently stored in the private pointer, why not do like
> the other drivers and store the struct pci_host_bridge pointer there?
Non-DesignWare drivers get their private data allocated as part of
pci_alloc_host_bridge() by passing the size of their private structure, and
they use pci_host_bridge_from_priv() to get a pointer back to that structure
(which lives within struct pci_host_bridge). In the DesignWare core, however,
the memory for struct pcie_port is obtained well before pci_alloc_host_bridge()
is called; in fact, a size of '0' is passed to the alloc API. That is why the
struct pcie_port pointer is saved in 'sysdata'.

>
> Thierry
>
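For reference, here is a rough, untested sketch of the goto-based retry I
have in mind. Register and field names are the ones used in the patch above;
the poll count, the usleep_range() interval and the 'retry_dlfe' flag are
placeholders picked purely for illustration, and the locals (val, val_w, tmp,
count, offset) are assumed to be declared earlier in the function as in the
patch:

    bool retry_dlfe = true;    /* placeholder: allow the DLFE fallback once */

retry:
    /* ... core setup and LTSSM enable (elided), as earlier in host_init ... */

    count = 200;    /* poll count, per the review discussion above */
    val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
    while (!(val_w & PCI_EXP_LNKSTA_DLLLA)) {
        if (!count) {
            /* link did not come up; inspect the LTSSM state */
            val = readl(pcie->appl_base + APPL_DEBUG);
            val &= APPL_DEBUG_LTSSM_STATE_MASK;
            val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
            tmp = readl(pcie->appl_base + APPL_LINK_STATUS);
            tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
            if (val == 0x11 && !tmp && retry_dlfe) {
                retry_dlfe = false;
                dev_info(pci->dev,
                         "link is down in DLL, trying again with DLFE disabled\n");
                /* disable LTSSM and reset the core */
                val = readl(pcie->appl_base + APPL_CTRL);
                val &= ~APPL_CTRL_LTSSM_EN;
                writel(val, pcie->appl_base + APPL_CTRL);
                reset_control_assert(pcie->core_rst);
                reset_control_deassert(pcie->core_rst);

                /* clear the Data Link Feature Exchange enable bit */
                offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF) +
                         PCI_DLF_CAP;
                val = dw_pcie_readl_dbi(pci, offset);
                val &= ~DL_FEATURE_EXCHANGE_EN;
                dw_pcie_writel_dbi(pci, offset, val);

                goto retry;    /* re-run the init sequence instead of recursing */
            }
            dev_info(pci->dev, "link is down\n");
            break;
        }
        usleep_range(1000, 2000);    /* placeholder poll interval */
        val_w = dw_pcie_readw_dbi(pci, CFG_LINK_STATUS);
        count--;
    }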
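Similarly, a minimal sketch of the probe prologue reworked around
of_device_get_match_data(), keeping the structure and ops names from the
patch; the remainder of probe is unchanged and elided here:

static int tegra_pcie_dw_probe(struct platform_device *pdev)
{
    const struct tegra_pcie_of_data *data;
    struct tegra_pcie_dw *pcie;
    struct dw_pcie *pci;
    struct pcie_port *pp;

    /* no NULL check: the driver is OF-only, so match data always exists */
    data = of_device_get_match_data(&pdev->dev);

    pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
    if (!pcie)
        return -ENOMEM;

    pci = &pcie->pci;
    pci->dev = &pdev->dev;
    pci->ops = &tegra_dw_pcie_ops;
    pp = &pci->pp;
    pcie->dev = &pdev->dev;

    /* ... remaining resource/PHY setup and host init, as in the patch ... */
}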
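And on the sysdata point above, a simplified comparison of the two allocation
patterns as I understand them (illustrative only, not actual driver code;
'struct foo_pcie' is a made-up name for a non-DesignWare driver's private
structure):

    /* Non-DesignWare hosts: private data is carved out of the bridge itself */
    bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct foo_pcie));
    pcie = pci_host_bridge_from_priv(bridge);    /* points inside 'bridge' */

    /*
     * DesignWare hosts: struct pcie_port already exists (embedded in the
     * driver-allocated struct dw_pcie) before the bridge is allocated, so
     * a zero-sized private area is requested and the pcie_port pointer is
     * parked in sysdata instead.
     */
    bridge = devm_pci_alloc_host_bridge(dev, 0);
    bridge->sysdata = pp;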