Subject: Re: [PATCH v3 4/4] PCI: cadence: Use "dma-ranges" instead of "cdns,no-bar-match-nbits" property
From: Kishon Vijay Abraham I
To: Rob Herring
CC: Bjorn Helgaas, Lorenzo Pieralisi, Robin Murphy, Tom Joseph, PCI,
 "linux-kernel@vger.kernel.org",
 "moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE"
References: <20200508130646.23939-1-kishon@ti.com>
 <20200508130646.23939-5-kishon@ti.com>
 <162447e2-ac3b-9523-d404-130b93e0860e@ti.com>
Message-ID: <73274652-4a18-e20b-36d1-73529241b9d7@ti.com>
Date: Thu, 21 May 2020 17:43:02 +0530
In-Reply-To: <162447e2-ac3b-9523-d404-130b93e0860e@ti.com>

Rob,

On 5/21/2020 9:00 AM, Kishon Vijay Abraham I wrote:
> Hi Rob,
>
> On 5/19/2020 10:41 PM, Rob Herring wrote:
>> On Fri, May 8, 2020 at 7:07 AM Kishon Vijay Abraham I wrote:
>>>
>>> The Cadence PCIe core driver (host mode) uses the
>>> "cdns,no-bar-match-nbits" property to configure the number of bits
>>> passed through from PCIe address to internal address in the Inbound
>>> Address Translation register. This applies only to the NO MATCH BAR.
>>>
>>> However, the standard PCI dt-binding already defines "dma-ranges" to
>>> describe the address ranges accessible by the PCIe controller. Add
>>> support in the Cadence PCIe host driver to parse dma-ranges and
>>> configure the inbound regions for BAR0, BAR1 and the NO MATCH BAR.
>>> The Cadence IP specifies a maximum size of 256 GB for BAR0 and 2 GB
>>> for BAR1, so if a dma-ranges entry specifies a size larger than the
>>> maximum allowed, the driver will split it and configure multiple
>>> BARs.
>>
>> Would be useful to know what your dma-ranges contains now.
>>
>>> Legacy device tree binding compatibility is maintained by retaining
>>> support for "cdns,no-bar-match-nbits".
>>>
>>> Signed-off-by: Kishon Vijay Abraham I
>>> ---
>>>  .../controller/cadence/pcie-cadence-host.c    | 141 ++++++++++++++++--
>>>  drivers/pci/controller/cadence/pcie-cadence.h |  17 ++-
>>>  2 files changed, 141 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> index 6ecebb79057a..2485ecd8434d 100644
>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> @@ -11,6 +11,12 @@
>>>
>>>  #include "pcie-cadence.h"
>>>
>>> +static u64 cdns_rp_bar_max_size[] = {
>>> +	[RP_BAR0]	= _ULL(128 * SZ_2G),
>>> +	[RP_BAR1]	= SZ_2G,
>>> +	[RP_NO_BAR]	= SZ_64T,
>>> +};
>>> +
>>>  void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
>>>  			       int where)
>>>  {
>>> @@ -106,6 +112,117 @@ static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc)
>>>  	return 0;
>>>  }
>>>
>>> +static void cdns_pcie_host_bar_ib_config(struct cdns_pcie_rc *rc,
>>> +					 enum cdns_pcie_rp_bar bar,
>>> +					 u64 cpu_addr, u32 aperture)
>>> +{
>>> +	struct cdns_pcie *pcie = &rc->pcie;
>>> +	u32 addr0, addr1;
>>> +
>>> +	addr0 = CDNS_PCIE_AT_IB_RP_BAR_ADDR0_NBITS(aperture) |
>>> +		(lower_32_bits(cpu_addr) & GENMASK(31, 8));
>>> +	addr1 = upper_32_bits(cpu_addr);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar), addr0);
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar), addr1);
>>> +}
>>> +
>>> +static int cdns_pcie_host_bar_config(struct cdns_pcie_rc *rc,
>>> +				     struct resource_entry *entry,
>>> +				     enum cdns_pcie_rp_bar *index)
>>> +{
>>> +	u64 cpu_addr, pci_addr, size, winsize;
>>> +	struct cdns_pcie *pcie = &rc->pcie;
>>> +	struct device *dev = pcie->dev;
>>> +	enum cdns_pcie_rp_bar bar;
>>> +	unsigned long flags;
>>> +	u32 aperture;
>>> +	u32 value;
>>> +
>>> +	cpu_addr = entry->res->start;
>>> +	flags = entry->res->flags;
>>> +	pci_addr = entry->res->start - entry->offset;
>>> +	size = resource_size(entry->res);
>>> +	bar = *index;
>>> +
>>> +	if (entry->offset) {
>>> +		dev_err(dev, "Cannot map PCI addr: %llx to CPU addr: %llx\n",
>>> +			pci_addr, cpu_addr);
>>
>> Would be a bit more clear to say PCI addr must equal CPU addr.
>>
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	value = cdns_pcie_readl(pcie, CDNS_PCIE_LM_RC_BAR_CFG);
>>> +	while (size > 0) {
>>> +		if (bar > RP_NO_BAR) {
>>> +			dev_err(dev, "Failed to map inbound regions!\n");
>>> +			return -EINVAL;
>>> +		}
>>> +
>>> +		winsize = size;
>>> +		if (size > cdns_rp_bar_max_size[bar])
>>> +			winsize = cdns_rp_bar_max_size[bar];
>>> +
>>> +		aperture = ilog2(winsize);
>>> +
>>> +		cdns_pcie_host_bar_ib_config(rc, bar, cpu_addr, aperture);
>>> +
>>> +		if (bar == RP_NO_BAR)
>>> +			break;
>>> +
>>> +		if (winsize + cpu_addr >= SZ_4G) {
>>> +			if (!(flags & IORESOURCE_PREFETCH))
>>> +				value |= LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar);
>>> +			value |= LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar);
>>> +		} else {
>>> +			if (!(flags & IORESOURCE_PREFETCH))
>>> +				value |= LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar);
>>> +			value |= LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar);
>>> +		}
>>> +
>>> +		value |= LM_RC_BAR_CFG_APERTURE(bar, aperture);
>>> +
>>> +		size -= winsize;
>>> +		cpu_addr += winsize;
>>> +		bar++;
>>> +	}
>>> +	cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value);
>>> +	*index = bar;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc)
>>> +{
>>> +	enum cdns_pcie_rp_bar bar = RP_BAR0;
>>> +	struct cdns_pcie *pcie = &rc->pcie;
>>> +	struct device *dev = pcie->dev;
>>> +	struct device_node *np = dev->of_node;
>>> +	struct pci_host_bridge *bridge;
>>> +	struct resource_entry *entry;
>>> +	u32 no_bar_nbits = 32;
>>> +	int err;
>>> +
>>> +	bridge = pci_host_bridge_from_priv(rc);
>>> +	if (!bridge)
>>> +		return -ENOMEM;
>>> +
>>> +	if (list_empty(&bridge->dma_ranges)) {
>>> +		of_property_read_u32(np, "cdns,no-bar-match-nbits",
>>> +				     &no_bar_nbits);
>>> +		cdns_pcie_host_bar_ib_config(rc, RP_NO_BAR, 0x0, no_bar_nbits);
>>> +		return 0;
>>> +	}
>>> +
>>> +	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
>>> +		err = cdns_pcie_host_bar_config(rc, entry, &bar);
>>
>> Seems like this should have some better logic to pick which BAR to
>> use. Something like find the biggest region and then find the smallest
>> BAR that it fits in. Then get the next biggest...
>
> Okay, I'll change this to something like: for each region, find the
> smallest BAR that it fits in, and if there is no BAR big enough to hold
> the region, split the region to see if it can be fitted using multiple
> BARs. I don't see the purpose of finding the biggest region first,
> since at all times we'll only use the smallest BAR that fits.

Never mind, I realized finding the biggest region first is useful. I
have sent a patch adding support for that.
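To make the splitting in cdns_pcie_host_bar_config() concrete, here is a
stand-alone user-space sketch (illustration only, not kernel code; the
BAR limits are the ones from cdns_rp_bar_max_size[] above, and the
region start and size are made-up values):

#include <stdint.h>
#include <stdio.h>

#define SZ_1G	(1ULL << 30)

static const char *const bar_name[] = { "BAR0", "BAR1", "NO_BAR" };
/* Limits from cdns_rp_bar_max_size[]: 256 GB, 2 GB, 64 TB. */
static const uint64_t bar_max[] = { 256 * SZ_1G, 2 * SZ_1G, 1ULL << 46 };

int main(void)
{
	uint64_t cpu_addr = 16 * SZ_1G;		/* hypothetical start */
	uint64_t size = 260 * SZ_1G;		/* hypothetical size  */
	int bar;

	/*
	 * Same walk as the while (size > 0) loop in the patch: each
	 * BAR takes at most its maximum, the rest moves to the next.
	 */
	for (bar = 0; size > 0 && bar < 3; bar++) {
		uint64_t winsize = size < bar_max[bar] ? size : bar_max[bar];

		printf("%-6s maps 0x%llx bytes at CPU address 0x%llx\n",
		       bar_name[bar], (unsigned long long)winsize,
		       (unsigned long long)cpu_addr);
		size -= winsize;
		cpu_addr += winsize;
	}
	return 0;
}

With these limits a 260 GB region ends up as 256 GB in BAR0, 2 GB in
BAR1 and the remaining 2 GB in the NO MATCH BAR.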
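The follow-up patch itself is not part of this thread, but the
allocation order Rob suggested amounts to roughly the following sketch
(hypothetical helper names, simplified to region sizes only; the real
code also has to track addresses, flags and the register writes):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum rp_bar { RP_BAR0, RP_BAR1, RP_NO_BAR, RP_BAR_COUNT };

/* Same limits as cdns_rp_bar_max_size[] in the patch. */
static const uint64_t bar_max_size[RP_BAR_COUNT] = {
	[RP_BAR0]   = 256ULL << 30,	/* 256 GB */
	[RP_BAR1]   = 2ULL << 30,	/*   2 GB */
	[RP_NO_BAR] = 64ULL << 40,	/*  64 TB */
};

/* Smallest still-free BAR whose aperture covers @size, or -1 if none. */
static int find_smallest_fitting_bar(const bool used[RP_BAR_COUNT],
				     uint64_t size)
{
	int best = -1;
	int bar;

	for (bar = RP_BAR0; bar < RP_BAR_COUNT; bar++) {
		if (used[bar] || bar_max_size[bar] < size)
			continue;
		if (best < 0 || bar_max_size[bar] < bar_max_size[best])
			best = bar;
	}
	return best;
}

/*
 * Walk the regions largest-first (sizes[] is assumed sorted in
 * descending order) and give each one the smallest BAR it fits in.
 * Returns false when a region fits nowhere and would need the
 * splitting fallback from cdns_pcie_host_bar_config().
 */
static bool assign_regions(const uint64_t *sizes, int nregions,
			   bool used[RP_BAR_COUNT])
{
	int i;

	for (i = 0; i < nregions; i++) {
		int bar = find_smallest_fitting_bar(used, sizes[i]);

		if (bar < 0)
			return false;
		used[bar] = true;
	}
	return true;
}

int main(void)
{
	/* e.g. dma-ranges yielding a 4 GB and a 1 GB region */
	const uint64_t sizes[] = { 4ULL << 30, 1ULL << 30 };
	bool used[RP_BAR_COUNT] = { false };

	/* 4 GB -> BAR0 (BAR1 is too small), then 1 GB -> BAR1 */
	printf("all regions fit: %d\n", assign_regions(sizes, 2, used));
	return 0;
}

Thanks
Kishon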