Date: Wed, 18 Jul 2018 16:53:59 -0500
From: Bjorn Helgaas
To: Alex_Gagniuc@Dellteam.com
Cc: mr.nuke.me@gmail.com, bhelgaas@google.com, Austin.Bolen@dell.com,
        Shyam.Iyer@dell.com, keith.busch@intel.com, linux-pci@vger.kernel.org,
        linux-kernel@vger.kernel.org, jeffrey.t.kirsher@intel.com,
        ariel.elior@cavium.com, michael.chan@broadcom.com,
        ganeshgr@chelsio.com, tariqt@mellanox.com,
        jakub.kicinski@netronome.com, talgi@mellanox.com, airlied@gmail.com,
        alexander.deucher@amd.com, Mike Marciniszyn
Subject: Re: [PATCH v3] PCI: Check for PCIe downtraining conditions
Message-ID: <20180718215359.GG128988@bhelgaas-glaptop.roam.corp.google.com>
References: <20180604155523.14906-1-mr.nuke.me@gmail.com>
 <20180716211706.GB12391@bhelgaas-glaptop.roam.corp.google.com>
 <97a70a71e1034bafbcabc6c4e23577c0@ausx13mps321.AMER.DELL.COM>
In-Reply-To: <97a70a71e1034bafbcabc6c4e23577c0@ausx13mps321.AMER.DELL.COM>

[+cc Mike (hfi1)]

On Mon, Jul 16, 2018 at 10:28:35PM +0000, Alex_Gagniuc@Dellteam.com wrote:
> On 7/16/2018 4:17 PM, Bjorn Helgaas wrote:
> >> ...
> >> The easiest way to detect this is with pcie_print_link_status(),
> >> since the bottleneck is usually the link that is downtrained. It's
> >> not a perfect solution, but it works extremely well in most cases.
> >
> > This is an interesting idea. I have two concerns:
> >
> > Some drivers already do this on their own, and we probably don't want
> > duplicate output for those devices. In most cases (ixgbe and mlx* are
> > exceptions), the drivers do this unconditionally so we *could* remove
> > it from the driver if we add it to the core. The dmesg order would
> > change, and the message wouldn't be associated with the driver as it
> > now is.
>
> Oh, there are only 8 users of that. Even I could patch up the drivers to
> remove the call, assuming we reach agreement about this change.
>
> > Also, I think some of the GPU devices might come up at a lower speed,
> > then download firmware, then reset the device so it comes up at a
> > higher speed. I think this patch will make us complain about the low
> > initial speed, which might confuse users.
> I spoke to one of the PCIe spec writers. It's allowable for a device to
> downtrain speed or width. It would also be extremely dumb to downtrain
> with the intent to re-train at a higher speed later, but it's possible
> devices do dumb stuff like that. That's why it's an informational
> message, instead of a warning.

FWIW, here's some of the discussion related to hfi1 from [1]:

  > Btw, why is the driver configuring the PCIe link speed? Isn't
  > this something we should be handling in the PCI core?

  The device comes out of reset at the 5GT/s speed. The driver downloads
  device firmware, programs PCIe registers, and co-ordinates the
  transition to 8GT/s.

  This recipe is device specific and is therefore implemented in the
  hfi1 driver built on top of PCI core functions and macros.

Also several DRM drivers seem to do this (see cik_pcie_gen3_enable(),
si_pcie_gen3_enable()); from [2]:

  My understanding was that some platforms only bring up the link in
  gen 1 mode for compatibility reasons.

[1] https://lkml.kernel.org/r/32E1700B9017364D9B60AED9960492BC627FF54C@fmsmsx120.amr.corp.intel.com
[2] https://lkml.kernel.org/r/BN6PR12MB1809BD30AA5B890C054F9832F7B50@BN6PR12MB1809.namprd12.prod.outlook.com

> Another case: Some devices (lower-end GPUs) use silicon (and marketing)
> that advertises x16, but they're only routed for x8. I'm okay with
> seeing an informational message in this case. In fact, I didn't know
> that the Quadro card I've had for three years is only wired for x8
> until I was testing this patch.

Yeah, it's probably OK. I don't want bug reports from people who think
something's broken when it's really just a hardware limitation of their
system. But hopefully the message is not alarming.

> > So I'm not sure whether it's better to do this in the core for all
> > devices, or if we should just add it to the high-performance drivers
> > that really care.
>
> You're thinking "do I really need that bandwidth" because I'm using a
> function called "_bandwidth_".
> The point of the change is very far from that: it is to help in system
> troubleshooting by detecting downtraining conditions.

I'm not sure what you think I'm thinking :) My question is whether it's
worthwhile to print this extra information for *every* PCIe device,
given that your use case is the tiny percentage of broken systems.

If we only printed the info in the "bw_avail < bw_cap" case, i.e., when
the device is capable of more than it's getting, that would make a lot
of sense to me. The normal case line is more questionable. I think the
reason that's there is because the network drivers are very performance
sensitive and like to see that info all the time.

Maybe we need something like this:

  pcie_print_link_status(struct pci_dev *dev, int verbose)
  {
    ...
    if (bw_avail >= bw_cap) {
      if (verbose)
        pci_info(dev, "... available PCIe bandwidth ...");
    } else
      pci_info(dev, "... available PCIe bandwidth, limited by ...");
  }

So the core could print only the potential problems with:

  pcie_print_link_status(dev, 0);

and drivers that really care even if there's no problem could do:

  pcie_print_link_status(dev, 1);

> >> Signed-off-by: Alexandru Gagniuc
> [snip]
> >> +	/* Look from the device up to avoid downstream ports with no devices. */
> >> +	if ((pci_pcie_type(dev) != PCI_EXP_TYPE_ENDPOINT) &&
> >> +	    (pci_pcie_type(dev) != PCI_EXP_TYPE_LEG_END) &&
> >> +	    (pci_pcie_type(dev) != PCI_EXP_TYPE_UPSTREAM))
> >> +		return;
> >
> > Do we care about Upstream Ports here?
>
> YES! Switches. e.g. an x16 switch with 4x downstream ports could
> downtrain at 8x and 4x, and we'd never catch it.

OK, I think I see your point: if the upstream port *could* do 16x but
only trains to 4x, and two endpoints below it are both capable of 4x,
the endpoints *think* they're happy but in fact they have to share 4x
when they could use more.

Bjorn