Wysocki" To: Mika Westerberg , Keith Busch Cc: Lukas Wunner , Bjorn Helgaas , linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Alex Williamson , Alexandru Gagniuc Subject: Re: [PATCH] PCI/PME: Fix race on PME polling Date: Tue, 18 Jun 2019 00:41:01 +0200 Message-ID: <2521908.csJO6TsRBn@kreacher> In-Reply-To: <20190617143510.GT2640@lahna.fi.intel.com> References: <0113014581dbe2d1f938813f1783905bd81b79db.1560079442.git.lukas@wunner.de> <1957149.eOSnrBRbHu@kreacher> <20190617143510.GT2640@lahna.fi.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 7Bit Content-Type: text/plain; charset="us-ascii" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Monday, June 17, 2019 4:35:10 PM CEST Mika Westerberg wrote: > On Mon, Jun 17, 2019 at 12:37:06PM +0200, Rafael J. Wysocki wrote: > > On Sunday, June 9, 2019 1:29:33 PM CEST Lukas Wunner wrote: > > > Since commit df17e62e5bff ("PCI: Add support for polling PME state on > > > suspended legacy PCI devices"), the work item pci_pme_list_scan() polls > > > the PME status flag of devices and wakes them up if the bit is set. > > > > > > The function performs a check whether a device's upstream bridge is in > > > D0 for otherwise the device is inaccessible, rendering PME polling > > > impossible. However the check is racy because it is performed before > > > polling the device. If the upstream bridge runtime suspends to D3hot > > > after pci_pme_list_scan() checks its power state and before it invokes > > > pci_pme_wakeup(), the latter will read the PMCSR as "all ones" and > > > mistake it for a set PME status flag. I am seeing this race play out as > > > a Thunderbolt controller going to D3cold and occasionally immediately > > > going to D0 again because PM polling was performed at just the wrong > > > time. > > > > > > Avoid by checking for an "all ones" PMCSR in pci_check_pme_status(). > > > > > > Fixes: 58ff463396ad ("PCI PM: Add function for checking PME status of devices") > > > Tested-by: Mika Westerberg > > > Signed-off-by: Lukas Wunner > > > Cc: stable@vger.kernel.org # v2.6.34+ > > > Cc: Rafael J. Wysocki > > > --- > > > drivers/pci/pci.c | 2 ++ > > > 1 file changed, 2 insertions(+) > > > > > > diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c > > > index 8abc843b1615..eed5db9f152f 100644 > > > --- a/drivers/pci/pci.c > > > +++ b/drivers/pci/pci.c > > > @@ -1989,6 +1989,8 @@ bool pci_check_pme_status(struct pci_dev *dev) > > > pci_read_config_word(dev, pmcsr_pos, &pmcsr); > > > if (!(pmcsr & PCI_PM_CTRL_PME_STATUS)) > > > return false; > > > + if (pmcsr == 0xffff) > > > + return false; > > > > > > /* Clear PME status. */ > > > pmcsr |= PCI_PM_CTRL_PME_STATUS; > > > > > > > Added to my 5.3 queue, thanks! > > Today when doing some PM testing I noticed that this patch actually > reveals an issue in our native PME handling. Problem is in > pcie_pme_handle_request() where we first convert req_id to struct > pci_dev and then call pci_check_pme_status() for it. Now, when a device > triggers wake the link is first brought up and then the PME is sent to > root complex with req_id matching the originating device. However, if > there are PCIe ports in the middle they may still be in D3 which means > that pci_check_pme_status() returns 0xffff for the device below so there > are lots of > > Spurious native interrupt" > > messages in the dmesg but the actual PME is never handled. 
>
> It has been working because pci_check_pme_status() returned true in case
> of 0xffff as well and we went and runtime resumed the originating device.

In this case 0xffff is as good as PME Status being set, that is, the device
needs to be resumed.

This is a regression introduced by the $subject patch, not a bug in the PME
code.

> I think the correct way to handle this is actually to drop the call to
> pci_check_pme_status() in pcie_pme_handle_request(), because the whole
> idea of the req_id in the PME message is to allow the root complex and SW
> to identify the device without the need to poll for the PME status bit.

Not really, because if there is a PCIe-to-PCI bridge below the port, it is
expected to use the req_id of the bridge for all of the devices below it.

I'm going to drop this patch from my queue.
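
To make the two paths being debated concrete, here is a rough sketch of the
native PME handling flow as described above.  It is not the actual code from
drivers/pci/pcie/pme.c; the function name pme_handle_request_sketch() and its
structure are illustrative assumptions, and only the standard helpers used
(pci_get_domain_bus_and_slot(), pci_check_pme_status(), pm_request_resume(),
pci_dev_put()) are real kernel interfaces.  It shows why the req_id alone is
not always enough (the PCIe-to-PCI bridge case) and where the all-ones PMCSR
read bites when intermediate ports are still in D3.

#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include "../pci.h"	/* pci_check_pme_status() is a PCI-core internal */

/* Illustrative sketch only -- not the actual pcie_pme_handle_request(). */
static void pme_handle_request_sketch(struct pci_dev *port, u16 req_id)
{
	int domain = pci_domain_nr(port->bus);
	u8 busnr = req_id >> 8, devfn = req_id & 0xff;
	struct pci_dev *dev, *child;

	/* Resolve the requester ID carried by the PME message. */
	dev = pci_get_domain_bus_and_slot(domain, busnr, devfn);
	if (!dev)
		return;

	if (pci_is_bridge(dev) && dev->subordinate) {
		/*
		 * A PCIe-to-PCI bridge signals PME with its own requester
		 * ID on behalf of the conventional PCI devices below it,
		 * so the originating device can only be found by polling
		 * the PME Status bit of each device on the secondary bus.
		 * This is why pci_check_pme_status() cannot simply be
		 * dropped from the handler.
		 */
		list_for_each_entry(child, &dev->subordinate->devices,
				    bus_list)
			if (pci_check_pme_status(child))
				pm_request_resume(&child->dev);
	} else if (pci_check_pme_status(dev)) {
		/*
		 * If a port between the root port and this device is still
		 * in D3, the PMCSR read returns all ones.  Before the
		 * $subject patch that still counted as "PME Status set" and
		 * the device was resumed anyway; with the patch the check
		 * returns false and the PME is never handled, which is the
		 * problem reported above.
		 */
		pm_request_resume(&dev->dev);
	}

	pci_dev_put(dev);
}

The real handler also does locking around the bus walk and handles the case
where the requester ID cannot be resolved at all; those details are omitted
from the sketch.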