Date: Fri, 9 Aug 2019 20:28:52 +0200
From: Lukas Wunner
To: sathyanarayanan kuppuswamy
Cc: Bjorn Helgaas, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, Xiongfeng Wang
Subject: Re: [PATCH] PCI: pciehp: Avoid returning prematurely from sysfs requests
Message-ID: <20190809182852.rkkhng7d5m5xf3tp@wunner.de>
References: <4174210466e27eb7e2243dd1d801d5f75baaffd8.1565345211.git.lukas@wunner.de>
On Fri, Aug 09, 2019 at 10:28:15AM -0700, sathyanarayanan kuppuswamy wrote:
> On 8/9/19 3:28 AM, Lukas Wunner wrote:
> > A sysfs request to enable or disable a PCIe hotplug slot should not
> > return before it has been carried out. That is sought to be achieved
> > by waiting until the controller's "pending_events" have been cleared.
> >
> > However the IRQ thread pciehp_ist() clears the "pending_events" before
> > it acts on them. If pciehp_sysfs_enable_slot() / _disable_slot() happen
> > to check the "pending_events" after they have been cleared but while
> > pciehp_ist() is still running, the functions may return prematurely
> > with an incorrect return value.
>
> Can this be fixed by changing the sequence of clearing the
> pending_events in pciehp_ist()?

It can't.  The processing logic is such that pciehp_ist() atomically
removes bits from pending_events and acts upon them.  Simultaneously,
new events may be queued up by adding bits to pending_events (through
a hardirq handled by pciehp_isr(), through a sysfs request, etc).
Those will be handled in an additional iteration of pciehp_ist().

If I'd delay removing bits from pending_events, I then couldn't tell
whether new events have accumulated while others were being processed.
E.g. a PDS event may occur while another one is being processed.  The
second PDS event may signify a card removal immediately after the card
has been brought up.  It's crucial not to lose the second PDS event
but to act properly on it by bringing the slot down again.

This way of processing events also allows me to easily filter events.
E.g. we tolerate link flaps occurring during the first 100 ms after
enabling the slot simply by atomically removing bits from
pending_events at a certain point.  See commit 6c35a1ac3da6
("PCI: pciehp: Tolerate initially unstable link").
Now what I *could* do would be to make the events currently being
processed public, e.g. by adding an "atomic_t current_events" to
struct controller.  Then I could wait in pciehp_sysfs_enable_slot() /
_disable_slot() until both "pending_events" and "current_events"
become empty.  But it would basically amount to the same as this
patch, and we don't really need to know *which* events are being
processed, only the *fact* that events are being processed.

Let me know if you have further questions regarding the pciehp
processing logic.

Thanks,

Lukas