From: "Zheng, Lv"
To: "Rafael J. Wysocki"
CC: Linux ACPI, Mika Westerberg, Srinivas Pandruvada, Linux PCI, LKML,
 "Moore, Robert"
Subject: RE: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
Date: Tue, 15 Aug 2017 09:59:00 +0000
Message-ID: <1AE640813FDE7649BE1B193DEA596E886CF0B3F2@SHSMSX101.ccr.corp.intel.com>
References: <12346760.yAFCnkEgf6@aspire.rjw.lan>
 <5185778.X36DhrMnGV@aspire.rjw.lan>
 <1AE640813FDE7649BE1B193DEA596E886CF09085@SHSMSX101.ccr.corp.intel.com>
 <3677056.OS19FbOdkF@aspire.rjw.lan>
In-Reply-To: <3677056.OS19FbOdkF@aspire.rjw.lan>

Hi,

> From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
>
> On Friday, August 11, 2017 7:40:56 AM CEST Zheng, Lv wrote:
> > Hi, Rafael
> >
> > > From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> > > Subject: Re: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> > >
> > > On Thursday, August 10, 2017 3:48:58 AM CEST Zheng, Lv wrote:
> > > > Hi, Rafael
> > > >
> > > > > From: Rafael J. Wysocki [mailto:rjw@rjwysocki.net]
> > > > > Subject: [PATCH 1/3] ACPICA: Dispatch active GPEs at init time
> > > > >
> > > > > From: Rafael J. Wysocki
> > > > >
> > > > > In some cases GPEs are already active when they are enabled by
> > > > > acpi_ev_initialize_gpe_block() and whatever happens next may depend
> > > > > on the result of handling the events signaled by them, so the
> > > > > events should not be discarded (which is what happens currently) and
> > > > > they should be handled as soon as reasonably possible.
> > > > >
> > > > > For this reason, modify acpi_ev_initialize_gpe_block() to
> > > > > dispatch GPEs with the status flag set in-band right after
> > > > > enabling them.
> > > >
> > > > In fact, what we need seems to be invoking acpi_ev_gpe_dispatch()
> > > > right after enabling a GPE. So there are two related conditions:
> > > > 1. The GPE is enabled for the first time.
> > > > 2. The GPE is initialized.
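
To make sure we are talking about the same thing, my understanding of the
idea as code is roughly the following. This is only a sketch, not the actual
patch: the helper name is made up for this mail, gpe_device, gpe_event_info
and gpe_number stand for the values already computed in the per-GPE loop of
acpi_ev_initialize_gpe_block(), and the includes and signatures are assumed
to match the current ACPICA sources under drivers/acpi/acpica/.

/* Assumed build context: drivers/acpi/acpica/ */
#include <acpi/acpi.h>
#include "accommon.h"
#include "acevents.h"

/*
 * Sketch only: enable one GPE and dispatch it if its status bit was
 * already set, instead of silently dropping the pending event.
 */
static void gpe_enable_and_dispatch_sketch(struct acpi_namespace_node *gpe_device,
					   struct acpi_gpe_event_info *gpe_event_info,
					   u32 gpe_number)
{
	acpi_event_status event_status = 0;
	acpi_status status;

	/* Read the STS bit before touching the EN bit */
	status = acpi_hw_get_gpe_status(gpe_event_info, &event_status);
	if (ACPI_FAILURE(status)) {
		return;
	}

	/* The first reference enables the GPE in hardware */
	status = acpi_ev_add_gpe_reference(gpe_event_info);
	if (ACPI_FAILURE(status)) {
		return;
	}

	/* Handle an event that was signaled before the GPE was enabled */
	if (event_status & ACPI_EVENT_FLAG_STATUS_SET) {
		(void)acpi_ev_gpe_dispatch(gpe_device, gpe_event_info,
					   gpe_number);
	}
}
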
> > > >
> > > > And we need to make sure that before acpi_update_all_gpes() is invoked,
> > > > all GPE EN bits are actually disabled.
> > >
> > > But we don't do it today, do we?
> >
> > We don't do that.
> >
> > > And still calling _dispatch() should not be incorrect even if the GPE
> > > has been enabled already at this point. Worst case it just will
> > > queue up the execution of _Lxx/_Exx which may or may not do anything
> > > useful.
> > >
> > > And BTW this is all done under acpi_gbl_gpe_lock so acpi_ev_gpe_detect()
> > > will block on it if run concurrently and we've checked the status, so
> > > we know that the GPE *should* be dispatched, so I sort of fail to see
> > > the problem.
> >
> > There is another related problem:
> > ACPICA clears a GPE's status before enabling it.
> > This has been proven to be wrong, and we have to fix it:
> > https://bugzilla.kernel.org/show_bug.cgi?id=196249
> >
> > Without fixing that issue, in this solution we surely need to save the
> > GPE STS bit before incrementing the GPE reference count, and poll the GPE
> > according to the saved STS bit, because if we poll it after enabling, the
> > STS bit will have been wrongly cleared.
>
> I'm not sure if I understand you correctly, but why would we poll it?
>
> In the $subject patch the status is checked and then
> acpi_ev_add_gpe_reference() is called to add a reference to the GPE.
>
> If this is the first reference (which will be the case in the majority
> of cases), acpi_ev_enable_gpe() will be called and that will clear the
> status.
>
> Then, acpi_ev_gpe_dispatch() is called if the status was set and that
> itself doesn't check the status. It disables the GPE upfront (so the
> status doesn't matter from now on until the GPE is enabled again) and
> clears the status unconditionally if the GPE is edge-triggered. This
> means that for edge-triggered GPEs the clearing of the status by
> acpi_ev_enable_gpe() doesn't matter here.

No problem, I understood. And I was thinking that this patch
[edge GPE dispatch fix] should be correct and good for the current Linux
upstream.

What I meant is that PATCH [edge GPE clear fix]
https://patchwork.kernel.org/patch/9894983/
is a fix we need for upstream, as it is the only possible fix for the issue
it addresses.

On top of that fix, the GPE status won't be cleared when acpi_ev_enable_gpe()
is called, and then things can be done in a simpler way:
PATCH [edge GPE enable fix]
https://patchwork.kernel.org/patch/9894989/

As [edge GPE clear fix] is risky, I think [edge GPE dispatch fix] is OK for
the Linux upstream. So we can follow one of two processes:
1. Merge [edge GPE dispatch fix] now and let [edge GPE clear fix] and
   [edge GPE enable fix] be released from the ACPICA upstream first, so that:
   1. We can enhance them in the ACPICA upstream.
   2. It will be safer, regression-wise, for us to merge [edge GPE clear fix]
      later.
2. Merge [edge GPE clear fix] and [edge GPE enable fix] without merging
   [edge GPE dispatch fix].

Either way, it's up to you to decide which process we should take. :)

> For level-triggered GPEs the status is not cleared by acpi_ev_gpe_dispatch()
> itself, but _Lxx is executed (again, without checking the status) and
> finally the status is cleared by acpi_ev_finish_gpe() anyway, so we
> end up with cleared status no matter what (and that before re-enabling
> the GPE). And even if _Lxx itself checked the status (but honestly why
> would it?), then this is a level-triggered GPE, so it will re-trigger
> anyway if not handled this time.

Agreed, so let's skip the level-triggered GPE case.
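
For the edge-triggered case, the status handling you describe in the
dispatch path boils down to something like the sketch below. Again, this is
a simplification for the purpose of this discussion and not the real
acpi_ev_gpe_dispatch() code; the flag and helper names are assumed from the
current ACPICA sources, and it would be built in the same context as the
sketch above.

/*
 * Simplified sketch of the STS handling for an edge-triggered GPE in the
 * dispatch path; not the real acpi_ev_gpe_dispatch() code.
 */
static void edge_gpe_dispatch_sketch(struct acpi_gpe_event_info *gpe_event_info)
{
	/*
	 * The GPE is disabled up front, so its STS bit no longer matters
	 * until the GPE is enabled again.
	 */
	(void)acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_DISABLE);

	/*
	 * For an edge-triggered GPE the status is cleared unconditionally
	 * here, so an earlier clearing in acpi_ev_enable_gpe() makes no
	 * difference to this patch.
	 */
	if ((gpe_event_info->flags & ACPI_GPE_XRUPT_TYPE_MASK) ==
	    ACPI_GPE_EDGE_TRIGGERED) {
		(void)acpi_hw_clear_gpe(gpe_event_info);
	}

	/*
	 * The handler or _Lxx/_Exx method is queued for execution at this
	 * point; for a level-triggered GPE, acpi_ev_finish_gpe() clears the
	 * status and re-enables the GPE after the method has run.
	 */
}
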
> So it looks like the fact that acpi_ev_enable_gpe() clears the status before
> enabling the GPE doesn't matter for this patch, although it may matter in
> general, but that is a different (and somewhat related) issue.

Agreed.

Thanks and best regards
Lv