Date: Wed, 24 Jul 2019 14:36:10 -0600
From: Lina Iyer <ilina@codeaurora.org>
To: Stephen Boyd
Cc: agross@kernel.org, bjorn.andersson@linaro.org, linux-arm-msm@vger.kernel.org,
    linux-soc@vger.kernel.org, rnayak@codeaurora.org, linux-kernel@vger.kernel.org,
    linux-pm@vger.kernel.org, dianders@chromium.org, mkshah@codeaurora.org
Subject: Re: [PATCH V2 2/4] drivers: qcom: rpmh-rsc: avoid locking in the interrupt handler
Message-ID: <20190724203610.GE18620@codeaurora.org>
References: <20190722215340.3071-1-ilina@codeaurora.org>
 <20190722215340.3071-2-ilina@codeaurora.org>
 <5d3769df.1c69fb81.55d03.aa33@mx.google.com>
 <20190724145251.GB18620@codeaurora.org>
 <5d38b38e.1c69fb81.e8e5d.035b@mx.google.com>
In-Reply-To: <5d38b38e.1c69fb81.e8e5d.035b@mx.google.com>

On Wed, Jul 24 2019 at 13:38 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2019-07-24 07:52:51)
>> On Tue, Jul 23 2019 at 14:11 -0600, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2019-07-22 14:53:38)
>> >> Avoid locking in the interrupt context to improve latency. Since we
>> >> don't lock in the interrupt context, it is possible that we could now
>> >> race on the DRV_CONTROL register, which is written to enable the TCS
>> >> and cleared by the interrupt handler. For fire-and-forget requests,
>> >> the interrupt may be raised as soon as the TCS is triggered, and the
>> >> IRQ handler may clear the enable bit before DRV_CONTROL is read back.
>> >>
>> >> Use the non-sync variant when enabling the TCS register to avoid
>> >> reading back a value that may have been cleared because the interrupt
>> >> handler ran immediately after triggering the TCS.
>> >>
>> >> Signed-off-by: Lina Iyer
>> >> ---
>> >
>> >I have to read this patch carefully. The commit text isn't convincing
>> >me that it is actually safe to make this change. It mostly talks about
>> >the performance improvements and how we need to fix __tcs_trigger(),
>> >which is good, but I was hoping to be convinced that not grabbing the
>> >lock here is safe.
>> >
>> >How do we ensure that drv->tcs_in_use is cleared before we call
>> >tcs_write() and try to look for a free bit? Isn't it possible that
>> >we'll get into a situation where the bitmap is all used up, but the
>> >hardware has just received an interrupt and is about to clear out a
>> >bit, and then an rpmh write fails with -EBUSY?
>> >
>> If we get into a situation where there are no free bits available, we
>> retry; that is part of the function. Since we have only two TCSes
>> available to write to the hardware, and there could be multiple
>> requests coming in, it is a very common situation. We try to acquire
>> drv->lock, and if there are free TCSes available we mark them busy and
>> send our requests. If none are available, we keep retrying.
>>
>Ok. I wonder if we need some sort of barriers here too, like an
>smp_mb__after_atomic()? That way we can make sure that the write to
>clear the bit is seen by another CPU that could be spinning forever
>waiting for that bit to be cleared? Before this change the spinlock
>would be guaranteed to make these barriers for us, but now that doesn't
>seem to be the case. I really hope that this whole thing can be changed
>to be a mutex though, in which case we can use the bit_wait() API, etc.
>to put tasks to sleep while RPMh is processing things.
>
We have drivers that want to send requests in atomic contexts, so mutex
locks would not work.

--Lina
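A rough sketch of the claim-and-retry flow Lina describes above: writers take drv->lock only to claim a free TCS and spin (retry) when both are busy, waiting for the interrupt handler to clear a bit. The structure, the MAX_TCS value, and helpers such as find_free_tcs() are illustrative assumptions, not the actual rpmh-rsc code.

	#include <linux/bitops.h>
	#include <linux/errno.h>
	#include <linux/spinlock.h>

	#define MAX_TCS 2	/* illustrative: only two active TCSes to share */

	struct rsc_drv {
		spinlock_t lock;
		DECLARE_BITMAP(tcs_in_use, MAX_TCS);
	};

	/* Claim a free TCS while holding drv->lock; -EBUSY if both are in use. */
	static int find_free_tcs(struct rsc_drv *drv)
	{
		int i = find_first_zero_bit(drv->tcs_in_use, MAX_TCS);

		if (i >= MAX_TCS)
			return -EBUSY;
		set_bit(i, drv->tcs_in_use);
		return i;
	}

	/* Writers retry until the IRQ handler frees a TCS by clearing its bit. */
	static int tcs_write(struct rsc_drv *drv)
	{
		unsigned long flags;
		int tcs_id;

		do {
			spin_lock_irqsave(&drv->lock, flags);
			tcs_id = find_free_tcs(drv);
			spin_unlock_irqrestore(&drv->lock, flags);
		} while (tcs_id == -EBUSY);

		/* ... program the claimed TCS and trigger it ... */
		return 0;
	}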
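And a minimal sketch of the barrier pairing Stephen asks about: clear_bit() does not imply a memory barrier, so an smp_mb__after_atomic() after it would publish the freed bit to a CPU spinning in the writer path, while test_and_set_bit() on the writer side is already fully ordered. Again, the names and layout here are assumptions for illustration, not the driver's implementation.

	#include <linux/atomic.h>
	#include <linux/bitops.h>

	#define MAX_TCS 2	/* illustrative */

	struct rsc_drv {
		DECLARE_BITMAP(tcs_in_use, MAX_TCS);
	};

	/*
	 * IRQ handler side: mark the TCS free again. clear_bit() is a
	 * non-value-returning atomic, so it carries no barrier on its own;
	 * smp_mb__after_atomic() orders the cleared bit before whatever a
	 * spinning writer observes next.
	 */
	static void tcs_tx_done(struct rsc_drv *drv, int tcs_id)
	{
		clear_bit(tcs_id, drv->tcs_in_use);
		smp_mb__after_atomic();
	}

	/*
	 * Writer side: test_and_set_bit() returns a value and is therefore
	 * fully ordered, so once it succeeds the writer sees everything the
	 * IRQ handler published before freeing the TCS.
	 */
	static bool try_claim_tcs(struct rsc_drv *drv, int tcs_id)
	{
		return !test_and_set_bit(tcs_id, drv->tcs_in_use);
	}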