From: Doug Anderson
Date: Wed, 4 Mar 2020 15:22:11 -0800
Subject: Re: [PATCH v10 3/3] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
To: Maulik Shah
Cc: Stephen Boyd, Matthias Kaehlcke, Evan Green, Bjorn Andersson, LKML, linux-arm-msm, Andy Gross, Rajendra Nayak, Lina Iyer, lsrao@codeaurora.org
In-Reply-To: <1583238415-18686-4-git-send-email-mkshah@codeaurora.org>
References: <1583238415-18686-1-git-send-email-mkshah@codeaurora.org> <1583238415-18686-4-git-send-email-mkshah@codeaurora.org>

Hi,

On Tue, Mar 3, 2020 at 4:27 AM Maulik Shah wrote:
>
> Add changes to invoke rpmh flush() from within cache_lock when the data
> in cache is dirty.
>
> This is done only if OSI is not supported in PSCI. If OSI is supported
> rpmh_flush can get invoked when the last cpu going to power collapse
> deepest low power mode.
>
> Also remove "depends on COMPILE_TEST" for Kconfig option QCOM_RPMH so the
> driver is only compiled for arm64 which supports psci_has_osi_support()
> API.
>
> Signed-off-by: Maulik Shah
> Reviewed-by: Srinivas Rao L
> ---
>  drivers/soc/qcom/Kconfig |  2 +-
>  drivers/soc/qcom/rpmh.c  | 37 ++++++++++++++++++++++---------------
>  2 files changed, 23 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> index d0a73e7..2e581bc 100644
> --- a/drivers/soc/qcom/Kconfig
> +++ b/drivers/soc/qcom/Kconfig
> @@ -105,7 +105,7 @@ config QCOM_RMTFS_MEM
>
>  config QCOM_RPMH
>         bool "Qualcomm RPM-Hardened (RPMH) Communication"
> -       depends on ARCH_QCOM && ARM64 || COMPILE_TEST
> +       depends on ARCH_QCOM && ARM64
>         help
>           Support for communication with the hardened-RPM blocks in
>           Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index f28afe4..dafb0da 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -12,6 +12,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -158,6 +159,13 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
>  }
>
>  unlock:
> +       if (ctrlr->dirty && !psci_has_osi_support()) {
> +               if (rpmh_flush(ctrlr)) {
> +                       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +                       return ERR_PTR(-EINVAL);
> +               }
> +       }
> +
>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
>         return req;
> @@ -285,26 +293,35 @@ int rpmh_write(const struct device *dev, enum rpmh_state state,
>  }
>  EXPORT_SYMBOL(rpmh_write);
>
> -static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
> +static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
> {
>         unsigned long flags;
>
>         spin_lock_irqsave(&ctrlr->cache_lock, flags);
> +
>         list_add_tail(&req->list, &ctrlr->batch_cache);
>         ctrlr->dirty = true;
> +
> +       if (!psci_has_osi_support()) {
> +               if (rpmh_flush(ctrlr)) {

The whole API here is a bit unfortunate.
From what I can tell, callers of this code almost always call
rpmh_write_batch() in triplicate, AKA:

  rpmh_write_batch(active, ...)
  rpmh_write_batch(wake, ...)
  rpmh_write_batch(sleep, ...)

...that's going to end up writing the whole sleep/wake sets twice
every single time, right?  I know you talked about trying to keep
separate dirty bits for sleep/wake and maybe that would help, but it
might not be so easy due to the comparison of "sleep_val" and
"wake_val" in is_req_valid().  I guess we can keep the inefficiency
for now and see how much it hits us, but it feels ugly.

> +                       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +                       return -EINVAL;

nit: why not add "int ret = 0" to the top of the function, then here:

  if (rpmh_flush(ctrl))
    ret = -EINVAL;

...then at the end "return ret".  It avoids the 2nd copy of the unlock?

Also: Why throw away the return value of rpmh_flush and replace it
with -EINVAL?  Trying to avoid -EBUSY?  ...oh, should you handle
-EBUSY?  AKA:

  if (!psci_has_osi_support()) {
    do {
      ret = rpmh_flush(ctrl);
    } while (ret == -EBUSY);
  }

> +               }
> +       }
> +
>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
> +
> +       return 0;
>  }
>
>  static int flush_batch(struct rpmh_ctrlr *ctrlr)
>  {
>         struct batch_cache_req *req;
>         const struct rpmh_request *rpm_msg;
> -       unsigned long flags;
>         int ret = 0;
>         int i;
>
>         /* Send Sleep/Wake requests to the controller, expect no response */
> -       spin_lock_irqsave(&ctrlr->cache_lock, flags);
>         list_for_each_entry(req, &ctrlr->batch_cache, list) {
>                 for (i = 0; i < req->count; i++) {
>                         rpm_msg = req->rpm_msgs + i;
> @@ -314,7 +331,6 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
>                         break;
>                 }
>         }
> -       spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>
>         return ret;
>  }
> @@ -386,10 +402,8 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>                 cmd += n[i];
>         }
>
> -       if (state != RPMH_ACTIVE_ONLY_STATE) {
> -               cache_batch(ctrlr, req);
> -               return 0;
> -       }
> +       if (state != RPMH_ACTIVE_ONLY_STATE)
> +               return cache_batch(ctrlr, req);
>
>         for (i = 0; i < count; i++) {
>                 struct completion *compl = &compls[i];
> @@ -455,9 +469,6 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>   * Return: -EBUSY if the controller is busy, probably waiting on a response
>   * to a RPMH request sent earlier.
>   *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.

nit: you've now got an extra "blank" (just has a "*" on it) line at
the end of your comment block.

nit: in v9, Evan suggested "We should probably replace that with a
comment indicating that we assume ctrlr->cache_lock is already held".
Maybe you could do that?

Also: presumably you _will_ still be called by the sleep code from
the last CPU on systems with OSI.  Is that true?  If that's not true
then you should change your function to static.  If that is true,
then your comment should be something like "this function will either
be called from sleep code on the last CPU (thus no spinlock needed)
or with the spinlock already held".

-Doug
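Putting the two suggestions above together (a single unlock path via "int ret = 0", and retrying rpmh_flush() while it returns -EBUSY), cache_batch() might end up looking roughly like the sketch below. This is an untested sketch, not what the v10 patch does: it assumes the answer to the -EBUSY question is "yes, retry", and it leans on the driver's existing internals (struct rpmh_ctrlr, struct batch_cache_req, rpmh_flush()) plus psci_has_osi_support() from <linux/psci.h>.

  static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
  {
          unsigned long flags;
          int ret = 0;

          spin_lock_irqsave(&ctrlr->cache_lock, flags);

          list_add_tail(&req->list, &ctrlr->batch_cache);
          ctrlr->dirty = true;

          /*
           * Without OSI the sleep/wake sets must be written out every time
           * the cache is dirtied; keep retrying while the controller is busy.
           */
          if (!psci_has_osi_support()) {
                  do {
                          ret = rpmh_flush(ctrlr);
                  } while (ret == -EBUSY);
          }

          /* Single unlock path; ret carries rpmh_flush()'s real error code. */
          spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

          return ret;
  }

Whether spinning on -EBUSY while holding cache_lock with interrupts disabled is acceptable is exactly the open question raised in the review; the sketch only illustrates the control flow being discussed.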