Subject: Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches
From: Stephen Boyd
To: Maulik Shah, bjorn.andersson@linaro.org, dianders@chromium.org, evgreen@chromium.org
Cc: linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org, agross@kernel.org, mka@chromium.org, rnayak@codeaurora.org, ilina@codeaurora.org, lsrao@codeaurora.org, Maulik Shah
Date: Thu, 09 Apr 2020 21:15:31 -0700
Message-ID: <158649213142.77611.5740203322498170248@swboyd.mtv.corp.google.com>
In-Reply-To: <1586154741-8293-5-git-send-email-mkshah@codeaurora.org>
References: <1586154741-8293-1-git-send-email-mkshah@codeaurora.org> <1586154741-8293-5-git-send-email-mkshah@codeaurora.org>
User-Agent: alot/0.9

Quoting Maulik Shah (2020-04-05 23:32:19)
> Add changes to invoke rpmh flush() from CPU PM notification.
> This is done when the last the cpu is entering power collapse and

'power collapse' is a qcom-ism. Maybe say something like "deep CPU idle
states"?

> controller is not busy.
>
> Controllers that do have 'HW solver' mode do not need to register
> for CPU PM notification. They may be in autonomous mode executing
> low power mode and do not require rpmh_flush() to happen from CPU
> PM notification.

Can you provide an example of a HW solver mode controller? Presumably
the display RSC is one of these?

>
> Signed-off-by: Maulik Shah
> Reviewed-by: Douglas Anderson
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index b718221..fbe1f3e 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -521,8 +527,86 @@ int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
>         return tcs_ctrl_write(drv, msg);
>  }
>
> +/**
> + * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
> + *
> + * @drv: The controller
> + *
> + * Checks if any of the AMCs are busy in handling ACTIVE sets.
> + * This is called from the last cpu powering down before flushing
> + * SLEEP and WAKE sets. If AMCs are busy, controller can not enter
> + * power collapse, so deny from the last cpu's pm notification.
> + *
> + * Return:
> + * * False - AMCs are idle
> + * * True  - AMCs are busy
> + */
> +static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)

Can drv be const? Would be nice to make it const in some places in this
driver.

> +{
> +       int m;
> +       struct tcs_group *tcs = get_tcs_of_type(drv, ACTIVE_TCS);
> +
> +       /*
> +        * If we made an active request on a RSC that does not have a
> +        * dedicated TCS for active state use, then re-purposed wake TCSes
> +        * should be checked for not busy.

not busy, because we use wake TCSes for active requests in this case.

> +        *
> +        * Since this is called from the last cpu, need not take drv or tcs
> +        * lock before checking tcs_is_free().
> +        */
> +       if (!tcs->num_tcs)
> +               tcs = get_tcs_of_type(drv, WAKE_TCS);
> +
> +       for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
> +               if (!tcs_is_free(drv, m))
> +                       return true;
> +       }
> +
> +       return false;
> +}

[...]

> +
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index a75f3df..88f8801 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -433,16 +430,17 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
>  }
>
>  /**
> - * rpmh_flush: Flushes the buffered active and sleep sets to TCS
> + * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
>   *
> - * @ctrlr: controller making request to flush cached data
> + * @ctrlr: Controller making request to flush cached data
>   *
> - * Return: -EBUSY if the controller is busy, probably waiting on a response
> - * to a RPMH request sent earlier.
> + * This function is called from sleep code on the last CPU
> + * (thus no spinlock needed).

Might be good to stick a lockdep_assert_irqs_disabled() in this function
then. That would document that this function should only be called when
irqs are disabled.
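
Something like this at the top of rpmh_flush(), perhaps (untested
sketch, just to illustrate the idea; exact placement is up to you):

        int rpmh_flush(struct rpmh_ctrlr *ctrlr)
        {
                /*
                 * Document/enforce the calling contract: last CPU,
                 * irqs disabled, so no locking needed here.
                 */
                lockdep_assert_irqs_disabled();

                /* ... existing flush logic ... */
        }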
>   *
> - * This function is always called from the sleep code from the last CPU
> - * that is powering down the entire system. Since no other RPMH API would be
> - * executing at this time, it is safe to run lockless.
> + * Return:
> + * * 0          - Success
> + * * -EAGAIN    - Retry again
> + * * Error code - Otherwise
>   */
>  int rpmh_flush(struct rpmh_ctrlr *ctrlr)

This function name keeps throwing me off. Can we please call it
something like rpmh_configure_tcs_sleep_wake()? The word "flush" seems
to imply there's some sort of cache going on, but that's not really the
case. We're programming a couple TCS FIFOs so that they can be used
across deep CPU sleep states.

>  {
> @@ -455,9 +453,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
>         }
>
>         /* Invalidate the TCSes first to avoid stale data */
> -       do {
> -               ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
> -       } while (ret == -EAGAIN);
> +       ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
>         if (ret)
>                 return ret;
>
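
To be concrete about the naming comment above, it would purely be a
mechanical rename (plus updating the call sites), along the lines of:

        -int rpmh_flush(struct rpmh_ctrlr *ctrlr)
        +int rpmh_configure_tcs_sleep_wake(struct rpmh_ctrlr *ctrlr)

with no functional change intended.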