From: Maulik Shah
To:
CC: Maulik Shah
Subject: [PATCH v2 6/6] soc: qcom: rpmh-rsc: Write CONTROL_TCS with next timer wakeup
Date: Wed, 11 May 2022 18:46:56 +0530
Message-ID: <1652275016-13423-7-git-send-email-quic_mkshah@quicinc.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1652275016-13423-1-git-send-email-quic_mkshah@quicinc.com>
References: <1652275016-13423-1-git-send-email-quic_mkshah@quicinc.com>

The next wakeup timer value needs to be programmed into the always-on
domain timer because the arch timer interrupt cannot wake up the SoC
when, after the deepest CPUidle state, the SoC also enters its deepest
low power state. To wake up the SoC in such scenarios, the earliest
wakeup time is written to CONTROL_TCS; the firmware then programs its
own timer in the always-on domain with that wakeup time. When the timer
fires, it wakes up the RSC and sets the resources back to the wake
state.

Signed-off-by: Maulik Shah
---
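Note for reviewers (illustration only; not part of the diff below):
USECS_TO_CYCLES() reuses the fixed-point constant from the kernel's
udelay() conversion, 0x10C7 ~= 2^32 / 10^6, so multiplying by it and
later shifting right by 32 approximates a divide by one million. On a
system whose delay loop is driven by the arch timer, loops_per_jiffy * HZ
is roughly the timer frequency, so the macro turns a microsecond
interval into timer cycles. The small standalone sketch below walks
through that arithmetic; the 19.2 MHz rate and the 1.5 ms interval are
assumed example values, not taken from the patch.

/*
 * Illustration only: the USECS_TO_CYCLES() arithmetic with example
 * numbers.  19.2 MHz stands in for loops_per_jiffy * HZ, which with an
 * arch-timer-backed delay loop is roughly the timer frequency; both
 * values below are assumptions.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t wakeup_us = 1500;          /* example: wake up in 1.5 ms */
        uint64_t timer_hz  = 19200000;      /* assumed 19.2 MHz arch timer */

        /* (us * 0x10C7 * hz) >> 32  ~=  us * hz / 10^6 */
        uint64_t cycles = (wakeup_us * 0x10C7ULL * timer_hz) >> 32;

        printf("%llu us -> ~%llu cycles (exact: %llu)\n",
               (unsigned long long)wakeup_us,
               (unsigned long long)cycles,
               (unsigned long long)(wakeup_us * timer_hz / 1000000));
        return 0;
}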
 drivers/soc/qcom/rpmh-internal.h |  3 ++
 drivers/soc/qcom/rpmh-rsc.c      | 61 ++++++++++++++++++++++++++++++++++++++++
 drivers/soc/qcom/rpmh.c          |  4 ++-
 3 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 7866bb1..39f5358 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -112,6 +112,7 @@ struct rpmh_ctrlr {
  * @tcs_wait:   Wait queue used to wait for @tcs_in_use to free up a
  *              slot
  * @client:     Handle to the DRV's client.
+ * @dev:        RSC device.
  */
 struct rsc_drv {
         const char *name;
@@ -127,12 +128,14 @@ struct rsc_drv {
         spinlock_t lock;
         wait_queue_head_t tcs_wait;
         struct rpmh_ctrlr client;
+        struct device *dev;
 };
 
 int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
 int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
                              const struct tcs_request *msg);
 void rpmh_rsc_invalidate(struct rsc_drv *drv);
+void rpmh_rsc_write_next_wakeup(struct rsc_drv *drv);
 
 void rpmh_tx_done(const struct tcs_request *msg, int r);
 int rpmh_flush(struct rpmh_ctrlr *ctrlr);
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index 8e01697..25b838b 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -25,6 +26,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -49,6 +51,14 @@
 #define DRV_NCPT_MASK                   0x1F
 #define DRV_NCPT_SHIFT                  27
 
+/* Offsets for CONTROL TCS Registers */
+#define RSC_DRV_CTL_TCS_DATA_HI         0x38
+#define RSC_DRV_CTL_TCS_DATA_HI_MASK    0xFFFFFF
+#define RSC_DRV_CTL_TCS_DATA_HI_VALID   BIT(31)
+#define RSC_DRV_CTL_TCS_DATA_LO         0x40
+#define RSC_DRV_CTL_TCS_DATA_LO_MASK    0xFFFFFFFF
+#define RSC_DRV_CTL_TCS_DATA_SIZE       32
+
 /* Offsets for common TCS Registers, one bit per TCS */
 #define RSC_DRV_IRQ_ENABLE              0x00
 #define RSC_DRV_IRQ_STATUS              0x04
@@ -142,6 +152,14 @@
  * +---------------------------------------------------+
  */
 
+#define USECS_TO_CYCLES(time_usecs)                     \
+        xloops_to_cycles((time_usecs) * 0x10C7UL)
+
+static inline unsigned long xloops_to_cycles(unsigned long xloops)
+{
+        return (xloops * loops_per_jiffy * HZ) >> 32;
+}
+
 static inline void __iomem *
 tcs_reg_addr(const struct rsc_drv *drv, int reg, int tcs_id)
 {
@@ -757,6 +775,48 @@ static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
 }
 
 /**
+ * rpmh_rsc_write_next_wakeup() - Write next wakeup in CONTROL_TCS.
+ * @drv: The controller
+ *
+ * Writes maximum wakeup cycles when called from suspend.
+ * Writes earliest hrtimer wakeup when called from idle.
+ */
+void rpmh_rsc_write_next_wakeup(struct rsc_drv *drv)
+{
+        ktime_t now, wakeup;
+        u64 wakeup_us, wakeup_cycles = ~0;
+        u32 lo, hi;
+
+        if (!drv->tcs[CONTROL_TCS].num_tcs || !drv->genpd_nb.notifier_call)
+                return;
+
+        /* Set highest time when system (timekeeping) is suspended */
+        if (system_state == SYSTEM_SUSPEND)
+                goto exit;
+
+        /* Find the earliest hrtimer wakeup from online cpus */
+        wakeup = dev_pm_genpd_get_next_hrtimer(drv->dev);
+
+        /* Find the relative wakeup in kernel time scale */
+        now = ktime_get();
+        wakeup = ktime_sub(wakeup, now);
+        wakeup_us = ktime_to_us(wakeup);
+
+        /* Convert the wakeup to arch timer scale */
+        wakeup_cycles = USECS_TO_CYCLES(wakeup_us);
+        wakeup_cycles += arch_timer_read_counter();
+
+exit:
+        lo = wakeup_cycles & RSC_DRV_CTL_TCS_DATA_LO_MASK;
+        hi = wakeup_cycles >> RSC_DRV_CTL_TCS_DATA_SIZE;
+        hi &= RSC_DRV_CTL_TCS_DATA_HI_MASK;
+        hi |= RSC_DRV_CTL_TCS_DATA_HI_VALID;
+
+        writel_relaxed(lo, drv->base + RSC_DRV_CTL_TCS_DATA_LO);
+        writel_relaxed(hi, drv->base + RSC_DRV_CTL_TCS_DATA_HI);
+}
+
+/**
  * rpmh_rsc_cpu_pm_callback() - Check if any of the AMCs are busy.
  * @nfb: Pointer to the notifier block in struct rsc_drv.
  * @action: CPU_PM_ENTER, CPU_PM_ENTER_FAILED, or CPU_PM_EXIT.
@@ -1035,6 +1095,7 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
         INIT_LIST_HEAD(&drv->client.batch_cache);
 
         dev_set_drvdata(&pdev->dev, drv);
+        drv->dev = &pdev->dev;
 
         ret = devm_of_platform_populate(&pdev->dev);
         if (ret && pdev->dev.pm_domain) {
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 01765ee..3a53ed9 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -450,7 +450,7 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 
         if (!ctrlr->dirty) {
                 pr_debug("Skipping flush, TCS has latest data.\n");
-                goto exit;
+                goto write_next_wakeup;
         }
 
         /* Invalidate the TCSes first to avoid stale data */
@@ -479,6 +479,8 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 
         ctrlr->dirty = false;
 
+write_next_wakeup:
+        rpmh_rsc_write_next_wakeup(ctrlr_to_drv(ctrlr));
 exit:
         spin_unlock(&ctrlr->cache_lock);
         return ret;
-- 
2.7.4
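Illustration only, not part of the patch: the lo/hi split at the end of
rpmh_rsc_write_next_wakeup() packs the 64-bit cycle count into the two
32-bit CONTROL_TCS data registers, keeping 24 bits of the upper half and
setting a valid bit, so the hardware effectively sees a 56-bit wakeup
value. Below is a standalone sketch of that packing, reusing the patch's
masks; the sample cycle value is arbitrary.

/*
 * Illustration only: split a 64-bit wakeup (in timer cycles) into the
 * two 32-bit CONTROL_TCS data register values, as the patch does.
 * The cycle value is made up; the masks match the patch.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define RSC_DRV_CTL_TCS_DATA_HI_MASK    0xFFFFFF
#define RSC_DRV_CTL_TCS_DATA_HI_VALID   (UINT32_C(1) << 31)
#define RSC_DRV_CTL_TCS_DATA_LO_MASK    0xFFFFFFFF
#define RSC_DRV_CTL_TCS_DATA_SIZE       32

int main(void)
{
        uint64_t wakeup_cycles = UINT64_C(0x123456789AB); /* arbitrary example */
        uint32_t lo, hi;

        lo = wakeup_cycles & RSC_DRV_CTL_TCS_DATA_LO_MASK;
        hi = wakeup_cycles >> RSC_DRV_CTL_TCS_DATA_SIZE;
        hi &= RSC_DRV_CTL_TCS_DATA_HI_MASK;
        hi |= RSC_DRV_CTL_TCS_DATA_HI_VALID;

        printf("DATA_LO = 0x%08" PRIx32 ", DATA_HI = 0x%08" PRIx32 "\n", lo, hi);
        return 0;
}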