From: Leo Yan <leo.yan@linaro.org>
To: Georgi Djakov, Andy Gross, Bjorn Andersson, Krzysztof Kozlowski,
 Konrad Dybcio, Rob Herring, Bryan O'Donoghue,
 linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
 devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Leo Yan <leo.yan@linaro.org>
Subject: [PATCH v5 4/5] interconnect: qcom: icc-rpm: Support multiple buckets
Date: Mon, 11 Jul 2022 19:52:39 +0800
Message-Id: <20220711115240.806236-5-leo.yan@linaro.org>
In-Reply-To: <20220711115240.806236-1-leo.yan@linaro.org>
References: <20220711115240.806236-1-leo.yan@linaro.org>

The current interconnect rpm driver uses a single aggregated bandwidth to
calculate the clock rates for both the active and sleep clocks, so it has no
way to separate the bandwidth requests for these two kinds of clocks.

This patch follows the implementation of the interconnect rpmh driver to
support multiple buckets.  The rpmh driver provides three buckets for AMC,
WAKE and SLEEP; this driver only needs the WAKE and SLEEP buckets, but it
keeps the same scheme as the rpmh driver so that the DT binding can be
reused and duplicated data structures are avoided.

This patch introduces two callbacks: qcom_icc_pre_bw_aggregate() cleans up
the bucket values before bandwidth requests are re-aggregated, and
qcom_icc_bw_aggregate() aggregates bandwidth into the buckets selected by
the path tag.
Signed-off-by: Leo Yan <leo.yan@linaro.org>
---
 drivers/interconnect/qcom/icc-rpm.c | 51 ++++++++++++++++++++++++++++-
 drivers/interconnect/qcom/icc-rpm.h |  6 ++++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 8c9d5cc7276c..d27b1582521f 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -254,6 +254,54 @@ static int __qcom_icc_set(struct icc_node *n, struct qcom_icc_node *qn,
 	return 0;
 }
 
+/**
+ * qcom_icc_pre_bw_aggregate - cleans up values before re-aggregate requests
+ * @node: icc node to operate on
+ */
+static void qcom_icc_pre_bw_aggregate(struct icc_node *node)
+{
+	struct qcom_icc_node *qn;
+	size_t i;
+
+	qn = node->data;
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		qn->sum_avg[i] = 0;
+		qn->max_peak[i] = 0;
+	}
+}
+
+/**
+ * qcom_icc_bw_aggregate - aggregate bw for buckets indicated by tag
+ * @node: node to aggregate
+ * @tag: tag to indicate which buckets to aggregate
+ * @avg_bw: new bw to sum aggregate
+ * @peak_bw: new bw to max aggregate
+ * @agg_avg: existing aggregate avg bw val
+ * @agg_peak: existing aggregate peak bw val
+ */
+static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				 u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	size_t i;
+	struct qcom_icc_node *qn;
+
+	qn = node->data;
+
+	if (!tag)
+		tag = QCOM_ICC_TAG_ALWAYS;
+
+	for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+		if (tag & BIT(i)) {
+			qn->sum_avg[i] += avg_bw;
+			qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+		}
+	}
+
+	*agg_avg += avg_bw;
+	*agg_peak = max_t(u32, *agg_peak, peak_bw);
+	return 0;
+}
+
 static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 {
 	struct qcom_icc_provider *qp;
@@ -414,7 +462,8 @@ int qnoc_probe(struct platform_device *pdev)
 	INIT_LIST_HEAD(&provider->nodes);
 	provider->dev = dev;
 	provider->set = qcom_icc_set;
-	provider->aggregate = icc_std_aggregate;
+	provider->pre_aggregate = qcom_icc_pre_bw_aggregate;
+	provider->aggregate = qcom_icc_bw_aggregate;
 	provider->xlate_extended = qcom_icc_xlate_extended;
 	provider->data = data;
 
diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
index ebee9009301e..a49af844ab13 100644
--- a/drivers/interconnect/qcom/icc-rpm.h
+++ b/drivers/interconnect/qcom/icc-rpm.h
@@ -6,6 +6,8 @@
 #ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
 #define __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
 
+#include <dt-bindings/interconnect/qcom,icc.h>
+
 #define RPM_BUS_MASTER_REQ	0x73616d62
 #define RPM_BUS_SLAVE_REQ	0x766c7362
 
@@ -65,6 +67,8 @@ struct qcom_icc_qos {
  * @links: an array of nodes where we can go next while traversing
  * @num_links: the total number of @links
  * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @sum_avg: current sum aggregate value of all avg bw requests
+ * @max_peak: current max aggregate value of all peak bw requests
  * @mas_rpm_id: RPM id for devices that are bus masters
  * @slv_rpm_id: RPM id for devices that are bus slaves
  * @qos: NoC QoS setting parameters
@@ -75,6 +79,8 @@ struct qcom_icc_node {
 	const u16 *links;
 	u16 num_links;
 	u16 buswidth;
+	u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
+	u64 max_peak[QCOM_ICC_NUM_BUCKETS];
 	int mas_rpm_id;
 	int slv_rpm_id;
 	struct qcom_icc_qos qos;
-- 
2.25.1
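
For context, a minimal consumer-side sketch (not part of this patch) of how
the buckets above get exercised.  The device pointer, the "cpu-mem" path name
and the bandwidth numbers are hypothetical; the calls (of_icc_get(),
icc_set_tag(), icc_set_bw()) are the generic interconnect consumer API and
the tag macros come from dt-bindings/interconnect/qcom,icc.h:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>
#include <dt-bindings/interconnect/qcom,icc.h>

static int example_request_wake_bw(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/*
	 * "cpu-mem" is a hypothetical interconnect-names entry in the
	 * consumer's DT node; a real driver would keep the path in its
	 * driver data and release it with icc_put() on remove.
	 */
	path = of_icc_get(dev, "cpu-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/*
	 * Tag the path so the vote below is aggregated only into the WAKE
	 * bucket; an untagged path defaults to QCOM_ICC_TAG_ALWAYS in
	 * qcom_icc_bw_aggregate() and contributes to every bucket.
	 */
	icc_set_tag(path, QCOM_ICC_TAG_WAKE);

	/* Hypothetical bandwidth vote: 200 MB/s average, 400 MB/s peak. */
	ret = icc_set_bw(path, MBps_to_icc(200), MBps_to_icc(400));
	if (ret)
		icc_put(path);

	return ret;
}

Since the probe path in the diff also installs qcom_icc_xlate_extended(), the
same tag could alternatively be supplied as the second cell of the consumer's
interconnects specifier in DT rather than by calling icc_set_tag() at run
time.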