Message-ID: <28bf991f-7b4c-0af1-2780-842500b01a0f@kernel.org>
Date: Thu, 7 Jul 2022 17:33:58 +0300
Subject: Re: [PATCH v4 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values
To: Leo Yan, Andy Gross, Bjorn Andersson, Rob Herring, Krzysztof Kozlowski,
    linux-arm-msm@vger.kernel.org, linux-pm@vger.kernel.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20220705072336.742703-1-leo.yan@linaro.org> <20220705072336.742703-6-leo.yan@linaro.org>
From: Georgi Djakov
In-Reply-To: <20220705072336.742703-6-leo.yan@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 5.07.22 10:23, Leo Yan wrote:
> This commit uses buckets to support bandwidth and clock rates. It
> introduces a new function, qcom_icc_bus_aggregate(), to calculate the
> aggregate average and peak bandwidths for every bucket, and it also
> calculates the maximum aggregate values across all buckets.
>
> The maximum aggregate values are used to calculate the final bandwidth
> requests. Since we can set the clock rate per bucket, we use the SLEEP
> bucket as the default bucket if a platform doesn't enable the
> interconnect path tags in the DT binding; otherwise, we use the WAKE
> bucket to set the active clock and the SLEEP bucket for the other
> clocks. So far we don't use the AMC bucket.
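For anyone following along, the per-bucket aggregation described above boils down to the following standalone sketch. The types and names here are hypothetical stand-ins (the real driver uses struct qcom_icc_node, QCOM_ICC_NUM_BUCKETS and max_t() from the kernel), but the arithmetic mirrors the patch: sum the averages and take the max of the peaks per bucket, then reduce to a single maximum across buckets.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver types; the real code uses
 * struct qcom_icc_node and QCOM_ICC_NUM_BUCKETS from icc-rpm.h. */
#define NUM_BUCKETS 3 /* AMC, WAKE, SLEEP */

struct fake_node {
        uint64_t sum_avg[NUM_BUCKETS];  /* per-bucket average bandwidth */
        uint64_t max_peak[NUM_BUCKETS]; /* per-bucket peak bandwidth */
};

static uint64_t max_u64(uint64_t a, uint64_t b)
{
        return a > b ? a : b;
}

/* Same shape as qcom_icc_bus_aggregate(): sum the averages and take
 * the max of the peaks for every bucket, then reduce to one maximum
 * average and one maximum peak across all buckets. */
static void aggregate(const struct fake_node *nodes, int n,
                      uint64_t *agg_avg, uint64_t *agg_peak,
                      uint64_t *max_agg_avg, uint64_t *max_agg_peak)
{
        int i, b;

        /* Initialise aggregate values */
        for (b = 0; b < NUM_BUCKETS; b++) {
                agg_avg[b] = 0;
                agg_peak[b] = 0;
        }
        *max_agg_avg = 0;
        *max_agg_peak = 0;

        /* Aggregate bandwidth requests for every bucket */
        for (i = 0; i < n; i++) {
                for (b = 0; b < NUM_BUCKETS; b++) {
                        agg_avg[b] += nodes[i].sum_avg[b];
                        agg_peak[b] = max_u64(agg_peak[b],
                                              nodes[i].max_peak[b]);
                }
        }

        /* Find maximum values across all buckets */
        for (b = 0; b < NUM_BUCKETS; b++) {
                *max_agg_avg = max_u64(*max_agg_avg, agg_avg[b]);
                *max_agg_peak = max_u64(*max_agg_peak, agg_peak[b]);
        }
}
```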
>
> Signed-off-by: Leo Yan
> ---
>  drivers/interconnect/qcom/icc-rpm.c | 80 ++++++++++++++++++++++++-----
>  1 file changed, 67 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
> index b025fc6b97c9..4b932eb807c7 100644
> --- a/drivers/interconnect/qcom/icc-rpm.c
> +++ b/drivers/interconnect/qcom/icc-rpm.c
> @@ -302,18 +302,62 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
>          return 0;
>  }
>
> +/**
> + * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
> + * @provider: generic interconnect provider
> + * @agg_avg: an array for aggregated average bandwidth of buckets
> + * @agg_peak: an array for aggregated peak bandwidth of buckets
> + * @max_agg_avg: pointer to max value of aggregated average bandwidth
> + * @max_agg_peak: pointer to max value of aggregated peak bandwidth
> + */
> +static void qcom_icc_bus_aggregate(struct icc_provider *provider,
> +                                   u64 *agg_avg, u64 *agg_peak,
> +                                   u64 *max_agg_avg, u64 *max_agg_peak)
> +{
> +        struct icc_node *node;
> +        struct qcom_icc_node *qn;
> +        int i;
> +
> +        /* Initialise aggregate values */
> +        for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> +                agg_avg[i] = 0;
> +                agg_peak[i] = 0;
> +        }
> +
> +        *max_agg_avg = 0;
> +        *max_agg_peak = 0;
> +
> +        /*
> +         * Iterate nodes on the interconnect and aggregate bandwidth
> +         * requests for every bucket.
> +         */
> +        list_for_each_entry(node, &provider->nodes, node_list) {
> +                qn = node->data;
> +                for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> +                        agg_avg[i] += qn->sum_avg[i];
> +                        agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
> +                }
> +        }
> +
> +        /* Find maximum values across all buckets */
> +        for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> +                *max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
> +                *max_agg_peak = max_t(u64, *max_agg_peak, agg_peak[i]);
> +        }
> +}
> +
>  static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
>  {
>          struct qcom_icc_provider *qp;
>          struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
>          struct icc_provider *provider;
> -        struct icc_node *n;
>          u64 sum_bw;
>          u64 max_peak_bw;
>          u64 rate;
> -        u32 agg_avg = 0;
> -        u32 agg_peak = 0;
> +        u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
> +        u64 max_agg_avg, max_agg_peak;
>          int ret, i;
> +        int bucket;
>
>          src_qn = src->data;
>          if (dst)
> @@ -321,12 +365,11 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
>          provider = src->provider;
>          qp = to_qcom_provider(provider);
>
> -        list_for_each_entry(n, &provider->nodes, node_list)
> -                provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
> -                                    &agg_avg, &agg_peak);
> +        qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg,
> +                               &max_agg_peak);
>
> -        sum_bw = icc_units_to_bps(agg_avg);
> -        max_peak_bw = icc_units_to_bps(agg_peak);
> +        sum_bw = icc_units_to_bps(max_agg_avg);
> +        max_peak_bw = icc_units_to_bps(max_agg_peak);
>
>          ret = __qcom_icc_set(src, src_qn, sum_bw);
>          if (ret)
> @@ -337,12 +380,23 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
>                  return ret;
>          }
>
> -        rate = max(sum_bw, max_peak_bw);

Looks like max_peak_bw is unused now?

> -        do_div(rate, src_qn->buswidth);
> -        rate = min_t(u64, rate, LONG_MAX);
> -
>          for (i = 0; i < qp->num_clks; i++) {
> +                /*
> +                 * Use WAKE bucket for active clock, otherwise, use SLEEP
> +                 * bucket for other clocks. If a platform doesn't set
> +                 * interconnect path tags, by default use sleep bucket
> +                 * for all clocks.
> +                 *
> +                 * Note, AMC bucket is not supported yet.
> +                 */
> +                if (!strcmp(qp->bus_clks[i].id, "bus_a"))
> +                        bucket = QCOM_ICC_BUCKET_WAKE;
> +                else
> +                        bucket = QCOM_ICC_BUCKET_SLEEP;
> +
> +                rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
> +                do_div(rate, src_qn->buswidth);
> +                rate = min_t(u64, rate, LONG_MAX);
> +
>                  if (qp->bus_clk_rate[i] == rate)
>                          continue;

Thanks,
Georgi