2022-07-11 11:58:05

by Leo Yan

Subject: [PATCH v5 0/5] interconnect: qcom: icc-rpm: Support bucket

This patch set adds bucket support to the icc-rpm driver; it implements
a mechanism similar to the one in the icc-rpmh driver.

We can use the interconnect path tag to indicate which buckets a
bandwidth vote targets; there are three kinds of buckets: AMC, WAKE and
SLEEP. The WAKE and SLEEP bucket values are then used to set the
corresponding clocks (the active and sleep clocks). So far we keep the
AMC bucket but don't really use it.
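
For reference, the bucket indexes and path tag bits used throughout this
series come from include/dt-bindings/interconnect/qcom,icc.h; the clock
mapping noted in the comments reflects how this series uses them (only
the WAKE and SLEEP buckets are wired up here):

        #define QCOM_ICC_BUCKET_AMC    0  /* kept, but not used by icc-rpm yet */
        #define QCOM_ICC_BUCKET_WAKE   1  /* drives the active ("bus_a") clock */
        #define QCOM_ICC_BUCKET_SLEEP  2  /* drives the remaining bus clocks */
        #define QCOM_ICC_NUM_BUCKETS   3

        #define QCOM_ICC_TAG_AMC       (1 << QCOM_ICC_BUCKET_AMC)
        #define QCOM_ICC_TAG_WAKE      (1 << QCOM_ICC_BUCKET_WAKE)
        #define QCOM_ICC_TAG_SLEEP     (1 << QCOM_ICC_BUCKET_SLEEP)
        #define QCOM_ICC_TAG_ALWAYS    (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE | \
                                        QCOM_ICC_TAG_SLEEP)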

Patches 01, 02 and 03 enable the interconnect path tag and update the DT
binding document; patches 04 and 05 add bucket support and use the
bucket values to set the bandwidth and clock rates.

Note, this patch set depends on an out-of-tree patch "interconnect:
icc-rpm: Set destination bandwidth as well as source bandwidth" [1].
With that patch applied, this patch set applies cleanly on the Linux
kernel master branch with the latest commit 32346491ddf2
("Linux 5.19-rc6").

[1] https://lore.kernel.org/linux-pm/[email protected]/T/#r304f7b103c806e1570d555a0f5aaf83ae3990ac0


Changes from v4:
- Added Krzysztof's Acked-by tag for the DT binding document patch;
- Fixed the misalignment between the function qcom_icc_pre_bw_aggregate()
  and its comment (Georgi);
- Simplified qcom_icc_bus_aggregate() by removing the unused parameter
  'max_agg_peak';
- Removed the unused local variable 'max_peak_bw' in qcom_icc_set() (Georgi).

Changes from v3:
- Removed $ref and redundant sentence in DT binding document for
'#interconnect-cells' (Krzysztof Kozlowski).

Changes from v2:
- Fixed the DT checker error for the command 'make DT_CHECKER_FLAGS=-m
  dt_binding_check' (Rob Herring).

Changes from v1:
- Added description for property "#interconnect-cells" (Rob Herring);
- Added Dmitry's Reviewed-by tags for patches 02 and 03 (Dmitry Baryshkov);
- Rebased on the latest mainline kernel and resolved conflict.


Leo Yan (5):
dt-bindings: interconnect: Update property for icc-rpm path tag
interconnect: qcom: Move qcom_icc_xlate_extended() to a common file
interconnect: qcom: icc-rpm: Change to use qcom_icc_xlate_extended()
interconnect: qcom: icc-rpm: Support multiple buckets
interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values

.../bindings/interconnect/qcom,rpm.yaml | 6 +-
drivers/interconnect/qcom/Makefile | 3 +
drivers/interconnect/qcom/icc-common.c | 34 +++++
drivers/interconnect/qcom/icc-common.h | 13 ++
drivers/interconnect/qcom/icc-rpm.c | 129 +++++++++++++++---
drivers/interconnect/qcom/icc-rpm.h | 6 +
drivers/interconnect/qcom/icc-rpmh.c | 26 +---
drivers/interconnect/qcom/icc-rpmh.h | 1 -
drivers/interconnect/qcom/sm8450.c | 1 +
9 files changed, 176 insertions(+), 43 deletions(-)
create mode 100644 drivers/interconnect/qcom/icc-common.c
create mode 100644 drivers/interconnect/qcom/icc-common.h

--
2.25.1


2022-07-11 11:58:21

by Leo Yan

Subject: [PATCH v5 1/5] dt-bindings: interconnect: Update property for icc-rpm path tag

To support path tags in the icc-rpm driver, the "#interconnect-cells"
property is updated to take the enumerated values 1 or 2. Setting it to
1 keeps compatibility with the old DT binding, where an interconnect
specifier only contains the node id; if "#interconnect-cells" is set to
2, the second specifier cell is used as a path tag (e.g. to vote for
specific buckets).
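
As a rough consumer-side sketch (the driver, path name and bandwidth
values below are hypothetical), the same bucket selection can also be
made at run time with icc_set_tag(); with "#interconnect-cells" set to 2
the tag can instead be encoded in the second specifier cell of the
consumer's "interconnects" property:

        #include <linux/device.h>
        #include <linux/err.h>
        #include <linux/interconnect.h>
        #include <dt-bindings/interconnect/qcom,icc.h>

        static int foo_vote_bandwidth(struct device *dev)
        {
                struct icc_path *path;
                int ret;

                /* "foo-mem" is a made-up path name from the consumer node */
                path = of_icc_get(dev, "foo-mem");
                if (IS_ERR(path))
                        return PTR_ERR(path);

                /* Vote only into the WAKE and SLEEP buckets */
                icc_set_tag(path, QCOM_ICC_TAG_WAKE | QCOM_ICC_TAG_SLEEP);

                /* avg_bw = 100000 kBps, peak_bw = 200000 kBps (illustrative) */
                ret = icc_set_bw(path, 100000, 200000);

                icc_put(path);
                return ret;
        }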

Signed-off-by: Leo Yan <[email protected]>
Acked-by: Krzysztof Kozlowski <[email protected]>
---
.../devicetree/bindings/interconnect/qcom,rpm.yaml | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml b/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
index 8a676fef8c1d..4b37aa88a375 100644
--- a/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
+++ b/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
@@ -45,7 +45,11 @@ properties:
- qcom,sdm660-snoc

'#interconnect-cells':
- const: 1
+ description: |
+ Value: <1> is one cell in an interconnect specifier for the
+ interconnect node id, <2> requires the interconnect node id and an
+ extra path tag.
+ enum: [ 1, 2 ]

clocks:
minItems: 2
--
2.25.1

2022-07-11 11:58:34

by Leo Yan

Subject: [PATCH v5 2/5] interconnect: qcom: Move qcom_icc_xlate_extended() to a common file

Since there is a conflict between the two headers icc-rpmh.h and
icc-rpm.h, the function qcom_icc_xlate_extended() is declared in
icc-rpmh.h and thus cannot be used by the icc-rpm driver.

Move the function to a new common file, icc-common.c, so that it can be
called by multiple drivers.

Signed-off-by: Leo Yan <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
---
drivers/interconnect/qcom/Makefile | 3 +++
drivers/interconnect/qcom/icc-common.c | 34 ++++++++++++++++++++++++++
drivers/interconnect/qcom/icc-common.h | 13 ++++++++++
drivers/interconnect/qcom/icc-rpmh.c | 26 +-------------------
drivers/interconnect/qcom/icc-rpmh.h | 1 -
drivers/interconnect/qcom/sm8450.c | 1 +
6 files changed, 52 insertions(+), 26 deletions(-)
create mode 100644 drivers/interconnect/qcom/icc-common.c
create mode 100644 drivers/interconnect/qcom/icc-common.h

diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
index 8d1fe9d38ac3..e6451470f812 100644
--- a/drivers/interconnect/qcom/Makefile
+++ b/drivers/interconnect/qcom/Makefile
@@ -1,5 +1,8 @@
# SPDX-License-Identifier: GPL-2.0

+obj-$(CONFIG_INTERCONNECT_QCOM) += interconnect_qcom.o
+
+interconnect_qcom-y := icc-common.o
icc-bcm-voter-objs := bcm-voter.o
qnoc-msm8916-objs := msm8916.o
qnoc-msm8939-objs := msm8939.o
diff --git a/drivers/interconnect/qcom/icc-common.c b/drivers/interconnect/qcom/icc-common.c
new file mode 100644
index 000000000000..0822ce207b5d
--- /dev/null
+++ b/drivers/interconnect/qcom/icc-common.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#include <linux/of.h>
+#include <linux/slab.h>
+
+#include "icc-common.h"
+
+struct icc_node_data *qcom_icc_xlate_extended(struct of_phandle_args *spec, void *data)
+{
+ struct icc_node_data *ndata;
+ struct icc_node *node;
+
+ node = of_icc_xlate_onecell(spec, data);
+ if (IS_ERR(node))
+ return ERR_CAST(node);
+
+ ndata = kzalloc(sizeof(*ndata), GFP_KERNEL);
+ if (!ndata)
+ return ERR_PTR(-ENOMEM);
+
+ ndata->node = node;
+
+ if (spec->args_count == 2)
+ ndata->tag = spec->args[1];
+
+ if (spec->args_count > 2)
+ pr_warn("%pOF: Too many arguments, path tag is not parsed\n", spec->np);
+
+ return ndata;
+}
+EXPORT_SYMBOL_GPL(qcom_icc_xlate_extended);
diff --git a/drivers/interconnect/qcom/icc-common.h b/drivers/interconnect/qcom/icc-common.h
new file mode 100644
index 000000000000..33bb2c38dff3
--- /dev/null
+++ b/drivers/interconnect/qcom/icc-common.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_COMMON_H__
+#define __DRIVERS_INTERCONNECT_QCOM_ICC_COMMON_H__
+
+#include <linux/interconnect-provider.h>
+
+struct icc_node_data *qcom_icc_xlate_extended(struct of_phandle_args *spec, void *data);
+
+#endif
diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
index 3c40076eb5fb..505d53e80d96 100644
--- a/drivers/interconnect/qcom/icc-rpmh.c
+++ b/drivers/interconnect/qcom/icc-rpmh.c
@@ -11,6 +11,7 @@
#include <linux/slab.h>

#include "bcm-voter.h"
+#include "icc-common.h"
#include "icc-rpmh.h"

/**
@@ -100,31 +101,6 @@ int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
}
EXPORT_SYMBOL_GPL(qcom_icc_set);

-struct icc_node_data *qcom_icc_xlate_extended(struct of_phandle_args *spec, void *data)
-{
- struct icc_node_data *ndata;
- struct icc_node *node;
-
- node = of_icc_xlate_onecell(spec, data);
- if (IS_ERR(node))
- return ERR_CAST(node);
-
- ndata = kzalloc(sizeof(*ndata), GFP_KERNEL);
- if (!ndata)
- return ERR_PTR(-ENOMEM);
-
- ndata->node = node;
-
- if (spec->args_count == 2)
- ndata->tag = spec->args[1];
-
- if (spec->args_count > 2)
- pr_warn("%pOF: Too many arguments, path tag is not parsed\n", spec->np);
-
- return ndata;
-}
-EXPORT_SYMBOL_GPL(qcom_icc_xlate_extended);
-
/**
* qcom_icc_bcm_init - populates bcm aux data and connect qnodes
* @bcm: bcm to be initialized
diff --git a/drivers/interconnect/qcom/icc-rpmh.h b/drivers/interconnect/qcom/icc-rpmh.h
index d29929461c17..04391c1ba465 100644
--- a/drivers/interconnect/qcom/icc-rpmh.h
+++ b/drivers/interconnect/qcom/icc-rpmh.h
@@ -131,7 +131,6 @@ struct qcom_icc_desc {
int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
int qcom_icc_set(struct icc_node *src, struct icc_node *dst);
-struct icc_node_data *qcom_icc_xlate_extended(struct of_phandle_args *spec, void *data);
int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev);
void qcom_icc_pre_aggregate(struct icc_node *node);
int qcom_icc_rpmh_probe(struct platform_device *pdev);
diff --git a/drivers/interconnect/qcom/sm8450.c b/drivers/interconnect/qcom/sm8450.c
index 7e3d372b712f..e821fd0b2f66 100644
--- a/drivers/interconnect/qcom/sm8450.c
+++ b/drivers/interconnect/qcom/sm8450.c
@@ -12,6 +12,7 @@
#include <dt-bindings/interconnect/qcom,sm8450.h>

#include "bcm-voter.h"
+#include "icc-common.h"
#include "icc-rpmh.h"
#include "sm8450.h"

--
2.25.1

2022-07-11 11:58:44

by Leo Yan

Subject: [PATCH v5 3/5] interconnect: qcom: icc-rpm: Change to use qcom_icc_xlate_extended()

This commit changes the driver to use the qcom_icc_xlate_extended()
callback. This is a preparation for populating path tags from the
interconnect DT binding; it doesn't introduce any functional change for
existing DT bindings without path tags.

Signed-off-by: Leo Yan <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 7e8bcbb2f5db..8c9d5cc7276c 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -16,6 +16,7 @@
#include <linux/slab.h>

#include "smd-rpm.h"
+#include "icc-common.h"
#include "icc-rpm.h"

/* QNOC QoS */
@@ -414,7 +415,7 @@ int qnoc_probe(struct platform_device *pdev)
provider->dev = dev;
provider->set = qcom_icc_set;
provider->aggregate = icc_std_aggregate;
- provider->xlate = of_icc_xlate_onecell;
+ provider->xlate_extended = qcom_icc_xlate_extended;
provider->data = data;

ret = icc_provider_add(provider);
--
2.25.1

2022-07-11 12:01:17

by Leo Yan

Subject: [PATCH v5 4/5] interconnect: qcom: icc-rpm: Support multiple buckets

The current interconnect rpm driver uses a single aggregated bandwidth
to calculate the clock rates for both the active and sleep clocks;
therefore, it cannot separate the bandwidth requests for these two kinds
of clocks.

This patch follows the implementation of the interconnect rpmh driver to
support multiple buckets. The rpmh driver provides three buckets for
AMC, WAKE and SLEEP; this driver only needs the WAKE and SLEEP buckets,
but keeping the same scheme as the rpmh driver allows us to reuse the DT
binding and avoids defining duplicated data structures.

This patch introduces two callbacks: qcom_icc_pre_bw_aggregate(), which
cleans up the bucket values before bandwidth requests are aggregated,
and qcom_icc_bw_aggregate(), which aggregates bandwidth into the buckets.
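
For context, below is a simplified sketch (paraphrased, not verbatim) of
how the interconnect core drives these two callbacks when it
re-aggregates the requests on a node:

        /* Simplified from aggregate_requests() in drivers/interconnect/core.c */
        static void aggregate_requests(struct icc_node *node)
        {
                struct icc_provider *p = node->provider;
                struct icc_req *r;

                node->avg_bw = 0;
                node->peak_bw = 0;

                /* qcom_icc_pre_bw_aggregate(): zero the per-bucket values */
                if (p->pre_aggregate)
                        p->pre_aggregate(node);

                /*
                 * qcom_icc_bw_aggregate(): fold each request into the buckets
                 * selected by its tag (QCOM_ICC_TAG_ALWAYS when untagged)
                 */
                hlist_for_each_entry(r, &node->req_list, req_node)
                        p->aggregate(node, r->tag, r->avg_bw, r->peak_bw,
                                     &node->avg_bw, &node->peak_bw);
        }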

Signed-off-by: Leo Yan <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 51 ++++++++++++++++++++++++++++-
drivers/interconnect/qcom/icc-rpm.h | 6 ++++
2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 8c9d5cc7276c..d27b1582521f 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -254,6 +254,54 @@ static int __qcom_icc_set(struct icc_node *n, struct qcom_icc_node *qn,
return 0;
}

+/**
+ * qcom_icc_pre_bw_aggregate - cleans up values before re-aggregate requests
+ * @node: icc node to operate on
+ */
+static void qcom_icc_pre_bw_aggregate(struct icc_node *node)
+{
+ struct qcom_icc_node *qn;
+ size_t i;
+
+ qn = node->data;
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ qn->sum_avg[i] = 0;
+ qn->max_peak[i] = 0;
+ }
+}
+
+/**
+ * qcom_icc_bw_aggregate - aggregate bw for buckets indicated by tag
+ * @node: node to aggregate
+ * @tag: tag to indicate which buckets to aggregate
+ * @avg_bw: new bw to sum aggregate
+ * @peak_bw: new bw to max aggregate
+ * @agg_avg: existing aggregate avg bw val
+ * @agg_peak: existing aggregate peak bw val
+ */
+static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+ size_t i;
+ struct qcom_icc_node *qn;
+
+ qn = node->data;
+
+ if (!tag)
+ tag = QCOM_ICC_TAG_ALWAYS;
+
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ if (tag & BIT(i)) {
+ qn->sum_avg[i] += avg_bw;
+ qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+ }
+ }
+
+ *agg_avg += avg_bw;
+ *agg_peak = max_t(u32, *agg_peak, peak_bw);
+ return 0;
+}
+
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
@@ -414,7 +462,8 @@ int qnoc_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&provider->nodes);
provider->dev = dev;
provider->set = qcom_icc_set;
- provider->aggregate = icc_std_aggregate;
+ provider->pre_aggregate = qcom_icc_pre_bw_aggregate;
+ provider->aggregate = qcom_icc_bw_aggregate;
provider->xlate_extended = qcom_icc_xlate_extended;
provider->data = data;

diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
index ebee9009301e..a49af844ab13 100644
--- a/drivers/interconnect/qcom/icc-rpm.h
+++ b/drivers/interconnect/qcom/icc-rpm.h
@@ -6,6 +6,8 @@
#ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
#define __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H

+#include <dt-bindings/interconnect/qcom,icc.h>
+
#define RPM_BUS_MASTER_REQ 0x73616d62
#define RPM_BUS_SLAVE_REQ 0x766c7362

@@ -65,6 +67,8 @@ struct qcom_icc_qos {
* @links: an array of nodes where we can go next while traversing
* @num_links: the total number of @links
* @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @sum_avg: current sum aggregate value of all avg bw requests
+ * @max_peak: current max aggregate value of all peak bw requests
* @mas_rpm_id: RPM id for devices that are bus masters
* @slv_rpm_id: RPM id for devices that are bus slaves
* @qos: NoC QoS setting parameters
@@ -75,6 +79,8 @@ struct qcom_icc_node {
const u16 *links;
u16 num_links;
u16 buswidth;
+ u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
+ u64 max_peak[QCOM_ICC_NUM_BUCKETS];
int mas_rpm_id;
int slv_rpm_id;
struct qcom_icc_qos qos;
--
2.25.1

2022-07-11 12:27:33

by Leo Yan

Subject: [PATCH v5 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values

This commit uses buckets to support the bandwidth and clock rate
settings. It introduces a new function, qcom_icc_bus_aggregate(), to
calculate the aggregated average and peak bandwidths for every bucket,
and it also calculates the maximum value of the aggregated average
bandwidth across all buckets.

The maximum aggregated average is used to calculate the final bandwidth
requests. The clock rate is set per bucket: the SLEEP bucket is used as
the default bucket if a platform doesn't enable interconnect path tags
in its DT binding; otherwise, the WAKE bucket is used to set the active
clock and the SLEEP bucket is used for the other clocks. So far the AMC
bucket is not used.
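
For illustration, the per-clock rate selection added to qcom_icc_set()
boils down to the following condensed sketch (the helper name is
hypothetical; the real code is open-coded in the loop shown in the diff
below):

        static u64 qcom_icc_bucket_rate(struct qcom_icc_provider *qp, int clk_idx,
                                        const u64 *agg_avg, const u64 *agg_peak,
                                        u16 buswidth)
        {
                u64 rate;
                int bucket;

                /* "bus_a" is the active clock -> WAKE bucket; others -> SLEEP */
                if (!strcmp(qp->bus_clks[clk_idx].id, "bus_a"))
                        bucket = QCOM_ICC_BUCKET_WAKE;
                else
                        bucket = QCOM_ICC_BUCKET_SLEEP;

                rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
                do_div(rate, buswidth);

                return min_t(u64, rate, LONG_MAX);
        }

For example, with agg_avg[WAKE] = 1000000 kBps, agg_peak[WAKE] =
1500000 kBps and an 8-byte bus width, the "bus_a" clock would be
requested at 1500000 * 1000 / 8 = 187500000 Hz (numbers are illustrative
only).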

Signed-off-by: Leo Yan <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 75 +++++++++++++++++++++++------
1 file changed, 61 insertions(+), 14 deletions(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index d27b1582521f..f15f5deee6ef 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -302,18 +302,57 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
return 0;
}

+/**
+ * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
+ * @provider: generic interconnect provider
+ * @agg_avg: an array for aggregated average bandwidth of buckets
+ * @agg_peak: an array for aggregated peak bandwidth of buckets
+ * @max_agg_avg: pointer to max value of aggregated average bandwidth
+ */
+static void qcom_icc_bus_aggregate(struct icc_provider *provider,
+ u64 *agg_avg, u64 *agg_peak,
+ u64 *max_agg_avg)
+{
+ struct icc_node *node;
+ struct qcom_icc_node *qn;
+ int i;
+
+ /* Initialise aggregate values */
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ agg_avg[i] = 0;
+ agg_peak[i] = 0;
+ }
+
+ *max_agg_avg = 0;
+
+ /*
+ * Iterate nodes on the interconnect and aggregate bandwidth
+ * requests for every bucket.
+ */
+ list_for_each_entry(node, &provider->nodes, node_list) {
+ qn = node->data;
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ agg_avg[i] += qn->sum_avg[i];
+ agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
+ }
+ }
+
+ /* Find maximum values across all buckets */
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++)
+ *max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
+}
+
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
struct icc_provider *provider;
- struct icc_node *n;
u64 sum_bw;
- u64 max_peak_bw;
u64 rate;
- u32 agg_avg = 0;
- u32 agg_peak = 0;
+ u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
+ u64 max_agg_avg, max_agg_peak;
int ret, i;
+ int bucket;

src_qn = src->data;
if (dst)
@@ -321,12 +360,9 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
provider = src->provider;
qp = to_qcom_provider(provider);

- list_for_each_entry(n, &provider->nodes, node_list)
- provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
- &agg_avg, &agg_peak);
+ qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg);

- sum_bw = icc_units_to_bps(agg_avg);
- max_peak_bw = icc_units_to_bps(agg_peak);
+ sum_bw = icc_units_to_bps(max_agg_avg);

ret = __qcom_icc_set(src, src_qn, sum_bw);
if (ret)
@@ -337,12 +373,23 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
return ret;
}

- rate = max(sum_bw, max_peak_bw);
-
- do_div(rate, src_qn->buswidth);
- rate = min_t(u64, rate, LONG_MAX);
-
for (i = 0; i < qp->num_clks; i++) {
+ /*
+ * Use WAKE bucket for active clock, otherwise, use SLEEP bucket
+ * for other clocks. If a platform doesn't set interconnect
+ * path tags, by default use sleep bucket for all clocks.
+ *
+ * Note, AMC bucket is not supported yet.
+ */
+ if (!strcmp(qp->bus_clks[i].id, "bus_a"))
+ bucket = QCOM_ICC_BUCKET_WAKE;
+ else
+ bucket = QCOM_ICC_BUCKET_SLEEP;
+
+ rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
+ do_div(rate, src_qn->buswidth);
+ rate = min_t(u64, rate, LONG_MAX);
+
if (qp->bus_clk_rate[i] == rate)
continue;

--
2.25.1

2022-07-11 14:02:21

by Georgi Djakov

Subject: Re: [PATCH v5 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values


Hi Leo,

On 11.07.22 14:52, Leo Yan wrote:
> This commit uses buckets to support the bandwidth and clock rate
> settings. It introduces a new function, qcom_icc_bus_aggregate(), to
> calculate the aggregated average and peak bandwidths for every bucket,
> and it also calculates the maximum value of the aggregated average
> bandwidth across all buckets.
>
> The maximum aggregated average is used to calculate the final bandwidth
> requests. The clock rate is set per bucket: the SLEEP bucket is used as
> the default bucket if a platform doesn't enable interconnect path tags
> in its DT binding; otherwise, the WAKE bucket is used to set the active
> clock and the SLEEP bucket is used for the other clocks. So far the AMC
> bucket is not used.
>
> Signed-off-by: Leo Yan <[email protected]>
> ---
> drivers/interconnect/qcom/icc-rpm.c | 75 +++++++++++++++++++++++------
> 1 file changed, 61 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
> index d27b1582521f..f15f5deee6ef 100644
> --- a/drivers/interconnect/qcom/icc-rpm.c
> +++ b/drivers/interconnect/qcom/icc-rpm.c
> @@ -302,18 +302,57 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
> return 0;
> }
>
> +/**
> + * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
> + * @provider: generic interconnect provider
> + * @agg_avg: an array for aggregated average bandwidth of buckets
> + * @agg_peak: an array for aggregated peak bandwidth of buckets
> + * @max_agg_avg: pointer to max value of aggregated average bandwidth
> + */
> +static void qcom_icc_bus_aggregate(struct icc_provider *provider,
> + u64 *agg_avg, u64 *agg_peak,
> + u64 *max_agg_avg)
> +{
> + struct icc_node *node;
> + struct qcom_icc_node *qn;
> + int i;
> +
> + /* Initialise aggregate values */
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> + agg_avg[i] = 0;
> + agg_peak[i] = 0;
> + }
> +
> + *max_agg_avg = 0;
> +
> + /*
> + * Iterate nodes on the interconnect and aggregate bandwidth
> + * requests for every bucket.
> + */
> + list_for_each_entry(node, &provider->nodes, node_list) {
> + qn = node->data;
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
> + agg_avg[i] += qn->sum_avg[i];
> + agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
> + }
> + }
> +
> + /* Find maximum values across all buckets */
> + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++)
> + *max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
> +}
> +
> static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> {
> struct qcom_icc_provider *qp;
> struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
> struct icc_provider *provider;
> - struct icc_node *n;
> u64 sum_bw;
> - u64 max_peak_bw;
> u64 rate;
> - u32 agg_avg = 0;
> - u32 agg_peak = 0;
> + u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
> + u64 max_agg_avg, max_agg_peak;

Now max_agg_peak is unused?

Thanks,
Georgi

2022-07-12 01:56:56

by Leo Yan

Subject: Re: [PATCH v5 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values

Hi Georgi,

On Mon, Jul 11, 2022 at 04:53:47PM +0300, Georgi Djakov wrote:

[...]

> > static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
> > {
> > struct qcom_icc_provider *qp;
> > struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
> > struct icc_provider *provider;
> > - struct icc_node *n;
> > u64 sum_bw;
> > - u64 max_peak_bw;
> > u64 rate;
> > - u32 agg_avg = 0;
> > - u32 agg_peak = 0;
> > + u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
> > + u64 max_agg_avg, max_agg_peak;
>
> Now max_agg_peak is unused?

Sorry for this mistake. Will send new patch series soon.

Thanks,
Leo