2022-06-30 06:32:41

by Leo Yan

Subject: [PATCH v2 0/5] interconnect: qcom: icc-rpm: Support bucket

This patch set adds bucket support to the icc-rpm driver, implementing
a mechanism similar to the one in the icc-rpmh driver.

The interconnect path tag indicates which buckets a bandwidth vote
applies to. There are three kinds of buckets: AMC, WAKE and SLEEP; the
WAKE and SLEEP bucket values are then used to set the corresponding
clocks (the active and sleep clocks). So far we keep the AMC bucket but
don't really use it.
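
As an illustration only (not part of this series), a consumer could
select which buckets its votes apply to with the existing icc_set_tag()
API and the QCOM_ICC_TAG_* macros from
dt-bindings/interconnect/qcom,icc.h; the device, the path name and the
bandwidth numbers below are made up:

  #include <dt-bindings/interconnect/qcom,icc.h>
  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/interconnect.h>

  /* Hypothetical consumer: vote only for the WAKE bucket so the
   * request does not keep the sleep clock rate high. */
  static int example_vote(struct device *dev)
  {
          struct icc_path *path;

          path = of_icc_get(dev, "memory");    /* made-up path name */
          if (IS_ERR(path))
                  return PTR_ERR(path);

          icc_set_tag(path, QCOM_ICC_TAG_WAKE);
          return icc_set_bw(path, MBps_to_icc(1000), MBps_to_icc(2000));
  }

With this series, such a vote only raises the active ("bus_a") clock,
while untagged votes land in all buckets.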

Patches 01, 02 and 03 enable the interconnect path tag and update the
DT binding document; patches 04 and 05 add bucket support and use the
bucket values to set bandwidth and clock rates.

Changes from v1:
- Added description for property "#interconnect-cells" (Rob Herring);
- Added Dmitry's Reviewed-by tags for patches 02 and 03 (Dmitry Baryshkov);
- Rebased on the latest mainline kernel and resolved conflicts.


Leo Yan (5):
dt-bindings: interconnect: Update property for icc-rpm path tag
interconnect: qcom: Move qcom_icc_xlate_extended() to a common file
interconnect: qcom: icc-rpm: Change to use qcom_icc_xlate_extended()
interconnect: qcom: icc-rpm: Support multiple buckets
interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values

.../bindings/interconnect/qcom,rpm.yaml | 6 +-
drivers/interconnect/qcom/Makefile | 3 +
drivers/interconnect/qcom/icc-common.c | 34 +++++
drivers/interconnect/qcom/icc-common.h | 13 ++
drivers/interconnect/qcom/icc-rpm.c | 134 ++++++++++++++++--
drivers/interconnect/qcom/icc-rpm.h | 6 +
drivers/interconnect/qcom/icc-rpmh.c | 26 +---
drivers/interconnect/qcom/icc-rpmh.h | 1 -
drivers/interconnect/qcom/sm8450.c | 1 +
9 files changed, 182 insertions(+), 42 deletions(-)
create mode 100644 drivers/interconnect/qcom/icc-common.c
create mode 100644 drivers/interconnect/qcom/icc-common.h

--
2.25.1


2022-06-30 06:35:04

by Leo Yan

Subject: [PATCH v2 5/5] interconnect: qcom: icc-rpm: Set bandwidth and clock for bucket values

This commit uses the buckets to set bandwidth and clock rates. It
introduces a new function, qcom_icc_bus_aggregate(), to calculate the
aggregated average and peak bandwidths for every bucket, and also the
maximum aggregated values across all buckets.

The maximum aggregated values are used to set the final bandwidth
requests. The clock rate is set per bucket: if a platform doesn't
enable the interconnect path tags in its DT, the SLEEP bucket is used
as the default for all clocks; otherwise, the WAKE bucket drives the
active clock and the SLEEP bucket drives the other clocks. So far the
AMC bucket is not used.
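
Purely as an illustration of the arithmetic (the bandwidth figures and
the 16-byte bus width are made-up values, and icc_units_to_bps() simply
converts kB/s to bytes/s), the per-bucket rate calculation reduces to:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* Aggregated values for one bucket, in kB/s (ICC units) */
          uint64_t agg_avg = 1000000;     /* 1 GB/s average */
          uint64_t agg_peak = 1500000;    /* 1.5 GB/s peak */
          uint64_t buswidth = 16;         /* hypothetical width in bytes */

          /* max(avg, peak), converted to bytes/s, divided by bus width */
          uint64_t rate = (agg_avg > agg_peak ? agg_avg : agg_peak) * 1000;

          rate /= buswidth;
          printf("bus clock rate: %llu Hz\n", (unsigned long long)rate);
          return 0;
  }

The WAKE bucket's rate is then applied to the "bus_a" clock and the
SLEEP bucket's rate to the other clocks.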

Signed-off-by: Leo Yan <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 80 ++++++++++++++++++++++++-----
1 file changed, 67 insertions(+), 13 deletions(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index b025fc6b97c9..4b932eb807c7 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -302,18 +302,62 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
return 0;
}

+/**
+ * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
+ * @provider: generic interconnect provider
+ * @agg_avg: an array for aggregated average bandwidth of buckets
+ * @agg_peak: an array for aggregated peak bandwidth of buckets
+ * @max_agg_avg: pointer to max value of aggregated average bandwidth
+ * @max_agg_peak: pointer to max value of aggregated peak bandwidth
+ */
+static void qcom_icc_bus_aggregate(struct icc_provider *provider,
+ u64 *agg_avg, u64 *agg_peak,
+ u64 *max_agg_avg, u64 *max_agg_peak)
+{
+ struct icc_node *node;
+ struct qcom_icc_node *qn;
+ int i;
+
+ /* Initialise aggregate values */
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ agg_avg[i] = 0;
+ agg_peak[i] = 0;
+ }
+
+ *max_agg_avg = 0;
+ *max_agg_peak = 0;
+
+ /*
+ * Iterate nodes on the interconnect and aggregate bandwidth
+ * requests for every bucket.
+ */
+ list_for_each_entry(node, &provider->nodes, node_list) {
+ qn = node->data;
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ agg_avg[i] += qn->sum_avg[i];
+ agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
+ }
+ }
+
+ /* Find maximum values across all buckets */
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ *max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
+ *max_agg_peak = max_t(u64, *max_agg_peak, agg_peak[i]);
+ }
+}
+
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
struct icc_provider *provider;
- struct icc_node *n;
u64 sum_bw;
u64 max_peak_bw;
u64 rate;
- u32 agg_avg = 0;
- u32 agg_peak = 0;
+ u64 agg_avg[QCOM_ICC_NUM_BUCKETS], agg_peak[QCOM_ICC_NUM_BUCKETS];
+ u64 max_agg_avg, max_agg_peak;
int ret, i;
+ int bucket;

src_qn = src->data;
if (dst)
@@ -321,12 +365,11 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
provider = src->provider;
qp = to_qcom_provider(provider);

- list_for_each_entry(n, &provider->nodes, node_list)
- provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
- &agg_avg, &agg_peak);
+ qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg,
+ &max_agg_peak);

- sum_bw = icc_units_to_bps(agg_avg);
- max_peak_bw = icc_units_to_bps(agg_peak);
+ sum_bw = icc_units_to_bps(max_agg_avg);
+ max_peak_bw = icc_units_to_bps(max_agg_peak);

ret = __qcom_icc_set(src, src_qn, sum_bw);
if (ret)
@@ -337,12 +380,23 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
return ret;
}

- rate = max(sum_bw, max_peak_bw);
-
- do_div(rate, src_qn->buswidth);
- rate = min_t(u64, rate, LONG_MAX);
-
for (i = 0; i < qp->num_clks; i++) {
+ /*
+ * Use WAKE bucket for active clock, otherwise, use SLEEP bucket
+ * for other clocks. If a platform doesn't set interconnect
+ * path tags, by default use sleep bucket for all clocks.
+ *
+ * Note, AMC bucket is not supported yet.
+ */
+ if (!strcmp(qp->bus_clks[i].id, "bus_a"))
+ bucket = QCOM_ICC_BUCKET_WAKE;
+ else
+ bucket = QCOM_ICC_BUCKET_SLEEP;
+
+ rate = icc_units_to_bps(max(agg_avg[bucket], agg_peak[bucket]));
+ do_div(rate, src_qn->buswidth);
+ rate = min_t(u64, rate, LONG_MAX);
+
if (qp->bus_clk_rate[i] == rate)
continue;

--
2.25.1

2022-06-30 06:35:05

by Leo Yan

Subject: [PATCH v2 3/5] interconnect: qcom: icc-rpm: Change to use qcom_icc_xlate_extended()

This commit switches the driver to the qcom_icc_xlate_extended()
callback. This is a preparation for populating path tags from the
interconnect DT binding; it doesn't introduce any functional change
for existing DT bindings without path tags.
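
For context only, and not code from this patch: a toy model of what the
extended translate gives the core is sketched below, where a two-cell
DT specifier carries the node id plus a path tag and a one-cell
specifier leaves the tag at 0 (0x6 is QCOM_ICC_TAG_WAKE |
QCOM_ICC_TAG_SLEEP):

  #include <stdio.h>

  struct spec { int args_count; unsigned int args[2]; };
  struct node_data { unsigned int node_id; unsigned int tag; };

  /* Toy model: mirror how the second specifier cell becomes the tag */
  static struct node_data xlate_extended(const struct spec *s)
  {
          struct node_data nd = { .node_id = s->args[0], .tag = 0 };

          if (s->args_count == 2)
                  nd.tag = s->args[1];
          return nd;
  }

  int main(void)
  {
          struct spec one_cell = { 1, { 42, 0 } };
          struct spec two_cells = { 2, { 42, 0x6 } };

          printf("tag=%#x\n", xlate_extended(&one_cell).tag);
          printf("tag=%#x\n", xlate_extended(&two_cells).tag);
          return 0;
  }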

Signed-off-by: Leo Yan <[email protected]>
Reviewed-by: Dmitry Baryshkov <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 7e8bcbb2f5db..8c9d5cc7276c 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -16,6 +16,7 @@
#include <linux/slab.h>

#include "smd-rpm.h"
+#include "icc-common.h"
#include "icc-rpm.h"

/* QNOC QoS */
@@ -414,7 +415,7 @@ int qnoc_probe(struct platform_device *pdev)
provider->dev = dev;
provider->set = qcom_icc_set;
provider->aggregate = icc_std_aggregate;
- provider->xlate = of_icc_xlate_onecell;
+ provider->xlate_extended = qcom_icc_xlate_extended;
provider->data = data;

ret = icc_provider_add(provider);
--
2.25.1

2022-06-30 06:37:05

by Leo Yan

Subject: [PATCH v2 1/5] dt-bindings: interconnect: Update property for icc-rpm path tag

To support the path tag in the icc-rpm driver, the "#interconnect-cells"
property is updated to the enumerated values 1 or 2. Setting it to 1
keeps compatibility with the old DT binding, where an interconnect path
specifier only contains the node id; setting "#interconnect-cells" to 2
makes the second specifier cell a path tag (e.g. to vote for specific
buckets).

Signed-off-by: Leo Yan <[email protected]>
---
.../devicetree/bindings/interconnect/qcom,rpm.yaml | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml b/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
index 8a676fef8c1d..16df305ea243 100644
--- a/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
+++ b/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml
@@ -45,7 +45,11 @@ properties:
- qcom,sdm660-snoc

'#interconnect-cells':
- const: 1
+ description:
+ '1' is one cell in a interconnect specifier for the interconnect
+ node id, or '2' requires the interconnect node id and an extra
+ path tag.
+ enum: [ 1, 2 ]

clocks:
minItems: 2
--
2.25.1

2022-06-30 06:37:28

by Leo Yan

Subject: [PATCH v2 4/5] interconnect: qcom: icc-rpm: Support multiple buckets

The current interconnect rpm driver uses a single aggregated bandwidth
to calculate the clock rates for both the active and sleep clocks;
therefore, it has no way to separate the bandwidth requests for these
two kinds of clocks.

This patch follows the implementation of the interconnect rpmh driver
to support multiple buckets. The rpmh driver provides three buckets
for AMC, WAKE and SLEEP; this driver only needs the WAKE and SLEEP
buckets, but keeping the same scheme as the rpmh driver allows us to
reuse the DT binding and avoids defining duplicated data structures.

This patch introduces two callbacks: qcom_icc_pre_bw_aggregate() cleans
up the bucket values before bandwidth requests are re-aggregated, and
qcom_icc_bw_aggregate() aggregates bandwidth into the buckets selected
by the path tag.
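
As a rough, standalone illustration of how a tag selects buckets (the
QCOM_ICC_* values below are mirrored from
dt-bindings/interconnect/qcom,icc.h; an untagged request falls back to
voting for every bucket):

  #include <stdio.h>

  #define QCOM_ICC_BUCKET_AMC     0
  #define QCOM_ICC_BUCKET_WAKE    1
  #define QCOM_ICC_BUCKET_SLEEP   2
  #define QCOM_ICC_NUM_BUCKETS    3
  #define QCOM_ICC_TAG_SLEEP      (1 << QCOM_ICC_BUCKET_SLEEP)
  #define QCOM_ICC_TAG_ALWAYS     0x7     /* AMC | WAKE | SLEEP */

  static void show(unsigned int tag)
  {
          int i;

          if (!tag)
                  tag = QCOM_ICC_TAG_ALWAYS;

          for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++)
                  printf("bucket %d: %s\n", i,
                         (tag & (1 << i)) ? "voted" : "-");
  }

  int main(void)
  {
          show(0);                   /* untagged: all buckets */
          show(QCOM_ICC_TAG_SLEEP);  /* sleep-only vote */
          return 0;
  }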

Signed-off-by: Leo Yan <[email protected]>
---
drivers/interconnect/qcom/icc-rpm.c | 51 ++++++++++++++++++++++++++++-
drivers/interconnect/qcom/icc-rpm.h | 6 ++++
2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 8c9d5cc7276c..b025fc6b97c9 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -254,6 +254,54 @@ static int __qcom_icc_set(struct icc_node *n, struct qcom_icc_node *qn,
return 0;
}

+/**
+ * qcom_icc_pre_bw_aggregate - clean up values before re-aggregating requests
+ * @node: icc node to operate on
+ */
+static void qcom_icc_pre_bw_aggregate(struct icc_node *node)
+{
+ struct qcom_icc_node *qn;
+ size_t i;
+
+ qn = node->data;
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ qn->sum_avg[i] = 0;
+ qn->max_peak[i] = 0;
+ }
+}
+
+/**
+ * qcom_icc_bw_aggregate - aggregate bw for buckets indicated by tag
+ * @node: node to aggregate
+ * @tag: tag to indicate which buckets to aggregate
+ * @avg_bw: new bw to sum aggregate
+ * @peak_bw: new bw to max aggregate
+ * @agg_avg: existing aggregate avg bw val
+ * @agg_peak: existing aggregate peak bw val
+ */
+static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+ u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+ size_t i;
+ struct qcom_icc_node *qn;
+
+ qn = node->data;
+
+ if (!tag)
+ tag = QCOM_ICC_TAG_ALWAYS;
+
+ for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
+ if (tag & BIT(i)) {
+ qn->sum_avg[i] += avg_bw;
+ qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
+ }
+ }
+
+ *agg_avg += avg_bw;
+ *agg_peak = max_t(u32, *agg_peak, peak_bw);
+ return 0;
+}
+
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
@@ -414,7 +462,8 @@ int qnoc_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&provider->nodes);
provider->dev = dev;
provider->set = qcom_icc_set;
- provider->aggregate = icc_std_aggregate;
+ provider->pre_aggregate = qcom_icc_pre_bw_aggregate;
+ provider->aggregate = qcom_icc_bw_aggregate;
provider->xlate_extended = qcom_icc_xlate_extended;
provider->data = data;

diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
index ebee9009301e..a49af844ab13 100644
--- a/drivers/interconnect/qcom/icc-rpm.h
+++ b/drivers/interconnect/qcom/icc-rpm.h
@@ -6,6 +6,8 @@
#ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H
#define __DRIVERS_INTERCONNECT_QCOM_ICC_RPM_H

+#include <dt-bindings/interconnect/qcom,icc.h>
+
#define RPM_BUS_MASTER_REQ 0x73616d62
#define RPM_BUS_SLAVE_REQ 0x766c7362

@@ -65,6 +67,8 @@ struct qcom_icc_qos {
* @links: an array of nodes where we can go next while traversing
* @num_links: the total number of @links
* @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @sum_avg: current sum aggregate value of all avg bw requests
+ * @max_peak: current max aggregate value of all peak bw requests
* @mas_rpm_id: RPM id for devices that are bus masters
* @slv_rpm_id: RPM id for devices that are bus slaves
* @qos: NoC QoS setting parameters
@@ -75,6 +79,8 @@ struct qcom_icc_node {
const u16 *links;
u16 num_links;
u16 buswidth;
+ u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
+ u64 max_peak[QCOM_ICC_NUM_BUCKETS];
int mas_rpm_id;
int slv_rpm_id;
struct qcom_icc_qos qos;
--
2.25.1

2022-06-30 13:47:52

by Rob Herring (Arm)

Subject: Re: [PATCH v2 1/5] dt-bindings: interconnect: Update property for icc-rpm path tag

On Thu, 30 Jun 2022 13:57:18 +0800, Leo Yan wrote:
> To support path tag in icc-rpm driver, the "#interconnect-cells"
> property is updated as enumerate values: 1 or 2. Setting to 1 means
> it is compatible with old DT binding that interconnect path only
> contains node id; if set to 2 for "#interconnect-cells" property, then
> the second specifier is used as a tag (e.g. vote for which buckets).
>
> Signed-off-by: Leo Yan <[email protected]>
> ---
> .../devicetree/bindings/interconnect/qcom,rpm.yaml | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:
./Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: [error] syntax error: expected <block end>, but found '<scalar>' (syntax)

dtschema/dtc warnings/errors:
make[1]: *** Deleting file 'Documentation/devicetree/bindings/interconnect/qcom,rpm.example.dts'
Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: did not find expected key
make[1]: *** [Documentation/devicetree/bindings/Makefile:26: Documentation/devicetree/bindings/interconnect/qcom,rpm.example.dts] Error 1
make[1]: *** Waiting for unfinished jobs....
./Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: did not find expected key
/builds/robherring/linux-dt-review/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml: ignoring, error parsing file
make: *** [Makefile:1404: dt_binding_check] Error 2

doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/patch/

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.

2022-07-01 11:00:41

by Leo Yan

Subject: Re: [PATCH v2 1/5] dt-bindings: interconnect: Update property for icc-rpm path tag

Hi Rob,

On Thu, Jun 30, 2022 at 07:44:25AM -0600, Rob Herring wrote:

[...]

> My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
> on your patch (DT_CHECKER_FLAGS is new in v5.13):
>
> yamllint warnings/errors:
> ./Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: [error] syntax error: expected <block end>, but found '<scalar>' (syntax)
>
> dtschema/dtc warnings/errors:
> make[1]: *** Deleting file 'Documentation/devicetree/bindings/interconnect/qcom,rpm.example.dts'
> Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: did not find expected key
> make[1]: *** [Documentation/devicetree/bindings/Makefile:26: Documentation/devicetree/bindings/interconnect/qcom,rpm.example.dts] Error 1
> make[1]: *** Waiting for unfinished jobs....
> ./Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml:49:11: did not find expected key
> /builds/robherring/linux-dt-review/Documentation/devicetree/bindings/interconnect/qcom,rpm.yaml: ignoring, error parsing file
> make: *** [Makefile:1404: dt_binding_check] Error 2
>
> doc reference errors (make refcheckdocs):
>
> See https://patchwork.ozlabs.org/patch/
>
> This check can fail if there are any dependencies. The base for a patch
> series is generally the most recent rc1.
>
> If you already ran 'make dt_binding_check' and didn't see the above
> error(s), then make sure 'yamllint' is installed and dt-schema is up to
> date:
>
> pip3 install dtschema --upgrade

Sorry that I did not run 'make dt_binding_check', will check it.

> Please check and re-submit.

Yeah, will do it.

Thanks a lot for the tips.

Leo