Modern SoCs have multiple processors and various dedicated cores (video, GPU,
graphics, modem). These cores communicate with each other and can generate a
lot of data flowing through the on-chip interconnects. These interconnect
buses can form different topologies such as crossbars, point-to-point buses
and hierarchical buses, or use the network-on-chip concept.
These buses are usually sized to handle use cases with high data throughput,
but such throughput is not needed all the time, and the buses then consume a
lot of power unnecessarily. Furthermore, the priority between masters can
vary depending on the running use case, such as video playback or
CPU-intensive tasks.
An API for expressing the bandwidth and QoS requirements of the system allows
us to adapt the interconnect configuration to match them by scaling the
frequencies, setting link priorities and tuning QoS parameters.
This configuration can be a static, one-time operation done at boot for some
platforms or a dynamic set of operations that happen at run-time.
This patchset introduces a new API to gather the requirements and configure
the interconnect buses across the entire chipset to fit the current demand.
The API is NOT for changing the performance of the endpoint devices, but only
the interconnect path in between them.
The API is using a consumer/provider-based model, where the providers are
the interconnect buses and the consumers could be various drivers.
The consumers request interconnect resources (a path) between endpoints and
set the desired constraints on this data flow path. The provider(s) receive
requests from consumers and aggregate these requests for all master-slave
pairs on that path. Then the providers configure each node participating in
the topology according to the requested data flow path, physical links and
constraints. The topology could be complicated and multi-tiered and is SoC
specific.
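As a rough illustration, a consumer driver would use the API like this (a
minimal sketch; the endpoint ids and bandwidth values are hypothetical):

	struct icc_path *path;

	/* request a path between two endpoints, using global node ids */
	path = icc_get(dev, MASTER_SDCC_1, SLAVE_EBI_CH0);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* express the average and peak bandwidth needs on that path */
	icc_set(path, avg_bw, peak_bw);

	/* release the path and drop the constraints when done */
	icc_put(path);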
Below is a simplified diagram of a real-world SoC topology. The interconnect
providers are the NoCs.
+----------------+ +----------------+
| HW Accelerator |--->| M NoC |<---------------+
+----------------+ +----------------+ |
| | +------------+
+-----+ +-------------+ V +------+ | |
| DDR | | +--------+ | PCIe | | |
+-----+ | | Slaves | +------+ | |
^ ^ | +--------+ | | C NoC |
| | V V | |
+------------------+ +------------------------+ | | +-----+
| |-->| |-->| |-->| CPU |
| |-->| |<--| | +-----+
| Mem NoC | | S NoC | +------------+
| |<--| |---------+ |
| |<--| |<------+ | | +--------+
+------------------+ +------------------------+ | | +-->| Slaves |
^ ^ ^ ^ ^ | | +--------+
| | | | | | V
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
| CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
+------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
|
+-------+
| Modem |
+-------+
TODO:
* Create icc_set_extended() to handle parameters such as latency and other
QoS values.
* Convert from using global node identifiers to local per provider ids.
* Cache the path between the nodes instead of walking the graph on each get().
* Sync interconnect requests with the idle state of the device.
Changes since patchset v8 (https://lkml.org/lkml/2018/8/10/387)
* Fixed the names of the files when built as modules.
* Corrected some typos in comments.
Changes since patchset v7 (https://lkml.org/lkml/2018/7/31/647)
* Addressed comments on kernel-doc and grammar. (Randy)
* Picked Reviewed-by: Evan
* Squashed consumer and provider DT bindings into single patch. (Rob)
* Cleaned-up msm8916 DT bindings docs by removing unused port ids.
* Updated documentation for the cases when NULL is returned. (Saravana)
* New patch to add myself as maintainer.
Changes since patchset v6 (https://lkml.org/lkml/2018/7/9/698)
* [patches 1,6]: Move the aggregation within the provider from the framework to
the platform driver's set() callback, as the aggregation point could be SoC
specific.
* [patch 1]: Include missing header, reset state only of the traversed nodes,
move more code into path_init(), add more asserts, move misplaced mutex,
simplify icc_link_destroy() (Evan)
* [patch 1]: Fix the order of requests to go from source to destination. (Alex)
* [patch 7]: Use better wording in the documentation. (Evan)
* [patch 6]: Reorder struct members, sort nodes alphabetically, improve naming
of variables, add missing clk_disable_unprepare() in error paths. (Matthias)
* [patch 6]: Remove redundant NULL pointer check in msm8916 driver. (Alex)
* [patch 6]: Add missing depend on QCOM_SMD_RPM in Kconfig. (Evan)
* [patch 3]: Don't check for errors on debugfs calls, remove debugfs directory
when module is unloaded. (Greg)
Changes since patchset v5 (https://lkml.org/lkml/2018/6/20/453)
* Fix the modular build, make rpm-smd driver a module.
* Optimize locking and move to higher level. (Evan)
* Code cleanups. Fix typos. (Evan, Matthias)
* Add the source node to the path. (Evan)
* Rename path_allocate() to path_init() with minor refactoring. (Evan)
* Rename *_remove() functions to *_destroy().
* Return fixed errors in icc_link_destroy(). (Evan)
* Fix krealloc() usage in icc_link_destroy(). (Evan)
* Add missing kfree() in icc_node_create(). (Matthias)
* Make icc_node_add() return void. (Matthias)
* Change mutex_init to mutex_lock in icc_provider_add(). (Matthias)
* Add new icc_node_del() function to delete nodes from provider.
* Fix the header guard to reflect the path in smd-rpm.h. (Evan)
* Check for errors returned by qcom_icc_rpm_smd_send(). (Evan)
* Propagate the error of icc_provider_del(). (Evan)
Changes since patchset v4 (https://lkml.org/lkml/2018/3/9/856)
* Simplified locking by using a single global mutex. (Evan)
* Changed the aggregation function interface.
* Implemented functions for node, link, provider removal. (Evan)
* Naming changes on variables and functions, removed redundant code. (Evan)
* Fixes and clarifications in the docs. (Matthias, Evan, Amit, Alexandre)
* Removed mandatory reg DT property, made interconnect-names optional. (Bjorn)
* Made interconnect-cells property required to align with other bindings. (Neil)
* Moved msm8916 specific bindings into a separate file and patch. (Bjorn)
* Use the names, instead of the hardcoded ids for topology. (Matthias)
* Init the node before creating the links. (Evan)
* Added icc_units_to_bps macro. (Amit)
Changes since patchset v3 (https://lkml.org/lkml/2017/9/8/544)
* Refactored the constraints aggregation.
* Use the IDR API.
* Split the provider and consumer bindings into separate patches and propose
new bindings for consumers, which allow specifying the local source port.
* Adopted the icc_ prefix for API functions.
* Introduced separate API functions for creating interconnect nodes and links.
* Added DT lookup support in addition to platform data.
* Dropped the event tracing patch for now.
* Added a patch to provide summary via debugfs.
* Use macro for the list of topology definitions in the platform driver.
* Various minor changes.
Changes since patchset v2 (https://lkml.org/lkml/2017/7/20/825)
* Split the aggregation into per node and per provider. Cache the
aggregated values.
* Various small refactorings and cleanups in the framework.
* Added a patch introducing basic tracepoint support for monitoring
the time required to update the interconnect nodes.
Changes since patchset v1 (https://lkml.org/lkml/2017/6/27/890)
* Updates in the documentation.
* Changes in request aggregation, locking.
* Dropped the aggregate() callback and use the default, as it is currently
sufficient for the single vendor driver. Will add it later when needed.
* Dropped the dt-bindings draft patch for now.
Changes since RFC v2 (https://lkml.org/lkml/2017/6/12/316)
* Converted documentation to rst format.
* Fixed an incorrect call to mutex_lock. Renamed max_bw to peak_bw.
Changes since RFC v1 (https://lkml.org/lkml/2017/5/15/605)
* Refactored code into shorter functions.
* Added a new aggregate() API function.
* Rearranged some structs to reduce padding bytes.
Changes since RFC v0 (https://lkml.org/lkml/2017/3/1/599)
* Removed DT support and added optional Patch 3 with new bindings proposal.
* Converted the topology into internal driver data.
* Made the framework modular.
* interconnect_get() now takes src and dst ports as arguments.
* Removed public declarations of some structs.
* Now passing prev/next nodes to the vendor driver.
* Properly remove requests on _put().
* Added refcounting.
* Updated documentation.
* Changed struct interconnect_path to use array instead of linked list.
Georgi Djakov (8):
interconnect: Add generic on-chip interconnect API
dt-bindings: Introduce interconnect binding
interconnect: Allow endpoints translation via DT
interconnect: Add debugfs support
interconnect: qcom: Add RPM communication
dt-bindings: interconnect: Document qcom,msm8916 NoC bindings
interconnect: qcom: Add msm8916 interconnect provider driver
MAINTAINERS: add a maintainer for the interconnect API
.../bindings/interconnect/interconnect.txt | 60 ++
.../bindings/interconnect/qcom-msm8916.txt | 41 +
.../bindings/interconnect/qcom-smd.txt | 32 +
Documentation/interconnect/interconnect.rst | 94 +++
MAINTAINERS | 10 +
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/interconnect/Kconfig | 15 +
drivers/interconnect/Makefile | 6 +
drivers/interconnect/core.c | 729 ++++++++++++++++++
drivers/interconnect/qcom/Kconfig | 22 +
drivers/interconnect/qcom/Makefile | 7 +
drivers/interconnect/qcom/msm8916.c | 510 ++++++++++++
drivers/interconnect/qcom/smd-rpm.c | 91 +++
drivers/interconnect/qcom/smd-rpm.h | 15 +
include/dt-bindings/interconnect/qcom.h | 98 +++
include/linux/interconnect-provider.h | 125 +++
include/linux/interconnect.h | 49 ++
18 files changed, 1907 insertions(+)
create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
create mode 100644 Documentation/interconnect/interconnect.rst
create mode 100644 drivers/interconnect/Kconfig
create mode 100644 drivers/interconnect/Makefile
create mode 100644 drivers/interconnect/core.c
create mode 100644 drivers/interconnect/qcom/Kconfig
create mode 100644 drivers/interconnect/qcom/Makefile
create mode 100644 drivers/interconnect/qcom/msm8916.c
create mode 100644 drivers/interconnect/qcom/smd-rpm.c
create mode 100644 drivers/interconnect/qcom/smd-rpm.h
create mode 100644 include/dt-bindings/interconnect/qcom.h
create mode 100644 include/linux/interconnect-provider.h
create mode 100644 include/linux/interconnect.h
This binding is intended to represent the relations between the interconnect
controllers (providers) and consumer device nodes. It will allow creating links
between consumers and interconnect paths (exposed by interconnect providers).
Signed-off-by: Georgi Djakov <[email protected]>
---
.../bindings/interconnect/interconnect.txt | 60 +++++++++++++++++++
1 file changed, 60 insertions(+)
create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..5cb7d3c8d44d
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,60 @@
+Interconnect Provider Device Tree Bindings
+==========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose interconnect-related capabilities such as throughput,
+latency and priority to consumer drivers. The consumer drivers set
+constraints on an interconnect path (or endpoints) depending on the use
+case. Interconnect providers can also be interconnect consumers, such as in
+the case where two network-on-chip fabrics interface directly.
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in an interconnect specifier needed to
+ encode the interconnect node id
+
+Example:
+
+ snoc: snoc@580000 {
+ compatible = "qcom,msm8916-snoc";
+ #interconnect-cells = <1>;
+ reg = <0x580000 0x14000>;
+ clock-names = "bus_clk", "bus_a_clk";
+ clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+ <&rpmcc RPM_SMD_SNOC_A_CLK>;
+ };
+
+
+= interconnect consumers =
+
+The interconnect consumers are device nodes which dynamically express their
+bandwidth requirements along interconnect paths they are connected to. There
+can be multiple interconnect providers on a SoC and the consumer may consume
+multiple paths from different providers depending on use case and the
+components it has to interact with.
+
+Required properties:
+- interconnects : Pairs of phandles and interconnect provider specifiers to
+	denote the edge source and destination ports of the interconnect path.
+
+Optional properties:
+- interconnect-names : List of interconnect path name strings sorted in the
+	same order as the interconnects property. Consumer drivers will use
+	interconnect-names to match interconnect paths with interconnect
+	specifiers.
+
+Example:
+
+ sdhci@7864000 {
+ ...
+ interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
+ interconnect-names = "ddr";
+ };
Currently we support only platform data for specifying the interconnect
endpoints. As the endpoints are hard-coded into the consumer drivers, this
may lead to complications when a single driver is used by multiple SoCs that
have different interconnect topologies.
To avoid cluttering the consumer drivers, introduce a translation function
that obtains the board-specific interconnect data from the device tree.
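A consumer driver can then look up its path by name instead of hard-coding
the endpoint ids. A minimal sketch (the "ddr" path name matches the example
in the consumer DT bindings and is hypothetical for any given board):

	struct icc_path *path;

	path = of_icc_get(&pdev->dev, "ddr");
	if (IS_ERR(path))
		return PTR_ERR(path);
	if (!path)
		return 0;	/* API disabled or no "interconnects" in DT */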
Signed-off-by: Georgi Djakov <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
drivers/interconnect/core.c | 78 ++++++++++++++++++++++++++++++++++++
include/linux/interconnect.h | 7 ++++
2 files changed, 85 insertions(+)
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 6d932245a610..ed0a6783ffc4 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -15,6 +15,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
+#include <linux/of.h>
#include <linux/overflow.h>
static DEFINE_IDR(icc_idr);
@@ -192,6 +193,83 @@ static int apply_constraints(struct icc_path *path)
return ret;
}
+/**
+ * of_icc_get() - get a path handle from a DT node based on name
+ * @dev: device pointer for the consumer device
+ * @name: interconnect path name
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release constraints when
+ * they are not needed anymore.
+ * If the interconnect API is disabled, NULL is returned and the consumer
+ * drivers will still build. Drivers are free to handle this specifically,
+ * but they don't have to. NULL is also returned when the "interconnects"
+ * DT property is missing.
+ *
+ * Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
+ * when the API is disabled or the "interconnects" DT property is missing.
+ */
+struct icc_path *of_icc_get(struct device *dev, const char *name)
+{
+ struct device_node *np = NULL;
+ struct of_phandle_args src_args, dst_args;
+ u32 src_id, dst_id;
+ int idx = 0;
+ int ret;
+
+ if (!dev || !dev->of_node)
+ return ERR_PTR(-ENODEV);
+
+ np = dev->of_node;
+
+ /*
+	 * When the consumer DT node does not have an "interconnects" property,
+ * return a NULL path to skip setting constraints.
+ */
+ if (!of_find_property(np, "interconnects", NULL))
+ return NULL;
+
+ /*
+ * We use a combination of phandle and specifier for endpoint. For now
+	 * let's support only global ids and extend this in the future if
+	 * needed without breaking DT compatibility.
+ */
+ if (name) {
+ idx = of_property_match_string(np, "interconnect-names", name);
+ if (idx < 0)
+ return ERR_PTR(idx);
+ }
+
+ ret = of_parse_phandle_with_args(np, "interconnects",
+ "#interconnect-cells", idx * 2,
+ &src_args);
+ if (ret)
+ return ERR_PTR(ret);
+
+ of_node_put(src_args.np);
+
+ if (!src_args.args_count || src_args.args_count > 1)
+ return ERR_PTR(-EINVAL);
+
+ src_id = src_args.args[0];
+
+ ret = of_parse_phandle_with_args(np, "interconnects",
+ "#interconnect-cells", idx * 2 + 1,
+ &dst_args);
+ if (ret)
+ return ERR_PTR(ret);
+
+ of_node_put(dst_args.np);
+
+ if (!dst_args.args_count || dst_args.args_count > 1)
+ return ERR_PTR(-EINVAL);
+
+ dst_id = dst_args.args[0];
+
+ return icc_get(dev, src_id, dst_id);
+}
+EXPORT_SYMBOL_GPL(of_icc_get);
+
/**
* icc_set() - set constraints on an interconnect path between two endpoints
* @path: reference to the path returned by icc_get()
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
index 7fd6e8d0696c..dc48e441d005 100644
--- a/include/linux/interconnect.h
+++ b/include/linux/interconnect.h
@@ -17,6 +17,7 @@ struct device;
struct icc_path *icc_get(struct device *dev, const int src_id,
const int dst_id);
+struct icc_path *of_icc_get(struct device *dev, const char *name);
void icc_put(struct icc_path *path);
int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
@@ -28,6 +29,12 @@ static inline struct icc_path *icc_get(struct device *dev, const int src_id,
return NULL;
}
+static inline struct icc_path *of_icc_get(struct device *dev,
+ const char *name)
+{
+ return NULL;
+}
+
static inline void icc_put(struct icc_path *path)
{
}
Add myself as a maintainer of the interconnect API.
Signed-off-by: Georgi Djakov <[email protected]>
---
MAINTAINERS | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index a5b256b25905..bdbab6a90c93 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7578,6 +7578,16 @@ L: [email protected]
S: Maintained
F: drivers/gpio/gpio-intel-mid.c
+INTERCONNECT API
+M: Georgi Djakov <[email protected]>
+S: Maintained
+F: Documentation/interconnect/
+F: Documentation/devicetree/bindings/interconnect/
+F: drivers/interconnect/
+F: include/dt-bindings/interconnect/
+F: include/linux/interconnect-provider.h
+F: include/linux/interconnect.h
+
INVENSENSE MPU-3050 GYROSCOPE DRIVER
M: Linus Walleij <[email protected]>
L: [email protected]
Add a driver for the Qualcomm interconnect buses found on msm8916-based
platforms.
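As a rough illustration of the bandwidth aggregation in this driver (the
numbers below are hypothetical and in the framework's bandwidth units): if
the nodes of a provider currently carry average bandwidths of 100 and 200
and peak bandwidths of 150 and 250, the set() callback computes

	sum_bw      = 100 + 200     = 300
	max_peak_bw = max(150, 250) = 250

converts the larger of the two values to bytes per second, divides it by the
node buswidth and requests the result as the new bus clock rate via
clk_set_rate().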
Signed-off-by: Georgi Djakov <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
drivers/interconnect/Kconfig | 5 +
drivers/interconnect/Makefile | 1 +
drivers/interconnect/qcom/Kconfig | 9 +
drivers/interconnect/qcom/Makefile | 2 +
drivers/interconnect/qcom/msm8916.c | 510 ++++++++++++++++++++++++++++
5 files changed, 527 insertions(+)
create mode 100644 drivers/interconnect/qcom/msm8916.c
diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
index a261c7d41deb..07a8276fa35a 100644
--- a/drivers/interconnect/Kconfig
+++ b/drivers/interconnect/Kconfig
@@ -8,3 +8,8 @@ menuconfig INTERCONNECT
If unsure, say no.
+if INTERCONNECT
+
+source "drivers/interconnect/qcom/Kconfig"
+
+endif
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
index 7a01f33b5593..28f2ab0824d5 100644
--- a/drivers/interconnect/Makefile
+++ b/drivers/interconnect/Makefile
@@ -3,3 +3,4 @@
icc-core-objs := core.o
obj-$(CONFIG_INTERCONNECT) += icc-core.o
+obj-$(CONFIG_INTERCONNECT_QCOM) += qcom/
diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
index 6146552fac86..8a69c71fb372 100644
--- a/drivers/interconnect/qcom/Kconfig
+++ b/drivers/interconnect/qcom/Kconfig
@@ -11,3 +11,12 @@ config INTERCONNECT_QCOM_SMD_RPM
help
This is a driver for communicating interconnect related configuration
details with a remote processor (RPM) on Qualcomm platforms.
+
+config INTERCONNECT_QCOM_MSM8916
+ tristate "Qualcomm MSM8916 interconnect driver"
+ depends on INTERCONNECT_QCOM
+ depends on QCOM_SMD_RPM
+ select INTERCONNECT_QCOM_SMD_RPM
+ help
+ This is a driver for the Qualcomm Network-on-Chip on msm8916-based
+ platforms.
diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
index f4477ad22ba2..3d9d1f402db6 100644
--- a/drivers/interconnect/qcom/Makefile
+++ b/drivers/interconnect/qcom/Makefile
@@ -1,5 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
qnoc-smd-rpm-objs := smd-rpm.o
+qnoc-msm8916-objs := msm8916.o
obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += qnoc-smd-rpm.o
+obj-$(CONFIG_INTERCONNECT_QCOM_MSM8916) += qnoc-msm8916.o
diff --git a/drivers/interconnect/qcom/msm8916.c b/drivers/interconnect/qcom/msm8916.c
new file mode 100644
index 000000000000..74145f4f74f1
--- /dev/null
+++ b/drivers/interconnect/qcom/msm8916.c
@@ -0,0 +1,510 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 Linaro Ltd
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#include <dt-bindings/interconnect/qcom.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "smd-rpm.h"
+
+#define RPM_BUS_MASTER_REQ 0x73616d62 /* "bmas" in little-endian ASCII */
+#define RPM_BUS_SLAVE_REQ 0x766c7362 /* "bslv" in little-endian ASCII */
+
+#define to_qcom_provider(_provider) \
+ container_of(_provider, struct qcom_icc_provider, provider)
+
+enum qcom_qos_mode {
+ QCOM_QOS_MODE_BYPASS = 0,
+ QCOM_QOS_MODE_FIXED,
+ QCOM_QOS_MODE_MAX,
+};
+
+struct qcom_icc_provider {
+ struct icc_provider provider;
+ void __iomem *base;
+ struct clk *bus_clk;
+ struct clk *bus_a_clk;
+};
+
+#define MSM8916_MAX_LINKS 8
+
+/**
+ * struct qcom_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @id: a unique node identifier
+ * @links: an array of nodes where we can go next while traversing
+ * @num_links: the total number of @links
+ * @port: the offset index into the master's QoS register space
+ * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @ap_owned: indicates whether the AP CPU writes to the QoS registers
+ * @qos_mode: QoS mode for ap_owned resources
+ * @mas_rpm_id: RPM id for devices that are bus masters
+ * @slv_rpm_id: RPM id for devices that are bus slaves
+ * @rate: current bus clock rate in Hz
+ */
+struct qcom_icc_node {
+ unsigned char *name;
+ u16 id;
+ u16 links[MSM8916_MAX_LINKS];
+ u16 num_links;
+ u16 port;
+ u16 buswidth;
+ bool ap_owned;
+ enum qcom_qos_mode qos_mode;
+ int mas_rpm_id;
+ int slv_rpm_id;
+ u64 rate;
+};
+
+struct qcom_icc_desc {
+ struct qcom_icc_node **nodes;
+ size_t num_nodes;
+};
+
+#define DEFINE_QNODE(_name, _id, _port, _buswidth, _ap_owned, \
+ _mas_rpm_id, _slv_rpm_id, _qos_mode, \
+ _numlinks, ...) \
+ static struct qcom_icc_node _name = { \
+ .name = #_name, \
+ .id = _id, \
+ .port = _port, \
+ .buswidth = _buswidth, \
+ .ap_owned = _ap_owned, \
+ .mas_rpm_id = _mas_rpm_id, \
+ .slv_rpm_id = _slv_rpm_id, \
+ .qos_mode = _qos_mode, \
+ .num_links = _numlinks, \
+ .links = { __VA_ARGS__ }, \
+ }
+
+DEFINE_QNODE(bimc_snoc_mas, BIMC_SNOC_MAS, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, BIMC_SNOC_SLV);
+DEFINE_QNODE(bimc_snoc_slv, BIMC_SNOC_SLV, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 2, SNOC_INT_0, SNOC_INT_1);
+DEFINE_QNODE(mas_apss, MASTER_AMPSS_M0, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_audio, MASTER_LPASS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_blsp_1, MASTER_BLSP_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_1);
+DEFINE_QNODE(mas_dehr, MASTER_DEHR, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_gfx, MASTER_GRAPHICS_3D, 2, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_jpeg, MASTER_JPEG, 6, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_mdp, MASTER_MDP_PORT0, 7, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_pnoc_crypto_0, MASTER_CRYPTO_CORE0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_pnoc_sdcc_1, MASTER_SDCC_1, 7, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_pnoc_sdcc_2, MASTER_SDCC_2, 8, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_1);
+DEFINE_QNODE(mas_qdss_bam, MASTER_QDSS_BAM, 11, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_qdss_etr, MASTER_QDSS_ETR, 10, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_snoc_cfg, MASTER_SNOC_CFG, 0, 16, 0, 20, -1, QCOM_QOS_MODE_BYPASS, 1, SNOC_QDSS_INT);
+DEFINE_QNODE(mas_spdm, MASTER_SPDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_0);
+DEFINE_QNODE(mas_tcu0, MASTER_TCU_0, 5, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_tcu1, MASTER_TCU_1, 6, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_EBI_CH0, BIMC_SNOC_MAS, SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_usb_hs, MASTER_USB_HS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_M_1);
+DEFINE_QNODE(mas_vfe, MASTER_VFE, 9, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_1, SNOC_MM_INT_2);
+DEFINE_QNODE(mas_video, MASTER_VIDEO_P0, 8, 16, 1, -1, -1, QCOM_QOS_MODE_BYPASS, 2, SNOC_MM_INT_0, SNOC_MM_INT_2);
+DEFINE_QNODE(mm_int_0, SNOC_MM_INT_0, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_1, SNOC_MM_INT_1, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_2, SNOC_MM_INT_2, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_INT_0);
+DEFINE_QNODE(mm_int_bimc, SNOC_MM_INT_BIMC, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_1_MAS);
+DEFINE_QNODE(pnoc_int_0, PNOC_INT_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 8, PNOC_SNOC_MAS, PNOC_SLV_0, PNOC_SLV_1, PNOC_SLV_2, PNOC_SLV_3, PNOC_SLV_4, PNOC_SLV_8, PNOC_SLV_9);
+DEFINE_QNODE(pnoc_int_1, PNOC_INT_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_MAS);
+DEFINE_QNODE(pnoc_m_0, PNOC_M_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_0);
+DEFINE_QNODE(pnoc_m_1, PNOC_M_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_MAS);
+DEFINE_QNODE(pnoc_s_0, PNOC_SLV_0, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_CLK_CTL, SLAVE_TLMM, SLAVE_MSM_TCSR, SLAVE_SECURITY, SLAVE_MSS);
+DEFINE_QNODE(pnoc_s_1, PNOC_SLV_1, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_IMEM_CFG, SLAVE_CRYPTO_0_CFG, SLAVE_RPM_MSG_RAM, SLAVE_MSM_PDM, SLAVE_PRNG);
+DEFINE_QNODE(pnoc_s_2, PNOC_SLV_2, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_SPDM, SLAVE_BOOT_ROM, SLAVE_BIMC_CFG, SLAVE_PNOC_CFG, SLAVE_PMIC_ARB);
+DEFINE_QNODE(pnoc_s_3, PNOC_SLV_3, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 5, SLAVE_MPM, SLAVE_SNOC_CFG, SLAVE_RBCPR_CFG, SLAVE_QDSS_CFG, SLAVE_DEHR_CFG);
+DEFINE_QNODE(pnoc_s_4, PNOC_SLV_4, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_VENUS_CFG, SLAVE_CAMERA_CFG, SLAVE_DISPLAY_CFG);
+DEFINE_QNODE(pnoc_s_8, PNOC_SLV_8, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_USB_HS, SLAVE_SDCC_1, SLAVE_BLSP_1);
+DEFINE_QNODE(pnoc_s_9, PNOC_SLV_9, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 3, SLAVE_SDCC_4, SLAVE_LPASS, SLAVE_GRAPHICS_3D_CFG);
+DEFINE_QNODE(pnoc_snoc_mas, PNOC_SNOC_MAS, 0, 8, 0, 29, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_SNOC_SLV);
+DEFINE_QNODE(pnoc_snoc_slv, PNOC_SNOC_SLV, 0, 8, 0, -1, 45, QCOM_QOS_MODE_FIXED, 3, SNOC_INT_0, SNOC_INT_BIMC, SNOC_INT_1);
+DEFINE_QNODE(qdss_int, SNOC_QDSS_INT, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 2, SNOC_INT_0, SNOC_INT_BIMC);
+DEFINE_QNODE(slv_apps_l2, SLAVE_AMPSS_L2, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_apss, SYSTEM_SLAVE_FAB_APPS, 0, 4, 0, -1, 20, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_audio, SLAVE_LPASS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_bimc_cfg, SLAVE_BIMC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_blsp_1, SLAVE_BLSP_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_boot_rom, SLAVE_BOOT_ROM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_camera_cfg, SLAVE_CAMERA_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_cats_0, SLAVE_CATS_128, 0, 16, 0, -1, 106, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_cats_1, SLAVE_OCMEM_64, 0, 8, 0, -1, 107, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_clk_ctl, SLAVE_CLK_CTL, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_crypto_0_cfg, SLAVE_CRYPTO_0_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_dehr_cfg, SLAVE_DEHR_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_display_cfg, SLAVE_DISPLAY_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_ebi_ch0, SLAVE_EBI_CH0, 0, 8, 0, -1, 0, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_gfx_cfg, SLAVE_GRAPHICS_3D_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_imem_cfg, SLAVE_IMEM_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_imem, SLAVE_SYSTEM_IMEM, 0, 8, 0, -1, 26, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_mpm, SLAVE_MPM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_msg_ram, SLAVE_RPM_MSG_RAM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_mss, SLAVE_MSS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pdm, SLAVE_MSM_PDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pmic_arb, SLAVE_PMIC_ARB, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_pnoc_cfg, SLAVE_PNOC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_prng, SLAVE_PRNG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_qdss_cfg, SLAVE_QDSS_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_qdss_stm, SLAVE_QDSS_STM, 0, 4, 0, -1, 30, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_rbcpr_cfg, SLAVE_RBCPR_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_sdcc_1, SLAVE_SDCC_1, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_sdcc_2, SLAVE_SDCC_4, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_security, SLAVE_SECURITY, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_snoc_cfg, SLAVE_SNOC_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_spdm, SLAVE_SPDM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_srvc_snoc, SLAVE_SERVICE_SNOC, 0, 8, 0, -1, 29, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_tcsr, SLAVE_MSM_TCSR, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_tlmm, SLAVE_TLMM, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_usb_hs, SLAVE_USB_HS, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(slv_venus_cfg, SLAVE_VENUS_CFG, 0, 4, 0, -1, -1, QCOM_QOS_MODE_FIXED, 0, 0);
+DEFINE_QNODE(snoc_bimc_0_mas, SNOC_BIMC_0_MAS, 0, 8, 0, 3, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_0_SLV);
+DEFINE_QNODE(snoc_bimc_0_slv, SNOC_BIMC_0_SLV, 0, 8, 0, -1, 24, QCOM_QOS_MODE_FIXED, 1, SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_bimc_1_mas, SNOC_BIMC_1_MAS, 0, 16, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_1_SLV);
+DEFINE_QNODE(snoc_bimc_1_slv, SNOC_BIMC_1_SLV, 0, 8, 1, -1, -1, QCOM_QOS_MODE_FIXED, 1, SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_int_0, SNOC_INT_0, 0, 8, 0, 99, 130, QCOM_QOS_MODE_FIXED, 3, SLAVE_QDSS_STM, SLAVE_SYSTEM_IMEM, SNOC_PNOC_MAS);
+DEFINE_QNODE(snoc_int_1, SNOC_INT_1, 0, 8, 0, 100, 131, QCOM_QOS_MODE_FIXED, 3, SYSTEM_SLAVE_FAB_APPS, SLAVE_CATS_128, SLAVE_OCMEM_64);
+DEFINE_QNODE(snoc_int_bimc, SNOC_INT_BIMC, 0, 8, 0, 101, 132, QCOM_QOS_MODE_FIXED, 1, SNOC_BIMC_0_MAS);
+DEFINE_QNODE(snoc_pnoc_mas, SNOC_PNOC_MAS, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, SNOC_PNOC_SLV);
+DEFINE_QNODE(snoc_pnoc_slv, SNOC_PNOC_SLV, 0, 8, 0, -1, -1, QCOM_QOS_MODE_FIXED, 1, PNOC_INT_0);
+
+static struct qcom_icc_node *msm8916_snoc_nodes[] = {
+ &bimc_snoc_slv,
+ &mas_jpeg,
+ &mas_mdp,
+ &mas_qdss_bam,
+ &mas_qdss_etr,
+ &mas_snoc_cfg,
+ &mas_vfe,
+ &mas_video,
+ &mm_int_0,
+ &mm_int_1,
+ &mm_int_2,
+ &mm_int_bimc,
+ &pnoc_snoc_slv,
+ &qdss_int,
+ &slv_apss,
+ &slv_cats_0,
+ &slv_cats_1,
+ &slv_imem,
+ &slv_qdss_stm,
+ &slv_srvc_snoc,
+ &snoc_bimc_0_mas,
+ &snoc_bimc_1_mas,
+ &snoc_int_0,
+ &snoc_int_1,
+ &snoc_int_bimc,
+ &snoc_pnoc_mas,
+};
+
+static struct qcom_icc_desc msm8916_snoc = {
+ .nodes = msm8916_snoc_nodes,
+ .num_nodes = ARRAY_SIZE(msm8916_snoc_nodes),
+};
+
+static struct qcom_icc_node *msm8916_bimc_nodes[] = {
+ &bimc_snoc_mas,
+ &mas_apss,
+ &mas_gfx,
+ &mas_tcu0,
+ &mas_tcu1,
+ &slv_apps_l2,
+ &slv_ebi_ch0,
+ &snoc_bimc_0_slv,
+ &snoc_bimc_1_slv,
+};
+
+static struct qcom_icc_desc msm8916_bimc = {
+ .nodes = msm8916_bimc_nodes,
+ .num_nodes = ARRAY_SIZE(msm8916_bimc_nodes),
+};
+
+static struct qcom_icc_node *msm8916_pnoc_nodes[] = {
+ &mas_audio,
+ &mas_blsp_1,
+ &mas_dehr,
+ &mas_pnoc_crypto_0,
+ &mas_pnoc_sdcc_1,
+ &mas_pnoc_sdcc_2,
+ &mas_spdm,
+ &mas_usb_hs,
+ &pnoc_int_0,
+ &pnoc_int_1,
+ &pnoc_m_0,
+ &pnoc_m_1,
+ &pnoc_s_0,
+ &pnoc_s_1,
+ &pnoc_s_2,
+ &pnoc_s_3,
+ &pnoc_s_4,
+ &pnoc_s_8,
+ &pnoc_s_9,
+ &pnoc_snoc_mas,
+ &slv_audio,
+ &slv_bimc_cfg,
+ &slv_blsp_1,
+ &slv_boot_rom,
+ &slv_camera_cfg,
+ &slv_clk_ctl,
+ &slv_crypto_0_cfg,
+ &slv_dehr_cfg,
+ &slv_display_cfg,
+ &slv_gfx_cfg,
+ &slv_imem_cfg,
+ &slv_mpm,
+ &slv_msg_ram,
+ &slv_mss,
+ &slv_pdm,
+ &slv_pmic_arb,
+ &slv_pnoc_cfg,
+ &slv_prng,
+ &slv_qdss_cfg,
+ &slv_rbcpr_cfg,
+ &slv_sdcc_1,
+ &slv_sdcc_2,
+ &slv_security,
+ &slv_snoc_cfg,
+ &slv_spdm,
+ &slv_tcsr,
+ &slv_tlmm,
+ &slv_usb_hs,
+ &slv_venus_cfg,
+ &snoc_pnoc_slv,
+};
+
+static struct qcom_icc_desc msm8916_pnoc = {
+ .nodes = msm8916_pnoc_nodes,
+ .num_nodes = ARRAY_SIZE(msm8916_pnoc_nodes),
+};
+
+static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
+ u32 *agg_avg, u32 *agg_peak)
+{
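+	/* sum the average bandwidth requests, take the maximum of the peaks */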
+ *agg_avg += avg_bw;
+ *agg_peak = max(*agg_peak, peak_bw);
+
+ return 0;
+}
+
+static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+ struct qcom_icc_provider *qp;
+ struct qcom_icc_node *qn;
+ struct icc_provider *provider;
+ struct icc_node *n;
+ u64 sum_bw;
+ u64 max_peak_bw;
+ u64 rate;
+ u32 agg_avg = 0;
+ u32 agg_peak = 0;
+ int ret = 0;
+
+ qn = src->data;
+ provider = src->provider;
+ qp = to_qcom_provider(provider);
+
+ list_for_each_entry(n, &provider->nodes, node_list)
+ qcom_icc_aggregate(n, n->avg_bw, n->peak_bw,
+ &agg_avg, &agg_peak);
+
+ sum_bw = icc_units_to_bps(agg_avg);
+ max_peak_bw = icc_units_to_bps(agg_peak);
+
+ /* set bandwidth */
+ if (qn->ap_owned) {
+ /* TODO: set QoS */
+ } else {
+ /* send message to the RPM processor */
+ if (qn->mas_rpm_id != -1) {
+ ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+ RPM_BUS_MASTER_REQ,
+ qn->mas_rpm_id,
+ sum_bw);
+ if (ret) {
+ pr_err("qcom_icc_rpm_smd_send mas error %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ if (qn->slv_rpm_id != -1) {
+ ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+ RPM_BUS_SLAVE_REQ,
+ qn->slv_rpm_id,
+ sum_bw);
+ if (ret) {
+ pr_err("qcom_icc_rpm_smd_send slv error %d\n",
+ ret);
+ return ret;
+ }
+ }
+ }
+
+ rate = max(sum_bw, max_peak_bw);
+
+ do_div(rate, qn->buswidth);
+
+ if (qn->rate != rate) {
+ ret = clk_set_rate(qp->bus_clk, rate);
+ if (ret) {
+			pr_err("set clk rate %llu error %d\n", rate, ret);
+ return ret;
+ }
+
+ ret = clk_set_rate(qp->bus_a_clk, rate);
+ if (ret) {
+			pr_err("set clk rate %llu error %d\n", rate, ret);
+ return ret;
+ }
+
+ qn->rate = rate;
+ }
+
+ return ret;
+}
+
+static int qnoc_probe(struct platform_device *pdev)
+{
+ const struct qcom_icc_desc *desc;
+ struct qcom_icc_node **qnodes;
+ struct icc_node *node;
+ struct qcom_icc_provider *qp;
+ struct resource *res;
+ struct icc_provider *provider;
+ size_t num_nodes, i;
+ int ret;
+
+ /* wait for RPM */
+ if (!qcom_icc_rpm_smd_available())
+ return -EPROBE_DEFER;
+
+ desc = of_device_get_match_data(&pdev->dev);
+ if (!desc)
+ return -EINVAL;
+
+ qnodes = desc->nodes;
+ num_nodes = desc->num_nodes;
+
+ qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
+ if (!qp)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ qp->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(qp->base))
+ return PTR_ERR(qp->base);
+
+ qp->bus_clk = devm_clk_get(&pdev->dev, "bus_clk");
+ if (IS_ERR(qp->bus_clk))
+ return PTR_ERR(qp->bus_clk);
+
+ ret = clk_prepare_enable(qp->bus_clk);
+ if (ret) {
+ dev_err(&pdev->dev, "error enabling bus_clk: %d\n", ret);
+ return ret;
+ }
+
+ qp->bus_a_clk = devm_clk_get(&pdev->dev, "bus_a_clk");
+ if (IS_ERR(qp->bus_a_clk))
+ return PTR_ERR(qp->bus_a_clk);
+
+ ret = clk_prepare_enable(qp->bus_a_clk);
+ if (ret) {
+ dev_err(&pdev->dev, "error enabling bus_a_clk: %d\n", ret);
+ clk_disable_unprepare(qp->bus_clk);
+ return ret;
+ }
+
+ provider = &qp->provider;
+ provider->dev = &pdev->dev;
+ provider->set = &qcom_icc_set;
+ provider->aggregate = &qcom_icc_aggregate;
+ INIT_LIST_HEAD(&provider->nodes);
+ provider->data = qp;
+
+ ret = icc_provider_add(provider);
+ if (ret) {
+ dev_err(&pdev->dev, "error adding interconnect provider\n");
+ clk_disable_unprepare(qp->bus_clk);
+ clk_disable_unprepare(qp->bus_a_clk);
+ return ret;
+ }
+
+ for (i = 0; i < num_nodes; i++) {
+ size_t j;
+
+ node = icc_node_create(qnodes[i]->id);
+ if (IS_ERR(node)) {
+ ret = PTR_ERR(node);
+ goto err;
+ }
+
+ node->name = qnodes[i]->name;
+ node->data = qnodes[i];
+ icc_node_add(node, provider);
+
+ dev_dbg(&pdev->dev, "registered node %s\n", node->name);
+
+ /* populate links */
+ for (j = 0; j < qnodes[i]->num_links; j++)
+ if (qnodes[i]->links[j])
+ icc_link_create(node, qnodes[i]->links[j]);
+ }
+
+ platform_set_drvdata(pdev, provider);
+
+ return ret;
+err:
+ list_for_each_entry(node, &provider->nodes, node_list) {
+ icc_node_del(node);
+ icc_node_destroy(node->id);
+ }
+ clk_disable_unprepare(qp->bus_clk);
+ clk_disable_unprepare(qp->bus_a_clk);
+ icc_provider_del(provider);
+
+ return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+ struct icc_provider *provider = platform_get_drvdata(pdev);
+ struct qcom_icc_provider *qp = to_qcom_provider(provider);
+ struct icc_node *n;
+
+ list_for_each_entry(n, &provider->nodes, node_list) {
+ icc_node_del(n);
+ icc_node_destroy(n->id);
+ }
+ clk_disable_unprepare(qp->bus_clk);
+ clk_disable_unprepare(qp->bus_a_clk);
+
+ return icc_provider_del(provider);
+}
+
+static const struct of_device_id qnoc_of_match[] = {
+ { .compatible = "qcom,msm8916-pnoc", .data = &msm8916_pnoc },
+ { .compatible = "qcom,msm8916-snoc", .data = &msm8916_snoc },
+ { .compatible = "qcom,msm8916-bimc", .data = &msm8916_bimc },
+ { },
+};
+MODULE_DEVICE_TABLE(of, qnoc_of_match);
+
+static struct platform_driver qnoc_driver = {
+ .probe = qnoc_probe,
+ .remove = qnoc_remove,
+ .driver = {
+ .name = "qnoc-msm8916",
+ .of_match_table = qnoc_of_match,
+ },
+};
+module_platform_driver(qnoc_driver);
+MODULE_AUTHOR("Georgi Djakov <[email protected]>");
+MODULE_DESCRIPTION("Qualcomm msm8916 NoC driver");
+MODULE_LICENSE("GPL v2");
Document the device-tree bindings of the Network-On-Chip interconnect
hardware found on Qualcomm msm8916 platforms.
Signed-off-by: Georgi Djakov <[email protected]>
---
.../bindings/interconnect/qcom-msm8916.txt | 41 ++++++++
include/dt-bindings/interconnect/qcom.h | 98 +++++++++++++++++++
2 files changed, 139 insertions(+)
create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
create mode 100644 include/dt-bindings/interconnect/qcom.h
diff --git a/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
new file mode 100644
index 000000000000..744df51df4ed
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
@@ -0,0 +1,41 @@
+Qualcomm MSM8916 Network-On-Chip interconnect driver binding
+------------------------------------------------------------
+
+Required properties :
+- compatible : shall contain only one of the following:
+ "qcom,msm8916-bimc"
+ "qcom,msm8916-pnoc"
+ "qcom,msm8916-snoc"
+- #interconnect-cells : should contain 1
+- reg : shall contain base register location and length
+
+Optional properties :
+- clocks : list of phandles and specifiers to all interconnect bus clocks
+- clock-names : clock names should include both "bus_clk" and "bus_a_clk"
+
+Examples:
+
+ snoc: snoc@580000 {
+ compatible = "qcom,msm8916-snoc";
+ #interconnect-cells = <1>;
+ reg = <0x580000 0x14000>;
+ clock-names = "bus_clk", "bus_a_clk";
+ clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+ <&rpmcc RPM_SMD_SNOC_A_CLK>;
+ };
+ bimc: bimc@400000 {
+ compatible = "qcom,msm8916-bimc";
+ #interconnect-cells = <1>;
+ reg = <0x400000 0x62000>;
+ clock-names = "bus_clk", "bus_a_clk";
+ clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
+ <&rpmcc RPM_SMD_BIMC_A_CLK>;
+ };
+ pnoc: pnoc@500000 {
+ compatible = "qcom,msm8916-pnoc";
+ #interconnect-cells = <1>;
+ reg = <0x500000 0x11000>;
+ clock-names = "bus_clk", "bus_a_clk";
+ clocks = <&rpmcc RPM_SMD_PCNOC_CLK>,
+ <&rpmcc RPM_SMD_PCNOC_A_CLK>;
+ };
diff --git a/include/dt-bindings/interconnect/qcom.h b/include/dt-bindings/interconnect/qcom.h
new file mode 100644
index 000000000000..48f944b30e5d
--- /dev/null
+++ b/include/dt-bindings/interconnect/qcom.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm interconnect IDs
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_H
+#define __DT_BINDINGS_INTERCONNECT_QCOM_H
+
+#define BIMC_SNOC_MAS 1
+#define BIMC_SNOC_SLV 2
+#define MASTER_AMPSS_M0 3
+#define MASTER_BLSP_1 4
+#define MASTER_CRYPTO_CORE0 5
+#define MASTER_DEHR 6
+#define MASTER_GRAPHICS_3D 7
+#define MASTER_JPEG 8
+#define MASTER_LPASS 9
+#define MASTER_MDP_PORT0 10
+#define MASTER_QDSS_BAM 11
+#define MASTER_QDSS_ETR 12
+#define MASTER_SDCC_1 13
+#define MASTER_SDCC_2 14
+#define MASTER_SNOC_CFG 15
+#define MASTER_SPDM 16
+#define MASTER_TCU_0 17
+#define MASTER_TCU_1 18
+#define MASTER_USB_HS 19
+#define MASTER_VFE 20
+#define MASTER_VIDEO_P0 21
+#define PNOC_INT_0 22
+#define PNOC_INT_1 23
+#define PNOC_M_0 24
+#define PNOC_M_1 25
+#define PNOC_SLV_0 26
+#define PNOC_SLV_1 27
+#define PNOC_SLV_2 28
+#define PNOC_SLV_3 29
+#define PNOC_SLV_4 30
+#define PNOC_SLV_8 31
+#define PNOC_SLV_9 32
+#define PNOC_SNOC_MAS 33
+#define PNOC_SNOC_SLV 34
+#define SLAVE_AMPSS_L2 35
+#define SLAVE_BIMC_CFG 36
+#define SLAVE_BLSP_1 37
+#define SLAVE_BOOT_ROM 38
+#define SLAVE_CAMERA_CFG 39
+#define SLAVE_CATS_128 40
+#define SLAVE_CLK_CTL 41
+#define SLAVE_CRYPTO_0_CFG 42
+#define SLAVE_DEHR_CFG 43
+#define SLAVE_DISPLAY_CFG 44
+#define SLAVE_EBI_CH0 45
+#define SLAVE_GRAPHICS_3D_CFG 46
+#define SLAVE_IMEM_CFG 47
+#define SLAVE_LPASS 48
+#define SLAVE_MPM 49
+#define SLAVE_MSM_PDM 50
+#define SLAVE_MSM_TCSR 51
+#define SLAVE_MSS 52
+#define SLAVE_OCMEM_64 53
+#define SLAVE_PMIC_ARB 54
+#define SLAVE_PNOC_CFG 55
+#define SLAVE_PRNG 56
+#define SLAVE_QDSS_CFG 57
+#define SLAVE_QDSS_STM 58
+#define SLAVE_RBCPR_CFG 59
+#define SLAVE_RPM_MSG_RAM 60
+#define SLAVE_SDCC_1 61
+#define SLAVE_SDCC_4 62
+#define SLAVE_SECURITY 63
+#define SLAVE_SERVICE_SNOC 64
+#define SLAVE_SNOC_CFG 65
+#define SLAVE_SPDM 66
+#define SLAVE_SYSTEM_IMEM 67
+#define SLAVE_TLMM 68
+#define SLAVE_USB_HS 69
+#define SLAVE_VENUS_CFG 70
+#define SNOC_BIMC_0_MAS 71
+#define SNOC_BIMC_0_SLV 72
+#define SNOC_BIMC_1_MAS 73
+#define SNOC_BIMC_1_SLV 74
+#define SNOC_INT_0 75
+#define SNOC_INT_1 76
+#define SNOC_INT_BIMC 77
+#define SNOC_MM_INT_0 78
+#define SNOC_MM_INT_1 79
+#define SNOC_MM_INT_2 80
+#define SNOC_MM_INT_BIMC 81
+#define SNOC_PNOC_MAS 82
+#define SNOC_PNOC_SLV 83
+#define SNOC_QDSS_INT 84
+#define SYSTEM_SLAVE_FAB_APPS 85
+
+#endif
On some Qualcomm SoCs, there is a remote processor, which controls some of
the Network-On-Chip interconnect resources. Other CPUs express their needs
by communicating with this processor. Add a driver to handle communication
with this remote processor.
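The interconnect platform drivers are expected to use this interface roughly
as follows (a sketch; QCOM_SMD_RPM_ACTIVE_STATE comes from the SMD-RPM
headers, while the request type and resource id here are placeholders that
the platform driver defines from its topology data):

	/* request bandwidth "bw" for a bus master resource */
	ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
				    RPM_BUS_MASTER_REQ, mas_rpm_id, bw);
	if (ret)
		return ret;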
Signed-off-by: Georgi Djakov <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
.../bindings/interconnect/qcom-smd.txt | 32 +++++++
drivers/interconnect/qcom/Kconfig | 13 +++
drivers/interconnect/qcom/Makefile | 5 +
drivers/interconnect/qcom/smd-rpm.c | 91 +++++++++++++++++++
drivers/interconnect/qcom/smd-rpm.h | 15 +++
5 files changed, 156 insertions(+)
create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
create mode 100644 drivers/interconnect/qcom/Kconfig
create mode 100644 drivers/interconnect/qcom/Makefile
create mode 100644 drivers/interconnect/qcom/smd-rpm.c
create mode 100644 drivers/interconnect/qcom/smd-rpm.h
diff --git a/Documentation/devicetree/bindings/interconnect/qcom-smd.txt b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
new file mode 100644
index 000000000000..2325167f6eaf
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/qcom-smd.txt
@@ -0,0 +1,32 @@
+Qualcomm SMD-RPM interconnect driver binding
+--------------------------------------------
+The RPM (Resource Power Manager) is a dedicated hardware engine
+for managing the shared SoC resources in order to keep the lowest
+power profile. It communicates with other hardware subsystems via
+the shared memory driver (SMD) back-end and accepts requests for
+various resources.
+
+Required properties :
+- compatible : shall contain only one of the following:
+ "qcom,interconnect-smd-rpm"
+
+Example:
+ smd {
+ compatible = "qcom,smd";
+
+ rpm {
+ interrupts = <GIC_SPI 168 IRQ_TYPE_EDGE_RISING>;
+ qcom,ipc = <&apcs 8 0>;
+ qcom,smd-edge = <15>;
+
+ rpm_requests {
+ compatible = "qcom,rpm-msm8916";
+ qcom,smd-channels = "rpm_requests";
+
+ interconnect-smd-rpm {
+ compatible = "qcom,interconnect-smd-rpm";
+ };
+
+ };
+ };
+ };
diff --git a/drivers/interconnect/qcom/Kconfig b/drivers/interconnect/qcom/Kconfig
new file mode 100644
index 000000000000..6146552fac86
--- /dev/null
+++ b/drivers/interconnect/qcom/Kconfig
@@ -0,0 +1,13 @@
+config INTERCONNECT_QCOM
+ bool "Qualcomm Network-on-Chip interconnect drivers"
+ depends on ARCH_QCOM
+ help
+ Support for Qualcomm's Network-on-Chip interconnect hardware.
+
+config INTERCONNECT_QCOM_SMD_RPM
+ tristate "Qualcomm SMD RPM interconnect driver"
+ depends on INTERCONNECT_QCOM
+ depends on QCOM_SMD_RPM
+ help
+ This is a driver for communicating interconnect related configuration
+ details with a remote processor (RPM) on Qualcomm platforms.
diff --git a/drivers/interconnect/qcom/Makefile b/drivers/interconnect/qcom/Makefile
new file mode 100644
index 000000000000..f4477ad22ba2
--- /dev/null
+++ b/drivers/interconnect/qcom/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+qnoc-smd-rpm-objs := smd-rpm.o
+
+obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += qnoc-smd-rpm.o
diff --git a/drivers/interconnect/qcom/smd-rpm.c b/drivers/interconnect/qcom/smd-rpm.c
new file mode 100644
index 000000000000..48b7a2a6eb84
--- /dev/null
+++ b/drivers/interconnect/qcom/smd-rpm.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RPM over SMD communication wrapper for interconnects
+ *
+ * Copyright (C) 2018 Linaro Ltd
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#include <linux/interconnect-provider.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/soc/qcom/smd-rpm.h>
+#include "smd-rpm.h"
+
+#define RPM_KEY_BW 0x00007762 /* "bw" in little-endian ASCII */
+
+static struct qcom_icc_rpm {
+ struct qcom_smd_rpm *rpm;
+} icc_rpm_smd;
+
+struct icc_rpm_smd_req {
+ __le32 key;
+ __le32 nbytes;
+ __le32 value;
+};
+
+bool qcom_icc_rpm_smd_available(void)
+{
+ if (!icc_rpm_smd.rpm)
+ return false;
+
+ return true;
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_available);
+
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val)
+{
+ struct icc_rpm_smd_req req = {
+ .key = cpu_to_le32(RPM_KEY_BW),
+ .nbytes = cpu_to_le32(sizeof(u32)),
+ .value = cpu_to_le32(val),
+ };
+
+ return qcom_rpm_smd_write(icc_rpm_smd.rpm, ctx, rsc_type, id, &req,
+ sizeof(req));
+}
+EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_send);
+
+static int qcom_icc_rpm_smd_probe(struct platform_device *pdev)
+{
+ icc_rpm_smd.rpm = dev_get_drvdata(pdev->dev.parent);
+ if (!icc_rpm_smd.rpm) {
+ dev_err(&pdev->dev, "unable to retrieve handle to RPM\n");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static const struct of_device_id qcom_icc_rpm_smd_dt_match[] = {
+ { .compatible = "qcom,interconnect-smd-rpm", },
+ { },
+};
+
+MODULE_DEVICE_TABLE(of, qcom_icc_rpm_smd_dt_match);
+
+static struct platform_driver qcom_interconnect_rpm_smd_driver = {
+ .driver = {
+ .name = "qcom-interconnect-smd-rpm",
+ .of_match_table = qcom_icc_rpm_smd_dt_match,
+ },
+ .probe = qcom_icc_rpm_smd_probe,
+};
+
+static int __init rpm_smd_interconnect_init(void)
+{
+ return platform_driver_register(&qcom_interconnect_rpm_smd_driver);
+}
+subsys_initcall(rpm_smd_interconnect_init);
+
+static void __exit rpm_smd_interconnect_exit(void)
+{
+ platform_driver_unregister(&qcom_interconnect_rpm_smd_driver);
+}
+module_exit(rpm_smd_interconnect_exit);
+
+MODULE_AUTHOR("Georgi Djakov <[email protected]>");
+MODULE_DESCRIPTION("Qualcomm SMD RPM interconnect driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/interconnect/qcom/smd-rpm.h b/drivers/interconnect/qcom/smd-rpm.h
new file mode 100644
index 000000000000..11c280eb86dc
--- /dev/null
+++ b/drivers/interconnect/qcom/smd-rpm.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+#define __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
+
+#include <linux/soc/qcom/smd-rpm.h>
+
+bool qcom_icc_rpm_smd_available(void);
+int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val);
+
+#endif
Add functionality to provide information about the current constraints for
each node and provider.
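Assuming debugfs is mounted at /sys/kernel/debug, the summary can then be
inspected from userspace. Hypothetical output for illustration only (node
names and values depend on the platform and on the active consumers):

	# cat /sys/kernel/debug/interconnect/interconnect_summary
	 node                                   avg         peak
	--------------------------------------------------------
	snoc_int_0                               300          250
	  7864900.sdhci                          300          250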
Signed-off-by: Georgi Djakov <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
drivers/interconnect/core.c | 78 +++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index ed0a6783ffc4..2740f1a441d8 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -6,6 +6,7 @@
* Author: Georgi Djakov <[email protected]>
*/
+#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/init.h>
@@ -17,10 +18,12 @@
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/overflow.h>
+#include <linux/uaccess.h>
static DEFINE_IDR(icc_idr);
static LIST_HEAD(icc_provider_list);
static DEFINE_MUTEX(icc_lock);
+static struct dentry *icc_debugfs_dir;
/**
* struct icc_req - constraints that are attached to each node
@@ -49,6 +52,62 @@ struct icc_path {
struct icc_req reqs[];
};
+#ifdef CONFIG_DEBUG_FS
+
+static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
+{
+ if (!n)
+ return;
+
+ seq_printf(s, "%-30s %12d %12d\n",
+ n->name, n->avg_bw, n->peak_bw);
+}
+
+static int icc_summary_show(struct seq_file *s, void *data)
+{
+ struct icc_provider *provider;
+
+ seq_puts(s, " node avg peak\n");
+ seq_puts(s, "--------------------------------------------------------\n");
+
+ mutex_lock(&icc_lock);
+
+ list_for_each_entry(provider, &icc_provider_list, provider_list) {
+ struct icc_node *n;
+
+ list_for_each_entry(n, &provider->nodes, node_list) {
+ struct icc_req *r;
+
+ icc_summary_show_one(s, n);
+ hlist_for_each_entry(r, &n->req_list, req_node) {
+ if (!r->dev)
+ continue;
+
+ seq_printf(s, " %-26s %12d %12d\n",
+ dev_name(r->dev), r->avg_bw,
+ r->peak_bw);
+ }
+ }
+ }
+
+ mutex_unlock(&icc_lock);
+
+ return 0;
+}
+
+static int icc_summary_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, icc_summary_show, inode->i_private);
+}
+
+static const struct file_operations icc_summary_fops = {
+ .open = icc_summary_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+#endif
+
static struct icc_node *node_find(const int id)
{
return idr_find(&icc_idr, id);
@@ -646,6 +705,25 @@ int icc_provider_del(struct icc_provider *provider)
}
EXPORT_SYMBOL_GPL(icc_provider_del);
+static int __init icc_init(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
+ debugfs_create_file("interconnect_summary", 0444,
+ icc_debugfs_dir, NULL, &icc_summary_fops);
+#endif
+ return 0;
+}
+
+static void __exit icc_exit(void)
+{
+#ifdef CONFIG_DEBUG_FS
+ debugfs_remove_recursive(icc_debugfs_dir);
+#endif
+}
+module_init(icc_init);
+module_exit(icc_exit);
+
MODULE_AUTHOR("Georgi Djakov <[email protected]>");
MODULE_DESCRIPTION("Interconnect Driver Core");
MODULE_LICENSE("GPL v2");
This patch introduces a new API to gather the requirements and configure the
interconnect buses across the entire chipset to fit the current demand.
The API is using a consumer/provider-based model, where the providers are
the interconnect buses and the consumers could be various drivers.
The consumers request interconnect resources (path) between endpoints and
set the desired constraints on this data flow path. The providers receive
requests from consumers and aggregate these requests for all master-slave
pairs on that path. Then the providers configure each participating in the
topology node according to the requested data flow path, physical links and
constraints. The topology could be complicated and multi-tiered and is SoC
specific.
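From the provider side, a platform driver registers its topology roughly as
follows (a condensed sketch based on the functions added by this patch; the
my_* callbacks, MY_NODE_ID and MY_OTHER_NODE_ID are hypothetical, and error
handling is elided):

	struct icc_node *node;

	provider->set = &my_icc_set;
	provider->aggregate = &my_icc_aggregate;
	INIT_LIST_HEAD(&provider->nodes);
	icc_provider_add(provider);

	node = icc_node_create(MY_NODE_ID);
	node->name = "my_node";
	icc_node_add(node, provider);
	icc_link_create(node, MY_OTHER_NODE_ID);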
Signed-off-by: Georgi Djakov <[email protected]>
Reviewed-by: Evan Green <[email protected]>
---
Documentation/interconnect/interconnect.rst | 94 ++++
drivers/Kconfig | 2 +
drivers/Makefile | 1 +
drivers/interconnect/Kconfig | 10 +
drivers/interconnect/Makefile | 5 +
drivers/interconnect/core.c | 573 ++++++++++++++++++++
include/linux/interconnect-provider.h | 125 +++++
include/linux/interconnect.h | 42 ++
8 files changed, 852 insertions(+)
create mode 100644 Documentation/interconnect/interconnect.rst
create mode 100644 drivers/interconnect/Kconfig
create mode 100644 drivers/interconnect/Makefile
create mode 100644 drivers/interconnect/core.c
create mode 100644 include/linux/interconnect-provider.h
create mode 100644 include/linux/interconnect.h
diff --git a/Documentation/interconnect/interconnect.rst b/Documentation/interconnect/interconnect.rst
new file mode 100644
index 000000000000..b8107dcc4cd3
--- /dev/null
+++ b/Documentation/interconnect/interconnect.rst
@@ -0,0 +1,94 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================================
+GENERIC SYSTEM INTERCONNECT SUBSYSTEM
+=====================================
+
+Introduction
+------------
+
+This framework is designed to provide a standard kernel interface to control
+the settings of the interconnects on an SoC. These settings can be throughput,
+latency and priority between multiple interconnected devices or functional
+blocks. This can be controlled dynamically in order to save power or provide
+maximum performance.
+
+The interconnect bus is hardware with configurable parameters, which can be
+set on a data path according to the requests received from various drivers.
+Examples of interconnect buses are the interconnects between various
+components or functional blocks in chipsets. There can be multiple
+interconnects on an SoC, and they can be multi-tiered.
+
+Below is a simplified diagram of a real-world SoC interconnect bus topology.
+
+::
+
+ +----------------+ +----------------+
+ | HW Accelerator |--->| M NoC |<---------------+
+ +----------------+ +----------------+ |
+ | | +------------+
+ +-----+ +-------------+ V +------+ | |
+ | DDR | | +--------+ | PCIe | | |
+ +-----+ | | Slaves | +------+ | |
+ ^ ^ | +--------+ | | C NoC |
+ | | V V | |
+ +------------------+ +------------------------+ | | +-----+
+ | |-->| |-->| |-->| CPU |
+ | |-->| |<--| | +-----+
+ | Mem NoC | | S NoC | +------------+
+ | |<--| |---------+ |
+ | |<--| |<------+ | | +--------+
+ +------------------+ +------------------------+ | | +-->| Slaves |
+ ^ ^ ^ ^ ^ | | +--------+
+ | | | | | | V
+ +------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
+ | CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
+ +------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
+ |
+ +-------+
+ | Modem |
+ +-------+
+
+Terminology
+-----------
+
+Interconnect provider is the software definition of the interconnect hardware.
+The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P NoC
+and Mem NoC.
+
+Interconnect node is the software definition of the interconnect hardware
+port. Each interconnect provider consists of multiple interconnect nodes,
+which are connected to other SoC components including other interconnect
+providers. The point on the diagram where the CPUs connect to the memory is
+called an interconnect node, which belongs to the Mem NoC interconnect provider.
+
+Interconnect endpoints are the first or the last element of the path. Every
+endpoint is a node, but not every node is an endpoint.
+
+Interconnect path is everything between two endpoints including all the nodes
+that have to be traversed to reach from a source to a destination node. It may
+include multiple master-slave pairs across several interconnect providers.
+
+Interconnect consumers are the entities which make use of the data paths exposed
+by the providers. The consumers send requests to providers requesting various
+throughput, latency and priority. Usually the consumers are device drivers that
+send requests based on their needs. An example of a consumer is a video decoder
+that supports various formats and image sizes.
+
+Interconnect providers
+----------------------
+
+Interconnect provider is an entity that implements methods to initialize and
+configure interconnect bus hardware. The interconnect provider drivers should
+be registered with the interconnect provider core.
+
+.. kernel-doc:: include/linux/interconnect-provider.h
+
+Interconnect consumers
+----------------------
+
+Interconnect consumers are the clients which use the interconnect APIs to
+get paths between endpoints and set their bandwidth/latency/QoS requirements
+for these interconnect paths.
+
+.. kernel-doc:: include/linux/interconnect.h
diff --git a/drivers/Kconfig b/drivers/Kconfig
index ab4d43923c4d..ec510981554a 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -219,4 +219,6 @@ source "drivers/siox/Kconfig"
source "drivers/slimbus/Kconfig"
+source "drivers/interconnect/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 578f469f72fb..06f68339e2a7 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -186,3 +186,4 @@ obj-$(CONFIG_MULTIPLEXER) += mux/
obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/
obj-$(CONFIG_SIOX) += siox/
obj-$(CONFIG_GNSS) += gnss/
+obj-$(CONFIG_INTERCONNECT) += interconnect/
diff --git a/drivers/interconnect/Kconfig b/drivers/interconnect/Kconfig
new file mode 100644
index 000000000000..a261c7d41deb
--- /dev/null
+++ b/drivers/interconnect/Kconfig
@@ -0,0 +1,10 @@
+menuconfig INTERCONNECT
+ tristate "On-Chip Interconnect management support"
+ help
+ Support for management of the on-chip interconnects.
+
+ This framework is designed to provide a generic interface for
+	  managing the interconnects in an SoC.
+
+ If unsure, say no.
+
diff --git a/drivers/interconnect/Makefile b/drivers/interconnect/Makefile
new file mode 100644
index 000000000000..7a01f33b5593
--- /dev/null
+++ b/drivers/interconnect/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+icc-core-objs := core.o
+
+obj-$(CONFIG_INTERCONNECT) += icc-core.o
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
new file mode 100644
index 000000000000..6d932245a610
--- /dev/null
+++ b/drivers/interconnect/core.c
@@ -0,0 +1,573 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Interconnect framework core driver
+ *
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/init.h>
+#include <linux/interconnect.h>
+#include <linux/interconnect-provider.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+#include <linux/overflow.h>
+
+static DEFINE_IDR(icc_idr);
+static LIST_HEAD(icc_provider_list);
+static DEFINE_MUTEX(icc_lock);
+
+/**
+ * struct icc_req - constraints that are attached to each node
+ *
+ * @req_node: entry in list of requests for the particular @node
+ * @node: the interconnect node to which this constraint applies
+ * @dev: reference to the device that sets the constraints
+ * @avg_bw: an integer describing the average bandwidth in kBps
+ * @peak_bw: an integer describing the peak bandwidth in kBps
+ */
+struct icc_req {
+ struct hlist_node req_node;
+ struct icc_node *node;
+ struct device *dev;
+ u32 avg_bw;
+ u32 peak_bw;
+};
+
+/**
+ * struct icc_path - interconnect path structure
+ * @num_nodes: number of hops (nodes)
+ * @reqs: array of the requests applicable to this path of nodes
+ */
+struct icc_path {
+ size_t num_nodes;
+ struct icc_req reqs[];
+};
+
+static struct icc_node *node_find(const int id)
+{
+ return idr_find(&icc_idr, id);
+}
+
+static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
+ ssize_t num_nodes)
+{
+ struct icc_node *node = dst;
+ struct icc_path *path;
+ int i;
+
+ path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
+ if (!path)
+ return ERR_PTR(-ENOMEM);
+
+ path->num_nodes = num_nodes;
+
+ for (i = num_nodes - 1; i >= 0; i--) {
+ node->provider->users++;
+ hlist_add_head(&path->reqs[i].req_node, &node->req_list);
+ path->reqs[i].node = node;
+ path->reqs[i].dev = dev;
+ /* reference to previous node was saved during path traversal */
+ node = node->reverse;
+ }
+
+ return path;
+}
+
+static struct icc_path *path_find(struct device *dev, struct icc_node *src,
+ struct icc_node *dst)
+{
+ struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+ struct icc_node *n, *node = NULL;
+ struct list_head traverse_list;
+ struct list_head edge_list;
+ struct list_head visited_list;
+ size_t i, depth = 1;
+ bool found = false;
+
+ INIT_LIST_HEAD(&traverse_list);
+ INIT_LIST_HEAD(&edge_list);
+ INIT_LIST_HEAD(&visited_list);
+
+ list_add(&src->search_list, &traverse_list);
+ src->reverse = NULL;
+
+ do {
+ list_for_each_entry_safe(node, n, &traverse_list, search_list) {
+ if (node == dst) {
+ found = true;
+ list_splice_init(&edge_list, &visited_list);
+ list_splice_init(&traverse_list, &visited_list);
+ break;
+ }
+ for (i = 0; i < node->num_links; i++) {
+ struct icc_node *tmp = node->links[i];
+
+ if (!tmp) {
+ path = ERR_PTR(-ENOENT);
+ goto out;
+ }
+
+ if (tmp->is_traversed)
+ continue;
+
+ tmp->is_traversed = true;
+ tmp->reverse = node;
+ list_add_tail(&tmp->search_list, &edge_list);
+ }
+ }
+
+ if (found)
+ break;
+
+ list_splice_init(&traverse_list, &visited_list);
+ list_splice_init(&edge_list, &traverse_list);
+
+ /* count the hops including the source */
+ depth++;
+
+ } while (!list_empty(&traverse_list));
+
+out:
+
+ /* reset the traversed state */
+ list_for_each_entry_reverse(n, &visited_list, search_list)
+ n->is_traversed = false;
+
+ if (found)
+ path = path_init(dev, dst, depth);
+
+ return path;
+}
+
+/*
+ * We want the path to honor all bandwidth requests, so the average and peak
+ * bandwidth requirements from each consumer are aggregated at each node.
+ */
+
+static int aggregate_requests(struct icc_node *node)
+{
+ struct icc_provider *p = node->provider;
+ struct icc_req *r;
+
+ node->avg_bw = 0;
+ node->peak_bw = 0;
+
+ hlist_for_each_entry(r, &node->req_list, req_node)
+ p->aggregate(node, r->avg_bw, r->peak_bw,
+ &node->avg_bw, &node->peak_bw);
+
+ return 0;
+}
+
+static int apply_constraints(struct icc_path *path)
+{
+ struct icc_node *next, *prev = NULL;
+	int ret = 0;
+ int i;
+
+ for (i = 0; i < path->num_nodes; i++, prev = next) {
+ struct icc_provider *p;
+
+ next = path->reqs[i].node;
+ /*
+ * Both endpoints should be valid master-slave pairs of the
+ * same interconnect provider that will be configured.
+ */
+ if (!prev || next->provider != prev->provider)
+ continue;
+
+ p = next->provider;
+
+ /* set the constraints */
+ ret = p->set(prev, next);
+ if (ret)
+ goto out;
+ }
+out:
+ return ret;
+}
+
+/**
+ * icc_set() - set constraints on an interconnect path between two endpoints
+ * @path: reference to the path returned by icc_get()
+ * @avg_bw: average bandwidth in kilobytes per second
+ * @peak_bw: peak bandwidth in kilobytes per second
+ *
+ * This function is used by an interconnect consumer to express its own needs
+ * in terms of bandwidth for a previously requested path between two endpoints.
+ * The requests are aggregated and each node is updated accordingly. The entire
+ * path is locked by a mutex to ensure that the set() is completed.
+ * The @path can be NULL when the "interconnects" DT property is missing,
+ * which means that no constraints will be set.
+ *
+ * Returns 0 on success, or an appropriate error code otherwise.
+ */
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+ struct icc_node *node;
+ struct icc_provider *p;
+ size_t i;
+ int ret;
+
+ if (!path)
+ return 0;
+
+ mutex_lock(&icc_lock);
+
+ for (i = 0; i < path->num_nodes; i++) {
+ node = path->reqs[i].node;
+ p = node->provider;
+
+ /* update the consumer request for this path */
+ path->reqs[i].avg_bw = avg_bw;
+ path->reqs[i].peak_bw = peak_bw;
+
+ /* aggregate requests for this node */
+ aggregate_requests(node);
+ }
+
+ ret = apply_constraints(path);
+ if (ret)
+		pr_err("interconnect: error applying constraints (%d)\n", ret);
+
+ mutex_unlock(&icc_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(icc_set);
+
+/**
+ * icc_get() - return a handle for a path between two endpoints
+ * @dev: the device requesting the path
+ * @src_id: source device port id
+ * @dst_id: destination device port id
+ *
+ * This function will search for a path between two endpoints and return an
+ * icc_path handle on success. Use icc_put() to release
+ * constraints when they are not needed anymore.
+ * If the interconnect API is disabled, NULL is returned and the consumer
+ * drivers will still build. Drivers are free to handle this specifically,
+ * but they don't have to.
+ *
+ * Return: icc_path pointer on success, ERR_PTR() on error or NULL if the
+ * interconnect API is disabled.
+ */
+struct icc_path *icc_get(struct device *dev, const int src_id, const int dst_id)
+{
+ struct icc_node *src, *dst;
+ struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
+
+ mutex_lock(&icc_lock);
+
+ src = node_find(src_id);
+ if (!src)
+ goto out;
+
+ dst = node_find(dst_id);
+ if (!dst)
+ goto out;
+
+ path = path_find(dev, src, dst);
+ if (IS_ERR(path))
+ dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+
+out:
+ mutex_unlock(&icc_lock);
+ return path;
+}
+EXPORT_SYMBOL_GPL(icc_get);
+
+/**
+ * icc_put() - release the reference to the icc_path
+ * @path: interconnect path
+ *
+ * Use this function to release the constraints on a path when the path is
+ * no longer needed. The constraints will be re-aggregated.
+ */
+void icc_put(struct icc_path *path)
+{
+ struct icc_node *node;
+ size_t i;
+ int ret;
+
+ if (!path || WARN_ON(IS_ERR(path)))
+ return;
+
+ ret = icc_set(path, 0, 0);
+ if (ret)
+ pr_err("%s: error (%d)\n", __func__, ret);
+
+ mutex_lock(&icc_lock);
+ for (i = 0; i < path->num_nodes; i++) {
+ node = path->reqs[i].node;
+ hlist_del(&path->reqs[i].req_node);
+ if (!WARN_ON(!node->provider->users))
+ node->provider->users--;
+ }
+ mutex_unlock(&icc_lock);
+
+ kfree(path);
+}
+EXPORT_SYMBOL_GPL(icc_put);
+
+static struct icc_node *icc_node_create_nolock(int id)
+{
+ struct icc_node *node;
+
+ /* check if node already exists */
+ node = node_find(id);
+ if (node)
+ goto out;
+
+ node = kzalloc(sizeof(*node), GFP_KERNEL);
+ if (!node) {
+ node = ERR_PTR(-ENOMEM);
+ goto out;
+ }
+
+ id = idr_alloc(&icc_idr, node, id, id + 1, GFP_KERNEL);
+ if (WARN(id < 0, "couldn't get idr")) {
+ kfree(node);
+ node = ERR_PTR(id);
+ goto out;
+ }
+
+ node->id = id;
+
+out:
+ return node;
+}
+
+/**
+ * icc_node_create() - create a node
+ * @id: node id
+ *
+ * Return: icc_node pointer on success, or ERR_PTR() on error
+ */
+struct icc_node *icc_node_create(int id)
+{
+ struct icc_node *node;
+
+ mutex_lock(&icc_lock);
+
+ node = icc_node_create_nolock(id);
+
+ mutex_unlock(&icc_lock);
+
+ return node;
+}
+EXPORT_SYMBOL_GPL(icc_node_create);
+
+/**
+ * icc_node_destroy() - destroy a node
+ * @id: node id
+ *
+ */
+void icc_node_destroy(int id)
+{
+ struct icc_node *node;
+
+ mutex_lock(&icc_lock);
+
+ node = node_find(id);
+ if (node) {
+ idr_remove(&icc_idr, node->id);
+ WARN_ON(!hlist_empty(&node->req_list));
+ }
+
+ mutex_unlock(&icc_lock);
+
+ kfree(node);
+}
+EXPORT_SYMBOL_GPL(icc_node_destroy);
+
+/**
+ * icc_link_create() - create a link between two nodes
+ * @node: pointer to the source node
+ * @dst_id: destination node id
+ *
+ * Create a link between two nodes. The nodes might belong to different
+ * interconnect providers and the @dst_id node might not exist (if the
+ * provider driver has not probed yet). In that case just create the @dst_id
+ * node; the rest of the node data will be filled in when the actual
+ * provider driver is probed.
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_create(struct icc_node *node, const int dst_id)
+{
+ struct icc_node *dst;
+ struct icc_node **new;
+ int ret = 0;
+
+ if (!node->provider)
+ return -EINVAL;
+
+ mutex_lock(&icc_lock);
+
+ dst = node_find(dst_id);
+ if (!dst) {
+ dst = icc_node_create_nolock(dst_id);
+
+ if (IS_ERR(dst)) {
+ ret = PTR_ERR(dst);
+ goto out;
+ }
+ }
+
+ new = krealloc(node->links,
+ (node->num_links + 1) * sizeof(*node->links),
+ GFP_KERNEL);
+ if (!new) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ node->links = new;
+ node->links[node->num_links++] = dst;
+
+out:
+ mutex_unlock(&icc_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_create);
+
+/**
+ * icc_link_destroy() - destroy a link between two nodes
+ * @src: pointer to source node
+ * @dst: pointer to destination node
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+ struct icc_node **new;
+ size_t slot;
+ int ret = 0;
+
+ if (IS_ERR_OR_NULL(src))
+ return -EINVAL;
+
+ if (IS_ERR_OR_NULL(dst))
+ return -EINVAL;
+
+ mutex_lock(&icc_lock);
+
+ for (slot = 0; slot < src->num_links; slot++)
+ if (src->links[slot] == dst)
+ break;
+
+ if (WARN_ON(slot == src->num_links)) {
+ ret = -ENXIO;
+ goto out;
+ }
+
+ src->links[slot] = src->links[--src->num_links];
+
+ new = krealloc(src->links,
+ (src->num_links) * sizeof(*src->links),
+ GFP_KERNEL);
+ if (new)
+ src->links = new;
+
+out:
+ mutex_unlock(&icc_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(icc_link_destroy);
+
+/**
+ * icc_node_add() - add interconnect node to interconnect provider
+ * @node: pointer to the interconnect node
+ * @provider: pointer to the interconnect provider
+ *
+ */
+void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+ mutex_lock(&icc_lock);
+
+ node->provider = provider;
+ list_add(&node->node_list, &provider->nodes);
+
+ mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_add);
+
+/**
+ * icc_node_del() - delete interconnect node from interconnect provider
+ * @node: pointer to the interconnect node
+ *
+ */
+void icc_node_del(struct icc_node *node)
+{
+ mutex_lock(&icc_lock);
+
+ list_del(&node->node_list);
+
+ mutex_unlock(&icc_lock);
+}
+EXPORT_SYMBOL_GPL(icc_node_del);
+
+/**
+ * icc_provider_add() - add a new interconnect provider
+ * @provider: the interconnect provider that will be added into topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_add(struct icc_provider *provider)
+{
+ if (WARN_ON(!provider->set))
+ return -EINVAL;
+
+ mutex_lock(&icc_lock);
+
+ INIT_LIST_HEAD(&provider->nodes);
+ list_add(&provider->provider_list, &icc_provider_list);
+
+ mutex_unlock(&icc_lock);
+
+ dev_dbg(provider->dev, "interconnect provider added to topology\n");
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_add);
+
+/**
+ * icc_provider_del() - delete previously added interconnect provider
+ * @provider: the interconnect provider that will be removed from topology
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_provider_del(struct icc_provider *provider)
+{
+ mutex_lock(&icc_lock);
+ if (provider->users) {
+ pr_warn("interconnect provider still has %d users\n",
+ provider->users);
+ mutex_unlock(&icc_lock);
+ return -EBUSY;
+ }
+
+ if (!list_empty(&provider->nodes)) {
+ pr_warn("interconnect provider still has nodes\n");
+ mutex_unlock(&icc_lock);
+ return -EBUSY;
+ }
+
+ list_del(&provider->provider_list);
+ mutex_unlock(&icc_lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(icc_provider_del);
+
+MODULE_AUTHOR("Georgi Djakov <[email protected]>");
+MODULE_DESCRIPTION("Interconnect Driver Core");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/interconnect-provider.h b/include/linux/interconnect-provider.h
new file mode 100644
index 000000000000..78208a754181
--- /dev/null
+++ b/include/linux/interconnect-provider.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#ifndef __LINUX_INTERCONNECT_PROVIDER_H
+#define __LINUX_INTERCONNECT_PROVIDER_H
+
+#include <linux/interconnect.h>
+
+#define icc_units_to_bps(bw) ((bw) * 1000ULL)
+
+struct icc_node;
+
+/**
+ * struct icc_provider - interconnect provider (controller) entity that might
+ * provide multiple interconnect controls
+ *
+ * @provider_list: list of the registered interconnect providers
+ * @nodes: internal list of the interconnect provider nodes
+ * @set: pointer to device specific set operation function
+ * @aggregate: pointer to device specific aggregate operation function
+ * @dev: the device this interconnect provider belongs to
+ * @users: count of active users
+ * @data: pointer to private data
+ */
+struct icc_provider {
+ struct list_head provider_list;
+ struct list_head nodes;
+ int (*set)(struct icc_node *src, struct icc_node *dst);
+ int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw,
+ u32 *agg_avg, u32 *agg_peak);
+ struct device *dev;
+ int users;
+ void *data;
+};
+
+/**
+ * struct icc_node - entity that is part of the interconnect topology
+ *
+ * @id: platform specific node id
+ * @name: node name used in debugfs
+ * @links: a list of targets pointing to where we can go next when traversing
+ * @num_links: number of links to other interconnect nodes
+ * @provider: points to the interconnect provider of this node
+ * @node_list: the list entry in the parent provider's "nodes" list
+ * @search_list: list used when walking the nodes graph
+ * @reverse: pointer to previous node when walking the nodes graph
+ * @is_traversed: flag that is used when walking the nodes graph
+ * @req_list: a list of QoS constraint requests associated with this node
+ * @avg_bw: aggregated value of average bandwidth requests from all consumers
+ * @peak_bw: aggregated value of peak bandwidth requests from all consumers
+ * @data: pointer to private data
+ */
+struct icc_node {
+ int id;
+ const char *name;
+ struct icc_node **links;
+ size_t num_links;
+
+ struct icc_provider *provider;
+ struct list_head node_list;
+ struct list_head search_list;
+ struct icc_node *reverse;
+ u8 is_traversed:1;
+ struct hlist_head req_list;
+ u32 avg_bw;
+ u32 peak_bw;
+ void *data;
+};
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_node *icc_node_create(int id);
+void icc_node_destroy(int id);
+int icc_link_create(struct icc_node *node, const int dst_id);
+int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
+void icc_node_add(struct icc_node *node, struct icc_provider *provider);
+void icc_node_del(struct icc_node *node);
+int icc_provider_add(struct icc_provider *provider);
+int icc_provider_del(struct icc_provider *provider);
+
+#else
+
+static inline struct icc_node *icc_node_create(int id)
+{
+ return ERR_PTR(-ENOTSUPP);
+}
+
+static inline void icc_node_destroy(int id)
+{
+}
+
+static inline int icc_link_create(struct icc_node *node, const int dst_id)
+{
+ return -ENOTSUPP;
+}
+
+static inline int icc_link_destroy(struct icc_node *src, struct icc_node *dst)
+{
+ return -ENOTSUPP;
+}
+
+static inline void icc_node_add(struct icc_node *node, struct icc_provider *provider)
+{
+}
+
+static inline void icc_node_del(struct icc_node *node)
+{
+}
+
+static inline int icc_provider_add(struct icc_provider *provider)
+{
+ return -ENOTSUPP;
+}
+
+static inline int icc_provider_del(struct icc_provider *provider)
+{
+ return -ENOTSUPP;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* __LINUX_INTERCONNECT_PROVIDER_H */
diff --git a/include/linux/interconnect.h b/include/linux/interconnect.h
new file mode 100644
index 000000000000..7fd6e8d0696c
--- /dev/null
+++ b/include/linux/interconnect.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, Linaro Ltd.
+ * Author: Georgi Djakov <[email protected]>
+ */
+
+#ifndef __LINUX_INTERCONNECT_H
+#define __LINUX_INTERCONNECT_H
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+struct icc_path;
+struct device;
+
+#if IS_ENABLED(CONFIG_INTERCONNECT)
+
+struct icc_path *icc_get(struct device *dev, const int src_id,
+ const int dst_id);
+void icc_put(struct icc_path *path);
+int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw);
+
+#else
+
+static inline struct icc_path *icc_get(struct device *dev, const int src_id,
+ const int dst_id)
+{
+ return NULL;
+}
+
+static inline void icc_put(struct icc_path *path)
+{
+}
+
+static inline int icc_set(struct icc_path *path, u32 avg_bw, u32 peak_bw)
+{
+ return 0;
+}
+
+#endif /* CONFIG_INTERCONNECT */
+
+#endif /* __LINUX_INTERCONNECT_H */
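
To make the consumer interface above more concrete, here is a minimal
usage sketch. It is not part of the patch: the port ids, bandwidth
values and driver names are invented, and only icc_get(), icc_set() and
icc_put() come from this series.

#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>

#define MASTER_MY_DEV	10	/* invented source port id */
#define SLAVE_DDR	45	/* invented destination port id */

static int my_driver_probe(struct platform_device *pdev)
{
	struct icc_path *path;
	int ret;

	/* request a path handle between the two endpoints */
	path = icc_get(&pdev->dev, MASTER_MY_DEV, SLAVE_DDR);
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* ask for 1GB/s average and 2GB/s peak bandwidth (in kBps) */
	ret = icc_set(path, 1000000, 2000000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	/* ... the device does its work ... */

	/* drop the constraints and release the path when done */
	icc_put(path);
	return 0;
}

On the provider side, registration would look roughly like below, again
only as a sketch with invented callbacks:

#include <linux/interconnect-provider.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static int my_set(struct icc_node *src, struct icc_node *dst)
{
	/* program clocks/QoS registers from dst->avg_bw and dst->peak_bw */
	return 0;
}

static int my_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			u32 *agg_avg, u32 *agg_peak)
{
	/* sum the average and take the maximum of the peak requests */
	*agg_avg += avg_bw;
	*agg_peak = max(*agg_peak, peak_bw);
	return 0;
}

static int my_provider_probe(struct platform_device *pdev)
{
	struct icc_provider *provider;
	struct icc_node *node;
	int ret;

	provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
	if (!provider)
		return -ENOMEM;

	provider->dev = &pdev->dev;
	provider->set = my_set;
	provider->aggregate = my_aggregate;

	ret = icc_provider_add(provider);
	if (ret)
		return ret;

	node = icc_node_create(MASTER_MY_DEV);
	if (IS_ERR(node)) {
		icc_provider_del(provider);
		return PTR_ERR(node);
	}

	icc_node_add(node, provider);

	/* the SLAVE_DDR node is created on demand if it does not exist */
	return icc_link_create(node, SLAVE_DDR);
}
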
Hi Georgi,
I'm currently reviewing this patchset (long overdue), but considering
that we haven't added any major new features to the framework for the
last couple of revisions, can you get this patchset merged into
linux-next to see how things shake out there? We've had this
branch merged into a CI build on kernelci for a while now w/o any
major incident, so we should increase its exposure.
You could ask Stephen Rothwell if he'll accept a branch directly
from you or he needs an upstream maintainer (GregKH?) to carry it.
Regards,
Amit
On Fri, Aug 31, 2018 at 7:31 PM, Georgi Djakov <[email protected]> wrote:
> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> graphics, modem). These cores are talking to each other and can generate a
> lot of data flowing through the on-chip interconnects. These interconnect
> buses could form different topologies such as crossbar, point to point buses,
> hierarchical buses or use the network-on-chip concept.
>
> These buses have been sized usually to handle use cases with high data
> throughput but it is not necessary all the time and consume a lot of power.
> Furthermore, the priority between masters can vary depending on the running
> use case like video playback or CPU intensive tasks.
>
> Having an API to control the requirement of the system in terms of bandwidth
> and QoS, so we can adapt the interconnect configuration to match those by
> scaling the frequencies, setting link priority and tuning QoS parameters.
> This configuration can be a static, one-time operation done at boot for some
> platforms or a dynamic set of operations that happen at run-time.
>
> This patchset introduce a new API to get the requirement and configure the
> interconnect buses across the entire chipset to fit with the current demand.
> The API is NOT for changing the performance of the endpoint devices, but only
> the interconnect path in between them.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) to an endpoint and set
> the desired constraints on this data flow path. The provider(s) receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
>
> Below is a simplified diagram of a real-world SoC topology. The interconnect
> providers are the NoCs.
>
> +----------------+ +----------------+
> | HW Accelerator |--->| M NoC |<---------------+
> +----------------+ +----------------+ |
> | | +------------+
> +-----+ +-------------+ V +------+ | |
> | DDR | | +--------+ | PCIe | | |
> +-----+ | | Slaves | +------+ | |
> ^ ^ | +--------+ | | C NoC |
> | | V V | |
> +------------------+ +------------------------+ | | +-----+
> | |-->| |-->| |-->| CPU |
> | |-->| |<--| | +-----+
> | Mem NoC | | S NoC | +------------+
> | |<--| |---------+ |
> | |<--| |<------+ | | +--------+
> +------------------+ +------------------------+ | | +-->| Slaves |
> ^ ^ ^ ^ ^ | | +--------+
> | | | | | | V
> +------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
> | CPUs | | | GPU | | DSP | | Masters |-->| P NoC |-->| Slaves |
> +------+ | +-----+ +-----+ +---------+ +----------------+ +--------+
> |
> +-------+
> | Modem |
> +-------+
>
> TODO:
> * Create icc_set_extended() to handle parameters such as latency and other
> QoS values.
> * Convert from using global node identifiers to local per provider ids.
> * Cache the path between the nodes instead of walking the graph on each get().
> * Sync interconnect requests with the idle state of the device.
>
> Changes since patchset v8 (https://lkml.org/lkml/2018/8/10/387)
> * Fixed the names of the files when built as modules.
> * Corrected some typos in comments.
>
> Changes since patchset v7 (https://lkml.org/lkml/2018/7/31/647)
> * Addressed comments on kernel-doc and grammar. (Randy)
> * Picked Reviewed-by: Evan
> * Squashed consumer and provider DT bindings into single patch. (Rob)
> * Cleaned-up msm8916 DT bindings docs by removing unused port ids.
> * Updated documentation for the cases when NULL is returned. (Saravana)
> * New patch to add myself as maintainer.
>
> Changes since patchset v6 (https://lkml.org/lkml/2018/7/9/698)
> * [patches 1,6]: Move the aggregation within the provider from the framework to
> the platform driver's set() callback, as the aggregation point could be SoC
> specific.
> * [patch 1]: Include missing header, reset state only of the traversed nodes,
> move more code into path_init(), add more asserts, move misplaced mutex,
> simplify icc_link_destroy() (Evan)
> * [patch 1]: Fix the order of requests to go from source to destination. (Alex)
> * [patch 7]: Use better wording in the documentation. (Evan)
> * [patch 6]: Reorder struct members, sort nodes alphabetically, improve naming
> of variables , add missing clk_disable_unprepare() in error paths. (Matthias)
> * [patch 6]: Remove redundant NULL pointer check in msm8916 driver. (Alex)
> * [patch 6]: Add missing depend on QCOM_SMD_RPM in Kconfig. (Evan)
> * [patch 3]: Don't check for errors on debugfs calls, remove debugfs directory
> when module is unloaded (Greg)
>
> Changes since patchset v5 (https://lkml.org/lkml/2018/6/20/453)
> * Fix the modular build, make rpm-smd driver a module.
> * Optimize locking and move to higher level. (Evan)
> * Code cleanups. Fix typos. (Evan, Matthias)
> * Add the source node to the path. (Evan)
> * Rename path_allocate() to path_init() with minor refactoring. (Evan)
> * Rename *_remove() functions to *_destroy().
> * Return fixed errors in icc_link_destroy(). (Evan)
> * Fix krealloc() usage in icc_link_destroy(). (Evan)
> * Add missing kfree() in icc_node_create(). (Matthias)
> * Make icc_node_add() return void. (Matthias)
> * Change mutex_init to mutex_lock in icc_provider_add(). (Matthias)
> * Add new icc_node_del() function to delete nodes from provider.
> * Fix the header guard to reflect the path in smd-rpm.h. (Evan)
> * Check for errors returned by qcom_icc_rpm_smd_send(). (Evan)
> * Propagate the error of icc_provider_del(). (Evan)
>
> Changes since patchset v4 (https://lkml.org/lkml/2018/3/9/856)
> * Simplified locking by using a single global mutex. (Evan)
> * Changed the aggregation function interface.
> * Implemented functions for node, link, provider removal. (Evan)
> * Naming changes on variables and functions, removed redundant code. (Evan)
> * Fixes and clarifications in the docs. (Matthias, Evan, Amit, Alexandre)
> * Removed mandatory reg DT property, made interconnect-names optional. (Bjorn)
> * Made interconnect-cells property required to align with other bindings. (Neil)
> * Moved msm8916 specific bindings into a separate file and patch. (Bjorn)
> * Use the names, instead of the hardcoded ids for topology. (Matthias)
> * Init the node before creating the links. (Evan)
> * Added icc_units_to_bps macro. (Amit)
>
> Changes since patchset v3 (https://lkml.org/lkml/2017/9/8/544)
> * Refactored the constraints aggregation.
> * Use the IDR API.
> * Split the provider and consumer bindings into separate patches and propose
> new bindings for consumers, which allows to specify the local source port.
> * Adopted the icc_ prefix for API functions.
> * Introduced separate API functions for creating interconnect nodes and links.
> * Added DT lookup support in addition to platform data.
> * Dropped the event tracing patch for now.
> * Added a patch to provide summary via debugfs.
> * Use macro for the list of topology definitions in the platform driver.
> * Various minor changes.
>
> Changes since patchset v2 (https://lkml.org/lkml/2017/7/20/825)
> * Split the aggregation into per node and per provider. Cache the
> aggregated values.
> * Various small refactorings and cleanups in the framework.
> * Added a patch introducing basic tracepoint support for monitoring
> the time required to update the interconnect nodes.
>
> Changes since patchset v1 (https://lkml.org/lkml/2017/6/27/890)
> * Updates in the documentation.
> * Changes in request aggregation, locking.
> * Dropped the aggregate() callback and use the default as it currently
> sufficient for the single vendor driver. Will add it later when needed.
> * Dropped the dt-bindings draft patch for now.
>
> Changes since RFC v2 (https://lkml.org/lkml/2017/6/12/316)
> * Converted documentation to rst format.
> * Fixed an incorrect call to mutex_lock. Renamed max_bw to peak_bw.
>
> Changes since RFC v1 (https://lkml.org/lkml/2017/5/15/605)
> * Refactored code into shorter functions.
> * Added a new aggregate() API function.
> * Rearranged some structs to reduce padding bytes.
>
> Changes since RFC v0 (https://lkml.org/lkml/2017/3/1/599)
> * Removed DT support and added optional Patch 3 with new bindings proposal.
> * Converted the topology into internal driver data.
> * Made the framework modular.
> * interconnect_get() now takes (src and dst ports as arguments).
> * Removed public declarations of some structs.
> * Now passing prev/next nodes to the vendor driver.
> * Properly remove requests on _put().
> * Added refcounting.
> * Updated documentation.
> * Changed struct interconnect_path to use array instead of linked list.
>
> Georgi Djakov (8):
> interconnect: Add generic on-chip interconnect API
> dt-bindings: Introduce interconnect binding
> interconnect: Allow endpoints translation via DT
> interconnect: Add debugfs support
> interconnect: qcom: Add RPM communication
> dt-bindings: interconnect: Document qcom,msm8916 NoC bindings
> interconnect: qcom: Add msm8916 interconnect provider driver
> MAINTAINERS: add a maintainer for the interconnect API
>
> .../bindings/interconnect/interconnect.txt | 60 ++
> .../bindings/interconnect/qcom-msm8916.txt | 41 +
> .../bindings/interconnect/qcom-smd.txt | 32 +
> Documentation/interconnect/interconnect.rst | 94 +++
> MAINTAINERS | 10 +
> drivers/Kconfig | 2 +
> drivers/Makefile | 1 +
> drivers/interconnect/Kconfig | 15 +
> drivers/interconnect/Makefile | 6 +
> drivers/interconnect/core.c | 729 ++++++++++++++++++
> drivers/interconnect/qcom/Kconfig | 22 +
> drivers/interconnect/qcom/Makefile | 7 +
> drivers/interconnect/qcom/msm8916.c | 510 ++++++++++++
> drivers/interconnect/qcom/smd-rpm.c | 91 +++
> drivers/interconnect/qcom/smd-rpm.h | 15 +
> include/dt-bindings/interconnect/qcom.h | 98 +++
> include/linux/interconnect-provider.h | 125 +++
> include/linux/interconnect.h | 49 ++
> 18 files changed, 1907 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
> create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
> create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-smd.txt
> create mode 100644 Documentation/interconnect/interconnect.rst
> create mode 100644 drivers/interconnect/Kconfig
> create mode 100644 drivers/interconnect/Makefile
> create mode 100644 drivers/interconnect/core.c
> create mode 100644 drivers/interconnect/qcom/Kconfig
> create mode 100644 drivers/interconnect/qcom/Makefile
> create mode 100644 drivers/interconnect/qcom/msm8916.c
> create mode 100644 drivers/interconnect/qcom/smd-rpm.c
> create mode 100644 drivers/interconnect/qcom/smd-rpm.h
> create mode 100644 include/dt-bindings/interconnect/qcom.h
> create mode 100644 include/linux/interconnect-provider.h
> create mode 100644 include/linux/interconnect.h
>
Hi all,
On Tue, 4 Sep 2018 15:54:27 +0530 Amit Kucheria <[email protected]> wrote:
>
> I'm currently reviewing this patchset (long overdue), but considering
> that we haven't added any major new features to the framework for the
> last couple of revisions, can you get this patchset merged into
> linux-next to see how things shake out there? We've had this
> branch merged into a CI build on kernelci for a while now w/o any
> major incident, so we should increase its exposure.
>
> You could ask Stephen Rothwell if he'll accept a branch directly
> from you or he needs an upstream maintainer (GregKH?) to carry it.
Since it appears to have been well reviewed and tested, I can take it
if you send me a git URL and the names of contacts in case of issues
with the branch. Once Greg has merged it, I will try to drop it again
(unless you think there will be an ongoing reason to keep it in
linux-next separately), but a prompting email would also be nice in
case I forget.
--
Cheers,
Stephen Rothwell
Hi Stephen,
On 09/05/2018 02:36 AM, Stephen Rothwell wrote:
> Hi all,
>
> On Tue, 4 Sep 2018 15:54:27 +0530 Amit Kucheria <[email protected]> wrote:
>>
>> I'm currently reviewing this patchset (long overdue), but considering
>> that we haven't added any major new features to the framework for the
>> last couple of revisions, can you get this patchset merged into
>> linux-next to see how things shake out there? We've had this
>> branch merged into a CI build on kernelci for a while now w/o any
>> major incident, so we should increase its exposure.
>>
>> You could ask Stephen Rothwell if he'll accept a branch directly
>> from you or he needs an upstream maintainer (GregKH?) to carry it.
>
> Since it appears to have been well reviewed and tested, I can take it
> if you send me a git URL and the names of contacts in case of issues
> with the branch. Once Greg has merged it, I will try to drop it again
> (unless you think there will be an ongoing reason to keep it in
> linux-next separately), but a prompting email would also be nice in
> case I forget.
Thank you! Below is the git URL to pull from and you can use my e-mail
as contact for any issues.
git://git.linaro.org/people/georgi.djakov/linux.git#icc-next
I expect this to be a continuing effort with some upcoming users of this
API, so I plan to maintain this branch and occasionally add some
reviewed/tested patches to get wider integration testing. But let's see
what happens now.
In any case, if this increases your burden, feel free to drop it.
Thanks,
Georgi
Hi Georgi,
On Wed, 5 Sep 2018 17:50:28 +0300 Georgi Djakov <[email protected]> wrote:
>
> On 09/05/2018 02:36 AM, Stephen Rothwell wrote:
> >
> > On Tue, 4 Sep 2018 15:54:27 +0530 Amit Kucheria <[email protected]> wrote:
> >>
> >> I'm currently reviewing this patchset (long overdue), but considering
> >> that we haven't added any major new features to the framework for the
> >> last couple of revisions, can you get this patchset merged into
> >> linux-next to see how things shake out there? We've had this
> >> branch merged into a CI build on kernelci for a while now w/o any
> >> major incident, so we should increase its exposure.
> >>
> >> You could ask Stephen Rothwell if he'll accept a branch directly
> >> from you or he needs an upstream maintainer (GregKH?) to carry it.
> >
> > Since it appears to have been well reviewed and tested, I can take it
> > if you send me a git URL and the names of contacts in case of issues
> > with the branch. Once Greg has merged it, I will try to drop it again
> > (unless you think there will be an ongoing reason to keep it in
> > linux-next separately), but a prompting email would also be nice in
> > case I forget.
>
> Thank you! Below is the git URL to pull from and you can use my e-mail
> as contact for any issues.
>
> git://git.linaro.org/people/georgi.djakov/linux.git#icc-next
>
> I expect this to be a continuing effort with some upcoming users of this
> API, so I plan to maintain this branch and occasionally add some
> reviewed/tested patches to get wider integration testing. But let's see
> what happens now.
>
> In any case, if this increases your burden, feel free to drop it.
Added from (later) today.
Thanks for adding your subsystem tree as a participant of linux-next. As
you may know, this is not a judgement of your code. The purpose of
linux-next is for integration testing and to lower the impact of
conflicts between subsystems in the next merge window.
You will need to ensure that the patches/commits in your tree/series have
been:
* submitted under GPL v2 (or later) and include the Contributor's
Signed-off-by,
* posted to the relevant mailing list,
* reviewed by you (or another maintainer of your subsystem tree),
* successfully unit tested, and
* destined for the current or next Linux merge window.
Basically, this should be just what you would send to Linus (or ask him
to fetch). It is allowed to be rebased if you deem it necessary.
--
Cheers,
Stephen Rothwell
[email protected]
On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> This binding is intended to represent the relations between the interconnect
> controllers (providers) and consumer device nodes. It will allow creating links
> between consumers and interconnect paths (exposed by interconnect providers).
As I mentioned in person, I want to see other SoC families using this
before accepting. They don't have to be ready for upstream, but WIP
patches or even just a "yes, this works for us and we're going to use
this binding on X".
Also, I think the QCom GPU use of this should be fully sorted out. Or
more generically how this fits into OPP binding which seems to be never
ending extended...
> Signed-off-by: Georgi Djakov <[email protected]>
> ---
> .../bindings/interconnect/interconnect.txt | 60 +++++++++++++++++++
> 1 file changed, 60 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> new file mode 100644
> index 000000000000..5cb7d3c8d44d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -0,0 +1,60 @@
> +Interconnect Provider Device Tree Bindings
> +=========================================
> +
> +The purpose of this document is to define a common set of generic interconnect
> +providers/consumers properties.
> +
> +
> += interconnect providers =
> +
> +The interconnect provider binding is intended to represent the interconnect
> +controllers in the system. Each provider registers a set of interconnect
> +nodes, which expose the interconnect related capabilities of the interconnect
> +to consumer drivers. These capabilities can be throughput, latency, priority
> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> +depending on the use case. Interconnect providers can also be interconnect
> +consumers, such as in the case where two network-on-chip fabrics interface
> +directly
missing '.'
> +
> +Required properties:
> +- compatible : contains the interconnect provider compatible string
> +- #interconnect-cells : number of cells in a interconnect specifier needed to
> + encode the interconnect node id
> +
> +Example:
> +
> + snoc: snoc@580000 {
> + compatible = "qcom,msm8916-snoc";
> + #interconnect-cells = <1>;
> + reg = <0x580000 0x14000>;
> + clock-names = "bus_clk", "bus_a_clk";
> + clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> + <&rpmcc RPM_SMD_SNOC_A_CLK>;
> + };
> +
> +
> += interconnect consumers =
> +
> +The interconnect consumers are device nodes which dynamically express their
> +bandwidth requirements along interconnect paths they are connected to. There
> +can be multiple interconnect providers on a SoC and the consumer may consume
> +multiple paths from different providers depending on use case and the
> +components it has to interact with.
> +
> +Required properties:
> +interconnects : Pairs of phandles and interconnect provider specifier to denote
> + the edge source and destination ports of the interconnect path.
> +
> +Optional properties:
> +interconnect-names : List of interconnect path name strings sorted in the same
> + order as the interconnects property. Consumers drivers will use
> + interconnect-names to match interconnect paths with interconnect
> + specifiers.
specifier pairs.
> +
> +Example:
> +
> + sdhci@7864000 {
> + ...
> + interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> + interconnect-names = "ddr";
> + };
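
For reference, a consumer driver would then request and scale the path
from this example roughly as follows. This is only a sketch: it relies
on the DT lookup helper added in patch 3 of this series (of_icc_get()),
and the bandwidth values are invented.

	struct icc_path *ddr_path;
	int ret;

	/* looks up the "ddr" entry of interconnect-names in the node above */
	ddr_path = of_icc_get(&pdev->dev, "ddr");
	if (IS_ERR(ddr_path))
		return PTR_ERR(ddr_path);

	/* request bandwidth on the pnoc -> bimc path (values in kBps) */
	ret = icc_set(ddr_path, 400000, 800000);
	if (ret)
		icc_put(ddr_path);
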
On Fri, Aug 31, 2018 at 05:01:48PM +0300, Georgi Djakov wrote:
> On some Qualcomm SoCs, there is a remote processor, which controls some of
> the Network-On-Chip interconnect resources. Other CPUs express their needs
> by communicating with this processor. Add a driver to handle communication
> with this remote processor.
I don't think you should have a binding nor a separate driver for this.
It's not actually an interconnect provider, so it doesn't belong in
bindings/interconnect. And it just looks like abuse of DT to instantiate
some driver.
All the driver amounts to is a 1 function wrapper for RPM_KEY_BW
messages. Can't this be part of the parent?
Rob
On Fri, Aug 31, 2018 at 05:01:49PM +0300, Georgi Djakov wrote:
> Document the device-tree bindings of the Network-On-Chip interconnect
> hardware found on Qualcomm msm8916 platforms.
>
> Signed-off-by: Georgi Djakov <[email protected]>
> ---
> .../bindings/interconnect/qcom-msm8916.txt | 41 ++++++++
> include/dt-bindings/interconnect/qcom.h | 98 +++++++++++++++++++
> 2 files changed, 139 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
> create mode 100644 include/dt-bindings/interconnect/qcom.h
>
> diff --git a/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
> new file mode 100644
> index 000000000000..744df51df4ed
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
> @@ -0,0 +1,41 @@
> +Qualcomm MSM8916 Network-On-Chip interconnect driver binding
> +----------------------------------------------------
> +
> +Required properties :
> +- compatible : shall contain only one of the following:
> + "qcom,msm8916-bimc"
> + "qcom,msm8916-pnoc"
> + "qcom,msm8916-snoc"
> +- #interconnect-cells : should contain 1
> +- reg : shall contain base register location and length
> +
> +Optional properties :
> +clocks : list of phandles and specifiers to all interconnect bus clocks
> +clock-names : clock names should include both "bus_clk" and "bus_a_clk"
> +
> +Examples:
> +
> + snoc: snoc@580000 {
interconnect@...
> + compatible = "qcom,msm8916-snoc";
> + #interconnect-cells = <1>;
> + reg = <0x580000 0x14000>;
> + clock-names = "bus_clk", "bus_a_clk";
> + clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> + <&rpmcc RPM_SMD_SNOC_A_CLK>;
> + };
> + bimc: bimc@400000 {
interconnect@...
> + compatible = "qcom,msm8916-bimc";
> + #interconnect-cells = <1>;
> + reg = <0x400000 0x62000>;
> + clock-names = "bus_clk", "bus_a_clk";
> + clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
> + <&rpmcc RPM_SMD_BIMC_A_CLK>;
> + };
> + pnoc: pnoc@500000 {
and here.
> + compatible = "qcom,msm8916-pnoc";
> + #interconnect-cells = <1>;
> + reg = <0x500000 0x11000>;
> + clock-names = "bus_clk", "bus_a_clk";
> + clocks = <&rpmcc RPM_SMD_PCNOC_CLK>,
> + <&rpmcc RPM_SMD_PCNOC_A_CLK>;
> + };
> diff --git a/include/dt-bindings/interconnect/qcom.h b/include/dt-bindings/interconnect/qcom.h
> new file mode 100644
> index 000000000000..48f944b30e5d
> --- /dev/null
> +++ b/include/dt-bindings/interconnect/qcom.h
> @@ -0,0 +1,98 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Qualcomm interconnect IDs
> + *
> + * Copyright (c) 2018, Linaro Ltd.
> + * Author: Georgi Djakov <[email protected]>
> + */
> +
> +#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_H
> +#define __DT_BINDINGS_INTERCONNECT_QCOM_H
> +
> +#define BIMC_SNOC_MAS 1
> +#define BIMC_SNOC_SLV 2
> +#define MASTER_AMPSS_M0 3
> +#define MASTER_BLSP_1 4
> +#define MASTER_CRYPTO_CORE0 5
> +#define MASTER_DEHR 6
> +#define MASTER_GRAPHICS_3D 7
> +#define MASTER_JPEG 8
> +#define MASTER_LPASS 9
> +#define MASTER_MDP_PORT0 10
> +#define MASTER_QDSS_BAM 11
> +#define MASTER_QDSS_ETR 12
> +#define MASTER_SDCC_1 13
> +#define MASTER_SDCC_2 14
> +#define MASTER_SNOC_CFG 15
> +#define MASTER_SPDM 16
> +#define MASTER_TCU_0 17
> +#define MASTER_TCU_1 18
> +#define MASTER_USB_HS 19
> +#define MASTER_VFE 20
> +#define MASTER_VIDEO_P0 21
> +#define PNOC_INT_0 22
> +#define PNOC_INT_1 23
> +#define PNOC_M_0 24
> +#define PNOC_M_1 25
> +#define PNOC_SLV_0 26
> +#define PNOC_SLV_1 27
> +#define PNOC_SLV_2 28
> +#define PNOC_SLV_3 29
> +#define PNOC_SLV_4 30
> +#define PNOC_SLV_8 31
> +#define PNOC_SLV_9 32
> +#define PNOC_SNOC_MAS 33
> +#define PNOC_SNOC_SLV 34
> +#define SLAVE_AMPSS_L2 35
> +#define SLAVE_BIMC_CFG 36
> +#define SLAVE_BLSP_1 37
> +#define SLAVE_BOOT_ROM 38
> +#define SLAVE_CAMERA_CFG 39
> +#define SLAVE_CATS_128 40
> +#define SLAVE_CLK_CTL 41
> +#define SLAVE_CRYPTO_0_CFG 42
> +#define SLAVE_DEHR_CFG 43
> +#define SLAVE_DISPLAY_CFG 44
> +#define SLAVE_EBI_CH0 45
> +#define SLAVE_GRAPHICS_3D_CFG 46
> +#define SLAVE_IMEM_CFG 47
> +#define SLAVE_LPASS 48
> +#define SLAVE_MPM 49
> +#define SLAVE_MSM_PDM 50
> +#define SLAVE_MSM_TCSR 51
> +#define SLAVE_MSS 52
> +#define SLAVE_OCMEM_64 53
> +#define SLAVE_PMIC_ARB 54
> +#define SLAVE_PNOC_CFG 55
> +#define SLAVE_PRNG 56
> +#define SLAVE_QDSS_CFG 57
> +#define SLAVE_QDSS_STM 58
> +#define SLAVE_RBCPR_CFG 59
> +#define SLAVE_RPM_MSG_RAM 60
> +#define SLAVE_SDCC_1 61
> +#define SLAVE_SDCC_4 62
> +#define SLAVE_SECURITY 63
> +#define SLAVE_SERVICE_SNOC 64
> +#define SLAVE_SNOC_CFG 65
> +#define SLAVE_SPDM 66
> +#define SLAVE_SYSTEM_IMEM 67
> +#define SLAVE_TLMM 68
> +#define SLAVE_USB_HS 69
> +#define SLAVE_VENUS_CFG 70
> +#define SNOC_BIMC_0_MAS 71
> +#define SNOC_BIMC_0_SLV 72
> +#define SNOC_BIMC_1_MAS 73
> +#define SNOC_BIMC_1_SLV 74
> +#define SNOC_INT_0 75
> +#define SNOC_INT_1 76
> +#define SNOC_INT_BIMC 77
> +#define SNOC_MM_INT_0 78
> +#define SNOC_MM_INT_1 79
> +#define SNOC_MM_INT_2 80
> +#define SNOC_MM_INT_BIMC 81
> +#define SNOC_PNOC_MAS 82
> +#define SNOC_PNOC_SLV 83
> +#define SNOC_QDSS_INT 84
> +#define SYSTEM_SLAVE_FAB_APPS 85
Each of the 3 providers in the example should have their own number
space. This looks like you have combined them all. Though that is hard
to tell because I can't really decipher the naming convention
completely.
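
As a purely hypothetical illustration of that, each provider would get
its own zero-based space in a separate header (the ids below are
invented), and the node id in a specifier would be interpreted relative
to the provider phandle:

/* include/dt-bindings/interconnect/qcom,msm8916-pnoc.h */
#define MASTER_SDCC_1	0
#define MASTER_SDCC_2	1
#define PNOC_SNOC_MAS	2

/* include/dt-bindings/interconnect/qcom,msm8916-bimc.h */
#define BIMC_SNOC_MAS	0
#define BIMC_SNOC_SLV	1
#define SLAVE_EBI_CH0	2

That way <&pnoc 0> and <&bimc 0> would name different nodes without
sharing one global enumeration.
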
Rob
On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> > This binding is intended to represent the relations between the interconnect
> > controllers (providers) and consumer device nodes. It will allow creating links
> > between consumers and interconnect paths (exposed by interconnect providers).
>
> As I mentioned in person, I want to see other SoC families using this
> before accepting. They don't have to be ready for upstream, but WIP
> patches or even just a "yes, this works for us and we're going to use
> this binding on X".
>
> Also, I think the QCom GPU use of this should be fully sorted out. Or
> more generically how this fits into OPP binding which seems to be never
> ending extended...
This is a discussion I wouldn't mind having now. To jog memories, this is what
I posted a few weeks ago:
https://patchwork.freedesktop.org/patch/246117/
This seems like the easiest way to me to tie the frequency and the bandwidth
quota together for GPU devfreq scaling but I'm not married to the format and
I'll happily go a few rounds on the bikeshed if we can get something we can
be happy with.
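
For illustration, here is a rough sketch of what this could look like in
a devfreq target callback; the table, names and values below are all
invented:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/interconnect.h>
#include <linux/kernel.h>

/* invented per-OPP bandwidth quota, paired with core clock rates */
struct gpu_opp {
	unsigned long freq;	/* Hz */
	u32 avg_bw;		/* kBps */
	u32 peak_bw;		/* kBps */
};

static const struct gpu_opp gpu_opps[] = {
	{ 200000000,  800000, 1600000 },
	{ 500000000, 2000000, 4000000 },
};

struct gpu_priv {
	struct clk *core_clk;
	struct icc_path *gpu_ddr_path;
};

static int gpu_devfreq_target(struct device *dev, unsigned long *freq,
			      u32 flags)
{
	struct gpu_priv *priv = dev_get_drvdata(dev);
	const struct gpu_opp *opp = &gpu_opps[ARRAY_SIZE(gpu_opps) - 1];
	int i, ret;

	/* pick the lowest OPP that satisfies the requested frequency */
	for (i = 0; i < ARRAY_SIZE(gpu_opps); i++) {
		if (gpu_opps[i].freq >= *freq) {
			opp = &gpu_opps[i];
			break;
		}
	}

	ret = clk_set_rate(priv->core_clk, opp->freq);
	if (ret)
		return ret;

	/* scale the GPU->DDR interconnect path together with the clock */
	return icc_set(priv->gpu_ddr_path, opp->avg_bw, opp->peak_bw);
}
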
Jordan
> > Signed-off-by: Georgi Djakov <[email protected]>
> > ---
> > .../bindings/interconnect/interconnect.txt | 60 +++++++++++++++++++
> > 1 file changed, 60 insertions(+)
> > create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
> >
> > diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > new file mode 100644
> > index 000000000000..5cb7d3c8d44d
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > @@ -0,0 +1,60 @@
> > +Interconnect Provider Device Tree Bindings
> > +=========================================
> > +
> > +The purpose of this document is to define a common set of generic interconnect
> > +providers/consumers properties.
> > +
> > +
> > += interconnect providers =
> > +
> > +The interconnect provider binding is intended to represent the interconnect
> > +controllers in the system. Each provider registers a set of interconnect
> > +nodes, which expose the interconnect related capabilities of the interconnect
> > +to consumer drivers. These capabilities can be throughput, latency, priority
> > +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> > +depending on the use case. Interconnect providers can also be interconnect
> > +consumers, such as in the case where two network-on-chip fabrics interface
> > +directly
>
> missing '.'
>
> > +
> > +Required properties:
> > +- compatible : contains the interconnect provider compatible string
> > +- #interconnect-cells : number of cells in a interconnect specifier needed to
> > + encode the interconnect node id
> > +
> > +Example:
> > +
> > + snoc: snoc@580000 {
> > + compatible = "qcom,msm8916-snoc";
> > + #interconnect-cells = <1>;
> > + reg = <0x580000 0x14000>;
> > + clock-names = "bus_clk", "bus_a_clk";
> > + clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> > + <&rpmcc RPM_SMD_SNOC_A_CLK>;
> > + };
> > +
> > +
> > += interconnect consumers =
> > +
> > +The interconnect consumers are device nodes which dynamically express their
> > +bandwidth requirements along interconnect paths they are connected to. There
> > +can be multiple interconnect providers on a SoC and the consumer may consume
> > +multiple paths from different providers depending on use case and the
> > +components it has to interact with.
> > +
> > +Required properties:
> > +interconnects : Pairs of phandles and interconnect provider specifier to denote
> > + the edge source and destination ports of the interconnect path.
> > +
> > +Optional properties:
> > +interconnect-names : List of interconnect path name strings sorted in the same
> > + order as the interconnects property. Consumers drivers will use
> > + interconnect-names to match interconnect paths with interconnect
> > + specifiers.
>
> specifier pairs.
>
> > +
> > +Example:
> > +
> > + sdhci@7864000 {
> > + ...
> > + interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> > + interconnect-names = "ddr";
> > + };
--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
Hi Rob,
Thanks for the comments!
On 09/25/2018 09:02 PM, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>> This binding is intended to represent the relations between the interconnect
>> controllers (providers) and consumer device nodes. It will allow creating links
>> between consumers and interconnect paths (exposed by interconnect providers).
>
> As I mentioned in person, I want to see other SoC families using this
> before accepting. They don't have to be ready for upstream, but WIP
> patches or even just a "yes, this works for us and we're going to use
> this binding on X".
Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
currently using this binding, there is ongoing work from at least two
other vendors that would be using this same binding. I will check
what their progress is so far.
> Also, I think the QCom GPU use of this should be fully sorted out. Or
> more generically how this fits into OPP binding which seems to be never
> ending extended...
I see this as a further step. It could be an OPP binding which includes
bandwidth values or some separate DT property. Jordan has already
proposed something, do you have any initial comments on that?
BR,
Georgi
>> Signed-off-by: Georgi Djakov <[email protected]>
>> ---
>> .../bindings/interconnect/interconnect.txt | 60 +++++++++++++++++++
>> 1 file changed, 60 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>>
>> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>> new file mode 100644
>> index 000000000000..5cb7d3c8d44d
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>> @@ -0,0 +1,60 @@
>> +Interconnect Provider Device Tree Bindings
>> +=========================================
>> +
>> +The purpose of this document is to define a common set of generic interconnect
>> +providers/consumers properties.
>> +
>> +
>> += interconnect providers =
>> +
>> +The interconnect provider binding is intended to represent the interconnect
>> +controllers in the system. Each provider registers a set of interconnect
>> +nodes, which expose the interconnect related capabilities of the interconnect
>> +to consumer drivers. These capabilities can be throughput, latency, priority
>> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
>> +depending on the use case. Interconnect providers can also be interconnect
>> +consumers, such as in the case where two network-on-chip fabrics interface
>> +directly
>
> missing '.'
>
>> +
>> +Required properties:
>> +- compatible : contains the interconnect provider compatible string
>> +- #interconnect-cells : number of cells in a interconnect specifier needed to
>> + encode the interconnect node id
>> +
>> +Example:
>> +
>> + snoc: snoc@580000 {
>> + compatible = "qcom,msm8916-snoc";
>> + #interconnect-cells = <1>;
>> + reg = <0x580000 0x14000>;
>> + clock-names = "bus_clk", "bus_a_clk";
>> + clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
>> + <&rpmcc RPM_SMD_SNOC_A_CLK>;
>> + };
>> +
>> +
>> += interconnect consumers =
>> +
>> +The interconnect consumers are device nodes which dynamically express their
>> +bandwidth requirements along interconnect paths they are connected to. There
>> +can be multiple interconnect providers on a SoC and the consumer may consume
>> +multiple paths from different providers depending on use case and the
>> +components it has to interact with.
>> +
>> +Required properties:
>> +interconnects : Pairs of phandles and interconnect provider specifier to denote
>> + the edge source and destination ports of the interconnect path.
>> +
>> +Optional properties:
>> +interconnect-names : List of interconnect path name strings sorted in the same
>> + order as the interconnects property. Consumers drivers will use
>> + interconnect-names to match interconnect paths with interconnect
>> + specifiers.
>
> specifier pairs.
>
>> +
>> +Example:
>> +
>> + sdhci@7864000 {
>> + ...
>> + interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
>> + interconnect-names = "ddr";
>> + };
On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
> Hi Rob,
>
> Thanks for the comments!
>
> On 09/25/2018 09:02 PM, Rob Herring wrote:
> > On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> >> This binding is intended to represent the relations between the interconnect
> >> controllers (providers) and consumer device nodes. It will allow creating links
> >> between consumers and interconnect paths (exposed by interconnect providers).
> >
> > As I mentioned in person, I want to see other SoC families using this
> > before accepting. They don't have to be ready for upstream, but WIP
> > patches or even just a "yes, this works for us and we're going to use
> > this binding on X".
>
> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
> currently using this binding, there is ongoing work from at least two
> other vendors that would be using this same binding. I will check
> what their progress is so far.
>
> > Also, I think the QCom GPU use of this should be fully sorted out. Or,
> > more generically, how this fits into the OPP binding, which seems to be
> > endlessly extended...
>
> I see this as a further step. It could be an OPP binding which includes
> bandwidth values or some separate DT property. Jordan has already
> proposed something; do you have any initial comments on that?
I am curious as to how this fits into new systems which have firmware-driven
CPUFreq and other DVFS. I would like to avoid using this in such systems
and leave it up to the firmware to scale the bus/interconnect based on the
other components that are connected to it and active.
--
Regards,
Sudeep
Hi Sudeep,
On 09/26/2018 05:48 PM, Sudeep Holla wrote:
[...]
> I am curious as to how this fits into new systems which have firmware-driven
> CPUFreq and other DVFS. I would like to avoid using this in such systems
> and leave it up to the firmware to scale the bus/interconnect based on the
> other components that are connected to it and active.
This can be used with firmware-driven systems too. In that case the
interconnect platform driver will not manage the interconnects directly,
but will collect data and provide hints to the firmware. The firmware
can then take these hints into account and do the scaling.
Thanks,
Georgi
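To sketch that model: on such a system the provider's ->set() callback would
not program clocks or registers itself, but forward the aggregated bandwidth
to the remote processor (e.g. as an RPM_KEY_BW message). The callback
signature follows this series, while qcom_icc_rpm_smd_send() and the node
field below are placeholders for the platform's actual firmware interface:

	/* Platform-specific per-node data; the field is a placeholder */
	struct qcom_icc_node {
		u16 mas_rpm_id;
	};

	static int qcom_icc_set(struct icc_node *src, struct icc_node *dst,
				u32 avg_bw, u32 peak_bw)
	{
		struct qcom_icc_node *qn = src->data;

		/* Hint the firmware; it owns the final interconnect state */
		return qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
					     RPM_BUS_MASTER_REQ,
					     qn->mas_rpm_id, avg_bw);
	}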
On 09/26/2018 07:34 AM, Jordan Crouse wrote:
> On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
[...]
>> Also, I think the QCom GPU use of this should be fully sorted out. Or,
>> more generically, how this fits into the OPP binding, which seems to be
>> endlessly extended...
> This is a discussion I wouldn't mind having now. To jog memories, this is what
> I posted a few weeks ago:
>
> https://patchwork.freedesktop.org/patch/246117/
>
> This seems to me like the easiest way to tie the frequency and the bandwidth
> quota together for GPU devfreq scaling, but I'm not married to the format,
> and I'll happily go a few rounds on the bikeshed if we can get something we
> can be happy with.
>
> Jordan
Been meaning to send this out for a while, but got caught up with other
stuff.
That GPU BW patch is very specific to device-to-device mapping and
doesn't work well for other use cases (e.g. those that can calculate
their bandwidth needs based on the use case, etc).
Interconnect paths have different BW (bandwidth) operating points that
they can support. For example: 1 GB/s, 1.7 GB/s, 5 GB/s, etc. Having a
mapping from the GPU or CPU to those is fine/necessary, but we still
need a separate BW OPP table for interconnect paths to list what they
can actually support.
There are two different ways we could represent BW OPP tables for
interconnect paths:
1. Represent interconnect paths (CPU to DDR, GPU to DDR, etc.) as
devices and have OPPs for those devices.
2. Have an "interconnect-opp-tables" DT binding similar to
"interconnects" and "interconnect-names". So if a device (GPU, video
decoder, I2C controller, etc.) needs to vote on an interconnect path,
it can also list the OPP tables that those paths support (a sketch
follows below).
I know Rob doesn't like (1). But I'm hoping at least (2) is acceptable.
I'm open to other suggestions too.
Both (1) and (2) need BW OPP tables similar to frequency OPP tables.
That should be easy to add and Viresh is open to that. I'm open to
other options too, but the fundamental missing part is how to tie a
list of BW OPPs to interconnect paths in DT.
Once we have one of the above two options, we can use the required-opps
field (already present in the kernel) for the mapping from a GPU OPP to
a particular BW need (suggested by Viresh during an in-person
conversation).
Thanks,
Saravana
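To make option (2) above concrete, here is a purely hypothetical DT sketch.
Neither the opp-bw-kBps property nor the interconnect-opp-tables binding
exists yet; the node names, the unit address and the chosen path are
invented for illustration:

	/* Hypothetical BW OPP table for a GPU -> DDR interconnect path */
	gpu_ddr_bw_opp_table: gpu-ddr-bw-opp-table {
		compatible = "operating-points-v2";

		opp-1000000 {
			opp-bw-kBps = <1000000>;	/* 1 GB/s */
		};
		opp-1700000 {
			opp-bw-kBps = <1700000>;	/* 1.7 GB/s */
		};
		opp-5000000 {
			opp-bw-kBps = <5000000>;	/* 5 GB/s */
		};
	};

	gpu@1c00000 {
		...
		interconnects = <&snoc MASTER_GRAPHICS_3D &bimc SLAVE_EBI_CH0>;
		interconnect-names = "gfx-mem";
		/* one table per entry in the interconnects property */
		interconnect-opp-tables = <&gpu_ddr_bw_opp_table>;
	};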
On Mon, Oct 01, 2018 at 01:56:32PM -0700, Saravana Kannan wrote:
[...]
> Once we have one of the above two options, we can use the required-opps
> field (already present in the kernel) for the mapping from a GPU OPP to
> a particular BW need (suggested by Viresh during an in-person
> conversation).
Assuming we are willing to maintain the bandwidth OPP tables and the
names / phandles needed to describe a 1:1 GPU -> bandwidth mapping,
I'm okay with required-opps, but for the sake of argument: how would
required-opps work for a device that needs to vote on multiple paths
for a given OPP?
Jordan
On 10/01/2018 02:26 PM, Jordan Crouse wrote:
[...]
> Assuming we are willing to maintain the bandwidth OPP tables and the
> names / phandles needed to describe a 1:1 GPU -> bandwidth mapping,
> I'm okay with required-opps, but for the sake of argument: how would
> required-opps work for a device that needs to vote on multiple paths
> for a given OPP?
You can list multiple required-opps per device OPP. It's an array of
phandles to OPP entries in other tables.
-Saravana
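In DT terms, that could look roughly like the fragment below. This is
illustrative only: the labels refer to entries in hypothetical bandwidth
OPP tables such as the one sketched earlier, and only required-opps itself
is an existing property:

	gpu_opp_table: gpu-opp-table {
		compatible = "operating-points-v2";

		opp-500000000 {
			opp-hz = /bits/ 64 <500000000>;
			/* one phandle per interconnect path being voted */
			required-opps = <&gfx_mem_bw_opp_1700000>,
					<&gfx_cfg_bw_opp_200000>;
		};
	};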
On 09/26/2018 07:48 AM, Sudeep Holla wrote:
[...]
> I am curious as to how this fits into new systems which have firmware-driven
> CPUFreq and other DVFS. I would like to avoid using this in such systems
> and leave it up to the firmware to scale the bus/interconnect based on the
> other components that are connected to it and active.
>
You've made the same point multiple times across different patch sets.
Not all FW can do arbitrary functions. A lot of them are very limited in
their capabilities. So, as much as you and I would like to let the FW do
the work, it's not always possible. So, in those cases, we do need to
have support for the kernel scaling the interconnects correctly.
Hopefully this clears up your questions about FW capabilities.
Thanks,
Saravana
Hi Rob,
On 09/25/2018 09:17 PM, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:48PM +0300, Georgi Djakov wrote:
>> On some Qualcomm SoCs, there is a remote processor, which controls some of
>> the Network-On-Chip interconnect resources. Other CPUs express their needs
>> by communicating with this processor. Add a driver to handle communication
>> with this remote processor.
>
> I don't think you should have a binding nor a separate driver for this.
> It's not actually an interconnect provider, so it doesn't belong in
> bindings/interconnect. And it just looks like abuse of DT to instantiate
> some driver.
The idea of this binding is to represent the remote processor, which is
also in control of some of the shared interconnect paths. The bandwidth
needs of the DSPs and modem are also reported to this remote processor,
and it takes over some of the bandwidth management while the application
CPU is powered down. So yes, it is also a kind of interconnect provider,
so IMO it should be in DT.
We already have similar DT sub-nodes for remote regulator and clock
resources, and this is just adding another sub-node for the interconnect
bandwidth-related subsystem.
This, together with the separate NoC hardware blocks (in patch 6/8),
builds up the whole topology. The configuration of interconnect paths
consists of a combination of register writes, clock scaling and sending
a message to the remote processor.
> All the driver amounts to is a 1 function wrapper for RPM_KEY_BW
> messages. Can't this be part of the parent?
I am re-using this part for other SoCs and have separated it to avoid
duplication.
Thanks,
Georgi
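As an illustration of the sub-node idea, the interconnect provider would
sit under the RPM node next to the existing regulator and clock sub-nodes.
The node and compatible names below are placeholders rather than the exact
ones used in this series:

	rpm-requests {
		compatible = "qcom,rpm-msm8916";
		...

		interconnect {
			compatible = "qcom,interconnect-smd-rpm";
		};
	};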
Hi Rob,
On 09/25/2018 09:22 PM, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:49PM +0300, Georgi Djakov wrote:
>> Document the device-tree bindings of the Network-On-Chip interconnect
>> hardware found on Qualcomm msm8916 platforms.
>>
>> Signed-off-by: Georgi Djakov <[email protected]>
>> ---
>> .../bindings/interconnect/qcom-msm8916.txt | 41 ++++++++
>> include/dt-bindings/interconnect/qcom.h | 98 +++++++++++++++++++
>> 2 files changed, 139 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
>> create mode 100644 include/dt-bindings/interconnect/qcom.h
>>
>> diff --git a/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
>> new file mode 100644
>> index 000000000000..744df51df4ed
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/interconnect/qcom-msm8916.txt
>> @@ -0,0 +1,41 @@
>> +Qualcomm MSM8916 Network-On-Chip interconnect driver binding
>> +----------------------------------------------------
>> +
>> +Required properties :
>> +- compatible : shall contain only one of the following:
>> + "qcom,msm8916-bimc"
>> + "qcom,msm8916-pnoc"
>> + "qcom,msm8916-snoc"
>> +- #interconnect-cells : should contain 1
>> +- reg : shall contain base register location and length
>> +
>> +Optional properties :
>> +clocks : list of phandles and specifiers to all interconnect bus clocks
>> +clock-names : clock names should include both "bus_clk" and "bus_a_clk"
>> +
>> +Examples:
>> +
>> + snoc: snoc@580000 {
>
> interconnect@...
Ok.
>
>> + compatible = "qcom,msm8916-snoc";
>> + #interconnect-cells = <1>;
>> + reg = <0x580000 0x14000>;
>> + clock-names = "bus_clk", "bus_a_clk";
>> + clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
>> + <&rpmcc RPM_SMD_SNOC_A_CLK>;
>> + };
>> + bimc: bimc@400000 {
>
> interconnect@...
Ok.
>
>> + compatible = "qcom,msm8916-bimc";
>> + #interconnect-cells = <1>;
>> + reg = <0x400000 0x62000>;
>> + clock-names = "bus_clk", "bus_a_clk";
>> + clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
>> + <&rpmcc RPM_SMD_BIMC_A_CLK>;
>> + };
>> + pnoc: pnoc@500000 {
>
> and here.
Ok.
>
>> + compatible = "qcom,msm8916-pnoc";
>> + #interconnect-cells = <1>;
>> + reg = <0x500000 0x11000>;
>> + clock-names = "bus_clk", "bus_a_clk";
>> + clocks = <&rpmcc RPM_SMD_PCNOC_CLK>,
>> + <&rpmcc RPM_SMD_PCNOC_A_CLK>;
>> + };
>> diff --git a/include/dt-bindings/interconnect/qcom.h b/include/dt-bindings/interconnect/qcom.h
>> new file mode 100644
>> index 000000000000..48f944b30e5d
>> --- /dev/null
>> +++ b/include/dt-bindings/interconnect/qcom.h
>> @@ -0,0 +1,98 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Qualcomm interconnect IDs
>> + *
>> + * Copyright (c) 2018, Linaro Ltd.
>> + * Author: Georgi Djakov <[email protected]>
>> + */
>> +
>> +#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_H
>> +#define __DT_BINDINGS_INTERCONNECT_QCOM_H
>> +
>> +#define BIMC_SNOC_MAS 1
>> +#define BIMC_SNOC_SLV 2
>> +#define MASTER_AMPSS_M0 3
>> +#define MASTER_BLSP_1 4
>> +#define MASTER_CRYPTO_CORE0 5
>> +#define MASTER_DEHR 6
>> +#define MASTER_GRAPHICS_3D 7
>> +#define MASTER_JPEG 8
>> +#define MASTER_LPASS 9
>> +#define MASTER_MDP_PORT0 10
>> +#define MASTER_QDSS_BAM 11
>> +#define MASTER_QDSS_ETR 12
>> +#define MASTER_SDCC_1 13
>> +#define MASTER_SDCC_2 14
>> +#define MASTER_SNOC_CFG 15
>> +#define MASTER_SPDM 16
>> +#define MASTER_TCU_0 17
>> +#define MASTER_TCU_1 18
>> +#define MASTER_USB_HS 19
>> +#define MASTER_VFE 20
>> +#define MASTER_VIDEO_P0 21
>> +#define PNOC_INT_0 22
>> +#define PNOC_INT_1 23
>> +#define PNOC_M_0 24
>> +#define PNOC_M_1 25
>> +#define PNOC_SLV_0 26
>> +#define PNOC_SLV_1 27
>> +#define PNOC_SLV_2 28
>> +#define PNOC_SLV_3 29
>> +#define PNOC_SLV_4 30
>> +#define PNOC_SLV_8 31
>> +#define PNOC_SLV_9 32
>> +#define PNOC_SNOC_MAS 33
>> +#define PNOC_SNOC_SLV 34
>> +#define SLAVE_AMPSS_L2 35
>> +#define SLAVE_BIMC_CFG 36
>> +#define SLAVE_BLSP_1 37
>> +#define SLAVE_BOOT_ROM 38
>> +#define SLAVE_CAMERA_CFG 39
>> +#define SLAVE_CATS_128 40
>> +#define SLAVE_CLK_CTL 41
>> +#define SLAVE_CRYPTO_0_CFG 42
>> +#define SLAVE_DEHR_CFG 43
>> +#define SLAVE_DISPLAY_CFG 44
>> +#define SLAVE_EBI_CH0 45
>> +#define SLAVE_GRAPHICS_3D_CFG 46
>> +#define SLAVE_IMEM_CFG 47
>> +#define SLAVE_LPASS 48
>> +#define SLAVE_MPM 49
>> +#define SLAVE_MSM_PDM 50
>> +#define SLAVE_MSM_TCSR 51
>> +#define SLAVE_MSS 52
>> +#define SLAVE_OCMEM_64 53
>> +#define SLAVE_PMIC_ARB 54
>> +#define SLAVE_PNOC_CFG 55
>> +#define SLAVE_PRNG 56
>> +#define SLAVE_QDSS_CFG 57
>> +#define SLAVE_QDSS_STM 58
>> +#define SLAVE_RBCPR_CFG 59
>> +#define SLAVE_RPM_MSG_RAM 60
>> +#define SLAVE_SDCC_1 61
>> +#define SLAVE_SDCC_4 62
>> +#define SLAVE_SECURITY 63
>> +#define SLAVE_SERVICE_SNOC 64
>> +#define SLAVE_SNOC_CFG 65
>> +#define SLAVE_SPDM 66
>> +#define SLAVE_SYSTEM_IMEM 67
>> +#define SLAVE_TLMM 68
>> +#define SLAVE_USB_HS 69
>> +#define SLAVE_VENUS_CFG 70
>> +#define SNOC_BIMC_0_MAS 71
>> +#define SNOC_BIMC_0_SLV 72
>> +#define SNOC_BIMC_1_MAS 73
>> +#define SNOC_BIMC_1_SLV 74
>> +#define SNOC_INT_0 75
>> +#define SNOC_INT_1 76
>> +#define SNOC_INT_BIMC 77
>> +#define SNOC_MM_INT_0 78
>> +#define SNOC_MM_INT_1 79
>> +#define SNOC_MM_INT_2 80
>> +#define SNOC_MM_INT_BIMC 81
>> +#define SNOC_PNOC_MAS 82
>> +#define SNOC_PNOC_SLV 83
>> +#define SNOC_QDSS_INT 84
>> +#define SYSTEM_SLAVE_FAB_APPS 85
>
> Each of the 3 providers in the example should have their own number
> space. This looks like you have combined them all. Though that is hard
> to tell because I can't really decipher the naming convention
> completely.
That's the plan. Thanks for reviewing!
BR,
Georgi
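For reference, splitting into per-provider number spaces might look roughly
like this; the header layout and names are hypothetical, shown only to
illustrate the point:

	/* include/dt-bindings/interconnect/qcom,msm8916-snoc.h */
	#define MASTER_QDSS_ETR		0	/* ids local to &snoc */
	#define SLAVE_SERVICE_SNOC	1

	/* include/dt-bindings/interconnect/qcom,msm8916-bimc.h */
	#define SLAVE_EBI_CH0		0	/* ids local to &bimc */

Two providers can then reuse the same numeric id without clashing, because
the phandle in the specifier (&snoc vs &bimc) selects the namespace.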
On Mon, Oct 01, 2018 at 04:49:32PM -0700, Saravana Kannan wrote:
[...]
> You've made the same point multiple times across different patch sets. Not
> all FW can do arbitrary functions. A lot of them are very limited in their
> capabilities. So, as much as you and I would like to let the FW do the work,
> it's not always possible. So, in those cases, we do need to have support for
> the kernel scaling the interconnects correctly. Hopefully this clears up
> your questions about FW capabilities.
Yes, I do understand that I have made the same point multiple times,
and it's intentional. We need to get the fragmented f/w support story
fixed. Different ARM vendors are doing different things in f/w, and ARM
sees the same fragmentation story as before. We have come up with a new
specification, and my annoying repeated emails are just constant
reminders of that.
I do understand we have existing implementations to consider, but
fixing up the functionality in an arbitrary way is not good design, and
it is better to get it fixed for the future.
--
Regards,
Sudeep
On 10/02/2018 04:17 AM, Sudeep Holla wrote:
[...]
> Yes, I do understand that I have made the same point multiple times,
> and it's intentional. We need to get the fragmented f/w support story
> fixed. Different ARM vendors are doing different things in f/w, and ARM
> sees the same fragmentation story as before. We have come up with a new
> specification, and my annoying repeated emails are just constant
> reminders of that.
>
> I do understand we have existing implementations to consider, but
> fixing up the functionality in an arbitrary way is not good design, and
> it is better to get it fixed for the future.
I believe the fragmentation you are referring to is in the
interface/communication protocol. I see the benefit of standardizing
that as long as the standard actually turns out to be good. But that's
completely separate from what the FW can/can't do. Asking to standardize
what the FW can/can't do doesn't seem realistic as each chip vendor will
have different priorities - power, performance, cost, chip area, etc.
It's the conflation of these separate topics that doesn't help IMHO.
-Saravana
On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
>
> On 10/02/2018 04:17 AM, Sudeep Holla wrote:
[...]
> I believe the fragmentation you are referring to is in the
> interface/communication protocol. I see the benefit of standardizing that as
> long as the standard actually turns out to be good. But that's completely
> separate from what the FW can/can't do. Asking to standardize what the FW
> can/can't do doesn't seem realistic as each chip vendor will have different
> priorities - power, performance, cost, chip area, etc. It's the conflation
> of these separate topics that doesn't help IMHO.
I agree on interface/communication protocol fragmentation, and firmware
can implement whatever the vendor wishes. What I was also referring to
was the mix-and-match approach, which should be avoided.
e.g. Device A and B's PM is managed completely by firmware using OSPM
hints. Suppose Device X's PM is dependent on Devices A and B, in which
case it's simpler and cleaner to leave Device X's PM to firmware too.
Reading the state of A and B and using that as a hint for X is just
overhead which firmware can manage better. That was my main concern
here: A=CPU, B=some other device, and X is the interconnect to which A
and B are connected.
If CPU OPPs are obtained from f/w and this interconnect from DT, the
mapping between them is a mess, and that's what I was concerned about.
I am sorry if that's not the scenario here; I may have been mistaken.
--
Regards,
Sudeep
On 10/03/2018 02:33 AM, Sudeep Holla wrote:
[...]
> I agree on interface/communication protocol fragmentation, and firmware
> can implement whatever the vendor wishes. What I was also referring to
> was the mix-and-match approach, which should be avoided.
>
> e.g. Device A and B's PM is managed completely by firmware using OSPM
> hints. Suppose Device X's PM is dependent on Devices A and B, in which
> case it's simpler and cleaner to leave Device X's PM to firmware too.
> Reading the state of A and B and using that as a hint for X is just
> overhead which firmware can manage better. That was my main concern
> here: A=CPU, B=some other device, and X is the interconnect to which A
> and B are connected.
>
> If CPU OPPs are obtained from f/w and this interconnect from DT, the
> mapping between them is a mess, and that's what I was concerned about.
> I am sorry if that's not the scenario here; I may have been mistaken.
>
What you are asking for would be the ideal case, but this is not an
ideal world. There are tons of constraints for each chip vendor. Saying
you can't mix and match makes perfect the enemy of the good. Adding FW
support for A and B might make them optimal. But adding support for X
might not be possible due to multiple real-world constraints (chip
area, cost, time to market, etc). Saying "either do it all or do
nothing" is going to hold back a lot of progress that can come in
increments. Heck, we do the same thing in the kernel. We'll add basic
simple features first and then improve on them. Why is it suddenly
frowned upon if a FW/HW follows the same approach? I'll just have to
agree to disagree with you on this viewpoint.
-Saravana
On Wed, Oct 03, 2018 at 11:06:45AM -0700, Saravana Kannan wrote:
[...]
> What you are asking for would be the ideal case, but this is not an
> ideal world.
Agreed.
> There are tons of constraints for each chip vendor. Saying you can't mix and
> match makes perfect the enemy of the good.
We can have endless debate on that.
> Adding FW support for A and B might make them optimal.
OK...
> But adding support for X might not be possible due to multiple
> real-world constraints (chip area, cost, time to market, etc).
but it is not a good design, though. If f/w blindly changes DVFS for X
based on an OS request, then there is still the possibility of
clkscrew-style exploits, even though moving A/B to f/w was meant to
avoid them. The chances are low but not zero.
> Saying "either do it all or do nothing" is going to hold back a lot of
> progress that can come in increments. Heck, we do the same thing in the
> kernel. We'll add basic simple features first and then improve on them.
> Why is it suddenly frowned upon if a FW/HW follows the same approach?
> I'll just have to agree to disagree with you on this viewpoint.
>
I agree with the policy of adding the basics first and then improving
on them. But it's not fair to compare this mix-and-match approach to
that. Sorry, but I disagree with the comparison here.
--
Regards,
Sudeep
Hi Rob,
On 9/26/18 17:42, Georgi Djakov wrote:
> Hi Rob,
>
> Thanks for the comments!
>
> On 09/25/2018 09:02 PM, Rob Herring wrote:
[...]
>> As I mentioned in person, I want to see other SoC families using this
>> before accepting. They don't have to be ready for upstream, but WIP
>> patches or even just a "yes, this works for us and we're going to use
>> this binding on X".
Patches for the i.MX7ULP platform are already available [1]. Thanks,
Alexandre Bailon!
The interconnect API also seems to be a good fit for Nvidia SoCs. There
is an ongoing discussion about implementing an interconnect provider
driver for Tegra [2]. Thanks, Thierry and Krishna!
In addition to the above, I also checked privately with a few other SoC
maintainers and made them aware of these patches. Some are not ready
for upstream yet, but the feedback was positive and I expect more SoCs
to make use of this in the future.
BR,
Georgi
[1] https://lkml.org/lkml/2018/11/17/368
[2] https://marc.info/?t=154056181900001&r=1&w=2