From: Nikita Travkin <[email protected]>
This series adds the memshare QMI service.
It receives messages from other subsystems (such as the modem) and
answers with a message that contains the memory address of the allocated
region. This is used on many msm8916-based devices by the location
service, which requests a 2 MB region when the modem starts. If the
memory isn't provided, GPS doesn't work. Newer platforms may also use it
in other subsystems.
The driver implements:
- legacy alloc/free messages (known to be used by Samsung A2015 devices)
- generic alloc/free messages (used on most msm8916 devices)
- query_size message (sent by at least one modem firmware for msm8916)
The downstream driver [1] seems to use dynamic memory allocation, but it
relies on the concept of "Guaranteed Memory", which means that the
region is allocated on driver probe and not released even after a "free"
request. It also seems to support memory allocation "on demand", without
a prior client description.
My driver implements "guaranteed" allocation using reserved-memory
regions and does not allow allocation for any request that was not
described in the device tree. It also checks that the qrtr node that
sent the message is the expected one (to prevent, for example,
processing messages sent from userspace).
The memshare_qmi_msg.c/h files are mostly copied from [1] (query_size
was added from a later version of the driver) and cleaned up. Those
files keep the copyright line from the originals.
This driver was tested on multiple msm8916 devices, including devices
that have a dts upstream (with the exception of mtp; db410c doesn't
use it).
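For reference, providing the GPS region on such a device looks roughly
like this in the dts (a sketch mirroring the example from the bindings
patch; the node names and reserved address are illustrative):

```dts
/* Reserve the 2 MB region the modem's location service expects. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;

	gps_mem: gps@93c00000 {
		reg = <0x0 0x93c00000 0x0 0x200000>;
		no-map;
	};
};

/* Describe the memshare service and its single (legacy) GPS client. */
memshare {
	compatible = "qcom,memshare";
	qcom,legacy-client = <&memshare_gps>;

	#address-cells = <1>;
	#size-cells = <0>;

	mpss@0 {
		reg = <MEMSHARE_PROC_MPSS_V01>;
		qcom,qrtr-node = <0>;

		#address-cells = <1>;
		#size-cells = <0>;

		memshare_gps: gps@0 {
			reg = <0>;
			memory-region = <&gps_mem>;
		};
	};
};
```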
[1] https://source.codeaurora.org/quic/la/kernel/msm-3.10/tree/drivers/soc/qcom/memshare?h=LA.BR.1.2.9.1-02310-8x16.0
Nikita Travkin (2):
dt-bindings: soc: qcom: Add bindings for Qualcomm Memshare service
soc: qcom: Add Qualcomm Memshare QMI service
.../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++
drivers/soc/qcom/Kconfig | 8 +
drivers/soc/qcom/Makefile | 2 +
drivers/soc/qcom/memshare.c | 501 ++++++++++++++++++
drivers/soc/qcom/memshare_qmi_msg.c | 370 +++++++++++++
drivers/soc/qcom/memshare_qmi_msg.h | 228 ++++++++
include/dt-bindings/soc/qcom,memshare.h | 10 +
7 files changed, 1228 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
create mode 100644 drivers/soc/qcom/memshare.c
create mode 100644 drivers/soc/qcom/memshare_qmi_msg.c
create mode 100644 drivers/soc/qcom/memshare_qmi_msg.h
create mode 100644 include/dt-bindings/soc/qcom,memshare.h
--
2.27.0
From: Nikita Travkin <[email protected]>
Add DT bindings for memshare, a QMI service that allocates
memory on request from remote processors.
Signed-off-by: Nikita Travkin <[email protected]>
---
.../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
include/dt-bindings/soc/qcom,memshare.h | 10 ++
2 files changed, 119 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
create mode 100644 include/dt-bindings/soc/qcom,memshare.h
diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
new file mode 100644
index 000000000000..ebdf128b066c
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
@@ -0,0 +1,109 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Qualcomm QMI Shared Memory Service
+
+description: |
+ This driver provides a QMI service that allows remote processors (such as
+ the modem) to request additional memory. It is used for applications such
+ as GPS in the modem firmware.
+
+maintainers:
+ - Nikita Travkin <[email protected]>
+
+properties:
+ compatible:
+ const: qcom,memshare
+
+ qcom,legacy-client:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description: Phandle to a memshare client node used for legacy requests.
+
+ "#address-cells":
+ const: 1
+
+ "#size-cells":
+ const: 0
+
+patternProperties:
+ "^.*@[0-9]+$":
+ type: object
+
+ properties:
+ reg:
+ description: Proc ID of the remote processor for the clients in this node.
+
+ qcom,qrtr-node:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: qrtr node from which the requests are expected.
+
+ "#address-cells":
+ const: 1
+
+ "#size-cells":
+ const: 0
+
+ patternProperties:
+ "^.*@[0-9]+$":
+ type: object
+
+ properties:
+ reg:
+ description: ID of this client.
+
+ memory-region:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description: |
+ Reserved memory region that should be used for allocation.
+
+ required:
+ - reg
+
+ required:
+ - reg
+ - qcom,qrtr-node
+
+required:
+ - compatible
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/soc/qcom,memshare.h>
+
+ reserved-memory {
+
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ gps_mem: gps@93c00000 {
+ reg = <0x0 0x93c00000 0x0 0x200000>;
+ no-map;
+ };
+ };
+
+ memshare {
+ compatible = "qcom,memshare";
+ qcom,legacy-client = <&memshare_gps>;
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ mpss@0 {
+ reg = <MEMSHARE_PROC_MPSS_V01>;
+ qcom,qrtr-node = <0>;
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ memshare_gps: gps@0 {
+ reg = <0>;
+ memory-region = <&gps_mem>;
+ };
+ };
+ };
+
+...
diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
new file mode 100644
index 000000000000..4cef1ef75d09
--- /dev/null
+++ b/include/dt-bindings/soc/qcom,memshare.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __DT_QCOM_MEMSHARE_H__
+#define __DT_QCOM_MEMSHARE_H__
+
+#define MEMSHARE_PROC_MPSS_V01 0
+#define MEMSHARE_PROC_ADSP_V01 1
+#define MEMSHARE_PROC_WCNSS_V01 2
+
+#endif /* __DT_QCOM_MEMSHARE_H__ */
--
2.27.0
From: Nikita Travkin <[email protected]>
This QMI service provides memory regions to remote processors (such as
the modem) on request. It is known to be used by the GPS service in the
modem firmware of some (most) msm8916 devices.
This commit implements "guaranteed allocation" of memory using
reserved-memory regions, because the GPS service crashes on some devices
unless a specific address is provided.
Signed-off-by: Nikita Travkin <[email protected]>
---
drivers/soc/qcom/Kconfig | 8 +
drivers/soc/qcom/Makefile | 2 +
drivers/soc/qcom/memshare.c | 501 ++++++++++++++++++++++++++++
drivers/soc/qcom/memshare_qmi_msg.c | 370 ++++++++++++++++++++
drivers/soc/qcom/memshare_qmi_msg.h | 228 +++++++++++++
5 files changed, 1109 insertions(+)
create mode 100644 drivers/soc/qcom/memshare.c
create mode 100644 drivers/soc/qcom/memshare_qmi_msg.c
create mode 100644 drivers/soc/qcom/memshare_qmi_msg.h
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 79b568f82a1c..8575cd6fe0ef 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -209,4 +209,12 @@ config QCOM_APR
application processor and QDSP6. APR is
used by audio driver to configure QDSP6
ASM, ADM and AFE modules.
+
+config QCOM_MEMSHARE_QMI_SERVICE
+ tristate "Qualcomm QMI Shared Memory Service (MEMSHARE)"
+ select QCOM_QMI_HELPERS
+ help
+ This service provides additional memory regions on request from
+ remote processors. On some platforms it is used for GPS by
+ the location service in the modem subsystem.
endmenu
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index ad675a6593d0..5a986c2e1770 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -26,3 +26,5 @@ obj-$(CONFIG_QCOM_LLCC) += llcc-qcom.o
obj-$(CONFIG_QCOM_RPMHPD) += rpmhpd.o
obj-$(CONFIG_QCOM_RPMPD) += rpmpd.o
obj-$(CONFIG_QCOM_KRYO_L2_ACCESSORS) += kryo-l2-accessors.o
+obj-$(CONFIG_QCOM_MEMSHARE_QMI_SERVICE) += qcom_memshare.o
+qcom_memshare-y := memshare.o memshare_qmi_msg.o
diff --git a/drivers/soc/qcom/memshare.c b/drivers/soc/qcom/memshare.c
new file mode 100644
index 000000000000..a19858b91276
--- /dev/null
+++ b/drivers/soc/qcom/memshare.c
@@ -0,0 +1,501 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/net.h>
+#include <linux/soc/qcom/qmi.h>
+
+#include "memshare_qmi_msg.h"
+
+#define MEMSHARE_MAX_CLIENTS 10
+
+struct memshare_client {
+ u32 id;
+ u32 proc_id;
+ u32 qrtr_node;
+ u32 size;
+ phys_addr_t phy_addr;
+};
+
+struct memshare {
+ struct device *dev;
+ struct qmi_handle qmi;
+ unsigned int client_cnt;
+ struct memshare_client *legacy_client;
+ struct memshare_client *clients[MEMSHARE_MAX_CLIENTS];
+};
+
+static struct memshare_client *memshare_get_client(struct memshare *share, u32 id, u32 proc_id)
+{
+ int i;
+
+ for (i = 0; i < share->client_cnt; i++)
+ if (share->clients[i]->id == id && share->clients[i]->proc_id == proc_id)
+ return share->clients[i];
+
+ return NULL;
+}
+
+static void memshare_alloc_req_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct memshare *share = container_of(qmi, struct memshare, qmi);
+ const struct mem_alloc_req_msg_v01 *req = data;
+ struct mem_alloc_resp_msg_v01 resp = { .resp = QMI_RESULT_FAILURE_V01 };
+ struct memshare_client *client = share->legacy_client;
+ int ret;
+
+ dev_dbg(share->dev,
+ "alloc_req: num_bytes=%d, block_alignment_valid=%d, block_alignment=%d, node=%d\n",
+ req->num_bytes,
+ req->block_alignment_valid,
+ req->block_alignment,
+ sq->sq_node
+ );
+
+ if (!client) {
+ dev_err(share->dev, "Legacy request but no legacy client defined (size=%d, node=%d)\n",
+ req->num_bytes, sq->sq_node);
+ goto send_response;
+ }
+
+ if (sq->sq_node != client->qrtr_node) {
+ dev_err(share->dev, "Request from node %d but %d expected\n",
+ sq->sq_node, client->qrtr_node);
+ goto send_response;
+ }
+
+ if (client->size && client->size != req->num_bytes) {
+ dev_err(share->dev, "Got a request with wrong size (size=%d)\n",
+ req->num_bytes);
+ goto send_response;
+ }
+
+ if (req->block_alignment_valid)
+ if (client->phy_addr % MEM_BLOCK_ALIGN_TO_BYTES(req->block_alignment) != 0)
+ dev_warn(share->dev, "Memory region is not aligned by %d bytes\n",
+ MEM_BLOCK_ALIGN_TO_BYTES(req->block_alignment));
+
+ if (!client->phy_addr) {
+ dev_info(share->dev,
+ "Client sent a request but no memory is configured (size=%d, node=%d)\n",
+ req->num_bytes, sq->sq_node);
+ goto send_response;
+ }
+
+ resp.resp = QMI_RESULT_SUCCESS_V01;
+ resp.handle_valid = true;
+ resp.handle = client->phy_addr;
+ resp.num_bytes_valid = true;
+ resp.num_bytes = client->size;
+
+send_response:
+ ret = qmi_send_response(qmi, sq, txn, MEM_ALLOC_MSG_V01, MEM_MAX_MSG_LEN_V01,
+ mem_alloc_resp_msg_data_v01_ei, &resp);
+ if (ret < 0)
+ dev_err(share->dev, "Failed to send response: %d\n", ret);
+}
+
+static void memshare_free_req_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct memshare *share = container_of(qmi, struct memshare, qmi);
+ const struct mem_free_req_msg_v01 *req = data;
+ struct mem_free_resp_msg_v01 resp = { .resp = QMI_RESULT_FAILURE_V01 };
+ struct memshare_client *client = share->legacy_client;
+ int ret;
+
+ dev_dbg(share->dev, "free_req: node=%d\n", sq->sq_node);
+
+ if (!client) {
+ dev_err(share->dev, "Legacy request but no legacy client defined\n");
+ goto send_response;
+ }
+
+ if (sq->sq_node != client->qrtr_node) {
+ dev_err(share->dev, "Request from node %d but %d expected\n",
+ sq->sq_node, client->qrtr_node);
+ goto send_response;
+ }
+
+ if (client->phy_addr != req->handle) {
+ dev_err(share->dev, "Got a request with wrong address\n");
+ goto send_response;
+ }
+
+ resp.resp = QMI_RESULT_SUCCESS_V01;
+
+send_response:
+ ret = qmi_send_response(qmi, sq, txn, MEM_FREE_MSG_V01, MEM_MAX_MSG_LEN_V01,
+ mem_free_resp_msg_data_v01_ei, &resp);
+ if (ret < 0)
+ dev_err(share->dev, "Failed to send response: %d\n", ret);
+}
+
+static void memshare_alloc_generic_req_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct memshare *share = container_of(qmi, struct memshare, qmi);
+ const struct mem_alloc_generic_req_msg_v01 *req = data;
+ struct mem_alloc_generic_resp_msg_v01 *resp;
+ struct memshare_client *client;
+ int ret;
+
+ resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+ if (!resp)
+ return;
+
+ resp->resp.result = QMI_RESULT_FAILURE_V01;
+ resp->resp.error = QMI_ERR_INTERNAL_V01;
+
+ dev_dbg(share->dev,
+ "alloc_generic_req: num_bytes=%d, client_id=%d, proc_id=%d, sequence_id=%d, "
+ "alloc_contiguous_valid=%d, alloc_contiguous=%d, block_alignment_valid=%d, "
+ "block_alignment=%d, node=%d\n",
+ req->num_bytes,
+ req->client_id,
+ req->proc_id,
+ req->sequence_id,
+ req->alloc_contiguous_valid,
+ req->alloc_contiguous,
+ req->block_alignment_valid,
+ req->block_alignment,
+ sq->sq_node
+ );
+
+ client = memshare_get_client(share, req->client_id, req->proc_id);
+ if (!client) {
+ dev_err(share->dev,
+ "Got a request from unknown client (id=%d, proc=%d, size=%d, node=%d)\n",
+ req->client_id, req->proc_id, req->num_bytes, sq->sq_node);
+ goto send_response;
+ }
+
+ if (sq->sq_node != client->qrtr_node) {
+ dev_err(share->dev, "Request from node %d but %d expected\n",
+ sq->sq_node, client->qrtr_node);
+ goto send_response;
+ }
+
+ if (!client->phy_addr) {
+ dev_info(share->dev,
+ "Client sent a request but no memory is configured "
+ "(id=%d, proc=%d, size=%d, node=%d)\n",
+ req->client_id, req->proc_id, req->num_bytes, sq->sq_node);
+ goto send_response;
+ }
+
+ if (client->size != req->num_bytes) {
+ dev_err(share->dev,
+ "Got a request with wrong size (id=%d, proc=%d, size=%d)\n",
+ req->client_id, req->proc_id, req->num_bytes);
+ goto send_response;
+ }
+
+ if (req->block_alignment_valid)
+ if (client->phy_addr % MEM_BLOCK_ALIGN_TO_BYTES(req->block_alignment) != 0)
+ dev_warn(share->dev, "Memory region is not aligned by %d bytes\n",
+ MEM_BLOCK_ALIGN_TO_BYTES(req->block_alignment));
+
+ resp->resp.result = QMI_RESULT_SUCCESS_V01;
+ resp->resp.error = QMI_ERR_NONE_V01;
+ resp->sequence_id_valid = true;
+ resp->sequence_id = req->sequence_id;
+ resp->dhms_mem_alloc_addr_info_valid = true;
+ resp->dhms_mem_alloc_addr_info_len = 1;
+ resp->dhms_mem_alloc_addr_info[0].phy_addr = client->phy_addr;
+ resp->dhms_mem_alloc_addr_info[0].num_bytes = client->size;
+
+send_response:
+ ret = qmi_send_response(qmi, sq, txn, MEM_ALLOC_GENERIC_MSG_V01, MEM_MAX_MSG_LEN_V01,
+ mem_alloc_generic_resp_msg_data_v01_ei, resp);
+ if (ret < 0)
+ dev_err(share->dev, "Failed to send response: %d\n", ret);
+
+ kfree(resp);
+}
+
+static void memshare_free_generic_req_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct memshare *share = container_of(qmi, struct memshare, qmi);
+ const struct mem_free_generic_req_msg_v01 *req = data;
+ struct mem_free_generic_resp_msg_v01 resp = {
+ .resp.result = QMI_RESULT_FAILURE_V01,
+ .resp.error = QMI_ERR_INTERNAL_V01,
+ };
+ struct memshare_client *client;
+ int ret;
+
+ dev_dbg(share->dev,
+ "free_generic_req: dhms_mem_alloc_addr_info_len=%d, client_id_valid=%d, "
+ "client_id=%d, proc_id_valid=%d, proc_id=%d, node=%d\n",
+ req->dhms_mem_alloc_addr_info_len,
+ req->client_id_valid,
+ req->client_id,
+ req->proc_id_valid,
+ req->proc_id,
+ sq->sq_node
+ );
+
+ if (req->dhms_mem_alloc_addr_info_len != 1) {
+ dev_err(share->dev, "addr_info_len = %d is unexpected\n",
+ req->dhms_mem_alloc_addr_info_len);
+ goto send_response;
+ }
+
+ if (!req->client_id_valid || !req->proc_id_valid) {
+ dev_err(share->dev, "Got a request from unknown client\n");
+ goto send_response;
+ }
+
+ client = memshare_get_client(share, req->client_id, req->proc_id);
+ if (!client) {
+ dev_err(share->dev, "Got a request from unknown client (id=%d, proc=%d)\n",
+ req->client_id, req->proc_id);
+ goto send_response;
+ }
+
+ if (req->dhms_mem_alloc_addr_info[0].phy_addr != client->phy_addr) {
+ dev_err(share->dev, "Client sent invalid handle\n");
+ goto send_response;
+ }
+
+ if (sq->sq_node != client->qrtr_node) {
+ dev_err(share->dev, "Request from node %d but %d expected\n",
+ sq->sq_node, client->qrtr_node);
+ goto send_response;
+ }
+
+ resp.resp.result = QMI_RESULT_SUCCESS_V01;
+ resp.resp.error = QMI_ERR_NONE_V01;
+
+send_response:
+ ret = qmi_send_response(qmi, sq, txn, MEM_FREE_GENERIC_MSG_V01, MEM_MAX_MSG_LEN_V01,
+ mem_free_generic_resp_msg_data_v01_ei, &resp);
+ if (ret < 0)
+ dev_err(share->dev, "Failed to send response: %d\n", ret);
+}
+
+static void memshare_query_size_req_handler(struct qmi_handle *qmi, struct sockaddr_qrtr *sq,
+ struct qmi_txn *txn, const void *data)
+{
+ struct memshare *share = container_of(qmi, struct memshare, qmi);
+ const struct mem_query_size_req_msg_v01 *req = data;
+ struct mem_query_size_rsp_msg_v01 resp = {
+ .resp.result = QMI_RESULT_FAILURE_V01,
+ .resp.error = QMI_ERR_INTERNAL_V01,
+ };
+ struct memshare_client *client;
+ int ret;
+
+ dev_dbg(share->dev,
+ "query_size_req: client_id=%d, proc_id_valid=%d, proc_id=%d, node=%d\n",
+ req->client_id,
+ req->proc_id_valid,
+ req->proc_id,
+ sq->sq_node
+ );
+
+ client = memshare_get_client(share, req->client_id, req->proc_id);
+ if (!client) {
+ dev_err(share->dev, "Got a request from unknown client (id=%d, proc=%d)\n",
+ req->client_id, req->proc_id);
+ goto send_response;
+ }
+
+ if (sq->sq_node != client->qrtr_node) {
+ dev_err(share->dev, "Request from node %d but %d expected\n",
+ sq->sq_node, client->qrtr_node);
+ goto send_response;
+ }
+
+ if (!client->phy_addr)
+ goto send_response;
+
+ resp.resp.result = QMI_RESULT_SUCCESS_V01;
+ resp.resp.error = QMI_ERR_NONE_V01;
+ resp.size_valid = true;
+ resp.size = client->size;
+
+send_response:
+ ret = qmi_send_response(qmi, sq, txn, MEM_QUERY_SIZE_MSG_V01, MEM_MAX_MSG_LEN_V01,
+ mem_query_size_resp_msg_data_v01_ei, &resp);
+ if (ret < 0)
+ dev_err(share->dev, "Failed to send response: %d\n", ret);
+}
+
+static const struct qmi_msg_handler memshare_handlers[] = {
+ {
+ .type = QMI_REQUEST,
+ .msg_id = MEM_ALLOC_MSG_V01,
+ .ei = mem_alloc_req_msg_data_v01_ei,
+ .decoded_size = sizeof(struct mem_alloc_req_msg_v01),
+ .fn = memshare_alloc_req_handler,
+ },
+ {
+ .type = QMI_REQUEST,
+ .msg_id = MEM_FREE_MSG_V01,
+ .ei = mem_free_req_msg_data_v01_ei,
+ .decoded_size = sizeof(struct mem_free_req_msg_v01),
+ .fn = memshare_free_req_handler,
+ },
+ {
+ .type = QMI_REQUEST,
+ .msg_id = MEM_ALLOC_GENERIC_MSG_V01,
+ .ei = mem_alloc_generic_req_msg_data_v01_ei,
+ .decoded_size = sizeof(struct mem_alloc_generic_req_msg_v01),
+ .fn = memshare_alloc_generic_req_handler,
+ },
+ {
+ .type = QMI_REQUEST,
+ .msg_id = MEM_FREE_GENERIC_MSG_V01,
+ .ei = mem_free_generic_req_msg_data_v01_ei,
+ .decoded_size = sizeof(struct mem_free_generic_req_msg_v01),
+ .fn = memshare_free_generic_req_handler,
+ },
+ {
+ .type = QMI_REQUEST,
+ .msg_id = MEM_QUERY_SIZE_MSG_V01,
+ .ei = mem_query_size_req_msg_data_v01_ei,
+ .decoded_size = sizeof(struct mem_query_size_req_msg_v01),
+ .fn = memshare_query_size_req_handler,
+ },
+ { /* sentinel */ }
+};
+
+static int memshare_probe_dt(struct memshare *share)
+{
+ struct device_node *np = share->dev->of_node;
+ struct device_node *proc_node = NULL, *client_node = NULL, *mem_node = NULL;
+ int ret = 0;
+ u32 proc_id, qrtr_node;
+ struct memshare_client *client;
+ phandle legacy_client = 0;
+ struct resource reserved_memory;
+
+ ret = of_property_read_u32(np, "qcom,legacy-client", &legacy_client);
+ if (ret && ret != -EINVAL)
+ return ret;
+
+ for_each_available_child_of_node(np, proc_node) {
+ ret = of_property_read_u32(proc_node, "reg", &proc_id);
+ if (ret)
+ goto error;
+
+ ret = of_property_read_u32(proc_node, "qcom,qrtr-node", &qrtr_node);
+ if (ret)
+ goto error;
+
+ for_each_available_child_of_node(proc_node, client_node) {
+ if (share->client_cnt >= MEMSHARE_MAX_CLIENTS) {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ client = devm_kzalloc(share->dev, sizeof(*client), GFP_KERNEL);
+ if (!client) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ ret = of_property_read_u32(client_node, "reg", &client->id);
+ if (ret)
+ goto error;
+
+ client->proc_id = proc_id;
+ client->qrtr_node = qrtr_node;
+
+ mem_node = of_parse_phandle(client_node, "memory-region", 0);
+ if (mem_node) {
+ ret = of_address_to_resource(mem_node, 0, &reserved_memory);
+ of_node_put(mem_node);
+ if (ret)
+ goto error;
+
+ client->phy_addr = reserved_memory.start;
+ client->size = resource_size(&reserved_memory);
+ }
+
+ if (client_node->phandle == legacy_client)
+ share->legacy_client = client;
+
+ share->clients[share->client_cnt] = client;
+ share->client_cnt++;
+ }
+ }
+
+ return 0;
+
+error:
+ of_node_put(client_node);
+ of_node_put(proc_node);
+ return ret;
+}
+
+static int memshare_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct memshare *share;
+ int ret;
+
+ share = devm_kzalloc(&pdev->dev, sizeof(*share), GFP_KERNEL);
+ if (!share)
+ return -ENOMEM;
+
+ share->dev = dev;
+ dev_set_drvdata(&pdev->dev, share);
+
+ ret = qmi_handle_init(&share->qmi, MEM_MAX_MSG_LEN_V01, NULL, memshare_handlers);
+ if (ret < 0)
+ return ret;
+
+ ret = memshare_probe_dt(share);
+ if (ret < 0)
+ goto error;
+
+ ret = qmi_add_server(&share->qmi, MEM_SERVICE_SVC_ID,
+ MEM_SERVICE_VER, MEM_SERVICE_INS_ID);
+ if (ret < 0)
+ goto error;
+
+ return 0;
+
+error:
+ qmi_handle_release(&share->qmi);
+ return ret;
+}
+
+static int memshare_remove(struct platform_device *pdev)
+{
+ struct memshare *share = dev_get_drvdata(&pdev->dev);
+
+ qmi_handle_release(&share->qmi);
+
+ return 0;
+}
+
+static const struct of_device_id memshare_of_match[] = {
+ { .compatible = "qcom,memshare", },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, memshare_of_match);
+
+static struct platform_driver memshare_driver = {
+ .probe = memshare_probe,
+ .remove = memshare_remove,
+ .driver = {
+ .name = "qcom-memshare",
+ .of_match_table = of_match_ptr(memshare_of_match),
+ },
+};
+module_platform_driver(memshare_driver);
+
+MODULE_DESCRIPTION("Qualcomm Memory Share Service");
+MODULE_AUTHOR("Nikita Travkin <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/memshare_qmi_msg.c b/drivers/soc/qcom/memshare_qmi_msg.c
new file mode 100644
index 000000000000..0cbccaaa7994
--- /dev/null
+++ b/drivers/soc/qcom/memshare_qmi_msg.c
@@ -0,0 +1,370 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2013-2015, 2017-2018, The Linux Foundation. All rights reserved.
+
+#include <linux/stddef.h>
+#include <linux/soc/qcom/qmi.h>
+
+#include "memshare_qmi_msg.h"
+
+struct qmi_elem_info mem_alloc_req_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_req_msg_v01, num_bytes),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_alloc_req_msg_v01, num_bytes),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_req_msg_v01, block_alignment_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_req_msg_v01, block_alignment_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_req_msg_v01, block_alignment),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_req_msg_v01, block_alignment),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_alloc_resp_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_SIGNED_2_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_resp_msg_v01, resp),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_alloc_resp_msg_v01, resp),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_resp_msg_v01, handle_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_resp_msg_v01, handle_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_resp_msg_v01, handle),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_resp_msg_v01, handle),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_resp_msg_v01, num_bytes_valid),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_resp_msg_v01, num_bytes_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_resp_msg_v01, num_bytes),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_resp_msg_v01, num_bytes),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_free_req_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_req_msg_v01, handle),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_free_req_msg_v01, handle),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_free_resp_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_SIGNED_2_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_resp_msg_v01, resp),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_free_resp_msg_v01, resp),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info dhms_mem_alloc_addr_info_type_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_8_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct dhms_mem_alloc_addr_info_type_v01, phy_addr),
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ .offset = offsetof(struct dhms_mem_alloc_addr_info_type_v01, phy_addr),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct dhms_mem_alloc_addr_info_type_v01, num_bytes),
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ .offset = offsetof(struct dhms_mem_alloc_addr_info_type_v01, num_bytes),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_alloc_generic_req_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01, num_bytes),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, num_bytes),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01, client_id),
+ .tlv_type = 0x02,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, client_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01, proc_id),
+ .tlv_type = 0x03,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, proc_id),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01, sequence_id),
+ .tlv_type = 0x04,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, sequence_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01,
+ alloc_contiguous_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01,
+ alloc_contiguous_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_1_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01,
+ alloc_contiguous),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, alloc_contiguous),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01,
+ block_alignment_valid),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01,
+ block_alignment_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_req_msg_v01,
+ block_alignment),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_generic_req_msg_v01, block_alignment),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_alloc_generic_resp_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01, resp),
+ .tlv_type = 0x02,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01,
+ sequence_id_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01, sequence_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01, sequence_id),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01, sequence_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info_valid),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info_valid),
+ },
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info_len),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = MAX_ARR_CNT_V01,
+ .elem_size = sizeof_field(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_alloc_generic_resp_msg_v01,
+ dhms_mem_alloc_addr_info),
+ .ei_array = dhms_mem_alloc_addr_info_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_free_generic_req_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_DATA_LEN,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01,
+ dhms_mem_alloc_addr_info_len),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01,
+ dhms_mem_alloc_addr_info_len),
+ },
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = MAX_ARR_CNT_V01,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01,
+ dhms_mem_alloc_addr_info),
+ .array_type = VAR_LEN_ARRAY,
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01,
+ dhms_mem_alloc_addr_info),
+ .ei_array = dhms_mem_alloc_addr_info_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01, client_id_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01, client_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01, client_id),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01, client_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01, proc_id_valid),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01, proc_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_req_msg_v01, proc_id),
+ .tlv_type = 0x11,
+ .offset = offsetof(struct mem_free_generic_req_msg_v01, proc_id),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_free_generic_resp_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_free_generic_resp_msg_v01, resp),
+ .tlv_type = 0x02,
+ .offset = offsetof(struct mem_free_generic_resp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_query_size_req_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_req_msg_v01, client_id),
+ .tlv_type = 0x01,
+ .offset = offsetof(struct mem_query_size_req_msg_v01, client_id),
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_req_msg_v01, proc_id_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_query_size_req_msg_v01, proc_id_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_req_msg_v01, proc_id),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_query_size_req_msg_v01, proc_id),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
+
+struct qmi_elem_info mem_query_size_resp_msg_data_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_rsp_msg_v01, resp),
+ .tlv_type = 0x02,
+ .offset = offsetof(struct mem_query_size_rsp_msg_v01, resp),
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_OPT_FLAG,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_rsp_msg_v01, size_valid),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_query_size_rsp_msg_v01, size_valid),
+ },
+ {
+ .data_type = QMI_UNSIGNED_4_BYTE,
+ .elem_len = 1,
+ .elem_size = sizeof_field(struct mem_query_size_rsp_msg_v01, size),
+ .tlv_type = 0x10,
+ .offset = offsetof(struct mem_query_size_rsp_msg_v01, size),
+ },
+ {
+ .data_type = QMI_EOTI,
+ },
+};
diff --git a/drivers/soc/qcom/memshare_qmi_msg.h b/drivers/soc/qcom/memshare_qmi_msg.h
new file mode 100644
index 000000000000..c436330a0713
--- /dev/null
+++ b/drivers/soc/qcom/memshare_qmi_msg.h
@@ -0,0 +1,228 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2013-2015, 2017-2018, The Linux Foundation. All rights reserved. */
+
+#ifndef HEAP_MEM_EXT_SERVICE_01_H
+#define HEAP_MEM_EXT_SERVICE_01_H
+
+#include <linux/types.h>
+#include <linux/soc/qcom/qmi.h>
+
+#define MEM_SERVICE_SVC_ID 0x00000034
+#define MEM_SERVICE_INS_ID 1
+#define MEM_SERVICE_VER 1
+
+/* Service Message Definition */
+#define MEM_ALLOC_MSG_V01 0x0020
+#define MEM_FREE_MSG_V01 0x0021
+#define MEM_ALLOC_GENERIC_MSG_V01 0x0022
+#define MEM_FREE_GENERIC_MSG_V01 0x0023
+#define MEM_QUERY_SIZE_MSG_V01 0x0024
+
+#define MEM_MAX_MSG_LEN_V01 255
+#define MAX_ARR_CNT_V01 64
+
+#define MEM_BLOCK_ALIGN_TO_BYTES(x) (2 << (x)) /* 0..11 */
+
+/**
+ * Unless stated otherwise, any property X that is paired with an X_valid
+ * property is optional. All other properties are mandatory.
+ */
+
+/**
+ * struct dhms_mem_alloc_addr_info_type_v01 - Element of memory block array.
+ * @phy_addr: Physical address of memory block.
+ * @num_bytes: Size of memory block.
+ */
+struct dhms_mem_alloc_addr_info_type_v01 {
+ u64 phy_addr;
+ u32 num_bytes;
+};
+
+/**
+ * struct mem_alloc_req_msg_v01 - Legacy memory allocation request.
+ * @num_bytes: Requested size.
+ * @block_alignment_valid: Must be set to true if block_alignment is being passed.
+ * @block_alignment: The block alignment for the memory block to be allocated.
+ *
+ * Request Message.
+ * This command is used for getting the multiple physically contiguous memory
+ * blocks from the server memory subsystem.
+ */
+struct mem_alloc_req_msg_v01 {
+ u32 num_bytes;
+ u8 block_alignment_valid;
+ u32 block_alignment;
+};
+
+/**
+ * struct mem_alloc_resp_msg_v01 - Response for legacy allocation memory request.
+ * @resp: Result Code. The result of the requested memory operation.
+ * @handle_valid: Must be set to true if handle is being passed.
+ * @handle: Memory Block Handle. The physical address of the memory allocated on the HLOS.
+ * @num_bytes_valid: Must be set to true if num_bytes is being passed.
+ * @num_bytes: Memory block size. The number of bytes actually allocated for the request.
+ * This value can be smaller than the size requested in QMI_DHMS_MEM_ALLOC_REQ_MSG.
+ *
+ * Response Message.
+ * This command is used for getting the multiple physically contiguous memory
+ * blocks from the server memory subsystem.
+ */
+struct mem_alloc_resp_msg_v01 {
+ u16 resp;
+ u8 handle_valid;
+ u64 handle;
+ u8 num_bytes_valid;
+ u32 num_bytes;
+};
+
+/**
+ * struct mem_free_req_msg_v01 - Legacy memory free request.
+ * @handle: Physical address of memory to be freed.
+ *
+ * Request Message.
+ * This command is used for releasing the multiple physically contiguous memory
+ * blocks to the server memory subsystem.
+ */
+struct mem_free_req_msg_v01 {
+ u64 handle;
+};
+
+/**
+ * struct mem_free_resp_msg_v01 - Response for legacy memory free request.
+ * @resp: Result of the requested memory operation.
+ *
+ * Response Message.
+ * This command is used for releasing the multiple physically contiguous memory
+ * blocks to the server memory subsystem.
+ */
+struct mem_free_resp_msg_v01 {
+ u16 resp;
+};
+
+/**
+ * struct mem_alloc_generic_req_msg_v01 - Memory allocation request.
+ * @num_bytes: Requested size.
+ * @client_id: Client ID.
+ * @proc_id: Peripheral ID.
+ * @sequence_id: Sequence ID.
+ * @alloc_contiguous_valid: Must be set to true if alloc_contiguous is being passed.
+ * @alloc_contiguous: Used to identify whether clients are requesting
+ * contiguous or non-contiguous memory; the default is contiguous.
+ * 0 = non-contiguous, otherwise contiguous.
+ * @block_alignment_valid: Must be set to true if block_alignment is being passed.
+ * @block_alignment: The block alignment for the memory block to be allocated.
+ *
+ * Request Message.
+ * This command is used for getting the multiple physically contiguous memory
+ * blocks from the server memory subsystem.
+ */
+struct mem_alloc_generic_req_msg_v01 {
+ u32 num_bytes;
+ u32 client_id;
+ u32 proc_id;
+ u32 sequence_id;
+ u8 alloc_contiguous_valid;
+ u8 alloc_contiguous;
+ u8 block_alignment_valid;
+ u32 block_alignment;
+};
+
+/**
+ * struct mem_alloc_generic_resp_msg_v01 - Response for memory allocation request.
+ * @resp: Result Code. The result of the requested memory operation.
+ * @sequence_id_valid: Must be set to true if sequence_id is being passed.
+ * @sequence_id: Sequence ID. This property is marked as mandatory.
+ * @dhms_mem_alloc_addr_info_valid: Must be set to true if dhms_mem_alloc_addr_info is being passed.
+ * @dhms_mem_alloc_addr_info_len: Number of entries in dhms_mem_alloc_addr_info.
+ * @dhms_mem_alloc_addr_info: Memory Block Handle. The physical address of the
+ * memory allocated on the HLOS.
+ *
+ * Response Message.
+ * This command is used for getting the multiple physically contiguous memory
+ * blocks from the server memory subsystem.
+ */
+struct mem_alloc_generic_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+ u8 sequence_id_valid;
+ u32 sequence_id;
+ u8 dhms_mem_alloc_addr_info_valid;
+ u8 dhms_mem_alloc_addr_info_len;
+ struct dhms_mem_alloc_addr_info_type_v01 dhms_mem_alloc_addr_info[MAX_ARR_CNT_V01];
+};
+
+/**
+ * struct mem_free_generic_req_msg_v01 - Memory free request.
+ * @dhms_mem_alloc_addr_info_len: Must be set to the number of elements in the array.
+ * @dhms_mem_alloc_addr_info: Physical address and size of the memory allocated
+ * on the HLOS to be freed.
+ * @client_id_valid: Must be set to true if client_id is being passed.
+ * @client_id: Client ID.
+ * @proc_id_valid: Must be set to true if proc_id is being passed.
+ * @proc_id: Peripheral ID.
+ *
+ * Request Message.
+ * This command is used for releasing the multiple physically contiguous memory
+ * blocks to the server memory subsystem.
+ */
+struct mem_free_generic_req_msg_v01 {
+ u8 dhms_mem_alloc_addr_info_len;
+ struct dhms_mem_alloc_addr_info_type_v01 dhms_mem_alloc_addr_info[MAX_ARR_CNT_V01];
+ u8 client_id_valid;
+ u32 client_id;
+ u8 proc_id_valid;
+ u32 proc_id;
+};
+
+/**
+ * struct mem_free_generic_resp_msg_v01 - Response for memory free request.
+ * @resp: Result of the requested memory operation.
+ *
+ * Response Message.
+ * This command is used for releasing the multiple physically contiguous memory
+ * blocks to the server memory subsystem.
+ */
+struct mem_free_generic_resp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+};
+
+/**
+ * struct mem_query_size_req_msg_v01 - Size query request.
+ * @client_id: Client ID.
+ * @proc_id_valid: proc_id_valid must be set to true if proc_id is being passed.
+ * @proc_id: Peripheral ID.
+ *
+ * Request Message.
+ */
+struct mem_query_size_req_msg_v01 {
+ u32 client_id;
+ u8 proc_id_valid;
+ u32 proc_id;
+};
+
+/**
+ * struct mem_query_size_rsp_msg_v01 - Response for Size query request.
+ * @resp: Result Code.
+ * @size_valid: size_valid must be set to true if size is being passed.
+ * @size: Size.
+ *
+ * Response Message.
+ */
+struct mem_query_size_rsp_msg_v01 {
+ struct qmi_response_type_v01 resp;
+ u8 size_valid;
+ u32 size;
+};
+
+/* Message structure definitions defined in "memshare_qmi_msg.c" */
+extern struct qmi_elem_info mem_alloc_req_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_alloc_resp_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_free_req_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_free_resp_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_alloc_generic_req_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_alloc_generic_resp_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_free_generic_req_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_free_generic_resp_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_query_size_req_msg_data_v01_ei[];
+extern struct qmi_elem_info mem_query_size_resp_msg_data_v01_ei[];
+
+#endif
--
2.27.0
On Fri, Mar 19, 2021 at 10:23:20PM +0500, [email protected] wrote:
> From: Nikita Travkin <[email protected]>
>
> Add DT bindings for memshare: QMI service that allocates
> memory per remote processor request.
>
> Signed-off-by: Nikita Travkin <[email protected]>
> ---
> .../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
> include/dt-bindings/soc/qcom,memshare.h | 10 ++
> 2 files changed, 119 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> create mode 100644 include/dt-bindings/soc/qcom,memshare.h
>
> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> new file mode 100644
> index 000000000000..ebdf128b066c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> @@ -0,0 +1,109 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Qualcomm QMI Shared Memory Service
How many shared memory interfaces does Qcom have...
> +
> +description: |
> + This driver provides a QMI service that allows remote processors (like modem)
> + to request additional memory. It is used for applications like GPS in modem.
If the memory region is defined in reserved-memory, how are you
allocating additional memory?
> +
> +maintainers:
> + - Nikita Travkin <[email protected]>
> +
> +properties:
> + compatible:
> + const: qcom,memshare
> +
> + qcom,legacy-client:
> + $ref: /schemas/types.yaml#/definitions/phandle
> + description: Phandle to a memshare client node used for legacy requests.
> +
> + "#address-cells":
> + const: 1
> +
> + "#size-cells":
> + const: 0
> +
> +patternProperties:
> + "^.*@[0-9]+$":
> + type: object
> +
> + properties:
> + reg:
> + description: Proc-ID for clients in this node.
What's Proc-ID?
> +
> + qcom,qrtr-node:
> + $ref: /schemas/types.yaml#/definitions/uint32
> + description: Node from which the requests are expected.
> +
> + "#address-cells":
> + const: 1
> +
> + "#size-cells":
> + const: 0
> +
> + patternProperties:
> + "^.*@[0-9]+$":
> + type: object
> +
> + properties:
> + reg:
> + description: ID of this client.
How does one determine the ID?
> +
> + memory-region:
> + $ref: /schemas/types.yaml#/definitions/phandle
> + description: |
> + Reserved memory region that should be used for allocation.
> +
> + required:
> + - reg
> +
> + required:
> + - reg
> + - qcom,qrtr-node
> +
> +required:
> + - compatible
> +
> +additionalProperties: false
> +
> +examples:
> + - |
> + #include <dt-bindings/soc/qcom,memshare.h>
> +
> + reserved-memory {
> +
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + gps_mem: gps@93c00000 {
> + reg = <0x0 0x93c00000 0x0 0x200000>;
> + no-map;
We support 'compatible' in reserved-memory nodes, can you simplify the
binding and put everything in here?
> + };
> + };
> +
> + memshare {
> + compatible = "qcom,memshare";
> + qcom,legacy-client = <&memshare_gps>;
> +
> + #address-cells = <1>;
> + #size-cells = <0>;
> +
> + mpss@0 {
> + reg = <MEMSHARE_PROC_MPSS_V01>;
> + qcom,qrtr-node = <0>;
> +
> + #address-cells = <1>;
> + #size-cells = <0>;
> +
> + memshare_gps: gps@0 {
> + reg = <0>;
> + memory-region = <&gps_mem>;
> + };
> + };
> + };
> +
> +...
> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
> new file mode 100644
> index 000000000000..4cef1ef75d09
> --- /dev/null
> +++ b/include/dt-bindings/soc/qcom,memshare.h
> @@ -0,0 +1,10 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef __DT_QCOM_MEMSHARE_H__
> +#define __DT_QCOM_MEMSHARE_H__
> +
> +#define MEMSHARE_PROC_MPSS_V01 0
> +#define MEMSHARE_PROC_ADSP_V01 1
> +#define MEMSHARE_PROC_WCNSS_V01 2
> +
> +#endif /* __DT_QCOM_MEMSHARE_H__ */
> --
> 2.27.0
>
Hi, sorry for a late reply but I couldn't answer earlier.
30.03.2021 19:40, Rob Herring wrote:
> On Fri, Mar 19, 2021 at 10:23:20PM +0500, [email protected] wrote:
>> From: Nikita Travkin <[email protected]>
>>
>> Add DT bindings for memshare: QMI service that allocates
>> memory per remote processor request.
>>
>> Signed-off-by: Nikita Travkin <[email protected]>
>> ---
>> .../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
>> include/dt-bindings/soc/qcom,memshare.h | 10 ++
>> 2 files changed, 119 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>> create mode 100644 include/dt-bindings/soc/qcom,memshare.h
>>
>> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>> new file mode 100644
>> index 000000000000..ebdf128b066c
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>> @@ -0,0 +1,109 @@
>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
>> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
>> +
>> +title: Qualcomm QMI Shared Memory Service
> How many shared memory interfaces does Qcom have...
>
>> +
>> +description: |
>> + This driver provides a QMI service that allows remote processors (like modem)
>> + to request additional memory. It is used for applications like GPS in modem.
> If the memory region is defined in reserved-memory, how are you
> allocating additional memory?
Initially remoteproc is loaded into its own reserved-memory region,
but qcom decided that they sometimes need more memory than that.
The memshare driver in the msm8916 downstream tree seems to blindly
allocate a DMA region for every request it gets. Additionally, for
the clients described in the DT, it does the DMA allocation at boot
time and never frees the region. They call it "guaranteed" allocation.
On msm8916 only one "guaranteed" client seems to be used, so I decided
to implement it with a reserved-memory node. On newer platforms they
seem to have more clients, but I think the driver can be easily
extended to support dynamic allocation if someone really needs it.
I tried to explain that in the cover letter but I think I made some
mistake, as I don't see it in Patchwork.
>> +
>> +maintainers:
>> + - Nikita Travkin <[email protected]>
>> +
>> +properties:
>> + compatible:
>> + const: qcom,memshare
>> +
>> + qcom,legacy-client:
>> + $ref: /schemas/types.yaml#/definitions/phandle
>> + description: Phandle to a memshare client node used for legacy requests.
>> +
>> + "#address-cells":
>> + const: 1
>> +
>> + "#size-cells":
>> + const: 0
>> +
>> +patternProperties:
>> + "^.*@[0-9]+$":
>> + type: object
>> +
>> + properties:
>> + reg:
>> + description: Proc-ID for clients in this node.
> What's Proc-ID?
The requests from the remote nodes contain client-id and proc-id
that are supposed to differentiate the clients. It's possible to
find the values in the downstream DT or by observing what messages
are received by the memshare service (I left dev_dbg logging in
the driver for that reason).
I think I should reword it to make this more apparent, maybe
"Proc-ID that clients in this node send."?
>
>> +
>> + qcom,qrtr-node:
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + description: Node from which the requests are expected.
>> +
>> + "#address-cells":
>> + const: 1
>> +
>> + "#size-cells":
>> + const: 0
>> +
>> + patternProperties:
>> + "^.*@[0-9]+$":
>> + type: object
>> +
>> + properties:
>> + reg:
>> + description: ID of this client.
> How does one determine the ID?
As with proc-id, maybe reword to "ID that this client sends."?
I will change those in v2. I still expect comments on the driver
itself, so I'll wait for those before submitting a version with
just a couple of lines changed.
>
>> +
>> + memory-region:
>> + $ref: /schemas/types.yaml#/definitions/phandle
>> + description: |
>> + Reserved memory region that should be used for allocation.
>> +
>> + required:
>> + - reg
>> +
>> + required:
>> + - reg
>> + - qcom,qrtr-node
>> +
>> +required:
>> + - compatible
>> +
>> +additionalProperties: false
>> +
>> +examples:
>> + - |
>> + #include <dt-bindings/soc/qcom,memshare.h>
>> +
>> + reserved-memory {
>> +
>> + #address-cells = <2>;
>> + #size-cells = <2>;
>> +
>> + gps_mem: gps@93c00000 {
>> + reg = <0x0 0x93c00000 0x0 0x200000>;
>> + no-map;
> We support 'compatible' in reserved-memory nodes, can you simplify the
> binding and put everything in here?
If I understand this correctly, each reserved-memory node will
then load a new instance of memshare. Since the driver registers a
QMI service that handles multiple clients, there should be only one
instance. Additionally, as I mentioned earlier, some clients may not
need reserved-memory at all.
>> + };
>> + };
>> +
>> + memshare {
>> + compatible = "qcom,memshare";
>> + qcom,legacy-client = <&memshare_gps>;
>> +
>> + #address-cells = <1>;
>> + #size-cells = <0>;
>> +
>> + mpss@0 {
>> + reg = <MEMSHARE_PROC_MPSS_V01>;
>> + qcom,qrtr-node = <0>;
>> +
>> + #address-cells = <1>;
>> + #size-cells = <0>;
>> +
>> + memshare_gps: gps@0 {
>> + reg = <0>;
>> + memory-region = <&gps_mem>;
>> + };
>> + };
>> + };
>> +
>> +...
>> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
>> new file mode 100644
>> index 000000000000..4cef1ef75d09
>> --- /dev/null
>> +++ b/include/dt-bindings/soc/qcom,memshare.h
>> @@ -0,0 +1,10 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#ifndef __DT_QCOM_MEMSHARE_H__
>> +#define __DT_QCOM_MEMSHARE_H__
>> +
>> +#define MEMSHARE_PROC_MPSS_V01 0
>> +#define MEMSHARE_PROC_ADSP_V01 1
>> +#define MEMSHARE_PROC_WCNSS_V01 2
>> +
>> +#endif /* __DT_QCOM_MEMSHARE_H__ */
>> --
>> 2.27.0
>>
On Sat 10 Apr 03:05 CDT 2021, Nikita Travkin wrote:
> Hi, sorry for a late reply but I couldn't answer earlier.
>
> 30.03.2021 19:40, Rob Herring wrote:
> > On Fri, Mar 19, 2021 at 10:23:20PM +0500, [email protected] wrote:
> >> From: Nikita Travkin <[email protected]>
> >>
> >> Add DT bindings for memshare: QMI service that allocates
> >> memory per remote processor request.
> >>
> >> Signed-off-by: Nikita Travkin <[email protected]>
> >> ---
> >> .../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
> >> include/dt-bindings/soc/qcom,memshare.h | 10 ++
> >> 2 files changed, 119 insertions(+)
> >> create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >> create mode 100644 include/dt-bindings/soc/qcom,memshare.h
> >>
> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >> new file mode 100644
> >> index 000000000000..ebdf128b066c
> >> --- /dev/null
> >> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >> @@ -0,0 +1,109 @@
> >> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> >> +%YAML 1.2
> >> +---
> >> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
> >> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> >> +
> >> +title: Qualcomm QMI Shared Memory Service
> > How many shared memory interfaces does Qcom have...
> >
> >> +
> >> +description: |
> >> + This driver provides a QMI service that allows remote processors (like modem)
> >> + to request additional memory. It is used for applications like GPS in modem.
> > If the memory region is defined in reserved-memory, how are you
> > allocating additional memory?
>
> Initially remoteproc is loaded into its own reserved-memory region,
> but qcom decided that they sometimes need more memory than that.
> The memshare driver in the msm8916 downstream tree seems to blindly
> allocate a DMA region for every request it gets. Additionally, for
> the clients described in the DT, it does the DMA allocation at boot
> time and never frees the region. They call it "guaranteed" allocation.
>
> On msm8916 only one "guaranteed" client seems to be used, so I decided
> to implement it with a reserved-memory node. On newer platforms they
> seem to have more clients, but I think the driver can be easily
> extended to support dynamic allocation if someone really needs it.
>
Is the "guaranteed" memory required to come from the reserved-memory
part of memory, or could it simply be allocated on demand as well (or
preallocated, but at a dynamic address)?
If these allocations always came from a reserved-memory region, then
adding a "qcom,memshare" compatible to the reserved-memory node itself
seems like a reasonable approach. But if dma_alloc is sufficient, and
there's cases where there's no "guaranteed" region, perhaps we should
just describe this as part of the remoteproc node (i.e. essentially
flipping the node/subnode in your current binding).
E.g. can we get away with simply adding an optional qcom,memshare-node
to the remoteproc binding and when that's present we make the Qualcomm
remoteproc drivers spawn the memshare handler and listen for requests
from that node?
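A rough sketch of what that flipped shape might look like (the node and
property names below are hypothetical, purely for illustration; the
posted binding defines none of them):

```dts
remoteproc-mpss {
	/* ...existing remoteproc properties... */

	memshare {
		qcom,qrtr-node = <0>;

		memshare_gps: gps {
			qcom,client-id = <0>;
			memory-region = <&gps_mem>;
		};
	};
};
```

Here the proc-id would be implied by which remoteproc the subnode lives
in, and memory-region would stay optional for clients that can be served
by a dynamic allocation.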
> I tried to explain that in the cover letter but I think I made some
> mistake as I don't see it in the Patchwork.
>
> >> +
> >> +maintainers:
> >> + - Nikita Travkin <[email protected]>
> >> +
> >> +properties:
> >> + compatible:
> >> + const: qcom,memshare
> >> +
> >> + qcom,legacy-client:
> >> + $ref: /schemas/types.yaml#/definitions/phandle
> >> + description: Phandle to a memshare client node used for legacy requests.
> >> +
> >> + "#address-cells":
> >> + const: 1
> >> +
> >> + "#size-cells":
> >> + const: 0
> >> +
> >> +patternProperties:
> >> + "^.*@[0-9]+$":
> >> + type: object
> >> +
> >> + properties:
> >> + reg:
> >> + description: Proc-ID for clients in this node.
> > What's Proc-ID?
>
> The requests from the remote nodes contain client-id and proc-id
> that are supposed to differentiate the clients. It's possible to
> find the values in downstream DT or by observing what messages
> are received by the memshare service (I left dev_dbg logging in
> the driver for that reason)
>
> I think I should reword it to make this more apparent, maybe
> "Proc-ID that clients in this node send."?
>
If this is a constant for each remote and we make this a child thing of
remoteproc perhaps encode the number in the remoteproc nodes?
(We still need something in DT to state that we want a memshare for
a given platform/remoteproc)
> >
> >> +
> >> + qcom,qrtr-node:
> >> + $ref: /schemas/types.yaml#/definitions/uint32
> >> + description: Node from which the requests are expected.
> >> +
> >> + "#address-cells":
> >> + const: 1
> >> +
> >> + "#size-cells":
> >> + const: 0
> >> +
> >> + patternProperties:
> >> + "^.*@[0-9]+$":
> >> + type: object
> >> +
> >> + properties:
> >> + reg:
> >> + description: ID of this client.
> > How does one determine the ID?
>
> As with proc-id, maybe reword to "ID that this client sends."?
>
> I will change those in v2, I still expect comments on the driver
> itself, so I'll wait for that before submitting it with just a
> couple lines changed.
>
> >
> >> +
> >> + memory-region:
> >> + $ref: /schemas/types.yaml#/definitions/phandle
> >> + description: |
> >> + Reserved memory region that should be used for allocation.
> >> +
> >> + required:
> >> + - reg
> >> +
> >> + required:
> >> + - reg
> >> + - qcom,qrtr-node
> >> +
> >> +required:
> >> + - compatible
> >> +
> >> +additionalProperties: false
> >> +
> >> +examples:
> >> + - |
> >> + #include <dt-bindings/soc/qcom,memshare.h>
> >> +
> >> + reserved-memory {
> >> +
> >> + #address-cells = <2>;
> >> + #size-cells = <2>;
> >> +
> >> + gps_mem: gps@93c00000 {
> >> + reg = <0x0 0x93c00000 0x0 0x200000>;
> >> + no-map;
> > We support 'compatible' in reserved-memory nodes, can you simplify the
> > binding and put everything in here?
>
> If I understand this correctly, each reserved-memory node will
> then load a new instance of memshare. Since the driver registers a
> QMI service that handles multiple clients, there should be only one
> instance.
This you could work around in the driver implementation, to refcount a
single implementation shared among all the instances.
> Additionally, as I mentioned earlier, some clients may not
> need reserved-memory at all.
>
This on the other hand, makes me feel like we shouldn't go that route.
Regards,
Bjorn
> >> + };
> >> + };
> >> +
> >> + memshare {
> >> + compatible = "qcom,memshare";
> >> + qcom,legacy-client = <&memshare_gps>;
> >> +
> >> + #address-cells = <1>;
> >> + #size-cells = <0>;
> >> +
> >> + mpss@0 {
> >> + reg = <MEMSHARE_PROC_MPSS_V01>;
> >> + qcom,qrtr-node = <0>;
> >> +
> >> + #address-cells = <1>;
> >> + #size-cells = <0>;
> >> +
> >> + memshare_gps: gps@0 {
> >> + reg = <0>;
> >> + memory-region = <&gps_mem>;
> >> + };
> >> + };
> >> + };
> >> +
> >> +...
> >> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
> >> new file mode 100644
> >> index 000000000000..4cef1ef75d09
> >> --- /dev/null
> >> +++ b/include/dt-bindings/soc/qcom,memshare.h
> >> @@ -0,0 +1,10 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +
> >> +#ifndef __DT_QCOM_MEMSHARE_H__
> >> +#define __DT_QCOM_MEMSHARE_H__
> >> +
> >> +#define MEMSHARE_PROC_MPSS_V01 0
> >> +#define MEMSHARE_PROC_ADSP_V01 1
> >> +#define MEMSHARE_PROC_WCNSS_V01 2
> >> +
> >> +#endif /* __DT_QCOM_MEMSHARE_H__ */
> >> --
> >> 2.27.0
> >>
>
14.04.2021 08:15, Bjorn Andersson wrote:
> On Sat 10 Apr 03:05 CDT 2021, Nikita Travkin wrote:
>
>> Hi, sorry for a late reply but I couldn't answer earlier.
>>
>> 30.03.2021 19:40, Rob Herring wrote:
>>> On Fri, Mar 19, 2021 at 10:23:20PM +0500, [email protected] wrote:
>>>> From: Nikita Travkin <[email protected]>
>>>>
>>>> Add DT bindings for memshare: QMI service that allocates
>>>> memory per remote processor request.
>>>>
>>>> Signed-off-by: Nikita Travkin <[email protected]>
>>>> ---
>>>> .../bindings/soc/qcom/qcom,memshare.yaml | 109 ++++++++++++++++++
>>>> include/dt-bindings/soc/qcom,memshare.h | 10 ++
>>>> 2 files changed, 119 insertions(+)
>>>> create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> create mode 100644 include/dt-bindings/soc/qcom,memshare.h
>>>>
>>>> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> new file mode 100644
>>>> index 000000000000..ebdf128b066c
>>>> --- /dev/null
>>>> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
>>>> @@ -0,0 +1,109 @@
>>>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>>>> +%YAML 1.2
>>>> +---
>>>> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
>>>> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
>>>> +
>>>> +title: Qualcomm QMI Shared Memory Service
>>> How many shared memory interfaces does Qcom have...
>>>
>>>> +
>>>> +description: |
>>>> + This driver provides a QMI service that allows remote processors (like modem)
>>>> + to request additional memory. It is used for applications like GPS in modem.
>>> If the memory region is defined in reserved-memory, how are you
>>> allocating additional memory?
>> Initially remoteproc is loaded into its own reserved-memory region,
>> but qcom decided that they sometimes need more memory than that.
>> The memshare driver in the msm8916 downstream tree seems to blindly
>> allocate a DMA region for every request it gets. Additionally, for
>> the clients described in the DT, it does the DMA allocation at boot
>> time and never frees the region. They call it "guaranteed" allocation.
>>
>> On msm8916 only one "guaranteed" client seems to be used, so I decided
>> to implement it with a reserved-memory node. On newer platforms they
>> seem to have more clients, but I think the driver can be easily
>> extended to support dynamic allocation if someone really needs it.
>>
> Is the "guaranteed" memory required to come from the reserved-memory
> part of memory, or could it simply be allocated on demand as well (or
> preallocated, but at a dynamic address)?
This is rather complicated.
For most (msm8916) devices it works with a region from dma_alloc but
there are at least three devices where it causes problems.
If the region was allocated by dma_alloc (somewhere near 0xfe100000
if I remember correctly) then
- Wileyfox Swift (Longcheer L8150): Location service "crashes"
(it repeats the request every 10-ish seconds while a location session
is open and gives no location data)
- Samsung A3, A5: The entire modem crashes after it gets a response
with such an address.
The downstream kernel allocates the region at a slightly different
address, which works fine.
It's probably possible to change the allocation address with a DMA
mask, but I have no idea why the crash happens on those devices, or
whether it's even possible to debug this or find all the "bad" regions.
Because of that I prefer using a known-good address, at least for the
location client that keeps it forever.
> If these allocations always came from a reserved-memory region, then
> adding a "qcom,memshare" compatible to the reserved-memory node itself
> seems like a reasonable approach. But if dma_alloc is sufficient, and
> there's cases where there's no "guaranteed" region, perhaps we should
> just describe this as part of the remoteproc node (i.e. essentially
> flipping the node/subnode in your current binding).
>
>
> E.g. can we get away with simply adding an optional qcom,memshare-node
> to the remoteproc binding and when that's present we make the Qualcomm
> remoteproc drivers spawn the memshare handler and listen for requests
> from that node?
I'm having a hard time imagining how this would be implemented...
Assuming that I need to keep reserved-memory for some clients
while other clients will need proper dynamic allocation (maybe on a
newer platform), there will be an optional subnode in each remoteproc
node that will be detected by the remoteproc driver, which will then
start memshare. It will have all the IDs and a size or a phandle to
a reserved-memory node. Then there may be multiple of those in one
remoteproc. Or will one memshare subnode contain multiple client
nodes? If I understand correctly, there are multiple different
remoteproc drivers, so each will have to be modified. They will need
to spawn only one memshare instance and pass the clients to it.
Or maybe the subnode can contain a compatible, and the code to keep
only one instance will be in the memshare driver itself...
To be honest, I'm getting very confused even trying to lay this
out in my mind. I think it just unnecessarily complicates both the
binding and the driver to "hide" its nodes throughout the device tree.
Maybe I just didn't understand the proposal...
>> I tried to explain that in the cover letter but I think I made some
>> mistake as I don't see it in the Patchwork.
>>
>>>> +
>>>> +maintainers:
>>>> + - Nikita Travkin <[email protected]>
>>>> +
>>>> +properties:
>>>> + compatible:
>>>> + const: qcom,memshare
>>>> +
>>>> + qcom,legacy-client:
>>>> + $ref: /schemas/types.yaml#/definitions/phandle
>>>> + description: Phandle to a memshare client node used for legacy requests.
>>>> +
>>>> + "#address-cells":
>>>> + const: 1
>>>> +
>>>> + "#size-cells":
>>>> + const: 0
>>>> +
>>>> +patternProperties:
>>>> + "^.*@[0-9]+$":
>>>> + type: object
>>>> +
>>>> + properties:
>>>> + reg:
>>>> + description: Proc-ID for clients in this node.
>>> What's Proc-ID?
>> The requests from the remote nodes contain client-id and proc-id
>> that are supposed to differentiate the clients. It's possible to
>> find the values in downstream DT or by observing what messages
>> are received by the memshare service (I left dev_dbg logging in
>> the driver for that reason)
>>
>> I think I should reword it to make this more apparent, maybe
>> "Proc-ID that clients in this node send."?
>>
> If this is a constant for each remote and we make this a child thing of
> remoteproc perhaps encode the number in the remoteproc nodes?
>
> (We still need something in DT to state that we want a memshare for
> a given platform/remoteproc)
>
>>>> +
>>>> + qcom,qrtr-node:
>>>> + $ref: /schemas/types.yaml#/definitions/uint32
>>>> + description: Node from which the requests are expected.
>>>> +
>>>> + "#address-cells":
>>>> + const: 1
>>>> +
>>>> + "#size-cells":
>>>> + const: 0
>>>> +
>>>> + patternProperties:
>>>> + "^.*@[0-9]+$":
>>>> + type: object
>>>> +
>>>> + properties:
>>>> + reg:
>>>> + description: ID of this client.
>>> How does one determine the ID?
>> As with proc-id, maybe reword to "ID that this client sends."?
>>
>> I will change those in v2. I still expect comments on the driver
>> itself, so I'll wait for those before submitting a version with just
>> a couple of lines changed.
>>
>>>> +
>>>> + memory-region:
>>>> + $ref: /schemas/types.yaml#/definitions/phandle
>>>> + description: |
>>>> + Reserved memory region that should be used for allocation.
>>>> +
>>>> + required:
>>>> + - reg
>>>> +
>>>> + required:
>>>> + - reg
>>>> + - qcom,qrtr-node
>>>> +
>>>> +required:
>>>> + - compatible
>>>> +
>>>> +additionalProperties: false
>>>> +
>>>> +examples:
>>>> + - |
>>>> + #include <dt-bindings/soc/qcom,memshare.h>
>>>> +
>>>> + reserved-memory {
>>>> +
>>>> + #address-cells = <2>;
>>>> + #size-cells = <2>;
>>>> +
>>>> + gps_mem: gps@93c00000 {
>>>> + reg = <0x0 0x93c00000 0x0 0x200000>;
>>>> + no-map;
>>> We support 'compatible' in reserved-memory nodes, can you simplify the
>>> binding and put everything in here?
>> If I understand this correctly, each reserved-memory node will
>> then load a new instance of memshare. Since the driver registers a
>> QMI service that handles multiple clients, there should be only one
>> instance.
> This you could work around in the driver implementation, to refcount a
> single implementation shared among all the instances.
>
>> Additionally, as I mentioned earlier, some clients may not
>> need reserved-memory at all
>>
> This on the other hand, makes me feel like we shouldn't go that route.
>
> Regards,
> Bjorn
>
>>>> + };
>>>> + };
>>>> +
>>>> + memshare {
>>>> + compatible = "qcom,memshare";
>>>> + qcom,legacy-client = <&memshare_gps>;
>>>> +
>>>> + #address-cells = <1>;
>>>> + #size-cells = <0>;
>>>> +
>>>> + mpss@0 {
>>>> + reg = <MEMSHARE_PROC_MPSS_V01>;
>>>> + qcom,qrtr-node = <0>;
>>>> +
>>>> + #address-cells = <1>;
>>>> + #size-cells = <0>;
>>>> +
>>>> + memshare_gps: gps@0 {
>>>> + reg = <0>;
>>>> + memory-region = <&gps_mem>;
>>>> + };
>>>> + };
>>>> + };
>>>> +
>>>> +...
>>>> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
>>>> new file mode 100644
>>>> index 000000000000..4cef1ef75d09
>>>> --- /dev/null
>>>> +++ b/include/dt-bindings/soc/qcom,memshare.h
>>>> @@ -0,0 +1,10 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +
>>>> +#ifndef __DT_QCOM_MEMSHARE_H__
>>>> +#define __DT_QCOM_MEMSHARE_H__
>>>> +
>>>> +#define MEMSHARE_PROC_MPSS_V01 0
>>>> +#define MEMSHARE_PROC_ADSP_V01 1
>>>> +#define MEMSHARE_PROC_WCNSS_V01 2
>>>> +
>>>> +#endif /* __DT_QCOM_MEMSHARE_H__ */
>>>> --
>>>> 2.27.0
>>>>