2021-05-20 14:09:25

by Kumar, M Chetan

Subject: [PATCH V3 00/16] net: iosm: PCIe Driver for Intel M.2 Modem

The IOSM (IPC over Shared Memory) driver is a PCIe host driver implemented
for Linux and Chrome platforms for data exchange over the PCIe interface
between the host platform and an Intel M.2 modem. The driver exposes an
interface conforming to the MBIM protocol. Any front-end application
(e.g. ModemManager) can manage the MBIM interface to enable data
communication towards the WWAN.

The Intel M.2 modem uses two BAR regions. The first region is dedicated to
the doorbell register for IRQs, and the second region is used as a
scratchpad area for bookkeeping modem execution-stage details along with
the host system shared memory region context details. The upper edge of
the driver exposes control and data channels for user-space application
interaction. At the lower edge, these data and control channels are
associated with pipes. Pipes are the lowest-level interfaces used over
PCIe as logical channels for message exchange. A single channel maps to a
UL and a DL pipe, which are initialized on device open.
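
The channel-to-pipe pairing can be pictured with a minimal C sketch
(illustrative only; the type and field names below are simplified
stand-ins, not the driver's actual definitions):

struct pipe {
	int nr;			/* lowest-level logical channel over PCIe */
};

struct channel {
	struct pipe ul;		/* uplink pipe, bound on device open */
	struct pipe dl;		/* downlink pipe, bound on device open */
};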

On the UL path, the driver copies application-sent data into SKBs,
associates each with a transfer descriptor, and puts it onto the ring
buffer for DMA transfer. Once this information has been updated in the
shared memory region, the host rings a doorbell so the modem performs the
DMA, and the modem uses an MSI to signal back to the host. For receiving
data on the DL path, SKBs are pre-allocated during pipe open and their
transfer descriptors are given to the modem for DMA transfer.
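
The UL handshake can be modeled with a small, self-contained toy program
(a sketch only: the names and the 16-entry ring below are hypothetical,
and the real driver hands DMA-mapped SKBs to the modem rather than plain
addresses):

#include <stdint.h>
#include <stdio.h>

struct td { uint64_t addr; uint32_t len; };	/* transfer descriptor */
struct ring { struct td td[16]; unsigned int head; };

static void doorbell(const char *msg) { printf("doorbell: %s\n", msg); }

/* Queue one UL buffer: fill a TD in the shared ring, then ring the
 * doorbell so the modem starts the DMA; the modem raises an MSI once
 * it has consumed the descriptor.
 */
static void ul_send(struct ring *r, uint64_t dma_addr, uint32_t len)
{
	struct td *td = &r->td[r->head++ % 16];

	td->addr = dma_addr;
	td->len = len;
	doorbell("head pointer update");
}

int main(void)
{
	struct ring r = { 0 };

	ul_send(&r, 0x1000, 1500);
	return 0;
}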

The driver exposes two types of ports: "wwanctrl", a char device node used
for MBIM control operations, and "inmX" (X = 0,1,2..7) network interfaces
for IP data communication.
1) MBIM Control Interface:
This node exposes an interface between the modem and a user-space
application, using the char device exposed by the IOSM driver, to establish
and manage MBIM data communication with PCIe-based Intel M.2 modems.
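
For example, a user-space control application would drive this node roughly
as follows (a minimal sketch, assuming the port appears as /dev/wwanctrl as
named above; the MBIM message bytes are built by the application, e.g. by
ModemManager, and are shown here as a stub):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	unsigned char mbim_open_msg[16] = { 0 };	/* stub MBIM_OPEN_MSG */
	unsigned char resp[4096];
	int fd = open("/dev/wwanctrl", O_RDWR);

	if (fd < 0)
		return 1;
	if (write(fd, mbim_open_msg, sizeof(mbim_open_msg)) < 0)
		return 1;	/* send MBIM command */
	if (read(fd, resp, sizeof(resp)) < 0)
		return 1;	/* read MBIM response */
	close(fd);
	return 0;
}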

2) MBIM Data Interface:
The IOSM driver exposes IP link interfaces "inmX" of type "iosm" for IP
traffic. The iproute2 network utility is used to create an "inmX" network
interface and to associate it with an MBIM IP session. The driver supports
up to 8 IP sessions for simultaneous IP communication.
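
For instance, an interface for MBIM IP session 0 would be created with
something along the lines of "ip link add inm0 type iosm if_id 0"
(illustrative only; the exact invocation is documented in iosm.rst).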

Changes since v2:
* WWAN port adaptation.
* Removed inline keyword in .c files.
* Aligned the ipc_ prefix for function names to be consistent across files.

* PATCH1:
* Removed the Module Author define.
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH2:
* Removed inline keyword in .c file.
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH4:
* WWAN port adaptation.
* Removed inline keyword in .c file.
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH5:
* WWAN port adaptation.
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH6:
* WWAN port adaptation.
* PATCH7:
* Renamed file to iosm_ipc_port.c.
* WWAN port adaptation for AT & MBIM protocol communication.
* PATCH9:
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH10:
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH11:
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH13:
* Endianness type correction for the transfer descriptor structure.
* PATCH15:
* Cleaned up the DSS channel implementation.
* Aligned the ipc_ prefix for function names to be consistent across the file.
* PATCH16:
* Cleaned up the wwan Kconfig & Makefile (changes are available as part of
the wwan subsystem).
* Removed the NET dependency keyword from the iosm Kconfig.
* Removed the IOCTL section from the documentation.

Changes since v1:
* Removed Ethernet header & VLAN tag handling from the wwan net driver.
* Implemented the rtnl_link interface for IP traffic handling.
* Stripped out modem FW flashing & CD collection code changes.
* Moved driver documentation to an RST file.
* Changed the copyright year.

* PATCH1:
* Implement module_init() & exit() callbacks for rtnl_link.
* Documentation correction for function signature.
* Fix coverity warnings.
* PATCH2:
* Streamline multiple returns using goto.
* PATCH3:
* Removed space around the : for the bitfields.
* Return proper error code instead of returning -1.
* PATCH4:
* Cleaned up vlan tag ids & removed FW flashing logic.
* Function return type correction.
* Return proper error code instead of returning -1.
* PATCH5:
* Changed vlan_id to ip link if_id & corrected documentation.
* Define new enums for IP & DSS session mapping.
* Return proper error code instead of returning -1.
* Cleaned up vlan tag id & removed FW flashing logic.
* PATCH6:
* Return proper error code instead of returning -1.
* Define IPC channels in serial order.
* PATCH7:
* Renamed iosm_sio struct to iosm_cdev.
* Added memory barriers around atomic operations.
* PATCH8:
* Moved task queue struct to header file.
* Streamline multiple returns using goto.
* PATCH9:
* Endianness type correction for Host-Device protocol structure.
* Removed space around the : for the bitfields.
* Change session from dynamic to static.
* Streamline multiple returns using goto.
* PATCH10:
* Endianness type correction for Host-Device protocol structure.
* Function signature documentation correction.
* Streamline multiple returns using goto.
* Removed vlan tag id & replaced it with ip link interface id.
* PATCH11:
* Removed space around the : for the bitfields.
* Moved pm module under static allocation.
* Added memory barriers around atomic operations.
* PATCH12:
* Endianness type correction for Host-Device protocol structure.
* Function signature documentation correction.
* Streamline multiple returns using goto.
* PATCH13:
* Endianness type correction for Host-Device protocol structure.
* Function signature documentation correction.
* Streamline multiple returns using goto.
* PATCH14:
* Removed unrelated header file inclusion.
* PATCH15:
* Removed Ethernet header & VLAN tag handling from wwan net driver.
* Implemented the rtnl_link interface for IP traffic handling.
* PATCH16:
* Moved driver documentation to an RST file.
* Modified if_link.h file to support link type iosm.

--
M Chetan Kumar (16):
net: iosm: entry point
net: iosm: irq handling
net: iosm: mmio scratchpad
net: iosm: shared memory IPC interface
net: iosm: shared memory I/O operations
net: iosm: channel configuration
net: iosm: wwan port control device
net: iosm: bottom half
net: iosm: multiplex IP sessions
net: iosm: encode or decode datagram
net: iosm: power management
net: iosm: shared memory protocol
net: iosm: protocol operations
net: iosm: uevent support
net: iosm: net driver
net: iosm: infrastructure

.../networking/device_drivers/index.rst | 1 +
.../networking/device_drivers/wwan/index.rst | 18 +
.../networking/device_drivers/wwan/iosm.rst | 95 ++
MAINTAINERS | 7 +
drivers/net/wwan/Kconfig | 12 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/iosm/Makefile | 26 +
drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c | 88 ++
drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h | 59 +
drivers/net/wwan/iosm/iosm_ipc_imem.c | 1372 +++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_imem.h | 579 +++++++
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c | 346 +++++
drivers/net/wwan/iosm/iosm_ipc_imem_ops.h | 99 ++
drivers/net/wwan/iosm/iosm_ipc_irq.c | 90 ++
drivers/net/wwan/iosm/iosm_ipc_irq.h | 33 +
drivers/net/wwan/iosm/iosm_ipc_mmio.c | 223 +++
drivers/net/wwan/iosm/iosm_ipc_mmio.h | 193 +++
drivers/net/wwan/iosm/iosm_ipc_mux.c | 455 ++++++
drivers/net/wwan/iosm/iosm_ipc_mux.h | 343 +++++
drivers/net/wwan/iosm/iosm_ipc_mux_codec.c | 910 +++++++++++
drivers/net/wwan/iosm/iosm_ipc_mux_codec.h | 193 +++
drivers/net/wwan/iosm/iosm_ipc_pcie.c | 586 +++++++
drivers/net/wwan/iosm/iosm_ipc_pcie.h | 211 +++
drivers/net/wwan/iosm/iosm_ipc_pm.c | 333 ++++
drivers/net/wwan/iosm/iosm_ipc_pm.h | 207 +++
drivers/net/wwan/iosm/iosm_ipc_port.c | 85 +
drivers/net/wwan/iosm/iosm_ipc_port.h | 50 +
drivers/net/wwan/iosm/iosm_ipc_protocol.c | 283 ++++
drivers/net/wwan/iosm/iosm_ipc_protocol.h | 237 +++
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c | 552 +++++++
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h | 444 ++++++
drivers/net/wwan/iosm/iosm_ipc_task_queue.c | 202 +++
drivers/net/wwan/iosm/iosm_ipc_task_queue.h | 97 ++
drivers/net/wwan/iosm/iosm_ipc_uevent.c | 44 +
drivers/net/wwan/iosm/iosm_ipc_uevent.h | 41 +
drivers/net/wwan/iosm/iosm_ipc_wwan.c | 439 ++++++
drivers/net/wwan/iosm/iosm_ipc_wwan.h | 55 +
include/uapi/linux/if_link.h | 10 +
tools/include/uapi/linux/if_link.h | 10 +
39 files changed, 9029 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/index.rst
create mode 100755 Documentation/networking/device_drivers/wwan/iosm.rst
create mode 100644 drivers/net/wwan/iosm/Makefile
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_port.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_port.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

--
2.25.1


2021-05-20 14:09:38

by Kumar, M Chetan

Subject: [PATCH V3 10/16] net: iosm: encode or decode datagram

1) Encode UL packets into datagrams.
2) Decode DL datagrams and route them to the network layer.
3) Support credit-based flow control.
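
As a worked example of the credit scheme (illustrative numbers): if the
modem has granted a session 3000 flow-credit bytes and the head of its UL
queue holds 1500-byte packets, the encoder accepts two packets, decrementing
the session credits by each skb's length, and then stops transmitting for
that session until a new Flow Credit Table grants more bytes.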

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: Aligned the ipc_ prefix for function names to be consistent across the file.
v2:
* Endianness type correction for the Host-Device protocol structure.
* Function signature documentation correction.
* Streamlined multiple returns using goto.
* Removed vlan tag id & replaced it with ip link interface id.
---
drivers/net/wwan/iosm/iosm_ipc_mux_codec.c | 910 +++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_mux_codec.h | 193 +++++
2 files changed, 1103 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
new file mode 100644
index 000000000000..820af7cecbbc
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
@@ -0,0 +1,910 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include <linux/nospec.h>
+
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_mux_codec.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Test the link power state and send a MUX command in blocking mode. */
+static int ipc_mux_tq_cmd_send(struct iosm_imem *ipc_imem, int arg, void *msg,
+ size_t size)
+{
+ struct iosm_mux *ipc_mux = ipc_imem->mux;
+ const struct mux_acb *acb = msg;
+
+ skb_queue_tail(&ipc_mux->channel->ul_list, acb->skb);
+ ipc_imem_ul_send(ipc_mux->imem);
+
+ return 0;
+}
+
+static int ipc_mux_acb_send(struct iosm_mux *ipc_mux, bool blocking)
+{
+ struct completion *completion = &ipc_mux->channel->ul_sem;
+ int ret = ipc_task_queue_send_task(ipc_mux->imem, ipc_mux_tq_cmd_send,
+ 0, &ipc_mux->acb,
+ sizeof(ipc_mux->acb), false);
+ if (ret) {
+ dev_err(ipc_mux->dev, "unable to send mux command");
+ return ret;
+ }
+
+ /* if blocking, suspend the app and wait for irq in the flash or
+ * crash phase. return -ETIMEDOUT on timeout to indicate failure.
+ */
+ if (blocking) {
+ u32 wait_time_milliseconds = IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT;
+
+ reinit_completion(completion);
+
+ if (wait_for_completion_interruptible_timeout
+ (completion, msecs_to_jiffies(wait_time_milliseconds)) ==
+ 0) {
+ dev_err(ipc_mux->dev, "ch[%d] timeout",
+ ipc_mux->channel_id);
+ ipc_uevent_send(ipc_mux->imem->dev, UEVENT_MDM_TIMEOUT);
+ return -ETIMEDOUT;
+ }
+ }
+
+ return 0;
+}
+
+/* Prepare mux Command */
+static struct mux_lite_cmdh *ipc_mux_lite_add_cmd(struct iosm_mux *ipc_mux,
+ u32 cmd, struct mux_acb *acb,
+ void *param, u32 param_size)
+{
+ struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)acb->skb->data;
+
+ cmdh->signature = cpu_to_le32(MUX_SIG_CMDH);
+ cmdh->command_type = cpu_to_le32(cmd);
+ cmdh->if_id = acb->if_id;
+
+ acb->cmd = cmd;
+
+ cmdh->cmd_len = cpu_to_le16(offsetof(struct mux_lite_cmdh, param) +
+ param_size);
+ cmdh->transaction_id = cpu_to_le32(ipc_mux->tx_transaction_id++);
+
+ if (param)
+ memcpy(&cmdh->param, param, param_size);
+
+ skb_put(acb->skb, le16_to_cpu(cmdh->cmd_len));
+
+ return cmdh;
+}
+
+static int ipc_mux_acb_alloc(struct iosm_mux *ipc_mux)
+{
+ struct mux_acb *acb = &ipc_mux->acb;
+ struct sk_buff *skb;
+ dma_addr_t mapping;
+
+ /* Allocate skb memory for the uplink buffer. */
+ skb = ipc_pcie_alloc_skb(ipc_mux->pcie, MUX_MAX_UL_ACB_BUF_SIZE,
+ GFP_ATOMIC, &mapping, DMA_TO_DEVICE, 0);
+ if (!skb)
+ return -ENOMEM;
+
+ /* Save the skb address. */
+ acb->skb = skb;
+
+ memset(skb->data, 0, MUX_MAX_UL_ACB_BUF_SIZE);
+
+ return 0;
+}
+
+int ipc_mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id,
+ u32 transaction_id, union mux_cmd_param *param,
+ size_t res_size, bool blocking, bool respond)
+{
+ struct mux_acb *acb = &ipc_mux->acb;
+ struct mux_lite_cmdh *ack_lite;
+ int ret = 0;
+
+ acb->if_id = if_id;
+ ret = ipc_mux_acb_alloc(ipc_mux);
+ if (ret)
+ return ret;
+
+ ack_lite = ipc_mux_lite_add_cmd(ipc_mux, cmd_type, acb, param,
+ res_size);
+ if (respond)
+ ack_lite->transaction_id = cpu_to_le32(transaction_id);
+
+ ret = ipc_mux_acb_send(ipc_mux, blocking);
+
+ return ret;
+}
+
+void ipc_mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on)
+{
+ /* Inform the network interface to start/stop flow ctrl */
+ ipc_wwan_tx_flowctrl(session->wwan, idx, on);
+}
+
+static int ipc_mux_dl_cmdresps_decode_process(struct iosm_mux *ipc_mux,
+ struct mux_lite_cmdh *cmdh)
+{
+ struct mux_acb *acb = &ipc_mux->acb;
+
+ switch (le32_to_cpu(cmdh->command_type)) {
+ case MUX_CMD_OPEN_SESSION_RESP:
+ case MUX_CMD_CLOSE_SESSION_RESP:
+ /* Resume the control application. */
+ acb->got_param = cmdh->param;
+ break;
+
+ case MUX_LITE_CMD_FLOW_CTL_ACK:
+ /* This command type is not expected as response for
+ * Aggregation version of the protocol. So return non-zero.
+ */
+ if (ipc_mux->protocol != MUX_LITE)
+ return -EINVAL;
+
+ dev_dbg(ipc_mux->dev, "if %u FLOW_CTL_ACK %u received",
+ cmdh->if_id, le32_to_cpu(cmdh->transaction_id));
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ acb->wanted_response = MUX_CMD_INVALID;
+ acb->got_response = le32_to_cpu(cmdh->command_type);
+ complete(&ipc_mux->channel->ul_sem);
+
+ return 0;
+}
+
+static int ipc_mux_dl_dlcmds_decode_process(struct iosm_mux *ipc_mux,
+ struct mux_lite_cmdh *cmdh)
+{
+ union mux_cmd_param *param = &cmdh->param;
+ struct mux_session *session;
+ int new_size;
+
+ dev_dbg(ipc_mux->dev, "if_id[%d]: dlcmds decode process %d",
+ cmdh->if_id, le32_to_cpu(cmdh->command_type));
+
+ switch (le32_to_cpu(cmdh->command_type)) {
+ case MUX_LITE_CMD_FLOW_CTL:
+
+ if (cmdh->if_id >= ipc_mux->nr_sessions) {
+ dev_err(ipc_mux->dev, "if_id [%d] not valid",
+ cmdh->if_id);
+ return -EINVAL; /* No session interface id. */
+ }
+
+ session = &ipc_mux->session[cmdh->if_id];
+
+ new_size = offsetof(struct mux_lite_cmdh, param) +
+ sizeof(param->flow_ctl);
+ if (param->flow_ctl.mask == cpu_to_le32(0xFFFFFFFF)) {
+ /* Backward Compatibility */
+ if (cmdh->cmd_len == cpu_to_le16(new_size))
+ session->flow_ctl_mask =
+ le32_to_cpu(param->flow_ctl.mask);
+ else
+ session->flow_ctl_mask = ~0;
+ /* if CP asks for FLOW CTRL Enable
+ * then set our internal flow control Tx flag
+ * to limit uplink session queueing
+ */
+ session->net_tx_stop = true;
+ /* Update the stats */
+ session->flow_ctl_en_cnt++;
+ } else if (param->flow_ctl.mask == 0) {
+ /* Just reset the Flow control mask and let
+ * mux_flow_ctrl_low_thre_b take control on
+ * our internal Tx flag and enabling kernel
+ * flow control
+ */
+ /* Backward Compatibility */
+ if (cmdh->cmd_len == cpu_to_le16(new_size))
+ session->flow_ctl_mask =
+ le32_to_cpu(param->flow_ctl.mask);
+ else
+ session->flow_ctl_mask = 0;
+ /* Update the stats */
+ session->flow_ctl_dis_cnt++;
+ } else {
+ break;
+ }
+
+ dev_dbg(ipc_mux->dev, "if[%u] FLOW CTRL 0x%08X", cmdh->if_id,
+ le32_to_cpu(param->flow_ctl.mask));
+ break;
+
+ case MUX_LITE_CMD_LINK_STATUS_REPORT:
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/* Decode and Send appropriate response to a command block. */
+static void ipc_mux_dl_cmd_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+ struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)skb->data;
+ __le32 trans_id = cmdh->transaction_id;
+
+ if (ipc_mux_dl_cmdresps_decode_process(ipc_mux, cmdh)) {
+ /* Unable to decode the command response indicates the cmd_type
+ * may be a command instead of a response. So try decoding it.
+ */
+ if (!ipc_mux_dl_dlcmds_decode_process(ipc_mux, cmdh)) {
+ /* Decoded command may need a response. Give the
+ * response according to the command type.
+ */
+ union mux_cmd_param *mux_cmd = NULL;
+ size_t size = 0;
+ u32 cmd = MUX_LITE_CMD_LINK_STATUS_REPORT_RESP;
+
+ if (cmdh->command_type ==
+ cpu_to_le32(MUX_LITE_CMD_LINK_STATUS_REPORT)) {
+ mux_cmd = &cmdh->param;
+ mux_cmd->link_status_resp.response =
+ cpu_to_le32(MUX_CMD_RESP_SUCCESS);
+ /* response field is u32 */
+ size = sizeof(u32);
+ } else if (cmdh->command_type ==
+ cpu_to_le32(MUX_LITE_CMD_FLOW_CTL)) {
+ cmd = MUX_LITE_CMD_FLOW_CTL_ACK;
+ } else {
+ return;
+ }
+
+ if (ipc_mux_dl_acb_send_cmds(ipc_mux, cmd, cmdh->if_id,
+ le32_to_cpu(trans_id),
+ mux_cmd, size, false,
+ true))
+ dev_err(ipc_mux->dev,
+ "if_id %d: cmd send failed",
+ cmdh->if_id);
+ }
+ }
+}
+
+/* Pass the DL packet to the netif layer. */
+static int ipc_mux_net_receive(struct iosm_mux *ipc_mux, int if_id,
+ struct iosm_wwan *wwan, u32 offset,
+ u8 service_class, struct sk_buff *skb)
+{
+ struct sk_buff *dest_skb = skb_clone(skb, GFP_ATOMIC);
+
+ if (!dest_skb)
+ return -ENOMEM;
+
+ skb_pull(dest_skb, offset);
+ skb_set_tail_pointer(dest_skb, dest_skb->len);
+ /* Pass the packet to the netif layer. */
+ dest_skb->priority = service_class;
+
+ return ipc_wwan_receive(wwan, dest_skb, false, if_id);
+}
+
+/* Decode Flow Credit Table in the block */
+static void ipc_mux_dl_fcth_decode(struct iosm_mux *ipc_mux,
+ unsigned char *block)
+{
+ struct ipc_mem_lite_gen_tbl *fct = (struct ipc_mem_lite_gen_tbl *)block;
+ struct iosm_wwan *wwan;
+ int ul_credits;
+ int if_id;
+
+ if (fct->vfl_length != sizeof(fct->vfl.nr_of_bytes)) {
+ dev_err(ipc_mux->dev, "unexpected FCT length: %d",
+ fct->vfl_length);
+ return;
+ }
+
+ if_id = fct->if_id;
+ if (if_id >= ipc_mux->nr_sessions) {
+ dev_err(ipc_mux->dev, "not supported if_id: %d", if_id);
+ return;
+ }
+
+ /* Is the session active ? */
+ if_id = array_index_nospec(if_id, ipc_mux->nr_sessions);
+ wwan = ipc_mux->session[if_id].wwan;
+ if (!wwan) {
+ dev_err(ipc_mux->dev, "session Net ID is NULL");
+ return;
+ }
+
+ ul_credits = fct->vfl.nr_of_bytes;
+
+ dev_dbg(ipc_mux->dev, "Flow_Credit:: if_id[%d] Old: %d Grants: %d",
+ if_id, ipc_mux->session[if_id].ul_flow_credits, ul_credits);
+
+ /* Update the Flow Credit information from ADB */
+ ipc_mux->session[if_id].ul_flow_credits += ul_credits;
+
+ /* Check whether the TX can be started */
+ if (ipc_mux->session[if_id].ul_flow_credits > 0) {
+ ipc_mux->session[if_id].net_tx_stop = false;
+ ipc_mux_netif_tx_flowctrl(&ipc_mux->session[if_id],
+ ipc_mux->session[if_id].if_id, false);
+ }
+}
+
+/* Decode non-aggregated datagram */
+static void ipc_mux_dl_adgh_decode(struct iosm_mux *ipc_mux,
+ struct sk_buff *skb)
+{
+ u32 pad_len, packet_offset;
+ struct iosm_wwan *wwan;
+ struct mux_adgh *adgh;
+ u8 *block = skb->data;
+ int rc = 0;
+ u8 if_id;
+
+ adgh = (struct mux_adgh *)block;
+
+ if (adgh->signature != cpu_to_le32(MUX_SIG_ADGH)) {
+ dev_err(ipc_mux->dev, "invalid ADGH signature received");
+ return;
+ }
+
+ if_id = adgh->if_id;
+ if (if_id >= ipc_mux->nr_sessions) {
+ dev_err(ipc_mux->dev, "invalid if_id while decoding %d", if_id);
+ return;
+ }
+
+ /* Is the session active ? */
+ if_id = array_index_nospec(if_id, ipc_mux->nr_sessions);
+ wwan = ipc_mux->session[if_id].wwan;
+ if (!wwan) {
+ dev_err(ipc_mux->dev, "session Net ID is NULL");
+ return;
+ }
+
+ /* Store the pad len for the corresponding session.
+ * Pad bytes are as negotiated in the open session, less the header
+ * size (see session management chapter for details).
+ * If the resulting padding is zero or less, the additional head
+ * padding is omitted. E.g., if HEAD_PAD_LEN is 16 or less, this
+ * field is omitted; if HEAD_PAD_LEN is 20, then this field will
+ * have 4 bytes set to zero.
+ */
+ pad_len =
+ ipc_mux->session[if_id].dl_head_pad_len - IPC_MEM_DL_ETH_OFFSET;
+ packet_offset = sizeof(*adgh) + pad_len;
+
+ if_id += ipc_mux->wwan_q_offset;
+
+ /* Pass the packet to the netif layer */
+ rc = ipc_mux_net_receive(ipc_mux, if_id, wwan, packet_offset,
+ adgh->service_class, skb);
+ if (rc) {
+ dev_err(ipc_mux->dev, "mux adgh decoding error");
+ return;
+ }
+ ipc_mux->session[if_id].flush = 1;
+}
+
+void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+ u32 signature;
+
+ if (!skb->data)
+ return;
+
+ /* Decode the MUX header type. */
+ signature = le32_to_cpup((__le32 *)skb->data);
+
+ switch (signature) {
+ case MUX_SIG_ADGH:
+ ipc_mux_dl_adgh_decode(ipc_mux, skb);
+ break;
+
+ case MUX_SIG_FCTH:
+ ipc_mux_dl_fcth_decode(ipc_mux, skb->data);
+ break;
+
+ case MUX_SIG_CMDH:
+ ipc_mux_dl_cmd_decode(ipc_mux, skb);
+ break;
+
+ default:
+ dev_err(ipc_mux->dev, "invalid ABH signature");
+ }
+
+ ipc_pcie_kfree_skb(ipc_mux->pcie, skb);
+}
+
+static int ipc_mux_ul_skb_alloc(struct iosm_mux *ipc_mux,
+ struct mux_adb *ul_adb, u32 type)
+{
+ /* Take the first element of the free list. */
+ struct sk_buff *skb = skb_dequeue(&ul_adb->free_list);
+ int qlt_size;
+
+ if (!skb)
+ return -EBUSY; /* Wait for a free ADB skb. */
+
+ /* Mark it as UL ADB to select the right free operation. */
+ IPC_CB(skb)->op_type = (u8)UL_MUX_OP_ADB;
+
+ switch (type) {
+ case MUX_SIG_ADGH:
+ /* Save the ADB memory settings. */
+ ul_adb->dest_skb = skb;
+ ul_adb->buf = skb->data;
+ ul_adb->size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE;
+ /* reset statistic counter */
+ ul_adb->if_cnt = 0;
+ ul_adb->payload_size = 0;
+ ul_adb->dg_cnt_total = 0;
+
+ ul_adb->adgh = (struct mux_adgh *)skb->data;
+ memset(ul_adb->adgh, 0, sizeof(struct mux_adgh));
+ break;
+
+ case MUX_SIG_QLTH:
+ qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) +
+ (MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl));
+
+ if (qlt_size > IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE) {
+ dev_err(ipc_mux->dev,
+ "can't support. QLT size:%d SKB size: %d",
+ qlt_size, IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE);
+ return -ERANGE;
+ }
+
+ ul_adb->qlth_skb = skb;
+ memset((ul_adb->qlth_skb)->data, 0, qlt_size);
+ skb_put(skb, qlt_size);
+ break;
+ }
+
+ return 0;
+}
+
+static void ipc_mux_ul_adgh_finish(struct iosm_mux *ipc_mux)
+{
+ struct mux_adb *ul_adb = &ipc_mux->ul_adb;
+ u16 adgh_len;
+ long long bytes;
+ char *str;
+
+ if (!ul_adb || !ul_adb->dest_skb) {
+ dev_err(ipc_mux->dev, "no dest skb");
+ return;
+ }
+
+ adgh_len = le16_to_cpu(ul_adb->adgh->length);
+ skb_put(ul_adb->dest_skb, adgh_len);
+ skb_queue_tail(&ipc_mux->channel->ul_list, ul_adb->dest_skb);
+ ul_adb->dest_skb = NULL;
+
+ if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) {
+ struct mux_session *session;
+
+ session = &ipc_mux->session[ul_adb->adgh->if_id];
+ str = "available_credits";
+ bytes = (long long)session->ul_flow_credits;
+
+ } else {
+ str = "pend_bytes";
+ bytes = ipc_mux->ul_data_pend_bytes;
+ ipc_mux->ul_data_pend_bytes = ipc_mux->ul_data_pend_bytes +
+ adgh_len;
+ }
+
+ dev_dbg(ipc_mux->dev, "UL ADGH: size=%u, if_id=%d, payload=%d, %s=%lld",
+ adgh_len, ul_adb->adgh->if_id, ul_adb->payload_size,
+ str, bytes);
+}
+
+/* Allocates an ADB from the free list and initializes it with ADBH */
+static bool ipc_mux_ul_adb_allocate(struct iosm_mux *ipc_mux,
+ struct mux_adb *adb, int *size_needed,
+ u32 type)
+{
+ bool ret_val = false;
+ int status;
+
+ if (!adb->dest_skb) {
+ /* Allocate memory for the ADB, including the
+ * datagram table header.
+ */
+ status = ipc_mux_ul_skb_alloc(ipc_mux, adb, type);
+ if (status)
+ /* No free ADB skb is available */
+ ret_val = true;
+
+ /* Update size need to zero only for new ADB memory */
+ *size_needed = 0;
+ }
+
+ return ret_val;
+}
+
+/* Informs the network stack to stop sending further packets for all opened
+ * sessions
+ */
+static void ipc_mux_stop_tx_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+ struct mux_session *session;
+ int idx;
+
+ for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+ session = &ipc_mux->session[idx];
+
+ if (!session->wwan)
+ continue;
+
+ session->net_tx_stop = true;
+ }
+}
+
+/* Sends Queue Level Table of all opened sessions */
+static bool ipc_mux_lite_send_qlt(struct iosm_mux *ipc_mux)
+{
+ struct ipc_mem_lite_gen_tbl *qlt;
+ struct mux_session *session;
+ bool qlt_updated = false;
+ int i;
+ int qlt_size;
+
+ if (!ipc_mux->initialized || ipc_mux->state != MUX_S_ACTIVE)
+ return qlt_updated;
+
+ qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) +
+ MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl);
+
+ for (i = 0; i < ipc_mux->nr_sessions; i++) {
+ session = &ipc_mux->session[i];
+
+ if (!session->wwan || session->flow_ctl_mask)
+ continue;
+
+ if (ipc_mux_ul_skb_alloc(ipc_mux, &ipc_mux->ul_adb,
+ MUX_SIG_QLTH)) {
+ dev_err(ipc_mux->dev,
+ "no reserved mem to send QLT of if_id: %d", i);
+ break;
+ }
+
+ /* Prepare QLT */
+ qlt = (struct ipc_mem_lite_gen_tbl *)(ipc_mux->ul_adb.qlth_skb)
+ ->data;
+ qlt->signature = cpu_to_le32(MUX_SIG_QLTH);
+ qlt->length = cpu_to_le16(qlt_size);
+ qlt->if_id = i;
+ qlt->vfl_length = MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl);
+ qlt->reserved[0] = 0;
+ qlt->reserved[1] = 0;
+
+ qlt->vfl.nr_of_bytes = session->ul_list.qlen;
+
+ /* Add QLT to the transfer list. */
+ skb_queue_tail(&ipc_mux->channel->ul_list,
+ ipc_mux->ul_adb.qlth_skb);
+
+ qlt_updated = true;
+ ipc_mux->ul_adb.qlth_skb = NULL;
+ }
+
+ if (qlt_updated)
+ /* Updates the TDs with ul_list */
+ (void)ipc_imem_ul_write_td(ipc_mux->imem);
+
+ return qlt_updated;
+}
+
+/* Checks the available credits for the specified session and returns
+ * number of packets for which credits are available.
+ */
+static int ipc_mux_ul_bytes_credits_check(struct iosm_mux *ipc_mux,
+ struct mux_session *session,
+ struct sk_buff_head *ul_list,
+ int max_nr_of_pkts)
+{
+ int pkts_to_send = 0;
+ struct sk_buff *skb;
+ int credits = 0;
+
+ if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) {
+ credits = session->ul_flow_credits;
+ if (credits <= 0) {
+ dev_dbg(ipc_mux->dev,
+ "FC::if_id[%d] Insuff.Credits/Qlen:%d/%u",
+ session->if_id, session->ul_flow_credits,
+ session->ul_list.qlen); /* nr_of_bytes */
+ return 0;
+ }
+ } else {
+ credits = IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B -
+ ipc_mux->ul_data_pend_bytes;
+ if (credits <= 0) {
+ ipc_mux_stop_tx_for_all_sessions(ipc_mux);
+
+ dev_dbg(ipc_mux->dev,
+ "if_id[%d] encod. fail Bytes: %llu, thresh: %d",
+ session->if_id, ipc_mux->ul_data_pend_bytes,
+ IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B);
+ return 0;
+ }
+ }
+
+ /* Check if there are enough credits/bytes available to send the
+ * requested max_nr_of_pkts. Otherwise restrict the nr_of_pkts
+ * depending on available credits.
+ */
+ skb_queue_walk(ul_list, skb)
+ {
+ if (!(credits >= skb->len && pkts_to_send < max_nr_of_pkts))
+ break;
+ credits -= skb->len;
+ pkts_to_send++;
+ }
+
+ return pkts_to_send;
+}
+
+/* Encode the UL IP packet according to Lite spec. */
+static int ipc_mux_ul_adgh_encode(struct iosm_mux *ipc_mux, int session_id,
+ struct mux_session *session,
+ struct sk_buff_head *ul_list,
+ struct mux_adb *adb, int nr_of_pkts)
+{
+ int offset = sizeof(struct mux_adgh);
+ int adb_updated = -EINVAL;
+ struct sk_buff *src_skb;
+ int aligned_size = 0;
+ int nr_of_skb = 0;
+ u32 pad_len = 0;
+
+ /* Re-calculate the number of packets depending on number of bytes to be
+ * processed/available credits.
+ */
+ nr_of_pkts = ipc_mux_ul_bytes_credits_check(ipc_mux, session, ul_list,
+ nr_of_pkts);
+
+ /* If calculated nr_of_pkts from available credits is <= 0
+ * then nothing to do.
+ */
+ if (nr_of_pkts <= 0)
+ return 0;
+
+ /* Read configured UL head_pad_length for session.*/
+ if (session->ul_head_pad_len > IPC_MEM_DL_ETH_OFFSET)
+ pad_len = session->ul_head_pad_len - IPC_MEM_DL_ETH_OFFSET;
+
+ /* Process all pending UL packets for this session
+ * depending on the allocated datagram table size.
+ */
+ while (nr_of_pkts > 0) {
+ /* get destination skb allocated */
+ if (ipc_mux_ul_adb_allocate(ipc_mux, adb, &ipc_mux->size_needed,
+ MUX_SIG_ADGH)) {
+ dev_err(ipc_mux->dev, "no reserved memory for ADGH");
+ return -ENOMEM;
+ }
+
+ /* Peek at the head of the list. */
+ src_skb = skb_peek(ul_list);
+ if (!src_skb) {
+ dev_err(ipc_mux->dev,
+ "skb peek return NULL with count : %d",
+ nr_of_pkts);
+ break;
+ }
+
+ /* Calculate the memory value. */
+ aligned_size = ALIGN((pad_len + src_skb->len), 4);
+
+ ipc_mux->size_needed = sizeof(struct mux_adgh) + aligned_size;
+
+ if (ipc_mux->size_needed > adb->size) {
+ dev_dbg(ipc_mux->dev, "size needed %d, adgh size %d",
+ ipc_mux->size_needed, adb->size);
+ /* Return 1 if any IP packet is added to the transfer
+ * list.
+ */
+ return nr_of_skb ? 1 : 0;
+ }
+
+ /* Add the buffer (without head padding) to the next pending transfer */
+ memcpy(adb->buf + offset + pad_len, src_skb->data,
+ src_skb->len);
+
+ adb->adgh->signature = cpu_to_le32(MUX_SIG_ADGH);
+ adb->adgh->if_id = session_id;
+ adb->adgh->length =
+ cpu_to_le16(sizeof(struct mux_adgh) + pad_len +
+ src_skb->len);
+ adb->adgh->service_class = src_skb->priority;
+ adb->adgh->next_count = --nr_of_pkts;
+ adb->dg_cnt_total++;
+ adb->payload_size += src_skb->len;
+
+ if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS)
+ /* Decrement the credit value as we are processing the
+ * datagram from the UL list.
+ */
+ session->ul_flow_credits -= src_skb->len;
+
+ /* Remove the processed elements and free it. */
+ src_skb = skb_dequeue(ul_list);
+ dev_kfree_skb(src_skb);
+ nr_of_skb++;
+
+ ipc_mux_ul_adgh_finish(ipc_mux);
+ }
+
+ if (nr_of_skb) {
+ /* In MUX Lite mode, send QLT info to the modem once the pending
+ * bytes have crossed the low watermark; in credits mode the QLT
+ * is always sent.
+ */
+ if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS ||
+ ipc_mux->ul_data_pend_bytes >=
+ IPC_MEM_MUX_UL_FLOWCTRL_LOW_B)
+ adb_updated = ipc_mux_lite_send_qlt(ipc_mux);
+ else
+ adb_updated = 1;
+
+ /* Updates the TDs with ul_list */
+ (void)ipc_imem_ul_write_td(ipc_mux->imem);
+ }
+
+ return adb_updated;
+}
+
+bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux)
+{
+ struct sk_buff_head *ul_list;
+ struct mux_session *session;
+ int updated = 0;
+ int session_id;
+ int dg_n;
+ int i;
+
+ if (!ipc_mux || ipc_mux->state != MUX_S_ACTIVE ||
+ ipc_mux->adb_prep_ongoing)
+ return false;
+
+ ipc_mux->adb_prep_ongoing = true;
+
+ for (i = 0; i < ipc_mux->nr_sessions; i++) {
+ session_id = ipc_mux->rr_next_session;
+ session = &ipc_mux->session[session_id];
+
+ /* Advance round robin; handle rr_next_session overflow */
+ ipc_mux->rr_next_session++;
+ if (ipc_mux->rr_next_session >= ipc_mux->nr_sessions)
+ ipc_mux->rr_next_session = 0;
+
+ if (!session->wwan || session->flow_ctl_mask ||
+ session->net_tx_stop)
+ continue;
+
+ ul_list = &session->ul_list;
+
+ /* Is something pending in UL and flow ctrl off */
+ dg_n = skb_queue_len(ul_list);
+ if (dg_n > MUX_MAX_UL_DG_ENTRIES)
+ dg_n = MUX_MAX_UL_DG_ENTRIES;
+
+ if (dg_n == 0)
+ /* Nothing to do for ipc_mux session
+ * -> try next session id.
+ */
+ continue;
+
+ updated = ipc_mux_ul_adgh_encode(ipc_mux, session_id, session,
+ ul_list, &ipc_mux->ul_adb,
+ dg_n);
+ }
+
+ ipc_mux->adb_prep_ongoing = false;
+ return updated == 1;
+}
+
+void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+ struct mux_adgh *adgh;
+ u16 adgh_len;
+
+ adgh = (struct mux_adgh *)skb->data;
+ adgh_len = le16_to_cpu(adgh->length);
+
+ if (adgh->signature == cpu_to_le32(MUX_SIG_ADGH) &&
+ ipc_mux->ul_flow == MUX_UL)
+ ipc_mux->ul_data_pend_bytes = ipc_mux->ul_data_pend_bytes -
+ adgh_len;
+
+ if (ipc_mux->ul_flow == MUX_UL)
+ dev_dbg(ipc_mux->dev, "ul_data_pend_bytes: %lld",
+ ipc_mux->ul_data_pend_bytes);
+
+ /* Reset the skb settings. */
+ skb->tail = 0;
+ skb->len = 0;
+
+ /* Add the consumed ADB to the free list. */
+ skb_queue_tail((&ipc_mux->ul_adb.free_list), skb);
+}
+
+/* Start the NETIF uplink send transfer in MUX mode. */
+static int ipc_mux_tq_ul_trigger_encode(struct iosm_imem *ipc_imem, int arg,
+ void *msg, size_t size)
+{
+ struct iosm_mux *ipc_mux = ipc_imem->mux;
+ bool ul_data_pend = false;
+
+ /* Add session UL data to an ADB and ADGH */
+ ul_data_pend = ipc_mux_ul_data_encode(ipc_mux);
+ if (ul_data_pend)
+ /* Delay the doorbell irq */
+ ipc_imem_td_update_timer_start(ipc_mux->imem);
+
+ /* reset the debounce flag */
+ ipc_mux->ev_mux_net_transmit_pending = false;
+
+ return 0;
+}
+
+int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id,
+ struct sk_buff *skb)
+{
+ struct mux_session *session = &ipc_mux->session[if_id];
+ int ret = -EINVAL;
+
+ if (ipc_mux->channel &&
+ ipc_mux->channel->state != IMEM_CHANNEL_ACTIVE) {
+ dev_err(ipc_mux->dev,
+ "channel state is not IMEM_CHANNEL_ACTIVE");
+ goto out;
+ }
+
+ if (!session->wwan) {
+ dev_err(ipc_mux->dev, "session net ID is NULL");
+ ret = -EFAULT;
+ goto out;
+ }
+
+ /* Session is under flow control.
+ * Check if packet can be queued in session list, if not
+ * suspend net tx
+ */
+ if (skb_queue_len(&session->ul_list) >=
+ (session->net_tx_stop ?
+ IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD :
+ (IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD *
+ IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR))) {
+ ipc_mux_netif_tx_flowctrl(session, session->if_id, true);
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* Add skb to the uplink skb accumulator. */
+ skb_queue_tail(&session->ul_list, skb);
+
+ /* Inform the IPC kthread to pass uplink IP packets to CP. */
+ if (!ipc_mux->ev_mux_net_transmit_pending) {
+ ipc_mux->ev_mux_net_transmit_pending = true;
+ ret = ipc_task_queue_send_task(ipc_mux->imem,
+ ipc_mux_tq_ul_trigger_encode, 0,
+ NULL, 0, false);
+ if (ret)
+ goto out;
+ }
+ dev_dbg(ipc_mux->dev, "mux ul if[%d] qlen=%d/%u, len=%d/%d, prio=%d",
+ if_id, skb_queue_len(&session->ul_list), session->ul_list.qlen,
+ skb->len, skb->truesize, skb->priority);
+ ret = 0;
+out:
+ return ret;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
new file mode 100644
index 000000000000..4a74e3c9457f
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MUX_CODEC_H
+#define IOSM_IPC_MUX_CODEC_H
+
+#include "iosm_ipc_mux.h"
+
+/* Queue level size and reporting
+ * >1 is enable, 0 is disable
+ */
+#define MUX_QUEUE_LEVEL 1
+
+/* Size of the buffer for the IP MUX commands. */
+#define MUX_MAX_UL_ACB_BUF_SIZE 256
+
+/* Maximum number of packets in one go per session */
+#define MUX_MAX_UL_DG_ENTRIES 100
+
+/* ADGH: Signature of the Datagram Header. */
+#define MUX_SIG_ADGH 0x48474441
+
+/* CMDH: Signature of the Command Header. */
+#define MUX_SIG_CMDH 0x48444D43
+
+/* QLTH: Signature of the Queue Level Table */
+#define MUX_SIG_QLTH 0x48544C51
+
+/* FCTH: Signature of the Flow Credit Table */
+#define MUX_SIG_FCTH 0x48544346
+
+/* MUX UL session threshold factor */
+#define IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR (4)
+
+/* Size of the buffer for the IP MUX Lite data buffer. */
+#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024)
+
+/* MUX UL session threshold in number of packets */
+#define IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD (64)
+
+/* Default timeout for sending IPC session commands like
+ * open session, close session, etc.
+ * Unit: milliseconds
+ */
+#define IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT 1000 /* 1 second */
+
+/* MUX UL flow control lower threshold in bytes */
+#define IPC_MEM_MUX_UL_FLOWCTRL_LOW_B 10240 /* 10KB */
+
+/* MUX UL flow control higher threshold in bytes (5ms worth of data)*/
+#define IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B (110 * 1024)
+
+/**
+ * struct mux_adgh - Aggregated Datagram Header.
+ * @signature: Signature of the Aggregated Datagram Header(0x48474441)
+ * @length: Length (in bytes) of the datagram header. This length
+ * shall include the header size. Min value: 0x10
+ * @if_id: ID of the interface the datagrams belong to
+ * @opt_ipv4v6: Indicates IPv4(=0)/IPv6(=1). It is optional; if not
+ * used, set it to zero.
+ * @reserved: Reserved bits. Set to zero.
+ * @service_class: Service class identifier for the datagram.
+ * @next_count: Count of the datagrams that shall follow this
+ * datagram for this interface. A count of zero means
+ * the next datagram may not belong to this interface.
+ * @reserved1: Reserved bytes. Set to zero.
+ */
+struct mux_adgh {
+ __le32 signature;
+ __le16 length;
+ u8 if_id;
+ u8 opt_ipv4v6;
+ u8 service_class;
+ u8 next_count;
+ u8 reserved1[6];
+};
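+
+/* Illustrative sizing (not part of the protocol definition): for a
+ * 1500 byte IP datagram and a 4 byte negotiated head pad, @length is
+ * 16 (the 0x10 minimum header size) + 4 (pad) + 1500 = 1520 bytes.
+ */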
+
+/**
+ * struct mux_lite_cmdh - MUX Lite Command Header
+ * @signature: Signature of the Command Header(0x48444D43)
+ * @cmd_len: Length (in bytes) of the command. This length shall
+ * include the header size. Minimum value: 0x10
+ * @if_id: ID of the interface the commands in the table belong to.
+ * @reserved: Reserved. Set to zero.
+ * @command_type: Command Enum.
+ * @transaction_id: 4 byte value shall be generated and sent along with a
+ * command. Responses and ACKs shall have the same
+ * Transaction ID as their commands. It shall be unique to
+ * the command transaction on the given interface.
+ * @param: Optional parameters used with the command.
+ */
+struct mux_lite_cmdh {
+ __le32 signature;
+ __le16 cmd_len;
+ u8 if_id;
+ u8 reserved;
+ __le32 command_type;
+ __le32 transaction_id;
+ union mux_cmd_param param;
+};
+
+/**
+ * struct mux_lite_vfl - value field in generic table
+ * @nr_of_bytes: Number of bytes available to transmit in the queue.
+ */
+struct mux_lite_vfl {
+ u32 nr_of_bytes;
+};
+
+/**
+ * struct ipc_mem_lite_gen_tbl - Generic table format for Queue Level
+ * and Flow Credit
+ * @signature: Signature of the table
+ * @length: Length of the table
+ * @if_id: ID of the interface the table belongs to
+ * @vfl_length: Value field length
+ * @reserved: Reserved
+ * @vfl: Value field of variable length
+ */
+struct ipc_mem_lite_gen_tbl {
+ __le32 signature;
+ __le16 length;
+ u8 if_id;
+ u8 vfl_length;
+ u32 reserved[2];
+ struct mux_lite_vfl vfl;
+};
+
+/**
+ * ipc_mux_dl_decode - Route the DL packet through the IP MUX layer
+ * depending on Header.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @skb: Pointer to ipc_skb.
+ */
+void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb);
+
+/**
+ * ipc_mux_dl_acb_send_cmds - Respond to the Command blocks.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @cmd_type: Command
+ * @if_id: Session interface id.
+ * @transaction_id: Command transaction id.
+ * @param: Pointer to command params.
+ * @res_size: Response size
+ * @blocking: True for blocking send
+ * @respond: If true return transaction ID
+ *
+ * Returns: 0 on success and a failure value on error
+ */
+int ipc_mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id,
+ u32 transaction_id, union mux_cmd_param *param,
+ size_t res_size, bool blocking, bool respond);
+
+/**
+ * ipc_mux_netif_tx_flowctrl - Enable/Disable TX flow control on MUX sessions.
+ * @session: Pointer to mux_session struct
+ * @idx: Session ID
+ * @on: true to enable and false to disable flow control
+ */
+void ipc_mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on);
+
+/**
+ * ipc_mux_ul_trigger_encode - Route the UL packet through the IP MUX layer
+ * for encoding.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @if_id: Session ID.
+ * @skb: Pointer to ipc_skb.
+ *
+ * Returns: 0 if successfully encoded
+ * failure value on error
+ * -EBUSY if packet has to be retransmitted.
+ */
+int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id,
+ struct sk_buff *skb);
+/**
+ * ipc_mux_ul_data_encode - UL encode function for calling from Tasklet context.
+ * @ipc_mux: Pointer to MUX data-struct
+ *
+ * Returns: TRUE if any packet of any session is encoded, FALSE otherwise.
+ */
+bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_ul_encoded_process - Handles the Modem processed UL data by adding
+ * the SKB to the UL free list.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @skb: Pointer to ipc_skb.
+ */
+void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb);
+
+#endif
--
2.25.1

2021-05-20 14:09:41

by Kumar, M Chetan

Subject: [PATCH V3 11/16] net: iosm: power management

Implements a state machine to handle host & device sleep.
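
As a rough sketch of the host-sleep side implemented here:
ipc_pm_prepare_host_sleep() moves IPC_MEM_HOST_PM_ACTIVE to
IPC_MEM_HOST_PM_SLEEP_WAIT_D3, and ipc_pm_prepare_host_active() moves
IPC_MEM_HOST_PM_SLEEP to IPC_MEM_HOST_PM_ACTIVE_WAIT; the terminal
SLEEP/ACTIVE states are entered once the device side acknowledges.
Device sleep is negotiated separately through
ipc_pm_dev_slp_notification() and the IPC_PM_UNIT_LINK trigger.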

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: Aligned the ipc_ prefix for function names to be consistent across the file.
v2:
* Removed spaces around the ':' for the bitfields.
* Moved the pm module under static allocation.
* Added memory barriers around atomic operations.
---
drivers/net/wwan/iosm/iosm_ipc_pm.c | 333 ++++++++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_pm.h | 207 +++++++++++++++++
2 files changed, 540 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.c b/drivers/net/wwan/iosm/iosm_ipc_pm.c
new file mode 100644
index 000000000000..413601c72dcd
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pm.c
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include "iosm_ipc_protocol.h"
+
+/* Timeout value in MS for the PM to wait for device to reach active state */
+#define IPC_PM_ACTIVE_TIMEOUT_MS (500)
+
+/* Note that here "active" has the value 1, as compared to the enums
+ * ipc_mem_host_pm_state or ipc_mem_dev_pm_state, where "active" is 0
+ */
+#define IPC_PM_SLEEP (0)
+#define CONSUME_STATE (0)
+#define IPC_PM_ACTIVE (1)
+
+void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier,
+ bool host_slp_check)
+{
+ if (host_slp_check && ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE &&
+ ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE_WAIT) {
+ ipc_pm->pending_hpda_update = true;
+ dev_dbg(ipc_pm->dev,
+ "Pend HPDA update set. Host PM_State: %d identifier:%d",
+ ipc_pm->host_pm_state, identifier);
+ return;
+ }
+
+ if (!ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, true)) {
+ ipc_pm->pending_hpda_update = true;
+ dev_dbg(ipc_pm->dev, "Pending HPDA update set. identifier:%d",
+ identifier);
+ return;
+ }
+ ipc_pm->pending_hpda_update = false;
+
+ /* Trigger the irq towards CP */
+ ipc_cp_irq_hpda_update(ipc_pm->pcie, identifier);
+
+ ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, false);
+}
+
+/* Wake up the device if it is in low power mode. */
+static bool ipc_pm_link_activate(struct iosm_pm *ipc_pm)
+{
+ if (ipc_pm->cp_state == IPC_MEM_DEV_PM_ACTIVE)
+ return true;
+
+ if (ipc_pm->cp_state == IPC_MEM_DEV_PM_SLEEP) {
+ if (ipc_pm->ap_state == IPC_MEM_DEV_PM_SLEEP) {
+ /* Wake up the device. */
+ ipc_cp_irq_sleep_control(ipc_pm->pcie,
+ IPC_MEM_DEV_PM_WAKEUP);
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE_WAIT;
+
+ goto not_active;
+ }
+
+ if (ipc_pm->ap_state == IPC_MEM_DEV_PM_ACTIVE_WAIT)
+ goto not_active;
+
+ return true;
+ }
+
+not_active:
+ /* link is not ready */
+ return false;
+}
+
+bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm)
+{
+ bool ret_val = false;
+
+ if (ipc_pm->ap_state != IPC_MEM_DEV_PM_ACTIVE) {
+ /* Complete all memory stores before setting bit */
+ smp_mb__before_atomic();
+
+ /* Wait for IPC_PM_ACTIVE_TIMEOUT_MS for Device sleep state
+ * machine to enter ACTIVE state.
+ */
+ set_bit(0, &ipc_pm->host_sleep_pend);
+
+ /* Complete all memory stores after setting bit */
+ smp_mb__after_atomic();
+
+ if (!wait_for_completion_interruptible_timeout
+ (&ipc_pm->host_sleep_complete,
+ msecs_to_jiffies(IPC_PM_ACTIVE_TIMEOUT_MS))) {
+ dev_err(ipc_pm->dev,
+ "PM timeout. Expected State:%d. Actual: %d",
+ IPC_MEM_DEV_PM_ACTIVE, ipc_pm->ap_state);
+ goto active_timeout;
+ }
+ }
+
+ ret_val = true;
+active_timeout:
+ /* Complete all memory stores before clearing bit */
+ smp_mb__before_atomic();
+
+ /* Reset the atomic variable in any case as device sleep
+ * state machine change is no longer of interest.
+ */
+ clear_bit(0, &ipc_pm->host_sleep_pend);
+
+ /* Complete all memory stores after clearing bit */
+ smp_mb__after_atomic();
+
+ return ret_val;
+}
+
+static void ipc_pm_on_link_sleep(struct iosm_pm *ipc_pm)
+{
+ /* pending sleep ack and all conditions are cleared
+ * -> signal SLEEP__ACK to CP
+ */
+ ipc_pm->cp_state = IPC_MEM_DEV_PM_SLEEP;
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_SLEEP;
+
+ ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_SLEEP);
+}
+
+static void ipc_pm_on_link_wake(struct iosm_pm *ipc_pm, bool ack)
+{
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE;
+
+ if (ack) {
+ ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE;
+
+ ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_ACTIVE);
+
+ /* Check the consume state */
+ if (test_bit(CONSUME_STATE, &ipc_pm->host_sleep_pend))
+ complete(&ipc_pm->host_sleep_complete);
+ }
+
+ /* Check for a pending HPDA update.
+ * A pending HP update can occur because sending a message was
+ * put on hold due to the device sleep state, or due to a TD
+ * update that was held back by the device sleep and host
+ * sleep states.
+ */
+ if (ipc_pm->pending_hpda_update &&
+ ipc_pm->host_pm_state == IPC_MEM_HOST_PM_ACTIVE)
+ ipc_pm_signal_hpda_doorbell(ipc_pm, IPC_HP_PM_TRIGGER, true);
+}
+
+bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active)
+{
+ union ipc_pm_cond old_cond;
+ union ipc_pm_cond new_cond;
+ bool link_active;
+
+ /* Save the current D3 state. */
+ new_cond = ipc_pm->pm_cond;
+ old_cond = ipc_pm->pm_cond;
+
+ /* Calculate the power state only in the runtime phase. */
+ switch (unit) {
+ case IPC_PM_UNIT_IRQ: /* CP irq */
+ new_cond.irq = active;
+ break;
+
+ case IPC_PM_UNIT_LINK: /* Device link state. */
+ new_cond.link = active;
+ break;
+
+ case IPC_PM_UNIT_HS: /* Host sleep trigger requires Link. */
+ new_cond.hs = active;
+ break;
+
+ default:
+ break;
+ }
+
+ /* Something changed ? */
+ if (old_cond.raw == new_cond.raw) {
+ /* Stay in the current PM state. */
+ link_active = old_cond.link == IPC_PM_ACTIVE;
+ goto ret;
+ }
+
+ ipc_pm->pm_cond = new_cond;
+
+ if (new_cond.link)
+ ipc_pm_on_link_wake(ipc_pm, unit == IPC_PM_UNIT_LINK);
+ else if (unit == IPC_PM_UNIT_LINK)
+ ipc_pm_on_link_sleep(ipc_pm);
+
+ if (old_cond.link == IPC_PM_SLEEP && new_cond.raw) {
+ link_active = ipc_pm_link_activate(ipc_pm);
+ goto ret;
+ }
+
+ link_active = old_cond.link == IPC_PM_ACTIVE;
+
+ret:
+ return link_active;
+}
+
+bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm)
+{
+ /* suspend not allowed if host_pm_state is not IPC_MEM_HOST_PM_ACTIVE */
+ if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE) {
+ dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d",
+ ipc_pm->host_pm_state, IPC_MEM_HOST_PM_ACTIVE);
+ return false;
+ }
+
+ ipc_pm->host_pm_state = IPC_MEM_HOST_PM_SLEEP_WAIT_D3;
+
+ return true;
+}
+
+bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm)
+{
+ if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_SLEEP) {
+ dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d",
+ ipc_pm->host_pm_state, IPC_MEM_HOST_PM_SLEEP);
+ return false;
+ }
+
+ /* Sending Sleep Exit message to CP. Update the state */
+ ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE_WAIT;
+
+ return true;
+}
+
+void ipc_pm_set_s2idle_sleep(struct iosm_pm *ipc_pm, bool sleep)
+{
+ if (sleep) {
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_SLEEP;
+ ipc_pm->cp_state = IPC_MEM_DEV_PM_SLEEP;
+ ipc_pm->device_sleep_notification = IPC_MEM_DEV_PM_SLEEP;
+ } else {
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE;
+ ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE;
+ ipc_pm->device_sleep_notification = IPC_MEM_DEV_PM_ACTIVE;
+ ipc_pm->pm_cond.link = IPC_PM_ACTIVE;
+ }
+}
+
+bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm, u32 cp_pm_req)
+{
+ if (cp_pm_req == ipc_pm->device_sleep_notification)
+ return false;
+
+ ipc_pm->device_sleep_notification = cp_pm_req;
+
+ /* Evaluate the PM request. */
+ switch (ipc_pm->cp_state) {
+ case IPC_MEM_DEV_PM_ACTIVE:
+ switch (cp_pm_req) {
+ case IPC_MEM_DEV_PM_ACTIVE:
+ break;
+
+ case IPC_MEM_DEV_PM_SLEEP:
+ /* Inform the PM that the device link can go down. */
+ ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, false);
+ return true;
+
+ default:
+ dev_err(ipc_pm->dev,
+ "loc-pm=%d active: confused req-pm=%d",
+ ipc_pm->cp_state, cp_pm_req);
+ break;
+ }
+ break;
+
+ case IPC_MEM_DEV_PM_SLEEP:
+ switch (cp_pm_req) {
+ case IPC_MEM_DEV_PM_ACTIVE:
+ /* Inform the PM that the device link is active. */
+ ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, true);
+ break;
+
+ case IPC_MEM_DEV_PM_SLEEP:
+ break;
+
+ default:
+ dev_err(ipc_pm->dev,
+ "loc-pm=%d sleep: confused req-pm=%d",
+ ipc_pm->cp_state, cp_pm_req);
+ break;
+ }
+ break;
+
+ default:
+ dev_err(ipc_pm->dev, "confused loc-pm=%d, req-pm=%d",
+ ipc_pm->cp_state, cp_pm_req);
+ break;
+ }
+
+ return false;
+}
+
+void ipc_pm_init(struct iosm_protocol *ipc_protocol)
+{
+ struct iosm_imem *ipc_imem = ipc_protocol->imem;
+ struct iosm_pm *ipc_pm = &ipc_protocol->pm;
+
+ ipc_pm->pcie = ipc_imem->pcie;
+ ipc_pm->dev = ipc_imem->dev;
+
+ ipc_pm->pm_cond.irq = IPC_PM_SLEEP;
+ ipc_pm->pm_cond.hs = IPC_PM_SLEEP;
+ ipc_pm->pm_cond.link = IPC_PM_ACTIVE;
+
+ ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE;
+ ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE;
+ ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+
+ /* Create generic wait-for-completion handler for Host Sleep
+ * and device sleep coordination.
+ */
+ init_completion(&ipc_pm->host_sleep_complete);
+
+ /* Complete all memory stores before clearing bit */
+ smp_mb__before_atomic();
+
+ clear_bit(0, &ipc_pm->host_sleep_pend);
+
+ /* Complete all memory stores after clearing bit */
+ smp_mb__after_atomic();
+}
+
+void ipc_pm_deinit(struct iosm_protocol *proto)
+{
+ struct iosm_pm *ipc_pm = &proto->pm;
+
+ complete(&ipc_pm->host_sleep_complete);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.h b/drivers/net/wwan/iosm/iosm_ipc_pm.h
new file mode 100644
index 000000000000..e7c00f388cb0
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pm.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PM_H
+#define IOSM_IPC_PM_H
+
+/* Trigger the doorbell interrupt on CP to change the PM sleep/active status */
+#define ipc_cp_irq_sleep_control(ipc_pcie, data) \
+ ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_SLEEP, data)
+
+/* Trigger the doorbell interrupt on CP to do hpda update */
+#define ipc_cp_irq_hpda_update(ipc_pcie, data) \
+ ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_HPDA, 0xFF & (data))
+
+/**
+ * union ipc_pm_cond - Conditions for D3 and the sleep message to CP.
+ * @raw: raw/combined value for faster check
+ * @irq: IRQ towards CP
+ * @hs: Host Sleep
+ * @link: Device link state.
+ */
+union ipc_pm_cond {
+ unsigned int raw;
+
+ struct {
+ unsigned int irq:1,
+ hs:1,
+ link:1;
+ };
+};
+
+/**
+ * enum ipc_mem_host_pm_state - Possible states of the HOST SLEEP finite state
+ * machine.
+ * @IPC_MEM_HOST_PM_ACTIVE: Host is active
+ * @IPC_MEM_HOST_PM_ACTIVE_WAIT: Intermediate state before going to
+ * active
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE: Intermediate state to wait for idle
+ * before going into sleep
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_D3: Intermediate state to wait for D3
+ * before going to sleep
+ * @IPC_MEM_HOST_PM_SLEEP: after this state the interface is not
+ * accessible; the host is in suspend to RAM
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP: Intermediate state before exiting
+ * sleep
+ */
+enum ipc_mem_host_pm_state {
+ IPC_MEM_HOST_PM_ACTIVE,
+ IPC_MEM_HOST_PM_ACTIVE_WAIT,
+ IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE,
+ IPC_MEM_HOST_PM_SLEEP_WAIT_D3,
+ IPC_MEM_HOST_PM_SLEEP,
+ IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP,
+};
+
+/**
+ * enum ipc_mem_dev_pm_state - Possible states of the DEVICE SLEEP finite state
+ * machine.
+ * @IPC_MEM_DEV_PM_ACTIVE: IPC_MEM_DEV_PM_ACTIVE is the initial
+ * power management state.
+ * IRQ(struct ipc_mem_device_info:
+ * device_sleep_notification)
+ * and DOORBELL-IRQ-HPDA(data) values.
+ * @IPC_MEM_DEV_PM_SLEEP: IPC_MEM_DEV_PM_SLEEP is PM state for
+ * sleep.
+ * @IPC_MEM_DEV_PM_WAKEUP: DOORBELL-IRQ-DEVICE_WAKE(data).
+ * @IPC_MEM_DEV_PM_HOST_SLEEP: DOORBELL-IRQ-HOST_SLEEP(data).
+ * @IPC_MEM_DEV_PM_ACTIVE_WAIT: Local intermediate states.
+ * @IPC_MEM_DEV_PM_FORCE_SLEEP: DOORBELL-IRQ-FORCE_SLEEP.
+ * @IPC_MEM_DEV_PM_FORCE_ACTIVE: DOORBELL-IRQ-FORCE_ACTIVE.
+ */
+enum ipc_mem_dev_pm_state {
+ IPC_MEM_DEV_PM_ACTIVE,
+ IPC_MEM_DEV_PM_SLEEP,
+ IPC_MEM_DEV_PM_WAKEUP,
+ IPC_MEM_DEV_PM_HOST_SLEEP,
+ IPC_MEM_DEV_PM_ACTIVE_WAIT,
+ IPC_MEM_DEV_PM_FORCE_SLEEP = 7,
+ IPC_MEM_DEV_PM_FORCE_ACTIVE,
+};
+
+/**
+ * struct iosm_pm - Power management instance
+ * @pcie: Pointer to iosm_pcie structure
+ * @dev: Pointer to device structure
+ * @host_pm_state: PM states for host
+ * @host_sleep_pend: Variable to indicate Host Sleep Pending
+ * @host_sleep_complete: Generic wait-for-completion used in
+ * case of Host Sleep
+ * @pm_cond: Conditions for power management
+ * @ap_state: Current power management state, the
+ * initial state is IPC_MEM_DEV_PM_ACTIVE eq. 0.
+ * @cp_state: PM State of CP
+ * @device_sleep_notification: last handled device_sleep_notification
+ * @pending_hpda_update: is an HPDA update pending?
+ */
+struct iosm_pm {
+ struct iosm_pcie *pcie;
+ struct device *dev;
+ enum ipc_mem_host_pm_state host_pm_state;
+ unsigned long host_sleep_pend;
+ struct completion host_sleep_complete;
+ union ipc_pm_cond pm_cond;
+ enum ipc_mem_dev_pm_state ap_state;
+ enum ipc_mem_dev_pm_state cp_state;
+ u32 device_sleep_notification;
+ u8 pending_hpda_update:1;
+};
+
+/**
+ * enum ipc_pm_unit - Power management units.
+ * @IPC_PM_UNIT_IRQ: IRQ towards CP
+ * @IPC_PM_UNIT_HS: Host Sleep for converged protocol
+ * @IPC_PM_UNIT_LINK: Link state controlled by CP.
+ */
+enum ipc_pm_unit {
+ IPC_PM_UNIT_IRQ,
+ IPC_PM_UNIT_HS,
+ IPC_PM_UNIT_LINK,
+};
+
+/**
+ * ipc_pm_init - Initialize the power management component
+ * @ipc_protocol: Pointer to iosm_protocol structure
+ */
+void ipc_pm_init(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_pm_deinit - Deinitialize the power management component.
+ * @ipc_protocol: Pointer to iosm_protocol structure
+ */
+void ipc_pm_deinit(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_pm_dev_slp_notification - Handle a sleep notification message from the
+ * device. This can be called from interrupt state.
+ * This function handles Host Sleep requests too
+ * if the Host Sleep protocol is register based.
+ * @ipc_pm: Pointer to power management component
+ * @sleep_notification: Actual notification from device
+ *
+ * Returns: true if dev sleep state has to be checked, false otherwise.
+ */
+bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm,
+ u32 sleep_notification);
+
+/**
+ * ipc_pm_set_s2idle_sleep - Set PM variables to sleep/active
+ * @ipc_pm: Pointer to power management component
+ * @sleep: true to enter sleep/false to exit sleep
+ */
+void ipc_pm_set_s2idle_sleep(struct iosm_pm *ipc_pm, bool sleep);
+
+/**
+ * ipc_pm_prepare_host_sleep - Prepare the PM for sleep by entering
+ * IPC_MEM_HOST_PM_SLEEP_WAIT_D3 state.
+ * @ipc_pm: Pointer to power management component
+ *
+ * Returns: true on success, false if the host was not active.
+ */
+bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_prepare_host_active - Prepare the PM for wakeup by entering
+ * IPC_MEM_HOST_PM_ACTIVE_WAIT state.
+ * @ipc_pm: Pointer to power management component
+ *
+ * Returns: true on success, false if the host was not sleeping.
+ */
+bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_wait_for_device_active - Wait up to IPC_PM_ACTIVE_TIMEOUT_MS ms
+ * for the device to reach active state
+ * @ipc_pm: Pointer to power management component
+ *
+ * Returns: true if device is active, false on timeout
+ */
+bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_signal_hpda_doorbell - Wake up the device if it is in low power mode
+ * and trigger a head pointer update interrupt.
+ * @ipc_pm: Pointer to power management component
+ * @identifier: specifies what component triggered hpda update irq
+ * @host_slp_check: if true, the Host Sleep state machine is checked
+ * first and the doorbell is triggered only if that
+ * state machine allows the HP update; otherwise the
+ * pending flag is set. If false, the Host Sleep
+ * check is skipped, which is useful for Host Sleep
+ * negotiation through the message ring.
+ */
+void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier,
+ bool host_slp_check);
+/**
+ * ipc_pm_trigger - Update power manager and wake up the link if needed
+ * @ipc_pm: Pointer to power management component
+ * @unit: Power management units
+ * @active: Device link state
+ *
+ * Returns: true if link is unchanged or active, false otherwise
+ */
+bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active);
+
+#endif
--
2.25.1

2021-05-20 14:09:49

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 14/16] net: iosm: uevent support

Report modem status via uevent.
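
The uevent is raised from a work queue so that callers may run in atomic
context. For illustration, a call site could look like this (the call site
itself is hypothetical; ipc_uevent_send() and the event strings are what
this patch defines):

    /* Hypothetical caller: report that the modem reached the ready state. */
    ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_READY);

User space then receives a KOBJ_CHANGE uevent carrying
"<device name>: MDM_READY" and can react to it, e.g. via a udev rule.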

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: no change.
v2: Removed non-related header file inclusion.
---
drivers/net/wwan/iosm/iosm_ipc_uevent.c | 44 +++++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_uevent.h | 41 +++++++++++++++++++++++
2 files changed, 85 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.c b/drivers/net/wwan/iosm/iosm_ipc_uevent.c
new file mode 100644
index 000000000000..2229d752926c
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include <linux/device.h>
+#include <linux/kobject.h>
+#include <linux/slab.h>
+
+#include "iosm_ipc_uevent.h"
+
+/* Update the uevent in work queue context */
+static void ipc_uevent_work(struct work_struct *data)
+{
+ struct ipc_uevent_info *info;
+ char *envp[2] = { NULL, NULL };
+
+ info = container_of(data, struct ipc_uevent_info, work);
+
+ envp[0] = info->uevent;
+
+ if (kobject_uevent_env(&info->dev->kobj, KOBJ_CHANGE, envp))
+ pr_err("uevent %s failed to send", info->uevent);
+
+ kfree(info);
+}
+
+void ipc_uevent_send(struct device *dev, char *uevent)
+{
+ struct ipc_uevent_info *info = kzalloc(sizeof(*info), GFP_ATOMIC);
+
+ if (!info)
+ return;
+
+ /* Initialize the kernel work queue */
+ INIT_WORK(&info->work, ipc_uevent_work);
+
+ /* Store the device and event information */
+ info->dev = dev;
+ snprintf(info->uevent, MAX_UEVENT_LEN, "%s: %s", dev_name(dev), uevent);
+
+ /* Schedule uevent in process context using work queue */
+ schedule_work(&info->work);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.h b/drivers/net/wwan/iosm/iosm_ipc_uevent.h
new file mode 100644
index 000000000000..2e45c051b5f4
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_UEVENT_H
+#define IOSM_IPC_UEVENT_H
+
+/* Baseband event strings */
+#define UEVENT_MDM_NOT_READY "MDM_NOT_READY"
+#define UEVENT_ROM_READY "ROM_READY"
+#define UEVENT_MDM_READY "MDM_READY"
+#define UEVENT_CRASH "CRASH"
+#define UEVENT_CD_READY "CD_READY"
+#define UEVENT_CD_READY_LINK_DOWN "CD_READY_LINK_DOWN"
+#define UEVENT_MDM_TIMEOUT "MDM_TIMEOUT"
+
+/* Maximum length of user events */
+#define MAX_UEVENT_LEN 64
+
+/**
+ * struct ipc_uevent_info - Uevent information structure.
+ * @dev: Pointer to device structure
+ * @uevent: Uevent information
+ * @work: Uevent work struct
+ */
+struct ipc_uevent_info {
+ struct device *dev;
+ char uevent[MAX_UEVENT_LEN];
+ struct work_struct work;
+};
+
+/**
+ * ipc_uevent_send - Send modem event to user space.
+ * @dev: Generic device pointer
+ * @uevent: Uevent information
+ *
+ */
+void ipc_uevent_send(struct device *dev, char *uevent);
+
+#endif
--
2.25.1

2021-05-20 14:09:52

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 13/16] net: iosm: protocol operations

1) Update UL/DL transfer descriptors in message ring.
2) Define message set for pipe/sleep protocol.
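
The UL/DL transfer descriptor rings are single-producer circular buffers:
the host advances the head pointer when queueing TDs and the device advances
the tail pointer as it consumes them. As a reference, the free-slot
arithmetic used in the TD send path can be sketched as follows (illustrative
helper, not part of the patch):

    /* Free TDs in a ring of n entries; one slot is kept empty so that
     * head == tail unambiguously means "ring is empty".
     */
    static s32 ring_free_elements(u32 head, u32 tail, u32 n)
    {
            if (head < tail)
                    return tail - head - 1;
            return n - head + ((s32)tail - 1);
    }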

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: Endianness type correction for transfer descriptor structure.
v2:
* Endianness type correction for Host-Device protocol structure.
* Function signature documentation correction.
* Streamline multiple returns using goto.
---
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c | 552 ++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h | 444 ++++++++++++++
2 files changed, 996 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
new file mode 100644
index 000000000000..91109e27efd3
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
@@ -0,0 +1,552 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include "iosm_ipc_protocol.h"
+#include "iosm_ipc_protocol_ops.h"
+
+/* Get the next free message element. */
+static union ipc_mem_msg_entry *
+ipc_protocol_free_msg_get(struct iosm_protocol *ipc_protocol, int *index)
+{
+ u32 head = le32_to_cpu(ipc_protocol->p_ap_shm->msg_head);
+ u32 new_head = (head + 1) % IPC_MEM_MSG_ENTRIES;
+ union ipc_mem_msg_entry *msg;
+
+ if (new_head == le32_to_cpu(ipc_protocol->p_ap_shm->msg_tail)) {
+ dev_err(ipc_protocol->dev, "message ring is full");
+ return NULL;
+ }
+
+ /* Get the pointer to the next free message element,
+ * reset the fields and mark it as invalid.
+ */
+ msg = &ipc_protocol->p_ap_shm->msg_ring[head];
+ memset(msg, 0, sizeof(*msg));
+
+ /* return index in message ring */
+ *index = head;
+
+ return msg;
+}
+
+/* Updates the message ring Head pointer */
+void ipc_protocol_msg_hp_update(struct iosm_imem *ipc_imem)
+{
+ struct iosm_protocol *ipc_protocol = ipc_imem->ipc_protocol;
+ u32 head = le32_to_cpu(ipc_protocol->p_ap_shm->msg_head);
+ u32 new_head = (head + 1) % IPC_MEM_MSG_ENTRIES;
+
+ /* Update head pointer and fire doorbell. */
+ ipc_protocol->p_ap_shm->msg_head = cpu_to_le32(new_head);
+ ipc_protocol->old_msg_tail =
+ le32_to_cpu(ipc_protocol->p_ap_shm->msg_tail);
+
+ ipc_pm_signal_hpda_doorbell(&ipc_protocol->pm, IPC_HP_MR, false);
+}
+
+/* Allocate and prepare an OPEN_PIPE message.
+ * This also allocates the memory for the new TDR structure and
+ * updates the pipe structure referenced in the preparation arguments.
+ */
+static int ipc_protocol_msg_prepipe_open(struct iosm_protocol *ipc_protocol,
+ union ipc_msg_prep_args *args)
+{
+ int index;
+ union ipc_mem_msg_entry *msg =
+ ipc_protocol_free_msg_get(ipc_protocol, &index);
+ struct ipc_pipe *pipe = args->pipe_open.pipe;
+ struct ipc_protocol_td *tdr;
+ struct sk_buff **skbr;
+
+ if (!msg) {
+ dev_err(ipc_protocol->dev, "failed to get free message");
+ return -EIO;
+ }
+
+ /* Allocate the skb ring that tracks the socket buffers in flight.
+ * The SKB ring is internal driver memory; there is no need to
+ * re-calculate the start and end addresses.
+ */
+ skbr = kcalloc(pipe->nr_of_entries, sizeof(*skbr), GFP_ATOMIC);
+ if (!skbr)
+ return -ENOMEM;
+
+ /* Allocate the transfer descriptors for the pipe. */
+ tdr = pci_alloc_consistent(ipc_protocol->pcie->pci,
+ pipe->nr_of_entries * sizeof(*tdr),
+ &pipe->phy_tdr_start);
+ if (!tdr) {
+ kfree(skbr);
+ dev_err(ipc_protocol->dev, "tdr alloc error");
+ return -ENOMEM;
+ }
+
+ pipe->max_nr_of_queued_entries = pipe->nr_of_entries - 1;
+ pipe->nr_of_queued_entries = 0;
+ pipe->tdr_start = tdr;
+ pipe->skbr_start = skbr;
+ pipe->old_tail = 0;
+
+ ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = 0;
+
+ msg->open_pipe.type_of_message = IPC_MEM_MSG_OPEN_PIPE;
+ msg->open_pipe.pipe_nr = pipe->pipe_nr;
+ msg->open_pipe.tdr_addr = cpu_to_le64(pipe->phy_tdr_start);
+ msg->open_pipe.tdr_entries = cpu_to_le16(pipe->nr_of_entries);
+ msg->open_pipe.accumulation_backoff =
+ cpu_to_le32(pipe->accumulation_backoff);
+ msg->open_pipe.irq_vector = cpu_to_le32(pipe->irq);
+
+ return index;
+}
+
+static int ipc_protocol_msg_prepipe_close(struct iosm_protocol *ipc_protocol,
+ union ipc_msg_prep_args *args)
+{
+ int index = -1;
+ union ipc_mem_msg_entry *msg =
+ ipc_protocol_free_msg_get(ipc_protocol, &index);
+ struct ipc_pipe *pipe = args->pipe_close.pipe;
+
+ if (!msg)
+ return -EIO;
+
+ msg->close_pipe.type_of_message = IPC_MEM_MSG_CLOSE_PIPE;
+ msg->close_pipe.pipe_nr = pipe->pipe_nr;
+
+ dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_CLOSE_PIPE(pipe_nr=%d)",
+ msg->close_pipe.pipe_nr);
+
+ return index;
+}
+
+static int ipc_protocol_msg_prep_sleep(struct iosm_protocol *ipc_protocol,
+ union ipc_msg_prep_args *args)
+{
+ int index = -1;
+ union ipc_mem_msg_entry *msg =
+ ipc_protocol_free_msg_get(ipc_protocol, &index);
+
+ if (!msg) {
+ dev_err(ipc_protocol->dev, "failed to get free message");
+ return -EIO;
+ }
+
+ /* Prepare and send the host sleep message to CP to enter or exit D3. */
+ msg->host_sleep.type_of_message = IPC_MEM_MSG_SLEEP;
+ msg->host_sleep.target = args->sleep.target; /* 0=host, 1=device */
+
+ /* state; 0=enter, 1=exit 2=enter w/o protocol */
+ msg->host_sleep.state = args->sleep.state;
+
+ dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_SLEEP(target=%d; state=%d)",
+ msg->host_sleep.target, msg->host_sleep.state);
+
+ return index;
+}
+
+static int ipc_protocol_msg_prep_feature_set(struct iosm_protocol *ipc_protocol,
+ union ipc_msg_prep_args *args)
+{
+ int index = -1;
+ union ipc_mem_msg_entry *msg =
+ ipc_protocol_free_msg_get(ipc_protocol, &index);
+
+ if (!msg) {
+ dev_err(ipc_protocol->dev, "failed to get free message");
+ return -EIO;
+ }
+
+ msg->feature_set.type_of_message = IPC_MEM_MSG_FEATURE_SET;
+ msg->feature_set.reset_enable = args->feature_set.reset_enable <<
+ RESET_BIT;
+
+ dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_FEATURE_SET(reset_enable=%d)",
+ msg->feature_set.reset_enable >> RESET_BIT);
+
+ return index;
+}
+
+/* Processes the messages consumed by CP. */
+bool ipc_protocol_msg_process(struct iosm_imem *ipc_imem, int irq)
+{
+ struct iosm_protocol *ipc_protocol = ipc_imem->ipc_protocol;
+ struct ipc_rsp **rsp_ring = ipc_protocol->rsp_ring;
+ bool msg_processed = false;
+ u32 i;
+
+ if (le32_to_cpu(ipc_protocol->p_ap_shm->msg_tail) >=
+ IPC_MEM_MSG_ENTRIES) {
+ dev_err(ipc_protocol->dev, "msg_tail out of range: %d",
+ le32_to_cpu(ipc_protocol->p_ap_shm->msg_tail));
+ return msg_processed;
+ }
+
+ if (irq != IMEM_IRQ_DONT_CARE &&
+ irq != ipc_protocol->p_ap_shm->ci.msg_irq_vector)
+ return msg_processed;
+
+ for (i = ipc_protocol->old_msg_tail;
+ i != le32_to_cpu(ipc_protocol->p_ap_shm->msg_tail);
+ i = (i + 1) % IPC_MEM_MSG_ENTRIES) {
+ union ipc_mem_msg_entry *msg =
+ &ipc_protocol->p_ap_shm->msg_ring[i];
+
+ dev_dbg(ipc_protocol->dev, "msg[%d]: type=%u status=%d", i,
+ msg->common.type_of_message,
+ msg->common.completion_status);
+
+ /* Update response with status and wake up waiting requestor */
+ if (rsp_ring[i]) {
+ rsp_ring[i]->status =
+ le32_to_cpu(msg->common.completion_status);
+ complete(&rsp_ring[i]->completion);
+ rsp_ring[i] = NULL;
+ }
+ msg_processed = true;
+ }
+
+ ipc_protocol->old_msg_tail = i;
+ return msg_processed;
+}
+
+/* Sends data from UL list to CP for the provided pipe by updating the Head
+ * pointer of given pipe.
+ */
+bool ipc_protocol_ul_td_send(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe,
+ struct sk_buff_head *p_ul_list)
+{
+ struct ipc_protocol_td *td;
+ bool hpda_pending = false;
+ struct sk_buff *skb;
+ s32 free_elements;
+ u32 head;
+ u32 tail;
+
+ if (!ipc_protocol->p_ap_shm) {
+ dev_err(ipc_protocol->dev, "driver is not initialized");
+ return false;
+ }
+
+ /* Get head and tail of the td list and calculate
+ * the number of free elements.
+ */
+ head = le32_to_cpu(ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr]);
+ tail = pipe->old_tail;
+
+ while (!skb_queue_empty(p_ul_list)) {
+ if (head < tail)
+ free_elements = tail - head - 1;
+ else
+ free_elements =
+ pipe->nr_of_entries - head + ((s32)tail - 1);
+
+ if (free_elements <= 0) {
+ dev_dbg(ipc_protocol->dev,
+ "no free td elements for UL pipe %d",
+ pipe->pipe_nr);
+ break;
+ }
+
+ /* Get the td address. */
+ td = &pipe->tdr_start[head];
+
+ /* Take the first element of the uplink list and add it
+ * to the td list.
+ */
+ skb = skb_dequeue(p_ul_list);
+ if (WARN_ON(!skb))
+ break;
+
+ /* Save the reference to the uplink skbuf. */
+ pipe->skbr_start[head] = skb;
+
+ td->buffer.address = IPC_CB(skb)->mapping;
+ td->scs = cpu_to_le32(skb->len) & cpu_to_le32(SIZE_MASK);
+ td->next = 0;
+
+ pipe->nr_of_queued_entries++;
+
+ /* Calculate the new head and save it. */
+ head++;
+ if (head >= pipe->nr_of_entries)
+ head = 0;
+
+ ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] =
+ cpu_to_le32(head);
+ }
+
+ if (pipe->old_head != head) {
+ dev_dbg(ipc_protocol->dev, "New UL TDs Pipe:%d", pipe->pipe_nr);
+
+ pipe->old_head = head;
+ /* Trigger doorbell because of pending UL packets. */
+ hpda_pending = true;
+ }
+
+ return hpda_pending;
+}
+
+/* Checks for Tail pointer update from CP and returns the data as SKB. */
+struct sk_buff *ipc_protocol_ul_td_process(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe)
+{
+ struct ipc_protocol_td *p_td = &pipe->tdr_start[pipe->old_tail];
+ struct sk_buff *skb = pipe->skbr_start[pipe->old_tail];
+
+ pipe->nr_of_queued_entries--;
+ pipe->old_tail++;
+ if (pipe->old_tail >= pipe->nr_of_entries)
+ pipe->old_tail = 0;
+
+ if (!p_td->buffer.address) {
+ dev_err(ipc_protocol->dev, "Td buffer address is NULL");
+ return NULL;
+ }
+
+ if (p_td->buffer.address != IPC_CB(skb)->mapping) {
+ dev_err(ipc_protocol->dev,
+ "pipe %d: invalid buf_addr or skb_data",
+ pipe->pipe_nr);
+ return NULL;
+ }
+
+ return skb;
+}
+
+/* Allocates an SKB for CP to send data and updates the Head Pointer
+ * of the given Pipe#.
+ */
+bool ipc_protocol_dl_td_prepare(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe)
+{
+ struct ipc_protocol_td *td;
+ dma_addr_t mapping = 0;
+ u32 head, new_head;
+ struct sk_buff *skb;
+ u32 tail;
+
+ /* Get head and tail of the td list and calculate
+ * the number of free elements.
+ */
+ head = le32_to_cpu(ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr]);
+ tail = le32_to_cpu(ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr]);
+
+ new_head = head + 1;
+ if (new_head >= pipe->nr_of_entries)
+ new_head = 0;
+
+ if (new_head == tail)
+ return false;
+
+ /* Get the td address. */
+ td = &pipe->tdr_start[head];
+
+ /* Allocate the skbuf for the descriptor. */
+ skb = ipc_pcie_alloc_skb(ipc_protocol->pcie, pipe->buf_size, GFP_ATOMIC,
+ &mapping, DMA_FROM_DEVICE,
+ IPC_MEM_DL_ETH_OFFSET);
+ if (!skb)
+ return false;
+
+ td->buffer.address = mapping;
+ td->scs = cpu_to_le32(pipe->buf_size) & cpu_to_le32(SIZE_MASK);
+ td->next = 0;
+
+ /* store the new head value. */
+ ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] =
+ cpu_to_le32(new_head);
+
+ /* Save the reference to the skbuf. */
+ pipe->skbr_start[head] = skb;
+
+ pipe->nr_of_queued_entries++;
+
+ return true;
+}
+
+/* Processes DL TDs. */
+struct sk_buff *ipc_protocol_dl_td_process(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe)
+{
+ u32 tail =
+ le32_to_cpu(ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr]);
+ struct ipc_protocol_td *p_td;
+ struct sk_buff *skb;
+
+ if (!pipe->tdr_start)
+ return NULL;
+
+ /* Copy the reference to the downlink buffer. */
+ p_td = &pipe->tdr_start[pipe->old_tail];
+ skb = pipe->skbr_start[pipe->old_tail];
+
+ /* Reset the ring elements. */
+ pipe->skbr_start[pipe->old_tail] = NULL;
+
+ pipe->nr_of_queued_entries--;
+
+ pipe->old_tail++;
+ if (pipe->old_tail >= pipe->nr_of_entries)
+ pipe->old_tail = 0;
+
+ if (!skb) {
+ dev_err(ipc_protocol->dev, "skb is null");
+ goto ret;
+ } else if (!p_td->buffer.address) {
+ dev_err(ipc_protocol->dev, "td/buffer address is null");
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+ skb = NULL;
+ goto ret;
+ }
+
+ if (!IPC_CB(skb)) {
+ dev_err(ipc_protocol->dev, "pipe# %d, tail: %d skb_cb is NULL",
+ pipe->pipe_nr, tail);
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+ skb = NULL;
+ goto ret;
+ }
+
+ if (p_td->buffer.address != IPC_CB(skb)->mapping) {
+ dev_err(ipc_protocol->dev, "invalid buf=%p or skb=%p",
+ (void *)p_td->buffer.address, skb->data);
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+ skb = NULL;
+ goto ret;
+ } else if ((le32_to_cpu(p_td->scs) & SIZE_MASK) > pipe->buf_size) {
+ dev_err(ipc_protocol->dev, "invalid buffer size %d > %d",
+ le32_to_cpu(p_td->scs) & SIZE_MASK,
+ pipe->buf_size);
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+ skb = NULL;
+ goto ret;
+ } else if (le32_to_cpu(p_td->scs) >> COMPLETION_STATUS ==
+ IPC_MEM_TD_CS_ABORT) {
+ /* Discard aborted buffers. */
+ dev_dbg(ipc_protocol->dev, "discard 'aborted' buffers");
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+ skb = NULL;
+ goto ret;
+ }
+
+ /* Set the length field in skbuf. */
+ skb_put(skb, le32_to_cpu(p_td->scs) & SIZE_MASK);
+
+ret:
+ return skb;
+}
+
+void ipc_protocol_get_head_tail_index(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe, u32 *head,
+ u32 *tail)
+{
+ struct ipc_protocol_ap_shm *ipc_ap_shm = ipc_protocol->p_ap_shm;
+
+ if (head)
+ *head = le32_to_cpu(ipc_ap_shm->head_array[pipe->pipe_nr]);
+
+ if (tail)
+ *tail = le32_to_cpu(ipc_ap_shm->tail_array[pipe->pipe_nr]);
+}
+
+/* Frees the TDs given to CP. */
+void ipc_protocol_pipe_cleanup(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe)
+{
+ struct sk_buff *skb;
+ u32 head;
+ u32 tail;
+
+ /* Get the start and the end of the buffer list. */
+ head = le32_to_cpu(ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr]);
+ tail = pipe->old_tail;
+
+ /* Reset tail and head to 0. */
+ ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr] = 0;
+ ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = 0;
+
+ /* Free pending uplink and downlink buffers. */
+ if (pipe->skbr_start) {
+ while (head != tail) {
+ /* Get the reference to the skbuf,
+ * which is on the way and free it.
+ */
+ skb = pipe->skbr_start[tail];
+ if (skb)
+ ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+
+ tail++;
+ if (tail >= pipe->nr_of_entries)
+ tail = 0;
+ }
+
+ kfree(pipe->skbr_start);
+ pipe->skbr_start = NULL;
+ }
+
+ pipe->old_tail = 0;
+
+ /* Free and reset the td and skbuf circular buffers. kfree is safe! */
+ if (pipe->tdr_start) {
+ pci_free_consistent(ipc_protocol->pcie->pci,
+ sizeof(*pipe->tdr_start) *
+ pipe->nr_of_entries,
+ pipe->tdr_start, pipe->phy_tdr_start);
+
+ pipe->tdr_start = NULL;
+ }
+}
+
+enum ipc_mem_device_ipc_state ipc_protocol_get_ipc_status(struct iosm_protocol
+ *ipc_protocol)
+{
+ return (enum ipc_mem_device_ipc_state)
+ le32_to_cpu(ipc_protocol->p_ap_shm->device_info.ipc_status);
+}
+
+enum ipc_mem_exec_stage
+ipc_protocol_get_ap_exec_stage(struct iosm_protocol *ipc_protocol)
+{
+ return le32_to_cpu(ipc_protocol->p_ap_shm->device_info.execution_stage);
+}
+
+int ipc_protocol_msg_prep(struct iosm_imem *ipc_imem,
+ enum ipc_msg_prep_type msg_type,
+ union ipc_msg_prep_args *args)
+{
+ struct iosm_protocol *ipc_protocol = ipc_imem->ipc_protocol;
+
+ switch (msg_type) {
+ case IPC_MSG_PREP_SLEEP:
+ return ipc_protocol_msg_prep_sleep(ipc_protocol, args);
+
+ case IPC_MSG_PREP_PIPE_OPEN:
+ return ipc_protocol_msg_prepipe_open(ipc_protocol, args);
+
+ case IPC_MSG_PREP_PIPE_CLOSE:
+ return ipc_protocol_msg_prepipe_close(ipc_protocol, args);
+
+ case IPC_MSG_PREP_FEATURE_SET:
+ return ipc_protocol_msg_prep_feature_set(ipc_protocol, args);
+
+ /* Unsupported messages in protocol */
+ case IPC_MSG_PREP_MAP:
+ case IPC_MSG_PREP_UNMAP:
+ default:
+ dev_err(ipc_protocol->dev,
+ "unsupported message type: %d in protocol", msg_type);
+ return -EINVAL;
+ }
+}
+
+u32
+ipc_protocol_pm_dev_get_sleep_notification(struct iosm_protocol *ipc_protocol)
+{
+ struct ipc_protocol_ap_shm *ipc_ap_shm = ipc_protocol->p_ap_shm;
+
+ return le32_to_cpu(ipc_ap_shm->device_info.device_sleep_notification);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
new file mode 100644
index 000000000000..35aa1387306e
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
@@ -0,0 +1,444 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PROTOCOL_OPS_H
+#define IOSM_IPC_PROTOCOL_OPS_H
+
+#define SIZE_MASK 0x00FFFFFF
+#define COMPLETION_STATUS 24
+#define RESET_BIT 7
+
+/**
+ * enum ipc_mem_td_cs - Completion status of a TD
+ * @IPC_MEM_TD_CS_INVALID: Initial status - td not yet used.
+ * @IPC_MEM_TD_CS_PARTIAL_TRANSFER: More data pending -> next TD used for this
+ * @IPC_MEM_TD_CS_END_TRANSFER: IO transfer is complete.
+ * @IPC_MEM_TD_CS_OVERFLOW: IO transfer too small for the buffer to write
+ * @IPC_MEM_TD_CS_ABORT: TD marked as abort and shall be discarded
+ * by AP.
+ * @IPC_MEM_TD_CS_ERROR: General error.
+ */
+enum ipc_mem_td_cs {
+ IPC_MEM_TD_CS_INVALID,
+ IPC_MEM_TD_CS_PARTIAL_TRANSFER,
+ IPC_MEM_TD_CS_END_TRANSFER,
+ IPC_MEM_TD_CS_OVERFLOW,
+ IPC_MEM_TD_CS_ABORT,
+ IPC_MEM_TD_CS_ERROR,
+};
+
+/**
+ * enum ipc_mem_msg_cs - Completion status of IPC Message
+ * @IPC_MEM_MSG_CS_INVALID: Initial status.
+ * @IPC_MEM_MSG_CS_SUCCESS: IPC Message completion success.
+ * @IPC_MEM_MSG_CS_ERROR: Message send error.
+ */
+enum ipc_mem_msg_cs {
+ IPC_MEM_MSG_CS_INVALID,
+ IPC_MEM_MSG_CS_SUCCESS,
+ IPC_MEM_MSG_CS_ERROR,
+};
+
+/**
+ * struct ipc_msg_prep_args_pipe - struct for pipe args for message preparation
+ * @pipe: Pipe to open/close
+ */
+struct ipc_msg_prep_args_pipe {
+ struct ipc_pipe *pipe;
+};
+
+/**
+ * struct ipc_msg_prep_args_sleep - struct for sleep args for message
+ * preparation
+ * @target: 0=host, 1=device
+ * @state: 0=enter sleep, 1=exit sleep
+ */
+struct ipc_msg_prep_args_sleep {
+ unsigned int target;
+ unsigned int state;
+};
+
+/**
+ * struct ipc_msg_prep_feature_set - struct for feature set argument for
+ * message preparation
+ * @reset_enable: 0=out-of-band, 1=in-band-crash notification
+ */
+struct ipc_msg_prep_feature_set {
+ u8 reset_enable;
+};
+
+/**
+ * struct ipc_msg_prep_map - struct for map argument for message preparation
+ * @region_id: Region to map
+ * @addr: Pcie addr of region to map
+ * @size: Size of the region to map
+ */
+struct ipc_msg_prep_map {
+ unsigned int region_id;
+ unsigned long addr;
+ size_t size;
+};
+
+/**
+ * struct ipc_msg_prep_unmap - struct for unmap argument for message preparation
+ * @region_id: Region to unmap
+ */
+struct ipc_msg_prep_unmap {
+ unsigned int region_id;
+};
+
+/**
+ * struct ipc_msg_prep_args - Union to handle different message types
+ * @pipe_open: Pipe open message preparation struct
+ * @pipe_close: Pipe close message preparation struct
+ * @sleep: Sleep message preparation struct
+ * @feature_set: Feature set message preparation struct
+ * @map: Memory map message preparation struct
+ * @unmap: Memory unmap message preparation struct
+ */
+union ipc_msg_prep_args {
+ struct ipc_msg_prep_args_pipe pipe_open;
+ struct ipc_msg_prep_args_pipe pipe_close;
+ struct ipc_msg_prep_args_sleep sleep;
+ struct ipc_msg_prep_feature_set feature_set;
+ struct ipc_msg_prep_map map;
+ struct ipc_msg_prep_unmap unmap;
+};
+
+/**
+ * enum ipc_msg_prep_type - Enum for message prepare actions
+ * @IPC_MSG_PREP_SLEEP: Sleep message preparation type
+ * @IPC_MSG_PREP_PIPE_OPEN: Pipe open message preparation type
+ * @IPC_MSG_PREP_PIPE_CLOSE: Pipe close message preparation type
+ * @IPC_MSG_PREP_FEATURE_SET: Feature set message preparation type
+ * @IPC_MSG_PREP_MAP: Memory map message preparation type
+ * @IPC_MSG_PREP_UNMAP: Memory unmap message preparation type
+ */
+enum ipc_msg_prep_type {
+ IPC_MSG_PREP_SLEEP,
+ IPC_MSG_PREP_PIPE_OPEN,
+ IPC_MSG_PREP_PIPE_CLOSE,
+ IPC_MSG_PREP_FEATURE_SET,
+ IPC_MSG_PREP_MAP,
+ IPC_MSG_PREP_UNMAP,
+};
+
+/**
+ * struct ipc_rsp - Response to sent message
+ * @completion: For waking up requestor
+ * @status: Completion status
+ */
+struct ipc_rsp {
+ struct completion completion;
+ enum ipc_mem_msg_cs status;
+};
+
+/**
+ * enum ipc_mem_msg - Type-definition of the messages.
+ * @IPC_MEM_MSG_OPEN_PIPE: AP ->CP: Open a pipe
+ * @IPC_MEM_MSG_CLOSE_PIPE: AP ->CP: Close a pipe
+ * @IPC_MEM_MSG_ABORT_PIPE: AP ->CP: wait for completion of the
+ * running transfer and abort all pending
+ * IO-transfers for the pipe
+ * @IPC_MEM_MSG_SLEEP: AP ->CP: host enter or exit sleep
+ * @IPC_MEM_MSG_FEATURE_SET: AP ->CP: Intel feature configuration
+ */
+enum ipc_mem_msg {
+ IPC_MEM_MSG_OPEN_PIPE = 0x01,
+ IPC_MEM_MSG_CLOSE_PIPE = 0x02,
+ IPC_MEM_MSG_ABORT_PIPE = 0x03,
+ IPC_MEM_MSG_SLEEP = 0x04,
+ IPC_MEM_MSG_FEATURE_SET = 0xF0,
+};
+
+/**
+ * struct ipc_mem_msg_open_pipe - Message structure for open pipe
+ * @tdr_addr: Tdr address
+ * @tdr_entries: Tdr entries
+ * @pipe_nr: Pipe number
+ * @type_of_message: Message type
+ * @irq_vector: MSI vector number
+ * @accumulation_backoff: Time in usec for data accumulation
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_open_pipe {
+ __le64 tdr_addr;
+ __le16 tdr_entries;
+ u8 pipe_nr;
+ u8 type_of_message;
+ __le32 irq_vector;
+ __le32 accumulation_backoff;
+ __le32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_close_pipe - Message structure for close pipe
+ * @reserved1: Reserved
+ * @reserved2: Reserved
+ * @pipe_nr: Pipe number
+ * @type_of_message: Message type
+ * @reserved3: Reserved
+ * @reserved4: Reserved
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_close_pipe {
+ __le32 reserved1[2];
+ __le16 reserved2;
+ u8 pipe_nr;
+ u8 type_of_message;
+ __le32 reserved3;
+ __le32 reserved4;
+ __le32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_abort_pipe - Message structure for abort pipe
+ * @reserved1: Reserved
+ * @reserved2: Reserved
+ * @pipe_nr: Pipe number
+ * @type_of_message: Message type
+ * @reserved3: Reserved
+ * @reserved4: Reserved
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_abort_pipe {
+ __le32 reserved1[2];
+ __le16 reserved2;
+ u8 pipe_nr;
+ u8 type_of_message;
+ __le32 reserved3;
+ __le32 reserved4;
+ __le32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_host_sleep - Message structure for sleep message.
+ * @reserved1: Reserved
+ * @target: 0=host, 1=device; the host or EP device
+ * is the message target
+ * @state: 0=enter sleep, 1=exit sleep,
+ * 2=enter sleep no protocol
+ * @reserved2: Reserved
+ * @type_of_message: Message type
+ * @reserved3: Reserved
+ * @reserved4: Reserved
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_host_sleep {
+ __le32 reserved1[2];
+ u8 target;
+ u8 state;
+ u8 reserved2;
+ u8 type_of_message;
+ __le32 reserved3;
+ __le32 reserved4;
+ __le32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_feature_set - Message structure for feature_set message
+ * @reserved1: Reserved
+ * @reserved2: Reserved
+ * @reset_enable: 0=out-of-band, 1=in-band-crash notification
+ * @type_of_message: Message type
+ * @reserved3: Reserved
+ * @reserved4: Reserved
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_feature_set {
+ __le32 reserved1[2];
+ __le16 reserved2;
+ u8 reset_enable;
+ u8 type_of_message;
+ __le32 reserved3;
+ __le32 reserved4;
+ __le32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_common - Message structure for completion status update.
+ * @reserved1: Reserved
+ * @reserved2: Reserved
+ * @type_of_message: Message type
+ * @reserved3: Reserved
+ * @reserved4: Reserved
+ * @completion_status: Message Completion Status
+ */
+struct ipc_mem_msg_common {
+ __le32 reserved1[2];
+ u8 reserved2[3];
+ u8 type_of_message;
+ __le32 reserved3;
+ __le32 reserved4;
+ __le32 completion_status;
+};
+
+/**
+ * union ipc_mem_msg_entry - Union with all possible messages.
+ * @open_pipe: Open pipe message struct
+ * @close_pipe: Close pipe message struct
+ * @abort_pipe: Abort pipe message struct
+ * @host_sleep: Host sleep message struct
+ * @feature_set: Feature set message struct
+ * @common: Used to access msg_type and to set the completion status
+ */
+union ipc_mem_msg_entry {
+ struct ipc_mem_msg_open_pipe open_pipe;
+ struct ipc_mem_msg_close_pipe close_pipe;
+ struct ipc_mem_msg_abort_pipe abort_pipe;
+ struct ipc_mem_msg_host_sleep host_sleep;
+ struct ipc_mem_msg_feature_set feature_set;
+ struct ipc_mem_msg_common common;
+};
+
+/* Transfer descriptor definition. */
+struct ipc_protocol_td {
+ union {
+ /* 0 : 63 - 64-bit address of a buffer in host memory. */
+ dma_addr_t address;
+ struct {
+ /* 0 : 31 - 32 bit address */
+ __le32 address;
+ /* 32 : 63 - corresponding descriptor */
+ __le32 desc;
+ } __packed shm;
+ } buffer;
+
+ /* Bytes 0..2 - Size of the buffer.
+ * The host provides the size of the buffer queued.
+ * The EP device reads this value and shall update
+ * it for downlink transfers to indicate the
+ * amount of data written into the buffer.
+ * Byte 3 - Completion status of the TD.
+ * When queuing the TD, the host sets the status
+ * to 0. The EP device updates this field when
+ * completing the TD.
+ */
+ __le32 scs;
+
+ /* Byte 0 - number of following descriptors
+ * Bytes 1..3 - reserved
+ */
+ __le32 next;
+} __packed;
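+
+/* Illustration only (not an additional wire-format rule): with the masks
+ * defined above, the driver decodes @scs as
+ * size = le32_to_cpu(td->scs) & SIZE_MASK;
+ * status = le32_to_cpu(td->scs) >> COMPLETION_STATUS;
+ * see e.g. ipc_protocol_dl_td_process().
+ */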
+
+/**
+ * ipc_protocol_msg_prep - Prepare message based upon message type
+ * @ipc_imem: iosm_protocol instance
+ * @msg_type: message prepare type
+ * @args: message arguments
+ *
+ * Return: 0 on success and failure value on error
+ */
+int ipc_protocol_msg_prep(struct iosm_imem *ipc_imem,
+ enum ipc_msg_prep_type msg_type,
+ union ipc_msg_prep_args *args);
+
+/**
+ * ipc_protocol_msg_hp_update - Function for head pointer update
+ * of message ring
+ * @ipc_imem: iosm_protocol instance
+ */
+void ipc_protocol_msg_hp_update(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_protocol_msg_process - Function for processing responses
+ * to IPC messages
+ * @ipc_imem: iosm_protocol instance
+ * @irq: IRQ vector
+ *
+ * Return: True on success, false if error
+ */
+bool ipc_protocol_msg_process(struct iosm_imem *ipc_imem, int irq);
+
+/**
+ * ipc_protocol_ul_td_send - Function for sending the data to CP
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe instance
+ * @p_ul_list: uplink sk_buff list
+ *
+ * Return: true in success, false in case of error
+ */
+bool ipc_protocol_ul_td_send(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe,
+ struct sk_buff_head *p_ul_list);
+
+/**
+ * ipc_protocol_ul_td_process - Function for processing the sent data
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: sk_buff instance
+ */
+struct sk_buff *ipc_protocol_ul_td_process(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_dl_td_prepare - Function for providing DL TDs to CP
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: true in success, false in case of error
+ */
+bool ipc_protocol_dl_td_prepare(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_dl_td_process - Function for processing the DL data
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: sk_buff instance
+ */
+struct sk_buff *ipc_protocol_dl_td_process(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_get_head_tail_index - Function for getting Head and Tail
+ * pointer index of given pipe
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe Instance
+ * @head: head pointer index of the given pipe
+ * @tail: tail pointer index of the given pipe
+ */
+void ipc_protocol_get_head_tail_index(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe, u32 *head,
+ u32 *tail);
+/**
+ * ipc_protocol_get_ipc_status - Function for getting the IPC Status
+ * @ipc_protocol: iosm_protocol instance
+ *
+ * Return: Returns IPC State
+ */
+enum ipc_mem_device_ipc_state ipc_protocol_get_ipc_status(struct iosm_protocol
+ *ipc_protocol);
+
+/**
+ * ipc_protocol_pipe_cleanup - Function to cleanup pipe resources
+ * @ipc_protocol: iosm_protocol instance
+ * @pipe: Pipe instance
+ */
+void ipc_protocol_pipe_cleanup(struct iosm_protocol *ipc_protocol,
+ struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_get_ap_exec_stage - Function for getting AP Exec Stage
+ * @ipc_protocol: pointer to struct iosm protocol
+ *
+ * Return: returns BOOT Stages
+ */
+enum ipc_mem_exec_stage
+ipc_protocol_get_ap_exec_stage(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_pm_dev_get_sleep_notification - Function for getting Dev Sleep
+ * notification
+ * @ipc_protocol: iosm_protocol instance
+ *
+ * Return: Returns dev PM State
+ */
+u32 ipc_protocol_pm_dev_get_sleep_notification(struct iosm_protocol
+ *ipc_protocol);
+#endif
--
2.25.1

2021-05-20 14:10:08

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 16/16] net: iosm: infrastructure

1) Kconfig & Makefile changes for IOSM Driver compilation.
2) Add IOSM Driver documentation.
3) Modified if_link.h file to support link type iosm.
4) Modified MAINTAINER file for IOSM Driver addition.

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3:
* Clean-up driver/net Kconfig & Makefile (Changes available as
part of wwan subsystem).
* Removed NET dependency key word from iosm Kconfig.
* Removed IOCTL section from documentation.
v2:
* Moved driver documentation to RsT file.
* Modified if_link.h file to support link type iosm.
---
.../networking/device_drivers/index.rst | 1 +
.../networking/device_drivers/wwan/index.rst | 18 ++++
.../networking/device_drivers/wwan/iosm.rst | 95 +++++++++++++++++++
MAINTAINERS | 7 ++
drivers/net/wwan/Kconfig | 12 +++
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/iosm/Makefile | 26 +++++
include/uapi/linux/if_link.h | 10 ++
tools/include/uapi/linux/if_link.h | 10 ++
9 files changed, 180 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/index.rst
create mode 100755 Documentation/networking/device_drivers/wwan/iosm.rst
create mode 100644 drivers/net/wwan/iosm/Makefile

diff --git a/Documentation/networking/device_drivers/index.rst b/Documentation/networking/device_drivers/index.rst
index d8279de7bf25..3a5a1d46e77e 100644
--- a/Documentation/networking/device_drivers/index.rst
+++ b/Documentation/networking/device_drivers/index.rst
@@ -18,6 +18,7 @@ Contents:
qlogic/index
wan/index
wifi/index
+ wwan/index

.. only:: subproject and html

diff --git a/Documentation/networking/device_drivers/wwan/index.rst b/Documentation/networking/device_drivers/wwan/index.rst
new file mode 100644
index 000000000000..1cb8c7371401
--- /dev/null
+++ b/Documentation/networking/device_drivers/wwan/index.rst
@@ -0,0 +1,18 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+WWAN Device Drivers
+===================
+
+Contents:
+
+.. toctree::
+ :maxdepth: 2
+
+ iosm
+
+.. only:: subproject and html
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Documentation/networking/device_drivers/wwan/iosm.rst b/Documentation/networking/device_drivers/wwan/iosm.rst
new file mode 100755
index 000000000000..ab5509bd84f6
--- /dev/null
+++ b/Documentation/networking/device_drivers/wwan/iosm.rst
@@ -0,0 +1,95 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+.. Copyright (C) 2020-21 Intel Corporation
+
+.. _iosm_driver_doc:
+
+===========================================
+IOSM Driver for Intel M.2 PCIe based Modems
+===========================================
+The IOSM (IPC over Shared Memory) driver is a WWAN PCIe host driver developed
+for Linux or Chrome platforms for data exchange over the PCIe interface
+between the host platform & Intel M.2 Modem. The driver exposes an interface
+conforming to the MBIM protocol [1]. Any front-end application (e.g. Modem
+Manager) can easily manage the MBIM interface to enable data communication
+towards WWAN.
+
+Basic usage
+===========
+MBIM functions are inactive when unmanaged. The IOSM driver only provides
+the userspace interface, an MBIM "WWAN port" representing the MBIM control
+channel, and does not play any role in managing its functionality. It is
+the job of a userspace application to detect port enumeration and enable
+MBIM functionality.
+
+Examples of such userspace applications are:
+- mbimcli (included with the libmbim [2] library), and
+- Modem Manager [3]
+
+The management application carries out the following actions to establish
+an MBIM IP session:
+- open the MBIM control channel
+- configure network connection settings
+- connect to network
+- configure IP network interface
+
+Management application development
+==================================
+The driver and userspace interfaces are described below. The MBIM protocol is
+described in [1] Mobile Broadband Interface Model v1.0 Errata-1.
+
+MBIM control channel userspace ABI
+----------------------------------
+
+/dev/wwan0p3MBIM character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an MBIM interface to the MBIM function by implementing an
+MBIM WWAN port. The userspace end of the control channel pipe is the
+/dev/wwan0p3MBIM character device. The application shall use this interface
+for MBIM protocol communication.
+
+Fragmentation
+~~~~~~~~~~~~~
+The userspace application is responsible for all control message fragmentation
+and defragmentation as per MBIM specification.
+
+/dev/wwan0p3MBIM write()
+~~~~~~~~~~~~~~~~~~~~~~~~
+The MBIM control messages from the management application must not exceed the
+negotiated control message size.
+
+/dev/wwan0p3MBIM read()
+~~~~~~~~~~~~~~~~~~~~~~~
+The management application must accept control messages of up to the
+negotiated control message size.
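+
+As an illustration only (hypothetical user-space code, not part of the
+driver), a minimal control channel round trip could look like::
+
+  #include <fcntl.h>
+  #include <unistd.h>
+
+  int main(void)
+  {
+          /* msg: an already encoded MBIM control message (placeholder
+           * contents here); resp must hold up to the negotiated maximum
+           * control message size.
+           */
+          unsigned char msg[64] = { 0 };
+          unsigned char resp[4096];
+          int fd = open("/dev/wwan0p3MBIM", O_RDWR);
+
+          if (fd < 0)
+                  return 1;
+          write(fd, msg, sizeof(msg));
+          read(fd, resp, sizeof(resp));
+          close(fd);
+          return 0;
+  }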
+
+MBIM data channel userspace ABI
+-------------------------------
+
+inmX network device
+~~~~~~~~~~~~~~~~~~~~
+The IOSM driver exposes an IP link interface "inmX" of type "IOSM" for IP
+traffic. The iproute network utility is used for creating the "inmX" network
+interface and for associating it with an MBIM IP session. The driver supports
+up to 8 IP sessions for simultaneous IP communication.
+
+The userspace management application is responsible for creating a new IP
+link prior to establishing an MBIM IP session where the SessionId is greater
+than 0.
+
+For example, creating a new IP link for an MBIM IP session with SessionId 1::
+
+ ip link add link wwan0 name inm1 type IOSM if_id 1
+
+The driver will automatically map the "inm1" network device to MBIM IP session 1.
+
+References
+==========
+[1] "MBIM (Mobile Broadband Interface Model) Errata-1"
+ - https://www.usb.org/document-library/
+
+[2] libmbim - "a glib-based library for talking to WWAN modems and
+ devices which speak the Mobile Interface Broadband Model (MBIM)
+ protocol"
+ - http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] Modem Manager - "a DBus-activated daemon which controls mobile
+ broadband (2G/3G/4G) devices and connections"
+ - http://www.freedesktop.org/wiki/Software/ModemManager/
diff --git a/MAINTAINERS b/MAINTAINERS
index 008fcad7ac00..79feacdc7dd0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9443,6 +9443,13 @@ L: [email protected]
S: Maintained
F: drivers/platform/x86/intel-wmi-thunderbolt.c

+INTEL WWAN IOSM DRIVER
+M: M Chetan Kumar <[email protected]>
+M: Intel Corporation <[email protected]>
+L: [email protected]
+S: Maintained
+F: drivers/net/wwan/iosm/
+
INTEL(R) TRACE HUB
M: Alexander Shishkin <[email protected]>
S: Supported
diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
index 7ad1920120bc..d91bc646706e 100644
--- a/drivers/net/wwan/Kconfig
+++ b/drivers/net/wwan/Kconfig
@@ -34,4 +34,16 @@ config MHI_WWAN_CTRL
To compile this driver as a module, choose M here: the module will be
called mhi_wwan_ctrl.

+config IOSM
+ tristate "IOSM Driver for Intel M.2 WWAN Device"
+ select WWAN_CORE
+ depends on INTEL_IOMMU
+ help
+ This driver enables Intel M.2 WWAN Device communication.
+
+ If you have one of those Intel M.2 WWAN Modules and wish to use it in
+ Linux say Y/M here.
+
+ If unsure, say N.
+
endif # WWAN
diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
index 556cd90958ca..5ff6725943da 100644
--- a/drivers/net/wwan/Makefile
+++ b/drivers/net/wwan/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_WWAN_CORE) += wwan.o
wwan-objs += wwan_core.o

obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o
+obj-$(CONFIG_IOSM) += iosm/
diff --git a/drivers/net/wwan/iosm/Makefile b/drivers/net/wwan/iosm/Makefile
new file mode 100644
index 000000000000..cdeeb9357af6
--- /dev/null
+++ b/drivers/net/wwan/iosm/Makefile
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: (GPL-2.0-only)
+#
+# Copyright (C) 2020-21 Intel Corporation.
+#
+
+iosm-y = \
+ iosm_ipc_task_queue.o \
+ iosm_ipc_imem.o \
+ iosm_ipc_imem_ops.o \
+ iosm_ipc_mmio.o \
+ iosm_ipc_port.o \
+ iosm_ipc_wwan.o \
+ iosm_ipc_uevent.o \
+ iosm_ipc_pm.o \
+ iosm_ipc_pcie.o \
+ iosm_ipc_irq.o \
+ iosm_ipc_chnl_cfg.o \
+ iosm_ipc_protocol.o \
+ iosm_ipc_protocol_ops.o \
+ iosm_ipc_mux.o \
+ iosm_ipc_mux_codec.o
+
+obj-$(CONFIG_IOSM) := iosm.o
+
+# compilation flags
+ccflags-y += -DDEBUG
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index cd5b382a4138..f7c3beebb074 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -1251,4 +1251,14 @@ struct ifla_rmnet_flags {
__u32 mask;
};

+/* IOSM Section */
+
+enum {
+ IFLA_IOSM_UNSPEC,
+ IFLA_IOSM_IF_ID,
+ __IFLA_IOSM_MAX,
+};
+
+#define IFLA_IOSM_MAX (__IFLA_IOSM_MAX - 1)
+
#endif /* _UAPI_LINUX_IF_LINK_H */
diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h
index d208b2af697f..cb496d0de39e 100644
--- a/tools/include/uapi/linux/if_link.h
+++ b/tools/include/uapi/linux/if_link.h
@@ -1046,4 +1046,14 @@ struct ifla_rmnet_flags {
__u32 mask;
};

+/* IOSM Section */
+
+enum {
+ IFLA_IOSM_UNSPEC,
+ IFLA_IOSM_IF_ID,
+ __IFLA_IOSM_MAX,
+};
+
+#define IFLA_IOSM_MAX (__IFLA_IOSM_MAX - 1)
+
#endif /* _UAPI_LINUX_IF_LINK_H */
--
2.25.1

2021-05-20 14:10:09

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 15/16] net: iosm: net driver

1) Create net device & implement net operations for data/IP communication.
2) Bind IP Link to mux IP session for simultaneous IP traffic.
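
The binding in 2) is done through an rtnl_link_ops of kind "iosm"
(iosm_netlink in this patch). For reference, a minimal sketch of the module
init/exit registration is shown below (the init/exit functions are
hypothetical; only iosm_netlink is part of this patch):

    /* Hypothetical module init: make the "iosm" link kind available so
     * that "ip link add ... type IOSM if_id N" creates per-session links.
     */
    static int __init iosm_net_init(void)
    {
            return rtnl_link_register(&iosm_netlink);
    }

    static void __exit iosm_net_exit(void)
    {
            rtnl_link_unregister(&iosm_netlink);
    }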

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3:
* Clean-up DSS channel implementation.
* Aligned ipc_ prefix for function name to be consistent across file.
v2:
* Removed Ethernet header & VLAN tag handling from wwan net driver.
* Implement rtnet_link interface for IP traffic handling.
---
drivers/net/wwan/iosm/iosm_ipc_wwan.c | 439 ++++++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_wwan.h | 55 ++++
2 files changed, 494 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
new file mode 100644
index 000000000000..02c35bc86674
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/if_arp.h>
+#include <linux/if_link.h>
+#include <net/rtnetlink.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_wwan.h"
+
+#define IOSM_IP_TYPE_MASK 0xF0
+#define IOSM_IP_TYPE_IPV4 0x40
+#define IOSM_IP_TYPE_IPV6 0x60
+
+#define IOSM_IF_ID_PAYLOAD 2
+
+static struct device_type wwan_type = { .name = "wwan" };
+
+static const struct nla_policy ipc_wwan_policy[IFLA_IOSM_MAX + 1] = {
+ [IFLA_IOSM_IF_ID] = { .type = NLA_U16 },
+};
+
+/**
+ * struct iosm_net_link - This structure includes information about interface
+ * dev.
+ * @if_id: Interface id for device.
+ * @ch_id: IPC channel number for which interface device is created.
+ * @netdev: Pointer to network interface device structure
+ * @ipc_wwan: Pointer to iosm_wwan struct
+ */
+struct iosm_net_link {
+ int if_id;
+ int ch_id;
+ struct net_device *netdev;
+ struct iosm_wwan *ipc_wwan;
+};
+
+/**
+ * struct iosm_wwan - This structure contains information about WWAN root device
+ * and interface to the IPC layer.
+ * @netdev: Pointer to network interface device structure.
+ * @sub_netlist: List of netlink interfaces
+ * @ipc_imem: Pointer to imem data-struct
+ * @dev: Pointer device structure
+ * @if_mutex: Mutex used for add and remove interface id
+ * @is_registered: Registration status with netdev
+ */
+struct iosm_wwan {
+ struct net_device *netdev;
+ struct iosm_net_link __rcu *sub_netlist[MAX_IP_MUX_SESSION];
+ struct iosm_imem *ipc_imem;
+ struct device *dev;
+ struct mutex if_mutex; /* Mutex used for add and remove interface id */
+ u8 is_registered:1;
+};
+
+/* Bring-up the wwan net link */
+static int ipc_wwan_link_open(struct net_device *netdev)
+{
+ struct iosm_net_link *netlink = netdev_priv(netdev);
+ struct iosm_wwan *ipc_wwan = netlink->ipc_wwan;
+ int if_id = netlink->if_id;
+ int ret = -EINVAL;
+
+ if (if_id < IP_MUX_SESSION_START || if_id > IP_MUX_SESSION_END)
+ return ret;
+
+ mutex_lock(&ipc_wwan->if_mutex);
+
+ /* get channel id */
+ netlink->ch_id =
+ ipc_imem_sys_wwan_open(ipc_wwan->ipc_imem, if_id);
+
+ if (netlink->ch_id < 0) {
+ dev_err(ipc_wwan->dev,
+ "cannot connect wwan0 & id %d to the IPC mem layer",
+ if_id);
+ ret = -ENODEV;
+ goto out;
+ }
+
+ /* enable tx path, DL data may follow */
+ netif_start_queue(netdev);
+
+ dev_dbg(ipc_wwan->dev, "Channel id %d allocated to if_id %d",
+ netlink->ch_id, netlink->if_id);
+
+ ret = 0;
+out:
+ mutex_unlock(&ipc_wwan->if_mutex);
+ return ret;
+}
+
+/* Bring-down the wwan net link */
+static int ipc_wwan_link_stop(struct net_device *netdev)
+{
+ struct iosm_net_link *netlink = netdev_priv(netdev);
+
+ netif_stop_queue(netdev);
+
+ mutex_lock(&netlink->ipc_wwan->if_mutex);
+ ipc_imem_sys_wwan_close(netlink->ipc_wwan->ipc_imem, netlink->if_id,
+ netlink->ch_id);
+ mutex_unlock(&netlink->ipc_wwan->if_mutex);
+
+ return 0;
+}
+
+/* Transmit a packet */
+static int ipc_wwan_link_transmit(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct iosm_net_link *netlink = netdev_priv(netdev);
+ struct iosm_wwan *ipc_wwan = netlink->ipc_wwan;
+ int if_id = netlink->if_id;
+ int ret = -EINVAL;
+
+ /* Interface IDs from 1 to 8 are for IP data
+ * & from 257 to 261 are for non-IP data
+ */
+ if (if_id < IP_MUX_SESSION_START || if_id > IP_MUX_SESSION_END)
+ goto exit;
+
+ /* Send the SKB to device for transmission */
+ ret = ipc_imem_sys_wwan_transmit(ipc_wwan->ipc_imem,
+ if_id, netlink->ch_id, skb);
+
+ /* Return code of zero is success */
+ if (ret == 0) {
+ ret = NETDEV_TX_OK;
+ } else if (ret == -EBUSY) {
+ ret = NETDEV_TX_BUSY;
+ dev_err(ipc_wwan->dev, "unable to push packets");
+ } else {
+ goto exit;
+ }
+
+ return ret;
+
+exit:
+ /* Log any skb drop */
+ if (if_id)
+ dev_dbg(ipc_wwan->dev, "skb dropped. IF_ID: %d, ret: %d", if_id,
+ ret);
+
+ dev_kfree_skb_any(skb);
+ return ret;
+}
+
+/* Ops structure for wwan net link */
+static const struct net_device_ops ipc_inm_ops = {
+ .ndo_open = ipc_wwan_link_open,
+ .ndo_stop = ipc_wwan_link_stop,
+ .ndo_start_xmit = ipc_wwan_link_transmit,
+};
+
+/* Setup function for creating new net link */
+static void ipc_wwan_setup(struct net_device *iosm_dev)
+{
+ iosm_dev->header_ops = NULL;
+ iosm_dev->hard_header_len = 0;
+ iosm_dev->priv_flags |= IFF_NO_QUEUE;
+
+ iosm_dev->type = ARPHRD_NONE;
+ iosm_dev->min_mtu = ETH_MIN_MTU;
+ iosm_dev->max_mtu = ETH_MAX_MTU;
+
+ iosm_dev->flags = IFF_POINTOPOINT | IFF_NOARP;
+
+ iosm_dev->netdev_ops = &ipc_inm_ops;
+}
+
+static struct device_type inm_type = { .name = "iosmdev" };
+
+/* Create new wwan net link */
+static int ipc_wwan_newlink(struct net *src_net, struct net_device *dev,
+ struct nlattr *tb[], struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+{
+ struct iosm_net_link *netlink = netdev_priv(dev);
+ struct iosm_wwan *ipc_wwan;
+ struct net_device *netdev;
+ int err = -EINVAL;
+ int index;
+
+ if (!data[IFLA_IOSM_IF_ID]) {
+ pr_err("Interface ID not specified");
+ goto out;
+ }
+
+ if (!tb[IFLA_LINK]) {
+ pr_err("Link not specified");
+ goto out;
+ }
+
+ netlink->netdev = dev;
+
+ netdev = __dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
+ /* The parent link device must exist. */
+ if (!netdev)
+ goto out;
+
+ netlink->ipc_wwan = netdev_priv(netdev);
+
+ ipc_wwan = netlink->ipc_wwan;
+
+ if (ipc_wwan->netdev != netdev)
+ goto out;
+
+ netlink->if_id = nla_get_u16(data[IFLA_IOSM_IF_ID]);
+ index = netlink->if_id;
+
+ /* Complete all memory stores before this point */
+ smp_mb();
+ if (index < IP_MUX_SESSION_START || index > IP_MUX_SESSION_END)
+ goto out;
+
+ rcu_read_lock();
+
+ if (rcu_access_pointer(ipc_wwan->sub_netlist[index - 1])) {
+ pr_err("IOSM interface ID already in use");
+ goto out_free_lock;
+ }
+
+ SET_NETDEV_DEVTYPE(dev, &inm_type);
+
+ eth_hw_addr_random(dev);
+ err = register_netdevice(dev);
+ if (err) {
+ dev_err(ipc_wwan->dev, "register netlink failed.\n");
+ goto out_free_lock;
+ }
+
+ err = netdev_upper_dev_link(ipc_wwan->netdev, dev, extack);
+
+ if (err) {
+ dev_err(ipc_wwan->dev, "netdev linking with parent failed.\n");
+ goto netlink_err;
+ }
+
+ rcu_assign_pointer(ipc_wwan->sub_netlist[index - 1], netlink);
+ netif_device_attach(dev);
+ rcu_read_unlock();
+
+ return 0;
+
+netlink_err:
+ unregister_netdevice(dev);
+out_free_lock:
+ rcu_read_unlock();
+out:
+ return err;
+}
+
+/* Delete new wwan net link */
+static void ipc_wwan_dellink(struct net_device *dev, struct list_head *head)
+{
+ struct iosm_net_link *netlink = netdev_priv(dev);
+ u16 index = netlink->if_id;
+
+ netdev_upper_dev_unlink(netlink->ipc_wwan->netdev, dev);
+ unregister_netdevice(dev);
+
+ mutex_lock(&netlink->ipc_wwan->if_mutex);
+ rcu_assign_pointer(netlink->ipc_wwan->sub_netlist[index - 1], NULL);
+ mutex_unlock(&netlink->ipc_wwan->if_mutex);
+}
+
+/* Get size for iosm net link payload */
+static size_t ipc_wwan_get_size(const struct net_device *dev)
+{
+ return nla_total_size(IOSM_IF_ID_PAYLOAD);
+}
+
+/* Validate the input parameters for wwan net link */
+static int ipc_wwan_validate(struct nlattr *tb[], struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+{
+ u16 if_id;
+
+ if (!data || !data[IFLA_IOSM_IF_ID]) {
+ NL_SET_ERR_MSG_MOD(extack, "IF ID not specified");
+ return -EINVAL;
+ }
+
+ if_id = nla_get_u16(data[IFLA_IOSM_IF_ID]);
+
+ if (if_id < IP_MUX_SESSION_START || if_id > IP_MUX_SESSION_END) {
+ NL_SET_ERR_MSG_MOD(extack, "Invalid Interface");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
+/* RT Net link ops structure for new wwan net link */
+struct rtnl_link_ops iosm_netlink __read_mostly = {
+ .kind = "iosm",
+ .maxtype = __IFLA_IOSM_MAX,
+ .priv_size = sizeof(struct iosm_net_link),
+ .setup = ipc_wwan_setup,
+ .validate = ipc_wwan_validate,
+ .newlink = ipc_wwan_newlink,
+ .dellink = ipc_wwan_dellink,
+ .get_size = ipc_wwan_get_size,
+ .policy = ipc_wwan_policy,
+};
+
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+ bool dss, int if_id)
+{
+ struct sk_buff *skb = skb_arg;
+ struct net_device_stats *stats;
+ int ret;
+
+ if ((skb->data[0] & IOSM_IP_TYPE_MASK) == IOSM_IP_TYPE_IPV4)
+ skb->protocol = htons(ETH_P_IP);
+ else if ((skb->data[0] & IOSM_IP_TYPE_MASK) ==
+ IOSM_IP_TYPE_IPV6)
+ skb->protocol = htons(ETH_P_IPV6);
+
+ skb->pkt_type = PACKET_HOST;
+
+ if (if_id < (IP_MUX_SESSION_START - 1) ||
+ if_id > (IP_MUX_SESSION_END - 1)) {
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ rcu_read_lock();
+ skb->dev = rcu_dereference(ipc_wwan->sub_netlist[if_id])->netdev;
+ /* Update the stats of the receiving net device in place;
+ * incrementing a stack copy would discard the counts.
+ */
+ stats = &skb->dev->stats;
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+
+ ret = netif_rx(skb);
+ rcu_read_unlock();
+
+ return ret;
+}
+
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int if_id, bool on)
+{
+ struct net_device *netdev;
+ bool is_tx_blk;
+
+ rcu_read_lock();
+ netdev = rcu_dereference(ipc_wwan->sub_netlist[if_id])->netdev;
+
+ is_tx_blk = netif_queue_stopped(netdev);
+
+ if (on)
+ dev_dbg(ipc_wwan->dev, "session id[%d]: flowctrl enable",
+ if_id);
+
+ if (on && !is_tx_blk)
+ netif_stop_queue(netdev);
+ else if (!on && is_tx_blk)
+ netif_wake_queue(netdev);
+ rcu_read_unlock();
+}
+
+static void ipc_netdev_setup(struct net_device *dev) {}
+
+struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct device *dev)
+{
+ static const struct net_device_ops iosm_wwandev_ops = {};
+ struct iosm_wwan *ipc_wwan;
+ struct net_device *netdev;
+
+ netdev = alloc_netdev(sizeof(*ipc_wwan), "wwan%d", NET_NAME_ENUM,
+ ipc_netdev_setup);
+
+ if (!netdev)
+ return NULL;
+
+ ipc_wwan = netdev_priv(netdev);
+
+ ipc_wwan->dev = dev;
+ ipc_wwan->netdev = netdev;
+ ipc_wwan->is_registered = false;
+
+ ipc_wwan->ipc_imem = ipc_imem;
+
+ mutex_init(&ipc_wwan->if_mutex);
+
+ /* allocate random ethernet address */
+ eth_random_addr(netdev->dev_addr);
+ netdev->addr_assign_type = NET_ADDR_RANDOM;
+
+ netdev->netdev_ops = &iosm_wwandev_ops;
+ netdev->flags |= IFF_NOARP;
+
+ SET_NETDEV_DEVTYPE(netdev, &wwan_type);
+
+ if (register_netdev(netdev)) {
+ dev_err(ipc_wwan->dev, "register_netdev failed");
+ goto reg_fail;
+ }
+
+ ipc_wwan->is_registered = true;
+
+ netif_device_attach(netdev);
+
+ return ipc_wwan;
+
+reg_fail:
+ free_netdev(netdev);
+ return NULL;
+}
+
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan)
+{
+ struct iosm_net_link *netlink;
+ int i;
+
+ if (ipc_wwan->is_registered) {
+ rcu_read_lock();
+ for (i = IP_MUX_SESSION_START; i <= IP_MUX_SESSION_END; i++) {
+ if (rcu_access_pointer(ipc_wwan->sub_netlist[i - 1])) {
+ netlink =
+ rcu_dereference(ipc_wwan->sub_netlist[i - 1]);
+ rtnl_lock();
+ netdev_upper_dev_unlink(ipc_wwan->netdev,
+ netlink->netdev);
+ unregister_netdevice(netlink->netdev);
+ rtnl_unlock();
+ rcu_assign_pointer(ipc_wwan->sub_netlist[i - 1],
+ NULL);
+ }
+ }
+ rcu_read_unlock();
+
+ unregister_netdev(ipc_wwan->netdev);
+ free_netdev(ipc_wwan->netdev);
+ }
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.h b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
new file mode 100644
index 000000000000..4925f22dff0a
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_WWAN_H
+#define IOSM_IPC_WWAN_H
+
+/**
+ * ipc_wwan_init - Allocate, Init and register WWAN device
+ * @ipc_imem: Pointer to imem data-struct
+ * @dev: Pointer to device structure
+ *
+ * Returns: Pointer to instance on success else NULL
+ */
+struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct device *dev);
+
+/**
+ * ipc_wwan_deinit - Unregister and free WWAN device, clear pointer
+ * @ipc_wwan: Pointer to wwan instance data
+ */
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan);
+
+/**
+ * ipc_wwan_receive - Receive a downlink packet from CP.
+ * @ipc_wwan: Pointer to wwan instance
+ * @skb_arg: Pointer to struct sk_buff
+ * @dss: Set to true if interface id is from 257 to 261,
+ * else false
+ * @if_id: Interface ID
+ *
+ * Return: 0 on success and failure value on error
+ */
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+ bool dss, int if_id);
+
+/**
+ * ipc_wwan_tx_flowctrl - Enable/Disable TX flow control
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: Ipc mux channel session id
+ * @on: if true, flow control is enabled, otherwise it is disabled
+ *
+ */
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on);
+
+/**
+ * ipc_wwan_is_tx_stopped - Checks if Tx is stopped for an interface id.
+ * @ipc_wwan: Pointer to wwan instance
+ * @id: Ipc mux channel session id
+ *
+ * Return: true if stopped, false otherwise
+ */
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id);
+
+#endif
--
2.25.1

2021-05-20 14:11:12

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 09/16] net: iosm: multiplex IP sessions

Establish IP session between host-device & session management.

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: Aligned ipc_ prefix for function name to be consistent across file.
v2:
* Endianness type correction for Host-Device protocol structure.
* Removed space around the : for the bitfields.
* Change session from dynamic to static.
* Streamline multiple returns using goto.
---
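For reference, the MUX state machine implemented by ipc_mux_schedule()
below can be summarized as follows (informational sketch derived from the
code, not part of the patch):

    MUX_S_INACTIVE --MUX_E_MUX_SESSION_OPEN--> create channel --> MUX_S_ACTIVE
    MUX_S_ACTIVE   --MUX_E_MUX_SESSION_OPEN--> open one more IP session
    MUX_S_ACTIVE   --MUX_E_MUX_SESSION_CLOSE-> close an IP session
    MUX_S_ACTIVE   --MUX_E_MUX_CHANNEL_CLOSE-> close channel --> MUX_S_INACTIVE
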
drivers/net/wwan/iosm/iosm_ipc_mux.c | 455 +++++++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_mux.h | 343 ++++++++++++++++++++
2 files changed, 798 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.c b/drivers/net/wwan/iosm/iosm_ipc_mux.c
new file mode 100644
index 000000000000..c1c77ce699da
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.c
@@ -0,0 +1,455 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include "iosm_ipc_mux_codec.h"
+
+/* At the beginning of the runtime phase, the IP MUX channel shall be created. */
+static int ipc_mux_channel_create(struct iosm_mux *ipc_mux)
+{
+ int channel_id;
+
+ channel_id = ipc_imem_channel_alloc(ipc_mux->imem, ipc_mux->instance_id,
+ IPC_CTYPE_WWAN);
+
+ if (channel_id < 0) {
+ dev_err(ipc_mux->dev,
+ "allocation of the MUX channel id failed");
+ ipc_mux->state = MUX_S_ERROR;
+ ipc_mux->event = MUX_E_NOT_APPLICABLE;
+ goto no_channel;
+ }
+
+ /* Establish the MUX channel in blocking mode. */
+ ipc_mux->channel = ipc_imem_channel_open(ipc_mux->imem, channel_id,
+ IPC_HP_NET_CHANNEL_INIT);
+
+ if (!ipc_mux->channel) {
+ dev_err(ipc_mux->dev, "ipc_imem_channel_open failed");
+ ipc_mux->state = MUX_S_ERROR;
+ ipc_mux->event = MUX_E_NOT_APPLICABLE;
+ return -ENODEV; /* MUX channel is not available. */
+ }
+
+ /* Define the MUX active state properties. */
+ ipc_mux->state = MUX_S_ACTIVE;
+ ipc_mux->event = MUX_E_NO_ORDERS;
+
+no_channel:
+ return channel_id;
+}
+
+/* Reset the session/if id state. */
+static void ipc_mux_session_free(struct iosm_mux *ipc_mux, int if_id)
+{
+ struct mux_session *if_entry;
+
+ if_entry = &ipc_mux->session[if_id];
+ /* Reset the session state. */
+ if_entry->wwan = NULL;
+}
+
+/* Create and send the session open command. */
+static struct mux_cmd_open_session_resp *
+ipc_mux_session_open_send(struct iosm_mux *ipc_mux, int if_id)
+{
+ struct mux_cmd_open_session_resp *open_session_resp;
+ struct mux_acb *acb = &ipc_mux->acb;
+ union mux_cmd_param param;
+
+	/* Add the open_session command to one ACB and start transmission. */
+ param.open_session.flow_ctrl = 0;
+ param.open_session.ipv4v6_hints = 0;
+ param.open_session.reserved2 = 0;
+ param.open_session.dl_head_pad_len = cpu_to_le32(IPC_MEM_DL_ETH_OFFSET);
+
+ /* Finish and transfer ACB. The user thread is suspended.
+ * It is a blocking function call, until CP responds or timeout.
+ */
+ acb->wanted_response = MUX_CMD_OPEN_SESSION_RESP;
+ if (ipc_mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_OPEN_SESSION, if_id, 0,
+ &param, sizeof(param.open_session), true,
+ false) ||
+ acb->got_response != MUX_CMD_OPEN_SESSION_RESP) {
+ dev_err(ipc_mux->dev, "if_id %d: OPEN_SESSION send failed",
+ if_id);
+ return NULL;
+ }
+
+ open_session_resp = &ipc_mux->acb.got_param.open_session_resp;
+ if (open_session_resp->response != cpu_to_le32(MUX_CMD_RESP_SUCCESS)) {
+ dev_err(ipc_mux->dev,
+ "if_id %d,session open failed,response=%d", if_id,
+ open_session_resp->response);
+ return NULL;
+ }
+
+ return open_session_resp;
+}
+
+/* Open the first IP session. */
+static bool ipc_mux_session_open(struct iosm_mux *ipc_mux,
+ struct mux_session_open *session_open)
+{
+ struct mux_cmd_open_session_resp *open_session_resp;
+ int if_id;
+
+ /* Search for a free session interface id. */
+ if_id = le32_to_cpu(session_open->if_id);
+ if (if_id < 0 || if_id >= ipc_mux->nr_sessions) {
+ dev_err(ipc_mux->dev, "invalid interface id=%d", if_id);
+ return false;
+ }
+
+ /* Create and send the session open command.
+ * It is a blocking function call, until CP responds or timeout.
+ */
+ open_session_resp = ipc_mux_session_open_send(ipc_mux, if_id);
+ if (!open_session_resp) {
+ ipc_mux_session_free(ipc_mux, if_id);
+ session_open->if_id = cpu_to_le32(-1);
+ return false;
+ }
+
+ /* Initialize the uplink skb accumulator. */
+ skb_queue_head_init(&ipc_mux->session[if_id].ul_list);
+
+ ipc_mux->session[if_id].dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET;
+ ipc_mux->session[if_id].ul_head_pad_len =
+ le32_to_cpu(open_session_resp->ul_head_pad_len);
+ ipc_mux->session[if_id].wwan = ipc_mux->wwan;
+
+ /* Reset the flow ctrl stats of the session */
+ ipc_mux->session[if_id].flow_ctl_en_cnt = 0;
+ ipc_mux->session[if_id].flow_ctl_dis_cnt = 0;
+ ipc_mux->session[if_id].ul_flow_credits = 0;
+ ipc_mux->session[if_id].net_tx_stop = false;
+ ipc_mux->session[if_id].flow_ctl_mask = 0;
+
+ /* Save and return the assigned if id. */
+ session_open->if_id = cpu_to_le32(if_id);
+
+ return true;
+}
+
+/* Free pending session UL packet. */
+static void ipc_mux_session_reset(struct iosm_mux *ipc_mux, int if_id)
+{
+ /* Reset the session/if id state. */
+ ipc_mux_session_free(ipc_mux, if_id);
+
+ /* Empty the uplink skb accumulator. */
+ skb_queue_purge(&ipc_mux->session[if_id].ul_list);
+}
+
+static void ipc_mux_session_close(struct iosm_mux *ipc_mux,
+ struct mux_session_close *msg)
+{
+ int if_id;
+
+ /* Copy the session interface id. */
+ if_id = le32_to_cpu(msg->if_id);
+
+ if (if_id < 0 || if_id >= ipc_mux->nr_sessions) {
+ dev_err(ipc_mux->dev, "invalid session id %d", if_id);
+ return;
+ }
+
+ /* Create and send the session close command.
+ * It is a blocking function call, until CP responds or timeout.
+ */
+ if (ipc_mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_CLOSE_SESSION, if_id, 0,
+ NULL, 0, true, false))
+ dev_err(ipc_mux->dev, "if_id %d: CLOSE_SESSION send failed",
+ if_id);
+
+ /* Reset the flow ctrl stats of the session */
+ ipc_mux->session[if_id].flow_ctl_en_cnt = 0;
+ ipc_mux->session[if_id].flow_ctl_dis_cnt = 0;
+ ipc_mux->session[if_id].flow_ctl_mask = 0;
+
+ ipc_mux_session_reset(ipc_mux, if_id);
+}
+
+static void ipc_mux_channel_close(struct iosm_mux *ipc_mux,
+ struct mux_channel_close *channel_close_p)
+{
+ int i;
+
+ /* Free pending session UL packet. */
+ for (i = 0; i < ipc_mux->nr_sessions; i++)
+ if (ipc_mux->session[i].wwan)
+ ipc_mux_session_reset(ipc_mux, i);
+
+ ipc_imem_channel_close(ipc_mux->imem, ipc_mux->channel_id);
+
+ /* Reset the MUX object. */
+ ipc_mux->state = MUX_S_INACTIVE;
+ ipc_mux->event = MUX_E_INACTIVE;
+}
+
+/* CP has interrupted AP. If AP is in IP MUX mode, execute the pending ops. */
+static int ipc_mux_schedule(struct iosm_mux *ipc_mux, union mux_msg *msg)
+{
+ enum mux_event order;
+ bool success;
+ int ret = -EIO;
+
+ if (!ipc_mux->initialized) {
+ ret = -EAGAIN;
+ goto out;
+ }
+
+ order = msg->common.event;
+
+ switch (ipc_mux->state) {
+ case MUX_S_INACTIVE:
+ if (order != MUX_E_MUX_SESSION_OPEN)
+ goto out; /* Wait for the request to open a session */
+
+ if (ipc_mux->event == MUX_E_INACTIVE)
+ /* Establish the MUX channel and the new state. */
+ ipc_mux->channel_id = ipc_mux_channel_create(ipc_mux);
+
+ if (ipc_mux->state != MUX_S_ACTIVE) {
+ ret = ipc_mux->channel_id; /* Missing the MUX channel */
+ goto out;
+ }
+
+ /* Disable the TD update timer and open the first IP session. */
+ ipc_imem_td_update_timer_suspend(ipc_mux->imem, true);
+ ipc_mux->event = MUX_E_MUX_SESSION_OPEN;
+ success = ipc_mux_session_open(ipc_mux, &msg->session_open);
+
+ ipc_imem_td_update_timer_suspend(ipc_mux->imem, false);
+ if (success)
+ ret = ipc_mux->channel_id;
+ goto out;
+
+ case MUX_S_ACTIVE:
+ switch (order) {
+ case MUX_E_MUX_SESSION_OPEN:
+ /* Disable the TD update timer and open a session */
+ ipc_imem_td_update_timer_suspend(ipc_mux->imem, true);
+ ipc_mux->event = MUX_E_MUX_SESSION_OPEN;
+ success = ipc_mux_session_open(ipc_mux,
+ &msg->session_open);
+ ipc_imem_td_update_timer_suspend(ipc_mux->imem, false);
+ if (success)
+ ret = ipc_mux->channel_id;
+ goto out;
+
+ case MUX_E_MUX_SESSION_CLOSE:
+ /* Release an IP session. */
+ ipc_mux->event = MUX_E_MUX_SESSION_CLOSE;
+ ipc_mux_session_close(ipc_mux, &msg->session_close);
+ ret = ipc_mux->channel_id;
+ goto out;
+
+ case MUX_E_MUX_CHANNEL_CLOSE:
+ /* Close the MUX channel pipes. */
+ ipc_mux->event = MUX_E_MUX_CHANNEL_CLOSE;
+ ipc_mux_channel_close(ipc_mux, &msg->channel_close);
+ ret = ipc_mux->channel_id;
+ goto out;
+
+ default:
+ /* Invalid order. */
+ goto out;
+ }
+
+ default:
+ dev_err(ipc_mux->dev,
+ "unexpected MUX transition: state=%d, event=%d",
+ ipc_mux->state, ipc_mux->event);
+ }
+out:
+ return ret;
+}
+
+struct iosm_mux *ipc_mux_init(struct ipc_mux_config *mux_cfg,
+ struct iosm_imem *imem)
+{
+ struct iosm_mux *ipc_mux = kzalloc(sizeof(*ipc_mux), GFP_KERNEL);
+ int i, ul_tds, ul_td_size;
+ struct sk_buff_head *free_list;
+ struct sk_buff *skb;
+
+ if (!ipc_mux)
+ return NULL;
+
+ ipc_mux->protocol = mux_cfg->protocol;
+ ipc_mux->ul_flow = mux_cfg->ul_flow;
+ ipc_mux->nr_sessions = mux_cfg->nr_sessions;
+ ipc_mux->instance_id = mux_cfg->instance_id;
+ ipc_mux->wwan_q_offset = 0;
+
+ ipc_mux->pcie = imem->pcie;
+ ipc_mux->imem = imem;
+ ipc_mux->ipc_protocol = imem->ipc_protocol;
+ ipc_mux->dev = imem->dev;
+ ipc_mux->wwan = imem->wwan;
+
+ /* Get the reference to the UL ADB list. */
+ free_list = &ipc_mux->ul_adb.free_list;
+
+ /* Initialize the list with free ADB. */
+ skb_queue_head_init(free_list);
+
+ ul_td_size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE;
+
+ ul_tds = IPC_MEM_MAX_TDS_MUX_LITE_UL;
+
+ ipc_mux->ul_adb.dest_skb = NULL;
+
+ ipc_mux->initialized = true;
+ ipc_mux->adb_prep_ongoing = false;
+ ipc_mux->size_needed = 0;
+ ipc_mux->ul_data_pend_bytes = 0;
+ ipc_mux->state = MUX_S_INACTIVE;
+ ipc_mux->ev_mux_net_transmit_pending = false;
+ ipc_mux->tx_transaction_id = 0;
+ ipc_mux->rr_next_session = 0;
+ ipc_mux->event = MUX_E_INACTIVE;
+ ipc_mux->channel_id = -1;
+ ipc_mux->channel = NULL;
+
+ /* Allocate the list of UL ADB. */
+ for (i = 0; i < ul_tds; i++) {
+ dma_addr_t mapping;
+
+ skb = ipc_pcie_alloc_skb(ipc_mux->pcie, ul_td_size, GFP_ATOMIC,
+ &mapping, DMA_TO_DEVICE, 0);
+ if (!skb) {
+ ipc_mux_deinit(ipc_mux);
+ return NULL;
+ }
+ /* Extend the UL ADB list. */
+ skb_queue_tail(free_list, skb);
+ }
+
+ return ipc_mux;
+}
+
+/* Informs the network stack to restart transmission for all opened sessions
+ * if Flow Control is not ON for that session.
+ */
+static void ipc_mux_restart_tx_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+ struct mux_session *session;
+ int idx;
+
+ for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+ session = &ipc_mux->session[idx];
+
+ if (!session->wwan)
+ continue;
+
+ /* If flow control of the session is OFF and if there was tx
+ * stop then restart. Inform the network interface to restart
+ * sending data.
+ */
+ if (session->flow_ctl_mask == 0) {
+ session->net_tx_stop = false;
+ ipc_mux_netif_tx_flowctrl(session, idx, false);
+ }
+ }
+}
+
+/* Informs the network stack to stop sending further packets for all opened
+ * sessions
+ */
+static void ipc_mux_stop_netif_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+ struct mux_session *session;
+ int idx;
+
+ for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+ session = &ipc_mux->session[idx];
+
+ if (!session->wwan)
+ continue;
+
+ ipc_mux_netif_tx_flowctrl(session, session->if_id, true);
+ }
+}
+
+void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux)
+{
+ if (ipc_mux->ul_flow == MUX_UL) {
+ int low_thresh = IPC_MEM_MUX_UL_FLOWCTRL_LOW_B;
+
+ if (ipc_mux->ul_data_pend_bytes < low_thresh)
+ ipc_mux_restart_tx_for_all_sessions(ipc_mux);
+ }
+}
+
+int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux)
+{
+ return ipc_mux ? ipc_mux->nr_sessions : -EFAULT;
+}
+
+enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux)
+{
+ return ipc_mux ? ipc_mux->protocol : MUX_UNKNOWN;
+}
+
+int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr)
+{
+ struct mux_session_open *session_open;
+ union mux_msg mux_msg;
+
+ session_open = &mux_msg.session_open;
+ session_open->event = MUX_E_MUX_SESSION_OPEN;
+
+ session_open->if_id = cpu_to_le32(session_nr);
+ ipc_mux->session[session_nr].flags |= IPC_MEM_WWAN_MUX;
+ return ipc_mux_schedule(ipc_mux, &mux_msg);
+}
+
+int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr)
+{
+ struct mux_session_close *session_close;
+ union mux_msg mux_msg;
+ int ret_val;
+
+ session_close = &mux_msg.session_close;
+ session_close->event = MUX_E_MUX_SESSION_CLOSE;
+
+ session_close->if_id = cpu_to_le32(session_nr);
+ ret_val = ipc_mux_schedule(ipc_mux, &mux_msg);
+ ipc_mux->session[session_nr].flags &= ~IPC_MEM_WWAN_MUX;
+
+ return ret_val;
+}
+
+void ipc_mux_deinit(struct iosm_mux *ipc_mux)
+{
+ struct mux_channel_close *channel_close;
+ struct sk_buff_head *free_list;
+ union mux_msg mux_msg;
+ struct sk_buff *skb;
+
+ if (!ipc_mux->initialized)
+ return;
+ ipc_mux_stop_netif_for_all_sessions(ipc_mux);
+
+ channel_close = &mux_msg.channel_close;
+ channel_close->event = MUX_E_MUX_CHANNEL_CLOSE;
+ ipc_mux_schedule(ipc_mux, &mux_msg);
+
+ /* Empty the ADB free list. */
+ free_list = &ipc_mux->ul_adb.free_list;
+
+ /* Remove from the head of the downlink queue. */
+ while ((skb = skb_dequeue(free_list)))
+ ipc_pcie_kfree_skb(ipc_mux->pcie, skb);
+
+ if (ipc_mux->channel) {
+ ipc_mux->channel->ul_pipe.is_open = false;
+ ipc_mux->channel->dl_pipe.is_open = false;
+ }
+
+ kfree(ipc_mux);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h
new file mode 100644
index 000000000000..ddd2cd0bd911
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MUX_H
+#define IOSM_IPC_MUX_H
+
+#include "iosm_ipc_protocol.h"
+
+/* Size of the buffer for the IP MUX data buffer. */
+#define IPC_MEM_MAX_DL_MUX_BUF_SIZE (16 * 1024)
+#define IPC_MEM_MAX_UL_ADB_BUF_SIZE IPC_MEM_MAX_DL_MUX_BUF_SIZE
+
+/* Size of the buffer for the IP MUX Lite data buffer. */
+#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024)
+
+/* TD counts for IP MUX Lite */
+#define IPC_MEM_MAX_TDS_MUX_LITE_UL 800
+#define IPC_MEM_MAX_TDS_MUX_LITE_DL 1200
+
+/* open session request (AP->CP) */
+#define MUX_CMD_OPEN_SESSION 1
+
+/* response to open session request (CP->AP) */
+#define MUX_CMD_OPEN_SESSION_RESP 2
+
+/* close session request (AP->CP) */
+#define MUX_CMD_CLOSE_SESSION 3
+
+/* response to close session request (CP->AP) */
+#define MUX_CMD_CLOSE_SESSION_RESP 4
+
+/* Flow control command with mask of the flow per queue/flow. */
+#define MUX_LITE_CMD_FLOW_CTL 5
+
+/* ACK the flow control command. Shall have the same Transaction ID as the
+ * matching FLOW_CTL command.
+ */
+#define MUX_LITE_CMD_FLOW_CTL_ACK 6
+
+/* Command for report packet indicating link quality metrics. */
+#define MUX_LITE_CMD_LINK_STATUS_REPORT 7
+
+/* Response to a report packet */
+#define MUX_LITE_CMD_LINK_STATUS_REPORT_RESP 8
+
+/* Used to reset a command/response state. */
+#define MUX_CMD_INVALID 255
+
+/* command response : command processed successfully */
+#define MUX_CMD_RESP_SUCCESS 0
+
+/* MUX for route link devices */
+#define IPC_MEM_WWAN_MUX BIT(0)
+
+/* Initiated actions to change the state of the MUX object. */
+enum mux_event {
+ MUX_E_INACTIVE, /* No initiated actions. */
+ MUX_E_MUX_SESSION_OPEN, /* Create the MUX channel and a session. */
+ MUX_E_MUX_SESSION_CLOSE, /* Release a session. */
+ MUX_E_MUX_CHANNEL_CLOSE, /* Release the MUX channel. */
+ MUX_E_NO_ORDERS, /* No MUX order. */
+ MUX_E_NOT_APPLICABLE, /* Defect IP MUX. */
+};
+
+/* MUX session open command. */
+struct mux_session_open {
+ enum mux_event event;
+ __le32 if_id;
+};
+
+/* MUX session close command. */
+struct mux_session_close {
+ enum mux_event event;
+ __le32 if_id;
+};
+
+/* MUX channel close command. */
+struct mux_channel_close {
+ enum mux_event event;
+};
+
+/* Common message header, used to determine the actual message type. */
+struct mux_common {
+ enum mux_event event;
+};
+
+/* List of ops in MUX mode. */
+union mux_msg {
+ struct mux_session_open session_open;
+ struct mux_session_close session_close;
+ struct mux_channel_close channel_close;
+ struct mux_common common;
+};
+
+/* Parameter definition of the open session command. */
+struct mux_cmd_open_session {
+ u8 flow_ctrl; /* 0: Flow control disabled (flow allowed). */
+ /* 1: Flow control enabled (flow not allowed)*/
+ u8 ipv4v6_hints; /* 0: IPv4/IPv6 hints not supported.*/
+ /* 1: IPv4/IPv6 hints supported*/
+ __le16 reserved2; /* Reserved. Set to zero. */
+ __le32 dl_head_pad_len; /* Maximum length supported */
+ /* for DL head padding on a datagram. */
+};
+
+/* Parameter definition of the open session response. */
+struct mux_cmd_open_session_resp {
+ __le32 response; /* Response code */
+ u8 flow_ctrl; /* 0: Flow control disabled (flow allowed). */
+ /* 1: Flow control enabled (flow not allowed) */
+ u8 ipv4v6_hints; /* 0: IPv4/IPv6 hints not supported */
+ /* 1: IPv4/IPv6 hints supported */
+ __le16 reserved2; /* Reserved. Set to zero. */
+ __le32 ul_head_pad_len; /* Actual length supported for */
+				/* UL head padding on a datagram. */
+};
+
+/* Parameter definition of the close session response code */
+struct mux_cmd_close_session_resp {
+ __le32 response;
+};
+
+/* Parameter definition of the flow control command. */
+struct mux_cmd_flow_ctl {
+ __le32 mask; /* indicating the desired flow control */
+ /* state for various flows/queues */
+};
+
+/* Parameter definition of the link status report code*/
+struct mux_cmd_link_status_report {
+ u8 payload;
+};
+
+/* Parameter definition of the link status report response code. */
+struct mux_cmd_link_status_report_resp {
+ __le32 response;
+};
+
+/**
+ * union mux_cmd_param - Union-definition of the command parameters.
+ * @open_session: Inband command for open session
+ * @open_session_resp: Inband command for open session response
+ * @close_session_resp: Inband command for close session response
+ * @flow_ctl: In-band flow control on the opened interfaces
+ * @link_status: In-band Link Status Report
+ * @link_status_resp: In-band command for link status report response
+ */
+union mux_cmd_param {
+ struct mux_cmd_open_session open_session;
+ struct mux_cmd_open_session_resp open_session_resp;
+ struct mux_cmd_close_session_resp close_session_resp;
+ struct mux_cmd_flow_ctl flow_ctl;
+ struct mux_cmd_link_status_report link_status;
+ struct mux_cmd_link_status_report_resp link_status_resp;
+};
+
+/* States of the MUX object. */
+enum mux_state {
+ MUX_S_INACTIVE, /* IP MUX is unused. */
+ MUX_S_ACTIVE, /* IP MUX channel is available. */
+ MUX_S_ERROR, /* Defect IP MUX. */
+};
+
+/* Supported MUX protocols. */
+enum ipc_mux_protocol {
+ MUX_UNKNOWN,
+ MUX_LITE,
+};
+
+/* Supported UL data transfer methods. */
+enum ipc_mux_ul_flow {
+ MUX_UL_UNKNOWN,
+ MUX_UL, /* Normal UL data transfer */
+ MUX_UL_ON_CREDITS, /* UL data transfer will be based on credits */
+};
+
+/* List of the MUX session. */
+struct mux_session {
+	struct iosm_wwan *wwan; /* Network i/f used for communication */
+	int if_id; /* i/f id for session open message. */
+ u32 flags;
+ u32 ul_head_pad_len; /* Nr of bytes for UL head padding. */
+ u32 dl_head_pad_len; /* Nr of bytes for DL head padding. */
+ struct sk_buff_head ul_list; /* skb entries for an ADT. */
+ u32 flow_ctl_mask; /* UL flow control */
+ u32 flow_ctl_en_cnt; /* Flow control Enable cmd count */
+ u32 flow_ctl_dis_cnt; /* Flow Control Disable cmd count */
+ int ul_flow_credits; /* UL flow credits */
+ u8 net_tx_stop:1,
+ flush:1; /* flush net interface ? */
+};
+
+/* State of a single UL data block. */
+struct mux_adb {
+ struct sk_buff *dest_skb; /* Current UL skb for the data block. */
+ u8 *buf; /* ADB memory. */
+ struct mux_adgh *adgh; /* ADGH pointer */
+ struct sk_buff *qlth_skb; /* QLTH pointer */
+ u32 *next_table_index; /* Pointer to next table index. */
+ struct sk_buff_head free_list; /* List of alloc. ADB for the UL sess.*/
+ int size; /* Size of the ADB memory. */
+ u32 if_cnt; /* Statistic counter */
+ u32 dg_cnt_total;
+ u32 payload_size;
+};
+
+/* Temporary ACB state. */
+struct mux_acb {
+ struct sk_buff *skb; /* Used UL skb. */
+ int if_id; /* Session id. */
+ u32 wanted_response;
+ u32 got_response;
+ u32 cmd;
+ union mux_cmd_param got_param; /* Received command/response parameter */
+};
+
+/**
+ * struct iosm_mux - Structure of the data multiplexing over an IP channel.
+ * @dev: Pointer to device structure
+ * @session: Array of the MUX sessions.
+ * @channel: Reference to the IP MUX channel
+ * @pcie: Pointer to iosm_pcie struct
+ * @imem: Pointer to iosm_imem
+ * @wwan:		Pointer to iosm_wwan
+ * @ipc_protocol: Pointer to iosm_protocol
+ * @channel_id: Channel ID for MUX
+ * @protocol: Type of the MUX protocol
+ * @ul_flow: UL Flow type
+ * @nr_sessions: Number of sessions
+ * @instance_id: Instance ID
+ * @state: States of the MUX object
+ * @event: Initiated actions to change the state of the MUX object
+ * @tx_transaction_id: Transaction id for the ACB command.
+ * @rr_next_session: Next session number for round robin.
+ * @ul_adb: State of the UL ADB/ADGH.
+ * @size_needed: Variable to store the size needed during ADB preparation
+ * @ul_data_pend_bytes: Pending UL data to be processed in bytes
+ * @acb: Temporary ACB state
+ * @wwan_q_offset:	This will hold the offset of the given instance.
+ * Useful while passing or receiving packets from
+ * wwan/imem layer.
+ * @initialized: MUX object is initialized
+ * @ev_mux_net_transmit_pending:
+ * 0 means inform the IPC tasklet to pass the
+ * accumulated uplink ADB to CP.
+ * @adb_prep_ongoing: Flag for ADB preparation status
+ */
+struct iosm_mux {
+ struct device *dev;
+ struct mux_session session[IPC_MEM_MUX_IP_SESSION_ENTRIES];
+ struct ipc_mem_channel *channel;
+ struct iosm_pcie *pcie;
+ struct iosm_imem *imem;
+ struct iosm_wwan *wwan;
+ struct iosm_protocol *ipc_protocol;
+ int channel_id;
+ enum ipc_mux_protocol protocol;
+ enum ipc_mux_ul_flow ul_flow;
+ int nr_sessions;
+ int instance_id;
+ enum mux_state state;
+ enum mux_event event;
+ u32 tx_transaction_id;
+ int rr_next_session;
+ struct mux_adb ul_adb;
+ int size_needed;
+ long long ul_data_pend_bytes;
+ struct mux_acb acb;
+ int wwan_q_offset;
+ u8 initialized:1,
+ ev_mux_net_transmit_pending:1,
+ adb_prep_ongoing:1;
+};
+
+/* MUX configuration structure */
+struct ipc_mux_config {
+ enum ipc_mux_protocol protocol;
+ enum ipc_mux_ul_flow ul_flow;
+ int nr_sessions;
+ int instance_id;
+};
+
+/**
+ * ipc_mux_init - Allocates and Init MUX instance
+ * @mux_cfg: Pointer to MUX configuration structure
+ * @ipc_imem: Pointer to imem data-struct
+ *
+ * Returns: Initialized mux pointer on success else NULL
+ */
+struct iosm_mux *ipc_mux_init(struct ipc_mux_config *mux_cfg,
+ struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_mux_deinit - Deallocates MUX instance
+ * @ipc_mux: Pointer to the MUX instance.
+ */
+void ipc_mux_deinit(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_check_n_restart_tx - Checks for pending UL data bytes and then
+ * it restarts the net interface tx queue if
+ * device has set flow control as off.
+ * @ipc_mux: Pointer to MUX data-struct
+ */
+void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_get_active_protocol - Returns the active MUX protocol type.
+ * @ipc_mux: Pointer to MUX data-struct
+ *
+ * Returns: enum of type ipc_mux_protocol
+ */
+enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_open_session - Opens a MUX session for IP traffic.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @session_nr: Interface ID or session number
+ *
+ * Returns: channel id on success, failure value on error
+ */
+int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr);
+
+/**
+ * ipc_mux_close_session - Closes a MUX session.
+ * @ipc_mux: Pointer to MUX data-struct
+ * @session_nr: Interface ID or session number
+ *
+ * Returns: channel id on success, failure value on error
+ */
+int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr);
+
+/**
+ * ipc_mux_get_max_sessions - Returns the maximum sessions supported on the
+ *			       provided MUX instance.
+ * @ipc_mux: Pointer to MUX data-struct
+ *
+ * Returns: Number of sessions supported on Success and failure value on error
+ */
+int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux);
+#endif
--
2.25.1

2021-05-20 14:11:13

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 05/16] net: iosm: shared memory I/O operations

1) Binds logical channel between host-device for communication.
2) Implements device specific(Char/Net) IO operations.

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3:
* WWAN port adaptation.
* Aligned ipc_ prefix for function name to be consistent across file.
v2:
* Change vlan_id to ip link if_id & document correction.
* Define new enums for IP & DSS session mapping.
* Return proper error code instead of returning -1.
* Clean-up vlan tag id & removed FW flashing logic.
---
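For reference, the network-open path implemented below boils down to the
following (informational sketch derived from the code, not part of the
patch):

    ipc_imem_sys_wwan_open(ipc_imem, if_id)     /* if_id in 1..8 */
        -> ipc_mux_open_session(ipc_imem->mux, if_id - 1)

i.e. network interface id N is mapped to MUX IP session N - 1.
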
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c | 346 ++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_imem_ops.h | 99 +++++++
2 files changed, 445 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
new file mode 100644
index 000000000000..46f76e8aae92
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
@@ -0,0 +1,346 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include <linux/delay.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_port.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Open a packet data online channel between the network layer and CP. */
+int ipc_imem_sys_wwan_open(struct iosm_imem *ipc_imem, int if_id)
+{
+ dev_dbg(ipc_imem->dev, "%s if id: %d",
+ ipc_imem_phase_get_string(ipc_imem->phase), if_id);
+
+ /* The network interface is only supported in the runtime phase. */
+ if (ipc_imem_phase_update(ipc_imem) != IPC_P_RUN) {
+ dev_err(ipc_imem->dev, "net:%d : refused phase %s", if_id,
+ ipc_imem_phase_get_string(ipc_imem->phase));
+ return -EIO;
+ }
+
+	/* Check the interface id:
+	 * if_id 1 to 8 maps to an IP MUX channel session. MUX sessions
+	 * start from 0 while network interface ids start from 1, so the
+	 * id is mapped with if_id = if_id - 1.
+	 */
+ if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END)
+ return ipc_mux_open_session(ipc_imem->mux, if_id - 1);
+
+ return -EINVAL;
+}
+
+/* Release a net link to CP. */
+void ipc_imem_sys_wwan_close(struct iosm_imem *ipc_imem, int if_id,
+ int channel_id)
+{
+ if (ipc_imem->mux && if_id >= IP_MUX_SESSION_START &&
+ if_id <= IP_MUX_SESSION_END)
+ ipc_mux_close_session(ipc_imem->mux, if_id - 1);
+}
+
+/* Tasklet call to do uplink transfer. */
+static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
+ void *msg, size_t size)
+{
+ ipc_imem->ev_cdev_write_pending = false;
+ ipc_imem_ul_send(ipc_imem);
+
+ return 0;
+}
+
+/* Through tasklet to do sio write. */
+static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem)
+{
+ if (ipc_imem->ev_cdev_write_pending)
+		return -EBUSY;
+
+ ipc_imem->ev_cdev_write_pending = true;
+
+ return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0,
+ NULL, 0, false);
+}
+
+/* Function to transfer UL data */
+int ipc_imem_sys_wwan_transmit(struct iosm_imem *ipc_imem,
+ int if_id, int channel_id, struct sk_buff *skb)
+{
+ int ret = -EINVAL;
+
+ if (!ipc_imem || channel_id < 0)
+ goto out;
+
+ /* Is CP Running? */
+ if (ipc_imem->phase != IPC_P_RUN) {
+ dev_dbg(ipc_imem->dev, "phase %s transmit",
+ ipc_imem_phase_get_string(ipc_imem->phase));
+ ret = -EIO;
+ goto out;
+ }
+
+ if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END)
+ /* Route the UL packet through IP MUX Layer */
+ ret = ipc_mux_ul_trigger_encode(ipc_imem->mux,
+ if_id - 1, skb);
+ else
+ dev_err(ipc_imem->dev,
+ "invalid if_id %d: ", if_id);
+out:
+ return ret;
+}
+
+/* Initialize wwan channel */
+void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+ enum ipc_mux_protocol mux_type)
+{
+ struct ipc_chnl_cfg chnl_cfg = { 0 };
+
+ ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
+
+ /* If modem version is invalid (0xffffffff), do not initialize WWAN. */
+ if (ipc_imem->cp_version == -1) {
+ dev_err(ipc_imem->dev, "invalid CP version");
+ return;
+ }
+
+ ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels);
+ ipc_imem_channel_init(ipc_imem, IPC_CTYPE_WWAN, chnl_cfg,
+ IRQ_MOD_OFF);
+
+ /* WWAN registration. */
+ ipc_imem->wwan = ipc_wwan_init(ipc_imem, ipc_imem->dev);
+ if (!ipc_imem->wwan)
+ dev_err(ipc_imem->dev,
+ "failed to register the ipc_wwan interfaces");
+}
+
+/* Map SKB to DMA for transfer */
+static int ipc_imem_map_skb_to_dma(struct iosm_imem *ipc_imem,
+ struct sk_buff *skb)
+{
+ struct iosm_pcie *ipc_pcie = ipc_imem->pcie;
+ char *buf = skb->data;
+ int len = skb->len;
+ dma_addr_t mapping;
+ int ret;
+
+ ret = ipc_pcie_addr_map(ipc_pcie, buf, len, &mapping, DMA_TO_DEVICE);
+
+ if (ret)
+ goto err;
+
+ BUILD_BUG_ON(sizeof(*IPC_CB(skb)) > sizeof(skb->cb));
+
+ IPC_CB(skb)->mapping = mapping;
+ IPC_CB(skb)->direction = DMA_TO_DEVICE;
+ IPC_CB(skb)->len = len;
+ IPC_CB(skb)->op_type = (u8)UL_DEFAULT;
+
+err:
+ return ret;
+}
+
+/* return true if channel is ready for use */
+static bool ipc_imem_is_channel_active(struct iosm_imem *ipc_imem,
+ struct ipc_mem_channel *channel)
+{
+ enum ipc_phase phase;
+
+ /* Update the current operation phase. */
+ phase = ipc_imem->phase;
+
+ /* Select the operation depending on the execution stage. */
+ switch (phase) {
+ case IPC_P_RUN:
+ case IPC_P_PSI:
+ case IPC_P_EBL:
+ break;
+
+ case IPC_P_ROM:
+ /* Prepare the PSI image for the CP ROM driver and
+ * suspend the flash app.
+ */
+ if (channel->state != IMEM_CHANNEL_RESERVED) {
+ dev_err(ipc_imem->dev,
+ "ch[%d]:invalid channel state %d,expected %d",
+ channel->channel_id, channel->state,
+ IMEM_CHANNEL_RESERVED);
+ goto channel_unavailable;
+ }
+ goto channel_available;
+
+ default:
+ /* Ignore uplink actions in all other phases. */
+ dev_err(ipc_imem->dev, "ch[%d]: confused phase %d",
+ channel->channel_id, phase);
+ goto channel_unavailable;
+ }
+ /* Check the full availability of the channel. */
+ if (channel->state != IMEM_CHANNEL_ACTIVE) {
+ dev_err(ipc_imem->dev, "ch[%d]: confused channel state %d",
+ channel->channel_id, channel->state);
+ goto channel_unavailable;
+ }
+
+channel_available:
+ return true;
+
+channel_unavailable:
+ return false;
+}
+
+/* Release a sio link to CP. */
+void ipc_imem_sys_cdev_close(struct iosm_cdev *ipc_cdev)
+{
+ struct iosm_imem *ipc_imem = ipc_cdev->ipc_imem;
+ struct ipc_mem_channel *channel = ipc_cdev->channel;
+ enum ipc_phase curr_phase;
+ int status = 0;
+ u32 tail = 0;
+
+ curr_phase = ipc_imem->phase;
+
+	/* If the current phase is IPC_P_OFF, the channel is already
+	 * freed. Nothing to do.
+	 */
+ if (curr_phase == IPC_P_OFF) {
+ dev_err(ipc_imem->dev,
+ "nothing to do. Current Phase: %s",
+ ipc_imem_phase_get_string(curr_phase));
+ return;
+ }
+
+ if (channel->state == IMEM_CHANNEL_FREE) {
+ dev_err(ipc_imem->dev, "ch[%d]: invalid channel state %d",
+ channel->channel_id, channel->state);
+ return;
+ }
+
+ /* If there are any pending TDs then wait for Timeout/Completion before
+ * closing pipe.
+ */
+ if (channel->ul_pipe.old_tail != channel->ul_pipe.old_head) {
+ ipc_imem->app_notify_ul_pend = 1;
+
+ /* Suspend the user app and wait a certain time for processing
+ * UL Data.
+ */
+ status = wait_for_completion_interruptible_timeout
+ (&ipc_imem->ul_pend_sem,
+ msecs_to_jiffies(IPC_PEND_DATA_TIMEOUT));
+ if (status == 0) {
+ dev_dbg(ipc_imem->dev,
+ "Pend data Timeout UL-Pipe:%d Head:%d Tail:%d",
+ channel->ul_pipe.pipe_nr,
+ channel->ul_pipe.old_head,
+ channel->ul_pipe.old_tail);
+ }
+
+ ipc_imem->app_notify_ul_pend = 0;
+ }
+
+	/* If there are any pending DL TDs then wait for Timeout/Completion
+	 * before closing the pipe.
+	 */
+ ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol,
+ &channel->dl_pipe, NULL, &tail);
+
+ if (tail != channel->dl_pipe.old_tail) {
+ ipc_imem->app_notify_dl_pend = 1;
+
+ /* Suspend the user app and wait a certain time for processing
+ * DL Data.
+ */
+ status = wait_for_completion_interruptible_timeout
+ (&ipc_imem->dl_pend_sem,
+ msecs_to_jiffies(IPC_PEND_DATA_TIMEOUT));
+ if (status == 0) {
+ dev_dbg(ipc_imem->dev,
+ "Pend data Timeout DL-Pipe:%d Head:%d Tail:%d",
+ channel->dl_pipe.pipe_nr,
+ channel->dl_pipe.old_head,
+ channel->dl_pipe.old_tail);
+ }
+
+ ipc_imem->app_notify_dl_pend = 0;
+ }
+
+ /* Due to wait for completion in messages, there is a small window
+ * between closing the pipe and updating the channel is closed. In this
+ * small window there could be HP update from Host Driver. Hence update
+	 * the channel state as CLOSING to avoid unnecessary interrupt
+ * towards CP.
+ */
+ channel->state = IMEM_CHANNEL_CLOSING;
+
+ ipc_imem_pipe_close(ipc_imem, &channel->ul_pipe);
+ ipc_imem_pipe_close(ipc_imem, &channel->dl_pipe);
+
+ ipc_imem_channel_free(channel);
+}
+
+/* Open a PORT link to CP and return the channel */
+struct ipc_mem_channel *ipc_imem_sys_port_open(struct iosm_imem *ipc_imem,
+ int chl_id, int hp_id)
+{
+ struct ipc_mem_channel *channel;
+ int ch_id;
+
+ /* The PORT interface is only supported in the runtime phase. */
+ if (ipc_imem_phase_update(ipc_imem) != IPC_P_RUN) {
+ dev_err(ipc_imem->dev, "PORT open refused, phase %s",
+ ipc_imem_phase_get_string(ipc_imem->phase));
+ return NULL;
+ }
+
+ ch_id = ipc_imem_channel_alloc(ipc_imem, chl_id, IPC_CTYPE_CTRL);
+
+ if (ch_id < 0) {
+		dev_err(ipc_imem->dev, "reservation of a PORT channel id failed");
+ return NULL;
+ }
+
+ channel = ipc_imem_channel_open(ipc_imem, ch_id, hp_id);
+
+ if (!channel) {
+ dev_err(ipc_imem->dev, "PORT channel id open failed");
+ return NULL;
+ }
+
+ return channel;
+}
+
+/* transfer skb to modem */
+int ipc_imem_sys_cdev_write(struct iosm_cdev *ipc_cdev, struct sk_buff *skb)
+{
+ struct ipc_mem_channel *channel = ipc_cdev->channel;
+ struct iosm_imem *ipc_imem = ipc_cdev->ipc_imem;
+ int ret = -EIO;
+
+ if (!ipc_imem_is_channel_active(ipc_imem, channel) ||
+ ipc_imem->phase == IPC_P_OFF_REQ)
+ goto out;
+
+ ret = ipc_imem_map_skb_to_dma(ipc_imem, skb);
+
+ if (ret)
+ goto out;
+
+ /* Add skb to the uplink skbuf accumulator. */
+ skb_queue_tail(&channel->ul_list, skb);
+
+ ret = ipc_imem_call_cdev_write(ipc_imem);
+
+ if (ret) {
+ skb_dequeue_tail(&channel->ul_list);
+ dev_err(ipc_cdev->dev, "channel id[%d] write failed\n",
+ ipc_cdev->channel->channel_id);
+ }
+out:
+ return ret;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
new file mode 100644
index 000000000000..6677a82be77b
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IMEM_OPS_H
+#define IOSM_IPC_IMEM_OPS_H
+
+#include "iosm_ipc_mux_codec.h"
+
+/* Maximum wait time for blocking read */
+#define IPC_READ_TIMEOUT 500
+
+/* The delay in ms for deferring the unregister */
+#define SIO_UNREGISTER_DEFER_DELAY_MS 1
+
+/* Default delay till CP PSI image is running and modem updates the
+ * execution stage.
+ * unit : milliseconds
+ */
+#define PSI_START_DEFAULT_TIMEOUT 3000
+
+/* Default timeout when closing SIO, till the modem is in
+ * running state.
+ * unit : milliseconds
+ */
+#define BOOT_CHECK_DEFAULT_TIMEOUT 400
+
+/* IP MUX channel range */
+#define IP_MUX_SESSION_START 1
+#define IP_MUX_SESSION_END 8
+#define MAX_IP_MUX_SESSION IP_MUX_SESSION_END
+
+/**
+ * ipc_imem_sys_port_open - Open a port link to CP.
+ * @ipc_imem: Imem instance.
+ * @chl_id: Channel Identifier.
+ * @hp_id: HP Identifier.
+ *
+ * Return: channel instance on success, NULL for failure
+ */
+struct ipc_mem_channel *ipc_imem_sys_port_open(struct iosm_imem *ipc_imem,
+ int chl_id, int hp_id);
+
+/**
+ * ipc_imem_sys_cdev_close - Release a sio link to CP.
+ * @ipc_cdev: iosm sio instance.
+ */
+void ipc_imem_sys_cdev_close(struct iosm_cdev *ipc_cdev);
+
+/**
+ * ipc_imem_sys_cdev_write - Route the uplink buffer to CP.
+ * @ipc_cdev: iosm_cdev instance.
+ * @skb: Pointer to skb.
+ *
+ * Return: 0 on success and failure value on error
+ */
+int ipc_imem_sys_cdev_write(struct iosm_cdev *ipc_cdev, struct sk_buff *skb);
+
+/**
+ * ipc_imem_sys_wwan_open - Open packet data online channel between network
+ * layer and CP.
+ * @ipc_imem: Imem instance.
+ * @if_id: ip link tag of the net device.
+ *
+ * Return: Channel ID on success and failure value on error
+ */
+int ipc_imem_sys_wwan_open(struct iosm_imem *ipc_imem, int if_id);
+
+/**
+ * ipc_imem_sys_wwan_close - Close packet data online channel between network
+ * layer and CP.
+ * @ipc_imem: Imem instance.
+ * @if_id: IP link id net device.
+ * @channel_id: Channel ID to be closed.
+ */
+void ipc_imem_sys_wwan_close(struct iosm_imem *ipc_imem, int if_id,
+ int channel_id);
+
+/**
+ * ipc_imem_sys_wwan_transmit - Function to transfer UL data
+ * @ipc_imem: Imem instance.
+ * @if_id: link ID of the device.
+ * @channel_id: Channel ID used
+ * @skb: Pointer to sk buffer
+ *
+ * Return: 0 on success and failure value on error
+ */
+int ipc_imem_sys_wwan_transmit(struct iosm_imem *ipc_imem, int if_id,
+ int channel_id, struct sk_buff *skb);
+/**
+ * ipc_imem_wwan_channel_init - Initializes WWAN channels and the channel for
+ * MUX.
+ * @ipc_imem: Pointer to iosm_imem struct.
+ * @mux_type: Type of mux protocol.
+ */
+void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
+ enum ipc_mux_protocol mux_type);
+#endif
--
2.25.1

2021-05-20 14:11:41

by Kumar, M Chetan

[permalink] [raw]
Subject: [PATCH V3 12/16] net: iosm: shared memory protocol

1) Defines messaging protocol for handling Transfer Descriptor
in both UL/DL direction.
2) Ring buffer management.

Signed-off-by: M Chetan Kumar <[email protected]>
---
v3: no change.
v2:
* Endianness type correction for Host-device protocol structure.
* Function signature documentation correction.
* Streamline multiple returns using goto.
---
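For reference, the control-message round trip implemented below is
(informational sketch derived from the code, not part of the patch):

    ipc_protocol_msg_send()
        -> ipc_task_queue_send_task(ipc_protocol_tq_msg_send_cb, ...)
            -> ipc_protocol_tq_msg_send(): prepare the message, store the
               response reference in rsp_ring[index], ring the doorbell
        -> wait_for_completion_timeout(&response.completion)
           /* on timeout, the rsp_ring[] reference is dropped again via
            * ipc_protocol_tq_msg_remove(index)
            */
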
drivers/net/wwan/iosm/iosm_ipc_protocol.c | 283 ++++++++++++++++++++++
drivers/net/wwan/iosm/iosm_ipc_protocol.h | 237 ++++++++++++++++++
2 files changed, 520 insertions(+)
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.c
create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol.c b/drivers/net/wwan/iosm/iosm_ipc_protocol.c
new file mode 100644
index 000000000000..834d8b146a94
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol.c
@@ -0,0 +1,283 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_protocol.h"
+#include "iosm_ipc_protocol_ops.h"
+#include "iosm_ipc_pm.h"
+#include "iosm_ipc_task_queue.h"
+
+int ipc_protocol_tq_msg_send(struct iosm_protocol *ipc_protocol,
+ enum ipc_msg_prep_type msg_type,
+ union ipc_msg_prep_args *prep_args,
+ struct ipc_rsp *response)
+{
+ int index = ipc_protocol_msg_prep(ipc_protocol->imem, msg_type,
+ prep_args);
+
+ /* Store reference towards caller specified response in response ring
+ * and signal CP
+ */
+ if (index >= 0 && index < IPC_MEM_MSG_ENTRIES) {
+ ipc_protocol->rsp_ring[index] = response;
+ ipc_protocol_msg_hp_update(ipc_protocol->imem);
+ }
+
+ return index;
+}
+
+/* Callback for message send */
+static int ipc_protocol_tq_msg_send_cb(struct iosm_imem *ipc_imem, int arg,
+ void *msg, size_t size)
+{
+ struct ipc_call_msg_send_args *send_args = msg;
+ struct iosm_protocol *ipc_protocol = ipc_imem->ipc_protocol;
+
+ return ipc_protocol_tq_msg_send(ipc_protocol, send_args->msg_type,
+ send_args->prep_args,
+ send_args->response);
+}
+
+/* Remove reference to a response. This is typically used when a requestor timed
+ * out and is no longer interested in the response.
+ */
+static int ipc_protocol_tq_msg_remove(struct iosm_imem *ipc_imem, int arg,
+ void *msg, size_t size)
+{
+ struct iosm_protocol *ipc_protocol = ipc_imem->ipc_protocol;
+
+ ipc_protocol->rsp_ring[arg] = NULL;
+ return 0;
+}
+
+int ipc_protocol_msg_send(struct iosm_protocol *ipc_protocol,
+ enum ipc_msg_prep_type prep,
+ union ipc_msg_prep_args *prep_args)
+{
+ struct ipc_call_msg_send_args send_args;
+ unsigned int exec_timeout;
+ struct ipc_rsp response;
+ int index;
+
+ exec_timeout = (ipc_protocol_get_ap_exec_stage(ipc_protocol) ==
+ IPC_MEM_EXEC_STAGE_RUN ?
+ IPC_MSG_COMPLETE_RUN_DEFAULT_TIMEOUT :
+ IPC_MSG_COMPLETE_BOOT_DEFAULT_TIMEOUT);
+
+ /* Trap if called from non-preemptible context */
+ might_sleep();
+
+ response.status = IPC_MEM_MSG_CS_INVALID;
+ init_completion(&response.completion);
+
+ send_args.msg_type = prep;
+ send_args.prep_args = prep_args;
+ send_args.response = &response;
+
+ /* Allocate and prepare message to be sent in tasklet context.
+	 * A positive index returned from the tasklet call references the message
+ * in case it needs to be cancelled when there is a timeout.
+ */
+ index = ipc_task_queue_send_task(ipc_protocol->imem,
+ ipc_protocol_tq_msg_send_cb, 0,
+ &send_args, 0, true);
+
+ if (index < 0) {
+ dev_err(ipc_protocol->dev, "msg %d failed", prep);
+ return index;
+ }
+
+ /* Wait for the device to respond to the message */
+ switch (wait_for_completion_timeout(&response.completion,
+ msecs_to_jiffies(exec_timeout))) {
+ case 0:
+ /* Timeout, there was no response from the device.
+ * Remove the reference to the local response completion
+ * object as we are no longer interested in the response.
+ */
+ ipc_task_queue_send_task(ipc_protocol->imem,
+ ipc_protocol_tq_msg_remove, index,
+ NULL, 0, true);
+ dev_err(ipc_protocol->dev, "msg timeout");
+ ipc_uevent_send(ipc_protocol->pcie->dev, UEVENT_MDM_TIMEOUT);
+ break;
+ default:
+ /* We got a response in time; check completion status: */
+ if (response.status != IPC_MEM_MSG_CS_SUCCESS) {
+ dev_err(ipc_protocol->dev,
+ "msg completion status error %d",
+ response.status);
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
+
+static int ipc_protocol_msg_send_host_sleep(struct iosm_protocol *ipc_protocol,
+ u32 state)
+{
+ union ipc_msg_prep_args prep_args = {
+ .sleep.target = 0,
+ .sleep.state = state,
+ };
+
+ return ipc_protocol_msg_send(ipc_protocol, IPC_MSG_PREP_SLEEP,
+ &prep_args);
+}
+
+void ipc_protocol_doorbell_trigger(struct iosm_protocol *ipc_protocol,
+ u32 identifier)
+{
+ ipc_pm_signal_hpda_doorbell(&ipc_protocol->pm, identifier, true);
+}
+
+bool ipc_protocol_pm_dev_sleep_handle(struct iosm_protocol *ipc_protocol)
+{
+ u32 ipc_status = ipc_protocol_get_ipc_status(ipc_protocol);
+ u32 requested;
+
+ if (ipc_status != IPC_MEM_DEVICE_IPC_RUNNING) {
+ dev_err(ipc_protocol->dev,
+ "irq ignored, CP IPC state is %d, should be RUNNING",
+ ipc_status);
+
+ /* Stop further processing. */
+ return false;
+ }
+
+ /* Get a copy of the requested PM state by the device and the local
+ * device PM state.
+ */
+ requested = ipc_protocol_pm_dev_get_sleep_notification(ipc_protocol);
+
+ return ipc_pm_dev_slp_notification(&ipc_protocol->pm, requested);
+}
+
+static int ipc_protocol_tq_wakeup_dev_slp(struct iosm_imem *ipc_imem, int arg,
+ void *msg, size_t size)
+{
+ struct iosm_pm *ipc_pm = &ipc_imem->ipc_protocol->pm;
+
+ /* Wakeup from device sleep if it is not ACTIVE */
+ ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_HS, true);
+
+ ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_HS, false);
+
+ return 0;
+}
+
+void ipc_protocol_s2idle_sleep(struct iosm_protocol *ipc_protocol, bool sleep)
+{
+ ipc_pm_set_s2idle_sleep(&ipc_protocol->pm, sleep);
+}
+
+bool ipc_protocol_suspend(struct iosm_protocol *ipc_protocol)
+{
+ if (!ipc_pm_prepare_host_sleep(&ipc_protocol->pm))
+ goto err;
+
+ ipc_task_queue_send_task(ipc_protocol->imem,
+ ipc_protocol_tq_wakeup_dev_slp, 0, NULL, 0,
+ true);
+
+ if (!ipc_pm_wait_for_device_active(&ipc_protocol->pm)) {
+ ipc_uevent_send(ipc_protocol->pcie->dev, UEVENT_MDM_TIMEOUT);
+ goto err;
+ }
+
+ /* Send the sleep message for sync sys calls. */
+ dev_dbg(ipc_protocol->dev, "send TARGET_HOST, ENTER_SLEEP");
+ if (ipc_protocol_msg_send_host_sleep(ipc_protocol,
+ IPC_HOST_SLEEP_ENTER_SLEEP)) {
+ /* Sending ENTER_SLEEP message failed, we are still active */
+ ipc_protocol->pm.host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+ goto err;
+ }
+
+ ipc_protocol->pm.host_pm_state = IPC_MEM_HOST_PM_SLEEP;
+ return true;
+err:
+ return false;
+}
+
+bool ipc_protocol_resume(struct iosm_protocol *ipc_protocol)
+{
+ if (!ipc_pm_prepare_host_active(&ipc_protocol->pm))
+ return false;
+
+ dev_dbg(ipc_protocol->dev, "send TARGET_HOST, EXIT_SLEEP");
+ if (ipc_protocol_msg_send_host_sleep(ipc_protocol,
+ IPC_HOST_SLEEP_EXIT_SLEEP)) {
+ ipc_protocol->pm.host_pm_state = IPC_MEM_HOST_PM_SLEEP;
+ return false;
+ }
+
+ ipc_protocol->pm.host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+
+ return true;
+}
+
+struct iosm_protocol *ipc_protocol_init(struct iosm_imem *ipc_imem)
+{
+ struct iosm_protocol *ipc_protocol =
+ kzalloc(sizeof(*ipc_protocol), GFP_KERNEL);
+ struct ipc_protocol_context_info *p_ci;
+ u64 addr;
+
+ if (!ipc_protocol)
+ return NULL;
+
+ ipc_protocol->dev = ipc_imem->dev;
+ ipc_protocol->pcie = ipc_imem->pcie;
+ ipc_protocol->imem = ipc_imem;
+ ipc_protocol->p_ap_shm = NULL;
+ ipc_protocol->phy_ap_shm = 0;
+
+ ipc_protocol->old_msg_tail = 0;
+
+ ipc_protocol->p_ap_shm =
+ pci_alloc_consistent(ipc_protocol->pcie->pci,
+ sizeof(*ipc_protocol->p_ap_shm),
+ &ipc_protocol->phy_ap_shm);
+
+ if (!ipc_protocol->p_ap_shm) {
+ dev_err(ipc_protocol->dev, "pci shm alloc error");
+ kfree(ipc_protocol);
+ return NULL;
+ }
+
+ /* Prepare the context info for CP. */
+ addr = ipc_protocol->phy_ap_shm;
+ p_ci = &ipc_protocol->p_ap_shm->ci;
+ p_ci->device_info_addr =
+ addr + offsetof(struct ipc_protocol_ap_shm, device_info);
+ p_ci->head_array =
+ addr + offsetof(struct ipc_protocol_ap_shm, head_array);
+ p_ci->tail_array =
+ addr + offsetof(struct ipc_protocol_ap_shm, tail_array);
+ p_ci->msg_head = addr + offsetof(struct ipc_protocol_ap_shm, msg_head);
+ p_ci->msg_tail = addr + offsetof(struct ipc_protocol_ap_shm, msg_tail);
+ p_ci->msg_ring_addr =
+ addr + offsetof(struct ipc_protocol_ap_shm, msg_ring);
+ p_ci->msg_ring_entries = cpu_to_le16(IPC_MEM_MSG_ENTRIES);
+ p_ci->msg_irq_vector = IPC_MSG_IRQ_VECTOR;
+ p_ci->device_info_irq_vector = IPC_DEVICE_IRQ_VECTOR;
+
+ ipc_mmio_set_contex_info_addr(ipc_imem->mmio, addr);
+
+ ipc_pm_init(ipc_protocol);
+
+ return ipc_protocol;
+}
+
+void ipc_protocol_deinit(struct iosm_protocol *proto)
+{
+ pci_free_consistent(proto->pcie->pci, sizeof(*proto->p_ap_shm),
+ proto->p_ap_shm, proto->phy_ap_shm);
+
+ ipc_pm_deinit(proto);
+ kfree(proto);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol.h b/drivers/net/wwan/iosm/iosm_ipc_protocol.h
new file mode 100644
index 000000000000..9b3a6d86ece7
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol.h
@@ -0,0 +1,237 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020-21 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PROTOCOL_H
+#define IOSM_IPC_PROTOCOL_H
+
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_pm.h"
+#include "iosm_ipc_protocol_ops.h"
+
+/* Trigger the doorbell interrupt on CP. */
+#define IPC_DOORBELL_IRQ_HPDA 0
+#define IPC_DOORBELL_IRQ_IPC 1
+#define IPC_DOORBELL_IRQ_SLEEP 2
+
+/* IRQ vector number. */
+#define IPC_DEVICE_IRQ_VECTOR 0
+#define IPC_MSG_IRQ_VECTOR 0
+#define IPC_UL_PIPE_IRQ_VECTOR 0
+#define IPC_DL_PIPE_IRQ_VECTOR 0
+
+#define IPC_MEM_MSG_ENTRIES 128
+
+/* Default timeout for sending IPC messages like open pipe, close pipe etc.
+ * during run mode.
+ *
+ * If the message interface lock to CP times out, the link to CP is broken.
+ * mode : run mode (IPC_MEM_EXEC_STAGE_RUN)
+ * unit : milliseconds
+ */
+#define IPC_MSG_COMPLETE_RUN_DEFAULT_TIMEOUT 500 /* 0.5 seconds */
+
+/* Default timeout for sending IPC messages like open pipe, close pipe etc.
+ * during boot mode.
+ *
+ * If the message interface lock to CP times out, the link to CP is broken.
+ * mode : boot mode
+ * (IPC_MEM_EXEC_STAGE_BOOT | IPC_MEM_EXEC_STAGE_PSI | IPC_MEM_EXEC_STAGE_EBL)
+ * unit : milliseconds
+ */
+#define IPC_MSG_COMPLETE_BOOT_DEFAULT_TIMEOUT 500 /* 0.5 seconds */
+
+/**
+ * struct ipc_protocol_context_info - Structure of the context info
+ * @device_info_addr: 64 bit address to device info
+ * @head_array: 64 bit address to head pointer arr for the pipes
+ * @tail_array: 64 bit address to tail pointer arr for the pipes
+ * @msg_head: 64 bit address to message head pointer
+ * @msg_tail: 64 bit address to message tail pointer
+ * @msg_ring_addr: 64 bit pointer to the message ring buffer
+ * @msg_ring_entries: This field provides the number of entries which
+ * the MR can hold
+ * @msg_irq_vector: This field provides the IRQ which shall be
+ * generated by the EP device when generating
+ * completion for Messages.
+ * @device_info_irq_vector: This field provides the IRQ which shall be
+ * generated by the EP dev after updating Dev. Info
+ */
+struct ipc_protocol_context_info {
+ phys_addr_t device_info_addr;
+ phys_addr_t head_array;
+ phys_addr_t tail_array;
+ phys_addr_t msg_head;
+ phys_addr_t msg_tail;
+ phys_addr_t msg_ring_addr;
+ __le16 msg_ring_entries;
+ u8 msg_irq_vector;
+ u8 device_info_irq_vector;
+};
+
+/**
+ * struct ipc_protocol_device_info - Structure for the device information
+ * @execution_stage: CP execution stage
+ * @ipc_status: IPC states
+ * @device_sleep_notification: Requested device pm states
+ */
+struct ipc_protocol_device_info {
+ __le32 execution_stage;
+ __le32 ipc_status;
+ __le32 device_sleep_notification;
+};
+
+/**
+ * struct ipc_protocol_ap_shm - Protocol Shared Memory Structure
+ * @ci: Context information struct
+ * @device_info: Device information struct
+ * @msg_head: Point to msg head
+ * @head_array: Array of head pointer
+ * @msg_tail: Point to msg tail
+ * @tail_array: Array of tail pointer
+ * @msg_ring: Circular buffers for the read/tail and write/head
+ *			indices.
+ */
+struct ipc_protocol_ap_shm {
+ struct ipc_protocol_context_info ci;
+ struct ipc_protocol_device_info device_info;
+ __le32 msg_head;
+ __le32 head_array[IPC_MEM_MAX_PIPES];
+ __le32 msg_tail;
+ __le32 tail_array[IPC_MEM_MAX_PIPES];
+ union ipc_mem_msg_entry msg_ring[IPC_MEM_MSG_ENTRIES];
+};
+
+/**
+ * struct iosm_protocol - Structure for IPC protocol.
+ * @p_ap_shm: Pointer to Protocol Shared Memory Structure
+ * @pm:			Instance of struct iosm_pm
+ * @pcie: Pointer to struct iosm_pcie
+ * @imem: Pointer to struct iosm_imem
+ * @rsp_ring: Array of OS completion objects to be triggered once CP
+ * acknowledges a request in the message ring
+ * @dev: Pointer to device structure
+ * @phy_ap_shm: Physical/Mapped representation of the shared memory info
+ * @old_msg_tail: Old msg tail ptr, until AP has handled ACK's from CP
+ */
+struct iosm_protocol {
+ struct ipc_protocol_ap_shm *p_ap_shm;
+ struct iosm_pm pm;
+ struct iosm_pcie *pcie;
+ struct iosm_imem *imem;
+ struct ipc_rsp *rsp_ring[IPC_MEM_MSG_ENTRIES];
+ struct device *dev;
+ phys_addr_t phy_ap_shm;
+ u32 old_msg_tail;
+};
+
+/**
+ * struct ipc_call_msg_send_args - Structure for message argument for
+ * tasklet function.
+ * @prep_args: Arguments for message preparation function
+ * @response: Can be NULL if result can be ignored
+ * @msg_type: Message Type
+ */
+struct ipc_call_msg_send_args {
+ union ipc_msg_prep_args *prep_args;
+ struct ipc_rsp *response;
+ enum ipc_msg_prep_type msg_type;
+};
+
+/**
+ * ipc_protocol_tq_msg_send - prepare the msg and send to CP
+ * @ipc_protocol: Pointer to ipc_protocol instance
+ * @msg_type: Message type
+ * @prep_args: Message arguments
+ * @response: Pointer to a response object which has a
+ * completion object and return code.
+ *
+ * Returns: 0 on success and failure value on error
+ */
+int ipc_protocol_tq_msg_send(struct iosm_protocol *ipc_protocol,
+ enum ipc_msg_prep_type msg_type,
+ union ipc_msg_prep_args *prep_args,
+ struct ipc_rsp *response);
+
+/**
+ * ipc_protocol_msg_send - Send ipc control message to CP and wait for response
+ * @ipc_protocol: Pointer to ipc_protocol instance
+ * @prep: Message type
+ * @prep_args: Message arguments
+ *
+ * Returns: 0 on success and failure value on error
+ */
+int ipc_protocol_msg_send(struct iosm_protocol *ipc_protocol,
+ enum ipc_msg_prep_type prep,
+ union ipc_msg_prep_args *prep_args);
+
+/**
+ * ipc_protocol_suspend - Signal to CP that host wants to go to sleep (suspend).
+ * @ipc_protocol: Pointer to ipc_protocol instance
+ *
+ * Returns: true if host can suspend, false if suspend must be aborted.
+ */
+bool ipc_protocol_suspend(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_s2idle_sleep - Call PM function to set PM variables in s2idle
+ * sleep/active case
+ * @ipc_protocol: Pointer to ipc_protocol instance
+ * @sleep: True for sleep/False for active
+ */
+void ipc_protocol_s2idle_sleep(struct iosm_protocol *ipc_protocol, bool sleep);
+
+/**
+ * ipc_protocol_resume - Signal to CP that host wants to resume operation.
+ * @ipc_protocol: Pointer to ipc_protocol instance
+ *
+ * Returns: true if host can resume, false if there is a problem.
+ */
+bool ipc_protocol_resume(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_pm_dev_sleep_handle - Handles the Device Sleep state change
+ * notification.
+ * @ipc_protocol: Pointer to ipc_protocol instance.
+ *
+ * Returns: true if sleep notification handled, false otherwise.
+ */
+bool ipc_protocol_pm_dev_sleep_handle(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_doorbell_trigger - Wrapper for PM function which wake up the
+ * device if it is in low power mode
+ * and trigger a head pointer update interrupt.
+ * @ipc_protocol: Pointer to ipc_protocol instance.
+ * @identifier: Specifies what component triggered hpda
+ * update irq
+ */
+void ipc_protocol_doorbell_trigger(struct iosm_protocol *ipc_protocol,
+ u32 identifier);
+
+/**
+ * ipc_protocol_sleep_notification_string - Returns last Sleep Notification as
+ * string.
+ * @ipc_protocol: Instance pointer of Protocol module.
+ *
+ * Returns: Pointer to string.
+ */
+const char *
+ipc_protocol_sleep_notification_string(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_init - Allocates IPC protocol instance
+ * @ipc_imem: Pointer to iosm_imem structure
+ *
+ * Returns: Address of IPC protocol instance on success & NULL on failure.
+ */
+struct iosm_protocol *ipc_protocol_init(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_protocol_deinit - Deallocates IPC protocol instance
+ * @ipc_protocol: pointer to the IPC protocol instance
+ */
+void ipc_protocol_deinit(struct iosm_protocol *ipc_protocol);
+
+#endif
--
2.25.1

2021-05-21 20:28:27

by Loic Poulain

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi Chetan,

On Thu, 20 May 2021 at 16:09, M Chetan Kumar <[email protected]> wrote:
>
> 1) Create net device & implement net operations for data/IP communication.
> 2) Bind IP Link to mux IP session for simultaneous IP traffic.
>
> Signed-off-by: M Chetan Kumar <[email protected]>
> ---
> v3:
> * Clean-up DSS channel implementation.
> * Aligned ipc_ prefix for function name to be consistent across file.
> v2:
> * Removed Ethernet header & VLAN tag handling from wwan net driver.
> * Implement rtnet_link interface for IP traffic handling.
> ---
> drivers/net/wwan/iosm/iosm_ipc_wwan.c | 439 ++++++++++++++++++++++++++
> drivers/net/wwan/iosm/iosm_ipc_wwan.h | 55 ++++
> 2 files changed, 494 insertions(+)
> create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
> create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h
>
> diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
> new file mode 100644
> index 000000000000..02c35bc86674
> --- /dev/null
> +++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
> @@ -0,0 +1,439 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2020-21 Intel Corporation.
> + */
> +
> +#include <linux/etherdevice.h>
> +#include <linux/if_arp.h>
> +#include <linux/if_link.h>
> +#include <net/rtnetlink.h>
> +
> +#include "iosm_ipc_chnl_cfg.h"
> +#include "iosm_ipc_imem_ops.h"
> +#include "iosm_ipc_wwan.h"
> +
> +#define IOSM_IP_TYPE_MASK 0xF0
> +#define IOSM_IP_TYPE_IPV4 0x40
> +#define IOSM_IP_TYPE_IPV6 0x60

[...]

> +
> +static void ipc_netdev_setup(struct net_device *dev) {}
> +
> +struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct device *dev)
> +{
> + static const struct net_device_ops iosm_wwandev_ops = {};
> + struct iosm_wwan *ipc_wwan;
> + struct net_device *netdev;
> +
> + netdev = alloc_netdev(sizeof(*ipc_wwan), "wwan%d", NET_NAME_ENUM,
> + ipc_netdev_setup);
> +
> + if (!netdev)
> + return NULL;
> +
> + ipc_wwan = netdev_priv(netdev);
> +
> + ipc_wwan->dev = dev;
> + ipc_wwan->netdev = netdev;
> + ipc_wwan->is_registered = false;
> +
> + ipc_wwan->ipc_imem = ipc_imem;
> +
> + mutex_init(&ipc_wwan->if_mutex);
> +
> + /* allocate random ethernet address */
> + eth_random_addr(netdev->dev_addr);
> + netdev->addr_assign_type = NET_ADDR_RANDOM;
> +
> + netdev->netdev_ops = &iosm_wwandev_ops;
> + netdev->flags |= IFF_NOARP;
> +
> + SET_NETDEV_DEVTYPE(netdev, &wwan_type);
> +
> + if (register_netdev(netdev)) {
> + dev_err(ipc_wwan->dev, "register_netdev failed");
> + goto reg_fail;
> + }

So you register a no-op netdev which is only used to represent the
modem instance, and to be referenced for link creation over IOSM
rtnetlinks?
The new WWAN framework creates a logical WWAN device instance (e.g.
/sys/class/wwan0); I think it would make sense to use its index as a
parameter when creating the new links, instead of relying on this
dummy netdev. Note that for now the wwan_device is private to
wwan_core and created implicitly on the WWAN control port
registration.
Moreover I wonder if it could also be possible to create a generic
WWAN link type instead of creating yet-another hw specific one, that
could benefit future WWAN drivers, and simplify user side integration,
with a common interface to create links and multiplex PDN (a bit like
wlan vif).

Regards,
Loic

2021-05-24 10:37:35

by Kumar, M Chetan

[permalink] [raw]
Subject: RE: [PATCH V3 15/16] net: iosm: net driver

Hi Loic,

> > +static void ipc_netdev_setup(struct net_device *dev) {}
> > +
> > +struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct
> > +device *dev) {
> > + static const struct net_device_ops iosm_wwandev_ops = {};
> > + struct iosm_wwan *ipc_wwan;
> > + struct net_device *netdev;
> > +
> > + netdev = alloc_netdev(sizeof(*ipc_wwan), "wwan%d",
> NET_NAME_ENUM,
> > + ipc_netdev_setup);
> > +
> > + if (!netdev)
> > + return NULL;
> > +
> > + ipc_wwan = netdev_priv(netdev);
> > +
> > + ipc_wwan->dev = dev;
> > + ipc_wwan->netdev = netdev;
> > + ipc_wwan->is_registered = false;
> > +
> > + ipc_wwan->ipc_imem = ipc_imem;
> > +
> > + mutex_init(&ipc_wwan->if_mutex);
> > +
> > + /* allocate random ethernet address */
> > + eth_random_addr(netdev->dev_addr);
> > + netdev->addr_assign_type = NET_ADDR_RANDOM;
> > +
> > + netdev->netdev_ops = &iosm_wwandev_ops;
> > + netdev->flags |= IFF_NOARP;
> > +
> > + SET_NETDEV_DEVTYPE(netdev, &wwan_type);
> > +
> > + if (register_netdev(netdev)) {
> > + dev_err(ipc_wwan->dev, "register_netdev failed");
> > + goto reg_fail;
> > + }
>
> So you register a no-op netdev which is only used to represent the modem
> instance, and to be referenced for link creation over IOSM rtnetlinks?

That's correct, the driver creates wwan0 (a no-op netdev) to represent the
modem instance, and it is referenced for link creation over IOSM rtnetlinks.

> The new WWAN framework creates a logical WWAN device instance (e.g.
> /sys/class/wwan0); I think it would make sense to use its index as a
> parameter when creating the new links, instead of relying on this dummy
> netdev. Note that for now the wwan_device is private to wwan_core and
> created implicitly on the WWAN control port registration.

In order to use the WWAN device instance, are any additional changes required
inside wwan_core? Or is simply passing the /sys/class/wwan0 device to
ip link add enough?

Could you please share more details on the wwan_core changes (if any) and on
how we should use /sys/class/wwan0 for link creation?

> Moreover I wonder if it could also be possible to create a generic WWAN link
> type instead of creating yet-another hw specific one, that could benefit
> future WWAN drivers, and simplify user side integration, with a common
> interface to create links and multiplex PDN (a bit like wlan vif).

A common interface could benefit both wwan drivers and user-side integration.
The WWAN framework generalizes the WWAN device control port; would it also
cover the WWAN netdev part? Is there a plan to support such an implementation
inside wwan_core?

Regards,
Chetan

2021-05-25 08:48:10

by Loic Poulain

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi Chetan,

On Mon, 24 May 2021 at 12:36, Kumar, M Chetan <[email protected]> wrote:
>
> Hi Loic,
>
> > > +static void ipc_netdev_setup(struct net_device *dev) {}
> > > +
> > > +struct iosm_wwan *ipc_wwan_init(struct iosm_imem *ipc_imem, struct
> > > +device *dev) {
> > > + static const struct net_device_ops iosm_wwandev_ops = {};
> > > + struct iosm_wwan *ipc_wwan;
> > > + struct net_device *netdev;
> > > +
> > > + netdev = alloc_netdev(sizeof(*ipc_wwan), "wwan%d",
> > NET_NAME_ENUM,
> > > + ipc_netdev_setup);
> > > +
> > > + if (!netdev)
> > > + return NULL;
> > > +
> > > + ipc_wwan = netdev_priv(netdev);
> > > +
> > > + ipc_wwan->dev = dev;
> > > + ipc_wwan->netdev = netdev;
> > > + ipc_wwan->is_registered = false;
> > > +
> > > + ipc_wwan->ipc_imem = ipc_imem;
> > > +
> > > + mutex_init(&ipc_wwan->if_mutex);
> > > +
> > > + /* allocate random ethernet address */
> > > + eth_random_addr(netdev->dev_addr);
> > > + netdev->addr_assign_type = NET_ADDR_RANDOM;
> > > +
> > > + netdev->netdev_ops = &iosm_wwandev_ops;
> > > + netdev->flags |= IFF_NOARP;
> > > +
> > > + SET_NETDEV_DEVTYPE(netdev, &wwan_type);
> > > +
> > > + if (register_netdev(netdev)) {
> > > + dev_err(ipc_wwan->dev, "register_netdev failed");
> > > + goto reg_fail;
> > > + }
> >
> > So you register a no-op netdev which is only used to represent the modem
> > instance, and to be referenced for link creation over IOSM rtnetlinks?
>
> That's correct, the driver creates wwan0 (a no-op netdev) to represent the
> modem instance, and it is referenced for link creation over IOSM rtnetlinks.
>
> > The new WWAN framework creates a logical WWAN device instance (e.g.
> > /sys/class/wwan0); I think it would make sense to use its index as a
> > parameter when creating the new links, instead of relying on this dummy
> > netdev. Note that for now the wwan_device is private to wwan_core and
> > created implicitly on the WWAN control port registration.
>
> In order to use the WWAN device instance, are any additional changes required
> inside wwan_core? Or is simply passing the /sys/class/wwan0 device to
> ip link add enough?

So basically the rtnetlink ops would be implemented and defined in
wwan_core, as a "wwan" link type, allowing users to create a new WWAN
link/context whatever the underlying hardware is. We could therefore
pass the WWAN device name or index, e.g.:
ip link add wwan0.1 type wwan hw wwan0 session 1

> Could you please share more details on the wwan_core changes (if any) and on
> how we should use /sys/class/wwan0 for link creation?

Well, move rtnetlink ops to wwan_core (or wwan_rtnetlink), and parse
netlink parameters into the wwan core. Add support for registering
`wwan_ops`, something like:
wwan_register_ops(wwan_ops *ops, struct device *wwan_root_device)

The ops could be basically:

struct wwan_ops {
	int (*add_intf)(struct device *wwan_root_device, const char *name,
			struct wwan_intf_params *params);
	int (*del_intf) ...
};

Then you could implement your own ops in iosm, with ios_add_intf()
allocating and registering the netdev as you already do.
struct wwan_intf_params would contain parameters of the interface,
like the session_id (and possibly extended later with others, like
checksum offload, etc...).
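
Roughly, the iosm side could then look like this (untested sketch, all
names are placeholders for discussion only):

/* Untested sketch; names are placeholders for discussion only. */
static int ios_add_intf(struct device *wwan_root_device, const char *name,
			struct wwan_intf_params *params)
{
	/* Look up the iosm instance from wwan_root_device, then
	 * allocate and register a netdev bound to params->session_id,
	 * much as ipc_wwan_init() does today.
	 */
	return 0;
}

static struct wwan_ops ios_wwan_ops = {
	.add_intf = ios_add_intf,
	/* .del_intf = ..., */
};

/* and at probe time: wwan_register_ops(&ios_wwan_ops, dev); */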

What do you think?

>
> > Moreover I wonder if it could also be possible to create a generic WWAN link
> > type instead of creating yet-another hw specific one, that could benefit
> > future WWAN drivers, and simplify user side integration, with a common
> > interface to create links and multiplex PDN (a bit like wlan vif).
>
> A common interface could benefit both wwan drivers and user-side integration.
> The WWAN framework generalizes the WWAN device control port; would it also
> cover the WWAN netdev part? Is there a plan to support such an implementation
> inside wwan_core?

I also plan to submit a change to add a wwan_register_netdevice()
function (similarly to WiFi cfg80211_register_netdevice), that could
be used instead of register_netdevice(), basically factorizing wwan
netdev registration (add "wwan" dev_type, add sysfs link to the 'wwan'
device...).
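
Something along these lines (rough sketch, not a final signature; error
unwinding is omitted):

/* Rough sketch, not a final signature. Caller holds the RTNL, as for
 * register_netdevice(); error unwinding is omitted.
 */
int wwan_register_netdevice(struct device *wwan_root_device,
			    struct net_device *netdev)
{
	int err;

	/* Common "wwan" device type for all wwan netdevs. */
	SET_NETDEV_DEVTYPE(netdev, &wwan_type);

	err = register_netdevice(netdev);
	if (err)
		return err;

	/* Link the netdev back to the logical wwan device in sysfs. */
	return sysfs_create_link(&netdev->dev.kobj,
				 &wwan_root_device->kobj, "wwan");
}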

Regards,
Loic

2021-05-25 12:55:48

by Johannes Berg

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

On Tue, 2021-05-25 at 10:24 +0200, Loic Poulain wrote:
> >

> > Could you please share more details on the wwan_core changes (if any) and on
> > how we should use /sys/class/wwan0 for link creation?
>
> Well, move rtnetlink ops to wwan_core (or wwan_rtnetlink), and parse
> netlink parameters into the wwan core. Add support for registering
> `wwan_ops`, something like:
> wwan_register_ops(wwan_ops *ops, struct device *wwan_root_device)
>
> The ops could be basically:
> struct wwan_ops {
>     int (*add_intf)(struct device *wwan_root_device, const char *name,
>                     struct wwan_intf_params *params);
>     int (*del_intf) ...
> };
>
> Then you could implement your own ops in iosm, with ios_add_intf()
> allocating and registering the netdev as you already do.
> struct wwan_intf_params would contain parameters of the interface,
> like the session_id (and possibly extended later with others, like
> checksum offload, etc...).
>
> What do you think?

Note that I kind of tried this in my version back when:

https://lore.kernel.org/netdev/20200225105149.59963c95aa29.Id0e40565452d0d5bb9ce5cc00b8755ec96db8559@changeid/#Z30include:net:wwan.h

See struct wwan_component_device_ops.

I had a different *generic* netlink family rather than rtnetlink ops,
but that's mostly an implementation detail. I tend to like genetlink
better, but having rtnetlink makes it easier in iproute2 (though it has
some generic netlink code too, of course.)

Nobody really seemed all that interested back then.

And frankly, I'm really annoyed that while all of this played out we let
QMI enter the tree with their home-grown stuff (and dummy netdevs,
FWIW), *then* said the IOSM driver has to go to rtnetlink ops like them,
instead of what older drivers are doing, and *now* are shifting
goalposts again towards something like the framework I started writing
early on for the IOSM driver, while the QMI driver was happening, and
nobody cared ...

Yeah, life's not fair and all that, but it does kind of feel like
arbitrary shifting of the goal posts, while QMI is already in tree. Of
course it's not like we have a competition with them here, but having
some help from there would've been nice. Oh well.

Not that I disagree with any of this, it does technically make sense.

However, I think at this point it'd be good to see some comments here
from DaveM/Jakub that they're going to force Qualcomm to also go down
this route, because they're now *heavily* invested in their own APIs,
and inventing a framework just for the IOSM driver is fairly pointless.


> I also plan to submit a change to add a wwan_register_netdevice()
> function (similarly to WiFi cfg80211_register_netdevice), that could
> be used instead of register_netdevice(), basically factorizing wwan
> netdev registration (add "wwan" dev_type, add sysfs link to the 'wwan'
> device...).

Be careful with that, I tend to think we made some mistakes in this
area, look at the recent locking things there. We used to do stuff from
a netdev notifier, and that has caused all kinds of pain recently when I
reworked the locking to not be so overly dependent on the RTNL all the
time (which really has become the new BKL, at least for desktop work,
sometimes I can't even type in the UI when the RTNL is blocked). So
wwan_register_netdevice() is probably fine, doing netdev notifier I'd
now recommend against.
But in any case, that's just a side thread.

johannes

2021-05-25 14:52:47

by Loic Poulain

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi Johannes,

On Tue, 25 May 2021 at 14:55, Johannes Berg <[email protected]> wrote:
>
> On Tue, 2021-05-25 at 10:24 +0200, Loic Poulain wrote:
> > >
>
> > > Could you please share more details on the wwan_core changes (if any) and on
> > > how we should use /sys/class/wwan0 for link creation?
> >
> > Well, move rtnetlink ops to wwan_core (or wwan_rtnetlink), and parse
> > netlink parameters into the wwan core. Add support for registering
> > `wwan_ops`, something like:
> > wwan_register_ops(wwan_ops *ops, struct device *wwan_root_device)
> >
> > The ops could be basically:
> > struct wwan_ops {
> >     int (*add_intf)(struct device *wwan_root_device, const char *name,
> >                     struct wwan_intf_params *params);
> >     int (*del_intf) ...
> > };
> >
> > Then you could implement your own ops in iosm, with ios_add_intf()
> > allocating and registering the netdev as you already do.
> > struct wwan_intf_params would contain parameters of the interface,
> > like the session_id (and possibly extended later with others, like
> > checksum offload, etc...).
> >
> > What do you think?
>
> Note that I kind of tried this in my version back when:
>
> https://lore.kernel.org/netdev/20200225105149.59963c95aa29.Id0e40565452d0d5bb9ce5cc00b8755ec96db8559@changeid/#Z30include:net:wwan.h
>
> See struct wwan_component_device_ops.
>
> I had a different *generic* netlink family rather than rtnetlink ops,
> but that's mostly an implementation detail. I tend to like genetlink
> better, but having rtnetlink makes it easier in iproute2 (though it has
> some generic netlink code too, of course.)
>
> Nobody really seemed all that interested back then.
>
> And frankly, I'm really annoyed that while all of this played out we let
> QMI enter the tree with their home-grown stuff (and dummy netdevs,
> FWIW), *then* said the IOSM driver has to go to rtnetlink ops like them,
> instead of what older drivers are doing, and *now* are shifting
> goalposts again towards something like the framework I started writing
> early on for the IOSM driver, while the QMI driver was happening, and
> nobody cared ...

Yes, I guess it's all about timings... At least, I care now...
I've recently worked on the mhi_net driver, which is basically the
netdev driver for Qualcomm PCIe modems. MHI being similar to IOSM
(exposing logical channels over PCI). Like QCOM USB variants, data can
be transferred in QMAP format (I guess what you call QMI), via the
`rmnet` link type (setup via iproute2).

>
> Yeah, life's not fair and all that, but it does kind of feel like
> arbitrary shifting of the goal posts, while QMI is already in tree. Of
> course it's not like we have a competition with them here, but having
> some help from there would've been nice. Oh well.
>
> Not that I disagree with any of this, it does technically make sense.
>
> However, I think at this point it'd be good to see some comments here
> from DaveM/Jakub that they're going to force Qualcomm to also go down
> this route, because they're now *heavily* invested in their own APIs,
> and inventing a framework just for the IOSM driver is fairly pointless.

This is a legitimate point, but it's not too late to do the 'right'
thing, and it should not require too much change in the IOSM driver.

Regarding Qualcomm, I think it should be possible to fit QCOM Modem
drivers into that solution. It would consist of creating a simple
wrapper in QMAP/rmnet so that the rmnet link can (also) be created
from the kernel side (e.g. from mhi_net driver):
wwan_new_link() => wwan->add_intf_cb() => mhi_net_add_intf() => rmnet_newlink()

That way the mhi_net driver would comply with the new hw-agnostic wwan link
API, without breaking backward compatibility if someone wants to
explicitly create an rmnet link. Moreover, it could also be applicable
to USB modems based on MBIM and their VLAN link types.
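
In mhi_net, that wrapper might look roughly like this (sketch only; the
name and the rmnet glue are assumptions):

/* Sketch only; the name and the rmnet glue are assumptions. */
static int mhi_net_add_intf(struct device *wwan_root_device,
			    const char *name,
			    struct wwan_intf_params *params)
{
	/* Map the generic wwan session id onto an rmnet mux id and
	 * create the rmnet link on top of the mhi_net real device,
	 * i.e. do from the kernel what `ip link add ... type rmnet`
	 * does from user space today.
	 */
	return 0;
}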

That's my guess, but maybe I do not have the whole picture and am missing
something... Anyway, I'll not blame the IOSM driver for not doing
this. But if you decide to go with the wwan link type, I'll do my best
to adapt QCOM mhi_net to it as well.

> > I also plan to submit a change to add a wwan_register_netdevice()
> > function (similarly to WiFi cfg80211_register_netdevice), that could
> > be used instead of register_netdevice(), basically factorizing wwan
> > netdev registration (add "wwan" dev_type, add sysfs link to the 'wwan'
> > device...).
>
> Be careful with that, I tend to think we made some mistakes in this
> area, look at the recent locking things there. We used to do stuff from
> a netdev notifier, and that has caused all kinds of pain recently when I
> reworked the locking to not be so overly dependent on the RTNL all the
> time (which really has become the new BKL, at least for desktop work,
> sometimes I can't even type in the UI when the RTNL is blocked). So
> wwan_register_netdevice() is probably fine, doing netdev notifier I'd
> now recommend against.
> But in any case, that's just a side thread.

Sure, thanks for pointing this.

Regards,
Loic

2021-05-27 09:41:30

by Johannes Berg

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi Loic,

> Yes, I guess it's all about timings... At least, I care now...

:)

> I've recently worked on the mhi_net driver, which is basically the
> netdev driver for Qualcomm PCIe modems. MHI being similar to IOSM
> (exposing logical channels over PCI). Like QCOM USB variants, data can
> be transferred in QMAP format (I guess what you call QMI), via the
> `rmnet` link type (setup via iproute2).

Right.

(I know nothing about the formats, so if I said anything about 'QMI'
just ignore and think 'qualcomm stuff')

>
> This is a legitimate point, but it's not too late to do the 'right'
> thing, and it should not require too much change in the IOSM driver.

Agree. Though I have looked at it over the last couple of hours, and it's
actually not easy to do.

I came up with these patches for now:
https://p.sipsolutions.net/d8d8897c3f43cb85.txt

(on top of 5.13-rc3 + the patchset we're discussing here)

The key problem is that rtnetlink ops are meant to be for a single
device family, and don't really generalize well. For example:

+static void wwan_rtnl_setup(struct net_device *dev)
+{
+	/* FIXME - how do we implement this? we don't have any data
+	 * at this point ..., i.e. we can't look up the context yet?
+	 * We'd need data[IFLA_WWAN_DEV_NAME], see wwan_rtnl_newlink().
+	 */
+}

or

+static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = {
[...]
+	.priv_size = WWAN_MAX_NETDEV_PRIV,

are both rather annoying.

Making this more generic should of course be possible, but would require
fairly large changes all over the kernel - passing the tb/data to all
the handlers involved here, etc. That seems awkward?

What do you think?

The alternative I could see is doing what wifi did and create a
completely new (generic) netlink family, but that's also awkward to some
extent and requires writing more code to handle stuff that rtnetlink
already does ...


Please take a look. I suppose we could change rtnetlink to make it
possible to have this behind it ... but that might even be tricky,
because setup() is called in the context of alloc_netdev_mqs(), and that
also has no private data to pass through. So would we have to extend
rtnetlink ops with a "get_setup()" method that actually *returns* a
pointer to the setup method, so that it can be per-user (such as IOSM)?
Tricky stuff.
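
For illustration, such an extension to struct rtnl_link_ops (in
include/net/rtnetlink.h) might look like this (sketch only):

+typedef void (*netdev_setup_t)(struct net_device *dev);
+
 struct rtnl_link_ops {
 	...
+	/* returns a per-user setup function, so that e.g. IOSM can
+	 * provide its own instead of a single fixed .setup pointer
+	 */
+	netdev_setup_t (*get_setup)(struct nlattr *tb[],
+				    struct nlattr *data[]);
 };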


> Regarding Qualcomm, I think it should be possible to fit QCOM Modem
> drivers into that solution. It would consist of creating a simple
> wrapper in QMAP/rmnet so that the rmnet link can (also) be created
> from the kernel side (e.g. from mhi_net driver):
> wwan_new_link() => wwan->add_intf_cb() => mhi_net_add_intf() => rmnet_newlink()
>
> That way the mhi_net driver would comply with the new hw-agnostic wwan link
> API, without breaking backward compatibility if someone wants to
> explicitly create an rmnet link. Moreover, it could also be applicable
> to USB modems based on MBIM and their VLAN link types.

Maybe. It'd just need some incentive, and there's none now that the
Qualcomm stuff is already there :)

johannes

2021-05-27 11:31:35

by Loic Poulain

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi Johannes,

On Thu, 27 May 2021 at 11:40, Johannes Berg <[email protected]> wrote:
>
> Hi Loic,
>
> > Yes, I guess it's all about timings... At least, I care now...
>
> :)
>
> > I've recently worked on the mhi_net driver, which is basically the
> > netdev driver for Qualcomm PCIe modems. MHI being similar to IOSM
> > (exposing logical channels over PCI). Like QCOM USB variants, data can
> > be transferred in QMAP format (I guess what you call QMI), via the
> > `rmnet` link type (setup via iproute2).
>
> Right.
>
> (I know nothing about the formats, so if I said anything about 'QMI'
> just ignore and think 'qualcomm stuff')
>
> >
> > This is a legitimate point, but it's not too late to do the 'right'
> > thing, and it should not require too much change in the IOSM driver.
>
> Agree. Though I looked at it now in the last couple of hours, and it's
> actually not easy to do.
>
> I came up with these patches for now:
> https://p.sipsolutions.net/d8d8897c3f43cb85.txt
>
> (on top of 5.13-rc3 + the patchset we're discussing here)

Great, that looks exactly what we need.

>
> The key problem is that rtnetlink ops are meant to be for a single
> device family, and don't really generalize well. For example:
>
> +static void wwan_rtnl_setup(struct net_device *dev)
> +{
> +	/* FIXME - how do we implement this? we don't have any data
> +	 * at this point ..., i.e. we can't look up the context yet?
> +	 * We'd need data[IFLA_WWAN_DEV_NAME], see wwan_rtnl_newlink().
> +	 */
> +}

Argh, yes, I've overlooked that issue. But do we even need to do
anything here? What if we do nothing here and call wwan->ops->setup()
early in wwan_rtnl_newlink()? AFAIU, we don't really use the setup
data until we actually register the netdev, except maybe for
netdev->tx_queue_len. Though it's probably not a robust solution...

>
> or
>
> +static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = {
> [...]
> +	.priv_size = WWAN_MAX_NETDEV_PRIV,
>
> are both rather annoying.
>
> Making this more generic should of course be possible, but would require
> fairly large changes all over the kernel - passing the tb/data to all
> the handlers involved here, etc. That seems awkward?

Yes, or alternatively add an optional alloc_netdev() rtnl ops, e.g. in
rtnl_create_link:

-	dev = alloc_netdev_mqs(ops->priv_size, ifname, name_assign_type,
-			       ops->setup, num_tx_queues, num_rx_queues);
+	if (ops->alloc_netdev) {
+		dev = ops->alloc_netdev(ifname, name_assign_type,
+					num_tx_queues, num_rx_queues, tb);
+	} else {
+		dev = alloc_netdev_mqs(ops->priv_size, ifname,
+				       name_assign_type, ops->setup,
+				       num_tx_queues, num_rx_queues);
+	}

That would solve both issues (setup, priv_size) without refactoring the
entire kernel.
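
On the driver side, the hook could then be implemented roughly as follows
(sketch; it reuses the iosm_wwan/ipc_netdev_setup names from this patch,
but the final form is open):

/* Sketch: a driver-side implementation of the proposed hook. */
static struct net_device *
wwan_rtnl_alloc(const char *ifname, unsigned char name_assign_type,
		unsigned int num_tx_queues, unsigned int num_rx_queues,
		struct nlattr *tb[])
{
	/* priv_size and setup become per-driver again, so the core no
	 * longer needs WWAN_MAX_NETDEV_PRIV or a generic setup stub.
	 */
	return alloc_netdev_mqs(sizeof(struct iosm_wwan), ifname,
				name_assign_type, ipc_netdev_setup,
				num_tx_queues, num_rx_queues);
}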

>
> What do you think?
>
> The alternative I could see is doing what wifi did and create a
> completely new (generic) netlink family, but that's also awkward to some
> extend and requires writing more code to handle stuff that rtnetlink
> already does ...

That would work indeed, but I would prefer to avoid such 'complexity',
mainly because link management is all we need. Indeed, unless we want
to abstract and handle control protocols (MBIM, QMI, AT) in the
kernel, we should not have to expose additional high-level operations
as in nl80211/cfg80211.

>
>
> Please take a look. I suppose we could change rtnetlink to make it
> possible to have this behind it ... but that might even be tricky,
> because setup() is called in the context of alloc_netdev_mqs(), and that
> also has no private data to pass through. So would we have to extend
> rtnetlink ops with a "get_setup()" method that actually *returns* a
> pointer to the setup method, so that it can be per-user (such as IOSM)?
> Tricky stuff.

What do you think about this alloc_netdev() ops? We should be able to
retrieve all we need from that.

Regards,
Loic

2021-06-01 07:32:42

by Johannes Berg

[permalink] [raw]
Subject: Re: [PATCH V3 15/16] net: iosm: net driver

Hi,

> Yes, or alternatively add an optional alloc_netdev() rtnl ops, e.g. in
> rtnl_create_link:


Yes, that works. It needs some more fiddling (we need 'data', not just
'tb'), and it cannot be called 'alloc_netdev' (that's a macro!), but
yeah, otherwise it works. I'll just call it alloc().

I'll send a few updated patches in a few minutes.

johannes