2021-11-01 03:57:35

by Martinez, Ricardo

Subject: [PATCH v2 00/14] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

t7xx is the PCIe host device driver for the Intel 5G 5000 M.2 solution,
which is based on MediaTek's T700 modem and provides WWAN connectivity.
The driver uses the WWAN framework infrastructure to create the following
control ports and network interfaces:
* /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
Applications like libmbim [1] or ModemManager [2] (v1.16 onwards, with
[3][4]) can use it to enable data communication over the WWAN link.
* /dev/wwan0at0 - Interface that supports AT commands.
* wwan0 - Primary network interface for IP traffic.
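
For illustration only (not part of this series), a minimal userspace
sketch that talks to the AT control port listed above. The node name
comes from the list; error handling is trimmed:

    /* Open the AT port, send "AT" and print the modem's reply. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];
        ssize_t n;
        int fd;

        fd = open("/dev/wwan0at0", O_RDWR);
        if (fd < 0)
            return 1;

        if (write(fd, "AT\r", 3) == 3) {
            n = read(fd, buf, sizeof(buf) - 1);  /* typically "OK" */
            if (n > 0) {
                buf[n] = '\0';
                printf("%s\n", buf);
            }
        }

        close(fd);
        return 0;
    }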

The main blocks in t7xx driver are:
* PCIe layer - Implements probe, removal, and power management callbacks.
* Port-proxy - Provides a common interface to interact with different types
of ports such as WWAN ports.
* Modem control & status monitor - Implements the entry point for modem
initialization, reset and exit, as well as exception handling.
* CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
control messages to the modem using MediaTek's CCCI (Cross-Core
Communication Interface) protocol.
* DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
uplink and downlink queues for the data path. The data exchange takes
place using circular buffers to share data buffer addresses and metadata
to describe the packets.
* MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
for bidirectional event notification such as handshake, exception, PM and
port enumeration.
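
As a rough sketch of the CCCI framing mentioned above (mirroring
struct ccci_header and the HDR_FLD_* masks added later in this series;
the sketch_* names are illustrative only):

    /* Sketch: framing a CCCI control-path header. */
    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    #define SKETCH_FLD_AST BIT(31)         /* HDR_FLD_AST */
    #define SKETCH_FLD_SEQ GENMASK(30, 16) /* HDR_FLD_SEQ */
    #define SKETCH_FLD_CHN GENMASK(15, 0)  /* HDR_FLD_CHN */

    struct sketch_ccci_header {
        u32 data[2];   /* data[0] == 0xffffffff means "no data" */
        u32 status;
        u32 reserved;
    };

    static void sketch_fill_header(struct sketch_ccci_header *h, u16 chan, u16 seq)
    {
        h->data[0] = 0xffffffff;  /* CCCI_HEADER_NO_DATA */
        h->data[1] = 0;
        h->status = FIELD_PREP(SKETCH_FLD_CHN, chan) |
                    FIELD_PREP(SKETCH_FLD_SEQ, seq) |
                    FIELD_PREP(SKETCH_FLD_AST, 1);
        h->reserved = 0;
    }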

The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
option which depends on CONFIG_WWAN.
This driver was originally developed by MediaTek. Intel adapted t7xx to
the WWAN framework, optimized and refactored the driver source in close
collaboration with MediaTek. This will enable getting the t7xx driver on the
Approved Vendor List for interested OEMs' and ODMs' productization plans
with the Intel 5G 5000 M.2 solution.

List of contributors:
Amir Hanania <[email protected]>
Andriy Shevchenko <[email protected]>
Chandrashekar Devegowda <[email protected]>
Dinesh Sharma <[email protected]>
Eliot Lee <[email protected]>
Haijun Liu <[email protected]>
M Chetan Kumar <[email protected]>
Mika Westerberg <[email protected]>
Moises Veleta <[email protected]>
Pierre-louis Bossart <[email protected]>
Chiranjeevi Rapolu <[email protected]>
Ricardo Martinez <[email protected]>
Muralidharan Sethuraman <[email protected]>
Soumya Prakash Mishra <[email protected]>
Sreehari Kancharla <[email protected]>
Suresh Nagaraj <[email protected]>

[1] https://www.freedesktop.org/software/libmbim/
[2] https://www.freedesktop.org/software/ModemManager/
[3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
[4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523

v2:
- Replace pdev->driver->name with dev_driver_string(&pdev->dev).
- Replace random_ether_addr() with eth_random_addr().
- Update kernel-doc comment for enum data_policy.
- Indicate the driver is 'Supported' instead of 'Maintained'.
- Fix the Signed-off-by and Co-developed-by tags in the patches.
- Add authors and contributors to the top comment of the source files.
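
For reference, a small sketch of the two API substitutions mentioned
above; the function below is hypothetical and not taken from the
patches:

    /* Sketch: dev_driver_string() instead of pdev->driver->name,
     * eth_random_addr() instead of random_ether_addr().
     */
    #include <linux/device.h>
    #include <linux/etherdevice.h>
    #include <linux/pci.h>

    static void sketch_v2_api(struct pci_dev *pdev)
    {
        const char *drv_name = dev_driver_string(&pdev->dev);
        u8 addr[ETH_ALEN];

        dev_info(&pdev->dev, "bound by driver %s\n", drv_name);
        eth_random_addr(addr);  /* random locally administered MAC */
    }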

Chandrashekar Devegowda (1):
net: wwan: t7xx: Add AT and MBIM WWAN ports

Haijun Liu (11):
net: wwan: t7xx: Add control DMA interface
net: wwan: t7xx: Add core components
net: wwan: t7xx: Add port proxy infrastructure
net: wwan: t7xx: Add control port
net: wwan: t7xx: Data path HW layer
net: wwan: t7xx: Add data path interface
net: wwan: t7xx: Add WWAN network interface
net: wwan: t7xx: Introduce power management support
net: wwan: t7xx: Runtime PM
net: wwan: t7xx: Device deep sleep lock/unlock
net: wwan: t7xx: Add debug and test ports

Ricardo Martinez (2):
net: wwan: Add default MTU size
net: wwan: t7xx: Add maintainers and documentation

.../networking/device_drivers/wwan/index.rst | 1 +
.../networking/device_drivers/wwan/t7xx.rst | 120 ++
MAINTAINERS | 11 +
drivers/net/wwan/Kconfig | 14 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/t7xx/Makefile | 24 +
drivers/net/wwan/t7xx/t7xx_cldma.c | 277 +++
drivers/net/wwan/t7xx/t7xx_cldma.h | 168 ++
drivers/net/wwan/t7xx/t7xx_common.h | 76 +
drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1524 +++++++++++++++
drivers/net/wwan/t7xx/t7xx_dpmaif.h | 168 ++
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1663 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 156 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 638 +++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 279 +++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1562 ++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 117 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 842 +++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 82 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 124 ++
drivers/net/wwan/t7xx/t7xx_mhccif.h | 35 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 747 ++++++++
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 92 +
drivers/net/wwan/t7xx/t7xx_monitor.h | 147 ++
drivers/net/wwan/t7xx/t7xx_netdev.c | 545 ++++++
drivers/net/wwan/t7xx/t7xx_netdev.h | 63 +
drivers/net/wwan/t7xx/t7xx_pci.c | 789 ++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 121 ++
drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 277 +++
drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 36 +
drivers/net/wwan/t7xx/t7xx_port.h | 163 ++
drivers/net/wwan/t7xx/t7xx_port_char.c | 424 +++++
drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 150 ++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 829 ++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 102 +
drivers/net/wwan/t7xx/t7xx_port_tty.c | 191 ++
drivers/net/wwan/t7xx/t7xx_port_wwan.c | 281 +++
drivers/net/wwan/t7xx/t7xx_reg.h | 398 ++++
drivers/net/wwan/t7xx/t7xx_skb_util.c | 362 ++++
drivers/net/wwan/t7xx/t7xx_skb_util.h | 110 ++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 627 +++++++
drivers/net/wwan/t7xx/t7xx_tty_ops.c | 205 ++
drivers/net/wwan/t7xx/t7xx_tty_ops.h | 44 +
include/linux/wwan.h | 5 +
44 files changed, 14590 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst
create mode 100644 drivers/net/wwan/t7xx/Makefile
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_monitor.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_char.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_tty.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.h

--
2.17.1


2021-11-01 03:57:42

by Martinez, Ricardo

Subject: [PATCH v2 01/14] net: wwan: Add default MTU size

Add a default MTU size definition that new WWAN drivers can refer to.

Signed-off-by: Ricardo Martinez <[email protected]>
---
include/linux/wwan.h | 5 +++++
1 file changed, 5 insertions(+)
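
For context, a hypothetical user of the new define (the setup callback
below is made up and not part of this patch):

    /* Sketch: a WWAN netdev setup routine picking up WWAN_DEFAULT_MTU. */
    #include <linux/if_arp.h>
    #include <linux/netdevice.h>
    #include <linux/wwan.h>

    static void sketch_wwan_setup(struct net_device *dev)
    {
        dev->mtu = WWAN_DEFAULT_MTU;
        dev->max_mtu = WWAN_DEFAULT_MTU;
        dev->type = ARPHRD_NONE;  /* raw IP framing, no L2 header */
        dev->flags = IFF_POINTOPOINT | IFF_NOARP;
    }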

diff --git a/include/linux/wwan.h b/include/linux/wwan.h
index 9fac819f92e3..28934b7dd0ae 100644
--- a/include/linux/wwan.h
+++ b/include/linux/wwan.h
@@ -171,4 +171,9 @@ int wwan_register_ops(struct device *parent, const struct wwan_ops *ops,

void wwan_unregister_ops(struct device *parent);

+/*
+ * Default WWAN interface MTU value
+ */
+#define WWAN_DEFAULT_MTU 1500
+
#endif /* __WWAN_H */
--
2.17.1

2021-11-01 03:58:01

by Martinez, Ricardo

Subject: [PATCH v2 04/14] net: wwan: t7xx: Add port proxy infrastructure

From: Haijun Liu <[email protected]>

Port-proxy provides a common interface to interact with different types
of ports. Ports export their configuration via `struct t7xx_port` and
operate as defined by `struct port_ops`.

Signed-off-by: Haijun Liu <[email protected]>
Co-developed-by: Chandrashekar Devegowda <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 14 +-
drivers/net/wwan/t7xx/t7xx_port.h | 161 +++++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 775 +++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 94 +++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 28 +-
6 files changed, 1070 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
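
For orientation, a minimal sketch of how a port implementation plugs
into the port_ops interface added below; the example_* names are
hypothetical and not part of this patch:

    /* Sketch: a hypothetical port built on struct port_ops. */
    #include <linux/skbuff.h>

    #include "t7xx_port.h"

    static int example_port_init(struct t7xx_port *port)
    {
        port->rx_length_th = 32;  /* bound the RX backlog */
        return 0;
    }

    static int example_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
    {
        /* Queue into the port's RX list via the common helper */
        return port_recv_skb(port, skb);
    }

    static void example_port_uninit(struct t7xx_port *port)
    {
        skb_queue_purge(&port->rx_skb_list);
    }

    /* Would be referenced from the port table (md_ccci_ports[]). */
    static struct port_ops example_port_ops = {
        .init = example_port_init,
        .recv_skb = example_port_recv_skb,
        .uninit = example_port_uninit,
    };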

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index dc0e6e025d55..1f117f36124a 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -11,3 +11,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_skb_util.o \
t7xx_cldma.o \
t7xx_hif_cldma.o \
+ t7xx_port_proxy.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index b80a36e09523..a814705dab3a 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -23,6 +23,8 @@
#include "t7xx_monitor.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"

#define RGU_RESET_DELAY_US 20
#define PORT_RESET_DELAY_US 2000
@@ -220,12 +222,14 @@ static void md_exception(struct mtk_modem *md, enum hif_ex_stage stage)

mtk_dev = md->mtk_dev;

- if (stage == HIF_EX_CLEARQ_DONE)
+ if (stage == HIF_EX_CLEARQ_DONE) {
/* Give DHL time to flush data.
* This is an empirical value that ensures
* DHL has enough time to flush all the data.
*/
msleep(PORT_RESET_DELAY_US);
+ port_proxy_reset(&mtk_dev->pdev->dev);
+ }

cldma_exception(ID_CLDMA1, stage);

@@ -412,6 +416,7 @@ void mtk_md_reset(struct mtk_pci_dev *mtk_dev)
md_structure_reset(md);
ccci_fsm_reset();
cldma_reset(ID_CLDMA1);
+ port_proxy_reset(&mtk_dev->pdev->dev);
md->md_init_finish = true;
}

@@ -450,6 +455,10 @@ int mtk_md_init(struct mtk_pci_dev *mtk_dev)
if (ret)
goto err_fsm_init;

+ ret = port_proxy_init(mtk_dev->md);
+ if (ret)
+ goto err_cldma_init;
+
fsm_ctl = fsm_get_entry();
fsm_append_command(fsm_ctl, CCCI_COMMAND_START, 0);

@@ -458,6 +467,8 @@ int mtk_md_init(struct mtk_pci_dev *mtk_dev)
md->md_init_finish = true;
return 0;

+err_cldma_init:
+ cldma_exit(ID_CLDMA1);
err_fsm_init:
ccci_fsm_uninit();
err_alloc:
@@ -480,6 +491,7 @@ void mtk_md_exit(struct mtk_pci_dev *mtk_dev)
fsm_ctl = fsm_get_entry();
/* change FSM state, will auto jump to stopped */
fsm_append_command(fsm_ctl, CCCI_COMMAND_PRE_STOP, 1);
+ port_proxy_uninit();
cldma_exit(ID_CLDMA1);
ccci_fsm_uninit();
destroy_workqueue(md->handshake_wq);
diff --git a/drivers/net/wwan/t7xx/t7xx_port.h b/drivers/net/wwan/t7xx/t7xx_port.h
new file mode 100644
index 000000000000..badaaa418b97
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chandrashekar Devegowda <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ */
+
+#ifndef __T7XX_PORT_H__
+#define __T7XX_PORT_H__
+
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/wwan.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_cldma.h"
+
+#define PORT_F_RX_ALLOW_DROP BIT(0) /* packet is dropped if the port's RX buffer is full */
+#define PORT_F_RX_FULLED BIT(1) /* RX buffer has been full at least once */
+#define PORT_F_USER_HEADER BIT(2) /* CCCI header is provided by the user, not by CCCI */
+#define PORT_F_RX_EXCLUSIVE BIT(3) /* RX queue serves only this port */
+#define PORT_F_RX_ADJUST_HEADER BIT(4) /* remove the CCCI header from received skbs when needed */
+#define PORT_F_RX_CH_TRAFFIC BIT(5) /* enable port channel traffic */
+#define PORT_F_RX_CHAR_NODE BIT(7) /* export a char device node to userspace */
+#define PORT_F_CHAR_NODE_SHOW BIT(10) /* char device node is visible to userspace by default */
+
+/* reused for net TX, Data queue, same bit as RX_FULLED */
+#define PORT_F_TX_DATA_FULLED BIT(1)
+#define PORT_F_TX_ACK_FULLED BIT(8)
+#define PORT_F_RAW_DATA BIT(9)
+
+#define CCCI_MAX_CH_ID 0xff /* RX channel ID should NOT be >= this!! */
+#define CCCI_CH_ID_MASK 0xff
+
+/* Channel ID and Message ID definitions.
+ * The channel number consists of peer_id (15:12) and channel_id (11:0).
+ * peer_id:
+ * 0: reserved, 1: to sAP, 2: to MD
+ */
+enum ccci_ch {
+ /* to MD */
+ CCCI_CONTROL_RX = 0x2000,
+ CCCI_CONTROL_TX = 0x2001,
+ CCCI_SYSTEM_RX = 0x2002,
+ CCCI_SYSTEM_TX = 0x2003,
+ CCCI_UART1_RX = 0x2006, /* META */
+ CCCI_UART1_RX_ACK = 0x2007,
+ CCCI_UART1_TX = 0x2008,
+ CCCI_UART1_TX_ACK = 0x2009,
+ CCCI_UART2_RX = 0x200a, /* AT */
+ CCCI_UART2_RX_ACK = 0x200b,
+ CCCI_UART2_TX = 0x200c,
+ CCCI_UART2_TX_ACK = 0x200d,
+ CCCI_MD_LOG_RX = 0x202a, /* MD logging */
+ CCCI_MD_LOG_TX = 0x202b,
+ CCCI_LB_IT_RX = 0x203e, /* loop back test */
+ CCCI_LB_IT_TX = 0x203f,
+ CCCI_STATUS_RX = 0x2043, /* status polling */
+ CCCI_STATUS_TX = 0x2044,
+ CCCI_MIPC_RX = 0x20ce, /* MIPC */
+ CCCI_MIPC_TX = 0x20cf,
+ CCCI_MBIM_RX = 0x20d0,
+ CCCI_MBIM_TX = 0x20d1,
+ CCCI_DSS0_RX = 0x20d2,
+ CCCI_DSS0_TX = 0x20d3,
+ CCCI_DSS1_RX = 0x20d4,
+ CCCI_DSS1_TX = 0x20d5,
+ CCCI_DSS2_RX = 0x20d6,
+ CCCI_DSS2_TX = 0x20d7,
+ CCCI_DSS3_RX = 0x20d8,
+ CCCI_DSS3_TX = 0x20d9,
+ CCCI_DSS4_RX = 0x20da,
+ CCCI_DSS4_TX = 0x20db,
+ CCCI_DSS5_RX = 0x20dc,
+ CCCI_DSS5_TX = 0x20dd,
+ CCCI_DSS6_RX = 0x20de,
+ CCCI_DSS6_TX = 0x20df,
+ CCCI_DSS7_RX = 0x20e0,
+ CCCI_DSS7_TX = 0x20e1,
+ CCCI_MAX_CH_NUM,
+ CCCI_MONITOR_CH_ID = GENMASK(31, 28), /* for MD init */
+ CCCI_INVALID_CH_ID = GENMASK(15, 0),
+};
+
+struct t7xx_port;
+struct port_ops {
+ int (*init)(struct t7xx_port *port);
+ int (*recv_skb)(struct t7xx_port *port, struct sk_buff *skb);
+ void (*md_state_notify)(struct t7xx_port *port, unsigned int md_state);
+ void (*uninit)(struct t7xx_port *port);
+ int (*enable_chl)(struct t7xx_port *port);
+ int (*disable_chl)(struct t7xx_port *port);
+};
+
+typedef void (*port_skb_handler)(struct t7xx_port *port, struct sk_buff *skb);
+
+struct t7xx_port {
+ /* members used for initialization, do not change the order */
+ enum ccci_ch tx_ch;
+ enum ccci_ch rx_ch;
+ unsigned char txq_index;
+ unsigned char rxq_index;
+ unsigned char txq_exp_index;
+ unsigned char rxq_exp_index;
+ enum cldma_id path_id;
+ unsigned int flags;
+ struct port_ops *ops;
+ unsigned int minor;
+ char *name;
+ enum wwan_port_type mtk_port_type;
+
+ /* members un-initialized in definition */
+ struct wwan_port *mtk_wwan_port;
+ struct mtk_pci_dev *mtk_dev;
+ struct device *dev;
+ short seq_nums[2];
+ struct port_proxy *port_proxy;
+ atomic_t usage_cnt;
+ struct list_head entry;
+ struct list_head queue_entry;
+ unsigned int major;
+ unsigned int minor_base;
+ /* TX and RX flows are asymmetric since ports are multiplexed on
+ * queues.
+ *
+ * TX: data blocks are sent directly to a queue. A port does not
+ * maintain a TX list; it only provides a wait_queue_head for
+ * blocking writes.
+ *
+ * RX: each port uses an RX list to hold packets,
+ * allowing the modem to dispatch RX packets as quickly as possible.
+ */
+ struct sk_buff_head rx_skb_list;
+ bool skb_from_pool;
+ spinlock_t port_update_lock; /* protects port configuration */
+ wait_queue_head_t rx_wq;
+ int rx_length_th;
+ port_skb_handler skb_handler;
+ unsigned char chan_enable;
+ unsigned char chn_crt_stat;
+ struct cdev *cdev;
+ struct task_struct *thread;
+ struct mutex tx_mutex_lock; /* protects the seq number operation */
+};
+
+int port_kthread_handler(void *arg);
+int port_recv_skb(struct t7xx_port *port, struct sk_buff *skb);
+int port_write_room_to_md(struct t7xx_port *port);
+struct t7xx_port *port_get_by_minor(int minor);
+struct t7xx_port *port_get_by_name(char *port_name);
+int port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb, bool blocking);
+
+#endif /* __T7XX_PORT_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
new file mode 100644
index 000000000000..a3795d30c317
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -0,0 +1,775 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chandrashekar Devegowda <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/netlink.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_skb_util.h"
+
+#define MTK_DEV_NAME "MTK_WWAN_M80"
+
+#define PORT_NETLINK_MSG_MAX_PAYLOAD 32
+#define PORT_NOTIFY_PROTOCOL NETLINK_USERSOCK
+#define PORT_STATE_BROADCAST_GROUP 21
+#define CHECK_RX_SEQ_MASK 0x7fff
+#define DATA_AT_CMD_Q 5
+
+/* port->minor is assigned sequentially, but when used in code
+ * it must be unique among all ports for addressing.
+ */
+#define TTY_IPC_MINOR_BASE 100
+#define TTY_PORT_MINOR_BASE 250
+#define TTY_PORT_MINOR_INVALID -1
+
+static struct port_proxy *port_prox;
+
+#define for_each_proxy_port(i, p, proxy) \
+ for (i = 0, (p) = &(proxy)->ports[i]; \
+ i < (proxy)->port_number; \
+ i++, (p) = &(proxy)->ports[i])
+
+static struct port_ops dummy_port_ops;
+
+static struct t7xx_port md_ccci_ports[] = {
+ {0, 0, 0, 0, 0, 0, ID_CLDMA1, 0, &dummy_port_ops, 0xff, "dummy_port",},
+};
+
+static int port_netlink_send_msg(struct t7xx_port *port, int grp,
+ const char *buf, size_t len)
+{
+ struct port_proxy *pprox;
+ struct sk_buff *nl_skb;
+ struct nlmsghdr *nlh;
+
+ nl_skb = nlmsg_new(len, GFP_KERNEL);
+ if (!nl_skb)
+ return -ENOMEM;
+
+ nlh = nlmsg_put(nl_skb, 0, 1, NLMSG_DONE, len, 0);
+ if (!nlh) {
+ dev_err(port->dev, "could not add netlink message header\n");
+ nlmsg_free(nl_skb);
+ return -EFAULT;
+ }
+
+ /* Add new netlink message to the skb
+ * after checking if header+payload
+ * can be handled.
+ */
+ memcpy(nlmsg_data(nlh), buf, len);
+
+ pprox = port->port_proxy;
+ return netlink_broadcast(pprox->netlink_sock, nl_skb,
+ 0, grp, GFP_KERNEL);
+}
+
+static int port_netlink_init(void)
+{
+ port_prox->netlink_sock = netlink_kernel_create(&init_net, PORT_NOTIFY_PROTOCOL, NULL);
+
+ if (!port_prox->netlink_sock) {
+ dev_err(port_prox->dev, "failed to create netlink socket\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void port_netlink_uninit(void)
+{
+ if (port_prox->netlink_sock) {
+ netlink_kernel_release(port_prox->netlink_sock);
+ port_prox->netlink_sock = NULL;
+ }
+}
+
+static struct t7xx_port *proxy_get_port(int minor, enum ccci_ch ch)
+{
+ struct t7xx_port *port;
+ int i;
+
+ if (!port_prox)
+ return NULL;
+
+ for_each_proxy_port(i, port, port_prox) {
+ if (minor >= 0 && port->minor == minor)
+ return port;
+
+ if (ch != CCCI_INVALID_CH_ID && (port->rx_ch == ch || port->tx_ch == ch))
+ return port;
+ }
+
+ return NULL;
+}
+
+struct t7xx_port *port_proxy_get_port(int major, int minor)
+{
+ if (port_prox && port_prox->major == major)
+ return proxy_get_port(minor, CCCI_INVALID_CH_ID);
+
+ return NULL;
+}
+
+static inline struct t7xx_port *port_get_by_ch(enum ccci_ch ch)
+{
+ return proxy_get_port(TTY_PORT_MINOR_INVALID, ch);
+}
+
+/* Sequence numbering to track lost packets */
+void port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
+{
+ if (ccci_h && port) {
+ ccci_h->status &= ~HDR_FLD_SEQ;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_SEQ, port->seq_nums[MTK_OUT]);
+ ccci_h->status &= ~HDR_FLD_AST;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_AST, 1);
+ }
+}
+
+static u16 port_check_rx_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
+{
+ u16 channel, seq_num, assert_bit;
+
+ channel = FIELD_GET(HDR_FLD_CHN, ccci_h->status);
+ seq_num = FIELD_GET(HDR_FLD_SEQ, ccci_h->status);
+ assert_bit = FIELD_GET(HDR_FLD_AST, ccci_h->status);
+ if (assert_bit && port->seq_nums[MTK_IN] &&
+ ((seq_num - port->seq_nums[MTK_IN]) & CHECK_RX_SEQ_MASK) != 1) {
+ dev_err(port->dev, "channel %d seq number out-of-order %d->%d (data: %X, %X)\n",
+ channel, seq_num, port->seq_nums[MTK_IN],
+ ccci_h->data[0], ccci_h->data[1]);
+ }
+
+ return seq_num;
+}
+
+void port_proxy_reset(struct device *dev)
+{
+ struct t7xx_port *port;
+ int i;
+
+ if (!port_prox) {
+ dev_err(dev, "invalid port proxy\n");
+ return;
+ }
+
+ for_each_proxy_port(i, port, port_prox) {
+ port->seq_nums[MTK_IN] = -1;
+ port->seq_nums[MTK_OUT] = 0;
+ }
+}
+
+static inline int port_get_queue_no(struct t7xx_port *port)
+{
+ return ccci_fsm_get_md_state() == MD_STATE_EXCEPTION ?
+ port->txq_exp_index : port->txq_index;
+}
+
+static inline void port_struct_init(struct t7xx_port *port)
+{
+ INIT_LIST_HEAD(&port->entry);
+ INIT_LIST_HEAD(&port->queue_entry);
+ skb_queue_head_init(&port->rx_skb_list);
+ init_waitqueue_head(&port->rx_wq);
+ port->seq_nums[MTK_IN] = -1;
+ port->seq_nums[MTK_OUT] = 0;
+ atomic_set(&port->usage_cnt, 0);
+ port->port_proxy = port_prox;
+}
+
+static void port_adjust_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ struct ccci_header *ccci_h;
+
+ ccci_h = (struct ccci_header *)skb->data;
+ if (port->flags & PORT_F_USER_HEADER) { /* header provided by user */
+ /* CCCI_MON_CH should fall in here, as the header must be
+ * sent to md_init.
+ */
+ if (ccci_h->data[0] == CCCI_HEADER_NO_DATA) {
+ if (skb->len > sizeof(struct ccci_header)) {
+ dev_err_ratelimited(port->dev,
+ "recv unexpected data for %s, skb->len=%d\n",
+ port->name, skb->len);
+ skb_trim(skb, sizeof(struct ccci_header));
+ }
+ }
+ } else {
+ /* remove CCCI header */
+ skb_pull(skb, sizeof(struct ccci_header));
+ }
+}
+
+/**
+ * port_recv_skb() - receive skb from modem or HIF
+ * @port: port to use
+ * @skb: skb to use
+ *
+ * Used to receive native HIF RX data,
+ * which all share the same RX flow.
+ *
+ * Return: 0 for success or error code
+ */
+int port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ if (port->rx_skb_list.qlen < port->rx_length_th) {
+ struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
+
+ port->flags &= ~PORT_F_RX_FULLED;
+ if (port->flags & PORT_F_RX_ADJUST_HEADER)
+ port_adjust_skb(port, skb);
+
+ if (!(port->flags & PORT_F_RAW_DATA) &&
+ FIELD_GET(HDR_FLD_CHN, ccci_h->status) == CCCI_STATUS_RX) {
+ port->skb_handler(port, skb);
+ } else {
+ if (port->mtk_wwan_port)
+ wwan_port_rx(port->mtk_wwan_port, skb);
+ else
+ __skb_queue_tail(&port->rx_skb_list, skb);
+ }
+
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ wake_up_all(&port->rx_wq);
+ return 0;
+ }
+
+ port->flags |= PORT_F_RX_FULLED;
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ if (port->flags & PORT_F_RX_ALLOW_DROP) {
+ dev_err(port->dev, "port %s RX full, drop packet\n", port->name);
+ return -ENETDOWN;
+ }
+
+ return -ENOBUFS;
+}
+
+/**
+ * port_kthread_handler() - kthread handler for specific port
+ * @arg: port pointer
+ *
+ * Receive native HIF RX data,
+ * which all share the same RX flow.
+ *
+ * Return: Always 0 to kthread_run
+ */
+int port_kthread_handler(void *arg)
+{
+ while (!kthread_should_stop()) {
+ struct t7xx_port *port = arg;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ if (skb_queue_empty(&port->rx_skb_list) &&
+ wait_event_interruptible_locked_irq(port->rx_wq,
+ !skb_queue_empty(&port->rx_skb_list) ||
+ kthread_should_stop())) {
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ continue;
+ }
+
+ if (kthread_should_stop()) {
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ break;
+ }
+
+ skb = __skb_dequeue(&port->rx_skb_list);
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+
+ if (port->skb_handler)
+ port->skb_handler(port, skb);
+ }
+
+ return 0;
+}
+
+int port_write_room_to_md(struct t7xx_port *port)
+{
+ return cldma_write_room(port->path_id, port_get_queue_no(port));
+}
+
+int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool)
+{
+ struct ccci_header *ccci_h;
+ unsigned char tx_qno;
+ int ret;
+
+ ccci_h = (struct ccci_header *)(skb->data);
+ tx_qno = port_get_queue_no(port);
+ port_proxy_set_seq_num(port, (struct ccci_header *)ccci_h);
+ ret = cldma_send_skb(port->path_id, tx_qno, skb, from_pool, true);
+ if (ret) {
+ dev_err(port->dev, "failed to send skb, error: %d\n", ret);
+ } else {
+ /* Record the port seq_num after the data is sent to HIF.
+ * Only bits 0-14 are used, so overflow is not a concern.
+ */
+ port->seq_nums[MTK_OUT]++;
+ }
+
+ return ret;
+}
+
+int port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb, bool blocking)
+{
+ enum md_state md_state;
+ unsigned int fsm_state;
+
+ md_state = ccci_fsm_get_md_state();
+ fsm_state = ccci_fsm_get_current_state();
+ if (fsm_state != CCCI_FSM_PRE_START) {
+ if (md_state == MD_STATE_WAITING_FOR_HS1 ||
+ md_state == MD_STATE_WAITING_FOR_HS2) {
+ return -ENODEV;
+ }
+
+ if (md_state == MD_STATE_EXCEPTION &&
+ port->tx_ch != CCCI_MD_LOG_TX &&
+ port->tx_ch != CCCI_UART1_TX) {
+ return -ETXTBSY;
+ }
+
+ if (md_state == MD_STATE_STOPPED ||
+ md_state == MD_STATE_WAITING_TO_STOP ||
+ md_state == MD_STATE_INVALID) {
+ return -ENODEV;
+ }
+ }
+
+ return cldma_send_skb(port->path_id, port_get_queue_no(port),
+ skb, port->skb_from_pool, blocking);
+}
+
+static void proxy_setup_channel_mapping(void)
+{
+ struct t7xx_port *port;
+ int i, j;
+
+ /* init RX_CH=>port list mapping */
+ for (i = 0; i < ARRAY_SIZE(port_prox->rx_ch_ports); i++)
+ INIT_LIST_HEAD(&port_prox->rx_ch_ports[i]);
+ /* init queue_id=>port list mapping per HIF */
+ for (j = 0; j < ARRAY_SIZE(port_prox->queue_ports); j++) {
+ for (i = 0; i < ARRAY_SIZE(port_prox->queue_ports[j]); i++)
+ INIT_LIST_HEAD(&port_prox->queue_ports[j][i]);
+ }
+
+ /* setup port mapping */
+ for_each_proxy_port(i, port, port_prox) {
+ /* setup RX_CH=>port list mapping */
+ list_add_tail(&port->entry,
+ &port_prox->rx_ch_ports[port->rx_ch & CCCI_CH_ID_MASK]);
+ /* setup QUEUE_ID=>port list mapping */
+ list_add_tail(&port->queue_entry,
+ &port_prox->queue_ports[port->path_id][port->rxq_index]);
+ }
+}
+
+/* inject CCCI message to modem */
+void port_proxy_send_msg_to_md(int ch, unsigned int msg, unsigned int resv)
+{
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct ccci_header *ccci_h;
+ struct t7xx_port *port;
+ struct sk_buff *skb;
+ int ret;
+
+ port = port_get_by_ch(ch);
+ if (!port)
+ return;
+
+ skb = ccci_alloc_skb_from_pool(&port->mtk_dev->pools, sizeof(struct ccci_header),
+ GFS_BLOCKING);
+ if (!skb)
+ return;
+
+ if (ch == CCCI_CONTROL_TX) {
+ ccci_h = (struct ccci_header *)(skb->data);
+ ccci_h->data[0] = CCCI_HEADER_NO_DATA;
+ ccci_h->data[1] = sizeof(struct ctrl_msg_header) + CCCI_H_LEN;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, ch);
+ ccci_h->reserved = 0;
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + CCCI_H_LEN);
+ ctrl_msg_h->data_length = 0;
+ ctrl_msg_h->reserved = resv;
+ ctrl_msg_h->ctrl_msg_id = msg;
+ skb_put(skb, CCCI_H_LEN + sizeof(struct ctrl_msg_header));
+ } else {
+ ccci_h = skb_put(skb, sizeof(struct ccci_header));
+ ccci_h->data[0] = CCCI_HEADER_NO_DATA;
+ ccci_h->data[1] = msg;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, ch);
+ ccci_h->reserved = resv;
+ }
+
+ ret = port_proxy_send_skb(port, skb, port->skb_from_pool);
+ if (ret) {
+ dev_err(port->dev, "port %s failed to send to MD\n", port->name);
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ }
+}
+
+/**
+ * port_proxy_recv_skb_from_q() - receive raw data from dedicated queue
+ * @queue: CLDMA queue
+ * @skb: socket buffer
+ *
+ * Return: 0 for success or error code for drops
+ */
+static int port_proxy_recv_skb_from_q(struct cldma_queue *queue, struct sk_buff *skb)
+{
+ struct t7xx_port *port;
+ int ret = 0;
+
+ port = port_prox->dedicated_ports[queue->hif_id][queue->index];
+
+ if (skb && port->ops->recv_skb)
+ ret = port->ops->recv_skb(port, skb);
+
+ if (ret < 0 && ret != -ENOBUFS) {
+ dev_err(port->dev, "drop on RX ch %d, ret %d\n", port->rx_ch, ret);
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ return -ENETDOWN;
+ }
+
+ return ret;
+}
+
+/**
+ * port_proxy_dispatch_recv_skb() - dispatch received skb
+ * @queue: CLDMA queue
+ * @skb: socket buffer
+ *
+ * If recv_skb returns 0 or -ENETDOWN, it is the port's responsibility
+ * to free the skb and the caller must no longer reference it.
+ * If recv_skb returns any other error, the caller should free the skb.
+ *
+ * Return: 0 or greater for success, or negative error code
+ */
+static int port_proxy_dispatch_recv_skb(struct cldma_queue *queue, struct sk_buff *skb)
+{
+ struct list_head *port_list;
+ struct ccci_header *ccci_h;
+ struct t7xx_port *port;
+ u16 seq_num, channel;
+ int ret = -ENETDOWN;
+
+ if (!skb)
+ return -EINVAL;
+
+ ccci_h = (struct ccci_header *)skb->data;
+ channel = FIELD_GET(HDR_FLD_CHN, ccci_h->status);
+
+ if (channel >= CCCI_MAX_CH_NUM ||
+ ccci_fsm_get_md_state() == MD_STATE_INVALID)
+ goto err_exit;
+
+ port_list = &port_prox->rx_ch_ports[channel & CCCI_CH_ID_MASK];
+ list_for_each_entry(port, port_list, entry) {
+ if (queue->hif_id != port->path_id || channel != port->rx_ch)
+ continue;
+
+ /* Multicast is not supported, because one port may free
+ * or modify the skb before another port can process it.
+ * Some form of multicast could still be built on top of
+ * this if needed.
+ */
+ if (port->ops->recv_skb) {
+ seq_num = port_check_rx_seq_num(port, ccci_h);
+ ret = port->ops->recv_skb(port, skb);
+ /* The sequence number is updated if the packet
+ * is successfully stored in the RX buffer or
+ * dropped.
+ */
+ if (ret == -ENOBUFS)
+ return ret;
+
+ port->seq_nums[MTK_IN] = seq_num;
+ }
+
+ break;
+ }
+
+err_exit:
+ if (ret < 0) {
+ struct skb_pools *pools;
+
+ pools = &queue->md->mtk_dev->pools;
+ ccci_free_skb(pools, skb);
+ return -ENETDOWN;
+ }
+
+ return 0;
+}
+
+static int port_proxy_recv_skb(struct cldma_queue *queue, struct sk_buff *skb)
+{
+ if (queue->q_type == CLDMA_SHARED_Q)
+ return port_proxy_dispatch_recv_skb(queue, skb);
+
+ return port_proxy_recv_skb_from_q(queue, skb);
+}
+
+/**
+ * port_proxy_md_status_notify() - notify all ports of state
+ * @state: state
+ *
+ * Called by ccci_fsm,
+ * Used to dispatch modem status for all ports,
+ * which want to know MD state transition.
+ */
+void port_proxy_md_status_notify(unsigned int state)
+{
+ struct t7xx_port *port;
+ int i;
+
+ if (!port_prox)
+ return;
+
+ for_each_proxy_port(i, port, port_prox)
+ if (port->ops->md_state_notify)
+ port->ops->md_state_notify(port, state);
+}
+
+static int proxy_register_char_dev(void)
+{
+ dev_t dev = 0;
+ int ret;
+
+ if (port_prox->major) {
+ dev = MKDEV(port_prox->major, port_prox->minor_base);
+ ret = register_chrdev_region(dev, TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
+ } else {
+ ret = alloc_chrdev_region(&dev, port_prox->minor_base,
+ TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
+ if (ret)
+ dev_err(port_prox->dev, "failed to alloc chrdev region, ret=%d\n", ret);
+
+ port_prox->major = MAJOR(dev);
+ }
+
+ return ret;
+}
+
+static void proxy_init_all_ports(struct mtk_modem *md)
+{
+ struct t7xx_port *port;
+ int i;
+
+ for_each_proxy_port(i, port, port_prox) {
+ port_struct_init(port);
+
+ port->major = port_prox->major;
+ port->minor_base = port_prox->minor_base;
+ port->mtk_dev = md->mtk_dev;
+ port->dev = &md->mtk_dev->pdev->dev;
+ spin_lock_init(&port->port_update_lock);
+ spin_lock(&port->port_update_lock);
+ mutex_init(&port->tx_mutex_lock);
+ if (port->flags & PORT_F_CHAR_NODE_SHOW)
+ port->chan_enable = CCCI_CHAN_ENABLE;
+ else
+ port->chan_enable = CCCI_CHAN_DISABLE;
+
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ if (port->ops->init)
+ port->ops->init(port);
+
+ if (port->flags & PORT_F_RAW_DATA)
+ port_prox->dedicated_ports[port->path_id][port->rxq_index] = port;
+ }
+
+ proxy_setup_channel_mapping();
+}
+
+static int proxy_alloc(struct mtk_modem *md)
+{
+ int ret;
+
+ port_prox = devm_kzalloc(&md->mtk_dev->pdev->dev, sizeof(*port_prox), GFP_KERNEL);
+ if (!port_prox)
+ return -ENOMEM;
+
+ ret = proxy_register_char_dev();
+ if (ret)
+ return ret;
+
+ port_prox->dev = &md->mtk_dev->pdev->dev;
+ port_prox->ports = md_ccci_ports;
+ port_prox->port_number = ARRAY_SIZE(md_ccci_ports);
+ proxy_init_all_ports(md);
+
+ return 0;
+};
+
+struct t7xx_port *port_get_by_minor(int minor)
+{
+ return proxy_get_port(minor, CCCI_INVALID_CH_ID);
+}
+
+struct t7xx_port *port_get_by_name(char *port_name)
+{
+ struct t7xx_port *port;
+ int i;
+
+ if (!port_prox)
+ return NULL;
+
+ for_each_proxy_port(i, port, port_prox)
+ if (!strncmp(port->name, port_name, strlen(port->name)))
+ return port;
+
+ return NULL;
+}
+
+int port_proxy_broadcast_state(struct t7xx_port *port, int state)
+{
+ char msg[PORT_NETLINK_MSG_MAX_PAYLOAD];
+
+ if (state >= MTK_PORT_STATE_INVALID)
+ return -EINVAL;
+
+ switch (state) {
+ case MTK_PORT_STATE_ENABLE:
+ snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "enable %s", port->name);
+ break;
+
+ case MTK_PORT_STATE_DISABLE:
+ snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "disable %s", port->name);
+ break;
+
+ default:
+ snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "invalid operation");
+ break;
+ }
+
+ return port_netlink_send_msg(port, PORT_STATE_BROADCAST_GROUP, msg, strlen(msg) + 1);
+}
+
+/**
+ * port_proxy_init() - init ports
+ * @md: modem
+ *
+ * Called by CCCI modem,
+ * used to create all CCCI port instances.
+ *
+ * Return: 0 for success or error
+ */
+int port_proxy_init(struct mtk_modem *md)
+{
+ int ret;
+
+ ret = proxy_alloc(md);
+ if (ret)
+ return ret;
+
+ ret = port_netlink_init();
+ if (ret)
+ goto err_netlink;
+
+ cldma_set_recv_skb(ID_CLDMA1, port_proxy_recv_skb);
+
+ return 0;
+
+err_netlink:
+ port_proxy_uninit();
+
+ return ret;
+}
+
+void port_proxy_uninit(void)
+{
+ struct t7xx_port *port;
+ int i;
+
+ for_each_proxy_port(i, port, port_prox)
+ if (port->ops->uninit)
+ port->ops->uninit(port);
+
+ unregister_chrdev_region(MKDEV(port_prox->major, port_prox->minor_base),
+ TTY_IPC_MINOR_BASE);
+ port_netlink_uninit();
+}
+
+/**
+ * port_proxy_node_control() - create/remove node
+ * @dev: device
+ * @port_msg: message
+ *
+ * Used to control create/remove device node.
+ *
+ * Return: 0 for success or error
+ */
+int port_proxy_node_control(struct device *dev, struct port_msg *port_msg)
+{
+ struct t7xx_port *port;
+ unsigned int ports, i;
+ unsigned int version;
+
+ version = FIELD_GET(PORT_MSG_VERSION, port_msg->info);
+ if (version != PORT_ENUM_VER ||
+ port_msg->head_pattern != PORT_ENUM_HEAD_PATTERN ||
+ port_msg->tail_pattern != PORT_ENUM_TAIL_PATTERN) {
+ dev_err(dev, "port message enumeration invalid %x:%x:%x\n",
+ version, port_msg->head_pattern, port_msg->tail_pattern);
+ return -EFAULT;
+ }
+
+ ports = FIELD_GET(PORT_MSG_PRT_CNT, port_msg->info);
+
+ for (i = 0; i < ports; i++) {
+ u32 *port_info = (u32 *)(port_msg->data + sizeof(*port_info) * i);
+ unsigned int en_flag = FIELD_GET(PORT_INFO_ENFLG, *port_info);
+ unsigned int ch_id = FIELD_GET(PORT_INFO_CH_ID, *port_info);
+
+ port = port_get_by_ch(ch_id);
+
+ if (!port) {
+ dev_warn(dev, "Port:%x not found\n", ch_id);
+ continue;
+ }
+
+ if (ccci_fsm_get_md_state() == MD_STATE_READY) {
+ if (en_flag == CCCI_CHAN_ENABLE) {
+ if (port->ops->enable_chl)
+ port->ops->enable_chl(port);
+ } else {
+ if (port->ops->disable_chl)
+ port->ops->disable_chl(port);
+ }
+ } else {
+ port->chan_enable = en_flag;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
new file mode 100644
index 000000000000..bd700321ca60
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_PORT_PROXY_H__
+#define __T7XX_PORT_PROXY_H__
+
+#include <linux/bits.h>
+#include <linux/device.h>
+#include <linux/types.h>
+#include <net/sock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_port.h"
+
+/* CCCI logic channel enable & disable flag */
+#define CCCI_CHAN_ENABLE 1
+#define CCCI_CHAN_DISABLE 0
+
+#define MTK_MAX_QUEUE_NUM 16
+#define MTK_PORT_STATE_ENABLE 0
+#define MTK_PORT_STATE_DISABLE 1
+#define MTK_PORT_STATE_INVALID 2
+
+#define MAX_RX_QUEUE_LENGTH 32
+#define MAX_CTRL_QUEUE_LENGTH 16
+
+#define CCCI_MTU 3568 /* 3.5 KiB - 16 */
+#define CLDMA_TXQ_MTU MTK_SKB_4K
+
+struct port_proxy {
+ int port_number;
+ unsigned int major;
+ unsigned int minor_base;
+ struct t7xx_port *ports;
+ struct t7xx_port *dedicated_ports[CLDMA_NUM][MTK_MAX_QUEUE_NUM];
+ /* port list of each RX channel, for RX dispatching */
+ struct list_head rx_ch_ports[CCCI_MAX_CH_ID];
+ /* port list of each queue for receiving queue status dispatching */
+ struct list_head queue_ports[CLDMA_NUM][MTK_MAX_QUEUE_NUM];
+ struct sock *netlink_sock;
+ struct device *dev;
+};
+
+struct ctrl_msg_header {
+ u32 ctrl_msg_id;
+ u32 reserved;
+ u32 data_length;
+ u8 data[];
+};
+
+struct port_msg {
+ u32 head_pattern;
+ u32 info;
+ u32 tail_pattern;
+ u8 data[]; /* port set info */
+};
+
+#define PORT_INFO_RSRVD GENMASK(31, 16)
+#define PORT_INFO_ENFLG GENMASK(15, 15)
+#define PORT_INFO_CH_ID GENMASK(14, 0)
+
+#define PORT_MSG_VERSION GENMASK(31, 16)
+#define PORT_MSG_PRT_CNT GENMASK(15, 0)
+
+#define PORT_ENUM_VER 0
+#define PORT_ENUM_HEAD_PATTERN 0x5a5a5a5a
+#define PORT_ENUM_TAIL_PATTERN 0xa5a5a5a5
+#define PORT_ENUM_VER_MISMATCH 0x00657272
+
+int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool);
+void port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
+int port_proxy_node_control(struct device *dev, struct port_msg *port_msg);
+void port_proxy_reset(struct device *dev);
+int port_proxy_broadcast_state(struct t7xx_port *port, int state);
+void port_proxy_send_msg_to_md(int ch, unsigned int msg, unsigned int resv);
+void port_proxy_uninit(void);
+int port_proxy_init(struct mtk_modem *md);
+void port_proxy_md_status_notify(unsigned int state);
+struct t7xx_port *port_proxy_get_port(int major, int minor);
+
+#endif /* __T7XX_PORT_PROXY_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index fb81e894d5f9..4f9e8cfa2f94 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -24,6 +24,8 @@
#include "t7xx_monitor.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"

#define FSM_DRM_DISABLE_DELAY_MS 200
#define FSM_EX_REASON GENMASK(23, 16)
@@ -84,6 +86,9 @@ void fsm_broadcast_state(struct ccci_fsm_ctl *ctl, enum md_state state)

ctl->md_state = state;

+ /* Update the ports first, otherwise sending a message on HS2 may fail */
+ port_proxy_md_status_notify(state);
+
fsm_state_notify(state);
}

@@ -141,7 +146,8 @@ static void fsm_flush_queue(struct ccci_fsm_ctl *ctl)
static void fsm_routine_exception(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd,
enum ccci_ex_reason reason)
{
- bool rec_ok = false;
+ struct t7xx_port *log_port, *meta_port;
+ bool rec_ok = false, pass = false;
struct ccci_fsm_event *event;
unsigned long flags;
struct device *dev;
@@ -202,11 +208,29 @@ static void fsm_routine_exception(struct ccci_fsm_ctl *ctl, struct ccci_fsm_comm
if (!list_empty(&ctl->event_queue)) {
event = list_first_entry(&ctl->event_queue,
struct ccci_fsm_event, entry);
- if (event->event_id == CCCI_EVENT_MD_EX_PASS)
+ if (event->event_id == CCCI_EVENT_MD_EX_PASS) {
+ pass = true;
fsm_finish_event(ctl, event);
+ }
}

spin_unlock_irqrestore(&ctl->event_lock, flags);
+ if (pass) {
+ log_port = port_get_by_name("ttyCMdLog");
+ if (log_port)
+ log_port->ops->enable_chl(log_port);
+ else
+ dev_err(dev, "ttyCMdLog port not found\n");
+
+ meta_port = port_get_by_name("ttyCMdMeta");
+ if (meta_port)
+ meta_port->ops->enable_chl(meta_port);
+ else
+ dev_err(dev, "ttyCMdMeta port not found\n");
+
+ break;
+ }
+
cnt++;
msleep(EVENT_POLL_INTERVAL_MS);
}
--
2.17.1

2021-11-01 03:58:01

by Martinez, Ricardo

Subject: [PATCH v2 03/14] net: wwan: t7xx: Add core components

From: Haijun Liu <[email protected]>

Register the t7xx device driver with the kernel. Set up all the core
components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
modem control operations, modem state machine, and build
infrastructure.

* PCIe layer code implements driver probe and removal.
* MHCCIF provides interrupt channels to communicate events
such as handshake, PM and port enumeration.
* Modem control implements the entry point for modem init,
reset and exit.
* The modem status monitor is a state machine used by modem control
to complete initialization and stop. It is also used to propagate
exception events reported by other components.

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/Kconfig | 14 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/t7xx/Makefile | 13 +
drivers/net/wwan/t7xx/t7xx_common.h | 76 +++
drivers/net/wwan/t7xx/t7xx_mhccif.c | 105 ++++
drivers/net/wwan/t7xx/t7xx_mhccif.h | 35 ++
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 486 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 90 ++++
drivers/net/wwan/t7xx/t7xx_monitor.h | 144 +++++
drivers/net/wwan/t7xx/t7xx_pci.c | 238 ++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 66 +++
drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 277 ++++++++++
drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 36 ++
drivers/net/wwan/t7xx/t7xx_reg.h | 398 ++++++++++++++
drivers/net/wwan/t7xx/t7xx_skb_util.c | 362 +++++++++++++
drivers/net/wwan/t7xx/t7xx_skb_util.h | 110 ++++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 598 +++++++++++++++++++++
17 files changed, 3049 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/Makefile
create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_monitor.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
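
For orientation, the MHCCIF and RGU interrupt code added below splits
the work between a hard IRQ handler that returns IRQ_WAKE_THREAD and a
threaded handler. A standalone sketch of that pattern, using the
generic request_threaded_irq() API rather than the driver's own
dispatch table (names are illustrative):

    #include <linux/interrupt.h>

    static irqreturn_t example_isr(int irq, void *data)
    {
        /* Quick check/ack in hard IRQ context, defer the real work */
        return IRQ_WAKE_THREAD;
    }

    static irqreturn_t example_isr_thread(int irq, void *data)
    {
        /* Sleeping operations (register reads, queue_work, ...) are fine here */
        return IRQ_HANDLED;
    }

    static int example_register_irq(int irq, void *ctx)
    {
        return request_threaded_irq(irq, example_isr, example_isr_thread,
                                    IRQF_ONESHOT, "example", ctx);
    }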

diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
index 17543be14665..98e29616c025 100644
--- a/drivers/net/wwan/Kconfig
+++ b/drivers/net/wwan/Kconfig
@@ -80,6 +80,20 @@ config IOSM

If unsure, say N.

+config MTK_T7XX
+ tristate "MediaTek PCIe 5G WWAN modem T7XX device"
+ depends on PCI
+ help
+ Enables the MediaTek PCIe-based 5G WWAN modem (T700 series) device.
+ Adapts the WWAN framework and provides a network interface like
+ wwan0 and control port interfaces like wwan0at0 (AT protocol) and
+ wwan0mbim0 (MBIM protocol).
+
+ To compile this driver as a module, choose M here: the module will be
+ called mtk_t7xx.
+
+ If unsure, say N.
+
endif # WWAN

endmenu
diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
index fe51feedac21..ae913797efa4 100644
--- a/drivers/net/wwan/Makefile
+++ b/drivers/net/wwan/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o
obj-$(CONFIG_MHI_WWAN_MBIM) += mhi_wwan_mbim.o
obj-$(CONFIG_RPMSG_WWAN_CTRL) += rpmsg_wwan_ctrl.o
obj-$(CONFIG_IOSM) += iosm/
+obj-$(CONFIG_MTK_T7XX) += t7xx/
diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
new file mode 100644
index 000000000000..dc0e6e025d55
--- /dev/null
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+ccflags-y += -Werror
+
+obj-${CONFIG_MTK_T7XX} := mtk_t7xx.o
+mtk_t7xx-y:= t7xx_pci.o \
+ t7xx_pcie_mac.o \
+ t7xx_mhccif.o \
+ t7xx_state_monitor.o \
+ t7xx_modem_ops.o \
+ t7xx_skb_util.o \
+ t7xx_cldma.o \
+ t7xx_hif_cldma.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_common.h b/drivers/net/wwan/t7xx/t7xx_common.h
new file mode 100644
index 000000000000..816acddb1481
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_common.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_COMMON_H__
+#define __T7XX_COMMON_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+struct ccci_header {
+ /* Do not assume data[1] is the data length on RX */
+ u32 data[2];
+ u32 status;
+ u32 reserved;
+};
+
+enum txq_type {
+ TXQ_NORMAL,
+ TXQ_FAST,
+ TXQ_TYPE_CNT
+};
+
+enum direction {
+ MTK_IN,
+ MTK_OUT,
+ MTK_INOUT,
+};
+
+#define HDR_FLD_AST BIT(31)
+#define HDR_FLD_SEQ GENMASK(30, 16)
+#define HDR_FLD_CHN GENMASK(15, 0)
+
+#define CCCI_H_LEN 16
+/* CCCI_H_LEN + reserved space that is used in exception flow */
+#define CCCI_H_ELEN 128
+
+#define CCCI_HEADER_NO_DATA 0xffffffff
+
+/* Control identification numbers for AP<->MD messages */
+#define CTL_ID_HS1_MSG 0x0
+#define CTL_ID_HS2_MSG 0x1
+#define CTL_ID_HS3_MSG 0x2
+#define CTL_ID_MD_EX 0x4
+#define CTL_ID_DRV_VER_ERROR 0x5
+#define CTL_ID_MD_EX_ACK 0x6
+#define CTL_ID_MD_EX_PASS 0x8
+#define CTL_ID_PORT_ENUM 0x9
+
+/* Modem exception check identification number */
+#define MD_EX_CHK_ID 0x45584350
+/* Modem exception check acknowledge identification number */
+#define MD_EX_CHK_ACK_ID 0x45524543
+
+enum md_state {
+ MD_STATE_INVALID, /* no traffic */
+ MD_STATE_GATED, /* no traffic */
+ MD_STATE_WAITING_FOR_HS1,
+ MD_STATE_WAITING_FOR_HS2,
+ MD_STATE_READY,
+ MD_STATE_EXCEPTION,
+ MD_STATE_RESET, /* no traffic */
+ MD_STATE_WAITING_TO_STOP,
+ MD_STATE_STOPPED,
+};
+
+#endif
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
new file mode 100644
index 000000000000..927aeb39e313
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/completion.h>
+#include <linux/pci.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+
+static void mhccif_clear_interrupts(struct mtk_pci_dev *mtk_dev, u32 mask)
+{
+ void __iomem *mhccif_pbase;
+
+ mhccif_pbase = mtk_dev->base_addr.mhccif_rc_base;
+
+ /* Clear level 2 interrupt */
+ iowrite32(mask, mhccif_pbase + REG_EP2RC_SW_INT_ACK);
+
+ /* Read back to ensure write is done */
+ mhccif_read_sw_int_sts(mtk_dev);
+
+ /* Clear level 1 interrupt */
+ mtk_pcie_mac_clear_int_status(mtk_dev, MHCCIF_INT);
+}
+
+static irqreturn_t mhccif_isr_thread(int irq, void *data)
+{
+ struct mtk_pci_dev *mtk_dev;
+ u32 int_sts;
+
+ mtk_dev = data;
+
+ /* Disable ASPM L1.1/L1.2 so the link stays out of
+ * low-power states while the interrupt is handled.
+ */
+ iowrite32(L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1),
+ IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ int_sts = mhccif_read_sw_int_sts(mtk_dev);
+ if (int_sts & mtk_dev->mhccif_bitmask)
+ mtk_pci_mhccif_isr(mtk_dev);
+
+ /* Clear level 2 and level 1 interrupts */
+ mhccif_clear_interrupts(mtk_dev, int_sts);
+
+ /* Enable corresponding interrupt */
+ mtk_pcie_mac_set_int(mtk_dev, MHCCIF_INT);
+ return IRQ_HANDLED;
+}
+
+u32 mhccif_read_sw_int_sts(struct mtk_pci_dev *mtk_dev)
+{
+ return ioread32(mtk_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_STS);
+}
+
+void mhccif_mask_set(struct mtk_pci_dev *mtk_dev, u32 val)
+{
+ iowrite32(val, mtk_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_SET);
+}
+
+void mhccif_mask_clr(struct mtk_pci_dev *mtk_dev, u32 val)
+{
+ iowrite32(val, mtk_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_CLR);
+}
+
+u32 mhccif_mask_get(struct mtk_pci_dev *mtk_dev)
+{
+ /* mtk_dev is validated in calling function */
+ return ioread32(mtk_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK);
+}
+
+static irqreturn_t mhccif_isr_handler(int irq, void *data)
+{
+ return IRQ_WAKE_THREAD;
+}
+
+void mhccif_init(struct mtk_pci_dev *mtk_dev)
+{
+ mtk_dev->base_addr.mhccif_rc_base = mtk_dev->base_addr.pcie_ext_reg_base +
+ MHCCIF_RC_DEV_BASE -
+ mtk_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+ /* Register the MHCCIF interrupt handler and interrupt thread */
+ mtk_dev->intr_handler[MHCCIF_INT] = mhccif_isr_handler;
+ mtk_dev->intr_thread[MHCCIF_INT] = mhccif_isr_thread;
+ mtk_dev->callback_param[MHCCIF_INT] = mtk_dev;
+}
+
+void mhccif_h2d_swint_trigger(struct mtk_pci_dev *mtk_dev, u32 channel)
+{
+ void __iomem *mhccif_pbase;
+
+ mhccif_pbase = mtk_dev->base_addr.mhccif_rc_base;
+ iowrite32(BIT(channel), mhccif_pbase + REG_RC2EP_SW_BSY);
+ iowrite32(channel, mhccif_pbase + REG_RC2EP_SW_TCHNUM);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.h b/drivers/net/wwan/t7xx/t7xx_mhccif.h
new file mode 100644
index 000000000000..930c20f1ec78
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_MHCCIF_H__
+#define __T7XX_MHCCIF_H__
+
+#include <linux/types.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define D2H_SW_INT_MASK (D2H_INT_EXCEPTION_INIT | \
+ D2H_INT_EXCEPTION_INIT_DONE | \
+ D2H_INT_EXCEPTION_CLEARQ_DONE | \
+ D2H_INT_EXCEPTION_ALLQ_RESET | \
+ D2H_INT_PORT_ENUM | \
+ D2H_INT_ASYNC_MD_HK)
+
+void mhccif_mask_set(struct mtk_pci_dev *mtk_dev, u32 val);
+void mhccif_mask_clr(struct mtk_pci_dev *mtk_dev, u32 val);
+u32 mhccif_mask_get(struct mtk_pci_dev *mtk_dev);
+void mhccif_init(struct mtk_pci_dev *mtk_dev);
+u32 mhccif_read_sw_int_sts(struct mtk_pci_dev *mtk_dev);
+void mhccif_h2d_swint_trigger(struct mtk_pci_dev *mtk_dev, u32 channel);
+
+#endif /*__T7XX_MHCCIF_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
new file mode 100644
index 000000000000..b80a36e09523
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -0,0 +1,486 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/acpi.h>
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+
+#define RGU_RESET_DELAY_US 20
+#define PORT_RESET_DELAY_US 2000
+
+enum mtk_feature_support_type {
+ MTK_FEATURE_DOES_NOT_EXIST,
+ MTK_FEATURE_NOT_SUPPORTED,
+ MTK_FEATURE_MUST_BE_SUPPORTED,
+};
+
+static inline unsigned int get_interrupt_status(struct mtk_pci_dev *mtk_dev)
+{
+ return mhccif_read_sw_int_sts(mtk_dev) & D2H_SW_INT_MASK;
+}
+
+/**
+ * mtk_pci_mhccif_isr() - Process MHCCIF interrupts
+ * @mtk_dev: MTK device
+ *
+ * Check the interrupt status, and queue commands accordingly
+ *
+ * Return: 0 on success or -EINVAL on failure
+ */
+int mtk_pci_mhccif_isr(struct mtk_pci_dev *mtk_dev)
+{
+ struct md_sys_info *md_info;
+ struct ccci_fsm_ctl *ctl;
+ struct mtk_modem *md;
+ unsigned int int_sta;
+ unsigned long flags;
+ u32 mask;
+
+ md = mtk_dev->md;
+ ctl = fsm_get_entry();
+ if (!ctl) {
+ dev_err(&mtk_dev->pdev->dev,
+ "process MHCCIF interrupt before modem monitor was initialized\n");
+ return -EINVAL;
+ }
+
+ md_info = md->md_info;
+ spin_lock_irqsave(&md_info->exp_spinlock, flags);
+ int_sta = get_interrupt_status(mtk_dev);
+ md_info->exp_id |= int_sta;
+
+ if (md_info->exp_id & D2H_INT_PORT_ENUM) {
+ md_info->exp_id &= ~D2H_INT_PORT_ENUM;
+ if (ctl->curr_state == CCCI_FSM_INIT ||
+ ctl->curr_state == CCCI_FSM_PRE_START ||
+ ctl->curr_state == CCCI_FSM_STOPPED)
+ ccci_fsm_recv_md_interrupt(MD_IRQ_PORT_ENUM);
+ }
+
+ if (md_info->exp_id & D2H_INT_EXCEPTION_INIT) {
+ if (ctl->md_state == MD_STATE_INVALID ||
+ ctl->md_state == MD_STATE_WAITING_FOR_HS1 ||
+ ctl->md_state == MD_STATE_WAITING_FOR_HS2 ||
+ ctl->md_state == MD_STATE_READY) {
+ md_info->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+ ccci_fsm_recv_md_interrupt(MD_IRQ_CCIF_EX);
+ }
+ } else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) {
+ /* start the handshake if the MD has not asserted an exception */
+ mask = mhccif_mask_get(mtk_dev);
+ if ((md_info->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask & D2H_INT_ASYNC_MD_HK)) {
+ md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ queue_work(md->handshake_wq, &md->handshake_work);
+ }
+ }
+
+ spin_unlock_irqrestore(&md_info->exp_spinlock, flags);
+
+ return 0;
+}
+
+static void clr_device_irq_via_pcie(struct mtk_pci_dev *mtk_dev)
+{
+ struct mtk_addr_base *pbase_addr;
+ void __iomem *rgu_pciesta_reg;
+
+ pbase_addr = &mtk_dev->base_addr;
+ rgu_pciesta_reg = pbase_addr->pcie_ext_reg_base + TOPRGU_CH_PCIE_IRQ_STA -
+ pbase_addr->pcie_dev_reg_trsl_addr;
+
+ /* clear RGU PCIe IRQ state */
+ iowrite32(ioread32(rgu_pciesta_reg), rgu_pciesta_reg);
+}
+
+void mtk_clear_rgu_irq(struct mtk_pci_dev *mtk_dev)
+{
+ /* clear L2 */
+ clr_device_irq_via_pcie(mtk_dev);
+ /* clear L1 */
+ mtk_pcie_mac_clear_int_status(mtk_dev, SAP_RGU_INT);
+}
+
+static int mtk_acpi_reset(struct mtk_pci_dev *mtk_dev, char *fn_name)
+{
+#ifdef CONFIG_ACPI
+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ acpi_status acpi_ret;
+ struct device *dev;
+ acpi_handle handle;
+
+ dev = &mtk_dev->pdev->dev;
+
+ if (acpi_disabled) {
+ dev_err(dev, "acpi function isn't enabled\n");
+ return -EFAULT;
+ }
+
+ handle = ACPI_HANDLE(dev);
+ if (!handle) {
+ dev_err(dev, "acpi handle isn't found\n");
+ return -EFAULT;
+ }
+
+ if (!acpi_has_method(handle, fn_name)) {
+ dev_err(dev, "%s method isn't found\n", fn_name);
+ return -EFAULT;
+ }
+
+ acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer);
+ if (ACPI_FAILURE(acpi_ret)) {
+ dev_err(dev, "%s method fail: %s\n", fn_name, acpi_format_exception(acpi_ret));
+ return -EFAULT;
+ }
+#endif
+ return 0;
+}
+
+int mtk_acpi_fldr_func(struct mtk_pci_dev *mtk_dev)
+{
+ return mtk_acpi_reset(mtk_dev, "_RST");
+}
+
+static void reset_device_via_pmic(struct mtk_pci_dev *mtk_dev)
+{
+ unsigned int val;
+
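+ /* The reset-type bits in PCIE_MISC_DEV_STATUS select which ACPI reset
+ * method is invoked: MRST._RST for PLDR or _RST for FLDR.
+ */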
+ val = ioread32(IREG_BASE(mtk_dev) + PCIE_MISC_DEV_STATUS);
+
+ if (val & MISC_RESET_TYPE_PLDR)
+ mtk_acpi_reset(mtk_dev, "MRST._RST");
+ else if (val & MISC_RESET_TYPE_FLDR)
+ mtk_acpi_fldr_func(mtk_dev);
+}
+
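+/* Threaded half of the RGU ISR: delay briefly, then reset the device
+ * through the PMIC/ACPI path.
+ */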
+static irqreturn_t rgu_isr_thread(int irq, void *data)
+{
+ struct mtk_pci_dev *mtk_dev;
+ struct mtk_modem *modem;
+
+ mtk_dev = data;
+ modem = mtk_dev->md;
+
+ msleep(RGU_RESET_DELAY_MS);
+ reset_device_via_pmic(modem->mtk_dev);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t rgu_isr_handler(int irq, void *data)
+{
+ struct mtk_pci_dev *mtk_dev;
+ struct mtk_modem *modem;
+
+ mtk_dev = data;
+ modem = mtk_dev->md;
+
+ mtk_clear_rgu_irq(mtk_dev);
+
+ if (!mtk_dev->rgu_pci_irq_en)
+ return IRQ_HANDLED;
+
+ atomic_set(&modem->rgu_irq_asserted, 1);
+ mtk_pcie_mac_clear_int(mtk_dev, SAP_RGU_INT);
+ return IRQ_WAKE_THREAD;
+}
+
+static void mtk_pcie_register_rgu_isr(struct mtk_pci_dev *mtk_dev)
+{
+ /* Register the RGU ISR with the PCIe MAC */
+ mtk_pcie_mac_clear_int(mtk_dev, SAP_RGU_INT);
+ mtk_pcie_mac_clear_int_status(mtk_dev, SAP_RGU_INT);
+
+ mtk_dev->intr_handler[SAP_RGU_INT] = rgu_isr_handler;
+ mtk_dev->intr_thread[SAP_RGU_INT] = rgu_isr_thread;
+ mtk_dev->callback_param[SAP_RGU_INT] = mtk_dev;
+ mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
+}
+
+static void md_exception(struct mtk_modem *md, enum hif_ex_stage stage)
+{
+ struct mtk_pci_dev *mtk_dev;
+
+ mtk_dev = md->mtk_dev;
+
+ if (stage == HIF_EX_CLEARQ_DONE)
+ /* Give DHL time to flush data.
+ * This is an empirical value that ensures
+ * DHL has enough time to flush all the data.
+ */
+ msleep(PORT_RESET_DELAY_MS);
+
+ cldma_exception(ID_CLDMA1, stage);
+
+ if (stage == HIF_EX_INIT)
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_EXCEPTION_ACK);
+ else if (stage == HIF_EX_CLEARQ_DONE)
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_EXCEPTION_CLEARQ_ACK);
+}
+
+static int wait_hif_ex_hk_event(struct mtk_modem *md, int event_id)
+{
+ struct md_sys_info *md_info;
+ int sleep_time = 10;
+ int retries = 500; /* MD timeout is 5s */
+
+ md_info = md->md_info;
+ do {
+ if (md_info->exp_id & event_id)
+ return 0;
+
+ msleep(sleep_time);
+ } while (--retries);
+
+ return -EFAULT;
+}
+
+static void md_sys_sw_init(struct mtk_pci_dev *mtk_dev)
+{
+ /* Register the MHCCIF isr for MD exception, port enum and
+ * async handshake notifications.
+ */
+ mhccif_mask_set(mtk_dev, D2H_SW_INT_MASK);
+ mtk_dev->mhccif_bitmask = D2H_SW_INT_MASK;
+ mhccif_mask_clr(mtk_dev, D2H_INT_PORT_ENUM);
+
+ /* register RGU irq handler for sAP exception notification */
+ mtk_dev->rgu_pci_irq_en = true;
+ mtk_pcie_register_rgu_isr(mtk_dev);
+}
+
+static void md_hk_wq(struct work_struct *work)
+{
+ struct ccci_fsm_ctl *ctl;
+ struct mtk_modem *md;
+
+ ctl = fsm_get_entry();
+
+ cldma_switch_cfg(ID_CLDMA1);
+ cldma_start(ID_CLDMA1);
+ fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
+ md = container_of(work, struct mtk_modem, handshake_work);
+ atomic_set(&md->core_md.ready, 1);
+ wake_up(&ctl->async_hk_wq);
+}
+
+void mtk_md_event_notify(struct mtk_modem *md, enum md_event_id evt_id)
+{
+ struct md_sys_info *md_info;
+ void __iomem *mhccif_base;
+ struct ccci_fsm_ctl *ctl;
+ unsigned int int_sta;
+ unsigned long flags;
+
+ ctl = fsm_get_entry();
+ md_info = md->md_info;
+
+ switch (evt_id) {
+ case FSM_PRE_START:
+ mhccif_mask_clr(md->mtk_dev, D2H_INT_PORT_ENUM);
+ break;
+
+ case FSM_START:
+ mhccif_mask_set(md->mtk_dev, D2H_INT_PORT_ENUM);
+ spin_lock_irqsave(&md_info->exp_spinlock, flags);
+ int_sta = get_interrupt_status(md->mtk_dev);
+ md_info->exp_id |= int_sta;
+ if (md_info->exp_id & D2H_INT_EXCEPTION_INIT) {
+ atomic_set(&ctl->exp_flg, 1);
+ md_info->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+ md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ } else if (atomic_read(&ctl->exp_flg)) {
+ md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ } else if (md_info->exp_id & D2H_INT_ASYNC_MD_HK) {
+ queue_work(md->handshake_wq, &md->handshake_work);
+ md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ mhccif_base = md->mtk_dev->base_addr.mhccif_rc_base;
+ iowrite32(D2H_INT_ASYNC_MD_HK, mhccif_base + REG_EP2RC_SW_INT_ACK);
+ mhccif_mask_set(md->mtk_dev, D2H_INT_ASYNC_MD_HK);
+ } else {
+ /* unmask async handshake interrupt */
+ mhccif_mask_clr(md->mtk_dev, D2H_INT_ASYNC_MD_HK);
+ }
+
+ spin_unlock_irqrestore(&md_info->exp_spinlock, flags);
+ /* unmask exception interrupt */
+ mhccif_mask_clr(md->mtk_dev,
+ D2H_INT_EXCEPTION_INIT |
+ D2H_INT_EXCEPTION_INIT_DONE |
+ D2H_INT_EXCEPTION_CLEARQ_DONE |
+ D2H_INT_EXCEPTION_ALLQ_RESET);
+ break;
+
+ case FSM_READY:
+ /* mask async handshake interrupt */
+ mhccif_mask_set(md->mtk_dev, D2H_INT_ASYNC_MD_HK);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void md_structure_reset(struct mtk_modem *md)
+{
+ struct md_sys_info *md_info;
+
+ md_info = md->md_info;
+ md_info->exp_id = 0;
+ spin_lock_init(&md_info->exp_spinlock);
+}
+
+void mtk_md_exception_handshake(struct mtk_modem *md)
+{
+ struct mtk_pci_dev *mtk_dev;
+ int ret;
+
+ mtk_dev = md->mtk_dev;
+ md_exception(md, HIF_EX_INIT);
+ ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_INIT_DONE);
+
+ if (ret)
+ dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
+ D2H_INT_EXCEPTION_INIT_DONE);
+
+ md_exception(md, HIF_EX_INIT_DONE);
+ ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_CLEARQ_DONE);
+ if (ret)
+ dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
+ D2H_INT_EXCEPTION_CLEARQ_DONE);
+
+ md_exception(md, HIF_EX_CLEARQ_DONE);
+ ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_ALLQ_RESET);
+ if (ret)
+ dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
+ D2H_INT_EXCEPTION_ALLQ_RESET);
+
+ md_exception(md, HIF_EX_ALLQ_RESET);
+}
+
+static struct mtk_modem *ccci_md_alloc(struct mtk_pci_dev *mtk_dev)
+{
+ struct mtk_modem *md;
+
+ md = devm_kzalloc(&mtk_dev->pdev->dev, sizeof(*md), GFP_KERNEL);
+ if (!md)
+ return NULL;
+
+ md->md_info = devm_kzalloc(&mtk_dev->pdev->dev, sizeof(*md->md_info), GFP_KERNEL);
+ if (!md->md_info)
+ return NULL;
+
+ md->mtk_dev = mtk_dev;
+ mtk_dev->md = md;
+ atomic_set(&md->core_md.ready, 0);
+ md->handshake_wq = alloc_workqueue("%s",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ 0, "md_hk_wq");
+ if (!md->handshake_wq)
+ return NULL;
+
+ INIT_WORK(&md->handshake_work, md_hk_wq);
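+ /* Each feature_set[] entry packs the support level in its low nibble
+ * (FEATURE_MSK) and the version in its high nibble (FEATURE_VER).
+ * Mark port enumeration as a feature that must be supported.
+ */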
+ md->core_md.feature_set[RT_ID_MD_PORT_ENUM] &= ~FEATURE_MSK;
+ md->core_md.feature_set[RT_ID_MD_PORT_ENUM] |=
+ FIELD_PREP(FEATURE_MSK, MTK_FEATURE_MUST_BE_SUPPORTED);
+ return md;
+}
+
+void mtk_md_reset(struct mtk_pci_dev *mtk_dev)
+{
+ struct mtk_modem *md;
+
+ md = mtk_dev->md;
+ md->md_init_finish = false;
+ md_structure_reset(md);
+ ccci_fsm_reset();
+ cldma_reset(ID_CLDMA1);
+ md->md_init_finish = true;
+}
+
+/**
+ * mtk_md_init() - Initialize modem
+ * @mtk_dev: MTK device
+ *
+ * Allocate and initialize the MD control block, initialize the data path,
+ * register the MHCCIF and RGU ISRs, and start the state machine.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int mtk_md_init(struct mtk_pci_dev *mtk_dev)
+{
+ struct ccci_fsm_ctl *fsm_ctl;
+ struct mtk_modem *md;
+ int ret;
+
+ /* allocate and initialize md ctrl memory */
+ md = ccci_md_alloc(mtk_dev);
+ if (!md)
+ return -ENOMEM;
+
+ ret = cldma_alloc(ID_CLDMA1, mtk_dev);
+ if (ret)
+ goto err_alloc;
+
+ /* initialize md ctrl block */
+ md_structure_reset(md);
+
+ ret = ccci_fsm_init(md);
+ if (ret)
+ goto err_alloc;
+
+ ret = cldma_init(ID_CLDMA1);
+ if (ret)
+ goto err_fsm_init;
+
+ fsm_ctl = fsm_get_entry();
+ fsm_append_command(fsm_ctl, CCCI_COMMAND_START, 0);
+
+ md_sys_sw_init(mtk_dev);
+
+ md->md_init_finish = true;
+ return 0;
+
+err_fsm_init:
+ ccci_fsm_uninit();
+err_alloc:
+ destroy_workqueue(md->handshake_wq);
+
+ dev_err(&mtk_dev->pdev->dev, "modem init failed\n");
+ return ret;
+}
+
+void mtk_md_exit(struct mtk_pci_dev *mtk_dev)
+{
+ struct mtk_modem *md = mtk_dev->md;
+ struct ccci_fsm_ctl *fsm_ctl;
+
+ mtk_pcie_mac_clear_int(mtk_dev, SAP_RGU_INT);
+
+ if (!md->md_init_finish)
+ return;
+
+ fsm_ctl = fsm_get_entry();
+ /* Change the FSM state; the modem will automatically transition to stopped */
+ fsm_append_command(fsm_ctl, CCCI_COMMAND_PRE_STOP, 1);
+ cldma_exit(ID_CLDMA1);
+ ccci_fsm_uninit();
+ destroy_workqueue(md->handshake_wq);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
new file mode 100644
index 000000000000..11aa29ad023f
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_MODEM_OPS_H__
+#define __T7XX_MODEM_OPS_H__
+
+#include <linux/bits.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_pci.h"
+
+#define RT_ID_MD_PORT_ENUM 0
+#define FEATURE_COUNT 64
+/* Modem feature query identification number */
+#define MD_FEATURE_QUERY_ID 0x49434343
+
+#define FEATURE_VER GENMASK(7, 4)
+#define FEATURE_MSK GENMASK(3, 0)
+
+/**
+ * enum hif_ex_stage - HIF exception handshake stages with the HW
+ * @HIF_EX_INIT: disable and clear TXQ
+ * @HIF_EX_INIT_DONE: polling for init to be done
+ * @HIF_EX_CLEARQ_DONE: disable RX, flush TX/RX workqueues and clear RX
+ * @HIF_EX_ALLQ_RESET: HW is back in safe mode for reinitialization and restart
+ */
+enum hif_ex_stage {
+ HIF_EX_INIT = 0,
+ HIF_EX_INIT_DONE,
+ HIF_EX_CLEARQ_DONE,
+ HIF_EX_ALLQ_RESET,
+};
+
+struct mtk_runtime_feature {
+ u8 feature_id;
+ u8 support_info;
+ u8 reserved[2];
+ u32 data_len;
+ u8 data[];
+};
+
+enum md_event_id {
+ FSM_PRE_START,
+ FSM_START,
+ FSM_READY,
+};
+
+struct core_sys_info {
+ atomic_t ready;
+ u8 feature_set[FEATURE_COUNT];
+};
+
+struct mtk_modem {
+ struct md_sys_info *md_info;
+ struct mtk_pci_dev *mtk_dev;
+ struct core_sys_info core_md;
+ bool md_init_finish;
+ atomic_t rgu_irq_asserted;
+ struct workqueue_struct *handshake_wq;
+ struct work_struct handshake_work;
+};
+
+struct md_sys_info {
+ unsigned int exp_id;
+ spinlock_t exp_spinlock; /* protect exp_id containing EX events */
+};
+
+void mtk_md_exception_handshake(struct mtk_modem *md);
+void mtk_md_event_notify(struct mtk_modem *md, enum md_event_id evt_id);
+void mtk_md_reset(struct mtk_pci_dev *mtk_dev);
+int mtk_md_init(struct mtk_pci_dev *mtk_dev);
+void mtk_md_exit(struct mtk_pci_dev *mtk_dev);
+void mtk_clear_rgu_irq(struct mtk_pci_dev *mtk_dev);
+int mtk_acpi_fldr_func(struct mtk_pci_dev *mtk_dev);
+int mtk_pci_mhccif_isr(struct mtk_pci_dev *mtk_dev);
+
+#endif /* __T7XX_MODEM_OPS_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_monitor.h b/drivers/net/wwan/t7xx/t7xx_monitor.h
new file mode 100644
index 000000000000..e03e044e685a
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_monitor.h
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_MONITOR_H__
+#define __T7XX_MONITOR_H__
+
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
+#include "t7xx_skb_util.h"
+
+enum ccci_fsm_state {
+ CCCI_FSM_INIT,
+ CCCI_FSM_PRE_START,
+ CCCI_FSM_STARTING,
+ CCCI_FSM_READY,
+ CCCI_FSM_EXCEPTION,
+ CCCI_FSM_STOPPING,
+ CCCI_FSM_STOPPED,
+};
+
+enum ccci_fsm_event_state {
+ CCCI_EVENT_INVALID,
+ CCCI_EVENT_MD_EX,
+ CCCI_EVENT_MD_EX_REC_OK,
+ CCCI_EVENT_MD_EX_PASS,
+ CCCI_EVENT_MAX
+};
+
+enum ccci_fsm_cmd_state {
+ CCCI_COMMAND_INVALID,
+ CCCI_COMMAND_START,
+ CCCI_COMMAND_EXCEPTION,
+ CCCI_COMMAND_PRE_STOP,
+ CCCI_COMMAND_STOP,
+};
+
+enum ccci_ex_reason {
+ EXCEPTION_HS_TIMEOUT,
+ EXCEPTION_EE,
+ EXCEPTION_EVENT,
+};
+
+enum md_irq_type {
+ MD_IRQ_WDT,
+ MD_IRQ_CCIF_EX,
+ MD_IRQ_PORT_ENUM,
+};
+
+enum fsm_cmd_result {
+ FSM_CMD_RESULT_PENDING,
+ FSM_CMD_RESULT_OK,
+ FSM_CMD_RESULT_FAIL,
+};
+
+#define FSM_CMD_FLAG_WAITING_TO_COMPLETE BIT(0)
+#define FSM_CMD_FLAG_FLIGHT_MODE BIT(1)
+
+#define EVENT_POLL_INTERVAL_MS 20
+#define MD_EX_REC_OK_TIMEOUT_MS 10000
+#define MD_EX_PASS_TIMEOUT_MS (45 * 1000)
+
+struct ccci_fsm_monitor {
+ dev_t dev_n;
+ wait_queue_head_t rx_wq;
+ struct ccci_skb_queue rx_skb_list;
+};
+
+struct ccci_fsm_ctl {
+ struct mtk_modem *md;
+ enum md_state md_state;
+ unsigned int curr_state;
+ unsigned int last_state;
+ struct list_head command_queue;
+ struct list_head event_queue;
+ wait_queue_head_t command_wq;
+ wait_queue_head_t event_wq;
+ wait_queue_head_t async_hk_wq;
+ spinlock_t event_lock; /* protects event_queue */
+ spinlock_t command_lock; /* protects command_queue */
+ spinlock_t cmd_complete_lock; /* protects fsm_command */
+ struct task_struct *fsm_thread;
+ atomic_t exp_flg;
+ struct ccci_fsm_monitor monitor_ctl;
+ spinlock_t notifier_lock; /* protects notifier_list */
+ struct list_head notifier_list;
+};
+
+struct ccci_fsm_event {
+ struct list_head entry;
+ enum ccci_fsm_event_state event_id;
+ unsigned int length;
+ unsigned char data[];
+};
+
+struct ccci_fsm_command {
+ struct list_head entry;
+ enum ccci_fsm_cmd_state cmd_id;
+ unsigned int flag;
+ enum fsm_cmd_result result;
+ wait_queue_head_t complete_wq;
+};
+
+struct fsm_notifier_block {
+ struct list_head entry;
+ int (*notifier_fn)(enum md_state state, void *data);
+ void *data;
+};
+
+int fsm_append_command(struct ccci_fsm_ctl *ctl, enum ccci_fsm_cmd_state cmd_id,
+ unsigned int flag);
+int fsm_append_event(struct ccci_fsm_ctl *ctl, enum ccci_fsm_event_state event_id,
+ unsigned char *data, unsigned int length);
+void fsm_clear_event(struct ccci_fsm_ctl *ctl, enum ccci_fsm_event_state event_id);
+
+struct ccci_fsm_ctl *fsm_get_entity_by_device_number(dev_t dev_n);
+struct ccci_fsm_ctl *fsm_get_entry(void);
+
+void fsm_broadcast_state(struct ccci_fsm_ctl *ctl, enum md_state state);
+void ccci_fsm_reset(void);
+int ccci_fsm_init(struct mtk_modem *md);
+void ccci_fsm_uninit(void);
+void ccci_fsm_recv_md_interrupt(enum md_irq_type type);
+enum md_state ccci_fsm_get_md_state(void);
+unsigned int ccci_fsm_get_current_state(void);
+void fsm_notifier_register(struct fsm_notifier_block *notifier);
+void fsm_notifier_unregister(struct fsm_notifier_block *notifier);
+
+#endif /* __T7XX_MONITOR_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
new file mode 100644
index 000000000000..4b624b36e584
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -0,0 +1,238 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_skb_util.h"
+
+#define PCI_IREG_BASE 0
+#define PCI_EREG_BASE 2
+
+static int mtk_request_irq(struct pci_dev *pdev)
+{
+ struct mtk_pci_dev *mtk_dev;
+ int ret, i;
+
+ mtk_dev = pci_get_drvdata(pdev);
+
+ for (i = 0; i < EXT_INT_NUM; i++) {
+ const char *irq_descr;
+ int irq_vec;
+
+ if (!mtk_dev->intr_handler[i])
+ continue;
+
+ irq_descr = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
+ dev_driver_string(&pdev->dev), i);
+ if (!irq_descr)
+ return -ENOMEM;
+
+ irq_vec = pci_irq_vector(pdev, i);
+ ret = request_threaded_irq(irq_vec, mtk_dev->intr_handler[i],
+ mtk_dev->intr_thread[i], 0, irq_descr,
+ mtk_dev->callback_param[i]);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request_irq: %d, int: %d, ret: %d\n",
+ irq_vec, i, ret);
+ while (i--) {
+ if (!mtk_dev->intr_handler[i])
+ continue;
+
+ free_irq(pci_irq_vector(pdev, i), mtk_dev->callback_param[i]);
+ }
+
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int mtk_setup_msix(struct mtk_pci_dev *mtk_dev)
+{
+ int ret;
+
+ /* We are interested in only six interrupts, but the HW design requires a
+ * power-of-two IRQ vector allocation.
+ */
+ ret = pci_alloc_irq_vectors(mtk_dev->pdev, EXT_INT_NUM, EXT_INT_NUM, PCI_IRQ_MSIX);
+ if (ret < 0) {
+ dev_err(&mtk_dev->pdev->dev, "Failed to allocate MSI-X entry, errno: %d\n", ret);
+ return ret;
+ }
+
+ ret = mtk_request_irq(mtk_dev->pdev);
+ if (ret) {
+ pci_free_irq_vectors(mtk_dev->pdev);
+ return ret;
+ }
+
+ /* Set MSIX merge config */
+ mtk_pcie_mac_msix_cfg(mtk_dev, EXT_INT_NUM);
+ return 0;
+}
+
+static int mtk_interrupt_init(struct mtk_pci_dev *mtk_dev)
+{
+ int ret, i;
+
+ if (!mtk_dev->pdev->msix_cap)
+ return -EINVAL;
+
+ ret = mtk_setup_msix(mtk_dev);
+ if (ret)
+ return ret;
+
+ /* let the IPs enable interrupts when they are ready */
+ for (i = EXT_INT_START; i < EXT_INT_START + EXT_INT_NUM; i++)
+ PCIE_MAC_MSIX_MSK_SET(mtk_dev, i);
+
+ return 0;
+}
+
+static inline void mtk_pci_infracfg_ao_calc(struct mtk_pci_dev *mtk_dev)
+{
+ mtk_dev->base_addr.infracfg_ao_base = mtk_dev->base_addr.pcie_ext_reg_base +
+ INFRACFG_AO_DEV_CHIP -
+ mtk_dev->base_addr.pcie_dev_reg_trsl_addr;
+}
+
+static int mtk_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct mtk_pci_dev *mtk_dev;
+ int ret;
+
+ mtk_dev = devm_kzalloc(&pdev->dev, sizeof(*mtk_dev), GFP_KERNEL);
+ if (!mtk_dev)
+ return -ENOMEM;
+
+ pci_set_drvdata(pdev, mtk_dev);
+ mtk_dev->pdev = pdev;
+
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return ret;
+
+ ret = pcim_iomap_regions(pdev, BIT(PCI_IREG_BASE) | BIT(PCI_EREG_BASE), pci_name(pdev));
+ if (ret) {
+ dev_err(&pdev->dev, "PCIm iomap regions fail %d\n", ret);
+ return -ENOMEM;
+ }
+
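+ /* Prefer a 64-bit DMA mask and fall back to 32 bits if the platform
+ * does not support it.
+ */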
+ ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (ret) {
+ ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(&pdev->dev, "Could not set PCI DMA mask, err: %d\n", ret);
+ return ret;
+ }
+ }
+
+ ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (ret) {
+ ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(&pdev->dev, "Could not set consistent PCI DMA mask, err: %d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ IREG_BASE(mtk_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
+ mtk_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];
+
+ ret = ccci_skb_pool_alloc(&mtk_dev->pools);
+ if (ret)
+ return ret;
+
+ mtk_pcie_mac_atr_init(mtk_dev);
+ mtk_pci_infracfg_ao_calc(mtk_dev);
+ mhccif_init(mtk_dev);
+
+ ret = mtk_md_init(mtk_dev);
+ if (ret)
+ goto err;
+
+ mtk_pcie_mac_interrupts_dis(mtk_dev);
+ ret = mtk_interrupt_init(mtk_dev);
+ if (ret)
+ goto err;
+
+ mtk_pcie_mac_set_int(mtk_dev, MHCCIF_INT);
+ mtk_pcie_mac_interrupts_en(mtk_dev);
+ pci_set_master(pdev);
+
+ return 0;
+
+err:
+ ccci_skb_pool_free(&mtk_dev->pools);
+ return ret;
+}
+
+static void mtk_pci_remove(struct pci_dev *pdev)
+{
+ struct mtk_pci_dev *mtk_dev;
+ int i;
+
+ mtk_dev = pci_get_drvdata(pdev);
+ mtk_md_exit(mtk_dev);
+
+ for (i = 0; i < EXT_INT_NUM; i++) {
+ if (!mtk_dev->intr_handler[i])
+ continue;
+
+ free_irq(pci_irq_vector(pdev, i), mtk_dev->callback_param[i]);
+ }
+
+ pci_free_irq_vectors(mtk_dev->pdev);
+ ccci_skb_pool_free(&mtk_dev->pools);
+}
+
+static const struct pci_device_id t7xx_pci_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x4d75) },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, t7xx_pci_table);
+
+static struct pci_driver mtk_pci_driver = {
+ .name = "mtk_t7xx",
+ .id_table = t7xx_pci_table,
+ .probe = mtk_pci_probe,
+ .remove = mtk_pci_remove,
+};
+
+static int __init mtk_pci_init(void)
+{
+ return pci_register_driver(&mtk_pci_driver);
+}
+module_init(mtk_pci_init);
+
+static void __exit mtk_pci_cleanup(void)
+{
+ pci_unregister_driver(&mtk_pci_driver);
+}
+module_exit(mtk_pci_cleanup);
+
+MODULE_AUTHOR("MediaTek Inc");
+MODULE_DESCRIPTION("MediaTek PCIe 5G WWAN modem t7xx driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
new file mode 100644
index 000000000000..bc6f0a83f546
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_PCI_H__
+#define __T7XX_PCI_H__
+
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "t7xx_reg.h"
+#include "t7xx_skb_util.h"
+
+/* struct mtk_addr_base - holds base addresses
+ * @pcie_mac_ireg_base: PCIe MAC register base
+ * @pcie_ext_reg_base: used to calculate base addresses for CLDMA, DPMA and MHCCIF registers
+ * @pcie_dev_reg_trsl_addr: used to calculate the register base address
+ * @infracfg_ao_base: base address used in CLDMA reset operations
+ * @mhccif_rc_base: host view of MHCCIF rc base addr
+ */
+struct mtk_addr_base {
+ void __iomem *pcie_mac_ireg_base;
+ void __iomem *pcie_ext_reg_base;
+ u32 pcie_dev_reg_trsl_addr;
+ void __iomem *infracfg_ao_base;
+ void __iomem *mhccif_rc_base;
+};
+
+typedef irqreturn_t (*mtk_intr_callback)(int irq, void *param);
+
+/* struct mtk_pci_dev - MTK device context structure
+ * @intr_handler: array of handler function for request_threaded_irq
+ * @intr_thread: array of thread_fn for request_threaded_irq
+ * @callback_param: array of cookie passed back to interrupt functions
+ * @mhccif_bitmask: device to host interrupt mask
+ * @pdev: pci device
+ * @base_addr: memory base addresses of HW components
+ * @md: modem interface
+ * @ccmni_ctlb: context structure used to control the network data path
+ * @rgu_pci_irq_en: RGU callback isr registered and active
+ * @pools: pre allocated skb pools
+ */
+struct mtk_pci_dev {
+ mtk_intr_callback intr_handler[EXT_INT_NUM];
+ mtk_intr_callback intr_thread[EXT_INT_NUM];
+ void *callback_param[EXT_INT_NUM];
+ u32 mhccif_bitmask;
+ struct pci_dev *pdev;
+ struct mtk_addr_base base_addr;
+ struct mtk_modem *md;
+
+ struct ccmni_ctl_block *ccmni_ctlb;
+ bool rgu_pci_irq_en;
+ struct skb_pools pools;
+};
+
+#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.c b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
new file mode 100644
index 000000000000..c26ab0a9a47d
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/msi.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define PCIE_REG_BAR 2
+#define PCIE_REG_PORT ATR_SRC_PCI_WIN0
+#define PCIE_REG_TABLE_NUM 0
+#define PCIE_REG_TRSL_PORT ATR_DST_AXIM_0
+
+#define PCIE_DEV_DMA_PORT_START ATR_SRC_AXIS_0
+#define PCIE_DEV_DMA_PORT_END ATR_SRC_AXIS_2
+#define PCIE_DEV_DMA_TABLE_NUM 0
+#define PCIE_DEV_DMA_TRSL_ADDR 0x00000000
+#define PCIE_DEV_DMA_SRC_ADDR 0x00000000
+#define PCIE_DEV_DMA_TRANSPARENT 1
+#define PCIE_DEV_DMA_SIZE 0
+
+enum mtk_atr_src_port {
+ ATR_SRC_PCI_WIN0,
+ ATR_SRC_PCI_WIN1,
+ ATR_SRC_AXIS_0,
+ ATR_SRC_AXIS_1,
+ ATR_SRC_AXIS_2,
+ ATR_SRC_AXIS_3,
+};
+
+enum mtk_atr_dst_port {
+ ATR_DST_PCI_TRX,
+ ATR_DST_PCI_CONFIG,
+ ATR_DST_AXIM_0 = 4,
+ ATR_DST_AXIM_1,
+ ATR_DST_AXIM_2,
+ ATR_DST_AXIM_3,
+};
+
+struct mtk_atr_config {
+ u64 src_addr;
+ u64 trsl_addr;
+ u64 size;
+ u32 port;
+ u32 table;
+ enum mtk_atr_dst_port trsl_id;
+ u32 transparent;
+};
+
+static void mtk_pcie_mac_atr_tables_dis(void __iomem *pbase, enum mtk_atr_src_port port)
+{
+ void __iomem *reg;
+ int i, offset;
+
+ for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) {
+ offset = (ATR_PORT_OFFSET * port) + (ATR_TABLE_OFFSET * i);
+
+ /* Disable table by SRC_ADDR */
+ reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
+ iowrite64(0, reg);
+ }
+}
+
+static int mtk_pcie_mac_atr_cfg(struct mtk_pci_dev *mtk_dev, struct mtk_atr_config *cfg)
+{
+ int atr_size, pos, offset;
+ void __iomem *pbase;
+ struct device *dev;
+ void __iomem *reg;
+ u64 value;
+
+ dev = &mtk_dev->pdev->dev;
+ pbase = IREG_BASE(mtk_dev);
+
+ if (cfg->transparent) {
+ /* No address conversion is performed */
+ atr_size = ATR_TRANSPARENT_SIZE;
+ } else {
+ if (cfg->src_addr & (cfg->size - 1)) {
+ dev_err(dev, "src addr is not aligned to size\n");
+ return -EINVAL;
+ }
+
+ if (cfg->trsl_addr & (cfg->size - 1)) {
+ dev_err(dev, "trsl addr %llx not aligned to size %llx\n",
+ cfg->trsl_addr, cfg->size - 1);
+ return -EINVAL;
+ }
+
+ pos = ffs(lower_32_bits(cfg->size));
+ if (!pos)
+ pos = ffs(upper_32_bits(cfg->size)) + 32;
+
+ /* The HW calculates the address translation space as 2^(atr_size + 1)
+ * but we also need to consider that ffs() starts counting from 1.
+ */
+ atr_size = pos - 2;
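+ /* For example, with cfg->size = PCIE_REG_SIZE_CHIP (0x00400000),
+ * ffs() returns 23, atr_size becomes 21 and the translated window
+ * is 2^(21 + 1) = 0x00400000 bytes, matching the requested size.
+ */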
+ }
+
+ /* Calculate table offset */
+ offset = ATR_PORT_OFFSET * cfg->port + ATR_TABLE_OFFSET * cfg->table;
+
+ reg = pbase + ATR_PCIE_WIN0_T0_TRSL_ADDR + offset;
+ value = cfg->trsl_addr & ATR_PCIE_WIN0_ADDR_ALGMT;
+ iowrite64(value, reg);
+
+ reg = pbase + ATR_PCIE_WIN0_T0_TRSL_PARAM + offset;
+ iowrite32(cfg->trsl_id, reg);
+
+ reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
+ value = (cfg->src_addr & ATR_PCIE_WIN0_ADDR_ALGMT) | (atr_size << 1) | BIT(0);
+ iowrite64(value, reg);
+
+ /* Read back to ensure ATR is set */
+ ioread64(reg);
+ return 0;
+}
+
+/**
+ * mtk_pcie_mac_atr_init() - initialize address translation
+ * @mtk_dev: MTK device
+ *
+ * Set up the ATR for the ports and the device.
+ *
+ */
+void mtk_pcie_mac_atr_init(struct mtk_pci_dev *mtk_dev)
+{
+ struct mtk_atr_config cfg;
+ u32 i;
+
+ /* Disable all ATR table for all ports */
+ for (i = ATR_SRC_PCI_WIN0; i <= ATR_SRC_AXIS_3; i++)
+ mtk_pcie_mac_atr_tables_dis(IREG_BASE(mtk_dev), i);
+
+ memset(&cfg, 0, sizeof(cfg));
+ /* Config ATR for RC to access device's register */
+ cfg.src_addr = pci_resource_start(mtk_dev->pdev, PCIE_REG_BAR);
+ cfg.size = PCIE_REG_SIZE_CHIP;
+ cfg.trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+ cfg.port = PCIE_REG_PORT;
+ cfg.table = PCIE_REG_TABLE_NUM;
+ cfg.trsl_id = PCIE_REG_TRSL_PORT;
+ mtk_pcie_mac_atr_tables_dis(IREG_BASE(mtk_dev), cfg.port);
+ mtk_pcie_mac_atr_cfg(mtk_dev, &cfg);
+
+ /* Update translation base */
+ mtk_dev->base_addr.pcie_dev_reg_trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+
+ /* Config ATR for EP to access RC's memory */
+ for (i = PCIE_DEV_DMA_PORT_START; i <= PCIE_DEV_DMA_PORT_END; i++) {
+ cfg.src_addr = PCIE_DEV_DMA_SRC_ADDR;
+ cfg.size = PCIE_DEV_DMA_SIZE;
+ cfg.trsl_addr = PCIE_DEV_DMA_TRSL_ADDR;
+ cfg.port = i;
+ cfg.table = PCIE_DEV_DMA_TABLE_NUM;
+ cfg.trsl_id = ATR_DST_PCI_TRX;
+ /* Enable transparent translation */
+ cfg.transparent = PCIE_DEV_DMA_TRANSPARENT;
+ mtk_pcie_mac_atr_tables_dis(IREG_BASE(mtk_dev), cfg.port);
+ mtk_pcie_mac_atr_cfg(mtk_dev, &cfg);
+ }
+}
+
+/**
+ * mtk_pcie_mac_enable_disable_int() - enable/disable interrupts
+ * @mtk_dev: MTK device
+ * @enable: enable/disable
+ *
+ * Enable or disable device interrupts.
+ *
+ */
+static void mtk_pcie_mac_enable_disable_int(struct mtk_pci_dev *mtk_dev, bool enable)
+{
+ u32 value;
+
+ value = ioread32(IREG_BASE(mtk_dev) + ISTAT_HST_CTRL);
+ if (enable)
+ value &= ~ISTAT_HST_CTRL_DIS;
+ else
+ value |= ISTAT_HST_CTRL_DIS;
+
+ iowrite32(value, IREG_BASE(mtk_dev) + ISTAT_HST_CTRL);
+}
+
+void mtk_pcie_mac_interrupts_en(struct mtk_pci_dev *mtk_dev)
+{
+ mtk_pcie_mac_enable_disable_int(mtk_dev, true);
+}
+
+void mtk_pcie_mac_interrupts_dis(struct mtk_pci_dev *mtk_dev)
+{
+ mtk_pcie_mac_enable_disable_int(mtk_dev, false);
+}
+
+/**
+ * mtk_pcie_mac_clear_set_int() - clear/set interrupt by type
+ * @mtk_dev: MTK device
+ * @int_type: interrupt type
+ * @clear: clear/set
+ *
+ * Clear or set device interrupt by type.
+ *
+ */
+static void mtk_pcie_mac_clear_set_int(struct mtk_pci_dev *mtk_dev,
+ enum pcie_int int_type, bool clear)
+{
+ void __iomem *reg;
+
+ if (mtk_dev->pdev->msix_enabled) {
+ if (clear)
+ reg = IREG_BASE(mtk_dev) + IMASK_HOST_MSIX_CLR_GRP0_0;
+ else
+ reg = IREG_BASE(mtk_dev) + IMASK_HOST_MSIX_SET_GRP0_0;
+ } else {
+ if (clear)
+ reg = IREG_BASE(mtk_dev) + INT_EN_HST_CLR;
+ else
+ reg = IREG_BASE(mtk_dev) + INT_EN_HST_SET;
+ }
+
+ iowrite32(BIT(EXT_INT_START + int_type), reg);
+}
+
+void mtk_pcie_mac_clear_int(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type)
+{
+ mtk_pcie_mac_clear_set_int(mtk_dev, int_type, true);
+}
+
+void mtk_pcie_mac_set_int(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type)
+{
+ mtk_pcie_mac_clear_set_int(mtk_dev, int_type, false);
+}
+
+/**
+ * mtk_pcie_mac_clear_int_status() - clear interrupt status by type
+ * @mtk_dev: MTK device
+ * @int_type: interrupt type
+ *
+ * Clear the device interrupt status by type.
+ *
+ */
+void mtk_pcie_mac_clear_int_status(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type)
+{
+ void __iomem *reg;
+
+ if (mtk_dev->pdev->msix_enabled)
+ reg = IREG_BASE(mtk_dev) + MSIX_ISTAT_HST_GRP0_0;
+ else
+ reg = IREG_BASE(mtk_dev) + ISTAT_HST;
+
+ iowrite32(BIT(EXT_INT_START + int_type), reg);
+}
+
+/**
+ * mtk_pcie_mac_msix_cfg() - write MSIX control configuration
+ * @mtk_dev: MTK device
+ * @irq_count: number of MSIX IRQ vectors
+ *
+ * Write IRQ count to device.
+ *
+ */
+void mtk_pcie_mac_msix_cfg(struct mtk_pci_dev *mtk_dev, unsigned int irq_count)
+{
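+ /* Illustrative arithmetic: with irq_count = EXT_INT_NUM (8),
+ * ffs(8) = 4 and the value written below is 4 * 2 - 1 = 7.
+ */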
+ iowrite32(ffs(irq_count) * 2 - 1, IREG_BASE(mtk_dev) + PCIE_CFG_MSIX);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.h b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
new file mode 100644
index 000000000000..ff078f1a92f4
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_PCIE_MAC_H__
+#define __T7XX_PCIE_MAC_H__
+
+#include <linux/bitops.h>
+#include <linux/io.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define IREG_BASE(mtk_dev) ((mtk_dev)->base_addr.pcie_mac_ireg_base)
+
+#define PCIE_MAC_MSIX_MSK_SET(mtk_dev, ext_id) \
+ iowrite32(BIT(ext_id), IREG_BASE(mtk_dev) + IMASK_HOST_MSIX_SET_GRP0_0)
+
+void mtk_pcie_mac_interrupts_en(struct mtk_pci_dev *mtk_dev);
+void mtk_pcie_mac_interrupts_dis(struct mtk_pci_dev *mtk_dev);
+void mtk_pcie_mac_atr_init(struct mtk_pci_dev *mtk_dev);
+void mtk_pcie_mac_clear_int(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type);
+void mtk_pcie_mac_set_int(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type);
+void mtk_pcie_mac_clear_int_status(struct mtk_pci_dev *mtk_dev, enum pcie_int int_type);
+void mtk_pcie_mac_msix_cfg(struct mtk_pci_dev *mtk_dev, unsigned int irq_count);
+
+#endif /* __T7XX_PCIE_MAC_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_reg.h b/drivers/net/wwan/t7xx/t7xx_reg.h
new file mode 100644
index 000000000000..07ce66d4844b
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_reg.h
@@ -0,0 +1,398 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_REG_H__
+#define __T7XX_REG_H__
+
+#include <linux/bits.h>
+
+/* RC part */
+
+/* Device base address offset - update if reg BAR base is changed */
+#define MHCCIF_RC_DEV_BASE 0x10024000
+
+#define REG_RC2EP_SW_BSY 0x04
+#define REG_RC2EP_SW_INT_START 0x08
+
+#define REG_RC2EP_SW_TCHNUM 0x0c
+#define H2D_CH_EXCEPTION_ACK 1
+#define H2D_CH_EXCEPTION_CLEARQ_ACK 2
+#define H2D_CH_DS_LOCK 3
+/* Channels 4-8 are reserved */
+#define H2D_CH_SUSPEND_REQ 9
+#define H2D_CH_RESUME_REQ 10
+#define H2D_CH_SUSPEND_REQ_AP 11
+#define H2D_CH_RESUME_REQ_AP 12
+#define H2D_CH_DEVICE_RESET 13
+#define H2D_CH_DRM_DISABLE_AP 14
+
+#define REG_EP2RC_SW_INT_STS 0x10
+#define REG_EP2RC_SW_INT_ACK 0x14
+#define REG_EP2RC_SW_INT_EAP_MASK 0x20
+#define REG_EP2RC_SW_INT_EAP_MASK_SET 0x30
+#define REG_EP2RC_SW_INT_EAP_MASK_CLR 0x40
+
+#define D2H_INT_DS_LOCK_ACK BIT(0)
+#define D2H_INT_EXCEPTION_INIT BIT(1)
+#define D2H_INT_EXCEPTION_INIT_DONE BIT(2)
+#define D2H_INT_EXCEPTION_CLEARQ_DONE BIT(3)
+#define D2H_INT_EXCEPTION_ALLQ_RESET BIT(4)
+#define D2H_INT_PORT_ENUM BIT(5)
+/* bits 6-10 are reserved */
+#define D2H_INT_SUSPEND_ACK BIT(11)
+#define D2H_INT_RESUME_ACK BIT(12)
+#define D2H_INT_SUSPEND_ACK_AP BIT(13)
+#define D2H_INT_RESUME_ACK_AP BIT(14)
+#define D2H_INT_ASYNC_SAP_HK BIT(15)
+#define D2H_INT_ASYNC_MD_HK BIT(16)
+
+/* EP part */
+
+/* Device base address offset */
+#define MHCCIF_EP_DEV_BASE 0x10025000
+
+/* Reg base */
+#define INFRACFG_AO_DEV_CHIP 0x10001000
+
+/* ATR setting */
+#define PCIE_REG_TRSL_ADDR_CHIP 0x10000000
+#define PCIE_REG_SIZE_CHIP 0x00400000
+
+/* RGU */
+#define TOPRGU_CH_PCIE_IRQ_STA 0x1000790c
+
+#define EXP_BAR0 0x0c
+#define EXP_BAR2 0x04
+#define EXP_BAR4 0x0c
+#define EXP_BAR0_SIZE 0x00008000
+#define EXP_BAR2_SIZE 0x00800000
+#define EXP_BAR4_SIZE 0x00800000
+
+#define ATR_PORT_OFFSET 0x100
+#define ATR_TABLE_OFFSET 0x20
+#define ATR_TABLE_NUM_PER_ATR 8
+#define ATR_TRANSPARENT_SIZE 0x3f
+
+/* PCIE_MAC_IREG Register Definition */
+
+#define INT_EN_HST 0x0188
+#define ISTAT_HST 0x018c
+
+#define ISTAT_HST_CTRL 0x01ac
+#define ISTAT_HST_CTRL_DIS BIT(0)
+
+#define INT_EN_HST_SET 0x01f0
+#define INT_EN_HST_CLR 0x01f4
+
+#define PCIE_MISC_CTRL 0x0348
+#define PCIE_MISC_MAC_SLEEP_DIS BIT(7)
+
+#define PCIE_CFG_MSIX 0x03ec
+#define ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR 0x0600
+#define ATR_PCIE_WIN0_T0_TRSL_ADDR 0x0608
+#define ATR_PCIE_WIN0_T0_TRSL_PARAM 0x0610
+#define ATR_PCIE_WIN0_ADDR_ALGMT GENMASK_ULL(63, 12)
+
+#define ATR_SRC_ADDR_INVALID 0x007f
+
+#define PCIE_PM_RESUME_STATE 0x0d0c
+enum mtk_pm_resume_state {
+ PM_RESUME_REG_STATE_L3,
+ PM_RESUME_REG_STATE_L1,
+ PM_RESUME_REG_STATE_INIT,
+ PM_RESUME_REG_STATE_EXP,
+ PM_RESUME_REG_STATE_L2,
+ PM_RESUME_REG_STATE_L2_EXP,
+};
+
+#define PCIE_MISC_DEV_STATUS 0x0d1c
+#define MISC_STAGE_MASK GENMASK(2, 0)
+#define MISC_RESET_TYPE_PLDR BIT(26)
+#define MISC_RESET_TYPE_FLDR BIT(27)
+#define LINUX_STAGE 4
+
+#define PCIE_RESOURCE_STATUS 0x0d28
+#define PCIE_RESOURCE_STATUS_MSK GENMASK(4, 0)
+
+#define DIS_ASPM_LOWPWR_SET_0 0x0e50
+#define DIS_ASPM_LOWPWR_CLR_0 0x0e54
+#define DIS_ASPM_LOWPWR_SET_1 0x0e58
+#define DIS_ASPM_LOWPWR_CLR_1 0x0e5c
+#define L1_DISABLE_BIT(i) BIT((i) * 4 + 1)
+#define L1_1_DISABLE_BIT(i) BIT((i) * 4 + 2)
+#define L1_2_DISABLE_BIT(i) BIT((i) * 4 + 3)
+
+#define MSIX_SW_TRIG_SET_GRP0_0 0x0e80
+#define MSIX_ISTAT_HST_GRP0_0 0x0f00
+#define IMASK_HOST_MSIX_SET_GRP0_0 0x3000
+#define IMASK_HOST_MSIX_CLR_GRP0_0 0x3080
+#define IMASK_HOST_MSIX_GRP0_0 0x3100
+#define EXT_INT_START 24
+#define EXT_INT_NUM 8
+#define MSIX_MSK_SET_ALL GENMASK(31, 24)
+enum pcie_int {
+ DPMAIF_INT = 0,
+ CLDMA0_INT,
+ CLDMA1_INT,
+ CLDMA2_INT,
+ MHCCIF_INT,
+ DPMAIF2_INT,
+ SAP_RGU_INT,
+ CLDMA3_INT,
+};
+
+/* DPMA definitions */
+
+#define DPMAIF_PD_BASE 0x1022d000
+#define BASE_DPMAIF_UL DPMAIF_PD_BASE
+#define BASE_DPMAIF_DL (DPMAIF_PD_BASE + 0x100)
+#define BASE_DPMAIF_AP_MISC (DPMAIF_PD_BASE + 0x400)
+#define BASE_DPMAIF_MMW_HPC (DPMAIF_PD_BASE + 0x600)
+#define BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX (DPMAIF_PD_BASE + 0x900)
+#define BASE_DPMAIF_PD_SRAM_DL (DPMAIF_PD_BASE + 0xc00)
+#define BASE_DPMAIF_PD_SRAM_UL (DPMAIF_PD_BASE + 0xd00)
+
+#define DPMAIF_AO_BASE 0x10014000
+#define BASE_DPMAIF_AO_UL DPMAIF_AO_BASE
+#define BASE_DPMAIF_AO_DL (DPMAIF_AO_BASE + 0x400)
+
+/* dpmaif_ul */
+#define DPMAIF_UL_ADD_DESC (BASE_DPMAIF_UL + 0x00)
+#define DPMAIF_UL_CHK_BUSY (BASE_DPMAIF_UL + 0x88)
+#define DPMAIF_UL_RESERVE_AO_RW (BASE_DPMAIF_UL + 0xac)
+#define DPMAIF_UL_ADD_DESC_CH0 (BASE_DPMAIF_UL + 0xb0)
+
+/* dpmaif_dl */
+#define DPMAIF_DL_BAT_INIT (BASE_DPMAIF_DL + 0x00)
+#define DPMAIF_DL_BAT_ADD (BASE_DPMAIF_DL + 0x04)
+#define DPMAIF_DL_BAT_INIT_CON0 (BASE_DPMAIF_DL + 0x08)
+#define DPMAIF_DL_BAT_INIT_CON1 (BASE_DPMAIF_DL + 0x0c)
+#define DPMAIF_DL_BAT_INIT_CON2 (BASE_DPMAIF_DL + 0x10)
+#define DPMAIF_DL_BAT_INIT_CON3 (BASE_DPMAIF_DL + 0x50)
+#define DPMAIF_DL_CHK_BUSY (BASE_DPMAIF_DL + 0xb4)
+
+/* dpmaif_ap_misc */
+#define DPMAIF_AP_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x00)
+#define DPMAIF_AP_APDL_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x50)
+#define DPMAIF_AP_IP_BUSY (BASE_DPMAIF_AP_MISC + 0x60)
+#define DPMAIF_AP_CG_EN (BASE_DPMAIF_AP_MISC + 0x68)
+#define DPMAIF_AP_OVERWRITE_CFG (BASE_DPMAIF_AP_MISC + 0x90)
+#define DPMAIF_AP_MEM_CLR (BASE_DPMAIF_AP_MISC + 0x94)
+#define DPMAIF_AP_ALL_L2TISAR0_MASK GENMASK(31, 0)
+#define DPMAIF_AP_APDL_ALL_L2TISAR0_MASK GENMASK(31, 0)
+#define DPMAIF_AP_IP_BUSY_MASK GENMASK(31, 0)
+
+/* dpmaif_ao_ul */
+#define DPMAIF_AO_UL_INIT_SET (BASE_DPMAIF_AO_UL + 0x0)
+#define DPMAIF_AO_UL_CHNL_ARB0 (BASE_DPMAIF_AO_UL + 0x1c)
+#define DPMAIF_AO_UL_AP_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x80)
+#define DPMAIF_AO_UL_AP_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x84)
+#define DPMAIF_AO_UL_AP_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x88)
+#define DPMAIF_AO_UL_AP_L1TIMR0 (BASE_DPMAIF_AO_UL + 0x8c)
+#define DPMAIF_AO_UL_APDL_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x90)
+#define DPMAIF_AO_UL_APDL_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x94)
+#define DPMAIF_AO_UL_APDL_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x98)
+#define DPMAIF_AO_AP_DLUL_IP_BUSY_MASK (BASE_DPMAIF_AO_UL + 0x9c)
+
+/* dpmaif_pd_sram_ul */
+#define DPMAIF_AO_UL_CHNL0_CON0 (BASE_DPMAIF_PD_SRAM_UL + 0x10)
+#define DPMAIF_AO_UL_CHNL0_CON1 (BASE_DPMAIF_PD_SRAM_UL + 0x14)
+#define DPMAIF_AO_UL_CHNL0_CON2 (BASE_DPMAIF_PD_SRAM_UL + 0x18)
+#define DPMAIF_AO_UL_CH0_STA (BASE_DPMAIF_PD_SRAM_UL + 0x70)
+
+/* dpmaif_ao_dl */
+#define DPMAIF_AO_DL_INIT_SET (BASE_DPMAIF_AO_DL + 0x00)
+#define DPMAIF_AO_DL_IRQ_MASK (BASE_DPMAIF_AO_DL + 0x0c)
+#define DPMAIF_AO_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_AO_DL + 0x28)
+#define DPMAIF_AO_DL_DLQPIT_TRIG_THRES (BASE_DPMAIF_AO_DL + 0x34)
+
+/* dpmaif_pd_sram_dl */
+#define DPMAIF_AO_DL_PKTINFO_CON0 (BASE_DPMAIF_PD_SRAM_DL + 0x00)
+#define DPMAIF_AO_DL_PKTINFO_CON1 (BASE_DPMAIF_PD_SRAM_DL + 0x04)
+#define DPMAIF_AO_DL_PKTINFO_CON2 (BASE_DPMAIF_PD_SRAM_DL + 0x08)
+#define DPMAIF_AO_DL_RDY_CHK_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x0c)
+#define DPMAIF_AO_DL_RDY_CHK_FRG_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x10)
+
+#define DPMAIF_AO_DL_DLQ_AGG_CFG (BASE_DPMAIF_PD_SRAM_DL + 0x20)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT0 (BASE_DPMAIF_PD_SRAM_DL + 0x24)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT1 (BASE_DPMAIF_PD_SRAM_DL + 0x28)
+#define DPMAIF_AO_DL_HPC_CNTL (BASE_DPMAIF_PD_SRAM_DL + 0x38)
+#define DPMAIF_AO_DL_PIT_SEQ_END (BASE_DPMAIF_PD_SRAM_DL + 0x40)
+
+#define DPMAIF_AO_DL_BAT_RIDX (BASE_DPMAIF_PD_SRAM_DL + 0xd8)
+#define DPMAIF_AO_DL_BAT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0xdc)
+#define DPMAIF_AO_DL_PIT_RIDX (BASE_DPMAIF_PD_SRAM_DL + 0xec)
+#define DPMAIF_AO_DL_PIT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0x60)
+#define DPMAIF_AO_DL_FRGBAT_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0x78)
+#define DPMAIF_AO_DL_DLQ_WRIDX (BASE_DPMAIF_PD_SRAM_DL + 0xa4)
+
+/* dpmaif_hpc */
+#define DPMAIF_HPC_INTR_MASK (BASE_DPMAIF_MMW_HPC + 0x0f4)
+#define DPMA_HPC_ALL_INT_MASK GENMASK(15, 0)
+
+#define DPMAIF_HPC_DLQ_PATH_MODE 3
+#define DPMAIF_HPC_ADD_MODE_DF 0
+#define DPMAIF_HPC_TOTAL_NUM 8
+#define DPMAIF_HPC_MAX_TOTAL_NUM 8
+
+/* dpmaif_dlq */
+#define DPMAIF_DL_DLQPIT_INIT (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x00)
+#define DPMAIF_DL_DLQPIT_ADD (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x10)
+#define DPMAIF_DL_DLQPIT_INIT_CON0 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x14)
+#define DPMAIF_DL_DLQPIT_INIT_CON1 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x18)
+#define DPMAIF_DL_DLQPIT_INIT_CON2 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x1c)
+#define DPMAIF_DL_DLQPIT_INIT_CON3 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x20)
+#define DPMAIF_DL_DLQPIT_INIT_CON4 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x24)
+#define DPMAIF_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x28)
+#define DPMAIF_DL_DLQPIT_INIT_CON6 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x2c)
+
+/* common include function */
+#define DPMAIF_ULQSAR_n(q) (DPMAIF_AO_UL_CHNL0_CON0 + 0x10 * (q))
+#define DPMAIF_UL_DRBSIZE_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON1 + 0x10 * (q))
+#define DPMAIF_UL_DRB_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON2 + 0x10 * (q))
+#define DPMAIF_ULQ_STA0_n(q) (DPMAIF_AO_UL_CH0_STA + 0x04 * (q))
+#define DPMAIF_ULQ_ADD_DESC_CH_n(q) (DPMAIF_UL_ADD_DESC_CH0 + 0x04 * (q))
+
+#define DPMAIF_UL_DRB_RIDX_OFFSET 16
+
+#define DPMAIF_AP_RGU_ASSERT 0x10001150
+#define DPMAIF_AP_RGU_DEASSERT 0x10001154
+#define DPMAIF_AP_RST_BIT BIT(2)
+
+#define DPMAIF_AP_AO_RGU_ASSERT 0x10001140
+#define DPMAIF_AP_AO_RGU_DEASSERT 0x10001144
+#define DPMAIF_AP_AO_RST_BIT BIT(6)
+
+/* DPMAIF init/restore */
+#define DPMAIF_UL_ADD_NOT_READY BIT(31)
+#define DPMAIF_UL_ADD_UPDATE BIT(31)
+#define DPMAIF_UL_ADD_COUNT_MASK GENMASK(15, 0)
+#define DPMAIF_UL_ALL_QUE_ARB_EN GENMASK(11, 8)
+
+#define DPMAIF_DL_ADD_UPDATE BIT(31)
+#define DPMAIF_DL_ADD_NOT_READY BIT(31)
+#define DPMAIF_DL_FRG_ADD_UPDATE BIT(16)
+#define DPMAIF_DL_ADD_COUNT_MASK GENMASK(15, 0)
+
+#define DPMAIF_DL_BAT_INIT_ALLSET BIT(0)
+#define DPMAIF_DL_BAT_FRG_INIT BIT(16)
+#define DPMAIF_DL_BAT_INIT_EN BIT(31)
+#define DPMAIF_DL_BAT_INIT_NOT_READY BIT(31)
+#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT 0
+
+#define DPMAIF_DL_PIT_INIT_ALLSET BIT(0)
+#define DPMAIF_DL_PIT_INIT_EN BIT(31)
+#define DPMAIF_DL_PIT_INIT_NOT_READY BIT(31)
+
+#define DPMAIF_PKT_ALIGN64_MODE 0
+#define DPMAIF_PKT_ALIGN128_MODE 1
+
+#define DPMAIF_BAT_REMAIN_SZ_BASE 16
+#define DPMAIF_BAT_BUFFER_SZ_BASE 128
+#define DPMAIF_FRG_BUFFER_SZ_BASE 128
+
+#define DLQ_PIT_IDX_SIZE 0x20
+
+#define DPMAIF_PIT_SIZE_MSK GENMASK(17, 0)
+
+#define DPMAIF_PIT_REM_CNT_MSK GENMASK(17, 0)
+
+#define DPMAIF_BAT_EN_MSK BIT(16)
+#define DPMAIF_FRG_EN_MSK BIT(28)
+#define DPMAIF_BAT_SIZE_MSK GENMASK(15, 0)
+
+#define DPMAIF_BAT_BID_MAXCNT_MSK GENMASK(31, 16)
+#define DPMAIF_BAT_REMAIN_MINSZ_MSK GENMASK(15, 8)
+#define DPMAIF_PIT_CHK_NUM_MSK GENMASK(31, 24)
+#define DPMAIF_BAT_BUF_SZ_MSK GENMASK(16, 8)
+#define DPMAIF_FRG_BUF_SZ_MSK GENMASK(16, 8)
+#define DPMAIF_BAT_RSV_LEN_MSK GENMASK(7, 0)
+#define DPMAIF_PKT_ALIGN_MSK GENMASK(23, 22)
+
+#define DPMAIF_BAT_CHECK_THRES_MSK GENMASK(21, 16)
+#define DPMAIF_FRG_CHECK_THRES_MSK GENMASK(7, 0)
+
+#define DPMAIF_PKT_ALIGN_EN BIT(23)
+
+#define DPMAIF_DRB_SIZE_MSK GENMASK(15, 0)
+
+#define DPMAIF_DL_PIT_WRIDX_MSK GENMASK(17, 0)
+#define DPMAIF_DL_BAT_WRIDX_MSK GENMASK(17, 0)
+#define DPMAIF_DL_FRG_WRIDX_MSK GENMASK(17, 0)
+
+/* Bit fields of registers */
+/* DPMAIF_UL_CHK_BUSY */
+#define DPMAIF_UL_IDLE_STS BIT(11)
+/* DPMAIF_DL_CHK_BUSY */
+#define DPMAIF_DL_IDLE_STS BIT(23)
+/* DPMAIF_AO_DL_RDY_CHK_THRES */
+#define DPMAIF_DL_PKT_CHECKSUM_EN BIT(31)
+#define DPMAIF_PORT_MODE_PCIE BIT(30)
+#define DPMAIF_DL_BURST_PIT_EN BIT(13)
+/* DPMAIF_DL_BAT_INIT_CON1 */
+#define DPMAIF_DL_BAT_CACHE_PRI BIT(22)
+/* DPMAIF_AP_MEM_CLR */
+#define DPMAIF_MEM_CLR BIT(0)
+/* DPMAIF_AP_OVERWRITE_CFG */
+#define DPMAIF_SRAM_SYNC BIT(0)
+/* DPMAIF_AO_UL_INIT_SET */
+#define DPMAIF_UL_INIT_DONE BIT(0)
+/* DPMAIF_AO_DL_INIT_SET */
+#define DPMAIF_DL_INIT_DONE BIT(0)
+/* DPMAIF_AO_DL_PIT_SEQ_END */
+#define DPMAIF_DL_PIT_SEQ_MSK GENMASK(7, 0)
+/* DPMAIF_UL_RESERVE_AO_RW */
+#define DPMAIF_PCIE_MODE_SET_VALUE 0x55
+/* DPMAIF_AP_CG_EN */
+#define DPMAIF_CG_EN 0x7f
+
+/* DPMAIF interrupt */
+#define _UL_INT_DONE_OFFSET 0
+#define _UL_INT_EMPTY_OFFSET 4
+#define _UL_INT_MD_NOTRDY_OFFSET 8
+#define _UL_INT_PWR_NOTRDY_OFFSET 12
+#define _UL_INT_LEN_ERR_OFFSET 16
+
+#define DPMAIF_DL_INT_BATCNT_LEN_ERR BIT(2)
+#define DPMAIF_DL_INT_PITCNT_LEN_ERR BIT(3)
+
+#define DPMAIF_UL_INT_QDONE_MSK GENMASK(3, 0)
+#define DPMAIF_UL_INT_ERR_MSK GENMASK(19, 16)
+
+#define DPMAIF_UDL_IP_BUSY BIT(0)
+#define DPMAIF_DL_INT_DLQ0_QDONE BIT(8)
+#define DPMAIF_DL_INT_DLQ1_QDONE BIT(9)
+#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN BIT(10)
+#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN BIT(11)
+#define DPMAIF_DL_INT_Q2TOQ1 BIT(24)
+#define DPMAIF_DL_INT_Q2APTOP BIT(25)
+
+#define DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS GENMASK(15, 0)
+#define DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK GENMASK(31, 16)
+
+/* DPMAIF DLQ HW configure */
+#define DPMAIF_AGG_MAX_LEN_DF 65535
+#define DPMAIF_AGG_TBL_ENT_NUM_DF 50
+#define DPMAIF_HASH_PRIME_DF 13
+#define DPMAIF_MID_TIMEOUT_THRES_DF 100
+#define DPMAIF_DLQ_TIMEOUT_THRES_DF 100
+#define DPMAIF_DLQ_PRS_THRES_DF 10
+#define DPMAIF_DLQ_HASH_BIT_CHOOSE_DF 0
+
+#define DPMAIF_DLQPIT_EN_MSK BIT(20)
+#define DPMAIF_DLQPIT_CHAN_OFS 16
+#define DPMAIF_ADD_DLQ_PIT_CHAN_OFS 20
+
+#endif /* __T7XX_REG_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_skb_util.c b/drivers/net/wwan/t7xx/t7xx_skb_util.c
new file mode 100644
index 000000000000..872dc82176dc
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_skb_util.c
@@ -0,0 +1,362 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/skbuff.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_skb_util.h"
+
+/* Pool sizes for each skb length */
+#define SKB_64K_POOL_SIZE 50
+#define SKB_4K_POOL_SIZE 256
+#define SKB_16_POOL_SIZE 64
+
+/* Reload a pool when its level drops below max_len / RELOAD_TH */
+#define RELOAD_TH 3
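+/* For example, with RELOAD_TH 3 and SKB_4K_POOL_SIZE 256, the 4K pool
+ * reload work is queued once fewer than 256 / 3 = 85 skbs remain.
+ */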
+
+#define ALLOC_SKB_RETRY 20
+
+static struct sk_buff *alloc_skb_from_pool(struct skb_pools *pools, size_t size)
+{
+ if (size > MTK_SKB_4K)
+ return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_64k);
+ else if (size > MTK_SKB_16)
+ return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_4k);
+ else if (size > 0)
+ return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_16);
+
+ return NULL;
+}
+
+static struct sk_buff *alloc_skb_from_kernel(size_t size, gfp_t gfp_mask)
+{
+ if (size > MTK_SKB_4K)
+ return __dev_alloc_skb(MTK_SKB_64K, gfp_mask);
+ else if (size > MTK_SKB_1_5K)
+ return __dev_alloc_skb(MTK_SKB_4K, gfp_mask);
+ else if (size > MTK_SKB_16)
+ return __dev_alloc_skb(MTK_SKB_1_5K, gfp_mask);
+ else if (size > 0)
+ return __dev_alloc_skb(MTK_SKB_16, gfp_mask);
+
+ return NULL;
+}
+
+struct sk_buff *ccci_skb_dequeue(struct workqueue_struct *reload_work_queue,
+ struct ccci_skb_queue *queue)
+{
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&queue->skb_list.lock, flags);
+ skb = __skb_dequeue(&queue->skb_list);
+ if (queue->max_occupied < queue->max_len - queue->skb_list.qlen)
+ queue->max_occupied = queue->max_len - queue->skb_list.qlen;
+
+ queue->deq_count++;
+ if (queue->pre_filled && queue->skb_list.qlen < queue->max_len / RELOAD_TH)
+ queue_work(reload_work_queue, &queue->reload_work);
+
+ spin_unlock_irqrestore(&queue->skb_list.lock, flags);
+
+ return skb;
+}
+
+void ccci_skb_enqueue(struct ccci_skb_queue *queue, struct sk_buff *skb)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&queue->skb_list.lock, flags);
+ if (queue->skb_list.qlen < queue->max_len) {
+ queue->enq_count++;
+ __skb_queue_tail(&queue->skb_list, skb);
+ if (queue->skb_list.qlen > queue->max_history)
+ queue->max_history = queue->skb_list.qlen;
+ } else {
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock_irqrestore(&queue->skb_list.lock, flags);
+}
+
+int ccci_skb_queue_alloc(struct ccci_skb_queue *queue, size_t skb_size, size_t max_len, bool fill)
+{
+ skb_queue_head_init(&queue->skb_list);
+ queue->max_len = max_len;
+ if (fill) {
+ unsigned int i;
+
+ for (i = 0; i < queue->max_len; i++) {
+ struct sk_buff *skb;
+
+ skb = alloc_skb_from_kernel(skb_size, GFP_KERNEL);
+
+ if (!skb) {
+ while ((skb = skb_dequeue(&queue->skb_list)))
+ dev_kfree_skb_any(skb);
+ return -ENOMEM;
+ }
+
+ skb_queue_tail(&queue->skb_list, skb);
+ }
+
+ queue->pre_filled = true;
+ } else {
+ queue->pre_filled = false;
+ }
+
+ queue->max_history = 0;
+
+ return 0;
+}
+
+/**
+ * ccci_alloc_skb_from_pool() - allocate memory for skb from pre-allocated pools
+ * @pools: skb pools
+ * @size: allocation size
+ * @blocking: true for a blocking allocation
+ *
+ * Return: pointer to skb on success, NULL on failure.
+ */
+struct sk_buff *ccci_alloc_skb_from_pool(struct skb_pools *pools, size_t size, bool blocking)
+{
+ struct ccci_buffer_ctrl *buf_ctrl;
+ struct sk_buff *skb;
+ int count;
+
+ if (!size || size > MTK_SKB_64K)
+ return NULL;
+
+ for (count = 0; count < ALLOC_SKB_RETRY; count++) {
+ skb = alloc_skb_from_pool(pools, size);
+ if (skb || !blocking)
+ break;
+
+ /* No buffers available in the pool.
+ * Wait for the pool's reload_work to allocate new buffers from the kernel.
+ */
+ might_sleep();
+ msleep(MTK_SKB_WAIT_FOR_POOLS_RELOAD_MS);
+ }
+
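+ /* Stash a ccci_buffer_ctrl header in the skb headroom to mark the skb
+ * as pool-recyclable, then pull it back so the caller sees an
+ * unchanged data pointer.
+ */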
+ if (skb && skb_headroom(skb) == NET_SKB_PAD) {
+ buf_ctrl = skb_push(skb, sizeof(*buf_ctrl));
+ buf_ctrl->head_magic = CCCI_BUF_MAGIC;
+ buf_ctrl->policy = MTK_SKB_POLICY_RECYCLE;
+ buf_ctrl->ioc_override = 0;
+ skb_pull(skb, sizeof(struct ccci_buffer_ctrl));
+ }
+
+ return skb;
+}
+
+/**
+ * ccci_alloc_skb() - allocate memory for skb from the kernel
+ * @size: allocation size
+ * @blocking: true for a blocking allocation
+ *
+ * Return: pointer to skb on success, NULL on failure.
+ */
+struct sk_buff *ccci_alloc_skb(size_t size, bool blocking)
+{
+ struct sk_buff *skb;
+ int count;
+
+ if (!size || size > MTK_SKB_64K)
+ return NULL;
+
+ if (blocking) {
+ might_sleep();
+ skb = alloc_skb_from_kernel(size, GFP_KERNEL);
+ } else {
+ for (count = 0; count < ALLOC_SKB_RETRY; count++) {
+ skb = alloc_skb_from_kernel(size, GFP_ATOMIC);
+ if (skb)
+ return skb;
+ }
+ }
+
+ return skb;
+}
+
+static void free_skb_from_pool(struct skb_pools *pools, struct sk_buff *skb)
+{
+ if (skb_size(skb) < MTK_SKB_4K)
+ ccci_skb_enqueue(&pools->skb_pool_16, skb);
+ else if (skb_size(skb) < MTK_SKB_64K)
+ ccci_skb_enqueue(&pools->skb_pool_4k, skb);
+ else
+ ccci_skb_enqueue(&pools->skb_pool_64k, skb);
+}
+
+void ccci_free_skb(struct skb_pools *pools, struct sk_buff *skb)
+{
+ struct ccci_buffer_ctrl *buf_ctrl;
+ enum data_policy policy;
+ int offset;
+
+ if (!skb)
+ return;
+
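+ /* Look for the buffer-control header that ccci_alloc_skb_from_pool()
+ * stashed in the headroom; if the magic does not match, fall back to
+ * freeing the skb.
+ */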
+ offset = NET_SKB_PAD - sizeof(*buf_ctrl);
+ buf_ctrl = (struct ccci_buffer_ctrl *)(skb->head + offset);
+ policy = MTK_SKB_POLICY_FREE;
+
+ if (buf_ctrl->head_magic == CCCI_BUF_MAGIC) {
+ policy = buf_ctrl->policy;
+ memset(buf_ctrl, 0, sizeof(*buf_ctrl));
+ }
+
+ if (policy != MTK_SKB_POLICY_RECYCLE || skb->dev ||
+ skb_size(skb) < NET_SKB_PAD + MTK_SKB_16)
+ policy = MTK_SKB_POLICY_FREE;
+
+ switch (policy) {
+ case MTK_SKB_POLICY_RECYCLE:
+ skb->data = skb->head;
+ skb->len = 0;
+ skb_reset_tail_pointer(skb);
+ /* Reserve headroom as netdev_alloc_skb() does */
+ skb_reserve(skb, NET_SKB_PAD);
+ /* Return the skb to its pool */
+ free_skb_from_pool(pools, skb);
+ break;
+
+ case MTK_SKB_POLICY_FREE:
+ dev_kfree_skb_any(skb);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void reload_work_64k(struct work_struct *work)
+{
+ struct ccci_skb_queue *queue;
+ struct sk_buff *skb;
+
+ queue = container_of(work, struct ccci_skb_queue, reload_work);
+
+ while (queue->skb_list.qlen < SKB_64K_POOL_SIZE) {
+ skb = alloc_skb_from_kernel(MTK_SKB_64K, GFP_KERNEL);
+ if (skb)
+ skb_queue_tail(&queue->skb_list, skb);
+ }
+}
+
+static void reload_work_4k(struct work_struct *work)
+{
+ struct ccci_skb_queue *queue;
+ struct sk_buff *skb;
+
+ queue = container_of(work, struct ccci_skb_queue, reload_work);
+
+ while (queue->skb_list.qlen < SKB_4K_POOL_SIZE) {
+ skb = alloc_skb_from_kernel(MTK_SKB_4K, GFP_KERNEL);
+ if (skb)
+ skb_queue_tail(&queue->skb_list, skb);
+ }
+}
+
+static void reload_work_16(struct work_struct *work)
+{
+ struct ccci_skb_queue *queue;
+ struct sk_buff *skb;
+
+ queue = container_of(work, struct ccci_skb_queue, reload_work);
+
+ while (queue->skb_list.qlen < SKB_16_POOL_SIZE) {
+ skb = alloc_skb_from_kernel(MTK_SKB_16, GFP_KERNEL);
+ if (skb)
+ skb_queue_tail(&queue->skb_list, skb);
+ }
+}
+
+int ccci_skb_pool_alloc(struct skb_pools *pools)
+{
+ struct sk_buff *skb;
+ int ret;
+
+ ret = ccci_skb_queue_alloc(&pools->skb_pool_64k,
+ MTK_SKB_64K, SKB_64K_POOL_SIZE, true);
+ if (ret)
+ return ret;
+
+ ret = ccci_skb_queue_alloc(&pools->skb_pool_4k,
+ MTK_SKB_4K, SKB_4K_POOL_SIZE, true);
+ if (ret)
+ goto err_pool_4k;
+
+ ret = ccci_skb_queue_alloc(&pools->skb_pool_16,
+ MTK_SKB_16, SKB_16_POOL_SIZE, true);
+ if (ret)
+ goto err_pool_16k;
+
+ pools->reload_work_queue = alloc_workqueue("pool_reload_work",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ 1);
+ if (!pools->reload_work_queue) {
+ ret = -ENOMEM;
+ goto err_wq;
+ }
+
+ INIT_WORK(&pools->skb_pool_64k.reload_work, reload_work_64k);
+ INIT_WORK(&pools->skb_pool_4k.reload_work, reload_work_4k);
+ INIT_WORK(&pools->skb_pool_16.reload_work, reload_work_16);
+
+ return ret;
+
+err_wq:
+ while ((skb = skb_dequeue(&pools->skb_pool_16.skb_list)))
+ dev_kfree_skb_any(skb);
+err_pool_16k:
+ while ((skb = skb_dequeue(&pools->skb_pool_4k.skb_list)))
+ dev_kfree_skb_any(skb);
+err_pool_4k:
+ while ((skb = skb_dequeue(&pools->skb_pool_64k.skb_list)))
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
+void ccci_skb_pool_free(struct skb_pools *pools)
+{
+ struct ccci_skb_queue *queue;
+ struct sk_buff *newsk;
+
+ flush_work(&pools->skb_pool_64k.reload_work);
+ flush_work(&pools->skb_pool_4k.reload_work);
+ flush_work(&pools->skb_pool_16.reload_work);
+
+ if (pools->reload_work_queue)
+ destroy_workqueue(pools->reload_work_queue);
+
+ queue = &pools->skb_pool_64k;
+ while ((newsk = skb_dequeue(&queue->skb_list)) != NULL)
+ dev_kfree_skb_any(newsk);
+
+ queue = &pools->skb_pool_4k;
+ while ((newsk = skb_dequeue(&queue->skb_list)) != NULL)
+ dev_kfree_skb_any(newsk);
+
+ queue = &pools->skb_pool_16;
+ while ((newsk = skb_dequeue(&queue->skb_list)) != NULL)
+ dev_kfree_skb_any(newsk);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_skb_util.h b/drivers/net/wwan/t7xx/t7xx_skb_util.h
new file mode 100644
index 000000000000..390e312d5eed
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_skb_util.h
@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_SKB_UTIL_H__
+#define __T7XX_SKB_UTIL_H__
+
+#include <linux/skbuff.h>
+#include <linux/workqueue.h>
+#include <linux/wwan.h>
+
+#define MTK_SKB_64K 64528 /* 63kB + CCCI header */
+#define MTK_SKB_4K 3584 /* 3.5kB */
+#define MTK_SKB_1_5K (WWAN_DEFAULT_MTU + 16) /* net MTU + CCCI_H, for network packet */
+#define MTK_SKB_16 16 /* for struct ccci_header */
+#define NET_RX_BUF MTK_SKB_4K
+
+#define CCCI_BUF_MAGIC 0xFECDBA89
+
+#define MTK_SKB_WAIT_FOR_POOLS_RELOAD_MS 20
+
+struct ccci_skb_queue {
+ struct sk_buff_head skb_list;
+ unsigned int max_len;
+ struct work_struct reload_work;
+ bool pre_filled;
+ unsigned int max_history;
+ unsigned int max_occupied;
+ unsigned int enq_count;
+ unsigned int deq_count;
+};
+
+struct skb_pools {
+ struct ccci_skb_queue skb_pool_64k;
+ struct ccci_skb_queue skb_pool_4k;
+ struct ccci_skb_queue skb_pool_16;
+ struct workqueue_struct *reload_work_queue;
+};
+
+/**
+ * enum data_policy - tells the request free routine how to handle the skb
+ * @MTK_SKB_POLICY_FREE: free the skb
+ * @MTK_SKB_POLICY_RECYCLE: put the skb back into our pool
+ *
+ * The driver request structure will always be recycled, but its skb
+ * can have a different policy. The driver request can work as a wrapper
+ * because the network subsys will handle the skb itself.
+ * TX: policy is determined by sender
+ * RX: policy is determined by receiver
+ */
+enum data_policy {
+ MTK_SKB_POLICY_FREE = 0,
+ MTK_SKB_POLICY_RECYCLE,
+};
+
+/* Get free skb flags */
+#define GFS_NONE_BLOCKING 0
+#define GFS_BLOCKING 1
+
+/* CCCI buffer control structure. Must be less than NET_SKB_PAD */
+struct ccci_buffer_ctrl {
+ unsigned int head_magic;
+ enum data_policy policy;
+ unsigned char ioc_override;
+ unsigned char blocking;
+};
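+
+/*
+ * Usage sketch (illustrative only, not compiled): a TX-side caller could
+ * request recycling by filling the control block kept in the skb headroom
+ * before handing the buffer back, e.g.:
+ *
+ *    struct ccci_buffer_ctrl *buf_ctrl = (struct ccci_buffer_ctrl *)skb->head;
+ *
+ *    buf_ctrl->head_magic = CCCI_BUF_MAGIC;
+ *    buf_ctrl->policy = MTK_SKB_POLICY_RECYCLE;
+ *    ccci_free_skb(pools, skb);
+ *
+ * Whether the control block really lives at skb->head is an assumption of
+ * this sketch; t7xx_skb_util.c is the authoritative reference.
+ */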
+
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+static inline unsigned int skb_size(struct sk_buff *skb)
+{
+ return skb->end;
+}
+
+static inline unsigned int skb_data_size(struct sk_buff *skb)
+{
+ return skb->head + skb->end - skb->data;
+}
+#else
+static inline unsigned int skb_size(struct sk_buff *skb)
+{
+ return skb->end - skb->head;
+}
+
+static inline unsigned int skb_data_size(struct sk_buff *skb)
+{
+ return skb->end - skb->data;
+}
+#endif
+
+int ccci_skb_pool_alloc(struct skb_pools *pools);
+void ccci_skb_pool_free(struct skb_pools *pools);
+struct sk_buff *ccci_alloc_skb_from_pool(struct skb_pools *pools, size_t size, bool blocking);
+struct sk_buff *ccci_alloc_skb(size_t size, bool blocking);
+void ccci_free_skb(struct skb_pools *pools, struct sk_buff *skb);
+struct sk_buff *ccci_skb_dequeue(struct workqueue_struct *reload_work_queue,
+ struct ccci_skb_queue *queue);
+void ccci_skb_enqueue(struct ccci_skb_queue *queue, struct sk_buff *skb);
+int ccci_skb_queue_alloc(struct ccci_skb_queue *queue, size_t skb_size, size_t max_len, bool fill);
+
+#endif /* __T7XX_SKB_UTIL_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
new file mode 100644
index 000000000000..fb81e894d5f9
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -0,0 +1,598 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+
+#define FSM_DRM_DISABLE_DELAY_MS 200
+#define FSM_EX_REASON GENMASK(23, 16)
+
+static struct ccci_fsm_ctl *ccci_fsm_entry;
+
+void fsm_notifier_register(struct fsm_notifier_block *notifier)
+{
+ struct ccci_fsm_ctl *ctl;
+ unsigned long flags;
+
+ ctl = ccci_fsm_entry;
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_add_tail(&notifier->entry, &ctl->notifier_list);
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
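+
+/*
+ * Usage sketch (illustrative only, not compiled): a client interested in
+ * modem state changes fills a notifier block and registers it, e.g.:
+ *
+ *    my_nb.notifier_fn = my_md_state_cb;
+ *    my_nb.data = my_ctx;
+ *    fsm_notifier_register(&my_nb);
+ *
+ * The callback is invoked as notifier_fn(state, data), as can be seen in
+ * fsm_state_notify() below; my_nb, my_md_state_cb and my_ctx are made-up
+ * names for this sketch.
+ */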
+
+void fsm_notifier_unregister(struct fsm_notifier_block *notifier)
+{
+ struct fsm_notifier_block *notifier_cur, *notifier_next;
+ struct ccci_fsm_ctl *ctl;
+ unsigned long flags;
+
+ ctl = ccci_fsm_entry;
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_for_each_entry_safe(notifier_cur, notifier_next,
+ &ctl->notifier_list, entry) {
+ if (notifier_cur == notifier)
+ list_del(&notifier->entry);
+ }
+
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+static void fsm_state_notify(enum md_state state)
+{
+ struct fsm_notifier_block *notifier;
+ struct ccci_fsm_ctl *ctl;
+ unsigned long flags;
+
+ ctl = ccci_fsm_entry;
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_for_each_entry(notifier, &ctl->notifier_list, entry) {
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+ if (notifier->notifier_fn)
+ notifier->notifier_fn(state, notifier->data);
+
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ }
+
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+void fsm_broadcast_state(struct ccci_fsm_ctl *ctl, enum md_state state)
+{
+ if (ctl->md_state != MD_STATE_WAITING_FOR_HS2 && state == MD_STATE_READY)
+ return;
+
+ ctl->md_state = state;
+
+ fsm_state_notify(state);
+}
+
+static void fsm_finish_command(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd, int result)
+{
+ unsigned long flags;
+
+ if (cmd->flag & FSM_CMD_FLAG_WAITING_TO_COMPLETE) {
+ /* The processing thread may see the list after a command is added
+ * without being woken up. Hence a spinlock is needed.
+ */
+ spin_lock_irqsave(&ctl->cmd_complete_lock, flags);
+ cmd->result = result;
+ wake_up_all(&cmd->complete_wq);
+ spin_unlock_irqrestore(&ctl->cmd_complete_lock, flags);
+ } else {
+ /* no one is waiting for this command, so it is safe to kfree it here */
+ kfree(cmd);
+ }
+}
+
+/* call only with protection of event_lock */
+static void fsm_finish_event(struct ccci_fsm_ctl *ctl, struct ccci_fsm_event *event)
+{
+ list_del(&event->entry);
+ kfree(event);
+}
+
+static void fsm_flush_queue(struct ccci_fsm_ctl *ctl)
+{
+ struct ccci_fsm_event *event, *evt_next;
+ struct ccci_fsm_command *cmd, *cmd_next;
+ unsigned long flags;
+ struct device *dev;
+
+ dev = &ctl->md->mtk_dev->pdev->dev;
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_for_each_entry_safe(cmd, cmd_next, &ctl->command_queue, entry) {
+ dev_warn(dev, "unhandled command %d\n", cmd->cmd_id);
+ list_del(&cmd->entry);
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ }
+
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+ dev_warn(dev, "unhandled event %d\n", event->event_id);
+ fsm_finish_event(ctl, event);
+ }
+
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+/* cmd is non-NULL only when the reason is an ordinary exception */
+static void fsm_routine_exception(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd,
+ enum ccci_ex_reason reason)
+{
+ bool rec_ok = false;
+ struct ccci_fsm_event *event;
+ unsigned long flags;
+ struct device *dev;
+ int cnt;
+
+ dev = &ctl->md->mtk_dev->pdev->dev;
+ dev_err(dev, "exception %d, from %ps\n", reason, __builtin_return_address(0));
+ /* state sanity check */
+ if (ctl->curr_state != CCCI_FSM_READY && ctl->curr_state != CCCI_FSM_STARTING) {
+ if (cmd)
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ return;
+ }
+
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_EXCEPTION;
+
+ /* check exception reason */
+ switch (reason) {
+ case EXCEPTION_HS_TIMEOUT:
+ dev_err(dev, "BOOT_HS_FAIL\n");
+ break;
+
+ case EXCEPTION_EVENT:
+ fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+ mtk_md_exception_handshake(ctl->md);
+ cnt = 0;
+ while (cnt < MD_EX_REC_OK_TIMEOUT_MS / EVENT_POLL_INTERVAL_MS) {
+ if (kthread_should_stop())
+ return;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ if (!list_empty(&ctl->event_queue)) {
+ event = list_first_entry(&ctl->event_queue,
+ struct ccci_fsm_event, entry);
+ if (event->event_id == CCCI_EVENT_MD_EX) {
+ fsm_finish_event(ctl, event);
+ } else if (event->event_id == CCCI_EVENT_MD_EX_REC_OK) {
+ rec_ok = true;
+ fsm_finish_event(ctl, event);
+ }
+ }
+
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+ if (rec_ok)
+ break;
+
+ cnt++;
+ msleep(EVENT_POLL_INTERVAL_MS);
+ }
+
+ cnt = 0;
+ while (cnt < MD_EX_PASS_TIMEOUT_MS / EVENT_POLL_INTERVAL_MS) {
+ if (kthread_should_stop())
+ return;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ if (!list_empty(&ctl->event_queue)) {
+ event = list_first_entry(&ctl->event_queue,
+ struct ccci_fsm_event, entry);
+ if (event->event_id == CCCI_EVENT_MD_EX_PASS)
+ fsm_finish_event(ctl, event);
+ }
+
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+ cnt++;
+ msleep(EVENT_POLL_INTERVAL_MS);
+ }
+
+ break;
+
+ default:
+ break;
+ }
+
+ if (cmd)
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_OK);
+}
+
+static void fsm_stopped_handler(struct ccci_fsm_ctl *ctl)
+{
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_STOPPED;
+
+ fsm_broadcast_state(ctl, MD_STATE_STOPPED);
+ mtk_md_reset(ctl->md->mtk_dev);
+}
+
+static void fsm_routine_stopped(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd)
+{
+ /* state sanity check */
+ if (ctl->curr_state == CCCI_FSM_STOPPED) {
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ return;
+ }
+
+ fsm_stopped_handler(ctl);
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_OK);
+}
+
+static void fsm_routine_stopping(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd)
+{
+ struct mtk_pci_dev *mtk_dev;
+ int err;
+
+ /* state sanity check */
+ if (ctl->curr_state == CCCI_FSM_STOPPED || ctl->curr_state == CCCI_FSM_STOPPING) {
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ return;
+ }
+
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_STOPPING;
+
+ fsm_broadcast_state(ctl, MD_STATE_WAITING_TO_STOP);
+ /* stop HW */
+ cldma_stop(ID_CLDMA1);
+
+ mtk_dev = ctl->md->mtk_dev;
+ if (!atomic_read(&ctl->md->rgu_irq_asserted)) {
+ /* disable DRM before FLDR */
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_DRM_DISABLE_AP);
+ msleep(FSM_DRM_DISABLE_DELAY_MS);
+ /* try FLDR first */
+ err = mtk_acpi_fldr_func(mtk_dev);
+ if (err)
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_DEVICE_RESET);
+ }
+
+ /* auto jump to stopped state handler */
+ fsm_stopped_handler(ctl);
+
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_OK);
+}
+
+static void fsm_routine_ready(struct ccci_fsm_ctl *ctl)
+{
+ struct mtk_modem *md;
+
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_READY;
+ fsm_broadcast_state(ctl, MD_STATE_READY);
+ md = ctl->md;
+ mtk_md_event_notify(md, FSM_READY);
+}
+
+static void fsm_routine_starting(struct ccci_fsm_ctl *ctl)
+{
+ struct mtk_modem *md;
+ struct device *dev;
+
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_STARTING;
+
+ fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS1);
+ md = ctl->md;
+ dev = &md->mtk_dev->pdev->dev;
+ mtk_md_event_notify(md, FSM_START);
+
+ wait_event_interruptible_timeout(ctl->async_hk_wq,
+ atomic_read(&md->core_md.ready) ||
+ atomic_read(&ctl->exp_flg), HZ * 60);
+
+ if (atomic_read(&ctl->exp_flg))
+ dev_err(dev, "MD exception is captured during handshake\n");
+
+ if (!atomic_read(&md->core_md.ready)) {
+ dev_err(dev, "MD handshake timeout\n");
+ fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
+ } else {
+ fsm_routine_ready(ctl);
+ }
+}
+
+static void fsm_routine_start(struct ccci_fsm_ctl *ctl, struct ccci_fsm_command *cmd)
+{
+ struct mtk_modem *md;
+ struct device *dev;
+ u32 dev_status;
+
+ md = ctl->md;
+ if (!md)
+ return;
+
+ dev = &md->mtk_dev->pdev->dev;
+ /* state sanity check */
+ if (ctl->curr_state != CCCI_FSM_INIT &&
+ ctl->curr_state != CCCI_FSM_PRE_START &&
+ ctl->curr_state != CCCI_FSM_STOPPED) {
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ return;
+ }
+
+ ctl->last_state = ctl->curr_state;
+ ctl->curr_state = CCCI_FSM_PRE_START;
+ mtk_md_event_notify(md, FSM_PRE_START);
+
+ read_poll_timeout(ioread32, dev_status, (dev_status & MISC_STAGE_MASK) == LINUX_STAGE,
+ 20000, 2000000, false, IREG_BASE(md->mtk_dev) + PCIE_MISC_DEV_STATUS);
+ if ((dev_status & MISC_STAGE_MASK) != LINUX_STAGE) {
+ dev_err(dev, "invalid device status 0x%lx\n", dev_status & MISC_STAGE_MASK);
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ return;
+ }
+ cldma_hif_hw_init(ID_CLDMA1);
+ fsm_routine_starting(ctl);
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_OK);
+}
+
+static int fsm_main_thread(void *data)
+{
+ struct ccci_fsm_command *cmd;
+ struct ccci_fsm_ctl *ctl;
+ unsigned long flags;
+
+ ctl = data;
+
+ while (!kthread_should_stop()) {
+ if (wait_event_interruptible(ctl->command_wq, !list_empty(&ctl->command_queue) ||
+ kthread_should_stop()))
+ continue;
+ if (kthread_should_stop())
+ break;
+
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ cmd = list_first_entry(&ctl->command_queue, struct ccci_fsm_command, entry);
+ list_del(&cmd->entry);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+ switch (cmd->cmd_id) {
+ case CCCI_COMMAND_START:
+ fsm_routine_start(ctl, cmd);
+ break;
+
+ case CCCI_COMMAND_EXCEPTION:
+ fsm_routine_exception(ctl, cmd, FIELD_GET(FSM_EX_REASON, cmd->flag));
+ break;
+
+ case CCCI_COMMAND_PRE_STOP:
+ fsm_routine_stopping(ctl, cmd);
+ break;
+
+ case CCCI_COMMAND_STOP:
+ fsm_routine_stopped(ctl, cmd);
+ break;
+
+ default:
+ fsm_finish_command(ctl, cmd, FSM_CMD_RESULT_FAIL);
+ fsm_flush_queue(ctl);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+int fsm_append_command(struct ccci_fsm_ctl *ctl, enum ccci_fsm_cmd_state cmd_id, unsigned int flag)
+{
+ struct ccci_fsm_command *cmd;
+ unsigned long flags;
+ int result = 0;
+
+ cmd = kmalloc(sizeof(*cmd),
+ (in_irq() || in_softirq() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL);
+ if (!cmd)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&cmd->entry);
+ init_waitqueue_head(&cmd->complete_wq);
+ cmd->cmd_id = cmd_id;
+ cmd->result = FSM_CMD_RESULT_PENDING;
+ if (in_irq() || irqs_disabled())
+ flag &= ~FSM_CMD_FLAG_WAITING_TO_COMPLETE;
+
+ cmd->flag = flag;
+
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_add_tail(&cmd->entry, &ctl->command_queue);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+ /* after this line, only dereference command when "waiting-to-complete" */
+ wake_up(&ctl->command_wq);
+ if (flag & FSM_CMD_FLAG_WAITING_TO_COMPLETE) {
+ wait_event(cmd->complete_wq, cmd->result != FSM_CMD_RESULT_PENDING);
+ if (cmd->result != FSM_CMD_RESULT_OK)
+ result = -EINVAL;
+
+ spin_lock_irqsave(&ctl->cmd_complete_lock, flags);
+ kfree(cmd);
+ spin_unlock_irqrestore(&ctl->cmd_complete_lock, flags);
+ }
+
+ return result;
+}
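+
+/*
+ * Usage sketch (illustrative only, not compiled): callers that need the
+ * outcome pass FSM_CMD_FLAG_WAITING_TO_COMPLETE and block, e.g.:
+ *
+ *    ret = fsm_append_command(ctl, CCCI_COMMAND_PRE_STOP,
+ *                             FSM_CMD_FLAG_WAITING_TO_COMPLETE);
+ *
+ * Fire-and-forget callers pass a zero flag, as ccci_fsm_recv_md_interrupt()
+ * does for CCCI_COMMAND_START.
+ */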
+
+int fsm_append_event(struct ccci_fsm_ctl *ctl, enum ccci_fsm_event_state event_id,
+ unsigned char *data, unsigned int length)
+{
+ struct ccci_fsm_event *event;
+ unsigned long flags;
+ struct device *dev;
+
+ dev = &ctl->md->mtk_dev->pdev->dev;
+
+ if (event_id <= CCCI_EVENT_INVALID || event_id >= CCCI_EVENT_MAX) {
+ dev_err(dev, "invalid event %d\n", event_id);
+ return -EINVAL;
+ }
+
+ event = kmalloc(struct_size(event, data, length),
+ in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
+ if (!event)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&event->entry);
+ event->event_id = event_id;
+ event->length = length;
+ if (data && length)
+ memcpy(event->data, data, flex_array_size(event, data, event->length));
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_add_tail(&event->entry, &ctl->event_queue);
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+ wake_up_all(&ctl->event_wq);
+ return 0;
+}
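+
+/*
+ * Usage sketch (illustrative only, not compiled): exception handling code
+ * can queue an event with no payload, e.g.:
+ *
+ *    fsm_append_event(ctl, CCCI_EVENT_MD_EX_REC_OK, NULL, 0);
+ *
+ * which fsm_routine_exception() above then consumes from the event queue.
+ */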
+
+void fsm_clear_event(struct ccci_fsm_ctl *ctl, enum ccci_fsm_event_state event_id)
+{
+ struct ccci_fsm_event *event, *evt_next;
+ unsigned long flags;
+ struct device *dev;
+
+ dev = &ctl->md->mtk_dev->pdev->dev;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+ dev_err(dev, "unhandled event %d\n", event->event_id);
+ if (event->event_id == event_id)
+ fsm_finish_event(ctl, event);
+ }
+
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+struct ccci_fsm_ctl *fsm_get_entity_by_device_number(dev_t dev_n)
+{
+ if (ccci_fsm_entry && ccci_fsm_entry->monitor_ctl.dev_n == dev_n)
+ return ccci_fsm_entry;
+
+ return NULL;
+}
+
+struct ccci_fsm_ctl *fsm_get_entry(void)
+{
+ return ccci_fsm_entry;
+}
+
+enum md_state ccci_fsm_get_md_state(void)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = ccci_fsm_entry;
+ if (ctl)
+ return ctl->md_state;
+ else
+ return MD_STATE_INVALID;
+}
+
+unsigned int ccci_fsm_get_current_state(void)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = ccci_fsm_entry;
+ if (ctl)
+ return ctl->curr_state;
+ else
+ return CCCI_FSM_STOPPED;
+}
+
+void ccci_fsm_recv_md_interrupt(enum md_irq_type type)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = ccci_fsm_entry;
+ if (type == MD_IRQ_PORT_ENUM) {
+ fsm_append_command(ctl, CCCI_COMMAND_START, 0);
+ } else if (type == MD_IRQ_CCIF_EX) {
+ /* interrupt handshake flow */
+ atomic_set(&ctl->exp_flg, 1);
+ wake_up(&ctl->async_hk_wq);
+ fsm_append_command(ctl, CCCI_COMMAND_EXCEPTION,
+ FIELD_PREP(FSM_EX_REASON, EXCEPTION_EE));
+ }
+}
+
+void ccci_fsm_reset(void)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = ccci_fsm_entry;
+ /* Clear event and command queues */
+ fsm_flush_queue(ctl);
+
+ ctl->last_state = CCCI_FSM_INIT;
+ ctl->curr_state = CCCI_FSM_STOPPED;
+ atomic_set(&ctl->exp_flg, 0);
+}
+
+int ccci_fsm_init(struct mtk_modem *md)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = devm_kzalloc(&md->mtk_dev->pdev->dev, sizeof(*ctl), GFP_KERNEL);
+ if (!ctl)
+ return -ENOMEM;
+
+ ccci_fsm_entry = ctl;
+ ctl->md = md;
+ ctl->last_state = CCCI_FSM_INIT;
+ ctl->curr_state = CCCI_FSM_INIT;
+ INIT_LIST_HEAD(&ctl->command_queue);
+ INIT_LIST_HEAD(&ctl->event_queue);
+ init_waitqueue_head(&ctl->async_hk_wq);
+ init_waitqueue_head(&ctl->event_wq);
+ INIT_LIST_HEAD(&ctl->notifier_list);
+ init_waitqueue_head(&ctl->command_wq);
+ spin_lock_init(&ctl->event_lock);
+ spin_lock_init(&ctl->command_lock);
+ spin_lock_init(&ctl->cmd_complete_lock);
+ atomic_set(&ctl->exp_flg, 0);
+ spin_lock_init(&ctl->notifier_lock);
+
+ ctl->fsm_thread = kthread_run(fsm_main_thread, ctl, "ccci_fsm");
+ if (IS_ERR(ctl->fsm_thread)) {
+ dev_err(&md->mtk_dev->pdev->dev, "failed to start monitor thread\n");
+ return PTR_ERR(ctl->fsm_thread);
+ }
+
+ return 0;
+}
+
+void ccci_fsm_uninit(void)
+{
+ struct ccci_fsm_ctl *ctl;
+
+ ctl = ccci_fsm_entry;
+ if (!ctl)
+ return;
+
+ if (ctl->fsm_thread)
+ kthread_stop(ctl->fsm_thread);
+
+ fsm_flush_queue(ctl);
+}
--
2.17.1

2021-11-01 03:58:03

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 07/14] net: wwan: t7xx: Data path HW layer

From: Haijun Liu <[email protected]>

Data Path Modem AP Interface (DPMAIF) HW layer provides HW abstraction
for the upper layer (DPMAIF HIF). It implements functions for HW
configuration, TX/RX control, and interrupt handling.
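
As a rough usage sketch (illustrative only, not part of this patch), the
DPMAIF HIF layer is expected to drive this HW interface roughly as
follows. The function name dpmaif_example_bottom_half is made up for the
example; only the calls and enum values come from this patch:

    static void dpmaif_example_bottom_half(struct dpmaif_ctrl *dpmaif_ctrl)
    {
            struct dpmaif_hw_intr_st_para para = {};
            int i, cnt;

            /* Collect, mask and acknowledge pending UL/DL interrupt sources */
            cnt = dpmaif_hw_get_interrupt_status(dpmaif_ctrl, &para, DPF_RX_QNO0);

            for (i = 0; i < cnt; i++) {
                    switch (para.intr_types[i]) {
                    case DPF_INTR_UL_DONE:
                            /* TX done on the queues in para.intr_queues[i] */
                            break;
                    case DPF_INTR_DL_DLQ0_DONE:
                            /* RX work pending on DL queue 0 */
                            break;
                    default:
                            break;
                    }
            }
    }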

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1524 +++++++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_dpmaif.h | 168 +++
2 files changed, 1692 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h

diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
new file mode 100644
index 000000000000..b9d78a21b219
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
@@ -0,0 +1,1524 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_reg.h"
+
+static int dpmaif_init_intr(struct dpmaif_hw_info *hw_info)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk;
+ u32 value, l2intr_en, l2intr_err_en;
+ int ret;
+
+ isr_en_msk = &hw_info->isr_en_mask;
+
+ /* set UL interrupt */
+ l2intr_en = DPMAIF_UL_INT_ERR_MSK | DPMAIF_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk = l2intr_en;
+
+ iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+
+ /* set interrupt enable mask */
+ iowrite32(l2intr_en, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+ iowrite32(~l2intr_en, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+ /* check mask status */
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & l2intr_en) != l2intr_en,
+ 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ /* set DL interrupt */
+ l2intr_err_en = DPMAIF_DL_INT_PITCNT_LEN_ERR | DPMAIF_DL_INT_BATCNT_LEN_ERR;
+ isr_en_msk->ap_dl_l2intr_err_en_msk = l2intr_err_en;
+
+ l2intr_en = DPMAIF_DL_INT_DLQ0_QDONE | DPMAIF_DL_INT_DLQ0_PITCNT_LEN |
+ DPMAIF_DL_INT_DLQ1_QDONE | DPMAIF_DL_INT_DLQ1_PITCNT_LEN;
+ isr_en_msk->ap_ul_l2intr_en_msk = l2intr_en;
+
+ iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+
+ /* set DL ISR PD enable mask */
+ iowrite32(~l2intr_en, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+
+ /* check mask status */
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0,
+ value, (value & l2intr_en) != l2intr_en,
+ 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ isr_en_msk->ap_udl_ip_busy_en_msk = DPMAIF_UDL_IP_BUSY;
+
+ iowrite32(DPMAIF_AP_IP_BUSY_MASK, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+
+ iowrite32(isr_en_msk->ap_udl_ip_busy_en_msk,
+ hw_info->pcie_base + DPMAIF_AO_AP_DLUL_IP_BUSY_MASK);
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+ value |= DPMAIF_DL_INT_Q2APTOP | DPMAIF_DL_INT_Q2TOQ1;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+ iowrite32(DPMA_HPC_ALL_INT_MASK, hw_info->pcie_base + DPMAIF_HPC_INTR_MASK);
+
+ return 0;
+}
+
+static void dpmaif_mask_ulq_interrupt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk;
+ struct dpmaif_hw_info *hw_info;
+ u32 value, ul_int_que_done;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ isr_en_msk = &hw_info->isr_en_mask;
+ ul_int_que_done = BIT(q_num + _UL_INT_DONE_OFFSET) & DPMAIF_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk &= ~ul_int_que_done;
+ iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+ /* check mask status */
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & ul_int_que_done) == ul_int_que_done,
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev,
+ "Could not mask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+ value);
+}
+
+void dpmaif_unmask_ulq_interrupt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk;
+ struct dpmaif_hw_info *hw_info;
+ u32 value, ul_int_que_done;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ isr_en_msk = &hw_info->isr_en_mask;
+ ul_int_que_done = BIT(q_num + _UL_INT_DONE_OFFSET) & DPMAIF_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk |= ul_int_que_done;
+ iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+
+ /* check mask status */
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & ul_int_que_done) != ul_int_que_done,
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev,
+ "Could not unmask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+ value);
+}
+
+static void dpmaif_mask_dl_batcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DPMAIF_DL_INT_BATCNT_LEN_ERR;
+ iowrite32(DPMAIF_DL_INT_BATCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+}
+
+void dpmaif_unmask_dl_batcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DPMAIF_DL_INT_BATCNT_LEN_ERR;
+ iowrite32(DPMAIF_DL_INT_BATCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+static void dpmaif_mask_dl_pitcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DPMAIF_DL_INT_PITCNT_LEN_ERR;
+ iowrite32(DPMAIF_DL_INT_PITCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+}
+
+void dpmaif_unmask_dl_pitcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DPMAIF_DL_INT_PITCNT_LEN_ERR;
+ iowrite32(DPMAIF_DL_INT_PITCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+static u32 update_dlq_interrupt(struct dpmaif_hw_info *hw_info, u32 q_done)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+ iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+
+ return value;
+}
+
+static int mask_dlq_interrupt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char qno)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 value, q_done;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ q_done = (qno == DPF_RX_QNO0) ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+ iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+
+ /* check mask status */
+ ret = read_poll_timeout_atomic(update_dlq_interrupt, value, value & q_done,
+ 0, DPMAIF_CHECK_TIMEOUT_US, false, hw_info, q_done);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "mask_dl_dlq0 check timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~q_done;
+
+ return 0;
+}
+
+void dpmaif_hw_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ u32 mask;
+
+ mask = (qno == DPF_RX_QNO0) ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+ iowrite32(mask, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= mask;
+}
+
+void dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info)
+{
+ u32 ip_busy_sts = ioread32(hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+
+ iowrite32(ip_busy_sts, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+}
+
+static void dpmaif_dlq_mask_rx_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+ unsigned char qno)
+{
+ if (qno == DPF_RX_QNO0)
+ iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ else
+ iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+}
+
+void dpmaif_dlq_unmask_rx_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ if (qno == DPF_RX_QNO0)
+ iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+ else
+ iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+void dpmaif_clr_ul_all_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+}
+
+void dpmaif_clr_dl_all_interrupt(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+}
+
+/* para->intr_cnt counter is set to zero before this function is called.
+ * It is not checked for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void dpmaif_hw_check_tx_interrupt(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned int l2_txisar0,
+ struct dpmaif_hw_intr_st_para *para)
+{
+ unsigned int index;
+ unsigned int value;
+
+ value = l2_txisar0 & DP_UL_INT_QDONE_MSK;
+ if (value) {
+ unsigned long pending = (value >> DP_UL_INT_DONE_OFFSET) & DP_UL_QNUM_MSK;
+
+ para->intr_types[para->intr_cnt] = DPF_INTR_UL_DONE;
+ para->intr_queues[para->intr_cnt] = pending;
+
+ /* mask TX queue interrupt immediately after it occurs */
+ for_each_set_bit(index, &pending, DPMAIF_TXQ_NUM)
+ dpmaif_mask_ulq_interrupt(dpmaif_ctrl, index);
+
+ para->intr_cnt++;
+ }
+
+ value = l2_txisar0 & DP_UL_INT_EMPTY_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_UL_DRB_EMPTY;
+ para->intr_queues[para->intr_cnt] =
+ (value >> DP_UL_INT_EMPTY_OFFSET) & DP_UL_QNUM_MSK;
+ para->intr_cnt++;
+ }
+
+ value = l2_txisar0 & DP_UL_INT_MD_NOTREADY_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_UL_MD_NOTREADY;
+ para->intr_queues[para->intr_cnt] =
+ (value >> DP_UL_INT_MD_NOTRDY_OFFSET) & DP_UL_QNUM_MSK;
+ para->intr_cnt++;
+ }
+
+ value = l2_txisar0 & DP_UL_INT_MD_PWR_NOTREADY_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_UL_MD_PWR_NOTREADY;
+ para->intr_queues[para->intr_cnt] =
+ (value >> DP_UL_INT_PWR_NOTRDY_OFFSET) & DP_UL_QNUM_MSK;
+ para->intr_cnt++;
+ }
+
+ value = l2_txisar0 & DP_UL_INT_ERR_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_UL_LEN_ERR;
+ para->intr_queues[para->intr_cnt] =
+ (value >> DP_UL_INT_LEN_ERR_OFFSET) & DP_UL_QNUM_MSK;
+ para->intr_cnt++;
+ }
+}
+
+/* para->intr_cnt counter is set to zero before this function is called.
+ * It is not checked for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void dpmaif_hw_check_rx_interrupt(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned int *pl2_rxisar0,
+ struct dpmaif_hw_intr_st_para *para,
+ int qno)
+{
+ unsigned int l2_rxisar0 = *pl2_rxisar0;
+ struct dpmaif_hw_info *hw_info;
+ unsigned int value;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ if (qno == DPF_RX_QNO_DFT) {
+ value = l2_rxisar0 & DP_DL_INT_SKB_LEN_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_SKB_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_BATCNT_LEN_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_BATCNT_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+
+ dpmaif_mask_dl_batcnt_len_err_interrupt(hw_info);
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_PITCNT_LEN_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_PITCNT_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+
+ dpmaif_mask_dl_pitcnt_len_err_interrupt(hw_info);
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_PKT_EMPTY_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_PKT_EMPTY_SET;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_FRG_EMPTY_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_FRG_EMPTY_SET;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_MTU_ERR_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_MTU_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_FRG_LENERR_MSK;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_FRGCNT_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_DLQ0_PITCNT_LEN_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_DL_INT_DLQ0_PITCNT_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = BIT(qno);
+ para->intr_cnt++;
+
+ dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_HPC_ENT_TYPE_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_DL_INT_HPC_ENT_TYPE_ERR;
+ para->intr_queues[para->intr_cnt] = DPF_RX_QNO_DFT;
+ para->intr_cnt++;
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_DLQ0_QDONE_SET;
+ if (value) {
+ /* mask RX done interrupt immediately after it occurs */
+ if (!mask_dlq_interrupt(dpmaif_ctrl, qno)) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_DLQ0_DONE;
+ para->intr_queues[para->intr_cnt] = BIT(qno);
+ para->intr_cnt++;
+ } else {
+ /* Unable to clear the interrupt; try again on the next one.
+ * The device may have entered low power mode or suffered an exception.
+ */
+ *pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_DLQ0_QDONE_SET;
+ }
+ }
+ } else {
+ value = l2_rxisar0 & DP_DL_INT_DLQ1_PITCNT_LEN_ERR;
+ if (value) {
+ para->intr_types[para->intr_cnt] = DPF_DL_INT_DLQ1_PITCNT_LEN_ERR;
+ para->intr_queues[para->intr_cnt] = BIT(qno);
+ para->intr_cnt++;
+
+ dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+ }
+
+ value = l2_rxisar0 & DP_DL_INT_DLQ1_QDONE_SET;
+ if (value) {
+ /* mask RX done interrupt immediately after it occurs */
+ if (!mask_dlq_interrupt(dpmaif_ctrl, qno)) {
+ para->intr_types[para->intr_cnt] = DPF_INTR_DL_DLQ1_DONE;
+ para->intr_queues[para->intr_cnt] = BIT(qno);
+ para->intr_cnt++;
+ } else {
+ /* Unable to clear the interrupt; try again on the next one.
+ * The device may have entered low power mode or suffered an exception.
+ */
+ *pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_DLQ1_QDONE_SET;
+ }
+ }
+ }
+}
+
+/**
+ * dpmaif_hw_get_interrupt_status() - Gets interrupt status from HW
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl
+ * @para: Pointer to struct dpmaif_hw_intr_st_para
+ * @qno: Queue number
+ *
+ * Gets RX/TX interrupt and checks DL/UL interrupt status.
+ * Clears interrupt statuses if needed.
+ *
+ * Return: Interrupt Count
+ */
+int dpmaif_hw_get_interrupt_status(struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_hw_intr_st_para *para, int qno)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 l2_ri_sar0, l2_ti_sar0 = 0;
+ u32 l2_rim_r0, l2_tim_r0 = 0;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ /* get RX interrupt status from HW */
+ l2_ri_sar0 = ioread32(hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+ l2_rim_r0 = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0);
+
+ /* TX interrupt status */
+ if (qno == DPF_RX_QNO_DFT) {
+ /* All ULQ and DLQ0 interrupts use the same source,
+ * so there is no need to check ULQ interrupts when
+ * a DLQ1 interrupt has occurred.
+ */
+ l2_ti_sar0 = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ l2_tim_r0 = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+ }
+
+ /* clear the IP busy register, which may have woken up the CPU */
+ dpmaif_clr_ip_busy_sts(hw_info);
+
+ /* check UL interrupt status */
+ if (qno == DPF_RX_QNO_DFT) {
+ /* Do not schedule bottom half again or clear UL
+ * interrupt status when we have already masked it.
+ */
+ l2_ti_sar0 &= ~l2_tim_r0;
+ if (l2_ti_sar0) {
+ dpmaif_hw_check_tx_interrupt(dpmaif_ctrl, l2_ti_sar0, para);
+
+ /* clear interrupt status */
+ iowrite32(l2_ti_sar0, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ }
+ }
+
+ /* check DL interrupt status */
+ if (l2_ri_sar0) {
+ if (qno == DPF_RX_QNO0) {
+ l2_ri_sar0 &= DP_DL_DLQ0_STATUS_MASK;
+
+ if (l2_rim_r0 & DPMAIF_DL_INT_DLQ0_QDONE)
+ /* Do not schedule bottom half again or clear DL
+ * queue done interrupt status when we have already masked it.
+ */
+ l2_ri_sar0 &= ~DP_DL_INT_DLQ0_QDONE_SET;
+ } else {
+ l2_ri_sar0 &= DP_DL_DLQ1_STATUS_MASK;
+
+ if (l2_rim_r0 & DPMAIF_DL_INT_DLQ1_QDONE)
+ /* Do not schedule bottom half again or clear DL
+ * queue done interrupt status when we have already masked it.
+ */
+ l2_ri_sar0 &= ~DP_DL_INT_DLQ1_QDONE_SET;
+ }
+
+ /* have interrupt event */
+ if (l2_ri_sar0) {
+ dpmaif_hw_check_rx_interrupt(dpmaif_ctrl, &l2_ri_sar0, para, qno);
+ /* always clear BATCNT_ERR */
+ l2_ri_sar0 |= DP_DL_INT_BATCNT_LEN_ERR;
+
+ /* clear interrupt status */
+ iowrite32(l2_ri_sar0, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+ }
+ }
+
+ return para->intr_cnt;
+}
+
+static int dpmaif_sram_init(struct dpmaif_hw_info *hw_info)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+ value |= DPMAIF_MEM_CLR;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+
+ return readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_AP_MEM_CLR,
+ value, !(value & DPMAIF_MEM_CLR),
+ 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+}
+
+static void dpmaif_hw_reset(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_ASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_ASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_DEASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_DEASSERT);
+ udelay(2);
+}
+
+static int dpmaif_hw_config(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+ int ret;
+
+ dpmaif_hw_reset(hw_info);
+ ret = dpmaif_sram_init(hw_info);
+ if (ret)
+ return ret;
+
+ /* Set DPMAIF AP port mode */
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value |= DPMAIF_PORT_MODE_PCIE;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+
+ iowrite32(DPMAIF_CG_EN, hw_info->pcie_base + DPMAIF_AP_CG_EN);
+
+ return 0;
+}
+
+static inline void dpmaif_pcie_dpmaif_sign(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_PCIE_MODE_SET_VALUE, hw_info->pcie_base + DPMAIF_UL_RESERVE_AO_RW);
+}
+
+static void dpmaif_dl_performance(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ /* BAT cache enable */
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ value |= DPMAIF_DL_BAT_CACHE_PRI;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+ /* PIT burst enable */
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value |= DPMAIF_DL_BURST_PIT_EN;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void dpmaif_common_hw_init(struct dpmaif_hw_info *hw_info)
+{
+ dpmaif_pcie_dpmaif_sign(hw_info);
+ dpmaif_dl_performance(hw_info);
+}
+
+ /* DPMAIF DL DLQ part HW setting */
+
+static inline void dpmaif_hw_hpc_cntl_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = DPMAIF_HPC_DLQ_PATH_MODE | DPMAIF_HPC_ADD_MODE_DF << 2;
+ value |= DPMAIF_HASH_PRIME_DF << 4;
+ value |= DPMAIF_HPC_TOTAL_NUM << 8;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_HPC_CNTL);
+}
+
+static inline void dpmaif_hw_agg_cfg_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = DPMAIF_AGG_MAX_LEN_DF | DPMAIF_AGG_TBL_ENT_NUM_DF << 16;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_DLQ_AGG_CFG);
+}
+
+static inline void dpmaif_hw_hash_bit_choose_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_DLQ_HASH_BIT_CHOOSE_DF,
+ hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_INIT_CON5);
+}
+
+static inline void dpmaif_hw_mid_pit_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_MID_TIMEOUT_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT0);
+}
+
+static void dpmaif_hw_dlq_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value, i;
+
+ /* each register holds two DLQ threshold timeout values */
+ for (i = 0; i < DPMAIF_HPC_MAX_TOTAL_NUM / 2; i++) {
+ value = FIELD_PREP(DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS, DPMAIF_DLQ_TIMEOUT_THRES_DF);
+ value |= FIELD_PREP(DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK,
+ DPMAIF_DLQ_TIMEOUT_THRES_DF);
+ iowrite32(value,
+ hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT1 + sizeof(u32) * i);
+ }
+}
+
+static inline void dpmaif_hw_dlq_start_prs_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_DLQ_PRS_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TRIG_THRES);
+}
+
+static void dpmaif_dl_dlq_hpc_hw_init(struct dpmaif_hw_info *hw_info)
+{
+ dpmaif_hw_hpc_cntl_set(hw_info);
+ dpmaif_hw_agg_cfg_set(hw_info);
+ dpmaif_hw_hash_bit_choose_set(hw_info);
+ dpmaif_hw_mid_pit_timeout_thres_set(hw_info);
+ dpmaif_hw_dlq_timeout_thres_set(hw_info);
+ dpmaif_hw_dlq_start_prs_thres_set(hw_info);
+}
+
+static int dpmaif_dl_bat_init_done(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, bool frg_en)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 value, dl_bat_init = 0;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ if (frg_en)
+ dl_bat_init = DPMAIF_DL_BAT_FRG_INIT;
+
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_ALLSET;
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY),
+ 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "data plane modem DL BAT is not ready\n");
+ return ret;
+ }
+
+ iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY),
+ 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev, "data plane modem DL BAT initialization failed\n");
+
+ return ret;
+}
+
+static void dpmaif_dl_set_bat_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON0);
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON3);
+}
+
+static void dpmaif_dl_set_bat_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ value &= ~DPMAIF_BAT_SIZE_MSK;
+ value |= size & DPMAIF_BAT_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void dpmaif_dl_bat_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool enable)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ if (enable)
+ value |= DPMAIF_BAT_EN_MSK;
+ else
+ value &= ~DPMAIF_BAT_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void dpmaif_dl_set_ao_bid_maxcnt(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int cnt)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+ value &= ~DPMAIF_BAT_BID_MAXCNT_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_BID_MAXCNT_MSK, cnt);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static inline void dpmaif_dl_set_ao_mtu(struct dpmaif_hw_info *hw_info, unsigned int mtu_sz)
+{
+ iowrite32(mtu_sz, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON1);
+}
+
+static void dpmaif_dl_set_ao_pit_chknum(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ unsigned int number)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_PIT_CHK_NUM_MSK;
+ value |= FIELD_PREP(DPMAIF_PIT_CHK_NUM_MSK, number);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void dpmaif_dl_set_ao_remain_minsz(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ size_t min_sz)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+ value &= ~DPMAIF_BAT_REMAIN_MINSZ_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_REMAIN_MINSZ_MSK, min_sz / DPMAIF_BAT_REMAIN_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static void dpmaif_dl_set_ao_bat_bufsz(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, size_t buf_sz)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_BAT_BUF_SZ_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_BUF_SZ_MSK, buf_sz / DPMAIF_BAT_BUFFER_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void dpmaif_dl_set_ao_bat_rsv_length(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ unsigned int length)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_BAT_RSV_LEN_MSK;
+ value |= length & DPMAIF_BAT_RSV_LEN_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void dpmaif_dl_set_pkt_alignment(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ bool enable, unsigned int mode)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value &= ~DPMAIF_PKT_ALIGN_MSK;
+ if (enable) {
+ value |= DPMAIF_PKT_ALIGN_EN;
+ value |= FIELD_PREP(DPMAIF_PKT_ALIGN_MSK, mode);
+ }
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void dpmaif_dl_set_pkt_checksum(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value |= DPMAIF_DL_PKT_CHECKSUM_EN;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void dpmaif_dl_set_ao_frg_check_thres(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+ value &= ~DPMAIF_FRG_CHECK_THRES_MSK;
+ value |= (size & DPMAIF_FRG_CHECK_THRES_MSK);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void dpmaif_dl_set_ao_frg_bufsz(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int buf_sz)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+ value &= ~DPMAIF_FRG_BUF_SZ_MSK;
+ value |= FIELD_PREP(DPMAIF_FRG_BUF_SZ_MSK, buf_sz / DPMAIF_FRG_BUFFER_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void dpmaif_dl_frg_ao_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool enable)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+ if (enable)
+ value |= DPMAIF_FRG_EN_MSK;
+ else
+ value &= ~DPMAIF_FRG_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void dpmaif_dl_set_ao_bat_check_thres(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value &= ~DPMAIF_BAT_CHECK_THRES_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_CHECK_THRES_MSK, size);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void dpmaif_dl_set_pit_seqnum(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int seq)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+ value &= ~DPMAIF_DL_PIT_SEQ_MSK;
+ value |= seq & DPMAIF_DL_PIT_SEQ_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+}
+
+static void dpmaif_dl_set_dlq_pit_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON0);
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON4);
+}
+
+static void dpmaif_dl_set_dlq_pit_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+ value &= ~DPMAIF_PIT_SIZE_MSK;
+ value |= size & DPMAIF_PIT_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON2);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON5);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON6);
+}
+
+static void dpmaif_dl_dlq_pit_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool enable)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+ if (enable)
+ value |= DPMAIF_DLQPIT_EN_MSK;
+ else
+ value &= ~DPMAIF_DLQPIT_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+}
+
+static void dpmaif_dl_dlq_pit_init_done(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int pit_idx)
+{
+ struct dpmaif_hw_info *hw_info;
+ unsigned int dl_pit_init;
+ u32 value;
+ int timeout;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ dl_pit_init = DPMAIF_DL_PIT_INIT_ALLSET;
+ dl_pit_init |= (pit_idx << DPMAIF_DLQPIT_CHAN_OFS);
+ dl_pit_init |= DPMAIF_DL_PIT_INIT_EN;
+
+ timeout = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+ value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+ DPMAIF_CHECK_DELAY_US, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (timeout) {
+ dev_err(dpmaif_ctrl->dev, "data plane modem DL PIT is not ready\n");
+ return;
+ }
+
+ iowrite32(dl_pit_init, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT);
+
+ timeout = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+ value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+ DPMAIF_CHECK_DELAY_US, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (timeout)
+ dev_err(dpmaif_ctrl->dev, "data plane modem DL PIT initialization failed\n");
+}
+
+static void dpmaif_config_dlq_pit_hw(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ struct dpmaif_dl *dl_que)
+{
+ struct dpmaif_hw_info *hw_info;
+ unsigned int pit_idx = q_num;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ dpmaif_dl_set_dlq_pit_base_addr(hw_info, q_num, dl_que->pit_base);
+ dpmaif_dl_set_dlq_pit_size(hw_info, q_num, dl_que->pit_size_cnt);
+ dpmaif_dl_dlq_pit_en(hw_info, q_num, true);
+ dpmaif_dl_dlq_pit_init_done(dpmaif_ctrl, q_num, pit_idx);
+}
+
+static void dpmaif_config_all_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_info *hw_info;
+ int i;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+ dpmaif_config_dlq_pit_hw(dpmaif_ctrl, i, &hw_info->dl_que[i]);
+}
+
+static void dpmaif_dl_all_queue_en(struct dpmaif_ctrl *dpmaif_ctrl, bool enable)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 dl_bat_init, value;
+ int timeout;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+ if (enable)
+ value |= DPMAIF_BAT_EN_MSK;
+ else
+ value &= ~DPMAIF_BAT_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ dl_bat_init = DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT;
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+ /* update DL BAT setting to HW */
+ timeout = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (timeout)
+ dev_err(dpmaif_ctrl->dev, "timeout updating BAT setting to HW\n");
+
+ iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+
+ /* wait for HW update */
+ timeout = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (timeout)
+ dev_err(dpmaif_ctrl->dev, "data plane modem DL BAT is not ready\n");
+}
+
+static int dpmaif_config_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_info *hw_info;
+ struct dpmaif_dl_hwq *dl_hw;
+ struct dpmaif_dl *dl_que;
+ unsigned int queue = 0; /* all queues share one BAT/frag BAT table */
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ dpmaif_dl_dlq_hpc_hw_init(hw_info);
+
+ dl_que = &hw_info->dl_que[queue];
+ dl_hw = &hw_info->dl_que_hw[queue];
+ if (!dl_que->que_started)
+ return -EBUSY;
+
+ dpmaif_dl_set_ao_remain_minsz(hw_info, queue, dl_hw->bat_remain_size);
+ dpmaif_dl_set_ao_bat_bufsz(hw_info, queue, dl_hw->bat_pkt_bufsz);
+ dpmaif_dl_set_ao_frg_bufsz(hw_info, queue, dl_hw->frg_pkt_bufsz);
+ dpmaif_dl_set_ao_bat_rsv_length(hw_info, queue, dl_hw->bat_rsv_length);
+ dpmaif_dl_set_ao_bid_maxcnt(hw_info, queue, dl_hw->pkt_bid_max_cnt);
+
+ if (dl_hw->pkt_alignment == 64)
+ dpmaif_dl_set_pkt_alignment(hw_info, queue, true, DPMAIF_PKT_ALIGN64_MODE);
+ else if (dl_hw->pkt_alignment == 128)
+ dpmaif_dl_set_pkt_alignment(hw_info, queue, true, DPMAIF_PKT_ALIGN128_MODE);
+ else
+ dpmaif_dl_set_pkt_alignment(hw_info, queue, false, 0);
+
+ dpmaif_dl_set_pit_seqnum(hw_info, queue, DPMAIF_DL_PIT_SEQ_VALUE);
+ dpmaif_dl_set_ao_mtu(hw_info, dl_hw->mtu_size);
+ dpmaif_dl_set_ao_pit_chknum(hw_info, queue, dl_hw->chk_pit_num);
+ dpmaif_dl_set_ao_bat_check_thres(hw_info, queue, dl_hw->chk_bat_num);
+ dpmaif_dl_set_ao_frg_check_thres(hw_info, queue, dl_hw->chk_frg_num);
+ /* enable frag feature */
+ dpmaif_dl_frg_ao_en(hw_info, queue, true);
+ /* frag init uses the same BAT registers */
+ dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->frg_base);
+ dpmaif_dl_set_bat_size(hw_info, queue, dl_que->frg_size_cnt);
+ dpmaif_dl_bat_en(hw_info, queue, true);
+ ret = dpmaif_dl_bat_init_done(dpmaif_ctrl, queue, true);
+ if (ret)
+ return ret;
+
+ /* normal BAT init */
+ dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->bat_base);
+ dpmaif_dl_set_bat_size(hw_info, queue, dl_que->bat_size_cnt);
+ /* keep BAT disabled here; it is enabled for all queues in dpmaif_dl_all_queue_en() */
+ dpmaif_dl_bat_en(hw_info, queue, false);
+ /* notify HW init/setting done */
+ ret = dpmaif_dl_bat_init_done(dpmaif_ctrl, queue, false);
+ if (ret)
+ return ret;
+
+ /* init PIT (two PIT table) */
+ dpmaif_config_all_dlq_hw(dpmaif_ctrl);
+ dpmaif_dl_all_queue_en(dpmaif_ctrl, true);
+
+ dpmaif_dl_set_pkt_checksum(hw_info);
+
+ return ret;
+}
+
+static void dpmaif_ul_update_drb_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+ value &= ~DPMAIF_DRB_SIZE_MSK;
+ value |= size & DPMAIF_DRB_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+}
+
+static void dpmaif_ul_update_drb_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_ULQSAR_n(q_num));
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_UL_DRB_ADDRH_n(q_num));
+}
+
+static void dpmaif_ul_rdy_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool ready)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+ if (ready)
+ value |= BIT(q_num);
+ else
+ value &= ~BIT(q_num);
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void dpmaif_ul_arb_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool enable)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+ if (enable)
+ value |= BIT(q_num + 8);
+ else
+ value &= ~BIT(q_num + 8);
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void dpmaif_config_ulq_hw(struct dpmaif_hw_info *hw_info)
+{
+ struct dpmaif_ul *ul_que;
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ ul_que = &hw_info->ul_que[i];
+ if (ul_que->que_started) {
+ dpmaif_ul_update_drb_size(hw_info, i, ul_que->drb_size_cnt *
+ DPMAIF_UL_DRB_ENTRY_WORD);
+ dpmaif_ul_update_drb_base_addr(hw_info, i, ul_que->drb_base);
+ dpmaif_ul_rdy_en(hw_info, i, true);
+ dpmaif_ul_arb_en(hw_info, i, true);
+ } else {
+ dpmaif_ul_arb_en(hw_info, i, false);
+ }
+ }
+}
+
+static int dpmaif_hw_init_done(struct dpmaif_hw_info *hw_info)
+{
+ u32 value;
+ int ret;
+
+ /* sync default value to SRAM */
+ value = ioread32(hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+ value |= DPMAIF_SRAM_SYNC;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+
+ ret = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG,
+ value, !(value & DPMAIF_SRAM_SYNC),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ iowrite32(DPMAIF_UL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_UL_INIT_SET);
+ iowrite32(DPMAIF_DL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_DL_INIT_SET);
+ return 0;
+}
+
+static int dpmaif_config_que_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_info *hw_info;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ dpmaif_common_hw_init(hw_info);
+ ret = dpmaif_config_dlq_hw(dpmaif_ctrl);
+ if (ret)
+ return ret;
+
+ dpmaif_config_ulq_hw(hw_info);
+ return dpmaif_hw_init_done(hw_info);
+}
+
+static inline bool dpmaif_dl_idle_check(struct dpmaif_hw_info *hw_info)
+{
+ u32 value = ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY);
+
+ /* If total queue is idle, then all DL is idle.
+ * 0: idle, 1: busy
+ */
+ return !(value & DPMAIF_DL_IDLE_STS);
+}
+
+static void dpmaif_ul_all_queue_en(struct dpmaif_hw_info *hw_info, bool enable)
+{
+ u32 ul_arb_en = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+ if (enable)
+ ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN;
+ else
+ ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN;
+
+ iowrite32(ul_arb_en, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static inline int dpmaif_ul_idle_check(struct dpmaif_hw_info *hw_info)
+{
+ u32 value = ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY);
+
+ /* If total queue is idle, then all UL is idle.
+ * 0: idle, 1: busy
+ */
+ return !(value & DPMAIF_UL_IDLE_STS);
+}
+
+ /* DPMAIF UL Part HW setting */
+
+int dpmaif_ul_add_wcnt(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int drb_entry_cnt)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 ul_update, value;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ ul_update = drb_entry_cnt & DPMAIF_UL_ADD_COUNT_MASK;
+ ul_update |= DPMAIF_UL_ADD_UPDATE;
+
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+ value, !(value & DPMAIF_UL_ADD_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "UL add is not ready\n");
+ return ret;
+ }
+
+ iowrite32(ul_update, hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num));
+
+ ret = readx_poll_timeout_atomic(ioread32,
+ hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+ value, !(value & DPMAIF_UL_ADD_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "timeout updating UL add\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+unsigned int dpmaif_ul_get_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ unsigned int value = ioread32(hw_info->pcie_base + DPMAIF_ULQ_STA0_n(q_num));
+
+ return value >> DPMAIF_UL_DRB_RIDX_OFFSET;
+}
+
+int dpmaif_dl_add_dlq_pit_remain_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int dlq_pit_idx,
+ unsigned int pit_remain_cnt)
+{
+ struct dpmaif_hw_info *hw_info;
+ u32 dl_update, value;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ dl_update = pit_remain_cnt & DPMAIF_PIT_REM_CNT_MSK;
+ dl_update |= DPMAIF_DL_ADD_UPDATE | (dlq_pit_idx << DPMAIF_ADD_DLQ_PIT_CHAN_OFS);
+
+ ret = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "data plane modem is not ready to add dlq\n");
+ return ret;
+ }
+
+ iowrite32(dl_update, hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD);
+
+ ret = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "data plane modem add dlq failed\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+unsigned int dpmaif_dl_dlq_pit_get_wridx(struct dpmaif_hw_info *hw_info, unsigned int dlq_pit_idx)
+{
+ return ioread32(hw_info->pcie_base + DPMAIF_AO_DL_DLQ_WRIDX +
+ dlq_pit_idx * DLQ_PIT_IDX_SIZE) & DPMAIF_DL_PIT_WRIDX_MSK;
+}
+
+static bool dl_add_timedout(struct dpmaif_hw_info *hw_info)
+{
+ u32 value;
+ int ret;
+
+ ret = readx_poll_timeout_atomic(ioread32, hw_info->pcie_base + DPMAIF_DL_BAT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY),
+ 0, DPMAIF_CHECK_TIMEOUT_US);
+
+ if (ret)
+ return true;
+
+ return false;
+}
+
+int dpmaif_dl_add_bat_cnt(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int bat_entry_cnt)
+{
+ struct dpmaif_hw_info *hw_info;
+ unsigned int value;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ value = bat_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+ value |= DPMAIF_DL_ADD_UPDATE;
+
+ if (dl_add_timedout(hw_info)) {
+ dev_err(dpmaif_ctrl->dev, "DL add BAT not ready. count: %u queue: %dn",
+ bat_entry_cnt, q_num);
+ return -EBUSY;
+ }
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+ if (dl_add_timedout(hw_info)) {
+ dev_err(dpmaif_ctrl->dev, "DL add BAT timeout. count: %u queue: %dn",
+ bat_entry_cnt, q_num);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+unsigned int dpmaif_dl_get_bat_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ return ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_RIDX) & DPMAIF_DL_BAT_WRIDX_MSK;
+}
+
+unsigned int dpmaif_dl_get_bat_wridx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ return ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_WRIDX) & DPMAIF_DL_BAT_WRIDX_MSK;
+}
+
+int dpmaif_dl_add_frg_cnt(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int frg_entry_cnt)
+{
+ struct dpmaif_hw_info *hw_info;
+ unsigned int value;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+ value = frg_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+ value |= DPMAIF_DL_FRG_ADD_UPDATE;
+ value |= DPMAIF_DL_ADD_UPDATE;
+
+ if (dl_add_timedout(hw_info)) {
+ dev_err(dpmaif_ctrl->dev, "Data plane modem is not ready to add frag DLQ\n");
+ return -EBUSY;
+ }
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+ if (dl_add_timedout(hw_info)) {
+ dev_err(dpmaif_ctrl->dev, "Data plane modem add frag DLQ failed");
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+unsigned int dpmaif_dl_get_frg_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ return ioread32(hw_info->pcie_base + DPMAIF_AO_DL_FRGBAT_WRIDX) & DPMAIF_DL_FRG_WRIDX_MSK;
+}
+
+static void dpmaif_set_queue_property(struct dpmaif_hw_info *hw_info,
+ struct dpmaif_hw_params *init_para)
+{
+ struct dpmaif_dl_hwq *dl_hwq;
+ struct dpmaif_dl *dl_que;
+ struct dpmaif_ul *ul_que;
+ int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ dl_hwq = &hw_info->dl_que_hw[i];
+ dl_hwq->bat_remain_size = DPMAIF_HW_BAT_REMAIN;
+ dl_hwq->bat_pkt_bufsz = DPMAIF_HW_BAT_PKTBUF;
+ dl_hwq->frg_pkt_bufsz = DPMAIF_HW_FRG_PKTBUF;
+ dl_hwq->bat_rsv_length = DPMAIF_HW_BAT_RSVLEN;
+ dl_hwq->pkt_bid_max_cnt = DPMAIF_HW_PKT_BIDCNT;
+ dl_hwq->pkt_alignment = DPMAIF_HW_PKT_ALIGN;
+ dl_hwq->mtu_size = DPMAIF_HW_MTU_SIZE;
+ dl_hwq->chk_bat_num = DPMAIF_HW_CHK_BAT_NUM;
+ dl_hwq->chk_frg_num = DPMAIF_HW_CHK_FRG_NUM;
+ dl_hwq->chk_pit_num = DPMAIF_HW_CHK_PIT_NUM;
+
+ dl_que = &hw_info->dl_que[i];
+ dl_que->bat_base = init_para->pkt_bat_base_addr[i];
+ dl_que->bat_size_cnt = init_para->pkt_bat_size_cnt[i];
+ dl_que->pit_base = init_para->pit_base_addr[i];
+ dl_que->pit_size_cnt = init_para->pit_size_cnt[i];
+ dl_que->frg_base = init_para->frg_bat_base_addr[i];
+ dl_que->frg_size_cnt = init_para->frg_bat_size_cnt[i];
+ dl_que->que_started = true;
+ }
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ ul_que = &hw_info->ul_que[i];
+ ul_que->drb_base = init_para->drb_base_addr[i];
+ ul_que->drb_size_cnt = init_para->drb_size_cnt[i];
+ ul_que->que_started = true;
+ }
+}
+
+/**
+ * dpmaif_hw_stop_tx_queue() - Stop all TX queues
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl
+ *
+ * Disables the HW UL queues and polls the busy UL queues until they
+ * go idle, giving up after 1000000 attempts.
+ *
+ * Return: 0 on success and -ETIMEDOUT upon a timeout
+ */
+int dpmaif_hw_stop_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_info *hw_info;
+ int count = 0;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ /* Disable the HW UL arbitration queues and wait for them to go idle */
+ dpmaif_ul_all_queue_en(hw_info, false);
+ while (dpmaif_ul_idle_check(hw_info)) {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(dpmaif_ctrl->dev, "Stop TX failed, 0x%x\n",
+ ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY));
+ return -ETIMEDOUT;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * dpmaif_hw_stop_rx_queue() - Stop all RX queues
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl
+ *
+ * Disables the HW DL queues and polls the busy DL queues until they
+ * go idle, giving up after 1000000 attempts.
+ * Then polls until the HW PIT write index equals the read index,
+ * using the same attempt count.
+ *
+ * Return: 0 on success and -ETIMEDOUT upon a timeout
+ */
+int dpmaif_hw_stop_rx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_info *hw_info;
+ unsigned int wridx;
+ unsigned int ridx;
+ int count = 0;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ /* Disable the HW DL queues and wait for them to go idle */
+ dpmaif_dl_all_queue_en(dpmaif_ctrl, false);
+ while (dpmaif_dl_idle_check(hw_info)) {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(dpmaif_ctrl->dev, "stop RX failed, 0x%x\n",
+ ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY));
+ return -ETIMEDOUT;
+ }
+ }
+
+ /* check middle PIT sync done */
+ count = 0;
+ do {
+ wridx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_WRIDX) &
+ DPMAIF_DL_PIT_WRIDX_MSK;
+ ridx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_RIDX) &
+ DPMAIF_DL_PIT_WRIDX_MSK;
+ if (wridx == ridx)
+ return 0;
+
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(dpmaif_ctrl->dev, "check middle PIT sync fail\n");
+ break;
+ }
+ } while (1);
+
+ return -ETIMEDOUT;
+}
+
+void dpmaif_start_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_ul_all_queue_en(&dpmaif_ctrl->hif_hw_info, true);
+ dpmaif_dl_all_queue_en(dpmaif_ctrl, true);
+}
+
+/**
+ * dpmaif_hw_init() - Initialize HW data path API
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl
+ * @init_param: Pointer to struct dpmaif_hw_params
+ *
+ * Configures the port mode and clock, initializes the HW interrupts,
+ * and configures the HW queues.
+ *
+ * Return: 0 on success or a negative error code on failure
+ */
+int dpmaif_hw_init(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_hw_params *init_param)
+{
+ struct dpmaif_hw_info *hw_info;
+ int ret;
+
+ hw_info = &dpmaif_ctrl->hif_hw_info;
+
+ /* port mode & clock config */
+ ret = dpmaif_hw_config(hw_info);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "sram_init check fail\n");
+ return ret;
+ }
+
+ /* HW interrupt init */
+ ret = dpmaif_init_intr(hw_info);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "init_intr check fail\n");
+ return ret;
+ }
+
+ /* HW queue config */
+ dpmaif_set_queue_property(hw_info, init_param);
+ ret = dpmaif_config_que_hw(dpmaif_ctrl);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev, "DPMAIF hw_init_done check fail\n");
+
+ return ret;
+}
+
+bool dpmaif_hw_check_clr_ul_done_status(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ u32 value = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+
+ value &= BIT(DP_UL_INT_DONE_OFFSET + qno);
+ if (value) {
+ /* clear interrupt status */
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ return true;
+ }
+
+ return false;
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
new file mode 100644
index 000000000000..3289b7cf17ac
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_DPMAIF_H__
+#define __T7XX_DPMAIF_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_dpmaif.h"
+
+#define DPMAIF_DL_PIT_SEQ_VALUE 251
+#define DPMAIF_UL_DRB_BYTE_SIZE 16
+#define DPMAIF_UL_DRB_ENTRY_WORD (DPMAIF_UL_DRB_BYTE_SIZE >> 2)
+
+#define DPMAIF_MAX_CHECK_COUNT 1000000
+#define DPMAIF_CHECK_TIMEOUT_US 10000
+#define DPMAIF_CHECK_INIT_TIMEOUT_US 100000
+#define DPMAIF_CHECK_DELAY_US 10
+
+/* DPMAIF HW Initialization parameter structure */
+struct dpmaif_hw_params {
+ /* UL part */
+ dma_addr_t drb_base_addr[DPMAIF_TXQ_NUM];
+ unsigned int drb_size_cnt[DPMAIF_TXQ_NUM];
+
+ /* DL part */
+ dma_addr_t pkt_bat_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int pkt_bat_size_cnt[DPMAIF_RXQ_NUM];
+ dma_addr_t frg_bat_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int frg_bat_size_cnt[DPMAIF_RXQ_NUM];
+ dma_addr_t pit_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int pit_size_cnt[DPMAIF_RXQ_NUM];
+};
+
+enum dpmaif_hw_intr_type {
+ DPF_INTR_INVALID_MIN = 0,
+
+ DPF_INTR_UL_DONE,
+ DPF_INTR_UL_DRB_EMPTY,
+ DPF_INTR_UL_MD_NOTREADY,
+ DPF_INTR_UL_MD_PWR_NOTREADY,
+ DPF_INTR_UL_LEN_ERR,
+
+ DPF_INTR_DL_DONE,
+ DPF_INTR_DL_SKB_LEN_ERR,
+ DPF_INTR_DL_BATCNT_LEN_ERR,
+ DPF_INTR_DL_PITCNT_LEN_ERR,
+ DPF_INTR_DL_PKT_EMPTY_SET,
+ DPF_INTR_DL_FRG_EMPTY_SET,
+ DPF_INTR_DL_MTU_ERR,
+ DPF_INTR_DL_FRGCNT_LEN_ERR,
+
+ DPF_DL_INT_DLQ0_PITCNT_LEN_ERR,
+ DPF_DL_INT_DLQ1_PITCNT_LEN_ERR,
+ DPF_DL_INT_HPC_ENT_TYPE_ERR,
+ DPF_INTR_DL_DLQ0_DONE,
+ DPF_INTR_DL_DLQ1_DONE,
+
+ DPF_INTR_INVALID_MAX
+};
+
+#define DPF_RX_QNO0 0
+#define DPF_RX_QNO1 1
+#define DPF_RX_QNO_DFT DPF_RX_QNO0
+
+struct dpmaif_hw_intr_st_para {
+ unsigned int intr_cnt;
+ enum dpmaif_hw_intr_type intr_types[DPF_INTR_INVALID_MAX - 1];
+ unsigned int intr_queues[DPF_INTR_INVALID_MAX - 1];
+};
+
+#define DPMAIF_HW_BAT_REMAIN 64
+#define DPMAIF_HW_BAT_PKTBUF (128 * 28)
+#define DPMAIF_HW_FRG_PKTBUF 128
+#define DPMAIF_HW_BAT_RSVLEN 64
+#define DPMAIF_HW_PKT_BIDCNT 1
+#define DPMAIF_HW_PKT_ALIGN 64
+#define DPMAIF_HW_MTU_SIZE (3 * 1024 + 8)
+
+#define DPMAIF_HW_CHK_BAT_NUM 62
+#define DPMAIF_HW_CHK_FRG_NUM 3
+#define DPMAIF_HW_CHK_PIT_NUM (2 * DPMAIF_HW_CHK_BAT_NUM)
+
+/* tx interrupt mask */
+#define DP_UL_INT_DONE_OFFSET 0
+#define DP_UL_INT_EMPTY_OFFSET 5
+#define DP_UL_INT_MD_NOTRDY_OFFSET 10
+#define DP_UL_INT_PWR_NOTRDY_OFFSET 15
+#define DP_UL_INT_LEN_ERR_OFFSET 20
+#define DP_UL_QNUM_MSK 0x1F
+
+#define DP_UL_INT_DONE(q_num) BIT((q_num) + DP_UL_INT_DONE_OFFSET)
+#define DP_UL_INT_EMPTY(q_num) BIT((q_num) + DP_UL_INT_EMPTY_OFFSET)
+#define DP_UL_INT_MD_NOTRDY(q_num) BIT((q_num) + DP_UL_INT_MD_NOTRDY_OFFSET)
+#define DP_UL_INT_PWR_NOTRDY(q_num) BIT((q_num) + DP_UL_INT_PWR_NOTRDY_OFFSET)
+#define DP_UL_INT_LEN_ERR(q_num) BIT((q_num) + DP_UL_INT_LEN_ERR_OFFSET)
+
+#define DP_UL_INT_QDONE_MSK (DP_UL_QNUM_MSK << DP_UL_INT_DONE_OFFSET)
+#define DP_UL_INT_EMPTY_MSK (DP_UL_QNUM_MSK << DP_UL_INT_EMPTY_OFFSET)
+#define DP_UL_INT_MD_NOTREADY_MSK (DP_UL_QNUM_MSK << DP_UL_INT_MD_NOTRDY_OFFSET)
+#define DP_UL_INT_MD_PWR_NOTREADY_MSK (DP_UL_QNUM_MSK << DP_UL_INT_PWR_NOTRDY_OFFSET)
+#define DP_UL_INT_ERR_MSK (DP_UL_QNUM_MSK << DP_UL_INT_LEN_ERR_OFFSET)
+
+/* rx interrupt mask */
+#define DP_DL_INT_QDONE_MSK BIT(0)
+#define DP_DL_INT_SKB_LEN_ERR BIT(1)
+#define DP_DL_INT_BATCNT_LEN_ERR BIT(2)
+#define DP_DL_INT_PITCNT_LEN_ERR BIT(3)
+#define DP_DL_INT_PKT_EMPTY_MSK BIT(4)
+#define DP_DL_INT_FRG_EMPTY_MSK BIT(5)
+#define DP_DL_INT_MTU_ERR_MSK BIT(6)
+#define DP_DL_INT_FRG_LENERR_MSK BIT(7)
+#define DP_DL_INT_DLQ0_PITCNT_LEN_ERR BIT(8)
+#define DP_DL_INT_DLQ1_PITCNT_LEN_ERR BIT(9)
+#define DP_DL_INT_HPC_ENT_TYPE_ERR BIT(10)
+#define DP_DL_INT_DLQ0_QDONE_SET BIT(13)
+#define DP_DL_INT_DLQ1_QDONE_SET BIT(14)
+
+#define DP_DL_DLQ0_STATUS_MASK \
+ (DP_DL_INT_DLQ0_PITCNT_LEN_ERR | DP_DL_INT_DLQ0_QDONE_SET)
+
+#define DP_DL_DLQ1_STATUS_MASK \
+ (DP_DL_INT_DLQ1_PITCNT_LEN_ERR | DP_DL_INT_DLQ1_QDONE_SET)
+
+/* APIs exposed to HIF Layer */
+int dpmaif_hw_init(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_hw_params *init_param);
+int dpmaif_hw_stop_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_hw_stop_rx_queue(struct dpmaif_ctrl *dpmaif_ctrl);
+void dpmaif_start_hw(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_hw_get_interrupt_status(struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_hw_intr_st_para *para, int qno);
+void dpmaif_unmask_ulq_interrupt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num);
+int dpmaif_ul_add_wcnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned int drb_entry_cnt);
+int dpmaif_dl_add_bat_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned int bat_entry_cnt);
+int dpmaif_dl_add_frg_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned int frg_entry_cnt);
+int dpmaif_dl_add_dlq_pit_remain_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int dlq_pit_idx,
+ unsigned int pit_remain_cnt);
+void dpmaif_dlq_unmask_rx_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info, unsigned char qno);
+void dpmaif_hw_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno);
+bool dpmaif_hw_check_clr_ul_done_status(struct dpmaif_hw_info *hw_info, unsigned char qno);
+unsigned int dpmaif_ul_get_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+void dpmaif_clr_ul_all_interrupt(struct dpmaif_hw_info *hw_info);
+void dpmaif_clr_dl_all_interrupt(struct dpmaif_hw_info *hw_info);
+void dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info);
+void dpmaif_unmask_dl_batcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info);
+void dpmaif_unmask_dl_pitcnt_len_err_interrupt(struct dpmaif_hw_info *hw_info);
+unsigned int dpmaif_dl_get_bat_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int dpmaif_dl_get_bat_wridx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int dpmaif_dl_get_frg_ridx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int dpmaif_dl_dlq_pit_get_wridx(struct dpmaif_hw_info *hw_info,
+ unsigned int dlq_pit_idx);
+
+#endif /* __T7XX_DPMAIF_H__ */
--
2.17.1

2021-11-01 03:58:04

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 11/14] net: wwan: t7xx: Runtime PM

From: Haijun Lio <[email protected]>

Enables the runtime power management callbacks, including runtime_suspend
and runtime_resume. Autosuspend is used to avoid the overhead of frequent
wake-ups.
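
For reference, the generic autosuspend pattern applied throughout these
hunks looks roughly like the sketch below. This is only an illustrative
sketch: the example_* names are hypothetical, while the pm_runtime_*
calls are the standard runtime PM API. The actual 20 s delay and the
-EACCES handling for the runtime-PM-disabled case are in the patch itself.

#include <linux/device.h>
#include <linux/pm_runtime.h>

#define EXAMPLE_AUTOSUSPEND_MS 20000	/* the patch uses PM_AUTOSUSPEND_MS */

/* Probe time: opt in to autosuspend with an inactivity delay */
static void example_enable_autosuspend(struct device *dev)
{
        pm_runtime_set_autosuspend_delay(dev, EXAMPLE_AUTOSUSPEND_MS);
        pm_runtime_use_autosuspend(dev);
}

/* Around any HW access: resume, do the work, re-arm the idle timer, put */
static int example_do_hw_work(struct device *dev)
{
        int ret;

        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)		/* the patch also tolerates -EACCES here */
                return ret;

        /* ... touch the hardware ... */

        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
        return 0;
}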

Signed-off-by: Haijun Lio <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Eliot Lee <[email protected]>
Signed-off-by: Eliot Lee <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 +++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 16 ++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 15 +++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.c | 21 +++++++++++++++++++++
4 files changed, 65 insertions(+)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index bcee31a5af12..18c1fcccd9dc 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -22,6 +22,7 @@
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
+#include <linux/pm_runtime.h>
#include <linux/skbuff.h>

#include "t7xx_cldma.h"
@@ -310,6 +311,8 @@ static void cldma_rx_done(struct work_struct *work)
/* enable RX_DONE && EMPTY interrupt */
cldma_hw_dismask_txrxirq(&md_ctrl->hw_info, queue->index, true);
cldma_hw_dismask_eqirq(&md_ctrl->hw_info, queue->index, true);
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
}

static int cldma_gpd_tx_collect(struct cldma_queue *queue)
@@ -451,6 +454,8 @@ static void cldma_tx_done(struct work_struct *work)
}

spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
}

static void cldma_ring_free(struct cldma_ctrl *md_ctrl,
@@ -674,6 +679,7 @@ static void cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
for (i = 0; i < CLDMA_TXQ_NUM; i++) {
if (l2_tx_int & BIT(i)) {
+ pm_runtime_get(md_ctrl->dev);
/* disable TX_DONE interrupt */
cldma_hw_mask_eqirq(hw_info, i, false);
cldma_hw_mask_txrxirq(hw_info, i, false);
@@ -702,6 +708,7 @@ static void cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
for (i = 0; i < CLDMA_RXQ_NUM; i++) {
if (l2_rx_int & (BIT(i) | EQ_STA_BIT(i))) {
+ pm_runtime_get(md_ctrl->dev);
/* disable RX_DONE and QUEUE_EMPTY interrupt */
cldma_hw_mask_eqirq(hw_info, i, true);
cldma_hw_mask_txrxirq(hw_info, i, true);
@@ -1133,8 +1140,12 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
struct cldma_queue *queue;
unsigned long flags;
int ret = 0;
+ int val;

md_ctrl = md_cd_get(hif_id);
+ val = pm_runtime_resume_and_get(md_ctrl->dev);
+ if (val < 0 && val != -EACCES)
+ return val;

if (qno >= CLDMA_TXQ_NUM) {
ret = -EINVAL;
@@ -1199,6 +1210,8 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
} while (!ret);

exit:
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
return ret;
}

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index e4af05441707..ae38fb29ec81 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -22,6 +22,7 @@
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/mm.h>
+#include <linux/pm_runtime.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
@@ -1039,6 +1040,7 @@ static void dpmaif_rxq_work(struct work_struct *work)
{
struct dpmaif_ctrl *dpmaif_ctrl;
struct dpmaif_rx_queue *rxq;
+ int ret;

rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
dpmaif_ctrl = rxq->dpmaif_ctrl;
@@ -1053,8 +1055,14 @@ static void dpmaif_rxq_work(struct work_struct *work)
return;
}

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;
+
dpmaif_do_rx(dpmaif_ctrl, rxq);

+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
atomic_set(&rxq->rx_processing, 0);
}

@@ -1417,9 +1425,14 @@ static void dpmaif_bat_release_work(struct work_struct *work)
{
struct dpmaif_ctrl *dpmaif_ctrl;
struct dpmaif_rx_queue *rxq;
+ int ret;

dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;
+
/* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */
rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];

@@ -1427,6 +1440,9 @@ static void dpmaif_bat_release_work(struct work_struct *work)
dpmaif_dl_pkt_bat_release_and_add(rxq);
/* frag BAT release and add */
dpmaif_dl_frag_bat_release_and_add(rxq);
+
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

int dpmaif_bat_release_work_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index 3ae87761af05..84fc980824e5 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -18,6 +18,7 @@
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/list.h>
+#include <linux/pm_runtime.h>
#include <linux/spinlock.h>

#include "t7xx_common.h"
@@ -167,6 +168,10 @@ static void dpmaif_tx_done(struct work_struct *work)
txq = container_of(work, struct dpmaif_tx_queue, dpmaif_tx_work);
dpmaif_ctrl = txq->dpmaif_ctrl;

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;
+
ret = dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
if (ret == -EAGAIN ||
(dpmaif_hw_check_clr_ul_done_status(&dpmaif_ctrl->hif_hw_info, txq->index) &&
@@ -179,6 +184,9 @@ static void dpmaif_tx_done(struct work_struct *work)
dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
dpmaif_unmask_ulq_interrupt(dpmaif_ctrl, txq->index);
}
+
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

static void set_drb_msg(struct dpmaif_ctrl *dpmaif_ctrl,
@@ -513,6 +521,7 @@ static void do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
static int dpmaif_tx_hw_push_thread(void *arg)
{
struct dpmaif_ctrl *dpmaif_ctrl;
+ int ret;

dpmaif_ctrl = arg;
while (!kthread_should_stop()) {
@@ -528,7 +537,13 @@ static int dpmaif_tx_hw_push_thread(void *arg)
if (kthread_should_stop())
break;

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return ret;
+
do_tx_hw_push(dpmaif_ctrl);
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

return 0;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 5afd8eb4203f..3328a225e20b 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -20,6 +20,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/pm_runtime.h>
#include <linux/spinlock.h>

#include "t7xx_mhccif.h"
@@ -34,6 +35,7 @@
#define PCI_EREG_BASE 2

#define PM_ACK_TIMEOUT_MS 1500
+#define PM_AUTOSUSPEND_MS 20000
#define PM_RESOURCE_POLL_TIMEOUT_US 10000
#define PM_RESOURCE_POLL_STEP_US 100

@@ -78,6 +80,8 @@ static int mtk_pci_pm_init(struct mtk_pci_dev *mtk_dev)
atomic_set(&mtk_dev->md_pm_state, MTK_PM_INIT);

iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, PM_AUTOSUSPEND_MS);
+ pm_runtime_use_autosuspend(&pdev->dev);

return mtk_wait_pm_config(mtk_dev);
}
@@ -92,6 +96,8 @@ void mtk_pci_pm_init_late(struct mtk_pci_dev *mtk_dev)
D2H_INT_RESUME_ACK_AP);
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
atomic_set(&mtk_dev->md_pm_state, MTK_PM_RESUMED);
+
+ pm_runtime_put_noidle(&mtk_dev->pdev->dev);
}

static int mtk_pci_pm_reinit(struct mtk_pci_dev *mtk_dev)
@@ -101,6 +107,8 @@ static int mtk_pci_pm_reinit(struct mtk_pci_dev *mtk_dev)
*/
atomic_set(&mtk_dev->md_pm_state, MTK_PM_INIT);

+ pm_runtime_get_noresume(&mtk_dev->pdev->dev);
+
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
return mtk_wait_pm_config(mtk_dev);
}
@@ -405,6 +413,7 @@ static int __mtk_pci_pm_resume(struct pci_dev *pdev, bool state_check)
mtk_dev->rgu_pci_irq_en = true;
mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ pm_runtime_mark_last_busy(&pdev->dev);
atomic_set(&mtk_dev->md_pm_state, MTK_PM_RESUMED);

return ret;
@@ -446,6 +455,16 @@ static int mtk_pci_pm_thaw(struct device *dev)
return __mtk_pci_pm_resume(to_pci_dev(dev), false);
}

+static int mtk_pci_pm_runtime_suspend(struct device *dev)
+{
+ return __mtk_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int mtk_pci_pm_runtime_resume(struct device *dev)
+{
+ return __mtk_pci_pm_resume(to_pci_dev(dev), true);
+}
+
static const struct dev_pm_ops mtk_pci_pm_ops = {
.suspend = mtk_pci_pm_suspend,
.resume = mtk_pci_pm_resume,
@@ -455,6 +474,8 @@ static const struct dev_pm_ops mtk_pci_pm_ops = {
.poweroff = mtk_pci_pm_suspend,
.restore = mtk_pci_pm_resume,
.restore_noirq = mtk_pci_pm_resume_noirq,
+ .runtime_suspend = mtk_pci_pm_runtime_suspend,
+ .runtime_resume = mtk_pci_pm_runtime_resume
};

static int mtk_request_irq(struct pci_dev *pdev)
--
2.17.1

2021-11-01 03:58:07

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 09/14] net: wwan: t7xx: Add WWAN network interface

From: Haijun Lio <[email protected]>

Creates the Cross Core Modem Network Interface (CCMNI), which implements
the wwan_ops for registration with the WWAN framework. CCMNI also
implements the net_device_ops functions used by the network device.
The network device operations include open, close, start transmission,
TX timeout, change MTU, and select queue.
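
Before the full implementation below, a minimal sketch of how the
net_device_ops get tied to the WWAN core through wwan_ops may help.
The example_* identifiers are hypothetical; wwan_register_ops(),
wwan_netdev_drvpriv() and the wwan_ops fields are the WWAN framework
API this patch relies on.

#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <linux/wwan.h>

struct example_priv {
        u32 if_id;
};

static const struct net_device_ops example_netdev_ops = {
        /* .ndo_open, .ndo_stop, .ndo_start_xmit, ... as in the patch below */
};

static void example_setup(struct net_device *dev)
{
        dev->netdev_ops = &example_netdev_ops;
        dev->mtu = WWAN_DEFAULT_MTU;
}

static int example_newlink(void *ctxt, struct net_device *dev, u32 if_id,
                           struct netlink_ext_ack *extack)
{
        struct example_priv *priv = wwan_netdev_drvpriv(dev);

        priv->if_id = if_id;
        return register_netdevice(dev);	/* called with RTNL held */
}

static void example_dellink(void *ctxt, struct net_device *dev,
                            struct list_head *head)
{
        unregister_netdevice(dev);
}

static const struct wwan_ops example_wwan_ops = {
        .priv_size = sizeof(struct example_priv),
        .setup	   = example_setup,
        .newlink   = example_newlink,
        .dellink   = example_dellink,
};

/* Probe time: the WWAN core then creates the default netdev (wwan0) and
 * calls back into setup/newlink for IP MUX session 0.
 */
static int example_register(struct device *parent, void *ctxt)
{
        return wwan_register_ops(parent, &example_wwan_ops, ctxt, 0);
}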

Signed-off-by: Haijun Lio <[email protected]>
Co-developed-by: Chandrashekar Devegowda <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 11 +-
drivers/net/wwan/t7xx/t7xx_netdev.c | 545 +++++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_netdev.h | 63 +++
4 files changed, 619 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index a2c97a66dfbe..fcee61e7c4bc 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -18,3 +18,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_hif_dpmaif.o \
t7xx_hif_dpmaif_tx.o \
t7xx_hif_dpmaif_rx.o \
+ t7xx_netdev.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index 612be5cbcbd2..3387fb98d746 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -21,6 +21,7 @@
#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
#include "t7xx_monitor.h"
+#include "t7xx_netdev.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_port.h"
@@ -691,10 +692,15 @@ int mtk_md_init(struct mtk_pci_dev *mtk_dev)
if (ret)
goto err_alloc;

- ret = cldma_init(ID_CLDMA1);
+ /* init the data path */
+ ret = ccmni_init(mtk_dev);
if (ret)
goto err_fsm_init;

+ ret = cldma_init(ID_CLDMA1);
+ if (ret)
+ goto err_ccmni_init;
+
ret = port_proxy_init(mtk_dev->md);
if (ret)
goto err_cldma_init;
@@ -709,6 +715,8 @@ int mtk_md_init(struct mtk_pci_dev *mtk_dev)

err_cldma_init:
cldma_exit(ID_CLDMA1);
+err_ccmni_init:
+ ccmni_exit(mtk_dev);
err_fsm_init:
ccci_fsm_uninit();
err_alloc:
@@ -733,6 +741,7 @@ void mtk_md_exit(struct mtk_pci_dev *mtk_dev)
fsm_append_command(fsm_ctl, CCCI_COMMAND_PRE_STOP, 1);
port_proxy_uninit();
cldma_exit(ID_CLDMA1);
+ ccmni_exit(mtk_dev);
ccci_fsm_uninit();
destroy_workqueue(md->handshake_wq);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c
new file mode 100644
index 000000000000..48c59ff4cd70
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_netdev.c
@@ -0,0 +1,545 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chandrashekar Devegowda <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/ip.h>
+#include <linux/netdevice.h>
+#include <linux/wwan.h>
+
+#include <net/ipv6.h>
+
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_netdev.h"
+
+#define IP_MUX_SESSION_DEFAULT 0
+#define SBD_PACKET_TYPE_MASK GENMASK(7, 4)
+
+static void ccmni_make_etherframe(struct net_device *dev, void *skb_eth_hdr,
+ u8 *mac_addr, unsigned int packet_type)
+{
+ struct ethhdr *eth_hdr;
+
+ eth_hdr = skb_eth_hdr;
+ memcpy(eth_hdr->h_dest, mac_addr, sizeof(eth_hdr->h_dest));
+ memset(eth_hdr->h_source, 0, sizeof(eth_hdr->h_source));
+
+ if (packet_type == IPV6_VERSION)
+ eth_hdr->h_proto = cpu_to_be16(ETH_P_IPV6);
+ else
+ eth_hdr->h_proto = cpu_to_be16(ETH_P_IP);
+}
+
+static enum txq_type get_txq_type(struct sk_buff *skb)
+{
+ u32 total_len, payload_len, l4_off;
+ bool tcp_syn_fin_rst, is_tcp;
+ struct ipv6hdr *ip6h;
+ struct tcphdr *tcph;
+ struct iphdr *ip4h;
+ u32 packet_type;
+ __be16 frag_off;
+
+ packet_type = skb->data[0] & SBD_PACKET_TYPE_MASK;
+ if (packet_type == IPV6_VERSION) {
+ ip6h = (struct ipv6hdr *)skb->data;
+ total_len = sizeof(struct ipv6hdr) + ntohs(ip6h->payload_len);
+ l4_off = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &ip6h->nexthdr, &frag_off);
+ tcph = (struct tcphdr *)(skb->data + l4_off);
+ is_tcp = ip6h->nexthdr == IPPROTO_TCP;
+ payload_len = total_len - l4_off - (tcph->doff << 2);
+ } else if (packet_type == IPV4_VERSION) {
+ ip4h = (struct iphdr *)skb->data;
+ tcph = (struct tcphdr *)(skb->data + (ip4h->ihl << 2));
+ is_tcp = ip4h->protocol == IPPROTO_TCP;
+ payload_len = ntohs(ip4h->tot_len) - (ip4h->ihl << 2) - (tcph->doff << 2);
+ } else {
+ return TXQ_NORMAL;
+ }
+
+ tcp_syn_fin_rst = tcph->syn || tcph->fin || tcph->rst;
+ if (is_tcp && !payload_len && !tcp_syn_fin_rst)
+ return TXQ_FAST;
+
+ return TXQ_NORMAL;
+}
+
+static u16 ccmni_select_queue(struct net_device *dev, struct sk_buff *skb,
+ struct net_device *sb_dev)
+{
+ struct ccmni_instance *ccmni;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+
+ if (ccmni->ctlb->capability & NIC_CAP_DATA_ACK_DVD)
+ return get_txq_type(skb);
+
+ return TXQ_NORMAL;
+}
+
+static int ccmni_open(struct net_device *dev)
+{
+ struct ccmni_instance *ccmni;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+ netif_carrier_on(dev);
+ netif_tx_start_all_queues(dev);
+ atomic_inc(&ccmni->usage);
+ return 0;
+}
+
+static int ccmni_close(struct net_device *dev)
+{
+ struct ccmni_instance *ccmni;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+
+ if (atomic_dec_return(&ccmni->usage) < 0)
+ return -EINVAL;
+
+ netif_carrier_off(dev);
+ netif_tx_disable(dev);
+ return 0;
+}
+
+static int ccmni_send_packet(struct ccmni_instance *ccmni, struct sk_buff *skb, enum txq_type txqt)
+{
+ struct ccmni_ctl_block *ctlb;
+ struct ccci_header *ccci_h;
+ unsigned int ccmni_idx;
+
+ skb_push(skb, sizeof(struct ccci_header));
+ ccci_h = (struct ccci_header *)skb->data;
+ ccci_h->status &= ~HDR_FLD_CHN;
+
+ ccmni_idx = ccmni->index;
+ ccci_h->data[0] = ccmni_idx;
+ ccci_h->data[1] = skb->len;
+ ccci_h->reserved = 0;
+
+ ctlb = ccmni->ctlb;
+ if (dpmaif_tx_send_skb(ctlb->hif_ctrl, txqt, skb)) {
+ skb_pull(skb, sizeof(struct ccci_header));
+ /* we will reserve header again in the next retry */
+ return NETDEV_TX_BUSY;
+ }
+
+ return 0;
+}
+
+static int ccmni_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ccmni_instance *ccmni;
+ struct ccmni_ctl_block *ctlb;
+ enum txq_type txqt;
+ int skb_len;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+ ctlb = ccmni->ctlb;
+ txqt = TXQ_NORMAL;
+ skb_len = skb->len;
+
+ /* If MTU changed or there is no headroom, drop the packet */
+ if (skb->len > dev->mtu || skb_headroom(skb) < sizeof(struct ccci_header)) {
+ dev_kfree_skb(skb);
+ dev->stats.tx_dropped++;
+ return NETDEV_TX_OK;
+ }
+
+ if (ctlb->capability & NIC_CAP_DATA_ACK_DVD)
+ txqt = get_txq_type(skb);
+
+ if (ccmni_send_packet(ccmni, skb, txqt)) {
+ if (!(ctlb->capability & NIC_CAP_TXBUSY_STOP)) {
+ if ((ccmni->tx_busy_cnt[txqt]++) % 100 == 0)
+ netdev_notice(dev, "[TX]CCMNI:%d busy:pkt=%ld(ack=%d) cnt=%ld\n",
+ ccmni->index, dev->stats.tx_packets,
+ txqt, ccmni->tx_busy_cnt[txqt]);
+ } else {
+ ccmni->tx_busy_cnt[txqt]++;
+ }
+
+ return NETDEV_TX_BUSY;
+ }
+
+ dev->stats.tx_packets++;
+ dev->stats.tx_bytes += skb_len;
+ if (ccmni->tx_busy_cnt[txqt] > 10) {
+ netdev_notice(dev, "[TX]CCMNI:%d TX busy:tx_pkt=%ld(ack=%d) retries=%ld\n",
+ ccmni->index, dev->stats.tx_packets,
+ txqt, ccmni->tx_busy_cnt[txqt]);
+ }
+ ccmni->tx_busy_cnt[txqt] = 0;
+
+ return NETDEV_TX_OK;
+}
+
+static int ccmni_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu > CCMNI_MTU_MAX)
+ return -EINVAL;
+
+ dev->mtu = new_mtu;
+ return 0;
+}
+
+static void ccmni_tx_timeout(struct net_device *dev, unsigned int __always_unused txqueue)
+{
+ struct ccmni_instance *ccmni;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+ dev->stats.tx_errors++;
+ if (atomic_read(&ccmni->usage) > 0)
+ netif_tx_wake_all_queues(dev);
+}
+
+static const struct net_device_ops ccmni_netdev_ops = {
+ .ndo_open = ccmni_open,
+ .ndo_stop = ccmni_close,
+ .ndo_start_xmit = ccmni_start_xmit,
+ .ndo_tx_timeout = ccmni_tx_timeout,
+ .ndo_change_mtu = ccmni_change_mtu,
+ .ndo_select_queue = ccmni_select_queue,
+};
+
+static void ccmni_start(struct ccmni_ctl_block *ctlb)
+{
+ struct ccmni_instance *ccmni;
+ int i;
+
+ /* carry on the net link */
+ for (i = 0; i < ctlb->nic_dev_num; i++) {
+ ccmni = ctlb->ccmni_inst[i];
+ if (!ccmni)
+ continue;
+
+ if (atomic_read(&ccmni->usage) > 0) {
+ netif_tx_start_all_queues(ccmni->dev);
+ netif_carrier_on(ccmni->dev);
+ }
+ }
+}
+
+static void ccmni_pre_stop(struct ccmni_ctl_block *ctlb)
+{
+ struct ccmni_instance *ccmni;
+ int i;
+
+ /* stop tx */
+ for (i = 0; i < ctlb->nic_dev_num; i++) {
+ ccmni = ctlb->ccmni_inst[i];
+ if (!ccmni)
+ continue;
+
+ if (atomic_read(&ccmni->usage) > 0)
+ netif_tx_disable(ccmni->dev);
+ }
+}
+
+static void ccmni_pos_stop(struct ccmni_ctl_block *ctlb)
+{
+ struct ccmni_instance *ccmni;
+ int i;
+
+ /* carry off the net link */
+ for (i = 0; i < ctlb->nic_dev_num; i++) {
+ ccmni = ctlb->ccmni_inst[i];
+ if (!ccmni)
+ continue;
+
+ if (atomic_read(&ccmni->usage) > 0)
+ netif_carrier_off(ccmni->dev);
+ }
+}
+
+static void ccmni_wwan_setup(struct net_device *dev)
+{
+ dev->header_ops = NULL;
+ dev->hard_header_len += sizeof(struct ccci_header);
+
+ dev->mtu = WWAN_DEFAULT_MTU;
+ dev->max_mtu = CCMNI_MTU_MAX;
+ dev->tx_queue_len = CCMNI_TX_QUEUE;
+ dev->watchdog_timeo = CCMNI_NETDEV_WDT_TO;
+ /* ccmni is a pure IP device */
+ dev->flags = (IFF_POINTOPOINT | IFF_NOARP)
+ & ~(IFF_BROADCAST | IFF_MULTICAST);
+
+ /* not supporting VLAN */
+ dev->features = NETIF_F_VLAN_CHALLENGED;
+
+ dev->features |= NETIF_F_SG;
+ dev->hw_features |= NETIF_F_SG;
+
+ /* uplink checksum offload */
+ dev->features |= NETIF_F_HW_CSUM;
+ dev->hw_features |= NETIF_F_HW_CSUM;
+
+ /* downlink checksum offload */
+ dev->features |= NETIF_F_RXCSUM;
+ dev->hw_features |= NETIF_F_RXCSUM;
+
+ dev->addr_len = ETH_ALEN;
+
+ /* use kernel default free_netdev() function */
+ dev->needs_free_netdev = true;
+
+ /* no need to free again because of free_netdev() */
+ dev->priv_destructor = NULL;
+ dev->type = ARPHRD_PPP;
+
+ dev->netdev_ops = &ccmni_netdev_ops;
+ eth_random_addr(dev->dev_addr);
+}
+
+static int ccmni_wwan_newlink(void *ctxt, struct net_device *dev, u32 if_id,
+ struct netlink_ext_ack *extack)
+{
+ struct ccmni_ctl_block *ctlb;
+ struct ccmni_instance *ccmni;
+ int ret;
+
+ ctlb = ctxt;
+
+ if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst))
+ return -EINVAL;
+
+ /* initialize private structure of netdev */
+ ccmni = wwan_netdev_drvpriv(dev);
+ ccmni->index = if_id;
+ ccmni->ctlb = ctlb;
+ ccmni->dev = dev;
+ atomic_set(&ccmni->usage, 0);
+ ctlb->ccmni_inst[if_id] = ccmni;
+
+ ret = register_netdevice(dev);
+ if (ret)
+ return ret;
+
+ netif_device_attach(dev);
+ return 0;
+}
+
+static void ccmni_wwan_dellink(void *ctxt, struct net_device *dev, struct list_head *head)
+{
+ struct ccmni_instance *ccmni;
+ struct ccmni_ctl_block *ctlb;
+ int if_id;
+
+ ccmni = wwan_netdev_drvpriv(dev);
+ ctlb = ctxt;
+ if_id = ccmni->index;
+
+ if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst))
+ return;
+
+ if (WARN_ON(ctlb->ccmni_inst[if_id] != ccmni))
+ return;
+
+ unregister_netdevice(dev);
+}
+
+static const struct wwan_ops ccmni_wwan_ops = {
+ .priv_size = sizeof(struct ccmni_instance),
+ .setup = ccmni_wwan_setup,
+ .newlink = ccmni_wwan_newlink,
+ .dellink = ccmni_wwan_dellink,
+};
+
+static int ccmni_md_state_callback(enum md_state state, void *para)
+{
+ struct ccmni_ctl_block *ctlb;
+ int ret = 0;
+
+ ctlb = para;
+ ctlb->md_sta = state;
+
+ switch (state) {
+ case MD_STATE_READY:
+ ccmni_start(ctlb);
+ break;
+
+ case MD_STATE_EXCEPTION:
+ case MD_STATE_STOPPED:
+ ccmni_pre_stop(ctlb);
+ ret = dpmaif_md_state_callback(ctlb->hif_ctrl, state);
+ if (ret < 0)
+ dev_err(ctlb->hif_ctrl->dev,
+ "dpmaif md state callback err, md_sta=%d\n", state);
+
+ ccmni_pos_stop(ctlb);
+ break;
+
+ case MD_STATE_WAITING_FOR_HS1:
+ case MD_STATE_WAITING_TO_STOP:
+ ret = dpmaif_md_state_callback(ctlb->hif_ctrl, state);
+ if (ret < 0)
+ dev_err(ctlb->hif_ctrl->dev,
+ "dpmaif md state callback err, md_sta=%d\n", state);
+ break;
+
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static void init_md_status_notifier(struct ccmni_ctl_block *ctlb)
+{
+ struct fsm_notifier_block *md_status_notifier;
+
+ md_status_notifier = &ctlb->md_status_notify;
+ INIT_LIST_HEAD(&md_status_notifier->entry);
+ md_status_notifier->notifier_fn = ccmni_md_state_callback;
+ md_status_notifier->data = ctlb;
+
+ fsm_notifier_register(md_status_notifier);
+}
+
+static void ccmni_recv_skb(struct mtk_pci_dev *mtk_dev, int netif_id, struct sk_buff *skb)
+{
+ struct ccmni_instance *ccmni;
+ struct net_device *dev;
+ int pkt_type, skb_len;
+
+ ccmni = mtk_dev->ccmni_ctlb->ccmni_inst[netif_id];
+ if (!ccmni) {
+ dev_kfree_skb(skb);
+ return;
+ }
+
+ dev = ccmni->dev;
+
+ pkt_type = skb->data[0] & SBD_PACKET_TYPE_MASK;
+ ccmni_make_etherframe(dev, skb->data - ETH_HLEN, dev->dev_addr, pkt_type);
+ skb_set_mac_header(skb, -ETH_HLEN);
+ skb_reset_network_header(skb);
+ skb->dev = dev;
+ if (pkt_type == IPV6_VERSION)
+ skb->protocol = htons(ETH_P_IPV6);
+ else
+ skb->protocol = htons(ETH_P_IP);
+
+ skb_len = skb->len;
+
+ netif_rx_any_context(skb);
+ dev->stats.rx_packets++;
+ dev->stats.rx_bytes += skb_len;
+}
+
+static void ccmni_queue_tx_irq_notify(struct ccmni_ctl_block *ctlb, int qno)
+{
+ struct netdev_queue *net_queue;
+ struct ccmni_instance *ccmni;
+
+ ccmni = ctlb->ccmni_inst[0];
+
+ if (netif_running(ccmni->dev) && atomic_read(&ccmni->usage) > 0) {
+ if (ctlb->capability & NIC_CAP_CCMNI_MQ) {
+ net_queue = netdev_get_tx_queue(ccmni->dev, qno);
+ if (netif_tx_queue_stopped(net_queue))
+ netif_tx_wake_queue(net_queue);
+ } else if (netif_queue_stopped(ccmni->dev)) {
+ netif_wake_queue(ccmni->dev);
+ }
+ }
+}
+
+static void ccmni_queue_tx_full_notify(struct ccmni_ctl_block *ctlb, int qno)
+{
+ struct netdev_queue *net_queue;
+ struct ccmni_instance *ccmni;
+
+ ccmni = ctlb->ccmni_inst[0];
+
+ if (atomic_read(&ccmni->usage) > 0) {
+ dev_err(&ctlb->mtk_dev->pdev->dev, "TX queue %d is full\n", qno);
+ if (ctlb->capability & NIC_CAP_CCMNI_MQ) {
+ net_queue = netdev_get_tx_queue(ccmni->dev, qno);
+ netif_tx_stop_queue(net_queue);
+ } else {
+ netif_stop_queue(ccmni->dev);
+ }
+ }
+}
+
+static void ccmni_queue_state_notify(struct mtk_pci_dev *mtk_dev,
+ enum dpmaif_txq_state state, int qno)
+{
+ if (!(mtk_dev->ccmni_ctlb->capability & NIC_CAP_TXBUSY_STOP) ||
+ mtk_dev->ccmni_ctlb->md_sta != MD_STATE_READY ||
+ qno >= TXQ_TYPE_CNT)
+ return;
+
+ if (!mtk_dev->ccmni_ctlb->ccmni_inst[0]) {
+ dev_warn(&mtk_dev->pdev->dev, "No netdev registered yet\n");
+ return;
+ }
+
+ if (state == DMPAIF_TXQ_STATE_IRQ)
+ ccmni_queue_tx_irq_notify(mtk_dev->ccmni_ctlb, qno);
+ else if (state == DMPAIF_TXQ_STATE_FULL)
+ ccmni_queue_tx_full_notify(mtk_dev->ccmni_ctlb, qno);
+}
+
+int ccmni_init(struct mtk_pci_dev *mtk_dev)
+{
+ struct ccmni_ctl_block *ctlb;
+ int ret;
+
+ ctlb = devm_kzalloc(&mtk_dev->pdev->dev, sizeof(*ctlb), GFP_KERNEL);
+ if (!ctlb)
+ return -ENOMEM;
+
+ mtk_dev->ccmni_ctlb = ctlb;
+ ctlb->mtk_dev = mtk_dev;
+ ctlb->callbacks.state_notify = ccmni_queue_state_notify;
+ ctlb->callbacks.recv_skb = ccmni_recv_skb;
+ ctlb->nic_dev_num = NIC_DEV_DEFAULT;
+ ctlb->capability = NIC_CAP_TXBUSY_STOP | NIC_CAP_SGIO |
+ NIC_CAP_DATA_ACK_DVD | NIC_CAP_CCMNI_MQ;
+
+ ctlb->hif_ctrl = dpmaif_hif_init(mtk_dev, &ctlb->callbacks);
+ if (!ctlb->hif_ctrl)
+ return -ENOMEM;
+
+ /* WWAN core will create a netdev for the default IP MUX channel */
+ ret = wwan_register_ops(&ctlb->mtk_dev->pdev->dev, &ccmni_wwan_ops, ctlb,
+ IP_MUX_SESSION_DEFAULT);
+ if (ret)
+ goto error_md;
+
+ init_md_status_notifier(ctlb);
+
+ return 0;
+
+error_md:
+ wwan_unregister_ops(&ctlb->mtk_dev->pdev->dev);
+
+ return ret;
+}
+
+void ccmni_exit(struct mtk_pci_dev *mtk_dev)
+{
+ struct ccmni_ctl_block *ctlb;
+
+ ctlb = mtk_dev->ccmni_ctlb;
+ /* unregister FSM notifier */
+ fsm_notifier_unregister(&ctlb->md_status_notify);
+ wwan_unregister_ops(&ctlb->mtk_dev->pdev->dev);
+ dpmaif_hif_exit(ctlb->hif_ctrl);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.h b/drivers/net/wwan/t7xx/t7xx_netdev.h
new file mode 100644
index 000000000000..b83b45628df5
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_netdev.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_NETDEV_H__
+#define __T7XX_NETDEV_H__
+
+#include <linux/bitfield.h>
+#include <linux/netdevice.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_monitor.h"
+
+#define RXQ_NUM DPMAIF_RXQ_NUM
+#define NIC_DEV_MAX 21
+#define NIC_DEV_DEFAULT 2
+#define NIC_CAP_TXBUSY_STOP BIT(0)
+#define NIC_CAP_SGIO BIT(1)
+#define NIC_CAP_DATA_ACK_DVD BIT(2)
+#define NIC_CAP_CCMNI_MQ BIT(3)
+
+/* must be less than DPMAIF_HW_MTU_SIZE (3*1024 + 8) */
+#define CCMNI_MTU_MAX 3000
+#define CCMNI_TX_QUEUE 1000
+#define CCMNI_NETDEV_WDT_TO (1 * HZ)
+
+#define IPV4_VERSION 0x40
+#define IPV6_VERSION 0x60
+
+struct ccmni_instance {
+ unsigned int index;
+ atomic_t usage;
+ struct net_device *dev;
+ struct ccmni_ctl_block *ctlb;
+ unsigned long tx_busy_cnt[TXQ_TYPE_CNT];
+};
+
+struct ccmni_ctl_block {
+ struct mtk_pci_dev *mtk_dev;
+ struct dpmaif_ctrl *hif_ctrl;
+ struct ccmni_instance *ccmni_inst[NIC_DEV_MAX];
+ struct dpmaif_callbacks callbacks;
+ unsigned int nic_dev_num;
+ unsigned int md_sta;
+ unsigned int capability;
+
+ struct fsm_notifier_block md_status_notify;
+};
+
+int ccmni_init(struct mtk_pci_dev *mtk_dev);
+void ccmni_exit(struct mtk_pci_dev *mtk_dev);
+
+#endif /* __T7XX_NETDEV_H__ */
--
2.17.1

2021-11-01 03:58:12

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 13/14] net: wwan: t7xx: Add debug and test ports

From: Haijun Lio <[email protected]>

Creates the char and tty port infrastructure for debugging and testing.
Those ports support use cases such as the following (see the loop-back
sketch after the list):
* Modem log collection
* Memory dump
* Loop-back test
* Factory tests
* Device Service Streams
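
To illustrate the loop-back use case, a hypothetical user-space check
could look like the sketch below. It assumes the loop-back port added
here is exposed as /dev/ccci_lb_it and that the modem echoes written
data back unchanged; both the node name and the echo behavior are
assumptions for illustration only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        char tx[] = "t7xx loopback probe";
        char rx[sizeof(tx)] = { 0 };
        int fd = open("/dev/ccci_lb_it", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        if (write(fd, tx, sizeof(tx)) != sizeof(tx)) {
                perror("write");
                close(fd);
                return 1;
        }

        /* Blocking read; the modem is expected to echo the payload back */
        if (read(fd, rx, sizeof(rx)) < 0) {
                perror("read");
                close(fd);
                return 1;
        }

        printf("loopback %s\n", memcmp(tx, rx, sizeof(tx)) ? "mismatch" : "ok");
        close(fd);
        return 0;
}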

Signed-off-by: Haijun Lio <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 3 +
drivers/net/wwan/t7xx/t7xx_port.h | 2 +
drivers/net/wwan/t7xx/t7xx_port_char.c | 424 ++++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 49 ++-
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 3 +
drivers/net/wwan/t7xx/t7xx_port_tty.c | 191 +++++++++++
drivers/net/wwan/t7xx/t7xx_tty_ops.c | 205 ++++++++++++
drivers/net/wwan/t7xx/t7xx_tty_ops.h | 44 +++
8 files changed, 920 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_char.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_tty.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index fcee61e7c4bc..ba5c602a734b 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -19,3 +19,6 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_hif_dpmaif_tx.o \
t7xx_hif_dpmaif_rx.o \
t7xx_netdev.o \
+ t7xx_port_char.o \
+ t7xx_port_tty.o \
+ t7xx_tty_ops.o
diff --git a/drivers/net/wwan/t7xx/t7xx_port.h b/drivers/net/wwan/t7xx/t7xx_port.h
index badaaa418b97..aae126b83db3 100644
--- a/drivers/net/wwan/t7xx/t7xx_port.h
+++ b/drivers/net/wwan/t7xx/t7xx_port.h
@@ -157,5 +157,7 @@ int port_write_room_to_md(struct t7xx_port *port);
struct t7xx_port *port_get_by_minor(int minor);
struct t7xx_port *port_get_by_name(char *port_name);
int port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb, bool blocking);
+int port_register_device(const char *name, int major, int minor);
+void port_unregister_device(int major, int minor);

#endif /* __T7XX_PORT_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_port_char.c b/drivers/net/wwan/t7xx/t7xx_port_char.c
new file mode 100644
index 000000000000..081fa6944845
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_char.c
@@ -0,0 +1,424 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/cdev.h>
+#include <linux/mutex.h>
+#include <linux/poll.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_monitor.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_skb_util.h"
+
+static __poll_t port_char_poll(struct file *fp, struct poll_table_struct *poll)
+{
+ struct t7xx_port *port;
+ enum md_state md_state;
+ __poll_t mask = 0;
+
+ port = fp->private_data;
+ md_state = ccci_fsm_get_md_state();
+ poll_wait(fp, &port->rx_wq, poll);
+
+ spin_lock_irq(&port->rx_wq.lock);
+ if (!skb_queue_empty(&port->rx_skb_list))
+ mask |= EPOLLIN | EPOLLRDNORM;
+
+ spin_unlock_irq(&port->rx_wq.lock);
+ if (port_write_room_to_md(port) > 0)
+ mask |= EPOLLOUT | EPOLLWRNORM;
+
+ if (port->rx_ch == CCCI_UART1_RX &&
+ md_state != MD_STATE_READY &&
+ md_state != MD_STATE_EXCEPTION) {
+ /* notify MD logger to save its log before md_init kills it */
+ mask |= EPOLLERR;
+ dev_err(port->dev, "poll error for MD logger at state: %d, mask: %u\n",
+ md_state, mask);
+ }
+
+ return mask;
+}
+
+/**
+ * port_char_open() - open char port
+ * @inode: pointer to inode structure
+ * @file: pointer to file structure
+ *
+ * Open a char port using pre-defined md_ccci_ports structure in port_proxy
+ *
+ * Return: 0 for success, -EINVAL for failure
+ */
+static int port_char_open(struct inode *inode, struct file *file)
+{
+ int major = imajor(inode);
+ int minor = iminor(inode);
+ struct t7xx_port *port;
+
+ port = port_proxy_get_port(major, minor);
+ if (!port)
+ return -EINVAL;
+
+ atomic_inc(&port->usage_cnt);
+ file->private_data = port;
+ return nonseekable_open(inode, file);
+}
+
+static int port_char_close(struct inode *inode, struct file *file)
+{
+ struct t7xx_port *port;
+ struct sk_buff *skb;
+ int clear_cnt = 0;
+
+ port = file->private_data;
+ /* decrease usage count, so when we ask again,
+ * the packet can be dropped in recv_request.
+ */
+ atomic_dec(&port->usage_cnt);
+
+ /* purge RX request list */
+ spin_lock_irq(&port->rx_wq.lock);
+ while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL) {
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ clear_cnt++;
+ }
+
+ spin_unlock_irq(&port->rx_wq.lock);
+ return 0;
+}
+
+static ssize_t port_char_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+{
+ bool full_req_done = false;
+ struct t7xx_port *port;
+ int ret = 0, read_len;
+ struct sk_buff *skb;
+
+ port = file->private_data;
+ spin_lock_irq(&port->rx_wq.lock);
+ if (skb_queue_empty(&port->rx_skb_list)) {
+ if (file->f_flags & O_NONBLOCK) {
+ spin_unlock_irq(&port->rx_wq.lock);
+ return -EAGAIN;
+ }
+
+ ret = wait_event_interruptible_locked_irq(port->rx_wq,
+ !skb_queue_empty(&port->rx_skb_list));
+ if (ret == -ERESTARTSYS) {
+ spin_unlock_irq(&port->rx_wq.lock);
+ return -EINTR;
+ }
+ }
+
+ skb = skb_peek(&port->rx_skb_list);
+
+ if (count >= skb->len) {
+ read_len = skb->len;
+ full_req_done = true;
+ __skb_unlink(skb, &port->rx_skb_list);
+ } else {
+ read_len = count;
+ }
+
+ spin_unlock_irq(&port->rx_wq.lock);
+ if (copy_to_user(buf, skb->data, read_len)) {
+ dev_err(port->dev, "read on %s, copy to user failed, %d/%zu\n",
+ port->name, read_len, count);
+ ret = -EFAULT;
+ }
+
+ skb_pull(skb, read_len);
+ if (full_req_done)
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+
+ return ret ? ret : read_len;
+}
+
+static ssize_t port_char_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ size_t actual_count, alloc_size, txq_mtu;
+ int i, multi_packet = 1;
+ struct t7xx_port *port;
+ enum md_state md_state;
+ struct sk_buff *skb;
+ bool blocking;
+ int ret;
+
+ blocking = !(file->f_flags & O_NONBLOCK);
+ port = file->private_data;
+ md_state = ccci_fsm_get_md_state();
+ if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
+ dev_warn(port->dev, "port: %s ch: %d, write fail when md_state: %d\n",
+ port->name, port->tx_ch, md_state);
+ return -ENODEV;
+ }
+
+ if (port_write_room_to_md(port) <= 0 && !blocking)
+ return -EAGAIN;
+
+ txq_mtu = CLDMA_TXQ_MTU;
+ if (port->flags & PORT_F_RAW_DATA || port->flags & PORT_F_USER_HEADER) {
+ if (port->flags & PORT_F_USER_HEADER && count > txq_mtu) {
+ dev_err(port->dev, "packet size: %zu larger than MTU on %s\n",
+ count, port->name);
+ return -ENOMEM;
+ }
+
+ actual_count = count > txq_mtu ? txq_mtu : count;
+ alloc_size = actual_count;
+ } else {
+ actual_count = (count + CCCI_H_ELEN) > txq_mtu ? (txq_mtu - CCCI_H_ELEN) : count;
+ alloc_size = actual_count + CCCI_H_ELEN;
+ if (count + CCCI_H_ELEN > txq_mtu && (port->tx_ch == CCCI_MBIM_TX ||
+ (port->tx_ch >= CCCI_DSS0_TX &&
+ port->tx_ch <= CCCI_DSS7_TX))) {
+ multi_packet = (count + txq_mtu - CCCI_H_ELEN - 1) /
+ (txq_mtu - CCCI_H_ELEN);
+ }
+ }
+ mutex_lock(&port->tx_mutex_lock);
+ for (i = 0; i < multi_packet; i++) {
+ struct ccci_header *ccci_h = NULL;
+
+ if (multi_packet > 1 && multi_packet == i + 1) {
+ actual_count = count % (txq_mtu - CCCI_H_ELEN);
+ alloc_size = actual_count + CCCI_H_ELEN;
+ }
+
+ skb = ccci_alloc_skb_from_pool(&port->mtk_dev->pools, alloc_size, blocking);
+ if (!skb) {
+ ret = -ENOMEM;
+ goto err_out;
+ }
+
+ /* Get the user data, no need to validate the data since the driver is just
+ * passing it to the device.
+ */
+ if (port->flags & PORT_F_RAW_DATA) {
+ ret = copy_from_user(skb_put(skb, actual_count), buf, actual_count);
+ if (port->flags & PORT_F_USER_HEADER) {
+ /* The ccci_header is provided by user.
+ *
+ * For only sending ccci_header without additional data
+ * case, data[0]=CCCI_HEADER_NO_DATA, data[1]=user_data,
+ * ch=tx_channel, reserved=no_use.
+ *
+ * For send ccci_header with additional data case,
+ * data[0]=0, data[1]=data_size, ch=tx_channel,
+ * reserved=user_data.
+ */
+ ccci_h = (struct ccci_header *)skb->data;
+ if (actual_count == CCCI_H_LEN)
+ ccci_h->data[0] = CCCI_HEADER_NO_DATA;
+ else
+ ccci_h->data[1] = actual_count;
+
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, port->tx_ch);
+ }
+ } else {
+ /* ccci_header is provided by driver */
+ ccci_h = skb_put(skb, CCCI_H_LEN);
+ ccci_h->data[0] = 0;
+ ccci_h->data[1] = actual_count + CCCI_H_LEN;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, port->tx_ch);
+ ccci_h->reserved = 0;
+
+ ret = copy_from_user(skb_put(skb, actual_count),
+ buf + i * (txq_mtu - CCCI_H_ELEN), actual_count);
+ }
+
+ if (ret) {
+ ret = -EFAULT;
+ goto err_out;
+ }
+
+ /* send out */
+ port_proxy_set_seq_num(port, ccci_h);
+ ret = port_send_skb_to_md(port, skb, blocking);
+ if (ret) {
+ if (ret == -EBUSY && !blocking)
+ ret = -EAGAIN;
+
+ goto err_out;
+ } else {
+ /* Record the port seq_num after the data is sent to HIF.
+ * Only bits 0-14 are used, so wrap-around is harmless.
+ */
+ port->seq_nums[MTK_OUT]++;
+ }
+
+ if (multi_packet == 1) {
+ mutex_unlock(&port->tx_mutex_lock);
+ return actual_count;
+ } else if (multi_packet == i + 1) {
+ mutex_unlock(&port->tx_mutex_lock);
+ return count;
+ }
+ }
+
+err_out:
+ mutex_unlock(&port->tx_mutex_lock);
+ dev_err(port->dev, "write error done on %s, size: %zu, ret: %d\n",
+ port->name, actual_count, ret);
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ return ret;
+}
+
+static const struct file_operations char_fops = {
+ .owner = THIS_MODULE,
+ .open = &port_char_open,
+ .read = &port_char_read,
+ .write = &port_char_write,
+ .release = &port_char_close,
+ .poll = &port_char_poll,
+};
+
+static int port_char_init(struct t7xx_port *port)
+{
+ struct cdev *dev;
+
+ port->rx_length_th = MAX_RX_QUEUE_LENGTH;
+ port->skb_from_pool = true;
+ if (port->flags & PORT_F_RX_CHAR_NODE) {
+ dev = cdev_alloc();
+ if (!dev)
+ return -ENOMEM;
+
+ dev->ops = &char_fops;
+ dev->owner = THIS_MODULE;
+ if (cdev_add(dev, MKDEV(port->major, port->minor_base + port->minor), 1)) {
+ kobject_put(&dev->kobj);
+ return -ENOMEM;
+ }
+
+ if (!(port->flags & PORT_F_RAW_DATA))
+ port->flags |= PORT_F_RX_ADJUST_HEADER;
+
+ port->cdev = dev;
+ }
+
+ if (port->rx_ch == CCCI_UART2_RX)
+ port->flags |= PORT_F_RX_CH_TRAFFIC;
+
+ return 0;
+}
+
+static void port_char_uninit(struct t7xx_port *port)
+{
+ unsigned long flags;
+ struct sk_buff *skb;
+
+ if (port->flags & PORT_F_RX_CHAR_NODE && port->cdev) {
+ if (port->chn_crt_stat == CCCI_CHAN_ENABLE) {
+ port_unregister_device(port->major, port->minor_base + port->minor);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ }
+
+ cdev_del(port->cdev);
+ port->cdev = NULL;
+ }
+
+ /* interrupts need to be disabled */
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+}
+
+static int port_char_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ if ((port->flags & PORT_F_RX_CHAR_NODE) && !atomic_read(&port->usage_cnt)) {
+ dev_err_ratelimited(port->dev,
+ "port %s is not opened, dropping packets\n", port->name);
+ return -ENETDOWN;
+ }
+
+ return port_recv_skb(port, skb);
+}
+
+static int port_status_update(struct t7xx_port *port)
+{
+ if (!(port->flags & PORT_F_RX_CHAR_NODE))
+ return 0;
+
+ if (port->chan_enable == CCCI_CHAN_ENABLE) {
+ int ret;
+
+ port->flags &= ~PORT_F_RX_ALLOW_DROP;
+ ret = port_register_device(port->name, port->major,
+ port->minor_base + port->minor);
+ if (ret)
+ return ret;
+
+ port_proxy_broadcast_state(port, MTK_PORT_STATE_ENABLE);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+
+ return 0;
+ }
+
+ port->flags |= PORT_F_RX_ALLOW_DROP;
+ port_unregister_device(port->major, port->minor_base + port->minor);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ return port_proxy_broadcast_state(port, MTK_PORT_STATE_DISABLE);
+}
+
+static int port_char_enable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+ if (port->chn_crt_stat != port->chan_enable)
+ return port_status_update(port);
+
+ return 0;
+}
+
+static int port_char_disable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ if (port->chn_crt_stat != port->chan_enable)
+ return port_status_update(port);
+
+ return 0;
+}
+
+static void port_char_md_state_notify(struct t7xx_port *port, unsigned int state)
+{
+ if (state == MD_STATE_READY)
+ port_status_update(port);
+}
+
+struct port_ops char_port_ops = {
+ .init = &port_char_init,
+ .recv_skb = &port_char_recv_skb,
+ .uninit = &port_char_uninit,
+ .enable_chl = &port_char_enable_chl,
+ .disable_chl = &port_char_disable_chl,
+ .md_state_notify = &port_char_md_state_notify,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index 970b5160febf..187ce59446c4 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -44,6 +44,7 @@
#define TTY_PORT_MINOR_INVALID -1

static struct port_proxy *port_prox;
+static struct class *dev_class;

#define for_each_proxy_port(i, p, proxy) \
for (i = 0, (p) = &(proxy)->ports[i]; \
@@ -53,8 +54,32 @@ static struct port_proxy *port_prox;
static struct t7xx_port md_ccci_ports[] = {
{CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
+ {CCCI_MD_LOG_TX, CCCI_MD_LOG_RX, 7, 7, 7, 7, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 2, "ttyCMdLog", WWAN_PORT_AT},
+ {CCCI_LB_IT_TX, CCCI_LB_IT_RX, 0, 0, 0xff, 0xff, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 3, "ccci_lb_it",},
+ {CCCI_MIPC_TX, CCCI_MIPC_RX, 2, 2, 0, 0, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &tty_port_ops, 1, "ttyCMIPC0",},
{CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
+ {CCCI_UART1_TX, CCCI_UART1_RX, 1, 1, 1, 1, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 11, "ttyCMdMeta",},
+ {CCCI_DSS0_TX, CCCI_DSS0_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 13, "ttyCMBIMDSS0",},
+ {CCCI_DSS1_TX, CCCI_DSS1_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 14, "ttyCMBIMDSS1",},
+ {CCCI_DSS2_TX, CCCI_DSS2_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 15, "ttyCMBIMDSS2",},
+ {CCCI_DSS3_TX, CCCI_DSS3_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 16, "ttyCMBIMDSS3",},
+ {CCCI_DSS4_TX, CCCI_DSS4_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 17, "ttyCMBIMDSS4",},
+ {CCCI_DSS5_TX, CCCI_DSS5_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 18, "ttyCMBIMDSS5",},
+ {CCCI_DSS6_TX, CCCI_DSS6_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 19, "ttyCMBIMDSS6",},
+ {CCCI_DSS7_TX, CCCI_DSS7_RX, 3, 3, 3, 3, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &char_port_ops, 20, "ttyCMBIMDSS7",},
{CCCI_CONTROL_TX, CCCI_CONTROL_RX, 0, 0, 0, 0, ID_CLDMA1,
0, &ctl_port_ops, 0xff, "ccci_ctrl",},
};
@@ -658,6 +683,20 @@ struct t7xx_port *port_get_by_name(char *port_name)
return NULL;
}

+int port_register_device(const char *name, int major, int minor)
+{
+ struct device *dev;
+
+ dev = device_create(dev_class, NULL, MKDEV(major, minor), NULL, "%s", name);
+
+ return IS_ERR(dev) ? PTR_ERR(dev) : 0;
+}
+
+void port_unregister_device(int major, int minor)
+{
+ device_destroy(dev_class, MKDEV(major, minor));
+}
+
int port_proxy_broadcast_state(struct t7xx_port *port, int state)
{
char msg[PORT_NETLINK_MSG_MAX_PAYLOAD];
@@ -695,9 +734,13 @@ int port_proxy_init(struct mtk_modem *md)
{
int ret;

+ dev_class = class_create(THIS_MODULE, "ccci_node");
+ if (IS_ERR(dev_class))
+ return PTR_ERR(dev_class);
+
ret = proxy_alloc(md);
if (ret)
- return ret;
+ goto err_proxy;

ret = port_netlink_init();
if (ret)
@@ -707,6 +750,8 @@ int port_proxy_init(struct mtk_modem *md)

return 0;

+err_proxy:
+ class_destroy(dev_class);
err_netlink:
port_proxy_uninit();

@@ -725,6 +770,8 @@ void port_proxy_uninit(void)
unregister_chrdev_region(MKDEV(port_prox->major, port_prox->minor_base),
TTY_IPC_MINOR_BASE);
port_netlink_uninit();
+
+ class_destroy(dev_class);
}

/**
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index 3d43c1f46e2a..704630182e48 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -82,8 +82,11 @@ struct port_msg {
#define PORT_ENUM_VER_MISMATCH 0x00657272

/* port operations mapping */
+extern struct port_ops char_port_ops;
extern struct port_ops wwan_sub_port_ops;
extern struct port_ops ctl_port_ops;
+extern struct port_ops tty_port_ops;
+extern struct tty_dev_ops tty_ops;

int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool);
void port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_tty.c b/drivers/net/wwan/t7xx/t7xx_port_tty.c
new file mode 100644
index 000000000000..2ca2ca21c249
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_tty.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_skb_util.h"
+#include "t7xx_tty_ops.h"
+
+#define TTY_PORT_NAME_BASE "ttyC"
+#define TTY_PORT_NR 32
+
+static int ccci_tty_send_pkt(int tty_port_idx, const void *data, int len)
+{
+ struct ccci_header *ccci_h = NULL;
+ int actual_count, alloc_size;
+ int ret, header_len = 0;
+ struct t7xx_port *port;
+ struct sk_buff *skb;
+
+ port = port_get_by_minor(tty_port_idx + TTY_PORT_MINOR_BASE);
+ if (!port)
+ return -ENXIO;
+
+ if (port->flags & PORT_F_RAW_DATA) {
+ actual_count = len > CLDMA_TXQ_MTU ? CLDMA_TXQ_MTU : len;
+ alloc_size = actual_count;
+ } else {
+ /* get skb info */
+ header_len = sizeof(struct ccci_header);
+ actual_count = len > CCCI_MTU ? CCCI_MTU : len;
+ alloc_size = actual_count + header_len;
+ }
+
+ skb = ccci_alloc_skb_from_pool(&port->mtk_dev->pools, alloc_size, GFS_BLOCKING);
+ if (skb) {
+ if (!(port->flags & PORT_F_RAW_DATA)) {
+ ccci_h = skb_put(skb, sizeof(struct ccci_header));
+ ccci_h->data[0] = 0;
+ ccci_h->data[1] = actual_count + header_len;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, port->tx_ch);
+ ccci_h->reserved = 0;
+ }
+ } else {
+ return -ENOMEM;
+ }
+
+ memcpy(skb_put(skb, actual_count), data, actual_count);
+
+ /* send data */
+ port_proxy_set_seq_num(port, ccci_h);
+ ret = port_send_skb_to_md(port, skb, true);
+ if (ret) {
+ dev_err(port->dev, "failed to send skb to md, ret = %d\n", ret);
+ return ret;
+ }
+
+ /* Record the port seq_num after the data is sent to HIF.
+ * Only bits 0-14 are used, thus negating overflow.
+ */
+ port->seq_nums[MTK_OUT]++;
+
+ return actual_count;
+}
+
+static struct tty_ccci_ops mtk_tty_ops = {
+ .tty_num = TTY_PORT_NR,
+ .name = TTY_PORT_NAME_BASE,
+ .md_ability = 0,
+ .send_pkt = ccci_tty_send_pkt,
+};
+
+static int port_tty_init(struct t7xx_port *port)
+{
+ /* mapping the minor number to tty dev idx */
+ port->minor += TTY_PORT_MINOR_BASE;
+
+ /* init the tty driver */
+ if (!tty_ops.tty_driver_status) {
+ tty_ops.tty_driver_status = true;
+ atomic_set(&tty_ops.port_installed_num, 0);
+ tty_ops.init(&mtk_tty_ops, port);
+ }
+
+ return 0;
+}
+
+static int port_tty_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ int actual_recv_len;
+
+ /* get skb data */
+ if (!(port->flags & PORT_F_RAW_DATA))
+ skb_pull(skb, sizeof(struct ccci_header));
+
+ /* send data to tty driver. */
+ actual_recv_len = tty_ops.rx_callback(port, skb->data, skb->len);
+
+ if (actual_recv_len != skb->len) {
+ dev_err(port->dev, "ccci port[%s] recv skb fail\n", port->name);
+ skb_push(skb, sizeof(struct ccci_header));
+ return -ENOBUFS;
+ }
+
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ return 0;
+}
+
+static void port_tty_md_state_notify(struct t7xx_port *port, unsigned int state)
+{
+ if (state != MD_STATE_READY || port->chan_enable != CCCI_CHAN_ENABLE)
+ return;
+
+ port->flags &= ~PORT_F_RX_ALLOW_DROP;
+ /* create a tty port */
+ tty_ops.tty_port_create(port, port->name);
+ atomic_inc(&tty_ops.port_installed_num);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+}
+
+static void port_tty_uninit(struct t7xx_port *port)
+{
+ port->minor -= TTY_PORT_MINOR_BASE;
+
+ if (port->chn_crt_stat != CCCI_CHAN_ENABLE)
+ return;
+
+ /* destroy tty port */
+ tty_ops.tty_port_destroy(port);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+
+ /* CCCI tty driver exit */
+ if (atomic_dec_and_test(&tty_ops.port_installed_num) && tty_ops.exit) {
+ tty_ops.exit();
+ tty_ops.tty_driver_status = false;
+ }
+}
+
+static int port_tty_enable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+ if (port->chn_crt_stat != port->chan_enable) {
+ port->flags &= ~PORT_F_RX_ALLOW_DROP;
+ /* create a tty port */
+ tty_ops.tty_port_create(port, port->name);
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+ atomic_inc(&tty_ops.port_installed_num);
+ }
+
+ return 0;
+}
+
+static int port_tty_disable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ return 0;
+}
+
+struct port_ops tty_port_ops = {
+ .init = &port_tty_init,
+ .recv_skb = &port_tty_recv_skb,
+ .md_state_notify = &port_tty_md_state_notify,
+ .uninit = &port_tty_uninit,
+ .enable_chl = &port_tty_enable_chl,
+ .disable_chl = &port_tty_disable_chl,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_tty_ops.c b/drivers/net/wwan/t7xx/t7xx_tty_ops.c
new file mode 100644
index 000000000000..12c5e87b4897
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_tty_ops.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/tty.h>
+#include <linux/tty_driver.h>
+#include <linux/tty_flip.h>
+
+#include "t7xx_port_proxy.h"
+#include "t7xx_tty_ops.h"
+
+#define GET_TTY_IDX(p) ((p)->minor - TTY_PORT_MINOR_BASE)
+
+static struct tty_ctl_block *tty_ctlb;
+static const struct tty_port_operations null_ops = {};
+
+static int ccci_tty_open(struct tty_struct *tty, struct file *filp)
+{
+ struct tty_port *pport;
+ int ret = 0;
+
+ pport = tty->driver->ports[tty->index];
+ if (pport)
+ ret = tty_port_open(pport, tty, filp);
+
+ return ret;
+}
+
+static void ccci_tty_close(struct tty_struct *tty, struct file *filp)
+{
+ struct tty_port *pport;
+
+ pport = tty->driver->ports[tty->index];
+ if (pport)
+ tty_port_close(pport, tty, filp);
+}
+
+static int ccci_tty_write(struct tty_struct *tty, const unsigned char *buf, int count)
+{
+ if (!(tty_ctlb && tty_ctlb->driver == tty->driver))
+ return -EFAULT;
+
+ return tty_ctlb->ccci_ops->send_pkt(tty->index, buf, count);
+}
+
+static unsigned int ccci_tty_write_room(struct tty_struct *tty)
+{
+ return CCCI_MTU;
+}
+
+static const struct tty_operations ccci_serial_ops = {
+ .open = ccci_tty_open,
+ .close = ccci_tty_close,
+ .write = ccci_tty_write,
+ .write_room = ccci_tty_write_room,
+};
+
+static int ccci_tty_port_create(struct t7xx_port *port, char *port_name)
+{
+ struct tty_driver *tty_drv;
+ struct tty_port *pport;
+ int minor = GET_TTY_IDX(port);
+
+ tty_drv = tty_ctlb->driver;
+ tty_drv->name = port_name;
+
+ pport = devm_kzalloc(port->dev, sizeof(*pport), GFP_KERNEL);
+ if (!pport)
+ return -ENOMEM;
+
+ tty_port_init(pport);
+ pport->ops = &null_ops;
+ tty_port_link_device(pport, tty_drv, minor);
+ tty_register_device(tty_drv, minor, NULL);
+ return 0;
+}
+
+static int ccci_tty_port_destroy(struct t7xx_port *port)
+{
+ struct tty_driver *tty_drv;
+ struct tty_port *pport;
+ struct tty_struct *tty;
+ int minor = port->minor;
+
+ tty_drv = tty_ctlb->driver;
+
+ pport = tty_drv->ports[minor];
+ if (!pport) {
+ dev_err(port->dev, "Invalid tty minor:%d\n", minor);
+ return -EINVAL;
+ }
+
+ tty = tty_port_tty_get(pport);
+ if (tty) {
+ tty_vhangup(tty);
+ tty_kref_put(tty);
+ }
+
+ tty_unregister_device(tty_drv, minor);
+ tty_port_destroy(pport);
+ tty_drv->ports[minor] = NULL;
+ return 0;
+}
+
+static int tty_ccci_init(struct tty_ccci_ops *ccci_info, struct t7xx_port *port)
+{
+ struct port_proxy *port_proxy_ptr;
+ struct tty_driver *tty_drv;
+ struct tty_ctl_block *ctlb;
+ int ret, port_nr;
+
+ ctlb = devm_kzalloc(port->dev, sizeof(*ctlb), GFP_KERNEL);
+ if (!ctlb)
+ return -ENOMEM;
+
+ ctlb->ccci_ops = devm_kzalloc(port->dev, sizeof(*ctlb->ccci_ops), GFP_KERNEL);
+ if (!ctlb->ccci_ops)
+ return -ENOMEM;
+
+ tty_ctlb = ctlb;
+ memcpy(ctlb->ccci_ops, ccci_info, sizeof(struct tty_ccci_ops));
+ port_nr = ctlb->ccci_ops->tty_num;
+
+ tty_drv = tty_alloc_driver(port_nr, 0);
+ if (IS_ERR(tty_drv))
+ return PTR_ERR(tty_drv);
+
+ /* init tty driver */
+ port_proxy_ptr = port->port_proxy;
+ ctlb->driver = tty_drv;
+ tty_drv->driver_name = ctlb->ccci_ops->name;
+ tty_drv->name = ctlb->ccci_ops->name;
+ tty_drv->major = port_proxy_ptr->major;
+ tty_drv->minor_start = TTY_PORT_MINOR_BASE;
+ tty_drv->type = TTY_DRIVER_TYPE_SERIAL;
+ tty_drv->subtype = SERIAL_TYPE_NORMAL;
+ tty_drv->flags = TTY_DRIVER_RESET_TERMIOS | TTY_DRIVER_REAL_RAW |
+ TTY_DRIVER_DYNAMIC_DEV | TTY_DRIVER_UNNUMBERED_NODE;
+ tty_drv->init_termios = tty_std_termios;
+ tty_drv->init_termios.c_iflag = 0;
+ tty_drv->init_termios.c_oflag = 0;
+ tty_drv->init_termios.c_cflag |= CLOCAL;
+ tty_drv->init_termios.c_lflag = 0;
+ tty_set_operations(tty_drv, &ccci_serial_ops);
+
+ ret = tty_register_driver(tty_drv);
+ if (ret < 0) {
+ dev_err(port->dev, "Could not register tty driver\n");
+ tty_driver_kref_put(tty_drv);
+ }
+
+ return ret;
+}
+
+static void tty_ccci_uninit(void)
+{
+ struct tty_driver *tty_drv;
+ struct tty_ctl_block *ctlb;
+
+ ctlb = tty_ctlb;
+ if (ctlb) {
+ tty_drv = ctlb->driver;
+ tty_unregister_driver(tty_drv);
+ tty_driver_kref_put(tty_drv);
+ tty_ctlb = NULL;
+ }
+}
+
+static int tty_rx_callback(struct t7xx_port *port, void *data, int len)
+{
+ struct tty_port *pport;
+ struct tty_driver *drv;
+ int tty_id = GET_TTY_IDX(port);
+ int copied = 0;
+
+ drv = tty_ctlb->driver;
+ pport = drv->ports[tty_id];
+
+ if (!pport) {
+ dev_err(port->dev, "tty port isn't created, the packet is dropped\n");
+ return len;
+ }
+
+ /* push data to tty port buffer */
+ copied = tty_insert_flip_string(pport, data, len);
+
+ /* trigger port buffer -> line discipline buffer */
+ tty_flip_buffer_push(pport);
+ return copied;
+}
+
+struct tty_dev_ops tty_ops = {
+ .init = &tty_ccci_init,
+ .tty_port_create = &ccci_tty_port_create,
+ .tty_port_destroy = &ccci_tty_port_destroy,
+ .rx_callback = &tty_rx_callback,
+ .exit = &tty_ccci_uninit,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_tty_ops.h b/drivers/net/wwan/t7xx/t7xx_tty_ops.h
new file mode 100644
index 000000000000..545cdd27d31a
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_tty_ops.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_TTY_OPS_H__
+#define __T7XX_TTY_OPS_H__
+
+#include <linux/types.h>
+
+#include "t7xx_port_proxy.h"
+
+#define TTY_PORT_MINOR_BASE 250
+
+struct tty_ccci_ops {
+ int tty_num;
+ unsigned char name[16];
+ unsigned int md_ability;
+ int (*send_pkt)(int tty_idx, const void *data, int len);
+};
+
+struct tty_ctl_block {
+ struct tty_driver *driver;
+ struct tty_ccci_ops *ccci_ops;
+ unsigned int md_sta;
+};
+
+struct tty_dev_ops {
+ /* tty port information */
+ bool tty_driver_status;
+ atomic_t port_installed_num;
+ int (*init)(struct tty_ccci_ops *ccci_info, struct t7xx_port *port);
+ int (*tty_port_create)(struct t7xx_port *port, char *port_name);
+ int (*tty_port_destroy)(struct t7xx_port *port);
+ int (*rx_callback)(struct t7xx_port *port, void *data, int len);
+ void (*exit)(void);
+};
+#endif /* __T7XX_TTY_OPS_H__ */
--
2.17.1

2021-11-01 03:58:16

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 10/14] net: wwan: t7xx: Introduce power management support

From: Haijun Liu <[email protected]>

Implements the suspend, resume, freeze, thaw, poweroff, and restore
`dev_pm_ops` callbacks.

From the host point of view, the t7xx driver is one entity, but the
device has several modules that need to be handled in different ways
during power management (PM) flows.
The driver uses the term 'PM entities' to refer to the CLDMA and DPMAIF
HW blocks that need to be managed during PM flows.
When a dev_pm_ops callback is invoked, the driver iterates over the list
of PM entities and calls the matching callback of each entry.

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 122 +++++-
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 1 +
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 98 +++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 1 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 16 +
drivers/net/wwan/t7xx/t7xx_pci.c | 435 +++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 45 +++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 2 +
8 files changed, 719 insertions(+), 1 deletion(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index a63c4b514944..bcee31a5af12 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1382,6 +1382,120 @@ void cldma_exception(enum cldma_id hif_id, enum hif_ex_stage stage)
}
}

+static void cldma_resume_early(struct mtk_pci_dev *mtk_dev, void *entity_param)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+ int qno_t;
+
+ md_ctrl = entity_param;
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ cldma_hw_restore(hw_info);
+
+ for (qno_t = 0; qno_t < CLDMA_TXQ_NUM; qno_t++) {
+ cldma_hw_set_start_address(hw_info, qno_t, md_ctrl->txq[qno_t].tx_xmit->gpd_addr,
+ false);
+ cldma_hw_set_start_address(hw_info, qno_t, md_ctrl->rxq[qno_t].tr_done->gpd_addr,
+ true);
+ }
+
+ cldma_enable_irq(md_ctrl);
+ cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, true);
+ md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+ cldma_hw_dismask_eqirq(hw_info, CLDMA_ALL_Q, true);
+ cldma_hw_dismask_txrxirq(hw_info, CLDMA_ALL_Q, true);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int cldma_resume(struct mtk_pci_dev *mtk_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+
+ md_ctrl = entity_param;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+ cldma_hw_dismask_txrxirq(&md_ctrl->hw_info, CLDMA_ALL_Q, false);
+ cldma_hw_dismask_eqirq(&md_ctrl->hw_info, CLDMA_ALL_Q, false);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ mhccif_mask_clr(mtk_dev, D2H_SW_INT_MASK);
+
+ return 0;
+}
+
+static void cldma_suspend_late(struct mtk_pci_dev *mtk_dev, void *entity_param)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+
+ md_ctrl = entity_param;
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ cldma_hw_mask_eqirq(hw_info, CLDMA_ALL_Q, true);
+ cldma_hw_mask_txrxirq(hw_info, CLDMA_ALL_Q, true);
+ md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+ cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, true);
+ cldma_clear_ip_busy(hw_info);
+ cldma_disable_irq(md_ctrl);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int cldma_suspend(struct mtk_pci_dev *mtk_dev, void *entity_param)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+
+ md_ctrl = entity_param;
+ hw_info = &md_ctrl->hw_info;
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ mhccif_mask_set(mtk_dev, D2H_SW_INT_MASK);
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ cldma_hw_mask_eqirq(hw_info, CLDMA_ALL_Q, false);
+ cldma_hw_mask_txrxirq(hw_info, CLDMA_ALL_Q, false);
+ md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+ cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, false);
+ md_ctrl->txq_started = 0;
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ return 0;
+}
+
+static int cldma_pm_init(struct cldma_ctrl *md_ctrl)
+{
+ md_ctrl->pm_entity = kzalloc(sizeof(*md_ctrl->pm_entity), GFP_KERNEL);
+ if (!md_ctrl->pm_entity)
+ return -ENOMEM;
+
+ md_ctrl->pm_entity->entity_param = md_ctrl;
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL1;
+ else
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL2;
+
+ md_ctrl->pm_entity->suspend = cldma_suspend;
+ md_ctrl->pm_entity->suspend_late = cldma_suspend_late;
+ md_ctrl->pm_entity->resume = cldma_resume;
+ md_ctrl->pm_entity->resume_early = cldma_resume_early;
+
+ return mtk_pci_pm_entity_register(md_ctrl->mtk_dev, md_ctrl->pm_entity);
+}
+
+static int cldma_pm_uninit(struct cldma_ctrl *md_ctrl)
+{
+ if (!md_ctrl->pm_entity)
+ return -EINVAL;
+
+ mtk_pci_pm_entity_unregister(md_ctrl->mtk_dev, md_ctrl->pm_entity);
+ kfree_sensitive(md_ctrl->pm_entity);
+ md_ctrl->pm_entity = NULL;
+ return 0;
+}
+
void cldma_hif_hw_init(enum cldma_id hif_id)
{
struct cldma_hw_info *hw_info;
@@ -1423,6 +1537,7 @@ static irqreturn_t cldma_isr_handler(int irq, void *data)
* cldma_init() - Initialize CLDMA
* @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
*
+ * allocate and initialize device power management entity
* initialize HIF TX/RX queue structure
* register CLDMA callback isr with PCIe driver
*
@@ -1433,7 +1548,7 @@ int cldma_init(enum cldma_id hif_id)
struct cldma_hw_info *hw_info;
struct cldma_ctrl *md_ctrl;
struct mtk_modem *md;
- int i;
+ int ret, i;

md_ctrl = md_cd_get(hif_id);
md = md_ctrl->mtk_dev->md;
@@ -1442,6 +1557,9 @@ int cldma_init(enum cldma_id hif_id)
md_ctrl->rxq_active = 0;
md_ctrl->is_late_init = false;
hw_info = &md_ctrl->hw_info;
+ ret = cldma_pm_init(md_ctrl);
+ if (ret)
+ return ret;

spin_lock_init(&md_ctrl->cldma_lock);
/* initialize HIF queue structure */
@@ -1516,4 +1634,6 @@ void cldma_exit(enum cldma_id hif_id)
md_ctrl->rxq[i].worker = NULL;
}
}
+
+ cldma_pm_uninit(md_ctrl);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
index ec713197c85d..09b55f75c905 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -102,6 +102,7 @@ struct cldma_ctrl {
struct dma_pool *gpd_dmapool;
struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+ struct md_pm_entity *pm_entity;
struct cldma_hw_info hw_info;
bool is_late_init;
int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
index e97e1a6082d3..e9aebcc7d0de 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -456,6 +456,98 @@ static int dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
return 0;
}

+static int dpmaif_suspend(struct mtk_pci_dev *mtk_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+
+ dpmaif_ctrl = param;
+ dpmaif_suspend_tx_sw_stop(dpmaif_ctrl);
+ dpmaif_hw_stop_tx_queue(dpmaif_ctrl);
+ dpmaif_hw_stop_rx_queue(dpmaif_ctrl);
+ dpmaif_disable_irq(dpmaif_ctrl);
+ dpmaif_suspend_rx_sw_stop(dpmaif_ctrl);
+ return 0;
+}
+
+static void dpmaif_unmask_dlq_interrupt(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int qno;
+
+ for (qno = 0; qno < DPMAIF_RXQ_NUM; qno++)
+ dpmaif_hw_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, qno);
+}
+
+static void dpmaif_pre_start_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rxq;
+ struct dpmaif_tx_queue *txq;
+ unsigned int que_cnt;
+
+ /* Enable UL SW active */
+ for (que_cnt = 0; que_cnt < DPMAIF_TXQ_NUM; que_cnt++) {
+ txq = &dpmaif_ctrl->txq[que_cnt];
+ txq->que_started = true;
+ }
+
+ /* Enable DL/RX SW active */
+ for (que_cnt = 0; que_cnt < DPMAIF_RXQ_NUM; que_cnt++) {
+ rxq = &dpmaif_ctrl->rxq[que_cnt];
+ rxq->que_started = true;
+ }
+}
+
+static int dpmaif_resume(struct mtk_pci_dev *mtk_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+
+ dpmaif_ctrl = param;
+ if (!dpmaif_ctrl)
+ return 0;
+
+ /* start dpmaif tx/rx queue SW */
+ dpmaif_pre_start_hw(dpmaif_ctrl);
+ /* unmask PCIe DPMAIF interrupt */
+ dpmaif_enable_irq(dpmaif_ctrl);
+ dpmaif_unmask_dlq_interrupt(dpmaif_ctrl);
+ dpmaif_start_hw(dpmaif_ctrl);
+ wake_up(&dpmaif_ctrl->tx_wq);
+ return 0;
+}
+
+static int dpmaif_pm_entity_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity;
+ int ret;
+
+ dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ INIT_LIST_HEAD(&dpmaif_pm_entity->entity);
+ dpmaif_pm_entity->suspend = &dpmaif_suspend;
+ dpmaif_pm_entity->suspend_late = NULL;
+ dpmaif_pm_entity->resume_early = NULL;
+ dpmaif_pm_entity->resume = &dpmaif_resume;
+ dpmaif_pm_entity->id = PM_ENTITY_ID_DATA;
+ dpmaif_pm_entity->entity_param = dpmaif_ctrl;
+
+ ret = mtk_pci_pm_entity_register(dpmaif_ctrl->mtk_dev, dpmaif_pm_entity);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev, "dpmaif register pm_entity fail\n");
+
+ return ret;
+}
+
+static int dpmaif_pm_entity_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity;
+ int ret;
+
+ dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ ret = mtk_pci_pm_entity_unregister(dpmaif_ctrl->mtk_dev, dpmaif_pm_entity);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev, "dpmaif register pm_entity fail\n");
+
+ return ret;
+}
+
int dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
{
int ret = 0;
@@ -515,6 +607,10 @@ struct dpmaif_ctrl *dpmaif_hif_init(struct mtk_pci_dev *mtk_dev,
dpmaif_ctrl->hif_hw_info.pcie_base = mtk_dev->base_addr.pcie_ext_reg_base -
mtk_dev->base_addr.pcie_dev_reg_trsl_addr;

+ ret = dpmaif_pm_entity_init(dpmaif_ctrl);
+ if (ret)
+ return NULL;
+
/* registers dpmaif irq by PCIe driver API */
dpmaif_platform_irq_init(dpmaif_ctrl);
dpmaif_disable_irq(dpmaif_ctrl);
@@ -523,6 +619,7 @@ struct dpmaif_ctrl *dpmaif_hif_init(struct mtk_pci_dev *mtk_dev,
ret = dpmaif_sw_init(dpmaif_ctrl);
if (ret) {
dev_err(&mtk_dev->pdev->dev, "DPMAIF SW initialization fail! %d\n", ret);
+ dpmaif_pm_entity_release(dpmaif_ctrl);
return NULL;
}

@@ -534,6 +631,7 @@ void dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
{
if (dpmaif_ctrl->dpmaif_sw_init_done) {
dpmaif_stop(dpmaif_ctrl);
+ dpmaif_pm_entity_release(dpmaif_ctrl);
dpmaif_sw_release(dpmaif_ctrl);
dpmaif_ctrl->dpmaif_sw_init_done = false;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
index 384a44acbf62..461ad4f4b836 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -245,6 +245,7 @@ struct dpmaif_callbacks {
struct dpmaif_ctrl {
struct device *dev;
struct mtk_pci_dev *mtk_dev;
+ struct md_pm_entity dpmaif_pm_entity;
enum dpmaif_state state;
bool dpmaif_sw_init_done;
struct dpmaif_hw_info hif_hw_info;
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index 927aeb39e313..e511a4117f47 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -18,6 +18,11 @@
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"

+#define D2H_INT_SR_ACK (D2H_INT_SUSPEND_ACK | \
+ D2H_INT_RESUME_ACK | \
+ D2H_INT_SUSPEND_ACK_AP | \
+ D2H_INT_RESUME_ACK_AP)
+
static void mhccif_clear_interrupts(struct mtk_pci_dev *mtk_dev, u32 mask)
{
void __iomem *mhccif_pbase;
@@ -52,6 +57,17 @@ static irqreturn_t mhccif_isr_thread(int irq, void *data)
/* Clear 2 & 1 level interrupts */
mhccif_clear_interrupts(mtk_dev, int_sts);

+ if (int_sts & D2H_INT_SR_ACK)
+ complete(&mtk_dev->pm_sr_ack);
+
+ /* Use the 1 bits to avoid low power bits */
+ iowrite32(L1_DISABLE_BIT(1), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+ int_sts = mhccif_read_sw_int_sts(mtk_dev);
+ if (!int_sts)
+ iowrite32(L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1),
+ IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
/* Enable corresponding interrupt */
mtk_pcie_mac_set_int(mtk_dev, MHCCIF_INT);
return IRQ_HANDLED;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 4b624b36e584..5afd8eb4203f 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -13,13 +13,18 @@
* Sreehari Kancharla <[email protected]>
*/

+#include <linux/delay.h>
#include <linux/device.h>
+#include <linux/iopoll.h>
+#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/spinlock.h>

#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_reg.h"
@@ -28,6 +33,430 @@
#define PCI_IREG_BASE 0
#define PCI_EREG_BASE 2

+#define PM_ACK_TIMEOUT_MS 1500
+#define PM_RESOURCE_POLL_TIMEOUT_US 10000
+#define PM_RESOURCE_POLL_STEP_US 100
+
+enum mtk_pm_state {
+ MTK_PM_EXCEPTION, /* Exception flow */
+ MTK_PM_INIT, /* Device initialized, but handshake not completed */
+ MTK_PM_SUSPENDED, /* Device in suspend state */
+ MTK_PM_RESUMED, /* Device in resume state */
+};
+
+static int mtk_wait_pm_config(struct mtk_pci_dev *mtk_dev)
+{
+ int ret, val;
+
+ ret = read_poll_timeout(ioread32, val,
+ (val & PCIE_RESOURCE_STATUS_MSK) == PCIE_RESOURCE_STATUS_MSK,
+ PM_RESOURCE_POLL_STEP_US, PM_RESOURCE_POLL_TIMEOUT_US, true,
+ IREG_BASE(mtk_dev) + PCIE_RESOURCE_STATUS);
+ if (ret == -ETIMEDOUT)
+ dev_err(&mtk_dev->pdev->dev, "PM configuration timed out\n");
+
+ return ret;
+}
+
+static int mtk_pci_pm_init(struct mtk_pci_dev *mtk_dev)
+{
+ struct pci_dev *pdev;
+
+ pdev = mtk_dev->pdev;
+
+ INIT_LIST_HEAD(&mtk_dev->md_pm_entities);
+
+ mutex_init(&mtk_dev->md_pm_entity_mtx);
+
+ init_completion(&mtk_dev->pm_sr_ack);
+
+ device_init_wakeup(&pdev->dev, true);
+
+ dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
+ DPM_FLAG_NO_DIRECT_COMPLETE);
+
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_INIT);
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ return mtk_wait_pm_config(mtk_dev);
+}
+
+void mtk_pci_pm_init_late(struct mtk_pci_dev *mtk_dev)
+{
+ /* enable the PCIe Resource Lock only after MD deep sleep is done */
+ mhccif_mask_clr(mtk_dev,
+ D2H_INT_SUSPEND_ACK |
+ D2H_INT_RESUME_ACK |
+ D2H_INT_SUSPEND_ACK_AP |
+ D2H_INT_RESUME_ACK_AP);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_RESUMED);
+}
+
+static int mtk_pci_pm_reinit(struct mtk_pci_dev *mtk_dev)
+{
+ /* The device is kept in FSM re-init flow
+ * so just roll back PM setting to the init setting.
+ */
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_INIT);
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+ return mtk_wait_pm_config(mtk_dev);
+}
+
+void mtk_pci_pm_exp_detected(struct mtk_pci_dev *mtk_dev)
+{
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+ mtk_wait_pm_config(mtk_dev);
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_EXCEPTION);
+}
+
+int mtk_pci_pm_entity_register(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity;
+
+ mutex_lock(&mtk_dev->md_pm_entity_mtx);
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ mutex_unlock(&mtk_dev->md_pm_entity_mtx);
+ return -EEXIST;
+ }
+ }
+
+ list_add_tail(&pm_entity->entity, &mtk_dev->md_pm_entities);
+ mutex_unlock(&mtk_dev->md_pm_entity_mtx);
+ return 0;
+}
+
+int mtk_pci_pm_entity_unregister(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity, *tmp_entity;
+
+ mutex_lock(&mtk_dev->md_pm_entity_mtx);
+
+ list_for_each_entry_safe(entity, tmp_entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ list_del(&pm_entity->entity);
+ mutex_unlock(&mtk_dev->md_pm_entity_mtx);
+ return 0;
+ }
+ }
+
+ mutex_unlock(&mtk_dev->md_pm_entity_mtx);
+
+ return -ENXIO;
+}
+
+static int __mtk_pci_pm_suspend(struct pci_dev *pdev)
+{
+ struct mtk_pci_dev *mtk_dev;
+ struct md_pm_entity *entity;
+ unsigned long wait_ret;
+ enum mtk_pm_id id;
+ int ret = 0;
+
+ mtk_dev = pci_get_drvdata(pdev);
+
+ if (atomic_read(&mtk_dev->md_pm_state) <= MTK_PM_INIT) {
+ dev_err(&pdev->dev,
+ "[PM] Exiting suspend, because handshake failure or in an exception\n");
+ return -EFAULT;
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+ ret = mtk_wait_pm_config(mtk_dev);
+ if (ret)
+ return ret;
+
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_SUSPENDED);
+
+ mtk_pcie_mac_clear_int(mtk_dev, SAP_RGU_INT);
+ mtk_dev->rgu_pci_irq_en = false;
+
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->suspend) {
+ ret = entity->suspend(mtk_dev, entity->entity_param);
+ if (ret) {
+ id = entity->id;
+ break;
+ }
+ }
+ }
+
+ if (ret) {
+ dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, id);
+
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (id == entity->id)
+ break;
+
+ if (entity->resume)
+ entity->resume(mtk_dev, entity->entity_param);
+ }
+
+ goto suspend_complete;
+ }
+
+ reinit_completion(&mtk_dev->pm_sr_ack);
+ /* send D3 enter request to MD */
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_SUSPEND_REQ);
+ wait_ret = wait_for_completion_timeout(&mtk_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-MD\n");
+
+ reinit_completion(&mtk_dev->pm_sr_ack);
+ /* send D3 enter request to sAP */
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_SUSPEND_REQ_AP);
+ wait_ret = wait_for_completion_timeout(&mtk_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-SAP\n");
+
+ /* Each HW's final work */
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->suspend_late)
+ entity->suspend_late(mtk_dev, entity->entity_param);
+ }
+
+suspend_complete:
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ if (ret) {
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_RESUMED);
+ mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
+ }
+
+ return ret;
+}
+
+static void mtk_pcie_interrupt_reinit(struct mtk_pci_dev *mtk_dev)
+{
+ mtk_pcie_mac_msix_cfg(mtk_dev, EXT_INT_NUM);
+
+ /* Disable interrupt first and let the IPs enable them */
+ iowrite32(MSIX_MSK_SET_ALL, IREG_BASE(mtk_dev) + IMASK_HOST_MSIX_CLR_GRP0_0);
+
+ /* Device disables PCIe interrupts during resume and
+ * following function will re-enable PCIe interrupts.
+ */
+ mtk_pcie_mac_interrupts_en(mtk_dev);
+ mtk_pcie_mac_set_int(mtk_dev, MHCCIF_INT);
+}
+
+static int mtk_pcie_reinit(struct mtk_pci_dev *mtk_dev, bool is_d3)
+{
+ int ret;
+
+ ret = pcim_enable_device(mtk_dev->pdev);
+ if (ret)
+ return ret;
+
+ mtk_pcie_mac_atr_init(mtk_dev);
+ mtk_pcie_interrupt_reinit(mtk_dev);
+
+ if (is_d3) {
+ mhccif_init(mtk_dev);
+ return mtk_pci_pm_reinit(mtk_dev);
+ }
+
+ return 0;
+}
+
+static int mtk_send_fsm_command(struct mtk_pci_dev *mtk_dev, u32 event)
+{
+ struct ccci_fsm_ctl *fsm_ctl;
+ int ret = -EINVAL;
+
+ fsm_ctl = fsm_get_entry();
+
+ switch (event) {
+ case CCCI_COMMAND_STOP:
+ ret = fsm_append_command(fsm_ctl, CCCI_COMMAND_STOP, 1);
+ break;
+
+ case CCCI_COMMAND_START:
+ mtk_pcie_mac_clear_int(mtk_dev, SAP_RGU_INT);
+ mtk_pcie_mac_clear_int_status(mtk_dev, SAP_RGU_INT);
+ mtk_dev->rgu_pci_irq_en = true;
+ mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
+ ret = fsm_append_command(fsm_ctl, CCCI_COMMAND_START, 0);
+ break;
+
+ default:
+ break;
+ }
+ if (ret)
+ dev_err(&mtk_dev->pdev->dev, "handling FSM CMD event: %u error: %d\n", event, ret);
+
+ return ret;
+}
+
+static int __mtk_pci_pm_resume(struct pci_dev *pdev, bool state_check)
+{
+ struct mtk_pci_dev *mtk_dev;
+ struct md_pm_entity *entity;
+ unsigned long wait_ret;
+ u32 resume_reg_state;
+ int ret = 0;
+
+ mtk_dev = pci_get_drvdata(pdev);
+
+ if (atomic_read(&mtk_dev->md_pm_state) <= MTK_PM_INIT) {
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ return 0;
+ }
+
+ /* Get the previous state */
+ resume_reg_state = ioread32(IREG_BASE(mtk_dev) + PCIE_PM_RESUME_STATE);
+
+ if (state_check) {
+ /* For D3/L3 resume, the device could boot so quickly that the
+ * initial value of the dummy register might be overwritten.
+ * Identify new boots if the ATR source address register is not initialized.
+ */
+ u32 atr_reg_val = ioread32(IREG_BASE(mtk_dev) +
+ ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR);
+
+ if (resume_reg_state == PM_RESUME_REG_STATE_L3 ||
+ (resume_reg_state == PM_RESUME_REG_STATE_INIT &&
+ atr_reg_val == ATR_SRC_ADDR_INVALID)) {
+ ret = mtk_send_fsm_command(mtk_dev, CCCI_COMMAND_STOP);
+ if (ret)
+ return ret;
+
+ ret = mtk_pcie_reinit(mtk_dev, true);
+ if (ret)
+ return ret;
+
+ mtk_clear_rgu_irq(mtk_dev);
+ return mtk_send_fsm_command(mtk_dev, CCCI_COMMAND_START);
+ } else if (resume_reg_state == PM_RESUME_REG_STATE_EXP ||
+ resume_reg_state == PM_RESUME_REG_STATE_L2_EXP) {
+ if (resume_reg_state == PM_RESUME_REG_STATE_L2_EXP) {
+ ret = mtk_pcie_reinit(mtk_dev, false);
+ if (ret)
+ return ret;
+ }
+
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_SUSPENDED);
+ mtk_dev->rgu_pci_irq_en = true;
+ mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
+
+ mhccif_mask_clr(mtk_dev,
+ D2H_INT_EXCEPTION_INIT |
+ D2H_INT_EXCEPTION_INIT_DONE |
+ D2H_INT_EXCEPTION_CLEARQ_DONE |
+ D2H_INT_EXCEPTION_ALLQ_RESET |
+ D2H_INT_PORT_ENUM);
+
+ return ret;
+ } else if (resume_reg_state == PM_RESUME_REG_STATE_L2) {
+ ret = mtk_pcie_reinit(mtk_dev, false);
+ if (ret)
+ return ret;
+
+ } else if (resume_reg_state != PM_RESUME_REG_STATE_L1 &&
+ resume_reg_state != PM_RESUME_REG_STATE_INIT) {
+ ret = mtk_send_fsm_command(mtk_dev, CCCI_COMMAND_STOP);
+ if (ret)
+ return ret;
+
+ mtk_clear_rgu_irq(mtk_dev);
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_SUSPENDED);
+ return 0;
+ }
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);
+ mtk_wait_pm_config(mtk_dev);
+
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->resume_early)
+ entity->resume_early(mtk_dev, entity->entity_param);
+ }
+
+ reinit_completion(&mtk_dev->pm_sr_ack);
+ /* send D3 exit request to MD */
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_RESUME_REQ);
+ wait_ret = wait_for_completion_timeout(&mtk_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Timed out waiting for device MD resume ACK\n");
+
+ reinit_completion(&mtk_dev->pm_sr_ack);
+ /* send D3 exit request to sAP */
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_RESUME_REQ_AP);
+ wait_ret = wait_for_completion_timeout(&mtk_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Timed out waiting for device SAP resume ACK\n");
+
+ /* Each HW final restore works */
+ list_for_each_entry(entity, &mtk_dev->md_pm_entities, entity) {
+ if (entity->resume) {
+ ret = entity->resume(mtk_dev, entity->entity_param);
+ if (ret)
+ dev_err(&pdev->dev, "[PM] Resume entry ID: %d err: %d\n",
+ entity->id, ret);
+ }
+ }
+
+ mtk_dev->rgu_pci_irq_en = true;
+ mtk_pcie_mac_set_int(mtk_dev, SAP_RGU_INT);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&mtk_dev->md_pm_state, MTK_PM_RESUMED);
+
+ return ret;
+}
+
+static int mtk_pci_pm_resume_noirq(struct device *dev)
+{
+ struct mtk_pci_dev *mtk_dev;
+ struct pci_dev *pdev;
+ void __iomem *pbase;
+
+ pdev = to_pci_dev(dev);
+ mtk_dev = pci_get_drvdata(pdev);
+ pbase = IREG_BASE(mtk_dev);
+
+ /* disable interrupt first and let the IPs enable them */
+ iowrite32(MSIX_MSK_SET_ALL, pbase + IMASK_HOST_MSIX_CLR_GRP0_0);
+
+ return 0;
+}
+
+static void mtk_pci_shutdown(struct pci_dev *pdev)
+{
+ __mtk_pci_pm_suspend(pdev);
+}
+
+static int mtk_pci_pm_suspend(struct device *dev)
+{
+ return __mtk_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int mtk_pci_pm_resume(struct device *dev)
+{
+ return __mtk_pci_pm_resume(to_pci_dev(dev), true);
+}
+
+static int mtk_pci_pm_thaw(struct device *dev)
+{
+ return __mtk_pci_pm_resume(to_pci_dev(dev), false);
+}
+
+static const struct dev_pm_ops mtk_pci_pm_ops = {
+ .suspend = mtk_pci_pm_suspend,
+ .resume = mtk_pci_pm_resume,
+ .resume_noirq = mtk_pci_pm_resume_noirq,
+ .freeze = mtk_pci_pm_suspend,
+ .thaw = mtk_pci_pm_thaw,
+ .poweroff = mtk_pci_pm_suspend,
+ .restore = mtk_pci_pm_resume,
+ .restore_noirq = mtk_pci_pm_resume_noirq,
+};
+
static int mtk_request_irq(struct pci_dev *pdev)
{
struct mtk_pci_dev *mtk_dev;
@@ -165,6 +594,10 @@ static int mtk_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (ret)
return ret;

+ ret = mtk_pci_pm_init(mtk_dev);
+ if (ret)
+ goto err;
+
mtk_pcie_mac_atr_init(mtk_dev);
mtk_pci_infracfg_ao_calc(mtk_dev);
mhccif_init(mtk_dev);
@@ -219,6 +652,8 @@ static struct pci_driver mtk_pci_driver = {
.id_table = t7xx_pci_table,
.probe = mtk_pci_probe,
.remove = mtk_pci_remove,
+ .driver.pm = &mtk_pci_pm_ops,
+ .shutdown = mtk_pci_shutdown,
};

static int __init mtk_pci_init(void)
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index bc6f0a83f546..7ce429db240f 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -14,6 +14,8 @@
#ifndef __T7XX_PCI_H__
#define __T7XX_PCI_H__

+#include <linux/completion.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/types.h>

@@ -45,6 +47,10 @@ typedef irqreturn_t (*mtk_intr_callback)(int irq, void *param);
* @pdev: pci device
* @base_addr: memory base addresses of HW components
* @md: modem interface
+ * @md_pm_entities: list of pm entities
+ * @md_pm_entity_mtx: protects md_pm_entities list
+ * @pm_sr_ack: ack from the device when it has entered or exited sleep
+ * @md_pm_state: state for resume/suspend
* @ccmni_ctlb: context structure used to control the network data path
* @rgu_pci_irq_en: RGU callback isr registered and active
* @pools: pre allocated skb pools
@@ -58,9 +64,48 @@ struct mtk_pci_dev {
struct mtk_addr_base base_addr;
struct mtk_modem *md;

+ /* Low Power Items */
+ struct list_head md_pm_entities;
+ struct mutex md_pm_entity_mtx; /* protects md_pm_entities list */
+ struct completion pm_sr_ack;
+ atomic_t md_pm_state;
+
struct ccmni_ctl_block *ccmni_ctlb;
bool rgu_pci_irq_en;
struct skb_pools pools;
};

+enum mtk_pm_id {
+ PM_ENTITY_ID_CTRL1,
+ PM_ENTITY_ID_CTRL2,
+ PM_ENTITY_ID_DATA,
+};
+
+/* struct md_pm_entity - device power management entity
+ * @entity: list of PM Entities
+ * @suspend: callback invoked before sending D3 request to device
+ * @suspend_late: callback invoked after getting D3 ACK from device
+ * @resume_early: callback invoked before sending the resume request to device
+ * @resume: callback invoked after getting resume ACK from device
+ * @id: unique PM entity identifier
+ * @entity_param: parameter passed to the registered callbacks
+ *
+ * This structure is used to indicate PM operations required by internal
+ * HW modules such as CLDMA and DPMAIF.
+ */
+struct md_pm_entity {
+ struct list_head entity;
+ int (*suspend)(struct mtk_pci_dev *mtk_dev, void *entity_param);
+ void (*suspend_late)(struct mtk_pci_dev *mtk_dev, void *entity_param);
+ void (*resume_early)(struct mtk_pci_dev *mtk_dev, void *entity_param);
+ int (*resume)(struct mtk_pci_dev *mtk_dev, void *entity_param);
+ enum mtk_pm_id id;
+ void *entity_param;
+};
+
+int mtk_pci_pm_entity_register(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity);
+int mtk_pci_pm_entity_unregister(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity);
+void mtk_pci_pm_init_late(struct mtk_pci_dev *mtk_dev);
+void mtk_pci_pm_exp_detected(struct mtk_pci_dev *mtk_dev);
+
#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index 5a049f0f6bfc..dd4aa3463359 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -173,6 +173,7 @@ static void fsm_routine_exception(struct ccci_fsm_ctl *ctl, struct ccci_fsm_comm

case EXCEPTION_EVENT:
fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+ mtk_pci_pm_exp_detected(ctl->md->mtk_dev);
mtk_md_exception_handshake(ctl->md);
cnt = 0;
while (cnt < MD_EX_REC_OK_TIMEOUT_MS / EVENT_POLL_INTERVAL_MS) {
@@ -339,6 +340,7 @@ static void fsm_routine_starting(struct ccci_fsm_ctl *ctl)

fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
} else {
+ mtk_pci_pm_init_late(md->mtk_dev);
fsm_routine_ready(ctl);
}
}
--
2.17.1

2021-11-01 03:58:20

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 05/14] net: wwan: t7xx: Add control port

From: Haijun Liu <[email protected]>

The control port implements driver control messages such as modem-host
handshaking, port enumeration control, and exception message handling.

The handshaking process between the driver and the modem happens during
the init sequence. It consists of exchanging the lists of runtime
features supported by each side, so that modem and host agree on what to
enable, including port enumeration. Further features can be enabled and
controlled through this handshaking process.
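
Condensed into a sketch (illustration only; it reuses the helper functions
introduced below in t7xx_modem_ops.c and is not meant to be applied as-is),
the handshake boils down to three control messages:

/* Handshake ordering sketch, not part of this patch. */
static int core_handshake_sketch(struct core_sys_info *core, struct device *dev,
				 void *hs2_data, int hs2_len)
{
	int ret;

	/* HS1: host advertises its feature_set to the modem (CTL_ID_HS1_MSG) */
	prepare_host_rt_data_query(core);

	/* HS2: modem replies with its runtime feature data (CTL_ID_HS2_MSG),
	 * which reaches the handshake handler as an FSM event payload.
	 */
	ret = parse_host_rt_data(core, dev, hs2_data, hs2_len);
	if (ret)
		return ret;

	/* HS3: host sends back the agreed runtime features (CTL_ID_HS3_MSG) */
	return prepare_device_rt_data(core, dev, hs2_data, hs2_len);
}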

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 244 ++++++++++++++++++++-
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 2 +
drivers/net/wwan/t7xx/t7xx_monitor.h | 3 +
drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 150 +++++++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 9 +-
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 4 +
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 3 +
8 files changed, 411 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 1f117f36124a..b0fac99420a0 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -12,3 +12,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_cldma.o \
t7xx_hif_cldma.o \
t7xx_port_proxy.o \
+ t7xx_port_ctrl_msg.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index a814705dab3a..612be5cbcbd2 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -270,6 +270,242 @@ static void md_sys_sw_init(struct mtk_pci_dev *mtk_dev)
mtk_pcie_register_rgu_isr(mtk_dev);
}

+struct feature_query {
+ u32 head_pattern;
+ u8 feature_set[FEATURE_COUNT];
+ u32 tail_pattern;
+};
+
+static void prepare_host_rt_data_query(struct core_sys_info *core)
+{
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct feature_query *ft_query;
+ struct ccci_header *ccci_h;
+ struct sk_buff *skb;
+ size_t packet_size;
+
+ packet_size = sizeof(struct ccci_header) +
+ sizeof(struct ctrl_msg_header) +
+ sizeof(struct feature_query);
+ skb = ccci_alloc_skb(packet_size, GFS_BLOCKING);
+ if (!skb)
+ return;
+
+ skb_put(skb, packet_size);
+ /* fill CCCI header */
+ ccci_h = (struct ccci_header *)skb->data;
+ ccci_h->data[0] = 0;
+ ccci_h->data[1] = packet_size;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, core->ctl_port->tx_ch);
+ ccci_h->status &= ~HDR_FLD_SEQ;
+ ccci_h->reserved = 0;
+ /* fill control message */
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data +
+ sizeof(struct ccci_header));
+ ctrl_msg_h->ctrl_msg_id = CTL_ID_HS1_MSG;
+ ctrl_msg_h->reserved = 0;
+ ctrl_msg_h->data_length = sizeof(struct feature_query);
+ /* fill feature query */
+ ft_query = (struct feature_query *)(skb->data +
+ sizeof(struct ccci_header) +
+ sizeof(struct ctrl_msg_header));
+ ft_query->head_pattern = MD_FEATURE_QUERY_ID;
+ memcpy(ft_query->feature_set, core->feature_set, FEATURE_COUNT);
+ ft_query->tail_pattern = MD_FEATURE_QUERY_ID;
+ /* send HS1 message to device */
+ port_proxy_send_skb(core->ctl_port, skb, 0);
+}
+
+static int prepare_device_rt_data(struct core_sys_info *core, struct device *dev,
+ void *data, int data_length)
+{
+ struct mtk_runtime_feature rt_feature;
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct feature_query *md_feature;
+ struct ccci_header *ccci_h;
+ struct sk_buff *skb;
+ int packet_size = 0;
+ char *rt_data;
+ int i;
+
+ skb = ccci_alloc_skb(MTK_SKB_4K, GFS_BLOCKING);
+ if (!skb)
+ return -ENOMEM;
+
+ /* fill CCCI header */
+ ccci_h = (struct ccci_header *)skb->data;
+ ccci_h->data[0] = 0;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, core->ctl_port->tx_ch);
+ ccci_h->status &= ~HDR_FLD_SEQ;
+ ccci_h->reserved = 0;
+ /* fill control message header */
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(struct ccci_header));
+ ctrl_msg_h->ctrl_msg_id = CTL_ID_HS3_MSG;
+ ctrl_msg_h->reserved = 0;
+ rt_data = (skb->data + sizeof(struct ccci_header) + sizeof(struct ctrl_msg_header));
+
+ /* parse MD runtime data query */
+ md_feature = data;
+ if (md_feature->head_pattern != MD_FEATURE_QUERY_ID ||
+ md_feature->tail_pattern != MD_FEATURE_QUERY_ID) {
+ dev_err(dev, "md_feature pattern is wrong: head 0x%x, tail 0x%x\n",
+ md_feature->head_pattern, md_feature->tail_pattern);
+ return -EINVAL;
+ }
+
+ /* fill runtime feature */
+ for (i = 0; i < FEATURE_COUNT; i++) {
+ u8 md_feature_mask = FIELD_GET(FEATURE_MSK, md_feature->feature_set[i]);
+
+ memset(&rt_feature, 0, sizeof(rt_feature));
+ rt_feature.feature_id = i;
+ switch (md_feature_mask) {
+ case MTK_FEATURE_DOES_NOT_EXIST:
+ case MTK_FEATURE_MUST_BE_SUPPORTED:
+ rt_feature.support_info = md_feature->feature_set[i];
+ break;
+
+ default:
+ break;
+ }
+
+ if (FIELD_GET(FEATURE_MSK, rt_feature.support_info) !=
+ MTK_FEATURE_MUST_BE_SUPPORTED) {
+ rt_feature.data_len = 0;
+ memcpy(rt_data, &rt_feature, sizeof(struct mtk_runtime_feature));
+ rt_data += sizeof(struct mtk_runtime_feature);
+ }
+
+ packet_size += (sizeof(struct mtk_runtime_feature) + rt_feature.data_len);
+ }
+
+ ctrl_msg_h->data_length = packet_size;
+ ccci_h->data[1] = packet_size + sizeof(struct ctrl_msg_header) +
+ sizeof(struct ccci_header);
+ skb_put(skb, ccci_h->data[1]);
+ /* send HS3 message to device */
+ port_proxy_send_skb(core->ctl_port, skb, 0);
+ return 0;
+}
+
+static int parse_host_rt_data(struct core_sys_info *core, struct device *dev,
+ void *data, int data_length)
+{
+ enum mtk_feature_support_type ft_spt_st, ft_spt_cfg;
+ struct mtk_runtime_feature *rt_feature;
+ int i, offset;
+
+ offset = sizeof(struct feature_query);
+ for (i = 0; i < FEATURE_COUNT && offset < data_length; i++) {
+ rt_feature = (struct mtk_runtime_feature *)(data + offset);
+ ft_spt_st = FIELD_GET(FEATURE_MSK, rt_feature->support_info);
+ ft_spt_cfg = FIELD_GET(FEATURE_MSK, core->feature_set[i]);
+ offset += sizeof(struct mtk_runtime_feature) + rt_feature->data_len;
+
+ if (ft_spt_cfg == MTK_FEATURE_MUST_BE_SUPPORTED) {
+ if (ft_spt_st != MTK_FEATURE_MUST_BE_SUPPORTED) {
+ dev_err(dev, "mismatch: runtime feature%d (%d,%d)\n",
+ i, ft_spt_cfg, ft_spt_st);
+ return -EINVAL;
+ }
+
+ if (i == RT_ID_MD_PORT_ENUM)
+ port_proxy_node_control(dev, (struct port_msg *)rt_feature->data);
+ }
+ }
+
+ return 0;
+}
+
+static void core_reset(struct mtk_modem *md)
+{
+ struct ccci_fsm_ctl *fsm_ctl;
+
+ atomic_set(&md->core_md.ready, 0);
+ fsm_ctl = fsm_get_entry();
+
+ if (!fsm_ctl) {
+ dev_err(&md->mtk_dev->pdev->dev, "fsm ctl is not initialized\n");
+ return;
+ }
+
+ /* append HS2_EXIT event to cancel the ongoing handshake in core_hk_handler() */
+ if (atomic_read(&md->core_md.handshake_ongoing))
+ fsm_append_event(fsm_ctl, CCCI_EVENT_MD_HS2_EXIT, NULL, 0);
+
+ atomic_set(&md->core_md.handshake_ongoing, 0);
+}
+
+static void core_hk_handler(struct mtk_modem *md, struct ccci_fsm_ctl *ctl,
+ enum ccci_fsm_event_state event_id,
+ enum ccci_fsm_event_state err_detect)
+{
+ struct core_sys_info *core_info;
+ struct ccci_fsm_event *event, *event_next;
+ unsigned long flags;
+ struct device *dev;
+ int ret;
+
+ core_info = &md->core_md;
+ dev = &md->mtk_dev->pdev->dev;
+ prepare_host_rt_data_query(core_info);
+ while (!kthread_should_stop()) {
+ bool event_received = false;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, event_next, &ctl->event_queue, entry) {
+ if (event->event_id == err_detect) {
+ list_del(&event->entry);
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+ dev_err(dev, "core handshake error event received\n");
+ goto exit;
+ }
+
+ if (event->event_id == event_id) {
+ list_del(&event->entry);
+ event_received = true;
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+
+ if (event_received)
+ break;
+
+ wait_event_interruptible(ctl->event_wq, !list_empty(&ctl->event_queue) ||
+ kthread_should_stop());
+ if (kthread_should_stop())
+ goto exit;
+ }
+
+ if (atomic_read(&ctl->exp_flg))
+ goto exit;
+
+ ret = parse_host_rt_data(core_info, dev, event->data, event->length);
+ if (ret) {
+ dev_err(dev, "host runtime data parsing fail:%d\n", ret);
+ goto exit;
+ }
+
+ if (atomic_read(&ctl->exp_flg))
+ goto exit;
+
+ ret = prepare_device_rt_data(core_info, dev, event->data, event->length);
+ if (ret) {
+ dev_err(dev, "device runtime data parsing fail:%d", ret);
+ goto exit;
+ }
+
+ atomic_set(&core_info->ready, 1);
+ atomic_set(&core_info->handshake_ongoing, 0);
+ wake_up(&ctl->async_hk_wq);
+exit:
+ kfree(event);
+}
+
static void md_hk_wq(struct work_struct *work)
{
struct ccci_fsm_ctl *ctl;
@@ -277,12 +513,14 @@ static void md_hk_wq(struct work_struct *work)

ctl = fsm_get_entry();

+ /* clear the HS2 EXIT event appended in core_reset() */
+ fsm_clear_event(ctl, CCCI_EVENT_MD_HS2_EXIT);
cldma_switch_cfg(ID_CLDMA1);
cldma_start(ID_CLDMA1);
fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
md = container_of(work, struct mtk_modem, handshake_work);
- atomic_set(&md->core_md.ready, 1);
- wake_up(&ctl->async_hk_wq);
+ atomic_set(&md->core_md.handshake_ongoing, 1);
+ core_hk_handler(md, ctl, CCCI_EVENT_MD_HS2, CCCI_EVENT_MD_HS2_EXIT);
}

void mtk_md_event_notify(struct mtk_modem *md, enum md_event_id evt_id)
@@ -394,6 +632,7 @@ static struct mtk_modem *ccci_md_alloc(struct mtk_pci_dev *mtk_dev)
md->mtk_dev = mtk_dev;
mtk_dev->md = md;
atomic_set(&md->core_md.ready, 0);
+ atomic_set(&md->core_md.handshake_ongoing, 0);
md->handshake_wq = alloc_workqueue("%s",
WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
0, "md_hk_wq");
@@ -418,6 +657,7 @@ void mtk_md_reset(struct mtk_pci_dev *mtk_dev)
cldma_reset(ID_CLDMA1);
port_proxy_reset(&mtk_dev->pdev->dev);
md->md_init_finish = true;
+ core_reset(md);
}

/**
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
index 11aa29ad023f..4f08d1330f36 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.h
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -60,7 +60,9 @@ enum md_event_id {

struct core_sys_info {
atomic_t ready;
+ atomic_t handshake_ongoing;
u8 feature_set[FEATURE_COUNT];
+ struct t7xx_port *ctl_port;
};

struct mtk_modem {
diff --git a/drivers/net/wwan/t7xx/t7xx_monitor.h b/drivers/net/wwan/t7xx/t7xx_monitor.h
index e03e044e685a..30b06bcbf524 100644
--- a/drivers/net/wwan/t7xx/t7xx_monitor.h
+++ b/drivers/net/wwan/t7xx/t7xx_monitor.h
@@ -36,9 +36,12 @@ enum ccci_fsm_state {

enum ccci_fsm_event_state {
CCCI_EVENT_INVALID,
+ CCCI_EVENT_MD_HS2,
CCCI_EVENT_MD_EX,
CCCI_EVENT_MD_EX_REC_OK,
CCCI_EVENT_MD_EX_PASS,
+ CCCI_EVENT_MD_HS2_EXIT,
+ CCCI_EVENT_AP_HS2_EXIT,
CCCI_EVENT_MAX
};

diff --git a/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
new file mode 100644
index 000000000000..6bd0e9682127
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
@@ -0,0 +1,150 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/kthread.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_monitor.h"
+#include "t7xx_port_proxy.h"
+
+static void fsm_ee_message_handler(struct sk_buff *skb)
+{
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct ccci_fsm_ctl *ctl;
+ enum md_state md_state;
+ struct device *dev;
+
+ ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+ md_state = ccci_fsm_get_md_state();
+ ctl = fsm_get_entry();
+ dev = &ctl->md->mtk_dev->pdev->dev;
+ if (md_state != MD_STATE_EXCEPTION) {
+ dev_err(dev, "receive invalid MD_EX %x when MD state is %d\n",
+ ctrl_msg_h->reserved, md_state);
+ return;
+ }
+
+ switch (ctrl_msg_h->ctrl_msg_id) {
+ case CTL_ID_MD_EX:
+ if (ctrl_msg_h->reserved != MD_EX_CHK_ID) {
+ dev_err(dev, "receive invalid MD_EX %x\n", ctrl_msg_h->reserved);
+ } else {
+ port_proxy_send_msg_to_md(CCCI_CONTROL_TX, CTL_ID_MD_EX, MD_EX_CHK_ID);
+ fsm_append_event(ctl, CCCI_EVENT_MD_EX, NULL, 0);
+ }
+
+ break;
+
+ case CTL_ID_MD_EX_ACK:
+ if (ctrl_msg_h->reserved != MD_EX_CHK_ACK_ID)
+ dev_err(dev, "receive invalid MD_EX_ACK %x\n", ctrl_msg_h->reserved);
+ else
+ fsm_append_event(ctl, CCCI_EVENT_MD_EX_REC_OK, NULL, 0);
+
+ break;
+
+ case CTL_ID_MD_EX_PASS:
+ fsm_append_event(ctl, CCCI_EVENT_MD_EX_PASS, NULL, 0);
+ break;
+
+ case CTL_ID_DRV_VER_ERROR:
+ dev_err(dev, "AP/MD driver version mismatch\n");
+ }
+}
+
+static void control_msg_handler(struct t7xx_port *port, struct sk_buff *skb)
+{
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct ccci_fsm_ctl *ctl;
+ int ret = 0;
+
+ skb_pull(skb, sizeof(struct ccci_header));
+ ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+ ctl = fsm_get_entry();
+
+ switch (ctrl_msg_h->ctrl_msg_id) {
+ case CTL_ID_HS2_MSG:
+ skb_pull(skb, sizeof(struct ctrl_msg_header));
+ if (port->rx_ch == CCCI_CONTROL_RX)
+ fsm_append_event(ctl, CCCI_EVENT_MD_HS2,
+ skb->data, ctrl_msg_h->data_length);
+
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ break;
+
+ case CTL_ID_MD_EX:
+ case CTL_ID_MD_EX_ACK:
+ case CTL_ID_MD_EX_PASS:
+ case CTL_ID_DRV_VER_ERROR:
+ fsm_ee_message_handler(skb);
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+ break;
+
+ case CTL_ID_PORT_ENUM:
+ skb_pull(skb, sizeof(struct ctrl_msg_header));
+ ret = port_proxy_node_control(port->dev, (struct port_msg *)skb->data);
+ if (!ret)
+ port_proxy_send_msg_to_md(CCCI_CONTROL_TX, CTL_ID_PORT_ENUM, 0);
+ else
+ port_proxy_send_msg_to_md(CCCI_CONTROL_TX,
+ CTL_ID_PORT_ENUM, PORT_ENUM_VER_MISMATCH);
+
+ break;
+
+ default:
+ dev_err(port->dev, "unknown control message ID to FSM %x\n",
+ ctrl_msg_h->ctrl_msg_id);
+ break;
+ }
+
+ if (ret)
+ dev_err(port->dev, "%s control message handle error: %d\n", port->name, ret);
+}
+
+static int port_ctl_init(struct t7xx_port *port)
+{
+ port->skb_handler = &control_msg_handler;
+ port->thread = kthread_run(port_kthread_handler, port, "%s", port->name);
+ if (IS_ERR(port->thread)) {
+ dev_err(port->dev, "failed to start port control thread\n");
+ return PTR_ERR(port->thread);
+ }
+
+ port->rx_length_th = MAX_CTRL_QUEUE_LENGTH;
+ port->skb_from_pool = true;
+ return 0;
+}
+
+static void port_ctl_uninit(struct t7xx_port *port)
+{
+ unsigned long flags;
+ struct sk_buff *skb;
+
+ if (port->thread)
+ kthread_stop(port->thread);
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
+ ccci_free_skb(&port->mtk_dev->pools, skb);
+
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+}
+
+struct port_ops ctl_port_ops = {
+ .init = &port_ctl_init,
+ .recv_skb = &port_recv_skb,
+ .uninit = &port_ctl_uninit,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index a3795d30c317..39e779531068 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -50,10 +50,9 @@ static struct port_proxy *port_prox;
i < (proxy)->port_number; \
i++, (p) = &(proxy)->ports[i])

-static struct port_ops dummy_port_ops;
-
static struct t7xx_port md_ccci_ports[] = {
- {0, 0, 0, 0, 0, 0, ID_CLDMA1, 0, &dummy_port_ops, 0xff, "dummy_port",},
+ {CCCI_CONTROL_TX, CCCI_CONTROL_RX, 0, 0, 0, 0, ID_CLDMA1,
+ 0, &ctl_port_ops, 0xff, "ccci_ctrl",},
};

static int port_netlink_send_msg(struct t7xx_port *port, int grp,
@@ -586,6 +585,10 @@ static void proxy_init_all_ports(struct mtk_modem *md)

for_each_proxy_port(i, port, port_prox) {
port_struct_init(port);
+ if (port->tx_ch == CCCI_CONTROL_TX) {
+ port_prox->ctl_port = port;
+ md->core_md.ctl_port = port;
+ }

port->major = port_prox->major;
port->minor_base = port_prox->minor_base;
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index bd700321ca60..bc6fef0d7cf6 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -45,6 +45,7 @@ struct port_proxy {
unsigned int major;
unsigned int minor_base;
struct t7xx_port *ports;
+ struct t7xx_port *ctl_port;
struct t7xx_port *dedicated_ports[CLDMA_NUM][MTK_MAX_QUEUE_NUM];
/* port list of each RX channel, for RX dispatching */
struct list_head rx_ch_ports[CCCI_MAX_CH_ID];
@@ -80,6 +81,9 @@ struct port_msg {
#define PORT_ENUM_TAIL_PATTERN 0xa5a5a5a5
#define PORT_ENUM_VER_MISMATCH 0x00657272

+/* port operations mapping */
+extern struct port_ops ctl_port_ops;
+
int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool);
void port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
int port_proxy_node_control(struct device *dev, struct port_msg *port_msg);
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index 4f9e8cfa2f94..5a049f0f6bfc 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -334,6 +334,9 @@ static void fsm_routine_starting(struct ccci_fsm_ctl *ctl)

if (!atomic_read(&md->core_md.ready)) {
dev_err(dev, "MD handshake timeout\n");
+ if (atomic_read(&md->core_md.handshake_ongoing))
+ fsm_append_event(ctl, CCCI_EVENT_MD_HS2_EXIT, NULL, 0);
+
fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
} else {
fsm_routine_ready(ctl);
--
2.17.1

2021-11-01 03:58:23

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 12/14] net: wwan: t7xx: Device deep sleep lock/unlock

From: Haijun Liu <[email protected]>

Introduce the mechanism to lock/unlock the device's 'deep sleep' mode.
When the PCIe link state is L1.2 or L2, the host can still keep the
device in D0 state from the host's point of view. At the same time, if
the device's 'deep sleep' mode is unlocked, the device will go into
'deep sleep' while it is still in D0 state on the host side.
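
The intended usage is reference counted: a data-path caller takes the
lock, waits for it to take effect, performs the hardware access, and
then releases the lock. Below is a minimal sketch of that pattern,
mirroring the cldma_send_skb() and DPMAIF changes in this patch;
example_hw_access() is a hypothetical caller, not part of the driver.

#include "t7xx_pci.h"

static int example_hw_access(struct mtk_pci_dev *mtk_dev)
{
        int ret = 0;

        /* Take a counted reference on the 'deep sleep' lock */
        mtk_pci_disable_sleep(mtk_dev);

        /* Wait for the device ACK (D2H_INT_DS_LOCK_ACK) that deep sleep is disabled */
        if (!mtk_pci_sleep_disable_complete(mtk_dev)) {
                ret = -EBUSY;
                goto exit;
        }

        /* The HW can be accessed here; the device will not enter deep sleep */

exit:
        /* Drop the reference; deep sleep is re-enabled once the count reaches zero */
        mtk_pci_enable_sleep(mtk_dev);
        return ret;
}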

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 11 +++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 24 ++++--
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 39 ++++++---
drivers/net/wwan/t7xx/t7xx_mhccif.c | 3 +
drivers/net/wwan/t7xx/t7xx_pci.c | 95 ++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 10 +++
6 files changed, 166 insertions(+), 16 deletions(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 18c1fcccd9dc..4299061fe50d 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1147,6 +1147,7 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
if (val < 0 && val != -EACCES)
return val;

+ mtk_pci_disable_sleep(md_ctrl->mtk_dev);
if (qno >= CLDMA_TXQ_NUM) {
ret = -EINVAL;
goto exit;
@@ -1182,6 +1183,11 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
queue->tx_xmit = cldma_ring_step_forward(queue->tr_ring, tx_req);
spin_unlock_irqrestore(&queue->ring_lock, flags);

+ if (!mtk_pci_sleep_disable_complete(md_ctrl->mtk_dev)) {
+ ret = -EBUSY;
+ break;
+ }
+
spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
cldma_hw_start_send(md_ctrl, qno);
spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
@@ -1189,6 +1195,10 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
}

spin_unlock_irqrestore(&queue->ring_lock, flags);
+ if (!mtk_pci_sleep_disable_complete(md_ctrl->mtk_dev)) {
+ ret = -EBUSY;
+ break;
+ }

/* check CLDMA status */
if (!cldma_hw_queue_status(&md_ctrl->hw_info, qno, false)) {
@@ -1210,6 +1220,7 @@ int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_
} while (!ret);

exit:
+ mtk_pci_enable_sleep(md_ctrl->mtk_dev);
pm_runtime_mark_last_busy(md_ctrl->dev);
pm_runtime_put_autosuspend(md_ctrl->dev);
return ret;
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index ae38fb29ec81..c7dfb74bfe31 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -40,6 +40,8 @@
#define DPMAIF_RX_PUSH_THRESHOLD_MASK 0x7
#define DPMAIF_NOTIFY_RELEASE_COUNT 128
#define DPMAIF_POLL_PIT_TIME_US 20
+#define DPMAIF_POLL_RX_TIME_US 10
+#define DPMAIF_POLL_RX_TIMEOUT_US 200
#define DPMAIF_POLL_PIT_MAX_TIME_US 2000
#define DPMAIF_WQ_TIME_LIMIT_MS 2
#define DPMAIF_CS_RESULT_PASS 0
@@ -1020,6 +1022,7 @@ static int dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
return 0;
}

+/* call after mtk_pci_disable_sleep */
static void dpmaif_do_rx(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_rx_queue *rxq)
{
int ret;
@@ -1059,7 +1062,13 @@ static void dpmaif_rxq_work(struct work_struct *work)
if (ret < 0 && ret != -EACCES)
return;

- dpmaif_do_rx(dpmaif_ctrl, rxq);
+ mtk_pci_disable_sleep(dpmaif_ctrl->mtk_dev);
+
+ /* we are in process context, so we can block while waiting for the resource lock */
+ if (mtk_pci_sleep_disable_complete(dpmaif_ctrl->mtk_dev))
+ dpmaif_do_rx(dpmaif_ctrl, rxq);
+
+ mtk_pci_enable_sleep(dpmaif_ctrl->mtk_dev);

pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
@@ -1433,14 +1442,19 @@ static void dpmaif_bat_release_work(struct work_struct *work)
if (ret < 0 && ret != -EACCES)
return;

+ mtk_pci_disable_sleep(dpmaif_ctrl->mtk_dev);
+
/* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */
rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];

- /* normal BAT release and add */
- dpmaif_dl_pkt_bat_release_and_add(rxq);
- /* frag BAT release and add */
- dpmaif_dl_frag_bat_release_and_add(rxq);
+ if (mtk_pci_sleep_disable_complete(dpmaif_ctrl->mtk_dev)) {
+ /* normal BAT release and add */
+ dpmaif_dl_pkt_bat_release_and_add(rxq);
+ /* frag BAT release and add */
+ dpmaif_dl_frag_bat_release_and_add(rxq);
+ }

+ mtk_pci_enable_sleep(dpmaif_ctrl->mtk_dev);
pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index 84fc980824e5..8b2f2fc0b7d3 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -172,19 +172,26 @@ static void dpmaif_tx_done(struct work_struct *work)
if (ret < 0 && ret != -EACCES)
return;

- ret = dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
- if (ret == -EAGAIN ||
- (dpmaif_hw_check_clr_ul_done_status(&dpmaif_ctrl->hif_hw_info, txq->index) &&
- dpmaif_no_remain_spurious_tx_done_intr(txq))) {
- queue_work(dpmaif_ctrl->txq[txq->index].worker,
- &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
- /* clear IP busy to give the device time to enter the low power state */
- dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
- } else {
- dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
- dpmaif_unmask_ulq_interrupt(dpmaif_ctrl, txq->index);
+ /* The device may be in low power state. Disable sleep if needed */
+ mtk_pci_disable_sleep(dpmaif_ctrl->mtk_dev);
+
+ /* ensure that we are not in deep sleep */
+ if (mtk_pci_sleep_disable_complete(dpmaif_ctrl->mtk_dev)) {
+ ret = dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
+ if (ret == -EAGAIN ||
+ (dpmaif_hw_check_clr_ul_done_status(&dpmaif_ctrl->hif_hw_info, txq->index) &&
+ dpmaif_no_remain_spurious_tx_done_intr(txq))) {
+ queue_work(dpmaif_ctrl->txq[txq->index].worker,
+ &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
+ /* clear IP busy to give the device time to enter the low power state */
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ } else {
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ dpmaif_unmask_ulq_interrupt(dpmaif_ctrl, txq->index);
+ }
}

+ mtk_pci_enable_sleep(dpmaif_ctrl->mtk_dev);
pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}
@@ -485,6 +492,8 @@ static bool check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl)

static void do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
{
+ bool first_time = true;
+
dpmaif_ctrl->txq_select_times = 0;
do {
int txq_id;
@@ -499,6 +508,11 @@ static void do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
if (ret > 0) {
int drb_send_cnt = ret;

+ /* wait until the PCIe resource lock is acquired */
+ if (first_time &&
+ !mtk_pci_sleep_disable_complete(dpmaif_ctrl->mtk_dev))
+ return;
+
/* notify the dpmaif HW */
ret = dpmaif_ul_add_wcnt(dpmaif_ctrl, (unsigned char)txq_id,
drb_send_cnt * DPMAIF_UL_DRB_ENTRY_WORD);
@@ -512,6 +526,7 @@ static void do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
}
}

+ first_time = false;
cond_resched();

} while (!tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() &&
@@ -541,7 +556,9 @@ static int dpmaif_tx_hw_push_thread(void *arg)
if (ret < 0 && ret != -EACCES)
return ret;

+ mtk_pci_disable_sleep(dpmaif_ctrl->mtk_dev);
do_tx_hw_push(dpmaif_ctrl);
+ mtk_pci_enable_sleep(dpmaif_ctrl->mtk_dev);
pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index e511a4117f47..df250f85f31e 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -57,6 +57,9 @@ static irqreturn_t mhccif_isr_thread(int irq, void *data)
/* Clear 2 & 1 level interrupts */
mhccif_clear_interrupts(mtk_dev, int_sts);

+ if (int_sts & D2H_INT_DS_LOCK_ACK)
+ complete_all(&mtk_dev->sleep_lock_acquire);
+
if (int_sts & D2H_INT_SR_ACK)
complete(&mtk_dev->pm_sr_ack);

diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 3328a225e20b..1087ce489eff 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -34,6 +34,7 @@
#define PCI_IREG_BASE 0
#define PCI_EREG_BASE 2

+#define MTK_WAIT_TIMEOUT_MS 10
#define PM_ACK_TIMEOUT_MS 1500
#define PM_AUTOSUSPEND_MS 20000
#define PM_RESOURCE_POLL_TIMEOUT_US 10000
@@ -46,6 +47,22 @@ enum mtk_pm_state {
MTK_PM_RESUMED, /* Device in resume state */
};

+static void mtk_dev_set_sleep_capability(struct mtk_pci_dev *mtk_dev, bool enable)
+{
+ void __iomem *ctrl_reg;
+ u32 value;
+
+ ctrl_reg = IREG_BASE(mtk_dev) + PCIE_MISC_CTRL;
+ value = ioread32(ctrl_reg);
+
+ if (enable)
+ value &= ~PCIE_MISC_MAC_SLEEP_DIS;
+ else
+ value |= PCIE_MISC_MAC_SLEEP_DIS;
+
+ iowrite32(value, ctrl_reg);
+}
+
static int mtk_wait_pm_config(struct mtk_pci_dev *mtk_dev)
{
int ret, val;
@@ -68,10 +85,14 @@ static int mtk_pci_pm_init(struct mtk_pci_dev *mtk_dev)

INIT_LIST_HEAD(&mtk_dev->md_pm_entities);

+ spin_lock_init(&mtk_dev->md_pm_lock);
+
mutex_init(&mtk_dev->md_pm_entity_mtx);

+ init_completion(&mtk_dev->sleep_lock_acquire);
init_completion(&mtk_dev->pm_sr_ack);

+ atomic_set(&mtk_dev->sleep_disable_count, 0);
device_init_wakeup(&pdev->dev, true);

dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
@@ -90,6 +111,7 @@ void mtk_pci_pm_init_late(struct mtk_pci_dev *mtk_dev)
{
/* enable the PCIe Resource Lock only after MD deep sleep is done */
mhccif_mask_clr(mtk_dev,
+ D2H_INT_DS_LOCK_ACK |
D2H_INT_SUSPEND_ACK |
D2H_INT_RESUME_ACK |
D2H_INT_SUSPEND_ACK_AP |
@@ -156,6 +178,79 @@ int mtk_pci_pm_entity_unregister(struct mtk_pci_dev *mtk_dev, struct md_pm_entit
return -ENXIO;
}

+int mtk_pci_sleep_disable_complete(struct mtk_pci_dev *mtk_dev)
+{
+ int ret;
+
+ ret = wait_for_completion_timeout(&mtk_dev->sleep_lock_acquire,
+ msecs_to_jiffies(MTK_WAIT_TIMEOUT_MS));
+ if (!ret)
+ dev_err_ratelimited(&mtk_dev->pdev->dev, "Resource wait complete timed out\n");
+
+ return ret;
+}
+
+/**
+ * mtk_pci_disable_sleep() - disable deep sleep capability
+ * @mtk_dev: MTK device
+ *
+ * Lock the deep sleep capability; note that the device can go into deep sleep
+ * state while it is still in D0 state from the host point of view.
+ *
+ * If device is in deep sleep state then wake up the device and disable deep sleep capability.
+ */
+void mtk_pci_disable_sleep(struct mtk_pci_dev *mtk_dev)
+{
+ unsigned long flags;
+
+ if (atomic_read(&mtk_dev->md_pm_state) < MTK_PM_RESUMED) {
+ atomic_inc(&mtk_dev->sleep_disable_count);
+ complete_all(&mtk_dev->sleep_lock_acquire);
+ return;
+ }
+
+ spin_lock_irqsave(&mtk_dev->md_pm_lock, flags);
+ if (atomic_inc_return(&mtk_dev->sleep_disable_count) == 1) {
+ reinit_completion(&mtk_dev->sleep_lock_acquire);
+ mtk_dev_set_sleep_capability(mtk_dev, false);
+ /* read register status to check whether the device's
+ * deep sleep is disabled or not.
+ */
+ if ((ioread32(IREG_BASE(mtk_dev) + PCIE_RESOURCE_STATUS) &
+ PCIE_RESOURCE_STATUS_MSK) == PCIE_RESOURCE_STATUS_MSK) {
+ spin_unlock_irqrestore(&mtk_dev->md_pm_lock, flags);
+ complete_all(&mtk_dev->sleep_lock_acquire);
+ return;
+ }
+
+ mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_DS_LOCK);
+ }
+
+ spin_unlock_irqrestore(&mtk_dev->md_pm_lock, flags);
+}
+
+/**
+ * mtk_pci_enable_sleep() - enable deep sleep capability
+ * @mtk_dev: MTK device
+ *
+ * After enabling deep sleep, device can enter into deep sleep state.
+ */
+void mtk_pci_enable_sleep(struct mtk_pci_dev *mtk_dev)
+{
+ unsigned long flags;
+
+ if (atomic_read(&mtk_dev->md_pm_state) < MTK_PM_RESUMED) {
+ atomic_dec(&mtk_dev->sleep_disable_count);
+ return;
+ }
+
+ if (atomic_dec_and_test(&mtk_dev->sleep_disable_count)) {
+ spin_lock_irqsave(&mtk_dev->md_pm_lock, flags);
+ mtk_dev_set_sleep_capability(mtk_dev, true);
+ spin_unlock_irqrestore(&mtk_dev->md_pm_lock, flags);
+ }
+}
+
static int __mtk_pci_pm_suspend(struct pci_dev *pdev)
{
struct mtk_pci_dev *mtk_dev;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index 7ce429db240f..ab49b2ab8614 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -17,6 +17,7 @@
#include <linux/completion.h>
#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/spinlock.h>
#include <linux/types.h>

#include "t7xx_reg.h"
@@ -48,7 +49,10 @@ typedef irqreturn_t (*mtk_intr_callback)(int irq, void *param);
* @base_addr: memory base addresses of HW components
* @md: modem interface
* @md_pm_entities: list of pm entities
+ * @md_pm_lock: protects PCIe sleep lock
* @md_pm_entity_mtx: protects md_pm_entities list
+ * @sleep_disable_count: PCIe L1.2 lock counter
+ * @sleep_lock_acquire: indicates that sleep has been disabled
* @pm_sr_ack: ack from the device when went to sleep or woke up
* @md_pm_state: state for resume/suspend
* @ccmni_ctlb: context structure used to control the network data path
@@ -66,7 +70,10 @@ struct mtk_pci_dev {

/* Low Power Items */
struct list_head md_pm_entities;
+ spinlock_t md_pm_lock; /* protects PCI resource lock */
struct mutex md_pm_entity_mtx; /* protects md_pm_entities list */
+ atomic_t sleep_disable_count;
+ struct completion sleep_lock_acquire;
struct completion pm_sr_ack;
atomic_t md_pm_state;

@@ -103,6 +110,9 @@ struct md_pm_entity {
void *entity_param;
};

+void mtk_pci_disable_sleep(struct mtk_pci_dev *mtk_dev);
+void mtk_pci_enable_sleep(struct mtk_pci_dev *mtk_dev);
+int mtk_pci_sleep_disable_complete(struct mtk_pci_dev *mtk_dev);
int mtk_pci_pm_entity_register(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity);
int mtk_pci_pm_entity_unregister(struct mtk_pci_dev *mtk_dev, struct md_pm_entity *pm_entity);
void mtk_pci_pm_init_late(struct mtk_pci_dev *mtk_dev);
--
2.17.1

2021-11-01 03:58:25

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports

From: Chandrashekar Devegowda <[email protected]>

Adds AT and MBIM ports to the port proxy infrastructure.
The initialization method is responsible for creating the corresponding
ports using the WWAN framework infrastructure. The implemented WWAN port
operations are start, stop, and TX.
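
As a rough sketch of the WWAN framework hookup implemented in
t7xx_port_wwan.c below: the driver fills a wwan_port_ops structure and
registers it with wwan_create_port(). The example_* names here are
illustrative only; the real callbacks additionally track a usage count
and build CCCI headers around the payload.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

static int example_start(struct wwan_port *port)
{
        /* Fetch driver state with wwan_port_get_drvdata() and mark the port in use */
        return 0;
}

static void example_stop(struct wwan_port *port)
{
        /* Drop the usage count taken in example_start() */
}

static int example_tx(struct wwan_port *port, struct sk_buff *skb)
{
        /* Wrap the payload in CCCI headers and queue it towards CLDMA */
        return 0;
}

static const struct wwan_port_ops example_wwan_ops = {
        .start = example_start,
        .stop = example_stop,
        .tx = example_tx,
};

/* Registration as done in port_wwan_init(); an AT port is used as the example */
static struct wwan_port *example_register(struct device *dev, void *drvdata)
{
        return wwan_create_port(dev, WWAN_PORT_AT, &example_wwan_ops, drvdata);
}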

Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 4 +
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 1 +
drivers/net/wwan/t7xx/t7xx_port_wwan.c | 281 ++++++++++++++++++++++++
4 files changed, 287 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index b0fac99420a0..9b3cc4c5ebae 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -13,3 +13,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_hif_cldma.o \
t7xx_port_proxy.o \
t7xx_port_ctrl_msg.o \
+ t7xx_port_wwan.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index 39e779531068..970b5160febf 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -51,6 +51,10 @@ static struct port_proxy *port_prox;
i++, (p) = &(proxy)->ports[i])

static struct t7xx_port md_ccci_ports[] = {
+ {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
+ 0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
+ {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
+ PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
{CCCI_CONTROL_TX, CCCI_CONTROL_RX, 0, 0, 0, 0, ID_CLDMA1,
0, &ctl_port_ops, 0xff, "ccci_ctrl",},
};
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index bc6fef0d7cf6..3d43c1f46e2a 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -82,6 +82,7 @@ struct port_msg {
#define PORT_ENUM_VER_MISMATCH 0x00657272

/* port operations mapping */
+extern struct port_ops wwan_sub_port_ops;
extern struct port_ops ctl_port_ops;

int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_wwan.c b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
new file mode 100644
index 000000000000..c101651b84aa
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
@@ -0,0 +1,281 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Chandrashekar Devegowda <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/minmax.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/wwan.h>
+#include <linux/bitfield.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_monitor.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_skb_util.h"
+
+static int mtk_port_ctrl_start(struct wwan_port *port)
+{
+ struct t7xx_port *port_mtk;
+
+ port_mtk = wwan_port_get_drvdata(port);
+
+ if (atomic_read(&port_mtk->usage_cnt))
+ return -EBUSY;
+
+ atomic_inc(&port_mtk->usage_cnt);
+ return 0;
+}
+
+static void mtk_port_ctrl_stop(struct wwan_port *port)
+{
+ struct t7xx_port *port_mtk;
+
+ port_mtk = wwan_port_get_drvdata(port);
+
+ atomic_dec(&port_mtk->usage_cnt);
+}
+
+static int mtk_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
+{
+ size_t actual_count = 0, alloc_size = 0, txq_mtu = 0;
+ struct sk_buff *skb_ccci = NULL;
+ struct t7xx_port *port_ccci;
+ int i, multi_packet = 1;
+ enum md_state md_state;
+ unsigned int count;
+ int ret = 0;
+
+ count = skb->len;
+ if (!count)
+ return -EINVAL;
+
+ port_ccci = wwan_port_get_drvdata(port);
+ md_state = ccci_fsm_get_md_state();
+ if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
+ dev_warn(port_ccci->dev, "port %s ch%d write fail when md_state=%d!\n",
+ port_ccci->name, port_ccci->tx_ch, md_state);
+ return -ENODEV;
+ }
+
+ txq_mtu = CLDMA_TXQ_MTU;
+
+ if (port_ccci->flags & PORT_F_RAW_DATA || port_ccci->flags & PORT_F_USER_HEADER) {
+ if (port_ccci->flags & PORT_F_USER_HEADER && count > txq_mtu) {
+ dev_err(port_ccci->dev, "reject packet(size=%u), larger than MTU on %s\n",
+ count, port_ccci->name);
+ return -ENOMEM;
+ }
+
+ alloc_size = min_t(size_t, txq_mtu, count);
+ actual_count = alloc_size;
+ } else {
+ alloc_size = min_t(size_t, txq_mtu, count + CCCI_H_ELEN);
+ actual_count = alloc_size - CCCI_H_ELEN;
+ if (count + CCCI_H_ELEN > txq_mtu &&
+ (port_ccci->tx_ch == CCCI_MBIM_TX ||
+ (port_ccci->tx_ch >= CCCI_DSS0_TX && port_ccci->tx_ch <= CCCI_DSS7_TX)))
+ multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);
+ }
+
+ for (i = 0; i < multi_packet; i++) {
+ struct ccci_header *ccci_h = NULL;
+
+ if (multi_packet > 1 && multi_packet == i + 1) {
+ actual_count = count % (txq_mtu - CCCI_H_ELEN);
+ alloc_size = actual_count + CCCI_H_ELEN;
+ }
+
+ skb_ccci = ccci_alloc_skb_from_pool(&port_ccci->mtk_dev->pools, alloc_size,
+ GFS_BLOCKING);
+ if (!skb_ccci)
+ return -ENOMEM;
+
+ if (port_ccci->flags & PORT_F_RAW_DATA) {
+ memcpy(skb_put(skb_ccci, actual_count), skb->data, actual_count);
+
+ if (port_ccci->flags & PORT_F_USER_HEADER) {
+ /* The ccci_header is provided by the user.
+ *
+ * When only the ccci_header is sent, without
+ * additional data: data[0]=CCCI_HEADER_NO_DATA,
+ * data[1]=user_data, ch=tx_channel,
+ * reserved=no_use.
+ *
+ * When the ccci_header is sent with additional
+ * data: data[0]=0, data[1]=data_size,
+ * ch=tx_channel, reserved=user_data.
+ */
+ ccci_h = (struct ccci_header *)skb->data;
+ if (actual_count == CCCI_H_LEN)
+ ccci_h->data[0] = CCCI_HEADER_NO_DATA;
+ else
+ ccci_h->data[1] = actual_count;
+
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, port_ccci->tx_ch);
+ }
+ } else {
+ /* ccci_header is provided by driver */
+ ccci_h = skb_put(skb_ccci, CCCI_H_LEN);
+ ccci_h->data[0] = 0;
+ ccci_h->data[1] = actual_count + CCCI_H_LEN;
+ ccci_h->status &= ~HDR_FLD_CHN;
+ ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, port_ccci->tx_ch);
+ ccci_h->reserved = 0;
+
+ memcpy(skb_put(skb_ccci, actual_count),
+ skb->data + i * (txq_mtu - CCCI_H_ELEN), actual_count);
+ }
+
+ port_proxy_set_seq_num(port_ccci, ccci_h);
+ ret = port_send_skb_to_md(port_ccci, skb_ccci, true);
+ if (ret)
+ goto err_out;
+
+ /* record the port seq_num after the data is sent to HIF */
+ port_ccci->seq_nums[MTK_OUT]++;
+
+ if (multi_packet == 1)
+ return actual_count;
+ else if (multi_packet == i + 1)
+ return count;
+ }
+
+err_out:
+ if (ret != -ENOMEM) {
+ dev_err(port_ccci->dev, "write error done on %s, size=%zu, ret=%d\n",
+ port_ccci->name, actual_count, ret);
+ ccci_free_skb(&port_ccci->mtk_dev->pools, skb_ccci);
+ }
+
+ return ret;
+}
+
+static const struct wwan_port_ops mtk_wwan_port_ops = {
+ .start = mtk_port_ctrl_start,
+ .stop = mtk_port_ctrl_stop,
+ .tx = mtk_port_ctrl_tx,
+};
+
+static int port_wwan_init(struct t7xx_port *port)
+{
+ port->rx_length_th = MAX_RX_QUEUE_LENGTH;
+ port->skb_from_pool = true;
+
+ if (!(port->flags & PORT_F_RAW_DATA))
+ port->flags |= PORT_F_RX_ADJUST_HEADER;
+
+ if (port->rx_ch == CCCI_UART2_RX)
+ port->flags |= PORT_F_RX_CH_TRAFFIC;
+
+ if (port->mtk_port_type != WWAN_PORT_UNKNOWN) {
+ port->mtk_wwan_port =
+ wwan_create_port(port->dev, port->mtk_port_type, &mtk_wwan_port_ops, port);
+ if (IS_ERR(port->mtk_wwan_port))
+ return PTR_ERR(port->mtk_wwan_port);
+ } else {
+ port->mtk_wwan_port = NULL;
+ }
+
+ return 0;
+}
+
+static void port_wwan_uninit(struct t7xx_port *port)
+{
+ if (port->mtk_wwan_port) {
+ if (port->chn_crt_stat == CCCI_CHAN_ENABLE) {
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ }
+
+ wwan_remove_port(port->mtk_wwan_port);
+ port->mtk_wwan_port = NULL;
+ }
+}
+
+static int port_wwan_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ if (port->flags & PORT_F_RX_CHAR_NODE) {
+ if (!atomic_read(&port->usage_cnt)) {
+ dev_err_ratelimited(port->dev, "port %s is not opened, drop packets\n",
+ port->name);
+ return -ENETDOWN;
+ }
+ }
+
+ return port_recv_skb(port, skb);
+}
+
+static int port_status_update(struct t7xx_port *port)
+{
+ if (port->flags & PORT_F_RX_CHAR_NODE) {
+ if (port->chan_enable == CCCI_CHAN_ENABLE) {
+ port->flags &= ~PORT_F_RX_ALLOW_DROP;
+ } else {
+ port->flags |= PORT_F_RX_ALLOW_DROP;
+ spin_lock(&port->port_update_lock);
+ port->chn_crt_stat = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+ return port_proxy_broadcast_state(port, MTK_PORT_STATE_DISABLE);
+ }
+ }
+
+ return 0;
+}
+
+static int port_wwan_enable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_ENABLE;
+ spin_unlock(&port->port_update_lock);
+
+ if (port->chn_crt_stat != port->chan_enable)
+ return port_status_update(port);
+
+ return 0;
+}
+
+static int port_wwan_disable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = CCCI_CHAN_DISABLE;
+ spin_unlock(&port->port_update_lock);
+
+ if (port->chn_crt_stat != port->chan_enable)
+ return port_status_update(port);
+
+ return 0;
+}
+
+static void port_wwan_md_state_notify(struct t7xx_port *port, unsigned int state)
+{
+ if (state == MD_STATE_READY)
+ port_status_update(port);
+}
+
+struct port_ops wwan_sub_port_ops = {
+ .init = &port_wwan_init,
+ .recv_skb = &port_wwan_recv_skb,
+ .uninit = &port_wwan_uninit,
+ .enable_chl = &port_wwan_enable_chl,
+ .disable_chl = &port_wwan_disable_chl,
+ .md_state_notify = &port_wwan_md_state_notify,
+};
--
2.17.1

2021-11-01 03:58:24

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 08/14] net: wwan: t7xx: Add data path interface

From: Haijun Liu <[email protected]>

Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
for initialization, ISR, control and event handling of TX/RX flows.

DPMAIF TX
Exposes the `dpmaif_tx_send_skb` function which can be used by the
network device to transmit packets.
The uplink data management uses a Descriptor Ring Buffer (DRB).
The first DRB entry is a message type that is followed by one or more
normal DRB entries. The message type DRB holds the skb information
and each normal DRB entry holds a pointer to the skb payload.

DPMAIF RX
The downlink buffer management uses Buffer Address Table (BAT) and
Packet Information Table (PIT) rings.
The BAT ring holds the addresses of the skb data buffers for the HW to
use, while the PIT contains metadata about a whole network packet,
including a reference to the BAT entry holding the data buffer address.
The driver reads the PIT and BAT entries written by the modem; when a
threshold is reached, the driver reloads the PIT and BAT rings.
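
Both directions track their rings with read and write indices. The
helpers added in t7xx_hif_dpmaif.c compute the number of used versus
free entries, always keeping one slot empty to distinguish a full ring
from an empty one. A small illustration, assuming the helpers and ring
fields introduced by this patch (the example_* wrappers are not part of
the driver):

#include "t7xx_hif_dpmaif.h"

/* Free (writable) BAT slots: rd_wrt == false keeps one slot unused */
static unsigned int example_free_bat_slots(const struct dpmaif_bat_request *bat)
{
        return ring_buf_read_write_count(bat->bat_size_cnt, bat->bat_release_rd_idx,
                                         bat->bat_wr_idx, false);
}

/* Used (readable) DRB entries not yet released: rd_wrt == true */
static unsigned int example_used_drb_entries(const struct dpmaif_tx_queue *txq)
{
        return ring_buf_read_write_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
                                         txq->drb_wr_idx, true);
}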

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 4 +
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 540 +++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 278 ++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1532 ++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 117 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 810 +++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 82 ++
7 files changed, 3363 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 9b3cc4c5ebae..a2c97a66dfbe 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -14,3 +14,7 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_port_proxy.o \
t7xx_port_ctrl_msg.o \
t7xx_port_wwan.o \
+ t7xx_dpmaif.o \
+ t7xx_hif_dpmaif.o \
+ t7xx_hif_dpmaif_tx.o \
+ t7xx_hif_dpmaif_rx.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
new file mode 100644
index 000000000000..e97e1a6082d3
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -0,0 +1,540 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_pcie_mac.h"
+
+unsigned int ring_buf_get_next_wrdx(unsigned int buf_len, unsigned int buf_idx)
+{
+ buf_idx++;
+
+ return buf_idx < buf_len ? buf_idx : 0;
+}
+
+unsigned int ring_buf_read_write_count(unsigned int total_cnt, unsigned int rd_idx,
+ unsigned int wrt_idx, bool rd_wrt)
+{
+ int pkt_cnt;
+
+ if (rd_wrt)
+ pkt_cnt = wrt_idx - rd_idx;
+ else
+ pkt_cnt = rd_idx - wrt_idx - 1;
+
+ if (pkt_cnt < 0)
+ pkt_cnt += total_cnt;
+
+ return (unsigned int)pkt_cnt;
+}
+
+static void dpmaif_enable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ mtk_pcie_mac_set_int(dpmaif_ctrl->mtk_dev, isr_para->pcie_int);
+ }
+}
+
+static void dpmaif_disable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ mtk_pcie_mac_clear_int(dpmaif_ctrl->mtk_dev, isr_para->pcie_int);
+ }
+}
+
+static void dpmaif_irq_cb(struct dpmaif_isr_para *isr_para)
+{
+ struct dpmaif_hw_intr_st_para intr_status;
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct device *dev;
+ int i;
+
+ dpmaif_ctrl = isr_para->dpmaif_ctrl;
+ dev = dpmaif_ctrl->dev;
+ memset(&intr_status, 0, sizeof(intr_status));
+
+ /* gets HW interrupt types */
+ if (dpmaif_hw_get_interrupt_status(dpmaif_ctrl, &intr_status, isr_para->dlq_id) < 0) {
+ dev_err(dev, "Get HW interrupt status failed!\n");
+ return;
+ }
+
+ /* Clear level 1 interrupt status */
+ /* Clear level 2 DPMAIF interrupt status first,
+ * then clear level 1 PCIe interrupt status
+ * to avoid an empty interrupt.
+ */
+ mtk_pcie_mac_clear_int_status(dpmaif_ctrl->mtk_dev, isr_para->pcie_int);
+
+ /* handles interrupts */
+ for (i = 0; i < intr_status.intr_cnt; i++) {
+ switch (intr_status.intr_types[i]) {
+ case DPF_INTR_UL_DONE:
+ dpmaif_irq_tx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+ break;
+
+ case DPF_INTR_UL_DRB_EMPTY:
+ case DPF_INTR_UL_MD_NOTREADY:
+ case DPF_INTR_UL_MD_PWR_NOTREADY:
+ /* no need to log an error for these cases. */
+ break;
+
+ case DPF_INTR_DL_BATCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: packet BAT count length error!\n");
+ dpmaif_unmask_dl_batcnt_len_err_interrupt(&dpmaif_ctrl->hif_hw_info);
+ break;
+
+ case DPF_INTR_DL_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: PIT count length error!\n");
+ dpmaif_unmask_dl_pitcnt_len_err_interrupt(&dpmaif_ctrl->hif_hw_info);
+ break;
+
+ case DPF_DL_INT_DLQ0_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: DLQ0 PIT count length error!\n");
+ dpmaif_dlq_unmask_rx_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+ DPF_RX_QNO_DFT);
+ break;
+
+ case DPF_DL_INT_DLQ1_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: DLQ1 PIT count length error!\n");
+ dpmaif_dlq_unmask_rx_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+ DPF_RX_QNO1);
+ break;
+
+ case DPF_INTR_DL_DONE:
+ case DPF_INTR_DL_DLQ0_DONE:
+ case DPF_INTR_DL_DLQ1_DONE:
+ dpmaif_irq_rx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+ break;
+
+ default:
+ dev_err_ratelimited(dev, "DL interrupt error: type : %d\n",
+ intr_status.intr_types[i]);
+ }
+ }
+}
+
+static irqreturn_t dpmaif_isr_handler(int irq, void *data)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_isr_para *isr_para;
+
+ isr_para = data;
+ dpmaif_ctrl = isr_para->dpmaif_ctrl;
+
+ if (dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+ dev_err(dpmaif_ctrl->dev, "interrupt received before initializing DPMAIF\n");
+ return IRQ_HANDLED;
+ }
+
+ mtk_pcie_mac_clear_int(dpmaif_ctrl->mtk_dev, isr_para->pcie_int);
+ dpmaif_irq_cb(isr_para);
+ mtk_pcie_mac_set_int(dpmaif_ctrl->mtk_dev, isr_para->pcie_int);
+ return IRQ_HANDLED;
+}
+
+static void dpmaif_isr_parameter_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ unsigned char i;
+
+ /* set up the RXQ and isr relation */
+ dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO0] = DPMAIF_INT;
+ dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO1] = DPMAIF2_INT;
+
+ /* init the isr parameter */
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ isr_para->dpmaif_ctrl = dpmaif_ctrl;
+ isr_para->dlq_id = i;
+ isr_para->pcie_int = dpmaif_ctrl->rxq_int_mapping[i];
+ }
+}
+
+static void dpmaif_platform_irq_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ struct mtk_pci_dev *mtk_dev;
+ enum pcie_int int_type;
+ int i;
+
+ mtk_dev = dpmaif_ctrl->mtk_dev;
+ /* PCIe isr parameter init */
+ dpmaif_isr_parameter_init(dpmaif_ctrl);
+
+ /* register isr */
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ int_type = isr_para->pcie_int;
+ mtk_pcie_mac_clear_int(mtk_dev, int_type);
+
+ mtk_dev->intr_handler[int_type] = dpmaif_isr_handler;
+ mtk_dev->intr_thread[int_type] = NULL;
+ mtk_dev->callback_param[int_type] = isr_para;
+
+ mtk_pcie_mac_clear_int_status(mtk_dev, int_type);
+ mtk_pcie_mac_set_int(mtk_dev, int_type);
+ }
+}
+
+static void dpmaif_skb_pool_free(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_skb_pool *pool;
+ unsigned int i;
+
+ pool = &dpmaif_ctrl->skb_pool;
+ flush_work(&pool->reload_work);
+
+ if (pool->reload_work_queue) {
+ destroy_workqueue(pool->reload_work_queue);
+ pool->reload_work_queue = NULL;
+ }
+
+ for (i = 0; i < DPMA_SKB_QUEUE_CNT; i++)
+ dpmaif_skb_queue_free(dpmaif_ctrl, i);
+}
+
+/* we put initializations which take too much time here: SW init only */
+static int dpmaif_sw_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rx_q;
+ struct dpmaif_tx_queue *tx_q;
+ int ret, i, j;
+
+ /* RX normal BAT table init */
+ ret = dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, BAT_TYPE_NORMAL);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "normal BAT table init fail, %d!\n", ret);
+ return ret;
+ }
+
+ /* RX frag BAT table init */
+ ret = dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, BAT_TYPE_FRAG);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "frag BAT table init fail, %d!\n", ret);
+ goto bat_frag_err;
+ }
+
+ /* dpmaif RXQ resource init */
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ rx_q = &dpmaif_ctrl->rxq[i];
+ rx_q->index = i;
+ rx_q->dpmaif_ctrl = dpmaif_ctrl;
+ ret = dpmaif_rxq_alloc(rx_q);
+ if (ret)
+ goto rxq_init_err;
+ }
+
+ /* dpmaif TXQ resource init */
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ tx_q = &dpmaif_ctrl->txq[i];
+ tx_q->index = i;
+ tx_q->dpmaif_ctrl = dpmaif_ctrl;
+ ret = dpmaif_txq_init(tx_q);
+ if (ret)
+ goto txq_init_err;
+ }
+
+ /* Init TX thread: send skb data to dpmaif HW */
+ ret = dpmaif_tx_thread_init(dpmaif_ctrl);
+ if (ret)
+ goto tx_thread_err;
+
+ /* Init the RX skb pool */
+ ret = dpmaif_skb_pool_init(dpmaif_ctrl);
+ if (ret)
+ goto pool_init_err;
+
+ /* Init BAT rel workqueue */
+ ret = dpmaif_bat_release_work_alloc(dpmaif_ctrl);
+ if (ret)
+ goto bat_work_init_err;
+
+ return 0;
+
+bat_work_init_err:
+ dpmaif_skb_pool_free(dpmaif_ctrl);
+pool_init_err:
+ dpmaif_tx_thread_release(dpmaif_ctrl);
+tx_thread_err:
+ i = DPMAIF_TXQ_NUM;
+txq_init_err:
+ for (j = 0; j < i; j++) {
+ tx_q = &dpmaif_ctrl->txq[j];
+ dpmaif_txq_free(tx_q);
+ }
+
+ i = DPMAIF_RXQ_NUM;
+rxq_init_err:
+ for (j = 0; j < i; j++) {
+ rx_q = &dpmaif_ctrl->rxq[j];
+ dpmaif_rxq_free(rx_q);
+ }
+
+ dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_frag);
+bat_frag_err:
+ dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_req);
+
+ return ret;
+}
+
+static void dpmaif_sw_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rx_q;
+ struct dpmaif_tx_queue *tx_q;
+ int i;
+
+ /* release the tx thread */
+ dpmaif_tx_thread_release(dpmaif_ctrl);
+
+ /* release the BAT release workqueue */
+ dpmaif_bat_release_work_free(dpmaif_ctrl);
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ tx_q = &dpmaif_ctrl->txq[i];
+ dpmaif_txq_free(tx_q);
+ }
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ rx_q = &dpmaif_ctrl->rxq[i];
+ dpmaif_rxq_free(rx_q);
+ }
+
+ /* release the skb pool */
+ dpmaif_skb_pool_free(dpmaif_ctrl);
+}
+
+static int dpmaif_start(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_params hw_init_para;
+ struct dpmaif_rx_queue *rxq;
+ struct dpmaif_tx_queue *txq;
+ unsigned int buf_cnt;
+ int i, ret = 0;
+
+ if (dpmaif_ctrl->state == DPMAIF_STATE_PWRON)
+ return -EFAULT;
+
+ memset(&hw_init_para, 0, sizeof(hw_init_para));
+
+ /* rx */
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ rxq = &dpmaif_ctrl->rxq[i];
+ rxq->que_started = true;
+ rxq->index = i;
+ rxq->budget = rxq->bat_req->bat_size_cnt - 1;
+
+ /* DPMAIF HW RX queue init parameter */
+ hw_init_para.pkt_bat_base_addr[i] = rxq->bat_req->bat_bus_addr;
+ hw_init_para.pkt_bat_size_cnt[i] = rxq->bat_req->bat_size_cnt;
+ hw_init_para.pit_base_addr[i] = rxq->pit_bus_addr;
+ hw_init_para.pit_size_cnt[i] = rxq->pit_size_cnt;
+ hw_init_para.frg_bat_base_addr[i] = rxq->bat_frag->bat_bus_addr;
+ hw_init_para.frg_bat_size_cnt[i] = rxq->bat_frag->bat_size_cnt;
+ }
+
+ /* rx normal BAT mask init */
+ memset(dpmaif_ctrl->bat_req.bat_mask, 0,
+ dpmaif_ctrl->bat_req.bat_size_cnt * sizeof(unsigned char));
+ /* normal BAT - skb buffer and submit BAT */
+ buf_cnt = dpmaif_ctrl->bat_req.bat_size_cnt - 1;
+ ret = dpmaif_rx_buf_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, 0, buf_cnt, true);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "dpmaif_rx_buf_alloc fail, ret:%d\n", ret);
+ return ret;
+ }
+
+ /* frag BAT - page buffer init */
+ buf_cnt = dpmaif_ctrl->bat_frag.bat_size_cnt - 1;
+ ret = dpmaif_rx_frag_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, 0, buf_cnt, true);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "dpmaif_rx_frag_alloc fail, ret:%d\n", ret);
+ goto err_bat;
+ }
+
+ /* tx */
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ txq = &dpmaif_ctrl->txq[i];
+ txq->que_started = true;
+
+ /* DPMAIF HW TX queue init parameter */
+ hw_init_para.drb_base_addr[i] = txq->drb_bus_addr;
+ hw_init_para.drb_size_cnt[i] = txq->drb_size_cnt;
+ }
+
+ ret = dpmaif_hw_init(dpmaif_ctrl, &hw_init_para);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "dpmaif_hw_init fail, ret:%d\n", ret);
+ goto err_frag;
+ }
+
+ /* notify the DPMAIF HW of the available BAT count */
+ ret = dpmaif_dl_add_bat_cnt(dpmaif_ctrl, 0, rxq->bat_req->bat_size_cnt - 1);
+ if (ret)
+ goto err_frag;
+
+ ret = dpmaif_dl_add_frg_cnt(dpmaif_ctrl, 0, rxq->bat_frag->bat_size_cnt - 1);
+ if (ret)
+ goto err_frag;
+
+ dpmaif_clr_ul_all_interrupt(&dpmaif_ctrl->hif_hw_info);
+ dpmaif_clr_dl_all_interrupt(&dpmaif_ctrl->hif_hw_info);
+
+ dpmaif_ctrl->state = DPMAIF_STATE_PWRON;
+ dpmaif_enable_irq(dpmaif_ctrl);
+
+ /* wake up the dpmaif tx thread */
+ wake_up(&dpmaif_ctrl->tx_wq);
+ return 0;
+err_frag:
+ dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+err_bat:
+ dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+ return ret;
+}
+
+static void dpmaif_pos_stop_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_suspend_tx_sw_stop(dpmaif_ctrl);
+ dpmaif_suspend_rx_sw_stop(dpmaif_ctrl);
+}
+
+static void dpmaif_stop_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_hw_stop_tx_queue(dpmaif_ctrl);
+ dpmaif_hw_stop_rx_queue(dpmaif_ctrl);
+}
+
+static int dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (!dpmaif_ctrl->dpmaif_sw_init_done) {
+ dev_err(dpmaif_ctrl->dev, "dpmaif SW init fail\n");
+ return -EFAULT;
+ }
+
+ if (dpmaif_ctrl->state == DPMAIF_STATE_PWROFF)
+ return -EFAULT;
+
+ dpmaif_disable_irq(dpmaif_ctrl);
+ dpmaif_ctrl->state = DPMAIF_STATE_PWROFF;
+ dpmaif_pos_stop_hw(dpmaif_ctrl);
+ flush_work(&dpmaif_ctrl->skb_pool.reload_work);
+ dpmaif_stop_tx_sw(dpmaif_ctrl);
+ dpmaif_stop_rx_sw(dpmaif_ctrl);
+ return 0;
+}
+
+int dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
+{
+ int ret = 0;
+
+ switch (state) {
+ case MD_STATE_WAITING_FOR_HS1:
+ ret = dpmaif_start(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_EXCEPTION:
+ ret = dpmaif_stop(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_STOPPED:
+ ret = dpmaif_stop(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_WAITING_TO_STOP:
+ dpmaif_stop_hw(dpmaif_ctrl);
+ break;
+
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * dpmaif_hif_init() - Initialize data path
+ * @mtk_dev: MTK context structure
+ * @callbacks: Callbacks implemented by the network layer to handle RX skb and
+ * event notifications
+ *
+ * Allocate and initialize datapath control block
+ * Register datapath ISR, TX and RX resources
+ *
+ * Return: pointer to DPMAIF context structure or NULL in case of error
+ */
+struct dpmaif_ctrl *dpmaif_hif_init(struct mtk_pci_dev *mtk_dev,
+ struct dpmaif_callbacks *callbacks)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ int ret;
+
+ if (!callbacks)
+ return NULL;
+
+ dpmaif_ctrl = devm_kzalloc(&mtk_dev->pdev->dev, sizeof(*dpmaif_ctrl), GFP_KERNEL);
+ if (!dpmaif_ctrl)
+ return NULL;
+
+ dpmaif_ctrl->mtk_dev = mtk_dev;
+ dpmaif_ctrl->callbacks = callbacks;
+ dpmaif_ctrl->dev = &mtk_dev->pdev->dev;
+ dpmaif_ctrl->dpmaif_sw_init_done = false;
+ dpmaif_ctrl->hif_hw_info.pcie_base = mtk_dev->base_addr.pcie_ext_reg_base -
+ mtk_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+ /* registers dpmaif irq by PCIe driver API */
+ dpmaif_platform_irq_init(dpmaif_ctrl);
+ dpmaif_disable_irq(dpmaif_ctrl);
+
+ /* Alloc TX/RX resource */
+ ret = dpmaif_sw_init(dpmaif_ctrl);
+ if (ret) {
+ dev_err(&mtk_dev->pdev->dev, "DPMAIF SW initialization fail! %d\n", ret);
+ return NULL;
+ }
+
+ dpmaif_ctrl->dpmaif_sw_init_done = true;
+ return dpmaif_ctrl;
+}
+
+void dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (dpmaif_ctrl->dpmaif_sw_init_done) {
+ dpmaif_stop(dpmaif_ctrl);
+ dpmaif_sw_release(dpmaif_ctrl);
+ dpmaif_ctrl->dpmaif_sw_init_done = false;
+ }
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
new file mode 100644
index 000000000000..384a44acbf62
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_DPMA_TX_H__
+#define __T7XX_DPMA_TX_H__
+
+#include <linux/netdevice.h>
+#include <linux/workqueue.h>
+#include <linux/wait.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_pci.h"
+#include "t7xx_skb_util.h"
+
+#define DPMAIF_RXQ_NUM 2
+#define DPMAIF_TXQ_NUM 5
+
+#define DPMA_SKB_QUEUE_CNT 1
+
+struct dpmaif_isr_en_mask {
+ unsigned int ap_ul_l2intr_en_msk;
+ unsigned int ap_dl_l2intr_en_msk;
+ unsigned int ap_udl_ip_busy_en_msk;
+ unsigned int ap_dl_l2intr_err_en_msk;
+};
+
+struct dpmaif_ul {
+ bool que_started;
+ unsigned char reserve[3];
+ dma_addr_t drb_base;
+ unsigned int drb_size_cnt;
+};
+
+struct dpmaif_dl {
+ bool que_started;
+ unsigned char reserve[3];
+ dma_addr_t pit_base;
+ unsigned int pit_size_cnt;
+ dma_addr_t bat_base;
+ unsigned int bat_size_cnt;
+ dma_addr_t frg_base;
+ unsigned int frg_size_cnt;
+ unsigned int pit_seq;
+};
+
+struct dpmaif_dl_hwq {
+ unsigned int bat_remain_size;
+ unsigned int bat_pkt_bufsz;
+ unsigned int frg_pkt_bufsz;
+ unsigned int bat_rsv_length;
+ unsigned int pkt_bid_max_cnt;
+ unsigned int pkt_alignment;
+ unsigned int mtu_size;
+ unsigned int chk_pit_num;
+ unsigned int chk_bat_num;
+ unsigned int chk_frg_num;
+};
+
+/* Structure of DL BAT */
+struct dpmaif_cur_rx_skb_info {
+ bool msg_pit_received;
+ struct sk_buff *cur_skb;
+ unsigned int cur_chn_idx;
+ unsigned int check_sum;
+ unsigned int pit_dp;
+ int err_payload;
+};
+
+struct dpmaif_bat {
+ unsigned int p_buffer_addr;
+ unsigned int buffer_addr_ext;
+};
+
+struct dpmaif_bat_skb {
+ struct sk_buff *skb;
+ dma_addr_t data_bus_addr;
+ unsigned int data_len;
+};
+
+struct dpmaif_bat_page {
+ struct page *page;
+ dma_addr_t data_bus_addr;
+ unsigned int offset;
+ unsigned int data_len;
+};
+
+enum bat_type {
+ BAT_TYPE_NORMAL = 0,
+ BAT_TYPE_FRAG = 1,
+};
+
+struct dpmaif_bat_request {
+ void *bat_base;
+ dma_addr_t bat_bus_addr;
+ unsigned int bat_size_cnt;
+ unsigned short bat_wr_idx;
+ unsigned short bat_release_rd_idx;
+ void *bat_skb_ptr;
+ unsigned int skb_pkt_cnt;
+ unsigned int pkt_buf_sz;
+ unsigned char *bat_mask;
+ atomic_t refcnt;
+ spinlock_t mask_lock; /* protects bat_mask */
+ enum bat_type type;
+};
+
+struct dpmaif_rx_queue {
+ unsigned char index;
+ bool que_started;
+ unsigned short budget;
+
+ void *pit_base;
+ dma_addr_t pit_bus_addr;
+ unsigned int pit_size_cnt;
+
+ unsigned short pit_rd_idx;
+ unsigned short pit_wr_idx;
+ unsigned short pit_release_rd_idx;
+
+ struct dpmaif_bat_request *bat_req;
+ struct dpmaif_bat_request *bat_frag;
+
+ wait_queue_head_t rx_wq;
+ struct task_struct *rx_thread;
+ struct ccci_skb_queue skb_queue;
+
+ struct workqueue_struct *worker;
+ struct work_struct dpmaif_rxq_work;
+
+ atomic_t rx_processing;
+
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ unsigned int expect_pit_seq;
+ unsigned int pit_remain_release_cnt;
+ struct dpmaif_cur_rx_skb_info rx_data_info;
+};
+
+struct dpmaif_tx_queue {
+ unsigned char index;
+ bool que_started;
+ atomic_t tx_budget;
+ void *drb_base;
+ dma_addr_t drb_bus_addr;
+ unsigned int drb_size_cnt;
+ unsigned short drb_wr_idx;
+ unsigned short drb_rd_idx;
+ unsigned short drb_release_rd_idx;
+ unsigned short last_ch_id;
+ void *drb_skb_base;
+ wait_queue_head_t req_wq;
+ struct workqueue_struct *worker;
+ struct work_struct dpmaif_tx_work;
+ spinlock_t tx_lock; /* protects txq DRB */
+ atomic_t tx_processing;
+
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ /* tx thread skb_list */
+ spinlock_t tx_event_lock;
+ struct list_head tx_event_queue;
+ unsigned int tx_submit_skb_cnt;
+ unsigned int tx_list_max_len;
+ unsigned int tx_skb_stat;
+ bool drb_lack;
+};
+
+/* data path skb pool */
+struct dpmaif_map_skb {
+ struct list_head head;
+ u32 qlen;
+ spinlock_t lock; /* protects skb queue*/
+};
+
+struct dpmaif_skb_info {
+ struct list_head entry;
+ struct sk_buff *skb;
+ unsigned int data_len;
+ dma_addr_t data_bus_addr;
+};
+
+struct dpmaif_skb_queue {
+ struct dpmaif_map_skb skb_list;
+ unsigned int size;
+ unsigned int max_len;
+};
+
+struct dpmaif_skb_pool {
+ struct dpmaif_skb_queue queue[DPMA_SKB_QUEUE_CNT];
+ struct workqueue_struct *reload_work_queue;
+ struct work_struct reload_work;
+ unsigned int queue_cnt;
+};
+
+struct dpmaif_isr_para {
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ unsigned char pcie_int;
+ unsigned char dlq_id;
+};
+
+struct dpmaif_tx_event {
+ struct list_head entry;
+ int qno;
+ struct sk_buff *skb;
+ unsigned int drb_cnt;
+};
+
+enum dpmaif_state {
+ DPMAIF_STATE_MIN,
+ DPMAIF_STATE_PWROFF,
+ DPMAIF_STATE_PWRON,
+ DPMAIF_STATE_EXCEPTION,
+ DPMAIF_STATE_MAX
+};
+
+struct dpmaif_hw_info {
+ void __iomem *pcie_base;
+ struct dpmaif_dl dl_que[DPMAIF_RXQ_NUM];
+ struct dpmaif_ul ul_que[DPMAIF_TXQ_NUM];
+ struct dpmaif_dl_hwq dl_que_hw[DPMAIF_RXQ_NUM];
+ struct dpmaif_isr_en_mask isr_en_mask;
+};
+
+enum dpmaif_txq_state {
+ DMPAIF_TXQ_STATE_IRQ,
+ DMPAIF_TXQ_STATE_FULL,
+};
+
+struct dpmaif_callbacks {
+ void (*state_notify)(struct mtk_pci_dev *mtk_dev,
+ enum dpmaif_txq_state state, int txqt);
+ void (*recv_skb)(struct mtk_pci_dev *mtk_dev, int netif_id, struct sk_buff *skb);
+};
+
+struct dpmaif_ctrl {
+ struct device *dev;
+ struct mtk_pci_dev *mtk_dev;
+ enum dpmaif_state state;
+ bool dpmaif_sw_init_done;
+ struct dpmaif_hw_info hif_hw_info;
+ struct dpmaif_tx_queue txq[DPMAIF_TXQ_NUM];
+ struct dpmaif_rx_queue rxq[DPMAIF_RXQ_NUM];
+
+ unsigned char rxq_int_mapping[DPMAIF_RXQ_NUM];
+ struct dpmaif_isr_para isr_para[DPMAIF_RXQ_NUM];
+
+ struct dpmaif_bat_request bat_req;
+ struct dpmaif_bat_request bat_frag;
+ struct workqueue_struct *bat_release_work_queue;
+ struct work_struct bat_release_work;
+ struct dpmaif_skb_pool skb_pool;
+
+ wait_queue_head_t tx_wq;
+ struct task_struct *tx_thread;
+ unsigned char txq_select_times;
+
+ struct dpmaif_callbacks *callbacks;
+};
+
+struct dpmaif_ctrl *dpmaif_hif_init(struct mtk_pci_dev *mtk_dev,
+ struct dpmaif_callbacks *callbacks);
+void dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state);
+unsigned int ring_buf_get_next_wrdx(unsigned int buf_len, unsigned int buf_idx);
+unsigned int ring_buf_read_write_count(unsigned int total_cnt, unsigned int rd_idx,
+ unsigned int wrt_idx, bool rd_wrt);
+
+#endif /* __T7XX_DPMA_TX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
new file mode 100644
index 000000000000..e4af05441707
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -0,0 +1,1532 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+
+#define DPMAIF_BAT_COUNT 8192
+#define DPMAIF_FRG_COUNT 4814
+#define DPMAIF_PIT_COUNT (DPMAIF_BAT_COUNT * 2)
+
+#define DPMAIF_BAT_CNT_THRESHOLD 30
+#define DPMAIF_PIT_CNT_THRESHOLD 60
+#define DPMAIF_RX_PUSH_THRESHOLD_MASK 0x7
+#define DPMAIF_NOTIFY_RELEASE_COUNT 128
+#define DPMAIF_POLL_PIT_TIME_US 20
+#define DPMAIF_POLL_PIT_MAX_TIME_US 2000
+#define DPMAIF_WQ_TIME_LIMIT_MS 2
+#define DPMAIF_CS_RESULT_PASS 0
+
+#define DPMAIF_SKB_OVER_HEAD SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
+#define DPMAIF_SKB_SIZE_EXTRA SKB_DATA_ALIGN(NET_SKB_PAD + DPMAIF_SKB_OVER_HEAD)
+#define DPMAIF_SKB_SIZE(s) ((s) + DPMAIF_SKB_SIZE_EXTRA)
+#define DPMAIF_SKB_Q_SIZE (DPMAIF_BAT_COUNT * DPMAIF_SKB_SIZE(DPMAIF_HW_BAT_PKTBUF))
+#define DPMAIF_SKB_SIZE_MIN 32
+#define DPMAIF_RELOAD_TH_1 4
+#define DPMAIF_RELOAD_TH_2 5
+
+/* packet_type */
+#define DES_PT_PD 0x00
+#define DES_PT_MSG 0x01
+/* buffer_type */
+#define PKT_BUF_FRAG 0x01
+
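+/* The buffer ID is split across the PIT entry: the low 13 bits live in the
+ * header (NORMAL_PIT_BUFFER_ID) and the high bits in the footer
+ * (NORMAL_PIT_H_BID).
+ */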
+static inline unsigned int normal_pit_bid(const struct dpmaif_normal_pit *pit_info)
+{
+ return (FIELD_GET(NORMAL_PIT_H_BID, pit_info->pit_footer) << 13) +
+ FIELD_GET(NORMAL_PIT_BUFFER_ID, pit_info->pit_header);
+}
+
+static void dpmaif_set_skb_cs_type(const unsigned int cs_result, struct sk_buff *skb)
+{
+ if (cs_result == DPMAIF_CS_RESULT_PASS)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+}
+
+static int dpmaif_net_rx_push_thread(void *arg)
+{
+ struct dpmaif_ctrl *hif_ctrl;
+ struct dpmaif_callbacks *cb;
+ struct dpmaif_rx_queue *q;
+ struct sk_buff *skb;
+ u32 *lhif_header;
+ int netif_id;
+
+ q = arg;
+ hif_ctrl = q->dpmaif_ctrl;
+ cb = hif_ctrl->callbacks;
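+ /* Dequeue the skbs collected by the RX path, recover the netif ID from
+  * the prepended LHIF header and hand the packet to the registered callback.
+  */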
+ while (!kthread_should_stop()) {
+ if (skb_queue_empty(&q->skb_queue.skb_list)) {
+ if (wait_event_interruptible(q->rx_wq,
+ !skb_queue_empty(&q->skb_queue.skb_list) ||
+ kthread_should_stop()))
+ continue;
+ }
+
+ if (kthread_should_stop())
+ break;
+
+ skb = ccci_skb_dequeue(hif_ctrl->mtk_dev->pools.reload_work_queue,
+ &q->skb_queue);
+ if (!skb)
+ continue;
+
+ lhif_header = (u32 *)skb->data;
+ netif_id = FIELD_GET(LHIF_HEADER_NETIF, *lhif_header);
+ skb_pull(skb, sizeof(*lhif_header));
+ cb->recv_skb(hif_ctrl->mtk_dev, netif_id, skb);
+
+ cond_resched();
+ }
+
+ return 0;
+}
+
+static int dpmaif_update_bat_wr_idx(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned char q_num, const unsigned int bat_cnt)
+{
+ unsigned short old_rl_idx, new_wr_idx, old_wr_idx;
+ struct dpmaif_bat_request *bat_req;
+ struct dpmaif_rx_queue *rxq;
+
+ rxq = &dpmaif_ctrl->rxq[q_num];
+ bat_req = rxq->bat_req;
+
+ if (!rxq->que_started) {
+ dev_err(dpmaif_ctrl->dev, "RX queue %d has not been started\n", rxq->index);
+ return -EINVAL;
+ }
+
+ old_rl_idx = bat_req->bat_release_rd_idx;
+ old_wr_idx = bat_req->bat_wr_idx;
+ new_wr_idx = old_wr_idx + bat_cnt;
+
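+ /* The write index must never catch up with the release read index,
+  * otherwise BAT entries that have not been released yet would be
+  * overwritten.
+  */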
+ if (old_rl_idx > old_wr_idx) {
+ if (new_wr_idx >= old_rl_idx) {
+ dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
+ return -EINVAL;
+ }
+ } else if (new_wr_idx >= bat_req->bat_size_cnt) {
+ new_wr_idx -= bat_req->bat_size_cnt;
+ if (new_wr_idx >= old_rl_idx) {
+ dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
+ return -EINVAL;
+ }
+ }
+
+ bat_req->bat_wr_idx = new_wr_idx;
+ return 0;
+}
+
+#define GET_SKB_BY_ENTRY(skb_entry)\
+ ((struct dpmaif_skb_info *)list_entry(skb_entry, struct dpmaif_skb_info, entry))
+
+static struct dpmaif_skb_info *alloc_and_map_skb_info(const struct dpmaif_ctrl *dpmaif_ctrl,
+ struct sk_buff *skb)
+{
+ struct dpmaif_skb_info *skb_info;
+ dma_addr_t data_bus_addr;
+ size_t data_len;
+
+ skb_info = kmalloc(sizeof(*skb_info), GFP_KERNEL);
+ if (!skb_info)
+ return NULL;
+
+ /* DMA mapping */
+ data_len = skb_data_size(skb);
+ data_bus_addr = dma_map_single(dpmaif_ctrl->dev, skb->data, data_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(dpmaif_ctrl->dev, data_bus_addr)) {
+ dev_err_ratelimited(dpmaif_ctrl->dev, "DMA mapping error\n");
+ kfree(skb_info);
+ return NULL;
+ }
+
+ INIT_LIST_HEAD(&skb_info->entry);
+ skb_info->skb = skb;
+ skb_info->data_len = data_len;
+ skb_info->data_bus_addr = data_bus_addr;
+ return skb_info;
+}
+
+static struct list_head *dpmaif_map_skb_deq(struct dpmaif_map_skb *skb_list)
+{
+ struct list_head *entry;
+
+ entry = skb_list->head.next;
+ if (!list_empty(&skb_list->head) && entry) {
+ list_del(entry);
+ skb_list->qlen--;
+ return entry;
+ }
+
+ return NULL;
+}
+
+static struct dpmaif_skb_info *dpmaif_skb_dequeue(struct dpmaif_skb_pool *pool,
+ struct dpmaif_skb_queue *queue)
+{
+ unsigned int max_len, qlen;
+ struct list_head *entry;
+ unsigned long flags;
+
+ spin_lock_irqsave(&queue->skb_list.lock, flags);
+ entry = dpmaif_map_skb_deq(&queue->skb_list);
+ max_len = queue->max_len;
+ qlen = queue->skb_list.qlen;
+ spin_unlock_irqrestore(&queue->skb_list.lock, flags);
+ if (!entry)
+ return NULL;
+
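+ /* Trigger a pool reload once the queue drops below 4/5 of its capacity */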
+ if (qlen < max_len * DPMAIF_RELOAD_TH_1 / DPMAIF_RELOAD_TH_2)
+ queue_work(pool->reload_work_queue, &pool->reload_work);
+
+ return GET_SKB_BY_ENTRY(entry);
+}
+
+static struct dpmaif_skb_info *dpmaif_dev_alloc_skb(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned int size)
+{
+ struct dpmaif_skb_info *skb_info;
+ struct sk_buff *skb;
+
+ skb = __dev_alloc_skb(size, GFP_KERNEL);
+ if (!skb)
+ return NULL;
+
+ skb_info = alloc_and_map_skb_info(dpmaif_ctrl, skb);
+ if (!skb_info)
+ dev_kfree_skb_any(skb);
+
+ return skb_info;
+}
+
+static struct dpmaif_skb_info *dpmaif_alloc_skb(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned int size)
+{
+ unsigned int i;
+
+ if (size > DPMAIF_HW_BAT_PKTBUF)
+ return dpmaif_dev_alloc_skb(dpmaif_ctrl, size);
+
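+ /* Walk the pool queues from the smallest buffer size upwards and use the
+  * first one large enough for the requested size; fall back to a direct
+  * allocation if the pool cannot provide an skb.
+  */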
+ for (i = 0; i < dpmaif_ctrl->skb_pool.queue_cnt; i++) {
+ struct dpmaif_skb_info *skb_info;
+ struct dpmaif_skb_queue *queue;
+
+ queue = &dpmaif_ctrl->skb_pool.queue[DPMA_SKB_QUEUE_CNT - 1 - i];
+
+ if (size <= queue->size) {
+ skb_info = dpmaif_skb_dequeue(&dpmaif_ctrl->skb_pool, queue);
+ if (skb_info && skb_info->skb)
+ return skb_info;
+
+ kfree(skb_info);
+ return dpmaif_dev_alloc_skb(dpmaif_ctrl, size);
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * dpmaif_rx_buf_alloc() - Allocates buffers for the BAT ring
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure
+ * @bat_req: Pointer to BAT request structure
+ * @q_num: Queue number
+ * @buf_cnt: Number of buffers to allocate
+ * @first_time: Indicates if the ring is being populated for the first time
+ *
+ * Allocate the SKBs and store the start address of the data buffers in the BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return: 0 on success, a negative error code on failure
+ */
+int dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req,
+ const unsigned char q_num, const unsigned int buf_cnt,
+ const bool first_time)
+{
+ unsigned int alloc_cnt, i, hw_wr_idx;
+ unsigned int bat_cnt, bat_max_cnt;
+ unsigned short bat_start_idx;
+ int ret;
+
+ alloc_cnt = buf_cnt;
+ if (!alloc_cnt || alloc_cnt > bat_req->bat_size_cnt)
+ return -EINVAL;
+
+ /* check BAT buffer space */
+ bat_max_cnt = bat_req->bat_size_cnt;
+ bat_cnt = ring_buf_read_write_count(bat_max_cnt, bat_req->bat_release_rd_idx,
+ bat_req->bat_wr_idx, false);
+
+ if (alloc_cnt > bat_cnt)
+ return -ENOMEM;
+
+ bat_start_idx = bat_req->bat_wr_idx;
+
+ /* Set buffer to be used */
+ for (i = 0; i < alloc_cnt; i++) {
+ unsigned short cur_bat_idx = bat_start_idx + i;
+ struct dpmaif_bat_skb *cur_skb;
+ struct dpmaif_bat *cur_bat;
+
+ if (cur_bat_idx >= bat_max_cnt)
+ cur_bat_idx -= bat_max_cnt;
+
+ cur_skb = (struct dpmaif_bat_skb *)bat_req->bat_skb_ptr + cur_bat_idx;
+ if (!cur_skb->skb) {
+ struct dpmaif_skb_info *skb_info;
+
+ skb_info = dpmaif_alloc_skb(dpmaif_ctrl, bat_req->pkt_buf_sz);
+ if (!skb_info)
+ break;
+
+ cur_skb->skb = skb_info->skb;
+ cur_skb->data_bus_addr = skb_info->data_bus_addr;
+ cur_skb->data_len = skb_info->data_len;
+ kfree(skb_info);
+ }
+
+ cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+ cur_bat->buffer_addr_ext = upper_32_bits(cur_skb->data_bus_addr);
+ cur_bat->p_buffer_addr = lower_32_bits(cur_skb->data_bus_addr);
+ }
+
+ if (!i)
+ return -ENOMEM;
+
+ ret = dpmaif_update_bat_wr_idx(dpmaif_ctrl, q_num, i);
+ if (ret)
+ return ret;
+
+ if (!first_time) {
+ ret = dpmaif_dl_add_bat_cnt(dpmaif_ctrl, q_num, i);
+ if (ret)
+ return ret;
+
+ hw_wr_idx = dpmaif_dl_get_bat_wridx(&dpmaif_ctrl->hif_hw_info, DPF_RX_QNO_DFT);
+ if (hw_wr_idx != bat_req->bat_wr_idx) {
+ ret = -EFAULT;
+ dev_err(dpmaif_ctrl->dev, "Write index mismatch in Rx ring\n");
+ }
+ }
+
+ return ret;
+}
+
+static int dpmaifq_release_rx_pit_entry(struct dpmaif_rx_queue *rxq,
+ const unsigned short rel_entry_num)
+{
+ unsigned short old_sw_rel_idx, new_sw_rel_idx, old_hw_wr_idx;
+ int ret;
+
+ if (!rxq->que_started)
+ return 0;
+
+ if (rel_entry_num >= rxq->pit_size_cnt) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Invalid PIT entry release index.\n");
+ return -EINVAL;
+ }
+
+ old_sw_rel_idx = rxq->pit_release_rd_idx;
+ new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+
+ old_hw_wr_idx = rxq->pit_wr_idx;
+
+ if (old_hw_wr_idx < old_sw_rel_idx && new_sw_rel_idx >= rxq->pit_size_cnt)
+ new_sw_rel_idx = new_sw_rel_idx - rxq->pit_size_cnt;
+
+ ret = dpmaif_dl_add_dlq_pit_remain_cnt(rxq->dpmaif_ctrl, rxq->index, rel_entry_num);
+
+ if (ret) {
+ dev_err(rxq->dpmaif_ctrl->dev,
+ "PIT release failure, PIT-r/w/rel-%d,%d%d; rel_entry_num = %d, ret=%d\n",
+ rxq->pit_rd_idx, rxq->pit_wr_idx,
+ rxq->pit_release_rd_idx, rel_entry_num, ret);
+ return ret;
+ }
+ rxq->pit_release_rd_idx = new_sw_rel_idx;
+ return 0;
+}
+
+static void dpmaif_set_bat_mask(struct device *dev,
+ struct dpmaif_bat_request *bat_req, unsigned int idx)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&bat_req->mask_lock, flags);
+ if (!bat_req->bat_mask[idx])
+ bat_req->bat_mask[idx] = 1;
+ else
+ dev_err(dev, "%s BAT mask was already set\n",
+ bat_req->type == BAT_TYPE_NORMAL ? "Normal" : "Frag");
+
+ spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+}
+
+static int frag_bat_cur_bid_check(struct dpmaif_rx_queue *rxq,
+ const unsigned int cur_bid)
+{
+ struct dpmaif_bat_request *bat_frag;
+ struct page *page;
+
+ bat_frag = rxq->bat_frag;
+
+ if (cur_bid >= DPMAIF_FRG_COUNT) {
+ dev_err(rxq->dpmaif_ctrl->dev, "frag BAT cur_bid[%d] err\n", cur_bid);
+ return -EINVAL;
+ }
+
+ page = ((struct dpmaif_bat_page *)bat_frag->bat_skb_ptr + cur_bid)->page;
+ if (!page)
+ return -EINVAL;
+
+ return 0;
+}
+
+/**
+ * dpmaif_rx_frag_alloc() - Allocates buffers for the Fragment BAT ring
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure
+ * @bat_req: Pointer to BAT request structure
+ * @q_num: Queue number
+ * @buf_cnt: Number of buffers to allocate
+ * @first_time: Indicates if the ring is being populated for the first time
+ *
+ * Fragment BAT is used when the received packet won't fit in a regular BAT entry.
+ * This function allocates a page fragment and stores the start address of the page
+ * in the Fragment BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return: 0 on success, a negative error code on failure
+ */
+int dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ unsigned char q_num, const unsigned int buf_cnt, const bool first_time)
+{
+ unsigned short cur_bat_idx;
+ unsigned int buf_space;
+ int i, ret = 0;
+
+ if (!buf_cnt || buf_cnt > bat_req->bat_size_cnt)
+ return -EINVAL;
+
+ buf_space = ring_buf_read_write_count(bat_req->bat_size_cnt, bat_req->bat_release_rd_idx,
+ bat_req->bat_wr_idx, false);
+
+ if (buf_cnt > buf_space) {
+ dev_err(dpmaif_ctrl->dev,
+ "Requested more buffers than the space available in Rx frag ring\n");
+ return -EINVAL;
+ }
+
+ cur_bat_idx = bat_req->bat_wr_idx;
+
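+ /* Fill the empty slots with page fragments and write the DMA addresses
+  * into the BAT entries used by the HW.
+  */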
+ for (i = 0; i < buf_cnt; i++) {
+ struct dpmaif_bat_page *cur_page;
+ struct dpmaif_bat *cur_bat;
+ dma_addr_t data_base_addr;
+
+ cur_page = (struct dpmaif_bat_page *)bat_req->bat_skb_ptr + cur_bat_idx;
+ if (!cur_page->page) {
+ unsigned long offset;
+ struct page *page;
+ void *data;
+
+ data = netdev_alloc_frag(bat_req->pkt_buf_sz);
+ if (!data)
+ break;
+
+ page = virt_to_head_page(data);
+ offset = data - page_address(page);
+ data_base_addr = dma_map_page(dpmaif_ctrl->dev, page, offset,
+ bat_req->pkt_buf_sz, DMA_FROM_DEVICE);
+
+ if (dma_mapping_error(dpmaif_ctrl->dev, data_base_addr)) {
+ dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+ put_page(virt_to_head_page(data));
+ break;
+ }
+
+ /* record frag information */
+ cur_page->page = page;
+ cur_page->data_bus_addr = data_base_addr;
+ cur_page->offset = offset;
+ cur_page->data_len = bat_req->pkt_buf_sz;
+ }
+
+ /* set to dpmaif HW */
+ data_base_addr = cur_page->data_bus_addr;
+ cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+ cur_bat->buffer_addr_ext = upper_32_bits(data_base_addr);
+ cur_bat->p_buffer_addr = lower_32_bits(data_base_addr);
+
+ cur_bat_idx = ring_buf_get_next_wrdx(bat_req->bat_size_cnt, cur_bat_idx);
+ }
+
+ if (i < buf_cnt)
+ return -ENOMEM;
+
+ bat_req->bat_wr_idx = cur_bat_idx;
+
+ /* all rxq share one frag_bat table */
+ q_num = DPF_RX_QNO_DFT;
+
+ /* notify to HW */
+ if (!first_time)
+ dpmaif_dl_add_frg_cnt(dpmaif_ctrl, q_num, i);
+
+ return ret;
+}
+
+static int dpmaif_set_rx_frag_to_skb(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ const struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ unsigned long long data_bus_addr, data_base_addr;
+ struct dpmaif_bat_page *cur_page_info;
+ struct sk_buff *base_skb;
+ unsigned int data_len;
+ struct device *dev;
+ struct page *page;
+ int data_offset;
+
+ base_skb = rx_skb_info->cur_skb;
+ dev = rxq->dpmaif_ctrl->dev;
+ cur_page_info = rxq->bat_frag->bat_skb_ptr;
+ cur_page_info += normal_pit_bid(pkt_info);
+ page = cur_page_info->page;
+
+ /* rx current frag data unmapping */
+ dma_unmap_page(dev, cur_page_info->data_bus_addr,
+ cur_page_info->data_len, DMA_FROM_DEVICE);
+ if (!page) {
+ dev_err(dev, "frag check fail, bid:%d", normal_pit_bid(pkt_info));
+ return -EINVAL;
+ }
+
+ /* Calculate the data address and length */
+ data_bus_addr = pkt_info->data_addr_ext;
+ data_bus_addr = (data_bus_addr << 32) + pkt_info->p_data_addr;
+ data_base_addr = (unsigned long long)cur_page_info->data_bus_addr;
+ data_offset = (int)(data_bus_addr - data_base_addr);
+
+ data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, pkt_info->pit_header);
+
+ /* record to skb for user: fragment data to nr_frags */
+ skb_add_rx_frag(base_skb, skb_shinfo(base_skb)->nr_frags, page,
+ cur_page_info->offset + data_offset, data_len, cur_page_info->data_len);
+
+ cur_page_info->page = NULL;
+ cur_page_info->offset = 0;
+ cur_page_info->data_len = 0;
+ return 0;
+}
+
+static int dpmaif_get_rx_frag(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ const struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ unsigned int cur_bid;
+ int ret;
+
+ cur_bid = normal_pit_bid(pkt_info);
+
+ /* check frag bid */
+ ret = frag_bat_cur_bid_check(rxq, cur_bid);
+ if (ret < 0)
+ return ret;
+
+ /* set frag data to cur_skb skb_shinfo->frags[] */
+ ret = dpmaif_set_rx_frag_to_skb(rxq, pkt_info, rx_skb_info);
+ if (ret < 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "dpmaif_set_rx_frag_to_skb fail, ret = %d\n", ret);
+ return ret;
+ }
+
+ dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_frag, cur_bid);
+ return 0;
+}
+
+static inline int bat_cur_bid_check(struct dpmaif_rx_queue *rxq,
+ const unsigned int cur_bid)
+{
+ struct dpmaif_bat_skb *bat_req;
+ struct dpmaif_bat_skb *bat_skb;
+
+ bat_req = rxq->bat_req->bat_skb_ptr;
+ bat_skb = bat_req + cur_bid;
+
+ if (cur_bid >= DPMAIF_BAT_COUNT || !bat_skb->skb)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int dpmaif_check_read_pit_seq(const struct dpmaif_normal_pit *pit)
+{
+ return FIELD_GET(NORMAL_PIT_PIT_SEQ, pit->pit_footer);
+}
+
+static int dpmaif_check_pit_seq(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pit)
+{
+ unsigned int cur_pit_seq, expect_pit_seq = rxq->expect_pit_seq;
+
+ /* Running in soft interrupt therefore cannot sleep */
+ if (read_poll_timeout_atomic(dpmaif_check_read_pit_seq, cur_pit_seq,
+ cur_pit_seq == expect_pit_seq, DPMAIF_POLL_PIT_TIME_US,
+ DPMAIF_POLL_PIT_MAX_TIME_US, false, pit))
+ return -EFAULT;
+
+ rxq->expect_pit_seq++;
+ if (rxq->expect_pit_seq >= DPMAIF_DL_PIT_SEQ_VALUE)
+ rxq->expect_pit_seq = 0;
+
+ return 0;
+}
+
+static unsigned int dpmaif_avail_pkt_bat_cnt(struct dpmaif_bat_request *bat_req)
+{
+ unsigned long flags;
+ unsigned int i;
+
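+ /* Count the consecutive BAT entries, starting at the release read index,
+  * that are already masked (consumed) and can therefore be released.
+  */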
+ spin_lock_irqsave(&bat_req->mask_lock, flags);
+ for (i = 0; i < bat_req->bat_size_cnt; i++) {
+ unsigned int index = bat_req->bat_release_rd_idx + i;
+
+ if (index >= bat_req->bat_size_cnt)
+ index -= bat_req->bat_size_cnt;
+
+ if (!bat_req->bat_mask[index])
+ break;
+ }
+
+ spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+
+ return i;
+}
+
+static int dpmaif_release_dl_bat_entry(const struct dpmaif_rx_queue *rxq,
+ const unsigned int rel_entry_num,
+ const enum bat_type buf_type)
+{
+ unsigned short old_sw_rel_idx, new_sw_rel_idx, hw_rd_idx;
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_bat_request *bat;
+ unsigned long flags;
+ unsigned int i;
+
+ dpmaif_ctrl = rxq->dpmaif_ctrl;
+
+ if (!rxq->que_started)
+ return -EINVAL;
+
+ if (buf_type == BAT_TYPE_FRAG) {
+ bat = rxq->bat_frag;
+ hw_rd_idx = dpmaif_dl_get_frg_ridx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ } else {
+ bat = rxq->bat_req;
+ hw_rd_idx = dpmaif_dl_get_bat_ridx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ }
+
+ if (rel_entry_num >= bat->bat_size_cnt || !rel_entry_num)
+ return -EINVAL;
+
+ old_sw_rel_idx = bat->bat_release_rd_idx;
+ new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+
+ /* The queue is empty, nothing to release */
+ if (bat->bat_wr_idx == old_sw_rel_idx)
+ return 0;
+
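+ /* The release index may only advance up to the index already read by the
+  * HW; entries beyond it have not been consumed yet.
+  */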
+ if (hw_rd_idx >= old_sw_rel_idx) {
+ if (new_sw_rel_idx > hw_rd_idx)
+ return -EINVAL;
+ } else if (new_sw_rel_idx >= bat->bat_size_cnt) {
+ new_sw_rel_idx -= bat->bat_size_cnt;
+ if (new_sw_rel_idx > hw_rd_idx)
+ return -EINVAL;
+ }
+
+ /* reset BAT mask value */
+ spin_lock_irqsave(&bat->mask_lock, flags);
+ for (i = 0; i < rel_entry_num; i++) {
+ unsigned int index = bat->bat_release_rd_idx + i;
+
+ if (index >= bat->bat_size_cnt)
+ index -= bat->bat_size_cnt;
+
+ bat->bat_mask[index] = 0;
+ }
+
+ spin_unlock_irqrestore(&bat->mask_lock, flags);
+ bat->bat_release_rd_idx = new_sw_rel_idx;
+
+ return rel_entry_num;
+}
+
+static int dpmaif_dl_pit_release_and_add(struct dpmaif_rx_queue *rxq)
+{
+ int ret;
+
+ if (rxq->pit_remain_release_cnt < DPMAIF_PIT_CNT_THRESHOLD)
+ return 0;
+
+ ret = dpmaifq_release_rx_pit_entry(rxq, rxq->pit_remain_release_cnt);
+ if (ret)
+ dev_err(rxq->dpmaif_ctrl->dev, "release PIT fail\n");
+ else
+ rxq->pit_remain_release_cnt = 0;
+
+ return ret;
+}
+
+static int dpmaif_dl_pkt_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+ unsigned int bid_cnt;
+ int ret;
+
+ bid_cnt = dpmaif_avail_pkt_bat_cnt(rxq->bat_req);
+
+ if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+ return 0;
+
+ ret = dpmaif_release_dl_bat_entry(rxq, bid_cnt, BAT_TYPE_NORMAL);
+ if (ret <= 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "release PKT BAT failed, ret:%d\n", ret);
+ return ret;
+ }
+
+ ret = dpmaif_rx_buf_alloc(rxq->dpmaif_ctrl, rxq->bat_req, rxq->index, bid_cnt, false);
+
+ if (ret < 0)
+ dev_err(rxq->dpmaif_ctrl->dev, "allocate new RX buffer failed, ret:%d\n", ret);
+
+ return ret;
+}
+
+static int dpmaif_dl_frag_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+ unsigned int bid_cnt;
+ int ret;
+
+ bid_cnt = dpmaif_avail_pkt_bat_cnt(rxq->bat_frag);
+
+ if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+ return 0;
+
+ ret = dpmaif_release_dl_bat_entry(rxq, bid_cnt, BAT_TYPE_FRAG);
+ if (ret <= 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "release PKT BAT failed, ret:%d\n", ret);
+ return ret;
+ }
+
+ return dpmaif_rx_frag_alloc(rxq->dpmaif_ctrl, rxq->bat_frag, rxq->index, bid_cnt, false);
+}
+
+static inline void dpmaif_rx_msg_pit(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_msg_pit *msg_pit,
+ struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ rx_skb_info->cur_chn_idx = FIELD_GET(MSG_PIT_CHANNEL_ID, msg_pit->dword1);
+ rx_skb_info->check_sum = FIELD_GET(MSG_PIT_CHECKSUM, msg_pit->dword1);
+ rx_skb_info->pit_dp = FIELD_GET(MSG_PIT_DP, msg_pit->dword1);
+}
+
+static int dpmaif_rx_set_data_to_skb(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ unsigned long long data_bus_addr, data_base_addr;
+ struct dpmaif_bat_skb *cur_bat_skb;
+ struct sk_buff *new_skb;
+ unsigned int data_len;
+ struct device *dev;
+ int data_offset;
+
+ dev = rxq->dpmaif_ctrl->dev;
+ cur_bat_skb = rxq->bat_req->bat_skb_ptr;
+ cur_bat_skb += normal_pit_bid(pkt_info);
+ dma_unmap_single(dev, cur_bat_skb->data_bus_addr, cur_bat_skb->data_len, DMA_FROM_DEVICE);
+
+ /* Calculate the data address and length: the packet's physical address
+  * lies inside the buffer pointed to by the BID.
+  */
+ data_bus_addr = pkt_info->data_addr_ext;
+ data_bus_addr = (data_bus_addr << 32) + pkt_info->p_data_addr;
+ data_base_addr = (unsigned long long)cur_bat_skb->data_bus_addr;
+ data_offset = (int)(data_bus_addr - data_base_addr);
+
+ data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, pkt_info->pit_header);
+ /* Reuse the skb mapped to this BAT entry to wrap the packet data
+  * before it is enqueued for the user.
+  */
+ new_skb = cur_bat_skb->skb;
+ new_skb->len = 0;
+ skb_reset_tail_pointer(new_skb);
+ /* set data len, data pointer */
+ skb_reserve(new_skb, data_offset);
+
+ if (new_skb->tail + data_len > new_skb->end) {
+ dev_err(dev, "No buffer space available\n");
+ return -ENOBUFS;
+ }
+
+ skb_put(new_skb, data_len);
+
+ rx_skb_info->cur_skb = new_skb;
+ cur_bat_skb->skb = NULL;
+ return 0;
+}
+
+static int dpmaif_get_rx_pkt(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ unsigned int cur_bid;
+ int ret;
+
+ cur_bid = normal_pit_bid(pkt_info);
+ ret = bat_cur_bid_check(rxq, cur_bid);
+ if (ret < 0)
+ return ret;
+
+ ret = dpmaif_rx_set_data_to_skb(rxq, pkt_info, rx_skb_info);
+ if (ret < 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "rx set data to skb failed, ret = %d\n", ret);
+ return ret;
+ }
+
+ dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_req, cur_bid);
+ return 0;
+}
+
+static int dpmaifq_rx_notify_hw(struct dpmaif_rx_queue *rxq)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ int ret;
+
+ dpmaif_ctrl = rxq->dpmaif_ctrl;
+
+ /* normal BAT release and add */
+ queue_work(dpmaif_ctrl->bat_release_work_queue, &dpmaif_ctrl->bat_release_work);
+
+ ret = dpmaif_dl_pit_release_and_add(rxq);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev, "dlq%d update PIT fail\n", rxq->index);
+
+ return ret;
+}
+
+static void dpmaif_rx_skb(struct dpmaif_rx_queue *rxq, struct dpmaif_cur_rx_skb_info *rx_skb_info)
+{
+ struct sk_buff *new_skb;
+ u32 *lhif_header;
+
+ new_skb = rx_skb_info->cur_skb;
+ rx_skb_info->cur_skb = NULL;
+
+ if (rx_skb_info->pit_dp) {
+ dev_kfree_skb_any(new_skb);
+ return;
+ }
+
+ dpmaif_set_skb_cs_type(rx_skb_info->check_sum, new_skb);
+
+ /* The channel index comes from the MSG PIT, so prepend it here as an
+  * LHIF header for the RX push thread to recover the netif ID.
+  */
+ lhif_header = skb_push(new_skb, sizeof(*lhif_header));
+ *lhif_header &= ~LHIF_HEADER_NETIF;
+ *lhif_header |= FIELD_PREP(LHIF_HEADER_NETIF, rx_skb_info->cur_chn_idx);
+
+ /* add data to rx thread skb list */
+ ccci_skb_enqueue(&rxq->skb_queue, new_skb);
+}
+
+static int dpmaif_rx_start(struct dpmaif_rx_queue *rxq, const unsigned short pit_cnt,
+ const unsigned long timeout)
+{
+ struct dpmaif_cur_rx_skb_info *cur_rx_skb_info;
+ unsigned short rx_cnt, recv_skb_cnt = 0;
+ unsigned int cur_pit, pit_len;
+ int ret = 0, ret_hw = 0;
+ struct device *dev;
+
+ dev = rxq->dpmaif_ctrl->dev;
+ if (!rxq->pit_base)
+ return -EAGAIN;
+
+ pit_len = rxq->pit_size_cnt;
+ cur_rx_skb_info = &rxq->rx_data_info;
+ cur_pit = rxq->pit_rd_idx;
+
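+ /* Each packet starts with a MSG PIT carrying channel and checksum info,
+  * followed by one or more PD PITs with the payload (BAT skb data or page
+  * frags); the last PD PIT (CONT bit cleared) completes the skb and queues
+  * it for the RX push thread.
+  */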
+ for (rx_cnt = 0; rx_cnt < pit_cnt; rx_cnt++) {
+ struct dpmaif_normal_pit *pkt_info;
+
+ if (!cur_rx_skb_info->msg_pit_received && time_after_eq(jiffies, timeout))
+ break;
+
+ pkt_info = (struct dpmaif_normal_pit *)rxq->pit_base + cur_pit;
+
+ if (dpmaif_check_pit_seq(rxq, pkt_info)) {
+ dev_err_ratelimited(dev, "dlq%u checks PIT SEQ fail\n", rxq->index);
+ return -EAGAIN;
+ }
+
+ if (FIELD_GET(NORMAL_PIT_PACKET_TYPE, pkt_info->pit_header) == DES_PT_MSG) {
+ if (cur_rx_skb_info->msg_pit_received)
+ dev_err(dev, "dlq%u receive repeat msg_pit err\n", rxq->index);
+ cur_rx_skb_info->msg_pit_received = true;
+ dpmaif_rx_msg_pit(rxq, (struct dpmaif_msg_pit *)pkt_info,
+ cur_rx_skb_info);
+ } else { /* DES_PT_PD */
+ if (FIELD_GET(NORMAL_PIT_BUFFER_TYPE, pkt_info->pit_header) !=
+ PKT_BUF_FRAG) {
+ /* skb->data: add to skb ptr && record ptr */
+ ret = dpmaif_get_rx_pkt(rxq, pkt_info, cur_rx_skb_info);
+ } else if (!cur_rx_skb_info->cur_skb) {
+ /* msg + frag PIT, no data pkt received */
+ dev_err(dev,
+ "rxq%d skb_idx < 0 PIT/BAT = %d, %d; buf: %ld; %d, %d\n",
+ rxq->index, cur_pit, normal_pit_bid(pkt_info),
+ FIELD_GET(NORMAL_PIT_BUFFER_TYPE, pkt_info->pit_header),
+ rx_cnt, pit_cnt);
+ ret = -EINVAL;
+ } else {
+ /* skb->frags[]: add to frags[] */
+ ret = dpmaif_get_rx_frag(rxq, pkt_info, cur_rx_skb_info);
+ }
+
+ /* check rx status */
+ if (ret < 0) {
+ cur_rx_skb_info->err_payload = 1;
+ dev_err_ratelimited(dev, "rxq%d error payload\n", rxq->index);
+ }
+
+ if (!FIELD_GET(NORMAL_PIT_CONT, pkt_info->pit_header)) {
+ if (!cur_rx_skb_info->err_payload) {
+ dpmaif_rx_skb(rxq, cur_rx_skb_info);
+ } else if (cur_rx_skb_info->cur_skb) {
+ dev_kfree_skb_any(cur_rx_skb_info->cur_skb);
+ cur_rx_skb_info->cur_skb = NULL;
+ }
+
+ /* reinit cur_rx_skb_info */
+ memset(cur_rx_skb_info, 0, sizeof(*cur_rx_skb_info));
+ recv_skb_cnt++;
+ if (!(recv_skb_cnt & DPMAIF_RX_PUSH_THRESHOLD_MASK)) {
+ wake_up_all(&rxq->rx_wq);
+ recv_skb_cnt = 0;
+ }
+ }
+ }
+
+ /* Advance to the next PIT entry */
+ cur_pit = ring_buf_get_next_wrdx(pit_len, cur_pit);
+ rxq->pit_rd_idx = cur_pit;
+
+ /* Accumulate PIT entries to release and notify the HW periodically */
+ rxq->pit_remain_release_cnt++;
+ if (rx_cnt > 0 && !(rx_cnt % DPMAIF_NOTIFY_RELEASE_COUNT)) {
+ ret_hw = dpmaifq_rx_notify_hw(rxq);
+ if (ret_hw < 0)
+ break;
+ }
+ }
+
+ /* Wake up the RX push thread for any remaining skbs */
+ if (recv_skb_cnt)
+ wake_up_all(&rxq->rx_wq);
+
+ /* update to HW */
+ if (!ret_hw)
+ ret_hw = dpmaifq_rx_notify_hw(rxq);
+
+ if (ret_hw < 0 && !ret)
+ ret = ret_hw;
+
+ return ret < 0 ? ret : rx_cnt;
+}
+
+static unsigned int dpmaifq_poll_rx_pit(struct dpmaif_rx_queue *rxq)
+{
+ unsigned short hw_wr_idx;
+ unsigned int pit_cnt;
+
+ if (!rxq->que_started)
+ return 0;
+
+ hw_wr_idx = dpmaif_dl_dlq_pit_get_wridx(&rxq->dpmaif_ctrl->hif_hw_info, rxq->index);
+ pit_cnt = ring_buf_read_write_count(rxq->pit_size_cnt, rxq->pit_rd_idx, hw_wr_idx, true);
+ rxq->pit_wr_idx = hw_wr_idx;
+ return pit_cnt;
+}
+
+static int dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned char q_num, const int budget)
+{
+ struct dpmaif_rx_queue *rxq;
+ unsigned long time_limit;
+ unsigned int cnt;
+
+ time_limit = jiffies + msecs_to_jiffies(DPMAIF_WQ_TIME_LIMIT_MS);
+ rxq = &dpmaif_ctrl->rxq[q_num];
+
+ do {
+ cnt = dpmaifq_poll_rx_pit(rxq);
+ if (cnt) {
+ unsigned int rd_cnt = (cnt > budget) ? budget : cnt;
+ int real_cnt = dpmaif_rx_start(rxq, rd_cnt, time_limit);
+
+ if (real_cnt < 0) {
+ if (real_cnt != -EAGAIN)
+ dev_err(dpmaif_ctrl->dev, "dlq%u rx err: %d\n",
+ rxq->index, real_cnt);
+ return real_cnt;
+ } else if (real_cnt < cnt) {
+ return -EAGAIN;
+ }
+ }
+ } while (cnt);
+
+ return 0;
+}
+
+static void dpmaif_do_rx(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_rx_queue *rxq)
+{
+ int ret;
+
+ ret = dpmaif_rx_data_collect(dpmaif_ctrl, rxq->index, rxq->budget);
+ if (ret < 0) {
+ /* Rx done and empty interrupt will be enabled in workqueue */
+ /* Try one more time */
+ queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ } else {
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ dpmaif_hw_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ }
+}
+
+static void dpmaif_rxq_work(struct work_struct *work)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_rx_queue *rxq;
+
+ rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
+ dpmaif_ctrl = rxq->dpmaif_ctrl;
+
+ atomic_set(&rxq->rx_processing, 1);
+ /* Ensure rx_processing is changed to 1 before actually begin RX flow */
+ smp_mb();
+
+ if (!rxq->que_started) {
+ atomic_set(&rxq->rx_processing, 0);
+ dev_err(dpmaif_ctrl->dev, "work RXQ: %d has not been started\n", rxq->index);
+ return;
+ }
+
+ dpmaif_do_rx(dpmaif_ctrl, rxq);
+
+ atomic_set(&rxq->rx_processing, 0);
+}
+
+void dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask)
+{
+ struct dpmaif_rx_queue *rxq;
+ int qno;
+
+ /* get the queue id */
+ qno = ffs(que_mask) - 1;
+ if (qno < 0 || qno > DPMAIF_RXQ_NUM - 1) {
+ dev_err(dpmaif_ctrl->dev, "rxq number err, qno:%u\n", qno);
+ return;
+ }
+
+ rxq = &dpmaif_ctrl->rxq[qno];
+ queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+}
+
+static void dpmaif_base_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req)
+{
+ if (bat_req->bat_base)
+ dma_free_coherent(dpmaif_ctrl->dev,
+ bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+ bat_req->bat_base, bat_req->bat_bus_addr);
+}
+
+/**
+ * dpmaif_bat_alloc() - Allocate the BAT ring buffer
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure
+ * @bat_req: Pointer to BAT request structure
+ * @buf_type: BAT ring type
+ *
+ * This function allocates the BAT ring buffer shared with the HW device, and also
+ * allocates a buffer used to store information about the BAT skbs for later release.
+ *
+ * Return: 0 on success, a negative error code on failure
+ */
+int dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_bat_request *bat_req,
+ const enum bat_type buf_type)
+{
+ int sw_buf_size;
+
+ sw_buf_size = (buf_type == BAT_TYPE_FRAG) ?
+ sizeof(struct dpmaif_bat_page) : sizeof(struct dpmaif_bat_skb);
+
+ bat_req->bat_size_cnt = (buf_type == BAT_TYPE_FRAG) ? DPMAIF_FRG_COUNT : DPMAIF_BAT_COUNT;
+
+ bat_req->skb_pkt_cnt = bat_req->bat_size_cnt;
+ bat_req->pkt_buf_sz = (buf_type == BAT_TYPE_FRAG) ? DPMAIF_HW_FRG_PKTBUF : NET_RX_BUF;
+
+ bat_req->type = buf_type;
+ bat_req->bat_wr_idx = 0;
+ bat_req->bat_release_rd_idx = 0;
+
+ /* alloc buffer for HW && AP SW */
+ bat_req->bat_base = dma_alloc_coherent(dpmaif_ctrl->dev,
+ bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+ &bat_req->bat_bus_addr, GFP_KERNEL | __GFP_ZERO);
+
+ if (!bat_req->bat_base)
+ return -ENOMEM;
+
+ /* alloc buffer for AP SW to record skb information */
+ bat_req->bat_skb_ptr = devm_kzalloc(dpmaif_ctrl->dev, bat_req->skb_pkt_cnt * sw_buf_size,
+ GFP_KERNEL);
+
+ if (!bat_req->bat_skb_ptr)
+ goto err_base;
+
+ /* alloc buffer for BAT mask */
+ bat_req->bat_mask = kcalloc(bat_req->bat_size_cnt, sizeof(unsigned char), GFP_KERNEL);
+ if (!bat_req->bat_mask)
+ goto err_base;
+
+ spin_lock_init(&bat_req->mask_lock);
+ atomic_set(&bat_req->refcnt, 0);
+ return 0;
+
+err_base:
+ dpmaif_base_free(dpmaif_ctrl, bat_req);
+
+ return -ENOMEM;
+}
+
+static void unmap_bat_skb(struct device *dev, struct dpmaif_bat_skb *bat_skb_base,
+ unsigned int index)
+{
+ struct dpmaif_bat_skb *bat_skb;
+
+ bat_skb = bat_skb_base + index;
+
+ if (bat_skb->skb) {
+ dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len,
+ DMA_FROM_DEVICE);
+ kfree_skb(bat_skb->skb);
+ bat_skb->skb = NULL;
+ }
+}
+
+static void unmap_bat_page(struct device *dev, struct dpmaif_bat_page *bat_page_base,
+ unsigned int index)
+{
+ struct dpmaif_bat_page *bat_page;
+
+ bat_page = bat_page_base + index;
+
+ if (bat_page->page) {
+ dma_unmap_page(dev, bat_page->data_bus_addr, bat_page->data_len,
+ DMA_FROM_DEVICE);
+ put_page(bat_page->page);
+ bat_page->page = NULL;
+ }
+}
+
+void dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_bat_request *bat_req)
+{
+ if (!bat_req)
+ return;
+
+ if (!atomic_dec_and_test(&bat_req->refcnt))
+ return;
+
+ kfree(bat_req->bat_mask);
+ bat_req->bat_mask = NULL;
+
+ if (bat_req->bat_skb_ptr) {
+ unsigned int i;
+
+ for (i = 0; i < bat_req->bat_size_cnt; i++) {
+ if (bat_req->type == BAT_TYPE_FRAG)
+ unmap_bat_page(dpmaif_ctrl->dev, bat_req->bat_skb_ptr, i);
+ else
+ unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb_ptr, i);
+ }
+ }
+
+ dpmaif_base_free(dpmaif_ctrl, bat_req);
+}
+
+static int dpmaif_rx_alloc(struct dpmaif_rx_queue *rxq)
+{
+ /* PIT buffer init */
+ rxq->pit_size_cnt = DPMAIF_PIT_COUNT;
+ rxq->pit_rd_idx = 0;
+ rxq->pit_wr_idx = 0;
+ rxq->pit_release_rd_idx = 0;
+ rxq->expect_pit_seq = 0;
+ rxq->pit_remain_release_cnt = 0;
+
+ memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+ /* PIT allocation */
+ rxq->pit_base = dma_alloc_coherent(rxq->dpmaif_ctrl->dev,
+ rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+ &rxq->pit_bus_addr, GFP_KERNEL | __GFP_ZERO);
+
+ if (!rxq->pit_base)
+ return -ENOMEM;
+
+ /* RXQ BAT table init */
+ rxq->bat_req = &rxq->dpmaif_ctrl->bat_req;
+ atomic_inc(&rxq->bat_req->refcnt);
+
+ /* RXQ Frag BAT table init */
+ rxq->bat_frag = &rxq->dpmaif_ctrl->bat_frag;
+ atomic_inc(&rxq->bat_frag->refcnt);
+ return 0;
+}
+
+static void dpmaif_rx_buf_free(const struct dpmaif_rx_queue *rxq)
+{
+ if (!rxq->dpmaif_ctrl)
+ return;
+
+ dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+ dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+
+ if (rxq->pit_base)
+ dma_free_coherent(rxq->dpmaif_ctrl->dev,
+ rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+ rxq->pit_base, rxq->pit_bus_addr);
+}
+
+int dpmaif_rxq_alloc(struct dpmaif_rx_queue *queue)
+{
+ int ret;
+
+ /* rxq data (PIT, BAT...) allocation */
+ ret = dpmaif_rx_alloc(queue);
+ if (ret < 0) {
+ dev_err(queue->dpmaif_ctrl->dev, "rx buffer alloc fail, %d\n", ret);
+ return ret;
+ }
+
+ INIT_WORK(&queue->dpmaif_rxq_work, dpmaif_rxq_work);
+ queue->worker = alloc_workqueue("dpmaif_rx%d_worker",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ 1, queue->index);
+ if (!queue->worker)
+ return -ENOMEM;
+
+ /* rx push thread and skb list init */
+ init_waitqueue_head(&queue->rx_wq);
+ ccci_skb_queue_alloc(&queue->skb_queue, queue->bat_req->skb_pkt_cnt,
+ queue->bat_req->pkt_buf_sz, false);
+ queue->rx_thread = kthread_run(dpmaif_net_rx_push_thread,
+ queue, "dpmaif_rx%d_push", queue->index);
+ if (IS_ERR(queue->rx_thread)) {
+ dev_err(queue->dpmaif_ctrl->dev, "failed to start rx thread\n");
+ return PTR_ERR(queue->rx_thread);
+ }
+
+ return 0;
+}
+
+void dpmaif_rxq_free(struct dpmaif_rx_queue *queue)
+{
+ struct sk_buff *skb;
+
+ if (queue->worker)
+ destroy_workqueue(queue->worker);
+
+ if (queue->rx_thread)
+ kthread_stop(queue->rx_thread);
+
+ while ((skb = skb_dequeue(&queue->skb_queue.skb_list)))
+ kfree_skb(skb);
+
+ dpmaif_rx_buf_free(queue);
+}
+
+void dpmaif_skb_queue_free(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int index)
+{
+ struct dpmaif_skb_queue *queue;
+
+ queue = &dpmaif_ctrl->skb_pool.queue[index];
+ while (!list_empty(&queue->skb_list.head)) {
+ struct list_head *entry = dpmaif_map_skb_deq(&queue->skb_list);
+
+ if (entry) {
+ struct dpmaif_skb_info *skb_info = GET_SKB_BY_ENTRY(entry);
+
+ if (skb_info) {
+ if (skb_info->skb) {
+ dev_kfree_skb_any(skb_info->skb);
+ skb_info->skb = NULL;
+ }
+
+ kfree(skb_info);
+ }
+ }
+ }
+}
+
+static void skb_reload_work(struct work_struct *work)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_skb_pool *pool;
+ unsigned int i;
+
+ pool = container_of(work, struct dpmaif_skb_pool, reload_work);
+ dpmaif_ctrl = container_of(pool, struct dpmaif_ctrl, skb_pool);
+
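+ /* Top up every queue that has fallen below the 4/5 threshold back to its
+  * maximum length; stop as soon as an allocation fails.
+  */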
+ for (i = 0; i < DPMA_SKB_QUEUE_CNT; i++) {
+ struct dpmaif_skb_queue *queue = &pool->queue[i];
+ unsigned int cnt, qlen, size, max_cnt;
+ unsigned long flags;
+
+ spin_lock_irqsave(&queue->skb_list.lock, flags);
+ size = queue->size;
+ max_cnt = queue->max_len;
+ qlen = queue->skb_list.qlen;
+ spin_unlock_irqrestore(&queue->skb_list.lock, flags);
+
+ if (qlen >= max_cnt * DPMAIF_RELOAD_TH_1 / DPMAIF_RELOAD_TH_2)
+ continue;
+
+ cnt = max_cnt - qlen;
+ while (cnt--) {
+ struct dpmaif_skb_info *skb_info;
+
+ skb_info = dpmaif_dev_alloc_skb(dpmaif_ctrl, size);
+ if (!skb_info)
+ return;
+
+ spin_lock_irqsave(&queue->skb_list.lock, flags);
+ list_add_tail(&skb_info->entry, &queue->skb_list.head);
+ queue->skb_list.qlen++;
+ spin_unlock_irqrestore(&queue->skb_list.lock, flags);
+ }
+ }
+}
+
+static int dpmaif_skb_queue_init_struct(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned int index)
+{
+ struct dpmaif_skb_queue *queue;
+ unsigned int max_cnt, size;
+
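+ /* Queue N holds buffers of DPMAIF_HW_BAT_PKTBUF >> N bytes; every queue
+  * gets the same overall memory budget (DPMAIF_SKB_Q_SIZE).
+  */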
+ queue = &dpmaif_ctrl->skb_pool.queue[index];
+ size = DPMAIF_HW_BAT_PKTBUF / BIT(index);
+ max_cnt = DPMAIF_SKB_Q_SIZE / DPMAIF_SKB_SIZE(size);
+
+ if (size < DPMAIF_SKB_SIZE_MIN)
+ return -EINVAL;
+
+ INIT_LIST_HEAD(&queue->skb_list.head);
+ spin_lock_init(&queue->skb_list.lock);
+ queue->skb_list.qlen = 0;
+ queue->size = size;
+ queue->max_len = max_cnt;
+ return 0;
+}
+
+/**
+ * dpmaif_skb_pool_init() - Initialize DPMAIF SKB pool
+ * @dpmaif_ctrl: Pointer to data path control struct dpmaif_ctrl
+ *
+ * Initialize the DPMAIF SKB queue structures, including the SKB pool, reload work
+ * and workqueue.
+ *
+ * Return: Zero on success and negative errno on failure
+ */
+int dpmaif_skb_pool_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_skb_pool *pool;
+ int i;
+
+ pool = &dpmaif_ctrl->skb_pool;
+ pool->queue_cnt = DPMA_SKB_QUEUE_CNT;
+
+ /* init the skb queue structure */
+ for (i = 0; i < pool->queue_cnt; i++) {
+ int ret = dpmaif_skb_queue_init_struct(dpmaif_ctrl, i);
+
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Init skb_queue:%d fail\n", i);
+ return ret;
+ }
+ }
+
+ /* init pool reload work */
+ pool->reload_work_queue = alloc_workqueue("dpmaif_skb_pool_reload_work",
+ WQ_MEM_RECLAIM | WQ_HIGHPRI, 1);
+ if (!pool->reload_work_queue) {
+ dev_err(dpmaif_ctrl->dev, "Create the reload_work_queue fail\n");
+ return -ENOMEM;
+ }
+
+ INIT_WORK(&pool->reload_work, skb_reload_work);
+ queue_work(pool->reload_work_queue, &pool->reload_work);
+
+ return 0;
+}
+
+static void dpmaif_bat_release_work(struct work_struct *work)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_rx_queue *rxq;
+
+ dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);
+
+ /* All RX queues share one BAT table, so use DPF_RX_QNO_DFT */
+ rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
+
+ /* normal BAT release and add */
+ dpmaif_dl_pkt_bat_release_and_add(rxq);
+ /* frag BAT release and add */
+ dpmaif_dl_frag_bat_release_and_add(rxq);
+}
+
+int dpmaif_bat_release_work_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_ctrl->bat_release_work_queue =
+ alloc_workqueue("dpmaif_bat_release_work_queue", WQ_MEM_RECLAIM, 1);
+
+ if (!dpmaif_ctrl->bat_release_work_queue)
+ return -ENOMEM;
+
+ INIT_WORK(&dpmaif_ctrl->bat_release_work, dpmaif_bat_release_work);
+
+ return 0;
+}
+
+void dpmaif_bat_release_work_free(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ flush_work(&dpmaif_ctrl->bat_release_work);
+
+ if (dpmaif_ctrl->bat_release_work_queue) {
+ destroy_workqueue(dpmaif_ctrl->bat_release_work_queue);
+ dpmaif_ctrl->bat_release_work_queue = NULL;
+ }
+}
+
+/**
+ * dpmaif_suspend_rx_sw_stop() - Suspend RX flow
+ * @dpmaif_ctrl: Pointer to data path control struct dpmaif_ctrl
+ *
+ * Wait for all the RX work to finish executing and mark the RX queues as paused.
+ */
+void dpmaif_suspend_rx_sw_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ unsigned int i;
+
+ /* Disable DL/RX SW active */
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ struct dpmaif_rx_queue *rxq;
+ int timeout, value;
+
+ rxq = &dpmaif_ctrl->rxq[i];
+ flush_work(&rxq->dpmaif_rxq_work);
+
+ timeout = readx_poll_timeout_atomic(atomic_read, &rxq->rx_processing, value,
+ !value, 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+
+ if (timeout)
+ dev_err(dpmaif_ctrl->dev, "Stop RX SW failed\n");
+
+ /* Ensure RX processing has already been stopped before we toggle
+ * the rxq->que_started to false during RX suspend flow.
+ */
+ smp_mb();
+ rxq->que_started = false;
+ }
+}
+
+static void dpmaif_stop_rxq(struct dpmaif_rx_queue *rxq)
+{
+ int cnt, j = 0;
+
+ flush_work(&rxq->dpmaif_rxq_work);
+
+ /* reset SW */
+ rxq->que_started = false;
+ do {
+ /* disable HW arb and check idle */
+ cnt = ring_buf_read_write_count(rxq->pit_size_cnt, rxq->pit_rd_idx,
+ rxq->pit_wr_idx, true);
+ /* retry handler */
+ if (++j >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(rxq->dpmaif_ctrl->dev, "stop RX SW failed, %d\n", cnt);
+ break;
+ }
+ } while (cnt);
+
+ memset(rxq->pit_base, 0, rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit));
+ memset(rxq->bat_req->bat_base, 0, rxq->bat_req->bat_size_cnt * sizeof(struct dpmaif_bat));
+ memset(rxq->bat_req->bat_mask, 0, rxq->bat_req->bat_size_cnt * sizeof(unsigned char));
+ memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+ rxq->pit_rd_idx = 0;
+ rxq->pit_wr_idx = 0;
+ rxq->pit_release_rd_idx = 0;
+
+ rxq->expect_pit_seq = 0;
+ rxq->pit_remain_release_cnt = 0;
+
+ rxq->bat_req->bat_release_rd_idx = 0;
+ rxq->bat_req->bat_wr_idx = 0;
+
+ rxq->bat_frag->bat_release_rd_idx = 0;
+ rxq->bat_frag->bat_wr_idx = 0;
+}
+
+void dpmaif_stop_rx_sw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+ dpmaif_stop_rxq(&dpmaif_ctrl->rxq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
new file mode 100644
index 000000000000..3277925abce6
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_HIF_DPMA_RX_H__
+#define __T7XX_HIF_DPMA_RX_H__
+
+#include <linux/bitfield.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_dpmaif.h"
+
+/* LHIF header fields */
+#define LHIF_HEADER_NW_TYPE GENMASK(31, 29)
+#define LHIF_HEADER_NETIF GENMASK(28, 24)
+#define LHIF_HEADER_F GENMASK(22, 20)
+#define LHIF_HEADER_FLOW GENMASK(19, 16)
+
+/* Structure of DL PIT */
+struct dpmaif_normal_pit {
+ unsigned int pit_header;
+ unsigned int p_data_addr;
+ unsigned int data_addr_ext;
+ unsigned int pit_footer;
+};
+
+/* pit_header fields */
+#define NORMAL_PIT_DATA_LEN GENMASK(31, 16)
+#define NORMAL_PIT_BUFFER_ID GENMASK(15, 3)
+#define NORMAL_PIT_BUFFER_TYPE BIT(2)
+#define NORMAL_PIT_CONT BIT(1)
+#define NORMAL_PIT_PACKET_TYPE BIT(0)
+/* pit_footer fields */
+#define NORMAL_PIT_DLQ_DONE GENMASK(31, 30)
+#define NORMAL_PIT_ULQ_DONE GENMASK(29, 24)
+#define NORMAL_PIT_HEADER_OFFSET GENMASK(23, 19)
+#define NORMAL_PIT_BI_F GENMASK(18, 17)
+#define NORMAL_PIT_IG BIT(16)
+#define NORMAL_PIT_RES GENMASK(15, 11)
+#define NORMAL_PIT_H_BID GENMASK(10, 8)
+#define NORMAL_PIT_PIT_SEQ GENMASK(7, 0)
+
+struct dpmaif_msg_pit {
+ unsigned int dword1;
+ unsigned int dword2;
+ unsigned int dword3;
+ unsigned int dword4;
+};
+
+/* double word 1 */
+#define MSG_PIT_DP BIT(31)
+#define MSG_PIT_RES GENMASK(30, 27)
+#define MSG_PIT_NETWORK_TYPE GENMASK(26, 24)
+#define MSG_PIT_CHANNEL_ID GENMASK(23, 16)
+#define MSG_PIT_RES2 GENMASK(15, 12)
+#define MSG_PIT_HPC_IDX GENMASK(11, 8)
+#define MSG_PIT_SRC_QID GENMASK(7, 5)
+#define MSG_PIT_ERROR_BIT BIT(4)
+#define MSG_PIT_CHECKSUM GENMASK(3, 2)
+#define MSG_PIT_CONT BIT(1)
+#define MSG_PIT_PACKET_TYPE BIT(0)
+
+/* double word 2 */
+#define MSG_PIT_HP_IDX GENMASK(31, 27)
+#define MSG_PIT_CMD GENMASK(26, 24)
+#define MSG_PIT_RES3 GENMASK(23, 21)
+#define MSG_PIT_FLOW GENMASK(20, 16)
+#define MSG_PIT_COUNT GENMASK(15, 0)
+
+/* double word 3 */
+#define MSG_PIT_HASH GENMASK(31, 24)
+#define MSG_PIT_RES4 GENMASK(23, 18)
+#define MSG_PIT_PRO GENMASK(17, 16)
+#define MSG_PIT_VBID GENMASK(15, 3)
+#define MSG_PIT_RES5 GENMASK(2, 0)
+
+/* double word 4 */
+#define MSG_PIT_DLQ_DONE GENMASK(31, 30)
+#define MSG_PIT_ULQ_DONE GENMASK(29, 24)
+#define MSG_PIT_IP BIT(23)
+#define MSG_PIT_RES6 BIT(22)
+#define MSG_PIT_MR GENMASK(21, 20)
+#define MSG_PIT_RES7 GENMASK(19, 17)
+#define MSG_PIT_IG BIT(16)
+#define MSG_PIT_RES8 GENMASK(15, 11)
+#define MSG_PIT_H_BID GENMASK(10, 8)
+#define MSG_PIT_PIT_SEQ GENMASK(7, 0)
+
+int dpmaif_rxq_alloc(struct dpmaif_rx_queue *queue);
+void dpmaif_stop_rx_sw(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_bat_release_work_alloc(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req, const unsigned char q_num,
+ const unsigned int buf_cnt, const bool first_time);
+void dpmaif_skb_queue_free(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int index);
+int dpmaif_skb_pool_init(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ unsigned char q_num, const unsigned int buf_cnt, const bool first_time);
+void dpmaif_suspend_rx_sw_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask);
+void dpmaif_rxq_free(struct dpmaif_rx_queue *queue);
+void dpmaif_bat_release_work_free(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ const enum bat_type buf_type);
+void dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_bat_request *bat_req);
+
+#endif /* __T7XX_HIF_DPMA_RX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
new file mode 100644
index 000000000000..3ae87761af05
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -0,0 +1,810 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_tx.h"
+
+#define DPMAIF_SKB_TX_BURST_CNT 5
+#define DPMAIF_DRB_ENTRY_SIZE 6144
+
+/* DRB dtype */
+#define DES_DTYP_PD 0
+#define DES_DTYP_MSG 1
+
+static unsigned int dpmaifq_poll_tx_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num)
+{
+ unsigned short old_sw_rd_idx, new_hw_rd_idx;
+ struct dpmaif_tx_queue *txq;
+ unsigned int hw_read_idx;
+ unsigned int drb_cnt = 0;
+ unsigned long flags;
+
+ txq = &dpmaif_ctrl->txq[q_num];
+ if (!txq->que_started)
+ return drb_cnt;
+
+ old_sw_rd_idx = txq->drb_rd_idx;
+
+ hw_read_idx = dpmaif_ul_get_ridx(&dpmaif_ctrl->hif_hw_info, q_num);
+
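+ /* The HW reports the read index in DRB words; convert it to DRB entries */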
+ new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;
+
+ if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {
+ dev_err(dpmaif_ctrl->dev, "out of range read index: %u\n", new_hw_rd_idx);
+ return 0;
+ }
+
+ if (old_sw_rd_idx <= new_hw_rd_idx)
+ drb_cnt = new_hw_rd_idx - old_sw_rd_idx;
+ else
+ drb_cnt = txq->drb_size_cnt - old_sw_rd_idx + new_hw_rd_idx;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ txq->drb_rd_idx = new_hw_rd_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ return drb_cnt;
+}
+
+static unsigned short dpmaif_release_tx_buffer(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int release_cnt)
+{
+ struct dpmaif_drb_skb *cur_drb_skb, *drb_skb_base;
+ struct dpmaif_drb_pd *cur_drb, *drb_base;
+ struct dpmaif_tx_queue *txq;
+ struct dpmaif_callbacks *cb;
+ unsigned int drb_cnt, i;
+ unsigned short cur_idx;
+ unsigned long flags;
+
+ if (!release_cnt)
+ return 0;
+
+ txq = &dpmaif_ctrl->txq[q_num];
+ drb_skb_base = txq->drb_skb_base;
+ drb_base = txq->drb_base;
+ cb = dpmaif_ctrl->callbacks;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_cnt = txq->drb_size_cnt;
+ cur_idx = txq->drb_release_rd_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
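+ /* Walk the DRBs being released: unmap payload (PD) entries and free the
+  * skb at the last DRB of each packet; record the channel ID carried by
+  * message (MSG) entries.
+  */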
+ for (i = 0; i < release_cnt; i++) {
+ cur_drb = drb_base + cur_idx;
+ if (FIELD_GET(DRB_PD_DTYP, cur_drb->header) == DES_DTYP_PD) {
+ cur_drb_skb = drb_skb_base + cur_idx;
+ dma_unmap_single(dpmaif_ctrl->dev, cur_drb_skb->bus_addr,
+ cur_drb_skb->data_len, DMA_TO_DEVICE);
+
+ if (!FIELD_GET(DRB_PD_CONT, cur_drb->header)) {
+ if (!cur_drb_skb->skb) {
+ dev_err(dpmaif_ctrl->dev,
+ "txq%u: DRB check fail, invalid skb\n", q_num);
+ continue;
+ }
+
+ dev_kfree_skb_any(cur_drb_skb->skb);
+ }
+
+ cur_drb_skb->skb = NULL;
+ } else {
+ txq->last_ch_id = FIELD_GET(DRB_MSG_CHANNEL_ID,
+ ((struct dpmaif_drb_msg *)cur_drb)->header_dw2);
+ }
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ cur_idx = ring_buf_get_next_wrdx(drb_cnt, cur_idx);
+ txq->drb_release_rd_idx = cur_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ if (atomic_inc_return(&txq->tx_budget) > txq->drb_size_cnt / 8)
+ cb->state_notify(dpmaif_ctrl->mtk_dev,
+ DMPAIF_TXQ_STATE_IRQ, txq->index);
+ }
+
+ if (FIELD_GET(DRB_PD_CONT, cur_drb->header))
+ dev_err(dpmaif_ctrl->dev, "txq%u: DRB not marked as the last one\n", q_num);
+
+ return i;
+}
+
+static int dpmaif_tx_release(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int budget)
+{
+ unsigned int rel_cnt, real_rel_cnt;
+ struct dpmaif_tx_queue *txq;
+
+ txq = &dpmaif_ctrl->txq[q_num];
+
+ /* update rd idx: from HW */
+ dpmaifq_poll_tx_drb(dpmaif_ctrl, q_num);
+ /* release the readable pkts */
+ rel_cnt = ring_buf_read_write_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_rd_idx, true);
+
+ real_rel_cnt = min_not_zero(budget, rel_cnt);
+
+ /* release data buff */
+ if (real_rel_cnt)
+ real_rel_cnt = dpmaif_release_tx_buffer(dpmaif_ctrl, q_num, real_rel_cnt);
+
+ return ((real_rel_cnt < rel_cnt) ? -EAGAIN : 0);
+}
+
+/* return false indicates there are remaining spurious interrupts */
+static inline bool dpmaif_no_remain_spurious_tx_done_intr(struct dpmaif_tx_queue *txq)
+{
+ return (dpmaifq_poll_tx_drb(txq->dpmaif_ctrl, txq->index) > 0);
+}
+
+static void dpmaif_tx_done(struct work_struct *work)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ struct dpmaif_tx_queue *txq;
+ int ret;
+
+ txq = container_of(work, struct dpmaif_tx_queue, dpmaif_tx_work);
+ dpmaif_ctrl = txq->dpmaif_ctrl;
+
+ ret = dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
+ if (ret == -EAGAIN ||
+ (dpmaif_hw_check_clr_ul_done_status(&dpmaif_ctrl->hif_hw_info, txq->index) &&
+ dpmaif_no_remain_spurious_tx_done_intr(txq))) {
+ queue_work(dpmaif_ctrl->txq[txq->index].worker,
+ &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
+ /* clear IP busy to give the device time to enter the low power state */
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ } else {
+ dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ dpmaif_unmask_ulq_interrupt(dpmaif_ctrl, txq->index);
+ }
+}
+
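+/* A MSG DRB describes the whole packet (length, channel ID) and is always
+ * followed by one or more payload DRBs pointing at the actual data.
+ */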
+static void set_drb_msg(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned short cur_idx,
+ unsigned int pkt_len, unsigned short count_l,
+ unsigned char channel_id, __be16 network_type)
+{
+ struct dpmaif_drb_msg *drb_base;
+ struct dpmaif_drb_msg *drb;
+
+ drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+ drb = drb_base + cur_idx;
+
+ drb->header_dw1 = FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG);
+ drb->header_dw1 |= FIELD_PREP(DRB_MSG_CONT, 1);
+ drb->header_dw1 |= FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len);
+
+ drb->header_dw2 = FIELD_PREP(DRB_MSG_COUNT_L, count_l);
+ drb->header_dw2 |= FIELD_PREP(DRB_MSG_CHANNEL_ID, channel_id);
+ drb->header_dw2 |= FIELD_PREP(DRB_MSG_L4_CHK, 1);
+}
+
+static void set_drb_payload(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned short cur_idx, dma_addr_t data_addr,
+ unsigned int pkt_size, char last_one)
+{
+ struct dpmaif_drb_pd *drb_base;
+ struct dpmaif_drb_pd *drb;
+
+ drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+ drb = drb_base + cur_idx;
+
+ drb->header &= ~DRB_PD_DTYP;
+ drb->header |= FIELD_PREP(DRB_PD_DTYP, DES_DTYP_PD);
+
+ drb->header &= ~DRB_PD_CONT;
+ if (!last_one)
+ drb->header |= FIELD_PREP(DRB_PD_CONT, 1);
+
+ drb->header &= ~DRB_PD_DATA_LEN;
+ drb->header |= FIELD_PREP(DRB_PD_DATA_LEN, pkt_size);
+ drb->p_data_addr = lower_32_bits(data_addr);
+ drb->data_addr_ext = upper_32_bits(data_addr);
+}
+
+static void record_drb_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned short cur_idx, struct sk_buff *skb, unsigned short is_msg,
+ bool is_frag, bool is_last_one, dma_addr_t bus_addr,
+ unsigned int data_len)
+{
+ struct dpmaif_drb_skb *drb_skb_base;
+ struct dpmaif_drb_skb *drb_skb;
+
+ drb_skb_base = dpmaif_ctrl->txq[q_num].drb_skb_base;
+ drb_skb = drb_skb_base + cur_idx;
+
+ drb_skb->skb = skb;
+ drb_skb->bus_addr = bus_addr;
+ drb_skb->data_len = data_len;
+ drb_skb->config = FIELD_PREP(DRB_SKB_DRB_IDX, cur_idx);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_MSG, is_msg);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_FRAG, is_frag);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_LAST, is_last_one);
+}
+
+static int dpmaif_tx_send_skb_on_tx_thread(struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_tx_event *event)
+{
+ unsigned int wt_cnt, send_cnt, payload_cnt;
+ struct skb_shared_info *info;
+ struct dpmaif_tx_queue *txq;
+ int drb_wr_idx_backup = -1;
+ struct ccci_header ccci_h;
+ bool is_frag, is_last_one;
+ bool skb_pulled = false;
+ unsigned short cur_idx;
+ unsigned int data_len;
+ int qno = event->qno;
+ dma_addr_t bus_addr;
+ unsigned long flags;
+ struct sk_buff *skb;
+ void *data_addr;
+ int ret = 0;
+
+ skb = event->skb;
+ txq = &dpmaif_ctrl->txq[qno];
+ if (!txq->que_started || dpmaif_ctrl->state != DPMAIF_STATE_PWRON)
+ return -ENODEV;
+
+ atomic_set(&txq->tx_processing, 1);
+
+ /* Ensure tx_processing is changed to 1 before actually begin TX flow */
+ smp_mb();
+
+ info = skb_shinfo(skb);
+
+ if (info->frag_list)
+ dev_err(dpmaif_ctrl->dev, "ulq%d skb frag_list not supported!\n", qno);
+
+ payload_cnt = info->nr_frags + 1;
+ /* One DRB per fragment, plus one for skb->data and one MSG DRB */
+ send_cnt = payload_cnt + 1;
+
+ ccci_h = *(struct ccci_header *)skb->data;
+ skb_pull(skb, sizeof(struct ccci_header));
+ skb_pulled = true;
+
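+ /* Reserve all the DRB slots for this skb up front so that its MSG and
+  * payload DRBs stay contiguous in the ring; keep a backup of the write
+  * index so the reservation can be undone if a mapping fails.
+  */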
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ /* update the drb_wr_idx */
+ cur_idx = txq->drb_wr_idx;
+ drb_wr_idx_backup = cur_idx;
+ txq->drb_wr_idx += send_cnt;
+ if (txq->drb_wr_idx >= txq->drb_size_cnt)
+ txq->drb_wr_idx -= txq->drb_size_cnt;
+
+ /* Send the data: write a MSG DRB first, then the payload DRBs */
+ set_drb_msg(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, ccci_h.data[0], skb->protocol);
+ record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ cur_idx = ring_buf_get_next_wrdx(txq->drb_size_cnt, cur_idx);
+
+ /* Payload DRBs: skb->data followed by frags[] */
+ for (wt_cnt = 0; wt_cnt < payload_cnt; wt_cnt++) {
+ /* get data_addr && data_len */
+ if (!wt_cnt) {
+ data_len = skb_headlen(skb);
+ data_addr = skb->data;
+ is_frag = false;
+ } else {
+ skb_frag_t *frag = info->frags + wt_cnt - 1;
+
+ data_len = skb_frag_size(frag);
+ data_addr = skb_frag_address(frag);
+ is_frag = true;
+ }
+
+ if (wt_cnt == payload_cnt - 1)
+ is_last_one = true;
+ else
+ /* Non-last DRBs keep the CONT flag set */
+ is_last_one = false;
+
+ /* tx mapping */
+ bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
+ if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
+ dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+ ret = -ENOMEM;
+ break;
+ }
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ set_drb_payload(dpmaif_ctrl, txq->index, cur_idx, bus_addr, data_len,
+ is_last_one);
+ record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 0, is_frag,
+ is_last_one, bus_addr, data_len);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ cur_idx = ring_buf_get_next_wrdx(txq->drb_size_cnt, cur_idx);
+ }
+
+ if (ret < 0) {
+ atomic_set(&txq->tx_processing, 0);
+ dev_err(dpmaif_ctrl->dev,
+ "send fail: drb_wr_idx_backup:%d, ret:%d\n", drb_wr_idx_backup, ret);
+
+ if (skb_pulled)
+ skb_push(skb, sizeof(struct ccci_header));
+
+ if (drb_wr_idx_backup >= 0) {
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ txq->drb_wr_idx = drb_wr_idx_backup;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ }
+ } else {
+ atomic_sub(send_cnt, &txq->tx_budget);
+ atomic_set(&txq->tx_processing, 0);
+ }
+
+ return ret;
+}
+
+/* must be called within protection of event_lock */
+static inline void dpmaif_finish_event(struct dpmaif_tx_event *event)
+{
+ list_del(&event->entry);
+ kfree(event);
+}
+
+static bool tx_lists_are_all_empty(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ if (!list_empty(&dpmaif_ctrl->txq[i].tx_event_queue))
+ return false;
+ }
+
+ return true;
+}
+
+static int select_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ unsigned char txq_prio[TXQ_TYPE_CNT] = {TXQ_FAST, TXQ_NORMAL};
+ unsigned char txq_id, i;
+
+ for (i = 0; i < TXQ_TYPE_CNT; i++) {
+ txq_id = txq_prio[dpmaif_ctrl->txq_select_times % TXQ_TYPE_CNT];
+ if (!dpmaif_ctrl->txq[txq_id].que_started)
+ break;
+
+ if (!list_empty(&dpmaif_ctrl->txq[txq_id].tx_event_queue)) {
+ dpmaif_ctrl->txq_select_times++;
+ return txq_id;
+ }
+
+ dpmaif_ctrl->txq_select_times++;
+ }
+
+ return -EAGAIN;
+}
+
+static int txq_burst_send_skb(struct dpmaif_tx_queue *txq)
+{
+ int drb_remain_cnt, i;
+ unsigned long flags;
+ int drb_cnt = 0;
+ int ret = 0;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = ring_buf_read_write_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_wr_idx, false);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ /* write DRB descriptor information */
+ for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
+ struct dpmaif_tx_event *event;
+
+ spin_lock_irqsave(&txq->tx_event_lock, flags);
+ if (list_empty(&txq->tx_event_queue)) {
+ spin_unlock_irqrestore(&txq->tx_event_lock, flags);
+ break;
+ }
+ event = list_first_entry(&txq->tx_event_queue, struct dpmaif_tx_event, entry);
+ spin_unlock_irqrestore(&txq->tx_event_lock, flags);
+
+ if (drb_remain_cnt < event->drb_cnt) {
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = ring_buf_read_write_count(txq->drb_size_cnt,
+ txq->drb_release_rd_idx,
+ txq->drb_wr_idx,
+ false);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ continue;
+ }
+
+ drb_remain_cnt -= event->drb_cnt;
+
+ ret = dpmaif_tx_send_skb_on_tx_thread(txq->dpmaif_ctrl, event);
+
+ if (ret < 0) {
+ dev_crit(txq->dpmaif_ctrl->dev,
+ "txq%d send skb fail, ret=%d\n", event->qno, ret);
+ break;
+ }
+
+ drb_cnt += event->drb_cnt;
+ spin_lock_irqsave(&txq->tx_event_lock, flags);
+ dpmaif_finish_event(event);
+ txq->tx_submit_skb_cnt--;
+ spin_unlock_irqrestore(&txq->tx_event_lock, flags);
+ }
+
+ if (drb_cnt > 0) {
+ txq->drb_lack = false;
+ ret = drb_cnt;
+ } else if (ret == -ENOMEM) {
+ txq->drb_lack = true;
+ }
+
+ return ret;
+}
+
+static bool check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ unsigned char i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+ if (!list_empty(&dpmaif_ctrl->txq[i].tx_event_queue) &&
+ !dpmaif_ctrl->txq[i].drb_lack)
+ return false;
+
+ return true;
+}
+
+static void do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_ctrl->txq_select_times = 0;
+ do {
+ int txq_id;
+
+ txq_id = select_tx_queue(dpmaif_ctrl);
+ if (txq_id >= 0) {
+ struct dpmaif_tx_queue *txq;
+ int ret;
+
+ txq = &dpmaif_ctrl->txq[txq_id];
+ ret = txq_burst_send_skb(txq);
+ if (ret > 0) {
+ int drb_send_cnt = ret;
+
+ /* notify the dpmaif HW */
+ ret = dpmaif_ul_add_wcnt(dpmaif_ctrl, (unsigned char)txq_id,
+ drb_send_cnt * DPMAIF_UL_DRB_ENTRY_WORD);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev,
+ "txq%d: dpmaif_ul_add_wcnt fail.\n", txq_id);
+ } else {
+ /* if all TX queues lack DRB entries, wait briefly before retrying */
+ if (check_all_txq_drb_lack(dpmaif_ctrl))
+ usleep_range(10, 20);
+ }
+ }
+
+ cond_resched();
+
+ } while (!tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() &&
+ (dpmaif_ctrl->state == DPMAIF_STATE_PWRON));
+}
+
+static int dpmaif_tx_hw_push_thread(void *arg)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl;
+
+ dpmaif_ctrl = arg;
+ while (!kthread_should_stop()) {
+ if (tx_lists_are_all_empty(dpmaif_ctrl) ||
+ dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+ if (wait_event_interruptible(dpmaif_ctrl->tx_wq,
+ (!tx_lists_are_all_empty(dpmaif_ctrl) &&
+ dpmaif_ctrl->state == DPMAIF_STATE_PWRON) ||
+ kthread_should_stop()))
+ continue;
+ }
+
+ if (kthread_should_stop())
+ break;
+
+ do_tx_hw_push(dpmaif_ctrl);
+ }
+
+ return 0;
+}
+
+int dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ init_waitqueue_head(&dpmaif_ctrl->tx_wq);
+ dpmaif_ctrl->tx_thread = kthread_run(dpmaif_tx_hw_push_thread,
+ dpmaif_ctrl, "dpmaif_tx_hw_push");
+ if (IS_ERR(dpmaif_ctrl->tx_thread)) {
+ dev_err(dpmaif_ctrl->dev, "failed to start tx thread\n");
+ return PTR_ERR(dpmaif_ctrl->tx_thread);
+ }
+
+ return 0;
+}
+
+void dpmaif_tx_thread_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (dpmaif_ctrl->tx_thread)
+ kthread_stop(dpmaif_ctrl->tx_thread);
+}
+
+static inline unsigned char get_drb_cnt_per_skb(struct sk_buff *skb)
+{
+ /* normal DRB (frags data + skb linear data) + msg DRB */
+ return (skb_shinfo(skb)->nr_frags + 1 + 1);
+}
+
+static bool check_tx_queue_drb_available(struct dpmaif_tx_queue *txq, unsigned int send_drb_cnt)
+{
+ unsigned int drb_remain_cnt;
+ unsigned long flags;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = ring_buf_read_write_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_wr_idx, false);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ return drb_remain_cnt >= send_drb_cnt;
+}
+
+/**
+ * dpmaif_tx_send_skb() - Add SKB to the xmit queue
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl
+ * @txqt: Queue type to xmit on (normal or fast)
+ * @skb: Pointer to the SKB to xmit
+ *
+ * Add the SKB to the queue of SKBs to be transmitted.
+ * Wake up the thread that pushes the SKBs from the queue to the HW.
+ *
+ * Return: Zero on success and negative errno on failure
+ */
+int dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, enum txq_type txqt, struct sk_buff *skb)
+{
+ bool tx_drb_available = true;
+ struct dpmaif_tx_queue *txq;
+ struct dpmaif_callbacks *cb;
+ unsigned int send_drb_cnt;
+
+ send_drb_cnt = get_drb_cnt_per_skb(skb);
+ txq = &dpmaif_ctrl->txq[txqt];
+
+ /* check tx queue DRB full */
+ if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
+ tx_drb_available = check_tx_queue_drb_available(txq, send_drb_cnt);
+
+ if (txq->tx_submit_skb_cnt < txq->tx_list_max_len && tx_drb_available) {
+ struct dpmaif_tx_event *event;
+ unsigned long flags;
+
+ event = kmalloc(sizeof(*event), GFP_ATOMIC);
+ if (!event)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&event->entry);
+ event->qno = txqt;
+ event->skb = skb;
+ event->drb_cnt = send_drb_cnt;
+
+ spin_lock_irqsave(&txq->tx_event_lock, flags);
+ list_add_tail(&event->entry, &txq->tx_event_queue);
+ txq->tx_submit_skb_cnt++;
+ spin_unlock_irqrestore(&txq->tx_event_lock, flags);
+ wake_up(&dpmaif_ctrl->tx_wq);
+
+ return 0;
+ }
+
+ cb = dpmaif_ctrl->callbacks;
+ cb->state_notify(dpmaif_ctrl->mtk_dev, DMPAIF_TXQ_STATE_FULL, txqt);
+
+ return -EBUSY;
+}
+
+void dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ if (que_mask & BIT(i))
+ queue_work(dpmaif_ctrl->txq[i].worker, &dpmaif_ctrl->txq[i].dpmaif_tx_work);
+ }
+}
+
+static int dpmaif_tx_buf_init(struct dpmaif_tx_queue *txq)
+{
+ size_t brb_skb_size;
+ size_t brb_pd_size;
+
+ brb_pd_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_pd);
+ brb_skb_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_skb);
+
+ txq->drb_size_cnt = DPMAIF_DRB_ENTRY_SIZE;
+
+ /* alloc buffer for HW && AP SW */
+ txq->drb_base = dma_alloc_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+ &txq->drb_bus_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!txq->drb_base)
+ return -ENOMEM;
+
+ /* alloc buffer for AP SW to record the skb information */
+ txq->drb_skb_base = devm_kzalloc(txq->dpmaif_ctrl->dev, brb_skb_size, GFP_KERNEL);
+ if (!txq->drb_skb_base) {
+ dma_free_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+ txq->drb_base, txq->drb_bus_addr);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void dpmaif_tx_buf_rel(struct dpmaif_tx_queue *txq)
+{
+ if (txq->drb_base)
+ dma_free_coherent(txq->dpmaif_ctrl->dev,
+ txq->drb_size_cnt * sizeof(struct dpmaif_drb_pd),
+ txq->drb_base, txq->drb_bus_addr);
+
+ if (txq->drb_skb_base) {
+ struct dpmaif_drb_skb *drb_skb, *drb_skb_base = txq->drb_skb_base;
+ unsigned int i;
+
+ for (i = 0; i < txq->drb_size_cnt; i++) {
+ drb_skb = drb_skb_base + i;
+ if (drb_skb->skb) {
+ dma_unmap_single(txq->dpmaif_ctrl->dev, drb_skb->bus_addr,
+ drb_skb->data_len, DMA_TO_DEVICE);
+ if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
+ kfree_skb(drb_skb->skb);
+ drb_skb->skb = NULL;
+ }
+ }
+ }
+ }
+}
+
+/**
+ * dpmaif_txq_init() - Initialize TX queue
+ * @txq: Pointer to struct dpmaif_tx_queue
+ *
+ * Initialize the TX queue data structure and allocate memory for it to use.
+ *
+ * Return: Zero on success and negative errno on failure
+ */
+int dpmaif_txq_init(struct dpmaif_tx_queue *txq)
+{
+ int ret;
+
+ spin_lock_init(&txq->tx_event_lock);
+ INIT_LIST_HEAD(&txq->tx_event_queue);
+ txq->tx_submit_skb_cnt = 0;
+ txq->tx_skb_stat = 0;
+ txq->tx_list_max_len = DPMAIF_DRB_ENTRY_SIZE / 2;
+ txq->drb_lack = false;
+
+ init_waitqueue_head(&txq->req_wq);
+ atomic_set(&txq->tx_budget, DPMAIF_DRB_ENTRY_SIZE);
+
+ /* init the DRB DMA buffer and tx skb record info buffer */
+ ret = dpmaif_tx_buf_init(txq);
+ if (ret) {
+ dev_err(txq->dpmaif_ctrl->dev, "tx buffer init fail %d\n", ret);
+ return ret;
+ }
+
+ txq->worker = alloc_workqueue("md_dpmaif_tx%d_worker", WQ_UNBOUND | WQ_MEM_RECLAIM |
+ (txq->index ? 0 : WQ_HIGHPRI), 1, txq->index);
+ if (!txq->worker)
+ return -ENOMEM;
+
+ INIT_WORK(&txq->dpmaif_tx_work, dpmaif_tx_done);
+ spin_lock_init(&txq->tx_lock);
+ return 0;
+}
+
+void dpmaif_txq_free(struct dpmaif_tx_queue *txq)
+{
+ struct dpmaif_tx_event *event, *event_next;
+ unsigned long flags;
+
+ if (txq->worker)
+ destroy_workqueue(txq->worker);
+
+ spin_lock_irqsave(&txq->tx_event_lock, flags);
+ list_for_each_entry_safe(event, event_next, &txq->tx_event_queue, entry) {
+ if (event->skb)
+ dev_kfree_skb_any(event->skb);
+
+ dpmaif_finish_event(event);
+ }
+
+ spin_unlock_irqrestore(&txq->tx_event_lock, flags);
+ dpmaif_tx_buf_rel(txq);
+}
+
+void dpmaif_suspend_tx_sw_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ struct dpmaif_tx_queue *txq;
+ int count;
+
+ txq = &dpmaif_ctrl->txq[i];
+
+ txq->que_started = false;
+ /* Ensure que_started is set to false before checking tx_processing */
+ smp_mb();
+
+ /* Confirm that SW will not transmit */
+ count = 0;
+ do {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(dpmaif_ctrl->dev, "tx queue stop failed\n");
+ break;
+ }
+ } while (atomic_read(&txq->tx_processing));
+ }
+}
+
+static void dpmaif_stop_txq(struct dpmaif_tx_queue *txq)
+{
+ txq->que_started = false;
+
+ cancel_work_sync(&txq->dpmaif_tx_work);
+ flush_work(&txq->dpmaif_tx_work);
+
+ if (txq->drb_skb_base) {
+ struct dpmaif_drb_skb *drb_skb, *drb_skb_base = txq->drb_skb_base;
+ unsigned int i;
+
+ for (i = 0; i < txq->drb_size_cnt; i++) {
+ drb_skb = drb_skb_base + i;
+ if (drb_skb->skb) {
+ dma_unmap_single(txq->dpmaif_ctrl->dev, drb_skb->bus_addr,
+ drb_skb->data_len, DMA_TO_DEVICE);
+ if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
+ kfree_skb(drb_skb->skb);
+ drb_skb->skb = NULL;
+ }
+ }
+ }
+ }
+
+ txq->drb_rd_idx = 0;
+ txq->drb_wr_idx = 0;
+ txq->drb_release_rd_idx = 0;
+}
+
+void dpmaif_stop_tx_sw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ /* flush and release UL descriptor */
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+ dpmaif_stop_txq(&dpmaif_ctrl->txq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
new file mode 100644
index 000000000000..81d783792ba2
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_HIF_DPMA_TX_H__
+#define __T7XX_HIF_DPMA_TX_H__
+
+#include <linux/bitfield.h>
+#include <linux/skbuff.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif.h"
+
+/* UL DRB */
+struct dpmaif_drb_pd {
+ unsigned int header;
+ unsigned int p_data_addr;
+ unsigned int data_addr_ext;
+ unsigned int reserved2;
+};
+
+/* header's fields */
+#define DRB_PD_DATA_LEN GENMASK(31, 16)
+#define DRB_PD_RES GENMASK(15, 3)
+#define DRB_PD_CONT BIT(2)
+#define DRB_PD_DTYP GENMASK(1, 0)
+
+struct dpmaif_drb_msg {
+ unsigned int header_dw1;
+ unsigned int header_dw2;
+ unsigned int reserved4;
+ unsigned int reserved5;
+};
+
+/* first double word header fields */
+#define DRB_MSG_PACKET_LEN GENMASK(31, 16)
+#define DRB_MSG_DW1_RES GENMASK(15, 3)
+#define DRB_MSG_CONT BIT(2)
+#define DRB_MSG_DTYP GENMASK(1, 0)
+
+/* second double word header fields */
+#define DRB_MSG_DW2_RES GENMASK(31, 30)
+#define DRB_MSG_L4_CHK BIT(29)
+#define DRB_MSG_IP_CHK BIT(28)
+#define DRB_MSG_RES2 BIT(27)
+#define DRB_MSG_NETWORK_TYPE GENMASK(26, 24)
+#define DRB_MSG_CHANNEL_ID GENMASK(23, 16)
+#define DRB_MSG_COUNT_L GENMASK(15, 0)
+
+struct dpmaif_drb_skb {
+ struct sk_buff *skb;
+ dma_addr_t bus_addr;
+ unsigned short data_len;
+ u16 config;
+};
+
+#define DRB_SKB_IS_LAST BIT(15)
+#define DRB_SKB_IS_FRAG BIT(14)
+#define DRB_SKB_IS_MSG BIT(13)
+#define DRB_SKB_DRB_IDX GENMASK(12, 0)
+
+int dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, enum txq_type txqt,
+ struct sk_buff *skb);
+void dpmaif_tx_thread_release(struct dpmaif_ctrl *dpmaif_ctrl);
+int dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl);
+void dpmaif_txq_free(struct dpmaif_tx_queue *txq);
+void dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask);
+int dpmaif_txq_init(struct dpmaif_tx_queue *txq);
+void dpmaif_suspend_tx_sw_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void dpmaif_stop_tx_sw(struct dpmaif_ctrl *dpmaif_ctrl);
+
+#endif /* __T7XX_HIF_DPMA_TX_H__ */
--
2.17.1

2021-11-01 03:59:24

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 14/14] net: wwan: t7xx: Add maintainers and documentation

Adds maintainers and documentation for MediaTek t7xx 5G WWAN modem
device driver.

Signed-off-by: Ricardo Martinez <[email protected]>
---
.../networking/device_drivers/wwan/index.rst | 1 +
.../networking/device_drivers/wwan/t7xx.rst | 120 ++++++++++++++++++
MAINTAINERS | 11 ++
3 files changed, 132 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst

diff --git a/Documentation/networking/device_drivers/wwan/index.rst b/Documentation/networking/device_drivers/wwan/index.rst
index 1cb8c7371401..370d8264d5dc 100644
--- a/Documentation/networking/device_drivers/wwan/index.rst
+++ b/Documentation/networking/device_drivers/wwan/index.rst
@@ -9,6 +9,7 @@ Contents:
:maxdepth: 2

iosm
+ t7xx

.. only:: subproject and html

diff --git a/Documentation/networking/device_drivers/wwan/t7xx.rst b/Documentation/networking/device_drivers/wwan/t7xx.rst
new file mode 100644
index 000000000000..dd5b731957ca
--- /dev/null
+++ b/Documentation/networking/device_drivers/wwan/t7xx.rst
@@ -0,0 +1,120 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+.. Copyright (C) 2020-21 Intel Corporation
+
+.. _t7xx_driver_doc:
+
+============================================
+t7xx driver for MTK PCIe based T700 5G modem
+============================================
+The t7xx driver is a WWAN PCIe host driver developed for Linux and Chrome OS platforms
+for data exchange over the PCIe interface between the host platform and MediaTek's T700
+5G modem. The driver exposes an interface conforming to the MBIM protocol [1]. Any front
+end application (e.g. Modem Manager) can use the MBIM interface to enable data
+communication towards WWAN. The driver also provides an interface to interact with the
+MediaTek modem via AT commands.
+
+Basic usage
+===========
+MBIM & AT functions are inactive when unmanaged. The t7xx driver provides
+WWAN port userspace interfaces representing MBIM & AT control channels and does
+not play any role in managing their functionality. It is the job of a userspace
+application to detect port enumeration and enable MBIM & AT functionalities.
+
+Examples of such userspace applications are:
+
+- mbimcli (included with the libmbim [2] library), and
+- Modem Manager [3]
+
+The management application needs to carry out the following actions to
+establish an MBIM IP session:
+
+- open the MBIM control channel
+- configure network connection settings
+- connect to network
+- configure IP network interface
+
+The management application needs to carry out the following action to send an
+AT command and receive its response:
+
+- open the AT control channel using a UART tool or a special user tool
+
+Management application development
+==================================
+The driver and userspace interfaces are described below. The MBIM protocol is
+described in [1] Mobile Broadband Interface Model v1.0 Errata-1.
+
+MBIM control channel userspace ABI
+----------------------------------
+
+/dev/wwan0mbim0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an MBIM interface to the MBIM function by implementing
+MBIM WWAN Port. The userspace end of the control channel pipe is a
+/dev/wwan0mbim0 character device. Application shall use this interface for
+MBIM protocol communication.
+
+Fragmentation
+~~~~~~~~~~~~~
+The userspace application is responsible for all control message fragmentation
+and defragmentation, as per the MBIM specification.
+
+/dev/wwan0mbim0 write()
+~~~~~~~~~~~~~~~~~~~~~~~
+The MBIM control messages from the management application must not exceed the
+negotiated control message size.
+
+/dev/wwan0mbim0 read()
+~~~~~~~~~~~~~~~~~~~~~~
+The management application must accept control messages of up to the
+negotiated control message size.
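+
+The sketch below is a minimal, illustrative example only (standard POSIX I/O;
+error handling omitted; a little-endian host is assumed). The MBIM_OPEN_MSG
+layout follows the MBIM specification [1] and is not defined by this driver::
+
+  #include <fcntl.h>
+  #include <stdint.h>
+  #include <unistd.h>
+
+  int main(void)
+  {
+          /* MBIM_OPEN_MSG: MessageType (0x1), MessageLength (16),
+           * TransactionId, MaxControlTransfer; 32-bit little-endian values.
+           */
+          uint32_t open_msg[4] = { 0x00000001, 16, 1, 4096 };
+          uint8_t resp[4096];
+          int fd;
+
+          fd = open("/dev/wwan0mbim0", O_RDWR);
+          if (fd < 0)
+                  return 1;
+
+          write(fd, open_msg, sizeof(open_msg));  /* MBIM_OPEN_MSG */
+          read(fd, resp, sizeof(resp));           /* expect MBIM_OPEN_DONE */
+          close(fd);
+          return 0;
+  }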
+
+MBIM data channel userspace ABI
+-------------------------------
+
+wwan0-X network device
+~~~~~~~~~~~~~~~~~~~~~~
+The t7xx driver exposes an IP link interface "wwan0-X" of type "wwan" for IP
+traffic. The iproute network utility is used to create the "wwan0-X" network
+interface and to associate it with an MBIM IP session.
+
+The userspace management application is responsible for creating a new IP link
+prior to establishing an MBIM IP session where the SessionId is greater than 0.
+
+For example, to create a new IP link for an MBIM IP session with SessionId 1::
+
+ ip link add dev wwan0-1 parentdev wwan0 type wwan linkid 1
+
+The driver will automatically map the "wwan0-1" network device to MBIM IP
+session 1.
+
+AT port userspace ABI
+---------------------
+
+/dev/wwan0at0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an AT port by implementing an AT WWAN Port.
+The userspace end of the control port is a /dev/wwan0at0 character
+device. The application shall use this interface to issue AT commands.
+
+MediaTek's T700 modem supports the 3GPP TS 27.007 [4] specification.
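+
+A minimal, illustrative sketch (standard POSIX I/O; error handling and
+response parsing omitted) of issuing an AT command on this port::
+
+  #include <fcntl.h>
+  #include <unistd.h>
+
+  int main(void)
+  {
+          char resp[256];
+          int fd;
+
+          fd = open("/dev/wwan0at0", O_RDWR);
+          if (fd < 0)
+                  return 1;
+
+          write(fd, "ATI\r", 4);                  /* request identification */
+          read(fd, resp, sizeof(resp));           /* response text */
+          close(fd);
+          return 0;
+  }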
+
+References
+==========
+[1] *MBIM (Mobile Broadband Interface Model) Errata-1*
+
+- https://www.usb.org/document-library/
+
+[2] *libmbim "a glib-based library for talking to WWAN modems and devices which
+speak the Mobile Interface Broadband Model (MBIM) protocol"*
+
+- http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] *Modem Manager "a DBus-activated daemon which controls mobile broadband
+(2G/3G/4G/5G) devices and connections"*
+
+- http://www.freedesktop.org/wiki/Software/ModemManager/
+
+[4] *Specification # 27.007 - 3GPP*
+
+- https://www.3gpp.org/DynaReport/27007.htm
diff --git a/MAINTAINERS b/MAINTAINERS
index 5f87f622ac18..d419bed48aa0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12066,6 +12066,17 @@ S: Maintained
F: drivers/net/dsa/mt7530.*
F: net/dsa/tag_mtk.c

+MEDIATEK T7XX 5G WWAN MODEM DRIVER
+M: Chandrashekar Devegowda <[email protected]>
+M: Intel Corporation <[email protected]>
+R: Chiranjeevi Rapolu <[email protected]>
+R: Liu Haijun <[email protected]>
+R: M Chetan Kumar <[email protected]>
+R: Ricardo Martinez <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/net/wwan/t7xx/
+
MEDIATEK USB3 DRD IP DRIVER
M: Chunfeng Yun <[email protected]>
L: [email protected]
--
2.17.1

2021-11-01 03:59:24

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH v2 02/14] net: wwan: t7xx: Add control DMA interface

From: Haijun Liu <[email protected]>

Cross Layer DMA (CLDMA) Hardware Interface (HIF) enables the control
path of Host-Modem data transfers. The CLDMA HIF layer provides a common
interface to the Port Layer.

CLDMA manages 8 independent RX/TX physical channels with data flow
control in HW queues. CLDMA uses ring buffers of General Packet
Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
data buffers (DB).

CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
interrupts, and initializes CLDMA HW registers.

CLDMA TX flow:
1. Port Layer write
2. Get DB address
3. Configure GPD
4. Trigger processing via HW register write (sketched below)

CLDMA RX flow:
1. CLDMA HW sends an RX "done" to the host
2. Driver starts thread to safely read GPD
3. DB is sent to Port layer
4. Create a new buffer for GPD ring
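
The fragment below is only an illustrative sketch (it is not a function added
by this patch) of how a single TX GPD could be prepared with the helpers
introduced here before the HW is triggered:

    /* map the data buffer, point the GPD at it, then hand it to the HW */
    bus_addr = dma_map_single(md_ctrl->dev, skb->data, skb->len, DMA_TO_DEVICE);
    cldma_tgpd_set_data_ptr(tgpd, bus_addr);
    tgpd->data_buff_len = skb->len;
    tgpd->gpd_flags |= GPD_FLAGS_HWO;     /* the HW now owns this GPD */
    cldma_hw_resume_queue(&md_ctrl->hw_info, queue->index, false);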

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_cldma.c | 277 +++++
drivers/net/wwan/t7xx/t7xx_cldma.h | 168 +++
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1519 ++++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 155 +++
4 files changed, 2119 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h

diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.c b/drivers/net/wwan/t7xx/t7xx_cldma.c
new file mode 100644
index 000000000000..edff5d8b5288
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_cldma.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+
+#include "t7xx_cldma.h"
+
+void cldma_clear_ip_busy(struct cldma_hw_info *hw_info)
+{
+ /* write 1 to clear the IP busy register and wake up the CPU */
+ iowrite32(ioread32(hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY) | IP_BUSY_WAKEUP,
+ hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);
+}
+
+/**
+ * cldma_hw_restore() - Restore CLDMA HW registers
+ * @hw_info: Pointer to struct cldma_hw_info
+ *
+ * Restore HW after resume. Writes uplink configuration for CLDMA HW.
+ *
+ */
+void cldma_hw_restore(struct cldma_hw_info *hw_info)
+{
+ u32 ul_cfg;
+
+ ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+ ul_cfg &= ~UL_CFG_BIT_MODE_MASK;
+ if (hw_info->hw_mode == MODE_BIT_64)
+ ul_cfg |= UL_CFG_BIT_MODE_64;
+ else if (hw_info->hw_mode == MODE_BIT_40)
+ ul_cfg |= UL_CFG_BIT_MODE_40;
+ else if (hw_info->hw_mode == MODE_BIT_36)
+ ul_cfg |= UL_CFG_BIT_MODE_36;
+
+ iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+ /* disable TX and RX invalid address check */
+ iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM);
+ iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM);
+}
+
+void cldma_hw_start_queue(struct cldma_hw_info *hw_info, u8 qno, bool is_rx)
+{
+ void __iomem *reg_start_cmd;
+ u32 val;
+
+ mb(); /* prevents outstanding GPD updates */
+ reg_start_cmd = is_rx ? hw_info->ap_pdn_base + REG_CLDMA_DL_START_CMD :
+ hw_info->ap_pdn_base + REG_CLDMA_UL_START_CMD;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val, reg_start_cmd);
+}
+
+void cldma_hw_start(struct cldma_hw_info *hw_info)
+{
+ /* unmask txrx interrupt */
+ iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0);
+ iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0);
+ /* unmask empty queue interrupt */
+ iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0);
+ iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0);
+}
+
+void cldma_hw_reset(void __iomem *ao_base)
+{
+ iowrite32(ioread32(ao_base + REG_INFRA_RST4_SET) | RST4_CLDMA1_SW_RST_SET,
+ ao_base + REG_INFRA_RST4_SET);
+ iowrite32(ioread32(ao_base + REG_INFRA_RST2_SET) | RST2_CLDMA1_AO_SW_RST_SET,
+ ao_base + REG_INFRA_RST2_SET);
+ udelay(1);
+ iowrite32(ioread32(ao_base + REG_INFRA_RST4_CLR) | RST4_CLDMA1_SW_RST_CLR,
+ ao_base + REG_INFRA_RST4_CLR);
+ iowrite32(ioread32(ao_base + REG_INFRA_RST2_CLR) | RST2_CLDMA1_AO_SW_RST_CLR,
+ ao_base + REG_INFRA_RST2_CLR);
+}
+
+bool cldma_tx_addr_is_set(struct cldma_hw_info *hw_info, unsigned char qno)
+{
+ return !!ioread64(hw_info->ap_pdn_base + REG_CLDMA_UL_START_ADDRL_0 + qno * 8);
+}
+
+void cldma_hw_set_start_address(struct cldma_hw_info *hw_info, unsigned char qno, u64 address,
+ bool is_rx)
+{
+ void __iomem *base;
+
+ if (is_rx) {
+ base = hw_info->ap_ao_base;
+ iowrite64(address, base + REG_CLDMA_DL_START_ADDRL_0 + qno * 8);
+ } else {
+ base = hw_info->ap_pdn_base;
+ iowrite64(address, base + REG_CLDMA_UL_START_ADDRL_0 + qno * 8);
+ }
+}
+
+void cldma_hw_resume_queue(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ void __iomem *base;
+
+ base = hw_info->ap_pdn_base;
+ mb(); /* prevents outstanding GPD updates */
+
+ if (is_rx)
+ iowrite32(BIT(qno), base + REG_CLDMA_DL_RESUME_CMD);
+ else
+ iowrite32(BIT(qno), base + REG_CLDMA_UL_RESUME_CMD);
+}
+
+unsigned int cldma_hw_queue_status(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ u32 mask;
+
+ mask = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ if (is_rx)
+ return ioread32(hw_info->ap_ao_base + REG_CLDMA_DL_STATUS) & mask;
+ else
+ return ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_STATUS) & mask;
+}
+
+void cldma_hw_tx_done(struct cldma_hw_info *hw_info, unsigned int bitmask)
+{
+ unsigned int ch_id;
+
+ ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0) & bitmask;
+ /* ack interrupt */
+ iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+ ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+}
+
+void cldma_hw_rx_done(struct cldma_hw_info *hw_info, unsigned int bitmask)
+{
+ unsigned int ch_id;
+
+ ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0) & bitmask;
+ /* ack interrupt */
+ iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+ ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+}
+
+unsigned int cldma_hw_int_status(struct cldma_hw_info *hw_info, unsigned int bitmask, bool is_rx)
+{
+ void __iomem *reg_int_sta;
+
+ reg_int_sta = is_rx ? hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0;
+
+ return ioread32(reg_int_sta) & bitmask;
+}
+
+void cldma_hw_mask_txrxirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ void __iomem *reg_ims;
+ u32 val;
+
+ /* select the right interrupt mask set register */
+ reg_ims = is_rx ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val, reg_ims);
+}
+
+void cldma_hw_mask_eqirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ void __iomem *reg_ims;
+ u32 val;
+
+ /* select the right interrupt mask set register */
+ reg_ims = is_rx ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val << EQ_STA_BIT_OFFSET, reg_ims);
+}
+
+void cldma_hw_dismask_txrxirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ void __iomem *reg_imc;
+ u32 val;
+
+ /* select the right interrupt mask clear register */
+ reg_imc = is_rx ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val, reg_imc);
+}
+
+void cldma_hw_dismask_eqirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx)
+{
+ void __iomem *reg_imc;
+ u32 val;
+
+ /* select the right interrupt mask clear register */
+ reg_imc = is_rx ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val << EQ_STA_BIT_OFFSET, reg_imc);
+}
+
+/**
+ * cldma_hw_init() - Initialize CLDMA HW
+ * @hw_info: Pointer to struct cldma_hw_info
+ *
+ * Write uplink and downlink configuration to CLDMA HW.
+ *
+ */
+void cldma_hw_init(struct cldma_hw_info *hw_info)
+{
+ u32 ul_cfg;
+ u32 dl_cfg;
+
+ ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+ dl_cfg = ioread32(hw_info->ap_ao_base + REG_CLDMA_DL_CFG);
+
+ /* configure the DRAM address mode */
+ ul_cfg &= ~UL_CFG_BIT_MODE_MASK;
+ dl_cfg &= ~DL_CFG_BIT_MODE_MASK;
+ if (hw_info->hw_mode == MODE_BIT_64) {
+ ul_cfg |= UL_CFG_BIT_MODE_64;
+ dl_cfg |= DL_CFG_BIT_MODE_64;
+ } else if (hw_info->hw_mode == MODE_BIT_40) {
+ ul_cfg |= UL_CFG_BIT_MODE_40;
+ dl_cfg |= DL_CFG_BIT_MODE_40;
+ } else if (hw_info->hw_mode == MODE_BIT_36) {
+ ul_cfg |= UL_CFG_BIT_MODE_36;
+ dl_cfg |= DL_CFG_BIT_MODE_36;
+ }
+
+ iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+ dl_cfg |= DL_CFG_UP_HW_LAST;
+ iowrite32(dl_cfg, hw_info->ap_ao_base + REG_CLDMA_DL_CFG);
+ /* enable interrupt */
+ iowrite32(0, hw_info->ap_ao_base + REG_CLDMA_INT_MASK);
+ /* mask wakeup signal */
+ iowrite32(BUSY_MASK_MD, hw_info->ap_ao_base + REG_CLDMA_BUSY_MASK);
+ /* disable TX and RX invalid address check */
+ iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM);
+ iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM);
+}
+
+void cldma_hw_stop_queue(struct cldma_hw_info *hw_info, u8 qno, bool is_rx)
+{
+ void __iomem *reg_stop_cmd;
+ u32 val;
+
+ reg_stop_cmd = is_rx ? hw_info->ap_pdn_base + REG_CLDMA_DL_STOP_CMD :
+ hw_info->ap_pdn_base + REG_CLDMA_UL_STOP_CMD;
+
+ val = (qno == CLDMA_ALL_Q) ? CLDMA_ALL_Q : BIT(qno);
+ iowrite32(val, reg_stop_cmd);
+}
+
+void cldma_hw_stop(struct cldma_hw_info *hw_info, bool is_rx)
+{
+ void __iomem *reg_ims;
+
+ /* select the right L2 interrupt mask set register */
+ reg_ims = is_rx ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+ hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+
+ iowrite32(TXRX_STATUS_BITMASK, reg_ims);
+ iowrite32(EMPTY_STATUS_BITMASK, reg_ims);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.h b/drivers/net/wwan/t7xx/t7xx_cldma.h
new file mode 100644
index 000000000000..0fc539c99432
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_cldma.h
@@ -0,0 +1,168 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_CLDMA_H__
+#define __T7XX_CLDMA_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define CLDMA_TXQ_NUM 8
+#define CLDMA_RXQ_NUM 8
+#define CLDMA_ALL_Q 0xff
+
+/* interrupt status bit meaning, bitmask */
+#define EMPTY_STATUS_BITMASK 0xff00
+#define TXRX_STATUS_BITMASK 0x00ff
+#define EQ_STA_BIT_OFFSET 8
+#define EQ_STA_BIT(index) (BIT((index) + EQ_STA_BIT_OFFSET) & EMPTY_STATUS_BITMASK)
+
+/* L2RISAR0 */
+#define TQ_ERR_INT_BITMASK 0x00ff0000
+#define TQ_ACTIVE_START_ERR_INT_BITMASK 0xff000000
+
+#define RQ_ERR_INT_BITMASK 0x00ff0000
+#define RQ_ACTIVE_START_ERR_INT_BITMASK 0xff000000
+
+/* HW address @sAP view for reference */
+#define RIGHT_CLDMA_OFFSET 0x1000
+
+#define CLDMA0_AO_BASE 0x10049000
+#define CLDMA0_PD_BASE 0x1021d000
+#define CLDMA1_AO_BASE 0x1004b000
+#define CLDMA1_PD_BASE 0x1021f000
+
+#define CLDMA_R_AO_BASE 0x10023000
+#define CLDMA_R_PD_BASE 0x1023d000
+
+/* CLDMA IN(TX) PD */
+#define REG_CLDMA_UL_START_ADDRL_0 0x0004
+#define REG_CLDMA_UL_START_ADDRH_0 0x0008
+#define REG_CLDMA_UL_CURRENT_ADDRL_0 0x0044
+#define REG_CLDMA_UL_CURRENT_ADDRH_0 0x0048
+#define REG_CLDMA_UL_STATUS 0x0084
+#define REG_CLDMA_UL_START_CMD 0x0088
+#define REG_CLDMA_UL_RESUME_CMD 0x008c
+#define REG_CLDMA_UL_STOP_CMD 0x0090
+#define REG_CLDMA_UL_ERROR 0x0094
+#define REG_CLDMA_UL_CFG 0x0098
+#define UL_CFG_BIT_MODE_36 BIT(5)
+#define UL_CFG_BIT_MODE_40 BIT(6)
+#define UL_CFG_BIT_MODE_64 BIT(7)
+#define UL_CFG_BIT_MODE_MASK GENMASK(7, 5)
+
+#define REG_CLDMA_UL_MEM 0x009c
+#define UL_MEM_CHECK_DIS BIT(0)
+
+/* CLDMA OUT(RX) PD */
+#define REG_CLDMA_DL_START_CMD 0x05bc
+#define REG_CLDMA_DL_RESUME_CMD 0x05c0
+#define REG_CLDMA_DL_STOP_CMD 0x05c4
+#define REG_CLDMA_DL_MEM 0x0508
+#define DL_MEM_CHECK_DIS BIT(0)
+
+/* CLDMA OUT(RX) AO */
+#define REG_CLDMA_DL_CFG 0x0404
+#define DL_CFG_UP_HW_LAST BIT(2)
+#define DL_CFG_BIT_MODE_36 BIT(10)
+#define DL_CFG_BIT_MODE_40 BIT(11)
+#define DL_CFG_BIT_MODE_64 BIT(12)
+#define DL_CFG_BIT_MODE_MASK GENMASK(12, 10)
+
+#define REG_CLDMA_DL_START_ADDRL_0 0x0478
+#define REG_CLDMA_DL_START_ADDRH_0 0x047c
+#define REG_CLDMA_DL_CURRENT_ADDRL_0 0x04b8
+#define REG_CLDMA_DL_CURRENT_ADDRH_0 0x04bc
+#define REG_CLDMA_DL_STATUS 0x04f8
+
+/* CLDMA MISC PD */
+#define REG_CLDMA_L2TISAR0 0x0810
+#define REG_CLDMA_L2TISAR1 0x0814
+#define REG_CLDMA_L2TIMR0 0x0818
+#define REG_CLDMA_L2TIMR1 0x081c
+#define REG_CLDMA_L2TIMCR0 0x0820
+#define REG_CLDMA_L2TIMCR1 0x0824
+#define REG_CLDMA_L2TIMSR0 0x0828
+#define REG_CLDMA_L2TIMSR1 0x082c
+#define REG_CLDMA_L3TISAR0 0x0830
+#define REG_CLDMA_L3TISAR1 0x0834
+#define REG_CLDMA_L2RISAR0 0x0850
+#define REG_CLDMA_L2RISAR1 0x0854
+#define REG_CLDMA_L3RISAR0 0x0870
+#define REG_CLDMA_L3RISAR1 0x0874
+#define REG_CLDMA_IP_BUSY 0x08b4
+#define IP_BUSY_WAKEUP BIT(0)
+#define CLDMA_L2TISAR0_ALL_INT_MASK GENMASK(15, 0)
+#define CLDMA_L2RISAR0_ALL_INT_MASK GENMASK(15, 0)
+
+/* CLDMA MISC AO */
+#define REG_CLDMA_L2RIMR0 0x0858
+#define REG_CLDMA_L2RIMR1 0x085c
+#define REG_CLDMA_L2RIMCR0 0x0860
+#define REG_CLDMA_L2RIMCR1 0x0864
+#define REG_CLDMA_L2RIMSR0 0x0868
+#define REG_CLDMA_L2RIMSR1 0x086c
+#define REG_CLDMA_BUSY_MASK 0x0954
+#define BUSY_MASK_PCIE BIT(0)
+#define BUSY_MASK_AP BIT(1)
+#define BUSY_MASK_MD BIT(2)
+
+#define REG_CLDMA_INT_MASK 0x0960
+
+/* CLDMA RESET */
+#define REG_INFRA_RST4_SET 0x0730
+#define RST4_CLDMA1_SW_RST_SET BIT(20)
+
+#define REG_INFRA_RST4_CLR 0x0734
+#define RST4_CLDMA1_SW_RST_CLR BIT(20)
+
+#define REG_INFRA_RST2_SET 0x0140
+#define RST2_CLDMA1_AO_SW_RST_SET BIT(18)
+
+#define REG_INFRA_RST2_CLR 0x0144
+#define RST2_CLDMA1_AO_SW_RST_CLR BIT(18)
+
+enum hw_mode {
+ MODE_BIT_32,
+ MODE_BIT_36,
+ MODE_BIT_40,
+ MODE_BIT_64,
+};
+
+struct cldma_hw_info {
+ enum hw_mode hw_mode;
+ void __iomem *ap_ao_base;
+ void __iomem *ap_pdn_base;
+ u32 phy_interrupt_id;
+};
+
+void cldma_hw_mask_txrxirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+void cldma_hw_mask_eqirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+void cldma_hw_dismask_txrxirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+void cldma_hw_dismask_eqirq(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+unsigned int cldma_hw_queue_status(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+void cldma_hw_init(struct cldma_hw_info *hw_info);
+void cldma_hw_resume_queue(struct cldma_hw_info *hw_info, unsigned char qno, bool is_rx);
+void cldma_hw_start(struct cldma_hw_info *hw_info);
+void cldma_hw_start_queue(struct cldma_hw_info *hw_info, u8 qno, bool is_rx);
+void cldma_hw_tx_done(struct cldma_hw_info *hw_info, unsigned int bitmask);
+void cldma_hw_rx_done(struct cldma_hw_info *hw_info, unsigned int bitmask);
+void cldma_hw_stop_queue(struct cldma_hw_info *hw_info, u8 qno, bool is_rx);
+void cldma_hw_set_start_address(struct cldma_hw_info *hw_info,
+ unsigned char qno, u64 address, bool is_rx);
+void cldma_hw_reset(void __iomem *ao_base);
+void cldma_hw_stop(struct cldma_hw_info *hw_info, bool is_rx);
+unsigned int cldma_hw_int_status(struct cldma_hw_info *hw_info, unsigned int bitmask, bool is_rx);
+void cldma_hw_restore(struct cldma_hw_info *hw_info);
+void cldma_clear_ip_busy(struct cldma_hw_info *hw_info);
+bool cldma_tx_addr_is_set(struct cldma_hw_info *hw_info, unsigned char qno);
+#endif
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
new file mode 100644
index 000000000000..a63c4b514944
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -0,0 +1,1519 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Liu <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/skbuff.h>
+
+#include "t7xx_cldma.h"
+#include "t7xx_common.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_monitor.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_skb_util.h"
+
+#define MAX_TX_BUDGET 16
+#define MAX_RX_BUDGET 16
+
+#define CHECK_Q_STOP_TIMEOUT_US 1000000
+#define CHECK_Q_STOP_STEP_US 10000
+
+static struct cldma_ctrl *cldma_md_ctrl[CLDMA_NUM];
+
+static DEFINE_MUTEX(ctl_cfg_mutex); /* protects CLDMA late init config */
+
+static enum cldma_queue_type rxq_type[CLDMA_RXQ_NUM];
+static enum cldma_queue_type txq_type[CLDMA_TXQ_NUM];
+static int rxq_buff_size[CLDMA_RXQ_NUM];
+static int rxq_buff_num[CLDMA_RXQ_NUM];
+static int txq_buff_num[CLDMA_TXQ_NUM];
+
+static struct cldma_ctrl *md_cd_get(enum cldma_id hif_id)
+{
+ return cldma_md_ctrl[hif_id];
+}
+
+static inline void md_cd_set(enum cldma_id hif_id, struct cldma_ctrl *md_ctrl)
+{
+ cldma_md_ctrl[hif_id] = md_ctrl;
+}
+
+static inline void md_cd_queue_struct_reset(struct cldma_queue *queue, enum cldma_id hif_id,
+ enum direction dir, unsigned char index)
+{
+ queue->dir = dir;
+ queue->index = index;
+ queue->hif_id = hif_id;
+ queue->tr_ring = NULL;
+ queue->tr_done = NULL;
+ queue->tx_xmit = NULL;
+}
+
+static inline void md_cd_queue_struct_init(struct cldma_queue *queue, enum cldma_id hif_id,
+ enum direction dir, unsigned char index)
+{
+ md_cd_queue_struct_reset(queue, hif_id, dir, index);
+ init_waitqueue_head(&queue->req_wq);
+ spin_lock_init(&queue->ring_lock);
+}
+
+static inline void cldma_tgpd_set_data_ptr(struct cldma_tgpd *tgpd, dma_addr_t data_ptr)
+{
+ tgpd->data_buff_bd_ptr_h = upper_32_bits(data_ptr);
+ tgpd->data_buff_bd_ptr_l = lower_32_bits(data_ptr);
+}
+
+static inline void cldma_tgpd_set_next_ptr(struct cldma_tgpd *tgpd, dma_addr_t next_ptr)
+{
+ tgpd->next_gpd_ptr_h = upper_32_bits(next_ptr);
+ tgpd->next_gpd_ptr_l = lower_32_bits(next_ptr);
+}
+
+static inline void cldma_rgpd_set_data_ptr(struct cldma_rgpd *rgpd, dma_addr_t data_ptr)
+{
+ rgpd->data_buff_bd_ptr_h = upper_32_bits(data_ptr);
+ rgpd->data_buff_bd_ptr_l = lower_32_bits(data_ptr);
+}
+
+static inline void cldma_rgpd_set_next_ptr(struct cldma_rgpd *rgpd, dma_addr_t next_ptr)
+{
+ rgpd->next_gpd_ptr_h = upper_32_bits(next_ptr);
+ rgpd->next_gpd_ptr_l = lower_32_bits(next_ptr);
+}
+
+static struct cldma_request *cldma_ring_step_forward(struct cldma_ring *ring,
+ struct cldma_request *req)
+{
+ struct cldma_request *next_req;
+
+ if (req->entry.next == &ring->gpd_ring)
+ next_req = list_first_entry(&ring->gpd_ring, struct cldma_request, entry);
+ else
+ next_req = list_entry(req->entry.next, struct cldma_request, entry);
+
+ return next_req;
+}
+
+static struct cldma_request *cldma_ring_step_backward(struct cldma_ring *ring,
+ struct cldma_request *req)
+{
+ struct cldma_request *prev_req;
+
+ if (req->entry.prev == &ring->gpd_ring)
+ prev_req = list_last_entry(&ring->gpd_ring, struct cldma_request, entry);
+ else
+ prev_req = list_entry(req->entry.prev, struct cldma_request, entry);
+
+ return prev_req;
+}
+
+static int cldma_gpd_rx_from_queue(struct cldma_queue *queue, int budget, bool *over_budget)
+{
+ unsigned char hwo_polling_count = 0;
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_request *req;
+ struct cldma_rgpd *rgpd;
+ struct sk_buff *new_skb;
+ bool rx_done = false;
+ struct sk_buff *skb;
+ int count = 0;
+ int ret = 0;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+ hw_info = &md_ctrl->hw_info;
+
+ while (!rx_done) {
+ req = queue->tr_done;
+ if (!req) {
+ dev_err(md_ctrl->dev, "RXQ was released\n");
+ return -ENODATA;
+ }
+
+ rgpd = req->gpd;
+ if ((rgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
+ u64 gpd_addr;
+
+ /* current 64 bit address is in a table by Q index */
+ gpd_addr = ioread64(hw_info->ap_pdn_base +
+ REG_CLDMA_DL_CURRENT_ADDRL_0 +
+ queue->index * sizeof(u64));
+ if (gpd_addr == GENMASK_ULL(63, 0)) {
+ dev_err(md_ctrl->dev, "PCIe Link disconnected\n");
+ return -ENODATA;
+ }
+
+ if ((u64)queue->tr_done->gpd_addr != gpd_addr &&
+ hwo_polling_count++ < 100) {
+ udelay(1);
+ continue;
+ }
+
+ break;
+ }
+
+ hwo_polling_count = 0;
+ skb = req->skb;
+
+ if (req->mapped_buff) {
+ dma_unmap_single(md_ctrl->dev, req->mapped_buff,
+ skb_data_size(skb), DMA_FROM_DEVICE);
+ req->mapped_buff = 0;
+ }
+
+ /* init skb struct */
+ skb->len = 0;
+ skb_reset_tail_pointer(skb);
+ skb_put(skb, rgpd->data_buff_len);
+
+ /* consume skb */
+ if (md_ctrl->recv_skb) {
+ ret = md_ctrl->recv_skb(queue, skb);
+ } else {
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, skb);
+ ret = -ENETDOWN;
+ }
+
+ new_skb = NULL;
+ if (ret >= 0 || ret == -ENETDOWN)
+ new_skb = ccci_alloc_skb_from_pool(&md_ctrl->mtk_dev->pools,
+ queue->tr_ring->pkt_size,
+ GFS_BLOCKING);
+
+ if (!new_skb) {
+ /* either the port was busy or the skb pool was empty */
+ usleep_range(5000, 10000);
+ return -EAGAIN;
+ }
+
+ /* mark cldma_request as available */
+ req->skb = NULL;
+ cldma_rgpd_set_data_ptr(rgpd, 0);
+ queue->tr_done = cldma_ring_step_forward(queue->tr_ring, req);
+
+ req = queue->rx_refill;
+ rgpd = req->gpd;
+ req->mapped_buff = dma_map_single(md_ctrl->dev, new_skb->data,
+ skb_data_size(new_skb), DMA_FROM_DEVICE);
+ if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
+ dev_err(md_ctrl->dev, "DMA mapping failed\n");
+ req->mapped_buff = 0;
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, new_skb);
+ return -ENOMEM;
+ }
+
+ cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff);
+ rgpd->data_buff_len = 0;
+ /* set HWO, no need to hold ring_lock */
+ rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+ /* attach the refilled skb to the cldma_request */
+ req->skb = new_skb;
+ queue->rx_refill = cldma_ring_step_forward(queue->tr_ring, req);
+
+ if (++count >= budget && need_resched()) {
+ *over_budget = true;
+ rx_done = true;
+ }
+ }
+
+ return 0;
+}
+
+static int cldma_gpd_rx_collect(struct cldma_queue *queue, int budget)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ bool over_budget = false;
+ bool rx_not_done = true;
+ unsigned int l2_rx_int;
+ unsigned long flags;
+ int ret;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+ hw_info = &md_ctrl->hw_info;
+
+ while (rx_not_done) {
+ rx_not_done = false;
+ ret = cldma_gpd_rx_from_queue(queue, budget, &over_budget);
+ if (ret == -ENODATA)
+ return 0;
+
+ if (ret)
+ return ret;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (md_ctrl->rxq_active & BIT(queue->index)) {
+ /* resume Rx queue */
+ if (!cldma_hw_queue_status(hw_info, queue->index, true))
+ cldma_hw_resume_queue(hw_info, queue->index, true);
+
+ /* greedy mode */
+ l2_rx_int = cldma_hw_int_status(hw_info, BIT(queue->index), true);
+
+ if (l2_rx_int) {
+ /* needs to be scheduled again to avoid a soft lockup */
+ cldma_hw_rx_done(hw_info, l2_rx_int);
+ if (over_budget) {
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ return -EAGAIN;
+ }
+
+ /* clear IP busy register wake up CPU case */
+ rx_not_done = true;
+ }
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ }
+
+ return 0;
+}
+
+static void cldma_rx_done(struct work_struct *work)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_queue *queue;
+ int value;
+
+ queue = container_of(work, struct cldma_queue, cldma_rx_work);
+ md_ctrl = md_cd_get(queue->hif_id);
+ value = queue->tr_ring->handle_rx_done(queue, queue->budget);
+
+ if (value && md_ctrl->rxq_active & BIT(queue->index)) {
+ queue_work(queue->worker, &queue->cldma_rx_work);
+ return;
+ }
+
+ /* clear IP busy register wake up CPU case */
+ cldma_clear_ip_busy(&md_ctrl->hw_info);
+ /* enable RX_DONE && EMPTY interrupt */
+ cldma_hw_dismask_txrxirq(&md_ctrl->hw_info, queue->index, true);
+ cldma_hw_dismask_eqirq(&md_ctrl->hw_info, queue->index, true);
+}
+
+static int cldma_gpd_tx_collect(struct cldma_queue *queue)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_request *req;
+ struct sk_buff *skb_free;
+ struct cldma_tgpd *tgpd;
+ unsigned int dma_len;
+ unsigned long flags;
+ dma_addr_t dma_free;
+ int count = 0;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+
+ while (!kthread_should_stop()) {
+ spin_lock_irqsave(&queue->ring_lock, flags);
+ req = queue->tr_done;
+ if (!req) {
+ dev_err(md_ctrl->dev, "TXQ was released\n");
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+ break;
+ }
+
+ tgpd = req->gpd;
+ if ((tgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+ break;
+ }
+
+ /* restore IOC setting */
+ if (req->ioc_override & GPD_FLAGS_IOC) {
+ if (req->ioc_override & GPD_FLAGS_HWO)
+ tgpd->gpd_flags |= GPD_FLAGS_IOC;
+ else
+ tgpd->gpd_flags &= ~GPD_FLAGS_IOC;
+ dev_notice(md_ctrl->dev,
+ "qno%u, req->ioc_override=0x%x,tgpd->gpd_flags=0x%x\n",
+ queue->index, req->ioc_override, tgpd->gpd_flags);
+ }
+
+ queue->budget++;
+ /* save skb reference */
+ dma_free = req->mapped_buff;
+ dma_len = tgpd->data_buff_len;
+ skb_free = req->skb;
+ /* mark cldma_request as available */
+ req->skb = NULL;
+ queue->tr_done = cldma_ring_step_forward(queue->tr_ring, req);
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+ count++;
+ dma_unmap_single(md_ctrl->dev, dma_free, dma_len, DMA_TO_DEVICE);
+
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, skb_free);
+ }
+
+ if (count)
+ wake_up_nr(&queue->req_wq, count);
+
+ return count;
+}
+
+static void cldma_tx_queue_empty_handler(struct cldma_queue *queue)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_request *req;
+ struct cldma_tgpd *tgpd;
+ dma_addr_t ul_curr_addr;
+ unsigned long flags;
+ bool pending_gpd;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+ hw_info = &md_ctrl->hw_info;
+ if (!(md_ctrl->txq_active & BIT(queue->index)))
+ return;
+
+ /* check if there is any pending TGPD with HWO=1 */
+ spin_lock_irqsave(&queue->ring_lock, flags);
+ req = cldma_ring_step_backward(queue->tr_ring, queue->tx_xmit);
+ tgpd = req->gpd;
+ pending_gpd = (tgpd->gpd_flags & GPD_FLAGS_HWO) && req->skb;
+
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (pending_gpd) {
+ /* Check current processing TGPD
+ * current 64-bit address is in a table by Q index.
+ */
+ ul_curr_addr = ioread64(hw_info->ap_pdn_base +
+ REG_CLDMA_UL_CURRENT_ADDRL_0 +
+ queue->index * sizeof(u64));
+ if (req->gpd_addr != ul_curr_addr)
+ dev_err(md_ctrl->dev,
+ "CLDMA%d Q%d TGPD addr, SW:%pad, HW:%pad\n", md_ctrl->hif_id,
+ queue->index, &req->gpd_addr, &ul_curr_addr);
+ else
+ /* retry */
+ cldma_hw_resume_queue(&md_ctrl->hw_info, queue->index, false);
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void cldma_tx_done(struct work_struct *work)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_queue *queue;
+ unsigned int l2_tx_int;
+ unsigned long flags;
+
+ queue = container_of(work, struct cldma_queue, cldma_tx_work);
+ md_ctrl = md_cd_get(queue->hif_id);
+ hw_info = &md_ctrl->hw_info;
+ queue->tr_ring->handle_tx_done(queue);
+
+ /* greedy mode */
+ l2_tx_int = cldma_hw_int_status(hw_info, BIT(queue->index) | EQ_STA_BIT(queue->index),
+ false);
+ if (l2_tx_int & EQ_STA_BIT(queue->index)) {
+ cldma_hw_tx_done(hw_info, EQ_STA_BIT(queue->index));
+ cldma_tx_queue_empty_handler(queue);
+ }
+
+ if (l2_tx_int & BIT(queue->index)) {
+ cldma_hw_tx_done(hw_info, BIT(queue->index));
+ queue_work(queue->worker, &queue->cldma_tx_work);
+ return;
+ }
+
+ /* enable TX_DONE interrupt */
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (md_ctrl->txq_active & BIT(queue->index)) {
+ cldma_clear_ip_busy(hw_info);
+ cldma_hw_dismask_eqirq(hw_info, queue->index, false);
+ cldma_hw_dismask_txrxirq(hw_info, queue->index, false);
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void cldma_ring_free(struct cldma_ctrl *md_ctrl,
+ struct cldma_ring *ring, enum dma_data_direction dir)
+{
+ struct cldma_request *req_cur, *req_next;
+
+ list_for_each_entry_safe(req_cur, req_next, &ring->gpd_ring, entry) {
+ if (req_cur->mapped_buff && req_cur->skb) {
+ dma_unmap_single(md_ctrl->dev, req_cur->mapped_buff,
+ skb_data_size(req_cur->skb), dir);
+ req_cur->mapped_buff = 0;
+ }
+
+ if (req_cur->skb)
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, req_cur->skb);
+
+ if (req_cur->gpd)
+ dma_pool_free(md_ctrl->gpd_dmapool, req_cur->gpd,
+ req_cur->gpd_addr);
+
+ list_del(&req_cur->entry);
+ kfree_sensitive(req_cur);
+ }
+}
+
+static struct cldma_request *alloc_rx_request(struct cldma_ctrl *md_ctrl, size_t pkt_size)
+{
+ struct cldma_request *item;
+ unsigned long flags;
+
+ item = kzalloc(sizeof(*item), GFP_KERNEL);
+ if (!item)
+ return NULL;
+
+ item->skb = ccci_alloc_skb_from_pool(&md_ctrl->mtk_dev->pools, pkt_size, GFS_BLOCKING);
+ if (!item->skb)
+ goto err_skb_alloc;
+
+ item->gpd = dma_pool_alloc(md_ctrl->gpd_dmapool, GFP_KERNEL | __GFP_ZERO,
+ &item->gpd_addr);
+ if (!item->gpd)
+ goto err_gpd_alloc;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ item->mapped_buff = dma_map_single(md_ctrl->dev, item->skb->data,
+ skb_data_size(item->skb), DMA_FROM_DEVICE);
+ if (dma_mapping_error(md_ctrl->dev, item->mapped_buff)) {
+ dev_err(md_ctrl->dev, "DMA mapping failed\n");
+ item->mapped_buff = 0;
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ goto err_dma_map;
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ return item;
+
+err_dma_map:
+ dma_pool_free(md_ctrl->gpd_dmapool, item->gpd, item->gpd_addr);
+err_gpd_alloc:
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, item->skb);
+err_skb_alloc:
+ kfree(item);
+ return NULL;
+}
+
+static int cldma_rx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
+{
+ struct cldma_request *item, *first_item = NULL;
+ struct cldma_rgpd *prev_gpd, *gpd = NULL;
+ int i;
+
+ for (i = 0; i < ring->length; i++) {
+ item = alloc_rx_request(md_ctrl, ring->pkt_size);
+ if (!item) {
+ cldma_ring_free(md_ctrl, ring, DMA_FROM_DEVICE);
+ return -ENOMEM;
+ }
+
+ gpd = (struct cldma_rgpd *)item->gpd;
+ cldma_rgpd_set_data_ptr(gpd, item->mapped_buff);
+ gpd->data_allow_len = ring->pkt_size;
+ gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+ if (!i)
+ first_item = item;
+ else
+ cldma_rgpd_set_next_ptr(prev_gpd, item->gpd_addr);
+
+ INIT_LIST_HEAD(&item->entry);
+ list_add_tail(&item->entry, &ring->gpd_ring);
+ prev_gpd = gpd;
+ }
+
+ if (first_item)
+ cldma_rgpd_set_next_ptr(gpd, first_item->gpd_addr);
+
+ return 0;
+}
+
+static struct cldma_request *alloc_tx_request(struct cldma_ctrl *md_ctrl)
+{
+ struct cldma_request *item;
+
+ item = kzalloc(sizeof(*item), GFP_KERNEL);
+ if (!item)
+ return NULL;
+
+ item->gpd = dma_pool_alloc(md_ctrl->gpd_dmapool, GFP_KERNEL | __GFP_ZERO,
+ &item->gpd_addr);
+ if (!item->gpd) {
+ kfree(item);
+ return NULL;
+ }
+
+ return item;
+}
+
+static int cldma_tx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
+{
+ struct cldma_request *item, *first_item = NULL;
+ struct cldma_tgpd *tgpd, *prev_gpd;
+ int i;
+
+ for (i = 0; i < ring->length; i++) {
+ item = alloc_tx_request(md_ctrl);
+ if (!item) {
+ cldma_ring_free(md_ctrl, ring, DMA_TO_DEVICE);
+ return -ENOMEM;
+ }
+
+ tgpd = item->gpd;
+ tgpd->gpd_flags = GPD_FLAGS_IOC;
+ if (!first_item)
+ first_item = item;
+ else
+ cldma_tgpd_set_next_ptr(prev_gpd, item->gpd_addr);
+ INIT_LIST_HEAD(&item->bd);
+ INIT_LIST_HEAD(&item->entry);
+ list_add_tail(&item->entry, &ring->gpd_ring);
+ prev_gpd = tgpd;
+ }
+
+ if (first_item)
+ cldma_tgpd_set_next_ptr(tgpd, first_item->gpd_addr);
+
+ return 0;
+}
+
+static void cldma_queue_switch_ring(struct cldma_queue *queue)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_request *req;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+
+ if (queue->dir == MTK_OUT) {
+ queue->tr_ring = &md_ctrl->tx_ring[queue->index];
+ req = list_first_entry(&queue->tr_ring->gpd_ring, struct cldma_request, entry);
+ queue->tr_done = req;
+ queue->tx_xmit = req;
+ queue->budget = queue->tr_ring->length;
+ } else if (queue->dir == MTK_IN) {
+ queue->tr_ring = &md_ctrl->rx_ring[queue->index];
+ req = list_first_entry(&queue->tr_ring->gpd_ring, struct cldma_request, entry);
+ queue->tr_done = req;
+ queue->rx_refill = req;
+ queue->budget = queue->tr_ring->length;
+ }
+}
+
+static void cldma_rx_queue_init(struct cldma_queue *queue)
+{
+ queue->dir = MTK_IN;
+ cldma_queue_switch_ring(queue);
+ queue->q_type = rxq_type[queue->index];
+}
+
+static void cldma_tx_queue_init(struct cldma_queue *queue)
+{
+ queue->dir = MTK_OUT;
+ cldma_queue_switch_ring(queue);
+ queue->q_type = txq_type[queue->index];
+}
+
+static inline void cldma_enable_irq(struct cldma_ctrl *md_ctrl)
+{
+ mtk_pcie_mac_set_int(md_ctrl->mtk_dev, md_ctrl->hw_info.phy_interrupt_id);
+}
+
+static inline void cldma_disable_irq(struct cldma_ctrl *md_ctrl)
+{
+ mtk_pcie_mac_clear_int(md_ctrl->mtk_dev, md_ctrl->hw_info.phy_interrupt_id);
+}
+
+static void cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
+{
+ u32 l2_tx_int_msk, l2_rx_int_msk, l2_tx_int, l2_rx_int, val;
+ struct cldma_hw_info *hw_info = &md_ctrl->hw_info;
+ int i;
+
+ /* L2 raw interrupt status */
+ l2_tx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+ l2_rx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+ l2_tx_int_msk = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TIMR0);
+ l2_rx_int_msk = ioread32(hw_info->ap_ao_base + REG_CLDMA_L2RIMR0);
+
+ l2_tx_int &= ~l2_tx_int_msk;
+ l2_rx_int &= ~l2_rx_int_msk;
+
+ if (l2_tx_int) {
+ if (l2_tx_int & (TQ_ERR_INT_BITMASK | TQ_ACTIVE_START_ERR_INT_BITMASK)) {
+ /* read and clear L3 TX interrupt status */
+ val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0);
+ iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0);
+ val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1);
+ iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1);
+ }
+
+ cldma_hw_tx_done(hw_info, l2_tx_int);
+
+ if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ if (l2_tx_int & BIT(i)) {
+ /* disable TX_DONE interrupt */
+ cldma_hw_mask_eqirq(hw_info, i, false);
+ cldma_hw_mask_txrxirq(hw_info, i, false);
+ queue_work(md_ctrl->txq[i].worker,
+ &md_ctrl->txq[i].cldma_tx_work);
+ }
+
+ if (l2_tx_int & EQ_STA_BIT(i))
+ cldma_tx_queue_empty_handler(&md_ctrl->txq[i]);
+ }
+ }
+ }
+
+ if (l2_rx_int) {
+ /* clear IP busy register wake up CPU case */
+ if (l2_rx_int & (RQ_ERR_INT_BITMASK | RQ_ACTIVE_START_ERR_INT_BITMASK)) {
+ /* read and clear L3 RX interrupt status */
+ val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0);
+ iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0);
+ val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1);
+ iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1);
+ }
+
+ cldma_hw_rx_done(hw_info, l2_rx_int);
+
+ if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ if (l2_rx_int & (BIT(i) | EQ_STA_BIT(i))) {
+ /* disable RX_DONE and QUEUE_EMPTY interrupt */
+ cldma_hw_mask_eqirq(hw_info, i, true);
+ cldma_hw_mask_txrxirq(hw_info, i, true);
+ queue_work(md_ctrl->rxq[i].worker,
+ &md_ctrl->rxq[i].cldma_rx_work);
+ }
+ }
+ }
+ }
+}
+
+static bool queues_active(struct cldma_hw_info *hw_info)
+{
+ unsigned int tx_active;
+ unsigned int rx_active;
+
+ tx_active = cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, false);
+ rx_active = cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, true);
+
+ return ((tx_active || rx_active) && tx_active != CLDMA_ALL_Q && rx_active != CLDMA_ALL_Q);
+}
+
+/**
+ * cldma_stop() - Stop CLDMA
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ *
+ * Stop the TX and RX queues, disable the L1 and L2 interrupts, and
+ * clear the TX/RX empty interrupt status.
+ *
+ * Return: 0 on success, a negative error code on failure
+ */
+int cldma_stop(enum cldma_id hif_id)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ bool active;
+ int i;
+
+ md_ctrl = md_cd_get(hif_id);
+ if (!md_ctrl)
+ return -EINVAL;
+
+ hw_info = &md_ctrl->hw_info;
+
+ /* stop TX/RX queue */
+ md_ctrl->rxq_active = 0;
+ cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, true);
+ md_ctrl->txq_active = 0;
+ cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, false);
+ md_ctrl->txq_started = 0;
+
+ /* disable L1 and L2 interrupt */
+ cldma_disable_irq(md_ctrl);
+ cldma_hw_stop(hw_info, true);
+ cldma_hw_stop(hw_info, false);
+
+ /* clear TX/RX empty interrupt status */
+ cldma_hw_tx_done(hw_info, CLDMA_L2TISAR0_ALL_INT_MASK);
+ cldma_hw_rx_done(hw_info, CLDMA_L2RISAR0_ALL_INT_MASK);
+
+ if (md_ctrl->is_late_init) {
+ for (i = 0; i < CLDMA_TXQ_NUM; i++)
+ flush_work(&md_ctrl->txq[i].cldma_tx_work);
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++)
+ flush_work(&md_ctrl->rxq[i].cldma_rx_work);
+ }
+
+ if (read_poll_timeout(queues_active, active, !active, CHECK_Q_STOP_STEP_US,
+ CHECK_Q_STOP_TIMEOUT_US, true, hw_info)) {
+ dev_err(md_ctrl->dev, "Could not stop CLDMA%d queues", hif_id);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void cldma_late_release(struct cldma_ctrl *md_ctrl)
+{
+ int i;
+
+ if (md_ctrl->is_late_init) {
+ /* free all TX/RX CLDMA request/GPD/skb buffers */
+ for (i = 0; i < CLDMA_TXQ_NUM; i++)
+ cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++)
+ cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[i], DMA_FROM_DEVICE);
+
+ dma_pool_destroy(md_ctrl->gpd_dmapool);
+ md_ctrl->gpd_dmapool = NULL;
+ md_ctrl->is_late_init = false;
+ }
+}
+
+void cldma_reset(enum cldma_id hif_id)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct mtk_modem *md;
+ unsigned long flags;
+ int i;
+
+ md_ctrl = md_cd_get(hif_id);
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_ctrl->hif_id = hif_id;
+ md_ctrl->txq_active = 0;
+ md_ctrl->rxq_active = 0;
+ md = md_ctrl->mtk_dev->md;
+
+ cldma_disable_irq(md_ctrl);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ md_ctrl->txq[i].md = md;
+ cancel_work_sync(&md_ctrl->txq[i].cldma_tx_work);
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_cd_queue_struct_reset(&md_ctrl->txq[i], md_ctrl->hif_id, MTK_OUT, i);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ }
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ md_ctrl->rxq[i].md = md;
+ cancel_work_sync(&md_ctrl->rxq[i].cldma_rx_work);
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_cd_queue_struct_reset(&md_ctrl->rxq[i], md_ctrl->hif_id, MTK_IN, i);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ }
+
+ cldma_late_release(md_ctrl);
+}
+
+/**
+ * cldma_start() - Start CLDMA
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ *
+ * Set the TX/RX start addresses. Start all RX queues and
+ * enable the L2 interrupt.
+ */
+void cldma_start(enum cldma_id hif_id)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+
+ md_ctrl = md_cd_get(hif_id);
+ hw_info = &md_ctrl->hw_info;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (md_ctrl->is_late_init) {
+ int i;
+
+ cldma_enable_irq(md_ctrl);
+ /* set start address */
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ if (md_ctrl->txq[i].tr_done)
+ cldma_hw_set_start_address(hw_info, i,
+ md_ctrl->txq[i].tr_done->gpd_addr, 0);
+ }
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ if (md_ctrl->rxq[i].tr_done)
+ cldma_hw_set_start_address(hw_info, i,
+ md_ctrl->rxq[i].tr_done->gpd_addr, 1);
+ }
+
+ /* start all RX queues and enable L2 interrupt */
+ cldma_hw_start_queue(hw_info, 0xff, 1);
+ cldma_hw_start(hw_info);
+ /* wait write done */
+ wmb();
+ md_ctrl->txq_started = 0;
+ md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+ md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void clear_txq(struct cldma_ctrl *md_ctrl, int qnum)
+{
+ struct cldma_request *req;
+ struct cldma_queue *txq;
+ struct cldma_tgpd *tgpd;
+ unsigned long flags;
+
+ txq = &md_ctrl->txq[qnum];
+ spin_lock_irqsave(&txq->ring_lock, flags);
+ req = list_first_entry(&txq->tr_ring->gpd_ring, struct cldma_request, entry);
+ txq->tr_done = req;
+ txq->tx_xmit = req;
+ txq->budget = txq->tr_ring->length;
+ list_for_each_entry(req, &txq->tr_ring->gpd_ring, entry) {
+ tgpd = req->gpd;
+ tgpd->gpd_flags &= ~GPD_FLAGS_HWO;
+ cldma_tgpd_set_data_ptr(tgpd, 0);
+ tgpd->data_buff_len = 0;
+ if (req->skb) {
+ ccci_free_skb(&md_ctrl->mtk_dev->pools, req->skb);
+ req->skb = NULL;
+ }
+ }
+
+ spin_unlock_irqrestore(&txq->ring_lock, flags);
+}
+
+static int clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
+{
+ struct cldma_request *req;
+ struct cldma_queue *rxq;
+ struct cldma_rgpd *rgpd;
+ unsigned long flags;
+
+ rxq = &md_ctrl->rxq[qnum];
+ spin_lock_irqsave(&rxq->ring_lock, flags);
+ req = list_first_entry(&rxq->tr_ring->gpd_ring, struct cldma_request, entry);
+ rxq->tr_done = req;
+ rxq->rx_refill = req;
+ list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
+ rgpd = req->gpd;
+ rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+ rgpd->data_buff_len = 0;
+ if (req->skb) {
+ req->skb->len = 0;
+ skb_reset_tail_pointer(req->skb);
+ }
+ }
+
+ spin_unlock_irqrestore(&rxq->ring_lock, flags);
+ list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
+ rgpd = req->gpd;
+ if (req->skb)
+ continue;
+
+ req->skb = ccci_alloc_skb_from_pool(&md_ctrl->mtk_dev->pools,
+ rxq->tr_ring->pkt_size, GFS_BLOCKING);
+ if (!req->skb) {
+ dev_err(md_ctrl->dev, "skb not allocated\n");
+ return -ENOMEM;
+ }
+
+ req->mapped_buff = dma_map_single(md_ctrl->dev, req->skb->data,
+ skb_data_size(req->skb), DMA_FROM_DEVICE);
+ if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
+ dev_err(md_ctrl->dev, "DMA mapping failed\n");
+ return -ENOMEM;
+ }
+
+ cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff);
+ }
+
+ return 0;
+}
+
+/* only allowed when CLDMA is stopped */
+static void cldma_clear_all_queue(struct cldma_ctrl *md_ctrl, enum direction dir)
+{
+ int i;
+
+ if (dir == MTK_OUT || dir == MTK_INOUT) {
+ for (i = 0; i < CLDMA_TXQ_NUM; i++)
+ clear_txq(md_ctrl, i);
+ }
+
+ if (dir == MTK_IN || dir == MTK_INOUT) {
+ for (i = 0; i < CLDMA_RXQ_NUM; i++)
+ clear_rxq(md_ctrl, i);
+ }
+}
+
+static void cldma_stop_queue(struct cldma_ctrl *md_ctrl, unsigned char qno, enum direction dir)
+{
+ struct cldma_hw_info *hw_info;
+ unsigned long flags;
+
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (dir == MTK_IN) {
+ /* disable RX_DONE and QUEUE_EMPTY interrupt */
+ cldma_hw_mask_eqirq(hw_info, qno, true);
+ cldma_hw_mask_txrxirq(hw_info, qno, true);
+ if (qno == CLDMA_ALL_Q)
+ md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+ else
+ md_ctrl->rxq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
+
+ cldma_hw_stop_queue(hw_info, qno, true);
+ } else if (dir == MTK_OUT) {
+ cldma_hw_mask_eqirq(hw_info, qno, false);
+ cldma_hw_mask_txrxirq(hw_info, qno, false);
+ if (qno == CLDMA_ALL_Q)
+ md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+ else
+ md_ctrl->txq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
+
+ cldma_hw_stop_queue(hw_info, qno, false);
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+/* Called with queue->ring_lock held */
+static int cldma_gpd_handle_tx_request(struct cldma_queue *queue, struct cldma_request *tx_req,
+ struct sk_buff *skb, unsigned int ioc_override)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_tgpd *tgpd;
+ unsigned long flags;
+
+ md_ctrl = md_cd_get(queue->hif_id);
+ tgpd = tx_req->gpd;
+ /* override current IOC setting */
+ if (ioc_override & GPD_FLAGS_IOC) {
+ /* backup current IOC setting */
+ if (tgpd->gpd_flags & GPD_FLAGS_IOC)
+ tx_req->ioc_override = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+ else
+ tx_req->ioc_override = GPD_FLAGS_IOC;
+ if (ioc_override & GPD_FLAGS_HWO)
+ tgpd->gpd_flags |= GPD_FLAGS_IOC;
+ else
+ tgpd->gpd_flags &= ~GPD_FLAGS_IOC;
+ }
+
+ /* update GPD */
+ tx_req->mapped_buff = dma_map_single(md_ctrl->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(md_ctrl->dev, tx_req->mapped_buff)) {
+ dev_err(md_ctrl->dev, "DMA mapping failed\n");
+ return -ENOMEM;
+ }
+
+ cldma_tgpd_set_data_ptr(tgpd, tx_req->mapped_buff);
+ tgpd->data_buff_len = skb->len;
+
+ /* Set HWO. Use cldma_lock to avoid a race with cldma_stop.
+ * This lock must cover the TGPD setting, because even without
+ * a resume operation, CLDMA can start sending the next HWO=1
+ * GPD if the last TGPD was just finished.
+ */
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (md_ctrl->txq_active & BIT(queue->index))
+ tgpd->gpd_flags |= GPD_FLAGS_HWO;
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ /* mark cldma_request as in use */
+ tx_req->skb = skb;
+
+ return 0;
+}
+
+static void cldma_hw_start_send(struct cldma_ctrl *md_ctrl, u8 qno)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_request *req;
+
+ hw_info = &md_ctrl->hw_info;
+
+ /* check whether the device was powered off (CLDMA start address is not set) */
+ if (!cldma_tx_addr_is_set(hw_info, qno)) {
+ cldma_hw_init(hw_info);
+ req = cldma_ring_step_backward(md_ctrl->txq[qno].tr_ring,
+ md_ctrl->txq[qno].tx_xmit);
+ cldma_hw_set_start_address(hw_info, qno, req->gpd_addr, false);
+ md_ctrl->txq_started &= ~BIT(qno);
+ }
+
+ /* resume or start queue */
+ if (!cldma_hw_queue_status(hw_info, qno, false)) {
+ if (md_ctrl->txq_started & BIT(qno))
+ cldma_hw_resume_queue(hw_info, qno, false);
+ else
+ cldma_hw_start_queue(hw_info, qno, false);
+ md_ctrl->txq_started |= BIT(qno);
+ }
+}
+
+int cldma_write_room(enum cldma_id hif_id, unsigned char qno)
+{
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_queue *queue;
+
+ md_ctrl = md_cd_get(hif_id);
+ if (qno >= CLDMA_TXQ_NUM)
+ return -EINVAL;
+
+ queue = &md_ctrl->txq[qno];
+
+ if (queue->budget > (MAX_TX_BUDGET - 1))
+ return queue->budget;
+
+ return 0;
+}
+
+/**
+ * cldma_set_recv_skb() - Set the callback to handle RX packets
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ * @recv_skb: processing callback
+ */
+void cldma_set_recv_skb(enum cldma_id hif_id,
+ int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb))
+{
+ struct cldma_ctrl *md_ctrl = md_cd_get(hif_id);
+
+ md_ctrl->recv_skb = recv_skb;
+}
+
+/**
+ * cldma_send_skb() - Send control data to modem
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ * @qno: queue number
+ * @skb: socket buffer
+ * @skb_from_pool: set to true to reuse skb from the pool
+ * @blocking: true for blocking operation
+ *
+ * Send control packet to modem using a ring buffer.
+ * If blocking is set, it will wait for completion.
+ *
+ * Return: 0 on success, -ENOMEM on allocation failure, -EINVAL on invalid queue request, or
+ * -EBUSY on resource lock failure.
+ */
+int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb, bool skb_from_pool,
+ bool blocking)
+{
+ unsigned int ioc_override = 0;
+ struct cldma_request *tx_req;
+ struct cldma_ctrl *md_ctrl;
+ struct cldma_queue *queue;
+ unsigned long flags;
+ int ret = 0;
+
+ md_ctrl = md_cd_get(hif_id);
+
+ if (qno >= CLDMA_TXQ_NUM) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ if (skb_from_pool && skb_headroom(skb) == NET_SKB_PAD) {
+ struct ccci_buffer_ctrl *buf_ctrl;
+
+ buf_ctrl = skb_push(skb, sizeof(*buf_ctrl));
+ if (buf_ctrl->head_magic == CCCI_BUF_MAGIC)
+ ioc_override = buf_ctrl->ioc_override;
+ skb_pull(skb, sizeof(*buf_ctrl));
+ }
+
+ queue = &md_ctrl->txq[qno];
+
+ /* check if queue is active */
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ if (!(md_ctrl->txq_active & BIT(qno))) {
+ ret = -EBUSY;
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ goto exit;
+ }
+
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ do {
+ spin_lock_irqsave(&queue->ring_lock, flags);
+ tx_req = queue->tx_xmit;
+ if (queue->budget > 0 && !tx_req->skb) {
+ queue->budget--;
+ queue->tr_ring->handle_tx_request(queue, tx_req, skb, ioc_override);
+ queue->tx_xmit = cldma_ring_step_forward(queue->tr_ring, tx_req);
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ cldma_hw_start_send(md_ctrl, qno);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ break;
+ }
+
+ spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+ /* check CLDMA status */
+ if (!cldma_hw_queue_status(&md_ctrl->hw_info, qno, false)) {
+ /* resume channel */
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ cldma_hw_resume_queue(&md_ctrl->hw_info, qno, false);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ }
+
+ if (!blocking) {
+ ret = -EBUSY;
+ break;
+ }
+
+ ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0);
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
+
+ } while (!ret);
+
+exit:
+ return ret;
+}
+
+static void ccci_cldma_adjust_config(void)
+{
+ int qno;
+
+ /* set default config */
+ for (qno = 0; qno < CLDMA_RXQ_NUM; qno++) {
+ rxq_buff_size[qno] = MTK_SKB_4K;
+ rxq_type[qno] = CLDMA_SHARED_Q;
+ rxq_buff_num[qno] = MAX_RX_BUDGET;
+ }
+
+ rxq_buff_size[CLDMA_RXQ_NUM - 1] = MTK_SKB_64K;
+
+ for (qno = 0; qno < CLDMA_TXQ_NUM; qno++) {
+ txq_type[qno] = CLDMA_SHARED_Q;
+ txq_buff_num[qno] = MAX_TX_BUDGET;
+ }
+}
+
+/* this function contains longer duration initializations */
+static int cldma_late_init(struct cldma_ctrl *md_ctrl)
+{
+ char dma_pool_name[32];
+ int i, ret;
+
+ if (md_ctrl->is_late_init) {
+ dev_err(md_ctrl->dev, "CLDMA late init was already done\n");
+ return -EALREADY;
+ }
+
+ mutex_lock(&ctl_cfg_mutex);
+ ccci_cldma_adjust_config();
+ /* init ring buffers */
+ snprintf(dma_pool_name, 32, "cldma_request_dma_hif%d", md_ctrl->hif_id);
+ md_ctrl->gpd_dmapool = dma_pool_create(dma_pool_name, md_ctrl->dev,
+ sizeof(struct cldma_tgpd), 16, 0);
+ if (!md_ctrl->gpd_dmapool) {
+ mutex_unlock(&ctl_cfg_mutex);
+ dev_err(md_ctrl->dev, "DMA pool alloc fail\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ INIT_LIST_HEAD(&md_ctrl->tx_ring[i].gpd_ring);
+ md_ctrl->tx_ring[i].length = txq_buff_num[i];
+ md_ctrl->tx_ring[i].handle_tx_request = &cldma_gpd_handle_tx_request;
+ md_ctrl->tx_ring[i].handle_tx_done = &cldma_gpd_tx_collect;
+ ret = cldma_tx_ring_init(md_ctrl, &md_ctrl->tx_ring[i]);
+ if (ret) {
+ dev_err(md_ctrl->dev, "control TX ring init fail\n");
+ goto err_tx_ring;
+ }
+ }
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ INIT_LIST_HEAD(&md_ctrl->rx_ring[i].gpd_ring);
+ md_ctrl->rx_ring[i].length = rxq_buff_num[i];
+ md_ctrl->rx_ring[i].pkt_size = rxq_buff_size[i];
+ md_ctrl->rx_ring[i].handle_rx_done = &cldma_gpd_rx_collect;
+ ret = cldma_rx_ring_init(md_ctrl, &md_ctrl->rx_ring[i]);
+ if (ret) {
+ dev_err(md_ctrl->dev, "control RX ring init fail\n");
+ goto err_rx_ring;
+ }
+ }
+
+ /* init queue */
+ for (i = 0; i < CLDMA_TXQ_NUM; i++)
+ cldma_tx_queue_init(&md_ctrl->txq[i]);
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++)
+ cldma_rx_queue_init(&md_ctrl->rxq[i]);
+ mutex_unlock(&ctl_cfg_mutex);
+
+ md_ctrl->is_late_init = true;
+ return 0;
+
+err_rx_ring:
+ while (i--)
+ cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[i], DMA_FROM_DEVICE);
+
+ i = CLDMA_TXQ_NUM;
+err_tx_ring:
+ while (i--)
+ cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);
+
+ mutex_unlock(&ctl_cfg_mutex);
+ return ret;
+}
+
+static inline void __iomem *pcie_addr_transfer(void __iomem *addr, u32 addr_trs1, u32 phy_addr)
+{
+ return addr + phy_addr - addr_trs1;
+}
+
+static void hw_info_init(struct cldma_ctrl *md_ctrl)
+{
+ struct cldma_hw_info *hw_info;
+ u32 phy_ao_base, phy_pd_base;
+ struct mtk_addr_base *pbase;
+
+ pbase = &md_ctrl->mtk_dev->base_addr;
+ hw_info = &md_ctrl->hw_info;
+ if (md_ctrl->hif_id != ID_CLDMA1)
+ return;
+
+ phy_ao_base = CLDMA1_AO_BASE;
+ phy_pd_base = CLDMA1_PD_BASE;
+ hw_info->phy_interrupt_id = CLDMA1_INT;
+
+ hw_info->hw_mode = MODE_BIT_64;
+ hw_info->ap_ao_base = pcie_addr_transfer(pbase->pcie_ext_reg_base,
+ pbase->pcie_dev_reg_trsl_addr,
+ phy_ao_base);
+ hw_info->ap_pdn_base = pcie_addr_transfer(pbase->pcie_ext_reg_base,
+ pbase->pcie_dev_reg_trsl_addr,
+ phy_pd_base);
+}
+
+int cldma_alloc(enum cldma_id hif_id, struct mtk_pci_dev *mtk_dev)
+{
+ struct cldma_ctrl *md_ctrl;
+
+ md_ctrl = devm_kzalloc(&mtk_dev->pdev->dev, sizeof(*md_ctrl), GFP_KERNEL);
+ if (!md_ctrl)
+ return -ENOMEM;
+
+ md_ctrl->mtk_dev = mtk_dev;
+ md_ctrl->dev = &mtk_dev->pdev->dev;
+ md_ctrl->hif_id = hif_id;
+ hw_info_init(md_ctrl);
+ md_cd_set(hif_id, md_ctrl);
+ return 0;
+}
+
+/**
+ * cldma_exception() - CLDMA exception handler
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ * @stage: exception stage
+ *
+ * disable/flush/stop/start CLDMA/queues based on exception stage.
+ *
+ */
+void cldma_exception(enum cldma_id hif_id, enum hif_ex_stage stage)
+{
+ struct cldma_ctrl *md_ctrl;
+
+ md_ctrl = md_cd_get(hif_id);
+ switch (stage) {
+ case HIF_EX_INIT:
+ /* disable CLDMA TX queues */
+ cldma_stop_queue(md_ctrl, CLDMA_ALL_Q, MTK_OUT);
+ /* Clear TX queue */
+ cldma_clear_all_queue(md_ctrl, MTK_OUT);
+ break;
+
+ case HIF_EX_CLEARQ_DONE:
+ /* Stop CLDMA, we do not want to get CLDMA IRQ when MD is
+ * resetting CLDMA after it got clearq_ack.
+ */
+ cldma_stop_queue(md_ctrl, CLDMA_ALL_Q, MTK_IN);
+ /* flush all TX&RX workqueues */
+ cldma_stop(hif_id);
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ cldma_hw_reset(md_ctrl->mtk_dev->base_addr.infracfg_ao_base);
+
+ /* clear the RX queue */
+ cldma_clear_all_queue(md_ctrl, MTK_IN);
+ break;
+
+ case HIF_EX_ALLQ_RESET:
+ cldma_hw_init(&md_ctrl->hw_info);
+ cldma_start(hif_id);
+ break;
+
+ default:
+ break;
+ }
+}
+
+void cldma_hif_hw_init(enum cldma_id hif_id)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ unsigned long flags;
+
+ md_ctrl = md_cd_get(hif_id);
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ hw_info = &md_ctrl->hw_info;
+
+ /* mask CLDMA interrupt */
+ cldma_hw_stop(hw_info, false);
+ cldma_hw_stop(hw_info, true);
+
+ /* clear CLDMA interrupt */
+ cldma_hw_rx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK);
+ cldma_hw_tx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK);
+
+ /* initialize CLDMA hardware */
+ cldma_hw_init(hw_info);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static irqreturn_t cldma_isr_handler(int irq, void *data)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+
+ md_ctrl = data;
+ hw_info = &md_ctrl->hw_info;
+ mtk_pcie_mac_clear_int(md_ctrl->mtk_dev, hw_info->phy_interrupt_id);
+ cldma_irq_work_cb(md_ctrl);
+ mtk_pcie_mac_clear_int_status(md_ctrl->mtk_dev, hw_info->phy_interrupt_id);
+ mtk_pcie_mac_set_int(md_ctrl->mtk_dev, hw_info->phy_interrupt_id);
+ return IRQ_HANDLED;
+}
+
+/**
+ * cldma_init() - Initialize CLDMA
+ * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
+ *
+ * initialize HIF TX/RX queue structure
+ * register CLDMA callback isr with PCIe driver
+ *
+ * Return: 0 on success, a negative error code on failure
+ */
+int cldma_init(enum cldma_id hif_id)
+{
+ struct cldma_hw_info *hw_info;
+ struct cldma_ctrl *md_ctrl;
+ struct mtk_modem *md;
+ int i;
+
+ md_ctrl = md_cd_get(hif_id);
+ md = md_ctrl->mtk_dev->md;
+ md_ctrl->hif_id = hif_id;
+ md_ctrl->txq_active = 0;
+ md_ctrl->rxq_active = 0;
+ md_ctrl->is_late_init = false;
+ hw_info = &md_ctrl->hw_info;
+
+ spin_lock_init(&md_ctrl->cldma_lock);
+ /* initialize HIF queue structure */
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl->hif_id, MTK_OUT, i);
+ md_ctrl->txq[i].md = md;
+ md_ctrl->txq[i].worker =
+ alloc_workqueue("md_hif%d_tx%d_worker",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | (!i ? WQ_HIGHPRI : 0),
+ 1, hif_id, i);
+ if (!md_ctrl->txq[i].worker)
+ return -ENOMEM;
+
+ INIT_WORK(&md_ctrl->txq[i].cldma_tx_work, cldma_tx_done);
+ }
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl->hif_id, MTK_IN, i);
+ md_ctrl->rxq[i].md = md;
+ INIT_WORK(&md_ctrl->rxq[i].cldma_rx_work, cldma_rx_done);
+ md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker",
+ WQ_UNBOUND | WQ_MEM_RECLAIM,
+ 1, hif_id, i);
+ if (!md_ctrl->rxq[i].worker)
+ return -ENOMEM;
+ }
+
+ mtk_pcie_mac_clear_int(md_ctrl->mtk_dev, hw_info->phy_interrupt_id);
+
+ /* registers CLDMA callback ISR with PCIe driver */
+ md_ctrl->mtk_dev->intr_handler[hw_info->phy_interrupt_id] = cldma_isr_handler;
+ md_ctrl->mtk_dev->intr_thread[hw_info->phy_interrupt_id] = NULL;
+ md_ctrl->mtk_dev->callback_param[hw_info->phy_interrupt_id] = md_ctrl;
+ mtk_pcie_mac_clear_int_status(md_ctrl->mtk_dev, hw_info->phy_interrupt_id);
+ return 0;
+}
+
+void cldma_switch_cfg(enum cldma_id hif_id)
+{
+ struct cldma_ctrl *md_ctrl;
+
+ md_ctrl = md_cd_get(hif_id);
+ if (md_ctrl) {
+ cldma_late_release(md_ctrl);
+ cldma_late_init(md_ctrl);
+ }
+}
+
+void cldma_exit(enum cldma_id hif_id)
+{
+ struct cldma_ctrl *md_ctrl;
+ int i;
+
+ md_ctrl = md_cd_get(hif_id);
+ if (!md_ctrl)
+ return;
+
+ /* stop CLDMA work */
+ cldma_stop(hif_id);
+ cldma_late_release(md_ctrl);
+
+ for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+ if (md_ctrl->txq[i].worker) {
+ destroy_workqueue(md_ctrl->txq[i].worker);
+ md_ctrl->txq[i].worker = NULL;
+ }
+ }
+
+ for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+ if (md_ctrl->rxq[i].worker) {
+ destroy_workqueue(md_ctrl->rxq[i].worker);
+ md_ctrl->rxq[i].worker = NULL;
+ }
+ }
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
new file mode 100644
index 000000000000..ec713197c85d
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -0,0 +1,155 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors: Haijun Lio <[email protected]>
+ * Contributors: Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_HIF_CLDMA_H__
+#define __T7XX_HIF_CLDMA_H__
+
+#include <linux/pci.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/types.h>
+
+#include "t7xx_cldma.h"
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+
+#define CLDMA_NUM 2
+
+enum cldma_id {
+ ID_CLDMA0,
+ ID_CLDMA1,
+};
+
+enum cldma_queue_type {
+ CLDMA_SHARED_Q,
+ CLDMA_DEDICATED_Q,
+};
+
+struct cldma_request {
+ void *gpd; /* virtual address for CPU */
+ dma_addr_t gpd_addr; /* physical address for DMA */
+ struct sk_buff *skb;
+ dma_addr_t mapped_buff;
+ struct list_head entry;
+ struct list_head bd;
+ /* inherit from skb */
+ /* bit7: override or not; bit0: IOC setting */
+ unsigned char ioc_override;
+};
+
+struct cldma_queue;
+
+/* cldma_ring is quite light, most of ring buffer operations require queue struct. */
+struct cldma_ring {
+ struct list_head gpd_ring; /* ring of struct cldma_request */
+ int length; /* number of struct cldma_request */
+ int pkt_size; /* size of each packet in ring */
+
+ int (*handle_tx_request)(struct cldma_queue *queue, struct cldma_request *req,
+ struct sk_buff *skb, unsigned int ioc_override);
+ int (*handle_rx_done)(struct cldma_queue *queue, int budget);
+ int (*handle_tx_done)(struct cldma_queue *queue);
+};
+
+struct cldma_queue {
+ unsigned char index;
+ struct mtk_modem *md;
+ struct cldma_ring *tr_ring;
+ struct cldma_request *tr_done;
+ int budget; /* same as ring buffer size by default */
+ struct cldma_request *rx_refill;
+ struct cldma_request *tx_xmit;
+ enum cldma_queue_type q_type;
+ wait_queue_head_t req_wq; /* only for Tx */
+ spinlock_t ring_lock;
+
+ struct workqueue_struct *worker;
+ struct work_struct cldma_rx_work;
+ struct work_struct cldma_tx_work;
+
+ wait_queue_head_t rx_wq;
+ struct task_struct *rx_thread;
+
+ enum cldma_id hif_id;
+ enum direction dir;
+};
+
+struct cldma_ctrl {
+ enum cldma_id hif_id;
+ struct device *dev;
+ struct mtk_pci_dev *mtk_dev;
+ struct cldma_queue txq[CLDMA_TXQ_NUM];
+ struct cldma_queue rxq[CLDMA_RXQ_NUM];
+ unsigned short txq_active;
+ unsigned short rxq_active;
+ unsigned short txq_started;
+ spinlock_t cldma_lock; /* protects CLDMA structure */
+ /* assuming T/R GPD/BD/SPD have the same size */
+ struct dma_pool *gpd_dmapool;
+ struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
+ struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+ struct cldma_hw_info hw_info;
+ bool is_late_init;
+ int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
+};
+
+#define GPD_FLAGS_HWO BIT(0)
+#define GPD_FLAGS_BDP BIT(1)
+#define GPD_FLAGS_BPS BIT(2)
+#define GPD_FLAGS_IOC BIT(7)
+
+struct cldma_tgpd {
+ u8 gpd_flags;
+ u16 non_used;
+ u8 debug_id;
+ u32 next_gpd_ptr_h;
+ u32 next_gpd_ptr_l;
+ u32 data_buff_bd_ptr_h;
+ u32 data_buff_bd_ptr_l;
+ u16 data_buff_len;
+ u16 non_used1;
+} __packed;
+
+struct cldma_rgpd {
+ u8 gpd_flags;
+ u8 non_used;
+ u16 data_allow_len;
+ u32 next_gpd_ptr_h;
+ u32 next_gpd_ptr_l;
+ u32 data_buff_bd_ptr_h;
+ u32 data_buff_bd_ptr_l;
+ u16 data_buff_len;
+ u8 non_used1;
+ u8 debug_id;
+} __packed;
+
+int cldma_alloc(enum cldma_id hif_id, struct mtk_pci_dev *mtk_dev);
+void cldma_hif_hw_init(enum cldma_id hif_id);
+int cldma_init(enum cldma_id hif_id);
+void cldma_exception(enum cldma_id hif_id, enum hif_ex_stage stage);
+void cldma_exit(enum cldma_id hif_id);
+void cldma_switch_cfg(enum cldma_id hif_id);
+int cldma_write_room(enum cldma_id hif_id, unsigned char qno);
+void cldma_start(enum cldma_id hif_id);
+int cldma_stop(enum cldma_id hif_id);
+void cldma_reset(enum cldma_id hif_id);
+void cldma_set_recv_skb(enum cldma_id hif_id,
+ int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb));
+int cldma_send_skb(enum cldma_id hif_id, int qno, struct sk_buff *skb,
+ bool skb_from_pool, bool blocking);
+
+#endif /* __T7XX_HIF_CLDMA_H__ */
--
2.17.1

2021-11-01 13:10:29

by Denis Kirjanov

[permalink] [raw]
Subject: Re: [PATCH v2 00/14] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem



11/1/21 6:56 AM, Ricardo Martinez wrote:
> t7xx is the PCIe host device driver for Intel 5G 5000 M.2 solution which
> is based on MediaTek's T700 modem to provide WWAN connectivity.
> The driver uses the WWAN framework infrastructure to create the following
> control ports and network interfaces:
> * /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
> Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
> with [3][4] can use it to enable data communication towards WWAN.
> * /dev/wwan0at0 - Interface that supports AT commands.
> * wwan0 - Primary network interface for IP traffic.

That should be prefixed with net-next

>
> The main blocks in t7xx driver are:
> * PCIe layer - Implements probe, removal, and power management callbacks.
> * Port-proxy - Provides a common interface to interact with different types
> of ports such as WWAN ports.
> * Modem control & status monitor - Implements the entry point for modem
> initialization, reset and exit, as well as exception handling.
> * CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
> control messages to the modem using MediaTek's CCCI (Cross-Core
> Communication Interface) protocol.
> * DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
> uplink and downlink queues for the data path. The data exchange takes
> place using circular buffers to share data buffer addresses and metadata
> to describe the packets.
> * MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
> for bidirectional event notification such as handshake, exception, PM and
> port enumeration.
>
> The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
> option which depends on CONFIG_WWAN.
> This driver was originally developed by MediaTek. Intel adapted t7xx to
> the WWAN framework, optimized and refactored the driver source in close
> collaboration with MediaTek. This will enable getting the t7xx driver on
> Approved Vendor List for interested OEM's and ODM's productization plans
> with Intel 5G 5000 M.2 solution.
>
> List of contributors:
> Amir Hanania <[email protected]>
> Andriy Shevchenko <[email protected]>
> Chandrashekar Devegowda <[email protected]>
> Dinesh Sharma <[email protected]>
> Eliot Lee <[email protected]>
> Haijun Liu <[email protected]>
> M Chetan Kumar <[email protected]>
> Mika Westerberg <[email protected]>
> Moises Veleta <[email protected]>
> Pierre-louis Bossart <[email protected]>
> Chiranjeevi Rapolu <[email protected]>
> Ricardo Martinez <[email protected]>
> Muralidharan Sethuraman <[email protected]>
> Soumya Prakash Mishra <[email protected]>
> Sreehari Kancharla <[email protected]>
> Suresh Nagaraj <[email protected]>
>
> [1] https://www.freedesktop.org/software/libmbim/
> [2] https://www.freedesktop.org/software/ModemManager/
> [3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
> [4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523
>
> v2:
> - Replace pdev->driver->name with dev_driver_string(&pdev->dev).
> - Replace random_ether_addr() with eth_random_addr().
> - Update kernel-doc comment for enum data_policy.
> - Indicate the driver is 'Supported' instead of 'Maintained'.
> - Fix the Signed-of-by and Co-developed-by tags in the patches.
> - Added authors and contributors in the top comment of the src files.
>
> Chandrashekar Devegowda (1):
> net: wwan: t7xx: Add AT and MBIM WWAN ports
>
> Haijun Lio (11):
> net: wwan: t7xx: Add control DMA interface
> net: wwan: t7xx: Add core components
> net: wwan: t7xx: Add port proxy infrastructure
> net: wwan: t7xx: Add control port
> net: wwan: t7xx: Data path HW layer
> net: wwan: t7xx: Add data path interface
> net: wwan: t7xx: Add WWAN network interface
> net: wwan: t7xx: Introduce power management support
> net: wwan: t7xx: Runtime PM
> net: wwan: t7xx: Device deep sleep lock/unlock
> net: wwan: t7xx: Add debug and test ports
>
> Ricardo Martinez (2):
> net: wwan: Add default MTU size
> net: wwan: t7xx: Add maintainers and documentation
>
> .../networking/device_drivers/wwan/index.rst | 1 +
> .../networking/device_drivers/wwan/t7xx.rst | 120 ++
> MAINTAINERS | 11 +
> drivers/net/wwan/Kconfig | 14 +
> drivers/net/wwan/Makefile | 1 +
> drivers/net/wwan/t7xx/Makefile | 24 +
> drivers/net/wwan/t7xx/t7xx_cldma.c | 277 +++
> drivers/net/wwan/t7xx/t7xx_cldma.h | 168 ++
> drivers/net/wwan/t7xx/t7xx_common.h | 76 +
> drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1524 +++++++++++++++
> drivers/net/wwan/t7xx/t7xx_dpmaif.h | 168 ++
> drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1663 +++++++++++++++++
> drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 156 ++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 638 +++++++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 279 +++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1562 ++++++++++++++++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 117 ++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 842 +++++++++
> drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 82 +
> drivers/net/wwan/t7xx/t7xx_mhccif.c | 124 ++
> drivers/net/wwan/t7xx/t7xx_mhccif.h | 35 +
> drivers/net/wwan/t7xx/t7xx_modem_ops.c | 747 ++++++++
> drivers/net/wwan/t7xx/t7xx_modem_ops.h | 92 +
> drivers/net/wwan/t7xx/t7xx_monitor.h | 147 ++
> drivers/net/wwan/t7xx/t7xx_netdev.c | 545 ++++++
> drivers/net/wwan/t7xx/t7xx_netdev.h | 63 +
> drivers/net/wwan/t7xx/t7xx_pci.c | 789 ++++++++
> drivers/net/wwan/t7xx/t7xx_pci.h | 121 ++
> drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 277 +++
> drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 36 +
> drivers/net/wwan/t7xx/t7xx_port.h | 163 ++
> drivers/net/wwan/t7xx/t7xx_port_char.c | 424 +++++
> drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 150 ++
> drivers/net/wwan/t7xx/t7xx_port_proxy.c | 829 ++++++++
> drivers/net/wwan/t7xx/t7xx_port_proxy.h | 102 +
> drivers/net/wwan/t7xx/t7xx_port_tty.c | 191 ++
> drivers/net/wwan/t7xx/t7xx_port_wwan.c | 281 +++
> drivers/net/wwan/t7xx/t7xx_reg.h | 398 ++++
> drivers/net/wwan/t7xx/t7xx_skb_util.c | 362 ++++
> drivers/net/wwan/t7xx/t7xx_skb_util.h | 110 ++
> drivers/net/wwan/t7xx/t7xx_state_monitor.c | 627 +++++++
> drivers/net/wwan/t7xx/t7xx_tty_ops.c | 205 ++
> drivers/net/wwan/t7xx/t7xx_tty_ops.h | 44 +
> include/linux/wwan.h | 5 +
> 44 files changed, 14590 insertions(+)
> create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst
> create mode 100644 drivers/net/wwan/t7xx/Makefile
> create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_monitor.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_char.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_tty.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_skb_util.h
> create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.c
> create mode 100644 drivers/net/wwan/t7xx/t7xx_tty_ops.h
>

2021-11-01 14:04:43

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH v2 02/14] net: wwan: t7xx: Add control DMA interface

On Sun, Oct 31, 2021 at 08:56:23PM -0700, Ricardo Martinez wrote:
> From: Haijun Lio <[email protected]>
>
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring

> Signed-off-by: Haijun Lio <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>

This...

> + * Authors: Haijun Lio <[email protected]>

(singular form?)

> + * Contributors: Amir Hanania <[email protected]>
> + * Andy Shevchenko <[email protected]>
> + * Moises Veleta <[email protected]>
> + * Ricardo Martinez<[email protected]>
> + * Sreehari Kancharla <[email protected]>

...doesn't correlate with the above.

At least Ricardo seems to be the (co-)author of the code according to
the tag block above.

Also consider starting the list(s) on a new line:

Authors:
A B <>
X Y <>

etc.

...

> +#include <linux/delay.h>
> +#include <linux/io.h>
> +#include <linux/io-64-nonatomic-lo-hi.h>

You may play with iwyu and amend this (including for headers and
the rest of the C-files).

[1]: https://include-what-you-use.org/

> +#include "t7xx_cldma.h"
> +
> +void cldma_clear_ip_busy(struct cldma_hw_info *hw_info)
> +{
> + /* write 1 to clear IP busy register wake up CPU case */
> + iowrite32(ioread32(hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY) | IP_BUSY_WAKEUP,
> + hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);

Much easier to read in the standard pattern, i.e.

u32 val;

val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);
val |= IP_BUSY_WAKEUP;
iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);

> +}
> +
> +/**
> + * cldma_hw_restore() - Restore CLDMA HW registers
> + * @hw_info: Pointer to struct cldma_hw_info
> + *
> + * Restore HW after resume. Writes uplink configuration for CLDMA HW.

> + *

Redundant blank line. Check your code again for other unneeded blank lines.

> + */

...

> +void cldma_hw_reset(void __iomem *ao_base)
> +{
> + iowrite32(ioread32(ao_base + REG_INFRA_RST4_SET) | RST4_CLDMA1_SW_RST_SET,
> + ao_base + REG_INFRA_RST4_SET);
> + iowrite32(ioread32(ao_base + REG_INFRA_RST2_SET) | RST2_CLDMA1_AO_SW_RST_SET,
> + ao_base + REG_INFRA_RST2_SET);
> + udelay(1);
> + iowrite32(ioread32(ao_base + REG_INFRA_RST4_CLR) | RST4_CLDMA1_SW_RST_CLR,
> + ao_base + REG_INFRA_RST4_CLR);
> + iowrite32(ioread32(ao_base + REG_INFRA_RST2_CLR) | RST2_CLDMA1_AO_SW_RST_CLR,
> + ao_base + REG_INFRA_RST2_CLR);

Setting and clearing are done in the same order, is that okay?
Can we do it symmetrically instead?

> +}

...

> + mb(); /* prevents outstanding GPD updates */

Is there any counterpart of this barrier?

...

> + ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0) & bitmask;

ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
ch_id &= bitmask;

Consider using this pattern everywhere in similar cases.

...

> + /* ack interrupt */

Is it useful? As it stands it sounds to me like a half-finished phrase; perhaps
it needs more elaboration?

...

> + /* enable interrupt */

> + /* mask wakeup signal */

> + /* disable TX and RX invalid address check */

Ditto to all of these and more.

You shouldn't describe what code is doing, you should put why it's doing it.

...

> +/* interrupt status bit meaning, bitmask */
> +#define EMPTY_STATUS_BITMASK 0xff00
> +#define TXRX_STATUS_BITMASK 0x00ff

GENMASK()

> +/* L2RISAR0 */
> +#define TQ_ERR_INT_BITMASK 0x00ff0000
> +#define TQ_ACTIVE_START_ERR_INT_BITMASK 0xff000000
> +
> +#define RQ_ERR_INT_BITMASK 0x00ff0000
> +#define RQ_ACTIVE_START_ERR_INT_BITMASK 0xff000000

GENMASK()
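
For instance, assuming the hex values above really are the intended fields, an
untested sketch (the RQ_* pair would follow the same shape, and this also wants
linux/bits.h):

#define EMPTY_STATUS_BITMASK		GENMASK(15, 8)
#define TXRX_STATUS_BITMASK		GENMASK(7, 0)

#define TQ_ERR_INT_BITMASK		GENMASK(23, 16)
#define TQ_ACTIVE_START_ERR_INT_BITMASK	GENMASK(31, 24)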

What exactly does BIT mean in all of them when they are _not_ bit masks?

...

> +static struct cldma_request *cldma_ring_step_forward(struct cldma_ring *ring,
> + struct cldma_request *req)
> +{
> + struct cldma_request *next_req;
> +
> + if (req->entry.next == &ring->gpd_ring)
> + next_req = list_first_entry(&ring->gpd_ring, struct cldma_request, entry);
> + else
> + next_req = list_entry(req->entry.next, struct cldma_request, entry);

list_next_entry()

> +
> + return next_req;
> +}
> +
> +static struct cldma_request *cldma_ring_step_backward(struct cldma_ring *ring,
> + struct cldma_request *req)
> +{
> + struct cldma_request *prev_req;
> +
> + if (req->entry.prev == &ring->gpd_ring)
> + prev_req = list_last_entry(&ring->gpd_ring, struct cldma_request, entry);
> + else
> + prev_req = list_entry(req->entry.prev, struct cldma_request, entry);

list_prev_entry()

> +
> + return prev_req;
> +}
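
Both step helpers could then collapse to something like this untested sketch:

	if (list_is_last(&req->entry, &ring->gpd_ring))
		return list_first_entry(&ring->gpd_ring, struct cldma_request, entry);

	return list_next_entry(req, entry);

and the backward variant the same way with
list_is_first()/list_last_entry()/list_prev_entry().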

...

> +static int cldma_gpd_rx_from_queue(struct cldma_queue *queue, int budget, bool *over_budget)
> +{
> + unsigned char hwo_polling_count = 0;
> + struct cldma_hw_info *hw_info;
> + struct cldma_ctrl *md_ctrl;
> + struct cldma_request *req;
> + struct cldma_rgpd *rgpd;
> + struct sk_buff *new_skb;
> + bool rx_done = false;
> + struct sk_buff *skb;
> + int count = 0;

> + int ret = 0;

How exactly is this assignment being used?
You need to revisit all of them in the driver.

> + md_ctrl = md_cd_get(queue->hif_id);
> + hw_info = &md_ctrl->hw_info;

> + while (!rx_done) {

do {

> + req = queue->tr_done;
> + if (!req) {
> + dev_err(md_ctrl->dev, "RXQ was released\n");
> + return -ENODATA;
> + }
> +
> + rgpd = req->gpd;
> + if ((rgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
> + u64 gpd_addr;
> +
> + /* current 64 bit address is in a table by Q index */
> + gpd_addr = ioread64(hw_info->ap_pdn_base +
> + REG_CLDMA_DL_CURRENT_ADDRL_0 +
> + queue->index * sizeof(u64));

> + if (gpd_addr == GENMASK_ULL(63, 0)) {
> + dev_err(md_ctrl->dev, "PCIe Link disconnected\n");
> + return -ENODATA;
> + }

I'm wondering if PCI core provides some common method for that (like
pci_dev_is_present() or so) and if it can be used here.
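
FWIW the existing helper is pci_device_is_present(); assuming the struct
pci_dev is reachable via mtk_dev->pdev here, an untested sketch:

	if (!pci_device_is_present(md_ctrl->mtk_dev->pdev)) {
		dev_err(md_ctrl->dev, "PCIe link is down\n");
		return -ENODATA;
	}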

> + if ((u64)queue->tr_done->gpd_addr != gpd_addr &&
> + hwo_polling_count++ < 100) {
> + udelay(1);
> + continue;
> + }
> +
> + break;

I would rather expect

if (...)
break;
...
continue;

> + }
> +
> + hwo_polling_count = 0;
> + skb = req->skb;
> +
> + if (req->mapped_buff) {
> + dma_unmap_single(md_ctrl->dev, req->mapped_buff,
> + skb_data_size(skb), DMA_FROM_DEVICE);
> + req->mapped_buff = 0;
> + }
> +
> + /* init skb struct */
> + skb->len = 0;
> + skb_reset_tail_pointer(skb);
> + skb_put(skb, rgpd->data_buff_len);
> +
> + /* consume skb */
> + if (md_ctrl->recv_skb) {
> + ret = md_ctrl->recv_skb(queue, skb);
> + } else {
> + ccci_free_skb(&md_ctrl->mtk_dev->pools, skb);
> + ret = -ENETDOWN;
> + }

> + new_skb = NULL;

It would be better to put it in the else branch...

> + if (ret >= 0 || ret == -ENETDOWN)
> + new_skb = ccci_alloc_skb_from_pool(&md_ctrl->mtk_dev->pools,
> + queue->tr_ring->pkt_size,
> + GFS_BLOCKING);

> +

...and drop this empty line.

> + if (!new_skb) {
> + /* either the port was busy or the skb pool was empty */
> + usleep_range(5000, 10000);
> + return -EAGAIN;

Neither the comment nor the function name suggests this error code.
Why not -EBUSY or -ENOMEM?

> + }
> +
> + /* mark cldma_request as available */
> + req->skb = NULL;
> + cldma_rgpd_set_data_ptr(rgpd, 0);
> + queue->tr_done = cldma_ring_step_forward(queue->tr_ring, req);
> +
> + req = queue->rx_refill;
> + rgpd = req->gpd;
> + req->mapped_buff = dma_map_single(md_ctrl->dev, new_skb->data,
> + skb_data_size(new_skb), DMA_FROM_DEVICE);
> + if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
> + dev_err(md_ctrl->dev, "DMA mapping failed\n");
> + req->mapped_buff = 0;
> + ccci_free_skb(&md_ctrl->mtk_dev->pools, new_skb);
> + return -ENOMEM;
> + }
> +
> + cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff);
> + rgpd->data_buff_len = 0;
> + /* set HWO, no need to hold ring_lock */
> + rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
> + /* mark cldma_request as available */
> + req->skb = new_skb;
> + queue->rx_refill = cldma_ring_step_forward(queue->tr_ring, req);

> + if (++count >= budget && need_resched()) {
> + *over_budget = true;
> + rx_done = true;
> + }

} while (...);

*over_budget = true;

> + }
> +
> + return 0;
> +}

...

> + ret = cldma_gpd_rx_from_queue(queue, budget, &over_budget);
> + if (ret == -ENODATA)
> + return 0;
> +
> + if (ret)
> + return ret;

Drop redundant blank line

> + /* greedy mode */
> + l2_rx_int = cldma_hw_int_status(hw_info, BIT(queue->index), true);

> +

Redundant blank line. I think you may shrink your driver by 100+ LOCs
due to these...

> + if (l2_rx_int) {

> + }
> + }

...

> + struct cldma_ctrl *md_ctrl;
> + struct cldma_queue *queue;
> + int value;
> +
> + queue = container_of(work, struct cldma_queue, cldma_rx_work);
> + md_ctrl = md_cd_get(queue->hif_id);

These assignments can easily be unified with the definitions.

> + value = queue->tr_ring->handle_rx_done(queue, queue->budget);

> +

Redundant.

> + if (value && md_ctrl->rxq_active & BIT(queue->index)) {
> + queue_work(queue->worker, &queue->cldma_rx_work);
> + return;
> + }

...

> + spin_lock_irqsave(&queue->ring_lock, flags);
> + req = queue->tr_done;
> + if (!req) {

> + dev_err(md_ctrl->dev, "TXQ was released\n");

Under spin lock?! Why?

> + spin_unlock_irqrestore(&queue->ring_lock, flags);
> + break;
> + }

...

> + /* restore IOC setting */
> + if (req->ioc_override & GPD_FLAGS_IOC) {
> + if (req->ioc_override & GPD_FLAGS_HWO)
> + tgpd->gpd_flags |= GPD_FLAGS_IOC;
> + else
> + tgpd->gpd_flags &= ~GPD_FLAGS_IOC;

> + dev_notice(md_ctrl->dev,
> + "qno%u, req->ioc_override=0x%x,tgpd->gpd_flags=0x%x\n",
> + queue->index, req->ioc_override, tgpd->gpd_flags);

Why such a high message level without bailing out?

> + }

...

> + ul_curr_addr = ioread64(hw_info->ap_pdn_base +
> + REG_CLDMA_UL_CURRENT_ADDRL_0 +
> + queue->index * sizeof(u64));
> + if (req->gpd_addr != ul_curr_addr)
> + dev_err(md_ctrl->dev,
> + "CLDMA%d Q%d TGPD addr, SW:%pad, HW:%pad\n", md_ctrl->hif_id,
> + queue->index, &req->gpd_addr, &ul_curr_addr);

Why error level without bailing out?

> + else
> + /* retry */
> + cldma_hw_resume_queue(&md_ctrl->hw_info, queue->index, false);

...

> + struct cldma_hw_info *hw_info;
> + struct cldma_ctrl *md_ctrl;
> + struct cldma_queue *queue;
> + unsigned int l2_tx_int;
> + unsigned long flags;
> +
> + queue = container_of(work, struct cldma_queue, cldma_tx_work);
> + md_ctrl = md_cd_get(queue->hif_id);
> + hw_info = &md_ctrl->hw_info;

Can be unified with definitions above.
Same to all similar cases.

...

> + item->gpd = dma_pool_alloc(md_ctrl->gpd_dmapool, GFP_KERNEL | __GFP_ZERO,
> + &item->gpd_addr);

dma_pool_zalloc()

> + if (!item->gpd)
> + goto err_gpd_alloc;
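
I.e. (same semantics, just without open-coding __GFP_ZERO):

	item->gpd = dma_pool_zalloc(md_ctrl->gpd_dmapool, GFP_KERNEL, &item->gpd_addr);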

...

> + item->gpd = dma_pool_alloc(md_ctrl->gpd_dmapool, GFP_KERNEL | __GFP_ZERO,
> + &item->gpd_addr);

Ditto.

> + if (!item->gpd) {
> + kfree(item);
> + return NULL;
> + }

...

> + for (i = 0; i < CLDMA_TXQ_NUM; i++) {
> + if (l2_tx_int & BIT(i)) {

NIH for_each_set_bit()

> + }
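
E.g. an untested sketch (note that for_each_set_bit() wants an unsigned long):

	unsigned long pending = l2_tx_int;
	int i;

	for_each_set_bit(i, &pending, CLDMA_TXQ_NUM) {
		/* per-queue TX done handling */
	}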

...

> + for (i = 0; i < CLDMA_RXQ_NUM; i++) {
> + if (l2_rx_int & (BIT(i) | EQ_STA_BIT(i))) {

Ditto.

> + }

...

> + return ((tx_active || rx_active) && tx_active != CLDMA_ALL_Q && rx_active != CLDMA_ALL_Q);

Too many parentheses.

...

> + if (read_poll_timeout(queues_active, active, !active, CHECK_Q_STOP_STEP_US,
> + CHECK_Q_STOP_TIMEOUT_US, true, hw_info)) {
> + dev_err(md_ctrl->dev, "Could not stop CLDMA%d queues", hif_id);

> + return -EAGAIN;

Why shadow the error code?

> + }

...

> +static void cldma_late_release(struct cldma_ctrl *md_ctrl)
> +{
> + int i;

> + if (md_ctrl->is_late_init) {

if (!...)
return;

> + }
> +}
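
I.e. an early return, as a sketch:

	if (!md_ctrl->is_late_init)
		return;

	/* free all TX/RX CLDMA request/GPD/skb buffers */
	...

which drops one indentation level for the whole body.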

...

> + /* wait write done */
> + wmb();

This requires also elaboration on where is the counterpart of this barrier.

...

> +static void clear_txq(struct cldma_ctrl *md_ctrl, int qnum)
> +{
> + struct cldma_request *req;
> + struct cldma_queue *txq;
> + struct cldma_tgpd *tgpd;
> + unsigned long flags;

> + txq = &md_ctrl->txq[qnum];

To the definition block.

> + spin_lock_irqsave(&txq->ring_lock, flags);

> + req = list_first_entry(&txq->tr_ring->gpd_ring, struct cldma_request, entry);
> + txq->tr_done = req;
> + txq->tx_xmit = req;
> + txq->budget = txq->tr_ring->length;

I'm wondering what the magic is behind these lines...

> + list_for_each_entry(req, &txq->tr_ring->gpd_ring, entry) {
> + tgpd = req->gpd;
> + tgpd->gpd_flags &= ~GPD_FLAGS_HWO;
> + cldma_tgpd_set_data_ptr(tgpd, 0);
> + tgpd->data_buff_len = 0;
> + if (req->skb) {
> + ccci_free_skb(&md_ctrl->mtk_dev->pools, req->skb);
> + req->skb = NULL;
> + }
> + }
> +
> + spin_unlock_irqrestore(&txq->ring_lock, flags);
> +}

...

> + tx_req->mapped_buff = dma_map_single(md_ctrl->dev, skb->data,
> + skb->len, DMA_TO_DEVICE);

In one case you have a long line, in many others you wrap like this.
A bit of consistency is required. If you use 100, use it everywhere,
or otherwise go for 80 everywhere (with possible exceptions as written
in the documentation).

...


> + if (queue->budget > (MAX_TX_BUDGET - 1))

if (queue->budget >= MAX_TX_BUDGET)

> + return queue->budget;

...

> + * Return: 0 on success, -ENOMEM on allocation failure, -EINVAL on invalid queue request, or
> + * -EBUSY on resource lock failure.

* Return: 0 on success,
-ENOMEM on allocation failure,
-EINVAL on invalid queue request, or
-EBUSY on resource lock failure.

?

...

> + do {
> + spin_lock_irqsave(&queue->ring_lock, flags);
> + tx_req = queue->tx_xmit;
> + if (queue->budget > 0 && !tx_req->skb) {
> + queue->budget--;
> + queue->tr_ring->handle_tx_request(queue, tx_req, skb, ioc_override);
> + queue->tx_xmit = cldma_ring_step_forward(queue->tr_ring, tx_req);
> + spin_unlock_irqrestore(&queue->ring_lock, flags);

Since these are two different locks, how do you guarantee that the code below
will run correctly? (Perhaps some explanation is needed.)

> + spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
> + cldma_hw_start_send(md_ctrl, qno);
> + spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> + break;
> + }

...

> + ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0);
> + if (ret == -ERESTARTSYS)
> + ret = -EINTR;

Why overriding?

...

> +exit:

Seems useless.

> + return ret;

...

> +/* this function contains longer duration initializations */

Useless.

You really must revisit all the comments throughout the driver and change them
to explain why rather than what.

The above, of course, may be converted to kernel-doc that actually describes
what the API is doing in a more verbose way.

...

> + snprintf(dma_pool_name, 32, "cldma_request_dma_hif%d", md_ctrl->hif_id);

sizeof()
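
I.e.:

	snprintf(dma_pool_name, sizeof(dma_pool_name), "cldma_request_dma_hif%d",
		 md_ctrl->hif_id);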

...

sizeof(struct cldma_tgpd), 16, 0);

16 is magic.

...

> + for (i = 0; i < CLDMA_TXQ_NUM; i++)
> + cldma_tx_queue_init(&md_ctrl->txq[i]);
> +
> + for (i = 0; i < CLDMA_RXQ_NUM; i++)
> + cldma_rx_queue_init(&md_ctrl->rxq[i]);
> + mutex_unlock(&ctl_cfg_mutex);
> +
> + md_ctrl->is_late_init = true;
> + return 0;
> +
> +err_rx_ring:
> + while (i--)
> + cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[i], DMA_FROM_DEVICE);

> + i = CLDMA_TXQ_NUM;

I would rather provide separate i and j counters and drop this line for good;
see the sketch below.

> +err_tx_ring:
> + while (i--)
> + cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);
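
A sketch of what I mean, assuming the RX ring loop switches to its own counter j:

err_rx_ring:
	while (j--)
		cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[j], DMA_FROM_DEVICE);
err_tx_ring:
	while (i--)
		cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);

i is still CLDMA_TXQ_NUM after the successful TX loop, so the explicit
assignment is not needed.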

...

> +/**
> + * cldma_exception() - CLDMA exception handler
> + * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)
> + * @stage: exception stage
> + *
> + * disable/flush/stop/start CLDMA/queues based on exception stage.

Please elaborate on the state machine here with an (ASCII) chart of the
possible state changes.

> + *

Redundant.

> + */

...

> +/**
> + * cldma_init() - Initialize CLDMA
> + * @hif_id: CLDMA ID (ID_CLDMA0 or ID_CLDMA1)

> + * initialize HIF TX/RX queue structure
> + * register CLDMA callback isr with PCIe driver

You need to revisit all descriptions in order to amend English grammar and
punctuation. E.g. here you lost capitalization and period.

> + * Return: 0 on success, a negative error code on failure
> + */

...

> + alloc_workqueue("md_hif%d_tx%d_worker",
> + WQ_UNBOUND | WQ_MEM_RECLAIM | (!i ? WQ_HIGHPRI : 0),

What is the idea behind redundant negation?

> + 1, hif_id, i);

...

> +#ifndef __T7XX_HIF_CLDMA_H__
> +#define __T7XX_HIF_CLDMA_H__
> +
> +#include <linux/pci.h>
> +#include <linux/sched.h>
> +#include <linux/spinlock.h>
> +#include <linux/wait.h>
> +#include <linux/workqueue.h>
> +#include <linux/types.h>
> +
> +#include "t7xx_cldma.h"
> +#include "t7xx_common.h"
> +#include "t7xx_modem_ops.h"
> +#include "t7xx_pci.h"

All of these headers are needed and used here?
Really?!

But bits.h is missing...

...

> +/* cldma_ring is quite light, most of ring buffer operations require queue struct. */
> +struct cldma_ring {
> + struct list_head gpd_ring; /* ring of struct cldma_request */
> + int length; /* number of struct cldma_request */
> + int pkt_size; /* size of each packet in ring */
> +
> + int (*handle_tx_request)(struct cldma_queue *queue, struct cldma_request *req,
> + struct sk_buff *skb, unsigned int ioc_override);
> + int (*handle_rx_done)(struct cldma_queue *queue, int budget);
> + int (*handle_tx_done)(struct cldma_queue *queue);

Perhaps convert comments to proper kernel doc.

> +};
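
E.g. a rough, untested sketch:

/**
 * struct cldma_ring - ring of CLDMA requests
 * @gpd_ring: ring of struct cldma_request
 * @length: number of struct cldma_request in the ring
 * @pkt_size: size of each packet in the ring
 * @handle_tx_request: fill one TX GPD from an skb
 * @handle_rx_done: collect received GPDs, up to the given budget
 * @handle_tx_done: collect completed TX GPDs
 */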

...

> +struct cldma_tgpd {
> + u8 gpd_flags;
> + u16 non_used;
> + u8 debug_id;
> + u32 next_gpd_ptr_h;
> + u32 next_gpd_ptr_l;
> + u32 data_buff_bd_ptr_h;
> + u32 data_buff_bd_ptr_l;
> + u16 data_buff_len;
> + u16 non_used1;
> +} __packed;

What useful purpose does __packed serve here?

> +struct cldma_rgpd {
> + u8 gpd_flags;
> + u8 non_used;
> + u16 data_allow_len;
> + u32 next_gpd_ptr_h;
> + u32 next_gpd_ptr_l;
> + u32 data_buff_bd_ptr_h;
> + u32 data_buff_bd_ptr_l;
> + u16 data_buff_len;
> + u8 non_used1;
> + u8 debug_id;
> +} __packed;

Ditto.

(If it's about unaligned addresses, then you have to mention it somewhere)

--
With Best Regards,
Andy Shevchenko


2021-11-02 16:41:07

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] net: wwan: t7xx: Add core components

On Sun, Oct 31, 2021 at 08:56:24PM -0700, Ricardo Martinez wrote:
> From: Haijun Lio <[email protected]>
>
> Registers the t7xx device driver with the kernel. Setup all the core
> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> modem control operations, modem state machine, and build
> infrastructure.
>
> * PCIe layer code implements driver probe and removal.
> * MHCCIF provides interrupt channels to communicate events
> such as handshake, PM and port enumeration.
> * Modem control implements the entry point for modem init,
> reset and exit.
> * The modem status monitor is a state machine used by modem control
> to complete initialization and stop. It is used also to propagate
> exception events reported by other components.

I will assume that the comments given against the previous patch will be
applied to this and the rest of the patches where it makes sense or is
appropriate.

Below only new comments.

...

> +config MTK_T7XX
> + tristate "MediaTek PCIe 5G WWAN modem T7XX device"

T77xx is easier for human beings to read.

> + depends on PCI

...

> +struct ccci_header {
> + /* do not assume data[1] is data length in rx */

To understand this comment you need to elaborate on the content of the header.

> + u32 data[2];
> + u32 status;
> + u32 reserved;
> +};

...

> +#define CCCI_HEADER_NO_DATA 0xffffffff

Is this a value internal to Linux or something given by the hardware?

...

> +/* Modem exception check identification number */
> +#define MD_EX_CHK_ID 0x45584350
> +/* Modem exception check acknowledge identification number */
> +#define MD_EX_CHK_ACK_ID 0x45524543

To me both look like fourcc codes. Can you add their ASCII values in comments?
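
Reading the bytes MSB first, they decode to "EXCP" (0x45 0x58 0x43 0x50) and
"EREC" (0x45 0x52 0x45 0x43), so e.g.:

#define MD_EX_CHK_ID		0x45584350	/* "EXCP" */
#define MD_EX_CHK_ACK_ID	0x45524543	/* "EREC" */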

...

> + /* Use 1*4 bits to avoid low power bits*/

What does "1*4 bits" mean?

> + iowrite32(L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1),
> + IREG_BASE(mtk_dev) + DIS_ASPM_LOWPWR_SET_0);

...

> +int mtk_pci_mhccif_isr(struct mtk_pci_dev *mtk_dev)
> +{
> + struct md_sys_info *md_info;
> + struct ccci_fsm_ctl *ctl;
> + struct mtk_modem *md;
> + unsigned int int_sta;
> + unsigned long flags;
> + u32 mask;
> +
> + md = mtk_dev->md;
> + ctl = fsm_get_entry();
> + if (!ctl) {

> + dev_err(&mtk_dev->pdev->dev,
> + "process MHCCIF interrupt before modem monitor was initialized\n");

Can this potentially flood the logs? If so, needs to be rate limited.

> + return -EINVAL;
> + }

> + md_info = md->md_info;

> + spin_lock_irqsave(&md_info->exp_spinlock, flags);

Can it be called outside of IRQ context?

> + int_sta = get_interrupt_status(mtk_dev);
> + md_info->exp_id |= int_sta;
> +
> + if (md_info->exp_id & D2H_INT_PORT_ENUM) {
> + md_info->exp_id &= ~D2H_INT_PORT_ENUM;
> + if (ctl->curr_state == CCCI_FSM_INIT ||
> + ctl->curr_state == CCCI_FSM_PRE_START ||
> + ctl->curr_state == CCCI_FSM_STOPPED)
> + ccci_fsm_recv_md_interrupt(MD_IRQ_PORT_ENUM);
> + }
> +
> + if (md_info->exp_id & D2H_INT_EXCEPTION_INIT) {
> + if (ctl->md_state == MD_STATE_INVALID ||
> + ctl->md_state == MD_STATE_WAITING_FOR_HS1 ||
> + ctl->md_state == MD_STATE_WAITING_FOR_HS2 ||
> + ctl->md_state == MD_STATE_READY) {
> + md_info->exp_id &= ~D2H_INT_EXCEPTION_INIT;
> + ccci_fsm_recv_md_interrupt(MD_IRQ_CCIF_EX);
> + }
> + } else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) {
> + /* start handshake if MD not assert */
> + mask = mhccif_mask_get(mtk_dev);
> + if ((md_info->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask & D2H_INT_ASYNC_MD_HK)) {
> + md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
> + queue_work(md->handshake_wq, &md->handshake_work);
> + }
> + }
> +
> + spin_unlock_irqrestore(&md_info->exp_spinlock, flags);
> +
> + return 0;
> +}

...

> +static int mtk_acpi_reset(struct mtk_pci_dev *mtk_dev, char *fn_name)
> +{
> +#ifdef CONFIG_ACPI
> + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
> + acpi_status acpi_ret;
> + struct device *dev;
> + acpi_handle handle;
> +
> + dev = &mtk_dev->pdev->dev;

> + if (acpi_disabled) {
> + dev_err(dev, "acpi function isn't enabled\n");
> + return -EFAULT;
> + }

Why this check?

> + handle = ACPI_HANDLE(dev);
> + if (!handle) {
> + dev_err(dev, "acpi handle isn't found\n");

acpi --> ACPI

> + return -EFAULT;
> + }
> +
> + if (!acpi_has_method(handle, fn_name)) {
> + dev_err(dev, "%s method isn't found\n", fn_name);
> + return -EFAULT;
> + }
> +
> + acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer);
> + if (ACPI_FAILURE(acpi_ret)) {
> + dev_err(dev, "%s method fail: %s\n", fn_name, acpi_format_exception(acpi_ret));
> + return -EFAULT;
> + }
> +#endif
> + return 0;
> +}

...

> + msleep(RGU_RESET_DELAY_US);

The DELAY constant is in microseconds while msleep() takes milliseconds.
Something is wrong here.

Also, delays such as 10ms+ should be explained, especially when they are
in a threaded IRQ handler.
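
If the constant really is in microseconds, a sketch of one possible fix
(fine in a threaded IRQ handler, which may sleep):

	usleep_range(RGU_RESET_DELAY_US, RGU_RESET_DELAY_US + 50);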

...

> +void mtk_md_exception_handshake(struct mtk_modem *md)
> +{
> + struct mtk_pci_dev *mtk_dev;

struct device *dev = &mtk_dev->pdev->dev;

will help a lot to make the code below cleaner.

> + int ret;

> + mtk_dev = md->mtk_dev;
> + md_exception(md, HIF_EX_INIT);
> + ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_INIT_DONE);
> +
> + if (ret)
> + dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
> + D2H_INT_EXCEPTION_INIT_DONE);
> +
> + md_exception(md, HIF_EX_INIT_DONE);
> + ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_CLEARQ_DONE);
> + if (ret)
> + dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
> + D2H_INT_EXCEPTION_CLEARQ_DONE);
> +
> + md_exception(md, HIF_EX_CLEARQ_DONE);
> + ret = wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_ALLQ_RESET);
> + if (ret)
> + dev_err(&mtk_dev->pdev->dev, "EX CCIF HS timeout, RCH 0x%lx\n",
> + D2H_INT_EXCEPTION_ALLQ_RESET);
> +
> + md_exception(md, HIF_EX_ALLQ_RESET);
> +}

...

> +err_fsm_init:
> + ccci_fsm_uninit();
> +err_alloc:
> + destroy_workqueue(md->handshake_wq);

Labels should describe what will be done at the goto target, not what was already done.
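
A naming sketch only:

err_uninit_fsm:
	ccci_fsm_uninit();
err_destroy_wq:
	destroy_workqueue(md->handshake_wq);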

...

> +/* Modem feature query identification number */
> +#define MD_FEATURE_QUERY_ID 0x49434343

All FourCC values should be represented as ASCII in the comments.
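
e.g. (bytes read most-significant first; stored little-endian this is the
"CCCI" byte sequence):

	/* Modem feature query identification number ("ICCC") */
	#define MD_FEATURE_QUERY_ID	0x49434343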

...

> +#ifndef __T7XX_MONITOR_H__
> +#define __T7XX_MONITOR_H__

> +#include <linux/sched.h>

Who is the user of this?

...

> +static int mtk_request_irq(struct pci_dev *pdev)
> +{
> + struct mtk_pci_dev *mtk_dev;
> + int ret, i;
> +
> + mtk_dev = pci_get_drvdata(pdev);
> +
> + for (i = 0; i < EXT_INT_NUM; i++) {
> + const char *irq_descr;
> + int irq_vec;
> +
> + if (!mtk_dev->intr_handler[i])
> + continue;
> +
> + irq_descr = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
> + dev_driver_string(&pdev->dev), i);
> + if (!irq_descr)

There is a resource leak here: the IRQs requested in previous loop iterations
are not freed on this early return.
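
A sketch of one way to handle it (the err_free_requested_irqs label is made up
and would contain the same unwind loop used further below):

	if (!irq_descr) {
		ret = -ENOMEM;
		goto err_free_requested_irqs;	/* free_irq() for vectors 0..i-1 */
	}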

> + return -ENOMEM;
> +
> + irq_vec = pci_irq_vector(pdev, i);
> + ret = request_threaded_irq(irq_vec, mtk_dev->intr_handler[i],
> + mtk_dev->intr_thread[i], 0, irq_descr,
> + mtk_dev->callback_param[i]);
> + if (ret) {
> + dev_err(&pdev->dev, "Failed to request_irq: %d, int: %d, ret: %d\n",
> + irq_vec, i, ret);
> + while (i--) {
> + if (!mtk_dev->intr_handler[i])
> + continue;
> +
> + free_irq(pci_irq_vector(pdev, i), mtk_dev->callback_param[i]);
> + }
> +
> + return ret;
> + }
> + }
> +
> + return 0;
> +}

...

> + ret = pci_alloc_irq_vectors(mtk_dev->pdev, EXT_INT_NUM, EXT_INT_NUM, PCI_IRQ_MSIX);
> + if (ret < 0) {
> + dev_err(&mtk_dev->pdev->dev, "Failed to allocate MSI-X entry, errno: %d\n", ret);

', errno' is redundant.

> + return ret;
> + }
> +
> + ret = mtk_request_irq(mtk_dev->pdev);
> + if (ret) {
> + pci_free_irq_vectors(mtk_dev->pdev);
> + return ret;
> + }
> +
> + /* Set MSIX merge config */
> + mtk_pcie_mac_msix_cfg(mtk_dev, EXT_INT_NUM);
> + return 0;
> +}

...

> + ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));

This API is obsolete; use the corresponding DMA API directly.
On top of that, setting a 64-bit mask never fails...

> + if (ret) {

> + ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
> + if (ret) {
> + dev_err(&pdev->dev, "Could not set PCI DMA mask, err: %d\n", ret);
> + return ret;
> + }

...so this fallback is almost dead code.

> + }

...

> + ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
> + if (ret) {
> + ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
> + if (ret) {
> + dev_err(&pdev->dev, "Could not set consistent PCI DMA mask, err: %d\n",
> + ret);
> + return ret;
> + }
> + }

Ditto.
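
A minimal sketch with the current DMA API, covering both the streaming and
coherent masks in one call (per the point above, no 32-bit fallback is needed):

	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;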

...

> + mtk_pcie_mac_set_int(mtk_dev, MHCCIF_INT);
> + mtk_pcie_mac_interrupts_en(mtk_dev);

> + pci_set_master(pdev);

It's too late for this call. Are you sure it's needed here? Why?

> +
> + return 0;

...

> +err:

Meaningless label name. Try your best to make it better.

> + ccci_skb_pool_free(&mtk_dev->pools);

Does it free IRQ handlers? If so, the function naming is not good enough.

> + return ret;

...

> +static int __init mtk_pci_init(void)
> +{
> + return pci_register_driver(&mtk_pci_driver);
> +}
> +module_init(mtk_pci_init);
> +
> +static void __exit mtk_pci_cleanup(void)
> +{
> + pci_unregister_driver(&mtk_pci_driver);
> +}
> +module_exit(mtk_pci_cleanup);

NIH module_pci_driver().
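
i.e. the whole boilerplate above collapses to:

	module_pci_driver(mtk_pci_driver);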

...

> + * @pdev: pci device

pci --> PCI

...

> +#include <linux/io-64-nonatomic-lo-hi.h>

> +#include <linux/msi.h>

Wondering which APIs you are using from there.

...

> + for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) {
> + offset = (ATR_PORT_OFFSET * port) + (ATR_TABLE_OFFSET * i);

Too many parentheses.

> + /* Disable table by SRC_ADDR */
> + reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
> + iowrite64(0, reg);
> + }

...

> + pos = ffs(lower_32_bits(cfg->size));
> + if (!pos)
> + pos = ffs(upper_32_bits(cfg->size)) + 32;

NIH __ffs64() ?
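
Sketch, keeping in mind that __ffs64() is 0-based while ffs() is 1-based
(and assuming cfg->size is known to be non-zero here):

	pos = __ffs64(cfg->size) + 1;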

...

> +static void mtk_pcie_mac_enable_disable_int(struct mtk_pci_dev *mtk_dev, bool enable)
> +{
> + u32 value;
> +
> + value = ioread32(IREG_BASE(mtk_dev) + ISTAT_HST_CTRL);

Either add...

> + if (enable)
> + value &= ~ISTAT_HST_CTRL_DIS;
> + else
> + value |= ISTAT_HST_CTRL_DIS;

> +

...or remove blank line for the sake of consistency.

> + iowrite32(value, IREG_BASE(mtk_dev) + ISTAT_HST_CTRL);
> +}

...

> +#include <linux/bitops.h>

Who is the user of this?

...

> +#ifndef __T7XX_REG_H__
> +#define __T7XX_REG_H__
> +
> +#include <linux/bits.h>

...

> +#define EXP_BAR0 0x0c
> +#define EXP_BAR2 0x04
> +#define EXP_BAR4 0x0c

BAR0 and BAR4 have the same value. Either explain or fix accordingly.

...

> +#define MSIX_MSK_SET_ALL GENMASK(31, 24)

Missing blank line?

> +enum pcie_int {
> + DPMAIF_INT = 0,
> + CLDMA0_INT,
> + CLDMA1_INT,
> + CLDMA2_INT,
> + MHCCIF_INT,
> + DPMAIF2_INT,
> + SAP_RGU_INT,
> + CLDMA3_INT,
> +};

...

> +static struct sk_buff *alloc_skb_from_pool(struct skb_pools *pools, size_t size)
> +{
> + if (size > MTK_SKB_4K)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_64k);
> + else if (size > MTK_SKB_16)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_4k);
> + else if (size > 0)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_16);

Redundant 'else' branches; see the sketch after the function below. I also
recommend re-reading our internal Wiki about typical issues with the code.

> + return NULL;
> +}
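
For illustration, the first helper without the redundant branches would read:

static struct sk_buff *alloc_skb_from_pool(struct skb_pools *pools, size_t size)
{
	if (size > MTK_SKB_4K)
		return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_64k);
	if (size > MTK_SKB_16)
		return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_4k);
	if (size > 0)
		return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_16);

	return NULL;
}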

...

> +static struct sk_buff *alloc_skb_from_kernel(size_t size, gfp_t gfp_mask)
> +{
> + if (size > MTK_SKB_4K)
> + return __dev_alloc_skb(MTK_SKB_64K, gfp_mask);
> + else if (size > MTK_SKB_1_5K)
> + return __dev_alloc_skb(MTK_SKB_4K, gfp_mask);
> + else if (size > MTK_SKB_16)
> + return __dev_alloc_skb(MTK_SKB_1_5K, gfp_mask);
> + else if (size > 0)
> + return __dev_alloc_skb(MTK_SKB_16, gfp_mask);

Ditto.

> + return NULL;
> +}

...

> + for (i = 0; i < queue->max_len; i++) {
> + struct sk_buff *skb;
> +
> + skb = alloc_skb_from_kernel(skb_size, GFP_KERNEL);

> +

Redundant.

> + if (!skb) {
> + while ((skb = skb_dequeue(&queue->skb_list)))
> + dev_kfree_skb_any(skb);
> + return -ENOMEM;
> + }
> +
> + skb_queue_tail(&queue->skb_list, skb);
> + }

...

> +/**
> + * ccci_alloc_skb_from_pool() - allocate memory for skb from pre-allocated pools
> + * @pools: skb pools
> + * @size: allocation size
> + * @blocking : true for blocking operation

Extra white space.

Again, revisit _all_ comments in your series and make them consistent in _all_
possible aspects (style, grammar, ...).

> + *
> + * Returns pointer to skb on success, NULL on failure.
> + */

...

> + if (blocking) {

> + might_sleep();

might_sleep_if() at the top of the function?
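
i.e. something like:

	might_sleep_if(blocking);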

> + skb = alloc_skb_from_kernel(size, GFP_KERNEL);
> + } else {
> + for (count = 0; count < ALLOC_SKB_RETRY; count++) {
> + skb = alloc_skb_from_kernel(size, GFP_ATOMIC);
> + if (skb)
> + return skb;
> + }
> + }

...

> + while (queue->skb_list.qlen < SKB_64K_POOL_SIZE) {
> + skb = alloc_skb_from_kernel(MTK_SKB_64K, GFP_KERNEL);
> + if (skb)
> + skb_queue_tail(&queue->skb_list, skb);
> + }

Can this become an infinite loop?

...

> + while (queue->skb_list.qlen < SKB_4K_POOL_SIZE) {
> + skb = alloc_skb_from_kernel(MTK_SKB_4K, GFP_KERNEL);
> + if (skb)
> + skb_queue_tail(&queue->skb_list, skb);
> + }

Ditto.

...

> + while (queue->skb_list.qlen < SKB_16_POOL_SIZE) {
> + skb = alloc_skb_from_kernel(MTK_SKB_16, GFP_KERNEL);
> + if (skb)
> + skb_queue_tail(&queue->skb_list, skb);
> + }

Ditto.

...

> + pools->reload_work_queue = alloc_workqueue("pool_reload_work",
> + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
> + 1);

... wqflags = WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI;

pools->reload_work_queue = alloc_workqueue("pool_reload_work", wqflags, 1);

> + if (!pools->reload_work_queue) {
> + ret = -ENOMEM;
> + goto err_wq;
> + }

...

> + list_for_each_entry_safe(notifier_cur, notifier_next,
> + &ctl->notifier_list, entry) {

All of a sudden this is two lines...

> + if (notifier_cur == notifier)
> + list_del(&notifier->entry);
> + }

...

> + if (!list_empty(&ctl->event_queue)) {
> + event = list_first_entry(&ctl->event_queue,
> + struct ccci_fsm_event, entry);

event = list_first_entry_or_null(&ctl->event_queue, struct ccci_fsm_event, entry);
if (event) {

> + if (event->event_id == CCCI_EVENT_MD_EX) {
> + fsm_finish_event(ctl, event);
> + } else if (event->event_id == CCCI_EVENT_MD_EX_REC_OK) {
> + rec_ok = true;
> + fsm_finish_event(ctl, event);
> + }
> + }

...

> + if (!list_empty(&ctl->event_queue)) {
> + event = list_first_entry(&ctl->event_queue,
> + struct ccci_fsm_event, entry);
> + if (event->event_id == CCCI_EVENT_MD_EX_PASS)
> + fsm_finish_event(ctl, event);
> + }

Ditto

...

> + if (!atomic_read(&ctl->md->rgu_irq_asserted)) {

It may be set right here, so what's the point of the atomicity of the above check?

> + /* disable DRM before FLDR */
> + mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_DRM_DISABLE_AP);
> + msleep(FSM_DRM_DISABLE_DELAY_MS);
> + /* try FLDR first */
> + err = mtk_acpi_fldr_func(mtk_dev);
> + if (err)
> + mhccif_h2d_swint_trigger(mtk_dev, H2D_CH_DEVICE_RESET);
> + }

...

> + wait_event_interruptible_timeout(ctl->async_hk_wq,
> + atomic_read(&md->core_md.ready) ||
> + atomic_read(&ctl->exp_flg), HZ * 60);

Are you sure you understand what you are doing with the atomics?

> + if (atomic_read(&ctl->exp_flg))
> + dev_err(dev, "MD exception is captured during handshake\n");
> +
> + if (!atomic_read(&md->core_md.ready)) {
> + dev_err(dev, "MD handshake timeout\n");
> + fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
> + } else {
> + fsm_routine_ready(ctl);
> + }

...

> + read_poll_timeout(ioread32, dev_status, (dev_status & MISC_STAGE_MASK) == LINUX_STAGE,
> + 20000, 2000000, false, IREG_BASE(md->mtk_dev) + PCIE_MISC_DEV_STATUS);

Why is ignoring the error fine here?

...

> + cmd = kmalloc(sizeof(*cmd),
> + (in_irq() || in_softirq() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL);

Hmm...

> + if (!cmd)
> + return -ENOMEM;

> + if (in_irq() || irqs_disabled())
> + flag &= ~FSM_CMD_FLAG_WAITING_TO_COMPLETE;

Even more hmm...

> + if (flag & FSM_CMD_FLAG_WAITING_TO_COMPLETE) {
> + wait_event(cmd->complete_wq, cmd->result != FSM_CMD_RESULT_PENDING);

Is it okay in IRQ context?

> + if (cmd->result != FSM_CMD_RESULT_OK)
> + result = -EINVAL;

> + spin_lock_irqsave(&ctl->cmd_complete_lock, flags);
> + kfree(cmd);
> + spin_unlock_irqrestore(&ctl->cmd_complete_lock, flags);

Why is this done under a spin lock?

> + }

...

> +enum md_state ccci_fsm_get_md_state(void)
> +{
> + struct ccci_fsm_ctl *ctl;
> +
> + ctl = ccci_fsm_entry;
> + if (ctl)
> + return ctl->md_state;
> + else
> + return MD_STATE_INVALID;
> +}
> +
> +unsigned int ccci_fsm_get_current_state(void)
> +{
> + struct ccci_fsm_ctl *ctl;
> +
> + ctl = ccci_fsm_entry;
> + if (ctl)
> + return ctl->curr_state;
> + else
> + return CCCI_FSM_STOPPED;
> +}

Redundant 'else' everywhere.

...

> +int ccci_fsm_init(struct mtk_modem *md)
> +{
> + struct ccci_fsm_ctl *ctl;

struct device *dev = &md->mtk_dev->pdev->dev;

...

> + ctl->fsm_thread = kthread_run(fsm_main_thread, ctl, "ccci_fsm");
> + if (IS_ERR(ctl->fsm_thread)) {
> + dev_err(&md->mtk_dev->pdev->dev, "failed to start monitor thread\n");

> + return PTR_ERR(ctl->fsm_thread);
> + }
> +
> + return 0;

return PTR_ERR_OR_ZERO(...);

> +}

--
With Best Regards,
Andy Shevchenko


2021-11-03 15:43:02

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] net: wwan: t7xx: Add port proxy infrastructure

On Sun, Oct 31, 2021 at 08:56:25PM -0700, Ricardo Martinez wrote:
> From: Haijun Lio <[email protected]>
>
> Port-proxy provides a common interface to interact with different types
> of ports. Ports export their configuration via `struct t7xx_port` and
> operate as defined by `struct port_ops`.

Same here; assuming that the comments from the previous patches are applied
here as well, only the unique ones are given.

...

> - if (stage == HIF_EX_CLEARQ_DONE)
> + if (stage == HIF_EX_CLEARQ_DONE) {
> /* give DHL time to flush data.
> * this is an empirical value that assure
> * that DHL have enough time to flush all the date.
> */
> msleep(PORT_RESET_DELAY_US);

> + }

These curly brackets should be part of the previous patch. Try to minimize
(ideally avoid) ping-pong style changes in the same series.

...

> +#define CCCI_MAX_CH_ID 0xff /* RX channel ID should NOT be >= this!! */

I haven't got the details behind the comment. Is the Rx channel ID predefined
somewhere? If so, use static_assert() instead of this comment.
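
A hypothetical sketch (CCCI_HIGHEST_RX_CH_ID is a made-up name for whatever
the highest predefined Rx channel ID is):

	static_assert(CCCI_HIGHEST_RX_CH_ID < CCCI_MAX_CH_ID);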

...

> +enum ccci_ch {
> + /* to MD */
> + CCCI_CONTROL_RX = 0x2000,
> + CCCI_CONTROL_TX = 0x2001,
> + CCCI_SYSTEM_RX = 0x2002,
> + CCCI_SYSTEM_TX = 0x2003,
> + CCCI_UART1_RX = 0x2006, /* META */
> + CCCI_UART1_RX_ACK = 0x2007,
> + CCCI_UART1_TX = 0x2008,
> + CCCI_UART1_TX_ACK = 0x2009,
> + CCCI_UART2_RX = 0x200a, /* AT */
> + CCCI_UART2_RX_ACK = 0x200b,
> + CCCI_UART2_TX = 0x200c,
> + CCCI_UART2_TX_ACK = 0x200d,
> + CCCI_MD_LOG_RX = 0x202a, /* MD logging */
> + CCCI_MD_LOG_TX = 0x202b,
> + CCCI_LB_IT_RX = 0x203e, /* loop back test */
> + CCCI_LB_IT_TX = 0x203f,
> + CCCI_STATUS_RX = 0x2043, /* status polling */
> + CCCI_STATUS_TX = 0x2044,
> + CCCI_MIPC_RX = 0x20ce, /* MIPC */
> + CCCI_MIPC_TX = 0x20cf,
> + CCCI_MBIM_RX = 0x20d0,
> + CCCI_MBIM_TX = 0x20d1,
> + CCCI_DSS0_RX = 0x20d2,
> + CCCI_DSS0_TX = 0x20d3,
> + CCCI_DSS1_RX = 0x20d4,
> + CCCI_DSS1_TX = 0x20d5,
> + CCCI_DSS2_RX = 0x20d6,
> + CCCI_DSS2_TX = 0x20d7,
> + CCCI_DSS3_RX = 0x20d8,
> + CCCI_DSS3_TX = 0x20d9,
> + CCCI_DSS4_RX = 0x20da,
> + CCCI_DSS4_TX = 0x20db,
> + CCCI_DSS5_RX = 0x20dc,
> + CCCI_DSS5_TX = 0x20dd,
> + CCCI_DSS6_RX = 0x20de,
> + CCCI_DSS6_TX = 0x20df,
> + CCCI_DSS7_RX = 0x20e0,
> + CCCI_DSS7_TX = 0x20e1,

> + CCCI_MAX_CH_NUM,

Not sure about the meaning of this, or even whether it's needed. It's obvious
you don't care about the actual value here.

> + CCCI_MONITOR_CH_ID = GENMASK(31, 28), /* for MD init */
> + CCCI_INVALID_CH_ID = GENMASK(15, 0),
> +};

...

> +#define MTK_DEV_NAME "MTK_WWAN_M80"

DEV?

...

> +/* port->minor is configured in-sequence, but when we use it in code
> + * it should be unique among all ports for addressing.
> + */
> +#define TTY_IPC_MINOR_BASE 100
> +#define TTY_PORT_MINOR_BASE 250
> +#define TTY_PORT_MINOR_INVALID -1

Why is it not allocated automatically?

...

> +static struct t7xx_port md_ccci_ports[] = {
> + {0, 0, 0, 0, 0, 0, ID_CLDMA1, 0, &dummy_port_ops, 0xff, "dummy_port",},

Use C99 initializers.

> +};

...

> + nlh = nlmsg_put(nl_skb, 0, 1, NLMSG_DONE, len, 0);
> + if (!nlh) {

> + dev_err(port->dev, "could not release netlink\n");

I'm wondering why you are not using net_err() / netdev_err() / netif_err()
where it's appropriate.

> + nlmsg_free(nl_skb);
> + return -EFAULT;
> + }

...

> + return netlink_broadcast(pprox->netlink_sock, nl_skb,
> + 0, grp, GFP_KERNEL);

One line?

...

> +static int port_netlink_init(void)
> +{
> + port_prox->netlink_sock = netlink_kernel_create(&init_net, PORT_NOTIFY_PROTOCOL, NULL);
> +
> + if (!port_prox->netlink_sock) {
> + dev_err(port_prox->dev, "failed to create netlink socket\n");
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +static void port_netlink_uninit(void)
> +{
> + if (port_prox->netlink_sock) {

if (!->netlink_sock) ?

> + netlink_kernel_release(port_prox->netlink_sock);
> + port_prox->netlink_sock = NULL;
> + }
> +}

...

> +static struct t7xx_port *proxy_get_port(int minor, enum ccci_ch ch)
> +{
> + struct t7xx_port *port;
> + int i;
> +
> + if (!port_prox)
> + return NULL;
> +
> + for_each_proxy_port(i, port, port_prox) {
> + if (minor >= 0 && port->minor == minor)
> + return port;
> +
> + if (ch != CCCI_INVALID_CH_ID && (port->rx_ch == ch || port->tx_ch == ch))
> + return port;
> + }
> +
> + return NULL;
> +}
> +
> +struct t7xx_port *port_proxy_get_port(int major, int minor)
> +{
> + if (port_prox && port_prox->major == major)
> + return proxy_get_port(minor, CCCI_INVALID_CH_ID);
> +
> + return NULL;
> +}

Looking at the second one, I would definitely refactor the first one:


static struct t7xx_port *do_proxy_get_port(int minor, enum ccci_ch ch)
{
	struct t7xx_port *port;
	int i;

	for_each_proxy_port(i, port, port_prox) {
		if (minor >= 0 && port->minor == minor)
			return port;

		if (ch != CCCI_INVALID_CH_ID && (port->rx_ch == ch || port->tx_ch == ch))
			return port;
	}

	return NULL;
}

// If it's even needed at all... Perhaps you may move NULL check to the (single?) caller
static struct t7xx_port *proxy_get_port(int minor, enum ccci_ch ch)
{
	if (port_prox)
		return do_proxy_get_port(minor, ch);

	return NULL;
}

struct t7xx_port *port_proxy_get_port(int major, int minor)
{
	if (port_prox && port_prox->major == major)
		return do_proxy_get_port(minor, CCCI_INVALID_CH_ID);

	return NULL;
}


> +static inline struct t7xx_port *port_get_by_ch(enum ccci_ch ch)
> +{
> + return proxy_get_port(TTY_PORT_MINOR_INVALID, ch);
> +}

...

> + ccci_h = (struct ccci_header *)skb->data;

Do you need casting?

...

> + if (port->flags & PORT_F_USER_HEADER) { /* header provide by user */

Is it proper English in the comment?

> + /* CCCI_MON_CH should fall in here, as header must be
> + * send to md_init.
> + */
> + if (ccci_h->data[0] == CCCI_HEADER_NO_DATA) {
> + if (skb->len > sizeof(struct ccci_header)) {
> + dev_err_ratelimited(port->dev,
> + "recv unexpected data for %s, skb->len=%d\n",
> + port->name, skb->len);
> + skb_trim(skb, sizeof(struct ccci_header));
> + }
> + }
> + } else {
> + /* remove CCCI header */
> + skb_pull(skb, sizeof(struct ccci_header));
> + }

...

> +int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool)
> +{
> + struct ccci_header *ccci_h;
> + unsigned char tx_qno;
> + int ret;
> +
> + ccci_h = (struct ccci_header *)(skb->data);
> + tx_qno = port_get_queue_no(port);
> + port_proxy_set_seq_num(port, (struct ccci_header *)ccci_h);
> + ret = cldma_send_skb(port->path_id, tx_qno, skb, from_pool, true);
> + if (ret) {
> + dev_err(port->dev, "failed to send skb, error: %d\n", ret);
> + } else {
> + /* Record the port seq_num after the data is sent to HIF.
> + * Only bits 0-14 are used, thus negating overflow.
> + */
> + port->seq_nums[MTK_OUT]++;
> + }
> +
> + return ret;

The density of the characters is a bit high. Why not refactor this?

...blank line here...
	ret = cldma_send_skb(port->path_id, tx_qno, skb, from_pool, true);
	if (ret) {
		dev_err(port->dev, "failed to send skb, error: %d\n", ret);
		return ret;
	}

...

> +}

...

> + port_list = &port_prox->rx_ch_ports[channel & CCCI_CH_ID_MASK];
> + list_for_each_entry(port, port_list, entry) {
> + if (queue->hif_id != port->path_id || channel != port->rx_ch)
> + continue;
> +
> + /* Multi-cast is not supported, because one port may be freed
> + * and can modify this request before another port can process it.
> + * However we still can use req->state to achieve some kind of
> + * multi-cast if needed.
> + */
> + if (port->ops->recv_skb) {
> + seq_num = port_check_rx_seq_num(port, ccci_h);
> + ret = port->ops->recv_skb(port, skb);
> + /* If the packet is stored to RX buffer
> + * successfully or drop, the sequence
> + * num will be updated.
> + */
> + if (ret == -ENOBUFS)

Why don't you need to free the SKB here?

> + return ret;
> +
> + port->seq_nums[MTK_IN] = seq_num;
> + }
> +
> + break;
> + }
> +
> +err_exit:
> + if (ret < 0) {
> + struct skb_pools *pools;
> +
> + pools = &queue->md->mtk_dev->pools;
> + ccci_free_skb(pools, skb);
> + return -ENETDOWN;
> + }
> +
> + return 0;

Why not simply split this into

	return 0;

// pay attention to the label naming
err_free_skb:
	ccci_free_skb(&queue->md->mtk_dev->pools, skb);
	return -ENETDOWN;

I suspect that this may be part of something bigger which is coming,
so try to minimize both the weirdness here and the additional shuffling
somewhere else in that case.

...

> + for_each_proxy_port(i, port, port_prox)
> + if (port->ops->md_state_notify)
> + port->ops->md_state_notify(port, state);

Perhaps {} ?

...

> + for_each_proxy_port(i, port, port_prox)
> + if (!strncmp(port->name, port_name, strlen(port->name)))
> + return port;

Ditto.

...

> + switch (state) {
> + case MTK_PORT_STATE_ENABLE:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "enable %s", port->name);

sizeof(msg) is much shorter and more flexible.

> + break;
> +
> + case MTK_PORT_STATE_DISABLE:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "disable %s", port->name);
> + break;
> +
> + default:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "invalid operation");
> + break;
> + }

...

> +struct ctrl_msg_header {
> + u32 ctrl_msg_id;
> + u32 reserved;
> + u32 data_length;
> + u8 data[0];

We don't allow zero-length arrays (nor VLAs); use a C99 flexible array member, i.e. data[], instead.

> +};
> +
> +struct port_msg {
> + u32 head_pattern;
> + u32 info;
> + u32 tail_pattern;
> + u8 data[0]; /* port set info */

Ditto.

> +};

...

> - if (event->event_id == CCCI_EVENT_MD_EX_PASS)
> + if (event->event_id == CCCI_EVENT_MD_EX_PASS) {
> + pass = true;
> fsm_finish_event(ctl, event);
> + }

Make the curly braces go into the previous patch despite checkpatch warnings.

--
With Best Regards,
Andy Shevchenko


2021-11-06 21:35:53

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 02/14] net: wwan: t7xx: Add control DMA interface

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> ...
> +static struct cldma_ctrl *cldma_md_ctrl[CLDMA_NUM];

Place this pointer array in a private _device_ state structure.
Otherwise, with these global pointers, the driver will break as soon
as a second modem is connected to the host.

Also, all corresponding functions should be reworked to accept a modem
state container pointer.

> +static DEFINE_MUTEX(ctl_cfg_mutex); /* protects CLDMA late init config */

Place this mutex in a private device state structure as well to avoid
useless inter-device locking and possible deadlocks.

> +static enum cldma_queue_type rxq_type[CLDMA_RXQ_NUM];
> +static enum cldma_queue_type txq_type[CLDMA_TXQ_NUM];
> +static int rxq_buff_size[CLDMA_RXQ_NUM];
> +static int rxq_buff_num[CLDMA_RXQ_NUM];
> +static int txq_buff_num[CLDMA_TXQ_NUM];

If these arrays could be shared between modem instances (i.e. if this
is some kind of static configuration), then initialize them
statically or in module_init(). Otherwise, if these arrays are
part of the runtime state, then place them in a device state
container.

--
Sergey

2021-11-06 21:35:55

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] net: wwan: t7xx: Add core components

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Registers the t7xx device driver with the kernel. Setup all the core
> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> modem control operations, modem state machine, and build
> infrastructure.
>
> * PCIe layer code implements driver probe and removal.
> * MHCCIF provides interrupt channels to communicate events
> such as handshake, PM and port enumeration.
> * Modem control implements the entry point for modem init,
> reset and exit.
> * The modem status monitor is a state machine used by modem control
> to complete initialization and stop. It is used also to propagate
> exception events reported by other components.

[skipped]

> drivers/net/wwan/t7xx/t7xx_monitor.h | 144 +++++
> ...
> drivers/net/wwan/t7xx/t7xx_state_monitor.c | 598 +++++++++++++++++++++

Out of curiosity, why is this file called t7xx_state_monitor.c, while
the corresponding header file is called simply t7xx_monitor.h? Are any
other monitors planned?

[skipped]

> diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
> ...
> +config MTK_T7XX
> + tristate "MediaTek PCIe 5G WWAN modem T7XX device"

As already suggested by Andy, use "T7xx" (lowercase x) in the title
to make it more readable.

> + depends on PCI
> + help
> + Enables MediaTek PCIe based 5G WWAN modem (T700 series) device.

Maybe use the term "T7xx series" here too? Otherwise, it sounds like a
driver for the smartphone chipset only.

> + Adapts WWAN framework and provides network interface like wwan0
> + and tty interfaces like wwan0at0 (AT protocol), wwan0mbim0
> + (MBIM protocol), etc.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_common.h b/drivers/net/wwan/t7xx/t7xx_common.h
> ...
> +struct ccci_header {
> + /* do not assume data[1] is data length in rx */
> + u32 data[2];

If these fields have different meanings on Tx and Rx, you could define
them using a union. E.g.

	union {
		struct {
			__le32 idx;
			__le32 type;
		} rx;
		struct {
			__le32 idx;
			__le32 len;
		} tx;
	};
	__le32 status;

or even like this:

	__le32 idx;
	union {
		__le32 rx_type;
		__le32 tx_len;
	};
	__le32 status;

Such a definition documents the code better than the comment above the field.

> + u32 status;
> + u32 reserved;
> +};

All these fields should have the type __le32 since the structure is
passed to the modem as is.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
> ...
> +static int __init mtk_pci_init(void)
> +{
> + return pci_register_driver(&mtk_pci_driver);
> +}
> +module_init(mtk_pci_init);
> +
> +static void __exit mtk_pci_cleanup(void)
> +{
> + pci_unregister_driver(&mtk_pci_driver);
> +}
> +module_exit(mtk_pci_cleanup);

Since the module does not do anything specific on (de-)initialization
use the module_pci_driver() macro instead of this boilerplate code.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_skb_util.c b/drivers/net/wwan/t7xx/t7xx_skb_util.c
> ...
> +static struct sk_buff *alloc_skb_from_pool(struct skb_pools *pools, size_t size)
> +{
> + if (size > MTK_SKB_4K)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_64k);
> + else if (size > MTK_SKB_16)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_4k);
> + else if (size > 0)
> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_16);
> +
> + return NULL;
> +}
> +
> +static struct sk_buff *alloc_skb_from_kernel(size_t size, gfp_t gfp_mask)
> +{
> + if (size > MTK_SKB_4K)
> + return __dev_alloc_skb(MTK_SKB_64K, gfp_mask);
> + else if (size > MTK_SKB_1_5K)
> + return __dev_alloc_skb(MTK_SKB_4K, gfp_mask);
> + else if (size > MTK_SKB_16)
> + return __dev_alloc_skb(MTK_SKB_1_5K, gfp_mask);
> + else if (size > 0)
> + return __dev_alloc_skb(MTK_SKB_16, gfp_mask);
> +
> + return NULL;
> +}

I am wondering what performance gains you have achieved with these skb
pools? Can we see any numbers?

I do not think the control path performance is worth the complexity of
the multilayer skb allocation. In the data packet Rx path, you need to
allocate an skb anyway as soon as the driver passes it to the stack. So
what is the gain?

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
> ...
> +static struct ccci_fsm_ctl *ccci_fsm_entry;

Place this pointer at least in the mtk_modem structure. Otherwise,
with this global pointer, the driver will break as soon as a second
modem is connected to the host.

Also, all functions in this file should be reworked to accept a modem
state container pointer, e.g. mtk_modem or some other related
structure.

--
Sergey

2021-11-06 21:35:56

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] net: wwan: t7xx: Add port proxy infrastructure

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Port-proxy provides a common interface to interact with different types
> of ports. Ports export their configuration via `struct t7xx_port` and
> operate as defined by `struct port_ops`.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_port.h b/drivers/net/wwan/t7xx/t7xx_port.h
> ...
> +struct t7xx_port {
> + /* members used for initialization, do not change the order */

As already suggested by Andy, use C99 initializers to initialize the
md_ccci_ports array and drop the above comment about the strict order
requirements.

> + enum ccci_ch tx_ch;
> + enum ccci_ch rx_ch;
> + unsigned char txq_index;
> + unsigned char rxq_index;
> + unsigned char txq_exp_index;
> + unsigned char rxq_exp_index;
> + enum cldma_id path_id;
> + unsigned int flags;
> + struct port_ops *ops;
> + unsigned int minor;
> + char *name;

Why do you need these two fields with the port name and minor number?
The WWAN subsystem will take care of these data for you. That is its
purpose.

> + enum wwan_port_type mtk_port_type;
> +
> + /* members un-initialized in definition */
> + struct wwan_port *mtk_wwan_port;
> + struct mtk_pci_dev *mtk_dev;
> + struct device *dev;
> + short seq_nums[2];
> + struct port_proxy *port_proxy;
> + atomic_t usage_cnt;
> + struct list_head entry;
> + struct list_head queue_entry;
> + unsigned int major;
> + unsigned int minor_base;
> + /* TX and RX flows are asymmetric since ports are multiplexed on
> + * queues.
> + *
> + * TX: data blocks are sent directly to a queue. Each port
> + * does not maintain a TX list; instead, they only provide
> + * a wait_queue_head for blocking writes.
> + *
> + * RX: Each port uses a RX list to hold packets,
> + * allowing the modem to dispatch RX packet as quickly as possible.
> + */
> + struct sk_buff_head rx_skb_list;
> + bool skb_from_pool;
> + spinlock_t port_update_lock; /* protects port configuration */
> + wait_queue_head_t rx_wq;
> + int rx_length_th;
> + port_skb_handler skb_handler;
> + unsigned char chan_enable;
> + unsigned char chn_crt_stat;
> + struct cdev *cdev;
> + struct task_struct *thread;
> + struct mutex tx_mutex_lock; /* protects the seq number operation */
> +};

You should split the t7xx_port structure into at least two parts. The
first part with the static configuration can remain in the structure and
be statically initialized in the md_ccci_ports array. All non-shareable
runtime state fields (e.g. SKB lists, pointers to dynamically
allocated device instance structures) should be moved to a device
state container.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
> ...
> +#define PORT_NETLINK_MSG_MAX_PAYLOAD 32
> +#define PORT_NOTIFY_PROTOCOL NETLINK_USERSOCK

There is a clear statement in the include/uapi/linux/netlink.h file
that NETLINK_USERSOCK is reserved for user mode socket protocols.
Please do not abuse netlink protocol numbers.

If you really need a special Netlink interface to communicate with
userspace, consider creating a new generic netlink family. But it
looks like all the Netlink stuff here is a leftover of a debug
interface that was used at an earlier driver development stage. So I
suggest just removing all Netlink usage here and using dynamic debug
logging instead, or switching to kernel tracing.

> ...
> +static struct port_proxy *port_prox;

This is another pointer that should be placed in a device runtime
state structure to avoid a driver crash with multiple modems.

> ...
> +static struct port_ops dummy_port_ops;

Why do you need this dummy ops structure? You remove it anyway in the
next patch. If you need it as a placeholder for the md_ccci_ports
array below, then it is safe to define an empty md_ccci_ports array
and then just fill it. Please consider removing this structure to
avoid ping-pong changes.

> +static struct t7xx_port md_ccci_ports[] = {
> + {0, 0, 0, 0, 0, 0, ID_CLDMA1, 0, &dummy_port_ops, 0xff, "dummy_port",},

As already suggested by Andy, use C99 initializers here. E.g.

	{
		.tx_ch = 0,
		.rx_ch = 0,
		.txq_index = 0,
		.rxq_index = 0,
		.txq_exp_index = 0,
		.rxq_exp_index = 0,
		.path_id = ID_CLDMA1,
		.ops = &dummy_port_ops,
		.name = "dummy_port",
	}, {
		...
	}, {
		...
	}

> ...
> +/* Sequence numbering to track for lost packets */
> +void port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
> +{
> + if (ccci_h && port) {
> + ccci_h->status &= ~HDR_FLD_SEQ;
> + ccci_h->status |= FIELD_PREP(HDR_FLD_SEQ, port->seq_nums[MTK_OUT]);
> + ccci_h->status &= ~HDR_FLD_AST;
> + ccci_h->status |= FIELD_PREP(HDR_FLD_AST, 1);
> + }

Endianness handling is required here.

> +}
> +
> +static u16 port_check_rx_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
> +{
> + u16 channel, seq_num, assert_bit;
> +
> + channel = FIELD_GET(HDR_FLD_CHN, ccci_h->status);
> + seq_num = FIELD_GET(HDR_FLD_SEQ, ccci_h->status);
> + assert_bit = FIELD_GET(HDR_FLD_AST, ccci_h->status);

Field endianness handling is missing here. Probably you should first
convert the status field data to CPU endianness and only then parse it. E.g.

	u32 status = le32_to_cpu(ccci_h->status);

	channel = FIELD_GET(HDR_FLD_CHN, status);
	seq_num = FIELD_GET(HDR_FLD_SEQ, status);
	assert_bit = FIELD_GET(HDR_FLD_AST, status);

> + if (assert_bit && port->seq_nums[MTK_IN] &&
> + ((seq_num - port->seq_nums[MTK_IN]) & CHECK_RX_SEQ_MASK) != 1) {
> + dev_err(port->dev, "channel %d seq number out-of-order %d->%d (data: %X, %X)\n",
> + channel, seq_num, port->seq_nums[MTK_IN],
> + ccci_h->data[0], ccci_h->data[1]);

dev_err_ratelimited() ?

> + }
> +
> + return seq_num;
> +}
> ...
> +int port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
> +{
> ...
> + if (port->flags & PORT_F_RX_ALLOW_DROP) {
> + dev_err(port->dev, "port %s RX full, drop packet\n", port->name);

Should the ratelimited variant be used here? And why is such a high
message level used?

> + return -ENETDOWN;

Why ENETDOWN on buffer space exhaustion?

> + }
> +
> + return -ENOBUFS;
> +}
> ...
> +int port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb, bool from_pool)
> +{
> + struct ccci_header *ccci_h;
> ...
> + ccci_h = (struct ccci_header *)(skb->data);
> ...
> + port_proxy_set_seq_num(port, (struct ccci_header *)ccci_h);

ccci_h is already of type struct ccci_header *; no casting is required here.

> ...
> +
> +/* inject CCCI message to modem */
> +void port_proxy_send_msg_to_md(int ch, unsigned int msg, unsigned int resv)

This function is not called by any code in this patch. Should the
function be moved to the "net: wwan: t7xx: Add control port" patch
along with the ctrl_msg_header structure definition?

> +{
> + struct ctrl_msg_header *ctrl_msg_h;
> + struct ccci_header *ccci_h;
> + struct t7xx_port *port;
> + struct sk_buff *skb;
> + int ret;
> +
> + port = port_get_by_ch(ch);
> + if (!port)
> + return;
> +
> + skb = ccci_alloc_skb_from_pool(&port->mtk_dev->pools, sizeof(struct ccci_header),
> + GFS_BLOCKING);
> + if (!skb)
> + return;
> +
> + if (ch == CCCI_CONTROL_TX) {
> + ccci_h = (struct ccci_header *)(skb->data);
> + ccci_h->data[0] = CCCI_HEADER_NO_DATA;
> + ccci_h->data[1] = sizeof(struct ctrl_msg_header) + CCCI_H_LEN;
> + ccci_h->status &= ~HDR_FLD_CHN;
> + ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, ch);
> + ccci_h->reserved = 0;
> + ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + CCCI_H_LEN);
> + ctrl_msg_h->data_length = 0;
> + ctrl_msg_h->reserved = resv;
> + ctrl_msg_h->ctrl_msg_id = msg;
> + skb_put(skb, CCCI_H_LEN + sizeof(struct ctrl_msg_header));
> + } else {
> + ccci_h = skb_put(skb, sizeof(struct ccci_header));
> + ccci_h->data[0] = CCCI_HEADER_NO_DATA;
> + ccci_h->data[1] = msg;
> + ccci_h->status &= ~HDR_FLD_CHN;
> + ccci_h->status |= FIELD_PREP(HDR_FLD_CHN, ch);
> + ccci_h->reserved = resv;
> + }

Endianness handling is missing here as well.

> + ret = port_proxy_send_skb(port, skb, port->skb_from_pool);
> + if (ret) {
> + dev_err(port->dev, "port%s send to MD fail\n", port->name);
> + ccci_free_skb(&port->mtk_dev->pools, skb);
> + }
> +}
> ...
> +static int proxy_register_char_dev(void)
> +{
> + dev_t dev = 0;
> + int ret;
> +
> + if (port_prox->major) {
> + dev = MKDEV(port_prox->major, port_prox->minor_base);
> + ret = register_chrdev_region(dev, TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
> + } else {
> + ret = alloc_chrdev_region(&dev, port_prox->minor_base,
> + TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
> + if (ret)
> + dev_err(port_prox->dev, "failed to alloc chrdev region, ret=%d\n", ret);
> +
> + port_prox->major = MAJOR(dev);
> + }

What do you need these character devices for? The WWAN subsystem
already handles all these tasks.

> + return ret;
> +}
> ...
> +static int proxy_alloc(struct mtk_modem *md)
> +{
> + int ret;
> +
> + port_prox = devm_kzalloc(&md->mtk_dev->pdev->dev, sizeof(*port_prox), GFP_KERNEL);
> + if (!port_prox)
> + return -ENOMEM;

This pointer should be placed in the mtk_modem, not in a global variable.

> ...
> +int port_proxy_broadcast_state(struct t7xx_port *port, int state)
> +{
> + char msg[PORT_NETLINK_MSG_MAX_PAYLOAD];
> +
> + if (state >= MTK_PORT_STATE_INVALID)
> + return -EINVAL;
> +
> + switch (state) {
> + case MTK_PORT_STATE_ENABLE:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "enable %s", port->name);
> + break;
> +
> + case MTK_PORT_STATE_DISABLE:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "disable %s", port->name);
> + break;
> +
> + default:
> + snprintf(msg, PORT_NETLINK_MSG_MAX_PAYLOAD, "invalid operation");
> + break;
> + }
> +
> + return port_netlink_send_msg(port, PORT_STATE_BROADCAST_GROUP, msg, strlen(msg) + 1);

Netlink is by nature a binary protocol. You need to emit messages of
different types per port state with a port name attribute inside, or
emit a single message type with two separate attributes: one to carry
the port state and another to carry the port name.

Or, as I suggested above, just drop this Netlink abuse and switch to
dynamic debug logging. Or even better, consider switching to the
kernel tracing API.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
> ...
> +struct ctrl_msg_header {
> + u32 ctrl_msg_id;
> + u32 reserved;
> + u32 data_length;

All three of these fields should be of type __be32, since the
structure is passed to the modem as is.

> + u8 data[0];
> +};
> +
> +struct port_msg {
> + u32 head_pattern;
> + u32 info;
> + u32 tail_pattern;

Same here.

> + u8 data[0]; /* port set info */
> +};

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
> ...
> @@ -202,11 +208,29 @@ static void fsm_routine_exception(struct ccci_fsm_ctl *ctl, struct ccci_fsm_comm
> ...
> spin_unlock_irqrestore(&ctl->event_lock, flags);
> + if (pass) {
> + log_port = port_get_by_name("ttyCMdLog");
> + if (log_port)
> + log_port->ops->enable_chl(log_port);
> + else
> + dev_err(dev, "ttyCMdLog port not found\n");
> +
> + meta_port = port_get_by_name("ttyCMdMeta");
> + if (meta_port)
> + meta_port->ops->enable_chl(meta_port);
> + else
> + dev_err(dev, "ttyCMdMeta port not found\n");

Looks like this change does not belong to this patch. The "ttyCMdLog"
port entry will be created only by the last patch.

--
Sergey

2021-11-06 21:36:03

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 01/14] net: wwan: Add default MTU size

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Add a default MTU size definition that new WWAN drivers can refer to.

[skipped]

> +/*
> + * Default WWAN interface MTU value
> + */
> +#define WWAN_DEFAULT_MTU 1500

Why do you need another macro for the 1500-byte default MTU?
Consider using the ETH_DATA_LEN macro.

--
Sergey

2021-11-06 21:36:04

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] net: wwan: t7xx: Add WWAN network interface

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Creates the Cross Core Modem Network Interface (CCMNI) which implements
> the wwan_ops for registration with the WWAN framework, CCMNI also
> implements the net_device_ops functions used by the network device.
> Network device operations include open, close, start transmission, TX
> timeout, change MTU, and select queue.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c
> ...
> +static void ccmni_make_etherframe(struct net_device *dev, void *skb_eth_hdr,
> + u8 *mac_addr, unsigned int packet_type)
> +{
> + struct ethhdr *eth_hdr;
> +
> + eth_hdr = skb_eth_hdr;
> + memcpy(eth_hdr->h_dest, mac_addr, sizeof(eth_hdr->h_dest));
> + memset(eth_hdr->h_source, 0, sizeof(eth_hdr->h_source));
> +
> + if (packet_type == IPV6_VERSION)
> + eth_hdr->h_proto = cpu_to_be16(ETH_P_IPV6);
> + else
> + eth_hdr->h_proto = cpu_to_be16(ETH_P_IP);
> +}

If the modem is a pure IP device, you do not need to forge an Ethernet
header. Moreover, this does not make any sense and only wastes CPU
time. Just set netdev->type to ARPHRD_NONE and send the pure
IPv4/IPv6 packet up to the stack.

> +static enum txq_type get_txq_type(struct sk_buff *skb)
> +{
> + u32 total_len, payload_len, l4_off;
> + bool tcp_syn_fin_rst, is_tcp;
> + struct ipv6hdr *ip6h;
> + struct tcphdr *tcph;
> + struct iphdr *ip4h;
> + u32 packet_type;
> + __be16 frag_off;
> +
> + packet_type = skb->data[0] & SBD_PACKET_TYPE_MASK;
> + if (packet_type == IPV6_VERSION) {
> + ip6h = (struct ipv6hdr *)skb->data;
> + total_len = sizeof(struct ipv6hdr) + ntohs(ip6h->payload_len);
> + l4_off = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &ip6h->nexthdr, &frag_off);
> + tcph = (struct tcphdr *)(skb->data + l4_off);
> + is_tcp = ip6h->nexthdr == IPPROTO_TCP;
> + payload_len = total_len - l4_off - (tcph->doff << 2);
> + } else if (packet_type == IPV4_VERSION) {
> + ip4h = (struct iphdr *)skb->data;
> + tcph = (struct tcphdr *)(skb->data + (ip4h->ihl << 2));
> + is_tcp = ip4h->protocol == IPPROTO_TCP;
> + payload_len = ntohs(ip4h->tot_len) - (ip4h->ihl << 2) - (tcph->doff << 2);
> + } else {
> + return TXQ_NORMAL;
> + }
> +
> + tcp_syn_fin_rst = tcph->syn || tcph->fin || tcph->rst;
> + if (is_tcp && !payload_len && !tcp_syn_fin_rst)
> + return TXQ_FAST;
> +
> + return TXQ_NORMAL;
> +}

I am wondering how much the modem performance has improved with this
optimization, compared to the performance loss on each packet due to
the cache miss. Do you have any measurement results?

> +static u16 ccmni_select_queue(struct net_device *dev, struct sk_buff *skb,
> + struct net_device *sb_dev)
> +{
> + struct ccmni_instance *ccmni;
> +
> + ccmni = netdev_priv(dev);
> +
> + if (ccmni->ctlb->capability & NIC_CAP_DATA_ACK_DVD)
> + return get_txq_type(skb);
> +
> + return TXQ_NORMAL;
> +}
> +
> +static int ccmni_open(struct net_device *dev)
> +{
> + struct ccmni_instance *ccmni;
> +
> + ccmni = wwan_netdev_drvpriv(dev);

Move this assignment to the variable definition.

> + netif_carrier_on(dev);
> + netif_tx_start_all_queues(dev);
> + atomic_inc(&ccmni->usage);
> + return 0;
> +}
> +
> +static int ccmni_close(struct net_device *dev)
> +{
> + struct ccmni_instance *ccmni;
> +
> + ccmni = wwan_netdev_drvpriv(dev);

Same here.

> + if (atomic_dec_return(&ccmni->usage) < 0)
> + return -EINVAL;
> +
> + netif_carrier_off(dev);
> + netif_tx_disable(dev);
> + return 0;
> +}
> +
> +static int ccmni_send_packet(struct ccmni_instance *ccmni, struct sk_buff *skb, enum txq_type txqt)
> +{
> + struct ccmni_ctl_block *ctlb;
> + struct ccci_header *ccci_h;
> + unsigned int ccmni_idx;
> +
> + skb_push(skb, sizeof(struct ccci_header));
> + ccci_h = (struct ccci_header *)skb->data;
> + ccci_h->status &= ~HDR_FLD_CHN;

Please do not push control data into the skb data area. You will remove
it anyway during enqueuing to the HW. This approach causes a
performance penalty. Also, this looks like an abuse of the ccci_header
structure.

Use a dedicated structure and the skb control buffer (e.g. skb->cb) to
preserve control data while the packet stays in an intermediate queue.
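
A sketch of that approach (all names here are made up):

	struct ccmni_tx_cb {
		u32 ccmni_idx;
		u32 pkt_len;
	};

	#define CCMNI_TX_CB(skb)	((struct ccmni_tx_cb *)(skb)->cb)

	/* in ccmni_send_packet(), instead of skb_push()ing a ccci_header */
	BUILD_BUG_ON(sizeof(struct ccmni_tx_cb) > sizeof(skb->cb));
	CCMNI_TX_CB(skb)->ccmni_idx = ccmni->index;
	CCMNI_TX_CB(skb)->pkt_len = skb->len;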

> + ccmni_idx = ccmni->index;
> + ccci_h->data[0] = ccmni_idx;
> + ccci_h->data[1] = skb->len;
> + ccci_h->reserved = 0;
> +
> + ctlb = ccmni->ctlb;
> + if (dpmaif_tx_send_skb(ctlb->hif_ctrl, txqt, skb)) {
> + skb_pull(skb, sizeof(struct ccci_header));
> + /* we will reserve header again in the next retry */
> + return NETDEV_TX_BUSY;
> + }
> +
> + return 0;
> +}
> +
> +static int ccmni_start_xmit(struct sk_buff *skb, struct net_device *dev)
> +{
> + struct ccmni_instance *ccmni;
> + struct ccmni_ctl_block *ctlb;
> + enum txq_type txqt;
> + int skb_len;
> +
> + ccmni = wwan_netdev_drvpriv(dev);

Move assignment to the variable definition.

> + ctlb = ccmni->ctlb;
> + txqt = TXQ_NORMAL;
> + skb_len = skb->len;
> +
> + /* If MTU changed or there is no headroom, drop the packet */
> + if (skb->len > dev->mtu || skb_headroom(skb) < sizeof(struct ccci_header)) {
> + dev_kfree_skb(skb);
> + dev->stats.tx_dropped++;
> + return NETDEV_TX_OK;
> + }
> +
> + if (ctlb->capability & NIC_CAP_DATA_ACK_DVD)
> + txqt = get_txq_type(skb);
> +
> + if (ccmni_send_packet(ccmni, skb, txqt)) {
> + if (!(ctlb->capability & NIC_CAP_TXBUSY_STOP)) {
> + if ((ccmni->tx_busy_cnt[txqt]++) % 100 == 0)
> + netdev_notice(dev, "[TX]CCMNI:%d busy:pkt=%ld(ack=%d) cnt=%ld\n",
> + ccmni->index, dev->stats.tx_packets,
> + txqt, ccmni->tx_busy_cnt[txqt]);

What is the purpose of this message?

> + } else {
> + ccmni->tx_busy_cnt[txqt]++;
> + }
> +
> + return NETDEV_TX_BUSY;
> + }
> +
> + dev->stats.tx_packets++;
> + dev->stats.tx_bytes += skb_len;
> + if (ccmni->tx_busy_cnt[txqt] > 10) {
> + netdev_notice(dev, "[TX]CCMNI:%d TX busy:tx_pkt=%ld(ack=%d) retries=%ld\n",
> + ccmni->index, dev->stats.tx_packets,
> + txqt, ccmni->tx_busy_cnt[txqt]);
> + }
> + ccmni->tx_busy_cnt[txqt] = 0;
> +
> + return NETDEV_TX_OK;
> +}
> +
> +static int ccmni_change_mtu(struct net_device *dev, int new_mtu)
> +{
> + if (new_mtu > CCMNI_MTU_MAX)
> + return -EINVAL;
> +
> + dev->mtu = new_mtu;
> + return 0;
> +}

You do not need this function at all. You already specify the max_mtu
value in the ccmni_wwan_setup(), so the network core code will be
happy to check a user requested MTU against max_mtu for you.

> ...
> +static void ccmni_pre_stop(struct ccmni_ctl_block *ctlb)
> +{
> ...
> +}
> +
> +static void ccmni_pos_stop(struct ccmni_ctl_block *ctlb)

Please consider renaming this function to ccmni_post_stop(). It is
quite hard to figure out what position should be stopped on first code
reading.

> ...
> +static void ccmni_wwan_setup(struct net_device *dev)
> +{
> + dev->header_ops = NULL;
> + dev->hard_header_len += sizeof(struct ccci_header);
> +
> + dev->mtu = WWAN_DEFAULT_MTU;
> + dev->max_mtu = CCMNI_MTU_MAX;
> + dev->tx_queue_len = CCMNI_TX_QUEUE;
> + dev->watchdog_timeo = CCMNI_NETDEV_WDT_TO;
> + /* ccmni is a pure IP device */
> + dev->flags = (IFF_POINTOPOINT | IFF_NOARP)
> + & ~(IFF_BROADCAST | IFF_MULTICAST);

You do not need to reset flags on the initial assignment. Just

dev->flags = IFF_POINTOPOINT | IFF_NOARP;

would be enough.

> + /* not supporting VLAN */
> + dev->features = NETIF_F_VLAN_CHALLENGED;
> +
> + dev->features |= NETIF_F_SG;
> + dev->hw_features |= NETIF_F_SG;
> +
> + /* uplink checksum offload */
> + dev->features |= NETIF_F_HW_CSUM;
> + dev->hw_features |= NETIF_F_HW_CSUM;
> +
> + /* downlink checksum offload */
> + dev->features |= NETIF_F_RXCSUM;
> + dev->hw_features |= NETIF_F_RXCSUM;
> +
> + dev->addr_len = ETH_ALEN;

You do not need to configure HW address length as the modem is a pure
IP device. Just drop the above line or explicitly set address length
to zero.

> + /* use kernel default free_netdev() function */
> + dev->needs_free_netdev = true;
> +
> + /* no need to free again because of free_netdev() */
> + dev->priv_destructor = NULL;
> + dev->type = ARPHRD_PPP;

Use ARPHRD_NONE here since the modem is a pure IP device. Or you could
use ARPHRD_RAWIP depending on how you would like to allocate the link
IPv6 address. If in doubt then ARPHRD_NONE is a good starting point.

> + dev->netdev_ops = &ccmni_netdev_ops;
> + eth_random_addr(dev->dev_addr);

You do not need this random address generation.

> +}
> ...
> +static void ccmni_recv_skb(struct mtk_pci_dev *mtk_dev, int netif_id, struct sk_buff *skb)
> +{
> ...
> + pkt_type = skb->data[0] & SBD_PACKET_TYPE_MASK;
> + ccmni_make_etherframe(dev, skb->data - ETH_HLEN, dev->dev_addr, pkt_type);

As I wrote above, you do not need to forge an Ethernet header for pure
IP devices.

> + skb_set_mac_header(skb, -ETH_HLEN);
> + skb_reset_network_header(skb);
> + skb->dev = dev;
> + if (pkt_type == IPV6_VERSION)
> + skb->protocol = htons(ETH_P_IPV6);
> + else
> + skb->protocol = htons(ETH_P_IP);
> +
> + skb_len = skb->len;
> +
> + netif_rx_any_context(skb);

Did you consider using NAPI for the packet Rx path? This should
improve Rx performance.

> + dev->stats.rx_packets++;
> + dev->stats.rx_bytes += skb_len;
> +}

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.h b/drivers/net/wwan/t7xx/t7xx_netdev.h
> ...
> +#define CCMNI_TX_QUEUE 1000

Is this a really carefully selected queue depth limit, or just an
arbitrary value? If the latter, then feel free to use the
DEFAULT_TX_QUEUE_LEN macro.

> ..
> +#define IPV4_VERSION 0x40
> +#define IPV6_VERSION 0x60

Just curious why the _VERSION suffix? Why not, for example, PKT_TYPE_ prefix?

--
Sergey

2021-11-06 21:36:09

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 05/14] net: wwan: t7xx: Add control port

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Control Port implements driver control messages such as modem-host
> handshaking, controls port enumeration, and handles exception messages.
>
> The handshaking process between the driver and the modem happens during
> the init sequence. The process involves the exchange of a list of
> supported runtime features to make sure that modem and host are ready
> to provide proper feature lists including port enumeration. Further
> features can be enabled and controlled in this handshaking process.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
> ...
> +struct feature_query {
> + u32 head_pattern;

Looks like this field should be __le32 since it is sent to modem as is.

> + u8 feature_set[FEATURE_COUNT];
> + u32 tail_pattern;

Ditto.

> +};
> +
> +static void prepare_host_rt_data_query(struct core_sys_info *core)
> +{
> ...
> + ft_query->head_pattern = MD_FEATURE_QUERY_ID;

This should be

	ft_query->head_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);

to run on big-endian CPUs. sparse will notify you about each
endianness mismatch as soon as you change the head_pattern field type to
__le32.

> + memcpy(ft_query->feature_set, core->feature_set, FEATURE_COUNT);
> + ft_query->tail_pattern = MD_FEATURE_QUERY_ID;

Ditto.

> + /* send HS1 message to device */
> + port_proxy_send_skb(core->ctl_port, skb, 0);
> +}

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
> ...
> +static void fsm_ee_message_handler(struct sk_buff *skb)
> +{
> ...
> + ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
> ...
> + switch (ctrl_msg_h->ctrl_msg_id) {

This should be:

switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {

> + case CTL_ID_MD_EX:
> + if (ctrl_msg_h->reserved != MD_EX_CHK_ID) {

Why is this field called 'reserved', but used to perform message validation?

> ...
> +static void control_msg_handler(struct t7xx_port *port, struct sk_buff *skb)
> +{
> + struct ctrl_msg_header *ctrl_msg_h;
> ...
> + ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
> ..
> + switch (ctrl_msg_h->ctrl_msg_id) {

This should be something like this:

switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {

--
Sergey

2021-11-06 21:36:12

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 13/14] net: wwan: t7xx: Add debug and test ports

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Creates char and tty port infrastructure for debug and testing.
> Those ports support use cases such as:
> * Modem log collection
> * Memory dump
> * Loop-back test
> * Factory tests
> * Device Service Streams

A-ha. The purpose of the chardev stuff in the previous patches becomes much clearer.

Please do not do it this way. This is not an everyday usage
operation to be supported via a chardev. Also, this will require an
end user to study yet another bunch of custom vendor tools for quite
common development tasks.

The kernel already has a rich set of frameworks for device debugging.
If you need to update firmware, use the devlink firmware updating
facility; see e.g. the Intel iosm driver for reference. For memory dumping
you could utilize the devlink or device coredump facilities (see the ath10k
driver for reference).

For other debugging tasks I recommend using debugfs. The WWAN
subsystem could be extended to provide drivers with a common debugfs
infrastructure (e.g. create a common directory for WWAN devices, etc.).

As for modem logs, you could pipe them to the common kernel log via
the dynamic debug logging or export them via the debugfs as well.

Loic, Johannes, do you have some other advice on how to facilitate the
modem debugging/development tasks?

--
Sergey

2021-11-06 21:36:13

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 08/14] net: wwan: t7xx: Add data path interface

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
> for initialization, ISR, control and event handling of TX/RX flows.
>
> DPMAIF TX
> Exposes the `dmpaif_tx_send_skb` function which can be used by the
> network device to transmit packets.
> The uplink data management uses a Descriptor Ring Buffer (DRB).
> First DRB entry is a message type that will be followed by 1 or more
> normal DRB entries. Message type DRB will hold the skb information
> and each normal DRB entry holds a pointer to the skb payload.
>
> DPMAIF RX
> The downlink buffer management uses Buffer Address Table (BAT) and
> Packet Information Table (PIT) rings.
> The BAT ring holds the address of skb data buffer for the HW to use,
> while the PIT contains metadata about a whole network packet including
> a reference to the BAT entry holding the data buffer address.
> The driver reads the PIT and BAT entries written by the modem, when
> reaching a threshold, the driver will reload the PIT and BAT rings.

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
> ...
> +static int dpmaif_net_rx_push_thread(void *arg)
> +{
> ...
> + while (!kthread_should_stop()) {
> + if (skb_queue_empty(&q->skb_queue.skb_list)) {
> + if (wait_event_interruptible(q->rx_wq,
> + !skb_queue_empty(&q->skb_queue.skb_list) ||
> + kthread_should_stop()))
> + continue;
> + }
> +
> + if (kthread_should_stop())
> + break;

Looks like the above check is used to recheck thread state after the
wait_event_interruptible() call, so the check could be moved to the
skb_queue_empty() code block to avoid odd thread state checks.
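I.e. something like this (sketch):

while (!kthread_should_stop()) {
	if (skb_queue_empty(&q->skb_queue.skb_list)) {
		if (wait_event_interruptible(q->rx_wq,
					     !skb_queue_empty(&q->skb_queue.skb_list) ||
					     kthread_should_stop()))
			continue;

		if (kthread_should_stop())
			break;
	}

	/* process queued skbs */
	...
}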

> ...
> +static void dpmaif_rx_skb(struct dpmaif_rx_queue *rxq, struct dpmaif_cur_rx_skb_info *rx_skb_info)
> +{
> + struct sk_buff *new_skb;
> + u32 *lhif_header;
> +
> + new_skb = rx_skb_info->cur_skb;
> ...
> + /* MD put the ccmni_index to the msg pkt,
> + * so we need push it alone. Maybe not needed.
> + */
> + lhif_header = skb_push(new_skb, sizeof(*lhif_header));
> + *lhif_header &= ~LHIF_HEADER_NETIF;
> + *lhif_header |= FIELD_PREP(LHIF_HEADER_NETIF, rx_skb_info->cur_chn_idx);

Why is the skb data field used to carry packet control data? Consider
using the skb control buffer (i.e. skb->cb) to carry control data
between the driver layers to make metadata handling less expensive and
increase driver performance.

> + /* add data to rx thread skb list */
> + ccci_skb_enqueue(&rxq->skb_queue, new_skb);
> +}
> ...
> +void dpmaif_rxq_free(struct dpmaif_rx_queue *queue)
> +{
> ...
> + while ((skb = skb_dequeue(&queue->skb_queue.skb_list)))
> + kfree_skb(skb);

skb_queue_purge()
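i.e. the whole loop can be collapsed into a single call (assuming
skb_list is a regular struct sk_buff_head):

	skb_queue_purge(&queue->skb_queue.skb_list);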

> ...
> +static int dpmaif_skb_queue_init_struct(struct dpmaif_ctrl *dpmaif_ctrl,
> + const unsigned int index)
> +{
> ...
> + INIT_LIST_HEAD(&queue->skb_list.head);
> + spin_lock_init(&queue->skb_list.lock);
> + queue->skb_list.qlen = 0;

skb_queue_head_init()
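i.e. one call instead of open-coding the initialization (assuming
skb_list is a struct sk_buff_head):

	skb_queue_head_init(&queue->skb_list);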

[skipped]

> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
> ...
> +/* lhif header fields */
> +#define LHIF_HEADER_NW_TYPE GENMASK(31, 29)
> +#define LHIF_HEADER_NETIF GENMASK(28, 24)
> +#define LHIF_HEADER_F GENMASK(22, 20)
> +#define LHIF_HEADER_FLOW GENMASK(19, 16)

Just place the control data in the skb control buffer (i.e. skb->cb)
and define it as a structure:

struct rx_pkt_cb {
	u8 nw_type;
	u8 netif;
	u8 flow;
};
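And then use it roughly like this (sketch; just make sure the
structure fits into the 48-byte skb->cb area):

	struct rx_pkt_cb *cb = (struct rx_pkt_cb *)new_skb->cb;

	BUILD_BUG_ON(sizeof(struct rx_pkt_cb) > sizeof_field(struct sk_buff, cb));
	cb->netif = rx_skb_info->cur_chn_idx;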

> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
> ...
> +static int dpmaif_tx_send_skb_on_tx_thread(struct dpmaif_ctrl *dpmaif_ctrl,
> + struct dpmaif_tx_event *event)
> +{
> ...
> + struct ccci_header ccci_h;
> ...
> + skb = event->skb;
> ...
> + ccci_h = *(struct ccci_header *)skb->data;
> + skb_pull(skb, sizeof(struct ccci_header));

Place this metadata to the skb control buffer (i.e. skb->cb) to avoid
odd skb_push()/skb_pull() calls.

Also, this looks like an abuse of the ccci_header structure. In fact,
it is never passed to the modem along with a data packet, but
searching through the code shows this as a place where the structure
is used.

> ...
> +int dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, enum txq_type txqt, struct sk_buff *skb)
> +{
> ...
> + if (txq->tx_submit_skb_cnt < txq->tx_list_max_len && tx_drb_available) {
> + struct dpmaif_tx_event *event;
> ...
> + event = kmalloc(sizeof(*event), GFP_ATOMIC);
> ...
> + INIT_LIST_HEAD(&event->entry);
> + event->qno = txqt;
> + event->skb = skb;
> + event->drb_cnt = send_drb_cnt;

Please place the packet metadata (the dpmaif_tx_event data) in the skb
control buffer (i.e. skb->cb) and use the skb queue API as in the Rx
path. This will allow you to avoid the per-packet metadata memory
allocation and make the code simpler.
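A rough sketch of how the Tx enqueue could then look (the structure
and queue names here are only illustrative, not existing driver code):

struct tx_skb_cb {
	u8 qno;
	u8 drb_cnt;
};

	struct tx_skb_cb *cb = (struct tx_skb_cb *)skb->cb;

	cb->qno = txqt;
	cb->drb_cnt = send_drb_cnt;
	skb_queue_tail(&txq->tx_skb_queue, skb); /* no per-packet kmalloc() */
	wake_up(&dpmaif_ctrl->tx_wq);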

> + spin_lock_irqsave(&txq->tx_event_lock, flags);
> + list_add_tail(&event->entry, &txq->tx_event_queue);
> + txq->tx_submit_skb_cnt++;
> + spin_unlock_irqrestore(&txq->tx_event_lock, flags);
> + wake_up(&dpmaif_ctrl->tx_wq);
> +
> + return 0;
> + }
> +
> + cb = dpmaif_ctrl->callbacks;
> + cb->state_notify(dpmaif_ctrl->mtk_dev, DMPAIF_TXQ_STATE_FULL, txqt);
> +
> + return -EBUSY;

It is better to invert the above condition, handling the TXQ-full
situation as a corner case and packet queuing as the normal case. I.e.
instead of:

if (have_queue_space) {
	/* Enqueue packet */
	return 0;
}
/* Emit queue-full notification */
return -EBUSY;

handle queuing like this:

if (unlikely(!have_queue_space)) {
	/* Emit queue-full notification */
	return -EBUSY;
}
/* Enqueue packet */
return 0;

This is a matter of taste, but it makes the code more readable.

--
Sergey

2021-11-06 21:36:24

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 00/14] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

Hello Ricardo,

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
<[email protected]> wrote:
> t7xx is the PCIe host device driver for Intel 5G 5000 M.2 solution which
> is based on MediaTek's T700 modem to provide WWAN connectivity.
> The driver uses the WWAN framework infrastructure to create the following
> control ports and network interfaces:
> * /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
> Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
> with [3][4] can use it to enable data communication towards WWAN.
> * /dev/wwan0at0 - Interface that supports AT commands.
> * wwan0 - Primary network interface for IP traffic.
>
> The main blocks in t7xx driver are:
> * PCIe layer - Implements probe, removal, and power management callbacks.
> * Port-proxy - Provides a common interface to interact with different types
> of ports such as WWAN ports.
> * Modem control & status monitor - Implements the entry point for modem
> initialization, reset and exit, as well as exception handling.
> * CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
> control messages to the modem using MediaTek's CCCI (Cross-Core
> Communication Interface) protocol.
> * DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
> uplink and downlink queues for the data path. The data exchange takes
> place using circular buffers to share data buffer addresses and metadata
> to describe the packets.
> * MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
> for bidirectional event notification such as handshake, exception, PM and
> port enumeration.
>
> The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
> option which depends on CONFIG_WWAN.
> This driver was originally developed by MediaTek. Intel adapted t7xx to
> the WWAN framework, optimized and refactored the driver source in close
> collaboration with MediaTek. This will enable getting the t7xx driver on
> Approved Vendor List for interested OEM's and ODM's productization plans
> with Intel 5G 5000 M.2 solution.

Nice work! The driver generally looks good to me. But at the same
time the driver looks a bit raw, needs some style and functionality
improvements, and a lot of cleanup. Please find general thoughts below
and per-patch comments.

One nitpick that is common to the entire series. Please consider
using a common prefix for all driver function names (e.g. t7xx_) to
make them more specific. This should improve the code readability.
Thus, any reader will know for sure that the called functions belong
to the driver, and not to a generic kernel API. E.g. use the
t7xx_cldma_hw_init() name for the CLDMA initialization function
instead of the too generic cldma_hw_init() name, etc.

Interestingly, you are using the common 't7xx_' prefix for all driver
file names. This is OK, but it does not add much specificity, since
all driver files are already located in a common, specifically named
directory. The function names, at the same time, lack a common prefix.

Another common drawback is that the driver will likely break as soon as
two modems are connected simultaneously. This is due to the use of
multiple _global_ variables that keep pointers to a modem's runtime
state. Out of curiosity, did you test the driver with two or more
modems connected simultaneously?

Next, the driver entirely lacks multibyte field endianness handling.
It looks like it will be unable to run on a big-endian CPU. To fix
this, you need to find all the structures that are passed to the modem
and replace the multibyte fields of types u16/u32 with __le16/__le32.
Then examine all the field accesses and use
cpu_to_le{16,32}()/le{16,32}_to_cpu() to update/read the field
contents. As soon as you change the types to __le16/__le32, sparse (a
static analysis utility) will warn you about every unsafe field
access. Just build your kernel with make C=1.
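For example (a sketch with made-up field names, not the real
descriptor layout):

/* Modem-facing descriptor: fixed little-endian layout on the wire */
struct t7xx_example_gpd {
	__le32 data_buff_ptr;
	__le16 data_buff_len;
	__le16 flags;
} __packed;

static void example_gpd_set_len(struct t7xx_example_gpd *gpd, u16 len)
{
	gpd->data_buff_len = cpu_to_le16(len);	/* sparse-clean accessor */
}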

Ricardo, please consider submitting, in the next iteration, a patch
series with the driver cleaned of debug stuff and questionable
optimizations. Just the bare minimum functionality: AT/MBIM control
ports and a network interface for data communications. This will cut
the code in half, which will greatly facilitate the review process.
The driver functionality can then be extended with follow-up patches.

--
Sergey

2021-11-09 12:54:01

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 00/14] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem



On 11/6/2021 11:10 AM, Sergey Ryazanov wrote:
> Hello Ricardo,
>
> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
> <[email protected]> wrote:
...
> Nice work! The driver generally looks good to me. But at the same
> time the driver looks a bit raw, needs some style and functionality
> improvements, and a lot of cleanup. Please find general thoughts below
> and per-patch comments.
Thanks for the feedback, Sergey; we will work on it.

>
> One nitpick that is common to the entire series. Please consider
> using a common prefix for all driver function names (e.g. t7xx_) to
> make them more specific. This should improve the code readability.
> Thus, any reader will know for sure that the called functions belong
> to the driver, and not to a generic kernel API. E.g. use the
> t7xx_cldma_hw_init() name for the CLDMA initialization function
> instead of the too generic cldma_hw_init() name, etc.
Does this apply to static functions as well?

>
> Interestingly, you are using the common 't7xx_' prefix for all driver
> file names. This is OK, but it does not add much specificity, since
> all driver files are already located in a common, specifically named
> directory. The function names, at the same time, lack a common prefix.
>
> Another common drawback is that the driver will likely break as soon as
> two modems are connected simultaneously. This is due to the use of
> multiple _global_ variables that keep pointers to a modem's runtime
> state. Out of curiosity, did you test the driver with two or more
> modems connected simultaneously?
We haven't tested such configurations; we are focusing on platforms with a single modem.

>
> Next, the driver entirely lacks multibyte field endianness handling.
> It looks like it will be unable to run on a big-endian CPU. To fix
> this, you need to find all the structures that are passed to the modem
> and replace the multibyte fields of types u16/u32 with __le16/__le32.
> Then examine all the field accesses and use
> cpu_to_le{16,32}()/le{16,32}_to_cpu() to update/read the field
> contents. As soon as you change the types to __le16/__le32, sparse (a
> static analysis utility) will warn you about every unsafe field
> access. Just build your kernel with make C=1.
>
> Ricardo, please consider submitting, in the next iteration, a patch
> series with the driver cleaned of debug stuff and questionable
> optimizations. Just the bare minimum functionality: AT/MBIM control
> ports and a network interface for data communications. This will cut
> the code in half, which will greatly facilitate the review process.
> The driver functionality can then be extended with follow-up patches.
>
> --
> Sergey
>

2021-11-09 21:31:29

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 00/14] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

On Tue, Nov 9, 2021 at 8:26 AM Martinez, Ricardo wrote:
> On 11/6/2021 11:10 AM, Sergey Ryazanov wrote:
>> One nitpick that is common to the entire series. Please consider
>> using a common prefix for all driver function names (e.g. t7xx_) to
>> make them more specific. This should improve the code readability.
>> Thus, any reader will know for sure that the called functions belong
>> to the driver, and not to a generic kernel API. E.g. use the
>> t7xx_cldma_hw_init() name for the CLDMA initialization function
>> instead of the too generic cldma_hw_init() name, etc.
>
> Does this apply to static functions as well?

As I wrote, this is a nitpick. As you can see in
Documentation/process/coding-style.rst, there are no general rules for
function naming. My personal rule of thumb is that if a function
performs a very general operation (like averaging, interpolation,
etc.), then a prefix can be omitted. If a function's operation is
specific to a module, then add a common module prefix to the function
name. But again, this is my personal rule.

As for the driver, it was quite difficult to read the code that calls
functions such as cldma_alloc() and cldma_init(). It was hard to
figure out whether these functions are a new kernel API or specific to
the driver. A common way to resolve such ambiguity is to prefix the
driver function names. But again, this was just an attempt to draw
your attention to the function naming. Feel free to name functions as
you would like; just make the code readable for developers who are not
familiar with the specific HW chip.

>> Another common drawback is that the driver will likely break as soon as
>> two modems are connected simultaneously. This is due to the use of
>> multiple _global_ variables that keep pointers to a modem's runtime
>> state. Out of curiosity, did you test the driver with two or more
>> modems connected simultaneously?
>
> We haven't tested such configurations; we are focusing on platforms with a single modem.

Now you are aware of the potential kernel crash due to global
variable misuse. Please fix it.

--
Sergey

2021-11-09 22:07:56

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports

On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
> ...
> static struct t7xx_port md_ccci_ports[] = {
> + {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
> + 0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
> + {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
> + PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
> ...
> + if (count + CCCI_H_ELEN > txq_mtu &&
> + (port_ccci->tx_ch == CCCI_MBIM_TX ||
> + (port_ccci->tx_ch >= CCCI_DSS0_TX && port_ccci->tx_ch <= CCCI_DSS7_TX)))
> + multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);

I am just wondering, the chip does support MBIM message fragmentation,
but does not support AT commands stream (CCCI_UART2_TX) fragmentation.
Is that the correct conclusion from the code above?

BTW, you could factor out data fragmentation support to a dedicated
function to improve code readability. Something like this:

static inline bool port_is_multipacket_capable(... *port)
{
return port->tx_ch == CCCI_MBIM_TX ||
(port->tx_ch >= CCCI_DSS0_TX && port->tx_ch <= CCCI_DSS7_TX);
}

So condition become something like that:

if (count + CCCI_H_ELEN > txq_mtu &&
port_is_multipacket_capable(port))
multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);

--
Sergey

2022-01-12 22:25:20

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports

Hi Sergey,

On 12/6/2021 6:41 PM, Martinez, Ricardo wrote:
>
> On 12/1/2021 12:45 PM, Sergey Ryazanov wrote:
>> Hello Ricardo,
>>
>> On Wed, Dec 1, 2021 at 9:14 AM Martinez, Ricardo
>> <[email protected]> wrote:
>>> On 11/9/2021 4:06 AM, Sergey Ryazanov wrote:
>>>> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>>>>> ...
>>>>>    static struct t7xx_port md_ccci_ports[] = {
>>>>> +       {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q,
>>>>> DATA_AT_CMD_Q, 0xff,
>>>>> +        0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops,
>>>>> 0, "ttyC0", WWAN_PORT_AT},
>>>>> +       {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
>>>>> +        PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0",
>>>>> WWAN_PORT_MBIM},
>>>>> ...
>>>>> +               if (count + CCCI_H_ELEN > txq_mtu &&
>>>>> +                   (port_ccci->tx_ch == CCCI_MBIM_TX ||
>>>>> +                    (port_ccci->tx_ch >= CCCI_DSS0_TX &&
>>>>> port_ccci->tx_ch <= CCCI_DSS7_TX)))
>>>>> +                       multi_packet = DIV_ROUND_UP(count, txq_mtu
>>>>> - CCCI_H_ELEN);
>>>> I am just wondering, the chip does support MBIM message fragmentation,
>>>> but does not support AT commands stream (CCCI_UART2_TX) fragmentation.
>>>> Is that the correct conclusion from the code above?
>>> Yes, that is correct.
>> Are you sure that the modem does not support AT command fragmentation?
>> The AT commands interface is a stream of chars by its nature. It is
>> designed to work over serial lines. Some modem configuration software
>> even writes commands to a port in a char-by-char manner, i.e. it
>> writes no more than one char at a time to the port.
>>
>> The mechanism that is implemented in the driver to split user input
>> into individual messages is not a true fragmentation mechanism since
>> it does not preserve the original user input length. It just cuts the
>> user input into individual messages and sends them to the modem
>> independently. So, the modem firmware has no way to distinguish
>> whether the user input has been "fragmented" by the user or the
>> driver. How, then, does the modem firmware deal with an AT command
>> "fragmented" by a user? Will the modem firmware ignore the AT command
>> that is received in the char-by-char manner?
>
> The modem supports char-by-char AT commands over serial lines. I'll
> get more information about why splitting long commands is supported
> only for MBIM and not for AT commands.
>
The next version will allow AT command fragmentation, since the modem
is able to handle fragments as you pointed out. This will make the
WWAN Tx callback a bit simpler.

2021-11-19 06:38:03

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 02/14] net: wwan: t7xx: Add control DMA interface


On 11/1/2021 7:03 AM, Andy Shevchenko wrote:
> On Sun, Oct 31, 2021 at 08:56:23PM -0700, Ricardo Martinez wrote:
>> From: Haijun Lio <[email protected]>
>>
>> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
>> path of Host-Modem data transfers. CLDMA HIF layer provides a common
>> interface to the Port Layer.
>>
>> CLDMA manages 8 independent RX/TX physical channels with data flow
>> control in HW queues. CLDMA uses ring buffers of General Packet
>> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
>> data buffers (DB).
>>
>> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
>> interrupts, and initializes CLDMA HW registers.
>>
>> CLDMA TX flow:
>> 1. Port Layer write
>> 2. Get DB address
>> 3. Configure GPD
>> 4. Triggering processing via HW register write
>>
>> CLDMA RX flow:
>> 1. CLDMA HW sends a RX "done" to host
>> 2. Driver starts thread to safely read GPD
>> 3. DB is sent to Port layer
>> 4. Create a new buffer for GPD ring
...
>
>> +void cldma_hw_reset(void __iomem *ao_base)
>> +{
>> + iowrite32(ioread32(ao_base + REG_INFRA_RST4_SET) | RST4_CLDMA1_SW_RST_SET,
>> + ao_base + REG_INFRA_RST4_SET);
>> + iowrite32(ioread32(ao_base + REG_INFRA_RST2_SET) | RST2_CLDMA1_AO_SW_RST_SET,
>> + ao_base + REG_INFRA_RST2_SET);
>> + udelay(1);
>> + iowrite32(ioread32(ao_base + REG_INFRA_RST4_CLR) | RST4_CLDMA1_SW_RST_CLR,
>> + ao_base + REG_INFRA_RST4_CLR);
>> + iowrite32(ioread32(ao_base + REG_INFRA_RST2_CLR) | RST2_CLDMA1_AO_SW_RST_CLR,
>> + ao_base + REG_INFRA_RST2_CLR);
> Setting and clearing are in the same order, is it okay?
> Can we do it rather symmetrical?
In this case, order does not matter.

This will be symmetrical in the next iteration.
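For reference, a symmetrical variant would simply clear in the reverse
order of setting (a sketch based on the register names quoted above):

void cldma_hw_reset(void __iomem *ao_base)
{
	iowrite32(ioread32(ao_base + REG_INFRA_RST4_SET) | RST4_CLDMA1_SW_RST_SET,
		  ao_base + REG_INFRA_RST4_SET);
	iowrite32(ioread32(ao_base + REG_INFRA_RST2_SET) | RST2_CLDMA1_AO_SW_RST_SET,
		  ao_base + REG_INFRA_RST2_SET);
	udelay(1);
	/* Clear in the reverse order of setting */
	iowrite32(ioread32(ao_base + REG_INFRA_RST2_CLR) | RST2_CLDMA1_AO_SW_RST_CLR,
		  ao_base + REG_INFRA_RST2_CLR);
	iowrite32(ioread32(ao_base + REG_INFRA_RST4_CLR) | RST4_CLDMA1_SW_RST_CLR,
		  ao_base + REG_INFRA_RST4_CLR);
}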

>> +}
> ...
>
>> + mb(); /* prevents outstanding GPD updates */
> Is there any counterpart of this barrier?

This is not needed, removing it.

...

>
>> + ret = cldma_gpd_rx_from_queue(queue, budget, &over_budget);
>> + if (ret == -ENODATA)
>> + return 0;
>> +
>> + if (ret)
>> + return ret;
> Drop redundant blank line

The style followed is to keep a blank line after 'if' blocks.

Is that acceptable as long as it is consistent across the driver?
>
>> + /* greedy mode */
>> + l2_rx_int = cldma_hw_int_status(hw_info, BIT(queue->index), true);
...
>
>> +exit:
> Seems useless.

This tag is used when the PM patch is introduced later in the same series.

> + return ret;
...

2021-11-19 06:41:15

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] net: wwan: t7xx: Add port proxy infrastructure


On 11/3/2021 8:38 AM, Andy Shevchenko wrote:
> On Sun, Oct 31, 2021 at 08:56:25PM -0700, Ricardo Martinez wrote:
>> From: Haijun Lio <[email protected]>
>>
>> Port-proxy provides a common interface to interact with different types
>> of ports. Ports export their configuration via `struct t7xx_port` and
>> operate as defined by `struct port_ops`.
> Same here, assuming that the comments from the previous patches are applied
> here as well, only unique are given.

Thanks for the feedback.

...

>> + nlh = nlmsg_put(nl_skb, 0, 1, NLMSG_DONE, len, 0);
>> + if (!nlh) {
>> + dev_err(port->dev, "could not release netlink\n");
> I'm wondering why you are not using net_err() / netdev_err() / netif_err()
> where it's appropriate.

The original idea was to avoid mixing different types of APIs, but
yes, it makes sense to use those APIs where appropriate. I'm thinking
of using them in t7xx_netdev.c, where the required net_device is
available.

...

>> + nlmsg_free(nl_skb);
>> + return -EFAULT;
>> + }
>>
>> ...
>>
>> + ccci_h = (struct ccci_header *)skb->data;
> Do you need casting?
Yes, skb->data is an unsigned char*
> ...
...


2021-11-19 08:29:09

by Andy Shevchenko

[permalink] [raw]
Subject: Re: [PATCH v2 02/14] net: wwan: t7xx: Add control DMA interface

On Thu, Nov 18, 2021 at 10:36:32PM -0800, Martinez, Ricardo wrote:
> On 11/1/2021 7:03 AM, Andy Shevchenko wrote:
> > On Sun, Oct 31, 2021 at 08:56:23PM -0700, Ricardo Martinez wrote:

...

> > > + ret = cldma_gpd_rx_from_queue(queue, budget, &over_budget);
> > > + if (ret == -ENODATA)
> > > + return 0;
> > > +
> > > + if (ret)
> > > + return ret;
> > Drop redundant blank line
>
> The style followed is to keep a blank line after 'if' blocks.
>
> Is that acceptable as long as it is consistent across the driver?

The idea behind the suggestion is that you check the value returned
from the call. So both if:s are tied to that call and can be
considered as a whole.

It doesn't mean you should blindly remove blank lines everywhere.

...

> > > +exit:
> > Seems useless.
>
> This tag is used when the PM patch is introduced later in the same series.

Can you give it a better name?

> > + return ret;

--
With Best Regards,
Andy Shevchenko



2021-11-23 05:38:45

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] net: wwan: t7xx: Add core components


Sorry for the spam. Re-sending as plain text.


On 11/2/2021 8:46 AM, Andy Shevchenko wrote:
> On Sun, Oct 31, 2021 at 08:56:24PM -0700, Ricardo Martinez wrote:
>> From: Haijun Lio<[email protected]>
>>
>> Registers the t7xx device driver with the kernel. Setup all the core
>> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
>> modem control operations, modem state machine, and build
>> infrastructure.
>>
>> * PCIe layer code implements driver probe and removal.
>> * MHCCIF provides interrupt channels to communicate events
>> such as handshake, PM and port enumeration.
>> * Modem control implements the entry point for modem init,
>> reset and exit.
>> * The modem status monitor is a state machine used by modem control
>> to complete initialization and stop. It is used also to propagate
>> exception events reported by other components.
>> +#define CCCI_HEADER_NO_DATA 0xffffffff
> Is this internal value to Linux or something which is given by hardware?

It is hardware defined

...

>> + spin_lock_irqsave(&md_info->exp_spinlock, flags);
> Can it be called outside of IRQ context?

Actually, it is not in IRQ context; this function is called by the
thread_fn passed to request_threaded_irq().

Maybe spin_lock_bh() makes more sense.

>> + int_sta = get_interrupt_status(mtk_dev);
>> + md_info->exp_id |= int_sta;
>> +
>> + if (md_info->exp_id & D2H_INT_PORT_ENUM) {
>> + md_info->exp_id &= ~D2H_INT_PORT_ENUM;
>> + if (ctl->curr_state == CCCI_FSM_INIT ||
>> + ctl->curr_state == CCCI_FSM_PRE_START ||
>> + ctl->curr_state == CCCI_FSM_STOPPED)
>> + ccci_fsm_recv_md_interrupt(MD_IRQ_PORT_ENUM);
>> + }
>> +
>> + if (md_info->exp_id & D2H_INT_EXCEPTION_INIT) {
>> + if (ctl->md_state == MD_STATE_INVALID ||
>> + ctl->md_state == MD_STATE_WAITING_FOR_HS1 ||
>> + ctl->md_state == MD_STATE_WAITING_FOR_HS2 ||
>> + ctl->md_state == MD_STATE_READY) {
>> + md_info->exp_id &= ~D2H_INT_EXCEPTION_INIT;
>> + ccci_fsm_recv_md_interrupt(MD_IRQ_CCIF_EX);
>> + }
>> + } else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) {
>> + /* start handshake if MD not assert */
>> + mask = mhccif_mask_get(mtk_dev);
>> + if ((md_info->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask &
>> D2H_INT_ASYNC_MD_HK)) {
>> + md_info->exp_id &= ~D2H_INT_ASYNC_MD_HK;
>> + queue_work(md->handshake_wq, &md->handshake_work);
>> + }
>> + }
>> +
>> + spin_unlock_irqrestore(&md_info->exp_spinlock, flags);
>> +
>> + return 0;
>> +}
...
>> + cmd = kmalloc(sizeof(*cmd),
>> + (in_irq() || in_softirq() || irqs_disabled()) ? GFP_ATOMIC :
>> GFP_KERNEL);
> Hmm...
Looks like we can just use in_interrupt(); was that the concern?

>> + if (!cmd)
>> + return -ENOMEM;
>> + if (in_irq() || irqs_disabled())
>> + flag &= ~FSM_CMD_FLAG_WAITING_TO_COMPLETE;
> Even more hmm...
>
>> + if (flag & FSM_CMD_FLAG_WAITING_TO_COMPLETE) {
>> + wait_event(cmd->complete_wq, cmd->result != FSM_CMD_RESULT_PENDING);
> Is it okay in IRQ context?

We know that the caller that sets the FSM_CMD_FLAG_WAITING_TO_COMPLETE
flag is not in IRQ context.

If that's good enough, then we could also remove the check that clears
that flag a few lines above.

The only calls using FSM_CMD_FLAG_WAITING_TO_COMPLETE come from
dev_pm_ops callbacks, and we are not setting pm_runtime_irq_safe().

Otherwise, we can use in_interrupt() to check here as well.

>> + if (cmd->result != FSM_CMD_RESULT_OK)
>> + result = -EINVAL;

2021-12-01 06:04:22

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 04/14] net: wwan: t7xx: Add port proxy infrastructure


On 11/6/2021 11:06 AM, Sergey Ryazanov wrote:
> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
> <[email protected]> wrote:
>> Port-proxy provides a common interface to interact with different types
>> of ports. Ports export their configuration via `struct t7xx_port` and
>> operate as defined by `struct port_ops`.
[skipped]
>
>> + enum ccci_ch tx_ch;
>> + enum ccci_ch rx_ch;
>> + unsigned char txq_index;
>> + unsigned char rxq_index;
>> + unsigned char txq_exp_index;
>> + unsigned char rxq_exp_index;
>> + enum cldma_id path_id;
>> + unsigned int flags;
>> + struct port_ops *ops;
>> + unsigned int minor;
>> + char *name;
> Why did you need these two fields with the port name and minor number?
> The WWAN subsystem will take care of these data for you. That is its
> purpose.

This port proxy structure can abstract different types of ports. The
'minor' field is used for tty and char ports, but it will be removed,
since the next iteration will contain only WWAN ports and one control
port.

'name' is currently used when printing error logs; in the future it
can be used to define the name in the file system for test and debug
ports.


[skipped]

>> +static int proxy_register_char_dev(void)
>> +{
>> + dev_t dev = 0;
>> + int ret;
>> +
>> + if (port_prox->major) {
>> + dev = MKDEV(port_prox->major, port_prox->minor_base);
>> + ret = register_chrdev_region(dev, TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
>> + } else {
>> + ret = alloc_chrdev_region(&dev, port_prox->minor_base,
>> + TTY_IPC_MINOR_BASE, MTK_DEV_NAME);
>> + if (ret)
>> + dev_err(port_prox->dev, "failed to alloc chrdev region, ret=%d\n", ret);
>> +
>> + port_prox->major = MAJOR(dev);
>> + }
> What do you need these character devices for? The WWAN subsystem
> already handles all these tasks.
This infrastructure is not going to be included in the next iteration;
it is used for debug and test ports.
>> + return ret;
>> +}
>> ...
>> +static int proxy_alloc(struct mtk_modem *md)
>> +{
>> + int ret;
>> +
>> + port_prox = devm_kzalloc(&md->mtk_dev->pdev->dev, sizeof(*port_prox), GFP_KERNEL);
>> + if (!port_prox)
>> + return -ENOMEM;
>
[skipped]


Ricardo

2021-12-01 06:06:05

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] net: wwan: t7xx: Add WWAN network interface


On 11/6/2021 11:08 AM, Sergey Ryazanov wrote:
> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
> <[email protected]> wrote:
>> Creates the Cross Core Modem Network Interface (CCMNI) which implements
>> the wwan_ops for registration with the WWAN framework, CCMNI also
>> implements the net_device_ops functions used by the network device.
>> Network device operations include open, close, start transmission, TX
>> timeout, change MTU, and select queue.
>>
[skipped]
>> +static enum txq_type get_txq_type(struct sk_buff *skb)
>> +{
>> + u32 total_len, payload_len, l4_off;
>> + bool tcp_syn_fin_rst, is_tcp;
>> + struct ipv6hdr *ip6h;
>> + struct tcphdr *tcph;
>> + struct iphdr *ip4h;
>> + u32 packet_type;
>> + __be16 frag_off;
>> +
>> + packet_type = skb->data[0] & SBD_PACKET_TYPE_MASK;
>> + if (packet_type == IPV6_VERSION) {
>> + ip6h = (struct ipv6hdr *)skb->data;
>> + total_len = sizeof(struct ipv6hdr) + ntohs(ip6h->payload_len);
>> + l4_off = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &ip6h->nexthdr, &frag_off);
>> + tcph = (struct tcphdr *)(skb->data + l4_off);
>> + is_tcp = ip6h->nexthdr == IPPROTO_TCP;
>> + payload_len = total_len - l4_off - (tcph->doff << 2);
>> + } else if (packet_type == IPV4_VERSION) {
>> + ip4h = (struct iphdr *)skb->data;
>> + tcph = (struct tcphdr *)(skb->data + (ip4h->ihl << 2));
>> + is_tcp = ip4h->protocol == IPPROTO_TCP;
>> + payload_len = ntohs(ip4h->tot_len) - (ip4h->ihl << 2) - (tcph->doff << 2);
>> + } else {
>> + return TXQ_NORMAL;
>> + }
>> +
>> + tcp_syn_fin_rst = tcph->syn || tcph->fin || tcph->rst;
>> + if (is_tcp && !payload_len && !tcp_syn_fin_rst)
>> + return TXQ_FAST;
>> +
>> + return TXQ_NORMAL;
>> +}
> I am wondering how much modem performance has improved with this
> optimization compared to the performance loss on each packet due to
> the cache miss? Do you have any measurement results?

No performance gains were observed in the latest tests; this is going
to be removed in the next iteration.

>> +static u16 ccmni_select_queue(struct net_device *dev, struct sk_buff *skb,
>> + struct net_device *sb_dev)
>> +{
>> + struct ccmni_instance *ccmni;
>> +
>> + ccmni = netdev_priv(dev);
>> +
>> + if (ccmni->ctlb->capability & NIC_CAP_DATA_ACK_DVD)
>> + return get_txq_type(skb);
>> +
>> + return TXQ_NORMAL;
>> +}
>> +
>> +static int ccmni_open(struct net_device *dev)
>> +{
>> + struct ccmni_instance *ccmni;
>> +
>> + ccmni = wwan_netdev_drvpriv(dev);
>
[skipped]
>> + skb_set_mac_header(skb, -ETH_HLEN);
>> + skb_reset_network_header(skb);
>> + skb->dev = dev;
>> + if (pkt_type == IPV6_VERSION)
>> + skb->protocol = htons(ETH_P_IPV6);
>> + else
>> + skb->protocol = htons(ETH_P_IP);
>> +
>> + skb_len = skb->len;
>> +
>> + netif_rx_any_context(skb);
> Did you consider using NAPI for the packet Rx path? This should
> improve Rx performance.
Yes, NAPI implementation is in the plan.
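For reference, the usual NAPI shape would be roughly (a sketch; the
t7xx-specific helpers and the napi member are hypothetical here):

static int t7xx_napi_poll(struct napi_struct *napi, int budget)
{
	struct dpmaif_rx_queue *rxq = container_of(napi, struct dpmaif_rx_queue, napi);
	int done = 0;

	while (done < budget) {
		struct sk_buff *skb = t7xx_rxq_dequeue(rxq); /* hypothetical helper */

		if (!skb)
			break;

		napi_gro_receive(napi, skb);
		done++;
	}

	if (done < budget && napi_complete_done(napi, done))
		t7xx_rxq_enable_irq(rxq); /* hypothetical helper */

	return done;
}

/* at init: netif_napi_add(dev, &rxq->napi, t7xx_napi_poll, NAPI_POLL_WEIGHT);
 * in the Rx IRQ handler: mask the queue interrupt and call napi_schedule(&rxq->napi);
 */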
>> + dev->stats.rx_packets++;
>> + dev->stats.rx_bytes += skb_len;
>> +}
> [skipped]
>
>> diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.h b/drivers/net/wwan/t7xx/t7xx_netdev.h
>> ...
>> +#define CCMNI_TX_QUEUE 1000
> Is this a really carefully selected queue depth limit, or just an
> arbitrary value? If the latter, then feel free to use the
> DEFAULT_TX_QUEUE_LEN macro.
Changing this to DEFAULT_TX_QUEUE_LEN for the next iteration
>> ..
>> +#define IPV4_VERSION 0x40
>> +#define IPV6_VERSION 0x60
> Just curious why the _VERSION suffix? Why not, for example, PKT_TYPE_ prefix?
Nothing special about _VERSION, but it does look a bit weird; I will
use PKT_TYPE_ as suggested.
> --
> Sergey
Ricardo

2021-12-01 06:14:58

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports


On 11/9/2021 4:06 AM, Sergey Ryazanov wrote:
> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>> ...
>> static struct t7xx_port md_ccci_ports[] = {
>> + {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
>> + 0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
>> + {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
>> + PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
>> ...
>> + if (count + CCCI_H_ELEN > txq_mtu &&
>> + (port_ccci->tx_ch == CCCI_MBIM_TX ||
>> + (port_ccci->tx_ch >= CCCI_DSS0_TX && port_ccci->tx_ch <= CCCI_DSS7_TX)))
>> + multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);
> I am just wondering, the chip does support MBIM message fragmentation,
> but does not support AT commands stream (CCCI_UART2_TX) fragmentation.
> Is that the correct conclusion from the code above?
Yes, that is correct.
>
> BTW, you could factor out data fragmentation support to a dedicated
> function to improve code readability. Something like this:
>
> static inline bool port_is_multipacket_capable(... *port)
> {
> return port->tx_ch == CCCI_MBIM_TX ||
> (port->tx_ch >= CCCI_DSS0_TX && port->tx_ch <= CCCI_DSS7_TX);
> }
>
> So condition become something like that:
>
> if (count + CCCI_H_ELEN > txq_mtu &&
> port_is_multipacket_capable(port))
> multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);

Ricardo



2021-12-01 20:47:52

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports

Hello Ricardo,

On Wed, Dec 1, 2021 at 9:14 AM Martinez, Ricardo
<[email protected]> wrote:
> On 11/9/2021 4:06 AM, Sergey Ryazanov wrote:
>> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>>> ...
>>> static struct t7xx_port md_ccci_ports[] = {
>>> + {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
>>> + 0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
>>> + {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
>>> + PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
>>> ...
>>> + if (count + CCCI_H_ELEN > txq_mtu &&
>>> + (port_ccci->tx_ch == CCCI_MBIM_TX ||
>>> + (port_ccci->tx_ch >= CCCI_DSS0_TX && port_ccci->tx_ch <= CCCI_DSS7_TX)))
>>> + multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);
>>
>> I am just wondering, the chip does support MBIM message fragmentation,
>> but does not support AT commands stream (CCCI_UART2_TX) fragmentation.
>> Is that the correct conclusion from the code above?
>
> Yes, that is correct.

Are you sure that the modem does not support AT command fragmentation?
The AT commands interface is a stream of chars by its nature. It is
designed to work over serial lines. Some modem configuration software
even writes commands to a port in a char-by-char manner, i.e. it
writes no more than one char at a time to the port.

The mechanism that is implemented in the driver to split user input
into individual messages is not a true fragmentation mechanism since
it does not preserve the original user input length. It just cuts the
user input into individual messages and sends them to the modem
independently. So, the modem firmware has no way to distinguish
whether the user input has been "fragmented" by the user or the
driver. How, then, does the modem firmware deal with an AT command
"fragmented" by a user? Will the modem firmware ignore the AT command
that is received in the char-by-char manner?

--
Sergey

2021-12-01 21:09:49

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] net: wwan: t7xx: Add WWAN network interface

On Wed, Dec 1, 2021 at 9:06 AM Martinez, Ricardo
<[email protected]> wrote:
> On 11/6/2021 11:08 AM, Sergey Ryazanov wrote:
>> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>>> +#define IPV4_VERSION 0x40
>>> +#define IPV6_VERSION 0x60
>>
>> Just curious why the _VERSION suffix? Why not, for example, PKT_TYPE_ prefix?
>
> Nothing special about _VERSION, but it does look a bit weird; I will
> use PKT_TYPE_ as suggested.

I checked the driver code again and found that these constants are
really used to distinguish between IPv4 and IPv6 packets by checking
the first byte of the data packet (IP header version field).

Now I am wondering, does the modem firmware report a packet type in
one of the BAT or PIT headers? If the modem is already reporting a
packet type, then it is better to use the provided information instead
of touching the packet data. Otherwise, if the modem does not
explicitly report a packet type, and you have to check the version
field of the IP header, then it seems Ok to keep the names of these
constants as they are (with the _VERSION suffix).

--
Sergey

2021-12-02 20:44:49

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 09/14] net: wwan: t7xx: Add WWAN network interface


Hi Sergey,

On 12/1/2021 1:09 PM, Sergey Ryazanov wrote:
> On Wed, Dec 1, 2021 at 9:06 AM Martinez, Ricardo
> <[email protected]> wrote:
>> On 11/6/2021 11:08 AM, Sergey Ryazanov wrote:
>>> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>>>> +#define IPV4_VERSION 0x40
>>>> +#define IPV6_VERSION 0x60
>>> Just curious why the _VERSION suffix? Why not, for example, PKT_TYPE_ prefix?
>> Nothing special about _VERSION, but it does look a bit weird; I will
>> use PKT_TYPE_ as suggested.
> I checked the driver code again and found that these constants are
> really used to distinguish between IPv4 and IPv6 packets by checking
> the first byte of the data packet (IP header version field).
>
> Now I am wondering, does the modem firmware report a packet type in
> one of the BAT or PIT headers? If the modem is already reporting a
> packet type, then it is better to use the provided information instead
> of touching the packet data. Otherwise, if the modem does not
> explicitly report a packet type, and you have to check the version
> field of the IP header, then it seems Ok to keep the names of these
> constants as they are (with the _VERSION suffix).

Actually, there is a bit in the PIT header for the packet type; I'll
make the driver use it instead of looking at the data packet.
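Roughly like this (a sketch; the PIT field and bit names below are
placeholders for whatever the hardware actually defines):

	/* Hypothetical: packet-type bit in the first PIT word */
	#define PIT_HEADER_PKT_TYPE	BIT(20)

	if (le32_to_cpu(pit->header) & PIT_HEADER_PKT_TYPE)
		skb->protocol = htons(ETH_P_IPV6);
	else
		skb->protocol = htons(ETH_P_IP);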


2021-12-02 22:42:47

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 03/14] net: wwan: t7xx: Add core components


On 11/6/2021 11:05 AM, Sergey Ryazanov wrote:
> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez
> <[email protected]> wrote:
>> Registers the t7xx device driver with the kernel. Setup all the core
>> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
>> modem control operations, modem state machine, and build
>> infrastructure.
>>
>> * PCIe layer code implements driver probe and removal.
>> * MHCCIF provides interrupt channels to communicate events
>> such as handshake, PM and port enumeration.
>> * Modem control implements the entry point for modem init,
>> reset and exit.
>> * The modem status monitor is a state machine used by modem control
>> to complete initialization and stop. It is used also to propagate
>> exception events reported by other components.
> [skipped]
>
>> drivers/net/wwan/t7xx/t7xx_monitor.h | 144 +++++
>> ...
>> drivers/net/wwan/t7xx/t7xx_state_monitor.c | 598 +++++++++++++++++++++
> Out of curiosity, why is this file called t7xx_state_monitor.c, while
> the corresponding header file is called simply t7xx_monitor.h? Are any
> other monitors planned?
>
> [skipped]

No other monitors; I'll rename it to make it consistent.

[skipped]

>
>> diff --git a/drivers/net/wwan/t7xx/t7xx_skb_util.c b/drivers/net/wwan/t7xx/t7xx_skb_util.c
>> ...
>> +static struct sk_buff *alloc_skb_from_pool(struct skb_pools *pools, size_t size)
>> +{
>> + if (size > MTK_SKB_4K)
>> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_64k);
>> + else if (size > MTK_SKB_16)
>> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_4k);
>> + else if (size > 0)
>> + return ccci_skb_dequeue(pools->reload_work_queue, &pools->skb_pool_16);
>> +
>> + return NULL;
>> +}
>> +
>> +static struct sk_buff *alloc_skb_from_kernel(size_t size, gfp_t gfp_mask)
>> +{
>> + if (size > MTK_SKB_4K)
>> + return __dev_alloc_skb(MTK_SKB_64K, gfp_mask);
>> + else if (size > MTK_SKB_1_5K)
>> + return __dev_alloc_skb(MTK_SKB_4K, gfp_mask);
>> + else if (size > MTK_SKB_16)
>> + return __dev_alloc_skb(MTK_SKB_1_5K, gfp_mask);
>> + else if (size > 0)
>> + return __dev_alloc_skb(MTK_SKB_16, gfp_mask);
>> +
>> + return NULL;
>> +}
> I am wondering what performance gains have you achieved with these skb
> pools? Can we see any numbers?
>
> I do not think the control path performance is worth the complexity of
> the multilayer skb allocation. In the data packet Rx path, you need to
> allocate skbs anyway as soon as the driver passes them to the stack. So
> what is the gain?
>
> [skipped]

Agreed, we are removing the skb pools for the control path.

Regarding the Rx data path, we'll get some numbers to see if the pool
is worth it; otherwise we'll remove it too.

[skipped]



2021-12-07 02:41:09

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH v2 06/14] net: wwan: t7xx: Add AT and MBIM WWAN ports


On 12/1/2021 12:45 PM, Sergey Ryazanov wrote:
> Hello Ricardo,
>
> On Wed, Dec 1, 2021 at 9:14 AM Martinez, Ricardo
> <[email protected]> wrote:
>> On 11/9/2021 4:06 AM, Sergey Ryazanov wrote:
>>> On Mon, Nov 1, 2021 at 6:57 AM Ricardo Martinez wrote:
>>>> ...
>>>> static struct t7xx_port md_ccci_ports[] = {
>>>> + {CCCI_UART2_TX, CCCI_UART2_RX, DATA_AT_CMD_Q, DATA_AT_CMD_Q, 0xff,
>>>> + 0xff, ID_CLDMA1, PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 0, "ttyC0", WWAN_PORT_AT},
>>>> + {CCCI_MBIM_TX, CCCI_MBIM_RX, 2, 2, 0, 0, ID_CLDMA1,
>>>> + PORT_F_RX_CHAR_NODE, &wwan_sub_port_ops, 10, "ttyCMBIM0", WWAN_PORT_MBIM},
>>>> ...
>>>> + if (count + CCCI_H_ELEN > txq_mtu &&
>>>> + (port_ccci->tx_ch == CCCI_MBIM_TX ||
>>>> + (port_ccci->tx_ch >= CCCI_DSS0_TX && port_ccci->tx_ch <= CCCI_DSS7_TX)))
>>>> + multi_packet = DIV_ROUND_UP(count, txq_mtu - CCCI_H_ELEN);
>>> I am just wondering, the chip does support MBIM message fragmentation,
>>> but does not support AT commands stream (CCCI_UART2_TX) fragmentation.
>>> Is that the correct conclusion from the code above?
>> Yes, that is correct.
> Are you sure that the modem does not support AT command fragmentation?
> The AT commands interface is a stream of chars by its nature. It is
> designed to work over serial lines. Some modem configuration software
> even writes commands to a port in a char-by-char manner, i.e. it
> writes no more than one char at a time to the port.
>
> The mechanism that is implemented in the driver to split user input
> into individual messages is not a true fragmentation mechanism since
> it does not preserve the original user input length. It just cuts the
> user input into individual messages and sends them to the modem
> independently. So, the modem firmware has no way to distinguish
> whether the user input has been "fragmented" by the user or the
> driver. How, then, does the modem firmware deal with an AT command
> "fragmented" by a user? Will the modem firmware ignore the AT command
> that is received in the char-by-char manner?

The modem supports char-by-char AT commands over serial lines. I'll
get more information about why splitting long commands is supported
only for MBIM and not for AT commands.