Date: 2022-01-14 14:39:56
From: Martinez, Ricardo

Subject: [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

t7xx is the PCIe host device driver for the Intel 5G 5000 M.2 solution,
which is based on MediaTek's T700 modem and provides WWAN connectivity.
The driver uses the WWAN framework infrastructure to create the following
control ports and network interfaces:
* /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
Applications like libmbim [1] or ModemManager [2] (v1.16 onwards, with the
changes in [3][4]) can use it to establish WWAN data communication.
* /dev/wwan0at0 - Interface that supports AT commands (see the user-space
example following this list).
* wwan0 - Primary network interface for IP traffic.
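
For illustration, a minimal user-space exchange over the AT port could look
like the sketch below (plain open()/read()/write() on the character device;
hypothetical example code, not part of the driver, error handling trimmed):

  /* Hypothetical user-space example: send "AT" on /dev/wwan0at0 and
   * print the modem's reply.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          char buf[256];
          ssize_t n;
          int fd;

          fd = open("/dev/wwan0at0", O_RDWR);
          if (fd < 0)
                  return 1;

          if (write(fd, "AT\r", 3) == 3) {
                  n = read(fd, buf, sizeof(buf) - 1);
                  if (n > 0) {
                          buf[n] = '\0';
                          printf("%s", buf);
                  }
          }

          close(fd);
          return 0;
  }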

The main blocks in t7xx driver are:
* PCIe layer - Implements probe, removal, and power management callbacks.
* Port-proxy - Provides a common interface to interact with different types
of ports such as WWAN ports.
* Modem control & status monitor - Implements the entry point for modem
initialization, reset and exit, as well as exception handling.
* CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
control messages to the modem using MediaTek's CCCI (Cross-Core
Communication Interface) protocol.
* DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
uplink and downlink queues for the data path. The data exchange takes
place using circular buffers to share data buffer addresses and metadata
describing the packets (a sketch of the ring index arithmetic follows
this list).
* MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
for bidirectional event notification such as handshake, exception, PM and
port enumeration.
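
As a concrete illustration of the circular-buffer bookkeeping mentioned for
DPMAIF above, here is a condensed sketch of the read/write index arithmetic,
modeled on the t7xx_ring_buf_rd_wr_count() helper introduced later in this
series:

  /* Condensed sketch of the DPMAIF ring index arithmetic. For reads, the
   * count is the number of entries written by the producer and not yet
   * consumed; for writes, one slot is kept free so that a full ring can
   * be distinguished from an empty one.
   */
  static unsigned int ring_rd_wr_count(unsigned int total_cnt, unsigned int rd_idx,
                                       unsigned int wr_idx, bool is_read)
  {
          int cnt = is_read ? wr_idx - rd_idx : rd_idx - wr_idx - 1;

          if (cnt < 0)
                  cnt += total_cnt;       /* The indices wrapped around */

          return cnt;
  }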

Compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
option, which depends on CONFIG_WWAN.
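
For example, a configuration fragment enabling the driver (built as a module
here; building it in is equally valid) would look like:

  CONFIG_WWAN=y
  CONFIG_MTK_T7XX=m
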
This driver was originally developed by MediaTek. Intel adapted t7xx to
the WWAN framework, then optimized and refactored the driver source in
close collaboration with MediaTek. This will enable getting the t7xx
driver onto the Approved Vendor List for interested OEMs' and ODMs'
productization plans with the Intel 5G 5000 M.2 solution.

List of contributors:
Amir Hanania <[email protected]>
Andriy Shevchenko <[email protected]>
Chandrashekar Devegowda <[email protected]>
Dinesh Sharma <[email protected]>
Eliot Lee <[email protected]>
Haijun Liu <[email protected]>
M Chetan Kumar <[email protected]>
Mika Westerberg <[email protected]>
Moises Veleta <[email protected]>
Pierre-louis Bossart <[email protected]>
Chiranjeevi Rapolu <[email protected]>
Ricardo Martinez <[email protected]>
Muralidharan Sethuraman <[email protected]>
Soumya Prakash Mishra <[email protected]>
Sreehari Kancharla <[email protected]>
Suresh Nagaraj <[email protected]>

[1] https://www.freedesktop.org/software/libmbim/
[2] https://www.freedesktop.org/software/ModemManager/
[3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
[4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523

v4:
- Implement list_prev_entry_circular() and list_next_entry_circular() macros.
- Remove inline from all c files.
- Define ioread32_poll_timeout_atomic() helper macro.
- Fix return code for WWAN port tx op.
- Allow AT command fragmentation, the same as for MBIM commands.
- Introduce t7xx_common.h file in the first patch.
- Rename functions and variables as suggested in v3.
- Reduce code duplication by creating fsm_wait_for_event() helper function.
- Remove unneeded dev_err in t7xx_fsm_clr_event().
- Remove unused variable last_state from struct t7xx_fsm_ctl.
- Remove unused variable txq_select_times from struct dpmaif_ctrl.
- Replace ETXTBSY with EBUSY.
- Refactor t7xx_dpmaif_rx_buf_alloc() to remove an unneeded allocation.
- Fix potential leak at t7xx_dpmaif_rx_frag_alloc().
- Simplify return value handling at t7xx_dpmaif_rx_start().
- Add a helper to handle the common part of CCCI header initialization.
- Make sure interrupts are enabled during PM resume.
- Add a parameter to t7xx_fsm_append_cmd() to tell if it is in interrupt context.

v3:
- Avoid unneeded ping-pong changes between patches.
- Use t7xx_ prefix in functions.
- Use t7xx_ prefix in generic structs where mtk_ or ccci prefix was used.
- Update Authors/Contributors header.
- Remove skb pools used for control path.
- Remove skb pools used for RX data path.
- Do not use dedicated TX queue for ACK-only packets.
- Remove __packed attribute from GPD structs.
- Remove the infrastructure for test and debug ports.
- Use the skb control buffer to store metadata.
- Get the IP packet type from RX PIT.
- Merge variable declaration and simple assignments.
- Use preferred coding patterns.
- Remove global variables.
- Declare HW facing structure members as little endian.
- Rename goto tags to describe what is going to be done.
- Do not use variable length arrays.
- Remove unneeded blank lines from source code and kdoc headers.
- Use C99 initialization format for port-proxy ports.
- Clean up comments.
- Review included headers.
- Better use of 100 column limit.
- Remove unneeded mb() in CLDMA.
- Remove unneeded spin locks and atomics.
- Handle read_poll_timeout error.
- Use dev_err_ratelimited() where required.
- Fix resource leak when requesting IRQs.
- Use generic DEFAULT_TX_QUEUE_LEN instead of a custom macro.
- Use ETH_DATA_LEN instead of defining WWAN_DEFAULT_MTU.
- Use sizeof() instead of defines when the size of structures is required.
- Remove unneeded code from netdev:
No need to configure HW address length
No need to implement .ndo_change_mtu
Remove random address generation
- Code simplifications by using kernel provided functions and macros such as:
module_pci_driver
PTR_ERR_OR_ZERO
for_each_set_bit
pci_device_is_present
skb_queue_purge
list_prev_entry
__ffs64

v2:
- Replace pdev->driver->name with dev_driver_string(&pdev->dev).
- Replace random_ether_addr() with eth_random_addr().
- Update kernel-doc comment for enum data_policy.
- Indicate the driver is 'Supported' instead of 'Maintained'.
- Fix the Signed-off-by and Co-developed-by tags in the patches.
- Add authors and contributors in the top comment of the source files.

Ricardo Martinez (13):
list: Add list_next_entry_circular() and list_prev_entry_circular()
net: wwan: t7xx: Add control DMA interface
net: wwan: t7xx: Add core components
net: wwan: t7xx: Add port proxy infrastructure
net: wwan: t7xx: Add control port
net: wwan: t7xx: Add AT and MBIM WWAN ports
net: wwan: t7xx: Data path HW layer
net: wwan: t7xx: Add data path interface
net: wwan: t7xx: Add WWAN network interface
net: wwan: t7xx: Introduce power management support
net: wwan: t7xx: Runtime PM
net: wwan: t7xx: Device deep sleep lock/unlock
net: wwan: t7xx: Add maintainers and documentation

.../networking/device_drivers/wwan/index.rst | 1 +
.../networking/device_drivers/wwan/t7xx.rst | 120 ++
MAINTAINERS | 11 +
drivers/net/wwan/Kconfig | 14 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/t7xx/Makefile | 20 +
drivers/net/wwan/t7xx/t7xx_cldma.c | 282 ++++
drivers/net/wwan/t7xx/t7xx_cldma.h | 177 ++
drivers/net/wwan/t7xx/t7xx_common.h | 95 ++
drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1372 ++++++++++++++++
drivers/net/wwan/t7xx/t7xx_dpmaif.h | 146 ++
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1439 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 139 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 577 +++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 252 +++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1251 ++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 115 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 754 +++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 89 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 118 ++
drivers/net/wwan/t7xx/t7xx_mhccif.h | 37 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 703 ++++++++
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 87 +
drivers/net/wwan/t7xx/t7xx_netdev.c | 433 +++++
drivers/net/wwan/t7xx/t7xx_netdev.h | 61 +
drivers/net/wwan/t7xx/t7xx_pci.c | 767 +++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 123 ++
drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 277 ++++
drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 37 +
drivers/net/wwan/t7xx/t7xx_port.h | 151 ++
drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 190 +++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 642 ++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 87 +
drivers/net/wwan/t7xx/t7xx_port_wwan.c | 225 +++
drivers/net/wwan/t7xx/t7xx_reg.h | 379 +++++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 548 +++++++
drivers/net/wwan/t7xx/t7xx_state_monitor.h | 126 ++
include/linux/list.h | 26 +
38 files changed, 11872 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst
create mode 100644 drivers/net/wwan/t7xx/Makefile
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h

--
2.17.1


Date: 2022-01-14 14:40:03
From: Martinez, Ricardo

Subject: [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support

From: Haijun Liu <[email protected]>

Implements the suspend, resume, freeze, thaw, poweroff, and restore
`dev_pm_ops` callbacks.

From the host point of view, the t7xx driver is one entity, but the
device has several modules that need to be addressed in different ways
during power management (PM) flows.
The driver uses the term 'PM entities' to refer to the DPMAIF and CLDMA
HW blocks that need to be managed during PM flows.
When a dev_pm_ops callback is invoked, the PM entities list is iterated
and the matching hook is called for each entry in the list.
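
As an illustration of that pattern, a simplified sketch of the suspend-side
iteration is shown below; the real code in t7xx_pci.c additionally waits for
device ACKs, rolls back already-suspended entities on failure, and runs the
suspend_late/resume_early hooks:

  /* Simplified sketch: invoke the suspend hook of every registered PM
   * entity. Error unwinding and the device handshake are omitted.
   */
  static int pm_entities_suspend(struct t7xx_pci_dev *t7xx_dev)
  {
          struct md_pm_entity *entity;
          int ret;

          list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
                  if (!entity->suspend)
                          continue;

                  ret = entity->suspend(t7xx_dev, entity->entity_param);
                  if (ret)
                          return ret;
          }

          return 0;
  }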

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 120 +++++-
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 1 +
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 90 +++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 1 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 17 +
drivers/net/wwan/t7xx/t7xx_pci.c | 426 +++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 46 +++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 2 +
8 files changed, 702 insertions(+), 1 deletion(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 3b49f7b81b01..31e32c10dabb 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1184,6 +1184,117 @@ void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage)
}
}

+static void t7xx_cldma_resume_early(struct t7xx_pci_dev *mtk_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+ int qno_t;
+
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_restore(hw_info);
+ for (qno_t = 0; qno_t < CLDMA_TXQ_NUM; qno_t++) {
+ t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->txq[qno_t].tx_xmit->gpd_addr,
+ MTK_TX);
+ t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->rxq[qno_t].tr_done->gpd_addr,
+ MTK_RX);
+ }
+
+ t7xx_cldma_enable_irq(md_ctrl);
+ t7xx_cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+ md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_irq_en_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+ t7xx_cldma_hw_irq_en_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_resume(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ unsigned long flags;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+ t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ t7xx_mhccif_mask_clr(t7xx_dev, D2H_SW_INT_MASK);
+
+ return 0;
+}
+
+static void t7xx_cldma_suspend_late(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+ t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+ md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+ t7xx_cldma_clear_ip_busy(hw_info);
+ t7xx_cldma_disable_irq(md_ctrl);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_suspend(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK);
+
+ hw_info = &md_ctrl->hw_info;
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_TX);
+ t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_TX);
+ md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_TX);
+ md_ctrl->txq_started = 0;
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+ return 0;
+}
+
+static int t7xx_cldma_pm_init(struct cldma_ctrl *md_ctrl)
+{
+ md_ctrl->pm_entity = kzalloc(sizeof(*md_ctrl->pm_entity), GFP_KERNEL);
+ if (!md_ctrl->pm_entity)
+ return -ENOMEM;
+
+ md_ctrl->pm_entity->entity_param = md_ctrl;
+
+ if (md_ctrl->hif_id == ID_CLDMA1)
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL1;
+ else
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL2;
+
+ md_ctrl->pm_entity->suspend = t7xx_cldma_suspend;
+ md_ctrl->pm_entity->suspend_late = t7xx_cldma_suspend_late;
+ md_ctrl->pm_entity->resume = t7xx_cldma_resume;
+ md_ctrl->pm_entity->resume_early = t7xx_cldma_resume_early;
+
+ return t7xx_pci_pm_entity_register(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+}
+
+static int t7xx_cldma_pm_uninit(struct cldma_ctrl *md_ctrl)
+{
+ if (!md_ctrl->pm_entity)
+ return -EINVAL;
+
+ t7xx_pci_pm_entity_unregister(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+ kfree_sensitive(md_ctrl->pm_entity);
+ md_ctrl->pm_entity = NULL;
+ return 0;
+}
+
void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl)
{
struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
@@ -1216,6 +1327,7 @@ static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data)
* @md: Modem context structure.
* @md_ctrl: CLDMA context structure.
*
+ * Allocate and initialize device power management entity.
* Initialize HIF TX/RX queue structure.
* Register CLDMA callback ISR with PCIe driver.
*
@@ -1226,12 +1338,16 @@ static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data)
int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl)
{
struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
- int i;
+ int ret, i;

md_ctrl->txq_active = 0;
md_ctrl->rxq_active = 0;
md_ctrl->is_late_init = false;

+ ret = t7xx_cldma_pm_init(md_ctrl);
+ if (ret)
+ return ret;
+
spin_lock_init(&md_ctrl->cldma_lock);
for (i = 0; i < CLDMA_TXQ_NUM; i++) {
md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
@@ -1293,4 +1409,6 @@ void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl)
md_ctrl->rxq[i].worker = NULL;
}
}
+
+ t7xx_cldma_pm_uninit(md_ctrl);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
index 5f8100c2b9bd..029272afd1d6 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -85,6 +85,7 @@ struct cldma_ctrl {
struct dma_pool *gpd_dmapool;
struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+ struct md_pm_entity *pm_entity;
struct t7xx_cldma_hw hw_info;
bool is_late_init;
int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
index b731d0be83ee..18e04b713b91 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -402,6 +402,90 @@ static int t7xx_dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
return 0;
}

+static int t7xx_dpmaif_suspend(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+ t7xx_dpmaif_tx_stop(dpmaif_ctrl);
+ t7xx_dpmaif_hw_stop_all_txq(dpmaif_ctrl);
+ t7xx_dpmaif_hw_stop_all_rxq(dpmaif_ctrl);
+ t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+ t7xx_dpmaif_rx_stop(dpmaif_ctrl);
+ return 0;
+}
+
+static void t7xx_dpmaif_unmask_dlq_intr(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int qno;
+
+ for (qno = 0; qno < DPMAIF_RXQ_NUM; qno++)
+ t7xx_dpmaif_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, qno);
+}
+
+static void t7xx_dpmaif_start_txrx_qs(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rxq;
+ struct dpmaif_tx_queue *txq;
+ unsigned int que_cnt;
+
+ for (que_cnt = 0; que_cnt < DPMAIF_TXQ_NUM; que_cnt++) {
+ txq = &dpmaif_ctrl->txq[que_cnt];
+ txq->que_started = true;
+ }
+
+ for (que_cnt = 0; que_cnt < DPMAIF_RXQ_NUM; que_cnt++) {
+ rxq = &dpmaif_ctrl->rxq[que_cnt];
+ rxq->que_started = true;
+ }
+}
+
+static int t7xx_dpmaif_resume(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+ if (!dpmaif_ctrl)
+ return 0;
+
+ t7xx_dpmaif_start_txrx_qs(dpmaif_ctrl);
+ t7xx_dpmaif_enable_irq(dpmaif_ctrl);
+ t7xx_dpmaif_unmask_dlq_intr(dpmaif_ctrl);
+ t7xx_dpmaif_start_hw(dpmaif_ctrl);
+ wake_up(&dpmaif_ctrl->tx_wq);
+ return 0;
+}
+
+static int t7xx_dpmaif_pm_entity_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ int ret;
+
+ INIT_LIST_HEAD(&dpmaif_pm_entity->entity);
+ dpmaif_pm_entity->suspend = &t7xx_dpmaif_suspend;
+ dpmaif_pm_entity->suspend_late = NULL;
+ dpmaif_pm_entity->resume_early = NULL;
+ dpmaif_pm_entity->resume = &t7xx_dpmaif_resume;
+ dpmaif_pm_entity->id = PM_ENTITY_ID_DATA;
+ dpmaif_pm_entity->entity_param = dpmaif_ctrl;
+
+ ret = t7xx_pci_pm_entity_register(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev, "dpmaif register pm_entity fail\n");
+
+ return ret;
+}
+
+static int t7xx_dpmaif_pm_entity_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ int ret;
+
+ ret = t7xx_pci_pm_entity_unregister(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev, "dpmaif register pm_entity fail\n");
+
+ return ret;
+}
+
int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
{
int ret = 0;
@@ -464,12 +548,17 @@ struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
dpmaif_ctrl->hif_hw_info.pcie_base = t7xx_dev->base_addr.pcie_ext_reg_base -
t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;

+ ret = t7xx_dpmaif_pm_entity_init(dpmaif_ctrl);
+ if (ret)
+ return NULL;
+
t7xx_dpmaif_register_pcie_irq(dpmaif_ctrl);
t7xx_dpmaif_disable_irq(dpmaif_ctrl);

ret = t7xx_dpmaif_rxtx_sw_allocs(dpmaif_ctrl);
if (ret) {
dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
+ t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
return NULL;
}

@@ -481,6 +570,7 @@ void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
{
if (dpmaif_ctrl->dpmaif_sw_init_done) {
t7xx_dpmaif_stop(dpmaif_ctrl);
+ t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
t7xx_dpmaif_sw_release(dpmaif_ctrl);
dpmaif_ctrl->dpmaif_sw_init_done = false;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
index 3404e2a75566..88b18619949d 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -220,6 +220,7 @@ struct dpmaif_callbacks {
struct dpmaif_ctrl {
struct device *dev;
struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity dpmaif_pm_entity;
enum dpmaif_state state;
bool dpmaif_sw_init_done;
struct dpmaif_hw_info hif_hw_info;
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index 20aae457629c..74c79d520d88 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -24,6 +24,11 @@
#include "t7xx_pcie_mac.h"
#include "t7xx_reg.h"

+#define D2H_INT_SR_ACK (D2H_INT_SUSPEND_ACK | \
+ D2H_INT_RESUME_ACK | \
+ D2H_INT_SUSPEND_ACK_AP | \
+ D2H_INT_RESUME_ACK_AP)
+
static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask)
{
void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
@@ -49,6 +54,18 @@ static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
t7xx_pci_mhccif_isr(t7xx_dev);

t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts);
+
+ if (int_sts & D2H_INT_SR_ACK)
+ complete(&t7xx_dev->pm_sr_ack);
+
+ iowrite32(L1_DISABLE_BIT(1), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+ int_sts = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+ if (!int_sts) {
+ val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
+ iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ }
+
t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
return IRQ_HANDLED;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 6dd8897dfcbb..7d30f597c7e9 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -18,24 +18,444 @@

#include <linux/atomic.h>
#include <linux/bits.h>
+#include <linux/completion.h>
#include <linux/dev_printk.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/interrupt.h>
#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
#include <linux/module.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/pm_wakeup.h>

#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"

#define PCI_IREG_BASE 0
#define PCI_EREG_BASE 2

+#define PM_ACK_TIMEOUT_MS 1500
+#define PM_RESOURCE_POLL_TIMEOUT_US 10000
+#define PM_RESOURCE_POLL_STEP_US 100
+
+enum t7xx_pm_state {
+ MTK_PM_EXCEPTION,
+ MTK_PM_INIT, /* Device initialized, but handshake not completed */
+ MTK_PM_SUSPENDED,
+ MTK_PM_RESUMED,
+};
+
+static int t7xx_wait_pm_config(struct t7xx_pci_dev *t7xx_dev)
+{
+ int ret, val;
+
+ ret = read_poll_timeout(ioread32, val,
+ (val & PCIE_RESOURCE_STATUS_MSK) == PCIE_RESOURCE_STATUS_MSK,
+ PM_RESOURCE_POLL_STEP_US, PM_RESOURCE_POLL_TIMEOUT_US, true,
+ IREG_BASE(t7xx_dev) + PCIE_RESOURCE_STATUS);
+ if (ret == -ETIMEDOUT)
+ dev_err(&t7xx_dev->pdev->dev, "PM configuration timed out\n");
+
+ return ret;
+}
+
+static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct pci_dev *pdev = t7xx_dev->pdev;
+
+ INIT_LIST_HEAD(&t7xx_dev->md_pm_entities);
+
+ mutex_init(&t7xx_dev->md_pm_entity_mtx);
+
+ init_completion(&t7xx_dev->pm_sr_ack);
+
+ device_init_wakeup(&pdev->dev, true);
+
+ dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
+ DPM_FLAG_NO_DIRECT_COMPLETE);
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* Enable the PCIe resource lock only after MD deep sleep is done */
+ t7xx_mhccif_mask_clr(t7xx_dev,
+ D2H_INT_SUSPEND_ACK |
+ D2H_INT_RESUME_ACK |
+ D2H_INT_SUSPEND_ACK_AP |
+ D2H_INT_RESUME_ACK_AP);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+}
+
+static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* The device is kept in FSM re-init flow
+ * so just roll back PM setting to the init setting.
+ */
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev)
+{
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ t7xx_wait_pm_config(t7xx_dev);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_EXCEPTION);
+}
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity;
+
+ mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return -EEXIST;
+ }
+ }
+
+ list_add_tail(&pm_entity->entity, &t7xx_dev->md_pm_entities);
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return 0;
+}
+
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity, *tmp_entity;
+
+ mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+ list_for_each_entry_safe(entity, tmp_entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ list_del(&pm_entity->entity);
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return 0;
+ }
+ }
+
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+
+ return -ENXIO;
+}
+
+static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity *entity;
+ unsigned long wait_ret;
+ enum t7xx_pm_id id;
+ int ret = 0;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+ dev_err(&pdev->dev,
+ "[PM] Exiting suspend, because handshake failure or in an exception\n");
+ return -EFAULT;
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ ret = t7xx_wait_pm_config(t7xx_dev);
+ if (ret)
+ return ret;
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ t7xx_dev->rgu_pci_irq_en = false;
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->suspend) {
+ ret = entity->suspend(t7xx_dev, entity->entity_param);
+ if (ret) {
+ id = entity->id;
+ break;
+ }
+ }
+ }
+
+ if (ret) {
+ dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, id);
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (id == entity->id)
+ break;
+
+ if (entity->resume)
+ entity->resume(t7xx_dev, entity->entity_param);
+ }
+
+ goto suspend_complete;
+ }
+
+ reinit_completion(&t7xx_dev->pm_sr_ack);
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ);
+ wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-MD\n");
+
+ reinit_completion(&t7xx_dev->pm_sr_ack);
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ_AP);
+ wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-SAP\n");
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->suspend_late)
+ entity->suspend_late(t7xx_dev, entity->entity_param);
+ }
+
+suspend_complete:
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+ if (ret) {
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ }
+
+ return ret;
+}
+
+static void t7xx_pcie_interrupt_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+
+ /* Disable interrupt first and let the IPs enable them */
+ iowrite32(MSIX_MSK_SET_ALL, IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0);
+
+ /* Device disables PCIe interrupts during resume and
+ * following function will re-enable PCIe interrupts.
+ */
+ t7xx_pcie_mac_interrupts_en(t7xx_dev);
+ t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+}
+
+static int t7xx_pcie_reinit(struct t7xx_pci_dev *t7xx_dev, bool is_d3)
+{
+ int ret;
+
+ ret = pcim_enable_device(t7xx_dev->pdev);
+ if (ret)
+ return ret;
+
+ t7xx_pcie_mac_atr_init(t7xx_dev);
+ t7xx_pcie_interrupt_reinit(t7xx_dev);
+
+ if (is_d3) {
+ t7xx_mhccif_init(t7xx_dev);
+ return t7xx_pci_pm_reinit(t7xx_dev);
+ }
+
+ return 0;
+}
+
+static int t7xx_send_fsm_command(struct t7xx_pci_dev *t7xx_dev, u32 event)
+{
+ struct t7xx_fsm_ctl *fsm_ctl = t7xx_dev->md->fsm_ctl;
+ struct device *dev = &t7xx_dev->pdev->dev;
+ int ret = -EINVAL;
+
+ switch (event) {
+ case FSM_CMD_STOP:
+ ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+ break;
+
+ case FSM_CMD_START:
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_START, 0);
+ break;
+
+ default:
+ break;
+ }
+
+ if (ret)
+ dev_err(dev, "Failure handling FSM command %u, %d\n", event, ret);
+
+ return ret;
+}
+
+static int __t7xx_pci_pm_resume(struct pci_dev *pdev, bool state_check)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity *entity;
+ unsigned long wait_ret;
+ u32 prev_state;
+ int ret = 0;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ return 0;
+ }
+
+ t7xx_pcie_mac_interrupts_en(t7xx_dev);
+ prev_state = ioread32(IREG_BASE(t7xx_dev) + PCIE_PM_RESUME_STATE);
+
+ if (state_check) {
+ /* For D3/L3 resume, the device could boot so quickly that the
+ * initial value of the dummy register might be overwritten.
+ * Identify new boots if the ATR source address register is not initialized.
+ */
+ u32 atr_reg_val = ioread32(IREG_BASE(t7xx_dev) +
+ ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR);
+ if (prev_state == PM_RESUME_REG_STATE_L3 ||
+ (prev_state == PM_RESUME_REG_STATE_INIT &&
+ atr_reg_val == ATR_SRC_ADDR_INVALID)) {
+ ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+ if (ret)
+ return ret;
+
+ ret = t7xx_pcie_reinit(t7xx_dev, true);
+ if (ret)
+ return ret;
+
+ t7xx_clear_rgu_irq(t7xx_dev);
+ return t7xx_send_fsm_command(t7xx_dev, FSM_CMD_START);
+ }
+
+ if (prev_state == PM_RESUME_REG_STATE_EXP ||
+ prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+ if (prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+ ret = t7xx_pcie_reinit(t7xx_dev, false);
+ if (ret)
+ return ret;
+ }
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+
+ t7xx_mhccif_mask_clr(t7xx_dev,
+ D2H_INT_EXCEPTION_INIT |
+ D2H_INT_EXCEPTION_INIT_DONE |
+ D2H_INT_EXCEPTION_CLEARQ_DONE |
+ D2H_INT_EXCEPTION_ALLQ_RESET |
+ D2H_INT_PORT_ENUM);
+
+ return ret;
+ }
+
+ if (prev_state == PM_RESUME_REG_STATE_L2) {
+ ret = t7xx_pcie_reinit(t7xx_dev, false);
+ if (ret)
+ return ret;
+
+ } else if (prev_state != PM_RESUME_REG_STATE_L1 &&
+ prev_state != PM_RESUME_REG_STATE_INIT) {
+ ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+ if (ret)
+ return ret;
+
+ t7xx_clear_rgu_irq(t7xx_dev);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ return 0;
+ }
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ t7xx_wait_pm_config(t7xx_dev);
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->resume_early)
+ entity->resume_early(t7xx_dev, entity->entity_param);
+ }
+
+ reinit_completion(&t7xx_dev->pm_sr_ack);
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_RESUME_REQ);
+ wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Timed out waiting for device MD resume ACK\n");
+
+ reinit_completion(&t7xx_dev->pm_sr_ack);
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_RESUME_REQ_AP);
+ wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ dev_err(&pdev->dev, "[PM] Timed out waiting for device SAP resume ACK\n");
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->resume) {
+ ret = entity->resume(t7xx_dev, entity->entity_param);
+ if (ret)
+ dev_err(&pdev->dev, "[PM] Resume entry ID: %d err: %d\n",
+ entity->id, ret);
+ }
+ }
+
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+
+ return ret;
+}
+
+static int t7xx_pci_pm_resume_noirq(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct t7xx_pci_dev *t7xx_dev;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+ return 0;
+}
+
+static void t7xx_pci_shutdown(struct pci_dev *pdev)
+{
+ __t7xx_pci_pm_suspend(pdev);
+}
+
+static int t7xx_pci_pm_suspend(struct device *dev)
+{
+ return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int t7xx_pci_pm_resume(struct device *dev)
+{
+ return __t7xx_pci_pm_resume(to_pci_dev(dev), true);
+}
+
+static int t7xx_pci_pm_thaw(struct device *dev)
+{
+ return __t7xx_pci_pm_resume(to_pci_dev(dev), false);
+}
+
+static const struct dev_pm_ops t7xx_pci_pm_ops = {
+ .suspend = t7xx_pci_pm_suspend,
+ .resume = t7xx_pci_pm_resume,
+ .resume_noirq = t7xx_pci_pm_resume_noirq,
+ .freeze = t7xx_pci_pm_suspend,
+ .thaw = t7xx_pci_pm_thaw,
+ .poweroff = t7xx_pci_pm_suspend,
+ .restore = t7xx_pci_pm_resume,
+ .restore_noirq = t7xx_pci_pm_resume_noirq,
+};
+
static int t7xx_request_irq(struct pci_dev *pdev)
{
struct t7xx_pci_dev *t7xx_dev;
@@ -165,6 +585,10 @@ static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];

+ ret = t7xx_pci_pm_init(t7xx_dev);
+ if (ret)
+ return ret;
+
t7xx_pcie_mac_atr_init(t7xx_dev);
t7xx_pci_infracfg_ao_calc(t7xx_dev);
t7xx_mhccif_init(t7xx_dev);
@@ -214,6 +638,8 @@ static struct pci_driver t7xx_pci_driver = {
.id_table = t7xx_pci_table,
.probe = t7xx_pci_probe,
.remove = t7xx_pci_remove,
+ .driver.pm = &t7xx_pci_pm_ops,
+ .shutdown = t7xx_pci_shutdown,
};

module_pci_driver(t7xx_pci_driver);
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index b52aaa182a10..6310f31540ca 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -17,7 +17,9 @@
#ifndef __T7XX_PCI_H__
#define __T7XX_PCI_H__

+#include <linux/completion.h>
#include <linux/irqreturn.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/types.h>

@@ -48,6 +50,10 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
* @pdev: PCI device
* @base_addr: memory base addresses of HW components
* @md: modem interface
+ * @md_pm_entities: list of pm entities
+ * @md_pm_entity_mtx: protects md_pm_entities list
+ * @pm_sr_ack: ack from the device when went to sleep or woke up
+ * @md_pm_state: state for resume/suspend
* @ccmni_ctlb: context structure used to control the network data path
* @rgu_pci_irq_en: RGU callback isr registered and active
* @pools: pre allocated skb pools
@@ -60,8 +66,48 @@ struct t7xx_pci_dev {
struct pci_dev *pdev;
struct t7xx_addr_base base_addr;
struct t7xx_modem *md;
+
+ /* Low Power Items */
+ struct list_head md_pm_entities;
+ struct mutex md_pm_entity_mtx; /* Protects MD PM entities list */
+ struct completion pm_sr_ack;
+ atomic_t md_pm_state;
+
struct t7xx_ccmni_ctrl *ccmni_ctlb;
bool rgu_pci_irq_en;
};

+enum t7xx_pm_id {
+ PM_ENTITY_ID_CTRL1,
+ PM_ENTITY_ID_CTRL2,
+ PM_ENTITY_ID_DATA,
+};
+
+/* struct md_pm_entity - device power management entity
+ * @entity: list of PM Entities
+ * @suspend: callback invoked before sending D3 request to device
+ * @suspend_late: callback invoked after getting D3 ACK from device
+ * @resume_early: callback invoked before sending the resume request to device
+ * @resume: callback invoked after getting resume ACK from device
+ * @id: unique PM entity identifier
+ * @entity_param: parameter passed to the registered callbacks
+ *
+ * This structure is used to indicate PM operations required by internal
+ * HW modules such as CLDMA and DPMA.
+ */
+struct md_pm_entity {
+ struct list_head entity;
+ int (*suspend)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ void (*suspend_late)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ void (*resume_early)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ int (*resume)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ enum t7xx_pm_id id;
+ void *entity_param;
+};
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev);
+
#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index 73fab28848c6..c37b23087c8c 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -199,6 +199,7 @@ static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_comm

case EXCEPTION_EVENT:
t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+ t7xx_pci_pm_exp_detected(ctl->md->t7xx_dev);
t7xx_md_exception_handshake(ctl->md);

fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_REC_OK, FSM_EVENT_MD_EX,
@@ -299,6 +300,7 @@ static void fsm_routine_starting(struct t7xx_fsm_ctl *ctl)

fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
} else {
+ t7xx_pci_pm_init_late(md->t7xx_dev);
fsm_routine_ready(ctl);
}
}
--
2.17.1

Date: 2022-01-14 14:40:03
From: Martinez, Ricardo

Subject: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface

From: Haijun Liu <[email protected]>

The Data Plane Modem AP Interface (DPMAIF) HIF layer provides methods
for initialization, ISR, control, and event handling of the TX/RX flows.

DPMAIF TX
Exposes the `dpmaif_tx_send_skb` function, which the network device can
use to transmit packets.
The uplink data management uses a Descriptor Ring Buffer (DRB).
The first DRB entry is a message-type entry, followed by one or more
normal DRB entries. The message-type DRB holds the skb information,
while each normal DRB entry holds a pointer to the skb payload.
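
As a rough illustration of that accounting, the helper below (illustrative
only, not part of the driver) counts one message-type DRB plus one normal
DRB per payload segment of the skb:

  #include <linux/skbuff.h>

  /* Illustrative only: DRB entries consumed by one uplink skb - one
   * message-type DRB, one normal DRB for the linear data and one per
   * paged fragment.
   */
  static unsigned int example_drbs_for_skb(struct sk_buff *skb)
  {
          return 1 + 1 + skb_shinfo(skb)->nr_frags;
  }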

DPMAIF RX
The downlink buffer management uses Buffer Address Table (BAT) and
Packet Information Table (PIT) rings.
The BAT ring holds the addresses of the skb data buffers for the HW to
use, while the PIT contains metadata about a whole network packet,
including a reference to the BAT entry holding the data buffer address.
The driver reads the PIT and BAT entries written by the modem; when a
threshold is reached, the driver reloads the PIT and BAT rings.
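
To make the BAT/PIT relationship concrete, here is a deliberately simplified
sketch with hypothetical field names; the real tables are hardware-defined
and carry more information than this:

  /* Hypothetical, simplified view of the two downlink tables. A BAT entry
   * only publishes a DMA address the HW may fill; a PIT entry describes a
   * received packet and points back to the BAT entry holding its data.
   */
  struct example_bat_entry {
          __le64 buffer_addr;     /* DMA address of a preallocated skb buffer */
  };

  struct example_pit_entry {
          __le32 buffer_id;       /* Index of the BAT entry holding the data */
          __le32 data_len;        /* Number of valid bytes in that buffer */
  };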

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 4 +
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 487 ++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 251 ++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1227 ++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 115 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 724 ++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 89 ++
7 files changed, 2897 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 9eec2e2472fb..04a9ba50dc14 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -13,3 +13,7 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_port_proxy.o \
t7xx_port_ctrl_msg.o \
t7xx_port_wwan.o \
+ t7xx_hif_dpmaif.o \
+ t7xx_hif_dpmaif_tx.o \
+ t7xx_hif_dpmaif_rx.o \
+ t7xx_dpmaif.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
new file mode 100644
index 000000000000..b731d0be83ee
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -0,0 +1,487 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez<[email protected]>
+ *
+ * Contributors:
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/irqreturn.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+
+unsigned int t7xx_ring_buf_get_next_wr_idx(unsigned int buf_len, unsigned int buf_idx)
+{
+ buf_idx++;
+
+ return buf_idx < buf_len ? buf_idx : 0;
+}
+
+unsigned int t7xx_ring_buf_rd_wr_count(unsigned int total_cnt, unsigned int rd_idx,
+ unsigned int wr_idx, enum dpmaif_rdwr rd_wr)
+{
+ int pkt_cnt;
+
+ if (rd_wr == DPMAIF_READ)
+ pkt_cnt = wr_idx - rd_idx;
+ else
+ pkt_cnt = rd_idx - wr_idx - 1;
+
+ if (pkt_cnt < 0)
+ pkt_cnt += total_cnt;
+
+ return (unsigned int)pkt_cnt;
+}
+
+static void t7xx_dpmaif_enable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ t7xx_pcie_mac_set_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+ }
+}
+
+static void t7xx_dpmaif_disable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ t7xx_pcie_mac_clear_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+ }
+}
+
+static void t7xx_dpmaif_irq_cb(struct dpmaif_isr_para *isr_para)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = isr_para->dpmaif_ctrl;
+ struct dpmaif_hw_intr_st_para intr_status;
+ struct device *dev = dpmaif_ctrl->dev;
+ int i;
+
+ memset(&intr_status, 0, sizeof(intr_status));
+
+ if (t7xx_dpmaif_hw_get_intr_cnt(dpmaif_ctrl, &intr_status, isr_para->dlq_id) < 0) {
+ dev_err(dev, "Failed to get HW interrupt count\n");
+ return;
+ }
+
+ t7xx_pcie_mac_clear_int_status(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+
+ for (i = 0; i < intr_status.intr_cnt; i++) {
+ switch (intr_status.intr_types[i]) {
+ case DPF_INTR_UL_DONE:
+ t7xx_dpmaif_irq_tx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+ break;
+
+ case DPF_INTR_UL_DRB_EMPTY:
+ case DPF_INTR_UL_MD_NOTREADY:
+ case DPF_INTR_UL_MD_PWR_NOTREADY:
+ /* No need to log an error for these */
+ break;
+
+ case DPF_INTR_DL_BATCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: packet BAT count length error\n");
+ t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info);
+ break;
+
+ case DPF_INTR_DL_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: PIT count length error\n");
+ t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info);
+ break;
+
+ case DPF_INTR_DL_Q0_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: DLQ0 PIT count length error\n");
+ t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+ DPF_RX_QNO_DFT);
+ break;
+
+ case DPF_INTR_DL_Q1_PITCNT_LEN_ERR:
+ dev_err_ratelimited(dev, "DL interrupt: DLQ1 PIT count length error\n");
+ t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+ DPF_RX_QNO1);
+ break;
+
+ case DPF_INTR_DL_DONE:
+ case DPF_INTR_DL_Q0_DONE:
+ case DPF_INTR_DL_Q1_DONE:
+ t7xx_dpmaif_irq_rx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+ break;
+
+ default:
+ dev_err_ratelimited(dev, "DL interrupt error: unknown type : %d\n",
+ intr_status.intr_types[i]);
+ }
+ }
+}
+
+static irqreturn_t t7xx_dpmaif_isr_handler(int irq, void *data)
+{
+ struct dpmaif_isr_para *isr_para = data;
+ struct dpmaif_ctrl *dpmaif_ctrl;
+
+ dpmaif_ctrl = isr_para->dpmaif_ctrl;
+ if (dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+ dev_err(dpmaif_ctrl->dev, "Interrupt received before initializing DPMAIF\n");
+ return IRQ_HANDLED;
+ }
+
+ t7xx_pcie_mac_clear_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+ t7xx_dpmaif_irq_cb(isr_para);
+ t7xx_pcie_mac_set_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+ return IRQ_HANDLED;
+}
+
+static void t7xx_dpmaif_isr_parameter_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_isr_para *isr_para;
+ unsigned char i;
+
+ dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO0] = DPMAIF_INT;
+ dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO1] = DPMAIF2_INT;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ isr_para->dpmaif_ctrl = dpmaif_ctrl;
+ isr_para->dlq_id = i;
+ isr_para->pcie_int = dpmaif_ctrl->rxq_int_mapping[i];
+ }
+}
+
+static void t7xx_dpmaif_register_pcie_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct t7xx_pci_dev *t7xx_dev = dpmaif_ctrl->t7xx_dev;
+ struct dpmaif_isr_para *isr_para;
+ enum pcie_int int_type;
+ int i;
+
+ t7xx_dpmaif_isr_parameter_init(dpmaif_ctrl);
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ isr_para = &dpmaif_ctrl->isr_para[i];
+ int_type = isr_para->pcie_int;
+ t7xx_pcie_mac_clear_int(t7xx_dev, int_type);
+
+ t7xx_dev->intr_handler[int_type] = t7xx_dpmaif_isr_handler;
+ t7xx_dev->intr_thread[int_type] = NULL;
+ t7xx_dev->callback_param[int_type] = isr_para;
+
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, int_type);
+ t7xx_pcie_mac_set_int(t7xx_dev, int_type);
+ }
+}
+
+static int t7xx_dpmaif_rxtx_sw_allocs(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rx_q;
+ struct dpmaif_tx_queue *tx_q;
+ int ret, rx_idx, tx_idx, i;
+
+ ret = t7xx_dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, BAT_TYPE_NORMAL);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to allocate normal BAT table: %d\n", ret);
+ return ret;
+ }
+
+ ret = t7xx_dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, BAT_TYPE_FRAG);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to allocate frag BAT table: %d\n", ret);
+ goto err_free_normal_bat;
+ }
+
+ for (rx_idx = 0; rx_idx < DPMAIF_RXQ_NUM; rx_idx++) {
+ rx_q = &dpmaif_ctrl->rxq[rx_idx];
+ rx_q->index = rx_idx;
+ rx_q->dpmaif_ctrl = dpmaif_ctrl;
+ ret = t7xx_dpmaif_rxq_init(rx_q);
+ if (ret)
+ goto err_free_rxq;
+ }
+
+ for (tx_idx = 0; tx_idx < DPMAIF_TXQ_NUM; tx_idx++) {
+ tx_q = &dpmaif_ctrl->txq[tx_idx];
+ tx_q->index = tx_idx;
+ tx_q->dpmaif_ctrl = dpmaif_ctrl;
+ ret = t7xx_dpmaif_txq_init(tx_q);
+ if (ret)
+ goto err_free_txq;
+ }
+
+ ret = t7xx_dpmaif_tx_thread_init(dpmaif_ctrl);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to start TX thread\n");
+ goto err_free_txq;
+ }
+
+ ret = t7xx_dpmaif_bat_rel_wq_alloc(dpmaif_ctrl);
+ if (ret)
+ goto err_thread_rel;
+
+ return 0;
+
+err_thread_rel:
+ t7xx_dpmaif_tx_thread_rel(dpmaif_ctrl);
+
+err_free_txq:
+ for (i = 0; i < tx_idx; i++) {
+ tx_q = &dpmaif_ctrl->txq[i];
+ t7xx_dpmaif_txq_free(tx_q);
+ }
+
+err_free_rxq:
+ for (i = 0; i < rx_idx; i++) {
+ rx_q = &dpmaif_ctrl->rxq[i];
+ t7xx_dpmaif_rxq_free(rx_q);
+ }
+
+ t7xx_dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_frag);
+
+err_free_normal_bat:
+ t7xx_dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_req);
+
+ return ret;
+}
+
+static void t7xx_dpmaif_sw_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rx_q;
+ struct dpmaif_tx_queue *tx_q;
+ int i;
+
+ t7xx_dpmaif_tx_thread_rel(dpmaif_ctrl);
+ t7xx_dpmaif_bat_wq_rel(dpmaif_ctrl);
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ tx_q = &dpmaif_ctrl->txq[i];
+ t7xx_dpmaif_txq_free(tx_q);
+ }
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ rx_q = &dpmaif_ctrl->rxq[i];
+ t7xx_dpmaif_rxq_free(rx_q);
+ }
+}
+
+static int t7xx_dpmaif_start(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_hw_params hw_init_para;
+ struct dpmaif_rx_queue *rxq;
+ struct dpmaif_tx_queue *txq;
+ unsigned int buf_cnt;
+ int i, ret = 0;
+
+ if (dpmaif_ctrl->state == DPMAIF_STATE_PWRON)
+ return -EFAULT;
+
+ memset(&hw_init_para, 0, sizeof(hw_init_para));
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ rxq = &dpmaif_ctrl->rxq[i];
+ rxq->que_started = true;
+ rxq->index = i;
+ rxq->budget = rxq->bat_req->bat_size_cnt - 1;
+
+ hw_init_para.pkt_bat_base_addr[i] = rxq->bat_req->bat_bus_addr;
+ hw_init_para.pkt_bat_size_cnt[i] = rxq->bat_req->bat_size_cnt;
+ hw_init_para.pit_base_addr[i] = rxq->pit_bus_addr;
+ hw_init_para.pit_size_cnt[i] = rxq->pit_size_cnt;
+ hw_init_para.frg_bat_base_addr[i] = rxq->bat_frag->bat_bus_addr;
+ hw_init_para.frg_bat_size_cnt[i] = rxq->bat_frag->bat_size_cnt;
+ }
+
+ memset(dpmaif_ctrl->bat_req.bat_mask, 0,
+ dpmaif_ctrl->bat_req.bat_size_cnt * sizeof(unsigned char));
+
+ buf_cnt = dpmaif_ctrl->bat_req.bat_size_cnt - 1;
+ ret = t7xx_dpmaif_rx_buf_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, 0, buf_cnt, true);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to allocate RX buffer: %d\n",
+ ret);
+ return ret;
+ }
+
+ buf_cnt = dpmaif_ctrl->bat_frag.bat_size_cnt - 1;
+ ret = t7xx_dpmaif_rx_frag_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, buf_cnt, true);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to allocate frag RX buffer: %d\n",
+ ret);
+ goto err_free_normal_bat;
+ }
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ txq = &dpmaif_ctrl->txq[i];
+ txq->que_started = true;
+
+ hw_init_para.drb_base_addr[i] = txq->drb_bus_addr;
+ hw_init_para.drb_size_cnt[i] = txq->drb_size_cnt;
+ }
+
+ ret = t7xx_dpmaif_hw_init(dpmaif_ctrl, &hw_init_para);
+ if (ret) {
+ dev_err(dpmaif_ctrl->dev, "Failed to initialize DPMAIF HW: %d\n", ret);
+ goto err_free_frag_bat;
+ }
+
+ ret = t7xx_dpmaif_dl_snd_hw_bat_cnt(dpmaif_ctrl, rxq->bat_req->bat_size_cnt - 1);
+ if (ret)
+ goto err_free_frag_bat;
+
+ ret = t7xx_dpmaif_dl_snd_hw_frg_cnt(dpmaif_ctrl, rxq->bat_frag->bat_size_cnt - 1);
+ if (ret)
+ goto err_free_frag_bat;
+
+ t7xx_dpmaif_ul_clr_all_intr(&dpmaif_ctrl->hif_hw_info);
+ t7xx_dpmaif_dl_clr_all_intr(&dpmaif_ctrl->hif_hw_info);
+ dpmaif_ctrl->state = DPMAIF_STATE_PWRON;
+ t7xx_dpmaif_enable_irq(dpmaif_ctrl);
+ wake_up(&dpmaif_ctrl->tx_wq);
+ return 0;
+
+err_free_frag_bat:
+ t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+
+err_free_normal_bat:
+ t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+
+ return ret;
+}
+
+static void t7xx_dpmaif_stop_sw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ t7xx_dpmaif_tx_stop(dpmaif_ctrl);
+ t7xx_dpmaif_rx_stop(dpmaif_ctrl);
+}
+
+static void t7xx_dpmaif_stop_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ t7xx_dpmaif_hw_stop_all_txq(dpmaif_ctrl);
+ t7xx_dpmaif_hw_stop_all_rxq(dpmaif_ctrl);
+}
+
+static int t7xx_dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (!dpmaif_ctrl->dpmaif_sw_init_done) {
+ dev_err(dpmaif_ctrl->dev, "dpmaif SW init fail\n");
+ return -EFAULT;
+ }
+
+ if (dpmaif_ctrl->state == DPMAIF_STATE_PWROFF)
+ return -EFAULT;
+
+ t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+ dpmaif_ctrl->state = DPMAIF_STATE_PWROFF;
+ t7xx_dpmaif_stop_sw(dpmaif_ctrl);
+ t7xx_dpmaif_tx_clear(dpmaif_ctrl);
+ t7xx_dpmaif_rx_clear(dpmaif_ctrl);
+ return 0;
+}
+
+int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
+{
+ int ret = 0;
+
+ switch (state) {
+ case MD_STATE_WAITING_FOR_HS1:
+ ret = t7xx_dpmaif_start(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_EXCEPTION:
+ ret = t7xx_dpmaif_stop(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_STOPPED:
+ ret = t7xx_dpmaif_stop(dpmaif_ctrl);
+ break;
+
+ case MD_STATE_WAITING_TO_STOP:
+ t7xx_dpmaif_stop_hw(dpmaif_ctrl);
+ break;
+
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * t7xx_dpmaif_hif_init() - Initialize data path.
+ * @t7xx_dev: MTK context structure.
+ * @callbacks: Callbacks implemented by the network layer to handle RX skb and
+ * event notifications.
+ *
+ * Allocate and initialize datapath control block.
+ * Register datapath ISR, TX and RX resources.
+ *
+ * Return:
+ * * dpmaif_ctrl pointer - Pointer to DPMAIF context structure.
+ * * NULL - In case of error.
+ */
+struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
+ struct dpmaif_callbacks *callbacks)
+{
+ struct device *dev = &t7xx_dev->pdev->dev;
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ int ret;
+
+ if (!callbacks)
+ return NULL;
+
+ dpmaif_ctrl = devm_kzalloc(dev, sizeof(*dpmaif_ctrl), GFP_KERNEL);
+ if (!dpmaif_ctrl)
+ return NULL;
+
+ dpmaif_ctrl->t7xx_dev = t7xx_dev;
+ dpmaif_ctrl->callbacks = callbacks;
+ dpmaif_ctrl->dev = dev;
+ dpmaif_ctrl->dpmaif_sw_init_done = false;
+ dpmaif_ctrl->hif_hw_info.pcie_base = t7xx_dev->base_addr.pcie_ext_reg_base -
+ t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+ t7xx_dpmaif_register_pcie_irq(dpmaif_ctrl);
+ t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+
+ ret = t7xx_dpmaif_rxtx_sw_allocs(dpmaif_ctrl);
+ if (ret) {
+ dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
+ return NULL;
+ }
+
+ dpmaif_ctrl->dpmaif_sw_init_done = true;
+ return dpmaif_ctrl;
+}
+
+void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (dpmaif_ctrl->dpmaif_sw_init_done) {
+ t7xx_dpmaif_stop(dpmaif_ctrl);
+ t7xx_dpmaif_sw_release(dpmaif_ctrl);
+ dpmaif_ctrl->dpmaif_sw_init_done = false;
+ }
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
new file mode 100644
index 000000000000..3404e2a75566
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -0,0 +1,251 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez<[email protected]>
+ *
+ * Contributors:
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_DPMA_TX_H__
+#define __T7XX_DPMA_TX_H__
+
+#include <linux/mm_types.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/wait.h>
+
+#include "t7xx_common.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_RXQ_NUM 2
+#define DPMAIF_TXQ_NUM 5
+
+enum dpmaif_rdwr {
+ DPMAIF_READ,
+ DPMAIF_WRITE,
+};
+
+struct dpmaif_isr_en_mask {
+ unsigned int ap_ul_l2intr_en_msk;
+ unsigned int ap_dl_l2intr_en_msk;
+ unsigned int ap_udl_ip_busy_en_msk;
+ unsigned int ap_dl_l2intr_err_en_msk;
+};
+
+struct dpmaif_ul {
+ bool que_started;
+ unsigned char reserve[3];
+ dma_addr_t drb_base;
+ unsigned int drb_size_cnt;
+};
+
+struct dpmaif_dl {
+ bool que_started;
+ unsigned char reserve[3];
+ dma_addr_t pit_base;
+ unsigned int pit_size_cnt;
+ dma_addr_t bat_base;
+ unsigned int bat_size_cnt;
+ dma_addr_t frg_base;
+ unsigned int frg_size_cnt;
+ unsigned int pit_seq;
+};
+
+struct dpmaif_dl_hwq {
+ unsigned int bat_remain_size;
+ unsigned int bat_pkt_bufsz;
+ unsigned int frg_pkt_bufsz;
+ unsigned int bat_rsv_length;
+ unsigned int pkt_bid_max_cnt;
+ unsigned int pkt_alignment;
+ unsigned int mtu_size;
+ unsigned int chk_pit_num;
+ unsigned int chk_bat_num;
+ unsigned int chk_frg_num;
+};
+
+/* Structure of DL BAT */
+struct dpmaif_cur_rx_skb_info {
+ bool msg_pit_received;
+ struct sk_buff *cur_skb;
+ unsigned int cur_chn_idx;
+ unsigned int check_sum;
+ unsigned int pit_dp;
+ unsigned int pkt_type;
+ int err_payload;
+};
+
+struct dpmaif_bat {
+ unsigned int p_buffer_addr;
+ unsigned int buffer_addr_ext;
+};
+
+struct dpmaif_bat_skb {
+ struct sk_buff *skb;
+ dma_addr_t data_bus_addr;
+ unsigned int data_len;
+};
+
+struct dpmaif_bat_page {
+ struct page *page;
+ dma_addr_t data_bus_addr;
+ unsigned int offset;
+ unsigned int data_len;
+};
+
+enum bat_type {
+ BAT_TYPE_NORMAL = 0,
+ BAT_TYPE_FRAG = 1,
+};
+
+struct dpmaif_bat_request {
+ void *bat_base;
+ dma_addr_t bat_bus_addr;
+ unsigned int bat_size_cnt;
+ unsigned short bat_wr_idx;
+ unsigned short bat_release_rd_idx;
+ void *bat_skb;
+ unsigned int skb_pkt_cnt;
+ unsigned int pkt_buf_sz;
+ unsigned char *bat_mask;
+ atomic_t refcnt;
+ spinlock_t mask_lock; /* Protects BAT mask */
+ enum bat_type type;
+};
+
+struct dpmaif_rx_queue {
+ unsigned char index;
+ bool que_started;
+ unsigned short budget;
+
+ void *pit_base;
+ dma_addr_t pit_bus_addr;
+ unsigned int pit_size_cnt;
+
+ unsigned short pit_rd_idx;
+ unsigned short pit_wr_idx;
+ unsigned short pit_release_rd_idx;
+
+ struct dpmaif_bat_request *bat_req;
+ struct dpmaif_bat_request *bat_frag;
+
+ wait_queue_head_t rx_wq;
+ struct task_struct *rx_thread;
+ struct sk_buff_head skb_list;
+ unsigned int skb_list_max_len;
+
+ struct workqueue_struct *worker;
+ struct work_struct dpmaif_rxq_work;
+
+ atomic_t rx_processing;
+
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ unsigned int expect_pit_seq;
+ unsigned int pit_remain_release_cnt;
+ struct dpmaif_cur_rx_skb_info rx_data_info;
+};
+
+struct dpmaif_tx_queue {
+ unsigned char index;
+ bool que_started;
+ atomic_t tx_budget;
+ void *drb_base;
+ dma_addr_t drb_bus_addr;
+ unsigned int drb_size_cnt;
+ unsigned short drb_wr_idx;
+ unsigned short drb_rd_idx;
+ unsigned short drb_release_rd_idx;
+ unsigned short last_ch_id;
+ void *drb_skb_base;
+ wait_queue_head_t req_wq;
+ struct workqueue_struct *worker;
+ struct work_struct dpmaif_tx_work;
+ spinlock_t tx_lock; /* Protects txq DRB */
+ atomic_t tx_processing;
+
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ spinlock_t tx_skb_lock; /* Protects TX thread skb list */
+ struct list_head tx_skb_queue;
+ unsigned int tx_submit_skb_cnt;
+ unsigned int tx_list_max_len;
+ unsigned int tx_skb_stat;
+ bool drb_lack;
+};
+
+struct dpmaif_isr_para {
+ struct dpmaif_ctrl *dpmaif_ctrl;
+ unsigned char pcie_int;
+ unsigned char dlq_id;
+};
+
+enum dpmaif_state {
+ DPMAIF_STATE_MIN,
+ DPMAIF_STATE_PWROFF,
+ DPMAIF_STATE_PWRON,
+ DPMAIF_STATE_EXCEPTION,
+ DPMAIF_STATE_MAX
+};
+
+struct dpmaif_hw_info {
+ void __iomem *pcie_base;
+ struct dpmaif_dl dl_que[DPMAIF_RXQ_NUM];
+ struct dpmaif_ul ul_que[DPMAIF_TXQ_NUM];
+ struct dpmaif_dl_hwq dl_que_hw[DPMAIF_RXQ_NUM];
+ struct dpmaif_isr_en_mask isr_en_mask;
+};
+
+enum dpmaif_txq_state {
+ DMPAIF_TXQ_STATE_IRQ,
+ DMPAIF_TXQ_STATE_FULL,
+};
+
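+/* Callbacks implemented by the network layer: state_notify reports TX queue
+ * flow control events (queue full, or resumed from the release IRQ) and
+ * recv_skb hands received packets up the stack.
+ */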
+struct dpmaif_callbacks {
+ void (*state_notify)(struct t7xx_pci_dev *t7xx_dev,
+ enum dpmaif_txq_state state, int txqt);
+ void (*recv_skb)(struct t7xx_pci_dev *t7xx_dev, struct sk_buff *skb);
+};
+
+struct dpmaif_ctrl {
+ struct device *dev;
+ struct t7xx_pci_dev *t7xx_dev;
+ enum dpmaif_state state;
+ bool dpmaif_sw_init_done;
+ struct dpmaif_hw_info hif_hw_info;
+ struct dpmaif_tx_queue txq[DPMAIF_TXQ_NUM];
+ struct dpmaif_rx_queue rxq[DPMAIF_RXQ_NUM];
+
+ unsigned char rxq_int_mapping[DPMAIF_RXQ_NUM];
+ struct dpmaif_isr_para isr_para[DPMAIF_RXQ_NUM];
+
+ struct dpmaif_bat_request bat_req;
+ struct dpmaif_bat_request bat_frag;
+ struct workqueue_struct *bat_release_wq;
+ struct work_struct bat_release_work;
+
+ wait_queue_head_t tx_wq;
+ struct task_struct *tx_thread;
+
+ struct dpmaif_callbacks *callbacks;
+};
+
+struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
+ struct dpmaif_callbacks *callbacks);
+void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state);
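+/* Ring buffer helpers: with DPMAIF_READ the count is the number of entries
+ * pending between rd_idx and wr_idx, with DPMAIF_WRITE it is the free space
+ * left for writing.
+ */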
+unsigned int t7xx_ring_buf_get_next_wr_idx(unsigned int buf_len, unsigned int buf_idx);
+unsigned int t7xx_ring_buf_rd_wr_count(unsigned int total_cnt, unsigned int rd_idx,
+ unsigned int wr_idx, enum dpmaif_rdwr);
+
+#endif /* __T7XX_DPMA_TX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
new file mode 100644
index 000000000000..7df7ffea8b14
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -0,0 +1,1227 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/err.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_BAT_COUNT 8192
+#define DPMAIF_FRG_COUNT 4814
+#define DPMAIF_PIT_COUNT (DPMAIF_BAT_COUNT * 2)
+
+#define DPMAIF_BAT_CNT_THRESHOLD 30
+#define DPMAIF_PIT_CNT_THRESHOLD 60
+#define DPMAIF_RX_PUSH_THRESHOLD_MASK GENMASK(2, 0)
+#define DPMAIF_NOTIFY_RELEASE_COUNT 128
+#define DPMAIF_POLL_PIT_TIME_US 20
+#define DPMAIF_POLL_PIT_MAX_TIME_US 2000
+#define DPMAIF_WQ_TIME_LIMIT_MS 2
+#define DPMAIF_CS_RESULT_PASS 0
+
+/* Packet type */
+#define DES_PT_PD 0
+#define DES_PT_MSG 1
+/* Buffer type */
+#define PKT_BUF_FRAG 1
+
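+/* The buffer ID is split across the PIT entry: NORMAL_PIT_BUFFER_ID in the
+ * header holds the low 13 bits and NORMAL_PIT_H_BID in the footer supplies
+ * the upper bits.
+ */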
+static unsigned int t7xx_normal_pit_bid(const struct dpmaif_normal_pit *pit_info)
+{
+ u32 value;
+
+ value = FIELD_GET(NORMAL_PIT_H_BID, le32_to_cpu(pit_info->pit_footer));
+ value <<= 13;
+ value += FIELD_GET(NORMAL_PIT_BUFFER_ID, le32_to_cpu(pit_info->pit_header));
+ return value;
+}
+
+static int t7xx_dpmaif_net_rx_push_thread(void *arg)
+{
+ struct dpmaif_rx_queue *q = arg;
+ struct dpmaif_ctrl *hif_ctrl;
+ struct dpmaif_callbacks *cb;
+
+ hif_ctrl = q->dpmaif_ctrl;
+ cb = hif_ctrl->callbacks;
+
+ while (!kthread_should_stop()) {
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ if (skb_queue_empty(&q->skb_list)) {
+ if (wait_event_interruptible(q->rx_wq,
+ !skb_queue_empty(&q->skb_list) ||
+ kthread_should_stop()))
+ continue;
+
+ if (kthread_should_stop())
+ break;
+ }
+
+ spin_lock_irqsave(&q->skb_list.lock, flags);
+ skb = __skb_dequeue(&q->skb_list);
+ spin_unlock_irqrestore(&q->skb_list.lock, flags);
+ if (!skb)
+ continue;
+
+ cb->recv_skb(hif_ctrl->t7xx_dev, skb);
+ cond_resched();
+ }
+
+ return 0;
+}
+
+static int t7xx_dpmaif_update_bat_wr_idx(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned char q_num, const unsigned int bat_cnt)
+{
+ struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
+ unsigned short old_rl_idx, new_wr_idx, old_wr_idx;
+ struct dpmaif_bat_request *bat_req = rxq->bat_req;
+
+ if (!rxq->que_started) {
+ dev_err(dpmaif_ctrl->dev, "RX queue %d has not been started\n", rxq->index);
+ return -EINVAL;
+ }
+
+ old_rl_idx = bat_req->bat_release_rd_idx;
+ old_wr_idx = bat_req->bat_wr_idx;
+ new_wr_idx = old_wr_idx + bat_cnt;
+
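+	/* Flow control check: the new write index must not pass the release read
+	 * index; the second check handles the wrap-around case.
+	 */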
+ if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx) {
+		dev_err(dpmaif_ctrl->dev, "RX BAT flow check failed\n");
+ return -EINVAL;
+ }
+
+ if (new_wr_idx >= bat_req->bat_size_cnt) {
+ new_wr_idx -= bat_req->bat_size_cnt;
+ if (new_wr_idx >= old_rl_idx) {
+			dev_err(dpmaif_ctrl->dev, "RX BAT flow check failed\n");
+ return -EINVAL;
+ }
+ }
+
+ bat_req->bat_wr_idx = new_wr_idx;
+ return 0;
+}
+
+static bool t7xx_alloc_and_map_skb_info(const struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned int size, struct dpmaif_bat_skb *cur_skb)
+{
+ dma_addr_t data_bus_addr;
+ struct sk_buff *skb;
+ size_t data_len;
+
+ skb = __dev_alloc_skb(size, GFP_KERNEL);
+ if (!skb)
+ return false;
+
+ data_len = t7xx_skb_data_area_size(skb);
+
+ data_bus_addr = dma_map_single(dpmaif_ctrl->dev, skb->data, data_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(dpmaif_ctrl->dev, data_bus_addr)) {
+ dev_err_ratelimited(dpmaif_ctrl->dev, "DMA mapping error\n");
+ dev_kfree_skb_any(skb);
+ return false;
+ }
+
+ cur_skb->skb = skb;
+ cur_skb->data_bus_addr = data_bus_addr;
+ cur_skb->data_len = data_len;
+
+ return true;
+}
+
+static void t7xx_unmap_bat_skb(struct device *dev, struct dpmaif_bat_skb *bat_skb_base,
+ unsigned int index)
+{
+ struct dpmaif_bat_skb *bat_skb = bat_skb_base + index;
+
+ if (bat_skb->skb) {
+ dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
+ kfree_skb(bat_skb->skb);
+ bat_skb->skb = NULL;
+ }
+}
+
+/**
+ * t7xx_dpmaif_rx_buf_alloc() - Allocate buffers for the BAT ring.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @q_num: Queue number.
+ * @buf_cnt: Number of buffers to allocate.
+ * @initial: Indicates if the ring is being populated for the first time.
+ *
+ * Allocate skb and store the start address of the data buffer into the BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Negative error code in case of failure.
+ */
+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req,
+ const unsigned char q_num, const unsigned int buf_cnt,
+ const bool initial)
+{
+ unsigned int i, bat_cnt, bat_max_cnt, hw_wr_idx, alloc_cnt = buf_cnt;
+ unsigned short bat_start_idx;
+ int ret;
+
+ if (!alloc_cnt || alloc_cnt > bat_req->bat_size_cnt)
+ return -EINVAL;
+
+ /* Check BAT buffer space */
+ bat_max_cnt = bat_req->bat_size_cnt;
+
+ bat_cnt = t7xx_ring_buf_rd_wr_count(bat_max_cnt, bat_req->bat_release_rd_idx,
+ bat_req->bat_wr_idx, DPMAIF_WRITE);
+ if (alloc_cnt > bat_cnt)
+ return -ENOMEM;
+
+ bat_start_idx = bat_req->bat_wr_idx;
+
+ for (i = 0; i < alloc_cnt; i++) {
+ unsigned short cur_bat_idx = bat_start_idx + i;
+ struct dpmaif_bat_skb *cur_skb;
+ struct dpmaif_bat *cur_bat;
+
+ if (cur_bat_idx >= bat_max_cnt)
+ cur_bat_idx -= bat_max_cnt;
+
+ cur_skb = (struct dpmaif_bat_skb *)bat_req->bat_skb + cur_bat_idx;
+ if (!cur_skb->skb &&
+ !t7xx_alloc_and_map_skb_info(dpmaif_ctrl, bat_req->pkt_buf_sz, cur_skb))
+ break;
+
+ cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+ cur_bat->buffer_addr_ext = upper_32_bits(cur_skb->data_bus_addr);
+ cur_bat->p_buffer_addr = lower_32_bits(cur_skb->data_bus_addr);
+ }
+
+ if (!i)
+ return -ENOMEM;
+
+ ret = t7xx_dpmaif_update_bat_wr_idx(dpmaif_ctrl, q_num, i);
+ if (ret)
+ goto err_unmap_skbs;
+
+ if (!initial) {
+ ret = t7xx_dpmaif_dl_snd_hw_bat_cnt(dpmaif_ctrl, i);
+ if (ret)
+ goto err_unmap_skbs;
+
+ hw_wr_idx = t7xx_dpmaif_dl_get_bat_wr_idx(&dpmaif_ctrl->hif_hw_info,
+ DPF_RX_QNO_DFT);
+ if (hw_wr_idx != bat_req->bat_wr_idx) {
+ ret = -EFAULT;
+ dev_err(dpmaif_ctrl->dev, "Write index mismatch in RX ring\n");
+ goto err_unmap_skbs;
+ }
+ }
+
+ return 0;
+
+err_unmap_skbs:
+ while (--i > 0)
+ t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+
+ return ret;
+}
+
+static int t7xx_dpmaifq_release_pit_entry(struct dpmaif_rx_queue *rxq,
+ const unsigned short rel_entry_num)
+{
+ unsigned short old_sw_rel_idx, new_sw_rel_idx, old_hw_wr_idx;
+ int ret;
+
+ if (!rxq->que_started)
+ return 0;
+
+ if (rel_entry_num >= rxq->pit_size_cnt) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Invalid PIT release index\n");
+ return -EINVAL;
+ }
+
+ old_sw_rel_idx = rxq->pit_release_rd_idx;
+ new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+ old_hw_wr_idx = rxq->pit_wr_idx;
+ if (old_hw_wr_idx < old_sw_rel_idx && new_sw_rel_idx >= rxq->pit_size_cnt)
+ new_sw_rel_idx -= rxq->pit_size_cnt;
+
+ ret = t7xx_dpmaif_dlq_add_pit_remain_cnt(rxq->dpmaif_ctrl, rxq->index, rel_entry_num);
+ if (ret) {
+ dev_err(rxq->dpmaif_ctrl->dev, "PIT release failure: %d\n", ret);
+ return ret;
+ }
+
+ rxq->pit_release_rd_idx = new_sw_rel_idx;
+ return 0;
+}
+
+static void t7xx_dpmaif_set_bat_mask(struct device *dev, struct dpmaif_bat_request *bat_req,
+ unsigned int idx)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&bat_req->mask_lock, flags);
+ bat_req->bat_mask[idx] = 1;
+ spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+}
+
+static int t7xx_frag_bat_cur_bid_check(struct dpmaif_rx_queue *rxq,
+ const unsigned int cur_bid)
+{
+ struct dpmaif_bat_request *bat_frag = rxq->bat_frag;
+ struct dpmaif_bat_page *bat_page;
+
+ if (cur_bid >= DPMAIF_FRG_COUNT)
+ return -EINVAL;
+
+ bat_page = bat_frag->bat_skb + cur_bid;
+ if (!bat_page->page)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void t7xx_unmap_bat_page(struct device *dev, struct dpmaif_bat_page *bat_page_base,
+ unsigned int index)
+{
+ struct dpmaif_bat_page *bat_page = bat_page_base + index;
+
+ if (bat_page->page) {
+ dma_unmap_page(dev, bat_page->data_bus_addr, bat_page->data_len, DMA_FROM_DEVICE);
+ put_page(bat_page->page);
+ bat_page->page = NULL;
+ }
+}
+
+/**
+ * t7xx_dpmaif_rx_frag_alloc() - Allocate buffers for the Fragment BAT ring.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @buf_cnt: Number of buffers to allocate.
+ * @initial: Indicates if the ring is being populated for the first time.
+ *
+ * Fragment BAT is used when the received packet does not fit in a normal BAT entry.
+ * This function allocates a page fragment and stores the start address of the page
+ * into the Fragment BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Negative error code in case of failure.
+ */
+int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ const unsigned int buf_cnt, const bool initial)
+{
+ struct dpmaif_bat_page *bat_skb = bat_req->bat_skb;
+ unsigned short cur_bat_idx = bat_req->bat_wr_idx;
+ unsigned int buf_space;
+ int ret, i;
+
+ if (!buf_cnt || buf_cnt > bat_req->bat_size_cnt)
+ return -EINVAL;
+
+ buf_space = t7xx_ring_buf_rd_wr_count(bat_req->bat_size_cnt,
+ bat_req->bat_release_rd_idx, bat_req->bat_wr_idx,
+ DPMAIF_WRITE);
+ if (buf_cnt > buf_space) {
+ dev_err(dpmaif_ctrl->dev,
+ "Requested more buffers than the space available in RX frag ring\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < buf_cnt; i++) {
+ struct dpmaif_bat_page *cur_page = bat_skb + cur_bat_idx;
+ struct dpmaif_bat *cur_bat;
+ dma_addr_t data_base_addr;
+
+ if (!cur_page->page) {
+ unsigned long offset;
+ struct page *page;
+ void *data;
+
+ data = netdev_alloc_frag(bat_req->pkt_buf_sz);
+ if (!data)
+ break;
+
+ page = virt_to_head_page(data);
+ offset = data - page_address(page);
+
+ data_base_addr = dma_map_page(dpmaif_ctrl->dev, page, offset,
+ bat_req->pkt_buf_sz, DMA_FROM_DEVICE);
+ if (dma_mapping_error(dpmaif_ctrl->dev, data_base_addr)) {
+ dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+ put_page(virt_to_head_page(data));
+ break;
+ }
+
+ cur_page->page = page;
+ cur_page->data_bus_addr = data_base_addr;
+ cur_page->offset = offset;
+ cur_page->data_len = bat_req->pkt_buf_sz;
+ }
+
+ data_base_addr = cur_page->data_bus_addr;
+ cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+ cur_bat->buffer_addr_ext = upper_32_bits(data_base_addr);
+ cur_bat->p_buffer_addr = lower_32_bits(data_base_addr);
+ cur_bat_idx = t7xx_ring_buf_get_next_wr_idx(bat_req->bat_size_cnt, cur_bat_idx);
+ }
+
+ bat_req->bat_wr_idx = cur_bat_idx;
+
+ if (!initial)
+ t7xx_dpmaif_dl_snd_hw_frg_cnt(dpmaif_ctrl, i);
+
+ ret = i < buf_cnt ? -ENOMEM : 0;
+ if (ret && initial) {
+ while (--i > 0)
+ t7xx_unmap_bat_page(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+ }
+
+ return ret;
+}
+
+static int t7xx_dpmaif_set_frag_to_skb(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ struct sk_buff *skb)
+{
+ unsigned long long data_bus_addr, data_base_addr;
+ struct device *dev = rxq->dpmaif_ctrl->dev;
+ struct dpmaif_bat_page *page_info;
+ unsigned int data_len;
+ int data_offset;
+
+	page_info = rxq->bat_frag->bat_skb;
+	page_info += t7xx_normal_pit_bid(pkt_info);
+	if (!page_info->page)
+		return -EINVAL;
+
+	dma_unmap_page(dev, page_info->data_bus_addr, page_info->data_len, DMA_FROM_DEVICE);
+
+ data_bus_addr = le32_to_cpu(pkt_info->data_addr_ext);
+ data_bus_addr = (data_bus_addr << 32) + le32_to_cpu(pkt_info->p_data_addr);
+ data_base_addr = page_info->data_bus_addr;
+ data_offset = data_bus_addr - data_base_addr;
+ data_offset += page_info->offset;
+ data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, le32_to_cpu(pkt_info->pit_header));
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page_info->page,
+ data_offset, data_len, page_info->data_len);
+
+ page_info->page = NULL;
+ page_info->offset = 0;
+ page_info->data_len = 0;
+ return 0;
+}
+
+static int t7xx_dpmaif_get_frag(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ const struct dpmaif_cur_rx_skb_info *skb_info)
+{
+ unsigned int cur_bid = t7xx_normal_pit_bid(pkt_info);
+ int ret;
+
+ ret = t7xx_frag_bat_cur_bid_check(rxq, cur_bid);
+ if (ret < 0)
+ return ret;
+
+ ret = t7xx_dpmaif_set_frag_to_skb(rxq, pkt_info, skb_info->cur_skb);
+ if (ret < 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Failed to set frag data to skb: %d\n", ret);
+ return ret;
+ }
+
+ t7xx_dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_frag, cur_bid);
+ return 0;
+}
+
+static int t7xx_bat_cur_bid_check(struct dpmaif_rx_queue *rxq, const unsigned int cur_bid)
+{
+ struct dpmaif_bat_skb *bat_skb = rxq->bat_req->bat_skb;
+
+ bat_skb += cur_bid;
+ if (cur_bid >= DPMAIF_BAT_COUNT || !bat_skb->skb)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int t7xx_dpmaif_read_pit_seq(const struct dpmaif_normal_pit *pit)
+{
+ return FIELD_GET(NORMAL_PIT_PIT_SEQ, le32_to_cpu(pit->pit_footer));
+}
+
+static int t7xx_dpmaif_check_pit_seq(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pit)
+{
+ unsigned int cur_pit_seq, expect_pit_seq = rxq->expect_pit_seq;
+
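+	/* Poll until the HW has written the expected PIT sequence number, then
+	 * advance the expected value modulo DPMAIF_DL_PIT_SEQ_VALUE.
+	 */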
+ if (read_poll_timeout_atomic(t7xx_dpmaif_read_pit_seq, cur_pit_seq,
+ cur_pit_seq == expect_pit_seq, DPMAIF_POLL_PIT_TIME_US,
+ DPMAIF_POLL_PIT_MAX_TIME_US, false, pit))
+ return -EFAULT;
+
+ rxq->expect_pit_seq++;
+ if (rxq->expect_pit_seq >= DPMAIF_DL_PIT_SEQ_VALUE)
+ rxq->expect_pit_seq = 0;
+
+ return 0;
+}
+
+static unsigned int t7xx_dpmaif_avail_pkt_bat_cnt(struct dpmaif_bat_request *bat_req)
+{
+ unsigned long flags;
+ unsigned int i;
+
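+	/* Count consecutive BAT entries, starting at the release read index, whose
+	 * mask bit is set and which can therefore be released back to the ring.
+	 */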
+ spin_lock_irqsave(&bat_req->mask_lock, flags);
+ for (i = 0; i < bat_req->bat_size_cnt; i++) {
+ unsigned int index = bat_req->bat_release_rd_idx + i;
+
+ if (index >= bat_req->bat_size_cnt)
+ index -= bat_req->bat_size_cnt;
+
+ if (!bat_req->bat_mask[index])
+ break;
+ }
+
+ spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+ return i;
+}
+
+static int t7xx_dpmaif_release_bat_entry(const struct dpmaif_rx_queue *rxq,
+ const unsigned int rel_entry_num,
+ const enum bat_type buf_type)
+{
+ unsigned short old_sw_rel_idx, new_sw_rel_idx, hw_rd_idx;
+ struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+ struct dpmaif_bat_request *bat;
+ unsigned long flags;
+ unsigned int i;
+
+ if (!rxq->que_started || !rel_entry_num)
+ return -EINVAL;
+
+ if (buf_type == BAT_TYPE_FRAG) {
+ bat = rxq->bat_frag;
+ hw_rd_idx = t7xx_dpmaif_dl_get_frg_rd_idx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ } else {
+ bat = rxq->bat_req;
+ hw_rd_idx = t7xx_dpmaif_dl_get_bat_rd_idx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ }
+
+ if (rel_entry_num >= bat->bat_size_cnt)
+ return -EINVAL;
+
+ old_sw_rel_idx = bat->bat_release_rd_idx;
+ new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+
+	/* No need to release if the queue is empty */
+ if (bat->bat_wr_idx == old_sw_rel_idx)
+ return 0;
+
+ if (hw_rd_idx >= old_sw_rel_idx) {
+ if (new_sw_rel_idx > hw_rd_idx)
+ return -EINVAL;
+ }
+
+ if (new_sw_rel_idx >= bat->bat_size_cnt) {
+ new_sw_rel_idx -= bat->bat_size_cnt;
+ if (new_sw_rel_idx > hw_rd_idx)
+ return -EINVAL;
+ }
+
+ spin_lock_irqsave(&bat->mask_lock, flags);
+ for (i = 0; i < rel_entry_num; i++) {
+ unsigned int index = bat->bat_release_rd_idx + i;
+
+ if (index >= bat->bat_size_cnt)
+ index -= bat->bat_size_cnt;
+
+ bat->bat_mask[index] = 0;
+ }
+
+ spin_unlock_irqrestore(&bat->mask_lock, flags);
+ bat->bat_release_rd_idx = new_sw_rel_idx;
+ return rel_entry_num;
+}
+
+static int t7xx_dpmaif_pit_release_and_add(struct dpmaif_rx_queue *rxq)
+{
+ int ret;
+
+ if (rxq->pit_remain_release_cnt < DPMAIF_PIT_CNT_THRESHOLD)
+ return 0;
+
+ ret = t7xx_dpmaifq_release_pit_entry(rxq, rxq->pit_remain_release_cnt);
+ if (ret)
+ return ret;
+
+ rxq->pit_remain_release_cnt = 0;
+ return 0;
+}
+
+static int t7xx_dpmaif_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+ unsigned int bid_cnt;
+ int ret;
+
+ bid_cnt = t7xx_dpmaif_avail_pkt_bat_cnt(rxq->bat_req);
+ if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+ return 0;
+
+ ret = t7xx_dpmaif_release_bat_entry(rxq, bid_cnt, BAT_TYPE_NORMAL);
+ if (ret <= 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Release PKT BAT failed: %d\n", ret);
+ return ret;
+ }
+
+ ret = t7xx_dpmaif_rx_buf_alloc(rxq->dpmaif_ctrl, rxq->bat_req, rxq->index, bid_cnt, false);
+ if (ret < 0)
+ dev_err(rxq->dpmaif_ctrl->dev, "Allocate new RX buffer failed: %d\n", ret);
+
+ return ret;
+}
+
+static int t7xx_dpmaif_frag_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+ unsigned int bid_cnt;
+ int ret;
+
+ bid_cnt = t7xx_dpmaif_avail_pkt_bat_cnt(rxq->bat_frag);
+ if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+ return 0;
+
+ ret = t7xx_dpmaif_release_bat_entry(rxq, bid_cnt, BAT_TYPE_FRAG);
+ if (ret <= 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Release BAT entry failed: %d\n", ret);
+ return ret;
+ }
+
+ return t7xx_dpmaif_rx_frag_alloc(rxq->dpmaif_ctrl, rxq->bat_frag, bid_cnt, false);
+}
+
+static void t7xx_dpmaif_parse_msg_pit(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_msg_pit *msg_pit,
+ struct dpmaif_cur_rx_skb_info *skb_info)
+{
+ skb_info->cur_chn_idx = FIELD_GET(MSG_PIT_CHANNEL_ID, le32_to_cpu(msg_pit->dword1));
+ skb_info->check_sum = FIELD_GET(MSG_PIT_CHECKSUM, le32_to_cpu(msg_pit->dword1));
+ skb_info->pit_dp = FIELD_GET(MSG_PIT_DP, le32_to_cpu(msg_pit->dword1));
+ skb_info->pkt_type = FIELD_GET(MSG_PIT_IP, le32_to_cpu(msg_pit->dword4));
+}
+
+static int t7xx_dpmaif_set_data_to_skb(const struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ struct dpmaif_cur_rx_skb_info *skb_info)
+{
+ unsigned long long data_bus_addr, data_base_addr;
+ struct device *dev = rxq->dpmaif_ctrl->dev;
+ struct dpmaif_bat_skb *bat_skb;
+ unsigned int data_len;
+ struct sk_buff *skb;
+ int data_offset;
+
+ bat_skb = rxq->bat_req->bat_skb;
+ bat_skb += t7xx_normal_pit_bid(pkt_info);
+ dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
+
+ data_bus_addr = le32_to_cpu(pkt_info->data_addr_ext);
+ data_bus_addr = (data_bus_addr << 32) + le32_to_cpu(pkt_info->p_data_addr);
+ data_base_addr = bat_skb->data_bus_addr;
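+	/* The PIT reports the bus address where the payload begins; the skb offset
+	 * is its distance from the mapped buffer base address.
+	 */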
+ data_offset = data_bus_addr - data_base_addr;
+ data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, le32_to_cpu(pkt_info->pit_header));
+ skb = bat_skb->skb;
+ skb->len = 0;
+ skb_reset_tail_pointer(skb);
+ skb_reserve(skb, data_offset);
+
+ if (skb->tail + data_len > skb->end) {
+ dev_err(dev, "No buffer space available\n");
+ return -ENOBUFS;
+ }
+
+ skb_put(skb, data_len);
+ skb_info->cur_skb = skb;
+ bat_skb->skb = NULL;
+ return 0;
+}
+
+static int t7xx_dpmaif_get_rx_pkt(struct dpmaif_rx_queue *rxq,
+ const struct dpmaif_normal_pit *pkt_info,
+ struct dpmaif_cur_rx_skb_info *skb_info)
+{
+ unsigned int cur_bid = t7xx_normal_pit_bid(pkt_info);
+ int ret;
+
+ ret = t7xx_bat_cur_bid_check(rxq, cur_bid);
+ if (ret < 0)
+ return ret;
+
+ ret = t7xx_dpmaif_set_data_to_skb(rxq, pkt_info, skb_info);
+ if (ret < 0) {
+ dev_err(rxq->dpmaif_ctrl->dev, "RX set data to skb failed: %d\n", ret);
+ return ret;
+ }
+
+ t7xx_dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_req, cur_bid);
+ return 0;
+}
+
+static int t7xx_dpmaifq_rx_notify_hw(struct dpmaif_rx_queue *rxq)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+ int ret;
+
+ queue_work(dpmaif_ctrl->bat_release_wq, &dpmaif_ctrl->bat_release_work);
+
+ ret = t7xx_dpmaif_pit_release_and_add(rxq);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev, "RXQ%u update PIT failed: %d\n", rxq->index, ret);
+
+ return ret;
+}
+
+static void t7xx_dpmaif_rx_skb_enqueue(struct dpmaif_rx_queue *rxq, struct sk_buff *skb)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rxq->skb_list.lock, flags);
+ if (rxq->skb_list.qlen < rxq->skb_list_max_len)
+ __skb_queue_tail(&rxq->skb_list, skb);
+ else
+ dev_kfree_skb_any(skb);
+
+ spin_unlock_irqrestore(&rxq->skb_list.lock, flags);
+}
+
+static void t7xx_dpmaif_rx_skb(struct dpmaif_rx_queue *rxq,
+ struct dpmaif_cur_rx_skb_info *skb_info)
+{
+ struct sk_buff *skb = skb_info->cur_skb;
+ u8 netif_id;
+
+ skb_info->cur_skb = NULL;
+
+ if (skb_info->pit_dp) {
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
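+	/* A HW checksum result of PASS allows the stack to skip SW verification */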
+ skb->ip_summed = skb_info->check_sum == DPMAIF_CS_RESULT_PASS ? CHECKSUM_UNNECESSARY :
+ CHECKSUM_NONE;
+ netif_id = FIELD_GET(NETIF_MASK, skb_info->cur_chn_idx);
+ skb->cb[RX_CB_NETIF_IDX] = netif_id;
+ skb->cb[RX_CB_PKT_TYPE] = skb_info->pkt_type;
+ t7xx_dpmaif_rx_skb_enqueue(rxq, skb);
+}
+
+static int t7xx_dpmaif_rx_start(struct dpmaif_rx_queue *rxq, const unsigned short pit_cnt,
+ const unsigned long timeout)
+{
+ struct device *dev = rxq->dpmaif_ctrl->dev;
+ struct dpmaif_cur_rx_skb_info *skb_info;
+ unsigned short rx_cnt, recv_skb_cnt = 0;
+ unsigned int cur_pit, pit_len;
+ int ret = 0;
+
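+	/* Walk up to pit_cnt PIT entries: a message PIT starts a packet and carries
+	 * its metadata, payload PITs attach the data, and a cleared CONT bit marks
+	 * the last PIT of the packet, at which point the skb is queued for the
+	 * push thread.
+	 */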
+ pit_len = rxq->pit_size_cnt;
+ skb_info = &rxq->rx_data_info;
+ cur_pit = rxq->pit_rd_idx;
+
+ for (rx_cnt = 0; rx_cnt < pit_cnt; rx_cnt++) {
+ struct dpmaif_normal_pit *pkt_info;
+ u32 val;
+
+ if (!skb_info->msg_pit_received && time_after_eq(jiffies, timeout))
+ break;
+
+ pkt_info = (struct dpmaif_normal_pit *)rxq->pit_base + cur_pit;
+ if (t7xx_dpmaif_check_pit_seq(rxq, pkt_info)) {
+			dev_err_ratelimited(dev, "RXQ%u PIT sequence check failed\n", rxq->index);
+ return -EAGAIN;
+ }
+
+ val = FIELD_GET(NORMAL_PIT_PACKET_TYPE, le32_to_cpu(pkt_info->pit_header));
+ if (val == DES_PT_MSG) {
+ if (skb_info->msg_pit_received)
+ dev_err(dev, "RXQ%u received repeated PIT\n", rxq->index);
+
+ skb_info->msg_pit_received = true;
+ t7xx_dpmaif_parse_msg_pit(rxq, (struct dpmaif_msg_pit *)pkt_info,
+ skb_info);
+ } else { /* DES_PT_PD */
+ val = FIELD_GET(NORMAL_PIT_BUFFER_TYPE, le32_to_cpu(pkt_info->pit_header));
+ if (val != PKT_BUF_FRAG)
+ ret = t7xx_dpmaif_get_rx_pkt(rxq, pkt_info, skb_info);
+ else if (!skb_info->cur_skb)
+ ret = -EINVAL;
+ else
+ ret = t7xx_dpmaif_get_frag(rxq, pkt_info, skb_info);
+
+ if (ret < 0) {
+ skb_info->err_payload = 1;
+ dev_err_ratelimited(dev, "RXQ%u error payload\n", rxq->index);
+ }
+
+ val = FIELD_GET(NORMAL_PIT_CONT, le32_to_cpu(pkt_info->pit_header));
+ if (!val) {
+ if (!skb_info->err_payload) {
+ t7xx_dpmaif_rx_skb(rxq, skb_info);
+ } else if (skb_info->cur_skb) {
+ dev_kfree_skb_any(skb_info->cur_skb);
+ skb_info->cur_skb = NULL;
+ }
+
+ memset(skb_info, 0, sizeof(*skb_info));
+
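+				/* Wake the push thread in batches; the mask makes
+				 * this happen once every 8 completed skbs.
+				 */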
+ recv_skb_cnt++;
+ if (!(recv_skb_cnt & DPMAIF_RX_PUSH_THRESHOLD_MASK)) {
+ wake_up_all(&rxq->rx_wq);
+ recv_skb_cnt = 0;
+ }
+ }
+ }
+
+ cur_pit = t7xx_ring_buf_get_next_wr_idx(pit_len, cur_pit);
+ rxq->pit_rd_idx = cur_pit;
+ rxq->pit_remain_release_cnt++;
+
+ if (rx_cnt > 0 && !(rx_cnt % DPMAIF_NOTIFY_RELEASE_COUNT)) {
+ ret = t7xx_dpmaifq_rx_notify_hw(rxq);
+ if (ret < 0)
+ break;
+ }
+ }
+
+ if (recv_skb_cnt)
+ wake_up_all(&rxq->rx_wq);
+
+ if (!ret)
+ ret = t7xx_dpmaifq_rx_notify_hw(rxq);
+
+ if (ret)
+ return ret;
+
+ return rx_cnt;
+}
+
+static unsigned int t7xx_dpmaifq_poll_pit(struct dpmaif_rx_queue *rxq)
+{
+ unsigned short hw_wr_idx;
+ unsigned int pit_cnt;
+
+ if (!rxq->que_started)
+ return 0;
+
+ hw_wr_idx = t7xx_dpmaif_dl_dlq_pit_get_wr_idx(&rxq->dpmaif_ctrl->hif_hw_info, rxq->index);
+ pit_cnt = t7xx_ring_buf_rd_wr_count(rxq->pit_size_cnt, rxq->pit_rd_idx, hw_wr_idx,
+ DPMAIF_READ);
+ rxq->pit_wr_idx = hw_wr_idx;
+ return pit_cnt;
+}
+
+static int t7xx_dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
+ const unsigned char q_num, const int budget)
+{
+ struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
+ unsigned long time_limit;
+ unsigned int cnt;
+
+ time_limit = jiffies + msecs_to_jiffies(DPMAIF_WQ_TIME_LIMIT_MS);
+
+ do {
+ unsigned int rd_cnt;
+ int real_cnt;
+
+ cnt = t7xx_dpmaifq_poll_pit(rxq);
+ if (!cnt)
+ break;
+
+ if (!rxq->pit_base)
+ return -EAGAIN;
+
+ rd_cnt = cnt > budget ? budget : cnt;
+
+ real_cnt = t7xx_dpmaif_rx_start(rxq, rd_cnt, time_limit);
+ if (real_cnt < 0)
+ return real_cnt;
+
+ if (real_cnt < cnt)
+ return -EAGAIN;
+
+ } while (cnt);
+
+ return 0;
+}
+
+static void t7xx_dpmaif_do_rx(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_rx_queue *rxq)
+{
+ int ret;
+
+ ret = t7xx_dpmaif_rx_data_collect(dpmaif_ctrl, rxq->index, rxq->budget);
+ if (ret < 0) {
+ /* Try one more time */
+ queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+ t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ } else {
+ t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ t7xx_dpmaif_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, rxq->index);
+ }
+}
+
+static void t7xx_dpmaif_rxq_work(struct work_struct *work)
+{
+ struct dpmaif_rx_queue *rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
+ struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+
+ atomic_set(&rxq->rx_processing, 1);
+	/* Ensure rx_processing is set to 1 before the RX flow actually begins */
+ smp_mb();
+
+ if (!rxq->que_started) {
+ atomic_set(&rxq->rx_processing, 0);
+ dev_err(dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index);
+ return;
+ }
+
+ t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
+
+ atomic_set(&rxq->rx_processing, 0);
+}
+
+void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask)
+{
+ struct dpmaif_rx_queue *rxq;
+ int qno;
+
+ qno = ffs(que_mask) - 1;
+ if (qno < 0 || qno > DPMAIF_RXQ_NUM - 1) {
+		dev_err(dpmaif_ctrl->dev, "Invalid RXQ number: %d\n", qno);
+ return;
+ }
+
+ rxq = &dpmaif_ctrl->rxq[qno];
+ queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+}
+
+static void t7xx_dpmaif_base_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req)
+{
+ if (bat_req->bat_base)
+ dma_free_coherent(dpmaif_ctrl->dev,
+ bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+ bat_req->bat_base, bat_req->bat_bus_addr);
+}
+
+/**
+ * t7xx_dpmaif_bat_alloc() - Allocate the BAT ring buffer.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @buf_type: BAT ring type.
+ *
+ * This function allocates the BAT ring buffer shared with the HW device and also
+ * allocates a buffer used to store information about the BAT skbs for later release.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Negative error code in case of failure.
+ */
+int t7xx_dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ const enum bat_type buf_type)
+{
+ int sw_buf_size;
+
+ if (buf_type == BAT_TYPE_FRAG) {
+ sw_buf_size = sizeof(struct dpmaif_bat_page);
+ bat_req->bat_size_cnt = DPMAIF_FRG_COUNT;
+ bat_req->pkt_buf_sz = DPMAIF_HW_FRG_PKTBUF;
+ } else {
+ sw_buf_size = sizeof(struct dpmaif_bat_skb);
+ bat_req->bat_size_cnt = DPMAIF_BAT_COUNT;
+ bat_req->pkt_buf_sz = NET_RX_BUF;
+ }
+
+ bat_req->skb_pkt_cnt = bat_req->bat_size_cnt;
+ bat_req->type = buf_type;
+ bat_req->bat_wr_idx = 0;
+ bat_req->bat_release_rd_idx = 0;
+
+ bat_req->bat_base = dma_alloc_coherent(dpmaif_ctrl->dev,
+ bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+ &bat_req->bat_bus_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!bat_req->bat_base)
+ return -ENOMEM;
+
+ /* For AP SW to record skb information */
+ bat_req->bat_skb = devm_kzalloc(dpmaif_ctrl->dev, bat_req->skb_pkt_cnt * sw_buf_size,
+ GFP_KERNEL);
+ if (!bat_req->bat_skb)
+ goto err_free_dma_mem;
+
+ bat_req->bat_mask = kcalloc(bat_req->bat_size_cnt, sizeof(unsigned char), GFP_KERNEL);
+ if (!bat_req->bat_mask)
+ goto err_free_dma_mem;
+
+ spin_lock_init(&bat_req->mask_lock);
+ atomic_set(&bat_req->refcnt, 0);
+ return 0;
+
+err_free_dma_mem:
+ t7xx_dpmaif_base_free(dpmaif_ctrl, bat_req);
+
+ return -ENOMEM;
+}
+
+void t7xx_dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req)
+{
+ if (!bat_req || !atomic_dec_and_test(&bat_req->refcnt))
+ return;
+
+ kfree(bat_req->bat_mask);
+ bat_req->bat_mask = NULL;
+
+ if (bat_req->bat_skb) {
+ unsigned int i;
+
+ for (i = 0; i < bat_req->bat_size_cnt; i++) {
+ if (bat_req->type == BAT_TYPE_FRAG)
+ t7xx_unmap_bat_page(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+ else
+ t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+ }
+ }
+
+ t7xx_dpmaif_base_free(dpmaif_ctrl, bat_req);
+}
+
+static int t7xx_dpmaif_rx_alloc(struct dpmaif_rx_queue *rxq)
+{
+ rxq->pit_size_cnt = DPMAIF_PIT_COUNT;
+ rxq->pit_rd_idx = 0;
+ rxq->pit_wr_idx = 0;
+ rxq->pit_release_rd_idx = 0;
+ rxq->expect_pit_seq = 0;
+ rxq->pit_remain_release_cnt = 0;
+ memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+ rxq->pit_base = dma_alloc_coherent(rxq->dpmaif_ctrl->dev,
+ rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+ &rxq->pit_bus_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!rxq->pit_base)
+ return -ENOMEM;
+
+ rxq->bat_req = &rxq->dpmaif_ctrl->bat_req;
+ atomic_inc(&rxq->bat_req->refcnt);
+
+ rxq->bat_frag = &rxq->dpmaif_ctrl->bat_frag;
+ atomic_inc(&rxq->bat_frag->refcnt);
+ return 0;
+}
+
+static void t7xx_dpmaif_rx_buf_free(const struct dpmaif_rx_queue *rxq)
+{
+ if (!rxq->dpmaif_ctrl)
+ return;
+
+ t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+ t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+
+ if (rxq->pit_base)
+ dma_free_coherent(rxq->dpmaif_ctrl->dev,
+ rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+ rxq->pit_base, rxq->pit_bus_addr);
+}
+
+int t7xx_dpmaif_rxq_init(struct dpmaif_rx_queue *queue)
+{
+ int ret;
+
+ ret = t7xx_dpmaif_rx_alloc(queue);
+ if (ret < 0) {
+ dev_err(queue->dpmaif_ctrl->dev, "Failed to allocate RX buffers: %d\n", ret);
+ return ret;
+ }
+
+ INIT_WORK(&queue->dpmaif_rxq_work, t7xx_dpmaif_rxq_work);
+
+ queue->worker = alloc_workqueue("dpmaif_rx%d_worker",
+ WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 1, queue->index);
+ if (!queue->worker) {
+ ret = -ENOMEM;
+ goto err_free_rx_buffer;
+ }
+
+ init_waitqueue_head(&queue->rx_wq);
+ skb_queue_head_init(&queue->skb_list);
+ queue->skb_list_max_len = queue->bat_req->pkt_buf_sz;
+ queue->rx_thread = kthread_run(t7xx_dpmaif_net_rx_push_thread,
+ queue, "dpmaif_rx%d_push", queue->index);
+
+ ret = PTR_ERR_OR_ZERO(queue->rx_thread);
+ if (ret)
+ goto err_free_workqueue;
+
+ return 0;
+
+err_free_workqueue:
+ destroy_workqueue(queue->worker);
+
+err_free_rx_buffer:
+ t7xx_dpmaif_rx_buf_free(queue);
+
+ return ret;
+}
+
+void t7xx_dpmaif_rxq_free(struct dpmaif_rx_queue *queue)
+{
+ if (queue->worker)
+ destroy_workqueue(queue->worker);
+
+ if (queue->rx_thread)
+ kthread_stop(queue->rx_thread);
+
+ skb_queue_purge(&queue->skb_list);
+ t7xx_dpmaif_rx_buf_free(queue);
+}
+
+static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);
+ struct dpmaif_rx_queue *rxq;
+
+	/* All RX queues share one BAT table, so use DPF_RX_QNO_DFT */
+ rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
+ t7xx_dpmaif_bat_release_and_add(rxq);
+ t7xx_dpmaif_frag_bat_release_and_add(rxq);
+}
+
+int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ dpmaif_ctrl->bat_release_wq = alloc_workqueue("dpmaif_bat_release_work_queue",
+ WQ_MEM_RECLAIM, 1);
+ if (!dpmaif_ctrl->bat_release_wq)
+ return -ENOMEM;
+
+ INIT_WORK(&dpmaif_ctrl->bat_release_work, t7xx_dpmaif_bat_release_work);
+ return 0;
+}
+
+void t7xx_dpmaif_bat_wq_rel(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ flush_work(&dpmaif_ctrl->bat_release_work);
+
+ if (dpmaif_ctrl->bat_release_wq) {
+ destroy_workqueue(dpmaif_ctrl->bat_release_wq);
+ dpmaif_ctrl->bat_release_wq = NULL;
+ }
+}
+
+/**
+ * t7xx_dpmaif_rx_stop() - Suspend RX flow.
+ * @dpmaif_ctrl: Pointer to data path control struct dpmaif_ctrl.
+ *
+ * Wait for all the RX work to finish executing and mark the RX queue as paused.
+ */
+void t7xx_dpmaif_rx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ unsigned int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[i];
+ int timeout, value;
+
+ flush_work(&rxq->dpmaif_rxq_work);
+
+ timeout = readx_poll_timeout_atomic(atomic_read, &rxq->rx_processing, value,
+ !value, 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (timeout)
+ dev_err(dpmaif_ctrl->dev, "Stop RX SW failed\n");
+
+ /* Ensure RX processing has stopped before we set rxq->que_started to false */
+ smp_mb();
+ rxq->que_started = false;
+ }
+}
+
+static void t7xx_dpmaif_stop_rxq(struct dpmaif_rx_queue *rxq)
+{
+ int cnt, j = 0;
+
+ flush_work(&rxq->dpmaif_rxq_work);
+ rxq->que_started = false;
+
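+	/* Check that no PIT entries remain pending before resetting the SW ring state */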
+ do {
+ cnt = t7xx_ring_buf_rd_wr_count(rxq->pit_size_cnt, rxq->pit_rd_idx,
+ rxq->pit_wr_idx, DPMAIF_READ);
+
+ if (++j >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(rxq->dpmaif_ctrl->dev, "Stop RX SW failed, %d\n", cnt);
+ break;
+ }
+ } while (cnt);
+
+ memset(rxq->pit_base, 0, rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit));
+ memset(rxq->bat_req->bat_base, 0, rxq->bat_req->bat_size_cnt * sizeof(struct dpmaif_bat));
+ memset(rxq->bat_req->bat_mask, 0, rxq->bat_req->bat_size_cnt * sizeof(unsigned char));
+ memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+ rxq->pit_rd_idx = 0;
+ rxq->pit_wr_idx = 0;
+ rxq->pit_release_rd_idx = 0;
+ rxq->expect_pit_seq = 0;
+ rxq->pit_remain_release_cnt = 0;
+ rxq->bat_req->bat_release_rd_idx = 0;
+ rxq->bat_req->bat_wr_idx = 0;
+ rxq->bat_frag->bat_release_rd_idx = 0;
+ rxq->bat_frag->bat_wr_idx = 0;
+}
+
+void t7xx_dpmaif_rx_clear(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+ t7xx_dpmaif_stop_rxq(&dpmaif_ctrl->rxq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
new file mode 100644
index 000000000000..16df19b20866
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_HIF_DPMA_RX_H__
+#define __T7XX_HIF_DPMA_RX_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_dpmaif.h"
+
+#define NETIF_MASK GENMASK(4, 0)
+#define RX_CB_NETIF_IDX 0
+#define RX_CB_PKT_TYPE 1
+
+#define PKT_TYPE_IP4 0
+#define PKT_TYPE_IP6 1
+
+/* Structure of DL PIT */
+struct dpmaif_normal_pit {
+ __le32 pit_header;
+ __le32 p_data_addr;
+ __le32 data_addr_ext;
+ __le32 pit_footer;
+};
+
+/* PIT header fields */
+#define NORMAL_PIT_DATA_LEN GENMASK(31, 16)
+#define NORMAL_PIT_BUFFER_ID GENMASK(15, 3)
+#define NORMAL_PIT_BUFFER_TYPE BIT(2)
+#define NORMAL_PIT_CONT BIT(1)
+#define NORMAL_PIT_PACKET_TYPE BIT(0)
+/* PIT footer fields */
+#define NORMAL_PIT_DLQ_DONE GENMASK(31, 30)
+#define NORMAL_PIT_ULQ_DONE GENMASK(29, 24)
+#define NORMAL_PIT_HEADER_OFFSET GENMASK(23, 19)
+#define NORMAL_PIT_BI_F GENMASK(18, 17)
+#define NORMAL_PIT_IG BIT(16)
+#define NORMAL_PIT_RES GENMASK(15, 11)
+#define NORMAL_PIT_H_BID GENMASK(10, 8)
+#define NORMAL_PIT_PIT_SEQ GENMASK(7, 0)
+
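+/* Message PIT: carries per-packet metadata such as the channel ID, checksum
+ * result, drop indication and IP version ahead of the payload PIT entries.
+ */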
+struct dpmaif_msg_pit {
+ __le32 dword1;
+ __le32 dword2;
+ __le32 dword3;
+ __le32 dword4;
+};
+
+#define MSG_PIT_DP BIT(31)
+#define MSG_PIT_RES GENMASK(30, 27)
+#define MSG_PIT_NETWORK_TYPE GENMASK(26, 24)
+#define MSG_PIT_CHANNEL_ID GENMASK(23, 16)
+#define MSG_PIT_RES2 GENMASK(15, 12)
+#define MSG_PIT_HPC_IDX GENMASK(11, 8)
+#define MSG_PIT_SRC_QID GENMASK(7, 5)
+#define MSG_PIT_ERROR_BIT BIT(4)
+#define MSG_PIT_CHECKSUM GENMASK(3, 2)
+#define MSG_PIT_CONT BIT(1)
+#define MSG_PIT_PACKET_TYPE BIT(0)
+
+#define MSG_PIT_HP_IDX GENMASK(31, 27)
+#define MSG_PIT_CMD GENMASK(26, 24)
+#define MSG_PIT_RES3 GENMASK(23, 21)
+#define MSG_PIT_FLOW GENMASK(20, 16)
+#define MSG_PIT_COUNT GENMASK(15, 0)
+
+#define MSG_PIT_HASH GENMASK(31, 24)
+#define MSG_PIT_RES4 GENMASK(23, 18)
+#define MSG_PIT_PRO GENMASK(17, 16)
+#define MSG_PIT_VBID GENMASK(15, 3)
+#define MSG_PIT_RES5 GENMASK(2, 0)
+
+#define MSG_PIT_DLQ_DONE GENMASK(31, 30)
+#define MSG_PIT_ULQ_DONE GENMASK(29, 24)
+#define MSG_PIT_IP BIT(23)
+#define MSG_PIT_RES6 BIT(22)
+#define MSG_PIT_MR GENMASK(21, 20)
+#define MSG_PIT_RES7 GENMASK(19, 17)
+#define MSG_PIT_IG BIT(16)
+#define MSG_PIT_RES8 GENMASK(15, 11)
+#define MSG_PIT_H_BID GENMASK(10, 8)
+#define MSG_PIT_PIT_SEQ GENMASK(7, 0)
+
+int t7xx_dpmaif_rxq_init(struct dpmaif_rx_queue *queue);
+void t7xx_dpmaif_rx_clear(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req, const unsigned char q_num,
+ const unsigned int buf_cnt, const bool first_time);
+int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ const unsigned int buf_cnt, const bool first_time);
+void t7xx_dpmaif_rx_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask);
+void t7xx_dpmaif_rxq_free(struct dpmaif_rx_queue *queue);
+void t7xx_dpmaif_bat_wq_rel(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+ const enum bat_type buf_type);
+void t7xx_dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+ struct dpmaif_bat_request *bat_req);
+
+#endif /* __T7XX_HIF_DPMA_RX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
new file mode 100644
index 000000000000..3c601492aa16
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -0,0 +1,724 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Chiranjeevi Rapolu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/minmax.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_SKB_TX_BURST_CNT 5
+#define DPMAIF_DRB_ENTRY_SIZE 6144
+
+/* DRB dtype */
+#define DES_DTYP_PD 0
+#define DES_DTYP_MSG 1
+
+static unsigned int t7xx_dpmaif_update_drb_rd_idx(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num)
+{
+ struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+ unsigned short old_sw_rd_idx, new_hw_rd_idx;
+ unsigned int hw_read_idx;
+ unsigned int drb_cnt;
+ unsigned long flags;
+
+ if (!txq->que_started)
+ return 0;
+
+ old_sw_rd_idx = txq->drb_rd_idx;
+ hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
+
+ new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;
+ if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {
+ dev_err(dpmaif_ctrl->dev, "Out of range read index: %u\n", new_hw_rd_idx);
+ return 0;
+ }
+
+ if (old_sw_rd_idx <= new_hw_rd_idx)
+ drb_cnt = new_hw_rd_idx - old_sw_rd_idx;
+ else
+ drb_cnt = txq->drb_size_cnt - old_sw_rd_idx + new_hw_rd_idx;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ txq->drb_rd_idx = new_hw_rd_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ return drb_cnt;
+}
+
+static unsigned short t7xx_dpmaif_release_tx_buffer(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int release_cnt)
+{
+ struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+ struct dpmaif_callbacks *cb = dpmaif_ctrl->callbacks;
+ struct dpmaif_drb_skb *cur_drb_skb, *drb_skb_base;
+ struct dpmaif_drb_pd *cur_drb, *drb_base;
+ unsigned int drb_cnt, i;
+ unsigned short cur_idx;
+ unsigned long flags;
+
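+	/* Walk the DRBs from the release read index: unmap payload DRBs, free the
+	 * skb when the last (non-continuation) DRB of a packet is reached, and
+	 * pick up the channel ID from message DRBs.
+	 */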
+ drb_skb_base = txq->drb_skb_base;
+ drb_base = txq->drb_base;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_cnt = txq->drb_size_cnt;
+ cur_idx = txq->drb_release_rd_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ for (i = 0; i < release_cnt; i++) {
+ cur_drb = drb_base + cur_idx;
+ if (FIELD_GET(DRB_PD_DTYP, le32_to_cpu(cur_drb->header)) == DES_DTYP_PD) {
+ cur_drb_skb = drb_skb_base + cur_idx;
+ if (!FIELD_GET(DRB_SKB_IS_MSG, cur_drb_skb->config))
+ dma_unmap_single(dpmaif_ctrl->dev, cur_drb_skb->bus_addr,
+ cur_drb_skb->data_len, DMA_TO_DEVICE);
+
+ if (!FIELD_GET(DRB_PD_CONT, le32_to_cpu(cur_drb->header))) {
+ if (!cur_drb_skb->skb) {
+ dev_err(dpmaif_ctrl->dev,
+ "txq%u: DRB check fail, invalid skb\n", q_num);
+ continue;
+ }
+
+ dev_kfree_skb_any(cur_drb_skb->skb);
+ }
+
+ cur_drb_skb->skb = NULL;
+ } else {
+ struct dpmaif_drb_msg *drb_msg = (struct dpmaif_drb_msg *)cur_drb;
+
+ txq->last_ch_id = FIELD_GET(DRB_MSG_CHANNEL_ID,
+ le32_to_cpu(drb_msg->header_dw2));
+ }
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ cur_idx = t7xx_ring_buf_get_next_wr_idx(drb_cnt, cur_idx);
+ txq->drb_release_rd_idx = cur_idx;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ if (atomic_inc_return(&txq->tx_budget) > txq->drb_size_cnt / 8)
+ cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_IRQ, txq->index);
+ }
+
+ if (FIELD_GET(DRB_PD_CONT, le32_to_cpu(cur_drb->header)))
+ dev_err(dpmaif_ctrl->dev, "txq%u: DRB not marked as the last one\n", q_num);
+
+ return i;
+}
+
+static int t7xx_dpmaif_tx_release(struct dpmaif_ctrl *dpmaif_ctrl,
+ unsigned char q_num, unsigned int budget)
+{
+ struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+ unsigned int rel_cnt, real_rel_cnt;
+
+ /* Update read index from HW */
+ t7xx_dpmaif_update_drb_rd_idx(dpmaif_ctrl, q_num);
+
+ rel_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_rd_idx, DPMAIF_READ);
+
+ real_rel_cnt = min_not_zero(budget, rel_cnt);
+ if (real_rel_cnt)
+ real_rel_cnt = t7xx_dpmaif_release_tx_buffer(dpmaif_ctrl, q_num, real_rel_cnt);
+
+ return real_rel_cnt < rel_cnt ? -EAGAIN : 0;
+}
+
+static bool t7xx_dpmaif_drb_ring_not_empty(struct dpmaif_tx_queue *txq)
+{
+ return !!t7xx_dpmaif_update_drb_rd_idx(txq->dpmaif_ctrl, txq->index);
+}
+
+static void t7xx_dpmaif_tx_done(struct work_struct *work)
+{
+ struct dpmaif_tx_queue *txq = container_of(work, struct dpmaif_tx_queue, dpmaif_tx_work);
+ struct dpmaif_ctrl *dpmaif_ctrl = txq->dpmaif_ctrl;
+ int ret;
+
+ ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
+ if (ret == -EAGAIN ||
+ (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) &&
+ t7xx_dpmaif_drb_ring_not_empty(txq))) {
+ queue_work(dpmaif_ctrl->txq[txq->index].worker,
+ &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
+ /* Give the device time to enter the low power state */
+ t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ } else {
+ t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+ t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index);
+ }
+}
+
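+/* Each skb is described to the HW by one message DRB followed by one payload
+ * DRB per data segment (the linear area plus each page fragment).
+ */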
+static void t7xx_setup_msg_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned short cur_idx, unsigned int pkt_len, unsigned short count_l,
+ unsigned char channel_id)
+{
+ struct dpmaif_drb_msg *drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+ struct dpmaif_drb_msg *drb = drb_base + cur_idx;
+
+ drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG));
+ drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CONT, 1));
+ drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));
+
+ drb->header_dw2 = cpu_to_le32(FIELD_PREP(DRB_MSG_COUNT_L, count_l));
+ drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CHANNEL_ID, channel_id));
+ drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_L4_CHK, 1));
+}
+
+static void t7xx_setup_payload_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned short cur_idx, dma_addr_t data_addr,
+ unsigned int pkt_size, char last_one)
+{
+ struct dpmaif_drb_pd *drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+ struct dpmaif_drb_pd *drb = drb_base + cur_idx;
+
+ drb->header &= cpu_to_le32(~DRB_PD_DTYP);
+ drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_DTYP, DES_DTYP_PD));
+ drb->header &= cpu_to_le32(~DRB_PD_CONT);
+
+ if (!last_one)
+ drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_CONT, 1));
+
+ drb->header &= cpu_to_le32(~DRB_PD_DATA_LEN);
+ drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_DATA_LEN, pkt_size));
+ drb->p_data_addr = cpu_to_le32(lower_32_bits(data_addr));
+ drb->data_addr_ext = cpu_to_le32(upper_32_bits(data_addr));
+}
+
+static void t7xx_record_drb_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+ unsigned short cur_idx, struct sk_buff *skb, unsigned short is_msg,
+ bool is_frag, bool is_last_one, dma_addr_t bus_addr,
+ unsigned int data_len)
+{
+ struct dpmaif_drb_skb *drb_skb_base = dpmaif_ctrl->txq[q_num].drb_skb_base;
+ struct dpmaif_drb_skb *drb_skb = drb_skb_base + cur_idx;
+
+ drb_skb->skb = skb;
+ drb_skb->bus_addr = bus_addr;
+ drb_skb->data_len = data_len;
+ drb_skb->config = FIELD_PREP(DRB_SKB_DRB_IDX, cur_idx);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_MSG, is_msg);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_FRAG, is_frag);
+ drb_skb->config |= FIELD_PREP(DRB_SKB_IS_LAST, is_last_one);
+}
+
+static int t7xx_dpmaif_add_skb_to_ring(struct dpmaif_ctrl *dpmaif_ctrl, struct sk_buff *skb)
+{
+ unsigned int wr_cnt, send_cnt, payload_cnt;
+ unsigned short cur_idx, drb_wr_idx_backup;
+ bool is_frag, is_last_one = false;
+ int qtype = skb->cb[TX_CB_QTYPE];
+ struct skb_shared_info *info;
+ struct dpmaif_tx_queue *txq;
+ unsigned int data_len;
+ dma_addr_t bus_addr;
+ unsigned long flags;
+ void *data_addr;
+
+ txq = &dpmaif_ctrl->txq[qtype];
+ if (!txq->que_started || dpmaif_ctrl->state != DPMAIF_STATE_PWRON)
+ return -ENODEV;
+
+ atomic_set(&txq->tx_processing, 1);
+	/* Ensure tx_processing is set to 1 before the TX flow actually begins */
+ smp_mb();
+
+ info = skb_shinfo(skb);
+ if (info->frag_list)
+ dev_warn_ratelimited(dpmaif_ctrl->dev, "frag_list not supported\n");
+
+ payload_cnt = info->nr_frags + 1;
+	/* payload_cnt: nr_frags fragments + 1 for skb->data; send_cnt adds 1 for the msg DRB */
+ send_cnt = payload_cnt + 1;
+
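+	/* Reserve send_cnt DRB slots up front; drb_wr_idx_backup allows the
+	 * reservation to be rolled back if a DMA mapping fails part way through.
+	 */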
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ cur_idx = txq->drb_wr_idx;
+ drb_wr_idx_backup = cur_idx;
+
+ txq->drb_wr_idx += send_cnt;
+ if (txq->drb_wr_idx >= txq->drb_size_cnt)
+ txq->drb_wr_idx -= txq->drb_size_cnt;
+
+ t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
+ t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
+
+ for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
+ if (!wr_cnt) {
+ data_len = skb_headlen(skb);
+ data_addr = skb->data;
+ is_frag = false;
+ } else {
+ skb_frag_t *frag = info->frags + wr_cnt - 1;
+
+ data_len = skb_frag_size(frag);
+ data_addr = skb_frag_address(frag);
+ is_frag = true;
+ }
+
+ if (wr_cnt == payload_cnt - 1)
+ is_last_one = true;
+
+ /* TX mapping */
+ bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
+ if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
+ dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+ atomic_set(&txq->tx_processing, 0);
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ txq->drb_wr_idx = drb_wr_idx_backup;
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ return -ENOMEM;
+ }
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ t7xx_setup_payload_drb(dpmaif_ctrl, txq->index, cur_idx, bus_addr, data_len,
+ is_last_one);
+ t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 0, is_frag,
+ is_last_one, bus_addr, data_len);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
+ }
+
+ atomic_sub(send_cnt, &txq->tx_budget);
+ atomic_set(&txq->tx_processing, 0);
+
+ return 0;
+}
+
+static bool t7xx_tx_lists_are_all_empty(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ if (!list_empty(&dpmaif_ctrl->txq[i].tx_skb_queue))
+ return false;
+ }
+
+ return true;
+}
+
+/* Currently, only the default TX queue is used */
+static int t7xx_select_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ return TXQ_TYPE_DEFAULT;
+}
+
+static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
+{
+ int drb_remain_cnt, i;
+ unsigned long flags;
+ int drb_cnt = 0;
+ int ret = 0;
+
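+	/* Move up to DPMAIF_SKB_TX_BURST_CNT queued skbs into the DRB ring,
+	 * refreshing the free DRB count whenever it looks insufficient.
+	 */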
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_wr_idx, DPMAIF_WRITE);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
+ struct sk_buff *skb;
+
+ spin_lock_irqsave(&txq->tx_skb_lock, flags);
+ skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
+ spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+
+ if (!skb)
+ break;
+
+ if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
+ txq->drb_release_rd_idx,
+ txq->drb_wr_idx, DPMAIF_WRITE);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+ continue;
+ }
+
+ drb_remain_cnt -= skb->cb[TX_CB_DRB_CNT];
+
+ ret = t7xx_dpmaif_add_skb_to_ring(txq->dpmaif_ctrl, skb);
+ if (ret < 0) {
+ dev_err(txq->dpmaif_ctrl->dev,
+ "Failed to add skb to device's ring: %d\n", ret);
+ break;
+ }
+
+ drb_cnt += skb->cb[TX_CB_DRB_CNT];
+ spin_lock_irqsave(&txq->tx_skb_lock, flags);
+ list_del(&skb->list);
+ txq->tx_submit_skb_cnt--;
+ spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+ }
+
+ if (drb_cnt > 0) {
+ txq->drb_lack = false;
+ ret = drb_cnt;
+ } else if (ret == -ENOMEM) {
+ txq->drb_lack = true;
+ }
+
+ return ret;
+}
+
+static bool t7xx_check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ unsigned char i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+ if (!list_empty(&dpmaif_ctrl->txq[i].tx_skb_queue) &&
+ !dpmaif_ctrl->txq[i].drb_lack)
+ return false;
+
+ return true;
+}
+
+static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ do {
+ int txq_id;
+
+ txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
+ if (txq_id >= 0) {
+ struct dpmaif_tx_queue *txq;
+ int ret;
+
+ txq = &dpmaif_ctrl->txq[txq_id];
+
+ ret = t7xx_txq_burst_send_skb(txq);
+ if (ret > 0) {
+ int drb_send_cnt = ret;
+
+ ret = t7xx_dpmaif_ul_update_hw_drb_cnt(dpmaif_ctrl,
+ (unsigned char)txq_id,
+ drb_send_cnt *
+ DPMAIF_UL_DRB_ENTRY_WORD);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev,
+ "txq%d: Failed to update DRB count in HW\n",
+ txq_id);
+ } else if (t7xx_check_all_txq_drb_lack(dpmaif_ctrl)) {
+ usleep_range(10, 20);
+ }
+ }
+
+ cond_resched();
+ } while (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() &&
+ (dpmaif_ctrl->state == DPMAIF_STATE_PWRON));
+}
+
+static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = arg;
+
+ while (!kthread_should_stop()) {
+ if (t7xx_tx_lists_are_all_empty(dpmaif_ctrl) ||
+ dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+ if (wait_event_interruptible(dpmaif_ctrl->tx_wq,
+ (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) &&
+ dpmaif_ctrl->state == DPMAIF_STATE_PWRON) ||
+ kthread_should_stop()))
+ continue;
+
+ if (kthread_should_stop())
+ break;
+ }
+
+ t7xx_do_tx_hw_push(dpmaif_ctrl);
+ }
+
+ return 0;
+}
+
+int t7xx_dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ init_waitqueue_head(&dpmaif_ctrl->tx_wq);
+ dpmaif_ctrl->tx_thread = kthread_run(t7xx_dpmaif_tx_hw_push_thread,
+ dpmaif_ctrl, "dpmaif_tx_hw_push");
+ return PTR_ERR_OR_ZERO(dpmaif_ctrl->tx_thread);
+}
+
+void t7xx_dpmaif_tx_thread_rel(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ if (dpmaif_ctrl->tx_thread)
+ kthread_stop(dpmaif_ctrl->tx_thread);
+}
+
+static unsigned char t7xx_get_drb_cnt_per_skb(struct sk_buff *skb)
+{
+ /* Normal DRB (frags data + skb linear data) + msg DRB */
+ return skb_shinfo(skb)->nr_frags + 2;
+}
+
+static bool t7xx_check_tx_queue_drb_available(struct dpmaif_tx_queue *txq,
+ unsigned int send_drb_cnt)
+{
+ unsigned int drb_remain_cnt;
+ unsigned long flags;
+
+ spin_lock_irqsave(&txq->tx_lock, flags);
+ drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+ txq->drb_wr_idx, DPMAIF_WRITE);
+ spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+ return drb_remain_cnt >= send_drb_cnt;
+}
+
+/**
+ * t7xx_dpmaif_tx_send_skb() - Add skb to the transmit queue.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ * @txqt: Queue type to xmit on (normal or fast).
+ * @skb: Pointer to the skb to transmit.
+ *
+ * Add the skb to the queue of skbs to be transmitted.
+ * Wake up the thread that pushes the skbs from the queue to the HW.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Error code in case of failure.
+ */
+int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
+{
+ bool tx_drb_available = true;
+ struct dpmaif_tx_queue *txq;
+ struct dpmaif_callbacks *cb;
+ unsigned int send_drb_cnt;
+ unsigned long flags;
+
+ send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
+
+ txq = &dpmaif_ctrl->txq[txqt];
+ if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
+ tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
+
+ if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
+ cb = dpmaif_ctrl->callbacks;
+ cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
+ return -EBUSY;
+ }
+
+ skb->cb[TX_CB_QTYPE] = txqt;
+ skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
+
+ spin_lock_irqsave(&txq->tx_skb_lock, flags);
+ list_add_tail(&skb->list, &txq->tx_skb_queue);
+ txq->tx_submit_skb_cnt++;
+ spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+ wake_up(&dpmaif_ctrl->tx_wq);
+
+ return 0;
+}
+
+void t7xx_dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ if (que_mask & BIT(i))
+ queue_work(dpmaif_ctrl->txq[i].worker, &dpmaif_ctrl->txq[i].dpmaif_tx_work);
+ }
+}
+
+static int t7xx_dpmaif_tx_drb_buf_init(struct dpmaif_tx_queue *txq)
+{
+ size_t brb_skb_size, brb_pd_size;
+
+ brb_pd_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_pd);
+ brb_skb_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_skb);
+
+ txq->drb_size_cnt = DPMAIF_DRB_ENTRY_SIZE;
+
+ /* For HW && AP SW */
+ txq->drb_base = dma_alloc_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+ &txq->drb_bus_addr, GFP_KERNEL | __GFP_ZERO);
+ if (!txq->drb_base)
+ return -ENOMEM;
+
+ /* For AP SW to record the skb information */
+ txq->drb_skb_base = devm_kzalloc(txq->dpmaif_ctrl->dev, brb_skb_size, GFP_KERNEL);
+ if (!txq->drb_skb_base) {
+ dma_free_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+ txq->drb_base, txq->drb_bus_addr);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void t7xx_dpmaif_tx_free_drb_skb(struct dpmaif_tx_queue *txq)
+{
+ struct dpmaif_drb_skb *drb_skb, *drb_skb_base;
+ unsigned int i;
+
+ drb_skb_base = txq->drb_skb_base;
+ if (!drb_skb_base)
+ return;
+
+ for (i = 0; i < txq->drb_size_cnt; i++) {
+ drb_skb = drb_skb_base + i;
+ if (!drb_skb->skb)
+ continue;
+
+ if (!FIELD_GET(DRB_SKB_IS_MSG, drb_skb->config))
+ dma_unmap_single(txq->dpmaif_ctrl->dev, drb_skb->bus_addr,
+ drb_skb->data_len, DMA_TO_DEVICE);
+
+ if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
+ kfree_skb(drb_skb->skb);
+ drb_skb->skb = NULL;
+ }
+ }
+}
+
+static void t7xx_dpmaif_tx_drb_buf_rel(struct dpmaif_tx_queue *txq)
+{
+ if (txq->drb_base)
+ dma_free_coherent(txq->dpmaif_ctrl->dev,
+ txq->drb_size_cnt * sizeof(struct dpmaif_drb_pd),
+ txq->drb_base, txq->drb_bus_addr);
+
+ t7xx_dpmaif_tx_free_drb_skb(txq);
+}
+
+/**
+ * t7xx_dpmaif_txq_init() - Initialize TX queue.
+ * @txq: Pointer to struct dpmaif_tx_queue.
+ *
+ * Initialize the TX queue data structure and allocate memory for it to use.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Error code from failure sub-initializations.
+ */
+int t7xx_dpmaif_txq_init(struct dpmaif_tx_queue *txq)
+{
+ int ret;
+
+ spin_lock_init(&txq->tx_skb_lock);
+ INIT_LIST_HEAD(&txq->tx_skb_queue);
+ txq->tx_submit_skb_cnt = 0;
+ txq->tx_skb_stat = 0;
+ txq->tx_list_max_len = DPMAIF_DRB_ENTRY_SIZE / 2;
+ txq->drb_lack = false;
+
+ init_waitqueue_head(&txq->req_wq);
+ atomic_set(&txq->tx_budget, DPMAIF_DRB_ENTRY_SIZE);
+
+ ret = t7xx_dpmaif_tx_drb_buf_init(txq);
+ if (ret) {
+ dev_err(txq->dpmaif_ctrl->dev, "Failed to initialize DRB buffers: %d\n", ret);
+ return ret;
+ }
+
+ txq->worker = alloc_workqueue("md_dpmaif_tx%d_worker", WQ_UNBOUND | WQ_MEM_RECLAIM |
+ (txq->index ? 0 : WQ_HIGHPRI), 1, txq->index);
+ if (!txq->worker)
+ return -ENOMEM;
+
+ INIT_WORK(&txq->dpmaif_tx_work, t7xx_dpmaif_tx_done);
+ spin_lock_init(&txq->tx_lock);
+ return 0;
+}
+
+void t7xx_dpmaif_txq_free(struct dpmaif_tx_queue *txq)
+{
+ struct sk_buff *skb, *skb_next;
+ unsigned long flags;
+
+ if (txq->worker)
+ destroy_workqueue(txq->worker);
+
+ spin_lock_irqsave(&txq->tx_skb_lock, flags);
+ list_for_each_entry_safe(skb, skb_next, &txq->tx_skb_queue, list) {
+ list_del(&skb->list);
+ dev_kfree_skb_any(skb);
+ }
+
+ spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+ t7xx_dpmaif_tx_drb_buf_rel(txq);
+}
+
+void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ struct dpmaif_tx_queue *txq;
+ int count;
+
+ txq = &dpmaif_ctrl->txq[i];
+ txq->que_started = false;
+ /* Ensure tx_processing is changed to 1 before actually begin TX flow */
+ smp_mb();
+
+ /* Confirm that SW will not transmit */
+ count = 0;
+
+ while (atomic_read(&txq->tx_processing)) {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(dpmaif_ctrl->dev, "TX queue stop failed\n");
+ break;
+ }
+ }
+ }
+}
+
+static void t7xx_dpmaif_txq_flush_rel(struct dpmaif_tx_queue *txq)
+{
+ txq->que_started = false;
+
+ cancel_work_sync(&txq->dpmaif_tx_work);
+ flush_work(&txq->dpmaif_tx_work);
+ t7xx_dpmaif_tx_free_drb_skb(txq);
+
+ txq->drb_rd_idx = 0;
+ txq->drb_wr_idx = 0;
+ txq->drb_release_rd_idx = 0;
+}
+
+void t7xx_dpmaif_tx_clear(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+ t7xx_dpmaif_txq_flush_rel(&dpmaif_ctrl->txq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
new file mode 100644
index 000000000000..3e8b1f93d8b9
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_HIF_DPMA_TX_H__
+#define __T7XX_HIF_DPMA_TX_H__
+
+#include <linux/bits.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif.h"
+
+/* SKB control buffer indexed values */
+#define TX_CB_NETIF_IDX 0
+#define TX_CB_QTYPE 1
+#define TX_CB_DRB_CNT 2
+
+/* UL DRB */
+struct dpmaif_drb_pd {
+ __le32 header;
+ __le32 p_data_addr;
+ __le32 data_addr_ext;
+ __le32 reserved2;
+};
+
+/* Header fields */
+#define DRB_PD_DATA_LEN ((u32)GENMASK(31, 16))
+#define DRB_PD_RES GENMASK(15, 3)
+#define DRB_PD_CONT BIT(2)
+#define DRB_PD_DTYP GENMASK(1, 0)
+
+struct dpmaif_drb_msg {
+ __le32 header_dw1;
+ __le32 header_dw2;
+ __le32 reserved4;
+ __le32 reserved5;
+};
+
+#define DRB_MSG_PACKET_LEN GENMASK(31, 16)
+#define DRB_MSG_DW1_RES GENMASK(15, 3)
+#define DRB_MSG_CONT BIT(2)
+#define DRB_MSG_DTYP GENMASK(1, 0)
+
+#define DRB_MSG_DW2_RES GENMASK(31, 30)
+#define DRB_MSG_L4_CHK BIT(29)
+#define DRB_MSG_IP_CHK BIT(28)
+#define DRB_MSG_RES2 BIT(27)
+#define DRB_MSG_NETWORK_TYPE GENMASK(26, 24)
+#define DRB_MSG_CHANNEL_ID GENMASK(23, 16)
+#define DRB_MSG_COUNT_L GENMASK(15, 0)
+
+struct dpmaif_drb_skb {
+ struct sk_buff *skb;
+ dma_addr_t bus_addr;
+ unsigned short data_len;
+ u16 config;
+};
+
+#define DRB_SKB_IS_LAST BIT(15)
+#define DRB_SKB_IS_FRAG BIT(14)
+#define DRB_SKB_IS_MSG BIT(13)
+#define DRB_SKB_DRB_IDX GENMASK(12, 0)
+
+int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt,
+ struct sk_buff *skb);
+void t7xx_dpmaif_tx_thread_rel(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_txq_free(struct dpmaif_tx_queue *txq);
+void t7xx_dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask);
+int t7xx_dpmaif_txq_init(struct dpmaif_tx_queue *txq);
+void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_tx_clear(struct dpmaif_ctrl *dpmaif_ctrl);
+
+#endif /* __T7XX_HIF_DPMA_TX_H__ */
--
2.17.1

2022-01-16 06:46:28

by Loic Poulain

[permalink] [raw]
Subject: Re: [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

On Fri, 14 Jan 2022 at 02:06, Ricardo Martinez
<[email protected]> wrote:
>
> t7xx is the PCIe host device driver for Intel 5G 5000 M.2 solution which
> is based on MediaTek's T700 modem to provide WWAN connectivity.
> The driver uses the WWAN framework infrastructure to create the following
> control ports and network interfaces:
> * /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
> Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
> with [3][4] can use it to enable data communication towards WWAN.
> * /dev/wwan0at0 - Interface that supports AT commands.
> * wwan0 - Primary network interface for IP traffic.
>
> The main blocks in t7xx driver are:
> * PCIe layer - Implements probe, removal, and power management callbacks.
> * Port-proxy - Provides a common interface to interact with different types
> of ports such as WWAN ports.
> * Modem control & status monitor - Implements the entry point for modem
> initialization, reset and exit, as well as exception handling.
> * CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
> control messages to the modem using MediaTek's CCCI (Cross-Core
> Communication Interface) protocol.
> * DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
> uplink and downlink queues for the data path. The data exchange takes
> place using circular buffers to share data buffer addresses and metadata
> to describe the packets.
> * MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
> for bidirectional event notification such as handshake, exception, PM and
> port enumeration.
>
> The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
> option which depends on CONFIG_WWAN.
> This driver was originally developed by MediaTek. Intel adapted t7xx to
> the WWAN framework, optimized and refactored the driver source in close
> collaboration with MediaTek. This will enable getting the t7xx driver on
> Approved Vendor List for interested OEM's and ODM's productization plans
> with Intel 5G 5000 M.2 solution.

From a WWAN framework perspective:

Reviewed-by: Loic Poulain <[email protected]>

2022-02-07 17:34:49

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
> for initialization, ISR, control and event handling of TX/RX flows.
>
> DPMAIF TX
> Exposes the `dpmaif_tx_send_skb` function which can be used by the
> network device to transmit packets.
> The uplink data management uses a Descriptor Ring Buffer (DRB).
> First DRB entry is a message type that will be followed by 1 or more
> normal DRB entries. Message type DRB will hold the skb information
> and each normal DRB entry holds a pointer to the skb payload.
>
> DPMAIF RX
> The downlink buffer management uses Buffer Address Table (BAT) and
> Packet Information Table (PIT) rings.
> The BAT ring holds the address of skb data buffer for the HW to use,
> while the PIT contains metadata about a whole network packet including
> a reference to the BAT entry holding the data buffer address.
> The driver reads the PIT and BAT entries written by the modem, when
> reaching a threshold, the driver will reload the PIT and BAT rings.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
> ---

> + unsigned short last_ch_id;
Value is never used.

> + if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx) {
> + dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
> + return -EINVAL;
> + }
> +
> + if (new_wr_idx >= bat_req->bat_size_cnt) {
> + new_wr_idx -= bat_req->bat_size_cnt;
> + if (new_wr_idx >= old_rl_idx) {
> + dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
> + return -EINVAL;
> + }

Make a label for the identical block and goto there.
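
Something like this (a sketch only, reusing the names from the quoted hunk;
the label name is arbitrary):

	if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx)
		goto err_flow_check;

	if (new_wr_idx >= bat_req->bat_size_cnt) {
		new_wr_idx -= bat_req->bat_size_cnt;
		if (new_wr_idx >= old_rl_idx)
			goto err_flow_check;
	}
...
err_flow_check:
	dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
	return -EINVAL;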

> +static void t7xx_unmap_bat_skb(struct device *dev, struct dpmaif_bat_skb *bat_skb_base,
> + unsigned int index)
> +{
> + struct dpmaif_bat_skb *bat_skb = bat_skb_base + index;
> +
> + if (bat_skb->skb) {
> + dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
> + kfree_skb(bat_skb->skb);

For consistency, dev_kfree_skb?

> + * @initial: Indicates if the ring is being populated for the first time.
> + *
> + * Allocate skb and store the start address of the data buffer into the BAT ring.
> + * If this is not the initial call, notify the HW about the new entries.
> + *
> + * Return:
> + * * 0 - Success.
> + * * -ERROR - Error code from failure sub-initializations.
> + */
> +int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
> + const struct dpmaif_bat_request *bat_req,
> + const unsigned char q_num, const unsigned int buf_cnt,
> + const bool initial)

vs its prototype:

+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+ const struct dpmaif_bat_request *bat_req, const unsigned char q_num,
+ const unsigned int buf_cnt, const bool first_time);

> +int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
> + const unsigned int buf_cnt, const bool initial)
> +{
> + struct dpmaif_bat_page *bat_skb = bat_req->bat_skb;
> + unsigned short cur_bat_idx = bat_req->bat_wr_idx;
> + unsigned int buf_space;
> + int ret, i;
...
> + ret = i < buf_cnt ? -ENOMEM : 0;
> + if (ret && initial) {

int ret = 0, i;
...
if (i < buf_cnt) {
ret = -ENOMEM;
if (initial) {
...
}
}

> + if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
> + cb = dpmaif_ctrl->callbacks;
> + cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
> + return -EBUSY;
> + }
> +
> + skb->cb[TX_CB_QTYPE] = txqt;
> + skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
> +
> + spin_lock_irqsave(&txq->tx_skb_lock, flags);
> + list_add_tail(&skb->list, &txq->tx_skb_queue);
> + txq->tx_submit_skb_cnt++;
> + spin_unlock_irqrestore(&txq->tx_skb_lock, flags);

Perhaps the critical section needs to start earlier to enforce that
tx_list_max_len check?
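
For example, roughly (sketch only; existing names kept, error path otherwise
unchanged):

	spin_lock_irqsave(&txq->tx_skb_lock, flags);
	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
		cb = dpmaif_ctrl->callbacks;
		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
		return -EBUSY;
	}

	skb->cb[TX_CB_QTYPE] = txqt;
	skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
	list_add_tail(&skb->list, &txq->tx_skb_queue);
	txq->tx_submit_skb_cnt++;
	spin_unlock_irqrestore(&txq->tx_skb_lock, flags);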


(I'm yet to read half of this patch...)

--
i.


2022-02-09 08:53:45

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
> for initialization, ISR, control and event handling of TX/RX flows.
>
> DPMAIF TX
> Exposes the `dpmaif_tx_send_skb` function which can be used by the
> network device to transmit packets.
> The uplink data management uses a Descriptor Ring Buffer (DRB).
> First DRB entry is a message type that will be followed by 1 or more
> normal DRB entries. Message type DRB will hold the skb information
> and each normal DRB entry holds a pointer to the skb payload.
>
> DPMAIF RX
> The downlink buffer management uses Buffer Address Table (BAT) and
> Packet Information Table (PIT) rings.
> The BAT ring holds the address of skb data buffer for the HW to use,
> while the PIT contains metadata about a whole network packet including
> a reference to the BAT entry holding the data buffer address.
> The driver reads the PIT and BAT entries written by the modem, when
> reaching a threshold, the driver will reload the PIT and BAT rings.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
> ---

> + bat_req->bat_mask[idx] = 1;
...
> + if (!bat_req->bat_mask[index])
...
> + bat->bat_mask[index] = 0;

Seem to be linux/bitmap.h

I wonder though if the loop in t7xx_dpmaif_avail_pkt_bat_cnt()
could be replaced with arithmetic calculation based on
bat_release_rd_idx and some other idx? It would make the bitmap
unnecessary.
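
For reference, with linux/bitmap.h this would look roughly like the
following (the bat_bitmap field name is only illustrative):

	bat_req->bat_bitmap = bitmap_zalloc(bat_req->bat_size_cnt, GFP_KERNEL);
	...
	set_bit(idx, bat_req->bat_bitmap);
	...
	if (!test_bit(index, bat_req->bat_bitmap))
	...
	clear_bit(index, bat->bat_bitmap);

(plus bitmap_free() on teardown).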

> +static int t7xx_dpmaif_rx_start(struct dpmaif_rx_queue *rxq, const unsigned short pit_cnt,
> + const unsigned long timeout)
> +{
> + struct device *dev = rxq->dpmaif_ctrl->dev;
> + struct dpmaif_cur_rx_skb_info *skb_info;
> + unsigned short rx_cnt, recv_skb_cnt = 0;

unsigned int

I'd also use unsigned int for all those local variables dealing
with the indexes instead of unsigned short.

> +static int t7xx_dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
> + const unsigned char q_num, const int budget)
> +{
> + struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
> + unsigned long time_limit;
> + unsigned int cnt;
> +
> + time_limit = jiffies + msecs_to_jiffies(DPMAIF_WQ_TIME_LIMIT_MS);
> +
> + do {
> + unsigned int rd_cnt;
> + int real_cnt;
> +
> + cnt = t7xx_dpmaifq_poll_pit(rxq);
> + if (!cnt)
> + break;
> +
> + if (!rxq->pit_base)
> + return -EAGAIN;
> +
> + rd_cnt = cnt > budget ? budget : cnt;

min_t or min (after making budget const unsigned int).
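
i.e., keeping the names from the quoted hunk:

	rd_cnt = min_t(unsigned int, cnt, budget);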

> + real_cnt = t7xx_dpmaif_rx_start(rxq, rd_cnt, time_limit);
> + if (real_cnt < 0)
> + return real_cnt;
> +
> + if (real_cnt < cnt)
> + return -EAGAIN;
> +
> + } while (cnt);

With the break already inside the loop for the same condition,
this check is dead code.

> + hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
> +
> + new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;

Is DPMAIF_UL_DRB_ENTRY_WORD the size of an entry? In that case it
would probably make sense to put it inside t7xx_dpmaif_ul_get_rd_idx?

> + if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {

Is DPMAIF_DRB_ENTRY_SIZE telling the number of entries rather than
an "ENTRY_SIZE"? I think both of these constants could likely be named
better.

> + drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG));
> + drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CONT, 1));
> + drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));
> +
> + drb->header_dw2 = cpu_to_le32(FIELD_PREP(DRB_MSG_COUNT_L, count_l));
> + drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CHANNEL_ID, channel_id));
> + drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_L4_CHK, 1));

I'd do:
drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG) |
FIELD_PREP(DRB_MSG_CONT, 1) |
FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));


> +static void t7xx_setup_payload_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
> + unsigned short cur_idx, dma_addr_t data_addr,
> + unsigned int pkt_size, char last_one)

bool last_one

> + struct skb_shared_info *info;

This variable is usually called shinfo.

> + spin_lock_irqsave(&txq->tx_lock, flags);
> + cur_idx = txq->drb_wr_idx;
> + drb_wr_idx_backup = cur_idx;
> +
> + txq->drb_wr_idx += send_cnt;
> + if (txq->drb_wr_idx >= txq->drb_size_cnt)
> + txq->drb_wr_idx -= txq->drb_size_cnt;
> +
> + t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
> + t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
> + spin_unlock_irqrestore(&txq->tx_lock, flags);
> +
> + cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
> +
> + for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
> + if (!wr_cnt) {
> + data_len = skb_headlen(skb);
> + data_addr = skb->data;
> + is_frag = false;
> + } else {
> + skb_frag_t *frag = info->frags + wr_cnt - 1;
> +
> + data_len = skb_frag_size(frag);
> + data_addr = skb_frag_address(frag);
> + is_frag = true;
> + }
> +
> + if (wr_cnt == payload_cnt - 1)
> + is_last_one = true;
> +
> + /* TX mapping */
> + bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
> + if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
> + dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
> + atomic_set(&txq->tx_processing, 0);
> +
> + spin_lock_irqsave(&txq->tx_lock, flags);
> + txq->drb_wr_idx = drb_wr_idx_backup;
> + spin_unlock_irqrestore(&txq->tx_lock, flags);

Hmm, can txq's drb_wr_idx get updated (or cleared) by something else
in between these critical sections?

That "TX mapping" comment seems to just state the obvious.

> +static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
> +{
> + int drb_remain_cnt, i;
> + unsigned long flags;
> + int drb_cnt = 0;
> + int ret = 0;
> +
> + spin_lock_irqsave(&txq->tx_lock, flags);
> + drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
> + txq->drb_wr_idx, DPMAIF_WRITE);
> + spin_unlock_irqrestore(&txq->tx_lock, flags);
> +
> + for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
> + struct sk_buff *skb;
> +
> + spin_lock_irqsave(&txq->tx_skb_lock, flags);
> + skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
> + spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
> +
> + if (!skb)
> + break;
> +
> + if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
> + spin_lock_irqsave(&txq->tx_lock, flags);
> + drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
> + txq->drb_release_rd_idx,
> + txq->drb_wr_idx, DPMAIF_WRITE);
> + spin_unlock_irqrestore(&txq->tx_lock, flags);
> + continue;
> + }
...
> + if (drb_cnt > 0) {
> + txq->drb_lack = false;
> + ret = drb_cnt;
> + } else if (ret == -ENOMEM) {
> + txq->drb_lack = true;

Based on the variable name, I'd expect drb_lack to be set to true when
drb_remain_cnt < skb->cb[TX_CB_DRB_CNT] occurs, but that doesn't
happen. Maybe that if branch within the loop should set ret = -ENOMEM;
before the continue?

It would be nice if the drb check here and in
t7xx_check_tx_queue_drb_available could be consolidated into
a single place. That requires small refactoring (adding __
variant of that function which does just the check).
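
Something like this (the helper name is only a suggestion):

static unsigned int __t7xx_txq_drbs_available(struct dpmaif_tx_queue *txq)
{
	/* Caller holds txq->tx_lock */
	return t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
					 txq->drb_wr_idx, DPMAIF_WRITE);
}

static bool t7xx_check_tx_queue_drb_available(struct dpmaif_tx_queue *txq,
					      unsigned int send_drb_cnt)
{
	unsigned int drb_remain_cnt;
	unsigned long flags;

	spin_lock_irqsave(&txq->tx_lock, flags);
	drb_remain_cnt = __t7xx_txq_drbs_available(txq);
	spin_unlock_irqrestore(&txq->tx_lock, flags);

	return drb_remain_cnt >= send_drb_cnt;
}

t7xx_txq_burst_send_skb() could then call __t7xx_txq_drbs_available() under
its own locking instead of open-coding the count.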

Please also check the other comments on skb->cb below.

> + txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
> + if (txq_id >= 0) {

t7xx_select_tx_queue used to do the que_started check (in v2) but it
doesn't anymore, so this if is always true these days. I'm left to
wonder, though, whether it was OK to drop that que_started check.

> +static unsigned char t7xx_get_drb_cnt_per_skb(struct sk_buff *skb)
> +{
> + /* Normal DRB (frags data + skb linear data) + msg DRB */
> + return skb_shinfo(skb)->nr_frags + 2;
> +}

I'd rename this to t7xx_skb_drb_cnt().

> +int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
> +{
> + bool tx_drb_available = true;
...
> + send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
> +
> + txq = &dpmaif_ctrl->txq[txqt];
> + if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
> + tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
> +
> + if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {

Because of the modulo check, DRBs might not be available even though
the variable claims they are. Is that intentional?

> + if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
> + kfree_skb(drb_skb->skb);

dev_kfree_...?


> +void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
> +{
> + int i;
> +
> + for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
> + struct dpmaif_tx_queue *txq;
> + int count;
> +
> + txq = &dpmaif_ctrl->txq[i];
> + txq->que_started = false;
> + /* Ensure tx_processing is changed to 1 before actually begin TX flow */
> + smp_mb();
> +
> + /* Confirm that SW will not transmit */
> + count = 0;
> +
> + while (atomic_read(&txq->tx_processing)) {

That "Ensure ..." comment should be reworded as it makes little
sense as is for 2 reasons:
- We're in _stop, not begin tx func
- tx_processing isn't changed to 1 here

> +/* SKB control buffer indexed values */
> +#define TX_CB_NETIF_IDX 0
> +#define TX_CB_QTYPE 1
> +#define TX_CB_DRB_CNT 2

The normal way of storing a struct to skb->cb area is:

struct t7xx_skb_cb {
u8 netif_idx;
u8 qtype;
u8 drb_cnt;
};

#define T7XX_SKB_CB(__skb) ((struct t7xx_skb_cb *)&((__skb)->cb[0]))

However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
patchset? And it seems to me that drb_cnt is a value that could be always
derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
stored?
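
To make that concrete, in t7xx_dpmaif_tx_send_skb() it would mean roughly
(assuming the struct above; sketch only):

	struct t7xx_skb_cb *skb_cb = T7XX_SKB_CB(skb);

	skb_cb->qtype = txqt;
	/* no need to store drb_cnt, recompute it where it is consumed: */
	send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);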

> +#define DRB_PD_DATA_LEN ((u32)GENMASK(31, 16))
Drop the cast?

> +struct dpmaif_drb_skb {
...
> + u16 config;
> +};
> +
> +#define DRB_SKB_IS_LAST BIT(15)
> +#define DRB_SKB_IS_FRAG BIT(14)
> +#define DRB_SKB_IS_MSG BIT(13)
> +#define DRB_SKB_DRB_IDX GENMASK(12, 0)

These are not HW related (don't care about endianness)? I guess
a C bitfield could be used for them.
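
i.e., something along these lines (sketch; the widths mirror the masks above):

struct dpmaif_drb_skb {
	struct sk_buff *skb;
	dma_addr_t bus_addr;
	unsigned short data_len;
	u16 drb_idx:13;
	u16 is_msg:1;
	u16 is_frag:1;
	u16 is_last:1;
};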


--
i.


2022-02-11 06:14:54

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Implements suspend, resume, freeze, thaw, poweroff, and restore
> `dev_pm_ops` callbacks.
>
> From the host point of view, the t7xx driver is one entity. But, the
> device has several modules that need to be addressed in different ways
> during power management (PM) flows.
> The driver uses the term 'PM entities' to refer to the 2 DPMA and
> 2 CLDMA HW blocks that need to be managed during PM flows.
> When a dev_pm_ops function is called, the PM entities list is iterated
> and the matching function is called for each entry in the list.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
> ---


> if (ret) {
> dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
> + t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
> return NULL;

Print after release.


> +static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
> +{
> + struct t7xx_pci_dev *t7xx_dev;
> + struct md_pm_entity *entity;
> + unsigned long wait_ret;
> + enum t7xx_pm_id id;
> + int ret = 0;
> +
> + t7xx_dev = pci_get_drvdata(pdev);
> + if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
> + dev_err(&pdev->dev,
> + "[PM] Exiting suspend, because handshake failure or in an exception\n");
> + return -EFAULT;
> + }
> +
> + iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
> +
> + ret = t7xx_wait_pm_config(t7xx_dev);
> + if (ret)
> + return ret;

Do you need to roll back the iowrite?
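
Something like this, if DIS_ASPM_LOWPWR_CLR_0 is indeed the undo for the
SET register (sketch based only on the quoted code):

	ret = t7xx_wait_pm_config(t7xx_dev);
	if (ret) {
		iowrite32(L1_DISABLE_BIT(0),
			  IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
		return ret;
	}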

> + atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
> + t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
> + t7xx_dev->rgu_pci_irq_en = false;
> +
> + list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> + if (entity->suspend) {
> + ret = entity->suspend(t7xx_dev, entity->entity_param);
> + if (ret) {
> + id = entity->id;
> + break;
> + }
> + }
> + }
> +
> + if (ret) {
> + dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, id);
> +
> + list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> + if (id == entity->id)
> + break;
> +
> + if (entity->resume)
> + entity->resume(t7xx_dev, entity->entity_param);
> + }
> +
> + goto suspend_complete;

So the suspend failure path(?) jumps to "suspend_complete"?

> + }
> +
> + reinit_completion(&t7xx_dev->pm_sr_ack);
> + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ);
> + wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
> + msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
> + if (!wait_ret)
> + dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-MD\n");
> +
> + reinit_completion(&t7xx_dev->pm_sr_ack);
> + t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ_AP);
> + wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
> + msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
> + if (!wait_ret)
> + dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-SAP\n");
> +
> + list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> + if (entity->suspend_late)
> + entity->suspend_late(t7xx_dev, entity->entity_param);
> + }
> +
> +suspend_complete:
> + iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
> +
> + if (ret) {
> + atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
> + t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
> + }
> +
> + return ret;
> +}

Please check all paths in this function. I found enough oddities that I
could not convince myself I understood it all or found all the problems.
For example, it looks as if an OK-path return might be missing above the
misnamed suspend_complete label (but then there's the if (ret) below it,
which is kind of a counterargument to my reasoning).

I've no comments on patches 11-13.

--
i.


2022-02-16 07:38:46

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface


On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <[email protected]>
>>
>> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
>> for initialization, ISR, control and event handling of TX/RX flows.
...
>
>> + spin_lock_irqsave(&txq->tx_lock, flags);
>> + cur_idx = txq->drb_wr_idx;
>> + drb_wr_idx_backup = cur_idx;
>> +
>> + txq->drb_wr_idx += send_cnt;
>> + if (txq->drb_wr_idx >= txq->drb_size_cnt)
>> + txq->drb_wr_idx -= txq->drb_size_cnt;
>> +
>> + t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
>> + t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
>> + spin_unlock_irqrestore(&txq->tx_lock, flags);
>> +
>> + cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
>> +
>> + for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
>> + if (!wr_cnt) {
>> + data_len = skb_headlen(skb);
>> + data_addr = skb->data;
>> + is_frag = false;
>> + } else {
>> + skb_frag_t *frag = info->frags + wr_cnt - 1;
>> +
>> + data_len = skb_frag_size(frag);
>> + data_addr = skb_frag_address(frag);
>> + is_frag = true;
>> + }
>> +
>> + if (wr_cnt == payload_cnt - 1)
>> + is_last_one = true;
>> +
>> + /* TX mapping */
>> + bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
>> + if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
>> + dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
>> + atomic_set(&txq->tx_processing, 0);
>> +
>> + spin_lock_irqsave(&txq->tx_lock, flags);
>> + txq->drb_wr_idx = drb_wr_idx_backup;
>> + spin_unlock_irqrestore(&txq->tx_lock, flags);
> Hmm, can txq's drb_wr_idx get updated (or cleared) by something else
> in between these critical sections?
drb_wr_idx cannot be modified in between, but it can be used to calculate
the number of DRBs available, which shouldn't be a problem.
The function reserves the DRBs at the beginning; in the rare case of an
error it will release them.
...
> + txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
> + if (txq_id >= 0) {
> t7xx_select_tx_queue used to do the que_started check (in v2) but it
> doesn't anymore, so this if is always true these days. I'm left to
> wonder, though, whether it was OK to drop that que_started check.

The que_started check wasn't supposed to be dropped; I'll add it back.

...

>> +/* SKB control buffer indexed values */
>> +#define TX_CB_NETIF_IDX 0
>> +#define TX_CB_QTYPE 1
>> +#define TX_CB_DRB_CNT 2
> The normal way of storing a struct to skb->cb area is:
>
> struct t7xx_skb_cb {
> u8 netif_idx;
> u8 qtype;
> u8 drb_cnt;
> };
>
> #define T7XX_SKB_CB(__skb) ((struct t7xx_skb_cb *)&((__skb)->cb[0]))
>
> However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
> patchset? And it seems to me that drb_cnt is a value that could be always
> derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
> stored?

The next iteration will contain t7xx_tx_skb_cb and t7xx_rx_skb_cb
structures.

Also, q_number is going to be used instead of qtype.

Only one queue is used, but I think we can keep this code generic as it
is straightforward (not like the drb_lack case). Any thoughts?

>> +#define DRB_PD_DATA_LEN ((u32)GENMASK(31, 16))
> Drop the cast?

The cast was added to avoid a compiler warning about truncated bits.

I'll move it to the place where it is required:

drb->header &= cpu_to_le32(~(u32)DRB_PD_DATA_LEN);

...

2022-02-16 15:17:10

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface

On Tue, 15 Feb 2022, Martinez, Ricardo wrote:
> On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> > On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> >
> > > +/* SKB control buffer indexed values */
> > > +#define TX_CB_NETIF_IDX 0
> > > +#define TX_CB_QTYPE 1
> > > +#define TX_CB_DRB_CNT 2
> > The normal way of storing a struct to skb->cb area is:
> >
> > struct t7xx_skb_cb {
> > u8 netif_idx;
> > u8 qtype;
> > u8 drb_cnt;
> > };
> >
> > #define T7XX_SKB_CB(__skb) ((struct t7xx_skb_cb *)&((__skb)->cb[0]))
> >
> > However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
> > patchset? And it seems to me that drb_cnt is a value that could be always
> > derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
> > stored?
>
> The next iteration will contain t7xx_tx_skb_cb and t7xx_rx_skb_cb structures.

Ah, I didn't even notice the other one. Why differentiate them? There's
enough space in the cb area and netif_idx is in both anyway. (A union
inside the struct could be used if space were short and TX/RX differed,
but that is not needed here now.)
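
i.e., a single shared layout would be enough, something like (the RX-only
field is just a placeholder):

struct t7xx_skb_cb {
	u8 netif_idx;	/* TX and RX */
	u8 txq_number;	/* TX only */
	u8 drb_cnt;	/* TX only */
	u8 rx_pkt_type;	/* RX only, placeholder */
};

#define T7XX_SKB_CB(__skb) ((struct t7xx_skb_cb *)&((__skb)->cb[0]))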

> Also, q_number is going to be used instead of qtype.
>
> Only one queue is used, but I think we can keep this code generic as it
> is straightforward (not like the drb_lack case). Any thoughts?

I don't mind if you find it useful.


--
i.

2022-02-22 22:10:43

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface


On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <[email protected]>
>>
>> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
>> for initialization, ISR, control and event handling of TX/RX flows.
...
>> + bat_req->bat_mask[idx] = 1;
> ...
>> + if (!bat_req->bat_mask[index])
> ...
>> + bat->bat_mask[index] = 0;
> Seem to be linux/bitmap.h
>
> I wonder though if the loop in t7xx_dpmaif_avail_pkt_bat_cnt()
> could be replaced with arithmetic calculation based on
> bat_release_rd_idx and some other idx? It would make the bitmap
> unnecessary.

A bitmap is needed since entries could be returned out of order.

...

>> + hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
>> +
>> + new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;
> Is DPMAIF_UL_DRB_ENTRY_WORD the size of an entry? In that case it
> would probably make sense to put it inside t7xx_dpmaif_ul_get_rd_idx?
Yes, moving this into t7xx_dpmaif_ul_get_rd_idx()
>> + if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {
> Is DPMAIF_DRB_ENTRY_SIZE telling the number of entries rather than
> an "ENTRY_SIZE"? I think both of these constants could likely be named
> better.
...
>> +static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
>> +{
>> + int drb_remain_cnt, i;
>> + unsigned long flags;
>> + int drb_cnt = 0;
>> + int ret = 0;
>> +
>> + spin_lock_irqsave(&txq->tx_lock, flags);
>> + drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
>> + txq->drb_wr_idx, DPMAIF_WRITE);
>> + spin_unlock_irqrestore(&txq->tx_lock, flags);
>> +
>> + for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
>> + struct sk_buff *skb;
>> +
>> + spin_lock_irqsave(&txq->tx_skb_lock, flags);
>> + skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
>> + spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
>> +
>> + if (!skb)
>> + break;
>> +
>> + if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
>> + spin_lock_irqsave(&txq->tx_lock, flags);
>> + drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
>> + txq->drb_release_rd_idx,
>> + txq->drb_wr_idx, DPMAIF_WRITE);
>> + spin_unlock_irqrestore(&txq->tx_lock, flags);
>> + continue;
>> + }
> ...
>> + if (drb_cnt > 0) {
>> + txq->drb_lack = false;
>> + ret = drb_cnt;
>> + } else if (ret == -ENOMEM) {
>> + txq->drb_lack = true;
> Based on the variable name, I'd expect drb_lack to be set to true when
> drb_remain_cnt < skb->cb[TX_CB_DRB_CNT] occurs, but that doesn't
> happen. Maybe that if branch within the loop should set ret = -ENOMEM;
> before the continue?

This drb_lack logic is going to be dropped since it was intended for
multiple Tx queues but currently only one is used.

> It would be nice if the drb check here and in
> t7xx_check_tx_queue_drb_available could be consolidated into
> a single place. That requires small refactoring (adding __
> variant of that function which does just the check).
>
> Please also check the other comments on skb->cb below.
...
>
>> +int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
>> +{
>> + bool tx_drb_available = true;
> ...
>> + send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
>> +
>> + txq = &dpmaif_ctrl->txq[txqt];
>> + if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
>> + tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
>> +
>> + if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
> Because of the modulo check, DRBs might not be available even though
> the variable claims they are. Is that intentional?

It is intentional. I'll refactor this to do the DRB and tx_list_max_len
checks independently for clarity.
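
Roughly like this (sketch only; locking and the periodic-check optimization
left out, the final code may differ):

	if (txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
		cb = dpmaif_ctrl->callbacks;
		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
		return -EBUSY;
	}

	if (!t7xx_check_tx_queue_drb_available(txq, send_drb_cnt)) {
		cb = dpmaif_ctrl->callbacks;
		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
		return -EBUSY;
	}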

...