2022-02-24 01:17:43

by Martinez, Ricardo

Subject: [PATCH net-next v5 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

t7xx is the PCIe host device driver for the Intel 5G 5000 M.2 solution, which
is based on MediaTek's T700 modem and provides WWAN connectivity.
The driver uses the WWAN framework infrastructure to create the following
control ports and network interfaces:
* /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
with [3][4] can use it to enable data communication towards WWAN.
* /dev/wwan0at0 - Interface that supports AT commands.
* wwan0 - Primary network interface for IP traffic.
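
As a quick illustration of the control-channel ABI, the sketch below opens
the MBIM port and writes an MBIM_OPEN_MSG by hand. This is illustrative
only: real applications should use libmbim; the 16-byte header layout
follows MBIM v1.0 (little-endian fields, so native order on x86 hosts),
and the 4096-byte MaxControlTransfer is an arbitrary example value, not a
driver constant.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

struct mbim_open_msg {                  /* MBIM v1.0 MBIM_OPEN_MSG */
        uint32_t message_type;          /* 0x00000001 */
        uint32_t message_length;        /* total length: 16 bytes */
        uint32_t transaction_id;
        uint32_t max_control_transfer;
};

int main(void)
{
        struct mbim_open_msg msg = {
                .message_type = 0x00000001,
                .message_length = sizeof(msg),
                .transaction_id = 1,
                .max_control_transfer = 4096,
        };
        int fd = open("/dev/wwan0mbim0", O_RDWR);

        if (fd < 0)
                return 1;
        write(fd, &msg, sizeof(msg));   /* MBIM_OPEN_DONE arrives via read() */
        close(fd);
        return 0;
}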

The main blocks in t7xx driver are:
* PCIe layer - Implements probe, removal, and power management callbacks.
* Port-proxy - Provides a common interface to interact with different types
of ports such as WWAN ports.
* Modem control & status monitor - Implements the entry point for modem
initialization, reset and exit, as well as exception handling.
* CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
control messages to the modem using MediaTek's CCCI (Cross-Core
Communication Interface) protocol.
* DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
uplink and downlink queues for the data path. The data exchange takes
place using circular buffers to share data buffer addresses and metadata
to describe the packets.
* MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
for bidirectional event notification such as handshake, exception, PM and
port enumeration.
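
To make the DPMAIF description concrete, here is a deliberately simplified
sketch of that circular-buffer scheme: software and HW share a descriptor
ring carrying buffer addresses plus metadata, each side advancing its own
index. All names and fields below are illustrative; the real DRB/PIT/BAT
layouts are defined in the DPMAIF patches.

#include <linux/types.h>

struct example_ring_desc {
        __le64 buf_addr;        /* DMA address of the data buffer */
        __le16 buf_len;         /* buffer length in bytes */
        __le16 flags;           /* e.g. last-fragment marker */
};

struct example_ring {
        struct example_ring_desc *desc; /* array shared with the HW */
        unsigned int size;              /* number of descriptors */
        unsigned int wr_idx;            /* producer (SW on TX) index */
        unsigned int rd_idx;            /* consumer (HW drain) index */
};

static bool example_ring_full(const struct example_ring *r)
{
        /* One slot is kept empty to tell "full" apart from "empty" */
        return (r->wr_idx + 1) % r->size == r->rd_idx;
}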

The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
option which depends on CONFIG_WWAN.
This driver was originally developed by MediaTek. Intel adapted t7xx to
the WWAN framework, optimized and refactored the driver source code in close
collaboration with MediaTek. This will enable getting the t7xx driver on the
Approved Vendor List for interested OEMs' and ODMs' productization plans
with the Intel 5G 5000 M.2 solution.

List of contributors:
Amir Hanania <[email protected]>
Andriy Shevchenko <[email protected]>
Chandrashekar Devegowda <[email protected]>
Dinesh Sharma <[email protected]>
Eliot Lee <[email protected]>
Haijun Liu <[email protected]>
M Chetan Kumar <[email protected]>
Mika Westerberg <[email protected]>
Moises Veleta <[email protected]>
Pierre-louis Bossart <[email protected]>
Chiranjeevi Rapolu <[email protected]>
Ricardo Martinez <[email protected]>
Madhusmita Sahu <[email protected]>
Muralidharan Sethuraman <[email protected]>
Soumya Prakash Mishra <[email protected]>
Sreehari Kancharla <[email protected]>
Suresh Nagaraj <[email protected]>

[1] https://www.freedesktop.org/software/libmbim/
[2] https://www.freedesktop.org/software/ModemManager/
[3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
[4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523

v5:
- Update Intel's copyright years to 2021-2022.
- Remove circular dependency between DPMAIF HW (07) and HIF (08).
- Keep separate patches for CLDMA (02) and Core (03)
but improve the code split by decoupling CLDMA from
modem ops and cleaning up t7xx_common.h.
- Rename ID_CLDMA0/ID_CLDMA1 to CLDMA_ID_AP/CLDMA_ID_MD.
- Consistently use CLDMA's ring_lock to protect tr_ring.
- Free resources first and then print messages.
- Implement suggested name changes.
- Do not explicitly include dev_printk.h.
- Remove redundant dev_err()s.
- Fix possible memory leak during probe.
- Remove infrastructure for legacy interrupts.
- Remove unused macros and variables, including those that
can be replaced with constants.
- Remove PCIE_MAC_MSIX_MSK_SET macro which is duplicated code.
- Refactor __t7xx_pci_pm_suspend() for clarity.
- Refactor t7xx_cldma_rx_ring_init() and t7xx_cldma_tx_ring_init().
- Do not use & for function callbacks.
- Declare a structure to access skb->cb[] data (see the sketch after this list).
- Use skb_put_data instead of memcpy.
- No need to use kfree_sensitive.
- Use dev_kfree_skb() instead of kfree_skb().
- Refactor t7xx_prepare_device_rt_data() to remove potential leaks,
avoid unneeded memset and keep rt_data and packet_size updates
inside the same 'if' block.
- Set port's rx_length_th back to 0 during uninit.
- Remove unneeded 'blocking' parameter from t7xx_cldma_send_skb().
- Return -EIO in t7xx_cldma_send_skb() if the queue is inactive.
- Refactor t7xx_cldma_qs_are_active() to use pci_device_is_present().
- Simplify t7xx_cldma_stop_q() and rename it to t7xx_cldma_stop_all_qs().
- Fix potential leaks in t7xx_cldma_init().
- Improve error handling in fsm_append_event and fsm_routine_starting().
- Propagate return codes from fsm_append_cmd() and t7xx_fsm_append_event().
- Refactor fsm_wait_for_event() to avoid unnecessary sleep.
- Create the WWAN ports and net device only after the modem is in
the ready state.
- Refactor t7xx_port_proxy_recv_skb() and port_recv_skb().
- Rename t7xx_port_check_rx_seq_num() as t7xx_port_next_rx_seq_num()
and fix the seq_num logic to handle overflows.
- Declare seq_nums as u16 instead of short.
- Use unsigned int for local indexes.
- Use min_t instead of the ternary operator.
- Refactor the loop in t7xx_dpmaif_rx_data_collect() to avoid a dead
condition check.
- Use a bitmap (bat_bitmap) instead of an array to keep track of
the DRB status. Used in t7xx_dpmaif_avail_pkt_bat_cnt().
- Refactor t7xx_dpmaif_tx_send_skb() to protect tx_submit_skb_cnt
with spinlock and remove the misleading tx_drb_available variable.
- Consolidate bit operations before endianness conversion.
- Use C bit fields in dpmaif_drb_skb struct which is not HW related.
- Add back the que_started check in t7xx_select_tx_queue().
- Create a helper function to get the DRB count.
- Simplify the use of 'usage' during t7xx_ccmni_close().
- Enforce CCMNI MTU selection with BUILD_BUG_ON() instead of a comment.
- Remove t7xx_ccmni_ctrl->capability parameter which remains constant.
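
A minimal sketch of the skb->cb[] item above: private per-packet metadata
is declared as a struct overlaid on the 48-byte control buffer, with a
BUILD_BUG_ON() guarding the size. The names here are placeholders, not the
actual t7xx definitions.

#include <linux/build_bug.h>
#include <linux/skbuff.h>
#include <linux/stddef.h>

struct example_skb_cb {
        u8 netif_idx;
        u8 rx_pkt_type;
};

#define EXAMPLE_SKB_CB(skb) ((struct example_skb_cb *)((skb)->cb))

static inline void example_set_netif(struct sk_buff *skb, u8 idx)
{
        /* cb[] is 48 bytes; make sure the private struct fits */
        BUILD_BUG_ON(sizeof(struct example_skb_cb) >
                     sizeof_field(struct sk_buff, cb));
        EXAMPLE_SKB_CB(skb)->netif_idx = idx;
}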

v4:
- Implement list_prev_entry_circular() and list_next_entry_circular() macros
  (see the sketch after this list).
- Remove inline from all c files.
- Define ioread32_poll_timeout_atomic() helper macro.
- Fix return code for WWAN port tx op.
- Allow AT commands fragmentation same as MBIM commands.
- Introduce t7xx_common.h file in the first patch.
- Rename functions and variables as suggested in v3.
- Reduce code duplication by creating fsm_wait_for_event() helper function.
- Remove unneeded dev_err in t7xx_fsm_clr_event().
- Remove unused variable last_state from struct t7xx_fsm_ctl.
- Remove unused variable txq_select_times from struct dpmaif_ctrl.
- Replace ETXTBSY with EBUSY.
- Refactor t7xx_dpmaif_rx_buf_alloc() to remove an unneeded allocation.
- Fix potential leak at t7xx_dpmaif_rx_frag_alloc().
- Simplify return value handling at t7xx_dpmaif_rx_start().
- Add a helper to handle the common part of CCCI header initialization.
- Make sure interrupts are enabled during PM resume.
- Add a parameter to t7xx_fsm_append_cmd() to tell if it is in interrupt context.
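
The circular-iteration helpers mentioned in the first item map to patch 01
and boil down to roughly the following (sketch only; see the patch itself
for the final kdoc and placement in include/linux/list.h):

#define list_next_entry_circular(pos, head, member) \
        (list_is_last(&(pos)->member, head) ? \
        list_first_entry(head, typeof(*(pos)), member) : \
        list_next_entry(pos, member))

#define list_prev_entry_circular(pos, head, member) \
        (list_is_first(&(pos)->member, head) ? \
        list_last_entry(head, typeof(*(pos)), member) : \
        list_prev_entry(pos, member))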

v3:
- Avoid unneeded ping-pong changes between patches.
- Use t7xx_ prefix in functions.
- Use t7xx_ prefix in generic structs where mtk_ or ccci prefix was used.
- Update Authors/Contributors header.
- Remove skb pools used for control path.
- Remove skb pools used for RX data path.
- Do not use dedicated TX queue for ACK-only packets.
- Remove __packed attribute from GPD structs.
- Remove the infrastructure for test and debug ports.
- Use the skb control buffer to store metadata.
- Get the IP packet type from RX PIT.
- Merge variable declaration and simple assignments.
- Use preferred coding patterns.
- Remove global variables.
- Declare HW facing structure members as little endian (see the sketch after this list).
- Rename goto tags to describe what is going to be done.
- Do not use variable length arrays.
- Remove unneeded blank lines from source code and kdoc headers.
- Use C99 initialization format for port-proxy ports.
- Clean up comments.
- Review included headers.
- Better use of 100 column limit.
- Remove unneeded mb() in CLDMA.
- Remove unneeded spin locks and atomics.
- Handle read_poll_timeout error.
- Use dev_err_ratelimited() where required.
- Fix resource leak when requesting IRQs.
- Use generic DEFAULT_TX_QUEUE_LEN instead of a custom macro.
- Use ETH_DATA_LEN instead of defining WWAN_DEFAULT_MTU.
- Use sizeof() instead of defines when the size of structures is required.
- Remove unneeded code from netdev:
No need to configure HW address length
No need to implement .ndo_change_mtu
Remove random address generation
- Code simplifications by using kernel provided functions and macros such as:
module_pci_driver
PTR_ERR_OR_ZERO
for_each_set_bit
pci_device_is_present
skb_queue_purge
list_prev_entry
__ffs64
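
A short sketch of the little-endian rule above: HW facing members are
declared with __le types and accessed only through the byte-order helpers.
The field names are illustrative, not the actual CLDMA GPD layout.

#include <asm/byteorder.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct example_gpd {
        __le32 data_buf_ptr_l;  /* low 32 bits of buffer DMA address */
        __le32 data_buf_ptr_h;  /* high 32 bits */
        __le16 data_buf_len;    /* buffer length */
};

static void example_gpd_set_addr(struct example_gpd *gpd, dma_addr_t addr)
{
        gpd->data_buf_ptr_l = cpu_to_le32(lower_32_bits(addr));
        gpd->data_buf_ptr_h = cpu_to_le32(upper_32_bits(addr));
}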

v2:
- Replace pdev->driver->name with dev_driver_string(&pdev->dev).
- Replace random_ether_addr() with eth_random_addr().
- Update kernel-doc comment for enum data_policy.
- Indicate the driver is 'Supported' instead of 'Maintained'.
- Fix the Signed-off-by and Co-developed-by tags in the patches.
- Add authors and contributors in the top comment of the source files.

Ricardo Martinez (13):
list: Add list_next_entry_circular() and list_prev_entry_circular()
net: wwan: t7xx: Add control DMA interface
net: wwan: t7xx: Add core components
net: wwan: t7xx: Add port proxy infrastructure
net: wwan: t7xx: Add control port
net: wwan: t7xx: Add AT and MBIM WWAN ports
net: wwan: t7xx: Data path HW layer
net: wwan: t7xx: Add data path interface
net: wwan: t7xx: Add WWAN network interface
net: wwan: t7xx: Introduce power management
net: wwan: t7xx: Runtime PM
net: wwan: t7xx: Device deep sleep lock/unlock
net: wwan: t7xx: Add maintainers and documentation

.../networking/device_drivers/wwan/index.rst | 1 +
.../networking/device_drivers/wwan/t7xx.rst | 120 ++
MAINTAINERS | 11 +
drivers/net/wwan/Kconfig | 14 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/t7xx/Makefile | 20 +
drivers/net/wwan/t7xx/t7xx_cldma.c | 281 ++++
drivers/net/wwan/t7xx/t7xx_cldma.h | 176 +++
drivers/net/wwan/t7xx/t7xx_common.h | 66 +
drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1294 ++++++++++++++++
drivers/net/wwan/t7xx/t7xx_dpmaif.h | 179 +++
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1351 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 142 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 574 +++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 211 +++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1248 +++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h | 114 ++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 729 +++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h | 84 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 122 ++
drivers/net/wwan/t7xx/t7xx_mhccif.h | 37 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 764 ++++++++++
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 87 ++
drivers/net/wwan/t7xx/t7xx_netdev.c | 430 ++++++
drivers/net/wwan/t7xx/t7xx_netdev.h | 56 +
drivers/net/wwan/t7xx/t7xx_pci.c | 768 ++++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 122 ++
drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 263 ++++
drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 31 +
drivers/net/wwan/t7xx/t7xx_port.h | 149 ++
drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 205 +++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 621 ++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 100 ++
drivers/net/wwan/t7xx/t7xx_port_wwan.c | 210 +++
drivers/net/wwan/t7xx/t7xx_reg.h | 352 +++++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 550 +++++++
drivers/net/wwan/t7xx/t7xx_state_monitor.h | 125 ++
include/linux/list.h | 26 +
38 files changed, 11634 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst
create mode 100644 drivers/net/wwan/t7xx/Makefile
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h

--
2.17.1


2022-02-24 01:22:17

by Martinez, Ricardo

Subject: [PATCH net-next v5 13/13] net: wwan: t7xx: Add maintainers and documentation

Adds maintainers and documentation for the MediaTek t7xx 5G WWAN modem
device driver.

Signed-off-by: Ricardo Martinez <[email protected]>

From a WWAN framework perspective:
Reviewed-by: Loic Poulain <[email protected]>
---
.../networking/device_drivers/wwan/index.rst | 1 +
.../networking/device_drivers/wwan/t7xx.rst | 120 ++++++++++++++++++
MAINTAINERS | 11 ++
3 files changed, 132 insertions(+)
create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst

diff --git a/Documentation/networking/device_drivers/wwan/index.rst b/Documentation/networking/device_drivers/wwan/index.rst
index 1cb8c7371401..370d8264d5dc 100644
--- a/Documentation/networking/device_drivers/wwan/index.rst
+++ b/Documentation/networking/device_drivers/wwan/index.rst
@@ -9,6 +9,7 @@ Contents:
:maxdepth: 2

iosm
+ t7xx

.. only:: subproject and html

diff --git a/Documentation/networking/device_drivers/wwan/t7xx.rst b/Documentation/networking/device_drivers/wwan/t7xx.rst
new file mode 100644
index 000000000000..dd5b731957ca
--- /dev/null
+++ b/Documentation/networking/device_drivers/wwan/t7xx.rst
@@ -0,0 +1,120 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+.. Copyright (C) 2020-21 Intel Corporation
+
+.. _t7xx_driver_doc:
+
+============================================
+t7xx driver for MTK PCIe based T700 5G modem
+============================================
+The t7xx driver is a WWAN PCIe host driver developed for Linux and Chrome OS platforms
+for data exchange over the PCIe interface between the host platform and MediaTek's
+T700 5G modem. The driver exposes an interface conforming to the MBIM protocol [1].
+Any front end application (e.g. ModemManager) can manage the MBIM interface to
+enable data communication towards WWAN. The driver also provides an interface to
+interact with the MediaTek modem via AT commands.
+
+Basic usage
+===========
+MBIM & AT functions are inactive when unmanaged. The t7xx driver provides
+WWAN port userspace interfaces representing MBIM & AT control channels and does
+not play any role in managing their functionality. It is the job of a userspace
+application to detect port enumeration and enable MBIM & AT functionalities.
+
+A few examples of such userspace applications are:
+
+- mbimcli (included with the libmbim [2] library), and
+- Modem Manager [3]
+
+Management applications must carry out the following actions to establish an
+MBIM IP session:
+
+- open the MBIM control channel
+- configure network connection settings
+- connect to network
+- configure IP network interface
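+
+For example, one way to open the control channel and query device
+capabilities with mbimcli (exact options depend on the installed mbimcli
+version)::
+
+  mbimcli -d /dev/wwan0mbim0 --query-device-caps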
+
+Management applications must carry out the following actions to send an AT
+command and receive its response:
+
+- open the AT control channel using a UART tool or a special user tool
+
+Management application development
+==================================
+The driver and userspace interfaces are described below. The MBIM protocol is
+described in [1] Mobile Broadband Interface Model v1.0 Errata-1.
+
+MBIM control channel userspace ABI
+----------------------------------
+
+/dev/wwan0mbim0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an MBIM interface to the MBIM function by implementing
+MBIM WWAN Port. The userspace end of the control channel pipe is a
+/dev/wwan0mbim0 character device. The application shall use this interface
+for MBIM protocol communication.
+
+Fragmentation
+~~~~~~~~~~~~~
+The userspace application is responsible for all control message fragmentation
+and defragmentation as per MBIM specification.
+
+/dev/wwan0mbim0 write()
+~~~~~~~~~~~~~~~~~~~~~~~
+The MBIM control messages from the management application must not exceed the
+negotiated control message size.
+
+/dev/wwan0mbim0 read()
+~~~~~~~~~~~~~~~~~~~~~~
+The management application must accept control messages of up to the
+negotiated control message size.
+
+MBIM data channel userspace ABI
+-------------------------------
+
+wwan0-X network device
+~~~~~~~~~~~~~~~~~~~~~~
+The t7xx driver exposes an IP link interface "wwan0-X" of type "wwan" for IP
+traffic. The iproute2 network utility is used to create the "wwan0-X" network
+interface and to associate it with an MBIM IP session.
+
+The userspace management application is responsible for creating a new IP
+link prior to establishing an MBIM IP session where the SessionId is
+greater than 0.
+
+For example, creating a new IP link for an MBIM IP session with SessionId 1::
+
+ ip link add dev wwan0-1 parentdev wwan0 type wwan linkid 1
+
+The driver will automatically map the "wwan0-1" network device to MBIM IP
+session 1.
+
+AT port userspace ABI
+---------------------
+
+/dev/wwan0at0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an AT port by implementing an AT WWAN port.
+The userspace end of the control port is a /dev/wwan0at0 character
+device. The application shall use this interface to issue AT commands.
+
+The MediaTek T700 modem supports the 3GPP TS 27.007 [4] specification.
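+
+For example, the manufacturer identification can be queried over
+/dev/wwan0at0 with the 3GPP TS 27.007 command::
+
+  AT+CGMI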
+
+References
+==========
+[1] *MBIM (Mobile Broadband Interface Model) Errata-1*
+
+- https://www.usb.org/document-library/
+
+[2] *libmbim "a glib-based library for talking to WWAN modems and devices which
+speak the Mobile Interface Broadband Model (MBIM) protocol"*
+
+- http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] *Modem Manager "a DBus-activated daemon which controls mobile broadband
+(2G/3G/4G/5G) devices and connections"*
+
+- http://www.freedesktop.org/wiki/Software/ModemManager/
+
+[4] *Specification # 27.007 - 3GPP*
+
+- https://www.3gpp.org/DynaReport/27007.htm
diff --git a/MAINTAINERS b/MAINTAINERS
index aa0f6cbb634e..1f8c2b26de2b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12338,6 +12338,17 @@ S: Maintained
F: drivers/net/dsa/mt7530.*
F: net/dsa/tag_mtk.c

+MEDIATEK T7XX 5G WWAN MODEM DRIVER
+M: Chandrashekar Devegowda <[email protected]>
+M: Intel Corporation <[email protected]>
+R: Chiranjeevi Rapolu <[email protected]>
+R: Liu Haijun <[email protected]>
+R: M Chetan Kumar <[email protected]>
+R: Ricardo Martinez <[email protected]>
+L: [email protected]
+S: Supported
+F: drivers/net/wwan/t7xx/
+
MEDIATEK USB3 DRD IP DRIVER
M: Chunfeng Yun <[email protected]>
L: [email protected]
--
2.17.1

2022-02-24 01:29:54

by Martinez, Ricardo

Subject: [PATCH net-next v5 11/13] net: wwan: t7xx: Runtime PM

From: Haijun Liu <[email protected]>

Enables runtime power management callbacks, including runtime_suspend
and runtime_resume. Autosuspend is used to avoid the overhead of
frequent wake-ups.
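
The recurring pattern in this patch, shown standalone as a minimal sketch
(example_do_hw_work() is a placeholder, not a driver function):

#include <linux/pm_runtime.h>

static int example_do_hw_work(struct device *dev)
{
        int ret;

        /* Take a runtime PM reference before touching the HW */
        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0 && ret != -EACCES)
                return ret;

        /* ... interact with the device ... */

        /* Reset the autosuspend timer, then drop the reference */
        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
        return 0;
}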

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Eliot Lee <[email protected]>
Signed-off-by: Eliot Lee <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 14 ++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 17 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 15 +++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.c | 22 ++++++++++++++++++++++
4 files changed, 68 insertions(+)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 114b67ebb270..c67eacfd69b5 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -33,6 +33,7 @@
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/pci.h>
+#include <linux/pm_runtime.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
@@ -262,6 +263,8 @@ static void t7xx_cldma_rx_done(struct work_struct *work)
t7xx_cldma_clear_ip_busy(&md_ctrl->hw_info);
t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, queue->index, MTK_RX);
t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, queue->index, MTK_RX);
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
}

static int t7xx_cldma_gpd_tx_collect(struct cldma_queue *queue)
@@ -373,6 +376,9 @@ static void t7xx_cldma_tx_done(struct work_struct *work)
t7xx_cldma_hw_irq_en_txrx(hw_info, queue->index, MTK_TX);
}
spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
}

static void t7xx_cldma_ring_free(struct cldma_ctrl *md_ctrl,
@@ -579,6 +585,7 @@ static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
for_each_set_bit(i, (unsigned long *)&l2_tx_int, L2_INT_BIT_COUNT) {
if (i < CLDMA_TXQ_NUM) {
+ pm_runtime_get(md_ctrl->dev);
t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_TX);
t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_TX);
queue_work(md_ctrl->txq[i].worker,
@@ -603,6 +610,7 @@ static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
l2_rx_int |= l2_rx_int >> CLDMA_RXQ_NUM;
for_each_set_bit(i, (unsigned long *)&l2_rx_int, CLDMA_RXQ_NUM) {
+ pm_runtime_get(md_ctrl->dev);
t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_RX);
t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_RX);
queue_work(md_ctrl->rxq[i].worker, &md_ctrl->rxq[i].cldma_work);
@@ -933,6 +941,10 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
if (qno >= CLDMA_TXQ_NUM)
return -EINVAL;

+ ret = pm_runtime_resume_and_get(md_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return ret;
+
queue = &md_ctrl->txq[qno];

spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
@@ -976,6 +988,8 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
} while (!ret);

allow_sleep:
+ pm_runtime_mark_last_busy(md_ctrl->dev);
+ pm_runtime_put_autosuspend(md_ctrl->dev);
return ret;
}

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index 85fa747078bd..72dd76d90dbb 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -32,6 +32,7 @@
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/netdevice.h>
+#include <linux/pm_runtime.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
@@ -912,6 +913,7 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
{
struct dpmaif_rx_queue *rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+ int ret;

atomic_set(&rxq->rx_processing, 1);
/* Ensure rx_processing is changed to 1 before actually begin RX flow */
@@ -923,7 +925,14 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
return;
}

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;
+
t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
+
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
atomic_set(&rxq->rx_processing, 0);
}

@@ -1126,11 +1135,19 @@ static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
{
struct dpmaif_ctrl *dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);
struct dpmaif_rx_queue *rxq;
+ int ret;
+
+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;

/* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */
rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
t7xx_dpmaif_bat_release_and_add(rxq);
t7xx_dpmaif_frag_bat_release_and_add(rxq);
+
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index f588c729e957..6d1d084c404a 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -28,6 +28,7 @@
#include <linux/list.h>
#include <linux/minmax.h>
#include <linux/netdevice.h>
+#include <linux/pm_runtime.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/skbuff.h>
@@ -164,6 +165,10 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work)
struct dpmaif_hw_info *hw_info;
int ret;

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return;
+
hw_info = &dpmaif_ctrl->hw_info;
ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
if (ret == -EAGAIN ||
@@ -177,6 +182,9 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work)
t7xx_dpmaif_clr_ip_busy_sts(hw_info);
t7xx_dpmaif_unmask_ulq_intr(hw_info, txq->index);
}
+
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

static void t7xx_setup_msg_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
@@ -438,6 +446,7 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
{
struct dpmaif_ctrl *dpmaif_ctrl = arg;
+ int ret;

while (!kthread_should_stop()) {
if (t7xx_tx_lists_are_all_empty(dpmaif_ctrl) ||
@@ -452,7 +461,13 @@ static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
break;
}

+ ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+ if (ret < 0 && ret != -EACCES)
+ return ret;
+
t7xx_do_tx_hw_push(dpmaif_ctrl);
+ pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+ pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
}

return 0;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 880d9448d662..e027be718041 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -31,6 +31,7 @@
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pm.h>
+#include <linux/pm_runtime.h>
#include <linux/pm_wakeup.h>

#include "t7xx_mhccif.h"
@@ -44,6 +45,7 @@
#define T7XX_PCI_EREG_BASE 2

#define PM_ACK_TIMEOUT_MS 1500
+#define PM_AUTOSUSPEND_MS 20000
#define PM_RESOURCE_POLL_TIMEOUT_US 10000
#define PM_RESOURCE_POLL_STEP_US 100

@@ -86,6 +88,8 @@ static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);

iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ pm_runtime_set_autosuspend_delay(&pdev->dev, PM_AUTOSUSPEND_MS);
+ pm_runtime_use_autosuspend(&pdev->dev);

return t7xx_wait_pm_config(t7xx_dev);
}
@@ -100,6 +104,8 @@ void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
D2H_INT_RESUME_ACK_AP);
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+
+ pm_runtime_put_noidle(&t7xx_dev->pdev->dev);
}

static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
@@ -108,6 +114,9 @@ static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
* so just roll back PM setting to the init setting.
*/
atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+ pm_runtime_get_noresume(&t7xx_dev->pdev->dev);
+
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
return t7xx_wait_pm_config(t7xx_dev);
}
@@ -407,6 +416,7 @@ static int __t7xx_pci_pm_resume(struct pci_dev *pdev, bool state_check)
t7xx_dev->rgu_pci_irq_en = true;
t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ pm_runtime_mark_last_busy(&pdev->dev);
atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);

return ret;
@@ -443,6 +453,16 @@ static int t7xx_pci_pm_thaw(struct device *dev)
return __t7xx_pci_pm_resume(to_pci_dev(dev), false);
}

+static int t7xx_pci_pm_runtime_suspend(struct device *dev)
+{
+ return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int t7xx_pci_pm_runtime_resume(struct device *dev)
+{
+ return __t7xx_pci_pm_resume(to_pci_dev(dev), true);
+}
+
static const struct dev_pm_ops t7xx_pci_pm_ops = {
.suspend = t7xx_pci_pm_suspend,
.resume = t7xx_pci_pm_resume,
@@ -452,6 +472,8 @@ static const struct dev_pm_ops t7xx_pci_pm_ops = {
.poweroff = t7xx_pci_pm_suspend,
.restore = t7xx_pci_pm_resume,
.restore_noirq = t7xx_pci_pm_resume_noirq,
+ .runtime_suspend = t7xx_pci_pm_runtime_suspend,
+ .runtime_resume = t7xx_pci_pm_runtime_resume
};

static int t7xx_request_irq(struct pci_dev *pdev)
--
2.17.1

2022-02-24 01:34:00

by Martinez, Ricardo

Subject: [PATCH net-next v5 10/13] net: wwan: t7xx: Introduce power management

From: Haijun Liu <[email protected]>

Implements the suspend, resume, freeze, thaw, poweroff, and restore
`dev_pm_ops` callbacks.

From the host point of view, the t7xx driver is one entity. But, the
device has several modules that need to be addressed in different ways
during power management (PM) flows.
The driver uses the term 'PM entities' to refer to the 2 DPMA and
2 CLDMA HW blocks that need to be managed during PM flows.
When a dev_pm_ops function is called, the PM entities list is iterated
and the matching function is called for each entry in the list.
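
As a minimal usage sketch (the example_* names are placeholders;
t7xx_dpmaif_pm_entity_init() below is the real equivalent), a HW module
fills a struct md_pm_entity and registers it so its callbacks run during
the dev_pm_ops flows:

static int example_suspend(struct t7xx_pci_dev *t7xx_dev,
                           void *entity_param)
{
        /* Quiesce this module's queues and IRQs before the D3 request */
        return 0;
}

static int example_pm_init(struct t7xx_pci_dev *t7xx_dev,
                           struct md_pm_entity *entity, void *ctx)
{
        INIT_LIST_HEAD(&entity->entity);
        entity->id = PM_ENTITY_ID_DATA;
        entity->suspend = example_suspend;
        entity->entity_param = ctx;
        return t7xx_pci_pm_entity_register(t7xx_dev, entity);
}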

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 123 +++++-
drivers/net/wwan/t7xx/t7xx_hif_cldma.h | 1 +
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c | 90 +++++
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h | 1 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 17 +
drivers/net/wwan/t7xx/t7xx_pci.c | 425 +++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 47 +++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 2 +
8 files changed, 705 insertions(+), 1 deletion(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index b2b82e9dff09..114b67ebb270 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1087,6 +1087,120 @@ int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev)
return 0;
}

+static void t7xx_cldma_resume_early(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+ int qno_t;
+
+ hw_info = &md_ctrl->hw_info;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_restore(hw_info);
+ for (qno_t = 0; qno_t < CLDMA_TXQ_NUM; qno_t++) {
+ t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->txq[qno_t].tx_next->gpd_addr,
+ MTK_TX);
+ t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->rxq[qno_t].tr_done->gpd_addr,
+ MTK_RX);
+ }
+ t7xx_cldma_enable_irq(md_ctrl);
+ t7xx_cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+ md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_irq_en_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+ t7xx_cldma_hw_irq_en_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_resume(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ unsigned long flags;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+ t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ if (md_ctrl->hif_id == CLDMA_ID_MD)
+ t7xx_mhccif_mask_clr(t7xx_dev, D2H_SW_INT_MASK);
+
+ return 0;
+}
+
+static void t7xx_cldma_suspend_late(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+
+ hw_info = &md_ctrl->hw_info;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+ t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+ md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_stop_all_qs(hw_info, MTK_RX);
+ t7xx_cldma_clear_ip_busy(hw_info);
+ t7xx_cldma_disable_irq(md_ctrl);
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_suspend(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+ struct cldma_ctrl *md_ctrl = entity_param;
+ struct t7xx_cldma_hw *hw_info;
+ unsigned long flags;
+
+ if (md_ctrl->hif_id == CLDMA_ID_MD)
+ t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK);
+
+ hw_info = &md_ctrl->hw_info;
+
+ spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+ t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_TX);
+ t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_TX);
+ md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+ t7xx_cldma_hw_stop_all_qs(hw_info, MTK_TX);
+ md_ctrl->txq_started = 0;
+ spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+ return 0;
+}
+
+static int t7xx_cldma_pm_init(struct cldma_ctrl *md_ctrl)
+{
+ md_ctrl->pm_entity = kzalloc(sizeof(*md_ctrl->pm_entity), GFP_KERNEL);
+ if (!md_ctrl->pm_entity)
+ return -ENOMEM;
+
+ md_ctrl->pm_entity->entity_param = md_ctrl;
+
+ if (md_ctrl->hif_id == CLDMA_ID_MD)
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL1;
+ else
+ md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL2;
+
+ md_ctrl->pm_entity->suspend = t7xx_cldma_suspend;
+ md_ctrl->pm_entity->suspend_late = t7xx_cldma_suspend_late;
+ md_ctrl->pm_entity->resume = t7xx_cldma_resume;
+ md_ctrl->pm_entity->resume_early = t7xx_cldma_resume_early;
+
+ return t7xx_pci_pm_entity_register(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+}
+
+static int t7xx_cldma_pm_uninit(struct cldma_ctrl *md_ctrl)
+{
+ if (!md_ctrl->pm_entity)
+ return -EINVAL;
+
+ t7xx_pci_pm_entity_unregister(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+ kfree(md_ctrl->pm_entity);
+ md_ctrl->pm_entity = NULL;
+ return 0;
+}
+
void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl)
{
struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
@@ -1137,6 +1251,7 @@ static void t7xx_cldma_destroy_wqs(struct cldma_ctrl *md_ctrl)
* t7xx_cldma_init() - Initialize CLDMA.
* @md_ctrl: CLDMA context structure.
*
+ * Allocate and initialize device power management entity.
* Initialize HIF TX/RX queue structure.
* Register CLDMA callback ISR with PCIe driver.
*
@@ -1147,12 +1262,16 @@ static void t7xx_cldma_destroy_wqs(struct cldma_ctrl *md_ctrl)
int t7xx_cldma_init(struct cldma_ctrl *md_ctrl)
{
struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
- int i;
+ int ret, i;

md_ctrl->txq_active = 0;
md_ctrl->rxq_active = 0;
md_ctrl->is_late_init = false;

+ ret = t7xx_cldma_pm_init(md_ctrl);
+ if (ret)
+ return ret;
+
spin_lock_init(&md_ctrl->cldma_lock);

for (i = 0; i < CLDMA_TXQ_NUM; i++) {
@@ -1187,6 +1306,7 @@ int t7xx_cldma_init(struct cldma_ctrl *md_ctrl)

err_workqueue:
t7xx_cldma_destroy_wqs(md_ctrl);
+ t7xx_cldma_pm_uninit(md_ctrl);
return -ENOMEM;
}

@@ -1201,4 +1321,5 @@ void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl)
t7xx_cldma_stop(md_ctrl);
t7xx_cldma_late_release(md_ctrl);
t7xx_cldma_destroy_wqs(md_ctrl);
+ t7xx_cldma_pm_uninit(md_ctrl);
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
index 2970b6312e99..38210fa9cc19 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -87,6 +87,7 @@ struct cldma_ctrl {
struct dma_pool *gpd_dmapool;
struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+ struct md_pm_entity *pm_entity;
struct t7xx_cldma_hw hw_info;
bool is_late_init;
int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
index b0b459f060ce..76f5f802083a 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -398,6 +398,90 @@ static int t7xx_dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
return 0;
}

+static int t7xx_dpmaif_suspend(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+ t7xx_dpmaif_tx_stop(dpmaif_ctrl);
+ t7xx_dpmaif_hw_stop_all_txq(&dpmaif_ctrl->hw_info);
+ t7xx_dpmaif_hw_stop_all_rxq(&dpmaif_ctrl->hw_info);
+ t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+ t7xx_dpmaif_rx_stop(dpmaif_ctrl);
+ return 0;
+}
+
+static void t7xx_dpmaif_unmask_dlq_intr(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ int qno;
+
+ for (qno = 0; qno < DPMAIF_RXQ_NUM; qno++)
+ t7xx_dpmaif_dlq_unmask_rx_done(&dpmaif_ctrl->hw_info, qno);
+}
+
+static void t7xx_dpmaif_start_txrx_qs(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct dpmaif_rx_queue *rxq;
+ struct dpmaif_tx_queue *txq;
+ unsigned int que_cnt;
+
+ for (que_cnt = 0; que_cnt < DPMAIF_TXQ_NUM; que_cnt++) {
+ txq = &dpmaif_ctrl->txq[que_cnt];
+ txq->que_started = true;
+ }
+
+ for (que_cnt = 0; que_cnt < DPMAIF_RXQ_NUM; que_cnt++) {
+ rxq = &dpmaif_ctrl->rxq[que_cnt];
+ rxq->que_started = true;
+ }
+}
+
+static int t7xx_dpmaif_resume(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+ struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+ if (!dpmaif_ctrl)
+ return 0;
+
+ t7xx_dpmaif_start_txrx_qs(dpmaif_ctrl);
+ t7xx_dpmaif_enable_irq(dpmaif_ctrl);
+ t7xx_dpmaif_unmask_dlq_intr(dpmaif_ctrl);
+ t7xx_dpmaif_start_hw(&dpmaif_ctrl->hw_info);
+ wake_up(&dpmaif_ctrl->tx_wq);
+ return 0;
+}
+
+static int t7xx_dpmaif_pm_entity_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ int ret;
+
+ INIT_LIST_HEAD(&dpmaif_pm_entity->entity);
+ dpmaif_pm_entity->suspend = &t7xx_dpmaif_suspend;
+ dpmaif_pm_entity->suspend_late = NULL;
+ dpmaif_pm_entity->resume_early = NULL;
+ dpmaif_pm_entity->resume = &t7xx_dpmaif_resume;
+ dpmaif_pm_entity->id = PM_ENTITY_ID_DATA;
+ dpmaif_pm_entity->entity_param = dpmaif_ctrl;
+
+ ret = t7xx_pci_pm_entity_register(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+ if (ret)
+ dev_err(dpmaif_ctrl->dev, "dpmaif register pm_entity fail\n");
+
+ return ret;
+}
+
+static int t7xx_dpmaif_pm_entity_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+ struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+ int ret;
+
+ ret = t7xx_pci_pm_entity_unregister(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+ if (ret < 0)
+ dev_err(dpmaif_ctrl->dev, "dpmaif unregister pm_entity fail\n");
+
+ return ret;
+}
+
int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
{
int ret = 0;
@@ -461,11 +545,16 @@ struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
dpmaif_ctrl->hw_info.pcie_base = t7xx_dev->base_addr.pcie_ext_reg_base -
t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;

+ ret = t7xx_dpmaif_pm_entity_init(dpmaif_ctrl);
+ if (ret)
+ return NULL;
+
t7xx_dpmaif_register_pcie_irq(dpmaif_ctrl);
t7xx_dpmaif_disable_irq(dpmaif_ctrl);

ret = t7xx_dpmaif_rxtx_sw_allocs(dpmaif_ctrl);
if (ret) {
+ t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
return NULL;
}
@@ -478,6 +567,7 @@ void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
{
if (dpmaif_ctrl->dpmaif_sw_init_done) {
t7xx_dpmaif_stop(dpmaif_ctrl);
+ t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
t7xx_dpmaif_sw_release(dpmaif_ctrl);
dpmaif_ctrl->dpmaif_sw_init_done = false;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
index a133c3dfac94..f86785ae2481 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -179,6 +179,7 @@ struct dpmaif_callbacks {
struct dpmaif_ctrl {
struct device *dev;
struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity dpmaif_pm_entity;
enum dpmaif_state state;
bool dpmaif_sw_init_done;
struct dpmaif_hw_info hw_info;
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index 0ed12f8638fb..60c265c24a8b 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -24,6 +24,11 @@
#include "t7xx_pcie_mac.h"
#include "t7xx_reg.h"

+#define D2H_INT_SR_ACK (D2H_INT_SUSPEND_ACK | \
+ D2H_INT_RESUME_ACK | \
+ D2H_INT_SUSPEND_ACK_AP | \
+ D2H_INT_RESUME_ACK_AP)
+
static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask)
{
void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
@@ -53,6 +58,18 @@ static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
}

t7xx_mhccif_clear_interrupts(t7xx_dev, int_status);
+
+ if (int_status & D2H_INT_SR_ACK)
+ complete(&t7xx_dev->pm_sr_ack);
+
+ iowrite32(L1_DISABLE_BIT(1), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+ int_status = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+ if (!int_status) {
+ val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
+ iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ }
+
t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
return IRQ_HANDLED;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 1c34919ae5d6..880d9448d662 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -18,23 +18,442 @@

#include <linux/atomic.h>
#include <linux/bits.h>
+#include <linux/completion.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/interrupt.h>
#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
#include <linux/module.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/pm_wakeup.h>

#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"

#define T7XX_PCI_IREG_BASE 0
#define T7XX_PCI_EREG_BASE 2

+#define PM_ACK_TIMEOUT_MS 1500
+#define PM_RESOURCE_POLL_TIMEOUT_US 10000
+#define PM_RESOURCE_POLL_STEP_US 100
+
+enum t7xx_pm_state {
+ MTK_PM_EXCEPTION,
+ MTK_PM_INIT, /* Device initialized, but handshake not completed */
+ MTK_PM_SUSPENDED,
+ MTK_PM_RESUMED,
+};
+
+static int t7xx_wait_pm_config(struct t7xx_pci_dev *t7xx_dev)
+{
+ int ret, val;
+
+ ret = read_poll_timeout(ioread32, val,
+ (val & T7XX_PCIE_RESOURCE_STS_MSK) == T7XX_PCIE_RESOURCE_STS_MSK,
+ PM_RESOURCE_POLL_STEP_US, PM_RESOURCE_POLL_TIMEOUT_US, true,
+ IREG_BASE(t7xx_dev) + T7XX_PCIE_RESOURCE_STATUS);
+ if (ret == -ETIMEDOUT)
+ dev_err(&t7xx_dev->pdev->dev, "PM configuration timed out\n");
+
+ return ret;
+}
+
+static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct pci_dev *pdev = t7xx_dev->pdev;
+
+ INIT_LIST_HEAD(&t7xx_dev->md_pm_entities);
+
+ mutex_init(&t7xx_dev->md_pm_entity_mtx);
+
+ init_completion(&t7xx_dev->pm_sr_ack);
+
+ device_init_wakeup(&pdev->dev, true);
+
+ dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
+ DPM_FLAG_NO_DIRECT_COMPLETE);
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* Enable the PCIe resource lock only after MD deep sleep is done */
+ t7xx_mhccif_mask_clr(t7xx_dev,
+ D2H_INT_SUSPEND_ACK |
+ D2H_INT_RESUME_ACK |
+ D2H_INT_SUSPEND_ACK_AP |
+ D2H_INT_RESUME_ACK_AP);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+}
+
+static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* The device is kept in FSM re-init flow
+ * so just roll back PM setting to the init setting.
+ */
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev)
+{
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ t7xx_wait_pm_config(t7xx_dev);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_EXCEPTION);
+}
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity;
+
+ mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return -EEXIST;
+ }
+ }
+
+ list_add_tail(&pm_entity->entity, &t7xx_dev->md_pm_entities);
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return 0;
+}
+
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+ struct md_pm_entity *entity, *tmp_entity;
+
+ mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+ list_for_each_entry_safe(entity, tmp_entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->id == pm_entity->id) {
+ list_del(&pm_entity->entity);
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+ return 0;
+ }
+ }
+
+ mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+
+ return -ENXIO;
+}
+
+static int t7xx_send_pm_request(struct t7xx_pci_dev *t7xx_dev, u32 request)
+{
+ unsigned long wait_ret;
+
+ reinit_completion(&t7xx_dev->pm_sr_ack);
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, request);
+ wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+ msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+ if (!wait_ret)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
+static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
+{
+ enum t7xx_pm_id entity_id = PM_ENTITY_ID_INVALID;
+ struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity *entity;
+ int ret;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+ dev_err(&pdev->dev, "[PM] Exiting suspend, modem in invalid state\n");
+ return -EFAULT;
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ ret = t7xx_wait_pm_config(t7xx_dev);
+ if (ret) {
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ return ret;
+ }
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ t7xx_dev->rgu_pci_irq_en = false;
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (!entity->suspend)
+ continue;
+
+ ret = entity->suspend(t7xx_dev, entity->entity_param);
+ if (ret) {
+ entity_id = entity->id;
+ dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, entity_id);
+ goto abort_suspend;
+ }
+ }
+
+ ret = t7xx_send_pm_request(t7xx_dev, H2D_CH_SUSPEND_REQ);
+ if (ret) {
+ dev_err(&pdev->dev, "[PM] MD suspend error: %d\n", ret);
+ goto abort_suspend;
+ }
+
+ ret = t7xx_send_pm_request(t7xx_dev, H2D_CH_SUSPEND_REQ_AP);
+ if (ret) {
+ t7xx_send_pm_request(t7xx_dev, H2D_CH_RESUME_REQ);
+ dev_err(&pdev->dev, "[PM] SAP suspend error: %d\n", ret);
+ goto abort_suspend;
+ }
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->suspend_late)
+ entity->suspend_late(t7xx_dev, entity->entity_param);
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ return 0;
+
+abort_suspend:
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity_id == entity->id)
+ break;
+
+ if (entity->resume)
+ entity->resume(t7xx_dev, entity->entity_param);
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ return ret;
+}
+
+static void t7xx_pcie_interrupt_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+
+ /* Disable interrupt first and let the IPs enable them */
+ iowrite32(MSIX_MSK_SET_ALL, IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0);
+
+ /* Device disables PCIe interrupts during resume and
+ * following function will re-enable PCIe interrupts.
+ */
+ t7xx_pcie_mac_interrupts_en(t7xx_dev);
+ t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+}
+
+static int t7xx_pcie_reinit(struct t7xx_pci_dev *t7xx_dev, bool is_d3)
+{
+ int ret;
+
+ ret = pcim_enable_device(t7xx_dev->pdev);
+ if (ret)
+ return ret;
+
+ t7xx_pcie_mac_atr_init(t7xx_dev);
+ t7xx_pcie_interrupt_reinit(t7xx_dev);
+
+ if (is_d3) {
+ t7xx_mhccif_init(t7xx_dev);
+ return t7xx_pci_pm_reinit(t7xx_dev);
+ }
+
+ return 0;
+}
+
+static int t7xx_send_fsm_command(struct t7xx_pci_dev *t7xx_dev, u32 event)
+{
+ struct t7xx_fsm_ctl *fsm_ctl = t7xx_dev->md->fsm_ctl;
+ struct device *dev = &t7xx_dev->pdev->dev;
+ int ret = -EINVAL;
+
+ switch (event) {
+ case FSM_CMD_STOP:
+ ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+ break;
+
+ case FSM_CMD_START:
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_START, 0);
+ break;
+
+ default:
+ break;
+ }
+
+ if (ret)
+ dev_err(dev, "Failure handling FSM command %u, %d\n", event, ret);
+
+ return ret;
+}
+
+static int __t7xx_pci_pm_resume(struct pci_dev *pdev, bool state_check)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ struct md_pm_entity *entity;
+ u32 prev_state;
+ int ret = 0;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ return 0;
+ }
+
+ t7xx_pcie_mac_interrupts_en(t7xx_dev);
+ prev_state = ioread32(IREG_BASE(t7xx_dev) + T7XX_PCIE_PM_RESUME_STATE);
+
+ if (state_check) {
+ /* For D3/L3 resume, the device could boot so quickly that the
+ * initial value of the dummy register might be overwritten.
+ * Identify new boots if the ATR source address register is not initialized.
+ */
+ u32 atr_reg_val = ioread32(IREG_BASE(t7xx_dev) +
+ ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR);
+ if (prev_state == PM_RESUME_REG_STATE_L3 ||
+ (prev_state == PM_RESUME_REG_STATE_INIT &&
+ atr_reg_val == ATR_SRC_ADDR_INVALID)) {
+ ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+ if (ret)
+ return ret;
+
+ ret = t7xx_pcie_reinit(t7xx_dev, true);
+ if (ret)
+ return ret;
+
+ t7xx_clear_rgu_irq(t7xx_dev);
+ return t7xx_send_fsm_command(t7xx_dev, FSM_CMD_START);
+ }
+
+ if (prev_state == PM_RESUME_REG_STATE_EXP ||
+ prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+ if (prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+ ret = t7xx_pcie_reinit(t7xx_dev, false);
+ if (ret)
+ return ret;
+ }
+
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+
+ t7xx_mhccif_mask_clr(t7xx_dev,
+ D2H_INT_EXCEPTION_INIT |
+ D2H_INT_EXCEPTION_INIT_DONE |
+ D2H_INT_EXCEPTION_CLEARQ_DONE |
+ D2H_INT_EXCEPTION_ALLQ_RESET |
+ D2H_INT_PORT_ENUM);
+
+ return ret;
+ }
+
+ if (prev_state == PM_RESUME_REG_STATE_L2) {
+ ret = t7xx_pcie_reinit(t7xx_dev, false);
+ if (ret)
+ return ret;
+
+ } else if (prev_state != PM_RESUME_REG_STATE_L1 &&
+ prev_state != PM_RESUME_REG_STATE_INIT) {
+ ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+ if (ret)
+ return ret;
+
+ t7xx_clear_rgu_irq(t7xx_dev);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+ return 0;
+ }
+ }
+
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+ t7xx_wait_pm_config(t7xx_dev);
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->resume_early)
+ entity->resume_early(t7xx_dev, entity->entity_param);
+ }
+
+ ret = t7xx_send_pm_request(t7xx_dev, H2D_CH_RESUME_REQ);
+ if (ret)
+ dev_err(&pdev->dev, "[PM] MD resume error: %d\n", ret);
+
+ ret = t7xx_send_pm_request(t7xx_dev, H2D_CH_RESUME_REQ_AP);
+ if (ret)
+ dev_err(&pdev->dev, "[PM] SAP resume error: %d\n", ret);
+
+ list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+ if (entity->resume) {
+ ret = entity->resume(t7xx_dev, entity->entity_param);
+ if (ret)
+ dev_err(&pdev->dev, "[PM] Resume entry ID: %d error: %d\n",
+ entity->id, ret);
+ }
+ }
+
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+ iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+
+ return ret;
+}
+
+static int t7xx_pci_pm_resume_noirq(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct t7xx_pci_dev *t7xx_dev;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+ return 0;
+}
+
+static void t7xx_pci_shutdown(struct pci_dev *pdev)
+{
+ __t7xx_pci_pm_suspend(pdev);
+}
+
+static int t7xx_pci_pm_suspend(struct device *dev)
+{
+ return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int t7xx_pci_pm_resume(struct device *dev)
+{
+ return __t7xx_pci_pm_resume(to_pci_dev(dev), true);
+}
+
+static int t7xx_pci_pm_thaw(struct device *dev)
+{
+ return __t7xx_pci_pm_resume(to_pci_dev(dev), false);
+}
+
+static const struct dev_pm_ops t7xx_pci_pm_ops = {
+ .suspend = t7xx_pci_pm_suspend,
+ .resume = t7xx_pci_pm_resume,
+ .resume_noirq = t7xx_pci_pm_resume_noirq,
+ .freeze = t7xx_pci_pm_suspend,
+ .thaw = t7xx_pci_pm_thaw,
+ .poweroff = t7xx_pci_pm_suspend,
+ .restore = t7xx_pci_pm_resume,
+ .restore_noirq = t7xx_pci_pm_resume_noirq,
+};
+
static int t7xx_request_irq(struct pci_dev *pdev)
{
struct t7xx_pci_dev *t7xx_dev;
@@ -165,6 +584,10 @@ static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[T7XX_PCI_IREG_BASE];
t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[T7XX_PCI_EREG_BASE];

+ ret = t7xx_pci_pm_init(t7xx_dev);
+ if (ret)
+ return ret;
+
t7xx_pcie_mac_atr_init(t7xx_dev);
t7xx_pci_infracfg_ao_calc(t7xx_dev);
t7xx_mhccif_init(t7xx_dev);
@@ -216,6 +639,8 @@ static struct pci_driver t7xx_pci_driver = {
.id_table = t7xx_pci_table,
.probe = t7xx_pci_probe,
.remove = t7xx_pci_remove,
+ .driver.pm = &t7xx_pci_pm_ops,
+ .shutdown = t7xx_pci_shutdown,
};

module_pci_driver(t7xx_pci_driver);
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index ecdab7abe17b..314d8616a0e0 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -17,7 +17,9 @@
#ifndef __T7XX_PCI_H__
#define __T7XX_PCI_H__

+#include <linux/completion.h>
#include <linux/irqreturn.h>
+#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/types.h>

@@ -47,6 +49,10 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
* @pdev: PCI device
* @base_addr: memory base addresses of HW components
* @md: modem interface
+ * @md_pm_entities: list of pm entities
+ * @md_pm_entity_mtx: protects md_pm_entities list
+ * @pm_sr_ack: ack from the device when it has gone to sleep or woken up
+ * @md_pm_state: state for resume/suspend
* @ccmni_ctlb: context structure used to control the network data path
* @rgu_pci_irq_en: RGU callback isr registered and active
* @pools: pre allocated skb pools
@@ -58,8 +64,49 @@ struct t7xx_pci_dev {
struct pci_dev *pdev;
struct t7xx_addr_base base_addr;
struct t7xx_modem *md;
+
+ /* Low Power Items */
+ struct list_head md_pm_entities;
+ struct mutex md_pm_entity_mtx; /* Protects MD PM entities list */
+ struct completion pm_sr_ack;
+ atomic_t md_pm_state;
+
struct t7xx_ccmni_ctrl *ccmni_ctlb;
bool rgu_pci_irq_en;
};

+enum t7xx_pm_id {
+ PM_ENTITY_ID_CTRL1,
+ PM_ENTITY_ID_CTRL2,
+ PM_ENTITY_ID_DATA,
+ PM_ENTITY_ID_INVALID
+};
+
+/* struct md_pm_entity - device power management entity
+ * @entity: list of PM Entities
+ * @suspend: callback invoked before sending D3 request to device
+ * @suspend_late: callback invoked after getting D3 ACK from device
+ * @resume_early: callback invoked before sending the resume request to device
+ * @resume: callback invoked after getting resume ACK from device
+ * @id: unique PM entity identifier
+ * @entity_param: parameter passed to the registered callbacks
+ *
+ * This structure is used to indicate PM operations required by internal
+ * HW modules such as CLDMA and DPMAIF.
+ */
+struct md_pm_entity {
+ struct list_head entity;
+ int (*suspend)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ void (*suspend_late)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ void (*resume_early)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ int (*resume)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+ enum t7xx_pm_id id;
+ void *entity_param;
+};
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev);
+
#endif /* __T7XX_PCI_H__ */
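
For reviewers, a minimal sketch of how an internal HW module is expected to
hook into this PM scheme via the register API above. The cldma_suspend() and
cldma_pm_init() helpers and the pm_entity member of struct cldma_ctrl are
illustrative only; the real callbacks arrive with the CLDMA and DPMAIF
patches of this series.

        /* Illustrative only: module-side PM entity registration */
        static int cldma_suspend(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
        {
                /* entity_param is the module context registered below;
                 * quiesce the HW queues before the D3 request is sent.
                 */
                return 0;
        }

        static int cldma_pm_init(struct cldma_ctrl *md_ctrl)
        {
                struct md_pm_entity *pm_entity = &md_ctrl->pm_entity; /* hypothetical member */

                pm_entity->entity_param = md_ctrl;
                pm_entity->id = PM_ENTITY_ID_CTRL1;
                pm_entity->suspend = cldma_suspend;
                /* ...suspend_late/resume_early/resume as needed... */

                return t7xx_pci_pm_entity_register(md_ctrl->t7xx_dev, pm_entity);
        }
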
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index b2b809316301..e0024c3cbd3c 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -188,6 +188,7 @@ static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_comm
case EXCEPTION_EVENT:
dev_err(dev, "Exception event\n");
t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+ t7xx_pci_pm_exp_detected(ctl->md->t7xx_dev);
t7xx_md_exception_handshake(ctl->md);

fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_REC_OK, FSM_EVENT_MD_EX,
@@ -300,6 +301,7 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
return -ETIMEDOUT;
}

+ t7xx_pci_pm_init_late(md->t7xx_dev);
fsm_routine_ready(ctl);
return 0;
}
--
2.17.1

2022-02-24 01:37:49

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH net-next v5 03/13] net: wwan: t7xx: Add core components

From: Haijun Liu <[email protected]>

Registers the t7xx device driver with the kernel. Sets up all the core
components: the PCIe layer, the Modem Host Cross Core Interface (MHCCIF),
modem control operations, the modem state machine, and the build
infrastructure.

* PCIe layer code implements driver probe and removal.
* MHCCIF provides interrupt channels to communicate events
such as handshake, PM and port enumeration.
* Modem control implements the entry point for modem init,
reset and exit.
* The modem status monitor is a state machine used by modem control
to complete initialization and stop the modem. It is also used to
propagate exception events reported by other components.
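
These components come together in the probe path; condensed from
t7xx_pci_probe() in this patch (error handling omitted):

        t7xx_pcie_mac_atr_init(t7xx_dev);      /* PCIe layer: address translation */
        t7xx_pci_infracfg_ao_calc(t7xx_dev);
        t7xx_mhccif_init(t7xx_dev);            /* MHCCIF: install ISR and thread callbacks */
        t7xx_md_init(t7xx_dev);                /* modem control: alloc MD, start the FSM */
        t7xx_interrupt_init(t7xx_dev);         /* MSI-X vectors, request IRQs */
        t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
        t7xx_pcie_mac_interrupts_en(t7xx_dev); /* unmask device interrupts */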

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>

From a WWAN framework perspective:
Reviewed-by: Loic Poulain <[email protected]>
---
drivers/net/wwan/Kconfig | 14 +
drivers/net/wwan/Makefile | 1 +
drivers/net/wwan/t7xx/Makefile | 12 +
drivers/net/wwan/t7xx/t7xx_common.h | 10 +
drivers/net/wwan/t7xx/t7xx_mhccif.c | 102 ++++
drivers/net/wwan/t7xx/t7xx_mhccif.h | 37 ++
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 500 +++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 84 ++++
drivers/net/wwan/t7xx/t7xx_pci.c | 225 +++++++++
drivers/net/wwan/t7xx/t7xx_pci.h | 65 +++
drivers/net/wwan/t7xx/t7xx_pcie_mac.c | 263 ++++++++++
drivers/net/wwan/t7xx/t7xx_pcie_mac.h | 31 ++
drivers/net/wwan/t7xx/t7xx_reg.h | 106 ++++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 540 +++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_state_monitor.h | 123 +++++
15 files changed, 2113 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/Makefile
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h

diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
index 609fd4a2c865..3486ffe94ac4 100644
--- a/drivers/net/wwan/Kconfig
+++ b/drivers/net/wwan/Kconfig
@@ -105,6 +105,20 @@ config IOSM

If unsure, say N.

+config MTK_T7XX
+ tristate "MediaTek PCIe 5G WWAN modem T7xx device"
+ depends on PCI
+ help
+ Enables the MediaTek PCIe-based 5G WWAN modem (T7xx series) device.
+ The driver adapts to the WWAN framework and provides network
+ interfaces like wwan0 and control port interfaces like wwan0at0
+ (AT protocol) and wwan0mbim0 (MBIM protocol).
+
+ To compile this driver as a module, choose M here: the module will be
+ called mtk_t7xx.
+
+ If unsure, say N.
+
endif # WWAN

endmenu
diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
index e722650bebea..3960c0ae2445 100644
--- a/drivers/net/wwan/Makefile
+++ b/drivers/net/wwan/Makefile
@@ -13,3 +13,4 @@ obj-$(CONFIG_MHI_WWAN_MBIM) += mhi_wwan_mbim.o
obj-$(CONFIG_QCOM_BAM_DMUX) += qcom_bam_dmux.o
obj-$(CONFIG_RPMSG_WWAN_CTRL) += rpmsg_wwan_ctrl.o
obj-$(CONFIG_IOSM) += iosm/
+obj-$(CONFIG_MTK_T7XX) += t7xx/
diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
new file mode 100644
index 000000000000..6a49013bc343
--- /dev/null
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+ccflags-y += -Werror
+
+obj-${CONFIG_MTK_T7XX} := mtk_t7xx.o
+mtk_t7xx-y := t7xx_pci.o \
+ t7xx_pcie_mac.o \
+ t7xx_mhccif.o \
+ t7xx_state_monitor.o \
+ t7xx_modem_ops.o \
+ t7xx_cldma.o \
+ t7xx_hif_cldma.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_common.h b/drivers/net/wwan/t7xx/t7xx_common.h
index a05deb7fe425..b512afb62af8 100644
--- a/drivers/net/wwan/t7xx/t7xx_common.h
+++ b/drivers/net/wwan/t7xx/t7xx_common.h
@@ -20,6 +20,16 @@

#include <linux/skbuff.h>

+enum md_state {
+ MD_STATE_INVALID,
+ MD_STATE_WAITING_FOR_HS1,
+ MD_STATE_WAITING_FOR_HS2,
+ MD_STATE_READY,
+ MD_STATE_EXCEPTION,
+ MD_STATE_WAITING_TO_STOP,
+ MD_STATE_STOPPED,
+};
+
enum mtk_txrx {
MTK_TX,
MTK_RX,
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
new file mode 100644
index 000000000000..0ed12f8638fb
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ */
+
+#include <linux/bits.h>
+#include <linux/completion.h>
+#include <linux/dev_printk.h>
+#include <linux/io.h>
+#include <linux/irqreturn.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask)
+{
+ void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
+
+ /* Clear level 2 interrupt */
+ iowrite32(mask, mhccif_pbase + REG_EP2RC_SW_INT_ACK);
+ /* Ensure write is complete */
+ t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+ /* Clear level 1 interrupt */
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, MHCCIF_INT);
+}
+
+static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
+{
+ struct t7xx_pci_dev *t7xx_dev = data;
+ u32 int_status, val;
+
+ val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
+ iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+ int_status = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+ if (int_status & D2H_SW_INT_MASK) {
+ int ret = t7xx_pci_mhccif_isr(t7xx_dev);
+
+ if (ret)
+ dev_err(&t7xx_dev->pdev->dev, "PCI MHCCIF ISR failure: %d", ret);
+ }
+
+ t7xx_mhccif_clear_interrupts(t7xx_dev, int_status);
+ t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+ return IRQ_HANDLED;
+}
+
+u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev)
+{
+ return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_STS);
+}
+
+void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val)
+{
+ iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_SET);
+}
+
+void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val)
+{
+ iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_CLR);
+}
+
+u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev)
+{
+ return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK);
+}
+
+static irqreturn_t t7xx_mhccif_isr_handler(int irq, void *data)
+{
+ return IRQ_WAKE_THREAD;
+}
+
+void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_dev->base_addr.mhccif_rc_base = t7xx_dev->base_addr.pcie_ext_reg_base +
+ MHCCIF_RC_DEV_BASE -
+ t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+ t7xx_dev->intr_handler[MHCCIF_INT] = t7xx_mhccif_isr_handler;
+ t7xx_dev->intr_thread[MHCCIF_INT] = t7xx_mhccif_isr_thread;
+ t7xx_dev->callback_param[MHCCIF_INT] = t7xx_dev;
+}
+
+void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel)
+{
+ void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
+
+ iowrite32(BIT(channel), mhccif_pbase + REG_RC2EP_SW_BSY);
+ iowrite32(channel, mhccif_pbase + REG_RC2EP_SW_TCHNUM);
+}
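
As used elsewhere in this patch, a host-to-device notification is a
two-register write (mark the channel busy, then write the channel number);
callers only see the helpers, e.g. in t7xx_modem_ops.c:

        /* Ack the HIF_EX_INIT stage of the exception handshake */
        t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_ACK);

        /* Unmask a single D2H event source, e.g. port enumeration */
        t7xx_mhccif_mask_clr(t7xx_dev, D2H_INT_PORT_ENUM);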
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.h b/drivers/net/wwan/t7xx/t7xx_mhccif.h
new file mode 100644
index 000000000000..d574e18391f9
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ */
+
+#ifndef __T7XX_MHCCIF_H__
+#define __T7XX_MHCCIF_H__
+
+#include <linux/types.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define D2H_SW_INT_MASK (D2H_INT_EXCEPTION_INIT | \
+ D2H_INT_EXCEPTION_INIT_DONE | \
+ D2H_INT_EXCEPTION_CLEARQ_DONE | \
+ D2H_INT_EXCEPTION_ALLQ_RESET | \
+ D2H_INT_PORT_ENUM | \
+ D2H_INT_ASYNC_MD_HK)
+
+void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val);
+void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val);
+u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev);
+u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel);
+
+#endif /*__T7XX_MHCCIF_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
new file mode 100644
index 000000000000..49f3907f199f
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -0,0 +1,500 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/acpi.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/gfp.h>
+#include <linux/io.h>
+#include <linux/irqreturn.h>
+#include <linux/kthread.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_cldma.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
+
+#define RGU_RESET_DELAY_MS 10
+#define PORT_RESET_DELAY_MS 2000
+#define EX_HS_TIMEOUT_MS 5000
+#define EX_HS_POLL_DELAY_MS 10
+
+static unsigned int t7xx_get_interrupt_status(struct t7xx_pci_dev *t7xx_dev)
+{
+ return t7xx_mhccif_read_sw_int_sts(t7xx_dev) & D2H_SW_INT_MASK;
+}
+
+/**
+ * t7xx_pci_mhccif_isr() - Process MHCCIF interrupts.
+ * @t7xx_dev: MTK device.
+ *
+ * Check the interrupt status and queue commands accordingly.
+ *
+ * Return:
+ ** 0 - Success.
+ ** -EINVAL - Failure to get FSM control.
+ */
+int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_modem *md = t7xx_dev->md;
+ struct t7xx_fsm_ctl *ctl;
+ unsigned int int_sta;
+ int ret = 0;
+ u32 mask;
+
+ ctl = md->fsm_ctl;
+ if (!ctl) {
+ dev_err_ratelimited(&t7xx_dev->pdev->dev,
+ "MHCCIF interrupt received before initializing MD monitor\n");
+ return -EINVAL;
+ }
+
+ spin_lock_bh(&md->exp_lock);
+ int_sta = t7xx_get_interrupt_status(t7xx_dev);
+ md->exp_id |= int_sta;
+ if (md->exp_id & D2H_INT_EXCEPTION_INIT) {
+ if (ctl->md_state == MD_STATE_INVALID ||
+ ctl->md_state == MD_STATE_WAITING_FOR_HS1 ||
+ ctl->md_state == MD_STATE_WAITING_FOR_HS2 ||
+ ctl->md_state == MD_STATE_READY) {
+ md->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+ ret = t7xx_fsm_recv_md_intr(ctl, MD_IRQ_CCIF_EX);
+ }
+ } else if (md->exp_id & D2H_INT_PORT_ENUM) {
+ md->exp_id &= ~D2H_INT_PORT_ENUM;
+
+ if (ctl->curr_state == FSM_STATE_INIT || ctl->curr_state == FSM_STATE_PRE_START ||
+ ctl->curr_state == FSM_STATE_STOPPED)
+ ret = t7xx_fsm_recv_md_intr(ctl, MD_IRQ_PORT_ENUM);
+ } else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) {
+ mask = t7xx_mhccif_mask_get(t7xx_dev);
+ if ((md->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask & D2H_INT_ASYNC_MD_HK)) {
+ md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ queue_work(md->handshake_wq, &md->handshake_work);
+ }
+ }
+ spin_unlock_bh(&md->exp_lock);
+
+ return ret;
+}
+
+static void t7xx_clr_device_irq_via_pcie(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_addr_base *pbase_addr = &t7xx_dev->base_addr;
+ void __iomem *reset_pcie_reg;
+ u32 val;
+
+ reset_pcie_reg = pbase_addr->pcie_ext_reg_base + TOPRGU_CH_PCIE_IRQ_STA -
+ pbase_addr->pcie_dev_reg_trsl_addr;
+ val = ioread32(reset_pcie_reg);
+ iowrite32(val, reset_pcie_reg);
+}
+
+void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* Clear L2 */
+ t7xx_clr_device_irq_via_pcie(t7xx_dev);
+ /* Clear L1 */
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+}
+
+static int t7xx_acpi_reset(struct t7xx_pci_dev *t7xx_dev, char *fn_name)
+{
+#ifdef CONFIG_ACPI
+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ struct device *dev = &t7xx_dev->pdev->dev;
+ acpi_status acpi_ret;
+ acpi_handle handle;
+
+ handle = ACPI_HANDLE(dev);
+ if (!handle) {
+ dev_err(dev, "ACPI handle not found\n");
+ return -EFAULT;
+ }
+
+ if (!acpi_has_method(handle, fn_name)) {
+ dev_err(dev, "%s method not found\n", fn_name);
+ return -EFAULT;
+ }
+
+ acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer);
+ if (ACPI_FAILURE(acpi_ret)) {
+ dev_err(dev, "%s method fail: %s\n", fn_name, acpi_format_exception(acpi_ret));
+ return -EFAULT;
+ }
+
+#endif
+ return 0;
+}
+
+int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev)
+{
+ return t7xx_acpi_reset(t7xx_dev, "_RST");
+}
+
+static void t7xx_reset_device_via_pmic(struct t7xx_pci_dev *t7xx_dev)
+{
+ u32 val;
+
+ val = ioread32(IREG_BASE(t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
+ if (val & MISC_RESET_TYPE_PLDR)
+ t7xx_acpi_reset(t7xx_dev, "MRST._RST");
+ else if (val & MISC_RESET_TYPE_FLDR)
+ t7xx_acpi_fldr_func(t7xx_dev);
+}
+
+static irqreturn_t t7xx_rgu_isr_thread(int irq, void *data)
+{
+ struct t7xx_pci_dev *t7xx_dev = data;
+
+ msleep(RGU_RESET_DELAY_MS);
+ t7xx_reset_device_via_pmic(t7xx_dev);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t t7xx_rgu_isr_handler(int irq, void *data)
+{
+ struct t7xx_pci_dev *t7xx_dev = data;
+ struct t7xx_modem *modem;
+
+ t7xx_clear_rgu_irq(t7xx_dev);
+ if (!t7xx_dev->rgu_pci_irq_en)
+ return IRQ_HANDLED;
+
+ modem = t7xx_dev->md;
+ modem->rgu_irq_asserted = true;
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ return IRQ_WAKE_THREAD;
+}
+
+static void t7xx_pcie_register_rgu_isr(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* Registers RGU callback ISR with PCIe driver */
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+ t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+
+ t7xx_dev->intr_handler[SAP_RGU_INT] = t7xx_rgu_isr_handler;
+ t7xx_dev->intr_thread[SAP_RGU_INT] = t7xx_rgu_isr_thread;
+ t7xx_dev->callback_param[SAP_RGU_INT] = t7xx_dev;
+ t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+}
+
+/**
+ * t7xx_cldma_exception() - CLDMA exception handler.
+ * @md_ctrl: modem control struct.
+ * @stage: exception stage.
+ *
+ * Part of the modem exception recovery.
+ * Stages are one after the other as described below:
+ * HIF_EX_INIT: Disable and clear TXQ.
+ * HIF_EX_CLEARQ_DONE: Disable RX, flush TX/RX workqueues and clear RX.
+ * HIF_EX_ALLQ_RESET: HW is back in safe mode for re-initialization and restart.
+ */
+
+/* Modem Exception Handshake Flow
+ *
+ * Modem HW Exception interrupt received
+ * (MD_IRQ_CCIF_EX)
+ * |
+ * +---------v--------+
+ * | HIF_EX_INIT | : Disable and clear TXQ
+ * +------------------+
+ * |
+ * +---------v--------+
+ * | HIF_EX_INIT_DONE | : Wait for the init to be done
+ * +------------------+
+ * |
+ * +---------v--------+
+ * |HIF_EX_CLEARQ_DONE| : Disable and clear RXQ
+ * +------------------+ : Flush TX/RX workqueues
+ * |
+ * +---------v--------+
+ * |HIF_EX_ALLQ_RESET | : Restart HW and CLDMA
+ * +------------------+
+ */
+static void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage)
+{
+ switch (stage) {
+ case HIF_EX_INIT:
+ t7xx_cldma_stop_all_qs(md_ctrl, MTK_TX);
+ t7xx_cldma_clear_all_qs(md_ctrl, MTK_TX);
+ break;
+
+ case HIF_EX_CLEARQ_DONE:
+ /* We do not want to get CLDMA IRQ when MD is
+ * resetting CLDMA after it got clearq_ack.
+ */
+ t7xx_cldma_stop_all_qs(md_ctrl, MTK_RX);
+ t7xx_cldma_stop(md_ctrl);
+
+ if (md_ctrl->hif_id == CLDMA_ID_MD)
+ t7xx_cldma_hw_reset(md_ctrl->t7xx_dev->base_addr.infracfg_ao_base);
+
+ t7xx_cldma_clear_all_qs(md_ctrl, MTK_RX);
+ break;
+
+ case HIF_EX_ALLQ_RESET:
+ t7xx_cldma_hw_init(&md_ctrl->hw_info);
+ t7xx_cldma_start(md_ctrl);
+ break;
+
+ default:
+ break;
+ }
+}
+
+static void t7xx_md_exception(struct t7xx_modem *md, enum hif_ex_stage stage)
+{
+ struct t7xx_pci_dev *t7xx_dev = md->t7xx_dev;
+
+ if (stage == HIF_EX_CLEARQ_DONE) {
+ /* Give DHL time to flush data */
+ msleep(PORT_RESET_DELAY_MS);
+ }
+
+ t7xx_cldma_exception(md->md_ctrl[CLDMA_ID_MD], stage);
+
+ if (stage == HIF_EX_INIT)
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_ACK);
+ else if (stage == HIF_EX_CLEARQ_DONE)
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_CLEARQ_ACK);
+}
+
+static int t7xx_wait_hif_ex_hk_event(struct t7xx_modem *md, int event_id)
+{
+ unsigned int waited_time_ms = 0;
+
+ do {
+ if (md->exp_id & event_id)
+ return 0;
+
+ waited_time_ms += EX_HS_POLL_DELAY_MS;
+ msleep(EX_HS_POLL_DELAY_MS);
+ } while (waited_time_ms < EX_HS_TIMEOUT_MS);
+
+ return -EFAULT;
+}
+
+static void t7xx_md_sys_sw_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ /* Register the MHCCIF ISR for MD exception, port enum and
+ * async handshake notifications.
+ */
+ t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK);
+ t7xx_mhccif_mask_clr(t7xx_dev, D2H_INT_PORT_ENUM);
+
+ /* Register RGU IRQ handler for sAP exception notification */
+ t7xx_dev->rgu_pci_irq_en = true;
+ t7xx_pcie_register_rgu_isr(t7xx_dev);
+}
+
+static void t7xx_md_hk_wq(struct work_struct *work)
+{
+ struct t7xx_modem *md = container_of(work, struct t7xx_modem, handshake_work);
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+ t7xx_cldma_switch_cfg(md->md_ctrl[CLDMA_ID_MD]);
+ t7xx_cldma_start(md->md_ctrl[CLDMA_ID_MD]);
+ t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
+ md->core_md.ready = true;
+ wake_up(&ctl->async_hk_wq);
+}
+
+void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
+{
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+ void __iomem *mhccif_base;
+ unsigned int int_sta;
+ unsigned long flags;
+
+ switch (evt_id) {
+ case FSM_PRE_START:
+ t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_PORT_ENUM);
+ break;
+
+ case FSM_START:
+ t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_PORT_ENUM);
+
+ spin_lock_irqsave(&md->exp_lock, flags);
+ int_sta = t7xx_get_interrupt_status(md->t7xx_dev);
+ md->exp_id |= int_sta;
+ if (md->exp_id & D2H_INT_EXCEPTION_INIT) {
+ ctl->exp_flg = true;
+ md->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+ md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ } else if (ctl->exp_flg) {
+ md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ } else if (md->exp_id & D2H_INT_ASYNC_MD_HK) {
+ queue_work(md->handshake_wq, &md->handshake_work);
+ md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+ mhccif_base = md->t7xx_dev->base_addr.mhccif_rc_base;
+ iowrite32(D2H_INT_ASYNC_MD_HK, mhccif_base + REG_EP2RC_SW_INT_ACK);
+ t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+ } else {
+ t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+ }
+ spin_unlock_irqrestore(&md->exp_lock, flags);
+
+ t7xx_mhccif_mask_clr(md->t7xx_dev,
+ D2H_INT_EXCEPTION_INIT |
+ D2H_INT_EXCEPTION_INIT_DONE |
+ D2H_INT_EXCEPTION_CLEARQ_DONE |
+ D2H_INT_EXCEPTION_ALLQ_RESET);
+ break;
+
+ case FSM_READY:
+ t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+ break;
+
+ default:
+ break;
+ }
+}
+
+void t7xx_md_exception_handshake(struct t7xx_modem *md)
+{
+ struct device *dev = &md->t7xx_dev->pdev->dev;
+ int ret;
+
+ t7xx_md_exception(md, HIF_EX_INIT);
+ ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_INIT_DONE);
+ if (ret)
+ dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_INIT_DONE);
+
+ t7xx_md_exception(md, HIF_EX_INIT_DONE);
+ ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_CLEARQ_DONE);
+ if (ret)
+ dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_CLEARQ_DONE);
+
+ t7xx_md_exception(md, HIF_EX_CLEARQ_DONE);
+ ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_ALLQ_RESET);
+ if (ret)
+ dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_ALLQ_RESET);
+
+ t7xx_md_exception(md, HIF_EX_ALLQ_RESET);
+}
+
+static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct device *dev = &t7xx_dev->pdev->dev;
+ struct t7xx_modem *md;
+
+ md = devm_kzalloc(dev, sizeof(*md), GFP_KERNEL);
+ if (!md)
+ return NULL;
+
+ md->t7xx_dev = t7xx_dev;
+ t7xx_dev->md = md;
+ md->core_md.ready = false;
+ spin_lock_init(&md->exp_lock);
+ md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
+ 0, "md_hk_wq");
+ if (!md->handshake_wq)
+ return NULL;
+
+ INIT_WORK(&md->handshake_work, t7xx_md_hk_wq);
+ return md;
+}
+
+int t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_modem *md = t7xx_dev->md;
+
+ md->md_init_finish = false;
+ md->exp_id = 0;
+ spin_lock_init(&md->exp_lock);
+ t7xx_fsm_reset(md);
+ t7xx_cldma_reset(md->md_ctrl[CLDMA_ID_MD]);
+ md->md_init_finish = true;
+ return 0;
+}
+
+/**
+ * t7xx_md_init() - Initialize modem.
+ * @t7xx_dev: MTK device.
+ *
+ * Allocate and initialize the MD control block, and initialize the data path.
+ * Register the MHCCIF ISR and RGU ISR, and start the state machine.
+ *
+ * Return:
+ ** 0 - Success.
+ ** -ENOMEM - Allocation failure.
+ */
+int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_modem *md;
+ int ret;
+
+ md = t7xx_md_alloc(t7xx_dev);
+ if (!md)
+ return -ENOMEM;
+
+ ret = t7xx_cldma_alloc(CLDMA_ID_MD, t7xx_dev);
+ if (ret)
+ goto err_destroy_hswq;
+
+ ret = t7xx_fsm_init(md);
+ if (ret)
+ goto err_destroy_hswq;
+
+ ret = t7xx_cldma_init(md->md_ctrl[CLDMA_ID_MD]);
+ if (ret)
+ goto err_uninit_fsm;
+
+ ret = t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0);
+ if (ret) /* fsm_uninit flushes cmd queue */
+ goto err_uninit_cldma;
+
+ t7xx_md_sys_sw_init(t7xx_dev);
+ md->md_init_finish = true;
+ return 0;
+
+err_uninit_cldma:
+ t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_MD]);
+
+err_uninit_fsm:
+ t7xx_fsm_uninit(md);
+
+err_destroy_hswq:
+ destroy_workqueue(md->handshake_wq);
+ dev_err(&t7xx_dev->pdev->dev, "Modem init failed\n");
+ return ret;
+}
+
+void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_modem *md = t7xx_dev->md;
+
+ t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+
+ if (!md->md_init_finish)
+ return;
+
+ t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+ t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_MD]);
+ t7xx_fsm_uninit(md);
+ destroy_workqueue(md->handshake_wq);
+}
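
For review, the end-to-end modem exception path through the entry points in
this patch is, schematically:

        t7xx_mhccif_isr_thread()                      /* MHCCIF IRQ thread */
          -> t7xx_pci_mhccif_isr()                    /* sees D2H_INT_EXCEPTION_INIT */
             -> t7xx_fsm_recv_md_intr(MD_IRQ_CCIF_EX)
                -> t7xx_fsm_append_cmd(FSM_CMD_EXCEPTION, ...)
        fsm_main_thread()                             /* FSM thread picks up the command */
          -> fsm_routine_exception(EXCEPTION_EVENT)
             -> t7xx_md_exception_handshake()         /* HIF_EX_INIT .. HIF_EX_ALLQ_RESET */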
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
new file mode 100644
index 000000000000..4c92879621ef
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_MODEM_OPS_H__
+#define __T7XX_MODEM_OPS_H__
+
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_pci.h"
+
+#define FEATURE_COUNT 64
+
+/**
+ * enum hif_ex_stage - HIF exception handshake stages with the HW.
+ * @HIF_EX_INIT: Disable and clear TXQ.
+ * @HIF_EX_INIT_DONE: Polling for initialization to be done.
+ * @HIF_EX_CLEARQ_DONE: Disable RX, flush TX/RX workqueues and clear RX.
+ * @HIF_EX_ALLQ_RESET: HW is back in safe mode for re-initialization and restart.
+ */
+enum hif_ex_stage {
+ HIF_EX_INIT,
+ HIF_EX_INIT_DONE,
+ HIF_EX_CLEARQ_DONE,
+ HIF_EX_ALLQ_RESET,
+};
+
+struct mtk_runtime_feature {
+ u8 feature_id;
+ u8 support_info;
+ u8 reserved[2];
+ __le32 data_len;
+};
+
+enum md_event_id {
+ FSM_PRE_START,
+ FSM_START,
+ FSM_READY,
+};
+
+struct t7xx_sys_info {
+ bool ready;
+};
+
+struct t7xx_modem {
+ struct cldma_ctrl *md_ctrl[CLDMA_NUM];
+ struct t7xx_pci_dev *t7xx_dev;
+ struct t7xx_sys_info core_md;
+ bool md_init_finish;
+ bool rgu_irq_asserted;
+ struct workqueue_struct *handshake_wq;
+ struct work_struct handshake_work;
+ struct t7xx_fsm_ctl *fsm_ctl;
+ struct port_proxy *port_prox;
+ unsigned int exp_id;
+ spinlock_t exp_lock; /* Protects exception events */
+};
+
+void t7xx_md_exception_handshake(struct t7xx_modem *md);
+void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id);
+int t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev);
+
+#endif /* __T7XX_MODEM_OPS_H__ */
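
For context, the three md_event_id values map onto the state machine in
t7xx_state_monitor.c as follows (all call sites are in this patch):

        t7xx_md_event_notify(md, FSM_PRE_START); /* fsm_routine_start(): before polling for LINUX_STAGE */
        t7xx_md_event_notify(md, FSM_START);     /* fsm_routine_starting(): handshake begins */
        t7xx_md_event_notify(md, FSM_READY);     /* fsm_routine_ready(): handshake complete */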
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
new file mode 100644
index 000000000000..1c34919ae5d6
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -0,0 +1,225 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bits.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define T7XX_PCI_IREG_BASE 0
+#define T7XX_PCI_EREG_BASE 2
+
+static int t7xx_request_irq(struct pci_dev *pdev)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ int ret = 0, i;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+
+ for (i = 0; i < EXT_INT_NUM; i++) {
+ const char *irq_descr;
+ int irq_vec;
+
+ if (!t7xx_dev->intr_handler[i])
+ continue;
+
+ irq_descr = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
+ dev_driver_string(&pdev->dev), i);
+ if (!irq_descr) {
+ ret = -ENOMEM;
+ break;
+ }
+
+ irq_vec = pci_irq_vector(pdev, i);
+ ret = request_threaded_irq(irq_vec, t7xx_dev->intr_handler[i],
+ t7xx_dev->intr_thread[i], 0, irq_descr,
+ t7xx_dev->callback_param[i]);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request IRQ: %d\n", ret);
+ break;
+ }
+ }
+
+ if (ret) {
+ while (i--) {
+ if (!t7xx_dev->intr_handler[i])
+ continue;
+
+ free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+ }
+ }
+
+ return ret;
+}
+
+static int t7xx_setup_msix(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct pci_dev *pdev = t7xx_dev->pdev;
+ int ret;
+
+ /* Only using 6 interrupts, but HW design requires a power-of-2 IRQ allocation */
+ ret = pci_alloc_irq_vectors(pdev, EXT_INT_NUM, EXT_INT_NUM, PCI_IRQ_MSIX);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to allocate MSI-X entry: %d\n", ret);
+ return ret;
+ }
+
+ ret = t7xx_request_irq(pdev);
+ if (ret) {
+ pci_free_irq_vectors(pdev);
+ return ret;
+ }
+
+ t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+ return 0;
+}
+
+static int t7xx_interrupt_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ int ret, i;
+
+ if (!t7xx_dev->pdev->msix_cap)
+ return -EINVAL;
+
+ ret = t7xx_setup_msix(t7xx_dev);
+ if (ret)
+ return ret;
+
+ /* IPs enable interrupts when ready */
+ for (i = 0; i < EXT_INT_NUM; i++)
+ t7xx_pcie_mac_clear_int(t7xx_dev, i);
+
+ return 0;
+}
+
+static void t7xx_pci_infracfg_ao_calc(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_dev->base_addr.infracfg_ao_base = t7xx_dev->base_addr.pcie_ext_reg_base +
+ INFRACFG_AO_DEV_CHIP -
+ t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+}
+
+static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ int ret;
+
+ t7xx_dev = devm_kzalloc(&pdev->dev, sizeof(*t7xx_dev), GFP_KERNEL);
+ if (!t7xx_dev)
+ return -ENOMEM;
+
+ pci_set_drvdata(pdev, t7xx_dev);
+ t7xx_dev->pdev = pdev;
+
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return ret;
+
+ pci_set_master(pdev);
+
+ ret = pcim_iomap_regions(pdev, BIT(T7XX_PCI_IREG_BASE) | BIT(T7XX_PCI_EREG_BASE),
+ pci_name(pdev));
+ if (ret) {
+ dev_err(&pdev->dev, "Could not request BARs: %d\n", ret);
+ return -ENOMEM;
+ }
+
+ ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+ if (ret) {
+ dev_err(&pdev->dev, "Could not set PCI DMA mask: %d\n", ret);
+ return ret;
+ }
+
+ ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+ if (ret) {
+ dev_err(&pdev->dev, "Could not set consistent PCI DMA mask: %d\n", ret);
+ return ret;
+ }
+
+ IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[T7XX_PCI_IREG_BASE];
+ t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[T7XX_PCI_EREG_BASE];
+
+ t7xx_pcie_mac_atr_init(t7xx_dev);
+ t7xx_pci_infracfg_ao_calc(t7xx_dev);
+ t7xx_mhccif_init(t7xx_dev);
+
+ ret = t7xx_md_init(t7xx_dev);
+ if (ret)
+ return ret;
+
+ t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+ ret = t7xx_interrupt_init(t7xx_dev);
+ if (ret) {
+ t7xx_md_exit(t7xx_dev);
+ return ret;
+ }
+
+ t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+ t7xx_pcie_mac_interrupts_en(t7xx_dev);
+
+ return 0;
+}
+
+static void t7xx_pci_remove(struct pci_dev *pdev)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ int i;
+
+ t7xx_dev = pci_get_drvdata(pdev);
+ t7xx_md_exit(t7xx_dev);
+
+ for (i = 0; i < EXT_INT_NUM; i++) {
+ if (!t7xx_dev->intr_handler[i])
+ continue;
+
+ free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+ }
+
+ pci_free_irq_vectors(t7xx_dev->pdev);
+}
+
+static const struct pci_device_id t7xx_pci_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x4d75) },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, t7xx_pci_table);
+
+static struct pci_driver t7xx_pci_driver = {
+ .name = "mtk_t7xx",
+ .id_table = t7xx_pci_table,
+ .probe = t7xx_pci_probe,
+ .remove = t7xx_pci_remove,
+};
+
+module_pci_driver(t7xx_pci_driver);
+
+MODULE_AUTHOR("MediaTek Inc");
+MODULE_DESCRIPTION("MediaTek PCIe 5G WWAN modem T7xx driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
new file mode 100644
index 000000000000..ecdab7abe17b
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Moises Veleta <[email protected]>
+ */
+
+#ifndef __T7XX_PCI_H__
+#define __T7XX_PCI_H__
+
+#include <linux/irqreturn.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "t7xx_reg.h"
+
+/* struct t7xx_addr_base - holds base addresses
+ * @pcie_mac_ireg_base: PCIe MAC register base
+ * @pcie_ext_reg_base: used to calculate base addresses for CLDMA, DPMA and MHCCIF registers
+ * @pcie_dev_reg_trsl_addr: used to calculate the register base address
+ * @infracfg_ao_base: base address used in CLDMA reset operations
+ * @mhccif_rc_base: host view of the MHCCIF RC base address
+ */
+struct t7xx_addr_base {
+ void __iomem *pcie_mac_ireg_base;
+ void __iomem *pcie_ext_reg_base;
+ u32 pcie_dev_reg_trsl_addr;
+ void __iomem *infracfg_ao_base;
+ void __iomem *mhccif_rc_base;
+};
+
+typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
+
+/* struct t7xx_pci_dev - MTK device context structure
+ * @intr_handler: array of handler function for request_threaded_irq
+ * @intr_thread: array of thread_fn for request_threaded_irq
+ * @callback_param: array of cookie passed back to interrupt functions
+ * @pdev: PCI device
+ * @base_addr: memory base addresses of HW components
+ * @md: modem interface
+ * @ccmni_ctlb: context structure used to control the network data path
+ * @rgu_pci_irq_en: RGU callback isr registered and active
+ * @pools: pre allocated skb pools
+ */
+struct t7xx_pci_dev {
+ t7xx_intr_callback intr_handler[EXT_INT_NUM];
+ t7xx_intr_callback intr_thread[EXT_INT_NUM];
+ void *callback_param[EXT_INT_NUM];
+ struct pci_dev *pdev;
+ struct t7xx_addr_base base_addr;
+ struct t7xx_modem *md;
+ struct t7xx_ccmni_ctrl *ccmni_ctlb;
+ bool rgu_pci_irq_en;
+};
+
+#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.c b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
new file mode 100644
index 000000000000..fd4b19274d9e
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
@@ -0,0 +1,263 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitops.h>
+#include <linux/device.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define T7XX_PCIE_REG_BAR 2
+#define T7XX_PCIE_REG_PORT ATR_SRC_PCI_WIN0
+#define T7XX_PCIE_REG_TABLE_NUM 0
+#define T7XX_PCIE_REG_TRSL_PORT ATR_DST_AXIM_0
+
+#define T7XX_PCIE_DEV_DMA_PORT_START ATR_SRC_AXIS_0
+#define T7XX_PCIE_DEV_DMA_PORT_END ATR_SRC_AXIS_2
+#define T7XX_PCIE_DEV_DMA_TABLE_NUM 0
+#define T7XX_PCIE_DEV_DMA_TRSL_ADDR 0
+#define T7XX_PCIE_DEV_DMA_SRC_ADDR 0
+#define T7XX_PCIE_DEV_DMA_TRANSPARENT 1
+#define T7XX_PCIE_DEV_DMA_SIZE 0
+
+enum t7xx_atr_src_port {
+ ATR_SRC_PCI_WIN0,
+ ATR_SRC_PCI_WIN1,
+ ATR_SRC_AXIS_0,
+ ATR_SRC_AXIS_1,
+ ATR_SRC_AXIS_2,
+ ATR_SRC_AXIS_3,
+};
+
+enum t7xx_atr_dst_port {
+ ATR_DST_PCI_TRX,
+ ATR_DST_PCI_CONFIG,
+ ATR_DST_AXIM_0 = 4,
+ ATR_DST_AXIM_1,
+ ATR_DST_AXIM_2,
+ ATR_DST_AXIM_3,
+};
+
+struct t7xx_atr_config {
+ u64 src_addr;
+ u64 trsl_addr;
+ u64 size;
+ u32 port;
+ u32 table;
+ enum t7xx_atr_dst_port trsl_id;
+ u32 transparent;
+};
+
+static void t7xx_pcie_mac_atr_tables_dis(void __iomem *pbase, enum t7xx_atr_src_port port)
+{
+ void __iomem *reg;
+ int i, offset;
+
+ for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) {
+ offset = ATR_PORT_OFFSET * port + ATR_TABLE_OFFSET * i;
+ reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
+ iowrite64(0, reg);
+ }
+}
+
+static int t7xx_pcie_mac_atr_cfg(struct t7xx_pci_dev *t7xx_dev, struct t7xx_atr_config *cfg)
+{
+ struct device *dev = &t7xx_dev->pdev->dev;
+ void __iomem *pbase = IREG_BASE(t7xx_dev);
+ int atr_size, pos, offset;
+ void __iomem *reg;
+ u64 value;
+
+ if (cfg->transparent) {
+ /* No address conversion is performed */
+ atr_size = ATR_TRANSPARENT_SIZE;
+ } else {
+ if (cfg->src_addr & (cfg->size - 1)) {
+ dev_err(dev, "Source address is not aligned to size\n");
+ return -EINVAL;
+ }
+
+ if (cfg->trsl_addr & (cfg->size - 1)) {
+ dev_err(dev, "Translation address %llx is not aligned to size %llx\n",
+ cfg->trsl_addr, cfg->size);
+ return -EINVAL;
+ }
+
+ pos = __ffs64(cfg->size);
+
+ /* HW calculates the address translation space as 2^(atr_size + 1) */
+ atr_size = pos - 1;
+ }
+
+ offset = ATR_PORT_OFFSET * cfg->port + ATR_TABLE_OFFSET * cfg->table;
+
+ reg = pbase + ATR_PCIE_WIN0_T0_TRSL_ADDR + offset;
+ value = cfg->trsl_addr & ATR_PCIE_WIN0_ADDR_ALGMT;
+ iowrite64(value, reg);
+
+ reg = pbase + ATR_PCIE_WIN0_T0_TRSL_PARAM + offset;
+ iowrite32(cfg->trsl_id, reg);
+
+ reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
+ value = (cfg->src_addr & ATR_PCIE_WIN0_ADDR_ALGMT) | (atr_size << 1) | BIT(0);
+ iowrite64(value, reg);
+
+ /* Ensure ATR is set */
+ ioread64(reg);
+ return 0;
+}
+
+/**
+ * t7xx_pcie_mac_atr_init() - Initialize address translation.
+ * @t7xx_dev: MTK device.
+ *
+ * Set up ATR for ports & device.
+ */
+void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev)
+{
+ struct t7xx_atr_config cfg;
+ u32 i;
+
+ /* Disable for all ports */
+ for (i = ATR_SRC_PCI_WIN0; i <= ATR_SRC_AXIS_3; i++)
+ t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), i);
+
+ memset(&cfg, 0, sizeof(cfg));
+ /* Config ATR for RC to access device's register */
+ cfg.src_addr = pci_resource_start(t7xx_dev->pdev, T7XX_PCIE_REG_BAR);
+ cfg.size = T7XX_PCIE_REG_SIZE_CHIP;
+ cfg.trsl_addr = T7XX_PCIE_REG_TRSL_ADDR_CHIP;
+ cfg.port = T7XX_PCIE_REG_PORT;
+ cfg.table = T7XX_PCIE_REG_TABLE_NUM;
+ cfg.trsl_id = T7XX_PCIE_REG_TRSL_PORT;
+ t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+ t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+
+ t7xx_dev->base_addr.pcie_dev_reg_trsl_addr = T7XX_PCIE_REG_TRSL_ADDR_CHIP;
+
+ /* Config ATR for EP to access RC's memory */
+ for (i = T7XX_PCIE_DEV_DMA_PORT_START; i <= T7XX_PCIE_DEV_DMA_PORT_END; i++) {
+ cfg.src_addr = T7XX_PCIE_DEV_DMA_SRC_ADDR;
+ cfg.size = T7XX_PCIE_DEV_DMA_SIZE;
+ cfg.trsl_addr = T7XX_PCIE_DEV_DMA_TRSL_ADDR;
+ cfg.port = i;
+ cfg.table = T7XX_PCIE_DEV_DMA_TABLE_NUM;
+ cfg.trsl_id = ATR_DST_PCI_TRX;
+ cfg.transparent = T7XX_PCIE_DEV_DMA_TRANSPARENT;
+ t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+ t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+ }
+}
+
+/**
+ * t7xx_pcie_mac_enable_disable_int() - Enable/disable interrupts.
+ * @t7xx_dev: MTK device.
+ * @enable: Enable/disable.
+ *
+ * Enable or disable device interrupts.
+ */
+static void t7xx_pcie_mac_enable_disable_int(struct t7xx_pci_dev *t7xx_dev, bool enable)
+{
+ u32 value;
+
+ value = ioread32(IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+
+ if (enable)
+ value &= ~ISTAT_HST_CTRL_DIS;
+ else
+ value |= ISTAT_HST_CTRL_DIS;
+
+ iowrite32(value, IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+}
+
+void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_pcie_mac_enable_disable_int(t7xx_dev, true);
+}
+
+void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev)
+{
+ t7xx_pcie_mac_enable_disable_int(t7xx_dev, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_set_int() - Clear/set interrupt by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ * @clear: Clear/set.
+ *
+ * Clear or set device interrupt by type.
+ */
+static void t7xx_pcie_mac_clear_set_int(struct t7xx_pci_dev *t7xx_dev,
+ enum t7xx_int int_type, bool clear)
+{
+ void __iomem *reg;
+ u32 val;
+
+ if (clear)
+ reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0;
+ else
+ reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0;
+
+ val = BIT(EXT_INT_START + int_type);
+ iowrite32(val, reg);
+}
+
+void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type)
+{
+ t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, true);
+}
+
+void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type)
+{
+ t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_int_status() - Clear interrupt status by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ *
+ * Clear device interrupt status by type.
+ */
+void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type)
+{
+ void __iomem *reg = IREG_BASE(t7xx_dev) + MSIX_ISTAT_HST_GRP0_0;
+ u32 val = BIT(EXT_INT_START + int_type);
+
+ iowrite32(val, reg);
+}
+
+/**
+ * t7xx_pcie_set_mac_msix_cfg() - Write MSI-X control configuration.
+ * @t7xx_dev: MTK device.
+ * @irq_count: Number of MSI-X IRQ vectors.
+ *
+ * Write IRQ count to device.
+ */
+void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count)
+{
+ u32 val;
+
+ val = ffs(irq_count) * 2 - 1;
+ iowrite32(val, IREG_BASE(t7xx_dev) + T7XX_PCIE_CFG_MSIX);
+}
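
Two of the encodings above may be worth spelling out with the constants used
here:

        /* ATR: the HW window size is 2^(atr_size + 1) bytes. For the register
         * window, cfg->size = T7XX_PCIE_REG_SIZE_CHIP = 0x00400000 = 2^22,
         * so pos = __ffs64(cfg->size) = 22 and atr_size = 21, i.e. a
         * 2^(21 + 1) = 4 MiB window, matching the requested size.
         *
         * MSI-X: val = ffs(irq_count) * 2 - 1. With irq_count = EXT_INT_NUM = 8,
         * ffs(8) = 4, so val = 7 is written to T7XX_PCIE_CFG_MSIX.
         */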
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.h b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
new file mode 100644
index 000000000000..92f1036532ae
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ *
+ * Contributors:
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ */
+
+#ifndef __T7XX_PCIE_MAC_H__
+#define __T7XX_PCIE_MAC_H__
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define IREG_BASE(t7xx_dev) ((t7xx_dev)->base_addr.pcie_mac_ireg_base)
+
+void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type);
+void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type);
+void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum t7xx_int int_type);
+void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count);
+
+#endif /* __T7XX_PCIE_MAC_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_reg.h b/drivers/net/wwan/t7xx/t7xx_reg.h
index 827f80ea6de0..969bde7078ed 100644
--- a/drivers/net/wwan/t7xx/t7xx_reg.h
+++ b/drivers/net/wwan/t7xx/t7xx_reg.h
@@ -19,6 +19,112 @@
#ifndef __T7XX_REG_H__
#define __T7XX_REG_H__

+#include <linux/bits.h>
+
+/* Device base address offset */
+#define MHCCIF_RC_DEV_BASE 0x10024000
+
+#define REG_RC2EP_SW_BSY 0x04
+#define REG_RC2EP_SW_INT_START 0x08
+
+#define REG_RC2EP_SW_TCHNUM 0x0c
+#define H2D_CH_EXCEPTION_ACK 1
+#define H2D_CH_EXCEPTION_CLEARQ_ACK 2
+#define H2D_CH_DS_LOCK 3
+/* Channels 4-8 are reserved */
+#define H2D_CH_SUSPEND_REQ 9
+#define H2D_CH_RESUME_REQ 10
+#define H2D_CH_SUSPEND_REQ_AP 11
+#define H2D_CH_RESUME_REQ_AP 12
+#define H2D_CH_DEVICE_RESET 13
+#define H2D_CH_DRM_DISABLE_AP 14
+
+#define REG_EP2RC_SW_INT_STS 0x10
+#define REG_EP2RC_SW_INT_ACK 0x14
+#define REG_EP2RC_SW_INT_EAP_MASK 0x20
+#define REG_EP2RC_SW_INT_EAP_MASK_SET 0x30
+#define REG_EP2RC_SW_INT_EAP_MASK_CLR 0x40
+
+#define D2H_INT_DS_LOCK_ACK BIT(0)
+#define D2H_INT_EXCEPTION_INIT BIT(1)
+#define D2H_INT_EXCEPTION_INIT_DONE BIT(2)
+#define D2H_INT_EXCEPTION_CLEARQ_DONE BIT(3)
+#define D2H_INT_EXCEPTION_ALLQ_RESET BIT(4)
+#define D2H_INT_PORT_ENUM BIT(5)
+/* Bits 6-10 are reserved */
+#define D2H_INT_SUSPEND_ACK BIT(11)
+#define D2H_INT_RESUME_ACK BIT(12)
+#define D2H_INT_SUSPEND_ACK_AP BIT(13)
+#define D2H_INT_RESUME_ACK_AP BIT(14)
+#define D2H_INT_ASYNC_SAP_HK BIT(15)
+#define D2H_INT_ASYNC_MD_HK BIT(16)
+
+/* Register base */
+#define INFRACFG_AO_DEV_CHIP 0x10001000
+
+/* ATR setting */
+#define T7XX_PCIE_REG_TRSL_ADDR_CHIP 0x10000000
+#define T7XX_PCIE_REG_SIZE_CHIP 0x00400000
+
+/* Reset Generic Unit (RGU) */
+#define TOPRGU_CH_PCIE_IRQ_STA 0x1000790c
+
+#define ATR_PORT_OFFSET 0x100
+#define ATR_TABLE_OFFSET 0x20
+#define ATR_TABLE_NUM_PER_ATR 8
+#define ATR_TRANSPARENT_SIZE 0x3f
+
+/* PCIE_MAC_IREG Register Definition */
+
+#define ISTAT_HST_CTRL 0x01ac
+#define ISTAT_HST_CTRL_DIS BIT(0)
+
+#define T7XX_PCIE_MISC_CTRL 0x0348
+#define T7XX_PCIE_MISC_MAC_SLEEP_DIS BIT(7)
+
+#define T7XX_PCIE_CFG_MSIX 0x03ec
+#define ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR 0x0600
+#define ATR_PCIE_WIN0_T0_TRSL_ADDR 0x0608
+#define ATR_PCIE_WIN0_T0_TRSL_PARAM 0x0610
+#define ATR_PCIE_WIN0_ADDR_ALGMT GENMASK_ULL(63, 12)
+
+#define ATR_SRC_ADDR_INVALID 0x007f
+
+#define T7XX_PCIE_PM_RESUME_STATE 0x0d0c
+
+enum t7xx_pm_resume_state {
+ PM_RESUME_REG_STATE_L3,
+ PM_RESUME_REG_STATE_L1,
+ PM_RESUME_REG_STATE_INIT,
+ PM_RESUME_REG_STATE_EXP,
+ PM_RESUME_REG_STATE_L2,
+ PM_RESUME_REG_STATE_L2_EXP,
+};
+
+#define T7XX_PCIE_MISC_DEV_STATUS 0x0d1c
+#define MISC_STAGE_MASK GENMASK(2, 0)
+#define MISC_RESET_TYPE_PLDR BIT(26)
+#define MISC_RESET_TYPE_FLDR BIT(27)
+#define LINUX_STAGE 4
+
+#define T7XX_PCIE_RESOURCE_STATUS 0x0d28
+#define T7XX_PCIE_RESOURCE_STS_MSK GENMASK(4, 0)
+
+#define DIS_ASPM_LOWPWR_SET_0 0x0e50
+#define DIS_ASPM_LOWPWR_CLR_0 0x0e54
+#define DIS_ASPM_LOWPWR_SET_1 0x0e58
+#define DIS_ASPM_LOWPWR_CLR_1 0x0e5c
+#define L1_DISABLE_BIT(i) BIT((i) * 4 + 1)
+#define L1_1_DISABLE_BIT(i) BIT((i) * 4 + 2)
+#define L1_2_DISABLE_BIT(i) BIT((i) * 4 + 3)
+
+#define MSIX_ISTAT_HST_GRP0_0 0x0f00
+#define IMASK_HOST_MSIX_SET_GRP0_0 0x3000
+#define IMASK_HOST_MSIX_CLR_GRP0_0 0x3080
+#define EXT_INT_START 24
+#define EXT_INT_NUM 8
+#define MSIX_MSK_SET_ALL GENMASK(31, 24)
+
enum t7xx_int {
DPMAIF_INT,
CLDMA0_INT,
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
new file mode 100644
index 000000000000..5c18b0798a1c
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -0,0 +1,540 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/completion.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
+
+#define FSM_DRM_DISABLE_DELAY_MS 200
+#define FSM_EVENT_POLL_INTERVAL_MS 20
+#define FSM_MD_EX_REC_OK_TIMEOUT_MS 10000
+#define FSM_MD_EX_PASS_TIMEOUT_MS 45000
+#define FSM_CMD_TIMEOUT_MS 2000
+
+void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier)
+{
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_add_tail(&notifier->entry, &ctl->notifier_list);
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier)
+{
+ struct t7xx_fsm_notifier *notifier_cur, *notifier_next;
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_for_each_entry_safe(notifier_cur, notifier_next, &ctl->notifier_list, entry) {
+ if (notifier_cur == notifier)
+ list_del(&notifier->entry);
+ }
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+static void fsm_state_notify(struct t7xx_modem *md, enum md_state state)
+{
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+ struct t7xx_fsm_notifier *notifier;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ list_for_each_entry(notifier, &ctl->notifier_list, entry) {
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+ if (notifier->notifier_fn)
+ notifier->notifier_fn(state, notifier->data);
+
+ spin_lock_irqsave(&ctl->notifier_lock, flags);
+ }
+ spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state)
+{
+ ctl->md_state = state;
+ fsm_state_notify(ctl->md, state);
+}
+
+static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
+{
+ if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+ *cmd->ret = result;
+ complete_all(cmd->done);
+ }
+
+ kfree(cmd);
+}
+
+static void fsm_del_kf_event(struct t7xx_fsm_event *event)
+{
+ list_del(&event->entry);
+ kfree(event);
+}
+
+static void fsm_flush_event_cmd_qs(struct t7xx_fsm_ctl *ctl)
+{
+ struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+ struct t7xx_fsm_event *event, *evt_next;
+ struct t7xx_fsm_command *cmd, *cmd_next;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_for_each_entry_safe(cmd, cmd_next, &ctl->command_queue, entry) {
+ dev_warn(dev, "Unhandled command %d\n", cmd->cmd_id);
+ list_del(&cmd->entry);
+ fsm_finish_command(ctl, cmd, -EINVAL);
+ }
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+ dev_warn(dev, "Unhandled event %d\n", event->event_id);
+ fsm_del_kf_event(event);
+ }
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+static void fsm_wait_for_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_expected,
+ enum t7xx_fsm_event_state event_ignore, int retries)
+{
+ struct t7xx_fsm_event *event;
+ bool event_received = false;
+ unsigned long flags;
+ int cnt = 0;
+
+ while (cnt++ < retries && !event_received) {
+ bool sleep_required = true;
+
+ if (kthread_should_stop())
+ return;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ event = list_first_entry_or_null(&ctl->event_queue, struct t7xx_fsm_event, entry);
+ if (event) {
+ event_received = event->event_id == event_expected;
+ if (event_received || event->event_id == event_ignore) {
+ fsm_del_kf_event(event);
+ sleep_required = false;
+ }
+ }
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+
+ if (sleep_required)
+ msleep(FSM_EVENT_POLL_INTERVAL_MS);
+ }
+}
+
+static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd,
+ enum t7xx_ex_reason reason)
+{
+ struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+
+ if (ctl->curr_state != FSM_STATE_READY && ctl->curr_state != FSM_STATE_STARTING) {
+ if (cmd)
+ fsm_finish_command(ctl, cmd, -EINVAL);
+
+ return;
+ }
+
+ ctl->curr_state = FSM_STATE_EXCEPTION;
+
+ switch (reason) {
+ case EXCEPTION_HS_TIMEOUT:
+ dev_err(dev, "Boot Handshake failure\n");
+ break;
+
+ case EXCEPTION_EVENT:
+ dev_err(dev, "Exception event\n");
+ t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+ t7xx_md_exception_handshake(ctl->md);
+
+ fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_REC_OK, FSM_EVENT_MD_EX,
+ FSM_MD_EX_REC_OK_TIMEOUT_MS / FSM_EVENT_POLL_INTERVAL_MS);
+ fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_PASS, FSM_EVENT_INVALID,
+ FSM_MD_EX_PASS_TIMEOUT_MS / FSM_EVENT_POLL_INTERVAL_MS);
+ break;
+
+ default:
+ dev_err(dev, "Exception %d\n", reason);
+ break;
+ }
+
+ if (cmd)
+ fsm_finish_command(ctl, cmd, 0);
+}
+
+static int fsm_stopped_handler(struct t7xx_fsm_ctl *ctl)
+{
+ ctl->curr_state = FSM_STATE_STOPPED;
+
+ t7xx_fsm_broadcast_state(ctl, MD_STATE_STOPPED);
+ return t7xx_md_reset(ctl->md->t7xx_dev);
+}
+
+static void fsm_routine_stopped(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+ if (ctl->curr_state == FSM_STATE_STOPPED) {
+ fsm_finish_command(ctl, cmd, -EINVAL);
+ return;
+ }
+
+ fsm_finish_command(ctl, cmd, fsm_stopped_handler(ctl));
+}
+
+static void fsm_routine_stopping(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+ struct t7xx_pci_dev *t7xx_dev;
+ struct cldma_ctrl *md_ctrl;
+ int err;
+
+ if (ctl->curr_state == FSM_STATE_STOPPED || ctl->curr_state == FSM_STATE_STOPPING) {
+ fsm_finish_command(ctl, cmd, -EINVAL);
+ return;
+ }
+
+ md_ctrl = ctl->md->md_ctrl[CLDMA_ID_MD];
+ t7xx_dev = ctl->md->t7xx_dev;
+
+ ctl->curr_state = FSM_STATE_STOPPING;
+ t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_TO_STOP);
+ t7xx_cldma_stop(md_ctrl);
+
+ if (!ctl->md->rgu_irq_asserted) {
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DRM_DISABLE_AP);
+ /* Wait for the DRM disable to take effect */
+ msleep(FSM_DRM_DISABLE_DELAY_MS);
+
+ err = t7xx_acpi_fldr_func(t7xx_dev);
+ if (err)
+ t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DEVICE_RESET);
+ }
+
+ fsm_finish_command(ctl, cmd, fsm_stopped_handler(ctl));
+}
+
+static void t7xx_fsm_broadcast_ready_state(struct t7xx_fsm_ctl *ctl)
+{
+ if (ctl->md_state != MD_STATE_WAITING_FOR_HS2)
+ return;
+
+ ctl->md_state = MD_STATE_READY;
+
+ fsm_state_notify(ctl->md, MD_STATE_READY);
+}
+
+static void fsm_routine_ready(struct t7xx_fsm_ctl *ctl)
+{
+ struct t7xx_modem *md = ctl->md;
+
+ ctl->curr_state = FSM_STATE_READY;
+ t7xx_fsm_broadcast_ready_state(ctl);
+ t7xx_md_event_notify(md, FSM_READY);
+}
+
+static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
+{
+ struct t7xx_modem *md = ctl->md;
+ struct device *dev;
+
+ ctl->curr_state = FSM_STATE_STARTING;
+
+ t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS1);
+ t7xx_md_event_notify(md, FSM_START);
+
+ wait_event_interruptible_timeout(ctl->async_hk_wq, md->core_md.ready || ctl->exp_flg,
+ HZ * 60);
+ dev = &md->t7xx_dev->pdev->dev;
+
+ if (ctl->exp_flg)
+ dev_err(dev, "MD exception is captured during handshake\n");
+
+ if (!md->core_md.ready) {
+ dev_err(dev, "MD handshake timeout\n");
+ fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
+ return -ETIMEDOUT;
+ }
+
+ fsm_routine_ready(ctl);
+ return 0;
+}
+
+static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+ struct t7xx_modem *md = ctl->md;
+ u32 dev_status;
+ int ret;
+
+ if (!md)
+ return;
+
+ if (ctl->curr_state != FSM_STATE_INIT && ctl->curr_state != FSM_STATE_PRE_START &&
+ ctl->curr_state != FSM_STATE_STOPPED) {
+ fsm_finish_command(ctl, cmd, -EINVAL);
+ return;
+ }
+
+ ctl->curr_state = FSM_STATE_PRE_START;
+ t7xx_md_event_notify(md, FSM_PRE_START);
+
+ ret = read_poll_timeout(ioread32, dev_status,
+ (dev_status & MISC_STAGE_MASK) == LINUX_STAGE, 20000, 2000000,
+ false, IREG_BASE(md->t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
+ if (ret) {
+ struct device *dev = &md->t7xx_dev->pdev->dev;
+
+ fsm_finish_command(ctl, cmd, -ETIMEDOUT);
+ dev_err(dev, "Invalid device status 0x%lx\n", dev_status & MISC_STAGE_MASK);
+ return;
+ }
+
+ t7xx_cldma_hif_hw_init(md->md_ctrl[CLDMA_ID_MD]);
+ fsm_finish_command(ctl, cmd, fsm_routine_starting(ctl));
+}
+
+static int fsm_main_thread(void *data)
+{
+ struct t7xx_fsm_ctl *ctl = data;
+ struct t7xx_fsm_command *cmd;
+ unsigned long flags;
+
+ while (!kthread_should_stop()) {
+ if (wait_event_interruptible(ctl->command_wq, !list_empty(&ctl->command_queue) ||
+ kthread_should_stop()))
+ continue;
+
+ if (kthread_should_stop())
+ break;
+
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ cmd = list_first_entry(&ctl->command_queue, struct t7xx_fsm_command, entry);
+ list_del(&cmd->entry);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+ switch (cmd->cmd_id) {
+ case FSM_CMD_START:
+ fsm_routine_start(ctl, cmd);
+ break;
+
+ case FSM_CMD_EXCEPTION:
+ fsm_routine_exception(ctl, cmd, FIELD_GET(FSM_CMD_EX_REASON, cmd->flag));
+ break;
+
+ case FSM_CMD_PRE_STOP:
+ fsm_routine_stopping(ctl, cmd);
+ break;
+
+ case FSM_CMD_STOP:
+ fsm_routine_stopped(ctl, cmd);
+ break;
+
+ default:
+ fsm_finish_command(ctl, cmd, -EINVAL);
+ fsm_flush_event_cmd_qs(ctl);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
+{
+ DECLARE_COMPLETION_ONSTACK(done);
+ struct t7xx_fsm_command *cmd;
+ unsigned long flags;
+ int ret;
+
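+	/* Use an atomic allocation when the command is queued from IRQ context */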
+ cmd = kzalloc(sizeof(*cmd), flag & FSM_CMD_FLAG_IN_INTERRUPT ? GFP_ATOMIC : GFP_KERNEL);
+ if (!cmd)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&cmd->entry);
+ cmd->cmd_id = cmd_id;
+ cmd->flag = flag;
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+ cmd->done = &done;
+ cmd->ret = &ret;
+ }
+
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_add_tail(&cmd->entry, &ctl->command_queue);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+ wake_up(&ctl->command_wq);
+
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+ unsigned long wait_ret;
+
+ wait_ret = wait_for_completion_timeout(&done,
+ msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
+ if (!wait_ret)
+ return -ETIMEDOUT;
+
+ return ret;
+ }
+
+ return 0;
+}
+
+int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
+ unsigned char *data, unsigned int length)
+{
+ struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+ struct t7xx_fsm_event *event;
+ unsigned long flags;
+
+ if (event_id <= FSM_EVENT_INVALID || event_id >= FSM_EVENT_MAX) {
+ dev_err(dev, "Invalid event %d\n", event_id);
+ return -EINVAL;
+ }
+
+ event = kmalloc(sizeof(*event) + length, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
+ if (!event)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&event->entry);
+ event->event_id = event_id;
+ event->length = length;
+
+ if (data && length)
+ memcpy((void *)event + sizeof(*event), data, length);
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_add_tail(&event->entry, &ctl->event_queue);
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+
+ wake_up_all(&ctl->event_wq);
+ return 0;
+}
+
+void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id)
+{
+ struct t7xx_fsm_event *event, *evt_next;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+ if (event->event_id == event_id)
+ fsm_del_kf_event(event);
+ }
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl)
+{
+ if (ctl)
+ return ctl->md_state;
+
+ return MD_STATE_INVALID;
+}
+
+unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl)
+{
+ if (ctl)
+ return ctl->curr_state;
+
+ return FSM_STATE_STOPPED;
+}
+
+int t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type)
+{
+ unsigned int cmd_flags = FSM_CMD_FLAG_IN_INTERRUPT;
+
+ if (type == MD_IRQ_PORT_ENUM) {
+ return t7xx_fsm_append_cmd(ctl, FSM_CMD_START, cmd_flags);
+ } else if (type == MD_IRQ_CCIF_EX) {
+ ctl->exp_flg = true;
+ wake_up(&ctl->async_hk_wq);
+ cmd_flags |= FIELD_PREP(FSM_CMD_EX_REASON, EXCEPTION_EVENT);
+ return t7xx_fsm_append_cmd(ctl, FSM_CMD_EXCEPTION, cmd_flags);
+ }
+
+ return -EINVAL;
+}
+
+void t7xx_fsm_reset(struct t7xx_modem *md)
+{
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+ fsm_flush_event_cmd_qs(ctl);
+ ctl->curr_state = FSM_STATE_STOPPED;
+ ctl->exp_flg = false;
+}
+
+int t7xx_fsm_init(struct t7xx_modem *md)
+{
+ struct device *dev = &md->t7xx_dev->pdev->dev;
+ struct t7xx_fsm_ctl *ctl;
+
+ ctl = devm_kzalloc(dev, sizeof(*ctl), GFP_KERNEL);
+ if (!ctl)
+ return -ENOMEM;
+
+ md->fsm_ctl = ctl;
+ ctl->md = md;
+ ctl->curr_state = FSM_STATE_INIT;
+ INIT_LIST_HEAD(&ctl->command_queue);
+ INIT_LIST_HEAD(&ctl->event_queue);
+ init_waitqueue_head(&ctl->async_hk_wq);
+ init_waitqueue_head(&ctl->event_wq);
+ INIT_LIST_HEAD(&ctl->notifier_list);
+ init_waitqueue_head(&ctl->command_wq);
+ spin_lock_init(&ctl->event_lock);
+ spin_lock_init(&ctl->command_lock);
+ ctl->exp_flg = false;
+ spin_lock_init(&ctl->notifier_lock);
+
+ ctl->fsm_thread = kthread_run(fsm_main_thread, ctl, "t7xx_fsm");
+ return PTR_ERR_OR_ZERO(ctl->fsm_thread);
+}
+
+void t7xx_fsm_uninit(struct t7xx_modem *md)
+{
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+ if (!ctl)
+ return;
+
+ if (ctl->fsm_thread)
+ kthread_stop(ctl->fsm_thread);
+
+ fsm_flush_event_cmd_qs(ctl);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
new file mode 100644
index 000000000000..2e966c898269
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ *
+ * Contributors:
+ * Eliot Lee <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_MONITOR_H__
+#define __T7XX_MONITOR_H__
+
+#include <linux/bits.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+
+enum t7xx_fsm_state {
+ FSM_STATE_INIT,
+ FSM_STATE_PRE_START,
+ FSM_STATE_STARTING,
+ FSM_STATE_READY,
+ FSM_STATE_EXCEPTION,
+ FSM_STATE_STOPPING,
+ FSM_STATE_STOPPED,
+};
+
+enum t7xx_fsm_event_state {
+ FSM_EVENT_INVALID,
+ FSM_EVENT_MD_EX,
+ FSM_EVENT_MD_EX_REC_OK,
+ FSM_EVENT_MD_EX_PASS,
+ FSM_EVENT_MAX
+};
+
+enum t7xx_fsm_cmd_state {
+ FSM_CMD_INVALID,
+ FSM_CMD_START,
+ FSM_CMD_EXCEPTION,
+ FSM_CMD_PRE_STOP,
+ FSM_CMD_STOP,
+};
+
+enum t7xx_ex_reason {
+ EXCEPTION_HS_TIMEOUT,
+ EXCEPTION_EVENT,
+};
+
+enum t7xx_md_irq_type {
+ MD_IRQ_WDT,
+ MD_IRQ_CCIF_EX,
+ MD_IRQ_PORT_ENUM,
+};
+
+#define FSM_CMD_FLAG_WAIT_FOR_COMPLETION BIT(0)
+#define FSM_CMD_FLAG_FLIGHT_MODE BIT(1)
+#define FSM_CMD_FLAG_IN_INTERRUPT BIT(2)
+#define FSM_CMD_EX_REASON GENMASK(23, 16)
+
+struct t7xx_fsm_ctl {
+ struct t7xx_modem *md;
+ enum md_state md_state;
+ unsigned int curr_state;
+ struct list_head command_queue;
+ struct list_head event_queue;
+ wait_queue_head_t command_wq;
+ wait_queue_head_t event_wq;
+ wait_queue_head_t async_hk_wq;
+ spinlock_t event_lock; /* Protects event queue */
+ spinlock_t command_lock; /* Protects command queue */
+ struct task_struct *fsm_thread;
+ bool exp_flg;
+ spinlock_t notifier_lock; /* Protects notifier list */
+ struct list_head notifier_list;
+};
+
+struct t7xx_fsm_event {
+ struct list_head entry;
+ enum t7xx_fsm_event_state event_id;
+ unsigned int length;
+};
+
+struct t7xx_fsm_command {
+ struct list_head entry;
+ enum t7xx_fsm_cmd_state cmd_id;
+ unsigned int flag;
+ struct completion *done;
+ int *ret;
+};
+
+struct t7xx_fsm_notifier {
+ struct list_head entry;
+ int (*notifier_fn)(enum md_state state, void *data);
+ void *data;
+};
+
+int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id,
+ unsigned int flag);
+int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
+ unsigned char *data, unsigned int length);
+void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id);
+void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state);
+void t7xx_fsm_reset(struct t7xx_modem *md);
+int t7xx_fsm_init(struct t7xx_modem *md);
+void t7xx_fsm_uninit(struct t7xx_modem *md);
+int t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type);
+enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl);
+unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl);
+void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier);
+void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier);
+
+#endif /* __T7XX_MONITOR_H__ */
--
2.17.1

2022-02-24 01:44:08

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH net-next v5 07/13] net: wwan: t7xx: Data path HW layer

From: Haijun Liu <[email protected]>

Data Path Modem AP Interface (DPMAIF) HW layer provides HW abstraction
for the upper layer (DPMAIF HIF). It implements functions to do the HW
configuration, TX/RX control and interrupt handling.
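
A rough sketch of how the HIF layer is expected to drive this API
(illustrative only: the hif_drb_dma_addr() and hif_drb_cnt() helpers
stand in for the HIF ring allocator and are not part of this patch):

	static int hif_hw_bring_up(struct dpmaif_hw_info *hw_info)
	{
		struct dpmaif_hw_params params = {};
		int i, ret;

		/* UL rings; the DL pkt_bat_*, frg_bat_* and pit_* fields
		 * are filled in the same way.
		 */
		for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
			params.drb_base_addr[i] = hif_drb_dma_addr(i);
			params.drb_size_cnt[i] = hif_drb_cnt(i);
		}

		ret = t7xx_dpmaif_hw_init(hw_info, &params);
		if (ret)
			return ret;

		t7xx_dpmaif_start_hw(hw_info);
		return 0;
	}

On the interrupt path, the bottom half calls t7xx_dpmaif_hw_get_intr_cnt()
to collect pending events into struct dpmaif_hw_intr_st_para, walks
intr_types[]/intr_queues[] to dispatch them, and re-enables the sources
with t7xx_dpmaif_unmask_ulq_intr() or t7xx_dpmaif_dlq_unmask_rx_done()
once the work is done.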

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>

From a WWAN framework perspective:
Reviewed-by: Loic Poulain <[email protected]>
---
drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1294 +++++++++++++++++++++++++++
drivers/net/wwan/t7xx/t7xx_dpmaif.h | 179 ++++
drivers/net/wwan/t7xx/t7xx_reg.h | 213 +++++
3 files changed, 1686 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h

diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
new file mode 100644
index 000000000000..d27fff91c4f9
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
@@ -0,0 +1,1294 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/types.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_reg.h"
+
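+/* Poll an MMIO register with ioread32() until @cond is met or @timeout_us
+ * expires, without sleeping between reads.
+ */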
+#define ioread32_poll_timeout_atomic(addr, val, cond, delay_us, timeout_us) \
+ readx_poll_timeout_atomic(ioread32, addr, val, cond, delay_us, timeout_us)
+
+static int t7xx_dpmaif_init_intr(struct dpmaif_hw_info *hw_info)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk = &hw_info->isr_en_mask;
+ u32 value, ul_intr_enable, dl_intr_enable;
+ int ret;
+
+ ul_intr_enable = DP_UL_INT_ERR_MSK | DP_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk = ul_intr_enable;
+ iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+
+ /* Set interrupt enable mask */
+ iowrite32(ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+ iowrite32(~ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+ /* Check mask status */
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & ul_intr_enable) != ul_intr_enable, 0,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ dl_intr_enable = DP_DL_INT_PITCNT_LEN_ERR | DP_DL_INT_BATCNT_LEN_ERR;
+ isr_en_msk->ap_dl_l2intr_err_en_msk = dl_intr_enable;
+ ul_intr_enable = DPMAIF_DL_INT_DLQ0_QDONE | DPMAIF_DL_INT_DLQ0_PITCNT_LEN |
+ DPMAIF_DL_INT_DLQ1_QDONE | DPMAIF_DL_INT_DLQ1_PITCNT_LEN;
+ isr_en_msk->ap_ul_l2intr_en_msk = ul_intr_enable;
+ iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+
+ /* Set DL ISR PD enable mask */
+ iowrite32(~ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0,
+ value, (value & ul_intr_enable) != ul_intr_enable, 0,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ isr_en_msk->ap_udl_ip_busy_en_msk = DPMAIF_UDL_IP_BUSY;
+ iowrite32(DPMAIF_AP_IP_BUSY_MASK, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+ iowrite32(isr_en_msk->ap_udl_ip_busy_en_msk,
+ hw_info->pcie_base + DPMAIF_AO_AP_DLUL_IP_BUSY_MASK);
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+ value |= DPMAIF_DL_INT_Q2APTOP | DPMAIF_DL_INT_Q2TOQ1;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+ iowrite32(DPMA_HPC_ALL_INT_MASK, hw_info->pcie_base + DPMAIF_HPC_INTR_MASK);
+
+ return 0;
+}
+
+static void t7xx_dpmaif_mask_ulq_intr(struct dpmaif_hw_info *hw_info, unsigned int q_num)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk;
+ u32 value, ul_int_que_done;
+ int ret;
+
+ isr_en_msk = &hw_info->isr_en_mask;
+ ul_int_que_done = BIT(q_num + DP_UL_INT_DONE_OFFSET) & DP_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk &= ~ul_int_que_done;
+ iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & ul_int_que_done) == ul_int_que_done, 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ dev_err(hw_info->dev,
+ "Could not mask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+ value);
+}
+
+void t7xx_dpmaif_unmask_ulq_intr(struct dpmaif_hw_info *hw_info, unsigned int q_num)
+{
+ struct dpmaif_isr_en_mask *isr_en_msk;
+ u32 value, ul_int_que_done;
+ int ret;
+
+ isr_en_msk = &hw_info->isr_en_mask;
+ ul_int_que_done = BIT(q_num + DP_UL_INT_DONE_OFFSET) & DP_UL_INT_QDONE_MSK;
+ isr_en_msk->ap_ul_l2intr_en_msk |= ul_int_que_done;
+ iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+ value, (value & ul_int_que_done) != ul_int_que_done, 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ dev_err(hw_info->dev,
+ "Could not unmask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+ value);
+}
+
+void t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DP_DL_INT_BATCNT_LEN_ERR;
+ iowrite32(DP_DL_INT_BATCNT_LEN_ERR, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+void t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info)
+{
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DP_DL_INT_PITCNT_LEN_ERR;
+ iowrite32(DP_DL_INT_PITCNT_LEN_ERR, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
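+/* Poll helper for t7xx_mask_dlq_intr(): re-writes the mask-set bit and
+ * returns the current interrupt mask status.
+ */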
+static u32 t7xx_update_dlq_intr(struct dpmaif_hw_info *hw_info, u32 q_done)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+ iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ return value;
+}
+
+static int t7xx_mask_dlq_intr(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ u32 value, q_done;
+ int ret;
+
+ q_done = qno == DPF_RX_QNO0 ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+ iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+
+ ret = read_poll_timeout_atomic(t7xx_update_dlq_intr, value, value & q_done,
+ 0, DPMAIF_CHECK_TIMEOUT_US, false, hw_info, q_done);
+ if (ret) {
+ dev_err(hw_info->dev,
+ "Could not mask the DL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+ value);
+ return -ETIMEDOUT;
+ }
+
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~q_done;
+ return 0;
+}
+
+void t7xx_dpmaif_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ u32 mask;
+
+ mask = qno == DPF_RX_QNO0 ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+ iowrite32(mask, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= mask;
+}
+
+void t7xx_dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info)
+{
+ u32 ip_busy_sts;
+
+ ip_busy_sts = ioread32(hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+ iowrite32(ip_busy_sts, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+}
+
+static void t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+ unsigned char qno)
+{
+ if (qno == DPF_RX_QNO0)
+ iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ else
+ iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+}
+
+void t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+ unsigned char qno)
+{
+ if (qno == DPF_RX_QNO0)
+ iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+ else
+ iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+void t7xx_dpmaif_ul_clr_all_intr(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+}
+
+void t7xx_dpmaif_dl_clr_all_intr(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+}
+
+static void t7xx_dpmaif_set_intr_para(struct dpmaif_hw_intr_st_para *para,
+ enum dpmaif_hw_intr_type intr_type, unsigned int intr_queue)
+{
+ para->intr_types[para->intr_cnt] = intr_type;
+ para->intr_queues[para->intr_cnt] = intr_queue;
+ para->intr_cnt++;
+}
+
+/* The para->intr_cnt counter is set to zero before this function is called.
+ * It does not check for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void t7xx_dpmaif_hw_check_tx_intr(struct dpmaif_hw_info *hw_info,
+ unsigned int intr_status,
+ struct dpmaif_hw_intr_st_para *para)
+{
+ unsigned long value;
+
+ value = FIELD_GET(DP_UL_INT_QDONE_MSK, intr_status);
+ if (value) {
+ unsigned int index;
+
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_DONE, value);
+
+ for_each_set_bit(index, &value, DPMAIF_TXQ_NUM)
+ t7xx_dpmaif_mask_ulq_intr(hw_info, index);
+ }
+
+ value = FIELD_GET(DP_UL_INT_EMPTY_MSK, intr_status);
+ if (value)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_DRB_EMPTY, value);
+
+ value = FIELD_GET(DP_UL_INT_MD_NOTREADY_MSK, intr_status);
+ if (value)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_MD_NOTREADY, value);
+
+ value = FIELD_GET(DP_UL_INT_MD_PWR_NOTREADY_MSK, intr_status);
+ if (value)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_MD_PWR_NOTREADY, value);
+
+ value = FIELD_GET(DP_UL_INT_ERR_MSK, intr_status);
+ if (value)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_LEN_ERR, value);
+
+ /* Clear interrupt status */
+ iowrite32(intr_status, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+}
+
+/* The para->intr_cnt counter is set to zero before this function is called.
+ * It does not check for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void t7xx_dpmaif_hw_check_rx_intr(struct dpmaif_hw_info *hw_info,
+ unsigned int intr_status,
+ struct dpmaif_hw_intr_st_para *para, int qno)
+{
+ if (qno == DPF_RX_QNO_DFT) {
+ if (intr_status & DP_DL_INT_SKB_LEN_ERR)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_SKB_LEN_ERR, DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_BATCNT_LEN_ERR) {
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_BATCNT_LEN_ERR, DPF_RX_QNO_DFT);
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_BATCNT_LEN_ERR;
+ iowrite32(DP_DL_INT_BATCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ }
+
+ if (intr_status & DP_DL_INT_PITCNT_LEN_ERR) {
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PITCNT_LEN_ERR, DPF_RX_QNO_DFT);
+ hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_PITCNT_LEN_ERR;
+ iowrite32(DP_DL_INT_PITCNT_LEN_ERR,
+ hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+ }
+
+ if (intr_status & DP_DL_INT_PKT_EMPTY_MSK)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PKT_EMPTY_SET, DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_FRG_EMPTY_MSK)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRG_EMPTY_SET, DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_MTU_ERR_MSK)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_MTU_ERR, DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_FRG_LEN_ERR_MSK)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRGCNT_LEN_ERR, DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_Q0_PITCNT_LEN_ERR) {
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_PITCNT_LEN_ERR, BIT(qno));
+ t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+ }
+
+ if (intr_status & DP_DL_INT_HPC_ENT_TYPE_ERR)
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_HPC_ENT_TYPE_ERR,
+ DPF_RX_QNO_DFT);
+
+ if (intr_status & DP_DL_INT_Q0_DONE) {
+ /* Mask RX done interrupt immediately after it occurs, do not clear
+ * the interrupt if the mask operation fails.
+ */
+ if (!t7xx_mask_dlq_intr(hw_info, qno))
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_DONE, BIT(qno));
+ else
+ intr_status &= ~DP_DL_INT_Q0_DONE;
+ }
+ } else {
+ if (intr_status & DP_DL_INT_Q1_PITCNT_LEN_ERR) {
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_PITCNT_LEN_ERR, BIT(qno));
+ t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+ }
+
+ if (intr_status & DP_DL_INT_Q1_DONE) {
+ if (!t7xx_mask_dlq_intr(hw_info, qno))
+ t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_DONE, BIT(qno));
+ else
+ intr_status &= ~DP_DL_INT_Q1_DONE;
+ }
+ }
+
+ intr_status |= DP_DL_INT_BATCNT_LEN_ERR;
+ /* Clear interrupt status */
+ iowrite32(intr_status, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+}
+
+/**
+ * t7xx_dpmaif_hw_get_intr_cnt() - Reads interrupt status and count from HW.
+ * @hw_info: Pointer to struct dpmaif_hw_info.
+ * @para: Pointer to struct dpmaif_hw_intr_st_para.
+ * @qno: Queue number.
+ *
+ * Reads RX/TX interrupt status from HW and clears UL/DL status as needed.
+ *
+ * Return: Interrupt count.
+ */
+int t7xx_dpmaif_hw_get_intr_cnt(struct dpmaif_hw_info *hw_info,
+ struct dpmaif_hw_intr_st_para *para, int qno)
+{
+ u32 rx_intr_status, tx_intr_status = 0;
+ u32 rx_intr_qdone, tx_intr_qdone = 0;
+
+ rx_intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+ rx_intr_qdone = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0);
+
+ /* TX interrupt status */
+ if (qno == DPF_RX_QNO_DFT) {
+		/* All ULQ and DLQ0 interrupts use the same source, so there is
+		 * no need to check the ULQ interrupts when a DLQ1 interrupt has
+		 * occurred.
+		 */
+ tx_intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ tx_intr_qdone = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+ }
+
+ t7xx_dpmaif_clr_ip_busy_sts(hw_info);
+
+ if (qno == DPF_RX_QNO_DFT) {
+ /* Do not schedule bottom half again or clear UL interrupt status when we
+ * have already masked it.
+ */
+ tx_intr_status &= ~tx_intr_qdone;
+ if (tx_intr_status)
+ t7xx_dpmaif_hw_check_tx_intr(hw_info, tx_intr_status, para);
+ }
+
+ if (rx_intr_status) {
+ if (qno == DPF_RX_QNO0) {
+ rx_intr_status &= DP_DL_Q0_STATUS_MASK;
+ if (rx_intr_qdone & DPMAIF_DL_INT_DLQ0_QDONE)
+ /* Do not schedule bottom half again or clear DL
+ * queue done interrupt status when we have already masked it.
+ */
+ rx_intr_status &= ~DP_DL_INT_Q0_DONE;
+ } else {
+ rx_intr_status &= DP_DL_Q1_STATUS_MASK;
+ if (rx_intr_qdone & DPMAIF_DL_INT_DLQ1_QDONE)
+ rx_intr_status &= ~DP_DL_INT_Q1_DONE;
+ }
+
+ if (rx_intr_status)
+ t7xx_dpmaif_hw_check_rx_intr(hw_info, rx_intr_status, para, qno);
+ }
+
+ return para->intr_cnt;
+}
+
+static int t7xx_dpmaif_sram_init(struct dpmaif_hw_info *hw_info)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+ value |= DPMAIF_MEM_CLR;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+
+ return ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AP_MEM_CLR,
+ value, !(value & DPMAIF_MEM_CLR), 0,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+}
+
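+/* Assert and then de-assert the DPMAIF AO and PD reset bits, waiting 2us
+ * after each step for the reset to settle.
+ */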
+static void t7xx_dpmaif_hw_reset(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_ASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_ASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_DEASSERT);
+ udelay(2);
+ iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_DEASSERT);
+ udelay(2);
+}
+
+static int t7xx_dpmaif_hw_config(struct dpmaif_hw_info *hw_info)
+{
+ u32 ap_port_mode;
+ int ret;
+
+ t7xx_dpmaif_hw_reset(hw_info);
+
+ ret = t7xx_dpmaif_sram_init(hw_info);
+ if (ret)
+ return ret;
+
+ ap_port_mode = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ ap_port_mode |= DPMAIF_PORT_MODE_PCIE;
+ iowrite32(ap_port_mode, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ iowrite32(DPMAIF_CG_EN, hw_info->pcie_base + DPMAIF_AP_CG_EN);
+ return 0;
+}
+
+static void t7xx_dpmaif_pcie_dpmaif_sign(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_PCIE_MODE_SET_VALUE, hw_info->pcie_base + DPMAIF_UL_RESERVE_AO_RW);
+}
+
+static void t7xx_dpmaif_dl_performance(struct dpmaif_hw_info *hw_info)
+{
+ u32 enable_bat_cache, enable_pit_burst;
+
+ enable_bat_cache = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ enable_bat_cache |= DPMAIF_DL_BAT_CACHE_PRI;
+ iowrite32(enable_bat_cache, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+ enable_pit_burst = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ enable_pit_burst |= DPMAIF_DL_BURST_PIT_EN;
+ iowrite32(enable_pit_burst, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+/* DPMAIF DL DLQ part HW setting */
+
+static void t7xx_dpmaif_hw_hpc_cntl_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = DPMAIF_HPC_DLQ_PATH_MODE | DPMAIF_HPC_ADD_MODE_DF << 2;
+ value |= DPMAIF_HASH_PRIME_DF << 4;
+ value |= DPMAIF_HPC_TOTAL_NUM << 8;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_HPC_CNTL);
+}
+
+static void t7xx_dpmaif_hw_agg_cfg_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = DPMAIF_AGG_MAX_LEN_DF | DPMAIF_AGG_TBL_ENT_NUM_DF << 16;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_DLQ_AGG_CFG);
+}
+
+static void t7xx_dpmaif_hw_hash_bit_choose_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_DLQ_HASH_BIT_CHOOSE_DF,
+ hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_INIT_CON5);
+}
+
+static void t7xx_dpmaif_hw_mid_pit_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_MID_TIMEOUT_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT0);
+}
+
+static void t7xx_dpmaif_hw_dlq_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value, i;
+
+ /* Each register holds two DLQ threshold timeout values */
+ for (i = 0; i < DPMAIF_HPC_MAX_TOTAL_NUM / 2; i++) {
+ value = FIELD_PREP(DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS, DPMAIF_DLQ_TIMEOUT_THRES_DF);
+ value |= FIELD_PREP(DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK,
+ DPMAIF_DLQ_TIMEOUT_THRES_DF);
+ iowrite32(value,
+ hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT1 + sizeof(u32) * i);
+ }
+}
+
+static void t7xx_dpmaif_hw_dlq_start_prs_thres_set(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_DLQ_PRS_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TRIG_THRES);
+}
+
+static void t7xx_dpmaif_dl_dlq_hpc_hw_init(struct dpmaif_hw_info *hw_info)
+{
+ t7xx_dpmaif_hw_hpc_cntl_set(hw_info);
+ t7xx_dpmaif_hw_agg_cfg_set(hw_info);
+ t7xx_dpmaif_hw_hash_bit_choose_set(hw_info);
+ t7xx_dpmaif_hw_mid_pit_timeout_thres_set(hw_info);
+ t7xx_dpmaif_hw_dlq_timeout_thres_set(hw_info);
+ t7xx_dpmaif_hw_dlq_start_prs_thres_set(hw_info);
+}
+
+static int t7xx_dpmaif_dl_bat_init_done(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool frg_en)
+{
+ u32 value, dl_bat_init = 0;
+ int ret;
+
+ if (frg_en)
+ dl_bat_init = DPMAIF_DL_BAT_FRG_INIT;
+
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_ALLSET;
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret) {
+ dev_err(hw_info->dev, "Data plane modem DL BAT is not ready\n");
+ return ret;
+ }
+
+ iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (ret)
+ dev_err(hw_info->dev, "Data plane modem DL BAT initialization failed\n");
+
+ return ret;
+}
+
+static void t7xx_dpmaif_dl_set_bat_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON0);
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON3);
+}
+
+static void t7xx_dpmaif_dl_set_bat_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ value &= ~DPMAIF_BAT_SIZE_MSK;
+ value |= size & DPMAIF_BAT_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void t7xx_dpmaif_dl_bat_en(struct dpmaif_hw_info *hw_info, unsigned char q_num, bool enable)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+ if (enable)
+ value |= DPMAIF_BAT_EN_MSK;
+ else
+ value &= ~DPMAIF_BAT_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bid_maxcnt(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+ value &= ~DPMAIF_BAT_BID_MAXCNT_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_BID_MAXCNT_MSK, DPMAIF_HW_PKT_BIDCNT);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static void t7xx_dpmaif_dl_set_ao_mtu(struct dpmaif_hw_info *hw_info)
+{
+ iowrite32(DPMAIF_HW_MTU_SIZE, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON1);
+}
+
+static void t7xx_dpmaif_dl_set_ao_pit_chknum(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_PIT_CHK_NUM_MSK;
+ value |= FIELD_PREP(DPMAIF_PIT_CHK_NUM_MSK, DPMAIF_HW_CHK_PIT_NUM);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_ao_remain_minsz(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+ value &= ~DPMAIF_BAT_REMAIN_MINSZ_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_REMAIN_MINSZ_MSK,
+ DPMAIF_HW_BAT_REMAIN / DPMAIF_BAT_REMAIN_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_bufsz(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_BAT_BUF_SZ_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_BUF_SZ_MSK,
+ DPMAIF_HW_BAT_PKTBUF / DPMAIF_BAT_BUFFER_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_rsv_length(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+ value &= ~DPMAIF_BAT_RSV_LEN_MSK;
+ value |= DPMAIF_HW_BAT_RSVLEN & DPMAIF_BAT_RSV_LEN_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_pkt_alignment(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value &= ~DPMAIF_PKT_ALIGN_MSK;
+ value |= DPMAIF_PKT_ALIGN_EN;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_pkt_checksum(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value |= DPMAIF_DL_PKT_CHECKSUM_EN;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_frg_check_thres(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+ value &= ~DPMAIF_FRG_CHECK_THRES_MSK;
+ value |= (DPMAIF_HW_CHK_FRG_NUM & DPMAIF_FRG_CHECK_THRES_MSK);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_frg_bufsz(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+ value &= ~DPMAIF_FRG_BUF_SZ_MSK;
+ value |= FIELD_PREP(DPMAIF_FRG_BUF_SZ_MSK,
+ DPMAIF_HW_FRG_PKTBUF / DPMAIF_FRG_BUFFER_SZ_BASE);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_frg_ao_en(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ bool enable)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+
+ if (enable)
+ value |= DPMAIF_FRG_EN_MSK;
+ else
+ value &= ~DPMAIF_FRG_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_check_thres(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+ value &= ~DPMAIF_BAT_CHECK_THRES_MSK;
+ value |= FIELD_PREP(DPMAIF_BAT_CHECK_THRES_MSK, DPMAIF_HW_CHK_BAT_NUM);
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_pit_seqnum(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+ value &= ~DPMAIF_DL_PIT_SEQ_MSK;
+ value |= DPMAIF_DL_PIT_SEQ_VALUE & DPMAIF_DL_PIT_SEQ_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+}
+
+static void t7xx_dpmaif_dl_set_dlq_pit_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON0);
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON4);
+}
+
+static void t7xx_dpmaif_dl_set_dlq_pit_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+ value &= ~DPMAIF_PIT_SIZE_MSK;
+ value |= size & DPMAIF_PIT_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON2);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON5);
+ iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON6);
+}
+
+static void t7xx_dpmaif_dl_dlq_pit_en(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+ value |= DPMAIF_DLQPIT_EN_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+}
+
+static void t7xx_dpmaif_dl_dlq_pit_init_done(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int pit_idx)
+{
+ unsigned int dl_pit_init;
+ int timeout;
+ u32 value;
+
+ dl_pit_init = DPMAIF_DL_PIT_INIT_ALLSET;
+ dl_pit_init |= (pit_idx << DPMAIF_DLQPIT_CHAN_OFS);
+ dl_pit_init |= DPMAIF_DL_PIT_INIT_EN;
+
+ timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+ value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+ DPMAIF_CHECK_DELAY_US,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (timeout) {
+ dev_err(hw_info->dev, "Data plane modem DL PIT is not ready\n");
+ return;
+ }
+
+ iowrite32(dl_pit_init, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT);
+ timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+ value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+ DPMAIF_CHECK_DELAY_US,
+ DPMAIF_CHECK_INIT_TIMEOUT_US);
+ if (timeout)
+ dev_err(hw_info->dev, "Data plane modem DL PIT initialization failed\n");
+}
+
+static void t7xx_dpmaif_config_dlq_pit_hw(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ struct dpmaif_dl *dl_que)
+{
+ unsigned int pit_idx = q_num;
+
+ t7xx_dpmaif_dl_set_dlq_pit_base_addr(hw_info, q_num, dl_que->pit_base);
+ t7xx_dpmaif_dl_set_dlq_pit_size(hw_info, q_num, dl_que->pit_size_cnt);
+ t7xx_dpmaif_dl_dlq_pit_en(hw_info, q_num);
+ t7xx_dpmaif_dl_dlq_pit_init_done(hw_info, q_num, pit_idx);
+}
+
+static void t7xx_dpmaif_config_all_dlq_hw(struct dpmaif_hw_info *hw_info)
+{
+ int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+ t7xx_dpmaif_config_dlq_pit_hw(hw_info, i, &hw_info->dl_que[i]);
+}
+
+static void t7xx_dpmaif_dl_all_q_en(struct dpmaif_hw_info *hw_info, bool enable)
+{
+ u32 dl_bat_init, value;
+ int timeout;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+ if (enable)
+ value |= DPMAIF_BAT_EN_MSK;
+ else
+ value &= ~DPMAIF_BAT_EN_MSK;
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+ dl_bat_init = DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT;
+ dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+ timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (timeout)
+ dev_err(hw_info->dev, "Timeout updating BAT setting to HW\n");
+
+ iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+ timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+ value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (timeout)
+ dev_err(hw_info->dev, "Data plane modem DL BAT is not ready\n");
+}
+
+static int t7xx_dpmaif_config_dlq_hw(struct dpmaif_hw_info *hw_info)
+{
+ struct dpmaif_dl *dl_que;
+ unsigned int queue = 0; /* All queues share one BAT/frag BAT table */
+ int ret;
+
+ t7xx_dpmaif_dl_dlq_hpc_hw_init(hw_info);
+
+ dl_que = &hw_info->dl_que[queue];
+ if (!dl_que->que_started)
+ return -EBUSY;
+
+ t7xx_dpmaif_dl_set_ao_remain_minsz(hw_info);
+ t7xx_dpmaif_dl_set_ao_bat_bufsz(hw_info);
+ t7xx_dpmaif_dl_set_ao_frg_bufsz(hw_info);
+ t7xx_dpmaif_dl_set_ao_bat_rsv_length(hw_info);
+ t7xx_dpmaif_dl_set_ao_bid_maxcnt(hw_info);
+ t7xx_dpmaif_dl_set_pkt_alignment(hw_info);
+ t7xx_dpmaif_dl_set_pit_seqnum(hw_info);
+ t7xx_dpmaif_dl_set_ao_mtu(hw_info);
+ t7xx_dpmaif_dl_set_ao_pit_chknum(hw_info);
+ t7xx_dpmaif_dl_set_ao_bat_check_thres(hw_info);
+ t7xx_dpmaif_dl_set_ao_frg_check_thres(hw_info);
+ t7xx_dpmaif_dl_frg_ao_en(hw_info, queue, true);
+
+ t7xx_dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->frg_base);
+ t7xx_dpmaif_dl_set_bat_size(hw_info, queue, dl_que->frg_size_cnt);
+ t7xx_dpmaif_dl_bat_en(hw_info, queue, true);
+
+ ret = t7xx_dpmaif_dl_bat_init_done(hw_info, queue, true);
+ if (ret)
+ return ret;
+
+ t7xx_dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->bat_base);
+ t7xx_dpmaif_dl_set_bat_size(hw_info, queue, dl_que->bat_size_cnt);
+ t7xx_dpmaif_dl_bat_en(hw_info, queue, false);
+
+ ret = t7xx_dpmaif_dl_bat_init_done(hw_info, queue, false);
+ if (ret)
+ return ret;
+
+	/* Init PIT (two PIT tables) */
+ t7xx_dpmaif_config_all_dlq_hw(hw_info);
+ t7xx_dpmaif_dl_all_q_en(hw_info, true);
+ t7xx_dpmaif_dl_set_pkt_checksum(hw_info);
+ return 0;
+}
+
+static void t7xx_dpmaif_ul_update_drb_size(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, unsigned int size)
+{
+ unsigned int value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+ value &= ~DPMAIF_DRB_SIZE_MSK;
+ value |= size & DPMAIF_DRB_SIZE_MSK;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+}
+
+static void t7xx_dpmaif_ul_update_drb_base_addr(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, dma_addr_t addr)
+{
+ iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_ULQSAR_n(q_num));
+ iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_UL_DRB_ADDRH_n(q_num));
+}
+
+static void t7xx_dpmaif_ul_rdy_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool ready)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+ if (ready)
+ value |= BIT(q_num);
+ else
+ value &= ~BIT(q_num);
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void t7xx_dpmaif_ul_arb_en(struct dpmaif_hw_info *hw_info,
+ unsigned char q_num, bool enable)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+ if (enable)
+ value |= BIT(q_num + 8);
+ else
+ value &= ~BIT(q_num + 8);
+
+ iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void t7xx_dpmaif_config_ulq_hw(struct dpmaif_hw_info *hw_info)
+{
+ struct dpmaif_ul *ul_que;
+ int i;
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ ul_que = &hw_info->ul_que[i];
+ if (ul_que->que_started) {
+ t7xx_dpmaif_ul_update_drb_size(hw_info, i, ul_que->drb_size_cnt *
+ DPMAIF_UL_DRB_SIZE_WORD);
+ t7xx_dpmaif_ul_update_drb_base_addr(hw_info, i, ul_que->drb_base);
+ t7xx_dpmaif_ul_rdy_en(hw_info, i, true);
+ t7xx_dpmaif_ul_arb_en(hw_info, i, true);
+ } else {
+ t7xx_dpmaif_ul_arb_en(hw_info, i, false);
+ }
+ }
+}
+
+static int t7xx_dpmaif_hw_init_done(struct dpmaif_hw_info *hw_info)
+{
+ u32 ap_cfg;
+ int ret;
+
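+	/* Trigger the SRAM sync and wait for the HW to clear the flag */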
+ ap_cfg = ioread32(hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+ ap_cfg |= DPMAIF_SRAM_SYNC;
+ iowrite32(ap_cfg, hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG,
+ ap_cfg, !(ap_cfg & DPMAIF_SRAM_SYNC), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret)
+ return ret;
+
+ iowrite32(DPMAIF_UL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_UL_INIT_SET);
+ iowrite32(DPMAIF_DL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_DL_INIT_SET);
+ return 0;
+}
+
+static bool t7xx_dpmaif_dl_idle_check(struct dpmaif_hw_info *hw_info)
+{
+ u32 dpmaif_dl_is_busy = ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY);
+
+ return !(dpmaif_dl_is_busy & DPMAIF_DL_IDLE_STS);
+}
+
+static void t7xx_dpmaif_ul_all_q_en(struct dpmaif_hw_info *hw_info, bool enable)
+{
+ u32 ul_arb_en = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+ if (enable)
+ ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN;
+ else
+ ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN;
+
+ iowrite32(ul_arb_en, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static bool t7xx_dpmaif_ul_idle_check(struct dpmaif_hw_info *hw_info)
+{
+ u32 dpmaif_ul_is_busy = ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY);
+
+ return !(dpmaif_ul_is_busy & DPMAIF_UL_IDLE_STS);
+}
+
+int t7xx_dpmaif_ul_update_hw_drb_cnt(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ unsigned int drb_entry_cnt)
+{
+ u32 ul_update, value;
+ int ret;
+
+ ul_update = drb_entry_cnt & DPMAIF_UL_ADD_COUNT_MASK;
+ ul_update |= DPMAIF_UL_ADD_UPDATE;
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+ value, !(value & DPMAIF_UL_ADD_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret) {
+ dev_err(hw_info->dev, "UL add is not ready\n");
+ return ret;
+ }
+
+ iowrite32(ul_update, hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num));
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+ value, !(value & DPMAIF_UL_ADD_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret) {
+ dev_err(hw_info->dev, "Timeout updating UL add\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+unsigned int t7xx_dpmaif_ul_get_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ unsigned int value = ioread32(hw_info->pcie_base + DPMAIF_ULQ_STA0_n(q_num));
+
+ return FIELD_GET(DPMAIF_UL_DRB_RIDX_MSK, value) / DPMAIF_UL_DRB_SIZE_WORD;
+}
+
+int t7xx_dpmaif_dlq_add_pit_remain_cnt(struct dpmaif_hw_info *hw_info, unsigned int dlq_pit_idx,
+ unsigned int pit_remain_cnt)
+{
+ u32 dl_update, value;
+ int ret;
+
+ dl_update = pit_remain_cnt & DPMAIF_PIT_REM_CNT_MSK;
+ dl_update |= DPMAIF_DL_ADD_UPDATE | (dlq_pit_idx << DPMAIF_ADD_DLQ_PIT_CHAN_OFS);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret) {
+		dev_err(hw_info->dev, "Data plane modem is not ready to add DLQ\n");
+ return ret;
+ }
+
+ iowrite32(dl_update, hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD);
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ if (ret) {
+		dev_err(hw_info->dev, "Data plane modem add DLQ failed\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_dlq_pit_get_wr_idx(struct dpmaif_hw_info *hw_info,
+ unsigned int dlq_pit_idx)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_DLQ_WR_IDX +
+ dlq_pit_idx * DLQ_PIT_IDX_SIZE);
+ return value & DPMAIF_DL_RD_WR_IDX_MSK;
+}
+
+static bool t7xx_dl_add_timedout(struct dpmaif_hw_info *hw_info)
+{
+ u32 value;
+ int ret;
+
+ ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_ADD,
+ value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+ DPMAIF_CHECK_TIMEOUT_US);
+ return !!ret;
+}
+
+int t7xx_dpmaif_dl_snd_hw_bat_cnt(struct dpmaif_hw_info *hw_info, unsigned int bat_entry_cnt)
+{
+ unsigned int value;
+
+ if (t7xx_dl_add_timedout(hw_info)) {
+ dev_err(hw_info->dev, "DL add BAT not ready\n");
+ return -EBUSY;
+ }
+
+ value = bat_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+ value |= DPMAIF_DL_ADD_UPDATE;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+ if (t7xx_dl_add_timedout(hw_info)) {
+ dev_err(hw_info->dev, "DL add BAT timeout\n");
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_get_bat_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_RD_IDX);
+ return value & DPMAIF_DL_RD_WR_IDX_MSK;
+}
+
+unsigned int t7xx_dpmaif_dl_get_bat_wr_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_WR_IDX);
+ return value & DPMAIF_DL_RD_WR_IDX_MSK;
+}
+
+int t7xx_dpmaif_dl_snd_hw_frg_cnt(struct dpmaif_hw_info *hw_info, unsigned int frg_entry_cnt)
+{
+ unsigned int value;
+
+ if (t7xx_dl_add_timedout(hw_info)) {
+ dev_err(hw_info->dev, "Data plane modem is not ready to add frag DLQ\n");
+ return -EBUSY;
+ }
+
+ value = frg_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+ value |= DPMAIF_DL_FRG_ADD_UPDATE | DPMAIF_DL_ADD_UPDATE;
+ iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+ if (t7xx_dl_add_timedout(hw_info)) {
+		dev_err(hw_info->dev, "Data plane modem add frag DLQ failed\n");
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_get_frg_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+ u32 value;
+
+ value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_FRGBAT_RD_IDX);
+ return value & DPMAIF_DL_RD_WR_IDX_MSK;
+}
+
+static void t7xx_dpmaif_set_queue_property(struct dpmaif_hw_info *hw_info,
+ struct dpmaif_hw_params *init_para)
+{
+ struct dpmaif_dl *dl_que;
+ struct dpmaif_ul *ul_que;
+ int i;
+
+ for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+ dl_que = &hw_info->dl_que[i];
+ dl_que->bat_base = init_para->pkt_bat_base_addr[i];
+ dl_que->bat_size_cnt = init_para->pkt_bat_size_cnt[i];
+ dl_que->pit_base = init_para->pit_base_addr[i];
+ dl_que->pit_size_cnt = init_para->pit_size_cnt[i];
+ dl_que->frg_base = init_para->frg_bat_base_addr[i];
+ dl_que->frg_size_cnt = init_para->frg_bat_size_cnt[i];
+ dl_que->que_started = true;
+ }
+
+ for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+ ul_que = &hw_info->ul_que[i];
+ ul_que->drb_base = init_para->drb_base_addr[i];
+ ul_que->drb_size_cnt = init_para->drb_size_cnt[i];
+ ul_que->que_started = true;
+ }
+}
+
+/**
+ * t7xx_dpmaif_hw_stop_all_txq() - Stop all TX queues.
+ * @hw_info: Pointer to struct dpmaif_hw_info.
+ *
+ * Disable the HW UL queues and poll the busy UL queues until they go idle,
+ * attempting up to 1000000 times.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ETIMEDOUT - Timed out waiting for busy queues to go idle.
+ */
+int t7xx_dpmaif_hw_stop_all_txq(struct dpmaif_hw_info *hw_info)
+{
+ int count = 0;
+
+ t7xx_dpmaif_ul_all_q_en(hw_info, false);
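+	/* t7xx_dpmaif_ul_idle_check() returns true while the UL DMA is busy */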
+ while (t7xx_dpmaif_ul_idle_check(hw_info)) {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(hw_info->dev, "Failed to stop TX, status: 0x%x\n",
+ ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY));
+ return -ETIMEDOUT;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * t7xx_dpmaif_hw_stop_all_rxq() - Stop all RX queues.
+ * @hw_info: Pointer to struct dpmaif_hw_info.
+ *
+ * Disable the HW DL queues and poll the busy DL queues until they go idle,
+ * attempting up to 1000000 times.
+ * Then poll until the HW PIT write index equals the read index, with the
+ * same attempt count.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ETIMEDOUT - Timed out waiting for busy queues or PIT sync.
+ */
+int t7xx_dpmaif_hw_stop_all_rxq(struct dpmaif_hw_info *hw_info)
+{
+ unsigned int wr_idx, rd_idx;
+ int count = 0;
+
+ t7xx_dpmaif_dl_all_q_en(hw_info, false);
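+	/* t7xx_dpmaif_dl_idle_check() returns true while the DL DMA is busy */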
+ while (t7xx_dpmaif_dl_idle_check(hw_info)) {
+ if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+ dev_err(hw_info->dev, "Failed to stop RX, status: 0x%x\n",
+ ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY));
+ return -ETIMEDOUT;
+ }
+ }
+
+ /* Check middle PIT sync done */
+ count = 0;
+ do {
+ wr_idx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_WR_IDX);
+ wr_idx &= DPMAIF_DL_RD_WR_IDX_MSK;
+ rd_idx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_RD_IDX);
+ rd_idx &= DPMAIF_DL_RD_WR_IDX_MSK;
+
+ if (wr_idx == rd_idx)
+ return 0;
+ } while (++count < DPMAIF_MAX_CHECK_COUNT);
+
+ dev_err(hw_info->dev, "Check middle PIT sync fail\n");
+ return -ETIMEDOUT;
+}
+
+void t7xx_dpmaif_start_hw(struct dpmaif_hw_info *hw_info)
+{
+ t7xx_dpmaif_ul_all_q_en(hw_info, true);
+ t7xx_dpmaif_dl_all_q_en(hw_info, true);
+}
+
+/**
+ * t7xx_dpmaif_hw_init() - Initialize HW data path API.
+ * @hw_info: Pointer to struct dpmaif_hw_info.
+ * @init_param: Pointer to struct dpmaif_hw_params.
+ *
+ * Configures the port mode and clock settings, initializes the HW interrupts,
+ * and sets up the HW queues.
+ *
+ * Return:
+ * * 0 - Success.
+ * * -ERROR - Error code from a failed sub-initialization.
+ */
+int t7xx_dpmaif_hw_init(struct dpmaif_hw_info *hw_info, struct dpmaif_hw_params *init_param)
+{
+ int ret;
+
+ ret = t7xx_dpmaif_hw_config(hw_info);
+ if (ret) {
+ dev_err(hw_info->dev, "DPMAIF HW config failed\n");
+ return ret;
+ }
+
+ ret = t7xx_dpmaif_init_intr(hw_info);
+ if (ret) {
+ dev_err(hw_info->dev, "DPMAIF HW interrupts init failed\n");
+ return ret;
+ }
+
+ t7xx_dpmaif_set_queue_property(hw_info, init_param);
+ t7xx_dpmaif_pcie_dpmaif_sign(hw_info);
+ t7xx_dpmaif_dl_performance(hw_info);
+
+ ret = t7xx_dpmaif_config_dlq_hw(hw_info);
+ if (ret) {
+ dev_err(hw_info->dev, "DPMAIF HW dlq config failed\n");
+ return ret;
+ }
+
+ t7xx_dpmaif_config_ulq_hw(hw_info);
+
+ ret = t7xx_dpmaif_hw_init_done(hw_info);
+ if (ret)
+ dev_err(hw_info->dev, "DPMAIF HW queue init failed\n");
+
+ return ret;
+}
+
+bool t7xx_dpmaif_ul_clr_done(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+ u32 intr_status;
+
+ intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ intr_status &= BIT(DP_UL_INT_DONE_OFFSET + qno);
+ if (intr_status) {
+ iowrite32(intr_status, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+ return true;
+ }
+
+ return false;
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
new file mode 100644
index 000000000000..613551aca40b
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez <[email protected]>
+ *
+ * Contributors:
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#ifndef __T7XX_DPMAIF_H__
+#define __T7XX_DPMAIF_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define DPMAIF_DL_PIT_SEQ_VALUE 251
+#define DPMAIF_UL_DRB_SIZE_WORD 4
+
+#define DPMAIF_MAX_CHECK_COUNT 1000000
+#define DPMAIF_CHECK_TIMEOUT_US 10000
+#define DPMAIF_CHECK_INIT_TIMEOUT_US 100000
+#define DPMAIF_CHECK_DELAY_US 10
+
+#define DPMAIF_RXQ_NUM 2
+#define DPMAIF_TXQ_NUM 5
+
+struct dpmaif_isr_en_mask {
+ unsigned int ap_ul_l2intr_en_msk;
+ unsigned int ap_dl_l2intr_en_msk;
+ unsigned int ap_udl_ip_busy_en_msk;
+ unsigned int ap_dl_l2intr_err_en_msk;
+};
+
+struct dpmaif_ul {
+ bool que_started;
+ unsigned char reserved[3];
+ dma_addr_t drb_base;
+ unsigned int drb_size_cnt;
+};
+
+struct dpmaif_dl {
+ bool que_started;
+ unsigned char reserved[3];
+ dma_addr_t pit_base;
+ unsigned int pit_size_cnt;
+ dma_addr_t bat_base;
+ unsigned int bat_size_cnt;
+ dma_addr_t frg_base;
+ unsigned int frg_size_cnt;
+ unsigned int pit_seq;
+};
+
+struct dpmaif_hw_info {
+ struct device *dev;
+ void __iomem *pcie_base;
+ struct dpmaif_dl dl_que[DPMAIF_RXQ_NUM];
+ struct dpmaif_ul ul_que[DPMAIF_TXQ_NUM];
+ struct dpmaif_isr_en_mask isr_en_mask;
+};
+
+/* DPMAIF HW Initialization parameter structure */
+struct dpmaif_hw_params {
+ /* UL part */
+ dma_addr_t drb_base_addr[DPMAIF_TXQ_NUM];
+ unsigned int drb_size_cnt[DPMAIF_TXQ_NUM];
+ /* DL part */
+ dma_addr_t pkt_bat_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int pkt_bat_size_cnt[DPMAIF_RXQ_NUM];
+ dma_addr_t frg_bat_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int frg_bat_size_cnt[DPMAIF_RXQ_NUM];
+ dma_addr_t pit_base_addr[DPMAIF_RXQ_NUM];
+ unsigned int pit_size_cnt[DPMAIF_RXQ_NUM];
+};
+
+enum dpmaif_hw_intr_type {
+ DPF_INTR_INVALID_MIN,
+ DPF_INTR_UL_DONE,
+ DPF_INTR_UL_DRB_EMPTY,
+ DPF_INTR_UL_MD_NOTREADY,
+ DPF_INTR_UL_MD_PWR_NOTREADY,
+ DPF_INTR_UL_LEN_ERR,
+ DPF_INTR_DL_DONE,
+ DPF_INTR_DL_SKB_LEN_ERR,
+ DPF_INTR_DL_BATCNT_LEN_ERR,
+ DPF_INTR_DL_PITCNT_LEN_ERR,
+ DPF_INTR_DL_PKT_EMPTY_SET,
+ DPF_INTR_DL_FRG_EMPTY_SET,
+ DPF_INTR_DL_MTU_ERR,
+ DPF_INTR_DL_FRGCNT_LEN_ERR,
+ DPF_INTR_DL_Q0_PITCNT_LEN_ERR,
+ DPF_INTR_DL_Q1_PITCNT_LEN_ERR,
+ DPF_INTR_DL_HPC_ENT_TYPE_ERR,
+ DPF_INTR_DL_Q0_DONE,
+ DPF_INTR_DL_Q1_DONE,
+ DPF_INTR_INVALID_MAX
+};
+
+#define DPF_RX_QNO0 0
+#define DPF_RX_QNO1 1
+#define DPF_RX_QNO_DFT DPF_RX_QNO0
+
+struct dpmaif_hw_intr_st_para {
+ unsigned int intr_cnt;
+ enum dpmaif_hw_intr_type intr_types[DPF_INTR_INVALID_MAX - 1];
+ unsigned int intr_queues[DPF_INTR_INVALID_MAX - 1];
+};
+
+#define DPMAIF_HW_BAT_REMAIN 64
+#define DPMAIF_HW_BAT_PKTBUF (128 * 28)
+#define DPMAIF_HW_FRG_PKTBUF 128
+#define DPMAIF_HW_BAT_RSVLEN 64
+#define DPMAIF_HW_PKT_BIDCNT 1
+#define DPMAIF_HW_MTU_SIZE (3 * 1024 + 8)
+#define DPMAIF_HW_CHK_BAT_NUM 62
+#define DPMAIF_HW_CHK_FRG_NUM 3
+#define DPMAIF_HW_CHK_PIT_NUM (2 * DPMAIF_HW_CHK_BAT_NUM)
+
+#define DP_UL_INT_DONE_OFFSET 0
+#define DP_UL_INT_QDONE_MSK GENMASK(4, 0)
+#define DP_UL_INT_EMPTY_MSK GENMASK(9, 5)
+#define DP_UL_INT_MD_NOTREADY_MSK GENMASK(14, 10)
+#define DP_UL_INT_MD_PWR_NOTREADY_MSK GENMASK(19, 15)
+#define DP_UL_INT_ERR_MSK GENMASK(24, 20)
+
+#define DP_DL_INT_QDONE_MSK BIT(0)
+#define DP_DL_INT_SKB_LEN_ERR BIT(1)
+#define DP_DL_INT_BATCNT_LEN_ERR BIT(2)
+#define DP_DL_INT_PITCNT_LEN_ERR BIT(3)
+#define DP_DL_INT_PKT_EMPTY_MSK BIT(4)
+#define DP_DL_INT_FRG_EMPTY_MSK BIT(5)
+#define DP_DL_INT_MTU_ERR_MSK BIT(6)
+#define DP_DL_INT_FRG_LEN_ERR_MSK BIT(7)
+#define DP_DL_INT_Q0_PITCNT_LEN_ERR BIT(8)
+#define DP_DL_INT_Q1_PITCNT_LEN_ERR BIT(9)
+#define DP_DL_INT_HPC_ENT_TYPE_ERR BIT(10)
+#define DP_DL_INT_Q0_DONE BIT(13)
+#define DP_DL_INT_Q1_DONE BIT(14)
+
+#define DP_DL_Q0_STATUS_MASK (DP_DL_INT_Q0_PITCNT_LEN_ERR | DP_DL_INT_Q0_DONE)
+#define DP_DL_Q1_STATUS_MASK (DP_DL_INT_Q1_PITCNT_LEN_ERR | DP_DL_INT_Q1_DONE)
+
+int t7xx_dpmaif_hw_init(struct dpmaif_hw_info *hw_info, struct dpmaif_hw_params *init_param);
+int t7xx_dpmaif_hw_stop_all_txq(struct dpmaif_hw_info *hw_info);
+int t7xx_dpmaif_hw_stop_all_rxq(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_start_hw(struct dpmaif_hw_info *hw_info);
+int t7xx_dpmaif_hw_get_intr_cnt(struct dpmaif_hw_info *hw_info,
+ struct dpmaif_hw_intr_st_para *para, int qno);
+void t7xx_dpmaif_unmask_ulq_intr(struct dpmaif_hw_info *hw_info, unsigned int q_num);
+int t7xx_dpmaif_ul_update_hw_drb_cnt(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+ unsigned int drb_entry_cnt);
+int t7xx_dpmaif_dl_snd_hw_bat_cnt(struct dpmaif_hw_info *hw_info, unsigned int bat_entry_cnt);
+int t7xx_dpmaif_dl_snd_hw_frg_cnt(struct dpmaif_hw_info *hw_info, unsigned int frg_entry_cnt);
+int t7xx_dpmaif_dlq_add_pit_remain_cnt(struct dpmaif_hw_info *hw_info, unsigned int dlq_pit_idx,
+ unsigned int pit_remain_cnt);
+void t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+ unsigned char qno);
+void t7xx_dpmaif_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno);
+bool t7xx_dpmaif_ul_clr_done(struct dpmaif_hw_info *hw_info, unsigned char qno);
+void t7xx_dpmaif_ul_clr_all_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_clr_all_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info);
+unsigned int t7xx_dpmaif_ul_get_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_get_bat_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_get_bat_wr_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_get_frg_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_dlq_pit_get_wr_idx(struct dpmaif_hw_info *hw_info,
+ unsigned int dlq_pit_idx);
+
+#endif /* __T7XX_DPMAIF_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_reg.h b/drivers/net/wwan/t7xx/t7xx_reg.h
index 969bde7078ed..c683c9c2f4de 100644
--- a/drivers/net/wwan/t7xx/t7xx_reg.h
+++ b/drivers/net/wwan/t7xx/t7xx_reg.h
@@ -136,4 +136,217 @@ enum t7xx_int {
CLDMA3_INT,
};

+/* DPMA definitions */
+
+#define DPMAIF_PD_BASE 0x1022d000
+#define BASE_DPMAIF_UL DPMAIF_PD_BASE
+#define BASE_DPMAIF_DL (DPMAIF_PD_BASE + 0x100)
+#define BASE_DPMAIF_AP_MISC (DPMAIF_PD_BASE + 0x400)
+#define BASE_DPMAIF_MMW_HPC (DPMAIF_PD_BASE + 0x600)
+#define BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX (DPMAIF_PD_BASE + 0x900)
+#define BASE_DPMAIF_PD_SRAM_DL (DPMAIF_PD_BASE + 0xc00)
+#define BASE_DPMAIF_PD_SRAM_UL (DPMAIF_PD_BASE + 0xd00)
+
+#define DPMAIF_AO_BASE 0x10014000
+#define BASE_DPMAIF_AO_UL DPMAIF_AO_BASE
+#define BASE_DPMAIF_AO_DL (DPMAIF_AO_BASE + 0x400)
+
+#define DPMAIF_UL_ADD_DESC (BASE_DPMAIF_UL + 0x00)
+#define DPMAIF_UL_CHK_BUSY (BASE_DPMAIF_UL + 0x88)
+#define DPMAIF_UL_RESERVE_AO_RW (BASE_DPMAIF_UL + 0xac)
+#define DPMAIF_UL_ADD_DESC_CH0 (BASE_DPMAIF_UL + 0xb0)
+
+#define DPMAIF_DL_BAT_INIT (BASE_DPMAIF_DL + 0x00)
+#define DPMAIF_DL_BAT_ADD (BASE_DPMAIF_DL + 0x04)
+#define DPMAIF_DL_BAT_INIT_CON0 (BASE_DPMAIF_DL + 0x08)
+#define DPMAIF_DL_BAT_INIT_CON1 (BASE_DPMAIF_DL + 0x0c)
+#define DPMAIF_DL_BAT_INIT_CON2 (BASE_DPMAIF_DL + 0x10)
+#define DPMAIF_DL_BAT_INIT_CON3 (BASE_DPMAIF_DL + 0x50)
+#define DPMAIF_DL_CHK_BUSY (BASE_DPMAIF_DL + 0xb4)
+
+#define DPMAIF_AP_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x00)
+#define DPMAIF_AP_APDL_L2TISAR0 (BASE_DPMAIF_AP_MISC + 0x50)
+#define DPMAIF_AP_IP_BUSY (BASE_DPMAIF_AP_MISC + 0x60)
+#define DPMAIF_AP_CG_EN (BASE_DPMAIF_AP_MISC + 0x68)
+#define DPMAIF_AP_OVERWRITE_CFG (BASE_DPMAIF_AP_MISC + 0x90)
+#define DPMAIF_AP_MEM_CLR (BASE_DPMAIF_AP_MISC + 0x94)
+#define DPMAIF_AP_ALL_L2TISAR0_MASK GENMASK(31, 0)
+#define DPMAIF_AP_APDL_ALL_L2TISAR0_MASK GENMASK(31, 0)
+#define DPMAIF_AP_IP_BUSY_MASK GENMASK(31, 0)
+
+#define DPMAIF_AO_UL_INIT_SET (BASE_DPMAIF_AO_UL + 0x0)
+#define DPMAIF_AO_UL_CHNL_ARB0 (BASE_DPMAIF_AO_UL + 0x1c)
+#define DPMAIF_AO_UL_AP_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x80)
+#define DPMAIF_AO_UL_AP_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x84)
+#define DPMAIF_AO_UL_AP_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x88)
+#define DPMAIF_AO_UL_AP_L1TIMR0 (BASE_DPMAIF_AO_UL + 0x8c)
+#define DPMAIF_AO_UL_APDL_L2TIMR0 (BASE_DPMAIF_AO_UL + 0x90)
+#define DPMAIF_AO_UL_APDL_L2TIMCR0 (BASE_DPMAIF_AO_UL + 0x94)
+#define DPMAIF_AO_UL_APDL_L2TIMSR0 (BASE_DPMAIF_AO_UL + 0x98)
+#define DPMAIF_AO_AP_DLUL_IP_BUSY_MASK (BASE_DPMAIF_AO_UL + 0x9c)
+
+#define DPMAIF_AO_UL_CHNL0_CON0 (BASE_DPMAIF_PD_SRAM_UL + 0x10)
+#define DPMAIF_AO_UL_CHNL0_CON1 (BASE_DPMAIF_PD_SRAM_UL + 0x14)
+#define DPMAIF_AO_UL_CHNL0_CON2 (BASE_DPMAIF_PD_SRAM_UL + 0x18)
+#define DPMAIF_AO_UL_CH0_STA (BASE_DPMAIF_PD_SRAM_UL + 0x70)
+
+#define DPMAIF_AO_DL_INIT_SET (BASE_DPMAIF_AO_DL + 0x00)
+#define DPMAIF_AO_DL_IRQ_MASK (BASE_DPMAIF_AO_DL + 0x0c)
+#define DPMAIF_AO_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_AO_DL + 0x28)
+#define DPMAIF_AO_DL_DLQPIT_TRIG_THRES (BASE_DPMAIF_AO_DL + 0x34)
+
+#define DPMAIF_AO_DL_PKTINFO_CON0 (BASE_DPMAIF_PD_SRAM_DL + 0x00)
+#define DPMAIF_AO_DL_PKTINFO_CON1 (BASE_DPMAIF_PD_SRAM_DL + 0x04)
+#define DPMAIF_AO_DL_PKTINFO_CON2 (BASE_DPMAIF_PD_SRAM_DL + 0x08)
+#define DPMAIF_AO_DL_RDY_CHK_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x0c)
+#define DPMAIF_AO_DL_RDY_CHK_FRG_THRES (BASE_DPMAIF_PD_SRAM_DL + 0x10)
+
+#define DPMAIF_AO_DL_DLQ_AGG_CFG (BASE_DPMAIF_PD_SRAM_DL + 0x20)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT0 (BASE_DPMAIF_PD_SRAM_DL + 0x24)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT1 (BASE_DPMAIF_PD_SRAM_DL + 0x28)
+#define DPMAIF_AO_DL_HPC_CNTL (BASE_DPMAIF_PD_SRAM_DL + 0x38)
+#define DPMAIF_AO_DL_PIT_SEQ_END (BASE_DPMAIF_PD_SRAM_DL + 0x40)
+
+#define DPMAIF_AO_DL_BAT_RD_IDX (BASE_DPMAIF_PD_SRAM_DL + 0xd8)
+#define DPMAIF_AO_DL_BAT_WR_IDX (BASE_DPMAIF_PD_SRAM_DL + 0xdc)
+#define DPMAIF_AO_DL_PIT_RD_IDX (BASE_DPMAIF_PD_SRAM_DL + 0xec)
+#define DPMAIF_AO_DL_PIT_WR_IDX (BASE_DPMAIF_PD_SRAM_DL + 0x60)
+#define DPMAIF_AO_DL_FRGBAT_RD_IDX (BASE_DPMAIF_PD_SRAM_DL + 0x78)
+#define DPMAIF_AO_DL_DLQ_WR_IDX (BASE_DPMAIF_PD_SRAM_DL + 0xa4)
+
+#define DPMAIF_HPC_INTR_MASK (BASE_DPMAIF_MMW_HPC + 0x0f4)
+#define DPMA_HPC_ALL_INT_MASK GENMASK(15, 0)
+
+#define DPMAIF_HPC_DLQ_PATH_MODE 3
+#define DPMAIF_HPC_ADD_MODE_DF 0
+#define DPMAIF_HPC_TOTAL_NUM 8
+#define DPMAIF_HPC_MAX_TOTAL_NUM 8
+
+#define DPMAIF_DL_DLQPIT_INIT (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x00)
+#define DPMAIF_DL_DLQPIT_ADD (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x10)
+#define DPMAIF_DL_DLQPIT_INIT_CON0 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x14)
+#define DPMAIF_DL_DLQPIT_INIT_CON1 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x18)
+#define DPMAIF_DL_DLQPIT_INIT_CON2 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x1c)
+#define DPMAIF_DL_DLQPIT_INIT_CON3 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x20)
+#define DPMAIF_DL_DLQPIT_INIT_CON4 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x24)
+#define DPMAIF_DL_DLQPIT_INIT_CON5 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x28)
+#define DPMAIF_DL_DLQPIT_INIT_CON6 (BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x2c)
+
+#define DPMAIF_ULQSAR_n(q) (DPMAIF_AO_UL_CHNL0_CON0 + 0x10 * (q))
+#define DPMAIF_UL_DRBSIZE_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON1 + 0x10 * (q))
+#define DPMAIF_UL_DRB_ADDRH_n(q) (DPMAIF_AO_UL_CHNL0_CON2 + 0x10 * (q))
+#define DPMAIF_ULQ_STA0_n(q) (DPMAIF_AO_UL_CH0_STA + 0x04 * (q))
+#define DPMAIF_ULQ_ADD_DESC_CH_n(q) (DPMAIF_UL_ADD_DESC_CH0 + 0x04 * (q))
+
+#define DPMAIF_UL_DRB_RIDX_MSK GENMASK(31, 16)
+
+#define DPMAIF_AP_RGU_ASSERT 0x10001150
+#define DPMAIF_AP_RGU_DEASSERT 0x10001154
+#define DPMAIF_AP_RST_BIT BIT(2)
+
+#define DPMAIF_AP_AO_RGU_ASSERT 0x10001140
+#define DPMAIF_AP_AO_RGU_DEASSERT 0x10001144
+#define DPMAIF_AP_AO_RST_BIT BIT(6)
+
+/* DPMAIF init/restore */
+#define DPMAIF_UL_ADD_NOT_READY BIT(31)
+#define DPMAIF_UL_ADD_UPDATE BIT(31)
+#define DPMAIF_UL_ADD_COUNT_MASK GENMASK(15, 0)
+#define DPMAIF_UL_ALL_QUE_ARB_EN GENMASK(11, 8)
+
+#define DPMAIF_DL_ADD_UPDATE BIT(31)
+#define DPMAIF_DL_ADD_NOT_READY BIT(31)
+#define DPMAIF_DL_FRG_ADD_UPDATE BIT(16)
+#define DPMAIF_DL_ADD_COUNT_MASK GENMASK(15, 0)
+
+#define DPMAIF_DL_BAT_INIT_ALLSET BIT(0)
+#define DPMAIF_DL_BAT_FRG_INIT BIT(16)
+#define DPMAIF_DL_BAT_INIT_EN BIT(31)
+#define DPMAIF_DL_BAT_INIT_NOT_READY BIT(31)
+#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT 0
+
+#define DPMAIF_DL_PIT_INIT_ALLSET BIT(0)
+#define DPMAIF_DL_PIT_INIT_EN BIT(31)
+#define DPMAIF_DL_PIT_INIT_NOT_READY BIT(31)
+
+#define DPMAIF_BAT_REMAIN_SZ_BASE 16
+#define DPMAIF_BAT_BUFFER_SZ_BASE 128
+#define DPMAIF_FRG_BUFFER_SZ_BASE 128
+
+#define DLQ_PIT_IDX_SIZE 0x20
+
+#define DPMAIF_PIT_SIZE_MSK GENMASK(17, 0)
+
+#define DPMAIF_PIT_REM_CNT_MSK GENMASK(17, 0)
+
+#define DPMAIF_BAT_EN_MSK BIT(16)
+#define DPMAIF_FRG_EN_MSK BIT(28)
+#define DPMAIF_BAT_SIZE_MSK GENMASK(15, 0)
+
+#define DPMAIF_BAT_BID_MAXCNT_MSK GENMASK(31, 16)
+#define DPMAIF_BAT_REMAIN_MINSZ_MSK GENMASK(15, 8)
+#define DPMAIF_PIT_CHK_NUM_MSK GENMASK(31, 24)
+#define DPMAIF_BAT_BUF_SZ_MSK GENMASK(16, 8)
+#define DPMAIF_FRG_BUF_SZ_MSK GENMASK(16, 8)
+#define DPMAIF_BAT_RSV_LEN_MSK GENMASK(7, 0)
+#define DPMAIF_PKT_ALIGN_MSK GENMASK(23, 22)
+
+#define DPMAIF_BAT_CHECK_THRES_MSK GENMASK(21, 16)
+#define DPMAIF_FRG_CHECK_THRES_MSK GENMASK(7, 0)
+
+#define DPMAIF_PKT_ALIGN_EN BIT(23)
+
+#define DPMAIF_DRB_SIZE_MSK GENMASK(15, 0)
+
+#define DPMAIF_DL_RD_WR_IDX_MSK GENMASK(17, 0)
+
+/* DPMAIF_UL_CHK_BUSY */
+#define DPMAIF_UL_IDLE_STS BIT(11)
+/* DPMAIF_DL_CHK_BUSY */
+#define DPMAIF_DL_IDLE_STS BIT(23)
+/* DPMAIF_AO_DL_RDY_CHK_THRES */
+#define DPMAIF_DL_PKT_CHECKSUM_EN BIT(31)
+#define DPMAIF_PORT_MODE_PCIE BIT(30)
+#define DPMAIF_DL_BURST_PIT_EN BIT(13)
+/* DPMAIF_DL_BAT_INIT_CON1 */
+#define DPMAIF_DL_BAT_CACHE_PRI BIT(22)
+/* DPMAIF_AP_MEM_CLR */
+#define DPMAIF_MEM_CLR BIT(0)
+/* DPMAIF_AP_OVERWRITE_CFG */
+#define DPMAIF_SRAM_SYNC BIT(0)
+/* DPMAIF_AO_UL_INIT_SET */
+#define DPMAIF_UL_INIT_DONE BIT(0)
+/* DPMAIF_AO_DL_INIT_SET */
+#define DPMAIF_DL_INIT_DONE BIT(0)
+/* DPMAIF_AO_DL_PIT_SEQ_END */
+#define DPMAIF_DL_PIT_SEQ_MSK GENMASK(7, 0)
+/* DPMAIF_UL_RESERVE_AO_RW */
+#define DPMAIF_PCIE_MODE_SET_VALUE 0x55
+/* DPMAIF_AP_CG_EN */
+#define DPMAIF_CG_EN 0x7f
+
+#define DPMAIF_UDL_IP_BUSY BIT(0)
+#define DPMAIF_DL_INT_DLQ0_QDONE BIT(8)
+#define DPMAIF_DL_INT_DLQ1_QDONE BIT(9)
+#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN BIT(10)
+#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN BIT(11)
+#define DPMAIF_DL_INT_Q2TOQ1 BIT(24)
+#define DPMAIF_DL_INT_Q2APTOP BIT(25)
+
+#define DPMAIF_DLQ_LOW_TIMEOUT_THRES_MSK GENMASK(15, 0)
+#define DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK GENMASK(31, 16)
+
+/* DPMAIF DLQ HW configure */
+#define DPMAIF_AGG_MAX_LEN_DF 65535
+#define DPMAIF_AGG_TBL_ENT_NUM_DF 50
+#define DPMAIF_HASH_PRIME_DF 13
+#define DPMAIF_MID_TIMEOUT_THRES_DF 100
+#define DPMAIF_DLQ_TIMEOUT_THRES_DF 100
+#define DPMAIF_DLQ_PRS_THRES_DF 10
+#define DPMAIF_DLQ_HASH_BIT_CHOOSE_DF 0
+
+#define DPMAIF_DLQPIT_EN_MSK BIT(20)
+#define DPMAIF_DLQPIT_CHAN_OFS 16
+#define DPMAIF_ADD_DLQ_PIT_CHAN_OFS 20
+
#endif /* __T7XX_REG_H__ */
--
2.17.1

2022-02-24 01:47:14

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports

From: Chandrashekar Devegowda <[email protected]>

Adds AT and MBIM ports to the port proxy infrastructure.
The initialization method is responsible for creating the corresponding
ports using the WWAN framework infrastructure. The implemented WWAN port
operations are start, stop, and TX.

Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>

From a WWAN framework perspective:
Reviewed-by: Loic Poulain <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 24 +++
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 1 +
drivers/net/wwan/t7xx/t7xx_port_wwan.c | 210 ++++++++++++++++++++++++
4 files changed, 236 insertions(+)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 63e1c67b82b5..9eec2e2472fb 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -12,3 +12,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_hif_cldma.o \
t7xx_port_proxy.o \
t7xx_port_ctrl_msg.o \
+ t7xx_port_wwan.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index 256442a60cc2..9a5cc64904d3 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -51,6 +51,30 @@

static struct t7xx_port_static t7xx_md_ports[] = {
{
+ .tx_ch = PORT_CH_UART2_TX,
+ .rx_ch = PORT_CH_UART2_RX,
+ .txq_index = Q_IDX_AT_CMD,
+ .rxq_index = Q_IDX_AT_CMD,
+ .txq_exp_index = 0xff,
+ .rxq_exp_index = 0xff,
+ .path_id = CLDMA_ID_MD,
+ .flags = 0,
+ .ops = &wwan_sub_port_ops,
+ .name = "AT",
+ .port_type = WWAN_PORT_AT,
+ }, {
+ .tx_ch = PORT_CH_MBIM_TX,
+ .rx_ch = PORT_CH_MBIM_RX,
+ .txq_index = Q_IDX_MBIM,
+ .rxq_index = Q_IDX_MBIM,
+ .txq_exp_index = 0,
+ .rxq_exp_index = 0,
+ .path_id = CLDMA_ID_MD,
+ .flags = 0,
+ .ops = &wwan_sub_port_ops,
+ .name = "MBIM",
+ .port_type = WWAN_PORT_MBIM,
+ }, {
.tx_ch = PORT_CH_CONTROL_TX,
.rx_ch = PORT_CH_CONTROL_RX,
.txq_index = Q_IDX_CTRL,
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index b23750f78d55..1c9608987728 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -80,6 +80,7 @@ struct port_msg {
#define PORT_ENUM_VER_MISMATCH 0x00657272

/* Port operations mapping */
+extern struct port_ops wwan_sub_port_ops;
extern struct port_ops ctl_port_ops;

int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_wwan.c b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
new file mode 100644
index 000000000000..ac9144021431
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
@@ -0,0 +1,210 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Amir Hanania <[email protected]>
+ * Chandrashekar Devegowda <[email protected]>
+ * Haijun Liu <[email protected]>
+ * Moises Veleta <[email protected]>
+ * Ricardo Martinez<[email protected]>
+ *
+ * Contributors:
+ * Andy Shevchenko <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/dev_printk.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/minmax.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/wwan.h>
+
+#include "t7xx_common.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_state_monitor.h"
+
+#define CCCI_HEADROOM 128
+
+static int t7xx_port_ctrl_start(struct wwan_port *port)
+{
+ struct t7xx_port *port_mtk = wwan_port_get_drvdata(port);
+
+ if (atomic_read(&port_mtk->usage_cnt))
+ return -EBUSY;
+
+ atomic_inc(&port_mtk->usage_cnt);
+ return 0;
+}
+
+static void t7xx_port_ctrl_stop(struct wwan_port *port)
+{
+ struct t7xx_port *port_mtk = wwan_port_get_drvdata(port);
+
+ atomic_dec(&port_mtk->usage_cnt);
+}
+
+static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
+{
+ struct t7xx_port *port_private = wwan_port_get_drvdata(port);
+ size_t actual_len, alloc_size, txq_mtu = CLDMA_MTU;
+ struct t7xx_port_static *port_static;
+ unsigned int len, i, packets;
+ struct t7xx_fsm_ctl *ctl;
+ enum md_state md_state;
+
+ len = skb->len;
+ if (!len || !port_private->rx_length_th || !port_private->chan_enable)
+ return -EINVAL;
+
+ port_static = port_private->port_static;
+ ctl = port_private->t7xx_dev->md->fsm_ctl;
+ md_state = t7xx_fsm_get_md_state(ctl);
+ if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
+ dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
+ port_static->name, md_state);
+ return -ENODEV;
+ }
+
+ alloc_size = min_t(size_t, txq_mtu, len + CCCI_HEADROOM);
+ actual_len = alloc_size - CCCI_HEADROOM;
+ packets = DIV_ROUND_UP(len, txq_mtu - CCCI_HEADROOM);
+
+ for (i = 0; i < packets; i++) {
+ struct ccci_header *ccci_h;
+ struct sk_buff *skb_ccci;
+ int ret;
+
+ if (packets > 1 && packets == i + 1) {
+ actual_len = len % (txq_mtu - CCCI_HEADROOM);
+ alloc_size = actual_len + CCCI_HEADROOM;
+ }
+
+ skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
+ if (!skb_ccci)
+ return -ENOMEM;
+
+ ccci_h = skb_put(skb_ccci, sizeof(*ccci_h));
+ t7xx_ccci_header_init(ccci_h, 0, actual_len + sizeof(*ccci_h),
+ port_static->tx_ch, 0);
+ skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_HEADROOM), actual_len);
+ t7xx_port_proxy_set_tx_seq_num(port_private, ccci_h);
+
+ ret = t7xx_port_send_skb_to_md(port_private, skb_ccci);
+ if (ret) {
+ dev_kfree_skb_any(skb_ccci);
+ dev_err(port_private->dev, "Write error on %s port, %d\n",
+ port_static->name, ret);
+ return ret;
+ }
+
+ port_private->seq_nums[MTK_TX]++;
+ }
+
+ dev_kfree_skb(skb);
+ return 0;
+}
+
+static const struct wwan_port_ops wwan_ops = {
+ .start = t7xx_port_ctrl_start,
+ .stop = t7xx_port_ctrl_stop,
+ .tx = t7xx_port_ctrl_tx,
+};
+
+static int t7xx_port_wwan_init(struct t7xx_port *port)
+{
+ struct t7xx_port_static *port_static = port->port_static;
+
+ port->rx_length_th = RX_QUEUE_MAXLEN;
+ port->flags |= PORT_F_RX_ADJUST_HEADER;
+
+ if (port_static->rx_ch == PORT_CH_UART2_RX)
+ port->flags |= PORT_F_RX_CH_TRAFFIC;
+
+ if (!port->chan_enable)
+ port->flags |= PORT_F_RX_ALLOW_DROP;
+
+ return 0;
+}
+
+static void t7xx_port_wwan_uninit(struct t7xx_port *port)
+{
+ unsigned long flags;
+
+ if (!port->wwan_port)
+ return;
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ port->rx_length_th = 0;
+ wwan_remove_port(port->wwan_port);
+ port->wwan_port = NULL;
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+}
+
+static int t7xx_port_wwan_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+ struct t7xx_port_static *port_static = port->port_static;
+
+ if (!atomic_read(&port->usage_cnt)) {
+ dev_kfree_skb_any(skb);
+ dev_err_ratelimited(port->dev, "Port %s is not opened, drop packets\n",
+ port_static->name);
+ return 0;
+ }
+
+ return t7xx_port_recv_skb(port, skb);
+}
+
+static int t7xx_port_wwan_enable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = true;
+ port->flags &= ~PORT_F_RX_ALLOW_DROP;
+ spin_unlock(&port->port_update_lock);
+
+ return 0;
+}
+
+static int t7xx_port_wwan_disable_chl(struct t7xx_port *port)
+{
+ spin_lock(&port->port_update_lock);
+ port->chan_enable = false;
+ port->flags |= PORT_F_RX_ALLOW_DROP;
+ spin_unlock(&port->port_update_lock);
+
+ return 0;
+}
+
+static void t7xx_port_wwan_md_state_notify(struct t7xx_port *port, unsigned int state)
+{
+ struct t7xx_port_static *port_static = port->port_static;
+
+ if (state != MD_STATE_READY)
+ return;
+
+ if (!port->wwan_port) {
+ port->wwan_port = wwan_create_port(port->dev, port_static->port_type,
+ &wwan_ops, port);
+ if (IS_ERR(port->wwan_port))
+ dev_err(port->dev, "Unable to create WWWAN port %s", port_static->name);
+ }
+}
+
+struct port_ops wwan_sub_port_ops = {
+ .init = t7xx_port_wwan_init,
+ .recv_skb = t7xx_port_wwan_recv_skb,
+ .uninit = t7xx_port_wwan_uninit,
+ .enable_chl = t7xx_port_wwan_enable_chl,
+ .disable_chl = t7xx_port_wwan_disable_chl,
+ .md_state_notify = t7xx_port_wwan_md_state_notify,
+};
--
2.17.1

2022-02-24 01:54:49

by Martinez, Ricardo

[permalink] [raw]
Subject: [PATCH net-next v5 05/13] net: wwan: t7xx: Add control port

From: Haijun Liu <[email protected]>

Control Port implements driver control messages such as modem-host
handshaking, controls port enumeration, and handles exception messages.

The handshaking process between the driver and the modem happens during
the init sequence. The process involves the exchange of a list of
supported runtime features to make sure that modem and host are ready
to provide proper feature lists including port enumeration. Further
features can be enabled and controlled in this handshaking process.

Signed-off-by: Haijun Liu <[email protected]>
Signed-off-by: Chandrashekar Devegowda <[email protected]>
Co-developed-by: Ricardo Martinez <[email protected]>
Signed-off-by: Ricardo Martinez <[email protected]>

From a WWAN framework perspective:
Reviewed-by: Loic Poulain <[email protected]>
---
drivers/net/wwan/t7xx/Makefile | 1 +
drivers/net/wwan/t7xx/t7xx_modem_ops.c | 249 ++++++++++++++++++++-
drivers/net/wwan/t7xx/t7xx_modem_ops.h | 3 +
drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 205 +++++++++++++++++
drivers/net/wwan/t7xx/t7xx_port_proxy.c | 74 +++++-
drivers/net/wwan/t7xx/t7xx_port_proxy.h | 30 +++
drivers/net/wwan/t7xx/t7xx_state_monitor.c | 3 +
drivers/net/wwan/t7xx/t7xx_state_monitor.h | 2 +
8 files changed, 563 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 99f9ca3b4b51..63e1c67b82b5 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -11,3 +11,4 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_cldma.o \
t7xx_hif_cldma.o \
t7xx_port_proxy.o \
+ t7xx_port_ctrl_msg.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index 18e953f35b94..156eeaa907bc 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -16,6 +16,8 @@
*/

#include <linux/acpi.h>
+#include <linux/bits.h>
+#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/gfp.h>
@@ -26,6 +28,7 @@
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>
+#include <linux/wait.h>
#include <linux/workqueue.h>

#include "t7xx_cldma.h"
@@ -39,11 +42,24 @@
#include "t7xx_reg.h"
#include "t7xx_state_monitor.h"

+#define RT_ID_MD_PORT_ENUM 0
+/* Modem feature query identification code - "ICCC" */
+#define MD_FEATURE_QUERY_ID 0x49434343
+
+#define FEATURE_VER GENMASK(7, 4)
+#define FEATURE_MSK GENMASK(3, 0)
+
#define RGU_RESET_DELAY_MS 10
#define PORT_RESET_DELAY_MS 2000
#define EX_HS_TIMEOUT_MS 5000
#define EX_HS_POLL_DELAY_MS 10

+enum mtk_feature_support_type {
+ MTK_FEATURE_DOES_NOT_EXIST,
+ MTK_FEATURE_NOT_SUPPORTED,
+ MTK_FEATURE_MUST_BE_SUPPORTED,
+};
+
static unsigned int t7xx_get_interrupt_status(struct t7xx_pci_dev *t7xx_dev)
{
return t7xx_mhccif_read_sw_int_sts(t7xx_dev) & D2H_SW_INT_MASK;
@@ -314,16 +330,239 @@ static void t7xx_md_sys_sw_init(struct t7xx_pci_dev *t7xx_dev)
t7xx_pcie_register_rgu_isr(t7xx_dev);
}

+struct feature_query {
+ __le32 head_pattern;
+ u8 feature_set[FEATURE_COUNT];
+ __le32 tail_pattern;
+};
+
+static void t7xx_prepare_host_rt_data_query(struct t7xx_sys_info *core)
+{
+ struct t7xx_port_static *port_static = core->ctl_port->port_static;
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct feature_query *ft_query;
+ struct ccci_header *ccci_h;
+ struct sk_buff *skb;
+ size_t packet_size;
+
+ packet_size = sizeof(*ccci_h) + sizeof(*ctrl_msg_h) + sizeof(*ft_query);
+ skb = __dev_alloc_skb(packet_size, GFP_KERNEL);
+ if (!skb)
+ return;
+
+ skb_put(skb, packet_size);
+
+ ccci_h = (struct ccci_header *)skb->data;
+ t7xx_ccci_header_init(ccci_h, 0, packet_size, port_static->tx_ch, 0);
+ ccci_h->status &= cpu_to_le32(~CCCI_H_SEQ_FLD);
+
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
+ t7xx_ctrl_msg_header_init(ctrl_msg_h, CTL_ID_HS1_MSG, 0, sizeof(*ft_query));
+
+ ft_query = (struct feature_query *)(skb->data + sizeof(*ccci_h) + sizeof(*ctrl_msg_h));
+ ft_query->head_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
+ memcpy(ft_query->feature_set, core->feature_set, FEATURE_COUNT);
+ ft_query->tail_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
+
+ /* Send HS1 message to device */
+ t7xx_port_proxy_send_skb(core->ctl_port, skb);
+}
+
+static int t7xx_prepare_device_rt_data(struct t7xx_sys_info *core, struct device *dev,
+ void *data, int data_length)
+{
+ struct t7xx_port_static *port_static = core->ctl_port->port_static;
+ struct feature_query *md_feature = data;
+ struct ctrl_msg_header *ctrl_msg_h;
+ unsigned int total_data_len;
+ struct ccci_header *ccci_h;
+ size_t packet_size = 0;
+ struct sk_buff *skb;
+ char *rt_data;
+ int i;
+
+ /* Parse MD runtime data query */
+ if (le32_to_cpu(md_feature->head_pattern) != MD_FEATURE_QUERY_ID ||
+ le32_to_cpu(md_feature->tail_pattern) != MD_FEATURE_QUERY_ID) {
+ dev_err(dev, "Invalid feature pattern: head 0x%x, tail 0x%x\n",
+ le32_to_cpu(md_feature->head_pattern),
+ le32_to_cpu(md_feature->tail_pattern));
+ return -EINVAL;
+ }
+
+ skb = __dev_alloc_skb(CLDMA_MTU, GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ ccci_h = (struct ccci_header *)skb->data;
+ t7xx_ccci_header_init(ccci_h, 0, packet_size, port_static->tx_ch, 0);
+ ccci_h->status &= cpu_to_le32(~CCCI_H_SEQ_FLD);
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
+ t7xx_ctrl_msg_header_init(ctrl_msg_h, CTL_ID_HS3_MSG, 0, 0);
+ rt_data = skb->data + sizeof(*ccci_h) + sizeof(*ctrl_msg_h);
+
+ /* Fill runtime feature */
+ for (i = 0; i < FEATURE_COUNT; i++) {
+ struct mtk_runtime_feature rt_feature;
+ u8 md_feature_mask;
+
+ if (FIELD_GET(FEATURE_MSK, md_feature->feature_set[i]) ==
+ MTK_FEATURE_MUST_BE_SUPPORTED)
+ continue;
+
+ memset(&rt_feature, 0, sizeof(rt_feature));
+ rt_feature.feature_id = i;
+
+ md_feature_mask = FIELD_GET(FEATURE_MSK, md_feature->feature_set[i]);
+ if (md_feature_mask == MTK_FEATURE_DOES_NOT_EXIST)
+ rt_feature.support_info = md_feature->feature_set[i];
+
+ memcpy(rt_data, &rt_feature, sizeof(rt_feature));
+ rt_data += sizeof(rt_feature);
+ packet_size += sizeof(rt_feature);
+ }
+
+ ctrl_msg_h->data_length = cpu_to_le32(packet_size);
+ total_data_len = packet_size + sizeof(*ctrl_msg_h) + sizeof(*ccci_h);
+ ccci_h->packet_len = cpu_to_le32(total_data_len);
+ skb_put(skb, total_data_len);
+
+ /* Send HS3 message to device */
+ t7xx_port_proxy_send_skb(core->ctl_port, skb);
+ return 0;
+}
+
+static int t7xx_parse_host_rt_data(struct t7xx_fsm_ctl *ctl, struct t7xx_sys_info *core,
+ struct device *dev, void *data, int data_length)
+{
+ enum mtk_feature_support_type ft_spt_st, ft_spt_cfg;
+ struct mtk_runtime_feature *rt_feature;
+ int i, offset;
+
+ offset = sizeof(struct feature_query);
+ for (i = 0; i < FEATURE_COUNT && offset < data_length; i++) {
+ rt_feature = data + offset;
+ ft_spt_st = FIELD_GET(FEATURE_MSK, rt_feature->support_info);
+ offset += sizeof(*rt_feature) + le32_to_cpu(rt_feature->data_len);
+
+ ft_spt_cfg = FIELD_GET(FEATURE_MSK, core->feature_set[i]);
+ if (ft_spt_cfg != MTK_FEATURE_MUST_BE_SUPPORTED)
+ continue;
+
+ if (ft_spt_st != MTK_FEATURE_MUST_BE_SUPPORTED)
+ return -EINVAL;
+
+ if (i == RT_ID_MD_PORT_ENUM) {
+ struct port_msg *p_msg = (void *)rt_feature + sizeof(*rt_feature);
+
+ t7xx_port_proxy_node_control(ctl->md, p_msg);
+ }
+ }
+
+ return 0;
+}
+
+static int t7xx_core_reset(struct t7xx_modem *md)
+{
+ struct device *dev = &md->t7xx_dev->pdev->dev;
+ struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+ md->core_md.ready = false;
+
+ if (!ctl) {
+ dev_err(dev, "FSM is not initialized\n");
+ return -EINVAL;
+ }
+
+ if (md->core_md.handshake_ongoing) {
+ int ret = t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2_EXIT, NULL, 0);
+
+ if (ret)
+ return ret;
+ }
+
+ md->core_md.handshake_ongoing = false;
+ return 0;
+}
+
+static void t7xx_core_hk_handler(struct t7xx_modem *md, struct t7xx_fsm_ctl *ctl,
+ enum t7xx_fsm_event_state event_id,
+ enum t7xx_fsm_event_state err_detect)
+{
+ struct t7xx_sys_info *core_info = &md->core_md;
+ struct device *dev = &md->t7xx_dev->pdev->dev;
+ struct t7xx_fsm_event *event, *event_next;
+ unsigned long flags;
+ void *event_data;
+ int ret;
+
+ t7xx_prepare_host_rt_data_query(core_info);
+
+ while (!kthread_should_stop()) {
+ bool event_received = false;
+
+ spin_lock_irqsave(&ctl->event_lock, flags);
+ list_for_each_entry_safe(event, event_next, &ctl->event_queue, entry) {
+ if (event->event_id == err_detect) {
+ list_del(&event->entry);
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+ dev_err(dev, "Core handshake error event received\n");
+ goto err_free_event;
+ } else if (event->event_id == event_id) {
+ list_del(&event->entry);
+ event_received = true;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&ctl->event_lock, flags);
+
+ if (event_received)
+ break;
+
+ wait_event_interruptible(ctl->event_wq, !list_empty(&ctl->event_queue) ||
+ kthread_should_stop());
+ if (kthread_should_stop())
+ goto err_free_event;
+ }
+
+ if (ctl->exp_flg)
+ goto err_free_event;
+
+ event_data = (void *)event + sizeof(*event);
+ ret = t7xx_parse_host_rt_data(ctl, core_info, dev, event_data, event->length);
+ if (ret) {
+ dev_err(dev, "Host failure parsing runtime data: %d\n", ret);
+ goto err_free_event;
+ }
+
+ if (ctl->exp_flg)
+ goto err_free_event;
+
+ ret = t7xx_prepare_device_rt_data(core_info, dev, event_data, event->length);
+ if (ret) {
+ dev_err(dev, "Device failure parsing runtime data: %d", ret);
+ goto err_free_event;
+ }
+
+ core_info->ready = true;
+ core_info->handshake_ongoing = false;
+ wake_up(&ctl->async_hk_wq);
+err_free_event:
+ kfree(event);
+}
+
static void t7xx_md_hk_wq(struct work_struct *work)
{
struct t7xx_modem *md = container_of(work, struct t7xx_modem, handshake_work);
struct t7xx_fsm_ctl *ctl = md->fsm_ctl;

+ /* Clear the HS2 EXIT event appended in core_reset() */
+ t7xx_fsm_clr_event(ctl, FSM_EVENT_MD_HS2_EXIT);
t7xx_cldma_switch_cfg(md->md_ctrl[CLDMA_ID_MD]);
t7xx_cldma_start(md->md_ctrl[CLDMA_ID_MD]);
t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
- md->core_md.ready = true;
- wake_up(&ctl->async_hk_wq);
+ md->core_md.handshake_ongoing = true;
+ t7xx_core_hk_handler(md, ctl, FSM_EVENT_MD_HS2, FSM_EVENT_MD_HS2_EXIT);
}

void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
@@ -412,6 +651,7 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
md->t7xx_dev = t7xx_dev;
t7xx_dev->md = md;
md->core_md.ready = false;
+ md->core_md.handshake_ongoing = false;
spin_lock_init(&md->exp_lock);
md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
0, "md_hk_wq");
@@ -419,6 +659,9 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
return NULL;

INIT_WORK(&md->handshake_work, t7xx_md_hk_wq);
+ md->core_md.feature_set[RT_ID_MD_PORT_ENUM] &= ~FEATURE_MSK;
+ md->core_md.feature_set[RT_ID_MD_PORT_ENUM] |=
+ FIELD_PREP(FEATURE_MSK, MTK_FEATURE_MUST_BE_SUPPORTED);
return md;
}

@@ -433,7 +676,7 @@ int t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
t7xx_cldma_reset(md->md_ctrl[CLDMA_ID_MD]);
t7xx_port_proxy_reset(md->port_prox);
md->md_init_finish = true;
- return 0;
+ return t7xx_core_reset(md);
}

/**
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
index 4c92879621ef..c0596d540ca5 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.h
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -56,6 +56,9 @@ enum md_event_id {

struct t7xx_sys_info {
bool ready;
+ bool handshake_ongoing;
+ u8 feature_set[FEATURE_COUNT];
+ struct t7xx_port *ctl_port;
};

struct t7xx_modem {
diff --git a/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
new file mode 100644
index 000000000000..d7edde246cd7
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021-2022, Intel Corporation.
+ *
+ * Authors:
+ * Haijun Liu <[email protected]>
+ * Ricardo Martinez<[email protected]>
+ * Moises Veleta <[email protected]>
+ *
+ * Contributors:
+ * Amir Hanania <[email protected]>
+ * Chiranjeevi Rapolu <[email protected]>
+ * Eliot Lee <[email protected]>
+ * Sreehari Kancharla <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/kthread.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_state_monitor.h"
+
+static int fsm_ee_message_handler(struct t7xx_fsm_ctl *ctl, struct sk_buff *skb)
+{
+ struct ctrl_msg_header *ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+ struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+ struct port_proxy *port_prox = ctl->md->port_prox;
+ enum md_state md_state;
+ int ret = -EINVAL;
+
+ md_state = t7xx_fsm_get_md_state(ctl);
+ if (md_state != MD_STATE_EXCEPTION) {
+ dev_err(dev, "Receive invalid MD_EX %x when MD state is %d\n",
+ le32_to_cpu(ctrl_msg_h->ex_msg), md_state);
+ return -EINVAL;
+ }
+
+ switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {
+ case CTL_ID_MD_EX:
+ if (le32_to_cpu(ctrl_msg_h->ex_msg) != MD_EX_CHK_ID) {
+ dev_err(dev, "Receive invalid MD_EX %x\n", ctrl_msg_h->ex_msg);
+ } else {
+ t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX, CTL_ID_MD_EX,
+ MD_EX_CHK_ID);
+ ret = t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX, NULL, 0);
+ if (ret)
+ dev_err(dev, "Failed to append Modem Exception event");
+ }
+
+ break;
+
+ case CTL_ID_MD_EX_ACK:
+ if (le32_to_cpu(ctrl_msg_h->ex_msg) != MD_EX_CHK_ACK_ID) {
+ dev_err(dev, "Receive invalid MD_EX_ACK %x\n", ctrl_msg_h->ex_msg);
+ } else {
+ ret = t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX_REC_OK, NULL, 0);
+ if (ret)
+ dev_err(dev, "Failed to append Modem Exception Received event");
+ }
+
+ break;
+
+ case CTL_ID_MD_EX_PASS:
+ ret = t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX_PASS, NULL, 0);
+ if (ret)
+ dev_err(dev, "Failed to append Modem Exception Passed event");
+
+ break;
+
+ case CTL_ID_DRV_VER_ERROR:
+ dev_err(dev, "AP/MD driver version mismatch\n");
+ }
+
+ return ret;
+}
+
+static int control_msg_handler(struct t7xx_port *port, struct sk_buff *skb)
+{
+ struct t7xx_port_static *port_static = port->port_static;
+ struct t7xx_fsm_ctl *ctl = port->t7xx_dev->md->fsm_ctl;
+ struct port_proxy *port_prox = ctl->md->port_prox;
+ struct ctrl_msg_header *ctrl_msg_h;
+ int ret = 0;
+
+ skb_pull(skb, sizeof(struct ccci_header));
+
+ ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+ switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {
+ case CTL_ID_HS2_MSG:
+ skb_pull(skb, sizeof(*ctrl_msg_h));
+
+ if (port_static->rx_ch == PORT_CH_CONTROL_RX) {
+ ret = t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2, skb->data,
+ le32_to_cpu(ctrl_msg_h->data_length));
+ if (ret)
+ dev_err(port->dev, "Failed to append Handshake 2 event");
+ }
+
+ dev_kfree_skb_any(skb);
+ break;
+
+ case CTL_ID_MD_EX:
+ case CTL_ID_MD_EX_ACK:
+ case CTL_ID_MD_EX_PASS:
+ case CTL_ID_DRV_VER_ERROR:
+ ret = fsm_ee_message_handler(ctl, skb);
+ dev_kfree_skb_any(skb);
+ break;
+
+ case CTL_ID_PORT_ENUM:
+ skb_pull(skb, sizeof(*ctrl_msg_h));
+ ret = t7xx_port_proxy_node_control(ctl->md, (struct port_msg *)skb->data);
+ if (!ret)
+ t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX,
+ CTL_ID_PORT_ENUM, 0);
+ else
+ t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX,
+ CTL_ID_PORT_ENUM, PORT_ENUM_VER_MISMATCH);
+
+ break;
+
+ default:
+ ret = -EINVAL;
+ dev_err(port->dev, "Unknown control message ID to FSM %x\n",
+ le32_to_cpu(ctrl_msg_h->ctrl_msg_id));
+ break;
+ }
+
+ if (ret)
+ dev_err(port->dev, "%s control message handle error: %d\n", port_static->name,
+ ret);
+
+ return ret;
+}
+
+static int port_ctl_rx_thread(void *arg)
+{
+ while (!kthread_should_stop()) {
+ struct t7xx_port *port = arg;
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ if (skb_queue_empty(&port->rx_skb_list) &&
+ wait_event_interruptible_locked_irq(port->rx_wq,
+ !skb_queue_empty(&port->rx_skb_list) ||
+ kthread_should_stop())) {
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ continue;
+ }
+ if (kthread_should_stop()) {
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+ break;
+ }
+ skb = __skb_dequeue(&port->rx_skb_list);
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+
+ port->skb_handler(port, skb);
+ }
+
+ return 0;
+}
+
+static int port_ctl_init(struct t7xx_port *port)
+{
+ struct t7xx_port_static *port_static = port->port_static;
+
+ port->skb_handler = &control_msg_handler;
+ port->thread = kthread_run(port_ctl_rx_thread, port, "%s", port_static->name);
+ if (IS_ERR(port->thread)) {
+ dev_err(port->dev, "Failed to start port control thread\n");
+ return PTR_ERR(port->thread);
+ }
+
+ port->rx_length_th = CTRL_QUEUE_MAXLEN;
+ return 0;
+}
+
+static void port_ctl_uninit(struct t7xx_port *port)
+{
+ unsigned long flags;
+ struct sk_buff *skb;
+
+ if (port->thread)
+ kthread_stop(port->thread);
+
+ spin_lock_irqsave(&port->rx_wq.lock, flags);
+ port->rx_length_th = 0;
+ while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
+ dev_kfree_skb_any(skb);
+ spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+}
+
+struct port_ops ctl_port_ops = {
+ .init = port_ctl_init,
+ .recv_skb = t7xx_port_recv_skb,
+ .uninit = port_ctl_uninit,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index 69c702473c4c..256442a60cc2 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -49,7 +49,20 @@
i < (proxy)->port_number; \
i++, (p) = &(proxy)->ports_private[i])

-static struct t7xx_port_static t7xx_md_ports[1];
+static struct t7xx_port_static t7xx_md_ports[] = {
+ {
+ .tx_ch = PORT_CH_CONTROL_TX,
+ .rx_ch = PORT_CH_CONTROL_RX,
+ .txq_index = Q_IDX_CTRL,
+ .rxq_index = Q_IDX_CTRL,
+ .txq_exp_index = 0,
+ .rxq_exp_index = 0,
+ .path_id = CLDMA_ID_MD,
+ .flags = 0,
+ .ops = &ctl_port_ops,
+ .name = "t7xx_ctrl",
+ },
+};

static struct t7xx_port *t7xx_proxy_get_port_by_ch(struct port_proxy *port_prox, enum port_ch ch)
{
@@ -271,6 +284,62 @@ static void t7xx_proxy_setup_ch_mapping(struct port_proxy *port_prox)
}
}

+void t7xx_ccci_header_init(struct ccci_header *ccci_h, unsigned int pkt_header,
+ size_t pkt_len, enum port_ch ch, unsigned int ex_msg)
+{
+ ccci_h->packet_header = cpu_to_le32(pkt_header);
+ ccci_h->packet_len = cpu_to_le32(pkt_len);
+ ccci_h->status &= cpu_to_le32(~CCCI_H_CHN_FLD);
+ ccci_h->status |= cpu_to_le32(FIELD_PREP(CCCI_H_CHN_FLD, ch));
+ ccci_h->ex_msg = cpu_to_le32(ex_msg);
+}
+
+void t7xx_ctrl_msg_header_init(struct ctrl_msg_header *ctrl_msg_h, unsigned int msg_id,
+ unsigned int ex_msg, unsigned int len)
+{
+ ctrl_msg_h->ctrl_msg_id = cpu_to_le32(msg_id);
+ ctrl_msg_h->ex_msg = cpu_to_le32(ex_msg);
+ ctrl_msg_h->data_length = cpu_to_le32(len);
+}
+
+void t7xx_port_proxy_send_msg_to_md(struct port_proxy *port_prox, enum port_ch ch,
+ unsigned int msg, unsigned int ex_msg)
+{
+ struct ctrl_msg_header *ctrl_msg_h;
+ struct ccci_header *ccci_h;
+ struct t7xx_port *port;
+ struct sk_buff *skb;
+ int ret;
+
+ port = t7xx_proxy_get_port_by_ch(port_prox, ch);
+ if (!port)
+ return;
+
+ skb = __dev_alloc_skb(sizeof(*ccci_h), GFP_KERNEL);
+ if (!skb)
+ return;
+
+ if (ch == PORT_CH_CONTROL_TX) {
+ ccci_h = (struct ccci_header *)(skb->data);
+ t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA,
+ sizeof(*ctrl_msg_h) + sizeof(*ccci_h), ch, 0);
+ ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
+ t7xx_ctrl_msg_header_init(ctrl_msg_h, msg, ex_msg, 0);
+ skb_put(skb, sizeof(*ccci_h) + sizeof(*ctrl_msg_h));
+ } else {
+ ccci_h = skb_put(skb, sizeof(*ccci_h));
+ t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA, msg, ch, ex_msg);
+ }
+
+ ret = t7xx_port_proxy_send_skb(port, skb);
+ if (ret) {
+ struct t7xx_port_static *port_static = port->port_static;
+
+ dev_kfree_skb_any(skb);
+ dev_err(port->dev, "port%s send to MD fail\n", port_static->name);
+ }
+}
+
static struct t7xx_port *t7xx_port_proxy_find_port(struct t7xx_pci_dev *t7xx_dev,
struct cldma_queue *queue, u16 channel)
{
@@ -380,6 +449,9 @@ static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)

t7xx_port_struct_init(port);

+ if (port_static->tx_ch == PORT_CH_CONTROL_TX)
+ md->core_md.ctl_port = port;
+
port->t7xx_dev = md->t7xx_dev;
port->dev = &md->t7xx_dev->pdev->dev;
spin_lock_init(&port->port_update_lock);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index 59ceb2828aab..b23750f78d55 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -40,6 +40,27 @@ struct port_proxy {
struct device *dev;
};

+struct ctrl_msg_header {
+ __le32 ctrl_msg_id;
+ __le32 ex_msg;
+ __le32 data_length;
+};
+
+/* Control identification numbers for AP<->MD messages */
+#define CTL_ID_HS1_MSG 0x0
+#define CTL_ID_HS2_MSG 0x1
+#define CTL_ID_HS3_MSG 0x2
+#define CTL_ID_MD_EX 0x4
+#define CTL_ID_DRV_VER_ERROR 0x5
+#define CTL_ID_MD_EX_ACK 0x6
+#define CTL_ID_MD_EX_PASS 0x8
+#define CTL_ID_PORT_ENUM 0x9
+
+/* Modem exception check identification code - "EXCP" */
+#define MD_EX_CHK_ID 0x45584350
+/* Modem exception check acknowledge identification code - "EREC" */
+#define MD_EX_CHK_ACK_ID 0x45524543
+
struct port_msg {
__le32 head_pattern;
__le32 info;
@@ -58,12 +79,21 @@ struct port_msg {
#define PORT_ENUM_TAIL_PATTERN 0xa5a5a5a5
#define PORT_ENUM_VER_MISMATCH 0x00657272

+/* Port operations mapping */
+extern struct port_ops ctl_port_ops;
+
int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb);
void t7xx_port_proxy_set_tx_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
int t7xx_port_proxy_node_control(struct t7xx_modem *md, struct port_msg *port_msg);
void t7xx_port_proxy_reset(struct port_proxy *port_prox);
+void t7xx_port_proxy_send_msg_to_md(struct port_proxy *port_prox, enum port_ch ch,
+ unsigned int msg, unsigned int ex_msg);
void t7xx_port_proxy_uninit(struct port_proxy *port_prox);
int t7xx_port_proxy_init(struct t7xx_modem *md);
void t7xx_port_proxy_md_status_notify(struct port_proxy *port_prox, unsigned int state);
+void t7xx_ccci_header_init(struct ccci_header *ccci_h, unsigned int pkt_header,
+ size_t pkt_len, enum port_ch ch, unsigned int ex_msg);
+void t7xx_ctrl_msg_header_init(struct ctrl_msg_header *ctrl_msg_h, unsigned int msg_id,
+ unsigned int ex_msg, unsigned int len);

#endif /* __T7XX_PORT_PROXY_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index cd0d03d8ecb4..b2b809316301 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -293,6 +293,9 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)

if (!md->core_md.ready) {
dev_err(dev, "MD handshake timeout\n");
+ if (md->core_md.handshake_ongoing)
+ t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2_EXIT, NULL, 0);
+
fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
return -ETIMEDOUT;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
index 2e966c898269..7c8d2d17cb50 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
@@ -38,9 +38,11 @@ enum t7xx_fsm_state {

enum t7xx_fsm_event_state {
FSM_EVENT_INVALID,
+ FSM_EVENT_MD_HS2,
FSM_EVENT_MD_EX,
FSM_EVENT_MD_EX_REC_OK,
FSM_EVENT_MD_EX_PASS,
+ FSM_EVENT_MD_HS2_EXIT,
FSM_EVENT_MAX
};

--
2.17.1

2022-02-25 15:31:40

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports

On Wed, 23 Feb 2022, Ricardo Martinez wrote:

> From: Chandrashekar Devegowda <[email protected]>
>
> Adds AT and MBIM ports to the port proxy infrastructure.
> The initialization method is responsible for creating the corresponding
> ports using the WWAN framework infrastructure. The implemented WWAN port
> operations are start, stop, and TX.
>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
>
> From a WWAN framework perspective:
> Reviewed-by: Loic Poulain <[email protected]>
> ---

> +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
> +{
...
> + if (!len || !port_private->rx_length_th || !port_private->chan_enable)

It raises eyebrows to see an rx_xx field being used in something called "tx".
Is it trying to test whether the port has not been initialized?
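
If the intent is "port not yet initialized", a dedicated predicate would
make that explicit (a sketch only; the helper is hypothetical, while
chan_enable is taken from the patch):

/* Hypothetical helper, not in the patch */
static bool t7xx_port_tx_ready(struct t7xx_port *port)
{
	/* An explicit "port is usable" test instead of reusing the
	 * RX threshold on the TX path.
	 */
	return port->chan_enable;
}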


--
i.

2022-02-25 15:58:41

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v5 05/13] net: wwan: t7xx: Add control port

On Wed, 23 Feb 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Control Port implements driver control messages such as modem-host
> handshaking, controls port enumeration, and handles exception messages.
>
> The handshaking process between the driver and the modem happens during
> the init sequence. The process involves the exchange of a list of
> supported runtime features to make sure that modem and host are ready
> to provide proper feature lists including port enumeration. Further
> features can be enabled and controlled in this handshaking process.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
>
> From a WWAN framework perspective:
> Reviewed-by: Loic Poulain <[email protected]>
> ---

> @@ -412,6 +651,7 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
...
> md->core_md.ready = false;
> + md->core_md.handshake_ongoing = false;

These assignments are not needed with mem from kzalloc.
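
I.e., a sketch, assuming t7xx_md_alloc() gets its memory from a
kzalloc-family allocator as the comment above implies (the devm_kzalloc()
call itself is illustrative):

md = devm_kzalloc(&t7xx_dev->pdev->dev, sizeof(*md), GFP_KERNEL);
if (!md)
	return NULL;
/* Everything in *md, including core_md.ready and
 * core_md.handshake_ongoing, is already zero/false here.
 */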

Reviewed-by: Ilpo Järvinen <[email protected]>


--
i.

2022-02-25 17:37:12

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v5 03/13] net: wwan: t7xx: Add core components

On Wed, 23 Feb 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Registers the t7xx device driver with the kernel. Setup all the core
> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> modem control operations, modem state machine, and build
> infrastructure.
>
> * PCIe layer code implements driver probe and removal.
> * MHCCIF provides interrupt channels to communicate events
> such as handshake, PM and port enumeration.
> * Modem control implements the entry point for modem init,
> reset and exit.
> * The modem status monitor is a state machine used by modem control
> to complete initialization and stop. It is also used to propagate
> exception events reported by other components.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
>
> From a WWAN framework perspective:
> Reviewed-by: Loic Poulain <[email protected]>
> ---

> + /* IPs enable interrupts when ready */
> + for (i = 0; i < EXT_INT_NUM; i++)
> + t7xx_pcie_mac_clear_int(t7xx_dev, i);

In v4, PCIE_MAC_MSIX_MSK_SET() wrote to IMASK_HOST_MSIX_SET_GRP0_0.
In v5, t7xx_pcie_mac_clear_int() writes to IMASK_HOST_MSIX_CLR_GRP0_0.

t7xx_pcie_mac_set_int() would write to IMASK_HOST_MSIX_SET_GRP0_0
matching what v4 did. So you probably want to call
t7xx_pcie_mac_set_int() instead of t7xx_pcie_mac_clear_int()?
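
I.e., a sketch of the suggested change, assuming the v4 behaviour of
writing to the ..._SET_... register at init is the intended one:

/* IPs enable interrupts when ready */
for (i = 0; i < EXT_INT_NUM; i++)
	t7xx_pcie_mac_set_int(t7xx_dev, i);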


--
i.

2022-02-26 02:29:25

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v5 07/13] net: wwan: t7xx: Data path HW layer

On Wed, 23 Feb 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Data Path Modem AP Interface (DPMAIF) HW layer provides HW abstraction
> for the upper layer (DPMAIF HIF). It implements functions for HW
> configuration, TX/RX control, and interrupt handling.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
>
> From a WWAN framework perspective:
> Reviewed-by: Loic Poulain <[email protected]>
> ---


> +static void t7xx_dpmaif_dl_set_ao_bat_rsv_length(struct dpmaif_hw_info *hw_info)
> +{
> + unsigned int value;
> +
> + value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
> + value &= ~DPMAIF_BAT_RSV_LEN_MSK;
> + value |= DPMAIF_HW_BAT_RSVLEN & DPMAIF_BAT_RSV_LEN_MSK;

Drop & DPMAIF_BAT_RSV_LEN_MSK

> +static void t7xx_dpmaif_dl_set_ao_frg_check_thres(struct dpmaif_hw_info *hw_info)
> +{
> + unsigned int value;
> +
> + value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
> + value &= ~DPMAIF_FRG_CHECK_THRES_MSK;
> + value |= (DPMAIF_HW_CHK_FRG_NUM & DPMAIF_FRG_CHECK_THRES_MSK);

Ditto.

> + value |= DPMAIF_DL_PIT_SEQ_VALUE & DPMAIF_DL_PIT_SEQ_MSK;

Ditto.
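
I.e., since the constants already fit their fields, masking the value is
redundant (a sketch of the first case; writing the value back to the
register is assumed to stay as in the patch):

value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
value &= ~DPMAIF_BAT_RSV_LEN_MSK;
value |= DPMAIF_HW_BAT_RSVLEN;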


--
i.

2022-03-01 15:50:11

by Ilpo Järvinen

[permalink] [raw]
Subject: Re: [PATCH net-next v5 10/13] net: wwan: t7xx: Introduce power management

On Wed, 23 Feb 2022, Ricardo Martinez wrote:

> From: Haijun Liu <[email protected]>
>
> Implements the suspend, resume, freeze, thaw, poweroff, and restore
> `dev_pm_ops` callbacks.
>
> From the host point of view, the t7xx driver is one entity. But the
> device has several modules that need to be addressed in different ways
> during power management (PM) flows.
> The driver uses the term 'PM entities' to refer to the 2 DPMAIF and
> 2 CLDMA HW blocks that need to be managed during PM flows.
> When a dev_pm_ops function is called, the PM entities list is iterated
> and the matching function is called for each entry in the list.
>
> Signed-off-by: Haijun Liu <[email protected]>
> Signed-off-by: Chandrashekar Devegowda <[email protected]>
> Co-developed-by: Ricardo Martinez <[email protected]>
> Signed-off-by: Ricardo Martinez <[email protected]>
> ---

> +static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
> +{
...
> + iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
> + return 0;

The success path does this same iowrite32 to DIS_ASPM_LOWPWR_CLR_0
as the failure paths. Is that intended?
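
If it is, a single exit path would make that explicit (a sketch only; the
drvdata lookup and the elided suspend steps are assumptions):

static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
{
	struct t7xx_pci_dev *t7xx_dev = pci_get_drvdata(pdev);
	int ret = 0;

	/* ... suspend the PM entities, setting ret on failure ... */

	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
	return ret;
}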

The function looks much better now!


--
i.

2022-03-07 07:58:41

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH net-next v5 05/13] net: wwan: t7xx: Add control port

On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
<[email protected]> wrote:
> From: Haijun Liu <[email protected]>
>
> Control Port implements driver control messages such as modem-host
> handshaking, controls port enumeration, and handles exception messages.
>
> The handshaking process between the driver and the modem happens during
> the init sequence. The process involves the exchange of a list of
> supported runtime features to make sure that modem and host are ready
> to provide proper feature lists including port enumeration. Further
> features can be enabled and controlled in this handshaking process.

[skipped]

> +struct feature_query {
> + __le32 head_pattern;
> + u8 feature_set[FEATURE_COUNT];
> + __le32 tail_pattern;
> +};
> +
> +static void t7xx_prepare_host_rt_data_query(struct t7xx_sys_info *core)
> +{
> + struct t7xx_port_static *port_static = core->ctl_port->port_static;
> + struct ctrl_msg_header *ctrl_msg_h;
> + struct feature_query *ft_query;
> + struct ccci_header *ccci_h;
> + struct sk_buff *skb;
> + size_t packet_size;
> +
> + packet_size = sizeof(*ccci_h) + sizeof(*ctrl_msg_h) + sizeof(*ft_query);
> + skb = __dev_alloc_skb(packet_size, GFP_KERNEL);
> + if (!skb)
> + return;
> +
> + skb_put(skb, packet_size);
> +
> + ccci_h = (struct ccci_header *)skb->data;
> + t7xx_ccci_header_init(ccci_h, 0, packet_size, port_static->tx_ch, 0);
> + ccci_h->status &= cpu_to_le32(~CCCI_H_SEQ_FLD);
> +
> + ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
> + t7xx_ctrl_msg_header_init(ctrl_msg_h, CTL_ID_HS1_MSG, 0, sizeof(*ft_query));
> +
> + ft_query = (struct feature_query *)(skb->data + sizeof(*ccci_h) + sizeof(*ctrl_msg_h));
> + ft_query->head_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
> + memcpy(ft_query->feature_set, core->feature_set, FEATURE_COUNT);
> + ft_query->tail_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
> +
> + /* Send HS1 message to device */
> + t7xx_port_proxy_send_skb(core->ctl_port, skb);
> +}

I do not care too much, but this code and many other places could be
greatly simplified. It looks like the modem communication protocol has
a layered design, and skb and its API are also designed to handle layered
protocols. The code just needs a bit of rearranging.

For example, to avoid manual accounting of each header in the stack,
skb allocation can be implemented using a stack of allocation
functions:

struct sk_buff *t7xx_port_alloc_skb(int payload)
{
struct sk_buff *skb = alloc_skb(payload + sizeof(struct ccci_header), ...);
if (skb)
skb_reserve(skb, sizeof(struct ccci_header));
return skb;
}

struct sk_buff *t7xx_ctrl_alloc_skb(int payload)
{
struct sk_buff *skb = t7xx_port_alloc_skb(payload + sizeof(struct ctrl_msg_header), ...);
if (skb)
skb_reserve(skb, sizeof(struct ctrl_msg_header));
return skb;
}

Message sending operation can also be perfectly stacked:

int t7xx_port_proxy_send_skb(*port, *skb)
{
struct ccci_header *ccci_h = skb_push(skb, sizeof(*ccci_h));
/* Build CCCI header (including seqno assignment) */
ccci_h->packet_len = cpu_to_le32(skb->len);
res = cldma_send_skb(..., skb, ...);
if (res)
return res;
/* Update seqno counter here */
return 0;
}

int t7xx_ctrl_send_msg(port, msg_id, skb)
{
int len = skb->len; /* Preserve payload len */
struct ctrl_msg_header *ctrl_msg_h = skb_push(skb, sizeof(*ctrl_msg_h));
/* Build ctrl msg header here */
ctrl_msg_h->data_length = cpu_to_le32(len);
return t7xx_port_proxy_send_skb(port, skb);
}

So the above feature request becomes as simple as:

void t7xx_prepare_host_rt_data_query(struct t7xx_sys_info *core)
{
struct feature_query *ft_query;
struct sk_buff *skb;

skb = t7xx_ctrl_alloc_skb(sizeof(*ft_query));
if (!skb)
return;
ft_query = skb_put(skb, sizeof(*ft_query));
/* Build features request here */
if (t7xx_ctrl_send_msg(core->ctl_port, CTL_ID_HS1_MSG, skb))
kfree_skb(skb);
}

Once the allocation and sending functions are implemented in a stacked
way, many other places can be simplified in a similar way.

[skipped]

> +static void t7xx_core_hk_handler(struct t7xx_modem *md, struct t7xx_fsm_ctl *ctl,
> + enum t7xx_fsm_event_state event_id,
> + enum t7xx_fsm_event_state err_detect)
> +{
> + struct t7xx_sys_info *core_info = &md->core_md;
> + struct device *dev = &md->t7xx_dev->pdev->dev;
> + struct t7xx_fsm_event *event, *event_next;
> + unsigned long flags;
> + void *event_data;
> + int ret;
> +
> + t7xx_prepare_host_rt_data_query(core_info);
> +
> + while (!kthread_should_stop()) {
> + bool event_received = false;
> +
> + spin_lock_irqsave(&ctl->event_lock, flags);
> + list_for_each_entry_safe(event, event_next, &ctl->event_queue, entry) {
> + if (event->event_id == err_detect) {
> + list_del(&event->entry);
> + spin_unlock_irqrestore(&ctl->event_lock, flags);
> + dev_err(dev, "Core handshake error event received\n");
> + goto err_free_event;
> + } else if (event->event_id == event_id) {
> + list_del(&event->entry);
> + event_received = true;
> + break;
> + }
> + }
> + spin_unlock_irqrestore(&ctl->event_lock, flags);
> +
> + if (event_received)
> + break;
> +
> + wait_event_interruptible(ctl->event_wq, !list_empty(&ctl->event_queue) ||
> + kthread_should_stop());
> + if (kthread_should_stop())
> + goto err_free_event;
> + }
> +
> + if (ctl->exp_flg)
> + goto err_free_event;
> +
> + event_data = (void *)event + sizeof(*event);

In the V2, the event structure has a data field. But then it was
dropped and now the attached data offset is manually calculated. Why
did you do this, why event->data is not suitable here?

> + ret = t7xx_parse_host_rt_data(ctl, core_info, dev, event_data, event->length);
> + if (ret) {
> + dev_err(dev, "Host failure parsing runtime data: %d\n", ret);
> + goto err_free_event;
> + }
> +
> + if (ctl->exp_flg)
> + goto err_free_event;
> +
> + ret = t7xx_prepare_device_rt_data(core_info, dev, event_data, event->length);
> + if (ret) {
> + dev_err(dev, "Device failure parsing runtime data: %d", ret);
> + goto err_free_event;
> + }
> +
> + core_info->ready = true;
> + core_info->handshake_ongoing = false;
> + wake_up(&ctl->async_hk_wq);
> +err_free_event:
> + kfree(event);
> +}

[skipped]

> +static int port_ctl_init(struct t7xx_port *port)
> +{
> + struct t7xx_port_static *port_static = port->port_static;
> +
> + port->skb_handler = &control_msg_handler;

The & is not necessary here and only misleads readers.

> + port->thread = kthread_run(port_ctl_rx_thread, port, "%s", port_static->name);
> + if (IS_ERR(port->thread)) {
> + dev_err(port->dev, "Failed to start port control thread\n");
> + return PTR_ERR(port->thread);
> + }
> +
> + port->rx_length_th = CTRL_QUEUE_MAXLEN;
> + return 0;
> +}

[skipped]

> -static struct t7xx_port_static t7xx_md_ports[1];
> +static struct t7xx_port_static t7xx_md_ports[] = {
> + {
> + .tx_ch = PORT_CH_CONTROL_TX,
> + .rx_ch = PORT_CH_CONTROL_RX,
> + .txq_index = Q_IDX_CTRL,
> + .rxq_index = Q_IDX_CTRL,
> + .txq_exp_index = 0,
> + .rxq_exp_index = 0,
> + .path_id = CLDMA_ID_MD,
> + .flags = 0,

A zero initializer is not needed here; a static variable is filled with
zeros automatically.

> + .ops = &ctl_port_ops,
> + .name = "t7xx_ctrl",
> + },
> +};

[skipped]

> +void t7xx_port_proxy_send_msg_to_md(struct port_proxy *port_prox, enum port_ch ch,
> + unsigned int msg, unsigned int ex_msg)

This function is called only from the control port code and only with
ch = PORT_CH_CONTROL_TX, so I would like to recommend to move it there
and drop the ch argument.

> +{
> + struct ctrl_msg_header *ctrl_msg_h;
> + struct ccci_header *ccci_h;
> + struct t7xx_port *port;
> + struct sk_buff *skb;
> + int ret;
> +
> + port = t7xx_proxy_get_port_by_ch(port_prox, ch);
> + if (!port)
> + return;
> +
> + skb = __dev_alloc_skb(sizeof(*ccci_h), GFP_KERNEL);
> + if (!skb)
> + return;
> +
> + if (ch == PORT_CH_CONTROL_TX) {
> + ccci_h = (struct ccci_header *)(skb->data);
> + t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA,
> + sizeof(*ctrl_msg_h) + sizeof(*ccci_h), ch, 0);
> + ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
> + t7xx_ctrl_msg_header_init(ctrl_msg_h, msg, ex_msg, 0);
> + skb_put(skb, sizeof(*ccci_h) + sizeof(*ctrl_msg_h));
> + } else {
> + ccci_h = skb_put(skb, sizeof(*ccci_h));
> + t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA, msg, ch, ex_msg);
> + }
> +
> + ret = t7xx_port_proxy_send_skb(port, skb);
> + if (ret) {
> + struct t7xx_port_static *port_static = port->port_static;
> +
> + dev_kfree_skb_any(skb);
> + dev_err(port->dev, "port%s send to MD fail\n", port_static->name);
> + }
> +}
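
Following the suggestion above to drop the ch argument, the control port
variant could shrink to something like this (a sketch; the helper name is
illustrative):

static void t7xx_port_ctl_send_msg_to_md(struct t7xx_port *port,
					 unsigned int msg, unsigned int ex_msg)
{
	/* Only the PORT_CH_CONTROL_TX branch of the quoted function
	 * remains; the channel lookup and the else branch go away.
	 */
	...
}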

--
Sergey

2022-03-07 09:12:26

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports

On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
<[email protected]> wrote:
> From: Chandrashekar Devegowda <[email protected]>
>
> Adds AT and MBIM ports to the port proxy infrastructure.
> The initialization method is responsible for creating the corresponding
> ports using the WWAN framework infrastructure. The implemented WWAN port
> operations are start, stop, and TX.

[skipped]

> +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
> +{
> + struct t7xx_port *port_private = wwan_port_get_drvdata(port);
> + size_t actual_len, alloc_size, txq_mtu = CLDMA_MTU;
> + struct t7xx_port_static *port_static;
> + unsigned int len, i, packets;
> + struct t7xx_fsm_ctl *ctl;
> + enum md_state md_state;
> +
> + len = skb->len;
> + if (!len || !port_private->rx_length_th || !port_private->chan_enable)
> + return -EINVAL;
> +
> + port_static = port_private->port_static;
> + ctl = port_private->t7xx_dev->md->fsm_ctl;
> + md_state = t7xx_fsm_get_md_state(ctl);
> + if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
> + dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
> + port_static->name, md_state);
> + return -ENODEV;
> + }
> +
> + alloc_size = min_t(size_t, txq_mtu, len + CCCI_HEADROOM);
> + actual_len = alloc_size - CCCI_HEADROOM;
> + packets = DIV_ROUND_UP(len, txq_mtu - CCCI_HEADROOM);
> +
> + for (i = 0; i < packets; i++) {
> + struct ccci_header *ccci_h;
> + struct sk_buff *skb_ccci;
> + int ret;
> +
> + if (packets > 1 && packets == i + 1) {
> + actual_len = len % (txq_mtu - CCCI_HEADROOM);
> + alloc_size = actual_len + CCCI_HEADROOM;
> + }

Why do you track the packet number? Why not track the offset in the
passed data? E.g.:

for (off = 0; off < len; off += chunklen) {
	chunklen = min(len - off, CLDMA_MTU - sizeof(struct ccci_header));
	skb_ccci = alloc_skb(chunklen + sizeof(struct ccci_header), ...);
	skb_put_data(skb_ccci, skb->data + off, chunklen);
	/* Send skb_ccci */
}

> + skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
> + if (!skb_ccci)
> + return -ENOMEM;
> +
> + ccci_h = skb_put(skb_ccci, sizeof(*ccci_h));
> + t7xx_ccci_header_init(ccci_h, 0, actual_len + sizeof(*ccci_h),
> + port_static->tx_ch, 0);
> + skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_HEADROOM), actual_len);
> + t7xx_port_proxy_set_tx_seq_num(port_private, ccci_h);
> +
> + ret = t7xx_port_send_skb_to_md(port_private, skb_ccci);
> + if (ret) {
> + dev_kfree_skb_any(skb_ccci);
> + dev_err(port_private->dev, "Write error on %s port, %d\n",
> + port_static->name, ret);
> + return ret;
> + }
> +
> + port_private->seq_nums[MTK_TX]++;

Sequence number tracking as well as CCCI header construction are
common operations, so why not move them to t7xx_port_send_skb_to_md()?
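E.g. (a sketch only; it assumes the caller reserves
sizeof(struct ccci_header) of headroom when allocating the skb):

int t7xx_port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb)
{
	struct ccci_header *ccci_h;

	/* Prepend the CCCI header and track the sequence number here */
	ccci_h = skb_push(skb, sizeof(*ccci_h));
	t7xx_ccci_header_init(ccci_h, 0, skb->len, port->port_static->tx_ch, 0);
	t7xx_port_proxy_set_tx_seq_num(port, ccci_h);
	port->seq_nums[MTK_TX]++;

	/* ... hand the skb to CLDMA as before ... */
}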

> + }
> +
> + dev_kfree_skb(skb);
> + return 0;
> +}

--
Sergey

2022-03-07 09:33:30

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH net-next v5 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem

Hello Ricardo,

On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
<[email protected]> wrote:
> t7xx is the PCIe host device driver for Intel 5G 5000 M.2 solution which
> is based on MediaTek's T700 modem to provide WWAN connectivity.
> The driver uses the WWAN framework infrastructure to create the following
> control ports and network interfaces:
> * /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
> Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
> with [3][4] can use it to enable data communication towards WWAN.
> * /dev/wwan0at0 - Interface that supports AT commands.
> * wwan0 - Primary network interface for IP traffic.
>
> The main blocks in t7xx driver are:
> * PCIe layer - Implements probe, removal, and power management callbacks.
> * Port-proxy - Provides a common interface to interact with different types
> of ports such as WWAN ports.
> * Modem control & status monitor - Implements the entry point for modem
> initialization, reset and exit, as well as exception handling.
> * CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
> control messages to the modem using MediaTek's CCCI (Cross-Core
> Communication Interface) protocol.
> * DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
> uplink and downlink queues for the data path. The data exchange takes
> place using circular buffers to share data buffer addresses and metadata
> to describe the packets.
> * MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
> for bidirectional event notification such as handshake, exception, PM and
> port enumeration.
>
> The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
> option which depends on CONFIG_WWAN.
> This driver was originally developed by MediaTek. Intel adapted t7xx to
> the WWAN framework, optimized and refactored the driver source code in close
> collaboration with MediaTek. This will enable getting the t7xx driver on the
> Approved Vendor List for interested OEM's and ODM's productization plans
> with Intel 5G 5000 M.2 solution.
>
> List of contributors:
> Amir Hanania <[email protected]>
> Andriy Shevchenko <[email protected]>
> Chandrashekar Devegowda <[email protected]>
> Dinesh Sharma <[email protected]>
> Eliot Lee <[email protected]>
> Haijun Liu <[email protected]>
> M Chetan Kumar <[email protected]>
> Mika Westerberg <[email protected]>
> Moises Veleta <[email protected]>
> Pierre-louis Bossart <[email protected]>
> Chiranjeevi Rapolu <[email protected]>
> Ricardo Martinez <[email protected]>
> Madhusmita Sahu <[email protected]>
> Muralidharan Sethuraman <[email protected]>
> Soumya Prakash Mishra <[email protected]>
> Sreehari Kancharla <[email protected]>
> Suresh Nagaraj <[email protected]>
>
> [1] https://www.freedesktop.org/software/libmbim/
> [2] https://www.freedesktop.org/software/ModemManager/
> [3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
> [4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523
>
> v5:
> - Update Intel's copyright years to 2021-2022.
> - Remove circular dependency between DPMAIF HW (07) and HIF (08).
> - Keep separate patches for CLDMA (02) and Core (03)
> but improve the code split by decoupling CLDMA from
> modem ops and cleaning up t7xx_common.h.
> - Rename ID_CLDMA0/ID_CLDMA1 to CLDMA_ID_AP/CLDMA_ID_MD.
> - Consistently use CLDMA's ring_lock to protect tr_ring.
> - Free resources first and then print messages.
> - Implement suggested name changes.
> - Do not explicitly include dev_printk.h.
> - Remove redundant dev_err()s.
> - Fix possible memory leak during probe.
> - Remove infrastructure for legacy interrupts.
> - Remove unused macros and variables, including those that
> can be replaced with constants.
> - Remove PCIE_MAC_MSIX_MSK_SET macro which is duplicated code.
> - Refactor __t7xx_pci_pm_suspend() for clarity.
> - Refactor t7xx_cldma_rx_ring_init() and t7xx_cldma_tx_ring_init().
> - Do not use & for function callbacks.
> - Declare a structure to access skb->cb[] data.
> - Use skb_put_data instead of memcpy.
> - No need to use kfree_sensitive.
> - Use dev_kfree_skb() instead of kfree_skb().
> - Refactor t7xx_prepare_device_rt_data() to remove potential leaks,
> avoid unneeded memset and keep rt_data and packet_size updates
> inside the same 'if' block.
> - Set port's rx_length_th back to 0 during uninit.
> - Remove unneeded 'blocking' parameter from t7xx_cldma_send_skb().
> - Return -EIO in t7xx_cldma_send_skb() if the queue is inactive.
> - Refactor t7xx_cldma_qs_are_active() to use pci_device_is_present().
> - Simplify t7xx_cldma_stop_q() and rename it to t7xx_cldma_stop_all_qs().
> - Fix potential leaks in t7xx_cldma_init().
> - Improve error handling in fsm_append_event and fsm_routine_starting().
> - Propagate return codes from fsm_append_cmd() and t7xx_fsm_append_event().
> - Refactor fsm_wait_for_event() to avoid unnecessary sleep.
> - Create the WWAN ports and net device only after the modem is in
> the ready state.
> - Refactor t7xx_port_proxy_recv_skb() and port_recv_skb().
> - Rename t7xx_port_check_rx_seq_num() as t7xx_port_next_rx_seq_num()
> and fix the seq_num logic to handle overflows.
> - Declare seq_nums as u16 instead of short.
> - Use unsigned int for local indexes.
> - Use min_t instead of the ternary operator.
> - Refactor the loop in t7xx_dpmaif_rx_data_collect() to avoid a dead
> condition check.
> - Use a bitmap (bat_bitmap) instead of an array to keep track of
> the DRB status. Used in t7xx_dpmaif_avail_pkt_bat_cnt().
> - Refactor t7xx_dpmaif_tx_send_skb() to protect tx_submit_skb_cnt
> with spinlock and remove the misleading tx_drb_available variable.
> - Consolidate bit operations before endianness conversion.
> - Use C bit fields in dpmaif_drb_skb struct which is not HW related.
> - Add back the que_started check in t7xx_select_tx_queue().
> - Create a helper function to get the DRB count.
> - Simplify the use of 'usage' during t7xx_ccmni_close().
> - Enforce CCMNI MTU selection with BUILD_BUG_ON() instead of a comment.
> - Remove t7xx_ccmni_ctrl->capability parameter which remains constant.
>
> v4:
> - Implement list_prev_entry_circular() and list_next_entry_circular() macros.
> - Remove inline from all c files.
> - Define ioread32_poll_timeout_atomic() helper macro.
> - Fix return code for WWAN port tx op.
> - Allow AT commands fragmentation same as MBIM commands.
> - Introduce t7xx_common.h file in the first patch.
> - Rename functions and variables as suggested in v3.
> - Reduce code duplication by creating fsm_wait_for_event() helper function.
> - Remove unneeded dev_err in t7xx_fsm_clr_event().
> - Remove unused variable last_state from struct t7xx_fsm_ctl.
> - Remove unused variable txq_select_times from struct dpmaif_ctrl.
> - Replace ETXTBSY with EBUSY.
> - Refactor t7xx_dpmaif_rx_buf_alloc() to remove an unneeded allocation.
> - Fix potential leak at t7xx_dpmaif_rx_frag_alloc().
> - Simplify return value handling at t7xx_dpmaif_rx_start().
> - Add a helper to handle the common part of CCCI header initialization.
> - Make sure interrupts are enabled during PM resume.
> - Add a parameter to t7xx_fsm_append_cmd() to tell if it is in interrupt context.
>
> v3:
> - Avoid unneeded ping-pong changes between patches.
> - Use t7xx_ prefix in functions.
> - Use t7xx_ prefix in generic structs where mtk_ or ccci prefix was used.
> - Update Authors/Contributors header.
> - Remove skb pools used for control path.
> - Remove skb pools used for RX data path.
> - Do not use dedicated TX queue for ACK-only packets.
> - Remove __packed attribute from GPD structs.
> - Remove the infrastructure for test and debug ports.
> - Use the skb control buffer to store metadata.
> - Get the IP packet type from RX PIT.
> - Merge variable declaration and simple assignments.
> - Use preferred coding patterns.
> - Remove global variables.
> - Declare HW facing structure members as little endian.
> - Rename goto tags to describe what is going to be done.
> - Do not use variable length arrays.
> - Remove unneeded blank lines from source code and kdoc headers.
> - Use C99 initialization format for port-proxy ports.
> - Clean up comments.
> - Review included headers.
> - Better use of 100 column limit.
> - Remove unneeded mb() in CLDMA.
> - Remove unneeded spin locks and atomics.
> - Handle read_poll_timeout error.
> - Use dev_err_ratelimited() where required.
> - Fix resource leak when requesting IRQs.
> - Use generic DEFAULT_TX_QUEUE_LEN instead of a custom macro.
> - Use ETH_DATA_LEN instead of defining WWAN_DEFAULT_MTU.
> - Use sizeof() instead of defines when the size of structures is required.
> - Remove unneeded code from netdev:
> No need to configure HW address length
> No need to implement .ndo_change_mtu
> Remove random address generation
> - Code simplifications by using kernel provided functions and macros such as:
> module_pci_driver
> PTR_ERR_OR_ZERO
> for_each_set_bit
> pci_device_is_present
> skb_queue_purge
> list_prev_entry
> __ffs64
>
> v2:
> - Replace pdev->driver->name with dev_driver_string(&pdev->dev).
> - Replace random_ether_addr() with eth_random_addr().
> - Update kernel-doc comment for enum data_policy.
> - Indicate the driver is 'Supported' instead of 'Maintained'.
> - Fix the Signed-off-by and Co-developed-by tags in the patches.
> - Added authors and contributors in the top comment of the src files.

Sorry for the delay in review. There were a lot of changes from version
to version, and it took some time to dig through them. I must admit that
the driver now looks much better. Good job!

Please find per-patch comments.

--
Sergey

2022-03-07 09:44:56

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH net-next v5 03/13] net: wwan: t7xx: Add core components

On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
<[email protected]> wrote:
> From: Haijun Liu <[email protected]>
>
> Registers the t7xx device driver with the kernel. Sets up all the core
> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> modem control operations, modem state machine, and build
> infrastructure.
>
> * PCIe layer code implements driver probe and removal.
> * MHCCIF provides interrupt channels to communicate events
> such as handshake, PM and port enumeration.
> * Modem control implements the entry point for modem init,
> reset and exit.
> * The modem status monitor is a state machine used by modem control
> to complete initialization and stop. It is used also to propagate
> exception events reported by other components.

[skipped]

> +static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
> +{
> + struct device *dev = &t7xx_dev->pdev->dev;
> + struct t7xx_modem *md;
> +
> + md = devm_kzalloc(dev, sizeof(*md), GFP_KERNEL);
> + if (!md)
> + return NULL;
> +
> + md->t7xx_dev = t7xx_dev;
> + t7xx_dev->md = md;
> + md->core_md.ready = false;
> + spin_lock_init(&md->exp_lock);
> + md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
> + 0, "md_hk_wq");
> + if (!md->handshake_wq)
> + return NULL;
> +
> + INIT_WORK(&md->handshake_work, t7xx_md_hk_wq);
> + return md;
> +}
> +
> +int t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
> +{
> + struct t7xx_modem *md = t7xx_dev->md;
> +
> + md->md_init_finish = false;
> + md->exp_id = 0;
> + spin_lock_init(&md->exp_lock);

This looks like a duplicated initialization; the lock was first
initialized in t7xx_md_alloc() above.

> + t7xx_fsm_reset(md);
> + t7xx_cldma_reset(md->md_ctrl[CLDMA_ID_MD]);
> + md->md_init_finish = true;
> + return 0;
> +}

[skipped]

> +void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count)
> +{
> + u32 val;
> +
> + val = ffs(irq_count) * 2 - 1;

Move this initialization to the variable declaration.
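I.e.:

	u32 val = ffs(irq_count) * 2 - 1;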

> + iowrite32(val, IREG_BASE(t7xx_dev) + T7XX_PCIE_CFG_MSIX);
> +}

--
Sergey

2022-03-08 05:46:58

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v5 10/13] net: wwan: t7xx: Introduce power management


On 3/1/2022 5:26 AM, Ilpo Järvinen wrote:
> On Wed, 23 Feb 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <[email protected]>
>>
>> Implements suspend, resume, freeze, thaw, poweroff, and restore
>> `dev_pm_ops` callbacks.
>>
>> From the host point of view, the t7xx driver is one entity. But the
>> device has several modules that need to be addressed in different ways
>> during power management (PM) flows.
>> The driver uses the term 'PM entities' to refer to the 2 DPMA and
>> 2 CLDMA HW blocks that need to be managed during PM flows.
>> When a dev_pm_ops function is called, the PM entities list is iterated
>> and the matching function is called for each entry in the list.
>>
>> Signed-off-by: Haijun Liu <[email protected]>
>> Signed-off-by: Chandrashekar Devegowda <[email protected]>
>> Co-developed-by: Ricardo Martinez <[email protected]>
>> Signed-off-by: Ricardo Martinez <[email protected]>
>> ---
>> +static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
>> +{
> ...
>> + iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
>> + return 0;
> The success path does this same iowrite32 to DIS_ASPM_LOWPWR_CLR_0
> as the failure paths. Is that intended?

Yes, that's intended.

This function disables low power mode at the beginning, and it has to
re-enable it before returning, regardless of the success or failure
path.

The next iteration will contain some naming changes to avoid double
negatives:

- iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+ iowrite32(T7XX_L1_BIT(0), IREG_BASE(t7xx_dev) + DISABLE_ASPM_LOWPWR);
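
In outline, the resulting flow would be (hypothetical helper names,
sketch only):

static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
{
	struct t7xx_pci_dev *t7xx_dev = pci_get_drvdata(pdev);
	int ret;

	/* Keep the device out of low power mode during the PM flow */
	t7xx_disable_aspm_lowpwr(t7xx_dev);		/* hypothetical */

	ret = t7xx_suspend_pm_entities(t7xx_dev);	/* hypothetical */

	/* Re-enable low power mode on both success and failure paths */
	t7xx_enable_aspm_lowpwr(t7xx_dev);		/* hypothetical */
	return ret;
}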


> The function looks much better now!
>
>

2022-03-08 13:55:54

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v5 03/13] net: wwan: t7xx: Add core components


On 2/25/2022 3:10 AM, Ilpo Järvinen wrote:
> On Wed, 23 Feb 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <[email protected]>
>>
>> Registers the t7xx device driver with the kernel. Sets up all the core
>> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
>> modem control operations, modem state machine, and build
>> infrastructure.
>>
>> * PCIe layer code implements driver probe and removal.
>> * MHCCIF provides interrupt channels to communicate events
>> such as handshake, PM and port enumeration.
>> * Modem control implements the entry point for modem init,
>> reset and exit.
>> * The modem status monitor is a state machine used by modem control
>> to complete initialization and stop. It is used also to propagate
>> exception events reported by other components.
>>
>> Signed-off-by: Haijun Liu <[email protected]>
>> Signed-off-by: Chandrashekar Devegowda <[email protected]>
>> Co-developed-by: Ricardo Martinez <[email protected]>
>> Signed-off-by: Ricardo Martinez <[email protected]>
>>
>> From a WWAN framework perspective:
>> Reviewed-by: Loic Poulain <[email protected]>
>> ---
>> + /* IPs enable interrupts when ready */
>> + for (i = 0; i < EXT_INT_NUM; i++)
>> + t7xx_pcie_mac_clear_int(t7xx_dev, i);
> In v4, PCIE_MAC_MSIX_MSK_SET() wrote to IMASK_HOST_MSIX_SET_GRP0_0.
> In v5, t7xx_pcie_mac_clear_int() writes to IMASK_HOST_MSIX_CLR_GRP0_0.
>
> t7xx_pcie_mac_set_int() would write to IMASK_HOST_MSIX_SET_GRP0_0
> matching to what v4 did. So you probably want to call
> t7xx_pcie_mac_set_int() instead of t7xx_pcie_mac_clear_int()?
Yes, this should call t7xx_pcie_mac_set_int().
>

2022-03-09 02:46:26

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports


On 3/6/2022 6:56 PM, Sergey Ryazanov wrote:
> On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
> <[email protected]> wrote:
>> From: Chandrashekar Devegowda <[email protected]>
>>
>> Adds AT and MBIM ports to the port proxy infrastructure.
>> The initialization method is responsible for creating the corresponding
>> ports using the WWAN framework infrastructure. The implemented WWAN port
>> operations are start, stop, and TX.
> [skipped]
>
>> +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
>> +{
>> + struct t7xx_port *port_private = wwan_port_get_drvdata(port);
>> + size_t actual_len, alloc_size, txq_mtu = CLDMA_MTU;
>> + struct t7xx_port_static *port_static;
>> + unsigned int len, i, packets;
>> + struct t7xx_fsm_ctl *ctl;
>> + enum md_state md_state;
>> +
>> + len = skb->len;
>> + if (!len || !port_private->rx_length_th || !port_private->chan_enable)
>> + return -EINVAL;
>> +
>> + port_static = port_private->port_static;
>> + ctl = port_private->t7xx_dev->md->fsm_ctl;
>> + md_state = t7xx_fsm_get_md_state(ctl);
>> + if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
>> + dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
>> + port_static->name, md_state);
>> + return -ENODEV;
>> + }
>> +
>> + alloc_size = min_t(size_t, txq_mtu, len + CCCI_HEADROOM);
>> + actual_len = alloc_size - CCCI_HEADROOM;
>> + packets = DIV_ROUND_UP(len, txq_mtu - CCCI_HEADROOM);
>> +
>> + for (i = 0; i < packets; i++) {
>> + struct ccci_header *ccci_h;
>> + struct sk_buff *skb_ccci;
>> + int ret;
>> +
>> + if (packets > 1 && packets == i + 1) {
>> + actual_len = len % (txq_mtu - CCCI_HEADROOM);
>> + alloc_size = actual_len + CCCI_HEADROOM;
>> + }
> Why do you track the packet number? Why not track the offset in the
> passed data? E.g.:
>
> for (off = 0; off < len; off += chunklen) {
> chunklen = min(len - off, CLDMA_MTU - sizeof(struct ccci_header));
> skb_ccci = alloc_skb(chunklen + sizeof(struct ccci_header), ...);
> skb_put_data(skb_ccci, skb->data + off, chunklen);
> /* Send skb_ccci */
> }
Sure, I'll make that change.
>> + skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
>> + if (!skb_ccci)
>> + return -ENOMEM;
>> +
>> + ccci_h = skb_put(skb_ccci, sizeof(*ccci_h));
>> + t7xx_ccci_header_init(ccci_h, 0, actual_len + sizeof(*ccci_h),
>> + port_static->tx_ch, 0);
>> + skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_HEADROOM), actual_len);
>> + t7xx_port_proxy_set_tx_seq_num(port_private, ccci_h);
>> +
>> + ret = t7xx_port_send_skb_to_md(port_private, skb_ccci);
>> + if (ret) {
>> + dev_kfree_skb_any(skb_ccci);
>> + dev_err(port_private->dev, "Write error on %s port, %d\n",
>> + port_static->name, ret);
>> + return ret;
>> + }
>> +
>> + port_private->seq_nums[MTK_TX]++;
> Sequence number tracking as well as CCCI header construction are
> common operations, so why not move them to t7xx_port_send_skb_to_md()?

Sequence number should be set as part of CCCI header construction.

I think it's a bit more readable to initialize the CCCI header right
after the corresponding skb_put(). Not a big deal, any thoughts?

Note that the upcoming fw update feature doesn't require a CCCI header,
so we could rename the TX function to t7xx_port_send_ccci_skb_to_md();
this would give a hint that it is taking care of the CCCI header.
>> + }
>> +
>> + dev_kfree_skb(skb);
>> + return 0;
>> +}
> --
> Sergey

2022-03-10 02:15:43

by Sergey Ryazanov

[permalink] [raw]
Subject: Re: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports

On Wed, Mar 9, 2022 at 3:02 AM Martinez, Ricardo
<[email protected]> wrote:
> On 3/6/2022 6:56 PM, Sergey Ryazanov wrote:
>> On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
>> <[email protected]> wrote:
>>> From: Chandrashekar Devegowda <[email protected]>
>>>
>>> Adds AT and MBIM ports to the port proxy infrastructure.
>>> The initialization method is responsible for creating the corresponding
>>> ports using the WWAN framework infrastructure. The implemented WWAN port
>>> operations are start, stop, and TX.
>> [skipped]
>>
>>> +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
>>> +{
>>> + struct t7xx_port *port_private = wwan_port_get_drvdata(port);
>>> + size_t actual_len, alloc_size, txq_mtu = CLDMA_MTU;
>>> + struct t7xx_port_static *port_static;
>>> + unsigned int len, i, packets;
>>> + struct t7xx_fsm_ctl *ctl;
>>> + enum md_state md_state;
>>> +
>>> + len = skb->len;
>>> + if (!len || !port_private->rx_length_th || !port_private->chan_enable)
>>> + return -EINVAL;
>>> +
>>> + port_static = port_private->port_static;
>>> + ctl = port_private->t7xx_dev->md->fsm_ctl;
>>> + md_state = t7xx_fsm_get_md_state(ctl);
>>> + if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
>>> + dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
>>> + port_static->name, md_state);
>>> + return -ENODEV;
>>> + }
>>> +
>>> + alloc_size = min_t(size_t, txq_mtu, len + CCCI_HEADROOM);
>>> + actual_len = alloc_size - CCCI_HEADROOM;
>>> + packets = DIV_ROUND_UP(len, txq_mtu - CCCI_HEADROOM);
>>> +
>>> + for (i = 0; i < packets; i++) {
>>> + struct ccci_header *ccci_h;
>>> + struct sk_buff *skb_ccci;
>>> + int ret;
>>> +
>>> + if (packets > 1 && packets == i + 1) {
>>> + actual_len = len % (txq_mtu - CCCI_HEADROOM);
>>> + alloc_size = actual_len + CCCI_HEADROOM;
>>> + }
>>
>> Why do you track the packet number? Why not track the offset in the
>> passed data? E.g.:
>>
>> for (off = 0; off < len; off += chunklen) {
>> chunklen = min(len - off, CLDMA_MTU - sizeof(struct ccci_header));
>> skb_ccci = alloc_skb(chunklen + sizeof(struct ccci_header), ...);
>> skb_put_data(skb_ccci, skb->data + off, chunklen);
>> /* Send skb_ccci */
>> }
>
> Sure, I'll make that change.
>
>>> + skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
>>> + if (!skb_ccci)
>>> + return -ENOMEM;
>>> +
>>> + ccci_h = skb_put(skb_ccci, sizeof(*ccci_h));
>>> + t7xx_ccci_header_init(ccci_h, 0, actual_len + sizeof(*ccci_h),
>>> + port_static->tx_ch, 0);
>>> + skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_HEADROOM), actual_len);
>>> + t7xx_port_proxy_set_tx_seq_num(port_private, ccci_h);
>>> +
>>> + ret = t7xx_port_send_skb_to_md(port_private, skb_ccci);
>>> + if (ret) {
>>> + dev_kfree_skb_any(skb_ccci);
>>> + dev_err(port_private->dev, "Write error on %s port, %d\n",
>>> + port_static->name, ret);
>>> + return ret;
>>> + }
>>> +
>>> + port_private->seq_nums[MTK_TX]++;
>>
>> Sequence number tracking as well as CCCI header construction are
>> common operations, so why not move them to t7xx_port_send_skb_to_md()?
>
> Sequence number should be set as part of CCCI header construction.
>
> I think it's a bit more readable to initialize the CCCI header right
> after the corresponding skb_put(). Not a big deal, any thoughts?

I do not _think_ creating the CCCI header in the WWAN or CTRL port
functions is any good idea. In case of stacked protocols, each layer
should create its own header, pass the packet down the stack, and then
a next layer will create a next header.

In case of the CTRL port, this means that the control port code should
take an opaque data block from an upper layer (e.g. features request),
prepend it with a control msg header, and pass it down the stack to
the port proxy layer, where the CCCI header will be prepended.

In case of a WWAN port, all headers are passed from user space, so there
is nothing to prepend. The only remaining function is to fragment the
user input and then pass all the fragments to the port proxy layer,
where the CCCI header will be prepended.

This way, you do not overload the CTRL/WWAN port with code of other
protocols (i.e. CCCI), and you reduce code duplication, which in itself
improves the code maintainability and eases future development. Creating
a CCCI header at the WWAN port layer is like forcing a user to manually
create IP and UDP headers before writing a data block into a network
socket :)
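
In a sketch (hypothetical helper, just to illustrate; the port proxy
layer would then skb_push() the CCCI header for every port type):

static int t7xx_ctrl_send_msg(struct t7xx_port *port, unsigned int msg,
			      unsigned int ex_msg)
{
	struct ctrl_msg_header *ctrl_msg_h;
	struct sk_buff *skb;

	skb = __dev_alloc_skb(sizeof(struct ccci_header) + sizeof(*ctrl_msg_h),
			      GFP_KERNEL);
	if (!skb)
		return -ENOMEM;

	skb_reserve(skb, sizeof(struct ccci_header));	/* room for lower layer */
	ctrl_msg_h = skb_put(skb, sizeof(*ctrl_msg_h));
	t7xx_ctrl_msg_header_init(ctrl_msg_h, msg, ex_msg, 0);

	return t7xx_port_send_skb_to_md(port, skb);	/* CCCI header added there */
}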

Anyway, it is up to you to decide exactly how to create headers and
assign sequence numbers. I just wanted to point out the code
inconsistency. It does not make the code wrong, it just makes the code
look stranger.

> Note that the upcoming fw update feature doesn't require a CCCI header,
> so we could rename the TX function as t7xx_port_send_ccci_skb_to_md(),
> this would give a hint that it is taking care of the CCCI header.

Does this mean the firmware upgrade does not utilize the channel id,
and just pushes data directly to a specific CLDMA queue? In that case
it looks like the firmware upgrade code needs to entirely bypass the
port proxy layer and communicate directly with CLDMA, doesn't it?

>>> + }
>>> +
>>> + dev_kfree_skb(skb);
>>> + return 0;
>>> +}

--
Sergey

2022-03-11 23:38:38

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v5 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports


On 3/9/2022 4:13 PM, Sergey Ryazanov wrote:
> On Wed, Mar 9, 2022 at 3:02 AM Martinez, Ricardo
> <[email protected]> wrote:
>> On 3/6/2022 6:56 PM, Sergey Ryazanov wrote:
>>> On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
>>> <[email protected]> wrote:
>>>> From: Chandrashekar Devegowda <[email protected]>
>>>>
>>>> Adds AT and MBIM ports to the port proxy infrastructure.
>>>> The initialization method is responsible for creating the corresponding
>>>> ports using the WWAN framework infrastructure. The implemented WWAN port
>>>> operations are start, stop, and TX.
>>> [skipped]
>>>
>>>> +static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
>>>> +{
>>>> + struct t7xx_port *port_private = wwan_port_get_drvdata(port);
>>>> + size_t actual_len, alloc_size, txq_mtu = CLDMA_MTU;
>>>> + struct t7xx_port_static *port_static;
>>>> + unsigned int len, i, packets;
>>>> + struct t7xx_fsm_ctl *ctl;
>>>> + enum md_state md_state;
>>>> +
>>>> + len = skb->len;
>>>> + if (!len || !port_private->rx_length_th || !port_private->chan_enable)
>>>> + return -EINVAL;
>>>> +
>>>> + port_static = port_private->port_static;
>>>> + ctl = port_private->t7xx_dev->md->fsm_ctl;
>>>> + md_state = t7xx_fsm_get_md_state(ctl);
>>>> + if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
>>>> + dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
>>>> + port_static->name, md_state);
>>>> + return -ENODEV;
>>>> + }
>>>> +
>>>> + alloc_size = min_t(size_t, txq_mtu, len + CCCI_HEADROOM);
>>>> + actual_len = alloc_size - CCCI_HEADROOM;
>>>> + packets = DIV_ROUND_UP(len, txq_mtu - CCCI_HEADROOM);
>>>> +
>>>> + for (i = 0; i < packets; i++) {
>>>> + struct ccci_header *ccci_h;
>>>> + struct sk_buff *skb_ccci;
>>>> + int ret;
>>>> +
>>>> + if (packets > 1 && packets == i + 1) {
>>>> + actual_len = len % (txq_mtu - CCCI_HEADROOM);
>>>> + alloc_size = actual_len + CCCI_HEADROOM;
>>>> + }
>>> Why do you track the packet number? Why not track the offset in the
>>> passed data? E.g.:
>>>
>>> for (off = 0; off < len; off += chunklen) {
>>> chunklen = min(len - off, CLDMA_MTU - sizeof(struct ccci_header));
>>> skb_ccci = alloc_skb(chunklen + sizeof(struct ccci_header), ...);
>>> skb_put_data(skb_ccci, skb->data + off, chunklen);
>>> /* Send skb_ccci */
>>> }
>> Sure, I'll make that change.
>>
>>>> + skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
>>>> + if (!skb_ccci)
>>>> + return -ENOMEM;
>>>> +
>>>> + ccci_h = skb_put(skb_ccci, sizeof(*ccci_h));
>>>> + t7xx_ccci_header_init(ccci_h, 0, actual_len + sizeof(*ccci_h),
>>>> + port_static->tx_ch, 0);
>>>> + skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_HEADROOM), actual_len);
>>>> + t7xx_port_proxy_set_tx_seq_num(port_private, ccci_h);
>>>> +
>>>> + ret = t7xx_port_send_skb_to_md(port_private, skb_ccci);
>>>> + if (ret) {
>>>> + dev_kfree_skb_any(skb_ccci);
>>>> + dev_err(port_private->dev, "Write error on %s port, %d\n",
>>>> + port_static->name, ret);
>>>> + return ret;
>>>> + }
>>>> +
>>>> + port_private->seq_nums[MTK_TX]++;
>>> Sequence number tracking as well as CCCI header construction are
>>> common operations, so why not move them to t7xx_port_send_skb_to_md()?
>> Sequence number should be set as part of CCCI header construction.
>>
>> I think it's a bit more readable to initialize the CCCI header right
>> after the corresponding skb_put(). Not a big deal, any thoughts?
> I do not _think_ creating the CCCI header in the WWAN or CTRL port
> functions is any good idea. In case of stacked protocols, each layer
> should create its own header, pass the packet down the stack, and then
> a next layer will create a next header.
>
> In case of the CTRL port, this means that the control port code should
> take an opaque data block from an upper layer (e.g. features request),
> prepend it with a control msg header, and pass it down the stack to
> the port proxy layer, where the CCCI header will be prepended.
>
> In case a WWAN port, all headers are passed from user space, so there
> is nothing to prepend. And the only remaining function is to fragment
> a user input, and then pass all the fragments to the port proxy
> layer, where the CCCI header will be prepended.
>
> This way, you do not overload the CTRL/WWAN port with code of other
> protocols (i.e. CCCI), reduce code duplication. Which in itself
> improves the code maintainability and future development. Creating a
> CCCI header at the WWAN port layer is like forcing a user to manually
> create IP and UDP headers before writing a data block into a network
> socket :)
>
> Anyway, it is up to you to decide exactly how to create headers and
> assign sequence numbers. I just wanted to point out the code
> inconsistency. It does not make the code wrong, it just makes the code
> look stranger.
Agree, the next iteration will implement a layered approach.
>> Note that the upcoming fw update feature doesn't require a CCCI header,
>> so we could rename the TX function as t7xx_port_send_ccci_skb_to_md(),
>> this would give a hint that it is taking care of the CCCI header.
> Does this mean the firmware upgrade does not utilize the channel id,
> and just pushes data directly to a specific CLDMA queue? In that case
> it looks like the firmware upgrade code needs to entirely bypass the
> port proxy layer and communicate directly with CLDMA. Isn't it?

It could bypass the port proxy, or it could use a new helper function
implemented for the layered approach; this function
(t7xx_port_send_raw_skb) sends an skb to the right CLDMA instance and
queue based on the port configuration.
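
A sketch of that helper, based on the names used in this thread (not
final code):

int t7xx_port_send_raw_skb(struct t7xx_port *port, struct sk_buff *skb)
{
	struct t7xx_port_static *port_static = port->port_static;

	/* Route directly to the CLDMA instance/queue configured for this port */
	return t7xx_cldma_send_skb(port->t7xx_dev->md->md_ctrl[port_static->path_id],
				   port_static->txq_index, skb);
}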

>
>>>> + }
>>>> +
>>>> + dev_kfree_skb(skb);
>>>> + return 0;
>>>> +}

2022-03-17 19:46:04

by Martinez, Ricardo

[permalink] [raw]
Subject: Re: [PATCH net-next v5 05/13] net: wwan: t7xx: Add control port


Hi Sergey,

On 3/6/2022 6:55 PM, Sergey Ryazanov wrote:
> On Thu, Feb 24, 2022 at 1:35 AM Ricardo Martinez
> <[email protected]> wrote:
>> From: Haijun Liu <[email protected]>
>>
>> Control Port implements driver control messages such as modem-host
>> handshaking, controls port enumeration, and handles exception messages.
>>
>> The handshaking process between the driver and the modem happens during
>> the init sequence. The process involves the exchange of a list of
>> supported runtime features to make sure that modem and host are ready
>> to provide proper feature lists including port enumeration. Further
>> features can be enabled and controlled in this handshaking process.
>>
...
>> +static void t7xx_core_hk_handler(struct t7xx_modem *md, struct t7xx_fsm_ctl *ctl,
>> + enum t7xx_fsm_event_state event_id,
>> + enum t7xx_fsm_event_state err_detect)
>> +{
>> + struct t7xx_sys_info *core_info = &md->core_md;
>> + struct device *dev = &md->t7xx_dev->pdev->dev;
>> + struct t7xx_fsm_event *event, *event_next;
>> + unsigned long flags;
>> + void *event_data;
>> + int ret;
>> +
>> + t7xx_prepare_host_rt_data_query(core_info);
>> +
>> + while (!kthread_should_stop()) {
>> + bool event_received = false;
>> +
>> + spin_lock_irqsave(&ctl->event_lock, flags);
>> + list_for_each_entry_safe(event, event_next, &ctl->event_queue, entry) {
>> + if (event->event_id == err_detect) {
>> + list_del(&event->entry);
>> + spin_unlock_irqrestore(&ctl->event_lock, flags);
>> + dev_err(dev, "Core handshake error event received\n");
>> + goto err_free_event;
>> + } else if (event->event_id == event_id) {
>> + list_del(&event->entry);
>> + event_received = true;
>> + break;
>> + }
>> + }
>> + spin_unlock_irqrestore(&ctl->event_lock, flags);
>> +
>> + if (event_received)
>> + break;
>> +
>> + wait_event_interruptible(ctl->event_wq, !list_empty(&ctl->event_queue) ||
>> + kthread_should_stop());
>> + if (kthread_should_stop())
>> + goto err_free_event;
>> + }
>> +
>> + if (ctl->exp_flg)
>> + goto err_free_event;
>> +
>> + event_data = (void *)event + sizeof(*event);
> In the V2, the event structure has a data field. But then it was
> dropped and now the attached data offset is manually calculated. Why
> did you do this, and why is event->data not suitable here?

It was removed along with other zero-length arrays, although it was
declared as an empty array.

The next iteration will use C99 flexible array members where required,
instead of calculating the data offset manually.
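
For example, a sketch of how the event struct could look (the other
field names follow the current patch; data[] is the new flexible array
member):

struct t7xx_fsm_event {
	struct list_head		entry;
	enum t7xx_fsm_event_state	event_id;
	unsigned int			length;
	unsigned char			data[];
};

so the handlers can use event->data directly instead of calculating
'(void *)event + sizeof(*event)'.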

...