2024-01-10 11:42:28

by Luo Jie

Subject: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver

The PPE (packet process engine) hardware block is available in Qualcomm
IPQ chipsets that support the PPE architecture, such as IPQ9574 and
IPQ5332. The PPE includes integrated Ethernet MACs and PCS (uniphy)
blocks, which connect to external PHY devices through the PCS. The PPE
also provides various packet processing offload capabilities, such as
routing and bridging offload, L2 switching, and VLAN and tunnel
processing offload.

This patch series adds the PPE driver, which initializes and configures
the PPE and provides various services to higher level network drivers in
the system, such as the EDMA (Ethernet DMA) driver or a DSA switch
driver for the PPE L2 switch, on Qualcomm IPQ SoCs.

The PPE driver provides the following functions:

1. Initialize PPE hardware functions such as buffer management, queue
management, TDM, scheduler and clocks in order to bring up the PPE
device.

2. Register the PCS driver and the uniphy raw clock provider. The uniphy
raw clock is selected as the parent clock of the NSSCC clocks, which
are registered by the dependent patch set linked below. (Note: there
are 3 PCS instances on IPQ9574 and 2 on IPQ5332.)

3. Export the PPE control path API (ppe_device_ops) for use by higher
level network drivers such as the EDMA (Ethernet DMA) driver. The
EDMA netdevice driver depends on this PPE driver and registers the
net devices that receive and transmit packets using the Ethernet ports.

4. Register a debugfs file that provides access to various PPE packet
counters. These statistics are recorded by hardware counters, such as
the port RX/TX, CPU code and hardware queue counters.

The PPE architecture diagram and a detailed introduction are provided in
Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst,
added by the first patch
<Documentation: networking: qcom PPE driver documentation>.

The PPE driver depends on the NSSCC clock driver at the links below, which
provides the clocks for the PPE driver.
https://lore.kernel.org/linux-arm-msm/[email protected]/
https://lore.kernel.org/linux-arm-msm/[email protected]/

The PPE driver also depends on the device tree patch series at the link
below to bring up the PPE device.
https://lore.kernel.org/all/[email protected]/

Lei Wei (5):
Documentation: networking: qcom PPE driver documentation
net: ethernet: qualcomm: Add PPE L2 bridge initialization
net: ethernet: qualcomm: Add PPE UNIPHY support for phylink
net: ethernet: qualcomm: Add PPE MAC support for phylink
net: ethernet: qualcomm: Add PPE MAC functions

Luo Jie (15):
dt-bindings: net: qcom,ppe: Add bindings yaml file
net: ethernet: qualcomm: Add qcom PPE driver
net: ethernet: qualcomm: Add PPE buffer manager configuration
net: ethernet: qualcomm: Add PPE queue management config
net: ethernet: qualcomm: Add PPE TDM config
net: ethernet: qualcomm: Add PPE port scheduler resource
net: ethernet: qualcomm: Add PPE scheduler config
net: ethernet: qualcomm: Add PPE queue config
net: ethernet: qualcomm: Add PPE service code config
net: ethernet: qualcomm: Add PPE port control config
net: ethernet: qualcomm: Add PPE RSS hash config
net: ethernet: qualcomm: Export PPE function set_maxframe
net: ethernet: qualcomm: Add PPE AC(admission control) function
net: ethernet: qualcomm: Add PPE debugfs counters
arm64: defconfig: Enable qcom PPE driver

.../devicetree/bindings/net/qcom,ppe.yaml | 1330 +++++++
.../device_drivers/ethernet/index.rst | 1 +
.../ethernet/qualcomm/ppe/ppe.rst | 305 ++
MAINTAINERS | 9 +
arch/arm64/configs/defconfig | 1 +
drivers/net/ethernet/qualcomm/Kconfig | 17 +
drivers/net/ethernet/qualcomm/Makefile | 1 +
drivers/net/ethernet/qualcomm/ppe/Makefile | 7 +
drivers/net/ethernet/qualcomm/ppe/ppe.c | 3070 +++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 315 ++
.../net/ethernet/qualcomm/ppe/ppe_debugfs.c | 953 +++++
.../net/ethernet/qualcomm/ppe/ppe_debugfs.h | 25 +
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 628 ++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 256 ++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 1106 ++++++
.../net/ethernet/qualcomm/ppe/ppe_uniphy.c | 789 +++++
.../net/ethernet/qualcomm/ppe/ppe_uniphy.h | 227 ++
include/linux/soc/qcom/ppe.h | 105 +
18 files changed, 9145 insertions(+)
create mode 100644 Documentation/devicetree/bindings/net/qcom,ppe.yaml
create mode 100644 Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
create mode 100644 drivers/net/ethernet/qualcomm/ppe/Makefile
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe.h
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.h
create mode 100644 include/linux/soc/qcom/ppe.h


base-commit: a7fe0881d9b78d402bbd9067dd4503a57c57a1d9
--
2.42.0



2024-01-10 11:42:57

by Luo Jie

Subject: [PATCH net-next 01/20] Documentation: networking: qcom PPE driver documentation

From: Lei Wei <[email protected]>

Add Qualcomm PPE driver documentation

Signed-off-by: Lei Wei <[email protected]>
Signed-off-by: Luo Jie <[email protected]>
---
.../device_drivers/ethernet/index.rst | 1 +
.../ethernet/qualcomm/ppe/ppe.rst | 305 ++++++++++++++++++
2 files changed, 306 insertions(+)
create mode 100644 Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst

diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 43de285b8a92..fff37383f995 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -47,6 +47,7 @@ Contents:
neterion/s2io
netronome/nfp
pensando/ionic
+ qualcomm/ppe/ppe
smsc/smc9
stmicro/stmmac
ti/cpsw
diff --git a/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst b/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
new file mode 100644
index 000000000000..0f728a178ee7
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
@@ -0,0 +1,305 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================================
+PPE Driver for Qualcomm PPE Ethernet Network Family
+===================================================
+
+Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+
+Author: Lei Wei <[email protected]>
+
+
+Contents
+========
+
+- `Overview`_
+- `PPE Driver Supported SoCs`_
+- `Enabling the Driver`_
+- `Supported Features`_
+- `PPE MAC and UNIPHY Interface`_
+- `Debugging`_
+- `APIs used by PPE EDMA driver`_
+- `Exported PPE Device Operations`_
+
+
+Overview
+========
+
+This file describes the Qualcomm PPE (Packet Process Engine) driver.
+
+PPE Architecture
+----------------
+
+The PPE supports a maximum of 6 MACs (GMACs or XGMACs), which are connected to 6 PHYs
+through 3 UNIPHYs. The 6 PHYs correspond to the 6 front panel Ethernet ports, port1 to
+port6. Some IPQ platforms have fewer than 6 MACs and 3 UNIPHYs.
+
+The PPE can forward ingress packets to the EDMA network driver through the internal CPU
+port0. It can also forward packets between the PPE ports.
+
+A simplified example view of the PPE interfaces with PHYs and net devices::
+
+ +--------+ +--------+ +--------+ +--------+ +--------+ +--------+ start/stop +---------+
+ | netdev | | netdev | | netdev | | netdev | | netdev | | netdev | <-------------- | |
+ +--------+ +--------+ +--------+ +--------+ +--------+ +--------+ | |
+ | | |
+ P0 | |
+ | | |
+ +-------------------------------------------------------------------------+ | |
+ | | | |
+ | PPE(packet process engine) | | |
+ | | | |
+ | | | |
+ | +------+ +------+ +------+ +------+ +------+ +------+ | mac ops | phylink |
+ | | MAC0 | | MAC1 | | MAC2 | | MAC3 | | MAC4 | | MAC5 | | <-------- | |
+ | +------+ +------+ +------+ +------+ +------+ +------+ | | |
+ | | | | | | | | | |
+ +-------------------------------------------------------------------------+ | |
+ | | | | | | | |
+ +-------------------------------+ +-------------+ +-------------+ | |
+ | [ QSGMII ] | | [ USXGMII ] | | [ USXGMII ] | pcs ops | |
+ | UNIPHY0 | | UNIPHY1 | | UNIPHY2 | <------------ | |
+ | | | | | | | |
+ +-------------------------------+ +-------------+ +-------------+ | |
+ | | | | | | | |
+ +----------------------------------------------------------------------+ | |
+ | +------+ +------+ +------+ +------+ +------+ +------+ | link change | |
+ | | PHY0 | | PHY1 | | PHY2 | | PHY3 | | PHY4 | | PHY5 | | ----------> | |
+ | +------+ +------+ +------+ +------+ +------+ +------+ | | |
+ | | | | | | | MDIO bus | | |
+ +----------------------------------------------------------------------+ +---------+
+ | | | | | |
+ P1 P2 P3 P4 P5 P6
+
+
+PPE Driver Overview
+-------------------
+
+The PPE driver is the platform driver that initializes the PPE hardware,
+including the PPE clocks, queue management, buffer management, TDM,
+scheduler, RSS, MAC and UNIPHY.
+
+The PPE driver provides the PPE MAC and UNIPHY PCS operations for PHYLINK. It
+also exports a set of PPE operations, which network drivers can use to drive
+the PPE.
+
+The PPE driver exports functions to get the PPE device and the PPE device
+operations. Other network drivers, such as the PPE EDMA network driver, can
+use these exported functions to get the PPE device and its operations.
+
+The PPE driver provides a debugfs file to read the PPE packet counters. These
+counters include the port Rx and Tx counters, CPU code counters and queue
+counters.
+
+
+PPE Driver Supported SoCs
+=========================
+
+The PPE driver enables the PPE engine present on the following SoCs:
+
+- IPQ9574
+- IPQ5332
+
+
+Enabling the Driver
+===================
+
+The driver is enabled automatically by a network driver that depends on it,
+such as the Qualcomm EDMA driver.
+
+The driver is located in the menu structure at:
+
+ -> Device Drivers
+ -> Network device support (NETDEVICES [=y])
+ -> Ethernet driver support
+ -> Qualcomm devices
+ -> Qualcomm Technologies, Inc. PPE Ethernet support
+
+If this driver is built as a module, the following commands can be used to
+install and remove it:
+
+- insmod qcom-ppe.ko
+- rmmod qcom-ppe.ko
+
+Please note that this driver should be installed before the Qualcomm EDMA driver
+and removed after the Qualcomm EDMA driver if it is built as a module.
+
+
+Supported Features
+==================
+
+PPE interface modes: SGMII, 2500BASEX, USXGMII, 10GBASER, QSGMII and QUSGMII.
+
+PPE port link speeds from 10 Mbps to 10000 Mbps.
+
+PPE packet forwarding and offloading.
+
+PPE RSS.
+
+PPE buffer manager.
+
+PPE queue manager and scheduler.
+
+
+PPE MAC and UNIPHY Interface
+============================
+
+The PPE MAC and UNIPHY support various interface modes, including SGMII, 2500BASEX,
+USXGMII, 10GBASER, QSGMII and QUSGMII. Various link types, including external PHY
+mode, 2.5G fixed link mode and 10G SFP mode, are also supported.
+
+The PPE driver provides two phylink ops functions to the PPE EDMA network driver to
+set up and destroy the phylink for each PPE port. The EDMA network driver sets up
+the phylink when each net device is created and destroys it when each net device is
+removed.
+
+ - .phylink_setup() looks up the PHYLINK binding (phy-handle) in the PPE port
+   device tree node, then creates and returns a PHYLINK instance associated with
+   the given net device for each port.
+
+ - .phylink_destroy() disconnects and destroys the PHYLINK instance for each port.
+
+The PPE phylink MAC ops and UNIPHY PCS ops are implemented in the PPE driver to
+drive the PPE MAC and UNIPHY and to interact with the PHYLINK framework.
+
+
+Debugging
+=========
+
+PPE packet counters can be checked by debugfs file ``/sys/kernel/debug/ppe/packet_counter``.
+
+PPE MAC statistics can be checked by ``ethtool -S ethX``.
+
+PPE UNIPHY clock and PPE port clock rate can be checked by the clock debugfs file
+in ``/sys/kernel/debug/clk/``.
+
+PPE port link state and PHY features can also be checked by ``ethtool ethX``.
+
+The SFP module state can be checked by the debugfs file in ``/sys/kernel/debug/@sfp/state``,
+where the ``@sfp`` is the SFP DTS node name.
+
+
+APIs used by PPE EDMA driver
+============================
+
+The PPE driver also exports a set of APIs. Whether the PPE driver has been probed
+can be checked with the API below; the PPE EDMA driver uses it to ascertain that
+the PPE driver is installed and probed before going ahead with its own
+initialization.
+::
+
+ bool ppe_is_probed(struct platform_device *pdev);
+
+The PPE driver's private PPE device structure can be obtained with the API below::
+
+ struct ppe_device *ppe_dev_get(struct platform_device *pdev);
+
+The PPE device operations pointer can be obtained with the API below::
+
+ struct ppe_device_ops *ppe_ops_get(struct platform_device *pdev);
+
+The above APIs are exported and also declared in the common PPE header file
+"include/linux/soc/qcom/ppe.h". A network driver that wants to drive the PPE
+can include this header file, get the PPE device structure with the exported
+"ppe_dev_get" API, and get the PPE device operations with the exported
+"ppe_ops_get" API.
+
+The PPE queue configuration operations pointer can be obtained with the API below::
+
+ const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
+
+This API is exported and is only used by PPE EDMA driver.
+
+
+Exported PPE Device Operations
+==============================
+
+The PPE driver provides a set of PPE device operations to drive the PPE. Higher
+level network drivers, such as the PPE EDMA Ethernet driver or a PPE DSA driver,
+can use these operations to drive the PPE. Their definitions can be found in
+"include/linux/soc/qcom/ppe.h". The operations are listed below.
+
+phylink_setup::
+
+ .phylink_setup(struct ppe_device *ppe_dev, struct net_device *netdev, int port);
+
+The phylink_setup operation is used by a network driver such as the PPE EDMA
+driver to create the PHYLINK when creating a net device. This operation takes the
+net device and the PPE port ID as input parameters; the PPE driver creates and
+returns a PHYLINK instance associated with the net device and port.
+
+phylink_destroy::
+
+ .phylink_destroy(struct ppe_device *ppe_dev, int port);
+
+The phylink_destroy operation is used by a network driver such as the PPE EDMA
+driver to destroy the PHYLINK.
+
+phylink_mac_config::
+
+ .phylink_mac_config(struct ppe_device *ppe_dev, int port, unsigned int mode,
+ const struct phylink_link_state *state);
+
+phylink_mac_link_up::
+
+ .phylink_mac_link_up(struct ppe_device *ppe_dev, int port, struct phy_device *phy,
+ unsigned int mode, phy_interface_t interface, int speed, int duplex,
+ bool tx_pause, bool rx_pause);
+
+phylink_mac_link_down::
+
+ .phylink_mac_link_down(struct ppe_device *ppe_dev, int port, unsigned int mode,
+ phy_interface_t interface);
+
+phylink_mac_select_pcs::
+
+ .phylink_mac_select_pcs(struct ppe_device *ppe_dev, int port,
+ phy_interface_t interface);
+
+The above PHYLINK MAC operations are used by a network driver that creates the
+PHYLINK and provides the PHYLINK MAC ops, such as a PPE DSA driver. In its PHYLINK
+MAC ops, the network driver can use these operations to drive the PPE MAC.
+
+get_stats64::
+
+ .get_stats64(struct ppe_device *ppe_dev, int port, struct rtnl_link_stats64 *s);
+
+This operation is used by a PPE network driver to get the PPE MAC statistics, for
+example in its net device ops.
+
+get_strings::
+
+ .get_strings(struct ppe_device *ppe_dev, int port, u32 stringset, u8 *data);
+
+get_sset_count::
+
+ .get_sset_count(struct ppe_device *ppe_dev, int port, int sset);
+
+get_ethtool_stats::
+
+ .get_ethtool_stats(struct ppe_device *ppe_dev, int port, u64 *data);
+
+The above operations are used by a PPE network driver to report PPE MAC
+statistics, for example in its ethtool ops.
+
+set_mac_address::
+
+ .set_mac_address(struct ppe_device *ppe_dev, int port, u8 *macaddr);
+
+This operation is used by a PPE network driver to set the PPE MAC address, for
+example in its net device ops.
+
+set_mac_eee::
+
+ .set_mac_eee(struct ppe_device *ppe_dev, int port, struct ethtool_eee *eee);
+
+get_mac_eee::
+
+ .get_mac_eee(struct ppe_device *ppe_dev, int port, struct ethtool_eee *eee);
+
+The above operations are used by a PPE network driver to set and get the PPE MAC
+EEE settings, for example in its ethtool ops.
+
+set_maxframe::
+
+ .set_maxframe(struct ppe_device *ppe_dev, int port, int maxframe_size);
+
+This operation is used by a PPE network driver to set the PPE port maximum frame
+size, for example in its net device ops.
--
2.42.0


2024-01-10 11:43:36

by Luo Jie

Subject: [PATCH net-next 02/20] dt-bindings: net: qcom,ppe: Add bindings yaml file

The Qualcomm PPE (packet process engine) is supported on
IPQ SoC platforms.

Signed-off-by: Luo Jie <[email protected]>
---
.../devicetree/bindings/net/qcom,ppe.yaml | 1330 +++++++++++++++++
1 file changed, 1330 insertions(+)
create mode 100644 Documentation/devicetree/bindings/net/qcom,ppe.yaml

diff --git a/Documentation/devicetree/bindings/net/qcom,ppe.yaml b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
new file mode 100644
index 000000000000..6afb2ad62707
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
@@ -0,0 +1,1330 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/qcom,ppe.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Packet Process Engine Ethernet controller
+
+maintainers:
+ - Luo Jie <[email protected]>
+
+description:
+ The PPE (packet process engine) is composed of three components, the
+ Ethernet DMA, the switch core and the port wrapper. The Ethernet DMA is
+ used to transmit and receive packets between the Ethernet subsystem and
+ the host. The switch core has a maximum of 8 ports (up to 6 front panel
+ ports and two FIFO interfaces), among which there are GMACs/XGMACs used
+ as external interfaces and FIFO interfaces connected to the EDMA/EIP.
+ The port wrapper provides connections from the GMACs/XGMACs to
+ SGMII/QSGMII/PSGMII/USXGMII/10G-BASER etc. There are a maximum of 3
+ UNIPHY (PCS) instances supported by the PPE.
+
+properties:
+ compatible:
+ enum:
+ - qcom,ipq5332-ppe
+ - qcom,ipq9574-ppe
+
+ reg:
+ maxItems: 1
+
+ "#address-cells":
+ const: 1
+
+ "#size-cells":
+ const: 1
+
+ ranges: true
+
+ clocks: true
+
+ clock-names: true
+
+ resets: true
+
+ reset-names: true
+
+ tdm-config:
+ type: object
+ additionalProperties: false
+ description: |
+ PPE TDM(time-division multiplexing) config includes buffer management
+ and port scheduler.
+
+ properties:
+ qcom,tdm-bm-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ The TDM buffer scheduler configs of PPE, there are multiple
+ entries supported, each entry includes valid, direction
+ (ingress or egress), port, second port valid, second port.
+
+ qcom,tdm-port-scheduler-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ The TDM port scheduler management configs of PPE, there
+ are multiple entries supported, each entry includes ingress
+ scheduler port bitmap, ingress scheduler port, egress
+ scheduler port, second egress scheduler port valid and
+ second egress scheduler port.
+
+ required:
+ - qcom,tdm-bm-config
+ - qcom,tdm-port-scheduler-config
+
+ buffer-management-config:
+ type: object
+ additionalProperties: false
+ description: |
+ PPE buffer management config, which supports configuring group
+ buffer and per port buffer, which decides the threshold of the
+ flow control frame generated.
+
+ properties:
+ qcom,group-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ The PPE buffer support 4 groups, the entry includes
+ the group ID and group buffer numbers, each buffer
+ has 256 bytes.
+
+ qcom,port-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ The PPE buffer number is also assigned per BM port ID,
+ there are 10 BM ports supported on ipq5332, and 15 BM
+ ports supported on ipq9574. Each entry includs group
+ ID, BM port ID, dedicated buffer, the buffer numbers
+ for receiving packet after pause frame sent, the
+ threshold for pause frame, weight, restore ceil and
+ dynamic buffer or static buffer management.
+
+ required:
+ - qcom,group-config
+ - qcom,port-config
+
+ queue-management-config:
+ type: object
+ additionalProperties: false
+ description: |
+ PPE queue management config, which supports configuring group
+ and per queue buffer limitation, which decides the threshold
+ to drop the packet on the egress port.
+
+ properties:
+ qcom,group-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ The PPE queue management support 4 groups, the entry
+ includes the group ID, group buffer number, dedicated
+ buffer number, threshold to drop packet and restore
+ ceil.
+
+ qcom,queue-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ PPE has 256 unicast queues and 44 multicast queues, the
+ entry includes queue base, queue number, group ID,
+ dedicated buffer, the threshold to drop packet, weight,
+ restore ceil and dynamic or static queue management.
+
+ required:
+ - qcom,group-config
+ - qcom,queue-config
+
+ port-scheduler-resource:
+ type: object
+ additionalProperties: false
+ description: The scheduler resource available in PPE.
+ patternProperties:
+ "^port[0-7]$":
+ description: Each subnode represents the scheduler resource per port.
+ type: object
+ properties:
+ port-id:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: |
+ The PPE port ID, there are maximum 6 physical port,
+ EIP port and CPU port.
+
+ qcom,ucast-queue:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE unicast queue range.
+
+ qcom,mcast-queue:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE multicast queue range.
+
+ qcom,l0sp:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE L0 strict priority scheduler range.
+
+ qcom,l0cdrr:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE L0 promise DRR range.
+
+ qcom,l0edrr:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE L0 exceed DRR range.
+
+ qcom,l1cdrr:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE L1 promise DRR range.
+
+ qcom,l1edrr:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 2
+ maxItems: 2
+ description: The PPE L1 exceed DRR range.
+
+ required:
+ - port-id
+ - qcom,ucast-queue
+ - qcom,mcast-queue
+ - qcom,l0sp
+ - qcom,l0cdrr
+ - qcom,l0edrr
+ - qcom,l1cdrr
+ - qcom,l1edrr
+
+ port-scheduler-config:
+ type: object
+ additionalProperties: false
+ description: The scheduler resource config in PPE.
+ patternProperties:
+ "^port[0-7]$":
+ type: object
+ additionalProperties: false
+ description: Each subnode represents the scheduler config per port.
+ properties:
+ port-id:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: |
+ The PPE port ID, there are maximum 6 physical port
+ and one CPU port.
+
+ l1scheduler:
+ type: object
+ additionalProperties: false
+ description: PPE port level 1 scheduler config
+ patternProperties:
+ "^group[0-7]$":
+ type: object
+ additionalProperties: false
+ description: PPE per flow scheduler config in level 1
+ properties:
+ qcom,flow:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ The flow ID for level 1 scheduler
+
+ qcom,flow-loop-priority:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ The flow loop priority for level 1 scheduler
+
+ qcom,scheduler-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 5
+ maxItems: 5
+ description: |
+ The scheduler config includes flow ID, promise priority,
+ promise DRR, exceed priority and exceed DRR.
+
+ required:
+ - qcom,flow
+ - qcom,scheduler-config
+
+ l0scheduler:
+ type: object
+ additionalProperties: false
+ description: PPE port level 0 scheduler config
+ patternProperties:
+ "^group[0-7]$":
+ type: object
+ additionalProperties: false
+ description: PPE per flow scheduler config in level 0
+ properties:
+ qcom,ucast-queue:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: The unicast queue base ID.
+
+ qcom,mcast-queue:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: The multicast queue base ID.
+
+ qcom,ucast-loop-priority:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: |
+ The unicast priority number, each priority has dedicated
+ queue.
+
+ qcom,mcast-loop-priority:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: |
+ The multicast priority number, each priority has dedicated
+ queue.
+
+ qcom,drr-max-priority:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ The unicast maximum priority for the configured queues
+
+ qcom,scheduler-config:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ minItems: 5
+ maxItems: 5
+ description:
+ The flow ID input to level 1 scheduler, promise priority,
+ promise DRR, exceed priority and exceed DRR.
+
+ required:
+ - qcom,ucast-queue
+ - qcom,mcast-queue
+ - qcom,scheduler-config
+
+ required:
+ - port-id
+ - l0scheduler
+ - l1scheduler
+
+patternProperties:
+ "^qcom-uniphy@[0-9a-f]+$":
+ type: object
+ additionalProperties: false
+ description: uniphy configuration and clock provider
+ properties:
+ reg:
+ minItems: 2
+ items:
+ - description: The first uniphy register range
+ - description: The second uniphy register range
+ - description: The third uniphy register range
+
+ "#clock-cells":
+ const: 1
+
+ clock-output-names:
+ minItems: 4
+ maxItems: 6
+
+ required:
+ - reg
+ - "#clock-cells"
+ - clock-output-names
+
+allOf:
+ - if:
+ properties:
+ compatible:
+ contains:
+ const: qcom,ipq5332-ppe
+ then:
+ properties:
+ clocks:
+ items:
+ - description: Display common AHB clock from gcc
+ - description: Display common system clock from gcc
+ - description: Display uniphy0 AHB clock from gcc
+ - description: Display uniphy1 AHB clock from gcc
+ - description: Display uniphy0 system clock from gcc
+ - description: Display uniphy1 system clock from gcc
+ - description: Display nss clock from gcc
+ - description: Display nss noc snoc clock from gcc
+ - description: Display nss noc snoc_1 clock from gcc
+ - description: Display sleep clock from gcc
+ - description: Display PPE clock from nsscc
+ - description: Display PPE config clock from nsscc
+ - description: Display NSSNOC PPE clock from nsscc
+ - description: Display NSSNOC PPE config clock from nsscc
+ - description: Display EDMA clock from nsscc
+ - description: Display EDMA config clock from nsscc
+ - description: Display PPE IPE clock from nsscc
+ - description: Display PPE BTQ clock from nsscc
+ - description: Display port1 MAC clock from nsscc
+ - description: Display port2 MAC clock from nsscc
+ - description: Display port1 RX clock from nsscc
+ - description: Display port1 TX clock from nsscc
+ - description: Display port2 RX clock from nsscc
+ - description: Display port2 TX clock from nsscc
+ - description: Display UNIPHY port1 RX clock from nsscc
+ - description: Display UNIPHY port1 TX clock from nsscc
+ - description: Display UNIPHY port2 RX clock from nsscc
+ - description: Display UNIPHY port2 TX clock from nsscc
+ clock-names:
+ items:
+ - const: cmn_ahb
+ - const: cmn_sys
+ - const: uniphy0_ahb
+ - const: uniphy1_ahb
+ - const: uniphy0_sys
+ - const: uniphy1_sys
+ - const: gcc_nsscc
+ - const: gcc_nssnoc_snoc
+ - const: gcc_nssnoc_snoc_1
+ - const: gcc_im_sleep
+ - const: nss_ppe
+ - const: nss_ppe_cfg
+ - const: nssnoc_ppe
+ - const: nssnoc_ppe_cfg
+ - const: nss_edma
+ - const: nss_edma_cfg
+ - const: nss_ppe_ipe
+ - const: nss_ppe_btq
+ - const: port1_mac
+ - const: port2_mac
+ - const: nss_port1_rx
+ - const: nss_port1_tx
+ - const: nss_port2_rx
+ - const: nss_port2_tx
+ - const: uniphy_port1_rx
+ - const: uniphy_port1_tx
+ - const: uniphy_port2_rx
+ - const: uniphy_port2_tx
+
+ resets:
+ items:
+ - description: Reset PPE
+ - description: Reset uniphy0 software config
+ - description: Reset uniphy1 software config
+ - description: Reset uniphy0 AHB
+ - description: Reset uniphy1 AHB
+ - description: Reset uniphy0 system
+ - description: Reset uniphy1 system
+ - description: Reset uniphy0 XPCS
+ - description: Reset uniphy1 XPCS
+ - description: Reset uniphy port1 RX
+ - description: Reset uniphy port1 TX
+ - description: Reset uniphy port2 RX
+ - description: Reset uniphy port2 TX
+ - description: Reset PPE port1 RX
+ - description: Reset PPE port1 TX
+ - description: Reset PPE port2 RX
+ - description: Reset PPE port2 TX
+ - description: Reset PPE port1 MAC
+ - description: Reset PPE port2 MAC
+
+ reset-names:
+ items:
+ - const: ppe
+ - const: uniphy0_soft
+ - const: uniphy1_soft
+ - const: uniphy0_ahb
+ - const: uniphy1_ahb
+ - const: uniphy0_sys
+ - const: uniphy1_sys
+ - const: uniphy0_xpcs
+ - const: uniphy1_xpcs
+ - const: uniphy_port1_rx
+ - const: uniphy_port1_tx
+ - const: uniphy_port2_rx
+ - const: uniphy_port2_tx
+ - const: nss_port1_rx
+ - const: nss_port1_tx
+ - const: nss_port2_rx
+ - const: nss_port2_tx
+ - const: nss_port1_mac
+ - const: nss_port2_mac
+
+ - if:
+ properties:
+ compatible:
+ contains:
+ const: qcom,ipq9574-ppe
+ then:
+ properties:
+ clocks:
+ items:
+ - description: Common AHB clock from gcc
+ - description: Common system clock from gcc
+ - description: Uniphy0 AHB clock from gcc
+ - description: Uniphy1 AHB clock from gcc
+ - description: Uniphy2 AHB clock from gcc
+ - description: Uniphy0 system clock from gcc
+ - description: Uniphy1 system clock from gcc
+ - description: Uniphy2 system clock from gcc
+ - description: NSS clock from gcc
+ - description: NSS NOC clock from gcc
+ - description: NSS NOC SNOC clock from gcc
+ - description: NSS NOC SNOC_1 clock from gcc
+ - description: PPE clock from nsscc
+ - description: PPE config clock from nsscc
+ - description: NSSNOC PPE clock from nsscc
+ - description: NSSNOC PPE config clock from nsscc
+ - description: EDMA clock from nsscc
+ - description: EDMA config clock from nsscc
+ - description: PPE IPE clock from nsscc
+ - description: PPE BTQ clock from nsscc
+ - description: Port1 MAC clock from nsscc
+ - description: Port2 MAC clock from nsscc
+ - description: Port3 MAC clock from nsscc
+ - description: Port4 MAC clock from nsscc
+ - description: Port5 MAC clock from nsscc
+ - description: Port6 MAC clock from nsscc
+ - description: Port1 RX clock from nsscc
+ - description: Port1 TX clock from nsscc
+ - description: Port2 RX clock from nsscc
+ - description: Port2 TX clock from nsscc
+ - description: Port3 RX clock from nsscc
+ - description: Port3 TX clock from nsscc
+ - description: Port4 RX clock from nsscc
+ - description: Port4 TX clock from nsscc
+ - description: Port5 RX clock from nsscc
+ - description: Port5 TX clock from nsscc
+ - description: Port6 RX clock from nsscc
+ - description: Port6 TX clock from nsscc
+ - description: UNIPHY port1 RX clock from nsscc
+ - description: UNIPHY port1 TX clock from nsscc
+ - description: UNIPHY port2 RX clock from nsscc
+ - description: UNIPHY port2 TX clock from nsscc
+ - description: UNIPHY port3 RX clock from nsscc
+ - description: UNIPHY port3 TX clock from nsscc
+ - description: UNIPHY port4 RX clock from nsscc
+ - description: UNIPHY port4 TX clock from nsscc
+ - description: UNIPHY port5 RX clock from nsscc
+ - description: UNIPHY port5 TX clock from nsscc
+ - description: UNIPHY port6 RX clock from nsscc
+ - description: UNIPHY port6 TX clock from nsscc
+ - description: Port5 RX clock source from nsscc
+ - description: Port5 TX clock source from nsscc
+ clock-names:
+ items:
+ - const: cmn_ahb
+ - const: cmn_sys
+ - const: uniphy0_ahb
+ - const: uniphy1_ahb
+ - const: uniphy2_ahb
+ - const: uniphy0_sys
+ - const: uniphy1_sys
+ - const: uniphy2_sys
+ - const: gcc_nsscc
+ - const: gcc_nssnoc_nsscc
+ - const: gcc_nssnoc_snoc
+ - const: gcc_nssnoc_snoc_1
+ - const: nss_ppe
+ - const: nss_ppe_cfg
+ - const: nssnoc_ppe
+ - const: nssnoc_ppe_cfg
+ - const: nss_edma
+ - const: nss_edma_cfg
+ - const: nss_ppe_ipe
+ - const: nss_ppe_btq
+ - const: port1_mac
+ - const: port2_mac
+ - const: port3_mac
+ - const: port4_mac
+ - const: port5_mac
+ - const: port6_mac
+ - const: nss_port1_rx
+ - const: nss_port1_tx
+ - const: nss_port2_rx
+ - const: nss_port2_tx
+ - const: nss_port3_rx
+ - const: nss_port3_tx
+ - const: nss_port4_rx
+ - const: nss_port4_tx
+ - const: nss_port5_rx
+ - const: nss_port5_tx
+ - const: nss_port6_rx
+ - const: nss_port6_tx
+ - const: uniphy_port1_rx
+ - const: uniphy_port1_tx
+ - const: uniphy_port2_rx
+ - const: uniphy_port2_tx
+ - const: uniphy_port3_rx
+ - const: uniphy_port3_tx
+ - const: uniphy_port4_rx
+ - const: uniphy_port4_tx
+ - const: uniphy_port5_rx
+ - const: uniphy_port5_tx
+ - const: uniphy_port6_rx
+ - const: uniphy_port6_tx
+ - const: nss_port5_rx_clk_src
+ - const: nss_port5_tx_clk_src
+
+ resets:
+ items:
+ - description: Reset PPE
+ - description: Reset uniphy0 software config
+ - description: Reset uniphy1 software config
+ - description: Reset uniphy2 software config
+ - description: Reset uniphy0 AHB
+ - description: Reset uniphy1 AHB
+ - description: Reset uniphy2 AHB
+ - description: Reset uniphy0 system
+ - description: Reset uniphy1 system
+ - description: Reset uniphy2 system
+ - description: Reset uniphy0 XPCS
+ - description: Reset uniphy1 XPCS
+ - description: Reset uniphy2 XPCS
+ - description: Assert uniphy port1
+ - description: Assert uniphy port2
+ - description: Assert uniphy port3
+ - description: Assert uniphy port4
+ - description: Reset PPE port1
+ - description: Reset PPE port2
+ - description: Reset PPE port3
+ - description: Reset PPE port4
+ - description: Reset PPE port5
+ - description: Reset PPE port6
+ - description: Reset PPE port1 MAC
+ - description: Reset PPE port2 MAC
+ - description: Reset PPE port3 MAC
+ - description: Reset PPE port4 MAC
+ - description: Reset PPE port5 MAC
+ - description: Reset PPE port6 MAC
+
+ reset-names:
+ items:
+ - const: ppe
+ - const: uniphy0_soft
+ - const: uniphy1_soft
+ - const: uniphy2_soft
+ - const: uniphy0_ahb
+ - const: uniphy1_ahb
+ - const: uniphy2_ahb
+ - const: uniphy0_sys
+ - const: uniphy1_sys
+ - const: uniphy2_sys
+ - const: uniphy0_xpcs
+ - const: uniphy1_xpcs
+ - const: uniphy2_xpcs
+ - const: uniphy0_port1_dis
+ - const: uniphy0_port2_dis
+ - const: uniphy0_port3_dis
+ - const: uniphy0_port4_dis
+ - const: nss_port1
+ - const: nss_port2
+ - const: nss_port3
+ - const: nss_port4
+ - const: nss_port5
+ - const: nss_port6
+ - const: nss_port1_mac
+ - const: nss_port2_mac
+ - const: nss_port3_mac
+ - const: nss_port4_mac
+ - const: nss_port5_mac
+ - const: nss_port6_mac
+
+required:
+ - compatible
+ - reg
+ - "#address-cells"
+ - "#size-cells"
+ - ranges
+ - clocks
+ - clock-names
+ - resets
+ - reset-names
+ - tdm-config
+ - buffer-management-config
+ - queue-management-config
+ - port-scheduler-resource
+ - port-scheduler-config
+
+additionalProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/clock/qcom,ipq9574-gcc.h>
+ #include <dt-bindings/reset/qcom,ipq9574-gcc.h>
+ #include <dt-bindings/clock/qcom,ipq9574-nsscc.h>
+ #include <dt-bindings/reset/qcom,ipq9574-nsscc.h>
+
+ soc {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ qcom_ppe: qcom-ppe@3a000000 {
+ compatible = "qcom,ipq9574-ppe";
+ reg = <0x3a000000 0xb00000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ clocks = <&gcc GCC_CMN_12GPLL_AHB_CLK>,
+ <&gcc GCC_CMN_12GPLL_SYS_CLK>,
+ <&gcc GCC_UNIPHY0_AHB_CLK>,
+ <&gcc GCC_UNIPHY1_AHB_CLK>,
+ <&gcc GCC_UNIPHY2_AHB_CLK>,
+ <&gcc GCC_UNIPHY0_SYS_CLK>,
+ <&gcc GCC_UNIPHY1_SYS_CLK>,
+ <&gcc GCC_UNIPHY2_SYS_CLK>,
+ <&gcc GCC_NSSCC_CLK>,
+ <&gcc GCC_NSSNOC_NSSCC_CLK>,
+ <&gcc GCC_NSSNOC_SNOC_CLK>,
+ <&gcc GCC_NSSNOC_SNOC_1_CLK>,
+ <&nsscc NSS_CC_PPE_SWITCH_CLK>,
+ <&nsscc NSS_CC_PPE_SWITCH_CFG_CLK>,
+ <&nsscc NSS_CC_NSSNOC_PPE_CLK>,
+ <&nsscc NSS_CC_NSSNOC_PPE_CFG_CLK>,
+ <&nsscc NSS_CC_PPE_EDMA_CLK>,
+ <&nsscc NSS_CC_PPE_EDMA_CFG_CLK>,
+ <&nsscc NSS_CC_PPE_SWITCH_IPE_CLK>,
+ <&nsscc NSS_CC_PPE_SWITCH_BTQ_CLK>,
+ <&nsscc NSS_CC_PORT1_MAC_CLK>,
+ <&nsscc NSS_CC_PORT2_MAC_CLK>,
+ <&nsscc NSS_CC_PORT3_MAC_CLK>,
+ <&nsscc NSS_CC_PORT4_MAC_CLK>,
+ <&nsscc NSS_CC_PORT5_MAC_CLK>,
+ <&nsscc NSS_CC_PORT6_MAC_CLK>,
+ <&nsscc NSS_CC_PORT1_RX_CLK>,
+ <&nsscc NSS_CC_PORT1_TX_CLK>,
+ <&nsscc NSS_CC_PORT2_RX_CLK>,
+ <&nsscc NSS_CC_PORT2_TX_CLK>,
+ <&nsscc NSS_CC_PORT3_RX_CLK>,
+ <&nsscc NSS_CC_PORT3_TX_CLK>,
+ <&nsscc NSS_CC_PORT4_RX_CLK>,
+ <&nsscc NSS_CC_PORT4_TX_CLK>,
+ <&nsscc NSS_CC_PORT5_RX_CLK>,
+ <&nsscc NSS_CC_PORT5_TX_CLK>,
+ <&nsscc NSS_CC_PORT6_RX_CLK>,
+ <&nsscc NSS_CC_PORT6_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT1_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT1_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT2_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT2_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT3_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT3_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT4_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT4_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT5_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT5_TX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT6_RX_CLK>,
+ <&nsscc NSS_CC_UNIPHY_PORT6_TX_CLK>,
+ <&nsscc NSS_CC_PORT5_RX_CLK_SRC>,
+ <&nsscc NSS_CC_PORT5_TX_CLK_SRC>;
+ clock-names = "cmn_ahb",
+ "cmn_sys",
+ "uniphy0_ahb",
+ "uniphy1_ahb",
+ "uniphy2_ahb",
+ "uniphy0_sys",
+ "uniphy1_sys",
+ "uniphy2_sys",
+ "gcc_nsscc",
+ "gcc_nssnoc_nsscc",
+ "gcc_nssnoc_snoc",
+ "gcc_nssnoc_snoc_1",
+ "nss_ppe",
+ "nss_ppe_cfg",
+ "nssnoc_ppe",
+ "nssnoc_ppe_cfg",
+ "nss_edma",
+ "nss_edma_cfg",
+ "nss_ppe_ipe",
+ "nss_ppe_btq",
+ "port1_mac",
+ "port2_mac",
+ "port3_mac",
+ "port4_mac",
+ "port5_mac",
+ "port6_mac",
+ "nss_port1_rx",
+ "nss_port1_tx",
+ "nss_port2_rx",
+ "nss_port2_tx",
+ "nss_port3_rx",
+ "nss_port3_tx",
+ "nss_port4_rx",
+ "nss_port4_tx",
+ "nss_port5_rx",
+ "nss_port5_tx",
+ "nss_port6_rx",
+ "nss_port6_tx",
+ "uniphy_port1_rx",
+ "uniphy_port1_tx",
+ "uniphy_port2_rx",
+ "uniphy_port2_tx",
+ "uniphy_port3_rx",
+ "uniphy_port3_tx",
+ "uniphy_port4_rx",
+ "uniphy_port4_tx",
+ "uniphy_port5_rx",
+ "uniphy_port5_tx",
+ "uniphy_port6_rx",
+ "uniphy_port6_tx",
+ "nss_port5_rx_clk_src",
+ "nss_port5_tx_clk_src";
+ resets = <&nsscc PPE_FULL_RESET>,
+ <&nsscc UNIPHY0_SOFT_RESET>,
+ <&nsscc UNIPHY_PORT5_ARES>,
+ <&nsscc UNIPHY_PORT6_ARES>,
+ <&gcc GCC_UNIPHY0_AHB_RESET>,
+ <&gcc GCC_UNIPHY1_AHB_RESET>,
+ <&gcc GCC_UNIPHY2_AHB_RESET>,
+ <&gcc GCC_UNIPHY0_SYS_RESET>,
+ <&gcc GCC_UNIPHY1_SYS_RESET>,
+ <&gcc GCC_UNIPHY2_SYS_RESET>,
+ <&gcc GCC_UNIPHY0_XPCS_RESET>,
+ <&gcc GCC_UNIPHY1_XPCS_RESET>,
+ <&gcc GCC_UNIPHY2_XPCS_RESET>,
+ <&nsscc UNIPHY_PORT1_ARES>,
+ <&nsscc UNIPHY_PORT2_ARES>,
+ <&nsscc UNIPHY_PORT3_ARES>,
+ <&nsscc UNIPHY_PORT4_ARES>,
+ <&nsscc NSSPORT1_RESET>,
+ <&nsscc NSSPORT2_RESET>,
+ <&nsscc NSSPORT3_RESET>,
+ <&nsscc NSSPORT4_RESET>,
+ <&nsscc NSSPORT5_RESET>,
+ <&nsscc NSSPORT6_RESET>,
+ <&nsscc PORT1_MAC_ARES>,
+ <&nsscc PORT2_MAC_ARES>,
+ <&nsscc PORT3_MAC_ARES>,
+ <&nsscc PORT4_MAC_ARES>,
+ <&nsscc PORT5_MAC_ARES>,
+ <&nsscc PORT6_MAC_ARES>;
+ reset-names = "ppe",
+ "uniphy0_soft",
+ "uniphy1_soft",
+ "uniphy2_soft",
+ "uniphy0_ahb",
+ "uniphy1_ahb",
+ "uniphy2_ahb",
+ "uniphy0_sys",
+ "uniphy1_sys",
+ "uniphy2_sys",
+ "uniphy0_xpcs",
+ "uniphy1_xpcs",
+ "uniphy2_xpcs",
+ "uniphy0_port1_dis",
+ "uniphy0_port2_dis",
+ "uniphy0_port3_dis",
+ "uniphy0_port4_dis",
+ "nss_port1",
+ "nss_port2",
+ "nss_port3",
+ "nss_port4",
+ "nss_port5",
+ "nss_port6",
+ "nss_port1_mac",
+ "nss_port2_mac",
+ "nss_port3_mac",
+ "nss_port4_mac",
+ "nss_port5_mac",
+ "nss_port6_mac";
+
+ uniphys: qcom-uniphy@7a00000 {
+ reg = <0x7a00000 0x10000>,
+ <0x7a10000 0x10000>,
+ <0x7a20000 0x10000>;
+ #clock-cells = <1>;
+ clock-output-names = "uniphy0_gcc_rx_clk",
+ "uniphy0_gcc_tx_clk",
+ "uniphy1_gcc_rx_clk",
+ "uniphy1_gcc_tx_clk",
+ "uniphy2_gcc_rx_clk",
+ "uniphy2_gcc_tx_clk";
+ };
+
+ tdm-config {
+ qcom,tdm-bm-config = <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 7 0 0>,
+ <1 1 7 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 2 0 0>,
+ <1 1 2 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 3 0 0>,
+ <1 1 3 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 7 0 0>,
+ <1 1 7 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 4 0 0>,
+ <1 1 4 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 2 0 0>,
+ <1 1 2 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 7 0 0>,
+ <1 1 7 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 3 0 0>,
+ <1 1 3 0 0>,
+ <1 0 1 0 0>,
+ <1 1 1 0 0>,
+ <1 0 0 0 0>,
+ <1 1 0 0 0>,
+ <1 0 5 0 0>,
+ <1 1 5 0 0>,
+ <1 0 6 0 0>,
+ <1 1 6 0 0>,
+ <1 0 4 0 0>,
+ <1 1 4 0 0>,
+ <1 0 7 0 0>,
+ <1 1 7 0 0>;
+ qcom,tdm-port-scheduler-config = <0x98 6 0 1 1>,
+ <0x94 5 6 1 3>,
+ <0x86 0 5 1 4>,
+ <0x8C 1 6 1 0>,
+ <0x1C 7 5 1 1>,
+ <0x98 2 6 1 0>,
+ <0x1C 5 7 1 1>,
+ <0x34 3 6 1 0>,
+ <0x8C 4 5 1 1>,
+ <0x98 2 6 1 0>,
+ <0x8C 5 4 1 1>,
+ <0xA8 0 6 1 2>,
+ <0x98 5 1 1 0>,
+ <0x98 6 5 1 2>,
+ <0x89 1 6 1 4>,
+ <0xA4 3 0 1 1>,
+ <0x8C 5 6 1 4>,
+ <0xA8 0 2 1 1>,
+ <0x98 6 5 1 0>,
+ <0xC4 4 3 1 1>,
+ <0x94 6 5 1 0>,
+ <0x1C 7 6 1 1>,
+ <0x98 2 5 1 0>,
+ <0x1C 6 7 1 1>,
+ <0x1C 5 6 1 0>,
+ <0x94 3 5 1 1>,
+ <0x8C 4 6 1 0>,
+ <0x94 1 5 1 3>,
+ <0x94 6 1 1 0>,
+ <0xD0 3 5 1 2>,
+ <0x98 6 0 1 1>,
+ <0x94 5 6 1 3>,
+ <0x94 1 5 1 0>,
+ <0x98 2 6 1 1>,
+ <0x8C 4 5 1 0>,
+ <0x1C 7 6 1 1>,
+ <0x8C 0 5 1 4>,
+ <0x89 1 6 1 2>,
+ <0x98 5 0 1 1>,
+ <0x94 6 5 1 3>,
+ <0x92 0 6 1 2>,
+ <0x98 1 5 1 0>,
+ <0x98 6 2 1 1>,
+ <0xD0 0 5 1 3>,
+ <0x94 6 0 1 1>,
+ <0x8C 5 6 1 4>,
+ <0x8C 1 5 1 0>,
+ <0x1C 6 7 1 1>,
+ <0x1C 5 6 1 0>,
+ <0xB0 2 3 1 1>,
+ <0xC4 4 5 1 0>,
+ <0x8C 6 4 1 1>,
+ <0xA4 3 6 1 0>,
+ <0x1C 5 7 1 1>,
+ <0x4C 0 5 1 4>,
+ <0x8C 6 0 1 1>,
+ <0x34 7 6 1 3>,
+ <0x94 5 0 1 1>,
+ <0x98 6 5 1 2>;
+ };
+
+ buffer-management-config {
+ qcom,group-config = <0 1550>;
+ qcom,port-config = <0 0 0 100 1146 7 8 0 1>,
+ <0 1 0 100 250 4 36 0 1>,
+ <0 2 0 100 250 4 36 0 1>,
+ <0 3 0 100 250 4 36 0 1>,
+ <0 4 0 100 250 4 36 0 1>,
+ <0 5 0 100 250 4 36 0 1>,
+ <0 6 0 100 250 4 36 0 1>,
+ <0 7 0 100 250 4 36 0 1>,
+ <0 8 0 128 250 4 36 0 1>,
+ <0 9 0 128 250 4 36 0 1>,
+ <0 10 0 128 250 4 36 0 1>,
+ <0 11 0 128 250 4 36 0 1>,
+ <0 12 0 128 250 4 36 0 1>,
+ <0 13 0 128 250 4 36 0 1>,
+ <0 14 0 40 250 4 36 0 1>;
+ };
+
+ queue-management-config {
+ qcom,group-config = <0 2000 0 0 0>;
+ qcom,queue-config = <0 256 0 0 400 4 36 1>,
+ <256 44 0 0 250 0 36 0>;
+ };
+
+ port-scheduler-resource {
+ port0 {
+ port-id = <0>;
+ qcom,ucast-queue = <0 63>;
+ qcom,mcast-queue = <256 263>;
+ qcom,l0sp = <0 0>;
+ qcom,l0cdrr = <0 7>;
+ qcom,l0edrr = <0 7>;
+ qcom,l1cdrr = <0 0>;
+ qcom,l1edrr = <0 0>;
+ };
+
+ port1 {
+ port-id = <1>;
+ qcom,ucast-queue = <204 211>;
+ qcom,mcast-queue = <272 275>;
+ qcom,l0sp = <51 52>;
+ qcom,l0cdrr = <108 115>;
+ qcom,l0edrr = <108 115>;
+ qcom,l1cdrr = <23 24>;
+ qcom,l1edrr = <23 24>;
+ };
+
+ port2 {
+ port-id = <2>;
+ qcom,ucast-queue = <212 219>;
+ qcom,mcast-queue = <276 279>;
+ qcom,l0sp = <53 54>;
+ qcom,l0cdrr = <116 123>;
+ qcom,l0edrr = <116 123>;
+ qcom,l1cdrr = <25 26>;
+ qcom,l1edrr = <25 26>;
+ };
+
+ port3 {
+ port-id = <3>;
+ qcom,ucast-queue = <220 227>;
+ qcom,mcast-queue = <280 283>;
+ qcom,l0sp = <55 56>;
+ qcom,l0cdrr = <124 131>;
+ qcom,l0edrr = <124 131>;
+ qcom,l1cdrr = <27 28>;
+ qcom,l1edrr = <27 28>;
+ };
+
+ port4 {
+ port-id = <4>;
+ qcom,ucast-queue = <228 235>;
+ qcom,mcast-queue = <284 287>;
+ qcom,l0sp = <57 58>;
+ qcom,l0cdrr = <132 139>;
+ qcom,l0edrr = <132 139>;
+ qcom,l1cdrr = <29 30>;
+ qcom,l1edrr = <29 30>;
+ };
+
+ port5 {
+ port-id = <5>;
+ qcom,ucast-queue = <236 243>;
+ qcom,mcast-queue = <288 291>;
+ qcom,l0sp = <59 60>;
+ qcom,l0cdrr = <140 147>;
+ qcom,l0edrr = <140 147>;
+ qcom,l1cdrr = <31 32>;
+ qcom,l1edrr = <31 32>;
+ };
+
+ port6 {
+ port-id = <6>;
+ qcom,ucast-queue = <244 251>;
+ qcom,mcast-queue = <292 295>;
+ qcom,l0sp = <61 62>;
+ qcom,l0cdrr = <148 155>;
+ qcom,l0edrr = <148 155>;
+ qcom,l1cdrr = <33 34>;
+ qcom,l1edrr = <33 34>;
+ };
+
+ port7 {
+ port-id = <7>;
+ qcom,ucast-queue = <252 255>;
+ qcom,mcast-queue = <296 299>;
+ qcom,l0sp = <63 63>;
+ qcom,l0cdrr = <156 159>;
+ qcom,l0edrr = <156 159>;
+ qcom,l1cdrr = <35 35>;
+ qcom,l1edrr = <35 35>;
+ };
+ };
+
+ port-scheduler-config {
+ port0 {
+ port-id = <0>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <0>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <0>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <256>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group1 {
+ qcom,ucast-queue = <8>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <257>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group2 {
+ qcom,ucast-queue = <16>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <258>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group3 {
+ qcom,ucast-queue = <24>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <259>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group4 {
+ qcom,ucast-queue = <32>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <260>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group5 {
+ qcom,ucast-queue = <40>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <261>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group6 {
+ qcom,ucast-queue = <48>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <262>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+
+ group7 {
+ qcom,ucast-queue = <56>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,mcast-queue = <263>;
+ qcom,scheduler-config = <0 0 0 0 0>;
+ };
+ };
+ };
+
+ port1 {
+ port-id = <1>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <51>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <1 0 23 0 23>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <204>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <272>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <51 0 108 0 108>;
+ };
+ };
+ };
+
+ port2 {
+ port-id = <2>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <53>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <2 0 25 0 25>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <212>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <276>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <53 0 116 0 116>;
+ };
+ };
+ };
+
+ port3 {
+ port-id = <3>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <55>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <3 0 27 0 27>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <220>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <280>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <55 0 124 0 124>;
+ };
+ };
+ };
+
+ port4 {
+ port-id = <4>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <57>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <4 0 29 0 29>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <228>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <284>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <57 0 132 0 132>;
+ };
+ };
+ };
+
+ port5 {
+ port-id = <5>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <59>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <5 0 31 0 31>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <236>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <288>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <59 0 140 0 140>;
+ };
+ };
+ };
+
+ port6 {
+ port-id = <6>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <61>;
+ qcom,flow-loop-priority = <2>;
+ qcom,scheduler-config = <6 0 33 0 33>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <244>;
+ qcom,ucast-loop-priority = <8>;
+ qcom,drr-max-priority = <4>;
+ qcom,mcast-queue = <292>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <61 0 148 0 148>;
+ };
+ };
+ };
+
+ port7 {
+ port-id = <7>;
+ l1scheduler {
+ group0 {
+ qcom,flow = <63>;
+ qcom,scheduler-config = <7 0 35 0 35>;
+ };
+ };
+
+ l0scheduler {
+ group0 {
+ qcom,ucast-queue = <252>;
+ qcom,ucast-loop-priority = <4>;
+ qcom,mcast-queue = <296>;
+ qcom,mcast-loop-priority = <4>;
+ qcom,scheduler-config = <63 0 156 0 156>;
+ };
+ };
+ };
+ };
+ };
+ };
--
2.42.0


2024-01-10 11:43:38

by Luo Jie

Subject: [PATCH net-next 03/20] net: ethernet: qualcomm: Add qcom PPE driver

This patch adds the base source files and Makefiles for the PPE driver,
including the platform driver and clock initialization routines.

The PPE (packet process engine) hardware block is available in Qualcomm
IPQ chipsets that support the PPE architecture, such as IPQ9574 and
IPQ5332. The PPE includes integrated Ethernet MACs and a PCS (uniphy)
used to connect to external PHY devices. The PPE also provides various
packet processing offload capabilities, such as routing and bridging
offload, L2 switching, and VLAN and tunnel processing offload.

Signed-off-by: Luo Jie <[email protected]>
---
MAINTAINERS | 9 +
drivers/net/ethernet/qualcomm/Kconfig | 14 +
drivers/net/ethernet/qualcomm/Makefile | 1 +
drivers/net/ethernet/qualcomm/ppe/Makefile | 7 +
drivers/net/ethernet/qualcomm/ppe/ppe.c | 389 +++++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 143 ++++++++
include/linux/soc/qcom/ppe.h | 28 ++
7 files changed, 591 insertions(+)
create mode 100644 drivers/net/ethernet/qualcomm/ppe/Makefile
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe.h
create mode 100644 include/linux/soc/qcom/ppe.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 014ad90d0872..18413231d173 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17943,6 +17943,15 @@ S: Maintained
F: Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
F: drivers/mtd/nand/raw/qcom_nandc.c

+QUALCOMM PPE DRIVER
+M: Luo Jie <[email protected]>
+L: [email protected]
+S: Supported
+F: Documentation/devicetree/bindings/net/qcom,ppe.yaml
+F: Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
+F: drivers/net/ethernet/qualcomm/ppe/
+F: include/linux/soc/qcom/ppe.h
+
QUALCOMM QSEECOM DRIVER
M: Maximilian Luz <[email protected]>
L: [email protected]
diff --git a/drivers/net/ethernet/qualcomm/Kconfig b/drivers/net/ethernet/qualcomm/Kconfig
index 9210ff360fdc..fe826c508f64 100644
--- a/drivers/net/ethernet/qualcomm/Kconfig
+++ b/drivers/net/ethernet/qualcomm/Kconfig
@@ -61,6 +61,20 @@ config QCOM_EMAC
low power, Receive-Side Scaling (RSS), and IEEE 1588-2008
Precision Clock Synchronization Protocol.

+config QCOM_PPE
+ tristate "Qualcomm Technologies, Inc. PPE Ethernet support"
+ depends on HAS_IOMEM && OF
+ depends on COMMON_CLK
+ help
+ This driver supports the Qualcomm Technologies, Inc. packet
+ process engine (PPE) available in IPQ SoCs. The PPE houses
+ the Ethernet MACs and the Ethernet DMA (EDMA) hardware blocks.
+ It also supports L3 flow offload, L2 switching, RSS and
+ tunnel offload.
+
+ To compile this driver as a module, choose M here. The module
+ will be called qcom-ppe.
+
source "drivers/net/ethernet/qualcomm/rmnet/Kconfig"

endif # NET_VENDOR_QUALCOMM
diff --git a/drivers/net/ethernet/qualcomm/Makefile b/drivers/net/ethernet/qualcomm/Makefile
index 9250976dd884..166a59aea363 100644
--- a/drivers/net/ethernet/qualcomm/Makefile
+++ b/drivers/net/ethernet/qualcomm/Makefile
@@ -11,4 +11,5 @@ qcauart-objs := qca_uart.o

obj-y += emac/

+obj-$(CONFIG_QCOM_PPE) += ppe/
obj-$(CONFIG_RMNET) += rmnet/
diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile
new file mode 100644
index 000000000000..795aff6501e4
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/Makefile
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for the Qualcomm SoCs built-in PPE device driver
+#
+
+obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
+qcom-ppe-objs := ppe.o
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
new file mode 100644
index 000000000000..23f9de105062
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -0,0 +1,389 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE platform device probe, DTSI read and basic HW initialization functions
+ * such as BM, QM, TDM and scheduler configs.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/regmap.h>
+#include <linux/platform_device.h>
+#include <linux/soc/qcom/ppe.h>
+#include "ppe.h"
+
+static const char * const ppe_clock_name[PPE_CLK_MAX] = {
+ "cmn_ahb",
+ "cmn_sys",
+ "uniphy0_sys",
+ "uniphy1_sys",
+ "uniphy2_sys",
+ "uniphy0_ahb",
+ "uniphy1_ahb",
+ "uniphy2_ahb",
+ "gcc_nsscc",
+ "gcc_nssnoc_nsscc",
+ "gcc_nssnoc_snoc",
+ "gcc_nssnoc_snoc_1",
+ "gcc_im_sleep",
+ "nss_ppe",
+ "nss_ppe_cfg",
+ "nssnoc_ppe",
+ "nssnoc_ppe_cfg",
+ "nss_edma",
+ "nss_edma_cfg",
+ "nss_ppe_ipe",
+ "nss_ppe_btq",
+ "port1_mac",
+ "port2_mac",
+ "port3_mac",
+ "port4_mac",
+ "port5_mac",
+ "port6_mac",
+ "nss_port1_rx",
+ "nss_port1_tx",
+ "nss_port2_rx",
+ "nss_port2_tx",
+ "nss_port3_rx",
+ "nss_port3_tx",
+ "nss_port4_rx",
+ "nss_port4_tx",
+ "nss_port5_rx",
+ "nss_port5_tx",
+ "nss_port6_rx",
+ "nss_port6_tx",
+ "uniphy_port1_rx",
+ "uniphy_port1_tx",
+ "uniphy_port2_rx",
+ "uniphy_port2_tx",
+ "uniphy_port3_rx",
+ "uniphy_port3_tx",
+ "uniphy_port4_rx",
+ "uniphy_port4_tx",
+ "uniphy_port5_rx",
+ "uniphy_port5_tx",
+ "uniphy_port6_rx",
+ "uniphy_port6_tx",
+ "nss_port5_rx_clk_src",
+ "nss_port5_tx_clk_src",
+};
+
+static const char * const ppe_reset_name[PPE_RST_MAX] = {
+ "ppe",
+ "uniphy0_sys",
+ "uniphy1_sys",
+ "uniphy2_sys",
+ "uniphy0_ahb",
+ "uniphy1_ahb",
+ "uniphy2_ahb",
+ "uniphy0_xpcs",
+ "uniphy1_xpcs",
+ "uniphy2_xpcs",
+ "uniphy0_soft",
+ "uniphy1_soft",
+ "uniphy2_soft",
+ "uniphy_port1_dis",
+ "uniphy_port2_dis",
+ "uniphy_port3_dis",
+ "uniphy_port4_dis",
+ "uniphy_port1_rx",
+ "uniphy_port1_tx",
+ "uniphy_port2_rx",
+ "uniphy_port2_tx",
+ "nss_port1_rx",
+ "nss_port1_tx",
+ "nss_port2_rx",
+ "nss_port2_tx",
+ "nss_port1",
+ "nss_port2",
+ "nss_port3",
+ "nss_port4",
+ "nss_port5",
+ "nss_port6",
+ "nss_port1_mac",
+ "nss_port2_mac",
+ "nss_port3_mac",
+ "nss_port4_mac",
+ "nss_port5_mac",
+ "nss_port6_mac",
+};
+
+int ppe_type_get(struct ppe_device *ppe_dev)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+
+ if (!ppe_dev_priv)
+ return PPE_TYPE_MAX;
+
+ return ppe_dev_priv->ppe_type;
+}
+
+static int ppe_clock_set_enable(struct ppe_device *ppe_dev,
+ enum ppe_clk_id clk_id, unsigned long rate)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+ int ret;
+
+ if (clk_id >= PPE_CLK_MAX)
+ return -EINVAL;
+
+ if (rate != 0) {
+ ret = clk_set_rate(ppe_dev_priv->clk[clk_id], rate);
+ if (ret)
+ return ret;
+ }
+
+ return clk_prepare_enable(ppe_dev_priv->clk[clk_id]);
+}
+
+static int ppe_fix_clock_init(struct ppe_device *ppe_dev)
+{
+ unsigned long noc_rate, ppe_rate;
+ enum ppe_clk_id clk_id;
+ int ppe_type = ppe_type_get(ppe_dev);
+
+ switch (ppe_type) {
+ case PPE_TYPE_APPE:
+ noc_rate = 342857143;
+ ppe_rate = 353000000;
+ break;
+ case PPE_TYPE_MPPE:
+ noc_rate = 266660000;
+ ppe_rate = 200000000;
+ ppe_clock_set_enable(ppe_dev, PPE_IM_SLEEP_CLK, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ppe_clock_set_enable(ppe_dev, PPE_CMN_AHB_CLK, 0);
+ ppe_clock_set_enable(ppe_dev, PPE_CMN_SYS_CLK, 0);
+ ppe_clock_set_enable(ppe_dev, PPE_NSSCC_CLK, 100000000);
+ ppe_clock_set_enable(ppe_dev, PPE_NSSNOC_NSSCC_CLK, 100000000);
+
+ ppe_clock_set_enable(ppe_dev, PPE_NSSNOC_SNOC_CLK, noc_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_NSSNOC_SNOC_1_CLK, noc_rate);
+
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY0_SYS_CLK, 24000000);
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY1_SYS_CLK, 24000000);
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY0_AHB_CLK, 100000000);
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY1_AHB_CLK, 100000000);
+
+ if (ppe_type == PPE_TYPE_APPE) {
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY2_SYS_CLK, 24000000);
+ ppe_clock_set_enable(ppe_dev, PPE_UNIPHY2_AHB_CLK, 100000000);
+ }
+
+ ppe_clock_set_enable(ppe_dev, PPE_PORT1_MAC_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PORT2_MAC_CLK, ppe_rate);
+
+ if (ppe_type == PPE_TYPE_APPE) {
+ ppe_clock_set_enable(ppe_dev, PPE_PORT3_MAC_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PORT4_MAC_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PORT5_MAC_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PORT6_MAC_CLK, ppe_rate);
+ }
+
+ ppe_clock_set_enable(ppe_dev, PPE_PPE_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PPE_CFG_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_NSSNOC_PPE_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_NSSNOC_PPE_CFG_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_EDMA_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_EDMA_CFG_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PPE_IPE_CLK, ppe_rate);
+ ppe_clock_set_enable(ppe_dev, PPE_PPE_BTQ_CLK, ppe_rate);
+
+ /* Enable uniphy port clocks */
+ for (clk_id = PPE_NSS_PORT1_RX_CLK; clk_id <= PPE_UNIPHY_PORT6_TX_CLK; clk_id++)
+ ppe_clock_set_enable(ppe_dev, clk_id, 0);
+
+ return 0;
+}
+
+static int ppe_clock_config(struct platform_device *pdev)
+{
+ struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+ int ret;
+
+ ret = ppe_fix_clock_init(ppe_dev);
+ if (ret)
+ return ret;
+
+ /* Reset PPE */
+ reset_control_assert(ppe_dev_priv->rst[PPE_RST_PPE_RST]);
+ fsleep(100000);
+ reset_control_deassert(ppe_dev_priv->rst[PPE_RST_PPE_RST]);
+ fsleep(100000);
+
+ /* Reset the uniphy AHB that is connected to the external PHY chip */
+ if (ppe_type_get(ppe_dev) == PPE_TYPE_MPPE) {
+ reset_control_assert(ppe_dev_priv->rst[PPE_UNIPHY1_AHB_RST]);
+ fsleep(100000);
+ reset_control_deassert(ppe_dev_priv->rst[PPE_UNIPHY1_AHB_RST]);
+ fsleep(100000);
+ }
+
+ return 0;
+}
+
+bool ppe_is_probed(struct platform_device *pdev)
+{
+ struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
+
+ return ppe_dev && ppe_dev->is_ppe_probed;
+}
+EXPORT_SYMBOL_GPL(ppe_is_probed);
+
+struct ppe_device *ppe_dev_get(struct platform_device *pdev)
+{
+ return platform_get_drvdata(pdev);
+}
+EXPORT_SYMBOL_GPL(ppe_dev_get);
+
+static const struct regmap_range ppe_readable_ranges[] = {
+ regmap_reg_range(0x0, 0x1FF), /* GLB */
+ regmap_reg_range(0x400, 0x5FF), /* LPI CSR */
+ regmap_reg_range(0x1000, 0x11FF), /* GMAC0 */
+ regmap_reg_range(0x1200, 0x13FF), /* GMAC1 */
+ regmap_reg_range(0x1400, 0x15FF), /* GMAC2 */
+ regmap_reg_range(0x1600, 0x17FF), /* GMAC3 */
+ regmap_reg_range(0x1800, 0x19FF), /* GMAC4 */
+ regmap_reg_range(0x1A00, 0x1BFF), /* GMAC5 */
+ regmap_reg_range(0xB000, 0xEFFF), /* PRX CSR */
+ regmap_reg_range(0xF000, 0x1EFFF), /* IPE IV */
+ regmap_reg_range(0x20000, 0x5FFFF), /* PTX CSR */
+ regmap_reg_range(0x60000, 0x9FFFF), /* IPE L2 CSR */
+ regmap_reg_range(0xB0000, 0xEFFFF), /* IPO CSR */
+ regmap_reg_range(0x100000, 0x17FFFF), /* IPE PC */
+ regmap_reg_range(0x180000, 0x1BFFFF), /* PRE IPO CSR */
+ regmap_reg_range(0x1D0000, 0x1DFFFF), /* TUNNEL PARSER CSR */
+ regmap_reg_range(0x1E0000, 0x1EFFFF), /* INGRESS PARSE CSR */
+ regmap_reg_range(0x200000, 0x2FFFFF), /* IPE L3 */
+ regmap_reg_range(0x300000, 0x3FFFFF), /* IPE TL */
+ regmap_reg_range(0x400000, 0x4FFFFF), /* TM */
+ regmap_reg_range(0x500000, 0x503FFF), /* XGMAC0 */
+ regmap_reg_range(0x504000, 0x507FFF), /* XGMAC1 */
+ regmap_reg_range(0x508000, 0x50BFFF), /* XGMAC2 */
+ regmap_reg_range(0x50C000, 0x50FFFF), /* XGMAC3 */
+ regmap_reg_range(0x510000, 0x513FFF), /* XGMAC4 */
+ regmap_reg_range(0x514000, 0x517FFF), /* XGMAC5 */
+ regmap_reg_range(0x600000, 0x6FFFFF), /* BM */
+ regmap_reg_range(0x800000, 0x9FFFFF), /* QM */
+};
+
+static const struct regmap_access_table ppe_reg_table = {
+ .yes_ranges = ppe_readable_ranges,
+ .n_yes_ranges = ARRAY_SIZE(ppe_readable_ranges),
+};
+
+static const struct regmap_config ppe_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+ .rd_table = &ppe_reg_table,
+ .wr_table = &ppe_reg_table,
+ .max_register = 0x9FFFFF,
+ .fast_io = true,
+};
+
+static struct ppe_data *ppe_data_init(struct platform_device *pdev)
+{
+ struct ppe_data *ppe_dev_priv;
+ int ret;
+
+ ppe_dev_priv = devm_kzalloc(&pdev->dev, sizeof(*ppe_dev_priv), GFP_KERNEL);
+ if (!ppe_dev_priv)
+ return ERR_PTR(-ENOMEM);
+
+ if (of_device_is_compatible(pdev->dev.of_node, "qcom,ipq9574-ppe"))
+ ppe_dev_priv->ppe_type = PPE_TYPE_APPE;
+ else if (of_device_is_compatible(pdev->dev.of_node, "qcom,ipq5332-ppe"))
+ ppe_dev_priv->ppe_type = PPE_TYPE_MPPE;
+ else
+ return ERR_PTR(-EINVAL);
+
+ for (ret = 0; ret < PPE_CLK_MAX; ret++) {
+ ppe_dev_priv->clk[ret] = devm_clk_get_optional(&pdev->dev,
+ ppe_clock_name[ret]);
+ if (IS_ERR(ppe_dev_priv->clk[ret])) {
+ dev_err(&pdev->dev, "Failed to get the clock: %s\n",
+ ppe_clock_name[ret]);
+ return ERR_CAST(ppe_dev_priv->clk[ret]);
+ }
+ }
+
+ for (ret = 0; ret < PPE_RST_MAX; ret++) {
+ ppe_dev_priv->rst[ret] =
+ devm_reset_control_get_optional_exclusive(&pdev->dev,
+ ppe_reset_name[ret]);
+ if (IS_ERR(ppe_dev_priv->rst[ret])) {
+ dev_err(&pdev->dev, "Failed to get the reset: %s\n",
+ ppe_reset_name[ret]);
+ return ERR_CAST(ppe_dev_priv->rst[ret]);
+ }
+ }
+
+ return ppe_dev_priv;
+}
+
+static int qcom_ppe_probe(struct platform_device *pdev)
+{
+ struct ppe_device *ppe_dev;
+ void __iomem *base;
+ int ret;
+
+ ppe_dev = devm_kzalloc(&pdev->dev, sizeof(*ppe_dev), GFP_KERNEL);
+ if (!ppe_dev)
+ return -ENOMEM;
+
+ ppe_dev->dev = &pdev->dev;
+ base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(base))
+ return dev_err_probe(&pdev->dev,
+ PTR_ERR(base),
+ "Failed to ioremap PPE registers\n");
+
+ ppe_dev->regmap = devm_regmap_init_mmio(&pdev->dev, base, &ppe_regmap_config);
+ if (IS_ERR(ppe_dev->regmap))
+ return dev_err_probe(&pdev->dev,
+ PTR_ERR(ppe_dev->regmap),
+ "Failed to initialize regmap\n");
+
+ ppe_dev->ppe_priv = ppe_data_init(pdev);
+ if (IS_ERR(ppe_dev->ppe_priv))
+ return dev_err_probe(&pdev->dev,
+ PTR_ERR(ppe_dev->ppe_priv),
+ "Failed to initialize PPE data\n");
+
+ platform_set_drvdata(pdev, ppe_dev);
+ ret = ppe_clock_config(pdev);
+ if (ret)
+ return dev_err_probe(&pdev->dev,
+ ret,
+ "Failed to configure PPE clocks\n");
+
+ ppe_dev->is_ppe_probed = true;
+ return 0;
+}
+
+static int qcom_ppe_remove(struct platform_device *pdev)
+{
+ return 0;
+}
+
+static const struct of_device_id qcom_ppe_of_match[] = {
+ { .compatible = "qcom,ipq9574-ppe", },
+ { .compatible = "qcom,ipq5332-ppe", },
+ {},
+};
+
+static struct platform_driver qcom_ppe_driver = {
+ .driver = {
+ .name = "qcom_ppe",
+ .of_match_table = qcom_ppe_of_match,
+ },
+ .probe = qcom_ppe_probe,
+ .remove = qcom_ppe_remove,
+};
+module_platform_driver(qcom_ppe_driver);
+
+MODULE_DEVICE_TABLE(of, qcom_ppe_of_match);
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Qualcomm PPE driver");
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
new file mode 100644
index 000000000000..f54406a6feb7
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE clock, reset and register read/write declarations. */
+
+#ifndef __PPE_H__
+#define __PPE_H__
+
+#include <linux/clk.h>
+#include <linux/reset.h>
+
+enum ppe_clk_id {
+ /* clocks for CMN PLL */
+ PPE_CMN_AHB_CLK,
+ PPE_CMN_SYS_CLK,
+ /* clocks for PPE integrated uniphy */
+ PPE_UNIPHY0_SYS_CLK,
+ PPE_UNIPHY1_SYS_CLK,
+ PPE_UNIPHY2_SYS_CLK,
+ PPE_UNIPHY0_AHB_CLK,
+ PPE_UNIPHY1_AHB_CLK,
+ PPE_UNIPHY2_AHB_CLK,
+ /* clocks for NSS NOC that is connected with PPE */
+ PPE_NSSCC_CLK,
+ PPE_NSSNOC_NSSCC_CLK,
+ PPE_NSSNOC_SNOC_CLK,
+ PPE_NSSNOC_SNOC_1_CLK,
+ /* clock for sleep that is needed for PPE reset */
+ PPE_IM_SLEEP_CLK,
+ /* clocks for PPE block */
+ PPE_PPE_CLK,
+ PPE_PPE_CFG_CLK,
+ PPE_NSSNOC_PPE_CLK,
+ PPE_NSSNOC_PPE_CFG_CLK,
+ /* clocks for EDMA to be enabled during the PPE initialization */
+ PPE_EDMA_CLK,
+ PPE_EDMA_CFG_CLK,
+ /* clocks for PPE IPE/BTQ modules */
+ PPE_PPE_IPE_CLK,
+ PPE_PPE_BTQ_CLK,
+ /* clocks for PPE integrated MAC */
+ PPE_PORT1_MAC_CLK,
+ PPE_PORT2_MAC_CLK,
+ PPE_PORT3_MAC_CLK,
+ PPE_PORT4_MAC_CLK,
+ PPE_PORT5_MAC_CLK,
+ PPE_PORT6_MAC_CLK,
+ /* clocks for PPE port */
+ PPE_NSS_PORT1_RX_CLK,
+ PPE_NSS_PORT1_TX_CLK,
+ PPE_NSS_PORT2_RX_CLK,
+ PPE_NSS_PORT2_TX_CLK,
+ PPE_NSS_PORT3_RX_CLK,
+ PPE_NSS_PORT3_TX_CLK,
+ PPE_NSS_PORT4_RX_CLK,
+ PPE_NSS_PORT4_TX_CLK,
+ PPE_NSS_PORT5_RX_CLK,
+ PPE_NSS_PORT5_TX_CLK,
+ PPE_NSS_PORT6_RX_CLK,
+ PPE_NSS_PORT6_TX_CLK,
+ /* clocks for PPE uniphy port */
+ PPE_UNIPHY_PORT1_RX_CLK,
+ PPE_UNIPHY_PORT1_TX_CLK,
+ PPE_UNIPHY_PORT2_RX_CLK,
+ PPE_UNIPHY_PORT2_TX_CLK,
+ PPE_UNIPHY_PORT3_RX_CLK,
+ PPE_UNIPHY_PORT3_TX_CLK,
+ PPE_UNIPHY_PORT4_RX_CLK,
+ PPE_UNIPHY_PORT4_TX_CLK,
+ PPE_UNIPHY_PORT5_RX_CLK,
+ PPE_UNIPHY_PORT5_TX_CLK,
+ PPE_UNIPHY_PORT6_RX_CLK,
+ PPE_UNIPHY_PORT6_TX_CLK,
+ /* source clock for PPE port5 */
+ PPE_NSS_PORT5_RX_CLK_SRC,
+ PPE_NSS_PORT5_TX_CLK_SRC,
+ PPE_CLK_MAX
+};
+
+enum ppe_rst_id {
+ /* reset for PPE block */
+ PPE_RST_PPE_RST,
+ /* resets for uniphy */
+ PPE_UNIPHY0_SYS_RST,
+ PPE_UNIPHY1_SYS_RST,
+ PPE_UNIPHY2_SYS_RST,
+ PPE_UNIPHY0_AHB_RST,
+ PPE_UNIPHY1_AHB_RST,
+ PPE_UNIPHY2_AHB_RST,
+ PPE_UNIPHY0_XPCS_RST,
+ PPE_UNIPHY1_XPCS_RST,
+ PPE_UNIPHY2_XPCS_RST,
+ PPE_UNIPHY0_SOFT_RST,
+ PPE_UNIPHY1_SOFT_RST,
+ PPE_UNIPHY2_SOFT_RST,
+ /* resets for uniphy port */
+ PPE_UNIPHY_PORT1_DIS,
+ PPE_UNIPHY_PORT2_DIS,
+ PPE_UNIPHY_PORT3_DIS,
+ PPE_UNIPHY_PORT4_DIS,
+ PPE_UNIPHY_PORT1_RX_RST,
+ PPE_UNIPHY_PORT1_TX_RST,
+ PPE_UNIPHY_PORT2_RX_RST,
+ PPE_UNIPHY_PORT2_TX_RST,
+ /* resets for PPE port */
+ PPE_NSS_PORT1_RX_RST,
+ PPE_NSS_PORT1_TX_RST,
+ PPE_NSS_PORT2_RX_RST,
+ PPE_NSS_PORT2_TX_RST,
+ PPE_NSS_PORT1_RST,
+ PPE_NSS_PORT2_RST,
+ PPE_NSS_PORT3_RST,
+ PPE_NSS_PORT4_RST,
+ PPE_NSS_PORT5_RST,
+ PPE_NSS_PORT6_RST,
+ /* resets for PPE MAC */
+ PPE_NSS_PORT1_MAC_RST,
+ PPE_NSS_PORT2_MAC_RST,
+ PPE_NSS_PORT3_MAC_RST,
+ PPE_NSS_PORT4_MAC_RST,
+ PPE_NSS_PORT5_MAC_RST,
+ PPE_NSS_PORT6_MAC_RST,
+ PPE_RST_MAX
+};
+
+/* PPE types used on the different IPQ SoC platforms */
+enum {
+ PPE_TYPE_APPE,
+ PPE_TYPE_MPPE,
+ PPE_TYPE_MAX = 0xff,
+};
+
+/* PPE private data, which differs based on the PPE type */
+struct ppe_data {
+ int ppe_type;
+ struct clk *clk[PPE_CLK_MAX];
+ struct reset_control *rst[PPE_RST_MAX];
+};
+
+int ppe_type_get(struct ppe_device *ppe_dev);
+#endif
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
new file mode 100644
index 000000000000..90566a8841b4
--- /dev/null
+++ b/include/linux/soc/qcom/ppe.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE operations to be used by ethernet driver */
+
+#ifndef __QCOM_PPE_H__
+#define __QCOM_PPE_H__
+
+#include <linux/platform_device.h>
+
+/* PPE platform private data, which is used by external drivers such
+ * as the Ethernet DMA driver.
+ */
+struct ppe_device {
+ struct device *dev;
+ struct regmap *regmap;
+ bool is_ppe_probed;
+ void *ppe_priv;
+};
+
+/* Check whether the PPE platform driver has been registered and probed. */
+bool ppe_is_probed(struct platform_device *pdev);
+
+/* Function used to get the PPE device */
+struct ppe_device *ppe_dev_get(struct platform_device *pdev);
+#endif
--
2.42.0


2024-01-10 11:44:25

by Luo Jie

Subject: [PATCH net-next 07/20] net: ethernet: qualcomm: Add PPE port scheduler resource

The PPE port scheduler resources are used to dispatch packets with QoS
offloaded to the PPE hardware. The resources include the hardware
queues, DRR (deficit round robin) nodes and SP (strict priority) nodes.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 83 ++++++++++++++++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe.h | 13 ++++
2 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 85d8b06a326b..8bf32a7265d2 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -16,6 +16,7 @@
#include "ppe.h"
#include "ppe_regs.h"

+#define PPE_SCHEDULER_PORT_NUM 8
+
static const char * const ppe_clock_name[PPE_CLK_MAX] = {
"cmn_ahb",
"cmn_sys",
@@ -112,6 +113,8 @@ static const char * const ppe_reset_name[PPE_RST_MAX] = {
"nss_port6_mac",
};

+static struct ppe_scheduler_port_resource ppe_scheduler_res[PPE_SCHEDULER_PORT_NUM];
+
int ppe_write(struct ppe_device *ppe_dev, u32 reg, unsigned int val)
{
return regmap_write(ppe_dev->regmap, reg, val);
@@ -730,6 +733,80 @@ static int of_parse_ppe_tdm(struct ppe_device *ppe_dev,
return ret;
};

+static int of_parse_ppe_scheduler_resource(struct ppe_device *ppe_dev,
+ struct device_node *resource_node)
+{
+ struct device_node *port_node;
+ u32 port;
+
+ for_each_available_child_of_node(resource_node, port_node) {
+ if (of_property_read_u32(port_node, "port-id", &port))
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "port-id not defined on resource\n");
+
+ if (port >= ARRAY_SIZE(ppe_scheduler_res))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid port-id defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,ucast-queue",
+ ppe_scheduler_res[port].ucastq,
+ ARRAY_SIZE(ppe_scheduler_res[port].ucastq)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,ucast-queue defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,mcast-queue",
+ ppe_scheduler_res[port].mcastq,
+ ARRAY_SIZE(ppe_scheduler_res[port].mcastq)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,mcast-queue defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,l0sp",
+ ppe_scheduler_res[port].l0sp,
+ ARRAY_SIZE(ppe_scheduler_res[port].l0sp)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,l0sp defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,l0cdrr",
+ ppe_scheduler_res[port].l0cdrr,
+ ARRAY_SIZE(ppe_scheduler_res[port].l0cdrr)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,l0cdrr defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,l0edrr",
+ ppe_scheduler_res[port].l0edrr,
+ ARRAY_SIZE(ppe_scheduler_res[port].l0edrr)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,l0edrr defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,l1cdrr",
+ ppe_scheduler_res[port].l1cdrr,
+ ARRAY_SIZE(ppe_scheduler_res[port].l1cdrr)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,l1cdrr defined on resource\n");
+
+ if (of_property_read_u32_array(port_node, "qcom,l1edrr",
+ ppe_scheduler_res[port].l1edrr,
+ ARRAY_SIZE(ppe_scheduler_res[port].l1edrr)))
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid qcom,l1edrr defined on resource\n");
+ }
+
+ return 0;
+}
+
+static int of_parse_ppe_scheduler(struct ppe_device *ppe_dev,
+ struct device_node *ppe_node)
+{
+ struct device_node *scheduler_node;
+
+ scheduler_node = of_get_child_by_name(ppe_node, "port-scheduler-resource");
+ if (!scheduler_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "port-scheduler-resource is not defined\n");
+
+ return of_parse_ppe_scheduler_resource(ppe_dev, scheduler_node);
+}
+
static int of_parse_ppe_config(struct ppe_device *ppe_dev,
struct device_node *ppe_node)
{
@@ -743,7 +820,11 @@ static int of_parse_ppe_config(struct ppe_device *ppe_dev,
if (ret)
return ret;

- return of_parse_ppe_tdm(ppe_dev, ppe_node);
+ ret = of_parse_ppe_tdm(ppe_dev, ppe_node);
+ if (ret)
+ return ret;
+
+ return of_parse_ppe_scheduler(ppe_dev, ppe_node);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 6caef42ab235..84b1c9761f79 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -139,6 +139,19 @@ struct ppe_data {
struct reset_control *rst[PPE_RST_MAX];
};

+/* PPE port QoS resources, which include the queue ranges and the
+ * DRR (deficit round robin) and SP (strict priority) node ranges.
+ */
+struct ppe_scheduler_port_resource {
+ int ucastq[2];
+ int mcastq[2];
+ int l0sp[2];
+ int l0cdrr[2];
+ int l0edrr[2];
+ int l1cdrr[2];
+ int l1edrr[2];
+};
+
int ppe_type_get(struct ppe_device *ppe_dev);

int ppe_write(struct ppe_device *ppe_dev, u32 reg, unsigned int val);
--
2.42.0


2024-01-10 11:44:52

by Luo Jie

Subject: [PATCH net-next 08/20] net: ethernet: qualcomm: Add PPE scheduler config

The PPE scheduler is configured according to the device tree. This
configuration is read and used for initialization by the PPE driver,
and may be adjusted later by the EDMA driver.

The PPE scheduler config determines the priority of packet scheduling.
The PPE supports a two-level QoS hierarchy, level 0 and level 1. The
scheduler config is used to construct the PPE QoS hierarchy for each
physical port.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/Makefile | 2 +-
drivers/net/ethernet/qualcomm/ppe/ppe.c | 194 ++++++++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 206 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 45 ++++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 64 ++++++
5 files changed, 508 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_ops.h

diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile
index 795aff6501e4..c00265339aa7 100644
--- a/drivers/net/ethernet/qualcomm/ppe/Makefile
+++ b/drivers/net/ethernet/qualcomm/ppe/Makefile
@@ -4,4 +4,4 @@
#

obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
-qcom-ppe-objs := ppe.o
+qcom-ppe-objs := ppe.o ppe_ops.o
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 8bf32a7265d2..75c24a87e2be 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -15,8 +15,13 @@
#include <linux/soc/qcom/ppe.h>
#include "ppe.h"
#include "ppe_regs.h"
+#include "ppe_ops.h"

#define PPE_SCHEDULER_PORT_NUM 8
+#define PPE_SCHEDULER_L0_NUM 300
+#define PPE_SCHEDULER_L1_NUM 64
+#define PPE_SP_PRIORITY_NUM 8
+
static const char * const ppe_clock_name[PPE_CLK_MAX] = {
"cmn_ahb",
"cmn_sys",
@@ -794,17 +799,202 @@ static int of_parse_ppe_scheduler_resource(struct ppe_device *ppe_dev,
return 0;
}

+static int of_parse_ppe_scheduler_group_config(struct ppe_device *ppe_dev,
+ struct device_node *group_node,
+ int port,
+ const char *node_name,
+ const char *loop_name)
+{
+ struct ppe_qos_scheduler_cfg qos_cfg;
+ const struct ppe_queue_ops *ppe_queue_ops;
+ const __be32 *paddr;
+ int ret, len, i, node_id, level, node_max;
+ u32 tmp_cfg[5], pri_loop, max_pri;
+
+ ppe_queue_ops = ppe_queue_config_ops_get();
+ if (!ppe_queue_ops->queue_scheduler_set)
+ return -EINVAL;
+
+ /* The property node_name may hold either a single value or an
+ * array of values.
+ *
+ * If an array is given, the property loop_name must not be
+ * specified.
+ *
+ * If a single value is given, the queue IDs are generated by
+ * looping loop_name times starting from that base value.
+ */
+ paddr = of_get_property(group_node, node_name, &len);
+ if (!paddr)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get queue %s of port %d\n",
+ node_name, port);
+
+ len /= sizeof(u32);
+
+ /* There are two levels of scheduler config: the level 0 scheduler
+ * is configured on the queue, and the level 1 scheduler is
+ * configured on the flow fed by the output of the level 0
+ * scheduler.
+ */
+ if (!strcmp(node_name, "qcom,flow")) {
+ level = 1;
+ node_max = PPE_SCHEDULER_L1_NUM;
+ } else {
+ level = 0;
+ node_max = PPE_SCHEDULER_L0_NUM;
+ }
+
+ if (of_property_read_u32_array(group_node, "qcom,scheduler-config",
+ tmp_cfg, ARRAY_SIZE(tmp_cfg)))
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get qcom,scheduler-config of port %d\n",
+ port);
+
+ if (of_property_read_u32(group_node, loop_name, &pri_loop)) {
+ for (i = 0; i < len; i++) {
+ node_id = be32_to_cpup(paddr + i);
+ if (node_id >= node_max)
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid node ID %d of port %d\n",
+ node_id, port);
+
+ memset(&qos_cfg, 0, sizeof(qos_cfg));
+
+ qos_cfg.sp_id = tmp_cfg[0];
+ qos_cfg.c_pri = tmp_cfg[1];
+ qos_cfg.c_drr_id = tmp_cfg[2];
+ qos_cfg.e_pri = tmp_cfg[3];
+ qos_cfg.e_drr_id = tmp_cfg[4];
+ qos_cfg.c_drr_wt = 1;
+ qos_cfg.e_drr_wt = 1;
+ ret = ppe_queue_ops->queue_scheduler_set(ppe_dev,
+ node_id,
+ level,
+ port,
+ qos_cfg);
+ if (ret)
+ return dev_err_probe(ppe_dev->dev, ret,
+ "scheduler set fail on node ID %d\n",
+ node_id);
+ }
+ } else {
+ /* Only one base node ID allowed to loop. */
+ if (len != 1)
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Multiple node ID defined to loop for port %d\n",
+ port);
+
+ /* The property qcom,drr-max-priority is optional for the loop;
+ * if it is not defined, the default value PPE_SP_PRIORITY_NUM
+ * is used.
+ */
+ max_pri = PPE_SP_PRIORITY_NUM;
+ of_property_read_u32(group_node, "qcom,drr-max-priority", &max_pri);
+
+ node_id = be32_to_cpup(paddr);
+ if (node_id >= node_max)
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Invalid node ID %d defined to loop for port %d\n",
+ node_id, port);
+
+ for (i = 0; i < pri_loop; i++) {
+ memset(&qos_cfg, 0, sizeof(qos_cfg));
+
+ qos_cfg.sp_id = tmp_cfg[0] + i / max_pri;
+ qos_cfg.c_pri = tmp_cfg[1] + i % max_pri;
+ qos_cfg.c_drr_id = tmp_cfg[2] + i;
+ qos_cfg.e_pri = tmp_cfg[3] + i % max_pri;
+ qos_cfg.e_drr_id = tmp_cfg[4] + i;
+ qos_cfg.c_drr_wt = 1;
+ qos_cfg.e_drr_wt = 1;
+ ret = ppe_queue_ops->queue_scheduler_set(ppe_dev,
+ node_id + i,
+ level,
+ port,
+ qos_cfg);
+ if (ret)
+ return dev_err_probe(ppe_dev->dev, ret,
+ "scheduler set fail on node ID %d\n",
+ node_id + i);
+ }
+ }
+
+ return 0;
+}
+
+static int of_parse_ppe_scheduler_config(struct ppe_device *ppe_dev,
+ struct device_node *port_node)
+{
+ struct device_node *scheduler_node, *child;
+ int port, ret;
+
+ if (of_property_read_u32(port_node, "port-id", &port))
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get port-id of l0scheduler\n");
+
+ scheduler_node = of_get_child_by_name(port_node, "l0scheduler");
+ if (!scheduler_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get l0scheduler config\n");
+
+ for_each_available_child_of_node(scheduler_node, child) {
+ ret = of_parse_ppe_scheduler_group_config(ppe_dev, child, port,
+ "qcom,ucast-queue",
+ "qcom,ucast-loop-priority");
+ if (ret)
+ return ret;
+
+ ret = of_parse_ppe_scheduler_group_config(ppe_dev, child, port,
+ "qcom,mcast-queue",
+ "qcom,mcast-loop-priority");
+ if (ret)
+ return ret;
+ }
+
+ scheduler_node = of_get_child_by_name(port_node, "l1scheduler");
+ if (!scheduler_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get l1scheduler config\n");
+
+ for_each_available_child_of_node(scheduler_node, child) {
+ ret = of_parse_ppe_scheduler_group_config(ppe_dev, child, port,
+ "qcom,flow",
+ "qcom,flow-loop-priority");
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int of_parse_ppe_scheduler(struct ppe_device *ppe_dev,
struct device_node *ppe_node)
{
- struct device_node *scheduler_node;
+ struct device_node *scheduler_node, *port_node;
+ int ret;

scheduler_node = of_get_child_by_name(ppe_node, "port-scheduler-resource");
if (!scheduler_node)
return dev_err_probe(ppe_dev->dev, -ENODEV,
"port-scheduler-resource is not defined\n");

- return of_parse_ppe_scheduler_resource(ppe_dev, scheduler_node);
+ ret = of_parse_ppe_scheduler_resource(ppe_dev, scheduler_node);
+ if (ret)
+ return ret;
+
+ scheduler_node = of_get_child_by_name(ppe_node, "port-scheduler-config");
+ if (!scheduler_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "port-scheduler-config is not defined\n");
+
+ for_each_available_child_of_node(scheduler_node, port_node) {
+ ret = of_parse_ppe_scheduler_config(ppe_dev, port_node);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
}

static int of_parse_ppe_config(struct ppe_device *ppe_dev,
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
new file mode 100644
index 000000000000..7853c2fdcc63
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -0,0 +1,206 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* Low level PPE operations made available to higher level network drivers
+ * such as ethernet or QoS drivers.
+ */
+
+#include <linux/soc/qcom/ppe.h>
+#include "ppe_ops.h"
+#include "ppe_regs.h"
+#include "ppe.h"
+
+static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev,
+ int node_id, int port,
+ struct ppe_qos_scheduler_cfg scheduler_cfg)
+{
+ u32 val, index;
+
+ if (node_id >= PPE_L0_FLOW_MAP_TBL_NUM)
+ return -EINVAL;
+
+ val = FIELD_PREP(PPE_L0_FLOW_MAP_TBL_SP_ID, scheduler_cfg.sp_id) |
+ FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_PRI, scheduler_cfg.c_pri) |
+ FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_PRI, scheduler_cfg.e_pri) |
+ FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_DRR_WT, scheduler_cfg.c_drr_wt) |
+ FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_DRR_WT, scheduler_cfg.e_drr_wt);
+ index = PPE_L0_FLOW_MAP_TBL + node_id * PPE_L0_FLOW_MAP_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L0_C_SP_CFG_TBL_DRR_ID, scheduler_cfg.c_drr_id) |
+ FIELD_PREP(PPE_L0_C_SP_CFG_TBL_DRR_CREDIT_UNIT, scheduler_cfg.c_drr_unit);
+ index = PPE_L0_C_SP_CFG_TBL +
+ (scheduler_cfg.sp_id * 8 + scheduler_cfg.c_pri) * PPE_L0_C_SP_CFG_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L0_E_SP_CFG_TBL_DRR_ID, scheduler_cfg.e_drr_id) |
+ FIELD_PREP(PPE_L0_E_SP_CFG_TBL_DRR_CREDIT_UNIT, scheduler_cfg.e_drr_unit);
+ index = PPE_L0_E_SP_CFG_TBL +
+ (scheduler_cfg.sp_id * 8 + scheduler_cfg.e_pri) * PPE_L0_E_SP_CFG_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM, port);
+ index = PPE_L0_FLOW_PORT_MAP_TBL + node_id * PPE_L0_FLOW_PORT_MAP_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ index = PPE_L0_COMP_CFG_TBL + node_id * PPE_L0_COMP_CFG_TBL_INC;
+ return ppe_mask(ppe_dev, index, PPE_L0_COMP_CFG_TBL_DRR_METER_LEN,
+ FIELD_PREP(PPE_L0_COMP_CFG_TBL_DRR_METER_LEN,
+ scheduler_cfg.drr_frame_mode));
+}
+
+static int ppe_scheduler_l0_queue_map_get(struct ppe_device *ppe_dev,
+ int node_id, int *port,
+ struct ppe_qos_scheduler_cfg *scheduler_cfg)
+{
+ u32 val, index;
+
+ if (node_id >= PPE_L0_FLOW_MAP_TBL_NUM)
+ return -EINVAL;
+
+ index = PPE_L0_FLOW_MAP_TBL + node_id * PPE_L0_FLOW_MAP_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->sp_id = FIELD_GET(PPE_L0_FLOW_MAP_TBL_SP_ID, val);
+ scheduler_cfg->c_pri = FIELD_GET(PPE_L0_FLOW_MAP_TBL_C_PRI, val);
+ scheduler_cfg->e_pri = FIELD_GET(PPE_L0_FLOW_MAP_TBL_E_PRI, val);
+ scheduler_cfg->c_drr_wt = FIELD_GET(PPE_L0_FLOW_MAP_TBL_C_DRR_WT, val);
+ scheduler_cfg->e_drr_wt = FIELD_GET(PPE_L0_FLOW_MAP_TBL_E_DRR_WT, val);
+
+ index = PPE_L0_C_SP_CFG_TBL +
+ (scheduler_cfg->sp_id * 8 + scheduler_cfg->c_pri) * PPE_L0_C_SP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->c_drr_id = FIELD_GET(PPE_L0_C_SP_CFG_TBL_DRR_ID, val);
+ scheduler_cfg->c_drr_unit = FIELD_GET(PPE_L0_C_SP_CFG_TBL_DRR_CREDIT_UNIT, val);
+
+ index = PPE_L0_E_SP_CFG_TBL +
+ (scheduler_cfg->sp_id * 8 + scheduler_cfg->e_pri) * PPE_L0_E_SP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->e_drr_id = FIELD_GET(PPE_L0_E_SP_CFG_TBL_DRR_ID, val);
+ scheduler_cfg->e_drr_unit = FIELD_GET(PPE_L0_E_SP_CFG_TBL_DRR_CREDIT_UNIT, val);
+
+ index = PPE_L0_FLOW_PORT_MAP_TBL + node_id * PPE_L0_FLOW_PORT_MAP_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ *port = FIELD_GET(PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM, val);
+
+ index = PPE_L0_COMP_CFG_TBL + node_id * PPE_L0_COMP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->drr_frame_mode = FIELD_GET(PPE_L0_COMP_CFG_TBL_DRR_METER_LEN, val);
+
+ return 0;
+}
+
+static int ppe_scheduler_l1_queue_map_set(struct ppe_device *ppe_dev,
+ int node_id, int port,
+ struct ppe_qos_scheduler_cfg scheduler_cfg)
+{
+ u32 val, index;
+
+ if (node_id >= PPE_L1_FLOW_MAP_TBL_NUM)
+ return -EINVAL;
+
+ val = FIELD_PREP(PPE_L1_FLOW_MAP_TBL_SP_ID, scheduler_cfg.sp_id) |
+ FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_PRI, scheduler_cfg.c_pri) |
+ FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_PRI, scheduler_cfg.e_pri) |
+ FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_DRR_WT, scheduler_cfg.c_drr_wt) |
+ FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_DRR_WT, scheduler_cfg.e_drr_wt);
+ index = PPE_L1_FLOW_MAP_TBL + node_id * PPE_L1_FLOW_MAP_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L1_C_SP_CFG_TBL_DRR_ID, scheduler_cfg.c_drr_id) |
+ FIELD_PREP(PPE_L1_C_SP_CFG_TBL_DRR_CREDIT_UNIT, scheduler_cfg.c_drr_unit);
+ index = PPE_L1_C_SP_CFG_TBL +
+ (scheduler_cfg.sp_id * 8 + scheduler_cfg.c_pri) * PPE_L1_C_SP_CFG_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L1_E_SP_CFG_TBL_DRR_ID, scheduler_cfg.e_drr_id) |
+ FIELD_PREP(PPE_L1_E_SP_CFG_TBL_DRR_CREDIT_UNIT, scheduler_cfg.e_drr_unit);
+ index = PPE_L1_E_SP_CFG_TBL +
+ (scheduler_cfg.sp_id * 8 + scheduler_cfg.e_pri) * PPE_L1_E_SP_CFG_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ val = FIELD_PREP(PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM, port);
+ index = PPE_L1_FLOW_PORT_MAP_TBL + node_id * PPE_L1_FLOW_PORT_MAP_TBL_INC;
+ ppe_write(ppe_dev, index, val);
+
+ index = PPE_L1_COMP_CFG_TBL + node_id * PPE_L1_COMP_CFG_TBL_INC;
+ return ppe_mask(ppe_dev, index, PPE_L1_COMP_CFG_TBL_DRR_METER_LEN,
+ FIELD_PREP(PPE_L1_COMP_CFG_TBL_DRR_METER_LEN,
+ scheduler_cfg.drr_frame_mode));
+}
+
+static int ppe_scheduler_l1_queue_map_get(struct ppe_device *ppe_dev,
+ int node_id, int *port,
+ struct ppe_qos_scheduler_cfg *scheduler_cfg)
+{
+ u32 val, index;
+
+ if (node_id >= PPE_L1_FLOW_MAP_TBL_NUM)
+ return -EINVAL;
+
+ index = PPE_L1_FLOW_MAP_TBL + node_id * PPE_L1_FLOW_MAP_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->sp_id = FIELD_GET(PPE_L1_FLOW_MAP_TBL_SP_ID, val);
+ scheduler_cfg->c_pri = FIELD_GET(PPE_L1_FLOW_MAP_TBL_C_PRI, val);
+ scheduler_cfg->e_pri = FIELD_GET(PPE_L1_FLOW_MAP_TBL_E_PRI, val);
+ scheduler_cfg->c_drr_wt = FIELD_GET(PPE_L1_FLOW_MAP_TBL_C_DRR_WT, val);
+ scheduler_cfg->e_drr_wt = FIELD_GET(PPE_L1_FLOW_MAP_TBL_E_DRR_WT, val);
+
+ index = PPE_L1_C_SP_CFG_TBL +
+ (scheduler_cfg->sp_id * 8 + scheduler_cfg->c_pri) * PPE_L1_C_SP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->c_drr_id = FIELD_GET(PPE_L1_C_SP_CFG_TBL_DRR_ID, val);
+ scheduler_cfg->c_drr_unit = FIELD_GET(PPE_L1_C_SP_CFG_TBL_DRR_CREDIT_UNIT, val);
+
+ index = PPE_L1_E_SP_CFG_TBL +
+ (scheduler_cfg->sp_id * 8 + scheduler_cfg->e_pri) * PPE_L1_E_SP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->e_drr_id = FIELD_GET(PPE_L1_E_SP_CFG_TBL_DRR_ID, val);
+ scheduler_cfg->e_drr_unit = FIELD_GET(PPE_L1_E_SP_CFG_TBL_DRR_CREDIT_UNIT, val);
+
+ index = PPE_L1_FLOW_PORT_MAP_TBL + node_id * PPE_L1_FLOW_PORT_MAP_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ *port = FIELD_GET(PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM, val);
+
+ index = PPE_L1_COMP_CFG_TBL + node_id * PPE_L1_COMP_CFG_TBL_INC;
+ ppe_read(ppe_dev, index, &val);
+ scheduler_cfg->drr_frame_mode = FIELD_GET(PPE_L1_COMP_CFG_TBL_DRR_METER_LEN, val);
+
+ return 0;
+}
+
+static int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
+ int node_id, int level, int port,
+ struct ppe_qos_scheduler_cfg scheduler_cfg)
+{
+ if (level == 0)
+ return ppe_scheduler_l0_queue_map_set(ppe_dev, node_id, port, scheduler_cfg);
+ else if (level == 1)
+ return ppe_scheduler_l1_queue_map_set(ppe_dev, node_id, port, scheduler_cfg);
+ else
+ return -EINVAL;
+}
+
+static int ppe_queue_scheduler_get(struct ppe_device *ppe_dev,
+ int node_id, int level, int *port,
+ struct ppe_qos_scheduler_cfg *scheduler_cfg)
+{
+ if (level == 0)
+ return ppe_scheduler_l0_queue_map_get(ppe_dev, node_id, port, scheduler_cfg);
+ else if (level == 1)
+ return ppe_scheduler_l1_queue_map_get(ppe_dev, node_id, port, scheduler_cfg);
+ else
+ return -EINVAL;
+}
+
+static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
+ .queue_scheduler_set = ppe_queue_scheduler_set,
+ .queue_scheduler_get = ppe_queue_scheduler_get,
+};
+
+const struct ppe_queue_ops *ppe_queue_config_ops_get(void)
+{
+ return &qcom_ppe_queue_config_ops;
+}
+EXPORT_SYMBOL_GPL(ppe_queue_config_ops_get);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
new file mode 100644
index 000000000000..4980e3fed1c0
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* Low level PPE operations to be used by higher level network drivers
+ * such as ethernet or QoS drivers.
+ */
+
+#ifndef __PPE_OPS_H__
+#define __PPE_OPS_H__
+
+/* PPE hardware QoS configuration used to dispatch packets passed
+ * through the PPE. The scheduler supports DRR (deficit round robin,
+ * with weight) and SP (strict priority).
+ */
+struct ppe_qos_scheduler_cfg {
+ int sp_id;
+ int e_pri;
+ int c_pri;
+ int c_drr_id;
+ int e_drr_id;
+ int e_drr_wt;
+ int c_drr_wt;
+ int c_drr_unit;
+ int e_drr_unit;
+ int drr_frame_mode;
+};
+
+/* The operations are used to configure the PPE queue related resource */
+struct ppe_queue_ops {
+ int (*queue_scheduler_set)(struct ppe_device *ppe_dev,
+ int node_id,
+ int level,
+ int port,
+ struct ppe_qos_scheduler_cfg scheduler_cfg);
+ int (*queue_scheduler_get)(struct ppe_device *ppe_dev,
+ int node_id,
+ int level,
+ int *port,
+ struct ppe_qos_scheduler_cfg *scheduler_cfg);
+};
+
+const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
+#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 589f92a4f607..10daa70f28e9 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -31,11 +31,75 @@
#define PPE_PSCH_TDM_DEPTH_CFG_INC 4
#define PPE_PSCH_TDM_DEPTH_CFG_TDM_DEPTH GENMASK(7, 0)

+#define PPE_L0_FLOW_MAP_TBL 0x402000
+#define PPE_L0_FLOW_MAP_TBL_NUM 300
+#define PPE_L0_FLOW_MAP_TBL_INC 0x10
+#define PPE_L0_FLOW_MAP_TBL_SP_ID GENMASK(5, 0)
+#define PPE_L0_FLOW_MAP_TBL_C_PRI GENMASK(8, 6)
+#define PPE_L0_FLOW_MAP_TBL_E_PRI GENMASK(11, 9)
+#define PPE_L0_FLOW_MAP_TBL_C_DRR_WT GENMASK(21, 12)
+#define PPE_L0_FLOW_MAP_TBL_E_DRR_WT GENMASK(31, 22)
+
+#define PPE_L0_C_SP_CFG_TBL 0x404000
+#define PPE_L0_C_SP_CFG_TBL_NUM 512
+#define PPE_L0_C_SP_CFG_TBL_INC 0x10
+#define PPE_L0_C_SP_CFG_TBL_DRR_ID GENMASK(7, 0)
+#define PPE_L0_C_SP_CFG_TBL_DRR_CREDIT_UNIT BIT(8)
+
+#define PPE_L0_E_SP_CFG_TBL 0x406000
+#define PPE_L0_E_SP_CFG_TBL_NUM 512
+#define PPE_L0_E_SP_CFG_TBL_INC 0x10
+#define PPE_L0_E_SP_CFG_TBL_DRR_ID GENMASK(7, 0)
+#define PPE_L0_E_SP_CFG_TBL_DRR_CREDIT_UNIT BIT(8)
+
+#define PPE_L0_FLOW_PORT_MAP_TBL 0x408000
+#define PPE_L0_FLOW_PORT_MAP_TBL_NUM 300
+#define PPE_L0_FLOW_PORT_MAP_TBL_INC 0x10
+#define PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0)
+
+#define PPE_L0_COMP_CFG_TBL 0x428000
+#define PPE_L0_COMP_CFG_TBL_NUM 300
+#define PPE_L0_COMP_CFG_TBL_INC 0x10
+#define PPE_L0_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0)
+#define PPE_L0_COMP_CFG_TBL_DRR_METER_LEN GENMASK(3, 2)
+
#define PPE_DEQ_OPR_TBL 0x430000
#define PPE_DEQ_OPR_TBL_NUM 300
#define PPE_DEQ_OPR_TBL_INC 0x10
#define PPE_ENQ_OPR_TBL_DEQ_DISABLE BIT(0)

+#define PPE_L1_FLOW_MAP_TBL 0x440000
+#define PPE_L1_FLOW_MAP_TBL_NUM 64
+#define PPE_L1_FLOW_MAP_TBL_INC 0x10
+#define PPE_L1_FLOW_MAP_TBL_SP_ID GENMASK(3, 0)
+#define PPE_L1_FLOW_MAP_TBL_C_PRI GENMASK(6, 4)
+#define PPE_L1_FLOW_MAP_TBL_E_PRI GENMASK(9, 7)
+#define PPE_L1_FLOW_MAP_TBL_C_DRR_WT GENMASK(19, 10)
+#define PPE_L1_FLOW_MAP_TBL_E_DRR_WT GENMASK(29, 20)
+
+#define PPE_L1_C_SP_CFG_TBL 0x442000
+#define PPE_L1_C_SP_CFG_TBL_NUM 64
+#define PPE_L1_C_SP_CFG_TBL_INC 0x10
+#define PPE_L1_C_SP_CFG_TBL_DRR_ID GENMASK(5, 0)
+#define PPE_L1_C_SP_CFG_TBL_DRR_CREDIT_UNIT BIT(6)
+
+#define PPE_L1_E_SP_CFG_TBL 0x444000
+#define PPE_L1_E_SP_CFG_TBL_NUM 64
+#define PPE_L1_E_SP_CFG_TBL_INC 0x10
+#define PPE_L1_E_SP_CFG_TBL_DRR_ID GENMASK(5, 0)
+#define PPE_L1_E_SP_CFG_TBL_DRR_CREDIT_UNIT BIT(6)
+
+#define PPE_L1_FLOW_PORT_MAP_TBL 0x446000
+#define PPE_L1_FLOW_PORT_MAP_TBL_NUM 64
+#define PPE_L1_FLOW_PORT_MAP_TBL_INC 0x10
+#define PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0)
+
+#define PPE_L1_COMP_CFG_TBL 0x46a000
+#define PPE_L1_COMP_CFG_TBL_NUM 64
+#define PPE_L1_COMP_CFG_TBL_INC 0x10
+#define PPE_L1_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0)
+#define PPE_L1_COMP_CFG_TBL_DRR_METER_LEN GENMASK(3, 2)
+
#define PPE_PSCH_TDM_CFG_TBL 0x47a000
#define PPE_PSCH_TDM_CFG_TBL_NUM 128
#define PPE_PSCH_TDM_CFG_TBL_INC 0x10
--
2.42.0


2024-01-10 11:45:47

by Luo Jie

Subject: [PATCH net-next 04/20] net: ethernet: qualcomm: Add PPE buffer manager configuration

The BM config controls the flow control or pause frames generated on a
physical port. The number of hardware buffers configured for the port
influences the behavior of flow control for that port.

In addition, the PPE register access functions are added.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 164 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 6 +
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 49 ++++++
3 files changed, 219 insertions(+)
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_regs.h

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 23f9de105062..94fa13dd17da 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -14,6 +14,7 @@
#include <linux/platform_device.h>
#include <linux/soc/qcom/ppe.h>
#include "ppe.h"
+#include "ppe_regs.h"

static const char * const ppe_clock_name[PPE_CLK_MAX] = {
"cmn_ahb",
@@ -111,6 +112,49 @@ static const char * const ppe_reset_name[PPE_RST_MAX] = {
"nss_port6_mac",
};

+int ppe_write(struct ppe_device *ppe_dev, u32 reg, unsigned int val)
+{
+ return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+int ppe_read(struct ppe_device *ppe_dev, u32 reg, unsigned int *val)
+{
+ return regmap_read(ppe_dev->regmap, reg, val);
+}
+
+int ppe_mask(struct ppe_device *ppe_dev, u32 reg, u32 mask, unsigned int set)
+{
+ return regmap_update_bits(ppe_dev->regmap, reg, mask, set);
+}
+
+int ppe_write_tbl(struct ppe_device *ppe_dev, u32 reg,
+ const unsigned int *val, int cnt)
+{
+ int i, ret;
+
+ for (i = 0; i < cnt / 4; i++) {
+ ret = ppe_write(ppe_dev, reg + i * 4, val[i]);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+int ppe_read_tbl(struct ppe_device *ppe_dev, u32 reg,
+ unsigned int *val, int cnt)
+{
+ int i, ret;
+
+ for (i = 0; i < cnt / 4; i++) {
+ ret = ppe_read(ppe_dev, reg + i * 4, &val[i]);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
int ppe_type_get(struct ppe_device *ppe_dev)
{
struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
@@ -323,6 +367,120 @@ static struct ppe_data *ppe_data_init(struct platform_device *pdev)
return ppe_dev_priv;
}

+static int of_parse_ppe_bm(struct ppe_device *ppe_dev,
+ struct device_node *ppe_node)
+{
+ union ppe_bm_port_fc_cfg_u fc_cfg;
+ struct device_node *bm_node;
+ int ret, cnt;
+ u32 *cfg, reg_val;
+
+ bm_node = of_get_child_by_name(ppe_node, "buffer-management-config");
+ if (!bm_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Failed to get buffer-management-config\n");
+
+ cnt = of_property_count_u32_elems(bm_node, "qcom,group-config");
+ if (cnt < 0)
+ return dev_err_probe(ppe_dev->dev, cnt,
+ "Failed to count qcom,group-config\n");
+
+ cfg = kcalloc(cnt, sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+ return -ENOMEM;
+
+ ret = of_property_read_u32_array(bm_node, "qcom,group-config", cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Failed to get qcom,group-config %d\n", ret);
+ goto parse_bm_err;
+ }
+
+ /* Parse the BM group configuration from the DTS property:
+ * qcom,group-config = <group group_buf>;
+ *
+ * There are 3 buffer types: guaranteed buffer (per port), shared
+ * buffer (per group) and react buffer (caching in-flight packets).
+ *
+ * A maximum of 4 groups is supported by the PPE.
+ */
+ ret = 0;
+ while ((cnt - ret) / 2) {
+ if (cfg[ret] < PPE_BM_SHARED_GROUP_CFG_NUM) {
+ reg_val = FIELD_PREP(PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT, cfg[ret + 1]);
+
+ ppe_write(ppe_dev, PPE_BM_SHARED_GROUP_CFG +
+ PPE_BM_SHARED_GROUP_CFG_INC * cfg[ret], reg_val);
+ }
+ ret += 2;
+ }
+
+ cnt = of_property_count_u32_elems(bm_node, "qcom,port-config");
+ if (cnt < 0) {
+ dev_err(ppe_dev->dev, "Failed to get qcom,port-config %d\n", cnt);
+ goto parse_bm_err;
+ }
+
+ cfg = krealloc_array(cfg, cnt, sizeof(*cfg), GFP_KERNEL | __GFP_ZERO);
+ if (!cfg) {
+ ret = -ENOMEM;
+ goto parse_bm_err;
+ }
+
+ ret = of_property_read_u32_array(bm_node, "qcom,port-config", cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Failed to get qcom,port-config %d\n", ret);
+ goto parse_bm_err;
+ }
+
+ /* Parse the BM port configuration from the DTS property:
+ * qcom,port-config = <group port prealloc react ceil
+ * weight res_off res_ceil dynamic>;
+ *
+ * Each BM port is assigned to a group ID and owns a dedicated
+ * buffer, plus a threshold for generating pause frames. The
+ * threshold is either a static value or adjusted dynamically
+ * according to the remaining buffer.
+ */
+ ret = 0;
+ while ((cnt - ret) / 9) {
+ if (cfg[ret + 1] < PPE_BM_PORT_FC_MODE_NUM) {
+ memset(&fc_cfg, 0, sizeof(fc_cfg));
+
+ fc_cfg.bf.pre_alloc = cfg[ret + 2];
+ fc_cfg.bf.react_limit = cfg[ret + 3];
+ fc_cfg.bf.shared_ceiling_0 = cfg[ret + 4] & 0x7;
+ fc_cfg.bf.shared_ceiling_1 = cfg[ret + 4] >> 3;
+ fc_cfg.bf.shared_weight = cfg[ret + 5];
+ fc_cfg.bf.resum_offset = cfg[ret + 6];
+ fc_cfg.bf.resum_floor_th = cfg[ret + 7];
+ fc_cfg.bf.shared_dynamic = cfg[ret + 8];
+ ppe_write_tbl(ppe_dev, PPE_BM_PORT_FC_CFG +
+ PPE_BM_PORT_FC_CFG_INC * cfg[ret + 1],
+ fc_cfg.val, sizeof(fc_cfg.val));
+
+ reg_val = FIELD_PREP(PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID, cfg[ret]);
+ ppe_write(ppe_dev, PPE_BM_PORT_GROUP_ID +
+ PPE_BM_PORT_GROUP_ID_INC * cfg[ret + 1], reg_val);
+
+ reg_val = FIELD_PREP(PPE_BM_PORT_FC_MODE_EN, 1);
+ ppe_write(ppe_dev, PPE_BM_PORT_FC_MODE +
+ PPE_BM_PORT_FC_MODE_INC * cfg[ret + 1], reg_val);
+ }
+ ret += 9;
+ }
+ ret = 0;
+
+parse_bm_err:
+ kfree(cfg);
+ return ret;
+}
+
+static int of_parse_ppe_config(struct ppe_device *ppe_dev,
+ struct device_node *ppe_node)
+{
+ return of_parse_ppe_bm(ppe_dev, ppe_node);
+}
+
static int qcom_ppe_probe(struct platform_device *pdev)
{
struct ppe_device *ppe_dev;
@@ -359,6 +517,12 @@ static int qcom_ppe_probe(struct platform_device *pdev)
ret,
"ppe clock config failed\n");

+ ret = of_parse_ppe_config(ppe_dev, pdev->dev.of_node);
+ if (ret)
+ return dev_err_probe(&pdev->dev,
+ ret,
+ "of parse ppe failed\n");
+
ppe_dev->is_ppe_probed = true;
return 0;
}
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index f54406a6feb7..6caef42ab235 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -140,4 +140,10 @@ struct ppe_data {
};

int ppe_type_get(struct ppe_device *ppe_dev);
+
+int ppe_write(struct ppe_device *ppe_dev, u32 reg, unsigned int val);
+int ppe_read(struct ppe_device *ppe_dev, u32 reg, unsigned int *val);
+int ppe_mask(struct ppe_device *ppe_dev, u32 reg, u32 mask, unsigned int set);
+int ppe_write_tbl(struct ppe_device *ppe_dev, u32 reg, const unsigned int *val, int cnt);
+int ppe_read_tbl(struct ppe_device *ppe_dev, u32 reg, unsigned int *val, int cnt);
#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
new file mode 100644
index 000000000000..e11d8f2a26b7
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE hardware register and table declarations. */
+#ifndef __PPE_REGS_H__
+#define __PPE_REGS_H__
+
+#define PPE_BM_PORT_FC_MODE 0x600100
+#define PPE_BM_PORT_FC_MODE_NUM 15
+#define PPE_BM_PORT_FC_MODE_INC 4
+#define PPE_BM_PORT_FC_MODE_EN BIT(0)
+
+#define PPE_BM_PORT_GROUP_ID 0x600180
+#define PPE_BM_PORT_GROUP_ID_NUM 15
+#define PPE_BM_PORT_GROUP_ID_INC 4
+#define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID GENMASK(1, 0)
+
+#define PPE_BM_SHARED_GROUP_CFG 0x600290
+#define PPE_BM_SHARED_GROUP_CFG_NUM 4
+#define PPE_BM_SHARED_GROUP_CFG_INC 4
+#define PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT GENMASK(10, 0)
+
+#define PPE_BM_PORT_FC_CFG 0x601000
+#define PPE_BM_PORT_FC_CFG_NUM 15
+#define PPE_BM_PORT_FC_CFG_INC 0x10
+
+/* BM port configuration. BM ports 0-7 are used by the CPU port, and
+ * BM ports 8-13 are used by the physical ports 1-6.
+ */
+struct ppe_bm_port_fc_cfg {
+ u32 react_limit:9,
+ resum_floor_th:9,
+ resum_offset:11,
+ shared_ceiling_0:3;
+ u32 shared_ceiling_1:8,
+ shared_weight:3,
+ shared_dynamic:1,
+ pre_alloc:11,
+ res0:9;
+};
+
+union ppe_bm_port_fc_cfg_u {
+ u32 val[2];
+ struct ppe_bm_port_fc_cfg bf;
+};
+
+#endif
--
2.42.0


2024-01-10 11:46:08

by Luo Jie

Subject: [PATCH net-next 09/20] net: ethernet: qualcomm: Add PPE queue config

Assign the default queue base for the physical port. Each physical port
has an independent profile. The egress queue ID is decided by queue
base, priority class offset and hash class offset.

1. The queue base is configured based on the destination port or CPU
code or service code.

2. The maximum priority offset is decided by the range of the DRR
resources available to the physical port, which comes from the device
tree scheduler resource.

3. The hash class offset is configured as 0 by default, which can be
adjusted by the EDMA driver for load balance on the CPU cores.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 98 ++++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 79 ++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 34 +++++++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 16 ++++
4 files changed, 227 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 75c24a87e2be..dd032a158231 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -1017,6 +1017,98 @@ static int of_parse_ppe_config(struct ppe_device *ppe_dev,
return of_parse_ppe_scheduler(ppe_dev, ppe_node);
}

+static int ppe_qm_init(struct ppe_device *ppe_dev)
+{
+ const struct ppe_queue_ops *ppe_queue_ops;
+ struct ppe_queue_ucast_dest queue_dst;
+ int profile_id, priority, res, class;
+
+ ppe_queue_ops = ppe_queue_config_ops_get();
+
+ /* Initialize the PPE queue base ID and queue priority for each
+ * physical port. The egress queue ID is the queue base ID plus
+ * the queue priority class and the RSS hash class.
+ *
+ * Each physical port has an independent profile ID, so each
+ * physical port can be configured with its own queue base,
+ * queue priority class and RSS hash class.
+ */
+ profile_id = 0;
+ while (profile_id < PPE_SCHEDULER_PORT_NUM) {
+ memset(&queue_dst, 0, sizeof(queue_dst));
+
+ /* The device tree property of queue-config is as below,
+ * <queue_base queue_num group prealloc ceil weight
+ * resume_off dynamic>;
+ */
+ res = ppe_scheduler_res[profile_id].ucastq[0];
+ queue_dst.dest_port = profile_id;
+
+ /* Configure queue base ID and profile ID that is same as
+ * physical port ID.
+ */
+ if (ppe_queue_ops->queue_ucast_base_set)
+ ppe_queue_ops->queue_ucast_base_set(ppe_dev,
+ queue_dst,
+ res,
+ profile_id);
+
+ /* Maximum queue priority supported by each physical port */
+ res = ppe_scheduler_res[profile_id].l0cdrr[1] -
+ ppe_scheduler_res[profile_id].l0cdrr[0];
+
+ priority = 0;
+ while (priority < PPE_QUEUE_PRI_MAX) {
+ if (priority > res)
+ class = res;
+ else
+ class = priority;
+
+ if (ppe_queue_ops->queue_ucast_pri_class_set)
+ ppe_queue_ops->queue_ucast_pri_class_set(ppe_dev,
+ profile_id,
+ priority,
+ class);
+ priority++;
+ }
+
+ /* Configure the queue RSS hash class value as 0 by default. It
+ * can later be set to match the number of ARM CPU cores in order
+ * to distribute traffic across the cores for load balancing.
+ */
+ priority = 0;
+ while (priority < PPE_QUEUE_HASH_MAX) {
+ if (ppe_queue_ops->queue_ucast_hash_class_set)
+ ppe_queue_ops->queue_ucast_hash_class_set(ppe_dev,
+ profile_id,
+ priority,
+ 0);
+ priority++;
+ }
+
+ profile_id++;
+ }
+
+ /* Redirect ARP reply packets to the CPU port with the maximum
+ * priority, so that ARP replies are still received by the EDMA
+ * with the highest priority under heavy traffic.
+ */
+ memset(&queue_dst, 0, sizeof(queue_dst));
+ queue_dst.cpu_code_en = true;
+ queue_dst.cpu_code = 101;
+ res = ppe_scheduler_res[0].ucastq[0];
+ priority = ppe_scheduler_res[0].l0cdrr[1] - ppe_scheduler_res[0].l0cdrr[0];
+ if (ppe_queue_ops->queue_ucast_base_set)
+ ppe_queue_ops->queue_ucast_base_set(ppe_dev, queue_dst, res, priority);
+
+ return 0;
+}
+
+static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
+{
+ return ppe_qm_init(ppe_dev);
+}
+
static int qcom_ppe_probe(struct platform_device *pdev)
{
struct ppe_device *ppe_dev;
@@ -1059,6 +1151,12 @@ static int qcom_ppe_probe(struct platform_device *pdev)
ret,
"of parse ppe failed\n");

+ ret = ppe_dev_hw_init(ppe_dev);
+ if (ret)
+ return dev_err_probe(&pdev->dev,
+ ret,
+ "ppe device hw init failed\n");
+
ppe_dev->is_ppe_probed = true;
return 0;
}
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
index 7853c2fdcc63..eaa3f1e7b525 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -194,9 +194,88 @@ static int ppe_queue_scheduler_get(struct ppe_device *ppe_dev,
return -EINVAL;
}

+static int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+ struct ppe_queue_ucast_dest queue_dst,
+ int queue_base, int profile_id)
+{
+ u32 reg_val;
+ int index;
+
+ if (queue_dst.service_code_en)
+ index = 2048 + (queue_dst.src_profile << 8) + queue_dst.service_code;
+ else if (queue_dst.cpu_code_en)
+ index = 1024 + (queue_dst.src_profile << 8) + queue_dst.cpu_code;
+ else
+ index = (queue_dst.src_profile << 8) + queue_dst.dest_port;
+
+ reg_val = FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, profile_id) |
+ FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, queue_base);
+
+ return ppe_write(ppe_dev, PPE_UCAST_QUEUE_MAP_TBL + index * PPE_UCAST_QUEUE_MAP_TBL_INC,
+ reg_val);
+}
+
+static int ppe_queue_ucast_base_get(struct ppe_device *ppe_dev,
+ struct ppe_queue_ucast_dest queue_dst,
+ int *queue_base, int *profile_id)
+{
+ u32 reg_val;
+ int index;
+
+ if (queue_dst.service_code_en)
+ index = 2048 + (queue_dst.src_profile << 8) + queue_dst.service_code;
+ else if (queue_dst.cpu_code_en)
+ index = 1024 + (queue_dst.src_profile << 8) + queue_dst.cpu_code;
+ else
+ index = (queue_dst.src_profile << 8) + queue_dst.dest_port;
+
+ ppe_read(ppe_dev, PPE_UCAST_QUEUE_MAP_TBL + index * PPE_UCAST_QUEUE_MAP_TBL_INC, &reg_val);
+
+ *queue_base = FIELD_GET(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, reg_val);
+ *profile_id = FIELD_GET(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, reg_val);
+
+ return 0;
+}
+
+static int ppe_queue_ucast_pri_class_set(struct ppe_device *ppe_dev,
+ int profile_id,
+ int priority,
+ int class_offset)
+{
+ u32 reg_val;
+ int index;
+
+ index = (profile_id << 4) + priority;
+ reg_val = FIELD_PREP(PPE_UCAST_PRIORITY_MAP_TBL_CLASS, class_offset);
+
+ return ppe_write(ppe_dev,
+ PPE_UCAST_PRIORITY_MAP_TBL + index * PPE_UCAST_PRIORITY_MAP_TBL_INC,
+ reg_val);
+}
+
+static int ppe_queue_ucast_hash_class_set(struct ppe_device *ppe_dev,
+ int profile_id,
+ int rss_hash,
+ int class_offset)
+{
+ u32 reg_val;
+ int index;
+
+ index = (profile_id << 4) + rss_hash;
+ reg_val = FIELD_PREP(PPE_UCAST_HASH_MAP_TBL_HASH, class_offset);
+
+ return ppe_write(ppe_dev,
+ PPE_UCAST_HASH_MAP_TBL + index * PPE_UCAST_HASH_MAP_TBL_INC,
+ reg_val);
+}
+
static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_scheduler_set = ppe_queue_scheduler_set,
.queue_scheduler_get = ppe_queue_scheduler_get,
+ .queue_ucast_base_set = ppe_queue_ucast_base_set,
+ .queue_ucast_base_get = ppe_queue_ucast_base_get,
+ .queue_ucast_pri_class_set = ppe_queue_ucast_pri_class_set,
+ .queue_ucast_hash_class_set = ppe_queue_ucast_hash_class_set,
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
index 4980e3fed1c0..181dbd4a3d90 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -10,6 +10,9 @@
#ifndef __PPE_OPS_H__
#define __PPE_OPS_H__

+#define PPE_QUEUE_PRI_MAX 16
+#define PPE_QUEUE_HASH_MAX 256
+
/* PPE hardware QoS configurations used to dispatch the packet passed
* through PPE, the scheduler supports DRR(deficit round robin with the
* weight) and SP(strict priority).
@@ -27,6 +30,21 @@ struct ppe_qos_scheduler_cfg {
int drr_frame_mode;
};

+/* The egress queue ID can be decided by service code, CPU code and
+ * egress port.
+ *
+ * When all are enabled, the service code has the highest priority to
+ * decide the queue base, then the CPU code, finally the egress port.
+ */
+struct ppe_queue_ucast_dest {
+ int src_profile;
+ bool service_code_en;
+ int service_code;
+ bool cpu_code_en;
+ int cpu_code;
+ int dest_port;
+};
+
/* The operations are used to configure the PPE queue related resource */
struct ppe_queue_ops {
int (*queue_scheduler_set)(struct ppe_device *ppe_dev,
@@ -39,6 +57,22 @@ struct ppe_queue_ops {
int level,
int *port,
struct ppe_qos_scheduler_cfg *scheduler_cfg);
+ int (*queue_ucast_base_set)(struct ppe_device *ppe_dev,
+ struct ppe_queue_ucast_dest queue_dst,
+ int queue_base,
+ int profile_id);
+ int (*queue_ucast_base_get)(struct ppe_device *ppe_dev,
+ struct ppe_queue_ucast_dest queue_dst,
+ int *queue_base,
+ int *profile_id);
+ int (*queue_ucast_pri_class_set)(struct ppe_device *ppe_dev,
+ int profile_id,
+ int priority,
+ int class_offset);
+ int (*queue_ucast_hash_class_set)(struct ppe_device *ppe_dev,
+ int profile_id,
+ int rss_hash,
+ int class_offset);
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 10daa70f28e9..9fdb9592b44b 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -148,6 +148,22 @@ union ppe_bm_port_fc_cfg_u {
struct ppe_bm_port_fc_cfg bf;
};

+#define PPE_UCAST_QUEUE_MAP_TBL 0x810000
+#define PPE_UCAST_QUEUE_MAP_TBL_NUM 3072
+#define PPE_UCAST_QUEUE_MAP_TBL_INC 0x10
+#define PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID GENMASK(3, 0)
+#define PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID GENMASK(11, 4)
+
+#define PPE_UCAST_HASH_MAP_TBL 0x830000
+#define PPE_UCAST_HASH_MAP_TBL_NUM 4096
+#define PPE_UCAST_HASH_MAP_TBL_INC 0x10
+#define PPE_UCAST_HASH_MAP_TBL_HASH GENMASK(7, 0)
+
+#define PPE_UCAST_PRIORITY_MAP_TBL 0x842000
+#define PPE_UCAST_PRIORITY_MAP_TBL_NUM 256
+#define PPE_UCAST_PRIORITY_MAP_TBL_INC 0x10
+#define PPE_UCAST_PRIORITY_MAP_TBL_CLASS GENMASK(3, 0)
+
#define PPE_AC_UNI_QUEUE_CFG_TBL 0x848000
#define PPE_AC_UNI_QUEUE_CFG_TBL_NUM 256
#define PPE_AC_UNI_QUEUE_CFG_TBL_INC 0x10
--
2.42.0


2024-01-10 11:46:57

by Luo Jie

Subject: [PATCH net-next 10/20] net: ethernet: qualcomm: Add PPE service code config

The service code is used to bypass some PPE handlers when the packet
passes through the PPE.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 22 +++-
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 42 ++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 107 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 62 +++++++++++
4 files changed, 232 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index dd032a158231..acff37f9d832 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -1104,9 +1104,29 @@ static int ppe_qm_init(struct ppe_device *ppe_dev)
return 0;
}

+static int ppe_servcode_init(struct ppe_device *ppe_dev)
+{
+ struct ppe_servcode_cfg servcode_cfg;
+
+ memset(&servcode_cfg, 0, sizeof(servcode_cfg));
+ servcode_cfg.bypass_bitmap[0] = (u32)(~(BIT(FAKE_MAC_HEADER_BYP) |
+ BIT(SERVICE_CODE_BYP) |
+ BIT(FAKE_L2_PROTO_BYP)));
+ servcode_cfg.bypass_bitmap[1] = (u32)(~(BIT(ACL_POST_ROUTING_CHECK_BYP)));
+
+ /* The default service code used by CPU port */
+ return ppe_servcode_config_set(ppe_dev, 1, servcode_cfg);
+}
+
static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
{
- return ppe_qm_init(ppe_dev);
+ int ret;
+
+ ret = ppe_qm_init(ppe_dev);
+ if (ret)
+ return ret;
+
+ return ppe_servcode_init(ppe_dev);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
index eaa3f1e7b525..a3269c0898be 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -269,6 +269,48 @@ static int ppe_queue_ucast_hash_class_set(struct ppe_device *ppe_dev,
reg_val);
}

+int ppe_servcode_config_set(struct ppe_device *ppe_dev,
+ int servcode,
+ struct ppe_servcode_cfg cfg)
+{
+ union ppe_eg_service_cfg_u eg_service_cfg;
+ union ppe_service_cfg_u service_cfg;
+ int val;
+
+ memset(&service_cfg, 0, sizeof(service_cfg));
+ memset(&eg_service_cfg, 0, sizeof(eg_service_cfg));
+
+ val = FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID, cfg.dest_port_valid) |
+ FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID, cfg.dest_port) |
+ FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_DIRECTION, cfg.is_src) |
+ FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP, cfg.bypass_bitmap[1]) |
+ FIELD_PREP(PPE_IN_L2_SERVICE_TBL_RX_CNT_EN,
+ cfg.bypass_bitmap[2] & BIT(1) ? 1 : 0) |
+ FIELD_PREP(PPE_IN_L2_SERVICE_TBL_TX_CNT_EN,
+ cfg.bypass_bitmap[2] & BIT(3) ? 1 : 0);
+ ppe_write(ppe_dev, PPE_IN_L2_SERVICE_TBL + PPE_IN_L2_SERVICE_TBL_INC * servcode, val);
+
+ ppe_read_tbl(ppe_dev, PPE_SERVICE_TBL + PPE_SERVICE_TBL_INC * servcode,
+ service_cfg.val, sizeof(service_cfg.val));
+ service_cfg.bf.bypass_bitmap = cfg.bypass_bitmap[0];
+ service_cfg.bf.rx_counting_en = cfg.bypass_bitmap[2] & BIT(0);
+ ppe_write_tbl(ppe_dev, PPE_SERVICE_TBL + PPE_SERVICE_TBL_INC * servcode,
+ service_cfg.val, sizeof(service_cfg.val));
+
+ ppe_read_tbl(ppe_dev, PPE_EG_SERVICE_TBL + PPE_EG_SERVICE_TBL_INC * servcode,
+ eg_service_cfg.val, sizeof(eg_service_cfg.val));
+ eg_service_cfg.bf.field_update_action = cfg.field_update_bitmap;
+ eg_service_cfg.bf.next_service_code = cfg.next_service_code;
+ eg_service_cfg.bf.hw_services = cfg.hw_service;
+ eg_service_cfg.bf.offset_sel = cfg.offset_sel;
+ eg_service_cfg.bf.tx_counting_en = cfg.bypass_bitmap[2] & BIT(2) ? 1 : 0;
+ ppe_write_tbl(ppe_dev, PPE_EG_SERVICE_TBL + PPE_EG_SERVICE_TBL_INC * servcode,
+ eg_service_cfg.val, sizeof(eg_service_cfg.val));
+
+ val = FIELD_PREP(PPE_TL_SERVICE_TBL_BYPASS_BITMAP, cfg.bypass_bitmap[3]);
+ return ppe_write(ppe_dev, PPE_TL_SERVICE_TBL + PPE_TL_SERVICE_TBL_INC * servcode, val);
+}
+
static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_scheduler_set = ppe_queue_scheduler_set,
.queue_scheduler_get = ppe_queue_scheduler_get,
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
index 181dbd4a3d90..b3c1ade7c948 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -45,6 +45,109 @@ struct ppe_queue_ucast_dest {
int dest_port;
};

+/* bypass_bitmap[0] */
+enum {
+ IN_VLAN_TAG_FMT_CHECK_BYP = 0,
+ IN_VLAN_MEMBER_CHECK_BYP,
+ IN_VLAN_XLT_BYP,
+ MY_MAC_CHECK_BYP,
+ DIP_LOOKUP_BYP,
+ FLOW_LOOKUP_BYP = 5,
+ FLOW_ACTION_BYP,
+ ACL_BYP,
+ FAKE_MAC_HEADER_BYP,
+ SERVICE_CODE_BYP,
+ WRONG_PKT_FMT_L2_BYP = 10,
+ WRONG_PKT_FMT_L3_IPV4_BYP,
+ WRONG_PKT_FMT_L3_IPV6_BYP,
+ WRONG_PKT_FMT_L4_BYP,
+ FLOW_SERVICE_CODE_BYP,
+ ACL_SERVICE_CODE_BYP = 15,
+ FAKE_L2_PROTO_BYP,
+ PPPOE_TERMINATION_BYP,
+ DEFAULT_VLAN_BYP,
+ DEFAULT_PCP_BYP,
+ VSI_ASSIGN_BYP,
+ IN_VLAN_ASSIGN_FAIL_BYP = 24,
+ SOURCE_GUARD_BYP,
+ MRU_MTU_CHECK_BYP,
+ FLOW_SRC_CHECK_BYP,
+ FLOW_QOS_BYP,
+};
+
+/* bypass_bitmap[1] */
+enum {
+ EG_VLAN_MEMBER_CHECK_BYP = 0,
+ EG_VLAN_XLT_BYP,
+ EG_VLAN_TAG_FMT_CTRL_BYP,
+ FDB_LEARN_BYP,
+ FDB_REFRESH_BYP,
+ L2_SOURCE_SEC_BYP = 5,
+ MANAGEMENT_FWD_BYP,
+ BRIDGING_FWD_BYP,
+ IN_STP_FLTR_BYP,
+ EG_STP_FLTR_BYP,
+ SOURCE_FLTR_BYP = 10,
+ POLICER_BYP,
+ L2_PKT_EDIT_BYP,
+ L3_PKT_EDIT_BYP,
+ ACL_POST_ROUTING_CHECK_BYP,
+ PORT_ISOLATION_BYP = 15,
+ PRE_ACL_QOS_BYP,
+ POST_ACL_QOS_BYP,
+ DSCP_QOS_BYP,
+ PCP_QOS_BYP,
+ PREHEADER_QOS_BYP = 20,
+ FAKE_MAC_DROP_BYP,
+ TUNL_CONTEXT_BYP,
+ FLOW_POLICER_BYP,
+};
+
+/* bypass_bitmap[2] */
+enum {
+ RX_VLAN_COUNTER_BYP = 0,
+ RX_COUNTER_BYP,
+ TX_VLAN_COUNTER_BYP,
+ TX_COUNTER_BYP,
+};
+
+/* bypass_bitmap[3] */
+enum {
+ TL_SERVICE_CODE_BYP = 0,
+ TL_BYP,
+ TL_L3_IF_CHECK_BYP,
+ TL_VLAN_CHECK_BYP,
+ TL_DMAC_CHECK_BYP,
+ TL_UDP_CSUM_0_CHECK_BYP = 5,
+ TL_TBL_DE_ACCE_CHECK_BYP,
+ TL_PPPOE_MC_TERM_CHECK_BYP,
+ TL_TTL_EXCEED_CHECK_BYP,
+ TL_MAP_SRC_CHECK_BYP,
+ TL_MAP_DST_CHECK_BYP = 10,
+ TL_LPM_DST_LOOKUP_BYP,
+ TL_LPM_LOOKUP_BYP,
+ TL_WRONG_PKT_FMT_L2_BYP,
+ TL_WRONG_PKT_FMT_L3_IPV4_BYP,
+ TL_WRONG_PKT_FMT_L3_IPV6_BYP = 15,
+ TL_WRONG_PKT_FMT_L4_BYP,
+ TL_WRONG_PKT_FMT_TUNNEL_BYP,
+ TL_PRE_IPO_BYP = 20,
+};
+
+/* The PPE service code is used to bypass hardware handlers when the
+ * packet passes through the PPE. 256 service codes are supported.
+ */
+struct ppe_servcode_cfg {
+ bool dest_port_valid;
+ int dest_port;
+ u32 bypass_bitmap[4];
+ bool is_src;
+ int field_update_bitmap;
+ int next_service_code;
+ int hw_service;
+ int offset_sel;
+};
+
/* The operations are used to configure the PPE queue related resource */
struct ppe_queue_ops {
int (*queue_scheduler_set)(struct ppe_device *ppe_dev,
@@ -76,4 +179,8 @@ struct ppe_queue_ops {
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
+
+int ppe_servcode_config_set(struct ppe_device *ppe_dev,
+ int servcode,
+ struct ppe_servcode_cfg cfg);
#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 9fdb9592b44b..66ddfd5cd123 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -23,9 +23,71 @@
#define PPE_BM_TDM_CFG_TBL_SECOND_PORT_VALID BIT(6)
#define PPE_BM_TDM_CFG_TBL_SECOND_PORT GENMASK(11, 8)

+#define PPE_SERVICE_TBL 0x15000
+#define PPE_SERVICE_TBL_NUM 256
+#define PPE_SERVICE_TBL_INC 0x10
+#define PPE_SERVICE_TBL_BYPASS_BITMAP GENMASK(31, 0)
+#define PPE_SERVICE_TBL_RX_COUNTING_EN BIT(32)
+
+/* Service code configuration for the ingress packet; PPE features can
+ * be bypassed with this service config.
+ */
+struct ppe_service_cfg {
+ u32 bypass_bitmap;
+ u32 rx_counting_en:1,
+ res0:31;
+};
+
+union ppe_service_cfg_u {
+ u32 val[2];
+ struct ppe_service_cfg bf;
+};
+
#define PPE_EG_BRIDGE_CONFIG 0x20044
#define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2)

+#define PPE_EG_SERVICE_TBL 0x43000
+#define PPE_EG_SERVICE_TBL_NUM 256
+#define PPE_EG_SERVICE_TBL_INC 0x10
+
+/* Service code configuration for the egress packet. A new service code
+ * can be generated and the ath header can be configured.
+ */
+struct ppe_eg_service_cfg {
+ u32 field_update_action;
+ u32 next_service_code:8,
+ hw_services:6,
+ offset_sel:1,
+ tx_counting_en:1,
+ ip_length_update:1,
+ ath_hdr_insert_dis:1,
+ ath_hdr_type:3,
+ ath_from_cpu:1,
+ ath_disable_bit:1,
+ ath_port_bitmap:7,
+ res0:2;
+};
+
+union ppe_eg_service_cfg_u {
+ u32 val[2];
+ struct ppe_eg_service_cfg bf;
+};
+
+#define PPE_IN_L2_SERVICE_TBL 0x66000
+#define PPE_IN_L2_SERVICE_TBL_NUM 256
+#define PPE_IN_L2_SERVICE_TBL_INC 0x10
+#define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID BIT(0)
+#define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID GENMASK(4, 1)
+#define PPE_IN_L2_SERVICE_TBL_DST_DIRECTION BIT(5)
+#define PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP GENMASK(29, 6)
+#define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30)
+#define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31)
+
+#define PPE_TL_SERVICE_TBL 0x306000
+#define PPE_TL_SERVICE_TBL_NUM 256
+#define PPE_TL_SERVICE_TBL_INC 4
+#define PPE_TL_SERVICE_TBL_BYPASS_BITMAP GENMASK(31, 0)
+
#define PPE_PSCH_TDM_DEPTH_CFG 0x400000
#define PPE_PSCH_TDM_DEPTH_CFG_NUM 1
#define PPE_PSCH_TDM_DEPTH_CFG_INC 4
--
2.42.0


2024-01-10 11:47:23

by Luo Jie

Subject: [PATCH net-next 11/20] net: ethernet: qualcomm: Add PPE port control config

1. Enable the port statistics counter for the physical port.

2. Configure the default action as drop when the packet size exceeds
the configured MTU of the physical port.

3. For IPQ5332, the number of PPE ports is 3.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 60 ++++++++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe.h | 10 +++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 22 +++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 1 +
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 69 ++++++++++++++++++++
5 files changed, 161 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index acff37f9d832..bce0a9137c9f 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -18,6 +18,7 @@
#include "ppe_ops.h"

#define PPE_SCHEDULER_PORT_NUM 8
+#define MPPE_SCHEDULER_PORT_NUM 3
#define PPE_SCHEDULER_L0_NUM 300
#define PPE_SCHEDULER_L1_NUM 64
#define PPE_SP_PRIORITY_NUM 8
@@ -1118,6 +1119,59 @@ static int ppe_servcode_init(struct ppe_device *ppe_dev)
return ppe_servcode_config_set(ppe_dev, 1, servcode_cfg);
}

+static int ppe_port_ctrl_init(struct ppe_device *ppe_dev)
+{
+ union ppe_mru_mtu_ctrl_cfg_u mru_mtu_cfg;
+ int ret, port_num = PPE_SCHEDULER_PORT_NUM;
+ u32 reg_val;
+
+ if (ppe_type_get(ppe_dev) == PPE_TYPE_MPPE) {
+ for (ret = 0; ret < MPPE_SCHEDULER_PORT_NUM; ret++) {
+ reg_val = FIELD_PREP(PPE_TX_BUFF_THRSH_XOFF, 3) |
+ FIELD_PREP(PPE_TX_BUFF_THRSH_XON, 3);
+ ppe_write(ppe_dev, PPE_TX_BUFF_THRSH + PPE_TX_BUFF_THRSH_INC * ret,
+ reg_val);
+
+ /* Fix 147B line rate on physical port */
+ if (ret != 0)
+ ppe_mask(ppe_dev, PPE_RX_FIFO_CFG + PPE_RX_FIFO_CFG_INC * ret,
+ PPE_RX_FIFO_CFG_THRSH,
+ FIELD_PREP(PPE_RX_FIFO_CFG_THRSH, 7));
+ }
+
+ port_num = MPPE_SCHEDULER_PORT_NUM;
+ }
+
+ for (ret = 0; ret < port_num; ret++) {
+ if (ret != 0) {
+ memset(&mru_mtu_cfg, 0, sizeof(mru_mtu_cfg));
+ ppe_read_tbl(ppe_dev,
+ PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * ret,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+
+ /* Drop the packet when the packet size is more than
+ * the MTU of the physical interface.
+ */
+ mru_mtu_cfg.bf.mru_cmd = PPE_ACTION_DROP;
+ mru_mtu_cfg.bf.mtu_cmd = PPE_ACTION_DROP;
+
+ ppe_write_tbl(ppe_dev,
+ PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * ret,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+
+ ppe_mask(ppe_dev,
+ PPE_MC_MTU_CTRL_TBL + PPE_MC_MTU_CTRL_TBL_INC * ret,
+ PPE_MC_MTU_CTRL_TBL_MTU_CMD,
+ FIELD_PREP(PPE_MC_MTU_CTRL_TBL_MTU_CMD, PPE_ACTION_DROP));
+ }
+
+ /* Enable PPE port counter */
+ ppe_counter_set(ppe_dev, ret, true);
+ }
+
+ return 0;
+}
+
static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
{
int ret;
@@ -1126,7 +1180,11 @@ static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
if (ret)
return ret;

- return ppe_servcode_init(ppe_dev);
+ ret = ppe_servcode_init(ppe_dev);
+ if (ret)
+ return ret;
+
+ return ppe_port_ctrl_init(ppe_dev);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 84b1c9761f79..507626b6ab2e 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -132,6 +132,16 @@ enum {
PPE_TYPE_MAX = 0xff,
};

+/* A packet received by the PPE can be forwarded, dropped, copied to
+ * the CPU (multicast queue) or redirected to the CPU (unicast queue).
+ */
+enum {
+ PPE_ACTION_FORWARD = 0,
+ PPE_ACTION_DROP,
+ PPE_ACTION_COPY_TO_CPU,
+ PPE_ACTION_REDIRECTED_TO_CPU
+};
+
/* PPE private data of different PPE type device */
struct ppe_data {
int ppe_type;
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
index a3269c0898be..b017983e7cbf 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -311,6 +311,28 @@ int ppe_servcode_config_set(struct ppe_device *ppe_dev,
return ppe_write(ppe_dev, PPE_TL_SERVICE_TBL + PPE_TL_SERVICE_TBL_INC * servcode, val);
}

+int ppe_counter_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ union ppe_mru_mtu_ctrl_cfg_u mru_mtu_cfg;
+
+ memset(&mru_mtu_cfg, 0, sizeof(mru_mtu_cfg));
+
+ ppe_read_tbl(ppe_dev, PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * port,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+ mru_mtu_cfg.bf.rx_cnt_en = enable;
+ mru_mtu_cfg.bf.tx_cnt_en = enable;
+ ppe_write_tbl(ppe_dev, PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * port,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+
+ ppe_mask(ppe_dev, PPE_MC_MTU_CTRL_TBL + PPE_MC_MTU_CTRL_TBL_INC * port,
+ PPE_MC_MTU_CTRL_TBL_TX_CNT_EN,
+ FIELD_PREP(PPE_MC_MTU_CTRL_TBL_TX_CNT_EN, enable));
+
+ return ppe_mask(ppe_dev, PPE_PORT_EG_VLAN + PPE_PORT_EG_VLAN_INC * port,
+ PPE_PORT_EG_VLAN_TX_COUNTING_EN,
+ FIELD_PREP(PPE_PORT_EG_VLAN_TX_COUNTING_EN, enable));
+}
+
static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_scheduler_set = ppe_queue_scheduler_set,
.queue_scheduler_get = ppe_queue_scheduler_get,
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
index b3c1ade7c948..ab64a760b60b 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -183,4 +183,5 @@ const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
int ppe_servcode_config_set(struct ppe_device *ppe_dev,
int servcode,
struct ppe_servcode_cfg cfg);
+int ppe_counter_set(struct ppe_device *ppe_dev, int port, bool enable);
#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 66ddfd5cd123..3e61de54f921 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -14,6 +14,11 @@
#define PPE_BM_TDM_CTRL_TDM_OFFSET GENMASK(14, 8)
#define PPE_BM_TDM_CTRL_TDM_EN BIT(31)

+#define PPE_RX_FIFO_CFG 0xb004
+#define PPE_RX_FIFO_CFG_NUM 8
+#define PPE_RX_FIFO_CFG_INC 4
+#define PPE_RX_FIFO_CFG_THRSH GENMASK(2, 0)
+
#define PPE_BM_TDM_CFG_TBL 0xc000
#define PPE_BM_TDM_CFG_TBL_NUM 128
#define PPE_BM_TDM_CFG_TBL_INC 0x10
@@ -43,6 +48,17 @@ union ppe_service_cfg_u {
struct ppe_service_cfg bf;
};

+#define PPE_PORT_EG_VLAN 0x20020
+#define PPE_PORT_EG_VLAN_NUM 8
+#define PPE_PORT_EG_VLAN_INC 4
+#define PPE_PORT_EG_VLAN_PORT_VLAN_TYPE BIT(0)
+#define PPE_PORT_EG_VLAN_PORT_EG_VLAN_CTAG_MODE GENMASK(2, 1)
+#define PPE_PORT_EG_VLAN_PORT_EG_VLAN_STAG_MODE GENMASK(4, 3)
+#define PPE_PORT_EG_VLAN_VSI_TAG_MODE_EN BIT(5)
+#define PPE_PORT_EG_VLAN_PORT_EG_PCP_PROP_CMD BIT(6)
+#define PPE_PORT_EG_VLAN_PORT_EG_DEI_PROP_CMD BIT(7)
+#define PPE_PORT_EG_VLAN_TX_COUNTING_EN BIT(8)
+
#define PPE_EG_BRIDGE_CONFIG 0x20044
#define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2)

@@ -73,6 +89,59 @@ union ppe_eg_service_cfg_u {
struct ppe_eg_service_cfg bf;
};

+#define PPE_TX_BUFF_THRSH 0x26100
+#define PPE_TX_BUFF_THRSH_NUM 8
+#define PPE_TX_BUFF_THRSH_INC 4
+#define PPE_TX_BUFF_THRSH_XOFF GENMASK(7, 0)
+#define PPE_TX_BUFF_THRSH_XON GENMASK(15, 8)
+
+#define PPE_MC_MTU_CTRL_TBL 0x60a00
+#define PPE_MC_MTU_CTRL_TBL_NUM 8
+#define PPE_MC_MTU_CTRL_TBL_INC 4
+#define PPE_MC_MTU_CTRL_TBL_MTU GENMASK(13, 0)
+#define PPE_MC_MTU_CTRL_TBL_MTU_CMD GENMASK(15, 14)
+#define PPE_MC_MTU_CTRL_TBL_TX_CNT_EN BIT(16)
+
+#define PPE_MRU_MTU_CTRL_TBL 0x65000
+#define PPE_MRU_MTU_CTRL_TBL_NUM 256
+#define PPE_MRU_MTU_CTRL_TBL_INC 0x10
+
+/* PPE port control configuration; the MTU and QoS settings are
+ * configured through this table.
+ */
+struct ppe_mru_mtu_ctrl_cfg {
+ u32 mru:14,
+ mru_cmd:2,
+ mtu:14,
+ mtu_cmd:2;
+
+ u32 rx_cnt_en:1,
+ tx_cnt_en:1,
+ src_profile:2,
+ pcp_qos_group_id:1,
+ dscp_qos_group_id:1,
+ pcp_res_prec_force:1,
+ dscp_res_prec_force:1,
+ preheader_res_prec:3,
+ pcp_res_prec:3,
+ dscp_res_prec:3,
+ flow_res_prec:3,
+ pre_acl_res_prec:3,
+ post_acl_res_prec:3,
+ source_filtering_bypass:1,
+ source_filtering_mode:1,
+ pre_ipo_outer_res_prec:3,
+ pre_ipo_inner_res_prec_0:1;
+
+ u32 pre_ipo_inner_res_prec_1:2,
+ res0:30;
+};
+
+union ppe_mru_mtu_ctrl_cfg_u {
+ u32 val[3];
+ struct ppe_mru_mtu_ctrl_cfg bf;
+};
+
#define PPE_IN_L2_SERVICE_TBL 0x66000
#define PPE_IN_L2_SERVICE_TBL_NUM 256
#define PPE_IN_L2_SERVICE_TBL_INC 0x10
--
2.42.0


2024-01-10 11:48:11

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 12/20] net: ethernet: qualcomm: Add PPE RSS hash config

The PPE RSS hash is generated from the packet content using the
configured seed. The hash value is used to select the RX queue and
can also be passed to the EDMA RX descriptor.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 53 ++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 97 ++++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 22 +++++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 44 +++++++++
4 files changed, 215 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index bce0a9137c9f..746ef42fea5d 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -1172,6 +1172,53 @@ static int ppe_port_ctrl_init(struct ppe_device *ppe_dev)
return 0;
}

+static int ppe_rss_hash_init(struct ppe_device *ppe_dev)
+{
+ const struct ppe_queue_ops *ppe_queue_ops;
+ struct ppe_rss_hash_cfg hash_cfg;
+ int i, ret;
+ u16 fins[5] = {0x205, 0x264, 0x227, 0x245, 0x201};
+ u8 ips[4] = {0x13, 0xb, 0x13, 0xb};
+
+ ppe_queue_ops = ppe_queue_config_ops_get();
+ if (!ppe_queue_ops->rss_hash_config_set)
+ return -EINVAL;
+
+ hash_cfg.hash_seed = get_random_u32();
+ hash_cfg.hash_mask = 0xfff;
+ hash_cfg.hash_fragment_mode = false;
+
+ for (i = 0; i < ARRAY_SIZE(fins); i++) {
+ hash_cfg.hash_fin_inner[i] = fins[i] & 0x1f;
+ hash_cfg.hash_fin_outer[i] = fins[i] >> 5;
+ }
+
+ hash_cfg.hash_protocol_mix = 0x13;
+ hash_cfg.hash_dport_mix = 0xb;
+ hash_cfg.hash_sport_mix = 0x13;
+ hash_cfg.hash_sip_mix[0] = 0x13;
+ hash_cfg.hash_dip_mix[0] = 0xb;
+
+ ret = ppe_queue_ops->rss_hash_config_set(ppe_dev,
+ PPE_RSS_HASH_MODE_IPV4,
+ hash_cfg);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < ARRAY_SIZE(ips); i++) {
+ hash_cfg.hash_sip_mix[i] = ips[i];
+ hash_cfg.hash_dip_mix[i] = ips[i];
+ }
+
+ return ppe_queue_ops->rss_hash_config_set(ppe_dev,
+ PPE_RSS_HASH_MODE_IPV6,
+ hash_cfg);
+}
+
static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
{
int ret;
@@ -1184,7 +1231,11 @@ static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
if (ret)
return ret;

- return ppe_port_ctrl_init(ppe_dev);
+ ret = ppe_port_ctrl_init(ppe_dev);
+ if (ret)
+ return ret;
+
+ return ppe_rss_hash_init(ppe_dev);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
index b017983e7cbf..0398a36d680a 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -333,6 +333,102 @@ int ppe_counter_set(struct ppe_device *ppe_dev, int port, bool enable)
FIELD_PREP(PPE_PORT_EG_VLAN_TX_COUNTING_EN, enable));
}

+static int ppe_rss_hash_config_set(struct ppe_device *ppe_dev,
+ int mode,
+ struct ppe_rss_hash_cfg cfg)
+{
+ u32 val;
+ int i;
+
+ if (mode & PPE_RSS_HASH_MODE_IPV4) {
+ val = FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_HASH_MASK, cfg.hash_mask) |
+ FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_FRAGMENT,
+ cfg.hash_fragment_mode);
+ ppe_write(ppe_dev, PPE_RSS_HASH_MASK_IPV4, val);
+
+ val = FIELD_PREP(PPE_RSS_HASH_SEED_IPV4_VAL, cfg.hash_seed);
+ ppe_write(ppe_dev, PPE_RSS_HASH_SEED_IPV4, val);
+
+ for (i = 0; i < PPE_RSS_HASH_MIX_IPV4_NUM; i++) {
+ switch (i) {
+ case 0:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL,
+ cfg.hash_sip_mix[0]);
+ break;
+ case 1:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL,
+ cfg.hash_dip_mix[0]);
+ break;
+ case 2:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL,
+ cfg.hash_protocol_mix);
+ break;
+ case 3:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL,
+ cfg.hash_dport_mix);
+ break;
+ case 4:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL,
+ cfg.hash_sport_mix);
+ break;
+ default:
+ break;
+ }
+ ppe_write(ppe_dev, PPE_RSS_HASH_MIX_IPV4 + i * PPE_RSS_HASH_MIX_IPV4_INC,
+ val);
+ }
+
+ for (i = 0; i < PPE_RSS_HASH_MIX_IPV4_NUM; i++) {
+ val = FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_INNER, cfg.hash_fin_inner[i]) |
+ FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_OUTER,
+ cfg.hash_fin_outer[i]);
+ ppe_write(ppe_dev, PPE_RSS_HASH_FIN_IPV4 + i * PPE_RSS_HASH_FIN_IPV4_INC,
+ val);
+ }
+ }
+
+ if (mode & PPE_RSS_HASH_MODE_IPV6) {
+ val = FIELD_PREP(PPE_RSS_HASH_MASK_HASH_MASK, cfg.hash_mask) |
+ FIELD_PREP(PPE_RSS_HASH_MASK_FRAGMENT, cfg.hash_fragment_mode);
+ ppe_write(ppe_dev, PPE_RSS_HASH_MASK, val);
+
+ val = FIELD_PREP(PPE_RSS_HASH_SEED_VAL, cfg.hash_seed);
+ ppe_write(ppe_dev, PPE_RSS_HASH_SEED, val);
+
+ for (i = 0; i < PPE_RSS_HASH_MIX_NUM; i++) {
+ switch (i) {
+ case 0 ... 3:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_sip_mix[i]);
+ break;
+ case 4 ... 7:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_dip_mix[i - 4]);
+ break;
+ case 8:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_protocol_mix);
+ break;
+ case 9:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_dport_mix);
+ break;
+ case 10:
+ val = FIELD_PREP(PPE_RSS_HASH_MIX_VAL, cfg.hash_sport_mix);
+ break;
+ default:
+ break;
+ }
+ ppe_write(ppe_dev, PPE_RSS_HASH_MIX + i * PPE_RSS_HASH_MIX_INC, val);
+ }
+
+ for (i = 0; i < PPE_RSS_HASH_FIN_NUM; i++) {
+ val = FIELD_PREP(PPE_RSS_HASH_FIN_INNER, cfg.hash_fin_inner[i]) |
+ FIELD_PREP(PPE_RSS_HASH_FIN_OUTER, cfg.hash_fin_outer[i]);
+
+ ppe_write(ppe_dev, PPE_RSS_HASH_FIN + i * PPE_RSS_HASH_FIN_INC, val);
+ }
+ }
+
+ return 0;
+}
+
static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_scheduler_set = ppe_queue_scheduler_set,
.queue_scheduler_get = ppe_queue_scheduler_get,
@@ -340,6 +436,7 @@ static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_ucast_base_get = ppe_queue_ucast_base_get,
.queue_ucast_pri_class_set = ppe_queue_ucast_pri_class_set,
.queue_ucast_hash_class_set = ppe_queue_ucast_hash_class_set,
+ .rss_hash_config_set = ppe_rss_hash_config_set,
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
index ab64a760b60b..da0f37323042 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -12,6 +12,8 @@

#define PPE_QUEUE_PRI_MAX 16
#define PPE_QUEUE_HASH_MAX 256
+#define PPE_RSS_HASH_MODE_IPV4 BIT(0)
+#define PPE_RSS_HASH_MODE_IPV6 BIT(1)

/* PPE hardware QoS configurations used to dispatch the packet passed
* through PPE, the scheduler supports DRR(deficit round robin with the
@@ -148,6 +150,23 @@ struct ppe_servcode_cfg {
int offset_sel;
};

+/* The PPE RSS hash can be configured to generate the hash value based
+ * on the 5-tuple of the packet; the generated hash value is used to
+ * decide the final queue ID.
+ */
+struct ppe_rss_hash_cfg {
+ u32 hash_mask;
+ bool hash_fragment_mode;
+ u32 hash_seed;
+ u8 hash_sip_mix[4];
+ u8 hash_dip_mix[4];
+ u8 hash_protocol_mix;
+ u8 hash_sport_mix;
+ u8 hash_dport_mix;
+ u8 hash_fin_inner[5];
+ u8 hash_fin_outer[5];
+};
+
/* The operations are used to configure the PPE queue related resource */
struct ppe_queue_ops {
int (*queue_scheduler_set)(struct ppe_device *ppe_dev,
@@ -176,6 +195,9 @@ struct ppe_queue_ops {
int profile_id,
int rss_hash,
int class_offset);
+ int (*rss_hash_config_set)(struct ppe_device *ppe_dev,
+ int mode,
+ struct ppe_rss_hash_cfg hash_cfg);
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 3e61de54f921..b42089599cc9 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -19,6 +19,50 @@
#define PPE_RX_FIFO_CFG_INC 4
#define PPE_RX_FIFO_CFG_THRSH GENMASK(2, 0)

+#define PPE_RSS_HASH_MASK 0xb4318
+#define PPE_RSS_HASH_MASK_NUM 1
+#define PPE_RSS_HASH_MASK_INC 4
+#define PPE_RSS_HASH_MASK_HASH_MASK GENMASK(20, 0)
+#define PPE_RSS_HASH_MASK_FRAGMENT BIT(28)
+
+#define PPE_RSS_HASH_SEED 0xb431c
+#define PPE_RSS_HASH_SEED_NUM 1
+#define PPE_RSS_HASH_SEED_INC 4
+#define PPE_RSS_HASH_SEED_VAL GENMASK(31, 0)
+
+#define PPE_RSS_HASH_MIX 0xb4320
+#define PPE_RSS_HASH_MIX_NUM 11
+#define PPE_RSS_HASH_MIX_INC 4
+#define PPE_RSS_HASH_MIX_VAL GENMASK(4, 0)
+
+#define PPE_RSS_HASH_FIN 0xb4350
+#define PPE_RSS_HASH_FIN_NUM 5
+#define PPE_RSS_HASH_FIN_INC 4
+#define PPE_RSS_HASH_FIN_INNER GENMASK(4, 0)
+#define PPE_RSS_HASH_FIN_OUTER GENMASK(9, 5)
+
+#define PPE_RSS_HASH_MASK_IPV4 0xb4380
+#define PPE_RSS_HASH_MASK_IPV4_NUM 1
+#define PPE_RSS_HASH_MASK_IPV4_INC 4
+#define PPE_RSS_HASH_MASK_IPV4_HASH_MASK GENMASK(20, 0)
+#define PPE_RSS_HASH_MASK_IPV4_FRAGMENT BIT(28)
+
+#define PPE_RSS_HASH_SEED_IPV4 0xb4384
+#define PPE_RSS_HASH_SEED_IPV4_NUM 1
+#define PPE_RSS_HASH_SEED_IPV4_INC 4
+#define PPE_RSS_HASH_SEED_IPV4_VAL GENMASK(31, 0)
+
+#define PPE_RSS_HASH_MIX_IPV4 0xb4390
+#define PPE_RSS_HASH_MIX_IPV4_NUM 5
+#define PPE_RSS_HASH_MIX_IPV4_INC 4
+#define PPE_RSS_HASH_MIX_IPV4_VAL GENMASK(4, 0)
+
+#define PPE_RSS_HASH_FIN_IPV4 0xb43b0
+#define PPE_RSS_HASH_FIN_IPV4_NUM 5
+#define PPE_RSS_HASH_FIN_IPV4_INC 4
+#define PPE_RSS_HASH_FIN_IPV4_INNER GENMASK(4, 0)
+#define PPE_RSS_HASH_FIN_IPV4_OUTER GENMASK(9, 5)
+
#define PPE_BM_TDM_CFG_TBL 0xc000
#define PPE_BM_TDM_CFG_TBL_NUM 128
#define PPE_BM_TDM_CFG_TBL_INC 0x10
--
2.42.0


2024-01-10 11:48:33

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 14/20] net: ethernet: qualcomm: Add PPE AC(admission control) function

The PPE AC function configures the thresholds used to decide when
packets are dropped from a queue.

In addition, the back pressure from an EDMA ring to a PPE queue can be
configured, which is used by the EDMA driver to enable the back
pressure feature.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe_ops.c | 182 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe_ops.h | 47 +++++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 24 +++
3 files changed, 253 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
index 0398a36d680a..b4f46ad2be59 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.c
@@ -429,6 +429,183 @@ static int ppe_rss_hash_config_set(struct ppe_device *ppe_dev,
return 0;
}

+static int ppe_queue_ac_threshold_set(struct ppe_device *ppe_dev,
+ int queue,
+ struct ppe_queue_ac_threshold ac_threshold)
+{
+ union ppe_ac_uni_queue_cfg_u uni_queue_cfg;
+
+ if (queue >= PPE_AC_UNI_QUEUE_CFG_TBL_NUM)
+ return -EINVAL;
+
+ memset(&uni_queue_cfg, 0, sizeof(uni_queue_cfg));
+ ppe_read_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * queue,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+
+ uni_queue_cfg.bf.wred_en = ac_threshold.wred_enable;
+ uni_queue_cfg.bf.color_aware = ac_threshold.color_enable;
+ uni_queue_cfg.bf.shared_dynamic = ac_threshold.dynamic;
+ uni_queue_cfg.bf.shared_weight = ac_threshold.shared_weight;
+ uni_queue_cfg.bf.shared_ceiling = ac_threshold.ceiling;
+ uni_queue_cfg.bf.gap_grn_grn_min = ac_threshold.green_min_off;
+ uni_queue_cfg.bf.gap_grn_yel_max = ac_threshold.yel_max_off;
+ uni_queue_cfg.bf.gap_grn_yel_min_0 = ac_threshold.yel_min_off & 0x3ff;
+ uni_queue_cfg.bf.gap_grn_yel_min_1 = (ac_threshold.yel_min_off >> 10) & BIT(0);
+ uni_queue_cfg.bf.gap_grn_red_max = ac_threshold.red_max_off;
+ uni_queue_cfg.bf.gap_grn_red_min = ac_threshold.red_min_off;
+ uni_queue_cfg.bf.red_resume_0 = ac_threshold.red_resume_off & 0x1ff;
+ uni_queue_cfg.bf.red_resume_1 = (ac_threshold.red_resume_off >> 9) & GENMASK(1, 0);
+ uni_queue_cfg.bf.yel_resume = ac_threshold.yel_resume_off;
+ uni_queue_cfg.bf.grn_resume = ac_threshold.green_resume_off;
+
+ return ppe_write_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * queue,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+}
+
+static int ppe_queue_ac_threshold_get(struct ppe_device *ppe_dev,
+ int queue,
+ struct ppe_queue_ac_threshold *ac_threshold)
+{
+ union ppe_ac_uni_queue_cfg_u uni_queue_cfg;
+
+ if (queue >= PPE_AC_UNI_QUEUE_CFG_TBL_NUM)
+ return -EINVAL;
+
+ memset(&uni_queue_cfg, 0, sizeof(uni_queue_cfg));
+ ppe_read_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * queue,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+
+ ac_threshold->wred_enable = uni_queue_cfg.bf.wred_en;
+ ac_threshold->color_enable = uni_queue_cfg.bf.color_aware;
+ ac_threshold->dynamic = uni_queue_cfg.bf.shared_dynamic;
+ ac_threshold->shared_weight = uni_queue_cfg.bf.shared_weight;
+ ac_threshold->ceiling = uni_queue_cfg.bf.shared_ceiling;
+ ac_threshold->green_min_off = uni_queue_cfg.bf.gap_grn_grn_min;
+ ac_threshold->yel_max_off = uni_queue_cfg.bf.gap_grn_yel_max;
+ ac_threshold->yel_min_off = (uni_queue_cfg.bf.gap_grn_yel_min_0 & 0x3ff) |
+ ((uni_queue_cfg.bf.gap_grn_yel_min_1 & BIT(0)) << 10);
+ ac_threshold->red_max_off = uni_queue_cfg.bf.gap_grn_red_max;
+ ac_threshold->red_min_off = uni_queue_cfg.bf.gap_grn_red_min;
+ ac_threshold->red_resume_off = (uni_queue_cfg.bf.red_resume_0 & 0x1ff) |
+ ((uni_queue_cfg.bf.red_resume_1 & GENMASK(1, 0)) << 9);
+ ac_threshold->yel_resume_off = uni_queue_cfg.bf.yel_resume;
+ ac_threshold->green_resume_off = uni_queue_cfg.bf.grn_resume;
+
+ return 0;
+}
+
+static int ppe_queue_ac_ctrl_set(struct ppe_device *ppe_dev,
+ u32 index,
+ struct ppe_queue_ac_ctrl ac_ctrl)
+{
+ union ppe_ac_uni_queue_cfg_u uni_queue_cfg;
+ union ppe_ac_mul_queue_cfg_u mul_queue_cfg;
+ union ppe_ac_grp_cfg_u grp_cfg;
+ int ret;
+
+ memset(&grp_cfg, 0, sizeof(grp_cfg));
+ memset(&uni_queue_cfg, 0, sizeof(uni_queue_cfg));
+ memset(&mul_queue_cfg, 0, sizeof(mul_queue_cfg));
+
+ ret = FIELD_GET(PPE_QUEUE_AC_VALUE_MASK, index);
+ if (FIELD_GET(PPE_QUEUE_AC_TYPE_MASK, index) == PPE_QUEUE_AC_TYPE_GROUP) {
+ ppe_read_tbl(ppe_dev, PPE_AC_GRP_CFG_TBL +
+ PPE_AC_GRP_CFG_TBL_INC * ret,
+ grp_cfg.val, sizeof(grp_cfg.val));
+
+ grp_cfg.bf.ac_en = ac_ctrl.ac_en;
+ grp_cfg.bf.force_ac_en = ac_ctrl.ac_fc_en;
+
+ ppe_write_tbl(ppe_dev, PPE_AC_GRP_CFG_TBL +
+ PPE_AC_GRP_CFG_TBL_INC * ret,
+ grp_cfg.val, sizeof(grp_cfg.val));
+ } else {
+ if (ret > PPE_QUEUE_AC_UCAST_MAX) {
+ ppe_read_tbl(ppe_dev, PPE_AC_MUL_QUEUE_CFG_TBL +
+ PPE_AC_MUL_QUEUE_CFG_TBL_INC * ret,
+ mul_queue_cfg.val, sizeof(mul_queue_cfg.val));
+
+ mul_queue_cfg.bf.ac_en = ac_ctrl.ac_en;
+ mul_queue_cfg.bf.force_ac_en = ac_ctrl.ac_fc_en;
+
+ ppe_write_tbl(ppe_dev, PPE_AC_MUL_QUEUE_CFG_TBL +
+ PPE_AC_MUL_QUEUE_CFG_TBL_INC * ret,
+ mul_queue_cfg.val, sizeof(mul_queue_cfg.val));
+ } else {
+ ppe_read_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * ret,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+
+ uni_queue_cfg.bf.ac_en = ac_ctrl.ac_en;
+ uni_queue_cfg.bf.force_ac_en = ac_ctrl.ac_fc_en;
+
+ ppe_write_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * ret,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+ }
+ }
+
+ return 0;
+}
+
+static int ppe_queue_ac_ctrl_get(struct ppe_device *ppe_dev,
+ u32 index,
+ struct ppe_queue_ac_ctrl *ac_ctrl)
+{
+ union ppe_ac_uni_queue_cfg_u uni_queue_cfg;
+ union ppe_ac_mul_queue_cfg_u mul_queue_cfg;
+ union ppe_ac_grp_cfg_u grp_cfg;
+ int ret;
+
+ memset(&grp_cfg, 0, sizeof(grp_cfg));
+ memset(&uni_queue_cfg, 0, sizeof(uni_queue_cfg));
+ memset(&mul_queue_cfg, 0, sizeof(mul_queue_cfg));
+
+ ret = FIELD_GET(PPE_QUEUE_AC_VALUE_MASK, index);
+ if (FIELD_GET(PPE_QUEUE_AC_TYPE_MASK, index) == PPE_QUEUE_AC_TYPE_GROUP) {
+ ppe_read_tbl(ppe_dev, PPE_AC_GRP_CFG_TBL +
+ PPE_AC_GRP_CFG_TBL_INC * ret,
+ grp_cfg.val, sizeof(grp_cfg.val));
+
+ ac_ctrl->ac_en = grp_cfg.bf.ac_en;
+ ac_ctrl->ac_fc_en = grp_cfg.bf.force_ac_en;
+ } else {
+ if (ret > PPE_QUEUE_AC_UCAST_MAX) {
+ ppe_read_tbl(ppe_dev, PPE_AC_MUL_QUEUE_CFG_TBL +
+ PPE_AC_MUL_QUEUE_CFG_TBL_INC * ret,
+ mul_queue_cfg.val, sizeof(mul_queue_cfg.val));
+
+ ac_ctrl->ac_en = mul_queue_cfg.bf.ac_en;
+ ac_ctrl->ac_fc_en = mul_queue_cfg.bf.force_ac_en;
+ } else {
+ ppe_read_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * ret,
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+
+ ac_ctrl->ac_en = uni_queue_cfg.bf.ac_en;
+ ac_ctrl->ac_fc_en = uni_queue_cfg.bf.force_ac_en;
+ }
+ }
+
+ return 0;
+}
+
+static int ppe_ring_queue_map_set(struct ppe_device *ppe_dev,
+ int ring_id,
+ u32 *queue_map)
+{
+ union ppe_ring_q_map_cfg_u ring_q_map;
+
+ memset(&ring_q_map, 0, sizeof(ring_q_map));
+
+ memcpy(ring_q_map.val, queue_map, sizeof(ring_q_map.val));
+ return ppe_write_tbl(ppe_dev, PPE_RING_Q_MAP_TBL + PPE_RING_Q_MAP_TBL_INC * ring_id,
+ ring_q_map.val, sizeof(ring_q_map.val));
+}
+
static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_scheduler_set = ppe_queue_scheduler_set,
.queue_scheduler_get = ppe_queue_scheduler_get,
@@ -437,6 +614,11 @@ static const struct ppe_queue_ops qcom_ppe_queue_config_ops = {
.queue_ucast_pri_class_set = ppe_queue_ucast_pri_class_set,
.queue_ucast_hash_class_set = ppe_queue_ucast_hash_class_set,
.rss_hash_config_set = ppe_rss_hash_config_set,
+ .queue_ac_threshold_set = ppe_queue_ac_threshold_set,
+ .queue_ac_threshold_get = ppe_queue_ac_threshold_get,
+ .queue_ac_ctrl_set = ppe_queue_ac_ctrl_set,
+ .queue_ac_ctrl_get = ppe_queue_ac_ctrl_get,
+ .ring_queue_map_set = ppe_ring_queue_map_set,
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
index da0f37323042..9d069d73e257 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_ops.h
@@ -14,6 +14,12 @@
#define PPE_QUEUE_HASH_MAX 256
#define PPE_RSS_HASH_MODE_IPV4 BIT(0)
#define PPE_RSS_HASH_MODE_IPV6 BIT(1)
+#define PPE_QUEUE_AC_TYPE_QUEUE 0
+#define PPE_QUEUE_AC_TYPE_GROUP 1
+#define PPE_QUEUE_AC_UCAST_MAX 255
+#define PPE_QUEUE_AC_VALUE_MASK GENMASK(23, 0)
+#define PPE_QUEUE_AC_TYPE_MASK GENMASK(31, 24)
+#define PPE_RING_MAPPED_BP_QUEUE_WORD_COUNT 10

/* PPE hardware QoS configurations used to dispatch the packet passed
* through PPE, the scheduler supports DRR(deficit round robin with the
@@ -167,6 +173,32 @@ struct ppe_rss_hash_cfg {
u8 hash_fin_outer[5];
};

+/* PPE queue threshold config for admission control. The threshold
+ * decides the queue length; it can be configured statically or
+ * adjusted dynamically based on the free buffer count.
+ */
+struct ppe_queue_ac_threshold {
+ bool color_enable;
+ bool wred_enable;
+ bool dynamic;
+ int shared_weight;
+ int green_min_off;
+ int yel_max_off;
+ int yel_min_off;
+ int red_max_off;
+ int red_min_off;
+ int green_resume_off;
+ int yel_resume_off;
+ int red_resume_off;
+ int ceiling;
+};
+
+/* Admission control status of PPE queue. */
+struct ppe_queue_ac_ctrl {
+ bool ac_en;
+ bool ac_fc_en;
+};
+
/* The operations are used to configure the PPE queue related resource */
struct ppe_queue_ops {
int (*queue_scheduler_set)(struct ppe_device *ppe_dev,
@@ -198,6 +230,21 @@ struct ppe_queue_ops {
int (*rss_hash_config_set)(struct ppe_device *ppe_dev,
int mode,
struct ppe_rss_hash_cfg hash_cfg);
+ int (*queue_ac_threshold_set)(struct ppe_device *ppe_dev,
+ int queue,
+ struct ppe_queue_ac_threshold ac_threshold);
+ int (*queue_ac_threshold_get)(struct ppe_device *ppe_dev,
+ int queue,
+ struct ppe_queue_ac_threshold *ac_threshold);
+ int (*queue_ac_ctrl_set)(struct ppe_device *ppe_dev,
+ u32 index,
+ struct ppe_queue_ac_ctrl ac_ctrl);
+ int (*queue_ac_ctrl_get)(struct ppe_device *ppe_dev,
+ u32 index,
+ struct ppe_queue_ac_ctrl *ac_ctrl);
+ int (*ring_queue_map_set)(struct ppe_device *ppe_dev,
+ int ring_id,
+ u32 *queue_map);
};

const struct ppe_queue_ops *ppe_queue_config_ops_get(void);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index b42089599cc9..ef12037ffed5 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -238,6 +238,30 @@ union ppe_mru_mtu_ctrl_cfg_u {
#define PPE_L0_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0)
#define PPE_L0_COMP_CFG_TBL_DRR_METER_LEN GENMASK(3, 2)

+#define PPE_RING_Q_MAP_TBL 0x42a000
+#define PPE_RING_Q_MAP_TBL_NUM 24
+#define PPE_RING_Q_MAP_TBL_INC 0x40
+
+/* The queue bitmap for the back pressure from the EDMA RX ring to the PPE queue */
+struct ppe_ring_q_map_cfg {
+ u32 queue_bitmap_0;
+ u32 queue_bitmap_1;
+ u32 queue_bitmap_2;
+ u32 queue_bitmap_3;
+ u32 queue_bitmap_4;
+ u32 queue_bitmap_5;
+ u32 queue_bitmap_6;
+ u32 queue_bitmap_7;
+ u32 queue_bitmap_8;
+ u32 queue_bitmap_9:12,
+ res0:20;
+};
+
+union ppe_ring_q_map_cfg_u {
+ u32 val[10];
+ struct ppe_ring_q_map_cfg bf;
+};
+
#define PPE_DEQ_OPR_TBL 0x430000
#define PPE_DEQ_OPR_TBL_NUM 300
#define PPE_DEQ_OPR_TBL_INC 0x10
--
2.42.0


2024-01-10 11:49:00

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 06/20] net: ethernet: qualcomm: Add PPE TDM config

The TDM (Time Division Multiplexing) config controls the performance
of the PPE ports by assigning the clock ticks used by each PPE port
to receive and transmit packets.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 111 ++++++++++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 30 +++++
2 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 122e325b407d..85d8b06a326b 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -625,6 +625,111 @@ static int of_parse_ppe_qm(struct ppe_device *ppe_dev,
return ret;
}

+static int of_parse_ppe_tdm(struct ppe_device *ppe_dev,
+ struct device_node *ppe_node)
+{
+ struct device_node *tdm_node;
+ u32 *cfg, reg_val;
+ int ret, cnt;
+
+ tdm_node = of_get_child_by_name(ppe_node, "tdm-config");
+ if (!tdm_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "tdm-config is not defined\n");
+
+ cnt = of_property_count_u32_elems(tdm_node, "qcom,tdm-bm-config");
+ if (cnt < 0)
+ return dev_err_probe(ppe_dev->dev, -EINVAL,
+ "Fail to get qcom,tdm-bm-config\n");
+
+ cfg = kmalloc_array(cnt, sizeof(*cfg), GFP_KERNEL | __GFP_ZERO);
+ if (!cfg)
+ return -ENOMEM;
+
+ ret = of_property_read_u32_array(tdm_node, "qcom,tdm-bm-config", cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,tdm-bm-config\n");
+ goto parse_tdm_err;
+ }
+
+ /* Parse TDM BM configuration,
+ * the dts property:
+ * qcom,tdm-bm-config = <valid dir port second_valid second_port>;
+ *
+ * This config decides the number ticks available for physical port
+ * to utilize buffer for receiving and transmiting packet.
+ */
+ reg_val = FIELD_PREP(PPE_BM_TDM_CTRL_TDM_DEPTH, cnt / 5) |
+ FIELD_PREP(PPE_BM_TDM_CTRL_TDM_OFFSET, 0) |
+ FIELD_PREP(PPE_BM_TDM_CTRL_TDM_EN, 1);
+ ret = ppe_write(ppe_dev, PPE_BM_TDM_CTRL, reg_val);
+ if (ret)
+ goto parse_tdm_err;
+
+ ret = 0;
+ while ((cnt - ret) / 5) {
+ reg_val = FIELD_PREP(PPE_BM_TDM_CFG_TBL_VALID, cfg[ret]) |
+ FIELD_PREP(PPE_BM_TDM_CFG_TBL_DIR, cfg[ret + 1]) |
+ FIELD_PREP(PPE_BM_TDM_CFG_TBL_PORT_NUM, cfg[ret + 2]) |
+ FIELD_PREP(PPE_BM_TDM_CFG_TBL_SECOND_PORT_VALID, cfg[ret + 3]) |
+ FIELD_PREP(PPE_BM_TDM_CFG_TBL_SECOND_PORT, cfg[ret + 4]);
+
+ ppe_write(ppe_dev,
+ PPE_BM_TDM_CFG_TBL + (ret / 5) * PPE_BM_TDM_CFG_TBL_INC,
+ reg_val);
+ ret += 5;
+ }
+
+ cnt = of_property_count_u32_elems(tdm_node, "qcom,tdm-port-scheduler-config");
+ if (cnt < 0) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,tdm-port-scheduler-config\n");
+ goto parse_tdm_err;
+ }
+
+ cfg = krealloc_array(cfg, cnt, sizeof(*cfg), GFP_KERNEL | __GFP_ZERO);
+ if (!cfg) {
+ ret = -ENOMEM;
+ goto parse_tdm_err;
+ }
+
+ ret = of_property_read_u32_array(tdm_node, "qcom,tdm-port-scheduler-config",
+ cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,tdm-port-scheduler-config\n");
+ goto parse_tdm_err;
+ }
+
+ /* Parse TDM scheduler configuration,
+ * the dts property:
+ * qcom,tdm-port-scheduler-config = <ensch_bmp ensch_port desch_port
+ * desch_second_valid desch_second_port>;
+ *
+ * This config decides the number of ticks available for packet
+ * enqueue and dequeue on the physical port.
+ */
+ reg_val = FIELD_PREP(PPE_PSCH_TDM_DEPTH_CFG_TDM_DEPTH, cnt / 5);
+ ppe_write(ppe_dev, PPE_PSCH_TDM_DEPTH_CFG, reg_val);
+
+ ret = 0;
+ while ((cnt - ret) / 5) {
+ reg_val = FIELD_PREP(PPE_PSCH_TDM_CFG_TBL_ENS_PORT_BITMAP, cfg[ret]) |
+ FIELD_PREP(PPE_PSCH_TDM_CFG_TBL_ENS_PORT, cfg[ret + 1]) |
+ FIELD_PREP(PPE_PSCH_TDM_CFG_TBL_DES_PORT, cfg[ret + 2]) |
+ FIELD_PREP(PPE_PSCH_TDM_CFG_TBL_DES_SECOND_PORT_EN, cfg[ret + 3]) |
+ FIELD_PREP(PPE_PSCH_TDM_CFG_TBL_DES_SECOND_PORT, cfg[ret + 4]);
+
+ ppe_write(ppe_dev,
+ PPE_PSCH_TDM_CFG_TBL + (ret / 5) * PPE_PSCH_TDM_CFG_TBL_INC,
+ reg_val);
+ ret += 5;
+ }
+
+ ret = 0;
+parse_tdm_err:
+ kfree(cfg);
+ return ret;
+}
+
static int of_parse_ppe_config(struct ppe_device *ppe_dev,
struct device_node *ppe_node)
{
@@ -634,7 +739,11 @@ static int of_parse_ppe_config(struct ppe_device *ppe_dev,
if (ret)
return ret;

- return of_parse_ppe_qm(ppe_dev, ppe_node);
+ ret = of_parse_ppe_qm(ppe_dev, ppe_node);
+ if (ret)
+ return ret;
+
+ return of_parse_ppe_tdm(ppe_dev, ppe_node);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 3e75b75fa48c..589f92a4f607 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -7,14 +7,44 @@
#ifndef __PPE_REGS_H__
#define __PPE_REGS_H__

+#define PPE_BM_TDM_CTRL 0xb000
+#define PPE_BM_TDM_CTRL_NUM 1
+#define PPE_BM_TDM_CTRL_INC 4
+#define PPE_BM_TDM_CTRL_TDM_DEPTH GENMASK(7, 0)
+#define PPE_BM_TDM_CTRL_TDM_OFFSET GENMASK(14, 8)
+#define PPE_BM_TDM_CTRL_TDM_EN BIT(31)
+
+#define PPE_BM_TDM_CFG_TBL 0xc000
+#define PPE_BM_TDM_CFG_TBL_NUM 128
+#define PPE_BM_TDM_CFG_TBL_INC 0x10
+#define PPE_BM_TDM_CFG_TBL_PORT_NUM GENMASK(3, 0)
+#define PPE_BM_TDM_CFG_TBL_DIR BIT(4)
+#define PPE_BM_TDM_CFG_TBL_VALID BIT(5)
+#define PPE_BM_TDM_CFG_TBL_SECOND_PORT_VALID BIT(6)
+#define PPE_BM_TDM_CFG_TBL_SECOND_PORT GENMASK(11, 8)
+
#define PPE_EG_BRIDGE_CONFIG 0x20044
#define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2)

+#define PPE_PSCH_TDM_DEPTH_CFG 0x400000
+#define PPE_PSCH_TDM_DEPTH_CFG_NUM 1
+#define PPE_PSCH_TDM_DEPTH_CFG_INC 4
+#define PPE_PSCH_TDM_DEPTH_CFG_TDM_DEPTH GENMASK(7, 0)
+
#define PPE_DEQ_OPR_TBL 0x430000
#define PPE_DEQ_OPR_TBL_NUM 300
#define PPE_DEQ_OPR_TBL_INC 0x10
#define PPE_ENQ_OPR_TBL_DEQ_DISABLE BIT(0)

+#define PPE_PSCH_TDM_CFG_TBL 0x47a000
+#define PPE_PSCH_TDM_CFG_TBL_NUM 128
+#define PPE_PSCH_TDM_CFG_TBL_INC 0x10
+#define PPE_PSCH_TDM_CFG_TBL_DES_PORT GENMASK(3, 0)
+#define PPE_PSCH_TDM_CFG_TBL_ENS_PORT GENMASK(7, 4)
+#define PPE_PSCH_TDM_CFG_TBL_ENS_PORT_BITMAP GENMASK(15, 8)
+#define PPE_PSCH_TDM_CFG_TBL_DES_SECOND_PORT_EN BIT(16)
+#define PPE_PSCH_TDM_CFG_TBL_DES_SECOND_PORT GENMASK(20, 17)
+
#define PPE_BM_PORT_FC_MODE 0x600100
#define PPE_BM_PORT_FC_MODE_NUM 15
#define PPE_BM_PORT_FC_MODE_INC 4
--
2.42.0


2024-01-10 11:49:06

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 16/20] net: ethernet: qualcomm: Add PPE L2 bridge initialization

From: Lei Wei <[email protected]>

Add PPE L2 bridge initialization. The default per-port settings are
as follows: for each PPE physical port, L2 learning is enabled and
the forward action is initialized to forward to the CPU port. PPE
bridge TX is also enabled for the PPE CPU port and disabled for the
PPE physical ports.

Signed-off-by: Lei Wei <[email protected]>
Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 78 ++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 10 ++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 102 +++++++++++++++++++
3 files changed, 190 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 71973bce2cd2..04f80589c05b 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -1260,6 +1260,80 @@ static int ppe_rss_hash_init(struct ppe_device *ppe_dev)
hash_cfg);
}

+static int ppe_bridge_init(struct ppe_device *ppe_dev)
+{
+ union ppe_l2_vp_port_tbl_u port_tbl;
+ union ppe_vsi_tbl_u vsi_tbl;
+ u32 reg_val = 0;
+ int i = 0;
+
+ /* CPU port0 initialization */
+ reg_val = FIELD_PREP(PPE_PORT_BRIDGE_CTRL_ISOLATION_BITMAP, 0x7F) |
+ PPE_PORT_BRIDGE_CTRL_PROMISC_EN;
+ ppe_mask(ppe_dev,
+ PPE_PORT_BRIDGE_CTRL + PPE_PORT_BRIDGE_CTRL_INC * PPE_PORT0,
+ PPE_PORT_BRIDGE_CTRL_MASK,
+ reg_val | PPE_PORT_BRIDGE_CTRL_TXMAC_EN);
+
+ /* Physical and virtual physical port initialization */
+ reg_val |= (PPE_PORT_BRIDGE_CTRL_STATION_MODE_LRN_EN |
+ PPE_PORT_BRIDGE_CTRL_NEW_ADDR_LRN_EN);
+ for (i = PPE_PORT1; i <= PPE_PORT6; i++) {
+ ppe_mask(ppe_dev,
+ PPE_PORT_BRIDGE_CTRL + PPE_PORT_BRIDGE_CTRL_INC * i,
+ PPE_PORT_BRIDGE_CTRL_MASK,
+ reg_val);
+
+ /* Invalid VSI forwarding to CPU port0 */
+ memset(&port_tbl, 0, sizeof(port_tbl));
+ ppe_read_tbl(ppe_dev,
+ PPE_L2_VP_PORT_TBL + PPE_L2_VP_PORT_TBL_INC * i,
+ port_tbl.val,
+ sizeof(port_tbl.val));
+ port_tbl.bf.invalid_vsi_forwarding_en = true;
+ port_tbl.bf.dst_info = PPE_PORT0;
+ ppe_write_tbl(ppe_dev,
+ PPE_L2_VP_PORT_TBL + PPE_L2_VP_PORT_TBL_INC * i,
+ port_tbl.val,
+ sizeof(port_tbl.val));
+ }
+
+ /* Internal port7 initialization */
+ ppe_mask(ppe_dev,
+ PPE_PORT_BRIDGE_CTRL + PPE_PORT_BRIDGE_CTRL_INC * PPE_PORT7,
+ PPE_PORT_BRIDGE_CTRL_MASK,
+ reg_val | PPE_PORT_BRIDGE_CTRL_TXMAC_EN);
+
+ /* Enable Global L2 Learn and Ageing */
+ ppe_mask(ppe_dev,
+ PPE_L2_GLOBAL_CONFIG,
+ PPE_L2_GLOBAL_CONFIG_LRN_EN | PPE_L2_GLOBAL_CONFIG_AGE_EN,
+ PPE_L2_GLOBAL_CONFIG_LRN_EN | PPE_L2_GLOBAL_CONFIG_AGE_EN);
+
+ /* VSI initialization */
+ for (i = 0; i < PPE_VSI_TBL_NUM; i++) {
+ memset(&vsi_tbl, 0, sizeof(vsi_tbl));
+ ppe_read_tbl(ppe_dev,
+ PPE_VSI_TBL + PPE_VSI_TBL_INC * i,
+ vsi_tbl.val,
+ sizeof(vsi_tbl.val));
+ vsi_tbl.bf.member_port_bitmap = BIT(PPE_PORT0);
+ vsi_tbl.bf.uuc_bitmap = BIT(PPE_PORT0);
+ vsi_tbl.bf.umc_bitmap = BIT(PPE_PORT0);
+ vsi_tbl.bf.bc_bitmap = BIT(PPE_PORT0);
+ vsi_tbl.bf.new_addr_lrn_en = true;
+ vsi_tbl.bf.new_addr_fwd_cmd = 0;
+ vsi_tbl.bf.station_move_lrn_en = true;
+ vsi_tbl.bf.station_move_fwd_cmd = 0;
+ ppe_write_tbl(ppe_dev,
+ PPE_VSI_TBL + PPE_VSI_TBL_INC * i,
+ vsi_tbl.val,
+ sizeof(vsi_tbl.val));
+ }
+
+ return 0;
+}
+
static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
{
int ret;
@@ -1276,6 +1350,10 @@ static int ppe_dev_hw_init(struct ppe_device *ppe_dev)
if (ret)
return ret;

+ ret = ppe_bridge_init(ppe_dev);
+ if (ret)
+ return ret;
+
return ppe_rss_hash_init(ppe_dev);
}

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 507626b6ab2e..828d467540c9 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -11,6 +11,16 @@
#include <linux/clk.h>
#include <linux/reset.h>

+/* PPE Ports */
+#define PPE_PORT0 0
+#define PPE_PORT1 1
+#define PPE_PORT2 2
+#define PPE_PORT3 3
+#define PPE_PORT4 4
+#define PPE_PORT5 5
+#define PPE_PORT6 6
+#define PPE_PORT7 7
+
enum ppe_clk_id {
/* clocks for CMN PLL */
PPE_CMN_AHB_CLK,
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 98bf19f974ce..13115405bad9 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -238,6 +238,39 @@ union ppe_eg_service_cfg_u {
#define PPE_TX_BUFF_THRSH_XOFF GENMASK(7, 0)
#define PPE_TX_BUFF_THRSH_XON GENMASK(15, 8)

+#define PPE_L2_GLOBAL_CONFIG 0x60038
+#define PPE_L2_GLOBAL_CONFIG_LRN_EN BIT(6)
+#define PPE_L2_GLOBAL_CONFIG_AGE_EN BIT(7)
+
+#define PPE_MIRROR_ANALYZER 0x60040
+#define PPE_MIRROR_ANALYZER_NUM 1
+#define PPE_MIRROR_ANALYZER_INC 4
+#define PPE_MIRROR_ANALYZER_INGRESS_PORT GENMASK(5, 0)
+#define PPE_MIRROR_ANALYZER_EGRESS_PORT GENMASK(13, 8)
+
+#define PPE_PORT_BRIDGE_CTRL 0x60300
+#define PPE_PORT_BRIDGE_CTRL_NUM 8
+#define PPE_PORT_BRIDGE_CTRL_INC 4
+#define PPE_PORT_BRIDGE_CTRL_NEW_ADDR_LRN_EN BIT(0)
+#define PPE_PORT_BRIDGE_CTRL_NEW_ADDR_FWD_CMD GENMASK(2, 1)
+#define PPE_PORT_BRIDGE_CTRL_STATION_MODE_LRN_EN BIT(3)
+#define PPE_PORT_BRIDGE_CTRL_STATION_MODE_FWD_CMD GENMASK(5, 4)
+#define PPE_PORT_BRIDGE_CTRL_ISOLATION_BITMAP GENMASK(15, 8)
+#define PPE_PORT_BRIDGE_CTRL_TXMAC_EN BIT(16)
+#define PPE_PORT_BRIDGE_CTRL_PROMISC_EN BIT(17)
+#define PPE_PORT_BRIDGE_CTRL_MASK GENMASK(17, 0)
+
+#define PPE_PORT_MIRROR 0x60800
+#define PPE_PORT_MIRROR_NUM 8
+#define PPE_PORT_MIRROR_INC 4
+#define PPE_PORT_MIRROR_INGRESS_EN BIT(0)
+#define PPE_PORT_MIRROR_EGRESS_EN BIT(1)
+
+#define PPE_CST_STATE 0x60100
+#define PPE_CST_STATE_NUM 8
+#define PPE_CST_STATE_INC 4
+#define PPE_CST_STATE_PORT_STATE GENMASK(1, 0)
+
#define PPE_MC_MTU_CTRL_TBL 0x60a00
#define PPE_MC_MTU_CTRL_TBL_NUM 8
#define PPE_MC_MTU_CTRL_TBL_INC 4
@@ -245,6 +278,28 @@ union ppe_eg_service_cfg_u {
#define PPE_MC_MTU_CTRL_TBL_MTU_CMD GENMASK(15, 14)
#define PPE_MC_MTU_CTRL_TBL_TX_CNT_EN BIT(16)

+#define PPE_VSI_TBL 0x63800
+#define PPE_VSI_TBL_NUM 64
+#define PPE_VSI_TBL_INC 0x10
+
+/* PPE VSI configurations */
+struct ppe_vsi_tbl {
+ u32 member_port_bitmap:8,
+ uuc_bitmap:8,
+ umc_bitmap:8,
+ bc_bitmap:8;
+ u32 new_addr_lrn_en:1,
+ new_addr_fwd_cmd:2,
+ station_move_lrn_en:1,
+ station_move_fwd_cmd:2,
+ res0:26;
+};
+
+union ppe_vsi_tbl_u {
+ u32 val[2];
+ struct ppe_vsi_tbl bf;
+};
+
#define PPE_MRU_MTU_CTRL_TBL 0x65000
#define PPE_MRU_MTU_CTRL_TBL_NUM 256
#define PPE_MRU_MTU_CTRL_TBL_INC 0x10
@@ -295,6 +350,53 @@ union ppe_mru_mtu_ctrl_cfg_u {
#define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30)
#define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31)

+#define PPE_L2_VP_PORT_TBL 0x98000
+#define PPE_L2_VP_PORT_TBL_NUM 256
+#define PPE_L2_VP_PORT_TBL_INC 0x10
+
+/* Port configurations */
+struct ppe_l2_vp_port_tbl {
+ u32 invalid_vsi_forwarding_en:1,
+ promisc_en:1,
+ dst_info:8,
+ physical_port:3,
+ new_addr_lrn_en:1,
+ new_addr_fwd_cmd:2,
+ station_move_lrn_en:1,
+ station_move_fwd_cmd:2,
+ lrn_lmt_cnt:12,
+ lrn_lmt_en:1;
+ u32 lrn_lmt_exceed_fwd:2,
+ eg_vlan_fltr_cmd:1,
+ port_isolation_bitmap:8,
+ isol_profile:6,
+ isol_en:1,
+ policer_en:1,
+ policer_index:9,
+ vp_state_check_en:1,
+ vp_type:1,
+ vp_context_active:1,
+ vp_eg_data_valid:1;
+ u32 physical_port_mtu_check_en:1,
+ mtu_check_type:1,
+ extra_header_len:8,
+ eg_vlan_fmt_valid:1,
+ eg_stag_fmt:1,
+ eg_ctag_fmt:1,
+ exception_fmt_ctrl:1,
+ enq_service_code_en:1,
+ enq_service_code:8,
+ enq_phy_port:3,
+ app_ctrl_profile_0:6;
+ u32 app_ctrl_profile_1:2,
+ res0:30;
+};
+
+union ppe_l2_vp_port_tbl_u {
+ u32 val[4];
+ struct ppe_l2_vp_port_tbl bf;
+};
+
#define PPE_PORT_RX_CNT_TBL 0x150000
#define PPE_PORT_RX_CNT_TBL_NUM 256
#define PPE_PORT_RX_CNT_TBL_INC 0x20
--
2.42.0


2024-01-10 11:49:21

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 05/20] net: ethernet: qualcomm: Add PPE queue management config

The QM (queue management) config decides the queue length and the
threshold at which packets are dropped. There are two kinds of queue,
unicast and multicast, used to transmit different types of packet.

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 158 ++++++++++++++++++-
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 106 +++++++++++++
2 files changed, 263 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 94fa13dd17da..122e325b407d 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -475,10 +475,166 @@ static int of_parse_ppe_bm(struct ppe_device *ppe_dev,
return ret;
}

+static int of_parse_ppe_qm(struct ppe_device *ppe_dev,
+ struct device_node *ppe_node)
+{
+ union ppe_ac_uni_queue_cfg_u uni_queue_cfg;
+ union ppe_ac_mul_queue_cfg_u mul_queue_cfg;
+ union ppe_ac_grp_cfg_u group_cfg;
+ struct device_node *qm_node;
+ int ret, cnt, queue_id;
+ u32 *cfg;
+
+ qm_node = of_get_child_by_name(ppe_node, "queue-management-config");
+ if (!qm_node)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get queue-management-config\n");
+
+ cnt = of_property_count_u32_elems(qm_node, "qcom,group-config");
+ if (cnt < 0)
+ return dev_err_probe(ppe_dev->dev, -ENODEV,
+ "Fail to get qcom,group-config\n");
+
+ cfg = kmalloc_array(cnt, sizeof(*cfg), GFP_KERNEL | __GFP_ZERO);
+ if (!cfg)
+ return -ENOMEM;
+
+ ret = of_property_read_u32_array(qm_node, "qcom,group-config", cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,group-config\n");
+ goto parse_qm_err;
+ }
+
+ /* Parse QM group config:
+ * qcom,group-config = <group total prealloc ceil resume_off>;
+ *
+ * For packet enqueue, two kinds of buffer are available: the queue
+ * based buffer and the group (shared) buffer. The queue based buffer
+ * is used first; the shared buffer is used once it is exhausted.
+ *
+ * The PPE supports a maximum of 4 buffer groups.
+ */
+ ret = 0;
+ while ((cnt - ret) / 5) {
+ memset(&group_cfg, 0, sizeof(group_cfg));
+
+ ppe_read_tbl(ppe_dev, PPE_AC_GRP_CFG_TBL +
+ PPE_AC_GRP_CFG_TBL_INC * cfg[ret],
+ group_cfg.val, sizeof(group_cfg.val));
+
+ group_cfg.bf.limit = cfg[ret + 1];
+ group_cfg.bf.prealloc_limit = cfg[ret + 2];
+ group_cfg.bf.dp_thrd_0 = cfg[ret + 3] & 0x7f;
+ group_cfg.bf.dp_thrd_1 = cfg[ret + 3] >> 7;
+ group_cfg.bf.grn_resume = cfg[ret + 4];
+
+ ppe_write_tbl(ppe_dev, PPE_AC_GRP_CFG_TBL +
+ PPE_AC_GRP_CFG_TBL_INC * cfg[ret],
+ group_cfg.val, sizeof(group_cfg.val));
+ ret += 5;
+ }
+
+ cnt = of_property_count_u32_elems(qm_node, "qcom,queue-config");
+ if (cnt < 0) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,queue-config\n");
+ goto parse_qm_err;
+ }
+
+ cfg = krealloc_array(cfg, cnt, sizeof(*cfg), GFP_KERNEL | __GFP_ZERO);
+ if (!cfg) {
+ ret = -ENOMEM;
+ goto parse_qm_err;
+ }
+
+ ret = of_property_read_u32_array(qm_node, "qcom,queue-config", cfg, cnt);
+ if (ret) {
+ dev_err(ppe_dev->dev, "Fail to get qcom,queue-config\n");
+ goto parse_qm_err;
+ }
+
+ /* Parse queue based config:
+ * qcom,queue-config = <queue_base queue_num group prealloc
+ * ceil weight resume_off dynamic>;
+ *
+ * The PPE provides 256 unicast queues (queue IDs 0-255) and 44
+ * multicast queues (256-299). Each queue is assigned a dedicated
+ * buffer and a ceiling above which packets are dropped. Unicast
+ * queues support both a statically configured ceiling and a dynamic
+ * ceiling adjusted according to the available group buffers;
+ * multicast queues only support a static ceiling.
+ */
+ ret = 0;
+ while ((cnt - ret) / 8) {
+ queue_id = 0;
+ while (queue_id < cfg[ret + 1]) {
+ if (cfg[ret] + queue_id < PPE_AC_UNI_QUEUE_CFG_TBL_NUM) {
+ memset(&uni_queue_cfg, 0, sizeof(uni_queue_cfg));
+
+ ppe_read_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * (cfg[ret] + queue_id),
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+
+ uni_queue_cfg.bf.ac_grp_id = cfg[ret + 2];
+ uni_queue_cfg.bf.prealloc_limit = cfg[ret + 3];
+ uni_queue_cfg.bf.shared_ceiling = cfg[ret + 4];
+ uni_queue_cfg.bf.shared_weight = cfg[ret + 5];
+ uni_queue_cfg.bf.grn_resume = cfg[ret + 6];
+ uni_queue_cfg.bf.shared_dynamic = cfg[ret + 7];
+ uni_queue_cfg.bf.ac_en = 1;
+
+ ppe_write_tbl(ppe_dev, PPE_AC_UNI_QUEUE_CFG_TBL +
+ PPE_AC_UNI_QUEUE_CFG_TBL_INC * (cfg[ret] + queue_id),
+ uni_queue_cfg.val, sizeof(uni_queue_cfg.val));
+ } else {
+ memset(&mul_queue_cfg, 0, sizeof(mul_queue_cfg));
+
+ ppe_read_tbl(ppe_dev, PPE_AC_MUL_QUEUE_CFG_TBL +
+ PPE_AC_MUL_QUEUE_CFG_TBL_INC * (cfg[ret] + queue_id),
+ mul_queue_cfg.val, sizeof(mul_queue_cfg.val));
+
+ mul_queue_cfg.bf.ac_grp_id = cfg[ret + 2];
+ mul_queue_cfg.bf.prealloc_limit = cfg[ret + 3];
+ mul_queue_cfg.bf.shared_ceiling = cfg[ret + 4];
+ mul_queue_cfg.bf.grn_resume = cfg[ret + 6];
+ mul_queue_cfg.bf.ac_en = 1;
+
+ ppe_write_tbl(ppe_dev, PPE_AC_MUL_QUEUE_CFG_TBL +
+ PPE_AC_MUL_QUEUE_CFG_TBL_INC * (cfg[ret] + queue_id),
+ mul_queue_cfg.val, sizeof(mul_queue_cfg.val));
+ }
+
+ ppe_mask(ppe_dev, PPE_ENQ_OPR_TBL +
+ PPE_ENQ_OPR_TBL_INC * (cfg[ret] + queue_id),
+ PPE_ENQ_OPR_TBL_ENQ_DISABLE, 0);
+
+ ppe_mask(ppe_dev, PPE_DEQ_OPR_TBL +
+ PPE_DEQ_OPR_TBL_INC * (cfg[ret] + queue_id),
+ PPE_ENQ_OPR_TBL_DEQ_DISABLE, 0);
+
+ queue_id++;
+ }
+ ret += 8;
+ }
+
+ /* Enable queue counter */
+ ret = ppe_mask(ppe_dev, PPE_EG_BRIDGE_CONFIG,
+ PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN,
+ PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN);
+parse_qm_err:
+ kfree(cfg);
+ return ret;
+}
+
static int of_parse_ppe_config(struct ppe_device *ppe_dev,
struct device_node *ppe_node)
{
- return of_parse_ppe_bm(ppe_dev, ppe_node);
+ int ret;
+
+ ret = of_parse_ppe_bm(ppe_dev, ppe_node);
+ if (ret)
+ return ret;
+
+ return of_parse_ppe_qm(ppe_dev, ppe_node);
}

static int qcom_ppe_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index e11d8f2a26b7..3e75b75fa48c 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -7,6 +7,14 @@
#ifndef __PPE_REGS_H__
#define __PPE_REGS_H__

+#define PPE_EG_BRIDGE_CONFIG 0x20044
+#define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2)
+
+#define PPE_DEQ_OPR_TBL 0x430000
+#define PPE_DEQ_OPR_TBL_NUM 300
+#define PPE_DEQ_OPR_TBL_INC 0x10
+#define PPE_ENQ_OPR_TBL_DEQ_DISABLE BIT(0)
+
#define PPE_BM_PORT_FC_MODE 0x600100
#define PPE_BM_PORT_FC_MODE_NUM 15
#define PPE_BM_PORT_FC_MODE_INC 4
@@ -46,4 +54,102 @@ union ppe_bm_port_fc_cfg_u {
struct ppe_bm_port_fc_cfg bf;
};

+#define PPE_AC_UNI_QUEUE_CFG_TBL 0x848000
+#define PPE_AC_UNI_QUEUE_CFG_TBL_NUM 256
+#define PPE_AC_UNI_QUEUE_CFG_TBL_INC 0x10
+
+/* PPE unicast queue (0-255) configurations; the threshold can be
+ * configured as static or dynamic.
+ *
+ * For a dynamic threshold, the queue threshold depends on the remaining
+ * buffer.
+ */
+struct ppe_ac_uni_queue_cfg {
+ u32 ac_en:1,
+ wred_en:1,
+ force_ac_en:1,
+ color_aware:1,
+ ac_grp_id:2,
+ prealloc_limit:11,
+ shared_dynamic:1,
+ shared_weight:3,
+ shared_ceiling:11;
+ u32 gap_grn_grn_min:11,
+ gap_grn_yel_max:11,
+ gap_grn_yel_min_0:10;
+ u32 gap_grn_yel_min_1:1,
+ gap_grn_red_max:11,
+ gap_grn_red_min:11,
+ red_resume_0:9;
+ u32 red_resume_1:2,
+ yel_resume:11,
+ grn_resume:11,
+ res0:8;
+};
+
+union ppe_ac_uni_queue_cfg_u {
+ u32 val[4];
+ struct ppe_ac_uni_queue_cfg bf;
+};
+
+#define PPE_AC_MUL_QUEUE_CFG_TBL 0x84a000
+#define PPE_AC_MUL_QUEUE_CFG_TBL_NUM 44
+#define PPE_AC_MUL_QUEUE_CFG_TBL_INC 0x10
+
+/* PPE multicast queue (256-299) configurations; the multicast queues are
+ * fixed to the PPE ports and only support a static threshold.
+ */
+struct ppe_ac_mul_queue_cfg {
+ u32 ac_en:1,
+ force_ac_en:1,
+ color_aware:1,
+ ac_grp_id:2,
+ prealloc_limit:11,
+ shared_ceiling:11,
+ gap_grn_yel_0:5;
+ u32 gap_grn_yel_1:6,
+ gap_grn_red:11,
+ red_resume:11,
+ yel_resume_0:4;
+ u32 yel_resume_1:7,
+ grn_resume:11,
+ res0:14;
+};
+
+union ppe_ac_mul_queue_cfg_u {
+ u32 val[3];
+ struct ppe_ac_mul_queue_cfg bf;
+};
+
+#define PPE_AC_GRP_CFG_TBL 0x84c000
+#define PPE_AC_GRP_CFG_TBL_NUM 4
+#define PPE_AC_GRP_CFG_TBL_INC 0x10
+
+/* PPE admission control group configurations */
+struct ppe_ac_grp_cfg {
+ u32 ac_en:1,
+ force_ac_en:1,
+ color_aware:1,
+ gap_grn_red:11,
+ gap_grn_yel:11,
+ dp_thrd_0:7;
+ u32 dp_thrd_1:4,
+ limit:11,
+ red_resume:11,
+ yel_resume_0:6;
+ u32 yel_resume_1:5,
+ grn_resume:11,
+ prealloc_limit:11,
+ res0:5;
+};
+
+union ppe_ac_grp_cfg_u {
+ u32 val[3];
+ struct ppe_ac_grp_cfg bf;
+};
+
+#define PPE_ENQ_OPR_TBL 0x85c000
+#define PPE_ENQ_OPR_TBL_NUM 300
+#define PPE_ENQ_OPR_TBL_INC 0x10
+#define PPE_ENQ_OPR_TBL_ENQ_DISABLE BIT(0)
+
#endif
--
2.42.0


2024-01-10 11:49:33

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 15/20] net: ethernet: qualcomm: Add PPE debugfs counters

The PPE counters can be read with the command below:
"cat /sys/kernel/debug/ppe/packet_counter"

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/Makefile | 2 +-
drivers/net/ethernet/qualcomm/ppe/ppe.c | 8 +
.../net/ethernet/qualcomm/ppe/ppe_debugfs.c | 953 ++++++++++++++++++
.../net/ethernet/qualcomm/ppe/ppe_debugfs.h | 25 +
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 256 +++++
include/linux/soc/qcom/ppe.h | 1 +
6 files changed, 1244 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h

diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile
index c00265339aa7..516ea23443ab 100644
--- a/drivers/net/ethernet/qualcomm/ppe/Makefile
+++ b/drivers/net/ethernet/qualcomm/ppe/Makefile
@@ -4,4 +4,4 @@
#

obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
-qcom-ppe-objs := ppe.o ppe_ops.o
+qcom-ppe-objs := ppe.o ppe_ops.o ppe_debugfs.o
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index d0e0fa9d5609..71973bce2cd2 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -17,6 +17,7 @@
#include "ppe.h"
#include "ppe_regs.h"
#include "ppe_ops.h"
+#include "ppe_debugfs.h"

#define PPE_SCHEDULER_PORT_NUM 8
#define MPPE_SCHEDULER_PORT_NUM 3
@@ -1328,11 +1329,18 @@ static int qcom_ppe_probe(struct platform_device *pdev)

ppe_dev->ppe_ops = &qcom_ppe_ops;
ppe_dev->is_ppe_probed = true;
+ ppe_debugfs_setup(ppe_dev);
+
return 0;
}

static int qcom_ppe_remove(struct platform_device *pdev)
{
+ struct ppe_device *ppe_dev;
+
+ ppe_dev = platform_get_drvdata(pdev);
+ ppe_debugfs_teardown(ppe_dev);
+
return 0;
}

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c
new file mode 100644
index 000000000000..a72c0eed9142
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c
@@ -0,0 +1,953 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE debugfs routines for display of PPE counters useful for debug. */
+
+#include <linux/seq_file.h>
+#include <linux/debugfs.h>
+#include <linux/soc/qcom/ppe.h>
+#include "ppe_debugfs.h"
+#include "ppe_regs.h"
+#include "ppe.h"
+
+static const char * const ppe_cpucode[] = {
+ "Forwarding to CPU",
+ "Unknown L2 protocol exception redirect/copy to CPU",
+ "PPPoE wrong version or wrong type exception redirect/copy to CPU",
+ "PPPoE wrong code exception redirect/copy to CPU",
+ "PPPoE unsupported PPP protocol exception redirect/copy to CPU",
+ "IPv4 wrong version exception redirect/copy to CPU",
+ "IPv4 small IHL exception redirect/copy to CPU",
+ "IPv4 with option exception redirect/copy to CPU",
+ "IPv4 header incomplete exception redirect/copy to CPU",
+ "IPv4 bad total length exception redirect/copy to CPU",
+ "IPv4 data incomplete exception redirect/copy to CPU",
+ "IPv4 fragment exception redirect/copy to CPU",
+ "IPv4 ping of death exception redirect/copy to CPU",
+ "IPv4 small TTL exception redirect/copy to CPU",
+ "IPv4 unknown IP protocol exception redirect/copy to CPU",
+ "IPv4 checksum error exception redirect/copy to CPU",
+ "IPv4 invalid SIP exception redirect/copy to CPU",
+ "IPv4 invalid DIP exception redirect/copy to CPU",
+ "IPv4 LAND attack exception redirect/copy to CPU",
+ "IPv4 AH header incomplete exception redirect/copy to CPU",
+ "IPv4 AH header cross 128-byte exception redirect/copy to CPU",
+ "IPv4 ESP header incomplete exception redirect/copy to CPU",
+ "IPv6 wrong version exception redirect/copy to CPU",
+ "IPv6 header incomplete exception redirect/copy to CPU",
+ "IPv6 bad total length exception redirect/copy to CPU",
+ "IPv6 data incomplete exception redirect/copy to CPU",
+ "IPv6 with extension header exception redirect/copy to CPU",
+ "IPv6 small hop limit exception redirect/copy to CPU",
+ "IPv6 invalid SIP exception redirect/copy to CPU",
+ "IPv6 invalid DIP exception redirect/copy to CPU",
+ "IPv6 LAND attack exception redirect/copy to CPU",
+ "IPv6 fragment exception redirect/copy to CPU",
+ "IPv6 ping of death exception redirect/copy to CPU",
+ "IPv6 with more than 2 extension headers exception redirect/copy to CPU",
+ "IPv6 unknown last next header exception redirect/copy to CPU",
+ "IPv6 mobility header incomplete exception redirect/copy to CPU",
+ "IPv6 mobility header cross 128-byte exception redirect/copy to CPU",
+ "IPv6 AH header incomplete exception redirect/copy to CPU",
+ "IPv6 AH header cross 128-byte exception redirect/copy to CPU",
+ "IPv6 ESP header incomplete exception redirect/copy to CPU",
+ "IPv6 ESP header cross 128-byte exception redirect/copy to CPU",
+ "IPv6 other extension header incomplete exception redirect/copy to CPU",
+ "IPv6 other extension header cross 128-byte exception redirect/copy to CPU",
+ "TCP header incomplete exception redirect/copy to CPU",
+ "TCP header cross 128-byte exception redirect/copy to CPU",
+ "TCP same SP and DP exception redirect/copy to CPU",
+ "TCP small data offset redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 0 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 1 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 2 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 3 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 4 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 5 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 6 exception redirect/copy to CPU",
+ "TCP flags VALUE/MASK group 7 exception redirect/copy to CPU",
+ "TCP checksum error exception redirect/copy to CPU",
+ "UDP header incomplete exception redirect/copy to CPU",
+ "UDP header cross 128-byte exception redirect/copy to CPU",
+ "UDP same SP and DP exception redirect/copy to CPU",
+ "UDP bad length exception redirect/copy to CPU",
+ "UDP data incomplete exception redirect/copy to CPU",
+ "UDP checksum error exception redirect/copy to CPU",
+ "UDP-Lite header incomplete exception redirect/copy to CPU",
+ "UDP-Lite header cross 128-byte exception redirect/copy to CPU",
+ "UDP-Lite same SP and DP exception redirect/copy to CPU",
+ "UDP-Lite checksum coverage value 0-7 exception redirect/copy to CPU",
+ "UDP-Lite checksum coverage value too big exception redirect/copy to CPU",
+ "UDP-Lite checksum coverage value cross 128-byte exception redirect/copy to CPU",
+ "UDP-Lite checksum error exception redirect/copy to CPU",
+ "Fake L2 protocol packet redirect/copy to CPU",
+ "Fake MAC header packet redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "L2 MRU checking fail redirect/copy to CPU",
+ "L2 MTU checking fail redirect/copy to CPU",
+ "IP prefix broadcast redirect/copy to CPU",
+ "L3 MTU checking fail redirect/copy to CPU",
+ "L3 MRU checking fail redirect/copy to CPU",
+ "ICMP redirect/copy to CPU",
+ "IP to me routing TTL 1 redirect/copy to CPU",
+ "IP to me routing TTL 0 redirect/copy to CPU",
+ "Flow service code loop redirect/copy to CPU",
+ "Flow de-accelerate redirect/copy to CPU",
+ "Flow source interface check fail redirect/copy to CPU",
+ "Flow sync toggle mismatch redirect/copy to CPU",
+ "MTU check fail if DF set redirect/copy to CPU",
+ "PPPoE multicast redirect/copy to CPU",
+ "Flow MTU check fail redirect/copy to cpu",
+ "Flow MTU DF check fail redirect/copy to CPU",
+ "UDP CHECKSUM ZERO redirect/copy to CPU",
+ "Reserved",
+ "EAPoL packet redirect/copy to CPU",
+ "PPPoE discovery packet redirect/copy to CPU",
+ "IGMP packet redirect/copy to CPU",
+ "ARP request packet redirect/copy to CPU",
+ "ARP reply packet redirect/copy to CPU",
+ "DHCPv4 packet redirect/copy to CPU",
+ "Reserved",
+ "LINKOAM packet redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "MLD packet redirect/copy to CPU",
+ "NS packet redirect/copy to CPU",
+ "NA packet redirect/copy to CPU",
+ "DHCPv6 packet redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "PTP sync packet redirect/copy to CPU",
+ "PTP follow up packet redirect/copy to CPU",
+ "PTP delay request packet redirect/copy to CPU",
+ "PTP delay response packet redirect/copy to CPU",
+ "PTP delay request packet redirect/copy to CPU",
+ "PTP delay response packet redirect/copy to CPU",
+ "PTP delay response follow up packet redirect/copy to CPU",
+ "PTP announce packet redirect/copy to CPU",
+ "PTP management packet redirect/copy to CPU",
+ "PTP signaling packet redirect/copy to CPU",
+ "PTP message reserved type 0 packet redirect/copy to CPU",
+ "PTP message reserved type 1 packet redirect/copy to CPU",
+ "PTP message reserved type 2 packet redirect/copy to CPU",
+ "PTP message reserved type 3 packet redirect/copy to CPU",
+ "PTP message reserved type packet redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "IPv4 source guard unknown packet redirect/copy to CPU",
+ "IPv6 source guard unknown packet redirect/copy to CPU",
+ "ARP source guard unknown packet redirect/copy to CPU",
+ "ND source guard unknown packet redirect/copy to CPU",
+ "IPv4 source guard violation packet redirect/copy to CPU",
+ "IPv6 source guard violation packet redirect/copy to CPU",
+ "ARP source guard violation packet redirect/copy to CPU",
+ "ND source guard violation packet redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "L3 route host mismatch action redirect/copy to CPU",
+ "L3 flow SNAT action redirect/copy to CPU",
+ "L3 flow DNAT action redirect/copy to CPU",
+ "L3 flow routing action redirect/copy to CPU",
+ "L3 flow bridging action redirect/copy to CPU",
+ "L3 multicast bridging action redirect/copy to CPU",
+ "L3 route preheader routing action redirect/copy to CPU",
+ "L3 route preheader SNAPT action redirect/copy to CPU",
+ "L3 route preheader DNAPT action redirect/copy to CPU",
+ "L3 route preheader SNAT action redirect/copy to CPU",
+ "L3 route preheader DNAT action redirect/copy to CPU",
+ "L3 no route preheader NAT action redirect/copy to CPU",
+ "L3 no route preheader NAT error redirect/copy to CPU",
+ "L3 route action redirect/copy to CPU",
+ "L3 no route action redirect/copy to CPU",
+ "L3 no route next hop invalid action redirect/copy to CPU",
+ "L3 no route preheader action redirect/copy to CPU",
+ "L3 bridge action redirect/copy to CPU",
+ "L3 flow action redirect/copy to CPU",
+ "L3 flow miss action redirect/copy to CPU",
+ "L2 new MAC address redirect/copy to CPU",
+ "L2 hash violation redirect/copy to CPU",
+ "L2 station move redirect/copy to CPU",
+ "L2 learn limit redirect/copy to CPU",
+ "L2 SA lookup action redirect/copy to CPU",
+ "L2 DA lookup action redirect/copy to CPU",
+ "APP_CTRL action redirect/copy to CPU",
+ "INGRESS_VLAN filter action redirect/copy to CPU",
+ "INGRSS VLAN translation miss redirect/copy to CPU",
+ "EGREE VLAN filter redirect/copy to CPU",
+ "Pre-IPO action",
+ "Post-IPO action",
+ "Service code action",
+ "L3 route Pre-IPO action redirect/copy to CPU",
+ "L3 route Pre-IPO snapt action redirect/copy to CPU",
+ "L3 route Pre-IPO dnapt action redirect/copy to CPU",
+ "L3 route Pre-IPO snat action redirect/copy to CPU",
+ "L3 route Pre-IPO dnat action redirect/copy to CPU",
+ "Tl L3 if check fail redirect/copy to CPU",
+ "TL vlan check fail redirect/copy to CPU",
+ "TL PPPoE Multicast termination redirect/copy to CPU",
+ "TL de-acceleration redirect/copy to CPU",
+ "TL UDP checksum zero redirect/copy to CPU",
+ "TL TTL exceed redirect/copy to CPU",
+ "TL LPM interface check fail redirect/copy to CPU",
+ "TL LPM vlan check fail redirect/copy to CPU",
+ "TL map src check fail redirect/copy to CPU",
+ "TL map dst check fail redirect/copy to CPU",
+ "TL map UDP checksum zero redirect/copy to CPU",
+ "TL map non TCP/UDP redirect/copy to CPU",
+ "TL forward cmd action redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "L2 Pre IPO action redirect/copy to CPU",
+ "L2 tunnel context check invalid redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Tunnel decap ecn redirect/copy to CPU",
+ "Tunnel decap inner packet too short redirect/copy to CPU",
+ "Tunnel VXLAN header exception redirect/copy to CPU",
+ "Tunnel VXLAN-GPE header exception redirect/copy to CPU",
+ "Tunnel GENEVE header exception redirect/copy to CPU",
+ "Tunnel GRE header exception redirect/copy to CPU",
+ "Reserved",
+ "Tunnel decap unknown inner type redirect/copy to CPU",
+ "Tunnel parser VXLAN flag exception redirect/copy to CPU",
+ "Tunnel parser VXLAN-GPE flag exception redirect/copy to CPU",
+ "Tunnel parser GRE flag exception redirect/copy to CPU",
+ "Tunnel parser GENEVE flag exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM0 exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM1 exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM2 exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM3 exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM4 exception redirect/copy to CPU",
+ "Tunnel parser PROGRAM5 exception redirect/copy to CPU",
+ "Trap for flooding",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Reserved",
+ "Egress mirror to CPU",
+ "Ingress mirror to CPU",
+};
+
+static const char * const ppe_dropcode[] = {
+ "None",
+ "Unknown L2 protocol exception drop",
+ "PPPoE wrong version or wrong type exception drop",
+ "PPPoE unsupported PPP protocol exception drop",
+ "IPv4 wrong version exception drop",
+ "IPv4 small IHL or header incomplete or data incomplete exception drop",
+ "IPv4 with option exception drop",
+ "IPv4 bad total length exception drop",
+ "IPv4 fragment exception drop",
+ "IPv4 ping of death exception drop",
+ "IPv4 small TTL exception drop",
+ "IPv4 unknown IP protocol exception drop",
+ "IPv4 checksum error exception drop",
+ "IPv4 invalid SIP/DIP exception drop",
+ "IPv4 LAND attack exception drop",
+ "IPv4 AH or ESP header incomplete or AH header cross 128 byte exception drop",
+ "IPv6 wrong version exception drop",
+ "IPv6 header or data incomplete exception drop",
+ "IPv6 bad payload length exception drop",
+ "IPv6 with extension header exception drop",
+ "IPv6 small hop limit exception drop",
+ "IPv6 invalid SIP/DIP exception drop",
+ "IPv6 LAND attack exception drop",
+ "IPv6 fragment exception drop",
+ "IPv6 ping of death exception drop",
+ "IPv6 with more than 2 extension headers or unknown last next header exception drop",
+ "IPv6 AH/ESP/other extension/mobility header incomplete/cross 128 bytes exception drop",
+ "TCP header incomplete or cross 128 byte or small data offset exception drop",
+ "TCP same SP and DP exception drop",
+ "TCP flags VALUE/MASK group 0/1/2/3/4/5/6/7 exception drop",
+ "TCP checksum error exception drop",
+ "UDP header incomplete or cross 128 byte or data incomplete exception drop",
+ "UDP same SP and DP exception drop",
+ "UDP bad length exception drop",
+ "UDP checksum error exception drop",
+ "UDP-Lite header incomplete/same SP and DP/checksum coverage invalid value exception drop",
+ "UDP-Lite checksum error exception drop",
+ "L3 route Pre-IPO action exception drop",
+ "L3 route Pre-IPO snapt action exception drop",
+ "L3 route Pre-IPO dnapt action exception drop",
+ "L3 route Pre-IPO snat action exception drop",
+ "L3 route Pre-IPO dnat action exception drop",
+ "Tl L3 if check fail exception drop",
+ "TL vlan check fail exception drop",
+ "TL PPPoE Multicast termination exception drop",
+ "TL de-acceleration exception drop",
+ "TL UDP checksum zero exception drop",
+ "TL TTL exceed exception drop",
+ "TL LPM interface check fail exception drop",
+ "TL LPM vlan check fail exception drop",
+ "TL map src check fail exception drop",
+ "TL map dst check fail exception drop",
+ "TL map UDP checksum zero exception drop",
+ "TL map non TCP/UDP exception drop",
+ "L2 Pre IPO action redirect/copy to CPU",
+ "L2 tunnel context check invalid redirect/copy to CPU",
+ "Reserved",
+ "Reserved",
+ "Tunnel decap ecn exception drop",
+ "Tunnel decap inner packet too short exception drop",
+ "Tunnel VXLAN header exception drop",
+ "Tunnel VXLAN-GPE header exception drop",
+ "Tunnel GENEVE header exception drop",
+ "Tunnel GRE header exception drop",
+ "Reserved",
+ "Tunnel decap unknown inner type exception drop",
+ "Tunnel parser VXLAN, VXLAN-GPE, GRE or GENEVE flag exception drop",
+ "Tunnel parser PROGRAM0~ PROGRAM5 exception drop",
+ "TL forward cmd action exception drop",
+ "L3 multicast bridging action",
+ "L3 no route with preheader NAT action",
+ "L3 no route with preheader NAT action error configuration",
+ "L3 route action drop",
+ "L3 no route action drop",
+ "L3 no route next hop invalid action drop",
+ "L3 no route preheader action drop",
+ "L3 bridge action drop",
+ "L3 flow action drop",
+ "L3 flow miss action drop",
+ "L2 MRU checking fail drop",
+ "L2 MTU checking fail drop",
+ "L3 IP prefix broadcast drop",
+ "L3 MTU checking fail drop",
+ "L3 MRU checking fail drop",
+ "L3 ICMP redirect drop",
+ "Fake MAC header indicated packet not routing or bypass L3 edit drop",
+ "L3 IP route TTL zero drop",
+ "L3 flow service code loop drop",
+ "L3 flow de-accelerate drop",
+ "L3 flow source interface check fail drop",
+ "Flow toggle mismatch exception drop",
+ "MTU check exception if DF set drop",
+ "PPPoE multicast packet with IP routing enabled drop",
+ "IPv4 SG unknown drop",
+ "IPv6 SG unknown drop",
+ "ARP SG unknown drop",
+ "ND SG unknown drop",
+ "IPv4 SG violation drop",
+ "IPv6 SG violation drop",
+ "ARP SG violation drop",
+ "ND SG violation drop",
+ "L2 new MAC address drop",
+ "L2 hash violation drop",
+ "L2 station move drop",
+ "L2 learn limit drop",
+ "L2 SA lookup action drop",
+ "L2 DA lookup action drop",
+ "APP_CTRL action drop",
+ "Ingress VLAN filtering action drop",
+ "Ingress VLAN translation miss drop",
+ "Egress VLAN filtering drop",
+ "Pre-IPO entry hit action drop",
+ "Post-IPO entry hit action drop",
+ "Multicast SA or broadcast SA drop",
+ "No destination drop",
+ "STG ingress filtering drop",
+ "STG egress filtering drop",
+ "Source port filter drop",
+ "Trunk select fail drop",
+ "TX MAC disable drop",
+ "Ingress VLAN tag format drop",
+ "CRC error drop",
+ "PAUSE frame drop",
+ "Promiscuous drop",
+ "Isolation drop",
+ "Management packet APP_CTRL drop",
+ "Fake L2 protocol indicated packet not routing or bypass L3 edit drop",
+ "Policing drop",
+};
+
+static int ppe_prx_drop_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ int i, tag;
+ u32 val;
+
+ /* The counter of packets dropped silently because no buffer was
+ * available; since no buffer was allocated, none needs to be released.
+ */
+ PREFIX_S("PRX_DROP_CNT", "SILENT_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_DROP_CNT_NUM; i++) {
+ ppe_read(ppe_dev, PPE_DROP_CNT + i * PPE_DROP_CNT_INC, &val);
+
+ if (val > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_ONE_TYPE(val, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_prx_bm_drop_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_drop_stat_u drop_stat;
+ int i, tag;
+
+ /* The counter of packets dropped because there were not enough buffers
+ * to cache the whole packet; some buffers were allocated for only part
+ * of the packet.
+ */
+ PREFIX_S("PRX_BM_DROP_CNT", "OVERFLOW_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_DROP_STAT_NUM; i++) {
+ memset(&drop_stat, 0, sizeof(drop_stat));
+ ppe_read_tbl(ppe_dev, PPE_DROP_STAT + PPE_DROP_STAT_INC * i,
+ drop_stat.val, sizeof(drop_stat.val));
+
+ if (drop_stat.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_ONE_TYPE(drop_stat.bf.pkt_cnt, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_prx_bm_port_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ int used_cnt, react_cnt;
+ int i, tag;
+ u32 val;
+
+ /* These are read-only counters, which cannot be flushed. */
+ PREFIX_S("PRX_BM_PORT_CNT", "USED/REACT:");
+ tag = 0;
+ for (i = 0; i < PPE_BM_USED_CNT_NUM; i++) {
+ ppe_read(ppe_dev, PPE_BM_USED_CNT + i * PPE_BM_USED_CNT_INC, &val);
+ used_cnt = FIELD_GET(PPE_BM_USED_CNT_VAL, val);
+
+ ppe_read(ppe_dev, PPE_BM_REACT_CNT + i * PPE_BM_REACT_CNT_INC, &val);
+ react_cnt = FIELD_GET(PPE_BM_REACT_CNT_VAL, val);
+
+ if (used_cnt > 0 || react_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(used_cnt, react_cnt, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_ipx_pkt_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ u32 val, tunnel_val;
+ int i, tag;
+
+ PREFIX_S("IPR_PKT_CNT", "TPRX/IPRX:");
+ tag = 0;
+ for (i = 0; i < PPE_IPR_PKT_CNT_NUM; i++) {
+ ppe_read(ppe_dev, PPE_TPR_PKT_CNT + i * PPE_IPR_PKT_CNT_INC, &tunnel_val);
+ ppe_read(ppe_dev, PPE_IPR_PKT_CNT + i * PPE_IPR_PKT_CNT_INC, &val);
+
+ if (tunnel_val > 0 || val > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(tunnel_val, val, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_port_rx_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_phy_port_rx_cnt_tbl_u phy_port_rx_cnt;
+ int i, tag;
+
+ PREFIX_S("PORT_RX_CNT", "RX/RX_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_NUM; i++) {
+ memset(&phy_port_rx_cnt, 0, sizeof(phy_port_rx_cnt));
+ ppe_read_tbl(ppe_dev, PPE_PHY_PORT_RX_CNT_TBL + PPE_PHY_PORT_RX_CNT_TBL_INC * i,
+ phy_port_rx_cnt.val, sizeof(phy_port_rx_cnt.val));
+
+ if (phy_port_rx_cnt.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(phy_port_rx_cnt.bf.pkt_cnt, phy_port_rx_cnt.bf.drop_pkt_cnt_0 |
+ phy_port_rx_cnt.bf.drop_pkt_cnt_1 << 24, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_vp_rx_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_port_rx_cnt_tbl_u port_rx_cnt;
+ int i, tag;
+
+ PREFIX_S("VPORT_RX_CNT", "RX/RX_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_PORT_RX_CNT_TBL_NUM; i++) {
+ memset(&port_rx_cnt, 0, sizeof(port_rx_cnt));
+ ppe_read_tbl(ppe_dev, PPE_PORT_RX_CNT_TBL + PPE_PORT_RX_CNT_TBL_INC * i,
+ port_rx_cnt.val, sizeof(port_rx_cnt.val));
+
+ if (port_rx_cnt.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(port_rx_cnt.bf.pkt_cnt, port_rx_cnt.bf.drop_pkt_cnt_0 |
+ port_rx_cnt.bf.drop_pkt_cnt_1 << 24, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_pre_l2_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_pre_l2_cnt_tbl_u pre_l2_cnt;
+ int i, tag;
+
+ PREFIX_S("PRE_L2_CNT", "RX/RX_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_PRE_L2_CNT_TBL_NUM; i++) {
+ memset(&pre_l2_cnt, 0, sizeof(pre_l2_cnt));
+ ppe_read_tbl(ppe_dev, PPE_PRE_L2_CNT_TBL + PPE_PRE_L2_CNT_TBL_INC * i,
+ pre_l2_cnt.val, sizeof(pre_l2_cnt.val));
+
+ if (pre_l2_cnt.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(pre_l2_cnt.bf.pkt_cnt, pre_l2_cnt.bf.drop_pkt_cnt_0 |
+ pre_l2_cnt.bf.drop_pkt_cnt_1 << 24, "vsi", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_vlan_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_vlan_cnt_u vlan_cnt;
+ int i, tag;
+
+ PREFIX_S("VLAN_CNT", "RX:");
+ tag = 0;
+ for (i = 0; i < PPE_VLAN_CNT_TBL_NUM; i++) {
+ memset(&vlan_cnt, 0, sizeof(vlan_cnt));
+ ppe_read_tbl(ppe_dev, PPE_VLAN_CNT_TBL + PPE_VLAN_CNT_TBL_INC * i,
+ vlan_cnt.val, sizeof(vlan_cnt.val));
+
+ if (vlan_cnt.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_ONE_TYPE(vlan_cnt.bf.pkt_cnt, "vsi", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_cpu_code_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_drop_cpu_cnt_u drop_cpu_cnt;
+ int i;
+
+ PREFIX_S("CPU_CODE_CNT", "CODE:");
+ for (i = 0; i < PPE_DROP_CPU_CNT_TBL_NUM; i++) {
+ memset(&drop_cpu_cnt, 0, sizeof(drop_cpu_cnt));
+ ppe_read_tbl(ppe_dev, PPE_DROP_CPU_CNT_TBL + PPE_DROP_CPU_CNT_TBL_INC * i,
+ drop_cpu_cnt.val, sizeof(drop_cpu_cnt.val));
+
+ /* The entry index i equals the CPU code when i < 256;
+ * i = 256 + dropcode * 8 + (port & 7) when i >= 256.
+ */
+ if (!drop_cpu_cnt.bf.pkt_cnt)
+ continue;
+
+ if (i < 256)
+ CNT_CPU_CODE(drop_cpu_cnt.bf.pkt_cnt, ppe_cpucode[i], i);
+ else
+ CNT_DROP_CODE(drop_cpu_cnt.bf.pkt_cnt,
+ ppe_dropcode[(i - 256) / 8],
+ (i - 256) % 8, (i - 256) / 8);
+
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_eg_vsi_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_eg_vsi_cnt_tbl_u eg_vsi_cnt;
+ int i, tag;
+
+ PREFIX_S("EG_VSI_CNT", "TX:");
+ tag = 0;
+ for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_NUM; i++) {
+ memset(&eg_vsi_cnt, 0, sizeof(eg_vsi_cnt));
+ ppe_read_tbl(ppe_dev, PPE_EG_VSI_COUNTER_TBL + PPE_EG_VSI_COUNTER_TBL_INC * i,
+ eg_vsi_cnt.val, sizeof(eg_vsi_cnt.val));
+
+ if (eg_vsi_cnt.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_ONE_TYPE(eg_vsi_cnt.bf.pkt_cnt, "vsi", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_vp_tx_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_vport_tx_counter_tbl_u vport_tx_counter;
+ union ppe_vport_tx_drop_u vport_tx_drop;
+ int i, tag;
+
+ PREFIX_S("VPORT_TX_CNT", "TX/TX_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_NUM; i++) {
+ memset(&vport_tx_counter, 0, sizeof(vport_tx_counter));
+ memset(&vport_tx_drop, 0, sizeof(vport_tx_drop));
+
+ ppe_read_tbl(ppe_dev, PPE_VPORT_TX_COUNTER_TBL + PPE_VPORT_TX_COUNTER_TBL_INC * i,
+ vport_tx_counter.val, sizeof(vport_tx_counter.val));
+ ppe_read_tbl(ppe_dev, PPE_VPORT_TX_DROP_CNT_TBL + PPE_VPORT_TX_DROP_CNT_TBL_INC * i,
+ vport_tx_drop.val, sizeof(vport_tx_drop.val));
+
+ if (vport_tx_counter.bf.pkt_cnt > 0 || vport_tx_drop.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(vport_tx_counter.bf.pkt_cnt,
+ vport_tx_drop.bf.pkt_cnt, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_port_tx_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_port_tx_counter_tbl_u port_tx_counter;
+ union ppe_port_tx_drop_u port_tx_drop;
+ int i, tag;
+
+ PREFIX_S("PORT_TX_CNT", "TX/TX_DROP:");
+ tag = 0;
+ for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_NUM; i++) {
+ memset(&port_tx_counter, 0, sizeof(port_tx_counter));
+ memset(&port_tx_drop, 0, sizeof(port_tx_drop));
+
+ ppe_read_tbl(ppe_dev, PPE_PORT_TX_COUNTER_TBL + PPE_PORT_TX_COUNTER_TBL_INC * i,
+ port_tx_counter.val, sizeof(port_tx_counter.val));
+ ppe_read_tbl(ppe_dev, PPE_PORT_TX_DROP_CNT_TBL + PPE_PORT_TX_DROP_CNT_TBL_INC * i,
+ port_tx_drop.val, sizeof(port_tx_drop.val));
+
+ if (port_tx_counter.bf.pkt_cnt > 0 || port_tx_drop.bf.pkt_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(port_tx_counter.bf.pkt_cnt,
+ port_tx_drop.bf.pkt_cnt, "port", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_queue_tx_counter_get(struct ppe_device *ppe_dev,
+ struct seq_file *seq)
+{
+ union ppe_queue_tx_counter_tbl_u queue_tx_counter;
+ u32 val, pend_cnt;
+ int i, tag;
+
+ PREFIX_S("QUEUE_TX_CNT", "TX/PEND:");
+ tag = 0;
+ for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_NUM; i++) {
+ memset(&queue_tx_counter, 0, sizeof(queue_tx_counter));
+ ppe_read_tbl(ppe_dev, PPE_QUEUE_TX_COUNTER_TBL + PPE_QUEUE_TX_COUNTER_TBL_INC * i,
+ queue_tx_counter.val, sizeof(queue_tx_counter.val));
+
+ if (i < PPE_AC_UNI_QUEUE_CFG_TBL_NUM) {
+ ppe_read(ppe_dev, PPE_AC_UNI_QUEUE_CNT_TBL +
+ PPE_AC_UNI_QUEUE_CNT_TBL_INC * i, &val);
+ pend_cnt = FIELD_GET(PPE_AC_UNI_QUEUE_CNT_TBL_PEND_CNT, val);
+ } else {
+ ppe_read(ppe_dev, PPE_AC_MUL_QUEUE_CNT_TBL +
+ PPE_AC_MUL_QUEUE_CNT_TBL_INC *
+ (i - PPE_AC_UNI_QUEUE_CFG_TBL_NUM), &val);
+ pend_cnt = FIELD_GET(PPE_AC_MUL_QUEUE_CNT_TBL_PEND_CNT, val);
+ }
+
+ if (queue_tx_counter.bf.pkt_cnt > 0 || pend_cnt > 0) {
+ tag++;
+ if (!(tag % 4)) {
+ seq_putc(seq, '\n');
+ PREFIX_S("", "");
+ }
+
+ CNT_TWO_TYPE(queue_tx_counter.bf.pkt_cnt, pend_cnt, "queue", i);
+ }
+ }
+
+ seq_putc(seq, '\n');
+ return 0;
+}
+
+static int ppe_packet_counter_show(struct seq_file *seq, void *v)
+{
+ struct ppe_device *ppe_dev = seq->private;
+
+ ppe_prx_drop_counter_get(ppe_dev, seq);
+ ppe_prx_bm_drop_counter_get(ppe_dev, seq);
+ ppe_prx_bm_port_counter_get(ppe_dev, seq);
+ ppe_ipx_pkt_counter_get(ppe_dev, seq);
+ ppe_port_rx_counter_get(ppe_dev, seq);
+ ppe_vp_rx_counter_get(ppe_dev, seq);
+ ppe_pre_l2_counter_get(ppe_dev, seq);
+ ppe_vlan_counter_get(ppe_dev, seq);
+ ppe_cpu_code_counter_get(ppe_dev, seq);
+ ppe_eg_vsi_counter_get(ppe_dev, seq);
+ ppe_vp_tx_counter_get(ppe_dev, seq);
+ ppe_port_tx_counter_get(ppe_dev, seq);
+ ppe_queue_tx_counter_get(ppe_dev, seq);
+
+ return 0;
+}
+
+static int ppe_packet_counter_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, ppe_packet_counter_show, inode->i_private);
+}
+
+static ssize_t ppe_packet_counter_clear(struct file *file,
+ const char __user *buf,
+ size_t count, loff_t *pos)
+{
+ union ppe_port_tx_counter_tbl_u port_tx_counter;
+ union ppe_vport_tx_counter_tbl_u vport_tx_counter;
+ union ppe_queue_tx_counter_tbl_u queue_tx_counter;
+ union ppe_phy_port_rx_cnt_tbl_u phy_port_rx_cnt;
+ union ppe_port_rx_cnt_tbl_u port_rx_cnt;
+ union ppe_pre_l2_cnt_tbl_u pre_l2_cnt;
+ union ppe_eg_vsi_cnt_tbl_u eg_vsi_cnt;
+ union ppe_vport_tx_drop_u vport_tx_drop;
+ union ppe_port_tx_drop_u port_tx_drop;
+ union ppe_drop_cpu_cnt_u drop_cpu_cnt;
+ union ppe_drop_stat_u drop_stat;
+ union ppe_vlan_cnt_u vlan_cnt;
+ struct ppe_device *ppe_dev;
+ u32 val;
+ int i;
+
+ memset(&drop_stat, 0, sizeof(drop_stat));
+ memset(&vlan_cnt, 0, sizeof(vlan_cnt));
+ memset(&pre_l2_cnt, 0, sizeof(pre_l2_cnt));
+ memset(&port_tx_drop, 0, sizeof(port_tx_drop));
+ memset(&eg_vsi_cnt, 0, sizeof(eg_vsi_cnt));
+ memset(&port_tx_counter, 0, sizeof(port_tx_counter));
+ memset(&vport_tx_counter, 0, sizeof(vport_tx_counter));
+ memset(&queue_tx_counter, 0, sizeof(queue_tx_counter));
+ memset(&vport_tx_drop, 0, sizeof(vport_tx_drop));
+ memset(&drop_cpu_cnt, 0, sizeof(drop_cpu_cnt));
+ memset(&port_rx_cnt, 0, sizeof(port_rx_cnt));
+ memset(&phy_port_rx_cnt, 0, sizeof(phy_port_rx_cnt));
+
+ val = 0;
+ ppe_dev = file_inode(file)->i_private;
+ for (i = 0; i < PPE_DROP_CNT_NUM; i++)
+ ppe_write(ppe_dev, PPE_DROP_CNT + i * PPE_DROP_CNT_INC, val);
+
+ for (i = 0; i < PPE_DROP_STAT_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_DROP_STAT + PPE_DROP_STAT_INC * i,
+ drop_stat.val, sizeof(drop_stat.val));
+
+ for (i = 0; i < PPE_IPR_PKT_CNT_NUM; i++)
+ ppe_write(ppe_dev, PPE_IPR_PKT_CNT + i * PPE_IPR_PKT_CNT_INC, val);
+
+ for (i = 0; i < PPE_VLAN_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_VLAN_CNT_TBL + PPE_VLAN_CNT_TBL_INC * i,
+ vlan_cnt.val, sizeof(vlan_cnt.val));
+
+ for (i = 0; i < PPE_PRE_L2_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_PRE_L2_CNT_TBL + PPE_PRE_L2_CNT_TBL_INC * i,
+ pre_l2_cnt.val, sizeof(pre_l2_cnt.val));
+
+ for (i = 0; i < PPE_PORT_TX_DROP_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_PORT_TX_DROP_CNT_TBL + PPE_PORT_TX_DROP_CNT_TBL_INC * i,
+ port_tx_drop.val, sizeof(port_tx_drop.val));
+
+ for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_EG_VSI_COUNTER_TBL + PPE_EG_VSI_COUNTER_TBL_INC * i,
+ eg_vsi_cnt.val, sizeof(eg_vsi_cnt.val));
+
+ for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_PORT_TX_COUNTER_TBL + PPE_PORT_TX_COUNTER_TBL_INC * i,
+ port_tx_counter.val, sizeof(port_tx_counter.val));
+
+ for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_VPORT_TX_COUNTER_TBL + PPE_VPORT_TX_COUNTER_TBL_INC * i,
+ vport_tx_counter.val, sizeof(vport_tx_counter.val));
+
+ for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_QUEUE_TX_COUNTER_TBL + PPE_QUEUE_TX_COUNTER_TBL_INC * i,
+ queue_tx_counter.val, sizeof(queue_tx_counter.val));
+
+ ppe_write(ppe_dev, PPE_EPE_DBG_IN_CNT, val);
+ ppe_write(ppe_dev, PPE_EPE_DBG_OUT_CNT, val);
+
+ for (i = 0; i < PPE_VPORT_TX_DROP_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_VPORT_TX_DROP_CNT_TBL +
+ PPE_VPORT_TX_DROP_CNT_TBL_INC * i,
+ vport_tx_drop.val, sizeof(vport_tx_drop.val));
+
+ for (i = 0; i < PPE_DROP_CPU_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_DROP_CPU_CNT_TBL + PPE_DROP_CPU_CNT_TBL_INC * i,
+ drop_cpu_cnt.val, sizeof(drop_cpu_cnt.val));
+
+ for (i = 0; i < PPE_PORT_RX_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_PORT_RX_CNT_TBL + PPE_PORT_RX_CNT_TBL_INC * i,
+ port_rx_cnt.val, sizeof(port_rx_cnt.val));
+
+ for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_NUM; i++)
+ ppe_write_tbl(ppe_dev, PPE_PHY_PORT_RX_CNT_TBL + PPE_PHY_PORT_RX_CNT_TBL_INC * i,
+ phy_port_rx_cnt.val, sizeof(phy_port_rx_cnt.val));
+
+ return count;
+}
+
+static const struct file_operations ppe_debugfs_packet_counter_fops = {
+ .owner = THIS_MODULE,
+ .open = ppe_packet_counter_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+ .write = ppe_packet_counter_clear,
+};
+
+int ppe_debugfs_setup(struct ppe_device *ppe_dev)
+{
+ ppe_dev->debugfs_root = debugfs_create_dir("ppe", NULL);
+ debugfs_create_file("packet_counter", 0644,
+ ppe_dev->debugfs_root,
+ ppe_dev,
+ &ppe_debugfs_packet_counter_fops);
+ return 0;
+}
+
+void ppe_debugfs_teardown(struct ppe_device *ppe_dev)
+{
+ debugfs_remove_recursive(ppe_dev->debugfs_root);
+ ppe_dev->debugfs_root = NULL;
+}
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h
new file mode 100644
index 000000000000..97463216d5af
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE debugfs counters setup. */
+
+#ifndef __PPE_DEBUGFS_H__
+#define __PPE_DEBUGFS_H__
+
+#define PREFIX_S(desc, cnt_type) \
+ seq_printf(seq, "%-16s %16s", desc, cnt_type)
+#define CNT_ONE_TYPE(cnt, str, index) \
+ seq_printf(seq, "%10u(%s=%04d)", cnt, str, index)
+#define CNT_TWO_TYPE(cnt, cnt1, str, index) \
+ seq_printf(seq, "%10u/%u(%s=%04d)", cnt, cnt1, str, index)
+#define CNT_CPU_CODE(cnt, str, index) \
+ seq_printf(seq, "%10u(%s),cpucode:%d", cnt, str, index)
+#define CNT_DROP_CODE(cnt, str, port, index) \
+ seq_printf(seq, "%10u(port=%d:%s),dropcode:%d", cnt, port, str, index)
+
+int ppe_debugfs_setup(struct ppe_device *ppe_dev);
+void ppe_debugfs_teardown(struct ppe_device *ppe_dev);
+
+#endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index ef12037ffed5..98bf19f974ce 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -19,6 +19,105 @@
#define PPE_RX_FIFO_CFG_INC 4
#define PPE_RX_FIFO_CFG_THRSH GENMASK(2, 0)

+#define PPE_DROP_CNT 0xb024
+#define PPE_DROP_CNT_NUM 8
+#define PPE_DROP_CNT_INC 4
+#define PPE_DROP_CNT_PKT_CNT GENMASK(31, 0)
+
+#define PPE_DROP_STAT 0xe000
+#define PPE_DROP_STAT_NUM 30
+#define PPE_DROP_STAT_INC 0x10
+#define PPE_DROP_STAT_PKT_CNT GENMASK(31, 0)
+
+/* BM port drop counter */
+struct ppe_drop_stat {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_drop_stat_u {
+ u32 val[3];
+ struct ppe_drop_stat bf;
+};
+
+#define PPE_EPE_DBG_IN_CNT 0x26054
+#define PPE_EPE_DBG_IN_CNT_NUM 1
+#define PPE_EPE_DBG_IN_CNT_INC 0x4
+
+#define PPE_EPE_DBG_OUT_CNT 0x26070
+#define PPE_EPE_DBG_OUT_CNT_NUM 1
+#define PPE_EPE_DBG_OUT_CNT_INC 0x4
+
+#define PPE_EG_VSI_COUNTER_TBL 0x41000
+#define PPE_EG_VSI_COUNTER_TBL_NUM 64
+#define PPE_EG_VSI_COUNTER_TBL_INC 0x10
+
+/* Egress VSI counter */
+struct ppe_eg_vsi_cnt_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_eg_vsi_cnt_tbl_u {
+ u32 val[3];
+ struct ppe_eg_vsi_cnt_tbl bf;
+};
+
+#define PPE_PORT_TX_COUNTER_TBL 0x45000
+#define PPE_PORT_TX_COUNTER_TBL_NUM 8
+#define PPE_PORT_TX_COUNTER_TBL_INC 0x10
+
+/* Port TX counter */
+struct ppe_port_tx_counter_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_port_tx_counter_tbl_u {
+ u32 val[3];
+ struct ppe_port_tx_counter_tbl bf;
+};
+
+#define PPE_VPORT_TX_COUNTER_TBL 0x47000
+#define PPE_VPORT_TX_COUNTER_TBL_NUM 256
+#define PPE_VPORT_TX_COUNTER_TBL_INC 0x10
+
+/* Virtual port TX counter */
+struct ppe_vport_tx_counter_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_vport_tx_counter_tbl_u {
+ u32 val[3];
+ struct ppe_vport_tx_counter_tbl bf;
+};
+
+#define PPE_QUEUE_TX_COUNTER_TBL 0x4a000
+#define PPE_QUEUE_TX_COUNTER_TBL_NUM 300
+#define PPE_QUEUE_TX_COUNTER_TBL_INC 0x10
+
+/* Queue TX counter */
+struct ppe_queue_tx_counter_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_queue_tx_counter_tbl_u {
+ u32 val[3];
+ struct ppe_queue_tx_counter_tbl bf;
+};
+
#define PPE_RSS_HASH_MASK 0xb4318
#define PPE_RSS_HASH_MASK_NUM 1
#define PPE_RSS_HASH_MASK_INC 4
@@ -196,6 +295,143 @@ union ppe_mru_mtu_ctrl_cfg_u {
#define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30)
#define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31)

+#define PPE_PORT_RX_CNT_TBL 0x150000
+#define PPE_PORT_RX_CNT_TBL_NUM 256
+#define PPE_PORT_RX_CNT_TBL_INC 0x20
+
+/* Port RX counter */
+struct ppe_port_rx_cnt_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ drop_pkt_cnt_0:24;
+ u32 drop_pkt_cnt_1:8,
+ drop_byte_cnt_0:24;
+ u32 drop_byte_cnt_1:16,
+ res0:16;
+};
+
+union ppe_port_rx_cnt_tbl_u {
+ u32 val[5];
+ struct ppe_port_rx_cnt_tbl bf;
+};
+
+#define PPE_PHY_PORT_RX_CNT_TBL 0x156000
+#define PPE_PHY_PORT_RX_CNT_TBL_NUM 8
+#define PPE_PHY_PORT_RX_CNT_TBL_INC 0x20
+
+/* Physical port RX and RX drop counter */
+struct ppe_phy_port_rx_cnt_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ drop_pkt_cnt_0:24;
+ u32 drop_pkt_cnt_1:8,
+ drop_byte_cnt_0:24;
+ u32 drop_byte_cnt_1:16,
+ res0:16;
+};
+
+union ppe_phy_port_rx_cnt_tbl_u {
+ u32 val[5];
+ struct ppe_phy_port_rx_cnt_tbl bf;
+};
+
+#define PPE_DROP_CPU_CNT_TBL 0x160000
+#define PPE_DROP_CPU_CNT_TBL_NUM 1280
+#define PPE_DROP_CPU_CNT_TBL_INC 0x10
+
+/* Counter for packets destined to the CPU port */
+struct ppe_drop_cpu_cnt {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_drop_cpu_cnt_u {
+ u32 val[3];
+ struct ppe_drop_cpu_cnt bf;
+};
+
+#define PPE_VLAN_CNT_TBL 0x178000
+#define PPE_VLAN_CNT_TBL_NUM 64
+#define PPE_VLAN_CNT_TBL_INC 0x10
+
+/* VLAN counter */
+struct ppe_vlan_cnt {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_vlan_cnt_u {
+ u32 val[3];
+ struct ppe_vlan_cnt bf;
+};
+
+#define PPE_PRE_L2_CNT_TBL 0x17c000
+#define PPE_PRE_L2_CNT_TBL_NUM 64
+#define PPE_PRE_L2_CNT_TBL_INC 0x20
+
+/* PPE pre-L2 counter */
+struct ppe_pre_l2_cnt_tbl {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ drop_pkt_cnt_0:24;
+ u32 drop_pkt_cnt_1:8,
+ drop_byte_cnt_0:24;
+ u32 drop_byte_cnt_1:16,
+ res0:16;
+};
+
+union ppe_pre_l2_cnt_tbl_u {
+ u32 val[5];
+ struct ppe_pre_l2_cnt_tbl bf;
+};
+
+#define PPE_PORT_TX_DROP_CNT_TBL 0x17d000
+#define PPE_PORT_TX_DROP_CNT_TBL_NUM 8
+#define PPE_PORT_TX_DROP_CNT_TBL_INC 0x10
+
+/* Port TX drop counter */
+struct ppe_port_tx_drop_cnt {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_port_tx_drop_u {
+ u32 val[3];
+ struct ppe_port_tx_drop_cnt bf;
+};
+
+#define PPE_VPORT_TX_DROP_CNT_TBL 0x17e000
+#define PPE_VPORT_TX_DROP_CNT_TBL_NUM 256
+#define PPE_VPORT_TX_DROP_CNT_TBL_INC 0x10
+
+/* Virtual port TX drop counter */
+struct ppe_vport_tx_drop_cnt {
+ u32 pkt_cnt;
+ u32 byte_cnt_0;
+ u32 byte_cnt_1:8,
+ res0:24;
+};
+
+union ppe_vport_tx_drop_u {
+ u32 val[3];
+ struct ppe_vport_tx_drop_cnt bf;
+};
+
+#define PPE_TPR_PKT_CNT 0x1d0080
+#define PPE_IPR_PKT_CNT 0x1e0080
+#define PPE_IPR_PKT_CNT_NUM 8
+#define PPE_IPR_PKT_CNT_INC 4
+#define PPE_IPR_PKT_CNT_PKT_CNT GENMASK(31, 0)
+
#define PPE_TL_SERVICE_TBL 0x306000
#define PPE_TL_SERVICE_TBL_NUM 256
#define PPE_TL_SERVICE_TBL_INC 4
@@ -318,6 +554,16 @@ union ppe_ring_q_map_cfg_u {
#define PPE_BM_PORT_GROUP_ID_INC 4
#define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID GENMASK(1, 0)

+#define PPE_BM_USED_CNT 0x6001c0
+#define PPE_BM_USED_CNT_NUM 15
+#define PPE_BM_USED_CNT_INC 0x4
+#define PPE_BM_USED_CNT_VAL GENMASK(10, 0)
+
+#define PPE_BM_REACT_CNT 0x600240
+#define PPE_BM_REACT_CNT_NUM 15
+#define PPE_BM_REACT_CNT_INC 0x4
+#define PPE_BM_REACT_CNT_VAL GENMASK(8, 0)
+
#define PPE_BM_SHARED_GROUP_CFG 0x600290
#define PPE_BM_SHARED_GROUP_CFG_NUM 4
#define PPE_BM_SHARED_GROUP_CFG_INC 4
@@ -456,6 +702,16 @@ union ppe_ac_grp_cfg_u {
struct ppe_ac_grp_cfg bf;
};

+#define PPE_AC_UNI_QUEUE_CNT_TBL 0x84e000
+#define PPE_AC_UNI_QUEUE_CNT_TBL_NUM 256
+#define PPE_AC_UNI_QUEUE_CNT_TBL_INC 0x10
+#define PPE_AC_UNI_QUEUE_CNT_TBL_PEND_CNT GENMASK(12, 0)
+
+#define PPE_AC_MUL_QUEUE_CNT_TBL 0x852000
+#define PPE_AC_MUL_QUEUE_CNT_TBL_NUM 44
+#define PPE_AC_MUL_QUEUE_CNT_TBL_INC 0x10
+#define PPE_AC_MUL_QUEUE_CNT_TBL_PEND_CNT GENMASK(12, 0)
+
#define PPE_ENQ_OPR_TBL 0x85c000
#define PPE_ENQ_OPR_TBL_NUM 300
#define PPE_ENQ_OPR_TBL_INC 0x10
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
index 70ee192d9ef0..268109c823ad 100644
--- a/include/linux/soc/qcom/ppe.h
+++ b/include/linux/soc/qcom/ppe.h
@@ -17,6 +17,7 @@ struct ppe_device {
struct device *dev;
struct regmap *regmap;
struct ppe_device_ops *ppe_ops;
+ struct dentry *debugfs_root;
bool is_ppe_probed;
void *ppe_priv;
};
--
2.42.0


2024-01-10 11:50:26

by Luo Jie

Subject: [PATCH net-next 17/20] net: ethernet: qualcomm: Add PPE UNIPHY support for phylink

From: Lei Wei <[email protected]>

This patch adds support for PPE UNIPHY initialization and for the
UNIPHY PCS operations used by phylink.

The PPE supports a maximum of 6 GMAC or XGMAC ports, which can be
connected to a maximum of 3 UNIPHYs. The UNIPHY registers a raw clock
provider that feeds the NSSCC clocks, which in turn provide the
different clocks to the PPE ports for the different link speeds.

Signed-off-by: Lei Wei <[email protected]>
Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/Makefile | 2 +-
drivers/net/ethernet/qualcomm/ppe/ppe.c | 25 +
drivers/net/ethernet/qualcomm/ppe/ppe.h | 2 +
.../net/ethernet/qualcomm/ppe/ppe_uniphy.c | 789 ++++++++++++++++++
.../net/ethernet/qualcomm/ppe/ppe_uniphy.h | 227 +++++
include/linux/soc/qcom/ppe.h | 1 +
6 files changed, 1045 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.c
create mode 100644 drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.h

diff --git a/drivers/net/ethernet/qualcomm/ppe/Makefile b/drivers/net/ethernet/qualcomm/ppe/Makefile
index 516ea23443ab..487f62d5e38c 100644
--- a/drivers/net/ethernet/qualcomm/ppe/Makefile
+++ b/drivers/net/ethernet/qualcomm/ppe/Makefile
@@ -4,4 +4,4 @@
#

obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
-qcom-ppe-objs := ppe.o ppe_ops.o ppe_debugfs.o
+qcom-ppe-objs := ppe.o ppe_ops.o ppe_debugfs.o ppe_uniphy.o
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 04f80589c05b..21040efe71fc 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -18,6 +18,7 @@
#include "ppe_regs.h"
#include "ppe_ops.h"
#include "ppe_debugfs.h"
+#include "ppe_uniphy.h"

#define PPE_SCHEDULER_PORT_NUM 8
#define MPPE_SCHEDULER_PORT_NUM 3
@@ -176,6 +177,26 @@ int ppe_type_get(struct ppe_device *ppe_dev)
return ppe_dev_priv->ppe_type;
}

+struct clk **ppe_clock_get(struct ppe_device *ppe_dev)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+
+ if (!ppe_dev_priv)
+ return NULL;
+
+ return ppe_dev_priv->clk;
+}
+
+struct reset_control **ppe_reset_get(struct ppe_device *ppe_dev)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+
+ if (!ppe_dev_priv)
+ return NULL;
+
+ return ppe_dev_priv->rst;
+}
+
static int ppe_clock_set_enable(struct ppe_device *ppe_dev,
enum ppe_clk_id clk_id, unsigned long rate)
{
@@ -1405,6 +1426,10 @@ static int qcom_ppe_probe(struct platform_device *pdev)
ret,
"ppe device hw init failed\n");

+ ppe_dev->uniphy = ppe_uniphy_setup(pdev);
+ if (IS_ERR(ppe_dev->uniphy))
+ return dev_err_probe(&pdev->dev, PTR_ERR(ppe_dev->uniphy), "ppe uniphy initialization failed\n");
+
ppe_dev->ppe_ops = &qcom_ppe_ops;
ppe_dev->is_ppe_probed = true;
ppe_debugfs_setup(ppe_dev);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 828d467540c9..45b70f47cd21 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -173,6 +173,8 @@ struct ppe_scheduler_port_resource {
};

int ppe_type_get(struct ppe_device *ppe_dev);
+struct clk **ppe_clock_get(struct ppe_device *ppe_dev);
+struct reset_control **ppe_reset_get(struct ppe_device *ppe_dev);

int ppe_write(struct ppe_device *ppe_dev, u32 reg, unsigned int val);
int ppe_read(struct ppe_device *ppe_dev, u32 reg, unsigned int *val);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.c b/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.c
new file mode 100644
index 000000000000..3a2b6fc77a9c
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.c
@@ -0,0 +1,789 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE UNIPHY clock registers and UNIPHY PCS operations for phylink.
+ *
+ * The PPE UNIPHY block is used by the PPE to connect the PPE MAC with
+ * external PHYs, SFPs or switches (fixed link). The PPE UNIPHY block
+ * includes the serdes, the PCS or XPCS, and the control logic to
+ * support the PPE ports working in different interface modes and at
+ * different link speeds.
+ *
+ * The PPE UNIPHY block provides the raw clock as the parent clock of
+ * the NSSCC clocks, and the NSSCC clocks can be configured to generate
+ * the Tx and Rx clocks for the PPE ports at different link speeds.
+ */
+
+#include <linux/clk.h>
+#include <linux/reset.h>
+#include <linux/clk-provider.h>
+#include <linux/soc/qcom/ppe.h>
+#include "ppe.h"
+#include "ppe_uniphy.h"
+
+/* UNIPHY clock direction */
+enum {
+ UNIPHY_RX = 0,
+ UNIPHY_TX,
+};
+
+/* UNIPHY clock data type */
+struct clk_uniphy {
+ struct clk_hw hw;
+ u8 index;
+ u8 dir;
+ unsigned long rate;
+};
+
+#define to_clk_uniphy(_hw) container_of(_hw, struct clk_uniphy, hw)
+/* UNIPHY clock rate */
+#define UNIPHY_CLK_RATE_125M 125000000
+#define UNIPHY_CLK_RATE_312P5M 312500000
+
+static void ppe_uniphy_write(struct ppe_uniphy *uniphy, u32 val, u32 reg)
+{
+ if (reg >= UNIPHY_INDIRECT_ADDR_START) {
+ writel(FIELD_GET(UNIPHY_INDIRECT_ADDR_HIGH, reg),
+ uniphy->base + UNIPHY_INDIRECT_AHB_ADDR);
+ writel(val, uniphy->base + UNIPHY_INDIRECT_DATA_ADDR(reg));
+ } else {
+ writel(val, uniphy->base + reg);
+ }
+}
+
+static u32 ppe_uniphy_read(struct ppe_uniphy *uniphy, u32 reg)
+{
+ if (reg >= UNIPHY_INDIRECT_ADDR_START) {
+ writel(FIELD_GET(UNIPHY_INDIRECT_ADDR_HIGH, reg),
+ uniphy->base + UNIPHY_INDIRECT_AHB_ADDR);
+ return readl(uniphy->base + UNIPHY_INDIRECT_DATA_ADDR(reg));
+ } else {
+ return readl(uniphy->base + reg);
+ }
+}
+
+static int ppe_uniphy_mask(struct ppe_uniphy *uniphy, u32 reg, u32 mask, u32 set)
+{
+ u32 val;
+
+ val = ppe_uniphy_read(uniphy, reg);
+ val &= ~mask;
+ val |= set;
+ ppe_uniphy_write(uniphy, val, reg);
+
+ return 0;
+}
+
+static unsigned long clk_uniphy_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct clk_uniphy *uniphy = to_clk_uniphy(hw);
+
+ return uniphy->rate;
+}
+
+static int clk_uniphy_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
+{
+ if (req->rate <= UNIPHY_CLK_RATE_125M)
+ req->rate = UNIPHY_CLK_RATE_125M;
+ else
+ req->rate = UNIPHY_CLK_RATE_312P5M;
+
+ return 0;
+}
+
+static int clk_uniphy_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct clk_uniphy *uniphy = to_clk_uniphy(hw);
+
+ if (rate != UNIPHY_CLK_RATE_125M && rate != UNIPHY_CLK_RATE_312P5M)
+ return -EINVAL;
+
+ uniphy->rate = rate;
+
+ return 0;
+}
+
+static const struct clk_ops clk_uniphy_ops = {
+ .recalc_rate = clk_uniphy_recalc_rate,
+ .determine_rate = clk_uniphy_determine_rate,
+ .set_rate = clk_uniphy_set_rate,
+};
+
+static struct clk_uniphy uniphy0_gcc_rx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy0_gcc_rx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 0,
+ .dir = UNIPHY_RX,
+ .rate = UNIPHY_CLK_RATE_125M,
+};
+
+static struct clk_uniphy uniphy0_gcc_tx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy0_gcc_tx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 0,
+ .dir = UNIPHY_TX,
+ .rate = UNIPHY_CLK_RATE_125M,
+};
+
+static struct clk_uniphy uniphy1_gcc_rx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy1_gcc_rx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 1,
+ .dir = UNIPHY_RX,
+ .rate = UNIPHY_CLK_RATE_312P5M,
+};
+
+static struct clk_uniphy uniphy1_gcc_tx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy1_gcc_tx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 1,
+ .dir = UNIPHY_TX,
+ .rate = UNIPHY_CLK_RATE_312P5M,
+};
+
+static struct clk_uniphy uniphy2_gcc_rx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy2_gcc_rx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 2,
+ .dir = UNIPHY_RX,
+ .rate = UNIPHY_CLK_RATE_312P5M,
+};
+
+static struct clk_uniphy uniphy2_gcc_tx_clk = {
+ .hw.init = &(struct clk_init_data){
+ .name = "uniphy2_gcc_tx_clk",
+ .ops = &clk_uniphy_ops,
+ },
+ .index = 2,
+ .dir = UNIPHY_TX,
+ .rate = UNIPHY_CLK_RATE_312P5M,
+};
+
+static struct clk_hw *uniphy_raw_clks[] = {
+ &uniphy0_gcc_rx_clk.hw, &uniphy0_gcc_tx_clk.hw,
+ &uniphy1_gcc_rx_clk.hw, &uniphy1_gcc_tx_clk.hw,
+ &uniphy2_gcc_rx_clk.hw, &uniphy2_gcc_tx_clk.hw,
+};
+
+int ppe_uniphy_port_gcc_clock_en_set(struct ppe_uniphy *uniphy, int port, bool enable)
+{
+ struct clk **clock = ppe_clock_get(uniphy->ppe_dev);
+ enum ppe_clk_id rx_id, tx_id;
+ int err = 0;
+
+ rx_id = PPE_UNIPHY_PORT1_RX_CLK + ((port - 1) << 1);
+ tx_id = PPE_UNIPHY_PORT1_TX_CLK + ((port - 1) << 1);
+
+ if (enable) {
+ if (!IS_ERR(clock[rx_id])) {
+ err = clk_prepare_enable(clock[rx_id]);
+ if (err) {
+ dev_err(uniphy->ppe_dev->dev,
+ "Failed to enable uniphy port %d rx_clk(%d)\n",
+ port, rx_id);
+ return err;
+ }
+ }
+
+ if (!IS_ERR(clock[tx_id])) {
+ err = clk_prepare_enable(clock[tx_id]);
+ if (err) {
+ dev_err(uniphy->ppe_dev->dev,
+ "Failed to enable uniphy port %d tx_clk(%d)\n",
+ port, tx_id);
+ if (!IS_ERR(clock[rx_id]))
+ clk_disable_unprepare(clock[rx_id]);
+ return err;
+ }
+ }
+ } else {
+ clk_disable_unprepare(clock[rx_id]);
+ clk_disable_unprepare(clock[tx_id]);
+ }
+
+ return 0;
+}
+
+static int ppe_uniphy_interface_gcc_clock_en_set(struct ppe_uniphy *uniphy, bool enable)
+{
+ int ppe_type = ppe_type_get(uniphy->ppe_dev);
+ int port = 0;
+
+ switch (uniphy->index) {
+ case 2:
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, PPE_PORT6, enable);
+ break;
+ case 1:
+ if (ppe_type == PPE_TYPE_APPE)
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, PPE_PORT5, enable);
+ else if (ppe_type == PPE_TYPE_MPPE)
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, PPE_PORT2, enable);
+ break;
+ case 0:
+ if (ppe_type == PPE_TYPE_APPE) {
+ for (port = PPE_PORT1; port <= PPE_PORT4; port++)
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, port, enable);
+ } else if (ppe_type == PPE_TYPE_MPPE) {
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, PPE_PORT1, enable);
+ }
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int ppe_uniphy_gcc_xpcs_reset(struct ppe_uniphy *uniphy, bool enable)
+{
+ struct reset_control **reset = ppe_reset_get(uniphy->ppe_dev);
+ enum ppe_rst_id id = PPE_UNIPHY0_XPCS_RST + uniphy->index;
+
+ if (IS_ERR(reset[id]))
+ return PTR_ERR(reset[id]);
+
+ if (enable)
+ return reset_control_assert(reset[id]);
+ else
+ return reset_control_deassert(reset[id]);
+}
+
+static int ppe_uniphy_gcc_software_reset(struct ppe_uniphy *uniphy)
+{
+ struct reset_control **reset = ppe_reset_get(uniphy->ppe_dev);
+ int ppe_type = ppe_type_get(uniphy->ppe_dev);
+ unsigned int index = uniphy->index;
+ int err = 0, port = 0;
+
+ /* Assert uniphy sys reset control */
+ if (!IS_ERR(reset[PPE_UNIPHY0_SYS_RST + index])) {
+ err = reset_control_assert(reset[PPE_UNIPHY0_SYS_RST + index]);
+ if (err)
+ return err;
+ }
+
+ /* Assert uniphy port reset control */
+ switch (ppe_type) {
+ case PPE_TYPE_APPE:
+ if (index == 0) {
+ for (port = PPE_PORT1; port <= PPE_PORT4; port++) {
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_DIS + port - 1])) {
+ err = reset_control_assert(reset[PPE_UNIPHY_PORT1_DIS +
+ port - 1]);
+ if (err)
+ return err;
+ }
+ }
+ } else {
+ if (!IS_ERR(reset[PPE_UNIPHY0_SOFT_RST + index])) {
+ err = reset_control_assert(reset[PPE_UNIPHY0_SOFT_RST + index]);
+ if (err)
+ return err;
+ }
+ }
+ break;
+ case PPE_TYPE_MPPE:
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_RX_RST + (index << 1)])) {
+ err = reset_control_assert(reset[PPE_UNIPHY_PORT1_RX_RST + (index << 1)]);
+ if (err)
+ return err;
+ }
+
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_TX_RST + (index << 1)])) {
+ err = reset_control_assert(reset[PPE_UNIPHY_PORT1_TX_RST + (index << 1)]);
+ if (err)
+ return err;
+ }
+ break;
+ default:
+ break;
+ }
+ fsleep(100000);
+
+ /* Deassert uniphy sys reset control */
+ if (!IS_ERR(reset[PPE_UNIPHY0_SYS_RST + index])) {
+ err = reset_control_deassert(reset[PPE_UNIPHY0_SYS_RST + index]);
+ if (err)
+ return err;
+ }
+
+ /* Deassert uniphy port reset control */
+ switch (ppe_type) {
+ case PPE_TYPE_APPE:
+ if (index == 0) {
+ for (port = PPE_PORT1; port <= PPE_PORT4; port++) {
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_DIS + port - 1])) {
+ err = reset_control_deassert(reset[PPE_UNIPHY_PORT1_DIS +
+ port - 1]);
+ if (err)
+ return err;
+ }
+ }
+ } else {
+ if (!IS_ERR(reset[PPE_UNIPHY0_SOFT_RST + index])) {
+ err = reset_control_deassert(reset[PPE_UNIPHY0_SOFT_RST + index]);
+ if (err)
+ return err;
+ }
+ }
+ break;
+ case PPE_TYPE_MPPE:
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_RX_RST + (index << 1)])) {
+ err = reset_control_deassert(reset[PPE_UNIPHY_PORT1_RX_RST + (index << 1)]);
+ if (err)
+ return err;
+ }
+
+ if (!IS_ERR(reset[PPE_UNIPHY_PORT1_TX_RST + (index << 1)])) {
+ err = reset_control_deassert(reset[PPE_UNIPHY_PORT1_TX_RST + (index << 1)]);
+ if (err)
+ return err;
+ }
+ break;
+ default:
+ break;
+ }
+ fsleep(100000);
+
+ return err;
+}
+
+int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy, int port)
+{
+ u32 reg, val;
+ int channel, ret;
+
+ if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
+ uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
+ /* Only uniphy0 may have multiple channels */
+ channel = (uniphy->index == 0) ? (port - 1) : 0;
+ reg = (channel == 0) ? VR_MII_AN_INTR_STS_ADDR :
+ VR_MII_AN_INTR_STS_CHANNEL_ADDR(channel);
+
+ /* Wait for auto-negotiation to complete */
+ ret = read_poll_timeout(ppe_uniphy_read, val,
+ (val & CL37_ANCMPLT_INTR),
+ 1000, 100000, true,
+ uniphy, reg);
+ if (ret) {
+ dev_err(uniphy->ppe_dev->dev,
+ "uniphy %d auto negotiation timeout\n", uniphy->index);
+ return ret;
+ }
+
+ /* Clear auto negotiation complete interrupt */
+ ppe_uniphy_mask(uniphy, reg, CL37_ANCMPLT_INTR, 0);
+ }
+
+ return 0;
+}
+
+int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy, int port, int speed)
+{
+ u32 reg, val;
+ int channel;
+
+ if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
+ uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
+ /* Only uniphy0 may have multiple channels */
+ channel = (uniphy->index == 0) ? (port - 1) : 0;
+
+ reg = (channel == 0) ? SR_MII_CTRL_ADDR :
+ SR_MII_CTRL_CHANNEL_ADDR(channel);
+
+ switch (speed) {
+ case SPEED_100:
+ val = USXGMII_SPEED_100;
+ break;
+ case SPEED_1000:
+ val = USXGMII_SPEED_1000;
+ break;
+ case SPEED_2500:
+ val = USXGMII_SPEED_2500;
+ break;
+ case SPEED_5000:
+ val = USXGMII_SPEED_5000;
+ break;
+ case SPEED_10000:
+ val = USXGMII_SPEED_10000;
+ break;
+ case SPEED_10:
+ val = USXGMII_SPEED_10;
+ break;
+ default:
+ val = 0;
+ break;
+ }
+
+ ppe_uniphy_mask(uniphy, reg, USXGMII_SPEED_MASK, val);
+ }
+
+ return 0;
+}
+
+int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy, int port, int duplex)
+{
+ u32 reg;
+ int channel;
+
+ if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
+ uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
+ /* Only uniphy0 may have multiple channels */
+ channel = (uniphy->index == 0) ? (port - 1) : 0;
+
+ reg = (channel == 0) ? SR_MII_CTRL_ADDR :
+ SR_MII_CTRL_CHANNEL_ADDR(channel);
+
+ ppe_uniphy_mask(uniphy, reg, USXGMII_DUPLEX_FULL,
+ (duplex == DUPLEX_FULL) ? USXGMII_DUPLEX_FULL : 0);
+ }
+
+ return 0;
+}
+
+int ppe_uniphy_adapter_reset(struct ppe_uniphy *uniphy, int port)
+{
+ int channel;
+
+ /* Only uniphy0 may have multiple channels */
+ channel = (uniphy->index == 0) ? (port - 1) : 0;
+
+ switch (uniphy->interface) {
+ case PHY_INTERFACE_MODE_USXGMII:
+ case PHY_INTERFACE_MODE_QUSGMII:
+ if (channel == 0)
+ ppe_uniphy_mask(uniphy,
+ VR_XS_PCS_DIG_CTRL1_ADDR,
+ USRA_RST, USRA_RST);
+ else
+ ppe_uniphy_mask(uniphy,
+ VR_MII_DIG_CTRL1_CHANNEL_ADDR(channel),
+ CHANNEL_USRA_RST, CHANNEL_USRA_RST);
+ break;
+ case PHY_INTERFACE_MODE_SGMII:
+ case PHY_INTERFACE_MODE_1000BASEX:
+ case PHY_INTERFACE_MODE_2500BASEX:
+ case PHY_INTERFACE_MODE_QSGMII:
+ ppe_uniphy_mask(uniphy,
+ UNIPHY_CHANNEL_INPUT_OUTPUT_4_ADDR(channel),
+ NEWADDEDFROMHERE_CH_ADP_SW_RSTN, 0);
+ ppe_uniphy_mask(uniphy,
+ UNIPHY_CHANNEL_INPUT_OUTPUT_4_ADDR(channel),
+ NEWADDEDFROMHERE_CH_ADP_SW_RSTN,
+ NEWADDEDFROMHERE_CH_ADP_SW_RSTN);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int ppe_pcs_config(struct phylink_pcs *pcs, unsigned int mode,
+ phy_interface_t interface,
+ const unsigned long *advertising,
+ bool permit_pause_to_mac)
+{
+ struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
+ unsigned long rate = 0;
+ int ret, channel = 0;
+ u32 val = 0;
+
+ if (uniphy->interface == interface)
+ return 0;
+
+ uniphy->interface = interface;
+
+ /* Disable gcc uniphy interface clock */
+ ppe_uniphy_interface_gcc_clock_en_set(uniphy, false);
+
+ /* Assert gcc uniphy xpcs reset control */
+ ppe_uniphy_gcc_xpcs_reset(uniphy, true);
+
+ /* Configure uniphy mode */
+ switch (interface) {
+ case PHY_INTERFACE_MODE_USXGMII:
+ case PHY_INTERFACE_MODE_10GBASER:
+ case PHY_INTERFACE_MODE_QUSGMII:
+ rate = UNIPHY_CLK_RATE_312P5M;
+ ppe_uniphy_mask(uniphy, UNIPHY_MODE_CTRL_ADDR,
+ USXGMII_MODE_CTRL_MASK, USXGMII_MODE_CTRL);
+ break;
+ case PHY_INTERFACE_MODE_2500BASEX:
+ rate = UNIPHY_CLK_RATE_312P5M;
+ ppe_uniphy_mask(uniphy, UNIPHY_MODE_CTRL_ADDR,
+ SGMIIPLUS_MODE_CTRL_MASK, SGMIIPLUS_MODE_CTRL);
+ break;
+ case PHY_INTERFACE_MODE_SGMII:
+ case PHY_INTERFACE_MODE_1000BASEX:
+ rate = UNIPHY_CLK_RATE_125M;
+ ppe_uniphy_mask(uniphy, UNIPHY_MODE_CTRL_ADDR,
+ SGMII_MODE_CTRL_MASK, SGMII_MODE_CTRL);
+ break;
+ case PHY_INTERFACE_MODE_QSGMII:
+ rate = UNIPHY_CLK_RATE_125M;
+ ppe_uniphy_mask(uniphy, UNIPHY_MODE_CTRL_ADDR,
+ QSGMII_MODE_CTRL_MASK, QSGMII_MODE_CTRL);
+ break;
+ default:
+ break;
+ }
+
+ if (interface == PHY_INTERFACE_MODE_QUSGMII)
+ ppe_uniphy_mask(uniphy, UNIPHY_QP_USXG_OPITON1_ADDR,
+ GMII_SRC_SEL, GMII_SRC_SEL);
+
+ if (interface == PHY_INTERFACE_MODE_10GBASER)
+ ppe_uniphy_mask(uniphy, UNIPHY_LINK_DETECT_ADDR,
+ DETECT_LOS_FROM_SFP, UNIPHY_10GR_LINK_LOSS);
+
+ /* Reset uniphy gcc software reset control */
+ ppe_uniphy_gcc_software_reset(uniphy);
+
+ /* Wait for uniphy calibration to complete */
+ ret = read_poll_timeout(ppe_uniphy_read, val,
+ (val & MMD1_REG_CALIBRATION_DONE_REG),
+ 1000, 100000, true,
+ uniphy, UNIPHY_OFFSET_CALIB_4_ADDR);
+ if (ret) {
+ dev_err(uniphy->ppe_dev->dev,
+ "uniphy %d calibration timeout\n", uniphy->index);
+ return ret;
+ }
+
+ /* Enable gcc uniphy interface clk */
+ ppe_uniphy_interface_gcc_clock_en_set(uniphy, true);
+
+ /* Deassert gcc uniphy xpcs reset control */
+ if (interface == PHY_INTERFACE_MODE_USXGMII ||
+ interface == PHY_INTERFACE_MODE_10GBASER ||
+ interface == PHY_INTERFACE_MODE_QUSGMII)
+ ppe_uniphy_gcc_xpcs_reset(uniphy, false);
+
+ if (interface == PHY_INTERFACE_MODE_USXGMII ||
+ interface == PHY_INTERFACE_MODE_QUSGMII) {
+ /* Wait for 10GBASE-R link up */
+ ret = read_poll_timeout(ppe_uniphy_read, val,
+ (val & SR_XS_PCS_KR_STS1_PLU),
+ 1000, 100000, true,
+ uniphy, SR_XS_PCS_KR_STS1_ADDR);
+ if (ret)
+ dev_warn(uniphy->ppe_dev->dev,
+ "uniphy %d 10gr linkup timeout\n", uniphy->index);
+
+ /* Enable usxgmii */
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_DIG_CTRL1_ADDR, USXGMII_EN, USXGMII_EN);
+
+ if (interface == PHY_INTERFACE_MODE_QUSGMII) {
+ /* XPCS set quxgmii mode */
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_DIG_STS_ADDR, AM_COUNT, QUXGMII_AM_COUNT);
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_KR_CTRL_ADDR, USXG_MODE, QUXGMII_MODE);
+ /* XPCS software reset */
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_DIG_CTRL1_ADDR, VR_RST, VR_RST);
+ }
+
+ /* Enable autoneg complete interrupt and 10M/100M 8bit mii width */
+ ppe_uniphy_mask(uniphy, VR_MII_AN_CTRL_ADDR,
+ MII_AN_INTR_EN | MII_CTRL, MII_AN_INTR_EN | MII_CTRL);
+
+ if (interface == PHY_INTERFACE_MODE_QUSGMII) {
+ for (channel = 1; channel <= 3; channel++)
+ ppe_uniphy_mask(uniphy, VR_MII_AN_CTRL_CHANNEL_ADDR(channel),
+ MII_AN_INTR_EN | MII_CTRL,
+ MII_AN_INTR_EN | MII_CTRL);
+ /* Disable TICD */
+ ppe_uniphy_mask(uniphy, VR_XAUI_MODE_CTRL_ADDR, IPG_CHECK, IPG_CHECK);
+ for (channel = 1; channel <= 3; channel++)
+ ppe_uniphy_mask(uniphy, VR_XAUI_MODE_CTRL_CHANNEL_ADDR(channel),
+ IPG_CHECK, IPG_CHECK);
+ }
+
+ /* Enable autoneg ability and usxgmii 10g speed and full duplex */
+ ppe_uniphy_mask(uniphy, SR_MII_CTRL_ADDR,
+ USXGMII_SPEED_MASK | AN_ENABLE | USXGMII_DUPLEX_FULL,
+ USXGMII_SPEED_10000 | AN_ENABLE | USXGMII_DUPLEX_FULL);
+ if (interface == PHY_INTERFACE_MODE_QUSGMII) {
+ for (channel = 1; channel <= 3; channel++)
+ ppe_uniphy_mask(uniphy, SR_MII_CTRL_CHANNEL_ADDR(channel),
+ USXGMII_SPEED_MASK | AN_ENABLE |
+ USXGMII_DUPLEX_FULL,
+ USXGMII_SPEED_10000 | AN_ENABLE |
+ USXGMII_DUPLEX_FULL);
+
+ /* Enable eee transparent mode */
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_EEE_MCTRL0_ADDR,
+ MULT_FACT_100NS | SIGN_BIT,
+ FIELD_PREP(MULT_FACT_100NS, 0x1) | SIGN_BIT);
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_EEE_TXTIMER_ADDR,
+ TSL_RES | T1U_RES | TWL_RES,
+ UNIPHY_XPCS_TSL_TIMER |
+ UNIPHY_XPCS_T1U_TIMER | UNIPHY_XPCS_TWL_TIMER);
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_EEE_RXTIMER_ADDR,
+ RES_100U | TWR_RES,
+ UNIPHY_XPCS_100US_TIMER | UNIPHY_XPCS_TWR_TIMER);
+
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_EEE_MCTRL1_ADDR,
+ TRN_LPI | TRN_RXLPI, TRN_LPI | TRN_RXLPI);
+ ppe_uniphy_mask(uniphy, VR_XS_PCS_EEE_MCTRL0_ADDR,
+ LTX_EN | LRX_EN, LTX_EN | LRX_EN);
+ }
+ }
+
+ /* Set uniphy raw clk rate */
+ clk_set_rate(uniphy_raw_clks[(uniphy->index << 1) + UNIPHY_RX]->clk,
+ rate);
+ clk_set_rate(uniphy_raw_clks[(uniphy->index << 1) + UNIPHY_TX]->clk,
+ rate);
+
+ dev_info(uniphy->ppe_dev->dev,
+ "ppe pcs config uniphy index %d, interface %s\n",
+ uniphy->index, phy_modes(interface));
+
+ return 0;
+}
+
+static void ppe_pcs_get_state(struct phylink_pcs *pcs,
+ struct phylink_link_state *state)
+{
+ struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
+ u32 val;
+
+ switch (state->interface) {
+ case PHY_INTERFACE_MODE_10GBASER:
+ val = ppe_uniphy_read(uniphy, SR_XS_PCS_KR_STS1_ADDR);
+ state->link = (val & SR_XS_PCS_KR_STS1_PLU) ? 1 : 0;
+ state->duplex = DUPLEX_FULL;
+ state->speed = SPEED_10000;
+ state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
+ break;
+ case PHY_INTERFACE_MODE_2500BASEX:
+ val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
+ state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;
+ state->duplex = DUPLEX_FULL;
+ state->speed = SPEED_2500;
+ state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
+ break;
+ case PHY_INTERFACE_MODE_1000BASEX:
+ case PHY_INTERFACE_MODE_SGMII:
+ val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
+ state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;
+ state->duplex = (val & NEWADDEDFROMHERE_CH0_DUPLEX_MODE_MAC) ?
+ DUPLEX_FULL : DUPLEX_HALF;
+ if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_10M)
+ state->speed = SPEED_10;
+ else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_100M)
+ state->speed = SPEED_100;
+ else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_1000M)
+ state->speed = SPEED_1000;
+ state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
+ break;
+ default:
+ break;
+ }
+}
+
+static void ppe_pcs_an_restart(struct phylink_pcs *pcs)
+{
+}
+
+static const struct phylink_pcs_ops ppe_pcs_ops = {
+ .pcs_get_state = ppe_pcs_get_state,
+ .pcs_config = ppe_pcs_config,
+ .pcs_an_restart = ppe_pcs_an_restart,
+};
+
+static void uniphy_clk_release_provider(void *res)
+{
+ of_clk_del_provider(res);
+}
+
+struct ppe_uniphy *ppe_uniphy_setup(struct platform_device *pdev)
+{
+ struct clk_hw_onecell_data *uniphy_clk_data = NULL;
+ struct device_node *np;
+ struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
+ struct ppe_uniphy *uniphy;
+ int i, ret, clk_num = 0;
+
+ np = of_get_child_by_name(pdev->dev.of_node, "qcom-uniphy");
+ if (!np) {
+ dev_err(&pdev->dev, "Failed to find uniphy node\n");
+ return ERR_PTR(-ENODEV);
+ }
+
+ /* Register uniphy raw clock */
+ clk_num = of_property_count_strings(np, "clock-output-names");
+ if (clk_num < 0) {
+ dev_err(&pdev->dev, "%pOFn: invalid clock output count\n", np);
+ ret = clk_num;
+ goto err_node_put;
+ }
+
+ uniphy_clk_data = devm_kzalloc(&pdev->dev,
+ struct_size(uniphy_clk_data, hws, clk_num),
+ GFP_KERNEL);
+ if (!uniphy_clk_data) {
+ ret = -ENOMEM;
+ goto err_node_put;
+ }
+
+ uniphy_clk_data->num = clk_num;
+ for (i = 0; i < clk_num; i++) {
+ ret = of_property_read_string_index(np, "clock-output-names", i,
+ (const char **)&uniphy_raw_clks[i]->init->name);
+ if (ret) {
+ dev_err(&pdev->dev, "invalid clock name @ %pOFn\n", np);
+ goto err_node_put;
+ }
+
+ ret = devm_clk_hw_register(&pdev->dev, uniphy_raw_clks[i]);
+ if (ret)
+ goto err_node_put;
+ uniphy_clk_data->hws[i] = uniphy_raw_clks[i];
+ }
+
+ ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, uniphy_clk_data);
+ if (ret)
+ goto err_node_put;
+
+ ret = devm_add_action_or_reset(&pdev->dev, uniphy_clk_release_provider, np);
+ if (ret)
+ goto err_node_put;
+
+ /* Initialize each uniphy structure */
+ uniphy = devm_kcalloc(&pdev->dev, clk_num >> 1, sizeof(*uniphy), GFP_KERNEL);
+ if (!uniphy) {
+ ret = -ENOMEM;
+ goto err_node_put;
+ }
+
+ for (i = 0; i < (clk_num >> 1); i++) {
+ uniphy[i].base = devm_of_iomap(&pdev->dev, np, i, NULL);
+ if (IS_ERR(uniphy[i].base)) {
+ ret = PTR_ERR(uniphy[i].base);
+ goto err_node_put;
+ }
+ uniphy[i].index = i;
+ uniphy[i].interface = PHY_INTERFACE_MODE_NA;
+ uniphy[i].ppe_dev = ppe_dev;
+ uniphy[i].pcs.ops = &ppe_pcs_ops;
+ uniphy[i].pcs.poll = true;
+ }
+ of_node_put(np);
+ return uniphy;
+
+err_node_put:
+ of_node_put(np);
+ return ERR_PTR(ret);
+}
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.h b/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.h
new file mode 100644
index 000000000000..ec547e520937
--- /dev/null
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_uniphy.h
@@ -0,0 +1,227 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+/* PPE UNIPHY functions and UNIPHY hardware registers declarations. */
+
+#ifndef _PPE_UNIPHY_H_
+#define _PPE_UNIPHY_H_
+
+#include <linux/phylink.h>
+
+#define UNIPHY_INDIRECT_ADDR_START 0x8000
+#define UNIPHY_INDIRECT_AHB_ADDR 0x83fc
+#define UNIPHY_INDIRECT_ADDR_HIGH GENMASK(20, 8)
+#define UNIPHY_INDIRECT_ADDR_LOW GENMASK(7, 0)
+#define UNIPHY_INDIRECT_DATA_ADDR(reg) (FIELD_PREP(GENMASK(15, 10), 0x20) | \
+ FIELD_PREP(GENMASK(9, 2), \
+ FIELD_GET(UNIPHY_INDIRECT_ADDR_LOW, reg)))
+
+/* [register] UNIPHY_MISC2 */
+#define UNIPHY_MISC2_ADDR 0x218
+#define PHY_MODE GENMASK(6, 4)
+#define USXGMII_PHY_MODE (FIELD_PREP(PHY_MODE, 0x7))
+#define SGMII_PLUS_PHY_MODE (FIELD_PREP(PHY_MODE, 0x5))
+#define SGMII_PHY_MODE (FIELD_PREP(PHY_MODE, 0x3))
+
+/* [register] UNIPHY_MODE_CTRL */
+#define UNIPHY_MODE_CTRL_ADDR 0x46c
+#define NEWADDEDFROMHERE_CH0_AUTONEG_MODE BIT(0)
+#define NEWADDEDFROMHERE_CH1_CH0_SGMII BIT(1)
+#define NEWADDEDFROMHERE_CH4_CH1_0_SGMII BIT(2)
+#define NEWADDEDFROMHERE_SGMII_EVEN_LOW BIT(3)
+#define NEWADDEDFROMHERE_CH0_MODE_CTRL_25M GENMASK(6, 4)
+#define NEWADDEDFROMHERE_CH0_QSGMII_SGMII BIT(8)
+#define NEWADDEDFROMHERE_CH0_PSGMII_QSGMII BIT(9)
+#define NEWADDEDFROMHERE_SG_MODE BIT(10)
+#define NEWADDEDFROMHERE_SGPLUS_MODE BIT(11)
+#define NEWADDEDFROMHERE_XPCS_MODE BIT(12)
+#define NEWADDEDFROMHERE_USXG_EN BIT(13)
+#define NEWADDEDFROMHERE_SW_V17_V18 BIT(15)
+#define USXGMII_MODE_CTRL_MASK GENMASK(12, 8)
+#define USXGMII_MODE_CTRL NEWADDEDFROMHERE_XPCS_MODE
+#define TEN_GR_MODE_CTRL_MASK GENMASK(12, 8)
+#define TEN_GR_MODE_CTRL NEWADDEDFROMHERE_XPCS_MODE
+#define QUSGMII_MODE_CTRL_MASK GENMASK(12, 8)
+#define QUSGMII_MODE_CTRL NEWADDEDFROMHERE_XPCS_MODE
+#define SGMIIPLUS_MODE_CTRL_MASK (NEWADDEDFROMHERE_CH0_AUTONEG_MODE | \
+ GENMASK(12, 8))
+#define SGMIIPLUS_MODE_CTRL NEWADDEDFROMHERE_SGPLUS_MODE
+#define QSGMII_MODE_CTRL_MASK (NEWADDEDFROMHERE_CH0_AUTONEG_MODE | \
+ GENMASK(12, 8))
+#define QSGMII_MODE_CTRL NEWADDEDFROMHERE_CH0_PSGMII_QSGMII
+#define SGMII_MODE_CTRL_MASK (NEWADDEDFROMHERE_CH0_AUTONEG_MODE | \
+ GENMASK(12, 8))
+#define SGMII_MODE_CTRL NEWADDEDFROMHERE_SG_MODE
+
+/* [register] UNIPHY_CHANNEL_INPUT_OUTPUT_4 */
+#define UNIPHY_CHANNEL0_INPUT_OUTPUT_4_ADDR 0x480
+#define NEWADDEDFROMHERE_CH0_ADP_SW_RSTN BIT(11)
+#define UNIPHY_CHANNEL1_INPUT_OUTPUT_4_ADDR 0x498
+#define NEWADDEDFROMHERE_CH1_ADP_SW_RSTN BIT(11)
+#define UNIPHY_CHANNEL2_INPUT_OUTPUT_4_ADDR 0x4b0
+#define NEWADDEDFROMHERE_CH2_ADP_SW_RSTN BIT(11)
+#define UNIPHY_CHANNEL3_INPUT_OUTPUT_4_ADDR 0x4c8
+#define NEWADDEDFROMHERE_CH3_ADP_SW_RSTN BIT(11)
+#define UNIPHY_CHANNEL4_INPUT_OUTPUT_4_ADDR 0x4e0
+#define NEWADDEDFROMHERE_CH4_ADP_SW_RSTN BIT(11)
+#define UNIPHY_CHANNEL_INPUT_OUTPUT_4_ADDR(x) (0x480 + 0x18 * (x))
+#define NEWADDEDFROMHERE_CH_ADP_SW_RSTN BIT(11)
+
+/* [register] UNIPHY_CHANNEL_INPUT_OUTPUT_6 */
+#define UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR 0x488
+#define NEWADDEDFROMHERE_CH0_LINK_MAC BIT(7)
+#define NEWADDEDFROMHERE_CH0_DUPLEX_MODE_MAC BIT(6)
+#define NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC GENMASK(5, 4)
+#define NEWADDEDFROMHERE_CH0_PAUSE_MAC BIT(3)
+#define NEWADDEDFROMHERE_CH0_ASYM_PAUSE_MAC BIT(2)
+#define NEWADDEDFROMHERE_CH0_TX_PAUSE_EN_MAC BIT(1)
+#define NEWADDEDFROMHERE_CH0_RX_PAUSE_EN_MAC BIT(0)
+#define UNIPHY_SPEED_10M 0
+#define UNIPHY_SPEED_100M 1
+#define UNIPHY_SPEED_1000M 2
+
+/* [register] UNIPHY_INSTANCE_LINK_DETECT */
+#define UNIPHY_LINK_DETECT_ADDR 0x570
+#define DETECT_LOS_FROM_SFP GENMASK(8, 6)
+#define UNIPHY_10GR_LINK_LOSS (FIELD_PREP(DETECT_LOS_FROM_SFP, 0x7))
+
+/* [register] UNIPHY_QP_USXG_OPITON1 */
+#define UNIPHY_QP_USXG_OPITON1_ADDR 0x584
+#define GMII_SRC_SEL BIT(0)
+
+/* [register] UNIPHY_OFFSET_CALIB_4 */
+#define UNIPHY_OFFSET_CALIB_4_ADDR 0x1e0
+#define MMD1_REG_CALIBRATION_DONE_REG BIT(7)
+#define UNIPHY_CALIBRATION_DONE 0x1
+
+/* [register] UNIPHY_PLL_RESET */
+#define UNIPHY_PLL_RESET_ADDR 0x780
+#define UPHY_ANA_EN_SW_RSTN BIT(6)
+
+/* [register] SR_XS_PCS_KR_STS1 */
+#define SR_XS_PCS_KR_STS1_ADDR 0x30020
+#define SR_XS_PCS_KR_STS1_PLU BIT(12)
+
+/* [register] VR_XS_PCS_DIG_CTRL1 */
+#define VR_XS_PCS_DIG_CTRL1_ADDR 0x38000
+#define USXGMII_EN BIT(9)
+#define USRA_RST BIT(10)
+#define VR_RST BIT(15)
+
+/* [register] VR_XS_PCS_EEE_MCTRL0 */
+#define VR_XS_PCS_EEE_MCTRL0_ADDR 0x38006
+#define LTX_EN BIT(0)
+#define LRX_EN BIT(1)
+#define SIGN_BIT BIT(6)
+#define MULT_FACT_100NS GENMASK(11, 8)
+
+/* [register] VR_XS_PCS_KR_CTRL */
+#define VR_XS_PCS_KR_CTRL_ADDR 0x38007
+#define USXG_MODE GENMASK(12, 10)
+#define QUXGMII_MODE (FIELD_PREP(USXG_MODE, 0x5))
+
+/* [register] VR_XS_PCS_EEE_TXTIMER */
+#define VR_XS_PCS_EEE_TXTIMER_ADDR 0x38008
+#define TSL_RES GENMASK(5, 0)
+#define T1U_RES GENMASK(7, 6)
+#define TWL_RES GENMASK(12, 8)
+#define UNIPHY_XPCS_TSL_TIMER (FIELD_PREP(TSL_RES, 0xa))
+#define UNIPHY_XPCS_T1U_TIMER (FIELD_PREP(T1U_RES, 0x3))
+#define UNIPHY_XPCS_TWL_TIMER (FIELD_PREP(TWL_RES, 0x16))
+
+/* [register] VR_XS_PCS_EEE_RXTIMER */
+#define VR_XS_PCS_EEE_RXTIMER_ADDR 0x38009
+#define RES_100U GENMASK(7, 0)
+#define TWR_RES GENMASK(13, 8)
+#define UNIPHY_XPCS_100US_TIMER (FIELD_PREP(RES_100U, 0xc8))
+#define UNIPHY_XPCS_TWR_TIMER (FIELD_PREP(TWR_RES, 0x1c))
+
+/* [register] VR_XS_PCS_DIG_STS */
+#define VR_XS_PCS_DIG_STS_ADDR 0x3800a
+#define AM_COUNT GENMASK(14, 0)
+#define QUXGMII_AM_COUNT (FIELD_PREP(AM_COUNT, 0x6018))
+
+/* [register] VR_XS_PCS_EEE_MCTRL1 */
+#define VR_XS_PCS_EEE_MCTRL1_ADDR 0x3800b
+#define TRN_LPI BIT(0)
+#define TRN_RXLPI BIT(8)
+
+/* [register] VR_MII_1_DIG_CTRL1 */
+#define VR_MII_DIG_CTRL1_CHANNEL1_ADDR 0x1a8000
+#define VR_MII_DIG_CTRL1_CHANNEL2_ADDR 0x1b8000
+#define VR_MII_DIG_CTRL1_CHANNEL3_ADDR 0x1c8000
+#define VR_MII_DIG_CTRL1_CHANNEL_ADDR(x) (0x1a8000 + 0x10000 * ((x) - 1))
+#define CHANNEL_USRA_RST BIT(5)
+
+/* [register] VR_MII_AN_CTRL */
+#define VR_MII_AN_CTRL_ADDR 0x1f8001
+#define VR_MII_AN_CTRL_CHANNEL1_ADDR 0x1a8001
+#define VR_MII_AN_CTRL_CHANNEL2_ADDR 0x1b8001
+#define VR_MII_AN_CTRL_CHANNEL3_ADDR 0x1c8001
+#define VR_MII_AN_CTRL_CHANNEL_ADDR(x) (0x1a8001 + 0x10000 * ((x) - 1))
+#define MII_AN_INTR_EN BIT(0)
+#define MII_CTRL BIT(8)
+
+/* [register] VR_MII_AN_INTR_STS */
+#define VR_MII_AN_INTR_STS_ADDR 0x1f8002
+#define VR_MII_AN_INTR_STS_CHANNEL1_ADDR 0x1a8002
+#define VR_MII_AN_INTR_STS_CHANNEL2_ADDR 0x1b8002
+#define VR_MII_AN_INTR_STS_CHANNEL3_ADDR 0x1c8002
+#define VR_MII_AN_INTR_STS_CHANNEL_ADDR(x) (0x1a8002 + 0x10000 * ((x) - 1))
+#define CL37_ANCMPLT_INTR BIT(0)
+
+/* [register] VR_XAUI_MODE_CTRL */
+#define VR_XAUI_MODE_CTRL_ADDR 0x1f8004
+#define VR_XAUI_MODE_CTRL_CHANNEL1_ADDR 0x1a8004
+#define VR_XAUI_MODE_CTRL_CHANNEL2_ADDR 0x1b8004
+#define VR_XAUI_MODE_CTRL_CHANNEL3_ADDR 0x1c8004
+#define VR_XAUI_MODE_CTRL_CHANNEL_ADDR(x) (0x1a8004 + 0x10000 * ((x) - 1))
+#define IPG_CHECK BIT(0)
+
+/* [register] SR_MII_CTRL */
+#define SR_MII_CTRL_ADDR 0x1f0000
+#define SR_MII_CTRL_CHANNEL1_ADDR 0x1a0000
+#define SR_MII_CTRL_CHANNEL2_ADDR 0x1b0000
+#define SR_MII_CTRL_CHANNEL3_ADDR 0x1c0000
+#define SR_MII_CTRL_CHANNEL_ADDR(x) (0x1a0000 + 0x10000 * ((x) - 1))
+#define AN_ENABLE BIT(12)
+#define USXGMII_DUPLEX_FULL BIT(8)
+#define USXGMII_SPEED_MASK (BIT(13) | BIT(6) | BIT(5))
+#define USXGMII_SPEED_10000 (BIT(13) | BIT(6))
+#define USXGMII_SPEED_5000 (BIT(13) | BIT(5))
+#define USXGMII_SPEED_2500 BIT(5)
+#define USXGMII_SPEED_1000 BIT(6)
+#define USXGMII_SPEED_100 BIT(13)
+#define USXGMII_SPEED_10 0
+
+/* PPE UNIPHY data type */
+struct ppe_uniphy {
+ void __iomem *base;
+ struct ppe_device *ppe_dev;
+ unsigned int index;
+ phy_interface_t interface;
+ struct phylink_pcs pcs;
+};
+
+#define pcs_to_ppe_uniphy(_pcs) container_of(_pcs, struct ppe_uniphy, pcs)
+
+struct ppe_uniphy *ppe_uniphy_setup(struct platform_device *pdev);
+
+int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy,
+ int port, int speed);
+
+int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy,
+ int port, int duplex);
+
+int ppe_uniphy_adapter_reset(struct ppe_uniphy *uniphy,
+ int port);
+
+int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy,
+ int port);
+
+int ppe_uniphy_port_gcc_clock_en_set(struct ppe_uniphy *uniphy,
+ int port, bool enable);
+
+#endif /* _PPE_UNIPHY_H_ */
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
index 268109c823ad..d3cb18df33fa 100644
--- a/include/linux/soc/qcom/ppe.h
+++ b/include/linux/soc/qcom/ppe.h
@@ -20,6 +20,7 @@ struct ppe_device {
struct dentry *debugfs_root;
bool is_ppe_probed;
void *ppe_priv;
+ void *uniphy;
};

/* PPE operations, which is used by the external driver like Ethernet
--
2.42.0


2024-01-10 11:50:38

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 18/20] net: ethernet: qualcomm: Add PPE MAC support for phylink

From: Lei Wei <[email protected]>

This driver adds support for PPE MAC initialization and the MAC
operations used by phylink.

Signed-off-by: Lei Wei <[email protected]>
Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/Kconfig | 3 +
drivers/net/ethernet/qualcomm/ppe/ppe.c | 904 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 33 +
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 112 +++
include/linux/soc/qcom/ppe.h | 33 +
5 files changed, 1085 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/Kconfig b/drivers/net/ethernet/qualcomm/Kconfig
index fe826c508f64..261f6b8c0d2e 100644
--- a/drivers/net/ethernet/qualcomm/Kconfig
+++ b/drivers/net/ethernet/qualcomm/Kconfig
@@ -65,6 +65,9 @@ config QCOM_PPE
tristate "Qualcomm Technologies, Inc. PPE Ethernet support"
depends on HAS_IOMEM && OF
depends on COMMON_CLK
+ select PHYLINK
+ select HWMON
+ select SFP
help
This driver supports the Qualcomm Technologies, Inc. packet
process engine(PPE) available with IPQ SoC. The PPE houses
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 21040efe71fc..d241ff3eab84 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -13,6 +13,8 @@
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include <linux/if_ether.h>
+#include <linux/of_net.h>
+#include <linux/rtnetlink.h>
#include <linux/soc/qcom/ppe.h>
#include "ppe.h"
#include "ppe_regs.h"
@@ -197,6 +199,19 @@ struct reset_control **ppe_reset_get(struct ppe_device *ppe_dev)
return ppe_dev_priv->rst;
}

+static struct ppe_port *ppe_port_get(struct ppe_device *ppe_dev, int port)
+{
+ struct ppe_ports *ppe_ports = (struct ppe_ports *)ppe_dev->ports;
+ int i = 0;
+
+ for (i = 0; i < ppe_ports->num; i++) {
+ if (ppe_ports->port[i].port_id == port)
+ return &ppe_ports->port[i];
+ }
+
+ return NULL;
+}
+
static int ppe_clock_set_enable(struct ppe_device *ppe_dev,
enum ppe_clk_id clk_id, unsigned long rate)
{
@@ -302,6 +317,869 @@ static int ppe_clock_config(struct platform_device *pdev)
return 0;
}

+static int ppe_port_mac_reset(struct ppe_device *ppe_dev, int port)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+
+ reset_control_assert(ppe_dev_priv->rst[PPE_NSS_PORT1_MAC_RST + port - 1]);
+ if (ppe_dev_priv->ppe_type == PPE_TYPE_APPE) {
+ reset_control_assert(ppe_dev_priv->rst[PPE_NSS_PORT1_RST + port]);
+ } else if (ppe_dev_priv->ppe_type == PPE_TYPE_MPPE) {
+ reset_control_assert(ppe_dev_priv->rst[PPE_NSS_PORT1_RX_RST + ((port - 1) << 1)]);
+ reset_control_assert(ppe_dev_priv->rst[PPE_NSS_PORT1_TX_RST + ((port - 1) << 1)]);
+ }
+ fsleep(150000);
+
+ reset_control_deassert(ppe_dev_priv->rst[PPE_NSS_PORT1_MAC_RST + port - 1]);
+ if (ppe_dev_priv->ppe_type == PPE_TYPE_APPE) {
+ reset_control_deassert(ppe_dev_priv->rst[PPE_NSS_PORT1_RST + port]);
+ } else if (ppe_dev_priv->ppe_type == PPE_TYPE_MPPE) {
+ reset_control_deassert(ppe_dev_priv->rst[PPE_NSS_PORT1_RX_RST + ((port - 1) << 1)]);
+ reset_control_deassert(ppe_dev_priv->rst[PPE_NSS_PORT1_TX_RST + ((port - 1) << 1)]);
+ }
+ fsleep(150000);
+
+ return 0;
+}
+
+static int ppe_gcc_port_speed_clk_set(struct ppe_device *ppe_dev,
+ int port, int speed, phy_interface_t interface)
+{
+ struct ppe_data *ppe_dev_priv = ppe_dev->ppe_priv;
+ enum ppe_clk_id rx_id, tx_id;
+ unsigned long rate = 0;
+ int err = 0;
+
+ rx_id = PPE_NSS_PORT1_RX_CLK + ((port - 1) << 1);
+ tx_id = PPE_NSS_PORT1_TX_CLK + ((port - 1) << 1);
+
+ switch (interface) {
+ case PHY_INTERFACE_MODE_USXGMII:
+ case PHY_INTERFACE_MODE_10GKR:
+ case PHY_INTERFACE_MODE_QUSGMII:
+ case PHY_INTERFACE_MODE_10GBASER:
+ if (speed == SPEED_10)
+ rate = 1250000;
+ else if (speed == SPEED_100)
+ rate = 12500000;
+ else if (speed == SPEED_1000)
+ rate = 125000000;
+ else if (speed == SPEED_2500)
+ rate = 78125000;
+ else if (speed == SPEED_5000)
+ rate = 156250000;
+ else if (speed == SPEED_10000)
+ rate = 312500000;
+ break;
+ case PHY_INTERFACE_MODE_2500BASEX:
+ if (speed == SPEED_2500)
+ rate = 312500000;
+ break;
+ case PHY_INTERFACE_MODE_QSGMII:
+ case PHY_INTERFACE_MODE_1000BASEX:
+ case PHY_INTERFACE_MODE_SGMII:
+ if (speed == SPEED_10)
+ rate = 2500000;
+ else if (speed == SPEED_100)
+ rate = 25000000;
+ else if (speed == SPEED_1000)
+ rate = 125000000;
+ break;
+ default:
+ break;
+ }
+
+ if (!IS_ERR(ppe_dev_priv->clk[rx_id])) {
+ err = clk_set_rate(ppe_dev_priv->clk[rx_id], rate);
+ if (err) {
+ dev_err(ppe_dev->dev,
+ "Failed to set ppe port %d speed rx clk(%d)\n",
+ port, rx_id);
+ return err;
+ }
+ }
+
+ if (!IS_ERR(ppe_dev_priv->clk[tx_id])) {
+ err = clk_set_rate(ppe_dev_priv->clk[tx_id], rate);
+ if (err) {
+ dev_err(ppe_dev->dev,
+ "Failed to set ppe port %d speed rx clk(%d)\n",
+ port, rx_id);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int ppe_mac_speed_set(struct ppe_device *ppe_dev,
+ int port, int speed, phy_interface_t interface)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_SPEED,
+ &val);
+ val &= ~GMAC_SPEED_MASK;
+ switch (speed) {
+ case SPEED_10:
+ val |= GMAC_SPEED_10;
+ break;
+ case SPEED_100:
+ val |= GMAC_SPEED_100;
+ break;
+ case SPEED_1000:
+ val |= GMAC_SPEED_1000;
+ break;
+ default:
+ break;
+ }
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_SPEED,
+ val);
+ } else if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_CONFIGURATION,
+ &val);
+ val &= ~XGMAC_SPEED_MASK;
+ switch (speed) {
+ case SPEED_10000:
+ if (interface == PHY_INTERFACE_MODE_USXGMII ||
+ interface == PHY_INTERFACE_MODE_QUSGMII)
+ val |= XGMAC_SPEED_10000_USXGMII;
+ else
+ val |= XGMAC_SPEED_10000;
+ break;
+ case SPEED_5000:
+ val |= XGMAC_SPEED_5000;
+ break;
+ case SPEED_2500:
+ if (interface == PHY_INTERFACE_MODE_USXGMII ||
+ interface == PHY_INTERFACE_MODE_QUSGMII)
+ val |= XGMAC_SPEED_2500_USXGMII;
+ else
+ val |= XGMAC_SPEED_2500;
+ break;
+ case SPEED_1000:
+ val |= XGMAC_SPEED_1000;
+ break;
+ case SPEED_100:
+ val |= XGMAC_SPEED_100;
+ break;
+ case SPEED_10:
+ val |= XGMAC_SPEED_10;
+ break;
+ default:
+ break;
+ }
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_CONFIGURATION,
+ val);
+ }
+
+ return 0;
+}
+
+static int ppe_mac_duplex_set(struct ppe_device *ppe_dev, int port, int duplex)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ &val);
+ if (duplex == DUPLEX_FULL)
+ val |= GMAC_DUPLEX_FULL;
+ else
+ val &= ~GMAC_DUPLEX_FULL;
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ val);
+ }
+
+ return 0;
+}
+
+static int ppe_mac_txfc_status_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ &val);
+ if (enable)
+ val |= GMAC_TX_FLOW_EN;
+ else
+ val &= ~GMAC_TX_FLOW_EN;
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ val);
+ } else if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_Q0_TX_FLOW_CTRL,
+ &val);
+ if (enable) {
+ val &= ~XGMAC_PT_MASK;
+ val |= (XGMAC_PAUSE_TIME | XGMAC_TFE);
+ } else {
+ val &= ~XGMAC_TFE;
+ }
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_Q0_TX_FLOW_CTRL,
+ val);
+ }
+
+ ppe_read(ppe_dev,
+ PPE_BM_PORT_FC_MODE + PPE_BM_PORT_FC_MODE_INC * (port + 7),
+ &val);
+ if (enable)
+ val |= PPE_BM_PORT_FC_MODE_EN;
+ else
+ val &= ~PPE_BM_PORT_FC_MODE_EN;
+ ppe_write(ppe_dev,
+ PPE_BM_PORT_FC_MODE + PPE_BM_PORT_FC_MODE_INC * (port + 7),
+ val);
+
+ return 0;
+}
+
+static int ppe_mac_rxfc_status_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ &val);
+ if (enable)
+ val |= GMAC_RX_FLOW_EN;
+ else
+ val &= ~GMAC_RX_FLOW_EN;
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ val);
+ } else if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FLOW_CTRL,
+ &val);
+ if (enable)
+ val |= XGMAC_RFE;
+ else
+ val &= ~XGMAC_RFE;
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FLOW_CTRL,
+ val);
+ }
+
+ return 0;
+}
+
+static int ppe_mac_txmac_en_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ &val);
+ if (enable)
+ val |= GMAC_TXMAC_EN;
+ else
+ val &= ~GMAC_TXMAC_EN;
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ val);
+ } else if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_CONFIGURATION,
+ &val);
+ if (enable)
+ val |= XGMAC_TE;
+ else
+ val &= ~XGMAC_TE;
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_CONFIGURATION,
+ val);
+ }
+
+ return 0;
+}
+
+static int ppe_mac_rxmac_en_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENOENT;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_GMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ &val);
+ if (enable)
+ val |= GMAC_RXMAC_EN;
+ else
+ val &= ~GMAC_RXMAC_EN;
+ ppe_write(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ val);
+ } else if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_CONFIGURATION,
+ &val);
+ if (enable)
+ val |= XGMAC_RE;
+ else
+ val &= ~XGMAC_RE;
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_CONFIGURATION,
+ val);
+ }
+
+ return 0;
+}
+
+static int ppe_port_bridge_txmac_en_set(struct ppe_device *ppe_dev, int port, bool enable)
+{
+ u32 val;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_BRIDGE_CTRL + PPE_PORT_BRIDGE_CTRL_INC * port,
+ &val);
+
+ if (enable)
+ val |= PPE_PORT_BRIDGE_CTRL_TXMAC_EN;
+ else
+ val &= ~PPE_PORT_BRIDGE_CTRL_TXMAC_EN;
+
+ ppe_write(ppe_dev,
+ PPE_PORT_BRIDGE_CTRL + PPE_PORT_BRIDGE_CTRL_INC * port,
+ val);
+
+ return 0;
+}
+
+static void ppe_phylink_mac_config(struct ppe_device *ppe_dev, int port,
+ unsigned int mode, const struct phylink_link_state *state)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ int mac_type;
+ u32 val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ switch (state->interface) {
+ case PHY_INTERFACE_MODE_USXGMII:
+ case PHY_INTERFACE_MODE_2500BASEX:
+ case PHY_INTERFACE_MODE_10GBASER:
+ case PHY_INTERFACE_MODE_QUSGMII:
+ mac_type = PPE_MAC_TYPE_XGMAC;
+ break;
+ default:
+ mac_type = PPE_MAC_TYPE_GMAC;
+ break;
+ }
+
+ if (ppe_port->mac_type != mac_type) {
+ /* Reset port mac for gmac */
+ if (mac_type == PPE_MAC_TYPE_GMAC)
+ ppe_port_mac_reset(ppe_dev, port);
+
+ /* Port mux to select gmac or xgmac */
+ mutex_lock(&ppe_dev->reg_mutex);
+ ppe_read(ppe_dev, PPE_PORT_MUX_CTRL, &val);
+ if (mac_type == PPE_MAC_TYPE_GMAC)
+ val &= ~PPE_PORT_MAC_SEL(port);
+ else
+ val |= PPE_PORT_MAC_SEL(port);
+ if (port == PPE_PORT5)
+ val |= PPE_PORT5_PCS_SEL;
+
+ ppe_write(ppe_dev, PPE_PORT_MUX_CTRL, val);
+ mutex_unlock(&ppe_dev->reg_mutex);
+ ppe_port->mac_type = mac_type;
+ }
+
+ /* Reset the ppe port link status when the interface changes.
+ * This allows the PPE MAC and UNIPHY to be reconfigured
+ * according to the new link state in the phylink mac link up
+ * handler.
+ */
+ if (state->interface != ppe_port->interface) {
+ ppe_port->speed = SPEED_UNKNOWN;
+ ppe_port->duplex = DUPLEX_UNKNOWN;
+ ppe_port->pause = MLO_PAUSE_NONE;
+ ppe_port->interface = state->interface;
+ }
+
+ dev_info(ppe_dev->dev, "PPE port %d mac config: interface %s, mac_type %d\n",
+ port, phy_modes(state->interface), mac_type);
+}
+
+static struct phylink_pcs *ppe_phylink_mac_select_pcs(struct ppe_device *ppe_dev,
+ int port, phy_interface_t interface)
+{
+ struct ppe_uniphy *uniphy = (struct ppe_uniphy *)ppe_dev->uniphy;
+ int ppe_type = ppe_type_get(ppe_dev);
+ int index;
+
+ switch (port) {
+ case PPE_PORT6:
+ index = 2;
+ break;
+ case PPE_PORT5:
+ index = 1;
+ break;
+ case PPE_PORT4:
+ case PPE_PORT3:
+ index = 0;
+ break;
+ case PPE_PORT2:
+ if (ppe_type == PPE_TYPE_MPPE)
+ index = 1;
+ else if (ppe_type == PPE_TYPE_APPE)
+ index = 0;
+ else
+ index = -1;
+ break;
+ case PPE_PORT1:
+ index = 0;
+ break;
+ default:
+ index = -1;
+ break;
+ }
+
+ if (index >= 0)
+ return &uniphy[index].pcs;
+ else
+ return NULL;
+}
+
+static void ppe_phylink_mac_link_up(struct ppe_device *ppe_dev, int port,
+ struct phy_device *phy,
+ unsigned int mode, phy_interface_t interface,
+ int speed, int duplex, bool tx_pause, bool rx_pause)
+{
+ struct phylink_pcs *pcs = ppe_phylink_mac_select_pcs(ppe_dev, port, interface);
+ struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ /* Wait for uniphy auto-negotiation to complete */
+ ppe_uniphy_autoneg_complete_check(uniphy, port);
+
+ if (speed != ppe_port->speed ||
+ duplex != ppe_port->duplex ||
+ tx_pause != !!(ppe_port->pause & MLO_PAUSE_TX) ||
+ rx_pause != !!(ppe_port->pause & MLO_PAUSE_RX)) {
+ /* Disable gcc uniphy port clk */
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, port, false);
+
+ if (speed != ppe_port->speed) {
+ /* Set gcc port speed clock */
+ ppe_gcc_port_speed_clk_set(ppe_dev, port, speed, interface);
+ fsleep(10000);
+ /* Set uniphy channel speed */
+ ppe_uniphy_speed_set(uniphy, port, speed);
+ /* Set mac speed */
+ ppe_mac_speed_set(ppe_dev, port, speed, interface);
+ ppe_port->speed = speed;
+ }
+
+ if (duplex != ppe_port->duplex) {
+ /* Set uniphy channel duplex */
+ ppe_uniphy_duplex_set(uniphy, port, duplex);
+ /* Set mac duplex */
+ ppe_mac_duplex_set(ppe_dev, port, duplex);
+ ppe_port->duplex = duplex;
+ }
+
+ if (tx_pause != !!(ppe_port->pause & MLO_PAUSE_TX)) {
+ /* Set mac tx flow ctrl */
+ ppe_mac_txfc_status_set(ppe_dev, port, tx_pause);
+ if (tx_pause)
+ ppe_port->pause |= MLO_PAUSE_TX;
+ else
+ ppe_port->pause &= ~MLO_PAUSE_TX;
+ }
+
+ if (rx_pause != !!(ppe_port->pause & MLO_PAUSE_RX)) {
+ /* Set mac rx flow ctrl */
+ ppe_mac_rxfc_status_set(ppe_dev, port, rx_pause);
+ if (rx_pause)
+ ppe_port->pause |= MLO_PAUSE_RX;
+ else
+ ppe_port->pause &= ~MLO_PAUSE_RX;
+ }
+
+ /* Enable gcc uniphy port clk */
+ ppe_uniphy_port_gcc_clock_en_set(uniphy, port, true);
+
+ /* Reset uniphy channel adapter */
+ ppe_uniphy_adapter_reset(uniphy, port);
+ }
+
+ /* Enable ppe mac tx and rx */
+ ppe_mac_txmac_en_set(ppe_dev, port, true);
+ ppe_mac_rxmac_en_set(ppe_dev, port, true);
+
+ /* Enable ppe bridge port tx mac */
+ ppe_port_bridge_txmac_en_set(ppe_dev, port, true);
+
+ dev_info(ppe_dev->dev,
+ "PPE port %d interface %s link up - %s%s - pause tx %d rx %d\n",
+ port, phy_modes(interface), phy_speed_to_str(speed),
+ phy_duplex_to_str(duplex), tx_pause, rx_pause);
+}
+
+static void ppe_phylink_mac_link_down(struct ppe_device *ppe_dev, int port,
+ unsigned int mode, phy_interface_t interface)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ /* Disable ppe port bridge tx mac */
+ ppe_port_bridge_txmac_en_set(ppe_dev, port, false);
+
+ /* Disable ppe mac rx */
+ ppe_mac_rxmac_en_set(ppe_dev, port, false);
+ fsleep(10000);
+
+ /* Disable ppe mac tx */
+ ppe_mac_txmac_en_set(ppe_dev, port, false);
+
+ dev_info(ppe_dev->dev, "PPE port %d interface %s link down\n",
+ port, phy_modes(interface));
+}
+
+static int ppe_mac_init(struct platform_device *pdev)
+{
+ struct device_node *ports_node, *port_node;
+ struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
+ struct ppe_ports *ppe_ports = NULL;
+ phy_interface_t phy_mode = PHY_INTERFACE_MODE_NA;
+ int i = 0, port = 0, err = 0, port_num = 0;
+
+ ports_node = of_get_child_by_name(pdev->dev.of_node, "qcom,port_phyinfo");
+ if (!ports_node) {
+ dev_err(&pdev->dev, "Failed to get qcom port phy info node\n");
+ return -ENODEV;
+ }
+
+ port_num = of_get_child_count(ports_node);
+
+ ppe_ports = devm_kzalloc(&pdev->dev,
+ struct_size(ppe_ports, port, port_num),
+ GFP_KERNEL);
+ if (!ppe_ports) {
+ err = -ENOMEM;
+ goto err_ports_node_put;
+ }
+
+ ppe_dev->ports = ppe_ports;
+ ppe_ports->num = port_num;
+
+ for_each_available_child_of_node(ports_node, port_node) {
+ err = of_property_read_u32(port_node, "port_id", &port);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to get port id\n");
+ goto err_port_node_put;
+ }
+
+ err = of_get_phy_mode(port_node, &phy_mode);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to get phy mode\n");
+ goto err_port_node_put;
+ }
+
+ ppe_ports->port[i].ppe_dev = ppe_dev;
+ ppe_ports->port[i].port_id = port;
+ ppe_ports->port[i].np = port_node;
+ ppe_ports->port[i].interface = phy_mode;
+ ppe_ports->port[i].mac_type = PPE_MAC_TYPE_NA;
+ ppe_ports->port[i].speed = SPEED_UNKNOWN;
+ ppe_ports->port[i].duplex = DUPLEX_UNKNOWN;
+ ppe_ports->port[i].pause = MLO_PAUSE_NONE;
+ i++;
+
+ /* Port gmac HW initialization */
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_ENABLE,
+ GMAC_MAC_EN, 0);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_MAC_JUMBO_SIZE,
+ GMAC_JUMBO_SIZE_MASK,
+ FIELD_PREP(GMAC_JUMBO_SIZE_MASK, MAC_MAX_FRAME_SIZE));
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_MAC_CTRL2,
+ GMAC_INIT_CTRL2_FIELD, GMAC_INIT_CTRL2);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_MAC_DBG_CTRL,
+ GMAC_HIGH_IPG_MASK,
+ FIELD_PREP(GMAC_HIGH_IPG_MASK, GMAC_IPG_CHECK));
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_MAC_MIB_CTRL,
+ MAC_MIB_EN | MAC_MIB_RD_CLR | MAC_MIB_RESET,
+ MAC_MIB_EN | MAC_MIB_RD_CLR | MAC_MIB_RESET);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_GMAC_ADDR(port) + GMAC_MAC_MIB_CTRL,
+ MAC_MIB_RESET, 0);
+
+ /* Port xgmac HW initialization */
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_CONFIGURATION,
+ XGMAC_INIT_TX_CONFIG_FIELD, XGMAC_INIT_TX_CONFIG);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_CONFIGURATION,
+ XGMAC_INIT_RX_CONFIG_FIELD, XGMAC_INIT_RX_CONFIG);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_WATCHDOG_TIMEOUT,
+ XGMAC_INIT_WATCHDOG_FIELD, XGMAC_INIT_WATCHDOG);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_PACKET_FILTER,
+ XGMAC_INIT_FILTER_FIELD, XGMAC_INIT_FILTER);
+
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_MMC_CONTROL,
+ XGMAC_MCF | XGMAC_CNTRST, XGMAC_CNTRST);
+ }
+
+ of_node_put(ports_node);
+ dev_info(ppe_dev->dev, "QCOM PPE MAC init success\n");
+ return 0;
+
+err_port_node_put:
+ of_node_put(port_node);
+err_ports_node_put:
+ of_node_put(ports_node);
+ return err;
+}
+
+static void ppe_mac_config(struct phylink_config *config, unsigned int mode,
+ const struct phylink_link_state *state)
+{
+ struct ppe_port *ppe_port = container_of(config,
+ struct ppe_port,
+ phylink_config);
+ struct ppe_device *ppe_dev = ppe_port->ppe_dev;
+
+ if (ppe_dev && ppe_dev->ppe_ops &&
+ ppe_dev->ppe_ops->phylink_mac_config)
+ ppe_dev->ppe_ops->phylink_mac_config(ppe_dev,
+ ppe_port->port_id,
+ mode, state);
+ else
+ pr_err("Failed to find ppe device mac config operation\n");
+}
+
+static void ppe_mac_link_down(struct phylink_config *config, unsigned int mode,
+ phy_interface_t interface)
+{
+ struct ppe_port *ppe_port = container_of(config,
+ struct ppe_port,
+ phylink_config);
+ struct ppe_device *ppe_dev = ppe_port->ppe_dev;
+
+ if (ppe_dev && ppe_dev->ppe_ops &&
+ ppe_dev->ppe_ops->phylink_mac_link_down)
+ ppe_dev->ppe_ops->phylink_mac_link_down(ppe_dev,
+ ppe_port->port_id,
+ mode, interface);
+ else
+ pr_err("Failed to find ppe device link down operation\n");
+}
+
+static void ppe_mac_link_up(struct phylink_config *config,
+ struct phy_device *phy,
+ unsigned int mode, phy_interface_t interface,
+ int speed, int duplex, bool tx_pause, bool rx_pause)
+{
+ struct ppe_port *ppe_port = container_of(config,
+ struct ppe_port,
+ phylink_config);
+ struct ppe_device *ppe_dev = ppe_port->ppe_dev;
+
+ if (ppe_dev && ppe_dev->ppe_ops &&
+ ppe_dev->ppe_ops->phylink_mac_link_up)
+ ppe_dev->ppe_ops->phylink_mac_link_up(ppe_dev,
+ ppe_port->port_id,
+ phy, mode, interface,
+ speed, duplex,
+ tx_pause, rx_pause);
+ else
+ pr_err("Failed to find ppe device link up operation\n");
+}
+
+static struct phylink_pcs *ppe_mac_select_pcs(struct phylink_config *config,
+ phy_interface_t interface)
+{
+ struct ppe_port *ppe_port = container_of(config,
+ struct ppe_port,
+ phylink_config);
+ struct ppe_device *ppe_dev = ppe_port->ppe_dev;
+
+ if (ppe_dev && ppe_dev->ppe_ops &&
+ ppe_dev->ppe_ops->phylink_mac_select_pcs)
+ return ppe_dev->ppe_ops->phylink_mac_select_pcs(ppe_dev,
+ ppe_port->port_id,
+ interface);
+
+ pr_err("Failed to find ppe device pcs select operation\n");
+ return NULL;
+}
+
+static const struct phylink_mac_ops ppe_phylink_ops = {
+ .mac_config = ppe_mac_config,
+ .mac_link_down = ppe_mac_link_down,
+ .mac_link_up = ppe_mac_link_up,
+ .mac_select_pcs = ppe_mac_select_pcs,
+};
+
+static struct phylink *ppe_phylink_setup(struct ppe_device *ppe_dev,
+ struct net_device *netdev,
+ int port)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ int err;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return NULL;
+ }
+
+ /* per port phylink capability */
+ ppe_port->phylink_config.dev = &netdev->dev;
+ ppe_port->phylink_config.type = PHYLINK_NETDEV;
+ ppe_port->phylink_config.mac_capabilities = MAC_ASYM_PAUSE | MAC_SYM_PAUSE |
+ MAC_10 | MAC_100 | MAC_1000 | MAC_2500FD | MAC_5000FD | MAC_10000FD;
+ __set_bit(PHY_INTERFACE_MODE_SGMII,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_1000BASEX,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_2500BASEX,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_USXGMII,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_10GBASER,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_QSGMII,
+ ppe_port->phylink_config.supported_interfaces);
+ __set_bit(PHY_INTERFACE_MODE_QUSGMII,
+ ppe_port->phylink_config.supported_interfaces);
+
+ /* create phylink */
+ ppe_port->phylink = phylink_create(&ppe_port->phylink_config,
+ of_fwnode_handle(ppe_port->np),
+ ppe_port->interface, &ppe_phylink_ops);
+ if (IS_ERR(ppe_port->phylink)) {
+ dev_err(ppe_dev->dev, "Failed to create phylink for port %d\n", port);
+ return NULL;
+ }
+
+ /* connect phylink */
+ err = phylink_of_phy_connect(ppe_port->phylink, ppe_port->np, 0);
+ if (err) {
+ dev_err(ppe_dev->dev, "Failed to connect phylink for port %d\n", port);
+ phylink_destroy(ppe_port->phylink);
+ ppe_port->phylink = NULL;
+ return NULL;
+ }
+
+ return ppe_port->phylink;
+}
+
+static void ppe_phylink_destroy(struct ppe_device *ppe_dev, int port)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ if (ppe_port->phylink) {
+ rtnl_lock();
+ phylink_disconnect_phy(ppe_port->phylink);
+ rtnl_unlock();
+ phylink_destroy(ppe_port->phylink);
+ ppe_port->phylink = NULL;
+ }
+}
+
bool ppe_is_probed(struct platform_device *pdev)
{
struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
@@ -352,6 +1230,12 @@ static int ppe_port_maxframe_set(struct ppe_device *ppe_dev,
}

static struct ppe_device_ops qcom_ppe_ops = {
+ .phylink_setup = ppe_phylink_setup,
+ .phylink_destroy = ppe_phylink_destroy,
+ .phylink_mac_config = ppe_phylink_mac_config,
+ .phylink_mac_link_up = ppe_phylink_mac_link_up,
+ .phylink_mac_link_down = ppe_phylink_mac_link_down,
+ .phylink_mac_select_pcs = ppe_phylink_mac_select_pcs,
.set_maxframe = ppe_port_maxframe_set,
};

@@ -1407,6 +2291,7 @@ static int qcom_ppe_probe(struct platform_device *pdev)
PTR_ERR(ppe_dev->ppe_priv),
"Fail to init ppe data\n");

+ mutex_init(&ppe_dev->reg_mutex);
platform_set_drvdata(pdev, ppe_dev);
ret = ppe_clock_config(pdev);
if (ret)
@@ -1426,6 +2311,10 @@ static int qcom_ppe_probe(struct platform_device *pdev)
ret,
"ppe device hw init failed\n");

+ ret = ppe_mac_init(pdev);
+ if (ret)
+ return dev_err_probe(&pdev->dev, ret, "ppe mac initialization failed\n");
+
ppe_dev->uniphy = ppe_uniphy_setup(pdev);
if (IS_ERR(ppe_dev->uniphy))
return dev_err_probe(&pdev->dev, ret, "ppe uniphy initialization failed\n");
@@ -1440,10 +2329,25 @@ static int qcom_ppe_probe(struct platform_device *pdev)
static int qcom_ppe_remove(struct platform_device *pdev)
{
struct ppe_device *ppe_dev;
+ struct ppe_ports *ppe_ports;
+ struct ppe_data *ppe_dev_priv;
+ int i, port;

ppe_dev = platform_get_drvdata(pdev);
+ ppe_dev_priv = ppe_dev->ppe_priv;
+ ppe_ports = (struct ppe_ports *)ppe_dev->ports;
+
ppe_debugfs_teardown(ppe_dev);

+ for (i = 0; i < ppe_ports->num; i++) {
+ /* Reset ppe port parent clock to XO clock */
+ port = ppe_ports->port[i].port_id;
+ clk_set_rate(ppe_dev_priv->clk[PPE_NSS_PORT1_RX_CLK + ((port - 1) << 1)],
+ P_XO_CLOCK_RATE);
+ clk_set_rate(ppe_dev_priv->clk[PPE_NSS_PORT1_TX_CLK + ((port - 1) << 1)],
+ P_XO_CLOCK_RATE);
+ }
+
return 0;
}

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 45b70f47cd21..532d53c05bf9 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -21,6 +21,9 @@
#define PPE_PORT6 6
#define PPE_PORT7 7

+/* PPE Port XO Clock Rate */
+#define P_XO_CLOCK_RATE 24000000
+
enum ppe_clk_id {
/* clocks for CMN PLL */
PPE_CMN_AHB_CLK,
@@ -152,6 +155,14 @@ enum {
PPE_ACTION_REDIRECTED_TO_CPU
};

+/* PPE MAC Type */
+enum {
+ PPE_MAC_TYPE_NA,
+ PPE_MAC_TYPE_GMAC,
+ PPE_MAC_TYPE_XGMAC,
+ PPE_MAC_TYPE_MAX
+};
+
/* PPE private data of different PPE type device */
struct ppe_data {
int ppe_type;
@@ -172,6 +183,28 @@ struct ppe_scheduler_port_resource {
int l1edrr[2];
};

+/* PPE per port data type to record port settings such as phylink
+ * setting, mac type, interface mode and link speed.
+ */
+struct ppe_port {
+ struct phylink *phylink;
+ struct phylink_config phylink_config;
+ struct device_node *np;
+ struct ppe_device *ppe_dev;
+ phy_interface_t interface;
+ int mac_type;
+ int port_id;
+ int speed;
+ int duplex;
+ int pause;
+};
+
+/* PPE ports data type */
+struct ppe_ports {
+ unsigned int num;
+ struct ppe_port port[];
+};
+
int ppe_type_get(struct ppe_device *ppe_dev);
struct clk **ppe_clock_get(struct ppe_device *ppe_dev);
struct reset_control **ppe_reset_get(struct ppe_device *ppe_dev);
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 13115405bad9..43cd067c8c73 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -7,6 +7,16 @@
#ifndef __PPE_REGS_H__
#define __PPE_REGS_H__

+#define PPE_PORT_MUX_CTRL 0x10
+#define PPE_PORT6_MAC_SEL BIT(13)
+#define PPE_PORT5_MAC_SEL BIT(12)
+#define PPE_PORT4_MAC_SEL BIT(11)
+#define PPE_PORT3_MAC_SEL BIT(10)
+#define PPE_PORT2_MAC_SEL BIT(9)
+#define PPE_PORT1_MAC_SEL BIT(8)
+#define PPE_PORT5_PCS_SEL BIT(4)
+#define PPE_PORT_MAC_SEL(x) (PPE_PORT1_MAC_SEL << ((x) - 1))
+
#define PPE_BM_TDM_CTRL 0xb000
#define PPE_BM_TDM_CTRL_NUM 1
#define PPE_BM_TDM_CTRL_INC 4
@@ -819,4 +829,106 @@ union ppe_ac_grp_cfg_u {
#define PPE_ENQ_OPR_TBL_INC 0x10
#define PPE_ENQ_OPR_TBL_ENQ_DISABLE BIT(0)

+/* PPE MAC Address */
+#define PPE_PORT_GMAC_ADDR(x) (0x001000 + ((x) - 1) * 0x200)
+#define PPE_PORT_XGMAC_ADDR(x) (0x500000 + ((x) - 1) * 0x4000)
+
+/* GMAC Registers */
+#define GMAC_ENABLE 0x0
+#define GMAC_TX_FLOW_EN BIT(6)
+#define GMAC_RX_FLOW_EN BIT(5)
+#define GMAC_DUPLEX_FULL BIT(4)
+#define GMAC_TXMAC_EN BIT(1)
+#define GMAC_RXMAC_EN BIT(0)
+#define GMAC_MAC_EN (GMAC_RXMAC_EN | GMAC_TXMAC_EN)
+
+#define GMAC_SPEED 0x4
+#define GMAC_SPEED_MASK GENMASK(1, 0)
+#define GMAC_SPEED_10 0
+#define GMAC_SPEED_100 1
+#define GMAC_SPEED_1000 2
+
+#define GMAC_MAC_CTRL2 0x18
+#define GMAC_TX_THD_MASK GENMASK(27, 24)
+#define GMAC_MAXFR_MASK GENMASK(21, 8)
+#define GMAC_CRS_SEL BIT(6)
+#define GMAC_TX_THD 0x1
+#define GMAC_INIT_CTRL2_FIELD (GMAC_MAXFR_MASK | \
+ GMAC_CRS_SEL | GMAC_TX_THD_MASK)
+#define GMAC_INIT_CTRL2 (FIELD_PREP(GMAC_MAXFR_MASK, \
+ MAC_MAX_FRAME_SIZE) | FIELD_PREP(GMAC_TX_THD_MASK, GMAC_TX_THD))
+
+#define GMAC_MAC_DBG_CTRL 0x1c
+#define GMAC_HIGH_IPG_MASK GENMASK(15, 8)
+#define GMAC_IPG_CHECK 0xc
+
+#define GMAC_MAC_JUMBO_SIZE 0x30
+#define GMAC_JUMBO_SIZE_MASK GENMASK(13, 0)
+#define MAC_MAX_FRAME_SIZE 0x3000
+
+#define GMAC_MAC_MIB_CTRL 0x34
+#define MAC_MIB_RD_CLR BIT(2)
+#define MAC_MIB_RESET BIT(1)
+#define MAC_MIB_EN BIT(0)
+
+/* XGMAC Registers */
+#define XGMAC_TX_CONFIGURATION 0x0
+#define XGMAC_SPEED_MASK GENMASK(31, 29)
+#define XGMAC_SPEED_10000_USXGMII FIELD_PREP(XGMAC_SPEED_MASK, 4)
+#define XGMAC_SPEED_10000 FIELD_PREP(XGMAC_SPEED_MASK, 0)
+#define XGMAC_SPEED_5000 FIELD_PREP(XGMAC_SPEED_MASK, 5)
+#define XGMAC_SPEED_2500_USXGMII FIELD_PREP(XGMAC_SPEED_MASK, 6)
+#define XGMAC_SPEED_2500 FIELD_PREP(XGMAC_SPEED_MASK, 2)
+#define XGMAC_SPEED_1000 FIELD_PREP(XGMAC_SPEED_MASK, 3)
+#define XGMAC_SPEED_100 XGMAC_SPEED_1000
+#define XGMAC_SPEED_10 XGMAC_SPEED_1000
+
+#define XGMAC_JD BIT(16)
+#define XGMAC_TE BIT(0)
+#define XGMAC_INIT_TX_CONFIG_FIELD (XGMAC_JD | XGMAC_TE)
+#define XGMAC_INIT_TX_CONFIG XGMAC_JD
+
+#define XGMAC_RX_CONFIGURATION 0x4
+#define XGMAC_GPSL_MASK GENMASK(29, 16)
+#define XGMAC_WD BIT(7)
+#define XGMAC_GPSLCE BIT(6)
+#define XGMAC_CST BIT(2)
+#define XGMAC_ACS BIT(1)
+#define XGMAC_RE BIT(0)
+#define XGMAC_INIT_RX_CONFIG_FIELD (XGMAC_RE | XGMAC_ACS | \
+ XGMAC_CST | XGMAC_WD | XGMAC_GPSLCE | XGMAC_GPSL_MASK)
+#define XGMAC_INIT_RX_CONFIG (XGMAC_ACS | XGMAC_CST | \
+ XGMAC_GPSLCE | FIELD_PREP(XGMAC_GPSL_MASK, MAC_MAX_FRAME_SIZE))
+
+#define XGMAC_PACKET_FILTER 0x8
+#define XGMAC_RA BIT(31)
+#define XGMAC_PCF_MASK GENMASK(7, 6)
+#define XGMAC_PR BIT(0)
+#define XGMAC_PASS_CONTROL_PACKET 0x2
+#define XGMAC_INIT_FILTER_FIELD (XGMAC_RA | XGMAC_PR | \
+ XGMAC_PCF_MASK)
+#define XGMAC_INIT_FILTER (XGMAC_RA | XGMAC_PR | \
+ FIELD_PREP(XGMAC_PCF_MASK, \
+ XGMAC_PASS_CONTROL_PACKET))
+
+#define XGMAC_WATCHDOG_TIMEOUT 0xc
+#define XGMAC_PWE BIT(8)
+#define XGMAC_WTO_MASK GENMASK(3, 0)
+#define XGMAC_WTO_LIMIT_13K 0xb
+#define XGMAC_INIT_WATCHDOG_FIELD (XGMAC_PWE | XGMAC_WTO_MASK)
+#define XGMAC_INIT_WATCHDOG (XGMAC_PWE | \
+ FIELD_PREP(XGMAC_WTO_MASK, XGMAC_WTO_LIMIT_13K))
+
+#define XGMAC_Q0_TX_FLOW_CTRL 0x70
+#define XGMAC_PT_MASK GENMASK(31, 16)
+#define XGMAC_PAUSE_TIME FIELD_PREP(XGMAC_PT_MASK, 0xffff)
+#define XGMAC_TFE BIT(1)
+
+#define XGMAC_RX_FLOW_CTRL 0x90
+#define XGMAC_RFE BIT(0)
+
+#define XGMAC_MMC_CONTROL 0x800
+#define XGMAC_MCF BIT(3)
+#define XGMAC_CNTRST BIT(0)
+
#endif
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
index d3cb18df33fa..40e69a262650 100644
--- a/include/linux/soc/qcom/ppe.h
+++ b/include/linux/soc/qcom/ppe.h
@@ -9,6 +9,7 @@
#define __QCOM_PPE_H__

#include <linux/platform_device.h>
+#include <linux/phylink.h>

/* PPE platform private data, which is used by external driver like
* Ethernet DMA driver.
@@ -20,6 +21,8 @@ struct ppe_device {
struct dentry *debugfs_root;
bool is_ppe_probed;
void *ppe_priv;
+ struct mutex reg_mutex; /* Protects ppe reg operation */
+ void *ports;
void *uniphy;
};

@@ -27,6 +30,36 @@ struct ppe_device {
* DMA driver to configure PPE.
*/
struct ppe_device_ops {
+ /*
+ * PHYLINK integration
+ */
+ struct phylink *(*phylink_setup)(struct ppe_device *ppe_dev,
+ struct net_device *netdev, int port);
+ void (*phylink_destroy)(struct ppe_device *ppe_dev,
+ int port);
+ void (*phylink_mac_config)(struct ppe_device *ppe_dev,
+ int port,
+ unsigned int mode,
+ const struct phylink_link_state *state);
+ void (*phylink_mac_link_up)(struct ppe_device *ppe_dev,
+ int port,
+ struct phy_device *phy,
+ unsigned int mode,
+ phy_interface_t interface,
+ int speed,
+ int duplex,
+ bool tx_pause,
+ bool rx_pause);
+ void (*phylink_mac_link_down)(struct ppe_device *ppe_dev,
+ int port,
+ unsigned int mode,
+ phy_interface_t interface);
+ struct phylink_pcs *(*phylink_mac_select_pcs)(struct ppe_device *ppe_dev,
+ int port,
+ phy_interface_t interface);
+ /*
+ * Port maximum frame size setting
+ */
int (*set_maxframe)(struct ppe_device *ppe_dev, int port,
int maxframe_size);
};
--
2.42.0
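The rate table that ppe_gcc_port_speed_clk_set() programs for the USXGMII-family interfaces can be read as a plain lookup; the sketch below (values transcribed from the switch statement in the patch above, the helper name is illustrative and not part of the driver) makes the relation of each rate to the 312.5 MHz base clock explicit:

```c
/* Transcribed from ppe_gcc_port_speed_clk_set(): NSS port clock rate (Hz)
 * for USXGMII/10GKR/QUSGMII/10GBASER interfaces. Each 10x speed step at or
 * below 1G scales the 312.5 MHz base clock by the same factor; 2.5G and 5G
 * divide it by 4 and 2 respectively.
 */
static unsigned long usxgmii_clk_rate(int speed_mbps)
{
	switch (speed_mbps) {
	case 10:    return 1250000;     /* 312.5 MHz / 250 */
	case 100:   return 12500000;    /* 312.5 MHz / 25  */
	case 1000:  return 125000000;   /* 312.5 MHz / 2.5 */
	case 2500:  return 78125000;    /* 312.5 MHz / 4   */
	case 5000:  return 156250000;   /* 312.5 MHz / 2   */
	case 10000: return 312500000;   /* base clock      */
	default:    return 0;           /* unsupported: rate left unset */
	}
}
```

Note that 2500BASEX uses the full 312.5 MHz rate at 2.5G, unlike the USXGMII family, which is why the driver keys the table on both interface mode and speed.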


2024-01-10 11:50:57

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 19/20] net: ethernet: qualcomm: Add PPE MAC functions

From: Lei Wei <[email protected]>

Add PPE MAC functions, including MAC MIB statistics, MAC EEE and MAC
address setting operations, which are used by the ethtool and netdev ops.

Signed-off-by: Lei Wei <[email protected]>
Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 698 +++++++++++++++++++
drivers/net/ethernet/qualcomm/ppe/ppe.h | 98 +++
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h | 172 +++++
include/linux/soc/qcom/ppe.h | 30 +
4 files changed, 998 insertions(+)
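The PPE_ETHTOOL_XGMIB_STAT/PPE_ETHTOOL_GMIB_STAT macros below precompute each counter's index into the hardware-stats structure viewed as a flat array of u64 (offsetof divided by sizeof(u64)), so a stats dump can index the struct directly. A minimal userspace sketch of the same idiom, using an illustrative struct rather than the driver's ppe_xgmib_hw_stats:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stats struct; the driver's ppe_xgmib_hw_stats is analogous,
 * with every member a u64 counter.
 */
struct demo_hw_stats {
	uint64_t rx_frames;
	uint64_t rx_bytes;
	uint64_t tx_frames;
};

/* Same shape as PPE_ETHTOOL_XGMIB_STAT: stringify the field name and
 * store its position as an index into the struct viewed as uint64_t[].
 */
#define DEMO_STAT(x) { #x, offsetof(struct demo_hw_stats, x) / sizeof(uint64_t) }

static const struct {
	const char *name;
	uint32_t offset;	/* u64-array index, not a byte offset */
} demo_stats[] = {
	DEMO_STAT(rx_frames),
	DEMO_STAT(rx_bytes),
	DEMO_STAT(tx_frames),
};

/* Fetch a counter by its precomputed u64 index. */
static uint64_t demo_stat_get(const struct demo_hw_stats *s, uint32_t idx)
{
	return ((const uint64_t *)s)[idx];
}
```

Storing a u64 index rather than a byte offset lets the ethtool get_ethtool_stats path copy counters with a single indexed load per string, in the same order the strings were reported.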

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index d241ff3eab84..680d228a5307 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -28,6 +28,115 @@
#define PPE_SCHEDULER_L1_NUM 64
#define PPE_SP_PRIORITY_NUM 8

+#define PPE_ETHTOOL_XGMIB_STAT(x) { #x, \
+ offsetof(struct ppe_xgmib_hw_stats, x) / sizeof(u64) }
+#define PPE_ETHTOOL_GMIB_STAT(x) { #x, \
+ offsetof(struct ppe_gmib_hw_stats, x) / sizeof(u64) }
+
+/* Interval at which the GMAC MIB counters are polled to protect
+ * against counter overflow.
+ */
+#define PPE_GMIB_STATS_POLL_INTERVAL 120000
+
+/* XGMAC strings used by ethtool */
+static const struct ppe_ethtool_gstrings_xgmib_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 offset;
+} ppe_ethtool_gstrings_xgmib_stats[] = {
+ PPE_ETHTOOL_XGMIB_STAT(rx_frames),
+ PPE_ETHTOOL_XGMIB_STAT(rx_bytes),
+ PPE_ETHTOOL_XGMIB_STAT(rx_bytes_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_broadcast_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_multicast_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_unicast_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_crc_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_runt_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_jabber_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_undersize_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_oversize_g),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt64),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt65to127),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt128to255),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt256to511),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt512to1023),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pkt1024tomax),
+ PPE_ETHTOOL_XGMIB_STAT(rx_len_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_outofrange_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_pause),
+ PPE_ETHTOOL_XGMIB_STAT(rx_fifo_overflow),
+ PPE_ETHTOOL_XGMIB_STAT(rx_vlan),
+ PPE_ETHTOOL_XGMIB_STAT(rx_wdog_err),
+ PPE_ETHTOOL_XGMIB_STAT(rx_lpi_usec),
+ PPE_ETHTOOL_XGMIB_STAT(rx_lpi_tran),
+ PPE_ETHTOOL_XGMIB_STAT(rx_drop_frames),
+ PPE_ETHTOOL_XGMIB_STAT(rx_drop_bytes),
+ PPE_ETHTOOL_XGMIB_STAT(tx_bytes),
+ PPE_ETHTOOL_XGMIB_STAT(tx_bytes_g),
+ PPE_ETHTOOL_XGMIB_STAT(tx_frames),
+ PPE_ETHTOOL_XGMIB_STAT(tx_frame_g),
+ PPE_ETHTOOL_XGMIB_STAT(tx_broadcast),
+ PPE_ETHTOOL_XGMIB_STAT(tx_broadcast_g),
+ PPE_ETHTOOL_XGMIB_STAT(tx_multicast),
+ PPE_ETHTOOL_XGMIB_STAT(tx_multicast_g),
+ PPE_ETHTOOL_XGMIB_STAT(tx_unicast),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt64),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt65to127),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt128to255),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt256to511),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt512to1023),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pkt1024tomax),
+ PPE_ETHTOOL_XGMIB_STAT(tx_underflow_err),
+ PPE_ETHTOOL_XGMIB_STAT(tx_pause),
+ PPE_ETHTOOL_XGMIB_STAT(tx_vlan_g),
+ PPE_ETHTOOL_XGMIB_STAT(tx_lpi_usec),
+ PPE_ETHTOOL_XGMIB_STAT(tx_lpi_tran),
+};
+
+/* GMAC strings used by ethtool */
+static const struct ppe_ethtool_gstrings_gmib_stats {
+ char name[ETH_GSTRING_LEN];
+ u32 offset;
+} ppe_ethtool_gstrings_gmib_stats[] = {
+ PPE_ETHTOOL_GMIB_STAT(rx_broadcast),
+ PPE_ETHTOOL_GMIB_STAT(rx_pause),
+ PPE_ETHTOOL_GMIB_STAT(rx_unicast),
+ PPE_ETHTOOL_GMIB_STAT(rx_multicast),
+ PPE_ETHTOOL_GMIB_STAT(rx_fcserr),
+ PPE_ETHTOOL_GMIB_STAT(rx_alignerr),
+ PPE_ETHTOOL_GMIB_STAT(rx_runt),
+ PPE_ETHTOOL_GMIB_STAT(rx_frag),
+ PPE_ETHTOOL_GMIB_STAT(rx_jmbfcserr),
+ PPE_ETHTOOL_GMIB_STAT(rx_jmbalignerr),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt64),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt65to127),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt128to255),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt256to511),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt512to1023),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt1024to1518),
+ PPE_ETHTOOL_GMIB_STAT(rx_pkt1519tomax),
+ PPE_ETHTOOL_GMIB_STAT(rx_toolong),
+ PPE_ETHTOOL_GMIB_STAT(rx_pktgoodbyte),
+ PPE_ETHTOOL_GMIB_STAT(rx_pktbadbyte),
+ PPE_ETHTOOL_GMIB_STAT(tx_broadcast),
+ PPE_ETHTOOL_GMIB_STAT(tx_pause),
+ PPE_ETHTOOL_GMIB_STAT(tx_multicast),
+ PPE_ETHTOOL_GMIB_STAT(tx_underrun),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt64),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt65to127),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt128to255),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt256to511),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt512to1023),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt1024to1518),
+ PPE_ETHTOOL_GMIB_STAT(tx_pkt1519tomax),
+ PPE_ETHTOOL_GMIB_STAT(tx_pktbyte),
+ PPE_ETHTOOL_GMIB_STAT(tx_collisions),
+ PPE_ETHTOOL_GMIB_STAT(tx_abortcol),
+ PPE_ETHTOOL_GMIB_STAT(tx_multicol),
+ PPE_ETHTOOL_GMIB_STAT(tx_singlecol),
+ PPE_ETHTOOL_GMIB_STAT(tx_exesdeffer),
+ PPE_ETHTOOL_GMIB_STAT(tx_deffer),
+ PPE_ETHTOOL_GMIB_STAT(tx_latecol),
+ PPE_ETHTOOL_GMIB_STAT(tx_unicast),
+};
+
static const char * const ppe_clock_name[PPE_CLK_MAX] = {
"cmn_ahb",
"cmn_sys",
@@ -694,6 +803,362 @@ static int ppe_port_bridge_txmac_en_set(struct ppe_device *ppe_dev, int port, bo
return 0;
}

+/* Read the GMAC MIB counters from the GMAC registers and accumulate
+ * them into the PPE port gmib stats.
+ */
+static void ppe_gmib_stats_update(struct ppe_port *ppe_port)
+{
+ struct ppe_device *ppe_dev = ppe_port->ppe_dev;
+ int port = ppe_port->port_id;
+ u32 val, hi;
+
+ spin_lock(&ppe_port->stats_lock);
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXBROAD, &val);
+ ppe_port->gmib_stats->rx_broadcast += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPAUSE, &val);
+ ppe_port->gmib_stats->rx_pause += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXMULTI, &val);
+ ppe_port->gmib_stats->rx_multicast += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXFCSERR, &val);
+ ppe_port->gmib_stats->rx_fcserr += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXALIGNERR, &val);
+ ppe_port->gmib_stats->rx_alignerr += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXRUNT, &val);
+ ppe_port->gmib_stats->rx_runt += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXFRAG, &val);
+ ppe_port->gmib_stats->rx_frag += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXJUMBOFCSERR, &val);
+ ppe_port->gmib_stats->rx_jmbfcserr += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXJUMBOALIGNERR, &val);
+ ppe_port->gmib_stats->rx_jmbalignerr += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT64, &val);
+ ppe_port->gmib_stats->rx_pkt64 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT65TO127, &val);
+ ppe_port->gmib_stats->rx_pkt65to127 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT128TO255, &val);
+ ppe_port->gmib_stats->rx_pkt128to255 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT256TO511, &val);
+ ppe_port->gmib_stats->rx_pkt256to511 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT512TO1023, &val);
+ ppe_port->gmib_stats->rx_pkt512to1023 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT1024TO1518, &val);
+ ppe_port->gmib_stats->rx_pkt1024to1518 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXPKT1519TOX, &val);
+ ppe_port->gmib_stats->rx_pkt1519tomax += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXTOOLONG, &val);
+ ppe_port->gmib_stats->rx_toolong += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXGOODBYTE_L, &val);
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXGOODBYTE_H, &hi);
+ ppe_port->gmib_stats->rx_pktgoodbyte += (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXBADBYTE_L, &val);
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXBADBYTE_H, &hi);
+ ppe_port->gmib_stats->rx_pktbadbyte += (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_RXUNI, &val);
+ ppe_port->gmib_stats->rx_unicast += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXBROAD, &val);
+ ppe_port->gmib_stats->tx_broadcast += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPAUSE, &val);
+ ppe_port->gmib_stats->tx_pause += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXMULTI, &val);
+ ppe_port->gmib_stats->tx_multicast += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXUNDERRUN, &val);
+ ppe_port->gmib_stats->tx_underrun += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT64, &val);
+ ppe_port->gmib_stats->tx_pkt64 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT65TO127, &val);
+ ppe_port->gmib_stats->tx_pkt65to127 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT128TO255, &val);
+ ppe_port->gmib_stats->tx_pkt128to255 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT256TO511, &val);
+ ppe_port->gmib_stats->tx_pkt256to511 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT512TO1023, &val);
+ ppe_port->gmib_stats->tx_pkt512to1023 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT1024TO1518, &val);
+ ppe_port->gmib_stats->tx_pkt1024to1518 += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXPKT1519TOX, &val);
+ ppe_port->gmib_stats->tx_pkt1519tomax += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXBYTE_L, &val);
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXBYTE_H, &hi);
+ ppe_port->gmib_stats->tx_pktbyte += (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXCOLLISIONS, &val);
+ ppe_port->gmib_stats->tx_collisions += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXABORTCOL, &val);
+ ppe_port->gmib_stats->tx_abortcol += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXMULTICOL, &val);
+ ppe_port->gmib_stats->tx_multicol += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXSINGLECOL, &val);
+ ppe_port->gmib_stats->tx_singlecol += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXEXCESSIVEDEFER, &val);
+ ppe_port->gmib_stats->tx_exesdeffer += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXDEFER, &val);
+ ppe_port->gmib_stats->tx_deffer += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXLATECOL, &val);
+ ppe_port->gmib_stats->tx_latecol += (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_TXUNI, &val);
+ ppe_port->gmib_stats->tx_unicast += (u64)val;
+
+ spin_unlock(&ppe_port->stats_lock);
+}
+
+/* Get XGMAC MIBs from XGMAC registers */
+static void ppe_xgmib_stats_update(struct ppe_device *ppe_dev, int port,
+ struct ppe_xgmib_hw_stats *xgmib_hw_stats)
+{
+ u32 val, hi;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_OCTET_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_OCTET_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_bytes = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_FRAME_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_FRAME_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_frames = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_BROADCAST_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_BROADCAST_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->tx_broadcast_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_MULTICAST_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_MULTICAST_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->tx_multicast_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_64OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_64OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt64 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_65TO127OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_65TO127OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt65to127 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_128TO255OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_128TO255OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt128to255 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_256TO511OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_256TO511OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt256to511 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_512TO1023OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_512TO1023OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt512to1023 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_pkt1024tomax = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_UNICAST_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_UNICAST_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_unicast = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_MULTICAST_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_MULTICAST_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_multicast = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_BROADCAST_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_BROADCAST_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->tx_broadcast = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_UNDERFLOW_ERROR_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_UNDERFLOW_ERROR_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->tx_underflow_err = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_OCTET_COUNT_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_OCTET_COUNT_GOOD_HIGH, &hi);
+ xgmib_hw_stats->tx_bytes_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_FRAME_COUNT_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_FRAME_COUNT_GOOD_HIGH, &hi);
+ xgmib_hw_stats->tx_frame_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_PAUSE_FRAMES_LOW, &val);
+ xgmib_hw_stats->tx_pause = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_VLAN_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_VLAN_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->tx_vlan_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_LPI_USEC_CNTR, &val);
+ xgmib_hw_stats->tx_lpi_usec = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_TX_LPI_TRAN_CNTR, &val);
+ xgmib_hw_stats->tx_lpi_tran = (u64)val;
+
+ /* RX MIB counters */
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FRAME_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FRAME_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_frames = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OCTET_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OCTET_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_bytes = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OCTET_COUNT_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OCTET_COUNT_GOOD_HIGH, &hi);
+ xgmib_hw_stats->rx_bytes_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_BROADCAST_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_BROADCAST_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->rx_broadcast_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_MULTICAST_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_MULTICAST_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->rx_multicast_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_CRC_ERROR_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_CRC_ERROR_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->rx_crc_err = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FRAG_ERROR_FRAMES, &val);
+ xgmib_hw_stats->rx_runt_err = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_JABBER_ERROR_FRAMES, &val);
+ xgmib_hw_stats->rx_jabber_err = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_UNDERSIZE_FRAMES_GOOD, &val);
+ xgmib_hw_stats->rx_undersize_g = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OVERSIZE_FRAMES_GOOD, &val);
+ xgmib_hw_stats->rx_oversize_g = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_64OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_64OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt64 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_65TO127OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_65TO127OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt65to127 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_128TO255OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_128TO255OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt128to255 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_256TO511OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_256TO511OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt256to511 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_512TO1023OCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_512TO1023OCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt512to1023 = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_pkt1024tomax = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_UNICAST_FRAMES_GOOD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_UNICAST_FRAMES_GOOD_HIGH, &hi);
+ xgmib_hw_stats->rx_unicast_g = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_LENGTH_ERROR_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_LENGTH_ERROR_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->rx_len_err = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OUTOFRANGE_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_OUTOFRANGE_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->rx_outofrange_err = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_PAUSE_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_PAUSE_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->rx_pause = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FIFOOVERFLOW_FRAMES_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_FIFOOVERFLOW_FRAMES_HIGH, &hi);
+ xgmib_hw_stats->rx_fifo_overflow = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_VLAN_FRAMES_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_VLAN_FRAMES_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_vlan = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_WATCHDOG_ERROR_FRAMES, &val);
+ xgmib_hw_stats->rx_wdog_err = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_LPI_USEC_CNTR, &val);
+ xgmib_hw_stats->rx_lpi_usec = (u64)val;
+
+ ppe_read(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_LPI_TRAN_CNTR, &val);
+ xgmib_hw_stats->rx_lpi_tran = (u64)val;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_DISCARD_FRAME_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_DISCARD_FRAME_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_drop_frames = (u64)val | (u64)hi << 32;
+
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_DISCARD_OCTET_COUNT_GOOD_BAD_LOW, &val);
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_RX_DISCARD_OCTET_COUNT_GOOD_BAD_HIGH, &hi);
+ xgmib_hw_stats->rx_drop_bytes = (u64)val | (u64)hi << 32;
+}
+
+static void ppe_gmib_stats_poll(struct work_struct *work)
+{
+ struct ppe_port *ppe_port = container_of(work, struct ppe_port,
+ gmib_read.work);
+
+ ppe_gmib_stats_update(ppe_port);
+
+ schedule_delayed_work(&ppe_port->gmib_read,
+ msecs_to_jiffies(PPE_GMIB_STATS_POLL_INTERVAL));
+}
+
static void ppe_phylink_mac_config(struct ppe_device *ppe_dev, int port,
unsigned int mode, const struct phylink_link_state *state)
{
@@ -862,6 +1327,9 @@ static void ppe_phylink_mac_link_up(struct ppe_device *ppe_dev, int port,
/* Enable ppe bridge port tx mac */
ppe_port_bridge_txmac_en_set(ppe_dev, port, true);

+ /* Start gmib statistics polling */
+ schedule_delayed_work(&ppe_port->gmib_read, 0);
+
dev_info(ppe_dev->dev,
"PPE port %d interface %s link up - %s%s - pause tx %d rx %d\n",
port, phy_modes(interface), phy_speed_to_str(speed),
@@ -886,6 +1354,9 @@ static void ppe_phylink_mac_link_down(struct ppe_device *ppe_dev, int port,
/* Disable ppe mac tx */
ppe_mac_txmac_en_set(ppe_dev, port, false);

+ /* Stop gmib statistics polling */
+ cancel_delayed_work_sync(&ppe_port->gmib_read);
+
dev_info(ppe_dev->dev, "PPE port %d interface %s link down\n",
port, phy_modes(interface));
}
@@ -938,6 +1409,11 @@ static int ppe_mac_init(struct platform_device *pdev)
ppe_ports->port[i].speed = SPEED_UNKNOWN;
ppe_ports->port[i].duplex = DUPLEX_UNKNOWN;
ppe_ports->port[i].pause = MLO_PAUSE_NONE;
+ ppe_ports->port[i].gmib_stats = devm_kzalloc(&pdev->dev,
+ sizeof(*ppe_ports->port[i].gmib_stats),
+ GFP_KERNEL);
+ if (!ppe_ports->port[i].gmib_stats)
+ return -ENOMEM;
+ spin_lock_init(&ppe_ports->port[i].stats_lock);
+ INIT_DELAYED_WORK(&ppe_ports->port[i].gmib_read, ppe_gmib_stats_poll);
i++;

/* Port gmac HW initialization */
@@ -1180,6 +1656,218 @@ static void ppe_phylink_destroy(struct ppe_device *ppe_dev, int port)
}
}

+static int ppe_get_sset_count(struct ppe_device *ppe_dev, int port, int sset)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENODEV;
+ }
+
+ if (sset != ETH_SS_STATS)
+ return 0;
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC)
+ return ARRAY_SIZE(ppe_ethtool_gstrings_xgmib_stats);
+ else
+ return ARRAY_SIZE(ppe_ethtool_gstrings_gmib_stats);
+}
+
+static void ppe_get_strings(struct ppe_device *ppe_dev, int port, u32 stringset, u8 *data)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ int i;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ if (stringset != ETH_SS_STATS)
+ return;
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ for (i = 0; i < ARRAY_SIZE(ppe_ethtool_gstrings_xgmib_stats); i++)
+ memcpy(data + i * ETH_GSTRING_LEN,
+ ppe_ethtool_gstrings_xgmib_stats[i].name, ETH_GSTRING_LEN);
+ } else {
+ for (i = 0; i < ARRAY_SIZE(ppe_ethtool_gstrings_gmib_stats); i++)
+ memcpy(data + i * ETH_GSTRING_LEN,
+ ppe_ethtool_gstrings_gmib_stats[i].name, ETH_GSTRING_LEN);
+ }
+}
+
+static void ppe_get_ethtool_stats(struct ppe_device *ppe_dev, int port, u64 *data)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u64 *data_src;
+ int i;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ struct ppe_xgmib_hw_stats xgmib_hw_stats;
+
+ ppe_xgmib_stats_update(ppe_dev, port, &xgmib_hw_stats);
+ data_src = (u64 *)(&xgmib_hw_stats);
+ for (i = 0; i < ARRAY_SIZE(ppe_ethtool_gstrings_xgmib_stats); i++)
+ data[i] = *(data_src + ppe_ethtool_gstrings_xgmib_stats[i].offset);
+ } else {
+ ppe_gmib_stats_update(ppe_port);
+ data_src = (u64 *)(ppe_port->gmib_stats);
+ for (i = 0; i < ARRAY_SIZE(ppe_ethtool_gstrings_gmib_stats); i++)
+ data[i] = *(data_src + ppe_ethtool_gstrings_gmib_stats[i].offset);
+ }
+}
+
+static void ppe_get_stats64(struct ppe_device *ppe_dev, int port, struct rtnl_link_stats64 *s)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ struct ppe_xgmib_hw_stats xgmib_hw_stats;
+
+ ppe_xgmib_stats_update(ppe_dev, port, &xgmib_hw_stats);
+ s->rx_packets = xgmib_hw_stats.rx_unicast_g +
+ xgmib_hw_stats.rx_broadcast_g + xgmib_hw_stats.rx_multicast_g;
+ s->tx_packets = xgmib_hw_stats.tx_unicast +
+ xgmib_hw_stats.tx_broadcast_g + xgmib_hw_stats.tx_multicast_g;
+ s->rx_bytes = xgmib_hw_stats.rx_bytes;
+ s->tx_bytes = xgmib_hw_stats.tx_bytes;
+ s->multicast = xgmib_hw_stats.rx_multicast_g;
+
+ s->rx_crc_errors = xgmib_hw_stats.rx_crc_err;
+ s->rx_frame_errors = xgmib_hw_stats.rx_runt_err;
+ s->rx_fifo_errors = xgmib_hw_stats.rx_fifo_overflow;
+ s->rx_length_errors = xgmib_hw_stats.rx_len_err;
+ s->rx_errors = s->rx_crc_errors + s->rx_frame_errors +
+ s->rx_fifo_errors + s->rx_length_errors;
+ s->rx_dropped = xgmib_hw_stats.rx_drop_frames + s->rx_errors;
+
+ s->tx_fifo_errors = xgmib_hw_stats.tx_underflow_err;
+ s->tx_errors = s->tx_fifo_errors;
+ } else {
+ ppe_gmib_stats_update(ppe_port);
+ s->rx_packets = ppe_port->gmib_stats->rx_unicast +
+ ppe_port->gmib_stats->rx_broadcast + ppe_port->gmib_stats->rx_multicast;
+ s->tx_packets = ppe_port->gmib_stats->tx_unicast +
+ ppe_port->gmib_stats->tx_broadcast + ppe_port->gmib_stats->tx_multicast;
+ s->rx_bytes = ppe_port->gmib_stats->rx_pktgoodbyte;
+ s->tx_bytes = ppe_port->gmib_stats->tx_pktbyte;
+
+ s->rx_crc_errors = ppe_port->gmib_stats->rx_fcserr +
+ ppe_port->gmib_stats->rx_jmbfcserr;
+ s->rx_frame_errors = ppe_port->gmib_stats->rx_alignerr +
+ ppe_port->gmib_stats->rx_jmbalignerr;
+ s->rx_fifo_errors = ppe_port->gmib_stats->rx_runt;
+ s->rx_errors = s->rx_crc_errors + s->rx_frame_errors + s->rx_fifo_errors;
+ s->rx_dropped = ppe_port->gmib_stats->rx_toolong + s->rx_errors;
+
+ s->tx_fifo_errors = ppe_port->gmib_stats->tx_underrun;
+ s->tx_aborted_errors = ppe_port->gmib_stats->tx_abortcol;
+ s->tx_errors = s->tx_fifo_errors + s->tx_aborted_errors;
+ s->collisions = ppe_port->gmib_stats->tx_collisions;
+ s->multicast = ppe_port->gmib_stats->rx_multicast;
+ }
+}
+
+static int ppe_set_mac_address(struct ppe_device *ppe_dev, int port, u8 *macaddr)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 reg_val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENODEV;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ reg_val = (macaddr[5] << 8) | macaddr[4] | XGMAC_ADDR_EN;
+ ppe_write(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_MAC_ADDR0_HIGH, reg_val);
+ reg_val = (macaddr[3] << 24) | (macaddr[2] << 16) | (macaddr[1] << 8) | macaddr[0];
+ ppe_write(ppe_dev, PPE_PORT_XGMAC_ADDR(port) + XGMAC_MAC_ADDR0_LOW, reg_val);
+ } else {
+ reg_val = (macaddr[5] << 8) | macaddr[4];
+ ppe_write(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_GOL_MAC_ADDR0, reg_val);
+ reg_val = (macaddr[0] << 24) | (macaddr[1] << 16) | (macaddr[2] << 8) | macaddr[3];
+ ppe_write(ppe_dev, PPE_PORT_GMAC_ADDR(port) + GMAC_GOL_MAC_ADDR1, reg_val);
+ }
+
+ return 0;
+}
+
+static int ppe_set_mac_eee(struct ppe_device *ppe_dev, int port, struct ethtool_eee *eee)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 reg_val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENODEV;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_LPI_CONTROL_STATUS,
+ &reg_val);
+ reg_val |= (XGMAC_LPI_PLS | XGMAC_LPI_TXA | XGMAC_LPI_TE);
+ if (eee->tx_lpi_enabled)
+ reg_val |= XGMAC_LPI_TXEN;
+ else
+ reg_val &= ~XGMAC_LPI_TXEN;
+ ppe_write(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_LPI_CONTROL_STATUS,
+ reg_val);
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_1US_TIC_COUNTER,
+ XGMAC_1US_TIC_CNTR, FIELD_PREP(XGMAC_1US_TIC_CNTR, 0x15f));
+ ppe_mask(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_LPI_AUTO_ENTRY_TIMER,
+ XGMAC_LPI_ET, FIELD_PREP(XGMAC_LPI_ET, 0x2c));
+ } else {
+ ppe_read(ppe_dev, PPE_LPI_LPI_EN, &reg_val);
+ if (eee->tx_lpi_enabled)
+ reg_val |= PPE_LPI_PORT_EN(port);
+ else
+ reg_val &= ~PPE_LPI_PORT_EN(port);
+ ppe_write(ppe_dev, PPE_LPI_LPI_EN, reg_val);
+ }
+
+ return 0;
+}
+
+static int ppe_get_mac_eee(struct ppe_device *ppe_dev, int port, struct ethtool_eee *eee)
+{
+ struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
+ u32 reg_val;
+
+ if (!ppe_port) {
+ dev_err(ppe_dev->dev, "Failed to find ppe port %d\n", port);
+ return -ENODEV;
+ }
+
+ if (ppe_port->mac_type == PPE_MAC_TYPE_XGMAC) {
+ ppe_read(ppe_dev,
+ PPE_PORT_XGMAC_ADDR(port) + XGMAC_LPI_CONTROL_STATUS,
+ &reg_val);
+ if (reg_val & XGMAC_LPI_TXEN)
+ eee->tx_lpi_enabled = 1;
+ else
+ eee->tx_lpi_enabled = 0;
+ } else {
+ ppe_read(ppe_dev, PPE_LPI_LPI_EN, &reg_val);
+ if (reg_val & PPE_LPI_PORT_EN(port))
+ eee->tx_lpi_enabled = 1;
+ else
+ eee->tx_lpi_enabled = 0;
+ }
+
+ return 0;
+}
+
bool ppe_is_probed(struct platform_device *pdev)
{
struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
@@ -1236,6 +1924,13 @@ static struct ppe_device_ops qcom_ppe_ops = {
.phylink_mac_link_up = ppe_phylink_mac_link_up,
.phylink_mac_link_down = ppe_phylink_mac_link_down,
.phylink_mac_select_pcs = ppe_phylink_mac_select_pcs,
+ .get_sset_count = ppe_get_sset_count,
+ .get_strings = ppe_get_strings,
+ .get_ethtool_stats = ppe_get_ethtool_stats,
+ .get_stats64 = ppe_get_stats64,
+ .set_mac_address = ppe_set_mac_address,
+ .set_mac_eee = ppe_set_mac_eee,
+ .get_mac_eee = ppe_get_mac_eee,
.set_maxframe = ppe_port_maxframe_set,
};

@@ -2340,6 +3035,9 @@ static int qcom_ppe_remove(struct platform_device *pdev)
ppe_debugfs_teardown(ppe_dev);

for (i = 0; i < ppe_ports->num; i++) {
+ /* Stop gmib statistics polling */
+ cancel_delayed_work_sync(&ppe_ports->port[i].gmib_read);
+
/* Reset ppe port parent clock to XO clock */
port = ppe_ports->port[i].port_id;
clk_set_rate(ppe_dev_priv->clk[PPE_NSS_PORT1_RX_CLK + ((port - 1) << 1)],
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.h b/drivers/net/ethernet/qualcomm/ppe/ppe.h
index 532d53c05bf9..5c43d7c19d98 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.h
@@ -183,6 +183,101 @@ struct ppe_scheduler_port_resource {
int l1edrr[2];
};

+/* PPE GMAC statistics */
+struct ppe_gmib_hw_stats {
+ u64 rx_broadcast;
+ u64 rx_pause;
+ u64 rx_multicast;
+ u64 rx_fcserr;
+ u64 rx_alignerr;
+ u64 rx_runt;
+ u64 rx_frag;
+ u64 rx_jmbfcserr;
+ u64 rx_jmbalignerr;
+ u64 rx_pkt64;
+ u64 rx_pkt65to127;
+ u64 rx_pkt128to255;
+ u64 rx_pkt256to511;
+ u64 rx_pkt512to1023;
+ u64 rx_pkt1024to1518;
+ u64 rx_pkt1519tomax;
+ u64 rx_toolong;
+ u64 rx_pktgoodbyte;
+ u64 rx_pktbadbyte;
+ u64 rx_unicast;
+ u64 tx_broadcast;
+ u64 tx_pause;
+ u64 tx_multicast;
+ u64 tx_underrun;
+ u64 tx_pkt64;
+ u64 tx_pkt65to127;
+ u64 tx_pkt128to255;
+ u64 tx_pkt256to511;
+ u64 tx_pkt512to1023;
+ u64 tx_pkt1024to1518;
+ u64 tx_pkt1519tomax;
+ u64 tx_pktbyte;
+ u64 tx_collisions;
+ u64 tx_abortcol;
+ u64 tx_multicol;
+ u64 tx_singlecol;
+ u64 tx_exesdeffer;
+ u64 tx_deffer;
+ u64 tx_latecol;
+ u64 tx_unicast;
+};
+
+/* PPE XGMAC statistics */
+struct ppe_xgmib_hw_stats {
+ u64 tx_bytes;
+ u64 tx_frames;
+ u64 tx_broadcast_g;
+ u64 tx_multicast_g;
+ u64 tx_pkt64;
+ u64 tx_pkt65to127;
+ u64 tx_pkt128to255;
+ u64 tx_pkt256to511;
+ u64 tx_pkt512to1023;
+ u64 tx_pkt1024tomax;
+ u64 tx_unicast;
+ u64 tx_multicast;
+ u64 tx_broadcast;
+ u64 tx_underflow_err;
+ u64 tx_bytes_g;
+ u64 tx_frame_g;
+ u64 tx_pause;
+ u64 tx_vlan_g;
+ u64 tx_lpi_usec;
+ u64 tx_lpi_tran;
+ u64 rx_frames;
+ u64 rx_bytes;
+ u64 rx_bytes_g;
+ u64 rx_broadcast_g;
+ u64 rx_multicast_g;
+ u64 rx_crc_err;
+ u64 rx_runt_err;
+ u64 rx_jabber_err;
+ u64 rx_undersize_g;
+ u64 rx_oversize_g;
+ u64 rx_pkt64;
+ u64 rx_pkt65to127;
+ u64 rx_pkt128to255;
+ u64 rx_pkt256to511;
+ u64 rx_pkt512to1023;
+ u64 rx_pkt1024tomax;
+ u64 rx_unicast_g;
+ u64 rx_len_err;
+ u64 rx_outofrange_err;
+ u64 rx_pause;
+ u64 rx_fifo_overflow;
+ u64 rx_vlan;
+ u64 rx_wdog_err;
+ u64 rx_lpi_usec;
+ u64 rx_lpi_tran;
+ u64 rx_drop_frames;
+ u64 rx_drop_bytes;
+};
+
/* PPE per port data type to record port settings such as phylink
* setting, mac type, interface mode and link speed.
*/
@@ -197,6 +292,9 @@ struct ppe_port {
int speed;
int duplex;
int pause;
+ struct delayed_work gmib_read;
+ struct ppe_gmib_hw_stats *gmib_stats;
+ spinlock_t stats_lock; /* Protects gmib stats */
};

/* PPE ports data type */
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index 43cd067c8c73..242ed494bcfc 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -17,6 +17,15 @@
#define PPE_PORT5_PCS_SEL BIT(4)
#define PPE_PORT_MAC_SEL(x) (PPE_PORT1_MAC_SEL << ((x) - 1))

+#define PPE_LPI_LPI_EN 0x400
+#define PPE_LPI_PORT1_EN BIT(0)
+#define PPE_LPI_PORT2_EN BIT(1)
+#define PPE_LPI_PORT3_EN BIT(2)
+#define PPE_LPI_PORT4_EN BIT(3)
+#define PPE_LPI_PORT5_EN BIT(4)
+#define PPE_LPI_PORT6_EN BIT(5)
+#define PPE_LPI_PORT_EN(x) (PPE_LPI_PORT1_EN << ((x) - 1))
+
#define PPE_BM_TDM_CTRL 0xb000
#define PPE_BM_TDM_CTRL_NUM 1
#define PPE_BM_TDM_CTRL_INC 4
@@ -848,6 +857,16 @@ union ppe_ac_grp_cfg_u {
#define GMAC_SPEED_100 1
#define GMAC_SPEED_1000 2

+#define GMAC_GOL_MAC_ADDR0 0x8
+#define MAC_ADDR_BYTE5 GENMASK(15, 8)
+#define MAC_ADDR_BYTE4 GENMASK(7, 0)
+
+#define GMAC_GOL_MAC_ADDR1 0xC
+#define MAC_ADDR_BYTE0 GENMASK(31, 24)
+#define MAC_ADDR_BYTE1 GENMASK(23, 16)
+#define MAC_ADDR_BYTE2 GENMASK(15, 8)
+#define MAC_ADDR_BYTE3 GENMASK(7, 0)
+
#define GMAC_MAC_CTRL2 0x18
#define GMAC_TX_THD_MASK GENMASK(27, 24)
#define GMAC_MAXFR_MASK GENMASK(21, 8)
@@ -871,6 +890,50 @@ union ppe_ac_grp_cfg_u {
#define MAC_MIB_RESET BIT(1)
#define MAC_MIB_EN BIT(0)

+#define GMAC_RXBROAD 0x40
+#define GMAC_RXPAUSE 0x44
+#define GMAC_RXMULTI 0x48
+#define GMAC_RXFCSERR 0x4C
+#define GMAC_RXALIGNERR 0x50
+#define GMAC_RXRUNT 0x54
+#define GMAC_RXFRAG 0x58
+#define GMAC_RXJUMBOFCSERR 0x5C
+#define GMAC_RXJUMBOALIGNERR 0x60
+#define GMAC_RXPKT64 0x64
+#define GMAC_RXPKT65TO127 0x68
+#define GMAC_RXPKT128TO255 0x6C
+#define GMAC_RXPKT256TO511 0x70
+#define GMAC_RXPKT512TO1023 0x74
+#define GMAC_RXPKT1024TO1518 0x78
+#define GMAC_RXPKT1519TOX 0x7C
+#define GMAC_RXTOOLONG 0x80
+#define GMAC_RXGOODBYTE_L 0x84
+#define GMAC_RXGOODBYTE_H 0x88
+#define GMAC_RXBADBYTE_L 0x8C
+#define GMAC_RXBADBYTE_H 0x90
+#define GMAC_RXUNI 0x94
+#define GMAC_TXBROAD 0xA0
+#define GMAC_TXPAUSE 0xA4
+#define GMAC_TXMULTI 0xA8
+#define GMAC_TXUNDERRUN 0xAC
+#define GMAC_TXPKT64 0xB0
+#define GMAC_TXPKT65TO127 0xB4
+#define GMAC_TXPKT128TO255 0xB8
+#define GMAC_TXPKT256TO511 0xBC
+#define GMAC_TXPKT512TO1023 0xC0
+#define GMAC_TXPKT1024TO1518 0xC4
+#define GMAC_TXPKT1519TOX 0xC8
+#define GMAC_TXBYTE_L 0xCC
+#define GMAC_TXBYTE_H 0xD0
+#define GMAC_TXCOLLISIONS 0xD4
+#define GMAC_TXABORTCOL 0xD8
+#define GMAC_TXMULTICOL 0xDC
+#define GMAC_TXSINGLECOL 0xE0
+#define GMAC_TXEXCESSIVEDEFER 0xE4
+#define GMAC_TXDEFER 0xE8
+#define GMAC_TXLATECOL 0xEC
+#define GMAC_TXUNI 0xF0
+
/* XGMAC Registers */
#define XGMAC_TX_CONFIGURATION 0x0
#define XGMAC_SPEED_MASK GENMASK(31, 29)
@@ -927,8 +990,117 @@ union ppe_ac_grp_cfg_u {
#define XGMAC_RX_FLOW_CTRL 0x90
#define XGMAC_RFE BIT(0)

+#define XGMAC_LPI_CONTROL_STATUS 0xd0
+#define XGMAC_LPI_TXEN BIT(16)
+#define XGMAC_LPI_PLS BIT(17)
+#define XGMAC_LPI_TXA BIT(19)
+#define XGMAC_LPI_TE BIT(20)
+
+#define XGMAC_LPI_TIMERS_CONTROL 0xd4
+#define XGMAC_LPI_TWT GENMASK(15, 0)
+#define XGMAC_LPI_LST GENMASK(25, 16)
+
+#define XGMAC_LPI_AUTO_ENTRY_TIMER 0xd8
+#define XGMAC_LPI_ET GENMASK(19, 3)
+
+#define XGMAC_1US_TIC_COUNTER 0xdc
+#define XGMAC_1US_TIC_CNTR GENMASK(11, 0)
+
+#define XGMAC_MAC_ADDR0_HIGH 0x300
+#define XGMAC_ADDR_EN BIT(31)
+#define XGMAC_ADDRHI GENMASK(15, 0)
+
+#define XGMAC_MAC_ADDR0_LOW 0x304
+#define XGMAC_ADDRLO GENMASK(31, 0)
+
#define XGMAC_MMC_CONTROL 0x800
#define XGMAC_MCF BIT(3)
#define XGMAC_CNTRST BIT(0)

+#define XGMAC_TX_OCTET_COUNT_GOOD_BAD_LOW 0x814
+#define XGMAC_TX_OCTET_COUNT_GOOD_BAD_HIGH 0x818
+#define XGMAC_TX_FRAME_COUNT_GOOD_BAD_LOW 0x81C
+#define XGMAC_TX_FRAME_COUNT_GOOD_BAD_HIGH 0x820
+#define XGMAC_TX_BROADCAST_FRAMES_GOOD_LOW 0x824
+#define XGMAC_TX_BROADCAST_FRAMES_GOOD_HIGH 0x828
+#define XGMAC_TX_MULTICAST_FRAMES_GOOD_LOW 0x82C
+#define XGMAC_TX_MULTICAST_FRAMES_GOOD_HIGH 0x830
+#define XGMAC_TX_64OCTETS_FRAMES_GOOD_BAD_LOW 0x834
+#define XGMAC_TX_64OCTETS_FRAMES_GOOD_BAD_HIGH 0x838
+#define XGMAC_TX_65TO127OCTETS_FRAMES_GOOD_BAD_LOW 0x83C
+#define XGMAC_TX_65TO127OCTETS_FRAMES_GOOD_BAD_HIGH 0x840
+#define XGMAC_TX_128TO255OCTETS_FRAMES_GOOD_BAD_LOW 0x844
+#define XGMAC_TX_128TO255OCTETS_FRAMES_GOOD_BAD_HIGH 0x848
+#define XGMAC_TX_256TO511OCTETS_FRAMES_GOOD_BAD_LOW 0x84C
+#define XGMAC_TX_256TO511OCTETS_FRAMES_GOOD_BAD_HIGH 0x850
+#define XGMAC_TX_512TO1023OCTETS_FRAMES_GOOD_BAD_LOW 0x854
+#define XGMAC_TX_512TO1023OCTETS_FRAMES_GOOD_BAD_HIGH 0x858
+#define XGMAC_TX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_LOW 0x85C
+#define XGMAC_TX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_HIGH 0x860
+#define XGMAC_TX_UNICAST_FRAMES_GOOD_BAD_LOW 0x864
+#define XGMAC_TX_UNICAST_FRAMES_GOOD_BAD_HIGH 0x868
+#define XGMAC_TX_MULTICAST_FRAMES_GOOD_BAD_LOW 0x86C
+#define XGMAC_TX_MULTICAST_FRAMES_GOOD_BAD_HIGH 0x870
+#define XGMAC_TX_BROADCAST_FRAMES_GOOD_BAD_LOW 0x874
+#define XGMAC_TX_BROADCAST_FRAMES_GOOD_BAD_HIGH 0x878
+#define XGMAC_TX_UNDERFLOW_ERROR_FRAMES_LOW 0x87C
+#define XGMAC_TX_UNDERFLOW_ERROR_FRAMES_HIGH 0x880
+#define XGMAC_TX_OCTET_COUNT_GOOD_LOW 0x884
+#define XGMAC_TX_OCTET_COUNT_GOOD_HIGH 0x888
+#define XGMAC_TX_FRAME_COUNT_GOOD_LOW 0x88C
+#define XGMAC_TX_FRAME_COUNT_GOOD_HIGH 0x890
+#define XGMAC_TX_PAUSE_FRAMES_LOW 0x894
+#define XGMAC_TX_PAUSE_FRAMES_HIGH 0x898
+#define XGMAC_TX_VLAN_FRAMES_GOOD_LOW 0x89C
+#define XGMAC_TX_VLAN_FRAMES_GOOD_HIGH 0x8A0
+#define XGMAC_TX_LPI_USEC_CNTR 0x8A4
+#define XGMAC_TX_LPI_TRAN_CNTR 0x8A8
+#define XGMAC_RX_FRAME_COUNT_GOOD_BAD_LOW 0x900
+#define XGMAC_RX_FRAME_COUNT_GOOD_BAD_HIGH 0x904
+#define XGMAC_RX_OCTET_COUNT_GOOD_BAD_LOW 0x908
+#define XGMAC_RX_OCTET_COUNT_GOOD_BAD_HIGH 0x90C
+#define XGMAC_RX_OCTET_COUNT_GOOD_LOW 0x910
+#define XGMAC_RX_OCTET_COUNT_GOOD_HIGH 0x914
+#define XGMAC_RX_BROADCAST_FRAMES_GOOD_LOW 0x918
+#define XGMAC_RX_BROADCAST_FRAMES_GOOD_HIGH 0x91C
+#define XGMAC_RX_MULTICAST_FRAMES_GOOD_LOW 0x920
+#define XGMAC_RX_MULTICAST_FRAMES_GOOD_HIGH 0x924
+#define XGMAC_RX_CRC_ERROR_FRAMES_LOW 0x928
+#define XGMAC_RX_CRC_ERROR_FRAMES_HIGH 0x92C
+#define XGMAC_RX_FRAG_ERROR_FRAMES 0x930
+#define XGMAC_RX_JABBER_ERROR_FRAMES 0x934
+#define XGMAC_RX_UNDERSIZE_FRAMES_GOOD 0x938
+#define XGMAC_RX_OVERSIZE_FRAMES_GOOD 0x93C
+#define XGMAC_RX_64OCTETS_FRAMES_GOOD_BAD_LOW 0x940
+#define XGMAC_RX_64OCTETS_FRAMES_GOOD_BAD_HIGH 0x944
+#define XGMAC_RX_65TO127OCTETS_FRAMES_GOOD_BAD_LOW 0x948
+#define XGMAC_RX_65TO127OCTETS_FRAMES_GOOD_BAD_HIGH 0x94C
+#define XGMAC_RX_128TO255OCTETS_FRAMES_GOOD_BAD_LOW 0x950
+#define XGMAC_RX_128TO255OCTETS_FRAMES_GOOD_BAD_HIGH 0x954
+#define XGMAC_RX_256TO511OCTETS_FRAMES_GOOD_BAD_LOW 0x958
+#define XGMAC_RX_256TO511OCTETS_FRAMES_GOOD_BAD_HIGH 0x95C
+#define XGMAC_RX_512TO1023OCTETS_FRAMES_GOOD_BAD_LOW 0x960
+#define XGMAC_RX_512TO1023OCTETS_FRAMES_GOOD_BAD_HIGH 0x964
+#define XGMAC_RX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_LOW 0x968
+#define XGMAC_RX_1024TOMAXOCTETS_FRAMES_GOOD_BAD_HIGH 0x96C
+#define XGMAC_RX_UNICAST_FRAMES_GOOD_LOW 0x970
+#define XGMAC_RX_UNICAST_FRAMES_GOOD_HIGH 0x974
+#define XGMAC_RX_LENGTH_ERROR_FRAMES_LOW 0x978
+#define XGMAC_RX_LENGTH_ERROR_FRAMES_HIGH 0x97C
+#define XGMAC_RX_OUTOFRANGE_FRAMES_LOW 0x980
+#define XGMAC_RX_OUTOFRANGE_FRAMES_HIGH 0x984
+#define XGMAC_RX_PAUSE_FRAMES_LOW 0x988
+#define XGMAC_RX_PAUSE_FRAMES_HIGH 0x98C
+#define XGMAC_RX_FIFOOVERFLOW_FRAMES_LOW 0x990
+#define XGMAC_RX_FIFOOVERFLOW_FRAMES_HIGH 0x994
+#define XGMAC_RX_VLAN_FRAMES_GOOD_BAD_LOW 0x998
+#define XGMAC_RX_VLAN_FRAMES_GOOD_BAD_HIGH 0x99C
+#define XGMAC_RX_WATCHDOG_ERROR_FRAMES 0x9A0
+#define XGMAC_RX_LPI_USEC_CNTR 0x9A4
+#define XGMAC_RX_LPI_TRAN_CNTR 0x9A8
+#define XGMAC_RX_DISCARD_FRAME_COUNT_GOOD_BAD_LOW 0x9AC
+#define XGMAC_RX_DISCARD_FRAME_COUNT_GOOD_BAD_HIGH 0x9B0
+#define XGMAC_RX_DISCARD_OCTET_COUNT_GOOD_BAD_LOW 0x9B4
+#define XGMAC_RX_DISCARD_OCTET_COUNT_GOOD_BAD_HIGH 0x9B8
+
#endif
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
index 40e69a262650..8f3652675ce3 100644
--- a/include/linux/soc/qcom/ppe.h
+++ b/include/linux/soc/qcom/ppe.h
@@ -10,6 +10,7 @@

#include <linux/platform_device.h>
#include <linux/phylink.h>
+#include <linux/if_link.h>

/* PPE platform private data, which is used by external driver like
* Ethernet DMA driver.
@@ -57,6 +58,35 @@ struct ppe_device_ops {
struct phylink_pcs *(*phylink_mac_select_pcs)(struct ppe_device *ppe_dev,
int port,
phy_interface_t interface);
+ /*
+ * Port statistics counters
+ */
+ void (*get_stats64)(struct ppe_device *ppe_dev,
+ int port,
+ struct rtnl_link_stats64 *s);
+ void (*get_strings)(struct ppe_device *ppe_dev,
+ int port,
+ u32 stringset,
+ u8 *data);
+ int (*get_sset_count)(struct ppe_device *ppe_dev,
+ int port,
+ int sset);
+ void (*get_ethtool_stats)(struct ppe_device *ppe_dev,
+ int port,
+ u64 *data);
+ /*
+ * Port MAC address setting
+ */
+ int (*set_mac_address)(struct ppe_device *ppe_dev,
+ int port,
+ u8 *macaddr);
+ /*
+ * Port MAC EEE settings
+ */
+ int (*set_mac_eee)(struct ppe_device *ppe_dev, int port,
+ struct ethtool_eee *eee);
+ int (*get_mac_eee)(struct ppe_device *ppe_dev, int port,
+ struct ethtool_eee *eee);
/*
* Port maximum frame size setting
*/
--
2.42.0


2024-01-10 11:51:15

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 20/20] arm64: defconfig: Enable qcom PPE driver

Enable the qcom PPE driver, which is used on Qualcomm IPQ SoCs.

Signed-off-by: Luo Jie <[email protected]>
---
arch/arm64/configs/defconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index b60aa1f89343..2be2aea9da2a 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -283,6 +283,7 @@ CONFIG_VIRTIO_BLK=y
CONFIG_BLK_DEV_NVME=m
CONFIG_QCOM_COINCELL=m
CONFIG_QCOM_FASTRPC=m
+CONFIG_QCOM_PPE=m
CONFIG_BATTERY_QCOM_BATTMGR=m
CONFIG_UCSI_PMIC_GLINK=m
CONFIG_SRAM=y
--
2.42.0


2024-01-10 11:51:22

by Luo Jie

[permalink] [raw]
Subject: [PATCH net-next 13/20] net: ethernet: qualcomm: Export PPE function set_maxframe

set_maxframe is called when the MTU of the interface is configured; it
limits the size of packets passed through the PPE.
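The size programmed into the PPE MRU/MTU tables is the interface MTU plus the Ethernet header length; a minimal standalone sketch of that arithmetic (using ETH_HLEN's usual value of 14 from <linux/if_ether.h>):

```c
#include <assert.h>

#define ETH_HLEN 14	/* Ethernet header length, as in <linux/if_ether.h> */

/* The PPE max frame size is the configured MTU plus the header. */
static int ppe_maxframe_from_mtu(int mtu)
{
	return mtu + ETH_HLEN;
}
```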

Signed-off-by: Luo Jie <[email protected]>
---
drivers/net/ethernet/qualcomm/ppe/ppe.c | 41 +++++++++++++++++++++++++
include/linux/soc/qcom/ppe.h | 12 ++++++++
2 files changed, 53 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe.c b/drivers/net/ethernet/qualcomm/ppe/ppe.c
index 746ef42fea5d..d0e0fa9d5609 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe.c
@@ -12,6 +12,7 @@
#include <linux/of.h>
#include <linux/regmap.h>
#include <linux/platform_device.h>
+#include <linux/if_ether.h>
#include <linux/soc/qcom/ppe.h>
#include "ppe.h"
#include "ppe_regs.h"
@@ -293,6 +294,45 @@ struct ppe_device *ppe_dev_get(struct platform_device *pdev)
}
EXPORT_SYMBOL_GPL(ppe_dev_get);

+struct ppe_device_ops *ppe_ops_get(struct platform_device *pdev)
+{
+ struct ppe_device *ppe_dev = platform_get_drvdata(pdev);
+
+ if (!ppe_dev)
+ return NULL;
+
+ return ppe_dev->ppe_ops;
+}
+EXPORT_SYMBOL_GPL(ppe_ops_get);
+
+static int ppe_port_maxframe_set(struct ppe_device *ppe_dev,
+ int port, int maxframe_size)
+{
+ union ppe_mru_mtu_ctrl_cfg_u mru_mtu_cfg;
+
+ /* The PPE max frame size is the MTU plus ETH_HLEN */
+ maxframe_size += ETH_HLEN;
+
+ if (port < PPE_MC_MTU_CTRL_TBL_NUM)
+ ppe_mask(ppe_dev, PPE_MC_MTU_CTRL_TBL + PPE_MC_MTU_CTRL_TBL_INC * port,
+ PPE_MC_MTU_CTRL_TBL_MTU,
+ FIELD_PREP(PPE_MC_MTU_CTRL_TBL_MTU, maxframe_size));
+
+ memset(&mru_mtu_cfg, 0, sizeof(mru_mtu_cfg));
+ ppe_read_tbl(ppe_dev, PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * port,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+
+ mru_mtu_cfg.bf.mru = maxframe_size;
+ mru_mtu_cfg.bf.mtu = maxframe_size;
+
+ return ppe_write_tbl(ppe_dev, PPE_MRU_MTU_CTRL_TBL + PPE_MRU_MTU_CTRL_TBL_INC * port,
+ mru_mtu_cfg.val, sizeof(mru_mtu_cfg.val));
+}
+
+static struct ppe_device_ops qcom_ppe_ops = {
+ .set_maxframe = ppe_port_maxframe_set,
+};
+
static const struct regmap_range ppe_readable_ranges[] = {
regmap_reg_range(0x0, 0x1FF), /* GLB */
regmap_reg_range(0x400, 0x5FF), /* LPI CSR */
@@ -1286,6 +1326,7 @@ static int qcom_ppe_probe(struct platform_device *pdev)
ret,
"ppe device hw init failed\n");

+ ppe_dev->ppe_ops = &qcom_ppe_ops;
ppe_dev->is_ppe_probed = true;
return 0;
}
diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
index 90566a8841b4..70ee192d9ef0 100644
--- a/include/linux/soc/qcom/ppe.h
+++ b/include/linux/soc/qcom/ppe.h
@@ -16,13 +16,25 @@
struct ppe_device {
struct device *dev;
struct regmap *regmap;
+ struct ppe_device_ops *ppe_ops;
bool is_ppe_probed;
void *ppe_priv;
};

+/* PPE operations, used by external drivers such as the Ethernet
+ * DMA driver to configure the PPE.
+ */
+struct ppe_device_ops {
+ int (*set_maxframe)(struct ppe_device *ppe_dev, int port,
+ int maxframe_size);
+};
+
/* Function used to check whether the PPE platform driver is registered. */
bool ppe_is_probed(struct platform_device *pdev);

/* Function used to get the PPE device */
struct ppe_device *ppe_dev_get(struct platform_device *pdev);
+
+/* Function used to get the operations of PPE device */
+struct ppe_device_ops *ppe_ops_get(struct platform_device *pdev);
#endif
--
2.42.0


2024-01-10 12:12:32

by Russell King (Oracle)

[permalink] [raw]
Subject: Re: [PATCH net-next 17/20] net: ethernet: qualcomm: Add PPE UNIPHY support for phylink

On Wed, Jan 10, 2024 at 07:40:29PM +0800, Luo Jie wrote:
> +static int clk_uniphy_set_rate(struct clk_hw *hw, unsigned long rate,
> + unsigned long parent_rate)
> +{
> + struct clk_uniphy *uniphy = to_clk_uniphy(hw);
> +
> + if (rate != UNIPHY_CLK_RATE_125M && rate != UNIPHY_CLK_RATE_312P5M)
> + return -1;

Sigh. I get very annoyed by stuff like this. It's lazy programming,
and makes me wonder why I should be bothered to spend time reviewing if
the programmer can't be bothered to pay attention to details. It makes
me wonder what else is done lazily in the patch.

-1 is -EPERM "Operation not permitted". This is highly likely not an
appropriate error code for this code.
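A sketch of the corrected check, assuming -EINVAL is the intended error for an unsupported rate (the rate constants mirror the patch's names; the exact values come from the driver header):

```c
#include <errno.h>

#define UNIPHY_CLK_RATE_125M	125000000UL
#define UNIPHY_CLK_RATE_312P5M	312500000UL

/* Return a real errno value instead of a bare -1 (-EPERM). */
static int clk_uniphy_check_rate(unsigned long rate)
{
	if (rate != UNIPHY_CLK_RATE_125M && rate != UNIPHY_CLK_RATE_312P5M)
		return -EINVAL;
	return 0;
}
```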

> +int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy, int port)
> +{
> + u32 reg, val;
> + int channel, ret;
> +
> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
> + /* Only uniphy0 may have multi channels */
> + channel = (uniphy->index == 0) ? (port - 1) : 0;
> + reg = (channel == 0) ? VR_MII_AN_INTR_STS_ADDR :
> + VR_MII_AN_INTR_STS_CHANNEL_ADDR(channel);
> +
> + /* Wait auto negotiation complete */
> + ret = read_poll_timeout(ppe_uniphy_read, val,
> + (val & CL37_ANCMPLT_INTR),
> + 1000, 100000, true,
> + uniphy, reg);
> + if (ret) {
> + dev_err(uniphy->ppe_dev->dev,
> + "uniphy %d auto negotiation timeout\n", uniphy->index);
> + return ret;
> + }
> +
> + /* Clear auto negotiation complete interrupt */
> + ppe_uniphy_mask(uniphy, reg, CL37_ANCMPLT_INTR, 0);
> + }
> +
> + return 0;
> +}

Why is this necessary? Why is it callable outside this file? Shouldn't
this be done in the .pcs_get_state method? If negotiation hasn't
completed (and negotiation is being used) then .pcs_get_state should not
report that the link is up.
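In phylink terms, .pcs_get_state simply reflects the hardware: the link stays reported as down until the AN-complete bit is set, with no blocking poll anywhere. A standalone sketch of that predicate (CL37_ANCMPLT_INTR is bit 0, per the patch):

```c
#include <stdint.h>

#define CL37_ANCMPLT_INTR	(1U << 0)

/* Link is reported up only once Clause 37 AN has completed;
 * no blocking poll is needed in the link-up path. */
static int pcs_an_link_up(uint32_t an_intr_sts)
{
	return !!(an_intr_sts & CL37_ANCMPLT_INTR);
}
```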

> +
> +int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy, int port, int speed)
> +{
> + u32 reg, val;
> + int channel;
> +
> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
> + /* Only uniphy0 may have multiple channels */
> + channel = (uniphy->index == 0) ? (port - 1) : 0;
> +
> + reg = (channel == 0) ? SR_MII_CTRL_ADDR :
> + SR_MII_CTRL_CHANNEL_ADDR(channel);
> +
> + switch (speed) {
> + case SPEED_100:
> + val = USXGMII_SPEED_100;
> + break;
> + case SPEED_1000:
> + val = USXGMII_SPEED_1000;
> + break;
> + case SPEED_2500:
> + val = USXGMII_SPEED_2500;
> + break;
> + case SPEED_5000:
> + val = USXGMII_SPEED_5000;
> + break;
> + case SPEED_10000:
> + val = USXGMII_SPEED_10000;
> + break;
> + case SPEED_10:
> + val = USXGMII_SPEED_10;
> + break;
> + default:
> + val = 0;
> + break;
> + }
> +
> + ppe_uniphy_mask(uniphy, reg, USXGMII_SPEED_MASK, val);
> + }
> +
> + return 0;
> +}
> +
> +int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy, int port, int duplex)
> +{
> + u32 reg;
> + int channel;
> +
> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII &&
> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
> + /* Only uniphy0 may have multiple channels */
> + channel = (uniphy->index == 0) ? (port - 1) : 0;
> +
> + reg = (channel == 0) ? SR_MII_CTRL_ADDR :
> + SR_MII_CTRL_CHANNEL_ADDR(channel);
> +
> + ppe_uniphy_mask(uniphy, reg, USXGMII_DUPLEX_FULL,
> + (duplex == DUPLEX_FULL) ? USXGMII_DUPLEX_FULL : 0);
> + }
> +
> + return 0;
> +}

What calls the above two functions? Surely this should be called from
the .pcs_link_up method? I would also imagine that you call each of
these consecutively. So why not modify the register in one go rather
than piecemeal like this. I'm not a fan of one-function-to-control-one-
parameter-in-a-register style when it results in more register accesses
than are really necessary.
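The two updates could be folded into one read-modify-write by building the combined bit value first. A sketch using the USXGMII bit definitions quoted in this patch (the helper name is illustrative):

```c
#include <stdint.h>

#define USXGMII_DUPLEX_FULL	(1U << 8)
#define USXGMII_SPEED_10000	((1U << 13) | (1U << 6))
#define USXGMII_SPEED_1000	(1U << 6)
#define USXGMII_SPEED_100	(1U << 13)
#define USXGMII_SPEED_10	0

/* Build the speed and duplex bits together so the caller can
 * update SR_MII_CTRL with a single masked write. */
static uint32_t usxgmii_link_bits(int speed, int duplex_full)
{
	uint32_t val;

	switch (speed) {
	case 10000: val = USXGMII_SPEED_10000; break;
	case 1000:  val = USXGMII_SPEED_1000;  break;
	case 100:   val = USXGMII_SPEED_100;   break;
	default:    val = USXGMII_SPEED_10;    break;
	}

	if (duplex_full)
		val |= USXGMII_DUPLEX_FULL;

	return val;
}
```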

> +static void ppe_pcs_get_state(struct phylink_pcs *pcs,
> + struct phylink_link_state *state)
> +{
> + struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
> + u32 val;
> +
> + switch (state->interface) {
> + case PHY_INTERFACE_MODE_10GBASER:
> + val = ppe_uniphy_read(uniphy, SR_XS_PCS_KR_STS1_ADDR);
> + state->link = (val & SR_XS_PCS_KR_STS1_PLU) ? 1 : 0;

Unnecessary ternary operation.

state->link = !!(val & SR_XS_PCS_KR_STS1_PLU);

> + state->duplex = DUPLEX_FULL;
> + state->speed = SPEED_10000;
> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);

Excessive parens.

> + break;
> + case PHY_INTERFACE_MODE_2500BASEX:
> + val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
> + state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;

Ditto.

> + state->duplex = DUPLEX_FULL;
> + state->speed = SPEED_2500;
> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);

Ditto.

> + break;
> + case PHY_INTERFACE_MODE_1000BASEX:
> + case PHY_INTERFACE_MODE_SGMII:
> + val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
> + state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;
> + state->duplex = (val & NEWADDEDFROMHERE_CH0_DUPLEX_MODE_MAC) ?
> + DUPLEX_FULL : DUPLEX_HALF;
> + if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_10M)
> + state->speed = SPEED_10;
> + else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_100M)
> + state->speed = SPEED_100;
> + else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_1000M)
> + state->speed = SPEED_1000;

Looks like a switch(FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val)
would be better here. Also "NEWADDEDFROMHERE" ?
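The chained FIELD_GET comparisons could indeed become a single field extraction plus a switch. A standalone sketch (the 0/1/2 field encodings here are assumptions — the real values come from the uniphy register header):

```c
#include <stdint.h>

/* Assumed encodings for the CH0 speed field; check the real header. */
enum uniphy_speed_field {
	UNIPHY_SPEED_10M = 0,
	UNIPHY_SPEED_100M = 1,
	UNIPHY_SPEED_1000M = 2,
};

static int uniphy_field_to_speed(unsigned int field)
{
	switch (field) {
	case UNIPHY_SPEED_10M:   return 10;
	case UNIPHY_SPEED_100M:  return 100;
	case UNIPHY_SPEED_1000M: return 1000;
	default:                 return -1;	/* unknown */
	}
}
```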

> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);

Ditto.

As you make no differentiation between 1000base-X and SGMII, I question
whether your hardware supports 1000base-X. I seem to recall in previous
discussions that it doesn't. So, that means it doesn't support the
inband negotiation word format for 1000base-X. Thus, 1000base-X should
not be included in any of these switch statements, and 1000base-X won't
be usable.

> +/* [register] UNIPHY_MODE_CTRL */
> +#define UNIPHY_MODE_CTRL_ADDR 0x46c
> +#define NEWADDEDFROMHERE_CH0_AUTONEG_MODE BIT(0)
> +#define NEWADDEDFROMHERE_CH1_CH0_SGMII BIT(1)
> +#define NEWADDEDFROMHERE_CH4_CH1_0_SGMII BIT(2)
> +#define NEWADDEDFROMHERE_SGMII_EVEN_LOW BIT(3)
> +#define NEWADDEDFROMHERE_CH0_MODE_CTRL_25M GENMASK(6, 4)
> +#define NEWADDEDFROMHERE_CH0_QSGMII_SGMII BIT(8)
> +#define NEWADDEDFROMHERE_CH0_PSGMII_QSGMII BIT(9)
> +#define NEWADDEDFROMHERE_SG_MODE BIT(10)
> +#define NEWADDEDFROMHERE_SGPLUS_MODE BIT(11)
> +#define NEWADDEDFROMHERE_XPCS_MODE BIT(12)
> +#define NEWADDEDFROMHERE_USXG_EN BIT(13)
> +#define NEWADDEDFROMHERE_SW_V17_V18 BIT(15)

Again, why "NEWADDEDFROMHERE" ?

> +/* [register] VR_XS_PCS_EEE_MCTRL0 */
> +#define VR_XS_PCS_EEE_MCTRL0_ADDR 0x38006
> +#define LTX_EN BIT(0)
> +#define LRX_EN BIT(1)
> +#define SIGN_BIT BIT(6)

"SIGN_BIT" is likely too generic a name.

> +#define MULT_FACT_100NS GENMASK(11, 8)
> +
> +/* [register] VR_XS_PCS_KR_CTRL */
> +#define VR_XS_PCS_KR_CTRL_ADDR 0x38007
> +#define USXG_MODE GENMASK(12, 10)
> +#define QUXGMII_MODE (FIELD_PREP(USXG_MODE, 0x5))
> +
> +/* [register] VR_XS_PCS_EEE_TXTIMER */
> +#define VR_XS_PCS_EEE_TXTIMER_ADDR 0x38008
> +#define TSL_RES GENMASK(5, 0)
> +#define T1U_RES GENMASK(7, 6)
> +#define TWL_RES GENMASK(12, 8)
> +#define UNIPHY_XPCS_TSL_TIMER (FIELD_PREP(TSL_RES, 0xa))
> +#define UNIPHY_XPCS_T1U_TIMER (FIELD_PREP(TSL_RES, 0x3))
> +#define UNIPHY_XPCS_TWL_TIMER (FIELD_PREP(TSL_RES, 0x16))
> +
> +/* [register] VR_XS_PCS_EEE_RXTIMER */
> +#define VR_XS_PCS_EEE_RXTIMER_ADDR 0x38009
> +#define RES_100U GENMASK(7, 0)
> +#define TWR_RES GENMASK(13, 8)
> +#define UNIPHY_XPCS_100US_TIMER (FIELD_PREP(RES_100U, 0xc8))
> +#define UNIPHY_XPCS_TWR_TIMER (FIELD_PREP(RES_100U, 0x1c))
> +
> +/* [register] VR_XS_PCS_DIG_STS */
> +#define VR_XS_PCS_DIG_STS_ADDR 0x3800a
> +#define AM_COUNT GENMASK(14, 0)
> +#define QUXGMII_AM_COUNT (FIELD_PREP(AM_COUNT, 0x6018))
> +
> +/* [register] VR_XS_PCS_EEE_MCTRL1 */
> +#define VR_XS_PCS_EEE_MCTRL1_ADDR 0x3800b
> +#define TRN_LPI BIT(0)
> +#define TRN_RXLPI BIT(8)
> +
> +/* [register] VR_MII_1_DIG_CTRL1 */
> +#define VR_MII_DIG_CTRL1_CHANNEL1_ADDR 0x1a8000
> +#define VR_MII_DIG_CTRL1_CHANNEL2_ADDR 0x1b8000
> +#define VR_MII_DIG_CTRL1_CHANNEL3_ADDR 0x1c8000
> +#define VR_MII_DIG_CTRL1_CHANNEL_ADDR(x) (0x1a8000 + 0x10000 * ((x) - 1))
> +#define CHANNEL_USRA_RST BIT(5)
> +
> +/* [register] VR_MII_AN_CTRL */
> +#define VR_MII_AN_CTRL_ADDR 0x1f8001
> +#define VR_MII_AN_CTRL_CHANNEL1_ADDR 0x1a8001
> +#define VR_MII_AN_CTRL_CHANNEL2_ADDR 0x1b8001
> +#define VR_MII_AN_CTRL_CHANNEL3_ADDR 0x1c8001
> +#define VR_MII_AN_CTRL_CHANNEL_ADDR(x) (0x1a8001 + 0x10000 * ((x) - 1))
> +#define MII_AN_INTR_EN BIT(0)
> +#define MII_CTRL BIT(8)

Too generic a name.

> +
> +/* [register] VR_MII_AN_INTR_STS */
> +#define VR_MII_AN_INTR_STS_ADDR 0x1f8002
> +#define VR_MII_AN_INTR_STS_CHANNEL1_ADDR 0x1a8002
> +#define VR_MII_AN_INTR_STS_CHANNEL2_ADDR 0x1b8002
> +#define VR_MII_AN_INTR_STS_CHANNEL3_ADDR 0x1c8002
> +#define VR_MII_AN_INTR_STS_CHANNEL_ADDR(x) (0x1a8002 + 0x10000 * ((x) - 1))
> +#define CL37_ANCMPLT_INTR BIT(0)
> +
> +/* [register] VR_XAUI_MODE_CTRL */
> +#define VR_XAUI_MODE_CTRL_ADDR 0x1f8004
> +#define VR_XAUI_MODE_CTRL_CHANNEL1_ADDR 0x1a8004
> +#define VR_XAUI_MODE_CTRL_CHANNEL2_ADDR 0x1b8004
> +#define VR_XAUI_MODE_CTRL_CHANNEL3_ADDR 0x1c8004
> +#define VR_XAUI_MODE_CTRL_CHANNEL_ADDR(x) (0x1a8004 + 0x10000 * ((x) - 1))
> +#define IPG_CHECK BIT(0)
> +
> +/* [register] SR_MII_CTRL */
> +#define SR_MII_CTRL_ADDR 0x1f0000
> +#define SR_MII_CTRL_CHANNEL1_ADDR 0x1a0000
> +#define SR_MII_CTRL_CHANNEL2_ADDR 0x1b0000
> +#define SR_MII_CTRL_CHANNEL3_ADDR 0x1c0000
> +#define SR_MII_CTRL_CHANNEL_ADDR(x) (0x1a0000 + 0x10000 * ((x) - 1))


> +#define AN_ENABLE BIT(12)

Looks like MDIO_AN_CTRL1_ENABLE

> +#define USXGMII_DUPLEX_FULL BIT(8)
> +#define USXGMII_SPEED_MASK (BIT(13) | BIT(6) | BIT(5))
> +#define USXGMII_SPEED_10000 (BIT(13) | BIT(6))
> +#define USXGMII_SPEED_5000 (BIT(13) | BIT(5))
> +#define USXGMII_SPEED_2500 BIT(5)
> +#define USXGMII_SPEED_1000 BIT(6)
> +#define USXGMII_SPEED_100 BIT(13)
> +#define USXGMII_SPEED_10 0

Looks rather like the standard IEEE 802.3 definitions except for the
2.5G and 5G speeds. Probably worth a comment stating that they're
slightly different.

> +
> +/* PPE UNIPHY data type */
> +struct ppe_uniphy {
> + void __iomem *base;
> + struct ppe_device *ppe_dev;
> + unsigned int index;
> + phy_interface_t interface;
> + struct phylink_pcs pcs;
> +};
> +
> +#define pcs_to_ppe_uniphy(_pcs) container_of(_pcs, struct ppe_uniphy, pcs)

As this should only be used in the .c file, I suggest making this a
static function in the .c file. There should be no requirement to use
it outside of the .c file.

> +
> +struct ppe_uniphy *ppe_uniphy_setup(struct platform_device *pdev);
> +
> +int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy,
> + int port, int speed);
> +
> +int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy,
> + int port, int duplex);
> +
> +int ppe_uniphy_adapter_reset(struct ppe_uniphy *uniphy,
> + int port);
> +
> +int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy,
> + int port);
> +
> +int ppe_uniphy_port_gcc_clock_en_set(struct ppe_uniphy *uniphy,
> + int port, bool enable);
> +
> +#endif /* _PPE_UNIPHY_H_ */
> diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
> index 268109c823ad..d3cb18df33fa 100644
> --- a/include/linux/soc/qcom/ppe.h
> +++ b/include/linux/soc/qcom/ppe.h
> @@ -20,6 +20,7 @@ struct ppe_device {
> struct dentry *debugfs_root;
> bool is_ppe_probed;
> void *ppe_priv;
> + void *uniphy;

Not struct ppe_uniphy *uniphy? You can declare the struct before use
via:

struct ppe_uniphy;

so you don't need to include ppe_uniphy.h in this header.

Thanks.

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

2024-01-10 12:19:15

by Russell King (Oracle)

[permalink] [raw]
Subject: Re: [PATCH net-next 18/20] net: ethernet: qualcomm: Add PPE MAC support for phylink

On Wed, Jan 10, 2024 at 07:40:30PM +0800, Luo Jie wrote:
> +static void ppe_phylink_mac_link_up(struct ppe_device *ppe_dev, int port,
> + struct phy_device *phy,
> + unsigned int mode, phy_interface_t interface,
> + int speed, int duplex, bool tx_pause, bool rx_pause)
> +{
> + struct phylink_pcs *pcs = ppe_phylink_mac_select_pcs(ppe_dev, port, interface);
> + struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
> + struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
> +
> + /* Wait uniphy auto-negotiation completion */
> + ppe_uniphy_autoneg_complete_check(uniphy, port);

Way too late...

> @@ -352,6 +1230,12 @@ static int ppe_port_maxframe_set(struct ppe_device *ppe_dev,
> }
>
> static struct ppe_device_ops qcom_ppe_ops = {
> + .phylink_setup = ppe_phylink_setup,
> + .phylink_destroy = ppe_phylink_destroy,
> + .phylink_mac_config = ppe_phylink_mac_config,
> + .phylink_mac_link_up = ppe_phylink_mac_link_up,
> + .phylink_mac_link_down = ppe_phylink_mac_link_down,
> + .phylink_mac_select_pcs = ppe_phylink_mac_select_pcs,
> .set_maxframe = ppe_port_maxframe_set,
> };

Why this extra layer of abstraction? If you need separate phylink
operations, why not implement separate phylink_mac_ops structures?

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

2024-01-10 12:23:04

by Krzysztof Kozlowski

[permalink] [raw]
Subject: Re: [PATCH net-next 02/20] dt-bindings: net: qcom,ppe: Add bindings yaml file

On 10/01/2024 12:40, Luo Jie wrote:
> The Qualcomm PPE (packet process engine) is supported on
> the IPQ SoC platform.
>

A nit, subject: drop second/last, redundant "bindings". The
"dt-bindings" prefix is already stating that these are bindings.
See also:
https://elixir.bootlin.com/linux/v6.7-rc8/source/Documentation/devicetree/bindings/submitting-patches.rst#L18

Basically your subject has only prefix and nothing else useful.

Limited review follows, I am not wasting my time much on this.

> Signed-off-by: Luo Jie <[email protected]>
> ---
> .../devicetree/bindings/net/qcom,ppe.yaml | 1330 +++++++++++++++++
> 1 file changed, 1330 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/net/qcom,ppe.yaml
>
> diff --git a/Documentation/devicetree/bindings/net/qcom,ppe.yaml b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
> new file mode 100644
> index 000000000000..6afb2ad62707
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
> @@ -0,0 +1,1330 @@
> +# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/net/qcom,ppe.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Qualcomm Packet Process Engine Ethernet controller

Where is the ref to ethernet controllers schema?

> +
> +maintainers:
> + - Luo Jie <[email protected]>
> +
> +description:
> + The PPE (packet process engine) is comprised of three components: Ethernet
> + DMA, switch core and port wrapper. The Ethernet DMA is used to transmit
> + and receive packets between the Ethernet subsystem and the host. The
> + switch core has a maximum of 8 ports (up to 6 front panel ports and two
> + FIFO interfaces), among which there are GMAC/XGMACs used as external
> + interfaces and FIFO interfaces connected to the EDMA/EIP. The port
> + wrapper provides connections from the GMAC/XGMACs to
> + SGMII/QSGMII/PSGMII/USXGMII/10G-BASER etc. There are a maximum of 3
> + UNIPHY (PCS) instances supported by the PPE.
> +
> +properties:
> + compatible:
> + enum:
> + - qcom,ipq5332-ppe
> + - qcom,ipq9574-ppe
> +
> + reg:
> + maxItems: 1
> +
> + "#address-cells":
> + const: 1
> +
> + "#size-cells":
> + const: 1
> +
> + ranges: true
> +
> + clocks: true

These cannot be true, we expect here widest constraints.
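In other words, the top level should carry the widest constraints and the per-SoC if/then blocks narrow them, e.g. (item counts here are illustrative, not taken from the binding):

```yaml
clocks:
  minItems: 10
  maxItems: 13
```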

> +
> + clock-names: true
> +
> + resets: true
> +
> + reset-names: true
> +
> + tdm-config:
> + type: object
> + additionalProperties: false
> + description: |
> + PPE TDM(time-division multiplexing) config includes buffer management
> + and port scheduler.
> +
> + properties:
> + qcom,tdm-bm-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + The TDM buffer scheduler configs of PPE, there are multiple
> + entries supported, each entry includes valid, direction
> + (ingress or egress), port, second port valid, second port.
> +
> + qcom,tdm-port-scheduler-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + The TDM port scheduler management configs of PPE, there
> + are multiple entries supported each entry includes ingress
> + scheduler port bitmap, ingress scheduler port, egress
> + scheduler port, second egress scheduler port valid and
> + second egress scheduler port.
> +
> + required:
> + - qcom,tdm-bm-config
> + - qcom,tdm-port-scheduler-config
> +
> + buffer-management-config:
> + type: object
> + additionalProperties: false
> + description: |
> + PPE buffer management config, which supports configuring group
> + buffer and per port buffer, which decides the threshold of the
> + flow control frame generated.
> +

I don't understand this sentence. Rephrase it to proper sentence and
proper hardware, not driver, description.

> + properties:
> + qcom,group-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + The PPE buffer support 4 groups, the entry includes
> + the group ID and group buffer numbers, each buffer
> + has 256 bytes.

Missing constraints, like min/max and number of items.

> +
> + qcom,port-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + The PPE buffer number is also assigned per BM port ID,
> + there are 10 BM ports supported on ipq5332, and 15 BM
> + ports supported on ipq9574. Each entry includes group
> + ID, BM port ID, dedicated buffer, the buffer numbers
> + for receiving packet after pause frame sent, the
> + threshold for pause frame, weight, restore ceil and
> + dynamic buffer or static buffer management.
> +
> + required:
> + - qcom,group-config
> + - qcom,port-config
> +
> + queue-management-config:
> + type: object
> + additionalProperties: false
> + description: |
> + PPE queue management config, which supports configuring group
> + and per queue buffer limitation, which decides the threshold
> + to drop the packet on the egress port.
> +
> + properties:
> + qcom,group-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + The PPE queue management support 4 groups, the entry
> + includes the group ID, group buffer number, dedicated
> + buffer number, threshold to drop packet and restore
> + ceil.
> +
> + qcom,queue-config:
> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + PPE has 256 unicast queues and 44 multicast queues, the
> + entry includes queue base, queue number, group ID,
> + dedicated buffer, the threshold to drop packet, weight,
> + restore ceil and dynamic or static queue management.
> +
> + required:
> + - qcom,group-config
> + - qcom,queue-config
> +
> + port-scheduler-resource:
> + type: object
> + additionalProperties: false
> + description: The scheduler resource available in PPE.
> + patternProperties:
> + "^port[0-7]$":

port-

> + description: Each subnode represents the scheduler resource per port.
> + type: object
> + properties:
> + port-id:
> + $ref: /schemas/types.yaml#/definitions/uint32
> + description: |

Do not need '|' unless you need to preserve formatting. This applies
everywhere.

> + The PPE port ID. There are a maximum of 6 physical
> + ports, plus the EIP port and the CPU port.

Your node name suffix says 8 ports. Anyway, missing min/max.

All these nodes (before, here and further) look like a dump of vendor code.

I expect some good explanation why we should accept this. Commit msg you
wrote is meaningless. It literally brings zero information about hardware.

You have been asked to provide accurate hardware description yet you
keep ignoring people's feedback.
..

> +
> +patternProperties:


phy@

Node names should be generic. See also an explanation and list of
examples (not exhaustive) in DT specification:
https://devicetree-specification.readthedocs.io/en/latest/chapter2-devicetree-basics.html#generic-names-recommendation


> + "^qcom-uniphy@[0-9a-f]+$":
> + type: object
> + additionalProperties: false
> + description: uniphy configuration and clock provider
> + properties:
> + reg:
> + minItems: 2
> + items:
> + - description: The first uniphy register range
> + - description: The second uniphy register range
> + - description: The third uniphy register range

first, second and third are really useless descriptions. We expect
something useful.

> +
> + "#clock-cells":
> + const: 1
> +
> + clock-output-names:
> + minItems: 4
> + maxItems: 6
> +
> + required:
> + - reg
> + - "#clock-cells"
> + - clock-output-names
> +
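For reference, a node matching this pattern as written could look like
the sketch below (the unit address, register ranges and clock output
names are hypothetical, not taken from the patch; note the node name
itself is what the reviewer objects to below):

```dts
qcom-uniphy@7a00000 {
	reg = <0x7a00000 0x10000>,
	      <0x7a10000 0x10000>;
	#clock-cells = <1>;
	clock-output-names = "uniphy0_gcc_rx_clk",
			     "uniphy0_gcc_tx_clk",
			     "uniphy1_gcc_rx_clk",
			     "uniphy1_gcc_tx_clk";
};
```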
> +allOf:
> + - if:
> + properties:
> + compatible:
> + contains:
> + const: qcom,ipq5332-ppe
> + then:
> + properties:
> + clocks:
> + items:
> + - description: Display common AHB clock from gcc
> + - description: Display common system clock from gcc
> + - description: Display uniphy0 AHB clock from gcc
> + - description: Display uniphy1 AHB clock from gcc
> + - description: Display uniphy0 system clock from gcc
> + - description: Display uniphy1 system clock from gcc
> + - description: Display nss clock from gcc
> + - description: Display nss noc snoc clock from gcc
> + - description: Display nss noc snoc_1 clock from gcc
> + - description: Display sleep clock from gcc
> + - description: Display PPE clock from nsscc
> + - description: Display PPE config clock from nsscc
> + - description: Display NSSNOC PPE clock from nsscc
> + - description: Display NSSNOC PPE config clock from nsscc
> + - description: Display EDMA clock from nsscc
> + - description: Display EDMA config clock from nsscc
> + - description: Display PPE IPE clock from nsscc
> + - description: Display PPE BTQ clock from nsscc
> + - description: Display port1 MAC clock from nsscc
> + - description: Display port2 MAC clock from nsscc
> + - description: Display port1 RX clock from nsscc
> + - description: Display port1 TX clock from nsscc
> + - description: Display port2 RX clock from nsscc
> + - description: Display port2 TX clock from nsscc
> + - description: Display UNIPHY port1 RX clock from nsscc
> + - description: Display UNIPHY port1 TX clock from nsscc
> + - description: Display UNIPHY port2 RX clock from nsscc
> + - description: Display UNIPHY port2 TX clock from nsscc
> + clock-names:
> + items:
> + - const: cmn_ahb
> + - const: cmn_sys
> + - const: uniphy0_ahb
> + - const: uniphy1_ahb
> + - const: uniphy0_sys
> + - const: uniphy1_sys
> + - const: gcc_nsscc
> + - const: gcc_nssnoc_snoc
> + - const: gcc_nssnoc_snoc_1
> + - const: gcc_im_sleep
> + - const: nss_ppe
> + - const: nss_ppe_cfg
> + - const: nssnoc_ppe
> + - const: nssnoc_ppe_cfg
> + - const: nss_edma
> + - const: nss_edma_cfg
> + - const: nss_ppe_ipe
> + - const: nss_ppe_btq
> + - const: port1_mac
> + - const: port2_mac
> + - const: nss_port1_rx
> + - const: nss_port1_tx
> + - const: nss_port2_rx
> + - const: nss_port2_tx
> + - const: uniphy_port1_rx
> + - const: uniphy_port1_tx
> + - const: uniphy_port2_rx
> + - const: uniphy_port2_tx
> +
> + resets:
> + items:
> + - description: Reset PPE
> + - description: Reset uniphy0 software config
> + - description: Reset uniphy1 software config
> + - description: Reset uniphy0 AHB
> + - description: Reset uniphy1 AHB
> + - description: Reset uniphy0 system
> + - description: Reset uniphy1 system
> + - description: Reset uniphy0 XPCS
> + - description: Reset uniphy1 XPCS
> + - description: Reset uniphy port1 RX
> + - description: Reset uniphy port1 TX
> + - description: Reset uniphy port2 RX
> + - description: Reset uniphy port2 TX
> + - description: Reset PPE port1 RX
> + - description: Reset PPE port1 TX
> + - description: Reset PPE port2 RX
> + - description: Reset PPE port2 TX
> + - description: Reset PPE port1 MAC
> + - description: Reset PPE port2 MAC
> +
> + reset-names:
> + items:
> + - const: ppe
> + - const: uniphy0_soft
> + - const: uniphy1_soft
> + - const: uniphy0_ahb
> + - const: uniphy1_ahb
> + - const: uniphy0_sys
> + - const: uniphy1_sys
> + - const: uniphy0_xpcs
> + - const: uniphy1_xpcs
> + - const: uniphy_port1_rx
> + - const: uniphy_port1_tx
> + - const: uniphy_port2_rx
> + - const: uniphy_port2_tx
> + - const: nss_port1_rx
> + - const: nss_port1_tx
> + - const: nss_port2_rx
> + - const: nss_port2_tx
> + - const: nss_port1_mac
> + - const: nss_port2_mac
> +
> + - if:
> + properties:
> + compatible:
> + contains:
> + const: qcom,ipq9574-ppe
> + then:
> + properties:
> + clocks:
> + items:
> + - description: Display common AHB clock from gcc
> + - description: Display common system clock from gcc
> + - description: Display uniphy0 AHB clock from gcc
> + - description: Display uniphy1 AHB clock from gcc
> + - description: Display uniphy2 AHB clock from gcc
> + - description: Display uniphy0 system clock from gcc
> + - description: Display uniphy1 system clock from gcc
> + - description: Display uniphy2 system clock from gcc
> + - description: Display nss clock from gcc
> + - description: Display nss noc clock from gcc
> + - description: Display nss noc snoc clock from gcc
> + - description: Display nss noc snoc_1 clock from gcc
> + - description: Display PPE clock from nsscc
> + - description: Display PPE config clock from nsscc
> + - description: Display NSSNOC PPE clock from nsscc
> + - description: Display NSSNOC PPE config clock from nsscc
> + - description: Display EDMA clock from nsscc
> + - description: Display EDMA config clock from nsscc
> + - description: Display PPE IPE clock from nsscc
> + - description: Display PPE BTQ clock from nsscc
> + - description: Display port1 MAC clock from nsscc
> + - description: Display port2 MAC clock from nsscc
> + - description: Display port3 MAC clock from nsscc
> + - description: Display port4 MAC clock from nsscc
> + - description: Display port5 MAC clock from nsscc
> + - description: Display port6 MAC clock from nsscc
> + - description: Display port1 RX clock from nsscc
> + - description: Display port1 TX clock from nsscc
> + - description: Display port2 RX clock from nsscc
> + - description: Display port2 TX clock from nsscc
> + - description: Display port3 RX clock from nsscc
> + - description: Display port3 TX clock from nsscc
> + - description: Display port4 RX clock from nsscc
> + - description: Display port4 TX clock from nsscc
> + - description: Display port5 RX clock from nsscc
> + - description: Display port5 TX clock from nsscc
> + - description: Display port6 RX clock from nsscc
> + - description: Display port6 TX clock from nsscc
> + - description: Display UNIPHY port1 RX clock from nsscc
> + - description: Display UNIPHY port1 TX clock from nsscc
> + - description: Display UNIPHY port2 RX clock from nsscc
> + - description: Display UNIPHY port2 TX clock from nsscc
> + - description: Display UNIPHY port3 RX clock from nsscc
> + - description: Display UNIPHY port3 TX clock from nsscc
> + - description: Display UNIPHY port4 RX clock from nsscc
> + - description: Display UNIPHY port4 TX clock from nsscc
> + - description: Display UNIPHY port5 RX clock from nsscc
> + - description: Display UNIPHY port5 TX clock from nsscc
> + - description: Display UNIPHY port6 RX clock from nsscc
> + - description: Display UNIPHY port6 TX clock from nsscc
> + - description: Display port5 RX clock source from nsscc
> + - description: Display port5 TX clock source from nsscc
> + clock-names:
> + items:
> + - const: cmn_ahb
> + - const: cmn_sys
> + - const: uniphy0_ahb
> + - const: uniphy1_ahb
> + - const: uniphy2_ahb
> + - const: uniphy0_sys
> + - const: uniphy1_sys
> + - const: uniphy2_sys
> + - const: gcc_nsscc
> + - const: gcc_nssnoc_nsscc
> + - const: gcc_nssnoc_snoc
> + - const: gcc_nssnoc_snoc_1
> + - const: nss_ppe
> + - const: nss_ppe_cfg
> + - const: nssnoc_ppe
> + - const: nssnoc_ppe_cfg
> + - const: nss_edma
> + - const: nss_edma_cfg
> + - const: nss_ppe_ipe
> + - const: nss_ppe_btq
> + - const: port1_mac
> + - const: port2_mac
> + - const: port3_mac
> + - const: port4_mac
> + - const: port5_mac
> + - const: port6_mac
> + - const: nss_port1_rx
> + - const: nss_port1_tx
> + - const: nss_port2_rx
> + - const: nss_port2_tx
> + - const: nss_port3_rx
> + - const: nss_port3_tx
> + - const: nss_port4_rx
> + - const: nss_port4_tx
> + - const: nss_port5_rx
> + - const: nss_port5_tx
> + - const: nss_port6_rx
> + - const: nss_port6_tx
> + - const: uniphy_port1_rx
> + - const: uniphy_port1_tx
> + - const: uniphy_port2_rx
> + - const: uniphy_port2_tx
> + - const: uniphy_port3_rx
> + - const: uniphy_port3_tx
> + - const: uniphy_port4_rx
> + - const: uniphy_port4_tx
> + - const: uniphy_port5_rx
> + - const: uniphy_port5_tx
> + - const: uniphy_port6_rx
> + - const: uniphy_port6_tx
> + - const: nss_port5_rx_clk_src
> + - const: nss_port5_tx_clk_src
> +
> + resets:
> + items:
> + - description: Reset PPE
> + - description: Reset uniphy0 software config
> + - description: Reset uniphy1 software config
> + - description: Reset uniphy2 software config
> + - description: Reset uniphy0 AHB
> + - description: Reset uniphy1 AHB
> + - description: Reset uniphy2 AHB
> + - description: Reset uniphy0 system
> + - description: Reset uniphy1 system
> + - description: Reset uniphy2 system
> + - description: Reset uniphy0 XPCS
> + - description: Reset uniphy1 XPCS
> + - description: Reset uniphy2 XPCS
> + - description: Assert uniphy port1
> + - description: Assert uniphy port2
> + - description: Assert uniphy port3
> + - description: Assert uniphy port4
> + - description: Reset PPE port1
> + - description: Reset PPE port2
> + - description: Reset PPE port3
> + - description: Reset PPE port4
> + - description: Reset PPE port5
> + - description: Reset PPE port6
> + - description: Reset PPE port1 MAC
> + - description: Reset PPE port2 MAC
> + - description: Reset PPE port3 MAC
> + - description: Reset PPE port4 MAC
> + - description: Reset PPE port5 MAC
> + - description: Reset PPE port6 MAC
> +
> + reset-names:
> + items:
> + - const: ppe
> + - const: uniphy0_soft
> + - const: uniphy1_soft
> + - const: uniphy2_soft
> + - const: uniphy0_ahb
> + - const: uniphy1_ahb
> + - const: uniphy2_ahb
> + - const: uniphy0_sys
> + - const: uniphy1_sys
> + - const: uniphy2_sys
> + - const: uniphy0_xpcs
> + - const: uniphy1_xpcs
> + - const: uniphy2_xpcs
> + - const: uniphy0_port1_dis
> + - const: uniphy0_port2_dis
> + - const: uniphy0_port3_dis
> + - const: uniphy0_port4_dis
> + - const: nss_port1
> + - const: nss_port2
> + - const: nss_port3
> + - const: nss_port4
> + - const: nss_port5
> + - const: nss_port6
> + - const: nss_port1_mac
> + - const: nss_port2_mac
> + - const: nss_port3_mac
> + - const: nss_port4_mac
> + - const: nss_port5_mac
> + - const: nss_port6_mac
> +
> +required:

allOf: goes after required:

> + - compatible
> + - reg
> + - "#address-cells"
> + - "#size-cells"
> + - ranges
> + - clocks
> + - clock-names
> + - resets
> + - reset-names
> + - tdm-config
> + - buffer-management-config
> + - queue-management-config
> + - port-scheduler-resource
> + - port-scheduler-config
> +
> +additionalProperties: false


> +
> +examples:
> + - |
> + #include <dt-bindings/clock/qcom,ipq9574-gcc.h>
> + #include <dt-bindings/reset/qcom,ipq9574-gcc.h>
> + #include <dt-bindings/clock/qcom,ipq9574-nsscc.h>
> + #include <dt-bindings/reset/qcom,ipq9574-nsscc.h>
> +
> + soc {
> + #address-cells = <1>;
> + #size-cells = <1>;
> + qcom_ppe: qcom-ppe@3a000000 {

Drop label, Generic node names.

> + compatible = "qcom,ipq9574-ppe";

Entire indentation of example is broken. Use one described in the
bindings coding style.

Best regards,
Krzysztof


2024-01-10 12:24:37

by Krzysztof Kozlowski

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver

On 10/01/2024 12:40, Luo Jie wrote:
> The PPE(packet process engine) hardware block is available in Qualcomm
> IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
> The PPE includes integrated ethernet MAC and PCS(uniphy), which is used
> to connect with external PHY devices by PCS. The PPE also includes
> various packet processing offload capabilities such as routing and
> briding offload, L2 switch capability, VLAN and tunnel processing
> offload.
>
> This patch series enables support for the PPE driver which intializes
> and configures the PPE, and provides various services for higher level
> network drivers in the system such as EDMA (Ethernet DMA) driver or a
> DSA switch driver for PPE L2 Switch, for Qualcomm IPQ SoCs.

net-next is closed.

Best regards,
Krzysztof


2024-01-10 13:01:44

by Rob Herring

[permalink] [raw]
Subject: Re: [PATCH net-next 02/20] dt-bindings: net: qcom,ppe: Add bindings yaml file


On Wed, 10 Jan 2024 19:40:14 +0800, Luo Jie wrote:
> Qualcomm PPE(packet process engine) is supported on
> IPQ SOC platform.
>
> Signed-off-by: Luo Jie <[email protected]>
> ---
> .../devicetree/bindings/net/qcom,ppe.yaml | 1330 +++++++++++++++++
> 1 file changed, 1330 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/net/qcom,ppe.yaml
>

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:

dtschema/dtc warnings/errors:
Documentation/devicetree/bindings/net/qcom,ppe.example.dts:20:18: fatal error: dt-bindings/clock/qcom,ipq9574-nsscc.h: No such file or directory
20 | #include <dt-bindings/clock/qcom,ipq9574-nsscc.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [scripts/Makefile.lib:419: Documentation/devicetree/bindings/net/qcom,ppe.example.dtb] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [/builds/robherring/dt-review-ci/linux/Makefile:1424: dt_binding_check] Error 2
make: *** [Makefile:234: __sub-make] Error 2

doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/project/devicetree-bindings/patch/[email protected]

The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.


2024-01-10 15:56:15

by Simon Horman

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver

On Wed, Jan 10, 2024 at 01:24:06PM +0100, Krzysztof Kozlowski wrote:
> On 10/01/2024 12:40, Luo Jie wrote:
> > The PPE(packet process engine) hardware block is available in Qualcomm
> > IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
> > The PPE includes integrated ethernet MAC and PCS(uniphy), which is used
> > to connect with external PHY devices by PCS. The PPE also includes
> > various packet processing offload capabilities such as routing and
> > briding offload, L2 switch capability, VLAN and tunnel processing
> > offload.
> >
> > This patch series enables support for the PPE driver which intializes
> > and configures the PPE, and provides various services for higher level
> > network drivers in the system such as EDMA (Ethernet DMA) driver or a
> > DSA switch driver for PPE L2 Switch, for Qualcomm IPQ SoCs.
>
> net-next is closed.

Also, please try to avoid sending patch-sets with more than 15 patches
for net or net-next.

https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#dividing-work-into-patches

2024-01-10 22:24:45

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver

On Wed, 10 Jan 2024 19:40:12 +0800 Luo Jie wrote:
> The PPE(packet process engine) hardware block is available in Qualcomm
> IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.

What's the relationship between this driver and QCA8084?

In the last month I see separate changes from you for mdio-ipq4019.c,
phy/at803x.c and now this driver (none of which got merged, AFAICT.)
Are you actually the author of this code, or are you just trying
to upstream bunch of vendor code?

Now you're dumping another 10kLoC on the list, and even though this is
hardly your first posting you're apparently not aware of our most basic
posting rules:
https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#tl-dr

The reviewers are getting frustrated. Please, help us help you.
Stop throwing code at the list and work out a plan with Andrew
and others on how to get something merged...
--
pv-bot: 15cnt
pw-bot: cr

2024-01-11 15:53:22

by Luo Jie

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver



On 1/11/2024 6:24 AM, Jakub Kicinski wrote:
> On Wed, 10 Jan 2024 19:40:12 +0800 Luo Jie wrote:
>> The PPE(packet process engine) hardware block is available in Qualcomm
>> IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
>
> What's the relationship between this driver and QCA8084?

The PPE (packet processing engine) is the network processing hardware
block in QCOM IPQ SoC. It includes the ethernet MAC and UNIPHY(PCS).
This driver is the base PPE driver which brings up the PPE and handles
MAC/UNIPHY operations. QCA8084 is the external 2.5Gbps 4-port PHY
device, which can be connected with PPE integrated MAC by UNIPHY(PCS).

Here is the relationship.
PPE integrated MAC --- PPE integrated UNIPHY(PCS) --- (PCS)QCA8084.

>
> In the last month I see separate changes from you for mdio-ipq4019.c,
> phy/at803x.c and now this driver (none of which got merged, AFAICT.)
> Are you actually the author of this code, or are you just trying
> to upstream bunch of vendor code?

Yes, Jakub, there are two authors of this patch series, Lei Wei and me.
The patches have been ready for some time, and the code has been verified
on the Qualcomm reference design board. These are not downstream drivers
but drivers re-written for upstream.

>
> Now you're dumping another 10kLoC on the list, and even though this is
> hardly your first posting you're apparently not aware of our most basic
> posting rules:
> https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#tl-dr
>
> The reviewers are getting frustrated. Please, help us help you.
> Stop throwing code at the list and work out a plan with Andrew
> and others on how to get something merged...

Sorry for the trouble caused. We will study the guidance referenced in
the review comments, follow it, and complete a full internal review of
the patch updates before posting the next patch series.

2024-01-12 15:52:05

by Luo Jie

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver



On 1/10/2024 11:44 PM, Simon Horman wrote:
> On Wed, Jan 10, 2024 at 01:24:06PM +0100, Krzysztof Kozlowski wrote:
>> On 10/01/2024 12:40, Luo Jie wrote:
>>> The PPE(packet process engine) hardware block is available in Qualcomm
>>> IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
>>> The PPE includes integrated ethernet MAC and PCS(uniphy), which is used
>>> to connect with external PHY devices by PCS. The PPE also includes
>>> various packet processing offload capabilities such as routing and
>>> briding offload, L2 switch capability, VLAN and tunnel processing
>>> offload.
>>>
>>> This patch series enables support for the PPE driver which intializes
>>> and configures the PPE, and provides various services for higher level
>>> network drivers in the system such as EDMA (Ethernet DMA) driver or a
>>> DSA switch driver for PPE L2 Switch, for Qualcomm IPQ SoCs.
>>
>> net-next is closed.
>
> Also, please try to avoid sending patch-sets with more than 15 patches
> for net or net-next.
>
> https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#dividing-work-into-patches

Got it. When this review resumes at a later point, we will split the
PPE driver patches into two series: one for the PPE switch core
features, another for the MAC/UNIPHY features. Hope this is fine.

Thanks for this comment.


2024-01-12 17:57:14

by Christian Marangi

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver

On Thu, Jan 11, 2024 at 11:49:53PM +0800, Jie Luo wrote:
>
>
> On 1/11/2024 6:24 AM, Jakub Kicinski wrote:
> > On Wed, 10 Jan 2024 19:40:12 +0800 Luo Jie wrote:
> > > The PPE(packet process engine) hardware block is available in Qualcomm
> > > IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
> >
> > What's the relationship between this driver and QCA8084?
>
> The PPE (packet processing engine) is the network processing hardware block
> in QCOM IPQ SoC. It includes the ethernet MAC and UNIPHY(PCS). This driver
> is the base PPE driver which brings up the PPE and handles MAC/UNIPHY
> operations. QCA8084 is the external 2.5Gbps 4-port PHY device, which can be
> connected with PPE integrated MAC by UNIPHY(PCS).
>
> Here is the relationship.
> PPE integrated MAC --- PPE integrated UNIPHY(PCS) --- (PCS)QCA8084.
>
> >
> > In the last month I see separate changes from you for mdio-ipq4019.c,
> > phy/at803x.c and now this driver (none of which got merged, AFAICT.)
> > Are you actually the author of this code, or are you just trying
> > to upstream bunch of vendor code?
>
> Yes, Jakub, there are two authors in these patch series, Lei Wei and me.
> The patches are already ready for some time, the code has been verified
> on the Qualcomm reference design board. These are not downstream drivers
> but drivers re-written for upstream.
>
> >
> > Now you're dumping another 10kLoC on the list, and even though this is
> > hardly your first posting you're apparently not aware of our most basic
> > posting rules:
> > https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#tl-dr
> >
> > The reviewers are getting frustrated. Please, help us help you.
> > Stop throwing code at the list and work out a plan with Andrew
> > and others on how to get something merged...
>
> Sorry for trouble caused, will learn about the guidance provided by
> the review comments, and follow up on the guidance and have the full
> internal review of the patch updates before pushing the patch series.

I renew my offer to help in any way with this. I love the intention
for EDMAv2 to have an upstream driver instead of SSDK, and I hope that
in the future EDMAv1 gets the same treatment (it's a real pity to have
a support hole, with ipq807x not supported).

Feel free to send an email or anything. Considering this is massive, an
extra eye before sending might make things better than reaching (I can
already see this coming) a massive series with at least 20 revisions,
given the complexity of this thing.

--
Ansuel

2024-01-17 15:26:55

by Luo Jie

[permalink] [raw]
Subject: Re: [PATCH net-next 00/20] net: ethernet: Add qcom PPE driver



On 1/13/2024 1:56 AM, Christian Marangi wrote:
> On Thu, Jan 11, 2024 at 11:49:53PM +0800, Jie Luo wrote:
>>
>>
>> On 1/11/2024 6:24 AM, Jakub Kicinski wrote:
>>> On Wed, 10 Jan 2024 19:40:12 +0800 Luo Jie wrote:
>>>> The PPE(packet process engine) hardware block is available in Qualcomm
>>>> IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
>>>
>>> What's the relationship between this driver and QCA8084?
>>
>> The PPE (packet processing engine) is the network processing hardware block
>> in QCOM IPQ SoC. It includes the ethernet MAC and UNIPHY(PCS). This driver
>> is the base PPE driver which brings up the PPE and handles MAC/UNIPHY
>> operations. QCA8084 is the external 2.5Gbps 4-port PHY device, which can be
>> connected with PPE integrated MAC by UNIPHY(PCS).
>>
>> Here is the relationship.
>> PPE integrated MAC --- PPE integrated UNIPHY(PCS) --- (PCS)QCA8084.
>>
>>>
>>> In the last month I see separate changes from you for mdio-ipq4019.c,
>>> phy/at803x.c and now this driver (none of which got merged, AFAICT.)
>>> Are you actually the author of this code, or are you just trying
>>> to upstream bunch of vendor code?
>>
>> Yes, Jakub, there are two authors in these patch series, Lei Wei and me.
>> The patches are already ready for some time, the code has been verified
>> on the Qualcomm reference design board. These are not downstream drivers
>> but drivers re-written for upstream.
>>
>>>
>>> Now you're dumping another 10kLoC on the list, and even though this is
>>> hardly your first posting you're apparently not aware of our most basic
>>> posting rules:
>>> https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#tl-dr
>>>
>>> The reviewers are getting frustrated. Please, help us help you.
>>> Stop throwing code at the list and work out a plan with Andrew
>>> and others on how to get something merged...
>>
>> Sorry for trouble caused, will learn about the guidance provided by
>> the review comments, and follow up on the guidance and have the full
>> internal review of the patch updates before pushing the patch series.
>
> I renew my will of helping in any kind of manner in this, I love the
> intention for EDMAv2 to have an upstream driver instead of SSDK, hoping
> in the future to also have the same treatement for EDMAv1 (it's really a
> pitty to have a support hole with ipq807x not supported)
>
> Feel free to send an email or anything, considering this is massive, an
> extra eye before sending might make things better than reaching (I can
> already see this) a massive series with at least 20 revision given the
> complexity of this thing.
>

Thanks Christian for the help. Yes, the EDMAv2 driver will be posted
some time after net-next reopens and after this PPE driver patch series
resumes. The EDMAv2 driver will be posted as a separate series, which
depends on this PPE driver. Currently we plan to post EDMAv2 driver
support for IPQ5332 and IPQ9574 first. For IPQ807x, it is a driver for
an older architecture as you can see, but we will consider it for the
future.

We will certainly review it internally before publishing it later for
upstream review.

2024-01-22 13:56:46

by Luo Jie

[permalink] [raw]
Subject: Re: [PATCH net-next 02/20] dt-bindings: net: qcom,ppe: Add bindings yaml file



On 1/10/2024 8:22 PM, Krzysztof Kozlowski wrote:
> On 10/01/2024 12:40, Luo Jie wrote:
>> Qualcomm PPE(packet process engine) is supported on
>> IPQ SOC platform.
>>
>
> A nit, subject: drop second/last, redundant "bindings". The
> "dt-bindings" prefix is already stating that these are bindings.
> See also:
> https://elixir.bootlin.com/linux/v6.7-rc8/source/Documentation/devicetree/bindings/submitting-patches.rst#L18
>
> Basically your subject has only prefix and nothing else useful.
>
> Limited review follows, I am not wasting my time much on this.

Will remove the redundant word and follow the guidance mentioned in
the link. Will correct the subject as well.

>
>> Signed-off-by: Luo Jie <[email protected]>
>> ---
>> .../devicetree/bindings/net/qcom,ppe.yaml | 1330 +++++++++++++++++
>> 1 file changed, 1330 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/net/qcom,ppe.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/net/qcom,ppe.yaml b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
>> new file mode 100644
>> index 000000000000..6afb2ad62707
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
>> @@ -0,0 +1,1330 @@
>> +# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/net/qcom,ppe.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Qualcomm Packet Process Engine Ethernet controller
>
> Where is the ref to ethernet controllers schema?

Sorry, the title above does not describe the device for this dt-binding
correctly. It should say "Qualcomm Packet Process Engine". The
reference to the schema for PPE is mentioned above.

>
>> +
>> +maintainers:
>> + - Luo Jie <[email protected]>
>> +
>> +description:
>> + The PPE(packet process engine) is comprised of three componets, Ethernet
>> + DMA, Switch core and Port wrapper, Ethernet DMA is used to transmit and
>> + receive packets between Ethernet subsytem and host. The Switch core has
>> + maximum 8 ports(maximum 6 front panel ports and two FIFO interfaces),
>> + among which there are GMAC/XGMACs used as external interfaces and FIFO
>> + interfaces connected the EDMA/EIP, The port wrapper provides connections
>> + from the GMAC/XGMACS to SGMII/QSGMII/PSGMII/USXGMII/10G-BASER etc, there
>> + are maximu 3 UNIPHY(PCS) instances supported by PPE.
>> +
>> +properties:
>> + compatible:
>> + enum:
>> + - qcom,ipq5332-ppe
>> + - qcom,ipq9574-ppe
>> +
>> + reg:
>> + maxItems: 1
>> +
>> + "#address-cells":
>> + const: 1
>> +
>> + "#size-cells":
>> + const: 1
>> +
>> + ranges: true
>> +
>> + clocks: true
>
> These cannot be true, we expect here widest constraints.

Got it, will update to add the right constraints for the properties.

>
>> +
>> + clock-names: true
>> +
>> + resets: true
>> +
>> + reset-names: true
>> +
>> + tdm-config:
>> + type: object
>> + additionalProperties: false
>> + description: |
>> + PPE TDM(time-division multiplexing) config includes buffer management
>> + and port scheduler.
>> +
>> + properties:
>> + qcom,tdm-bm-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + The TDM buffer scheduler configs of PPE, there are multiple
>> + entries supported, each entry includes valid, direction
>> + (ingress or egress), port, second port valid, second port.
>> +
>> + qcom,tdm-port-scheduler-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + The TDM port scheduler management configs of PPE, there
>> + are multiple entries supported each entry includes ingress
>> + scheduler port bitmap, ingress scheduler port, egress
>> + scheduler port, second egress scheduler port valid and
>> + second egress scheduler port.
>> +
>> + required:
>> + - qcom,tdm-bm-config
>> + - qcom,tdm-port-scheduler-config
>> +
>> + buffer-management-config:
>> + type: object
>> + additionalProperties: false
>> + description: |
>> + PPE buffer management config, which supports configuring group
>> + buffer and per port buffer, which decides the threshold of the
>> + flow control frame generated.
>> +
>
> I don't understand this sentence. Rephrase it to proper sentence and
> proper hardware, not driver, description.

Ok, I will edit the description to make it clearer. This information
determines the number of hardware buffers configured per port in the
PPE. This configuration influences the flow control behavior of the
port.

>
>> + properties:
>> + qcom,group-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + The PPE buffer support 4 groups, the entry includes
>> + the group ID and group buffer numbers, each buffer
>> + has 256 bytes.
>
> Missing constraints, like min/max and number of items.

Ok, will add these constraints.

>
>> +
>> + qcom,port-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + The PPE buffer number is also assigned per BM port ID,
>> + there are 10 BM ports supported on ipq5332, and 15 BM
> >> + ports supported on ipq9574. Each entry includes group
>> + ID, BM port ID, dedicated buffer, the buffer numbers
>> + for receiving packet after pause frame sent, the
>> + threshold for pause frame, weight, restore ceil and
>> + dynamic buffer or static buffer management.
>> +
>> + required:
>> + - qcom,group-config
>> + - qcom,port-config
>> +
>> + queue-management-config:
>> + type: object
>> + additionalProperties: false
>> + description: |
>> + PPE queue management config, which supports configuring group
>> + and per queue buffer limitation, which decides the threshold
>> + to drop the packet on the egress port.
>> +
>> + properties:
>> + qcom,group-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + The PPE queue management support 4 groups, the entry
>> + includes the group ID, group buffer number, dedicated
>> + buffer number, threshold to drop packet and restore
>> + ceil.
>> +
>> + qcom,queue-config:
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + PPE has 256 unicast queues and 44 multicast queues, the
>> + entry includes queue base, queue number, group ID,
>> + dedicated buffer, the threshold to drop packet, weight,
>> + restore ceil and dynamic or static queue management.
>> +
>> + required:
>> + - qcom,group-config
>> + - qcom,queue-config
>> +
>> + port-scheduler-resource:
>> + type: object
>> + additionalProperties: false
>> + description: The scheduler resource available in PPE.
>> + patternProperties:
>> + "^port[0-7]$":
>
> port-

Ok. will do.

>
>> + description: Each subnode represents the scheduler resource per port.
>> + type: object
>> + properties:
>> + port-id:
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + description: |
>
> Do not need '|' unless you need to preserve formatting. This applies
> everywhere.

Got it, will remove it everywhere applicable.

>
>> + The PPE port ID, there are maximum 6 physical port,
>> + EIP port and CPU port.
>
> Your node name suffix says 8 ports. Anyway, missing min/max.

will add the constraints.

>
> All these nodes (before, here and further) looks like dump of vendor code.
>
> I expect some good explanation why we should accept this. Commit msg you
> wrote is meaningless. It literally brings zero information about hardware.
>
> You have been asked to provide accurate hardware description yet you
> keep ignoring people's feedback.

We are reviewing the current DTS to include only the information that
varies per board, and will move the rest of the configuration into the
driver. We will update the commit message and the DT binding
descriptions with details of the hardware for the updated DTS/bindings
when this patch series resumes.


> ...
>
>> +
>> +patternProperties:
>
>
> phy@
>
> Node names should be generic. See also an explanation and list of
> examples (not exhaustive) in DT specification:
> https://devicetree-specification.readthedocs.io/en/latest/chapter2-devicetree-basics.html#generic-names-recommendation

Got it, thanks. Will refer to the link and update accordingly.

>
>
>> + "^qcom-uniphy@[0-9a-f]+$":
>> + type: object
>> + additionalProperties: false
>> + description: uniphy configuration and clock provider
>> + properties:
>> + reg:
>> + minItems: 2
>> + items:
>> + - description: The first uniphy register range
>> + - description: The second uniphy register range
>> + - description: The third uniphy register range
>
> first, second and third are really useless descriptions. We expect
> something useful.
>

I will rephrase the descriptions here to clarify. Depending on the SoC
(IPQ5332 or IPQ9574), there can be two or three UNIPHY (PCS) blocks in
the PPE. This property defines the address ranges for the register
space of these UNIPHY (PCS) blocks.


>> +
>> + "#clock-cells":
>> + const: 1
>> +
>> + clock-output-names:
>> + minItems: 4
>> + maxItems: 6
>> +
>> + required:
>> + - reg
>> + - "#clock-cells"
>> + - clock-output-names
>> +
>> +allOf:
>> + - if:
>> + properties:
>> + compatible:
>> + contains:
>> + const: qcom,ipq5332-ppe
>> + then:
>> + properties:
>> + clocks:
>> + items:
>> + - description: Display common AHB clock from gcc
>> + - description: Display common system clock from gcc
>> + - description: Display uniphy0 AHB clock from gcc
>> + - description: Display uniphy1 AHB clock from gcc
>> + - description: Display uniphy0 system clock from gcc
>> + - description: Display uniphy1 system clock from gcc
>> + - description: Display nss clock from gcc
>> + - description: Display nss noc snoc clock from gcc
>> + - description: Display nss noc snoc_1 clock from gcc
>> + - description: Display sleep clock from gcc
>> + - description: Display PPE clock from nsscc
>> + - description: Display PPE config clock from nsscc
>> + - description: Display NSSNOC PPE clock from nsscc
>> + - description: Display NSSNOC PPE config clock from nsscc
>> + - description: Display EDMA clock from nsscc
>> + - description: Display EDMA config clock from nsscc
>> + - description: Display PPE IPE clock from nsscc
>> + - description: Display PPE BTQ clock from nsscc
>> + - description: Display port1 MAC clock from nsscc
>> + - description: Display port2 MAC clock from nsscc
>> + - description: Display port1 RX clock from nsscc
>> + - description: Display port1 TX clock from nsscc
>> + - description: Display port2 RX clock from nsscc
>> + - description: Display port2 TX clock from nsscc
>> + - description: Display UNIPHY port1 RX clock from nsscc
>> + - description: Display UNIPHY port1 TX clock from nsscc
>> + - description: Display UNIPHY port2 RX clock from nsscc
>> + - description: Display UNIPHY port2 TX clock from nsscc
>> + clock-names:
>> + items:
>> + - const: cmn_ahb
>> + - const: cmn_sys
>> + - const: uniphy0_ahb
>> + - const: uniphy1_ahb
>> + - const: uniphy0_sys
>> + - const: uniphy1_sys
>> + - const: gcc_nsscc
>> + - const: gcc_nssnoc_snoc
>> + - const: gcc_nssnoc_snoc_1
>> + - const: gcc_im_sleep
>> + - const: nss_ppe
>> + - const: nss_ppe_cfg
>> + - const: nssnoc_ppe
>> + - const: nssnoc_ppe_cfg
>> + - const: nss_edma
>> + - const: nss_edma_cfg
>> + - const: nss_ppe_ipe
>> + - const: nss_ppe_btq
>> + - const: port1_mac
>> + - const: port2_mac
>> + - const: nss_port1_rx
>> + - const: nss_port1_tx
>> + - const: nss_port2_rx
>> + - const: nss_port2_tx
>> + - const: uniphy_port1_rx
>> + - const: uniphy_port1_tx
>> + - const: uniphy_port2_rx
>> + - const: uniphy_port2_tx
>> +
>> + resets:
>> + items:
>> + - description: Reset PPE
>> + - description: Reset uniphy0 software config
>> + - description: Reset uniphy1 software config
>> + - description: Reset uniphy0 AHB
>> + - description: Reset uniphy1 AHB
>> + - description: Reset uniphy0 system
>> + - description: Reset uniphy1 system
>> + - description: Reset uniphy0 XPCS
>> + - description: Reset uniphy1 SPCS
>> + - description: Reset uniphy port1 RX
>> + - description: Reset uniphy port1 TX
>> + - description: Reset uniphy port2 RX
>> + - description: Reset uniphy port2 TX
>> + - description: Reset PPE port1 RX
>> + - description: Reset PPE port1 TX
>> + - description: Reset PPE port2 RX
>> + - description: Reset PPE port2 TX
>> + - description: Reset PPE port1 MAC
>> + - description: Reset PPE port2 MAC
>> +
>> + reset-names:
>> + items:
>> + - const: ppe
>> + - const: uniphy0_soft
>> + - const: uniphy1_soft
>> + - const: uniphy0_ahb
>> + - const: uniphy1_ahb
>> + - const: uniphy0_sys
>> + - const: uniphy1_sys
>> + - const: uniphy0_xpcs
>> + - const: uniphy1_xpcs
>> + - const: uniphy_port1_rx
>> + - const: uniphy_port1_tx
>> + - const: uniphy_port2_rx
>> + - const: uniphy_port2_tx
>> + - const: nss_port1_rx
>> + - const: nss_port1_tx
>> + - const: nss_port2_rx
>> + - const: nss_port2_tx
>> + - const: nss_port1_mac
>> + - const: nss_port2_mac
>> +
>> + - if:
>> + properties:
>> + compatible:
>> + contains:
>> + const: qcom,ipq9574-ppe
>> + then:
>> + properties:
>> + clocks:
>> + items:
>> + - description: Display common AHB clock from gcc
>> + - description: Display common system clock from gcc
>> + - description: Display uniphy0 AHB clock from gcc
>> + - description: Display uniphy1 AHB clock from gcc
>> + - description: Display uniphy2 AHB clock from gcc
>> + - description: Display uniphy0 system clock from gcc
>> + - description: Display uniphy1 system clock from gcc
>> + - description: Display uniphy2 system clock from gcc
>> + - description: Display nss clock from gcc
>> + - description: Display nss noc clock from gcc
>> + - description: Display nss noc snoc clock from gcc
>> + - description: Display nss noc snoc_1 clock from gcc
>> + - description: Display PPE clock from nsscc
>> + - description: Display PPE config clock from nsscc
>> + - description: Display NSSNOC PPE clock from nsscc
>> + - description: Display NSSNOC PPE config clock from nsscc
>> + - description: Display EDMA clock from nsscc
>> + - description: Display EDMA config clock from nsscc
>> + - description: Display PPE IPE clock from nsscc
>> + - description: Display PPE BTQ clock from nsscc
>> + - description: Display port1 MAC clock from nsscc
>> + - description: Display port2 MAC clock from nsscc
>> + - description: Display port3 MAC clock from nsscc
>> + - description: Display port4 MAC clock from nsscc
>> + - description: Display port5 MAC clock from nsscc
>> + - description: Display port6 MAC clock from nsscc
>> + - description: Display port1 RX clock from nsscc
>> + - description: Display port1 TX clock from nsscc
>> + - description: Display port2 RX clock from nsscc
>> + - description: Display port2 TX clock from nsscc
>> + - description: Display port3 RX clock from nsscc
>> + - description: Display port3 TX clock from nsscc
>> + - description: Display port4 RX clock from nsscc
>> + - description: Display port4 TX clock from nsscc
>> + - description: Display port5 RX clock from nsscc
>> + - description: Display port5 TX clock from nsscc
>> + - description: Display port6 RX clock from nsscc
>> + - description: Display port6 TX clock from nsscc
>> + - description: Display UNIPHY port1 RX clock from nsscc
>> + - description: Display UNIPHY port1 TX clock from nsscc
>> + - description: Display UNIPHY port2 RX clock from nsscc
>> + - description: Display UNIPHY port2 TX clock from nsscc
>> + - description: Display UNIPHY port3 RX clock from nsscc
>> + - description: Display UNIPHY port3 TX clock from nsscc
>> + - description: Display UNIPHY port4 RX clock from nsscc
>> + - description: Display UNIPHY port4 TX clock from nsscc
>> + - description: Display UNIPHY port5 RX clock from nsscc
>> + - description: Display UNIPHY port5 TX clock from nsscc
>> + - description: Display UNIPHY port6 RX clock from nsscc
>> + - description: Display UNIPHY port6 TX clock from nsscc
>> + - description: Display port5 RX clock source from nsscc
>> + - description: Display port5 TX clock source from nsscc
>> + clock-names:
>> + items:
>> + - const: cmn_ahb
>> + - const: cmn_sys
>> + - const: uniphy0_ahb
>> + - const: uniphy1_ahb
>> + - const: uniphy2_ahb
>> + - const: uniphy0_sys
>> + - const: uniphy1_sys
>> + - const: uniphy2_sys
>> + - const: gcc_nsscc
>> + - const: gcc_nssnoc_nsscc
>> + - const: gcc_nssnoc_snoc
>> + - const: gcc_nssnoc_snoc_1
>> + - const: nss_ppe
>> + - const: nss_ppe_cfg
>> + - const: nssnoc_ppe
>> + - const: nssnoc_ppe_cfg
>> + - const: nss_edma
>> + - const: nss_edma_cfg
>> + - const: nss_ppe_ipe
>> + - const: nss_ppe_btq
>> + - const: port1_mac
>> + - const: port2_mac
>> + - const: port3_mac
>> + - const: port4_mac
>> + - const: port5_mac
>> + - const: port6_mac
>> + - const: nss_port1_rx
>> + - const: nss_port1_tx
>> + - const: nss_port2_rx
>> + - const: nss_port2_tx
>> + - const: nss_port3_rx
>> + - const: nss_port3_tx
>> + - const: nss_port4_rx
>> + - const: nss_port4_tx
>> + - const: nss_port5_rx
>> + - const: nss_port5_tx
>> + - const: nss_port6_rx
>> + - const: nss_port6_tx
>> + - const: uniphy_port1_rx
>> + - const: uniphy_port1_tx
>> + - const: uniphy_port2_rx
>> + - const: uniphy_port2_tx
>> + - const: uniphy_port3_rx
>> + - const: uniphy_port3_tx
>> + - const: uniphy_port4_rx
>> + - const: uniphy_port4_tx
>> + - const: uniphy_port5_rx
>> + - const: uniphy_port5_tx
>> + - const: uniphy_port6_rx
>> + - const: uniphy_port6_tx
>> + - const: nss_port5_rx_clk_src
>> + - const: nss_port5_tx_clk_src
>> +
>> + resets:
>> + items:
>> + - description: Reset PPE
>> + - description: Reset uniphy0 software config
>> + - description: Reset uniphy1 software config
>> + - description: Reset uniphy2 software config
>> + - description: Reset uniphy0 AHB
>> + - description: Reset uniphy1 AHB
>> + - description: Reset uniphy2 AHB
>> + - description: Reset uniphy0 system
>> + - description: Reset uniphy1 system
>> + - description: Reset uniphy2 system
>> + - description: Reset uniphy0 XPCS
>> + - description: Reset uniphy1 XPCS
>> + - description: Reset uniphy2 XPCS
>> + - description: Assert uniphy port1
>> + - description: Assert uniphy port2
>> + - description: Assert uniphy port3
>> + - description: Assert uniphy port4
>> + - description: Reset PPE port1
>> + - description: Reset PPE port2
>> + - description: Reset PPE port3
>> + - description: Reset PPE port4
>> + - description: Reset PPE port5
>> + - description: Reset PPE port6
>> + - description: Reset PPE port1 MAC
>> + - description: Reset PPE port2 MAC
>> + - description: Reset PPE port3 MAC
>> + - description: Reset PPE port4 MAC
>> + - description: Reset PPE port5 MAC
>> + - description: Reset PPE port6 MAC
>> +
>> + reset-names:
>> + items:
>> + - const: ppe
>> + - const: uniphy0_soft
>> + - const: uniphy1_soft
>> + - const: uniphy2_soft
>> + - const: uniphy0_ahb
>> + - const: uniphy1_ahb
>> + - const: uniphy2_ahb
>> + - const: uniphy0_sys
>> + - const: uniphy1_sys
>> + - const: uniphy2_sys
>> + - const: uniphy0_xpcs
>> + - const: uniphy1_xpcs
>> + - const: uniphy2_xpcs
>> + - const: uniphy0_port1_dis
>> + - const: uniphy0_port2_dis
>> + - const: uniphy0_port3_dis
>> + - const: uniphy0_port4_dis
>> + - const: nss_port1
>> + - const: nss_port2
>> + - const: nss_port3
>> + - const: nss_port4
>> + - const: nss_port5
>> + - const: nss_port6
>> + - const: nss_port1_mac
>> + - const: nss_port2_mac
>> + - const: nss_port3_mac
>> + - const: nss_port4_mac
>> + - const: nss_port5_mac
>> + - const: nss_port6_mac
>> +
>> +required:
>
> allOf: goes after required:

Ok.

>
>> + - compatible
>> + - reg
>> + - "#address-cells"
>> + - "#size-cells"
>> + - ranges
>> + - clocks
>> + - clock-names
>> + - resets
>> + - reset-names
>> + - tdm-config
>> + - buffer-management-config
>> + - queue-management-config
>> + - port-scheduler-resource
>> + - port-scheduler-config
>> +
>> +additionalProperties: false
>
>
>> +
>> +examples:
>> + - |
>> + #include <dt-bindings/clock/qcom,ipq9574-gcc.h>
>> + #include <dt-bindings/reset/qcom,ipq9574-gcc.h>
>> + #include <dt-bindings/clock/qcom,ipq9574-nsscc.h>
>> + #include <dt-bindings/reset/qcom,ipq9574-nsscc.h>
>> +
>> + soc {
>> + #address-cells = <1>;
>> + #size-cells = <1>;
>> + qcom_ppe: qcom-ppe@3a000000 {
>
> Drop label, Generic node names.

Ok. We are trying to identify an appropriate generic name for the PPE
from the device tree documentation, since it comprises many hardware
functions such as the ethernet MAC and other packet processing blocks.
We plan to update the node name to the generic name 'ethernet'.

>
>> + compatible = "qcom,ipq9574-ppe";
>
> Entire indentation of example is broken. Use one described in the
> bindings coding style.

will correct it, thanks for pointing out.

>
> Best regards,
> Krzysztof
>

2024-01-22 15:01:23

by Andrew Lunn

[permalink] [raw]
Subject: Re: [PATCH net-next 02/20] dt-bindings: net: qcom,ppe: Add bindings yaml file

> > > +++ b/Documentation/devicetree/bindings/net/qcom,ppe.yaml
> > > @@ -0,0 +1,1330 @@
> > > +# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
> > > +%YAML 1.2
> > > +---
> > > +$id: http://devicetree.org/schemas/net/qcom,ppe.yaml#
> > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > +
> > > +title: Qualcomm Packet Process Engine Ethernet controller
> >
> > Where is the ref to ethernet controllers schema?
> Sorry, the title above is not describing the device for this dtbindings
> correctly. It should say "Qualcomm Packet Process Engine". The
> reference to the schema for PPE is mentioned above.

I think you are not correctly understanding the comment. within the
PPE you have a collection of Ethernet interfaces. All the common
properties for Ethernet ports are described in

Documentation/devicetree/bindings/net/ethernet-controller.yaml

so you are expected to reference this schema.

> > > +description:
> > > + The PPE(packet process engine) is comprised of three componets, Ethernet
> > > + DMA, Switch core and Port wrapper, Ethernet DMA is used to transmit and
> > > + receive packets between Ethernet subsytem and host. The Switch core has
> > > + maximum 8 ports(maximum 6 front panel ports and two FIFO interfaces),
> > > + among which there are GMAC/XGMACs used as external interfaces and FIFO
> > > + interfaces connected the EDMA/EIP, The port wrapper provides connections
> > > + from the GMAC/XGMACS to SGMII/QSGMII/PSGMII/USXGMII/10G-BASER etc, there
> > > + are maximu 3 UNIPHY(PCS) instances supported by PPE.

I think a big part of the problem here is, you have a flat
representation of the PPE. But device tree is very hierarchical. The
hardware itself is also probably very hierarchical. Please spend some
time studying other DT descriptions of similar hardware. Then throw
away this vendor crap DT binding and start again from scratch, with a
hierarchical description of the hardware.

Andrew

2024-01-22 15:08:35

by Lei Wei

[permalink] [raw]
Subject: Re: [PATCH net-next 17/20] net: ethernet: qualcomm: Add PPE UNIPHY support for phylink



On 1/10/2024 8:09 PM, Russell King (Oracle) wrote:
> On Wed, Jan 10, 2024 at 07:40:29PM +0800, Luo Jie wrote:
>> +static int clk_uniphy_set_rate(struct clk_hw *hw, unsigned long rate,
>> + unsigned long parent_rate)
>> +{
>> + struct clk_uniphy *uniphy = to_clk_uniphy(hw);
>> +
>> + if (rate != UNIPHY_CLK_RATE_125M && rate != UNIPHY_CLK_RATE_312P5M)
>> + return -1;
>
> Sigh. I get very annoyed off by stuff like this. It's lazy programming,
> and makes me wonder why I should be bothered to spend time reviewing if
> the programmer can't be bothered to pay attention to details. It makes
> me wonder what else is done lazily in the patch.
>
> -1 is -EPERM "Operation not permitted". This is highly likely not an
> appropriate error code for this code.
>
Sorry for this. I will update the driver to have appropriate error codes
where required.

>> +int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy, int port)
>> +{
>> + u32 reg, val;
>> + int channel, ret;
>> +
>> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
>> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
>> + /* Only uniphy0 may have multi channels */
>> + channel = (uniphy->index == 0) ? (port - 1) : 0;
>> + reg = (channel == 0) ? VR_MII_AN_INTR_STS_ADDR :
>> + VR_MII_AN_INTR_STS_CHANNEL_ADDR(channel);
>> +
>> + /* Wait auto negotiation complete */
>> + ret = read_poll_timeout(ppe_uniphy_read, val,
>> + (val & CL37_ANCMPLT_INTR),
>> + 1000, 100000, true,
>> + uniphy, reg);
>> + if (ret) {
>> + dev_err(uniphy->ppe_dev->dev,
>> + "uniphy %d auto negotiation timeout\n", uniphy->index);
>> + return ret;
>> + }
>> +
>> + /* Clear auto negotiation complete interrupt */
>> + ppe_uniphy_mask(uniphy, reg, CL37_ANCMPLT_INTR, 0);
>> + }
>> +
>> + return 0;
>> +}
>
> Why is this necessary? Why is it callable outside this file? Shouldn't
> this be done in the .pcs_get_state method? If negotiation hasn't
> completed (and negotiation is being used) then .pcs_get_state should not
> report that the link is up.
>
Currently it is called outside this file in the following patch:
https://lore.kernel.org/netdev/[email protected]/

Yes, if in-band autoneg is used, .pcs_get_state should report the link
status, so this function is not needed and will be removed. I will
update the .pcs_get_state method for USXGMII when in-band autoneg is used.

>> +
>> +int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy, int port, int speed)
>> +{
>> + u32 reg, val;
>> + int channel;
>> +
>> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII ||
>> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
>> + /* Only uniphy0 may have multiple channels */
>> + channel = (uniphy->index == 0) ? (port - 1) : 0;
>> +
>> + reg = (channel == 0) ? SR_MII_CTRL_ADDR :
>> + SR_MII_CTRL_CHANNEL_ADDR(channel);
>> +
>> + switch (speed) {
>> + case SPEED_100:
>> + val = USXGMII_SPEED_100;
>> + break;
>> + case SPEED_1000:
>> + val = USXGMII_SPEED_1000;
>> + break;
>> + case SPEED_2500:
>> + val = USXGMII_SPEED_2500;
>> + break;
>> + case SPEED_5000:
>> + val = USXGMII_SPEED_5000;
>> + break;
>> + case SPEED_10000:
>> + val = USXGMII_SPEED_10000;
>> + break;
>> + case SPEED_10:
>> + val = USXGMII_SPEED_10;
>> + break;
>> + default:
>> + val = 0;
>> + break;
>> + }
>> +
>> + ppe_uniphy_mask(uniphy, reg, USXGMII_SPEED_MASK, val);
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy, int port, int duplex)
>> +{
>> + u32 reg;
>> + int channel;
>> +
>> + if (uniphy->interface == PHY_INTERFACE_MODE_USXGMII &&
>> + uniphy->interface == PHY_INTERFACE_MODE_QUSGMII) {
>> + /* Only uniphy0 may have multiple channels */
>> + channel = (uniphy->index == 0) ? (port - 1) : 0;
>> +
>> + reg = (channel == 0) ? SR_MII_CTRL_ADDR :
>> + SR_MII_CTRL_CHANNEL_ADDR(channel);
>> +
>> + ppe_uniphy_mask(uniphy, reg, USXGMII_DUPLEX_FULL,
>> + (duplex == DUPLEX_FULL) ? USXGMII_DUPLEX_FULL : 0);
>> + }
>> +
>> + return 0;
>> +}
>
> What calls the above two functions? Surely this should be called from
> the .pcs_link_up method? I would also imagine that you call each of
> these consecutively. So why not modify the register in one go rather
> than piecemeal like this. I'm not a fan of one-function-to-control-one-
> parameter-in-a-register style when it results in more register accesses
> than are really necessary.
>

When we consider the sequence of operations expected from the driver by
the hardware, the MAC and PCS operations are interleaved. So we were
not able to cleanly separate the MAC and PCS operations during link up
into .pcs_link_up() and .mac_link_up() ops. Instead, we have avoided
using pcs_link_up() and included the entire sequence in the
mac_link_up() op. This function is called by the PPE MAC support patch
below.
This function is called by the PPE MAC support patch below.

https://lore.kernel.org/netdev/[email protected]/

The sequence expected by the PPE hardware from the driver for link up is as below:
1. disable uniphy interface clock. (PCS operation)
2. configure the PPE port speed clock. (MAC operation)
3. configure the uniphy pcs speed for usxgmii (PCS operation).
4. configure PPE MAC speed (MAC operation).
5. enable uniphy interface clock (PCS operation).
6. reset uniphy pcs adapter (PCS operation).
7. enable mac (MAC operation).

I will also check the whole patch to rework the
one-function-to-control-one-parameter-in-a-register style used here,
thanks for the suggestion.

>> +static void ppe_pcs_get_state(struct phylink_pcs *pcs,
>> + struct phylink_link_state *state)
>> +{
>> + struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
>> + u32 val;
>> +
>> + switch (state->interface) {
>> + case PHY_INTERFACE_MODE_10GBASER:
>> + val = ppe_uniphy_read(uniphy, SR_XS_PCS_KR_STS1_ADDR);
>> + state->link = (val & SR_XS_PCS_KR_STS1_PLU) ? 1 : 0;
>
> Unnecessary tenary operation.
>
> state->link = !!(val & SR_XS_PCS_KR_STS1_PLU);
>

Sure, Thanks for the suggestion, I will update it.
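For reference, the two forms are equivalent for a single-bit test; `!!` collapses any non-zero value to exactly 1, so the ternary is redundant. The bit value below is hypothetical, not the real SR_XS_PCS_KR_STS1_PLU mask:

```c
#include <assert.h>

/* Hypothetical status bit for illustration; the real
 * SR_XS_PCS_KR_STS1_PLU value is defined in the driver headers. */
#define LINK_STATUS_BIT (1u << 2)

/* !!(x) normalizes any non-zero value to 1, replacing
 * "(val & bit) ? 1 : 0" without a ternary. */
static int link_from_status(unsigned int val)
{
	return !!(val & LINK_STATUS_BIT);
}
```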

>> + state->duplex = DUPLEX_FULL;
>> + state->speed = SPEED_10000;
>> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
>
> Excessive parens.
>
Will update it.

>> + break;
>> + case PHY_INTERFACE_MODE_2500BASEX:
>> + val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
>> + state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;
>
> Ditto.
>
Will update it.

>> + state->duplex = DUPLEX_FULL;
>> + state->speed = SPEED_2500;
>> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
>
> Ditto.
>
Will update it.

>> + break;
>> + case PHY_INTERFACE_MODE_1000BASEX:
>> + case PHY_INTERFACE_MODE_SGMII:
>> + val = ppe_uniphy_read(uniphy, UNIPHY_CHANNEL0_INPUT_OUTPUT_6_ADDR);
>> + state->link = (val & NEWADDEDFROMHERE_CH0_LINK_MAC) ? 1 : 0;
>> + state->duplex = (val & NEWADDEDFROMHERE_CH0_DUPLEX_MODE_MAC) ?
>> + DUPLEX_FULL : DUPLEX_HALF;
>> + if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_10M)
>> + state->speed = SPEED_10;
>> + else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_100M)
>> + state->speed = SPEED_100;
>> + else if (FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val) == UNIPHY_SPEED_1000M)
>> + state->speed = SPEED_1000;
>
> Looks like a switch(FIELD_GET(NEWADDEDFROMHERE_CH0_SPEED_MODE_MAC, val)
> would be better here. Also "NEWADDEDFROMHERE" ?
>
Sorry for the confusion, I will translate the register to a meaningful name.
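A sketch of the switch-based decode the reviewer suggests, using stand-in macros that mirror the kernel's GENMASK()/FIELD_GET(). The field position and the UNIPHY_SPEED_* encodings below are assumptions for illustration, not the real register values:

```c
#include <assert.h>

/* Stand-ins for the kernel's GENMASK()/FIELD_GET() (32-bit values). */
#define GENMASK(h, l)        (((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, val) (((val) & (mask)) / ((mask) & ~((mask) - 1)))

/* Assumed field position and speed encodings; the real values live in
 * the driver's register header. */
#define CH0_SPEED_MODE_MAC GENMASK(6, 4)

enum { UNIPHY_SPEED_10M = 0, UNIPHY_SPEED_100M = 1, UNIPHY_SPEED_1000M = 2 };

/* One FIELD_GET(), then a switch on the extracted value -- instead of
 * re-extracting the field in each else-if branch. */
static int uniphy_speed_decode(unsigned int val)
{
	switch (FIELD_GET(CH0_SPEED_MODE_MAC, val)) {
	case UNIPHY_SPEED_10M:
		return 10;
	case UNIPHY_SPEED_100M:
		return 100;
	case UNIPHY_SPEED_1000M:
		return 1000;
	default:
		return -1; /* unknown encoding */
	}
}
```

Besides readability, the switch evaluates the field extraction once rather than up to three times.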

>> + state->pause |= (MLO_PAUSE_RX | MLO_PAUSE_TX);
>
> Ditto.
>
Will update it.

> As you make no differentiation between 1000base-X and SGMII, I question
> whether your hardware supports 1000base-X. I seem to recall in previous
> discussions that it doesn't. So, that means it doesn't support the
> inband negotiation word format for 1000base-X. Thus, 1000base-X should
> not be included in any of these switch statements, and 1000base-X won't
> be usable.
>
Our hardware supports both 1000base-X and SGMII auto-negotiation; it
can resolve and decode the autoneg result for both the 1000base-X C37
word format and the SGMII autoneg word format. The resolved result is
stored in the same register exposed to software, which is why the same
code works for both cases.
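For context on why the two word formats differ, here is a sketch that decodes the SGMII control-word layout (bit positions per the published Cisco SGMII specification: bit 15 link, bit 12 duplex, bits 11:10 speed). The 1000base-X clause-37 word instead carries abilities (duplex, pause) and has no link/speed fields, which is why hardware that resolves both formats must record the result separately. Double-check all bit positions against the datasheet before relying on them:

```c
#include <assert.h>
#include <stdbool.h>

struct sgmii_state {
	bool link;
	bool full_duplex;
	int speed; /* Mb/s, -1 if the encoding is reserved */
};

/* Decode the SGMII control word passed in-band from PHY to MAC:
 * bit 15 = link up, bit 12 = full duplex, bits 11:10 = speed
 * (00 = 10M, 01 = 100M, 10 = 1000M, 11 = reserved). */
static struct sgmii_state sgmii_decode(unsigned int word)
{
	static const int speeds[] = { 10, 100, 1000, -1 };
	struct sgmii_state st = {
		.link        = !!(word & (1u << 15)),
		.full_duplex = !!(word & (1u << 12)),
		.speed       = speeds[(word >> 10) & 3],
	};
	return st;
}
```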

>> +/* [register] UNIPHY_MODE_CTRL */
>> +#define UNIPHY_MODE_CTRL_ADDR 0x46c
>> +#define NEWADDEDFROMHERE_CH0_AUTONEG_MODE BIT(0)
>> +#define NEWADDEDFROMHERE_CH1_CH0_SGMII BIT(1)
>> +#define NEWADDEDFROMHERE_CH4_CH1_0_SGMII BIT(2)
>> +#define NEWADDEDFROMHERE_SGMII_EVEN_LOW BIT(3)
>> +#define NEWADDEDFROMHERE_CH0_MODE_CTRL_25M GENMASK(6, 4)
>> +#define NEWADDEDFROMHERE_CH0_QSGMII_SGMII BIT(8)
>> +#define NEWADDEDFROMHERE_CH0_PSGMII_QSGMII BIT(9)
>> +#define NEWADDEDFROMHERE_SG_MODE BIT(10)
>> +#define NEWADDEDFROMHERE_SGPLUS_MODE BIT(11)
>> +#define NEWADDEDFROMHERE_XPCS_MODE BIT(12)
>> +#define NEWADDEDFROMHERE_USXG_EN BIT(13)
>> +#define NEWADDEDFROMHERE_SW_V17_V18 BIT(15)
>
> Again, why "NEWADDEDFROMHERE" ?
>
Will rename to use proper name.

>> +/* [register] VR_XS_PCS_EEE_MCTRL0 */
>> +#define VR_XS_PCS_EEE_MCTRL0_ADDR 0x38006
>> +#define LTX_EN BIT(0)
>> +#define LRX_EN BIT(1)
>> +#define SIGN_BIT BIT(6)
>
> "SIGN_BIT" is likely too generic a name.
>
As above, will rename the register to proper name.

>> +#define MULT_FACT_100NS GENMASK(11, 8)
>> +
>> +/* [register] VR_XS_PCS_KR_CTRL */
>> +#define VR_XS_PCS_KR_CTRL_ADDR 0x38007
>> +#define USXG_MODE GENMASK(12, 10)
>> +#define QUXGMII_MODE (FIELD_PREP(USXG_MODE, 0x5))
>> +
>> +/* [register] VR_XS_PCS_EEE_TXTIMER */
>> +#define VR_XS_PCS_EEE_TXTIMER_ADDR 0x38008
>> +#define TSL_RES GENMASK(5, 0)
>> +#define T1U_RES GENMASK(7, 6)
>> +#define TWL_RES GENMASK(12, 8)
>> +#define UNIPHY_XPCS_TSL_TIMER (FIELD_PREP(TSL_RES, 0xa))
>> +#define UNIPHY_XPCS_T1U_TIMER (FIELD_PREP(TSL_RES, 0x3))
>> +#define UNIPHY_XPCS_TWL_TIMER (FIELD_PREP(TSL_RES, 0x16))
>> +
>> +/* [register] VR_XS_PCS_EEE_RXTIMER */
>> +#define VR_XS_PCS_EEE_RXTIMER_ADDR 0x38009
>> +#define RES_100U GENMASK(7, 0)
>> +#define TWR_RES GENMASK(13, 8)
>> +#define UNIPHY_XPCS_100US_TIMER (FIELD_PREP(RES_100U, 0xc8))
>> +#define UNIPHY_XPCS_TWR_TIMER (FIELD_PREP(RES_100U, 0x1c))
>> +
>> +/* [register] VR_XS_PCS_DIG_STS */
>> +#define VR_XS_PCS_DIG_STS_ADDR 0x3800a
>> +#define AM_COUNT GENMASK(14, 0)
>> +#define QUXGMII_AM_COUNT (FIELD_PREP(AM_COUNT, 0x6018))
>> +
>> +/* [register] VR_XS_PCS_EEE_MCTRL1 */
>> +#define VR_XS_PCS_EEE_MCTRL1_ADDR 0x3800b
>> +#define TRN_LPI BIT(0)
>> +#define TRN_RXLPI BIT(8)
>> +
>> +/* [register] VR_MII_1_DIG_CTRL1 */
>> +#define VR_MII_DIG_CTRL1_CHANNEL1_ADDR 0x1a8000
>> +#define VR_MII_DIG_CTRL1_CHANNEL2_ADDR 0x1b8000
>> +#define VR_MII_DIG_CTRL1_CHANNEL3_ADDR 0x1c8000
>> +#define VR_MII_DIG_CTRL1_CHANNEL_ADDR(x) (0x1a8000 + 0x10000 * ((x) - 1))
>> +#define CHANNEL_USRA_RST BIT(5)
>> +
>> +/* [register] VR_MII_AN_CTRL */
>> +#define VR_MII_AN_CTRL_ADDR 0x1f8001
>> +#define VR_MII_AN_CTRL_CHANNEL1_ADDR 0x1a8001
>> +#define VR_MII_AN_CTRL_CHANNEL2_ADDR 0x1b8001
>> +#define VR_MII_AN_CTRL_CHANNEL3_ADDR 0x1c8001
>> +#define VR_MII_AN_CTRL_CHANNEL_ADDR(x) (0x1a8001 + 0x10000 * ((x) - 1))
>> +#define MII_AN_INTR_EN BIT(0)
>> +#define MII_CTRL BIT(8)
>
> Too generic a name.
>
Will update and rename it.

>> +
>> +/* [register] VR_MII_AN_INTR_STS */
>> +#define VR_MII_AN_INTR_STS_ADDR 0x1f8002
>> +#define VR_MII_AN_INTR_STS_CHANNEL1_ADDR 0x1a8002
>> +#define VR_MII_AN_INTR_STS_CHANNEL2_ADDR 0x1b8002
>> +#define VR_MII_AN_INTR_STS_CHANNEL3_ADDR 0x1c8002
>> +#define VR_MII_AN_INTR_STS_CHANNEL_ADDR(x) (0x1a8002 + 0x10000 * ((x) - 1))
>> +#define CL37_ANCMPLT_INTR BIT(0)
>> +
>> +/* [register] VR_XAUI_MODE_CTRL */
>> +#define VR_XAUI_MODE_CTRL_ADDR 0x1f8004
>> +#define VR_XAUI_MODE_CTRL_CHANNEL1_ADDR 0x1a8004
>> +#define VR_XAUI_MODE_CTRL_CHANNEL2_ADDR 0x1b8004
>> +#define VR_XAUI_MODE_CTRL_CHANNEL3_ADDR 0x1c8004
>> +#define VR_XAUI_MODE_CTRL_CHANNEL_ADDR(x) (0x1a8004 + 0x10000 * ((x) - 1))
>> +#define IPG_CHECK BIT(0)
>> +
>> +/* [register] SR_MII_CTRL */
>> +#define SR_MII_CTRL_ADDR 0x1f0000
>> +#define SR_MII_CTRL_CHANNEL1_ADDR 0x1a0000
>> +#define SR_MII_CTRL_CHANNEL2_ADDR 0x1b0000
>> +#define SR_MII_CTRL_CHANNEL3_ADDR 0x1c0000
>> +#define SR_MII_CTRL_CHANNEL_ADDR(x) (0x1a0000 + 0x10000 * ((x) - 1))
>
>
>> +#define AN_ENABLE BIT(12)
>
> Looks like MDIO_AN_CTRL1_ENABLE
>

This is the uniphy XPCS autoneg enable control bit. Our uniphy is not
accessed over MDIO, so I will rename it to a meaningful name.

>> +#define USXGMII_DUPLEX_FULL BIT(8)
>> +#define USXGMII_SPEED_MASK (BIT(13) | BIT(6) | BIT(5))
>> +#define USXGMII_SPEED_10000 (BIT(13) | BIT(6))
>> +#define USXGMII_SPEED_5000 (BIT(13) | BIT(5))
>> +#define USXGMII_SPEED_2500 BIT(5)
>> +#define USXGMII_SPEED_1000 BIT(6)
>> +#define USXGMII_SPEED_100 BIT(13)
>> +#define USXGMII_SPEED_10 0
>
> Looks rather like the standard IEEE 802.3 definitions except for the
> 2.5G and 5G speeds. Probably worth a comment stating that they're
> slightly different.
>

Sure, will add comment for it in code and documentation files, thanks.

>> +
>> +/* PPE UNIPHY data type */
>> +struct ppe_uniphy {
>> + void __iomem *base;
>> + struct ppe_device *ppe_dev;
>> + unsigned int index;
>> + phy_interface_t interface;
>> + struct phylink_pcs pcs;
>> +};
>> +
>> +#define pcs_to_ppe_uniphy(_pcs) container_of(_pcs, struct ppe_uniphy, pcs)
>
> As this should only be used in the .c file, I suggest making this a
> static function in the .c file. There should be no requirement to use
> it outside of the .c file.
>

This is used in the following patch as I explained above for the MAC/PCS
related comment:
https://lore.kernel.org/netdev/[email protected]/

>> +
>> +struct ppe_uniphy *ppe_uniphy_setup(struct platform_device *pdev);
>> +
>> +int ppe_uniphy_speed_set(struct ppe_uniphy *uniphy,
>> + int port, int speed);
>> +
>> +int ppe_uniphy_duplex_set(struct ppe_uniphy *uniphy,
>> + int port, int duplex);
>> +
>> +int ppe_uniphy_adapter_reset(struct ppe_uniphy *uniphy,
>> + int port);
>> +
>> +int ppe_uniphy_autoneg_complete_check(struct ppe_uniphy *uniphy,
>> + int port);
>> +
>> +int ppe_uniphy_port_gcc_clock_en_set(struct ppe_uniphy *uniphy,
>> + int port, bool enable);
>> +
>> +#endif /* _PPE_UNIPHY_H_ */
>> diff --git a/include/linux/soc/qcom/ppe.h b/include/linux/soc/qcom/ppe.h
>> index 268109c823ad..d3cb18df33fa 100644
>> --- a/include/linux/soc/qcom/ppe.h
>> +++ b/include/linux/soc/qcom/ppe.h
>> @@ -20,6 +20,7 @@ struct ppe_device {
>> struct dentry *debugfs_root;
>> bool is_ppe_probed;
>> void *ppe_priv;
>> + void *uniphy;
>
> Not struct ppe_uniphy *uniphy? You can declare the struct before use
> via:
>
> struct ppe_uniphy;
>
> so you don't need to include ppe_uniphy.h in this header.
>

Thanks for the good suggestion, will follow it.

> Thanks.
>

2024-01-22 15:31:52

by Lei Wei

[permalink] [raw]
Subject: Re: [PATCH net-next 18/20] net: ethernet: qualcomm: Add PPE MAC support for phylink



On 1/10/2024 8:18 PM, Russell King (Oracle) wrote:
> On Wed, Jan 10, 2024 at 07:40:30PM +0800, Luo Jie wrote:
>> +static void ppe_phylink_mac_link_up(struct ppe_device *ppe_dev, int port,
>> + struct phy_device *phy,
>> + unsigned int mode, phy_interface_t interface,
>> + int speed, int duplex, bool tx_pause, bool rx_pause)
>> +{
>> + struct phylink_pcs *pcs = ppe_phylink_mac_select_pcs(ppe_dev, port, interface);
>> + struct ppe_uniphy *uniphy = pcs_to_ppe_uniphy(pcs);
>> + struct ppe_port *ppe_port = ppe_port_get(ppe_dev, port);
>> +
>> + /* Wait uniphy auto-negotiation completion */
>> + ppe_uniphy_autoneg_complete_check(uniphy, port);
>
> Way too late...
>


Yes, agreed, this will be removed. If in-band autoneg is used,
pcs_get_state should report the link status, making this function call
unnecessary.

>> @@ -352,6 +1230,12 @@ static int ppe_port_maxframe_set(struct ppe_device *ppe_dev,
>> }
>>
>> static struct ppe_device_ops qcom_ppe_ops = {
>> + .phylink_setup = ppe_phylink_setup,
>> + .phylink_destroy = ppe_phylink_destroy,
>> + .phylink_mac_config = ppe_phylink_mac_config,
>> + .phylink_mac_link_up = ppe_phylink_mac_link_up,
>> + .phylink_mac_link_down = ppe_phylink_mac_link_down,
>> + .phylink_mac_select_pcs = ppe_phylink_mac_select_pcs,
>> .set_maxframe = ppe_port_maxframe_set,
>> };
>
> Why this extra layer of abstraction? If you need separate phylink
> operations, why not implement separate phylink_mac_ops structures?
>

This PPE driver will serve as the base driver for higher-level drivers
such as the ethernet DMA (EDMA) driver and the DSA switch driver. The
ppe_device_ops structure is exported to these higher-level drivers to
allow access to PPE operations. For example, the EDMA driver (an
ethernet netdevice driver, to be posted for review after the PPE
driver) will use the phylink_setup/destroy ops to manage the
netdevice-to-PHY linkage. The set_maxframe op will also be used by the
EDMA driver when the MTU is changed on an ethernet port.

I also mentioned it in the section "Exported PPE Device Operations" in
PPE driver documentation:
https://lore.kernel.org/netdev/[email protected]/

The PPE DSA switch driver is expected to use the phylink_mac ops.
However, we will remove the phylink_mac ops from this patch for now,
since they are currently unused.