2020-02-25 16:31:23

by Vadym Kochan

Subject: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
ports of 10GbE uplinks or 2 ports of 40Gbps stacking, targeted largely
at wireless SMB deployments.

The Prestera Switchdev driver is firmware-based and operates over the
PCI bus. The driver is split into 2 modules:

- prestera_sw.ko - the main generic Switchdev logic for Prestera ASICs.

- prestera_pci.ko - bus-specific code which also implements firmware
loading and the low-level messaging protocol between
the firmware and the switchdev driver.

This driver implementation includes only L1 & basic L2 support.

The core Prestera switching logic is implemented in prestera.c. Between
the core logic and the firmware there is an intermediate HW layer,
implemented in prestera_hw.c; its purpose is to encapsulate HW-related
logic, since there is a plan to support more devices with different HW
configurations in the future.

The firmware has to be loaded each time the device is reset. The driver
loads it from:

/lib/firmware/marvell/mvsw_prestera_fw.img

The firmware image version is located within an internal header and
consists of 3 numbers - MAJOR.MINOR.PATCH. Additionally, the driver has
a hard-coded minimum supported firmware version which it can work with:

MAJOR - reflects ABI-level compatibility between the driver and the
loaded firmware; this number must be the same for the driver
and the loaded firmware.

MINOR - the minimum supported firmware MINOR version; the loaded
firmware's MINOR must be greater than or equal to this number.

PATCH - indicates fixes only; the firmware ABI is not changed.

The firmware image will be submitted to the linux-firmware repository
after the driver is accepted.

The following Switchdev features are supported:

- VLAN-aware bridge offloading
- VLAN-unaware bridge offloading
- FDB offloading (learning, ageing)
- Switchport configuration

CPU RX/TX support will be provided in the next contribution.

Vadym Kochan (3):
net: marvell: prestera: Add Switchdev driver for Prestera family ASIC
device 98DX326x (AC3x)
net: marvell: prestera: Add PCI interface support
dt-bindings: marvell,prestera: Add address mapping for Prestera
Switchdev PCIe driver

.../bindings/net/marvell,prestera.txt | 13 +
drivers/net/ethernet/marvell/Kconfig | 1 +
drivers/net/ethernet/marvell/Makefile | 1 +
drivers/net/ethernet/marvell/prestera/Kconfig | 24 +
.../net/ethernet/marvell/prestera/Makefile | 5 +
.../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
.../net/ethernet/marvell/prestera/prestera.h | 244 +++
.../marvell/prestera/prestera_drv_ver.h | 23 +
.../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
.../ethernet/marvell/prestera/prestera_hw.h | 159 ++
.../ethernet/marvell/prestera/prestera_pci.c | 840 +++++++++
.../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
12 files changed, 5123 insertions(+)
create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c

--
2.17.1


2020-02-25 16:31:35

by Vadym Kochan

Subject: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

Add a PCI interface driver for the Prestera Switch ASICs family, which
provides:

- Firmware loading mechanism
- Requests & events handling to/from the firmware
- Access to the firmware on the bus level

The firmware has to be loaded each time the device is reset. The driver
loads it from:

/lib/firmware/marvell/mvsw_prestera_fw.img

The firmware image version is located within an internal header and
consists of 3 numbers - MAJOR.MINOR.PATCH. Additionally, the driver has
a hard-coded minimum supported firmware version which it can work with:

MAJOR - reflects ABI-level compatibility between the driver and the
loaded firmware; this number must be the same for the driver
and the loaded firmware.

MINOR - the minimum supported firmware MINOR version; the loaded
firmware's MINOR must be greater than or equal to this number.

PATCH - indicates fixes only; the firmware ABI is not changed.

Signed-off-by: Vadym Kochan <[email protected]>
Signed-off-by: Oleksandr Mazur <[email protected]>
---
drivers/net/ethernet/marvell/prestera/Kconfig | 11 +
.../net/ethernet/marvell/prestera/Makefile | 2 +
.../ethernet/marvell/prestera/prestera_pci.c | 840 ++++++++++++++++++
3 files changed, 853 insertions(+)
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c

diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
index d0b416dcb677..a4e52f7af8dd 100644
--- a/drivers/net/ethernet/marvell/prestera/Kconfig
+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
@@ -11,3 +11,14 @@ config PRESTERA

To compile this driver as a module, choose M here: the
module will be called prestera_sw.
+
+config PRESTERA_PCI
+ tristate "PCI interface driver for Marvell Prestera Switch ASICs family"
+ depends on PCI && HAS_IOMEM && PRESTERA
+ default m
+ ---help---
+ This is the implementation of PCI interface support for the Marvell
+ Prestera Switch ASICs family.
+
+ To compile this driver as a module, choose M here: the
+ module will be called prestera_pci.
diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
index 9446298fb7f4..5d9b579a0314 100644
--- a/drivers/net/ethernet/marvell/prestera/Makefile
+++ b/drivers/net/ethernet/marvell/prestera/Makefile
@@ -1,3 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PRESTERA) += prestera_sw.o
prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
+
+obj-$(CONFIG_PRESTERA_PCI) += prestera_pci.o
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
new file mode 100644
index 000000000000..847a84e3684a
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
@@ -0,0 +1,840 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+ *
+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/circ_buf.h>
+#include <linux/firmware.h>
+
+#include "prestera.h"
+
+#define MVSW_FW_FILENAME "marvell/mvsw_prestera_fw.img"
+
+#define MVSW_SUPP_FW_MAJ_VER 1
+#define MVSW_SUPP_FW_MIN_VER 0
+#define MVSW_SUPP_FW_PATCH_VER 0
+
+#define mvsw_wait_timeout(cond, waitms) \
+({ \
+ unsigned long __wait_end = jiffies + msecs_to_jiffies(waitms); \
+ bool __wait_ret = false; \
+ do { \
+ if (cond) { \
+ __wait_ret = true; \
+ break; \
+ } \
+ cond_resched(); \
+ } while (time_before(jiffies, __wait_end)); \
+ __wait_ret; \
+})
+
+#define MVSW_FW_HDR_MAGIC 0x351D9D06
+#define MVSW_FW_DL_TIMEOUT 50000
+#define MVSW_FW_BLK_SZ 1024
+
+#define FW_VER_MAJ_MUL 1000000
+#define FW_VER_MIN_MUL 1000
+
+#define FW_VER_MAJ(v) ((v) / FW_VER_MAJ_MUL)
+
+#define FW_VER_MIN(v) \
+ (((v) - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL)) / FW_VER_MIN_MUL)
+
+#define FW_VER_PATCH(v) \
+ ((v) - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL) - (FW_VER_MIN(v) * FW_VER_MIN_MUL))
+
+struct mvsw_pr_fw_header {
+ __be32 magic_number;
+ __be32 version_value;
+ u8 reserved[8];
+} __packed;
+
+struct mvsw_pr_ldr_regs {
+ u32 ldr_ready;
+ u32 pad1;
+
+ u32 ldr_img_size;
+ u32 ldr_ctl_flags;
+
+ u32 ldr_buf_offs;
+ u32 ldr_buf_size;
+
+ u32 ldr_buf_rd;
+ u32 pad2;
+ u32 ldr_buf_wr;
+
+ u32 ldr_status;
+} __packed __aligned(4);
+
+#define MVSW_LDR_REG_OFFSET(f) offsetof(struct mvsw_pr_ldr_regs, f)
+
+#define MVSW_LDR_READY_MAGIC 0xf00dfeed
+
+#define MVSW_LDR_STATUS_IMG_DL BIT(0)
+#define MVSW_LDR_STATUS_START_FW BIT(1)
+#define MVSW_LDR_STATUS_INVALID_IMG BIT(2)
+#define MVSW_LDR_STATUS_NOMEM BIT(3)
+
+#define mvsw_ldr_write(fw, reg, val) \
+ writel(val, (fw)->ldr_regs + (reg))
+#define mvsw_ldr_read(fw, reg) \
+ readl((fw)->ldr_regs + (reg))
+
+/* fw loader registers */
+#define MVSW_LDR_READY_REG MVSW_LDR_REG_OFFSET(ldr_ready)
+#define MVSW_LDR_IMG_SIZE_REG MVSW_LDR_REG_OFFSET(ldr_img_size)
+#define MVSW_LDR_CTL_REG MVSW_LDR_REG_OFFSET(ldr_ctl_flags)
+#define MVSW_LDR_BUF_SIZE_REG MVSW_LDR_REG_OFFSET(ldr_buf_size)
+#define MVSW_LDR_BUF_OFFS_REG MVSW_LDR_REG_OFFSET(ldr_buf_offs)
+#define MVSW_LDR_BUF_RD_REG MVSW_LDR_REG_OFFSET(ldr_buf_rd)
+#define MVSW_LDR_BUF_WR_REG MVSW_LDR_REG_OFFSET(ldr_buf_wr)
+#define MVSW_LDR_STATUS_REG MVSW_LDR_REG_OFFSET(ldr_status)
+
+#define MVSW_LDR_CTL_DL_START BIT(0)
+
+#define MVSW_LDR_WR_IDX_MOVE(fw, n) \
+do { \
+ typeof(fw) __fw = (fw); \
+ (__fw)->ldr_wr_idx = ((__fw)->ldr_wr_idx + (n)) & \
+ ((__fw)->ldr_buf_len - 1); \
+} while (0)
+
+#define MVSW_LDR_WR_IDX_COMMIT(fw) \
+({ \
+ typeof(fw) __fw = (fw); \
+ mvsw_ldr_write((__fw), MVSW_LDR_BUF_WR_REG, \
+ (__fw)->ldr_wr_idx); \
+})
+
+#define MVSW_LDR_WR_PTR(fw) \
+({ \
+ typeof(fw) __fw = (fw); \
+ ((__fw)->ldr_ring_buf + (__fw)->ldr_wr_idx); \
+})
+
+#define MVSW_EVT_QNUM_MAX 4
+
+struct mvsw_pr_fw_evtq_regs {
+ u32 rd_idx;
+ u32 pad1;
+ u32 wr_idx;
+ u32 pad2;
+ u32 offs;
+ u32 len;
+};
+
+struct mvsw_pr_fw_regs {
+ u32 fw_ready;
+ u32 pad;
+ u32 cmd_offs;
+ u32 cmd_len;
+ u32 evt_offs;
+ u32 evt_qnum;
+
+ u32 cmd_req_ctl;
+ u32 cmd_req_len;
+ u32 cmd_rcv_ctl;
+ u32 cmd_rcv_len;
+
+ u32 fw_status;
+
+ struct mvsw_pr_fw_evtq_regs evtq_list[MVSW_EVT_QNUM_MAX];
+};
+
+#define MVSW_FW_REG_OFFSET(f) offsetof(struct mvsw_pr_fw_regs, f)
+
+#define MVSW_FW_READY_MAGIC 0xcafebabe
+
+/* fw registers */
+#define MVSW_FW_READY_REG MVSW_FW_REG_OFFSET(fw_ready)
+
+#define MVSW_CMD_BUF_OFFS_REG MVSW_FW_REG_OFFSET(cmd_offs)
+#define MVSW_CMD_BUF_LEN_REG MVSW_FW_REG_OFFSET(cmd_len)
+#define MVSW_EVT_BUF_OFFS_REG MVSW_FW_REG_OFFSET(evt_offs)
+#define MVSW_EVT_QNUM_REG MVSW_FW_REG_OFFSET(evt_qnum)
+
+#define MVSW_CMD_REQ_CTL_REG MVSW_FW_REG_OFFSET(cmd_req_ctl)
+#define MVSW_CMD_REQ_LEN_REG MVSW_FW_REG_OFFSET(cmd_req_len)
+
+#define MVSW_CMD_RCV_CTL_REG MVSW_FW_REG_OFFSET(cmd_rcv_ctl)
+#define MVSW_CMD_RCV_LEN_REG MVSW_FW_REG_OFFSET(cmd_rcv_len)
+#define MVSW_FW_STATUS_REG MVSW_FW_REG_OFFSET(fw_status)
+
+/* MVSW_CMD_REQ_CTL_REG flags */
+#define MVSW_CMD_F_REQ_SENT BIT(0)
+#define MVSW_CMD_F_REPL_RCVD BIT(1)
+
+/* MVSW_CMD_RCV_CTL_REG flags */
+#define MVSW_CMD_F_REPL_SENT BIT(0)
+
+#define MVSW_EVTQ_REG_OFFSET(q, f) \
+ (MVSW_FW_REG_OFFSET(evtq_list) + \
+ (q) * sizeof(struct mvsw_pr_fw_evtq_regs) + \
+ offsetof(struct mvsw_pr_fw_evtq_regs, f))
+
+#define MVSW_EVTQ_RD_IDX_REG(q) MVSW_EVTQ_REG_OFFSET(q, rd_idx)
+#define MVSW_EVTQ_WR_IDX_REG(q) MVSW_EVTQ_REG_OFFSET(q, wr_idx)
+#define MVSW_EVTQ_OFFS_REG(q) MVSW_EVTQ_REG_OFFSET(q, offs)
+#define MVSW_EVTQ_LEN_REG(q) MVSW_EVTQ_REG_OFFSET(q, len)
+
+#define mvsw_fw_write(fw, reg, val) writel(val, (fw)->hw_regs + (reg))
+#define mvsw_fw_read(fw, reg) readl((fw)->hw_regs + (reg))
+
+struct mvsw_pr_fw_evtq {
+ u8 __iomem *addr;
+ size_t len;
+};
+
+struct mvsw_pr_fw {
+ struct workqueue_struct *wq;
+ struct mvsw_pr_device dev;
+ struct pci_dev *pci_dev;
+ u8 __iomem *mem_addr;
+
+ u8 __iomem *ldr_regs;
+ u8 __iomem *hw_regs;
+
+ u8 __iomem *ldr_ring_buf;
+ u32 ldr_buf_len;
+ u32 ldr_wr_idx;
+
+ /* serialize access to dev->send_req */
+ struct mutex cmd_mtx;
+ size_t cmd_mbox_len;
+ u8 __iomem *cmd_mbox;
+ struct mvsw_pr_fw_evtq evt_queue[MVSW_EVT_QNUM_MAX];
+ u8 evt_qnum;
+ struct work_struct evt_work;
+ u8 __iomem *evt_buf;
+ u8 *evt_msg;
+};
+
+#define mvsw_fw_dev(fw) ((fw)->dev.dev)
+
+#define PRESTERA_DEVICE(id) PCI_VDEVICE(MARVELL, (id))
+
+static struct mvsw_pr_pci_match {
+ struct pci_driver driver;
+ const struct pci_device_id id;
+ bool registered;
+} mvsw_pci_devices[] = {
+ {
+ .driver = { .name = "AC3x 98DX326x", },
+ .id = { PRESTERA_DEVICE(0xc804), 0 },
+ },
+ {{ }, { },}
+};
+
+static int mvsw_pr_fw_load(struct mvsw_pr_fw *fw);
+
+static u32 mvsw_pr_fw_evtq_len(struct mvsw_pr_fw *fw, u8 qid)
+{
+ return fw->evt_queue[qid].len;
+}
+
+static u32 mvsw_pr_fw_evtq_avail(struct mvsw_pr_fw *fw, u8 qid)
+{
+ u32 wr_idx = mvsw_fw_read(fw, MVSW_EVTQ_WR_IDX_REG(qid));
+ u32 rd_idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
+
+ return CIRC_CNT(wr_idx, rd_idx, mvsw_pr_fw_evtq_len(fw, qid));
+}
+
+static void mvsw_pr_fw_evtq_rd_set(struct mvsw_pr_fw *fw,
+ u8 qid, u32 idx)
+{
+ u32 rd_idx = idx & (mvsw_pr_fw_evtq_len(fw, qid) - 1);
+
+ mvsw_fw_write(fw, MVSW_EVTQ_RD_IDX_REG(qid), rd_idx);
+}
+
+static u8 __iomem *mvsw_pr_fw_evtq_buf(struct mvsw_pr_fw *fw,
+ u8 qid)
+{
+ return fw->evt_queue[qid].addr;
+}
+
+static u32 mvsw_pr_fw_evtq_read32(struct mvsw_pr_fw *fw, u8 qid)
+{
+ u32 rd_idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
+ u32 val;
+
+ val = readl(mvsw_pr_fw_evtq_buf(fw, qid) + rd_idx);
+ mvsw_pr_fw_evtq_rd_set(fw, qid, rd_idx + 4);
+ return val;
+}
+
+static ssize_t mvsw_pr_fw_evtq_read_buf(struct mvsw_pr_fw *fw,
+ u8 qid, u8 *buf, size_t len)
+{
+ u32 idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
+ u8 __iomem *evtq_addr = mvsw_pr_fw_evtq_buf(fw, qid);
+ u32 *buf32 = (u32 *)buf;
+ int i;
+
+ for (i = 0; i < len / 4; buf32++, i++) {
+ *buf32 = readl_relaxed(evtq_addr + idx);
+ idx = (idx + 4) & (mvsw_pr_fw_evtq_len(fw, qid) - 1);
+ }
+
+ mvsw_pr_fw_evtq_rd_set(fw, qid, idx);
+
+ return i;
+}
+
+static u8 mvsw_pr_fw_evtq_pick(struct mvsw_pr_fw *fw)
+{
+ int qid;
+
+ for (qid = 0; qid < fw->evt_qnum; qid++) {
+ if (mvsw_pr_fw_evtq_avail(fw, qid) >= 4)
+ return qid;
+ }
+
+ return MVSW_EVT_QNUM_MAX;
+}
+
+static void mvsw_pr_fw_evt_work_fn(struct work_struct *work)
+{
+ struct mvsw_pr_fw *fw;
+ u8 *msg;
+ u8 qid;
+
+ fw = container_of(work, struct mvsw_pr_fw, evt_work);
+ msg = fw->evt_msg;
+
+ while ((qid = mvsw_pr_fw_evtq_pick(fw)) < MVSW_EVT_QNUM_MAX) {
+ u32 idx;
+ u32 len;
+
+ len = mvsw_pr_fw_evtq_read32(fw, qid);
+ idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
+
+ WARN_ON(mvsw_pr_fw_evtq_avail(fw, qid) < len);
+
+ if (WARN_ON(len > MVSW_MSG_MAX_SIZE)) {
+ mvsw_pr_fw_evtq_rd_set(fw, qid, idx + len);
+ continue;
+ }
+
+ mvsw_pr_fw_evtq_read_buf(fw, qid, msg, len);
+
+ if (fw->dev.recv_msg)
+ fw->dev.recv_msg(&fw->dev, msg, len);
+ }
+}
+
+static int mvsw_pr_fw_wait_reg32(struct mvsw_pr_fw *fw,
+ u32 reg, u32 val, unsigned int wait)
+{
+ if (mvsw_wait_timeout(mvsw_fw_read(fw, reg) == val, wait))
+ return 0;
+
+ return -EBUSY;
+}
+
+static void mvsw_pci_copy_to(u8 __iomem *dst, u8 *src, size_t len)
+{
+ u32 __iomem *dst32 = (u32 __iomem *)dst;
+ u32 *src32 = (u32 *)src;
+ int i;
+
+ for (i = 0; i < (len / 4); dst32++, src32++, i++)
+ writel_relaxed(*src32, dst32);
+}
+
+static void mvsw_pci_copy_from(u8 *dst, u8 __iomem *src, size_t len)
+{
+ u32 *dst32 = (u32 *)dst;
+ u32 __iomem *src32 = (u32 __iomem *)src;
+ int i;
+
+ for (i = 0; i < (len / 4); dst32++, src32++, i++)
+ *dst32 = readl_relaxed(src32);
+}
+
+static int mvsw_pr_fw_cmd_send(struct mvsw_pr_fw *fw,
+ u8 *in_msg, size_t in_size,
+ u8 *out_msg, size_t out_size,
+ unsigned int wait)
+{
+ u32 ret_size = 0;
+ int err = 0;
+
+ if (!wait)
+ wait = 30000;
+
+ if (ALIGN(in_size, 4) > fw->cmd_mbox_len)
+ return -EMSGSIZE;
+
+ /* wait until the previous FW reply is consumed */
+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_CMD_RCV_CTL_REG, 0, 30);
+ if (err) {
+ dev_err(mvsw_fw_dev(fw), "timed out waiting for previous FW reply\n");
+ return err;
+ }
+
+ mvsw_fw_write(fw, MVSW_CMD_REQ_LEN_REG, in_size);
+ mvsw_pci_copy_to(fw->cmd_mbox, in_msg, in_size);
+
+ mvsw_fw_write(fw, MVSW_CMD_REQ_CTL_REG, MVSW_CMD_F_REQ_SENT);
+
+ /* wait for reply from FW */
+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_CMD_RCV_CTL_REG, MVSW_CMD_F_REPL_SENT,
+ wait);
+ if (err) {
+ dev_err(mvsw_fw_dev(fw), "timed out waiting for FW reply\n");
+ goto cmd_exit;
+ }
+
+ ret_size = mvsw_fw_read(fw, MVSW_CMD_RCV_LEN_REG);
+ if (ret_size > out_size) {
+ dev_err(mvsw_fw_dev(fw), "ret_size (%u) > out_size (%zu)\n",
+ ret_size, out_size);
+ err = -EMSGSIZE;
+ goto cmd_exit;
+ }
+
+ mvsw_pci_copy_from(out_msg, fw->cmd_mbox + in_size, ret_size);
+
+cmd_exit:
+ mvsw_fw_write(fw, MVSW_CMD_REQ_CTL_REG, MVSW_CMD_F_REPL_RCVD);
+ return err;
+}
+
+static int mvsw_pr_fw_send_req(struct mvsw_pr_device *dev,
+ u8 *in_msg, size_t in_size, u8 *out_msg,
+ size_t out_size, unsigned int wait)
+{
+ struct mvsw_pr_fw *fw;
+ ssize_t ret;
+
+ fw = container_of(dev, struct mvsw_pr_fw, dev);
+
+ mutex_lock(&fw->cmd_mtx);
+ ret = mvsw_pr_fw_cmd_send(fw, in_msg, in_size, out_msg, out_size, wait);
+ mutex_unlock(&fw->cmd_mtx);
+
+ return ret;
+}
+
+static int mvsw_pr_fw_init(struct mvsw_pr_fw *fw)
+{
+ u8 __iomem *base;
+ int err;
+ u8 qid;
+
+ err = mvsw_pr_fw_load(fw);
+ if (err && err != -ETIMEDOUT)
+ return err;
+
+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_FW_READY_REG,
+ MVSW_FW_READY_MAGIC, 20000);
+ if (err) {
+ dev_err(mvsw_fw_dev(fw), "FW failed to start\n");
+ return err;
+ }
+
+ base = fw->mem_addr;
+
+ fw->cmd_mbox = base + mvsw_fw_read(fw, MVSW_CMD_BUF_OFFS_REG);
+ fw->cmd_mbox_len = mvsw_fw_read(fw, MVSW_CMD_BUF_LEN_REG);
+ mutex_init(&fw->cmd_mtx);
+
+ fw->evt_buf = base + mvsw_fw_read(fw, MVSW_EVT_BUF_OFFS_REG);
+ fw->evt_qnum = mvsw_fw_read(fw, MVSW_EVT_QNUM_REG);
+ fw->evt_msg = kmalloc(MVSW_MSG_MAX_SIZE, GFP_KERNEL);
+ if (!fw->evt_msg)
+ return -ENOMEM;
+
+ for (qid = 0; qid < fw->evt_qnum; qid++) {
+ u32 offs = mvsw_fw_read(fw, MVSW_EVTQ_OFFS_REG(qid));
+ struct mvsw_pr_fw_evtq *evtq = &fw->evt_queue[qid];
+
+ evtq->len = mvsw_fw_read(fw, MVSW_EVTQ_LEN_REG(qid));
+ evtq->addr = fw->evt_buf + offs;
+ }
+
+ return 0;
+}
+
+static void mvsw_pr_fw_uninit(struct mvsw_pr_fw *fw)
+{
+ kfree(fw->evt_msg);
+}
+
+static irqreturn_t mvsw_pci_irq_handler(int irq, void *dev_id)
+{
+ struct mvsw_pr_fw *fw = dev_id;
+
+ queue_work(fw->wq, &fw->evt_work);
+
+ return IRQ_HANDLED;
+}
+
+static int mvsw_pr_ldr_wait_reg32(struct mvsw_pr_fw *fw,
+ u32 reg, u32 val, unsigned int wait)
+{
+ if (mvsw_wait_timeout(mvsw_ldr_read(fw, reg) == val, wait))
+ return 0;
+
+ return -EBUSY;
+}
+
+static u32 mvsw_pr_ldr_buf_avail(struct mvsw_pr_fw *fw)
+{
+ u32 rd_idx = mvsw_ldr_read(fw, MVSW_LDR_BUF_RD_REG);
+
+ return CIRC_SPACE(fw->ldr_wr_idx, rd_idx, fw->ldr_buf_len);
+}
+
+static int mvsw_pr_ldr_send_buf(struct mvsw_pr_fw *fw, const u8 *buf,
+ size_t len)
+{
+ int i;
+
+ if (!mvsw_wait_timeout(mvsw_pr_ldr_buf_avail(fw) >= len, 100)) {
+ dev_err(mvsw_fw_dev(fw), "timed out waiting for loader buffer space\n");
+ return -EBUSY;
+ }
+
+ for (i = 0; i < len; i += 4) {
+ writel_relaxed(*(u32 *)(buf + i), MVSW_LDR_WR_PTR(fw));
+ MVSW_LDR_WR_IDX_MOVE(fw, 4);
+ }
+
+ MVSW_LDR_WR_IDX_COMMIT(fw);
+ return 0;
+}
+
+static int mvsw_pr_ldr_send(struct mvsw_pr_fw *fw,
+ const char *img, u32 fw_size)
+{
+ unsigned long mask;
+ u32 status;
+ u32 pos;
+ int err;
+
+ if (mvsw_pr_ldr_wait_reg32(fw, MVSW_LDR_STATUS_REG,
+ MVSW_LDR_STATUS_IMG_DL, 1000)) {
+ dev_err(mvsw_fw_dev(fw), "Loader is not ready to load image\n");
+ return -EBUSY;
+ }
+
+ for (pos = 0; pos < fw_size; pos += MVSW_FW_BLK_SZ) {
+ if (pos + MVSW_FW_BLK_SZ > fw_size)
+ break;
+
+ err = mvsw_pr_ldr_send_buf(fw, img + pos, MVSW_FW_BLK_SZ);
+ if (err)
+ return err;
+ }
+
+ if (pos < fw_size) {
+ err = mvsw_pr_ldr_send_buf(fw, img + pos, fw_size - pos);
+ if (err)
+ return err;
+ }
+
+ /* Waiting for status IMG_DOWNLOADING to change to something else */
+ mask = ~(MVSW_LDR_STATUS_IMG_DL);
+
+ if (!mvsw_wait_timeout(mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG) & mask,
+ MVSW_FW_DL_TIMEOUT)) {
+ dev_err(mvsw_fw_dev(fw), "Timed out loading FW image [status=%d]\n",
+ mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG));
+ return -ETIMEDOUT;
+ }
+
+ status = mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG);
+ if (status != MVSW_LDR_STATUS_START_FW) {
+ switch (status) {
+ case MVSW_LDR_STATUS_INVALID_IMG:
+ dev_err(mvsw_fw_dev(fw), "FW image has bad CRC\n");
+ return -EINVAL;
+ case MVSW_LDR_STATUS_NOMEM:
+ dev_err(mvsw_fw_dev(fw), "Loader ran out of memory\n");
+ return -ENOMEM;
+ default:
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static bool mvsw_pr_ldr_is_ready(struct mvsw_pr_fw *fw)
+{
+ return mvsw_ldr_read(fw, MVSW_LDR_READY_REG) == MVSW_LDR_READY_MAGIC;
+}
+
+static void mvsw_pr_fw_rev_parse(const struct mvsw_pr_fw_header *hdr,
+ struct mvsw_fw_rev *rev)
+{
+ u32 version = be32_to_cpu(hdr->version_value);
+
+ rev->maj = FW_VER_MAJ(version);
+ rev->min = FW_VER_MIN(version);
+ rev->sub = FW_VER_PATCH(version);
+}
+
+static int mvsw_pr_fw_rev_check(struct mvsw_pr_fw *fw)
+{
+ struct mvsw_fw_rev *rev = &fw->dev.fw_rev;
+
+ if (rev->maj == MVSW_SUPP_FW_MAJ_VER &&
+ rev->min >= MVSW_SUPP_FW_MIN_VER) {
+ return 0;
+ }
+
+ dev_err(mvsw_fw_dev(fw), "Minimum supported FW version is '%u.%u.%u'\n",
+ MVSW_SUPP_FW_MAJ_VER,
+ MVSW_SUPP_FW_MIN_VER,
+ MVSW_SUPP_FW_PATCH_VER);
+
+ return -EINVAL;
+}
+
+static int mvsw_pr_fw_hdr_parse(struct mvsw_pr_fw *fw,
+ const struct firmware *img)
+{
+ struct mvsw_pr_fw_header *hdr = (struct mvsw_pr_fw_header *)img->data;
+ struct mvsw_fw_rev *rev = &fw->dev.fw_rev;
+ u32 magic;
+
+ magic = be32_to_cpu(hdr->magic_number);
+ if (magic != MVSW_FW_HDR_MAGIC) {
+ dev_err(mvsw_fw_dev(fw), "FW image magic is invalid\n");
+ return -EINVAL;
+ }
+
+ mvsw_pr_fw_rev_parse(hdr, rev);
+
+ dev_info(mvsw_fw_dev(fw), "FW version '%u.%u.%u'\n",
+ rev->maj, rev->min, rev->sub);
+
+ return mvsw_pr_fw_rev_check(fw);
+}
+
+static int mvsw_pr_fw_load(struct mvsw_pr_fw *fw)
+{
+ size_t hlen = sizeof(struct mvsw_pr_fw_header);
+ const struct firmware *f;
+ bool has_ldr;
+ int err;
+
+ has_ldr = mvsw_wait_timeout(mvsw_pr_ldr_is_ready(fw), 1000);
+ if (!has_ldr) {
+ dev_err(mvsw_fw_dev(fw), "timed out waiting for FW loader\n");
+ return -ETIMEDOUT;
+ }
+
+ fw->ldr_ring_buf = fw->ldr_regs +
+ mvsw_ldr_read(fw, MVSW_LDR_BUF_OFFS_REG);
+
+ fw->ldr_buf_len =
+ mvsw_ldr_read(fw, MVSW_LDR_BUF_SIZE_REG);
+
+ fw->ldr_wr_idx = 0;
+
+ err = request_firmware_direct(&f, MVSW_FW_FILENAME, &fw->pci_dev->dev);
+ if (err) {
+ dev_err(mvsw_fw_dev(fw), "failed to request firmware file\n");
+ return err;
+ }
+
+ if (!IS_ALIGNED(f->size, 4)) {
+ dev_err(mvsw_fw_dev(fw), "FW image size is not 4-byte aligned\n");
+ release_firmware(f);
+ return -EINVAL;
+ }
+
+ err = mvsw_pr_fw_hdr_parse(fw, f);
+ if (err) {
+ dev_err(mvsw_fw_dev(fw), "FW image header is invalid\n");
+ release_firmware(f);
+ return err;
+ }
+
+ mvsw_ldr_write(fw, MVSW_LDR_IMG_SIZE_REG, f->size - hlen);
+ mvsw_ldr_write(fw, MVSW_LDR_CTL_REG, MVSW_LDR_CTL_DL_START);
+
+ dev_info(mvsw_fw_dev(fw), "Loading prestera FW image ...");
+
+ err = mvsw_pr_ldr_send(fw, f->data + hlen, f->size - hlen);
+
+ release_firmware(f);
+ return err;
+}
+
+static int mvsw_pr_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ const char *driver_name = pdev->driver->name;
+ struct mvsw_pr_fw *fw;
+ u8 __iomem *mem_addr;
+ int err;
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(&pdev->dev, "pci_enable_device failed\n");
+ goto err_pci_enable_device;
+ }
+
+ err = pci_request_regions(pdev, driver_name);
+ if (err) {
+ dev_err(&pdev->dev, "pci_request_regions failed\n");
+ goto err_pci_request_regions;
+ }
+
+ mem_addr = pci_ioremap_bar(pdev, 2);
+ if (!mem_addr) {
+ dev_err(&pdev->dev, "ioremap failed\n");
+ err = -EIO;
+ goto err_ioremap;
+ }
+
+ pci_set_master(pdev);
+
+ fw = kzalloc(sizeof(*fw), GFP_KERNEL);
+ if (!fw) {
+ err = -ENOMEM;
+ goto err_pci_dev_alloc;
+ }
+
+ fw->pci_dev = pdev;
+ fw->dev.dev = &pdev->dev;
+ fw->dev.send_req = mvsw_pr_fw_send_req;
+ fw->mem_addr = mem_addr;
+ fw->ldr_regs = mem_addr;
+ fw->hw_regs = mem_addr;
+
+ fw->wq = alloc_workqueue("mvsw_fw_wq", WQ_HIGHPRI, 1);
+ if (!fw->wq) {
+ err = -ENOMEM;
+ goto err_wq_alloc;
+ }
+
+ INIT_WORK(&fw->evt_work, mvsw_pr_fw_evt_work_fn);
+
+ err = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
+ if (err < 0) {
+ dev_err(&pdev->dev, "MSI IRQ init failed\n");
+ goto err_irq_alloc;
+ }
+
+ err = request_irq(pci_irq_vector(pdev, 0), mvsw_pci_irq_handler,
+ 0, driver_name, fw);
+ if (err) {
+ dev_err(&pdev->dev, "failed to request IRQ\n");
+ goto err_request_irq;
+ }
+
+ pci_set_drvdata(pdev, fw);
+
+ err = mvsw_pr_fw_init(fw);
+ if (err)
+ goto err_mvsw_fw_init;
+
+ dev_info(mvsw_fw_dev(fw), "Prestera Switch FW is ready\n");
+
+ err = mvsw_pr_device_register(&fw->dev);
+ if (err)
+ goto err_mvsw_dev_register;
+
+ return 0;
+
+err_mvsw_dev_register:
+ mvsw_pr_fw_uninit(fw);
+err_mvsw_fw_init:
+ free_irq(pci_irq_vector(pdev, 0), fw);
+err_request_irq:
+ pci_free_irq_vectors(pdev);
+err_irq_alloc:
+ destroy_workqueue(fw->wq);
+err_wq_alloc:
+ kfree(fw);
+err_pci_dev_alloc:
+ iounmap(mem_addr);
+err_ioremap:
+ pci_release_regions(pdev);
+err_pci_request_regions:
+ pci_disable_device(pdev);
+err_pci_enable_device:
+ return err;
+}
+
+static void mvsw_pr_pci_remove(struct pci_dev *pdev)
+{
+ struct mvsw_pr_fw *fw = pci_get_drvdata(pdev);
+
+ free_irq(pci_irq_vector(pdev, 0), fw);
+ pci_free_irq_vectors(pdev);
+ mvsw_pr_device_unregister(&fw->dev);
+ flush_workqueue(fw->wq);
+ destroy_workqueue(fw->wq);
+ mvsw_pr_fw_uninit(fw);
+ iounmap(fw->mem_addr);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ kfree(fw);
+}
+
+static int __init mvsw_pr_pci_init(void)
+{
+ struct mvsw_pr_pci_match *match;
+ int err = 0;
+
+ for (match = mvsw_pci_devices; match->driver.name; match++) {
+ match->driver.probe = mvsw_pr_pci_probe;
+ match->driver.remove = mvsw_pr_pci_remove;
+ match->driver.id_table = &match->id;
+
+ err = pci_register_driver(&match->driver);
+ if (err) {
+ pr_err("prestera_pci: failed to register %s\n",
+ match->driver.name);
+ break;
+ }
+
+ match->registered = true;
+ }
+
+ if (err) {
+ for (match = mvsw_pci_devices; match->driver.name; match++) {
+ if (!match->registered)
+ break;
+
+ pci_unregister_driver(&match->driver);
+ }
+
+ return err;
+ }
+
+ pr_info("prestera_pci: Registered Marvell Prestera PCI driver\n");
+ return 0;
+}
+
+static void __exit mvsw_pr_pci_exit(void)
+{
+ struct mvsw_pr_pci_match *match;
+
+ for (match = mvsw_pci_devices; match->driver.name; match++) {
+ if (!match->registered)
+ break;
+
+ pci_unregister_driver(&match->driver);
+ }
+
+ pr_info("prestera_pci: Unregistered Marvell Prestera PCI driver\n");
+}
+
+module_init(mvsw_pr_pci_init);
+module_exit(mvsw_pr_pci_exit);
+
+MODULE_AUTHOR("Marvell Semi.");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Marvell Prestera switch PCI interface");
--
2.17.1

2020-02-25 16:32:33

by Vadym Kochan

Subject: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
ports of 10GbE uplinks or 2 ports of 40Gbps stacking, targeted largely
at wireless SMB deployments.

This driver implementation includes only L1 & basic L2 support.

The core Prestera switching logic is implemented in prestera.c. Between
the core logic and the firmware there is an intermediate HW layer,
implemented in prestera_hw.c; its purpose is to encapsulate HW-related
logic, since there is a plan to support more devices with different HW
configurations in the future.

The following Switchdev features are supported:

- VLAN-aware bridge offloading
- VLAN-unaware bridge offloading
- FDB offloading (learning, ageing)
- Switchport configuration

Signed-off-by: Vadym Kochan <[email protected]>
Signed-off-by: Andrii Savka <[email protected]>
Signed-off-by: Oleksandr Mazur <[email protected]>
Signed-off-by: Serhiy Boiko <[email protected]>
Signed-off-by: Serhiy Pshyk <[email protected]>
Signed-off-by: Taras Chornyi <[email protected]>
Signed-off-by: Volodymyr Mytnyk <[email protected]>
---
drivers/net/ethernet/marvell/Kconfig | 1 +
drivers/net/ethernet/marvell/Makefile | 1 +
drivers/net/ethernet/marvell/prestera/Kconfig | 13 +
.../net/ethernet/marvell/prestera/Makefile | 3 +
.../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
.../net/ethernet/marvell/prestera/prestera.h | 244 +++
.../marvell/prestera/prestera_drv_ver.h | 23 +
.../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
.../ethernet/marvell/prestera/prestera_hw.h | 159 ++
.../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
10 files changed, 4257 insertions(+)
create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c

diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index 3d5caea096fb..74313d9e1fc0 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -171,5 +171,6 @@ config SKY2_DEBUG


source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
+source "drivers/net/ethernet/marvell/prestera/Kconfig"

endif # NET_VENDOR_MARVELL
diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile
index 89dea7284d5b..9f88fe822555 100644
--- a/drivers/net/ethernet/marvell/Makefile
+++ b/drivers/net/ethernet/marvell/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
obj-$(CONFIG_SKGE) += skge.o
obj-$(CONFIG_SKY2) += sky2.o
obj-y += octeontx2/
+obj-y += prestera/
diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
new file mode 100644
index 000000000000..d0b416dcb677
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Marvell Prestera drivers configuration
+#
+
+config PRESTERA
+ tristate "Marvell Prestera Switch ASICs support"
+ depends on NET_SWITCHDEV && VLAN_8021Q
+ ---help---
+ This driver supports the Marvell Prestera Switch ASICs family.
+
+ To compile this driver as a module, choose M here: the
+ module will be called prestera_sw.
diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
new file mode 100644
index 000000000000..9446298fb7f4
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_PRESTERA) += prestera_sw.o
+prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
diff --git a/drivers/net/ethernet/marvell/prestera/prestera.c b/drivers/net/ethernet/marvell/prestera/prestera.c
new file mode 100644
index 000000000000..12d0eb590bbb
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera.c
@@ -0,0 +1,1502 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+ *
+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/netdev_features.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/jiffies.h>
+#include <net/switchdev.h>
+
+#include "prestera.h"
+#include "prestera_hw.h"
+#include "prestera_drv_ver.h"
+
+#define MVSW_PR_MTU_DEFAULT 1536
+
+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
+#define PORT_STATS_IDX(name) \
+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
+#define PORT_STATS_FIELD(name) \
+ [PORT_STATS_IDX(name)] = __stringify(name)
+
+static struct list_head switches_registered;
+
+static const char mvsw_driver_kind[] = "prestera_sw";
+static const char mvsw_driver_name[] = "mvsw_switchdev";
+static const char mvsw_driver_version[] = PRESTERA_DRV_VER;
+
+#define mvsw_dev(sw) ((sw)->dev->dev)
+#define mvsw_dev_name(sw) dev_name((sw)->dev->dev)
+
+static struct workqueue_struct *mvsw_pr_wq;
+
+struct mvsw_pr_link_mode {
+ enum ethtool_link_mode_bit_indices eth_mode;
+ u32 speed;
+ u64 pr_mask;
+ u8 duplex;
+ u8 port_type;
+};
+
+static const struct mvsw_pr_link_mode
+mvsw_pr_link_modes[MVSW_LINK_MODE_MAX] = {
+ [MVSW_LINK_MODE_10baseT_Half_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_10baseT_Half_BIT,
+ .speed = 10,
+ .pr_mask = 1 << MVSW_LINK_MODE_10baseT_Half_BIT,
+ .duplex = MVSW_PORT_DUPLEX_HALF,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_10baseT_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_10baseT_Full_BIT,
+ .speed = 10,
+ .pr_mask = 1 << MVSW_LINK_MODE_10baseT_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_100baseT_Half_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_100baseT_Half_BIT,
+ .speed = 100,
+ .pr_mask = 1 << MVSW_LINK_MODE_100baseT_Half_BIT,
+ .duplex = MVSW_PORT_DUPLEX_HALF,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_100baseT_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+ .speed = 100,
+ .pr_mask = 1 << MVSW_LINK_MODE_100baseT_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_1000baseT_Half_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_1000baseT_Half_BIT,
+ .speed = 1000,
+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseT_Half_BIT,
+ .duplex = MVSW_PORT_DUPLEX_HALF,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_1000baseT_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ .speed = 1000,
+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseT_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_1000baseX_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ .speed = 1000,
+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseX_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_1000baseKX_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ .speed = 1000,
+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseKX_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_10GbaseKR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ .speed = 10000,
+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseKR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_10GbaseSR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ .speed = 10000,
+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseSR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_10GbaseLR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ .speed = 10000,
+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseLR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_20GbaseKR2_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT,
+ .speed = 20000,
+ .pr_mask = 1 << MVSW_LINK_MODE_20GbaseKR2_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_25GbaseCR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ .speed = 25000,
+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseCR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_DA,
+ },
+ [MVSW_LINK_MODE_25GbaseKR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+ .speed = 25000,
+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseKR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_25GbaseSR_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+ .speed = 25000,
+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseSR_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_40GbaseKR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+ .speed = 40000,
+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseKR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_40GbaseCR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ .speed = 40000,
+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseCR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_DA,
+ },
+ [MVSW_LINK_MODE_40GbaseSR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ .speed = 40000,
+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseSR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_50GbaseCR2_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ .speed = 50000,
+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseCR2_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_DA,
+ },
+ [MVSW_LINK_MODE_50GbaseKR2_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ .speed = 50000,
+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseKR2_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_50GbaseSR2_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+ .speed = 50000,
+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseSR2_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_100GbaseKR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ .speed = 100000,
+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseKR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_TP,
+ },
+ [MVSW_LINK_MODE_100GbaseSR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ .speed = 100000,
+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseSR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_FIBRE,
+ },
+ [MVSW_LINK_MODE_100GbaseCR4_Full_BIT] = {
+ .eth_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ .speed = 100000,
+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseCR4_Full_BIT,
+ .duplex = MVSW_PORT_DUPLEX_FULL,
+ .port_type = MVSW_PORT_TYPE_DA,
+ }
+};
+
+struct mvsw_pr_fec {
+ u32 eth_fec;
+ enum ethtool_link_mode_bit_indices eth_mode;
+ u8 pr_fec;
+};
+
+static const struct mvsw_pr_fec mvsw_pr_fec_caps[MVSW_PORT_FEC_MAX] = {
+ [MVSW_PORT_FEC_OFF_BIT] = {
+ .eth_fec = ETHTOOL_FEC_OFF,
+ .eth_mode = ETHTOOL_LINK_MODE_FEC_NONE_BIT,
+ .pr_fec = 1 << MVSW_PORT_FEC_OFF_BIT,
+ },
+ [MVSW_PORT_FEC_BASER_BIT] = {
+ .eth_fec = ETHTOOL_FEC_BASER,
+ .eth_mode = ETHTOOL_LINK_MODE_FEC_BASER_BIT,
+ .pr_fec = 1 << MVSW_PORT_FEC_BASER_BIT,
+ },
+ [MVSW_PORT_FEC_RS_BIT] = {
+ .eth_fec = ETHTOOL_FEC_RS,
+ .eth_mode = ETHTOOL_LINK_MODE_FEC_RS_BIT,
+ .pr_fec = 1 << MVSW_PORT_FEC_RS_BIT,
+ }
+};
+
+struct mvsw_pr_port_type {
+ enum ethtool_link_mode_bit_indices eth_mode;
+ u8 eth_type;
+};
+
+static const struct mvsw_pr_port_type
+mvsw_pr_port_types[MVSW_PORT_TYPE_MAX] = {
+ [MVSW_PORT_TYPE_NONE] = {
+ .eth_mode = __ETHTOOL_LINK_MODE_MASK_NBITS,
+ .eth_type = PORT_NONE,
+ },
+ [MVSW_PORT_TYPE_TP] = {
+ .eth_mode = ETHTOOL_LINK_MODE_TP_BIT,
+ .eth_type = PORT_TP,
+ },
+ [MVSW_PORT_TYPE_AUI] = {
+ .eth_mode = ETHTOOL_LINK_MODE_AUI_BIT,
+ .eth_type = PORT_AUI,
+ },
+ [MVSW_PORT_TYPE_MII] = {
+ .eth_mode = ETHTOOL_LINK_MODE_MII_BIT,
+ .eth_type = PORT_MII,
+ },
+ [MVSW_PORT_TYPE_FIBRE] = {
+ .eth_mode = ETHTOOL_LINK_MODE_FIBRE_BIT,
+ .eth_type = PORT_FIBRE,
+ },
+ [MVSW_PORT_TYPE_BNC] = {
+ .eth_mode = ETHTOOL_LINK_MODE_BNC_BIT,
+ .eth_type = PORT_BNC,
+ },
+ [MVSW_PORT_TYPE_DA] = {
+ .eth_mode = ETHTOOL_LINK_MODE_TP_BIT,
+ .eth_type = PORT_TP,
+ },
+ [MVSW_PORT_TYPE_OTHER] = {
+ .eth_mode = __ETHTOOL_LINK_MODE_MASK_NBITS,
+ .eth_type = PORT_OTHER,
+ }
+};
+
+static const char mvsw_pr_port_cnt_name[PORT_STATS_CNT][ETH_GSTRING_LEN] = {
+ PORT_STATS_FIELD(good_octets_received),
+ PORT_STATS_FIELD(bad_octets_received),
+ PORT_STATS_FIELD(mac_trans_error),
+ PORT_STATS_FIELD(broadcast_frames_received),
+ PORT_STATS_FIELD(multicast_frames_received),
+ PORT_STATS_FIELD(frames_64_octets),
+ PORT_STATS_FIELD(frames_65_to_127_octets),
+ PORT_STATS_FIELD(frames_128_to_255_octets),
+ PORT_STATS_FIELD(frames_256_to_511_octets),
+ PORT_STATS_FIELD(frames_512_to_1023_octets),
+ PORT_STATS_FIELD(frames_1024_to_max_octets),
+ PORT_STATS_FIELD(excessive_collision),
+ PORT_STATS_FIELD(multicast_frames_sent),
+ PORT_STATS_FIELD(broadcast_frames_sent),
+ PORT_STATS_FIELD(fc_sent),
+ PORT_STATS_FIELD(fc_received),
+ PORT_STATS_FIELD(buffer_overrun),
+ PORT_STATS_FIELD(undersize),
+ PORT_STATS_FIELD(fragments),
+ PORT_STATS_FIELD(oversize),
+ PORT_STATS_FIELD(jabber),
+ PORT_STATS_FIELD(rx_error_frame_received),
+ PORT_STATS_FIELD(bad_crc),
+ PORT_STATS_FIELD(collisions),
+ PORT_STATS_FIELD(late_collision),
+ PORT_STATS_FIELD(unicast_frames_received),
+ PORT_STATS_FIELD(unicast_frames_sent),
+ PORT_STATS_FIELD(sent_multiple),
+ PORT_STATS_FIELD(sent_deferred),
+ PORT_STATS_FIELD(frames_1024_to_1518_octets),
+ PORT_STATS_FIELD(frames_1519_to_max_octets),
+ PORT_STATS_FIELD(good_octets_sent),
+};
+
+static struct mvsw_pr_port *__find_pr_port(const struct mvsw_pr_switch *sw,
+ u32 port_id)
+{
+ struct mvsw_pr_port *port;
+
+ list_for_each_entry(port, &sw->port_list, list) {
+ if (port->id == port_id)
+ return port;
+ }
+
+ return NULL;
+}
+
+static int mvsw_pr_port_state_set(struct net_device *dev, bool is_up)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ int err;
+
+ if (!is_up)
+ netif_stop_queue(dev);
+
+ err = mvsw_pr_hw_port_state_set(port, is_up);
+
+ if (is_up && !err)
+ netif_start_queue(dev);
+
+ return err;
+}
+
+static int mvsw_pr_port_get_port_parent_id(struct net_device *dev,
+ struct netdev_phys_item_id *ppid)
+{
+ const struct mvsw_pr_port *port = netdev_priv(dev);
+
+ ppid->id_len = sizeof(port->sw->id);
+
+ memcpy(&ppid->id, &port->sw->id, ppid->id_len);
+ return 0;
+}
+
+static int mvsw_pr_port_get_phys_port_name(struct net_device *dev,
+ char *buf, size_t len)
+{
+ const struct mvsw_pr_port *port = netdev_priv(dev);
+
+ snprintf(buf, len, "%u", port->fp_id);
+ return 0;
+}
+
+static int mvsw_pr_port_open(struct net_device *dev)
+{
+ return mvsw_pr_port_state_set(dev, true);
+}
+
+static int mvsw_pr_port_close(struct net_device *dev)
+{
+ return mvsw_pr_port_state_set(dev, false);
+}
+
+static netdev_tx_t mvsw_pr_port_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ /* CPU TX is not supported yet, simply drop the frame */
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+}
+
+static int mvsw_is_valid_mac_addr(struct mvsw_pr_port *port, const u8 *addr)
+{
+ if (!is_valid_ether_addr(addr))
+ return -EADDRNOTAVAIL;
+
+ /* Only the last octet may differ from the switch base MAC */
+ if (memcmp(port->sw->base_mac, addr, ETH_ALEN - 1))
+ return -EINVAL;
+
+ return 0;
+}
+
+static int mvsw_pr_port_set_mac_address(struct net_device *dev, void *p)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ struct sockaddr *addr = p;
+ int err;
+
+ err = mvsw_is_valid_mac_addr(port, addr->sa_data);
+ if (err)
+ return err;
+
+ err = mvsw_pr_hw_port_mac_set(port, addr->sa_data);
+ if (!err)
+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+
+ return err;
+}
+
+static int mvsw_pr_port_change_mtu(struct net_device *dev, int mtu)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ int err;
+
+ if (port->sw->mtu_min <= mtu && mtu <= port->sw->mtu_max)
+ err = mvsw_pr_hw_port_mtu_set(port, mtu);
+ else
+ err = -EINVAL;
+
+ if (!err)
+ dev->mtu = mtu;
+
+ return err;
+}
+
+static void mvsw_pr_port_get_stats64(struct net_device *dev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ struct mvsw_pr_port_stats *port_stats = &port->cached_hw_stats.stats;
+
+ stats->rx_packets = port_stats->broadcast_frames_received +
+ port_stats->multicast_frames_received +
+ port_stats->unicast_frames_received;
+
+ stats->tx_packets = port_stats->broadcast_frames_sent +
+ port_stats->multicast_frames_sent +
+ port_stats->unicast_frames_sent;
+
+ stats->rx_bytes = port_stats->good_octets_received;
+
+ stats->tx_bytes = port_stats->good_octets_sent;
+
+ stats->rx_errors = port_stats->rx_error_frame_received;
+ stats->tx_errors = port_stats->mac_trans_error;
+
+ stats->rx_dropped = port_stats->buffer_overrun;
+ stats->tx_dropped = 0;
+
+ stats->multicast = port_stats->multicast_frames_received;
+ stats->collisions = port_stats->excessive_collision;
+
+ stats->rx_crc_errors = port_stats->bad_crc;
+}
+
+static void mvsw_pr_port_get_hw_stats(struct mvsw_pr_port *port)
+{
+ mvsw_pr_hw_port_stats_get(port, &port->cached_hw_stats.stats);
+}
+
+static void update_stats_cache(struct work_struct *work)
+{
+ struct mvsw_pr_port *port =
+ container_of(work, struct mvsw_pr_port,
+ cached_hw_stats.caching_dw.work);
+
+ mvsw_pr_port_get_hw_stats(port);
+
+ queue_delayed_work(mvsw_pr_wq, &port->cached_hw_stats.caching_dw,
+ PORT_STATS_CACHE_TIMEOUT_MS);
+}
+
+static void mvsw_pr_port_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ struct mvsw_pr_switch *sw = port->sw;
+
+ strlcpy(drvinfo->driver, mvsw_driver_kind, sizeof(drvinfo->driver));
+ strlcpy(drvinfo->bus_info, mvsw_dev_name(sw), sizeof(drvinfo->bus_info));
+ snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+ "%d.%d.%d",
+ sw->dev->fw_rev.maj,
+ sw->dev->fw_rev.min,
+ sw->dev->fw_rev.sub);
+}
+
+static const struct net_device_ops mvsw_pr_netdev_ops = {
+ .ndo_open = mvsw_pr_port_open,
+ .ndo_stop = mvsw_pr_port_close,
+ .ndo_start_xmit = mvsw_pr_port_xmit,
+ .ndo_change_mtu = mvsw_pr_port_change_mtu,
+ .ndo_get_stats64 = mvsw_pr_port_get_stats64,
+ .ndo_set_mac_address = mvsw_pr_port_set_mac_address,
+ .ndo_get_phys_port_name = mvsw_pr_port_get_phys_port_name,
+ .ndo_get_port_parent_id = mvsw_pr_port_get_port_parent_id
+};
+
+bool mvsw_pr_netdev_check(const struct net_device *dev)
+{
+ return dev->netdev_ops == &mvsw_pr_netdev_ops;
+}
+
+static int mvsw_pr_lower_dev_walk(struct net_device *lower_dev, void *data)
+{
+ struct mvsw_pr_port **pport = data;
+
+ if (mvsw_pr_netdev_check(lower_dev)) {
+ *pport = netdev_priv(lower_dev);
+ return 1;
+ }
+
+ return 0;
+}
+
+struct mvsw_pr_port *mvsw_pr_port_dev_lower_find(struct net_device *dev)
+{
+ struct mvsw_pr_port *port;
+
+ if (mvsw_pr_netdev_check(dev))
+ return netdev_priv(dev);
+
+ port = NULL;
+ netdev_walk_all_lower_dev(dev, mvsw_pr_lower_dev_walk, &port);
+
+ return port;
+}
+
+static void mvsw_modes_to_eth(unsigned long *eth_modes, u64 link_modes, u8 fec,
+ u8 type)
+{
+ u32 mode;
+
+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
+ if ((mvsw_pr_link_modes[mode].pr_mask & link_modes) == 0)
+ continue;
+ if (type != MVSW_PORT_TYPE_NONE &&
+ mvsw_pr_link_modes[mode].port_type != type)
+ continue;
+ __set_bit(mvsw_pr_link_modes[mode].eth_mode, eth_modes);
+ }
+
+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
+ if ((mvsw_pr_fec_caps[mode].pr_fec & fec) == 0)
+ continue;
+ __set_bit(mvsw_pr_fec_caps[mode].eth_mode, eth_modes);
+ }
+}
+
+static void mvsw_modes_from_eth(const unsigned long *eth_modes, u64 *link_modes,
+ u8 *fec)
+{
+ u32 mode;
+
+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
+ if (!test_bit(mvsw_pr_link_modes[mode].eth_mode, eth_modes))
+ continue;
+ *link_modes |= mvsw_pr_link_modes[mode].pr_mask;
+ }
+
+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
+ if (!test_bit(mvsw_pr_fec_caps[mode].eth_mode, eth_modes))
+ continue;
+ *fec |= mvsw_pr_fec_caps[mode].pr_fec;
+ }
+}
+
+static void mvsw_pr_port_supp_types_get(struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ u32 mode;
+ u8 ptype;
+
+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
+ if ((mvsw_pr_link_modes[mode].pr_mask &
+ port->caps.supp_link_modes) == 0)
+ continue;
+ ptype = mvsw_pr_link_modes[mode].port_type;
+ __set_bit(mvsw_pr_port_types[ptype].eth_mode,
+ ecmd->link_modes.supported);
+ }
+}
+
+static void mvsw_pr_port_speed_get(struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ u32 speed;
+ int err;
+
+ err = mvsw_pr_hw_port_speed_get(port, &speed);
+ ecmd->base.speed = !err ? speed : SPEED_UNKNOWN;
+}
+
+static int mvsw_pr_port_link_mode_set(struct mvsw_pr_port *port,
+ u32 speed, u8 duplex, u8 type)
+{
+ u32 new_mode = MVSW_LINK_MODE_MAX;
+ u32 mode;
+
+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
+ if (speed != mvsw_pr_link_modes[mode].speed)
+ continue;
+ if (duplex != mvsw_pr_link_modes[mode].duplex)
+ continue;
+ if (!(mvsw_pr_link_modes[mode].pr_mask &
+ port->caps.supp_link_modes))
+ continue;
+ if (type != mvsw_pr_link_modes[mode].port_type)
+ continue;
+
+ new_mode = mode;
+ break;
+ }
+
+ if (new_mode == MVSW_LINK_MODE_MAX) {
+ netdev_err(port->net_dev, "Unsupported speed/duplex requested\n");
+ return -EINVAL;
+ }
+
+ return mvsw_pr_hw_port_link_mode_set(port, new_mode);
+}
+
+static int mvsw_pr_port_speed_duplex_set(const struct ethtool_link_ksettings
+ *ecmd, struct mvsw_pr_port *port)
+{
+ int err;
+ u8 duplex;
+ u32 speed;
+ u32 curr_mode;
+
+ err = mvsw_pr_hw_port_link_mode_get(port, &curr_mode);
+ if (err || curr_mode >= MVSW_LINK_MODE_MAX)
+ return -EINVAL;
+
+ if (ecmd->base.duplex != DUPLEX_UNKNOWN)
+ duplex = ecmd->base.duplex == DUPLEX_FULL ?
+ MVSW_PORT_DUPLEX_FULL : MVSW_PORT_DUPLEX_HALF;
+ else
+ duplex = mvsw_pr_link_modes[curr_mode].duplex;
+
+ if (ecmd->base.speed != SPEED_UNKNOWN)
+ speed = ecmd->base.speed;
+ else
+ speed = mvsw_pr_link_modes[curr_mode].speed;
+
+ return mvsw_pr_port_link_mode_set(port, speed, duplex, port->caps.type);
+}
+
+static u8 mvsw_pr_port_type_get(struct mvsw_pr_port *port)
+{
+ if (port->caps.type < MVSW_PORT_TYPE_MAX)
+ return mvsw_pr_port_types[port->caps.type].eth_type;
+ return PORT_OTHER;
+}
+
+static int mvsw_pr_port_type_set(const struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ int err;
+ u32 type, mode;
+ u32 new_mode = MVSW_LINK_MODE_MAX;
+
+ for (type = 0; type < MVSW_PORT_TYPE_MAX; type++) {
+ if (mvsw_pr_port_types[type].eth_type == ecmd->base.port &&
+ test_bit(mvsw_pr_port_types[type].eth_mode,
+ ecmd->link_modes.supported)) {
+ break;
+ }
+ }
+
+ if (type == MVSW_PORT_TYPE_MAX) {
+ netdev_err(port->net_dev, "Unsupported port type requested\n");
+ return -EINVAL;
+ }
+
+ if (type == port->caps.type)
+ return 0;
+
+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
+ if ((mvsw_pr_link_modes[mode].pr_mask &
+ port->caps.supp_link_modes) &&
+ type == mvsw_pr_link_modes[mode].port_type) {
+ new_mode = mode;
+ }
+ }
+
+ if (new_mode < MVSW_LINK_MODE_MAX)
+ err = mvsw_pr_hw_port_link_mode_set(port, new_mode);
+ else
+ err = -EINVAL;
+
+ if (!err)
+ port->caps.type = type;
+
+ return err;
+}
+
+static void mvsw_pr_port_remote_cap_get(struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ u64 bitmap;
+
+ if (!mvsw_pr_hw_port_remote_cap_get(port, &bitmap)) {
+ mvsw_modes_to_eth(ecmd->link_modes.lp_advertising,
+ bitmap, 0, MVSW_PORT_TYPE_NONE);
+ }
+}
+
+static void mvsw_pr_port_duplex_get(struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ u8 duplex;
+
+ if (!mvsw_pr_hw_port_duplex_get(port, &duplex)) {
+ ecmd->base.duplex = duplex == MVSW_PORT_DUPLEX_FULL ?
+ DUPLEX_FULL : DUPLEX_HALF;
+ } else {
+ ecmd->base.duplex = DUPLEX_UNKNOWN;
+ }
+}
+
+static int mvsw_pr_port_autoneg_set(struct mvsw_pr_port *port, bool enable,
+ u64 link_modes, u8 fec)
+{
+ bool refresh = false;
+ int err = 0;
+
+ if (port->caps.type != MVSW_PORT_TYPE_TP)
+ return enable ? -EINVAL : 0;
+
+ if (port->adver_link_modes != link_modes || port->adver_fec != fec) {
+ port->adver_link_modes = link_modes;
+ port->adver_fec = fec != 0 ? fec : BIT(MVSW_PORT_FEC_OFF_BIT);
+ refresh = true;
+ }
+
+ if (port->autoneg == enable && !(port->autoneg && refresh))
+ return 0;
+
+ err = mvsw_pr_hw_port_autoneg_set(port, enable,
+ port->adver_link_modes,
+ port->adver_fec);
+ if (err)
+ return -EINVAL;
+
+ port->autoneg = enable;
+ return 0;
+}
+
+static void mvsw_pr_port_mdix_get(struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ u8 mode;
+
+ if (mvsw_pr_hw_port_mdix_get(port, &mode))
+ return;
+
+ ecmd->base.eth_tp_mdix = mode;
+}
+
+static int mvsw_pr_port_mdix_set(const struct ethtool_link_ksettings *ecmd,
+ struct mvsw_pr_port *port)
+{
+ if (ecmd->base.eth_tp_mdix_ctrl)
+ return -EOPNOTSUPP;
+
+ return 0;
+}
+
+static int mvsw_pr_port_get_link_ksettings(struct net_device *dev,
+ struct ethtool_link_ksettings *ecmd)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+
+ ethtool_link_ksettings_zero_link_mode(ecmd, supported);
+ ethtool_link_ksettings_zero_link_mode(ecmd, advertising);
+ ethtool_link_ksettings_zero_link_mode(ecmd, lp_advertising);
+
+ ecmd->base.autoneg = port->autoneg ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+
+ if (port->caps.type == MVSW_PORT_TYPE_TP) {
+ ethtool_link_ksettings_add_link_mode(ecmd, supported, Autoneg);
+ if (netif_running(dev) &&
+ (port->autoneg ||
+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER))
+ ethtool_link_ksettings_add_link_mode(ecmd, advertising,
+ Autoneg);
+ }
+
+ mvsw_modes_to_eth(ecmd->link_modes.supported,
+ port->caps.supp_link_modes,
+ port->caps.supp_fec,
+ port->caps.type);
+
+ mvsw_pr_port_supp_types_get(ecmd, port);
+
+ if (netif_carrier_ok(dev)) {
+ mvsw_pr_port_speed_get(ecmd, port);
+ mvsw_pr_port_duplex_get(ecmd, port);
+ } else {
+ ecmd->base.speed = SPEED_UNKNOWN;
+ ecmd->base.duplex = DUPLEX_UNKNOWN;
+ }
+
+ ecmd->base.port = mvsw_pr_port_type_get(port);
+
+ if (port->autoneg) {
+ if (netif_running(dev))
+ mvsw_modes_to_eth(ecmd->link_modes.advertising,
+ port->adver_link_modes,
+ port->adver_fec,
+ port->caps.type);
+
+ if (netif_carrier_ok(dev) &&
+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER) {
+ ethtool_link_ksettings_add_link_mode(ecmd,
+ lp_advertising,
+ Autoneg);
+ mvsw_pr_port_remote_cap_get(ecmd, port);
+ }
+ }
+
+ if (port->caps.type == MVSW_PORT_TYPE_TP &&
+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER)
+ mvsw_pr_port_mdix_get(ecmd, port);
+
+ return 0;
+}
+
+/* Return true if the advertised modes or FEC are outside the port caps */
+static bool mvsw_pr_check_supp_modes(const struct mvsw_pr_port_caps *caps,
+ u64 adver_modes, u8 adver_fec)
+{
+ if ((caps->supp_link_modes & adver_modes) == 0)
+ return true;
+ if ((adver_fec & ~caps->supp_fec) != 0)
+ return true;
+
+ return false;
+}
+
+static int mvsw_pr_port_set_link_ksettings(struct net_device *dev,
+ const struct ethtool_link_ksettings
+ *ecmd)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ bool is_up = netif_running(dev);
+ u64 adver_modes = 0;
+ u8 adver_fec = 0;
+ int err, err1;
+
+ if (is_up) {
+ err = mvsw_pr_port_state_set(dev, false);
+ if (err)
+ return err;
+ }
+
+ err = mvsw_pr_port_type_set(ecmd, port);
+ if (err)
+ goto fini_link_ksettings;
+
+ if (port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER) {
+ err = mvsw_pr_port_mdix_set(ecmd, port);
+ if (err)
+ goto fini_link_ksettings;
+ }
+
+ mvsw_modes_from_eth(ecmd->link_modes.advertising, &adver_modes,
+ &adver_fec);
+
+ if (ecmd->base.autoneg == AUTONEG_ENABLE &&
+ mvsw_pr_check_supp_modes(&port->caps, adver_modes, adver_fec)) {
+ netdev_err(dev, "Unsupported link mode requested\n");
+ err = -EINVAL;
+ goto fini_link_ksettings;
+ }
+
+ err = mvsw_pr_port_autoneg_set(port,
+ ecmd->base.autoneg == AUTONEG_ENABLE,
+ adver_modes, adver_fec);
+ if (err)
+ goto fini_link_ksettings;
+
+ if (ecmd->base.autoneg == AUTONEG_DISABLE) {
+ err = mvsw_pr_port_speed_duplex_set(ecmd, port);
+ if (err)
+ goto fini_link_ksettings;
+ }
+
+fini_link_ksettings:
+ err1 = mvsw_pr_port_state_set(dev, is_up);
+ if (err1)
+ return err1;
+
+ return err;
+}
+
+static int mvsw_pr_port_get_fecparam(struct net_device *dev,
+ struct ethtool_fecparam *fecparam)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ u32 mode;
+ u8 active;
+ int err;
+
+ err = mvsw_pr_hw_port_fec_get(port, &active);
+ if (err)
+ return err;
+
+ fecparam->fec = 0;
+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
+ if ((mvsw_pr_fec_caps[mode].pr_fec & port->caps.supp_fec) == 0)
+ continue;
+ fecparam->fec |= mvsw_pr_fec_caps[mode].eth_fec;
+ }
+
+ if (active < MVSW_PORT_FEC_MAX)
+ fecparam->active_fec = mvsw_pr_fec_caps[active].eth_fec;
+ else
+ fecparam->active_fec = ETHTOOL_FEC_AUTO;
+
+ return 0;
+}
+
+static int mvsw_pr_port_set_fecparam(struct net_device *dev,
+ struct ethtool_fecparam *fecparam)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ u8 fec, active;
+ u32 mode;
+ int err;
+
+ if (port->autoneg) {
+ netdev_err(dev, "FEC set is not allowed while autoneg is on\n");
+ return -EINVAL;
+ }
+
+ err = mvsw_pr_hw_port_fec_get(port, &active);
+ if (err)
+ return err;
+
+ fec = MVSW_PORT_FEC_MAX;
+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
+ if ((mvsw_pr_fec_caps[mode].eth_fec & fecparam->fec) &&
+ (mvsw_pr_fec_caps[mode].pr_fec & port->caps.supp_fec)) {
+ fec = mode;
+ break;
+ }
+ }
+
+ if (fec == MVSW_PORT_FEC_MAX) {
+ netdev_err(dev, "Unsupported FEC requested\n");
+ return -EINVAL;
+ }
+
+ if (fec == active)
+ return 0;
+
+ return mvsw_pr_hw_port_fec_set(port, fec);
+}
+
+static void mvsw_pr_port_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats,
+ u64 *data)
+{
+ struct mvsw_pr_port *port = netdev_priv(dev);
+ struct mvsw_pr_port_stats *port_stats = &port->cached_hw_stats.stats;
+
+ memcpy(data, port_stats, sizeof(*port_stats));
+}
+
+static void mvsw_pr_port_get_strings(struct net_device *dev,
+ u32 stringset, u8 *data)
+{
+ if (stringset != ETH_SS_STATS)
+ return;
+
+ memcpy(data, *mvsw_pr_port_cnt_name, sizeof(mvsw_pr_port_cnt_name));
+}
+
+static int mvsw_pr_port_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return PORT_STATS_CNT;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static const struct ethtool_ops mvsw_pr_ethtool_ops = {
+ .get_drvinfo = mvsw_pr_port_get_drvinfo,
+ .get_link_ksettings = mvsw_pr_port_get_link_ksettings,
+ .set_link_ksettings = mvsw_pr_port_set_link_ksettings,
+ .get_fecparam = mvsw_pr_port_get_fecparam,
+ .set_fecparam = mvsw_pr_port_set_fecparam,
+ .get_sset_count = mvsw_pr_port_get_sset_count,
+ .get_strings = mvsw_pr_port_get_strings,
+ .get_ethtool_stats = mvsw_pr_port_get_ethtool_stats,
+ .get_link = ethtool_op_get_link
+};
+
+int mvsw_pr_port_learning_set(struct mvsw_pr_port *port, bool learn)
+{
+ return mvsw_pr_hw_port_learning_set(port, learn);
+}
+
+int mvsw_pr_port_flood_set(struct mvsw_pr_port *port, bool flood)
+{
+ return mvsw_pr_hw_port_flood_set(port, flood);
+}
+
+int mvsw_pr_port_pvid_set(struct mvsw_pr_port *port, u16 vid)
+{
+ int err;
+
+ if (!vid) {
+ err = mvsw_pr_hw_port_accept_frame_type_set
+ (port, MVSW_ACCEPT_FRAME_TYPE_TAGGED);
+ if (err)
+ return err;
+ } else {
+ err = mvsw_pr_hw_vlan_port_vid_set(port, vid);
+ if (err)
+ return err;
+ err = mvsw_pr_hw_port_accept_frame_type_set
+ (port, MVSW_ACCEPT_FRAME_TYPE_ALL);
+ if (err)
+ goto err_port_allow_untagged_set;
+ }
+
+ port->pvid = vid;
+ return 0;
+
+err_port_allow_untagged_set:
+ mvsw_pr_hw_vlan_port_vid_set(port, port->pvid);
+ return err;
+}
+
+struct mvsw_pr_port_vlan*
+mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *port, u16 vid)
+{
+ struct mvsw_pr_port_vlan *port_vlan;
+
+ list_for_each_entry(port_vlan, &port->vlans_list, list) {
+ if (port_vlan->vid == vid)
+ return port_vlan;
+ }
+
+ return NULL;
+}
+
+struct mvsw_pr_port_vlan*
+mvsw_pr_port_vlan_create(struct mvsw_pr_port *port, u16 vid)
+{
+ bool untagged = vid == MVSW_PR_DEFAULT_VID;
+ struct mvsw_pr_port_vlan *port_vlan;
+ int err;
+
+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
+ if (port_vlan)
+ return ERR_PTR(-EEXIST);
+
+ err = mvsw_pr_port_vlan_set(port, vid, true, untagged);
+ if (err)
+ return ERR_PTR(err);
+
+ port_vlan = kzalloc(sizeof(*port_vlan), GFP_KERNEL);
+ if (!port_vlan) {
+ err = -ENOMEM;
+ goto err_port_vlan_alloc;
+ }
+
+ port_vlan->mvsw_pr_port = port;
+ port_vlan->vid = vid;
+
+ list_add(&port_vlan->list, &port->vlans_list);
+
+ return port_vlan;
+
+err_port_vlan_alloc:
+ mvsw_pr_port_vlan_set(port, vid, false, false);
+ return ERR_PTR(err);
+}
+
+static void
+mvsw_pr_port_vlan_cleanup(struct mvsw_pr_port_vlan *port_vlan)
+{
+ if (port_vlan->bridge_port)
+ mvsw_pr_port_vlan_bridge_leave(port_vlan);
+}
+
+void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *port_vlan)
+{
+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
+ u16 vid = port_vlan->vid;
+
+ mvsw_pr_port_vlan_cleanup(port_vlan);
+ list_del(&port_vlan->list);
+ kfree(port_vlan);
+ mvsw_pr_hw_vlan_port_set(port, vid, false, false);
+}
+
+int mvsw_pr_port_vlan_set(struct mvsw_pr_port *port, u16 vid,
+ bool is_member, bool untagged)
+{
+ return mvsw_pr_hw_vlan_port_set(port, vid, is_member, untagged);
+}
+
+static int mvsw_pr_port_create(struct mvsw_pr_switch *sw, u32 id)
+{
+ struct net_device *net_dev;
+ struct mvsw_pr_port *port;
+ char *mac;
+ int err;
+
+ net_dev = alloc_etherdev(sizeof(*port));
+ if (!net_dev)
+ return -ENOMEM;
+
+ port = netdev_priv(net_dev);
+
+ INIT_LIST_HEAD(&port->vlans_list);
+ port->pvid = MVSW_PR_DEFAULT_VID;
+ port->net_dev = net_dev;
+ port->id = id;
+ port->sw = sw;
+
+ err = mvsw_pr_hw_port_info_get(port, &port->fp_id,
+ &port->hw_id, &port->dev_id);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to get port(%u) info\n", id);
+ goto err_register_netdev;
+ }
+
+ net_dev->features |= NETIF_F_NETNS_LOCAL | NETIF_F_HW_L2FW_DOFFLOAD;
+ net_dev->ethtool_ops = &mvsw_pr_ethtool_ops;
+ net_dev->netdev_ops = &mvsw_pr_netdev_ops;
+
+ netif_carrier_off(net_dev);
+
+ net_dev->mtu = min_t(unsigned int, sw->mtu_max, MVSW_PR_MTU_DEFAULT);
+ net_dev->min_mtu = sw->mtu_min;
+ net_dev->max_mtu = sw->mtu_max;
+
+ err = mvsw_pr_hw_port_mtu_set(port, net_dev->mtu);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to set port(%u) mtu\n", id);
+ goto err_register_netdev;
+ }
+
+ /* Only 255 unique MAC addresses are supported: the port fp_id
+ * becomes the last octet of the switch base MAC
+ */
+ if (port->fp_id >= 0xFF) {
+ err = -EINVAL;
+ goto err_register_netdev;
+ }
+
+ mac = net_dev->dev_addr;
+ memcpy(mac, sw->base_mac, net_dev->addr_len - 1);
+ mac[net_dev->addr_len - 1] = (char)port->fp_id;
+
+ err = mvsw_pr_hw_port_mac_set(port, mac);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to set port(%u) mac addr\n", id);
+ goto err_register_netdev;
+ }
+
+ err = mvsw_pr_hw_port_cap_get(port, &port->caps);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to get port(%u) caps\n", id);
+ goto err_register_netdev;
+ }
+
+ port->adver_link_modes = 0;
+ port->adver_fec = 1 << MVSW_PORT_FEC_OFF_BIT;
+ port->autoneg = false;
+ mvsw_pr_port_autoneg_set(port, true, port->caps.supp_link_modes,
+ port->caps.supp_fec);
+
+ err = mvsw_pr_hw_port_state_set(port, false);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to set port(%u) down\n", id);
+ goto err_register_netdev;
+ }
+
+ INIT_DELAYED_WORK(&port->cached_hw_stats.caching_dw,
+ &update_stats_cache);
+
+ err = register_netdev(net_dev);
+ if (err)
+ goto err_register_netdev;
+
+ list_add(&port->list, &sw->port_list);
+
+ return 0;
+
+err_register_netdev:
+ free_netdev(net_dev);
+ return err;
+}
+
+static void mvsw_pr_port_vlan_flush(struct mvsw_pr_port *port,
+ bool flush_default)
+{
+ struct mvsw_pr_port_vlan *port_vlan, *tmp;
+
+ list_for_each_entry_safe(port_vlan, tmp, &port->vlans_list, list) {
+ if (!flush_default && port_vlan->vid == MVSW_PR_DEFAULT_VID)
+ continue;
+
+ mvsw_pr_port_vlan_destroy(port_vlan);
+ }
+}
+
+int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id)
+{
+ return mvsw_pr_hw_bridge_create(sw, bridge_id);
+}
+
+int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id)
+{
+ return mvsw_pr_hw_bridge_delete(sw, bridge_id);
+}
+
+int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *port, u16 bridge_id)
+{
+ return mvsw_pr_hw_bridge_port_add(port, bridge_id);
+}
+
+int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *port, u16 bridge_id)
+{
+ return mvsw_pr_hw_bridge_port_delete(port, bridge_id);
+}
+
+int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time)
+{
+ return mvsw_pr_hw_switch_ageing_set(sw, ageing_time);
+}
+
+int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
+ enum mvsw_pr_fdb_flush_mode mode)
+{
+ return mvsw_pr_hw_fdb_flush_vlan(sw, vid, mode);
+}
+
+int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
+ enum mvsw_pr_fdb_flush_mode mode)
+{
+ return mvsw_pr_hw_fdb_flush_port_vlan(port, vid, mode);
+}
+
+int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
+ enum mvsw_pr_fdb_flush_mode mode)
+{
+ return mvsw_pr_hw_fdb_flush_port(port, mode);
+}
+
+static int mvsw_pr_clear_ports(struct mvsw_pr_switch *sw)
+{
+ struct net_device *net_dev;
+ struct list_head *pos, *n;
+ struct mvsw_pr_port *port;
+
+ list_for_each_safe(pos, n, &sw->port_list) {
+ port = list_entry(pos, typeof(*port), list);
+ net_dev = port->net_dev;
+
+ cancel_delayed_work_sync(&port->cached_hw_stats.caching_dw);
+ unregister_netdev(net_dev);
+ mvsw_pr_port_vlan_flush(port, true);
+ WARN_ON_ONCE(!list_empty(&port->vlans_list));
+ free_netdev(net_dev);
+ list_del(pos);
+ }
+	return 0;
+}
+
+static void mvsw_pr_port_handle_event(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt)
+{
+ struct mvsw_pr_port *port;
+ struct delayed_work *caching_dw;
+
+ port = __find_pr_port(sw, evt->port_evt.port_id);
+ if (!port)
+ return;
+
+ caching_dw = &port->cached_hw_stats.caching_dw;
+
+ switch (evt->id) {
+ case MVSW_PORT_EVENT_STATE_CHANGED:
+		if (evt->port_evt.data.oper_state) {
+			netif_carrier_on(port->net_dev);
+			queue_delayed_work(mvsw_pr_wq, caching_dw, 0);
+		} else {
+			netif_carrier_off(port->net_dev);
+			cancel_delayed_work(caching_dw);
+		}
+ break;
+ }
+}
+
+static void mvsw_pr_fdb_handle_event(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt)
+{
+ struct switchdev_notifier_fdb_info info;
+ struct mvsw_pr_port *port;
+
+ port = __find_pr_port(sw, evt->fdb_evt.port_id);
+ if (!port)
+ return;
+
+ info.addr = evt->fdb_evt.data.mac;
+ info.vid = evt->fdb_evt.vid;
+ info.offloaded = true;
+
+ rtnl_lock();
+ switch (evt->id) {
+ case MVSW_FDB_EVENT_LEARNED:
+ call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
+ port->net_dev, &info.info, NULL);
+ break;
+ case MVSW_FDB_EVENT_AGED:
+ call_switchdev_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
+ port->net_dev, &info.info, NULL);
+ break;
+ }
+ rtnl_unlock();
+}
+
+int mvsw_pr_fdb_add(struct mvsw_pr_port *port, const unsigned char *mac,
+ u16 vid, bool dynamic)
+{
+ return mvsw_pr_hw_fdb_add(port, mac, vid, dynamic);
+}
+
+int mvsw_pr_fdb_del(struct mvsw_pr_port *port, const unsigned char *mac,
+ u16 vid)
+{
+ return mvsw_pr_hw_fdb_del(port, mac, vid);
+}
+
+static void mvsw_pr_fdb_event_handler_unregister(struct mvsw_pr_switch *sw)
+{
+ mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_FDB,
+ mvsw_pr_fdb_handle_event);
+}
+
+static void mvsw_pr_port_event_handler_unregister(struct mvsw_pr_switch *sw)
+{
+ mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_PORT,
+ mvsw_pr_port_handle_event);
+}
+
+static void mvsw_pr_event_handlers_unregister(struct mvsw_pr_switch *sw)
+{
+ mvsw_pr_fdb_event_handler_unregister(sw);
+ mvsw_pr_port_event_handler_unregister(sw);
+}
+
+static int mvsw_pr_fdb_event_handler_register(struct mvsw_pr_switch *sw)
+{
+ return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_FDB,
+ mvsw_pr_fdb_handle_event);
+}
+
+static int mvsw_pr_port_event_handler_register(struct mvsw_pr_switch *sw)
+{
+ return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_PORT,
+ mvsw_pr_port_handle_event);
+}
+
+static int mvsw_pr_event_handlers_register(struct mvsw_pr_switch *sw)
+{
+ int err;
+
+ err = mvsw_pr_port_event_handler_register(sw);
+ if (err)
+ return err;
+
+ err = mvsw_pr_fdb_event_handler_register(sw);
+ if (err)
+ goto err_fdb_handler_register;
+
+ return 0;
+
+err_fdb_handler_register:
+ mvsw_pr_port_event_handler_unregister(sw);
+ return err;
+}
+
+static int mvsw_pr_init(struct mvsw_pr_switch *sw)
+{
+ u32 port;
+ int err;
+
+ err = mvsw_pr_hw_switch_init(sw);
+ if (err) {
+ dev_err(mvsw_dev(sw), "Failed to init Switch device\n");
+ return err;
+ }
+
+ dev_info(mvsw_dev(sw), "Initialized Switch device\n");
+
+ err = mvsw_pr_switchdev_register(sw);
+ if (err)
+ return err;
+
+ INIT_LIST_HEAD(&sw->port_list);
+
+ for (port = 0; port < sw->port_count; port++) {
+ err = mvsw_pr_port_create(sw, port);
+ if (err)
+ goto err_ports_init;
+ }
+
+ err = mvsw_pr_event_handlers_register(sw);
+ if (err)
+ goto err_ports_init;
+
+ return 0;
+
+err_ports_init:
+	mvsw_pr_clear_ports(sw);
+	mvsw_pr_switchdev_unregister(sw);
+	return err;
+}
+
+static void mvsw_pr_fini(struct mvsw_pr_switch *sw)
+{
+ mvsw_pr_event_handlers_unregister(sw);
+
+ mvsw_pr_switchdev_unregister(sw);
+ mvsw_pr_clear_ports(sw);
+}
+
+int mvsw_pr_device_register(struct mvsw_pr_device *dev)
+{
+ struct mvsw_pr_switch *sw;
+ int err;
+
+ sw = kzalloc(sizeof(*sw), GFP_KERNEL);
+ if (!sw)
+ return -ENOMEM;
+
+ dev->priv = sw;
+ sw->dev = dev;
+
+ err = mvsw_pr_init(sw);
+ if (err) {
+ kfree(sw);
+ return err;
+ }
+
+ list_add(&sw->list, &switches_registered);
+
+ return 0;
+}
+EXPORT_SYMBOL(mvsw_pr_device_register);
+
+void mvsw_pr_device_unregister(struct mvsw_pr_device *dev)
+{
+ struct mvsw_pr_switch *sw = dev->priv;
+
+ list_del(&sw->list);
+ mvsw_pr_fini(sw);
+ kfree(sw);
+}
+EXPORT_SYMBOL(mvsw_pr_device_unregister);
+
+static int __init mvsw_pr_module_init(void)
+{
+ INIT_LIST_HEAD(&switches_registered);
+
+ mvsw_pr_wq = alloc_workqueue(mvsw_driver_name, 0, 0);
+ if (!mvsw_pr_wq)
+ return -ENOMEM;
+
+ pr_info("Loading Marvell Prestera Switch Driver\n");
+ return 0;
+}
+
+static void __exit mvsw_pr_module_exit(void)
+{
+ destroy_workqueue(mvsw_pr_wq);
+
+ pr_info("Unloading Marvell Prestera Switch Driver\n");
+}
+
+module_init(mvsw_pr_module_init);
+module_exit(mvsw_pr_module_exit);
+
+MODULE_AUTHOR("Marvell Semi.");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Marvell Prestera switch driver");
+MODULE_VERSION(PRESTERA_DRV_VER);
diff --git a/drivers/net/ethernet/marvell/prestera/prestera.h b/drivers/net/ethernet/marvell/prestera/prestera.h
new file mode 100644
index 000000000000..cbc6b0c78937
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera.h
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */
+/*
+ * Copyright (c) 2019-2020 Marvell International Ltd.
+ * All rights reserved.
+ */
+
+#ifndef _MVSW_PRESTERA_H_
+#define _MVSW_PRESTERA_H_
+
+#include <linux/skbuff.h>
+#include <linux/notifier.h>
+#include <uapi/linux/if_ether.h>
+#include <linux/workqueue.h>
+
+#define MVSW_MSG_MAX_SIZE 1500
+
+#define MVSW_PR_DEFAULT_VID 1
+
+#define MVSW_PR_MIN_AGEING_TIME 10
+#define MVSW_PR_MAX_AGEING_TIME 1000000
+#define MVSW_PR_DEFAULT_AGEING_TIME 300
+
+struct mvsw_fw_rev {
+ u16 maj;
+ u16 min;
+ u16 sub;
+};
+
+struct mvsw_pr_bridge_port;
+
+struct mvsw_pr_port_vlan {
+ struct list_head list;
+ struct mvsw_pr_port *mvsw_pr_port;
+ u16 vid;
+ struct mvsw_pr_bridge_port *bridge_port;
+ struct list_head bridge_vlan_node;
+};
+
+struct mvsw_pr_port_stats {
+ u64 good_octets_received;
+ u64 bad_octets_received;
+ u64 mac_trans_error;
+ u64 broadcast_frames_received;
+ u64 multicast_frames_received;
+ u64 frames_64_octets;
+ u64 frames_65_to_127_octets;
+ u64 frames_128_to_255_octets;
+ u64 frames_256_to_511_octets;
+ u64 frames_512_to_1023_octets;
+ u64 frames_1024_to_max_octets;
+ u64 excessive_collision;
+ u64 multicast_frames_sent;
+ u64 broadcast_frames_sent;
+ u64 fc_sent;
+ u64 fc_received;
+ u64 buffer_overrun;
+ u64 undersize;
+ u64 fragments;
+ u64 oversize;
+ u64 jabber;
+ u64 rx_error_frame_received;
+ u64 bad_crc;
+ u64 collisions;
+ u64 late_collision;
+ u64 unicast_frames_received;
+ u64 unicast_frames_sent;
+ u64 sent_multiple;
+ u64 sent_deferred;
+ u64 frames_1024_to_1518_octets;
+ u64 frames_1519_to_max_octets;
+ u64 good_octets_sent;
+};
+
+struct mvsw_pr_port_caps {
+ u64 supp_link_modes;
+ u8 supp_fec;
+ u8 type;
+ u8 transceiver;
+};
+
+struct mvsw_pr_port {
+ struct net_device *net_dev;
+ struct mvsw_pr_switch *sw;
+ u32 id;
+ u32 hw_id;
+ u32 dev_id;
+ u16 fp_id;
+ u16 pvid;
+ bool autoneg;
+ u64 adver_link_modes;
+ u8 adver_fec;
+ struct mvsw_pr_port_caps caps;
+ struct list_head list;
+ struct list_head vlans_list;
+ struct {
+ struct mvsw_pr_port_stats stats;
+ struct delayed_work caching_dw;
+ } cached_hw_stats;
+};
+
+struct mvsw_pr_switchdev {
+ struct mvsw_pr_switch *sw;
+ struct notifier_block swdev_n;
+ struct notifier_block swdev_blocking_n;
+};
+
+struct mvsw_pr_fib {
+ struct mvsw_pr_switch *sw;
+ struct notifier_block fib_nb;
+ struct notifier_block netevent_nb;
+};
+
+struct mvsw_pr_device {
+ struct device *dev;
+ struct mvsw_fw_rev fw_rev;
+ void *priv;
+
+ /* called by device driver to pass event up to the higher layer */
+ int (*recv_msg)(struct mvsw_pr_device *dev, u8 *msg, size_t size);
+
+ /* called by higher layer to send request to the firmware */
+ int (*send_req)(struct mvsw_pr_device *dev, u8 *in_msg,
+ size_t in_size, u8 *out_msg, size_t out_size,
+ unsigned int wait);
+};
+
+enum mvsw_pr_event_type {
+ MVSW_EVENT_TYPE_UNSPEC,
+ MVSW_EVENT_TYPE_PORT,
+ MVSW_EVENT_TYPE_FDB,
+
+ MVSW_EVENT_TYPE_MAX,
+};
+
+enum mvsw_pr_port_event_id {
+ MVSW_PORT_EVENT_UNSPEC,
+ MVSW_PORT_EVENT_STATE_CHANGED,
+
+ MVSW_PORT_EVENT_MAX,
+};
+
+enum mvsw_pr_fdb_event_id {
+ MVSW_FDB_EVENT_UNSPEC,
+ MVSW_FDB_EVENT_LEARNED,
+ MVSW_FDB_EVENT_AGED,
+
+ MVSW_FDB_EVENT_MAX,
+};
+
+struct mvsw_pr_fdb_event {
+ u32 port_id;
+ u32 vid;
+ union {
+ u8 mac[ETH_ALEN];
+ } data;
+};
+
+struct mvsw_pr_port_event {
+ u32 port_id;
+ union {
+ u32 oper_state;
+ } data;
+};
+
+struct mvsw_pr_event {
+ u16 id;
+ union {
+ struct mvsw_pr_port_event port_evt;
+ struct mvsw_pr_fdb_event fdb_evt;
+ };
+};
+
+struct mvsw_pr_bridge;
+
+struct mvsw_pr_switch {
+ struct list_head list;
+ struct mvsw_pr_device *dev;
+ struct list_head event_handlers;
+	u8 base_mac[ETH_ALEN];
+ struct list_head port_list;
+ u32 port_count;
+ u32 mtu_min;
+ u32 mtu_max;
+ u8 id;
+ struct mvsw_pr_bridge *bridge;
+ struct mvsw_pr_switchdev *switchdev;
+ struct mvsw_pr_fib *fib;
+ struct notifier_block netdevice_nb;
+};
+
+enum mvsw_pr_fdb_flush_mode {
+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC = BIT(0),
+ MVSW_PR_FDB_FLUSH_MODE_STATIC = BIT(1),
+ MVSW_PR_FDB_FLUSH_MODE_ALL = MVSW_PR_FDB_FLUSH_MODE_DYNAMIC
+ | MVSW_PR_FDB_FLUSH_MODE_STATIC,
+};
+
+int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time);
+
+int mvsw_pr_port_learning_set(struct mvsw_pr_port *mvsw_pr_port,
+ bool learn_enable);
+int mvsw_pr_port_flood_set(struct mvsw_pr_port *mvsw_pr_port, bool flood);
+int mvsw_pr_port_pvid_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
+struct mvsw_pr_port_vlan *
+mvsw_pr_port_vlan_create(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
+void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
+int mvsw_pr_port_vlan_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid,
+ bool is_member, bool untagged);
+
+int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id);
+int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id);
+int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *mvsw_pr_port,
+ u16 bridge_id);
+int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *mvsw_pr_port,
+ u16 bridge_id);
+
+int mvsw_pr_fdb_add(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
+ u16 vid, bool dynamic);
+int mvsw_pr_fdb_del(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
+ u16 vid);
+int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
+ enum mvsw_pr_fdb_flush_mode mode);
+int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
+ enum mvsw_pr_fdb_flush_mode mode);
+int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
+ enum mvsw_pr_fdb_flush_mode mode);
+
+struct mvsw_pr_port_vlan *
+mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *mvsw_pr_port, u16 vid);
+void
+mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
+
+int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw);
+void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw);
+
+int mvsw_pr_device_register(struct mvsw_pr_device *dev);
+void mvsw_pr_device_unregister(struct mvsw_pr_device *dev);
+
+bool mvsw_pr_netdev_check(const struct net_device *dev);
+struct mvsw_pr_port *mvsw_pr_port_dev_lower_find(struct net_device *dev);
+
+const struct mvsw_pr_port *mvsw_pr_port_find(u32 dev_hw_id, u32 port_hw_id);
+
+#endif /* _MVSW_PRESTERA_H_ */
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
new file mode 100644
index 000000000000..d6617a16d7e1
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0 */
+/*
+ * Copyright (c) 2019-2020 Marvell International Ltd.
+ * All rights reserved.
+ */
+#ifndef _PRESTERA_DRV_VER_H_
+#define _PRESTERA_DRV_VER_H_
+
+#include <linux/stringify.h>
+
+/* Prestera driver version */
+#define PRESTERA_DRV_VER_MAJOR 1
+#define PRESTERA_DRV_VER_MINOR 0
+#define PRESTERA_DRV_VER_PATCH 0
+#define PRESTERA_DRV_VER_EXTRA
+
+#define PRESTERA_DRV_VER \
+ __stringify(PRESTERA_DRV_VER_MAJOR) "." \
+ __stringify(PRESTERA_DRV_VER_MINOR) "." \
+ __stringify(PRESTERA_DRV_VER_PATCH) \
+ __stringify(PRESTERA_DRV_VER_EXTRA)
+
+#endif /* _PRESTERA_DRV_VER_H_ */
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.c b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
new file mode 100644
index 000000000000..c97bafdd734e
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
@@ -0,0 +1,1094 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+/*
+ * Copyright (c) 2019-2020 Marvell International Ltd.
+ * All rights reserved.
+ */
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/netdevice.h>
+#include <linux/list.h>
+
+#include "prestera.h"
+#include "prestera_hw.h"
+
+#define MVSW_PR_INIT_TIMEOUT 30000000 /* 30sec */
+#define MVSW_PR_MIN_MTU 64
+
+enum mvsw_msg_type {
+ MVSW_MSG_TYPE_SWITCH_UNSPEC,
+ MVSW_MSG_TYPE_SWITCH_INIT,
+
+ MVSW_MSG_TYPE_AGEING_TIMEOUT_SET,
+
+ MVSW_MSG_TYPE_PORT_ATTR_SET,
+ MVSW_MSG_TYPE_PORT_ATTR_GET,
+ MVSW_MSG_TYPE_PORT_INFO_GET,
+
+ MVSW_MSG_TYPE_VLAN_CREATE,
+ MVSW_MSG_TYPE_VLAN_DELETE,
+ MVSW_MSG_TYPE_VLAN_PORT_SET,
+ MVSW_MSG_TYPE_VLAN_PVID_SET,
+
+ MVSW_MSG_TYPE_FDB_ADD,
+ MVSW_MSG_TYPE_FDB_DELETE,
+ MVSW_MSG_TYPE_FDB_FLUSH_PORT,
+ MVSW_MSG_TYPE_FDB_FLUSH_VLAN,
+ MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN,
+
+ MVSW_MSG_TYPE_LOG_LEVEL_SET,
+
+ MVSW_MSG_TYPE_BRIDGE_CREATE,
+ MVSW_MSG_TYPE_BRIDGE_DELETE,
+ MVSW_MSG_TYPE_BRIDGE_PORT_ADD,
+ MVSW_MSG_TYPE_BRIDGE_PORT_DELETE,
+
+ MVSW_MSG_TYPE_ACK,
+ MVSW_MSG_TYPE_MAX
+};
+
+enum mvsw_msg_port_attr {
+ MVSW_MSG_PORT_ATTR_ADMIN_STATE,
+ MVSW_MSG_PORT_ATTR_OPER_STATE,
+ MVSW_MSG_PORT_ATTR_MTU,
+ MVSW_MSG_PORT_ATTR_MAC,
+ MVSW_MSG_PORT_ATTR_SPEED,
+ MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
+ MVSW_MSG_PORT_ATTR_LEARNING,
+ MVSW_MSG_PORT_ATTR_FLOOD,
+ MVSW_MSG_PORT_ATTR_CAPABILITY,
+ MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
+ MVSW_MSG_PORT_ATTR_LINK_MODE,
+ MVSW_MSG_PORT_ATTR_TYPE,
+ MVSW_MSG_PORT_ATTR_FEC,
+ MVSW_MSG_PORT_ATTR_AUTONEG,
+ MVSW_MSG_PORT_ATTR_DUPLEX,
+ MVSW_MSG_PORT_ATTR_STATS,
+ MVSW_MSG_PORT_ATTR_MDIX,
+ MVSW_MSG_PORT_ATTR_MAX
+};
+
+enum {
+ MVSW_MSG_ACK_OK,
+ MVSW_MSG_ACK_FAILED,
+ MVSW_MSG_ACK_MAX
+};
+
+enum {
+ MVSW_MODE_FORCED_MDI,
+ MVSW_MODE_FORCED_MDIX,
+ MVSW_MODE_AUTO_MDI,
+ MVSW_MODE_AUTO_MDIX,
+ MVSW_MODE_AUTO
+};
+
+enum {
+ MVSW_PORT_GOOD_OCTETS_RCV_CNT,
+ MVSW_PORT_BAD_OCTETS_RCV_CNT,
+ MVSW_PORT_MAC_TRANSMIT_ERR_CNT,
+ MVSW_PORT_BRDC_PKTS_RCV_CNT,
+ MVSW_PORT_MC_PKTS_RCV_CNT,
+ MVSW_PORT_PKTS_64_OCTETS_CNT,
+ MVSW_PORT_PKTS_65TO127_OCTETS_CNT,
+ MVSW_PORT_PKTS_128TO255_OCTETS_CNT,
+ MVSW_PORT_PKTS_256TO511_OCTETS_CNT,
+ MVSW_PORT_PKTS_512TO1023_OCTETS_CNT,
+ MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT,
+ MVSW_PORT_EXCESSIVE_COLLISIONS_CNT,
+ MVSW_PORT_MC_PKTS_SENT_CNT,
+ MVSW_PORT_BRDC_PKTS_SENT_CNT,
+ MVSW_PORT_FC_SENT_CNT,
+ MVSW_PORT_GOOD_FC_RCV_CNT,
+ MVSW_PORT_DROP_EVENTS_CNT,
+ MVSW_PORT_UNDERSIZE_PKTS_CNT,
+ MVSW_PORT_FRAGMENTS_PKTS_CNT,
+ MVSW_PORT_OVERSIZE_PKTS_CNT,
+ MVSW_PORT_JABBER_PKTS_CNT,
+ MVSW_PORT_MAC_RCV_ERROR_CNT,
+ MVSW_PORT_BAD_CRC_CNT,
+ MVSW_PORT_COLLISIONS_CNT,
+ MVSW_PORT_LATE_COLLISIONS_CNT,
+ MVSW_PORT_GOOD_UC_PKTS_RCV_CNT,
+ MVSW_PORT_GOOD_UC_PKTS_SENT_CNT,
+ MVSW_PORT_MULTIPLE_PKTS_SENT_CNT,
+ MVSW_PORT_DEFERRED_PKTS_SENT_CNT,
+ MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT,
+ MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT,
+ MVSW_PORT_GOOD_OCTETS_SENT_CNT,
+ MVSW_PORT_CNT_MAX,
+};
+
+struct mvsw_msg_cmd {
+ u32 type;
+} __packed __aligned(4);
+
+struct mvsw_msg_ret {
+ struct mvsw_msg_cmd cmd;
+ u32 status;
+} __packed __aligned(4);
+
+struct mvsw_msg_common_request {
+ struct mvsw_msg_cmd cmd;
+} __packed __aligned(4);
+
+struct mvsw_msg_common_response {
+ struct mvsw_msg_ret ret;
+} __packed __aligned(4);
+
+union mvsw_msg_switch_param {
+ u32 ageing_timeout;
+};
+
+struct mvsw_msg_switch_attr_cmd {
+ struct mvsw_msg_cmd cmd;
+ union mvsw_msg_switch_param param;
+} __packed __aligned(4);
+
+struct mvsw_msg_switch_init_ret {
+ struct mvsw_msg_ret ret;
+ u32 port_count;
+ u32 mtu_max;
+ u8 switch_id;
+ u8 mac[ETH_ALEN];
+} __packed __aligned(4);
+
+struct mvsw_msg_port_autoneg_param {
+ u64 link_mode;
+ u8 enable;
+ u8 fec;
+};
+
+struct mvsw_msg_port_cap_param {
+ u64 link_mode;
+ u8 type;
+ u8 fec;
+ u8 transceiver;
+};
+
+union mvsw_msg_port_param {
+ u8 admin_state;
+ u8 oper_state;
+ u32 mtu;
+ u8 mac[ETH_ALEN];
+ u8 accept_frm_type;
+ u8 learning;
+ u32 speed;
+ u8 flood;
+ u32 link_mode;
+ u8 type;
+ u8 duplex;
+ u8 fec;
+ u8 mdix;
+ struct mvsw_msg_port_autoneg_param autoneg;
+ struct mvsw_msg_port_cap_param cap;
+};
+
+struct mvsw_msg_port_attr_cmd {
+ struct mvsw_msg_cmd cmd;
+ u32 attr;
+ u32 port;
+ u32 dev;
+ union mvsw_msg_port_param param;
+} __packed __aligned(4);
+
+struct mvsw_msg_port_attr_ret {
+ struct mvsw_msg_ret ret;
+ union mvsw_msg_port_param param;
+} __packed __aligned(4);
+
+struct mvsw_msg_port_stats_ret {
+ struct mvsw_msg_ret ret;
+ u64 stats[MVSW_PORT_CNT_MAX];
+} __packed __aligned(4);
+
+struct mvsw_msg_port_info_cmd {
+ struct mvsw_msg_cmd cmd;
+ u32 port;
+} __packed __aligned(4);
+
+struct mvsw_msg_port_info_ret {
+ struct mvsw_msg_ret ret;
+ u32 hw_id;
+ u32 dev_id;
+ u16 fp_id;
+} __packed __aligned(4);
+
+struct mvsw_msg_vlan_cmd {
+ struct mvsw_msg_cmd cmd;
+ u32 port;
+ u32 dev;
+ u16 vid;
+ u8 is_member;
+ u8 is_tagged;
+} __packed __aligned(4);
+
+struct mvsw_msg_fdb_cmd {
+ struct mvsw_msg_cmd cmd;
+ u32 port;
+ u32 dev;
+ u8 mac[ETH_ALEN];
+ u16 vid;
+ u8 dynamic;
+ u32 flush_mode;
+} __packed __aligned(4);
+
+struct mvsw_msg_event {
+ u16 type;
+ u16 id;
+} __packed __aligned(4);
+
+union mvsw_msg_event_fdb_param {
+ u8 mac[ETH_ALEN];
+};
+
+struct mvsw_msg_event_fdb {
+ struct mvsw_msg_event id;
+ u32 port_id;
+ u32 vid;
+ union mvsw_msg_event_fdb_param param;
+} __packed __aligned(4);
+
+union mvsw_msg_event_port_param {
+ u32 oper_state;
+};
+
+struct mvsw_msg_event_port {
+ struct mvsw_msg_event id;
+ u32 port_id;
+ union mvsw_msg_event_port_param param;
+} __packed __aligned(4);
+
+struct mvsw_msg_bridge_cmd {
+ struct mvsw_msg_cmd cmd;
+ u32 port;
+ u32 dev;
+ u16 bridge;
+} __packed __aligned(4);
+
+struct mvsw_msg_bridge_ret {
+ struct mvsw_msg_ret ret;
+ u16 bridge;
+} __packed __aligned(4);
+
+#define fw_check_resp(_response) \
+({ \
+ int __er = 0; \
+ typeof(_response) __r = (_response); \
+ if (__r->ret.cmd.type != MVSW_MSG_TYPE_ACK) \
+ __er = -EBADE; \
+ else if (__r->ret.status != MVSW_MSG_ACK_OK) \
+ __er = -EINVAL; \
+ (__er); \
+})
+
+#define __fw_send_req_resp(_switch, _type, _request, _response, _wait) \
+({ \
+ int __e; \
+ typeof(_switch) __sw = (_switch); \
+ typeof(_request) __req = (_request); \
+ typeof(_response) __resp = (_response); \
+ __req->cmd.type = (_type); \
+ __e = __sw->dev->send_req(__sw->dev, \
+ (u8 *)__req, sizeof(*__req), \
+ (u8 *)__resp, sizeof(*__resp), \
+ _wait); \
+ if (!__e) \
+ __e = fw_check_resp(__resp); \
+ (__e); \
+})
+
+#define fw_send_req_resp(_sw, _t, _req, _resp) \
+ __fw_send_req_resp(_sw, _t, _req, _resp, 0)
+
+#define fw_send_req_resp_wait(_sw, _t, _req, _resp, _wait) \
+ __fw_send_req_resp(_sw, _t, _req, _resp, _wait)
+
+#define fw_send_req(_sw, _t, _req) \
+({ \
+ struct mvsw_msg_common_response __re; \
+ (fw_send_req_resp(_sw, _t, _req, &__re)); \
+})
+
+struct mvsw_fw_event_handler {
+ struct list_head list;
+ enum mvsw_pr_event_type type;
+ void (*func)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt);
+};
+
+static int fw_parse_port_evt(u8 *msg, struct mvsw_pr_event *evt)
+{
+ struct mvsw_msg_event_port *hw_evt = (struct mvsw_msg_event_port *)msg;
+
+ evt->port_evt.port_id = hw_evt->port_id;
+
+	if (evt->id != MVSW_PORT_EVENT_STATE_CHANGED)
+		return -EINVAL;
+
+	evt->port_evt.data.oper_state = hw_evt->param.oper_state;
+
+ return 0;
+}
+
+static int fw_parse_fdb_evt(u8 *msg, struct mvsw_pr_event *evt)
+{
+ struct mvsw_msg_event_fdb *hw_evt = (struct mvsw_msg_event_fdb *)msg;
+
+ evt->fdb_evt.port_id = hw_evt->port_id;
+ evt->fdb_evt.vid = hw_evt->vid;
+
+	memcpy(evt->fdb_evt.data.mac, hw_evt->param.mac, ETH_ALEN);
+
+ return 0;
+}
+
+struct mvsw_fw_evt_parser {
+ int (*func)(u8 *msg, struct mvsw_pr_event *evt);
+};
+
+static struct mvsw_fw_evt_parser fw_event_parsers[MVSW_EVENT_TYPE_MAX] = {
+ [MVSW_EVENT_TYPE_PORT] = {.func = fw_parse_port_evt},
+ [MVSW_EVENT_TYPE_FDB] = {.func = fw_parse_fdb_evt},
+};
+
+static struct mvsw_fw_event_handler *
+__find_event_handler(const struct mvsw_pr_switch *sw,
+ enum mvsw_pr_event_type type)
+{
+ struct mvsw_fw_event_handler *eh;
+
+ list_for_each_entry_rcu(eh, &sw->event_handlers, list) {
+ if (eh->type == type)
+ return eh;
+ }
+
+ return NULL;
+}
+
+static int fw_event_recv(struct mvsw_pr_device *dev, u8 *buf, size_t size)
+{
+ void (*cb)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt) = NULL;
+ struct mvsw_msg_event *msg = (struct mvsw_msg_event *)buf;
+ struct mvsw_pr_switch *sw = dev->priv;
+ struct mvsw_fw_event_handler *eh;
+ struct mvsw_pr_event evt;
+ int err;
+
+ if (msg->type >= MVSW_EVENT_TYPE_MAX)
+ return -EINVAL;
+
+ rcu_read_lock();
+ eh = __find_event_handler(sw, msg->type);
+ if (eh)
+ cb = eh->func;
+ rcu_read_unlock();
+
+ if (!cb || !fw_event_parsers[msg->type].func)
+ return 0;
+
+ evt.id = msg->id;
+
+ err = fw_event_parsers[msg->type].func(buf, &evt);
+ if (!err)
+ cb(sw, &evt);
+
+ return err;
+}
+
+int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,
+ u16 *fp_id, u32 *hw_id, u32 *dev_id)
+{
+ struct mvsw_msg_port_info_ret resp;
+ struct mvsw_msg_port_info_cmd req = {
+ .port = port->id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_INFO_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *hw_id = resp.hw_id;
+ *dev_id = resp.dev_id;
+ *fp_id = resp.fp_id;
+
+ return 0;
+}
+
+int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw)
+{
+ struct mvsw_msg_switch_init_ret resp;
+ struct mvsw_msg_common_request req;
+	int err;
+
+ INIT_LIST_HEAD(&sw->event_handlers);
+
+ err = fw_send_req_resp_wait(sw, MVSW_MSG_TYPE_SWITCH_INIT, &req, &resp,
+ MVSW_PR_INIT_TIMEOUT);
+ if (err)
+ return err;
+
+ sw->id = resp.switch_id;
+ sw->port_count = resp.port_count;
+ sw->mtu_min = MVSW_PR_MIN_MTU;
+ sw->mtu_max = resp.mtu_max;
+ sw->dev->recv_msg = fw_event_recv;
+ memcpy(sw->base_mac, resp.mac, ETH_ALEN);
+
+ return err;
+}
+
+int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
+ u32 ageing_time)
+{
+ struct mvsw_msg_switch_attr_cmd req = {
+ .param = {.ageing_timeout = ageing_time}
+ };
+
+ return fw_send_req(sw, MVSW_MSG_TYPE_AGEING_TIMEOUT_SET, &req);
+}
+
+int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
+ bool admin_state)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.admin_state = admin_state ? 1 : 0}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
+ bool *admin_state, bool *oper_state)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ if (admin_state) {
+ req.attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE;
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+ *admin_state = resp.param.admin_state != 0;
+ }
+
+ if (oper_state) {
+ req.attr = MVSW_MSG_PORT_ATTR_OPER_STATE;
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+ *oper_state = resp.param.oper_state != 0;
+ }
+
+ return 0;
+}
+
+int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MTU,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.mtu = mtu}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MTU,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *mtu = resp.param.mtu;
+
+ return err;
+}
+
+int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MAC,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ memcpy(&req.param.mac, mac, sizeof(req.param.mac));
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MAC,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ memcpy(mac, resp.param.mac, sizeof(resp.param.mac));
+
+ return err;
+}
+
+int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
+ enum mvsw_pr_accept_frame_type type)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.accept_frm_type = type}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_LEARNING,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.learning = enable ? 1 : 0}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
+ enum mvsw_pr_event_type type,
+ void (*cb)(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt))
+{
+ struct mvsw_fw_event_handler *eh;
+
+ eh = __find_event_handler(sw, type);
+ if (eh)
+ return -EEXIST;
+ eh = kmalloc(sizeof(*eh), GFP_KERNEL);
+ if (!eh)
+ return -ENOMEM;
+
+ eh->type = type;
+ eh->func = cb;
+
+ INIT_LIST_HEAD(&eh->list);
+
+ list_add_rcu(&eh->list, &sw->event_handlers);
+
+ return 0;
+}
+
+void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
+ enum mvsw_pr_event_type type,
+ void (*cb)(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt))
+{
+ struct mvsw_fw_event_handler *eh;
+
+ eh = __find_event_handler(sw, type);
+ if (!eh)
+ return;
+
+ list_del_rcu(&eh->list);
+ synchronize_rcu();
+ kfree(eh);
+}
+
+int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid)
+{
+ struct mvsw_msg_vlan_cmd req = {
+ .vid = vid,
+ };
+
+ return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_CREATE, &req);
+}
+
+int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid)
+{
+ struct mvsw_msg_vlan_cmd req = {
+ .vid = vid,
+ };
+
+ return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_DELETE, &req);
+}
+
+int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
+ u16 vid, bool is_member, bool untagged)
+{
+ struct mvsw_msg_vlan_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .vid = vid,
+ .is_member = is_member ? 1 : 0,
+ .is_tagged = untagged ? 0 : 1
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PORT_SET, &req);
+}
+
+int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid)
+{
+ struct mvsw_msg_vlan_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .vid = vid
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PVID_SET, &req);
+}
+
+int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_SPEED,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *speed = resp.param.speed;
+
+ return err;
+}
+
+int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_FLOOD,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.flood = flood ? 1 : 0}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
+ const unsigned char *mac, u16 vid, bool dynamic)
+{
+ struct mvsw_msg_fdb_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .vid = vid,
+ .dynamic = dynamic ? 1 : 0
+ };
+
+ memcpy(req.mac, mac, sizeof(req.mac));
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_ADD, &req);
+}
+
+int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
+ const unsigned char *mac, u16 vid)
+{
+ struct mvsw_msg_fdb_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .vid = vid
+ };
+
+ memcpy(req.mac, mac, sizeof(req.mac));
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_DELETE, &req);
+}
+
+int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
+ struct mvsw_pr_port_caps *caps)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_CAPABILITY,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ caps->supp_link_modes = resp.param.cap.link_mode;
+ caps->supp_fec = resp.param.cap.fec;
+ caps->type = resp.param.cap.type;
+ caps->transceiver = resp.param.cap.transceiver;
+
+ return err;
+}
+
+int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
+ u64 *link_mode_bitmap)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *link_mode_bitmap = resp.param.cap.link_mode;
+
+ return err;
+}
+
+int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MDIX,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ switch (resp.param.mdix) {
+ case MVSW_MODE_FORCED_MDI:
+ case MVSW_MODE_AUTO_MDI:
+ *mode = ETH_TP_MDI;
+ break;
+
+ case MVSW_MODE_FORCED_MDIX:
+ case MVSW_MODE_AUTO_MDIX:
+ *mode = ETH_TP_MDI_X;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_MDIX,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+
+ switch (mode) {
+ case ETH_TP_MDI:
+ req.param.mdix = MVSW_MODE_FORCED_MDI;
+ break;
+
+ case ETH_TP_MDI_X:
+ req.param.mdix = MVSW_MODE_FORCED_MDIX;
+ break;
+
+ case ETH_TP_MDI_AUTO:
+ req.param.mdix = MVSW_MODE_AUTO;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_TYPE,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *type = resp.param.type;
+
+	return 0;
+}
+
+int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_FEC,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *fec = resp.param.fec;
+
+	return 0;
+}
+
+int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_FEC,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.fec = fec}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
+ bool autoneg, u64 link_modes, u8 fec)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_AUTONEG,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.autoneg = {.link_mode = link_modes,
+ .enable = autoneg ? 1 : 0,
+ .fec = fec}
+ }
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
+
+int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_DUPLEX,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *duplex = resp.param.duplex;
+
+	return 0;
+}
+
+int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
+ struct mvsw_pr_port_stats *stats)
+{
+ struct mvsw_msg_port_stats_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_STATS,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ u64 *hw_val = resp.stats;
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ stats->good_octets_received = hw_val[MVSW_PORT_GOOD_OCTETS_RCV_CNT];
+ stats->bad_octets_received = hw_val[MVSW_PORT_BAD_OCTETS_RCV_CNT];
+ stats->mac_trans_error = hw_val[MVSW_PORT_MAC_TRANSMIT_ERR_CNT];
+ stats->broadcast_frames_received = hw_val[MVSW_PORT_BRDC_PKTS_RCV_CNT];
+ stats->multicast_frames_received = hw_val[MVSW_PORT_MC_PKTS_RCV_CNT];
+ stats->frames_64_octets = hw_val[MVSW_PORT_PKTS_64_OCTETS_CNT];
+ stats->frames_65_to_127_octets =
+ hw_val[MVSW_PORT_PKTS_65TO127_OCTETS_CNT];
+ stats->frames_128_to_255_octets =
+ hw_val[MVSW_PORT_PKTS_128TO255_OCTETS_CNT];
+ stats->frames_256_to_511_octets =
+ hw_val[MVSW_PORT_PKTS_256TO511_OCTETS_CNT];
+ stats->frames_512_to_1023_octets =
+ hw_val[MVSW_PORT_PKTS_512TO1023_OCTETS_CNT];
+ stats->frames_1024_to_max_octets =
+ hw_val[MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT];
+ stats->excessive_collision = hw_val[MVSW_PORT_EXCESSIVE_COLLISIONS_CNT];
+ stats->multicast_frames_sent = hw_val[MVSW_PORT_MC_PKTS_SENT_CNT];
+ stats->broadcast_frames_sent = hw_val[MVSW_PORT_BRDC_PKTS_SENT_CNT];
+ stats->fc_sent = hw_val[MVSW_PORT_FC_SENT_CNT];
+ stats->fc_received = hw_val[MVSW_PORT_GOOD_FC_RCV_CNT];
+ stats->buffer_overrun = hw_val[MVSW_PORT_DROP_EVENTS_CNT];
+ stats->undersize = hw_val[MVSW_PORT_UNDERSIZE_PKTS_CNT];
+ stats->fragments = hw_val[MVSW_PORT_FRAGMENTS_PKTS_CNT];
+ stats->oversize = hw_val[MVSW_PORT_OVERSIZE_PKTS_CNT];
+ stats->jabber = hw_val[MVSW_PORT_JABBER_PKTS_CNT];
+ stats->rx_error_frame_received = hw_val[MVSW_PORT_MAC_RCV_ERROR_CNT];
+ stats->bad_crc = hw_val[MVSW_PORT_BAD_CRC_CNT];
+ stats->collisions = hw_val[MVSW_PORT_COLLISIONS_CNT];
+ stats->late_collision = hw_val[MVSW_PORT_LATE_COLLISIONS_CNT];
+ stats->unicast_frames_received = hw_val[MVSW_PORT_GOOD_UC_PKTS_RCV_CNT];
+ stats->unicast_frames_sent = hw_val[MVSW_PORT_GOOD_UC_PKTS_SENT_CNT];
+ stats->sent_multiple = hw_val[MVSW_PORT_MULTIPLE_PKTS_SENT_CNT];
+ stats->sent_deferred = hw_val[MVSW_PORT_DEFERRED_PKTS_SENT_CNT];
+ stats->frames_1024_to_1518_octets =
+ hw_val[MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT];
+ stats->frames_1519_to_max_octets =
+ hw_val[MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT];
+ stats->good_octets_sent = hw_val[MVSW_PORT_GOOD_OCTETS_SENT_CNT];
+
+ return 0;
+}
+
+int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id)
+{
+	struct mvsw_msg_bridge_cmd req = {};
+ struct mvsw_msg_bridge_ret resp;
+ int err;
+
+ err = fw_send_req_resp(sw, MVSW_MSG_TYPE_BRIDGE_CREATE, &req, &resp);
+ if (err)
+ return err;
+
+ *bridge_id = resp.bridge;
+	return 0;
+}
+
+int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id)
+{
+ struct mvsw_msg_bridge_cmd req = {
+ .bridge = bridge_id
+ };
+
+ return fw_send_req(sw, MVSW_MSG_TYPE_BRIDGE_DELETE, &req);
+}
+
+int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id)
+{
+ struct mvsw_msg_bridge_cmd req = {
+ .bridge = bridge_id,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_ADD, &req);
+}
+
+int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
+ u16 bridge_id)
+{
+ struct mvsw_msg_bridge_cmd req = {
+ .bridge = bridge_id,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_DELETE, &req);
+}
+
+int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode)
+{
+ struct mvsw_msg_fdb_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .flush_mode = mode,
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT, &req);
+}
+
+int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
+ u32 mode)
+{
+ struct mvsw_msg_fdb_cmd req = {
+ .vid = vid,
+ .flush_mode = mode,
+ };
+
+ return fw_send_req(sw, MVSW_MSG_TYPE_FDB_FLUSH_VLAN, &req);
+}
+
+int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
+ u32 mode)
+{
+ struct mvsw_msg_fdb_cmd req = {
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .vid = vid,
+ .flush_mode = mode,
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN, &req);
+}
+
+int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
+ u32 *mode)
+{
+ struct mvsw_msg_port_attr_ret resp;
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
+ .port = port->hw_id,
+ .dev = port->dev_id
+ };
+ int err;
+
+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
+ &req, &resp);
+ if (err)
+ return err;
+
+ *mode = resp.param.link_mode;
+
+	return 0;
+}
+
+int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
+ u32 mode)
+{
+ struct mvsw_msg_port_attr_cmd req = {
+ .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
+ .port = port->hw_id,
+ .dev = port->dev_id,
+ .param = {.link_mode = mode}
+ };
+
+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
+}
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.h b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
new file mode 100644
index 000000000000..dfae2631160e
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+ *
+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
+ *
+ */
+
+#ifndef _MVSW_PRESTERA_HW_H_
+#define _MVSW_PRESTERA_HW_H_
+
+#include <linux/types.h>
+
+enum mvsw_pr_accept_frame_type {
+ MVSW_ACCEPT_FRAME_TYPE_TAGGED,
+ MVSW_ACCEPT_FRAME_TYPE_UNTAGGED,
+ MVSW_ACCEPT_FRAME_TYPE_ALL
+};
+
+enum {
+ MVSW_LINK_MODE_10baseT_Half_BIT,
+ MVSW_LINK_MODE_10baseT_Full_BIT,
+ MVSW_LINK_MODE_100baseT_Half_BIT,
+ MVSW_LINK_MODE_100baseT_Full_BIT,
+ MVSW_LINK_MODE_1000baseT_Half_BIT,
+ MVSW_LINK_MODE_1000baseT_Full_BIT,
+ MVSW_LINK_MODE_1000baseX_Full_BIT,
+ MVSW_LINK_MODE_1000baseKX_Full_BIT,
+ MVSW_LINK_MODE_10GbaseKR_Full_BIT,
+ MVSW_LINK_MODE_10GbaseSR_Full_BIT,
+ MVSW_LINK_MODE_10GbaseLR_Full_BIT,
+ MVSW_LINK_MODE_20GbaseKR2_Full_BIT,
+ MVSW_LINK_MODE_25GbaseCR_Full_BIT,
+ MVSW_LINK_MODE_25GbaseKR_Full_BIT,
+ MVSW_LINK_MODE_25GbaseSR_Full_BIT,
+ MVSW_LINK_MODE_40GbaseKR4_Full_BIT,
+ MVSW_LINK_MODE_40GbaseCR4_Full_BIT,
+ MVSW_LINK_MODE_40GbaseSR4_Full_BIT,
+ MVSW_LINK_MODE_50GbaseCR2_Full_BIT,
+ MVSW_LINK_MODE_50GbaseKR2_Full_BIT,
+ MVSW_LINK_MODE_50GbaseSR2_Full_BIT,
+ MVSW_LINK_MODE_100GbaseKR4_Full_BIT,
+ MVSW_LINK_MODE_100GbaseSR4_Full_BIT,
+ MVSW_LINK_MODE_100GbaseCR4_Full_BIT,
+ MVSW_LINK_MODE_MAX,
+};
+
+enum {
+ MVSW_PORT_TYPE_NONE,
+ MVSW_PORT_TYPE_TP,
+ MVSW_PORT_TYPE_AUI,
+ MVSW_PORT_TYPE_MII,
+ MVSW_PORT_TYPE_FIBRE,
+ MVSW_PORT_TYPE_BNC,
+ MVSW_PORT_TYPE_DA,
+ MVSW_PORT_TYPE_OTHER,
+ MVSW_PORT_TYPE_MAX,
+};
+
+enum {
+ MVSW_PORT_TRANSCEIVER_COPPER,
+ MVSW_PORT_TRANSCEIVER_SFP,
+ MVSW_PORT_TRANSCEIVER_MAX,
+};
+
+enum {
+ MVSW_PORT_FEC_OFF_BIT,
+ MVSW_PORT_FEC_BASER_BIT,
+ MVSW_PORT_FEC_RS_BIT,
+ MVSW_PORT_FEC_MAX,
+};
+
+enum {
+ MVSW_PORT_DUPLEX_HALF,
+ MVSW_PORT_DUPLEX_FULL
+};
+
+struct mvsw_pr_switch;
+struct mvsw_pr_port;
+struct mvsw_pr_port_stats;
+struct mvsw_pr_port_caps;
+
+enum mvsw_pr_event_type;
+struct mvsw_pr_event;
+
+/* Switch API */
+int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw);
+int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
+ u32 ageing_time);
+
+/* Port API */
+int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,
+ u16 *fp_id, u32 *hw_id, u32 *dev_id);
+int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
+ bool admin_state);
+int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
+ bool *admin_state, bool *oper_state);
+int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu);
+int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu);
+int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac);
+int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac);
+int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
+ enum mvsw_pr_accept_frame_type type);
+int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable);
+int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed);
+int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood);
+int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
+ struct mvsw_pr_port_caps *caps);
+int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
+ u64 *link_mode_bitmap);
+int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type);
+int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec);
+int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec);
+int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
+ bool autoneg, u64 link_modes, u8 fec);
+int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex);
+int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
+ struct mvsw_pr_port_stats *stats);
+int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
+ u32 *mode);
+int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
+ u32 mode);
+int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode);
+int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode);
+
+/* Vlan API */
+int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid);
+int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid);
+int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
+ u16 vid, bool is_member, bool untagged);
+int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid);
+
+/* FDB API */
+int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
+ const unsigned char *mac, u16 vid, bool dynamic);
+int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
+ const unsigned char *mac, u16 vid);
+int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode);
+int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
+ u32 mode);
+int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
+ u32 mode);
+
+/* Bridge API */
+int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id);
+int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id);
+int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id);
+int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
+ u16 bridge_id);
+
+/* Event handlers */
+int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
+ enum mvsw_pr_event_type type,
+ void (*cb)(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt));
+void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
+ enum mvsw_pr_event_type type,
+ void (*cb)(struct mvsw_pr_switch *sw,
+ struct mvsw_pr_event *evt));
+
+#endif /* _MVSW_PRESTERA_HW_H_ */
diff --git a/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
new file mode 100644
index 000000000000..18fa6bbe5ace
--- /dev/null
+++ b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
@@ -0,0 +1,1217 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+/*
+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/if_vlan.h>
+#include <linux/if_bridge.h>
+#include <linux/notifier.h>
+#include <net/switchdev.h>
+#include <net/netevent.h>
+#include <net/vxlan.h>
+
+#include "prestera.h"
+
+struct mvsw_pr_bridge {
+ struct mvsw_pr_switch *sw;
+ u32 ageing_time;
+ struct list_head bridge_list;
+ bool bridge_8021q_exists;
+};
+
+struct mvsw_pr_bridge_device {
+ struct net_device *dev;
+ struct list_head bridge_node;
+ struct list_head port_list;
+ u16 bridge_id;
+ u8 vlan_enabled:1, multicast_enabled:1, mrouter:1;
+};
+
+struct mvsw_pr_bridge_port {
+ struct net_device *dev;
+ struct mvsw_pr_bridge_device *bridge_device;
+ struct list_head bridge_device_node;
+ struct list_head vlan_list;
+ unsigned int ref_count;
+ u8 stp_state;
+ unsigned long flags;
+};
+
+struct mvsw_pr_bridge_vlan {
+ struct list_head bridge_port_node;
+ struct list_head port_vlan_list;
+ u16 vid;
+};
+
+struct mvsw_pr_event_work {
+ struct work_struct work;
+ struct switchdev_notifier_fdb_info fdb_info;
+ struct net_device *dev;
+ unsigned long event;
+};
+
+static struct workqueue_struct *mvsw_owq;
+
+static struct mvsw_pr_bridge_port *
+mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
+ struct net_device *brport_dev);
+
+static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
+ struct mvsw_pr_bridge_port *br_port);
+
+static struct mvsw_pr_bridge_device *
+mvsw_pr_bridge_device_find(const struct mvsw_pr_bridge *bridge,
+ const struct net_device *br_dev)
+{
+ struct mvsw_pr_bridge_device *bridge_device;
+
+ list_for_each_entry(bridge_device, &bridge->bridge_list,
+ bridge_node)
+ if (bridge_device->dev == br_dev)
+ return bridge_device;
+
+ return NULL;
+}
+
+static bool
+mvsw_pr_bridge_device_is_offloaded(const struct mvsw_pr_switch *sw,
+ const struct net_device *br_dev)
+{
+ return !!mvsw_pr_bridge_device_find(sw->bridge, br_dev);
+}
+
+static struct mvsw_pr_bridge_port *
+__mvsw_pr_bridge_port_find(const struct mvsw_pr_bridge_device *bridge_device,
+ const struct net_device *brport_dev)
+{
+ struct mvsw_pr_bridge_port *br_port;
+
+ list_for_each_entry(br_port, &bridge_device->port_list,
+ bridge_device_node) {
+ if (br_port->dev == brport_dev)
+ return br_port;
+ }
+
+ return NULL;
+}
+
+static struct mvsw_pr_bridge_port *
+mvsw_pr_bridge_port_find(struct mvsw_pr_bridge *bridge,
+ struct net_device *brport_dev)
+{
+ struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
+ struct mvsw_pr_bridge_device *bridge_device;
+
+ if (!br_dev)
+ return NULL;
+
+ bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
+ if (!bridge_device)
+ return NULL;
+
+ return __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
+}
+
+static struct mvsw_pr_bridge_vlan *
+mvsw_pr_bridge_vlan_find(const struct mvsw_pr_bridge_port *br_port, u16 vid)
+{
+ struct mvsw_pr_bridge_vlan *br_vlan;
+
+ list_for_each_entry(br_vlan, &br_port->vlan_list, bridge_port_node) {
+ if (br_vlan->vid == vid)
+ return br_vlan;
+ }
+
+ return NULL;
+}
+
+static struct mvsw_pr_bridge_vlan *
+mvsw_pr_bridge_vlan_create(struct mvsw_pr_bridge_port *br_port, u16 vid)
+{
+ struct mvsw_pr_bridge_vlan *br_vlan;
+
+ br_vlan = kzalloc(sizeof(*br_vlan), GFP_KERNEL);
+ if (!br_vlan)
+ return NULL;
+
+ INIT_LIST_HEAD(&br_vlan->port_vlan_list);
+ br_vlan->vid = vid;
+ list_add(&br_vlan->bridge_port_node, &br_port->vlan_list);
+
+ return br_vlan;
+}
+
+static void
+mvsw_pr_bridge_vlan_destroy(struct mvsw_pr_bridge_vlan *br_vlan)
+{
+ list_del(&br_vlan->bridge_port_node);
+ WARN_ON(!list_empty(&br_vlan->port_vlan_list));
+ kfree(br_vlan);
+}
+
+static struct mvsw_pr_bridge_vlan *
+mvsw_pr_bridge_vlan_get(struct mvsw_pr_bridge_port *br_port, u16 vid)
+{
+ struct mvsw_pr_bridge_vlan *br_vlan;
+
+ br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
+ if (br_vlan)
+ return br_vlan;
+
+ return mvsw_pr_bridge_vlan_create(br_port, vid);
+}
+
+static void mvsw_pr_bridge_vlan_put(struct mvsw_pr_bridge_vlan *br_vlan)
+{
+ if (list_empty(&br_vlan->port_vlan_list))
+ mvsw_pr_bridge_vlan_destroy(br_vlan);
+}
+
+static int
+mvsw_pr_port_vlan_bridge_join(struct mvsw_pr_port_vlan *port_vlan,
+ struct mvsw_pr_bridge_port *br_port,
+ struct netlink_ext_ack *extack)
+{
+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
+ struct mvsw_pr_bridge_vlan *br_vlan;
+ u16 vid = port_vlan->vid;
+ int err;
+
+ if (port_vlan->bridge_port)
+ return 0;
+
+ err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
+ if (err)
+ return err;
+
+ err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
+ if (err)
+ goto err_port_learning_set;
+
+ br_vlan = mvsw_pr_bridge_vlan_get(br_port, vid);
+ if (!br_vlan) {
+ err = -ENOMEM;
+ goto err_bridge_vlan_get;
+ }
+
+ list_add(&port_vlan->bridge_vlan_node, &br_vlan->port_vlan_list);
+
+ mvsw_pr_bridge_port_get(port->sw->bridge, br_port->dev);
+ port_vlan->bridge_port = br_port;
+
+ return 0;
+
+err_bridge_vlan_get:
+ mvsw_pr_port_learning_set(port, false);
+err_port_learning_set:
+ return err;
+}
+
+static int
+mvsw_pr_bridge_vlan_port_count_get(struct mvsw_pr_bridge_device *bridge_device,
+ u16 vid)
+{
+	struct mvsw_pr_bridge_port *br_port;
+	struct mvsw_pr_bridge_vlan *br_vlan;
+	int count = 0;
+
+ list_for_each_entry(br_port, &bridge_device->port_list,
+ bridge_device_node) {
+ list_for_each_entry(br_vlan, &br_port->vlan_list,
+ bridge_port_node) {
+ if (br_vlan->vid == vid) {
+ count += 1;
+ break;
+ }
+ }
+ }
+
+ return count;
+}
+
+void
+mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *port_vlan)
+{
+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
+ struct mvsw_pr_bridge_vlan *br_vlan;
+ struct mvsw_pr_bridge_port *br_port;
+ int port_count;
+ u16 vid = port_vlan->vid;
+ bool last_port, last_vlan;
+
+ br_port = port_vlan->bridge_port;
+ last_vlan = list_is_singular(&br_port->vlan_list);
+ port_count =
+ mvsw_pr_bridge_vlan_port_count_get(br_port->bridge_device, vid);
+ br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
+ last_port = port_count == 1;
+ if (last_vlan) {
+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
+ } else if (last_port) {
+ mvsw_pr_fdb_flush_vlan(port->sw, vid,
+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
+ } else {
+ mvsw_pr_fdb_flush_port_vlan(port, vid,
+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
+ }
+
+ list_del(&port_vlan->bridge_vlan_node);
+ mvsw_pr_bridge_vlan_put(br_vlan);
+ mvsw_pr_bridge_port_put(port->sw->bridge, br_port);
+ port_vlan->bridge_port = NULL;
+}
+
+static int
+mvsw_pr_bridge_port_vlan_add(struct mvsw_pr_port *port,
+ struct mvsw_pr_bridge_port *br_port,
+ u16 vid, bool is_untagged, bool is_pvid,
+ struct netlink_ext_ack *extack)
+{
+	struct mvsw_pr_port_vlan *port_vlan;
+	u16 old_pvid = port->pvid;
+	u16 pvid;
+	int err;
+
+ if (is_pvid)
+ pvid = vid;
+ else
+ pvid = port->pvid == vid ? 0 : port->pvid;
+
+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
+ if (port_vlan && port_vlan->bridge_port != br_port)
+ return -EEXIST;
+
+ if (!port_vlan) {
+ port_vlan = mvsw_pr_port_vlan_create(port, vid);
+ if (IS_ERR(port_vlan))
+ return PTR_ERR(port_vlan);
+ }
+
+ err = mvsw_pr_port_vlan_set(port, vid, true, is_untagged);
+ if (err)
+ goto err_port_vlan_set;
+
+ err = mvsw_pr_port_pvid_set(port, pvid);
+ if (err)
+ goto err_port_pvid_set;
+
+ err = mvsw_pr_port_vlan_bridge_join(port_vlan, br_port, extack);
+ if (err)
+ goto err_port_vlan_bridge_join;
+
+ return 0;
+
+err_port_vlan_bridge_join:
+ mvsw_pr_port_pvid_set(port, old_pvid);
+err_port_pvid_set:
+ mvsw_pr_port_vlan_set(port, vid, false, false);
+err_port_vlan_set:
+ mvsw_pr_port_vlan_destroy(port_vlan);
+
+ return err;
+}
+
+static int mvsw_pr_port_vlans_add(struct mvsw_pr_port *port,
+ const struct switchdev_obj_port_vlan *vlan,
+ struct switchdev_trans *trans,
+ struct netlink_ext_ack *extack)
+{
+ bool flag_untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+ bool flag_pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
+ struct net_device *orig_dev = vlan->obj.orig_dev;
+ struct mvsw_pr_bridge_port *br_port;
+ struct mvsw_pr_switch *sw = port->sw;
+ u16 vid;
+
+ if (netif_is_bridge_master(orig_dev))
+ return 0;
+
+ if (switchdev_trans_ph_commit(trans))
+ return 0;
+
+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
+ if (WARN_ON(!br_port))
+ return -EINVAL;
+
+ if (!br_port->bridge_device->vlan_enabled)
+ return 0;
+
+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ int err;
+
+ err = mvsw_pr_bridge_port_vlan_add(port, br_port,
+ vid, flag_untagged,
+ flag_pvid, extack);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int mvsw_pr_port_obj_add(struct net_device *dev,
+ const struct switchdev_obj *obj,
+ struct switchdev_trans *trans,
+ struct netlink_ext_ack *extack)
+{
+	struct mvsw_pr_port *port = netdev_priv(dev);
+	const struct switchdev_obj_port_vlan *vlan;
+	int err = 0;
+
+ switch (obj->id) {
+ case SWITCHDEV_OBJ_ID_PORT_VLAN:
+ vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
+ err = mvsw_pr_port_vlans_add(port, vlan, trans, extack);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
+ return err;
+}
+
+static void
+mvsw_pr_bridge_port_vlan_del(struct mvsw_pr_port *port,
+ struct mvsw_pr_bridge_port *br_port, u16 vid)
+{
+ u16 pvid = port->pvid == vid ? 0 : port->pvid;
+ struct mvsw_pr_port_vlan *port_vlan;
+
+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
+ if (WARN_ON(!port_vlan))
+ return;
+
+ mvsw_pr_port_vlan_bridge_leave(port_vlan);
+ mvsw_pr_port_pvid_set(port, pvid);
+ mvsw_pr_port_vlan_destroy(port_vlan);
+}
+
+static int mvsw_pr_port_vlans_del(struct mvsw_pr_port *port,
+ const struct switchdev_obj_port_vlan *vlan)
+{
+ struct mvsw_pr_switch *sw = port->sw;
+ struct net_device *orig_dev = vlan->obj.orig_dev;
+ struct mvsw_pr_bridge_port *br_port;
+ u16 vid;
+
+ if (netif_is_bridge_master(orig_dev))
+ return -EOPNOTSUPP;
+
+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
+ if (WARN_ON(!br_port))
+ return -EINVAL;
+
+ if (!br_port->bridge_device->vlan_enabled)
+ return 0;
+
+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++)
+ mvsw_pr_bridge_port_vlan_del(port, br_port, vid);
+
+ return 0;
+}
+
+static int mvsw_pr_port_obj_del(struct net_device *dev,
+ const struct switchdev_obj *obj)
+{
+	struct mvsw_pr_port *port = netdev_priv(dev);
+	int err = 0;
+
+ switch (obj->id) {
+ case SWITCHDEV_OBJ_ID_PORT_VLAN:
+ err = mvsw_pr_port_vlans_del(port,
+ SWITCHDEV_OBJ_PORT_VLAN(obj));
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+
+ return err;
+}
+
+static int mvsw_pr_port_attr_br_vlan_set(struct mvsw_pr_port *port,
+ struct switchdev_trans *trans,
+ struct net_device *orig_dev,
+ bool vlan_enabled)
+{
+ struct mvsw_pr_switch *sw = port->sw;
+ struct mvsw_pr_bridge_device *bridge_device;
+
+ if (!switchdev_trans_ph_prepare(trans))
+ return 0;
+
+ bridge_device = mvsw_pr_bridge_device_find(sw->bridge, orig_dev);
+ if (WARN_ON(!bridge_device))
+ return -EINVAL;
+
+ if (bridge_device->vlan_enabled == vlan_enabled)
+ return 0;
+
+ netdev_err(bridge_device->dev,
+ "VLAN filtering can't be changed for existing bridge\n");
+ return -EINVAL;
+}
+
+static int mvsw_pr_port_attr_br_flags_set(struct mvsw_pr_port *port,
+ struct switchdev_trans *trans,
+ struct net_device *orig_dev,
+ unsigned long flags)
+{
+ struct mvsw_pr_bridge_port *br_port;
+ int err;
+
+ if (switchdev_trans_ph_prepare(trans))
+ return 0;
+
+ br_port = mvsw_pr_bridge_port_find(port->sw->bridge, orig_dev);
+ if (!br_port)
+ return 0;
+
+ err = mvsw_pr_port_flood_set(port, flags & BR_FLOOD);
+ if (err)
+ return err;
+
+ err = mvsw_pr_port_learning_set(port, flags & BR_LEARNING);
+ if (err)
+ return err;
+
+	br_port->flags = flags;
+ return 0;
+}
+
+static int mvsw_pr_port_attr_br_ageing_set(struct mvsw_pr_port *port,
+ struct switchdev_trans *trans,
+ unsigned long ageing_clock_t)
+{
+	unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t);
+	u32 ageing_time = jiffies_to_msecs(ageing_jiffies) / 1000;
+	struct mvsw_pr_switch *sw = port->sw;
+	int err;
+
+	if (switchdev_trans_ph_prepare(trans)) {
+		if (ageing_time < MVSW_PR_MIN_AGEING_TIME ||
+		    ageing_time > MVSW_PR_MAX_AGEING_TIME)
+			return -ERANGE;
+
+		return 0;
+	}
+
+ err = mvsw_pr_switch_ageing_set(sw, ageing_time);
+ if (!err)
+ sw->bridge->ageing_time = ageing_time;
+
+ return err;
+}
+
+static int mvsw_pr_port_obj_attr_set(struct net_device *dev,
+ const struct switchdev_attr *attr,
+ struct switchdev_trans *trans)
+{
+	struct mvsw_pr_port *port = netdev_priv(dev);
+	int err = 0;
+
+ switch (attr->id) {
+ case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
+ err = -EOPNOTSUPP;
+ break;
+ case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
+ if (attr->u.brport_flags &
+ ~(BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD))
+ err = -EINVAL;
+ break;
+ case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
+ err = mvsw_pr_port_attr_br_flags_set(port, trans,
+ attr->orig_dev,
+ attr->u.brport_flags);
+ break;
+ case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
+ err = mvsw_pr_port_attr_br_ageing_set(port, trans,
+ attr->u.ageing_time);
+ break;
+ case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
+ err = mvsw_pr_port_attr_br_vlan_set(port, trans,
+ attr->orig_dev,
+ attr->u.vlan_filtering);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
+ return err;
+}
+
+static void mvsw_fdb_offload_notify(struct mvsw_pr_port *port,
+ struct switchdev_notifier_fdb_info *info)
+{
+	struct switchdev_notifier_fdb_info send_info = {};
+
+ send_info.addr = info->addr;
+ send_info.vid = info->vid;
+ send_info.offloaded = true;
+ call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED,
+ port->net_dev, &send_info.info, NULL);
+}
+
+static int
+mvsw_pr_port_fdb_set(struct mvsw_pr_port *port,
+ struct switchdev_notifier_fdb_info *fdb_info, bool adding)
+{
+ struct mvsw_pr_switch *sw = port->sw;
+ struct mvsw_pr_bridge_port *br_port;
+ struct mvsw_pr_bridge_device *bridge_device;
+ struct net_device *orig_dev = fdb_info->info.dev;
+ int err;
+ u16 vid;
+
+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
+ if (!br_port)
+ return -EINVAL;
+
+ bridge_device = br_port->bridge_device;
+
+ if (bridge_device->vlan_enabled)
+ vid = fdb_info->vid;
+ else
+ vid = bridge_device->bridge_id;
+
+ if (adding)
+ err = mvsw_pr_fdb_add(port, fdb_info->addr, vid, false);
+ else
+ err = mvsw_pr_fdb_del(port, fdb_info->addr, vid);
+
+ return err;
+}
+
+static void mvsw_pr_bridge_fdb_event_work(struct work_struct *work)
+{
+	struct mvsw_pr_event_work *switchdev_work =
+		container_of(work, struct mvsw_pr_event_work, work);
+	struct net_device *dev = switchdev_work->dev;
+	struct switchdev_notifier_fdb_info *fdb_info;
+	struct mvsw_pr_port *port;
+	int err = 0;
+
+ rtnl_lock();
+ if (netif_is_vxlan(dev))
+ goto out;
+
+ port = mvsw_pr_port_dev_lower_find(dev);
+ if (!port)
+ goto out;
+
+ switch (switchdev_work->event) {
+ case SWITCHDEV_FDB_ADD_TO_DEVICE:
+ fdb_info = &switchdev_work->fdb_info;
+ if (!fdb_info->added_by_user)
+ break;
+ err = mvsw_pr_port_fdb_set(port, fdb_info, true);
+ if (err)
+ break;
+ mvsw_fdb_offload_notify(port, fdb_info);
+ break;
+ case SWITCHDEV_FDB_DEL_TO_DEVICE:
+ fdb_info = &switchdev_work->fdb_info;
+ mvsw_pr_port_fdb_set(port, fdb_info, false);
+ break;
+ case SWITCHDEV_FDB_ADD_TO_BRIDGE:
+ case SWITCHDEV_FDB_DEL_TO_BRIDGE:
+ break;
+ }
+
+out:
+ rtnl_unlock();
+ kfree(switchdev_work->fdb_info.addr);
+ kfree(switchdev_work);
+ dev_put(dev);
+}
+
+static int mvsw_pr_switchdev_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+{
+	struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
+	struct mvsw_pr_event_work *switchdev_work;
+	struct switchdev_notifier_fdb_info *fdb_info;
+	struct switchdev_notifier_info *info = ptr;
+	struct net_device *upper_br;
+	int err = 0;
+
+ if (event == SWITCHDEV_PORT_ATTR_SET) {
+ err = switchdev_handle_port_attr_set(net_dev, ptr,
+ mvsw_pr_netdev_check,
+ mvsw_pr_port_obj_attr_set);
+ return notifier_from_errno(err);
+ }
+
+ upper_br = netdev_master_upper_dev_get_rcu(net_dev);
+ if (!upper_br)
+ return NOTIFY_DONE;
+
+ if (!netif_is_bridge_master(upper_br))
+ return NOTIFY_DONE;
+
+ switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
+ if (!switchdev_work)
+ return NOTIFY_BAD;
+
+ switchdev_work->dev = net_dev;
+ switchdev_work->event = event;
+
+ switch (event) {
+ case SWITCHDEV_FDB_ADD_TO_DEVICE:
+ case SWITCHDEV_FDB_DEL_TO_DEVICE:
+ case SWITCHDEV_FDB_ADD_TO_BRIDGE:
+ case SWITCHDEV_FDB_DEL_TO_BRIDGE:
+ fdb_info = container_of(info,
+ struct switchdev_notifier_fdb_info,
+ info);
+
+ INIT_WORK(&switchdev_work->work, mvsw_pr_bridge_fdb_event_work);
+ memcpy(&switchdev_work->fdb_info, ptr,
+ sizeof(switchdev_work->fdb_info));
+ switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
+ if (!switchdev_work->fdb_info.addr)
+ goto out;
+ ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
+ fdb_info->addr);
+ dev_hold(net_dev);
+
+ break;
+ case SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE:
+ case SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE:
+ default:
+ kfree(switchdev_work);
+ return NOTIFY_DONE;
+ }
+
+ queue_work(mvsw_owq, &switchdev_work->work);
+ return NOTIFY_DONE;
+out:
+ kfree(switchdev_work);
+ return NOTIFY_BAD;
+}
+
+static int mvsw_pr_switchdev_blocking_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+{
+	struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
+	int err = 0;
+
+ switch (event) {
+ case SWITCHDEV_PORT_OBJ_ADD:
+ if (netif_is_vxlan(net_dev)) {
+ err = -EOPNOTSUPP;
+ } else {
+ err = switchdev_handle_port_obj_add
+ (net_dev, ptr, mvsw_pr_netdev_check,
+ mvsw_pr_port_obj_add);
+ }
+ break;
+ case SWITCHDEV_PORT_OBJ_DEL:
+ if (netif_is_vxlan(net_dev)) {
+ err = -EOPNOTSUPP;
+ } else {
+ err = switchdev_handle_port_obj_del
+ (net_dev, ptr, mvsw_pr_netdev_check,
+ mvsw_pr_port_obj_del);
+ }
+ break;
+ case SWITCHDEV_PORT_ATTR_SET:
+ err = switchdev_handle_port_attr_set
+ (net_dev, ptr, mvsw_pr_netdev_check,
+ mvsw_pr_port_obj_attr_set);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
+ return notifier_from_errno(err);
+}
+
+static struct mvsw_pr_bridge_device *
+mvsw_pr_bridge_device_create(struct mvsw_pr_bridge *bridge,
+ struct net_device *br_dev)
+{
+ struct mvsw_pr_bridge_device *bridge_device;
+ bool vlan_enabled = br_vlan_enabled(br_dev);
+ u16 bridge_id;
+ int err;
+
+ if (vlan_enabled && bridge->bridge_8021q_exists) {
+ netdev_err(br_dev, "Only one VLAN-aware bridge is supported\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ bridge_device = kzalloc(sizeof(*bridge_device), GFP_KERNEL);
+ if (!bridge_device)
+ return ERR_PTR(-ENOMEM);
+
+ if (vlan_enabled) {
+ bridge->bridge_8021q_exists = true;
+ } else {
+ err = mvsw_pr_8021d_bridge_create(bridge->sw, &bridge_id);
+ if (err) {
+ kfree(bridge_device);
+ return ERR_PTR(err);
+ }
+
+ bridge_device->bridge_id = bridge_id;
+ }
+
+ bridge_device->dev = br_dev;
+ bridge_device->vlan_enabled = vlan_enabled;
+ bridge_device->multicast_enabled = br_multicast_enabled(br_dev);
+ bridge_device->mrouter = br_multicast_router(br_dev);
+ INIT_LIST_HEAD(&bridge_device->port_list);
+
+ list_add(&bridge_device->bridge_node, &bridge->bridge_list);
+
+ return bridge_device;
+}
+
+static void
+mvsw_pr_bridge_device_destroy(struct mvsw_pr_bridge *bridge,
+ struct mvsw_pr_bridge_device *bridge_device)
+{
+ list_del(&bridge_device->bridge_node);
+ if (bridge_device->vlan_enabled)
+ bridge->bridge_8021q_exists = false;
+ else
+ mvsw_pr_8021d_bridge_delete(bridge->sw,
+ bridge_device->bridge_id);
+
+ WARN_ON(!list_empty(&bridge_device->port_list));
+ kfree(bridge_device);
+}
+
+static struct mvsw_pr_bridge_device *
+mvsw_pr_bridge_device_get(struct mvsw_pr_bridge *bridge,
+ struct net_device *br_dev)
+{
+ struct mvsw_pr_bridge_device *bridge_device;
+
+ bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
+ if (bridge_device)
+ return bridge_device;
+
+ return mvsw_pr_bridge_device_create(bridge, br_dev);
+}
+
+static void
+mvsw_pr_bridge_device_put(struct mvsw_pr_bridge *bridge,
+ struct mvsw_pr_bridge_device *bridge_device)
+{
+ if (list_empty(&bridge_device->port_list))
+ mvsw_pr_bridge_device_destroy(bridge, bridge_device);
+}
+
+static struct mvsw_pr_bridge_port *
+mvsw_pr_bridge_port_create(struct mvsw_pr_bridge_device *bridge_device,
+ struct net_device *brport_dev)
+{
+ struct mvsw_pr_bridge_port *br_port;
+ struct mvsw_pr_port *port;
+
+ br_port = kzalloc(sizeof(*br_port), GFP_KERNEL);
+ if (!br_port)
+ return NULL;
+
+ port = mvsw_pr_port_dev_lower_find(brport_dev);
+
+ br_port->dev = brport_dev;
+ br_port->bridge_device = bridge_device;
+ br_port->stp_state = BR_STATE_DISABLED;
+ br_port->flags = BR_LEARNING | BR_FLOOD | BR_LEARNING_SYNC |
+ BR_MCAST_FLOOD;
+ INIT_LIST_HEAD(&br_port->vlan_list);
+ list_add(&br_port->bridge_device_node, &bridge_device->port_list);
+ br_port->ref_count = 1;
+
+ return br_port;
+}
+
+static void
+mvsw_pr_bridge_port_destroy(struct mvsw_pr_bridge_port *br_port)
+{
+ list_del(&br_port->bridge_device_node);
+ WARN_ON(!list_empty(&br_port->vlan_list));
+ kfree(br_port);
+}
+
+static struct mvsw_pr_bridge_port *
+mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
+ struct net_device *brport_dev)
+{
+ struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
+ struct mvsw_pr_bridge_device *bridge_device;
+ struct mvsw_pr_bridge_port *br_port;
+ int err;
+
+ br_port = mvsw_pr_bridge_port_find(bridge, brport_dev);
+ if (br_port) {
+ br_port->ref_count++;
+ return br_port;
+ }
+
+ bridge_device = mvsw_pr_bridge_device_get(bridge, br_dev);
+ if (IS_ERR(bridge_device))
+ return ERR_CAST(bridge_device);
+
+ br_port = mvsw_pr_bridge_port_create(bridge_device, brport_dev);
+ if (!br_port) {
+ err = -ENOMEM;
+ goto err_brport_create;
+ }
+
+ return br_port;
+
+err_brport_create:
+ mvsw_pr_bridge_device_put(bridge, bridge_device);
+ return ERR_PTR(err);
+}
+
+static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
+ struct mvsw_pr_bridge_port *br_port)
+{
+ struct mvsw_pr_bridge_device *bridge_device;
+
+ if (--br_port->ref_count != 0)
+ return;
+ bridge_device = br_port->bridge_device;
+ mvsw_pr_bridge_port_destroy(br_port);
+ mvsw_pr_bridge_device_put(bridge, bridge_device);
+}
+
+static int
+mvsw_pr_bridge_8021q_port_join(struct mvsw_pr_bridge_device *bridge_device,
+ struct mvsw_pr_bridge_port *br_port,
+ struct mvsw_pr_port *port,
+ struct netlink_ext_ack *extack)
+{
+ if (is_vlan_dev(br_port->dev)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Can not enslave a VLAN device to a VLAN-aware bridge");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+mvsw_pr_bridge_8021d_port_join(struct mvsw_pr_bridge_device *bridge_device,
+ struct mvsw_pr_bridge_port *br_port,
+ struct mvsw_pr_port *port,
+ struct netlink_ext_ack *extack)
+{
+ int err;
+
+ if (is_vlan_dev(br_port->dev)) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Enslaving of a VLAN device is not supported");
+ return -EOPNOTSUPP;
+ }
+ err = mvsw_pr_8021d_bridge_port_add(port, bridge_device->bridge_id);
+ if (err)
+ return err;
+
+ err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
+ if (err)
+ goto err_port_flood_set;
+
+ err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
+ if (err)
+ goto err_port_learning_set;
+
+ return err;
+
+err_port_learning_set:
+ mvsw_pr_port_flood_set(port, false);
+err_port_flood_set:
+ mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
+ return err;
+}
+
+static int mvsw_pr_port_bridge_join(struct mvsw_pr_port *port,
+ struct net_device *brport_dev,
+ struct net_device *br_dev,
+ struct netlink_ext_ack *extack)
+{
+ struct mvsw_pr_bridge_device *bridge_device;
+ struct mvsw_pr_switch *sw = port->sw;
+ struct mvsw_pr_bridge_port *br_port;
+ int err;
+
+ br_port = mvsw_pr_bridge_port_get(sw->bridge, brport_dev);
+ if (IS_ERR(br_port))
+ return PTR_ERR(br_port);
+
+ bridge_device = br_port->bridge_device;
+
+ if (bridge_device->vlan_enabled) {
+ err = mvsw_pr_bridge_8021q_port_join(bridge_device, br_port,
+ port, extack);
+ } else {
+ err = mvsw_pr_bridge_8021d_port_join(bridge_device, br_port,
+ port, extack);
+ }
+
+ if (err)
+ goto err_port_join;
+
+ return 0;
+
+err_port_join:
+ mvsw_pr_bridge_port_put(sw->bridge, br_port);
+ return err;
+}
+
+static void
+mvsw_pr_bridge_8021d_port_leave(struct mvsw_pr_bridge_device *bridge_device,
+ struct mvsw_pr_bridge_port *br_port,
+ struct mvsw_pr_port *port)
+{
+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
+ mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
+}
+
+static void
+mvsw_pr_bridge_8021q_port_leave(struct mvsw_pr_bridge_device *bridge_device,
+ struct mvsw_pr_bridge_port *br_port,
+ struct mvsw_pr_port *port)
+{
+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
+ mvsw_pr_port_pvid_set(port, MVSW_PR_DEFAULT_VID);
+}
+
+static void mvsw_pr_port_bridge_leave(struct mvsw_pr_port *port,
+ struct net_device *brport_dev,
+ struct net_device *br_dev)
+{
+ struct mvsw_pr_switch *sw = port->sw;
+ struct mvsw_pr_bridge_device *bridge_device;
+ struct mvsw_pr_bridge_port *br_port;
+
+ bridge_device = mvsw_pr_bridge_device_find(sw->bridge, br_dev);
+ if (!bridge_device)
+ return;
+ br_port = __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
+ if (!br_port)
+ return;
+
+ if (bridge_device->vlan_enabled)
+ mvsw_pr_bridge_8021q_port_leave(bridge_device, br_port, port);
+ else
+ mvsw_pr_bridge_8021d_port_leave(bridge_device, br_port, port);
+
+ mvsw_pr_port_learning_set(port, false);
+ mvsw_pr_port_flood_set(port, false);
+ mvsw_pr_bridge_port_put(sw->bridge, br_port);
+}
+
+static int mvsw_pr_netdevice_port_upper_event(struct net_device *lower_dev,
+ struct net_device *dev,
+ unsigned long event, void *ptr)
+{
+ struct netdev_notifier_changeupper_info *info;
+ struct mvsw_pr_port *port;
+ struct netlink_ext_ack *extack;
+ struct net_device *upper_dev;
+ struct mvsw_pr_switch *sw;
+ int err = 0;
+
+ port = netdev_priv(dev);
+ sw = port->sw;
+ info = ptr;
+ extack = netdev_notifier_info_to_extack(&info->info);
+
+ switch (event) {
+ case NETDEV_PRECHANGEUPPER:
+ upper_dev = info->upper_dev;
+ if (!netif_is_bridge_master(upper_dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type");
+ return -EINVAL;
+ }
+ if (!info->linking)
+ break;
+ if (netdev_has_any_upper_dev(upper_dev) &&
+ (!netif_is_bridge_master(upper_dev) ||
+ !mvsw_pr_bridge_device_is_offloaded(sw, upper_dev))) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Enslaving a port to a device that already has an upper device is not supported");
+ return -EINVAL;
+ }
+ break;
+ case NETDEV_CHANGEUPPER:
+ upper_dev = info->upper_dev;
+ if (netif_is_bridge_master(upper_dev)) {
+ if (info->linking)
+ err = mvsw_pr_port_bridge_join(port,
+ lower_dev,
+ upper_dev,
+ extack);
+ else
+ mvsw_pr_port_bridge_leave(port,
+ lower_dev,
+ upper_dev);
+ }
+ break;
+ }
+
+ return err;
+}
+
+static int mvsw_pr_netdevice_port_event(struct net_device *lower_dev,
+ struct net_device *port_dev,
+ unsigned long event, void *ptr)
+{
+ switch (event) {
+ case NETDEV_PRECHANGEUPPER:
+ case NETDEV_CHANGEUPPER:
+ return mvsw_pr_netdevice_port_upper_event(lower_dev, port_dev,
+ event, ptr);
+ }
+
+ return 0;
+}
+
+static int mvsw_pr_netdevice_event(struct notifier_block *nb,
+ unsigned long event, void *ptr)
+{
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ struct mvsw_pr_switch *sw;
+ int err = 0;
+
+ sw = container_of(nb, struct mvsw_pr_switch, netdevice_nb);
+
+ if (mvsw_pr_netdev_check(dev))
+ err = mvsw_pr_netdevice_port_event(dev, dev, event, ptr);
+
+ return notifier_from_errno(err);
+}
+
+static int mvsw_pr_fdb_init(struct mvsw_pr_switch *sw)
+{
+ int err;
+
+ err = mvsw_pr_switch_ageing_set(sw, MVSW_PR_DEFAULT_AGEING_TIME);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int mvsw_pr_switchdev_init(struct mvsw_pr_switch *sw)
+{
+ int err = 0;
+ struct mvsw_pr_switchdev *swdev;
+ struct mvsw_pr_bridge *bridge;
+
+ if (sw->switchdev)
+ return -EPERM;
+
+ bridge = kzalloc(sizeof(*sw->bridge), GFP_KERNEL);
+ if (!bridge)
+ return -ENOMEM;
+
+ swdev = kzalloc(sizeof(*sw->switchdev), GFP_KERNEL);
+ if (!swdev) {
+ kfree(bridge);
+ return -ENOMEM;
+ }
+
+ sw->bridge = bridge;
+ bridge->sw = sw;
+ sw->switchdev = swdev;
+ swdev->sw = sw;
+
+ INIT_LIST_HEAD(&sw->bridge->bridge_list);
+
+ mvsw_owq = alloc_ordered_workqueue("%s_ordered", 0, "prestera_sw");
+ if (!mvsw_owq) {
+ err = -ENOMEM;
+ goto err_alloc_workqueue;
+ }
+
+ swdev->swdev_n.notifier_call = mvsw_pr_switchdev_event;
+ err = register_switchdev_notifier(&swdev->swdev_n);
+ if (err)
+ goto err_register_switchdev_notifier;
+
+ swdev->swdev_blocking_n.notifier_call =
+ mvsw_pr_switchdev_blocking_event;
+ err = register_switchdev_blocking_notifier(&swdev->swdev_blocking_n);
+ if (err)
+ goto err_register_block_switchdev_notifier;
+
+ mvsw_pr_fdb_init(sw);
+
+ return 0;
+
+err_register_block_switchdev_notifier:
+ unregister_switchdev_notifier(&swdev->swdev_n);
+err_register_switchdev_notifier:
+ destroy_workqueue(mvsw_owq);
+err_alloc_workqueue:
+ kfree(swdev);
+ kfree(bridge);
+ return err;
+}
+
+static void mvsw_pr_switchdev_fini(struct mvsw_pr_switch *sw)
+{
+ if (!sw->switchdev)
+ return;
+
+ unregister_switchdev_notifier(&sw->switchdev->swdev_n);
+ unregister_switchdev_blocking_notifier
+ (&sw->switchdev->swdev_blocking_n);
+ flush_workqueue(mvsw_owq);
+ destroy_workqueue(mvsw_owq);
+ kfree(sw->switchdev);
+ sw->switchdev = NULL;
+ kfree(sw->bridge);
+}
+
+static int mvsw_pr_netdev_init(struct mvsw_pr_switch *sw)
+{
+ int err = 0;
+
+ if (sw->netdevice_nb.notifier_call)
+ return -EPERM;
+
+ sw->netdevice_nb.notifier_call = mvsw_pr_netdevice_event;
+ err = register_netdevice_notifier(&sw->netdevice_nb);
+ return err;
+}
+
+static void mvsw_pr_netdev_fini(struct mvsw_pr_switch *sw)
+{
+ if (sw->netdevice_nb.notifier_call)
+ unregister_netdevice_notifier(&sw->netdevice_nb);
+}
+
+int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw)
+{
+ int err;
+
+ err = mvsw_pr_switchdev_init(sw);
+ if (err)
+ return err;
+
+ err = mvsw_pr_netdev_init(sw);
+ if (err)
+ goto err_netdevice_notifier;
+
+ return 0;
+
+err_netdevice_notifier:
+ mvsw_pr_switchdev_fini(sw);
+ return err;
+}
+
+void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw)
+{
+ mvsw_pr_netdev_fini(sw);
+ mvsw_pr_switchdev_fini(sw);
+}
--
2.17.1

2020-02-25 16:33:24

by Vadym Kochan

Subject: [RFC net-next 3/3] dt-bindings: marvell,prestera: Add address mapping for Prestera Switchdev PCIe driver

Document the requirement for the PCI port connected to the ASIC, to allow
access to the firmware-related registers.

Signed-off-by: Vadym Kochan <[email protected]>
---
.../devicetree/bindings/net/marvell,prestera.txt | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/marvell,prestera.txt b/Documentation/devicetree/bindings/net/marvell,prestera.txt
index 83370ebf5b89..103c35cfa8a7 100644
--- a/Documentation/devicetree/bindings/net/marvell,prestera.txt
+++ b/Documentation/devicetree/bindings/net/marvell,prestera.txt
@@ -45,3 +45,16 @@ dfx-server {
ranges = <0 MBUS_ID(0x08, 0x00) 0 0x100000>;
reg = <MBUS_ID(0x08, 0x00) 0 0x100000>;
};
+
+Marvell Prestera SwitchDev bindings
+-----------------------------------
+The current implementation of the Prestera Switchdev PCI interface driver
+requires that BAR2 be assigned 0xf6000000 as its base address from the PCI IO
+range:
+
+&cp0_pcie0 {
+ ranges = <0x81000000 0x0 0xfb000000 0x0 0xfb000000 0x0 0xf0000
+ 0x82000000 0x0 0xf6000000 0x0 0xf6000000 0x0 0x2000000
+ 0x82000000 0x0 0xf9000000 0x0 0xf9000000 0x0 0x100000>;
+ phys = <&cp0_comphy0 0>;
+ status = "okay";
+};
--
2.17.1

2020-02-25 20:49:50

by Andrew Lunn

Subject: Re: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

On Tue, Feb 25, 2020 at 04:30:55PM +0000, Vadym Kochan wrote:
> Add PCI interface driver for Prestera Switch ASICs family devices, which
> provides:
>
> - Firmware loading mechanism
> - Requests & events handling to/from the firmware
> - Access to the firmware on the bus level
>
> The firmware has to be loaded each time device is reset. The driver is
> loading it from:
>
> /lib/firmware/marvell/prestera_fw_img.bin
>
> The firmware image version is located within internal header and consists
> of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
> minimum supported firmware version which it can work with:
>
> MAJOR - reflects the support on ABI level between driver and loaded
> firmware, this number should be the same for driver and loaded
> firmware.
>
> MINOR - this is the minimal supported version between driver and the
> firmware.
>
> PATCH - indicates only fixes, firmware ABI is not changed.
>
> Signed-off-by: Vadym Kochan <[email protected]>
> Signed-off-by: Oleksandr Mazur <[email protected]>

Nice to see a driver for this hardware.

> +#define mvsw_wait_timeout(cond, waitms) \
> +({ \
> + unsigned long __wait_end = jiffies + msecs_to_jiffies(waitms); \
> + bool __wait_ret = false; \
> + do { \
> + if (cond) { \
> + __wait_ret = true; \
> + break; \
> + } \
> + cond_resched(); \
> + } while (time_before(jiffies, __wait_end)); \
> + __wait_ret; \
> +})

Please try to use include/linux/iopoll.h

> +#define FW_VER_MIN(v) \
> + (((v) - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL)) / FW_VER_MIN_MUL)
> +
> +#define FW_VER_PATCH(v) \
> + (v - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL) - (FW_VER_MIN(v) * FW_VER_MIN_MUL))
> +
> +#define mvsw_ldr_write(fw, reg, val) \
> + writel(val, (fw)->ldr_regs + (reg))
> +#define mvsw_ldr_read(fw, reg) \
> + readl((fw)->ldr_regs + (reg))

You have a lot of macro magic in this file. Please try to replace most
of it with simple functions. The compiler will inline it, giving you
the same performance, but you gain better type checking.
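As a sketch of that suggestion, the FW_VER_* decode macros could become
inline functions. The real FW_VER_MAJ_MUL/FW_VER_MIN_MUL values are not
visible in this hunk; 1000000 and 1000 are assumed here purely for the
example:

```c
#include <assert.h>

/* Assumed multipliers for a MAJOR.MINOR.PATCH version packed into one
 * integer; the driver's actual constants may differ. */
#define FW_VER_MAJ_MUL 1000000u
#define FW_VER_MIN_MUL 1000u

static inline unsigned int fw_ver_maj(unsigned int v)
{
	return v / FW_VER_MAJ_MUL;
}

static inline unsigned int fw_ver_min(unsigned int v)
{
	/* same value as (v - maj * MAJ_MUL) / MIN_MUL in the macro form */
	return (v % FW_VER_MAJ_MUL) / FW_VER_MIN_MUL;
}

static inline unsigned int fw_ver_patch(unsigned int v)
{
	return v % FW_VER_MIN_MUL;
}
```

With functions like these, passing a wrong argument type fails at compile
time instead of being silently accepted, which is the type-checking benefit
mentioned above.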

> +#define mvsw_fw_dev(fw) ((fw)->dev.dev)

Macros like this don't bring much value.


> +static int mvsw_pr_fw_hdr_parse(struct mvsw_pr_fw *fw,
> + const struct firmware *img)
> +{
> + struct mvsw_pr_fw_header *hdr = (struct mvsw_pr_fw_header *)img->data;
> + struct mvsw_fw_rev *rev = &fw->dev.fw_rev;
> + u32 magic;
> +
> + magic = be32_to_cpu(hdr->magic_number);
> + if (magic != MVSW_FW_HDR_MAGIC) {
> + dev_err(mvsw_fw_dev(fw), "FW img type is invalid");
> + return -EINVAL;
> + }
> +
> + mvsw_pr_fw_rev_parse(hdr, rev);
> +
> + dev_info(mvsw_fw_dev(fw), "FW version '%u.%u.%u'\n",
> + rev->maj, rev->min, rev->sub);

ethtool can return this. Don't spam the kernel log.

> +
> + return mvsw_pr_fw_rev_check(fw);
> +}
> +
> +static int mvsw_pr_fw_load(struct mvsw_pr_fw *fw)
> +{
> + size_t hlen = sizeof(struct mvsw_pr_fw_header);
> + const struct firmware *f;
> + bool has_ldr;
> + int err;
> +
> + has_ldr = mvsw_wait_timeout(mvsw_pr_ldr_is_ready(fw), 1000);
> + if (!has_ldr) {
> + dev_err(mvsw_fw_dev(fw), "waiting for FW loader is timed out");
> + return -ETIMEDOUT;
> + }
> +
> + fw->ldr_ring_buf = fw->ldr_regs +
> + mvsw_ldr_read(fw, MVSW_LDR_BUF_OFFS_REG);
> +
> + fw->ldr_buf_len =
> + mvsw_ldr_read(fw, MVSW_LDR_BUF_SIZE_REG);
> +
> + fw->ldr_wr_idx = 0;
> +
> + err = request_firmware_direct(&f, MVSW_FW_FILENAME, &fw->pci_dev->dev);
> + if (err) {
> + dev_err(mvsw_fw_dev(fw), "failed to request firmware file\n");
> + return err;
> + }
> +
> + if (!IS_ALIGNED(f->size, 4)) {
> + dev_err(mvsw_fw_dev(fw), "FW image file is not aligned");
> + release_firmware(f);
> + return -EINVAL;
> + }

This is a size, so it has nothing to do with alignment. Do you mean truncated?
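For context, IS_ALIGNED(f->size, 4) only verifies that the byte count is a
multiple of four. A simplified userspace model of the kernel macro (the real
definition also casts through typeof):

```c
#include <assert.h>

/* True when x is a multiple of a, for power-of-two a. Applied to a file
 * size, it catches a length that is not word-sized - e.g. a truncated
 * image - not a misaligned pointer. */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)
```

So the error message would be more accurate if it talked about the image
size (e.g. a truncated file) rather than alignment.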


> +
> + err = mvsw_pr_fw_hdr_parse(fw, f);
> + if (err) {
> + dev_err(mvsw_fw_dev(fw), "FW image header is invalid\n");
> + release_firmware(f);
> + return err;
> + }
> +
> + mvsw_ldr_write(fw, MVSW_LDR_IMG_SIZE_REG, f->size - hlen);
> + mvsw_ldr_write(fw, MVSW_LDR_CTL_REG, MVSW_LDR_CTL_DL_START);
> +
> + dev_info(mvsw_fw_dev(fw), "Loading prestera FW image ...");

How slow is this? If the machine is blocked for 10 minutes doing
nothing but loading firmware, maybe it would be good to spam the
console, but otherwise....

> +
> + err = mvsw_pr_ldr_send(fw, f->data + hlen, f->size - hlen);
> +
> + release_firmware(f);
> + return err;
> +}
> +
> +static int mvsw_pr_pci_probe(struct pci_dev *pdev,
> + const struct pci_device_id *id)
> +{
> + const char *driver_name = pdev->driver->name;
> + struct mvsw_pr_fw *fw;
> + u8 __iomem *mem_addr;
> + int err;
> +
> + err = pci_enable_device(pdev);
> + if (err) {
> + dev_err(&pdev->dev, "pci_enable_device failed\n");
> + goto err_pci_enable_device;
> + }
> +
> + err = pci_request_regions(pdev, driver_name);
> + if (err) {
> + dev_err(&pdev->dev, "pci_request_regions failed\n");
> + goto err_pci_request_regions;
> + }
> +
> + mem_addr = pci_ioremap_bar(pdev, 2);
> + if (!mem_addr) {
> + dev_err(&pdev->dev, "ioremap failed\n");
> + err = -EIO;
> + goto err_ioremap;
> + }
> +
> + pci_set_master(pdev);
> +
> + fw = kzalloc(sizeof(*fw), GFP_KERNEL);
> + if (!fw) {
> + err = -ENOMEM;
> + goto err_pci_dev_alloc;
> + }
> +
> + fw->pci_dev = pdev;
> + fw->dev.dev = &pdev->dev;
> + fw->dev.send_req = mvsw_pr_fw_send_req;
> + fw->mem_addr = mem_addr;
> + fw->ldr_regs = mem_addr;
> + fw->hw_regs = mem_addr;
> +
> + fw->wq = alloc_workqueue("mvsw_fw_wq", WQ_HIGHPRI, 1);
> + if (!fw->wq)
> + goto err_wq_alloc;
> +
> + INIT_WORK(&fw->evt_work, mvsw_pr_fw_evt_work_fn);
> +
> + err = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
> + if (err < 0) {
> + dev_err(&pdev->dev, "MSI IRQ init failed\n");
> + goto err_irq_alloc;
> + }
> +
> + err = request_irq(pci_irq_vector(pdev, 0), mvsw_pci_irq_handler,
> + 0, driver_name, fw);
> + if (err) {
> + dev_err(&pdev->dev, "fail to request IRQ\n");
> + goto err_request_irq;
> + }
> +
> + pci_set_drvdata(pdev, fw);
> +
> + err = mvsw_pr_fw_init(fw);
> + if (err)
> + goto err_mvsw_fw_init;
> +
> + dev_info(mvsw_fw_dev(fw), "Prestera Switch FW is ready\n");
> +
> + err = mvsw_pr_device_register(&fw->dev);
> + if (err)
> + goto err_mvsw_dev_register;
> +
> + return 0;
> +
> +err_mvsw_dev_register:
> + mvsw_pr_fw_uninit(fw);
> +err_mvsw_fw_init:
> + free_irq(pci_irq_vector(pdev, 0), fw);
> +err_request_irq:
> + pci_free_irq_vectors(pdev);
> +err_irq_alloc:
> + destroy_workqueue(fw->wq);
> +err_wq_alloc:
> + kfree(fw);
> +err_pci_dev_alloc:
> + iounmap(mem_addr);
> +err_ioremap:
> + pci_release_regions(pdev);
> +err_pci_request_regions:
> + pci_disable_device(pdev);
> +err_pci_enable_device:
> + return err;
> +}
> +
> +static void mvsw_pr_pci_remove(struct pci_dev *pdev)
> +{
> + struct mvsw_pr_fw *fw = pci_get_drvdata(pdev);
> +
> + free_irq(pci_irq_vector(pdev, 0), fw);
> + pci_free_irq_vectors(pdev);
> + mvsw_pr_device_unregister(&fw->dev);
> + flush_workqueue(fw->wq);
> + destroy_workqueue(fw->wq);
> + mvsw_pr_fw_uninit(fw);
> + iounmap(fw->mem_addr);
> + pci_release_regions(pdev);
> + pci_disable_device(pdev);
> + kfree(fw);
> +}
> +


> +static int __init mvsw_pr_pci_init(void)
> +{
> + struct mvsw_pr_pci_match *match;
> + int err;
> +
> + for (match = mvsw_pci_devices; match->driver.name; match++) {
> + match->driver.probe = mvsw_pr_pci_probe;
> + match->driver.remove = mvsw_pr_pci_remove;
> + match->driver.id_table = &match->id;
> +
> + err = pci_register_driver(&match->driver);
> + if (err) {
> + pr_err("prestera_pci: failed to register %s\n",
> + match->driver.name);
> + break;
> + }
> +
> + match->registered = true;
> + }

Please don't reinvent the wheel. You should just do something like:

static const struct pci_device_id ilo_devices[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_COMPAQ, 0xB204) },
{ PCI_DEVICE(PCI_VENDOR_ID_HP, 0x3307) },
{ }
};
MODULE_DEVICE_TABLE(pci, ilo_devices);

static struct pci_driver ilo_driver = {
.name = ILO_NAME,
.id_table = ilo_devices,
.probe = ilo_probe,
.remove = ilo_remove,
};

error = pci_register_driver(&ilo_driver);

If you need extra information, you can make use of pci_device_id:driver_data.
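As a sketch of the driver_data approach: the ID table entry carries an index
(or pointer) into a per-variant description array, so a single pci_driver
covers several ASIC flavors. All the names, IDs, and values below are
invented for illustration:

```c
#include <assert.h>
#include <string.h>

/* Minimal stand-in for the relevant pci_device_id fields. */
struct mock_pci_id {
	unsigned int vendor, device;
	unsigned long driver_data;	/* index into mvsw_descs[] */
};

/* Hypothetical per-variant data the probe routine would need. */
struct mvsw_desc {
	const char *name;
	unsigned int port_count;
};

static const struct mvsw_desc mvsw_descs[] = {
	{ "98DX3255", 24 },
	{ "98DX3265", 24 },
};

/* In a real probe(), id comes from the matched pci_device_id entry. */
static const struct mvsw_desc *mvsw_desc_get(const struct mock_pci_id *id)
{
	return &mvsw_descs[id->driver_data];
}
```

This removes the per-device pci_driver array entirely: one table, one driver,
and the variant-specific bits resolved at probe time.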

> + if (err) {
> + for (match = mvsw_pci_devices; match->driver.name; match++) {
> + if (!match->registered)
> + break;
> +
> + pci_unregister_driver(&match->driver);
> + }
> +
> + return err;
> + }
> +
> + pr_info("prestera_pci: Registered Marvell Prestera PCI driver\n");

Don't spam the kernel log.

> +static void __exit mvsw_pr_pci_exit(void)
> +{
> + struct mvsw_pr_pci_match *match;
> +
> + for (match = mvsw_pci_devices; match->driver.name; match++) {
> + if (!match->registered)
> + break;
> +
> + pci_unregister_driver(&match->driver);
> + }
> +
> + pr_info("prestera_pci: Unregistered Marvell Prestera PCI driver\n");

More spamming of the kernel log.

Andrew

2020-02-25 22:05:54

by Andrew Lunn

Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

> +static int mvsw_pr_port_obj_attr_set(struct net_device *dev,
> + const struct switchdev_attr *attr,
> + struct switchdev_trans *trans)
> +{
> + int err = 0;
> + struct mvsw_pr_port *port = netdev_priv(dev);
> +
> + switch (attr->id) {
> + case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
> + err = -EOPNOTSUPP;
> + break;

That is interesting. Is the Linux bridge happy with this? Particularly
when you have other interfaces in the Linux SW bridge, which cause a
loop via the switch ports? I assume the network then dies in a
broadcast storm, since there is nothing Linux can do to solve the
loop.

Andrew

2020-02-25 22:12:37

by Andrew Lunn

Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

> CPU RX/TX support will be provided in the next contribution.

Hi Vadym

This is a core feature which needs to be in the first version merged
into the kernel. Basically, the driver first needs to offer 24
individual interfaces which can send and receive packets. The Linux
stack does everything else. You then add offloads, like bridges,
vlans, etc, allowing the hardware to accelerate what Linux is doing.

Andrew

2020-02-25 22:45:38

by Chris Packham

Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

Hi Vadym,

On Tue, 2020-02-25 at 16:30 +0000, Vadym Kochan wrote:
> Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> wireless SMB deployment.
>
> Prestera Switchdev is a firmware based driver which operates via PCI
> bus. The driver is split into 2 modules:
>
> - prestera_sw.ko - main generic Switchdev Prestera ASIC related logic.
>
> - prestera_pci.ko - bus specific code which also implements firmware
> loading and low-level messaging protocol between
> firmware and the switchdev driver.
>
> This driver implementation includes only L1 & basic L2 support.
>
> The core Prestera switching logic is implemented in prestera.c, there is
> an intermediate hw layer between core logic and firmware. It is
> implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> related logic, in future there is a plan to support more devices with
> different HW related configurations.

Very excited by this patch series. We have some custom designs using
the AC3x. I'm in the process of getting the board dtses ready for
submitting upstream.

Please feel free to add me to the Cc list for future versions of this
patch set (and related ones).

I'll also look to see what we can do to test on our hardware platforms.

>
> The firmware has to be loaded each time device is reset. The driver is
> loading it from:
>
> /lib/firmware/marvell/prestera_fw_img.bin
>
> The firmware image version is located within internal header and consists
> of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
> minimum supported firmware version which it can work with:
>
> MAJOR - reflects the support on ABI level between driver and loaded
> firmware, this number should be the same for driver and
> loaded firmware.
>
> MINOR - this is the minimal supported version between driver and the
> firmware.
>
> PATCH - indicates only fixes, firmware ABI is not changed.
>
> The firmware image will be submitted to the linux-firmware after the
> driver is accepted.
>
> The following Switchdev features are supported:
>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
> CPU RX/TX support will be provided in the next contribution.
>
> Vadym Kochan (3):
> net: marvell: prestera: Add Switchdev driver for Prestera family ASIC
> device 98DX325x (AC3x)
> net: marvell: prestera: Add PCI interface support
> dt-bindings: marvell,prestera: Add address mapping for Prestera
> Switchdev PCIe driver
>
> .../bindings/net/marvell,prestera.txt | 13 +
> drivers/net/ethernet/marvell/Kconfig | 1 +
> drivers/net/ethernet/marvell/Makefile | 1 +
> drivers/net/ethernet/marvell/prestera/Kconfig | 24 +
> .../net/ethernet/marvell/prestera/Makefile | 5 +
> .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> .../marvell/prestera/prestera_drv_ver.h | 23 +
> .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> .../ethernet/marvell/prestera/prestera_pci.c | 840 +++++++++
> .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> 12 files changed, 5123 insertions(+)
> create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>

2020-02-26 15:46:44

by Jiri Pirko

Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

Tue, Feb 25, 2020 at 05:30:52PM CET, [email protected] wrote:
>Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>wireless SMB deployment.
>
>Prestera Switchdev is a firmware based driver which operates via PCI
>bus. The driver is split into 2 modules:
>
> - prestera_sw.ko - main generic Switchdev Prestera ASIC related logic.
>
> - prestera_pci.ko - bus specific code which also implements firmware

It is unusual to see ".ko" in a patchset cover letter...


> loading and low-level messaging protocol between
> firmware and the switchdev driver.
>
>This driver implementation includes only L1 & basic L2 support.
>
>The core Prestera switching logic is implemented in prestera.c, there is
>an intermediate hw layer between core logic and firmware. It is
>implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>related logic, in future there is a plan to support more devices with
>different HW related configurations.
>
>The firmware has to be loaded each time device is reset. The driver is
>loading it from:
>
> /lib/firmware/marvell/prestera_fw_img.bin
>
>The firmware image version is located within internal header and consists
>of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
>minimum supported firmware version which it can work with:
>
> MAJOR - reflects the support on ABI level between driver and loaded
> firmware, this number should be the same for driver and
> loaded firmware.
>
> MINOR - this is the minimal supported version between driver and the
> firmware.
>
> PATCH - indicates only fixes, firmware ABI is not changed.
>

It is usual that the file name contains a version. I think it is
good to make sure you are loading the version your driver is compatible
with. There could be multiple versions for multiple kernels.
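The suggestion amounts to encoding the compatible MAJOR (and possibly MINOR)
into the requested path. A minimal sketch, with an assumed naming scheme
(the directory and format string below are illustrative, not the driver's
actual path):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a versioned firmware path so each driver generation requests an
 * ABI-compatible image; several versions can then coexist under
 * /lib/firmware for different kernels. */
static void fw_path_build(char *buf, size_t len,
			  unsigned int maj, unsigned int min)
{
	snprintf(buf, len, "marvell/prestera_fw-v%u.%u.img", maj, min);
}
```

The driver would then pass the built path to request_firmware_direct()
instead of a fixed filename.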


>The firmware image will be submitted to the linux-firmware after the
>driver is accepted.

Hmm, not sure how this works, shouldn't it be submitted there first?



>
>The following Switchdev features are supported:

You don't need to mention "Switchdev". It is just an offloading layer for
the bridge. It does not mean anything else now...


>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
>CPU RX/TX support will be provided in the next contribution.
>
>Vadym Kochan (3):
> net: marvell: prestera: Add Switchdev driver for Prestera family ASIC
> device 98DX325x (AC3x)
> net: marvell: prestera: Add PCI interface support
> dt-bindings: marvell,prestera: Add address mapping for Prestera
> Switchdev PCIe driver
>
> .../bindings/net/marvell,prestera.txt | 13 +
> drivers/net/ethernet/marvell/Kconfig | 1 +
> drivers/net/ethernet/marvell/Makefile | 1 +
> drivers/net/ethernet/marvell/prestera/Kconfig | 24 +
> .../net/ethernet/marvell/prestera/Makefile | 5 +
> .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> .../marvell/prestera/prestera_drv_ver.h | 23 +
> .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> .../ethernet/marvell/prestera/prestera_pci.c | 840 +++++++++
> .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> 12 files changed, 5123 insertions(+)
> create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>
>--
>2.17.1
>

2020-02-26 15:55:21

by Jiri Pirko

Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
>Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>wireless SMB deployment.
>
>This driver implementation includes only L1 & basic L2 support.
>
>The core Prestera switching logic is implemented in prestera.c, there is
>an intermediate hw layer between core logic and firmware. It is
>implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>related logic, in future there is a plan to support more devices with
>different HW related configurations.
>
>The following Switchdev features are supported:
>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
>Signed-off-by: Vadym Kochan <[email protected]>
>Signed-off-by: Andrii Savka <[email protected]>
>Signed-off-by: Oleksandr Mazur <[email protected]>
>Signed-off-by: Serhiy Boiko <[email protected]>
>Signed-off-by: Serhiy Pshyk <[email protected]>
>Signed-off-by: Taras Chornyi <[email protected]>
>Signed-off-by: Volodymyr Mytnyk <[email protected]>
>---
> drivers/net/ethernet/marvell/Kconfig | 1 +
> drivers/net/ethernet/marvell/Makefile | 1 +
> drivers/net/ethernet/marvell/prestera/Kconfig | 13 +
> .../net/ethernet/marvell/prestera/Makefile | 3 +
> .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> .../marvell/prestera/prestera_drv_ver.h | 23 +
> .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> 10 files changed, 4257 insertions(+)
> create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>
>diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
>index 3d5caea096fb..74313d9e1fc0 100644
>--- a/drivers/net/ethernet/marvell/Kconfig
>+++ b/drivers/net/ethernet/marvell/Kconfig
>@@ -171,5 +171,6 @@ config SKY2_DEBUG
>
>
> source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
>+source "drivers/net/ethernet/marvell/prestera/Kconfig"
>
> endif # NET_VENDOR_MARVELL
>diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile
>index 89dea7284d5b..9f88fe822555 100644
>--- a/drivers/net/ethernet/marvell/Makefile
>+++ b/drivers/net/ethernet/marvell/Makefile
>@@ -12,3 +12,4 @@ obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
> obj-$(CONFIG_SKGE) += skge.o
> obj-$(CONFIG_SKY2) += sky2.o
> obj-y += octeontx2/
>+obj-y += prestera/
>diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
>new file mode 100644
>index 000000000000..d0b416dcb677
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
>@@ -0,0 +1,13 @@
>+# SPDX-License-Identifier: GPL-2.0-only
>+#
>+# Marvell Prestera drivers configuration
>+#
>+
>+config PRESTERA
>+ tristate "Marvell Prestera Switch ASICs support"
>+ depends on NET_SWITCHDEV && VLAN_8021Q
>+ ---help---
>+ This driver supports Marvell Prestera Switch ASICs family.
>+
>+ To compile this driver as a module, choose M here: the
>+ module will be called prestera_sw.
>diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
>new file mode 100644
>index 000000000000..9446298fb7f4
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/Makefile
>@@ -0,0 +1,3 @@
>+# SPDX-License-Identifier: GPL-2.0
>+obj-$(CONFIG_PRESTERA) += prestera_sw.o
>+prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera.c b/drivers/net/ethernet/marvell/prestera/prestera.c
>new file mode 100644
>index 000000000000..12d0eb590bbb
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera.c
>@@ -0,0 +1,1502 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+#include <linux/kernel.h>
>+#include <linux/module.h>
>+#include <linux/list.h>
>+#include <linux/netdevice.h>
>+#include <linux/netdev_features.h>
>+#include <linux/etherdevice.h>
>+#include <linux/ethtool.h>
>+#include <linux/jiffies.h>
>+#include <net/switchdev.h>
>+
>+#include "prestera.h"
>+#include "prestera_hw.h"
>+#include "prestera_drv_ver.h"
>+
>+#define MVSW_PR_MTU_DEFAULT 1536
>+
>+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
>+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))

Keep the prefix for all defines within the file. "PORT_STATS_CNT"
looks way too generic at first glance.


>+#define PORT_STATS_IDX(name) \
>+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
>+#define PORT_STATS_FIELD(name) \
>+ [PORT_STATS_IDX(name)] = __stringify(name)
>+
>+static struct list_head switches_registered;
>+
>+static const char mvsw_driver_kind[] = "prestera_sw";

Please be consistent. Make your prefixes, names, and filenames the same.
For example:
prestera_driver_kind[] = "prestera";

Applied to the whole code.


>+static const char mvsw_driver_name[] = "mvsw_switchdev";

Why is this different from kind?

Also, don't mention "switchdev" anywhere.


>+static const char mvsw_driver_version[] = PRESTERA_DRV_VER;

[...]


>+static void mvsw_pr_port_remote_cap_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u64 bitmap;
>+
>+ if (!mvsw_pr_hw_port_remote_cap_get(port, &bitmap)) {
>+ mvsw_modes_to_eth(ecmd->link_modes.lp_advertising,
>+ bitmap, 0, MVSW_PORT_TYPE_NONE);
>+ }

Don't use {} around a single statement. checkpatch.pl should warn you
about this.



>+}
>+
>+static void mvsw_pr_port_duplex_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u8 duplex;
>+
>+ if (!mvsw_pr_hw_port_duplex_get(port, &duplex)) {
>+ ecmd->base.duplex = duplex == MVSW_PORT_DUPLEX_FULL ?
>+ DUPLEX_FULL : DUPLEX_HALF;
>+ } else {
>+ ecmd->base.duplex = DUPLEX_UNKNOWN;
>+ }

Same here.


>+}

[...]


>+static void __exit mvsw_pr_module_exit(void)
>+{
>+ destroy_workqueue(mvsw_pr_wq);
>+
>+ pr_info("Unloading Marvell Prestera Switch Driver\n");

No prints like this please.



>+}
>+
>+module_init(mvsw_pr_module_init);
>+module_exit(mvsw_pr_module_exit);
>+
>+MODULE_AUTHOR("Marvell Semi.");

Does not look so :)


>+MODULE_LICENSE("GPL");
>+MODULE_DESCRIPTION("Marvell Prestera switch driver");
>+MODULE_VERSION(PRESTERA_DRV_VER);

[...]

2020-02-26 16:33:24

by Roopa Prabhu

Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

On Tue, Feb 25, 2020 at 8:31 AM Vadym Kochan <[email protected]> wrote:
>
> Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> wireless SMB deployment.
>
> Prestera Switchdev is a firmware based driver which operates via PCI
> bus. The driver is split into 2 modules:
>
> - prestera_sw.ko - main generic Switchdev Prestera ASIC related logic.
>
> - prestera_pci.ko - bus specific code which also implements firmware
> loading and low-level messaging protocol between
> firmware and the switchdev driver.
>
> This driver implementation includes only L1 & basic L2 support.
>
> The core Prestera switching logic is implemented in prestera.c, there is
> an intermediate hw layer between core logic and firmware. It is
> implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> related logic, in future there is a plan to support more devices with
> different HW related configurations.
>
> The firmware has to be loaded each time device is reset. The driver is
> loading it from:
>
> /lib/firmware/marvell/prestera_fw_img.bin
>
> The firmware image version is located within internal header and consists
> of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
> minimum supported firmware version which it can work with:
>
> MAJOR - reflects the support on ABI level between driver and loaded
> firmware, this number should be the same for driver and
> loaded firmware.
>
> MINOR - this is the minimal supported version between driver and the
> firmware.
>
> PATCH - indicates only fixes, firmware ABI is not changed.
>
> The firmware image will be submitted to the linux-firmware after the
> driver is accepted.
>
> The following Switchdev features are supported:
>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
> CPU RX/TX support will be provided in the next contribution.
>
> Vadym Kochan (3):
> net: marvell: prestera: Add Switchdev driver for Prestera family ASIC
> device 98DX325x (AC3x)
> net: marvell: prestera: Add PCI interface support
> dt-bindings: marvell,prestera: Add address mapping for Prestera
> Switchdev PCIe driver
>

Have not looked at the patches yet, but very excited to see another
switchdev driver making it into the kernel!

Thanks, Marvell!

2020-02-27 11:06:52

by Jiri Pirko

Subject: Re: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

Tue, Feb 25, 2020 at 05:30:55PM CET, [email protected] wrote:
>Add PCI interface driver for Prestera Switch ASICs family devices, which
>provides:
>
> - Firmware loading mechanism
> - Requests & events handling to/from the firmware
> - Access to the firmware on the bus level
>
>The firmware has to be loaded each time device is reset. The driver is
>loading it from:
>
> /lib/firmware/marvell/prestera_fw_img.bin
>
>The firmware image version is located within internal header and consists
>of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
>minimum supported firmware version which it can work with:
>
> MAJOR - reflects the support on ABI level between driver and loaded
> firmware, this number should be the same for driver and loaded
> firmware.
>
> MINOR - this is the minimal supported version between driver and the
> firmware.
>
> PATCH - indicates only fixes, firmware ABI is not changed.
>
>Signed-off-by: Vadym Kochan <[email protected]>
>Signed-off-by: Oleksandr Mazur <[email protected]>
>---
> drivers/net/ethernet/marvell/prestera/Kconfig | 11 +
> .../net/ethernet/marvell/prestera/Makefile | 2 +
> .../ethernet/marvell/prestera/prestera_pci.c | 840 ++++++++++++++++++
> 3 files changed, 853 insertions(+)
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c
>
>diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
>index d0b416dcb677..a4e52f7af8dd 100644
>--- a/drivers/net/ethernet/marvell/prestera/Kconfig
>+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
>@@ -11,3 +11,14 @@ config PRESTERA
>
> To compile this driver as a module, choose M here: the
> module will be called prestera_sw.
>+
>+config PRESTERA_PCI
>+ tristate "PCI interface driver for Marvell Prestera Switch ASICs family"
>+ depends on PCI && HAS_IOMEM && PRESTERA
>+ default m
>+ ---help---
>+ This is implementation of PCI interface support for Marvell Prestera
>+ Switch ASICs family.
>+
>+ To compile this driver as a module, choose M here: the
>+ module will be called prestera_pci.
>diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
>index 9446298fb7f4..5d9b579a0314 100644
>--- a/drivers/net/ethernet/marvell/prestera/Makefile
>+++ b/drivers/net/ethernet/marvell/prestera/Makefile
>@@ -1,3 +1,5 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-$(CONFIG_PRESTERA) += prestera_sw.o
> prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
>+
>+obj-$(CONFIG_PRESTERA_PCI) += prestera_pci.o
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
>new file mode 100644
>index 000000000000..847a84e3684a
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c
>@@ -0,0 +1,840 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+
>+#include <linux/module.h>
>+#include <linux/kernel.h>
>+#include <linux/device.h>
>+#include <linux/pci.h>
>+#include <linux/circ_buf.h>
>+#include <linux/firmware.h>
>+
>+#include "prestera.h"
>+
>+#define MVSW_FW_FILENAME "marvell/mvsw_prestera_fw.img"
>+
>+#define MVSW_SUPP_FW_MAJ_VER 1
>+#define MVSW_SUPP_FW_MIN_VER 0
>+#define MVSW_SUPP_FW_PATCH_VER 0
>+
>+#define mvsw_wait_timeout(cond, waitms) \
>+({ \
>+ unsigned long __wait_end = jiffies + msecs_to_jiffies(waitms); \
>+ bool __wait_ret = false; \
>+ do { \
>+ if (cond) { \
>+ __wait_ret = true; \
>+ break; \
>+ } \
>+ cond_resched(); \
>+ } while (time_before(jiffies, __wait_end)); \
>+ __wait_ret; \
>+})
>+
>+#define MVSW_FW_HDR_MAGIC 0x351D9D06
>+#define MVSW_FW_DL_TIMEOUT 50000
>+#define MVSW_FW_BLK_SZ 1024
>+
>+#define FW_VER_MAJ_MUL 1000000
>+#define FW_VER_MIN_MUL 1000
>+
>+#define FW_VER_MAJ(v) ((v) / FW_VER_MAJ_MUL)
>+
>+#define FW_VER_MIN(v) \
>+ (((v) - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL)) / FW_VER_MIN_MUL)

Add prefix.


>+
>+#define FW_VER_PATCH(v) \
>+ (v - (FW_VER_MAJ(v) * FW_VER_MAJ_MUL) - (FW_VER_MIN(v) * FW_VER_MIN_MUL))
>+
>+struct mvsw_pr_fw_header {
>+ __be32 magic_number;
>+ __be32 version_value;
>+ u8 reserved[8];
>+} __packed;
>+
>+struct mvsw_pr_ldr_regs {
>+ u32 ldr_ready;
>+ u32 pad1;
>+
>+ u32 ldr_img_size;
>+ u32 ldr_ctl_flags;
>+
>+ u32 ldr_buf_offs;
>+ u32 ldr_buf_size;
>+
>+ u32 ldr_buf_rd;
>+ u32 pad2;
>+ u32 ldr_buf_wr;
>+
>+ u32 ldr_status;
>+} __packed __aligned(4);
>+
>+#define MVSW_LDR_REG_OFFSET(f) offsetof(struct mvsw_pr_ldr_regs, f)
>+
>+#define MVSW_LDR_READY_MAGIC 0xf00dfeed
>+
>+#define MVSW_LDR_STATUS_IMG_DL BIT(0)
>+#define MVSW_LDR_STATUS_START_FW BIT(1)
>+#define MVSW_LDR_STATUS_INVALID_IMG BIT(2)
>+#define MVSW_LDR_STATUS_NOMEM BIT(3)
>+
>+#define mvsw_ldr_write(fw, reg, val) \
>+ writel(val, (fw)->ldr_regs + (reg))
>+#define mvsw_ldr_read(fw, reg) \
>+ readl((fw)->ldr_regs + (reg))
>+
>+/* fw loader registers */
>+#define MVSW_LDR_READY_REG MVSW_LDR_REG_OFFSET(ldr_ready)
>+#define MVSW_LDR_IMG_SIZE_REG MVSW_LDR_REG_OFFSET(ldr_img_size)
>+#define MVSW_LDR_CTL_REG MVSW_LDR_REG_OFFSET(ldr_ctl_flags)
>+#define MVSW_LDR_BUF_SIZE_REG MVSW_LDR_REG_OFFSET(ldr_buf_size)
>+#define MVSW_LDR_BUF_OFFS_REG MVSW_LDR_REG_OFFSET(ldr_buf_offs)
>+#define MVSW_LDR_BUF_RD_REG MVSW_LDR_REG_OFFSET(ldr_buf_rd)
>+#define MVSW_LDR_BUF_WR_REG MVSW_LDR_REG_OFFSET(ldr_buf_wr)
>+#define MVSW_LDR_STATUS_REG MVSW_LDR_REG_OFFSET(ldr_status)
>+
>+#define MVSW_LDR_CTL_DL_START BIT(0)
>+
>+#define MVSW_LDR_WR_IDX_MOVE(fw, n) \
>+do { \
>+ typeof(fw) __fw = (fw); \
>+ (__fw)->ldr_wr_idx = ((__fw)->ldr_wr_idx + (n)) & \
>+ ((__fw)->ldr_buf_len - 1); \
>+} while (0)
>+
>+#define MVSW_LDR_WR_IDX_COMMIT(fw) \
>+({ \
>+ typeof(fw) __fw = (fw); \
>+ mvsw_ldr_write((__fw), MVSW_LDR_BUF_WR_REG, \
>+ (__fw)->ldr_wr_idx); \
>+})
>+
>+#define MVSW_LDR_WR_PTR(fw) \
>+({ \
>+ typeof(fw) __fw = (fw); \
>+ ((__fw)->ldr_ring_buf + (__fw)->ldr_wr_idx); \
>+})
>+
>+#define MVSW_EVT_QNUM_MAX 4
>+
>+struct mvsw_pr_fw_evtq_regs {
>+ u32 rd_idx;
>+ u32 pad1;
>+ u32 wr_idx;
>+ u32 pad2;
>+ u32 offs;
>+ u32 len;
>+};
>+
>+struct mvsw_pr_fw_regs {
>+ u32 fw_ready;
>+ u32 pad;
>+ u32 cmd_offs;
>+ u32 cmd_len;
>+ u32 evt_offs;
>+ u32 evt_qnum;
>+
>+ u32 cmd_req_ctl;
>+ u32 cmd_req_len;
>+ u32 cmd_rcv_ctl;
>+ u32 cmd_rcv_len;
>+
>+ u32 fw_status;
>+
>+ struct mvsw_pr_fw_evtq_regs evtq_list[MVSW_EVT_QNUM_MAX];
>+};
>+
>+#define MVSW_FW_REG_OFFSET(f) offsetof(struct mvsw_pr_fw_regs, f)
>+
>+#define MVSW_FW_READY_MAGIC 0xcafebabe
>+
>+/* fw registers */
>+#define MVSW_FW_READY_REG MVSW_FW_REG_OFFSET(fw_ready)
>+
>+#define MVSW_CMD_BUF_OFFS_REG MVSW_FW_REG_OFFSET(cmd_offs)
>+#define MVSW_CMD_BUF_LEN_REG MVSW_FW_REG_OFFSET(cmd_len)
>+#define MVSW_EVT_BUF_OFFS_REG MVSW_FW_REG_OFFSET(evt_offs)
>+#define MVSW_EVT_QNUM_REG MVSW_FW_REG_OFFSET(evt_qnum)
>+
>+#define MVSW_CMD_REQ_CTL_REG MVSW_FW_REG_OFFSET(cmd_req_ctl)
>+#define MVSW_CMD_REQ_LEN_REG MVSW_FW_REG_OFFSET(cmd_req_len)
>+
>+#define MVSW_CMD_RCV_CTL_REG MVSW_FW_REG_OFFSET(cmd_rcv_ctl)
>+#define MVSW_CMD_RCV_LEN_REG MVSW_FW_REG_OFFSET(cmd_rcv_len)
>+#define MVSW_FW_STATUS_REG MVSW_FW_REG_OFFSET(fw_status)
>+
>+/* MVSW_CMD_REQ_CTL_REG flags */
>+#define MVSW_CMD_F_REQ_SENT BIT(0)
>+#define MVSW_CMD_F_REPL_RCVD BIT(1)
>+
>+/* MVSW_CMD_RCV_CTL_REG flags */
>+#define MVSW_CMD_F_REPL_SENT BIT(0)
>+
>+#define MVSW_EVTQ_REG_OFFSET(q, f) \
>+ (MVSW_FW_REG_OFFSET(evtq_list) + \
>+ (q) * sizeof(struct mvsw_pr_fw_evtq_regs) + \
>+ offsetof(struct mvsw_pr_fw_evtq_regs, f))
>+
>+#define MVSW_EVTQ_RD_IDX_REG(q) MVSW_EVTQ_REG_OFFSET(q, rd_idx)
>+#define MVSW_EVTQ_WR_IDX_REG(q) MVSW_EVTQ_REG_OFFSET(q, wr_idx)
>+#define MVSW_EVTQ_OFFS_REG(q) MVSW_EVTQ_REG_OFFSET(q, offs)
>+#define MVSW_EVTQ_LEN_REG(q) MVSW_EVTQ_REG_OFFSET(q, len)
>+
>+#define mvsw_fw_write(fw, reg, val) writel(val, (fw)->hw_regs + (reg))
>+#define mvsw_fw_read(fw, reg) readl((fw)->hw_regs + (reg))
>+
>+struct mvsw_pr_fw_evtq {
>+ u8 __iomem *addr;
>+ size_t len;
>+};
>+
>+struct mvsw_pr_fw {
>+ struct workqueue_struct *wq;
>+ struct mvsw_pr_device dev;
>+ struct pci_dev *pci_dev;
>+ u8 __iomem *mem_addr;
>+
>+ u8 __iomem *ldr_regs;
>+ u8 __iomem *hw_regs;
>+
>+ u8 __iomem *ldr_ring_buf;
>+ u32 ldr_buf_len;
>+ u32 ldr_wr_idx;
>+
>+ /* serialize access to dev->send_req */
>+ struct mutex cmd_mtx;
>+ size_t cmd_mbox_len;
>+ u8 __iomem *cmd_mbox;
>+ struct mvsw_pr_fw_evtq evt_queue[MVSW_EVT_QNUM_MAX];
>+ u8 evt_qnum;
>+ struct work_struct evt_work;
>+ u8 __iomem *evt_buf;
>+ u8 *evt_msg;
>+};
>+
>+#define mvsw_fw_dev(fw) ((fw)->dev.dev)
>+
>+#define PRESTERA_DEVICE(id) PCI_VDEVICE(MARVELL, (id))
>+
>+static struct mvsw_pr_pci_match {

Again, keep the prefix consistent.

I suggest "prestera_pci_" here for the whole code.


>+ struct pci_driver driver;
>+ const struct pci_device_id id;
>+ bool registered;
>+} mvsw_pci_devices[] = {
>+ {
>+ .driver = { .name = "AC3x 98DX326x", },
>+ .id = { PRESTERA_DEVICE(0xc804), 0 },
>+ },
>+ {{ }, { },}
>+};
>+
>+static int mvsw_pr_fw_load(struct mvsw_pr_fw *fw);
>+
>+static u32 mvsw_pr_fw_evtq_len(struct mvsw_pr_fw *fw, u8 qid)
>+{
>+ return fw->evt_queue[qid].len;
>+}
>+
>+static u32 mvsw_pr_fw_evtq_avail(struct mvsw_pr_fw *fw, u8 qid)
>+{
>+ u32 wr_idx = mvsw_fw_read(fw, MVSW_EVTQ_WR_IDX_REG(qid));
>+ u32 rd_idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
>+
>+ return CIRC_CNT(wr_idx, rd_idx, mvsw_pr_fw_evtq_len(fw, qid));
>+}
>+
>+static void mvsw_pr_fw_evtq_rd_set(struct mvsw_pr_fw *fw,
>+ u8 qid, u32 idx)
>+{
>+ u32 rd_idx = idx & (mvsw_pr_fw_evtq_len(fw, qid) - 1);
>+
>+ mvsw_fw_write(fw, MVSW_EVTQ_RD_IDX_REG(qid), rd_idx);
>+}
>+
>+static u8 __iomem *mvsw_pr_fw_evtq_buf(struct mvsw_pr_fw *fw,
>+ u8 qid)
>+{
>+ return fw->evt_queue[qid].addr;
>+}
>+
>+static u32 mvsw_pr_fw_evtq_read32(struct mvsw_pr_fw *fw, u8 qid)
>+{
>+ u32 rd_idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
>+ u32 val;
>+
>+ val = readl(mvsw_pr_fw_evtq_buf(fw, qid) + rd_idx);
>+ mvsw_pr_fw_evtq_rd_set(fw, qid, rd_idx + 4);
>+ return val;
>+}
>+
>+static ssize_t mvsw_pr_fw_evtq_read_buf(struct mvsw_pr_fw *fw,
>+ u8 qid, u8 *buf, size_t len)
>+{
>+ u32 idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
>+ u8 __iomem *evtq_addr = mvsw_pr_fw_evtq_buf(fw, qid);
>+ u32 *buf32 = (u32 *)buf;
>+ int i;
>+
>+ for (i = 0; i < len / 4; buf32++, i++) {
>+ *buf32 = readl_relaxed(evtq_addr + idx);
>+ idx = (idx + 4) & (mvsw_pr_fw_evtq_len(fw, qid) - 1);
>+ }
>+
>+ mvsw_pr_fw_evtq_rd_set(fw, qid, idx);
>+
>+ return i;
>+}
>+
>+static u8 mvsw_pr_fw_evtq_pick(struct mvsw_pr_fw *fw)
>+{
>+ int qid;
>+
>+ for (qid = 0; qid < fw->evt_qnum; qid++) {
>+ if (mvsw_pr_fw_evtq_avail(fw, qid) >= 4)
>+ return qid;
>+ }
>+
>+ return MVSW_EVT_QNUM_MAX;
>+}
>+
>+static void mvsw_pr_fw_evt_work_fn(struct work_struct *work)
>+{
>+ struct mvsw_pr_fw *fw;
>+ u8 *msg;
>+ u8 qid;
>+
>+ fw = container_of(work, struct mvsw_pr_fw, evt_work);
>+ msg = fw->evt_msg;
>+
>+ while ((qid = mvsw_pr_fw_evtq_pick(fw)) < MVSW_EVT_QNUM_MAX) {
>+ u32 idx;
>+ u32 len;
>+
>+ len = mvsw_pr_fw_evtq_read32(fw, qid);
>+ idx = mvsw_fw_read(fw, MVSW_EVTQ_RD_IDX_REG(qid));
>+
>+ WARN_ON(mvsw_pr_fw_evtq_avail(fw, qid) < len);
>+
>+ if (WARN_ON(len > MVSW_MSG_MAX_SIZE)) {
>+ mvsw_pr_fw_evtq_rd_set(fw, qid, idx + len);
>+ continue;
>+ }
>+
>+ mvsw_pr_fw_evtq_read_buf(fw, qid, msg, len);
>+
>+ if (fw->dev.recv_msg)
>+ fw->dev.recv_msg(&fw->dev, msg, len);
>+ }
>+}
>+
>+static int mvsw_pr_fw_wait_reg32(struct mvsw_pr_fw *fw,
>+ u32 reg, u32 val, unsigned int wait)
>+{
>+ if (mvsw_wait_timeout(mvsw_fw_read(fw, reg) == val, wait))
>+ return 0;
>+
>+ return -EBUSY;
>+}
>+
>+static void mvsw_pci_copy_to(u8 __iomem *dst, u8 *src, size_t len)
>+{
>+ u32 __iomem *dst32 = (u32 __iomem *)dst;
>+ u32 *src32 = (u32 *)src;
>+ int i;
>+
>+ for (i = 0; i < (len / 4); dst32++, src32++, i++)
>+ writel_relaxed(*src32, dst32);
>+}
>+
>+static void mvsw_pci_copy_from(u8 *dst, u8 __iomem *src, size_t len)
>+{
>+ u32 *dst32 = (u32 *)dst;
>+ u32 __iomem *src32 = (u32 __iomem *)src;
>+ int i;
>+
>+ for (i = 0; i < (len / 4); dst32++, src32++, i++)
>+ *dst32 = readl_relaxed(src32);
>+}
>+
>+static int mvsw_pr_fw_cmd_send(struct mvsw_pr_fw *fw,
>+ u8 *in_msg, size_t in_size,
>+ u8 *out_msg, size_t out_size,
>+ unsigned int wait)
>+{
>+ u32 ret_size = 0;
>+ int err = 0;
>+
>+ if (!wait)
>+ wait = 30000;
>+
>+ if (ALIGN(in_size, 4) > fw->cmd_mbox_len)
>+ return -EMSGSIZE;
>+
>+ /* wait for finish previous reply from FW */
>+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_CMD_RCV_CTL_REG, 0, 30);
>+ if (err) {
>+ dev_err(mvsw_fw_dev(fw), "finish reply from FW is timed out\n");
>+ return err;
>+ }
>+
>+ mvsw_fw_write(fw, MVSW_CMD_REQ_LEN_REG, in_size);
>+ mvsw_pci_copy_to(fw->cmd_mbox, in_msg, in_size);
>+
>+ mvsw_fw_write(fw, MVSW_CMD_REQ_CTL_REG, MVSW_CMD_F_REQ_SENT);
>+
>+ /* wait for reply from FW */
>+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_CMD_RCV_CTL_REG, MVSW_CMD_F_REPL_SENT,
>+ wait);
>+ if (err) {
>+ dev_err(mvsw_fw_dev(fw), "reply from FW is timed out\n");
>+ goto cmd_exit;
>+ }
>+
>+ ret_size = mvsw_fw_read(fw, MVSW_CMD_RCV_LEN_REG);
>+ if (ret_size > out_size) {
>+ dev_err(mvsw_fw_dev(fw), "ret_size (%u) > out_len(%zu)\n",
>+ ret_size, out_size);
>+ err = -EMSGSIZE;
>+ goto cmd_exit;
>+ }
>+
>+ mvsw_pci_copy_from(out_msg, fw->cmd_mbox + in_size, ret_size);
>+
>+cmd_exit:
>+ mvsw_fw_write(fw, MVSW_CMD_REQ_CTL_REG, MVSW_CMD_F_REPL_RCVD);
>+ return err;
>+}
>+
>+static int mvsw_pr_fw_send_req(struct mvsw_pr_device *dev,
>+ u8 *in_msg, size_t in_size, u8 *out_msg,
>+ size_t out_size, unsigned int wait)
>+{
>+ struct mvsw_pr_fw *fw;
>+ ssize_t ret;
>+
>+ fw = container_of(dev, struct mvsw_pr_fw, dev);
>+
>+ mutex_lock(&fw->cmd_mtx);
>+ ret = mvsw_pr_fw_cmd_send(fw, in_msg, in_size, out_msg, out_size, wait);
>+ mutex_unlock(&fw->cmd_mtx);
>+
>+ return ret;
>+}
>+
>+static int mvsw_pr_fw_init(struct mvsw_pr_fw *fw)
>+{
>+ u8 __iomem *base;
>+ int err;
>+ u8 qid;
>+
>+ err = mvsw_pr_fw_load(fw);
>+ if (err && err != -ETIMEDOUT)
>+ return err;
>+
>+ err = mvsw_pr_fw_wait_reg32(fw, MVSW_FW_READY_REG,
>+ MVSW_FW_READY_MAGIC, 20000);
>+ if (err) {
>+ dev_err(mvsw_fw_dev(fw), "FW is failed to start\n");
>+ return err;
>+ }
>+
>+ base = fw->mem_addr;
>+
>+ fw->cmd_mbox = base + mvsw_fw_read(fw, MVSW_CMD_BUF_OFFS_REG);
>+ fw->cmd_mbox_len = mvsw_fw_read(fw, MVSW_CMD_BUF_LEN_REG);
>+ mutex_init(&fw->cmd_mtx);
>+
>+ fw->evt_buf = base + mvsw_fw_read(fw, MVSW_EVT_BUF_OFFS_REG);
>+ fw->evt_qnum = mvsw_fw_read(fw, MVSW_EVT_QNUM_REG);
>+ fw->evt_msg = kmalloc(MVSW_MSG_MAX_SIZE, GFP_KERNEL);
>+ if (!fw->evt_msg)
>+ return -ENOMEM;
>+
>+ for (qid = 0; qid < fw->evt_qnum; qid++) {
>+ u32 offs = mvsw_fw_read(fw, MVSW_EVTQ_OFFS_REG(qid));
>+ struct mvsw_pr_fw_evtq *evtq = &fw->evt_queue[qid];
>+
>+ evtq->len = mvsw_fw_read(fw, MVSW_EVTQ_LEN_REG(qid));
>+ evtq->addr = fw->evt_buf + offs;
>+ }
>+
>+ return 0;
>+}
>+
>+static void mvsw_pr_fw_uninit(struct mvsw_pr_fw *fw)
>+{
>+ kfree(fw->evt_msg);
>+}
>+
>+static irqreturn_t mvsw_pci_irq_handler(int irq, void *dev_id)
>+{
>+ struct mvsw_pr_fw *fw = dev_id;
>+
>+ queue_work(fw->wq, &fw->evt_work);
>+
>+ return IRQ_HANDLED;
>+}
>+
>+static int mvsw_pr_ldr_wait_reg32(struct mvsw_pr_fw *fw,
>+ u32 reg, u32 val, unsigned int wait)
>+{
>+ if (mvsw_wait_timeout(mvsw_ldr_read(fw, reg) == val, wait))
>+ return 0;
>+
>+ return -EBUSY;
>+}
>+
>+static u32 mvsw_pr_ldr_buf_avail(struct mvsw_pr_fw *fw)
>+{
>+ u32 rd_idx = mvsw_ldr_read(fw, MVSW_LDR_BUF_RD_REG);
>+
>+ return CIRC_SPACE(fw->ldr_wr_idx, rd_idx, fw->ldr_buf_len);
>+}
>+
>+static int mvsw_pr_ldr_send_buf(struct mvsw_pr_fw *fw, const u8 *buf,
>+ size_t len)
>+{
>+ int i;
>+
>+ if (!mvsw_wait_timeout(mvsw_pr_ldr_buf_avail(fw) >= len, 100)) {
>+ dev_err(mvsw_fw_dev(fw), "failed wait for sending firmware\n");
>+ return -EBUSY;
>+ }
>+
>+ for (i = 0; i < len; i += 4) {
>+ writel_relaxed(*(u32 *)(buf + i), MVSW_LDR_WR_PTR(fw));
>+ MVSW_LDR_WR_IDX_MOVE(fw, 4);
>+ }
>+
>+ MVSW_LDR_WR_IDX_COMMIT(fw);
>+ return 0;
>+}
>+
>+static int mvsw_pr_ldr_send(struct mvsw_pr_fw *fw,
>+ const char *img, u32 fw_size)
>+{
>+ unsigned long mask;
>+ u32 status;
>+ u32 pos;
>+ int err;
>+
>+ if (mvsw_pr_ldr_wait_reg32(fw, MVSW_LDR_STATUS_REG,
>+ MVSW_LDR_STATUS_IMG_DL, 1000)) {
>+ dev_err(mvsw_fw_dev(fw), "Loader is not ready to load image\n");
>+ return -EBUSY;
>+ }
>+
>+ for (pos = 0; pos < fw_size; pos += MVSW_FW_BLK_SZ) {
>+ if (pos + MVSW_FW_BLK_SZ > fw_size)
>+ break;
>+
>+ err = mvsw_pr_ldr_send_buf(fw, img + pos, MVSW_FW_BLK_SZ);
>+ if (err)
>+ return err;
>+ }
>+
>+ if (pos < fw_size) {
>+ err = mvsw_pr_ldr_send_buf(fw, img + pos, fw_size - pos);
>+ if (err)
>+ return err;
>+ }
>+
>+ /* Waiting for status IMG_DOWNLOADING to change to something else */
>+ mask = ~(MVSW_LDR_STATUS_IMG_DL);
>+
>+ if (!mvsw_wait_timeout(mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG) & mask,
>+ MVSW_FW_DL_TIMEOUT)) {
>+ dev_err(mvsw_fw_dev(fw), "Timeout to load FW img [state=%d]",
>+ mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG));
>+ return -ETIMEDOUT;
>+ }
>+
>+ status = mvsw_ldr_read(fw, MVSW_LDR_STATUS_REG);
>+ if (status != MVSW_LDR_STATUS_START_FW) {
>+ switch (status) {
>+ case MVSW_LDR_STATUS_INVALID_IMG:
>+ dev_err(mvsw_fw_dev(fw), "FW img has bad crc\n");
>+ return -EINVAL;
>+ case MVSW_LDR_STATUS_NOMEM:
>+ dev_err(mvsw_fw_dev(fw), "Loader has no enough mem\n");
>+ return -ENOMEM;
>+ default:
>+ break;
>+ }
>+ }
>+
>+ return 0;
>+}
>+
>+static bool mvsw_pr_ldr_is_ready(struct mvsw_pr_fw *fw)
>+{
>+ return mvsw_ldr_read(fw, MVSW_LDR_READY_REG) == MVSW_LDR_READY_MAGIC;
>+}
>+
>+static void mvsw_pr_fw_rev_parse(const struct mvsw_pr_fw_header *hdr,
>+ struct mvsw_fw_rev *rev)
>+{
>+ u32 version = be32_to_cpu(hdr->version_value);
>+
>+ rev->maj = FW_VER_MAJ(version);
>+ rev->min = FW_VER_MIN(version);
>+ rev->sub = FW_VER_PATCH(version);
>+}
>+
>+static int mvsw_pr_fw_rev_check(struct mvsw_pr_fw *fw)
>+{
>+ struct mvsw_fw_rev *rev = &fw->dev.fw_rev;
>+
>+ if (rev->maj == MVSW_SUPP_FW_MAJ_VER &&
>+ rev->min >= MVSW_SUPP_FW_MIN_VER) {
>+ return 0;
>+ }
>+
>+ dev_err(mvsw_fw_dev(fw), "Driver supports FW version only '%u.%u.%u'",
>+ MVSW_SUPP_FW_MAJ_VER,
>+ MVSW_SUPP_FW_MIN_VER,
>+ MVSW_SUPP_FW_PATCH_VER);
>+
>+ return -EINVAL;
>+}
>+
>+static int mvsw_pr_fw_hdr_parse(struct mvsw_pr_fw *fw,
>+ const struct firmware *img)
>+{
>+ struct mvsw_pr_fw_header *hdr = (struct mvsw_pr_fw_header *)img->data;
>+ struct mvsw_fw_rev *rev = &fw->dev.fw_rev;
>+ u32 magic;
>+
>+ magic = be32_to_cpu(hdr->magic_number);
>+ if (magic != MVSW_FW_HDR_MAGIC) {
>+ dev_err(mvsw_fw_dev(fw), "FW img type is invalid");
>+ return -EINVAL;
>+ }
>+
>+ mvsw_pr_fw_rev_parse(hdr, rev);
>+
>+ dev_info(mvsw_fw_dev(fw), "FW version '%u.%u.%u'\n",
>+ rev->maj, rev->min, rev->sub);
>+
>+ return mvsw_pr_fw_rev_check(fw);
>+}
>+
>+static int mvsw_pr_fw_load(struct mvsw_pr_fw *fw)
>+{
>+ size_t hlen = sizeof(struct mvsw_pr_fw_header);
>+ const struct firmware *f;
>+ bool has_ldr;
>+ int err;
>+
>+ has_ldr = mvsw_wait_timeout(mvsw_pr_ldr_is_ready(fw), 1000);
>+ if (!has_ldr) {
>+ dev_err(mvsw_fw_dev(fw), "waiting for FW loader is timed out");
>+ return -ETIMEDOUT;
>+ }
>+
>+ fw->ldr_ring_buf = fw->ldr_regs +
>+ mvsw_ldr_read(fw, MVSW_LDR_BUF_OFFS_REG);
>+
>+ fw->ldr_buf_len =
>+ mvsw_ldr_read(fw, MVSW_LDR_BUF_SIZE_REG);
>+
>+ fw->ldr_wr_idx = 0;
>+
>+ err = request_firmware_direct(&f, MVSW_FW_FILENAME, &fw->pci_dev->dev);
>+ if (err) {
>+ dev_err(mvsw_fw_dev(fw), "failed to request firmware file\n");
>+ return err;
>+ }
>+
>+ if (!IS_ALIGNED(f->size, 4)) {
>+ dev_err(mvsw_fw_dev(fw), "FW image file is not aligned");
>+ release_firmware(f);
>+ return -EINVAL;
>+ }
>+
>+ err = mvsw_pr_fw_hdr_parse(fw, f);
>+ if (err) {
>+ dev_err(mvsw_fw_dev(fw), "FW image header is invalid\n");
>+ release_firmware(f);
>+ return err;
>+ }
>+
>+ mvsw_ldr_write(fw, MVSW_LDR_IMG_SIZE_REG, f->size - hlen);
>+ mvsw_ldr_write(fw, MVSW_LDR_CTL_REG, MVSW_LDR_CTL_DL_START);
>+
>+ dev_info(mvsw_fw_dev(fw), "Loading prestera FW image ...");
>+
>+ err = mvsw_pr_ldr_send(fw, f->data + hlen, f->size - hlen);
>+
>+ release_firmware(f);
>+ return err;
>+}
>+
>+static int mvsw_pr_pci_probe(struct pci_dev *pdev,
>+ const struct pci_device_id *id)
>+{
>+ const char *driver_name = pdev->driver->name;
>+ struct mvsw_pr_fw *fw;
>+ u8 __iomem *mem_addr;
>+ int err;
>+
>+ err = pci_enable_device(pdev);
>+ if (err) {
>+ dev_err(&pdev->dev, "pci_enable_device failed\n");
>+ goto err_pci_enable_device;
>+ }
>+
>+ err = pci_request_regions(pdev, driver_name);
>+ if (err) {
>+ dev_err(&pdev->dev, "pci_request_regions failed\n");
>+ goto err_pci_request_regions;
>+ }
>+
>+ mem_addr = pci_ioremap_bar(pdev, 2);
>+ if (!mem_addr) {
>+ dev_err(&pdev->dev, "ioremap failed\n");
>+ err = -EIO;
>+ goto err_ioremap;
>+ }
>+
>+ pci_set_master(pdev);
>+
>+ fw = kzalloc(sizeof(*fw), GFP_KERNEL);
>+ if (!fw) {
>+ err = -ENOMEM;
>+ goto err_pci_dev_alloc;
>+ }
>+
>+ fw->pci_dev = pdev;
>+ fw->dev.dev = &pdev->dev;
>+ fw->dev.send_req = mvsw_pr_fw_send_req;
>+ fw->mem_addr = mem_addr;
>+ fw->ldr_regs = mem_addr;
>+ fw->hw_regs = mem_addr;
>+
>+ fw->wq = alloc_workqueue("mvsw_fw_wq", WQ_HIGHPRI, 1);
>+ if (!fw->wq)
>+ goto err_wq_alloc;
>+
>+ INIT_WORK(&fw->evt_work, mvsw_pr_fw_evt_work_fn);
>+
>+ err = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
>+ if (err < 0) {
>+ dev_err(&pdev->dev, "MSI IRQ init failed\n");
>+ goto err_irq_alloc;
>+ }
>+
>+ err = request_irq(pci_irq_vector(pdev, 0), mvsw_pci_irq_handler,
>+ 0, driver_name, fw);
>+ if (err) {
>+ dev_err(&pdev->dev, "fail to request IRQ\n");
>+ goto err_request_irq;
>+ }
>+
>+ pci_set_drvdata(pdev, fw);
>+
>+ err = mvsw_pr_fw_init(fw);
>+ if (err)
>+ goto err_mvsw_fw_init;
>+
>+ dev_info(mvsw_fw_dev(fw), "Prestera Switch FW is ready\n");
>+
>+ err = mvsw_pr_device_register(&fw->dev);
>+ if (err)
>+ goto err_mvsw_dev_register;
>+
>+ return 0;
>+
>+err_mvsw_dev_register:
>+ mvsw_pr_fw_uninit(fw);
>+err_mvsw_fw_init:
>+ free_irq(pci_irq_vector(pdev, 0), fw);
>+err_request_irq:
>+ pci_free_irq_vectors(pdev);
>+err_irq_alloc:
>+ destroy_workqueue(fw->wq);
>+err_wq_alloc:
>+ kfree(fw);
>+err_pci_dev_alloc:
>+ iounmap(mem_addr);
>+err_ioremap:
>+ pci_release_regions(pdev);
>+err_pci_request_regions:
>+ pci_disable_device(pdev);
>+err_pci_enable_device:
>+ return err;
>+}
>+
>+static void mvsw_pr_pci_remove(struct pci_dev *pdev)
>+{
>+ struct mvsw_pr_fw *fw = pci_get_drvdata(pdev);
>+
>+ free_irq(pci_irq_vector(pdev, 0), fw);
>+ pci_free_irq_vectors(pdev);
>+ mvsw_pr_device_unregister(&fw->dev);
>+ flush_workqueue(fw->wq);
>+ destroy_workqueue(fw->wq);
>+ mvsw_pr_fw_uninit(fw);
>+ iounmap(fw->mem_addr);
>+ pci_release_regions(pdev);
>+ pci_disable_device(pdev);
>+ kfree(fw);
>+}
>+
>+static int __init mvsw_pr_pci_init(void)
>+{
>+ struct mvsw_pr_pci_match *match;
>+ int err;
>+
>+ for (match = mvsw_pci_devices; match->driver.name; match++) {

Just use MODULE_DEVICE_TABLE(). See spectrum.c for example.


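A minimal sketch of what the suggested MODULE_DEVICE_TABLE() approach could look like, assuming a single pci_driver covering all supported IDs (the 0xC804 device ID and the driver/struct names here are illustrative, not taken from the patch):

```c
/* Sketch: one pci_driver plus MODULE_DEVICE_TABLE(), following the
 * pattern used by mlxsw's spectrum.c, instead of registering one
 * pci_driver per matched device.
 */
static const struct pci_device_id mvsw_pci_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xC804) }, /* hypothetical ID */
	{ },
};
MODULE_DEVICE_TABLE(pci, mvsw_pci_ids);

static struct pci_driver mvsw_pci_driver = {
	.name     = "prestera_pci",
	.id_table = mvsw_pci_ids,
	.probe    = mvsw_pr_pci_probe,
	.remove   = mvsw_pr_pci_remove,
};

/* Replaces the hand-rolled module_init/module_exit loops entirely. */
module_pci_driver(mvsw_pci_driver);
```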
>+ match->driver.probe = mvsw_pr_pci_probe;
>+ match->driver.remove = mvsw_pr_pci_remove;
>+ match->driver.id_table = &match->id;
>+
>+ err = pci_register_driver(&match->driver);
>+ if (err) {
>+ pr_err("prestera_pci: failed to register %s\n",
>+ match->driver.name);
>+ break;
>+ }
>+
>+ match->registered = true;
>+ }
>+
>+ if (err) {
>+ for (match = mvsw_pci_devices; match->driver.name; match++) {
>+ if (!match->registered)
>+ break;
>+
>+ pci_unregister_driver(&match->driver);
>+ }
>+
>+ return err;
>+ }
>+
>+ pr_info("prestera_pci: Registered Marvell Prestera PCI driver\n");

Avoid prints like this one.



>+ return 0;
>+}
>+
>+static void __exit mvsw_pr_pci_exit(void)
>+{
>+ struct mvsw_pr_pci_match *match;
>+
>+ for (match = mvsw_pci_devices; match->driver.name; match++) {
>+ if (!match->registered)
>+ break;
>+
>+ pci_unregister_driver(&match->driver);
>+ }
>+
>+ pr_info("prestera_pci: Unregistered Marvell Prestera PCI driver\n");
>+}
>+
>+module_init(mvsw_pr_pci_init);
>+module_exit(mvsw_pr_pci_exit);
>+
>+MODULE_AUTHOR("Marvell Semi.");

Again, wrong author.


>+MODULE_LICENSE("GPL");

Inconsistent with the header.


>+MODULE_DESCRIPTION("Marvell Prestera switch PCI interface");
>--
>2.17.1
>

2020-02-27 14:25:14

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
>Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>wireless SMB deployment.
>
>This driver implementation includes only L1 & basic L2 support.
>
>The core Prestera switching logic is implemented in prestera.c, there is
>an intermediate hw layer between core logic and firmware. It is
>implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>related logic, in future there is a plan to support more devices with
>different HW related configurations.
>
>The following Switchdev features are supported:
>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
>Signed-off-by: Vadym Kochan <[email protected]>
>Signed-off-by: Andrii Savka <[email protected]>
>Signed-off-by: Oleksandr Mazur <[email protected]>
>Signed-off-by: Serhiy Boiko <[email protected]>
>Signed-off-by: Serhiy Pshyk <[email protected]>
>Signed-off-by: Taras Chornyi <[email protected]>
>Signed-off-by: Volodymyr Mytnyk <[email protected]>
>---
> drivers/net/ethernet/marvell/Kconfig | 1 +
> drivers/net/ethernet/marvell/Makefile | 1 +
> drivers/net/ethernet/marvell/prestera/Kconfig | 13 +
> .../net/ethernet/marvell/prestera/Makefile | 3 +
> .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> .../marvell/prestera/prestera_drv_ver.h | 23 +
> .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> 10 files changed, 4257 insertions(+)
> create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>
>diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
>index 3d5caea096fb..74313d9e1fc0 100644
>--- a/drivers/net/ethernet/marvell/Kconfig
>+++ b/drivers/net/ethernet/marvell/Kconfig
>@@ -171,5 +171,6 @@ config SKY2_DEBUG
>
>
> source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
>+source "drivers/net/ethernet/marvell/prestera/Kconfig"
>
> endif # NET_VENDOR_MARVELL
>diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile
>index 89dea7284d5b..9f88fe822555 100644
>--- a/drivers/net/ethernet/marvell/Makefile
>+++ b/drivers/net/ethernet/marvell/Makefile
>@@ -12,3 +12,4 @@ obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
> obj-$(CONFIG_SKGE) += skge.o
> obj-$(CONFIG_SKY2) += sky2.o
> obj-y += octeontx2/
>+obj-y += prestera/
>diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
>new file mode 100644
>index 000000000000..d0b416dcb677
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
>@@ -0,0 +1,13 @@
>+# SPDX-License-Identifier: GPL-2.0-only
>+#
>+# Marvell Prestera drivers configuration
>+#
>+
>+config PRESTERA
>+ tristate "Marvell Prestera Switch ASICs support"
>+ depends on NET_SWITCHDEV && VLAN_8021Q
>+ ---help---
>+ This driver supports Marvell Prestera Switch ASICs family.
>+
>+ To compile this driver as a module, choose M here: the
>+ module will be called prestera_sw.
>diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
>new file mode 100644
>index 000000000000..9446298fb7f4
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/Makefile
>@@ -0,0 +1,3 @@
>+# SPDX-License-Identifier: GPL-2.0
>+obj-$(CONFIG_PRESTERA) += prestera_sw.o
>+prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera.c b/drivers/net/ethernet/marvell/prestera/prestera.c
>new file mode 100644
>index 000000000000..12d0eb590bbb
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera.c
>@@ -0,0 +1,1502 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0


# SPDX-License-Identifier: GPL-2.0-only
# SPDX-License-Identifier: GPL-2.0
/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0

You have to make up your mind :)




>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+#include <linux/kernel.h>
>+#include <linux/module.h>
>+#include <linux/list.h>
>+#include <linux/netdevice.h>
>+#include <linux/netdev_features.h>
>+#include <linux/etherdevice.h>
>+#include <linux/ethtool.h>
>+#include <linux/jiffies.h>
>+#include <net/switchdev.h>
>+
>+#include "prestera.h"
>+#include "prestera_hw.h"
>+#include "prestera_drv_ver.h"
>+
>+#define MVSW_PR_MTU_DEFAULT 1536
>+
>+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
>+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
>+#define PORT_STATS_IDX(name) \
>+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
>+#define PORT_STATS_FIELD(name) \
>+ [PORT_STATS_IDX(name)] = __stringify(name)
>+
>+static struct list_head switches_registered;

Avoid this global list. You don't use it anyway.


>+
>+static const char mvsw_driver_kind[] = "prestera_sw";
>+static const char mvsw_driver_name[] = "mvsw_switchdev";
>+static const char mvsw_driver_version[] = PRESTERA_DRV_VER;
>+
>+#define mvsw_dev(sw) ((sw)->dev->dev)
>+#define mvsw_dev_name(sw) dev_name((sw)->dev->dev)
>+
>+static struct workqueue_struct *mvsw_pr_wq;
>+
>+struct mvsw_pr_link_mode {
>+ enum ethtool_link_mode_bit_indices eth_mode;
>+ u32 speed;
>+ u64 pr_mask;
>+ u8 duplex;
>+ u8 port_type;
>+};
>+
>+static const struct mvsw_pr_link_mode
>+mvsw_pr_link_modes[MVSW_LINK_MODE_MAX] = {
>+ [MVSW_LINK_MODE_10baseT_Half_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_10baseT_Half_BIT,
>+ .speed = 10,
>+ .pr_mask = 1 << MVSW_LINK_MODE_10baseT_Half_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_HALF,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_10baseT_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_10baseT_Full_BIT,
>+ .speed = 10,
>+ .pr_mask = 1 << MVSW_LINK_MODE_10baseT_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_100baseT_Half_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_100baseT_Half_BIT,
>+ .speed = 100,
>+ .pr_mask = 1 << MVSW_LINK_MODE_100baseT_Half_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_HALF,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_100baseT_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_100baseT_Full_BIT,
>+ .speed = 100,
>+ .pr_mask = 1 << MVSW_LINK_MODE_100baseT_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_1000baseT_Half_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_1000baseT_Half_BIT,
>+ .speed = 1000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseT_Half_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_HALF,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_1000baseT_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
>+ .speed = 1000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseT_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_1000baseX_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
>+ .speed = 1000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseX_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_1000baseKX_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
>+ .speed = 1000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_1000baseKX_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_10GbaseKR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
>+ .speed = 10000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseKR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_10GbaseSR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
>+ .speed = 10000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseSR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_10GbaseLR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
>+ .speed = 10000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_10GbaseLR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_20GbaseKR2_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT,
>+ .speed = 20000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_20GbaseKR2_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_25GbaseCR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
>+ .speed = 25000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseCR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_DA,
>+ },
>+ [MVSW_LINK_MODE_25GbaseKR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
>+ .speed = 25000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseKR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_25GbaseSR_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
>+ .speed = 25000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_25GbaseSR_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_40GbaseKR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
>+ .speed = 40000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseKR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_40GbaseCR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
>+ .speed = 40000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseCR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_DA,
>+ },
>+ [MVSW_LINK_MODE_40GbaseSR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
>+ .speed = 40000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_40GbaseSR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_50GbaseCR2_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
>+ .speed = 50000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseCR2_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_DA,
>+ },
>+ [MVSW_LINK_MODE_50GbaseKR2_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
>+ .speed = 50000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseKR2_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_50GbaseSR2_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
>+ .speed = 50000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_50GbaseSR2_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_100GbaseKR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
>+ .speed = 100000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseKR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_TP,
>+ },
>+ [MVSW_LINK_MODE_100GbaseSR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
>+ .speed = 100000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseSR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_FIBRE,
>+ },
>+ [MVSW_LINK_MODE_100GbaseCR4_Full_BIT] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
>+ .speed = 100000,
>+ .pr_mask = 1 << MVSW_LINK_MODE_100GbaseCR4_Full_BIT,
>+ .duplex = MVSW_PORT_DUPLEX_FULL,
>+ .port_type = MVSW_PORT_TYPE_DA,
>+ }
>+};
>+
>+struct mvsw_pr_fec {
>+ u32 eth_fec;
>+ enum ethtool_link_mode_bit_indices eth_mode;
>+ u8 pr_fec;
>+};
>+
>+static const struct mvsw_pr_fec mvsw_pr_fec_caps[MVSW_PORT_FEC_MAX] = {
>+ [MVSW_PORT_FEC_OFF_BIT] = {
>+ .eth_fec = ETHTOOL_FEC_OFF,
>+ .eth_mode = ETHTOOL_LINK_MODE_FEC_NONE_BIT,
>+ .pr_fec = 1 << MVSW_PORT_FEC_OFF_BIT,
>+ },
>+ [MVSW_PORT_FEC_BASER_BIT] = {
>+ .eth_fec = ETHTOOL_FEC_BASER,
>+ .eth_mode = ETHTOOL_LINK_MODE_FEC_BASER_BIT,
>+ .pr_fec = 1 << MVSW_PORT_FEC_BASER_BIT,
>+ },
>+ [MVSW_PORT_FEC_RS_BIT] = {
>+ .eth_fec = ETHTOOL_FEC_RS,
>+ .eth_mode = ETHTOOL_LINK_MODE_FEC_RS_BIT,
>+ .pr_fec = 1 << MVSW_PORT_FEC_RS_BIT,
>+ }
>+};
>+
>+struct mvsw_pr_port_type {
>+ enum ethtool_link_mode_bit_indices eth_mode;
>+ u8 eth_type;
>+};
>+
>+static const struct mvsw_pr_port_type
>+mvsw_pr_port_types[MVSW_PORT_TYPE_MAX] = {
>+ [MVSW_PORT_TYPE_NONE] = {
>+ .eth_mode = __ETHTOOL_LINK_MODE_MASK_NBITS,
>+ .eth_type = PORT_NONE,
>+ },
>+ [MVSW_PORT_TYPE_TP] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_TP_BIT,
>+ .eth_type = PORT_TP,
>+ },
>+ [MVSW_PORT_TYPE_AUI] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_AUI_BIT,
>+ .eth_type = PORT_AUI,
>+ },
>+ [MVSW_PORT_TYPE_MII] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_MII_BIT,
>+ .eth_type = PORT_MII,
>+ },
>+ [MVSW_PORT_TYPE_FIBRE] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_FIBRE_BIT,
>+ .eth_type = PORT_FIBRE,
>+ },
>+ [MVSW_PORT_TYPE_BNC] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_BNC_BIT,
>+ .eth_type = PORT_BNC,
>+ },
>+ [MVSW_PORT_TYPE_DA] = {
>+ .eth_mode = ETHTOOL_LINK_MODE_TP_BIT,
>+ .eth_type = PORT_TP,
>+ },
>+ [MVSW_PORT_TYPE_OTHER] = {
>+ .eth_mode = __ETHTOOL_LINK_MODE_MASK_NBITS,
>+ .eth_type = PORT_OTHER,
>+ }
>+};
>+
>+static const char mvsw_pr_port_cnt_name[PORT_STATS_CNT][ETH_GSTRING_LEN] = {
>+ PORT_STATS_FIELD(good_octets_received),
>+ PORT_STATS_FIELD(bad_octets_received),
>+ PORT_STATS_FIELD(mac_trans_error),
>+ PORT_STATS_FIELD(broadcast_frames_received),
>+ PORT_STATS_FIELD(multicast_frames_received),
>+ PORT_STATS_FIELD(frames_64_octets),
>+ PORT_STATS_FIELD(frames_65_to_127_octets),
>+ PORT_STATS_FIELD(frames_128_to_255_octets),
>+ PORT_STATS_FIELD(frames_256_to_511_octets),
>+ PORT_STATS_FIELD(frames_512_to_1023_octets),
>+ PORT_STATS_FIELD(frames_1024_to_max_octets),
>+ PORT_STATS_FIELD(excessive_collision),
>+ PORT_STATS_FIELD(multicast_frames_sent),
>+ PORT_STATS_FIELD(broadcast_frames_sent),
>+ PORT_STATS_FIELD(fc_sent),
>+ PORT_STATS_FIELD(fc_received),
>+ PORT_STATS_FIELD(buffer_overrun),
>+ PORT_STATS_FIELD(undersize),
>+ PORT_STATS_FIELD(fragments),
>+ PORT_STATS_FIELD(oversize),
>+ PORT_STATS_FIELD(jabber),
>+ PORT_STATS_FIELD(rx_error_frame_received),
>+ PORT_STATS_FIELD(bad_crc),
>+ PORT_STATS_FIELD(collisions),
>+ PORT_STATS_FIELD(late_collision),
>+ PORT_STATS_FIELD(unicast_frames_received),
>+ PORT_STATS_FIELD(unicast_frames_sent),
>+ PORT_STATS_FIELD(sent_multiple),
>+ PORT_STATS_FIELD(sent_deferred),
>+ PORT_STATS_FIELD(frames_1024_to_1518_octets),
>+ PORT_STATS_FIELD(frames_1519_to_max_octets),
>+ PORT_STATS_FIELD(good_octets_sent),
>+};
>+
>+static struct mvsw_pr_port *__find_pr_port(const struct mvsw_pr_switch *sw,
>+ u32 port_id)
>+{
>+ struct mvsw_pr_port *port;
>+
>+ list_for_each_entry(port, &sw->port_list, list) {
>+ if (port->id == port_id)
>+ return port;
>+ }
>+
>+ return NULL;
>+}
>+
>+static int mvsw_pr_port_state_set(struct net_device *dev, bool is_up)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ int err;
>+
>+ if (!is_up)
>+ netif_stop_queue(dev);
>+
>+ err = mvsw_pr_hw_port_state_set(port, is_up);
>+
>+ if (is_up && !err)
>+ netif_start_queue(dev);
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_get_port_parent_id(struct net_device *dev,
>+ struct netdev_phys_item_id *ppid)
>+{
>+ const struct mvsw_pr_port *port = netdev_priv(dev);
>+
>+ ppid->id_len = sizeof(port->sw->id);
>+
>+ memcpy(&ppid->id, &port->sw->id, ppid->id_len);
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_get_phys_port_name(struct net_device *dev,
>+ char *buf, size_t len)

Don't implement this please. Just implement basic devlink and devlink
port support; devlink is going to take care of the netdevice names.


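A rough sketch of the devlink port registration the reviewer is pointing at, against the 5.6-era devlink API (exact call sites, the `dl_port` field, and the ops struct are assumptions for illustration):

```c
/* Sketch: register a devlink instance for the switch and one
 * devlink_port per front-panel port; the core can then derive
 * netdevice names from the port flavour and number, making
 * ndo_get_phys_port_name unnecessary.
 */
devlink = devlink_alloc(&mvsw_devlink_ops, sizeof(struct mvsw_pr_switch));
err = devlink_register(devlink, mvsw_dev(sw));

/* per port, before register_netdev(): */
devlink_port_attrs_set(&port->dl_port, DEVLINK_PORT_FLAVOUR_PHYSICAL,
		       port->fp_id, false, 0, NULL, 0);
err = devlink_port_register(devlink, &port->dl_port, port->fp_id);

/* after register_netdev(): */
devlink_port_type_eth_set(&port->dl_port, port->net_dev);
```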
>+{
>+ const struct mvsw_pr_port *port = netdev_priv(dev);
>+
>+ snprintf(buf, len, "%u", port->fp_id);
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_open(struct net_device *dev)
>+{
>+ return mvsw_pr_port_state_set(dev, true);
>+}
>+
>+static int mvsw_pr_port_close(struct net_device *dev)
>+{
>+ return mvsw_pr_port_state_set(dev, false);
>+}
>+
>+static netdev_tx_t mvsw_pr_port_xmit(struct sk_buff *skb,
>+ struct net_device *dev)
>+{

You need to implement this function. In fact, that is the basic
functionality of a netdevice: transmitting and receiving traffic.

As Andy suggested, first implement 24 netdevices with slowpath only,
then add offloading and other features.


>+ dev_kfree_skb(skb);
>+ return NETDEV_TX_OK;
>+}
>+
>+static int mvsw_is_valid_mac_addr(struct mvsw_pr_port *port, u8 *addr)
>+{
>+ int err;
>+
>+ if (!is_valid_ether_addr(addr))
>+ return -EADDRNOTAVAIL;
>+
>+ err = memcmp(port->sw->base_mac, addr, ETH_ALEN - 1);

memcmp() does not return 0/-ESOMETHING, so you can test its result
directly instead of storing it in err:
if (memcmp...


>+ if (err)
>+ return -EINVAL;
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_set_mac_address(struct net_device *dev, void *p)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ struct sockaddr *addr = p;
>+ int err;
>+
>+ err = mvsw_is_valid_mac_addr(port, addr->sa_data);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_hw_port_mac_set(port, addr->sa_data);
>+ if (!err)

do error path in if:
if (err)
return err;


>+ memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_change_mtu(struct net_device *dev, int mtu)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ int err;
>+
>+ if (port->sw->mtu_min <= mtu && mtu <= port->sw->mtu_max)
>+ err = mvsw_pr_hw_port_mtu_set(port, mtu);
>+ else
>+ err = -EINVAL;
>+
>+ if (!err)
>+ dev->mtu = mtu;
>+
>+ return err;

How about rather:

if (mtu < port->sw->mtu_min || mtu > port->sw->mtu_max)
return -EINVAL;
err = mvsw_pr_hw_port_mtu_set(port, mtu);
if (err)
return err;
dev->mtu = mtu;
return 0;


Btw, since you have dev->min_mtu/max_mtu set, you can avoid checking it
here. dev_validate_mtu() will do it for you.


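For context, a sketch of what setting the core bounds could look like at netdev setup time (the exact location in the driver's port-create path is an assumption):

```c
/* Sketch: publish the hardware MTU limits via struct net_device so
 * dev_validate_mtu() rejects out-of-range values in the core before
 * ndo_change_mtu() is ever called; the range check in
 * mvsw_pr_port_change_mtu() then becomes redundant.
 */
net_dev->min_mtu = sw->mtu_min;
net_dev->max_mtu = sw->mtu_max;
```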

>+}
>+
>+static void mvsw_pr_port_get_stats64(struct net_device *dev,
>+ struct rtnl_link_stats64 *stats)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ struct mvsw_pr_port_stats *port_stats = &port->cached_hw_stats.stats;
>+
>+ stats->rx_packets = port_stats->broadcast_frames_received +

Don't use tab after "="


>+ port_stats->multicast_frames_received +
>+ port_stats->unicast_frames_received;
>+
>+ stats->tx_packets = port_stats->broadcast_frames_sent +
>+ port_stats->multicast_frames_sent +
>+ port_stats->unicast_frames_sent;
>+
>+ stats->rx_bytes = port_stats->good_octets_received;
>+
>+ stats->tx_bytes = port_stats->good_octets_sent;
>+
>+ stats->rx_errors = port_stats->rx_error_frame_received;
>+ stats->tx_errors = port_stats->mac_trans_error;
>+
>+ stats->rx_dropped = port_stats->buffer_overrun;
>+ stats->tx_dropped = 0;
>+
>+ stats->multicast = port_stats->multicast_frames_received;
>+ stats->collisions = port_stats->excessive_collision;
>+
>+ stats->rx_crc_errors = port_stats->bad_crc;
>+}
>+
>+static void mvsw_pr_port_get_hw_stats(struct mvsw_pr_port *port)
>+{
>+ mvsw_pr_hw_port_stats_get(port, &port->cached_hw_stats.stats);
>+}
>+
>+static void update_stats_cache(struct work_struct *work)
>+{
>+ struct mvsw_pr_port *port =
>+ container_of(work, struct mvsw_pr_port,
>+ cached_hw_stats.caching_dw.work);
>+
>+ mvsw_pr_port_get_hw_stats(port);
>+
>+ queue_delayed_work(mvsw_pr_wq, &port->cached_hw_stats.caching_dw,
>+ PORT_STATS_CACHE_TIMEOUT_MS);
>+}
>+
>+static void mvsw_pr_port_get_drvinfo(struct net_device *dev,
>+ struct ethtool_drvinfo *drvinfo)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ struct mvsw_pr_switch *sw = port->sw;
>+
>+ strlcpy(drvinfo->driver, mvsw_driver_kind, sizeof(drvinfo->driver));
>+ strlcpy(drvinfo->bus_info, mvsw_dev_name(sw), sizeof(drvinfo->bus_info));
>+ snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
>+ "%d.%d.%d",

Unnecessary wrapping.


>+ sw->dev->fw_rev.maj,
>+ sw->dev->fw_rev.min,
>+ sw->dev->fw_rev.sub);
>+}
>+
>+static const struct net_device_ops mvsw_pr_netdev_ops = {
>+ .ndo_open = mvsw_pr_port_open,
>+ .ndo_stop = mvsw_pr_port_close,
>+ .ndo_start_xmit = mvsw_pr_port_xmit,
>+ .ndo_change_mtu = mvsw_pr_port_change_mtu,
>+ .ndo_get_stats64 = mvsw_pr_port_get_stats64,
>+ .ndo_set_mac_address = mvsw_pr_port_set_mac_address,
>+ .ndo_get_phys_port_name = mvsw_pr_port_get_phys_port_name,
>+ .ndo_get_port_parent_id = mvsw_pr_port_get_port_parent_id
>+};
>+
>+bool mvsw_pr_netdev_check(const struct net_device *dev)
>+{
>+ return dev->netdev_ops == &mvsw_pr_netdev_ops;
>+}
>+
>+static int mvsw_pr_lower_dev_walk(struct net_device *lower_dev, void *data)
>+{
>+ struct mvsw_pr_port **pport = data;
>+
>+ if (mvsw_pr_netdev_check(lower_dev)) {
>+ *pport = netdev_priv(lower_dev);
>+ return 1;
>+ }
>+
>+ return 0;
>+}
>+
>+struct mvsw_pr_port *mvsw_pr_port_dev_lower_find(struct net_device *dev)
>+{
>+ struct mvsw_pr_port *port;
>+
>+ if (mvsw_pr_netdev_check(dev))
>+ return netdev_priv(dev);
>+
>+ port = NULL;
>+ netdev_walk_all_lower_dev(dev, mvsw_pr_lower_dev_walk, &port);
>+
>+ return port;
>+}
>+
>+static void mvsw_modes_to_eth(unsigned long *eth_modes, u64 link_modes, u8 fec,
>+ u8 type)
>+{
>+ u32 mode;
>+
>+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
>+ if ((mvsw_pr_link_modes[mode].pr_mask & link_modes) == 0)
>+ continue;
>+ if (type != MVSW_PORT_TYPE_NONE &&
>+ mvsw_pr_link_modes[mode].port_type != type)
>+ continue;
>+ __set_bit(mvsw_pr_link_modes[mode].eth_mode, eth_modes);
>+ }
>+
>+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
>+ if ((mvsw_pr_fec_caps[mode].pr_fec & fec) == 0)
>+ continue;
>+ __set_bit(mvsw_pr_fec_caps[mode].eth_mode, eth_modes);
>+ }
>+}
>+
>+static void mvsw_modes_from_eth(const unsigned long *eth_modes, u64 *link_modes,
>+ u8 *fec)
>+{
>+ u32 mode;
>+
>+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
>+ if (!test_bit(mvsw_pr_link_modes[mode].eth_mode, eth_modes))
>+ continue;
>+ *link_modes |= mvsw_pr_link_modes[mode].pr_mask;
>+ }
>+
>+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
>+ if (!test_bit(mvsw_pr_fec_caps[mode].eth_mode, eth_modes))
>+ continue;
>+ *fec |= mvsw_pr_fec_caps[mode].pr_fec;
>+ }
>+}
>+
>+static void mvsw_pr_port_supp_types_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u32 mode;
>+ u8 ptype;
>+
>+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
>+ if ((mvsw_pr_link_modes[mode].pr_mask &
>+ port->caps.supp_link_modes) == 0)
>+ continue;
>+ ptype = mvsw_pr_link_modes[mode].port_type;
>+ __set_bit(mvsw_pr_port_types[ptype].eth_mode,
>+ ecmd->link_modes.supported);
>+ }
>+}
>+
>+static void mvsw_pr_port_speed_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u32 speed;
>+ int err;
>+
>+ err = mvsw_pr_hw_port_speed_get(port, &speed);
>+ ecmd->base.speed = !err ? speed : SPEED_UNKNOWN;
>+}
>+
>+static int mvsw_pr_port_link_mode_set(struct mvsw_pr_port *port,
>+ u32 speed, u8 duplex, u8 type)
>+{
>+ u32 new_mode = MVSW_LINK_MODE_MAX;
>+ u32 mode;
>+
>+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
>+ if (speed != mvsw_pr_link_modes[mode].speed)
>+ continue;
>+ if (duplex != mvsw_pr_link_modes[mode].duplex)
>+ continue;
>+ if (!(mvsw_pr_link_modes[mode].pr_mask &
>+ port->caps.supp_link_modes))
>+ continue;
>+ if (type != mvsw_pr_link_modes[mode].port_type)
>+ continue;
>+
>+ new_mode = mode;
>+ break;
>+ }
>+
>+ if (new_mode == MVSW_LINK_MODE_MAX) {
>+ netdev_err(port->net_dev, "Unsupported speed/duplex requested");
>+ return -EINVAL;
>+ }
>+
>+ return mvsw_pr_hw_port_link_mode_set(port, new_mode);
>+}
>+
>+static int mvsw_pr_port_speed_duplex_set(const struct ethtool_link_ksettings
>+ *ecmd, struct mvsw_pr_port *port)
>+{
>+ int err;
>+ u8 duplex;
>+ u32 speed;
>+ u32 curr_mode;

You have to maintain reverse Christmas tree ordering (longest
declaration first) for all variables in functions:

u32 curr_mode;
u32 speed;
u8 duplex;
int err;


>+
>+ err = mvsw_pr_hw_port_link_mode_get(port, &curr_mode);
>+ if (err || curr_mode >= MVSW_LINK_MODE_MAX)
>+ return -EINVAL;
>+
>+ if (ecmd->base.duplex != DUPLEX_UNKNOWN)
>+ duplex = ecmd->base.duplex == DUPLEX_FULL ?
>+ MVSW_PORT_DUPLEX_FULL : MVSW_PORT_DUPLEX_HALF;
>+ else
>+ duplex = mvsw_pr_link_modes[curr_mode].duplex;
>+
>+ if (ecmd->base.speed != SPEED_UNKNOWN)
>+ speed = ecmd->base.speed;
>+ else
>+ speed = mvsw_pr_link_modes[curr_mode].speed;
>+
>+ return mvsw_pr_port_link_mode_set(port, speed, duplex, port->caps.type);
>+}
>+
>+static u8 mvsw_pr_port_type_get(struct mvsw_pr_port *port)
>+{
>+ if (port->caps.type < MVSW_PORT_TYPE_MAX)
>+ return mvsw_pr_port_types[port->caps.type].eth_type;
>+ return PORT_OTHER;
>+}
>+
>+static int mvsw_pr_port_type_set(const struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ int err;
>+ u32 type, mode;
>+ u32 new_mode = MVSW_LINK_MODE_MAX;
>+
>+ for (type = 0; type < MVSW_PORT_TYPE_MAX; type++) {
>+ if (mvsw_pr_port_types[type].eth_type == ecmd->base.port &&
>+ test_bit(mvsw_pr_port_types[type].eth_mode,
>+ ecmd->link_modes.supported)) {
>+ break;
>+ }
>+ }
>+
>+ if (type == port->caps.type)
>+ return 0;
>+
>+ if (type == MVSW_PORT_TYPE_MAX) {
>+ pr_err("Unsupported port type requested\n");
>+ return -EINVAL;
>+ }
>+
>+ for (mode = 0; mode < MVSW_LINK_MODE_MAX; mode++) {
>+ if ((mvsw_pr_link_modes[mode].pr_mask &
>+ port->caps.supp_link_modes) &&
>+ type == mvsw_pr_link_modes[mode].port_type) {
>+ new_mode = mode;
>+ }
>+ }
>+
>+ if (new_mode < MVSW_LINK_MODE_MAX)
>+ err = mvsw_pr_hw_port_link_mode_set(port, new_mode);
>+ else
>+ err = -EINVAL;
>+
>+ if (!err)
>+ port->caps.type = type;
>+
>+ return err;
>+}
>+
>+static void mvsw_pr_port_remote_cap_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u64 bitmap;
>+
>+ if (!mvsw_pr_hw_port_remote_cap_get(port, &bitmap)) {
>+ mvsw_modes_to_eth(ecmd->link_modes.lp_advertising,
>+ bitmap, 0, MVSW_PORT_TYPE_NONE);
>+ }
>+}
>+
>+static void mvsw_pr_port_duplex_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u8 duplex;
>+
>+ if (!mvsw_pr_hw_port_duplex_get(port, &duplex)) {
>+ ecmd->base.duplex = duplex == MVSW_PORT_DUPLEX_FULL ?
>+ DUPLEX_FULL : DUPLEX_HALF;
>+ } else {
>+ ecmd->base.duplex = DUPLEX_UNKNOWN;
>+ }
>+}
>+
>+static int mvsw_pr_port_autoneg_set(struct mvsw_pr_port *port, bool enable,
>+ u64 link_modes, u8 fec)
>+{
>+ bool refresh = false;
>+ int err = 0;
>+
>+ if (port->caps.type != MVSW_PORT_TYPE_TP)
>+ return enable ? -EINVAL : 0;
>+
>+ if (port->adver_link_modes != link_modes || port->adver_fec != fec) {
>+ port->adver_link_modes = link_modes;
>+ port->adver_fec = fec != 0 ? fec : BIT(MVSW_PORT_FEC_OFF_BIT);
>+ refresh = true;
>+ }
>+
>+ if (port->autoneg == enable && !(port->autoneg && refresh))
>+ return 0;
>+
>+ err = mvsw_pr_hw_port_autoneg_set(port, enable,
>+ port->adver_link_modes,
>+ port->adver_fec);
>+ if (err)
>+ return -EINVAL;
>+
>+ port->autoneg = enable;
>+ return 0;
>+}
>+
>+static void mvsw_pr_port_mdix_get(struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ u8 mode;
>+
>+ if (mvsw_pr_hw_port_mdix_get(port, &mode))

always store the return value in "err" and do "if (err)".


>+ return;
>+
>+ ecmd->base.eth_tp_mdix = mode;
>+}
>+
>+static int mvsw_pr_port_mdix_set(const struct ethtool_link_ksettings *ecmd,
>+ struct mvsw_pr_port *port)
>+{
>+ if (ecmd->base.eth_tp_mdix_ctrl)
>+ return -EOPNOTSUPP;
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_get_link_ksettings(struct net_device *dev,
>+ struct ethtool_link_ksettings *ecmd)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+
>+ ethtool_link_ksettings_zero_link_mode(ecmd, supported);
>+ ethtool_link_ksettings_zero_link_mode(ecmd, advertising);
>+ ethtool_link_ksettings_zero_link_mode(ecmd, lp_advertising);
>+
>+ ecmd->base.autoneg = port->autoneg ? AUTONEG_ENABLE : AUTONEG_DISABLE;
>+
>+ if (port->caps.type == MVSW_PORT_TYPE_TP) {
>+ ethtool_link_ksettings_add_link_mode(ecmd, supported, Autoneg);
>+ if (netif_running(dev) &&
>+ (port->autoneg ||
>+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER))
>+ ethtool_link_ksettings_add_link_mode(ecmd, advertising,
>+ Autoneg);
>+ }
>+
>+ mvsw_modes_to_eth(ecmd->link_modes.supported,
>+ port->caps.supp_link_modes,
>+ port->caps.supp_fec,
>+ port->caps.type);
>+
>+ mvsw_pr_port_supp_types_get(ecmd, port);
>+
>+ if (netif_carrier_ok(dev)) {
>+ mvsw_pr_port_speed_get(ecmd, port);
>+ mvsw_pr_port_duplex_get(ecmd, port);
>+ } else {
>+ ecmd->base.speed = SPEED_UNKNOWN;
>+ ecmd->base.duplex = DUPLEX_UNKNOWN;
>+ }
>+
>+ ecmd->base.port = mvsw_pr_port_type_get(port);
>+
>+ if (port->autoneg) {
>+ if (netif_running(dev))
>+ mvsw_modes_to_eth(ecmd->link_modes.advertising,
>+ port->adver_link_modes,
>+ port->adver_fec,
>+ port->caps.type);
>+
>+ if (netif_carrier_ok(dev) &&
>+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER) {
>+ ethtool_link_ksettings_add_link_mode(ecmd,
>+ lp_advertising,
>+ Autoneg);
>+ mvsw_pr_port_remote_cap_get(ecmd, port);
>+ }
>+ }
>+
>+ if (port->caps.type == MVSW_PORT_TYPE_TP &&
>+ port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER)
>+ mvsw_pr_port_mdix_get(ecmd, port);
>+
>+ return 0;
>+}
>+
>+static bool mvsw_pr_check_supp_modes(const struct mvsw_pr_port_caps *caps,
>+ u64 adver_modes, u8 adver_fec)
>+{
>+ if ((caps->supp_link_modes & adver_modes) == 0)
>+ return true;
>+ if ((adver_fec & ~caps->supp_fec) != 0)
>+ return true;
>+
>+ return false;
>+}
>+
>+static int mvsw_pr_port_set_link_ksettings(struct net_device *dev,
>+ const struct ethtool_link_ksettings
>+ *ecmd)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ bool is_up = netif_running(dev);
>+ u64 adver_modes = 0;
>+ u8 adver_fec = 0;
>+ int err, err1;
>+
>+ if (is_up) {
>+ err = mvsw_pr_port_state_set(dev, false);
>+ if (err)
>+ return err;
>+ }
>+
>+ err = mvsw_pr_port_type_set(ecmd, port);
>+ if (err)
>+ goto fini_link_ksettings;
>+
>+ if (port->caps.transceiver == MVSW_PORT_TRANSCEIVER_COPPER) {
>+ err = mvsw_pr_port_mdix_set(ecmd, port);
>+ if (err)
>+ goto fini_link_ksettings;
>+ }
>+
>+ mvsw_modes_from_eth(ecmd->link_modes.advertising, &adver_modes,
>+ &adver_fec);
>+
>+ if (ecmd->base.autoneg == AUTONEG_ENABLE &&
>+ mvsw_pr_check_supp_modes(&port->caps, adver_modes, adver_fec)) {
>+ netdev_err(dev, "Unsupported link mode requested");
>+ err = -EINVAL;
>+ goto fini_link_ksettings;
>+ }
>+
>+ err = mvsw_pr_port_autoneg_set(port,
>+ ecmd->base.autoneg == AUTONEG_ENABLE,
>+ adver_modes, adver_fec);
>+ if (err)
>+ goto fini_link_ksettings;
>+
>+ if (ecmd->base.autoneg == AUTONEG_DISABLE) {
>+ err = mvsw_pr_port_speed_duplex_set(ecmd, port);
>+ if (err)
>+ goto fini_link_ksettings;
>+ }
>+
>+fini_link_ksettings:
>+ err1 = mvsw_pr_port_state_set(dev, is_up);
>+ if (err1)
>+ return err1;
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_get_fecparam(struct net_device *dev,
>+ struct ethtool_fecparam *fecparam)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ u32 mode;
>+ u8 active;
>+ int err;
>+
>+ err = mvsw_pr_hw_port_fec_get(port, &active);
>+ if (err)
>+ return err;
>+
>+ fecparam->fec = 0;
>+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
>+ if ((mvsw_pr_fec_caps[mode].pr_fec & port->caps.supp_fec) == 0)
>+ continue;
>+ fecparam->fec |= mvsw_pr_fec_caps[mode].eth_fec;
>+ }
>+
>+ if (active < MVSW_PORT_FEC_MAX)
>+ fecparam->active_fec = mvsw_pr_fec_caps[active].eth_fec;
>+ else
>+ fecparam->active_fec = ETHTOOL_FEC_AUTO;
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_set_fecparam(struct net_device *dev,
>+ struct ethtool_fecparam *fecparam)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ u8 fec, active;
>+ u32 mode;
>+ int err;
>+
>+ if (port->autoneg) {
>+ netdev_err(dev, "FEC set is not allowed while autoneg is on\n");
>+ return -EINVAL;
>+ }
>+
>+ err = mvsw_pr_hw_port_fec_get(port, &active);
>+ if (err)
>+ return err;
>+
>+ fec = MVSW_PORT_FEC_MAX;
>+ for (mode = 0; mode < MVSW_PORT_FEC_MAX; mode++) {
>+ if ((mvsw_pr_fec_caps[mode].eth_fec & fecparam->fec) &&
>+ (mvsw_pr_fec_caps[mode].pr_fec & port->caps.supp_fec)) {
>+ fec = mode;
>+ break;
>+ }
>+ }
>+
>+ if (fec == active)
>+ return 0;
>+
>+ if (fec == MVSW_PORT_FEC_MAX) {
>+ netdev_err(dev, "Unsupported FEC requested");
>+ return -EINVAL;
>+ }
>+
>+ return mvsw_pr_hw_port_fec_set(port, fec);
>+}
>+
>+static void mvsw_pr_port_get_ethtool_stats(struct net_device *dev,
>+ struct ethtool_stats *stats,
>+ u64 *data)
>+{
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ struct mvsw_pr_port_stats *port_stats = &port->cached_hw_stats.stats;
>+
>+ memcpy((u8 *)data, port_stats, sizeof(*port_stats));
>+}
>+
>+static void mvsw_pr_port_get_strings(struct net_device *dev,
>+ u32 stringset, u8 *data)
>+{
>+ if (stringset != ETH_SS_STATS)
>+ return;
>+
>+ memcpy(data, *mvsw_pr_port_cnt_name, sizeof(mvsw_pr_port_cnt_name));
>+}
>+
>+static int mvsw_pr_port_get_sset_count(struct net_device *dev, int sset)
>+{
>+ switch (sset) {
>+ case ETH_SS_STATS:
>+ return PORT_STATS_CNT;
>+ default:
>+ return -EOPNOTSUPP;
>+ }
>+}
>+
>+static const struct ethtool_ops mvsw_pr_ethtool_ops = {
>+ .get_drvinfo = mvsw_pr_port_get_drvinfo,
>+ .get_link_ksettings = mvsw_pr_port_get_link_ksettings,
>+ .set_link_ksettings = mvsw_pr_port_set_link_ksettings,
>+ .get_fecparam = mvsw_pr_port_get_fecparam,
>+ .set_fecparam = mvsw_pr_port_set_fecparam,
>+ .get_sset_count = mvsw_pr_port_get_sset_count,
>+ .get_strings = mvsw_pr_port_get_strings,
>+ .get_ethtool_stats = mvsw_pr_port_get_ethtool_stats,
>+ .get_link = ethtool_op_get_link
>+};

Please put the ethtool code introduction in a separate patch; it is
easier to review multiple smaller patches than one big one.

Please do the same with other blocks, like bridge offload.
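
For what it's worth, one mechanical way to split an already-committed
big change is a mixed reset followed by per-area commits. Toy repo
below; the file names are hypothetical, not the driver's actual split:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q .
git -c user.email=dev@example.com -c user.name=dev \
	commit -q --allow-empty -m "base"

# One big commit touching two independent areas:
echo "ethtool ops"    > prestera_ethtool.c
echo "bridge offload" > prestera_bridge.c
git add .
git -c user.email=dev@example.com -c user.name=dev \
	commit -q -m "net: prestera: everything at once"

# Undo the commit but keep the files, then re-commit per area:
git reset -q HEAD~1
git add prestera_ethtool.c
git -c user.email=dev@example.com -c user.name=dev \
	commit -q -m "net: prestera: add ethtool support"
git add prestera_bridge.c
git -c user.email=dev@example.com -c user.name=dev \
	commit -q -m "net: prestera: add bridge offload"

git log --oneline
```

(`git rebase -i` with `edit` does the same for a commit buried deeper
in the series.)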


>+
>+int mvsw_pr_port_learning_set(struct mvsw_pr_port *port, bool learn)
>+{
>+ return mvsw_pr_hw_port_learning_set(port, learn);
>+}
>+
>+int mvsw_pr_port_flood_set(struct mvsw_pr_port *port, bool flood)
>+{
>+ return mvsw_pr_hw_port_flood_set(port, flood);
>+}
>+
>+int mvsw_pr_port_pvid_set(struct mvsw_pr_port *port, u16 vid)
>+{
>+ int err;
>+
>+ if (!vid) {
>+ err = mvsw_pr_hw_port_accept_frame_type_set
>+ (port, MVSW_ACCEPT_FRAME_TYPE_TAGGED);
>+ if (err)
>+ return err;
>+ } else {
>+ err = mvsw_pr_hw_vlan_port_vid_set(port, vid);
>+ if (err)
>+ return err;
>+ err = mvsw_pr_hw_port_accept_frame_type_set
>+ (port, MVSW_ACCEPT_FRAME_TYPE_ALL);
>+ if (err)
>+ goto err_port_allow_untagged_set;
>+ }
>+
>+ port->pvid = vid;
>+ return 0;
>+
>+err_port_allow_untagged_set:
>+ mvsw_pr_hw_vlan_port_vid_set(port, port->pvid);
>+ return err;
>+}
>+
>+struct mvsw_pr_port_vlan*
>+mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *port, u16 vid)
>+{
>+ struct mvsw_pr_port_vlan *port_vlan;
>+
>+ list_for_each_entry(port_vlan, &port->vlans_list, list) {
>+ if (port_vlan->vid == vid)
>+ return port_vlan;
>+ }
>+
>+ return NULL;
>+}
>+
>+struct mvsw_pr_port_vlan*
>+mvsw_pr_port_vlan_create(struct mvsw_pr_port *port, u16 vid)
>+{
>+ bool untagged = vid == MVSW_PR_DEFAULT_VID;
>+ struct mvsw_pr_port_vlan *port_vlan;
>+ int err;
>+
>+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
>+ if (port_vlan)
>+ return ERR_PTR(-EEXIST);
>+
>+ err = mvsw_pr_port_vlan_set(port, vid, true, untagged);
>+ if (err)
>+ return ERR_PTR(err);
>+
>+ port_vlan = kzalloc(sizeof(*port_vlan), GFP_KERNEL);
>+ if (!port_vlan) {
>+ err = -ENOMEM;
>+ goto err_port_vlan_alloc;
>+ }
>+
>+ port_vlan->mvsw_pr_port = port;
>+ port_vlan->vid = vid;
>+
>+ list_add(&port_vlan->list, &port->vlans_list);
>+
>+ return port_vlan;
>+
>+err_port_vlan_alloc:
>+ mvsw_pr_port_vlan_set(port, vid, false, false);
>+ return ERR_PTR(err);
>+}
>+
>+static void
>+mvsw_pr_port_vlan_cleanup(struct mvsw_pr_port_vlan *port_vlan)
>+{
>+ if (port_vlan->bridge_port)
>+ mvsw_pr_port_vlan_bridge_leave(port_vlan);
>+}
>+
>+void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *port_vlan)
>+{
>+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
>+ u16 vid = port_vlan->vid;
>+
>+ mvsw_pr_port_vlan_cleanup(port_vlan);
>+ list_del(&port_vlan->list);
>+ kfree(port_vlan);
>+ mvsw_pr_hw_vlan_port_set(port, vid, false, false);
>+}
>+
>+int mvsw_pr_port_vlan_set(struct mvsw_pr_port *port, u16 vid,
>+ bool is_member, bool untagged)
>+{
>+ return mvsw_pr_hw_vlan_port_set(port, vid, is_member, untagged);
>+}
>+
>+static int mvsw_pr_port_create(struct mvsw_pr_switch *sw, u32 id)
>+{
>+ struct net_device *net_dev;

Be consistent with the rest of the code:
"struct net_device *dev"



>+ struct mvsw_pr_port *port;
>+ char *mac;
>+ int err;
>+
>+ net_dev = alloc_etherdev(sizeof(*port));
>+ if (!net_dev)
>+ return -ENOMEM;
>+
>+ port = netdev_priv(net_dev);
>+
>+ INIT_LIST_HEAD(&port->vlans_list);
>+ port->pvid = MVSW_PR_DEFAULT_VID;
>+ port->net_dev = net_dev;
>+ port->id = id;
>+ port->sw = sw;
>+
>+ err = mvsw_pr_hw_port_info_get(port, &port->fp_id,
>+ &port->hw_id, &port->dev_id);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to get port(%u) info\n", id);
>+ goto err_register_netdev;
>+ }
>+
>+ net_dev->features |= NETIF_F_NETNS_LOCAL | NETIF_F_HW_L2FW_DOFFLOAD;
>+ net_dev->ethtool_ops = &mvsw_pr_ethtool_ops;
>+ net_dev->netdev_ops = &mvsw_pr_netdev_ops;
>+
>+ netif_carrier_off(net_dev);
>+
>+ net_dev->mtu = min_t(unsigned int, sw->mtu_max, MVSW_PR_MTU_DEFAULT);
>+ net_dev->min_mtu = sw->mtu_min;
>+ net_dev->max_mtu = sw->mtu_max;
>+
>+ err = mvsw_pr_hw_port_mtu_set(port, net_dev->mtu);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to set port(%u) mtu\n", id);
>+ goto err_register_netdev;
>+ }
>+
>+ /* Only 0xFF mac addrs are supported */
>+ if (port->fp_id >= 0xFF)
>+ goto err_register_netdev;
>+
>+ mac = net_dev->dev_addr;
>+ memcpy(mac, sw->base_mac, net_dev->addr_len - 1);
>+ mac[net_dev->addr_len - 1] = (char)port->fp_id;
>+
>+ err = mvsw_pr_hw_port_mac_set(port, mac);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to set port(%u) mac addr\n", id);
>+ goto err_register_netdev;
>+ }
>+
>+ err = mvsw_pr_hw_port_cap_get(port, &port->caps);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to get port(%u) caps\n", id);
>+ goto err_register_netdev;
>+ }
>+
>+ port->adver_link_modes = 0;

No need. The memory is already zeroed.


>+ port->adver_fec = 1 << MVSW_PORT_FEC_OFF_BIT;
>+ port->autoneg = false;

No need. The memory is already zeroed.


>+ mvsw_pr_port_autoneg_set(port, true, port->caps.supp_link_modes,
>+ port->caps.supp_fec);
>+
>+ err = mvsw_pr_hw_port_state_set(port, false);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to set port(%u) down\n", id);
>+ goto err_register_netdev;
>+ }
>+
>+ INIT_DELAYED_WORK(&port->cached_hw_stats.caching_dw,
>+ &update_stats_cache);
>+
>+ err = register_netdev(net_dev);
>+ if (err)
>+ goto err_register_netdev;
>+
>+ list_add(&port->list, &sw->port_list);
>+
>+ return 0;
>+
>+err_register_netdev:
>+ free_netdev(net_dev);
>+ return err;
>+}
>+
>+static void mvsw_pr_port_vlan_flush(struct mvsw_pr_port *port,
>+ bool flush_default)
>+{
>+ struct mvsw_pr_port_vlan *port_vlan, *tmp;
>+
>+ list_for_each_entry_safe(port_vlan, tmp, &port->vlans_list, list) {
>+ if (!flush_default && port_vlan->vid == MVSW_PR_DEFAULT_VID)
>+ continue;
>+
>+ mvsw_pr_port_vlan_destroy(port_vlan);
>+ }
>+}
>+
>+int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id)
>+{
>+ return mvsw_pr_hw_bridge_create(sw, bridge_id);
>+}
>+
>+int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id)
>+{
>+ return mvsw_pr_hw_bridge_delete(sw, bridge_id);
>+}
>+
>+int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *port, u16 bridge_id)
>+{
>+ return mvsw_pr_hw_bridge_port_add(port, bridge_id);
>+}
>+
>+int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *port, u16 bridge_id)
>+{
>+ return mvsw_pr_hw_bridge_port_delete(port, bridge_id);
>+}
>+
>+int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time)
>+{
>+ return mvsw_pr_hw_switch_ageing_set(sw, ageing_time);
>+}
>+
>+int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
>+ enum mvsw_pr_fdb_flush_mode mode)
>+{
>+ return mvsw_pr_hw_fdb_flush_vlan(sw, vid, mode);
>+}
>+
>+int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
>+ enum mvsw_pr_fdb_flush_mode mode)
>+{
>+ return mvsw_pr_hw_fdb_flush_port_vlan(port, vid, mode);
>+}
>+
>+int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
>+ enum mvsw_pr_fdb_flush_mode mode)
>+{
>+ return mvsw_pr_hw_fdb_flush_port(port, mode);
>+}
>+
>+static int mvsw_pr_clear_ports(struct mvsw_pr_switch *sw)
>+{

Be consistent with the rest of the code:
"struct net_device *dev"



>+ struct net_device *net_dev;
>+ struct list_head *pos, *n;
>+ struct mvsw_pr_port *port;
>+
>+ list_for_each_safe(pos, n, &sw->port_list) {
>+ port = list_entry(pos, typeof(*port), list);
>+ net_dev = port->net_dev;
>+
>+ cancel_delayed_work_sync(&port->cached_hw_stats.caching_dw);
>+ unregister_netdev(net_dev);
>+ mvsw_pr_port_vlan_flush(port, true);
>+ WARN_ON_ONCE(!list_empty(&port->vlans_list));
>+ free_netdev(net_dev);
>+ list_del(pos);
>+ }
>+ return (!list_empty(&sw->port_list));
>+}
>+
>+static void mvsw_pr_port_handle_event(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt)
>+{
>+ struct mvsw_pr_port *port;
>+ struct delayed_work *caching_dw;
>+
>+ port = __find_pr_port(sw, evt->port_evt.port_id);
>+ if (!port)
>+ return;
>+
>+ caching_dw = &port->cached_hw_stats.caching_dw;
>+
>+ switch (evt->id) {
>+ case MVSW_PORT_EVENT_STATE_CHANGED:
>+ if (evt->port_evt.data.oper_state) {
>+ netif_carrier_on(port->net_dev);
>+ if (!delayed_work_pending(caching_dw))
>+ queue_delayed_work(mvsw_pr_wq, caching_dw, 0);
>+ } else {
>+ netif_carrier_off(port->net_dev);
>+ if (delayed_work_pending(caching_dw))
>+ cancel_delayed_work(caching_dw);
>+ }
>+ break;
>+ }
>+}
>+
>+static void mvsw_pr_fdb_handle_event(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt)

Hmm, I think you should register this handler from prestera_switchdev.c


>+{
>+ struct switchdev_notifier_fdb_info info;
>+ struct mvsw_pr_port *port;
>+
>+ port = __find_pr_port(sw, evt->fdb_evt.port_id);
>+ if (!port)
>+ return;
>+
>+ info.addr = evt->fdb_evt.data.mac;
>+ info.vid = evt->fdb_evt.vid;
>+ info.offloaded = true;
>+
>+ rtnl_lock();
>+ switch (evt->id) {
>+ case MVSW_FDB_EVENT_LEARNED:
>+ call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
>+ port->net_dev, &info.info, NULL);
>+ break;
>+ case MVSW_FDB_EVENT_AGED:
>+ call_switchdev_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
>+ port->net_dev, &info.info, NULL);
>+ break;
>+ }
>+ rtnl_unlock();
>+ return;
>+}
>+
>+int mvsw_pr_fdb_add(struct mvsw_pr_port *port, const unsigned char *mac,
>+ u16 vid, bool dynamic)
>+{
>+ return mvsw_pr_hw_fdb_add(port, mac, vid, dynamic);
>+}
>+
>+int mvsw_pr_fdb_del(struct mvsw_pr_port *port, const unsigned char *mac,
>+ u16 vid)
>+{
>+ return mvsw_pr_hw_fdb_del(port, mac, vid);
>+}
>+
>+static void mvsw_pr_fdb_event_handler_unregister(struct mvsw_pr_switch *sw)
>+{
>+ mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_FDB,
>+ mvsw_pr_fdb_handle_event);
>+}
>+
>+static void mvsw_pr_port_event_handler_unregister(struct mvsw_pr_switch *sw)
>+{
>+ mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_PORT,
>+ mvsw_pr_port_handle_event);
>+}
>+
>+static void mvsw_pr_event_handlers_unregister(struct mvsw_pr_switch *sw)
>+{
>+ mvsw_pr_fdb_event_handler_unregister(sw);
>+ mvsw_pr_port_event_handler_unregister(sw);
>+}
>+
>+static int mvsw_pr_fdb_event_handler_register(struct mvsw_pr_switch *sw)
>+{
>+ return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_FDB,
>+ mvsw_pr_fdb_handle_event);
>+}
>+
>+static int mvsw_pr_port_event_handler_register(struct mvsw_pr_switch *sw)
>+{
>+ return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_PORT,
>+ mvsw_pr_port_handle_event);
>+}
>+
>+static int mvsw_pr_event_handlers_register(struct mvsw_pr_switch *sw)
>+{
>+ int err;
>+
>+ err = mvsw_pr_port_event_handler_register(sw);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_fdb_event_handler_register(sw);
>+ if (err)
>+ goto err_fdb_handler_register;
>+
>+ return 0;
>+
>+err_fdb_handler_register:
>+ mvsw_pr_port_event_handler_unregister(sw);
>+ return err;
>+}
>+
>+static int mvsw_pr_init(struct mvsw_pr_switch *sw)
>+{
>+ u32 port;
>+ int err;
>+
>+ err = mvsw_pr_hw_switch_init(sw);
>+ if (err) {
>+ dev_err(mvsw_dev(sw), "Failed to init Switch device\n");
>+ return err;
>+ }
>+
>+ dev_info(mvsw_dev(sw), "Initialized Switch device\n");

Remove prints like this.


>+
>+ err = mvsw_pr_switchdev_register(sw);
>+ if (err)
>+ return err;
>+
>+ INIT_LIST_HEAD(&sw->port_list);
>+
>+ for (port = 0; port < sw->port_count; port++) {
>+ err = mvsw_pr_port_create(sw, port);
>+ if (err)
>+ goto err_ports_init;
>+ }
>+
>+ err = mvsw_pr_event_handlers_register(sw);
>+ if (err)
>+ goto err_ports_init;
>+
>+ return 0;
>+
>+err_ports_init:
>+ mvsw_pr_clear_ports(sw);
>+ return err;
>+}
>+
>+static void mvsw_pr_fini(struct mvsw_pr_switch *sw)
>+{
>+ mvsw_pr_event_handlers_unregister(sw);
>+

Remove the empty line.


>+ mvsw_pr_switchdev_unregister(sw);
>+ mvsw_pr_clear_ports(sw);
>+}
>+
>+int mvsw_pr_device_register(struct mvsw_pr_device *dev)
>+{
>+ struct mvsw_pr_switch *sw;
>+ int err;
>+
>+ sw = kzalloc(sizeof(*sw), GFP_KERNEL);
>+ if (!sw)
>+ return -ENOMEM;
>+
>+ dev->priv = sw;
>+ sw->dev = dev;
>+
>+ err = mvsw_pr_init(sw);
>+ if (err) {
>+ kfree(sw);
>+ return err;
>+ }
>+
>+ list_add(&sw->list, &switches_registered);
>+
>+ return 0;
>+}
>+EXPORT_SYMBOL(mvsw_pr_device_register);
>+
>+void mvsw_pr_device_unregister(struct mvsw_pr_device *dev)
>+{
>+ struct mvsw_pr_switch *sw = dev->priv;
>+
>+ list_del(&sw->list);
>+ mvsw_pr_fini(sw);
>+ kfree(sw);
>+}
>+EXPORT_SYMBOL(mvsw_pr_device_unregister);
>+
>+static int __init mvsw_pr_module_init(void)
>+{
>+ INIT_LIST_HEAD(&switches_registered);
>+
>+ mvsw_pr_wq = alloc_workqueue(mvsw_driver_name, 0, 0);
>+ if (!mvsw_pr_wq)
>+ return -ENOMEM;
>+
>+ pr_info("Loading Marvell Prestera Switch Driver\n");
>+ return 0;
>+}
>+
>+static void __exit mvsw_pr_module_exit(void)
>+{
>+ destroy_workqueue(mvsw_pr_wq);
>+
>+ pr_info("Unloading Marvell Prestera Switch Driver\n");
>+}
>+
>+module_init(mvsw_pr_module_init);
>+module_exit(mvsw_pr_module_exit);
>+
>+MODULE_AUTHOR("Marvell Semi.");
>+MODULE_LICENSE("GPL");

Inconsistent licences: the SPDX header declares "BSD-3-Clause OR
GPL-2.0", but MODULE_LICENSE("GPL") covers GPL only; the matching
string would be "Dual BSD/GPL".


>+MODULE_DESCRIPTION("Marvell Prestera switch driver");
>+MODULE_VERSION(PRESTERA_DRV_VER);

Why do you need this? I believe it is better to avoid it.


>diff --git a/drivers/net/ethernet/marvell/prestera/prestera.h b/drivers/net/ethernet/marvell/prestera/prestera.h
>new file mode 100644
>index 000000000000..cbc6b0c78937
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera.h
>@@ -0,0 +1,244 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+
>+#ifndef _MVSW_PRESTERA_H_
>+#define _MVSW_PRESTERA_H_
>+
>+#include <linux/skbuff.h>
>+#include <linux/notifier.h>
>+#include <uapi/linux/if_ether.h>
>+#include <linux/workqueue.h>
>+
>+#define MVSW_MSG_MAX_SIZE 1500
>+
>+#define MVSW_PR_DEFAULT_VID 1
>+
>+#define MVSW_PR_MIN_AGEING_TIME 10
>+#define MVSW_PR_MAX_AGEING_TIME 1000000
>+#define MVSW_PR_DEFAULT_AGEING_TIME 300
>+
>+struct mvsw_fw_rev {
>+ u16 maj;
>+ u16 min;
>+ u16 sub;
>+};
>+
>+struct mvsw_pr_bridge_port;
>+
>+struct mvsw_pr_port_vlan {
>+ struct list_head list;
>+ struct mvsw_pr_port *mvsw_pr_port;
>+ u16 vid;
>+ struct mvsw_pr_bridge_port *bridge_port;
>+ struct list_head bridge_vlan_node;
>+};
>+
>+struct mvsw_pr_port_stats {
>+ u64 good_octets_received;
>+ u64 bad_octets_received;
>+ u64 mac_trans_error;
>+ u64 broadcast_frames_received;
>+ u64 multicast_frames_received;
>+ u64 frames_64_octets;
>+ u64 frames_65_to_127_octets;
>+ u64 frames_128_to_255_octets;
>+ u64 frames_256_to_511_octets;
>+ u64 frames_512_to_1023_octets;
>+ u64 frames_1024_to_max_octets;
>+ u64 excessive_collision;
>+ u64 multicast_frames_sent;
>+ u64 broadcast_frames_sent;
>+ u64 fc_sent;
>+ u64 fc_received;
>+ u64 buffer_overrun;
>+ u64 undersize;
>+ u64 fragments;
>+ u64 oversize;
>+ u64 jabber;
>+ u64 rx_error_frame_received;
>+ u64 bad_crc;
>+ u64 collisions;
>+ u64 late_collision;
>+ u64 unicast_frames_received;
>+ u64 unicast_frames_sent;
>+ u64 sent_multiple;
>+ u64 sent_deferred;
>+ u64 frames_1024_to_1518_octets;
>+ u64 frames_1519_to_max_octets;
>+ u64 good_octets_sent;
>+};
>+
>+struct mvsw_pr_port_caps {
>+ u64 supp_link_modes;
>+ u8 supp_fec;
>+ u8 type;
>+ u8 transceiver;
>+};
>+
>+struct mvsw_pr_port {
>+ struct net_device *net_dev;

Be consistent with the rest of the code:
"struct net_device *dev"


>+ struct mvsw_pr_switch *sw;
>+ u32 id;
>+ u32 hw_id;
>+ u32 dev_id;
>+ u16 fp_id;
>+ u16 pvid;
>+ bool autoneg;
>+ u64 adver_link_modes;
>+ u8 adver_fec;
>+ struct mvsw_pr_port_caps caps;
>+ struct list_head list;
>+ struct list_head vlans_list;
>+ struct {
>+ struct mvsw_pr_port_stats stats;
>+ struct delayed_work caching_dw;
>+ } cached_hw_stats;
>+};
>+
>+struct mvsw_pr_switchdev {
>+ struct mvsw_pr_switch *sw;
>+ struct notifier_block swdev_n;
>+ struct notifier_block swdev_blocking_n;
>+};
>+
>+struct mvsw_pr_fib {
>+ struct mvsw_pr_switch *sw;
>+ struct notifier_block fib_nb;
>+ struct notifier_block netevent_nb;
>+};
>+
>+struct mvsw_pr_device {
>+ struct device *dev;
>+ struct mvsw_fw_rev fw_rev;
>+ void *priv;
>+
>+ /* called by device driver to pass event up to the higher layer */
>+ int (*recv_msg)(struct mvsw_pr_device *dev, u8 *msg, size_t size);
>+
>+ /* called by higher layer to send request to the firmware */
>+ int (*send_req)(struct mvsw_pr_device *dev, u8 *in_msg,
>+ size_t in_size, u8 *out_msg, size_t out_size,
>+ unsigned int wait);
>+};
>+
>+enum mvsw_pr_event_type {
>+ MVSW_EVENT_TYPE_UNSPEC,
>+ MVSW_EVENT_TYPE_PORT,
>+ MVSW_EVENT_TYPE_FDB,
>+
>+ MVSW_EVENT_TYPE_MAX,
>+};
>+
>+enum mvsw_pr_port_event_id {
>+ MVSW_PORT_EVENT_UNSPEC,
>+ MVSW_PORT_EVENT_STATE_CHANGED,
>+
>+ MVSW_PORT_EVENT_MAX,
>+};
>+
>+enum mvsw_pr_fdb_event_id {
>+ MVSW_FDB_EVENT_UNSPEC,
>+ MVSW_FDB_EVENT_LEARNED,
>+ MVSW_FDB_EVENT_AGED,
>+
>+ MVSW_FDB_EVENT_MAX,
>+};
>+
>+struct mvsw_pr_fdb_event {
>+ u32 port_id;
>+ u32 vid;
>+ union {
>+ u8 mac[ETH_ALEN];
>+ } data;
>+};
>+
>+struct mvsw_pr_port_event {
>+ u32 port_id;
>+ union {
>+ u32 oper_state;
>+ } data;
>+};
>+
>+struct mvsw_pr_event {
>+ u16 id;
>+ union {
>+ struct mvsw_pr_port_event port_evt;
>+ struct mvsw_pr_fdb_event fdb_evt;
>+ };
>+};
>+
>+struct mvsw_pr_bridge;
>+
>+struct mvsw_pr_switch {
>+ struct list_head list;
>+ struct mvsw_pr_device *dev;
>+ struct list_head event_handlers;
>+ char base_mac[ETH_ALEN];
>+ struct list_head port_list;
>+ u32 port_count;
>+ u32 mtu_min;
>+ u32 mtu_max;
>+ u8 id;
>+ struct mvsw_pr_bridge *bridge;
>+ struct mvsw_pr_switchdev *switchdev;
>+ struct mvsw_pr_fib *fib;
>+ struct notifier_block netdevice_nb;
>+};
>+
>+enum mvsw_pr_fdb_flush_mode {
>+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC = BIT(0),
>+ MVSW_PR_FDB_FLUSH_MODE_STATIC = BIT(1),
>+ MVSW_PR_FDB_FLUSH_MODE_ALL = MVSW_PR_FDB_FLUSH_MODE_DYNAMIC
>+ | MVSW_PR_FDB_FLUSH_MODE_STATIC,
>+};
>+
>+int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time);
>+
>+int mvsw_pr_port_learning_set(struct mvsw_pr_port *mvsw_pr_port,
>+ bool learn_enable);
>+int mvsw_pr_port_flood_set(struct mvsw_pr_port *mvsw_pr_port, bool flood);
>+int mvsw_pr_port_pvid_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
>+struct mvsw_pr_port_vlan *
>+mvsw_pr_port_vlan_create(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
>+void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
>+int mvsw_pr_port_vlan_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid,
>+ bool is_member, bool untagged);
>+
>+int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id);
>+int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id);
>+int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *mvsw_pr_port,
>+ u16 bridge_id);
>+int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *mvsw_pr_port,
>+ u16 bridge_id);
>+
>+int mvsw_pr_fdb_add(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
>+ u16 vid, bool dynamic);
>+int mvsw_pr_fdb_del(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
>+ u16 vid);
>+int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
>+ enum mvsw_pr_fdb_flush_mode mode);
>+int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
>+ enum mvsw_pr_fdb_flush_mode mode);
>+int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
>+ enum mvsw_pr_fdb_flush_mode mode);
>+
>+struct mvsw_pr_port_vlan *
>+mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *mvsw_pr_port, u16 vid);
>+void
>+mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
>+
>+int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw);
>+void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw);
>+
>+int mvsw_pr_device_register(struct mvsw_pr_device *dev);
>+void mvsw_pr_device_unregister(struct mvsw_pr_device *dev);
>+
>+bool mvsw_pr_netdev_check(const struct net_device *dev);
>+struct mvsw_pr_port *mvsw_pr_port_dev_lower_find(struct net_device *dev);
>+
>+const struct mvsw_pr_port *mvsw_pr_port_find(u32 dev_hw_id, u32 port_hw_id);
>+
>+#endif /* _MVSW_PRESTERA_H_ */
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
>new file mode 100644
>index 000000000000..d6617a16d7e1
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
>@@ -0,0 +1,23 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+#ifndef _PRESTERA_DRV_VER_H_
>+#define _PRESTERA_DRV_VER_H_
>+
>+#include <linux/stringify.h>
>+
>+/* Prestera driver version */
>+#define PRESTERA_DRV_VER_MAJOR 1
>+#define PRESTERA_DRV_VER_MINOR 0
>+#define PRESTERA_DRV_VER_PATCH 0
>+#define PRESTERA_DRV_VER_EXTRA
>+
>+#define PRESTERA_DRV_VER \
>+ __stringify(PRESTERA_DRV_VER_MAJOR) "." \
>+ __stringify(PRESTERA_DRV_VER_MINOR) "." \
>+ __stringify(PRESTERA_DRV_VER_PATCH) \
>+ __stringify(PRESTERA_DRV_VER_EXTRA)
>+
>+#endif /* _PRESTERA_DRV_VER_H_ */
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.c b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
>new file mode 100644
>index 000000000000..c97bafdd734e
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
>@@ -0,0 +1,1094 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+#include <linux/etherdevice.h>
>+#include <linux/ethtool.h>
>+#include <linux/netdevice.h>
>+#include <linux/list.h>
>+
>+#include "prestera.h"
>+#include "prestera_hw.h"
>+
>+#define MVSW_PR_INIT_TIMEOUT 30000000 /* 30sec */
>+#define MVSW_PR_MIN_MTU 64
>+
>+enum mvsw_msg_type {
>+ MVSW_MSG_TYPE_SWITCH_UNSPEC,
>+ MVSW_MSG_TYPE_SWITCH_INIT,
>+
>+ MVSW_MSG_TYPE_AGEING_TIMEOUT_SET,
>+
>+ MVSW_MSG_TYPE_PORT_ATTR_SET,
>+ MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ MVSW_MSG_TYPE_PORT_INFO_GET,
>+
>+ MVSW_MSG_TYPE_VLAN_CREATE,
>+ MVSW_MSG_TYPE_VLAN_DELETE,
>+ MVSW_MSG_TYPE_VLAN_PORT_SET,
>+ MVSW_MSG_TYPE_VLAN_PVID_SET,
>+
>+ MVSW_MSG_TYPE_FDB_ADD,
>+ MVSW_MSG_TYPE_FDB_DELETE,
>+ MVSW_MSG_TYPE_FDB_FLUSH_PORT,
>+ MVSW_MSG_TYPE_FDB_FLUSH_VLAN,
>+ MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN,
>+
>+ MVSW_MSG_TYPE_LOG_LEVEL_SET,
>+
>+ MVSW_MSG_TYPE_BRIDGE_CREATE,
>+ MVSW_MSG_TYPE_BRIDGE_DELETE,
>+ MVSW_MSG_TYPE_BRIDGE_PORT_ADD,
>+ MVSW_MSG_TYPE_BRIDGE_PORT_DELETE,
>+
>+ MVSW_MSG_TYPE_ACK,
>+ MVSW_MSG_TYPE_MAX
>+};
>+
>+enum mvsw_msg_port_attr {
>+ MVSW_MSG_PORT_ATTR_ADMIN_STATE,
>+ MVSW_MSG_PORT_ATTR_OPER_STATE,
>+ MVSW_MSG_PORT_ATTR_MTU,
>+ MVSW_MSG_PORT_ATTR_MAC,
>+ MVSW_MSG_PORT_ATTR_SPEED,
>+ MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
>+ MVSW_MSG_PORT_ATTR_LEARNING,
>+ MVSW_MSG_PORT_ATTR_FLOOD,
>+ MVSW_MSG_PORT_ATTR_CAPABILITY,
>+ MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
>+ MVSW_MSG_PORT_ATTR_LINK_MODE,
>+ MVSW_MSG_PORT_ATTR_TYPE,
>+ MVSW_MSG_PORT_ATTR_FEC,
>+ MVSW_MSG_PORT_ATTR_AUTONEG,
>+ MVSW_MSG_PORT_ATTR_DUPLEX,
>+ MVSW_MSG_PORT_ATTR_STATS,
>+ MVSW_MSG_PORT_ATTR_MDIX,
>+ MVSW_MSG_PORT_ATTR_MAX
>+};
>+
>+enum {
>+ MVSW_MSG_ACK_OK,
>+ MVSW_MSG_ACK_FAILED,
>+ MVSW_MSG_ACK_MAX
>+};
>+
>+enum {
>+ MVSW_MODE_FORCED_MDI,
>+ MVSW_MODE_FORCED_MDIX,
>+ MVSW_MODE_AUTO_MDI,
>+ MVSW_MODE_AUTO_MDIX,
>+ MVSW_MODE_AUTO
>+};
>+
>+enum {
>+ MVSW_PORT_GOOD_OCTETS_RCV_CNT,
>+ MVSW_PORT_BAD_OCTETS_RCV_CNT,
>+ MVSW_PORT_MAC_TRANSMIT_ERR_CNT,
>+ MVSW_PORT_BRDC_PKTS_RCV_CNT,
>+ MVSW_PORT_MC_PKTS_RCV_CNT,
>+ MVSW_PORT_PKTS_64_OCTETS_CNT,
>+ MVSW_PORT_PKTS_65TO127_OCTETS_CNT,
>+ MVSW_PORT_PKTS_128TO255_OCTETS_CNT,
>+ MVSW_PORT_PKTS_256TO511_OCTETS_CNT,
>+ MVSW_PORT_PKTS_512TO1023_OCTETS_CNT,
>+ MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT,
>+ MVSW_PORT_EXCESSIVE_COLLISIONS_CNT,
>+ MVSW_PORT_MC_PKTS_SENT_CNT,
>+ MVSW_PORT_BRDC_PKTS_SENT_CNT,
>+ MVSW_PORT_FC_SENT_CNT,
>+ MVSW_PORT_GOOD_FC_RCV_CNT,
>+ MVSW_PORT_DROP_EVENTS_CNT,
>+ MVSW_PORT_UNDERSIZE_PKTS_CNT,
>+ MVSW_PORT_FRAGMENTS_PKTS_CNT,
>+ MVSW_PORT_OVERSIZE_PKTS_CNT,
>+ MVSW_PORT_JABBER_PKTS_CNT,
>+ MVSW_PORT_MAC_RCV_ERROR_CNT,
>+ MVSW_PORT_BAD_CRC_CNT,
>+ MVSW_PORT_COLLISIONS_CNT,
>+ MVSW_PORT_LATE_COLLISIONS_CNT,
>+ MVSW_PORT_GOOD_UC_PKTS_RCV_CNT,
>+ MVSW_PORT_GOOD_UC_PKTS_SENT_CNT,
>+ MVSW_PORT_MULTIPLE_PKTS_SENT_CNT,
>+ MVSW_PORT_DEFERRED_PKTS_SENT_CNT,
>+ MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT,
>+ MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT,
>+ MVSW_PORT_GOOD_OCTETS_SENT_CNT,
>+ MVSW_PORT_CNT_MAX,
>+};
>+
>+struct mvsw_msg_cmd {
>+ u32 type;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_ret {
>+ struct mvsw_msg_cmd cmd;
>+ u32 status;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_common_request {
>+ struct mvsw_msg_cmd cmd;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_common_response {
>+ struct mvsw_msg_ret ret;
>+} __packed __aligned(4);
>+
>+union mvsw_msg_switch_param {
>+ u32 ageing_timeout;
>+};
>+
>+struct mvsw_msg_switch_attr_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ union mvsw_msg_switch_param param;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_switch_init_ret {
>+ struct mvsw_msg_ret ret;
>+ u32 port_count;
>+ u32 mtu_max;
>+ u8 switch_id;
>+ u8 mac[ETH_ALEN];
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_port_autoneg_param {
>+ u64 link_mode;
>+ u8 enable;
>+ u8 fec;
>+};
>+
>+struct mvsw_msg_port_cap_param {
>+ u64 link_mode;
>+ u8 type;
>+ u8 fec;
>+ u8 transceiver;
>+};
>+
>+union mvsw_msg_port_param {
>+ u8 admin_state;
>+ u8 oper_state;
>+ u32 mtu;
>+ u8 mac[ETH_ALEN];
>+ u8 accept_frm_type;
>+ u8 learning;
>+ u32 speed;
>+ u8 flood;
>+ u32 link_mode;
>+ u8 type;
>+ u8 duplex;
>+ u8 fec;
>+ u8 mdix;
>+ struct mvsw_msg_port_autoneg_param autoneg;
>+ struct mvsw_msg_port_cap_param cap;
>+};
>+
>+struct mvsw_msg_port_attr_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ u32 attr;
>+ u32 port;
>+ u32 dev;
>+ union mvsw_msg_port_param param;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_port_attr_ret {
>+ struct mvsw_msg_ret ret;
>+ union mvsw_msg_port_param param;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_port_stats_ret {
>+ struct mvsw_msg_ret ret;
>+ u64 stats[MVSW_PORT_CNT_MAX];
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_port_info_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ u32 port;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_port_info_ret {
>+ struct mvsw_msg_ret ret;
>+ u32 hw_id;
>+ u32 dev_id;
>+ u16 fp_id;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_vlan_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ u32 port;
>+ u32 dev;
>+ u16 vid;
>+ u8 is_member;
>+ u8 is_tagged;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_fdb_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ u32 port;
>+ u32 dev;
>+ u8 mac[ETH_ALEN];
>+ u16 vid;
>+ u8 dynamic;
>+ u32 flush_mode;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_event {
>+ u16 type;
>+ u16 id;
>+} __packed __aligned(4);
>+
>+union mvsw_msg_event_fdb_param {
>+ u8 mac[ETH_ALEN];
>+};
>+
>+struct mvsw_msg_event_fdb {
>+ struct mvsw_msg_event id;
>+ u32 port_id;
>+ u32 vid;
>+ union mvsw_msg_event_fdb_param param;
>+} __packed __aligned(4);
>+
>+union mvsw_msg_event_port_param {
>+ u32 oper_state;
>+};
>+
>+struct mvsw_msg_event_port {
>+ struct mvsw_msg_event id;
>+ u32 port_id;
>+ union mvsw_msg_event_port_param param;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_bridge_cmd {
>+ struct mvsw_msg_cmd cmd;
>+ u32 port;
>+ u32 dev;
>+ u16 bridge;
>+} __packed __aligned(4);
>+
>+struct mvsw_msg_bridge_ret {
>+ struct mvsw_msg_ret ret;
>+ u16 bridge;
>+} __packed __aligned(4);
>+
>+#define fw_check_resp(_response) \
>+({ \
>+ int __er = 0; \
>+ typeof(_response) __r = (_response); \
>+ if (__r->ret.cmd.type != MVSW_MSG_TYPE_ACK) \
>+ __er = -EBADE; \
>+ else if (__r->ret.status != MVSW_MSG_ACK_OK) \
>+ __er = -EINVAL; \
>+ (__er); \
>+})
>+
>+#define __fw_send_req_resp(_switch, _type, _request, _response, _wait) \

Please avoid implementing functions as macros, like this one and the
previous one.


>+({ \
>+ int __e; \
>+ typeof(_switch) __sw = (_switch); \
>+ typeof(_request) __req = (_request); \
>+ typeof(_response) __resp = (_response); \
>+ __req->cmd.type = (_type); \
>+ __e = __sw->dev->send_req(__sw->dev, \
>+ (u8 *)__req, sizeof(*__req), \
>+ (u8 *)__resp, sizeof(*__resp), \
>+ _wait); \
>+ if (!__e) \
>+ __e = fw_check_resp(__resp); \
>+ (__e); \
>+})
>+
>+#define fw_send_req_resp(_sw, _t, _req, _resp) \
>+ __fw_send_req_resp(_sw, _t, _req, _resp, 0)
>+
>+#define fw_send_req_resp_wait(_sw, _t, _req, _resp, _wait) \
>+ __fw_send_req_resp(_sw, _t, _req, _resp, _wait)
>+
>+#define fw_send_req(_sw, _t, _req) \

This should be a function, not a define.


>+({ \
>+ struct mvsw_msg_common_response __re; \
>+ (fw_send_req_resp(_sw, _t, _req, &__re)); \
>+})
>+
>+struct mvsw_fw_event_handler {
>+ struct list_head list;
>+ enum mvsw_pr_event_type type;
>+ void (*func)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt);
>+};
>+
>+static int fw_parse_port_evt(u8 *msg, struct mvsw_pr_event *evt)
>+{
>+ struct mvsw_msg_event_port *hw_evt = (struct mvsw_msg_event_port *)msg;
>+
>+ evt->port_evt.port_id = hw_evt->port_id;
>+
>+ if (evt->id == MVSW_PORT_EVENT_STATE_CHANGED)
>+ evt->port_evt.data.oper_state = hw_evt->param.oper_state;
>+ else
>+ return -EINVAL;
>+
>+ return 0;
>+}
>+
>+static int fw_parse_fdb_evt(u8 *msg, struct mvsw_pr_event *evt)
>+{
>+ struct mvsw_msg_event_fdb *hw_evt = (struct mvsw_msg_event_fdb *)msg;
>+
>+ evt->fdb_evt.port_id = hw_evt->port_id;
>+ evt->fdb_evt.vid = hw_evt->vid;
>+
>+ memcpy(&evt->fdb_evt.data, &hw_evt->param, sizeof(u8) * ETH_ALEN);
>+
>+ return 0;
>+}
>+
>+struct mvsw_fw_evt_parser {
>+ int (*func)(u8 *msg, struct mvsw_pr_event *evt);
>+};
>+
>+static struct mvsw_fw_evt_parser fw_event_parsers[MVSW_EVENT_TYPE_MAX] = {
>+ [MVSW_EVENT_TYPE_PORT] = {.func = fw_parse_port_evt},
>+ [MVSW_EVENT_TYPE_FDB] = {.func = fw_parse_fdb_evt},
>+};
>+
>+static struct mvsw_fw_event_handler *
>+__find_event_handler(const struct mvsw_pr_switch *sw,

Maintain the function prefix even for helpers like this one:
__prestera_fw_find_event_handler()


>+ enum mvsw_pr_event_type type)
>+{
>+ struct mvsw_fw_event_handler *eh;
>+
>+ list_for_each_entry_rcu(eh, &sw->event_handlers, list) {
>+ if (eh->type == type)
>+ return eh;
>+ }
>+
>+ return NULL;
>+}
>+
>+static int fw_event_recv(struct mvsw_pr_device *dev, u8 *buf, size_t size)
>+{
>+ void (*cb)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt) = NULL;

Please typedef this and use it in the struct mvsw_fw_event_handler
definition as well.


>+ struct mvsw_msg_event *msg = (struct mvsw_msg_event *)buf;
>+ struct mvsw_pr_switch *sw = dev->priv;
>+ struct mvsw_fw_event_handler *eh;
>+ struct mvsw_pr_event evt;
>+ int err;
>+
>+ if (msg->type >= MVSW_EVENT_TYPE_MAX)
>+ return -EINVAL;
>+
>+ rcu_read_lock();
>+ eh = __find_event_handler(sw, msg->type);
>+ if (eh)
>+ cb = eh->func;
>+ rcu_read_unlock();
>+
>+ if (!cb || !fw_event_parsers[msg->type].func)
>+ return 0;
>+
>+ evt.id = msg->id;
>+
>+ err = fw_event_parsers[msg->type].func(buf, &evt);
>+ if (!err)
>+ cb(sw, &evt);
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,

The prefix should rather be something like:
"prestera_hw_"


>+ u16 *fp_id, u32 *hw_id, u32 *dev_id)
>+{
>+ struct mvsw_msg_port_info_ret resp;
>+ struct mvsw_msg_port_info_cmd req = {
>+ .port = port->id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_INFO_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *hw_id = resp.hw_id;
>+ *dev_id = resp.dev_id;
>+ *fp_id = resp.fp_id;
>+
>+ return 0;
>+}
>+
>+int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw)
>+{
>+ struct mvsw_msg_switch_init_ret resp;
>+ struct mvsw_msg_common_request req;
>+ int err = 0;

Pointless init; err is assigned a few lines below anyway.


>+
>+ INIT_LIST_HEAD(&sw->event_handlers);
>+
>+ err = fw_send_req_resp_wait(sw, MVSW_MSG_TYPE_SWITCH_INIT, &req, &resp,
>+ MVSW_PR_INIT_TIMEOUT);
>+ if (err)
>+ return err;
>+
>+ sw->id = resp.switch_id;

What is this "switch_id"? A u8 does not look like something globally
unique. Rather use the base MAC address, for example.


>+ sw->port_count = resp.port_count;
>+ sw->mtu_min = MVSW_PR_MIN_MTU;
>+ sw->mtu_max = resp.mtu_max;
>+ sw->dev->recv_msg = fw_event_recv;
>+ memcpy(sw->base_mac, resp.mac, ETH_ALEN);
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
>+ u32 ageing_time)
>+{
>+ struct mvsw_msg_switch_attr_cmd req = {
>+ .param = {.ageing_timeout = ageing_time}
>+ };
>+
>+ return fw_send_req(sw, MVSW_MSG_TYPE_AGEING_TIMEOUT_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
>+ bool admin_state)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.admin_state = admin_state ? 1 : 0}

Just do:
.admin_state = admin_state


>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
>+ bool *admin_state, bool *oper_state)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ if (admin_state) {
>+ req.attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE;
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+ *admin_state = resp.param.admin_state != 0;
>+ }
>+
>+ if (oper_state) {
>+ req.attr = MVSW_MSG_PORT_ATTR_OPER_STATE;
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+ *oper_state = resp.param.oper_state != 0;
>+ }
>+
>+ return 0;
>+}
>+
>+int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MTU,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.mtu = mtu}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MTU,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *mtu = resp.param.mtu;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MAC,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ memcpy(&req.param.mac, mac, sizeof(req.param.mac));
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MAC,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ memcpy(mac, resp.param.mac, sizeof(resp.param.mac));
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
>+ enum mvsw_pr_accept_frame_type type)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.accept_frm_type = type}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_LEARNING,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.learning = enable ? 1 : 0}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
>+ enum mvsw_pr_event_type type,
>+ void (*cb)(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt))
>+{
>+ struct mvsw_fw_event_handler *eh;
>+
>+ eh = __find_event_handler(sw, type);
>+ if (eh)
>+ return -EEXIST;
>+ eh = kmalloc(sizeof(*eh), GFP_KERNEL);
>+ if (!eh)
>+ return -ENOMEM;
>+
>+ eh->type = type;
>+ eh->func = cb;
>+
>+ INIT_LIST_HEAD(&eh->list);
>+
>+ list_add_rcu(&eh->list, &sw->event_handlers);
>+
>+ return 0;
>+}
>+
>+void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
>+ enum mvsw_pr_event_type type,
>+ void (*cb)(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt))
>+{
>+ struct mvsw_fw_event_handler *eh;
>+
>+ eh = __find_event_handler(sw, type);
>+ if (!eh)
>+ return;
>+
>+ list_del_rcu(&eh->list);
>+ synchronize_rcu();
>+ kfree(eh);
>+}
>+
>+int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid)
>+{
>+ struct mvsw_msg_vlan_cmd req = {
>+ .vid = vid,
>+ };
>+
>+ return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_CREATE, &req);
>+}
>+
>+int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid)
>+{
>+ struct mvsw_msg_vlan_cmd req = {
>+ .vid = vid,
>+ };
>+
>+ return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_DELETE, &req);
>+}
>+
>+int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
>+ u16 vid, bool is_member, bool untagged)
>+{
>+ struct mvsw_msg_vlan_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .vid = vid,
>+ .is_member = is_member ? 1 : 0,
>+ .is_tagged = untagged ? 0 : 1
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PORT_SET, &req);
>+}
>+
>+int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid)
>+{
>+ struct mvsw_msg_vlan_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .vid = vid
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PVID_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_SPEED,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *speed = resp.param.speed;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_FLOOD,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.flood = flood ? 1 : 0}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
>+ const unsigned char *mac, u16 vid, bool dynamic)
>+{
>+ struct mvsw_msg_fdb_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .vid = vid,
>+ .dynamic = dynamic ? 1 : 0
>+ };
>+
>+ memcpy(req.mac, mac, sizeof(req.mac));
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_ADD, &req);
>+}
>+
>+int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
>+ const unsigned char *mac, u16 vid)
>+{
>+ struct mvsw_msg_fdb_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .vid = vid
>+ };
>+
>+ memcpy(req.mac, mac, sizeof(req.mac));
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_DELETE, &req);
>+}
>+
>+int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
>+ struct mvsw_pr_port_caps *caps)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_CAPABILITY,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ caps->supp_link_modes = resp.param.cap.link_mode;
>+ caps->supp_fec = resp.param.cap.fec;
>+ caps->type = resp.param.cap.type;
>+ caps->transceiver = resp.param.cap.transceiver;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
>+ u64 *link_mode_bitmap)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *link_mode_bitmap = resp.param.cap.link_mode;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MDIX,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ switch (resp.param.mdix) {
>+ case MVSW_MODE_FORCED_MDI:
>+ case MVSW_MODE_AUTO_MDI:
>+ *mode = ETH_TP_MDI;
>+ break;
>+
>+ case MVSW_MODE_FORCED_MDIX:
>+ case MVSW_MODE_AUTO_MDIX:
>+ *mode = ETH_TP_MDI_X;
>+ break;
>+
>+ default:
>+ return -EINVAL;
>+ }
>+
>+ return 0;
>+}
>+
>+int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_MDIX,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+
>+ switch (mode) {
>+ case ETH_TP_MDI:
>+ req.param.mdix = MVSW_MODE_FORCED_MDI;
>+ break;
>+
>+ case ETH_TP_MDI_X:
>+ req.param.mdix = MVSW_MODE_FORCED_MDIX;
>+ break;
>+
>+ case ETH_TP_MDI_AUTO:
>+ req.param.mdix = MVSW_MODE_AUTO;
>+ break;
>+
>+ default:
>+ return -EINVAL;
>+ }
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_TYPE,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *type = resp.param.type;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_FEC,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *fec = resp.param.fec;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_FEC,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.fec = fec}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
>+ bool autoneg, u64 link_modes, u8 fec)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_AUTONEG,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.autoneg = {.link_mode = link_modes,
>+ .enable = autoneg ? 1 : 0,

You can do just:
.enable = autoneg;


>+ .fec = fec}
>+ }
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>+
>+int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_DUPLEX,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *duplex = resp.param.duplex;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
>+ struct mvsw_pr_port_stats *stats)
>+{
>+ struct mvsw_msg_port_stats_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_STATS,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ u64 *hw_val = resp.stats;
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ stats->good_octets_received = hw_val[MVSW_PORT_GOOD_OCTETS_RCV_CNT];
>+ stats->bad_octets_received = hw_val[MVSW_PORT_BAD_OCTETS_RCV_CNT];
>+ stats->mac_trans_error = hw_val[MVSW_PORT_MAC_TRANSMIT_ERR_CNT];
>+ stats->broadcast_frames_received = hw_val[MVSW_PORT_BRDC_PKTS_RCV_CNT];
>+ stats->multicast_frames_received = hw_val[MVSW_PORT_MC_PKTS_RCV_CNT];
>+ stats->frames_64_octets = hw_val[MVSW_PORT_PKTS_64_OCTETS_CNT];
>+ stats->frames_65_to_127_octets =
>+ hw_val[MVSW_PORT_PKTS_65TO127_OCTETS_CNT];
>+ stats->frames_128_to_255_octets =
>+ hw_val[MVSW_PORT_PKTS_128TO255_OCTETS_CNT];
>+ stats->frames_256_to_511_octets =
>+ hw_val[MVSW_PORT_PKTS_256TO511_OCTETS_CNT];
>+ stats->frames_512_to_1023_octets =
>+ hw_val[MVSW_PORT_PKTS_512TO1023_OCTETS_CNT];
>+ stats->frames_1024_to_max_octets =
>+ hw_val[MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT];
>+ stats->excessive_collision = hw_val[MVSW_PORT_EXCESSIVE_COLLISIONS_CNT];
>+ stats->multicast_frames_sent = hw_val[MVSW_PORT_MC_PKTS_SENT_CNT];
>+ stats->broadcast_frames_sent = hw_val[MVSW_PORT_BRDC_PKTS_SENT_CNT];
>+ stats->fc_sent = hw_val[MVSW_PORT_FC_SENT_CNT];
>+ stats->fc_received = hw_val[MVSW_PORT_GOOD_FC_RCV_CNT];
>+ stats->buffer_overrun = hw_val[MVSW_PORT_DROP_EVENTS_CNT];
>+ stats->undersize = hw_val[MVSW_PORT_UNDERSIZE_PKTS_CNT];
>+ stats->fragments = hw_val[MVSW_PORT_FRAGMENTS_PKTS_CNT];
>+ stats->oversize = hw_val[MVSW_PORT_OVERSIZE_PKTS_CNT];
>+ stats->jabber = hw_val[MVSW_PORT_JABBER_PKTS_CNT];
>+ stats->rx_error_frame_received = hw_val[MVSW_PORT_MAC_RCV_ERROR_CNT];
>+ stats->bad_crc = hw_val[MVSW_PORT_BAD_CRC_CNT];
>+ stats->collisions = hw_val[MVSW_PORT_COLLISIONS_CNT];
>+ stats->late_collision = hw_val[MVSW_PORT_LATE_COLLISIONS_CNT];
>+ stats->unicast_frames_received = hw_val[MVSW_PORT_GOOD_UC_PKTS_RCV_CNT];
>+ stats->unicast_frames_sent = hw_val[MVSW_PORT_GOOD_UC_PKTS_SENT_CNT];
>+ stats->sent_multiple = hw_val[MVSW_PORT_MULTIPLE_PKTS_SENT_CNT];
>+ stats->sent_deferred = hw_val[MVSW_PORT_DEFERRED_PKTS_SENT_CNT];
>+ stats->frames_1024_to_1518_octets =
>+ hw_val[MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT];
>+ stats->frames_1519_to_max_octets =
>+ hw_val[MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT];
>+ stats->good_octets_sent = hw_val[MVSW_PORT_GOOD_OCTETS_SENT_CNT];
>+
>+ return 0;
>+}
>+
>+int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id)
>+{
>+ struct mvsw_msg_bridge_cmd req;
>+ struct mvsw_msg_bridge_ret resp;
>+ int err;
>+
>+ err = fw_send_req_resp(sw, MVSW_MSG_TYPE_BRIDGE_CREATE, &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *bridge_id = resp.bridge;
>+ return err;
>+}
>+
>+int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id)
>+{
>+ struct mvsw_msg_bridge_cmd req = {
>+ .bridge = bridge_id
>+ };
>+
>+ return fw_send_req(sw, MVSW_MSG_TYPE_BRIDGE_DELETE, &req);
>+}
>+
>+int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id)
>+{
>+ struct mvsw_msg_bridge_cmd req = {
>+ .bridge = bridge_id,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_ADD, &req);
>+}
>+
>+int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
>+ u16 bridge_id)
>+{
>+ struct mvsw_msg_bridge_cmd req = {
>+ .bridge = bridge_id,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_DELETE, &req);
>+}
>+
>+int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode)
>+{
>+ struct mvsw_msg_fdb_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .flush_mode = mode,
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT, &req);
>+}
>+
>+int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
>+ u32 mode)
>+{
>+ struct mvsw_msg_fdb_cmd req = {
>+ .vid = vid,
>+ .flush_mode = mode,
>+ };
>+
>+ return fw_send_req(sw, MVSW_MSG_TYPE_FDB_FLUSH_VLAN, &req);
>+}
>+
>+int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
>+ u32 mode)
>+{
>+ struct mvsw_msg_fdb_cmd req = {
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .vid = vid,
>+ .flush_mode = mode,
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN, &req);
>+}
>+
>+int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
>+ u32 *mode)
>+{
>+ struct mvsw_msg_port_attr_ret resp;
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
>+ .port = port->hw_id,
>+ .dev = port->dev_id
>+ };
>+ int err;
>+
>+ err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
>+ &req, &resp);
>+ if (err)
>+ return err;
>+
>+ *mode = resp.param.link_mode;
>+
>+ return err;
>+}
>+
>+int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
>+ u32 mode)
>+{
>+ struct mvsw_msg_port_attr_cmd req = {
>+ .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
>+ .port = port->hw_id,
>+ .dev = port->dev_id,
>+ .param = {.link_mode = mode}
>+ };
>+
>+ return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
>+}
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.h b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
>new file mode 100644
>index 000000000000..dfae2631160e
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
>@@ -0,0 +1,159 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+
>+#ifndef _MVSW_PRESTERA_HW_H_
>+#define _MVSW_PRESTERA_HW_H_
>+
>+#include <linux/types.h>
>+
>+enum mvsw_pr_accept_frame_type {
>+ MVSW_ACCEPT_FRAME_TYPE_TAGGED,
>+ MVSW_ACCEPT_FRAME_TYPE_UNTAGGED,
>+ MVSW_ACCEPT_FRAME_TYPE_ALL
>+};
>+
>+enum {
>+ MVSW_LINK_MODE_10baseT_Half_BIT,
>+ MVSW_LINK_MODE_10baseT_Full_BIT,
>+ MVSW_LINK_MODE_100baseT_Half_BIT,
>+ MVSW_LINK_MODE_100baseT_Full_BIT,
>+ MVSW_LINK_MODE_1000baseT_Half_BIT,
>+ MVSW_LINK_MODE_1000baseT_Full_BIT,
>+ MVSW_LINK_MODE_1000baseX_Full_BIT,
>+ MVSW_LINK_MODE_1000baseKX_Full_BIT,
>+ MVSW_LINK_MODE_10GbaseKR_Full_BIT,
>+ MVSW_LINK_MODE_10GbaseSR_Full_BIT,
>+ MVSW_LINK_MODE_10GbaseLR_Full_BIT,
>+ MVSW_LINK_MODE_20GbaseKR2_Full_BIT,
>+ MVSW_LINK_MODE_25GbaseCR_Full_BIT,
>+ MVSW_LINK_MODE_25GbaseKR_Full_BIT,
>+ MVSW_LINK_MODE_25GbaseSR_Full_BIT,
>+ MVSW_LINK_MODE_40GbaseKR4_Full_BIT,
>+ MVSW_LINK_MODE_40GbaseCR4_Full_BIT,
>+ MVSW_LINK_MODE_40GbaseSR4_Full_BIT,
>+ MVSW_LINK_MODE_50GbaseCR2_Full_BIT,
>+ MVSW_LINK_MODE_50GbaseKR2_Full_BIT,
>+ MVSW_LINK_MODE_50GbaseSR2_Full_BIT,
>+ MVSW_LINK_MODE_100GbaseKR4_Full_BIT,
>+ MVSW_LINK_MODE_100GbaseSR4_Full_BIT,
>+ MVSW_LINK_MODE_100GbaseCR4_Full_BIT,
>+ MVSW_LINK_MODE_MAX,
>+};
>+
>+enum {
>+ MVSW_PORT_TYPE_NONE,
>+ MVSW_PORT_TYPE_TP,
>+ MVSW_PORT_TYPE_AUI,
>+ MVSW_PORT_TYPE_MII,
>+ MVSW_PORT_TYPE_FIBRE,
>+ MVSW_PORT_TYPE_BNC,
>+ MVSW_PORT_TYPE_DA,
>+ MVSW_PORT_TYPE_OTHER,
>+ MVSW_PORT_TYPE_MAX,
>+};
>+
>+enum {
>+ MVSW_PORT_TRANSCEIVER_COPPER,
>+ MVSW_PORT_TRANSCEIVER_SFP,
>+ MVSW_PORT_TRANSCEIVER_MAX,
>+};
>+
>+enum {
>+ MVSW_PORT_FEC_OFF_BIT,
>+ MVSW_PORT_FEC_BASER_BIT,
>+ MVSW_PORT_FEC_RS_BIT,
>+ MVSW_PORT_FEC_MAX,
>+};
>+
>+enum {
>+ MVSW_PORT_DUPLEX_HALF,
>+ MVSW_PORT_DUPLEX_FULL
>+};
>+
>+struct mvsw_pr_switch;
>+struct mvsw_pr_port;

For consistency's sake, name these rather like this:
struct prestera;
struct prestera_port;

And use it as this for variables and args:

struct prestera *prestera;
struct prestera_port *port;

Or something like that. The point is, avoid "switch" and "sw".


>+struct mvsw_pr_port_stats;
>+struct mvsw_pr_port_caps;
>+
>+enum mvsw_pr_event_type;
>+struct mvsw_pr_event;
>+
>+/* Switch API */
>+int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw);
>+int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
>+ u32 ageing_time);
>+
>+/* Port API */
>+int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,
>+ u16 *fp_id, u32 *hw_id, u32 *dev_id);
>+int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
>+ bool admin_state);
>+int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
>+ bool *admin_state, bool *oper_state);
>+int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu);
>+int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu);
>+int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac);
>+int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac);
>+int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
>+ enum mvsw_pr_accept_frame_type type);
>+int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable);
>+int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed);
>+int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood);
>+int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
>+ struct mvsw_pr_port_caps *caps);
>+int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
>+ u64 *link_mode_bitmap);
>+int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type);
>+int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec);
>+int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec);
>+int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
>+ bool autoneg, u64 link_modes, u8 fec);
>+int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex);
>+int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
>+ struct mvsw_pr_port_stats *stats);
>+int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
>+ u32 *mode);
>+int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
>+ u32 mode);
>+int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode);
>+int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode);
>+
>+/* Vlan API */
>+int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid);
>+int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid);
>+int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
>+ u16 vid, bool is_member, bool untagged);
>+int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid);
>+
>+/* FDB API */
>+int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
>+ const unsigned char *mac, u16 vid, bool dynamic);
>+int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
>+ const unsigned char *mac, u16 vid);
>+int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode);
>+int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
>+ u32 mode);
>+int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
>+ u32 mode);
>+
>+/* Bridge API */
>+int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id);
>+int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id);
>+int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id);
>+int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
>+ u16 bridge_id);
>+
>+/* Event handlers */
>+int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
>+ enum mvsw_pr_event_type type,
>+ void (*cb)(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt));
>+void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
>+ enum mvsw_pr_event_type type,
>+ void (*cb)(struct mvsw_pr_switch *sw,
>+ struct mvsw_pr_event *evt));
>+
>+#endif /* _MVSW_PRESTERA_HW_H_ */
>diff --git a/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>new file mode 100644
>index 000000000000..18fa6bbe5ace
>--- /dev/null
>+++ b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>@@ -0,0 +1,1217 @@
>+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>+ *
>+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>+ *
>+ */
>+#include <linux/kernel.h>
>+#include <linux/module.h>
>+#include <linux/if_vlan.h>
>+#include <linux/if_bridge.h>
>+#include <linux/notifier.h>
>+#include <net/switchdev.h>
>+#include <net/netevent.h>
>+#include <net/vxlan.h>
>+
>+#include "prestera.h"
>+
>+struct mvsw_pr_bridge {
>+ struct mvsw_pr_switch *sw;
>+ u32 ageing_time;

Is the ageing time global for the ASIC? In Linux, it is per-bridge.


>+ struct list_head bridge_list;


>+ bool bridge_8021q_exists;
>+};
>+
>+struct mvsw_pr_bridge_device {
>+ struct net_device *dev;
>+ struct list_head bridge_node;
>+ struct list_head port_list;
>+ u16 bridge_id;
>+ u8 vlan_enabled:1, multicast_enabled:1, mrouter:1;
>+};
>+
>+struct mvsw_pr_bridge_port {
>+ struct net_device *dev;
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ struct list_head bridge_device_node;
>+ struct list_head vlan_list;
>+ unsigned int ref_count;
>+ u8 stp_state;
>+ unsigned long flags;
>+};
>+
>+struct mvsw_pr_bridge_vlan {
>+ struct list_head bridge_port_node;
>+ struct list_head port_vlan_list;
>+ u16 vid;
>+};
>+
>+struct mvsw_pr_event_work {
>+ struct work_struct work;
>+ struct switchdev_notifier_fdb_info fdb_info;
>+ struct net_device *dev;
>+ unsigned long event;
>+};
>+
>+static struct workqueue_struct *mvsw_owq;
>+
>+static struct mvsw_pr_bridge_port *
>+mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
>+ struct net_device *brport_dev);
>+
>+static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
>+ struct mvsw_pr_bridge_port *br_port);
>+
>+static struct mvsw_pr_bridge_device *
>+mvsw_pr_bridge_device_find(const struct mvsw_pr_bridge *bridge,
>+ const struct net_device *br_dev)
>+{
>+ struct mvsw_pr_bridge_device *bridge_device;
>+
>+ list_for_each_entry(bridge_device, &bridge->bridge_list,
>+ bridge_node)
>+ if (bridge_device->dev == br_dev)
>+ return bridge_device;
>+
>+ return NULL;
>+}
>+
>+static bool
>+mvsw_pr_bridge_device_is_offloaded(const struct mvsw_pr_switch *sw,
>+ const struct net_device *br_dev)
>+{
>+ return !!mvsw_pr_bridge_device_find(sw->bridge, br_dev);
>+}
>+
>+static struct mvsw_pr_bridge_port *
>+__mvsw_pr_bridge_port_find(const struct mvsw_pr_bridge_device *bridge_device,
>+ const struct net_device *brport_dev)
>+{
>+ struct mvsw_pr_bridge_port *br_port;
>+
>+ list_for_each_entry(br_port, &bridge_device->port_list,
>+ bridge_device_node) {
>+ if (br_port->dev == brport_dev)
>+ return br_port;
>+ }

No need for the "{}" around a single-statement loop body.


>+
>+ return NULL;
>+}
>+
>+static struct mvsw_pr_bridge_port *
>+mvsw_pr_bridge_port_find(struct mvsw_pr_bridge *bridge,
>+ struct net_device *brport_dev)
>+{
>+ struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
>+ struct mvsw_pr_bridge_device *bridge_device;
>+
>+ if (!br_dev)
>+ return NULL;
>+
>+ bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
>+ if (!bridge_device)
>+ return NULL;
>+
>+ return __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
>+}
>+
>+static struct mvsw_pr_bridge_vlan *
>+mvsw_pr_bridge_vlan_find(const struct mvsw_pr_bridge_port *br_port, u16 vid)
>+{
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+
>+ list_for_each_entry(br_vlan, &br_port->vlan_list, bridge_port_node) {
>+ if (br_vlan->vid == vid)
>+ return br_vlan;
>+ }
>+
>+ return NULL;
>+}
>+
>+static struct mvsw_pr_bridge_vlan *
>+mvsw_pr_bridge_vlan_create(struct mvsw_pr_bridge_port *br_port, u16 vid)
>+{
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+
>+ br_vlan = kzalloc(sizeof(*br_vlan), GFP_KERNEL);
>+ if (!br_vlan)
>+ return NULL;
>+
>+ INIT_LIST_HEAD(&br_vlan->port_vlan_list);
>+ br_vlan->vid = vid;
>+ list_add(&br_vlan->bridge_port_node, &br_port->vlan_list);
>+
>+ return br_vlan;
>+}
>+
>+static void
>+mvsw_pr_bridge_vlan_destroy(struct mvsw_pr_bridge_vlan *br_vlan)
>+{
>+ list_del(&br_vlan->bridge_port_node);
>+ WARN_ON(!list_empty(&br_vlan->port_vlan_list));
>+ kfree(br_vlan);
>+}
>+
>+static struct mvsw_pr_bridge_vlan *
>+mvsw_pr_bridge_vlan_get(struct mvsw_pr_bridge_port *br_port, u16 vid)
>+{
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+
>+ br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
>+ if (br_vlan)
>+ return br_vlan;
>+
>+ return mvsw_pr_bridge_vlan_create(br_port, vid);
>+}
>+
>+static void mvsw_pr_bridge_vlan_put(struct mvsw_pr_bridge_vlan *br_vlan)
>+{
>+ if (list_empty(&br_vlan->port_vlan_list))
>+ mvsw_pr_bridge_vlan_destroy(br_vlan);
>+}
>+
>+static int
>+mvsw_pr_port_vlan_bridge_join(struct mvsw_pr_port_vlan *port_vlan,
>+ struct mvsw_pr_bridge_port *br_port,
>+ struct netlink_ext_ack *extack)
>+{
>+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+ u16 vid = port_vlan->vid;
>+ int err;
>+
>+ if (port_vlan->bridge_port)
>+ return 0;
>+
>+ err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
>+ if (err)
>+ goto err_port_learning_set;
>+
>+ br_vlan = mvsw_pr_bridge_vlan_get(br_port, vid);
>+ if (!br_vlan) {
>+ err = -ENOMEM;
>+ goto err_bridge_vlan_get;
>+ }
>+
>+ list_add(&port_vlan->bridge_vlan_node, &br_vlan->port_vlan_list);
>+
>+ mvsw_pr_bridge_port_get(port->sw->bridge, br_port->dev);
>+ port_vlan->bridge_port = br_port;
>+
>+ return 0;
>+
>+err_bridge_vlan_get:
>+ mvsw_pr_port_learning_set(port, false);
>+err_port_learning_set:
>+ return err;
>+}
>+
>+static int
>+mvsw_pr_bridge_vlan_port_count_get(struct mvsw_pr_bridge_device *bridge_device,
>+ u16 vid)
>+{
>+ int count = 0;
>+ struct mvsw_pr_bridge_port *br_port;
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+
>+ list_for_each_entry(br_port, &bridge_device->port_list,
>+ bridge_device_node) {
>+ list_for_each_entry(br_vlan, &br_port->vlan_list,
>+ bridge_port_node) {
>+ if (br_vlan->vid == vid) {
>+ count += 1;
>+ break;
>+ }
>+ }
>+ }
>+
>+ return count;
>+}
>+
>+void
>+mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *port_vlan)
>+{
>+ struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
>+ struct mvsw_pr_bridge_vlan *br_vlan;
>+ struct mvsw_pr_bridge_port *br_port;
>+ int port_count;
>+ u16 vid = port_vlan->vid;
>+ bool last_port, last_vlan;
>+
>+ br_port = port_vlan->bridge_port;
>+ last_vlan = list_is_singular(&br_port->vlan_list);
>+ port_count =
>+ mvsw_pr_bridge_vlan_port_count_get(br_port->bridge_device, vid);
>+ br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
>+ last_port = port_count == 1;
>+ if (last_vlan) {
>+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
>+ } else if (last_port) {
>+ mvsw_pr_fdb_flush_vlan(port->sw, vid,
>+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
>+ } else {
>+ mvsw_pr_fdb_flush_port_vlan(port, vid,
>+ MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
>+ }
>+
>+ list_del(&port_vlan->bridge_vlan_node);
>+ mvsw_pr_bridge_vlan_put(br_vlan);
>+ mvsw_pr_bridge_port_put(port->sw->bridge, br_port);
>+ port_vlan->bridge_port = NULL;
>+}
>+
>+static int
>+mvsw_pr_bridge_port_vlan_add(struct mvsw_pr_port *port,
>+ struct mvsw_pr_bridge_port *br_port,
>+ u16 vid, bool is_untagged, bool is_pvid,
>+ struct netlink_ext_ack *extack)
>+{
>+ u16 pvid;
>+ struct mvsw_pr_port_vlan *port_vlan;
>+ u16 old_pvid = port->pvid;
>+ int err;
>+
>+ if (is_pvid)
>+ pvid = vid;
>+ else
>+ pvid = port->pvid == vid ? 0 : port->pvid;
>+
>+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
>+ if (port_vlan && port_vlan->bridge_port != br_port)
>+ return -EEXIST;
>+
>+ if (!port_vlan) {
>+ port_vlan = mvsw_pr_port_vlan_create(port, vid);
>+ if (IS_ERR(port_vlan))
>+ return PTR_ERR(port_vlan);
>+ }
>+
>+ err = mvsw_pr_port_vlan_set(port, vid, true, is_untagged);
>+ if (err)
>+ goto err_port_vlan_set;
>+
>+ err = mvsw_pr_port_pvid_set(port, pvid);
>+ if (err)
>+ goto err_port_pvid_set;
>+
>+ err = mvsw_pr_port_vlan_bridge_join(port_vlan, br_port, extack);
>+ if (err)
>+ goto err_port_vlan_bridge_join;
>+
>+ return 0;
>+
>+err_port_vlan_bridge_join:
>+ mvsw_pr_port_pvid_set(port, old_pvid);
>+err_port_pvid_set:
>+ mvsw_pr_port_vlan_set(port, vid, false, false);
>+err_port_vlan_set:
>+ mvsw_pr_port_vlan_destroy(port_vlan);
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_vlans_add(struct mvsw_pr_port *port,
>+ const struct switchdev_obj_port_vlan *vlan,
>+ struct switchdev_trans *trans,
>+ struct netlink_ext_ack *extack)
>+{
>+ bool flag_untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
>+ bool flag_pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
>+ struct net_device *orig_dev = vlan->obj.orig_dev;
>+ struct mvsw_pr_bridge_port *br_port;
>+ struct mvsw_pr_switch *sw = port->sw;
>+ u16 vid;
>+
>+ if (netif_is_bridge_master(orig_dev))
>+ return 0;
>+
>+ if (switchdev_trans_ph_commit(trans))
>+ return 0;
>+
>+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
>+ if (WARN_ON(!br_port))
>+ return -EINVAL;
>+
>+ if (!br_port->bridge_device->vlan_enabled)
>+ return 0;
>+
>+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
>+ int err;
>+
>+ err = mvsw_pr_bridge_port_vlan_add(port, br_port,
>+ vid, flag_untagged,
>+ flag_pvid, extack);
>+ if (err)
>+ return err;
>+ }
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_obj_add(struct net_device *dev,
>+ const struct switchdev_obj *obj,
>+ struct switchdev_trans *trans,
>+ struct netlink_ext_ack *extack)
>+{
>+ int err = 0;
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+ const struct switchdev_obj_port_vlan *vlan;
>+
>+ switch (obj->id) {
>+ case SWITCHDEV_OBJ_ID_PORT_VLAN:
>+ vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
>+ err = mvsw_pr_port_vlans_add(port, vlan, trans, extack);
>+ break;
>+ default:
>+ err = -EOPNOTSUPP;
>+ }
>+
>+ return err;
>+}
>+
>+static void
>+mvsw_pr_bridge_port_vlan_del(struct mvsw_pr_port *port,
>+ struct mvsw_pr_bridge_port *br_port, u16 vid)
>+{
>+ u16 pvid = port->pvid == vid ? 0 : port->pvid;
>+ struct mvsw_pr_port_vlan *port_vlan;
>+
>+ port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
>+ if (WARN_ON(!port_vlan))
>+ return;
>+
>+ mvsw_pr_port_vlan_bridge_leave(port_vlan);
>+ mvsw_pr_port_pvid_set(port, pvid);
>+ mvsw_pr_port_vlan_destroy(port_vlan);
>+}
>+
>+static int mvsw_pr_port_vlans_del(struct mvsw_pr_port *port,
>+ const struct switchdev_obj_port_vlan *vlan)
>+{
>+ struct mvsw_pr_switch *sw = port->sw;
>+ struct net_device *orig_dev = vlan->obj.orig_dev;
>+ struct mvsw_pr_bridge_port *br_port;
>+ u16 vid;
>+
>+ if (netif_is_bridge_master(orig_dev))
>+ return -EOPNOTSUPP;
>+
>+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
>+ if (WARN_ON(!br_port))
>+ return -EINVAL;
>+
>+ if (!br_port->bridge_device->vlan_enabled)
>+ return 0;
>+
>+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++)
>+ mvsw_pr_bridge_port_vlan_del(port, br_port, vid);
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_obj_del(struct net_device *dev,
>+ const struct switchdev_obj *obj)
>+{
>+ int err = 0;
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+
>+ switch (obj->id) {
>+ case SWITCHDEV_OBJ_ID_PORT_VLAN:
>+ err = mvsw_pr_port_vlans_del(port,
>+ SWITCHDEV_OBJ_PORT_VLAN(obj));
>+ break;
>+ default:
>+ err = -EOPNOTSUPP;
>+ break;
>+ }
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_attr_br_vlan_set(struct mvsw_pr_port *port,
>+ struct switchdev_trans *trans,
>+ struct net_device *orig_dev,
>+ bool vlan_enabled)
>+{
>+ struct mvsw_pr_switch *sw = port->sw;
>+ struct mvsw_pr_bridge_device *bridge_device;
>+
>+ if (!switchdev_trans_ph_prepare(trans))
>+ return 0;
>+
>+ bridge_device = mvsw_pr_bridge_device_find(sw->bridge, orig_dev);
>+ if (WARN_ON(!bridge_device))
>+ return -EINVAL;
>+
>+ if (bridge_device->vlan_enabled == vlan_enabled)
>+ return 0;
>+
>+ netdev_err(bridge_device->dev,
>+ "VLAN filtering can't be changed for existing bridge\n");
>+ return -EINVAL;
>+}
>+
>+static int mvsw_pr_port_attr_br_flags_set(struct mvsw_pr_port *port,
>+ struct switchdev_trans *trans,
>+ struct net_device *orig_dev,
>+ unsigned long flags)
>+{
>+ struct mvsw_pr_bridge_port *br_port;
>+ int err;
>+
>+ if (switchdev_trans_ph_prepare(trans))
>+ return 0;
>+
>+ br_port = mvsw_pr_bridge_port_find(port->sw->bridge, orig_dev);
>+ if (!br_port)
>+ return 0;
>+
>+ err = mvsw_pr_port_flood_set(port, flags & BR_FLOOD);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_port_learning_set(port, flags & BR_LEARNING);
>+ if (err)
>+ return err;
>+
>+ memcpy(&br_port->flags, &flags, sizeof(flags));
>+ return 0;
>+}
>+
>+static int mvsw_pr_port_attr_br_ageing_set(struct mvsw_pr_port *port,
>+ struct switchdev_trans *trans,
>+ unsigned long ageing_clock_t)
>+{
>+ int err;
>+ struct mvsw_pr_switch *sw = port->sw;
>+ unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t);
>+ u32 ageing_time = jiffies_to_msecs(ageing_jiffies) / 1000;
>+
>+ if (switchdev_trans_ph_prepare(trans)) {
>+ if (ageing_time < MVSW_PR_MIN_AGEING_TIME ||
>+ ageing_time > MVSW_PR_MAX_AGEING_TIME)
>+ return -ERANGE;
>+ else
>+ return 0;
>+ }
>+
>+ err = mvsw_pr_switch_ageing_set(sw, ageing_time);
>+ if (!err)
>+ sw->bridge->ageing_time = ageing_time;
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_port_obj_attr_set(struct net_device *dev,
>+ const struct switchdev_attr *attr,
>+ struct switchdev_trans *trans)
>+{
>+ int err = 0;
>+ struct mvsw_pr_port *port = netdev_priv(dev);
>+
>+ switch (attr->id) {
>+ case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
>+ err = -EOPNOTSUPP;
>+ break;
>+ case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
>+ if (attr->u.brport_flags &
>+ ~(BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD))
>+ err = -EINVAL;
>+ break;
>+ case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
>+ err = mvsw_pr_port_attr_br_flags_set(port, trans,
>+ attr->orig_dev,
>+ attr->u.brport_flags);
>+ break;
>+ case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
>+ err = mvsw_pr_port_attr_br_ageing_set(port, trans,
>+ attr->u.ageing_time);
>+ break;
>+ case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
>+ err = mvsw_pr_port_attr_br_vlan_set(port, trans,
>+ attr->orig_dev,
>+ attr->u.vlan_filtering);
>+ break;
>+ default:
>+ err = -EOPNOTSUPP;
>+ }
>+
>+ return err;
>+}
>+
>+static void mvsw_fdb_offload_notify(struct mvsw_pr_port *port,
>+ struct switchdev_notifier_fdb_info *info)
>+{
>+ struct switchdev_notifier_fdb_info send_info;
>+
>+ send_info.addr = info->addr;
>+ send_info.vid = info->vid;
>+ send_info.offloaded = true;
>+ call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED,
>+ port->net_dev, &send_info.info, NULL);
>+}
>+
>+static int
>+mvsw_pr_port_fdb_set(struct mvsw_pr_port *port,
>+ struct switchdev_notifier_fdb_info *fdb_info, bool adding)
>+{
>+ struct mvsw_pr_switch *sw = port->sw;
>+ struct mvsw_pr_bridge_port *br_port;
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ struct net_device *orig_dev = fdb_info->info.dev;
>+ int err;
>+ u16 vid;
>+
>+ br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
>+ if (!br_port)
>+ return -EINVAL;
>+
>+ bridge_device = br_port->bridge_device;
>+
>+ if (bridge_device->vlan_enabled)
>+ vid = fdb_info->vid;
>+ else
>+ vid = bridge_device->bridge_id;
>+
>+ if (adding)
>+ err = mvsw_pr_fdb_add(port, fdb_info->addr, vid, false);
>+ else
>+ err = mvsw_pr_fdb_del(port, fdb_info->addr, vid);
>+
>+ return err;
>+}
>+
>+static void mvsw_pr_bridge_fdb_event_work(struct work_struct *work)

Why do you need to do this in a work item? Why can't you call down to the
hw directly from mvsw_pr_switchdev_event()? I might be missing something,
but I don't see that accessing the hw in this case can sleep.


>+{
>+ int err = 0;
>+ struct mvsw_pr_event_work *switchdev_work =
>+ container_of(work, struct mvsw_pr_event_work, work);
>+ struct net_device *dev = switchdev_work->dev;
>+ struct switchdev_notifier_fdb_info *fdb_info;
>+ struct mvsw_pr_port *port;
>+
>+ rtnl_lock();
>+ if (netif_is_vxlan(dev))
>+ goto out;
>+
>+ port = mvsw_pr_port_dev_lower_find(dev);
>+ if (!port)
>+ goto out;
>+
>+ switch (switchdev_work->event) {
>+ case SWITCHDEV_FDB_ADD_TO_DEVICE:
>+ fdb_info = &switchdev_work->fdb_info;
>+ if (!fdb_info->added_by_user)
>+ break;
>+ err = mvsw_pr_port_fdb_set(port, fdb_info, true);
>+ if (err)
>+ break;
>+ mvsw_fdb_offload_notify(port, fdb_info);
>+ break;
>+ case SWITCHDEV_FDB_DEL_TO_DEVICE:
>+ fdb_info = &switchdev_work->fdb_info;
>+ mvsw_pr_port_fdb_set(port, fdb_info, false);
>+ break;
>+ case SWITCHDEV_FDB_ADD_TO_BRIDGE:
>+ case SWITCHDEV_FDB_DEL_TO_BRIDGE:
>+ break;
>+ }
>+
>+out:
>+ rtnl_unlock();
>+ kfree(switchdev_work->fdb_info.addr);
>+ kfree(switchdev_work);
>+ dev_put(dev);
>+}
>+
>+static int mvsw_pr_switchdev_event(struct notifier_block *unused,
>+ unsigned long event, void *ptr)
>+{
>+ int err = 0;
>+ struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
>+ struct mvsw_pr_event_work *switchdev_work;
>+ struct switchdev_notifier_fdb_info *fdb_info;
>+ struct switchdev_notifier_info *info = ptr;
>+ struct net_device *upper_br;
>+
>+ if (event == SWITCHDEV_PORT_ATTR_SET) {
>+ err = switchdev_handle_port_attr_set(net_dev, ptr,
>+ mvsw_pr_netdev_check,
>+ mvsw_pr_port_obj_attr_set);
>+ return notifier_from_errno(err);
>+ }
>+
>+ upper_br = netdev_master_upper_dev_get_rcu(net_dev);
>+ if (!upper_br)
>+ return NOTIFY_DONE;
>+
>+ if (!netif_is_bridge_master(upper_br))
>+ return NOTIFY_DONE;
>+
>+ switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
>+ if (!switchdev_work)
>+ return NOTIFY_BAD;
>+
>+ switchdev_work->dev = net_dev;
>+ switchdev_work->event = event;
>+
>+ switch (event) {
>+ case SWITCHDEV_FDB_ADD_TO_DEVICE:
>+ case SWITCHDEV_FDB_DEL_TO_DEVICE:
>+ case SWITCHDEV_FDB_ADD_TO_BRIDGE:
>+ case SWITCHDEV_FDB_DEL_TO_BRIDGE:
>+ fdb_info = container_of(info,
>+ struct switchdev_notifier_fdb_info,
>+ info);
>+
>+ INIT_WORK(&switchdev_work->work, mvsw_pr_bridge_fdb_event_work);
>+ memcpy(&switchdev_work->fdb_info, ptr,
>+ sizeof(switchdev_work->fdb_info));
>+ switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
>+ if (!switchdev_work->fdb_info.addr)
>+ goto out;
>+ ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
>+ fdb_info->addr);
>+ dev_hold(net_dev);
>+
>+ break;
>+ case SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE:
>+ case SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE:
>+ default:
>+ kfree(switchdev_work);
>+ return NOTIFY_DONE;
>+ }
>+
>+ queue_work(mvsw_owq, &switchdev_work->work);
>+ return NOTIFY_DONE;
>+out:
>+ kfree(switchdev_work);
>+ return NOTIFY_BAD;
>+}
>+
>+static int mvsw_pr_switchdev_blocking_event(struct notifier_block *unused,
>+ unsigned long event, void *ptr)
>+{
>+ int err = 0;

Pointless init.


>+ struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
>+
>+ switch (event) {
>+ case SWITCHDEV_PORT_OBJ_ADD:
>+ if (netif_is_vxlan(net_dev)) {
>+ err = -EOPNOTSUPP;
>+ } else {
>+ err = switchdev_handle_port_obj_add
>+ (net_dev, ptr, mvsw_pr_netdev_check,
>+ mvsw_pr_port_obj_add);
>+ }

Remove the "{}"s
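For reference, the braceless form being asked for (a sketch; the same shape applies to the SWITCHDEV_PORT_OBJ_DEL case below, assuming the wrapped call still fits the 80-column limit):

```c
	case SWITCHDEV_PORT_OBJ_ADD:
		if (netif_is_vxlan(net_dev))
			err = -EOPNOTSUPP;
		else
			err = switchdev_handle_port_obj_add(net_dev, ptr,
							    mvsw_pr_netdev_check,
							    mvsw_pr_port_obj_add);
		break;
```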


>+ break;
>+ case SWITCHDEV_PORT_OBJ_DEL:
>+ if (netif_is_vxlan(net_dev)) {
>+ err = -EOPNOTSUPP;
>+ } else {
>+ err = switchdev_handle_port_obj_del
>+ (net_dev, ptr, mvsw_pr_netdev_check,
>+ mvsw_pr_port_obj_del);
>+ }

Remove the "{}"s



>+ break;
>+ case SWITCHDEV_PORT_ATTR_SET:
>+ err = switchdev_handle_port_attr_set
>+ (net_dev, ptr, mvsw_pr_netdev_check,
>+ mvsw_pr_port_obj_attr_set);
>+ break;
>+ default:
>+ err = -EOPNOTSUPP;
>+ }
>+
>+ return notifier_from_errno(err);
>+}
>+
>+static struct mvsw_pr_bridge_device *
>+mvsw_pr_bridge_device_create(struct mvsw_pr_bridge *bridge,
>+ struct net_device *br_dev)
>+{
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ bool vlan_enabled = br_vlan_enabled(br_dev);
>+ u16 bridge_id;
>+ int err;
>+
>+ if (vlan_enabled && bridge->bridge_8021q_exists) {
>+ netdev_err(br_dev, "Only one VLAN-aware bridge is supported\n");

Messages like this one should not be printed to dmesg, but should rather
be pushed to the caller using extack.
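A sketch of the suggested change (hedged: an extack argument would first have to be plumbed into mvsw_pr_bridge_device_create() from the bridge-join path, which the patch does not do yet):

```c
	if (vlan_enabled && bridge->bridge_8021q_exists) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Only one VLAN-aware bridge is supported");
		return ERR_PTR(-EINVAL);
	}
```

This way the error reaches the user issuing the netlink request instead of only the kernel log.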


>+ return ERR_PTR(-EINVAL);
>+ }
>+
>+ bridge_device = kzalloc(sizeof(*bridge_device), GFP_KERNEL);
>+ if (!bridge_device)
>+ return ERR_PTR(-ENOMEM);
>+
>+ if (vlan_enabled) {
>+ bridge->bridge_8021q_exists = true;
>+ } else {
>+ err = mvsw_pr_8021d_bridge_create(bridge->sw, &bridge_id);
>+ if (err) {
>+ kfree(bridge_device);
>+ return ERR_PTR(err);
>+ }
>+
>+ bridge_device->bridge_id = bridge_id;
>+ }
>+
>+ bridge_device->dev = br_dev;
>+ bridge_device->vlan_enabled = vlan_enabled;
>+ bridge_device->multicast_enabled = br_multicast_enabled(br_dev);
>+ bridge_device->mrouter = br_multicast_router(br_dev);
>+ INIT_LIST_HEAD(&bridge_device->port_list);
>+
>+ list_add(&bridge_device->bridge_node, &bridge->bridge_list);
>+
>+ return bridge_device;
>+}
>+
>+static void
>+mvsw_pr_bridge_device_destroy(struct mvsw_pr_bridge *bridge,
>+ struct mvsw_pr_bridge_device *bridge_device)
>+{
>+ list_del(&bridge_device->bridge_node);
>+ if (bridge_device->vlan_enabled)
>+ bridge->bridge_8021q_exists = false;
>+ else
>+ mvsw_pr_8021d_bridge_delete(bridge->sw,
>+ bridge_device->bridge_id);
>+
>+ WARN_ON(!list_empty(&bridge_device->port_list));
>+ kfree(bridge_device);
>+}
>+
>+static struct mvsw_pr_bridge_device *
>+mvsw_pr_bridge_device_get(struct mvsw_pr_bridge *bridge,
>+ struct net_device *br_dev)
>+{
>+ struct mvsw_pr_bridge_device *bridge_device;
>+
>+ bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
>+ if (bridge_device)
>+ return bridge_device;
>+
>+ return mvsw_pr_bridge_device_create(bridge, br_dev);
>+}
>+
>+static void
>+mvsw_pr_bridge_device_put(struct mvsw_pr_bridge *bridge,
>+ struct mvsw_pr_bridge_device *bridge_device)
>+{
>+ if (list_empty(&bridge_device->port_list))
>+ mvsw_pr_bridge_device_destroy(bridge, bridge_device);
>+}
>+
>+static struct mvsw_pr_bridge_port *
>+mvsw_pr_bridge_port_create(struct mvsw_pr_bridge_device *bridge_device,
>+ struct net_device *brport_dev)
>+{
>+ struct mvsw_pr_bridge_port *br_port;
>+ struct mvsw_pr_port *port;
>+
>+ br_port = kzalloc(sizeof(*br_port), GFP_KERNEL);
>+ if (!br_port)
>+ return NULL;
>+
>+ port = mvsw_pr_port_dev_lower_find(brport_dev);
>+
>+ br_port->dev = brport_dev;
>+ br_port->bridge_device = bridge_device;
>+ br_port->stp_state = BR_STATE_DISABLED;
>+ br_port->flags = BR_LEARNING | BR_FLOOD | BR_LEARNING_SYNC |
>+ BR_MCAST_FLOOD;
>+ INIT_LIST_HEAD(&br_port->vlan_list);
>+ list_add(&br_port->bridge_device_node, &bridge_device->port_list);
>+ br_port->ref_count = 1;
>+
>+ return br_port;
>+}
>+
>+static void
>+mvsw_pr_bridge_port_destroy(struct mvsw_pr_bridge_port *br_port)
>+{
>+ list_del(&br_port->bridge_device_node);
>+ WARN_ON(!list_empty(&br_port->vlan_list));
>+ kfree(br_port);
>+}
>+
>+static struct mvsw_pr_bridge_port *
>+mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
>+ struct net_device *brport_dev)
>+{
>+ struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ struct mvsw_pr_bridge_port *br_port;
>+ int err;
>+
>+ br_port = mvsw_pr_bridge_port_find(bridge, brport_dev);
>+ if (br_port) {
>+ br_port->ref_count++;

Use refcount_t
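To illustrate the suggested pattern outside the kernel, here is a userspace sketch of the refcount_t-style get/put lifetime that mvsw_pr_bridge_port_get()/_put() implement by hand. `<stdatomic.h>` stands in for the kernel's refcount_t API (refcount_set()/refcount_inc()/refcount_dec_and_test()); the struct and function names are illustrative, not taken from the driver.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct br_port {
	atomic_int ref_count;
	/* bridge-port state would live here */
};

/* Take a reference, creating the object on first use (like _get()). */
static struct br_port *br_port_get(struct br_port *p)
{
	if (p) {
		atomic_fetch_add(&p->ref_count, 1);	/* refcount_inc() */
		return p;
	}
	p = calloc(1, sizeof(*p));
	if (!p)
		return NULL;
	atomic_store(&p->ref_count, 1);			/* refcount_set(r, 1) */
	return p;
}

/* Drop a reference; returns 1 when the last ref was put and p was freed. */
static int br_port_put(struct br_port *p)
{
	/* refcount_dec_and_test(): previous value 1 means we hit zero */
	if (atomic_fetch_sub(&p->ref_count, 1) == 1) {
		free(p);
		return 1;
	}
	return 0;
}
```

The main advantage of the kernel's refcount_t over a plain integer counter is that it saturates on overflow/underflow instead of silently wrapping, turning use-after-free bugs into loud warnings.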


>+ return br_port;
>+ }
>+
>+ bridge_device = mvsw_pr_bridge_device_get(bridge, br_dev);
>+ if (IS_ERR(bridge_device))
>+ return ERR_CAST(bridge_device);
>+
>+ br_port = mvsw_pr_bridge_port_create(bridge_device, brport_dev);
>+ if (!br_port) {
>+ err = -ENOMEM;
>+ goto err_brport_create;
>+ }
>+
>+ return br_port;
>+
>+err_brport_create:
>+ mvsw_pr_bridge_device_put(bridge, bridge_device);
>+ return ERR_PTR(err);
>+}
>+
>+static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
>+ struct mvsw_pr_bridge_port *br_port)
>+{
>+ struct mvsw_pr_bridge_device *bridge_device;
>+
>+ if (--br_port->ref_count != 0)
>+ return;
>+ bridge_device = br_port->bridge_device;
>+ mvsw_pr_bridge_port_destroy(br_port);
>+ mvsw_pr_bridge_device_put(bridge, bridge_device);
>+}
>+
>+static int
>+mvsw_pr_bridge_8021q_port_join(struct mvsw_pr_bridge_device *bridge_device,
>+ struct mvsw_pr_bridge_port *br_port,
>+ struct mvsw_pr_port *port,
>+ struct netlink_ext_ack *extack)
>+{
>+ if (is_vlan_dev(br_port->dev)) {
>+ NL_SET_ERR_MSG_MOD(extack,
>+ "Can not enslave a VLAN device to a VLAN-aware bridge");
>+ return -EINVAL;
>+ }
>+
>+ return 0;
>+}
>+
>+static int
>+mvsw_pr_bridge_8021d_port_join(struct mvsw_pr_bridge_device *bridge_device,
>+ struct mvsw_pr_bridge_port *br_port,
>+ struct mvsw_pr_port *port,
>+ struct netlink_ext_ack *extack)
>+{
>+ int err;
>+
>+ if (is_vlan_dev(br_port->dev)) {
>+ NL_SET_ERR_MSG_MOD(extack,
>+ "Enslaving of a VLAN device is not supported");
>+ return -ENOTSUPP;
>+ }
>+ err = mvsw_pr_8021d_bridge_port_add(port, bridge_device->bridge_id);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
>+ if (err)
>+ goto err_port_flood_set;
>+
>+ err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
>+ if (err)
>+ goto err_port_learning_set;
>+
>+ return err;
>+
>+err_port_learning_set:
>+ mvsw_pr_port_flood_set(port, false);
>+err_port_flood_set:
>+ mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
>+ return err;
>+}
>+
>+static int mvsw_pr_port_bridge_join(struct mvsw_pr_port *port,
>+ struct net_device *brport_dev,
>+ struct net_device *br_dev,
>+ struct netlink_ext_ack *extack)
>+{
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ struct mvsw_pr_switch *sw = port->sw;
>+ struct mvsw_pr_bridge_port *br_port;
>+ int err;
>+
>+ br_port = mvsw_pr_bridge_port_get(sw->bridge, brport_dev);
>+ if (IS_ERR(br_port))
>+ return PTR_ERR(br_port);
>+
>+ bridge_device = br_port->bridge_device;
>+
>+ if (bridge_device->vlan_enabled) {
>+ err = mvsw_pr_bridge_8021q_port_join(bridge_device, br_port,
>+ port, extack);
>+ } else {
>+ err = mvsw_pr_bridge_8021d_port_join(bridge_device, br_port,
>+ port, extack);
>+ }
>+
>+ if (err)
>+ goto err_port_join;
>+
>+ return 0;
>+
>+err_port_join:
>+ mvsw_pr_bridge_port_put(sw->bridge, br_port);
>+ return err;
>+}
>+
>+static void
>+mvsw_pr_bridge_8021d_port_leave(struct mvsw_pr_bridge_device *bridge_device,
>+ struct mvsw_pr_bridge_port *br_port,
>+ struct mvsw_pr_port *port)
>+{
>+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
>+ mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
>+}
>+
>+static void
>+mvsw_pr_bridge_8021q_port_leave(struct mvsw_pr_bridge_device *bridge_device,
>+ struct mvsw_pr_bridge_port *br_port,
>+ struct mvsw_pr_port *port)
>+{
>+ mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
>+ mvsw_pr_port_pvid_set(port, MVSW_PR_DEFAULT_VID);
>+}
>+
>+static void mvsw_pr_port_bridge_leave(struct mvsw_pr_port *port,
>+ struct net_device *brport_dev,
>+ struct net_device *br_dev)
>+{
>+ struct mvsw_pr_switch *sw = port->sw;
>+ struct mvsw_pr_bridge_device *bridge_device;
>+ struct mvsw_pr_bridge_port *br_port;
>+
>+ bridge_device = mvsw_pr_bridge_device_find(sw->bridge, br_dev);
>+ if (!bridge_device)
>+ return;
>+ br_port = __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
>+ if (!br_port)
>+ return;
>+
>+ if (bridge_device->vlan_enabled)
>+ mvsw_pr_bridge_8021q_port_leave(bridge_device, br_port, port);
>+ else
>+ mvsw_pr_bridge_8021d_port_leave(bridge_device, br_port, port);
>+
>+ mvsw_pr_port_learning_set(port, false);
>+ mvsw_pr_port_flood_set(port, false);
>+ mvsw_pr_bridge_port_put(sw->bridge, br_port);
>+}
>+
>+static int mvsw_pr_netdevice_port_upper_event(struct net_device *lower_dev,
>+ struct net_device *dev,
>+ unsigned long event, void *ptr)
>+{
>+ struct netdev_notifier_changeupper_info *info;
>+ struct mvsw_pr_port *port;
>+ struct netlink_ext_ack *extack;
>+ struct net_device *upper_dev;
>+ struct mvsw_pr_switch *sw;
>+ int err = 0;
>+
>+ port = netdev_priv(dev);
>+ sw = port->sw;
>+ info = ptr;
>+ extack = netdev_notifier_info_to_extack(&info->info);
>+
>+ switch (event) {
>+ case NETDEV_PRECHANGEUPPER:
>+ upper_dev = info->upper_dev;
>+ if (!netif_is_bridge_master(upper_dev)) {
>+ NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type");
>+ return -EINVAL;
>+ }
>+ if (!info->linking)
>+ break;
>+ if (netdev_has_any_upper_dev(upper_dev) &&
>+ (!netif_is_bridge_master(upper_dev) ||
>+ !mvsw_pr_bridge_device_is_offloaded(sw, upper_dev))) {
>+ NL_SET_ERR_MSG_MOD(extack,
>+ "Enslaving a port to a device that already has an upper device is not supported");
>+ return -EINVAL;
>+ }
>+ break;
>+ case NETDEV_CHANGEUPPER:
>+ upper_dev = info->upper_dev;
>+ if (netif_is_bridge_master(upper_dev)) {
>+ if (info->linking)
>+ err = mvsw_pr_port_bridge_join(port,
>+ lower_dev,
>+ upper_dev,
>+ extack);
>+ else
>+ mvsw_pr_port_bridge_leave(port,
>+ lower_dev,
>+ upper_dev);
>+ }
>+ break;
>+ }
>+
>+ return err;
>+}
>+
>+static int mvsw_pr_netdevice_port_event(struct net_device *lower_dev,
>+ struct net_device *port_dev,
>+ unsigned long event, void *ptr)
>+{
>+ switch (event) {
>+ case NETDEV_PRECHANGEUPPER:
>+ case NETDEV_CHANGEUPPER:
>+ return mvsw_pr_netdevice_port_upper_event(lower_dev, port_dev,
>+ event, ptr);
>+ }
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_netdevice_event(struct notifier_block *nb,
>+ unsigned long event, void *ptr)
>+{
>+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
>+ struct mvsw_pr_switch *sw;
>+ int err = 0;
>+
>+ sw = container_of(nb, struct mvsw_pr_switch, netdevice_nb);
>+
>+ if (mvsw_pr_netdev_check(dev))
>+ err = mvsw_pr_netdevice_port_event(dev, dev, event, ptr);
>+
>+ return notifier_from_errno(err);
>+}
>+
>+static int mvsw_pr_fdb_init(struct mvsw_pr_switch *sw)
>+{

Just return mvsw_pr_switch_ageing_set(...
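i.e. (sketch of the simplified function):

```c
static int mvsw_pr_fdb_init(struct mvsw_pr_switch *sw)
{
	return mvsw_pr_switch_ageing_set(sw, MVSW_PR_DEFAULT_AGEING_TIME);
}
```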


>+ int err;
>+
>+ err = mvsw_pr_switch_ageing_set(sw, MVSW_PR_DEFAULT_AGEING_TIME);
>+ if (err)
>+ return err;
>+
>+ return 0;
>+}
>+
>+static int mvsw_pr_switchdev_init(struct mvsw_pr_switch *sw)
>+{
>+ int err = 0;

Pointless init.


>+ struct mvsw_pr_switchdev *swdev;
>+ struct mvsw_pr_bridge *bridge;
>+
>+ if (sw->switchdev)
>+ return -EPERM;
>+
>+ bridge = kzalloc(sizeof(*sw->bridge), GFP_KERNEL);

This is confusing. This struct is not per-bridge but for all bridges.
Perhaps better to squash it together with struct mvsw_pr_switchdev.


>+ if (!bridge)
>+ return -ENOMEM;
>+
>+ swdev = kzalloc(sizeof(*sw->switchdev), GFP_KERNEL);


Why do you need to allocate the memory for the bridge and switchdev structs
rather than embedding them in struct mvsw_pr_switch?
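A sketch of the embedded layout (member names are illustrative; whether the bridge struct stays separate or is squashed into the switchdev struct, as suggested earlier, is the author's call):

```c
struct mvsw_pr_switch {
	/* other members as in the patch */
	struct mvsw_pr_switchdev switchdev;	/* embedded: no kzalloc()/kfree() */
	struct mvsw_pr_bridge bridge;		/* or folded into switchdev */
};
```

With embedding, the init path loses two allocations and two error-handling branches, and the fini path loses the corresponding kfree() calls.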



>+ if (!swdev) {
>+ kfree(bridge);
>+ return -ENOMEM;
>+ }
>+
>+ sw->bridge = bridge;
>+ bridge->sw = sw;
>+ sw->switchdev = swdev;
>+ swdev->sw = sw;
>+
>+ INIT_LIST_HEAD(&sw->bridge->bridge_list);
>+
>+ mvsw_owq = alloc_ordered_workqueue("%s_ordered", 0, "prestera_sw");
>+ if (!mvsw_owq) {
>+ err = -ENOMEM;
>+ goto err_alloc_workqueue;
>+ }
>+
>+ swdev->swdev_n.notifier_call = mvsw_pr_switchdev_event;
>+ err = register_switchdev_notifier(&swdev->swdev_n);
>+ if (err)
>+ goto err_register_switchdev_notifier;
>+
>+ swdev->swdev_blocking_n.notifier_call =
>+ mvsw_pr_switchdev_blocking_event;
>+ err = register_switchdev_blocking_notifier(&swdev->swdev_blocking_n);
>+ if (err)
>+ goto err_register_block_switchdev_notifier;
>+
>+ mvsw_pr_fdb_init(sw);
>+
>+ return 0;
>+
>+err_register_block_switchdev_notifier:
>+ unregister_switchdev_notifier(&swdev->swdev_n);
>+err_register_switchdev_notifier:
>+ destroy_workqueue(mvsw_owq);
>+err_alloc_workqueue:
>+ kfree(swdev);
>+ kfree(bridge);
>+ return err;
>+}
>+
>+static void mvsw_pr_switchdev_fini(struct mvsw_pr_switch *sw)
>+{
>+ if (!sw->switchdev)

How can this happen? Remove the check.

>+ return;
>+
>+ unregister_switchdev_notifier(&sw->switchdev->swdev_n);
>+ unregister_switchdev_blocking_notifier
>+ (&sw->switchdev->swdev_blocking_n);
>+ flush_workqueue(mvsw_owq);
>+ destroy_workqueue(mvsw_owq);
>+ kfree(sw->switchdev);
>+ sw->switchdev = NULL;

Pointless set.


>+ kfree(sw->bridge);
>+}
>+
>+static int mvsw_pr_netdev_init(struct mvsw_pr_switch *sw)

The name is misleading. It looks like it has something to do with netdev,
however it only registers the notifier. Please rename it, or maybe better,
just put the code in mvsw_pr_switchdev_register().

In fact, you are going to need the notifier for things outside switchdev
code too. Perhaps you can register in prestera.c, and call in
prestera_switchdev.c.
See mlxsw_sp_netdevice_event() for inspiration.
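A rough sketch of that split, loosely modeled on how mlxsw dispatches from its core (the mvsw_pr_switchdev_netdevice_event() entry point is hypothetical):

```c
/* prestera.c: the core owns the notifier; registration happens once at
 * switch init, and port events are dispatched into prestera_switchdev.c.
 */
static int mvsw_pr_netdevice_event(struct notifier_block *nb,
				   unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
	struct mvsw_pr_switch *sw;
	int err = 0;

	sw = container_of(nb, struct mvsw_pr_switch, netdevice_nb);

	if (mvsw_pr_netdev_check(dev))
		err = mvsw_pr_switchdev_netdevice_event(sw, dev, event, ptr);

	return notifier_from_errno(err);
}
```

This keeps the notifier available to future non-switchdev users (e.g. LAG or router offload) without each of them registering its own.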


>+{
>+ int err = 0;

Pointless init.

>+
>+ if (sw->netdevice_nb.notifier_call)

I don't understand how this could happen. Remove the check.


>+ return -EPERM;
>+
>+ sw->netdevice_nb.notifier_call = mvsw_pr_netdevice_event;
>+ err = register_netdevice_notifier(&sw->netdevice_nb);

just return.
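Folding in the three comments above (pointless init, impossible check, direct return), the function could shrink to (sketch):

```c
static int mvsw_pr_netdev_init(struct mvsw_pr_switch *sw)
{
	sw->netdevice_nb.notifier_call = mvsw_pr_netdevice_event;
	return register_netdevice_notifier(&sw->netdevice_nb);
}
```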

>+ return err;
>+}
>+
>+static void mvsw_pr_netdev_fini(struct mvsw_pr_switch *sw)
>+{
>+ if (sw->netdevice_nb.notifier_call)

How could it be null? Please remove the check.


>+ unregister_netdevice_notifier(&sw->netdevice_nb);
>+}
>+
>+int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw)

The "register" name is a bit misleading. Just use
"prestera_switchdev_init".


>+{
>+ int err;
>+
>+ err = mvsw_pr_switchdev_init(sw);
>+ if (err)
>+ return err;
>+
>+ err = mvsw_pr_netdev_init(sw);
>+ if (err)
>+ goto err_netdevice_notifier;
>+
>+ return 0;
>+
>+err_netdevice_notifier:
>+ mvsw_pr_switchdev_fini(sw);
>+ return err;
>+}
>+
>+void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw)
>+{
>+ mvsw_pr_netdev_fini(sw);
>+ mvsw_pr_switchdev_fini(sw);
>+}
>--
>2.17.1
>

2020-02-27 21:32:47

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Jiri,

On Wed, Feb 26, 2020 at 04:54:23PM +0100, Jiri Pirko wrote:
> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> >wireless SMB deployment.
> >
> >This driver implementation includes only L1 & basic L2 support.
> >
> >The core Prestera switching logic is implemented in prestera.c, there is
> >an intermediate hw layer between core logic and firmware. It is
> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> >related logic, in future there is a plan to support more devices with
> >different HW related configurations.
> >
> >The following Switchdev features are supported:
> >
> > - VLAN-aware bridge offloading
> > - VLAN-unaware bridge offloading
> > - FDB offloading (learning, ageing)
> > - Switchport configuration
> >
> >Signed-off-by: Vadym Kochan <[email protected]>
> >Signed-off-by: Andrii Savka <[email protected]>
> >Signed-off-by: Oleksandr Mazur <[email protected]>
> >Signed-off-by: Serhiy Boiko <[email protected]>
> >Signed-off-by: Serhiy Pshyk <[email protected]>
> >Signed-off-by: Taras Chornyi <[email protected]>
> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
> >---
> > drivers/net/ethernet/marvell/Kconfig | 1 +
> > drivers/net/ethernet/marvell/Makefile | 1 +
> > drivers/net/ethernet/marvell/prestera/Kconfig | 13 +
> > .../net/ethernet/marvell/prestera/Makefile | 3 +
> > .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> > .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> > .../marvell/prestera/prestera_drv_ver.h | 23 +
> > .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> > .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> > .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> > 10 files changed, 4257 insertions(+)
> > create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> > create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
> >
> >diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
> >index 3d5caea096fb..74313d9e1fc0 100644
> >--- a/drivers/net/ethernet/marvell/Kconfig
> >+++ b/drivers/net/ethernet/marvell/Kconfig
> >@@ -171,5 +171,6 @@ config SKY2_DEBUG
> >
> >
> > source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
> >+source "drivers/net/ethernet/marvell/prestera/Kconfig"
> >
> > endif # NET_VENDOR_MARVELL
> >diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile
> >index 89dea7284d5b..9f88fe822555 100644
> >--- a/drivers/net/ethernet/marvell/Makefile
> >+++ b/drivers/net/ethernet/marvell/Makefile
> >@@ -12,3 +12,4 @@ obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
> > obj-$(CONFIG_SKGE) += skge.o
> > obj-$(CONFIG_SKY2) += sky2.o
> > obj-y += octeontx2/
> >+obj-y += prestera/
> >diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
> >new file mode 100644
> >index 000000000000..d0b416dcb677
> >--- /dev/null
> >+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
> >@@ -0,0 +1,13 @@
> >+# SPDX-License-Identifier: GPL-2.0-only
> >+#
> >+# Marvell Prestera drivers configuration
> >+#
> >+
> >+config PRESTERA
> >+ tristate "Marvell Prestera Switch ASICs support"
> >+ depends on NET_SWITCHDEV && VLAN_8021Q
> >+ ---help---
> >+ This driver supports Marvell Prestera Switch ASICs family.
> >+
> >+ To compile this driver as a module, choose M here: the
> >+ module will be called prestera_sw.
> >diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
> >new file mode 100644
> >index 000000000000..9446298fb7f4
> >--- /dev/null
> >+++ b/drivers/net/ethernet/marvell/prestera/Makefile
> >@@ -0,0 +1,3 @@
> >+# SPDX-License-Identifier: GPL-2.0
> >+obj-$(CONFIG_PRESTERA) += prestera_sw.o
> >+prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
> >diff --git a/drivers/net/ethernet/marvell/prestera/prestera.c b/drivers/net/ethernet/marvell/prestera/prestera.c
> >new file mode 100644
> >index 000000000000..12d0eb590bbb
> >--- /dev/null
> >+++ b/drivers/net/ethernet/marvell/prestera/prestera.c
> >@@ -0,0 +1,1502 @@
> >+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> >+ *
> >+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> >+ *
> >+ */
> >+#include <linux/kernel.h>
> >+#include <linux/module.h>
> >+#include <linux/list.h>
> >+#include <linux/netdevice.h>
> >+#include <linux/netdev_features.h>
> >+#include <linux/etherdevice.h>
> >+#include <linux/ethtool.h>
> >+#include <linux/jiffies.h>
> >+#include <net/switchdev.h>
> >+
> >+#include "prestera.h"
> >+#include "prestera_hw.h"
> >+#include "prestera_drv_ver.h"
> >+
> >+#define MVSW_PR_MTU_DEFAULT 1536
> >+
> >+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
> >+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
>
> Keep the prefix for all defines withing the file. "PORT_STATS_CNT"
> looks way to generic on the first look.
>
>
> >+#define PORT_STATS_IDX(name) \
> >+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
> >+#define PORT_STATS_FIELD(name) \
> >+ [PORT_STATS_IDX(name)] = __stringify(name)
> >+
> >+static struct list_head switches_registered;
> >+
> >+static const char mvsw_driver_kind[] = "prestera_sw";
>
> Please be consistent. Make your prefixes, name, filenames the same.
> For example:
> prestera_driver_kind[] = "prestera";
>
> Applied to the whole code.
>
So you suggested using prestera_ as a prefix; I don't see a problem
with that, but why not mvsw_pr_? It has the vendor and device name parts
together as a key. Also, is it necessary to apply the prefix to static
names?

>
> >+static const char mvsw_driver_name[] = "mvsw_switchdev";
>
> Why is this different from kind?
>
> Also, don't mention "switchdev" anywhere.
>
>
> >+static const char mvsw_driver_version[] = PRESTERA_DRV_VER;
>
> [...]
>
>
> >+static void mvsw_pr_port_remote_cap_get(struct ethtool_link_ksettings *ecmd,
> >+ struct mvsw_pr_port *port)
> >+{
> >+ u64 bitmap;
> >+
> >+ if (!mvsw_pr_hw_port_remote_cap_get(port, &bitmap)) {
> >+ mvsw_modes_to_eth(ecmd->link_modes.lp_advertising,
> >+ bitmap, 0, MVSW_PORT_TYPE_NONE);
> >+ }
>
> Don't use {} for single statement. checkpatch.pl should warn you about
> this.
>
>
>
> >+}
> >+
> >+static void mvsw_pr_port_duplex_get(struct ethtool_link_ksettings *ecmd,
> >+ struct mvsw_pr_port *port)
> >+{
> >+ u8 duplex;
> >+
> >+ if (!mvsw_pr_hw_port_duplex_get(port, &duplex)) {
> >+ ecmd->base.duplex = duplex == MVSW_PORT_DUPLEX_FULL ?
> >+ DUPLEX_FULL : DUPLEX_HALF;
> >+ } else {
> >+ ecmd->base.duplex = DUPLEX_UNKNOWN;
> >+ }
>
> Same here.
>
>
> >+}
>
> [...]
>
>
> >+static void __exit mvsw_pr_module_exit(void)
> >+{
> >+ destroy_workqueue(mvsw_pr_wq);
> >+
> >+ pr_info("Unloading Marvell Prestera Switch Driver\n");
>
> No prints like this please.
>
>
>
> >+}
> >+
> >+module_init(mvsw_pr_module_init);
> >+module_exit(mvsw_pr_module_exit);
> >+
> >+MODULE_AUTHOR("Marvell Semi.");
>
> Does not look so :)
>
>
> >+MODULE_LICENSE("GPL");
> >+MODULE_DESCRIPTION("Marvell Prestera switch driver");
> >+MODULE_VERSION(PRESTERA_DRV_VER);
>
> [...]

Thank you for the comments and suggestions!

Regards,
Vadym Kochan

2020-02-27 21:46:25

by Andrew Lunn

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

> > Please be consistent. Make your prefixes, name, filenames the same.
> > For example:
> > prestera_driver_kind[] = "prestera";
> >
> > Applied to the whole code.
> >
> So you suggested to use prestera_ as a prefix, I dont see a problem
> with that, but why not mvsw_pr_ ? So it has the vendor, device name parts
> together as a key. Also it is necessary to apply prefix for the static
> names ?

Although static names don't cause linker issues, you do still see them
in oops stack traces, etc. It just helps track down where the symbols
come from, if they all have a prefix.

Andrew

2020-02-27 23:51:32

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

On Thu, Feb 27, 2020 at 10:43:57PM +0100, Andrew Lunn wrote:
> > > Please be consistent. Make your prefixes, name, filenames the same.
> > > For example:
> > > prestera_driver_kind[] = "prestera";
> > >
> > > Applied to the whole code.
> > >
> > So you suggested to use prestera_ as a prefix, I dont see a problem
> > with that, but why not mvsw_pr_ ? So it has the vendor, device name parts
> > together as a key. Also it is necessary to apply prefix for the static
> > names ?
>
> Although static names don't cause linker issues, you do still see them
> in opps stack traces, etc. It just helps track down where the symbols
> come from, if they all have a prefix.
>
> Andrew

Sure, thanks, makes sense. But is it necessary that the prefix match the
filenames too? Would it be OK to use just 'mvpr_' instead of 'prestera_'
for funcs & types in this particular case?

Regards,
Vadym Kochan

2020-02-28 04:22:22

by Florian Fainelli

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)



On 2/25/2020 8:30 AM, Vadym Kochan wrote:
> Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> wireless SMB deployment.
>
> This driver implementation includes only L1 & basic L2 support.
>
> The core Prestera switching logic is implemented in prestera.c, there is
> an intermediate hw layer between core logic and firmware. It is
> implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> related logic, in future there is a plan to support more devices with
> different HW related configurations.
>
> The following Switchdev features are supported:
>
> - VLAN-aware bridge offloading
> - VLAN-unaware bridge offloading
> - FDB offloading (learning, ageing)
> - Switchport configuration
>
> Signed-off-by: Vadym Kochan <[email protected]>
> Signed-off-by: Andrii Savka <[email protected]>
> Signed-off-by: Oleksandr Mazur <[email protected]>
> Signed-off-by: Serhiy Boiko <[email protected]>
> Signed-off-by: Serhiy Pshyk <[email protected]>
> Signed-off-by: Taras Chornyi <[email protected]>
> Signed-off-by: Volodymyr Mytnyk <[email protected]>

Very little to pick on, the driver is nice and clean, great job!

> ---

[snip]

> +#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
> +#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))

All entries in mvsw_pr_port_stats are u64 so you can use ARRAY_SIZE() here.

[snip]

> +
> + err = register_netdev(net_dev);
> + if (err)
> + goto err_register_netdev;
> +
> + list_add(&port->list, &sw->port_list);

As soon as you publish the network device it can be used by notifiers,
user-space etc, better do this as the last operation.

[snip]

> +int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
> + struct mvsw_pr_port_stats *stats)
> +{
> + struct mvsw_msg_port_stats_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_STATS,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + u64 *hw_val = resp.stats;
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + stats->good_octets_received = hw_val[MVSW_PORT_GOOD_OCTETS_RCV_CNT];

This seems error prone and not scaling really well, since all stats
member are u64 and they are ordered in the same way as the response, is
not a memcpy() sufficient here?
--
Florian

2020-02-28 04:25:26

by Florian Fainelli

[permalink] [raw]
Subject: Re: [RFC net-next 3/3] dt-bindings: marvell,prestera: Add address mapping for Prestera Switchdev PCIe driver



On 2/25/2020 8:30 AM, Vadym Kochan wrote:
> Document requirement for the PCI port which is connected to the ASIC, to
> allow access to the firmware related registers.
>
> Signed-off-by: Vadym Kochan <[email protected]>
> ---
> .../devicetree/bindings/net/marvell,prestera.txt | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/net/marvell,prestera.txt b/Documentation/devicetree/bindings/net/marvell,prestera.txt
> index 83370ebf5b89..103c35cfa8a7 100644
> --- a/Documentation/devicetree/bindings/net/marvell,prestera.txt
> +++ b/Documentation/devicetree/bindings/net/marvell,prestera.txt
> @@ -45,3 +45,16 @@ dfx-server {
> ranges = <0 MBUS_ID(0x08, 0x00) 0 0x100000>;
> reg = <MBUS_ID(0x08, 0x00) 0 0x100000>;
> };
> +
> +Marvell Prestera SwitchDev bindings
> +-----------------------------------
> +The current implementation of Prestera Switchdev PCI interface driver requires
> +that BAR2 is assigned to 0xf6000000 as base address from the PCI IO range:

It is always a bit disturbing to document what a driver does, or wants,
in a Device Tree binding. If it is necessary for the PCIe device to have
multiple ranges defined such that the necessary BARs are available, that
is what is necessary; no need to mention what the driver or firmware does.

> +
> +&cp0_pcie0 {
> + ranges = <0x81000000 0x0 0xfb000000 0x0 0xfb000000 0x0 0xf0000
> + 0x82000000 0x0 0xf6000000 0x0 0xf6000000 0x0 0x2000000
> + 0x82000000 0x0 0xf9000000 0x0 0xf9000000 0x0 0x100000>;
> + phys = <&cp0_comphy0 0>;
> + status = "okay";
> +};
>

--
Florian

2020-02-28 06:35:24

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Thu, Feb 27, 2020 at 10:32:00PM CET, [email protected] wrote:
>Hi Jiri,
>
>On Wed, Feb 26, 2020 at 04:54:23PM +0100, Jiri Pirko wrote:
>> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
>> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>> >wireless SMB deployment.
>> >
>> >This driver implementation includes only L1 & basic L2 support.
>> >
>> >The core Prestera switching logic is implemented in prestera.c, there is
>> >an intermediate hw layer between core logic and firmware. It is
>> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>> >related logic, in future there is a plan to support more devices with
>> >different HW related configurations.
>> >
>> >The following Switchdev features are supported:
>> >
>> > - VLAN-aware bridge offloading
>> > - VLAN-unaware bridge offloading
>> > - FDB offloading (learning, ageing)
>> > - Switchport configuration
>> >
>> >Signed-off-by: Vadym Kochan <[email protected]>
>> >Signed-off-by: Andrii Savka <[email protected]>
>> >Signed-off-by: Oleksandr Mazur <[email protected]>
>> >Signed-off-by: Serhiy Boiko <[email protected]>
>> >Signed-off-by: Serhiy Pshyk <[email protected]>
>> >Signed-off-by: Taras Chornyi <[email protected]>
>> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
>> >---
>> > drivers/net/ethernet/marvell/Kconfig | 1 +
>> > drivers/net/ethernet/marvell/Makefile | 1 +
>> > drivers/net/ethernet/marvell/prestera/Kconfig | 13 +
>> > .../net/ethernet/marvell/prestera/Makefile | 3 +
>> > .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
>> > .../net/ethernet/marvell/prestera/prestera.h | 244 +++
>> > .../marvell/prestera/prestera_drv_ver.h | 23 +
>> > .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
>> > .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
>> > .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
>> > 10 files changed, 4257 insertions(+)
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
>> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
>> >
>> >diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
>> >index 3d5caea096fb..74313d9e1fc0 100644
>> >--- a/drivers/net/ethernet/marvell/Kconfig
>> >+++ b/drivers/net/ethernet/marvell/Kconfig
>> >@@ -171,5 +171,6 @@ config SKY2_DEBUG
>> >
>> >
>> > source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
>> >+source "drivers/net/ethernet/marvell/prestera/Kconfig"
>> >
>> > endif # NET_VENDOR_MARVELL
>> >diff --git a/drivers/net/ethernet/marvell/Makefile b/drivers/net/ethernet/marvell/Makefile
>> >index 89dea7284d5b..9f88fe822555 100644
>> >--- a/drivers/net/ethernet/marvell/Makefile
>> >+++ b/drivers/net/ethernet/marvell/Makefile
>> >@@ -12,3 +12,4 @@ obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
>> > obj-$(CONFIG_SKGE) += skge.o
>> > obj-$(CONFIG_SKY2) += sky2.o
>> > obj-y += octeontx2/
>> >+obj-y += prestera/
>> >diff --git a/drivers/net/ethernet/marvell/prestera/Kconfig b/drivers/net/ethernet/marvell/prestera/Kconfig
>> >new file mode 100644
>> >index 000000000000..d0b416dcb677
>> >--- /dev/null
>> >+++ b/drivers/net/ethernet/marvell/prestera/Kconfig
>> >@@ -0,0 +1,13 @@
>> >+# SPDX-License-Identifier: GPL-2.0-only
>> >+#
>> >+# Marvell Prestera drivers configuration
>> >+#
>> >+
>> >+config PRESTERA
>> >+ tristate "Marvell Prestera Switch ASICs support"
>> >+ depends on NET_SWITCHDEV && VLAN_8021Q
>> >+ ---help---
>> >+ This driver supports Marvell Prestera Switch ASICs family.
>> >+
>> >+ To compile this driver as a module, choose M here: the
>> >+ module will be called prestera_sw.
>> >diff --git a/drivers/net/ethernet/marvell/prestera/Makefile b/drivers/net/ethernet/marvell/prestera/Makefile
>> >new file mode 100644
>> >index 000000000000..9446298fb7f4
>> >--- /dev/null
>> >+++ b/drivers/net/ethernet/marvell/prestera/Makefile
>> >@@ -0,0 +1,3 @@
>> >+# SPDX-License-Identifier: GPL-2.0
>> >+obj-$(CONFIG_PRESTERA) += prestera_sw.o
>> >+prestera_sw-objs := prestera.o prestera_hw.o prestera_switchdev.o
>> >diff --git a/drivers/net/ethernet/marvell/prestera/prestera.c b/drivers/net/ethernet/marvell/prestera/prestera.c
>> >new file mode 100644
>> >index 000000000000..12d0eb590bbb
>> >--- /dev/null
>> >+++ b/drivers/net/ethernet/marvell/prestera/prestera.c
>> >@@ -0,0 +1,1502 @@
>> >+/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
>> >+ *
>> >+ * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
>> >+ *
>> >+ */
>> >+#include <linux/kernel.h>
>> >+#include <linux/module.h>
>> >+#include <linux/list.h>
>> >+#include <linux/netdevice.h>
>> >+#include <linux/netdev_features.h>
>> >+#include <linux/etherdevice.h>
>> >+#include <linux/ethtool.h>
>> >+#include <linux/jiffies.h>
>> >+#include <net/switchdev.h>
>> >+
>> >+#include "prestera.h"
>> >+#include "prestera_hw.h"
>> >+#include "prestera_drv_ver.h"
>> >+
>> >+#define MVSW_PR_MTU_DEFAULT 1536
>> >+
>> >+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
>> >+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
>>
>> Keep the prefix for all defines withing the file. "PORT_STATS_CNT"
>> looks way to generic on the first look.
>>
>>
>> >+#define PORT_STATS_IDX(name) \
>> >+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
>> >+#define PORT_STATS_FIELD(name) \
>> >+ [PORT_STATS_IDX(name)] = __stringify(name)
>> >+
>> >+static struct list_head switches_registered;
>> >+
>> >+static const char mvsw_driver_kind[] = "prestera_sw";
>>
>> Please be consistent. Make your prefixes, name, filenames the same.
>> For example:
>> prestera_driver_kind[] = "prestera";
>>
>> Applied to the whole code.
>>
>So you suggested to use prestera_ as a prefix, I dont see a problem
>with that, but why not mvsw_pr_ ? So it has the vendor, device name parts

Because of "sw" in the name. You have the directory named "prestera",
the modules are named "prestera_*"; for consistency's sake the
prefixes should be "prestera_". "mvsw_" looks totally unrelated.


>together as a key. Also it is necessary to apply prefix for the static
>names ?

Yes please. Be consistent within the whole code. This is handy when
seeing traces.


>
>>
>> >+static const char mvsw_driver_name[] = "mvsw_switchdev";
>>
>> Why is this different from kind?
>>
>> Also, don't mention "switchdev" anywhere.
>>
>>
>> >+static const char mvsw_driver_version[] = PRESTERA_DRV_VER;
>>
>> [...]
>>
>>
>> >+static void mvsw_pr_port_remote_cap_get(struct ethtool_link_ksettings *ecmd,
>> >+ struct mvsw_pr_port *port)
>> >+{
>> >+ u64 bitmap;
>> >+
>> >+ if (!mvsw_pr_hw_port_remote_cap_get(port, &bitmap)) {
>> >+ mvsw_modes_to_eth(ecmd->link_modes.lp_advertising,
>> >+ bitmap, 0, MVSW_PORT_TYPE_NONE);
>> >+ }
>>
>> Don't use {} for single statement. checkpatch.pl should warn you about
>> this.
>>
>>
>>
>> >+}
>> >+
>> >+static void mvsw_pr_port_duplex_get(struct ethtool_link_ksettings *ecmd,
>> >+ struct mvsw_pr_port *port)
>> >+{
>> >+ u8 duplex;
>> >+
>> >+ if (!mvsw_pr_hw_port_duplex_get(port, &duplex)) {
>> >+ ecmd->base.duplex = duplex == MVSW_PORT_DUPLEX_FULL ?
>> >+ DUPLEX_FULL : DUPLEX_HALF;
>> >+ } else {
>> >+ ecmd->base.duplex = DUPLEX_UNKNOWN;
>> >+ }
>>
>> Same here.
>>
>>
>> >+}
>>
>> [...]
>>
>>
>> >+static void __exit mvsw_pr_module_exit(void)
>> >+{
>> >+ destroy_workqueue(mvsw_pr_wq);
>> >+
>> >+ pr_info("Unloading Marvell Prestera Switch Driver\n");
>>
>> No prints like this please.
>>
>>
>>
>> >+}
>> >+
>> >+module_init(mvsw_pr_module_init);
>> >+module_exit(mvsw_pr_module_exit);
>> >+
>> >+MODULE_AUTHOR("Marvell Semi.");
>>
>> Does not look so :)
>>
>>
>> >+MODULE_LICENSE("GPL");
>> >+MODULE_DESCRIPTION("Marvell Prestera switch driver");
>> >+MODULE_VERSION(PRESTERA_DRV_VER);
>>
>> [...]
>
>Thank you for the comments and suggestions!
>
>Regards,
>Vadym Kochan

2020-02-28 06:37:55

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Fri, Feb 28, 2020 at 12:50:58AM CET, [email protected] wrote:
>On Thu, Feb 27, 2020 at 10:43:57PM +0100, Andrew Lunn wrote:
>> > > Please be consistent. Make your prefixes, name, filenames the same.
>> > > For example:
>> > > prestera_driver_kind[] = "prestera";
>> > >
>> > > Applied to the whole code.
>> > >
>> > So you suggested to use prestera_ as a prefix, I dont see a problem
>> > with that, but why not mvsw_pr_ ? So it has the vendor, device name parts
>> > together as a key. Also it is necessary to apply prefix for the static
>> > names ?
>>
>> Although static names don't cause linker issues, you do still see them
>> in opps stack traces, etc. It just helps track down where the symbols
>> come from, if they all have a prefix.
>>
>> Andrew
>
>Sure, thanks, makes sense. But is it necessary that prefix should match
>filenames too ? Would it be OK to use just 'mvpr_' instead of 'prestera_'

I would vote for "prestera_". It is clean, consistent, obvious.


>for funcs & types in this particular case ?
>
>Regards,
>Vadym Kochan

2020-02-28 08:06:30

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Jiri,

On Thu, Feb 27, 2020 at 03:22:59PM +0100, Jiri Pirko wrote:
> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> >wireless SMB deployment.
> >
> >This driver implementation includes only L1 & basic L2 support.
> >
> >The core Prestera switching logic is implemented in prestera.c, there is
> >an intermediate hw layer between core logic and firmware. It is
> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> >related logic, in future there is a plan to support more devices with
> >different HW related configurations.
> >
> >The following Switchdev features are supported:
> >
> > - VLAN-aware bridge offloading
> > - VLAN-unaware bridge offloading
> > - FDB offloading (learning, ageing)
> > - Switchport configuration
> >
> >Signed-off-by: Vadym Kochan <[email protected]>
> >Signed-off-by: Andrii Savka <[email protected]>
> >Signed-off-by: Oleksandr Mazur <[email protected]>
> >Signed-off-by: Serhiy Boiko <[email protected]>
> >Signed-off-by: Serhiy Pshyk <[email protected]>
> >Signed-off-by: Taras Chornyi <[email protected]>
> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
> >---

[SNIP]

> >+};
> >+
> >+struct mvsw_msg_cmd {
> >+ u32 type;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_ret {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 status;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_common_request {
> >+ struct mvsw_msg_cmd cmd;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_common_response {
> >+ struct mvsw_msg_ret ret;
> >+} __packed __aligned(4);
> >+
> >+union mvsw_msg_switch_param {
> >+ u32 ageing_timeout;
> >+};
> >+
> >+struct mvsw_msg_switch_attr_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ union mvsw_msg_switch_param param;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_switch_init_ret {
> >+ struct mvsw_msg_ret ret;
> >+ u32 port_count;
> >+ u32 mtu_max;
> >+ u8 switch_id;
> >+ u8 mac[ETH_ALEN];
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_port_autoneg_param {
> >+ u64 link_mode;
> >+ u8 enable;
> >+ u8 fec;
> >+};
> >+
> >+struct mvsw_msg_port_cap_param {
> >+ u64 link_mode;
> >+ u8 type;
> >+ u8 fec;
> >+ u8 transceiver;
> >+};
> >+
> >+union mvsw_msg_port_param {
> >+ u8 admin_state;
> >+ u8 oper_state;
> >+ u32 mtu;
> >+ u8 mac[ETH_ALEN];
> >+ u8 accept_frm_type;
> >+ u8 learning;
> >+ u32 speed;
> >+ u8 flood;
> >+ u32 link_mode;
> >+ u8 type;
> >+ u8 duplex;
> >+ u8 fec;
> >+ u8 mdix;
> >+ struct mvsw_msg_port_autoneg_param autoneg;
> >+ struct mvsw_msg_port_cap_param cap;
> >+};
> >+
> >+struct mvsw_msg_port_attr_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 attr;
> >+ u32 port;
> >+ u32 dev;
> >+ union mvsw_msg_port_param param;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_port_attr_ret {
> >+ struct mvsw_msg_ret ret;
> >+ union mvsw_msg_port_param param;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_port_stats_ret {
> >+ struct mvsw_msg_ret ret;
> >+ u64 stats[MVSW_PORT_CNT_MAX];
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_port_info_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 port;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_port_info_ret {
> >+ struct mvsw_msg_ret ret;
> >+ u32 hw_id;
> >+ u32 dev_id;
> >+ u16 fp_id;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_vlan_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 port;
> >+ u32 dev;
> >+ u16 vid;
> >+ u8 is_member;
> >+ u8 is_tagged;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_fdb_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 port;
> >+ u32 dev;
> >+ u8 mac[ETH_ALEN];
> >+ u16 vid;
> >+ u8 dynamic;
> >+ u32 flush_mode;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_event {
> >+ u16 type;
> >+ u16 id;
> >+} __packed __aligned(4);
> >+
> >+union mvsw_msg_event_fdb_param {
> >+ u8 mac[ETH_ALEN];
> >+};
> >+
> >+struct mvsw_msg_event_fdb {
> >+ struct mvsw_msg_event id;
> >+ u32 port_id;
> >+ u32 vid;
> >+ union mvsw_msg_event_fdb_param param;
> >+} __packed __aligned(4);
> >+
> >+union mvsw_msg_event_port_param {
> >+ u32 oper_state;
> >+};
> >+
> >+struct mvsw_msg_event_port {
> >+ struct mvsw_msg_event id;
> >+ u32 port_id;
> >+ union mvsw_msg_event_port_param param;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_bridge_cmd {
> >+ struct mvsw_msg_cmd cmd;
> >+ u32 port;
> >+ u32 dev;
> >+ u16 bridge;
> >+} __packed __aligned(4);
> >+
> >+struct mvsw_msg_bridge_ret {
> >+ struct mvsw_msg_ret ret;
> >+ u16 bridge;
> >+} __packed __aligned(4);
> >+
> >+#define fw_check_resp(_response) \
> >+({ \
> >+ int __er = 0; \
> >+ typeof(_response) __r = (_response); \
> >+ if (__r->ret.cmd.type != MVSW_MSG_TYPE_ACK) \
> >+ __er = -EBADE; \
> >+ else if (__r->ret.status != MVSW_MSG_ACK_OK) \
> >+ __er = -EINVAL; \
> >+ (__er); \
> >+})
> >+
> >+#define __fw_send_req_resp(_switch, _type, _request, _response, _wait) \
>
> Please try to avoid doing functions in macros like this one and the
> previous one.
>
>
> >+({ \
> >+ int __e; \
> >+ typeof(_switch) __sw = (_switch); \
> >+ typeof(_request) __req = (_request); \
> >+ typeof(_response) __resp = (_response); \
> >+ __req->cmd.type = (_type); \
> >+ __e = __sw->dev->send_req(__sw->dev, \
> >+ (u8 *)__req, sizeof(*__req), \
> >+ (u8 *)__resp, sizeof(*__resp), \
> >+ _wait); \
> >+ if (!__e) \
> >+ __e = fw_check_resp(__resp); \
> >+ (__e); \
> >+})
> >+
> >+#define fw_send_req_resp(_sw, _t, _req, _resp) \
> >+ __fw_send_req_resp(_sw, _t, _req, _resp, 0)
> >+
> >+#define fw_send_req_resp_wait(_sw, _t, _req, _resp, _wait) \
> >+ __fw_send_req_resp(_sw, _t, _req, _resp, _wait)
> >+
> >+#define fw_send_req(_sw, _t, _req) \
>
> This should be function, not define

Yeah, I understand your point, but here was the reason:

all the packed structs defined here in prestera_hw.c are used to
transmit requests to / returns from the firmware, and each such struct
is required to have a req/ret member (depending on whether it is a
request or a return from the firmware). The purpose of the macro is to
avoid the case where someone forgets to add the req/ret member to a new
such structure.

>
>
> >+({ \
> >+ struct mvsw_msg_common_response __re; \
> >+ (fw_send_req_resp(_sw, _t, _req, &__re)); \
> >+})
> >+

Regards,
Vadym Kochan

2020-02-28 08:18:15

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Florian,

On Thu, Feb 27, 2020 at 08:22:02PM -0800, Florian Fainelli wrote:
>
>
> On 2/25/2020 8:30 AM, Vadym Kochan wrote:
> > Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> > ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> > wireless SMB deployment.
> >
> > This driver implementation includes only L1 & basic L2 support.
> >
> > The core Prestera switching logic is implemented in prestera.c, there is
> > an intermediate hw layer between core logic and firmware. It is
> > implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> > related logic, in future there is a plan to support more devices with
> > different HW related configurations.
> >
> > The following Switchdev features are supported:
> >
> > - VLAN-aware bridge offloading
> > - VLAN-unaware bridge offloading
> > - FDB offloading (learning, ageing)
> > - Switchport configuration
> >
> > Signed-off-by: Vadym Kochan <[email protected]>
> > Signed-off-by: Andrii Savka <[email protected]>
> > Signed-off-by: Oleksandr Mazur <[email protected]>
> > Signed-off-by: Serhiy Boiko <[email protected]>
> > Signed-off-by: Serhiy Pshyk <[email protected]>
> > Signed-off-by: Taras Chornyi <[email protected]>
> > Signed-off-by: Volodymyr Mytnyk <[email protected]>
>
> Very little to pick on, the driver is nice and clean, great job!
>
> > ---
>
> [snip]
>
> > +#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
> > +#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
>
> All entries in mvsw_pr_port_stats are u64 so you can use ARRAY_SIZE() here.
>
> [snip]
>
> > +
> > + err = register_netdev(net_dev);
> > + if (err)
> > + goto err_register_netdev;
> > +
> > + list_add(&port->list, &sw->port_list);
>
> As soon as you publish the network device it can be used by notifiers,
> user-space etc, better do this as the last operation.
>
> [snip]
>
> > +int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
> > + struct mvsw_pr_port_stats *stats)
> > +{
> > + struct mvsw_msg_port_stats_ret resp;
> > + struct mvsw_msg_port_attr_cmd req = {
> > + .attr = MVSW_MSG_PORT_ATTR_STATS,
> > + .port = port->hw_id,
> > + .dev = port->dev_id
> > + };
> > + u64 *hw_val = resp.stats;
> > + int err;
> > +
> > + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> > + &req, &resp);
> > + if (err)
> > + return err;
> > +
> > + stats->good_octets_received = hw_val[MVSW_PORT_GOOD_OCTETS_RCV_CNT];
>
> This seems error prone and not scaling really well, since all stats
> member are u64 and they are ordered in the same way as the response, is
> not a memcpy() sufficient here?
> --

The reason for this is that struct mvsw_pr_port_stats and struct
mvsw_msg_port_stats_ret have very different usage contexts: struct
mvsw_pr_port_stats might have a different layout, e.g. additional fields
needed by the higher layer, so I think it is better to fill it member by
member, each from its related counterpart received from the firmware.
What I mean is to avoid mixing data transfer objects with the generic
ones. I totally agree that memcpy looks simpler, but it may bring bugs
because the generic stats struct may differ from the one which is used
for transmission.

> Florian

Regards,
Vadym Kochan

2020-02-28 09:45:14

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Jiri,

On Fri, Feb 28, 2020 at 07:34:51AM +0100, Jiri Pirko wrote:
> Thu, Feb 27, 2020 at 10:32:00PM CET, [email protected] wrote:
> >Hi Jiri,
> >
> >On Wed, Feb 26, 2020 at 04:54:23PM +0100, Jiri Pirko wrote:
> >> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
> >> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> >> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> >> >wireless SMB deployment.
> >> >
> >> >This driver implementation includes only L1 & basic L2 support.
> >> >
> >> >The core Prestera switching logic is implemented in prestera.c, there is
> >> >an intermediate hw layer between core logic and firmware. It is
> >> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> >> >related logic, in future there is a plan to support more devices with
> >> >different HW related configurations.
> >> >
> >> >The following Switchdev features are supported:
> >> >
> >> > - VLAN-aware bridge offloading
> >> > - VLAN-unaware bridge offloading
> >> > - FDB offloading (learning, ageing)
> >> > - Switchport configuration
> >> >
> >> >Signed-off-by: Vadym Kochan <[email protected]>
> >> >Signed-off-by: Andrii Savka <[email protected]>
> >> >Signed-off-by: Oleksandr Mazur <[email protected]>
> >> >Signed-off-by: Serhiy Boiko <[email protected]>
> >> >Signed-off-by: Serhiy Pshyk <[email protected]>
> >> >Signed-off-by: Taras Chornyi <[email protected]>
> >> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
> >> >---

[SNIP]

> >> >+#include <linux/kernel.h>
> >> >+#include <linux/module.h>
> >> >+#include <linux/list.h>
> >> >+#include <linux/netdevice.h>
> >> >+#include <linux/netdev_features.h>
> >> >+#include <linux/etherdevice.h>
> >> >+#include <linux/ethtool.h>
> >> >+#include <linux/jiffies.h>
> >> >+#include <net/switchdev.h>
> >> >+
> >> >+#include "prestera.h"
> >> >+#include "prestera_hw.h"
> >> >+#include "prestera_drv_ver.h"
> >> >+
> >> >+#define MVSW_PR_MTU_DEFAULT 1536
> >> >+
> >> >+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
> >> >+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
> >>
> >> Keep the prefix for all defines within the file. "PORT_STATS_CNT"
> >> looks way too generic at first glance.
> >>
> >>
> >> >+#define PORT_STATS_IDX(name) \
> >> >+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
> >> >+#define PORT_STATS_FIELD(name) \
> >> >+ [PORT_STATS_IDX(name)] = __stringify(name)
> >> >+
> >> >+static struct list_head switches_registered;
> >> >+
> >> >+static const char mvsw_driver_kind[] = "prestera_sw";
> >>
> >> Please be consistent. Make your prefixes, name, filenames the same.
> >> For example:
> >> prestera_driver_kind[] = "prestera";
> >>
> >> Applied to the whole code.
> >>
> >So you suggested to use prestera_ as a prefix, I don't see a problem
> >with that, but why not mvsw_pr_ ? So it has the vendor and device name parts
>
> Because of "sw" in the name. You have the directory named "prestera",
> the modules are named "prestera_*", for the consistency sake the
> prefixes should be "prestera_". "mvsw_" looks totally unrelated.
>
>

I understand. If possible I'd like to avoid the long prefix that
prestera_xxx would give. I looked at the mlxsw prefix format, and it seems
to me that mvpr_ may be OK in this case? It would also make function/type
names shorter, which makes the code easier to read.

[SNIP]

> >
> >Regards,
> >Vadym Kochan

I am sorry that this naming issue took more discussion than it should have; I
just want to define it once and never change it (it is a bit of a pain to
rename the whole code base to a new naming convention :) ).

Regards,
Vadym Kochan
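As an aside, the PORT_STATS_IDX()/PORT_STATS_FIELD() macros quoted in the
review above use offsetof() so the stat-name table cannot drift out of sync
with the counter struct. A minimal userspace sketch of the same pattern (the
counter names here are invented stand-ins, not the driver's real fields):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Userspace stand-in for the kernel's __stringify() helper. */
#define __stringify(x) #x

/* Hypothetical counters; the real struct mvsw_pr_port_stats has more. */
struct port_stats {
	uint64_t good_octets_received;
	uint64_t bad_octets_received;
	uint64_t good_frames_sent;
};

#define PORT_STATS_CNT (sizeof(struct port_stats) / sizeof(uint64_t))
#define PORT_STATS_IDX(name) \
	(offsetof(struct port_stats, name) / sizeof(uint64_t))
#define PORT_STATS_FIELD(name) \
	[PORT_STATS_IDX(name)] = __stringify(name)

/* Designated initializers put each name at its counter's array slot. */
static const char *port_stats_names[PORT_STATS_CNT] = {
	PORT_STATS_FIELD(good_octets_received),
	PORT_STATS_FIELD(bad_octets_received),
	PORT_STATS_FIELD(good_frames_sent),
};

const char *port_stat_name(size_t idx)
{
	return idx < PORT_STATS_CNT ? port_stats_names[idx] : NULL;
}
```

Each PORT_STATS_FIELD() entry lands at exactly the array slot of its u64
counter, so adding a counter plus one PORT_STATS_FIELD() line keeps the two
in step automatically.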

2020-02-28 11:00:34

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Fri, Feb 28, 2020 at 10:44:53AM CET, [email protected] wrote:
>Hi Jiri,
>
>On Fri, Feb 28, 2020 at 07:34:51AM +0100, Jiri Pirko wrote:
>> Thu, Feb 27, 2020 at 10:32:00PM CET, [email protected] wrote:
>> >Hi Jiri,
>> >
>> >On Wed, Feb 26, 2020 at 04:54:23PM +0100, Jiri Pirko wrote:
>> >> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
>> >> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>> >> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>> >> >wireless SMB deployment.
>> >> >
>> >> >This driver implementation includes only L1 & basic L2 support.
>> >> >
>> >> >The core Prestera switching logic is implemented in prestera.c, there is
>> >> >an intermediate hw layer between core logic and firmware. It is
>> >> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>> >> >related logic, in future there is a plan to support more devices with
>> >> >different HW related configurations.
>> >> >
>> >> >The following Switchdev features are supported:
>> >> >
>> >> > - VLAN-aware bridge offloading
>> >> > - VLAN-unaware bridge offloading
>> >> > - FDB offloading (learning, ageing)
>> >> > - Switchport configuration
>> >> >
>> >> >Signed-off-by: Vadym Kochan <[email protected]>
>> >> >Signed-off-by: Andrii Savka <[email protected]>
>> >> >Signed-off-by: Oleksandr Mazur <[email protected]>
>> >> >Signed-off-by: Serhiy Boiko <[email protected]>
>> >> >Signed-off-by: Serhiy Pshyk <[email protected]>
>> >> >Signed-off-by: Taras Chornyi <[email protected]>
>> >> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
>> >> >---
>
>[SNIP]
>
>> >> >+#include <linux/kernel.h>
>> >> >+#include <linux/module.h>
>> >> >+#include <linux/list.h>
>> >> >+#include <linux/netdevice.h>
>> >> >+#include <linux/netdev_features.h>
>> >> >+#include <linux/etherdevice.h>
>> >> >+#include <linux/ethtool.h>
>> >> >+#include <linux/jiffies.h>
>> >> >+#include <net/switchdev.h>
>> >> >+
>> >> >+#include "prestera.h"
>> >> >+#include "prestera_hw.h"
>> >> >+#include "prestera_drv_ver.h"
>> >> >+
>> >> >+#define MVSW_PR_MTU_DEFAULT 1536
>> >> >+
>> >> >+#define PORT_STATS_CACHE_TIMEOUT_MS (msecs_to_jiffies(1000))
>> >> >+#define PORT_STATS_CNT (sizeof(struct mvsw_pr_port_stats) / sizeof(u64))
>> >>
>> >> Keep the prefix for all defines within the file. "PORT_STATS_CNT"
>> >> looks way too generic at first glance.
>> >>
>> >>
>> >> >+#define PORT_STATS_IDX(name) \
>> >> >+ (offsetof(struct mvsw_pr_port_stats, name) / sizeof(u64))
>> >> >+#define PORT_STATS_FIELD(name) \
>> >> >+ [PORT_STATS_IDX(name)] = __stringify(name)
>> >> >+
>> >> >+static struct list_head switches_registered;
>> >> >+
>> >> >+static const char mvsw_driver_kind[] = "prestera_sw";
>> >>
>> >> Please be consistent. Make your prefixes, name, filenames the same.
>> >> For example:
>> >> prestera_driver_kind[] = "prestera";
>> >>
>> >> Applied to the whole code.
>> >>
>> >So you suggested to use prestera_ as a prefix, I don't see a problem
>> >with that, but why not mvsw_pr_ ? So it has the vendor and device name parts
>>
>> Because of "sw" in the name. You have the directory named "prestera",
>> the modules are named "prestera_*", for the consistency sake the
>> prefixes should be "prestera_". "mvsw_" looks totally unrelated.
>>
>>
>
>I understand. If possible I'd like to get rid of the long prefix that comes
>with prestera_xxx. I looked at the mlxsw prefix format, and it seems to

mlxsw is a bad example for naming :) We went along with the existing
drivers mlx4 and mlx5. The device name was "SwitchX-2".
In your case, you have a nice device name, just use it :)


>me that mvpr_ may be OK in this case ? Also it will make funcs/types

"prestera_" would be my first choice. I don't see a reason to mangle the
vendor name into the prefix. Also, you have to allow for the possibility
that in the future these devices will no longer be "Marvell" (if sold).
This happens all the time :)



>names shorter, which makes the code easier to read.
>
>[SNIP]
>
>> >
>> >Regards,
>> >Vadym Kochan
>
>I am sorry that this naming issue took more discussion than it should have; I
>just want to define it once and never change it (it is a bit of a pain to
>rename the whole code with a new naming convention :) ).

Sure.


>
>Regards,
>Vadym Kochan

2020-02-28 11:04:26

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Fri, Feb 28, 2020 at 09:06:02AM CET, [email protected] wrote:
>Hi Jiri,
>
>On Thu, Feb 27, 2020 at 03:22:59PM +0100, Jiri Pirko wrote:
>> Tue, Feb 25, 2020 at 05:30:54PM CET, [email protected] wrote:
>> >Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
>> >ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
>> >wireless SMB deployment.
>> >
>> >This driver implementation includes only L1 & basic L2 support.
>> >
>> >The core Prestera switching logic is implemented in prestera.c, there is
>> >an intermediate hw layer between core logic and firmware. It is
>> >implemented in prestera_hw.c, the purpose of it is to encapsulate hw
>> >related logic, in future there is a plan to support more devices with
>> >different HW related configurations.
>> >
>> >The following Switchdev features are supported:
>> >
>> > - VLAN-aware bridge offloading
>> > - VLAN-unaware bridge offloading
>> > - FDB offloading (learning, ageing)
>> > - Switchport configuration
>> >
>> >Signed-off-by: Vadym Kochan <[email protected]>
>> >Signed-off-by: Andrii Savka <[email protected]>
>> >Signed-off-by: Oleksandr Mazur <[email protected]>
>> >Signed-off-by: Serhiy Boiko <[email protected]>
>> >Signed-off-by: Serhiy Pshyk <[email protected]>
>> >Signed-off-by: Taras Chornyi <[email protected]>
>> >Signed-off-by: Volodymyr Mytnyk <[email protected]>
>> >---
>
>[SNIP]
>
>> >+};
>> >+
>> >+struct mvsw_msg_cmd {
>> >+ u32 type;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_ret {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 status;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_common_request {
>> >+ struct mvsw_msg_cmd cmd;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_common_response {
>> >+ struct mvsw_msg_ret ret;
>> >+} __packed __aligned(4);
>> >+
>> >+union mvsw_msg_switch_param {
>> >+ u32 ageing_timeout;
>> >+};
>> >+
>> >+struct mvsw_msg_switch_attr_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ union mvsw_msg_switch_param param;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_switch_init_ret {
>> >+ struct mvsw_msg_ret ret;
>> >+ u32 port_count;
>> >+ u32 mtu_max;
>> >+ u8 switch_id;
>> >+ u8 mac[ETH_ALEN];
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_port_autoneg_param {
>> >+ u64 link_mode;
>> >+ u8 enable;
>> >+ u8 fec;
>> >+};
>> >+
>> >+struct mvsw_msg_port_cap_param {
>> >+ u64 link_mode;
>> >+ u8 type;
>> >+ u8 fec;
>> >+ u8 transceiver;
>> >+};
>> >+
>> >+union mvsw_msg_port_param {
>> >+ u8 admin_state;
>> >+ u8 oper_state;
>> >+ u32 mtu;
>> >+ u8 mac[ETH_ALEN];
>> >+ u8 accept_frm_type;
>> >+ u8 learning;
>> >+ u32 speed;
>> >+ u8 flood;
>> >+ u32 link_mode;
>> >+ u8 type;
>> >+ u8 duplex;
>> >+ u8 fec;
>> >+ u8 mdix;
>> >+ struct mvsw_msg_port_autoneg_param autoneg;
>> >+ struct mvsw_msg_port_cap_param cap;
>> >+};
>> >+
>> >+struct mvsw_msg_port_attr_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 attr;
>> >+ u32 port;
>> >+ u32 dev;
>> >+ union mvsw_msg_port_param param;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_port_attr_ret {
>> >+ struct mvsw_msg_ret ret;
>> >+ union mvsw_msg_port_param param;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_port_stats_ret {
>> >+ struct mvsw_msg_ret ret;
>> >+ u64 stats[MVSW_PORT_CNT_MAX];
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_port_info_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 port;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_port_info_ret {
>> >+ struct mvsw_msg_ret ret;
>> >+ u32 hw_id;
>> >+ u32 dev_id;
>> >+ u16 fp_id;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_vlan_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 port;
>> >+ u32 dev;
>> >+ u16 vid;
>> >+ u8 is_member;
>> >+ u8 is_tagged;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_fdb_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 port;
>> >+ u32 dev;
>> >+ u8 mac[ETH_ALEN];
>> >+ u16 vid;
>> >+ u8 dynamic;
>> >+ u32 flush_mode;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_event {
>> >+ u16 type;
>> >+ u16 id;
>> >+} __packed __aligned(4);
>> >+
>> >+union mvsw_msg_event_fdb_param {
>> >+ u8 mac[ETH_ALEN];
>> >+};
>> >+
>> >+struct mvsw_msg_event_fdb {
>> >+ struct mvsw_msg_event id;
>> >+ u32 port_id;
>> >+ u32 vid;
>> >+ union mvsw_msg_event_fdb_param param;
>> >+} __packed __aligned(4);
>> >+
>> >+union mvsw_msg_event_port_param {
>> >+ u32 oper_state;
>> >+};
>> >+
>> >+struct mvsw_msg_event_port {
>> >+ struct mvsw_msg_event id;
>> >+ u32 port_id;
>> >+ union mvsw_msg_event_port_param param;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_bridge_cmd {
>> >+ struct mvsw_msg_cmd cmd;
>> >+ u32 port;
>> >+ u32 dev;
>> >+ u16 bridge;
>> >+} __packed __aligned(4);
>> >+
>> >+struct mvsw_msg_bridge_ret {
>> >+ struct mvsw_msg_ret ret;
>> >+ u16 bridge;
>> >+} __packed __aligned(4);
>> >+
>> >+#define fw_check_resp(_response) \
>> >+({ \
>> >+ int __er = 0; \
>> >+ typeof(_response) __r = (_response); \
>> >+ if (__r->ret.cmd.type != MVSW_MSG_TYPE_ACK) \
>> >+ __er = -EBADE; \
>> >+ else if (__r->ret.status != MVSW_MSG_ACK_OK) \
>> >+ __er = -EINVAL; \
>> >+ (__er); \
>> >+})
>> >+
>> >+#define __fw_send_req_resp(_switch, _type, _request, _response, _wait) \
>>
>> Please try to avoid doing functions in macros like this one and the
>> previous one.
>>
>>
>> >+({ \
>> >+ int __e; \
>> >+ typeof(_switch) __sw = (_switch); \
>> >+ typeof(_request) __req = (_request); \
>> >+ typeof(_response) __resp = (_response); \
>> >+ __req->cmd.type = (_type); \
>> >+ __e = __sw->dev->send_req(__sw->dev, \
>> >+ (u8 *)__req, sizeof(*__req), \
>> >+ (u8 *)__resp, sizeof(*__resp), \
>> >+ _wait); \
>> >+ if (!__e) \
>> >+ __e = fw_check_resp(__resp); \
>> >+ (__e); \
>> >+})
>> >+
>> >+#define fw_send_req_resp(_sw, _t, _req, _resp) \
>> >+ __fw_send_req_resp(_sw, _t, _req, _resp, 0)
>> >+
>> >+#define fw_send_req_resp_wait(_sw, _t, _req, _resp, _wait) \
>> >+ __fw_send_req_resp(_sw, _t, _req, _resp, _wait)
>> >+
>> >+#define fw_send_req(_sw, _t, _req) \
>>
>> This should be function, not define
>
>Yeah, I understand your point, but here was the reason:
>
>all the packed structs defined here in prestera_hw.c are used to carry
>requests/returns to/from the firmware; each struct is required to have a
>req/ret member (depending on whether it is a request or a return from the
>firmware), and the purpose of the macro is to avoid the case where someone
>forgets to add the req/ret member to a new such structure.

You should refactor your code to make this cleaner. A define acting as
a function, working on arbitrary structures and counting on two member
names being present in them, is not nice :/


>
>>
>>
>> >+({ \
>> >+ struct mvsw_msg_common_response __re; \
>> >+ (fw_send_req_resp(_sw, _t, _req, &__re)); \
>> >+})
>> >+
>
>Regards,
>Vadym Kochan,
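One possible shape of the refactor Jiri asks for, as a standalone sketch
(the names and the stub transport are invented for illustration, this is not
the driver's actual code): every request embeds a common header as its first
member and every response embeds a common return header, and a typed helper
takes pointers to those headers instead of a macro poking at member names.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MSG_TYPE_ACK	1
#define MSG_ACK_OK	0

/* Every request begins with msg_cmd and every response with msg_ret; with
 * the header types spelled out, a plain function can replace the macro. */
struct msg_cmd { uint32_t type; };
struct msg_ret { struct msg_cmd cmd; uint32_t status; };

/* Stub transport standing in for sw->dev->send_req(); it just ACKs. */
static int transport_send(void *req, size_t req_len,
			  void *resp, size_t resp_len)
{
	struct msg_ret *ret = resp;

	(void)req; (void)req_len;
	assert(resp_len >= sizeof(*ret));
	ret->cmd.type = MSG_TYPE_ACK;
	ret->status = MSG_ACK_OK;
	return 0;
}

/* Typed replacement for the __fw_send_req_resp() macro: callers pass the
 * embedded headers explicitly, so the "first member" convention is checked
 * by the compiler instead of being assumed by a macro. */
int fw_send_req_resp(struct msg_cmd *cmd, size_t req_len, uint32_t type,
		     struct msg_ret *ret, size_t resp_len)
{
	int err;

	cmd->type = type;
	err = transport_send(cmd, req_len, ret, resp_len);
	if (err)
		return err;
	if (ret->cmd.type != MSG_TYPE_ACK)
		return -1;	/* -EBADE in the kernel */
	if (ret->status != MSG_ACK_OK)
		return -2;	/* -EINVAL in the kernel */
	return 0;
}
```

A request struct would embed struct msg_cmd as its first member and call
fw_send_req_resp(&req.cmd, sizeof(req), type, &resp.ret, sizeof(resp)),
so forgetting the header member becomes a compile error rather than a
silent layout bug.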

2020-02-28 14:04:24

by Andrew Lunn

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

On Fri, Feb 28, 2020 at 07:36:23AM +0100, Jiri Pirko wrote:
> Fri, Feb 28, 2020 at 12:50:58AM CET, [email protected] wrote:
> >On Thu, Feb 27, 2020 at 10:43:57PM +0100, Andrew Lunn wrote:
> >> > > Please be consistent. Make your prefixes, name, filenames the same.
> >> > > For example:
> >> > > prestera_driver_kind[] = "prestera";
> >> > >
> >> > > Applied to the whole code.
> >> > >
> >> > So you suggested to use prestera_ as a prefix, I don't see a problem
> >> > with that, but why not mvsw_pr_ ? So it has the vendor and device name
> >> > parts together as a key. Also, is it necessary to apply the prefix to
> >> > static names ?
> >>
> >> Although static names don't cause linker issues, you do still see them
> >> in oops stack traces, etc. It just helps track down where the symbols
> >> come from, if they all have a prefix.
> >>
> >> Andrew
> >
> >Sure, thanks, makes sense. But is it necessary that the prefix should
> >match the filenames too ? Would it be OK to use just 'mvpr_' instead of 'prestera_'
>
> I would vote for "prestera_". It is clean, consistent, obvious.

Yes, prestera_ is better. It also avoids the vendor name, which often
changes as companies are bought, sold, split, etc.

Andrew

2020-02-28 16:52:32

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

Hi Chris,

On Tue, Feb 25, 2020 at 10:45:09PM +0000, Chris Packham wrote:
> Hi Vadym,
>
> On Tue, 2020-02-25 at 16:30 +0000, Vadym Kochan wrote:
> > Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> > ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> > wireless SMB deployment.
> >
> > Prestera Switchdev is a firmware based driver which operates via PCI
> > bus. The driver is split into 2 modules:
> >
> > - prestera_sw.ko - main generic Switchdev Prestera ASIC related logic.
> >
> > - prestera_pci.ko - bus specific code which also implements firmware
> > loading and low-level messaging protocol between
> > firmware and the switchdev driver.
> >
> > This driver implementation includes only L1 & basic L2 support.
> >
> > The core Prestera switching logic is implemented in prestera.c, there is
> > an intermediate hw layer between core logic and firmware. It is
> > implemented in prestera_hw.c, the purpose of it is to encapsulate hw
> > related logic, in future there is a plan to support more devices with
> > different HW related configurations.
>
> Very excited by this patch series. We have some custom designs using
> the AC3x. I'm in the process of getting the board dtses ready for
> submitting upstream.
>
> Please feel free to add me to the Cc list for future versions of this
> patch set (and related ones).
>
> I'll also look to see what we can do to test on our hardware platforms.
>

Sure, I will add you to the Cc list. Please note that you need to make
sure that your board design follows the Marvell Design Guide for the
switchdev solution.

> >
> > The firmware has to be loaded each time device is reset. The driver is
> > loading it from:
> >
> > /lib/firmware/marvell/prestera_fw_img.bin
> >
> > The firmware image version is located within internal header and consists
> > of 3 numbers - MAJOR.MINOR.PATCH. Additionally, driver has hard-coded
> > minimum supported firmware version which it can work with:
> >
> > MAJOR - reflects the support on ABI level between driver and loaded
> > firmware, this number should be the same for driver and
> > loaded firmware.
> >
> > MINOR - this is the minimal supported version between driver and the
> > firmware.
> >
> > PATCH - indicates only fixes, firmware ABI is not changed.
> >
> > The firmware image will be submitted to the linux-firmware after the
> > driver is accepted.
> >
> > The following Switchdev features are supported:
> >
> > - VLAN-aware bridge offloading
> > - VLAN-unaware bridge offloading
> > - FDB offloading (learning, ageing)
> > - Switchport configuration
> >
> > CPU RX/TX support will be provided in the next contribution.
> >
> > Vadym Kochan (3):
> > net: marvell: prestera: Add Switchdev driver for Prestera family ASIC
> > device 98DX325x (AC3x)
> > net: marvell: prestera: Add PCI interface support
> > dt-bindings: marvell,prestera: Add address mapping for Prestera
> > Switchdev PCIe driver
> >
> > .../bindings/net/marvell,prestera.txt | 13 +
> > drivers/net/ethernet/marvell/Kconfig | 1 +
> > drivers/net/ethernet/marvell/Makefile | 1 +
> > drivers/net/ethernet/marvell/prestera/Kconfig | 24 +
> > .../net/ethernet/marvell/prestera/Makefile | 5 +
> > .../net/ethernet/marvell/prestera/prestera.c | 1502 +++++++++++++++++
> > .../net/ethernet/marvell/prestera/prestera.h | 244 +++
> > .../marvell/prestera/prestera_drv_ver.h | 23 +
> > .../ethernet/marvell/prestera/prestera_hw.c | 1094 ++++++++++++
> > .../ethernet/marvell/prestera/prestera_hw.h | 159 ++
> > .../ethernet/marvell/prestera/prestera_pci.c | 840 +++++++++
> > .../marvell/prestera/prestera_switchdev.c | 1217 +++++++++++++
> > 12 files changed, 5123 insertions(+)
> > create mode 100644 drivers/net/ethernet/marvell/prestera/Kconfig
> > create mode 100644 drivers/net/ethernet/marvell/prestera/Makefile
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.c
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.c
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_hw.h
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_pci.c
> > create mode 100644 drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
> >

Regards,
Vadym Kochan
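The MAJOR.MINOR.PATCH compatibility rule restated in the quoted cover letter
can be sketched as a small check. The version numbers below are made up for
illustration; the driver's real constants live in prestera_drv_ver.h.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct fw_version {
	uint16_t major;
	uint16_t minor;
	uint16_t patch;
};

/* Hypothetical hard-coded minimum supported firmware version. */
static const struct fw_version fw_required = { .major = 2, .minor = 1 };

/* MAJOR must match the driver exactly (ABI level); MINOR must be at least
 * the driver's minimum; PATCH carries fixes only, so any value is fine. */
bool fw_version_supported(const struct fw_version *img)
{
	if (img->major != fw_required.major)
		return false;
	if (img->minor < fw_required.minor)
		return false;
	return true;
}
```

With this scheme, a firmware image with a newer PATCH (or MINOR) than the
driver expects still loads, while any MAJOR mismatch is rejected outright.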

2020-02-28 16:55:13

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

Hi Jiri,

On Thu, Feb 27, 2020 at 12:05:07PM +0100, Jiri Pirko wrote:
> Tue, Feb 25, 2020 at 05:30:55PM CET, [email protected] wrote:
> >Add PCI interface driver for Prestera Switch ASICs family devices, which
> >provides:
> >

[SNIP]

> >+
> >+module_init(mvsw_pr_pci_init);
> >+module_exit(mvsw_pr_pci_exit);
> >+
> >+MODULE_AUTHOR("Marvell Semi.");
>
> Again, wrong author.
>

PLVision is developing the driver for Marvell and upstreaming it on behalf
of Marvell. This is a long-term cooperation that aims to expose Marvell
devices to the Linux community.

[SNIP]

Regards,
Vadym Kochan

2020-02-29 08:00:25

by Jiri Pirko

[permalink] [raw]
Subject: Re: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

Fri, Feb 28, 2020 at 05:54:32PM CET, [email protected] wrote:
>Hi Jiri,
>
>On Thu, Feb 27, 2020 at 12:05:07PM +0100, Jiri Pirko wrote:
>> Tue, Feb 25, 2020 at 05:30:55PM CET, [email protected] wrote:
>> >Add PCI interface driver for Prestera Switch ASICs family devices, which
>> >provides:
>> >
>
>[SNIP]
>
>> >+
>> >+module_init(mvsw_pr_pci_init);
>> >+module_exit(mvsw_pr_pci_exit);
>> >+
>> >+MODULE_AUTHOR("Marvell Semi.");
>>
>> Again, wrong author.
>>
>
>PLVision is developing the driver for Marvell and upstreaming it on behalf
>of Marvell. This is a long-term cooperation that aims to expose Marvell
>devices to the Linux community.

Okay. If you grep the code, most of the time, the MODULE_AUTHOR is a
person. That was my point:
/*
* Author(s), use "Name <email>" or just "Name", for multiple
* authors use multiple MODULE_AUTHOR() statements/lines.
*/
#define MODULE_AUTHOR(_author) MODULE_INFO(author, _author)

But I see that for example "Intel" uses the company name too. So I guess
it is fine.


>
>[SNIP]
>
>Regards,
>Vadym Kochan

2020-03-01 02:12:35

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [RFC net-next 2/3] net: marvell: prestera: Add PCI interface support

On Sat, 29 Feb 2020 08:58:02 +0100 Jiri Pirko wrote:
> Fri, Feb 28, 2020 at 05:54:32PM CET, [email protected] wrote:
> >> >+
> >> >+module_init(mvsw_pr_pci_init);
> >> >+module_exit(mvsw_pr_pci_exit);
> >> >+
> >> >+MODULE_AUTHOR("Marvell Semi.");
> >>
> >> Again, wrong author.
> >
> >PLVision is developing the driver for Marvell and upstreaming it on behalf
> >of Marvell. This is a long-term cooperation that aims to expose Marvell
> >devices to the Linux community.
>
> Okay. If you grep the code, most of the time, the MODULE_AUTHOR is a
> person. That was my point:
> /*
> * Author(s), use "Name <email>" or just "Name", for multiple
> * authors use multiple MODULE_AUTHOR() statements/lines.
> */
> #define MODULE_AUTHOR(_author) MODULE_INFO(author, _author)

+1

> But I see that for example "Intel" uses the company name too. So I guess
> it is fine.

FWIW I agree with Jiri's original comment. Copyright != authorship.
I'm not a lawyer, but at least under the European law I was exposed to, a
company can _own_ code, but it can never _author_ it.

I think authorship, as a moral right, is inalienable, unlike material/
economic rights (copyright).

So to me a company being an author makes no sense at all. Copyrights are
on all your files; that's sufficient. Put human names in MODULE_AUTHOR,
or just skip using the macro.

2020-03-05 14:50:20

by Ido Schimmel

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

On Tue, Feb 25, 2020 at 04:30:54PM +0000, Vadym Kochan wrote:
> +int mvsw_pr_port_learning_set(struct mvsw_pr_port *port, bool learn)
> +{
> + return mvsw_pr_hw_port_learning_set(port, learn);
> +}
> +
> +int mvsw_pr_port_flood_set(struct mvsw_pr_port *port, bool flood)
> +{
> + return mvsw_pr_hw_port_flood_set(port, flood);
> +}

Flooding and learning are per-port attributes? Not per-{port, VLAN}?
If so, you need to have various restrictions in the driver in case
someone configures multiple vlan devices on top of a port and enslaves
them to different bridges.

> +
> +int mvsw_pr_port_pvid_set(struct mvsw_pr_port *port, u16 vid)
> +{
> + int err;
> +
> + if (!vid) {
> + err = mvsw_pr_hw_port_accept_frame_type_set
> + (port, MVSW_ACCEPT_FRAME_TYPE_TAGGED);
> + if (err)
> + return err;
> + } else {
> + err = mvsw_pr_hw_vlan_port_vid_set(port, vid);
> + if (err)
> + return err;
> + err = mvsw_pr_hw_port_accept_frame_type_set
> + (port, MVSW_ACCEPT_FRAME_TYPE_ALL);
> + if (err)
> + goto err_port_allow_untagged_set;
> + }
> +
> + port->pvid = vid;
> + return 0;
> +
> +err_port_allow_untagged_set:
> + mvsw_pr_hw_vlan_port_vid_set(port, port->pvid);
> + return err;
> +}
> +
> +struct mvsw_pr_port_vlan*
> +mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *port, u16 vid)
> +{
> + struct mvsw_pr_port_vlan *port_vlan;
> +
> + list_for_each_entry(port_vlan, &port->vlans_list, list) {
> + if (port_vlan->vid == vid)
> + return port_vlan;
> + }
> +
> + return NULL;
> +}
> +
> +struct mvsw_pr_port_vlan*
> +mvsw_pr_port_vlan_create(struct mvsw_pr_port *port, u16 vid)
> +{
> + bool untagged = vid == MVSW_PR_DEFAULT_VID;
> + struct mvsw_pr_port_vlan *port_vlan;
> + int err;
> +
> + port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
> + if (port_vlan)
> + return ERR_PTR(-EEXIST);
> +
> + err = mvsw_pr_port_vlan_set(port, vid, true, untagged);
> + if (err)
> + return ERR_PTR(err);
> +
> + port_vlan = kzalloc(sizeof(*port_vlan), GFP_KERNEL);
> + if (!port_vlan) {
> + err = -ENOMEM;
> + goto err_port_vlan_alloc;
> + }
> +
> + port_vlan->mvsw_pr_port = port;
> + port_vlan->vid = vid;
> +
> + list_add(&port_vlan->list, &port->vlans_list);
> +
> + return port_vlan;
> +
> +err_port_vlan_alloc:
> + mvsw_pr_port_vlan_set(port, vid, false, false);
> + return ERR_PTR(err);
> +}
> +
> +static void
> +mvsw_pr_port_vlan_cleanup(struct mvsw_pr_port_vlan *port_vlan)
> +{
> + if (port_vlan->bridge_port)
> + mvsw_pr_port_vlan_bridge_leave(port_vlan);
> +}
> +
> +void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *port_vlan)
> +{
> + struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
> + u16 vid = port_vlan->vid;
> +
> + mvsw_pr_port_vlan_cleanup(port_vlan);
> + list_del(&port_vlan->list);
> + kfree(port_vlan);
> + mvsw_pr_hw_vlan_port_set(port, vid, false, false);
> +}
> +
> +int mvsw_pr_port_vlan_set(struct mvsw_pr_port *port, u16 vid,
> + bool is_member, bool untagged)
> +{
> + return mvsw_pr_hw_vlan_port_set(port, vid, is_member, untagged);
> +}
> +
> +static int mvsw_pr_port_create(struct mvsw_pr_switch *sw, u32 id)
> +{
> + struct net_device *net_dev;
> + struct mvsw_pr_port *port;
> + char *mac;
> + int err;
> +
> + net_dev = alloc_etherdev(sizeof(*port));
> + if (!net_dev)
> + return -ENOMEM;
> +
> + port = netdev_priv(net_dev);
> +
> + INIT_LIST_HEAD(&port->vlans_list);
> + port->pvid = MVSW_PR_DEFAULT_VID;

If you're using VID 1, then you need to make sure that the user cannot
configure a VLAN device with this VID. If possible, I suggest that
you use VID 4095, as it cannot be configured from user space.

I'm actually not entirely sure why you need a default VID.

> + port->net_dev = net_dev;
> + port->id = id;
> + port->sw = sw;
> +
> + err = mvsw_pr_hw_port_info_get(port, &port->fp_id,
> + &port->hw_id, &port->dev_id);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to get port(%u) info\n", id);
> + goto err_register_netdev;
> + }
> +
> + net_dev->features |= NETIF_F_NETNS_LOCAL | NETIF_F_HW_L2FW_DOFFLOAD;

Not sure why you need 'NETIF_F_HW_L2FW_DOFFLOAD'. It was introduced by
commit a6cc0cfa72e0 ("net: Add layer 2 hardware acceleration operations
for macvlan devices").


> + net_dev->ethtool_ops = &mvsw_pr_ethtool_ops;
> + net_dev->netdev_ops = &mvsw_pr_netdev_ops;
> +
> + netif_carrier_off(net_dev);
> +
> + net_dev->mtu = min_t(unsigned int, sw->mtu_max, MVSW_PR_MTU_DEFAULT);
> + net_dev->min_mtu = sw->mtu_min;
> + net_dev->max_mtu = sw->mtu_max;
> +
> + err = mvsw_pr_hw_port_mtu_set(port, net_dev->mtu);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to set port(%u) mtu\n", id);
> + goto err_register_netdev;
> + }
> +
> + /* Only 0xFF mac addrs are supported */
> + if (port->fp_id >= 0xFF)
> + goto err_register_netdev;
> +
> + mac = net_dev->dev_addr;
> + memcpy(mac, sw->base_mac, net_dev->addr_len - 1);
> + mac[net_dev->addr_len - 1] = (char)port->fp_id;
> +
> + err = mvsw_pr_hw_port_mac_set(port, mac);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to set port(%u) mac addr\n", id);
> + goto err_register_netdev;
> + }
> +
> + err = mvsw_pr_hw_port_cap_get(port, &port->caps);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to get port(%u) caps\n", id);
> + goto err_register_netdev;
> + }
> +
> + port->adver_link_modes = 0;
> + port->adver_fec = 1 << MVSW_PORT_FEC_OFF_BIT;
> + port->autoneg = false;
> + mvsw_pr_port_autoneg_set(port, true, port->caps.supp_link_modes,
> + port->caps.supp_fec);
> +
> + err = mvsw_pr_hw_port_state_set(port, false);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to set port(%u) down\n", id);
> + goto err_register_netdev;
> + }
> +
> + INIT_DELAYED_WORK(&port->cached_hw_stats.caching_dw,
> + &update_stats_cache);
> +
> + err = register_netdev(net_dev);
> + if (err)
> + goto err_register_netdev;
> +
> + list_add(&port->list, &sw->port_list);

Once you register the netdev it can be accessed by anyone, so it should
be the last operation.

> +
> + return 0;
> +
> +err_register_netdev:
> + free_netdev(net_dev);
> + return err;
> +}

You need to have port_destroy() here. It's much easier to review when
create / destroy are next to each other. You can easily check that they
are symmetric. Same for other functions in the driver.

> +
> +static void mvsw_pr_port_vlan_flush(struct mvsw_pr_port *port,
> + bool flush_default)
> +{
> + struct mvsw_pr_port_vlan *port_vlan, *tmp;
> +
> + list_for_each_entry_safe(port_vlan, tmp, &port->vlans_list, list) {
> + if (!flush_default && port_vlan->vid == MVSW_PR_DEFAULT_VID)
> + continue;
> +
> + mvsw_pr_port_vlan_destroy(port_vlan);
> + }
> +}
> +
> +int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id)
> +{
> + return mvsw_pr_hw_bridge_create(sw, bridge_id);
> +}
> +
> +int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id)
> +{
> + return mvsw_pr_hw_bridge_delete(sw, bridge_id);
> +}
> +
> +int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *port, u16 bridge_id)
> +{
> + return mvsw_pr_hw_bridge_port_add(port, bridge_id);
> +}
> +
> +int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *port, u16 bridge_id)
> +{
> + return mvsw_pr_hw_bridge_port_delete(port, bridge_id);
> +}
> +
> +int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time)
> +{
> + return mvsw_pr_hw_switch_ageing_set(sw, ageing_time);
> +}
> +
> +int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
> + enum mvsw_pr_fdb_flush_mode mode)
> +{
> + return mvsw_pr_hw_fdb_flush_vlan(sw, vid, mode);
> +}
> +
> +int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
> + enum mvsw_pr_fdb_flush_mode mode)
> +{
> + return mvsw_pr_hw_fdb_flush_port_vlan(port, vid, mode);
> +}
> +
> +int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
> + enum mvsw_pr_fdb_flush_mode mode)
> +{
> + return mvsw_pr_hw_fdb_flush_port(port, mode);
> +}
> +
> +static int mvsw_pr_clear_ports(struct mvsw_pr_switch *sw)
> +{
> + struct net_device *net_dev;
> + struct list_head *pos, *n;
> + struct mvsw_pr_port *port;
> +
> + list_for_each_safe(pos, n, &sw->port_list) {
> + port = list_entry(pos, typeof(*port), list);
> + net_dev = port->net_dev;
> +
> + cancel_delayed_work_sync(&port->cached_hw_stats.caching_dw);
> + unregister_netdev(net_dev);
> + mvsw_pr_port_vlan_flush(port, true);
> + WARN_ON_ONCE(!list_empty(&port->vlans_list));
> + free_netdev(net_dev);
> + list_del(pos);
> + }
> + return (!list_empty(&sw->port_list));
> +}
> +
> +static void mvsw_pr_port_handle_event(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt)
> +{
> + struct mvsw_pr_port *port;
> + struct delayed_work *caching_dw;
> +
> + port = __find_pr_port(sw, evt->port_evt.port_id);
> + if (!port)
> + return;
> +
> + caching_dw = &port->cached_hw_stats.caching_dw;
> +
> + switch (evt->id) {
> + case MVSW_PORT_EVENT_STATE_CHANGED:
> + if (evt->port_evt.data.oper_state) {
> + netif_carrier_on(port->net_dev);
> + if (!delayed_work_pending(caching_dw))
> + queue_delayed_work(mvsw_pr_wq, caching_dw, 0);
> + } else {
> + netif_carrier_off(port->net_dev);
> + if (delayed_work_pending(caching_dw))
> + cancel_delayed_work(caching_dw);
> + }
> + break;
> + }
> +}
> +
> +static void mvsw_pr_fdb_handle_event(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt)
> +{
> + struct switchdev_notifier_fdb_info info;
> + struct mvsw_pr_port *port;
> +
> + port = __find_pr_port(sw, evt->fdb_evt.port_id);
> + if (!port)
> + return;
> +
> + info.addr = evt->fdb_evt.data.mac;
> + info.vid = evt->fdb_evt.vid;
> + info.offloaded = true;
> +
> + rtnl_lock();
> + switch (evt->id) {
> + case MVSW_FDB_EVENT_LEARNED:
> + call_switchdev_notifiers(SWITCHDEV_FDB_ADD_TO_BRIDGE,
> + port->net_dev, &info.info, NULL);
> + break;
> + case MVSW_FDB_EVENT_AGED:
> + call_switchdev_notifiers(SWITCHDEV_FDB_DEL_TO_BRIDGE,
> + port->net_dev, &info.info, NULL);
> + break;
> + }

This looks misplaced. You notify the bridge about learned FDB entries
here in prestera.c, but offload FDB entries from the bridge in
prestera_switchdev.c.

Both should be implemented in the file that implements all the
bridge-related operations.

> + rtnl_unlock();
> + return;
> +}
> +
> +int mvsw_pr_fdb_add(struct mvsw_pr_port *port, const unsigned char *mac,
> + u16 vid, bool dynamic)
> +{
> + return mvsw_pr_hw_fdb_add(port, mac, vid, dynamic);
> +}
> +
> +int mvsw_pr_fdb_del(struct mvsw_pr_port *port, const unsigned char *mac,
> + u16 vid)
> +{
> + return mvsw_pr_hw_fdb_del(port, mac, vid);
> +}
> +
> +static void mvsw_pr_fdb_event_handler_unregister(struct mvsw_pr_switch *sw)
> +{
> + mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_FDB,
> + mvsw_pr_fdb_handle_event);
> +}
> +
> +static void mvsw_pr_port_event_handler_unregister(struct mvsw_pr_switch *sw)
> +{
> + mvsw_pr_hw_event_handler_unregister(sw, MVSW_EVENT_TYPE_PORT,
> + mvsw_pr_port_handle_event);
> +}
> +
> +static void mvsw_pr_event_handlers_unregister(struct mvsw_pr_switch *sw)
> +{
> + mvsw_pr_fdb_event_handler_unregister(sw);
> + mvsw_pr_port_event_handler_unregister(sw);
> +}
> +
> +static int mvsw_pr_fdb_event_handler_register(struct mvsw_pr_switch *sw)
> +{
> + return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_FDB,
> + mvsw_pr_fdb_handle_event);
> +}
> +
> +static int mvsw_pr_port_event_handler_register(struct mvsw_pr_switch *sw)
> +{
> + return mvsw_pr_hw_event_handler_register(sw, MVSW_EVENT_TYPE_PORT,
> + mvsw_pr_port_handle_event);
> +}
> +
> +static int mvsw_pr_event_handlers_register(struct mvsw_pr_switch *sw)
> +{
> + int err;
> +
> + err = mvsw_pr_port_event_handler_register(sw);
> + if (err)
> + return err;
> +
> + err = mvsw_pr_fdb_event_handler_register(sw);
> + if (err)
> + goto err_fdb_handler_register;
> +
> + return 0;
> +
> +err_fdb_handler_register:
> + mvsw_pr_port_event_handler_unregister(sw);
> + return err;
> +}
> +
> +static int mvsw_pr_init(struct mvsw_pr_switch *sw)
> +{
> + u32 port;
> + int err;
> +
> + err = mvsw_pr_hw_switch_init(sw);
> + if (err) {
> + dev_err(mvsw_dev(sw), "Failed to init Switch device\n");
> + return err;
> + }
> +
> + dev_info(mvsw_dev(sw), "Initialized Switch device\n");

Best to remove it; it has very little value. Same in other places.

> +
> + err = mvsw_pr_switchdev_register(sw);
> + if (err)
> + return err;
> +
> + INIT_LIST_HEAD(&sw->port_list);

I understand ports cannot come and go now, but in the future you might
support port splitting and need to remove netdevs mid-operation. So
please consider how you're going to handle locking around this list.
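Something like this, as a userspace sketch of the idea (all names here are illustrative, not from the patch; in the kernel this could be a driver mutex, or relying on RTNL, guarding sw->port_list):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative only: a port list guarded by a lock, so ports can be
 * added and removed safely once port splitting (or any mid-operation
 * netdev removal) is supported. */
struct port {
	unsigned int id;
	struct port *next;
};

static struct port *port_list;
static pthread_mutex_t port_list_lock = PTHREAD_MUTEX_INITIALIZER;

static void port_list_add(struct port *p)
{
	pthread_mutex_lock(&port_list_lock);
	p->next = port_list;
	port_list = p;
	pthread_mutex_unlock(&port_list_lock);
}

static struct port *port_list_find(unsigned int id)
{
	struct port *p;

	pthread_mutex_lock(&port_list_lock);
	for (p = port_list; p; p = p->next)
		if (p->id == id)
			break;
	pthread_mutex_unlock(&port_list_lock);

	return p;
}
```

The exact locking scheme is up to you, but it is worth deciding now what protects this list before more readers and writers appear.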

> +
> + for (port = 0; port < sw->port_count; port++) {
> + err = mvsw_pr_port_create(sw, port);
> + if (err)
> + goto err_ports_init;
> + }
> +
> + err = mvsw_pr_event_handlers_register(sw);

This looks incorrect to me. IIUC, here you register handlers for port up
/ down events and FDB entries learn / age-out events. However, by this
time the netdevs are already registered and therefore it is possible
that you missed a few events.

Registering the netdevs should probably be the last thing you do during
init.
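To illustrate why the order matters, here is a toy dispatcher (not driver code): events that fire before a handler is in place are simply dropped, which is exactly what can happen to port / FDB events between register_netdev() and your handler registration.

```c
#include <assert.h>

/* Toy event dispatcher: an event delivered while no handler is
 * registered is silently lost. */
static void (*fdb_handler)(int vid);
static int handled;

static void on_fdb(int vid)
{
	(void)vid;
	handled++;
}

static void fire_event(int vid)
{
	if (fdb_handler)
		fdb_handler(vid);
	/* else: dropped on the floor */
}
```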

> + if (err)
> + goto err_ports_init;
> +
> + return 0;
> +
> +err_ports_init:
> + mvsw_pr_clear_ports(sw);
> + return err;
> +}
> +
> +static void mvsw_pr_fini(struct mvsw_pr_switch *sw)
> +{
> + mvsw_pr_event_handlers_unregister(sw);
> +

No need for this blank line

> + mvsw_pr_switchdev_unregister(sw);
> + mvsw_pr_clear_ports(sw);

This is not symmetric with respect to mvsw_pr_init().

Also, you create each port individually by calling
mvsw_pr_port_create(), but destroy all of them at once with
mvsw_pr_clear_ports(). I suggest adding mvsw_pr_ports_create(), which
would call mvsw_pr_port_create() for each port, and then pairing it
with mvsw_pr_ports_destroy() and mvsw_pr_port_destroy().

Much easier to review and less error-prone. Same for any other
function that has a reverse counterpart.
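The shape I have in mind, stubbed out with illustrative names: the plural create loops over the singular one and unwinds on failure, and the plural destroy is its exact mirror.

```c
#include <assert.h>

/* Stub: fail on an arbitrary port to exercise the unwind path. */
static int port_create(unsigned int id)
{
	return id == 42 ? -1 : 0;
}

static void port_destroy(unsigned int id)
{
	(void)id;
}

static int ports_create(unsigned int count)
{
	unsigned int i;
	int err;

	for (i = 0; i < count; i++) {
		err = port_create(i);
		if (err)
			goto err_port_create;
	}

	return 0;

err_port_create:
	/* Unwind only the ports that were actually created. */
	while (i--)
		port_destroy(i);
	return err;
}

static void ports_destroy(unsigned int count)
{
	while (count--)
		port_destroy(count);
}
```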

> +}
> +
> +int mvsw_pr_device_register(struct mvsw_pr_device *dev)
> +{
> + struct mvsw_pr_switch *sw;
> + int err;
> +
> + sw = kzalloc(sizeof(*sw), GFP_KERNEL);
> + if (!sw)
> + return -ENOMEM;
> +
> + dev->priv = sw;
> + sw->dev = dev;
> +
> + err = mvsw_pr_init(sw);
> + if (err) {
> + kfree(sw);

goto

> + return err;
> + }
> +
> + list_add(&sw->list, &switches_registered);

Looks like this list is never iterated, so best to remove it.

> +
> + return 0;
> +}
> +EXPORT_SYMBOL(mvsw_pr_device_register);
> +
> +void mvsw_pr_device_unregister(struct mvsw_pr_device *dev)
> +{
> + struct mvsw_pr_switch *sw = dev->priv;
> +
> + list_del(&sw->list);
> + mvsw_pr_fini(sw);
> + kfree(sw);
> +}
> +EXPORT_SYMBOL(mvsw_pr_device_unregister);
> +
> +static int __init mvsw_pr_module_init(void)
> +{
> + INIT_LIST_HEAD(&switches_registered);
> +
> + mvsw_pr_wq = alloc_workqueue(mvsw_driver_name, 0, 0);
> + if (!mvsw_pr_wq)
> + return -ENOMEM;
> +
> + pr_info("Loading Marvell Prestera Switch Driver\n");
> + return 0;
> +}
> +
> +static void __exit mvsw_pr_module_exit(void)
> +{
> + destroy_workqueue(mvsw_pr_wq);
> +
> + pr_info("Unloading Marvell Prestera Switch Driver\n");
> +}
> +
> +module_init(mvsw_pr_module_init);
> +module_exit(mvsw_pr_module_exit);
> +
> +MODULE_AUTHOR("Marvell Semi.");
> +MODULE_LICENSE("GPL");
> +MODULE_DESCRIPTION("Marvell Prestera switch driver");
> +MODULE_VERSION(PRESTERA_DRV_VER);
> diff --git a/drivers/net/ethernet/marvell/prestera/prestera.h b/drivers/net/ethernet/marvell/prestera/prestera.h
> new file mode 100644
> index 000000000000..cbc6b0c78937
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/prestera/prestera.h
> @@ -0,0 +1,244 @@
> +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> + *
> + * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> + *
> + */
> +
> +#ifndef _MVSW_PRESTERA_H_
> +#define _MVSW_PRESTERA_H_
> +
> +#include <linux/skbuff.h>
> +#include <linux/notifier.h>
> +#include <uapi/linux/if_ether.h>
> +#include <linux/workqueue.h>
> +
> +#define MVSW_MSG_MAX_SIZE 1500
> +
> +#define MVSW_PR_DEFAULT_VID 1
> +
> +#define MVSW_PR_MIN_AGEING_TIME 10
> +#define MVSW_PR_MAX_AGEING_TIME 1000000
> +#define MVSW_PR_DEFAULT_AGEING_TIME 300

I assume this is in seconds, so you might want to note the unit in the
macro name or in a comment.

> +
> +struct mvsw_fw_rev {
> + u16 maj;
> + u16 min;
> + u16 sub;
> +};
> +
> +struct mvsw_pr_bridge_port;
> +
> +struct mvsw_pr_port_vlan {
> + struct list_head list;
> + struct mvsw_pr_port *mvsw_pr_port;
> + u16 vid;
> + struct mvsw_pr_bridge_port *bridge_port;
> + struct list_head bridge_vlan_node;
> +};
> +
> +struct mvsw_pr_port_stats {
> + u64 good_octets_received;
> + u64 bad_octets_received;
> + u64 mac_trans_error;
> + u64 broadcast_frames_received;
> + u64 multicast_frames_received;
> + u64 frames_64_octets;
> + u64 frames_65_to_127_octets;
> + u64 frames_128_to_255_octets;
> + u64 frames_256_to_511_octets;
> + u64 frames_512_to_1023_octets;
> + u64 frames_1024_to_max_octets;
> + u64 excessive_collision;
> + u64 multicast_frames_sent;
> + u64 broadcast_frames_sent;
> + u64 fc_sent;
> + u64 fc_received;
> + u64 buffer_overrun;
> + u64 undersize;
> + u64 fragments;
> + u64 oversize;
> + u64 jabber;
> + u64 rx_error_frame_received;
> + u64 bad_crc;
> + u64 collisions;
> + u64 late_collision;
> + u64 unicast_frames_received;
> + u64 unicast_frames_sent;
> + u64 sent_multiple;
> + u64 sent_deferred;
> + u64 frames_1024_to_1518_octets;
> + u64 frames_1519_to_max_octets;
> + u64 good_octets_sent;
> +};
> +
> +struct mvsw_pr_port_caps {
> + u64 supp_link_modes;
> + u8 supp_fec;
> + u8 type;
> + u8 transceiver;
> +};
> +
> +struct mvsw_pr_port {
> + struct net_device *net_dev;
> + struct mvsw_pr_switch *sw;
> + u32 id;
> + u32 hw_id;
> + u32 dev_id;
> + u16 fp_id;
> + u16 pvid;
> + bool autoneg;
> + u64 adver_link_modes;
> + u8 adver_fec;
> + struct mvsw_pr_port_caps caps;
> + struct list_head list;
> + struct list_head vlans_list;
> + struct {
> + struct mvsw_pr_port_stats stats;
> + struct delayed_work caching_dw;
> + } cached_hw_stats;
> +};
> +
> +struct mvsw_pr_switchdev {
> + struct mvsw_pr_switch *sw;
> + struct notifier_block swdev_n;
> + struct notifier_block swdev_blocking_n;
> +};
> +
> +struct mvsw_pr_fib {
> + struct mvsw_pr_switch *sw;
> + struct notifier_block fib_nb;
> + struct notifier_block netevent_nb;
> +};

Does not seem to be used anywhere.

> +
> +struct mvsw_pr_device {
> + struct device *dev;
> + struct mvsw_fw_rev fw_rev;
> + void *priv;
> +
> + /* called by device driver to pass event up to the higher layer */
> + int (*recv_msg)(struct mvsw_pr_device *dev, u8 *msg, size_t size);
> +
> + /* called by higher layer to send request to the firmware */
> + int (*send_req)(struct mvsw_pr_device *dev, u8 *in_msg,
> + size_t in_size, u8 *out_msg, size_t out_size,
> + unsigned int wait);
> +};
> +
> +enum mvsw_pr_event_type {
> + MVSW_EVENT_TYPE_UNSPEC,
> + MVSW_EVENT_TYPE_PORT,
> + MVSW_EVENT_TYPE_FDB,
> +
> + MVSW_EVENT_TYPE_MAX,
> +};
> +
> +enum mvsw_pr_port_event_id {
> + MVSW_PORT_EVENT_UNSPEC,
> + MVSW_PORT_EVENT_STATE_CHANGED,
> +
> + MVSW_PORT_EVENT_MAX,
> +};
> +
> +enum mvsw_pr_fdb_event_id {
> + MVSW_FDB_EVENT_UNSPEC,
> + MVSW_FDB_EVENT_LEARNED,
> + MVSW_FDB_EVENT_AGED,
> +
> + MVSW_FDB_EVENT_MAX,
> +};
> +
> +struct mvsw_pr_fdb_event {
> + u32 port_id;
> + u32 vid;
> + union {
> + u8 mac[ETH_ALEN];
> + } data;
> +};
> +
> +struct mvsw_pr_port_event {
> + u32 port_id;
> + union {
> + u32 oper_state;
> + } data;
> +};
> +
> +struct mvsw_pr_event {
> + u16 id;
> + union {
> + struct mvsw_pr_port_event port_evt;
> + struct mvsw_pr_fdb_event fdb_evt;
> + };
> +};
> +
> +struct mvsw_pr_bridge;
> +
> +struct mvsw_pr_switch {
> + struct list_head list;
> + struct mvsw_pr_device *dev;
> + struct list_head event_handlers;
> + char base_mac[ETH_ALEN];
> + struct list_head port_list;
> + u32 port_count;
> + u32 mtu_min;
> + u32 mtu_max;
> + u8 id;
> + struct mvsw_pr_bridge *bridge;
> + struct mvsw_pr_switchdev *switchdev;
> + struct mvsw_pr_fib *fib;

Not used

> + struct notifier_block netdevice_nb;
> +};
> +
> +enum mvsw_pr_fdb_flush_mode {
> + MVSW_PR_FDB_FLUSH_MODE_DYNAMIC = BIT(0),
> + MVSW_PR_FDB_FLUSH_MODE_STATIC = BIT(1),
> + MVSW_PR_FDB_FLUSH_MODE_ALL = MVSW_PR_FDB_FLUSH_MODE_DYNAMIC
> + | MVSW_PR_FDB_FLUSH_MODE_STATIC,
> +};
> +
> +int mvsw_pr_switch_ageing_set(struct mvsw_pr_switch *sw, u32 ageing_time);
> +
> +int mvsw_pr_port_learning_set(struct mvsw_pr_port *mvsw_pr_port,
> + bool learn_enable);
> +int mvsw_pr_port_flood_set(struct mvsw_pr_port *mvsw_pr_port, bool flood);
> +int mvsw_pr_port_pvid_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
> +struct mvsw_pr_port_vlan *
> +mvsw_pr_port_vlan_create(struct mvsw_pr_port *mvsw_pr_port, u16 vid);
> +void mvsw_pr_port_vlan_destroy(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
> +int mvsw_pr_port_vlan_set(struct mvsw_pr_port *mvsw_pr_port, u16 vid,
> + bool is_member, bool untagged);
> +
> +int mvsw_pr_8021d_bridge_create(struct mvsw_pr_switch *sw, u16 *bridge_id);
> +int mvsw_pr_8021d_bridge_delete(struct mvsw_pr_switch *sw, u16 bridge_id);
> +int mvsw_pr_8021d_bridge_port_add(struct mvsw_pr_port *mvsw_pr_port,
> + u16 bridge_id);
> +int mvsw_pr_8021d_bridge_port_delete(struct mvsw_pr_port *mvsw_pr_port,
> + u16 bridge_id);
> +
> +int mvsw_pr_fdb_add(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
> + u16 vid, bool dynamic);
> +int mvsw_pr_fdb_del(struct mvsw_pr_port *mvsw_pr_port, const unsigned char *mac,
> + u16 vid);
> +int mvsw_pr_fdb_flush_vlan(struct mvsw_pr_switch *sw, u16 vid,
> + enum mvsw_pr_fdb_flush_mode mode);
> +int mvsw_pr_fdb_flush_port_vlan(struct mvsw_pr_port *port, u16 vid,
> + enum mvsw_pr_fdb_flush_mode mode);
> +int mvsw_pr_fdb_flush_port(struct mvsw_pr_port *port,
> + enum mvsw_pr_fdb_flush_mode mode);
> +
> +struct mvsw_pr_port_vlan *
> +mvsw_pr_port_vlan_find_by_vid(const struct mvsw_pr_port *mvsw_pr_port, u16 vid);
> +void
> +mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *mvsw_pr_port_vlan);
> +
> +int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw);
> +void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw);
> +
> +int mvsw_pr_device_register(struct mvsw_pr_device *dev);
> +void mvsw_pr_device_unregister(struct mvsw_pr_device *dev);
> +
> +bool mvsw_pr_netdev_check(const struct net_device *dev);
> +struct mvsw_pr_port *mvsw_pr_port_dev_lower_find(struct net_device *dev);
> +
> +const struct mvsw_pr_port *mvsw_pr_port_find(u32 dev_hw_id, u32 port_hw_id);
> +
> +#endif /* _MVSW_PRESTERA_H_ */
> diff --git a/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> new file mode 100644
> index 000000000000..d6617a16d7e1
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/prestera/prestera_drv_ver.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> + *
> + * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> + *
> + */
> +#ifndef _PRESTERA_DRV_VER_H_
> +#define _PRESTERA_DRV_VER_H_
> +
> +#include <linux/stringify.h>
> +
> +/* Prestera driver version */
> +#define PRESTERA_DRV_VER_MAJOR 1
> +#define PRESTERA_DRV_VER_MINOR 0
> +#define PRESTERA_DRV_VER_PATCH 0
> +#define PRESTERA_DRV_VER_EXTRA
> +
> +#define PRESTERA_DRV_VER \
> + __stringify(PRESTERA_DRV_VER_MAJOR) "." \
> + __stringify(PRESTERA_DRV_VER_MINOR) "." \
> + __stringify(PRESTERA_DRV_VER_PATCH) \
> + __stringify(PRESTERA_DRV_VER_EXTRA)
> +
> +#endif /* _PRESTERA_DRV_VER_H_ */
> diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.c b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
> new file mode 100644
> index 000000000000..c97bafdd734e
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.c
> @@ -0,0 +1,1094 @@
> +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> + *
> + * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> + *
> + */
> +#include <linux/etherdevice.h>
> +#include <linux/ethtool.h>
> +#include <linux/netdevice.h>
> +#include <linux/list.h>
> +
> +#include "prestera.h"
> +#include "prestera_hw.h"
> +
> +#define MVSW_PR_INIT_TIMEOUT 30000000 /* 30sec */
> +#define MVSW_PR_MIN_MTU 64
> +
> +enum mvsw_msg_type {
> + MVSW_MSG_TYPE_SWITCH_UNSPEC,
> + MVSW_MSG_TYPE_SWITCH_INIT,
> +
> + MVSW_MSG_TYPE_AGEING_TIMEOUT_SET,
> +
> + MVSW_MSG_TYPE_PORT_ATTR_SET,
> + MVSW_MSG_TYPE_PORT_ATTR_GET,
> + MVSW_MSG_TYPE_PORT_INFO_GET,
> +
> + MVSW_MSG_TYPE_VLAN_CREATE,
> + MVSW_MSG_TYPE_VLAN_DELETE,
> + MVSW_MSG_TYPE_VLAN_PORT_SET,
> + MVSW_MSG_TYPE_VLAN_PVID_SET,
> +
> + MVSW_MSG_TYPE_FDB_ADD,
> + MVSW_MSG_TYPE_FDB_DELETE,
> + MVSW_MSG_TYPE_FDB_FLUSH_PORT,
> + MVSW_MSG_TYPE_FDB_FLUSH_VLAN,
> + MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN,
> +
> + MVSW_MSG_TYPE_LOG_LEVEL_SET,
> +
> + MVSW_MSG_TYPE_BRIDGE_CREATE,
> + MVSW_MSG_TYPE_BRIDGE_DELETE,
> + MVSW_MSG_TYPE_BRIDGE_PORT_ADD,
> + MVSW_MSG_TYPE_BRIDGE_PORT_DELETE,
> +
> + MVSW_MSG_TYPE_ACK,
> + MVSW_MSG_TYPE_MAX
> +};
> +
> +enum mvsw_msg_port_attr {
> + MVSW_MSG_PORT_ATTR_ADMIN_STATE,
> + MVSW_MSG_PORT_ATTR_OPER_STATE,
> + MVSW_MSG_PORT_ATTR_MTU,
> + MVSW_MSG_PORT_ATTR_MAC,
> + MVSW_MSG_PORT_ATTR_SPEED,
> + MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
> + MVSW_MSG_PORT_ATTR_LEARNING,
> + MVSW_MSG_PORT_ATTR_FLOOD,
> + MVSW_MSG_PORT_ATTR_CAPABILITY,
> + MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
> + MVSW_MSG_PORT_ATTR_LINK_MODE,
> + MVSW_MSG_PORT_ATTR_TYPE,
> + MVSW_MSG_PORT_ATTR_FEC,
> + MVSW_MSG_PORT_ATTR_AUTONEG,
> + MVSW_MSG_PORT_ATTR_DUPLEX,
> + MVSW_MSG_PORT_ATTR_STATS,
> + MVSW_MSG_PORT_ATTR_MDIX,
> + MVSW_MSG_PORT_ATTR_MAX
> +};
> +
> +enum {
> + MVSW_MSG_ACK_OK,
> + MVSW_MSG_ACK_FAILED,
> + MVSW_MSG_ACK_MAX
> +};
> +
> +enum {
> + MVSW_MODE_FORCED_MDI,
> + MVSW_MODE_FORCED_MDIX,
> + MVSW_MODE_AUTO_MDI,
> + MVSW_MODE_AUTO_MDIX,
> + MVSW_MODE_AUTO
> +};
> +
> +enum {
> + MVSW_PORT_GOOD_OCTETS_RCV_CNT,
> + MVSW_PORT_BAD_OCTETS_RCV_CNT,
> + MVSW_PORT_MAC_TRANSMIT_ERR_CNT,
> + MVSW_PORT_BRDC_PKTS_RCV_CNT,
> + MVSW_PORT_MC_PKTS_RCV_CNT,
> + MVSW_PORT_PKTS_64_OCTETS_CNT,
> + MVSW_PORT_PKTS_65TO127_OCTETS_CNT,
> + MVSW_PORT_PKTS_128TO255_OCTETS_CNT,
> + MVSW_PORT_PKTS_256TO511_OCTETS_CNT,
> + MVSW_PORT_PKTS_512TO1023_OCTETS_CNT,
> + MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT,
> + MVSW_PORT_EXCESSIVE_COLLISIONS_CNT,
> + MVSW_PORT_MC_PKTS_SENT_CNT,
> + MVSW_PORT_BRDC_PKTS_SENT_CNT,
> + MVSW_PORT_FC_SENT_CNT,
> + MVSW_PORT_GOOD_FC_RCV_CNT,
> + MVSW_PORT_DROP_EVENTS_CNT,
> + MVSW_PORT_UNDERSIZE_PKTS_CNT,
> + MVSW_PORT_FRAGMENTS_PKTS_CNT,
> + MVSW_PORT_OVERSIZE_PKTS_CNT,
> + MVSW_PORT_JABBER_PKTS_CNT,
> + MVSW_PORT_MAC_RCV_ERROR_CNT,
> + MVSW_PORT_BAD_CRC_CNT,
> + MVSW_PORT_COLLISIONS_CNT,
> + MVSW_PORT_LATE_COLLISIONS_CNT,
> + MVSW_PORT_GOOD_UC_PKTS_RCV_CNT,
> + MVSW_PORT_GOOD_UC_PKTS_SENT_CNT,
> + MVSW_PORT_MULTIPLE_PKTS_SENT_CNT,
> + MVSW_PORT_DEFERRED_PKTS_SENT_CNT,
> + MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT,
> + MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT,
> + MVSW_PORT_GOOD_OCTETS_SENT_CNT,
> + MVSW_PORT_CNT_MAX,
> +};
> +
> +struct mvsw_msg_cmd {
> + u32 type;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_ret {
> + struct mvsw_msg_cmd cmd;
> + u32 status;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_common_request {
> + struct mvsw_msg_cmd cmd;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_common_response {
> + struct mvsw_msg_ret ret;
> +} __packed __aligned(4);
> +
> +union mvsw_msg_switch_param {
> + u32 ageing_timeout;
> +};
> +
> +struct mvsw_msg_switch_attr_cmd {
> + struct mvsw_msg_cmd cmd;
> + union mvsw_msg_switch_param param;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_switch_init_ret {
> + struct mvsw_msg_ret ret;
> + u32 port_count;
> + u32 mtu_max;
> + u8 switch_id;
> + u8 mac[ETH_ALEN];
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_port_autoneg_param {
> + u64 link_mode;
> + u8 enable;
> + u8 fec;
> +};
> +
> +struct mvsw_msg_port_cap_param {
> + u64 link_mode;
> + u8 type;
> + u8 fec;
> + u8 transceiver;
> +};
> +
> +union mvsw_msg_port_param {
> + u8 admin_state;
> + u8 oper_state;
> + u32 mtu;
> + u8 mac[ETH_ALEN];
> + u8 accept_frm_type;
> + u8 learning;
> + u32 speed;
> + u8 flood;
> + u32 link_mode;
> + u8 type;
> + u8 duplex;
> + u8 fec;
> + u8 mdix;
> + struct mvsw_msg_port_autoneg_param autoneg;
> + struct mvsw_msg_port_cap_param cap;
> +};
> +
> +struct mvsw_msg_port_attr_cmd {
> + struct mvsw_msg_cmd cmd;
> + u32 attr;
> + u32 port;
> + u32 dev;
> + union mvsw_msg_port_param param;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_port_attr_ret {
> + struct mvsw_msg_ret ret;
> + union mvsw_msg_port_param param;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_port_stats_ret {
> + struct mvsw_msg_ret ret;
> + u64 stats[MVSW_PORT_CNT_MAX];
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_port_info_cmd {
> + struct mvsw_msg_cmd cmd;
> + u32 port;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_port_info_ret {
> + struct mvsw_msg_ret ret;
> + u32 hw_id;
> + u32 dev_id;
> + u16 fp_id;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_vlan_cmd {
> + struct mvsw_msg_cmd cmd;
> + u32 port;
> + u32 dev;
> + u16 vid;
> + u8 is_member;
> + u8 is_tagged;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_fdb_cmd {
> + struct mvsw_msg_cmd cmd;
> + u32 port;
> + u32 dev;
> + u8 mac[ETH_ALEN];
> + u16 vid;
> + u8 dynamic;
> + u32 flush_mode;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_event {
> + u16 type;
> + u16 id;
> +} __packed __aligned(4);
> +
> +union mvsw_msg_event_fdb_param {
> + u8 mac[ETH_ALEN];
> +};
> +
> +struct mvsw_msg_event_fdb {
> + struct mvsw_msg_event id;
> + u32 port_id;
> + u32 vid;
> + union mvsw_msg_event_fdb_param param;
> +} __packed __aligned(4);
> +
> +union mvsw_msg_event_port_param {
> + u32 oper_state;
> +};
> +
> +struct mvsw_msg_event_port {
> + struct mvsw_msg_event id;
> + u32 port_id;
> + union mvsw_msg_event_port_param param;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_bridge_cmd {
> + struct mvsw_msg_cmd cmd;
> + u32 port;
> + u32 dev;
> + u16 bridge;
> +} __packed __aligned(4);
> +
> +struct mvsw_msg_bridge_ret {
> + struct mvsw_msg_ret ret;
> + u16 bridge;
> +} __packed __aligned(4);
> +
> +#define fw_check_resp(_response) \
> +({ \
> + int __er = 0; \
> + typeof(_response) __r = (_response); \
> + if (__r->ret.cmd.type != MVSW_MSG_TYPE_ACK) \
> + __er = -EBADE; \
> + else if (__r->ret.status != MVSW_MSG_ACK_OK) \
> + __er = -EINVAL; \
> + (__er); \
> +})
> +
> +#define __fw_send_req_resp(_switch, _type, _request, _response, _wait) \
> +({ \
> + int __e; \
> + typeof(_switch) __sw = (_switch); \
> + typeof(_request) __req = (_request); \
> + typeof(_response) __resp = (_response); \
> + __req->cmd.type = (_type); \
> + __e = __sw->dev->send_req(__sw->dev, \
> + (u8 *)__req, sizeof(*__req), \
> + (u8 *)__resp, sizeof(*__resp), \
> + _wait); \
> + if (!__e) \
> + __e = fw_check_resp(__resp); \
> + (__e); \
> +})
> +
> +#define fw_send_req_resp(_sw, _t, _req, _resp) \
> + __fw_send_req_resp(_sw, _t, _req, _resp, 0)
> +
> +#define fw_send_req_resp_wait(_sw, _t, _req, _resp, _wait) \
> + __fw_send_req_resp(_sw, _t, _req, _resp, _wait)
> +
> +#define fw_send_req(_sw, _t, _req) \
> +({ \
> + struct mvsw_msg_common_response __re; \
> + (fw_send_req_resp(_sw, _t, _req, &__re)); \
> +})
> +
> +struct mvsw_fw_event_handler {
> + struct list_head list;
> + enum mvsw_pr_event_type type;
> + void (*func)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt);
> +};
> +
> +static int fw_parse_port_evt(u8 *msg, struct mvsw_pr_event *evt)
> +{
> + struct mvsw_msg_event_port *hw_evt = (struct mvsw_msg_event_port *)msg;
> +
> + evt->port_evt.port_id = hw_evt->port_id;
> +
> + if (evt->id == MVSW_PORT_EVENT_STATE_CHANGED)
> + evt->port_evt.data.oper_state = hw_evt->param.oper_state;
> + else
> + return -EINVAL;
> +
> + return 0;
> +}
> +
> +static int fw_parse_fdb_evt(u8 *msg, struct mvsw_pr_event *evt)
> +{
> + struct mvsw_msg_event_fdb *hw_evt = (struct mvsw_msg_event_fdb *)msg;
> +
> + evt->fdb_evt.port_id = hw_evt->port_id;
> + evt->fdb_evt.vid = hw_evt->vid;
> +
> + memcpy(&evt->fdb_evt.data, &hw_evt->param, sizeof(u8) * ETH_ALEN);
> +
> + return 0;
> +}
> +
> +struct mvsw_fw_evt_parser {
> + int (*func)(u8 *msg, struct mvsw_pr_event *evt);
> +};
> +
> +static struct mvsw_fw_evt_parser fw_event_parsers[MVSW_EVENT_TYPE_MAX] = {
> + [MVSW_EVENT_TYPE_PORT] = {.func = fw_parse_port_evt},
> + [MVSW_EVENT_TYPE_FDB] = {.func = fw_parse_fdb_evt},
> +};
> +
> +static struct mvsw_fw_event_handler *
> +__find_event_handler(const struct mvsw_pr_switch *sw,
> + enum mvsw_pr_event_type type)
> +{
> + struct mvsw_fw_event_handler *eh;
> +
> + list_for_each_entry_rcu(eh, &sw->event_handlers, list) {
> + if (eh->type == type)
> + return eh;
> + }
> +
> + return NULL;
> +}
> +
> +static int fw_event_recv(struct mvsw_pr_device *dev, u8 *buf, size_t size)
> +{
> + void (*cb)(struct mvsw_pr_switch *sw, struct mvsw_pr_event *evt) = NULL;
> + struct mvsw_msg_event *msg = (struct mvsw_msg_event *)buf;
> + struct mvsw_pr_switch *sw = dev->priv;
> + struct mvsw_fw_event_handler *eh;
> + struct mvsw_pr_event evt;
> + int err;
> +
> + if (msg->type >= MVSW_EVENT_TYPE_MAX)
> + return -EINVAL;
> +
> + rcu_read_lock();
> + eh = __find_event_handler(sw, msg->type);
> + if (eh)
> + cb = eh->func;
> + rcu_read_unlock();
> +
> + if (!cb || !fw_event_parsers[msg->type].func)
> + return 0;
> +
> + evt.id = msg->id;
> +
> + err = fw_event_parsers[msg->type].func(buf, &evt);
> + if (!err)
> + cb(sw, &evt);
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,
> + u16 *fp_id, u32 *hw_id, u32 *dev_id)
> +{
> + struct mvsw_msg_port_info_ret resp;
> + struct mvsw_msg_port_info_cmd req = {
> + .port = port->id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_INFO_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *hw_id = resp.hw_id;
> + *dev_id = resp.dev_id;
> + *fp_id = resp.fp_id;
> +
> + return 0;
> +}
> +
> +int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw)
> +{
> + struct mvsw_msg_switch_init_ret resp;
> + struct mvsw_msg_common_request req;
> + int err = 0;
> +
> + INIT_LIST_HEAD(&sw->event_handlers);
> +
> + err = fw_send_req_resp_wait(sw, MVSW_MSG_TYPE_SWITCH_INIT, &req, &resp,
> + MVSW_PR_INIT_TIMEOUT);
> + if (err)
> + return err;
> +
> + sw->id = resp.switch_id;
> + sw->port_count = resp.port_count;
> + sw->mtu_min = MVSW_PR_MIN_MTU;
> + sw->mtu_max = resp.mtu_max;
> + sw->dev->recv_msg = fw_event_recv;
> + memcpy(sw->base_mac, resp.mac, ETH_ALEN);
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
> + u32 ageing_time)
> +{
> + struct mvsw_msg_switch_attr_cmd req = {
> + .param = {.ageing_timeout = ageing_time}
> + };
> +
> + return fw_send_req(sw, MVSW_MSG_TYPE_AGEING_TIMEOUT_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
> + bool admin_state)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.admin_state = admin_state ? 1 : 0}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
> + bool *admin_state, bool *oper_state)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + if (admin_state) {
> + req.attr = MVSW_MSG_PORT_ATTR_ADMIN_STATE;
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> + *admin_state = resp.param.admin_state != 0;
> + }
> +
> + if (oper_state) {
> + req.attr = MVSW_MSG_PORT_ATTR_OPER_STATE;
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> + *oper_state = resp.param.oper_state != 0;
> + }
> +
> + return 0;
> +}
> +
> +int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MTU,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.mtu = mtu}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MTU,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *mtu = resp.param.mtu;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MAC,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + memcpy(&req.param.mac, mac, sizeof(req.param.mac));
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MAC,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + memcpy(mac, resp.param.mac, sizeof(resp.param.mac));
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
> + enum mvsw_pr_accept_frame_type type)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_ACCEPT_FRAME_TYPE,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.accept_frm_type = type}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_LEARNING,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.learning = enable ? 1 : 0}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
> + enum mvsw_pr_event_type type,
> + void (*cb)(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt))
> +{
> + struct mvsw_fw_event_handler *eh;
> +
> + eh = __find_event_handler(sw, type);
> + if (eh)
> + return -EEXIST;
> + eh = kmalloc(sizeof(*eh), GFP_KERNEL);
> + if (!eh)
> + return -ENOMEM;
> +
> + eh->type = type;
> + eh->func = cb;
> +
> + INIT_LIST_HEAD(&eh->list);
> +
> + list_add_rcu(&eh->list, &sw->event_handlers);
> +
> + return 0;
> +}
> +
> +void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
> + enum mvsw_pr_event_type type,
> + void (*cb)(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt))
> +{
> + struct mvsw_fw_event_handler *eh;
> +
> + eh = __find_event_handler(sw, type);
> + if (!eh)
> + return;
> +
> + list_del_rcu(&eh->list);
> + synchronize_rcu();
> + kfree(eh);
> +}
> +
> +int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid)
> +{
> + struct mvsw_msg_vlan_cmd req = {
> + .vid = vid,
> + };
> +
> + return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_CREATE, &req);
> +}
> +
> +int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid)
> +{
> + struct mvsw_msg_vlan_cmd req = {
> + .vid = vid,
> + };
> +
> + return fw_send_req(sw, MVSW_MSG_TYPE_VLAN_DELETE, &req);
> +}
> +
> +int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
> + u16 vid, bool is_member, bool untagged)
> +{
> + struct mvsw_msg_vlan_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .vid = vid,
> + .is_member = is_member ? 1 : 0,
> + .is_tagged = untagged ? 0 : 1
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PORT_SET, &req);
> +}
> +
> +int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid)
> +{
> + struct mvsw_msg_vlan_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .vid = vid
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_VLAN_PVID_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_SPEED,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *speed = resp.param.speed;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_FLOOD,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.flood = flood ? 1 : 0}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
> + const unsigned char *mac, u16 vid, bool dynamic)
> +{
> + struct mvsw_msg_fdb_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .vid = vid,
> + .dynamic = dynamic ? 1 : 0
> + };
> +
> + memcpy(req.mac, mac, sizeof(req.mac));
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_ADD, &req);
> +}
> +
> +int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
> + const unsigned char *mac, u16 vid)
> +{
> + struct mvsw_msg_fdb_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .vid = vid
> + };
> +
> + memcpy(req.mac, mac, sizeof(req.mac));
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_DELETE, &req);
> +}
> +
> +int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
> + struct mvsw_pr_port_caps *caps)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_CAPABILITY,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + caps->supp_link_modes = resp.param.cap.link_mode;
> + caps->supp_fec = resp.param.cap.fec;
> + caps->type = resp.param.cap.type;
> + caps->transceiver = resp.param.cap.transceiver;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
> + u64 *link_mode_bitmap)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_REMOTE_CAPABILITY,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *link_mode_bitmap = resp.param.cap.link_mode;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MDIX,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + switch (resp.param.mdix) {
> + case MVSW_MODE_FORCED_MDI:
> + case MVSW_MODE_AUTO_MDI:
> + *mode = ETH_TP_MDI;
> + break;
> +
> + case MVSW_MODE_FORCED_MDIX:
> + case MVSW_MODE_AUTO_MDIX:
> + *mode = ETH_TP_MDI_X;
> + break;
> +
> + default:
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_MDIX,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> +
> + switch (mode) {
> + case ETH_TP_MDI:
> + req.param.mdix = MVSW_MODE_FORCED_MDI;
> + break;
> +
> + case ETH_TP_MDI_X:
> + req.param.mdix = MVSW_MODE_FORCED_MDIX;
> + break;
> +
> + case ETH_TP_MDI_AUTO:
> + req.param.mdix = MVSW_MODE_AUTO;
> + break;
> +
> + default:
> + return -EINVAL;
> + }
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_TYPE,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *type = resp.param.type;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_FEC,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *fec = resp.param.fec;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_FEC,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.fec = fec}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
> + bool autoneg, u64 link_modes, u8 fec)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_AUTONEG,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.autoneg = {.link_mode = link_modes,
> + .enable = autoneg ? 1 : 0,
> + .fec = fec}
> + }
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> +
> +int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_DUPLEX,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *duplex = resp.param.duplex;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
> + struct mvsw_pr_port_stats *stats)
> +{
> + struct mvsw_msg_port_stats_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_STATS,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + u64 *hw_val = resp.stats;
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + stats->good_octets_received = hw_val[MVSW_PORT_GOOD_OCTETS_RCV_CNT];
> + stats->bad_octets_received = hw_val[MVSW_PORT_BAD_OCTETS_RCV_CNT];
> + stats->mac_trans_error = hw_val[MVSW_PORT_MAC_TRANSMIT_ERR_CNT];
> + stats->broadcast_frames_received = hw_val[MVSW_PORT_BRDC_PKTS_RCV_CNT];
> + stats->multicast_frames_received = hw_val[MVSW_PORT_MC_PKTS_RCV_CNT];
> + stats->frames_64_octets = hw_val[MVSW_PORT_PKTS_64_OCTETS_CNT];
> + stats->frames_65_to_127_octets =
> + hw_val[MVSW_PORT_PKTS_65TO127_OCTETS_CNT];
> + stats->frames_128_to_255_octets =
> + hw_val[MVSW_PORT_PKTS_128TO255_OCTETS_CNT];
> + stats->frames_256_to_511_octets =
> + hw_val[MVSW_PORT_PKTS_256TO511_OCTETS_CNT];
> + stats->frames_512_to_1023_octets =
> + hw_val[MVSW_PORT_PKTS_512TO1023_OCTETS_CNT];
> + stats->frames_1024_to_max_octets =
> + hw_val[MVSW_PORT_PKTS_1024TOMAX_OCTETS_CNT];
> + stats->excessive_collision = hw_val[MVSW_PORT_EXCESSIVE_COLLISIONS_CNT];
> + stats->multicast_frames_sent = hw_val[MVSW_PORT_MC_PKTS_SENT_CNT];
> + stats->broadcast_frames_sent = hw_val[MVSW_PORT_BRDC_PKTS_SENT_CNT];
> + stats->fc_sent = hw_val[MVSW_PORT_FC_SENT_CNT];
> + stats->fc_received = hw_val[MVSW_PORT_GOOD_FC_RCV_CNT];
> + stats->buffer_overrun = hw_val[MVSW_PORT_DROP_EVENTS_CNT];
> + stats->undersize = hw_val[MVSW_PORT_UNDERSIZE_PKTS_CNT];
> + stats->fragments = hw_val[MVSW_PORT_FRAGMENTS_PKTS_CNT];
> + stats->oversize = hw_val[MVSW_PORT_OVERSIZE_PKTS_CNT];
> + stats->jabber = hw_val[MVSW_PORT_JABBER_PKTS_CNT];
> + stats->rx_error_frame_received = hw_val[MVSW_PORT_MAC_RCV_ERROR_CNT];
> + stats->bad_crc = hw_val[MVSW_PORT_BAD_CRC_CNT];
> + stats->collisions = hw_val[MVSW_PORT_COLLISIONS_CNT];
> + stats->late_collision = hw_val[MVSW_PORT_LATE_COLLISIONS_CNT];
> + stats->unicast_frames_received = hw_val[MVSW_PORT_GOOD_UC_PKTS_RCV_CNT];
> + stats->unicast_frames_sent = hw_val[MVSW_PORT_GOOD_UC_PKTS_SENT_CNT];
> + stats->sent_multiple = hw_val[MVSW_PORT_MULTIPLE_PKTS_SENT_CNT];
> + stats->sent_deferred = hw_val[MVSW_PORT_DEFERRED_PKTS_SENT_CNT];
> + stats->frames_1024_to_1518_octets =
> + hw_val[MVSW_PORT_PKTS_1024TO1518_OCTETS_CNT];
> + stats->frames_1519_to_max_octets =
> + hw_val[MVSW_PORT_PKTS_1519TOMAX_OCTETS_CNT];
> + stats->good_octets_sent = hw_val[MVSW_PORT_GOOD_OCTETS_SENT_CNT];
> +
> + return 0;
> +}
> +
> +int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id)
> +{
> + struct mvsw_msg_bridge_cmd req;
> + struct mvsw_msg_bridge_ret resp;
> + int err;
> +
> + err = fw_send_req_resp(sw, MVSW_MSG_TYPE_BRIDGE_CREATE, &req, &resp);
> + if (err)
> + return err;
> +
> + *bridge_id = resp.bridge;
> + return err;
> +}
> +
> +int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id)
> +{
> + struct mvsw_msg_bridge_cmd req = {
> + .bridge = bridge_id
> + };
> +
> + return fw_send_req(sw, MVSW_MSG_TYPE_BRIDGE_DELETE, &req);
> +}
> +
> +int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id)
> +{
> + struct mvsw_msg_bridge_cmd req = {
> + .bridge = bridge_id,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_ADD, &req);
> +}
> +
> +int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
> + u16 bridge_id)
> +{
> + struct mvsw_msg_bridge_cmd req = {
> + .bridge = bridge_id,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_BRIDGE_PORT_DELETE, &req);
> +}
> +
> +int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode)
> +{
> + struct mvsw_msg_fdb_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .flush_mode = mode,
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT, &req);
> +}
> +
> +int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
> + u32 mode)
> +{
> + struct mvsw_msg_fdb_cmd req = {
> + .vid = vid,
> + .flush_mode = mode,
> + };
> +
> + return fw_send_req(sw, MVSW_MSG_TYPE_FDB_FLUSH_VLAN, &req);
> +}
> +
> +int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
> + u32 mode)
> +{
> + struct mvsw_msg_fdb_cmd req = {
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .vid = vid,
> + .flush_mode = mode,
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_FDB_FLUSH_PORT_VLAN, &req);
> +}
> +
> +int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
> + u32 *mode)
> +{
> + struct mvsw_msg_port_attr_ret resp;
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
> + .port = port->hw_id,
> + .dev = port->dev_id
> + };
> + int err;
> +
> + err = fw_send_req_resp(port->sw, MVSW_MSG_TYPE_PORT_ATTR_GET,
> + &req, &resp);
> + if (err)
> + return err;
> +
> + *mode = resp.param.link_mode;
> +
> + return err;
> +}
> +
> +int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
> + u32 mode)
> +{
> + struct mvsw_msg_port_attr_cmd req = {
> + .attr = MVSW_MSG_PORT_ATTR_LINK_MODE,
> + .port = port->hw_id,
> + .dev = port->dev_id,
> + .param = {.link_mode = mode}
> + };
> +
> + return fw_send_req(port->sw, MVSW_MSG_TYPE_PORT_ATTR_SET, &req);
> +}
> diff --git a/drivers/net/ethernet/marvell/prestera/prestera_hw.h b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
> new file mode 100644
> index 000000000000..dfae2631160e
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/prestera/prestera_hw.h
> @@ -0,0 +1,159 @@
> +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> + *
> + * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> + *
> + */
> +
> +#ifndef _MVSW_PRESTERA_HW_H_
> +#define _MVSW_PRESTERA_HW_H_
> +
> +#include <linux/types.h>
> +
> +enum mvsw_pr_accept_frame_type {
> + MVSW_ACCEPT_FRAME_TYPE_TAGGED,
> + MVSW_ACCEPT_FRAME_TYPE_UNTAGGED,
> + MVSW_ACCEPT_FRAME_TYPE_ALL
> +};
> +
> +enum {
> + MVSW_LINK_MODE_10baseT_Half_BIT,
> + MVSW_LINK_MODE_10baseT_Full_BIT,
> + MVSW_LINK_MODE_100baseT_Half_BIT,
> + MVSW_LINK_MODE_100baseT_Full_BIT,
> + MVSW_LINK_MODE_1000baseT_Half_BIT,
> + MVSW_LINK_MODE_1000baseT_Full_BIT,
> + MVSW_LINK_MODE_1000baseX_Full_BIT,
> + MVSW_LINK_MODE_1000baseKX_Full_BIT,
> + MVSW_LINK_MODE_10GbaseKR_Full_BIT,
> + MVSW_LINK_MODE_10GbaseSR_Full_BIT,
> + MVSW_LINK_MODE_10GbaseLR_Full_BIT,
> + MVSW_LINK_MODE_20GbaseKR2_Full_BIT,
> + MVSW_LINK_MODE_25GbaseCR_Full_BIT,
> + MVSW_LINK_MODE_25GbaseKR_Full_BIT,
> + MVSW_LINK_MODE_25GbaseSR_Full_BIT,
> + MVSW_LINK_MODE_40GbaseKR4_Full_BIT,
> + MVSW_LINK_MODE_40GbaseCR4_Full_BIT,
> + MVSW_LINK_MODE_40GbaseSR4_Full_BIT,
> + MVSW_LINK_MODE_50GbaseCR2_Full_BIT,
> + MVSW_LINK_MODE_50GbaseKR2_Full_BIT,
> + MVSW_LINK_MODE_50GbaseSR2_Full_BIT,
> + MVSW_LINK_MODE_100GbaseKR4_Full_BIT,
> + MVSW_LINK_MODE_100GbaseSR4_Full_BIT,
> + MVSW_LINK_MODE_100GbaseCR4_Full_BIT,
> + MVSW_LINK_MODE_MAX,
> +};
> +
> +enum {
> + MVSW_PORT_TYPE_NONE,
> + MVSW_PORT_TYPE_TP,
> + MVSW_PORT_TYPE_AUI,
> + MVSW_PORT_TYPE_MII,
> + MVSW_PORT_TYPE_FIBRE,
> + MVSW_PORT_TYPE_BNC,
> + MVSW_PORT_TYPE_DA,
> + MVSW_PORT_TYPE_OTHER,
> + MVSW_PORT_TYPE_MAX,
> +};
> +
> +enum {
> + MVSW_PORT_TRANSCEIVER_COPPER,
> + MVSW_PORT_TRANSCEIVER_SFP,
> + MVSW_PORT_TRANSCEIVER_MAX,
> +};
> +
> +enum {
> + MVSW_PORT_FEC_OFF_BIT,
> + MVSW_PORT_FEC_BASER_BIT,
> + MVSW_PORT_FEC_RS_BIT,
> + MVSW_PORT_FEC_MAX,
> +};
> +
> +enum {
> + MVSW_PORT_DUPLEX_HALF,
> + MVSW_PORT_DUPLEX_FULL
> +};
> +
> +struct mvsw_pr_switch;
> +struct mvsw_pr_port;
> +struct mvsw_pr_port_stats;
> +struct mvsw_pr_port_caps;
> +
> +enum mvsw_pr_event_type;
> +struct mvsw_pr_event;
> +
> +/* Switch API */
> +int mvsw_pr_hw_switch_init(struct mvsw_pr_switch *sw);
> +int mvsw_pr_hw_switch_ageing_set(const struct mvsw_pr_switch *sw,
> + u32 ageing_time);
> +
> +/* Port API */
> +int mvsw_pr_hw_port_info_get(const struct mvsw_pr_port *port,
> + u16 *fp_id, u32 *hw_id, u32 *dev_id);
> +int mvsw_pr_hw_port_state_set(const struct mvsw_pr_port *port,
> + bool admin_state);
> +int mvsw_pr_hw_port_state_get(const struct mvsw_pr_port *port,
> + bool *admin_state, bool *oper_state);
> +int mvsw_pr_hw_port_mtu_set(const struct mvsw_pr_port *port, u32 mtu);
> +int mvsw_pr_hw_port_mtu_get(const struct mvsw_pr_port *port, u32 *mtu);
> +int mvsw_pr_hw_port_mac_set(const struct mvsw_pr_port *port, char *mac);
> +int mvsw_pr_hw_port_mac_get(const struct mvsw_pr_port *port, char *mac);
> +int mvsw_pr_hw_port_accept_frame_type_set(const struct mvsw_pr_port *port,
> + enum mvsw_pr_accept_frame_type type);
> +int mvsw_pr_hw_port_learning_set(const struct mvsw_pr_port *port, bool enable);
> +int mvsw_pr_hw_port_speed_get(const struct mvsw_pr_port *port, u32 *speed);
> +int mvsw_pr_hw_port_flood_set(const struct mvsw_pr_port *port, bool flood);
> +int mvsw_pr_hw_port_cap_get(const struct mvsw_pr_port *port,
> + struct mvsw_pr_port_caps *caps);
> +int mvsw_pr_hw_port_remote_cap_get(const struct mvsw_pr_port *port,
> + u64 *link_mode_bitmap);
> +int mvsw_pr_hw_port_type_get(const struct mvsw_pr_port *port, u8 *type);
> +int mvsw_pr_hw_port_fec_get(const struct mvsw_pr_port *port, u8 *fec);
> +int mvsw_pr_hw_port_fec_set(const struct mvsw_pr_port *port, u8 fec);
> +int mvsw_pr_hw_port_autoneg_set(const struct mvsw_pr_port *port,
> + bool autoneg, u64 link_modes, u8 fec);
> +int mvsw_pr_hw_port_duplex_get(const struct mvsw_pr_port *port, u8 *duplex);
> +int mvsw_pr_hw_port_stats_get(const struct mvsw_pr_port *port,
> + struct mvsw_pr_port_stats *stats);
> +int mvsw_pr_hw_port_link_mode_get(const struct mvsw_pr_port *port,
> + u32 *mode);
> +int mvsw_pr_hw_port_link_mode_set(const struct mvsw_pr_port *port,
> + u32 mode);
> +int mvsw_pr_hw_port_mdix_get(const struct mvsw_pr_port *port, u8 *mode);
> +int mvsw_pr_hw_port_mdix_set(const struct mvsw_pr_port *port, u8 mode);
> +
> +/* Vlan API */
> +int mvsw_pr_hw_vlan_create(const struct mvsw_pr_switch *sw, u16 vid);
> +int mvsw_pr_hw_vlan_delete(const struct mvsw_pr_switch *sw, u16 vid);
> +int mvsw_pr_hw_vlan_port_set(const struct mvsw_pr_port *port,
> + u16 vid, bool is_member, bool untagged);
> +int mvsw_pr_hw_vlan_port_vid_set(const struct mvsw_pr_port *port, u16 vid);
> +
> +/* FDB API */
> +int mvsw_pr_hw_fdb_add(const struct mvsw_pr_port *port,
> + const unsigned char *mac, u16 vid, bool dynamic);
> +int mvsw_pr_hw_fdb_del(const struct mvsw_pr_port *port,
> + const unsigned char *mac, u16 vid);
> +int mvsw_pr_hw_fdb_flush_port(const struct mvsw_pr_port *port, u32 mode);
> +int mvsw_pr_hw_fdb_flush_vlan(const struct mvsw_pr_switch *sw, u16 vid,
> + u32 mode);
> +int mvsw_pr_hw_fdb_flush_port_vlan(const struct mvsw_pr_port *port, u16 vid,
> + u32 mode);
> +
> +/* Bridge API */
> +int mvsw_pr_hw_bridge_create(const struct mvsw_pr_switch *sw, u16 *bridge_id);
> +int mvsw_pr_hw_bridge_delete(const struct mvsw_pr_switch *sw, u16 bridge_id);
> +int mvsw_pr_hw_bridge_port_add(const struct mvsw_pr_port *port, u16 bridge_id);
> +int mvsw_pr_hw_bridge_port_delete(const struct mvsw_pr_port *port,
> + u16 bridge_id);
> +
> +/* Event handlers */
> +int mvsw_pr_hw_event_handler_register(struct mvsw_pr_switch *sw,
> + enum mvsw_pr_event_type type,
> + void (*cb)(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt));
> +void mvsw_pr_hw_event_handler_unregister(struct mvsw_pr_switch *sw,
> + enum mvsw_pr_event_type type,
> + void (*cb)(struct mvsw_pr_switch *sw,
> + struct mvsw_pr_event *evt));
> +
> +#endif /* _MVSW_PRESTERA_HW_H_ */
> diff --git a/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
> new file mode 100644
> index 000000000000..18fa6bbe5ace
> --- /dev/null
> +++ b/drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
> @@ -0,0 +1,1217 @@
> +/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
> + *
> + * Copyright (c) 2019-2020 Marvell International Ltd. All rights reserved.
> + *
> + */
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/if_vlan.h>
> +#include <linux/if_bridge.h>
> +#include <linux/notifier.h>
> +#include <net/switchdev.h>
> +#include <net/netevent.h>
> +#include <net/vxlan.h>

It doesn't seem like you have VXLAN support at the moment, so this include is
not needed (the netif_is_vxlan() check in mvsw_pr_bridge_fdb_event_work()
below can go with it).

> +
> +#include "prestera.h"
> +
> +struct mvsw_pr_bridge {
> + struct mvsw_pr_switch *sw;
> + u32 ageing_time;
> + struct list_head bridge_list;
> + bool bridge_8021q_exists;
> +};
> +
> +struct mvsw_pr_bridge_device {
> + struct net_device *dev;
> + struct list_head bridge_node;
> + struct list_head port_list;
> + u16 bridge_id;
> + u8 vlan_enabled:1, multicast_enabled:1, mrouter:1;
> +};
> +
> +struct mvsw_pr_bridge_port {
> + struct net_device *dev;
> + struct mvsw_pr_bridge_device *bridge_device;
> + struct list_head bridge_device_node;
> + struct list_head vlan_list;
> + unsigned int ref_count;
> + u8 stp_state;
> + unsigned long flags;
> +};
> +
> +struct mvsw_pr_bridge_vlan {
> + struct list_head bridge_port_node;
> + struct list_head port_vlan_list;
> + u16 vid;
> +};
> +
> +struct mvsw_pr_event_work {
> + struct work_struct work;
> + struct switchdev_notifier_fdb_info fdb_info;
> + struct net_device *dev;
> + unsigned long event;
> +};
> +
> +static struct workqueue_struct *mvsw_owq;
> +
> +static struct mvsw_pr_bridge_port *
> +mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
> + struct net_device *brport_dev);
> +
> +static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
> + struct mvsw_pr_bridge_port *br_port);
> +
> +static struct mvsw_pr_bridge_device *
> +mvsw_pr_bridge_device_find(const struct mvsw_pr_bridge *bridge,
> + const struct net_device *br_dev)
> +{
> + struct mvsw_pr_bridge_device *bridge_device;
> +
> + list_for_each_entry(bridge_device, &bridge->bridge_list,
> + bridge_node)
> + if (bridge_device->dev == br_dev)
> + return bridge_device;
> +
> + return NULL;
> +}
> +
> +static bool
> +mvsw_pr_bridge_device_is_offloaded(const struct mvsw_pr_switch *sw,
> + const struct net_device *br_dev)
> +{
> + return !!mvsw_pr_bridge_device_find(sw->bridge, br_dev);
> +}
> +
> +static struct mvsw_pr_bridge_port *
> +__mvsw_pr_bridge_port_find(const struct mvsw_pr_bridge_device *bridge_device,
> + const struct net_device *brport_dev)
> +{
> + struct mvsw_pr_bridge_port *br_port;
> +
> + list_for_each_entry(br_port, &bridge_device->port_list,
> + bridge_device_node) {
> + if (br_port->dev == brport_dev)
> + return br_port;
> + }
> +
> + return NULL;
> +}
> +
> +static struct mvsw_pr_bridge_port *
> +mvsw_pr_bridge_port_find(struct mvsw_pr_bridge *bridge,
> + struct net_device *brport_dev)
> +{
> + struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
> + struct mvsw_pr_bridge_device *bridge_device;
> +
> + if (!br_dev)
> + return NULL;
> +
> + bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
> + if (!bridge_device)
> + return NULL;
> +
> + return __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
> +}
> +
> +static struct mvsw_pr_bridge_vlan *
> +mvsw_pr_bridge_vlan_find(const struct mvsw_pr_bridge_port *br_port, u16 vid)
> +{
> + struct mvsw_pr_bridge_vlan *br_vlan;
> +
> + list_for_each_entry(br_vlan, &br_port->vlan_list, bridge_port_node) {
> + if (br_vlan->vid == vid)
> + return br_vlan;
> + }
> +
> + return NULL;
> +}
> +
> +static struct mvsw_pr_bridge_vlan *
> +mvsw_pr_bridge_vlan_create(struct mvsw_pr_bridge_port *br_port, u16 vid)
> +{
> + struct mvsw_pr_bridge_vlan *br_vlan;
> +
> + br_vlan = kzalloc(sizeof(*br_vlan), GFP_KERNEL);
> + if (!br_vlan)
> + return NULL;
> +
> + INIT_LIST_HEAD(&br_vlan->port_vlan_list);
> + br_vlan->vid = vid;
> + list_add(&br_vlan->bridge_port_node, &br_port->vlan_list);
> +
> + return br_vlan;
> +}
> +
> +static void
> +mvsw_pr_bridge_vlan_destroy(struct mvsw_pr_bridge_vlan *br_vlan)
> +{
> + list_del(&br_vlan->bridge_port_node);
> + WARN_ON(!list_empty(&br_vlan->port_vlan_list));
> + kfree(br_vlan);
> +}
> +
> +static struct mvsw_pr_bridge_vlan *
> +mvsw_pr_bridge_vlan_get(struct mvsw_pr_bridge_port *br_port, u16 vid)
> +{
> + struct mvsw_pr_bridge_vlan *br_vlan;
> +
> + br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
> + if (br_vlan)
> + return br_vlan;
> +
> + return mvsw_pr_bridge_vlan_create(br_port, vid);
> +}
> +
> +static void mvsw_pr_bridge_vlan_put(struct mvsw_pr_bridge_vlan *br_vlan)
> +{
> + if (list_empty(&br_vlan->port_vlan_list))
> + mvsw_pr_bridge_vlan_destroy(br_vlan);
> +}
> +
> +static int
> +mvsw_pr_port_vlan_bridge_join(struct mvsw_pr_port_vlan *port_vlan,
> + struct mvsw_pr_bridge_port *br_port,
> + struct netlink_ext_ack *extack)
> +{
> + struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
> + struct mvsw_pr_bridge_vlan *br_vlan;
> + u16 vid = port_vlan->vid;
> + int err;
> +
> + if (port_vlan->bridge_port)
> + return 0;
> +
> + err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
> + if (err)
> + return err;
> +
> + err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
> + if (err)
> + goto err_port_learning_set;

It seems that learning and flooding are not per-{port, VLAN} attributes,
so I'm not sure why you have this here.

The fact that you don't undo this in mvsw_pr_port_vlan_bridge_leave()
tells me it should not be here.

> +
> + br_vlan = mvsw_pr_bridge_vlan_get(br_port, vid);
> + if (!br_vlan) {
> + err = -ENOMEM;
> + goto err_bridge_vlan_get;
> + }
> +
> + list_add(&port_vlan->bridge_vlan_node, &br_vlan->port_vlan_list);
> +
> + mvsw_pr_bridge_port_get(port->sw->bridge, br_port->dev);
> + port_vlan->bridge_port = br_port;
> +
> + return 0;
> +
> +err_bridge_vlan_get:
> + mvsw_pr_port_learning_set(port, false);
> +err_port_learning_set:
> + return err;
> +}
> +
> +static int
> +mvsw_pr_bridge_vlan_port_count_get(struct mvsw_pr_bridge_device *bridge_device,
> + u16 vid)
> +{
> + int count = 0;
> + struct mvsw_pr_bridge_port *br_port;
> + struct mvsw_pr_bridge_vlan *br_vlan;
> +
> + list_for_each_entry(br_port, &bridge_device->port_list,
> + bridge_device_node) {
> + list_for_each_entry(br_vlan, &br_port->vlan_list,
> + bridge_port_node) {
> + if (br_vlan->vid == vid) {
> + count += 1;
> + break;
> + }
> + }
> + }
> +
> + return count;
> +}
> +
> +void
> +mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *port_vlan)
> +{
> + struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
> + struct mvsw_pr_bridge_vlan *br_vlan;
> + struct mvsw_pr_bridge_port *br_port;
> + int port_count;
> + u16 vid = port_vlan->vid;
> + bool last_port, last_vlan;
> +
> + br_port = port_vlan->bridge_port;
> + last_vlan = list_is_singular(&br_port->vlan_list);
> + port_count =
> + mvsw_pr_bridge_vlan_port_count_get(br_port->bridge_device, vid);
> + br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
> + last_port = port_count == 1;
> + if (last_vlan) {
> + mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
> + } else if (last_port) {
> + mvsw_pr_fdb_flush_vlan(port->sw, vid,
> + MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
> + } else {
> + mvsw_pr_fdb_flush_port_vlan(port, vid,
> + MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);

If you always flush based on {port, VID}, then why do you need the other
two?

> + }
> +
> + list_del(&port_vlan->bridge_vlan_node);
> + mvsw_pr_bridge_vlan_put(br_vlan);
> + mvsw_pr_bridge_port_put(port->sw->bridge, br_port);
> + port_vlan->bridge_port = NULL;
> +}
> +
> +static int
> +mvsw_pr_bridge_port_vlan_add(struct mvsw_pr_port *port,
> + struct mvsw_pr_bridge_port *br_port,
> + u16 vid, bool is_untagged, bool is_pvid,
> + struct netlink_ext_ack *extack)
> +{
> + u16 pvid;
> + struct mvsw_pr_port_vlan *port_vlan;
> + u16 old_pvid = port->pvid;
> + int err;
> +
> + if (is_pvid)
> + pvid = vid;
> + else
> + pvid = port->pvid == vid ? 0 : port->pvid;
> +
> + port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
> + if (port_vlan && port_vlan->bridge_port != br_port)
> + return -EEXIST;
> +
> + if (!port_vlan) {
> + port_vlan = mvsw_pr_port_vlan_create(port, vid);
> + if (IS_ERR(port_vlan))
> + return PTR_ERR(port_vlan);
> + }
> +
> + err = mvsw_pr_port_vlan_set(port, vid, true, is_untagged);
> + if (err)
> + goto err_port_vlan_set;
> +
> + err = mvsw_pr_port_pvid_set(port, pvid);
> + if (err)
> + goto err_port_pvid_set;
> +
> + err = mvsw_pr_port_vlan_bridge_join(port_vlan, br_port, extack);
> + if (err)
> + goto err_port_vlan_bridge_join;
> +
> + return 0;
> +
> +err_port_vlan_bridge_join:
> + mvsw_pr_port_pvid_set(port, old_pvid);
> +err_port_pvid_set:
> + mvsw_pr_port_vlan_set(port, vid, false, false);
> +err_port_vlan_set:
> + mvsw_pr_port_vlan_destroy(port_vlan);
> +
> + return err;
> +}
> +
> +static int mvsw_pr_port_vlans_add(struct mvsw_pr_port *port,
> + const struct switchdev_obj_port_vlan *vlan,
> + struct switchdev_trans *trans,
> + struct netlink_ext_ack *extack)
> +{
> + bool flag_untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
> + bool flag_pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
> + struct net_device *orig_dev = vlan->obj.orig_dev;
> + struct mvsw_pr_bridge_port *br_port;
> + struct mvsw_pr_switch *sw = port->sw;
> + u16 vid;
> +
> + if (netif_is_bridge_master(orig_dev))
> + return 0;
> +
> + if (switchdev_trans_ph_commit(trans))
> + return 0;
> +
> + br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
> + if (WARN_ON(!br_port))
> + return -EINVAL;
> +
> + if (!br_port->bridge_device->vlan_enabled)
> + return 0;
> +
> + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
> + int err;
> +
> + err = mvsw_pr_bridge_port_vlan_add(port, br_port,
> + vid, flag_untagged,
> + flag_pvid, extack);
> + if (err)
> + return err;
> + }
> +
> + return 0;
> +}
> +
> +static int mvsw_pr_port_obj_add(struct net_device *dev,
> + const struct switchdev_obj *obj,
> + struct switchdev_trans *trans,
> + struct netlink_ext_ack *extack)
> +{
> + int err = 0;
> + struct mvsw_pr_port *port = netdev_priv(dev);
> + const struct switchdev_obj_port_vlan *vlan;
> +
> + switch (obj->id) {
> + case SWITCHDEV_OBJ_ID_PORT_VLAN:
> + vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
> + err = mvsw_pr_port_vlans_add(port, vlan, trans, extack);
> + break;
> + default:
> + err = -EOPNOTSUPP;
> + }
> +
> + return err;
> +}
> +
> +static void
> +mvsw_pr_bridge_port_vlan_del(struct mvsw_pr_port *port,
> + struct mvsw_pr_bridge_port *br_port, u16 vid)
> +{
> + u16 pvid = port->pvid == vid ? 0 : port->pvid;
> + struct mvsw_pr_port_vlan *port_vlan;
> +
> + port_vlan = mvsw_pr_port_vlan_find_by_vid(port, vid);
> + if (WARN_ON(!port_vlan))
> + return;
> +
> + mvsw_pr_port_vlan_bridge_leave(port_vlan);
> + mvsw_pr_port_pvid_set(port, pvid);
> + mvsw_pr_port_vlan_destroy(port_vlan);
> +}
> +
> +static int mvsw_pr_port_vlans_del(struct mvsw_pr_port *port,
> + const struct switchdev_obj_port_vlan *vlan)
> +{
> + struct mvsw_pr_switch *sw = port->sw;
> + struct net_device *orig_dev = vlan->obj.orig_dev;
> + struct mvsw_pr_bridge_port *br_port;
> + u16 vid;
> +
> + if (netif_is_bridge_master(orig_dev))
> + return -EOPNOTSUPP;
> +
> + br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
> + if (WARN_ON(!br_port))
> + return -EINVAL;
> +
> + if (!br_port->bridge_device->vlan_enabled)
> + return 0;
> +
> + for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++)
> + mvsw_pr_bridge_port_vlan_del(port, br_port, vid);
> +
> + return 0;
> +}
> +
> +static int mvsw_pr_port_obj_del(struct net_device *dev,
> + const struct switchdev_obj *obj)
> +{
> + int err = 0;
> + struct mvsw_pr_port *port = netdev_priv(dev);
> +
> + switch (obj->id) {
> + case SWITCHDEV_OBJ_ID_PORT_VLAN:
> + err = mvsw_pr_port_vlans_del(port,
> + SWITCHDEV_OBJ_PORT_VLAN(obj));
> + break;
> + default:
> + err = -EOPNOTSUPP;
> + break;
> + }
> +
> + return err;
> +}
> +
> +static int mvsw_pr_port_attr_br_vlan_set(struct mvsw_pr_port *port,
> + struct switchdev_trans *trans,
> + struct net_device *orig_dev,
> + bool vlan_enabled)
> +{
> + struct mvsw_pr_switch *sw = port->sw;
> + struct mvsw_pr_bridge_device *bridge_device;
> +
> + if (!switchdev_trans_ph_prepare(trans))
> + return 0;
> +
> + bridge_device = mvsw_pr_bridge_device_find(sw->bridge, orig_dev);
> + if (WARN_ON(!bridge_device))
> + return -EINVAL;
> +
> + if (bridge_device->vlan_enabled == vlan_enabled)
> + return 0;
> +
> + netdev_err(bridge_device->dev,
> + "VLAN filtering can't be changed for existing bridge\n");
> + return -EINVAL;
> +}
> +
> +static int mvsw_pr_port_attr_br_flags_set(struct mvsw_pr_port *port,
> + struct switchdev_trans *trans,
> + struct net_device *orig_dev,
> + unsigned long flags)
> +{
> + struct mvsw_pr_bridge_port *br_port;
> + int err;
> +
> + if (switchdev_trans_ph_prepare(trans))
> + return 0;
> +
> + br_port = mvsw_pr_bridge_port_find(port->sw->bridge, orig_dev);
> + if (!br_port)
> + return 0;
> +
> + err = mvsw_pr_port_flood_set(port, flags & BR_FLOOD);
> + if (err)
> + return err;
> +
> + err = mvsw_pr_port_learning_set(port, flags & BR_LEARNING);
> + if (err)
> + return err;
> +
> + memcpy(&br_port->flags, &flags, sizeof(flags));
> + return 0;
> +}
> +
> +static int mvsw_pr_port_attr_br_ageing_set(struct mvsw_pr_port *port,
> + struct switchdev_trans *trans,
> + unsigned long ageing_clock_t)
> +{
> + int err;
> + struct mvsw_pr_switch *sw = port->sw;
> + unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t);
> + u32 ageing_time = jiffies_to_msecs(ageing_jiffies) / 1000;
> +
> + if (switchdev_trans_ph_prepare(trans)) {
> + if (ageing_time < MVSW_PR_MIN_AGEING_TIME ||
> + ageing_time > MVSW_PR_MAX_AGEING_TIME)
> + return -ERANGE;
> + else
> + return 0;
> + }
> +
> + err = mvsw_pr_switch_ageing_set(sw, ageing_time);
> + if (!err)
> + sw->bridge->ageing_time = ageing_time;
> +
> + return err;
> +}
> +
> +static int mvsw_pr_port_obj_attr_set(struct net_device *dev,
> + const struct switchdev_attr *attr,
> + struct switchdev_trans *trans)
> +{
> + int err = 0;
> + struct mvsw_pr_port *port = netdev_priv(dev);
> +
> + switch (attr->id) {
> + case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
> + err = -EOPNOTSUPP;

You don't support STP?

> + break;
> + case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
> + if (attr->u.brport_flags &
> + ~(BR_LEARNING | BR_FLOOD | BR_MCAST_FLOOD))
> + err = -EINVAL;
> + break;
> + case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
> + err = mvsw_pr_port_attr_br_flags_set(port, trans,
> + attr->orig_dev,
> + attr->u.brport_flags);
> + break;
> + case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
> + err = mvsw_pr_port_attr_br_ageing_set(port, trans,
> + attr->u.ageing_time);
> + break;
> + case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
> + err = mvsw_pr_port_attr_br_vlan_set(port, trans,
> + attr->orig_dev,
> + attr->u.vlan_filtering);
> + break;
> + default:
> + err = -EOPNOTSUPP;
> + }
> +
> + return err;
> +}
> +
> +static void mvsw_fdb_offload_notify(struct mvsw_pr_port *port,
> + struct switchdev_notifier_fdb_info *info)
> +{
> + struct switchdev_notifier_fdb_info send_info;
> +
> + send_info.addr = info->addr;
> + send_info.vid = info->vid;
> + send_info.offloaded = true;
> + call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED,
> + port->net_dev, &send_info.info, NULL);
> +}
> +
> +static int
> +mvsw_pr_port_fdb_set(struct mvsw_pr_port *port,
> + struct switchdev_notifier_fdb_info *fdb_info, bool adding)
> +{
> + struct mvsw_pr_switch *sw = port->sw;
> + struct mvsw_pr_bridge_port *br_port;
> + struct mvsw_pr_bridge_device *bridge_device;
> + struct net_device *orig_dev = fdb_info->info.dev;
> + int err;
> + u16 vid;
> +
> + br_port = mvsw_pr_bridge_port_find(sw->bridge, orig_dev);
> + if (!br_port)
> + return -EINVAL;
> +
> + bridge_device = br_port->bridge_device;
> +
> + if (bridge_device->vlan_enabled)
> + vid = fdb_info->vid;
> + else
> + vid = bridge_device->bridge_id;
> +
> + if (adding)
> + err = mvsw_pr_fdb_add(port, fdb_info->addr, vid, false);
> + else
> + err = mvsw_pr_fdb_del(port, fdb_info->addr, vid);
> +
> + return err;
> +}
> +
> +static void mvsw_pr_bridge_fdb_event_work(struct work_struct *work)
> +{
> + int err = 0;
> + struct mvsw_pr_event_work *switchdev_work =
> + container_of(work, struct mvsw_pr_event_work, work);
> + struct net_device *dev = switchdev_work->dev;
> + struct switchdev_notifier_fdb_info *fdb_info;
> + struct mvsw_pr_port *port;
> +
> + rtnl_lock();
> + if (netif_is_vxlan(dev))
> + goto out;
> +
> + port = mvsw_pr_port_dev_lower_find(dev);
> + if (!port)
> + goto out;
> +
> + switch (switchdev_work->event) {
> + case SWITCHDEV_FDB_ADD_TO_DEVICE:
> + fdb_info = &switchdev_work->fdb_info;
> + if (!fdb_info->added_by_user)
> + break;
> + err = mvsw_pr_port_fdb_set(port, fdb_info, true);
> + if (err)
> + break;
> + mvsw_fdb_offload_notify(port, fdb_info);
> + break;
> + case SWITCHDEV_FDB_DEL_TO_DEVICE:
> + fdb_info = &switchdev_work->fdb_info;
> + mvsw_pr_port_fdb_set(port, fdb_info, false);
> + break;
> + case SWITCHDEV_FDB_ADD_TO_BRIDGE:
> + case SWITCHDEV_FDB_DEL_TO_BRIDGE:
> + break;
> + }
> +
> +out:
> + rtnl_unlock();
> + kfree(switchdev_work->fdb_info.addr);
> + kfree(switchdev_work);
> + dev_put(dev);
> +}
> +
> +static int mvsw_pr_switchdev_event(struct notifier_block *unused,
> + unsigned long event, void *ptr)
> +{
> + int err = 0;
> + struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
> + struct mvsw_pr_event_work *switchdev_work;
> + struct switchdev_notifier_fdb_info *fdb_info;
> + struct switchdev_notifier_info *info = ptr;
> + struct net_device *upper_br;
> +
> + if (event == SWITCHDEV_PORT_ATTR_SET) {
> + err = switchdev_handle_port_attr_set(net_dev, ptr,
> + mvsw_pr_netdev_check,
> + mvsw_pr_port_obj_attr_set);
> + return notifier_from_errno(err);
> + }
> +
> + upper_br = netdev_master_upper_dev_get_rcu(net_dev);
> + if (!upper_br)
> + return NOTIFY_DONE;
> +
> + if (!netif_is_bridge_master(upper_br))
> + return NOTIFY_DONE;

Looks too complicated for the use cases you support. Just check that the
netdev is a prestera netdev.
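
As a rough sketch of that simplification (reusing the patch's own mvsw_pr_netdev_check() helper; the placement is illustrative, not a definitive implementation), the upper-bridge walk could collapse to:

```c
/* Sketch only: instead of walking to the bridge master with
 * netdev_master_upper_dev_get_rcu(), just check that the notifier
 * fired for one of this driver's own netdevs.
 */
if (!mvsw_pr_netdev_check(net_dev))
	return NOTIFY_DONE;
```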

> +
> + switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
> + if (!switchdev_work)
> + return NOTIFY_BAD;
> +
> + switchdev_work->dev = net_dev;
> + switchdev_work->event = event;
> +
> + switch (event) {
> + case SWITCHDEV_FDB_ADD_TO_DEVICE:
> + case SWITCHDEV_FDB_DEL_TO_DEVICE:
> + case SWITCHDEV_FDB_ADD_TO_BRIDGE:
> + case SWITCHDEV_FDB_DEL_TO_BRIDGE:
> + fdb_info = container_of(info,
> + struct switchdev_notifier_fdb_info,
> + info);
> +
> + INIT_WORK(&switchdev_work->work, mvsw_pr_bridge_fdb_event_work);
> + memcpy(&switchdev_work->fdb_info, ptr,
> + sizeof(switchdev_work->fdb_info));
> + switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
> + if (!switchdev_work->fdb_info.addr)
> + goto out;
> + ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
> + fdb_info->addr);
> + dev_hold(net_dev);
> +
> + break;
> + case SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE:
> + case SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE:

You can remove these; that's why you have 'default'...

> + default:
> + kfree(switchdev_work);
> + return NOTIFY_DONE;
> + }
> +
> + queue_work(mvsw_owq, &switchdev_work->work);

Once you defer the operation you cannot return an error, which is
problematic. Do you have a way to know if the operation will succeed or
not? That is, if the hardware has enough space for this new FDB entry?

> + return NOTIFY_DONE;
> +out:
> + kfree(switchdev_work);
> + return NOTIFY_BAD;
> +}
> +
> +static int mvsw_pr_switchdev_blocking_event(struct notifier_block *unused,
> + unsigned long event, void *ptr)
> +{
> + int err = 0;
> + struct net_device *net_dev = switchdev_notifier_info_to_dev(ptr);
> +
> + switch (event) {
> + case SWITCHDEV_PORT_OBJ_ADD:
> + if (netif_is_vxlan(net_dev)) {

You don't need this because you don't support VXLAN.
mvsw_pr_netdev_check() will return false for the VXLAN device and your
driver will not be called.

If you want to forbid enslavement of non-prestera to the bridge, then
you should do it in the netdev notifier. Then you can remove all these
netif_is_vxlan() checks.
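
A hedged sketch of the blocking handler with the netif_is_vxlan() branches dropped, as suggested (the function names come from the patch; the surrounding context is assumed):

```c
/* Sketch only: with non-prestera uppers vetoed in the netdev notifier,
 * every object op can funnel straight through the check callback and
 * the VXLAN special-casing goes away.
 */
switch (event) {
case SWITCHDEV_PORT_OBJ_ADD:
	err = switchdev_handle_port_obj_add(net_dev, ptr,
					    mvsw_pr_netdev_check,
					    mvsw_pr_port_obj_add);
	break;
case SWITCHDEV_PORT_OBJ_DEL:
	err = switchdev_handle_port_obj_del(net_dev, ptr,
					    mvsw_pr_netdev_check,
					    mvsw_pr_port_obj_del);
	break;
case SWITCHDEV_PORT_ATTR_SET:
	err = switchdev_handle_port_attr_set(net_dev, ptr,
					     mvsw_pr_netdev_check,
					     mvsw_pr_port_obj_attr_set);
	break;
default:
	err = -EOPNOTSUPP;
}
```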

> + err = -EOPNOTSUPP;
> + } else {
> + err = switchdev_handle_port_obj_add
> + (net_dev, ptr, mvsw_pr_netdev_check,
> + mvsw_pr_port_obj_add);
> + }
> + break;
> + case SWITCHDEV_PORT_OBJ_DEL:
> + if (netif_is_vxlan(net_dev)) {
> + err = -EOPNOTSUPP;
> + } else {
> + err = switchdev_handle_port_obj_del
> + (net_dev, ptr, mvsw_pr_netdev_check,
> + mvsw_pr_port_obj_del);
> + }
> + break;
> + case SWITCHDEV_PORT_ATTR_SET:
> + err = switchdev_handle_port_attr_set
> + (net_dev, ptr, mvsw_pr_netdev_check,
> + mvsw_pr_port_obj_attr_set);
> + break;
> + default:
> + err = -EOPNOTSUPP;
> + }
> +
> + return notifier_from_errno(err);
> +}
> +
> +static struct mvsw_pr_bridge_device *
> +mvsw_pr_bridge_device_create(struct mvsw_pr_bridge *bridge,
> + struct net_device *br_dev)
> +{
> + struct mvsw_pr_bridge_device *bridge_device;
> + bool vlan_enabled = br_vlan_enabled(br_dev);
> + u16 bridge_id;
> + int err;
> +
> + if (vlan_enabled && bridge->bridge_8021q_exists) {
> + netdev_err(br_dev, "Only one VLAN-aware bridge is supported\n");

Propagate extack to this function and use it here instead of writing to
the kernel log
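
A minimal sketch of what that propagation could look like (the added extack parameter is hypothetical; NL_SET_ERR_MSG_MOD() is the standard helper):

```c
/* Sketch only: extack would be threaded down from the CHANGEUPPER
 * handler so the error reaches the user via netlink rather than dmesg.
 */
static struct mvsw_pr_bridge_device *
mvsw_pr_bridge_device_create(struct mvsw_pr_bridge *bridge,
			     struct net_device *br_dev,
			     struct netlink_ext_ack *extack)
{
	if (br_vlan_enabled(br_dev) && bridge->bridge_8021q_exists) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Only one VLAN-aware bridge is supported");
		return ERR_PTR(-EINVAL);
	}
	/* ... */
}
```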

> + return ERR_PTR(-EINVAL);
> + }
> +
> + bridge_device = kzalloc(sizeof(*bridge_device), GFP_KERNEL);
> + if (!bridge_device)
> + return ERR_PTR(-ENOMEM);
> +
> + if (vlan_enabled) {
> + bridge->bridge_8021q_exists = true;
> + } else {
> + err = mvsw_pr_8021d_bridge_create(bridge->sw, &bridge_id);
> + if (err) {
> + kfree(bridge_device);
> + return ERR_PTR(err);
> + }
> +
> + bridge_device->bridge_id = bridge_id;
> + }
> +
> + bridge_device->dev = br_dev;
> + bridge_device->vlan_enabled = vlan_enabled;
> + bridge_device->multicast_enabled = br_multicast_enabled(br_dev);
> + bridge_device->mrouter = br_multicast_router(br_dev);
> + INIT_LIST_HEAD(&bridge_device->port_list);
> +
> + list_add(&bridge_device->bridge_node, &bridge->bridge_list);
> +
> + return bridge_device;
> +}
> +
> +static void
> +mvsw_pr_bridge_device_destroy(struct mvsw_pr_bridge *bridge,
> + struct mvsw_pr_bridge_device *bridge_device)
> +{
> + list_del(&bridge_device->bridge_node);
> + if (bridge_device->vlan_enabled)
> + bridge->bridge_8021q_exists = false;
> + else
> + mvsw_pr_8021d_bridge_delete(bridge->sw,
> + bridge_device->bridge_id);
> +
> + WARN_ON(!list_empty(&bridge_device->port_list));
> + kfree(bridge_device);
> +}
> +
> +static struct mvsw_pr_bridge_device *
> +mvsw_pr_bridge_device_get(struct mvsw_pr_bridge *bridge,
> + struct net_device *br_dev)
> +{
> + struct mvsw_pr_bridge_device *bridge_device;
> +
> + bridge_device = mvsw_pr_bridge_device_find(bridge, br_dev);
> + if (bridge_device)
> + return bridge_device;
> +
> + return mvsw_pr_bridge_device_create(bridge, br_dev);
> +}
> +
> +static void
> +mvsw_pr_bridge_device_put(struct mvsw_pr_bridge *bridge,
> + struct mvsw_pr_bridge_device *bridge_device)
> +{
> + if (list_empty(&bridge_device->port_list))
> + mvsw_pr_bridge_device_destroy(bridge, bridge_device);
> +}
> +
> +static struct mvsw_pr_bridge_port *
> +mvsw_pr_bridge_port_create(struct mvsw_pr_bridge_device *bridge_device,
> + struct net_device *brport_dev)
> +{
> + struct mvsw_pr_bridge_port *br_port;
> + struct mvsw_pr_port *port;
> +
> + br_port = kzalloc(sizeof(*br_port), GFP_KERNEL);
> + if (!br_port)
> + return NULL;
> +
> + port = mvsw_pr_port_dev_lower_find(brport_dev);
> +
> + br_port->dev = brport_dev;
> + br_port->bridge_device = bridge_device;
> + br_port->stp_state = BR_STATE_DISABLED;
> + br_port->flags = BR_LEARNING | BR_FLOOD | BR_LEARNING_SYNC |
> + BR_MCAST_FLOOD;
> + INIT_LIST_HEAD(&br_port->vlan_list);
> + list_add(&br_port->bridge_device_node, &bridge_device->port_list);
> + br_port->ref_count = 1;
> +
> + return br_port;
> +}
> +
> +static void
> +mvsw_pr_bridge_port_destroy(struct mvsw_pr_bridge_port *br_port)
> +{
> + list_del(&br_port->bridge_device_node);
> + WARN_ON(!list_empty(&br_port->vlan_list));
> + kfree(br_port);
> +}
> +
> +static struct mvsw_pr_bridge_port *
> +mvsw_pr_bridge_port_get(struct mvsw_pr_bridge *bridge,
> + struct net_device *brport_dev)
> +{
> + struct net_device *br_dev = netdev_master_upper_dev_get(brport_dev);
> + struct mvsw_pr_bridge_device *bridge_device;
> + struct mvsw_pr_bridge_port *br_port;
> + int err;
> +
> + br_port = mvsw_pr_bridge_port_find(bridge, brport_dev);
> + if (br_port) {
> + br_port->ref_count++;
> + return br_port;
> + }
> +
> + bridge_device = mvsw_pr_bridge_device_get(bridge, br_dev);
> + if (IS_ERR(bridge_device))
> + return ERR_CAST(bridge_device);
> +
> + br_port = mvsw_pr_bridge_port_create(bridge_device, brport_dev);
> + if (!br_port) {
> + err = -ENOMEM;
> + goto err_brport_create;
> + }
> +
> + return br_port;
> +
> +err_brport_create:
> + mvsw_pr_bridge_device_put(bridge, bridge_device);
> + return ERR_PTR(err);
> +}
> +
> +static void mvsw_pr_bridge_port_put(struct mvsw_pr_bridge *bridge,
> + struct mvsw_pr_bridge_port *br_port)
> +{
> + struct mvsw_pr_bridge_device *bridge_device;
> +
> + if (--br_port->ref_count != 0)
> + return;
> + bridge_device = br_port->bridge_device;
> + mvsw_pr_bridge_port_destroy(br_port);
> + mvsw_pr_bridge_device_put(bridge, bridge_device);
> +}
> +
> +static int
> +mvsw_pr_bridge_8021q_port_join(struct mvsw_pr_bridge_device *bridge_device,
> + struct mvsw_pr_bridge_port *br_port,
> + struct mvsw_pr_port *port,
> + struct netlink_ext_ack *extack)
> +{
> + if (is_vlan_dev(br_port->dev)) {

How can this happen? In the netdev notifier you only allow bridge
uppers. Trying to configure a VLAN device on top of a prestera netdev
should error out with "Unknown upper device type".

> + NL_SET_ERR_MSG_MOD(extack,
> + "Can not enslave a VLAN device to a VLAN-aware bridge");
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +mvsw_pr_bridge_8021d_port_join(struct mvsw_pr_bridge_device *bridge_device,
> + struct mvsw_pr_bridge_port *br_port,
> + struct mvsw_pr_port *port,
> + struct netlink_ext_ack *extack)
> +{
> + int err;
> +
> + if (is_vlan_dev(br_port->dev)) {
> + NL_SET_ERR_MSG_MOD(extack,
> + "Enslaving of a VLAN device is not supported");
> + return -ENOTSUPP;
> + }
> + err = mvsw_pr_8021d_bridge_port_add(port, bridge_device->bridge_id);
> + if (err)
> + return err;
> +
> + err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
> + if (err)
> + goto err_port_flood_set;
> +
> + err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
> + if (err)
> + goto err_port_learning_set;
> +
> + return err;
> +
> +err_port_learning_set:
> + mvsw_pr_port_flood_set(port, false);
> +err_port_flood_set:
> + mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
> + return err;
> +}
> +
> +static int mvsw_pr_port_bridge_join(struct mvsw_pr_port *port,
> + struct net_device *brport_dev,
> + struct net_device *br_dev,
> + struct netlink_ext_ack *extack)
> +{
> + struct mvsw_pr_bridge_device *bridge_device;
> + struct mvsw_pr_switch *sw = port->sw;
> + struct mvsw_pr_bridge_port *br_port;
> + int err;
> +
> + br_port = mvsw_pr_bridge_port_get(sw->bridge, brport_dev);
> + if (IS_ERR(br_port))
> + return PTR_ERR(br_port);
> +
> + bridge_device = br_port->bridge_device;
> +
> + if (bridge_device->vlan_enabled) {
> + err = mvsw_pr_bridge_8021q_port_join(bridge_device, br_port,
> + port, extack);
> + } else {
> + err = mvsw_pr_bridge_8021d_port_join(bridge_device, br_port,
> + port, extack);
> + }
> +
> + if (err)
> + goto err_port_join;
> +
> + return 0;
> +
> +err_port_join:
> + mvsw_pr_bridge_port_put(sw->bridge, br_port);
> + return err;
> +}
> +
> +static void
> +mvsw_pr_bridge_8021d_port_leave(struct mvsw_pr_bridge_device *bridge_device,
> + struct mvsw_pr_bridge_port *br_port,
> + struct mvsw_pr_port *port)
> +{
> + mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
> + mvsw_pr_8021d_bridge_port_delete(port, bridge_device->bridge_id);
> +}
> +
> +static void
> +mvsw_pr_bridge_8021q_port_leave(struct mvsw_pr_bridge_device *bridge_device,
> + struct mvsw_pr_bridge_port *br_port,
> + struct mvsw_pr_port *port)
> +{
> + mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_ALL);
> + mvsw_pr_port_pvid_set(port, MVSW_PR_DEFAULT_VID);
> +}
> +

Please have mvsw_pr_port_bridge_join() and mvsw_pr_port_bridge_leave()
next to each other. Easier to see that the rollback is done correctly.

> +static void mvsw_pr_port_bridge_leave(struct mvsw_pr_port *port,
> + struct net_device *brport_dev,
> + struct net_device *br_dev)
> +{
> + struct mvsw_pr_switch *sw = port->sw;
> + struct mvsw_pr_bridge_device *bridge_device;
> + struct mvsw_pr_bridge_port *br_port;
> +
> + bridge_device = mvsw_pr_bridge_device_find(sw->bridge, br_dev);
> + if (!bridge_device)
> + return;
> + br_port = __mvsw_pr_bridge_port_find(bridge_device, brport_dev);
> + if (!br_port)
> + return;
> +
> + if (bridge_device->vlan_enabled)
> + mvsw_pr_bridge_8021q_port_leave(bridge_device, br_port, port);
> + else
> + mvsw_pr_bridge_8021d_port_leave(bridge_device, br_port, port);
> +
> + mvsw_pr_port_learning_set(port, false);
> + mvsw_pr_port_flood_set(port, false);
> + mvsw_pr_bridge_port_put(sw->bridge, br_port);
> +}
> +
> +static int mvsw_pr_netdevice_port_upper_event(struct net_device *lower_dev,
> + struct net_device *dev,
> + unsigned long event, void *ptr)
> +{
> + struct netdev_notifier_changeupper_info *info;
> + struct mvsw_pr_port *port;
> + struct netlink_ext_ack *extack;
> + struct net_device *upper_dev;
> + struct mvsw_pr_switch *sw;
> + int err = 0;
> +
> + port = netdev_priv(dev);
> + sw = port->sw;
> + info = ptr;
> + extack = netdev_notifier_info_to_extack(&info->info);
> +
> + switch (event) {
> + case NETDEV_PRECHANGEUPPER:
> + upper_dev = info->upper_dev;
> + if (!netif_is_bridge_master(upper_dev)) {
> + NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type");
> + return -EINVAL;
> + }
> + if (!info->linking)
> + break;
> + if (netdev_has_any_upper_dev(upper_dev) &&
> + (!netif_is_bridge_master(upper_dev) ||
> + !mvsw_pr_bridge_device_is_offloaded(sw, upper_dev))) {

You only need netdev_has_any_upper_dev().

> + NL_SET_ERR_MSG_MOD(extack,
> + "Enslaving a port to a device that already has an upper device is not supported");

You can have both on the same line. Same in other places.

> + return -EINVAL;
> + }

Does not seem like you support multicast snooping at this stage, so you
need to reject snooping-enabled bridges here via br_multicast_enabled()
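
A hedged sketch of such a check in the PRECHANGEUPPER branch (br_multicast_enabled() is an existing bridge helper; the exact placement and message are illustrative):

```c
/* Sketch only: no multicast snooping offload yet, so refuse to join
 * a bridge that has snooping enabled.
 */
if (br_multicast_enabled(upper_dev)) {
	NL_SET_ERR_MSG_MOD(extack,
			   "Multicast snooping is not supported");
	return -EOPNOTSUPP;
}
```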

> + break;
> + case NETDEV_CHANGEUPPER:
> + upper_dev = info->upper_dev;
> + if (netif_is_bridge_master(upper_dev)) {
> + if (info->linking)
> + err = mvsw_pr_port_bridge_join(port,
> + lower_dev,
> + upper_dev,
> + extack);
> + else
> + mvsw_pr_port_bridge_leave(port,
> + lower_dev,
> + upper_dev);
> + }
> + break;
> + }
> +
> + return err;
> +}
> +
> +static int mvsw_pr_netdevice_port_event(struct net_device *lower_dev,
> + struct net_device *port_dev,
> + unsigned long event, void *ptr)
> +{
> + switch (event) {
> + case NETDEV_PRECHANGEUPPER:
> + case NETDEV_CHANGEUPPER:
> + return mvsw_pr_netdevice_port_upper_event(lower_dev, port_dev,
> + event, ptr);
> + }
> +
> + return 0;
> +}
> +
> +static int mvsw_pr_netdevice_event(struct notifier_block *nb,
> + unsigned long event, void *ptr)
> +{
> + struct net_device *dev = netdev_notifier_info_to_dev(ptr);
> + struct mvsw_pr_switch *sw;
> + int err = 0;
> +
> + sw = container_of(nb, struct mvsw_pr_switch, netdevice_nb);
> +
> + if (mvsw_pr_netdev_check(dev))
> + err = mvsw_pr_netdevice_port_event(dev, dev, event, ptr);
> +
> + return notifier_from_errno(err);
> +}
> +
> +static int mvsw_pr_fdb_init(struct mvsw_pr_switch *sw)
> +{
> + int err;

This would be a good place to register the handler for the FDB learn /
age-out events

> +
> + err = mvsw_pr_switch_ageing_set(sw, MVSW_PR_DEFAULT_AGEING_TIME);
> + if (err)
> + return err;
> +
> + return 0;
> +}

And unregister the handler here, in mvsw_pr_fdb_fini()

> +
> +static int mvsw_pr_switchdev_init(struct mvsw_pr_switch *sw)
> +{
> + int err = 0;
> + struct mvsw_pr_switchdev *swdev;
> + struct mvsw_pr_bridge *bridge;

Reverse xmas tree (RXT)... Elsewhere as well
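
For reference, reverse xmas tree means ordering local variable declarations from the longest line to the shortest, e.g. (illustrative only):

```c
static int mvsw_pr_switchdev_init(struct mvsw_pr_switch *sw)
{
	struct mvsw_pr_switchdev *swdev;	/* longest line first */
	struct mvsw_pr_bridge *bridge;
	int err;
	/* ... */
}
```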

> +
> + if (sw->switchdev)
> + return -EPERM;

Unclear why this is needed

> +
> + bridge = kzalloc(sizeof(*sw->bridge), GFP_KERNEL);
> + if (!bridge)
> + return -ENOMEM;
> +
> + swdev = kzalloc(sizeof(*sw->switchdev), GFP_KERNEL);
> + if (!swdev) {
> + kfree(bridge);

goto

> + return -ENOMEM;
> + }

Why do you need both 'struct mvsw_pr_switchdev' and 'struct
mvsw_pr_bridge'? I think the second is enough. Also, I assume
'switchdev' naming is inspired by mlxsw, but 'bridge' is better.

> +
> + sw->bridge = bridge;
> + bridge->sw = sw;
> + sw->switchdev = swdev;
> + swdev->sw = sw;
> +
> + INIT_LIST_HEAD(&sw->bridge->bridge_list);
> +
> + mvsw_owq = alloc_ordered_workqueue("%s_ordered", 0, "prestera_sw");
> + if (!mvsw_owq) {
> + err = -ENOMEM;
> + goto err_alloc_workqueue;
> + }
> +
> + swdev->swdev_n.notifier_call = mvsw_pr_switchdev_event;
> + err = register_switchdev_notifier(&swdev->swdev_n);
> + if (err)
> + goto err_register_switchdev_notifier;
> +
> + swdev->swdev_blocking_n.notifier_call =
> + mvsw_pr_switchdev_blocking_event;
> + err = register_switchdev_blocking_notifier(&swdev->swdev_blocking_n);
> + if (err)
> + goto err_register_block_switchdev_notifier;
> +
> + mvsw_pr_fdb_init(sw);
> +
> + return 0;
> +
> +err_register_block_switchdev_notifier:
> + unregister_switchdev_notifier(&swdev->swdev_n);
> +err_register_switchdev_notifier:
> + destroy_workqueue(mvsw_owq);
> +err_alloc_workqueue:
> + kfree(swdev);
> + kfree(bridge);
> + return err;
> +}
> +
> +static void mvsw_pr_switchdev_fini(struct mvsw_pr_switch *sw)
> +{
> + if (!sw->switchdev)
> + return;

?

> +
> + unregister_switchdev_notifier(&sw->switchdev->swdev_n);
> + unregister_switchdev_blocking_notifier
> + (&sw->switchdev->swdev_blocking_n);

Should be on the same line

You registered the blocking one last, so you should unregister it first

> + flush_workqueue(mvsw_owq);
> + destroy_workqueue(mvsw_owq);

The comment says "Safely destroy a workqueue. All work currently pending
will be done first." and it calls drain_workqueue(), which calls
flush_workqueue(). So I don't think you need to call flush_workqueue()
first.
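
Putting both points together, a sketch of the teardown (assuming the patch's existing field names; not a definitive implementation):

```c
/* Sketch only: tear down in reverse order of setup; destroy_workqueue()
 * already drains pending work, so no separate flush_workqueue() call.
 */
static void mvsw_pr_switchdev_fini(struct mvsw_pr_switch *sw)
{
	unregister_switchdev_blocking_notifier(&sw->switchdev->swdev_blocking_n);
	unregister_switchdev_notifier(&sw->switchdev->swdev_n);
	destroy_workqueue(mvsw_owq);
	kfree(sw->switchdev);
	kfree(sw->bridge);
}
```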

> + kfree(sw->switchdev);
> + sw->switchdev = NULL;

Does not seem necessary.

> + kfree(sw->bridge);
> +}
> +
> +static int mvsw_pr_netdev_init(struct mvsw_pr_switch *sw)
> +{
> + int err = 0;
> +
> + if (sw->netdevice_nb.notifier_call)
> + return -EPERM;
> +
> + sw->netdevice_nb.notifier_call = mvsw_pr_netdevice_event;
> + err = register_netdevice_notifier(&sw->netdevice_nb);

You register the netdev notifier in prestera_switchdev.c, which seems to
be specific to bridge-related operations. However, in the future, as
your driver grows, you'll need to handle events that are not related to
bridge.

Therefore, I suggest registering this notifier in the main driver file,
prestera.c

> + return err;
> +}
> +
> +static void mvsw_pr_netdev_fini(struct mvsw_pr_switch *sw)
> +{
> + if (sw->netdevice_nb.notifier_call)

Unclear why this is needed.

> + unregister_netdevice_notifier(&sw->netdevice_nb);
> +}
> +
> +int mvsw_pr_switchdev_register(struct mvsw_pr_switch *sw)
> +{
> + int err;
> +
> + err = mvsw_pr_switchdev_init(sw);
> + if (err)
> + return err;
> +
> + err = mvsw_pr_netdev_init(sw);
> + if (err)
> + goto err_netdevice_notifier;
> +
> + return 0;
> +
> +err_netdevice_notifier:
> + mvsw_pr_switchdev_fini(sw);
> + return err;
> +}
> +
> +void mvsw_pr_switchdev_unregister(struct mvsw_pr_switch *sw)
> +{
> + mvsw_pr_netdev_fini(sw);
> + mvsw_pr_switchdev_fini(sw);
> +}
> --
> 2.17.1
>

2020-03-05 15:02:32

by Ido Schimmel

[permalink] [raw]
Subject: Re: [RFC net-next 0/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX326x (AC3x)

On Tue, Feb 25, 2020 at 04:30:52PM +0000, Vadym Kochan wrote:
> Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8
> ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely
> wireless SMB deployment.

It seems that this device has enough ports to loopback to each other in
order to create meaningful topologies. Therefore, I suggest running
relevant existing tests under tools/testing/selftests/net/forwarding/
and contributing new ones. See tools/testing/selftests/net/forwarding/README
for details.

One problem you will run into is that your netdev notifier only allows
bridge uppers and will therefore veto VRF uppers, which is a
prerequisite. However, since you don't support L3 offload, then all
routed traffic should reach the CPU anyway and therefore VRF uppers can
be allowed.

2020-05-02 15:22:56

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Ido,

On Thu, Mar 05, 2020 at 04:49:37PM +0200, Ido Schimmel wrote:
> On Tue, Feb 25, 2020 at 04:30:54PM +0000, Vadym Kochan wrote:
> > +int mvsw_pr_port_learning_set(struct mvsw_pr_port *port, bool learn)
> > +{
> > + return mvsw_pr_hw_port_learning_set(port, learn);
> > +}
> > +
> > +int mvsw_pr_port_flood_set(struct mvsw_pr_port *port, bool flood)
> > +{
> > + return mvsw_pr_hw_port_flood_set(port, flood);
> > +}
>
> Flooding and learning are per-port attributes? Not per-{port, VLAN} ?
> If so, you need to have various restrictions in the driver in case
> someone configures multiple vlan devices on top of a port and enslaves
> them to different bridges.
>
> > +
> > +
> > + INIT_LIST_HEAD(&port->vlans_list);
> > + port->pvid = MVSW_PR_DEFAULT_VID;
>
> If you're using VID 1, then you need to make sure that the user cannot
> configure a VLAN device with this VID. If possible, I suggest that
> you use VID 4095, as it cannot be configured from user space.
>
> I'm actually not entirely sure why you need a default VID.
>

> > +mvsw_pr_port_vlan_bridge_join(struct mvsw_pr_port_vlan *port_vlan,
> > + struct mvsw_pr_bridge_port *br_port,
> > + struct netlink_ext_ack *extack)
> > +{
> > + struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
> > + struct mvsw_pr_bridge_vlan *br_vlan;
> > + u16 vid = port_vlan->vid;
> > + int err;
> > +
> > + if (port_vlan->bridge_port)
> > + return 0;
> > +
> > + err = mvsw_pr_port_flood_set(port, br_port->flags & BR_FLOOD);
> > + if (err)
> > + return err;
> > +
> > + err = mvsw_pr_port_learning_set(port, br_port->flags & BR_LEARNING);
> > + if (err)
> > + goto err_port_learning_set;
>
> It seems that learning and flooding are not per-{port, VLAN} attributes,
> so I'm not sure why you have this here.
>
> The fact that you don't undo this in mvsw_pr_port_vlan_bridge_leave()
> tells me it should not be here.
>

> +
> > +void
> > +mvsw_pr_port_vlan_bridge_leave(struct mvsw_pr_port_vlan *port_vlan)
> > +{
> > + struct mvsw_pr_port *port = port_vlan->mvsw_pr_port;
> > + struct mvsw_pr_bridge_vlan *br_vlan;
> > + struct mvsw_pr_bridge_port *br_port;
> > + int port_count;
> > + u16 vid = port_vlan->vid;
> > + bool last_port, last_vlan;
> > +
> > + br_port = port_vlan->bridge_port;
> > + last_vlan = list_is_singular(&br_port->vlan_list);
> > + port_count =
> > + mvsw_pr_bridge_vlan_port_count_get(br_port->bridge_device, vid);
> > + br_vlan = mvsw_pr_bridge_vlan_find(br_port, vid);
> > + last_port = port_count == 1;
> > + if (last_vlan) {
> > + mvsw_pr_fdb_flush_port(port, MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
> > + } else if (last_port) {
> > + mvsw_pr_fdb_flush_vlan(port->sw, vid,
> > + MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
> > + } else {
> > + mvsw_pr_fdb_flush_port_vlan(port, vid,
> > + MVSW_PR_FDB_FLUSH_MODE_DYNAMIC);
>
> If you always flush based on {port, VID}, then why do you need the other
> two?
>

> +
> > +static int mvsw_pr_port_obj_attr_set(struct net_device *dev,
> > + const struct switchdev_attr *attr,
> > + struct switchdev_trans *trans)
> > +{
> > + int err = 0;
> > + struct mvsw_pr_port *port = netdev_priv(dev);
> > +
> > + switch (attr->id) {
> > + case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
> > + err = -EOPNOTSUPP;
>
> You don't support STP?

Not yet. But it will be in the next submission or the official patch.
>
> > + break;

> > + default:
> > + kfree(switchdev_work);
> > + return NOTIFY_DONE;
> > + }
> > +
> > + queue_work(mvsw_owq, &switchdev_work->work);
>
> Once you defer the operation you cannot return an error, which is
> problematic. Do you have a way to know if the operation will succeed or
> not? That is, if the hardware has enough space for this new FDB entry?
>
Right, FDB configuration via FW is a blocking operation. I still need to
think about whether this is possible with the current design.


>
> Why do you need both 'struct mvsw_pr_switchdev' and 'struct
> mvsw_pr_bridge'? I think the second is enough. Also, I assume
> 'switchdev' naming is inspired by mlxsw, but 'bridge' is better.
>
I changed it to use 'bridge' for the bridge object, because having
'bridge_device' may be confusing.

Thank you for your comments, they were very useful. Sorry for the late
answer; I decided to re-implement this version a bit. Regarding flooding
and the default VID, I still need to check them.

Regards,
Vadym Kochan

2020-05-05 04:06:37

by Vadym Kochan

[permalink] [raw]
Subject: Re: [RFC net-next 1/3] net: marvell: prestera: Add Switchdev driver for Prestera family ASIC device 98DX325x (AC3x)

Hi Ido,

On Sat, May 02, 2020 at 06:20:49PM +0300, Vadym Kochan wrote:
> On Thu, Mar 05, 2020 at 04:49:37PM +0200, Ido Schimmel wrote:
> > Flooding and learning are per-port attributes? Not per-{port, VLAN} ?
> > If so, you need to have various restrictions in the driver in case
> > someone configures multiple vlan devices on top of a port and enslaves
> > them to different bridges.

Yes, and there is no support for a VLAN device on top of the port.

Regards,
Vadym Kochan