2024-06-11 16:24:45

by Geetha sowjanya

Subject: [net-next PATCH v5 00/10] Introduce RVU representors

This series adds representor support for RVU devices. When switchdev
mode is enabled, a representor netdev is registered for each RVU device.
To implement the representor model, one NIX HW LF with multiple SQs and
RQs is reserved, and each RQ/SQ pair of the LF is mapped to a
representor. A loopback channel is reserved to provide the packet path
between representors and VFs. CN10K silicon supports two types of MACs,
RPM and SDP. This patch set adds representor support for both RPM and
SDP MAC interfaces.

- Patch 1: Refactors and exports the shared service functions.
- Patch 2: Implements the basic representor driver.
- Patch 3: Adds devlink support to create representor netdevs that
can be used to manage VFs.
- Patch 4: Implements basic net_device_ops.
- Patch 5: Installs TCAM rules to route packets between representors and
VFs.
- Patch 6: Enables fetching VF stats via the representor interface.
- Patch 7: Adds support to sync link state between representors and VFs.
- Patch 8: Enables configuring VF MTU via representor netdevs.
- Patch 9: Adds representors for the SDP MAC.
- Patch 10: Adds devlink port support.

VF representors are created for each VF when the eswitch mode of the
representor PCI device is set to switchdev:

# devlink dev eswitch set pci/0002:1c:00.0 mode switchdev
# ip link show
25: r0p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 32:0f:0f:f0:60:f1 brd ff:ff:ff:ff:ff:ff
26: r1p1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 3e:5d:9a:4d:e7:7b brd ff:ff:ff:ff:ff:ff

#devlink dev
pci/0002:01:00.0
pci/0002:02:00.0
pci/0002:03:00.0
pci/0002:04:00.0
pci/0002:05:00.0
pci/0002:06:00.0
pci/0002:07:00.0

# devlink port
pci/0002:1c:00.0/0: type eth netdev r0p1v0 flavour pcipf controller 0 pfnum 1 vfnum 0 external false splittable false
pci/0002:1c:00.0/1: type eth netdev r1p1v1 flavour pcivf controller 0 pfnum 1 vfnum 1 external false splittable false
pci/0002:1c:00.0/2: type eth netdev r2p1v2 flavour pcivf controller 0 pfnum 1 vfnum 2 external false splittable false
pci/0002:1c:00.0/3: type eth netdev r3p1v3 flavour pcivf controller 0 pfnum 1 vfnum 3 external false splittable false
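
Once created, the representors can be used like ordinary netdevs to manage
the corresponding VFs. For illustration (interface names taken from the
listing above):
# ip link set r1p1v1 mtu 2000    (propagates the new MTU to the VF, patch 8)
# ip -s link show r1p1v1         (reports the VF's stats, patch 6)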

-----------
v1-v2:
- Fixed build warnings.
- Addressed review comments from Kalesh Anakkur Purayil.

v2-v3:
- Used extack for error messages.
- Reworked commit messages as suggested.
- Fixed sparse warning.

v3-v4:
- Patches 2 & 3: Fixed coccinelle-reported warnings.
- Patch 10: Added devlink port support as suggested by Jiri Pirko.

v4-v5:
- Patch 3: Removed devm_* usage in rvu_rep_create().
- Addressed review comments from Simon Horman.
- Patch 9: Reworked the commit message as suggested by Jakub Kicinski.

Geetha sowjanya (10):
octeontx2-pf: Refactoring RVU driver
octeontx2-pf: RVU representor driver
octeontx2-pf: Create representor netdev
octeontx2-pf: Add basic net_device_ops
octeontx2-af: Add packet path between representor and VF
octeontx2-pf: Get VF stats via representor
octeontx2-pf: Add support to sync link state between representor and
VFs
octeontx2-pf: Configure VF mtu via representor
octeontx2-pf: Add representors for sdp MAC
octeontx2-pf: Add devlink port support

.../net/ethernet/marvell/octeontx2/Kconfig | 8 +
.../ethernet/marvell/octeontx2/af/Makefile | 3 +-
.../ethernet/marvell/octeontx2/af/common.h | 2 +
.../net/ethernet/marvell/octeontx2/af/mbox.h | 74 ++
.../net/ethernet/marvell/octeontx2/af/npc.h | 1 +
.../net/ethernet/marvell/octeontx2/af/rvu.c | 9 +
.../net/ethernet/marvell/octeontx2/af/rvu.h | 30 +-
.../marvell/octeontx2/af/rvu_debugfs.c | 27 -
.../marvell/octeontx2/af/rvu_devlink.c | 6 +
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 77 +-
.../marvell/octeontx2/af/rvu_npc_fs.c | 5 +
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 +
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 463 ++++++++++++
.../marvell/octeontx2/af/rvu_struct.h | 26 +
.../marvell/octeontx2/af/rvu_switch.c | 20 +-
.../ethernet/marvell/octeontx2/nic/Makefile | 2 +
.../ethernet/marvell/octeontx2/nic/cn10k.c | 4 +-
.../ethernet/marvell/octeontx2/nic/cn10k.h | 2 +-
.../marvell/octeontx2/nic/otx2_common.c | 56 +-
.../marvell/octeontx2/nic/otx2_common.h | 84 ++-
.../marvell/octeontx2/nic/otx2_devlink.c | 49 ++
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 305 +++++---
.../ethernet/marvell/octeontx2/nic/otx2_reg.h | 1 +
.../marvell/octeontx2/nic/otx2_txrx.c | 36 +-
.../marvell/octeontx2/nic/otx2_txrx.h | 3 +-
.../ethernet/marvell/octeontx2/nic/otx2_vf.c | 19 +-
.../net/ethernet/marvell/octeontx2/nic/rep.c | 676 ++++++++++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.h | 53 ++
28 files changed, 1817 insertions(+), 225 deletions(-)
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.c
create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.h

--
2.25.1



2024-06-11 16:25:08

by Geetha sowjanya

Subject: [net-next PATCH v5 01/10] octeontx2-pf: Refactoring RVU driver

Refactor and export a list of shared functions so that they can be
used by both the RVU NIC and representor drivers.

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../ethernet/marvell/octeontx2/af/common.h | 2 +
.../net/ethernet/marvell/octeontx2/af/mbox.h | 2 +
.../net/ethernet/marvell/octeontx2/af/npc.h | 1 +
.../net/ethernet/marvell/octeontx2/af/rvu.c | 9 +
.../net/ethernet/marvell/octeontx2/af/rvu.h | 1 +
.../marvell/octeontx2/af/rvu_debugfs.c | 27 --
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 43 ++--
.../marvell/octeontx2/af/rvu_npc_fs.c | 5 +
.../ethernet/marvell/octeontx2/af/rvu_reg.h | 1 +
.../marvell/octeontx2/af/rvu_struct.h | 26 ++
.../marvell/octeontx2/af/rvu_switch.c | 2 +-
.../marvell/octeontx2/nic/otx2_common.c | 6 +-
.../marvell/octeontx2/nic/otx2_common.h | 43 ++--
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 240 +++++++++++-------
.../marvell/octeontx2/nic/otx2_txrx.c | 17 +-
.../marvell/octeontx2/nic/otx2_txrx.h | 3 +-
.../ethernet/marvell/octeontx2/nic/otx2_vf.c | 7 +-
17 files changed, 258 insertions(+), 177 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/common.h b/drivers/net/ethernet/marvell/octeontx2/af/common.h
index 2436c1ff9ba4..46fbc8f2dce8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/common.h
@@ -155,9 +155,11 @@ enum nix_scheduler {
/* Min/Max packet sizes, excluding FCS */
#define NIC_HW_MIN_FRS 40
#define NIC_HW_MAX_FRS 9212
+#define SDP_HW_MIN_FRS 16
#define SDP_HW_MAX_FRS 65535
#define CN10K_LMAC_LINK_MAX_FRS 16380 /* 16k - FCS */
#define CN10K_LBK_LINK_MAX_FRS 65535 /* 64k */
+#define SDP_LINK_CREDIT 0x320202

/* NIX RX action operation*/
#define NIX_RX_ACTIONOP_DROP (0x0ull)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 4a77f6fe2622..e6d7d6e862c0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -78,6 +78,7 @@ struct otx2_mbox {
struct mbox_hdr {
u64 msg_size; /* Total msgs size embedded */
u16 num_msgs; /* No of msgs embedded */
+ u16 sig;
};

/* Header which precedes every msg and is also part of it */
@@ -1562,6 +1563,7 @@ struct flow_msg {
u8 icmp_type;
u8 icmp_code;
__be16 tcp_flags;
+ u16 sq_id;
};

struct npc_install_flow_req {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
index d883157393ea..a6533a9b8aa1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
@@ -245,6 +245,7 @@ enum key_fields {
NPC_VLAN_TAG2,
/* inner vlan tci for double tagged frame */
NPC_VLAN_TAG3,
+ NPC_SQ_ID,
/* other header fields programmed to extract but not of our interest */
NPC_UNKNOWN,
NPC_KEY_FIELDS_MAX,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index ff78251f92d4..ef6be6291057 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -2151,6 +2151,14 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)

offset = mbox->rx_start + ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);

+ if (req_hdr->sig) {
+ msg = mdev->mbase + offset;
+ writeq(0x1, rvu->pfreg_base +
+ ((BLKADDR_NIX0 << 20) |
+ NIX_LF_CINTX_INT_W1S(rvu_get_pf(msg->pcifunc))));
+ usleep_range(1000, 2000);
+ goto done;
+ }
for (id = 0; id < mw->mbox_wrk[devid].num_msgs; id++) {
msg = mdev->mbase + offset;

@@ -2184,6 +2192,7 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
err, otx2_mbox_id2name(msg->id),
msg->id, devid);
}
+done:
mw->mbox_wrk[devid].num_msgs = 0;

if (poll)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 3063a84a45ef..30efa5607c58 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -1016,6 +1016,7 @@ int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr);
void rvu_switch_enable(struct rvu *rvu);
void rvu_switch_disable(struct rvu *rvu);
void rvu_switch_update_rules(struct rvu *rvu, u16 pcifunc);
+void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool ena);

int rvu_npc_set_parse_mode(struct rvu *rvu, u16 pcifunc, u64 mode, u8 dir,
u64 pkind, u8 var_len_off, u8 var_len_off_mask,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index 4a4ef5bd9e0b..e661f83b188b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -45,33 +45,6 @@ enum {
CGX_STAT18,
};

-/* NIX TX stats */
-enum nix_stat_lf_tx {
- TX_UCAST = 0x0,
- TX_BCAST = 0x1,
- TX_MCAST = 0x2,
- TX_DROP = 0x3,
- TX_OCTS = 0x4,
- TX_STATS_ENUM_LAST,
-};
-
-/* NIX RX stats */
-enum nix_stat_lf_rx {
- RX_OCTS = 0x0,
- RX_UCAST = 0x1,
- RX_BCAST = 0x2,
- RX_MCAST = 0x3,
- RX_DROP = 0x4,
- RX_DROP_OCTS = 0x5,
- RX_FCS = 0x6,
- RX_ERR = 0x7,
- RX_DRP_BCAST = 0x8,
- RX_DRP_MCAST = 0x9,
- RX_DRP_L3BCAST = 0xa,
- RX_DRP_L3MCAST = 0xb,
- RX_STATS_ENUM_LAST,
-};
-
static char *cgx_rx_stats_fields[] = {
[CGX_STAT0] = "Received packets",
[CGX_STAT1] = "Octets of received packets",
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 00af8888e329..b94ff8c75ffc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -289,6 +289,23 @@ static void nix_rx_sync(struct rvu *rvu, int blkaddr)
dev_err(rvu->dev, "SYNC2: NIX RX software sync failed\n");
}

+static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc)
+{
+ struct rvu_hwinfo *hw = rvu->hw;
+ int pf = rvu_get_pf(pcifunc);
+ u8 cgx_id = 0, lmac_id = 0;
+
+ if (is_lbk_vf(rvu, pcifunc)) {/* LBK links */
+ return hw->cgx_links;
+ } else if (is_pf_cgxmapped(rvu, pf)) {
+ rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+ return (cgx_id * hw->lmac_per_cgx) + lmac_id;
+ }
+
+ /* SDP link */
+ return hw->cgx_links + hw->lbk_links;
+}
+
static bool is_valid_txschq(struct rvu *rvu, int blkaddr,
int lvl, u16 pcifunc, u16 schq)
{
@@ -584,6 +601,9 @@ int rvu_mbox_handler_nix_bp_disable(struct rvu *rvu,
if (!is_pf_cgxmapped(rvu, pf) && type != NIX_INTF_TYPE_LBK)
return 0;

+ if (is_sdp_pfvf(pcifunc))
+ type = NIX_INTF_TYPE_SDP;
+
pfvf = rvu_get_pfvf(rvu, pcifunc);
err = nix_get_struct_ptrs(rvu, pcifunc, &nix_hw, &blkaddr);
if (err)
@@ -1984,23 +2004,6 @@ static void nix_clear_tx_xoff(struct rvu *rvu, int blkaddr,
rvu_write64(rvu, blkaddr, reg, 0x0);
}

-static int nix_get_tx_link(struct rvu *rvu, u16 pcifunc)
-{
- struct rvu_hwinfo *hw = rvu->hw;
- int pf = rvu_get_pf(pcifunc);
- u8 cgx_id = 0, lmac_id = 0;
-
- if (is_lbk_vf(rvu, pcifunc)) {/* LBK links */
- return hw->cgx_links;
- } else if (is_pf_cgxmapped(rvu, pf)) {
- rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
- return (cgx_id * hw->lmac_per_cgx) + lmac_id;
- }
-
- /* SDP link */
- return hw->cgx_links + hw->lbk_links;
-}
-
static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc,
int link, int *start, int *end)
{
@@ -2949,6 +2952,7 @@ static int nix_tx_vtag_alloc(struct rvu *rvu, int blkaddr,
mutex_unlock(&vlan->rsrc_lock);

regval = size ? vtag : vtag << 32;
+ regval |= (vtag & ~GENMASK_ULL(47, 0)) << 48;

rvu_write64(rvu, blkaddr,
NIX_AF_TX_VTAG_DEFX_DATA(index), regval);
@@ -4619,6 +4623,7 @@ static void nix_link_config(struct rvu *rvu, int blkaddr,
rvu_get_lbk_link_max_frs(rvu, &lbk_max_frs);
rvu_get_lmac_link_max_frs(rvu, &lmac_max_frs);

+ rvu_write64(rvu, blkaddr, NIX_AF_SDP_LINK_CREDIT, SDP_LINK_CREDIT);
/* Set default min/max packet lengths allowed on NIX Rx links.
*
* With HW reset minlen value of 60byte, HW will treat ARP pkts
@@ -4630,14 +4635,14 @@ static void nix_link_config(struct rvu *rvu, int blkaddr,
((u64)lmac_max_frs << 16) | NIC_HW_MIN_FRS);
}

- for (link = hw->cgx_links; link < hw->lbk_links; link++) {
+ for (link = hw->cgx_links; link < hw->cgx_links + hw->lbk_links; link++) {
rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link),
((u64)lbk_max_frs << 16) | NIC_HW_MIN_FRS);
}
if (hw->sdp_links) {
link = hw->cgx_links + hw->lbk_links;
rvu_write64(rvu, blkaddr, NIX_AF_RX_LINKX_CFG(link),
- SDP_HW_MAX_FRS << 16 | NIC_HW_MIN_FRS);
+ SDP_HW_MAX_FRS << 16 | SDP_HW_MIN_FRS);
}

/* Get MCS external bypass status for CN10K-B */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
index 150635de2bd5..9fa06ec21ad1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
@@ -555,6 +555,7 @@ do { \
NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6);
/* PF_FUNC is 2 bytes at 0th byte of NPC_LT_LA_IH_NIX_ETHER */
NPC_SCAN_HDR(NPC_PF_FUNC, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 0, 2);
+ NPC_SCAN_HDR(NPC_SQ_ID, NPC_LID_LA, NPC_LT_LA_IH_NIX_ETHER, 2, 3);
}

static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf)
@@ -1225,6 +1226,10 @@ static int npc_update_tx_entry(struct rvu *rvu, struct rvu_pfvf *pfvf,
npc_update_entry(rvu, NPC_PF_FUNC, entry, (__force u16)htons(target),
0, mask, 0, NIX_INTF_TX);

+ npc_update_entry(rvu, NPC_SQ_ID, entry,
+ (__force u16)htons(req->packet.sq_id),
+ 0, req->mask.sq_id, 0, NIX_INTF_TX);
+
*(u64 *)&action = 0x00;
action.op = req->op;
action.index = req->index;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 5ec92654e7ad..95c332a1fc70 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -431,6 +431,7 @@
#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0 | (a) << 16)
#define NIX_AF_SMQX_STATUS(a) (0x730 | (a) << 16)
#define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0 | (a) << 16)
+#define NIX_LF_CINTX_INT_W1S(a) (0xd30 | (a) << 12)

#define NIX_PRIV_AF_INT_CFG (0x8000000)
#define NIX_PRIV_LFX_CFG (0x8000010)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
index 5ef406c7e8a4..ee54f1694ea6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
@@ -825,4 +825,30 @@ enum nix_tx_vtag_op {
#define VTAG_STRIP BIT_ULL(4)
#define VTAG_CAPTURE BIT_ULL(5)

+/* NIX TX stats */
+enum nix_stat_lf_tx {
+ TX_UCAST = 0x0,
+ TX_BCAST = 0x1,
+ TX_MCAST = 0x2,
+ TX_DROP = 0x3,
+ TX_OCTS = 0x4,
+ TX_STATS_ENUM_LAST,
+};
+
+/* NIX RX stats */
+enum nix_stat_lf_rx {
+ RX_OCTS = 0x0,
+ RX_UCAST = 0x1,
+ RX_BCAST = 0x2,
+ RX_MCAST = 0x3,
+ RX_DROP = 0x4,
+ RX_DROP_OCTS = 0x5,
+ RX_FCS = 0x6,
+ RX_ERR = 0x7,
+ RX_DRP_BCAST = 0x8,
+ RX_DRP_MCAST = 0x9,
+ RX_DRP_L3BCAST = 0xa,
+ RX_DRP_L3MCAST = 0xb,
+ RX_STATS_ENUM_LAST,
+};
#endif /* RVU_STRUCT_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
index 854045ed3b06..ceb81eebf65e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_switch.c
@@ -8,7 +8,7 @@
#include <linux/bitfield.h>
#include "rvu.h"

-static void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable)
+void rvu_switch_enable_lbk_link(struct rvu *rvu, u16 pcifunc, bool enable)
{
struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
struct nix_hw *nix_hw;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index a85ac039d779..4b53af087cb4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -227,7 +227,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu)
u16 maxlen;
int err;

- maxlen = otx2_get_max_mtu(pfvf) + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN;
+ maxlen = pfvf->hw.max_mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN;

mutex_lock(&pfvf->mbox.lock);
req = otx2_mbox_alloc_msg_nix_set_hw_frs(&pfvf->mbox);
@@ -236,7 +236,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu)
return -ENOMEM;
}

- req->maxlen = pfvf->netdev->mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN;
+ req->maxlen = mtu + OTX2_ETH_HLEN + OTX2_HW_TIMESTAMP_LEN;

/* Use max receive length supported by hardware for loopback devices */
if (is_otx2_lbkvf(pfvf->pdev))
@@ -246,6 +246,7 @@ int otx2_hw_set_mtu(struct otx2_nic *pfvf, int mtu)
mutex_unlock(&pfvf->mbox.lock);
return err;
}
+EXPORT_SYMBOL(otx2_hw_set_mtu);

int otx2_config_pause_frm(struct otx2_nic *pfvf)
{
@@ -1782,6 +1783,7 @@ void otx2_free_cints(struct otx2_nic *pfvf, int n)
free_irq(vector, &qset->napi[qidx]);
}
}
+EXPORT_SYMBOL(otx2_free_cints);

void otx2_set_cints_affinity(struct otx2_nic *pfvf)
{
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 24fbbef265a6..3072648f478c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -120,33 +120,6 @@ enum otx2_errcodes_re {
ERRCODE_IL4_CSUM = 0x22,
};

-/* NIX TX stats */
-enum nix_stat_lf_tx {
- TX_UCAST = 0x0,
- TX_BCAST = 0x1,
- TX_MCAST = 0x2,
- TX_DROP = 0x3,
- TX_OCTS = 0x4,
- TX_STATS_ENUM_LAST,
-};
-
-/* NIX RX stats */
-enum nix_stat_lf_rx {
- RX_OCTS = 0x0,
- RX_UCAST = 0x1,
- RX_BCAST = 0x2,
- RX_MCAST = 0x3,
- RX_DROP = 0x4,
- RX_DROP_OCTS = 0x5,
- RX_FCS = 0x6,
- RX_ERR = 0x7,
- RX_DRP_BCAST = 0x8,
- RX_DRP_MCAST = 0x9,
- RX_DRP_L3BCAST = 0xa,
- RX_DRP_L3MCAST = 0xb,
- RX_STATS_ENUM_LAST,
-};
-
struct otx2_dev_stats {
u64 rx_bytes;
u64 rx_frames;
@@ -228,6 +201,7 @@ struct otx2_hw {
u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
u16 matchall_ipolicer;
u32 dwrr_mtu;
+ u32 max_mtu;
u8 smq_link_type;

/* HW settings, coalescing etc */
@@ -999,6 +973,21 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
int otx2_aura_init(struct otx2_nic *pfvf, int aura_id,
int pool_id, int numptrs);

+int otx2_init_hw_resources(struct otx2_nic *pfvf);
+void otx2_free_hw_resources(struct otx2_nic *pf);
+int otx2_wq_init(struct otx2_nic *pf);
+int otx2_check_pf_usable(struct otx2_nic *pf);
+int otx2_pfaf_mbox_init(struct otx2_nic *pf);
+int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af);
+int otx2_realloc_msix_vectors(struct otx2_nic *pf);
+void otx2_pfaf_mbox_destroy(struct otx2_nic *pf);
+void otx2_disable_mbox_intr(struct otx2_nic *pf);
+void otx2_free_queue_mem(struct otx2_qset *qset);
+int otx2_alloc_queue_mem(struct otx2_nic *pf);
+void otx2_disable_napi(struct otx2_nic *pf);
+irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq);
+int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf);
+
/* RSS configuration APIs*/
int otx2_rss_init(struct otx2_nic *pfvf);
int otx2_set_flowkey_cfg(struct otx2_nic *pfvf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index f5bce3e326cc..9d16019cfaab 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1008,7 +1008,7 @@ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq)
return IRQ_HANDLED;
}

-static void otx2_disable_mbox_intr(struct otx2_nic *pf)
+void otx2_disable_mbox_intr(struct otx2_nic *pf)
{
int vector = pci_irq_vector(pf->pdev, RVU_PF_INT_VEC_AFPF_MBOX);

@@ -1016,8 +1016,9 @@ static void otx2_disable_mbox_intr(struct otx2_nic *pf)
otx2_write64(pf, RVU_PF_INT_ENA_W1C, BIT_ULL(0));
free_irq(vector, pf);
}
+EXPORT_SYMBOL(otx2_disable_mbox_intr);

-static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af)
+int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af)
{
struct otx2_hw *hw = &pf->hw;
struct msg_req *req;
@@ -1060,8 +1061,9 @@ static int otx2_register_mbox_intr(struct otx2_nic *pf, bool probe_af)

return 0;
}
+EXPORT_SYMBOL(otx2_register_mbox_intr);

-static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf)
+void otx2_pfaf_mbox_destroy(struct otx2_nic *pf)
{
struct mbox *mbox = &pf->mbox;

@@ -1076,8 +1078,9 @@ static void otx2_pfaf_mbox_destroy(struct otx2_nic *pf)
otx2_mbox_destroy(&mbox->mbox);
otx2_mbox_destroy(&mbox->mbox_up);
}
+EXPORT_SYMBOL(otx2_pfaf_mbox_destroy);

-static int otx2_pfaf_mbox_init(struct otx2_nic *pf)
+int otx2_pfaf_mbox_init(struct otx2_nic *pf)
{
struct mbox *mbox = &pf->mbox;
void __iomem *hwbase;
@@ -1124,6 +1127,7 @@ static int otx2_pfaf_mbox_init(struct otx2_nic *pf)
otx2_pfaf_mbox_destroy(pf);
return err;
}
+EXPORT_SYMBOL(otx2_pfaf_mbox_init);

static int otx2_cgx_config_linkevents(struct otx2_nic *pf, bool enable)
{
@@ -1379,7 +1383,7 @@ static irqreturn_t otx2_q_intr_handler(int irq, void *data)
return IRQ_HANDLED;
}

-static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq)
+irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq)
{
struct otx2_cq_poll *cq_poll = (struct otx2_cq_poll *)cq_irq;
struct otx2_nic *pf = (struct otx2_nic *)cq_poll->dev;
@@ -1398,20 +1402,25 @@ static irqreturn_t otx2_cq_intr_handler(int irq, void *cq_irq)

return IRQ_HANDLED;
}
+EXPORT_SYMBOL(otx2_cq_intr_handler);

-static void otx2_disable_napi(struct otx2_nic *pf)
+void otx2_disable_napi(struct otx2_nic *pf)
{
struct otx2_qset *qset = &pf->qset;
struct otx2_cq_poll *cq_poll;
+ struct work_struct *work;
int qidx;

for (qidx = 0; qidx < pf->hw.cint_cnt; qidx++) {
cq_poll = &qset->napi[qidx];
- cancel_work_sync(&cq_poll->dim.work);
+ work = &cq_poll->dim.work;
+ if (work->func)
+ cancel_work_sync(&cq_poll->dim.work);
napi_disable(&cq_poll->napi);
netif_napi_del(&cq_poll->napi);
}
}
+EXPORT_SYMBOL(otx2_disable_napi);

static void otx2_free_cq_res(struct otx2_nic *pf)
{
@@ -1477,7 +1486,7 @@ static int otx2_get_rbuf_size(struct otx2_nic *pf, int mtu)
return ALIGN(rbuf_size, 2048);
}

-static int otx2_init_hw_resources(struct otx2_nic *pf)
+int otx2_init_hw_resources(struct otx2_nic *pf)
{
struct nix_lf_free_req *free_req;
struct mbox *mbox = &pf->mbox;
@@ -1601,8 +1610,9 @@ static int otx2_init_hw_resources(struct otx2_nic *pf)
mutex_unlock(&mbox->lock);
return err;
}
+EXPORT_SYMBOL(otx2_init_hw_resources);

-static void otx2_free_hw_resources(struct otx2_nic *pf)
+void otx2_free_hw_resources(struct otx2_nic *pf)
{
struct otx2_qset *qset = &pf->qset;
struct nix_lf_free_req *free_req;
@@ -1688,6 +1698,7 @@ static void otx2_free_hw_resources(struct otx2_nic *pf)
}
mutex_unlock(&mbox->lock);
}
+EXPORT_SYMBOL(otx2_free_hw_resources);

static bool otx2_promisc_use_mce_list(struct otx2_nic *pfvf)
{
@@ -1770,15 +1781,23 @@ static void otx2_dim_work(struct work_struct *w)
dim->state = DIM_START_MEASURE;
}

-int otx2_open(struct net_device *netdev)
+void otx2_free_queue_mem(struct otx2_qset *qset)
+{
+ kfree(qset->sq);
+ qset->sq = NULL;
+ kfree(qset->cq);
+ qset->cq = NULL;
+ kfree(qset->rq);
+ qset->rq = NULL;
+ kfree(qset->napi);
+}
+EXPORT_SYMBOL(otx2_free_queue_mem);
+int otx2_alloc_queue_mem(struct otx2_nic *pf)
{
- struct otx2_nic *pf = netdev_priv(netdev);
- struct otx2_cq_poll *cq_poll = NULL;
struct otx2_qset *qset = &pf->qset;
- int err = 0, qidx, vec;
- char *irq_name;
+ struct otx2_cq_poll *cq_poll;
+ int err = -ENOMEM;

- netif_carrier_off(netdev);

/* RQ and SQs are mapped to different CQs,
* so find out max CQ IRQs (i.e CINTs) needed.
@@ -1791,14 +1810,13 @@ int otx2_open(struct net_device *netdev)

qset->napi = kcalloc(pf->hw.cint_cnt, sizeof(*cq_poll), GFP_KERNEL);
if (!qset->napi)
- return -ENOMEM;
+ return err;

/* CQ size of RQ */
qset->rqe_cnt = qset->rqe_cnt ? qset->rqe_cnt : Q_COUNT(Q_SIZE_256);
/* CQ size of SQ */
qset->sqe_cnt = qset->sqe_cnt ? qset->sqe_cnt : Q_COUNT(Q_SIZE_4K);

- err = -ENOMEM;
qset->cq = kcalloc(pf->qset.cq_cnt,
sizeof(struct otx2_cq_queue), GFP_KERNEL);
if (!qset->cq)
@@ -1814,6 +1832,28 @@ int otx2_open(struct net_device *netdev)
if (!qset->rq)
goto err_free_mem;

+ return 0;
+
+err_free_mem:
+ otx2_free_queue_mem(qset);
+ return err;
+}
+EXPORT_SYMBOL(otx2_alloc_queue_mem);
+
+int otx2_open(struct net_device *netdev)
+{
+ struct otx2_nic *pf = netdev_priv(netdev);
+ struct otx2_cq_poll *cq_poll = NULL;
+ struct otx2_qset *qset = &pf->qset;
+ int err = 0, qidx, vec;
+ char *irq_name;
+
+ netif_carrier_off(netdev);
+
+ err = otx2_alloc_queue_mem(pf);
+ if (err)
+ return err;
+
err = otx2_init_hw_resources(pf);
if (err)
goto err_free_mem;
@@ -1979,10 +2019,7 @@ int otx2_open(struct net_device *netdev)
otx2_disable_napi(pf);
otx2_free_hw_resources(pf);
err_free_mem:
- kfree(qset->sq);
- kfree(qset->cq);
- kfree(qset->rq);
- kfree(qset->napi);
+ otx2_free_queue_mem(qset);
return err;
}
EXPORT_SYMBOL(otx2_open);
@@ -2047,11 +2084,8 @@ int otx2_stop(struct net_device *netdev)
for (qidx = 0; qidx < netdev->num_tx_queues; qidx++)
netdev_tx_reset_queue(netdev_get_tx_queue(netdev, qidx));

+ otx2_free_queue_mem(qset);

- kfree(qset->sq);
- kfree(qset->cq);
- kfree(qset->rq);
- kfree(qset->napi);
/* Do not clear RQ/SQ ringsize settings */
memset_startat(qset, 0, sqe_cnt);
return 0;
@@ -2081,7 +2115,7 @@ static netdev_tx_t otx2_xmit(struct sk_buff *skb, struct net_device *netdev)
sq = &pf->qset.sq[sq_idx];
txq = netdev_get_tx_queue(netdev, qidx);

- if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) {
+ if (!otx2_sq_append_skb(pf, txq, sq, skb, qidx)) {
netif_tx_stop_queue(txq);

/* Check again, incase SQBs got freed up */
@@ -2786,7 +2820,7 @@ static const struct net_device_ops otx2_netdev_ops = {
.ndo_set_vf_trust = otx2_ndo_set_vf_trust,
};

-static int otx2_wq_init(struct otx2_nic *pf)
+int otx2_wq_init(struct otx2_nic *pf)
{
pf->otx2_wq = create_singlethread_workqueue("otx2_wq");
if (!pf->otx2_wq)
@@ -2797,7 +2831,7 @@ static int otx2_wq_init(struct otx2_nic *pf)
return 0;
}

-static int otx2_check_pf_usable(struct otx2_nic *nic)
+int otx2_check_pf_usable(struct otx2_nic *nic)
{
u64 rev;

@@ -2814,8 +2848,9 @@ static int otx2_check_pf_usable(struct otx2_nic *nic)
}
return 0;
}
+EXPORT_SYMBOL(otx2_check_pf_usable);

-static int otx2_realloc_msix_vectors(struct otx2_nic *pf)
+int otx2_realloc_msix_vectors(struct otx2_nic *pf)
{
struct otx2_hw *hw = &pf->hw;
int num_vec, err;
@@ -2837,6 +2872,7 @@ static int otx2_realloc_msix_vectors(struct otx2_nic *pf)

return otx2_register_mbox_intr(pf, false);
}
+EXPORT_SYMBOL(otx2_realloc_msix_vectors);

static int otx2_sriov_vfcfg_init(struct otx2_nic *pf)
{
@@ -2872,6 +2908,88 @@ static void otx2_sriov_vfcfg_cleanup(struct otx2_nic *pf)
}
}

+int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf)
+{
+ struct device *dev = &pdev->dev;
+ struct otx2_hw *hw = &pf->hw;
+ int num_vec, err;
+
+ num_vec = pci_msix_vec_count(pdev);
+ hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE,
+ GFP_KERNEL);
+ if (!hw->irq_name)
+ return -ENOMEM;
+
+ hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec,
+ sizeof(cpumask_var_t), GFP_KERNEL);
+ if (!hw->affinity_mask)
+ return -ENOMEM;
+
+ /* Map CSRs */
+ pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+ if (!pf->reg_base) {
+ dev_err(dev, "Unable to map physical function CSRs, aborting\n");
+ return -ENOMEM;
+ }
+
+ err = otx2_check_pf_usable(pf);
+ if (err)
+ return err;
+
+ err = pci_alloc_irq_vectors(hw->pdev, RVU_PF_INT_VEC_CNT,
+ RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX);
+ if (err < 0) {
+ dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n",
+ __func__, num_vec);
+ return err;
+ }
+
+ otx2_setup_dev_hw_settings(pf);
+
+ /* Init PF <=> AF mailbox stuff */
+ err = otx2_pfaf_mbox_init(pf);
+ if (err)
+ goto err_free_irq_vectors;
+
+ /* Register mailbox interrupt */
+ err = otx2_register_mbox_intr(pf, true);
+ if (err)
+ goto err_mbox_destroy;
+
+ /* Request AF to attach NPA and NIX LFs to this PF.
+ * NIX and NPA LFs are needed for this PF to function as a NIC.
+ */
+ err = otx2_attach_npa_nix(pf);
+ if (err)
+ goto err_disable_mbox_intr;
+
+ err = otx2_realloc_msix_vectors(pf);
+ if (err)
+ goto err_detach_rsrc;
+
+ err = cn10k_lmtst_init(pf);
+ if (err)
+ goto err_detach_rsrc;
+
+ return 0;
+
+err_detach_rsrc:
+ if (pf->hw.lmt_info)
+ free_percpu(pf->hw.lmt_info);
+ if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
+ qmem_free(pf->dev, pf->dync_lmt);
+ otx2_detach_resources(&pf->mbox);
+err_disable_mbox_intr:
+ otx2_disable_mbox_intr(pf);
+err_mbox_destroy:
+ otx2_pfaf_mbox_destroy(pf);
+err_free_irq_vectors:
+ pci_free_irq_vectors(hw->pdev);
+
+ return err;
+}
+EXPORT_SYMBOL(otx2_init_rsrc);
+
static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
@@ -2879,7 +2997,6 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
struct net_device *netdev;
struct otx2_nic *pf;
struct otx2_hw *hw;
- int num_vec;

err = pcim_enable_device(pdev);
if (err) {
@@ -2930,71 +3047,14 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* Use CQE of 128 byte descriptor size by default */
hw->xqe_size = 128;

- num_vec = pci_msix_vec_count(pdev);
- hw->irq_name = devm_kmalloc_array(&hw->pdev->dev, num_vec, NAME_SIZE,
- GFP_KERNEL);
- if (!hw->irq_name) {
- err = -ENOMEM;
- goto err_free_netdev;
- }
-
- hw->affinity_mask = devm_kcalloc(&hw->pdev->dev, num_vec,
- sizeof(cpumask_var_t), GFP_KERNEL);
- if (!hw->affinity_mask) {
- err = -ENOMEM;
- goto err_free_netdev;
- }
-
- /* Map CSRs */
- pf->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
- if (!pf->reg_base) {
- dev_err(dev, "Unable to map physical function CSRs, aborting\n");
- err = -ENOMEM;
- goto err_free_netdev;
- }
-
- err = otx2_check_pf_usable(pf);
+ err = otx2_init_rsrc(pdev, pf);
if (err)
goto err_free_netdev;

- err = pci_alloc_irq_vectors(hw->pdev, RVU_PF_INT_VEC_CNT,
- RVU_PF_INT_VEC_CNT, PCI_IRQ_MSIX);
- if (err < 0) {
- dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n",
- __func__, num_vec);
- goto err_free_netdev;
- }
-
- otx2_setup_dev_hw_settings(pf);
-
- /* Init PF <=> AF mailbox stuff */
- err = otx2_pfaf_mbox_init(pf);
- if (err)
- goto err_free_irq_vectors;
-
- /* Register mailbox interrupt */
- err = otx2_register_mbox_intr(pf, true);
- if (err)
- goto err_mbox_destroy;
-
- /* Request AF to attach NPA and NIX LFs to this PF.
- * NIX and NPA LFs are needed for this PF to function as a NIC.
- */
- err = otx2_attach_npa_nix(pf);
- if (err)
- goto err_disable_mbox_intr;
-
- err = otx2_realloc_msix_vectors(pf);
- if (err)
- goto err_detach_rsrc;
-
err = otx2_set_real_num_queues(netdev, hw->tx_queues, hw->rx_queues);
if (err)
goto err_detach_rsrc;

- err = cn10k_lmtst_init(pf);
- if (err)
- goto err_detach_rsrc;

/* Assign default mac address */
otx2_get_mac_from_af(netdev);
@@ -3058,6 +3118,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)

netdev->min_mtu = OTX2_MIN_MTU;
netdev->max_mtu = otx2_get_max_mtu(pf);
+ hw->max_mtu = netdev->max_mtu;

/* reset CGX/RPM MAC stats */
otx2_reset_mac_stats(pf);
@@ -3118,11 +3179,8 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
qmem_free(pf->dev, pf->dync_lmt);
otx2_detach_resources(&pf->mbox);
-err_disable_mbox_intr:
otx2_disable_mbox_intr(pf);
-err_mbox_destroy:
otx2_pfaf_mbox_destroy(pf);
-err_free_irq_vectors:
pci_free_irq_vectors(hw->pdev);
err_free_netdev:
pci_set_drvdata(pdev, NULL);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index a16e9f244117..aad467aa94a3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -131,6 +131,7 @@ static void otx2_xdp_snd_pkt_handler(struct otx2_nic *pfvf,
}

static void otx2_snd_pkt_handler(struct otx2_nic *pfvf,
+ struct net_device *ndev,
struct otx2_cq_queue *cq,
struct otx2_snd_queue *sq,
struct nix_cqe_tx_s *cqe,
@@ -145,7 +146,7 @@ static void otx2_snd_pkt_handler(struct otx2_nic *pfvf,

if (unlikely(snd_comp->status) && netif_msg_tx_err(pfvf))
net_err_ratelimited("%s: TX%d: Error in send CQ status:%x\n",
- pfvf->netdev->name, cq->cint_idx,
+ ndev->name, cq->cint_idx,
snd_comp->status);

sg = &sq->sg[snd_comp->sqe_id];
@@ -453,6 +454,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
int tx_pkts = 0, tx_bytes = 0, qidx;
struct otx2_snd_queue *sq;
struct nix_cqe_tx_s *cqe;
+ struct net_device *ndev;
int processed_cqe = 0;

if (cq->pend_cqe >= budget)
@@ -464,6 +466,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
process_cqe:
qidx = cq->cq_idx - pfvf->hw.rx_queues;
sq = &pfvf->qset.sq[qidx];
+ ndev = pfvf->netdev;

while (likely(processed_cqe < budget) && cq->pend_cqe) {
cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq);
@@ -478,7 +481,8 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
if (cq->cq_type == CQ_XDP)
otx2_xdp_snd_pkt_handler(pfvf, sq, cqe);
else
- otx2_snd_pkt_handler(pfvf, cq, &pfvf->qset.sq[qidx],
+ otx2_snd_pkt_handler(pfvf, ndev, cq,
+ &pfvf->qset.sq[qidx],
cqe, budget, &tx_pkts, &tx_bytes);

cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
@@ -505,7 +509,7 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
/* Check if queue was stopped earlier due to ring full */
smp_mb();
if (netif_tx_queue_stopped(txq) &&
- netif_carrier_ok(pfvf->netdev))
+ netif_carrier_ok(ndev))
netif_tx_wake_queue(txq);
}
return 0;
@@ -594,6 +598,7 @@ int otx2_napi_handler(struct napi_struct *napi, int budget)
}
return workdone;
}
+EXPORT_SYMBOL(otx2_napi_handler);

void otx2_sqe_flush(void *dev, struct otx2_snd_queue *sq,
int size, int qidx)
@@ -1141,13 +1146,13 @@ static void otx2_set_txtstamp(struct otx2_nic *pfvf, struct sk_buff *skb,
}
}

-bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq,
+bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq,
+ struct otx2_snd_queue *sq,
struct sk_buff *skb, u16 qidx)
{
- struct netdev_queue *txq = netdev_get_tx_queue(netdev, qidx);
- struct otx2_nic *pfvf = netdev_priv(netdev);
int offset, num_segs, free_desc;
struct nix_sqe_hdr_s *sqe_hdr;
+ struct otx2_nic *pfvf = dev;

/* Check if there is enough room between producer
* and consumer index.
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
index 3f1d2655ff77..e1db5f961877 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
@@ -167,7 +167,8 @@ static inline u64 otx2_iova_to_phys(void *iommu_domain, dma_addr_t dma_addr)
}

int otx2_napi_handler(struct napi_struct *napi, int budget);
-bool otx2_sq_append_skb(struct net_device *netdev, struct otx2_snd_queue *sq,
+bool otx2_sq_append_skb(void *dev, struct netdev_queue *txq,
+ struct otx2_snd_queue *sq,
struct sk_buff *skb, u16 qidx);
void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq,
int size, int qidx);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index 99fcc5661674..0486fca8b573 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -395,7 +395,7 @@ static netdev_tx_t otx2vf_xmit(struct sk_buff *skb, struct net_device *netdev)
sq = &vf->qset.sq[qidx];
txq = netdev_get_tx_queue(netdev, qidx);

- if (!otx2_sq_append_skb(netdev, sq, skb, qidx)) {
+ if (!otx2_sq_append_skb(vf, txq, sq, skb, qidx)) {
netif_tx_stop_queue(txq);

/* Check again, incase SQBs got freed up */
@@ -500,7 +500,7 @@ static const struct net_device_ops otx2vf_netdev_ops = {
.ndo_setup_tc = otx2_setup_tc,
};

-static int otx2_wq_init(struct otx2_nic *vf)
+static int otx2_vf_wq_init(struct otx2_nic *vf)
{
vf->otx2_wq = create_singlethread_workqueue("otx2vf_wq");
if (!vf->otx2_wq)
@@ -671,6 +671,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)

netdev->min_mtu = OTX2_MIN_MTU;
netdev->max_mtu = otx2_get_max_mtu(vf);
+ hw->max_mtu = netdev->max_mtu;

/* To distinguish, for LBK VFs set netdev name explicitly */
if (is_otx2_lbkvf(vf->pdev)) {
@@ -688,7 +689,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto err_ptp_destroy;
}

- err = otx2_wq_init(vf);
+ err = otx2_vf_wq_init(vf);
if (err)
goto err_unreg_netdev;

--
2.25.1


2024-06-11 16:25:30

by Geetha sowjanya

Subject: [net-next PATCH v5 08/10] octeontx2-pf: Configure VF mtu via representor

Add support to manage the MTU configuration of a VF through its
representor. On an update of the representor MTU, a mailbox
notification is sent to the VF so that it updates its own MTU.
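
For illustration, with a representor named as in the cover letter (the
name below is just an example):

# ip link set r1p1v1 mtu 2000

The representor's ndo_change_mtu sends an RVU_EVENT_MTU_CHANGE event,
and the VF's mailbox up-handler applies the new MTU to its netdev.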

Signed-off-by: Sai Krishna <[email protected]>
Signed-off-by: Geetha sowjanya <[email protected]>
---
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 5 +++++
.../net/ethernet/marvell/octeontx2/nic/rep.c | 17 +++++++++++++++++
2 files changed, 22 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 0f17e8af4f76..91a458c5028a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -854,6 +854,11 @@ static int otx2_mbox_up_handler_rep_event_up_notify(struct otx2_nic *pf,
{
struct net_device *netdev = pf->netdev;

+ if (info->event == RVU_EVENT_MTU_CHANGE) {
+ netdev->mtu = info->evt_data.mtu;
+ return 0;
+ }
+
if (info->event == RVU_EVENT_PORT_STATE) {
if (info->evt_data.port_state) {
pf->flags |= OTX2_FLAG_PORT_UP;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index 6a527b824f33..d984b7d9dc64 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -79,6 +79,22 @@ int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info)
return 0;
}

+static int rvu_rep_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct rep_dev *rep = netdev_priv(dev);
+ struct otx2_nic *priv = rep->mdev;
+ struct rep_event evt = {0};
+
+ netdev_info(dev, "Changing MTU from %d to %d\n",
+ dev->mtu, new_mtu);
+ dev->mtu = new_mtu;
+
+ evt.evt_data.mtu = new_mtu;
+ evt.pcifunc = rep->pcifunc;
+ rvu_rep_notify_pfvf(priv, RVU_EVENT_MTU_CHANGE, &evt);
+ return 0;
+}
+
static void rvu_rep_get_stats(struct work_struct *work)
{
struct delayed_work *del_work = to_delayed_work(work);
@@ -226,6 +242,7 @@ static const struct net_device_ops rvu_rep_netdev_ops = {
.ndo_stop = rvu_rep_stop,
.ndo_start_xmit = rvu_rep_xmit,
.ndo_get_stats64 = rvu_rep_get_stats64,
+ .ndo_change_mtu = rvu_rep_change_mtu,
};

static int rvu_rep_napi_init(struct otx2_nic *priv,
--
2.25.1


2024-06-11 16:25:29

by Geetha sowjanya

Subject: [net-next PATCH v5 02/10] octeontx2-pf: RVU representor driver

This patch adds a basic driver for the RVU representor.
On probe, the driver performs PCI-specific initialization and
configures the HW resources.
It introduces the RVU_ESWITCH kernel config option to enable/disable
this driver. The representor and NIC drivers share code, but the
representor netdevs support only a subset of the NIC functionality.
Hence the "otx2_rep_dev" helper is used to skip initialization of
features that are not supported by the representors.
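
For illustration, the skip pattern referred to above is a device-ID check
(see the otx2_rep_dev() helper added in rep.h below); shared init paths
use it as in this sketch taken from the otx2_pf.c hunk of this patch:

	if (!otx2_rep_dev(pf->pdev)) {
		/* NIC-only setup, skipped for the representor device */
		pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
		pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu);
	}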

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../net/ethernet/marvell/octeontx2/Kconfig | 8 +
.../ethernet/marvell/octeontx2/af/Makefile | 3 +-
.../net/ethernet/marvell/octeontx2/af/mbox.h | 8 +
.../net/ethernet/marvell/octeontx2/af/rvu.h | 11 +
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 21 +-
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 47 ++++
.../ethernet/marvell/octeontx2/nic/Makefile | 2 +
.../marvell/octeontx2/nic/otx2_common.h | 12 +-
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 17 +-
.../marvell/octeontx2/nic/otx2_txrx.c | 21 +-
.../net/ethernet/marvell/octeontx2/nic/rep.c | 221 ++++++++++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.h | 31 +++
12 files changed, 385 insertions(+), 17 deletions(-)
create mode 100644 drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.c
create mode 100644 drivers/net/ethernet/marvell/octeontx2/nic/rep.h

diff --git a/drivers/net/ethernet/marvell/octeontx2/Kconfig b/drivers/net/ethernet/marvell/octeontx2/Kconfig
index a32d85d6f599..ff86a5f267c3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/Kconfig
+++ b/drivers/net/ethernet/marvell/octeontx2/Kconfig
@@ -46,3 +46,11 @@ config OCTEONTX2_VF
depends on OCTEONTX2_PF
help
This driver supports Marvell's OcteonTX2 NIC virtual function.
+
+config RVU_ESWITCH
+ tristate "Marvell RVU E-Switch support"
+ depends on OCTEONTX2_PF
+ default m
+ help
+ This driver supports Marvell's RVU E-Switch that
+ provides internal SRIOV packet steering and switching for the
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/Makefile b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
index 3cf4c8285c90..ccea37847df8 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/af/Makefile
@@ -11,4 +11,5 @@ rvu_mbox-y := mbox.o rvu_trace.o
rvu_af-y := cgx.o rvu.o rvu_cgx.o rvu_npa.o rvu_nix.o \
rvu_reg.o rvu_npc.o rvu_debugfs.o ptp.o rvu_npc_fs.o \
rvu_cpt.o rvu_devlink.o rpm.o rvu_cn10k.o rvu_switch.o \
- rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o
+ rvu_sdp.o rvu_npc_hash.o mcs.o mcs_rvu_if.o mcs_cnf10kb.o \
+ rvu_rep.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index e6d7d6e862c0..befb327e8aff 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -144,6 +144,7 @@ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \
msg_rsp) \
M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \
M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \
+M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \
/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
@@ -1525,6 +1526,13 @@ struct ptp_get_cap_rsp {
u64 cap;
};

+struct get_rep_cnt_rsp {
+ struct mbox_msghdr hdr;
+ u16 rep_cnt;
+ u16 rep_pf_map[64];
+ u64 rsvd;
+};
+
struct flow_msg {
unsigned char dmac[6];
unsigned char smac[6];
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index 30efa5607c58..cbdc7aeaccfc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -594,6 +594,9 @@ struct rvu {
spinlock_t cpt_intr_lock;

struct mutex mbox_lock; /* Serialize mbox up and down msgs */
+ u16 rep_pcifunc;
+ int rep_cnt;
+ u16 *rep2pfvf_map;
};

static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -822,6 +825,14 @@ bool is_sdp_pfvf(u16 pcifunc);
bool is_sdp_pf(u16 pcifunc);
bool is_sdp_vf(struct rvu *rvu, u16 pcifunc);

+static inline bool is_rep_dev(struct rvu *rvu, u16 pcifunc)
+{
+ if (rvu->rep_pcifunc && rvu->rep_pcifunc == pcifunc)
+ return true;
+
+ return false;
+}
+
/* CGX APIs */
static inline bool is_pf_cgxmapped(struct rvu *rvu, u8 pf)
{
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index b94ff8c75ffc..44db4694a257 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -329,7 +329,9 @@ static bool is_valid_txschq(struct rvu *rvu, int blkaddr,

/* TLs aggegating traffic are shared across PF and VFs */
if (lvl >= hw->cap.nix_tx_aggr_lvl) {
- if (rvu_get_pf(map_func) != rvu_get_pf(pcifunc))
+ if ((nix_get_tx_link(rvu, map_func) !=
+ nix_get_tx_link(rvu, pcifunc)) &&
+ (rvu_get_pf(map_func) != rvu_get_pf(pcifunc)))
return false;
else
return true;
@@ -1634,6 +1636,12 @@ int rvu_mbox_handler_nix_lf_alloc(struct rvu *rvu,
cfg = NPC_TX_DEF_PKIND;
rvu_write64(rvu, blkaddr, NIX_AF_LFX_TX_PARSE_CFG(nixlf), cfg);

+ if (is_rep_dev(rvu, pcifunc)) {
+ pfvf->tx_chan_base = RVU_SWITCH_LBK_CHAN;
+ pfvf->tx_chan_cnt = 1;
+ goto exit;
+ }
+
intf = is_lbk_vf(rvu, pcifunc) ? NIX_INTF_TYPE_LBK : NIX_INTF_TYPE_CGX;
if (is_sdp_pfvf(pcifunc))
intf = NIX_INTF_TYPE_SDP;
@@ -1704,6 +1712,9 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,
if (nixlf < 0)
return NIX_AF_ERR_AF_LF_INVALID;

+ if (is_rep_dev(rvu, pcifunc))
+ goto free_lf;
+
if (req->flags & NIX_LF_DISABLE_FLOWS)
rvu_npc_disable_mcam_entries(rvu, pcifunc, nixlf);
else
@@ -1715,6 +1726,7 @@ int rvu_mbox_handler_nix_lf_free(struct rvu *rvu, struct nix_lf_free_req *req,

nix_interface_deinit(rvu, pcifunc, nixlf);

+free_lf:
/* Reset this NIX LF */
err = rvu_lf_reset(rvu, block, nixlf);
if (err) {
@@ -2010,7 +2022,8 @@ static void nix_get_txschq_range(struct rvu *rvu, u16 pcifunc,
struct rvu_hwinfo *hw = rvu->hw;
int pf = rvu_get_pf(pcifunc);

- if (is_lbk_vf(rvu, pcifunc)) { /* LBK links */
+ /* LBK links */
+ if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc)) {
*start = hw->cap.nix_txsch_per_cgx_lmac * link;
*end = *start + hw->cap.nix_txsch_per_lbk_lmac;
} else if (is_pf_cgxmapped(rvu, pf)) { /* CGX links */
@@ -4522,7 +4535,7 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req,
if (!nix_hw)
return NIX_AF_ERR_INVALID_NIXBLK;

- if (is_lbk_vf(rvu, pcifunc))
+ if (is_lbk_vf(rvu, pcifunc) || is_rep_dev(rvu, pcifunc))
rvu_get_lbk_link_max_frs(rvu, &max_mtu);
else
rvu_get_lmac_link_max_frs(rvu, &max_mtu);
@@ -4550,6 +4563,8 @@ int rvu_mbox_handler_nix_set_hw_frs(struct rvu *rvu, struct nix_frs_cfg *req,
/* For VFs of PF0 ingress is LBK port, so config LBK link */
pfvf = rvu_get_pfvf(rvu, pcifunc);
link = hw->cgx_links + pfvf->lbkid;
+ } else if (is_rep_dev(rvu, pcifunc)) {
+ link = hw->cgx_links + 0;
}

if (link < 0)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
new file mode 100644
index 000000000000..cf13c5f0a3c5
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU Admin Function driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "rvu.h"
+#include "rvu_reg.h"
+
+int rvu_mbox_handler_get_rep_cnt(struct rvu *rvu, struct msg_req *req,
+ struct get_rep_cnt_rsp *rsp)
+{
+ int pf, vf, numvfs, hwvf, rep = 0;
+ u16 pcifunc;
+
+ rvu->rep_pcifunc = req->hdr.pcifunc;
+ rsp->rep_cnt = rvu->cgx_mapped_pfs + rvu->cgx_mapped_vfs;
+ rvu->rep_cnt = rsp->rep_cnt;
+
+ rvu->rep2pfvf_map = devm_kzalloc(rvu->dev, rvu->rep_cnt *
+ sizeof(u16), GFP_KERNEL);
+ if (!rvu->rep2pfvf_map)
+ return -ENOMEM;
+
+ for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
+ if (!is_pf_cgxmapped(rvu, pf))
+ continue;
+ pcifunc = pf << RVU_PFVF_PF_SHIFT;
+ rvu->rep2pfvf_map[rep] = pcifunc;
+ rsp->rep_pf_map[rep] = pcifunc;
+ rep++;
+ rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf);
+ for (vf = 0; vf < numvfs; vf++) {
+ rvu->rep2pfvf_map[rep] = pcifunc |
+ ((vf + 1) & RVU_PFVF_FUNC_MASK);
+ rsp->rep_pf_map[rep] = rvu->rep2pfvf_map[rep];
+ rep++;
+ }
+ }
+ return 0;
+}
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
index 5664f768cb0c..7420915f72bb 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/Makefile
@@ -5,11 +5,13 @@

obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o otx2_ptp.o
obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o
+obj-$(CONFIG_RVU_ESWITCH) += rvu_rep.o

rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
otx2_devlink.o qos_sq.o qos.o
rvu_nicvf-y := otx2_vf.o otx2_devlink.o
+rvu_rep-y := rep.o otx2_devlink.o

rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o
rvu_nicvf-$(CONFIG_DCB) += otx2_dcbnl.o
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 3072648f478c..cd4b84d89dd1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -29,6 +29,7 @@
#include "otx2_devlink.h"
#include <rvu_trace.h>
#include "qos.h"
+#include "rep.h"

/* IPv4 flag more fragment bit */
#define IPV4_FLAG_MORE 0x20
@@ -441,6 +442,7 @@ struct otx2_nic {
#define OTX2_FLAG_PTP_ONESTEP_SYNC BIT_ULL(15)
#define OTX2_FLAG_ADPTV_INT_COAL_ENABLED BIT_ULL(16)
#define OTX2_FLAG_TC_MARK_ENABLED BIT_ULL(17)
+#define OTX2_FLAG_REP_MODE_ENABLED BIT_ULL(18)
u64 flags;
u64 *cq_op_addr;

@@ -508,11 +510,19 @@ struct otx2_nic {
#if IS_ENABLED(CONFIG_MACSEC)
struct cn10k_mcs_cfg *macsec_cfg;
#endif
+
+#if IS_ENABLED(CONFIG_RVU_ESWITCH)
+ struct rep_dev **reps;
+ int rep_cnt;
+ u16 rep_pf_map[RVU_MAX_REP];
+ u16 esw_mode;
+#endif
};

static inline bool is_otx2_lbkvf(struct pci_dev *pdev)
{
- return pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF;
+ return (pdev->device == PCI_DEVID_OCTEONTX2_RVU_AFVF) ||
+ (pdev->device == PCI_DEVID_RVU_REP);
}

static inline bool is_96xx_A0(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 9d16019cfaab..5e43d1ffaa33 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1502,10 +1502,11 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
hw->sqpool_cnt = otx2_get_total_tx_queues(pf);
hw->pool_cnt = hw->rqpool_cnt + hw->sqpool_cnt;

- /* Maximum hardware supported transmit length */
- pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
-
- pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu);
+ if (!otx2_rep_dev(pf->pdev)) {
+ /* Maximum hardware supported transmit length */
+ pf->tx_max_pktlen = pf->netdev->max_mtu + OTX2_ETH_HLEN;
+ pf->rbsize = otx2_get_rbuf_size(pf, pf->netdev->mtu);
+ }

mutex_lock(&mbox->lock);
/* NPA init */
@@ -1634,11 +1635,12 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
otx2_pfc_txschq_stop(pf);
#endif

- otx2_clean_qos_queues(pf);
+ if (!otx2_rep_dev(pf->pdev))
+ otx2_clean_qos_queues(pf);

mutex_lock(&mbox->lock);
/* Disable backpressure */
- if (!(pf->pcifunc & RVU_PFVF_FUNC_MASK))
+ if (!is_otx2_lbkvf(pf->pdev))
otx2_nix_config_bp(pf, false);
mutex_unlock(&mbox->lock);

@@ -1670,7 +1672,8 @@ void otx2_free_hw_resources(struct otx2_nic *pf)
otx2_free_cq_res(pf);

/* Free all ingress bandwidth profiles allocated */
- cn10k_free_all_ipolicers(pf);
+ if (!otx2_rep_dev(pf->pdev))
+ cn10k_free_all_ipolicers(pf);

mutex_lock(&mbox->lock);
/* Reset NIX LF */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
index aad467aa94a3..87489904e595 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
@@ -375,11 +375,13 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
}
start += sizeof(*sg);
}
- otx2_set_rxhash(pfvf, cqe, skb);

- skb_record_rx_queue(skb, cq->cq_idx);
- if (pfvf->netdev->features & NETIF_F_RXCSUM)
- skb->ip_summed = CHECKSUM_UNNECESSARY;
+ if (!(pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED)) {
+ otx2_set_rxhash(pfvf, cqe, skb);
+ skb_record_rx_queue(skb, cq->cq_idx);
+ if (pfvf->netdev->features & NETIF_F_RXCSUM)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }

if (pfvf->flags & OTX2_FLAG_TC_MARK_ENABLED)
skb->mark = parse->match_id;
@@ -466,7 +468,13 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,
process_cqe:
qidx = cq->cq_idx - pfvf->hw.rx_queues;
sq = &pfvf->qset.sq[qidx];
- ndev = pfvf->netdev;
+
+#if IS_ENABLED(CONFIG_RVU_ESWITCH)
+ if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED)
+ ndev = pfvf->reps[qidx]->netdev;
+ else
+#endif
+ ndev = pfvf->netdev;

while (likely(processed_cqe < budget) && cq->pend_cqe) {
cqe = (struct nix_cqe_tx_s *)otx2_get_next_cqe(cq);
@@ -504,6 +512,9 @@ static int otx2_tx_napi_handler(struct otx2_nic *pfvf,

if (qidx >= pfvf->hw.tx_queues)
qidx -= pfvf->hw.xdp_queues;
+
+ if (pfvf->flags & OTX2_FLAG_REP_MODE_ENABLED)
+ qidx = 0;
txq = netdev_get_tx_queue(pfvf->netdev, qidx);
netdev_tx_completed_queue(txq, tx_pkts, tx_bytes);
/* Check if queue was stopped earlier due to ring full */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
new file mode 100644
index 000000000000..26f6935279ad
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -0,0 +1,221 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell RVU representor driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/net_tstamp.h>
+
+#include "otx2_common.h"
+#include "cn10k.h"
+#include "otx2_reg.h"
+#include "rep.h"
+
+#define DRV_NAME "rvu_rep"
+#define DRV_STRING "Marvell RVU Representor Driver"
+
+static const struct pci_device_id rvu_rep_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_RVU_REP) },
+ { }
+};
+
+MODULE_AUTHOR("Marvell International Ltd.");
+MODULE_DESCRIPTION(DRV_STRING);
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);
+
+static int rvu_rep_rsrc_free(struct otx2_nic *priv)
+{
+ struct otx2_qset *qset = &priv->qset;
+ int wrk;
+
+ for (wrk = 0; wrk < priv->qset.cq_cnt; wrk++)
+ cancel_delayed_work_sync(&priv->refill_wrk[wrk].pool_refill_work);
+ devm_kfree(priv->dev, priv->refill_wrk);
+
+ otx2_free_hw_resources(priv);
+ otx2_free_queue_mem(qset);
+ return 0;
+}
+
+static int rvu_rep_rsrc_init(struct otx2_nic *priv)
+{
+ struct otx2_qset *qset = &priv->qset;
+ int err = 0;
+
+ err = otx2_alloc_queue_mem(priv);
+ if (err)
+ return err;
+
+ priv->hw.max_mtu = otx2_get_max_mtu(priv);
+ priv->tx_max_pktlen = priv->hw.max_mtu + OTX2_ETH_HLEN;
+ priv->rbsize = ALIGN(priv->hw.rbuf_len, OTX2_ALIGN) + OTX2_HEAD_ROOM;
+
+ err = otx2_init_hw_resources(priv);
+ if (err)
+ goto err_free_rsrc;
+
+ /* Set maximum frame size allowed in HW */
+ err = otx2_hw_set_mtu(priv, priv->hw.max_mtu);
+ if (err) {
+ dev_err(priv->dev, "Failed to set HW MTU\n");
+ goto err_free_rsrc;
+ }
+ return 0;
+
+err_free_rsrc:
+ otx2_free_hw_resources(priv);
+ otx2_free_queue_mem(qset);
+ return err;
+}
+
+static int rvu_get_rep_cnt(struct otx2_nic *priv)
+{
+ struct get_rep_cnt_rsp *rsp;
+ struct mbox_msghdr *msghdr;
+ struct msg_req *req;
+ int err, rep;
+
+ mutex_lock(&priv->mbox.lock);
+ req = otx2_mbox_alloc_msg_get_rep_cnt(&priv->mbox);
+ if (!req) {
+ mutex_unlock(&priv->mbox.lock);
+ return -ENOMEM;
+ }
+ err = otx2_sync_mbox_msg(&priv->mbox);
+ if (err)
+ goto exit;
+
+ msghdr = otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr);
+ if (IS_ERR(msghdr)) {
+ err = PTR_ERR(msghdr);
+ goto exit;
+ }
+
+ rsp = (struct get_rep_cnt_rsp *)msghdr;
+ priv->hw.tx_queues = rsp->rep_cnt;
+ priv->hw.rx_queues = rsp->rep_cnt;
+ priv->rep_cnt = rsp->rep_cnt;
+ for (rep = 0; rep < priv->rep_cnt; rep++)
+ priv->rep_pf_map[rep] = rsp->rep_pf_map[rep];
+
+exit:
+ mutex_unlock(&priv->mbox.lock);
+ return err;
+}
+
+static int rvu_rep_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct device *dev = &pdev->dev;
+ struct otx2_nic *priv;
+ struct otx2_hw *hw;
+ int err;
+
+ err = pcim_enable_device(pdev);
+ if (err) {
+ dev_err(dev, "Failed to enable PCI device\n");
+ return err;
+ }
+
+ err = pci_request_regions(pdev, DRV_NAME);
+ if (err) {
+ dev_err(dev, "PCI request regions failed 0x%x\n", err);
+ return err;
+ }
+
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+ if (err) {
+ dev_err(dev, "DMA mask config failed, abort\n");
+ goto err_release_regions;
+ }
+
+ pci_set_master(pdev);
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv) {
+ err = -ENOMEM;
+ goto err_release_regions;
+ }
+
+ pci_set_drvdata(pdev, priv);
+ priv->pdev = pdev;
+ priv->dev = dev;
+ priv->flags |= OTX2_FLAG_INTF_DOWN;
+ priv->flags |= OTX2_FLAG_REP_MODE_ENABLED;
+
+ hw = &priv->hw;
+ hw->pdev = pdev;
+ hw->max_queues = OTX2_MAX_CQ_CNT;
+ hw->rbuf_len = OTX2_DEFAULT_RBUF_LEN;
+ hw->xqe_size = 128;
+
+ err = otx2_init_rsrc(pdev, priv);
+ if (err)
+ goto err_release_regions;
+
+ err = rvu_get_rep_cnt(priv);
+ if (err)
+ goto err_detach_rsrc;
+
+ err = rvu_rep_rsrc_init(priv);
+ if (err)
+ goto err_detach_rsrc;
+
+ return 0;
+
+err_detach_rsrc:
+ if (priv->hw.lmt_info)
+ free_percpu(priv->hw.lmt_info);
+ if (test_bit(CN10K_LMTST, &priv->hw.cap_flag))
+ qmem_free(priv->dev, priv->dync_lmt);
+ otx2_detach_resources(&priv->mbox);
+ otx2_disable_mbox_intr(priv);
+ otx2_pfaf_mbox_destroy(priv);
+ pci_free_irq_vectors(pdev);
+err_release_regions:
+ pci_set_drvdata(pdev, NULL);
+ pci_release_regions(pdev);
+ return err;
+}
+
+static void rvu_rep_remove(struct pci_dev *pdev)
+{
+ struct otx2_nic *priv = pci_get_drvdata(pdev);
+
+ rvu_rep_rsrc_free(priv);
+ otx2_detach_resources(&priv->mbox);
+ if (priv->hw.lmt_info)
+ free_percpu(priv->hw.lmt_info);
+ if (test_bit(CN10K_LMTST, &priv->hw.cap_flag))
+ qmem_free(priv->dev, priv->dync_lmt);
+ otx2_disable_mbox_intr(priv);
+ otx2_pfaf_mbox_destroy(priv);
+ pci_free_irq_vectors(priv->pdev);
+ pci_set_drvdata(pdev, NULL);
+ pci_release_regions(pdev);
+}
+
+static struct pci_driver rvu_rep_driver = {
+ .name = DRV_NAME,
+ .id_table = rvu_rep_id_table,
+ .probe = rvu_rep_probe,
+ .remove = rvu_rep_remove,
+ .shutdown = rvu_rep_remove,
+};
+
+static int __init rvu_rep_init_module(void)
+{
+ return pci_register_driver(&rvu_rep_driver);
+}
+
+static void __exit rvu_rep_cleanup_module(void)
+{
+ pci_unregister_driver(&rvu_rep_driver);
+}
+
+module_init(rvu_rep_init_module);
+module_exit(rvu_rep_cleanup_module);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
new file mode 100644
index 000000000000..565e75628df2
--- /dev/null
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell RVU REPRESENTOR driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+
+#ifndef REP_H
+#define REP_H
+
+#include <linux/pci.h>
+
+#include "otx2_reg.h"
+#include "otx2_txrx.h"
+#include "otx2_common.h"
+
+#define PCI_DEVID_RVU_REP 0xA0E0
+
+#define RVU_MAX_REP OTX2_MAX_CQ_CNT
+struct rep_dev {
+ struct otx2_nic *mdev;
+ struct net_device *netdev;
+ u16 rep_id;
+ u16 pcifunc;
+};
+
+static inline bool otx2_rep_dev(struct pci_dev *pdev)
+{
+ return pdev->device == PCI_DEVID_RVU_REP;
+}
+#endif /* REP_H */
--
2.25.1


2024-06-11 16:25:43

by Geetha sowjanya

[permalink] [raw]
Subject: [net-next PATCH v5 09/10] octeontx2-pf: Add representors for sdp MAC

Hardware supports different types of MACs, e.g. RPM, SDP and LBK.
The hardware has no internal switch; LBK is a loopback MAC that
loops egress packets back into the ingress pipeline. RPM and SDP
MACs support ingress/egress packet IO on interfaces with different
sets of capabilities, with interface modes ranging from
10/100/1000BaseX to 100Gbps KR.

At netdev driver registration time the PF fetches MAC-related
information from the admin function driver
'drivers/net/ethernet/marvell/octeontx2/af' and sets up ingress/egress
queues etc. so that packet IO on the channels of these different MACs
is possible. This patch adds representors for the SDP MAC.
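
For illustration only (example numbers, not taken from the patch), the
per-SQ channel selection introduced below boils down to:

	/* Sketch: pin each send queue to one SDP channel.
	 * With tx_chan_cnt = 4, qidx 0..3 use tx_chan_base + 0..3 and
	 * qidx 4 wraps back to tx_chan_base + 0.
	 */
	chan_offset = qidx % pfvf->hw.tx_chan_cnt;
	aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset;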

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../ethernet/marvell/octeontx2/nic/cn10k.c | 4 +-
.../ethernet/marvell/octeontx2/nic/cn10k.h | 2 +-
.../marvell/octeontx2/nic/otx2_common.c | 50 +++++++++++++++----
.../marvell/octeontx2/nic/otx2_common.h | 27 +++++++---
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 13 +++--
.../ethernet/marvell/octeontx2/nic/otx2_reg.h | 1 +
.../ethernet/marvell/octeontx2/nic/otx2_vf.c | 12 ++++-
7 files changed, 84 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
index c1c99d7054f8..777e123e69c7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
@@ -72,7 +72,7 @@ int cn10k_lmtst_init(struct otx2_nic *pfvf)
}
EXPORT_SYMBOL(cn10k_lmtst_init);

-int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
+int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura)
{
struct nix_cn10k_aq_enq_req *aq;
struct otx2_nic *pfvf = dev;
@@ -88,7 +88,7 @@ int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
aq->sq.ena = 1;
aq->sq.smq = otx2_get_smq_idx(pfvf, qidx);
aq->sq.smq_rr_weight = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen);
- aq->sq.default_chan = pfvf->hw.tx_chan_base;
+ aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset;
aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */
aq->sq.sqb_aura = sqb_aura;
aq->sq.sq_int_ena = NIX_SQINT_BITS;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
index c1861f7de254..e3f0bce9908f 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
@@ -26,7 +26,7 @@ static inline int mtu_to_dwrr_weight(struct otx2_nic *pfvf, int mtu)

int cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq);
void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx);
-int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
+int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura);
int cn10k_lmtst_init(struct otx2_nic *pfvf);
int cn10k_free_all_ipolicers(struct otx2_nic *pfvf);
int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 4b53af087cb4..af37015ef8e2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -253,7 +253,7 @@ int otx2_config_pause_frm(struct otx2_nic *pfvf)
struct cgx_pause_frm_cfg *req;
int err;

- if (is_otx2_lbkvf(pfvf->pdev))
+ if (is_otx2_lbkvf(pfvf->pdev) || is_otx2_sdp_rep(pfvf->pdev))
return 0;

mutex_lock(&pfvf->mbox.lock);
@@ -647,12 +647,22 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for
req->reg[2] = NIX_AF_MDQX_SCHEDULE(schq);
req->regval[2] = dwrr_val;
} else if (lvl == NIX_TXSCH_LVL_TL4) {
+ int sdp_chan = hw->tx_chan_base + prio;
+
+ if (is_otx2_sdp_rep(pfvf->pdev))
+ prio = 0;
parent = schq_list[NIX_TXSCH_LVL_TL3][prio];
req->reg[0] = NIX_AF_TL4X_PARENT(schq);
req->regval[0] = parent << 16;
req->num_regs++;
req->reg[1] = NIX_AF_TL4X_SCHEDULE(schq);
req->regval[1] = dwrr_val;
+ if (is_otx2_sdp_rep(pfvf->pdev)) {
+ req->num_regs++;
+ req->reg[2] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+ req->regval[2] = BIT_ULL(12) | BIT_ULL(13) |
+ (sdp_chan & 0xff);
+ }
} else if (lvl == NIX_TXSCH_LVL_TL3) {
parent = schq_list[NIX_TXSCH_LVL_TL2][prio];
req->reg[0] = NIX_AF_TL3X_PARENT(schq);
@@ -660,7 +670,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for
req->num_regs++;
req->reg[1] = NIX_AF_TL3X_SCHEDULE(schq);
req->regval[1] = dwrr_val;
- if (lvl == hw->txschq_link_cfg_lvl) {
+ if (lvl == hw->txschq_link_cfg_lvl &&
+ !is_otx2_sdp_rep(pfvf->pdev)) {
req->num_regs++;
req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
/* Enable this queue and backpressure
@@ -677,7 +688,8 @@ int otx2_txschq_config(struct otx2_nic *pfvf, int lvl, int prio, bool txschq_for
req->reg[1] = NIX_AF_TL2X_SCHEDULE(schq);
req->regval[1] = TXSCH_TL1_DFLT_RR_PRIO << 24 | dwrr_val;

- if (lvl == hw->txschq_link_cfg_lvl) {
+ if (lvl == hw->txschq_link_cfg_lvl &&
+ !is_otx2_sdp_rep(pfvf->pdev)) {
req->num_regs++;
req->reg[2] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, hw->tx_link);
/* Enable this queue and backpressure
@@ -736,6 +748,7 @@ EXPORT_SYMBOL(otx2_smq_flush);

int otx2_txsch_alloc(struct otx2_nic *pfvf)
{
+ int chan_cnt = pfvf->hw.tx_chan_cnt;
struct nix_txsch_alloc_req *req;
struct nix_txsch_alloc_rsp *rsp;
int lvl, schq, rc;
@@ -748,6 +761,12 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
/* Request one schq per level */
for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
req->schq[lvl] = 1;
+
+ if (is_otx2_sdp_rep(pfvf->pdev) && chan_cnt > 1) {
+ req->schq[NIX_TXSCH_LVL_SMQ] = chan_cnt;
+ req->schq[NIX_TXSCH_LVL_TL4] = chan_cnt;
+ }
+
rc = otx2_sync_mbox_msg(&pfvf->mbox);
if (rc)
return rc;
@@ -758,10 +777,12 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
return PTR_ERR(rsp);

/* Setup transmit scheduler list */
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ pfvf->hw.txschq_cnt[lvl] = rsp->schq[lvl];
for (schq = 0; schq < rsp->schq[lvl]; schq++)
pfvf->hw.txschq_list[lvl][schq] =
rsp->schq_list[lvl][schq];
+ }

pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
pfvf->hw.txschq_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio;
@@ -799,12 +820,15 @@ EXPORT_SYMBOL(otx2_txschq_free_one);

void otx2_txschq_stop(struct otx2_nic *pfvf)
{
- int lvl, schq;
+ int lvl, schq, idx;

/* free non QOS TLx nodes */
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++)
- otx2_txschq_free_one(pfvf, lvl,
- pfvf->hw.txschq_list[lvl][0]);
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ for (idx = 0; idx < pfvf->hw.txschq_cnt[lvl]; idx++) {
+ otx2_txschq_free_one(pfvf, lvl,
+ pfvf->hw.txschq_list[lvl][idx]);
+ }
+ }

/* Clear the txschq list */
for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
@@ -884,7 +908,7 @@ static int otx2_rq_init(struct otx2_nic *pfvf, u16 qidx, u16 lpb_aura)
return otx2_sync_mbox_msg(&pfvf->mbox);
}

-int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
+int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura)
{
struct otx2_nic *pfvf = dev;
struct otx2_snd_queue *sq;
@@ -903,7 +927,7 @@ int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
aq->sq.ena = 1;
aq->sq.smq = otx2_get_smq_idx(pfvf, qidx);
aq->sq.smq_rr_quantum = mtu_to_dwrr_weight(pfvf, pfvf->tx_max_pktlen);
- aq->sq.default_chan = pfvf->hw.tx_chan_base;
+ aq->sq.default_chan = pfvf->hw.tx_chan_base + chan_offset;
aq->sq.sqe_stype = NIX_STYPE_STF; /* Cache SQB */
aq->sq.sqb_aura = sqb_aura;
aq->sq.sq_int_ena = NIX_SQINT_BITS;
@@ -926,6 +950,7 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
struct otx2_qset *qset = &pfvf->qset;
struct otx2_snd_queue *sq;
struct otx2_pool *pool;
+ u8 chan_offset;
int err;

pool = &pfvf->qset.pool[sqb_aura];
@@ -972,7 +997,8 @@ int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura)
sq->stats.bytes = 0;
sq->stats.pkts = 0;

- err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, sqb_aura);
+ chan_offset = qidx % pfvf->hw.tx_chan_cnt;
+ err = pfvf->hw_ops->sq_aq_init(pfvf, qidx, chan_offset, sqb_aura);
if (err) {
kfree(sq->sg);
sq->sg = NULL;
@@ -1739,6 +1765,8 @@ void mbox_handler_nix_lf_alloc(struct otx2_nic *pfvf,
pfvf->hw.sqb_size = rsp->sqb_size;
pfvf->hw.rx_chan_base = rsp->rx_chan_base;
pfvf->hw.tx_chan_base = rsp->tx_chan_base;
+ pfvf->hw.rx_chan_cnt = rsp->rx_chan_cnt;
+ pfvf->hw.tx_chan_cnt = rsp->tx_chan_cnt;
pfvf->hw.lso_tsov4_idx = rsp->lso_tsov4_idx;
pfvf->hw.lso_tsov6_idx = rsp->lso_tsov6_idx;
pfvf->hw.cgx_links = rsp->cgx_links;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index d3d56ca6d3a2..d3c78538d36c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -42,6 +42,8 @@
#define PCI_SUBSYS_DEVID_96XX_RVU_PFVF 0xB200
#define PCI_SUBSYS_DEVID_CN10K_B_RVU_PFVF 0xBD00

+#define PCI_DEVID_OCTEONTX2_SDP_REP 0xA0F7
+
/* PCI BAR nos */
#define PCI_CFG_REG_BAR_NUM 2
#define PCI_MBOX_BAR_NUM 4
@@ -198,6 +200,7 @@ struct otx2_hw {

/* NIX */
u8 txschq_link_cfg_lvl;
+ u8 txschq_cnt[NIX_TXSCH_LVL_CNT];
u8 txschq_aggr_lvl_rr_prio;
u16 txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
u16 matchall_ipolicer;
@@ -208,6 +211,8 @@ struct otx2_hw {
/* HW settings, coalescing etc */
u16 rx_chan_base;
u16 tx_chan_base;
+ u8 rx_chan_cnt;
+ u8 tx_chan_cnt;
u16 cq_qcount_wait;
u16 cq_ecount_wait;
u16 rq_skid;
@@ -344,7 +349,8 @@ struct otx2_flow_config {
};

struct dev_hw_ops {
- int (*sq_aq_init)(void *dev, u16 qidx, u16 sqb_aura);
+ int (*sq_aq_init)(void *dev, u16 qidx, u8 chan_offset,
+ u16 sqb_aura);
void (*sqe_flush)(void *dev, struct otx2_snd_queue *sq,
int size, int qidx);
int (*refill_pool_ptrs)(void *dev, struct otx2_cq_queue *cq);
@@ -538,6 +544,11 @@ static inline bool is_96xx_B0(struct pci_dev *pdev)
(pdev->subsystem_device == PCI_SUBSYS_DEVID_96XX_RVU_PFVF);
}

+static inline bool is_otx2_sdp_rep(struct pci_dev *pdev)
+{
+ return pdev->device == PCI_DEVID_OCTEONTX2_SDP_REP;
+}
+
/* REVID for PCIe devices.
* Bits 0..1: minor pass, bit 3..2: major pass
* bits 7..4: midr id
@@ -900,15 +911,19 @@ static inline void otx2_dma_unmap_page(struct otx2_nic *pfvf,
static inline u16 otx2_get_smq_idx(struct otx2_nic *pfvf, u16 qidx)
{
u16 smq;
+ int idx;
+
#ifdef CONFIG_DCB
if (qidx < NIX_PF_PFC_PRIO_MAX && pfvf->pfc_alloc_status[qidx])
return pfvf->pfc_schq_list[NIX_TXSCH_LVL_SMQ][qidx];
#endif
/* check if qidx falls under QOS queues */
- if (qidx >= pfvf->hw.non_qos_queues)
+ if (qidx >= pfvf->hw.non_qos_queues) {
smq = pfvf->qos.qid_to_sqmap[qidx - pfvf->hw.non_qos_queues];
- else
- smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][0];
+ } else {
+ idx = qidx % pfvf->hw.txschq_cnt[NIX_TXSCH_LVL_SMQ];
+ smq = pfvf->hw.txschq_list[NIX_TXSCH_LVL_SMQ][idx];
+ }

return smq;
}
@@ -975,8 +990,8 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf, bool enable);
void otx2_cleanup_rx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq, int qidx);
void otx2_cleanup_tx_cqes(struct otx2_nic *pfvf, struct otx2_cq_queue *cq);
int otx2_sq_init(struct otx2_nic *pfvf, u16 qidx, u16 sqb_aura);
-int otx2_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
-int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
+int otx2_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura);
+int cn10k_sq_aq_init(void *dev, u16 qidx, u8 chan_offset, u16 sqb_aura);
int otx2_alloc_buffer(struct otx2_nic *pfvf, struct otx2_cq_queue *cq,
dma_addr_t *dma);
int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 91a458c5028a..fcdeac36306d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -1593,10 +1593,15 @@ int otx2_init_hw_resources(struct otx2_nic *pf)
}

for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
- err = otx2_txschq_config(pf, lvl, 0, false);
- if (err) {
- mutex_unlock(&mbox->lock);
- goto err_free_nix_queues;
+ int idx;
+
+ for (idx = 0; idx < pf->hw.txschq_cnt[lvl]; idx++) {
+ err = otx2_txschq_config(pf, lvl, idx, false);
+ if (err) {
+ dev_err(pf->dev, "Failed to config TXSCH\n");
+ mutex_unlock(&mbox->lock);
+ goto err_free_nix_queues;
+ }
}
}

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
index 45a32e4b49d1..defcb9ea21e1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_reg.h
@@ -140,6 +140,7 @@

/* NIX AF transmit scheduler registers */
#define NIX_AF_SMQX_CFG(a) (0x700 | (a) << 16)
+#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xB10 | (a) << 16)
#define NIX_AF_TL1X_SCHEDULE(a) (0xC00 | (a) << 16)
#define NIX_AF_TL1X_CIR(a) (0xC20 | (a) << 16)
#define NIX_AF_TL1X_TOPOLOGY(a) (0xC80 | (a) << 16)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index 0486fca8b573..64c88119b79a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -21,6 +21,7 @@
static const struct pci_device_id otx2_vf_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AFVF) },
{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF) },
+ { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_SDP_REP) },
{ }
};

@@ -371,7 +372,7 @@ static int otx2vf_open(struct net_device *netdev)

/* LBKs do not receive link events so tell everyone we are up here */
vf = netdev_priv(netdev);
- if (is_otx2_lbkvf(vf->pdev)) {
+ if (is_otx2_lbkvf(vf->pdev) || is_otx2_sdp_rep(vf->pdev)) {
pr_info("%s NIC Link is UP\n", netdev->name);
netif_carrier_on(netdev);
netif_tx_start_all_queues(netdev);
@@ -683,6 +684,15 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
snprintf(netdev->name, sizeof(netdev->name), "lbk%d", n);
}

+ if (is_otx2_sdp_rep(vf->pdev)) {
+ int n;
+
+ n = (vf->pcifunc >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK;
+ n -= 1;
+ snprintf(netdev->name, sizeof(netdev->name), "sdp%d-%d",
+ pdev->bus->number, n);
+ }
+
err = register_netdev(netdev);
if (err) {
dev_err(dev, "Failed to register netdevice\n");
--
2.25.1


2024-06-11 16:26:00

by Geetha sowjanya

[permalink] [raw]
Subject: [net-next PATCH v5 10/10] octeontx2-pf: Add devlink port support

Register a devlink port for each RVU representor.

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../net/ethernet/marvell/octeontx2/nic/rep.c | 74 +++++++++++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.h | 2 +
2 files changed, 76 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index d984b7d9dc64..2a0ec4c25c22 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -28,6 +28,73 @@ MODULE_DESCRIPTION(DRV_STRING);
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);

+static int rvu_rep_dl_port_fn_hw_addr_get(struct devlink_port *port,
+ u8 *hw_addr, int *hw_addr_len,
+ struct netlink_ext_ack *extack)
+{
+ struct otx2_devlink *otx2_dl = devlink_priv(port->devlink);
+ int rep_id = port->index;
+ struct otx2_nic *priv;
+ struct rep_dev *rep;
+
+ priv = otx2_dl->pfvf;
+ rep = priv->reps[rep_id];
+ ether_addr_copy(hw_addr, rep->mac);
+ *hw_addr_len = ETH_ALEN;
+ return 0;
+}
+
+static int rvu_rep_dl_port_fn_hw_addr_set(struct devlink_port *port,
+ const u8 *hw_addr, int hw_addr_len,
+ struct netlink_ext_ack *extack)
+{
+ struct otx2_devlink *otx2_dl = devlink_priv(port->devlink);
+ int rep_id = port->index;
+ struct otx2_nic *priv;
+ struct rep_dev *rep;
+
+ priv = otx2_dl->pfvf;
+ rep = priv->reps[rep_id];
+ eth_hw_addr_set(rep->netdev, hw_addr);
+ ether_addr_copy(rep->mac, hw_addr);
+ return 0;
+}
+
+static const struct devlink_port_ops rvu_rep_dl_port_ops = {
+ .port_fn_hw_addr_get = rvu_rep_dl_port_fn_hw_addr_get,
+ .port_fn_hw_addr_set = rvu_rep_dl_port_fn_hw_addr_set,
+};
+
+static void rvu_rep_devlink_port_unregister(struct rep_dev *rep)
+{
+ devlink_port_unregister(&rep->dl_port);
+}
+
+static int rvu_rep_devlink_port_register(struct rep_dev *rep)
+{
+ struct devlink_port_attrs attrs = {};
+ struct otx2_nic *priv = rep->mdev;
+ struct devlink *dl = priv->dl->dl;
+ int err;
+
+ attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_PF;
+ attrs.pci_vf.pf = rvu_get_pf(rep->pcifunc);
+ attrs.pci_vf.vf = rep->pcifunc & RVU_PFVF_FUNC_MASK;
+ if (attrs.pci_vf.vf)
+ attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
+
+ devlink_port_attrs_set(&rep->dl_port, &attrs);
+
+ err = devl_port_register_with_ops(dl, &rep->dl_port, rep->rep_id,
+ &rvu_rep_dl_port_ops);
+ if (err) {
+ dev_err(rep->mdev->dev, "devlink_port_register failed: %d\n",
+ err);
+ return err;
+ }
+ return 0;
+}
+
static int rvu_rep_get_repid(struct otx2_nic *priv, u16 pcifunc)
{
int rep_id;
@@ -338,6 +405,7 @@ void rvu_rep_destroy(struct otx2_nic *priv)
for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++) {
rep = priv->reps[rep_id];
unregister_netdev(rep->netdev);
+ rvu_rep_devlink_port_unregister(rep);
free_netdev(rep->netdev);
}
kfree(priv->reps);
@@ -380,6 +448,11 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)
snprintf(ndev->name, sizeof(ndev->name), "r%dp%d", rep_id,
rvu_get_pf(pcifunc));

+ err = rvu_rep_devlink_port_register(rep);
+ if (err)
+ goto exit;
+
+ SET_NETDEV_DEVLINK_PORT(ndev, &rep->dl_port);
eth_hw_addr_random(ndev);
err = register_netdev(ndev);
if (err) {
@@ -400,6 +473,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)
while (--rep_id >= 0) {
rep = priv->reps[rep_id];
unregister_netdev(rep->netdev);
+ rvu_rep_devlink_port_unregister(rep);
free_netdev(rep->netdev);
}
kfree(priv->reps);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
index 0cefa482f83c..d81af376bf50 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
@@ -34,10 +34,12 @@ struct rep_dev {
struct net_device *netdev;
struct rep_stats stats;
struct delayed_work stats_wrk;
+ struct devlink_port dl_port;
u16 rep_id;
u16 pcifunc;
#define RVU_REP_VF_INITIALIZED BIT_ULL(0)
u8 flags;
+ u8 mac[ETH_ALEN];
};

static inline bool otx2_rep_dev(struct pci_dev *pdev)
--
2.25.1


2024-06-11 16:27:20

by Geetha sowjanya

[permalink] [raw]
Subject: [net-next PATCH v5 07/10] octeontx2-pf: Add support to sync link state between representor and VFs

This patch implements the following requirement from the
representor documentation:

"
The representee's link state is controlled through the
representor. Setting the representor administratively UP
or DOWN should cause carrier ON or OFF at the representee.
"

This patch enables:
- Reflecting the representor's link state based on the VF state, and
the VF's link state based on the representor.
- On VF interface up/down, a notification is sent via mbox to the
representor to update its link state.
e.g. 'ip link set eth0 up/down' turns carrier on/off on the
corresponding representor (r0p1) interface.
- On representor interface up/down, the link state of the VF is
updated accordingly.
e.g. 'ip link set r0p1 up/down' turns carrier on/off on the
corresponding representee (eth0) interface.

Signed-off-by: Harman Kalra <[email protected]>
Signed-off-by: Geetha sowjanya <[email protected]>
---
.../net/ethernet/marvell/octeontx2/af/mbox.h | 25 ++++
.../net/ethernet/marvell/octeontx2/af/rvu.h | 11 ++
.../ethernet/marvell/octeontx2/af/rvu_nix.c | 6 +
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 127 ++++++++++++++++++
.../marvell/octeontx2/nic/otx2_common.h | 2 +
.../ethernet/marvell/octeontx2/nic/otx2_pf.c | 30 +++++
.../net/ethernet/marvell/octeontx2/nic/rep.c | 76 +++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.h | 3 +
8 files changed, 280 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index d293a3c35b6b..2381f84efc21 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -146,6 +146,7 @@ M(SET_VF_PERM, 0x00b, set_vf_perm, set_vf_perm, msg_rsp) \
M(PTP_GET_CAP, 0x00c, ptp_get_cap, msg_req, ptp_get_cap_rsp) \
M(GET_REP_CNT, 0x00d, get_rep_cnt, msg_req, get_rep_cnt_rsp) \
M(ESW_CFG, 0x00e, esw_cfg, esw_cfg_req, msg_rsp) \
+M(REP_EVENT_NOTIFY, 0x00f, rep_event_notify, rep_event, msg_rsp) \
/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
@@ -383,12 +384,16 @@ M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp)
#define MBOX_UP_MCS_MESSAGES \
M(MCS_INTR_NOTIFY, 0xE00, mcs_intr_notify, mcs_intr_info, msg_rsp)

+#define MBOX_UP_REP_MESSAGES \
+M(REP_EVENT_UP_NOTIFY, 0xEF0, rep_event_up_notify, rep_event, msg_rsp) \
+
enum {
#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
MBOX_MESSAGES
MBOX_UP_CGX_MESSAGES
MBOX_UP_CPT_MESSAGES
MBOX_UP_MCS_MESSAGES
+MBOX_UP_REP_MESSAGES
#undef M
};

@@ -1572,6 +1577,26 @@ struct esw_cfg_req {
u64 rsvd;
};

+struct rep_evt_data {
+ u8 port_state;
+ u8 vf_state;
+ u16 rx_mode;
+ u16 rx_flags;
+ u16 mtu;
+ u64 rsvd[5];
+};
+
+struct rep_event {
+ struct mbox_msghdr hdr;
+ u16 pcifunc;
+#define RVU_EVENT_PORT_STATE BIT_ULL(0)
+#define RVU_EVENT_PFVF_STATE BIT_ULL(1)
+#define RVU_EVENT_MTU_CHANGE BIT_ULL(2)
+#define RVU_EVENT_RX_MODE_CHANGE BIT_ULL(3)
+ u16 event;
+ struct rep_evt_data evt_data;
+};
+
struct flow_msg {
unsigned char dmac[6];
unsigned char smac[6];
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index f7f8b96a6208..c040b667d079 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -513,6 +513,11 @@ struct rvu_switch {
u16 start_entry;
};

+struct rep_evtq_ent {
+ struct list_head node;
+ struct rep_event event;
+};
+
struct rvu {
void __iomem *afreg_base;
void __iomem *pfreg_base;
@@ -598,6 +603,11 @@ struct rvu {
int rep_cnt;
u16 *rep2pfvf_map;
u8 rep_mode;
+ struct work_struct rep_evt_work;
+ struct workqueue_struct *rep_evt_wq;
+ struct list_head rep_evtq_head;
+ /* Representor event lock */
+ spinlock_t rep_evtq_lock;
};

static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -1045,4 +1055,5 @@ void rvu_mcs_exit(struct rvu *rvu);
int rvu_rep_pf_init(struct rvu *rvu);
int rvu_rep_install_mcam_rules(struct rvu *rvu);
void rvu_rep_update_rules(struct rvu *rvu, u16 pcifunc, bool ena);
+int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable);
#endif /* RVU_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 9cf47cdee192..dda69162badc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -380,6 +380,9 @@ static int nix_interface_init(struct rvu *rvu, u16 pcifunc, int type, int nixlf,
cgx_set_pkind(rvu_cgx_pdata(cgx_id, rvu), lmac_id, pkind);
rvu_npc_set_pkind(rvu, pkind, pfvf);

+ if (rvu->rep_mode)
+ rvu_rep_notify_pfvf_state(rvu, pcifunc, true);
+
break;
case NIX_INTF_TYPE_LBK:
vf = (pcifunc & RVU_PFVF_FUNC_MASK) - 1;
@@ -516,6 +519,9 @@ static void nix_interface_deinit(struct rvu *rvu, u16 pcifunc, u8 nixlf)

/* Disable DMAC filters used */
rvu_cgx_disable_dmac_entries(rvu, pcifunc);
+
+ if (rvu->rep_mode)
+ rvu_rep_notify_pfvf_state(rvu, pcifunc, false);
}

#define NIX_BPIDS_PER_LMAC 8
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
index a7a15d7878e5..5d06ef3aa877 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
@@ -13,6 +13,124 @@
#include "rvu.h"
#include "rvu_reg.h"

+#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
+static struct _req_type __maybe_unused \
+*otx2_mbox_alloc_msg_ ## _fn_name(struct rvu *rvu, int devid) \
+{ \
+ struct _req_type *req; \
+ \
+ req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
+ &rvu->afpf_wq_info.mbox_up, devid, sizeof(struct _req_type), \
+ sizeof(struct _rsp_type)); \
+ if (!req) \
+ return NULL; \
+ req->hdr.sig = OTX2_MBOX_REQ_SIG; \
+ req->hdr.id = _id; \
+ return req; \
+}
+
+MBOX_UP_REP_MESSAGES
+#undef M
+
+static int rvu_rep_up_notify(struct rvu *rvu, struct rep_event *event)
+{
+ struct rep_event *msg;
+ int pf;
+
+ pf = rvu_get_pf(event->pcifunc);
+
+ mutex_lock(&rvu->mbox_lock);
+ msg = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf);
+ if (!msg) {
+ mutex_unlock(&rvu->mbox_lock);
+ return -ENOMEM;
+ }
+
+ msg->hdr.pcifunc = event->pcifunc;
+ msg->event = event->event;
+
+ memcpy(&msg->evt_data, &event->evt_data, sizeof(struct rep_evt_data));
+
+ otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf);
+
+ otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+
+ mutex_unlock(&rvu->mbox_lock);
+ return 0;
+}
+
+static void rvu_rep_wq_handler(struct work_struct *work)
+{
+ struct rvu *rvu = container_of(work, struct rvu, rep_evt_work);
+ struct rep_evtq_ent *qentry;
+ struct rep_event *event;
+ unsigned long flags;
+
+ do {
+ spin_lock_irqsave(&rvu->rep_evtq_lock, flags);
+ qentry = list_first_entry_or_null(&rvu->rep_evtq_head,
+ struct rep_evtq_ent,
+ node);
+ if (qentry)
+ list_del(&qentry->node);
+
+ spin_unlock_irqrestore(&rvu->rep_evtq_lock, flags);
+ if (!qentry)
+ break; /* nothing more to process */
+
+ event = &qentry->event;
+
+ rvu_rep_up_notify(rvu, event);
+ kfree(qentry);
+ } while (1);
+}
+
+int rvu_mbox_handler_rep_event_notify(struct rvu *rvu, struct rep_event *req,
+ struct msg_rsp *rsp)
+{
+ struct rep_evtq_ent *qentry;
+
+ qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC);
+ if (!qentry)
+ return -ENOMEM;
+
+ qentry->event = *req;
+ spin_lock(&rvu->rep_evtq_lock);
+ list_add_tail(&qentry->node, &rvu->rep_evtq_head);
+ spin_unlock(&rvu->rep_evtq_lock);
+ queue_work(rvu->rep_evt_wq, &rvu->rep_evt_work);
+ return 0;
+}
+
+int rvu_rep_notify_pfvf_state(struct rvu *rvu, u16 pcifunc, bool enable)
+{
+ struct rep_event *req;
+ int pf;
+
+ if (!is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc)))
+ return 0;
+
+ pf = rvu_get_pf(rvu->rep_pcifunc);
+
+ mutex_lock(&rvu->mbox_lock);
+ req = otx2_mbox_alloc_msg_rep_event_up_notify(rvu, pf);
+ if (!req) {
+ mutex_unlock(&rvu->mbox_lock);
+ return -ENOMEM;
+ }
+
+ req->hdr.pcifunc = rvu->rep_pcifunc;
+ req->event |= RVU_EVENT_PFVF_STATE;
+ req->pcifunc = pcifunc;
+ req->evt_data.vf_state = enable;
+
+ otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf);
+ otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+
+ mutex_unlock(&rvu->mbox_lock);
+ return 0;
+}
+
#define RVU_LF_RX_STATS(reg) \
rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_STATX(nixlf, reg))

@@ -247,6 +365,15 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu)
}
}
}
+
+ /* Initialize the wq for handling REP events */
+ INIT_LIST_HEAD(&rvu->rep_evtq_head);
+ INIT_WORK(&rvu->rep_evt_work, rvu_rep_wq_handler);
+ rvu->rep_evt_wq = alloc_workqueue("rep_evt_wq", 0, 0);
+ if (!rvu->rep_evt_wq) {
+ dev_err(rvu->dev, "REP workqueue allocation failed\n");
+ return -ENOMEM;
+ }
return 0;
}

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index cd4b84d89dd1..d3d56ca6d3a2 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -443,6 +443,7 @@ struct otx2_nic {
#define OTX2_FLAG_ADPTV_INT_COAL_ENABLED BIT_ULL(16)
#define OTX2_FLAG_TC_MARK_ENABLED BIT_ULL(17)
#define OTX2_FLAG_REP_MODE_ENABLED BIT_ULL(18)
+#define OTX2_FLAG_PORT_UP BIT_ULL(19)
u64 flags;
u64 *cq_op_addr;

@@ -1127,4 +1128,5 @@ u16 otx2_select_queue(struct net_device *netdev, struct sk_buff *skb,
int otx2_get_txq_by_classid(struct otx2_nic *pfvf, u16 classid);
void otx2_qos_config_txschq(struct otx2_nic *pfvf);
void otx2_clean_qos_queues(struct otx2_nic *pfvf);
+int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info);
#endif /* OTX2_COMMON_H */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 5e43d1ffaa33..0f17e8af4f76 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -519,6 +519,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work)

switch (msg->id) {
case MBOX_MSG_CGX_LINK_EVENT:
+ case MBOX_MSG_REP_EVENT_UP_NOTIFY:
break;
default:
if (msg->rc)
@@ -832,6 +833,9 @@ static void otx2_handle_link_event(struct otx2_nic *pf)
struct cgx_link_user_info *linfo = &pf->linfo;
struct net_device *netdev = pf->netdev;

+ if (pf->flags & OTX2_FLAG_PORT_UP)
+ return;
+
pr_info("%s NIC Link is %s %d Mbps %s duplex\n", netdev->name,
linfo->link_up ? "UP" : "DOWN", linfo->speed,
linfo->full_duplex ? "Full" : "Half");
@@ -844,6 +848,30 @@ static void otx2_handle_link_event(struct otx2_nic *pf)
}
}

+static int otx2_mbox_up_handler_rep_event_up_notify(struct otx2_nic *pf,
+ struct rep_event *info,
+ struct msg_rsp *rsp)
+{
+ struct net_device *netdev = pf->netdev;
+
+ if (info->event == RVU_EVENT_PORT_STATE) {
+ if (info->evt_data.port_state) {
+ pf->flags |= OTX2_FLAG_PORT_UP;
+ netif_carrier_on(netdev);
+ netif_tx_start_all_queues(netdev);
+ } else {
+ pf->flags &= ~OTX2_FLAG_PORT_UP;
+ netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
+ }
+ return 0;
+ }
+#ifdef CONFIG_RVU_ESWITCH
+ rvu_event_up_notify(pf, info);
+#endif
+ return 0;
+}
+
int otx2_mbox_up_handler_mcs_intr_notify(struct otx2_nic *pf,
struct mcs_intr_info *event,
struct msg_rsp *rsp)
@@ -913,6 +941,7 @@ static int otx2_process_mbox_msg_up(struct otx2_nic *pf,
}
MBOX_UP_CGX_MESSAGES
MBOX_UP_MCS_MESSAGES
+MBOX_UP_REP_MESSAGES
#undef M
break;
default:
@@ -1975,6 +2004,7 @@ int otx2_open(struct net_device *netdev)
}

pf->flags &= ~OTX2_FLAG_INTF_DOWN;
+ pf->flags &= ~OTX2_FLAG_PORT_UP;
/* 'intf_down' may be checked on any cpu */
smp_wmb();

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index 592b0f4406f0..6a527b824f33 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -28,6 +28,57 @@ MODULE_DESCRIPTION(DRV_STRING);
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);

+static int rvu_rep_get_repid(struct otx2_nic *priv, u16 pcifunc)
+{
+ int rep_id;
+
+ for (rep_id = 0; rep_id < priv->rep_cnt; rep_id++)
+ if (priv->rep_pf_map[rep_id] == pcifunc)
+ return rep_id;
+ return -EINVAL;
+}
+
+static int rvu_rep_notify_pfvf(struct otx2_nic *priv, u16 event,
+ struct rep_event *data)
+{
+ struct rep_event *req;
+
+ mutex_lock(&priv->mbox.lock);
+ req = otx2_mbox_alloc_msg_rep_event_notify(&priv->mbox);
+ if (!req) {
+ mutex_unlock(&priv->mbox.lock);
+ return -ENOMEM;
+ }
+ req->event = event;
+ req->pcifunc = data->pcifunc;
+
+ memcpy(&req->evt_data, &data->evt_data, sizeof(struct rep_evt_data));
+ otx2_sync_mbox_msg(&priv->mbox);
+ mutex_unlock(&priv->mbox.lock);
+ return 0;
+}
+
+static void rvu_rep_state_evt_handler(struct otx2_nic *priv,
+ struct rep_event *info)
+{
+ struct rep_dev *rep;
+ int rep_id;
+
+ rep_id = rvu_rep_get_repid(priv, info->pcifunc);
+ rep = priv->reps[rep_id];
+ if (info->evt_data.vf_state)
+ rep->flags |= RVU_REP_VF_INITIALIZED;
+ else
+ rep->flags &= ~RVU_REP_VF_INITIALIZED;
+}
+
+int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info)
+{
+ if (info->event & RVU_EVENT_PFVF_STATE)
+ rvu_rep_state_evt_handler(pf, info);
+ return 0;
+}
+
static void rvu_rep_get_stats(struct work_struct *work)
{
struct delayed_work *del_work = to_delayed_work(work);
@@ -78,6 +129,9 @@ static void rvu_rep_get_stats64(struct net_device *dev,
{
struct rep_dev *rep = netdev_priv(dev);

+ if (!(rep->flags & RVU_REP_VF_INITIALIZED))
+ return;
+
stats->rx_packets = rep->stats.rx_frames;
stats->rx_bytes = rep->stats.rx_bytes;
stats->rx_dropped = rep->stats.rx_drops;
@@ -132,16 +186,38 @@ static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev)

static int rvu_rep_open(struct net_device *dev)
{
+ struct rep_dev *rep = netdev_priv(dev);
+ struct otx2_nic *priv = rep->mdev;
+ struct rep_event evt = {0};
+
+ if (!(rep->flags & RVU_REP_VF_INITIALIZED))
+ return 0;
+
netif_carrier_on(dev);
netif_tx_start_all_queues(dev);
+
+ evt.event = RVU_EVENT_PORT_STATE;
+ evt.evt_data.port_state = 1;
+ evt.pcifunc = rep->pcifunc;
+ rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt);
return 0;
}

static int rvu_rep_stop(struct net_device *dev)
{
+ struct rep_dev *rep = netdev_priv(dev);
+ struct otx2_nic *priv = rep->mdev;
+ struct rep_event evt = {0};
+
+ if (!(rep->flags & RVU_REP_VF_INITIALIZED))
+ return 0;
+
netif_carrier_off(dev);
netif_tx_disable(dev);

+ evt.event = RVU_EVENT_PORT_STATE;
+ evt.pcifunc = rep->pcifunc;
+ rvu_rep_notify_pfvf(priv, RVU_EVENT_PORT_STATE, &evt);
return 0;
}

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
index 5d39bf636655..0cefa482f83c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
@@ -36,6 +36,8 @@ struct rep_dev {
struct delayed_work stats_wrk;
u16 rep_id;
u16 pcifunc;
+#define RVU_REP_VF_INITIALIZED BIT_ULL(0)
+ u8 flags;
};

static inline bool otx2_rep_dev(struct pci_dev *pdev)
@@ -45,4 +47,5 @@ static inline bool otx2_rep_dev(struct pci_dev *pdev)

int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack);
void rvu_rep_destroy(struct otx2_nic *priv);
+int rvu_event_up_notify(struct otx2_nic *pf, struct rep_event *info);
#endif /* REP_H */
--
2.25.1


2024-06-11 16:33:28

by Geetha sowjanya

[permalink] [raw]
Subject: [net-next PATCH v5 04/10] octeontx2-pf: Add basic net_device_ops

Implement a basic set of net_device_ops.

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../net/ethernet/marvell/octeontx2/nic/rep.c | 46 +++++++++++++++++++
1 file changed, 46 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index 16325ba23c74..3cb8dc820fdd 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -28,6 +28,51 @@ MODULE_DESCRIPTION(DRV_STRING);
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);

+static netdev_tx_t rvu_rep_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct rep_dev *rep = netdev_priv(dev);
+ struct otx2_nic *pf = rep->mdev;
+ struct otx2_snd_queue *sq;
+ struct netdev_queue *txq;
+
+ sq = &pf->qset.sq[rep->rep_id];
+ txq = netdev_get_tx_queue(dev, 0);
+
+ if (!otx2_sq_append_skb(pf, txq, sq, skb, rep->rep_id)) {
+ netif_tx_stop_queue(txq);
+
+ /* Check again, in case SQBs got freed up */
+ smp_mb();
+ if (((sq->num_sqbs - *sq->aura_fc_addr) * sq->sqe_per_sqb)
+ > sq->sqe_thresh)
+ netif_tx_wake_queue(txq);
+
+ return NETDEV_TX_BUSY;
+ }
+ return NETDEV_TX_OK;
+}
+
+static int rvu_rep_open(struct net_device *dev)
+{
+ netif_carrier_on(dev);
+ netif_tx_start_all_queues(dev);
+ return 0;
+}
+
+static int rvu_rep_stop(struct net_device *dev)
+{
+ netif_carrier_off(dev);
+ netif_tx_disable(dev);
+
+ return 0;
+}
+
+static const struct net_device_ops rvu_rep_netdev_ops = {
+ .ndo_open = rvu_rep_open,
+ .ndo_stop = rvu_rep_stop,
+ .ndo_start_xmit = rvu_rep_xmit,
+};
+
static int rvu_rep_napi_init(struct otx2_nic *priv,
struct netlink_ext_ack *extack)
{
@@ -155,6 +200,7 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)

ndev->min_mtu = OTX2_MIN_MTU;
ndev->max_mtu = priv->hw.max_mtu;
+ ndev->netdev_ops = &rvu_rep_netdev_ops;
pcifunc = priv->rep_pf_map[rep_id];
rep->pcifunc = pcifunc;

--
2.25.1


2024-06-11 16:34:32

by Geetha sowjanya

[permalink] [raw]
Subject: [net-next PATCH v5 06/10] octeontx2-pf: Get VF stats via representor

This patch adds support to export VF port statistics via the representor
netdev. It defines a new mbox message "NIX_LF_STATS" to fetch the VF HW
stats.

Signed-off-by: Geetha sowjanya <[email protected]>
---
.../net/ethernet/marvell/octeontx2/af/mbox.h | 32 +++++++++
.../ethernet/marvell/octeontx2/af/rvu_rep.c | 43 ++++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.c | 65 +++++++++++++++++++
.../net/ethernet/marvell/octeontx2/nic/rep.h | 14 ++++
4 files changed, 154 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index a7c32f1cc924..d293a3c35b6b 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -321,6 +321,7 @@ M(NIX_MCAST_GRP_DESTROY, 0x802c, nix_mcast_grp_destroy, nix_mcast_grp_destroy_re
M(NIX_MCAST_GRP_UPDATE, 0x802d, nix_mcast_grp_update, \
nix_mcast_grp_update_req, \
nix_mcast_grp_update_rsp) \
+M(NIX_LF_STATS, 0x802e, nix_lf_stats, nix_stats_req, nix_stats_rsp) \
/* MCS mbox IDs (range 0xA000 - 0xBFFF) */ \
M(MCS_ALLOC_RESOURCES, 0xa000, mcs_alloc_resources, mcs_alloc_rsrc_req, \
mcs_alloc_rsrc_rsp) \
@@ -1366,6 +1367,37 @@ struct nix_bandprof_get_hwinfo_rsp {
u32 policer_timeunit;
};

+struct nix_stats_req {
+ struct mbox_msghdr hdr;
+ u8 reset;
+ u16 pcifunc;
+ u64 rsvd;
+};
+
+struct nix_stats_rsp {
+ struct mbox_msghdr hdr;
+ u16 pcifunc;
+ struct {
+ u64 octs;
+ u64 ucast;
+ u64 bcast;
+ u64 mcast;
+ u64 drop;
+ u64 drop_octs;
+ u64 drop_mcast;
+ u64 drop_bcast;
+ u64 err;
+ u64 rsvd[5];
+ } rx;
+ struct {
+ u64 ucast;
+ u64 bcast;
+ u64 mcast;
+ u64 drop;
+ u64 octs;
+ } tx;
+};
+
/* NPC mbox message structs */

#define NPC_MCAM_ENTRY_INVALID 0xFFFF
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
index e137bb9383a2..a7a15d7878e5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c
@@ -13,6 +13,49 @@
#include "rvu.h"
#include "rvu_reg.h"

+#define RVU_LF_RX_STATS(reg) \
+ rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_STATX(nixlf, reg))
+
+#define RVU_LF_TX_STATS(reg) \
+ rvu_read64(rvu, blkaddr, NIX_AF_LFX_TX_STATX(nixlf, reg))
+
+int rvu_mbox_handler_nix_lf_stats(struct rvu *rvu,
+ struct nix_stats_req *req,
+ struct nix_stats_rsp *rsp)
+{
+ u16 pcifunc = req->pcifunc;
+ int nixlf, blkaddr, err;
+ struct msg_req rst_req;
+ struct msg_rsp rst_rsp;
+
+ err = nix_get_nixlf(rvu, pcifunc, &nixlf, &blkaddr);
+ if (err)
+ return 0;
+
+ if (req->reset) {
+ rst_req.hdr.pcifunc = pcifunc;
+ return rvu_mbox_handler_nix_stats_rst(rvu, &rst_req, &rst_rsp);
+ }
+ rsp->rx.octs = RVU_LF_RX_STATS(RX_OCTS);
+ rsp->rx.ucast = RVU_LF_RX_STATS(RX_UCAST);
+ rsp->rx.bcast = RVU_LF_RX_STATS(RX_BCAST);
+ rsp->rx.mcast = RVU_LF_RX_STATS(RX_MCAST);
+ rsp->rx.drop = RVU_LF_RX_STATS(RX_DROP);
+ rsp->rx.err = RVU_LF_RX_STATS(RX_ERR);
+ rsp->rx.drop_octs = RVU_LF_RX_STATS(RX_DROP_OCTS);
+ rsp->rx.drop_mcast = RVU_LF_RX_STATS(RX_DRP_MCAST);
+ rsp->rx.drop_bcast = RVU_LF_RX_STATS(RX_DRP_BCAST);
+
+ rsp->tx.octs = RVU_LF_TX_STATS(TX_OCTS);
+ rsp->tx.ucast = RVU_LF_TX_STATS(TX_UCAST);
+ rsp->tx.bcast = RVU_LF_TX_STATS(TX_BCAST);
+ rsp->tx.mcast = RVU_LF_TX_STATS(TX_MCAST);
+ rsp->tx.drop = RVU_LF_TX_STATS(TX_DROP);
+
+ rsp->pcifunc = req->pcifunc;
+ return 0;
+}
+
static int rvu_rep_get_vlan_id(struct rvu *rvu, u16 pcifunc)
{
int id;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
index e276a354d9e4..592b0f4406f0 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.c
@@ -28,6 +28,68 @@ MODULE_DESCRIPTION(DRV_STRING);
MODULE_LICENSE("GPL");
MODULE_DEVICE_TABLE(pci, rvu_rep_id_table);

+static void rvu_rep_get_stats(struct work_struct *work)
+{
+ struct delayed_work *del_work = to_delayed_work(work);
+ struct nix_stats_req *req;
+ struct nix_stats_rsp *rsp;
+ struct rep_stats *stats;
+ struct otx2_nic *priv;
+ struct rep_dev *rep;
+ int err;
+
+ rep = container_of(del_work, struct rep_dev, stats_wrk);
+ priv = rep->mdev;
+
+ mutex_lock(&priv->mbox.lock);
+ req = otx2_mbox_alloc_msg_nix_lf_stats(&priv->mbox);
+ if (!req) {
+ mutex_unlock(&priv->mbox.lock);
+ return;
+ }
+ req->pcifunc = rep->pcifunc;
+ err = otx2_sync_mbox_msg_busy_poll(&priv->mbox);
+ if (err)
+ goto exit;
+
+ rsp = (struct nix_stats_rsp *)
+ otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr);
+
+ if (IS_ERR(rsp)) {
+ err = PTR_ERR(rsp);
+ goto exit;
+ }
+
+ stats = &rep->stats;
+ stats->rx_bytes = rsp->rx.octs;
+ stats->rx_frames = rsp->rx.ucast + rsp->rx.bcast +
+ rsp->rx.mcast;
+ stats->rx_drops = rsp->rx.drop;
+ stats->rx_mcast_frames = rsp->rx.mcast;
+ stats->tx_bytes = rsp->tx.octs;
+ stats->tx_frames = rsp->tx.ucast + rsp->tx.bcast + rsp->tx.mcast;
+ stats->tx_drops = rsp->tx.drop;
+exit:
+ mutex_unlock(&priv->mbox.lock);
+}
+
+static void rvu_rep_get_stats64(struct net_device *dev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct rep_dev *rep = netdev_priv(dev);
+
+ stats->rx_packets = rep->stats.rx_frames;
+ stats->rx_bytes = rep->stats.rx_bytes;
+ stats->rx_dropped = rep->stats.rx_drops;
+ stats->multicast = rep->stats.rx_mcast_frames;
+
+ stats->tx_packets = rep->stats.tx_frames;
+ stats->tx_bytes = rep->stats.tx_bytes;
+ stats->tx_dropped = rep->stats.tx_drops;
+
+ schedule_delayed_work(&rep->stats_wrk, msecs_to_jiffies(100));
+}
+
static int rvu_eswitch_config(struct otx2_nic *priv, u8 ena)
{
struct esw_cfg_req *req;
@@ -87,6 +149,7 @@ static const struct net_device_ops rvu_rep_netdev_ops = {
.ndo_open = rvu_rep_open,
.ndo_stop = rvu_rep_stop,
.ndo_start_xmit = rvu_rep_xmit,
+ .ndo_get_stats64 = rvu_rep_get_stats64,
};

static int rvu_rep_napi_init(struct otx2_nic *priv,
@@ -231,6 +294,8 @@ int rvu_rep_create(struct otx2_nic *priv, struct netlink_ext_ack *extack)
"PFVF reprentator registration failed");
goto exit;
}
+
+ INIT_DELAYED_WORK(&rep->stats_wrk, rvu_rep_get_stats);
}
err = rvu_rep_napi_init(priv, extack);
if (err)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
index c04874c4d4c6..5d39bf636655 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
@@ -17,9 +17,23 @@
#define PCI_DEVID_RVU_REP 0xA0E0

#define RVU_MAX_REP OTX2_MAX_CQ_CNT
+
+struct rep_stats {
+ u64 rx_bytes;
+ u64 rx_frames;
+ u64 rx_drops;
+ u64 rx_mcast_frames;
+
+ u64 tx_bytes;
+ u64 tx_frames;
+ u64 tx_drops;
+};
+
struct rep_dev {
struct otx2_nic *mdev;
struct net_device *netdev;
+ struct rep_stats stats;
+ struct delayed_work stats_wrk;
u16 rep_id;
u16 pcifunc;
};
--
2.25.1


2024-06-14 02:02:27

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [net-next PATCH v5 02/10] octeontx2-pf: RVU representor driver

On Tue, 11 Jun 2024 21:52:05 +0530 Geetha sowjanya wrote:
> obj-$(CONFIG_OCTEONTX2_PF) += rvu_nicpf.o otx2_ptp.o
> obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o otx2_ptp.o
> +obj-$(CONFIG_RVU_ESWITCH) += rvu_rep.o
>
> rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
> otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \
> otx2_devlink.o qos_sq.o qos.o
> rvu_nicvf-y := otx2_vf.o otx2_devlink.o
> +rvu_rep-y := rep.o otx2_devlink.o

You gotta fix the symbol duplication first, please:

drivers/net/ethernet/marvell/octeontx2/nic/Makefile: otx2_devlink.o is added to multiple modules: rvu_nicpf rvu_nicvf rvu_rep

2024-06-15 11:16:30

by Markus Elfring

[permalink] [raw]
Subject: Re: [net-next PATCH v5 01/10] octeontx2-pf: Refactoring RVU driver


> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c

> +void otx2_disable_napi(struct otx2_nic *pf)
> {

> - cancel_work_sync(&cq_poll->dim.work);
> + work = &cq_poll->dim.work;
> + if (work->func)
> + cancel_work_sync(&cq_poll->dim.work);


I suggest reusing the local variable shown above once more:

+ cancel_work_sync(work);



> +int otx2_init_rsrc(struct pci_dev *pdev, struct otx2_nic *pf)
> +{

> + return 0;
> +
> +err_detach_rsrc:
> + if (pf->hw.lmt_info)
> + free_percpu(pf->hw.lmt_info);
> + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag))
> + qmem_free(pf->dev, pf->dync_lmt);
> + otx2_detach_resources(&pf->mbox);
> +err_disable_mbox_intr:
> + otx2_disable_mbox_intr(pf);
> +err_mbox_destroy:
> + otx2_pfaf_mbox_destroy(pf);
> +err_free_irq_vectors:
> + pci_free_irq_vectors(hw->pdev);
> +
> + return err;
> +}


Would you be interested in converting such goto chains to
scope-based resource management?
https://elixir.bootlin.com/linux/v6.10-rc3/source/include/linux/cleanup.h#L8
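
For readers unfamiliar with cleanup.h, the idiom looks roughly like the
sketch below (a generic illustration of __free() on a kernel that already
provides the kfree cleanup helper, not a drop-in conversion of
otx2_init_rsrc(); whether it suits teardown that must run in a fixed
order is exactly the open question here):

	#include <linux/cleanup.h>
	#include <linux/slab.h>

	static int example_alloc_and_use(size_t len)
	{
		void *buf __free(kfree) = kzalloc(len, GFP_KERNEL);

		if (!buf)
			return -ENOMEM;

		/* use buf; it is freed automatically on every return path */
		return 0;
	}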

Regards,
Markus

2024-06-15 15:12:42

by Markus Elfring

[permalink] [raw]
Subject: Re: [net-next PATCH v5 02/10] octeontx2-pf: RVU representor driver

> This patch adds basic driver for the RVU representor.


Please improve the change description by using imperative wording.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.10-rc3#n94

Could an adjusted summary phrase also be a bit more helpful?
https://elixir.bootlin.com/linux/v6.10-rc3/source/Documentation/process/maintainer-tip.rst#L124



> +static int rvu_get_rep_cnt(struct otx2_nic *priv)
> +{

> + mutex_lock(&priv->mbox.lock);
> + req = otx2_mbox_alloc_msg_get_rep_cnt(&priv->mbox);

> +exit:
> + mutex_unlock(&priv->mbox.lock);
> + return err;
> +}


Would you be interested in applying a statement like “guard(mutex)(&priv->mbox.lock);”?
https://elixir.bootlin.com/linux/v6.10-rc3/source/include/linux/mutex.h#L196
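
Applied to rvu_get_rep_cnt() from patch 2, the suggestion would look
roughly as follows (an illustrative rework only, not code from this
series):

	static int rvu_get_rep_cnt(struct otx2_nic *priv)
	{
		struct get_rep_cnt_rsp *rsp;
		struct mbox_msghdr *msghdr;
		struct msg_req *req;
		int err, rep;

		/* Lock is dropped automatically on every return path */
		guard(mutex)(&priv->mbox.lock);

		req = otx2_mbox_alloc_msg_get_rep_cnt(&priv->mbox);
		if (!req)
			return -ENOMEM;

		err = otx2_sync_mbox_msg(&priv->mbox);
		if (err)
			return err;

		msghdr = otx2_mbox_get_rsp(&priv->mbox.mbox, 0, &req->hdr);
		if (IS_ERR(msghdr))
			return PTR_ERR(msghdr);

		rsp = (struct get_rep_cnt_rsp *)msghdr;
		priv->hw.tx_queues = rsp->rep_cnt;
		priv->hw.rx_queues = rsp->rep_cnt;
		priv->rep_cnt = rsp->rep_cnt;
		for (rep = 0; rep < priv->rep_cnt; rep++)
			priv->rep_pf_map[rep] = rsp->rep_pf_map[rep];

		return 0;
	}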



> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/rep.h
> @@ -0,0 +1,31 @@

> +#ifndef REP_H
> +#define REP_H


Would more unique include guards also be desirable for this software?
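
For instance, a more specific guard for rep.h could read (name chosen
purely for illustration):

	#ifndef OTX2_REP_H
	#define OTX2_REP_H
	...
	#endif /* OTX2_REP_H */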

Regards,
Markus