2017-08-31 13:11:25

by Lipeng

Subject: [PATCH net-next 0/8] Bug fixes & Code improvements in HNS driver

This patch set introduces some bug fixes and code improvements.
These were identified during internal review and testing of the
driver by Hisilicon teams.

Lipeng (8):
net: hns3: add check when initializing private waterline and common packet buffer
net: hns3: update ring and vector map command
net: hns3: set default mac vlan mask
net: hns3: set default vlan id to PF
net: hns3: set the VLAN Ethernet type to HW
net: hns3: fix bug of reusing command descriptors
net: hns3: add vlan filter config of Ports
net: hns3: reimplementation of pkt buffer allocation

drivers/net/ethernet/hisilicon/hns3/hnae3.h | 5 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 9 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 64 ++-
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 613 +++++++++++++--------
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 16 +-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c | 84 ++-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h | 9 +
.../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 9 +
8 files changed, 541 insertions(+), 268 deletions(-)

--
1.9.1


2017-08-31 13:11:29

by Lipeng

Subject: [PATCH net-next 1/8] net: hns3: add check when initializing
 private waterline and common packet buffer

Command HCLGE_OPC_RX_PRIV_WL_ALLOC configures the waterline used for
PFC on TCs that have a private buffer. Command
HCLGE_OPC_RX_COM_THRD_ALLOC controls each TC's occupation of the
common packet buffer and also generates PFC for TCs that have no
private buffer. When the device does not support DCB, the commands
HCLGE_OPC_RX_PRIV_WL_ALLOC and HCLGE_OPC_RX_COM_THRD_ALLOC should not
be used.

The current code does not support the DCB feature; DCB support will be
added later. The code works well on devices that support DCB even
though the feature is not enabled, but it fails on devices that do not
support DCB.
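
The capability test the patch adds boils down to the stand-alone
sketch below (the helpers are simplified stand-ins for the driver's
hnae_set_bit()/hnae_get_bit(); the bit index matches the hnae3.h hunk
that follows):

#include <stdio.h>

#define HNAE_DEV_SUPPORT_DCB_B 0x2

/* Simplified stand-ins for hnae_set_bit()/hnae_get_bit() */
static void set_flag_bit(unsigned int *flag, int bit) { *flag |= 1U << bit; }
static int get_flag_bit(unsigned int flag, int bit) { return (flag >> bit) & 1; }

int main(void)
{
	unsigned int flag = 0;

	/* Set when pci_match_id() finds the device in dcb_pci_tbl */
	set_flag_bit(&flag, HNAE_DEV_SUPPORT_DCB_B);

	if (get_flag_bit(flag, HNAE_DEV_SUPPORT_DCB_B))
		printf("DCB capable: configure waterline and common threshold\n");
	else
		printf("no DCB: skip both commands\n");
	return 0;
}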

Signed-off-by: Lipeng <[email protected]>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 1 +
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 41 ++++++++++++++++------
2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index b2f28ae..e23e028 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -50,6 +50,7 @@

#define HNAE3_DEV_INITED_B 0x0
#define HNAE_DEV_SUPPORT_ROCE_B 0x1
+#define HNAE_DEV_SUPPORT_DCB_B 0x2

#define ring_ptr_move_fw(ring, p) \
((ring)->p = ((ring)->p + 1) % (ring)->desc_num)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index bb45365..acc4016 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -60,6 +60,16 @@ static int hclge_set_mta_filter_mode(struct hclge_dev *hdev,
{0, }
};

+static const struct pci_device_id dcb_pci_tbl[] = {
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA), 0},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_25GE_RDMA_MACSEC), 0},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA), 0},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_50GE_RDMA_MACSEC), 0},
+ {PCI_VDEVICE(HUAWEI, HNAE3_DEV_ID_100G_RDMA_MACSEC), 0},
+ /* Required last entry */
+ {0, }
+};
+
static const char hns3_nic_test_strs[][ETH_GSTRING_LEN] = {
"Mac Loopback test",
"Serdes Loopback test",
@@ -1782,18 +1792,23 @@ int hclge_buffer_alloc(struct hclge_dev *hdev)
return ret;
}

- ret = hclge_rx_priv_wl_config(hdev);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "could not configure rx private waterline %d\n", ret);
- return ret;
- }
+ if (hnae_get_bit(hdev->ae_dev->flag,
+ HNAE_DEV_SUPPORT_DCB_B)) {
+ ret = hclge_rx_priv_wl_config(hdev);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "could not configure rx private waterline %d\n",
+ ret);
+ return ret;
+ }

- ret = hclge_common_thrd_config(hdev);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "could not configure common threshold %d\n", ret);
- return ret;
+ ret = hclge_common_thrd_config(hdev);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "could not configure common threshold %d\n",
+ ret);
+ return ret;
+ }
}

ret = hclge_common_wl_config(hdev);
@@ -4076,6 +4091,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
if (id)
hnae_set_bit(ae_dev->flag, HNAE_DEV_SUPPORT_ROCE_B, 1);

+ id = pci_match_id(dcb_pci_tbl, ae_dev->pdev);
+ if (id)
+ hnae_set_bit(ae_dev->flag, HNAE_DEV_SUPPORT_DCB_B, 1);
+
ret = hclge_pci_init(hdev);
if (ret) {
dev_err(&pdev->dev, "PCI init failed\n");
--
1.9.1

2017-08-31 13:11:28

by Lipeng

Subject: [PATCH net-next 6/8] net: hns3: fix bug of reusing command descriptors

When a command descriptor is reused, both the in/out bit and the W/R
bit of the command flag should be updated. The old code updated only
the W/R bit and left the in/out bit untouched; this patch fixes that.
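
A host-endian model of the new helper is sketched below (the flag bit
positions are assumptions for illustration; the real values live in
hclge_cmd.h):

#include <stdint.h>
#include <stdio.h>

/* Assumed bit positions, for illustration only */
#define HCLGE_CMD_FLAG_IN      (1U << 0)
#define HCLGE_CMD_FLAG_WR      (1U << 3)
#define HCLGE_CMD_FLAG_NO_INTR (1U << 4)

/* Model of hclge_cmd_reuse_desc(): rebuild the flag from a known
 * "inbound, no interrupt" base instead of only toggling the W/R bit,
 * so no stale direction state survives from the descriptor's last use. */
static uint16_t reuse_flag(int is_read)
{
	uint16_t flag = HCLGE_CMD_FLAG_NO_INTR | HCLGE_CMD_FLAG_IN;

	if (is_read)
		flag |= HCLGE_CMD_FLAG_WR;
	return flag;
}

int main(void)
{
	printf("write: 0x%04x, read: 0x%04x\n", reuse_flag(0), reuse_flag(1));
	return 0;
}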

Signed-off-by: Mingguang Qu <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c | 9 +++++++++
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 2 +-
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 6 +++---
3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
index 8b511e6..fe2b116 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
@@ -85,6 +85,15 @@ static int hclge_init_cmd_queue(struct hclge_dev *hdev, int ring_type)
return 0;
}

+void hclge_cmd_reuse_desc(struct hclge_desc *desc, bool is_read)
+{
+ desc->flag = cpu_to_le16(HCLGE_CMD_FLAG_NO_INTR | HCLGE_CMD_FLAG_IN);
+ if (is_read)
+ desc->flag |= cpu_to_le16(HCLGE_CMD_FLAG_WR);
+ else
+ desc->flag &= cpu_to_le16(~HCLGE_CMD_FLAG_WR);
+}
+
void hclge_cmd_setup_basic_desc(struct hclge_desc *desc,
enum hclge_opcode_type opcode, bool is_read)
{
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index b841df1..5887418 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -753,7 +753,7 @@ static inline u32 hclge_read_reg(u8 __iomem *base, u32 reg)
int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num);
void hclge_cmd_setup_basic_desc(struct hclge_desc *desc,
enum hclge_opcode_type opcode, bool is_read);
-
+void hclge_cmd_reuse_desc(struct hclge_desc *desc, bool is_read);
int hclge_cmd_set_promisc_mode(struct hclge_dev *hdev,
struct hclge_promisc_param *param);

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index f2ea88f..12be24f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3302,11 +3302,11 @@ static int hclge_add_mac_vlan_tbl(struct hclge_vport *vport,
resp_code,
HCLGE_MAC_VLAN_ADD);
} else {
- mc_desc[0].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_WR);
+ hclge_cmd_reuse_desc(&mc_desc[0], false);
mc_desc[0].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
- mc_desc[1].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_WR);
+ hclge_cmd_reuse_desc(&mc_desc[1], false);
mc_desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
- mc_desc[2].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_WR);
+ hclge_cmd_reuse_desc(&mc_desc[2], false);
mc_desc[2].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_NEXT);
memcpy(mc_desc[0].data, req,
sizeof(struct hclge_mac_vlan_tbl_entry));
--
1.9.1

2017-08-31 13:12:13

by Lipeng

Subject: [PATCH net-next 8/8] net: hns3: reimplementation of pkt buffer allocation

The current implementation of packet buffer allocation in the SSU does
not meet the requirements for buffer reallocation. This patch fixes
that in order to support buffer reallocation between MAC pause and
PFC pause.
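
The first stage of the reworked accounting can be illustrated with a
stand-alone sketch (all sizes here are hypothetical; in the driver
they come from the firmware-reported total, HCLGE_DEFAULT_TX_BUF and
hdev->mps):

#include <stdio.h>

#define TC_NUM 8

int main(void)
{
	unsigned int rx_all = 0x108000;   /* hypothetical total packet buffer */
	unsigned int tx_buf = 0x4000;     /* per-TC tx buffer, cf. HCLGE_DEFAULT_TX_BUF */
	unsigned int hw_tc_map = 0x3;     /* two TCs enabled, hypothetical */
	int i;

	/* As in the new hclge_buffer_calc(): carve a tx buffer out of the
	 * total for every enabled TC; the remainder is the rx budget that
	 * the private/shared split must then fit into. */
	for (i = 0; i < TC_NUM; i++)
		if (hw_tc_map & (1U << i))
			rx_all -= tx_buf;

	printf("rx budget after tx carve-out: 0x%x bytes\n", rx_all);
	return 0;
}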

Signed-off-by: Yunsheng Lin <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 32 +-
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 368 +++++++++++----------
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 5 +-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c | 84 ++++-
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h | 9 +
5 files changed, 308 insertions(+), 190 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index 5887418..26e8ca6 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -141,7 +141,7 @@ enum hclge_opcode_type {

/* Packet buffer allocate command */
HCLGE_OPC_TX_BUFF_ALLOC = 0x0901,
- HCLGE_OPC_RX_PRIV_BUFF_ALLOC = 0x0902,
+ HCLGE_OPC_RX_BUFF_ALLOC = 0x0902,
HCLGE_OPC_RX_PRIV_WL_ALLOC = 0x0903,
HCLGE_OPC_RX_COM_THRD_ALLOC = 0x0904,
HCLGE_OPC_RX_COM_WL_ALLOC = 0x0905,
@@ -264,14 +264,15 @@ struct hclge_ctrl_vector_chain {
#define HCLGE_TC_NUM 8
#define HCLGE_TC0_PRI_BUF_EN_B 15 /* Bit 15 indicate enable or not */
#define HCLGE_BUF_UNIT_S 7 /* Buf size is united by 128 bytes */
-struct hclge_tx_buff_alloc {
- __le16 tx_pkt_buff[HCLGE_TC_NUM];
- u8 tx_buff_rsv[8];
+struct hclge_tx_buf_alloc {
+ __le16 buf[HCLGE_TC_NUM];
+ u8 rsv[8];
};

-struct hclge_rx_priv_buff {
- __le16 buf_num[HCLGE_TC_NUM];
- u8 rsv[8];
+struct hclge_rx_buf_alloc {
+ __le16 priv_buf[HCLGE_TC_NUM];
+ __le16 shared_buf;
+ u8 rsv[6];
};

struct hclge_query_version {
@@ -308,19 +309,24 @@ struct hclge_tc_thrd {
u32 high;
};

-struct hclge_priv_buf {
+struct hclge_rx_priv_buf {
struct hclge_waterline wl; /* Waterline for low and high*/
u32 buf_size; /* TC private buffer size */
- u32 enable; /* Enable TC private buffer or not */
};

#define HCLGE_MAX_TC_NUM 8
-struct hclge_shared_buf {
+struct hclge_rx_shared_buf {
struct hclge_waterline self;
struct hclge_tc_thrd tc_thrd[HCLGE_MAX_TC_NUM];
u32 buf_size;
};

+struct hclge_pkt_buf_alloc {
+ u32 tx_buf_size[HCLGE_MAX_TC_NUM];
+ struct hclge_rx_priv_buf rx_buf[HCLGE_MAX_TC_NUM];
+ struct hclge_rx_shared_buf s_buf;
+};
+
#define HCLGE_RX_COM_WL_EN_B 15
struct hclge_rx_com_wl_buf {
__le16 high_wl;
@@ -707,9 +713,9 @@ struct hclge_reset_tqp_queue {
u8 rsv[20];
};

-#define HCLGE_DEFAULT_TX_BUF 0x4000 /* 16k bytes */
-#define HCLGE_TOTAL_PKT_BUF 0x108000 /* 1.03125M bytes */
-#define HCLGE_DEFAULT_DV 0xA000 /* 40k byte */
+#define HCLGE_DEFAULT_TX_BUF 0x4000 /* 16k bytes */
+#define HCLGE_DEFAULT_DV 0xA000 /* 40k byte */
+#define HCLGE_DEFAULT_NON_DCB_DV 0x7800 /* 30K byte */

#define HCLGE_TYPE_CRQ 0
#define HCLGE_TYPE_CSQ 1
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index d0a30f5..61073c2 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -1094,8 +1094,18 @@ static int hclge_configure(struct hclge_dev *hdev)
hdev->tm_info.num_tc = 1;
}

+ /* non-DCB supported dev */
+ if (!hnae_get_bit(hdev->ae_dev->flag,
+ HNAE_DEV_SUPPORT_DCB_B)) {
+ hdev->tc_cap = 1;
+ hdev->pfc_cap = 0;
+ } else {
+ hdev->tc_cap = hdev->tm_info.num_tc;
+ hdev->pfc_cap = hdev->tm_info.num_tc;
+ }
+
/* Currently not support uncontiuous tc */
- for (i = 0; i < cfg.tc_num; i++)
+ for (i = 0; i < hdev->tc_cap; i++)
hnae_set_bit(hdev->hw_tc_map, i, 1);

if (!hdev->num_vmdq_vport && !hdev->num_req_vfs)
@@ -1344,45 +1354,32 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)
return 0;
}

-static int hclge_cmd_alloc_tx_buff(struct hclge_dev *hdev, u16 buf_size)
+static int hclge_tx_buffer_alloc(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
-/* TX buffer size is unit by 128 byte */
-#define HCLGE_BUF_SIZE_UNIT_SHIFT 7
-#define HCLGE_BUF_SIZE_UPDATE_EN_MSK BIT(15)
- struct hclge_tx_buff_alloc *req;
struct hclge_desc desc;
- int ret;
+ struct hclge_tx_buf_alloc *req =
+ (struct hclge_tx_buf_alloc *)desc.data;
+ enum hclge_cmd_status status;
u8 i;

- req = (struct hclge_tx_buff_alloc *)desc.data;
-
hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_TX_BUFF_ALLOC, 0);
- for (i = 0; i < HCLGE_TC_NUM; i++)
- req->tx_pkt_buff[i] =
- cpu_to_le16((buf_size >> HCLGE_BUF_SIZE_UNIT_SHIFT) |
- HCLGE_BUF_SIZE_UPDATE_EN_MSK);
+ for (i = 0; i < HCLGE_TC_NUM; i++) {
+ u32 buf_size = buf_alloc->tx_buf_size[i];

- ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
- dev_err(&hdev->pdev->dev, "tx buffer alloc cmd failed %d.\n",
- ret);
- return ret;
+ req->buf[i] =
+ cpu_to_le16((buf_size >> HCLGE_BUF_UNIT_S) |
+ 1 << HCLGE_TC0_PRI_BUF_EN_B);
}

- return 0;
-}
-
-static int hclge_tx_buffer_alloc(struct hclge_dev *hdev, u32 buf_size)
-{
- int ret = hclge_cmd_alloc_tx_buff(hdev, buf_size);
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);

- if (ret) {
+ if (status) {
dev_err(&hdev->pdev->dev,
- "tx buffer alloc failed %d\n", ret);
- return ret;
+ "Allocat tx buff fail, ret = %d\n", status);
}

- return 0;
+ return status;
}

static int hclge_get_tc_num(struct hclge_dev *hdev)
@@ -1407,15 +1404,16 @@ static int hclge_get_pfc_enalbe_num(struct hclge_dev *hdev)
}

/* Get the number of pfc enabled TCs, which have private buffer */
-static int hclge_get_pfc_priv_num(struct hclge_dev *hdev)
+static int hclge_get_pfc_priv_num(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_priv_buf *priv;
+ struct hclge_rx_priv_buf *priv;
int i, cnt = 0;

for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- priv = &hdev->priv_buf[i];
+ priv = &buf_alloc->rx_buf[i];
if ((hdev->tm_info.hw_pfc_map & BIT(i)) &&
- priv->enable)
+ priv->buf_size > 0)
cnt++;
}

@@ -1423,37 +1421,40 @@ static int hclge_get_pfc_priv_num(struct hclge_dev *hdev)
}

/* Get the number of pfc disabled TCs, which have private buffer */
-static int hclge_get_no_pfc_priv_num(struct hclge_dev *hdev)
+static int hclge_get_no_pfc_priv_num(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_priv_buf *priv;
+ struct hclge_rx_priv_buf *priv;
int i, cnt = 0;

for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- priv = &hdev->priv_buf[i];
+ priv = &buf_alloc->rx_buf[i];
if (hdev->hw_tc_map & BIT(i) &&
!(hdev->tm_info.hw_pfc_map & BIT(i)) &&
- priv->enable)
+ priv->buf_size > 0)
cnt++;
}

return cnt;
}

-static u32 hclge_get_rx_priv_buff_alloced(struct hclge_dev *hdev)
+static u32 hclge_get_rx_priv_buff_alloced(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_priv_buf *priv;
+ struct hclge_rx_priv_buf *priv;
u32 rx_priv = 0;
int i;

for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- priv = &hdev->priv_buf[i];
- if (priv->enable)
- rx_priv += priv->buf_size;
+ priv = &buf_alloc->rx_buf[i];
+ rx_priv += priv->buf_size;
}
return rx_priv;
}

-static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev, u32 rx_all)
+static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc,
+ u32 rx_all)
{
u32 shared_buf_min, shared_buf_tc, shared_std;
int tc_num, pfc_enable_num;
@@ -1464,52 +1465,85 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev, u32 rx_all)
tc_num = hclge_get_tc_num(hdev);
pfc_enable_num = hclge_get_pfc_enalbe_num(hdev);

- shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;
+ if (hnae_get_bit(hdev->ae_dev->flag,
+ HNAE_DEV_SUPPORT_DCB_B))
+ shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_DV;
+ else
+ shared_buf_min = 2 * hdev->mps + HCLGE_DEFAULT_NON_DCB_DV;
+
shared_buf_tc = pfc_enable_num * hdev->mps +
(tc_num - pfc_enable_num) * hdev->mps / 2 +
hdev->mps;
shared_std = max_t(u32, shared_buf_min, shared_buf_tc);

- rx_priv = hclge_get_rx_priv_buff_alloced(hdev);
- if (rx_all <= rx_priv + shared_std)
+ rx_priv = hclge_get_rx_priv_buff_alloced(hdev, buf_alloc);
+ if (rx_all <= rx_priv + shared_std) {
+ dev_err(&hdev->pdev->dev,
+ "pkt buffer allocted failed, total:%u, rx_all:%u\n",
+ hdev->pkt_buf_size, rx_all);
return false;
+ }

shared_buf = rx_all - rx_priv;
- hdev->s_buf.buf_size = shared_buf;
- hdev->s_buf.self.high = shared_buf;
- hdev->s_buf.self.low = 2 * hdev->mps;
-
+ buf_alloc->s_buf.buf_size = shared_buf;
+ buf_alloc->s_buf.self.high = shared_buf;
+ buf_alloc->s_buf.self.low = 2 * hdev->mps;
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
if ((hdev->hw_tc_map & BIT(i)) &&
(hdev->tm_info.hw_pfc_map & BIT(i))) {
- hdev->s_buf.tc_thrd[i].low = hdev->mps;
- hdev->s_buf.tc_thrd[i].high = 2 * hdev->mps;
+ buf_alloc->s_buf.tc_thrd[i].low = hdev->mps;
+ buf_alloc->s_buf.tc_thrd[i].high = 2 * hdev->mps;
} else {
- hdev->s_buf.tc_thrd[i].low = 0;
- hdev->s_buf.tc_thrd[i].high = hdev->mps;
+ buf_alloc->s_buf.tc_thrd[i].low = 0;
+ buf_alloc->s_buf.tc_thrd[i].high = hdev->mps;
}
}

return true;
}

-/* hclge_rx_buffer_calc: calculate the rx private buffer size for all TCs
+/**
+ * hclge_buffer_calc: calculate the private buffer size for all TCs
* @hdev: pointer to struct hclge_dev
* @tx_size: the allocated tx buffer for all TCs
* @return: 0: calculate sucessful, negative: fail
*/
-int hclge_rx_buffer_calc(struct hclge_dev *hdev, u32 tx_size)
+int hclge_buffer_calc(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc,
+ u32 tx_size)
{
- u32 rx_all = hdev->pkt_buf_size - tx_size;
+ u32 rx_all = hdev->pkt_buf_size;
int no_pfc_priv_num, pfc_priv_num;
- struct hclge_priv_buf *priv;
+ struct hclge_rx_priv_buf *priv;
int i;

- /* step 1, try to alloc private buffer for all enabled tc */
+ /* alloc tx buffer for all enabled tc */
+ for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
+ if (rx_all < tx_size)
+ return -ENOMEM;
+
+ if (hdev->hw_tc_map & BIT(i)) {
+ buf_alloc->tx_buf_size[i] = tx_size;
+ rx_all -= tx_size;
+ } else {
+ buf_alloc->tx_buf_size[i] = 0;
+ }
+ }
+
+ /* If pfc is not supported, rx private
+ * buffer is not allocated.
+ */
+ if (hdev->pfc_cap == 0) {
+ if (!hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))
+ return -ENOMEM;
+
+ return 0;
+ }
+
+ /* Step 1, try to alloc private buffer for all enabled tc */
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- priv = &hdev->priv_buf[i];
+ priv = &buf_alloc->rx_buf[i];
if (hdev->hw_tc_map & BIT(i)) {
- priv->enable = 1;
if (hdev->tm_info.hw_pfc_map & BIT(i)) {
priv->wl.low = hdev->mps;
priv->wl.high = priv->wl.low + hdev->mps;
@@ -1520,128 +1554,133 @@ int hclge_rx_buffer_calc(struct hclge_dev *hdev, u32 tx_size)
priv->wl.high = 2 * hdev->mps;
priv->buf_size = priv->wl.high;
}
+ } else {
+ priv->wl.low = 0;
+ priv->wl.high = 0;
+ priv->buf_size = 0;
}
}

- if (hclge_is_rx_buf_ok(hdev, rx_all))
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))
return 0;

- /* step 2, try to decrease the buffer size of
+ /**
+ * Step 2, try to decrease the buffer size of
* no pfc TC's private buffer
- */
+ **/
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- priv = &hdev->priv_buf[i];
-
- if (hdev->hw_tc_map & BIT(i))
- priv->enable = 1;
-
- if (hdev->tm_info.hw_pfc_map & BIT(i)) {
- priv->wl.low = 128;
- priv->wl.high = priv->wl.low + hdev->mps;
- priv->buf_size = priv->wl.high + HCLGE_DEFAULT_DV;
+ priv = &buf_alloc->rx_buf[i];
+ if (hdev->hw_tc_map & BIT(i)) {
+ if (hdev->tm_info.hw_pfc_map & BIT(i)) {
+ priv->wl.low = 128;
+ priv->wl.high = priv->wl.low + hdev->mps;
+ priv->buf_size = priv->wl.high
+ + HCLGE_DEFAULT_DV;
+ } else {
+ priv->wl.low = 0;
+ priv->wl.high = hdev->mps;
+ priv->buf_size = priv->wl.high;
+ }
} else {
priv->wl.low = 0;
- priv->wl.high = hdev->mps;
- priv->buf_size = priv->wl.high;
+ priv->wl.high = 0;
+ priv->buf_size = 0;
}
}

- if (hclge_is_rx_buf_ok(hdev, rx_all))
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))
return 0;

- /* step 3, try to reduce the number of pfc disabled TCs,
+ /**
+ * Step 3, try to reduce the number of pfc disabled TCs,
* which have private buffer
- */
- /* get the total no pfc enable TC number, which have private buffer */
- no_pfc_priv_num = hclge_get_no_pfc_priv_num(hdev);
+ **/

- /* let the last to be cleared first */
+ /* Get the total no pfc enable TC number, which have private buffer */
+ no_pfc_priv_num = hclge_get_no_pfc_priv_num(hdev, buf_alloc);
+ /* Let the last to be cleared first */
for (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {
- priv = &hdev->priv_buf[i];
-
+ priv = &buf_alloc->rx_buf[i];
if (hdev->hw_tc_map & BIT(i) &&
!(hdev->tm_info.hw_pfc_map & BIT(i))) {
/* Clear the no pfc TC private buffer */
priv->wl.low = 0;
priv->wl.high = 0;
priv->buf_size = 0;
- priv->enable = 0;
no_pfc_priv_num--;
}
-
- if (hclge_is_rx_buf_ok(hdev, rx_all) ||
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all) ||
no_pfc_priv_num == 0)
break;
}
-
- if (hclge_is_rx_buf_ok(hdev, rx_all))
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))
return 0;

- /* step 4, try to reduce the number of pfc enabled TCs
+ /**
+ * Step 4, try to reduce the number of pfc enabled TCs
* which have private buffer.
- */
- pfc_priv_num = hclge_get_pfc_priv_num(hdev);
-
- /* let the last to be cleared first */
+ **/
+ pfc_priv_num = hclge_get_pfc_priv_num(hdev, buf_alloc);
+ /* Let the last to be cleared first */
for (i = HCLGE_MAX_TC_NUM - 1; i >= 0; i--) {
- priv = &hdev->priv_buf[i];
-
+ priv = &buf_alloc->rx_buf[i];
if (hdev->hw_tc_map & BIT(i) &&
hdev->tm_info.hw_pfc_map & BIT(i)) {
/* Reduce the number of pfc TC with private buffer */
priv->wl.low = 0;
- priv->enable = 0;
priv->wl.high = 0;
priv->buf_size = 0;
pfc_priv_num--;
}
-
- if (hclge_is_rx_buf_ok(hdev, rx_all) ||
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all) ||
pfc_priv_num == 0)
break;
}
- if (hclge_is_rx_buf_ok(hdev, rx_all))
+ if (hclge_is_rx_buf_ok(hdev, buf_alloc, rx_all))
return 0;

return -ENOMEM;
}

-static int hclge_rx_priv_buf_alloc(struct hclge_dev *hdev)
+static int hclge_rx_buf_alloc(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_rx_priv_buff *req;
struct hclge_desc desc;
+ struct hclge_rx_buf_alloc *req =
+ (struct hclge_rx_buf_alloc *)desc.data;
+ struct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;
int ret;
int i;

- hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_PRIV_BUFF_ALLOC, false);
- req = (struct hclge_rx_priv_buff *)desc.data;
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_BUFF_ALLOC, false);

/* Alloc private buffer TCs */
for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
- struct hclge_priv_buf *priv = &hdev->priv_buf[i];
+ struct hclge_rx_priv_buf *priv = &buf_alloc->rx_buf[i];

- req->buf_num[i] =
+ req->priv_buf[i] =
cpu_to_le16(priv->buf_size >> HCLGE_BUF_UNIT_S);
- req->buf_num[i] |=
- cpu_to_le16(true << HCLGE_TC0_PRI_BUF_EN_B);
+ req->priv_buf[i] |=
+ cpu_to_le16(1 << HCLGE_TC0_PRI_BUF_EN_B);
}

+ req->shared_buf = cpu_to_le16(s_buf->buf_size >> HCLGE_BUF_UNIT_S);
+ req->shared_buf |= cpu_to_le16(1 << HCLGE_TC0_PRI_BUF_EN_B);
+
ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
+ if (ret)
dev_err(&hdev->pdev->dev,
- "rx private buffer alloc cmd failed %d\n", ret);
- return ret;
- }
+ "Set rx private buffer fail, status = %d\n", ret);

- return 0;
+ return ret;
}

#define HCLGE_PRIV_ENABLE(a) ((a) > 0 ? 1 : 0)
-
-static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)
+static int hclge_rx_priv_wl_config(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
struct hclge_rx_priv_wl_buf *req;
- struct hclge_priv_buf *priv;
+ struct hclge_rx_priv_buf *priv;
struct hclge_desc desc[2];
int i, j;
int ret;
@@ -1658,7 +1697,9 @@ static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)
desc[i].flag &= ~cpu_to_le16(HCLGE_CMD_FLAG_NEXT);

for (j = 0; j < HCLGE_TC_NUM_ONE_DESC; j++) {
- priv = &hdev->priv_buf[i * HCLGE_TC_NUM_ONE_DESC + j];
+ u32 idx = i * HCLGE_TC_NUM_ONE_DESC + j;
+
+ priv = &buf_alloc->rx_buf[idx];
req->tc_wl[j].high =
cpu_to_le16(priv->wl.high >> HCLGE_BUF_UNIT_S);
req->tc_wl[j].high |=
@@ -1674,18 +1715,17 @@ static int hclge_rx_priv_wl_config(struct hclge_dev *hdev)

/* Send 2 descriptor at one time */
ret = hclge_cmd_send(&hdev->hw, desc, 2);
- if (ret) {
+ if (ret)
dev_err(&hdev->pdev->dev,
- "rx private waterline config cmd failed %d\n",
- ret);
- return ret;
- }
- return 0;
+ "Set rx private waterline fail, status %d\n", ret);
+
+ return ret;
}

-static int hclge_common_thrd_config(struct hclge_dev *hdev)
+static int hclge_common_thrd_config(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_shared_buf *s_buf = &hdev->s_buf;
+ struct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;
struct hclge_rx_com_thrd *req;
struct hclge_desc desc[2];
struct hclge_tc_thrd *tc;
@@ -1721,104 +1761,100 @@ static int hclge_common_thrd_config(struct hclge_dev *hdev)

/* Send 2 descriptors at one time */
ret = hclge_cmd_send(&hdev->hw, desc, 2);
- if (ret) {
+ if (ret)
dev_err(&hdev->pdev->dev,
- "common threshold config cmd failed %d\n", ret);
- return ret;
- }
- return 0;
+ "Set rx private waterline fail, status %d\n", ret);
+
+ return ret;
}

-static int hclge_common_wl_config(struct hclge_dev *hdev)
+static int hclge_common_wl_config(struct hclge_dev *hdev,
+ struct hclge_pkt_buf_alloc *buf_alloc)
{
- struct hclge_shared_buf *buf = &hdev->s_buf;
- struct hclge_rx_com_wl *req;
struct hclge_desc desc;
+ struct hclge_rx_com_wl *req = (struct hclge_rx_com_wl *)desc.data;
+ struct hclge_rx_shared_buf *s_buf = &buf_alloc->s_buf;
int ret;

hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_RX_COM_WL_ALLOC, false);

- req = (struct hclge_rx_com_wl *)desc.data;
- req->com_wl.high = cpu_to_le16(buf->self.high >> HCLGE_BUF_UNIT_S);
+ req->com_wl.high = cpu_to_le16(s_buf->self.high >> HCLGE_BUF_UNIT_S);
req->com_wl.high |=
- cpu_to_le16(HCLGE_PRIV_ENABLE(buf->self.high) <<
+ cpu_to_le16(HCLGE_PRIV_ENABLE(s_buf->self.high) <<
HCLGE_RX_PRIV_EN_B);

- req->com_wl.low = cpu_to_le16(buf->self.low >> HCLGE_BUF_UNIT_S);
+ req->com_wl.low = cpu_to_le16(s_buf->self.low >> HCLGE_BUF_UNIT_S);
req->com_wl.low |=
- cpu_to_le16(HCLGE_PRIV_ENABLE(buf->self.low) <<
+ cpu_to_le16(HCLGE_PRIV_ENABLE(s_buf->self.low) <<
HCLGE_RX_PRIV_EN_B);

ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
+ if (ret)
dev_err(&hdev->pdev->dev,
- "common waterline config cmd failed %d\n", ret);
- return ret;
- }
+ "Set rx private waterline fail, status %d\n", ret);

- return 0;
+ return ret;
}

int hclge_buffer_alloc(struct hclge_dev *hdev)
{
+ struct hclge_pkt_buf_alloc *pkt_buf;
u32 tx_buf_size = HCLGE_DEFAULT_TX_BUF;
int ret;

- hdev->priv_buf = devm_kmalloc_array(&hdev->pdev->dev, HCLGE_MAX_TC_NUM,
- sizeof(struct hclge_priv_buf),
- GFP_KERNEL | __GFP_ZERO);
- if (!hdev->priv_buf)
+ pkt_buf = kzalloc(sizeof(*pkt_buf), GFP_KERNEL);
+ if (!pkt_buf)
return -ENOMEM;

- ret = hclge_tx_buffer_alloc(hdev, tx_buf_size);
+ ret = hclge_buffer_calc(hdev, pkt_buf, tx_buf_size);
if (ret) {
dev_err(&hdev->pdev->dev,
- "could not alloc tx buffers %d\n", ret);
- return ret;
+ "Calculate Rx buffer error ret =%d.\n", ret);
+ goto err;
}

- ret = hclge_rx_buffer_calc(hdev, tx_buf_size);
+ ret = hclge_tx_buffer_alloc(hdev, pkt_buf);
if (ret) {
dev_err(&hdev->pdev->dev,
- "could not calc rx priv buffer size for all TCs %d\n",
- ret);
- return ret;
+ "Allocate Tx buffer fail, ret =%d\n", ret);
+ goto err;
}

- ret = hclge_rx_priv_buf_alloc(hdev);
+ ret = hclge_rx_buf_alloc(hdev, pkt_buf);
if (ret) {
- dev_err(&hdev->pdev->dev, "could not alloc rx priv buffer %d\n",
- ret);
- return ret;
+ dev_err(&hdev->pdev->dev,
+ "Private buffer config fail, ret = %d\n", ret);
+ goto err;
}

if (hnae_get_bit(hdev->ae_dev->flag,
HNAE_DEV_SUPPORT_DCB_B)) {
- ret = hclge_rx_priv_wl_config(hdev);
+ ret = hclge_rx_priv_wl_config(hdev, pkt_buf);
if (ret) {
dev_err(&hdev->pdev->dev,
- "could not configure rx private waterline %d\n",
+ "Private waterline config fail, ret = %d\n",
ret);
- return ret;
+ goto err;
}

- ret = hclge_common_thrd_config(hdev);
+ ret = hclge_common_thrd_config(hdev, pkt_buf);
if (ret) {
dev_err(&hdev->pdev->dev,
- "could not configure common threshold %d\n",
+ "Common threshold config fail, ret = %d\n",
ret);
- return ret;
+ goto err;
}
}

- ret = hclge_common_wl_config(hdev);
+ ret = hclge_common_wl_config(hdev, pkt_buf);
if (ret) {
dev_err(&hdev->pdev->dev,
- "could not configure common waterline %d\n", ret);
- return ret;
+ "Common waterline config fail, ret = %d\n", ret);
}

- return 0;
+err:
+ kfree(pkt_buf);
+ return ret;
}

static int hclge_init_roce_base_info(struct hclge_vport *vport)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 0905ae5..4bdec1f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -430,6 +430,9 @@ struct hclge_dev {
#define HCLGE_FLAG_TC_BASE_SCH_MODE 1
#define HCLGE_FLAG_VNET_BASE_SCH_MODE 2
u8 tx_sch_mode;
+ u8 pg_cap;
+ u8 tc_cap;
+ u8 pfc_cap;

u8 default_up;
struct hclge_tm_info tm_info;
@@ -472,8 +475,6 @@ struct hclge_dev {

u32 pkt_buf_size; /* Total pf buf size for tx/rx */
u32 mps; /* Max packet size */
- struct hclge_priv_buf *priv_buf;
- struct hclge_shared_buf s_buf;

enum hclge_mta_dmac_sel_type mta_mac_sel_type;
bool enable_mta; /* Mutilcast filter enable */
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
index 1c577d2..59b0cfb 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
@@ -364,7 +364,8 @@ static int hclge_tm_qs_schd_mode_cfg(struct hclge_dev *hdev, u16 qs_id)
return hclge_cmd_send(&hdev->hw, &desc, 1);
}

-static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev, u8 tc)
+static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev,
+ u8 tc, u8 grp_id, u32 bit_map)
{
struct hclge_bp_to_qs_map_cmd *bp_to_qs_map_cmd;
struct hclge_desc desc;
@@ -375,9 +376,8 @@ static int hclge_tm_qs_bp_cfg(struct hclge_dev *hdev, u8 tc)
bp_to_qs_map_cmd = (struct hclge_bp_to_qs_map_cmd *)desc.data;

bp_to_qs_map_cmd->tc_id = tc;
-
- /* Qset and tc is one by one mapping */
- bp_to_qs_map_cmd->qs_bit_map = cpu_to_le32(1 << tc);
+ bp_to_qs_map_cmd->qs_group_id = grp_id;
+ bp_to_qs_map_cmd->qs_bit_map = cpu_to_le32(bit_map);

return hclge_cmd_send(&hdev->hw, &desc, 1);
}
@@ -836,6 +836,10 @@ static int hclge_tm_map_cfg(struct hclge_dev *hdev)
{
int ret;

+ ret = hclge_up_to_tc_map(hdev);
+ if (ret)
+ return ret;
+
ret = hclge_tm_pg_to_pri_map(hdev);
if (ret)
return ret;
@@ -966,23 +970,85 @@ static int hclge_tm_schd_setup_hw(struct hclge_dev *hdev)
return hclge_tm_schd_mode_hw(hdev);
}

+/* Each TC has 1024 queue sets for back pressure, divided into 32
+ * groups of 32 queue sets each, so one group can be represented
+ * by a u32 bitmap.
+ */
+static int hclge_bp_setup_hw(struct hclge_dev *hdev, u8 tc)
+{
+ struct hclge_vport *vport = hdev->vport;
+ u32 i, k, qs_bitmap;
+ int ret;
+
+ for (i = 0; i < HCLGE_BP_GRP_NUM; i++) {
+ qs_bitmap = 0;
+
+ for (k = 0; k < hdev->num_alloc_vport; k++) {
+ u16 qs_id = vport->qs_offset + tc;
+ u8 grp, sub_grp;
+
+ grp = hnae_get_field(qs_id, HCLGE_BP_GRP_ID_M,
+ HCLGE_BP_GRP_ID_S);
+ sub_grp = hnae_get_field(qs_id, HCLGE_BP_SUB_GRP_ID_M,
+ HCLGE_BP_SUB_GRP_ID_S);
+ if (i == grp)
+ qs_bitmap |= (1 << sub_grp);
+
+ vport++;
+ }
+
+ ret = hclge_tm_qs_bp_cfg(hdev, tc, i, qs_bitmap);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
int hclge_pause_setup_hw(struct hclge_dev *hdev)
{
- bool en = hdev->tm_info.fc_mode != HCLGE_FC_PFC;
int ret;
u8 i;

- ret = hclge_mac_pause_en_cfg(hdev, en, en);
- if (ret)
+ if (hdev->tm_info.fc_mode != HCLGE_FC_PFC) {
+ bool tx_en, rx_en;
+
+ switch (hdev->tm_info.fc_mode) {
+ case HCLGE_FC_NONE:
+ tx_en = false;
+ rx_en = false;
+ break;
+ case HCLGE_FC_RX_PAUSE:
+ tx_en = false;
+ rx_en = true;
+ break;
+ case HCLGE_FC_TX_PAUSE:
+ tx_en = true;
+ rx_en = false;
+ break;
+ case HCLGE_FC_FULL:
+ tx_en = true;
+ rx_en = true;
+ break;
+ default:
+ tx_en = true;
+ rx_en = true;
+ }
+ ret = hclge_mac_pause_en_cfg(hdev, tx_en, rx_en);
return ret;
+ }
+
+ /* Only DCB-supported port supports qset back pressure setting */
+ if (!hnae_get_bit(hdev->ae_dev->flag, HNAE_DEV_SUPPORT_DCB_B))
+ return 0;

for (i = 0; i < hdev->tm_info.num_tc; i++) {
- ret = hclge_tm_qs_bp_cfg(hdev, i);
+ ret = hclge_bp_setup_hw(hdev, i);
if (ret)
return ret;
}

- return hclge_up_to_tc_map(hdev);
+ return 0;
}

int hclge_tm_init_hw(struct hclge_dev *hdev)
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
index 7e67337..dbaa3b5 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.h
@@ -86,6 +86,15 @@ struct hclge_pg_shapping_cmd {
__le32 pg_shapping_para;
};

+struct hclge_port_shapping_cmd {
+ __le32 port_shapping_para;
+};
+
+#define HCLGE_BP_GRP_NUM 32
+#define HCLGE_BP_SUB_GRP_ID_S 0
+#define HCLGE_BP_SUB_GRP_ID_M GENMASK(4, 0)
+#define HCLGE_BP_GRP_ID_S 5
+#define HCLGE_BP_GRP_ID_M GENMASK(9, 5)
struct hclge_bp_to_qs_map_cmd {
u8 tc_id;
u8 rsvd[2];
--
1.9.1

2017-08-31 13:12:47

by Lipeng

Subject: [PATCH net-next 2/8] net: hns3: update ring and vector map command

Add the INT_GL index and the VF id to the vector configuration when
binding rings to a vector. INT_GL means Interrupt Gap Limiting. Vector
ids start from 0 within each VF, so the bind command must specify the
VF id.
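
The resulting field layout can be exercised with a stand-alone sketch
(masks copied from the hclge_cmd.h hunk below; the interrupt-type enum
value used here is an assumption):

#include <stdint.h>
#include <stdio.h>

/* tqp_type_and_id layout after this patch: bits 0..1 interrupt type,
 * bits 2..12 tqp id, bits 13..14 INT_GL index. */
#define INT_TYPE_S   0
#define INT_TYPE_M   0x3
#define TQP_ID_S     2
#define TQP_ID_M     (0x7ff << TQP_ID_S)
#define INT_GL_IDX_S 13
#define INT_GL_IDX_M (0x3 << INT_GL_IDX_S)

static uint16_t encode(unsigned int type, unsigned int tqp, unsigned int gl)
{
	return (uint16_t)(((type << INT_TYPE_S) & INT_TYPE_M) |
			  ((tqp << TQP_ID_S) & TQP_ID_M) |
			  ((gl << INT_GL_IDX_S) & INT_GL_IDX_M));
}

int main(void)
{
	/* rx ring on tqp 5, rx GL register set (type value 1 assumed) */
	printf("0x%04x\n", encode(1, 5, 0));
	return 0;
}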

Signed-off-by: Lipeng <[email protected]>
Signed-off-by: Mingguang Qu <[email protected]>
---
drivers/net/ethernet/hisilicon/hns3/hnae3.h | 4 +
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 8 +-
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 105 ++++++++-------------
.../net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c | 9 ++
4 files changed, 60 insertions(+), 66 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index e23e028..3617372 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -108,11 +108,15 @@ struct hnae3_vector_info {
#define HNAE3_RING_TYPE_B 0
#define HNAE3_RING_TYPE_TX 0
#define HNAE3_RING_TYPE_RX 1
+#define HNAE3_RING_GL_IDX_B 0
+#define HNAE3_RING_GL_RX 0
+#define HNAE3_RING_GL_TX 1

struct hnae3_ring_chain_node {
struct hnae3_ring_chain_node *next;
u32 tqp_index;
u32 flag;
+ u32 int_gl_idx;
};

#define HNAE3_IS_TX_RING(node) \
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index 91ae013..c2b613b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -238,7 +238,7 @@ struct hclge_tqp_map {
u8 rsv[18];
};

-#define HCLGE_VECTOR_ELEMENTS_PER_CMD 11
+#define HCLGE_VECTOR_ELEMENTS_PER_CMD 10

enum hclge_int_type {
HCLGE_INT_TX,
@@ -252,8 +252,12 @@ struct hclge_ctrl_vector_chain {
#define HCLGE_INT_TYPE_S 0
#define HCLGE_INT_TYPE_M 0x3
#define HCLGE_TQP_ID_S 2
-#define HCLGE_TQP_ID_M (0x3fff << HCLGE_TQP_ID_S)
+#define HCLGE_TQP_ID_M (0x7ff << HCLGE_TQP_ID_S)
+#define HCLGE_INT_GL_IDX_S 13
+#define HCLGE_INT_GL_IDX_M (0x3 << HCLGE_INT_GL_IDX_S)
__le16 tqp_type_and_id[HCLGE_VECTOR_ELEMENTS_PER_CMD];
+ u8 vfid;
+ u8 rsv;
};

#define HCLGE_TC_NUM 8
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index acc4016..3a8cb40 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -2346,6 +2346,13 @@ static int hclge_get_vector(struct hnae3_handle *handle, u16 vector_num,
return alloc;
}

+static void hclge_free_vector(struct hclge_dev *hdev, int vector_id)
+{
+ hdev->vector_status[vector_id] = HCLGE_INVALID_VPORT;
+ hdev->num_msi_left += 1;
+ hdev->num_msi_used -= 1;
+}
+
static int hclge_get_vector_index(struct hclge_dev *hdev, int vector)
{
int i;
@@ -2672,19 +2679,21 @@ static int hclge_rss_init_hw(struct hclge_dev *hdev)
return ret;
}

-int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id,
- struct hnae3_ring_chain_node *ring_chain)
+int hclge_bind_ring_with_vector(struct hclge_vport *vport,
+ int vector_id, bool en,
+ struct hnae3_ring_chain_node *ring_chain)
{
struct hclge_dev *hdev = vport->back;
- struct hclge_ctrl_vector_chain *req;
struct hnae3_ring_chain_node *node;
struct hclge_desc desc;
- int ret;
+ struct hclge_ctrl_vector_chain *req
+ = (struct hclge_ctrl_vector_chain *)desc.data;
+ enum hclge_cmd_status status;
+ enum hclge_opcode_type op;
int i;

- hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_ADD_RING_TO_VECTOR, false);
-
- req = (struct hclge_ctrl_vector_chain *)desc.data;
+ op = en ? HCLGE_OPC_ADD_RING_TO_VECTOR : HCLGE_OPC_DEL_RING_TO_VECTOR;
+ hclge_cmd_setup_basic_desc(&desc, op, false);
req->int_vector_id = vector_id;

i = 0;
@@ -2694,17 +2703,21 @@ int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id,
hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
hnae_set_field(req->tqp_type_and_id[i], HCLGE_TQP_ID_M,
HCLGE_TQP_ID_S, node->tqp_index);
+ hnae_set_field(req->tqp_type_and_id[i], HCLGE_INT_GL_IDX_M,
+ HCLGE_INT_GL_IDX_S,
+ hnae_get_bit(node->int_gl_idx,
+ HNAE3_RING_GL_IDX_B));
req->tqp_type_and_id[i] = cpu_to_le16(req->tqp_type_and_id[i]);
-
if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
+ req->vfid = vport->vport_id;

- ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (status) {
dev_err(&hdev->pdev->dev,
"Map TQP fail, status is %d.\n",
- ret);
- return ret;
+ status);
+ return -EIO;
}
i = 0;

@@ -2717,12 +2730,12 @@ int hclge_map_vport_ring_to_vector(struct hclge_vport *vport, int vector_id,

if (i > 0) {
req->int_cause_num = i;
-
- ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
+ req->vfid = vport->vport_id;
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (status) {
dev_err(&hdev->pdev->dev,
- "Map TQP fail, status is %d.\n", ret);
- return ret;
+ "Map TQP fail, status is %d.\n", status);
+ return -EIO;
}
}

@@ -2744,7 +2757,7 @@ int hclge_map_handle_ring_to_vector(struct hnae3_handle *handle,
return vector_id;
}

- return hclge_map_vport_ring_to_vector(vport, vector_id, ring_chain);
+ return hclge_bind_ring_with_vector(vport, vector_id, true, ring_chain);
}

static int hclge_unmap_ring_from_vector(
@@ -2753,11 +2766,7 @@ static int hclge_unmap_ring_from_vector(
{
struct hclge_vport *vport = hclge_get_vport(handle);
struct hclge_dev *hdev = vport->back;
- struct hclge_ctrl_vector_chain *req;
- struct hnae3_ring_chain_node *node;
- struct hclge_desc desc;
- int i, vector_id;
- int ret;
+ int vector_id, ret;

vector_id = hclge_get_vector_index(hdev, vector);
if (vector_id < 0) {
@@ -2766,49 +2775,17 @@ static int hclge_unmap_ring_from_vector(
return vector_id;
}

- hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_DEL_RING_TO_VECTOR, false);
-
- req = (struct hclge_ctrl_vector_chain *)desc.data;
- req->int_vector_id = vector_id;
-
- i = 0;
- for (node = ring_chain; node; node = node->next) {
- hnae_set_field(req->tqp_type_and_id[i], HCLGE_INT_TYPE_M,
- HCLGE_INT_TYPE_S,
- hnae_get_bit(node->flag, HNAE3_RING_TYPE_B));
- hnae_set_field(req->tqp_type_and_id[i], HCLGE_TQP_ID_M,
- HCLGE_TQP_ID_S, node->tqp_index);
-
- req->tqp_type_and_id[i] = cpu_to_le16(req->tqp_type_and_id[i]);
-
- if (++i >= HCLGE_VECTOR_ELEMENTS_PER_CMD) {
- req->int_cause_num = HCLGE_VECTOR_ELEMENTS_PER_CMD;
-
- ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "Unmap TQP fail, status is %d.\n",
- ret);
- return ret;
- }
- i = 0;
- hclge_cmd_setup_basic_desc(&desc,
- HCLGE_OPC_ADD_RING_TO_VECTOR,
- false);
- req->int_vector_id = vector_id;
- }
+ ret = hclge_bind_ring_with_vector(vport, vector_id, false, ring_chain);
+ if (ret) {
+ dev_err(&handle->pdev->dev,
+ "Unmap ring from vector fail. vectorid=%d, ret =%d\n",
+ vector_id,
+ ret);
+ return ret;
}

- if (i > 0) {
- req->int_cause_num = i;
-
- ret = hclge_cmd_send(&hdev->hw, &desc, 1);
- if (ret) {
- dev_err(&hdev->pdev->dev,
- "Unmap TQP fail, status is %d.\n", ret);
- return ret;
- }
- }
+ /* Free this MSIX or MSI vector */
+ hclge_free_vector(hdev, vector_id);

return 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
index 1c3e294..2e3c287 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
@@ -2276,6 +2276,8 @@ static int hns3_get_vector_ring_chain(struct hns3_enet_tqp_vector *tqp_vector,
cur_chain->tqp_index = tx_ring->tqp->tqp_index;
hnae_set_bit(cur_chain->flag, HNAE3_RING_TYPE_B,
HNAE3_RING_TYPE_TX);
+ hnae_set_bit(cur_chain->int_gl_idx, HNAE3_RING_GL_IDX_B,
+ HNAE3_RING_GL_TX);

cur_chain->next = NULL;

@@ -2291,6 +2293,8 @@ static int hns3_get_vector_ring_chain(struct hns3_enet_tqp_vector *tqp_vector,
chain->tqp_index = tx_ring->tqp->tqp_index;
hnae_set_bit(chain->flag, HNAE3_RING_TYPE_B,
HNAE3_RING_TYPE_TX);
+ hnae_set_bit(chain->int_gl_idx, HNAE3_RING_GL_IDX_B,
+ HNAE3_RING_GL_TX);

cur_chain = chain;
}
@@ -2302,6 +2306,8 @@ static int hns3_get_vector_ring_chain(struct hns3_enet_tqp_vector *tqp_vector,
cur_chain->tqp_index = rx_ring->tqp->tqp_index;
hnae_set_bit(cur_chain->flag, HNAE3_RING_TYPE_B,
HNAE3_RING_TYPE_RX);
+ hnae_set_bit(cur_chain->int_gl_idx, HNAE3_RING_GL_IDX_B,
+ HNAE3_RING_GL_RX);

rx_ring = rx_ring->next;
}
@@ -2315,6 +2321,9 @@ static int hns3_get_vector_ring_chain(struct hns3_enet_tqp_vector *tqp_vector,
chain->tqp_index = rx_ring->tqp->tqp_index;
hnae_set_bit(chain->flag, HNAE3_RING_TYPE_B,
HNAE3_RING_TYPE_RX);
+ hnae_set_bit(chain->int_gl_idx, HNAE3_RING_GL_IDX_B,
+ HNAE3_RING_GL_RX);
+
cur_chain = chain;

rx_ring = rx_ring->next;
--
1.9.1

2017-08-31 13:13:11

by Lipeng

Subject: [PATCH net-next 4/8] net: hns3: set default vlan id to PF

When a packet carries no VLAN id, hardware treats the VLAN id as 0 and
looks it up in the mac_vlan table. This patch sets the default VLAN id
of the PF to 0.

Signed-off-by: Mingguang Qu <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 5d49856..7374053 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3698,6 +3698,7 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev)
{
#define HCLGE_VLAN_TYPE_VF_TABLE 0
#define HCLGE_VLAN_TYPE_PORT_TABLE 1
+ struct hnae3_handle *handle;
int ret;

ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_VLAN_TYPE_VF_TABLE,
@@ -3707,6 +3708,8 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev)

ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_VLAN_TYPE_PORT_TABLE,
true);
+ handle = &hdev->vport[0].nic;
+ ret = hclge_set_port_vlan_filter(handle, htons(ETH_P_8021Q), 0, false);

return ret;
}
--
1.9.1

2017-08-31 13:13:32

by Lipeng

Subject: [PATCH net-next 7/8] net: hns3: add vlan filter config of Ports

Configure the self-defined vlan_type as the TPID (0x8100) for VLAN
identification. When a normal port initializes its VLAN configuration,
set the default VLAN id to 0.

Signed-off-by: Mingguang Qu <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 12be24f..d0a30f5 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3308,6 +3308,7 @@ static int hclge_add_mac_vlan_tbl(struct hclge_vport *vport,
mc_desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
hclge_cmd_reuse_desc(&mc_desc[2], false);
mc_desc[2].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_NEXT);
+
memcpy(mc_desc[0].data, req,
sizeof(struct hclge_mac_vlan_tbl_entry));
ret = hclge_cmd_send(&hdev->hw, mc_desc, 3);
--
1.9.1

2017-08-31 13:13:33

by Lipeng

Subject: [PATCH net-next 3/8] net: hns3: set default mac vlan mask

Add the mask configuration of the MAC_VLAN table. Command
HCLGE_OPC_MAC_VLAN_MASK_SET is used to add/read the mask
configuration of the MAC_VLAN (u/m vlan) table. Set default mask
as {0x00, 0x00, 0x00, 0x00, 0x00, 0x00} means that all bits of
mac_vlan are not masked.
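
The match semantics implied by the commit message can be sketched as
follows (a 0 mask bit means that bit takes part in the lookup; this is
an illustration of the semantics, not the hardware algorithm):

#include <stdint.h>
#include <stdio.h>

/* Entry bits whose mask bit is 0 must match the key exactly;
 * mask bits set to 1 would exclude those bits from the compare. */
static int mac_match(const uint8_t *key, const uint8_t *entry,
		     const uint8_t *mask)
{
	int i;

	for (i = 0; i < 6; i++)
		if ((key[i] ^ entry[i]) & (uint8_t)~mask[i])
			return 0;
	return 1;
}

int main(void)
{
	uint8_t mac[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	uint8_t mask[6] = { 0 }; /* default: no bits masked */

	printf("exact match: %d\n", mac_match(mac, mac, mask));
	return 0;
}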

Signed-off-by: Mingguang Qu <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 9 +++++
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 43 +++++++++++++++++++++-
2 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index c2b613b..dd8e513 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -186,6 +186,7 @@ enum hclge_opcode_type {
HCLGE_OPC_MAC_VLAN_INSERT = 0x1003,
HCLGE_OPC_MAC_ETHTYPE_ADD = 0x1010,
HCLGE_OPC_MAC_ETHTYPE_REMOVE = 0x1011,
+ HCLGE_OPC_MAC_VLAN_MASK_SET = 0x1012,

/* Multicast linear table cmd */
HCLGE_OPC_MTA_MAC_MODE_CFG = 0x1020,
@@ -575,6 +576,14 @@ struct hclge_mac_vlan_tbl_entry {
u8 rsv2[6];
};

+#define HCLGE_VLAN_MASK_EN_B 0x0
+struct hclge_mac_vlan_mask_entry {
+ u8 rsv0[2];
+ u8 vlan_mask;
+ u8 rsv1;
+ u8 mac_mask[6];
+ u8 rsv2[14];
+};
#define HCLGE_CFG_MTA_MAC_SEL_S 0x0
#define HCLGE_CFG_MTA_MAC_SEL_M (0x3 << HCLGE_CFG_MTA_MAC_SEL_S)
#define HCLGE_CFG_MTA_MAC_EN_B 0x7
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 3a8cb40..5d49856 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -2089,8 +2089,37 @@ static int hclge_get_autoneg(struct hnae3_handle *handle)
return hdev->hw.mac.autoneg;
}

+static int hclge_set_default_mac_vlan_mask(struct hclge_dev *hdev,
+ bool mask_vlan,
+ u8 *mac_mask)
+{
+ struct hclge_mac_vlan_mask_entry *req;
+ enum hclge_cmd_status status;
+ struct hclge_desc desc;
+ int i;
+
+ req = (struct hclge_mac_vlan_mask_entry *)desc.data;
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MAC_VLAN_MASK_SET, false);
+
+ hnae_set_bit(req->vlan_mask, HCLGE_VLAN_MASK_EN_B,
+ mask_vlan);
+ for (i = 0; i < ETH_ALEN; i++)
+ req->mac_mask[i] = mac_mask[i];
+
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (status) {
+ dev_err(&hdev->pdev->dev,
+ "Config mac_vlan_mask failed for cmd_send, ret =%d.\n",
+ status);
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int hclge_mac_init(struct hclge_dev *hdev)
{
+ u8 mac_mask[ETH_ALEN] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
struct hclge_mac *mac = &hdev->hw.mac;
int ret;

@@ -2124,7 +2153,19 @@ static int hclge_mac_init(struct hclge_dev *hdev)
return ret;
}

- return hclge_cfg_func_mta_filter(hdev, 0, hdev->accept_mta_mc);
+ ret = hclge_cfg_func_mta_filter(hdev, 0, hdev->accept_mta_mc);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "set mta filter mode fail ret=%d\n", ret);
+ return ret;
+ }
+
+ ret = hclge_set_default_mac_vlan_mask(hdev, true, mac_mask);
+ if (ret)
+ dev_err(&hdev->pdev->dev,
+ "set default mac_vlan_mask fail ret=%d\n", ret);
+
+ return ret;
}

static void hclge_task_schedule(struct hclge_dev *hdev)
--
1.9.1

2017-08-31 13:14:52

by Lipeng

Subject: [PATCH net-next 5/8] net: hns3: set the VLAN Ethernet type to HW

This patch sets the VLAN Ethernet type (0x8100) to HW. With this
configuration, HW can identify VLAN packets.
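
For reference, the configured value is the TPID that precedes the VLAN
TCI in a tagged frame; a minimal sketch of the standard 802.1Q layout
(not driver code):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

/* An 802.1Q tag: the TPID (0x8100) occupies the EtherType position,
 * followed by the 16-bit TCI holding PCP/DEI/VID. Hardware compares
 * the configured type against this field to recognize VLAN packets. */
struct vlan_tag {
	uint16_t tpid; /* network byte order */
	uint16_t tci;
};

int main(void)
{
	struct vlan_tag tag = { htons(0x8100), htons((3 << 13) | 100) };
	unsigned int vid = ntohs(tag.tci) & 0xfffu;

	printf("tpid=0x%04x vid=%u\n", ntohs(tag.tpid), vid);
	return 0;
}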

Signed-off-by: Mingguang Qu <[email protected]>
Signed-off-by: Lipeng <[email protected]>
---
.../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 13 +++++
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 58 +++++++++++++++++++++-
.../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 11 ++++
3 files changed, 80 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index dd8e513..b841df1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -655,6 +655,19 @@ struct hclge_vlan_filter_vf_cfg {
u8 vf_bitmap[16];
};

+struct hclge_tx_vlan_type_cfg {
+ u16 ot_vlan_type;
+ u16 in_vlan_type;
+ u8 rsv[20];
+};
+
+struct hclge_rx_vlan_type_cfg {
+ u16 ot_fst_vlan_type;
+ u16 ot_sec_vlan_type;
+ u16 in_fst_vlan_type;
+ u16 in_sec_vlan_type;
+ u8 rsv[16];
+};
struct hclge_cfg_com_tqp_queue {
__le16 tqp_id;
__le16 stream_id;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 7374053..f2ea88f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -3694,10 +3694,52 @@ static int hclge_set_vf_vlan_filter(struct hnae3_handle *handle, int vfid,
return hclge_set_vf_vlan_common(hdev, vfid, false, vlan, qos, proto);
}

+static int hclge_set_vlan_protocol_type(struct hclge_dev *hdev)
+{
+ struct hclge_desc desc;
+ struct hclge_rx_vlan_type_cfg *rx_req =
+ (struct hclge_rx_vlan_type_cfg *)&desc.data[0];
+ struct hclge_tx_vlan_type_cfg *tx_req =
+ (struct hclge_tx_vlan_type_cfg *)&desc.data[0];
+ enum hclge_cmd_status status;
+
+ hclge_cmd_setup_basic_desc(&desc,
+ HCLGE_OPC_MAC_VLAN_TYPE_ID, false);
+
+ rx_req->ot_fst_vlan_type = hdev->vlan_type_cfg.rx_ot_fst_vlan_type;
+ rx_req->ot_sec_vlan_type = hdev->vlan_type_cfg.rx_ot_sec_vlan_type;
+ rx_req->in_fst_vlan_type = hdev->vlan_type_cfg.rx_in_fst_vlan_type;
+ rx_req->in_sec_vlan_type = hdev->vlan_type_cfg.rx_in_sec_vlan_type;
+
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (status) {
+ dev_err(&hdev->pdev->dev,
+ "Send rxvlan protocol type command fail, ret =%d.\n",
+ status);
+ return -EIO;
+ }
+
+ hclge_cmd_setup_basic_desc(&desc,
+ HCLGE_OPC_MAC_VLAN_INSERT, false);
+
+ tx_req->ot_vlan_type = hdev->vlan_type_cfg.tx_ot_vlan_type;
+ tx_req->in_vlan_type = hdev->vlan_type_cfg.tx_in_vlan_type;
+
+ status = hclge_cmd_send(&hdev->hw, &desc, 1);
+ if (status) {
+ dev_err(&hdev->pdev->dev,
+ "Send txvlan protocol type command fail, ret =%d.\n",
+ status);
+ return -EIO;
+ }
+ return 0;
+}
+
static int hclge_init_vlan_config(struct hclge_dev *hdev)
{
-#define HCLGE_VLAN_TYPE_VF_TABLE 0
-#define HCLGE_VLAN_TYPE_PORT_TABLE 1
+#define HCLGE_VLAN_TYPE_VF_TABLE 0
+#define HCLGE_VLAN_TYPE_PORT_TABLE 1
+#define HCLGE_DEF_VLAN_TYPE 0x8100
struct hnae3_handle *handle;
int ret;

@@ -3708,6 +3750,18 @@ static int hclge_init_vlan_config(struct hclge_dev *hdev)

ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_VLAN_TYPE_PORT_TABLE,
true);
+
+ hdev->vlan_type_cfg.rx_in_fst_vlan_type = HCLGE_DEF_VLAN_TYPE;
+ hdev->vlan_type_cfg.rx_in_sec_vlan_type = HCLGE_DEF_VLAN_TYPE;
+ hdev->vlan_type_cfg.rx_ot_fst_vlan_type = HCLGE_DEF_VLAN_TYPE;
+ hdev->vlan_type_cfg.rx_ot_sec_vlan_type = HCLGE_DEF_VLAN_TYPE;
+ hdev->vlan_type_cfg.tx_ot_vlan_type = HCLGE_DEF_VLAN_TYPE;
+ hdev->vlan_type_cfg.tx_in_vlan_type = HCLGE_DEF_VLAN_TYPE;
+
+ ret = hclge_set_vlan_protocol_type(hdev);
+ if (ret)
+ return ret;
+
handle = &hdev->vport[0].nic;
ret = hclge_set_port_vlan_filter(handle, htons(ETH_P_8021Q), 0, false);

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index edb10ad..0905ae5 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -388,6 +388,15 @@ struct hclge_hw_stats {
struct hclge_32_bit_stats all_32_bit_stats;
};

+struct hclge_vlan_type_cfg {
+ u16 rx_ot_fst_vlan_type;
+ u16 rx_ot_sec_vlan_type;
+ u16 rx_in_fst_vlan_type;
+ u16 rx_in_sec_vlan_type;
+ u16 tx_ot_vlan_type;
+ u16 tx_in_vlan_type;
+};
+
struct hclge_dev {
struct pci_dev *pdev;
struct hnae3_ae_dev *ae_dev;
@@ -469,6 +478,8 @@ struct hclge_dev {
enum hclge_mta_dmac_sel_type mta_mac_sel_type;
bool enable_mta; /* Mutilcast filter enable */
bool accept_mta_mc; /* Whether accept mta filter multicast */
+
+ struct hclge_vlan_type_cfg vlan_type_cfg;
};

struct hclge_vport {
--
1.9.1

2017-08-31 21:38:32

by David Miller

Subject: Re: [PATCH net-next 7/8] net: hns3: add vlan filter config of Ports

From: Lipeng <[email protected]>
Date: Thu, 31 Aug 2017 21:39:08 +0800

> Configure the self-defined vlan_type as the TPID (0x8100) for VLAN
> identification. When a normal port initializes its VLAN configuration,
> set the default VLAN id to 0.
>
> Signed-off-by: Mingguang Qu <[email protected]>
> Signed-off-by: Lipeng <[email protected]>

No, that's not what this patch is doing.

> @@ -3308,6 +3308,7 @@ static int hclge_add_mac_vlan_tbl(struct hclge_vport *vport,
> mc_desc[1].flag |= cpu_to_le16(HCLGE_CMD_FLAG_NEXT);
> hclge_cmd_reuse_desc(&mc_desc[2], false);
> mc_desc[2].flag &= cpu_to_le16(~HCLGE_CMD_FLAG_NEXT);
> +
> memcpy(mc_desc[0].data, req,
> sizeof(struct hclge_mac_vlan_tbl_entry));
> ret = hclge_cmd_send(&hdev->hw, mc_desc, 3);

All it does is add an empty line.