From: Salil Mehta
To:
CC: Jian Shen
Subject: [PATCH net-next 4/7] net: hns3: Add support for rule query of flow director
Date: Mon, 1 Oct 2018 12:46:44 +0100
Message-ID: <20181001114647.15504-5-salil.mehta@huawei.com>
In-Reply-To: <20181001114647.15504-1-salil.mehta@huawei.com>
References: <20181001114647.15504-1-salil.mehta@huawei.com>

From: Jian Shen

This patch adds support for querying rule number and rule details by ethtool
commands.
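For illustration only, not part of the driver change: below is a minimal
userspace sketch of how the two new query paths are reached through the
SIOCETHTOOL ioctl, which is what "ethtool -n <dev>" does under the hood.
The interface name "eth0" and the bare error handling are placeholders.

/*
 * Illustrative sketch (not part of this patch): query the flow director
 * rule count (ETHTOOL_GRXCLSRLCNT -> get_fd_rule_cnt) and the rule
 * locations (ETHTOOL_GRXCLSRLALL -> get_fd_all_rules). "eth0" is only
 * an example interface name.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>

int main(void)
{
	struct ethtool_rxnfc cnt_cmd = { .cmd = ETHTOOL_GRXCLSRLCNT };
	struct ethtool_rxnfc *all_cmd;
	struct ifreq ifr = { 0 };
	unsigned int i;
	int fd;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return 1;

	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

	/* Rule count and table size: served by get_fd_rule_cnt. */
	ifr.ifr_data = (void *)&cnt_cmd;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		return 1;
	printf("%u rules, table size %llu\n",
	       cnt_cmd.rule_cnt, (unsigned long long)cnt_cmd.data);

	/* Rule locations: served by get_fd_all_rules. */
	all_cmd = calloc(1, sizeof(*all_cmd) +
			 cnt_cmd.rule_cnt * sizeof(__u32));
	if (!all_cmd)
		return 1;
	all_cmd->cmd = ETHTOOL_GRXCLSRLALL;
	all_cmd->rule_cnt = cnt_cmd.rule_cnt;
	ifr.ifr_data = (void *)all_cmd;
	if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
		for (i = 0; i < all_cmd->rule_cnt; i++)
			printf("rule at location %u\n", all_cmd->rule_locs[i]);

	free(all_cmd);
	close(fd);
	return 0;
}

The details of an individual rule (the get_fd_rule_info path) can then be
read with ETHTOOL_GRXCLSRULE by filling fs.location with one of the
returned locations, i.e. "ethtool -n <dev> rule <loc>".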
Signed-off-by: Jian Shen
Signed-off-by: Peng Li
Signed-off-by: Salil Mehta
---
 drivers/net/ethernet/hisilicon/hns3/hnae3.h        |   6 +
 drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c |  25 ++-
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    | 239 +++++++++++++++++++++
 3 files changed, 264 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index 262bb736..fac84d8 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -420,6 +420,12 @@ struct hnae3_ae_ops {
 			    struct ethtool_rxnfc *cmd);
 	int (*del_fd_entry)(struct hnae3_handle *handle,
 			    struct ethtool_rxnfc *cmd);
+	int (*get_fd_rule_cnt)(struct hnae3_handle *handle,
+			       struct ethtool_rxnfc *cmd);
+	int (*get_fd_rule_info)(struct hnae3_handle *handle,
+				struct ethtool_rxnfc *cmd);
+	int (*get_fd_all_rules)(struct hnae3_handle *handle,
+				struct ethtool_rxnfc *cmd, u32 *rule_locs);
 };
 
 struct hnae3_dcb_ops {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
index 59cbf78..7d79a07 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
@@ -699,20 +699,33 @@ static int hns3_get_rxnfc(struct net_device *netdev,
 {
 	struct hnae3_handle *h = hns3_get_handle(netdev);
 
-	if (!h->ae_algo || !h->ae_algo->ops || !h->ae_algo->ops->get_rss_tuple)
+	if (!h->ae_algo || !h->ae_algo->ops)
 		return -EOPNOTSUPP;
 
 	switch (cmd->cmd) {
 	case ETHTOOL_GRXRINGS:
-		cmd->data = h->kinfo.rss_size;
-		break;
+		cmd->data = h->kinfo.num_tqps;
+		return 0;
 	case ETHTOOL_GRXFH:
-		return h->ae_algo->ops->get_rss_tuple(h, cmd);
+		if (h->ae_algo->ops->get_rss_tuple)
+			return h->ae_algo->ops->get_rss_tuple(h, cmd);
+		return -EOPNOTSUPP;
+	case ETHTOOL_GRXCLSRLCNT:
+		if (h->ae_algo->ops->get_fd_rule_cnt)
+			return h->ae_algo->ops->get_fd_rule_cnt(h, cmd);
+		return -EOPNOTSUPP;
+	case ETHTOOL_GRXCLSRULE:
+		if (h->ae_algo->ops->get_fd_rule_info)
+			return h->ae_algo->ops->get_fd_rule_info(h, cmd);
+		return -EOPNOTSUPP;
+	case ETHTOOL_GRXCLSRLALL:
+		if (h->ae_algo->ops->get_fd_all_rules)
+			return h->ae_algo->ops->get_fd_all_rules(h, cmd,
+								 rule_locs);
+		return -EOPNOTSUPP;
 	default:
 		return -EOPNOTSUPP;
 	}
-
-	return 0;
 }
 
 static int hns3_change_all_ring_bd_num(struct hns3_nic_priv *priv,
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 43f5caa..876c7ca 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -4299,6 +4299,242 @@ static int hclge_del_fd_entry(struct hnae3_handle *handle,
 					     false);
 }
 
+static int hclge_get_fd_rule_cnt(struct hnae3_handle *handle,
+				 struct ethtool_rxnfc *cmd)
+{
+	struct hclge_vport *vport = hclge_get_vport(handle);
+	struct hclge_dev *hdev = vport->back;
+
+	if (!hnae3_dev_fd_supported(hdev))
+		return -EOPNOTSUPP;
+
+	cmd->rule_cnt = hdev->hclge_fd_rule_num;
+	cmd->data = hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1];
+
+	return 0;
+}
+
+static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
+				  struct ethtool_rxnfc *cmd)
+{
+	struct hclge_vport *vport = hclge_get_vport(handle);
+	struct hclge_fd_rule *rule = NULL;
+	struct hclge_dev *hdev = vport->back;
+	struct ethtool_rx_flow_spec *fs;
+	struct hlist_node *node2;
+
+	if (!hnae3_dev_fd_supported(hdev))
+		return -EOPNOTSUPP;
+
+	fs = (struct ethtool_rx_flow_spec *)&cmd->fs;
+
+	hlist_for_each_entry_safe(rule, node2, &hdev->fd_rule_list, rule_node) {
+		if (rule->location >= fs->location)
+			break;
+	}
+
+	if (!rule || fs->location != rule->location)
+		return -ENOENT;
+
+	fs->flow_type = rule->flow_type;
+	switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) {
+	case SCTP_V4_FLOW:
+	case TCP_V4_FLOW:
+	case UDP_V4_FLOW:
+		fs->h_u.tcp_ip4_spec.ip4src =
+				cpu_to_be32(rule->tuples.src_ip[3]);
+		fs->m_u.tcp_ip4_spec.ip4src =
+				rule->unused_tuple & BIT(INNER_SRC_IP) ?
+				0 : cpu_to_be32(rule->tuples_mask.src_ip[3]);
+
+		fs->h_u.tcp_ip4_spec.ip4dst =
+				cpu_to_be32(rule->tuples.dst_ip[3]);
+		fs->m_u.tcp_ip4_spec.ip4dst =
+				rule->unused_tuple & BIT(INNER_DST_IP) ?
+				0 : cpu_to_be32(rule->tuples_mask.dst_ip[3]);
+
+		fs->h_u.tcp_ip4_spec.psrc = cpu_to_be16(rule->tuples.src_port);
+		fs->m_u.tcp_ip4_spec.psrc =
+				rule->unused_tuple & BIT(INNER_SRC_PORT) ?
+				0 : cpu_to_be16(rule->tuples_mask.src_port);
+
+		fs->h_u.tcp_ip4_spec.pdst = cpu_to_be16(rule->tuples.dst_port);
+		fs->m_u.tcp_ip4_spec.pdst =
+				rule->unused_tuple & BIT(INNER_DST_PORT) ?
+				0 : cpu_to_be16(rule->tuples_mask.dst_port);
+
+		fs->h_u.tcp_ip4_spec.tos = rule->tuples.ip_tos;
+		fs->m_u.tcp_ip4_spec.tos =
+				rule->unused_tuple & BIT(INNER_IP_TOS) ?
+				0 : rule->tuples_mask.ip_tos;
+
+		break;
+	case IP_USER_FLOW:
+		fs->h_u.usr_ip4_spec.ip4src =
+				cpu_to_be32(rule->tuples.src_ip[3]);
+		fs->m_u.tcp_ip4_spec.ip4src =
+				rule->unused_tuple & BIT(INNER_SRC_IP) ?
+				0 : cpu_to_be32(rule->tuples_mask.src_ip[3]);
+
+		fs->h_u.usr_ip4_spec.ip4dst =
+				cpu_to_be32(rule->tuples.dst_ip[3]);
+		fs->m_u.usr_ip4_spec.ip4dst =
+				rule->unused_tuple & BIT(INNER_DST_IP) ?
+				0 : cpu_to_be32(rule->tuples_mask.dst_ip[3]);
+
+		fs->h_u.usr_ip4_spec.tos = rule->tuples.ip_tos;
+		fs->m_u.usr_ip4_spec.tos =
+				rule->unused_tuple & BIT(INNER_IP_TOS) ?
+				0 : rule->tuples_mask.ip_tos;
+
+		fs->h_u.usr_ip4_spec.proto = rule->tuples.ip_proto;
+		fs->m_u.usr_ip4_spec.proto =
+				rule->unused_tuple & BIT(INNER_IP_PROTO) ?
+				0 : rule->tuples_mask.ip_proto;
+
+		fs->h_u.usr_ip4_spec.ip_ver = ETH_RX_NFC_IP4;
+
+		break;
+	case SCTP_V6_FLOW:
+	case TCP_V6_FLOW:
+	case UDP_V6_FLOW:
+		cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6src,
+				  rule->tuples.src_ip, 4);
+		if (rule->unused_tuple & BIT(INNER_SRC_IP))
+			memset(fs->m_u.tcp_ip6_spec.ip6src, 0, sizeof(int) * 4);
+		else
+			cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6src,
+					  rule->tuples_mask.src_ip, 4);
+
+		cpu_to_be32_array(fs->h_u.tcp_ip6_spec.ip6dst,
+				  rule->tuples.dst_ip, 4);
+		if (rule->unused_tuple & BIT(INNER_DST_IP))
+			memset(fs->m_u.tcp_ip6_spec.ip6dst, 0, sizeof(int) * 4);
+		else
+			cpu_to_be32_array(fs->m_u.tcp_ip6_spec.ip6dst,
+					  rule->tuples_mask.dst_ip, 4);
+
+		fs->h_u.tcp_ip6_spec.psrc = cpu_to_be16(rule->tuples.src_port);
+		fs->m_u.tcp_ip6_spec.psrc =
+				rule->unused_tuple & BIT(INNER_SRC_PORT) ?
+				0 : cpu_to_be16(rule->tuples_mask.src_port);
+
+		fs->h_u.tcp_ip6_spec.pdst = cpu_to_be16(rule->tuples.dst_port);
+		fs->m_u.tcp_ip6_spec.pdst =
+				rule->unused_tuple & BIT(INNER_DST_PORT) ?
+				0 : cpu_to_be16(rule->tuples_mask.dst_port);
+
+		break;
+	case IPV6_USER_FLOW:
+		cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6src,
+				  rule->tuples.src_ip, 4);
+		if (rule->unused_tuple & BIT(INNER_SRC_IP))
+			memset(fs->m_u.usr_ip6_spec.ip6src, 0, sizeof(int) * 4);
+		else
+			cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6src,
+					  rule->tuples_mask.src_ip, 4);
+
+		cpu_to_be32_array(fs->h_u.usr_ip6_spec.ip6dst,
+				  rule->tuples.dst_ip, 4);
+		if (rule->unused_tuple & BIT(INNER_DST_IP))
+			memset(fs->m_u.usr_ip6_spec.ip6dst, 0, sizeof(int) * 4);
+		else
+			cpu_to_be32_array(fs->m_u.usr_ip6_spec.ip6dst,
+					  rule->tuples_mask.dst_ip, 4);
+
+		fs->h_u.usr_ip6_spec.l4_proto = rule->tuples.ip_proto;
+		fs->m_u.usr_ip6_spec.l4_proto =
+				rule->unused_tuple & BIT(INNER_IP_PROTO) ?
+				0 : rule->tuples_mask.ip_proto;
+
+		break;
+	case ETHER_FLOW:
+		ether_addr_copy(fs->h_u.ether_spec.h_source,
+				rule->tuples.src_mac);
+		if (rule->unused_tuple & BIT(INNER_SRC_MAC))
+			eth_zero_addr(fs->m_u.ether_spec.h_source);
+		else
+			ether_addr_copy(fs->m_u.ether_spec.h_source,
+					rule->tuples_mask.src_mac);
+
+		ether_addr_copy(fs->h_u.ether_spec.h_dest,
+				rule->tuples.dst_mac);
+		if (rule->unused_tuple & BIT(INNER_DST_MAC))
+			eth_zero_addr(fs->m_u.ether_spec.h_dest);
+		else
+			ether_addr_copy(fs->m_u.ether_spec.h_dest,
+					rule->tuples_mask.dst_mac);
+
+		fs->h_u.ether_spec.h_proto =
+				cpu_to_be16(rule->tuples.ether_proto);
+		fs->m_u.ether_spec.h_proto =
+				rule->unused_tuple & BIT(INNER_ETH_TYPE) ?
+				0 : cpu_to_be16(rule->tuples_mask.ether_proto);
+
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	if (fs->flow_type & FLOW_EXT) {
+		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
+		fs->m_ext.vlan_tci =
+			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
+			cpu_to_be16(VLAN_VID_MASK) :
+			cpu_to_be16(rule->tuples_mask.vlan_tag1);
+	}
+
+	if (fs->flow_type & FLOW_MAC_EXT) {
+		ether_addr_copy(fs->h_ext.h_dest, rule->tuples.dst_mac);
+		if (rule->unused_tuple & BIT(INNER_DST_MAC))
+			eth_zero_addr(fs->m_u.ether_spec.h_dest);
+		else
+			ether_addr_copy(fs->m_u.ether_spec.h_dest,
+					rule->tuples_mask.dst_mac);
+	}
+
+	if (rule->action == HCLGE_FD_ACTION_DROP_PACKET) {
+		fs->ring_cookie = RX_CLS_FLOW_DISC;
+	} else {
+		u64 vf_id;
+
+		fs->ring_cookie = rule->queue_id;
+		vf_id = rule->vf_id;
+		vf_id <<= ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+		fs->ring_cookie |= vf_id;
+	}
+
+	return 0;
+}
+
+static int hclge_get_all_rules(struct hnae3_handle *handle,
+			       struct ethtool_rxnfc *cmd, u32 *rule_locs)
+{
+	struct hclge_vport *vport = hclge_get_vport(handle);
+	struct hclge_dev *hdev = vport->back;
+	struct hclge_fd_rule *rule;
+	struct hlist_node *node2;
+	int cnt = 0;
+
+	if (!hnae3_dev_fd_supported(hdev))
+		return -EOPNOTSUPP;
+
+	cmd->data = hdev->fd_cfg.rule_num[HCLGE_FD_STAGE_1];
+
+	hlist_for_each_entry_safe(rule, node2,
+				  &hdev->fd_rule_list, rule_node) {
+		if (cnt == cmd->rule_cnt)
+			return -EMSGSIZE;
+
+		rule_locs[cnt] = rule->location;
+		cnt++;
+	}
+
+	cmd->rule_cnt = cnt;
+
+	return 0;
+}
+
 static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
 {
 	struct hclge_desc desc;
@@ -7038,6 +7274,9 @@ static const struct hnae3_ae_ops hclge_ops = {
 	.get_link_mode = hclge_get_link_mode,
 	.add_fd_entry = hclge_add_fd_entry,
 	.del_fd_entry = hclge_del_fd_entry,
+	.get_fd_rule_cnt = hclge_get_fd_rule_cnt,
+	.get_fd_rule_info = hclge_get_fd_rule_info,
+	.get_fd_all_rules = hclge_get_all_rules,
 };
 
 static struct hnae3_ae_algo ae_algo = {
-- 
2.7.4