From: Peng Li <lipeng321@huawei.com>
To:
Cc:
Subject: [PATCH net-next 11/12] net: hns3: aligning buffer size in SSU to 256 bytes
Date: Tue, 18 Dec 2018 19:37:58 +0800
Message-ID: <1545133079-79605-12-git-send-email-lipeng321@huawei.com>
In-Reply-To: <1545133079-79605-1-git-send-email-lipeng321@huawei.com>
References: <1545133079-79605-1-git-send-email-lipeng321@huawei.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Yunsheng Lin

The hardware expects the buffer size set to the SSU to be aligned to
256 bytes, so this patch aligns the buffer size to 256 bytes using the
roundup or rounddown function. (A standalone sketch of the rounding
follows the patch.)
Signed-off-by: Yunsheng Lin
Signed-off-by: Peng Li <lipeng321@huawei.com>
---
 .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c    | 43 +++++++++++++---------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index c52e903..f847fde 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -26,6 +26,8 @@
 #define HCLGE_STATS_READ(p, offset) (*((u64 *)((u8 *)(p) + (offset))))
 #define HCLGE_MAC_STATS_FIELD_OFF(f) (offsetof(struct hclge_mac_stats, f))
 
+#define HCLGE_BUF_SIZE_UNIT 256
+
 static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps);
 static int hclge_init_vlan_config(struct hclge_dev *hdev);
 static int hclge_reset_ae_dev(struct hnae3_ae_dev *ae_dev);
@@ -693,12 +695,16 @@ static int hclge_query_pf_resource(struct hclge_dev *hdev)
 	else
 		hdev->tx_buf_size = HCLGE_DEFAULT_TX_BUF;
 
+	hdev->tx_buf_size = roundup(hdev->tx_buf_size, HCLGE_BUF_SIZE_UNIT);
+
 	if (req->dv_buf_size)
 		hdev->dv_buf_size =
 			__le16_to_cpu(req->dv_buf_size) << HCLGE_BUF_UNIT_S;
 	else
 		hdev->dv_buf_size = HCLGE_DEFAULT_DV;
 
+	hdev->dv_buf_size = roundup(hdev->dv_buf_size, HCLGE_BUF_SIZE_UNIT);
+
 	if (hnae3_dev_roce_supported(hdev)) {
 		hdev->roce_base_msix_offset =
 		hnae3_get_field(__le16_to_cpu(req->msixcap_localid_ba_rocee),
@@ -1380,48 +1386,50 @@ static bool hclge_is_rx_buf_ok(struct hclge_dev *hdev,
 {
 	u32 shared_buf_min, shared_buf_tc, shared_std;
 	int tc_num, pfc_enable_num;
-	u32 shared_buf;
+	u32 shared_buf, aligned_mps;
 	u32 rx_priv;
 	int i;
 
 	tc_num = hclge_get_tc_num(hdev);
 	pfc_enable_num = hclge_get_pfc_enalbe_num(hdev);
+	aligned_mps = roundup(hdev->mps, HCLGE_BUF_SIZE_UNIT);
 
 	if (hnae3_dev_dcb_supported(hdev))
-		shared_buf_min = 2 * hdev->mps + hdev->dv_buf_size;
+		shared_buf_min = 2 * aligned_mps + hdev->dv_buf_size;
 	else
-		shared_buf_min = hdev->mps + HCLGE_NON_DCB_ADDITIONAL_BUF
+		shared_buf_min = aligned_mps + HCLGE_NON_DCB_ADDITIONAL_BUF
 				+ hdev->dv_buf_size;
 
-	shared_buf_tc = pfc_enable_num * hdev->mps +
-			(tc_num - pfc_enable_num) * hdev->mps / 2 +
-			hdev->mps;
+	shared_buf_tc = pfc_enable_num * aligned_mps +
+			(tc_num - pfc_enable_num) * aligned_mps / 2 +
+			aligned_mps;
 	shared_std = max_t(u32, shared_buf_min, shared_buf_tc);
 
 	rx_priv = hclge_get_rx_priv_buff_alloced(buf_alloc);
 	if (rx_all <= rx_priv + shared_std)
 		return false;
 
-	shared_buf = rx_all - rx_priv;
+	shared_buf = rounddown(rx_all - rx_priv, HCLGE_BUF_SIZE_UNIT);
 	buf_alloc->s_buf.buf_size = shared_buf;
 	if (hnae3_dev_dcb_supported(hdev)) {
 		buf_alloc->s_buf.self.high = shared_buf - hdev->dv_buf_size;
 		buf_alloc->s_buf.self.low = buf_alloc->s_buf.self.high
-			- hdev->mps / 2;
+			- roundup(aligned_mps / 2, HCLGE_BUF_SIZE_UNIT);
 	} else {
-		buf_alloc->s_buf.self.high = hdev->mps +
+		buf_alloc->s_buf.self.high = aligned_mps +
 				HCLGE_NON_DCB_ADDITIONAL_BUF;
-		buf_alloc->s_buf.self.low = hdev->mps / 2;
+		buf_alloc->s_buf.self.low =
+			roundup(aligned_mps / 2, HCLGE_BUF_SIZE_UNIT);
 	}
 
 	for (i = 0; i < HCLGE_MAX_TC_NUM; i++) {
 		if ((hdev->hw_tc_map & BIT(i)) &&
 		    (hdev->tm_info.hw_pfc_map & BIT(i))) {
-			buf_alloc->s_buf.tc_thrd[i].low = hdev->mps;
-			buf_alloc->s_buf.tc_thrd[i].high = 2 * hdev->mps;
+			buf_alloc->s_buf.tc_thrd[i].low = aligned_mps;
+			buf_alloc->s_buf.tc_thrd[i].high = 2 * aligned_mps;
 		} else {
 			buf_alloc->s_buf.tc_thrd[i].low = 0;
-			buf_alloc->s_buf.tc_thrd[i].high = hdev->mps;
+			buf_alloc->s_buf.tc_thrd[i].high = aligned_mps;
 		}
 	}
 
@@ -1461,7 +1469,6 @@ static int hclge_tx_buffer_calc(struct hclge_dev *hdev,
 static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
 				struct hclge_pkt_buf_alloc *buf_alloc)
 {
-#define HCLGE_BUF_SIZE_UNIT 128
 	u32 rx_all = hdev->pkt_buf_size, aligned_mps;
 	int no_pfc_priv_num, pfc_priv_num;
 	struct hclge_priv_buf *priv;
@@ -1487,9 +1494,11 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
 		priv->enable = 1;
 		if (hdev->tm_info.hw_pfc_map & BIT(i)) {
 			priv->wl.low = aligned_mps;
-			priv->wl.high = priv->wl.low + aligned_mps;
+			priv->wl.high =
+				roundup(priv->wl.low + aligned_mps,
+					HCLGE_BUF_SIZE_UNIT);
 			priv->buf_size = priv->wl.high +
-					hdev->dv_buf_size;
+					 hdev->dv_buf_size;
 		} else {
 			priv->wl.low = 0;
 			priv->wl.high = 2 * aligned_mps;
@@ -1524,7 +1533,7 @@ static int hclge_rx_buffer_calc(struct hclge_dev *hdev,
 		priv->enable = 1;
 
 		if (hdev->tm_info.hw_pfc_map & BIT(i)) {
-			priv->wl.low = 128;
+			priv->wl.low = 256;
 			priv->wl.high = priv->wl.low + aligned_mps;
 			priv->buf_size = priv->wl.high + hdev->dv_buf_size;
 		} else {
-- 
1.9.1
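
As a standalone illustration of the 256-byte alignment described in the
changelog, the minimal userspace sketch below mirrors the arithmetic of the
kernel's roundup()/rounddown() helpers. It is not part of the driver, and the
input values (a 1518-byte MPS and a 9000-byte rx_all - rx_priv remainder) are
hypothetical examples.

/*
 * Standalone sketch (not part of this patch): the macros below mirror the
 * arithmetic of the kernel's roundup()/rounddown(), and the input values
 * are hypothetical.
 */
#include <stdio.h>

#define HCLGE_BUF_SIZE_UNIT	256U

#define roundup(x, y)	((((x) + ((y) - 1)) / (y)) * (y))
#define rounddown(x, y)	(((x) / (y)) * (y))

int main(void)
{
	unsigned int mps = 1518;	/* hypothetical maximum packet size */
	unsigned int spare = 9000;	/* hypothetical rx_all - rx_priv */
	unsigned int aligned_mps, shared_buf;

	/* sizes handed to the SSU are rounded up to the next 256-byte unit */
	aligned_mps = roundup(mps, HCLGE_BUF_SIZE_UNIT);

	/* the leftover shared buffer is rounded down so it cannot overshoot */
	shared_buf = rounddown(spare, HCLGE_BUF_SIZE_UNIT);

	printf("aligned_mps = %u\n", aligned_mps);	/* prints 1536 */
	printf("shared_buf  = %u\n", shared_buf);	/* prints 8960 */

	return 0;
}

Rounding hdev->mps up means every watermark programmed into the SSU is a whole
number of 256-byte units, while rounding the leftover shared buffer down keeps
the total allocation within the available packet buffer (rx_all).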