From: Guangbin Huang
Subject: [PATCH net 6/8] net: hns3: schedule the polling again when allocation fails
Date: Tue, 19 Oct 2021 22:16:33 +0800
Message-ID: <20211019141635.43695-7-huangguangbin2@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211019141635.43695-1-huangguangbin2@huawei.com>
References: <20211019141635.43695-1-huangguangbin2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Yunsheng Lin

Currently, when an rx page allocation fails, polling may be stopped if
there are no more packets to be received, which may cause a queue stall
problem under memory pressure.

This patch makes sure polling is scheduled again when there is any rx
page allocation failure, so polling will keep trying to allocate
receive buffers until it succeeds.

Now that the allocation retry is added, it is unnecessary to do the rx
page allocation at the end of rx cleaning, so remove it. Also reset
unused_count to zero after calling hns3_nic_alloc_rx_buffers() to avoid
calling hns3_nic_alloc_rx_buffers() repeatedly under memory pressure.

Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Yunsheng Lin
Signed-off-by: Guangbin Huang
---
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   | 22 ++++++++++---------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 2ecc9abc02d6..4b886a13e079 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -3486,7 +3486,8 @@ static int hns3_desc_unused(struct hns3_enet_ring *ring)
 	return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
 }
 
-static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+/* Return true if there is any allocation failure */
+static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
 				      int cleand_count)
 {
 	struct hns3_desc_cb *desc_cb;
@@ -3511,7 +3512,10 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
 			hns3_rl_err(ring_to_netdev(ring),
 				    "alloc rx buffer failed: %d\n",
 				    ret);
-			break;
+
+			writel(i, ring->tqp->io_base +
+			       HNS3_RING_RX_RING_HEAD_REG);
+			return true;
 		}
 
 		hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);
@@ -3524,6 +3528,7 @@ static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
 	}
 
 	writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
+	return false;
 }
 
 static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
@@ -4175,6 +4180,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
 {
 #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
 	int unused_count = hns3_desc_unused(ring);
+	bool failure = false;
 	int recv_pkts = 0;
 	int err;
 
@@ -4183,9 +4189,9 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
 	while (recv_pkts < budget) {
 		/* Reuse or realloc buffers */
 		if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
-			hns3_nic_alloc_rx_buffers(ring, unused_count);
-			unused_count = hns3_desc_unused(ring) -
-					ring->pending_buf;
+			failure = failure ||
+				  hns3_nic_alloc_rx_buffers(ring, unused_count);
+			unused_count = 0;
 		}
 
 		/* Poll one pkt */
@@ -4204,11 +4210,7 @@ int hns3_clean_rx_ring(struct hns3_enet_ring *ring, int budget,
 	}
 
 out:
-	/* Make all data has been write before submit */
-	if (unused_count > 0)
-		hns3_nic_alloc_rx_buffers(ring, unused_count);
-
-	return recv_pkts;
+	return failure ? budget : recv_pkts;
 }
 
 static void hns3_update_rx_int_coalesce(struct hns3_enet_tqp_vector *tqp_vector)
-- 
2.33.0