From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mitch Williams,
    Andrew Bowers, Jeff Kirsher, Sasha Levin
Subject: [PATCH 5.2 117/413] iavf: allow null RX descriptors
Date: Wed, 24 Jul 2019 21:16:48 +0200
Message-Id: <20190724191743.654189161@linuxfoundation.org>
In-Reply-To: <20190724191735.096702571@linuxfoundation.org>
References: <20190724191735.096702571@linuxfoundation.org>

[ Upstream commit efa14c3985828da3163f5372137cb64d992b0f79 ]

In some circumstances, the hardware can hand us a null receive
descriptor, with no data attached but otherwise valid. Unfortunately,
the driver was ill-equipped to handle such an event, and would stop
processing packets at that point.

To fix this, use the Descriptor Done bit instead of the size to
determine whether or not a descriptor is ready to be processed. Add
some checks to allow for unused buffers.

Signed-off-by: Mitch Williams
Tested-by: Andrew Bowers
Signed-off-by: Jeff Kirsher
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 06d1509d57f7..c97b9ecf026a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1236,6 +1236,9 @@ static void iavf_add_rx_frag(struct iavf_ring *rx_ring,
 	unsigned int truesize = SKB_DATA_ALIGN(size + iavf_rx_offset(rx_ring));
 #endif
 
+	if (!size)
+		return;
+
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
 			rx_buffer->page_offset, size, truesize);
 
@@ -1260,6 +1263,9 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
 {
 	struct iavf_rx_buffer *rx_buffer;
 
+	if (!size)
+		return NULL;
+
 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
 	prefetchw(rx_buffer->page);
 
@@ -1299,6 +1305,8 @@ static struct sk_buff *iavf_construct_skb(struct iavf_ring *rx_ring,
 	unsigned int headlen;
 	struct sk_buff *skb;
 
+	if (!rx_buffer)
+		return NULL;
 	/* prefetch first cache line of first page */
 	prefetch(va);
 #if L1_CACHE_BYTES < 128
@@ -1363,6 +1371,8 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
 #endif
 	struct sk_buff *skb;
 
+	if (!rx_buffer)
+		return NULL;
 	/* prefetch first cache line of first page */
 	prefetch(va);
 #if L1_CACHE_BYTES < 128
@@ -1398,6 +1408,9 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
 static void iavf_put_rx_buffer(struct iavf_ring *rx_ring,
 			       struct iavf_rx_buffer *rx_buffer)
 {
+	if (!rx_buffer)
+		return;
+
 	if (iavf_can_reuse_rx_page(rx_buffer)) {
 		/* hand second half of page back to the ring */
 		iavf_reuse_rx_page(rx_ring, rx_buffer);
@@ -1496,11 +1509,12 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		 * verified the descriptor has been written back.
 		 */
 		dma_rmb();
+#define IAVF_RXD_DD BIT(IAVF_RX_DESC_STATUS_DD_SHIFT)
+		if (!iavf_test_staterr(rx_desc, IAVF_RXD_DD))
+			break;
 
 		size = (qword & IAVF_RXD_QW1_LENGTH_PBUF_MASK) >>
 		       IAVF_RXD_QW1_LENGTH_PBUF_SHIFT;
-		if (!size)
-			break;
 
 		iavf_trace(clean_rx_irq, rx_ring, rx_desc, skb);
 		rx_buffer = iavf_get_rx_buffer(rx_ring, size);
@@ -1516,7 +1530,8 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->rx_stats.alloc_buff_failed++;
-			rx_buffer->pagecnt_bias++;
+			if (rx_buffer)
+				rx_buffer->pagecnt_bias++;
 			break;
 		}
 
-- 
2.20.1
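
A minimal standalone sketch of the behavioral change described in the
commit message (the struct, macros, and helper names below are
illustrative stand-ins, not the real iavf definitions): with a null
descriptor the Descriptor Done (DD) status bit is set but the
packet-buffer length is zero, so gating cleanup on the length stalls
the ring, while gating it on the DD bit keeps the ring moving.

/* Simplified model of the readiness check changed by this patch.
 * All names here are made up for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_DD_BIT	(1ULL << 0)		/* "descriptor done" status bit */
#define DEMO_LEN_SHIFT	38			/* packet buffer length field   */
#define DEMO_LEN_MASK	(0x3FFFULL << DEMO_LEN_SHIFT)

struct demo_rx_desc {
	uint64_t qword1;			/* status/error/length qword */
};

/* Old behavior: a zero length looked like "nothing to do", so a null
 * descriptor stopped packet processing at that point.
 */
static bool ready_by_size(const struct demo_rx_desc *d)
{
	return ((d->qword1 & DEMO_LEN_MASK) >> DEMO_LEN_SHIFT) != 0;
}

/* New behavior: the DD bit alone decides readiness; a zero-length but
 * otherwise valid descriptor is still consumed.
 */
static bool ready_by_dd(const struct demo_rx_desc *d)
{
	return (d->qword1 & DEMO_DD_BIT) != 0;
}

int main(void)
{
	/* A "null" descriptor: DD set, length zero. */
	struct demo_rx_desc null_desc = { .qword1 = DEMO_DD_BIT };

	printf("size check says ready: %d\n", ready_by_size(&null_desc)); /* 0 -> stall  */
	printf("DD check says ready:   %d\n", ready_by_dd(&null_desc));   /* 1 -> proceed */
	return 0;
}

The remaining hunks of the patch simply make the buffer helpers
tolerate the zero-size / NULL-buffer case that the new check lets
through.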