From: Jürgen Groß
Subject: Re: [PATCH 1/1] xen-netfront: do not use ~0U as error return value for xennet_fill_frags()
Date: Mon, 30 Sep 2019 09:56:12 +0200
To: Dongli Zhang, xen-devel@lists.xenproject.org, netdev@vger.kernel.org
Cc: davem@davemloft.net, sstabellini@kernel.org, boris.ostrovsky@oracle.com, joe.jin@oracle.com, linux-kernel@vger.kernel.org
In-Reply-To: <1569829469-16143-1-git-send-email-dongli.zhang@oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 30.09.19 09:44, Dongli Zhang wrote:
> xennet_fill_frags() uses ~0U as return
> value when the sk_buff is not able
> to cache extra fragments. This is incorrect because the return type of
> xennet_fill_frags() is RING_IDX and 0xffffffff is an expected value for
> ring buffer index.
>
> In the situation when the rsp_cons is approaching 0xffffffff, the return
> value of xennet_fill_frags() may become 0xffffffff which xennet_poll() (the
> caller) would regard as error. As a result, queue->rx.rsp_cons is set
> incorrectly because it is updated only when there is error. If there is no
> error, xennet_poll() would be responsible to update queue->rx.rsp_cons.
> Finally, queue->rx.rsp_cons would point to the rx ring buffer entries whose
> queue->rx_skbs[i] and queue->grant_rx_ref[i] are already cleared to NULL.
> This leads to NULL pointer access in the next iteration to process rx ring
> buffer entries.
>
> The symptom is similar to the one fixed in
> commit 00b368502d18 ("xen-netfront: do not assume sk_buff_head list is
> empty in error handling").
>
> This patch uses an extra argument to help return if there is error in
> xennet_fill_frags().

Hmm, I wonder if it wouldn't be better to have the ring index returned via
the new pointer and let xennet_fill_frags() return 0 in case of success and
-ENOENT otherwise. This would avoid having to introduce the local errno
variable.
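Something along the lines of the following sketch is what I mean. It is a
standalone illustration compiled in isolation, not the driver code: the
types, the `fill_frags()` name, and the fragment limit are stand-ins.

```c
/*
 * Sketch of the suggested interface: report success/failure through the
 * return value (0 / -ENOENT) and hand the ring index back through a
 * pointer, so 0xffffffff stays usable as a legitimate index.
 */
#include <errno.h>
#include <stdint.h>

typedef uint32_t RING_IDX;

#define MAX_SKB_FRAGS 17	/* stand-in limit, value is illustrative */

/* Stand-in for xennet_fill_frags(): consume 'nfrags' responses starting
 * at *cons; fail when the skb cannot take that many fragments. */
static int fill_frags(RING_IDX *cons, unsigned int nfrags)
{
	if (nfrags > MAX_SKB_FRAGS)
		return -ENOENT;	/* error via the return value */
	*cons += nfrags;	/* index via the pointer */
	return 0;
}
```

The caller then tests the return value instead of comparing the index
against ~0U, i.e. `if (fill_frags(&i, n)) goto err;`, and no extra errno
variable is needed in xennet_poll().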
Juergen

>
> Fixes: ad4f15dc2c70 ("xen/netfront: don't bug in case of too many frags")
> Signed-off-by: Dongli Zhang
> ---
>  drivers/net/xen-netfront.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index e14ec75..c2a1e09 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -889,11 +889,14 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
>  
>  static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>  				  struct sk_buff *skb,
> -				  struct sk_buff_head *list)
> +				  struct sk_buff_head *list,
> +				  int *errno)
>  {
>  	RING_IDX cons = queue->rx.rsp_cons;
>  	struct sk_buff *nskb;
>  
> +	*errno = 0;
> +
>  	while ((nskb = __skb_dequeue(list))) {
>  		struct xen_netif_rx_response *rx =
>  			RING_GET_RESPONSE(&queue->rx, ++cons);
>  
> @@ -908,6 +911,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>  		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
>  			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
>  			kfree_skb(nskb);
> +			*errno = -ENOENT;
>  			return ~0U;
>  		}
>  
> @@ -1009,6 +1013,8 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>  	i = queue->rx.rsp_cons;
>  	work_done = 0;
>  	while ((i != rp) && (work_done < budget)) {
> +		int errno;
> +
>  		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
>  		memset(extras, 0, sizeof(rinfo.extras));
>  
> @@ -1045,8 +1051,8 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>  		skb->data_len = rx->status;
>  		skb->len += rx->status;
>  
> -		i = xennet_fill_frags(queue, skb, &tmpq);
> -		if (unlikely(i == ~0U))
> +		i = xennet_fill_frags(queue, skb, &tmpq, &errno);
> +		if (unlikely(errno == -ENOENT))
>  			goto err;
>  
>  		if (rx->flags & XEN_NETRXF_csum_blank)
>
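As an aside, the collision the commit message describes is plain unsigned
modulo arithmetic; a few standalone lines (illustrative only, not driver
code) show why ~0U can be a perfectly legitimate index:

```c
/* RING_IDX is a free-running unsigned 32-bit counter: it wraps modulo
 * 2^32, so 0xffffffff (~0U) is an ordinary value on its way past zero
 * and cannot double as an error sentinel. */
#include <stdint.h>

typedef uint32_t RING_IDX;

static RING_IDX ring_advance(RING_IDX rsp_cons, unsigned int slots)
{
	return rsp_cons + slots;	/* unsigned addition wraps by definition */
}
```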