Subject: Re: [Xen-devel] [PATCH 2/6] xen-netfront: copy response out of shared buffer before accessing it
To: Marek Marczykowski-Górecki, xen-devel@lists.xenproject.org
Cc: Juergen Gross, "open list:NETWORKING DRIVERS", stable@vger.kernel.org, open list, Boris Ostrovsky
From: Oleksandr Andrushchenko
Date: Wed, 2 May 2018 08:20:56 +0300
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 05/01/2018 12:01 AM, Marek Marczykowski-Górecki wrote:
> Make local copy of the response, otherwise backend might modify it while
> frontend is already processing it - leading to time of
> check / time of use issue.
>
> This is complementary to XSA155.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Marek Marczykowski-Górecki
> ---
>  drivers/net/xen-netfront.c | 51 +++++++++++++++++++--------------------
>  1 file changed, 25 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 4dd0668..dc99763 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -387,13 +387,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
>  		rmb(); /* Ensure we see responses up to 'rp'. */
>
>  		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {

Side comment: the original concern was about the counters above; will
those be addressed in a dedicated series?

> -			struct xen_netif_tx_response *txrsp;
> +			struct xen_netif_tx_response txrsp;
>
> -			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
> -			if (txrsp->status == XEN_NETIF_RSP_NULL)
> +			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
> +			if (txrsp.status == XEN_NETIF_RSP_NULL)
>  				continue;
>

IMO, there is still no guarantee you access consistent data after this
change. What if part of the response was fine when you started copying
and then, in the middle, the backend poisons the end of the response?
This seems to merely minimize the chances of working with inconsistent
data rather than removing the possibility completely.

> -			id = txrsp->id;
> +			id = txrsp.id;
>  			skb = queue->tx_skbs[id].skb;
>  			if (unlikely(gnttab_query_foreign_access(
>  				queue->grant_tx_ref[id]) != 0)) {
> @@ -741,7 +741,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
>  			     RING_IDX rp)
>
>  {
> -	struct xen_netif_extra_info *extra;
> +	struct xen_netif_extra_info extra;
>  	struct device *dev = &queue->info->netdev->dev;
>  	RING_IDX cons = queue->rx.rsp_cons;
>  	int err = 0;
> @@ -757,24 +757,23 @@ static int xennet_get_extras(struct netfront_queue *queue,
>  			break;
>  		}
>
> -		extra = (struct xen_netif_extra_info *)
> -			RING_GET_RESPONSE(&queue->rx, ++cons);
> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
>
> -		if (unlikely(!extra->type ||
> -			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> +		if (unlikely(!extra.type ||
> +			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>  			if (net_ratelimit())
>  				dev_warn(dev, "Invalid extra type: %d\n",
> -					extra->type);
> +					extra.type);
>  			err = -EINVAL;
>  		} else {
> -			memcpy(&extras[extra->type - 1], extra,
> -			       sizeof(*extra));
> +			memcpy(&extras[extra.type - 1], &extra,
> +			       sizeof(extra));
>  		}
>
>  		skb = xennet_get_rx_skb(queue, cons);
>  		ref = xennet_get_rx_ref(queue, cons);
>  		xennet_move_rx_slot(queue, skb, ref);
> -	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
> +	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
>
>  	queue->rx.rsp_cons = cons;
>  	return err;
> @@ -784,28 +783,28 @@ static int xennet_get_responses(struct netfront_queue *queue,
>  				struct netfront_rx_info *rinfo, RING_IDX rp,
>  				struct sk_buff_head *list)
>  {
> -	struct xen_netif_rx_response *rx = &rinfo->rx;
> +	struct xen_netif_rx_response rx = rinfo->rx;
>  	struct xen_netif_extra_info *extras = rinfo->extras;
>  	struct device *dev = &queue->info->netdev->dev;
>  	RING_IDX cons = queue->rx.rsp_cons;
>  	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
>  	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
> -	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
> +	int max = MAX_SKB_FRAGS + (rx.status <= RX_COPY_THRESHOLD);
>  	int slots = 1;
>  	int err = 0;
>  	unsigned long ret;
>
> -	if (rx->flags & XEN_NETRXF_extra_info) {
> +	if (rx.flags & XEN_NETRXF_extra_info) {
>  		err = xennet_get_extras(queue, extras, rp);
>  		cons = queue->rx.rsp_cons;
>  	}
>
>  	for (;;) {
> -		if (unlikely(rx->status < 0 ||
> -			     rx->offset + rx->status > XEN_PAGE_SIZE)) {
> +		if (unlikely(rx.status < 0 ||
> +			     rx.offset + rx.status > XEN_PAGE_SIZE)) {
>  			if (net_ratelimit())
>  				dev_warn(dev, "rx->offset: %u, size: %d\n",
> -					 rx->offset, rx->status);
> +					 rx.offset, rx.status);
>  			xennet_move_rx_slot(queue, skb, ref);
>  			err = -EINVAL;
>  			goto next;
> @@ -819,7 +818,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>  		if (ref == GRANT_INVALID_REF) {
>  			if (net_ratelimit())
>  				dev_warn(dev, "Bad rx response id %d.\n",
> -					 rx->id);
> +					 rx.id);
>  			err = -EINVAL;
>  			goto next;
>  		}
> @@ -832,7 +831,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>  		__skb_queue_tail(list, skb);
>
> next:
> -		if (!(rx->flags & XEN_NETRXF_more_data))
> +		if (!(rx.flags & XEN_NETRXF_more_data))
>  			break;
>
>  		if (cons + slots == rp) {
> @@ -842,7 +841,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
>  			break;
>  		}
>
> -		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
> +		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx);
>  		skb = xennet_get_rx_skb(queue, cons + slots);
>  		ref = xennet_get_rx_ref(queue, cons + slots);
>  		slots++;
> @@ -898,9 +897,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>  	struct sk_buff *nskb;
>
>  	while ((nskb = __skb_dequeue(list))) {
> -		struct xen_netif_rx_response *rx =
> -			RING_GET_RESPONSE(&queue->rx, ++cons);
> +		struct xen_netif_rx_response rx;
>  		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
>
>  		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
>  			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
>
> @@ -911,7 +910,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
>  		BUG_ON(shinfo->nr_frags >= MAX_SKB_FRAGS);
>
>  		skb_add_rx_frag(skb, shinfo->nr_frags, skb_frag_page(nfrag),
> -				rx->offset, rx->status, PAGE_SIZE);
> +				rx.offset, rx.status, PAGE_SIZE);
>
>  		skb_shinfo(nskb)->nr_frags = 0;
>  		kfree_skb(nskb);
> @@ -1007,7 +1006,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
>  	i = queue->rx.rsp_cons;
>  	work_done = 0;
>  	while ((i != rp) && (work_done < budget)) {
> -		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
> +		RING_COPY_RESPONSE(&queue->rx, i, rx);
>  		memset(extras, 0, sizeof(rinfo.extras));
>
>  		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
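
To make the double-fetch point concrete: with RING_GET_RESPONSE the frontend
reads status and id from the shared page at two different times, so the
backend can change id after status has passed its check; RING_COPY_RESPONSE
snapshots the whole response first. A minimal userspace model of the two
patterns (the struct, names, and values below are simplified stand-ins for
illustration, not the real ring macros or layout):

```c
#include <string.h>

/* Simplified stand-in for xen_netif_tx_response; not the real layout. */
struct tx_response {
	int id;
	int status;
};

/* Models the shared ring page, which a malicious backend may rewrite
 * concurrently at any moment. */
static struct tx_response shared_ring[1] = { { .id = 5, .status = 0 } };

/* Pre-patch pattern: two separate fetches from shared memory.  The
 * backend can change 'id' between the 'status' check and the 'id' use. */
static int handle_unsafe(void)
{
	struct tx_response *rsp = &shared_ring[0];	/* like RING_GET_RESPONSE */

	if (rsp->status < 0)
		return -1;
	return rsp->id;		/* second fetch: may no longer match 'status' */
}

/* Post-patch pattern: copy once, then use only the local snapshot, so
 * 'status' and 'id' come from the same point in time. */
static int handle_copied(void)
{
	struct tx_response rsp;

	memcpy(&rsp, &shared_ring[0], sizeof(rsp));	/* like RING_COPY_RESPONSE */
	if (rsp.status < 0)
		return -1;
	return rsp.id;
}
```

Absent a concurrent writer both return the same value; the difference is
only in how many loads hit the shared page, which is exactly the window the
patch closes.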
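
And on the "still attacker-controlled" side of the argument: even after
RING_COPY_RESPONSE, every field in the local copy is still chosen by the
backend, which is why the rx.status / rx.offset bounds check in
xennet_get_responses() must stay. A hedged sketch of that validation
(XEN_PAGE_SIZE hardcoded to 4096 and the struct simplified for this model):

```c
#include <stdint.h>

#define XEN_PAGE_SIZE 4096	/* assumption: 4 KiB Xen page */

/* Simplified stand-in for xen_netif_rx_response. */
struct rx_response {
	uint32_t offset;
	int32_t status;
};

/* Returns 0 if the already-copied response describes a region that fits
 * inside one page, -1 (think -EINVAL) otherwise.  Mirrors the shape of
 * the check in xennet_get_responses(). */
static int rx_response_ok(const struct rx_response *rx)
{
	if (rx->status < 0)
		return -1;
	if (rx->offset + (uint32_t)rx->status > XEN_PAGE_SIZE)
		return -1;
	return 0;
}
```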