From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ross Lagerwall, Boris Ostrovsky, Juergen Gross, Sasha Levin
Subject: [PATCH 4.14 139/183] xen-netfront: Fix race between device setup and open
Date: Wed, 25 Apr 2018 12:35:59 +0200
Message-Id: <20180425103248.130971244@linuxfoundation.org>
In-Reply-To: <20180425103242.532713678@linuxfoundation.org>
References: <20180425103242.532713678@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.14-stable review patch. If anyone has any objections, please let me know.
------------------

From: Ross Lagerwall

[ Upstream commit f599c64fdf7d9c108e8717fb04bc41c680120da4 ]

When a netfront device is set up it registers a netdev fairly early on,
before it has set up the queues and is actually usable. A userspace tool
like NetworkManager will immediately try to open it and access its state
as soon as it appears. The bug can be reproduced by hotplugging VIFs
until the VM runs out of grant refs. It registers the netdev but fails
to set up any queues (since there are no more grant refs). In the
meantime, NetworkManager opens the device and the kernel crashes trying
to access the queues (of which there are none).

Fix this in two ways:
* For initial setup, register the netdev much later, after the queues
  are setup. This avoids the race entirely.
* During a suspend/resume cycle, the frontend reconnects to the backend
  and the queues are recreated. It is possible (though highly unlikely)
  to race with something opening the device and accessing the queues
  after they have been destroyed but before they have been recreated.
  Extend the region covered by the rtnl semaphore to protect against
  this race. There is a possibility that we fail to recreate the queues
  so check for this in the open function.

Signed-off-by: Ross Lagerwall
Reviewed-by: Boris Ostrovsky
Signed-off-by: Juergen Gross
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/xen-netfront.c | 46 +++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 22 deletions(-)

--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -351,6 +351,9 @@ static int xennet_open(struct net_device
 	unsigned int i = 0;
 	struct netfront_queue *queue = NULL;
 
+	if (!np->queues)
+		return -ENODEV;
+
 	for (i = 0; i < num_queues; ++i) {
 		queue = &np->queues[i];
 		napi_enable(&queue->napi);
@@ -1358,18 +1361,8 @@ static int netfront_probe(struct xenbus_
 #ifdef CONFIG_SYSFS
 	info->netdev->sysfs_groups[0] = &xennet_dev_group;
 #endif
-	err = register_netdev(info->netdev);
-	if (err) {
-		pr_warn("%s: register_netdev err=%d\n", __func__, err);
-		goto fail;
-	}
 
 	return 0;
-
- fail:
-	xennet_free_netdev(netdev);
-	dev_set_drvdata(&dev->dev, NULL);
-	return err;
 }
 
 static void xennet_end_access(int ref, void *page)
@@ -1738,8 +1731,6 @@ static void xennet_destroy_queues(struct
 {
 	unsigned int i;
 
-	rtnl_lock();
-
 	for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
 		struct netfront_queue *queue = &info->queues[i];
 
@@ -1748,8 +1739,6 @@ static void xennet_destroy_queues(struct
 		netif_napi_del(&queue->napi);
 	}
 
-	rtnl_unlock();
-
 	kfree(info->queues);
 	info->queues = NULL;
 }
@@ -1765,8 +1754,6 @@ static int xennet_create_queues(struct n
 	if (!info->queues)
 		return -ENOMEM;
 
-	rtnl_lock();
-
 	for (i = 0; i < *num_queues; i++) {
 		struct netfront_queue *queue = &info->queues[i];
 
@@ -1775,7 +1762,7 @@ static int xennet_create_queues(struct n
 
 		ret = xennet_init_queue(queue);
 		if (ret < 0) {
-			dev_warn(&info->netdev->dev,
+			dev_warn(&info->xbdev->dev,
 				 "only created %d queues\n", i);
 			*num_queues = i;
 			break;
@@ -1789,10 +1776,8 @@ static int xennet_create_queues(struct n
 
 	netif_set_real_num_tx_queues(info->netdev, *num_queues);
 
-	rtnl_unlock();
-
 	if (*num_queues == 0) {
-		dev_err(&info->netdev->dev, "no queues\n");
+		dev_err(&info->xbdev->dev, "no queues\n");
 		return -EINVAL;
 	}
 	return 0;
@@ -1829,6 +1814,7 @@ static int talk_to_netback(struct xenbus
 		goto out;
 	}
 
+	rtnl_lock();
 	if (info->queues)
 		xennet_destroy_queues(info);
 
@@ -1839,6 +1825,7 @@ static int talk_to_netback(struct xenbus
 		info->queues = NULL;
 		goto out;
 	}
+	rtnl_unlock();
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < num_queues; ++i) {
@@ -1935,8 +1922,10 @@ abort_transaction_no_dev_fatal:
 	xenbus_transaction_end(xbt, 1);
 destroy_ring:
 	xennet_disconnect_backend(info);
+	rtnl_lock();
 	xennet_destroy_queues(info);
 out:
+	rtnl_unlock();
 	device_unregister(&dev->dev);
 	return err;
 }
@@ -1966,6 +1955,15 @@ static int xennet_connect(struct net_dev
 	netdev_update_features(dev);
 	rtnl_unlock();
 
+	if (dev->reg_state == NETREG_UNINITIALIZED) {
+		err = register_netdev(dev);
+		if (err) {
+			pr_warn("%s: register_netdev err=%d\n", __func__, err);
+			device_unregister(&np->xbdev->dev);
+			return err;
+		}
+	}
+
 	/*
 	 * All public and private state should now be sane. Get
 	 * ready to start sending and receiving packets and give the driver
@@ -2156,10 +2154,14 @@ static int xennet_remove(struct xenbus_d
 	xennet_disconnect_backend(info);
 
-	unregister_netdev(info->netdev);
+	if (info->netdev->reg_state == NETREG_REGISTERED)
+		unregister_netdev(info->netdev);
 
-	if (info->queues)
+	if (info->queues) {
+		rtnl_lock();
 		xennet_destroy_queues(info);
+		rtnl_unlock();
+	}
 
 	xennet_free_netdev(info->netdev);
 
 	return 0;
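
The hunks above are easier to follow once the overall ordering is spelled out.
Below is a minimal, self-contained sketch of that ordering, not the driver's
code: the names my_priv, my_queue, my_open and my_connect are invented
stand-ins for netfront_info, the per-queue state, xennet_open and
xennet_connect. It shows the three pieces the commit message describes:
ndo_open fails with -ENODEV while the queues do not exist, the queues are
recreated under the rtnl lock so a racing open cannot observe half-torn-down
state, and register_netdev() runs only after the queues are in place (and only
once, guarded by the NETREG_UNINITIALIZED check).

/*
 * Illustrative sketch only -- not the xen-netfront code itself.  The
 * names my_priv, my_queue, my_open and my_connect are invented and
 * stand in for netfront_info, the per-queue state, xennet_open and
 * xennet_connect.
 */
#include <linux/errno.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>

struct my_queue {
	int id;
};

struct my_priv {
	struct my_queue *queues;	/* NULL until queue setup succeeds */
};

/* ndo_open: fail cleanly if the queues were never (re)created. */
static int my_open(struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);

	if (!priv->queues)
		return -ENODEV;

	/* ... enable NAPI and start the tx queues here ... */
	return 0;
}

/* Connect/reconnect path: rebuild the queues under rtnl, register last. */
static int my_connect(struct net_device *dev, unsigned int num_queues)
{
	struct my_priv *priv = netdev_priv(dev);
	int err = 0;

	/*
	 * Tear down and recreate the queues with the rtnl lock held so a
	 * concurrent open cannot observe them half destroyed.
	 */
	rtnl_lock();
	kfree(priv->queues);
	priv->queues = kcalloc(num_queues, sizeof(*priv->queues), GFP_KERNEL);
	rtnl_unlock();
	if (!priv->queues)
		return -ENOMEM;

	/*
	 * Initial setup: the netdev is registered only now, after the
	 * queues exist, so userspace never sees a half-initialised device.
	 * On resume it is already registered and this branch is skipped.
	 */
	if (dev->reg_state == NETREG_UNINITIALIZED)
		err = register_netdev(dev);

	return err;
}

The teardown side follows the same idea, as the xennet_remove() hunks above
show: only unregister the netdev if it actually reached NETREG_REGISTERED, and
destroy the queues with the rtnl lock held.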