From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, "David S.
Miller" , "Jakub Kicinski" Date: Thu, 07 Jun 2018 15:05:21 +0100 Message-ID: X-Mailer: LinuxStableQueue (scripts by bwh) Subject: [PATCH 3.16 233/410] net: fix race on decreasing number of TX queues In-Reply-To: X-SA-Exim-Connect-IP: 148.252.241.226 X-SA-Exim-Mail-From: ben@decadent.org.uk X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk); SAEximRunCond expanded to false Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 3.16.57-rc1 review patch. If anyone has any objections, please let me know. ------------------ From: Jakub Kicinski commit ac5b70198adc25c73fba28de4f78adcee8f6be0b upstream. netif_set_real_num_tx_queues() can be called when netdev is up. That usually happens when user requests change of number of channels/rings with ethtool -L. The procedure for changing the number of queues involves resetting the qdiscs and setting dev->num_tx_queues to the new value. When the new value is lower than the old one, extra care has to be taken to ensure ordering of accesses to the number of queues vs qdisc reset. Currently the queues are reset before new dev->num_tx_queues is assigned, leaving a window of time where packets can be enqueued onto the queues going down, leading to a likely crash in the drivers, since most drivers don't check if TX skbs are assigned to an active queue. Fixes: e6484930d7c7 ("net: allocate tx queues in register_netdevice") Signed-off-by: Jakub Kicinski Signed-off-by: David S. Miller Signed-off-by: Ben Hutchings --- net/core/dev.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) --- a/net/core/dev.c +++ b/net/core/dev.c @@ -2084,8 +2084,11 @@ EXPORT_SYMBOL(netif_set_xps_queue); */ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq) { + bool disabling; int rc; + disabling = txq < dev->real_num_tx_queues; + if (txq < 1 || txq > dev->num_tx_queues) return -EINVAL; @@ -2101,15 +2104,19 @@ int netif_set_real_num_tx_queues(struct if (dev->num_tc) netif_setup_tc(dev, txq); - if (txq < dev->real_num_tx_queues) { + dev->real_num_tx_queues = txq; + + if (disabling) { + synchronize_net(); qdisc_reset_all_tx_gt(dev, txq); #ifdef CONFIG_XPS netif_reset_xps_queues_gt(dev, txq); #endif } + } else { + dev->real_num_tx_queues = txq; } - dev->real_num_tx_queues = txq; return 0; } EXPORT_SYMBOL(netif_set_real_num_tx_queues);