From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, John Fastabend, "David S. Miller", Sasha Levin
Subject: [PATCH 4.15 109/128] net: sched: drop qdisc_reset from dev_graft_qdisc
Date: Fri, 16 Mar 2018 16:24:10 +0100
Message-Id: <20180316152341.971330169@linuxfoundation.org>
In-Reply-To: <20180316152336.199007505@linuxfoundation.org>
References: <20180316152336.199007505@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.15-stable review patch.  If anyone has any objections, please let me know.
------------------

From: John Fastabend

[ Upstream commit 7bbde83b1860c28a1cc35516352c4e7e5172c29a ]

In qdisc_graft_qdisc a "new" qdisc is attached and the 'qdisc_destroy'
operation is called on the old qdisc. The destroy operation will wait
an RCU grace period and call qdisc_rcu_free(). At that point
gso_cpu_skb is freed along with all stats, so there is no need to zero
stats and gso_cpu_skb from the graft operation itself.

Further, after dropping the qdisc locks we cannot continue to call
qdisc_reset before waiting an RCU grace period so that the qdisc is
detached from all CPUs. By removing the qdisc_reset() here we get the
correct property of waiting an RCU grace period and letting the
qdisc_destroy operation clean up the qdisc correctly.

Note, a refcnt greater than 1 would cause the destroy operation to be
aborted; however, if this ever happened the reference to the qdisc
would be lost and we would have a memory leak.

Signed-off-by: John Fastabend
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 net/sched/sch_generic.c |   28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -762,10 +762,6 @@ struct Qdisc *dev_graft_qdisc(struct net
 	root_lock = qdisc_lock(oqdisc);
 	spin_lock_bh(root_lock);
 
-	/* Prune old scheduler */
-	if (oqdisc && refcount_read(&oqdisc->refcnt) <= 1)
-		qdisc_reset(oqdisc);
-
 	/* ... and graft new one */
 	if (qdisc == NULL)
 		qdisc = &noop_qdisc;
@@ -916,6 +912,16 @@ static bool some_qdisc_is_busy(struct ne
 	return false;
 }
 
+static void dev_qdisc_reset(struct net_device *dev,
+			    struct netdev_queue *dev_queue,
+			    void *none)
+{
+	struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
+
+	if (qdisc)
+		qdisc_reset(qdisc);
+}
+
 /**
  *	dev_deactivate_many - deactivate transmissions on several devices
  *	@head: list of devices to deactivate
@@ -926,7 +932,6 @@ static bool some_qdisc_is_busy(struct ne
 void dev_deactivate_many(struct list_head *head)
 {
 	struct net_device *dev;
-	bool sync_needed = false;
 
 	list_for_each_entry(dev, head, close_list) {
 		netdev_for_each_tx_queue(dev, dev_deactivate_queue,
@@ -936,20 +941,25 @@ void dev_deactivate_many(struct list_hea
 					 &noop_qdisc);
 
 		dev_watchdog_down(dev);
-		sync_needed |= !dev->dismantle;
 	}
 
 	/* Wait for outstanding qdisc-less dev_queue_xmit calls.
 	 * This is avoided if all devices are in dismantle phase :
 	 * Caller will call synchronize_net() for us
	 */
-	if (sync_needed)
-		synchronize_net();
+	synchronize_net();
 
 	/* Wait for outstanding qdisc_run calls. */
-	list_for_each_entry(dev, head, close_list)
+	list_for_each_entry(dev, head, close_list) {
 		while (some_qdisc_is_busy(dev))
 			yield();
+		/* The new qdisc is assigned at this point so we can safely
+		 * unwind stale skb lists and qdisc statistics
+		 */
+		netdev_for_each_tx_queue(dev, dev_qdisc_reset, NULL);
+		if (dev_ingress_queue(dev))
+			dev_qdisc_reset(dev, dev_ingress_queue(dev), NULL);
+	}
 }
 
 void dev_deactivate(struct net_device *dev)
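The ordering the patch relies on (graft only swaps the active pointer; reset and free happen later, in the deferred destroy path, once no reader can still see the old qdisc) can be sketched as a small userspace C toy. This is a minimal sketch, not kernel code: toy_qdisc, graft, destroy and grace_period_end are hypothetical stand-ins for the real Qdisc, dev_graft_qdisc, qdisc_destroy and the RCU grace period.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model (NOT kernel code): a "qdisc" with some state and a refcount.
 * Names loosely mirror the patch; everything here is a simplified sketch. */
struct toy_qdisc {
	int refcnt;
	long pkt_stats;
};

static struct toy_qdisc *active;	/* the currently grafted qdisc */
static struct toy_qdisc *deferred;	/* freed only after the "grace period" */
static int resets_done;			/* counts qdisc_reset-equivalent calls */

/* Graft a new qdisc: only swap the pointer.  Per the patch, we must NOT
 * reset the old one here; readers may still hold a reference to it until
 * a grace period elapses. */
static struct toy_qdisc *graft(struct toy_qdisc *newq)
{
	struct toy_qdisc *old = active;

	active = newq;
	return old;
}

/* Destroy: defer the actual cleanup, the way qdisc_destroy defers the
 * real work to qdisc_rcu_free after an RCU grace period. */
static void destroy(struct toy_qdisc *q)
{
	if (q && --q->refcnt == 0)
		deferred = q;	/* queue for after the grace period */
}

/* Stand-in for the end of an RCU grace period: only now is it safe to
 * reset and free, since no reader can still see the old pointer. */
static void grace_period_end(void)
{
	if (deferred) {
		resets_done++;	/* the qdisc_reset equivalent happens here */
		free(deferred);
		deferred = NULL;
	}
}
```

The design point this illustrates is exactly the one the commit message makes: once the swap is published, cleanup belongs entirely to the deferred destroy path, so zeroing stats or resetting from the graft operation itself is both redundant and unsafe.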