Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1425586AbdDUTqh (ORCPT ); Fri, 21 Apr 2017 15:46:37 -0400
Received: from Chamillionaire.breakpoint.cc ([146.0.238.67]:55078 "EHLO
        Chamillionaire.breakpoint.cc" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1425568AbdDUTqf (ORCPT ); Fri, 21 Apr 2017 15:46:35 -0400
Date: Fri, 21 Apr 2017 21:45:15 +0200
From: Florian Westphal
To: Florian Westphal
Cc: Eric Dumazet, Andrey Konovalov, Cong Wang, netdev, LKML,
        Dmitry Vyukov, Kostya Serebryany, syzkaller
Subject: Re: net: cleanup_net is slow
Message-ID: <20170421194515.GB8853@breakpoint.cc>
References: <20170421192729.GA8853@breakpoint.cc>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170421192729.GA8853@breakpoint.cc>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2801
Lines: 99

Florian Westphal wrote:
> Indeed.  Setting net.netfilter.nf_conntrack_default_on=0 cuts
> cleanup time by 2/3 ...
>
> nf unregister is way too happy to issue synchronize_net(), I'll work on
> a fix.

I'll test this patch as a start.  Maybe we can also leverage exit_batch
more on the netfilter side.

diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index a87a6f8a74d8..08fe1f526265 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -126,14 +126,15 @@ int nf_register_net_hook(struct net *net, const struct nf_hook_ops *reg)
 }
 EXPORT_SYMBOL(nf_register_net_hook);
 
-void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
+static struct nf_hook_entry *
+__nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
 {
         struct nf_hook_entry __rcu **pp;
         struct nf_hook_entry *p;
 
         pp = nf_hook_entry_head(net, reg);
         if (WARN_ON_ONCE(!pp))
-                return;
+                return NULL;
 
         mutex_lock(&nf_hook_mutex);
         for (; (p = nf_entry_dereference(*pp)) != NULL; pp = &p->next) {
@@ -145,7 +146,7 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
         mutex_unlock(&nf_hook_mutex);
         if (!p) {
                 WARN(1, "nf_unregister_net_hook: hook not found!\n");
-                return;
+                return NULL;
         }
 #ifdef CONFIG_NETFILTER_INGRESS
         if (reg->pf == NFPROTO_NETDEV && reg->hooknum == NF_NETDEV_INGRESS)
@@ -154,6 +155,17 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
 #ifdef HAVE_JUMP_LABEL
         static_key_slow_dec(&nf_hooks_needed[reg->pf][reg->hooknum]);
 #endif
+
+        return p;
+}
+
+void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
+{
+        struct nf_hook_entry *p = __nf_unregister_net_hook(net, reg);
+
+        if (!p)
+                return;
+
         synchronize_net();
         nf_queue_nf_hook_drop(net, p);
         /* other cpu might still process nfqueue verdict that used reg */
@@ -183,10 +195,36 @@ int nf_register_net_hooks(struct net *net, const struct nf_hook_ops *reg,
 EXPORT_SYMBOL(nf_register_net_hooks);
 
 void nf_unregister_net_hooks(struct net *net, const struct nf_hook_ops *reg,
-                             unsigned int n)
+                             unsigned int hookcount)
 {
-        while (n-- > 0)
-                nf_unregister_net_hook(net, &reg[n]);
+        struct nf_hook_entry *to_free[16];
+        unsigned int i, n;
+
+        WARN_ON_ONCE(hookcount > ARRAY_SIZE(to_free));
+
+next_round:
+        n = min_t(unsigned int, hookcount, ARRAY_SIZE(to_free));
+
+        for (i = 0; i < n; i++)
+                to_free[i] = __nf_unregister_net_hook(net, &reg[i]);
+
+        synchronize_net();
+
+        for (i = 0; i < n; i++) {
+                if (to_free[i])
+                        nf_queue_nf_hook_drop(net, to_free[i]);
+        }
+
+        synchronize_net();
+
+        for (i = 0; i < n; i++)
+                kfree(to_free[i]);
+
+        if (n < hookcount) {
+                hookcount -= n;
+                reg += n;
+                goto next_round;
+        }
 }
 EXPORT_SYMBOL(nf_unregister_net_hooks);
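
For context, a minimal sketch of the exit_batch idea mentioned above -- this
is not part of the patch, and the example_* names are made up for
illustration; only struct pernet_operations, .exit_batch,
register_pernet_subsys() and nf_(un)register_net_hooks() are the real
interfaces.  A pernet user that tears down its hooks from ->exit pays the
RCU grace periods once per dying namespace, whereas an ->exit_batch callback
sees the whole list of namespaces that cleanup_net() is destroying and can
unregister everything for the batch in one pass:

/* Sketch only: batch netfilter hook teardown across all namespaces
 * being destroyed by cleanup_net(), so grace-period waits are shared
 * by the batch instead of being paid per namespace.
 */
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <net/net_namespace.h>

static unsigned int example_hook_fn(void *priv, struct sk_buff *skb,
                                    const struct nf_hook_state *state)
{
        return NF_ACCEPT;       /* placeholder hook body */
}

static const struct nf_hook_ops example_hooks[] = {
        {
                .hook     = example_hook_fn,
                .pf       = NFPROTO_IPV4,
                .hooknum  = NF_INET_PRE_ROUTING,
                .priority = NF_IP_PRI_FIRST,
        },
};

static int example_net_init(struct net *net)
{
        return nf_register_net_hooks(net, example_hooks,
                                     ARRAY_SIZE(example_hooks));
}

/* Called once with the full list of namespaces being destroyed. */
static void example_net_exit_batch(struct list_head *net_exit_list)
{
        struct net *net;

        list_for_each_entry(net, net_exit_list, exit_list)
                nf_unregister_net_hooks(net, example_hooks,
                                        ARRAY_SIZE(example_hooks));
        /*
         * With an unlink/free split like __nf_unregister_net_hook()
         * in the patch above, a single synchronize_net() covering
         * every namespace in the batch would go here.
         */
}

static struct pernet_operations example_net_ops = {
        .init       = example_net_init,
        .exit_batch = example_net_exit_batch,
};

static int __init example_init(void)
{
        return register_pernet_subsys(&example_net_ops);
}

static void __exit example_exit(void)
{
        unregister_pernet_subsys(&example_net_ops);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

Combined with the unlink/free split in the patch, the expensive
synchronize_net() calls would then scale with the number of cleanup
batches rather than with namespaces times hooks.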