From: Vlad Buslov <vladbu@mellanox.com>
To: jiri@resnulli.us
Cc: netdev@vger.kernel.org, jhs@mojatatu.com, xiyou.wangcong@gmail.com,
    davem@davemloft.net, linux-kernel@vger.kernel.org,
    Vlad Buslov <vladbu@mellanox.com>
Subject: [PATCH net-next v3] net: sched: don't disable bh when accessing action idr
Date: Wed, 23 May 2018 11:52:54 +0300
Message-Id: <1527065574-11299-1-git-send-email-vladbu@mellanox.com>
X-Mailer: git-send-email 2.7.5
In-Reply-To: <20180523073239.GC3155@nanopsycho>
References: <20180523073239.GC3155@nanopsycho>
X-Mailing-List: linux-kernel@vger.kernel.org
The initial net_device implementation used the ingress_lock spinlock to
synchronize the ingress path of a device. This lock was taken in both process
and bh context, and in some code paths the action map lock was obtained while
ingress_lock was held. Commit e1e992e52faa ("[NET_SCHED] protect action
config/dump from irqs") therefore modified the actions to always disable bh
while taking the action map lock, in order to prevent a deadlock on
ingress_lock in softirq context. That lock was removed in commit 555353cfa1ae
("netdev: The ingress_lock member is no longer needed.").

Another reason to disable bh was the filter delete code, which released
actions from an rcu callback. That code was changed to release actions from
workqueue context in the patch set "net_sched: close the race between
call_rcu() and cleanup_net()".

With these changes it is no longer necessary to disable bh while accessing
the action map.

Replace all action idr spinlock usage with regular calls that do not disable
bh.

Acked-by: Jiri Pirko
Acked-by: Jamal Hadi Salim
Signed-off-by: Vlad Buslov
---
Changes from V2 to V3:
- Expanded commit message.

Changes from V1 to V2:
- Expanded commit message.

 net/sched/act_api.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/net/sched/act_api.c b/net/sched/act_api.c
index 72251241665a..3f4cf930f809 100644
--- a/net/sched/act_api.c
+++ b/net/sched/act_api.c
@@ -77,9 +77,9 @@ static void free_tcf(struct tc_action *p)
 
 static void tcf_idr_remove(struct tcf_idrinfo *idrinfo, struct tc_action *p)
 {
-	spin_lock_bh(&idrinfo->lock);
+	spin_lock(&idrinfo->lock);
 	idr_remove(&idrinfo->action_idr, p->tcfa_index);
-	spin_unlock_bh(&idrinfo->lock);
+	spin_unlock(&idrinfo->lock);
 	gen_kill_estimator(&p->tcfa_rate_est);
 	free_tcf(p);
 }
@@ -156,7 +156,7 @@ static int tcf_dump_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
 	struct tc_action *p;
 	unsigned long id = 1;
 
-	spin_lock_bh(&idrinfo->lock);
+	spin_lock(&idrinfo->lock);
 
 	s_i = cb->args[0];
 
@@ -191,7 +191,7 @@ static int tcf_dump_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
 	if (index >= 0)
 		cb->args[0] = index + 1;
 
-	spin_unlock_bh(&idrinfo->lock);
+	spin_unlock(&idrinfo->lock);
 	if (n_i) {
 		if (act_flags & TCA_FLAG_LARGE_DUMP_ON)
 			cb->args[1] = n_i;
@@ -261,9 +261,9 @@ static struct tc_action *tcf_idr_lookup(u32 index, struct tcf_idrinfo *idrinfo)
 {
 	struct tc_action *p = NULL;
 
-	spin_lock_bh(&idrinfo->lock);
+	spin_lock(&idrinfo->lock);
 	p = idr_find(&idrinfo->action_idr, index);
-	spin_unlock_bh(&idrinfo->lock);
+	spin_unlock(&idrinfo->lock);
 	return p;
 }
 
@@ -323,7 +323,7 @@ int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
 	}
 	spin_lock_init(&p->tcfa_lock);
 	idr_preload(GFP_KERNEL);
-	spin_lock_bh(&idrinfo->lock);
+	spin_lock(&idrinfo->lock);
 	/* user doesn't specify an index */
 	if (!index) {
 		index = 1;
@@ -331,7 +331,7 @@ int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est,
 	} else {
 		err = idr_alloc_u32(idr, NULL, &index, index, GFP_ATOMIC);
 	}
-	spin_unlock_bh(&idrinfo->lock);
+	spin_unlock(&idrinfo->lock);
 	idr_preload_end();
 	if (err)
 		goto err3;
@@ -369,9 +369,9 @@ void tcf_idr_insert(struct tc_action_net *tn, struct tc_action *a)
 {
 	struct tcf_idrinfo *idrinfo = tn->idrinfo;
 
-	spin_lock_bh(&idrinfo->lock);
+	spin_lock(&idrinfo->lock);
 	idr_replace(&idrinfo->action_idr, a, a->tcfa_index);
-	spin_unlock_bh(&idrinfo->lock);
+	spin_unlock(&idrinfo->lock);
 }
 EXPORT_SYMBOL(tcf_idr_insert);
-- 
2.7.5
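
For readers less familiar with the _bh spinlock variants, here is a minimal
illustrative sketch (not part of the patch; struct my_idrinfo and my_lookup
are invented names) of the pattern the change relies on: when a lock is only
ever taken from process context, the plain spin_lock()/spin_unlock() pair is
sufficient, and spin_lock_bh()/spin_unlock_bh() would only be needed if a
softirq path could contend for the same lock.

/* Illustrative sketch only -- my_idrinfo and my_lookup are made-up names,
 * not part of net/sched/act_api.c.
 */
#include <linux/idr.h>
#include <linux/spinlock.h>

struct my_idrinfo {
	spinlock_t lock;
	struct idr ids;
};

/* Called only from process context (e.g. netlink request handlers), so the
 * plain lock/unlock pair is enough.  The _bh variants, which also disable
 * softirqs, would only be required if a bottom-half path could take
 * idrinfo->lock as well -- which is no longer the case for the action idr.
 */
static void *my_lookup(struct my_idrinfo *idrinfo, unsigned long index)
{
	void *p;

	spin_lock(&idrinfo->lock);
	p = idr_find(&idrinfo->ids, index);
	spin_unlock(&idrinfo->lock);
	return p;
}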