Date: Wed, 27 Dec 2023 16:20:09 +0800
Subject: Re: [RFC nf-next v3 1/2] netfilter: bpf: support prog update
From: "D. Wythe" <alibuda@linux.alibaba.com>
To: Alexei Starovoitov
Cc: Pablo Neira Ayuso, Jozsef Kadlecsik, Florian Westphal, bpf, LKML,
 Network Development, coreteam@netfilter.org, netfilter-devel,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov
References: <1703081351-85579-1-git-send-email-alibuda@linux.alibaba.com>
 <1703081351-85579-2-git-send-email-alibuda@linux.alibaba.com>
 <1d3cb7fc-c1dc-a779-8952-cdbaaf696ce3@linux.alibaba.com>

On 12/23/23 6:23 AM, Alexei Starovoitov wrote:
> On Thu, Dec 21, 2023 at 11:06 PM D. Wythe wrote:
>>
>> On 12/21/23 5:11 AM, Alexei Starovoitov wrote:
>>> On Wed, Dec 20, 2023 at 6:09 AM D. Wythe wrote:
>>>> From: "D. Wythe"
>>>>
>>>> To support prog update, we need to ensure that the prog seen
>>>> within the hook is always valid. Hooks are always protected by
>>>> rcu_read_lock(), which provides us the ability to access the
>>>> prog under RCU.
>>>>
>>>> Signed-off-by: D. Wythe
>>>> ---
>>>>  net/netfilter/nf_bpf_link.c | 63 ++++++++++++++++++++++++++++++++++-----------
>>>>  1 file changed, 48 insertions(+), 15 deletions(-)
>>>>
>>>> diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c
>>>> index e502ec0..9bc91d1 100644
>>>> --- a/net/netfilter/nf_bpf_link.c
>>>> +++ b/net/netfilter/nf_bpf_link.c
>>>> @@ -8,17 +8,8 @@
>>>>  #include
>>>>  #include
>>>>
>>>> -static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
>>>> -				    const struct nf_hook_state *s)
>>>> -{
>>>> -	const struct bpf_prog *prog = bpf_prog;
>>>> -	struct bpf_nf_ctx ctx = {
>>>> -		.state = s,
>>>> -		.skb = skb,
>>>> -	};
>>>> -
>>>> -	return bpf_prog_run(prog, &ctx);
>>>> -}
>>>> +/* protect link update in parallel */
>>>> +static DEFINE_MUTEX(bpf_nf_mutex);
>>>>
>>>>  struct bpf_nf_link {
>>>>  	struct bpf_link link;
>>>> @@ -26,8 +17,20 @@ struct bpf_nf_link {
>>>>  	struct net *net;
>>>>  	u32 dead;
>>>>  	const struct nf_defrag_hook *defrag_hook;
>>>> +	struct rcu_head head;
>>> I have to point out the same issues as before, but
>>> will ask them differently...
>>>
>>> Why do you think above rcu_head is necessary?
>>>>  };
>>>>
>>>> +static unsigned int nf_hook_run_bpf(void *bpf_link, struct sk_buff *skb,
>>>> +				    const struct nf_hook_state *s)
>>>> +{
>>>> +	const struct bpf_nf_link *nf_link = bpf_link;
>>>> +	struct bpf_nf_ctx ctx = {
>>>> +		.state = s,
>>>> +		.skb = skb,
>>>> +	};
>>>> +	return bpf_prog_run(rcu_dereference_raw(nf_link->link.prog), &ctx);
>>>> +}
>>>> +
>>>>  #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) || IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
>>>>  static const struct nf_defrag_hook *
>>>>  get_proto_defrag_hook(struct bpf_nf_link *link,
>>>> @@ -126,8 +129,7 @@ static void bpf_nf_link_release(struct bpf_link *link)
>>>>  static void bpf_nf_link_dealloc(struct bpf_link *link)
>>>>  {
>>>>  	struct bpf_nf_link *nf_link = container_of(link, struct bpf_nf_link, link);
>>>> -
>>>> -	kfree(nf_link);
>>>> +	kfree_rcu(nf_link, head);
>>> Why is this needed ?
>>> Have you looked at tcx_link_lops ?
>> Introducing rcu_head/kfree_rcu is to address the situation where the
>> netfilter hooks might still access the link after bpf_nf_link_dealloc.
> Why do you think so?
>

Hi Alexei,

IMHO, nf_unregister_net_hook() does not wait for the completion of the
hook that is being removed. Instead, it allocates a new array without
that hook to replace the old array via rcu_assign_pointer() (in
__nf_hook_entries_try_shrink), then uses call_rcu() to release the old
one. You can find more details in commit
8c873e2199700c2de7dbd5eedb9d90d5f109462b.

In other words, when nf_unregister_net_hook() returns, there may still
be contexts executing hooks on the old array, which means the `link`
may still be accessed after nf_unregister_net_hook() returns. That is
why we use kfree_rcu() to release the `link`.
>> nf_hook_run_bpf
>>     const struct bpf_nf_link *nf_link = bpf_link;
>>
>>                          bpf_nf_link_release
>>                              nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops);
>>
>>                          bpf_nf_link_dealloc
>>                              free(link)
>>     bpf_prog_run(link->prog);
>>
>> I had checked the tcx_link_lops; it seems it uses synchronize_rcu()
>> to solve the
> Where do you see such code in tcx_link_lops ?

I'm not certain whether the reason it chose synchronize_rcu() is the
same as mine, but I did see it here: tcx_link_release() ->
tcx_entry_sync()

static inline void tcx_entry_sync(void)
{
	/* bpf_mprog_entry got a/b swapped, therefore ensure that
	 * there are no inflight users on the old one anymore.
	 */
	synchronize_rcu();
}

>> same problem, which is also the way we used in the first version.
>>
>> https://lore.kernel.org/bpf/1702467945-38866-1-git-send-email-alibuda@linux.alibaba.com/
>>
>> However, we have received some opposing views, believing that this
>> is a bit overkill, so we decided to use kfree_rcu.
>>
>> https://lore.kernel.org/bpf/20231213222415.GA13818@breakpoint.cc/
>>
>>>>  }
>>>>
>>>>  static int bpf_nf_link_detach(struct bpf_link *link)
>>>> @@ -162,7 +164,34 @@ static int bpf_nf_link_fill_link_info(const struct bpf_link *link,
>>>>  static int bpf_nf_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
>>>> 			       struct bpf_prog *old_prog)
>>>>  {
>>>> -	return -EOPNOTSUPP;
>>>> +	struct bpf_nf_link *nf_link = container_of(link, struct bpf_nf_link, link);
>>>> +	int err = 0;
>>>> +
>>>> +	mutex_lock(&bpf_nf_mutex);
>>> Why do you need this mutex?
>>> What race does it solve?
>> To avoid users updating a link with different progs at the same time.
>> I noticed that sys_bpf() doesn't seem to prevent being invoked
>> concurrently. Have I missed something?
> You're correct that sys_bpf() doesn't lock anything.
> But what are you serializing in this bpf_nf_link_update() ?
> What will happen if multiple bpf_nf_link_update()
> without mutex run on different CPUs in parallel ?

I must admit that it is indeed feasible if we eliminate the mutex and
use cmpxchg to swap the prog (we need to ensure that there is only one
bpf_prog_put() on the old prog). However, when cmpxchg fails, it means
that this context has not outcompeted the other one, and we have to
return a failure. Maybe something like this:

if (cmpxchg(&link->prog, old_prog, new_prog) != old_prog) {
	/* already replaced by another link_update */
	return -xxx;
}

As a comparison, the version with the mutex wouldn't encounter this
error; every update would succeed. I think it's too harsh for the user
to receive a failure in that case, since they haven't done anything
wrong.

Best wishes,
D. Wythe