Message-ID: <0e94149a-05f1-3f98-3f75-ca74f364a45b@linux.alibaba.com>
Date: Thu, 14 Dec 2023 13:31:22 +0800
From: "D. Wythe" <alibuda@linux.alibaba.com>
To: Florian Westphal
Cc: pablo@netfilter.org, kadlec@netfilter.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, netdev@vger.kernel.org, coreteam@netfilter.org,
 netfilter-devel@vger.kernel.org, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, ast@kernel.org
Subject: Re: [RFC nf-next 1/2] netfilter: bpf: support prog update
References: <1702467945-38866-1-git-send-email-alibuda@linux.alibaba.com>
 <1702467945-38866-2-git-send-email-alibuda@linux.alibaba.com>
 <20231213222415.GA13818@breakpoint.cc>
In-Reply-To: <20231213222415.GA13818@breakpoint.cc>

On 12/14/23 6:24 AM, Florian Westphal wrote:
> D. Wythe wrote:
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> To support prog update, we need to ensure that the prog seen from
>> within the hook is always valid. Since hooks are always protected by
>> rcu_read_lock(), we can use a new RCU-protected context to access
>> the prog.
>>
>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>> ---
>>  net/netfilter/nf_bpf_link.c | 124 +++++++++++++++++++++++++++++++++++++++-----
>>  1 file changed, 111 insertions(+), 13 deletions(-)
>>
>> diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c
>> index e502ec0..918c470 100644
>> --- a/net/netfilter/nf_bpf_link.c
>> +++ b/net/netfilter/nf_bpf_link.c
>> @@ -8,17 +8,11 @@
>>  #include
>>  #include
>>
>> -static unsigned int nf_hook_run_bpf(void *bpf_prog, struct sk_buff *skb,
>> -                                    const struct nf_hook_state *s)
>> +struct bpf_nf_hook_ctx
>>  {
>> -        const struct bpf_prog *prog = bpf_prog;
>> -        struct bpf_nf_ctx ctx = {
>> -                .state = s,
>> -                .skb = skb,
>> -        };
>> -
>> -        return bpf_prog_run(prog, &ctx);
>> -}
>> +        struct bpf_prog *prog;
>> +        struct rcu_head rcu;
>> +};
>
> I don't understand the need for this structure. AFAICS bpf_prog_put()
> will always release the program via call_rcu()?
>
> If it doesn't, we are probably already in trouble as-is, without this
> patch: I don't see anything that prevents us from ending up calling an
> already-released bpf prog, or releasing it while another cpu is still
> running it, if bpf_prog_put() releases the actual underlying prog
> instantly.
>
> A BPF expert could confirm bpf-prog-put-is-call-rcu.

Hi Florian,

I must admit that I did not realize that bpf_prog is released under
RCU ...

>>  struct bpf_nf_link {
>>          struct bpf_link link;
>> @@ -26,8 +20,59 @@ struct bpf_nf_link {
>>          struct net *net;
>>          u32 dead;
>>          const struct nf_defrag_hook *defrag_hook;
>> +        /* protect link update in parallel */
>> +        struct mutex update_lock;
>> +        struct bpf_nf_hook_ctx __rcu *hook_ctx;
>
> What kind of replacements-per-second rate are you aiming for?
> I think
>
>         static DEFINE_MUTEX(bpf_nf_mutex);
>
> is enough.

I'm okay with that.

> Then bpf_nf_link gains
>
>         struct bpf_prog __rcu *prog
>
> and possibly a trailing struct rcu_head, see below.

Yes, that's what we need.
>> +static void bpf_nf_hook_ctx_free_rcu(struct bpf_nf_hook_ctx *hook_ctx)
>> +{
>> +        call_rcu(&hook_ctx->rcu, __bpf_nf_hook_ctx_free_rcu);
>> +}
>
> Don't understand the need for call_rcu either, see below.
>
>> +static unsigned int nf_hook_run_bpf(void *bpf_link, struct sk_buff *skb,
>> +                                    const struct nf_hook_state *s)
>> +{
>> +        const struct bpf_nf_link *link = bpf_link;
>> +        struct bpf_nf_hook_ctx *hook_ctx;
>> +        struct bpf_nf_ctx ctx = {
>> +                .state = s,
>> +                .skb = skb,
>> +        };
>> +
>> +        hook_ctx = rcu_dereference(link->hook_ctx);
>
> This could then just rcu_deref link->prog.
>
>> +        return bpf_prog_run(hook_ctx->prog, &ctx);
>> +}
>> +
>>  #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4) || IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
>>  static const struct nf_defrag_hook *
>>  get_proto_defrag_hook(struct bpf_nf_link *link,
>> @@ -120,6 +165,10 @@ static void bpf_nf_link_release(struct bpf_link *link)
>>          if (!cmpxchg(&nf_link->dead, 0, 1)) {
>>                  nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops);
>>                  bpf_nf_disable_defrag(nf_link);
>> +                /* Wait for outstanding hook to complete before the
>> +                 * link gets released.
>> +                 */
>> +                synchronize_rcu();
>>          }
>
> Could you convert bpf_nf_link_dealloc to release via kfree_rcu instead?

Got it.

>> @@ -162,7 +212,42 @@ static int bpf_nf_link_fill_link_info(const struct bpf_link *link,
>>  static int bpf_nf_link_update(struct bpf_link *link, struct bpf_prog *new_prog,
>>                                struct bpf_prog *old_prog)
>>  {
>> -        return -EOPNOTSUPP;
>> +        struct bpf_nf_link *nf_link = container_of(link, struct bpf_nf_link, link);
>> +        struct bpf_nf_hook_ctx *hook_ctx;
>> +        int err = 0;
>> +
>> +        mutex_lock(&nf_link->update_lock);
>> +
>
> I think you need to check link->dead here too.

Got that.

>> +        /* bpf_nf_link_release() ensures that after its execution, there will be
>> +         * no ongoing or upcoming execution of nf_hook_run_bpf() within any context.
>> +         * Therefore, within nf_hook_run_bpf(), the link remains valid at all times.
>> +         */
>> +        link->hook_ops.priv = link;
>
> ATM we only need to make sure the bpf prog itself stays alive until after
> all concurrent rcu critical sections have completed.
>
> After this change, struct bpf_link gets passed instead, so we need to
> keep that alive too.
>
> Which works with synchronize_rcu, sure, but that seems a bit overkill here.

Got it! Thank you very much for your suggestions. I will address the
issues you mentioned in the next version.

Best wishes,
D. Wythe