From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Miller" , Boqun Feng , Daniel Borkmann , Eric Dumazet , Frederic Weisbecker , Ingo Molnar , Jakub Kicinski , Paolo Abeni , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , Sebastian Andrzej Siewior , Alexei Starovoitov , Andrii Nakryiko , Cong Wang , Hao Luo , Jamal Hadi Salim , Jesper Dangaard Brouer , Jiri Olsa , Jiri Pirko , John Fastabend , KP Singh , Martin KaFai Lau , Ronak Doshi , Song Liu , Stanislav Fomichev , VMware PV-Drivers Reviewers , Yonghong Song , bpf@vger.kernel.org Subject: [PATCH net-next 15/24] net: Use nested-BH locking for XDP redirect. Date: Fri, 15 Dec 2023 18:07:34 +0100 Message-ID: <20231215171020.687342-16-bigeasy@linutronix.de> In-Reply-To: <20231215171020.687342-1-bigeasy@linutronix.de> References: <20231215171020.687342-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable The per-CPU variables used during bpf_prog_run_xdp() invocation and later during xdp_do_redirect() rely on disabled BH for their protection. Without locking in local_bh_disable() on PREEMPT_RT these data structure require explicit locking. This is a follow-up on the previous change which introduced bpf_run_lock.redirect_lock and uses it now within drivers. The simple way is to acquire the lock before bpf_prog_run_xdp() is invoked and hold it until the end of function. This does not always work because some drivers (cpsw, atlantic) invoke xdp_do_flush() in the same context. Acquiring the lock in bpf_prog_run_xdp() and dropping in xdp_do_redirect() (without touching drivers) does not work because not all driver, which use bpf_prog_run_xdp(), do support XDP_REDIRECT (and invoke xdp_do_redirect()). Ideally the minimal locking scope would be bpf_prog_run_xdp() + xdp_do_redirect() and everything else (error recovery, DMA unmapping, free/ alloc of memory, =E2=80=A6) would happen outside of the locked sectio= n. 
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Cong Wang
Cc: Hao Luo
Cc: Jamal Hadi Salim
Cc: Jesper Dangaard Brouer
Cc: Jiri Olsa
Cc: Jiri Pirko
Cc: John Fastabend
Cc: KP Singh
Cc: Martin KaFai Lau
Cc: Ronak Doshi
Cc: Song Liu
Cc: Stanislav Fomichev
Cc: VMware PV-Drivers Reviewers
Cc: Yonghong Song
Cc: bpf@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
---
 drivers/net/vmxnet3/vmxnet3_xdp.c |  1 +
 kernel/bpf/cpumap.c               |  2 ++
 net/bpf/test_run.c                | 11 ++++++++---
 net/core/dev.c                    |  3 +++
 net/core/filter.c                 |  1 +
 net/core/lwt_bpf.c                |  2 ++
 net/sched/cls_api.c               |  2 ++
 7 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
index 80ddaff759d47..18bce98fd2e31 100644
--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
+++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
@@ -257,6 +257,7 @@ vmxnet3_run_xdp(struct vmxnet3_rx_queue *rq, struct xdp_buff *xdp,
 	u32 act;
 
 	rq->stats.xdp_packets++;
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = bpf_prog_run_xdp(prog, xdp);
 	page = virt_to_page(xdp->data_hard_start);
 
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 8a0bb80fe48a3..c26d49bb78679 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -144,6 +144,7 @@ static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
 	int err;
 
 	list_for_each_entry_safe(skb, tmp, listp, list) {
+		guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 		act = bpf_prog_run_generic_xdp(skb, &xdp, rcpu->prog);
 		switch (act) {
 		case XDP_PASS:
@@ -182,6 +183,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 	struct xdp_buff xdp;
 	int i, nframes = 0;
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	xdp_set_return_frame_no_direct();
 	xdp.rxq = &rxq;
 
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index c9fdcc5cdce10..db8f7eb35c6ca 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -293,6 +293,7 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
 	batch_sz = min_t(u32, repeat, xdp->batch_size);
 
 	local_bh_disable();
+	local_lock_nested_bh(&bpf_run_lock.redirect_lock);
 	xdp_set_return_frame_no_direct();
 
 	for (i = 0; i < batch_sz; i++) {
@@ -348,6 +349,9 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
 	}
 
 out:
+	xdp_clear_return_frame_no_direct();
+	local_unlock_nested_bh(&bpf_run_lock.redirect_lock);
+
 	if (redirect)
 		xdp_do_flush();
 	if (nframes) {
@@ -356,7 +360,6 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
 		err = ret;
 	}
 
-	xdp_clear_return_frame_no_direct();
 	local_bh_enable();
 	return err;
 }
@@ -417,10 +420,12 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
 	do {
 		run_ctx.prog_item = &item;
 		local_bh_disable();
-		if (xdp)
+		if (xdp) {
+			guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 			*retval = bpf_prog_run_xdp(prog, ctx);
-		else
+		} else {
 			*retval = bpf_prog_run(prog, ctx);
+		}
 		local_bh_enable();
 	} while (bpf_test_timer_continue(&t, 1, repeat, &ret, time));
 	bpf_reset_run_ctx(old_ctx);
diff --git a/net/core/dev.c b/net/core/dev.c
index 5a0f6da7b3ae5..5ba7509e88752 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3993,6 +3993,7 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
 		*pt_prev = NULL;
 	}
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	qdisc_skb_cb(skb)->pkt_len = skb->len;
 	tcx_set_ingress(skb, true);
 
@@ -4045,6 +4046,7 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
 	if (!entry)
 		return skb;
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	/* qdisc_skb_cb(skb)->pkt_len & tcx_set_ingress() was
 	 * already set by the caller.
 	 */
@@ -5008,6 +5010,7 @@ int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb)
 	u32 act;
 	int err;
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	act = netif_receive_generic_xdp(skb, &xdp, xdp_prog);
 	if (act != XDP_PASS) {
 		switch (act) {
diff --git a/net/core/filter.c b/net/core/filter.c
index 7c9653734fb60..72a7812f933a1 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4241,6 +4241,7 @@ static const struct bpf_func_proto bpf_xdp_adjust_meta_proto = {
  */
 void xdp_do_flush(void)
 {
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	__dev_flush();
 	__cpu_map_flush();
 	__xsk_map_flush();
diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
index a94943681e5aa..74b88e897a7e3 100644
--- a/net/core/lwt_bpf.c
+++ b/net/core/lwt_bpf.c
@@ -44,6 +44,7 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
 	 * BPF prog and skb_do_redirect().
 	 */
 	local_bh_disable();
+	local_lock_nested_bh(&bpf_run_lock.redirect_lock);
 	bpf_compute_data_pointers(skb);
 	ret = bpf_prog_run_save_cb(lwt->prog, skb);
 
@@ -76,6 +77,7 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
 		break;
 	}
 
+	local_unlock_nested_bh(&bpf_run_lock.redirect_lock);
 	local_bh_enable();
 
 	return ret;
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index 1976bd1639863..da61b99bc558f 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3925,6 +3926,7 @@ struct sk_buff *tcf_qevent_handle(struct tcf_qevent *qe, struct Qdisc *sch, stru
 
 	fl = rcu_dereference_bh(qe->filter_chain);
 
+	guard(local_lock_nested_bh)(&bpf_run_lock.redirect_lock);
 	switch (tcf_classify(skb, NULL, fl, &cl_res, false)) {
 	case TC_ACT_SHOT:
 		qdisc_qstats_drop(sch);
-- 
2.43.0