From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Daniel Bristot de Oliveira, Boqun Feng,
 Daniel Borkmann, Eric Dumazet, Frederic Weisbecker, Ingo Molnar,
 Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Thomas Gleixner,
 Waiman Long, Will Deacon, Sebastian Andrzej Siewior,
 Alexei Starovoitov, Andrii Nakryiko, Eduard Zingerman, Hao Luo,
 Jesper Dangaard Brouer, Jiri Olsa, John Fastabend, KP Singh,
 Martin KaFai Lau, Song Liu, Stanislav Fomichev,
 Toke Høiland-Jørgensen, Yonghong Song, bpf@vger.kernel.org
Subject: [PATCH v4 net-next 13/14] net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.
Date: Tue, 4 Jun 2024 17:24:20 +0200
Message-ID: <20240604154425.878636-14-bigeasy@linutronix.de>
In-Reply-To: <20240604154425.878636-1-bigeasy@linutronix.de>
References: <20240604154425.878636-1-bigeasy@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

The XDP redirect process is two staged:
- bpf_prog_run_xdp() is invoked to run an eBPF program which inspects
  the packet and makes decisions. While doing that, the per-CPU
  variable bpf_redirect_info is used.
- Afterwards xdp_do_redirect() is invoked and accesses
  bpf_redirect_info; it may also access other per-CPU variables like
  xskmap_flush_list.

At the very end of the NAPI callback, xdp_do_flush() is invoked. It
does not access bpf_redirect_info but it does touch the individual
per-CPU lists.

The per-CPU variables are only used in the NAPI callback, so disabling
bottom halves is the only protection mechanism. Users from preemptible
context (like cpu_map_kthread_run()) explicitly disable bottom halves
for protection reasons. Since local_bh_disable() does not act as a
per-CPU lock on PREEMPT_RT, this data structure would require explicit
locking there.

PREEMPT_RT has forced-threaded interrupts enabled and every NAPI
callback runs in a thread. If each thread has its own data structure,
locking can be avoided.

Create a struct bpf_net_context which contains struct
bpf_redirect_info. Define the variable on stack and use
bpf_net_ctx_set() to save a pointer to it. Use the __free() annotation
to automatically reset the pointer once the function returns.
bpf_net_ctx_set() may nest: for instance, a function can be used from
within NET_RX_SOFTIRQ/net_rx_action, which uses bpf_net_ctx_set(), and
from NET_TX_SOFTIRQ, which does not. Therefore only the first
invocation updates the pointer. Use bpf_net_ctx_get_ri() as a wrapper
to retrieve the current struct bpf_redirect_info.

On PREEMPT_RT the pointer to bpf_net_context is saved in the task's
task_struct. On non-PREEMPT_RT builds the pointer is saved in a per-CPU
variable (which is always NODE-local memory). Always using the
bpf_net_context approach has the advantage that there are almost zero
differences between the PREEMPT_RT and non-PREEMPT_RT builds.

Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Eduard Zingerman
Cc: Hao Luo
Cc: Jesper Dangaard Brouer
Cc: Jiri Olsa
Cc: John Fastabend
Cc: KP Singh
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Stanislav Fomichev
Cc: Toke Høiland-Jørgensen
Cc: Yonghong Song
Cc: bpf@vger.kernel.org
Acked-by: Alexei Starovoitov
Signed-off-by: Sebastian Andrzej Siewior
---
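A note ahead of the diff: the nesting rule described above is the
subtle part, so here is a minimal user-space model of it (illustration
only, not kernel code: "current->bpf_net_context" is emulated with a
thread-local pointer, and the helper names simply mirror the ones this
patch adds). It builds with any C11 compiler and asserts that a nested
bpf_net_ctx_set() leaves the outer context in place:

  #include <assert.h>
  #include <stdio.h>
  #include <string.h>

  struct bpf_redirect_info { unsigned int kern_flags; };
  struct bpf_net_context { struct bpf_redirect_info ri; };

  /* Stand-in for current->bpf_net_context. */
  static _Thread_local struct bpf_net_context *cur_ctx;

  static struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *ctx)
  {
          if (cur_ctx)            /* nested invocation: the outer context wins */
                  return NULL;
          memset(&ctx->ri, 0, sizeof(ctx->ri));
          cur_ctx = ctx;
          return ctx;
  }

  static void bpf_net_ctx_clear(struct bpf_net_context *ctx)
  {
          if (ctx)                /* NULL from a nested set() makes this a no-op */
                  cur_ctx = NULL;
  }

  static struct bpf_redirect_info *bpf_net_ctx_get_ri(void)
  {
          return &cur_ctx->ri;
  }

  /* A function reached from a context that did not install its own
   * storage, e.g. NET_TX_SOFTIRQ in the description above. */
  static void inner(void)
  {
          struct bpf_net_context __ctx, *ctx = bpf_net_ctx_set(&__ctx);

          assert(ctx == NULL);                    /* outer context already installed */
          bpf_net_ctx_get_ri()->kern_flags |= 1;  /* still lands in the outer ri */
          bpf_net_ctx_clear(ctx);                 /* no-op, outer context stays valid */
  }

  int main(void)
  {
          /* Like net_rx_action(): the outermost set() installs the on-stack context. */
          struct bpf_net_context __ctx, *ctx = bpf_net_ctx_set(&__ctx);

          inner();
          printf("kern_flags seen by outer context: %u\n",
                 bpf_net_ctx_get_ri()->kern_flags);       /* prints 1 */
          bpf_net_ctx_clear(ctx);                         /* outermost clear resets the pointer */
          return 0;
  }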
 include/linux/filter.h | 45 +++++++++++++++++++++++++++++++++++-------
 include/linux/sched.h  |  3 +++
 kernel/bpf/cpumap.c    |  3 +++
 kernel/bpf/devmap.c    |  9 ++++++++-
 kernel/fork.c          |  1 +
 net/bpf/test_run.c     | 11 ++++++++++-
 net/core/dev.c         | 19 +++++++++++++++++-
 net/core/filter.c      | 44 +++++++++++------------------------------
 net/core/lwt_bpf.c     |  3 +++
 9 files changed, 96 insertions(+), 42 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index b02aea291b7e8..ea69f776a09f2 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -744,7 +744,40 @@ struct bpf_redirect_info {
 	struct bpf_nh_params nh;
 };
 
-DECLARE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
+struct bpf_net_context {
+	struct bpf_redirect_info ri;
+};
+
+static inline struct bpf_net_context *bpf_net_ctx_set(struct bpf_net_context *bpf_net_ctx)
+{
+	struct task_struct *tsk = current;
+
+	if (tsk->bpf_net_context != NULL)
+		return NULL;
+	memset(&bpf_net_ctx->ri, 0, sizeof(bpf_net_ctx->ri));
+	tsk->bpf_net_context = bpf_net_ctx;
+	return bpf_net_ctx;
+}
+
+static inline void bpf_net_ctx_clear(struct bpf_net_context *bpf_net_ctx)
+{
+	if (bpf_net_ctx)
+		current->bpf_net_context = NULL;
+}
+
+static inline struct bpf_net_context *bpf_net_ctx_get(void)
+{
+	return current->bpf_net_context;
+}
+
+static inline struct bpf_redirect_info *bpf_net_ctx_get_ri(void)
+{
+	struct bpf_net_context *bpf_net_ctx = bpf_net_ctx_get();
+
+	return &bpf_net_ctx->ri;
+}
+
+DEFINE_FREE(bpf_net_ctx_clear, struct bpf_net_context *, bpf_net_ctx_clear(_T));
 
 /* flags for bpf_redirect_info kern_flags */
 #define BPF_RI_F_RF_NO_DIRECT	BIT(0)	/* no napi_direct on return_frame */
@@ -1018,25 +1051,23 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
 				       const struct bpf_insn *patch, u32 len);
 int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt);
 
-void bpf_clear_redirect_map(struct bpf_map *map);
-
 static inline bool xdp_return_frame_no_direct(void)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	return ri->kern_flags & BPF_RI_F_RF_NO_DIRECT;
 }
 
 static inline void xdp_set_return_frame_no_direct(void)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	ri->kern_flags |= BPF_RI_F_RF_NO_DIRECT;
 }
 
 static inline void xdp_clear_return_frame_no_direct(void)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	ri->kern_flags &= ~BPF_RI_F_RF_NO_DIRECT;
 }
@@ -1592,7 +1623,7 @@ static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 inde
						   u64 flags, const u64 flag_mask,
						   void *lookup_elem(struct bpf_map *map, u32 key))
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX;
 
 	/* Lower bits of the flags are used as return code on lookup failure */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index a9b0ca72db55f..dfa1843ab2916 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -53,6 +53,7 @@ struct bio_list;
 struct blk_plug;
 struct bpf_local_storage;
 struct bpf_run_ctx;
+struct bpf_net_context;
 struct capture_control;
 struct cfs_rq;
 struct fs_struct;
@@ -1508,6 +1509,8 @@ struct task_struct {
 	/* Used for BPF run context */
 	struct bpf_run_ctx		*bpf_ctx;
 #endif
+	/* Used by BPF for per-TASK xdp storage */
+	struct bpf_net_context		*bpf_net_context;
 
 #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
 	unsigned long			lowest_stack;
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index a8e34416e960f..66974bd027109 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -240,12 +240,14 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
			       int xdp_n, struct xdp_cpumap_stats *stats,
			       struct list_head *list)
 {
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	int nframes;
 
 	if (!rcpu->prog)
 		return xdp_n;
 
 	rcu_read_lock_bh();
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 
 	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
 
@@ -255,6 +257,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 	if (unlikely(!list_empty(list)))
 		cpu_map_bpf_prog_run_skb(rcpu, list, stats);
 
+	bpf_net_ctx_clear(bpf_net_ctx);
 	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
 
 	return nframes;
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 4e2cdbb5629f2..3d9d62c6525d4 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -196,7 +196,14 @@ static void dev_map_free(struct bpf_map *map)
 	list_del_rcu(&dtab->list);
 	spin_unlock(&dev_map_lock);
 
-	bpf_clear_redirect_map(map);
+	/* bpf_redirect_info->map is assigned in __bpf_xdp_redirect_map()
+	 * during NAPI callback and cleared after the XDP redirect. There is no
+	 * explicit RCU read section which protects bpf_redirect_info->map but
+	 * local_bh_disable() also marks the beginning of an RCU section. This
+	 * makes the complete softirq callback RCU protected. Thus after the
+	 * following synchronize_rcu() there is no bpf_redirect_info->map == map
+	 * assignment.
+	 */
 	synchronize_rcu();
 
 	/* Make sure prior __dev_map_entry_free() have completed. */
diff --git a/kernel/fork.c b/kernel/fork.c
index 99076dbe27d83..f314bdd7e6108 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2355,6 +2355,7 @@ __latent_entropy struct task_struct *copy_process(
 	RCU_INIT_POINTER(p->bpf_storage, NULL);
 	p->bpf_ctx = NULL;
 #endif
+	p->bpf_net_context = NULL;
 
 	/* Perform scheduler related setup. Assign this task to a CPU. */
 	retval = sched_fork(clone_flags, p);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index f6aad4ed2ab2f..600cc8e428c1a 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -283,9 +283,10 @@ static int xdp_recv_frames(struct xdp_frame **frames, int nframes,
 static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
			      u32 repeat)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	int err = 0, act, ret, i, nframes = 0, batch_sz;
 	struct xdp_frame **frames = xdp->frames;
+	struct bpf_redirect_info *ri;
 	struct xdp_page_head *head;
 	struct xdp_frame *frm;
 	bool redirect = false;
@@ -295,6 +296,8 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
 	batch_sz = min_t(u32, repeat, xdp->batch_size);
 
 	local_bh_disable();
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+	ri = bpf_net_ctx_get_ri();
 	xdp_set_return_frame_no_direct();
 
 	for (i = 0; i < batch_sz; i++) {
@@ -359,6 +362,7 @@ static int xdp_test_run_batch(struct xdp_test_data *xdp, struct bpf_prog *prog,
 	}
 
 	xdp_clear_return_frame_no_direct();
+	bpf_net_ctx_clear(bpf_net_ctx);
 	local_bh_enable();
 	return err;
 }
@@ -394,6 +398,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
 static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
			u32 *retval, u32 *time, bool xdp)
 {
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	struct bpf_prog_array_item item = {.prog = prog};
 	struct bpf_run_ctx *old_ctx;
 	struct bpf_cg_run_ctx run_ctx;
@@ -419,10 +424,14 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
 	do {
 		run_ctx.prog_item = &item;
 		local_bh_disable();
+		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+
 		if (xdp)
 			*retval = bpf_prog_run_xdp(prog, ctx);
 		else
 			*retval = bpf_prog_run(prog, ctx);
+
+		bpf_net_ctx_clear(bpf_net_ctx);
 		local_bh_enable();
 	} while (bpf_test_timer_continue(&t, 1, repeat, &ret, time));
 	bpf_reset_run_ctx(old_ctx);
diff --git a/net/core/dev.c b/net/core/dev.c
index 2c3f86c8cd176..6480be4f42993 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4030,11 +4030,15 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
		   struct net_device *orig_dev, bool *another)
 {
 	struct bpf_mprog_entry *entry = rcu_dereference_bh(skb->dev->tcx_ingress);
+	struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL;
 	enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_INGRESS;
+	struct bpf_net_context __bpf_net_ctx;
 	int sch_ret;
 
 	if (!entry)
 		return skb;
+
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 	if (*pt_prev) {
 		*ret = deliver_skb(skb, *pt_prev, orig_dev);
 		*pt_prev = NULL;
@@ -4085,13 +4089,17 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
 static __always_inline struct sk_buff *
 sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
 {
+	struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL;
 	struct bpf_mprog_entry *entry = rcu_dereference_bh(dev->tcx_egress);
 	enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_EGRESS;
+	struct bpf_net_context __bpf_net_ctx;
 	int sch_ret;
 
 	if (!entry)
 		return skb;
 
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+
 	/* qdisc_skb_cb(skb)->pkt_len & tcx_set_ingress() was
 	 * already set by the caller.
 	 */
@@ -6358,11 +6366,11 @@ static void __napi_busy_loop(unsigned int napi_id,
 {
 	unsigned long start_time = loop_end ? busy_loop_current_time() : 0;
 	int (*napi_poll)(struct napi_struct *napi, int budget);
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	void *have_poll_lock = NULL;
 	struct napi_struct *napi;
 
 	WARN_ON_ONCE(!rcu_read_lock_held());
-
 restart:
 	napi_poll = NULL;
 
@@ -6376,6 +6384,7 @@ static void __napi_busy_loop(unsigned int napi_id,
 		int work = 0;
 
 		local_bh_disable();
+		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 		if (!napi_poll) {
 			unsigned long val = READ_ONCE(napi->state);
 
@@ -6406,6 +6415,7 @@ static void __napi_busy_loop(unsigned int napi_id,
 		__NET_ADD_STATS(dev_net(napi->dev),
				LINUX_MIB_BUSYPOLLRXPACKETS, work);
 		skb_defer_free_flush(this_cpu_ptr(&softnet_data));
+		bpf_net_ctx_clear(bpf_net_ctx);
 		local_bh_enable();
 
 		if (!loop_end || loop_end(loop_end_arg, start_time))
@@ -6833,6 +6843,7 @@ static int napi_thread_wait(struct napi_struct *napi)
 
 static void napi_threaded_poll_loop(struct napi_struct *napi)
 {
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	struct softnet_data *sd;
 	unsigned long last_qs = jiffies;
 
@@ -6841,6 +6852,8 @@ static void napi_threaded_poll_loop(struct napi_struct *napi)
 		void *have;
 
 		local_bh_disable();
+		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+
 		sd = this_cpu_ptr(&softnet_data);
 		sd->in_napi_threaded_poll = true;
 
@@ -6856,6 +6869,7 @@ static void napi_threaded_poll_loop(struct napi_struct *napi)
			net_rps_action_and_irq_enable(sd);
 		}
 		skb_defer_free_flush(sd);
+		bpf_net_ctx_clear(bpf_net_ctx);
 		local_bh_enable();
 
 		if (!repoll)
@@ -6878,13 +6892,16 @@ static int napi_threaded_poll(void *data)
 
 static __latent_entropy void net_rx_action(struct softirq_action *h)
 {
+	struct bpf_net_context *bpf_net_ctx __free(bpf_net_ctx_clear) = NULL;
 	struct softnet_data *sd = this_cpu_ptr(&softnet_data);
 	unsigned long time_limit = jiffies +
 		usecs_to_jiffies(READ_ONCE(net_hotdata.netdev_budget_usecs));
 	int budget = READ_ONCE(net_hotdata.netdev_budget);
+	struct bpf_net_context __bpf_net_ctx;
 	LIST_HEAD(list);
 	LIST_HEAD(repoll);
 
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
start:
 	sd->in_net_rx_action = true;
 	local_irq_disable();
diff --git a/net/core/filter.c b/net/core/filter.c
index d6cf1a63c3f43..deb0be323b0c4 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2475,9 +2475,6 @@ static const struct bpf_func_proto bpf_clone_redirect_proto = {
 	.arg3_type      = ARG_ANYTHING,
 };
 
-DEFINE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);
-EXPORT_PER_CPU_SYMBOL_GPL(bpf_redirect_info);
-
 static struct net_device *skb_get_peer_dev(struct net_device *dev)
 {
 	const struct net_device_ops *ops = dev->netdev_ops;
@@ -2490,7 +2487,7 @@ static struct net_device *skb_get_peer_dev(struct net_device *dev)
 
 int skb_do_redirect(struct sk_buff *skb)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	struct net *net = dev_net(skb->dev);
 	struct net_device *dev;
 	u32 flags = ri->flags;
@@ -2523,7 +2520,7 @@ int skb_do_redirect(struct sk_buff *skb)
 
 BPF_CALL_2(bpf_redirect, u32, ifindex, u64, flags)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL)))
 		return TC_ACT_SHOT;
@@ -2544,7 +2541,7 @@ static const struct bpf_func_proto bpf_redirect_proto = {
 
 BPF_CALL_2(bpf_redirect_peer, u32, ifindex, u64, flags)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	if (unlikely(flags))
 		return TC_ACT_SHOT;
@@ -2566,7 +2563,7 @@ static const struct bpf_func_proto bpf_redirect_peer_proto = {
 BPF_CALL_4(bpf_redirect_neigh, u32, ifindex, struct bpf_redir_neigh *, params,
	   int, plen, u64, flags)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	if (unlikely((plen && plen < sizeof(*params)) || flags))
 		return TC_ACT_SHOT;
@@ -4292,30 +4289,13 @@ void xdp_do_check_flushed(struct napi_struct *napi)
 }
 #endif
 
-void bpf_clear_redirect_map(struct bpf_map *map)
-{
-	struct bpf_redirect_info *ri;
-	int cpu;
-
-	for_each_possible_cpu(cpu) {
-		ri = per_cpu_ptr(&bpf_redirect_info, cpu);
-		/* Avoid polluting remote cacheline due to writes if
-		 * not needed. Once we pass this test, we need the
-		 * cmpxchg() to make sure it hasn't been changed in
-		 * the meantime by remote CPU.
-		 */
-		if (unlikely(READ_ONCE(ri->map) == map))
-			cmpxchg(&ri->map, map, NULL);
-	}
-}
-
 DEFINE_STATIC_KEY_FALSE(bpf_master_redirect_enabled_key);
 EXPORT_SYMBOL_GPL(bpf_master_redirect_enabled_key);
 
 u32 xdp_master_redirect(struct xdp_buff *xdp)
 {
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	struct net_device *master, *slave;
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
 
 	master = netdev_master_upper_dev_get_rcu(xdp->rxq->dev);
 	slave = master->netdev_ops->ndo_xdp_get_xmit_slave(master, xdp);
@@ -4387,7 +4367,7 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
 	map = READ_ONCE(ri->map);
 
 	/* The map pointer is cleared when the map is being torn
-	 * down by bpf_clear_redirect_map()
+	 * down by dev_map_free()
 	 */
 	if (unlikely(!map)) {
 		err = -ENOENT;
@@ -4432,7 +4412,7 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
		    struct bpf_prog *xdp_prog)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
 
 	if (map_type == BPF_MAP_TYPE_XSKMAP)
@@ -4446,7 +4426,7 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect);
 int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
			  struct xdp_frame *xdpf, struct bpf_prog *xdp_prog)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
 
 	if (map_type == BPF_MAP_TYPE_XSKMAP)
@@ -4463,7 +4443,7 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
				       enum bpf_map_type map_type, u32 map_id,
				       u32 flags)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	struct bpf_map *map;
 	int err;
 
@@ -4475,7 +4455,7 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 	map = READ_ONCE(ri->map);
 
 	/* The map pointer is cleared when the map is being torn
-	 * down by bpf_clear_redirect_map()
+	 * down by dev_map_free()
 	 */
 	if (unlikely(!map)) {
 		err = -ENOENT;
@@ -4517,7 +4497,7 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
			    struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
 	void *fwd = ri->tgt_value;
 	u32 map_id = ri->map_id;
@@ -4553,7 +4533,7 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
 
 BPF_CALL_2(bpf_xdp_redirect, u32, ifindex, u64, flags)
 {
-	struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 
 	if (unlikely(flags))
 		return XDP_ABORTED;
diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
index a94943681e5aa..afb05f58b64c5 100644
--- a/net/core/lwt_bpf.c
+++ b/net/core/lwt_bpf.c
@@ -38,12 +38,14 @@ static inline struct bpf_lwt *bpf_lwt_lwtunnel(struct lwtunnel_state *lwt)
 static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
		       struct dst_entry *dst, bool can_redirect)
 {
+	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	int ret;
 
 	/* Disabling BH is needed to protect per-CPU bpf_redirect_info between
 	 * BPF prog and skb_do_redirect().
 	 */
 	local_bh_disable();
+	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 	bpf_compute_data_pointers(skb);
 	ret = bpf_prog_run_save_cb(lwt->prog, skb);
 
@@ -76,6 +78,7 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
 		break;
 	}
 
+	bpf_net_ctx_clear(bpf_net_ctx);
 	local_bh_enable();
 
 	return ret;
-- 
2.45.1
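
Postscript for readers unfamiliar with the __free(bpf_net_ctx_clear)
annotation used in net_rx_action() and sch_handle_ingress() above:
DEFINE_FREE() and __free() come from <linux/cleanup.h> and expand to
the compiler's cleanup attribute, which runs a function when the
annotated variable goes out of scope. A rough user-space model of that
mechanism follows (illustration only; the cleanup_bpf_net_ctx and
__free_bpf_net_ctx names are made up for this sketch):

  #include <stdio.h>

  struct bpf_net_context { int unused; };

  static void bpf_net_ctx_clear(struct bpf_net_context *ctx)
  {
          if (ctx)
                  puts("bpf_net_ctx_clear() ran automatically on scope exit");
  }

  /* Roughly what DEFINE_FREE(bpf_net_ctx_clear, struct bpf_net_context *,
   * bpf_net_ctx_clear(_T)) generates: a helper taking a pointer to the
   * annotated variable, wired up via the cleanup attribute. */
  static void cleanup_bpf_net_ctx(struct bpf_net_context **p)
  {
          bpf_net_ctx_clear(*p);
  }
  #define __free_bpf_net_ctx __attribute__((cleanup(cleanup_bpf_net_ctx)))

  int main(void)
  {
          struct bpf_net_context __ctx;
          /* Mirrors: struct bpf_net_context *bpf_net_ctx
           *                  __free(bpf_net_ctx_clear) = NULL; */
          struct bpf_net_context *ctx __free_bpf_net_ctx = NULL;

          ctx = &__ctx;   /* like bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx) */
          puts("context installed");
          return 0;       /* cleanup runs here, including on early returns */
  }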