Date: Wed, 12 Jun 2024 13:18:29 -0400
From: Steven Rostedt
To: Sebastian Andrzej Siewior
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 "David S. Miller", Daniel Bristot de Oliveira, Boqun Feng,
 Daniel Borkmann, Eric Dumazet, Frederic Weisbecker, Ingo Molnar,
 Jakub Kicinski, Paolo Abeni, Peter Zijlstra, Thomas Gleixner,
 Waiman Long, Will Deacon, Ben Segall, Daniel Bristot de Oliveira,
 Dietmar Eggemann, Juri Lelli, Mel Gorman, Valentin Schneider,
 Vincent Guittot
Subject: Re: [PATCH v6 net-next 08/15] net: softnet_data: Make xmit.recursion per task.
Message-ID: <20240612131829.2e33ca71@rorschach.local.home>
In-Reply-To: <20240612170303.3896084-9-bigeasy@linutronix.de>
References: <20240612170303.3896084-1-bigeasy@linutronix.de>
 <20240612170303.3896084-9-bigeasy@linutronix.de>

On Wed, 12 Jun 2024 18:44:34 +0200
Sebastian Andrzej Siewior wrote:

> Softirq is preemptible on PREEMPT_RT. Without a per-CPU lock in
> local_bh_disable() there is no guarantee that only one device is
> transmitting at a time.
> With preemption and multiple senders it is possible that the per-CPU
> recursion counter gets incremented by different threads and exceeds
> XMIT_RECURSION_LIMIT, leading to a false positive recursion alert.
>
> Instead of adding a lock to protect the per-CPU variable, it is simpler
> to make the counter per-task. Sending and receiving skbs always happens
> in thread context anyway.
>
> Having a lock to protect the per-CPU counter would block/serialize two
> sending threads needlessly. It would also require a recursive lock to
> ensure that the owner can increment the counter further.
>
> Make the recursion counter a task_struct member on PREEMPT_RT.

I'm curious as to what the harm would be in using a per-task counter
instead of a per-CPU one outside of PREEMPT_RT as well. That way, we
wouldn't need the #ifdef.

-- Steve

>
> Cc: Ben Segall
> Cc: Daniel Bristot de Oliveira
> Cc: Dietmar Eggemann
> Cc: Juri Lelli
> Cc: Mel Gorman
> Cc: Steven Rostedt
> Cc: Valentin Schneider
> Cc: Vincent Guittot
> Signed-off-by: Sebastian Andrzej Siewior
> ---
>  include/linux/netdevice.h | 11 +++++++++++
>  include/linux/sched.h     |  4 +++-
>  net/core/dev.h            | 20 ++++++++++++++++++++
>  3 files changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index d20c6c99eb887..b5ec072ec2430 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -3223,7 +3223,9 @@ struct softnet_data {
>  #endif
>  	/* written and read only by owning cpu: */
>  	struct {
> +#ifndef CONFIG_PREEMPT_RT
>  		u16 recursion;
> +#endif
>  		u8  more;
>  #ifdef CONFIG_NET_EGRESS
>  		u8  skip_txqueue;
> @@ -3256,10 +3258,19 @@ struct softnet_data {
>
>  DECLARE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);
>
> +#ifdef CONFIG_PREEMPT_RT
> +static inline int dev_recursion_level(void)
> +{
> +	return current->net_xmit_recursion;
> +}
> +
> +#else
> +
>  static inline int dev_recursion_level(void)
>  {
>  	return this_cpu_read(softnet_data.xmit.recursion);
>  }
> +#endif
>
>  void __netif_schedule(struct Qdisc *q);
>  void netif_schedule_queue(struct netdev_queue *txq);
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 61591ac6eab6d..a9b0ca72db55f 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -975,7 +975,9 @@ struct task_struct {
>  	/* delay due to memory thrashing */
>  	unsigned			in_thrashing:1;
>  #endif
> -
> +#ifdef CONFIG_PREEMPT_RT
> +	u8				net_xmit_recursion;
> +#endif
>  	unsigned long			atomic_flags; /* Flags requiring atomic access. */
>
>  	struct restart_block		restart_block;
> diff --git a/net/core/dev.h b/net/core/dev.h
> index b7b518bc2be55..2f96d63053ad0 100644
> --- a/net/core/dev.h
> +++ b/net/core/dev.h
> @@ -150,6 +150,25 @@ struct napi_struct *napi_by_id(unsigned int napi_id);
>  void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
>
>  #define XMIT_RECURSION_LIMIT	8
> +
> +#ifdef CONFIG_PREEMPT_RT
> +static inline bool dev_xmit_recursion(void)
> +{
> +	return unlikely(current->net_xmit_recursion > XMIT_RECURSION_LIMIT);
> +}
> +
> +static inline void dev_xmit_recursion_inc(void)
> +{
> +	current->net_xmit_recursion++;
> +}
> +
> +static inline void dev_xmit_recursion_dec(void)
> +{
> +	current->net_xmit_recursion--;
> +}
> +
> +#else
> +
>  static inline bool dev_xmit_recursion(void)
>  {
>  	return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
> @@ -165,5 +184,6 @@ static inline void dev_xmit_recursion_dec(void)
>  {
>  	__this_cpu_dec(softnet_data.xmit.recursion);
>  }
> +#endif
>
>  #endif
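
For illustration only (this is not part of Sebastian's patch): Steve's suggestion
would roughly mean dropping both CONFIG_PREEMPT_RT branches, adding the
u8 net_xmit_recursion field to task_struct unconditionally, and keeping only the
per-task helpers. An untested sketch of what net/core/dev.h might then contain,
reusing the names from the patch above:

#define XMIT_RECURSION_LIMIT	8

/* 'current' is always the transmitting task, so no per-CPU state or #ifdef */
static inline int dev_recursion_level(void)
{
	return current->net_xmit_recursion;
}

static inline bool dev_xmit_recursion(void)
{
	return unlikely(current->net_xmit_recursion > XMIT_RECURSION_LIMIT);
}

static inline void dev_xmit_recursion_inc(void)
{
	current->net_xmit_recursion++;
}

static inline void dev_xmit_recursion_dec(void)
{
	current->net_xmit_recursion--;
}

Callers such as __dev_queue_xmit() would keep their existing shape, roughly:

	/* refuse to nest more than XMIT_RECURSION_LIMIT transmits deep */
	if (dev_xmit_recursion())
		goto recursion_alert;

	dev_xmit_recursion_inc();
	skb = dev_hard_start_xmit(skb, dev, txq, &rc);
	dev_xmit_recursion_dec();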