Message-ID: <5d8e302c-a28d-d4f4-eb91-4b54eb89490b@linux.dev>
Date: Fri, 29 Sep 2023 00:16:45 +0800
Subject: Re: [PATCH v6] net/core: Introduce netdev_core_stats_inc()
To: Eric Dumazet
Cc: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Alexander Lobakin
References: <20230928100418.521594-1-yajun.deng@linux.dev>
From: Yajun Deng

On 2023/9/28 23:44, Eric Dumazet wrote:
> On Thu, Sep 28, 2023 at 5:40 PM Yajun Deng wrote:
>>
>> On 2023/9/28 22:18, Eric Dumazet wrote:
>>> On Thu, Sep 28, 2023 at 12:04 PM Yajun Deng wrote:
>>>> Although there is a kfree_skb_reason() helper function that can be used to
>>>> find the reason why this skb is dropped, most callers didn't increase
>>>> one of rx_dropped, tx_dropped, rx_nohandler and rx_otherhost_dropped.
>>>>
>>>> For users, the bigger concern is why the dropped counters reported by ip
>>>> keep increasing.
>>>>
>>>> Introduce netdev_core_stats_inc() to trace the caller of the dropped
>>>> skb. Also, add __cold to netdev_core_stats_alloc(), as it is unlikely
>>>> to be called.
>>>>
>>>> Signed-off-by: Yajun Deng
>>>> Suggested-by: Alexander Lobakin
>>>> ---
>>>> v6: merge netdev_core_stats and netdev_core_stats_inc together
>>>> v5: Access the per cpu pointer before reaching the relevant offset.
>>>> v4: Introduce netdev_core_stats_inc() instead of exporting dev_core_stats_*_inc()
>>>> v3: __cold should be added to netdev_core_stats_alloc().
>>>> v2: use __cold instead of inline in dev_core_stats().
>>>> v1: https://lore.kernel.org/netdev/20230911082016.3694700-1-yajun.deng@linux.dev/
>>>> ---
>>>>  include/linux/netdevice.h | 21 ++++-----------------
>>>>  net/core/dev.c            | 17 +++++++++++++++--
>>>>  2 files changed, 19 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
>>>> index 7e520c14eb8c..eb1fa04fbccc 100644
>>>> --- a/include/linux/netdevice.h
>>>> +++ b/include/linux/netdevice.h
>>>> @@ -4002,32 +4002,19 @@ static __always_inline bool __is_skb_forwardable(const struct net_device *dev,
>>>>         return false;
>>>>  }
>>>>
>>>> -struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev);
>>>> -
>>>> -static inline struct net_device_core_stats __percpu *dev_core_stats(struct net_device *dev)
>>>> -{
>>>> -       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>>> -       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>>> -
>>>> -       if (likely(p))
>>>> -               return p;
>>>> -
>>>> -       return netdev_core_stats_alloc(dev);
>>>> -}
>>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset);
>>>>
>>>>  #define DEV_CORE_STATS_INC(FIELD)                                              \
>>>>  static inline void dev_core_stats_##FIELD##_inc(struct net_device *dev)        \
>>>>  {                                                                               \
>>>> -       struct net_device_core_stats __percpu *p;                               \
>>>> -                                                                               \
>>>> -       p = dev_core_stats(dev);                                                \
>>>> -       if (p)                                                                  \
>>>> -               this_cpu_inc(p->FIELD);                                         \
>>> Note that we were using this_cpu_inc(), which implied:
>>> - IRQ safety, and
>>> - a barrier paired with:
>>>
>>> net/core/dev.c:10548:           storage->rx_dropped += READ_ONCE(core_stats->rx_dropped);
>>> net/core/dev.c:10549:           storage->tx_dropped += READ_ONCE(core_stats->tx_dropped);
>>> net/core/dev.c:10550:           storage->rx_nohandler += READ_ONCE(core_stats->rx_nohandler);
>>> net/core/dev.c:10551:           storage->rx_otherhost_dropped += READ_ONCE(core_stats->rx_otherhost_dropped);
>>>
>>>> +       netdev_core_stats_inc(dev,                                              \
>>>> +                       offsetof(struct net_device_core_stats, FIELD));         \
>>>>  }
>>>>  DEV_CORE_STATS_INC(rx_dropped)
>>>>  DEV_CORE_STATS_INC(tx_dropped)
>>>>  DEV_CORE_STATS_INC(rx_nohandler)
>>>>  DEV_CORE_STATS_INC(rx_otherhost_dropped)
>>>> +#undef DEV_CORE_STATS_INC
>>>>
>>>>  static __always_inline int ____dev_forward_skb(struct net_device *dev,
>>>>                                                 struct sk_buff *skb,
>>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>>> index 606a366cc209..88a32c392c1d 100644
>>>> --- a/net/core/dev.c
>>>> +++ b/net/core/dev.c
>>>> @@ -10497,7 +10497,8 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
>>>>  }
>>>>  EXPORT_SYMBOL(netdev_stats_to_stats64);
>>>>
>>>> -struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev)
>>>> +static __cold struct net_device_core_stats __percpu *netdev_core_stats_alloc(
>>>> +               struct net_device *dev)
>>>>  {
>>>>         struct net_device_core_stats __percpu *p;
>>>>
>>>> @@ -10510,7 +10511,19 @@ struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device
>>>>         /* This READ_ONCE() pairs with the cmpxchg() above */
>>>>         return READ_ONCE(dev->core_stats);
>>>>  }
>>>> -EXPORT_SYMBOL(netdev_core_stats_alloc);
>>>> +
>>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset)
>>>> +{
>>>> +       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>>> +       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>>> +
>>>> +       if (unlikely(!p))
>>>> +               p = netdev_core_stats_alloc(dev);
>>>> +
>>>> +       if (p)
>>>> +               (*(unsigned long *)((void *)this_cpu_ptr(p) + offset))++;
>>> While here you are using a ++ operation that:
>>>
>>> - is not IRQ safe
>>> - might cause store-tearing.
>>>
>>> I would suggest a preliminary patch converting the "unsigned long" fields in
>>> struct net_device_core_stats to local_t.
>>
>> Do you mean it needs to revert the commit 6510ea973d8d ("net: Use
>> this_cpu_inc() to increment net->core_stats") first? But that would
>> allocate memory, which breaks on PREEMPT_RT.
>
> I think I provided an (untested) alternative.
>
> unsigned long __percpu *field = (__force unsigned long __percpu *)
>                                 ((__force u8 *)p + offset);
> this_cpu_inc(field);

unsigned long __percpu *field = (__force unsigned long __percpu *)
                                ((__force u8 *)p + offset);
this_cpu_inc(*(int *)field);

This would compile successfully, but I didn't test it. This code looks complex.

Should I go back to v3, i.e. export dev_core_stats_*_inc() instead of
introducing netdev_core_stats_inc()? That would be easy.

>
>>> You might be able to tweak this to
>>>
>>> unsigned long __percpu *field = (unsigned long __percpu) ((u8 *)p + offset);
>>> this_cpu_inc(field);
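
For what it's worth, here is a rough, untested sketch of how that suggestion
could be folded into netdev_core_stats_inc() in net/core/dev.c, keeping
this_cpu_inc() (and its IRQ safety) while still selecting the counter by
offset; the extra NULL check after the allocation path is my own addition,
not something from the v6 patch:

void netdev_core_stats_inc(struct net_device *dev, u32 offset)
{
	/* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
	struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
	unsigned long __percpu *field;

	if (unlikely(!p)) {
		p = netdev_core_stats_alloc(dev);
		if (!p)
			return;
	}

	/* Point at the per-cpu counter selected by offset and let
	 * this_cpu_inc() do the IRQ-safe increment.
	 */
	field = (__force unsigned long __percpu *)((__force void *)p + offset);
	this_cpu_inc(*field);
}

This would avoid both the store-tearing concern with the bare ++ and the
local_t conversion, but it still needs to be compile- and runtime-tested.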