Date: Fri, 29 Sep 2023 00:32:36 +0800
Subject: Re: [PATCH v6] net/core: Introduce netdev_core_stats_inc()
From: Yajun Deng
To: Eric Dumazet
Cc: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Alexander Lobakin
References: <20230928100418.521594-1-yajun.deng@linux.dev> <5d8e302c-a28d-d4f4-eb91-4b54eb89490b@linux.dev>

On 2023/9/29 00:23, Eric Dumazet wrote:
> On Thu, Sep 28, 2023 at 6:16 PM Yajun Deng wrote:
>>
>> On 2023/9/28 23:44, Eric Dumazet wrote:
>>> On Thu, Sep 28, 2023 at 5:40 PM Yajun Deng wrote:
>>>> On 2023/9/28 22:18, Eric Dumazet wrote:
>>>>> On Thu, Sep 28, 2023 at 12:04 PM Yajun Deng wrote:
>>>>>> Although there is a kfree_skb_reason() helper that can be used to
>>>>>> find the reason an skb was dropped, most callers do not increment
>>>>>> one of rx_dropped, tx_dropped, rx_nohandler or rx_otherhost_dropped.
>>>>>>
>>>>>> For users, what matters is why the drop counters reported by ip
>>>>>> are increasing.
>>>>>>
>>>>>> Introduce netdev_core_stats_inc() to trace the caller of a dropped
>>>>>> skb.
>>>>>> Also, add __cold to netdev_core_stats_alloc(), as it is unlikely
>>>>>> to be called.
>>>>>>
>>>>>> Signed-off-by: Yajun Deng
>>>>>> Suggested-by: Alexander Lobakin
>>>>>> ---
>>>>>> v6: merge netdev_core_stats and netdev_core_stats_inc together
>>>>>> v5: Access the per cpu pointer before reach the relevant offset.
>>>>>> v4: Introduce netdev_core_stats_inc() instead of export dev_core_stats_*_inc()
>>>>>> v3: __cold should be added to the netdev_core_stats_alloc().
>>>>>> v2: use __cold instead of inline in dev_core_stats().
>>>>>> v1: https://lore.kernel.org/netdev/20230911082016.3694700-1-yajun.deng@linux.dev/
>>>>>> ---
>>>>>>  include/linux/netdevice.h | 21 ++++-----------------
>>>>>>  net/core/dev.c            | 17 +++++++++++++++--
>>>>>>  2 files changed, 19 insertions(+), 19 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
>>>>>> index 7e520c14eb8c..eb1fa04fbccc 100644
>>>>>> --- a/include/linux/netdevice.h
>>>>>> +++ b/include/linux/netdevice.h
>>>>>> @@ -4002,32 +4002,19 @@ static __always_inline bool __is_skb_forwardable(const struct net_device *dev,
>>>>>>          return false;
>>>>>>  }
>>>>>>
>>>>>> -struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev);
>>>>>> -
>>>>>> -static inline struct net_device_core_stats __percpu *dev_core_stats(struct net_device *dev)
>>>>>> -{
>>>>>> -       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>>>>> -       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>>>>> -
>>>>>> -       if (likely(p))
>>>>>> -               return p;
>>>>>> -
>>>>>> -       return netdev_core_stats_alloc(dev);
>>>>>> -}
>>>>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset);
>>>>>>
>>>>>>  #define DEV_CORE_STATS_INC(FIELD)                                               \
>>>>>>  static inline void dev_core_stats_##FIELD##_inc(struct net_device *dev)         \
>>>>>>  {                                                                               \
>>>>>> -       struct net_device_core_stats __percpu *p;                                \
>>>>>> -                                                                                \
>>>>>> -       p = dev_core_stats(dev);                                                 \
>>>>>> -       if (p)                                                                   \
>>>>>> -               this_cpu_inc(p->FIELD);                                          \
>>>>> Note that we were using this_cpu_inc() which implied :
>>>>> - IRQ safety, and
>>>>> - a barrier paired with :
>>>>>
>>>>> net/core/dev.c:10548: storage->rx_dropped += READ_ONCE(core_stats->rx_dropped);
>>>>> net/core/dev.c:10549: storage->tx_dropped += READ_ONCE(core_stats->tx_dropped);
>>>>> net/core/dev.c:10550: storage->rx_nohandler += READ_ONCE(core_stats->rx_nohandler);
>>>>> net/core/dev.c:10551: storage->rx_otherhost_dropped += READ_ONCE(core_stats->rx_otherhost_dropped);
>>>>>
>>>>>> +       netdev_core_stats_inc(dev,                                               \
>>>>>> +                       offsetof(struct net_device_core_stats, FIELD));          \
>>>>>>  }
>>>>>>  DEV_CORE_STATS_INC(rx_dropped)
>>>>>>  DEV_CORE_STATS_INC(tx_dropped)
>>>>>>  DEV_CORE_STATS_INC(rx_nohandler)
>>>>>>  DEV_CORE_STATS_INC(rx_otherhost_dropped)
>>>>>> +#undef DEV_CORE_STATS_INC
>>>>>>
>>>>>>  static __always_inline int ____dev_forward_skb(struct net_device *dev,
>>>>>>                                                 struct sk_buff *skb,
>>>>>>
>>>>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>>>>> index 606a366cc209..88a32c392c1d 100644
>>>>>> --- a/net/core/dev.c
>>>>>> +++ b/net/core/dev.c
>>>>>> @@ -10497,7 +10497,8 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
>>>>>>  }
>>>>>>  EXPORT_SYMBOL(netdev_stats_to_stats64);
>>>>>>
>>>>>> -struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev)
>>>>>> +static __cold struct net_device_core_stats __percpu *netdev_core_stats_alloc(
>>>>>> +               struct net_device *dev)
>>>>>>  {
>>>>>>         struct net_device_core_stats __percpu *p;
>>>>>>
>>>>>> @@ -10510,7 +10511,19 @@ struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device
>>>>>>         /* This READ_ONCE() pairs with the cmpxchg() above */
>>>>>>         return READ_ONCE(dev->core_stats);
>>>>>>  }
>>>>>> -EXPORT_SYMBOL(netdev_core_stats_alloc);
>>>>>> +
>>>>>> +void netdev_core_stats_inc(struct net_device *dev, u32 offset)
>>>>>> +{
>>>>>> +       /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
>>>>>> +       struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
>>>>>> +
>>>>>> +       if (unlikely(!p))
>>>>>> +               p = netdev_core_stats_alloc(dev);
>>>>>> +
>>>>>> +       if (p)
>>>>>> +               (*(unsigned long *)((void *)this_cpu_ptr(p) + offset))++;
>>>>> While here you are using a ++ operation that :
>>>>>
>>>>> - is not irq safe
>>>>> - might cause store-tearing.
>>>>>
>>>>> I would suggest a preliminary patch converting the "unsigned long" fields in
>>>>> struct net_device_core_stats to local_t
>>>> Do you mean we need to revert commit 6510ea973d8d ("net: Use
>>>> this_cpu_inc() to increment net->core_stats") first? But that would
>>>> allocate memory, which breaks on PREEMPT_RT.
>>> I think I provided an (untested) alternative.
>>>
>>> unsigned long __percpu *field = (__force unsigned long __percpu *)
>>>         ((__force u8 *)p + offset);
>>> this_cpu_inc(field);
>> unsigned long __percpu *field = (__force unsigned long __percpu *)
>>         ((__force u8 *)p + offset);
>> this_cpu_inc(*(int *)field);
>>
>> This compiles successfully, but I didn't test it.
>> This code looks complex.
> Why exactly ? Not very different from the cast you already had.

Okay, I'll test it.

>
>> Should I go back to v3? Export dev_core_stats_*_inc() instead of
>> introducing netdev_core_stats_inc(). That would be easy.
> Well, you tell me, but this does not look incremental to me.
>
> I do not think we need 4 different (and maybe more to come if struct
> net_device_core_stats grows in the future) functions for some hardly
> used path.