From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak, Larysa Zaremba,
	Alexander Duyck, Yunsheng Lin, David Christensen,
	Jesper Dangaard Brouer, Ilias Apalodimas, Paul Menzel,
	netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next v4 8/9] libie: add per-queue Page Pool stats
Date: Wed, 5 Jul 2023 17:55:50 +0200
Message-ID: <20230705155551.1317583-9-aleksander.lobakin@intel.com>
In-Reply-To: <20230705155551.1317583-1-aleksander.lobakin@intel.com>
References: <20230705155551.1317583-1-aleksander.lobakin@intel.com>

Expand the libie generic per-queue stats with the generic Page Pool
stats provided by the API itself when CONFIG_PAGE_POOL_STATS is enabled.
When it is not, there are no such fields in the stats structure, so no
space is wasted.

These stats are also a bit special in how they are obtained: a
&page_pool accumulates statistics only until it is destroyed, which
happens on ifdown.
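The resulting accounting scheme (fold a dying pool's counters into a persistent per-queue container, then add the currently active pool's live counters back when reading) can be sketched in plain userspace C. All names below (struct pool, stats_sync_pool(), stats_read_alloc_fast()) are hypothetical illustrations, not the libie or Page Pool API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pool: its counters live only as long as the pool itself,
 * i.e. from ifup to ifdown.
 */
struct pool {
	uint64_t alloc_fast;
	uint64_t recycle_ring;
};

/* Hypothetical per-queue container: survives ifup/ifdown cycles */
struct rq_stats {
	uint64_t alloc_fast;
	uint64_t recycle_ring;
};

/* Called right before the pool is destroyed (ifdown): fold the live
 * counters into the container so they are not lost with the pool.
 */
void stats_sync_pool(struct rq_stats *qs, const struct pool *p)
{
	qs->alloc_fast += p->alloc_fast;
	qs->recycle_ring += p->recycle_ring;
}

/* Called when exporting stats: stored totals plus the currently active
 * pool's live numbers give device-lifetime values. A NULL pool means
 * "no live pool right now" (interface is down).
 */
uint64_t stats_read_alloc_fast(const struct rq_stats *qs,
			       const struct pool *p)
{
	return qs->alloc_fast + (p ? p->alloc_fast : 0);
}
```

The real code does the same folding generically for every counter, by treating both &page_pool_stats and the pp group of &libie_rq_stats as flat u64 arrays of equal size.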
So, in order to not lose any statistics, get the stats and store them in
the queue container before destroying a pool. This container survives
ifups/downs, so it effectively stores the statistics accumulated since
the very first pool was allocated on this queue. When the stats need to
be exported, first take the numbers from this container and then add the
"live" numbers -- the ones the currently active pool returns. The
resulting values always represent the actual device-lifetime stats.

There is a cast from &page_pool_stats to `u64 *` in a couple of
functions, but it is guarded with static asserts to make sure it is safe
to do. FWIW it saves a lot of object code.

Reviewed-by: Paul Menzel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/libie/internal.h | 23 +++++++
 drivers/net/ethernet/intel/libie/rx.c       | 19 ++++++
 drivers/net/ethernet/intel/libie/stats.c    | 73 ++++++++++++++++++++-
 include/linux/net/intel/libie/rx.h          |  4 ++
 include/linux/net/intel/libie/stats.h       | 39 ++++++++++-
 5 files changed, 155 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/libie/internal.h

diff --git a/drivers/net/ethernet/intel/libie/internal.h b/drivers/net/ethernet/intel/libie/internal.h
new file mode 100644
index 000000000000..083398dc37c6
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/internal.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* libie internal declarations not to be used in drivers.
+ *
+ * Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_INTERNAL_H
+#define __LIBIE_INTERNAL_H
+
+struct libie_rq_stats;
+struct page_pool;
+
+#ifdef CONFIG_PAGE_POOL_STATS
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+			    struct page_pool *pool);
+#else
+static inline void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+					  struct page_pool *pool)
+{
+}
+#endif
+
+#endif /* __LIBIE_INTERNAL_H */
diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c
index c60d7b20ed20..0c26bd066587 100644
--- a/drivers/net/ethernet/intel/libie/rx.c
+++ b/drivers/net/ethernet/intel/libie/rx.c
@@ -2,6 +2,7 @@
 /* Copyright(c) 2023 Intel Corporation. */
 
 #include
+#include "internal.h"
 
 /* Rx buffer management */
 
@@ -57,6 +58,24 @@ struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
 }
 EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
 
+/**
+ * libie_rx_page_pool_destroy - destroy a &page_pool created by libie
+ * @pool: pool to destroy
+ * @stats: RQ stats from the ring (or %NULL to skip updating PP stats)
+ *
+ * As the stats usually have the same lifetime as the device, but the PP
+ * is usually created/destroyed on ifup/ifdown, in order to not lose the
+ * stats accumulated during the last ifup, the PP stats need to be added
+ * to the driver stats container. Then the PP gets destroyed.
+ */
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+				struct libie_rq_stats *stats)
+{
+	libie_rq_stats_sync_pp(stats, pool);
+	page_pool_destroy(pool);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_destroy, LIBIE);
+
 /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
  * bitfield struct.
 */
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
index 61456842a362..71c7ce14edca 100644
--- a/drivers/net/ethernet/intel/libie/stats.c
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -3,6 +3,9 @@
 
 #include
 #include
+#include
+
+#include "internal.h"
 
 /* Rx per-queue stats */
 
@@ -14,6 +17,70 @@ static const char * const libie_rq_stats_str[] = {
 
 #define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+/**
+ * libie_rq_stats_get_pp - get the current stats from a &page_pool
+ * @sarr: local array to add stats to
+ * @pool: pool to get the stats from
+ *
+ * Adds the current "live" stats from an online PP to the stats read from
+ * the RQ container, so that the actual totals will be returned.
+ */
+static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+	struct page_pool_stats *pps;
+	/* Used only to calculate pos below */
+	struct libie_rq_stats tmp;
+	u32 pos;
+
+	/* Validate the libie PP stats array can be casted <-> PP struct */
+	static_assert(sizeof(tmp.pp) == sizeof(*pps));
+
+	if (!pool)
+		return;
+
+	/* Position of the first Page Pool stats field */
+	pos = (u64_stats_t *)&tmp.pp - tmp.raw;
+	pps = (typeof(pps))&sarr[pos];
+
+	page_pool_get_stats(pool, pps);
+}
+
+/**
+ * libie_rq_stats_sync_pp - add the current PP stats to the RQ stats container
+ * @stats: stats structure to update
+ * @pool: pool to read the stats
+ *
+ * Called by libie_rx_page_pool_destroy() to save the stats before destroying
+ * the pool.
+ */
+void libie_rq_stats_sync_pp(struct libie_rq_stats *stats,
+			    struct page_pool *pool)
+{
+	u64_stats_t *qarr = (u64_stats_t *)&stats->pp;
+	struct page_pool_stats pps = { };
+	u64 *sarr = (u64 *)&pps;
+
+	if (!stats)
+		return;
+
+	page_pool_get_stats(pool, &pps);
+
+	u64_stats_update_begin(&stats->syncp);
+
+	for (u32 i = 0; i < sizeof(pps) / sizeof(*sarr); i++)
+		u64_stats_add(&qarr[i], sarr[i]);
+
+	u64_stats_update_end(&stats->syncp);
+}
+#else
+static void libie_rq_stats_get_pp(u64 *sarr, struct page_pool *pool)
+{
+}
+
+/* static inline void libie_rq_stats_sync_pp() is declared in "internal.h" */
+#endif
+
 /**
  * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
  *
@@ -41,8 +108,10 @@ EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
 /**
  * libie_rq_stats_get_data - get the RQ stats in Ethtool format
  * @data: reference to the cursor pointing to the output array
  * @stats: RQ stats container from the queue
+ * @pool: &page_pool from the queue (%NULL to ignore PP "live" stats)
  */
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+			     struct page_pool *pool)
 {
 	u64 sarr[LIBIE_RQ_STATS_NUM];
 	u32 start;
@@ -54,6 +123,8 @@ void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
 		sarr[i] = u64_stats_read(&stats->raw[i]);
 	} while (u64_stats_fetch_retry(&stats->syncp, start));
 
+	libie_rq_stats_get_pp(sarr, pool);
+
 	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
 		(*data)[i] += sarr[i];
 
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 8c0ccdff9a37..c6c85f956f95 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -62,8 +62,12 @@ struct libie_rx_buffer {
 	u32 truesize;
 };
 
+struct libie_rq_stats;
+
 struct page_pool *libie_rx_page_pool_create(struct napi_struct *napi,
 					    u32 size);
+void libie_rx_page_pool_destroy(struct page_pool *pool,
+				struct libie_rq_stats *stats);
 
 /**
  * libie_rx_alloc - allocate a new Rx buffer
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
index dbbc98bbd3a7..23ca0079a905 100644
--- a/include/linux/net/intel/libie/stats.h
+++ b/include/linux/net/intel/libie/stats.h
@@ -49,6 +49,17 @@
  * fragments: number of processed descriptors carrying only a fragment
  * alloc_page_fail: number of Rx page allocation fails
  * build_skb_fail: number of build_skb() fails
+ * pp_alloc_fast: pages taken from the cache or ring
+ * pp_alloc_slow: actual page allocations
+ * pp_alloc_slow_ho: non-order-0 page allocations
+ * pp_alloc_empty: number of times the pool was empty
+ * pp_alloc_refill: number of cache refills
+ * pp_alloc_waive: NUMA node mismatches during recycling
+ * pp_recycle_cached: direct recyclings into the cache
+ * pp_recycle_cache_full: number of times the cache was full
+ * pp_recycle_ring: recyclings into the ring
+ * pp_recycle_ring_full: number of times the ring was full
+ * pp_recycle_released_ref: pages released due to elevated refcnt
  */
 
 #define DECLARE_LIBIE_RQ_NAPI_STATS(act) \
@@ -60,9 +71,29 @@
 	act(alloc_page_fail) \
 	act(build_skb_fail)
 
+#ifdef CONFIG_PAGE_POOL_STATS
+#define DECLARE_LIBIE_RQ_PP_STATS(act) \
+	act(pp_alloc_fast) \
+	act(pp_alloc_slow) \
+	act(pp_alloc_slow_ho) \
+	act(pp_alloc_empty) \
+	act(pp_alloc_refill) \
+	act(pp_alloc_waive) \
+	act(pp_recycle_cached) \
+	act(pp_recycle_cache_full) \
+	act(pp_recycle_ring) \
+	act(pp_recycle_ring_full) \
+	act(pp_recycle_released_ref)
+#else
+#define DECLARE_LIBIE_RQ_PP_STATS(act)
+#endif
+
 #define DECLARE_LIBIE_RQ_STATS(act) \
 	DECLARE_LIBIE_RQ_NAPI_STATS(act) \
-	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+	DECLARE_LIBIE_RQ_FAIL_STATS(act) \
+	DECLARE_LIBIE_RQ_PP_STATS(act)
+
+struct page_pool;
 
 struct libie_rq_stats {
 	struct u64_stats_sync syncp;
@@ -72,6 +103,9 @@ struct libie_rq_stats {
 #define act(s) u64_stats_t s;
 		DECLARE_LIBIE_RQ_NAPI_STATS(act);
 		DECLARE_LIBIE_RQ_FAIL_STATS(act);
+		struct_group(pp,
+			DECLARE_LIBIE_RQ_PP_STATS(act);
+		);
 #undef act
 	};
 	DECLARE_FLEX_ARRAY(u64_stats_t, raw);
@@ -110,7 +144,8 @@ libie_rq_napi_stats_add(struct libie_rq_stats *qs,
 u32 libie_rq_stats_get_sset_count(void);
 void libie_rq_stats_get_strings(u8 **data, u32 qid);
-void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats,
+			     struct page_pool *pool);
 
 /* Tx per-queue stats:
  * packets: packets sent from this queue
-- 
2.41.0
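As a side note, the DECLARE_LIBIE_RQ_*_STATS(act) macros above are a classic X-macro: a single list of counter names expands once into the struct fields and once into the Ethtool string table, so the two can never drift apart, and the PP entries simply disappear from both when CONFIG_PAGE_POOL_STATS is off. A minimal userspace sketch of the same trick (with made-up counter names, not the libie list):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One list of counter names, expanded twice below */
#define DECLARE_STATS(act) \
	act(alloc_fast)    \
	act(alloc_slow)    \
	act(recycle_ring)

/* Expansion 1: struct fields, one u64 per counter */
struct stats {
#define act(s) uint64_t s;
	DECLARE_STATS(act)
#undef act
};

/* Expansion 2: the matching string table, same order by construction */
static const char * const stats_str[] = {
#define act(s) #s,
	DECLARE_STATS(act)
#undef act
};

#define STATS_NUM (sizeof(stats_str) / sizeof(stats_str[0]))
```

Because both expansions come from the same list, adding or #ifdef-ing out a counter updates the struct, the string table, and STATS_NUM in one place.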