From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak,
    Larysa Zaremba, Alexander Duyck, Yunsheng Lin,
    David Christensen, Jesper Dangaard Brouer, Ilias Apalodimas,
    Paul Menzel, netdev@vger.kernel.org,
    intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v5 12/14] libie: add common queue stats
Date: Fri, 24 Nov 2023 16:47:30 +0100
Message-ID: <20231124154732.1623518-13-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20231124154732.1623518-1-aleksander.lobakin@intel.com>
References: <20231124154732.1623518-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Next stop,
per-queue private stats. They have only subtle differences from driver
to driver and can easily be resolved.
Define common structures, inline helpers and Ethtool helpers to
collect, update and export the statistics. Use u64_stats_t right from
the start, as well as the corresponding helpers to ensure tear-free
operations.
For the NAPI parts of both Rx and Tx, also define small onstack
containers to update them in polling loops and then sync the actual
containers once a loop ends.
The drivers will be switched to use this API later on a per-driver
basis, along with conversion to PP.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/ethernet/intel/libie/Makefile |   1 +
 drivers/net/ethernet/intel/libie/stats.c  | 122 +++++++++++++++
 include/linux/net/intel/libie/rx.h        |   5 +
 include/linux/net/intel/libie/stats.h     | 181 ++++++++++++++++++++++
 4 files changed, 309 insertions(+)
 create mode 100644 drivers/net/ethernet/intel/libie/stats.c
 create mode 100644 include/linux/net/intel/libie/stats.h

diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile
index 95e81d09b474..76f32253481b 100644
--- a/drivers/net/ethernet/intel/libie/Makefile
+++ b/drivers/net/ethernet/intel/libie/Makefile
@@ -4,3 +4,4 @@
 obj-$(CONFIG_LIBIE)	+= libie.o
 
 libie-objs		+= rx.o
+libie-objs		+= stats.o
diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
new file mode 100644
index 000000000000..bdcbe4304c55
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation. */
+
+#include <linux/ethtool.h>
+
+#include <linux/net/intel/libie/rx.h>
+#include <linux/net/intel/libie/stats.h>
+
+/* Rx per-queue stats */
+
+static const char * const libie_rq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_RQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
+
+/**
+ * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
+ *
+ * Return: number of per-queue Rx stats supported by the library.
+ */
+u32 libie_rq_stats_get_sset_count(void)
+{
+	return LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_rq_stats_get_strings - get the name strings of Ethtool RQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: RQ number to print in the prefix
+ */
+void libie_rq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "rq%u_%s", qid, libie_rq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
+
+/**
+ * libie_rq_stats_get_data - get the RQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @rq: Rx queue to fetch the data from
+ */
+void libie_rq_stats_get_data(u64 **data, const struct libie_rx_queue *rq)
+{
+	const struct libie_rq_stats *stats = rq->stats;
+	u64 sarr[LIBIE_RQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_data, LIBIE);
+
+/* Tx per-queue stats */
+
+static const char * const libie_sq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_SQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_SQ_STATS_NUM	ARRAY_SIZE(libie_sq_stats_str)
+
+/**
+ * libie_sq_stats_get_sset_count - get the number of Ethtool SQ stats provided
+ *
+ * Return: number of per-queue Tx stats supported by the library.
+ */
+u32 libie_sq_stats_get_sset_count(void)
+{
+	return LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_sq_stats_get_strings - get the name strings of Ethtool SQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: SQ number to print in the prefix
+ */
+void libie_sq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "sq%u_%s", qid, libie_sq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_strings, LIBIE);
+
+/**
+ * libie_sq_stats_get_data - get the SQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: SQ stats container from the queue
+ */
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats)
+{
+	u64 sarr[LIBIE_SQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_data, LIBIE);
diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h
index 06c4f00dad63..8542815c9973 100644
--- a/include/linux/net/intel/libie/rx.h
+++ b/include/linux/net/intel/libie/rx.h
@@ -46,6 +46,8 @@
 
 /* Rx buffer management */
 
+struct libie_rq_stats;
+
 /**
  * struct libie_rx_buffer - structure representing an Rx buffer
  * @page: page holding the buffer
@@ -68,6 +70,7 @@ struct libie_rx_buffer {
  * @rx_bi: array of Rx buffers
  * @truesize: size to allocate per buffer, w/overhead
  * @count: number of descriptors/buffers the queue has
+ * @stats: pointer to the software per-queue stats
  * @rx_buf_len: HW-writeable length per each buffer
  */
 struct libie_rx_queue {
@@ -77,6 +80,8 @@ struct libie_rx_queue {
 	u32 truesize;
 	u32 count;
 
+	struct libie_rq_stats *stats;
+
 	/* Cold fields */
 	u32 rx_buf_len;
 };
diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
new file mode 100644
index 000000000000..4e6dfb8c715f
--- /dev/null
+++ b/include/linux/net/intel/libie/stats.h
@@ -0,0 +1,181 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef __LIBIE_STATS_H
+#define __LIBIE_STATS_H
+
+#include <linux/u64_stats_sync.h>
+
+/* Common */
+
+/* Use 32-byte alignment to reduce false sharing */
+#define __libie_stats_aligned	__aligned(4 * sizeof(u64_stats_t))
+
+/**
+ * libie_stats_add - update one structure counter from a local struct
+ * @qs: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @ss: local/onstack stats structure
+ * @f: name of the field to update
+ *
+ * If a local/onstack stats structure is used to collect statistics during
+ * hotpath loops, this macro can be used to shorthand updates, given that
+ * the fields have the same name.
+ * Must be guarded with u64_stats_update_{begin,end}().
+ */
+#define libie_stats_add(qs, ss, f)		\
+	u64_stats_add(&(qs)->f, (ss)->f)
+
+/**
+ * __libie_stats_inc_one - safely increment one stats structure counter
+ * @s: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @f: name of the field to increment
+ * @n: name of the temporary variable, result of __UNIQUE_ID()
+ *
+ * To be used on exception or slow paths -- allocation fails, queue stops
+ * etc.
+ */
+#define __libie_stats_inc_one(s, f, n) ({	\
+	typeof(*(s)) *n = (s);			\
+						\
+	u64_stats_update_begin(&n->syncp);	\
+	u64_stats_inc(&n->f);			\
+	u64_stats_update_end(&n->syncp);	\
+})
+#define libie_stats_inc_one(s, f)		\
+	__libie_stats_inc_one(s, f, __UNIQUE_ID(qs_))
+
+/* Rx per-queue stats:
+ * packets: packets received on this queue
+ * bytes: bytes received on this queue
+ * fragments: number of processed descriptors carrying only a fragment
+ * alloc_page_fail: number of Rx page allocation fails
+ * build_skb_fail: number of build_skb() fails
+ */
+
+#define DECLARE_LIBIE_RQ_NAPI_STATS(act)	\
+	act(packets)				\
+	act(bytes)				\
+	act(fragments)
+
+#define DECLARE_LIBIE_RQ_FAIL_STATS(act)	\
+	act(alloc_page_fail)			\
+	act(build_skb_fail)
+
+#define DECLARE_LIBIE_RQ_STATS(act)		\
+	DECLARE_LIBIE_RQ_NAPI_STATS(act)	\
+	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+
+struct libie_rx_queue;
+
+struct libie_rq_stats {
+	struct u64_stats_sync syncp;
+
+	union {
+		struct {
+#define act(s) u64_stats_t s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+			DECLARE_LIBIE_RQ_FAIL_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+/* Rx stats that are modified frequently during the NAPI polling, to sync
+ * them with the queue stats once after the loop is finished.
+ */
+struct libie_rq_onstack_stats {
+	union {
+		struct {
+#define act(s) u32 s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u32, raw);
+	};
+};
+
+/**
+ * libie_rq_napi_stats_add - add onstack Rx stats to the queue container
+ * @qs: Rx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_rq_napi_stats_add(struct libie_rq_stats *qs,
+			const struct libie_rq_onstack_stats *ss)
+{
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	libie_stats_add(qs, ss, fragments);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_rq_stats_get_sset_count(void);
+void libie_rq_stats_get_strings(u8 **data, u32 qid);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rx_queue *rq);
+
+/* Tx per-queue stats:
+ * packets: packets sent from this queue
+ * bytes: bytes sent from this queue
+ * busy: number of xmit failures due to the ring being full
+ * stops: number of times the ring was stopped from the driver
+ * restarts: number of times it was restarted after being stopped
+ * linearized: number of skbs linearized due to HW limits
+ */
+
+#define DECLARE_LIBIE_SQ_NAPI_STATS(act)	\
+	act(packets)				\
+	act(bytes)
+
+#define DECLARE_LIBIE_SQ_XMIT_STATS(act)	\
+	act(busy)				\
+	act(stops)				\
+	act(restarts)				\
+	act(linearized)
+
+#define DECLARE_LIBIE_SQ_STATS(act)		\
+	DECLARE_LIBIE_SQ_NAPI_STATS(act)	\
+	DECLARE_LIBIE_SQ_XMIT_STATS(act)
+
+struct libie_sq_stats {
+	struct u64_stats_sync syncp;
+
+	union {
+		struct {
+#define act(s) u64_stats_t s;
+			DECLARE_LIBIE_SQ_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+struct libie_sq_onstack_stats {
+#define act(s) u32 s;
+	DECLARE_LIBIE_SQ_NAPI_STATS(act);
+#undef act
+};
+
+/**
+ * libie_sq_napi_stats_add - add onstack Tx stats to the queue container
+ * @qs: Tx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_sq_napi_stats_add(struct libie_sq_stats *qs,
+			const struct libie_sq_onstack_stats *ss)
+{
+	if (unlikely(!ss->packets))
+		return;
+
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_sq_stats_get_sset_count(void);
+void libie_sq_stats_get_strings(u8 **data, u32 qid);
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats);
+
+#endif /* __LIBIE_STATS_H */
-- 
2.42.0