From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak, Larysa Zaremba,
 Alexander Duyck, Yunsheng Lin, David Christensen, Jesper Dangaard Brouer,
 Ilias Apalodimas, Paul Menzel, netdev@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v7 11/12] libie: add common queue stats
Date: Wed, 13 Dec 2023 12:28:34 +0100
Message-ID: <20231213112835.2262651-12-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231213112835.2262651-1-aleksander.lobakin@intel.com>
References: <20231213112835.2262651-1-aleksander.lobakin@intel.com>
Next stop, per-queue private stats. They differ only subtly from driver to
driver, so the differences are easy to unify. Define common structures,
inline helpers and Ethtool helpers to collect, update and export the
statistics. Use u64_stats_t right from the start, as well as the
corresponding helpers, to ensure tear-free operations. For the NAPI parts of
both Rx and Tx, also define small onstack containers to update in the polling
loops and then sync the actual containers once a loop ends. The drivers will
be switched to use this API later, on a per-driver basis, along with the
conversion to Page Pool.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 drivers/net/ethernet/intel/libie/Makefile |   1 +
 drivers/net/ethernet/intel/libie/stats.c  | 121 +++++++++++++++
 include/linux/net/intel/libie/stats.h     | 179 ++++++++++++++++++++++
 3 files changed, 301 insertions(+)
 create mode 100644 drivers/net/ethernet/intel/libie/stats.c
 create mode 100644 include/linux/net/intel/libie/stats.h

diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile
index 95e81d09b474..76f32253481b 100644
--- a/drivers/net/ethernet/intel/libie/Makefile
+++ b/drivers/net/ethernet/intel/libie/Makefile
@@ -4,3 +4,4 @@
 obj-$(CONFIG_LIBIE) += libie.o
 
 libie-objs += rx.o
+libie-objs += stats.o

diff --git a/drivers/net/ethernet/intel/libie/stats.c b/drivers/net/ethernet/intel/libie/stats.c
new file mode 100644
index 000000000000..85f5d279406f
--- /dev/null
+++ b/drivers/net/ethernet/intel/libie/stats.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation.
+ */
+
+#include <linux/ethtool.h>
+
+#include <linux/net/intel/libie/stats.h>
+#include <linux/u64_stats_sync.h>
+
+/* Rx per-queue stats */
+
+static const char * const libie_rq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_RQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_RQ_STATS_NUM	ARRAY_SIZE(libie_rq_stats_str)
+
+/**
+ * libie_rq_stats_get_sset_count - get the number of Ethtool RQ stats provided
+ *
+ * Return: number of per-queue Rx stats supported by the library.
+ */
+u32 libie_rq_stats_get_sset_count(void)
+{
+	return LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_rq_stats_get_strings - get the name strings of Ethtool RQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: RQ number to print in the prefix
+ */
+void libie_rq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "rq%u_%s", qid, libie_rq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_strings, LIBIE);
+
+/**
+ * libie_rq_stats_get_data - get the RQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: RQ stats container from the queue
+ */
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats)
+{
+	u64 sarr[LIBIE_RQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_RQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_RQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_rq_stats_get_data, LIBIE);
+
+/* Tx per-queue stats */
+
+static const char * const libie_sq_stats_str[] = {
+#define act(s)	__stringify(s),
+	DECLARE_LIBIE_SQ_STATS(act)
+#undef act
+};
+
+#define LIBIE_SQ_STATS_NUM	ARRAY_SIZE(libie_sq_stats_str)
+
+/**
+ * libie_sq_stats_get_sset_count - get the number of Ethtool SQ stats provided
+ *
+ * Return: number of per-queue Tx stats supported by the library.
+ */
+u32 libie_sq_stats_get_sset_count(void)
+{
+	return LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_sset_count, LIBIE);
+
+/**
+ * libie_sq_stats_get_strings - get the name strings of Ethtool SQ stats
+ * @data: reference to the cursor pointing to the output buffer
+ * @qid: SQ number to print in the prefix
+ */
+void libie_sq_stats_get_strings(u8 **data, u32 qid)
+{
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		ethtool_sprintf(data, "sq%u_%s", qid, libie_sq_stats_str[i]);
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_strings, LIBIE);
+
+/**
+ * libie_sq_stats_get_data - get the SQ stats in Ethtool format
+ * @data: reference to the cursor pointing to the output array
+ * @stats: SQ stats container from the queue
+ */
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats)
+{
+	u64 sarr[LIBIE_SQ_STATS_NUM];
+	u32 start;
+
+	do {
+		start = u64_stats_fetch_begin(&stats->syncp);
+
+		for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+			sarr[i] = u64_stats_read(&stats->raw[i]);
+	} while (u64_stats_fetch_retry(&stats->syncp, start));
+
+	for (u32 i = 0; i < LIBIE_SQ_STATS_NUM; i++)
+		(*data)[i] += sarr[i];
+
+	*data += LIBIE_SQ_STATS_NUM;
+}
+EXPORT_SYMBOL_NS_GPL(libie_sq_stats_get_data, LIBIE);

diff --git a/include/linux/net/intel/libie/stats.h b/include/linux/net/intel/libie/stats.h
new file mode 100644
index 000000000000..dbbc98bbd3a7
--- /dev/null
+++ b/include/linux/net/intel/libie/stats.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation.
+ */
+
+#ifndef __LIBIE_STATS_H
+#define __LIBIE_STATS_H
+
+#include <linux/u64_stats_sync.h>
+
+/* Common */
+
+/* Use 32-byte alignment to reduce false sharing */
+#define __libie_stats_aligned	__aligned(4 * sizeof(u64_stats_t))
+
+/**
+ * libie_stats_add - update one structure counter from a local struct
+ * @qs: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @ss: local/onstack stats structure
+ * @f: name of the field to update
+ *
+ * If a local/onstack stats structure is used to collect statistics during
+ * hotpath loops, this macro can be used to shorthand updates, given that
+ * the fields have the same name.
+ * Must be guarded with u64_stats_update_{begin,end}().
+ */
+#define libie_stats_add(qs, ss, f)	\
+	u64_stats_add(&(qs)->f, (ss)->f)
+
+/**
+ * __libie_stats_inc_one - safely increment one stats structure counter
+ * @s: queue stats structure to update (&libie_rq_stats or &libie_sq_stats)
+ * @f: name of the field to increment
+ * @n: name of the temporary variable, result of __UNIQUE_ID()
+ *
+ * To be used on exception or slow paths -- allocation fails, queue stops etc.
+ */
+#define __libie_stats_inc_one(s, f, n) ({		\
+	typeof(*(s)) *n = (s);				\
+							\
+	u64_stats_update_begin(&n->syncp);		\
+	u64_stats_inc(&n->f);				\
+	u64_stats_update_end(&n->syncp);		\
+})
+#define libie_stats_inc_one(s, f)			\
+	__libie_stats_inc_one(s, f, __UNIQUE_ID(qs_))
+
+/* Rx per-queue stats:
+ * packets: packets received on this queue
+ * bytes: bytes received on this queue
+ * fragments: number of processed descriptors carrying only a fragment
+ * alloc_page_fail: number of Rx page allocation fails
+ * build_skb_fail: number of build_skb() fails
+ */
+
+#define DECLARE_LIBIE_RQ_NAPI_STATS(act)	\
+	act(packets)				\
+	act(bytes)				\
+	act(fragments)
+
+#define DECLARE_LIBIE_RQ_FAIL_STATS(act)	\
+	act(alloc_page_fail)			\
+	act(build_skb_fail)
+
+#define DECLARE_LIBIE_RQ_STATS(act)		\
+	DECLARE_LIBIE_RQ_NAPI_STATS(act)	\
+	DECLARE_LIBIE_RQ_FAIL_STATS(act)
+
+struct libie_rq_stats {
+	struct u64_stats_sync	syncp;
+
+	union {
+		struct {
+#define act(s)	u64_stats_t	s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+			DECLARE_LIBIE_RQ_FAIL_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+/* Rx stats being modified frequently during the NAPI polling, to sync them
+ * with the queue stats once after the loop is finished.
+ */
+struct libie_rq_onstack_stats {
+	union {
+		struct {
+#define act(s)	u32	s;
+			DECLARE_LIBIE_RQ_NAPI_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u32, raw);
+	};
+};
+
+/**
+ * libie_rq_napi_stats_add - add onstack Rx stats to the queue container
+ * @qs: Rx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_rq_napi_stats_add(struct libie_rq_stats *qs,
+			const struct libie_rq_onstack_stats *ss)
+{
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	libie_stats_add(qs, ss, fragments);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_rq_stats_get_sset_count(void);
+void libie_rq_stats_get_strings(u8 **data, u32 qid);
+void libie_rq_stats_get_data(u64 **data, const struct libie_rq_stats *stats);
+
+/* Tx per-queue stats:
+ * packets: packets sent from this queue
+ * bytes: bytes sent from this queue
+ * busy: number of xmit failures due to the ring being full
+ * stops: number of times the ring was stopped from the driver
+ * restarts: number of times it was restarted after being stopped
+ * linearized: number of skbs linearized due to HW limits
+ */
+
+#define DECLARE_LIBIE_SQ_NAPI_STATS(act)	\
+	act(packets)				\
+	act(bytes)
+
+#define DECLARE_LIBIE_SQ_XMIT_STATS(act)	\
+	act(busy)				\
+	act(stops)				\
+	act(restarts)				\
+	act(linearized)
+
+#define DECLARE_LIBIE_SQ_STATS(act)		\
+	DECLARE_LIBIE_SQ_NAPI_STATS(act)	\
+	DECLARE_LIBIE_SQ_XMIT_STATS(act)
+
+struct libie_sq_stats {
+	struct u64_stats_sync	syncp;
+
+	union {
+		struct {
+#define act(s)	u64_stats_t	s;
+			DECLARE_LIBIE_SQ_STATS(act);
+#undef act
+		};
+		DECLARE_FLEX_ARRAY(u64_stats_t, raw);
+	};
+} __libie_stats_aligned;
+
+struct libie_sq_onstack_stats {
+#define act(s)	u32 s;
+	DECLARE_LIBIE_SQ_NAPI_STATS(act);
+#undef act
+};
+
+/**
+ * libie_sq_napi_stats_add - add onstack Tx stats to the queue container
+ * @qs: Tx queue stats structure to update
+ * @ss: onstack structure to get the values from, updated during the NAPI loop
+ */
+static inline void
+libie_sq_napi_stats_add(struct libie_sq_stats *qs,
+			const struct libie_sq_onstack_stats *ss)
+{
+	if (unlikely(!ss->packets))
+		return;
+
+	u64_stats_update_begin(&qs->syncp);
+	libie_stats_add(qs, ss, packets);
+	libie_stats_add(qs, ss, bytes);
+	u64_stats_update_end(&qs->syncp);
+}
+
+u32 libie_sq_stats_get_sset_count(void);
+void libie_sq_stats_get_strings(u8 **data, u32 qid);
+void libie_sq_stats_get_data(u64 **data, const struct libie_sq_stats *stats);
+
+#endif /* __LIBIE_STATS_H */
--
2.43.0
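
[Editor's note: the "onstack container" pattern the commit message describes can be sketched in plain userspace C. Everything below is illustrative only and not part of the patch: the mock_* names are invented, and the kernel's u64_stats_sync seqcount protection is omitted for brevity.]

```c
#include <stdint.h>

/* Persistent per-queue container vs. a small onstack accumulator
 * bumped inside the polling loop (cf. struct libie_rq_stats and
 * struct libie_rq_onstack_stats).
 */
struct mock_rq_stats {		/* stands in for struct libie_rq_stats */
	uint64_t packets;
	uint64_t bytes;
};

struct mock_rq_onstack {	/* stands in for struct libie_rq_onstack_stats */
	uint32_t packets;
	uint32_t bytes;
};

/* One bulk update at loop end, like libie_rq_napi_stats_add() */
static void mock_rq_stats_add(struct mock_rq_stats *qs,
			      const struct mock_rq_onstack *ss)
{
	qs->packets += ss->packets;
	qs->bytes += ss->bytes;
}

/* A stand-in NAPI poll: the hot loop touches only the stack copy */
uint64_t mock_poll(struct mock_rq_stats *qs, const uint32_t *pkt_len,
		   int budget)
{
	struct mock_rq_onstack os = { 0 };

	for (int i = 0; i < budget; i++) {
		os.packets++;
		os.bytes += pkt_len[i];
	}

	mock_rq_stats_add(qs, &os);	/* single write to shared memory */
	return qs->bytes;
}
```

The point of the split is that the hot loop performs cheap 32-bit increments on stack memory, and the (potentially contended, tear-protected) queue container is written exactly once per poll.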
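
[Editor's note: the union-with-raw[] layout and the cursor-style Ethtool export used above can likewise be mimicked in userspace. The sketch below uses invented mock_* names; it shows the idea, not the kernel API.]

```c
#include <stdint.h>

/* Named fields for readable driver code, plus a flat array view so
 * generic export loops need no per-field knowledge (cf. the
 * DECLARE_FLEX_ARRAY(u64_stats_t, raw) member of struct libie_rq_stats).
 */
#define MOCK_STATS_NUM 3

struct mock_stats {
	union {
		struct {
			uint64_t packets;
			uint64_t bytes;
			uint64_t fragments;
		};
		uint64_t raw[MOCK_STATS_NUM];
	};
};

/* Cursor-style export like libie_rq_stats_get_data(): accumulate into
 * the caller's buffer, then advance the cursor past the fields written,
 * so several exporters can fill one Ethtool array back to back.
 */
void mock_stats_get_data(uint64_t **data, const struct mock_stats *stats)
{
	for (uint32_t i = 0; i < MOCK_STATS_NUM; i++)
		(*data)[i] += stats->raw[i];

	*data += MOCK_STATS_NUM;
}
```

Because the strings helper advances its cursor by the same count as the data helper, the names and values stay aligned in the Ethtool output without any shared bookkeeping between the two.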