From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak, Larysa Zaremba,
 Alexander Duyck, Yunsheng Lin, David Christensen,
 Jesper Dangaard Brouer, Ilias Apalodimas, Paul Menzel,
 netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH net-next v6 12/12] iavf: switch queue stats to libie
Date: Thu, 7 Dec 2023 18:20:10 +0100
Message-ID: <20231207172010.1441468-13-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231207172010.1441468-1-aleksander.lobakin@intel.com>
References: <20231207172010.1441468-1-aleksander.lobakin@intel.com>

iavf is pretty much ready for using the generic libie stats, so drop
all the custom code and just use the generic definitions. The only
thing missing is a counter of Tx queue stops: it's present in the
other drivers, so add it here as well. The rest is straightforward.
Note that this makes the ring structure one cacheline bigger, but
causes no layout changes, since the stats structures are 32-byte
aligned.

Signed-off-by: Alexander Lobakin
---
 .../net/ethernet/intel/iavf/iavf_ethtool.c    | 94 +++++----------
 drivers/net/ethernet/intel/iavf/iavf_main.c   |  2 +
 drivers/net/ethernet/intel/iavf/iavf_txrx.c   | 84 +++++++++--------
 drivers/net/ethernet/intel/iavf/iavf_txrx.h   | 28 +-----
 4 files changed, 72 insertions(+), 136 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index 964e66b4869c..898528abb03c 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -2,9 +2,10 @@
 /* Copyright(c) 2013 - 2018 Intel Corporation. */

 /* ethtool support for iavf */
-#include "iavf.h"
-#include 
+#include 
+
+#include "iavf.h"

 /* ethtool statistics helpers */
@@ -46,16 +47,6 @@ struct iavf_stats {
 	.stat_offset = offsetof(_type, _stat) \
 }

-/* Helper macro for defining some statistics related to queues */
-#define IAVF_QUEUE_STAT(_name, _stat) \
-	IAVF_STAT(struct iavf_ring, _name, _stat)
-
-/* Stats associated with a Tx or Rx ring */
-static const struct iavf_stats iavf_gstrings_queue_stats[] = {
-	IAVF_QUEUE_STAT("%s-%u.packets", stats.packets),
-	IAVF_QUEUE_STAT("%s-%u.bytes", stats.bytes),
-};
-
 /**
  * iavf_add_one_ethtool_stat - copy the stat into the supplied buffer
  * @data: location to store the stat value
@@ -141,43 +132,6 @@ __iavf_add_ethtool_stats(u64 **data, void *pointer,
 #define iavf_add_ethtool_stats(data, pointer, stats) \
 	__iavf_add_ethtool_stats(data, pointer, stats, ARRAY_SIZE(stats))

-/**
- * iavf_add_queue_stats - copy queue statistics into supplied buffer
- * @data: ethtool stats buffer
- * @ring: the ring to copy
- *
- * Queue statistics must be copied while protected by
- * u64_stats_fetch_begin, so we can't directly use iavf_add_ethtool_stats.
- * Assumes that queue stats are defined in iavf_gstrings_queue_stats. If the
- * ring pointer is null, zero out the queue stat values and update the data
- * pointer. Otherwise safely copy the stats from the ring into the supplied
- * buffer and update the data pointer when finished.
- *
- * This function expects to be called while under rcu_read_lock().
- **/
-static void
-iavf_add_queue_stats(u64 **data, struct iavf_ring *ring)
-{
-	const unsigned int size = ARRAY_SIZE(iavf_gstrings_queue_stats);
-	const struct iavf_stats *stats = iavf_gstrings_queue_stats;
-	unsigned int start;
-	unsigned int i;
-
-	/* To avoid invalid statistics values, ensure that we keep retrying
-	 * the copy until we get a consistent value according to
-	 * u64_stats_fetch_retry. But first, make sure our ring is
-	 * non-null before attempting to access its syncp.
-	 */
-	do {
-		start = !ring ? 0 : u64_stats_fetch_begin(&ring->syncp);
-		for (i = 0; i < size; i++)
-			iavf_add_one_ethtool_stat(&(*data)[i], ring, &stats[i]);
-	} while (ring && u64_stats_fetch_retry(&ring->syncp, start));
-
-	/* Once we successfully copy the stats in, update the data pointer */
-	*data += size;
-}
-
 /**
  * __iavf_add_stat_strings - copy stat strings into ethtool buffer
  * @p: ethtool supplied buffer
@@ -237,8 +191,6 @@ static const struct iavf_stats iavf_gstrings_stats[] = {

 #define IAVF_STATS_LEN	ARRAY_SIZE(iavf_gstrings_stats)

-#define IAVF_QUEUE_STATS_LEN	ARRAY_SIZE(iavf_gstrings_queue_stats)
-
 /**
  * iavf_get_link_ksettings - Get Link Speed and Duplex settings
  * @netdev: network interface device structure
@@ -308,18 +260,22 @@ static int iavf_get_link_ksettings(struct net_device *netdev,
 **/
 static int iavf_get_sset_count(struct net_device *netdev, int sset)
 {
-	/* Report the maximum number queues, even if not every queue is
-	 * currently configured. Since allocation of queues is in pairs,
-	 * use netdev->real_num_tx_queues * 2. The real_num_tx_queues is set
-	 * at device creation and never changes.
-	 */
+	u32 num;

-	if (sset == ETH_SS_STATS)
-		return IAVF_STATS_LEN +
-			(IAVF_QUEUE_STATS_LEN * 2 *
-			 netdev->real_num_tx_queues);
-	else
+	switch (sset) {
+	case ETH_SS_STATS:
+		/* Per-queue */
+		num = libie_rq_stats_get_sset_count();
+		num += libie_sq_stats_get_sset_count();
+		num *= netdev->real_num_tx_queues;
+
+		/* Global */
+		num += IAVF_STATS_LEN;
+
+		return num;
+	default:
 		return -EINVAL;
+	}
 }
@@ -346,15 +302,11 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
 	 * it to iterate over rings' stats.
 	 */
 	for (i = 0; i < adapter->num_active_queues; i++) {
-		struct iavf_ring *ring;
+		/* Rx rings stats */
+		libie_rq_stats_get_data(&data, &adapter->rx_rings[i].rq_stats);

 		/* Tx rings stats */
-		ring = &adapter->tx_rings[i];
-		iavf_add_queue_stats(&data, ring);
-
-		/* Rx rings stats */
-		ring = &adapter->rx_rings[i];
-		iavf_add_queue_stats(&data, ring);
+		libie_sq_stats_get_data(&data, &adapter->tx_rings[i].sq_stats);
 	}
 	rcu_read_unlock();
 }
@@ -376,10 +328,8 @@ static void iavf_get_stat_strings(struct net_device *netdev, u8 *data)
 	 * real_num_tx_queues for both Tx and Rx queues.
 	 */
 	for (i = 0; i < netdev->real_num_tx_queues; i++) {
-		iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
-				      "tx", i);
-		iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
-				      "rx", i);
+		libie_rq_stats_get_strings(&data, i);
+		libie_sq_stats_get_strings(&data, i);
 	}
 }
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 0becfdf2ed99..73563ae2ea03 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1603,12 +1603,14 @@ static int iavf_alloc_queues(struct iavf_adapter *adapter)
 		tx_ring->itr_setting = IAVF_ITR_TX_DEF;
 		if (adapter->flags & IAVF_FLAG_WB_ON_ITR_CAPABLE)
 			tx_ring->flags |= IAVF_TXR_FLAGS_WB_ON_ITR;
+		u64_stats_init(&tx_ring->sq_stats.syncp);

 		rx_ring = &adapter->rx_rings[i];
 		rx_ring->queue_index = i;
 		rx_ring->netdev = adapter->netdev;
 		rx_ring->count = adapter->rx_desc_count;
 		rx_ring->itr_setting = IAVF_ITR_RX_DEF;
+		u64_stats_init(&rx_ring->rq_stats.syncp);
 	}

 	adapter->num_active_queues = num_active_queues;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index aa86a0f926c6..97f8241ba56f 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -176,6 +176,9 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 	for (i = 0; i < vsi->back->num_active_queues; i++) {
 		tx_ring = &vsi->back->tx_rings[i];
 		if (tx_ring && tx_ring->desc) {
+			const struct libie_sq_stats *st = &tx_ring->sq_stats;
+			u32 start;
+
 			/* If packet counter has not changed the queue is
 			 * likely stalled, so force an interrupt for this
 			 * queue.
@@ -183,7 +186,12 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 			 * prev_pkt_ctr would be negative if there was no
 			 * pending work.
 			 */
-			packets = tx_ring->stats.packets & INT_MAX;
+			do {
+				start = u64_stats_fetch_begin(&st->syncp);
+				packets = u64_stats_read(&st->packets) &
+					  INT_MAX;
+			} while (u64_stats_fetch_retry(&st->syncp, start));
+
 			if (tx_ring->prev_pkt_ctr == packets) {
 				iavf_force_wb(vsi, tx_ring->q_vector);
 				continue;
@@ -212,11 +220,11 @@ void iavf_detect_recover_hung(struct iavf_vsi *vsi)
 static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 			      struct iavf_ring *tx_ring, int napi_budget)
 {
+	unsigned int budget = IAVF_DEFAULT_IRQ_WORK;
+	struct libie_sq_onstack_stats stats = { };
 	int i = tx_ring->next_to_clean;
 	struct iavf_tx_buffer *tx_buf;
 	struct iavf_tx_desc *tx_desc;
-	unsigned int total_bytes = 0, total_packets = 0;
-	unsigned int budget = IAVF_DEFAULT_IRQ_WORK;

 	tx_buf = &tx_ring->tx_bi[i];
 	tx_desc = IAVF_TX_DESC(tx_ring, i);
@@ -242,8 +250,8 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		tx_buf->next_to_watch = NULL;

 		/* update the statistics for this packet */
-		total_bytes += tx_buf->bytecount;
-		total_packets += tx_buf->gso_segs;
+		stats.bytes += tx_buf->bytecount;
+		stats.packets += tx_buf->gso_segs;

 		/* free the skb */
 		napi_consume_skb(tx_buf->skb, napi_budget);
@@ -300,12 +308,9 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		i += tx_ring->count;

 	tx_ring->next_to_clean = i;
-	u64_stats_update_begin(&tx_ring->syncp);
-	tx_ring->stats.bytes += total_bytes;
-	tx_ring->stats.packets += total_packets;
-	u64_stats_update_end(&tx_ring->syncp);
-	tx_ring->q_vector->tx.total_bytes += total_bytes;
-	tx_ring->q_vector->tx.total_packets += total_packets;
+	libie_sq_napi_stats_add(&tx_ring->sq_stats, &stats);
+	tx_ring->q_vector->tx.total_bytes += stats.bytes;
+	tx_ring->q_vector->tx.total_packets += stats.packets;

 	if (tx_ring->flags & IAVF_TXR_FLAGS_WB_ON_ITR) {
 		/* check to see if there are < 4 descriptors
@@ -324,10 +329,10 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,

 	/* notify netdev of completed buffers */
 	netdev_tx_completed_queue(txring_txq(tx_ring),
-				  total_packets, total_bytes);
+				  stats.packets, stats.bytes);

 #define TX_WAKE_THRESHOLD ((s16)(DESC_NEEDED * 2))
-	if (unlikely(total_packets && netif_carrier_ok(tx_ring->netdev) &&
+	if (unlikely(stats.packets && netif_carrier_ok(tx_ring->netdev) &&
 		     (IAVF_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))) {
 		/* Make sure that anybody stopping the queue after this
 		 * sees the new next_to_clean.
@@ -338,7 +343,7 @@ static bool iavf_clean_tx_irq(struct iavf_vsi *vsi,
 		    !test_bit(__IAVF_VSI_DOWN, vsi->state)) {
 			netif_wake_subqueue(tx_ring->netdev,
 					    tx_ring->queue_index);
-			++tx_ring->tx_stats.restart_queue;
+			libie_stats_inc_one(&tx_ring->sq_stats, restarts);
 		}
 	}
@@ -766,8 +771,6 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
 	if (!rx_ring->rx_bi)
 		goto err;

-	u64_stats_init(&rx_ring->syncp);
-
 	/* Round up to nearest 4K */
 	rx_ring->size = rx_ring->count * sizeof(union iavf_32byte_rx_desc);
 	rx_ring->size = ALIGN(rx_ring->size, 4096);
@@ -893,7 +896,7 @@ bool iavf_alloc_rx_buffers(struct iavf_ring *rx_ring, u16 cleaned_count)
 	if (rx_ring->next_to_use != ntu)
 		iavf_release_rx_desc(rx_ring, ntu);

-	rx_ring->rx_stats.alloc_page_failed++;
+	libie_stats_inc_one(&rx_ring->rq_stats, alloc_page_fail);

 	/* make sure to come back via polling to try again after
 	 * allocation failure
@@ -1101,7 +1104,7 @@ static struct sk_buff *iavf_build_skb(const struct libie_rx_buffer *rx_buffer,
  * iavf_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
+ * @stats: NAPI poll local stats to update
  *
  * This function updates next to clean. If the buffer is an EOP buffer
  * this function exits returning false, otherwise it will place the
@@ -1110,7 +1113,7 @@ static struct sk_buff *iavf_build_skb(const struct libie_rx_buffer *rx_buffer,
 **/
 static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
 			    union iavf_rx_desc *rx_desc,
-			    struct sk_buff *skb)
+			    struct libie_rq_onstack_stats *stats)
 {
 	u32 ntc = rx_ring->next_to_clean + 1;

@@ -1125,7 +1128,7 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
 	if (likely(iavf_test_staterr(rx_desc, IAVF_RXD_EOF)))
 		return false;

-	rx_ring->rx_stats.non_eop_descs++;
+	stats->fragments++;

 	return true;
 }
@@ -1144,12 +1147,12 @@ static bool iavf_is_non_eop(struct iavf_ring *rx_ring,
 **/
 static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 {
-	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-	struct sk_buff *skb = rx_ring->skb;
 	u16 cleaned_count = IAVF_DESC_UNUSED(rx_ring);
+	struct libie_rq_onstack_stats stats = { };
+	struct sk_buff *skb = rx_ring->skb;
 	bool failure = false;

-	while (likely(total_rx_packets < (unsigned int)budget)) {
+	while (likely(stats.packets < budget)) {
 		struct libie_rx_buffer *rx_buffer;
 		union iavf_rx_desc *rx_desc;
 		unsigned int size;
@@ -1199,14 +1202,15 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)

 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
-			rx_ring->rx_stats.alloc_buff_failed++;
+			libie_stats_inc_one(&rx_ring->rq_stats,
+					    build_skb_fail);
 			break;
 		}

skip_data:
 		cleaned_count++;

-		if (iavf_is_non_eop(rx_ring, rx_desc, skb))
+		if (iavf_is_non_eop(rx_ring, rx_desc, &stats))
 			continue;

 		/* ERR_MASK will only have valid bits if EOP set, and
@@ -1226,7 +1230,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		}

 		/* probably a little skewed due to removing CRC */
-		total_rx_bytes += skb->len;
+		stats.bytes += skb->len;

 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
 		rx_ptype = (qword & IAVF_RXD_QW1_PTYPE_MASK) >>
@@ -1248,20 +1252,20 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
 		skb = NULL;

 		/* update budget accounting */
-		total_rx_packets++;
+		stats.packets++;
 	}

 	rx_ring->skb = skb;

-	u64_stats_update_begin(&rx_ring->syncp);
-	rx_ring->stats.packets += total_rx_packets;
-	rx_ring->stats.bytes += total_rx_bytes;
-	u64_stats_update_end(&rx_ring->syncp);
-	rx_ring->q_vector->rx.total_packets += total_rx_packets;
-	rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+	libie_rq_napi_stats_add(&rx_ring->rq_stats, &stats);
+	rx_ring->q_vector->rx.total_packets += stats.packets;
+	rx_ring->q_vector->rx.total_bytes += stats.bytes;

 	/* guarantee a trip back through this routine if there was a failure */
-	return failure ? budget : (int)total_rx_packets;
+	if (unlikely(failure))
+		return budget;
+
+	return stats.packets;
 }

 static inline u32 iavf_buildreg_itr(const int type, u16 itr)
@@ -1438,10 +1442,8 @@ int iavf_napi_poll(struct napi_struct *napi, int budget)
 			return budget - 1;
 		}
tx_only:
-		if (arm_wb) {
-			q_vector->tx.ring[0].tx_stats.tx_force_wb++;
+		if (arm_wb)
 			iavf_enable_wb_on_itr(vsi, q_vector);
-		}

 		return budget;
 	}
@@ -1900,6 +1902,7 @@ bool __iavf_chk_linearize(struct sk_buff *skb)
 int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
 {
 	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
+	libie_stats_inc_one(&tx_ring->sq_stats, stops);

 	/* Memory barrier before checking head and tail */
 	smp_mb();
@@ -1909,7 +1912,8 @@ int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)

 	/* A reprieve! - use start_queue because it doesn't call schedule */
 	netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
-	++tx_ring->tx_stats.restart_queue;
+	libie_stats_inc_one(&tx_ring->sq_stats, restarts);
+
 	return 0;
 }
@@ -2090,7 +2094,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
 			return NETDEV_TX_OK;
 		}
 		count = iavf_txd_use_count(skb->len);
-		tx_ring->tx_stats.tx_linearize++;
+		libie_stats_inc_one(&tx_ring->sq_stats, linearized);
 	}

 	/* need: 1 descriptor per page * PAGE_SIZE/IAVF_MAX_DATA_PER_TXD,
@@ -2100,7 +2104,7 @@ static netdev_tx_t iavf_xmit_frame_ring(struct sk_buff *skb,
 	 * otherwise try next time
 	 */
 	if (iavf_maybe_stop_tx(tx_ring, count + 4 + 1)) {
-		tx_ring->tx_stats.tx_busy++;
+		libie_stats_inc_one(&tx_ring->sq_stats, busy);
 		return NETDEV_TX_BUSY;
 	}
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.h b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
index 72c6bc64d94e..016c8b6e6160 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.h
@@ -4,6 +4,8 @@
 #ifndef _IAVF_TXRX_H_
 #define _IAVF_TXRX_H_

+#include 
+
 /* Interrupt Throttling and Rate Limiting Goodies */
 #define IAVF_DEFAULT_IRQ_WORK      256
@@ -201,26 +203,6 @@ struct iavf_tx_buffer {
 	u32 tx_flags;
 };

-struct iavf_queue_stats {
-	u64 packets;
-	u64 bytes;
-};
-
-struct iavf_tx_queue_stats {
-	u64 restart_queue;
-	u64 tx_busy;
-	u64 tx_done_old;
-	u64 tx_linearize;
-	u64 tx_force_wb;
-	u64 tx_lost_interrupt;
-};
-
-struct iavf_rx_queue_stats {
-	u64 non_eop_descs;
-	u64 alloc_page_failed;
-	u64 alloc_buff_failed;
-};
-
 /* some useful defines for virtchannel interface, which
  * is the only remaining user of header split
  */
@@ -272,11 +254,9 @@ struct iavf_ring {
 #define IAVF_RXR_FLAGS_VLAN_TAG_LOC_L2TAG2_2	BIT(5)

 	/* stats structs */
-	struct iavf_queue_stats stats;
-	struct u64_stats_sync syncp;
 	union {
-		struct iavf_tx_queue_stats tx_stats;
-		struct iavf_rx_queue_stats rx_stats;
+		struct libie_rq_stats rq_stats;
+		struct libie_sq_stats sq_stats;
 	};

 	int prev_pkt_ctr;	/* For Tx stall detection */
--
2.43.0