Date: Fri, 5 Jan 2024 10:45:55 +0100
From: Jiri Pirko <jiri@resnulli.us>
To: Petr Tesarik <petr@tesarici.cz>
Cc: Alexandre Torgue, Jose Abreu, "David S. Miller", Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Maxime Coquelin, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland,
	"open list:STMMAC ETHERNET DRIVER",
	"moderated list:ARM/STM32 ARCHITECTURE", open list,
	"open list:ARM/Allwinner sunXi SoC support"
Subject: Re: [PATCH] net: stmmac: protect statistics updates with a spinlock
References: <20240105091556.15516-1-petr@tesarici.cz>
In-Reply-To: <20240105091556.15516-1-petr@tesarici.cz>
X-Mailing-List: linux-kernel@vger.kernel.org

Fri, Jan 05, 2024 at 10:15:56AM CET, petr@tesarici.cz wrote:
>Add a spinlock to fix race conditions while updating Tx/Rx statistics.
>
>As explained by a comment in <linux/u64_stats_sync.h>, the write side of
>struct u64_stats_sync must ensure mutual exclusion, or one seqcount update
>could be lost on 32-bit platforms, thus blocking readers forever.
>
>Such lockups have actually been observed on 32-bit Arm after stmmac_xmit()
>on one core raced with stmmac_napi_poll_tx() on another core.
>
>Signed-off-by: Petr Tesarik <petr@tesarici.cz>
>---
> drivers/net/ethernet/stmicro/stmmac/common.h  |  2 +
> .../net/ethernet/stmicro/stmmac/dwmac-sun8i.c |  4 +
> .../net/ethernet/stmicro/stmmac/dwmac4_lib.c  |  4 +
> .../net/ethernet/stmicro/stmmac/dwmac_lib.c   |  4 +
> .../ethernet/stmicro/stmmac/dwxgmac2_dma.c    |  4 +
> .../net/ethernet/stmicro/stmmac/stmmac_main.c | 80 +++++++++++++------
> 6 files changed, 72 insertions(+), 26 deletions(-)
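
To spell out the "blocking readers forever" failure mode mentioned in
the changelog: the reader side spins until it sees an even, stable
sequence count. A minimal sketch of that loop, modeled on the u64_stats
fetch pattern this driver uses for its ethtool stats (rxq_stats as in
this patch):

	unsigned int start;
	u64 pkts;

	do {
		start = u64_stats_fetch_begin(&rxq_stats->syncp);
		pkts = rxq_stats->rx_pkt_n;
	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));

On 32-bit SMP the seqcount increment is a plain read-modify-write, so
two racing writers can lose one increment, leaving the count odd; the
loop above then never terminates.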
>diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
>index e3f650e88f82..9a17dfc1055d 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/common.h
>+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
>@@ -70,6 +70,7 @@ struct stmmac_txq_stats {
> 	u64 tx_tso_frames;
> 	u64 tx_tso_nfrags;
> 	struct u64_stats_sync syncp;
>+	spinlock_t lock;	/* mutual writer exclusion */
> } ____cacheline_aligned_in_smp;
> 
> struct stmmac_rxq_stats {
>@@ -79,6 +80,7 @@ struct stmmac_rxq_stats {
> 	u64 rx_normal_irq_n;
> 	u64 napi_poll;
> 	struct u64_stats_sync syncp;
>+	spinlock_t lock;	/* mutual writer exclusion */
> } ____cacheline_aligned_in_smp;
> 
> /* Extra statistic and debug information exposed by ethtool */
>diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
>index 137741b94122..9c568996321d 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
>+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
>@@ -455,9 +455,11 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
> 
> 	if (v & EMAC_TX_INT) {
> 		ret |= handle_tx;
>+		spin_lock(&txq_stats->lock);
> 		u64_stats_update_begin(&txq_stats->syncp);
> 		txq_stats->tx_normal_irq_n++;
> 		u64_stats_update_end(&txq_stats->syncp);
>+		spin_unlock(&txq_stats->lock);
> 	}
> 
> 	if (v & EMAC_TX_DMA_STOP_INT)
>@@ -479,9 +481,11 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
> 
> 	if (v & EMAC_RX_INT) {
> 		ret |= handle_rx;
>+		spin_lock(&rxq_stats->lock);
> 		u64_stats_update_begin(&rxq_stats->syncp);
> 		rxq_stats->rx_normal_irq_n++;
> 		u64_stats_update_end(&rxq_stats->syncp);
>+		spin_unlock(&rxq_stats->lock);
> 	}
> 
> 	if (v & EMAC_RX_BUF_UA_INT)
>diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
>index 9470d3fd2ded..e50e8b07724b 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
>+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
>@@ -201,15 +201,19 @@ int dwmac4_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
> 	}
> 	/* TX/RX NORMAL interrupts */
> 	if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
>+		spin_lock(&rxq_stats->lock);
> 		u64_stats_update_begin(&rxq_stats->syncp);
> 		rxq_stats->rx_normal_irq_n++;
> 		u64_stats_update_end(&rxq_stats->syncp);
>+		spin_unlock(&rxq_stats->lock);
> 		ret |= handle_rx;
> 	}
> 	if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
>+		spin_lock(&txq_stats->lock);
> 		u64_stats_update_begin(&txq_stats->syncp);
> 		txq_stats->tx_normal_irq_n++;
> 		u64_stats_update_end(&txq_stats->syncp);
>+		spin_unlock(&txq_stats->lock);
> 		ret |= handle_tx;
> 	}
> 
>diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
>index 7907d62d3437..a43396a7f852 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
>+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
>@@ -215,16 +215,20 @@ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
> 		u32 value = readl(ioaddr + DMA_INTR_ENA);
> 		/* to schedule NAPI on real RIE event. */
> 		if (likely(value & DMA_INTR_ENA_RIE)) {
>+			spin_lock(&rxq_stats->lock);
> 			u64_stats_update_begin(&rxq_stats->syncp);
> 			rxq_stats->rx_normal_irq_n++;
> 			u64_stats_update_end(&rxq_stats->syncp);
>+			spin_unlock(&rxq_stats->lock);
> 			ret |= handle_rx;
> 		}
> 	}
> 	if (likely(intr_status & DMA_STATUS_TI)) {
>+		spin_lock(&txq_stats->lock);
> 		u64_stats_update_begin(&txq_stats->syncp);
> 		txq_stats->tx_normal_irq_n++;
> 		u64_stats_update_end(&txq_stats->syncp);
>+		spin_unlock(&txq_stats->lock);
> 		ret |= handle_tx;
> 	}
> 	if (unlikely(intr_status & DMA_STATUS_ERI))
>diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
>index 3cde695fec91..f4e01436d4cc 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
>+++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
>@@ -367,15 +367,19 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
> 	/* TX/RX NORMAL interrupts */
> 	if (likely(intr_status & XGMAC_NIS)) {
> 		if (likely(intr_status & XGMAC_RI)) {
>+			spin_lock(&rxq_stats->lock);
> 			u64_stats_update_begin(&rxq_stats->syncp);
> 			rxq_stats->rx_normal_irq_n++;
> 			u64_stats_update_end(&rxq_stats->syncp);
>+			spin_unlock(&rxq_stats->lock);
> 			ret |= handle_rx;
> 		}
> 		if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
>+			spin_lock(&txq_stats->lock);
> 			u64_stats_update_begin(&txq_stats->syncp);
> 			txq_stats->tx_normal_irq_n++;
> 			u64_stats_update_end(&txq_stats->syncp);
>+			spin_unlock(&txq_stats->lock);
> 			ret |= handle_tx;
> 		}
> 	}
>diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>index 37e64283f910..82d8db04d0d1 100644
>--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>@@ -2515,9 +2515,11 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
> 		tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size);
> 		entry = tx_q->cur_tx;
> 	}
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock_irqsave(&txq_stats->lock, flags);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->tx_set_ic_bit += tx_set_ic_bit;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	if (tx_desc) {
> 		stmmac_flush_tx_descriptors(priv, queue);
>@@ -2721,11 +2723,13 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue,
> 	if (tx_q->dirty_tx != tx_q->cur_tx)
> 		*pending_packets = true;
> 
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock_irqsave(&txq_stats->lock, flags);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->tx_packets += tx_packets;
> 	txq_stats->tx_pkt_n += tx_packets;
> 	txq_stats->tx_clean++;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	priv->xstats.tx_errors += tx_errors;
> 
>@@ -4311,13 +4315,15 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
> 		netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
> 	}
> 
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock_irqsave(&txq_stats->lock, flags);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->tx_bytes += skb->len;
> 	txq_stats->tx_tso_frames++;
> 	txq_stats->tx_tso_nfrags += nfrags;
> 	if (set_ic)
> 		txq_stats->tx_set_ic_bit++;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	if (priv->sarc_type)
> 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
>@@ -4560,11 +4566,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
> 		netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
> 	}
> 
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock_irqsave(&txq_stats->lock, flags);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->tx_bytes += skb->len;
> 	if (set_ic)
> 		txq_stats->tx_set_ic_bit++;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	if (priv->sarc_type)
> 		stmmac_set_desc_sarc(priv, first, priv->sarc_type);
>@@ -4831,9 +4839,11 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
> 		unsigned long flags;
> 		tx_q->tx_count_frames = 0;
> 		stmmac_set_tx_ic(priv, tx_desc);
>-		flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+		spin_lock_irqsave(&txq_stats->lock, flags);
>+		u64_stats_update_begin(&txq_stats->syncp);
> 		txq_stats->tx_set_ic_bit++;
>-		u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+		u64_stats_update_end(&txq_stats->syncp);
>+		spin_unlock_irqrestore(&txq_stats->lock, flags);
> 	}
> 
> 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
>@@ -5008,10 +5018,12 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
> 	skb_record_rx_queue(skb, queue);
> 	napi_gro_receive(&ch->rxtx_napi, skb);
> 
>-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
>+	spin_lock_irqsave(&rxq_stats->lock, flags);
>+	u64_stats_update_begin(&rxq_stats->syncp);
> 	rxq_stats->rx_pkt_n++;
> 	rxq_stats->rx_bytes += len;
>-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
>+	u64_stats_update_end(&rxq_stats->syncp);
>+	spin_unlock_irqrestore(&rxq_stats->lock, flags);
> }
> 
> static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
>@@ -5248,9 +5260,11 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
> 
> 	stmmac_finalize_xdp_rx(priv, xdp_status);
> 
>-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
>+	spin_lock_irqsave(&rxq_stats->lock, flags);
>+	u64_stats_update_begin(&rxq_stats->syncp);
> 	rxq_stats->rx_pkt_n += count;
>-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
>+	u64_stats_update_end(&rxq_stats->syncp);
>+	spin_unlock_irqrestore(&rxq_stats->lock, flags);
> 
> 	priv->xstats.rx_dropped += rx_dropped;
> 	priv->xstats.rx_errors += rx_errors;
>@@ -5541,11 +5555,13 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> 
> 	stmmac_rx_refill(priv, queue);
> 
>-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
>+	spin_lock_irqsave(&rxq_stats->lock, flags);
>+	u64_stats_update_begin(&rxq_stats->syncp);
> 	rxq_stats->rx_packets += rx_packets;
> 	rxq_stats->rx_bytes += rx_bytes;
> 	rxq_stats->rx_pkt_n += count;
>-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
>+	u64_stats_update_end(&rxq_stats->syncp);
>+	spin_unlock_irqrestore(&rxq_stats->lock, flags);
> 
> 	priv->xstats.rx_dropped += rx_dropped;
> 	priv->xstats.rx_errors += rx_errors;
>@@ -5564,9 +5580,11 @@ static int stmmac_napi_poll_rx(struct napi_struct *napi, int budget)
> 	int work_done;
> 
> 	rxq_stats = &priv->xstats.rxq_stats[chan];
>-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
>+	spin_lock_irqsave(&rxq_stats->lock, flags);
>+	u64_stats_update_begin(&rxq_stats->syncp);
> 	rxq_stats->napi_poll++;
>-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
>+	u64_stats_update_end(&rxq_stats->syncp);
>+	spin_unlock_irqrestore(&rxq_stats->lock, flags);
> 
> 	work_done = stmmac_rx(priv, budget, chan);
> 	if (work_done < budget && napi_complete_done(napi, work_done)) {
>@@ -5592,9 +5610,11 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
> 	int work_done;
> 
> 	txq_stats = &priv->xstats.txq_stats[chan];
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock_irqsave(&txq_stats->lock, flags);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->napi_poll++;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	work_done = stmmac_tx_clean(priv, budget, chan, &pending_packets);
> 	work_done = min(work_done, budget);
>@@ -5627,14 +5647,18 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
> 	unsigned long flags;
> 
> 	rxq_stats = &priv->xstats.rxq_stats[chan];
>-	flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
>+	spin_lock_irqsave(&rxq_stats->lock, flags);
>+	u64_stats_update_begin(&rxq_stats->syncp);
> 	rxq_stats->napi_poll++;
>-	u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
>+	u64_stats_update_end(&rxq_stats->syncp);
>+	spin_unlock(&rxq_stats->lock);

Nitpick: I know the original code does the same, but any idea why
u64_stats_update_end_irqrestore() is called here when
u64_stats_update_begin_irqsave() is called just two lines below? IIUC,
this could be one critical section. Could you perhaps merge these
while at it? Could be a follow-up patch.
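
Something like this perhaps (completely untested sketch, reusing the
flags/stats variables already in scope in stmmac_napi_poll_rxtx):

	/* One IRQ-off section covering both per-queue counters. */
	local_irq_save(flags);

	spin_lock(&rxq_stats->lock);
	u64_stats_update_begin(&rxq_stats->syncp);
	rxq_stats->napi_poll++;
	u64_stats_update_end(&rxq_stats->syncp);
	spin_unlock(&rxq_stats->lock);

	spin_lock(&txq_stats->lock);
	u64_stats_update_begin(&txq_stats->syncp);
	txq_stats->napi_poll++;
	u64_stats_update_end(&txq_stats->syncp);
	spin_unlock(&txq_stats->lock);

	local_irq_restore(flags);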
Rest of the patch looks fine to me.

Reviewed-by: Jiri Pirko <jiri@resnulli.us>

> 
> 	txq_stats = &priv->xstats.txq_stats[chan];
>-	flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
>+	spin_lock(&txq_stats->lock);
>+	u64_stats_update_begin(&txq_stats->syncp);
> 	txq_stats->napi_poll++;
>-	u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
>+	u64_stats_update_end(&txq_stats->syncp);
>+	spin_unlock_irqrestore(&txq_stats->lock, flags);
> 
> 	tx_done = stmmac_tx_clean(priv, budget, chan, &tx_pending_packets);
> 	tx_done = min(tx_done, budget);
>@@ -7371,10 +7395,14 @@ int stmmac_dvr_probe(struct device *device,
> 	priv->device = device;
> 	priv->dev = ndev;
> 
>-	for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
>+	for (i = 0; i < MTL_MAX_RX_QUEUES; i++) {
> 		u64_stats_init(&priv->xstats.rxq_stats[i].syncp);
>-	for (i = 0; i < MTL_MAX_TX_QUEUES; i++)
>+		spin_lock_init(&priv->xstats.rxq_stats[i].lock);
>+	}
>+	for (i = 0; i < MTL_MAX_TX_QUEUES; i++) {
> 		u64_stats_init(&priv->xstats.txq_stats[i].syncp);
>+		spin_lock_init(&priv->xstats.txq_stats[i].lock);
>+	}
> 
> 	stmmac_set_ethtool_ops(ndev);
> 	priv->pause = pause;
>-- 
>2.43.0
>