From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexander Lobakin, Jesse Brandeburg, Michal Swiatkowski, Maciej Fijalkowski, Jonathan Corbet, Shay Agroskin, Arthur Kiyanovski, David Arinzon, Noam Dagan, Saeed Bishara, Ioana Ciornei, Claudiu Manoil, Tony Nguyen, Thomas Petazzoni, Marcin Wojtas, Russell King, Saeed Mahameed, Leon Romanovsky, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, Toke Høiland-Jørgensen, John Fastabend, Edward Cree, Martin Habets, "Michael S. Tsirkin", Jason Wang, Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, Lorenzo Bianconi, Yajun Deng, Sergey Ryazanov, David Ahern, Andrei Vagin, Johannes Berg, Vladimir Oltean, Cong Wang, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v2 net-next 07/26] mvneta: add .ndo_get_xdp_stats() callback
Date: Tue, 23 Nov 2021 17:39:36 +0100
Message-Id: <20211123163955.154512-8-alexandr.lobakin@intel.com>
In-Reply-To: <20211123163955.154512-1-alexandr.lobakin@intel.com>
References: <20211123163955.154512-1-alexandr.lobakin@intel.com>

The mvneta driver implements 7 XDP counters as per-cpu variables,
which means we can only provide them as a global sum across CPUs.
Implement a callback for querying them using the generic XDP stats
infra.
Signed-off-by: Alexander Lobakin
Reviewed-by: Jesse Brandeburg
---
 drivers/net/ethernet/marvell/mvneta.c | 54 +++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7c30417a0464..5bb0bbfa1ee6 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -802,6 +802,59 @@ mvneta_get_stats64(struct net_device *dev,
 	stats->tx_dropped = dev->stats.tx_dropped;
 }
 
+static int mvneta_get_xdp_stats(const struct net_device *dev, u32 attr_id,
+				void *attr_data)
+{
+	const struct mvneta_port *pp = netdev_priv(dev);
+	struct ifla_xdp_stats *xdp_stats = attr_data;
+	u32 cpu;
+
+	switch (attr_id) {
+	case IFLA_XDP_XSTATS_TYPE_XDP:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	for_each_possible_cpu(cpu) {
+		const struct mvneta_pcpu_stats *stats;
+		const struct mvneta_stats *ps;
+		u64 xdp_xmit_err;
+		u64 xdp_redirect;
+		u64 xdp_tx_err;
+		u64 xdp_pass;
+		u64 xdp_drop;
+		u64 xdp_xmit;
+		u64 xdp_tx;
+		u32 start;
+
+		stats = per_cpu_ptr(pp->stats, cpu);
+		ps = &stats->es.ps;
+
+		do {
+			start = u64_stats_fetch_begin_irq(&stats->syncp);
+
+			xdp_drop = ps->xdp_drop;
+			xdp_pass = ps->xdp_pass;
+			xdp_redirect = ps->xdp_redirect;
+			xdp_tx = ps->xdp_tx;
+			xdp_tx_err = ps->xdp_tx_err;
+			xdp_xmit = ps->xdp_xmit;
+			xdp_xmit_err = ps->xdp_xmit_err;
+		} while (u64_stats_fetch_retry_irq(&stats->syncp, start));
+
+		xdp_stats->drop += xdp_drop;
+		xdp_stats->pass += xdp_pass;
+		xdp_stats->redirect += xdp_redirect;
+		xdp_stats->tx += xdp_tx;
+		xdp_stats->tx_errors += xdp_tx_err;
+		xdp_stats->xmit_packets += xdp_xmit;
+		xdp_stats->xmit_errors += xdp_xmit_err;
+	}
+
+	return 0;
+}
+
 /* Rx descriptors helper methods */
 
 /* Checks whether the RX descriptor having this status is both the first
@@ -4957,6 +5010,7 @@ static const struct net_device_ops mvneta_netdev_ops = {
 	.ndo_change_mtu		= mvneta_change_mtu,
 	.ndo_fix_features	= mvneta_fix_features,
 	.ndo_get_stats64	= mvneta_get_stats64,
+	.ndo_get_xdp_stats	= mvneta_get_xdp_stats,
 	.ndo_eth_ioctl		= mvneta_ioctl,
 	.ndo_bpf		= mvneta_xdp,
 	.ndo_xdp_xmit		= mvneta_xdp_xmit,
-- 
2.33.1