From: Alexander Lobakin
To: intel-wired-lan@lists.osuosl.org
Cc: Alexander Lobakin, Tony Nguyen, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Mina Almasry,
    nex.sw.ncis.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Przemek Kitszel
Subject: [PATCH iwl-next 06/12] idpf: merge singleq and splitq &net_device_ops
Date: Tue, 28 May 2024 15:48:40 +0200
Message-ID: <20240528134846.148890-7-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.45.1
In-Reply-To: <20240528134846.148890-1-aleksander.lobakin@intel.com>
References: <20240528134846.148890-1-aleksander.lobakin@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It makes no sense to have a second &net_device_ops struct (800 bytes of
rodata) whose only difference is .ndo_start_xmit, which can easily be
handled by a single `if`. This `if` is a drop in the ocean and won't make
any measurable difference.

Define a unified idpf_tx_start(). The preparation for sending is the same;
just call either idpf_tx_splitq_frame() or idpf_tx_singleq_frame(),
depending on the active queue model, to actually map and send the skb.
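For readers skimming the series, the unified Tx entry point introduced
below essentially reduces to the following simplified sketch (the actual
idpf_tx_start() in the diff also pads short frames via skb_put_padto()
before handing the skb off):

	netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev)
	{
		struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
		struct idpf_tx_queue *tx_q = vport->txqs[skb_get_queue_mapping(skb)];

		/* single ops table: pick the frame path from the active queue model */
		if (idpf_is_queue_model_split(vport->txq_model))
			return idpf_tx_splitq_frame(skb, tx_q);

		return idpf_tx_singleq_frame(skb, tx_q);
	}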
Reviewed-by: Przemek Kitszel
Signed-off-by: Alexander Lobakin
---
 drivers/net/ethernet/intel/idpf/idpf_txrx.h   |  9 ++----
 drivers/net/ethernet/intel/idpf/idpf_lib.c    | 26 +++-------------
 .../ethernet/intel/idpf/idpf_singleq_txrx.c   | 31 ++-----------------
 drivers/net/ethernet/intel/idpf/idpf_txrx.c   | 17 ++++++----
 4 files changed, 20 insertions(+), 63 deletions(-)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 731e2a73def5..e8a71da62d80 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1204,14 +1204,11 @@ void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
 			   struct idpf_tx_buf *first, u16 ring_idx);
 unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
 					 struct sk_buff *skb);
-bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
-			unsigned int count);
 int idpf_tx_maybe_stop_common(struct idpf_tx_queue *tx_q, unsigned int size);
 void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
-netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
-				 struct net_device *netdev);
-netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
-				  struct net_device *netdev);
+netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+				  struct idpf_tx_queue *tx_q);
+netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev);
 bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_rx_queue *rxq,
 				      u16 cleaned_count);
 int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index a8be09a89943..fe91475c7b4c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -4,8 +4,7 @@
 #include "idpf.h"
 #include "idpf_virtchnl.h"
 
-static const struct net_device_ops idpf_netdev_ops_splitq;
-static const struct net_device_ops idpf_netdev_ops_singleq;
+static const struct net_device_ops idpf_netdev_ops;
 
 /**
  * idpf_init_vector_stack - Fill the MSIX vector stack with vector index
@@ -764,10 +763,7 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
 	}
 
 	/* assign netdev_ops */
-	if (idpf_is_queue_model_split(vport->txq_model))
-		netdev->netdev_ops = &idpf_netdev_ops_splitq;
-	else
-		netdev->netdev_ops = &idpf_netdev_ops_singleq;
+	netdev->netdev_ops = &idpf_netdev_ops;
 
 	/* setup watchdog timeout value to be 5 second */
 	netdev->watchdog_timeo = 5 * HZ;
@@ -2353,24 +2349,10 @@ void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem)
 	mem->pa = 0;
 }
 
-static const struct net_device_ops idpf_netdev_ops_splitq = {
-	.ndo_open = idpf_open,
-	.ndo_stop = idpf_stop,
-	.ndo_start_xmit = idpf_tx_splitq_start,
-	.ndo_features_check = idpf_features_check,
-	.ndo_set_rx_mode = idpf_set_rx_mode,
-	.ndo_validate_addr = eth_validate_addr,
-	.ndo_set_mac_address = idpf_set_mac,
-	.ndo_change_mtu = idpf_change_mtu,
-	.ndo_get_stats64 = idpf_get_stats64,
-	.ndo_set_features = idpf_set_features,
-	.ndo_tx_timeout = idpf_tx_timeout,
-};
-
-static const struct net_device_ops idpf_netdev_ops_singleq = {
+static const struct net_device_ops idpf_netdev_ops = {
 	.ndo_open = idpf_open,
 	.ndo_stop = idpf_stop,
-	.ndo_start_xmit = idpf_tx_singleq_start,
+	.ndo_start_xmit = idpf_tx_start,
 	.ndo_features_check = idpf_features_check,
 	.ndo_set_rx_mode = idpf_set_rx_mode,
 	.ndo_validate_addr = eth_validate_addr,
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index b51f7cd6db01..a3b60a2dfcaa 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -351,8 +351,8 @@ static void idpf_tx_singleq_build_ctx_desc(struct idpf_tx_queue *txq,
  *
  * Returns NETDEV_TX_OK if sent, else an error code
  */
-static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
-					 struct idpf_tx_queue *tx_q)
+netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+				  struct idpf_tx_queue *tx_q)
 {
 	struct idpf_tx_offload_params offload = { };
 	struct idpf_tx_buf *first;
@@ -408,33 +408,6 @@ static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
 	return idpf_tx_drop_skb(tx_q, skb);
 }
 
-/**
- * idpf_tx_singleq_start - Selects the right Tx queue to send buffer
- * @skb: send buffer
- * @netdev: network interface device structure
- *
- * Returns NETDEV_TX_OK if sent, else an error code
- */
-netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
-				  struct net_device *netdev)
-{
-	struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
-	struct idpf_tx_queue *tx_q;
-
-	tx_q = vport->txqs[skb_get_queue_mapping(skb)];
-
-	/* hardware can't handle really short frames, hardware padding works
-	 * beyond this point
-	 */
-	if (skb_put_padto(skb, IDPF_TX_MIN_PKT_LEN)) {
-		idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
-
-		return NETDEV_TX_OK;
-	}
-
-	return idpf_tx_singleq_frame(skb, tx_q);
-}
-
 /**
  * idpf_tx_singleq_clean - Reclaim resources from queue
  * @tx_q: Tx queue to clean
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index f569ea389b04..5dd1b1a9e624 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -4,6 +4,9 @@
 #include "idpf.h"
 #include "idpf_virtchnl.h"
 
+static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+			       unsigned int count);
+
 /**
  * idpf_buf_lifo_push - push a buffer pointer onto stack
  * @stack: pointer to stack struct
@@ -2702,8 +2705,8 @@ static bool __idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs)
  * E.g.: a packet with 7 fragments can require 9 DMA transactions; 1 for TSO
  * header, 1 for segment payload, and then 7 for the fragments.
  */
-bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
-			unsigned int count)
+static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+			       unsigned int count)
 {
 	if (likely(count < max_bufs))
 		return false;
@@ -2849,14 +2852,13 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
 }
 
 /**
- * idpf_tx_splitq_start - Selects the right Tx queue to send buffer
+ * idpf_tx_start - Selects the right Tx queue to send buffer
  * @skb: send buffer
  * @netdev: network interface device structure
  *
 * Returns NETDEV_TX_OK if sent, else an error code
 */
-netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
-				 struct net_device *netdev)
+netdev_tx_t idpf_tx_start(struct sk_buff *skb, struct net_device *netdev)
 {
 	struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
 	struct idpf_tx_queue *tx_q;
@@ -2878,7 +2880,10 @@ netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
 		return NETDEV_TX_OK;
 	}
 
-	return idpf_tx_splitq_frame(skb, tx_q);
+	if (idpf_is_queue_model_split(vport->txq_model))
+		return idpf_tx_splitq_frame(skb, tx_q);
+	else
+		return idpf_tx_singleq_frame(skb, tx_q);
 }
 
 /**
-- 
2.45.1