From: Antoine Tenart <antoine.tenart@bootlin.com>
To: davem@davemloft.net, linux@armlinux.org.uk
Cc: Antoine Tenart <antoine.tenart@bootlin.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, thomas.petazzoni@bootlin.com,
	maxime.chevallier@bootlin.com, gregory.clement@bootlin.com,
	miquel.raynal@bootlin.com, nadavh@marvell.com, stefanc@marvell.com,
	ymarkman@marvell.com, mw@semihalf.com
Subject: [PATCH net-next 07/15] net: mvpp2: fix the computation of the RXQs
Date: Thu, 28 Feb 2019 14:21:20 +0100
Message-Id: <20190228132128.30154-8-antoine.tenart@bootlin.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190228132128.30154-1-antoine.tenart@bootlin.com>
References: <20190228132128.30154-1-antoine.tenart@bootlin.com>

This patch fixes the computation of the number of RXQs used by the PPv2
driver, which depends on the PPv2 engine version and on the queue mode.
There are three cases:

- PPv2.1: 1 RXQ per CPU.
- PPv2.2 with MVPP2_QDIST_MULTI_MODE: 1 RXQ per CPU.
- PPv2.2 with MVPP2_QDIST_SINGLE_MODE: 1 RXQ shared between the CPUs.

The PPv2 engine supports a maximum of 32 queues per port. This patch
adds a check so that we do not overstep this maximum.

It appeared the calculation was broken for PPv2.1 engines since
f8c6ba8424b0, as PPv2.1 ports ended up with a single RXQ while they
needed 4. This patch fixes it.

Fixes: f8c6ba8424b0 ("net: mvpp2: use only one rx queue per port per CPU")
Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
---
 drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |  4 ++--
 .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 23 ++++++++++++-------
 2 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
index 17ff330cce5f..687e011de5ef 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -549,8 +549,8 @@
 #define MVPP2_MAX_TSO_SEGS		300
 #define MVPP2_MAX_SKB_DESCS		(MVPP2_MAX_TSO_SEGS * 2 + MAX_SKB_FRAGS)
 
-/* Default number of RXQs in use */
-#define MVPP2_DEFAULT_RXQ		1
+/* Max number of RXQs per port */
+#define MVPP2_PORT_MAX_RXQ		32
 
 /* Max number of Rx descriptors */
 #define MVPP2_MAX_RXD_MAX		1024
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 24cee6cbe309..9c6200a59910 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -4062,8 +4062,8 @@ static int mvpp2_multi_queue_vectors_init(struct mvpp2_port *port,
 		snprintf(irqname, sizeof(irqname), "hif%d", i);
 
 		if (queue_mode == MVPP2_QDIST_MULTI_MODE) {
-			v->first_rxq = i * MVPP2_DEFAULT_RXQ;
-			v->nrxqs = MVPP2_DEFAULT_RXQ;
+			v->first_rxq = i;
+			v->nrxqs = 1;
 		} else if (queue_mode == MVPP2_QDIST_SINGLE_MODE &&
 			   i == (port->nqvecs - 1)) {
 			v->first_rxq = 0;
@@ -4156,8 +4156,7 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	    MVPP2_MAX_PORTS * priv->max_port_rxqs)
 		return -EINVAL;
 
-	if (port->nrxqs % MVPP2_DEFAULT_RXQ ||
-	    port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
+	if (port->nrxqs > priv->max_port_rxqs || port->ntxqs > MVPP2_MAX_TXQ)
 		return -EINVAL;
 
 	/* Disable port */
@@ -4778,10 +4777,18 @@ static int mvpp2_port_probe(struct platform_device *pdev,
 	}
 
 	ntxqs = MVPP2_MAX_TXQ;
-	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_MULTI_MODE)
-		nrxqs = MVPP2_DEFAULT_RXQ * num_possible_cpus();
-	else
-		nrxqs = MVPP2_DEFAULT_RXQ;
+	if (priv->hw_version == MVPP22 && queue_mode == MVPP2_QDIST_SINGLE_MODE) {
+		nrxqs = 1;
+	} else {
+		/* According to the PPv2.2 datasheet and our experiments on
+		 * PPv2.1, RX queues have an allocation granularity of 4 (when
+		 * more than a single one on PPv2.2).
+		 * Round up to nearest multiple of 4.
+		 */
+		nrxqs = (num_possible_cpus() + 3) & ~0x3;
+		if (nrxqs > MVPP2_PORT_MAX_RXQ)
+			nrxqs = MVPP2_PORT_MAX_RXQ;
+	}
 
 	dev = alloc_etherdev_mqs(sizeof(*port), ntxqs, nrxqs);
 	if (!dev)
-- 
2.20.1
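
[Editor's illustration, not part of the patch: a minimal userspace
sketch of the rounding logic in the mvpp2_port_probe() hunk above. The
helper mvpp2_nrxqs() and the main() driver are hypothetical names
introduced here only to show how (ncpus + 3) & ~0x3 rounds the
possible-CPU count up to a multiple of 4 and clamps it at
MVPP2_PORT_MAX_RXQ.]

#include <stdio.h>

#define MVPP2_PORT_MAX_RXQ	32	/* max RXQs per port, as in the patch */

/* Hypothetical helper mirroring the patch's arithmetic: round ncpus up
 * to the next multiple of 4, then clamp to the per-port maximum.
 */
static unsigned int mvpp2_nrxqs(unsigned int ncpus)
{
	unsigned int nrxqs = (ncpus + 3) & ~0x3;

	return nrxqs > MVPP2_PORT_MAX_RXQ ? MVPP2_PORT_MAX_RXQ : nrxqs;
}

int main(void)
{
	unsigned int ncpus;

	/* 1..4 CPUs -> 4 RXQs, 5..8 -> 8, ..., 29 or more clamp to 32 */
	for (ncpus = 1; ncpus <= 40; ncpus += 7)
		printf("%2u CPUs -> %2u RXQs\n", ncpus, mvpp2_nrxqs(ncpus));

	return 0;
}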