From: "Guilherme G. Piccoli" <gpiccoli@linux.vnet.ibm.com>
To: tglx@linutronix.de, linux-kernel@vger.kernel.org
Cc: gabriel@krisman.be, hch@lst.de, linuxppc-dev@lists.ozlabs.org,
	linux-pci@vger.kernel.org, gpiccoli@linux.vnet.ibm.com,
	stable@vger.kernel.org # v4.9+
Subject: [PATCH] genirq/affinity: fix node generation from cpumask
Date: Wed, 14 Dec 2016 16:01:12 -0200
Message-Id: <1481738472-2671-1-git-send-email-gpiccoli@linux.vnet.ibm.com>

Commit 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading
infrastructure") introduced a better IRQ spreading mechanism, taking
into account the available NUMA nodes in the machine.

The problem is that the algorithm that retrieves the nodemask iterates
"linearly", based on the number of online nodes, while some
architectures, like PowerPC, present a sparse (non-linear) node
numbering. In that case, the algorithm leads to a wrong node count and
therefore to a bad/incomplete IRQ affinity distribution.

For example, this problem was found on a machine with 128 CPUs and two
nodes, namely nodes 0 and 8 (instead of 0 and 1 as in a linear
numbering). This led to a wrong affinity distribution, which in turn
led to a bad blk-mq allocation for the nvme driver.

Finally, we take the opportunity to fix a comment regarding the
affinity distribution when we have _more_ nodes than vectors.
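For illustration, here is a minimal standalone sketch (plain userspace
C, not kernel code; the online_nodes[] set and the node_has_cpus()
helper are invented stand-ins for the node state and
cpumask_intersects()) of why the linear loop undercounts when the node
numbering is sparse:

#include <stdio.h>
#include <stdbool.h>

static const int online_nodes[] = { 0, 8 }; /* sparse IDs, as on the PowerPC box */
#define NUM_ONLINE (sizeof(online_nodes) / sizeof(online_nodes[0]))

/* stand-in for cpumask_intersects(mask, cpumask_of_node(n)) */
static bool node_has_cpus(int n)
{
	for (unsigned int i = 0; i < NUM_ONLINE; i++)
		if (online_nodes[i] == n)
			return true;
	return false;
}

int main(void)
{
	int n, nodes = 0;

	/* old loop: visits node IDs 0 .. num_online_nodes()-1, i.e. 0 and 1 */
	for (n = 0; n < (int)NUM_ONLINE; n++)
		if (node_has_cpus(n))
			nodes++;
	printf("linear loop counts %d node(s)\n", nodes);   /* prints 1 */

	/* fixed behavior: walk the actual online node IDs, as
	 * for_each_online_node() does */
	nodes = 0;
	for (unsigned int i = 0; i < NUM_ONLINE; i++)
		if (node_has_cpus(online_nodes[i]))
			nodes++;
	printf("per-node walk counts %d node(s)\n", nodes); /* prints 2 */
	return 0;
}

Since node 8 lies outside the range [0, num_online_nodes()), the linear
loop never tests it, so get_nodes_in_cpumask() reports one node instead
of two and the vectors end up spread as if the machine had a single
node.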
Fixes: 34c3d9819fda ("genirq/affinity: Provide smarter irq spreading infrastructure")
Reported-by: Gabriel Krisman Bertazi <gabriel@krisman.be>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # v4.9+
Cc: Christoph Hellwig <hch@lst.de>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-pci@vger.kernel.org
---
 kernel/irq/affinity.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 9be9bda..464eaf0 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -37,15 +37,15 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk)
 {
-	int n, nodes;
+	int n, nodes = 0;

 	/* Calculate the number of nodes in the supplied affinity mask */
-	for (n = 0, nodes = 0; n < num_online_nodes(); n++) {
+	for_each_online_node(n)
 		if (cpumask_intersects(mask, cpumask_of_node(n))) {
 			node_set(n, *nodemsk);
 			nodes++;
 		}
-	}
+
 	return nodes;
 }

@@ -82,7 +82,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);

 	/*
-	 * If the number of nodes in the mask is less than or equal the
+	 * If the number of nodes in the mask is greater than or equal the
 	 * number of vectors we just spread the vectors across the nodes.
 	 */
 	if (affv <= nodes) {
-- 
2.1.0