Date: Fri, 9 Aug 2019 08:42:04 -0600
From: Keith Busch
To: Ming Lei
Cc: Thomas Gleixner, linux-kernel@vger.kernel.org, Christoph Hellwig,
	linux-nvme@lists.infradead.org, Jon Derrick
Subject: Re: [PATCH 1/2] genirq/affinity: improve __irq_build_affinity_masks()
Message-ID: <20190809144204.GA28515@localhost.localdomain>
References: <20190809102310.27246-1-ming.lei@redhat.com>
	<20190809102310.27246-2-ming.lei@redhat.com>
In-Reply-To: <20190809102310.27246-2-ming.lei@redhat.com>

On Fri, Aug 09, 2019 at 06:23:09PM +0800, Ming Lei wrote:
> One invariant of __irq_build_affinity_masks() is that all CPUs in the
> specified masks (cpu_mask AND node_to_cpumask for each node) should be
> covered during the spread. Even though all requested vectors have been
> reached, we still need to spread vectors among the remaining CPUs. A
> similar policy is already applied in the 'numvecs <= nodes' case.
>
> So remove the following check inside the loop:
>
> 	if (done >= numvecs)
> 		break;
>
> Meanwhile, assign at least one vector to each remaining node once
> 'numvecs' vectors have been spread.
>
> Also, if the specified cpumask for a NUMA node is empty, simply do not
> spread vectors on that node.
>
> Cc: Christoph Hellwig
> Cc: Keith Busch
> Cc: linux-nvme@lists.infradead.org
> Cc: Jon Derrick
> Signed-off-by: Ming Lei
> ---
>  kernel/irq/affinity.c | 33 +++++++++++++++++++++------------
>  1 file changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 6fef48033f96..bc3652a2c61b 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -129,21 +129,32 @@ static int __irq_build_affinity_masks(unsigned int startvec,
>  	for_each_node_mask(n, nodemsk) {
>  		unsigned int ncpus, v, vecs_to_assign, vecs_per_node;
>
> -		/* Spread the vectors per node */
> -		vecs_per_node = (numvecs - (curvec - firstvec)) / nodes;
> -
>  		/* Get the cpus on this node which are in the mask */
>  		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
> -
> -		/* Calculate the number of cpus per vector */
>  		ncpus = cpumask_weight(nmsk);
> +		if (!ncpus)
> +			continue;

This shouldn't be possible, right? The nodemsk we're looping wouldn't
have had that node set if no CPUs intersect the node_to_cpumask for
that node, so the resulting cpumask should always have a non-zero
weight (the helper that builds nodemsk is sketched at the end of this
mail).

> @@ -153,16 +164,14 @@ static int __irq_build_affinity_masks(unsigned int startvec,
>  			}
>  			irq_spread_init_one(&masks[curvec].mask, nmsk,
>  						cpus_per_vec);
> +			if (++curvec >= last_affv)
> +				curvec = firstvec;

I'm not so sure about wrapping the vector to share it across nodes. We
have enough vectors in this path to ensure each compute node can have a
unique one, and it's much cheaper to share these within nodes than
across them.

>  		}
>
>  		done += v;
> -		if (done >= numvecs)
> -			break;
> -		if (curvec >= last_affv)
> -			curvec = firstvec;
>  		--nodes;
>  	}
> -	return done;
> +	return done < numvecs ? done : numvecs;
>  }
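
For reference on the !ncpus question above: nodemsk comes from
get_nodes_in_cpumask() earlier in kernel/irq/affinity.c, which only
sets a node when its cpumask actually intersects the supplied cpu_mask.
A rough sketch of that helper, paraphrased from memory rather than
copied verbatim, so details may differ slightly from the tree this
series is based on:

	static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
					const struct cpumask *mask,
					nodemask_t *nodemsk)
	{
		int n, nodes = 0;

		/* Record only nodes with at least one CPU in 'mask' */
		for_each_node(n) {
			if (cpumask_intersects(mask, node_to_cpumask[n])) {
				node_set(n, *nodemsk);
				nodes++;
			}
		}
		return nodes;
	}

So by the time the loop does cpumask_and(nmsk, cpu_mask,
node_to_cpumask[n]), every node left in nodemsk should contribute at
least one CPU, which is why the new !ncpus check looks unreachable to
me.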