From: Ming Lei
To: Thomas Gleixner
Cc: linux-kernel@vger.kernel.org, Ming Lei, Christoph Hellwig, Keith Busch,
    linux-nvme@lists.infradead.org, Jon Derrick, Jens Axboe
Subject: [PATCH V5 1/2] genirq/affinity: Improve __irq_build_affinity_masks()
Date: Fri, 16 Aug 2019 10:28:48 +0800
Message-Id: <20190816022849.14075-2-ming.lei@redhat.com>
In-Reply-To: <20190816022849.14075-1-ming.lei@redhat.com>
References: <20190816022849.14075-1-ming.lei@redhat.com>

One invariant of __irq_build_affinity_masks() is that all CPUs in the
specified masks (cpu_mask AND node_to_cpumask for each node) should be
covered during the spread. Even after all requested vectors have been
allocated, vectors still need to be spread among the remaining CPUs. A
similar policy is already applied in the 'numvecs <= nodes' case.

So remove the following check inside the loop:

	if (done >= numvecs)
		break;

Meanwhile, assign at least one vector to each remaining node once 'numvecs'
vectors have already been handled.

Also, if the specified cpumask for a NUMA node is empty, simply do not
spread vectors on that node.

Cc: Christoph Hellwig
Cc: Keith Busch
Cc: linux-nvme@lists.infradead.org
Cc: Jon Derrick
Cc: Jens Axboe
Signed-off-by: Ming Lei
---
 kernel/irq/affinity.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 6fef48033f96..c7cca942bd8a 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -129,14 +129,26 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 	for_each_node_mask(n, nodemsk) {
 		unsigned int ncpus, v, vecs_to_assign, vecs_per_node;
 
-		/* Spread the vectors per node */
-		vecs_per_node = (numvecs - (curvec - firstvec)) / nodes;
-
 		/* Get the cpus on this node which are in the mask */
 		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
-
-		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
+		if (!ncpus)
+			continue;
+
+		/*
+		 * Calculate the number of cpus per vector
+		 *
+		 * Spread the vectors evenly per node. If the requested
+		 * vector number has been reached, simply allocate one
+		 * vector for each remaining node so that all nodes can
+		 * be covered
+		 */
+		if (numvecs > done)
+			vecs_per_node = max_t(unsigned,
+					(numvecs - done) / nodes, 1);
+		else
+			vecs_per_node = 1;
+
 		vecs_to_assign = min(vecs_per_node, ncpus);
 
 		/* Account for rounding errors */
@@ -156,13 +168,11 @@ static int __irq_build_affinity_masks(unsigned int startvec,
 		}
 
 		done += v;
-		if (done >= numvecs)
-			break;
 		if (curvec >= last_affv)
 			curvec = firstvec;
 		--nodes;
 	}
-	return done;
+	return done < numvecs ? done : numvecs;
 }
 
 /*
-- 
2.20.1
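
[Editorial note] For readers unfamiliar with the spreading logic, below is a
minimal, standalone userspace sketch of the vecs_per_node policy the hunk
above introduces. It is not kernel code; the helper name vecs_for_node and
the sample numbers are made up purely for illustration.

	/*
	 * Standalone illustration of the new policy: spread the remaining
	 * vectors evenly over the remaining nodes, but hand out at least
	 * one vector per node even after the requested vector count has
	 * been reached, so every node's CPUs end up covered.
	 */
	#include <stdio.h>

	static unsigned int vecs_for_node(unsigned int numvecs,
					  unsigned int done,
					  unsigned int nodes)
	{
		unsigned int per_node;

		if (numvecs <= done)
			return 1;	/* all vectors handled: still cover the node */

		per_node = (numvecs - done) / nodes;
		return per_node ? per_node : 1;
	}

	int main(void)
	{
		/* 16 vectors requested, 4 already assigned, 3 nodes left: 4 each */
		printf("%u\n", vecs_for_node(16, 4, 3));
		/* all 16 vectors already assigned, 2 nodes still uncovered: 1 each */
		printf("%u\n", vecs_for_node(16, 16, 2));
		return 0;
	}

With 16 requested vectors, 4 already assigned and 3 nodes left, each node
gets (16 - 4) / 3 = 4 vectors; once all 16 are assigned, any still-uncovered
node gets one vector so its CPUs are not left out of the spread.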