From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner,
	linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	Laurence Oberman, Ming Lei, Christoph Hellwig
Subject: [PATCH 2/5] genirq/affinity: move actual irq vector spread into one helper
Date: Tue, 6 Feb 2018 20:17:39 +0800
Message-Id: <20180206121742.29336-3-ming.lei@redhat.com>
In-Reply-To: <20180206121742.29336-1-ming.lei@redhat.com>
References: <20180206121742.29336-1-ming.lei@redhat.com>

No functional change, just prepare for converting to 2-stage irq
vector spread.

Cc: Thomas Gleixner
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
 kernel/irq/affinity.c | 99 +++++++++++++++++++++++++++++----------------------
 1 file changed, 56 insertions(+), 43 deletions(-)
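[ Note: a minimal sketch of where this refactor is headed, based only on
  the changelog above; the 2-stage caller shown here is hypothetical and
  not part of this patch.  Because the spread loop now takes the cpumask
  to spread across as a parameter (cpu_mask) instead of hard-coding
  cpu_possible_mask, a follow-up patch can invoke the helper once per
  stage, e.g. (npresmsk is an assumed extra scratch cpumask):

	/* stage 1: spread vectors across the present CPUs first */
	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
					   cpu_present_mask, nmsk, masks);

	/* stage 2: spread remaining vectors across non-present CPUs */
	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
					   npresmsk, nmsk, masks);
]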
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 4b1c4763212d..6af3f6727f63 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -79,7 +79,7 @@ static void build_node_to_cpumask(cpumask_var_t *masks)
 		cpumask_set_cpu(cpu, masks[cpu_to_node(cpu)]);
 }
 
-static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
+static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
 				const struct cpumask *mask, nodemask_t *nodemsk)
 {
 	int n, nodes = 0;
@@ -94,50 +94,19 @@ static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-/**
- * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
- * @nvecs:	The total number of vectors
- * @affd:	Description of the affinity requirements
- *
- * Returns the masks pointer or NULL if allocation failed.
- */
-struct cpumask *
-irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
+			     const cpumask_var_t *node_to_cpumask,
+			     const struct cpumask *cpu_mask,
+			     struct cpumask *nmsk,
+			     struct cpumask *masks)
 {
-	int n, nodes, cpus_per_vec, extra_vecs, curvec;
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int last_affv = affv + affd->pre_vectors;
+	int curvec = affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
-	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_cpumask;
-
-	/*
-	 * If there aren't any vectors left after applying the pre/post
-	 * vectors don't bother with assigning affinity.
-	 */
-	if (!affv)
-		return NULL;
-
-	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return NULL;
-
-	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
-	if (!masks)
-		goto out;
+	int n, nodes, cpus_per_vec, extra_vecs;
 
-	node_to_cpumask = alloc_node_to_cpumask();
-	if (!node_to_cpumask)
-		goto out;
-
-	/* Fill out vectors at the beginning that don't need affinity */
-	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
-
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	build_node_to_cpumask(node_to_cpumask);
-	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_possible_mask,
-				     &nodemsk);
+	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
 	/*
 	 * If the number of nodes in the mask is greater than or equal the
@@ -150,7 +119,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 			if (++curvec == last_affv)
 				break;
 		}
-		goto done;
+		goto out;
 	}
 
 	for_each_node_mask(n, nodemsk) {
@@ -160,7 +129,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_possible_mask, node_to_cpumask[n]);
+		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -186,7 +155,51 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		--nodes;
 	}
 
-done:
+out:
+	return curvec - affd->pre_vectors;
+}
+
+/**
+ * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
+ * @nvecs:	The total number of vectors
+ * @affd:	Description of the affinity requirements
+ *
+ * Returns the masks pointer or NULL if allocation failed.
+ */
+struct cpumask *
+irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+{
+	int curvec;
+	struct cpumask *masks;
+	cpumask_var_t nmsk, *node_to_cpumask;
+
+	/*
+	 * If there aren't any vectors left after applying the pre/post
+	 * vectors don't bother with assigning affinity.
+	 */
+	if (nvecs == affd->pre_vectors + affd->post_vectors)
+		return NULL;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return NULL;
+
+	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
+	if (!masks)
+		goto out;
+
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
+		goto out;
+
+	/* Fill out vectors at the beginning that don't need affinity */
+	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
+		cpumask_copy(masks + curvec, irq_default_affinity);
+
+	/* Stabilize the cpumasks */
+	get_online_cpus();
+	build_node_to_cpumask(node_to_cpumask);
+	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
+					   cpu_possible_mask, nmsk, masks);
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */
-- 
2.9.5