Date: Mon, 5 Nov 2018 03:22:30 -0800
From: tip-bot for Ming Lei
Message-ID:
Cc: sagi@grimberg.me, hare@suse.com, hpa@zytor.com, mingo@kernel.org,
    ming.lei@redhat.com, linux-kernel@vger.kernel.org, keith.busch@intel.com,
    tglx@linutronix.de, axboe@kernel.dk
Reply-To: sagi@grimberg.me, hare@suse.com, hpa@zytor.com, ming.lei@redhat.com,
    mingo@kernel.org, keith.busch@intel.com, tglx@linutronix.de,
    linux-kernel@vger.kernel.org, axboe@kernel.dk
In-Reply-To: <20181102145951.31979-3-ming.lei@redhat.com>
References: <20181102145951.31979-3-ming.lei@redhat.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:irq/core] genirq/affinity: Move two stage affinity spreading into a
 helper function
Git-Commit-ID: 5c903e108d0b005cf59904ca3520934fca4b9439
X-Mailer: tip-git-log-daemon

Commit-ID:  5c903e108d0b005cf59904ca3520934fca4b9439
Gitweb:     https://git.kernel.org/tip/5c903e108d0b005cf59904ca3520934fca4b9439
Author:     Ming Lei
AuthorDate: Fri, 2 Nov 2018 22:59:49 +0800
Committer:  Thomas Gleixner
CommitDate: Mon, 5 Nov 2018 12:16:26 +0100

genirq/affinity: Move two stage affinity spreading into a helper function

No functional change. Prepares for supporting allocating and affinitizing
interrupt sets.
[ tglx: Minor changelog tweaks ]

Signed-off-by: Ming Lei
Signed-off-by: Thomas Gleixner
Cc: Jens Axboe
Cc: linux-block@vger.kernel.org
Cc: Hannes Reinecke
Cc: Keith Busch
Cc: Sagi Grimberg
Link: https://lkml.kernel.org/r/20181102145951.31979-3-ming.lei@redhat.com

---
 kernel/irq/affinity.c | 92 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 56 insertions(+), 36 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index e12cdf637c71..2f9812b6035e 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -94,7 +94,7 @@ static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-static int irq_build_affinity_masks(const struct irq_affinity *affd,
+static int __irq_build_affinity_masks(const struct irq_affinity *affd,
 				    int startvec, int numvecs,
 				    cpumask_var_t *node_to_cpumask,
 				    const struct cpumask *cpu_mask,
@@ -165,6 +165,58 @@ out:
 	return done;
 }
 
+/*
+ * build affinity in two stages:
+ *	1) spread present CPU on these vectors
+ *	2) spread other possible CPUs on these vectors
+ */
+static int irq_build_affinity_masks(const struct irq_affinity *affd,
+				    int startvec, int numvecs,
+				    cpumask_var_t *node_to_cpumask,
+				    struct cpumask *masks)
+{
+	int curvec = startvec, usedvecs = -1;
+	cpumask_var_t nmsk, npresmsk;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return usedvecs;
+
+	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
+		goto fail;
+
+	/* Stabilize the cpumasks */
+	get_online_cpus();
+	build_node_to_cpumask(node_to_cpumask);
+
+	/* Spread on present CPUs starting from affd->pre_vectors */
+	usedvecs = __irq_build_affinity_masks(affd, curvec, numvecs,
+					      node_to_cpumask, cpu_present_mask,
+					      nmsk, masks);
+
+	/*
+	 * Spread on non present CPUs starting from the next vector to be
+	 * handled. If the spreading of present CPUs already exhausted the
+	 * vector space, assign the non present CPUs to the already spread
+	 * out vectors.
+	 */
+	if (usedvecs >= numvecs)
+		curvec = affd->pre_vectors;
+	else
+		curvec = affd->pre_vectors + usedvecs;
+	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
+	usedvecs += __irq_build_affinity_masks(affd, curvec, numvecs,
+					       node_to_cpumask, npresmsk,
+					       nmsk, masks);
+	put_online_cpus();
+
+	free_cpumask_var(npresmsk);
+
+ fail:
+	free_cpumask_var(nmsk);
+
+	return usedvecs;
+}
+
 /**
  * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
  * @nvecs:	The total number of vectors
@@ -177,7 +229,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
 	int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
 	int curvec, usedvecs;
-	cpumask_var_t nmsk, npresmsk, *node_to_cpumask;
+	cpumask_var_t *node_to_cpumask;
 	struct cpumask *masks = NULL;
 
 	/*
@@ -187,15 +239,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (nvecs == affd->pre_vectors + affd->post_vectors)
 		return NULL;
 
-	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return NULL;
-
-	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
-		goto outcpumsk;
-
 	node_to_cpumask = alloc_node_to_cpumask();
 	if (!node_to_cpumask)
-		goto outnpresmsk;
+		return NULL;
 
 	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
 	if (!masks)
@@ -205,30 +251,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
 
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	build_node_to_cpumask(node_to_cpumask);
-
-	/* Spread on present CPUs starting from affd->pre_vectors */
 	usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
-					    node_to_cpumask, cpu_present_mask,
-					    nmsk, masks);
-
-	/*
-	 * Spread on non present CPUs starting from the next vector to be
-	 * handled. If the spreading of present CPUs already exhausted the
-	 * vector space, assign the non present CPUs to the already spread
-	 * out vectors.
-	 */
-	if (usedvecs >= affvecs)
-		curvec = affd->pre_vectors;
-	else
-		curvec = affd->pre_vectors + usedvecs;
-	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
-	usedvecs += irq_build_affinity_masks(affd, curvec, affvecs,
-					    node_to_cpumask, npresmsk,
-					    nmsk, masks);
-	put_online_cpus();
+					    node_to_cpumask, masks);
 
 	/* Fill out vectors at the end that don't need affinity */
 	if (usedvecs >= affvecs)
@@ -240,10 +264,6 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 outnodemsk:
 	free_node_to_cpumask(node_to_cpumask);
-outnpresmsk:
-	free_cpumask_var(npresmsk);
-outcpumsk:
-	free_cpumask_var(nmsk);
 	return masks;
 }
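
For illustration only, the two stage spreading that the new helper encapsulates can be sketched as a standalone userspace C program. This is a deliberately simplified model, not the kernel code: plain int arrays stand in for cpumasks, the per-NUMA-node grouping done inside __irq_build_affinity_masks() is ignored, and the names spread() and vec_of are hypothetical. It only shows the ordering idea: stage 1 assigns present CPUs to vectors, stage 2 assigns possible-but-not-present CPUs, reusing vectors from the start if stage 1 already exhausted the vector space.

```c
#include <stddef.h>

/*
 * Simplified model of two stage interrupt affinity spreading
 * (hypothetical helper, not the kernel API):
 *   stage 1: assign each present CPU a vector, round-robin;
 *   stage 2: assign each possible-but-not-present CPU a vector,
 *            continuing from where stage 1 stopped, or wrapping to
 *            vector 0 if stage 1 already used every vector.
 * present[]/possible[] are 0/1 flags per CPU; vec_of[cpu] receives
 * the vector assigned to that CPU. Returns the number of distinct
 * vectors used (at most numvecs).
 */
static int spread(const int *present, const int *possible, int ncpus,
		  int numvecs, int *vec_of)
{
	int curvec = 0, usedvecs = 0;

	/* Stage 1: spread present CPUs on the vectors */
	for (int cpu = 0; cpu < ncpus; cpu++) {
		if (!present[cpu])
			continue;
		vec_of[cpu] = curvec;
		curvec = (curvec + 1) % numvecs;
		if (usedvecs < numvecs)
			usedvecs++;
	}

	/*
	 * Stage 2: spread the remaining possible CPUs. If stage 1
	 * exhausted the vector space, start over from vector 0
	 * (mirroring the "curvec = affd->pre_vectors" reset above).
	 */
	if (usedvecs >= numvecs)
		curvec = 0;
	for (int cpu = 0; cpu < ncpus; cpu++) {
		if (present[cpu] || !possible[cpu])
			continue;
		vec_of[cpu] = curvec;
		curvec = (curvec + 1) % numvecs;
		if (usedvecs < numvecs)
			usedvecs++;
	}

	return usedvecs;
}
```

With 4 present CPUs out of 6 possible and 3 vectors, stage 1 lands CPUs 0-3 on vectors 0, 1, 2, 0 and exhausts the vector space, so stage 2 assigns the two non-present CPUs starting again from vector 0, just as the commit's comment describes for the exhausted case.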