From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Laurence Oberman, Ming Lei
Subject: [PATCH V3 1/4] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
Date: Thu, 8 Mar 2018 18:53:55 +0800
Message-Id: <20180308105358.1506-2-ming.lei@redhat.com>
In-Reply-To: <20180308105358.1506-1-ming.lei@redhat.com>
References: <20180308105358.1506-1-ming.lei@redhat.com>
The following patches will introduce a two-stage irq spreading scheme for
improving irq spread on all possible CPUs.

No functional change.

Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index a37a3b4b6342..4b1c4763212d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -39,7 +39,7 @@ static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,
 	}
 }
 
-static cpumask_var_t *alloc_node_to_possible_cpumask(void)
+static cpumask_var_t *alloc_node_to_cpumask(void)
 {
 	cpumask_var_t *masks;
 	int node;
@@ -62,7 +62,7 @@ static cpumask_var_t *alloc_node_to_possible_cpumask(void)
 	return NULL;
 }
 
-static void free_node_to_possible_cpumask(cpumask_var_t *masks)
+static void free_node_to_cpumask(cpumask_var_t *masks)
 {
 	int node;
 
@@ -71,7 +71,7 @@ static void free_node_to_possible_cpumask(cpumask_var_t *masks)
 	kfree(masks);
 }
 
-static void build_node_to_possible_cpumask(cpumask_var_t *masks)
+static void build_node_to_cpumask(cpumask_var_t *masks)
 {
 	int cpu;
 
@@ -79,14 +79,14 @@ static void build_node_to_possible_cpumask(cpumask_var_t *masks)
 		cpumask_set_cpu(cpu, masks[cpu_to_node(cpu)]);
 }
 
-static int get_nodes_in_cpumask(cpumask_var_t *node_to_possible_cpumask,
+static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
 				const struct cpumask *mask, nodemask_t *nodemsk)
 {
 	int n, nodes = 0;
 
 	/* Calculate the number of nodes in the supplied affinity mask */
 	for_each_node(n) {
-		if (cpumask_intersects(mask, node_to_possible_cpumask[n])) {
+		if (cpumask_intersects(mask, node_to_cpumask[n])) {
 			node_set(n, *nodemsk);
 			nodes++;
 		}
@@ -109,7 +109,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	int last_affv = affv + affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_possible_cpumask;
+	cpumask_var_t nmsk, *node_to_cpumask;
 
 	/*
 	 * If there aren't any vectors left after applying the pre/post
@@ -125,8 +125,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (!masks)
 		goto out;
 
-	node_to_possible_cpumask = alloc_node_to_possible_cpumask();
-	if (!node_to_possible_cpumask)
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
 		goto out;
 
 	/* Fill out vectors at the beginning that don't need affinity */
@@ -135,8 +135,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 
 	/* Stabilize the cpumasks */
 	get_online_cpus();
-	build_node_to_possible_cpumask(node_to_possible_cpumask);
-	nodes = get_nodes_in_cpumask(node_to_possible_cpumask, cpu_possible_mask,
+	build_node_to_cpumask(node_to_cpumask);
+	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_possible_mask,
 				     &nodemsk);
 
 	/*
@@ -146,7 +146,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (affv <= nodes) {
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec,
-				     node_to_possible_cpumask[n]);
+				     node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				break;
 		}
@@ -160,7 +160,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_possible_mask, node_to_possible_cpumask[n]);
+		cpumask_and(nmsk, cpu_possible_mask, node_to_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -192,7 +192,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Fill out vectors at the end that don't need affinity */
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
-	free_node_to_possible_cpumask(node_to_possible_cpumask);
+	free_node_to_cpumask(node_to_cpumask);
 out:
 	free_cpumask_var(nmsk);
 	return masks;
-- 
2.9.5
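
For readers who want to try the idea outside the kernel tree, below is a minimal,
self-contained userspace C sketch (an illustration, not part of the patch) of what
build_node_to_cpumask() and get_nodes_in_cpumask() do: collect each node's CPUs into a
per-node mask, then count how many nodes a given affinity mask intersects. The
8-CPU/2-node topology and the cpu_to_node[] table below are assumptions made purely
for the example.

/*
 * Userspace model of the node-to-cpumask table used above.
 * CPU masks are modeled as 64-bit bitmaps; the topology is invented.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_CPUS		8
#define NR_NODES	2

/* Hypothetical CPU -> NUMA node mapping (CPUs 0-3 on node 0, 4-7 on node 1). */
static const int cpu_to_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

int main(void)
{
	uint64_t node_to_cpumask[NR_NODES] = { 0 };
	uint64_t affinity_mask = 0x13;	/* CPUs 0, 1 and 4 */
	int nodes = 0;

	/* Like build_node_to_cpumask(): set each CPU's bit in its node's mask. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		node_to_cpumask[cpu_to_node[cpu]] |= 1ULL << cpu;

	/* Like get_nodes_in_cpumask(): count nodes intersecting the mask. */
	for (int n = 0; n < NR_NODES; n++)
		if (node_to_cpumask[n] & affinity_mask)
			nodes++;

	printf("nodes intersecting mask: %d\n", nodes);	/* prints 2 */
	return 0;
}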