From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, Laurence Oberman, Ming Lei, Christoph Hellwig
Subject: [PATCH 3/5] genirq/affinity: support irq vector spread starting from any vector
Date: Tue, 6 Feb 2018 20:17:40 +0800
Message-Id: <20180206121742.29336-4-ming.lei@redhat.com>
In-Reply-To: <20180206121742.29336-1-ming.lei@redhat.com>
References: <20180206121742.29336-1-ming.lei@redhat.com>

Introduce two parameters, 'start_vec' and 'affv', to
irq_build_affinity_masks() so that this helper can build the affinity
of each irq vector starting from the vector given by 'start_vec', and
handle at most 'affv' vectors. When the spread runs past the last
affinity vector, 'curvec' wraps back to the first one
(affd->pre_vectors).

This is required for doing a 2-stage spread of irq vectors among all
possible CPUs.

Cc: Thomas Gleixner
Cc: Christoph Hellwig
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 kernel/irq/affinity.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 6af3f6727f63..9801aecf8763 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -94,17 +94,17 @@ static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
+int irq_build_affinity_masks(const struct irq_affinity *affd,
+			     const int start_vec, const int affv,
 			     const cpumask_var_t *node_to_cpumask,
 			     const struct cpumask *cpu_mask,
 			     struct cpumask *nmsk, struct cpumask *masks)
 {
-	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int last_affv = affv + affd->pre_vectors;
-	int curvec = affd->pre_vectors;
+	int curvec = start_vec;
 	nodemask_t nodemsk = NODE_MASK_NONE;
-	int n, nodes, cpus_per_vec, extra_vecs;
+	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
 
 	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
@@ -116,8 +116,10 @@ int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 		for_each_node_mask(n, nodemsk) {
 			cpumask_copy(masks + curvec, node_to_cpumask[n]);
-			if (++curvec == last_affv)
+			if (++done == affv)
 				break;
+			if (++curvec == last_affv)
+				curvec = affd->pre_vectors;
 		}
 		goto out;
 	}
 
@@ -150,13 +152,16 @@ int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
 		}
 
-		if (curvec >= last_affv)
+		done += v;
+		if (done >= affv)
 			break;
+		if (curvec >= last_affv)
+			curvec = affd->pre_vectors;
 		--nodes;
 	}
 
 out:
-	return curvec - affd->pre_vectors;
+	return done;
 }
 
 /**
@@ -169,6 +174,7 @@ int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
 struct cpumask *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
+	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int curvec;
 	struct cpumask *masks;
 	cpumask_var_t nmsk, *node_to_cpumask;
@@ -198,7 +204,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Stabilize the cpumasks */
 	get_online_cpus();
 	build_node_to_cpumask(node_to_cpumask);
-	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
+	curvec += irq_build_affinity_masks(affd, curvec, affv,
+					   node_to_cpumask,
 					   cpu_possible_mask, nmsk, masks);
 	put_online_cpus();
-- 
2.9.5
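
For reviewers, a rough sketch of how a caller might use the new
'start_vec'/'affv' parameters to do the 2-stage spread this series is
building toward: stage 1 spreads over the present CPUs, stage 2 spreads
over the CPUs that are possible but not present. The variable names
('usedvecs', 'npresmsk') and the surrounding setup are illustrative
assumptions only, not part of this patch:

	int usedvecs, curvec = affd->pre_vectors;
	cpumask_var_t npresmsk;		/* assumed pre-allocated scratch mask */

	/* Stage 1: spread the affinity vectors over the present CPUs */
	usedvecs = irq_build_affinity_masks(affd, curvec, affv,
					    node_to_cpumask,
					    cpu_present_mask, nmsk, masks);

	/*
	 * Stage 2: spread over CPUs that are possible but not present,
	 * continuing right after the vectors used in stage 1. If stage 1
	 * already consumed the whole vector space, start over from the
	 * first affinity vector, so the non-present CPUs get assigned to
	 * the already-spread vectors.
	 */
	if (usedvecs >= affv)
		curvec = affd->pre_vectors;
	else
		curvec = affd->pre_vectors + usedvecs;
	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
	usedvecs += irq_build_affinity_masks(affd, curvec, affv,
					     node_to_cpumask, npresmsk,
					     nmsk, masks);

The wrap-around of 'curvec' inside the helper is what makes the second
call safe even when it starts partway through the vector space.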