From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, Laurence Oberman, Ming Lei, Christoph Hellwig
Subject: [PATCH 4/5] genirq/affinity: irq vector spread among online CPUs as far as possible
Date: Tue, 6 Feb 2018 20:17:41 +0800
Message-Id: <20180206121742.29336-5-ming.lei@redhat.com>
In-Reply-To: <20180206121742.29336-1-ming.lei@redhat.com>
References: <20180206121742.29336-1-ming.lei@redhat.com>
84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs") can
result in irq vectors being assigned to CPUs that are all offline. That
kind of assignment leaves far fewer irq vectors mapped to online CPUs,
and performance may suffer.

For example, take an 8-core system with CPUs 0-3 online and CPUs 4-7
offline/not present, as shown by 'lscpu':

[ming@box]$lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             2
NUMA node(s):          2
...
NUMA node0 CPU(s):     0-3
NUMA node1 CPU(s):
...

Suppose one device has 4 queues:

1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
	irq 39, cpu list 0
	irq 40, cpu list 1
	irq 41, cpu list 2
	irq 42, cpu list 3

2) after 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
	irq 39, cpu list 0-2
	irq 40, cpu list 3-4,6
	irq 41, cpu list 5
	irq 42, cpu list 7

3) after applying this patch against v4.15+:
	irq 39, cpu list 0,4
	irq 40, cpu list 1,6
	irq 41, cpu list 2,5
	irq 42, cpu list 3,7

This patch spreads irq vectors among online CPUs as far as possible by
doing the spread in two stages: vectors are spread across the online
CPUs first, and the remaining (possible but offline) CPUs are spread
across the vectors afterwards.

Assignment 3) above is not optimal from a NUMA point of view, but it
maps more irq vectors to online CPUs. Given that in practice one CPU is
usually sufficient to handle one irq vector, this is the better
trade-off.

Cc: Thomas Gleixner
Cc: Christoph Hellwig
Reported-by: Laurence Oberman
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
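Note for readers of the listings above: the hypothetical userspace
sketch below (plain C, not kernel code, and deliberately ignoring the
NUMA-aware grouping that irq_build_affinity_masks() performs) shows the
two-stage idea in isolation: spread vectors over the online CPUs first,
then fold the leftover offline CPUs back onto the same vectors. All
names in it (spread_mask(), NR_VECS, and so on) are made up for
illustration only.

/*
 * Toy model of the two-stage spread described in the changelog.
 * It round-robins CPUs over vectors and ignores NUMA grouping, so the
 * exact pairing differs from listing 3) above, but the shape is the same:
 * every vector ends up with one online and one offline CPU.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS   8	/* possible CPUs 0-7 */
#define NR_VECS   4	/* one queue/vector per online CPU in the example */

/* spread the CPUs set in @mask over the vectors, starting at @startvec */
static int spread_mask(const bool *mask, unsigned long vecs[NR_VECS], int startvec)
{
	int cpu, used = 0, v = startvec;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!mask[cpu])
			continue;
		vecs[v % NR_VECS] |= 1UL << cpu;
		v++;
		used++;
	}
	/* number of vectors this stage actually touched (capped at NR_VECS) */
	return used < NR_VECS ? used : NR_VECS;
}

int main(void)
{
	bool online[NR_CPUS] = { 1, 1, 1, 1, 0, 0, 0, 0 };	/* CPUs 0-3 online */
	bool offline[NR_CPUS];
	unsigned long vecs[NR_VECS] = { 0 };
	int cpu, v, vecs_online, curvec;

	/* offline = possible & ~online */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		offline[cpu] = !online[cpu];

	/* stage 1: spread vectors over online CPUs first */
	vecs_online = spread_mask(online, vecs, 0);

	/*
	 * stage 2: spread offline CPUs over the remaining vectors; if stage 1
	 * already covered every vector, start over from vector 0
	 */
	curvec = vecs_online >= NR_VECS ? 0 : vecs_online;
	spread_mask(offline, vecs, curvec);

	for (v = 0; v < NR_VECS; v++) {
		printf("vector %d: cpus", v);
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (vecs[v] & (1UL << cpu))
				printf(" %d", cpu);
		printf("\n");
	}
	return 0;
}

Built and run with 'cc spread_model.c && ./a.out' (file name arbitrary),
it prints one online plus one offline CPU per vector, which matches the
shape of listing 3); the real code pairs CPUs differently because it
also walks NUMA nodes.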
 kernel/irq/affinity.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 9801aecf8763..6755ed77d017 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -106,6 +106,9 @@ int irq_build_affinity_masks(const struct irq_affinity *affd,
 	nodemask_t nodemsk = NODE_MASK_NONE;
 	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
 
+	if (!cpumask_weight(cpu_mask))
+		return 0;
+
 	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
 	/*
@@ -175,9 +178,9 @@ struct cpumask *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
-	int curvec;
+	int curvec, vecs_offline, vecs_online;
 	struct cpumask *masks;
-	cpumask_var_t nmsk, *node_to_cpumask;
+	cpumask_var_t nmsk, cpu_mask, *node_to_cpumask;
 
 	/*
 	 * If there aren't any vectors left after applying the pre/post
@@ -193,9 +196,12 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	if (!masks)
 		goto out;
 
+	if (!alloc_cpumask_var(&cpu_mask, GFP_KERNEL))
+		goto out;
+
 	node_to_cpumask = alloc_node_to_cpumask();
 	if (!node_to_cpumask)
-		goto out;
+		goto out_free_cpu_mask;
 
 	/* Fill out vectors at the beginning that don't need affinity */
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
@@ -204,15 +210,32 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 	/* Stabilize the cpumasks */
 	get_online_cpus();
 	build_node_to_cpumask(node_to_cpumask);
-	curvec += irq_build_affinity_masks(affd, curvec, affv,
-					   node_to_cpumask,
-					   cpu_possible_mask, nmsk, masks);
+	/* spread on online CPUs starting from the vector of affd->pre_vectors */
+	vecs_online = irq_build_affinity_masks(affd, curvec, affv,
+					       node_to_cpumask,
+					       cpu_online_mask, nmsk, masks);
+
+	/* spread on offline CPUs starting from the next vector to be handled */
+	if (vecs_online >= affv)
+		curvec = affd->pre_vectors;
+	else
+		curvec = affd->pre_vectors + vecs_online;
+	cpumask_andnot(cpu_mask, cpu_possible_mask, cpu_online_mask);
+	vecs_offline = irq_build_affinity_masks(affd, curvec, affv,
+						node_to_cpumask,
+						cpu_mask, nmsk, masks);
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */
+	if (vecs_online + vecs_offline >= affv)
+		curvec = affv + affd->pre_vectors;
+	else
+		curvec = affd->pre_vectors + vecs_online + vecs_offline;
 	for (; curvec < nvecs; curvec++)
 		cpumask_copy(masks + curvec, irq_default_affinity);
 	free_node_to_cpumask(node_to_cpumask);
+out_free_cpu_mask:
+	free_cpumask_var(cpu_mask);
 out:
 	free_cpumask_var(nmsk);
 	return masks;
-- 
2.9.5
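The per-irq CPU lists quoted in the examples can be read back on a live
system from /proc/irq/<n>/smp_affinity_list. A minimal sketch, assuming
the irq numbers 39-42 from the example above (they will differ on other
machines):

/* Print the affinity list of a few irqs; irq numbers are taken from the
 * example in the changelog and will differ on other systems. */
#include <stdio.h>

int main(void)
{
	int irq;
	char path[64], line[256];

	for (irq = 39; irq <= 42; irq++) {
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity_list", irq);
		f = fopen(path, "r");
		if (!f)
			continue;	/* irq not present on this system */
		if (fgets(line, sizeof(line), f))
			printf("irq %d, cpu list %s", irq, line);
		fclose(f);
	}
	return 0;
}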