From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, Laurence Oberman, Ming Lei
Subject: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible
Date: Thu, 8 Mar 2018 18:53:54 +0800
Message-Id: <20180308105358.1506-1-ming.lei@redhat.com>
Hi,

This patchset tries to spread irq vectors among online CPUs as far as
possible, so that we can avoid allocating too few irq vectors that have
online CPUs mapped. For example, on an 8-core system where 4 CPU cores
(4-7) are offline/not present, on a device with 4 queues:

1) before this patchset
	irq 39, cpu list 0-2
	irq 40, cpu list 3-4,6
	irq 41, cpu list 5
	irq 42, cpu list 7

2) after this patchset
	irq 39, cpu list 0,4
	irq 40, cpu list 1,6
	irq 41, cpu list 2,5
	irq 42, cpu list 3,7

Without this patchset, only two vectors (39 and 40) can be active, since
the other two are mapped to offline CPUs only; after applying it, all 4
irq vectors can be active. One disadvantage is that CPUs from different
NUMA nodes can be mapped to the same irq vector. Given that one CPU is
generally enough to handle one irq vector, this shouldn't be a big deal.
Otherwise, either more vectors have to be allocated, or performance can
be hurt with the current assignment.

V3:
	- fix one compile warning reported by the kbuild test robot

V2:
	- address comments from Christoph
	- mark irq_build_affinity_masks as static
	- move constification of get_nodes_in_cpumask's parameter into
	  one prep patch
	- add Reviewed-by tag

Thanks
Ming

Ming Lei (4):
  genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
  genirq/affinity: move actual irq vector spread into one helper
  genirq/affinity: support to do irq vectors spread starting from any
    vector
  genirq/affinity: irq vector spread among online CPUs as far as
    possible

 kernel/irq/affinity.c | 145 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 94 insertions(+), 51 deletions(-)

-- 
2.9.5
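P.S. The spreading idea above can be sketched as follows. This is a toy
illustration, not the kernel implementation: the function and variable
names (spread_vectors, cpu_to_vec, etc.) are made up for the example, and
the real code in kernel/irq/affinity.c works per NUMA node with cpumasks,
so the exact offline-CPU interleaving can differ from this sketch. The
point is only the two-pass order: round-robin the online CPUs across all
vectors first, so every vector gets at least one online CPU, then fill in
the offline/not-present CPUs.

```c
#include <stdio.h>

#define NR_CPUS 8
#define NR_VECS 4

/*
 * Toy two-pass spread: assign online CPUs round-robin across all
 * vectors first, then spread the offline/not-present CPUs the same
 * way. With nvecs <= number of online CPUs, every vector ends up
 * with at least one online CPU.
 */
static void spread_vectors(const int *online, int ncpus, int nvecs,
			   int cpu_to_vec[])
{
	int assigned = 0;
	int cpu;

	/* First pass: online CPUs only. */
	for (cpu = 0; cpu < ncpus; cpu++)
		if (online[cpu])
			cpu_to_vec[cpu] = assigned++ % nvecs;

	/* Second pass: offline/not-present CPUs fill in behind. */
	for (cpu = 0; cpu < ncpus; cpu++)
		if (!online[cpu])
			cpu_to_vec[cpu] = assigned++ % nvecs;
}
```

With CPUs 0-3 online and 4-7 offline, each of the 4 vectors gets exactly
one online and one offline CPU, which is the property the patchset is
after; the earlier possible-CPU-based spread could leave some vectors
with only offline CPUs.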