From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, Christoph Hellwig, Thomas Gleixner, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, Laurence Oberman, Ming Lei
Subject: [PATCH 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible
Date: Tue, 6 Feb 2018 20:17:37 +0800
Message-Id: <20180206121742.29336-1-ming.lei@redhat.com>
Hi,

This patchset tries to spread irq vectors among online CPUs as far as
possible, so that we can avoid ending up with too few irq vectors that
have online CPUs mapped.

For example, in an 8-core system where CPU cores 4-7 are offline/not
present, on a device with 4 queues:

1) before this patchset
	irq 39, cpu list 0-2
	irq 40, cpu list 3-4,6
	irq 41, cpu list 5
	irq 42, cpu list 7

2) after this patchset
	irq 39, cpu list 0,4
	irq 40, cpu list 1,6
	irq 41, cpu list 2,5
	irq 42, cpu list 3,7

Without this patchset, only two vectors (39 and 40) can be active,
since the others are mapped only to offline CPUs; after applying this
patchset, all 4 irq vectors can be active.

One disadvantage is that CPUs from different NUMA nodes can be mapped
to the same irq vector. Given that one CPU should generally be enough
to handle one irq vector, it shouldn't be a big deal. Especially since,
otherwise, more vectors would have to be allocated, or performance
could be hurt by the current assignment. A minimal user-space sketch of
the spreading idea is appended after the diffstat below.

Thanks,
Ming

Ming Lei (5):
  genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask
  genirq/affinity: move actual irq vector spread into one helper
  genirq/affinity: support to do irq vectors spread starting from any vector
  genirq/affinity: irq vector spread among online CPUs as far as possible
  nvme: pci: pass max vectors as num_possible_cpus() to pci_alloc_irq_vectors

 drivers/nvme/host/pci.c |   2 +-
 kernel/irq/affinity.c   | 145 +++++++++++++++++++++++++++++++-----------------
 2 files changed, 95 insertions(+), 52 deletions(-)

-- 
2.9.5
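
For illustration, here is a minimal user-space sketch of the spreading
idea. It is not the actual kernel/irq/affinity.c logic (which also
walks NUMA nodes when grouping CPUs); the CPU counts and names below
are made up for the demo:

	#include <stdio.h>
	#include <stdbool.h>

	#define NR_CPUS  8
	#define NR_VECS  4

	int main(void)
	{
		/* CPUs 4-7 are offline, matching the example above. */
		bool online[NR_CPUS] = { true, true, true, true,
					 false, false, false, false };
		int vec_of_cpu[NR_CPUS];
		int vec = 0;

		/* Pass 1: round-robin the online CPUs across all vectors,
		 * so every vector gets at least one CPU that can actually
		 * deliver interrupts today. */
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if (online[cpu])
				vec_of_cpu[cpu] = vec++ % NR_VECS;

		/* Pass 2: spread the remaining (offline but possible) CPUs
		 * too, so the mapping stays valid if they come online. */
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			if (!online[cpu])
				vec_of_cpu[cpu] = vec++ % NR_VECS;

		for (int v = 0; v < NR_VECS; v++) {
			printf("vector %d: cpus", v);
			for (int cpu = 0; cpu < NR_CPUS; cpu++)
				if (vec_of_cpu[cpu] == v)
					printf(" %d", cpu);
			printf("\n");
		}
		return 0;
	}

Compiled and run, this assigns each vector one online and one offline
CPU, analogous to case 2) above; the exact pairing of offline CPUs
differs from the table because the real code spreads within NUMA nodes.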