Date: Tue, 27 Feb 2018 08:13:11 -0700
From: Keith Busch
To: Jianchao Wang
Cc: axboe@fb.com, hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-pci: assign separate irq vectors for adminq and ioq0
Message-ID: <20180227151311.GD10832@localhost.localdomain>
References: <1519721177-2099-1-git-send-email-jianchao.w.wang@oracle.com>
In-Reply-To: <1519721177-2099-1-git-send-email-jianchao.w.wang@oracle.com>
On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
> Currently, adminq and ioq0 share the same irq vector. This is
> unfair for both adminq and ioq0.
>  - For adminq, its completion irq has to be bound on cpu0.
>  - For ioq0, when the irq fires for io completion, the adminq irq
>    action has to be checked also.

This change log could use some improvements. Why is it bad if the admin
interrupt's affinity is with cpu0?

Are you able to measure _any_ performance difference on IO queue 1 vs IO
queue 2 that you can attribute to IO queue 1's sharing vector 0?

> @@ -1945,11 +1947,11 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
>  	 * setting up the full range we need.
>  	 */
>  	pci_free_irq_vectors(pdev);
> -	nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
> -			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
> -	if (nr_io_queues <= 0)
> +	ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_io_queues + 1),
> +			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
> +	if (ret <= 0)
>  		return -EIO;
> -	dev->max_qid = nr_io_queues;
> +	dev->max_qid = ret - 1;

So controllers that have only legacy or single-message MSI don't get any
IO queues?