Date: Wed, 13 Feb 2019 15:37:11 -0700
From: Keith Busch
To: Thomas Gleixner
Cc: Bjorn Helgaas, Jens Axboe, Sagi Grimberg, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	Ming Lei, linux-block@vger.kernel.org, Christoph Hellwig
Subject: Re: [PATCH V3 1/5] genirq/affinity: don't mark 'affd' as const
Message-ID: <20190213223711.GC8027@localhost.localdomain>
References: <20190213105041.13537-1-ming.lei@redhat.com>
 <20190213105041.13537-2-ming.lei@redhat.com>
 <20190213150407.GB96272@google.com>
 <20190213213149.GB8027@localhost.localdomain>

On Wed, Feb 13, 2019 at 10:41:55PM +0100, Thomas Gleixner wrote:
> Btw, while I have your attention. There popped up an issue recently related
> to that affinity logic.
>
> The current implementation fails when:
>
>	/*
>	 * If there aren't any vectors left after applying the pre/post
>	 * vectors don't bother with assigning affinity.
>	 */
>	if (nvecs == affd->pre_vectors + affd->post_vectors)
>		return NULL;
>
> Now the discussion arose, that in that case the affinity sets are not
> allocated and filled in for the pre/post vectors, but somehow the
> underlying device still works and later on triggers the warning in the
> blk-mq code because the MSI entries do not have affinity information
> attached.
>
> Sure, we could make that work, but there are several issues:
>
>   1) irq_create_affinity_masks() has another reason to return NULL:
>      memory allocation fails.
>
>   2) Does it make sense at all?
>
> Right now the PCI allocator ignores the NULL return and proceeds without
> setting any affinities. As a consequence nothing is managed and everything
> happens to work.
>
> But that it happens to work is more by chance than by design, and the
> warning is bogus if this is an expected mode of operation.
>
> We should address these points in some way.

Ah, yes, that's a mistake in the nvme driver. It is assuming IO queues are
always on managed interrupts, but that's not true when only 1 vector could
be allocated. This should be an appropriate fix to the warning:

---
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 022ea1ee63f8..f2ccebe1c926 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -506,7 +506,7 @@ static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
 		 * affinity), so use the regular blk-mq cpu mapping
 		 */
 		map->queue_offset = qoff;
-		if (i != HCTX_TYPE_POLL)
+		if (i != HCTX_TYPE_POLL && dev->num_vecs > 1)
 			blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
 		else
 			blk_mq_map_queues(map);
--
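
For illustration, the condition the hunk adds can be read as a small
standalone predicate. The sketch below is not kernel code (the enum and
the helper name are invented for the example); it only shows when the
managed PCI/MSI affinity mapping applies versus the regular blk-mq cpu
mapping fallback:

/*
 * Standalone sketch, not kernel code: the enum and helper below are
 * illustrative only. They mirror the condition added by the hunk above.
 */
#include <stdbool.h>
#include <stdio.h>

enum hctx_type { HCTX_TYPE_DEFAULT, HCTX_TYPE_READ, HCTX_TYPE_POLL };

/*
 * Use the PCI/MSI affinity mapping only for interrupt-driven queue maps
 * on a device that actually got more than one vector to spread.
 */
static bool use_managed_affinity_map(enum hctx_type type, unsigned int num_vecs)
{
	return type != HCTX_TYPE_POLL && num_vecs > 1;
}

int main(void)
{
	/* Single vector: fall back to the regular cpu mapping (prints 0). */
	printf("%d\n", use_managed_affinity_map(HCTX_TYPE_DEFAULT, 1));
	/* Multiple vectors, non-poll map: managed mapping (prints 1). */
	printf("%d\n", use_managed_affinity_map(HCTX_TYPE_DEFAULT, 4));
	/* Poll map never has an IRQ, so never managed (prints 0). */
	printf("%d\n", use_managed_affinity_map(HCTX_TYPE_POLL, 4));
	return 0;
}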