Date: Tue, 12 Feb 2019 08:49:23 -0700
From: Keith Busch
To: Ming Lei
Cc: Christoph Hellwig, Bjorn Helgaas, Thomas Gleixner, Jens Axboe,
	linux-block@vger.kernel.org, Sagi Grimberg,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-pci@vger.kernel.org
Subject: Re: [PATCH V2 3/4] nvme-pci: avoid irq allocation retrying via .calc_sets
Message-ID: <20190212154922.GA6176@localhost.localdomain>
References: <20190212130439.14501-1-ming.lei@redhat.com>
 <20190212130439.14501-4-ming.lei@redhat.com>
In-Reply-To: <20190212130439.14501-4-ming.lei@redhat.com>
User-Agent: Mutt/1.9.1 (2017-09-22)

On Tue, Feb 12, 2019 at 05:04:38AM -0800, Ming Lei wrote:
> Currently each set's vector count is pre-calculated, and this approach
> requires that the same 'max_vecs' and 'min_vecs' be passed to
> pci_alloc_irq_vectors_affinity(), so nvme_setup_irqs() has to retry on
> allocation failure.
>
> This usage & interface is a bit awkward because the retry should have
> been avoided by providing one reasonable 'min_vecs'.
>
> Implement the .calc_sets callback so that pci_alloc_irq_vectors_affinity()
> can calculate each set's vector count after the IRQ vectors are allocated
> and before the IRQs are spread; NVMe's retry on irq allocation failure
> can then be removed.
>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>

Thanks, Ming, this whole series looks like a great improvement for
drivers using irq sets.

Minor nit below. Otherwise you may add my review for the whole series
if you spin a v3 for the other minor comments.

Reviewed-by: Keith Busch

> +static void nvme_calc_irq_sets(struct irq_affinity *affd, int nvecs)
> +{
> +	struct nvme_dev *dev = affd->priv;
> +
> +	nvme_calc_io_queues(dev, nvecs);
> +
> +	affd->set_vectors[HCTX_TYPE_DEFAULT] = dev->io_queues[HCTX_TYPE_DEFAULT];
> +	affd->set_vectors[HCTX_TYPE_READ] = dev->io_queues[HCTX_TYPE_READ];
> +	affd->nr_sets = HCTX_TYPE_POLL;
> +}

The value of HCTX_TYPE_POLL happens to be 2, but that seems more of a
coincidence right now. Can we hard-code 2 in case the value changes?
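To make that concrete, the change I have in mind is just something like
this (an untested sketch of the nit, not a tested patch):

-	affd->nr_sets = HCTX_TYPE_POLL;
+	/*
+	 * Two vector sets: HCTX_TYPE_DEFAULT and HCTX_TYPE_READ. Poll
+	 * queues don't take an irq vector, so don't depend on the enum
+	 * value HCTX_TYPE_POLL staying equal to 2.
+	 */
+	affd->nr_sets = 2;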
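For anyone following the thread, here is roughly how a driver ends up
using the new interface. This is a sketch only, assuming the names as
posted in this series ('.calc_sets', 'set_vectors[]', 'nr_sets', which
may still change in v3); the 'foo_*' driver bits are made up for
illustration:

#include <linux/interrupt.h>
#include <linux/pci.h>

struct foo_dev {			/* hypothetical driver context */
	struct pci_dev *pdev;
	unsigned int nr_default;	/* default queue count, set here */
	unsigned int nr_read;		/* read queue count, set here */
};

/* Called back once the final vector count is known, before spreading. */
static void foo_calc_irq_sets(struct irq_affinity *affd, int nvecs)
{
	struct foo_dev *dev = affd->priv;

	/* Recompute queue counts for the vectors actually allocated. */
	dev->nr_read = nvecs / 2;
	dev->nr_default = nvecs - dev->nr_read;

	affd->set_vectors[0] = dev->nr_default;
	affd->set_vectors[1] = dev->nr_read;
	affd->nr_sets = 2;
}

static int foo_setup_irqs(struct foo_dev *dev, int max_vecs)
{
	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* e.g. one vector reserved for an admin queue */
		.calc_sets	= foo_calc_irq_sets,
		.priv		= dev,
	};

	/*
	 * With .calc_sets the set sizes are computed after allocation
	 * succeeds, so the driver no longer needs its own retry loop
	 * around this call.
	 */
	return pci_alloc_irq_vectors_affinity(dev->pdev, 2, max_vecs,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}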