Date: Wed, 13 Feb 2019 22:26:45 +0100 (CET)
From: Thomas Gleixner
To: Bjorn Helgaas
Cc: Ming Lei, Christoph Hellwig, Jens Axboe, linux-block@vger.kernel.org,
    Sagi Grimberg, linux-nvme@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, Keith Busch
Subject: Re: [PATCH V3 4/5] nvme-pci: avoid irq allocation retrying via .calc_sets
In-Reply-To: <20190213151339.GE96272@google.com>
References: <20190213105041.13537-1-ming.lei@redhat.com> <20190213105041.13537-5-ming.lei@redhat.com> <20190213151339.GE96272@google.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 13 Feb 2019, Bjorn Helgaas wrote:
> On Wed, Feb 13, 2019 at 06:50:40PM +0800, Ming Lei wrote:
> > Currently pre-caculate each set vectors, and this way requires same
> > 'max_vecs' and 'min_vecs' passed to pci_alloc_irq_vectors_affinity(),
> > then nvme_setup_irqs() has to retry in case of allocation failure.
>
> s/pre-caculate/precalculate/
>
> My usual "set vectors" question as on other patches.
>
> > This usage & interface is a bit awkward because the retry should have
> > been avoided by providing one reasonable 'min_vecs'.
> >
> > Implement the callback of .calc_sets, so that pci_alloc_irq_vectors_affinity()
> > can calculate each set's vector after IRQ vectors is allocated and
> > before spread IRQ, then NVMe's retry in case of irq allocation failure
> > can be removed.
>
> s/irq/IRQ/

Let me rephrase that thing as well:

Subject: nvme-pci: Simplify interrupt allocation

The NVME PCI driver contains a tedious mechanism for interrupt
allocation, which is necessary to adjust the number and size of interrupt
sets to the maximum available number of interrupts, which depends on the
underlying PCI capabilities and the available CPU resources.

It works around the former shortcomings of the PCI and core interrupt
allocation mechanisms in combination with interrupt sets.

The PCI interrupt allocation function allows a maximum and a minimum
number of interrupts to be provided and tries to allocate as many as
possible. This worked without driver interaction as long as there was
only a single set of interrupts to handle.

With the addition of support for multiple interrupt sets in the generic
affinity spreading logic, which is invoked from the PCI interrupt
allocation, the adaptive loop in the PCI interrupt allocation did not
work for multiple interrupt sets.
The reason is that, depending on the total number of interrupts which the
PCI allocation adaptive loop tries to allocate in each step, the number
and the size of the interrupt sets need to be adapted as well. Due to the
way the interrupt sets support was implemented, there was no way for the
PCI interrupt allocation code or the core affinity spreading mechanism to
invoke a driver-specific function for adapting the interrupt sets
configuration.

As a consequence the driver had to implement another adaptive loop around
the PCI interrupt allocation function, calling it with maximum and
minimum interrupts set to the same value. This ensured that the
allocation either succeeded or immediately failed without any attempt to
adjust the number of interrupts in the PCI code.

The core code now allows drivers to provide a callback to recalculate the
number and the size of interrupt sets during PCI interrupt allocation,
which in turn allows the PCI interrupt allocation function to be called
in the same way as with a single set of interrupts. The PCI code handles
the adaptive loop and the interrupt affinity spreading mechanism invokes
the driver callback to adapt the interrupt set configuration to the
current loop value. This replaces the adaptive loop in the driver
completely.

Implement the NVME specific callback which adjusts the interrupt set
configuration and remove the adaptive allocation loop.

Thanks,

	tglx