From: Thomas Gleixner
To: Nitesh Narayan Lal, linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
 linux-pci@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
 frederic@kernel.org, mtosatti@redhat.com, sassmann@redhat.com,
 jesse.brandeburg@intel.com, lihong.yang@intel.com, helgaas@kernel.org,
 jeffrey.t.kirsher@intel.com, jacob.e.keller@intel.com, jlelli@redhat.com,
 hch@infradead.org, bhelgaas@google.com, mike.marciniszyn@intel.com,
 dennis.dalessandro@intel.com, thomas.lendacky@amd.com, jiri@nvidia.com,
 mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
 vincent.guittot@linaro.org, lgoncalv@redhat.com
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
In-Reply-To: <3bca9eb1-a318-1fc6-9eee-aacc0293a193@redhat.com>
References: <20200928183529.471328-1-nitesh@redhat.com>
 <20200928183529.471328-5-nitesh@redhat.com>
 <87v9f57zjf.fsf@nanos.tec.linutronix.de>
 <3bca9eb1-a318-1fc6-9eee-aacc0293a193@redhat.com>
Date: Tue, 20 Oct 2020 20:07:23 +0200
Message-ID: <87lfg093fo.fsf@nanos.tec.linutronix.de>

On Tue, Oct 20 2020 at 12:18, Nitesh Narayan Lal wrote:
> On 10/20/20 10:16 AM, Thomas Gleixner wrote:
>> With the above change this will result in:
>>
>>    1 general interrupt which is freely movable by user space
>>    1 managed interrupt (possible affinity to all 16 CPUs, but routed
>>      to a housekeeping CPU as long as there is one online)
>>
>> So the device is now limited to a single queue, which also affects the
>> housekeeping CPUs because they now have to share a single queue.
>>
>> With larger machines this gets even worse.
>
> Yes, the change can impact performance; however, if we don't do it we
> may have a latency impact instead, specifically on larger systems where
> most of the CPUs are isolated, because we will definitely fail to move
> all of the IRQs away from the isolated CPUs to the housekeeping CPUs.

For non-managed interrupts I agree.

>> So no. This needs way more thought for managed interrupts, and you
>> cannot do that at the PCI layer.
>
> Maybe we should not be doing anything in the case of managed IRQs, as
> they are pinned to the housekeeping CPUs anyway as long as we have the
> 'managed_irq' option included in the kernel cmdline.

Exactly. For the PCI side this vector limiting has to be restricted to
the non-managed case.

>> Only the affinity spreading mechanism can do the right thing here.
>
> I can definitely explore this further.
>
> However, IMHO we would still need logic to prevent the devices from
> creating excess vectors.

Managed interrupts prevent exactly that by pinning the interrupts and
queues to one CPU or a set of CPUs, which avoids vector exhaustion on
CPU hotplug. Non-managed interrupts, yes, that is and always was a
problem; it is one of the reasons why managed interrupts exist.

Thanks,

        tglx