Date: Thu, 24 Sep 2020 15:45:35 -0500
From: Bjorn Helgaas
To: Nitesh Narayan Lal
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    linux-pci@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    frederic@kernel.org, mtosatti@redhat.com, sassmann@redhat.com,
    jesse.brandeburg@intel.com, lihong.yang@intel.com,
    jeffrey.t.kirsher@intel.com, jacob.e.keller@intel.com,
    jlelli@redhat.com, hch@infradead.org, bhelgaas@google.com,
    mike.marciniszyn@intel.com, dennis.dalessandro@intel.com,
    thomas.lendacky@amd.com, jerinj@marvell.com, mathias.nyman@intel.com,
    jiri@nvidia.com, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org
Subject: Re: [PATCH v2 4/4] PCI: Limit pci_alloc_irq_vectors as per housekeeping CPUs
Message-ID: <20200924204535.GA2337207@bjorn-Precision-5520>
In-Reply-To: <20200923181126.223766-5-nitesh@redhat.com>

Possible subject:

  PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs

On Wed, Sep 23, 2020 at 02:11:26PM -0400, Nitesh Narayan Lal wrote:
> This patch limits the pci_alloc_irq_vectors() max_vecs argument that is
> passed on by the caller based on the housekeeping online CPUs (that are
> meant to perform managed IRQ jobs).
>
> The minimum of the max_vecs passed and the housekeeping online CPUs is
> derived to ensure that we don't create excess vectors, as that can be
> problematic specifically in an RT environment. In cases where min_vecs
> exceeds the number of housekeeping online CPUs, max_vecs is restricted
> based on min_vecs instead. The proposed change is required because in
> an RT environment unwanted IRQs are moved from isolated CPUs to the
> housekeeping CPUs to keep the latency overhead to a minimum. If the
> number of housekeeping CPUs is significantly lower than that of the
> isolated CPUs, we can run into failures while moving these IRQs to
> housekeeping CPUs due to the per-CPU vector limit.

Does this capture enough of the log?

  If we have isolated CPUs dedicated for use by real-time tasks, we
  try to move IRQs to housekeeping CPUs to reduce overhead on the
  isolated CPUs.

  If we allocate too many IRQ vectors, moving them all to housekeeping
  CPUs may exceed per-CPU vector limits.
  When we have isolated CPUs, limit the number of vectors allocated by
  pci_alloc_irq_vectors() to the minimum number required by the driver,
  or to one per housekeeping CPU if that is larger.

> Signed-off-by: Nitesh Narayan Lal
> ---
>  include/linux/pci.h | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
>
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 835530605c0d..cf9ca9410213 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -38,6 +38,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>
>  #include
> @@ -1797,6 +1798,20 @@ static inline int
>  pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
>  		      unsigned int max_vecs, unsigned int flags)
>  {
> +	unsigned int hk_cpus = hk_num_online_cpus();
> +
> +	/*
> +	 * For a real-time environment, try to be conservative and at max only
> +	 * ask for the same number of vectors as there are housekeeping online
> +	 * CPUs. In case, the min_vecs requested exceeds the housekeeping
> +	 * online CPUs, restrict the max_vecs based on the min_vecs instead.
> +	 */
> +	if (hk_cpus != num_online_cpus()) {
> +		if (min_vecs > hk_cpus)
> +			max_vecs = min_vecs;
> +		else
> +			max_vecs = min_t(int, max_vecs, hk_cpus);
> +	}

Is the below basically the same?

	/*
	 * If we have isolated CPUs for use by real-time tasks,
	 * minimize overhead on those CPUs by moving IRQs to the
	 * remaining "housekeeping" CPUs.  Limit vector usage to keep
	 * housekeeping CPUs from running out of IRQ vectors.
	 */
	if (housekeeping_cpus < num_online_cpus()) {
		if (housekeeping_cpus < min_vecs)
			max_vecs = min_vecs;
		else if (housekeeping_cpus < max_vecs)
			max_vecs = housekeeping_cpus;
	}

My comment isn't quite right because this patch only limits the number
of vectors; it doesn't actually *move* IRQs to the housekeeping CPUs.
I don't know where the move happens (or maybe you just avoid assigning
IRQs to isolated CPUs, and I don't know how that happens either).
> +
>  	return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags,
>  					      NULL);
>  }
> --
> 2.18.2
>