From: Nitesh Narayan Lal <nitesh@redhat.com>
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    linux-pci@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
    frederic@kernel.org, mtosatti@redhat.com, sassmann@redhat.com,
    jesse.brandeburg@intel.com, lihong.yang@intel.com, helgaas@kernel.org,
    nitesh@redhat.com, jeffrey.t.kirsher@intel.com, jacob.e.keller@intel.com,
    jlelli@redhat.com, hch@infradead.org, bhelgaas@google.com,
    mike.marciniszyn@intel.com, dennis.dalessandro@intel.com,
    thomas.lendacky@amd.com, jiri@nvidia.com, mingo@redhat.com,
    peterz@infradead.org, juri.lelli@redhat.com, vincent.guittot@linaro.org,
    lgoncalv@redhat.com
Subject: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping CPUs
Date: Mon, 28 Sep 2020 14:35:29 -0400
Message-Id: <20200928183529.471328-5-nitesh@redhat.com>
In-Reply-To: <20200928183529.471328-1-nitesh@redhat.com>
References: <20200928183529.471328-1-nitesh@redhat.com>

If we have isolated CPUs dedicated for use by real-time tasks, we try to
move IRQs to housekeeping CPUs from userspace to reduce latency overhead
on the isolated CPUs.

If we allocate too many IRQ vectors, moving them all to housekeeping CPUs
may exceed per-CPU vector limits.

When we have isolated CPUs, limit the number of vectors allocated by
pci_alloc_irq_vectors() to the minimum number required by the driver, or
to one per housekeeping CPU if that is larger.

Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
---
 drivers/pci/msi.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 30ae4ffda5c1..8c156867803c 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>
 
 #include "pci.h"
 
@@ -1191,8 +1192,25 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 				   struct irq_affinity *affd)
 {
 	struct irq_affinity msi_default_affd = {0};
+	unsigned int hk_cpus;
 	int nvecs = -ENOSPC;
 
+	hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ);
+
+	/*
+	 * If we have isolated CPUs for use by real-time tasks, to keep the
+	 * latency overhead to a minimum, device-specific IRQ vectors are moved
+	 * to the housekeeping CPUs from the userspace by changing their
+	 * affinity mask. Limit the vector usage to keep housekeeping CPUs from
+	 * running out of IRQ vectors.
+	 */
+	if (hk_cpus < num_online_cpus()) {
+		if (hk_cpus < min_vecs)
+			max_vecs = min_vecs;
+		else if (hk_cpus < max_vecs)
+			max_vecs = hk_cpus;
+	}
+
 	if (flags & PCI_IRQ_AFFINITY) {
 		if (!affd)
 			affd = &msi_default_affd;
-- 
2.18.2
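
To make the effect of the clamp concrete, here is a minimal standalone
userspace sketch of the same rule. The helper name clamp_max_vecs() and the
example CPU counts are illustrative only and are not part of the patch; the
kernel code above applies the same comparison to the driver's min_vecs/max_vecs
request using the housekeeping CPU count from housekeeping_num_online_cpus().

/*
 * Userspace sketch (not kernel code) mirroring the vector clamping added
 * to pci_alloc_irq_vectors_affinity() by this patch.
 */
#include <stdio.h>

static unsigned int clamp_max_vecs(unsigned int min_vecs,
				   unsigned int max_vecs,
				   unsigned int hk_cpus,
				   unsigned int online_cpus)
{
	/* No CPU isolation configured: leave the driver's request untouched. */
	if (hk_cpus >= online_cpus)
		return max_vecs;

	/*
	 * Fewer housekeeping CPUs than the driver's minimum: fall back to
	 * the minimum number of vectors the driver can work with.
	 */
	if (hk_cpus < min_vecs)
		return min_vecs;

	/* Otherwise cap the request at one vector per housekeeping CPU. */
	if (hk_cpus < max_vecs)
		return hk_cpus;

	return max_vecs;
}

int main(void)
{
	/* 24 online CPUs, 4 housekeeping CPUs (20 isolated): 1..32 -> 4 */
	printf("driver asks 1..32 -> %u vectors\n", clamp_max_vecs(1, 32, 4, 24));
	/* Driver needs at least 8 vectors, only 4 housekeeping CPUs: -> 8 */
	printf("driver asks 8..16 -> %u vectors\n", clamp_max_vecs(8, 16, 4, 24));
	/* No isolation (housekeeping == online): request passes through: -> 32 */
	printf("driver asks 1..32 -> %u vectors\n", clamp_max_vecs(1, 32, 24, 24));
	return 0;
}

With 4 housekeeping CPUs out of 24 online, a request for 1..32 vectors is
trimmed to 4; a driver that needs at least 8 vectors still gets its minimum of
8; and when no CPUs are isolated the driver's request is left unchanged.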