From: kan.liang@linux.intel.com
To: joro@8bytes.org, will@kernel.org, baolu.lu@linux.intel.com,
	dwmw2@infradead.org, robin.murphy@arm.com, robert.moore@intel.com,
	rafael.j.wysocki@intel.com, lenb@kernel.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org
Cc: Kan Liang <kan.liang@linux.intel.com>
Subject: [PATCH 5/7] iommu/vt-d: Support cpumask for IOMMU perfmon
Date: Wed, 11 Jan 2023 12:25:02 -0800
Message-Id: <20230111202504.378258-6-kan.liang@linux.intel.com>
In-Reply-To: <20230111202504.378258-1-kan.liang@linux.intel.com>
References: <20230111202504.378258-1-kan.liang@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Kan Liang <kan.liang@linux.intel.com>

The perf subsystem assumes that all counters are, by default, per-CPU,
so the user-space tool reads a counter from each CPU. However, the
IOMMU counters are system-wide and can be read from any CPU.

Use a CPU mask to restrict counting to a single CPU, with a CPU hotplug
notifier that selects a different CPU if the chosen one is taken
offline. The chosen CPU is exposed in
/sys/bus/event_source/devices/dmar*/cpumask for the user-space perf
tool.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 .../ABI/testing/sysfs-bus-event_source-devices-iommu |   8 ++
 drivers/iommu/intel/perfmon.c                        | 113 ++++++++++++++++--
 include/linux/cpuhotplug.h                           |   1 +
 3 files changed, 115 insertions(+), 7 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-event_source-devices-iommu b/Documentation/ABI/testing/sysfs-bus-event_source-devices-iommu
index 04e08851d8e6..3519954fe713 100644
--- a/Documentation/ABI/testing/sysfs-bus-event_source-devices-iommu
+++ b/Documentation/ABI/testing/sysfs-bus-event_source-devices-iommu
@@ -22,3 +22,11 @@ Description:	Read-only. Attribute group to describe the magic bits
 		filter_pasid      = "config1:32-53" - PASID filter
 		filter_ats        = "config2:0-4"   - Address Type filter
 		filter_page_table = "config2:8-12"  - Page Table Level filter
+
+What:		/sys/bus/event_source/devices/dmar*/cpumask
+Date:		Jan 2023
+KernelVersion:	6.3
+Contact:	Kan Liang <kan.liang@linux.intel.com>
+Description:	Read-only. This file always returns the CPU to which the
+		IOMMU pmu is bound for access to all IOMMU pmu
+		performance monitoring events.
diff --git a/drivers/iommu/intel/perfmon.c b/drivers/iommu/intel/perfmon.c
index 43a5075eaecd..f332232bb345 100644
--- a/drivers/iommu/intel/perfmon.c
+++ b/drivers/iommu/intel/perfmon.c
@@ -32,9 +32,30 @@ static struct attribute_group iommu_pmu_events_attr_group = {
 	.attrs = attrs_empty,
 };
 
+static cpumask_t iommu_pmu_cpu_mask;
+
+static ssize_t iommu_pmu_cpumask_show(struct device *dev,
+				      struct device_attribute *attr,
+				      char *buf)
+{
+	return cpumap_print_to_pagebuf(true, buf, &iommu_pmu_cpu_mask);
+}
+
+static DEVICE_ATTR(cpumask, S_IRUGO, iommu_pmu_cpumask_show, NULL);
+
+static struct attribute *iommu_pmu_cpumask_attrs[] = {
+	&dev_attr_cpumask.attr,
+	NULL
+};
+
+static struct attribute_group iommu_pmu_cpumask_attr_group = {
+	.attrs = iommu_pmu_cpumask_attrs,
+};
+
 static const struct attribute_group *iommu_pmu_attr_groups[] = {
 	&iommu_pmu_format_attr_group,
 	&iommu_pmu_events_attr_group,
+	&iommu_pmu_cpumask_attr_group,
 	NULL
 };
 
@@ -637,19 +658,97 @@ void free_iommu_pmu(struct intel_iommu *iommu)
 	iommu->pmu = NULL;
 }
 
+static int iommu_pmu_cpu_online(unsigned int cpu)
+{
+	if (cpumask_empty(&iommu_pmu_cpu_mask))
+		cpumask_set_cpu(cpu, &iommu_pmu_cpu_mask);
+
+	return 0;
+}
+
+static int iommu_pmu_cpu_offline(unsigned int cpu)
+{
+	struct dmar_drhd_unit *drhd;
+	struct intel_iommu *iommu;
+	int target;
+
+	if (!cpumask_test_and_clear_cpu(cpu, &iommu_pmu_cpu_mask))
+		return 0;
+
+	target = cpumask_any_but(cpu_online_mask, cpu);
+
+	if (target < nr_cpu_ids)
+		cpumask_set_cpu(target, &iommu_pmu_cpu_mask);
+	else
+		target = -1;
+
+	rcu_read_lock();
+
+	for_each_iommu(iommu, drhd) {
+		if (!iommu->pmu)
+			continue;
+		perf_pmu_migrate_context(&iommu->pmu->pmu, cpu, target);
+	}
+	rcu_read_unlock();
+
+	return 0;
+}
+
+static int nr_iommu_pmu;
+
+static int iommu_pmu_cpuhp_setup(struct iommu_pmu *iommu_pmu)
+{
+	int ret;
+
+	if (nr_iommu_pmu++)
+		return 0;
+
+	ret = cpuhp_setup_state(CPUHP_AP_PERF_X86_IOMMU_PERF_ONLINE,
+				"driver/iommu/intel/perfmon:online",
+				iommu_pmu_cpu_online,
+				iommu_pmu_cpu_offline);
+	if (ret)
+		nr_iommu_pmu = 0;
+
+	return ret;
+}
+
+static void iommu_pmu_cpuhp_free(struct iommu_pmu *iommu_pmu)
+{
+	if (--nr_iommu_pmu)
+		return;
+
+	cpuhp_remove_state(CPUHP_AP_PERF_X86_IOMMU_PERF_ONLINE);
+}
+
 void iommu_pmu_register(struct intel_iommu *iommu)
 {
-	if (!iommu->pmu)
+	struct iommu_pmu *iommu_pmu = iommu->pmu;
+
+	if (!iommu_pmu)
 		return;
 
-	if (__iommu_pmu_register(iommu)) {
-		pr_err("Failed to register PMU for iommu (seq_id = %d)\n",
-		       iommu->seq_id);
-	}
+	if (__iommu_pmu_register(iommu))
+		goto err;
+
+	if (iommu_pmu_cpuhp_setup(iommu_pmu))
+		goto unregister;
+
+	return;
+
+unregister:
+	perf_pmu_unregister(&iommu_pmu->pmu);
+err:
+	pr_err("Failed to register PMU for iommu (seq_id = %d)\n", iommu->seq_id);
 }
 
 void iommu_pmu_unregister(struct intel_iommu *iommu)
 {
-	if (iommu->pmu)
-		perf_pmu_unregister(&iommu->pmu->pmu);
+	struct iommu_pmu *iommu_pmu = iommu->pmu;
+
+	if (!iommu_pmu)
+		return;
+
+	iommu_pmu_cpuhp_free(iommu_pmu);
+	perf_pmu_unregister(&iommu_pmu->pmu);
 }
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6c6859bfc454..f2ea348ce3b0 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -221,6 +221,7 @@ enum cpuhp_state {
 	CPUHP_AP_PERF_X86_CQM_ONLINE,
 	CPUHP_AP_PERF_X86_CSTATE_ONLINE,
 	CPUHP_AP_PERF_X86_IDXD_ONLINE,
+	CPUHP_AP_PERF_X86_IOMMU_PERF_ONLINE,
 	CPUHP_AP_PERF_S390_CF_ONLINE,
 	CPUHP_AP_PERF_S390_SF_ONLINE,
 	CPUHP_AP_PERF_ARM_CCI_ONLINE,
-- 
2.35.1