From: Zhu Lingshan
To: peterz@infradead.org, pbonzini@redhat.com
Cc: bp@alien8.de, seanjc@google.com, vkuznets@redhat.com, wanpengli@tencent.com,
    jmattson@google.com, joro@8bytes.org,
    weijiang.yang@intel.com, kan.liang@linux.intel.com, ak@linux.intel.com,
    wei.w.wang@intel.com, eranian@google.com, liuxiangdong5@huawei.com,
    linux-kernel@vger.kernel.org, x86@kernel.org, kvm@vger.kernel.org,
    like.xu.linux@gmail.com, Like Xu, Zhu Lingshan
Subject: [PATCH V7 15/18] KVM: x86/pmu: Disable guest PEBS temporarily in two rare situations
Date: Tue, 22 Jun 2021 17:43:03 +0800
Message-Id: <20210622094306.8336-16-lingshan.zhu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210622094306.8336-1-lingshan.zhu@intel.com>
References: <20210622094306.8336-1-lingshan.zhu@intel.com>

From: Like Xu

The guest PEBS will be disabled when some users try to perf KVM and its
user-space through the same PEBS facility, OR when the host perf doesn't
schedule the guest PEBS counter in a one-to-one mapping manner (neither
of these is a typical scenario).

The PEBS records in the guest DS buffer are still accurate, and the above
two restrictions will be checked before each VM-entry only if guest PEBS
is deemed to be enabled.

Suggested-by: Wei Wang
Signed-off-by: Like Xu
Signed-off-by: Zhu Lingshan
---
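A minimal user-space sketch of the decision made before VM-entry, for
illustration only: effective_guest_pebs(), struct mapping and the counter
layout below are invented names, and the real logic lives in
intel_guest_get_msrs() and intel_pmu_cross_mapped_check() in the diff that
follows. Guest PEBS is dropped entirely while the host itself uses the PEBS
facility; otherwise only the counters that host perf scheduled one-to-one
keep their PEBS_ENABLE bit.

#include <stdint.h>
#include <stdio.h>

/* One guest PEBS counter and the host counter perf actually assigned to it. */
struct mapping {
	int guest_idx;
	int host_idx;
};

static uint64_t effective_guest_pebs(uint64_t host_pebs_enable,
				     uint64_t hw_guest_pebs_enable,
				     const struct mapping *map, int nr)
{
	uint64_t cross_mapped_mask = 0;
	int i;

	/* Rare case 1: the host itself uses the PEBS facility -> no guest PEBS. */
	if (host_pebs_enable)
		return 0;

	/* Rare case 2: drop PEBS on counters that are not mapped one-to-one. */
	for (i = 0; i < nr; i++)
		if (map[i].guest_idx != map[i].host_idx)
			cross_mapped_mask |= 1ULL << map[i].host_idx;

	return hw_guest_pebs_enable & ~cross_mapped_mask;
}

int main(void)
{
	/* Guest counter 0 got host counter 0; guest counter 1 landed on host counter 3. */
	const struct mapping map[] = { { 0, 0 }, { 1, 3 } };

	/* PEBS_ENABLE prepared for the guest, expressed in host-counter bits 0 and 3. */
	printf("%#llx\n", (unsigned long long)effective_guest_pebs(0x0, 0x9, map, 2));
	/* Prints 0x1: only the one-to-one mapped counter keeps PEBS across VM-entry. */
	return 0;
}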
 arch/x86/events/intel/core.c    | 11 +++++++++--
 arch/x86/include/asm/kvm_host.h |  9 +++++++++
 arch/x86/kvm/vmx/pmu_intel.c    | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  4 ++++
 arch/x86/kvm/vmx/vmx.h          |  1 +
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 22386c1a32b4..8bf494f8af3e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3970,8 +3970,15 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
 		.guest = pebs_mask & ~cpuc->intel_ctrl_host_mask,
 	};
 
-	/* Set hw GLOBAL_CTRL bits for PEBS counter when it runs for guest */
-	arr[0].guest |= arr[*nr].guest;
+	if (arr[*nr].host) {
+		/* Disable guest PEBS if host PEBS is enabled. */
+		arr[*nr].guest = 0;
+	} else {
+		/* Disable guest PEBS for cross-mapped PEBS counters. */
+		arr[*nr].guest &= ~pmu->host_cross_mapped_mask;
+		/* Set hw GLOBAL_CTRL bits for PEBS counter when it runs for guest */
+		arr[0].guest |= arr[*nr].guest;
+	}
 
 	++(*nr);
 	return arr;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ef22a742649b..e21989650d2a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -479,6 +479,15 @@ struct kvm_pmu {
 	u64 pebs_data_cfg;
 	u64 pebs_data_cfg_mask;
 
+	/*
+	 * If a guest counter is cross-mapped to host counter with different
+	 * index, its PEBS capability will be temporarily disabled.
+	 *
+	 * The user should make sure that this mask is updated
+	 * after disabling interrupts and before perf_guest_get_msrs();
+	 */
+	u64 host_cross_mapped_mask;
+
 	/*
 	 * The gate to release perf_events not marked in
 	 * pmc_in_use only once in a vcpu time slice.
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 296246bf253d..28152d7fd12d 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -770,6 +770,25 @@ static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 	intel_pmu_release_guest_lbr_event(vcpu);
 }
 
+void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
+{
+	struct kvm_pmc *pmc = NULL;
+	int bit;
+
+	for_each_set_bit(bit, (unsigned long *)&pmu->global_ctrl,
+			 X86_PMC_IDX_MAX) {
+		pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, bit);
+
+		if (!pmc || !pmc_speculative_in_use(pmc) ||
+		    !pmc_is_enabled(pmc))
+			continue;
+
+		if (pmc->perf_event && (pmc->idx != pmc->perf_event->hw.idx))
+			pmu->host_cross_mapped_mask |=
+				BIT_ULL(pmc->perf_event->hw.idx);
+	}
+}
+
 struct kvm_pmu_ops intel_pmu_ops = {
 	.find_arch_event = intel_find_arch_event,
 	.find_fixed_event = intel_find_fixed_event,
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3930e89679fc..299f75747936 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6597,6 +6597,10 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
 	struct perf_guest_switch_msr *msrs;
 	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
 
+	pmu->host_cross_mapped_mask = 0;
+	if (pmu->pebs_enable & pmu->global_ctrl)
+		intel_pmu_cross_mapped_check(pmu);
+
 	/* Note, nr_msrs may be garbage if perf_guest_get_msrs() returns NULL. */
 	msrs = perf_guest_get_msrs(&nr_msrs, (void *)pmu);
 	if (!msrs)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 16e4e457ba23..72f1175e474b 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -96,6 +96,7 @@ union vmx_exit_reason {
 #define vcpu_to_lbr_desc(vcpu) (&to_vmx(vcpu)->lbr_desc)
 #define vcpu_to_lbr_records(vcpu) (&to_vmx(vcpu)->lbr_desc.records)
 
+void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
 bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu);
 bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
-- 
2.27.0