Date: Tue, 5 Jan 2021 13:11:18 -0800
From: Sean Christopherson
To: Like Xu
Cc: Peter Zijlstra, Paolo Bonzini, eranian@google.com, kvm@vger.kernel.org,
    Ingo Molnar, Thomas Gleixner, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, Andi Kleen, Kan Liang, wei.w.wang@intel.com,
    luwei.kang@intel.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 06/17] KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR emulation for extended PEBS
Message-ID:
References: <20210104131542.495413-1-like.xu@linux.intel.com>
 <20210104131542.495413-7-like.xu@linux.intel.com>
In-Reply-To: <20210104131542.495413-7-like.xu@linux.intel.com>

On Mon, Jan 04, 2021, Like Xu wrote:
> If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, the
> IA32_PEBS_ENABLE MSR exists and all architecturally enumerated fixed
> and general-purpose counters have corresponding bits in IA32_PEBS_ENABLE
> that enable generation of PEBS records.  The general-purpose counter bits
> start at bit IA32_PEBS_ENABLE[0], and the fixed counter bits start at
> bit IA32_PEBS_ENABLE[32].
>
> When guest PEBS is enabled, the IA32_PEBS_ENABLE MSR is added to the
> perf_guest_switch_msr() list and switched across VMX transitions, just
> like the CORE_PERF_GLOBAL_CTRL MSR.
>
> Originally-by: Andi Kleen
> Co-developed-by: Kan Liang
> Signed-off-by: Kan Liang
> Co-developed-by: Luwei Kang
> Signed-off-by: Luwei Kang
> Signed-off-by: Like Xu
> ---
>  arch/x86/events/intel/core.c     | 20 ++++++++++++++++++++
>  arch/x86/include/asm/kvm_host.h  |  1 +
>  arch/x86/include/asm/msr-index.h |  6 ++++++
>  arch/x86/kvm/vmx/pmu_intel.c     | 28 ++++++++++++++++++++++++++++
>  4 files changed, 55 insertions(+)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index af457f8cb29d..6453b8a6834a 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3715,6 +3715,26 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
>                  *nr = 2;
>          }
>
> +        if (cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask) {
> +                arr[1].msr = MSR_IA32_PEBS_ENABLE;
> +                arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
> +                arr[1].guest = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
> +                /*
> +                 * Guest PEBS is disabled once host PEBS is enabled, since
> +                 * enabling both could inject an unexpected PMI into the host
> +                 * and cause the guest's PEBS overflow PMI to be missed.
> +                 */
> +                if (arr[1].host)
> +                        arr[1].guest = 0;
> +                arr[0].guest |= arr[1].guest;

Can't you modify the code that strips the PEBS counters from the guest's
value instead of poking into the array entry after the fact?

Also, why is this scenario even allowed?  Can't we force exclude_guest for
events that use PEBS?

> +                *nr = 2;
> +        } else if (*nr == 1) {
> +                /* Remove MSR_IA32_PEBS_ENABLE from the MSR switch list in KVM */
> +                arr[1].msr = MSR_IA32_PEBS_ENABLE;
> +                arr[1].host = arr[1].guest = 0;
> +                *nr = 2;

Similar to above, rather than checking "*nr == 1", this should properly
integrate with the "x86_pmu.pebs && x86_pmu.pebs_no_isolation" logic
instead of poking into the array after the fact.

Incorporating both suggestions streamlines the logic significantly and,
IMO, makes the overall flow much more understandable.  Untested...

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index d4569bfa83e3..c5cc7e558c8e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3708,24 +3708,39 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
         arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
         arr[0].host = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
         arr[0].guest = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_host_mask;
-        if (x86_pmu.flags & PMU_FL_PEBS_ALL)
-                arr[0].guest &= ~cpuc->pebs_enabled;
-        else
-                arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+
+        /*
+         * Disable PEBS in the guest if PEBS is used by the host; enabling PEBS
+         * in both will lead to unexpected PMIs in the host and/or missed PMIs
+         * in the guest.
+         */
+        if (cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask) {
+                if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+                        arr[0].guest &= ~cpuc->pebs_enabled;
+                else
+                        arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+        }
         *nr = 1;

-        if (x86_pmu.pebs && x86_pmu.pebs_no_isolation) {
-                /*
-                 * If PMU counter has PEBS enabled it is not enough to
-                 * disable counter on a guest entry since PEBS memory
-                 * write can overshoot guest entry and corrupt guest
-                 * memory. Disabling PEBS solves the problem.
-                 *
-                 * Don't do this if the CPU already enforces it.
-                 */
+        if (x86_pmu.pebs) {
                 arr[1].msr = MSR_IA32_PEBS_ENABLE;
-                arr[1].host = cpuc->pebs_enabled;
-                arr[1].guest = 0;
+                arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
+
+                /*
+                 * Host and guest PEBS are mutually exclusive.  Load the guest
+                 * value iff PEBS is disabled in the host.  If PEBS is enabled
+                 * in the host and the CPU supports PEBS isolation, disabling
+                 * the counters is sufficient (see above); skip the MSR loads
+                 * by stuffing guest=host (KVM will remove the entry).  Without
+                 * isolation, PEBS must be explicitly disabled prior to
+                 * VM-Enter to prevent PEBS writes from overshooting VM-Enter.
+                 */
+                if (!arr[1].host)
+                        arr[1].guest = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
+                else if (x86_pmu.pebs_no_isolation)
+                        arr[1].guest = 0;
+                else
+                        arr[1].guest = arr[1].host;
                 *nr = 2;
         }
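
For reference, the selection rules in the suggested diff boil down to the
decision tree below.  This is only an illustrative, standalone sketch with
made-up names: host_pebs/guest_pebs stand in for cpuc->pebs_enabled masked by
the intel_ctrl_*_mask fields, and has_isolation corresponds to
!x86_pmu.pebs_no_isolation.  It is not code from either patch.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the host/guest IA32_PEBS_ENABLE selection flow suggested above.
 * host_pebs:  PEBS_ENABLE bits owned by host-side events.
 * guest_pebs: PEBS_ENABLE bits that may be exposed to the guest.
 */
static void pebs_switch_values(uint64_t host_pebs, uint64_t guest_pebs,
                               bool has_isolation,
                               uint64_t *host_val, uint64_t *guest_val)
{
        *host_val = host_pebs;

        if (!host_pebs) {
                /* Host is not using PEBS: safe to load the guest's value. */
                *guest_val = guest_pebs;
        } else if (!has_isolation) {
                /*
                 * No PEBS isolation: PEBS must be off across VM-Enter so a
                 * straggling PEBS write cannot land in guest memory.
                 */
                *guest_val = 0;
        } else {
                /*
                 * Isolation already keeps PEBS out of the guest; stuff
                 * guest == host so there is nothing left to switch.
                 */
                *guest_val = host_pebs;
        }
}

The guest == host case corresponds to the "KVM will remove the entry"
behavior noted in the comment above: when both values are equal there is
nothing to switch, so the MSR load/store pair can be dropped entirely.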