Date: Fri, 23 Jun 2023 13:38:12 -0700
In-Reply-To: <20230616113353.45202-4-xiong.y.zhang@intel.com>
References: <20230616113353.45202-1-xiong.y.zhang@intel.com>
 <20230616113353.45202-4-xiong.y.zhang@intel.com>
Subject: Re: [PATCH 3/4] KVM: VMX/pmu: Enable inactive vLBR event in guest LBR MSR emulation
From: Sean Christopherson
To: Xiong Zhang
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com,
 peterz@infradead.org, like.xu.linux@gmail.com, kan.liang@linux.intel.com,
 zhenyuw@linux.intel.com, zhiyuan.lv@intel.com

On Fri, Jun 16, 2023, Xiong Zhang wrote:
> vLBR event could be inactive in two case:
> a. host per cpu pinned LBR event occupy LBR when vLBR event is created
> b. vLBR event is preempted by host per cpu pinned LBR event during vm
> exit handler.
> When vLBR event is inactive, guest couldn't access LBR msr, and it is
> forced into error state and is excluded from schedule by perf scheduler.
> So vLBR event couldn't be active through perf scheduler even if host per
> cpu pinned LBR event has released LBR, kvm could enable vLBR event
> proactively, then vLBR event may be active and LBR msr could be passthrough
> into guest.
>
> Signed-off-by: Xiong Zhang
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 741efe2c497b..5a3ab8c8711b 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -314,7 +314,16 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
>  	if (!intel_pmu_is_valid_lbr_msr(vcpu, index))
>  		return false;
>
> -	if (!lbr_desc->event && intel_pmu_create_guest_lbr_event(vcpu) < 0)
> +	/* vLBR event may be inactive, but physical LBR may be free now.

	/*
	 * This is the preferred block comment style.
	 */

> +	 * but vLBR event is pinned event, once it is inactive state, perf
> +	 * will force it to error state in merge_sched_in() and exclude it from
> +	 * perf schedule, so even if LBR is free now, vLBR event couldn't be active
> +	 * through perf scheduler and vLBR event could be active through
> +	 * perf_event_enable().
> +	 */

Trimming that down, is this what you mean?

	/*
	 * Attempt to re-enable the vLBR event if it was disabled due to
	 * contention with host LBR usage, i.e. was put into an error state.
	 * Perf doesn't notify KVM if the host stops using LBRs, i.e. KVM needs
	 * to manually re-enable the event.
	 */

Which begs the question, why can't there be a notification of some form that the
LBRs are once again available?

Assuming that's too difficult for whatever reason, why wait until the guest tries
to read LBRs?  E.g. why not be more aggressive and try to re-enable vLBRs on every
VM-Exit.

And if we do wait until the guest touches relevant MSRs, shouldn't writes to
DEBUG_CTL that set DEBUGCTLMSR_LBR also try to re-enable the event?

Lastly, what guarantees that the MSRs hold guest data?  I assume perf purges the
MSRs at some point, but it would be helpful to call that out in the changelog.

> +	if (lbr_desc->event && (lbr_desc->event->state == PERF_EVENT_STATE_ERROR))
> +		perf_event_enable(lbr_desc->event);
> +	else if (!lbr_desc->event && intel_pmu_create_guest_lbr_event(vcpu) < 0)
>  		goto dummy;
>
>  	/*
> --
> 2.25.1
>
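
If the DEBUG_CTL angle is worth handling too, a completely untested sketch of
what hoisting the re-enable into a helper could look like (the helper name is
invented here purely for illustration, it doesn't exist in the tree):

	/*
	 * Illustrative sketch, not from the patch: re-enable the guest LBR
	 * event if perf put it into an error state due to contention with
	 * host LBR usage.  Could be called from the LBR MSR read/write path
	 * and from the DEBUG_CTL write path when the guest sets
	 * DEBUGCTLMSR_LBR.
	 */
	static void intel_pmu_maybe_reenable_lbr_event(struct kvm_vcpu *vcpu)
	{
		struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);

		if (lbr_desc->event &&
		    lbr_desc->event->state == PERF_EVENT_STATE_ERROR)
			perf_event_enable(lbr_desc->event);
	}

That would also keep the "event exists but is in an error state" handling in
one place instead of growing intel_pmu_handle_lbr_msrs_access() for each new
call site.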