Subject: Re: [PATCH kvm/queue v2 2/3] perf: x86/core: Add interface to query perfmon_event_map[] directly
Date: Wed, 16 Feb 2022 15:30:50 +0800
From: Like Xu
Organization: Tencent
To: "Liang, Kan", Jim Mattson
Cc: David Dunn, Dave Hansen, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Stephane Eranian

On 11/2/2022 3:46 am, Liang, Kan wrote:
>
> On 2/10/2022 2:16 PM, Jim Mattson wrote:
>> On Thu, Feb 10, 2022 at 10:30 AM Liang, Kan wrote:
>>>
>>> On 2/10/2022 11:34 AM, Jim Mattson wrote:
>>>> On Thu, Feb 10, 2022 at 7:34 AM Liang, Kan wrote:
>>>>>
>>>>> On 2/9/2022 2:24 PM, David Dunn wrote:
>>>>>> Dave,
>>>>>>
>>>>>> In my opinion, the right policy depends on what the host owner and guest owner are trying to achieve.
>>>>>>
>>>>>> If the PMU is being used to locate places where performance could be improved in the system, there are two sub-scenarios:
>>>>>>   - The host and guest are owned by the same entity that is optimizing the overall system. In this case, the guest doesn't need PMU access, and better information is provided by profiling the entire system from the host.
>>>>>>   - The host and guest are owned by different entities. In this case, profiling from the host can identify perf issues in the guest. But what action can be taken? The host entity must communicate issues back to the guest owner through some sort of out-of-band information channel. On the other hand, preempting the host PMU to give the guest a fully functional PMU serves this use case well.
>>>>>>
>>>>>> TDX and SGX (outside of debug mode) strongly assume different entities. And Intel is doing this to reduce the host's insight into guest operations. So in my opinion, preemption makes sense.
>>>>>>
>>>>>> There are also scenarios where the host owner is trying to identify systemwide impacts of guest actions. For example, detecting memory bandwidth consumption or split locks. In this case, host control without preemption is necessary.
>>>>>>
>>>>>> To address these various scenarios, it seems like the host needs to be able to have policy control on whether it is willing to have the PMU preempted by the guest.
>>>>>>
>>>>>> But I don't see what scenario is well served by the current situation in KVM. Currently the guest will either be told it has no PMU (which is fine) or that it has full control of a PMU. If the guest is told it has full control of the PMU, it actually doesn't. But instead of losing counters on well-defined events (from the guest's perspective), they simply stop counting depending on what the host is doing with the PMU.
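For comparison, an ordinary host-side perf user can at least see that its event was descheduled, via the scaling fields returned by read(); that is exactly the signal a guest never receives today. A minimal userspace sketch of where those fields live (the event choice is arbitrary, and this is only an illustration, not part of this series):

/*
 * Sketch: a host perf user can detect that its counter did not run the
 * whole time (multiplexed or otherwise descheduled) from the scaling
 * fields. The event choice (HW cycles) is arbitrary.
 */
#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

struct counter_read {
	uint64_t value;		/* layout: value, time_enabled, time_running */
	uint64_t time_enabled;
	uint64_t time_running;
};

int main(void)
{
	struct perf_event_attr attr;
	struct counter_read cr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
			   PERF_FORMAT_TOTAL_TIME_RUNNING;

	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	sleep(1);	/* stand-in for the measured workload */

	if (read(fd, &cr, sizeof(cr)) != sizeof(cr)) {
		perror("read");
		return 1;
	}

	if (cr.time_running < cr.time_enabled)
		printf("counter only ran %llu of %llu enabled ns (multiplexed)\n",
		       (unsigned long long)cr.time_running,
		       (unsigned long long)cr.time_enabled);
	else
		printf("counter ran the whole time, count = %llu\n",
		       (unsigned long long)cr.value);
	close(fd);
	return 0;
}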
>>>>>
>>>>> For the current perf subsystem, a PMU should be shared among different users via the multiplexing mechanism if the resource is limited. No one has full control of a PMU for its lifetime. A user can only have the PMU for its given period. I think the user can understand how long it runs via total_time_enabled and total_time_running.
>>>>
>>>> For most clients, yes. For KVM, no. KVM currently tosses total_time_enabled and total_time_running in the bitbucket. It could extrapolate, but that would result in loss of precision. Some guest uses of the PMU would not be able to cope (e.g. https://github.com/rr-debugger/rr).
>>>>
>>>>> For a guest, it should rely on the host to tell whether the PMU resource is available. But unfortunately, I don't think we have such a notification mechanism in KVM. The guest has the wrong impression that it can have full control of the PMU.
>>>>
>>>> That is the only impression that the architectural specification allows the guest to have. On Intel, we can mask off individual fixed counters, and we can reduce the number of GP counters, but AMD offers us no such freedom. Whatever resources we advertise to the guest must

The future may look a little better, with more and more server hardware being designed with virtualization requirements in mind.

>>>> be available for its use whenever it wants. Otherwise, PMU virtualization is simply broken.

Yes to "simply broken", but no to "available whenever it wants".

If there is no host (core) PMU user, the guest PMU is fully and architecturally available. If there is no perf agent on the host (like the watchdog), the current guest PMU works fine, except for some emulated instructions.

>>>>
>>>>> In my opinion, we should add the notification mechanism in KVM. When the PMU resource is limited, the guest can know whether it's multiplexing or can choose to reschedule the event.

Eventually we moved the topic to an open discussion, and I am relieved.

The total_time_enabled and total_time_running of the perf_events created by KVM are quite unreliable and invisible to the guest, and we may need to clearly define what they really mean, for example when profiling SGX applications.

The elephant in the vPMU room at the moment is that the guest has no way of knowing whether the physical PMC backing a vPMC is being multiplexed, even though KVM is able to know.

One way to mitigate this is to allow perf not to apply a multiplexing policy (a system-wide knob), for example using a first-come, first-served policy. In that case each PMC user of the same priority is treated fairly, and KVM requests the hardware first when the guest uses a vPMC, or requests a re-schedule to another pCPU, and only fails in the worst case.

>>>>
>>>> That sounds like a paravirtual perf mechanism, rather than PMU virtualization. Are you suggesting that we not try to virtualize the PMU? Unfortunately, PMU virtualization is what we have customers clamoring for. No one is interested in a paravirtual perf mechanism. For example, when will VTune in the guest know how to use your proposed paravirtual interface?
>>>
>>> OK. If KVM cannot notify the guest, maybe the guest can query the usage of counters before using a counter. There is an IA32_PERF_GLOBAL_INUSE MSR introduced with Arch Perfmon v4. The MSR provides an "InUse" bit for each counter. But it cannot guarantee that the counter can always be owned by the guest unless the host treats the guest as a super-user and agrees not to touch its counter. This should only work on Intel platforms.
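To illustrate the "query before use" idea, something like the following could read that MSR through the msr driver. The index (0x392) and the bit layout (EVTSELx in-use bits in the low word, fixed counters from bit 32, PMI at bit 63) are taken from my reading of the SDM and should be double-checked, and of course a guest can only do this if KVM actually exposes the MSR:

/*
 * Sketch: read IA32_PERF_GLOBAL_INUSE (Arch Perfmon v4) and report which
 * counters look busy. MSR index and bit layout are assumed from the SDM;
 * verify before relying on them. Needs root and the msr module.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_PERF_GLOBAL_INUSE	0x392	/* assumed index */

int main(void)
{
	uint64_t inuse;
	int fd, i;

	fd = open("/dev/cpu/0/msr", O_RDONLY);
	if (fd < 0) {
		perror("open msr");
		return 1;
	}
	if (pread(fd, &inuse, sizeof(inuse), MSR_PERF_GLOBAL_INUSE) != sizeof(inuse)) {
		perror("rdmsr");
		return 1;
	}

	for (i = 0; i < 8; i++)		/* GP counters; real count comes from CPUID.0AH */
		if (inuse & (1ULL << i))
			printf("GP counter %d in use\n", i);
	for (i = 0; i < 4; i++)		/* fixed counters, assumed to start at bit 32 */
		if (inuse & (1ULL << (32 + i)))
			printf("fixed counter %d in use\n", i);
	if (inuse & (1ULL << 63))	/* assumed PMI in-use bit */
		printf("PMI in use\n");
	close(fd);
	return 0;
}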
>>
>> Simple question: do all existing guests (Windows and Linux are my primary interest) query that MSR today? If not, then this proposal is DOA.
>
> No, we don't, at least for Linux, because the host owns everything. It doesn't need the MSR to tell which one is in use. We track it in an SW way.

Indeed, "the host owns everything", which is also the starting point for host perf when it receives these changes.

> For the new request from the guest to own a counter, I guess maybe it is worth implementing it. But yes, existing/legacy guests never check the MSR.

We probably need an x86-generic notification solution for the worst case.

>>>>
>>>>> But it seems the notification mechanism may not work for the TDX case?

Shared memory can be used for communication between the host and the guest, if it's allowed by the TDX guest.

>>>>>>
>>>>>> On the other hand, if we flip it around, the semantics are more clear. A guest will be told it has no PMU (which is fine) or that it has full control of the PMU. If the guest is told that it has full control of the PMU, it does. And the host (which is the thing that granted the full PMU to the guest) knows that events inside the guest are not being measured. This results in all entities seeing something that can be reasoned about from their perspective.
>>>>>
>>>>> I assume that this is for the TDX case (where the notification mechanism doesn't work). The host still controls all the PMU resources. The TDX guest is treated as a super-user who can 'own' a PMU. The admin on the host can configure/change the PMUs owned by the TDX guest. Personally, I think it makes sense. But please keep in mind that the counters are not identical. There are some special events that can only run on a specific counter. If the special counter is assigned to TDX, other entities can never run some events. We should let other entities know if that happens. Or we should never let non-host entities own the special counter.
>>>>
>>>> Right; the counters are not fungible. Ideally, when the guest requests a particular counter, that is the counter it gets. If it is given a different counter, the counter it is given must provide the same behavior as the requested counter for the event in question.
>>>
>>> Ideally, yes, but sometimes KVM/host may not know whether they can use another counter to replace the requested counter, because KVM/host cannot retrieve the event constraint information from the guest.
>>
>> In that case, don't do it. When the guest asks for a specific counter, give the guest that counter. This isn't rocket science.
>
> Sounds like the guest can own everything if they want. Maybe it makes sense from the virtualization perspective. But it sounds too aggressive to me. :)

Until PeterZ changes his mind, upstream may not see this kind of change. (I actually used to like this design too.)

> Thanks,
> Kan
>
>>> For example, we have the Precise Distribution (PDist) feature enabled only for GP counter 0 on SPR. Perf uses precise_level 3 (a SW variable) to indicate the feature. For the KVM/host, they never know whether the guest applies the PDist feature.

Yes, just check what we did for PEBS, which got an Acked-by from PeterZ.
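For reference, the "precise_level 3" above corresponds, as far as I can tell, to the precise_ip field of perf_event_attr (the "ppp" event modifier in the perf tool). A small fragment of what a profiler sets to ask for that level, with arbitrary sample values; on SPR, per the discussion above, this is the request that the scheduler must then constrain to GP counter 0:

/*
 * Sketch: "precise_level 3" == perf_event_attr.precise_ip = 3, i.e. the
 * most precise PEBS mode the hardware offers (PDist on SPR). Sample
 * period and sample type here are arbitrary illustrative values.
 */
#include <linux/perf_event.h>
#include <string.h>

static void init_precise_cycles(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->precise_ip = 3;		/* 0..3; 3 requests zero-skid sampling */
	attr->sample_period = 100003;
	attr->sample_type = PERF_SAMPLE_IP;
}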
>>>
>>> I have a patch that forces the perf scheduler to start from the regular counters, which may mitigate the issue but cannot fix it. (I will post the patch separately.)
>>>
>>> Or we should never let the guest own the special counters. Although the guest then has to lose some special events, I guess the host would be more willing to let the guest own a regular counter.

AMD seems to do this, but it's just another disable-the-PMU compromise.

>>>
>>> Thanks,
>>> Kan
>>>
>>>>>
>>>>> Thanks,
>>>>> Kan
>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Dave Dunn
>>>>>>
>>>>>> On Wed, Feb 9, 2022 at 10:57 AM Dave Hansen wrote:
>>>>>>
>>>>>>>> I was referring to gaps in the collection of data that the host perf subsystem doesn't know about if ATTRIBUTES.PERFMON is set for a TDX guest. This can potentially be a problem if someone is trying to measure events per unit of time.
>>>>>>>
>>>>>>> Ahh, that makes sense.
>>>>>>>
>>>>>>> Does SGX cause problems for these people? It can create some of the same collection gaps:
>>>>>>>
>>>>>>>     performance monitoring activities are suppressed when entering an opt-out (of performance monitoring) enclave.