From: Jim Mattson
Date: Thu, 10 Feb 2022 08:34:01 -0800
Subject: Re: [PATCH kvm/queue v2 2/3] perf: x86/core: Add interface to query perfmon_event_map[] directly
To: "Liang, Kan"
Cc: David Dunn, Dave Hansen, Peter Zijlstra, Like Xu, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Like Xu,
    Stephane Eranian
In-Reply-To: <7b5012d8-6ae1-7cde-a381-e82685dfed4f@linux.intel.com>
References: <20220117085307.93030-1-likexu@tencent.com>
    <20220117085307.93030-3-likexu@tencent.com>
    <20220202144308.GB20638@worktop.programming.kicks-ass.net>
    <69c0fc41-a5bd-fea9-43f6-4724368baf66@intel.com>
    <67a731dd-53ba-0eb8-377f-9707e5c9be1b@intel.com>
    <7b5012d8-6ae1-7cde-a381-e82685dfed4f@linux.intel.com>

On Thu, Feb 10, 2022 at 7:34 AM Liang, Kan wrote:
>
> On 2/9/2022 2:24 PM, David Dunn wrote:
> > Dave,
> >
> > In my opinion, the right policy depends on what the host owner and
> > guest owner are trying to achieve.
> >
> > If the PMU is being used to locate places where performance could be
> > improved in the system, there are two sub-scenarios:
> > - The host and guest are owned by the same entity that is optimizing
> > the overall system. In this case, the guest doesn't need PMU access and
> > better information is provided by profiling the entire system from the
> > host.
> > - The host and guest are owned by different entities. In this
> > case, profiling from the host can identify perf issues in the guest.
> > But what action can be taken? The host entity must communicate issues
> > back to the guest owner through some sort of out-of-band information
> > channel. On the other hand, preempting the host PMU to give the guest
> > a fully functional PMU serves this use case well.
> >
> > TDX and SGX (outside of debug mode) strongly assume different
> > entities. And Intel is doing this to reduce insight of the host into
> > guest operations. So in my opinion, preemption makes sense.
> >
> > There are also scenarios where the host owner is trying to identify
> > systemwide impacts of guest actions. For example, detecting memory
> > bandwidth consumption or split locks. In this case, host control
> > without preemption is necessary.
> >
> > To address these various scenarios, it seems like the host needs to be
> > able to have policy control on whether it is willing to have the PMU
> > preempted by the guest.
> >
> > But I don't see what scenario is well served by the current situation
> > in KVM. Currently the guest will either be told it has no PMU (which
> > is fine) or that it has full control of a PMU. If the guest is told
> > it has full control of the PMU, it actually doesn't. But instead of
> > losing counters on well-defined events (from the guest perspective),
> > they simply stop counting depending on what the host is doing with the
> > PMU.
>
> For the current perf subsystem, a PMU should be shared among different
> users via the multiplexing mechanism if the resource is limited. No one
> has full control of a PMU for its lifetime. A user can only have the PMU
> in its given period. I think the user can understand how long it runs via
> total_time_enabled and total_time_running.

For most clients, yes. For KVM, no. KVM currently tosses
total_time_enabled and total_time_running in the bitbucket. It could
extrapolate, but that would result in loss of precision. Some guest
uses of the PMU would not be able to cope
(e.g. https://github.com/rr-debugger/rr).

> For a guest, it should rely on the host to tell whether the PMU resource
> is available. But unfortunately, I don't think we have such a
> notification mechanism in KVM. The guest has the wrong impression that
> it can have full control of the PMU.

That is the only impression that the architectural specification
allows the guest to have. On Intel, we can mask off individual fixed
counters, and we can reduce the number of GP counters, but AMD offers
us no such freedom. Whatever resources we advertise to the guest must
be available for its use whenever it wants. Otherwise, PMU
virtualization is simply broken.

> In my opinion, we should add the notification mechanism in KVM. When the
> PMU resource is limited, the guest can know whether it's multiplexing or
> can choose to reschedule the event.

That sounds like a paravirtual perf mechanism, rather than PMU
virtualization. Are you suggesting that we not try to virtualize the
PMU? Unfortunately, PMU virtualization is what we have customers
clamoring for. No one is interested in a paravirtual perf mechanism.
For example, when will VTune in the guest know how to use your
proposed paravirtual interface?

> But it seems the notification mechanism may not work for the TDX case?
>
> > On the other hand, if we flip it around the semantics are more clear.
> > A guest will be told it has no PMU (which is fine) or that it has full
> > control of the PMU. If the guest is told that it has full control of
> > the PMU, it does. And the host (which is the thing that granted the
> > full PMU to the guest) knows that events inside the guest are not
> > being measured. This results in all entities seeing something that
> > can be reasoned about from their perspective.
>
> I assume that this is for the TDX case (where the notification mechanism
> doesn't work). The host still controls all the PMU resources. The TDX
> guest is treated as a super-user who can 'own' a PMU. The admin on the
> host can configure/change the owned PMUs of the TDX guest. Personally, I
> think it makes sense. But please keep in mind that the counters are not
> identical. There are some special events that can only run on a specific
> counter. If the special counter is assigned to TDX, other entities can
> never run some events. We should let other entities know if it happens.
> Or we should never let non-host entities own the special counter.

Right; the counters are not fungible. Ideally, when the guest requests
a particular counter, that is the counter it gets. If it is given a
different counter, the counter it is given must provide the same
behavior as the requested counter for the event in question.

> Thanks,
> Kan
>
> > Thanks,
> >
> > Dave Dunn
> >
> > On Wed, Feb 9, 2022 at 10:57 AM Dave Hansen wrote:
> >
> >>> I was referring to gaps in the collection of data that the host perf
> >>> subsystem doesn't know about if ATTRIBUTES.PERFMON is set for a TDX
> >>> guest. This can potentially be a problem if someone is trying to
> >>> measure events per unit of time.
> >>
> >> Ahh, that makes sense.
> >>
> >> Does SGX cause problems for these people? It can create some of the same
> >> collection gaps:
> >>
> >> performance monitoring activities are suppressed when entering
> >> an opt-out (of performance monitoring) enclave.
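[Editor's note: as a reference for the total_time_enabled / total_time_running
scaling discussed above, here is a minimal host-side sketch (not from the
original thread) of how a perf_event_open() user can detect multiplexing and
extrapolate the count. The event choice, self-profiling target, and error
handling are illustrative assumptions only.]

/*
 * Sketch: detect PMU multiplexing and scale the raw count, illustrating
 * the precision loss Jim refers to when extrapolating.
 */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct read_format {
	uint64_t value;         /* raw count while the event was on a counter */
	uint64_t time_enabled;  /* ns the event was enabled */
	uint64_t time_running;  /* ns the event actually occupied a counter */
};

int main(void)
{
	struct perf_event_attr attr;
	struct read_format rf;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;	/* illustrative event */
	attr.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
			   PERF_FORMAT_TOTAL_TIME_RUNNING;

	/* Measure the current task on any CPU; no glibc wrapper exists. */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* ... workload runs here ... */

	if (read(fd, &rf, sizeof(rf)) != sizeof(rf)) {
		perror("read");
		return 1;
	}

	/*
	 * If the PMU was multiplexed, time_running < time_enabled and the
	 * count must be extrapolated; the result is only an estimate.
	 */
	if (rf.time_running && rf.time_running < rf.time_enabled) {
		double scaled = (double)rf.value *
				rf.time_enabled / rf.time_running;
		printf("multiplexed: raw=%llu scaled=%.0f\n",
		       (unsigned long long)rf.value, scaled);
	} else {
		printf("exclusive: count=%llu\n",
		       (unsigned long long)rf.value);
	}
	close(fd);
	return 0;
}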