From: Jim Mattson
Date: Thu, 10 Feb 2022 06:09:48 -0800
Subject: Re: [PATCH] KVM: x86/pmu: Distinguish EVENTSEL bitmasks for uniform event creation and filtering
To: Like Xu
Cc: Paolo Bonzini, Sean Christopherson, Wanpeng Li, Vitaly Kuznetsov,
    Joerg Roedel, x86@kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
In-Reply-To: <20220210102603.42764-1-likexu@tencent.com>

On Thu, Feb 10, 2022 at 2:26 AM Like Xu wrote:
>
> From: Like Xu
>
> The current usage of the EVENTSEL_* macros is a mess in the KVM context,
> partly because we have a conceptual ambiguity when choosing to create a
> RAW or a HARDWARE event: when bits other than HARDWARE_EVENT_MASK are
> set, pmc_reprogram_counter() will use the RAW type.
>
> By introducing the new macro AMD64_EXTRA_EVENTSEL_EVENT to simplify
> matters, the following three issues can be addressed in one go:
>
> - the 12 event select bits are used as the comparison key for allow or
>   deny decisions;
> - NON_HARDWARE_EVENT_MASK is only used to determine whether a HARDWARE
>   event is programmed or not; an event selected with the full 12 bits
>   will be a RAW event (jmattson helped report this issue);
> - by reusing AMD64_RAW_EVENT_MASK, the extra 4 event select bits (if
>   set) are passed to perf correctly and not filtered out by
>   X86_RAW_EVENT_MASK.
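
For anyone following along, here is a rough sketch of the bit layout this
relies on (mask values as I read them in perf_event.h; an illustration
only, not part of the patch):

        /* Illustration only -- assumed values, double-check perf_event.h:
         *   ARCH_PERFMON_EVENTSEL_EVENT  0x000000FFULL    event select, bits  7:0
         *   ARCH_PERFMON_EVENTSEL_UMASK  0x0000FF00ULL    unit mask,    bits 15:8
         *   AMD64_EXTRA_EVENTSEL_EVENT   (0x0FULL << 32)  event select, bits 35:32
         */
        u64 key = eventsel & (INTEL_ARCH_EVENT_MASK | AMD64_EXTRA_EVENTSEL_EVENT);
        /* key keeps the 8 low + 4 high event select bits plus the unit mask,
         * and drops every other eventsel bit before the bsearch() lookup.
         */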
>
> Signed-off-by: Like Xu
> ---
>  arch/x86/include/asm/perf_event.h |  3 ++-
>  arch/x86/kvm/pmu.c                | 11 ++++-------
>  arch/x86/kvm/pmu.h                |  6 ++++++
>  3 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 8fc1b5003713..bd068fd19043 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -43,8 +43,9 @@
>  #define AMD64_EVENTSEL_INT_CORE_SEL_MASK       \
>         (0xFULL << AMD64_EVENTSEL_INT_CORE_SEL_SHIFT)
>
> +#define AMD64_EXTRA_EVENTSEL_EVENT     (0x0FULL << 32)
>  #define AMD64_EVENTSEL_EVENT   \
> -       (ARCH_PERFMON_EVENTSEL_EVENT | (0x0FULL << 32))
> +       (ARCH_PERFMON_EVENTSEL_EVENT | AMD64_EXTRA_EVENTSEL_EVENT)
>  #define INTEL_ARCH_EVENT_MASK  \
>         (ARCH_PERFMON_EVENTSEL_UMASK | ARCH_PERFMON_EVENTSEL_EVENT)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 2c98f3ee8df4..99426a8d7f18 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -198,7 +198,8 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>
>         filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
>         if (filter) {
> -               __u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
> +               __u64 key = eventsel & (INTEL_ARCH_EVENT_MASK |
> +                                       AMD64_EXTRA_EVENTSEL_EVENT);
>
>                 if (bsearch(&key, filter->events, filter->nevents,
>                             sizeof(__u64), cmp_u64))
> @@ -209,18 +210,14 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>         if (!allow_event)
>                 return;
>
> -       if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
> -                         ARCH_PERFMON_EVENTSEL_INV |
> -                         ARCH_PERFMON_EVENTSEL_CMASK |
> -                         HSW_IN_TX |
> -                         HSW_IN_TX_CHECKPOINTED))) {
> +       if (!(eventsel & NON_HARDWARE_EVENT_MASK)) {

I still don't understand why we even bother doing this lookup in the
first place. What's wrong with simply requesting PERF_TYPE_RAW every
time?

>                 config = kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc);
>                 if (config != PERF_COUNT_HW_MAX)
>                         type = PERF_TYPE_HARDWARE;
>         }
>
>         if (type == PERF_TYPE_RAW)
> -               config = eventsel & X86_RAW_EVENT_MASK;
> +               config = eventsel & AMD64_RAW_EVENT_MASK;

This chunk looks a lot like
https://lore.kernel.org/kvm/20220203014813.2130559-2-jmattson@google.com/.
Note that if you don't increase the width of config (as in the first
change of that series), this mask change is ineffective.

>
>         if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
>                 return;
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index 7a7b8d5b775e..48d867e250bc 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -17,6 +17,12 @@
>
>  #define MAX_FIXED_COUNTERS 3
>
> +#define KVM_ARCH_PERFMON_EVENTSEL_IGNORE \
> +       (ARCH_PERFMON_EVENTSEL_ANY | ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
> +
> +#define NON_HARDWARE_EVENT_MASK (AMD64_EXTRA_EVENTSEL_EVENT | \
> +       (X86_ALL_EVENT_FLAGS & ~KVM_ARCH_PERFMON_EVENTSEL_IGNORE))
> +
>  struct kvm_event_hw_type_mapping {
>         u8 eventsel;
>         u8 unit_mask;
> --
> 2.35.0
>
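
Coming back to the width concern above, a minimal sketch of what goes
wrong (assuming config is still declared as a plain 'unsigned' in
reprogram_gp_counter(), and using a made-up eventsel value):

        u64 eventsel = (1ULL << 35) | 0xC0;        /* hypothetical: one of the extra
                                                    * event select bits (35:32) set */
        unsigned int config;                       /* 32 bits wide in the current code */

        config = eventsel & AMD64_RAW_EVENT_MASK;  /* bits 35:32 survive the mask,
                                                    * but the 32-bit assignment
                                                    * silently drops them, so perf
                                                    * never sees them unless config
                                                    * is widened to u64. */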