From: Jim Mattson
Date: Wed, 28 Jun 2023 14:03:13 -0700
Subject: Re: [PATCH v2 3/8] KVM: selftests: Test Intel PMU architectural events on gp counters
To: Sean Christopherson
Cc: Jinrong Liang, Paolo Bonzini, Like Xu, David Matlack, Aaron Lewis, Vitaly Kuznetsov, Wanpeng Li, Jinrong Liang, kvm@vger.kernel.org, linux-kernel@vger.kernel.org

On Wed, Jun 28, 2023 at 1:44 PM Sean Christopherson wrote:
>
> On Tue, May 30, 2023, Jinrong Liang wrote:
> > +/* Guest payload for any performance counter counting */
> > +#define NUM_BRANCHES 10
> > +
> > +static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> > +                                                  void *guest_code)
> > +{
> > +        struct kvm_vm *vm;
> > +
> > +        vm = vm_create_with_one_vcpu(vcpu, guest_code);
> > +        vm_init_descriptor_tables(vm);
> > +        vcpu_init_descriptor_tables(*vcpu);
> > +
> > +        return vm;
> > +}
> > +
> > +static uint64_t run_vcpu(struct kvm_vcpu *vcpu, uint64_t *ucall_arg)
> > +{
> > +        struct ucall uc;
> > +
> > +        vcpu_run(vcpu);
> > +        switch (get_ucall(vcpu, &uc)) {
> > +        case UCALL_SYNC:
> > +                *ucall_arg = uc.args[1];
> > +                break;
> > +        case UCALL_DONE:
> > +                break;
> > +        default:
> > +                TEST_ASSERT(false, "Unexpected exit: %s",
> > +                            exit_reason_str(vcpu->run->exit_reason));
>
> TEST_FAIL()
>
> > +        }
> > +        return uc.cmd;
> > +}
> > +
> > +static void intel_guest_run_arch_event(uint8_t version, uint8_t max_gp_num,
>
> Unless I'm mistaken, this isn't specific to arch events.  And with a bit of
> massaging, it doesn't need to be Intel specific.  Typically we try to avoid
> speculatively creating infrastructure, but in this case we *know* AMD has vPMU
> support, and we *know* from KVM-Unit-Tests that accounting for the differences
> between MSRs on Intel vs. AMD is doable, so we should write code with an eye
> toward supporting both AMD and Intel.
>
> And then we can avoid having to prefix so many functions with "intel", e.g. this
> can be something like
>
>   static void guest_measure_loop()
>
> or whatever.
>
> > +                                       uint32_t ctr_base_msr, uint64_t evt_code)
> > +{
> > +        uint32_t global_msr = MSR_CORE_PERF_GLOBAL_CTRL;
> > +        unsigned int i;
> > +
> > +        for (i = 0; i < max_gp_num; i++) {
> > +                wrmsr(ctr_base_msr + i, 0);
> > +                wrmsr(MSR_P6_EVNTSEL0 + i, EVENTSEL_OS | EVENTSEL_EN | evt_code);
> > +                if (version > 1)
> > +                        wrmsr(global_msr, BIT_ULL(i));
> > +
> > +                __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
> > +
> > +                if (version > 1)
> > +                        wrmsr(global_msr, 0);
> > +
> > +                GUEST_SYNC(_rdpmc(i));
> > +        }
> > +
> > +        GUEST_DONE();
> > +}
> > +
> > +static void test_arch_events_cpuid(struct kvm_vcpu *vcpu, uint8_t evt_vector,
>
> "vector" is confusing, as "vector" usually refers to a vector number, e.g. for
> IRQs and exceptions.  This is the _length_ of a so-called vector.  I vote to
> ignore the SDM's use of "vector" in this case and instead call it something
> like arch_events_bitmap_size.  And then arch_events_unavailable_mask?
>
> > +                                   uint8_t unavl_mask, uint8_t idx)
> > +{
> > +        struct kvm_cpuid_entry2 *entry;
> > +        uint32_t ctr_msr = MSR_IA32_PERFCTR0;
> > +        bool is_supported;
> > +        uint64_t counter_val = 0;
> > +
> > +        entry = vcpu_get_cpuid_entry(vcpu, 0xa);
> > +        entry->eax = (entry->eax & ~EVT_LEN_MASK) |
> > +                     (evt_vector << EVT_LEN_OFS_BIT);
>
> EVT_LEN_OFS_BIT can be a KVM_X86_PROPERTY.  And please also add a helper to set
> properties; the whole point of the FEATURE and PROPERTY frameworks is to avoid
> open coding CPUID manipulations.  E.g.
>
>   static inline void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
>                                              struct kvm_x86_cpu_property property,
>                                              uint32_t value)
>   {
>           ...
>   }
>
> > +        entry->ebx = (entry->ebx & ~EVENTS_MASK) | unavl_mask;
> > +        vcpu_set_cpuid(vcpu);
> > +
> > +        if (vcpu_get_msr(vcpu, MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
> > +                ctr_msr = MSR_IA32_PMC0;
>
> This can be done in the guest, no?
>
> > +
> > +        /* Arch event x is supported if EBX[x]=0 && EAX[31:24]>x */
> > +        is_supported = !(entry->ebx & BIT_ULL(idx)) &&
> > +                       (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
>
> Please add a helper for this.
>
> > +
> > +        vcpu_args_set(vcpu, 4, X86_INTEL_PMU_VERSION, X86_INTEL_MAX_GP_CTR_NUM,
> > +                      ctr_msr, arch_events[idx]);
> > +
> > +        while (run_vcpu(vcpu, &counter_val) != UCALL_DONE)
> > +                TEST_ASSERT(is_supported == !!counter_val,
> > +                            "Unavailable arch event is counting.");
> > +}
> > +
> > +static void intel_check_arch_event_is_unavl(uint8_t idx)
> > +{
> > +        uint8_t eax_evt_vec, ebx_unavl_mask, i, j;
> > +        struct kvm_vcpu *vcpu;
> > +        struct kvm_vm *vm;
> > +
> > +        /*
> > +         * A brute force iteration of all combinations of values is likely to
> > +         * exhaust the limit of the single-threaded thread fd nums, so it's
> > +         * tested here by iterating through all valid values on a single bit.
> > +         */
> > +        for (i = 0; i < ARRAY_SIZE(arch_events); i++) {
> > +                eax_evt_vec = BIT_ULL(i);
> > +                for (j = 0; j < ARRAY_SIZE(arch_events); j++) {
> > +                        ebx_unavl_mask = BIT_ULL(j);
> > +                        vm = pmu_vm_create_with_one_vcpu(&vcpu,
> > +                                                         intel_guest_run_arch_event);
> > +                        test_arch_events_cpuid(vcpu, eax_evt_vec,
> > +                                               ebx_unavl_mask, idx);
> > +
> > +                        kvm_vm_free(vm);
> > +                }
> > +        }
> > +}
> > +
> > +static void intel_test_arch_events(void)
> > +{
> > +        uint8_t idx;
> > +
> > +        for (idx = 0; idx < ARRAY_SIZE(arch_events); idx++) {
> > +                /*
> > +                 * Given the stability of performance event recurrence,
> > +                 * only these arch events are currently being tested:
> > +                 *
> > +                 *  - Core cycle event (idx = 0)
> > +                 *  - Instruction retired event (idx = 1)
> > +                 *  - Reference cycles event (idx = 2)
> > +                 *  - Branch instruction retired event (idx = 5)
> > +                 *
> > +                 * Note that reference cycles is one event that actually cannot
> > +                 * be successfully virtualized.
> > +                 */

Actually, there is no reason that reference cycles can't be successfully
virtualized.  It just can't be done with the current vPMU infrastructure.

> > +                if (idx > 2 && idx != 5)
>
> As requested in a previous patch, use enums; then the need to document the
> magic numbers goes away.
>
> > +                        continue;
> > +
> > +                intel_check_arch_event_is_unavl(idx);
> > +        }
> > +}
> > +
> > +static void intel_test_pmu_cpuid(void)
> > +{
> > +        intel_test_arch_events();
>
> Either put the Intel-specific TEST_REQUIRE()s in here, or open code the calls.
> Adding a helper and then splitting code across the helper and its sole caller
> is unnecessary.
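[Editorial sketch: to make the vcpu_set_cpuid_property() suggestion above concrete, here is a minimal possible body.  It assumes the selftests' struct kvm_x86_cpu_property carries function, index, reg, lo_bit, and hi_bit fields and that __vcpu_get_cpuid_entry() and GENMASK() are available, as elsewhere in the selftests; the details are illustrative, not a final API.]

  static inline void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
                                             struct kvm_x86_cpu_property property,
                                             uint32_t value)
  {
          struct kvm_cpuid_entry2 *entry;

          entry = __vcpu_get_cpuid_entry(vcpu, property.function, property.index);

          /* Clear the property's bit range, then insert the new value. */
          (&entry->eax)[property.reg] &= ~GENMASK(property.hi_bit, property.lo_bit);
          (&entry->eax)[property.reg] |= value << property.lo_bit;

          /* Propagate the modified CPUID array back to KVM. */
          vcpu_set_cpuid(vcpu);
  }

[With such a helper, the open-coded EAX/EBX updates in test_arch_events_cpuid() reduce to two vcpu_set_cpuid_property() calls.]
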
>
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > +        TEST_REQUIRE(get_kvm_param_bool("enable_pmu"));
> > +
> > +        if (host_cpu_is_intel) {
>
> Presumably AMD will be supported at some point, but until then, this needs
> to be
>
>   TEST_REQUIRE(host_cpu_is_intel);
>
> > +                TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
> > +                TEST_REQUIRE(X86_INTEL_PMU_VERSION > 0);
> > +                TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_PDCM));
> > +
> > +                intel_test_pmu_cpuid();
> > +        }
> > +
> > +        return 0;
> > +}
> > --
> > 2.31.1
> >
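
[Editorial sketch: the "Please add a helper for this" comment above could be satisfied along these lines, reusing the EVT_LEN_MASK and EVT_LEN_OFS_BIT macros from the patch under review; the helper name is hypothetical.]

  /*
   * Arch event 'idx' is supported if CPUID.0xA:EBX[idx] is clear and the
   * event bitmap length in CPUID.0xA:EAX[31:24] is large enough to cover it.
   */
  static bool vcpu_arch_event_is_supported(struct kvm_vcpu *vcpu, uint8_t idx)
  {
          struct kvm_cpuid_entry2 *entry = vcpu_get_cpuid_entry(vcpu, 0xa);

          return !(entry->ebx & BIT_ULL(idx)) &&
                 (((entry->eax & EVT_LEN_MASK) >> EVT_LEN_OFS_BIT) > idx);
  }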