From: Jim Mattson
Date: Wed, 8 Dec 2021 20:33:00 -0800
Subject: Re: [PATCH v2 5/6] KVM: x86: Update vPMCs when retiring instructions
To: Like Xu
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Like Xu, Peter Zijlstra
References: <20211130074221.93635-1-likexu@tencent.com>
    <20211130074221.93635-6-likexu@tencent.com>
In-Reply-To: <20211130074221.93635-6-likexu@tencent.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 29, 2021 at 11:42 PM Like Xu wrote:
>
> From: Like Xu
>
> When KVM retires a guest instruction through emulation, increment any
> vPMCs that are configured to monitor "instructions retired," and
> update the sample period of those counters so that they will overflow
> at the right time.
>
> Signed-off-by: Eric Hankland
> [jmattson:
>   - Split the code to increment "branch instructions retired" into a
>     separate commit.
>   - Added 'static' to kvm_pmu_incr_counter() definition.
>   - Modified kvm_pmu_incr_counter() to check pmc->perf_event->state ==
>     PERF_EVENT_STATE_ACTIVE.
> ]
> Fixes: f5132b01386b ("KVM: Expose a version 2 architectural PMU to a guests")
> Signed-off-by: Jim Mattson
> [likexu:
>   - Drop checks for pmc->perf_event or event state or event type
>   - Increase a counter once its umask bits and the first 8 select bits are matched
>   - Rewrite kvm_pmu_incr_counter() with a less invasive approach to the host perf;
>   - Rename kvm_pmu_record_event to kvm_pmu_trigger_event;
>   - Add counter enable and CPL check for kvm_pmu_trigger_event();
> ]
> Cc: Peter Zijlstra
> Signed-off-by: Like Xu
> ---

> +void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
> +{
> +	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
> +	struct kvm_pmc *pmc;
> +	int i;
> +
> +	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
> +		pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, i);
> +
> +		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
> +			continue;
> +
> +		/* Ignore checks for edge detect, pin control, invert and CMASK bits */

I don't understand how we can ignore these checks. Doesn't that violate
the architectural specification?

> +		if (eventsel_match_perf_hw_id(pmc, perf_hw_id) && cpl_is_matched(pmc))
> +			kvm_pmu_incr_counter(pmc);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
> +