Date: Tue, 28 Nov 2023 08:43:18 +0000
From: Marc Zyngier <maz@kernel.org>
To: Shaoqin Huang <shahuang@redhat.com>, Raghavendra Rao Ananta <rananta@google.com>
Cc: Oliver Upton <oliver.upton@linux.dev>, kvmarm@lists.linux.dev,
	James Morse <james.morse@arm.com>, Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>, Paolo Bonzini <pbonzini@redhat.com>,
	Shuah Khan <shuah@kernel.org>, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 2/3] KVM: selftests: aarch64: Move the pmu helper function into lib/
References: <20231123063750.2176250-1-shahuang@redhat.com> <20231123063750.2176250-3-shahuang@redhat.com>
On 2023-11-27 21:48, Raghavendra Rao Ananta wrote:
> Hi Shaoqin,
>
> On Wed, Nov 22, 2023 at 10:39 PM Shaoqin Huang <shahuang@redhat.com> wrote:
>>
>> Move those pmu helper function into lib/, thus it can be used by other
>> pmu test.
>>
>> Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
>> ---
>>  .../kvm/aarch64/vpmu_counter_access.c         | 118 -----------------
>>  .../selftests/kvm/include/aarch64/vpmu.h      | 119 ++++++++++++++++++
>>  2 files changed, 119 insertions(+), 118 deletions(-)
>>
>> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>> index 17305408a334..62d6315790ab 100644
>> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>> @@ -20,12 +20,6 @@
>>  #include
>>  #include
>>
>> -/* The max number of the PMU event counters (excluding the cycle counter) */
>> -#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>> -
>> -/* The cycle counter bit position that's common among the PMU registers */
>> -#define ARMV8_PMU_CYCLE_IDX 31
>> -
>>  static struct vpmu_vm *vpmu_vm;
>>
>>  struct pmreg_sets {
>> @@ -35,118 +29,6 @@ struct pmreg_sets {
>>
>>  #define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr}
>>
>> -static uint64_t get_pmcr_n(uint64_t pmcr)
>> -{
>> -	return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>> -}
>> -
>> -static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
>> -{
>> -	*pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
>> -	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>> -}
>> -
>> -static uint64_t get_counters_mask(uint64_t n)
>> -{
>> -	uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
>> -
>> -	if (n)
>> -		mask |= GENMASK(n - 1, 0);
>> -	return mask;
>> -}
>> -
>> -/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>> -static inline unsigned long read_sel_evcntr(int sel)
>> -{
>> -	write_sysreg(sel, pmselr_el0);
>> -	isb();
>> -	return read_sysreg(pmxevcntr_el0);
>> -}
>> -
>> -/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>> -static inline void write_sel_evcntr(int sel, unsigned long val)
>> -{
>> -	write_sysreg(sel, pmselr_el0);
>> -	isb();
>> -	write_sysreg(val, pmxevcntr_el0);
>> -	isb();
>> -}
>> -
>> -/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>> -static inline unsigned long read_sel_evtyper(int sel)
>> -{
>> -	write_sysreg(sel, pmselr_el0);
>> -	isb();
>> -	return read_sysreg(pmxevtyper_el0);
>> -}
>> -
>> -/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>> -static inline void write_sel_evtyper(int sel, unsigned long val)
>> -{
>> -	write_sysreg(sel, pmselr_el0);
>> -	isb();
>> -	write_sysreg(val, pmxevtyper_el0);
>> -	isb();
>> -}
>> -
>> -static inline void enable_counter(int idx)
>> -{
>> -	uint64_t v = read_sysreg(pmcntenset_el0);
>> -
>> -	write_sysreg(BIT(idx) | v, pmcntenset_el0);
>> -	isb();
>> -}
>> -
>> -static inline void disable_counter(int idx)
>> -{
>> -	uint64_t v = read_sysreg(pmcntenset_el0);
>> -
>> -	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
>> -	isb();
>> -}
>> -
>> -static void pmu_disable_reset(void)
>> -{
>> -	uint64_t pmcr = read_sysreg(pmcr_el0);
>> -
>> -	/* Reset all counters, disabling them */
>> -	pmcr &= ~ARMV8_PMU_PMCR_E;
>> -	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
>> -	isb();
>> -}
>> -
>> -#define RETURN_READ_PMEVCNTRN(n) \
>> -	return read_sysreg(pmevcntr##n##_el0)
>> -static unsigned long read_pmevcntrn(int n)
>> -{
>> -	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
>> -	return 0;
>> -}
>> -
>> -#define WRITE_PMEVCNTRN(n) \
>> -	write_sysreg(val, pmevcntr##n##_el0)
>> -static void write_pmevcntrn(int n, unsigned long val)
>> -{
>> -	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
>> -	isb();
>> -}
>> -
>> -#define READ_PMEVTYPERN(n) \
>> -	return read_sysreg(pmevtyper##n##_el0)
>> -static unsigned long read_pmevtypern(int n)
>> -{
>> -	PMEVN_SWITCH(n, READ_PMEVTYPERN);
>> -	return 0;
>> -}
>> -
>> -#define WRITE_PMEVTYPERN(n) \
>> -	write_sysreg(val, pmevtyper##n##_el0)
>> -static void write_pmevtypern(int n, unsigned long val)
>> -{
>> -	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
>> -	isb();
>> -}
>> -
>>  /*
>>   * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
>>   * accessors that test cases will use. Each of the accessors will
>> diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>> index 0a56183644ee..e0cc1ca1c4b7 100644
>> --- a/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>> +++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>> @@ -1,10 +1,17 @@
>>  /* SPDX-License-Identifier: GPL-2.0 */
>>
>>  #include
>> +#include
>>
>>  #define GICD_BASE_GPA	0x8000000ULL
>>  #define GICR_BASE_GPA	0x80A0000ULL
>>
>> +/* The max number of the PMU event counters (excluding the cycle counter) */
>> +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>> +
>> +/* The cycle counter bit position that's common among the PMU registers */
>> +#define ARMV8_PMU_CYCLE_IDX 31
>> +
>>  struct vpmu_vm {
>>  	struct kvm_vm *vm;
>>  	struct kvm_vcpu *vcpu;
>> @@ -14,3 +21,115 @@ struct vpmu_vm {
>>  struct vpmu_vm *create_vpmu_vm(void *guest_code);
>>
>>  void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm);
>> +
>> +static inline uint64_t get_pmcr_n(uint64_t pmcr)
>> +{
>> +	return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>> +}
>> +
>> +static inline void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
>> +{
>> +	*pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
>> +	*pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>> +}
>> +
>> +static inline uint64_t get_counters_mask(uint64_t n)
>> +{
>> +	uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
>> +
>> +	if (n)
>> +		mask |= GENMASK(n - 1, 0);
>> +	return mask;
>> +}
>> +
>> +/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>> +static inline unsigned long read_sel_evcntr(int sel)
>> +{
>> +	write_sysreg(sel, pmselr_el0);
>> +	isb();
>> +	return read_sysreg(pmxevcntr_el0);
>> +}
>> +
>> +/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>> +static inline void write_sel_evcntr(int sel, unsigned long val)
>> +{
>> +	write_sysreg(sel, pmselr_el0);
>> +	isb();
>> +	write_sysreg(val, pmxevcntr_el0);
>> +	isb();
>> +}
>> +
>> +/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>> +static inline unsigned long read_sel_evtyper(int sel)
>> +{
>> +	write_sysreg(sel, pmselr_el0);
>> +	isb();
>> +	return read_sysreg(pmxevtyper_el0);
>> +}
>> +
>> +/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>> +static inline void write_sel_evtyper(int sel, unsigned long val)
>> +{
>> +	write_sysreg(sel, pmselr_el0);
>> +	isb();
>> +	write_sysreg(val, pmxevtyper_el0);
>> +	isb();
>> +}
>> +
>> +static inline void enable_counter(int idx)
>> +{
>> +	uint64_t v = read_sysreg(pmcntenset_el0);
>> +
>> +	write_sysreg(BIT(idx) | v, pmcntenset_el0);
>> +	isb();
>> +}
>> +
>> +static inline void disable_counter(int idx)
>> +{
>> +	uint64_t v = read_sysreg(pmcntenset_el0);
>> +
>> +	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
>> +	isb();
>> +}
>> +
> As mentioned in [1], the current implementation of disable_counter()
> is buggy and would end up disabling all the counters.
> However if you intend to keep it (even though it would remain unused),
> may be change the definition something to:
>
> static inline void disable_counter(int idx)
> {
>         write_sysreg(BIT(idx), pmcntenclr_el0);
>         isb();
> }

Same thing for the enable_counter() function, by the way. It doesn't
have the same disastrous effect, but it is buggy (imagine an interrupt
disabling a counter between the read and the write...).

In general, the set/clr registers should always be used in their write
form, never in a RMW form.

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...