Message-ID: <621ea04b-ae10-c2a5-8711-dcb8c6d2b322@redhat.com>
Date: Wed, 29 Nov 2023 11:51:53 +0800
Subject: Re: [PATCH v1 2/3] KVM: selftests: aarch64: Move the pmu helper function into lib/
To: Marc Zyngier, Raghavendra Rao Ananta
Cc: Oliver Upton, kvmarm@lists.linux.dev, James Morse, Suzuki K Poulose, Zenghui Yu, Paolo Bonzini, Shuah Khan, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20231123063750.2176250-1-shahuang@redhat.com> <20231123063750.2176250-3-shahuang@redhat.com>
From: Shaoqin Huang

Hi Marc,

On 11/28/23 16:43, Marc Zyngier wrote:
> On 2023-11-27 21:48, Raghavendra Rao Ananta wrote:
>> Hi Shaoqin,
>>
>> On Wed, Nov 22, 2023 at 10:39 PM Shaoqin Huang
>> wrote:
>>>
>>> Move those pmu helper function into lib/, thus it can be used by other
>>> pmu test.
>>>
>>> Signed-off-by: Shaoqin Huang
>>> ---
>>>  .../kvm/aarch64/vpmu_counter_access.c         | 118 -----------------
>>>  .../selftests/kvm/include/aarch64/vpmu.h      | 119 ++++++++++++++++++
>>>  2 files changed, 119 insertions(+), 118 deletions(-)
>>>
>>> diff --git
>>> a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>>> b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>>> index 17305408a334..62d6315790ab 100644
>>> --- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>>> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>>> @@ -20,12 +20,6 @@
>>>  #include
>>>  #include
>>>
>>> -/* The max number of the PMU event counters (excluding the cycle
>>> counter) */
>>> -#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>>> -
>>> -/* The cycle counter bit position that's common among the PMU
>>> registers */
>>> -#define ARMV8_PMU_CYCLE_IDX            31
>>> -
>>>  static struct vpmu_vm *vpmu_vm;
>>>
>>>  struct pmreg_sets {
>>> @@ -35,118 +29,6 @@ struct pmreg_sets {
>>>
>>>  #define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr}
>>>
>>> -static uint64_t get_pmcr_n(uint64_t pmcr)
>>> -{
>>> -       return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>>> -}
>>> -
>>> -static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
>>> -{
>>> -       *pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK <<
>>> ARMV8_PMU_PMCR_N_SHIFT);
>>> -       *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>>> -}
>>> -
>>> -static uint64_t get_counters_mask(uint64_t n)
>>> -{
>>> -       uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
>>> -
>>> -       if (n)
>>> -               mask |= GENMASK(n - 1, 0);
>>> -       return mask;
>>> -}
>>> -
>>> -/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>>> -static inline unsigned long read_sel_evcntr(int sel)
>>> -{
>>> -       write_sysreg(sel, pmselr_el0);
>>> -       isb();
>>> -       return read_sysreg(pmxevcntr_el0);
>>> -}
>>> -
>>> -/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>>> -static inline void write_sel_evcntr(int sel, unsigned long val)
>>> -{
>>> -       write_sysreg(sel, pmselr_el0);
>>> -       isb();
>>> -       write_sysreg(val, pmxevcntr_el0);
>>> -       isb();
>>> -}
>>> -
>>> -/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>>> -static inline unsigned long read_sel_evtyper(int sel)
>>> -{
>>> -       write_sysreg(sel, pmselr_el0);
>>> -       isb();
>>> -       return read_sysreg(pmxevtyper_el0);
>>> -}
>>> -
>>> -/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>>> -static inline void write_sel_evtyper(int sel, unsigned long val)
>>> -{
>>> -       write_sysreg(sel, pmselr_el0);
>>> -       isb();
>>> -       write_sysreg(val, pmxevtyper_el0);
>>> -       isb();
>>> -}
>>> -
>>> -static inline void enable_counter(int idx)
>>> -{
>>> -       uint64_t v = read_sysreg(pmcntenset_el0);
>>> -
>>> -       write_sysreg(BIT(idx) | v, pmcntenset_el0);
>>> -       isb();
>>> -}
>>> -
>>> -static inline void disable_counter(int idx)
>>> -{
>>> -       uint64_t v = read_sysreg(pmcntenset_el0);
>>> -
>>> -       write_sysreg(BIT(idx) | v, pmcntenclr_el0);
>>> -       isb();
>>> -}
>>> -
>>> -static void pmu_disable_reset(void)
>>> -{
>>> -       uint64_t pmcr = read_sysreg(pmcr_el0);
>>> -
>>> -       /* Reset all counters, disabling them */
>>> -       pmcr &= ~ARMV8_PMU_PMCR_E;
>>> -       write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
>>> -       isb();
>>> -}
>>> -
>>> -#define RETURN_READ_PMEVCNTRN(n) \
>>> -       return read_sysreg(pmevcntr##n##_el0)
>>> -static unsigned long read_pmevcntrn(int n)
>>> -{
>>> -       PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
>>> -       return 0;
>>> -}
>>> -
>>> -#define WRITE_PMEVCNTRN(n) \
>>> -       write_sysreg(val, pmevcntr##n##_el0)
>>> -static void write_pmevcntrn(int n, unsigned long val)
>>> -{
>>> -       PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
>>> -       isb();
>>> -}
>>> -
>>> -#define READ_PMEVTYPERN(n) \
>>> -       return read_sysreg(pmevtyper##n##_el0)
>>> -static unsigned long read_pmevtypern(int n)
>>> -{
>>> -       PMEVN_SWITCH(n, READ_PMEVTYPERN);
>>> -       return 0;
>>> -}
>>> -
>>> -#define WRITE_PMEVTYPERN(n) \
>>> -       write_sysreg(val, pmevtyper##n##_el0)
>>> -static void write_pmevtypern(int n, unsigned long val)
>>> -{
>>> -       PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
>>> -       isb();
>>> -}
>>> -
>>>  /*
>>>   * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}_EL0
>>>   * accessors that test cases will use. Each of the accessors will
>>> diff --git a/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>>> b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>>> index 0a56183644ee..e0cc1ca1c4b7 100644
>>> --- a/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>>> +++ b/tools/testing/selftests/kvm/include/aarch64/vpmu.h
>>> @@ -1,10 +1,17 @@
>>>  /* SPDX-License-Identifier: GPL-2.0 */
>>>
>>>  #include
>>> +#include
>>>
>>>  #define GICD_BASE_GPA  0x8000000ULL
>>>  #define GICR_BASE_GPA  0x80A0000ULL
>>>
>>> +/* The max number of the PMU event counters (excluding the cycle
>>> counter) */
>>> +#define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>>> +
>>> +/* The cycle counter bit position that's common among the PMU
>>> registers */
>>> +#define ARMV8_PMU_CYCLE_IDX            31
>>> +
>>>  struct vpmu_vm {
>>>         struct kvm_vm *vm;
>>>         struct kvm_vcpu *vcpu;
>>> @@ -14,3 +21,115 @@ struct vpmu_vm {
>>>  struct vpmu_vm *create_vpmu_vm(void *guest_code);
>>>
>>>  void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm);
>>> +
>>> +static inline uint64_t get_pmcr_n(uint64_t pmcr)
>>> +{
>>> +       return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
>>> +}
>>> +
>>> +static inline void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
>>> +{
>>> +       *pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK <<
>>> ARMV8_PMU_PMCR_N_SHIFT);
>>> +       *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
>>> +}
>>> +
>>> +static inline uint64_t get_counters_mask(uint64_t n)
>>> +{
>>> +       uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
>>> +
>>> +       if (n)
>>> +               mask |= GENMASK(n - 1, 0);
>>> +       return mask;
>>> +}
>>> +
>>> +/* Read PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>>> +static inline unsigned long read_sel_evcntr(int sel)
>>> +{
>>> +       write_sysreg(sel, pmselr_el0);
>>> +       isb();
>>> +       return read_sysreg(pmxevcntr_el0);
>>> +}
>>> +
>>> +/* Write PMEVTCNTR_EL0 through PMXEVCNTR_EL0 */
>>> +static inline void write_sel_evcntr(int sel, unsigned long val)
>>> +{
>>> +       write_sysreg(sel, pmselr_el0);
>>> +       isb();
>>> +       write_sysreg(val, pmxevcntr_el0);
>>> +       isb();
>>> +}
>>> +
>>> +/* Read PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>>> +static inline unsigned long read_sel_evtyper(int sel)
>>> +{
>>> +       write_sysreg(sel, pmselr_el0);
>>> +       isb();
>>> +       return read_sysreg(pmxevtyper_el0);
>>> +}
>>> +
>>> +/* Write PMEVTYPER_EL0 through PMXEVTYPER_EL0 */
>>> +static inline void write_sel_evtyper(int sel, unsigned long val)
>>> +{
>>> +       write_sysreg(sel, pmselr_el0);
>>> +       isb();
>>> +       write_sysreg(val, pmxevtyper_el0);
>>> +       isb();
>>> +}
>>> +
>>> +static inline void enable_counter(int idx)
>>> +{
>>> +       uint64_t v = read_sysreg(pmcntenset_el0);
>>> +
>>> +       write_sysreg(BIT(idx) | v, pmcntenset_el0);
>>> +       isb();
>>> +}
>>> +
>>> +static inline void disable_counter(int idx)
>>> +{
>>> +       uint64_t v = read_sysreg(pmcntenset_el0);
>>> +
>>> +       write_sysreg(BIT(idx) | v, pmcntenclr_el0);
>>> +       isb();
>>> +}
>>> +
>> As mentioned in [1], the current implementation of disable_counter()
>> is buggy and would end up disabling all the counters.
>> However if you intend to keep it (even though it would remain unused),
>> may be change the definition something to:
>>
>> static inline void disable_counter(int idx)
>> {
>>     write_sysreg(BIT(idx), pmcntenclr_el0);
>>     isb();
>> }
>
> Same thing for the enable_counter() function, by the way.
> It doesn't have the same disastrous effect, but it is
> buggy (imagine an interrupt disabling a counter between
> the read and the write...).
>
> In general, the set/clr registers should always be used
> in their write form, never in a RMW form.
>

Thanks for your explanation. I will fix enable_counter() together with
disable_counter().

Thanks,
Shaoqin

> Thanks,
>
>         M.
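
For reference, the write-only form I have in mind for enable_counter(),
mirroring the disable_counter() suggested above (untested sketch, not part
of this patch):

static inline void enable_counter(int idx)
{
        /*
         * PMCNTENSET_EL0 is a set-type register: writing a 1 bit enables
         * only that counter, so no read-modify-write is needed.
         */
        write_sysreg(BIT(idx), pmcntenset_el0);
        isb();
}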