Message-ID: <58f63789-6426-b069-abf3-918cd55d98f6@arm.com>
Date: Tue, 10 Jan 2023 13:24:47 +0530
Subject: Re: [PATCH v4 2/8] arm64: Drop SYS_ from SPE register defines
To: Rob Herring, Peter Zijlstra, Will Deacon, Mark Rutland, Catalin Marinas,
 Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Oliver Upton,
 Ingo Molnar, Arnaldo Carvalho de Melo, Alexander Shishkin, Jiri Olsa,
 Namhyung Kim
Cc: Mark Brown, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
 linux-perf-users@vger.kernel.org, James Clark
References: <20220825-arm-spe-v8-7-v4-0-327f860daf28@kernel.org>
 <20220825-arm-spe-v8-7-v4-2-327f860daf28@kernel.org>
From: Anshuman Khandual
In-Reply-To: <20220825-arm-spe-v8-7-v4-2-327f860daf28@kernel.org>

LGTM, except for a couple of small possible improvements.

On 1/10/23 00:56, Rob Herring wrote:
> We currently have a non-standard SYS_ prefix in the constants generated
> for the SPE register bitfields. Drop this in preparation for automatic
> register definition generation.
> 
> The SPE mask defines were unshifted, and the SPE register field
> enumerations were shifted. The autogenerated defines are the opposite,
> so make the necessary adjustments.
> 
> No functional changes.
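
To make the shifted-vs-unshifted convention described above concrete, here is a
minimal standalone sketch (not part of the patch; GENMASK_ULL()/FIELD_GET() are
simplified stand-ins for the helpers in include/linux/bits.h and
include/linux/bitfield.h, and the register value is made up) showing that the
old "unshifted mask plus explicit shift" pattern and the new "pre-shifted mask"
pattern extract the same field:

/* spe_field_demo.c - illustration only, builds with gcc/clang in userspace */
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

/* Simplified stand-ins for the kernel's GENMASK_ULL()/FIELD_GET() helpers. */
#define GENMASK_ULL(h, l)	(((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> __builtin_ctzll(mask))

/* Old style: unshifted mask, explicit shift (as before this patch). */
#define PMSIDR_EL1_INTERVAL_SHIFT	8
#define OLD_PMSIDR_EL1_INTERVAL_MASK	0xfUL

/* New style: mask already shifted into place, e.g. GENMASK_ULL(11, 8). */
#define NEW_PMSIDR_EL1_INTERVAL_MASK	GENMASK_ULL(11, 8)

int main(void)
{
	uint64_t reg = 0x0000000000000700ULL;	/* made-up PMSIDR_EL1 value, Interval = 7 */

	/* Old pattern: shift first, then apply the narrow mask. */
	uint64_t old_fld = (reg >> PMSIDR_EL1_INTERVAL_SHIFT) &
			   OLD_PMSIDR_EL1_INTERVAL_MASK;
	/* New pattern: apply the in-place mask, then shift down. */
	uint64_t new_fld = FIELD_GET(NEW_PMSIDR_EL1_INTERVAL_MASK, reg);

	assert(old_fld == new_fld);
	printf("Interval field: old=%llu new=%llu\n",
	       (unsigned long long)old_fld, (unsigned long long)new_fld);
	return 0;
}

The driver changes quoted below are the same rewrite applied to the real
PMSIDR_EL1/PMSIRR_EL1/PMBSR_EL1 accessors.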
> 
> Tested-by: James Clark
> Signed-off-by: Rob Herring
> ---
> v4:
>  - Rebase on v6.2-rc1
> v3:
>  - No change
> v2:
>  - New patch
> ---
>  arch/arm64/include/asm/el2_setup.h |   6 +-
>  arch/arm64/include/asm/sysreg.h    | 112 ++++++++++++++++++-------------------
>  arch/arm64/kvm/debug.c             |   2 +-
>  arch/arm64/kvm/hyp/nvhe/debug-sr.c |   2 +-
>  drivers/perf/arm_spe_pmu.c         |  85 ++++++++++++++--------------
>  5 files changed, 103 insertions(+), 104 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index 668569adf4d3..f9da43e53cdb 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -53,10 +53,10 @@
>  	cbz	x0, .Lskip_spe_\@		// Skip if SPE not present
> 
>  	mrs_s	x0, SYS_PMBIDR_EL1		// If SPE available at EL2,
> -	and	x0, x0, #(1 << SYS_PMBIDR_EL1_P_SHIFT)
> +	and	x0, x0, #(1 << PMBIDR_EL1_P_SHIFT)
>  	cbnz	x0, .Lskip_spe_el2_\@		// then permit sampling of physical
> -	mov	x0, #(1 << SYS_PMSCR_EL2_PCT_SHIFT | \
> -	      1 << SYS_PMSCR_EL2_PA_SHIFT)
> +	mov	x0, #(1 << PMSCR_EL2_PCT_SHIFT | \
> +	      1 << PMSCR_EL2_PA_SHIFT)
>  	msr_s	SYS_PMSCR_EL2, x0		// addresses and physical counter
>  .Lskip_spe_el2_\@:
>  	mov	x0, #(MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c4ce16333750..dbb0e8e22cf4 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -218,59 +218,59 @@
>  /*** Statistical Profiling Extension ***/
>  /* ID registers */
>  #define SYS_PMSIDR_EL1			sys_reg(3, 0, 9, 9, 7)
> -#define SYS_PMSIDR_EL1_FE_SHIFT		0
> -#define SYS_PMSIDR_EL1_FT_SHIFT		1
> -#define SYS_PMSIDR_EL1_FL_SHIFT		2
> -#define SYS_PMSIDR_EL1_ARCHINST_SHIFT	3
> -#define SYS_PMSIDR_EL1_LDS_SHIFT	4
> -#define SYS_PMSIDR_EL1_ERND_SHIFT	5
> -#define SYS_PMSIDR_EL1_INTERVAL_SHIFT	8
> -#define SYS_PMSIDR_EL1_INTERVAL_MASK	0xfUL
> -#define SYS_PMSIDR_EL1_MAXSIZE_SHIFT	12
> -#define SYS_PMSIDR_EL1_MAXSIZE_MASK	0xfUL
> -#define SYS_PMSIDR_EL1_COUNTSIZE_SHIFT	16
> -#define SYS_PMSIDR_EL1_COUNTSIZE_MASK	0xfUL
> +#define PMSIDR_EL1_FE_SHIFT		0
> +#define PMSIDR_EL1_FT_SHIFT		1
> +#define PMSIDR_EL1_FL_SHIFT		2
> +#define PMSIDR_EL1_ARCHINST_SHIFT	3
> +#define PMSIDR_EL1_LDS_SHIFT		4
> +#define PMSIDR_EL1_ERND_SHIFT		5
> +#define PMSIDR_EL1_INTERVAL_SHIFT	8
> +#define PMSIDR_EL1_INTERVAL_MASK	GENMASK_ULL(11, 8)
> +#define PMSIDR_EL1_MAXSIZE_SHIFT	12
> +#define PMSIDR_EL1_MAXSIZE_MASK		GENMASK_ULL(15, 12)
> +#define PMSIDR_EL1_COUNTSIZE_SHIFT	16
> +#define PMSIDR_EL1_COUNTSIZE_MASK	GENMASK_ULL(19, 16)
> 
>  #define SYS_PMBIDR_EL1			sys_reg(3, 0, 9, 10, 7)
> -#define SYS_PMBIDR_EL1_ALIGN_SHIFT	0
> -#define SYS_PMBIDR_EL1_ALIGN_MASK	0xfU
> -#define SYS_PMBIDR_EL1_P_SHIFT		4
> -#define SYS_PMBIDR_EL1_F_SHIFT		5
> +#define PMBIDR_EL1_ALIGN_SHIFT		0
> +#define PMBIDR_EL1_ALIGN_MASK		0xfU

Although functionally the same, s/0xfU/GENMASK_ULL(3, 0)/ might make it uniform
across the other changes.
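
For reference, the suggested GENMASK_ULL() forms expand to the same values as
the literals they would replace (per the definition in include/linux/bits.h),
e.g.:

	GENMASK_ULL(3, 0) == 0x000000000000000fULL	/* PMBIDR_EL1_ALIGN_MASK */
	GENMASK_ULL(5, 0) == 0x000000000000003fULL	/* the 0x3fUL masks further down */

so the substitution is purely cosmetic.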
> +#define PMBIDR_EL1_P_SHIFT		4
> +#define PMBIDR_EL1_F_SHIFT		5
> 
>  /* Sampling controls */
>  #define SYS_PMSCR_EL1			sys_reg(3, 0, 9, 9, 0)
> -#define SYS_PMSCR_EL1_E0SPE_SHIFT	0
> -#define SYS_PMSCR_EL1_E1SPE_SHIFT	1
> -#define SYS_PMSCR_EL1_CX_SHIFT		3
> -#define SYS_PMSCR_EL1_PA_SHIFT		4
> -#define SYS_PMSCR_EL1_TS_SHIFT		5
> -#define SYS_PMSCR_EL1_PCT_SHIFT		6
> +#define PMSCR_EL1_E0SPE_SHIFT		0
> +#define PMSCR_EL1_E1SPE_SHIFT		1
> +#define PMSCR_EL1_CX_SHIFT		3
> +#define PMSCR_EL1_PA_SHIFT		4
> +#define PMSCR_EL1_TS_SHIFT		5
> +#define PMSCR_EL1_PCT_SHIFT		6
> 
>  #define SYS_PMSCR_EL2			sys_reg(3, 4, 9, 9, 0)
> -#define SYS_PMSCR_EL2_E0HSPE_SHIFT	0
> -#define SYS_PMSCR_EL2_E2SPE_SHIFT	1
> -#define SYS_PMSCR_EL2_CX_SHIFT		3
> -#define SYS_PMSCR_EL2_PA_SHIFT		4
> -#define SYS_PMSCR_EL2_TS_SHIFT		5
> -#define SYS_PMSCR_EL2_PCT_SHIFT		6
> +#define PMSCR_EL2_E0HSPE_SHIFT		0
> +#define PMSCR_EL2_E2SPE_SHIFT		1
> +#define PMSCR_EL2_CX_SHIFT		3
> +#define PMSCR_EL2_PA_SHIFT		4
> +#define PMSCR_EL2_TS_SHIFT		5
> +#define PMSCR_EL2_PCT_SHIFT		6
> 
>  #define SYS_PMSICR_EL1			sys_reg(3, 0, 9, 9, 2)
> 
>  #define SYS_PMSIRR_EL1			sys_reg(3, 0, 9, 9, 3)
> -#define SYS_PMSIRR_EL1_RND_SHIFT	0
> -#define SYS_PMSIRR_EL1_INTERVAL_SHIFT	8
> -#define SYS_PMSIRR_EL1_INTERVAL_MASK	0xffffffUL
> +#define PMSIRR_EL1_RND_SHIFT		0
> +#define PMSIRR_EL1_INTERVAL_SHIFT	8
> +#define PMSIRR_EL1_INTERVAL_MASK	GENMASK_ULL(31, 8)
> 
>  /* Filtering controls */
>  #define SYS_PMSNEVFR_EL1		sys_reg(3, 0, 9, 9, 1)
> 
>  #define SYS_PMSFCR_EL1			sys_reg(3, 0, 9, 9, 4)
> -#define SYS_PMSFCR_EL1_FE_SHIFT		0
> -#define SYS_PMSFCR_EL1_FT_SHIFT		1
> -#define SYS_PMSFCR_EL1_FL_SHIFT		2
> -#define SYS_PMSFCR_EL1_B_SHIFT		16
> -#define SYS_PMSFCR_EL1_LD_SHIFT		17
> -#define SYS_PMSFCR_EL1_ST_SHIFT		18
> +#define PMSFCR_EL1_FE_SHIFT		0
> +#define PMSFCR_EL1_FT_SHIFT		1
> +#define PMSFCR_EL1_FL_SHIFT		2
> +#define PMSFCR_EL1_B_SHIFT		16
> +#define PMSFCR_EL1_LD_SHIFT		17
> +#define PMSFCR_EL1_ST_SHIFT		18
> 
>  #define SYS_PMSEVFR_EL1			sys_reg(3, 0, 9, 9, 5)
>  #define PMSEVFR_EL1_RES0_IMP \
> @@ -280,37 +280,37 @@
>  	(PMSEVFR_EL1_RES0_IMP & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11)))
> 
>  #define SYS_PMSLATFR_EL1		sys_reg(3, 0, 9, 9, 6)
> -#define SYS_PMSLATFR_EL1_MINLAT_SHIFT	0
> +#define PMSLATFR_EL1_MINLAT_SHIFT	0
> 
>  /* Buffer controls */
>  #define SYS_PMBLIMITR_EL1		sys_reg(3, 0, 9, 10, 0)
> -#define SYS_PMBLIMITR_EL1_E_SHIFT	0
> -#define SYS_PMBLIMITR_EL1_FM_SHIFT	1
> -#define SYS_PMBLIMITR_EL1_FM_MASK	0x3UL
> -#define SYS_PMBLIMITR_EL1_FM_STOP_IRQ	(0 << SYS_PMBLIMITR_EL1_FM_SHIFT)
> +#define PMBLIMITR_EL1_E_SHIFT		0
> +#define PMBLIMITR_EL1_FM_SHIFT		1
> +#define PMBLIMITR_EL1_FM_MASK		GENMASK_ULL(2, 1)
> +#define PMBLIMITR_EL1_FM_STOP_IRQ	0
> 
>  #define SYS_PMBPTR_EL1			sys_reg(3, 0, 9, 10, 1)
> 
>  /* Buffer error reporting */
>  #define SYS_PMBSR_EL1			sys_reg(3, 0, 9, 10, 3)
> -#define SYS_PMBSR_EL1_COLL_SHIFT	16
> -#define SYS_PMBSR_EL1_S_SHIFT		17
> -#define SYS_PMBSR_EL1_EA_SHIFT		18
> -#define SYS_PMBSR_EL1_DL_SHIFT		19
> -#define SYS_PMBSR_EL1_EC_SHIFT		26
> -#define SYS_PMBSR_EL1_EC_MASK		0x3fUL
> +#define PMBSR_EL1_COLL_SHIFT		16
> +#define PMBSR_EL1_S_SHIFT		17
> +#define PMBSR_EL1_EA_SHIFT		18
> +#define PMBSR_EL1_DL_SHIFT		19
> +#define PMBSR_EL1_EC_SHIFT		26
> +#define PMBSR_EL1_EC_MASK		GENMASK_ULL(31, 26)
> 
> -#define SYS_PMBSR_EL1_EC_BUF		(0x0UL << SYS_PMBSR_EL1_EC_SHIFT)
> -#define SYS_PMBSR_EL1_EC_FAULT_S1	(0x24UL << SYS_PMBSR_EL1_EC_SHIFT)
> -#define SYS_PMBSR_EL1_EC_FAULT_S2	(0x25UL << SYS_PMBSR_EL1_EC_SHIFT)
> +#define PMBSR_EL1_EC_BUF		0x0UL
> +#define PMBSR_EL1_EC_FAULT_S1		0x24UL
> +#define PMBSR_EL1_EC_FAULT_S2		0x25UL
> 
> -#define SYS_PMBSR_EL1_FAULT_FSC_SHIFT	0
> -#define SYS_PMBSR_EL1_FAULT_FSC_MASK	0x3fUL
> +#define PMBSR_EL1_FAULT_FSC_SHIFT	0
> +#define PMBSR_EL1_FAULT_FSC_MASK	0x3fUL
> 
> -#define SYS_PMBSR_EL1_BUF_BSC_SHIFT	0
> -#define SYS_PMBSR_EL1_BUF_BSC_MASK	0x3fUL
> +#define PMBSR_EL1_BUF_BSC_SHIFT		0
> +#define PMBSR_EL1_BUF_BSC_MASK		0x3fUL

Although functionally the same, s/0x3fUL/GENMASK_ULL(5, 0)/ might make it uniform
across the other changes.

> 
> -#define SYS_PMBSR_EL1_BUF_BSC_FULL	(0x1UL << SYS_PMBSR_EL1_BUF_BSC_SHIFT)
> +#define PMBSR_EL1_BUF_BSC_FULL		0x1UL
> 
>  /*** End of Statistical Profiling Extension ***/
> 
> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> index fccf9ec01813..55f80fb93925 100644
> --- a/arch/arm64/kvm/debug.c
> +++ b/arch/arm64/kvm/debug.c
> @@ -328,7 +328,7 @@ void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu)
>  	 * we may need to check if the host state needs to be saved.
>  	 */
>  	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) &&
> -	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(SYS_PMBIDR_EL1_P_SHIFT)))
> +	    !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(PMBIDR_EL1_P_SHIFT)))
>  		vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE);
> 
>  	/* Check if we have TRBE implemented and available at the host */
> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> index e17455773b98..2673bde62fad 100644
> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> @@ -27,7 +27,7 @@ static void __debug_save_spe(u64 *pmscr_el1)
>  	 * Check if the host is actually using it ?
>  	 */
>  	reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
> -	if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
> +	if (!(reg & BIT(PMBLIMITR_EL1_E_SHIFT)))
>  		return;
> 
>  	/* Yes; save the control register and disable data generation */
> diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
> index 65cf93dcc8ee..814ed18346b6 100644
> --- a/drivers/perf/arm_spe_pmu.c
> +++ b/drivers/perf/arm_spe_pmu.c
> @@ -12,6 +12,7 @@
>  #define DRVNAME		PMUNAME "_pmu"
>  #define pr_fmt(fmt)	DRVNAME ": " fmt
> 
> +#include <linux/bitfield.h>
>  #include
>  #include
>  #include
> @@ -282,18 +283,18 @@ static u64 arm_spe_event_to_pmscr(struct perf_event *event)
>  	struct perf_event_attr *attr = &event->attr;
>  	u64 reg = 0;
> 
> -	reg |= ATTR_CFG_GET_FLD(attr, ts_enable) << SYS_PMSCR_EL1_TS_SHIFT;
> -	reg |= ATTR_CFG_GET_FLD(attr, pa_enable) << SYS_PMSCR_EL1_PA_SHIFT;
> -	reg |= ATTR_CFG_GET_FLD(attr, pct_enable) << SYS_PMSCR_EL1_PCT_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, ts_enable) << PMSCR_EL1_TS_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, pa_enable) << PMSCR_EL1_PA_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, pct_enable) << PMSCR_EL1_PCT_SHIFT;
> 
>  	if (!attr->exclude_user)
> -		reg |= BIT(SYS_PMSCR_EL1_E0SPE_SHIFT);
> +		reg |= BIT(PMSCR_EL1_E0SPE_SHIFT);
> 
>  	if (!attr->exclude_kernel)
> -		reg |= BIT(SYS_PMSCR_EL1_E1SPE_SHIFT);
> +		reg |= BIT(PMSCR_EL1_E1SPE_SHIFT);
> 
>  	if (get_spe_event_has_cx(event))
> -		reg |= BIT(SYS_PMSCR_EL1_CX_SHIFT);
> +		reg |= BIT(PMSCR_EL1_CX_SHIFT);
> 
>  	return reg;
>  }
> @@ -302,8 +303,7 @@ static void arm_spe_event_sanitise_period(struct perf_event *event)
>  {
>  	struct arm_spe_pmu *spe_pmu = to_spe_pmu(event->pmu);
>  	u64 period = event->hw.sample_period;
> -	u64 max_period = SYS_PMSIRR_EL1_INTERVAL_MASK
> -			 << SYS_PMSIRR_EL1_INTERVAL_SHIFT;
> +	u64 max_period = PMSIRR_EL1_INTERVAL_MASK;
> 
>  	if (period < spe_pmu->min_period)
>  		period = spe_pmu->min_period;
> @@ -322,7 +322,7 @@ static u64 arm_spe_event_to_pmsirr(struct perf_event *event)
> 
>  	arm_spe_event_sanitise_period(event);
> 
> -	reg |= ATTR_CFG_GET_FLD(attr, jitter) << SYS_PMSIRR_EL1_RND_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, jitter) << PMSIRR_EL1_RND_SHIFT;
>  	reg |= event->hw.sample_period;
> 
>  	return reg;
>  }
> @@ -333,18 +333,18 @@ static u64 arm_spe_event_to_pmsfcr(struct perf_event *event)
>  	struct perf_event_attr *attr = &event->attr;
>  	u64 reg = 0;
> 
> -	reg |= ATTR_CFG_GET_FLD(attr, load_filter) << SYS_PMSFCR_EL1_LD_SHIFT;
> -	reg |= ATTR_CFG_GET_FLD(attr, store_filter) << SYS_PMSFCR_EL1_ST_SHIFT;
> -	reg |= ATTR_CFG_GET_FLD(attr, branch_filter) << SYS_PMSFCR_EL1_B_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, load_filter) << PMSFCR_EL1_LD_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, store_filter) << PMSFCR_EL1_ST_SHIFT;
> +	reg |= ATTR_CFG_GET_FLD(attr, branch_filter) << PMSFCR_EL1_B_SHIFT;
> 
>  	if (reg)
> -		reg |= BIT(SYS_PMSFCR_EL1_FT_SHIFT);
> +		reg |= BIT(PMSFCR_EL1_FT_SHIFT);
> 
>  	if (ATTR_CFG_GET_FLD(attr, event_filter))
> -		reg |= BIT(SYS_PMSFCR_EL1_FE_SHIFT);
> +		reg |= BIT(PMSFCR_EL1_FE_SHIFT);
> 
>  	if (ATTR_CFG_GET_FLD(attr, min_latency))
> -		reg |= BIT(SYS_PMSFCR_EL1_FL_SHIFT);
> +		reg |= BIT(PMSFCR_EL1_FL_SHIFT);
> 
>  	return reg;
>  }
> @@ -359,7 +359,7 @@ static u64 arm_spe_event_to_pmslatfr(struct perf_event *event)
>  {
>  	struct perf_event_attr *attr = &event->attr;
>  	return ATTR_CFG_GET_FLD(attr, min_latency)
> -	       << SYS_PMSLATFR_EL1_MINLAT_SHIFT;
> +	       << PMSLATFR_EL1_MINLAT_SHIFT;
>  }
> 
>  static void arm_spe_pmu_pad_buf(struct perf_output_handle *handle, int len)
> @@ -511,7 +511,7 @@ static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle,
>  	limit = buf->snapshot ? arm_spe_pmu_next_snapshot_off(handle)
>  			      : arm_spe_pmu_next_off(handle);
>  	if (limit)
> -		limit |= BIT(SYS_PMBLIMITR_EL1_E_SHIFT);
> +		limit |= BIT(PMBLIMITR_EL1_E_SHIFT);
> 
>  	limit += (u64)buf->base;
>  	base = (u64)buf->base + PERF_IDX2OFF(handle->head, buf);
> @@ -570,28 +570,28 @@ arm_spe_pmu_buf_get_fault_act(struct perf_output_handle *handle)
> 
>  	/* Service required? */
>  	pmbsr = read_sysreg_s(SYS_PMBSR_EL1);
> -	if (!(pmbsr & BIT(SYS_PMBSR_EL1_S_SHIFT)))
> +	if (!(pmbsr & BIT(PMBSR_EL1_S_SHIFT)))
>  		return SPE_PMU_BUF_FAULT_ACT_SPURIOUS;
> 
>  	/*
>  	 * If we've lost data, disable profiling and also set the PARTIAL
>  	 * flag to indicate that the last record is corrupted.
>  	 */
> -	if (pmbsr & BIT(SYS_PMBSR_EL1_DL_SHIFT))
> +	if (pmbsr & BIT(PMBSR_EL1_DL_SHIFT))
>  		perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED |
>  					     PERF_AUX_FLAG_PARTIAL);
> 
>  	/* Report collisions to userspace so that it can up the period */
> -	if (pmbsr & BIT(SYS_PMBSR_EL1_COLL_SHIFT))
> +	if (pmbsr & BIT(PMBSR_EL1_COLL_SHIFT))
>  		perf_aux_output_flag(handle, PERF_AUX_FLAG_COLLISION);
> 
>  	/* We only expect buffer management events */
> -	switch (pmbsr & (SYS_PMBSR_EL1_EC_MASK << SYS_PMBSR_EL1_EC_SHIFT)) {
> -	case SYS_PMBSR_EL1_EC_BUF:
> +	switch (FIELD_GET(PMBSR_EL1_EC_MASK, pmbsr)) {
> +	case PMBSR_EL1_EC_BUF:
>  		/* Handled below */
>  		break;
> -	case SYS_PMBSR_EL1_EC_FAULT_S1:
> -	case SYS_PMBSR_EL1_EC_FAULT_S2:
> +	case PMBSR_EL1_EC_FAULT_S1:
> +	case PMBSR_EL1_EC_FAULT_S2:
>  		err_str = "Unexpected buffer fault";
>  		goto out_err;
>  	default:
> @@ -600,9 +600,8 @@ arm_spe_pmu_buf_get_fault_act(struct perf_output_handle *handle)
>  	}
> 
>  	/* Buffer management event */
> -	switch (pmbsr &
> -		(SYS_PMBSR_EL1_BUF_BSC_MASK << SYS_PMBSR_EL1_BUF_BSC_SHIFT)) {
> -	case SYS_PMBSR_EL1_BUF_BSC_FULL:
> +	switch (FIELD_GET(PMBSR_EL1_BUF_BSC_MASK, pmbsr)) {
> +	case PMBSR_EL1_BUF_BSC_FULL:
>  		ret = SPE_PMU_BUF_FAULT_ACT_OK;
>  		goto out_stop;
>  	default:
> @@ -717,23 +716,23 @@ static int arm_spe_pmu_event_init(struct perf_event *event)
>  		return -EINVAL;
> 
>  	reg = arm_spe_event_to_pmsfcr(event);
> -	if ((reg & BIT(SYS_PMSFCR_EL1_FE_SHIFT)) &&
> +	if ((reg & BIT(PMSFCR_EL1_FE_SHIFT)) &&
>  	    !(spe_pmu->features & SPE_PMU_FEAT_FILT_EVT))
>  		return -EOPNOTSUPP;
> 
> -	if ((reg & BIT(SYS_PMSFCR_EL1_FT_SHIFT)) &&
> +	if ((reg & BIT(PMSFCR_EL1_FT_SHIFT)) &&
>  	    !(spe_pmu->features & SPE_PMU_FEAT_FILT_TYP))
>  		return -EOPNOTSUPP;
> 
> -	if ((reg & BIT(SYS_PMSFCR_EL1_FL_SHIFT)) &&
> +	if ((reg & BIT(PMSFCR_EL1_FL_SHIFT)) &&
>  	    !(spe_pmu->features & SPE_PMU_FEAT_FILT_LAT))
>  		return -EOPNOTSUPP;
> 
>  	set_spe_event_has_cx(event);
>  	reg = arm_spe_event_to_pmscr(event);
>  	if (!perfmon_capable() &&
> -	    (reg & (BIT(SYS_PMSCR_EL1_PA_SHIFT) |
> -		    BIT(SYS_PMSCR_EL1_PCT_SHIFT))))
> +	    (reg & (BIT(PMSCR_EL1_PA_SHIFT) |
> +		    BIT(PMSCR_EL1_PCT_SHIFT))))
>  		return -EACCES;
> 
>  	return 0;
> @@ -971,14 +970,14 @@ static void __arm_spe_pmu_dev_probe(void *info)
> 
>  	/* Read PMBIDR first to determine whether or not we have access */
>  	reg = read_sysreg_s(SYS_PMBIDR_EL1);
> -	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT)) {
> +	if (reg & BIT(PMBIDR_EL1_P_SHIFT)) {
>  		dev_err(dev,
>  			"profiling buffer owned by higher exception level\n");
>  		return;
>  	}
> 
>  	/* Minimum alignment. If it's out-of-range, then fail the probe */
> -	fld = reg >> SYS_PMBIDR_EL1_ALIGN_SHIFT & SYS_PMBIDR_EL1_ALIGN_MASK;
> +	fld = (reg & PMBIDR_EL1_ALIGN_MASK) >> PMBIDR_EL1_ALIGN_SHIFT;
>  	spe_pmu->align = 1 << fld;
>  	if (spe_pmu->align > SZ_2K) {
>  		dev_err(dev, "unsupported PMBIDR.Align [%d] on CPU %d\n",
> @@ -988,26 +987,26 @@ static void __arm_spe_pmu_dev_probe(void *info)
> 
>  	/* It's now safe to read PMSIDR and figure out what we've got */
>  	reg = read_sysreg_s(SYS_PMSIDR_EL1);
> -	if (reg & BIT(SYS_PMSIDR_EL1_FE_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_FE_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_FILT_EVT;
> 
> -	if (reg & BIT(SYS_PMSIDR_EL1_FT_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_FT_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_FILT_TYP;
> 
> -	if (reg & BIT(SYS_PMSIDR_EL1_FL_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_FL_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_FILT_LAT;
> 
> -	if (reg & BIT(SYS_PMSIDR_EL1_ARCHINST_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_ARCHINST_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_ARCH_INST;
> 
> -	if (reg & BIT(SYS_PMSIDR_EL1_LDS_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_LDS_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_LDS;
> 
> -	if (reg & BIT(SYS_PMSIDR_EL1_ERND_SHIFT))
> +	if (reg & BIT(PMSIDR_EL1_ERND_SHIFT))
>  		spe_pmu->features |= SPE_PMU_FEAT_ERND;
> 
>  	/* This field has a spaced out encoding, so just use a look-up */
> -	fld = reg >> SYS_PMSIDR_EL1_INTERVAL_SHIFT & SYS_PMSIDR_EL1_INTERVAL_MASK;
> +	fld = (reg & PMSIDR_EL1_INTERVAL_MASK) >> PMSIDR_EL1_INTERVAL_SHIFT;
>  	switch (fld) {
>  	case 0:
>  		spe_pmu->min_period = 256;
> @@ -1039,7 +1038,7 @@ static void __arm_spe_pmu_dev_probe(void *info)
>  	}
> 
>  	/* Maximum record size. If it's out-of-range, then fail the probe */
> -	fld = reg >> SYS_PMSIDR_EL1_MAXSIZE_SHIFT & SYS_PMSIDR_EL1_MAXSIZE_MASK;
> +	fld = (reg & PMSIDR_EL1_MAXSIZE_MASK) >> PMSIDR_EL1_MAXSIZE_SHIFT;
>  	spe_pmu->max_record_sz = 1 << fld;
>  	if (spe_pmu->max_record_sz > SZ_2K || spe_pmu->max_record_sz < 16) {
>  		dev_err(dev, "unsupported PMSIDR_EL1.MaxSize [%d] on CPU %d\n",
> @@ -1047,7 +1046,7 @@ static void __arm_spe_pmu_dev_probe(void *info)
>  		return;
>  	}
> 
> -	fld = reg >> SYS_PMSIDR_EL1_COUNTSIZE_SHIFT & SYS_PMSIDR_EL1_COUNTSIZE_MASK;
> +	fld = (reg & PMSIDR_EL1_COUNTSIZE_MASK) >> PMSIDR_EL1_COUNTSIZE_SHIFT;
>  	switch (fld) {
>  	default:
>  		dev_warn(dev, "unknown PMSIDR_EL1.CountSize [%d]; assuming 2\n",
> 

With or without those changes,

Reviewed-by: Anshuman Khandual