2023-04-12 17:15:19

by Mark Brown

Subject: [PATCH v2 0/3] arm64/cpufeature: Use macros for ID based matches

As was recently done for hwcaps, convert all the cpufeatures that match
on ID registers to use helper macros to initialise all the data fields
that the matching code uses. The feature table is much less of an eye
chart than the hwcap tables were, so the benefits are less substantial,
but the result is still less verbose and error prone and therefore
still seems like a win.
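
To give a feel for the change, the Enhanced Counter Virtualization
entry (taken from the last patch in the series) goes from the open
coded form:

    {
        .desc = "Enhanced Counter Virtualization",
        .capability = ARM64_HAS_ECV,
        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
        .matches = has_cpuid_feature,
        .sys_reg = SYS_ID_AA64MMFR0_EL1,
        .field_pos = ID_AA64MMFR0_EL1_ECV_SHIFT,
        .field_width = 4,
        .sign = FTR_UNSIGNED,
        .min_field_value = ID_AA64MMFR0_EL1_ECV_IMP,
    },

to:

    {
        .desc = "Enhanced Counter Virtualization",
        .capability = ARM64_HAS_ECV,
        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
        .matches = has_cpuid_feature,
        ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, IMP)
    },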

Signed-off-by: Mark Brown <[email protected]>
---
Changes in v2:
- Re-add missed .matches in WFxT.
- Rebase onto v6.3-rc3.
- Link to v1: https://lore.kernel.org/r/[email protected]

---
Mark Brown (3):
arm64/cpufeature: Pull out helper for CPUID register definitions
arm64/cpufeature: Consistently use symbolic constants for min_field_value
arm64/cpufeature: Use helper macro to specify ID register for capabilities

arch/arm64/kernel/cpufeature.c | 271 +++++++++--------------------------------
1 file changed, 59 insertions(+), 212 deletions(-)
---
base-commit: e8d018dd0257f744ca50a729e3d042cf2ec9da65
change-id: 20230303-arm64-cpufeature-helpers-a70213a244e7

Best regards,
--
Mark Brown <[email protected]>


2023-04-12 17:15:24

by Mark Brown

Subject: [PATCH v2 1/3] arm64/cpufeature: Pull out helper for CPUID register definitions

We use the same structure to match hwcaps and CPU features, so we can
use the same helper to generate the fields required. Pull the portion
of the current hwcap helper that initialises those fields out into a
separate define, placed earlier in the file, so that we can also use it
for cpufeatures.
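
As an illustration of the resulting expansion, after this change

    HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, APA, PAuth)

still expands to

    .matches = has_user_cpuid_feature,
    .sys_reg = SYS_ID_AA64ISAR1_EL1,
    .field_pos = ID_AA64ISAR1_EL1_APA_SHIFT,
    .field_width = ID_AA64ISAR1_EL1_APA_WIDTH,
    .sign = ID_AA64ISAR1_EL1_APA_SIGNED,
    .min_field_value = ID_AA64ISAR1_EL1_APA_PAuth,

with everything other than .matches now supplied by ARM64_CPUID_FIELDS().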

No functional change.

Signed-off-by: Mark Brown <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2e3e55139777..77862b7c8908 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -140,6 +140,13 @@ void dump_cpu_features(void)
pr_emerg("0x%*pb\n", ARM64_NCAPS, &cpu_hwcaps);
}

+#define ARM64_CPUID_FIELDS(reg, field, min_value) \
+ .sys_reg = SYS_##reg, \
+ .field_pos = reg##_##field##_SHIFT, \
+ .field_width = reg##_##field##_WIDTH, \
+ .sign = reg##_##field##_SIGNED, \
+ .min_field_value = reg##_##field##_##min_value,
+
#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
{ \
.sign = SIGNED, \
@@ -2776,12 +2783,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
};

#define HWCAP_CPUID_MATCH(reg, field, min_value) \
- .matches = has_user_cpuid_feature, \
- .sys_reg = SYS_##reg, \
- .field_pos = reg##_##field##_SHIFT, \
- .field_width = reg##_##field##_WIDTH, \
- .sign = reg##_##field##_SIGNED, \
- .min_field_value = reg##_##field##_##min_value,
+ .matches = has_user_cpuid_feature, \
+ ARM64_CPUID_FIELDS(reg, field, min_value)

#define __HWCAP_CAP(name, cap_type, cap) \
.desc = name, \
@@ -2811,26 +2814,26 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
#ifdef CONFIG_ARM64_PTR_AUTH
static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, APA, PAuth)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, APA, PAuth)
},
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR2_EL1, APA3, PAuth)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, APA3, PAuth)
},
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, API, PAuth)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, API, PAuth)
},
{},
};

static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, GPA, IMP)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPA, IMP)
},
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR2_EL1, GPA3, IMP)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, GPA3, IMP)
},
{
- HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, GPI, IMP)
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPI, IMP)
},
{},
};

--
2.30.2

2023-04-12 17:15:46

by Mark Brown

Subject: [PATCH v2 3/3] arm64/cpufeature: Use helper macro to specify ID register for capabilities

When defining which value to look for in a system register field we
currently specify the register, the field shift, width and sign, and
the value to look for, all by hand. This opens the potential for errors
such as specifying the wrong field width or sign, using an enumeration
value for a different, similarly named field, or letting something be
initialised to 0.

Since we now generate defines for all the ID registers we have named
constants for all of these things, derived from the system register
description, meaning that we can generate the initialisation for all
the fields used in matching from a minimal specification of register,
field and match value. This is both shorter and either eliminates
several potential errors or turns them into build failures.
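
As a concrete example of the latter, a typo in a field or value name
now fails to build instead of silently matching the wrong thing (IMPL
below is just an invented misspelling):

    ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, GIC, IMP)

expands, among other fields, to

    .min_field_value = ID_AA64PFR0_EL1_GIC_IMP,

while ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, GIC, IMPL) would reference
the nonexistent ID_AA64PFR0_EL1_GIC_IMPL and fail to compile.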

No change in the generated binary.

Signed-off-by: Mark Brown <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 244 ++++++++---------------------------------
1 file changed, 44 insertions(+), 200 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1002ac437e8b..f7bba88bf75f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2213,22 +2213,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_GIC_CPUIF_SYSREGS,
.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
.matches = has_useable_gicv3_cpuif,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .field_pos = ID_AA64PFR0_EL1_GIC_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64PFR0_EL1_GIC_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, GIC, IMP)
},
{
.desc = "Enhanced Counter Virtualization",
.capability = ARM64_HAS_ECV,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64MMFR0_EL1,
- .field_pos = ID_AA64MMFR0_EL1_ECV_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64MMFR0_EL1_ECV_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, IMP)
},
#ifdef CONFIG_ARM64_PAN
{
@@ -2236,12 +2228,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_PAN,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64MMFR1_EL1,
- .field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64MMFR1_EL1_PAN_IMP,
.cpu_enable = cpu_enable_pan,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, IMP)
},
#endif /* CONFIG_ARM64_PAN */
#ifdef CONFIG_ARM64_EPAN
@@ -2250,11 +2238,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_EPAN,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64MMFR1_EL1,
- .field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64MMFR1_EL1_PAN_PAN3,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)
},
#endif /* CONFIG_ARM64_EPAN */
#ifdef CONFIG_ARM64_LSE_ATOMICS
@@ -2263,11 +2247,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_LSE_ATOMICS,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR0_EL1,
- .field_pos = ID_AA64ISAR0_EL1_ATOMIC_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64ISAR0_EL1_ATOMIC_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, ATOMIC, IMP)
},
#endif /* CONFIG_ARM64_LSE_ATOMICS */
{
@@ -2288,21 +2268,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_NESTED_VIRT,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_nested_virt_support,
- .sys_reg = SYS_ID_AA64MMFR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR2_EL1_NV_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR2_EL1_NV_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, IMP)
},
{
.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_32bit_el0,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_EL0_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_ELx_32BIT_64BIT,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, EL0, AARCH32)
},
#ifdef CONFIG_KVM
{
@@ -2310,11 +2282,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_32BIT_EL1,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_EL1_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_ELx_32BIT_64BIT,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, EL1, AARCH32)
},
{
.desc = "Protected KVM",
@@ -2327,17 +2295,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
.type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE,
+ .cpu_enable = kpti_install_ng_mappings,
+ .matches = unmap_kernel_at_el0,
/*
* The ID feature fields below are used to indicate that
* the CPU doesn't need KPTI. See unmap_kernel_at_el0 for
* more details.
*/
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .field_pos = ID_AA64PFR0_EL1_CSV3_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_CSV3_IMP,
- .matches = unmap_kernel_at_el0,
- .cpu_enable = kpti_install_ng_mappings,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, CSV3, IMP)
},
{
/* FP/SIMD is not implemented */
@@ -2352,21 +2317,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_DCPOP,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .field_pos = ID_AA64ISAR1_EL1_DPB_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_DPB_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, DPB, IMP)
},
{
.desc = "Data cache clean to Point of Deep Persistence",
.capability = ARM64_HAS_DCPODP,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_DPB_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_DPB_DPB2,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, DPB, DPB2)
},
#endif
#ifdef CONFIG_ARM64_SVE
@@ -2374,13 +2332,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Scalable Vector Extension",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SVE,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_SVE_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_SVE_IMP,
- .matches = has_cpuid_feature,
.cpu_enable = sve_kernel_enable,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, SVE, IMP)
},
#endif /* CONFIG_ARM64_SVE */
#ifdef CONFIG_ARM64_RAS_EXTN
@@ -2389,12 +2343,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_RAS_EXTN,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_RAS_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_RAS_IMP,
.cpu_enable = cpu_clear_disr,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP)
},
#endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_AMU_EXTN
@@ -2408,12 +2358,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_AMU_EXTN,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
.matches = has_amu,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_AMU_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_AMU_IMP,
.cpu_enable = cpu_amu_enable,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, AMU, IMP)
},
#endif /* CONFIG_ARM64_AMU_EXTN */
{
@@ -2433,34 +2379,22 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Stage-2 Force Write-Back",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_HAS_STAGE2_FWB,
- .sys_reg = SYS_ID_AA64MMFR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR2_EL1_FWB_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR2_EL1_FWB_IMP,
.matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, FWB, IMP)
},
{
.desc = "ARMv8.4 Translation Table Level",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_HAS_ARMv8_4_TTL,
- .sys_reg = SYS_ID_AA64MMFR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR2_EL1_TTL_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR2_EL1_TTL_IMP,
.matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, TTL, IMP)
},
{
.desc = "TLB range maintenance instructions",
.capability = ARM64_HAS_TLB_RANGE,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR0_EL1,
- .field_pos = ID_AA64ISAR0_EL1_TLB_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64ISAR0_EL1_TLB_RANGE,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, TLB, RANGE)
},
#ifdef CONFIG_ARM64_HW_AFDBM
{
@@ -2474,13 +2408,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
*/
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
.capability = ARM64_HW_DBM,
- .sys_reg = SYS_ID_AA64MMFR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR1_EL1_HAFDBS_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR1_EL1_HAFDBS_DBM,
.matches = has_hw_dbm,
.cpu_enable = cpu_enable_hw_dbm,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, HAFDBS, DBM)
},
#endif
{
@@ -2488,21 +2418,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_CRC32,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR0_EL1,
- .field_pos = ID_AA64ISAR0_EL1_CRC32_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR0_EL1_CRC32_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, CRC32, IMP)
},
{
.desc = "Speculative Store Bypassing Safe (SSBS)",
.capability = ARM64_SSBS,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .field_pos = ID_AA64PFR1_EL1_SSBS_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64PFR1_EL1_SSBS_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SSBS, IMP)
},
#ifdef CONFIG_ARM64_CNP
{
@@ -2510,12 +2433,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_CNP,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_useable_cnp,
- .sys_reg = SYS_ID_AA64MMFR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR2_EL1_CnP_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR2_EL1_CnP_IMP,
.cpu_enable = cpu_enable_cnp,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, CnP, IMP)
},
#endif
{
@@ -2523,45 +2442,29 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_SB,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .field_pos = ID_AA64ISAR1_EL1_SB_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64ISAR1_EL1_SB_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, SB, IMP)
},
#ifdef CONFIG_ARM64_PTR_AUTH
{
.desc = "Address authentication (architected QARMA5 algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_APA_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_APA_PAuth,
.matches = has_address_auth_cpucap,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, APA, PAuth)
},
{
.desc = "Address authentication (architected QARMA3 algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR2_EL1_APA3_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR2_EL1_APA3_PAuth,
.matches = has_address_auth_cpucap,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, APA3, PAuth)
},
{
.desc = "Address authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_API_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_API_PAuth,
.matches = has_address_auth_cpucap,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, API, PAuth)
},
{
.capability = ARM64_HAS_ADDRESS_AUTH,
@@ -2572,34 +2475,22 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Generic authentication (architected QARMA5 algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_GPA_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_GPA_IMP,
.matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPA, IMP)
},
{
.desc = "Generic authentication (architected QARMA3 algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA3,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR2_EL1_GPA3_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR2_EL1_GPA3_IMP,
.matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, GPA3, IMP)
},
{
.desc = "Generic authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_IMP_DEF,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_GPI_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64ISAR1_EL1_GPI_IMP,
.matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPI, IMP)
},
{
.capability = ARM64_HAS_GENERIC_AUTH,
@@ -2631,13 +2522,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "E0PD",
.capability = ARM64_HAS_E0PD,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64MMFR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_width = 4,
- .field_pos = ID_AA64MMFR2_EL1_E0PD_SHIFT,
- .matches = has_cpuid_feature,
- .min_field_value = ID_AA64MMFR2_EL1_E0PD_IMP,
.cpu_enable = cpu_enable_e0pd,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, E0PD, IMP)
},
#endif
{
@@ -2645,11 +2532,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_HAS_RNG,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64ISAR0_EL1,
- .field_pos = ID_AA64ISAR0_EL1_RNDR_SHIFT,
- .field_width = 4,
- .sign = FTR_UNSIGNED,
- .min_field_value = ID_AA64ISAR0_EL1_RNDR_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, RNDR, IMP)
},
#ifdef CONFIG_ARM64_BTI
{
@@ -2662,10 +2545,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
#endif
.matches = has_cpuid_feature,
.cpu_enable = bti_enable,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .field_pos = ID_AA64PFR1_EL1_BT_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR1_EL1_BT_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, BT, IMP)
.sign = FTR_UNSIGNED,
},
#endif
@@ -2675,109 +2555,73 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.capability = ARM64_MTE,
.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .field_pos = ID_AA64PFR1_EL1_MTE_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR1_EL1_MTE_MTE2,
- .sign = FTR_UNSIGNED,
.cpu_enable = cpu_enable_mte,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE2)
},
{
.desc = "Asymmetric MTE Tag Check Fault",
.capability = ARM64_MTE_ASYMM,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.matches = has_cpuid_feature,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .field_pos = ID_AA64PFR1_EL1_MTE_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR1_EL1_MTE_MTE3,
- .sign = FTR_UNSIGNED,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE3)
},
#endif /* CONFIG_ARM64_MTE */
{
.desc = "RCpc load-acquire (LDAPR)",
.capability = ARM64_HAS_LDAPR,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR1_EL1_LRCPC_SHIFT,
- .field_width = 4,
.matches = has_cpuid_feature,
- .min_field_value = ID_AA64ISAR1_EL1_LRCPC_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LRCPC, IMP)
},
#ifdef CONFIG_ARM64_SME
{
.desc = "Scalable Matrix Extension",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR1_EL1_SME_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR1_EL1_SME_IMP,
.matches = has_cpuid_feature,
.cpu_enable = sme_kernel_enable,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, IMP)
},
/* FA64 should be sorted after the base SME capability */
{
.desc = "FA64",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME_FA64,
- .sys_reg = SYS_ID_AA64SMFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64SMFR0_EL1_FA64_SHIFT,
- .field_width = 1,
- .min_field_value = ID_AA64SMFR0_EL1_FA64_IMP,
.matches = has_cpuid_feature,
.cpu_enable = fa64_kernel_enable,
+ ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP)
},
{
.desc = "SME2",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME2,
- .sys_reg = SYS_ID_AA64PFR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR1_EL1_SME_SHIFT,
- .field_width = ID_AA64PFR1_EL1_SME_WIDTH,
- .min_field_value = ID_AA64PFR1_EL1_SME_SME2,
.matches = has_cpuid_feature,
.cpu_enable = sme2_kernel_enable,
+ ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2)
},
#endif /* CONFIG_ARM64_SME */
{
.desc = "WFx with timeout",
.capability = ARM64_HAS_WFXT,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64ISAR2_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64ISAR2_EL1_WFxT_SHIFT,
- .field_width = 4,
.matches = has_cpuid_feature,
- .min_field_value = ID_AA64ISAR2_EL1_WFxT_IMP,
+ ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, WFxT, IMP)
},
{
.desc = "Trap EL0 IMPLEMENTATION DEFINED functionality",
.capability = ARM64_HAS_TIDCP1,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64MMFR1_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64MMFR1_EL1_TIDCP1_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64MMFR1_EL1_TIDCP1_IMP,
.matches = has_cpuid_feature,
.cpu_enable = cpu_trap_el0_impdef,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, TIDCP1, IMP)
},
{
.desc = "Data independent timing control (DIT)",
.capability = ARM64_HAS_DIT,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
- .sys_reg = SYS_ID_AA64PFR0_EL1,
- .sign = FTR_UNSIGNED,
- .field_pos = ID_AA64PFR0_EL1_DIT_SHIFT,
- .field_width = 4,
- .min_field_value = ID_AA64PFR0_EL1_DIT_IMP,
.matches = has_cpuid_feature,
.cpu_enable = cpu_enable_dit,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, DIT, IMP)
},
{},
};

--
2.30.2

2023-04-12 17:15:54

by Mark Brown

Subject: [PATCH v2 2/3] arm64/cpufeature: Consistently use symbolic constants for min_field_value

A number of the cpufeatures specify raw numbers rather than symbolic
constants for their minimum field values. In preparation for the use of
helper macros, replace all of these with the appropriate constants.
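
For context, the ARM64_CPUID_FIELDS() helper added in the previous
patch constructs the name of the value to match by token pasting, so
the value has to be given as the suffix of one of the generated
constants rather than as a bare number. For example

    ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)

will generate

    .min_field_value = ID_AA64MMFR1_EL1_PAN_PAN3,

and there would be no way to pass the previous bare "3" through that
expansion.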

No change in the generated binary.

Signed-off-by: Mark Brown <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 77862b7c8908..1002ac437e8b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2217,7 +2217,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64PFR0_EL1_GIC_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 1,
+ .min_field_value = ID_AA64PFR0_EL1_GIC_IMP,
},
{
.desc = "Enhanced Counter Virtualization",
@@ -2228,7 +2228,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64MMFR0_EL1_ECV_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR0_EL1_ECV_IMP,
},
#ifdef CONFIG_ARM64_PAN
{
@@ -2240,7 +2240,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR1_EL1_PAN_IMP,
.cpu_enable = cpu_enable_pan,
},
#endif /* CONFIG_ARM64_PAN */
@@ -2254,7 +2254,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64MMFR1_EL1_PAN_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 3,
+ .min_field_value = ID_AA64MMFR1_EL1_PAN_PAN3,
},
#endif /* CONFIG_ARM64_EPAN */
#ifdef CONFIG_ARM64_LSE_ATOMICS
@@ -2267,7 +2267,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR0_EL1_ATOMIC_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 2,
+ .min_field_value = ID_AA64ISAR0_EL1_ATOMIC_IMP,
},
#endif /* CONFIG_ARM64_LSE_ATOMICS */
{
@@ -2335,7 +2335,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.field_pos = ID_AA64PFR0_EL1_CSV3_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64PFR0_EL1_CSV3_IMP,
.matches = unmap_kernel_at_el0,
.cpu_enable = kpti_install_ng_mappings,
},
@@ -2355,7 +2355,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.field_pos = ID_AA64ISAR1_EL1_DPB_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64ISAR1_EL1_DPB_IMP,
},
{
.desc = "Data cache clean to Point of Deep Persistence",
@@ -2366,7 +2366,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_EL1_DPB_SHIFT,
.field_width = 4,
- .min_field_value = 2,
+ .min_field_value = ID_AA64ISAR1_EL1_DPB_DPB2,
},
#endif
#ifdef CONFIG_ARM64_SVE
@@ -2437,7 +2437,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_EL1_FWB_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR2_EL1_FWB_IMP,
.matches = has_cpuid_feature,
},
{
@@ -2448,7 +2448,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_EL1_TTL_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR2_EL1_TTL_IMP,
.matches = has_cpuid_feature,
},
{
@@ -2478,7 +2478,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR1_EL1_HAFDBS_SHIFT,
.field_width = 4,
- .min_field_value = 2,
+ .min_field_value = ID_AA64MMFR1_EL1_HAFDBS_DBM,
.matches = has_hw_dbm,
.cpu_enable = cpu_enable_hw_dbm,
},
@@ -2491,7 +2491,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_EL1_CRC32_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64ISAR0_EL1_CRC32_IMP,
},
{
.desc = "Speculative Store Bypassing Safe (SSBS)",
@@ -2514,7 +2514,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_EL1_CnP_SHIFT,
.field_width = 4,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR2_EL1_CnP_IMP,
.cpu_enable = cpu_enable_cnp,
},
#endif
@@ -2527,7 +2527,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR1_EL1_SB_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 1,
+ .min_field_value = ID_AA64ISAR1_EL1_SB_IMP,
},
#ifdef CONFIG_ARM64_PTR_AUTH
{
@@ -2636,7 +2636,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_width = 4,
.field_pos = ID_AA64MMFR2_EL1_E0PD_SHIFT,
.matches = has_cpuid_feature,
- .min_field_value = 1,
+ .min_field_value = ID_AA64MMFR2_EL1_E0PD_IMP,
.cpu_enable = cpu_enable_e0pd,
},
#endif
@@ -2649,7 +2649,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR0_EL1_RNDR_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
- .min_field_value = 1,
+ .min_field_value = ID_AA64ISAR0_EL1_RNDR_IMP,
},
#ifdef CONFIG_ARM64_BTI
{
@@ -2703,7 +2703,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR1_EL1_LRCPC_SHIFT,
.field_width = 4,
.matches = has_cpuid_feature,
- .min_field_value = 1,
+ .min_field_value = ID_AA64ISAR1_EL1_LRCPC_IMP,
},
#ifdef CONFIG_ARM64_SME
{

--
2.30.2

2023-04-17 15:03:42

by Will Deacon

Subject: Re: [PATCH v2 0/3] arm64/cpufeature: Use macros for ID based matches

On Wed, 12 Apr 2023 18:13:28 +0100, Mark Brown wrote:
> As was recently done for hwcaps, convert all the cpufeatures that match
> on ID registers to use helper macros to initialise all the data fields
> that the matching code uses. The feature table is much less of an eye
> chart than the hwcap tables were, so the benefits are less substantial,
> but the result is still less verbose and error prone and therefore
> still seems like a win.
>
> [...]

Applied to arm64 (for-next/cpufeature), thanks!

[1/3] arm64/cpufeature: Pull out helper for CPUID register definitions
https://git.kernel.org/arm64/c/876e3c8efe79
[2/3] arm64/cpufeature: Consistently use symbolic constants for min_field_value
https://git.kernel.org/arm64/c/21642da214a9
[3/3] arm64/cpufeature: Use helper macro to specify ID register for capabilities
https://git.kernel.org/arm64/c/863da0bdb17b

Cheers,
--
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev