From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Adam Wallis, Amit Kachhap, Andrew Jones, Ard Biesheuvel, Arnd Bergmann,
    Catalin Marinas, Christoffer Dall, Dave P Martin, Jacob Bramley,
    Kees Cook, Marc Zyngier, Mark Rutland, Ramana Radhakrishnan,
    "Suzuki K . Poulose", Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 15/17] arm64: enable ptrauth earlier
Date: Fri, 5 Oct 2018 09:47:52 +0100
Message-Id: <20181005084754.20950-16-kristina.martsenko@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181005084754.20950-1-kristina.martsenko@arm.com>
References: <20181005084754.20950-1-kristina.martsenko@arm.com>

When the kernel is compiled with pointer auth instructions, the boot CPU
needs to start using pointer auth very early, so change the cpucap to
account for this.

A function that enables pointer auth cannot return, so inline such
functions or compile them without pointer auth. Do not use the
cpu_enable callback, to avoid compiling the whole callchain down to
cpu_enable without pointer auth.

Note the change in behavior: if the boot CPU has pointer auth and a late
CPU does not, we panic. Until now we would have just disabled pointer
auth in this case.

Signed-off-by: Kristina Martsenko
---
 arch/arm64/include/asm/cpufeature.h   |  9 +++++++++
 arch/arm64/include/asm/pointer_auth.h | 18 ++++++++++++++++++
 arch/arm64/kernel/cpufeature.c        | 14 ++++----------
 arch/arm64/kernel/smp.c               |  7 ++++++-
 4 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1717ba1db35d..af4ca92a5fa9 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -292,6 +292,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  */
 #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
 
+/*
+ * CPU feature used early in the boot based on the boot CPU. It is safe for a
+ * late CPU to have this feature even though the boot CPU hasn't enabled it,
+ * although the feature will not be used by Linux in this case. If the boot CPU
+ * has enabled this feature already, then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_BOOT_CPU_FEATURE \
+	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index e60f225d9fa2..0634f06c3af2 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
 #ifdef CONFIG_ARM64_PTR_AUTH
 
 /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth	__attribute__((target("sign-return-address=none")))
+
+/*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
  * registers (Lo and Hi).
  */
@@ -51,6 +58,15 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
 	__ptrauth_key_install(APIA, keys->apia);
 }
 
+static __always_inline void ptrauth_cpu_enable(void)
+{
+	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+		return;
+
+	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
+	isb();
+}
+
 /*
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
@@ -71,8 +87,10 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 	ptrauth_keys_init(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define __no_ptrauth
 #define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_task_init_user(tsk)
+#define ptrauth_cpu_enable(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3157685aa56a..380ee01145e8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1040,15 +1040,10 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
 }
 
 #ifdef CONFIG_ARM64_PTR_AUTH
-static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
-{
-	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
-}
-
 static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
 			     int __unused)
 {
-	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+	u64 isar1 = read_sysreg(id_aa64isar1_el1);
 	bool api, apa;
 
 	apa = cpuid_feature_extract_unsigned_field(isar1,
@@ -1251,7 +1246,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (architected algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_APA_SHIFT,
@@ -1261,7 +1256,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Address authentication (IMP DEF algorithm)",
 		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.sys_reg = SYS_ID_AA64ISAR1_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64ISAR1_API_SHIFT,
@@ -1270,9 +1265,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 	{
 		.capability = ARM64_HAS_ADDRESS_AUTH,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
 		.matches = has_address_auth,
-		.cpu_enable = cpu_enable_address_auth,
 	},
 #endif /* CONFIG_ARM64_PTR_AUTH */
 	{},
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 25fcd22a4bb2..09690024dce8 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -53,6 +53,7 @@
 #include
 #include
 #include
+#include <asm/pointer_auth.h>
 #include
 #include
 #include
@@ -211,6 +212,8 @@ asmlinkage notrace void secondary_start_kernel(void)
 	 */
 	check_local_cpu_capabilities();
 
+	ptrauth_cpu_enable();
+
 	if (cpu_ops[cpu]->cpu_postboot)
 		cpu_ops[cpu]->cpu_postboot();
 
@@ -405,7 +408,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	mark_linear_text_alias_ro();
 }
 
-void __init smp_prepare_boot_cpu(void)
+void __init __no_ptrauth smp_prepare_boot_cpu(void)
 {
 	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
 	/*
@@ -414,6 +417,8 @@ void __init smp_prepare_boot_cpu(void)
 	 */
 	jump_label_init();
 	cpuinfo_store_boot_cpu();
+
+	ptrauth_cpu_enable();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
-- 
2.11.0
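
[Editor's note] The late-CPU policy that the new ARM64_CPUCAP_BOOT_CPU_FEATURE type encodes can be sketched in portable C. The flag values and the late_cpu_ok() helper below are illustrative stand-ins, not the kernel's actual definitions; they only model the behavior the commit message describes: a late CPU may have an address-auth capability the boot CPU never enabled (it simply goes unused), while a late CPU missing a capability the boot CPU already enabled is fatal.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag bits -- not the kernel's real ARM64_CPUCAP_* encodings. */
#define SCOPE_BOOT_CPU		(1u << 0)
#define PERMITTED_FOR_LATE_CPU	(1u << 1)

/* Detected on the boot CPU; a late CPU may have it even if it goes unused. */
#define BOOT_CPU_FEATURE	(SCOPE_BOOT_CPU | PERMITTED_FOR_LATE_CPU)
/* A strict boot CPU feature tolerates no mismatch in either direction. */
#define STRICT_BOOT_CPU_FEATURE	SCOPE_BOOT_CPU

/*
 * Decide whether a late (hotplugged) CPU is acceptable, given whether the
 * boot CPU enabled the capability (system_has) and whether the late CPU
 * has it (cpu_has).
 */
static bool late_cpu_ok(uint32_t type, bool system_has, bool cpu_has)
{
	if (system_has && !cpu_has)
		return false;	/* feature already in use: kernel would panic */
	if (!system_has && cpu_has)
		return (type & PERMITTED_FOR_LATE_CPU) != 0;
	return true;		/* boot CPU and late CPU agree */
}
```

With BOOT_CPU_FEATURE, late_cpu_ok(BOOT_CPU_FEATURE, false, true) holds (the extra capability is tolerated but unused), while late_cpu_ok(BOOT_CPU_FEATURE, true, false) fails, matching the commit's note that such a late CPU now triggers a panic rather than silently disabling pointer auth system-wide.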