Subject: Re: [RFC 15/17] arm64: enable ptrauth earlier
From: Amit Kachhap
Date: Sat, 6 Oct 2018 18:21:01 +0530
To: Kristina Martsenko, linux-arm-kernel@lists.infradead.org
Cc: Adam Wallis, Andrew Jones, Ard Biesheuvel, Arnd Bergmann,
    Catalin Marinas, Christoffer Dall, Dave P Martin, Jacob Bramley,
    Kees Cook, Marc Zyngier, Mark Rutland, Ramana Radhakrishnan,
    Suzuki K. Poulose, Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20181005084754.20950-1-kristina.martsenko@arm.com> <20181005084754.20950-16-kristina.martsenko@arm.com>
In-Reply-To: <20181005084754.20950-16-kristina.martsenko@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 10/05/2018 02:17 PM, Kristina Martsenko wrote:
> When the kernel is compiled with pointer auth instructions, the boot CPU
> needs to start using pointer auth very early, so change the cpucap to
> account for this.
>
> A function that enables pointer auth cannot return, so inline such
> functions or compile them without pointer auth.
>
> Do not use the cpu_enable callback, to avoid compiling the whole
> callchain down to cpu_enable without pointer auth.
>
> Note the change in behavior: if the boot CPU has pointer auth and a late
> CPU does not, we panic. Until now we would have just disabled pointer
> auth in this case.
>
> Signed-off-by: Kristina Martsenko
> ---
>  arch/arm64/include/asm/cpufeature.h   |  9 +++++++++
>  arch/arm64/include/asm/pointer_auth.h | 18 ++++++++++++++++++
>  arch/arm64/kernel/cpufeature.c        | 14 ++++----------
>  arch/arm64/kernel/smp.c               |  7 ++++++-
>  4 files changed, 37 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1717ba1db35d..af4ca92a5fa9 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -292,6 +292,15 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
>   */
>  #define ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE ARM64_CPUCAP_SCOPE_BOOT_CPU
>
> +/*
> + * CPU feature used early in the boot based on the boot CPU. It is safe for a
> + * late CPU to have this feature even though the boot CPU hasn't enabled it,
> + * although the feature will not be used by Linux in this case. If the boot CPU
> + * has enabled this feature already, then every late CPU must have it.
> + */
> +#define ARM64_CPUCAP_BOOT_CPU_FEATURE \
> +	(ARM64_CPUCAP_SCOPE_BOOT_CPU | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
> +
>  struct arm64_cpu_capabilities {
>  	const char *desc;
>  	u16 capability;
> diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
> index e60f225d9fa2..0634f06c3af2 100644
> --- a/arch/arm64/include/asm/pointer_auth.h
> +++ b/arch/arm64/include/asm/pointer_auth.h
> @@ -11,6 +11,13 @@
>
>  #ifdef CONFIG_ARM64_PTR_AUTH
>  /*
> + * Compile the function without pointer authentication instructions. This
> + * allows pointer authentication to be enabled/disabled within the function
> + * (but leaves the function unprotected by pointer authentication).
> + */
> +#define __no_ptrauth __attribute__((target("sign-return-address=none")))
> +
> +/*
>   * Each key is a 128-bit quantity which is split across a pair of 64-bit
>   * registers (Lo and Hi).
>   */
> @@ -51,6 +58,15 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
>  	__ptrauth_key_install(APIA, keys->apia);
>  }
>
> +static __always_inline void ptrauth_cpu_enable(void)
> +{
> +	if (!cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
> +		return;
> +
> +	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
> +	isb();
> +}
> +
>  /*
>   * The EL0 pointer bits used by a pointer authentication code.
>   * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
> @@ -71,8 +87,10 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
>  	ptrauth_keys_init(&(tsk)->thread_info.keys_user)
>
>  #else /* CONFIG_ARM64_PTR_AUTH */
> +#define __no_ptrauth
>  #define ptrauth_strip_insn_pac(lr)	(lr)
>  #define ptrauth_task_init_user(tsk)
> +#define ptrauth_cpu_enable(tsk)
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>
>  #endif /* __ASM_POINTER_AUTH_H */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 3157685aa56a..380ee01145e8 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1040,15 +1040,10 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
>  }
>
>  #ifdef CONFIG_ARM64_PTR_AUTH
> -static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
> -{
> -	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ENIA);
> -}
> -
>  static bool has_address_auth(const struct arm64_cpu_capabilities *entry,
>  			     int __unused)
>  {
> -	u64 isar1 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
> +	u64 isar1 = read_sysreg(id_aa64isar1_el1);
>  	bool api, apa;
>
>  	apa = cpuid_feature_extract_unsigned_field(isar1,
> @@ -1251,7 +1246,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	{
>  		.desc = "Address authentication (architected algorithm)",
>  		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.sys_reg = SYS_ID_AA64ISAR1_EL1,
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_APA_SHIFT,
> @@ -1261,7 +1256,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	{
>  		.desc = "Address authentication (IMP DEF algorithm)",
>  		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.sys_reg = SYS_ID_AA64ISAR1_EL1,
>  		.sign = FTR_UNSIGNED,
>  		.field_pos = ID_AA64ISAR1_API_SHIFT,
> @@ -1270,9 +1265,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  	},
>  	{
>  		.capability = ARM64_HAS_ADDRESS_AUTH,
> -		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
> +		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
>  		.matches = has_address_auth,
> -		.cpu_enable = cpu_enable_address_auth,
>  	},
>  #endif /* CONFIG_ARM64_PTR_AUTH */
>  	{},
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 25fcd22a4bb2..09690024dce8 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -53,6 +53,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -211,6 +212,8 @@ asmlinkage notrace void secondary_start_kernel(void)

The secondary_start_kernel function could also be given the __no_ptrauth
attribute for better readability, as below, although no functionality is
broken as this function does not return.

>  	 */
>  	check_local_cpu_capabilities();
>
> +	ptrauth_cpu_enable();

There are some function calls before this point, so I am wondering whether
enabling pointer authentication (and the cpu capabilities check that
ptrauth requires) could be moved up even earlier.
> +
>  	if (cpu_ops[cpu]->cpu_postboot)
>  		cpu_ops[cpu]->cpu_postboot();
>
> @@ -405,7 +408,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>  	mark_linear_text_alias_ro();
>  }
>
> -void __init smp_prepare_boot_cpu(void)
> +void __init __no_ptrauth smp_prepare_boot_cpu(void)
>  {
>  	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
>  	/*
> @@ -414,6 +417,8 @@ void __init smp_prepare_boot_cpu(void)
>  	 */
>  	jump_label_init();
>  	cpuinfo_store_boot_cpu();
> +
> +	ptrauth_cpu_enable();
>  }
>
>  static u64 __init of_get_cpu_mpidr(struct device_node *dn)