From: Julien Thierry <julien.thierry@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, daniel.thompson@linaro.org,
	joel@joelfernandes.org, marc.zyngier@arm.com, christoffer.dall@arm.com,
	james.morse@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
	mark.rutland@arm.com, Julien Thierry, Suzuki K Poulose
Subject: [PATCH v9 15/26] arm64: alternative: Apply alternatives early in boot process
Date: Mon, 21 Jan 2019 15:33:34 +0000
Message-Id: <1548084825-8803-16-git-send-email-julien.thierry@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1548084825-8803-1-git-send-email-julien.thierry@arm.com>
References: <1548084825-8803-1-git-send-email-julien.thierry@arm.com>

From: Daniel Thompson

Currently alternatives are applied very late in the boot process (and
a long time after we enable scheduling). Some alternative sequences,
such as those that alter the way CPU context is stored, must be applied
much earlier in the boot sequence.

Introduce apply_boot_alternatives() to allow some alternatives to be
applied immediately after we detect the CPU features of the boot CPU.

Signed-off-by: Daniel Thompson
[julien.thierry@arm.com: rename to fit new cpufeature framework better,
			 apply BOOT_SCOPE feature early in boot]
Signed-off-by: Julien Thierry
Reviewed-by: Suzuki K Poulose
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Christoffer Dall
Cc: Suzuki K Poulose
---
 arch/arm64/include/asm/alternative.h |  1 +
 arch/arm64/include/asm/cpufeature.h  |  4 ++++
 arch/arm64/kernel/alternative.c      | 43 +++++++++++++++++++++++++++++++-----
 arch/arm64/kernel/cpufeature.c       |  6 +++++
 arch/arm64/kernel/smp.c              |  7 ++++++
 5 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index 9806a23..b9f8d78 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -25,6 +25,7 @@ struct alt_instr {
 typedef void (*alternative_cb_t)(struct alt_instr *alt,
 				 __le32 *origptr, __le32 *updptr, int nr_inst);
 
+void __init apply_boot_alternatives(void);
 void __init apply_alternatives_all(void);
 bool alternative_is_applied(u16 cpufeature);
 
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 89c3f31..e505e1f 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -391,6 +391,10 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
 
+/* ARM64 CAPS + alternative_cb */
+#define ARM64_NPATCHABLE (ARM64_NCAPS + 1)
+extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
+
 #define for_each_available_cap(cap)		\
 	for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
 
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index c947d22..a9b4677 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -155,7 +155,8 @@ static void clean_dcache_range_nopatch(u64 start, u64 end)
 	} while (cur += d_size, cur < end);
 }
 
-static void __apply_alternatives(void *alt_region, bool is_module)
+static void __apply_alternatives(void *alt_region, bool is_module,
+				 unsigned long *feature_mask)
 {
 	struct alt_instr *alt;
 	struct alt_region *region = alt_region;
@@ -165,6 +166,9 @@ static void __apply_alternatives(void *alt_region, bool is_module)
 	for (alt = region->begin; alt < region->end; alt++) {
 		int nr_inst;
 
+		if (!test_bit(alt->cpufeature, feature_mask))
+			continue;
+
 		/* Use ARM64_CB_PATCH as an unconditional patch */
 		if (alt->cpufeature < ARM64_CB_PATCH &&
 		    !cpus_have_cap(alt->cpufeature))
@@ -203,8 +207,11 @@ static void __apply_alternatives(void *alt_region, bool is_module)
 		__flush_icache_all();
 		isb();
 
-		/* We applied all that was available */
-		bitmap_copy(applied_alternatives, cpu_hwcaps, ARM64_NCAPS);
+		/* Ignore ARM64_CB bit from feature mask */
+		bitmap_or(applied_alternatives, applied_alternatives,
+			  feature_mask, ARM64_NCAPS);
+		bitmap_and(applied_alternatives, applied_alternatives,
+			   cpu_hwcaps, ARM64_NCAPS);
 	}
 }
 
@@ -225,8 +232,13 @@ static int __apply_alternatives_multi_stop(void *unused)
 			cpu_relax();
 		isb();
 	} else {
+		DECLARE_BITMAP(remaining_capabilities, ARM64_NPATCHABLE);
+
+		bitmap_complement(remaining_capabilities, boot_capabilities,
+				  ARM64_NPATCHABLE);
+
 		BUG_ON(all_alternatives_applied);
-		__apply_alternatives(&region, false);
+		__apply_alternatives(&region, false, remaining_capabilities);
 		/* Barriers provided by the cache flushing */
 		WRITE_ONCE(all_alternatives_applied, 1);
 	}
@@ -240,6 +252,24 @@ void __init apply_alternatives_all(void)
 	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
 }
 
+/*
+ * This is called very early in the boot process (directly after we run
+ * a feature detect on the boot CPU). No need to worry about other CPUs
+ * here.
+ */
+void __init apply_boot_alternatives(void)
+{
+	struct alt_region region = {
+		.begin	= (struct alt_instr *)__alt_instructions,
+		.end	= (struct alt_instr *)__alt_instructions_end,
+	};
+
+	/* If called on non-boot cpu things could go wrong */
+	WARN_ON(smp_processor_id() != 0);
+
+	__apply_alternatives(&region, false, &boot_capabilities[0]);
+}
+
 #ifdef CONFIG_MODULES
 void apply_alternatives_module(void *start, size_t length)
 {
@@ -247,7 +277,10 @@ void apply_alternatives_module(void *start, size_t length)
 		.begin	= start,
 		.end	= start + length,
 	};
+	DECLARE_BITMAP(all_capabilities, ARM64_NPATCHABLE);
+
+	bitmap_fill(all_capabilities, ARM64_NPATCHABLE);
 
-	__apply_alternatives(&region, true);
+	__apply_alternatives(&region, true, &all_capabilities[0]);
 }
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d607ea3..b530fb24 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -54,6 +54,9 @@ EXPORT_SYMBOL(cpu_hwcaps);
 static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM64_NCAPS];
 
+/* Need also bit for ARM64_CB_PATCH */
+DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -1677,6 +1680,9 @@ static void update_cpu_capabilities(u16 scope_mask)
 		if (caps->desc)
 			pr_info("detected: %s\n", caps->desc);
 		cpus_set_cap(caps->capability);
+
+		if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU))
+			set_bit(caps->capability, boot_capabilities);
 	}
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 1598d6f..a944edd 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -419,6 +419,13 @@ void __init smp_prepare_boot_cpu(void)
 	 */
 	jump_label_init();
 	cpuinfo_store_boot_cpu();
+
+	/*
+	 * We now know enough about the boot CPU to apply the
+	 * alternatives that cannot wait until interrupt handling
+	 * and/or scheduling is enabled.
+	 */
+	apply_boot_alternatives();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
-- 
1.9.1
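
To illustrate the masking scheme described above, here is a minimal,
standalone userspace C sketch (not kernel code; NPATCHABLE, apply_masked()
and the example capability numbers are illustrative stand-ins for
ARM64_NPATCHABLE, __apply_alternatives(), boot_capabilities and cpu_hwcaps).
It shows the early pass patching only the capabilities recorded at boot-CPU
feature detection, and the later pass patching the complement of that set,
so no alternative site is patched twice.

#include <stdbool.h>
#include <stdio.h>

#define NPATCHABLE 8	/* stand-in for ARM64_NPATCHABLE */

static bool boot_caps[NPATCHABLE];	/* stand-in for boot_capabilities */
static bool detected[NPATCHABLE];	/* stand-in for cpu_hwcaps */
static bool applied[NPATCHABLE];	/* stand-in for applied_alternatives */

/* Stand-in for __apply_alternatives(): patch the caps enabled in mask. */
static void apply_masked(const bool *mask)
{
	for (int cap = 0; cap < NPATCHABLE; cap++) {
		if (!mask[cap] || !detected[cap] || applied[cap])
			continue;
		printf("patching alternatives for cap %d\n", cap);
		applied[cap] = true;
	}
}

int main(void)
{
	bool remaining[NPATCHABLE];

	/* Boot-CPU feature detection: caps 1, 3 and 5 present; 3 is boot-scope. */
	detected[1] = detected[3] = detected[5] = true;
	boot_caps[3] = true;

	/* Early pass, like apply_boot_alternatives(): boot-scope caps only. */
	apply_masked(boot_caps);

	/* Late pass, like __apply_alternatives_multi_stop(): the complement. */
	for (int cap = 0; cap < NPATCHABLE; cap++)	/* bitmap_complement() */
		remaining[cap] = !boot_caps[cap];
	apply_masked(remaining);

	return 0;
}

Built with a C99 compiler, this prints a "patching" line for cap 3 in the
early pass and for caps 1 and 5 in the late pass; the complement in the
late pass is what keeps boot-scope alternatives from being applied a
second time under stop_machine().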