From: Julien Thierry
Subject: Re: [PATCH v2 2/6] arm64: alternative: Apply alternatives early in boot process
To: Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, marc.zyngier@arm.com, james.morse@arm.com, daniel.thompson@linaro.org, Catalin Marinas, Will Deacon
References: <1516190084-18978-1-git-send-email-julien.thierry@arm.com> <1516190084-18978-3-git-send-email-julien.thierry@arm.com> <7a3afa42-821a-3d4a-3af8-00ba18653a4a@arm.com>
Date: Fri, 11 May 2018 09:12:59 +0100
On 09/05/18 22:52, Suzuki K Poulose wrote:
> On 05/04/2018 11:06 AM, Julien Thierry wrote:
>> Hi,
>>
>> In order to prepare the v3 of this patchset, I'd like people's opinion
>> on what this patch does. More below.
>>
>> On 17/01/18 11:54, Julien Thierry wrote:
>>> From: Daniel Thompson
>>>
>>> Currently alternatives are applied very late in the boot process (and
>>> a long time after we enable scheduling). Some alternative sequences,
>>> such as those that alter the way CPU context is stored, must be applied
>>> much earlier in the boot sequence.
>
>>> +/*
>>> + * early-apply features are detected using only the boot CPU and checked on
>>> + * secondary CPUs startup, even then,
>>> + * These early-apply features should only include features where we must
>>> + * patch the kernel very early in the boot process.
>>> + *
>>> + * Note that the cpufeature logic *must* be made aware of early-apply
>>> + * features to ensure they are reported as enabled without waiting
>>> + * for other CPUs to boot.
>>> + */
>>> +#define EARLY_APPLY_FEATURE_MASK BIT(ARM64_HAS_SYSREG_GIC_CPUIF)
>>> +
>>
>> Following the change in the cpufeature infrastructure,
>> ARM64_HAS_SYSREG_GIC_CPUIF will have the scope
>> ARM64_CPUCAP_SCOPE_BOOT_CPU in order to be checked early in the boot
>> process.
>
> That's correct.
>
>>
>> Now, regarding the early application of alternatives, I am wondering
>> whether we can apply all the alternatives associated with SCOPE_BOOT
>> features that *do not* have a cpu_enable callback.
>>
>
> I don't understand why you would skip the ones that have a "cpu_enable"
> callback. Could you explain this a bit? Ideally you should be able to
> apply the alternatives for features with SCOPE_BOOT, provided the
> cpu_enable() callback is written properly.
>

In my mind the "cpu_enable" callback is the setup a CPU should perform
before using the feature (i.e. the code getting patched in by the
alternative).

So I was worried about the code getting patched by the boot CPU, and
the secondary CPUs then ending up executing patched code before the
cpu_enable for the corresponding feature gets called.

Or is there a requirement for secondary CPU startup code to be free of
alternative code?

>
>> Otherwise we can keep the macro to list individually each feature that
>> is patchable at boot time as the current patch does (or put this info
>> in a flag within the arm64_cpu_capabilities structure)
>
> You may be able to build up the mask of *available* capabilities with
> SCOPE_BOOT at boot time by playing some trick in the
> setup_boot_cpu_capabilities(), rather than embedding it in the
> capabilities (and then parsing the entire table(s)) or manually keeping
> track of the capabilities by having a separate mask.
>

Yes, I like that idea.

Thanks,

> Suzuki
>
>>
>> Any thoughts or preferences on this?
>>
>> Thanks,
>>
>>>   #define __ALT_PTR(a,f)        ((void *)&(a)->f + (a)->f)
>>>   #define ALT_ORIG_PTR(a)        __ALT_PTR(a, orig_offset)
>>>   #define ALT_REPL_PTR(a)        __ALT_PTR(a, alt_offset)
>>> @@ -105,7 +117,8 @@ static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnp
>>>       return insn;
>>>   }
>>>
>>> -static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>> +static void __apply_alternatives(void *alt_region, bool use_linear_alias,
>>> +                 unsigned long feature_mask)
>>>   {
>>>       struct alt_instr *alt;
>>>       struct alt_region *region = alt_region;
>>> @@ -115,6 +128,9 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>>           u32 insn;
>>>           int i, nr_inst;
>>>
>>> +        if ((BIT(alt->cpufeature) & feature_mask) == 0)
>>> +            continue;
>>> +
>>>           if (!cpus_have_cap(alt->cpufeature))
>>>               continue;
>>>
>>> @@ -138,6 +154,21 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>>   }
>>>
>>>   /*
>>> + * This is called very early in the boot process (directly after we run
>>> + * a feature detect on the boot CPU). No need to worry about other CPUs
>>> + * here.
>>> + */
>>> +void apply_alternatives_early(void)
>>> +{
>>> +    struct alt_region region = {
>>> +        .begin    = (struct alt_instr *)__alt_instructions,
>>> +        .end    = (struct alt_instr *)__alt_instructions_end,
>>> +    };
>>> +
>>> +    __apply_alternatives(&region, true, EARLY_APPLY_FEATURE_MASK);
>>> +}
>>> +
>>> +/*
>>>    * We might be patching the stop_machine state machine, so implement a
>>>    * really simple polling protocol here.
>>>    */
>>> @@ -156,7 +187,9 @@ static int __apply_alternatives_multi_stop(void *unused)
>>>           isb();
>>>       } else {
>>>           BUG_ON(patched);
>>> -        __apply_alternatives(&region, true);
>>> +
>>> +        __apply_alternatives(&region, true, ~EARLY_APPLY_FEATURE_MASK);
>>> +
>>>           /* Barriers provided by the cache flushing */
>>>           WRITE_ONCE(patched, 1);
>>>       }
>>> @@ -177,5 +210,5 @@ void apply_alternatives(void *start, size_t length)
>>>           .end    = start + length,
>>>       };
>>>
>>> -    __apply_alternatives(&region, false);
>>> +    __apply_alternatives(&region, false, -1);
>>>   }
>>> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
>>> index 551eb07..37361b5 100644
>>> --- a/arch/arm64/kernel/smp.c
>>> +++ b/arch/arm64/kernel/smp.c
>>> @@ -453,6 +453,12 @@ void __init smp_prepare_boot_cpu(void)
>>>        * cpuinfo_store_boot_cpu() above.
>>>        */
>>>       update_cpu_errata_workarounds();
>>> +    /*
>>> +     * We now know enough about the boot CPU to apply the
>>> +     * alternatives that cannot wait until interrupt handling
>>> +     * and/or scheduling is enabled.
>>> +     */
>>> +    apply_alternatives_early();
>>>   }
>>>
>>>   static u64 __init of_get_cpu_mpidr(struct device_node *dn)
>>> --
>>> 1.9.1
>>>
>>
>

--
Julien Thierry
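
[Illustrative sketch] A rough, standalone model (plain C, not kernel code) of the approach discussed above: instead of a hand-maintained EARLY_APPLY_FEATURE_MASK, the boot CPU records which boot-scope capabilities it actually detected, and the early patching pass skips any alternative entry whose feature bit is not in that mask, leaving it for the late stop_machine pass. The names and values here (cap_table, alt_table, boot_feature_mask, the scope constants) are simplified assumptions for illustration, not the kernel's real cpufeature interfaces.

/*
 * Standalone model: build a mask of detected boot-scope capabilities,
 * then only early-patch alternatives covered by that mask.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SCOPE_BOOT_CPU  1
#define SCOPE_SYSTEM    2

struct cpu_capability {
    unsigned int num;          /* capability number, i.e. bit position */
    int scope;                 /* SCOPE_BOOT_CPU or SCOPE_SYSTEM */
    bool (*matches)(void);     /* detection, run on the boot CPU only */
};

static bool has_gic_cpuif(void) { return true; }   /* pretend detection */
static bool has_pan(void)       { return true; }

static const struct cpu_capability cap_table[] = {
    { .num = 3, .scope = SCOPE_BOOT_CPU, .matches = has_gic_cpuif },
    { .num = 7, .scope = SCOPE_SYSTEM,   .matches = has_pan },
};

struct alt_entry {
    unsigned int cpufeature;   /* which capability patches this site */
    const char *site;          /* stand-in for the code to patch */
};

static const struct alt_entry alt_table[] = {
    { .cpufeature = 3, .site = "gic cpuif sequence" },
    { .cpufeature = 7, .site = "pan sequence" },
};

/* Built while the boot CPU sets up its capabilities. */
static uint64_t boot_feature_mask;

static void setup_boot_cpu_capabilities(void)
{
    for (size_t i = 0; i < sizeof(cap_table) / sizeof(cap_table[0]); i++)
        if (cap_table[i].scope == SCOPE_BOOT_CPU && cap_table[i].matches())
            boot_feature_mask |= UINT64_C(1) << cap_table[i].num;
}

/* Early patching only touches entries covered by the boot-scope mask. */
static void apply_alternatives_early(void)
{
    for (size_t i = 0; i < sizeof(alt_table) / sizeof(alt_table[0]); i++) {
        if (!(boot_feature_mask & (UINT64_C(1) << alt_table[i].cpufeature)))
            continue;          /* left for the late, full pass */
        printf("early patch: %s\n", alt_table[i].site);
    }
}

int main(void)
{
    setup_boot_cpu_capabilities();
    apply_alternatives_early();
    return 0;
}

Compiled and run, the sketch only "patches" the boot-scope site; the system-scope entry is skipped, mirroring how the real late pass (not modelled here) would handle the remaining alternatives once all CPUs are up.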