Subject: Re: [PATCH v2 2/6] arm64: alternative: Apply alternatives early in boot process
To: Julien Thierry, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, marc.zyngier@arm.com, james.morse@arm.com, daniel.thompson@linaro.org, Catalin Marinas, Will Deacon
References: <1516190084-18978-1-git-send-email-julien.thierry@arm.com> <1516190084-18978-3-git-send-email-julien.thierry@arm.com>
From: Suzuki K Poulose
Message-ID: <7a3afa42-821a-3d4a-3af8-00ba18653a4a@arm.com>
Date: Wed, 9 May 2018 22:52:21 +0100
On 05/04/2018 11:06 AM, Julien Thierry wrote:
> Hi,
>
> In order to prepare the v3 of this patchset, I'd like people's opinion
> on what this patch does. More below.
>
> On 17/01/18 11:54, Julien Thierry wrote:
>> From: Daniel Thompson
>>
>> Currently alternatives are applied very late in the boot process (and
>> a long time after we enable scheduling). Some alternative sequences,
>> such as those that alter the way CPU context is stored, must be applied
>> much earlier in the boot sequence.
>>
>> +/*
>> + * early-apply features are detected using only the boot CPU and checked on
>> + * secondary CPUs startup, even then,
>> + * These early-apply features should only include features where we must
>> + * patch the kernel very early in the boot process.
>> + *
>> + * Note that the cpufeature logic *must* be made aware of early-apply
>> + * features to ensure they are reported as enabled without waiting
>> + * for other CPUs to boot.
>> + */
>> +#define EARLY_APPLY_FEATURE_MASK BIT(ARM64_HAS_SYSREG_GIC_CPUIF)
>> +
>
> Following the change in the cpufeature infrastructure,
> ARM64_HAS_SYSREG_GIC_CPUIF will have the scope
> ARM64_CPUCAP_SCOPE_BOOT_CPU in order to be checked early in the boot
> process.

That's correct.

> Now, regarding the early application of alternatives, I am wondering
> whether we can apply all the alternatives associated with SCOPE_BOOT
> features that *do not* have a cpu_enable callback.

I don't understand why you would skip the ones that have a "cpu_enable"
callback. Could you explain this a bit? Ideally, you should be able to
apply the alternatives for features with SCOPE_BOOT, provided the
cpu_enable() callback is written properly.

> Otherwise we can keep the macro to list individually each feature that
> is patchable at boot time, as the current patch does (or put this info
> in a flag within the arm64_cpu_capabilities structure).

You may be able to build up the mask of *available* capabilities with
SCOPE_BOOT at boot time by playing some trick in
setup_boot_cpu_capabilities(), rather than embedding it in the
capabilities (and then parsing the entire table(s)) or manually keeping
track of the capabilities with a separate mask.
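Something along the lines of the below, completely untested and only to
illustrate the idea. The setup_boot_cpu_capabilities() helper, the scope
check on caps->type and an apply_alternatives_early() variant that takes
a mask are assumptions about how the reworked code could look, not
existing code:

static unsigned long boot_scope_feature_mask;

static void __init setup_boot_cpu_capabilities(void)
{
	const struct arm64_cpu_capabilities *caps;

	/*
	 * Walk the feature table once on the boot CPU and record which
	 * boot-scoped capabilities are actually present, instead of
	 * hard-coding them in a mask.
	 */
	for (caps = arm64_features; caps->matches; caps++) {
		if (!(caps->type & ARM64_CPUCAP_SCOPE_BOOT_CPU))
			continue;
		if (!caps->matches(caps, ARM64_CPUCAP_SCOPE_BOOT_CPU))
			continue;

		cpus_set_cap(caps->capability);
		boot_scope_feature_mask |= BIT(caps->capability);
	}

	/* Patch only what was detected on the boot CPU. */
	apply_alternatives_early(boot_scope_feature_mask);
}

That way the mask only ever contains the boot-scoped capabilities that
were actually detected, and nothing has to be kept in sync by hand.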
Suzuki

> Any thoughts or preferences on this?
>
> Thanks,
>
>>   #define __ALT_PTR(a,f)        ((void *)&(a)->f + (a)->f)
>>   #define ALT_ORIG_PTR(a)        __ALT_PTR(a, orig_offset)
>>   #define ALT_REPL_PTR(a)        __ALT_PTR(a, alt_offset)
>> @@ -105,7 +117,8 @@ static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnp
>>       return insn;
>>   }
>>
>> -static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>> +static void __apply_alternatives(void *alt_region, bool use_linear_alias,
>> +                 unsigned long feature_mask)
>>   {
>>       struct alt_instr *alt;
>>       struct alt_region *region = alt_region;
>> @@ -115,6 +128,9 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>           u32 insn;
>>           int i, nr_inst;
>>
>> +        if ((BIT(alt->cpufeature) & feature_mask) == 0)
>> +            continue;
>> +
>>           if (!cpus_have_cap(alt->cpufeature))
>>               continue;
>>
>> @@ -138,6 +154,21 @@ static void __apply_alternatives(void *alt_region, bool use_linear_alias)
>>   }
>>
>>   /*
>> + * This is called very early in the boot process (directly after we run
>> + * a feature detect on the boot CPU). No need to worry about other CPUs
>> + * here.
>> + */
>> +void apply_alternatives_early(void)
>> +{
>> +    struct alt_region region = {
>> +        .begin    = (struct alt_instr *)__alt_instructions,
>> +        .end    = (struct alt_instr *)__alt_instructions_end,
>> +    };
>> +
>> +    __apply_alternatives(&region, true, EARLY_APPLY_FEATURE_MASK);
>> +}
>> +
>> +/*
>>    * We might be patching the stop_machine state machine, so implement a
>>    * really simple polling protocol here.
>>    */
>> @@ -156,7 +187,9 @@ static int __apply_alternatives_multi_stop(void *unused)
>>           isb();
>>       } else {
>>           BUG_ON(patched);
>> -        __apply_alternatives(&region, true);
>> +
>> +        __apply_alternatives(&region, true, ~EARLY_APPLY_FEATURE_MASK);
>> +
>>           /* Barriers provided by the cache flushing */
>>           WRITE_ONCE(patched, 1);
>>       }
>> @@ -177,5 +210,5 @@ void apply_alternatives(void *start, size_t length)
>>           .end    = start + length,
>>       };
>>
>> -    __apply_alternatives(&region, false);
>> +    __apply_alternatives(&region, false, -1);
>>   }
>> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
>> index 551eb07..37361b5 100644
>> --- a/arch/arm64/kernel/smp.c
>> +++ b/arch/arm64/kernel/smp.c
>> @@ -453,6 +453,12 @@ void __init smp_prepare_boot_cpu(void)
>>        * cpuinfo_store_boot_cpu() above.
>>        */
>>       update_cpu_errata_workarounds();
>> +    /*
>> +     * We now know enough about the boot CPU to apply the
>> +     * alternatives that cannot wait until interrupt handling
>> +     * and/or scheduling is enabled.
>> +     */
>> +    apply_alternatives_early();
>>   }
>>
>>   static u64 __init of_get_cpu_mpidr(struct device_node *dn)
>> --
>> 1.9.1
>>
>