Subject: Re: [PATCH] arm64: Explicitly set pstate.ssbs for el0 on kernel entry
To: Marc Zyngier, will@kernel.org, mark.rutland@arm.com,
    julien.thierry@arm.com, tglx@linutronix.de
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, gkohli@codeaurora.org,
    parthd@codeaurora.org
References: <1562671333-3563-1-git-send-email-neeraju@codeaurora.org>
 <62c4fed5-39ac-adc9-3bc5-56eb5234a9d1@arm.com>
From: Neeraj Upadhyay <neeraju@codeaurora.org>
Message-ID: <386316d0-f844-d88c-8b78-0ffc4ffe0aaa@codeaurora.org>
Date: Tue, 9 Jul 2019 19:48:25 +0530
In-Reply-To: <62c4fed5-39ac-adc9-3bc5-56eb5234a9d1@arm.com>
Hi Marc,

On 7/9/19 6:38 PM, Marc Zyngier wrote:
> Hi Neeraj,
>
> On 09/07/2019 12:22, Neeraj Upadhyay wrote:
>> For cpus which do not support the pstate.ssbs feature, el0
>> might not retain spsr.ssbs. This is problematic if the task
>> migrates to a cpu supporting this feature, which then relies
>> on its state being correct. On kernel entry, explicitly set
>> spsr.ssbs, so that speculation is enabled for el0 when the
>> task migrates to a cpu supporting the ssbs feature. Restoring
>> the state at kernel entry ensures that the el0 ssbs state is
>> always consistent while we are in el1.
>>
>> As alternatives are applied by the boot cpu at the end of smp
>> init, the presence/absence of the ssbs feature on the boot cpu
>> is used to decide whether the capability is uniformly provided.
>
> I've seen the same issue, but went for a slightly different
> approach, see below.
>
>> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
>> ---
>>  arch/arm64/kernel/cpu_errata.c | 16 ++++++++++++++++
>>  arch/arm64/kernel/entry.S      | 26 +++++++++++++++++++++++++-
>>  2 files changed, 41 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
>> index ca11ff7..c84a56d 100644
>> --- a/arch/arm64/kernel/cpu_errata.c
>> +++ b/arch/arm64/kernel/cpu_errata.c
>> @@ -336,6 +336,22 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
>>  	*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>>  }
>>
>> +void __init arm64_restore_ssbs_state(struct alt_instr *alt,
>> +				     __le32 *origptr, __le32 *updptr,
>> +				     int nr_inst)
>> +{
>> +	BUG_ON(nr_inst != 1);
>> +	/*
>> +	 * Only restore the EL0 SSBS state on EL1 entry if this cpu
>> +	 * does not support the capability, the capability is present
>> +	 * on at least one cpu, and the SSBD state allows it to be
>> +	 * changed.
>> +	 */
>> +	if (!this_cpu_has_cap(ARM64_SSBS) && cpus_have_cap(ARM64_SSBS) &&
>> +	    arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
>> +		*updptr = cpu_to_le32(aarch64_insn_gen_nop());
>> +}
>> +
>>  void arm64_set_ssbd_mitigation(bool state)
>>  {
>>  	if (!IS_ENABLED(CONFIG_ARM64_SSBD)) {
>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>> index 9cdc459..7e79305 100644
>> --- a/arch/arm64/kernel/entry.S
>> +++ b/arch/arm64/kernel/entry.S
>> @@ -143,6 +143,25 @@ alternative_cb_end
>>  #endif
>>  	.endm
>>
>> +	// This macro updates spsr. It also corrupts the
>> +	// condition flags.
>> +	.macro	restore_ssbs_state, saved_spsr, tmp
>> +#ifdef CONFIG_ARM64_SSBD
>> +alternative_cb	arm64_restore_ssbs_state
>> +	b	.L__asm_ssbs_skip\@
>> +alternative_cb_end
>> +	ldr	\tmp, [tsk, #TSK_TI_FLAGS]
>> +	tbnz	\tmp, #TIF_SSBD, .L__asm_ssbs_skip\@
>> +	tst	\saved_spsr, #PSR_MODE32_BIT	// native task?
>> +	b.ne	.L__asm_ssbs_compat\@
>> +	orr	\saved_spsr, \saved_spsr, #PSR_SSBS_BIT
>> +	b	.L__asm_ssbs_skip\@
>> +.L__asm_ssbs_compat\@:
>> +	orr	\saved_spsr, \saved_spsr, #PSR_AA32_SSBS_BIT
>> +.L__asm_ssbs_skip\@:
>> +#endif
>> +	.endm
>
> Although this is in keeping with the rest of entry.S (perfectly
> unreadable ;-), I think we can do something a bit simpler, that
> doesn't rely on patching. Also, this doesn't seem to take the
> SSBD options such as ARM64_SSBD_FORCE_ENABLE into account.

arm64_restore_ssbs_state has a check for ARM64_SSBD_FORCE_ENABLE;
does that look wrong?
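For reference, these are the SSBD mitigation states returned by
arm64_get_ssbd_state(), as defined in arch/arm64/include/asm/cpufeature.h
around this series; reproduced as an illustrative sketch, with the
annotations added here:

#define ARM64_SSBD_UNKNOWN		-1	/* status unknown, e.g. no firmware support */
#define ARM64_SSBD_FORCE_DISABLE	 0	/* ssbd=force-off: never mitigate */
#define ARM64_SSBD_KERNEL		 1	/* ssbd=kernel: per-task control via TIF_SSBD */
#define ARM64_SSBD_FORCE_ENABLE	 2	/* ssbd=force-on: always mitigate, SSBS stays clear */
#define ARM64_SSBD_MITIGATED		 3	/* unaffected, or mitigated in hardware */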
>
>> +
>>  	.macro	kernel_entry, el, regsize = 64
>>  	.if	\regsize == 32
>>  	mov	w0, w0			// zero upper 32 bits of x0
>> @@ -182,8 +201,13 @@ alternative_cb_end
>>  	str	x20, [tsk, #TSK_TI_ADDR_LIMIT]
>>  	/* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
>>  	.endif /* \el == 0 */
>> -	mrs	x22, elr_el1
>>  	mrs	x23, spsr_el1
>> +
>> +	.if	\el == 0
>> +	restore_ssbs_state x23, x22
>> +	.endif
>> +
>> +	mrs	x22, elr_el1
>>  	stp	lr, x21, [sp, #S_LR]
>>
>>  /*
>>
> How about the patch below?

Looks good; I was just going to mention the PF_KTHREAD check, but
Mark R. has already given detailed information about it.

Thanks
Neeraj

> Thanks,
>
> M.
>
> From 7d4314d1ef3122d8bf56a7ef239c8c68e0c81277 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier
> Date: Tue, 4 Jun 2019 17:35:18 +0100
> Subject: [PATCH] arm64: Force SSBS on context switch
>
> On a CPU that doesn't support SSBS, PSTATE[12] is RES0. In a system
> where only some of the CPUs implement SSBS, we end up losing track of
> the SSBS bit across task migration.
>
> To address this issue, let's force the SSBS bit on context switch.
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/include/asm/processor.h | 14 ++++++++++++--
>  arch/arm64/kernel/process.c        | 14 ++++++++++++++
>  2 files changed, 26 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index fd5b1a4efc70..844e2964b0f5 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -193,6 +193,16 @@ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
>  	regs->pmr_save = GIC_PRIO_IRQON;
>  }
>
> +static inline void set_ssbs_bit(struct pt_regs *regs)
> +{
> +	regs->pstate |= PSR_SSBS_BIT;
> +}
> +
> +static inline void set_compat_ssbs_bit(struct pt_regs *regs)
> +{
> +	regs->pstate |= PSR_AA32_SSBS_BIT;
> +}
> +
>  static inline void start_thread(struct pt_regs *regs, unsigned long pc,
>  				unsigned long sp)
>  {
> @@ -200,7 +210,7 @@ static inline void start_thread(struct pt_regs *regs, unsigned long pc,
>  	regs->pstate = PSR_MODE_EL0t;
>
>  	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
> -		regs->pstate |= PSR_SSBS_BIT;
> +		set_ssbs_bit(regs);
>
>  	regs->sp = sp;
>  }
> @@ -219,7 +229,7 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
>  #endif
>
>  	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
> -		regs->pstate |= PSR_AA32_SSBS_BIT;
> +		set_compat_ssbs_bit(regs);
>
>  	regs->compat_sp = sp;
>  }
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index 9856395ccdb7..d451b3b248cf 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -442,6 +442,19 @@ void uao_thread_switch(struct task_struct *next)
>  	}
>  }
>
> +static void ssbs_thread_switch(struct task_struct *next)
> +{
> +	if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE &&
> +	    !test_tsk_thread_flag(next, TIF_SSBD)) {
> +		struct pt_regs *regs = task_pt_regs(next);
> +
> +		if (compat_user_mode(regs))
> +			set_compat_ssbs_bit(regs);
> +		else if (user_mode(regs))
> +			set_ssbs_bit(regs);
> +	}
> +}
> +
>  /*
>   * We store our current task in sp_el0, which is clobbered by userspace. Keep a
>   * shadow copy so that we can restore this upon entry from userspace.
> @@ -471,6 +484,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
>  	entry_task_switch(next);
>  	uao_thread_switch(next);
>  	ptrauth_thread_switch(next);
> +	ssbs_thread_switch(next);
>
>  	/*
>  	 * Complete any pending TLB or cache maintenance on this CPU in case

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of the Code Aurora Forum, hosted by The Linux Foundation
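As background for the TIF_SSBD test in both patches: the flag is driven
by the per-task speculation controls exposed through prctl(). A minimal
userspace sketch of that interface, using the documented
PR_SPEC_STORE_BYPASS control (illustrative only, error handling kept to
a minimum):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
	/* Query the current speculative-store-bypass state for this task. */
	int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
			  0, 0, 0);
	if (state < 0) {
		perror("PR_GET_SPECULATION_CTRL");
		return 1;
	}
	printf("store bypass state: 0x%x%s\n", state,
	       (state & PR_SPEC_PRCTL) ? " (per-task control available)" : "");

	/*
	 * Disable speculative store bypass for this task. With ssbd=kernel
	 * this is the path that sets TIF_SSBD, which the kernel code above
	 * consults.
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0) < 0)
		perror("PR_SET_SPECULATION_CTRL");

	return 0;
}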