Date: Thu, 8 Apr 2021 16:18:37 +0100
From: Mark Rutland
To: Vincenzo Frascino
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com, Catalin Marinas, Will Deacon
Subject: Re: [PATCH] arm64: mte: Move MTE TCF0 check in entry-common
Message-ID: <20210408151837.GB37165@C02TD0UTHF1T.local>
References: <20210408143723.13024-1-vincenzo.frascino@arm.com>
In-Reply-To: <20210408143723.13024-1-vincenzo.frascino@arm.com>

Hi Vincenzo,
On Thu, Apr 08, 2021 at 03:37:23PM +0100, Vincenzo Frascino wrote:
> The check_mte_async_tcf macro sets the TIF flag non-atomically. This can
> race with another CPU doing a set_tsk_thread_flag() and the flag can be
> lost in the process.
> 
> Move the tcf0 check to enter_from_user_mode() and clear tcf0 in
> exit_to_user_mode() to address the problem.

Beware that these are called at critical points of the entry sequence, so
we need to take care that nothing is instrumented (e.g. we can only safely
use noinstr functions here).

> Note: Moving the check in entry-common allows to use set_thread_flag()
> which is safe.
> 
> Fixes: 637ec831ea4f ("arm64: mte: Handle synchronous and asynchronous
> tag check faults")
> Cc: Catalin Marinas
> Cc: Will Deacon
> Reported-by: Will Deacon
> Signed-off-by: Vincenzo Frascino
> ---
>  arch/arm64/include/asm/mte.h     |  8 ++++++++
>  arch/arm64/kernel/entry-common.c |  6 ++++++
>  arch/arm64/kernel/entry.S        | 30 ------------------------------
>  arch/arm64/kernel/mte.c          | 25 +++++++++++++++++++++++--
>  4 files changed, 37 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> index 9b557a457f24..188f778c6f7b 100644
> --- a/arch/arm64/include/asm/mte.h
> +++ b/arch/arm64/include/asm/mte.h
> @@ -31,6 +31,8 @@ void mte_invalidate_tags(int type, pgoff_t offset);
>  void mte_invalidate_tags_area(int type);
>  void *mte_allocate_tag_storage(void);
>  void mte_free_tag_storage(char *storage);
> +void check_mte_async_tcf0(void);
> +void clear_mte_async_tcf0(void);
> 
>  #ifdef CONFIG_ARM64_MTE
> 
> @@ -83,6 +85,12 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
>  {
>  	return -EIO;
>  }
> +void check_mte_async_tcf0(void)
> +{
> +}
> +void clear_mte_async_tcf0(void)
> +{
> +}

Were these meant to be static inline?
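
(To illustrate, this is roughly what I'd expect for the !CONFIG_ARM64_MTE
stubs -- just an untested sketch on my side:)

	/*
	 * Stub out the helpers when MTE isn't built in; static inline so
	 * that each inclusion of the header doesn't emit another external
	 * definition and cause multiple-definition errors at link time.
	 */
	static inline void check_mte_async_tcf0(void)
	{
	}
	static inline void clear_mte_async_tcf0(void)
	{
	}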
> 
>  static inline void mte_assign_mem_tag_range(void *addr, size_t size)
>  {
> 
> diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
> index 9d3588450473..837d3624a1d5 100644
> --- a/arch/arm64/kernel/entry-common.c
> +++ b/arch/arm64/kernel/entry-common.c
> @@ -289,10 +289,16 @@ asmlinkage void noinstr enter_from_user_mode(void)
>  	CT_WARN_ON(ct_state() != CONTEXT_USER);
>  	user_exit_irqoff();
>  	trace_hardirqs_off_finish();
> +
> +	/* Check for asynchronous tag check faults in user space */
> +	check_mte_async_tcf0();
>  }
> 
>  asmlinkage void noinstr exit_to_user_mode(void)
>  {
> +	/* Ignore asynchronous tag check faults in the uaccess routines */
> +	clear_mte_async_tcf0();
> +
>  	trace_hardirqs_on_prepare();
>  	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
>  	user_enter_irqoff();
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index a31a0a713c85..fafd74ae5021 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -147,32 +147,6 @@ alternative_cb_end
>  .L__asm_ssbd_skip\@:
>  	.endm
> 
> -	/* Check for MTE asynchronous tag check faults */
> -	.macro check_mte_async_tcf, flgs, tmp
> -#ifdef CONFIG_ARM64_MTE
> -alternative_if_not ARM64_MTE
> -	b	1f
> -alternative_else_nop_endif
> -	mrs_s	\tmp, SYS_TFSRE0_EL1
> -	tbz	\tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
> -	/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
> -	orr	\flgs, \flgs, #_TIF_MTE_ASYNC_FAULT
> -	str	\flgs, [tsk, #TSK_TI_FLAGS]
> -	msr_s	SYS_TFSRE0_EL1, xzr
> -1:
> -#endif
> -	.endm
> -
> -	/* Clear the MTE asynchronous tag check faults */
> -	.macro clear_mte_async_tcf
> -#ifdef CONFIG_ARM64_MTE
> -alternative_if ARM64_MTE
> -	dsb	ish
> -	msr_s	SYS_TFSRE0_EL1, xzr
> -alternative_else_nop_endif
> -#endif
> -	.endm
> -
>  	.macro mte_set_gcr, tmp, tmp2
>  #ifdef CONFIG_ARM64_MTE
>  	/*
> @@ -243,8 +217,6 @@ alternative_else_nop_endif
>  	ldr	x19, [tsk, #TSK_TI_FLAGS]
>  	disable_step_tsk x19, x20
> 
> -	/* Check for asynchronous tag check faults in user space */
> -	check_mte_async_tcf x19, x22
>  	apply_ssbd 1, x22, x23
> 
>  	ptrauth_keys_install_kernel tsk, x20, x22, x23
> @@ -775,8 +747,6 @@ SYM_CODE_START_LOCAL(ret_to_user)
>  	cbnz	x2, work_pending
>  finish_ret_to_user:
>  	user_enter_irqoff
> -	/* Ignore asynchronous tag check faults in the uaccess routines */
> -	clear_mte_async_tcf
>  	enable_step_tsk x19, x2
>  #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
>  	bl	stackleak_erase
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index b3c70a612c7a..e759b0eca47e 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -166,14 +166,35 @@ static void set_gcr_el1_excl(u64 excl)
>  	 */
>  }
> 
> +void check_mte_async_tcf0(void)

As above, this'll need to be noinstr. I also reckon we should put this in
the header so that it can be inlined.

> +{
> +	/*
> +	 * dsb(ish) is not required before the register read
> +	 * because the TFSRE0_EL1 is automatically synchronized
> +	 * by the hardware on exception entry as SCTLR_EL1.ITFSB
> +	 * is set.
> +	 */
> +	u64 tcf0 = read_sysreg_s(SYS_TFSRE0_EL1);

Shouldn't we have an MTE feature check first?

> +
> +	if (tcf0 & SYS_TFSR_EL1_TF0)
> +		set_thread_flag(TIF_MTE_ASYNC_FAULT);
> +
> +	write_sysreg_s(0, SYS_TFSRE0_EL1);
> +}
> +
> +void clear_mte_async_tcf0(void)
> +{
> +	dsb(ish);
> +	write_sysreg_s(0, SYS_TFSRE0_EL1);
> +}

Likewise here on all counts.

Thanks,
Mark.
>  void flush_mte_state(void)
>  {
>  	if (!system_supports_mte())
>  		return;
> 
>  	/* clear any pending asynchronous tag fault */
> -	dsb(ish);
> -	write_sysreg_s(0, SYS_TFSRE0_EL1);
> +	clear_mte_async_tcf0();
>  	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
>  	/* disable tag checking */
>  	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
> -- 
> 2.30.2
> 
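
P.S. To illustrate the header suggestion, the below is roughly what I have
in mind (an untested sketch; the exact guards, and whether
system_supports_mte() is the right feature check to use here, are my
assumption rather than something I've verified against the series):

	/* In asm/mte.h, under #ifdef CONFIG_ARM64_MTE */
	static inline void check_mte_async_tcf0(void)
	{
		/* Bail out early when MTE isn't present. */
		if (!system_supports_mte())
			return;

		/*
		 * No dsb(ish) is needed before the read: TFSRE0_EL1 is
		 * synchronized on exception entry because SCTLR_EL1.ITFSB
		 * is set.
		 */
		if (read_sysreg_s(SYS_TFSRE0_EL1) & SYS_TFSR_EL1_TF0)
			set_thread_flag(TIF_MTE_ASYNC_FAULT);

		write_sysreg_s(0, SYS_TFSRE0_EL1);
	}

	static inline void clear_mte_async_tcf0(void)
	{
		if (!system_supports_mte())
			return;

		dsb(ish);
		write_sysreg_s(0, SYS_TFSRE0_EL1);
	}

Keeping these as static inlines in the header also means the noinstr
callers in entry-common.c don't have to call out into an instrumentable
mte.c.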