From: Vincenzo Frascino
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com
Cc: Vincenzo Frascino, Andrew Morton, Catalin Marinas, Will Deacon,
	Dmitry Vyukov, Andrey Ryabinin, Alexander Potapenko, Marco Elver,
	Evgenii Stepanov, Branislav Rankov, Andrey Konovalov, Lorenzo Pieralisi
Subject: [PATCH v16 5/9] arm64: mte: Enable TCO in functions that can read beyond buffer limits
Date: Mon, 15 Mar 2021 13:20:15 +0000
Message-Id: <20210315132019.33202-6-vincenzo.frascino@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210315132019.33202-1-vincenzo.frascino@arm.com>
References: <20210315132019.33202-1-vincenzo.frascino@arm.com>

The load_unaligned_zeropad() and __get/put_kernel_nofault() functions can
read past the limits of a buffer, and the extra memory they touch may
include an MTE granule with a different tag.

When MTE async mode is enabled and such a load crosses a granule boundary
into memory with a different tag, the PE sets the TFSR_EL1.TF1 bit as if
an asynchronous tag fault had happened.

Enable Tag Check Override (TCO) in these functions before the load and
disable it afterwards to prevent this from happening.

Note: The same condition can be hit in MTE sync mode, but there it is
handled through the exception path. In the current implementation the
mte_async_mode flag is set only at boot time, but in the future KASAN
might acquire runtime features that change the mode dynamically; hence
the new helpers check the mode and skip the TCO override when sync mode
is selected, to be future-proof.

Cc: Catalin Marinas
Cc: Will Deacon
Reported-by: Branislav Rankov
Tested-by: Branislav Rankov
Reviewed-by: Catalin Marinas
Acked-by: Andrey Konovalov
Tested-by: Andrey Konovalov
Signed-off-by: Vincenzo Frascino
---
 arch/arm64/include/asm/mte.h            | 15 +++++++++++++++
 arch/arm64/include/asm/uaccess.h        | 22 ++++++++++++++++++++++
 arch/arm64/include/asm/word-at-a-time.h |  4 ++++
 arch/arm64/kernel/mte.c                 | 22 ++++++++++++++++++++++
 4 files changed, 63 insertions(+)
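Note for reviewers (not part of the patch): the gating logic described
above can be pictured with the small stand-alone C model below. It is
purely illustrative; mte_async_mode is modelled as a plain bool instead
of a static key, PSTATE.TCO as a flag, and the potentially out-of-bounds
read as a memcpy() standing in for the real load_unaligned_zeropad()
assembly.

/* Illustrative user-space model only -- names mirror the kernel helpers. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool mte_async_mode;	/* models DEFINE_STATIC_KEY_FALSE(mte_async_mode) */
static bool tco_set;		/* models the PSTATE.TCO bit */

static void model_uaccess_enable_tco_async(void)
{
	/* Tag checks are only overridden when the kernel runs in async mode. */
	if (mte_async_mode)
		tco_set = true;
}

static void model_uaccess_disable_tco_async(void)
{
	if (mte_async_mode)
		tco_set = false;
}

/* Stand-in for load_unaligned_zeropad(): the read may stray past the buffer. */
static unsigned long model_load_unaligned_zeropad(const void *addr)
{
	unsigned long ret;

	model_uaccess_enable_tco_async();
	memcpy(&ret, addr, sizeof(ret));	/* the potentially faulting load */
	model_uaccess_disable_tco_async();

	return ret;
}

int main(void)
{
	char buf[16] = "word-at-a-time";

	mte_async_mode = true;	/* as done by mte_enable_kernel_async() */
	printf("%lx\n", model_load_unaligned_zeropad(buf + 8));

	return 0;
}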
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 9b557a457f24..8603c6636a7d 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -90,5 +90,20 @@ static inline void mte_assign_mem_tag_range(void *addr, size_t size)
 
 #endif /* CONFIG_ARM64_MTE */
 
+#ifdef CONFIG_KASAN_HW_TAGS
+/* Whether the MTE asynchronous mode is enabled. */
+DECLARE_STATIC_KEY_FALSE(mte_async_mode);
+
+static inline bool system_uses_mte_async_mode(void)
+{
+	return static_branch_unlikely(&mte_async_mode);
+}
+#else
+static inline bool system_uses_mte_async_mode(void)
+{
+	return false;
+}
+#endif /* CONFIG_KASAN_HW_TAGS */
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_MTE_H */
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 0deb88467111..b5f08621fa29 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -20,6 +20,7 @@
 
 #include <asm/cpufeature.h>
 #include <asm/mmu.h>
+#include <asm/mte.h>
 #include <asm/ptrace.h>
 #include <asm/memory.h>
 #include <asm/extable.h>
@@ -188,6 +189,23 @@ static inline void __uaccess_enable_tco(void)
 				 ARM64_MTE, CONFIG_KASAN_HW_TAGS));
 }
 
+/*
+ * These functions disable tag checking only if in MTE async mode
+ * since the sync mode generates exceptions synchronously and the
+ * nofault or load_unaligned_zeropad can handle them.
+ */
+static inline void __uaccess_disable_tco_async(void)
+{
+	if (system_uses_mte_async_mode())
+		__uaccess_disable_tco();
+}
+
+static inline void __uaccess_enable_tco_async(void)
+{
+	if (system_uses_mte_async_mode())
+		__uaccess_enable_tco();
+}
+
 static inline void uaccess_disable_privileged(void)
 {
 	__uaccess_disable_tco();
@@ -307,8 +325,10 @@ do {									\
 do {									\
 	int __gkn_err = 0;						\
 									\
+	__uaccess_enable_tco_async();					\
 	__raw_get_mem("ldr", *((type *)(dst)),				\
 		      (__force type *)(src), __gkn_err);		\
+	__uaccess_disable_tco_async();					\
 	if (unlikely(__gkn_err))					\
 		goto err_label;						\
 } while (0)
@@ -380,8 +400,10 @@ do {									\
 do {									\
 	int __pkn_err = 0;						\
 									\
+	__uaccess_enable_tco_async();					\
 	__raw_put_mem("str", *((type *)(src)),				\
 		      (__force type *)(dst), __pkn_err);		\
+	__uaccess_disable_tco_async();					\
 	if (unlikely(__pkn_err))					\
 		goto err_label;						\
 } while(0)
diff --git a/arch/arm64/include/asm/word-at-a-time.h b/arch/arm64/include/asm/word-at-a-time.h
index 3333950b5909..c62d9fa791aa 100644
--- a/arch/arm64/include/asm/word-at-a-time.h
+++ b/arch/arm64/include/asm/word-at-a-time.h
@@ -55,6 +55,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
 {
 	unsigned long ret, offset;
 
+	__uaccess_enable_tco_async();
+
 	/* Load word from unaligned pointer addr */
 	asm(
 	"1:	ldr	%0, %3\n"
@@ -76,6 +78,8 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
 	: "=&r" (ret), "=&r" (offset)
 	: "r" (addr), "Q" (*(unsigned long *)addr));
 
+	__uaccess_disable_tco_async();
+
 	return ret;
 }
 
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index fa755cf94e01..9362928ba0d5 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -26,6 +26,10 @@ u64 gcr_kernel_excl __ro_after_init;
 
 static bool report_fault_once = true;
 
+/* Whether the MTE asynchronous mode is enabled. */
+DEFINE_STATIC_KEY_FALSE(mte_async_mode);
+EXPORT_SYMBOL_GPL(mte_async_mode);
+
 static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
 	pte_t old_pte = READ_ONCE(*ptep);
@@ -118,12 +122,30 @@ static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
 
 void mte_enable_kernel_sync(void)
 {
+	/*
+	 * Make sure we enter this function when no PE has set
+	 * async mode previously.
+	 */
+	WARN_ONCE(system_uses_mte_async_mode(),
+			"MTE async mode enabled system wide!");
+
 	__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
 }
 
 void mte_enable_kernel_async(void)
 {
 	__mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);
+
+	/*
+	 * MTE async mode is set system wide by the first PE that
+	 * executes this function.
+	 *
+	 * Note: If in future KASAN acquires a runtime switching
+	 * mode in between sync and async, this strategy needs
+	 * to be reviewed.
+	 */
+	if (!system_uses_mte_async_mode())
+		static_branch_enable(&mte_async_mode);
 }
 
 void mte_set_report_once(bool state)
-- 
2.30.2
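P.S. The mte.c hunk makes the async switch a one-way, system-wide
decision: the first PE that enables async mode flips the static key for
everyone, and a later request for sync mode only triggers a warning. A
minimal stand-alone C model of that policy, with the same simplifications
as the earlier sketch (plain flags instead of a static key, SCTLR
programming reduced to comments), could look like this:

#include <stdbool.h>
#include <stdio.h>

static bool mte_async_mode;	/* stands in for the mte_async_mode static key */

static void model_mte_enable_kernel_sync(void)
{
	/* Entering sync mode after async was chosen system wide is unexpected. */
	if (mte_async_mode)
		fprintf(stderr, "WARN: MTE async mode enabled system wide!\n");

	/* ... would program SCTLR_ELx_TCF_SYNC here ... */
}

static void model_mte_enable_kernel_async(void)
{
	/* ... would program SCTLR_ELx_TCF_ASYNC here ... */

	/* The first caller sets the mode for the whole system. */
	if (!mte_async_mode)
		mte_async_mode = true;
}

int main(void)
{
	model_mte_enable_kernel_async();	/* first PE selects async mode */
	model_mte_enable_kernel_sync();		/* later sync request warns    */

	return 0;
}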