From: Linus Torvalds
To: Nathan Chancellor
Cc: Peter Anvin, Ingo Molnar, Borislav Petkov, Thomas Gleixner,
	Rasmus Villemoes, Josh Poimboeuf, Catalin Marinas, Will Deacon,
	Linux Kernel Mailing List, the arch/x86 maintainers,
	linux-arm-kernel@lists.infradead.org, linux-arch, Linus Torvalds
Subject: [PATCH 6/7 v2] arm64: start using 'asm goto' for put_user() when available
Date: Tue, 11 Jun 2024 16:40:33 -0700
Message-ID: <20240611234033.717058-1-torvalds@linux-foundation.org>
X-Mailer: git-send-email 2.45.1.209.gc6f12300df
In-Reply-To:
References:
X-Mailing-List: linux-kernel@vger.kernel.org

This generates noticeably better code with compilers that support it,
since we don't need to test the error register etc; the exception just
jumps to the error handling directly.

Unlike get_user(), there's no need to worry about old compilers. All
supported compilers support the regular non-output 'asm goto', as
pointed out by Nathan Chancellor.

Signed-off-by: Linus Torvalds
---

This is the fixed version that actually uses "asm goto" for put_user()
because it doesn't accidentally disable it by using the old CONFIG
option that no longer exists.

 arch/arm64/include/asm/asm-extable.h |  3 ++
 arch/arm64/include/asm/uaccess.h     | 70 ++++++++++++++--------------
 2 files changed, 39 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..b8a5861dc7b7 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -112,6 +112,9 @@
 #define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err)			\
 	_ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr)
 
+#define _ASM_EXTABLE_KACCESS(insn, fixup)				\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
+
 #define _ASM_EXTABLE_LOAD_UNALIGNED_ZEROPAD(insn, fixup, data, addr)	\
 	__DEFINE_ASM_GPR_NUMS						\
 	__ASM_EXTABLE_RAW(#insn, #fixup,				\
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 23c2edf517ed..6d4b16acc880 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -294,29 +294,28 @@ do {									\
 	} while (0);							\
 } while (0)
 
-#define __put_mem_asm(store, reg, x, addr, err, type)			\
-	asm volatile(							\
-	"1:	" store "	" reg "1, [%2]\n"			\
+#define __put_mem_asm(store, reg, x, addr, label, type)			\
+	asm goto(							\
+	"1:	" store "	" reg "0, [%1]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0)			\
-	: "+r" (err)							\
-	: "rZ" (x), "r" (addr))
+	_ASM_EXTABLE_##type##ACCESS(1b, %l2)				\
+	: : "rZ" (x), "r" (addr) : : label)
 
-#define __raw_put_mem(str, x, ptr, err, type)				\
+#define __raw_put_mem(str, x, ptr, label, type)				\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
-		__put_mem_asm(str "b", "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str "b", "%w", __pu_val, (ptr), label, type);	\
 		break;							\
 	case 2:								\
-		__put_mem_asm(str "h", "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str "h", "%w", __pu_val, (ptr), label, type);	\
 		break;							\
 	case 4:								\
-		__put_mem_asm(str, "%w", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str, "%w", __pu_val, (ptr), label, type);	\
 		break;							\
 	case 8:								\
-		__put_mem_asm(str, "%x", __pu_val, (ptr), (err), type);	\
+		__put_mem_asm(str, "%x", __pu_val, (ptr), label, type);	\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -328,25 +327,34 @@ do {									\
  * uaccess_ttbr0_disable(). As `x` and `ptr` could contain blocking functions,
  * we must evaluate these outside of the critical section.
  */
-#define __raw_put_user(x, ptr, err)					\
+#define __raw_put_user(x, ptr, label)					\
 do {									\
+	__label__ __rpu_failed;						\
 	__typeof__(*(ptr)) __user *__rpu_ptr = (ptr);			\
 	__typeof__(*(ptr)) __rpu_val = (x);				\
 	__chk_user_ptr(__rpu_ptr);					\
 									\
-	uaccess_ttbr0_enable();						\
-	__raw_put_mem("sttr", __rpu_val, __rpu_ptr, err, U);		\
-	uaccess_ttbr0_disable();					\
+	do {								\
+		uaccess_ttbr0_enable();					\
+		__raw_put_mem("sttr", __rpu_val, __rpu_ptr, __rpu_failed, U);\
+		uaccess_ttbr0_disable();				\
+		break;							\
+	__rpu_failed:							\
+		uaccess_ttbr0_disable();				\
+		goto label;						\
+	} while (0);							\
 } while (0)
 
 #define __put_user_error(x, ptr, err)					\
 do {									\
+	__label__ __pu_failed;						\
 	__typeof__(*(ptr)) __user *__p = (ptr);				\
 	might_fault();							\
 	if (access_ok(__p, sizeof(*__p))) {				\
 		__p = uaccess_mask_ptr(__p);				\
-		__raw_put_user((x), __p, (err));			\
+		__raw_put_user((x), __p, __pu_failed);			\
 	} else {							\
+	__pu_failed:							\
 		(err) = -EFAULT;					\
 	}								\
 } while (0)
@@ -369,15 +377,18 @@ do {									\
 do {									\
 	__typeof__(dst) __pkn_dst = (dst);				\
 	__typeof__(src) __pkn_src = (src);				\
-	int __pkn_err = 0;						\
 									\
-	__mte_enable_tco_async();					\
-	__raw_put_mem("str", *((type *)(__pkn_src)),			\
-		      (__force type *)(__pkn_dst), __pkn_err, K);	\
-	__mte_disable_tco_async();					\
-									\
-	if (unlikely(__pkn_err))					\
+	do {								\
+		__label__ __pkn_err;					\
+		__mte_enable_tco_async();				\
+		__raw_put_mem("str", *((type *)(__pkn_src)),		\
+			      (__force type *)(__pkn_dst), __pkn_err, K);\
+		__mte_disable_tco_async();				\
+		break;							\
+	__pkn_err:							\
+		__mte_disable_tco_async();				\
 		goto err_label;						\
+	} while (0);							\
 } while(0)
 
 extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
@@ -411,17 +422,8 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt
 }
 #define user_access_begin(a,b)	user_access_begin(a,b)
 #define user_access_end()	uaccess_ttbr0_disable()
-
-/*
- * The arm64 inline asms should learn abut asm goto, and we should
- * teach user_access_begin() about address masking.
- */
-#define unsafe_put_user(x, ptr, label) do {				\
-	int __upu_err = 0;						\
-	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), __upu_err, U);	\
-	if (__upu_err) goto label;					\
-} while (0)
-
+#define unsafe_put_user(x, ptr, label)					\
+	__raw_put_mem("sttr", x, uaccess_mask_ptr(ptr), label, U)
 #define unsafe_get_user(x, ptr, label)					\
 	__raw_get_mem("ldtr", x, uaccess_mask_ptr(ptr), label, U)
-- 
2.45.1.209.gc6f12300df