Date: Sat, 18 Jun 2022 10:44:50 +0800
Subject: Re: [PATCH -next v5 1/8] arm64: extable: add new extable type EX_TYPE_KACCESS_ERR_ZERO support
To: Mark Rutland
CC: James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, H. Peter Anvin, Kefeng Wang, Xie XiuQi, Guohanjun
References: <20220528065056.1034168-1-tongtiangen@huawei.com> <20220528065056.1034168-2-tongtiangen@huawei.com>
From: Tong Tiangen
X-Mailing-List: linux-kernel@vger.kernel.org

On 2022/6/17 16:23, Mark Rutland wrote:
> On Sat, May 28, 2022 at 06:50:49AM +0000, Tong Tiangen wrote:
>> Currently, the extable type EX_TYPE_UACCESS_ERR_ZERO is used by
>> __get/put_kernel_nofault(), but those helpers are not uaccess type, so we
>> add a new extable type EX_TYPE_KACCESS_ERR_ZERO which can be used by
>> __get/put_kernel_nofault().
>>
>> This is also to prepare for distinguishing the two types in machine check
>> safe process.
>>
>> Suggested-by: Mark Rutland
>> Signed-off-by: Tong Tiangen
>
> This looks good to me, so modulo one nit below:
>
> Acked-by: Mark Rutland
>
>> ---
>>  arch/arm64/include/asm/asm-extable.h | 13 ++++
>>  arch/arm64/include/asm/uaccess.h     | 94 ++++++++++++++--------------
>>  arch/arm64/mm/extable.c              |  1 +
>>  3 files changed, 61 insertions(+), 47 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index c39f2437e08e..56ebe183e78b 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -7,6 +7,7 @@
>>  #define EX_TYPE_BPF                     2
>>  #define EX_TYPE_UACCESS_ERR_ZERO        3
>>  #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD  4
>> +#define EX_TYPE_KACCESS_ERR_ZERO        5
>
> Could we please renumber this so the UACCESS and KACCESS definitions are next
> to one another, i.e.
>
> #define EX_TYPE_BPF                     2
> #define EX_TYPE_UACCESS_ERR_ZERO        3
> #define EX_TYPE_KACCESS_ERR_ZERO        4
> #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD  5
>
> Thanks,
> Mark.

OK, it's cleaner.

Thanks,
Tong.

>
>>
>>  #ifdef __ASSEMBLY__
>>
>> @@ -73,9 +74,21 @@
>>              EX_DATA_REG(ZERO, zero) \
>>              ")")
>>
>> +#define _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, zero) \
>> +        __DEFINE_ASM_GPR_NUMS \
>> +        __ASM_EXTABLE_RAW(#insn, #fixup, \
>> +                          __stringify(EX_TYPE_KACCESS_ERR_ZERO), \
>> +                          "(" \
>> +                          EX_DATA_REG(ERR, err) " | " \
>> +                          EX_DATA_REG(ZERO, zero) \
>> +                          ")")
>> +
>>  #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \
>>          _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr)
>>
>> +#define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err) \
>> +        _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr)
>> +
>>  #define EX_DATA_REG_DATA_SHIFT  0
>>  #define EX_DATA_REG_DATA        GENMASK(4, 0)
>>  #define EX_DATA_REG_ADDR_SHIFT  5
>> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
>> index 63f9c828f1a7..2fc9f0861769 100644
>> --- a/arch/arm64/include/asm/uaccess.h
>> +++ b/arch/arm64/include/asm/uaccess.h
>> @@ -232,34 +232,34 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
>>   * The "__xxx_error" versions set the third argument to -EFAULT if an error
>>   * occurs, and leave it unchanged on success.
>>   */
>> -#define __get_mem_asm(load, reg, x, addr, err) \
>> +#define __get_mem_asm(load, reg, x, addr, err, type) \
>>          asm volatile( \
>>          "1: " load " " reg "1, [%2]\n" \
>>          "2:\n" \
>> -        _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \
>> +        _ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \
>>          : "+r" (err), "=&r" (x) \
>>          : "r" (addr))
>>
>> -#define __raw_get_mem(ldr, x, ptr, err) \
>> -do { \
>> -        unsigned long __gu_val; \
>> -        switch (sizeof(*(ptr))) { \
>> -        case 1: \
>> -                __get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err)); \
>> -                break; \
>> -        case 2: \
>> -                __get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err)); \
>> -                break; \
>> -        case 4: \
>> -                __get_mem_asm(ldr, "%w", __gu_val, (ptr), (err)); \
>> -                break; \
>> -        case 8: \
>> -                __get_mem_asm(ldr, "%x", __gu_val, (ptr), (err)); \
>> -                break; \
>> -        default: \
>> -                BUILD_BUG(); \
>> -        } \
>> -        (x) = (__force __typeof__(*(ptr)))__gu_val; \
>> +#define __raw_get_mem(ldr, x, ptr, err, type) \
>> +do { \
>> +        unsigned long __gu_val; \
>> +        switch (sizeof(*(ptr))) { \
>> +        case 1: \
>> +                __get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 2: \
>> +                __get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 4: \
>> +                __get_mem_asm(ldr, "%w", __gu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 8: \
>> +                __get_mem_asm(ldr, "%x", __gu_val, (ptr), (err), type); \
>> +                break; \
>> +        default: \
>> +                BUILD_BUG(); \
>> +        } \
>> +        (x) = (__force __typeof__(*(ptr)))__gu_val; \
>>  } while (0)
>>
>>  /*
>> @@ -274,7 +274,7 @@ do { \
>>          __chk_user_ptr(ptr); \
>>          \
>>          uaccess_ttbr0_enable(); \
>> -        __raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err); \
>> +        __raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err, U); \
>>          uaccess_ttbr0_disable(); \
>>          \
>>          (x) = __rgu_val; \
>> @@ -314,40 +314,40 @@ do { \
>>          \
>>          __uaccess_enable_tco_async(); \
>>          __raw_get_mem("ldr", *((type *)(__gkn_dst)), \
>> -                      (__force type *)(__gkn_src), __gkn_err); \
>> +                      (__force type *)(__gkn_src), __gkn_err, K); \
>>          __uaccess_disable_tco_async(); \
>>          \
>>          if (unlikely(__gkn_err)) \
>>                  goto err_label; \
>>  } while (0)
>>
>> -#define __put_mem_asm(store, reg, x, addr, err) \
>> +#define __put_mem_asm(store, reg, x, addr, err, type) \
>>          asm volatile( \
>>          "1: " store " " reg "1, [%2]\n" \
>>          "2:\n" \
>> -        _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
>> +        _ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0) \
>>          : "+r" (err) \
>>          : "r" (x), "r" (addr))
>>
>> -#define __raw_put_mem(str, x, ptr, err) \
>> -do { \
>> -        __typeof__(*(ptr)) __pu_val = (x); \
>> -        switch (sizeof(*(ptr))) { \
>> -        case 1: \
>> -                __put_mem_asm(str "b", "%w", __pu_val, (ptr), (err)); \
>> -                break; \
>> -        case 2: \
>> -                __put_mem_asm(str "h", "%w", __pu_val, (ptr), (err)); \
>> -                break; \
>> -        case 4: \
>> -                __put_mem_asm(str, "%w", __pu_val, (ptr), (err)); \
>> -                break; \
>> -        case 8: \
>> -                __put_mem_asm(str, "%x", __pu_val, (ptr), (err)); \
>> -                break; \
>> -        default: \
>> -                BUILD_BUG(); \
>> -        } \
>> +#define __raw_put_mem(str, x, ptr, err, type) \
>> +do { \
>> +        __typeof__(*(ptr)) __pu_val = (x); \
>> +        switch (sizeof(*(ptr))) { \
>> +        case 1: \
>> +                __put_mem_asm(str "b", "%w", __pu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 2: \
>> +                __put_mem_asm(str "h", "%w", __pu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 4: \
>> +                __put_mem_asm(str, "%w", __pu_val, (ptr), (err), type); \
>> +                break; \
>> +        case 8: \
>> +                __put_mem_asm(str, "%x", __pu_val, (ptr), (err), type); \
>> +                break; \
>> +        default: \
>> +                BUILD_BUG(); \
>> +        } \
>>  } while (0)
>>
>>  /*
>> @@ -362,7 +362,7 @@ do { \
>>          __chk_user_ptr(__rpu_ptr); \
>>          \
>>          uaccess_ttbr0_enable(); \
>> -        __raw_put_mem("sttr", __rpu_val, __rpu_ptr, err); \
>> +        __raw_put_mem("sttr", __rpu_val, __rpu_ptr, err, U); \
>>          uaccess_ttbr0_disable(); \
>>  } while (0)
>>
>> @@ -400,7 +400,7 @@ do { \
>>          \
>>          __uaccess_enable_tco_async(); \
>>          __raw_put_mem("str", *((type *)(__pkn_src)), \
>> -                      (__force type *)(__pkn_dst), __pkn_err); \
>> +                      (__force type *)(__pkn_dst), __pkn_err, K); \
>>          __uaccess_disable_tco_async(); \
>>          \
>>          if (unlikely(__pkn_err)) \
>>                  goto err_label; \
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 489455309695..056591e5ca80 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -77,6 +77,7 @@ bool fixup_exception(struct pt_regs *regs)
>>          case EX_TYPE_BPF:
>>                  return ex_handler_bpf(ex, regs);
>>          case EX_TYPE_UACCESS_ERR_ZERO:
>> +        case EX_TYPE_KACCESS_ERR_ZERO:
>>                  return ex_handler_uaccess_err_zero(ex, regs);
>>          case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
>>                  return ex_handler_load_unaligned_zeropad(ex, regs);
>> --
>> 2.25.1
>>
>
> .
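
For readers skimming the patch, the key mechanism is the extra "type" parameter that is token-pasted into the extable macro name: __raw_get_mem()/__raw_put_mem() emit UACCESS entries when called with U (from __raw_get_user()/__raw_put_user()) and KACCESS entries when called with K (from __get_kernel_nofault()/__put_kernel_nofault()). Below is a minimal stand-alone C sketch of that token-pasting idea only; the EXTABLE_* and GET_MEM_EXTABLE names are illustrative stand-ins, not kernel APIs, and the numeric values assume the renumbering Mark suggested.

#include <stdio.h>

/* Values follow the renumbering suggested in the review above. */
#define EX_TYPE_UACCESS_ERR_ZERO 3
#define EX_TYPE_KACCESS_ERR_ZERO 4

/* Stand-ins for the real _ASM_EXTABLE_{U,K}ACCESS_ERR_ZERO() macros. */
#define EXTABLE_UACCESS_ERR_ZERO() EX_TYPE_UACCESS_ERR_ZERO
#define EXTABLE_KACCESS_ERR_ZERO() EX_TYPE_KACCESS_ERR_ZERO

/*
 * Mirrors the shape of __get_mem_asm(): "type" (U or K) is pasted into
 * the macro name, so a single accessor path can emit either extable
 * entry type.
 */
#define GET_MEM_EXTABLE(type) EXTABLE_##type##ACCESS_ERR_ZERO()

int main(void)
{
        /* __raw_get_user() would pass U; __get_kernel_nofault() would pass K. */
        printf("uaccess extable entry type: %d\n", GET_MEM_EXTABLE(U));
        printf("kaccess extable entry type: %d\n", GET_MEM_EXTABLE(K));
        return 0;
}

Keeping the two entry types distinct is, per the commit message, preparation for the machine-check-safe work: a later fixup path can tell kernel-memory accessors apart from genuine uaccess sites, even though both currently share ex_handler_uaccess_err_zero() in fixup_exception().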