From: Clément Léger <cleger@rivosinc.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Anup Patel, Atish Patra
Cc: Clément Léger, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Subject: [PATCH 1/5] riscv: use ".L" local labels in assembly when applicable
Date: Wed, 4 Oct 2023 16:30:50 +0200
Message-ID: <20231004143054.482091-2-cleger@rivosinc.com>
In-Reply-To: <20231004143054.482091-1-cleger@rivosinc.com>
References: <20231004143054.482091-1-cleger@rivosinc.com>

For the sake of consistency, use ".L" local labels in assembly where
applicable. This also avoids confusing kprobes: the size of a function
is computed by checking where the next visible symbol is located, so a
non-local label inside a function body can make the computed size much
shorter than expected, causing kprobes to fail to apply at the
specified offset.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
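(Illustration for reviewers, not part of the patch: with GNU as, any
label without the ".L" prefix is emitted into the object file's symbol
table even when it is not global. In a hypothetical function such as
the following, kprobes would compute the size of "func" as ending at
"skip":

	func:
		beqz	a0, skip	/* "skip" becomes a visible local symbol */
		addi	a0, a0, 1
	skip:
		ret

Renaming the label to ".Lskip" keeps it assembler-local, so no symbol
is emitted and the full function size is seen. The difference can be
checked with "nm" on the object file: the plain label shows up as a
local text ('t') symbol, while the ".L" label never appears.)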
 arch/riscv/kernel/entry.S  | 10 +++---
 arch/riscv/kernel/head.S   | 18 ++++++-------
 arch/riscv/kernel/mcount.S | 10 +++---
 arch/riscv/lib/memmove.S   | 54 +++++++++++++++++++-------------------
 4 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 143a2bb3e697..64ac0dd6176b 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -21,9 +21,9 @@ SYM_CODE_START(handle_exception)
 	 * register will contain 0, and we should continue on the current TP.
 	 */
 	csrrw tp, CSR_SCRATCH, tp
-	bnez tp, _save_context
+	bnez tp, .Lsave_context
 
-_restore_kernel_tpsp:
+.Lrestore_kernel_tpsp:
 	csrr tp, CSR_SCRATCH
 	REG_S sp, TASK_TI_KERNEL_SP(tp)
 
@@ -35,7 +35,7 @@ _restore_kernel_tpsp:
 	REG_L sp, TASK_TI_KERNEL_SP(tp)
 #endif
 
-_save_context:
+.Lsave_context:
 	REG_S sp, TASK_TI_USER_SP(tp)
 	REG_L sp, TASK_TI_KERNEL_SP(tp)
 	addi sp, sp, -(PT_SIZE_ON_STACK)
@@ -205,10 +205,10 @@ SYM_CODE_START_LOCAL(handle_kernel_stack_overflow)
 	REG_S x30, PT_T5(sp)
 	REG_S x31, PT_T6(sp)
 
-	la ra, restore_caller_reg
+	la ra, .Lrestore_caller_reg
 	tail get_overflow_stack
 
-restore_caller_reg:
+.Lrestore_caller_reg:
 	//save per-cpu overflow stack
 	REG_S a0, -8(sp)
 	//restore caller register from shadow_stack
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 3710ea5d160f..7e1b83f18a50 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -168,12 +168,12 @@ secondary_start_sbi:
 	XIP_FIXUP_OFFSET a0
 	call relocate_enable_mmu
 #endif
-	call setup_trap_vector
+	call .Lsetup_trap_vector
 	tail smp_callin
 #endif /* CONFIG_SMP */
 
 .align 2
-setup_trap_vector:
+.Lsetup_trap_vector:
 	/* Set trap vector to exception handler */
 	la a0, handle_exception
 	csrw CSR_TVEC, a0
@@ -210,7 +210,7 @@ ENTRY(_start_kernel)
 	 * not implement PMPs, so we set up a quick trap handler to just skip
 	 * touching the PMPs on any trap.
 	 */
-	la a0, pmp_done
+	la a0, .Lpmp_done
 	csrw CSR_TVEC, a0
 
 	li a0, -1
@@ -218,7 +218,7 @@ ENTRY(_start_kernel)
 	li a0, (PMP_A_NAPOT | PMP_R | PMP_W | PMP_X)
 	csrw CSR_PMPCFG0, a0
 .align 2
-pmp_done:
+.Lpmp_done:
 
 	/*
 	 * The hartid in a0 is expected later on, and we have no firmware
@@ -282,12 +282,12 @@ pmp_done:
 	/* Clear BSS for flat non-ELF images */
 	la a3, __bss_start
 	la a4, __bss_stop
-	ble a4, a3, clear_bss_done
-clear_bss:
+	ble a4, a3, .Lclear_bss_done
+.Lclear_bss:
 	REG_S zero, (a3)
 	add a3, a3, RISCV_SZPTR
-	blt a3, a4, clear_bss
-clear_bss_done:
+	blt a3, a4, .Lclear_bss
+.Lclear_bss_done:
 #endif
 	la a2, boot_cpu_hartid
 	XIP_FIXUP_OFFSET a2
@@ -311,7 +311,7 @@ clear_bss_done:
 	call relocate_enable_mmu
 #endif /* CONFIG_MMU */
 
-	call setup_trap_vector
+	call .Lsetup_trap_vector
 	/* Restore C environment */
 	la tp, init_task
 	la sp, init_thread_union + THREAD_SIZE
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index 8818a8fa9ff3..ab4dd0594fe7 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -85,16 +85,16 @@ ENTRY(MCOUNT_NAME)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	la t0, ftrace_graph_return
 	REG_L t1, 0(t0)
-	bne t1, t4, do_ftrace_graph_caller
+	bne t1, t4, .Ldo_ftrace_graph_caller
 
 	la t3, ftrace_graph_entry
 	REG_L t2, 0(t3)
 	la t6, ftrace_graph_entry_stub
-	bne t2, t6, do_ftrace_graph_caller
+	bne t2, t6, .Ldo_ftrace_graph_caller
 #endif
 
 	la t3, ftrace_trace_function
 	REG_L t5, 0(t3)
-	bne t5, t4, do_trace
+	bne t5, t4, .Ldo_trace
 	ret
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -102,7 +102,7 @@ ENTRY(MCOUNT_NAME)
 	 * A pseudo representation for the function graph tracer:
 	 * prepare_to_return(&ra_to_caller_of_caller, ra_to_caller)
 	 */
-do_ftrace_graph_caller:
+.Ldo_ftrace_graph_caller:
 	addi a0, s0, -SZREG
 	mv a1, ra
 #ifdef HAVE_FUNCTION_GRAPH_FP_TEST
@@ -118,7 +118,7 @@ do_ftrace_graph_caller:
 	 * A pseudo representation for the function tracer:
 	 * (*ftrace_trace_function)(ra_to_caller, ra_to_caller_of_caller)
 	 */
-do_trace:
+.Ldo_trace:
 	REG_L a1, -SZREG(s0)
 	mv a0, ra
 
diff --git a/arch/riscv/lib/memmove.S b/arch/riscv/lib/memmove.S
index 838ff2022fe3..1930b388c3a0 100644
--- a/arch/riscv/lib/memmove.S
+++ b/arch/riscv/lib/memmove.S
@@ -26,8 +26,8 @@ SYM_FUNC_START_WEAK(memmove)
 	 */
 
 	/* Return if nothing to do */
-	beq a0, a1, return_from_memmove
-	beqz a2, return_from_memmove
+	beq a0, a1, .Lreturn_from_memmove
+	beqz a2, .Lreturn_from_memmove
 
 	/*
 	 * Register Uses
@@ -60,7 +60,7 @@ SYM_FUNC_START_WEAK(memmove)
 	 * small enough not to bother.
 	 */
 	andi t0, a2, -(2 * SZREG)
-	beqz t0, byte_copy
+	beqz t0, .Lbyte_copy
 
 	/*
 	 * Now solve for t5 and t6.
@@ -87,14 +87,14 @@ SYM_FUNC_START_WEAK(memmove)
 	 */
 	xor  t0, a0, a1
 	andi t1, t0, (SZREG - 1)
-	beqz t1, coaligned_copy
+	beqz t1, .Lcoaligned_copy
 	/* Fall through to misaligned fixup copy */
 
-misaligned_fixup_copy:
-	bltu a1, a0, misaligned_fixup_copy_reverse
+.Lmisaligned_fixup_copy:
+	bltu a1, a0, .Lmisaligned_fixup_copy_reverse
 
-misaligned_fixup_copy_forward:
-	jal  t0, byte_copy_until_aligned_forward
+.Lmisaligned_fixup_copy_forward:
+	jal  t0, .Lbyte_copy_until_aligned_forward
 
 	andi a5, a1, (SZREG - 1) /* Find the alignment offset of src (a1) */
 	slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
@@ -153,10 +153,10 @@ misaligned_fixup_copy_forward:
 	mv    t3, t6 /* Fix the dest pointer in case the loop was broken */
 
 	add  a1, t3, a5 /* Restore the src pointer */
-	j byte_copy_forward /* Copy any remaining bytes */
+	j .Lbyte_copy_forward /* Copy any remaining bytes */
 
-misaligned_fixup_copy_reverse:
-	jal  t0, byte_copy_until_aligned_reverse
+.Lmisaligned_fixup_copy_reverse:
+	jal  t0, .Lbyte_copy_until_aligned_reverse
 
 	andi a5, a4, (SZREG - 1) /* Find the alignment offset of src (a4) */
 	slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
@@ -215,18 +215,18 @@ misaligned_fixup_copy_reverse:
 	mv    t4, t5 /* Fix the dest pointer in case the loop was broken */
 
 	add  a4, t4, a5 /* Restore the src pointer */
-	j byte_copy_reverse /* Copy any remaining bytes */
+	j .Lbyte_copy_reverse /* Copy any remaining bytes */
 
 /*
  * Simple copy loops for SZREG co-aligned memory locations.
  * These also make calls to do byte copies for any unaligned
  * data at their terminations.
  */
-coaligned_copy:
-	bltu a1, a0, coaligned_copy_reverse
+.Lcoaligned_copy:
+	bltu a1, a0, .Lcoaligned_copy_reverse
 
-coaligned_copy_forward:
-	jal t0, byte_copy_until_aligned_forward
+.Lcoaligned_copy_forward:
+	jal t0, .Lbyte_copy_until_aligned_forward
 
 1:
 	REG_L t1, ( 0 * SZREG)(a1)
@@ -235,10 +235,10 @@ coaligned_copy_forward:
 	REG_S t1, (-1 * SZREG)(t3)
 	bne t3, t6, 1b
 
-	j byte_copy_forward /* Copy any remaining bytes */
+	j .Lbyte_copy_forward /* Copy any remaining bytes */
 
-coaligned_copy_reverse:
-	jal t0, byte_copy_until_aligned_reverse
+.Lcoaligned_copy_reverse:
+	jal t0, .Lbyte_copy_until_aligned_reverse
 
 1:
 	REG_L t1, (-1 * SZREG)(a4)
@@ -247,7 +247,7 @@ coaligned_copy_reverse:
 	REG_S t1, ( 0 * SZREG)(t4)
 	bne t4, t5, 1b
 
-	j byte_copy_reverse /* Copy any remaining bytes */
+	j .Lbyte_copy_reverse /* Copy any remaining bytes */
 
 /*
  * These are basically sub-functions within the function.  They
@@ -258,7 +258,7 @@ coaligned_copy_reverse:
  * up from where they were left and we avoid code duplication
  * without any overhead except the call in and return jumps.
  */
-byte_copy_until_aligned_forward:
+.Lbyte_copy_until_aligned_forward:
 	beq t3, t5, 2f
 1:
 	lb t1, 0(a1)
@@ -269,7 +269,7 @@ byte_copy_until_aligned_forward:
 2:
 	jalr zero, 0x0(t0) /* Return to multibyte copy loop */
 
-byte_copy_until_aligned_reverse:
+.Lbyte_copy_until_aligned_reverse:
 	beq t4, t6, 2f
 1:
 	lb t1, -1(a4)
@@ -285,10 +285,10 @@ byte_copy_until_aligned_reverse:
 * These will byte copy until they reach the end of data to copy.
 * At that point, they will call to return from memmove.
 */
-byte_copy:
-	bltu a1, a0, byte_copy_reverse
+.Lbyte_copy:
+	bltu a1, a0, .Lbyte_copy_reverse
 
-byte_copy_forward:
+.Lbyte_copy_forward:
 	beq t3, t4, 2f
 1:
 	lb t1, 0(a1)
@@ -299,7 +299,7 @@ byte_copy_forward:
 2:
 	ret
 
-byte_copy_reverse:
+.Lbyte_copy_reverse:
 	beq t4, t3, 2f
 1:
 	lb t1, -1(a4)
@@ -309,7 +309,7 @@ byte_copy_reverse:
 	bne t4, t3, 1b
 2:
 
-return_from_memmove:
+.Lreturn_from_memmove:
 	ret
 
 SYM_FUNC_END(memmove)
-- 
2.42.0