Date: Thu, 29 Sep 2022 00:54:17 +0800
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nathan Chancellor,
	Nick Desaulniers, Guo Ren
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	llvm@lists.linux.dev
Subject: Re: [PATCH v2 3/4] riscv: fix race when vmap stack overflow and remove shadow_stack
Message-ID:
References: <20220928162007.3791-1-jszhang@kernel.org>
	<20220928162007.3791-4-jszhang@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20220928162007.3791-4-jszhang@kernel.org>

On Thu, Sep 29, 2022 at 12:20:06AM +0800, Jisheng Zhang wrote:
> Currently, when detecting vmap stack overflow, riscv first switches
> to the so-called shadow stack, then uses this shadow stack to call
> get_overflow_stack() to get the overflow stack. However, there is
> a race here if two or more harts use the same shadow stack at the same
> time.
>
> To solve this race, we rely on two facts:
> 1. The content of the kernel thread pointer, i.e. the "tp" register, can
>    still be gotten from the CSR_SCRATCH register, thus we can clobber tp
>    under the condition that we restore tp from CSR_SCRATCH later.
>
> 2. Once vmap stack overflow happens, panic is coming soon, so there is
>    no performance concern at all; we don't need to define the overflow
>    stack as a percpu var, we can simplify it into a pointer array which
>    points to allocated pages.
>
> Thus we can use tp as a tmp register to get the cpu id and calculate
> the offset into the overflow stack pointer array for each cpu, without
> the shadow stack any more. Thus the race condition is removed as a side
> effect.
>
> NOTE: we could use a similar mechanism to let each cpu use a different
> shadow stack to fix the race condition, but if we can remove shadow
> stack usage entirely, why not.
>
> Signed-off-by: Jisheng Zhang
> Fixes: 31da94c25aea ("riscv: add VMAP_STACK overflow detection")
> ---
>  arch/riscv/include/asm/asm-prototypes.h |  1 -
>  arch/riscv/include/asm/thread_info.h    |  4 +-
>  arch/riscv/kernel/asm-offsets.c         |  1 +
>  arch/riscv/kernel/entry.S               | 56 ++++---------------------
>  arch/riscv/kernel/traps.c               | 31 ++++++++------
>  5 files changed, 29 insertions(+), 64 deletions(-)
>
> diff --git a/arch/riscv/include/asm/asm-prototypes.h b/arch/riscv/include/asm/asm-prototypes.h
> index ef386fcf3939..4a06fa0f6493 100644
> --- a/arch/riscv/include/asm/asm-prototypes.h
> +++ b/arch/riscv/include/asm/asm-prototypes.h
> @@ -25,7 +25,6 @@ DECLARE_DO_ERROR_INFO(do_trap_ecall_s);
>  DECLARE_DO_ERROR_INFO(do_trap_ecall_m);
>  DECLARE_DO_ERROR_INFO(do_trap_break);
>
> -asmlinkage unsigned long get_overflow_stack(void);
>  asmlinkage void handle_bad_stack(struct pt_regs *regs);
>
>  #endif /* _ASM_RISCV_PROTOTYPES_H */
> diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> index c970d41dc4c6..c604a5212a73 100644
> --- a/arch/riscv/include/asm/thread_info.h
> +++ b/arch/riscv/include/asm/thread_info.h
> @@ -28,14 +28,12 @@
>
>  #define THREAD_SHIFT (PAGE_SHIFT + THREAD_SIZE_ORDER)
>  #define OVERFLOW_STACK_SIZE SZ_4K
> -#define SHADOW_OVERFLOW_STACK_SIZE (1024)
> +#define OVERFLOW_STACK_SHIFT 12

Oops, this should be removed; will update it in a newer version after
collecting review comments.
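
As an aside for anyone reading along: the core of the new approach
described in the commit message above (and implemented in the entry.S
hunk further down) is roughly the following pseudo-C. This is only an
illustrative sketch, not code from the patch:

	/*
	 * Pseudo-C for the new handle_kernel_stack_overflow prologue:
	 * tp is briefly clobbered as a scratch register to index the
	 * per-cpu pointer array, then reloaded from CSR_SCRATCH, which
	 * (as the commit message notes) still holds the kernel tp here.
	 */
	cpu = ((struct task_struct *)tp)->thread_info.cpu;
	sp  = overflow_stack[cpu];		/* per-cpu overflow stack top */
	tp  = csr_read(CSR_SCRATCH);		/* restore tp */

No shadow stack is involved, so there is nothing shared between harts
to race on.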
>
>  #define IRQ_STACK_SIZE THREAD_SIZE
>
>  #ifndef __ASSEMBLY__
>
> -extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
> -
>  #include <asm/processor.h>
>  #include <asm/csr.h>
>
> diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> index df9444397908..62bf3bacc322 100644
> --- a/arch/riscv/kernel/asm-offsets.c
> +++ b/arch/riscv/kernel/asm-offsets.c
> @@ -37,6 +37,7 @@ void asm_offsets(void)
>  	OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
>  	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
>  	OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
> +	OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
>
>  	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
>  	OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);
>
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index a3e1ed2fa2ac..5a6171a90d81 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -223,54 +223,16 @@ END(ret_from_exception)
>
>  #ifdef CONFIG_VMAP_STACK
>  ENTRY(handle_kernel_stack_overflow)
> -	la sp, shadow_stack
> -	addi sp, sp, SHADOW_OVERFLOW_STACK_SIZE
> -
> -	//save caller register to shadow stack
> -	addi sp, sp, -(PT_SIZE_ON_STACK)
> -	REG_S x1, PT_RA(sp)
> -	REG_S x5, PT_T0(sp)
> -	REG_S x6, PT_T1(sp)
> -	REG_S x7, PT_T2(sp)
> -	REG_S x10, PT_A0(sp)
> -	REG_S x11, PT_A1(sp)
> -	REG_S x12, PT_A2(sp)
> -	REG_S x13, PT_A3(sp)
> -	REG_S x14, PT_A4(sp)
> -	REG_S x15, PT_A5(sp)
> -	REG_S x16, PT_A6(sp)
> -	REG_S x17, PT_A7(sp)
> -	REG_S x28, PT_T3(sp)
> -	REG_S x29, PT_T4(sp)
> -	REG_S x30, PT_T5(sp)
> -	REG_S x31, PT_T6(sp)
> -
> -	la ra, restore_caller_reg
> -	tail get_overflow_stack
> -
> -restore_caller_reg:
> -	//save per-cpu overflow stack
> -	REG_S a0, -8(sp)
> -	//restore caller register from shadow_stack
> -	REG_L x1, PT_RA(sp)
> -	REG_L x5, PT_T0(sp)
> -	REG_L x6, PT_T1(sp)
> -	REG_L x7, PT_T2(sp)
> -	REG_L x10, PT_A0(sp)
> -	REG_L x11, PT_A1(sp)
> -	REG_L x12, PT_A2(sp)
> -	REG_L x13, PT_A3(sp)
> -	REG_L x14, PT_A4(sp)
> -	REG_L x15, PT_A5(sp)
> -	REG_L x16, PT_A6(sp)
> -	REG_L x17, PT_A7(sp)
> -	REG_L x28, PT_T3(sp)
> -	REG_L x29, PT_T4(sp)
> -	REG_L x30, PT_T5(sp)
> -	REG_L x31, PT_T6(sp)
> +	la sp, overflow_stack
> +	/* use tp as tmp register since we can restore it from CSR_SCRATCH */
> +	REG_L tp, TASK_TI_CPU(tp)
> +	slli tp, tp, RISCV_LGPTR
> +	add tp, sp, tp
> +	REG_L sp, 0(tp)
> +
> +	/* restore tp */
> +	csrr tp, CSR_SCRATCH
>
> -	//load per-cpu overflow stack
> -	REG_L sp, -8(sp)
>  	addi sp, sp, -(PT_SIZE_ON_STACK)
>
>  	//save context to overflow stack
> diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> index 73f06cd149d9..b6c64f0fb70f 100644
> --- a/arch/riscv/kernel/traps.c
> +++ b/arch/riscv/kernel/traps.c
> @@ -216,23 +216,12 @@ int is_valid_bugaddr(unsigned long pc)
>  #endif /* CONFIG_GENERIC_BUG */
>
>  #ifdef CONFIG_VMAP_STACK
> -static DEFINE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)],
> -		      overflow_stack)__aligned(16);
> -/*
> - * shadow stack, handled_ kernel_ stack_ overflow(in kernel/entry.S) is used
> - * to get per-cpu overflow stack(get_overflow_stack).
> - */
> -long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE/sizeof(long)];
> -asmlinkage unsigned long get_overflow_stack(void)
> -{
> -	return (unsigned long)this_cpu_ptr(overflow_stack) +
> -		OVERFLOW_STACK_SIZE;
> -}
> +void *overflow_stack[NR_CPUS] __ro_after_init __aligned(16);
>
>  asmlinkage void handle_bad_stack(struct pt_regs *regs)
>  {
>  	unsigned long tsk_stk = (unsigned long)current->stack;
> -	unsigned long ovf_stk = (unsigned long)this_cpu_ptr(overflow_stack);
> +	unsigned long ovf_stk = (unsigned long)overflow_stack[raw_smp_processor_id()];
>
>  	console_verbose();
>
> @@ -248,4 +237,20 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
>  	for (;;)
>  		wait_for_interrupt();
>  }
> +
> +static int __init alloc_overflow_stacks(void)
> +{
> +	u8 *s;
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		s = (u8 *)__get_free_pages(GFP_KERNEL, get_order(OVERFLOW_STACK_SIZE));
> +		if (WARN_ON(!s))
> +			return -ENOMEM;
> +		overflow_stack[cpu] = &s[OVERFLOW_STACK_SIZE];

Since overflow_stack[cpu] points to the top of the stack, we need to
update the ovf_stk dumping in handle_bad_stack(). Will take care of
this in a newer version.

> +		printk("%px\n", overflow_stack[cpu]);

Forgot to remove this printk :(
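
For reference, a minimal sketch of what those two fixups could look
like in the respin (an illustration only, not the actual v3 patch):
derive the stack base from the stored top pointer when dumping, and
simply drop the debug printk from alloc_overflow_stacks():

	/* hypothetical v3 adjustment in handle_bad_stack() */
	unsigned long tsk_stk = (unsigned long)current->stack;
	/*
	 * overflow_stack[cpu] stores the stack *top*, so subtract the
	 * size to recover the base used for the range dump.
	 */
	unsigned long ovf_stk = (unsigned long)overflow_stack[raw_smp_processor_id()]
				- OVERFLOW_STACK_SIZE;

The printk("%px\n", ...) line in alloc_overflow_stacks() then just goes
away, since it was only a debug leftover.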