Date: Tue, 6 Dec 2022 09:41:37 +0900
From: Masami Hiramatsu (Google)
To: Mark Rutland
Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh, Marc Zyngier,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Sandeepa Prabhu
Subject: Re: [PATCH v2 1/3] arm64: Prohibit instrumentation on arch_stack_walk()
Message-Id: <20221206094137.389e4c374f4be060af19f33a@kernel.org>
References: <166994750386.439920.1754385804350980158.stgit@devnote3>
        <166994751368.439920.3236636557520824664.stgit@devnote3>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 2 Dec 2022 13:17:30 +0000
Mark Rutland wrote:

> On Fri, Dec 02, 2022 at 11:18:33AM +0900, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google)
> > 
> > Mark arch_stack_walk() as noinstr instead of notrace and inline functions
> > called from arch_stack_walk() as __always_inline so that user does not
> > put any instrumentations on it, because this function can be used from
> > return_address() which is used by lockdep.
> 
> Hmmm... since arch_stack_walk is marked as notrace, that will be prohibited by
> default unless the kernel was built with CONFIG_KPROBE_EVENTS_ON_NOTRACE=y,
> and the commit message for that says:
> 
> │ This option disables such protection and allows you to put kprobe │
> │ events on ftrace functions for debugging ftrace by itself.        │
> │ Note that this might let you shoot yourself in the foot.
> 
> ... and IIUC we generally don't expect people to set that, and that might be
> worth calling out in the commit message.

Yeah, but that is the feature for kprobe events, and not applied to the
kprobes itself. Maybe I should use a test kprobe module to put a probe
on it.
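Something like the minimal module below would do it (a rough, untested
sketch modeled on samples/kprobes/kprobe_example.c; the names are just
placeholders). Unlike the kprobe_events interface, register_kprobe() is
not filtered by CONFIG_KPROBE_EVENTS_ON_NOTRACE; it only rejects symbols
that are on the kprobe blacklist.

#include <linux/module.h>
#include <linux/kprobes.h>

/* Placeholder target: the symbol under discussion. */
static struct kprobe test_kp = {
	.symbol_name = "arch_stack_walk",
};

static int test_kp_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit: %s\n", p->symbol_name);
	return 0;
}

static int __init test_kp_init(void)
{
	test_kp.pre_handler = test_kp_pre;
	/*
	 * Fails with -EINVAL if the symbol is blacklisted
	 * (e.g. once arch_stack_walk() becomes noinstr).
	 */
	return register_kprobe(&test_kp);
}

static void __exit test_kp_exit(void)
{
	unregister_kprobe(&test_kp);
}

module_init(test_kp_init);
module_exit(test_kp_exit);
MODULE_LICENSE("GPL");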
> > Without this, if the kernel built with CONFIG_LOCKDEP=y, just probing
> > arch_stack_walk() via /kprobe_events will crash the kernel on
> > arm64.
> > 
> >  # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
> >  # echo 1 > ${TRACEFS}/events/kprobes/enable
> 
> I had a go at testing this patch, and it fixes the crash with the reproducer
> above, but there are plenty of other instances in stacktrace.c that lead to the
> same sort of crash, e.g.
> 
>  # echo p stackinfo_get_task >> ${TRACEFS}/kprobe_events
>  # echo 1 > ${TRACEFS}/events/kprobes/enable

Oops, thanks for pointing! Hmm, I thought stackinfo_get_task() is an
inlined function usually. Maybe we should make it nokprobe_inline.
But this is just one case. I need to scan all symbols to trace...
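(For reference, nokprobe_inline is defined roughly as below; this is
paraphrased from include/linux/kprobes.h from memory, not quoted
verbatim. It is also why the v2 change below uses __always_inline
directly: with CONFIG_KPROBES=n the annotation falls back to a plain
'inline' hint that the compiler is free to ignore.)

#ifdef CONFIG_KPROBES
#define nokprobe_inline	__always_inline
#else
#define nokprobe_inline	inline
#endif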
> 
> ... so I think there's more that we need to do to fix this generally.
> 
> Note: I found interesting functions to try tracing by looking at the output of:
> 
>   aarch64-linux-objdump -t arch/arm64/kernel/stacktrace.o | grep -w '.text'
> 
> That all said, I think this patch is nice-to-have, and that we can address the
> other cases as a follow-up, so for this patch as-is (with or without some
> wording regarding CONFIG_KPROBE_EVENTS_ON_NOTRACE):
> 
> Acked-by: Mark Rutland

Thanks!

> 
> Mark.
> 
> > kprobes: Failed to recover from reentered kprobes.
> > kprobes: Dump kprobe:
> > .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> > ------------[ cut here ]------------
> > kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> > kprobes: Failed to recover from reentered kprobes.
> > kprobes: Dump kprobe:
> > .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> > ------------[ cut here ]------------
> > kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> > PREEMPT SMP
> > Modules linked in:
> > CPU: 0 PID: 17 Comm: migration/0 Tainted: G N 6.1.0-rc5+ #6
> > Hardware name: linux,dummy-virt (DT)
> > Stopper: 0x0 <- 0x0
> > pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > pc : kprobe_breakpoint_handler+0x178/0x17c
> > lr : kprobe_breakpoint_handler+0x178/0x17c
> > sp : ffff8000080d3090
> > x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
> > x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
> > x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
> > x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
> > x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
> > x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
> > x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
> > x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
> > x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
> > x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
> > Call trace:
> > kprobes: Failed to recover from reentered kprobes.
> > kprobes: Dump kprobe:
> > .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> > ------------[ cut here ]------------
> > kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> > 
> > Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Masami Hiramatsu (Google)
> > ---
> >  Changes in v2:
> >   - Use noinstr instead of NOKPROBE_SYMBOL()
> >   - Use __always_inline because nokprobe_inline will be changed if
> >     CONFIG_KPROBES=n.
> >   - Fix indentation.
> > ---
> >  arch/arm64/kernel/stacktrace.c |   10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> > index 634279b3b03d..117e2c180f3c 100644
> > --- a/arch/arm64/kernel/stacktrace.c
> > +++ b/arch/arm64/kernel/stacktrace.c
> > @@ -23,8 +23,8 @@
> >   *
> >   * The regs must be on a stack currently owned by the calling task.
> >   */
> > -static inline void unwind_init_from_regs(struct unwind_state *state,
> > -					 struct pt_regs *regs)
> > +static __always_inline void unwind_init_from_regs(struct unwind_state *state,
> > +						  struct pt_regs *regs)
> >  {
> >  	unwind_init_common(state, current);
> >  
> > @@ -58,8 +58,8 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
> >   * duration of the unwind, or the unwind will be bogus. It is never valid to
> >   * call this for the current task.
> >   */
> > -static inline void unwind_init_from_task(struct unwind_state *state,
> > -					 struct task_struct *task)
> > +static __always_inline void unwind_init_from_task(struct unwind_state *state,
> > +						  struct task_struct *task)
> >  {
> >  	unwind_init_common(state, task);
> >  
> > @@ -186,7 +186,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
> >  		: stackinfo_get_unknown();		\
> >  })
> >  
> > -noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
> > +noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
> >  			      void *cookie, struct task_struct *task,
> >  			      struct pt_regs *regs)
> >  {
> > 

-- 
Masami Hiramatsu (Google)