From: madvenka@linux.microsoft.com
To: mark.rutland@arm.com, broonie@kernel.org, jpoimboe@redhat.com,
    ardb@kernel.org, nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
    catalin.marinas@arm.com, will@kernel.org, jmorris@namei.org,
    pasha.tatashin@soleen.com, jthierry@redhat.com,
    linux-arm-kernel@lists.infradead.org, live-patching@vger.kernel.org,
    linux-kernel@vger.kernel.org, madvenka@linux.microsoft.com
Subject: [RFC PATCH v8 3/4] arm64: Introduce stack trace reliability checks in the unwinder
Date: Thu, 12 Aug 2021 14:06:02 -0500
Message-Id: <20210812190603.25326-4-madvenka@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210812190603.25326-1-madvenka@linux.microsoft.com>
References: <20210812190603.25326-1-madvenka@linux.microsoft.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

There are some kernel features and conditions that make a stack trace
unreliable. Callers may require the unwinder to detect these cases,
e.g., livepatch.

Introduce a new function called unwind_is_reliable() that detects these
cases and returns a boolean.

Introduce a new argument to unwind() called "need_reliable" so a caller
can tell unwind() that it requires a reliable stack trace. For such a
caller, any unreliability in the stack trace must be treated as a fatal
error and the unwind must be aborted.

Call unwind_is_reliable() from unwind_consume() like this:

	if (frame->need_reliable && !unwind_is_reliable(frame)) {
		frame->failed = true;
		return false;
	}

In other words, if the return PC in the stackframe falls in unreliable
code, then it cannot be unwound reliably.

arch_stack_walk() passes "false" for need_reliable because its callers
don't care about reliability. arch_stack_walk() is used for debug and
test purposes.

Introduce arch_stack_walk_reliable() for ARM64.
It works like arch_stack_walk(), except for two things:

	- It passes "true" for need_reliable.

	- It returns -EINVAL if unwind() says that the stack trace is
	  unreliable.

Introduce the first reliability check in unwind_is_reliable(): if a
return PC is not a valid kernel text address, consider the stack trace
unreliable. It could be some generated code.

Other reliability checks will be added in the future. Until all of the
checks are in place, arch_stack_walk_reliable() may not be used by
livepatch. But it may be used by debug and test code.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  4 ++
 arch/arm64/kernel/stacktrace.c      | 63 +++++++++++++++++++++++++++--
 2 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 407007376e97..65ea151da5da 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -53,6 +53,9 @@ struct stack_info {
  *               replacement lr value in the ftrace graph stack.
  *
  * @failed:      Unwind failed.
+ *
+ * @need_reliable The caller needs a reliable stack trace. Treat any
+ *               unreliability as a fatal error.
  */
 struct stackframe {
 	struct task_struct *task;
@@ -65,6 +68,7 @@ struct stackframe {
 	int graph;
 #endif
 	bool failed;
+	bool need_reliable;
 };
 
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index ec8f5163c4d0..b60f8a20ba64 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -34,7 +34,8 @@
 
 static void notrace unwind_start(struct stackframe *frame,
 				 struct task_struct *task,
-				 unsigned long fp, unsigned long pc)
+				 unsigned long fp, unsigned long pc,
+				 bool need_reliable)
 {
 	frame->task = task;
 	frame->fp = fp;
@@ -56,6 +57,7 @@ static void notrace unwind_start(struct stackframe *frame,
 	frame->prev_fp = 0;
 	frame->prev_type = STACK_TYPE_UNKNOWN;
 	frame->failed = false;
+	frame->need_reliable = need_reliable;
 }
 
 NOKPROBE_SYMBOL(unwind_start);
@@ -178,6 +180,23 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	barrier();
 }
 
+/*
+ * Check the stack frame for conditions that make further unwinding unreliable.
+ */
+static bool notrace unwind_is_reliable(struct stackframe *frame)
+{
+	/*
+	 * If the PC is not a known kernel text address, then we cannot
+	 * be sure that a subsequent unwind will be reliable, as we
+	 * don't know that the code follows our unwind requirements.
+	 */
+	if (!__kernel_text_address(frame->pc))
+		return false;
+	return true;
+}
+
+NOKPROBE_SYMBOL(unwind_is_reliable);
+
 static bool notrace unwind_consume(struct stackframe *frame,
 				   stack_trace_consume_fn consume_entry,
 				   void *cookie)
@@ -197,6 +216,12 @@ static bool notrace unwind_consume(struct stackframe *frame,
 		/* Final frame; nothing to unwind */
 		return false;
 	}
+
+	if (frame->need_reliable && !unwind_is_reliable(frame)) {
+		/* Cannot unwind to the next frame reliably. */
+		frame->failed = true;
+		return false;
+	}
 	return true;
 }
 
@@ -210,11 +235,12 @@ static inline bool unwind_failed(struct stackframe *frame)
 /* Core unwind function */
 static bool notrace unwind(stack_trace_consume_fn consume_entry, void *cookie,
 			   struct task_struct *task,
-			   unsigned long fp, unsigned long pc)
+			   unsigned long fp, unsigned long pc,
+			   bool need_reliable)
 {
 	struct stackframe frame;
 
-	unwind_start(&frame, task, fp, pc);
+	unwind_start(&frame, task, fp, pc, need_reliable);
 	while (unwind_consume(&frame, consume_entry, cookie))
 		unwind_next(&frame);
 	return !unwind_failed(&frame);
@@ -245,7 +271,36 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 		fp = thread_saved_fp(task);
 		pc = thread_saved_pc(task);
 	}
-	unwind(consume_entry, cookie, task, fp, pc);
+	unwind(consume_entry, cookie, task, fp, pc, false);
+}
+
+/*
+ * arch_stack_walk_reliable() may not be used for livepatch until all of
+ * the reliability checks are in place in unwind_consume(). However,
+ * debug and test code can choose to use it even if all the checks are not
+ * in place.
+ */
+noinline int notrace arch_stack_walk_reliable(stack_trace_consume_fn consume_fn,
+					      void *cookie,
+					      struct task_struct *task)
+{
+	unsigned long fp, pc;
+
+	if (!task)
+		task = current;
+
+	if (task == current) {
+		/* Skip arch_stack_walk_reliable() in the stack trace. */
+		fp = (unsigned long)__builtin_frame_address(1);
+		pc = (unsigned long)__builtin_return_address(0);
+	} else {
+		/* Caller guarantees that the task is not running. */
+		fp = thread_saved_fp(task);
+		pc = thread_saved_pc(task);
+	}
+	if (unwind(consume_fn, cookie, task, fp, pc, true))
+		return 0;
+	return -EINVAL;
 }
 
 #endif
--
2.25.1
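
[Editor's sketch] For readers who want to see the new interface from the
caller's side, below is a minimal, hypothetical sketch of how
arch_stack_walk_reliable() might be consumed once the remaining reliability
checks land. It is not part of the patch: the names reliable_trace,
reliable_trace_consume and save_reliable_trace, and the 64-entry buffer, are
made up for illustration. Only the stack_trace_consume_fn callback shape and
the 0 / -EINVAL return convention come from the code above.

	/*
	 * Illustrative only -- not part of this patch. A hypothetical caller
	 * saves the return addresses of a task and treats any unreliability
	 * reported by the unwinder as a hard failure.
	 */
	#include <linux/kernel.h>
	#include <linux/sched.h>
	#include <linux/stacktrace.h>

	struct reliable_trace {
		unsigned long	entries[64];	/* arbitrary size for the sketch */
		unsigned int	nr;
	};

	/* Matches stack_trace_consume_fn: return true to keep unwinding. */
	static bool reliable_trace_consume(void *cookie, unsigned long pc)
	{
		struct reliable_trace *trace = cookie;

		if (trace->nr >= ARRAY_SIZE(trace->entries))
			return false;	/* buffer full; stop the walk early */
		trace->entries[trace->nr++] = pc;
		return true;
	}

	static int save_reliable_trace(struct task_struct *task,
				       struct reliable_trace *trace)
	{
		trace->nr = 0;
		/*
		 * Per this patch, 0 means the unwinder saw nothing
		 * unreliable; -EINVAL means the trace must not be trusted.
		 */
		return arch_stack_walk_reliable(reliable_trace_consume, trace, task);
	}

As with arch_stack_walk(), a caller passing a task other than current must
guarantee that the task stays blocked for the duration of the walk.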