From: Josh Poimboeuf
To: x86@kernel.org
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Vince Weaver, Dave Jones,
    Jann Horn, Miroslav Benes, Andy Lutomirski, Thomas Gleixner, Pavel Machek
Subject: [PATCH -tip urgent] x86/unwind/orc: Fix error handling in __unwind_start()
Date: Thu, 14 May 2020 15:31:10 -0500

The unwind_state 'error' field is used to inform the reliable unwinding
code that the stack trace can't be trusted.  Set this field for all
errors in __unwind_start().

Also, move the zeroing out of the unwind_state struct to before the ORC
table initialization check, to prevent the caller from reading
uninitialized data if the ORC table is corrupted.
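For reference, the consumer of this flag is the reliable stack walker,
which refuses to report a trace once the error flag is set.  A rough
sketch of that pattern (an approximation of arch_stack_walk_reliable()
in arch/x86/kernel/stacktrace.c, not the exact code):

	/*
	 * Sketch only: shows how a reliable-stacktrace caller consumes
	 * state->error via unwind_error().
	 */
	for (unwind_start(&state, task, NULL, NULL);
	     !unwind_done(&state) && !unwind_error(&state);
	     unwind_next_frame(&state)) {
		unsigned long addr = unwind_get_return_address(&state);

		if (!addr)
			return -EINVAL;	/* unreliable frame */
	}

	if (unwind_error(&state))
		return -EINVAL;		/* trace can't be trusted */

Without state->error being set on every failure path in
__unwind_start(), such a caller could treat a truncated trace as
reliable.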
Fixes: af085d9084b4 ("stacktrace/x86: add function for detecting reliable stack traces")
Fixes: d3a09104018c ("x86/unwinder/orc: Dont bail on stack overflow")
Fixes: 98d0c8ebf77e ("x86/unwind/orc: Prevent unwinding before ORC initialization")
Reported-by: Pavel Machek
Signed-off-by: Josh Poimboeuf
---
 arch/x86/kernel/unwind_orc.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 5b0bd8581fe6..fa79e4227d3d 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -617,23 +617,23 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
 void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		    struct pt_regs *regs, unsigned long *first_frame)
 {
-	if (!orc_init)
-		goto done;
-
 	memset(state, 0, sizeof(*state));
 	state->task = task;
 
+	if (!orc_init)
+		goto err;
+
 	/*
 	 * Refuse to unwind the stack of a task while it's executing on another
 	 * CPU.  This check is racy, but that's ok: the unwinder has other
 	 * checks to prevent it from going off the rails.
 	 */
 	if (task_on_another_cpu(task))
-		goto done;
+		goto err;
 
 	if (regs) {
 		if (user_mode(regs))
-			goto done;
+			goto the_end;
 
 		state->ip = regs->ip;
 		state->sp = regs->sp;
@@ -666,6 +666,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 	 * generate some kind of backtrace if this happens.
 	 */
 	void *next_page = (void *)PAGE_ALIGN((unsigned long)state->sp);
+	state->error = true;
 	if (get_stack_info(next_page, state->task, &state->stack_info,
 			   &state->stack_mask))
 		return;
@@ -691,8 +692,9 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 
 	return;
 
-done:
+err:
+	state->error = true;
+the_end:
 	state->stack_info.type = STACK_TYPE_UNKNOWN;
-	return;
 }
 EXPORT_SYMBOL_GPL(__unwind_start);
-- 
2.21.1