Subject: Re: Question: livepatch failed for new fork() task stack unreliable
From: "Wangshaobo (bobo)"
To: Josh Poimboeuf
Date: Thu, 4 Jun 2020 09:24:55 +0800
Message-ID: <2225bc83-95f2-bf3d-7651-fdd10a3ddd00@huawei.com>
In-Reply-To: <20200603153358.2ezz2pgxxxld7mj7@treble>
References: <20200529101059.39885-1-bobo.shaobowang@huawei.com>
 <20200529174433.wpkknhypx2bmjika@treble>
 <20200601180538.o5agg5trbdssqken@treble>
 <20200602131450.oydrydelpdaval4h@treble>
 <1353648b-f3f7-5b8d-f0bb-28bdb1a66f0f@huawei.com>
 <20200603153358.2ezz2pgxxxld7mj7@treble>
On 2020/6/3 23:33, Josh Poimboeuf wrote:
> On Wed, Jun 03, 2020 at 10:06:07PM +0800, Wangshaobo (bobo) wrote:
> To be honest, I don't remember what I meant by sibling calls. They
> don't even leave anything on the stack.
>
> For noreturns, the code might be laid out like this:
>
>   func1:
>     ...
>     call noreturn_foo
>   func2:
>
> func2 is immediately after the call to noreturn_foo. So the return
> address on the stack will actually be 'func2'. We want to retrieve the
> ORC data for the call instruction (inside func1), instead of the
> instruction at the beginning of func2.
>
> I should probably update that comment.

So I want to ask: are there any side effects if I modify it like this?
The modification below is based on your fix, and it looks OK after
proper testing.

diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index e9cc182aa97e..ecce5051e8fd 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -620,6 +620,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		state->sp = task->thread.sp;
 		state->bp = READ_ONCE_NOCHECK(frame->bp);
 		state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+		state->signal = ((void *)state->ip == ret_from_fork);
 	}

diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
index 7f969b2d240f..d7396431261a 100644
--- a/arch/x86/kernel/unwind_orc.c
+++ b/arch/x86/kernel/unwind_orc.c
@@ -540,7 +540,7 @@ bool unwind_next_frame(struct unwind_state *state)
 	state->sp = sp;
 	state->regs = NULL;
 	state->prev_regs = NULL;
-	state->signal = ((void *)state->ip == ret_from_fork);
+	state->signal = false;
 	break;

thanks,

Wang ShaoBo
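
[Archive note: the sketch below is not part of the original mail. It is a
minimal user-space illustration, with made-up function names and addresses,
of the layout Josh describes and of the role state->signal plays in the ORC
lookup: for an ordinary call frame the saved return address points just past
the call instruction, which for a call to a noreturn function can be the
first byte of the next function, so the unwinder looks up ip - 1; for a
signal/regs frame the ip is the interrupted instruction and is used as-is.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of how an ORC-style unwinder chooses the address it looks up
 * for a frame.  "signal_frame" plays the role of state->signal: when it is
 * set, ip points at the interrupted instruction itself and is used
 * unmodified; when it is clear, ip is a return address pointing one past
 * the call, so step back a byte to land inside the calling function
 * (func1 in Josh's example).
 */
static uintptr_t orc_lookup_address(uintptr_t ip, bool signal_frame)
{
	return signal_frame ? ip : ip - 1;
}

int main(void)
{
	/* hypothetical address: func2 starts right after func1's tail call */
	uintptr_t func2 = 0x401200;

	printf("call frame:   look up %#lx (last byte of func1)\n",
	       (unsigned long)orc_lookup_address(func2, false));
	printf("signal frame: look up %#lx (func2 itself)\n",
	       (unsigned long)orc_lookup_address(func2, true));
	return 0;
}

Under that model, setting state->signal when the starting ip is
ret_from_fork (the first hunk) makes the lookup hit ret_from_fork itself
rather than the byte before it; the second hunk then drops the same check
from unwind_next_frame, so the proposal effectively moves the ret_from_fork
special case from unwind_next_frame into __unwind_start.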