From: Andrey Konovalov
Date: Tue, 5 Apr 2022 17:38:07 +0200
Subject: Re: [PATCH v2 3/4] arm64: implement stack_trace_save_shadow
To: Mark Rutland
Cc: andrey.konovalov@linux.dev, Marco Elver, Alexander Potapenko,
 Catalin Marinas, Will Deacon, Andrew Morton, Dmitry Vyukov,
 Andrey Ryabinin, kasan-dev, Vincenzo Frascino, Sami Tolvanen,
 Peter Collingbourne, Evgenii Stepanov, Florian Mayer,
 Linux Memory Management List, LKML, Andrey Konovalov
References: <0bb72ea8fa88ef9ae3508c23d993952a0ae6f0f9.1648049113.git.andreyknvl@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
On Thu, Mar 31, 2022 at 11:32 AM Mark Rutland wrote:
>
> This doesn't do any of the trampoline repainting (e.g. for kretprobes or
> ftrace graph caller) that the regular unwinder does, so if either of those
> are in use this is going to produce bogus results.

Responded on the cover letter wrt this.

> > +noinline notrace int arch_stack_walk_shadow(unsigned long *store,
> > +					     unsigned int size,
> > +					     unsigned int skipnr)
> > +{
> > +	unsigned long *scs_top, *scs_base, *scs_next;
> > +	unsigned int len = 0, part;
> > +
> > +	preempt_disable();
>
> This doesn't look necessary; it's certainly not needed for the regular
> unwinder.
>
> Critically, in the common case of unwinding just the task stack, we don't
> need to look at any of the per-cpu stacks, and so there's no need to
> disable preemption. See the stack nesting logic in the regular unwinder.

The common unwinder doesn't access per-cpu variables, which is why it
doesn't need preempt_disable(). In this case, the per-cpu variables are
only read, so preempt_disable() is probably not required either, unless
LOCKDEP or some other tool complains about it.

> If we *do* need to unwind per-cpu stacks, we figure that out and verify
> our context *at* the transition point.

I'm not sure I understand this statement. Do you mean we need to keep
track of the currently relevant SCS base and update it in the interrupt
handlers? That would require modifying the entry code.

> > +
> > +	/* Get the SCS pointer. */
> > +	asm volatile("mov %0, x18" : "=&r" (scs_top));
>
> Does the compiler guarantee where this happens relative to any prologue
> manipulation of x18?
>
> This seems like something we should be using a compiler intrinsic for, or
> have a wrapper that passes this in if necessary.

This is a good point, I'll investigate it.

> > +
> > +	/* The top SCS slot is empty. */
> > +	scs_top -= 1;
> > +
> > +	/* Handle SDEI and hardirq frames. */
> > +	for (part = 0; part < ARRAY_SIZE(scs_parts); part++) {
> > +		scs_next = *this_cpu_ptr(scs_parts[part].saved);
> > +		if (scs_next) {
> > +			scs_base = *this_cpu_ptr(scs_parts[part].base);
> > +			if (walk_shadow_stack_part(scs_top, scs_base, store,
> > +						   size, &skipnr, &len))
> > +				goto out;
> > +			scs_top = scs_next;
> > +		}
> > +	}
>
> We have a number of potential stack nesting orders (and may need to
> introduce more stacks in the future), so I think we need to be more
> careful with this. The regular unwinder handles that dynamically.

I'll rewrite this part based on the other comments, so let's discuss it
then.

Thanks!
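
P.S. On the x18 read: a rough sketch of the kind of wrapper I might try,
loosely modeled on how the kernel reads sp via current_stack_pointer. The
helper name is made up for illustration, and I haven't checked whether
this actually pins the read relative to any prologue manipulation of x18,
so treat it as an idea rather than a fix:

/*
 * Sketch only: read the SCS pointer through a local register variable
 * bound to x18. Assumes CONFIG_SHADOW_CALL_STACK reserves x18 for the
 * shadow call stack. The empty asm makes the compiler materialize the
 * current value of x18 into the variable at this point.
 */
static __always_inline unsigned long *scs_read_pointer(void)
{
	register unsigned long *scs asm("x18");

	asm volatile("" : "=r" (scs));
	return scs;
}

If this turns out to have the same ordering problem as the bare mov, a
wrapper that takes the pointer as an argument from the caller (as you
suggested) is probably the safer route.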