From: Marco Elver
Date: Mon, 14 Mar 2022 09:57:15 +0100
Subject: Re: [PATCH] kasan, scs: collect stack traces from shadow stack
To: Andrey Konovalov
Cc: andrey.konovalov@linux.dev, Alexander Potapenko, Andrew Morton,
    Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Vincenzo Frascino,
    Catalin Marinas, Will Deacon, Mark Rutland, Sami Tolvanen,
    Peter Collingbourne, Evgenii Stepanov, Linux Memory Management List,
    LKML, Andrey Konovalov, Florian Mayer

On Mon, 14 Mar 2022 at 00:44, Andrey Konovalov wrote:
>
> On Sat, Mar 12, 2022 at 9:14 PM wrote:
> >
> > From: Andrey Konovalov
> >
> > Currently, KASAN always uses the normal stack trace collection routines,
> > which rely on the unwinder, when saving alloc and free stack traces.
> >
> > Instead of invoking the unwinder, collect the stack trace by copying
> > frames from the Shadow Call Stack whenever it is enabled. This reduces
> > boot time by 30% for all KASAN modes when Shadow Call Stack is enabled.
> >
> > To avoid potentially leaking PAC pointer tags, strip them when saving
> > the stack trace.
> >
> > Signed-off-by: Andrey Konovalov
> >
> > ---
> >
> > Things to consider:
> >
> > We could integrate shadow stack trace collection into kernel/stacktrace.c
> > as e.g. stack_trace_save_shadow(). However, using stack_trace_consume_fn
> > leads to invoking a callback on each saved frame, which is undesirable.
> > The plain copy loop is faster.
> >
> > We could add a command line flag to switch between stack trace collection
> > modes. I noticed that Shadow Call Stack might be missing certain frames
> > in stacks originating from a fault that happens in the middle of a
> > function. I am not sure if this case is important to handle though.
> >
> > Looking forward to thoughts and comments.
> >
> > Thanks!
> >
> > ---
> >  mm/kasan/common.c | 36 +++++++++++++++++++++++++++++++++++-
> >  1 file changed, 35 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index d9079ec11f31..65a0723370c7 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -9,6 +9,7 @@
> >   * Andrey Konovalov
> >   */
> >
> > +#include <linux/bits.h>
> >  #include <linux/export.h>
> >  #include <linux/init.h>
> >  #include <linux/kasan.h>
> > @@ -21,6 +22,7 @@
> >  #include <linux/printk.h>
> >  #include <linux/sched.h>
> >  #include <linux/sched/task_stack.h>
> > +#include <linux/scs.h>
> >  #include <linux/slab.h>
> >  #include <linux/stacktrace.h>
> >  #include <linux/string.h>
> > @@ -30,12 +32,44 @@
> >  #include "kasan.h"
> >  #include "../slab.h"
> >
> > +#ifdef CONFIG_SHADOW_CALL_STACK
> > +
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +#define PAC_TAG_RESET(x) (x | GENMASK(63, CONFIG_ARM64_VA_BITS))
> > +#else
> > +#define PAC_TAG_RESET(x) (x)
> > +#endif
> > +
> > +static unsigned int save_shadow_stack(unsigned long *entries,
> > +				      unsigned int nr_entries)
> > +{
> > +	unsigned long *scs_sp = task_scs_sp(current);
> > +	unsigned long *scs_base = task_scs(current);
> > +	unsigned long *frame;
> > +	unsigned int i = 0;
> > +
> > +	for (frame = scs_sp - 1; frame >= scs_base; frame--) {
> > +		entries[i++] = PAC_TAG_RESET(*frame);
> > +		if (i >= nr_entries)
> > +			break;
> > +	}
> > +
> > +	return i;
> > +}
> > +#else /* CONFIG_SHADOW_CALL_STACK */
> > +static inline unsigned int save_shadow_stack(unsigned long *entries,
> > +				unsigned int nr_entries) { return 0; }
> > +#endif /* CONFIG_SHADOW_CALL_STACK */
> > +
> >  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
> >  {
> >  	unsigned long entries[KASAN_STACK_DEPTH];
> >  	unsigned int nr_entries;
> >
> > -	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> > +	if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK))
> > +		nr_entries = save_shadow_stack(entries, ARRAY_SIZE(entries));
> > +	else
> > +		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> >  	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
>
> Another option here is to instruct stack depot to get the stack from
> the Shadow Call Stack. This would avoid copying the frames twice.

Yes, I think a stack_depot_save_shadow() would be appropriate if it
saves a copy.