From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: akpm@linux-foundation.org, alex.popov@linux.com, catalin.marinas@arm.com,
	keescook@chromium.org, linux-kernel@vger.kernel.org, luto@kernel.org,
	mark.rutland@arm.com, will@kernel.org
Subject: [PATCH v2 02/13] stackleak: move skip_erasing() check earlier
Date: Wed, 27 Apr 2022 18:31:17 +0100
Message-Id: <20220427173128.2603085-3-mark.rutland@arm.com>
In-Reply-To: <20220427173128.2603085-1-mark.rutland@arm.com>
References: <20220427173128.2603085-1-mark.rutland@arm.com>

In stackleak_erase() we check skip_erasing() after accessing some fields from
current. As generating the address of current uses asm which hazards with the
static branch asm, this work is always performed, even when the static branch
is patched to jump to the return at the end of the function.

This patch avoids this redundant work by moving the skip_erasing() check
earlier.

To avoid complicating initialization within stackleak_erase(), the body of the
function is split out into a __stackleak_erase() helper, with the check left
in a wrapper function. The __stackleak_erase() helper is marked
__always_inline to ensure that it is inlined into stackleak_erase() and not
instrumented.
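For reference, skip_erasing() is a static-branch test rather than an ordinary
conditional. A minimal sketch of the pattern (assuming the existing
CONFIG_STACKLEAK_RUNTIME_DISABLE definitions in kernel/stackleak.c, which this
patch leaves unchanged):

	#ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
	/* Bypass key, toggled at runtime via the stack_erasing sysctl. */
	static DEFINE_STATIC_KEY_FALSE(stack_erasing_bypass);

	/* Compiles to a NOP or jump that is live-patched when the key flips. */
	#define skip_erasing()	static_branch_unlikely(&stack_erasing_bypass)
	#else
	#define skip_erasing()	false
	#endif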
Before this patch, on x86-64 w/ GCC 11.1.0 the start of the function is:

<stackleak_erase>:
   65 48 8b 04 25 00 00    mov    %gs:0x0,%rax
   00 00
   48 8b 48 20             mov    0x20(%rax),%rcx
   48 8b 80 98 0a 00 00    mov    0xa98(%rax),%rax
   66 90                   xchg   %ax,%ax                  <------------ static branch
   48 89 c2                mov    %rax,%rdx
   48 29 ca                sub    %rcx,%rdx
   48 81 fa ff 3f 00 00    cmp    $0x3fff,%rdx

After this patch, on x86-64 w/ GCC 11.1.0 the start of the function is:

<stackleak_erase>:
   0f 1f 44 00 00          nopl   0x0(%rax,%rax,1)         <--- static branch
   65 48 8b 04 25 00 00    mov    %gs:0x0,%rax
   00 00
   48 8b 48 20             mov    0x20(%rax),%rcx
   48 8b 80 98 0a 00 00    mov    0xa98(%rax),%rax
   48 89 c2                mov    %rax,%rdx
   48 29 ca                sub    %rcx,%rdx
   48 81 fa ff 3f 00 00    cmp    $0x3fff,%rdx

Before this patch, on arm64 w/ GCC 11.1.0 the start of the function is:

<stackleak_erase>:
   d503245f        bti     c
   d5384100        mrs     x0, sp_el0
   f9401003        ldr     x3, [x0, #32]
   f9451000        ldr     x0, [x0, #2592]
   d503201f        nop                     <------------------------------- static branch
   d503233f        paciasp
   cb030002        sub     x2, x0, x3
   d287ffe1        mov     x1, #0x3fff
   eb01005f        cmp     x2, x1

After this patch, on arm64 w/ GCC 11.1.0 the start of the function is:

<stackleak_erase>:
   d503245f        bti     c
   d503201f        nop                     <------------------------------- static branch
   d503233f        paciasp
   d5384100        mrs     x0, sp_el0
   f9401003        ldr     x3, [x0, #32]
   d287ffe1        mov     x1, #0x3fff
   f9451000        ldr     x0, [x0, #2592]
   cb030002        sub     x2, x0, x3
   eb01005f        cmp     x2, x1

While this may not be a huge win on its own, moving the static branch will
permit further optimization of the body of the function in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
---
 kernel/stackleak.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/kernel/stackleak.c b/kernel/stackleak.c
index ddb5a7f48d69e..753eab797a04d 100644
--- a/kernel/stackleak.c
+++ b/kernel/stackleak.c
@@ -70,7 +70,7 @@ late_initcall(stackleak_sysctls_init);
 #define skip_erasing()	false
 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */
 
-asmlinkage void noinstr stackleak_erase(void)
+static __always_inline void __stackleak_erase(void)
 {
 	/* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */
 	unsigned long kstack_ptr = current->lowest_stack;
@@ -78,9 +78,6 @@ asmlinkage void noinstr stackleak_erase(void)
 	unsigned int poison_count = 0;
 	const unsigned int depth = STACKLEAK_SEARCH_DEPTH / sizeof(unsigned long);
 
-	if (skip_erasing())
-		return;
-
 	/* Check that 'lowest_stack' value is sane */
 	if (unlikely(kstack_ptr - boundary >= THREAD_SIZE))
 		kstack_ptr = boundary;
@@ -125,6 +122,14 @@ asmlinkage void noinstr stackleak_erase(void)
 	current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;
 }
 
+asmlinkage void noinstr stackleak_erase(void)
+{
+	if (skip_erasing())
+		return;
+
+	__stackleak_erase();
+}
+
 void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
 {
 	unsigned long sp = current_stack_pointer;
-- 
2.30.2