From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Juergen Gross, Borislav Petkov, Ben Hutchings
Subject: [PATCH 5.10 011/148] x86/alternative: Merge include files
Date: Sat, 23 Jul 2022 11:53:43 +0200
Message-Id: <20220723095227.566370762@linuxfoundation.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220723095224.302504400@linuxfoundation.org>
References: <20220723095224.302504400@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Juergen Gross

commit 5e21a3ecad1500e35b46701e7f3f232e15d78e69 upstream.

Merge arch/x86/include/asm/alternative-asm.h into
arch/x86/include/asm/alternative.h in order to make it easier to use
common definitions later.
Signed-off-by: Juergen Gross
Signed-off-by: Borislav Petkov
Link: https://lkml.kernel.org/r/20210311142319.4723-2-jgross@suse.com
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/entry/entry_32.S                |    2 
 arch/x86/entry/vdso/vdso32/system_call.S |    2 
 arch/x86/include/asm/alternative-asm.h   |  114 -------------------------------
 arch/x86/include/asm/alternative.h       |  112 +++++++++++++++++++++++++++++-
 arch/x86/include/asm/nospec-branch.h     |    1 
 arch/x86/include/asm/smap.h              |    5 -
 arch/x86/lib/atomic64_386_32.S           |    2 
 arch/x86/lib/atomic64_cx8_32.S           |    2 
 arch/x86/lib/copy_page_64.S              |    2 
 arch/x86/lib/copy_user_64.S              |    2 
 arch/x86/lib/memcpy_64.S                 |    2 
 arch/x86/lib/memmove_64.S                |    2 
 arch/x86/lib/memset_64.S                 |    2 
 arch/x86/lib/retpoline.S                 |    2 
 14 files changed, 120 insertions(+), 132 deletions(-)
 delete mode 100644 arch/x86/include/asm/alternative-asm.h

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -40,7 +40,7 @@
 #include
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 #include
 #include
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -6,7 +6,7 @@
 #include
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 
 	.text
 	.globl __kernel_vsyscall
--- a/arch/x86/include/asm/alternative-asm.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_ALTERNATIVE_ASM_H
-#define _ASM_X86_ALTERNATIVE_ASM_H
-
-#ifdef __ASSEMBLY__
-
-#include
-
-#ifdef CONFIG_SMP
-	.macro LOCK_PREFIX
-672:	lock
-	.pushsection .smp_locks,"a"
-	.balign 4
-	.long 672b - .
-	.popsection
-	.endm
-#else
-	.macro LOCK_PREFIX
-	.endm
-#endif
-
-/*
- * objtool annotation to ignore the alternatives and only consider the original
- * instruction(s).
- */
-.macro ANNOTATE_IGNORE_ALTERNATIVE
-	.Lannotate_\@:
-	.pushsection .discard.ignore_alts
-	.long .Lannotate_\@ - .
-	.popsection
-.endm
-
-/*
- * Issue one struct alt_instr descriptor entry (need to put it into
- * the section .altinstructions, see below). This entry contains
- * enough information for the alternatives patching code to patch an
- * instruction. See apply_alternatives().
- */
-.macro altinstruction_entry orig alt feature orig_len alt_len pad_len
-	.long \orig - .
-	.long \alt - .
-	.word \feature
-	.byte \orig_len
-	.byte \alt_len
-	.byte \pad_len
-.endm
-
-/*
- * Define an alternative between two instructions. If @feature is
- * present, early code in apply_alternatives() replaces @oldinstr with
- * @newinstr. ".skip" directive takes care of proper instruction padding
- * in case @newinstr is longer than @oldinstr.
- */
-.macro ALTERNATIVE oldinstr, newinstr, feature
-140:
-	\oldinstr
-141:
-	.skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90
-142:
-
-	.pushsection .altinstructions,"a"
-	altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b
-	.popsection
-
-	.pushsection .altinstr_replacement,"ax"
-143:
-	\newinstr
-144:
-	.popsection
-.endm
-
-#define old_len 141b-140b
-#define new_len1 144f-143f
-#define new_len2 145f-144f
-
-/*
- * gas compatible max based on the idea from:
- * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax
- *
- * The additional "-" is needed because gas uses a "true" value of -1.
- */
-#define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
-
-
-/*
- * Same as ALTERNATIVE macro above but for two alternatives. If CPU
- * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
- * @feature2, it replaces @oldinstr with @feature2.
- */
-.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
-140:
-	\oldinstr
-141:
-	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \
-		(alt_max_short(new_len1, new_len2) - (old_len)),0x90
-142:
-
-	.pushsection .altinstructions,"a"
-	altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b
-	altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b
-	.popsection
-
-	.pushsection .altinstr_replacement,"ax"
-143:
-	\newinstr1
-144:
-	\newinstr2
-145:
-	.popsection
-.endm
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _ASM_X86_ALTERNATIVE_ASM_H */
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -2,13 +2,14 @@
 #ifndef _ASM_X86_ALTERNATIVE_H
 #define _ASM_X86_ALTERNATIVE_H
 
-#ifndef __ASSEMBLY__
-
 #include
-#include
 #include
 #include
 
+#ifndef __ASSEMBLY__
+
+#include
+
 /*
  * Alternative inline assembly for SMP.
  *
@@ -271,6 +272,111 @@ static inline int alternatives_text_rese
  */
 #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
 
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_SMP
+	.macro LOCK_PREFIX
+672:	lock
+	.pushsection .smp_locks,"a"
+	.balign 4
+	.long 672b - .
+	.popsection
+	.endm
+#else
+	.macro LOCK_PREFIX
+	.endm
+#endif
+
+/*
+ * objtool annotation to ignore the alternatives and only consider the original
+ * instruction(s).
+ */
+.macro ANNOTATE_IGNORE_ALTERNATIVE
+	.Lannotate_\@:
+	.pushsection .discard.ignore_alts
+	.long .Lannotate_\@ - .
+	.popsection
+.endm
+
+/*
+ * Issue one struct alt_instr descriptor entry (need to put it into
+ * the section .altinstructions, see below). This entry contains
+ * enough information for the alternatives patching code to patch an
+ * instruction. See apply_alternatives().
+ */
+.macro altinstruction_entry orig alt feature orig_len alt_len pad_len
+	.long \orig - .
+	.long \alt - .
+	.word \feature
+	.byte \orig_len
+	.byte \alt_len
+	.byte \pad_len
+.endm
+
+/*
+ * Define an alternative between two instructions. If @feature is
+ * present, early code in apply_alternatives() replaces @oldinstr with
+ * @newinstr. ".skip" directive takes care of proper instruction padding
+ * in case @newinstr is longer than @oldinstr.
+ */
+.macro ALTERNATIVE oldinstr, newinstr, feature
+140:
+	\oldinstr
+141:
+	.skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90
+142:
+
+	.pushsection .altinstructions,"a"
+	altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b
+	.popsection
+
+	.pushsection .altinstr_replacement,"ax"
+143:
+	\newinstr
+144:
+	.popsection
+.endm
+
+#define old_len 141b-140b
+#define new_len1 144f-143f
+#define new_len2 145f-144f
+
+/*
+ * gas compatible max based on the idea from:
+ * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax
+ *
+ * The additional "-" is needed because gas uses a "true" value of -1.
+ */
+#define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b)))))
+
+
+/*
+ * Same as ALTERNATIVE macro above but for two alternatives. If CPU
+ * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has
+ * @feature2, it replaces @oldinstr with @feature2.
+ */
+.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
+140:
+	\oldinstr
+141:
+	.skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \
+		(alt_max_short(new_len1, new_len2) - (old_len)),0x90
+142:
+
+	.pushsection .altinstructions,"a"
+	altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b
+	altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b
+	.popsection
+
+	.pushsection .altinstr_replacement,"ax"
+143:
+	\newinstr1
+144:
+	\newinstr2
+145:
+	.popsection
+.endm
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_ALTERNATIVE_H */
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -7,7 +7,6 @@
 #include
 
 #include
-#include <asm/alternative-asm.h>
 #include
 #include
 #include
--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -11,6 +11,7 @@
 
 #include
 #include
+#include <asm/alternative.h>
 
 /* "Raw" instruction opcodes */
 #define __ASM_CLAC ".byte 0x0f,0x01,0xca"
@@ -18,8 +19,6 @@
 
 #ifdef __ASSEMBLY__
 
-#include <asm/alternative-asm.h>
-
 #ifdef CONFIG_X86_SMAP
 
 #define ASM_CLAC \
@@ -37,8 +36,6 @@
 
 #else /* __ASSEMBLY__ */
 
-#include <asm/alternative.h>
-
 #ifdef CONFIG_X86_SMAP
 
 static __always_inline void clac(void)
--- a/arch/x86/lib/atomic64_386_32.S
+++ b/arch/x86/lib/atomic64_386_32.S
@@ -6,7 +6,7 @@
  */
 
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 
 /* if you want SMP support, implement these with real spinlocks */
 .macro LOCK reg
--- a/arch/x86/lib/atomic64_cx8_32.S
+++ b/arch/x86/lib/atomic64_cx8_32.S
@@ -6,7 +6,7 @@
  */
 
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 
 .macro read64 reg
 	movl %ebx, %eax
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -3,7 +3,7 @@
 
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 
 /*
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -11,7 +11,7 @@
 #include
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 #include
 #include
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -4,7 +4,7 @@
 #include
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 
 .pushsection .noinstr.text, "ax"
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -8,7 +8,7 @@
  */
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 
 #undef memmove
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -3,7 +3,7 @@
 
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 
 /*
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -4,7 +4,7 @@
 #include
 #include
 #include
-#include <asm/alternative-asm.h>
+#include <asm/alternative.h>
 #include
 #include
 #include
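
A note on the arithmetic in the macros moved above, since it is easy to misread
in diff form: alt_max_short(a, b) is a branchless max (gas evaluates a true
comparison as -1, hence the extra unary minus called out in the comment), and
the .skip expressions emit 0x90 (NOP) padding only when the replacement
sequence is longer than the original. Below is a minimal C sketch of the same
arithmetic, for illustration only; ALT_MAX and PAD_LEN are hypothetical names
used here, not kernel identifiers, and C's "true is 1" convention removes the
extra negation that the gas versions need.

#include <assert.h>
#include <stdio.h>

/*
 * C rendition of the gas alt_max_short() trick: the mask is all-ones
 * when a < b, so the XOR selects b; otherwise it selects a.  In C,
 * (a < b) is 1, so one negation builds the mask; gas comparisons
 * yield -1, which is why the kernel macro carries an additional "-".
 */
#define ALT_MAX(a, b)  ((a) ^ (((a) ^ (b)) & -((a) < (b))))

/*
 * C rendition of the .skip padding expression: pad only when the
 * replacement is longer than the original instruction sequence.
 * In gas, (x > 0) is -1 when true, so -((x) > 0) * (x) yields x;
 * here (x > 0) is 1, so no extra negation is needed.
 */
#define PAD_LEN(new_len, old_len) \
	((((new_len) - (old_len)) > 0) * ((new_len) - (old_len)))

int main(void)
{
	assert(ALT_MAX(3, 7) == 7);
	assert(ALT_MAX(7, 3) == 7);
	assert(PAD_LEN(5, 2) == 3);   /* replacement longer: pad 3 NOPs */
	assert(PAD_LEN(2, 5) == 0);   /* replacement shorter: no padding */
	printf("alt_max_short/padding arithmetic checks pass\n");
	return 0;
}

Compiled with any C compiler, the asserts confirm that the expression picks the
larger replacement length and pads only in the grow direction, which is all the
gas macros need to compute at assembly time.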