From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Edward Tran,
    Awais Tanveer, Ankur Arora, Konrad Rzeszutek Wilk,
    Alexandre Chartre, Borislav Petkov,
    Thadeu Lima de Souza Cascardo
Subject: [PATCH 5.18 59/70] x86/kexec: Disable RET on kexec
Date: Fri, 22 Jul 2022 11:07:54 +0200
Message-Id: <20220722090654.116050500@linuxfoundation.org>
In-Reply-To: <20220722090650.665513668@linuxfoundation.org>
References: <20220722090650.665513668@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Konrad Rzeszutek Wilk

commit 697977d8415d61f3acbc4ee6d564c9dcf0309507 upstream.

All the invocations unroll to __x86_return_thunk and this file must be
PIC independent.

This fixes kexec on 64-bit AMD boxes.

  [ bp: Fix 32-bit build. ]
Reported-by: Edward Tran
Reported-by: Awais Tanveer
Suggested-by: Ankur Arora
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Alexandre Chartre
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/relocate_kernel_32.S | 25 +++++++++++++++++++------
 arch/x86/kernel/relocate_kernel_64.S | 23 +++++++++++++++++------
 2 files changed, 36 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/relocate_kernel_32.S
+++ b/arch/x86/kernel/relocate_kernel_32.S
@@ -7,10 +7,12 @@
 #include
 #include
 #include
+#include
 #include

 /*
- * Must be relocatable PIC code callable as a C function
+ * Must be relocatable PIC code callable as a C function, in particular
+ * there must be a plain RET and not jump to return thunk.
  */

 #define PTR(x) (x << 2)
@@ -91,7 +93,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 	movl	%edi, %eax
 	addl	$(identity_mapped - relocate_kernel), %eax
 	pushl	%eax
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(relocate_kernel)

 SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
@@ -159,12 +163,15 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_ma
 	xorl	%edx, %edx
 	xorl	%esi, %esi
 	xorl	%ebp, %ebp
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 1:
 	popl	%edx
 	movl	CP_PA_SWAP_PAGE(%edi), %esp
 	addl	$PAGE_SIZE, %esp
 2:
+	ANNOTATE_RETPOLINE_SAFE
 	call	*%edx

 	/* get the re-entry point of the peer system */
@@ -190,7 +197,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_ma
 	movl	%edi, %eax
 	addl	$(virtual_mapped - relocate_kernel), %eax
 	pushl	%eax
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(identity_mapped)

 SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
@@ -208,7 +217,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_map
 	popl	%edi
 	popl	%esi
 	popl	%ebx
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(virtual_mapped)

 /* Do the copies */
@@ -271,7 +282,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	popl	%edi
 	popl	%ebx
 	popl	%ebp
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(swap_pages)

 	.globl kexec_control_code_size
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -13,7 +13,8 @@
 #include

 /*
- * Must be relocatable PIC code callable as a C function
+ * Must be relocatable PIC code callable as a C function, in particular
+ * there must be a plain RET and not jump to return thunk.
  */

 #define PTR(x) (x << 3)
@@ -105,7 +106,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 	/* jump to identity mapped page */
 	addq	$(identity_mapped - relocate_kernel), %r8
 	pushq	%r8
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(relocate_kernel)

 SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
@@ -200,7 +203,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_ma
 	xorl	%r14d, %r14d
 	xorl	%r15d, %r15d

-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3

 1:
 	popq	%rdx
@@ -219,7 +224,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_ma
 	call	swap_pages
 	movq	$virtual_mapped, %rax
 	pushq	%rax
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(identity_mapped)

 SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
@@ -241,7 +248,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_map
 	popq	%r12
 	popq	%rbp
 	popq	%rbx
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(virtual_mapped)

 /* Do the copies */
@@ -298,7 +307,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
 	lea	PAGE_SIZE(%rax), %rsi
 	jmp	0b
 3:
-	RET
+	ANNOTATE_UNRET_SAFE
+	ret
+	int3
 SYM_CODE_END(swap_pages)

 	.globl kexec_control_code_size
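
For context (not part of the upstream patch): with the retbleed mitigation
enabled, the assembler-side RET macro expands to a jump into the return
thunk rather than a bare ret. The sketch below is a simplified
illustration assuming the 5.18-era macros in arch/x86/include/asm/linkage.h
and asm/nospec-branch.h; the exact #ifdef conditions in the real headers
differ.

/*
 * Simplified sketch, not the verbatim kernel macro: what RET can expand
 * to once CONFIG_RETHUNK is enabled.
 */
#ifdef CONFIG_RETHUNK
# define RET	jmp __x86_return_thunk	/* relative jump into kernel text */
#elif defined(CONFIG_SLS)
# define RET	ret; int3
#else
# define RET	ret
#endif

/*
 * relocate_kernel is copied to a control page and executed from that new
 * address, so a "jmp __x86_return_thunk" assembled against the original
 * link address no longer lands on the thunk after relocation.  Hence the
 * open-coded return used throughout this patch:
 */
	ANNOTATE_UNRET_SAFE	/* objtool: this bare ret is intentional */
	ret			/* plain near return, no thunk */
	int3			/* stop straight-line speculation past the ret */

The trailing int3 preserves the straight-line-speculation hardening the RET
macro would otherwise provide, and ANNOTATE_UNRET_SAFE keeps objtool's
unret validation from warning about the naked ret.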