From: Sasha Levin
To: tglx@linutronix.de, luto@kernel.org, ak@linux.intel.com
Cc: corbet@lwn.net, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	shuah@kernel.org, gregkh@linuxfoundation.org, tony.luck@intel.com,
	chang.seok.bae@intel.com, dave.hansen@linux.intel.com,
	peterz@infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, jarkko.sakkinen@linux.intel.com,
	"H. Peter Anvin", Ravi Shankar, Sasha Levin
Subject: [PATCH v13 09/16] x86/entry/64: Switch CR3 before SWAPGS in paranoid entry
Date: Thu, 28 May 2020 16:13:55 -0400
Message-Id: <20200528201402.1708239-10-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200528201402.1708239-1-sashal@kernel.org>
References: <20200528201402.1708239-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Chang S. Bae"

When FSGSBASE is enabled, the GSBASE handling in paranoid entry will
need to retrieve the kernel GSBASE, which requires that the kernel
page table is active.

As the CR3 switch to the kernel page tables (when PTI is active) does
not depend on kernel GSBASE, move the CR3 switch in front of the
GSBASE handling. Comment the EBX content while at it.

No functional change.

[ tglx: Rewrote changelog and comments ]
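
For readers who want the ordering spelled out, here is a rough C sketch
of the logic described above. It is illustrative only, not kernel code:
switch_to_kernel_cr3(), read_gs_base() and do_swapgs() are stand-ins for
SAVE_AND_SWITCH_TO_KERNEL_CR3, rdmsr(MSR_GS_BASE) and SWAPGS, and the
return value models the EBX convention that paranoid_exit relies on.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins only -- NOT kernel APIs. Just enough to make this compile. */
static uint64_t fake_gs_base;			/* pretends to be MSR_GS_BASE */
static void switch_to_kernel_cr3(void) { }	/* models SAVE_AND_SWITCH_TO_KERNEL_CR3 */
static uint64_t read_gs_base(void) { return fake_gs_base; }	/* models rdmsr */
static void do_swapgs(void) { }			/* models SWAPGS */

/*
 * Returns the value paranoid_entry leaves in EBX:
 *   true  (1) -> kernel GSBASE was already active, no SWAPGS on exit
 *   false (0) -> SWAPGS was done on entry and must be undone on exit
 */
static bool paranoid_entry_sketch(void)
{
	/*
	 * CR3 first: the switch does not depend on GSBASE, and with
	 * FSGSBASE the kernel GSBASE lookup needs the kernel page
	 * tables to be active already.
	 */
	switch_to_kernel_cr3();

	/*
	 * Kernel convention: a negative (sign bit set) GSBASE value
	 * means the kernel GSBASE is already active.
	 */
	if ((int64_t)read_gs_base() < 0)
		return true;		/* EBX = 1: no restore required */

	do_swapgs();
	return false;			/* EBX = 0: SWAPGS required on exit */
}

int main(void)
{
	fake_gs_base = 0xffff888000000000ULL;	/* kernel-style (negative) base */
	printf("swapgs on exit? %s\n", paranoid_entry_sketch() ? "no" : "yes");

	fake_gs_base = 0x00007f1234560000ULL;	/* user-style (positive) base */
	printf("swapgs on exit? %s\n", paranoid_entry_sketch() ? "no" : "yes");
	return 0;
}

The assembly in the patch below implements exactly this ordering, with
EBX carrying the decision across to paranoid_exit.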

Signed-off-by: Chang S. Bae
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: "H . Peter Anvin"
Cc: Andi Kleen
Cc: Ravi Shankar
Cc: Dave Hansen
Cc: H. Peter Anvin
Link: https://lkml.kernel.org/r/1557309753-24073-11-git-send-email-chang.seok.bae@intel.com
Signed-off-by: Sasha Levin
---
 arch/x86/entry/entry_64.S | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3063aa9090f9..3b9ccba6c4b4 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1220,13 +1220,6 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	cld
 	PUSH_AND_CLEAR_REGS save_ret=1
 	ENCODE_FRAME_POINTER 8
-	movl	$1, %ebx
-	movl	$MSR_GS_BASE, %ecx
-	rdmsr
-	testl	%edx, %edx
-	js	1f				/* negative -> in kernel */
-	SWAPGS
-	xorl	%ebx, %ebx
 
 1:
 	/*
@@ -1238,9 +1231,29 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	 * This is also why CS (stashed in the "iret frame" by the
 	 * hardware at entry) can not be used: this may be a return
 	 * to kernel code, but with a user CR3 value.
+	 *
+	 * Switching CR3 does not depend on kernel GSBASE so it can
+	 * be done before switching to the kernel GSBASE. This is
+	 * required for FSGSBASE because the kernel GSBASE has to
+	 * be retrieved from a kernel internal table.
 	 */
 	SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
 
+	/* EBX = 1 -> kernel GSBASE active, no restore required */
+	movl	$1, %ebx
+	/*
+	 * The kernel-enforced convention is a negative GSBASE indicates
+	 * a kernel value. No SWAPGS needed on entry and exit.
+	 */
+	movl	$MSR_GS_BASE, %ecx
+	rdmsr
+	testl	%edx, %edx
+	jns	.Lparanoid_entry_swapgs
+	ret
+
+.Lparanoid_entry_swapgs:
+	SWAPGS
+
 	/*
 	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
 	 * unconditional CR3 write, even in the PTI case.  So do an lfence
@@ -1248,6 +1261,8 @@ SYM_CODE_START_LOCAL(paranoid_entry)
 	 */
 	FENCE_SWAPGS_KERNEL_ENTRY
 
+	/* EBX = 0 -> SWAPGS required on exit */
+	xorl	%ebx, %ebx
 	ret
 SYM_CODE_END(paranoid_entry)
 
@@ -1267,7 +1282,8 @@ SYM_CODE_START_LOCAL(paranoid_exit)
 	UNWIND_HINT_REGS
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF_DEBUG
-	testl	%ebx, %ebx			/* swapgs needed? */
+	/* If EBX is 0, SWAPGS is required */
+	testl	%ebx, %ebx
 	jnz	.Lparanoid_exit_no_swapgs
 	TRACE_IRQS_IRETQ
 	/* Always restore stashed CR3 value (see paranoid_entry) */
-- 
2.25.1