Date: Tue, 24 Oct 2023 01:08:27 -0700
From: Pawan Gupta
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
    Jonathan Corbet, Sean Christopherson, Paolo Bonzini, tony.luck@intel.com,
    ak@linux.intel.com, tim.c.chen@linux.intel.com
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
    Alyssa Milburn, Daniel Sneddon, antonio.gomez.iglesias@linux.intel.com,
    Pawan Gupta, Dave Hansen
Subject: [PATCH v2 2/6] x86/entry_64: Add VERW just before userspace transition
Message-ID: <20231024-delay-verw-v2-2-f1881340c807@linux.intel.com>
References: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com>
In-Reply-To: <20231024-delay-verw-v2-0-f1881340c807@linux.intel.com>

The mitigation for MDS is to use the VERW instruction to clear any secrets
in the CPU buffers. Any memory accesses made after VERW executes can still
leave data in the CPU buffers, so it is safer to execute VERW late in the
return-to-user path to minimize the window in which kernel data can end up
in the CPU buffers. There are not many kernel secrets to be had after
SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after the user register
state is restored. This minimizes the chances of kernel data ending up in
the CPU buffers after VERW has executed.

Note that the mitigation at the new location is not yet enabled.

Corner case not handled
=======================
Interrupts returning to kernel don't clear the CPU buffers, since the
exit-to-user path is expected to do that anyway. However, an NMI could be
generated in the kernel after the exit-to-user path has already cleared
the buffers. This case is not handled, and NMIs returning to kernel do not
clear the CPU buffers, because:

1. It is rare to get an NMI after VERW, but before returning to userspace.
2. For an unprivileged user, there is no known way to make that NMI less
   rare or to target it.
3. It would take a large number of these precisely-timed NMIs to mount an
   actual attack. There's presumably not enough bandwidth.
4. The NMI in question occurs after VERW, i.e. when user state is restored
   and the most interesting data is already scrubbed. What's left is only
   the data that the NMI touches, and that may or may not be of any
   interest.

Suggested-by: Dave Hansen
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S        | 11 +++++++++++
 arch/x86/entry/entry_64_compat.S |  1 +
 2 files changed, 12 insertions(+)
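A note on CLEAR_CPU_BUFFERS: the helper itself is introduced by patch 1/6
of this series and is not shown in this patch. As a rough sketch only (the
ALTERNATIVE form, the X86_FEATURE_CLEAR_CPU_BUF feature bit and the
mds_verw_sel selector symbol below are illustrative assumptions, not
quoted from patch 1/6), the kind of helper assumed here looks like:

/*
 * Sketch of a VERW helper (assumes <asm/alternative.h>, <asm/asm.h> and
 * <linux/stringify.h>): a no-op until the "clear CPU buffers" feature bit
 * is set, at which point the alternative patches in a single VERW whose
 * memory operand is a RIP-relative reference to a word holding a valid
 * selector.
 */
.macro CLEAR_CPU_BUFFERS
	ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
.endm

With a helper of this shape, each CLEAR_CPU_BUFFERS site below costs
nothing until the mitigation is enabled, and then becomes a single VERW
issued right before the sysret/iret back to userspace.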
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 43606de22511..9f97a8bd11e8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -223,6 +223,7 @@ syscall_return_via_sysret:
 SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretq
 SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
@@ -663,6 +664,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 	/* Restore RDI. */
 	popq	%rdi
 	swapgs
+	CLEAR_CPU_BUFFERS
 	jmp	.Lnative_iret
 
 
@@ -774,6 +776,8 @@ native_irq_return_ldt:
 	 */
 	popq	%rax				/* Restore user RAX */
 
+	CLEAR_CPU_BUFFERS
+
 	/*
 	 * RSP now points to an ordinary IRET frame, except that the page
 	 * is read-only and RSP[31:16] are preloaded with the userspace
@@ -1502,6 +1506,12 @@ nmi_restore:
 	std
 	movq	$0, 5*8(%rsp)		/* clear "NMI executing" */
 
+	/*
+	 * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+	 * NMI in kernel after user state is restored. For an unprivileged user
+	 * these conditions are hard to meet.
+	 */
+
 	/*
 	 * iretq reads the "iret" frame and exits the NMI stack in a
 	 * single instruction.  We are returning to kernel mode, so this
@@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
 	UNWIND_HINT_END_OF_STACK
 	ENDBR
 	mov	$-ENOSYS, %eax
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_CODE_END(ignore_sysret)
 #endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 70150298f8bd..245697eb8485 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -271,6 +271,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
 	xorl	%r9d, %r9d
 	xorl	%r10d, %r10d
 	swapgs
+	CLEAR_CPU_BUFFERS
 	sysretl
 SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR

-- 
2.34.1