Date: Fri, 20 Mar 2020 14:17:07 +0100
From: Joerg Roedel
To: Andy Lutomirski
Cc: X86 ML, "H. Peter Anvin", Dave Hansen, Peter Zijlstra,
    Thomas Hellstrom, Jiri Slaby, Dan Williams, Tom Lendacky,
    Juergen Gross, Kees Cook, LKML, kvm list, Linux Virtualization,
    Joerg Roedel
Subject: [RFC PATCH v2.1] x86/sev-es: Handle NMI State
Message-ID: <20200320131707.GF5122@8bytes.org>
References: <20200319091407.1481-1-joro@8bytes.org>
 <20200319091407.1481-71-joro@8bytes.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Mar 19, 2020 at 08:35:59AM -0700, Andy Lutomirski wrote:
> 1. Just put the NMI unmask in do_nmi(). The kernel *already* knows
> how to handle running do_nmi() with NMIs unmasked. This is much, much
> simpler than your code.
Okay, attached is the updated patch which implements this approach. I
tested it in an SEV-ES guest with 'perf top' running for a little more
than 30 minutes and all looked good.

I also removed the dead code from the patch.

From ec3b021c5d9130fd66e00d823c4fabc675c4b49e Mon Sep 17 00:00:00 2001
From: Joerg Roedel
Date: Tue, 28 Jan 2020 17:31:05 +0100
Subject: [PATCH] x86/sev-es: Handle NMI State

When running under SEV-ES, the kernel has to tell the hypervisor when
to open the NMI window again after an NMI was injected. This is done
with an NMI-complete message to the hypervisor.

Add code to the kernel's NMI handler to send this message right at the
beginning of do_nmi(). This means NMIs can always nest.

Signed-off-by: Joerg Roedel
---
 arch/x86/include/asm/sev-es.h   |  2 ++
 arch/x86/include/uapi/asm/svm.h |  1 +
 arch/x86/kernel/nmi.c           |  8 ++++++++
 arch/x86/kernel/sev-es.c        | 18 ++++++++++++++++++
 4 files changed, 29 insertions(+)

diff --git a/arch/x86/include/asm/sev-es.h b/arch/x86/include/asm/sev-es.h
index 63acf50e6280..441ec1ba2cc7 100644
--- a/arch/x86/include/asm/sev-es.h
+++ b/arch/x86/include/asm/sev-es.h
@@ -82,11 +82,13 @@ struct real_mode_header;
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh);
+void sev_es_nmi_complete(void);
 #else /* CONFIG_AMD_MEM_ENCRYPT */
 static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	return 0;
 }
+static inline void sev_es_nmi_complete(void) { }
 #endif /* CONFIG_AMD_MEM_ENCRYPT*/
 
 #endif
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 20a05839dd9a..0f837339db66 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -84,6 +84,7 @@
 /* SEV-ES software-defined VMGEXIT events */
 #define SVM_VMGEXIT_MMIO_READ			0x80000001
 #define SVM_VMGEXIT_MMIO_WRITE			0x80000002
+#define SVM_VMGEXIT_NMI_COMPLETE		0x80000003
 #define SVM_VMGEXIT_AP_HLT_LOOP			0x80000004
 #define SVM_VMGEXIT_AP_JUMP_TABLE		0x80000005
 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE	0
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 54c21d6abd5a..fc872a7e0ed1 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -37,6 +37,7 @@
 #include <asm/reboot.h>
 #include <asm/cache.h>
 #include <asm/nospec-branch.h>
+#include <asm/sev-es.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/nmi.h>
@@ -510,6 +511,13 @@ NOKPROBE_SYMBOL(is_debug_stack);
 
 dotraplinkage notrace void do_nmi(struct pt_regs *regs, long error_code)
 {
+	/*
+	 * Re-enable NMIs right here when running as an SEV-ES guest. This might
+	 * cause nested NMIs, but those can be handled safely.
+	 */
+	if (sev_es_active())
+		sev_es_nmi_complete();
+
 	if (IS_ENABLED(CONFIG_SMP) && cpu_is_offline(smp_processor_id()))
 		return;
 
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 3c22f256645e..a7e2739771e7 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -270,6 +270,24 @@ static phys_addr_t vc_slow_virt_to_phys(struct ghcb *ghcb, long vaddr)
 /* Include code shared with pre-decompression boot stage */
 #include "sev-es-shared.c"
 
+void sev_es_nmi_complete(void)
+{
+	struct ghcb_state state;
+	struct ghcb *ghcb;
+
+	ghcb = sev_es_get_ghcb(&state);
+
+	vc_ghcb_invalidate(ghcb);
+	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_NMI_COMPLETE);
+	ghcb_set_sw_exit_info_1(ghcb, 0);
+	ghcb_set_sw_exit_info_2(ghcb, 0);
+
+	sev_es_wr_ghcb_msr(__pa(ghcb));
+	VMGEXIT();
+
+	sev_es_put_ghcb(&state);
+}
+
 static u64 sev_es_get_jump_table_addr(void)
 {
 	struct ghcb_state state;
-- 
2.16.4