From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Liran Alon, KarimAllah Ahmed, Jim Mattson, rkrcmar@redhat.com
Subject: [PATCH 06/10] KVM: x86: do not load vmcs12 pages while still in SMM
Date: Sun, 29 Jul 2018 01:10:08 +0200
Message-Id: <1532819412-51357-7-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1532819412-51357-1-git-send-email-pbonzini@redhat.com>
References: <1532819412-51357-1-git-send-email-pbonzini@redhat.com>

If the vCPU enters system management mode while running a nested guest,
RSM starts processing the vmentry while still in SMM.  In that case,
however, the pages pointed to by the vmcs12 might be incorrectly loaded
from SMRAM.  To avoid this, delay the handling of the pages until just
before the next vmentry.  This is done with a new request and a new
entry in kvm_x86_ops, which we will be able to reuse for nested VMX
state migration.

Extracted from a patch by Jim Mattson and KarimAllah Ahmed.
Signed-off-by: Paolo Bonzini
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/vmx.c              | 53 +++++++++++++++++++++++++++--------------
 arch/x86/kvm/x86.c              |  2 ++
 3 files changed, 40 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c13cd28d9d1b..da957725992d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -75,6 +75,7 @@
 #define KVM_REQ_HV_EXIT			KVM_ARCH_REQ(21)
 #define KVM_REQ_HV_STIMER		KVM_ARCH_REQ(22)
 #define KVM_REQ_LOAD_EOI_EXITMAP	KVM_ARCH_REQ(23)
+#define KVM_REQ_GET_VMCS12_PAGES	KVM_ARCH_REQ(24)
 
 #define CR0_RESERVED_BITS \
 	(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
@@ -1085,6 +1086,8 @@ struct kvm_x86_ops {
 
 	void (*setup_mce)(struct kvm_vcpu *vcpu);
 
+	void (*get_vmcs12_pages)(struct kvm_vcpu *vcpu);
+
 	int (*smi_allowed)(struct kvm_vcpu *vcpu);
 	int (*pre_enter_smm)(struct kvm_vcpu *vcpu, char *smstate);
 	int (*pre_leave_smm)(struct kvm_vcpu *vcpu, u64 smbase);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2630ab38d72c..17aede06ae0e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10636,9 +10636,9 @@ static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
 static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
 						 struct vmcs12 *vmcs12);
 
-static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
-				    struct vmcs12 *vmcs12)
+static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 {
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct page *page;
 	u64 hpa;
@@ -11750,13 +11750,18 @@ static int check_vmentry_postreqs(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	return 0;
 }
 
-static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
+/*
+ * If p_exit_qual is NULL, this is being called from state restore (either
+ * kvm_set_nested_state or RSM). Otherwise it's called from vmlaunch/vmresume.
+ */
+static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, u32 *p_exit_qual)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	bool from_vmentry = !!p_exit_qual;
 	u32 msr_entry_idx;
-	u32 exit_qual;
-	int r;
+	u32 dummy_exit_qual;
+	int r = 0;
 
 	enter_guest_mode(vcpu);
@@ -11770,17 +11775,27 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
 		vcpu->arch.tsc_offset += vmcs12->tsc_offset;
 
 	r = EXIT_REASON_INVALID_STATE;
-	if (prepare_vmcs02(vcpu, vmcs12, &exit_qual))
+	if (prepare_vmcs02(vcpu, vmcs12, from_vmentry ? p_exit_qual : &dummy_exit_qual))
 		goto fail;
 
-	nested_get_vmcs12_pages(vcpu, vmcs12);
+	if (from_vmentry) {
+		nested_get_vmcs12_pages(vcpu);
 
-	r = EXIT_REASON_MSR_LOAD_FAIL;
-	msr_entry_idx = nested_vmx_load_msr(vcpu,
-					    vmcs12->vm_entry_msr_load_addr,
-					    vmcs12->vm_entry_msr_load_count);
-	if (msr_entry_idx)
-		goto fail;
+		r = EXIT_REASON_MSR_LOAD_FAIL;
+		msr_entry_idx = nested_vmx_load_msr(vcpu,
+						    vmcs12->vm_entry_msr_load_addr,
+						    vmcs12->vm_entry_msr_load_count);
+		if (msr_entry_idx)
+			goto fail;
+	} else {
+		/*
+		 * The MMU is not initialized to point at the right entities yet and
+		 * "get pages" would need to read data from the guest (i.e. we will
+		 * need to perform gpa to hpa translation). Request a call
+		 * to nested_get_vmcs12_pages before the next VM-entry.
+		 */
+		kvm_make_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);
+	}
 
 	/*
 	 * Note no nested_vmx_succeed or nested_vmx_fail here. At this point
@@ -11795,8 +11810,7 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu)
 		vcpu->arch.tsc_offset -= vmcs12->tsc_offset;
 	leave_guest_mode(vcpu);
 	vmx_switch_vmcs(vcpu, &vmx->vmcs01);
-	nested_vmx_entry_failure(vcpu, vmcs12, r, exit_qual);
-	return 1;
+	return r;
 }
 
 /*
@@ -11873,10 +11887,11 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	 */
 	vmx->nested.nested_run_pending = 1;
 
-	ret = enter_vmx_non_root_mode(vcpu);
+	ret = enter_vmx_non_root_mode(vcpu, &exit_qual);
 	if (ret) {
+		nested_vmx_entry_failure(vcpu, vmcs12, ret, exit_qual);
 		vmx->nested.nested_run_pending = 0;
-		return ret;
+		return 1;
 	}
 
 	/*
@@ -12962,7 +12977,7 @@ static int vmx_pre_leave_smm(struct kvm_vcpu *vcpu, u64 smbase)
 
 	if (vmx->nested.smm.guest_mode) {
 		vcpu->arch.hflags &= ~HF_SMM_MASK;
-		ret = enter_vmx_non_root_mode(vcpu);
+		ret = enter_vmx_non_root_mode(vcpu, NULL);
 		vcpu->arch.hflags |= HF_SMM_MASK;
 		if (ret)
 			return ret;
@@ -13111,6 +13126,8 @@ static int enable_smi_window(struct kvm_vcpu *vcpu)
 
 	.setup_mce = vmx_setup_mce,
 
+	.get_vmcs12_pages = nested_get_vmcs12_pages,
+
 	.smi_allowed = vmx_smi_allowed,
 	.pre_enter_smm = vmx_pre_enter_smm,
 	.pre_leave_smm = vmx_pre_leave_smm,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2b812b3c5088..8ddf5f94876f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7257,6 +7257,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	bool req_immediate_exit = false;
 
 	if (kvm_request_pending(vcpu)) {
+		if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu))
+			kvm_x86_ops->get_vmcs12_pages(vcpu);
 		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
 			kvm_mmu_unload(vcpu);
 		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
-- 
1.8.3.1