From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Maxim Levitsky,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] KVM: nVMX: Always make an attempt to map eVMCS after migration
Date: Mon, 3 May 2021 17:08:51 +0200
Message-Id: <20210503150854.1144255-2-vkuznets@redhat.com>
In-Reply-To: <20210503150854.1144255-1-vkuznets@redhat.com>
References: <20210503150854.1144255-1-vkuznets@redhat.com>

When enlightened VMCS is in use and nested state is migrated with
vmx_get_nested_state()/vmx_set_nested_state(), KVM can't map the eVMCS
page right away: the eVMCS GPA is not a 'struct kvm_vmx_nested_state_hdr'
member, and it can't be read from the VP assist page because userspace
may decide to restore HV_X64_MSR_VP_ASSIST_PAGE after restoring nested
state (and QEMU, for example, does exactly that). To make sure the eVMCS
gets mapped, vmx_set_nested_state() raises the
KVM_REQ_GET_NESTED_STATE_PAGES request.

Commit f2c7ef3ba955 ("KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES
on nested vmexit") added KVM_REQ_GET_NESTED_STATE_PAGES clearing to
nested_vmx_vmexit() to make sure the MSR permission bitmap is not
switched when an immediate exit from L2 to L1 happens right after
migration (caused by a pending event, for example). Unfortunately, in
that exact situation the eVMCS still needs to be mapped so that
nested_sync_vmcs12_to_shadow() can reflect changes in vmcs12 to the
eVMCS. As a band-aid, restore the nested_get_evmcs_page() call when
clearing KVM_REQ_GET_NESTED_STATE_PAGES in nested_vmx_vmexit().
The 'fix' is far from ideal, as possible failures can't easily be
propagated, and even if they could, it is most likely already too late
to do so. The whole 'KVM_REQ_GET_NESTED_STATE_PAGES' approach to mapping
the eVMCS after migration seems fragile, as it diverges too much from
the 'native' path, where vmptr loading happens on
vmx_set_nested_state().

Fixes: f2c7ef3ba955 ("KVM: nSVM: cancel KVM_REQ_GET_NESTED_STATE_PAGES on nested vmexit")
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/vmx/nested.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 1e069aac7410..2febb1dd68e8 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -3098,15 +3098,8 @@ static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu)
 			nested_vmx_handle_enlightened_vmptrld(vcpu, false);
 
 		if (evmptrld_status == EVMPTRLD_VMFAIL ||
-		    evmptrld_status == EVMPTRLD_ERROR) {
-			pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
-					     __func__);
-			vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-			vcpu->run->internal.suberror =
-				KVM_INTERNAL_ERROR_EMULATION;
-			vcpu->run->internal.ndata = 0;
+		    evmptrld_status == EVMPTRLD_ERROR)
 			return false;
-		}
 	}
 
 	return true;
@@ -3194,8 +3187,16 @@ static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
 
 static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
 {
-	if (!nested_get_evmcs_page(vcpu))
+	if (!nested_get_evmcs_page(vcpu)) {
+		pr_debug_ratelimited("%s: enlightened vmptrld failed\n",
+				     __func__);
+		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+		vcpu->run->internal.suberror =
+			KVM_INTERNAL_ERROR_EMULATION;
+		vcpu->run->internal.ndata = 0;
 		return false;
+	}
 
 	if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu))
 		return false;
@@ -4422,7 +4423,15 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
 	/* trying to cancel vmlaunch/vmresume is a bug */
 	WARN_ON_ONCE(vmx->nested.nested_run_pending);
 
-	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+	if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
+		/*
+		 * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map
+		 * Enlightened VMCS after migration and we still need to
+		 * do that when something is forcing L2->L1 exit prior to
+		 * the first L2 run.
+		 */
+		(void)nested_get_evmcs_page(vcpu);
+	}
 
 	/* Service the TLB flush request for L2 before switching to L1. */
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-- 
2.30.2