Date: Thu, 4 Jun 2020 09:02:53 -0700
From: Sean Christopherson
To: Vitaly Kuznetsov
Cc: Paolo Bonzini, kvm@vger.kernel.org, Wanpeng Li, Jim Mattson, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM: nVMX: Inject #GP when nested_vmx_get_vmptr() fails to read guest memory
Message-ID: <20200604160253.GF30223@linux.intel.com>
References: <20200604143158.484651-1-vkuznets@redhat.com> <20200604145357.GA30223@linux.intel.com> <87k10meth6.fsf@vitty.brq.redhat.com>
In-Reply-To: <87k10meth6.fsf@vitty.brq.redhat.com>

On Thu, Jun 04, 2020 at 05:33:25PM +0200, Vitaly Kuznetsov wrote:
> Sean Christopherson writes:
>
> > On Thu, Jun 04, 2020 at 04:40:52PM +0200, Paolo Bonzini wrote:
> >> On 04/06/20 16:31, Vitaly Kuznetsov wrote:
> >
> > ...
> >
> >> > KVM could've handled the request correctly by going to userspace and
> >> > performing I/O, but there doesn't seem to be a good need for such requests
> >> > in the first place. Sane guests should not call VMXON/VMPTRLD/VMCLEAR with
> >> > anything but normal memory. Just inject #GP to find insane ones.
> >> >
> >
> > ...
> >
> >>
> >> looks good but we need to do the same in handle_vmread, handle_vmwrite,
> >> handle_invept and handle_invvpid. Which probably means adding something
> >> like nested_inject_emulation_fault to commonize the inner "if".
> >
> > Can we just kill the guest already instead of throwing more hacks at this
> > and hoping something sticks?  We already have one in
> > kvm_write_guest_virt_system...
> >
> > commit 541ab2aeb28251bf7135c7961f3a6080eebcc705
> > Author: Fuqian Huang
> > Date:   Thu Sep 12 12:18:17 2019 +0800
> >
> >     KVM: x86: work around leak of uninitialized stack contents
> >
>
> Oh I see...
>
> [...]
>
> Let's get back to the 'vm_bugged' idea then?
>
> https://lore.kernel.org/kvm/87muadnn1t.fsf@vitty.brq.redhat.com/

Hmm, I don't think we need to go that far.  The 'vm_bugged' idea was more
to handle cases where KVM itself (or hardware) screwed something up and
detects an issue deep in a call stack, with no recourse for reporting the
error up the stack.  That isn't the case here.

Unless I'm mistaken, the end result is similar to this patch, except that
KVM would exit to userspace with KVM_INTERNAL_ERROR_EMULATION instead of
injecting a #GP.  E.g.

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 9c74a732b08d..e13d2c0014e2 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4624,6 +4624,20 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
 	}
 }
 
+static int nested_vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int ret,
+					    struct x86_exception *e)
+{
+	if (ret == X86EMUL_PROPAGATE_FAULT) {
+		kvm_inject_emulated_page_fault(vcpu, e);
+		return 1;
+	}
+
+	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+	vcpu->run->internal.ndata = 0;
+	return 0;
+}
+
 static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
 {
 	gva_t gva;
@@ -4634,11 +4648,9 @@ static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
 			sizeof(*vmpointer), &gva))
 		return 1;
 
-	if (kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e)) {
-		kvm_inject_emulated_page_fault(vcpu, &e);
-		return 1;
-	}
-
+	r = kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e);
+	if (r)
+		return nested_vmx_handle_memory_failure(vcpu, r, &e);
 
 	return 0;
 }

Side topic, I have some preliminary patches for the 'vm_bugged' idea.
I'll try to whip them into something that can be posted upstream in the
next few weeks.
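
For the other call sites Paolo mentioned (handle_vmread and friends), the
conversion would presumably follow the same pattern.  A rough, untested
sketch against handle_invvpid, assuming its existing 'operand', 'gva' and
'e' locals plus a newly declared 'int r':

	/* Hypothetical addition next to the existing locals. */
	int r;

	...

	/*
	 * Read the INVVPID descriptor; on failure, either reflect the fault
	 * into the guest or bail out to userspace via the common helper.
	 */
	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
	if (r)
		return nested_vmx_handle_memory_failure(vcpu, r, &e);

Either way, the inner "if" at each site collapses to a single helper call,
which is the commonization being asked for.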