Date: Fri, 7 Aug 2020 13:51:20 -0400
From: Vivek Goyal
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, paolo-bonzini
Cc: virtio-fs-list, vkuznets@redhat.com, sean.j.christopherson@intel.com
Subject: Re: [PATCH v4] kvm,x86: Exit to user space in case page fault error
Message-ID: <20200807175120.GG307931@redhat.com>
References: <20200720211359.GF502563@redhat.com>
In-Reply-To: <20200720211359.GF502563@redhat.com>

On Mon, Jul 20, 2020 at 05:13:59PM -0400, Vivek Goyal wrote:
> Page fault error handling behavior in kvm seems a little inconsistent when
> a page fault reports an error. If we are doing the fault synchronously,
> we capture the error (-EFAULT) returned by __gfn_to_pfn_memslot(), exit
> to user space, and qemu reports "error: kvm run failed Bad address".

Hi Paolo,

Ping. I am wondering what you think about this patch. Can it be merged?
Thanks
Vivek

> 
> But if we are doing an async page fault, then async_pf_execute() will simply
> ignore the error reported by get_user_pages_remote() or by
> kvm_mmu_do_page_fault(). It is assumed that the page fault was successful,
> and either a page-ready event is injected into the guest or the guest is
> brought out of the artificial halt state and run again. In both cases, when
> the guest retries the instruction it takes an exit again, because the page
> fault was not successful in the previous attempt. This infinite loop then
> continues forever.
> 
> Retrying the fault in a loop would make sense if the error were temporary
> and would be resolved on retry. But I don't see any intention in the code
> to determine whether the error is temporary or not. Whether to do the fault
> synchronously or asynchronously depends on many variables, but none of
> those variables is whether the error is temporary (kvm_can_do_async_pf()).
> 
> That makes it very inconsistent and unpredictable whether kvm will exit to
> qemu with an error or just retry and go into an infinite loop.
> 
> This patch tries to make this behavior consistent: instead of getting into
> an infinite loop of retrying the page fault, exit to user space and stop
> the VM if a page fault error happens.
> 
> In the future this can be improved by injecting errors into the guest. As
> of now we don't have any race-free method to inject errors into the guest.
> 
> When a page fault error happens on the async path, save that gfn, and when
> the guest retries, do a sync fault instead of an async fault. That way, if
> the error is encountered again, we exit to qemu and avoid the infinite loop.
> 
> We maintain a cache of error gfns and force a sync fault if a gfn is found
> in the error gfn cache. There is a small possibility that we miss an error
> gfn (because it got overwritten by a newer error gfn). But it is just a
> hint, and sooner or later some error gfn will match and we will force a
> sync fault and exit to user space.
> 
> Changes from v3:
> - Added function kvm_find_and_remove_error_gfn() and removed
>   kvm_find_error_gfn() and kvm_del_error_gfn(). (Vitaly)
> 
> - Added a macro GFN_INVALID. (Vitaly)
> 
> - Used gpa_to_gfn() to convert gpa to gfn. (Vitaly)
> 
> Changes from v2:
> - Fixed a warning by making kvm_find_error_gfn() static.
> 
> Changes from v1:
> - Maintain a cache of error gfns, instead of a single gfn. (Vitaly)
> 
> Signed-off-by: Vivek Goyal
> ---
>  arch/x86/include/asm/kvm_host.h |  2 ++
>  arch/x86/kvm/mmu.h              |  2 +-
>  arch/x86/kvm/mmu/mmu.c          |  2 +-
>  arch/x86/kvm/x86.c              | 54 +++++++++++++++++++++++++++++++--
>  include/linux/kvm_types.h       |  1 +
>  5 files changed, 56 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index be5363b21540..e6f8d3f1a377 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -137,6 +137,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
>  #define KVM_NR_VAR_MTRR 8
>  
>  #define ASYNC_PF_PER_VCPU 64
> +#define ERROR_GFN_PER_VCPU 64
>  
>  enum kvm_reg {
>          VCPU_REGS_RAX = __VCPU_REGS_RAX,
> @@ -778,6 +779,7 @@ struct kvm_vcpu_arch {
>                  unsigned long nested_apf_token;
>                  bool delivery_as_pf_vmexit;
>                  bool pageready_pending;
> +                gfn_t error_gfns[ERROR_GFN_PER_VCPU];
>          } apf;
>  
>          /* OSVW MSRs (AMD only) */
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 444bb9c54548..d0a2a12c7bb6 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -60,7 +60,7 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots);
>  void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer);
>  void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>                               bool accessed_dirty, gpa_t new_eptp);
> -bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
> +bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu, gfn_t gfn);
>  int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
>                            u64 fault_address, char *insn, int insn_len);
>  
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6d6a0ae7800c..b51d4aa405e0 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4078,7 +4078,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
>          if (!async)
>                  return false; /* *pfn has correct page already */
>  
> -        if (!prefault && kvm_can_do_async_pf(vcpu)) {
> +        if (!prefault && kvm_can_do_async_pf(vcpu, gpa_to_gfn(cr2_or_gpa))) {
>                  trace_kvm_try_async_get_page(cr2_or_gpa, gfn);
>                  if (kvm_find_async_pf_gfn(vcpu, gfn)) {
>                          trace_kvm_async_pf_doublefault(cr2_or_gpa, gfn);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 88c593f83b28..c1f5094d6e53 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -263,6 +263,13 @@ static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
>                  vcpu->arch.apf.gfns[i] = ~0;
>  }
>  
> +static inline void kvm_error_gfn_hash_reset(struct kvm_vcpu *vcpu)
> +{
> +        int i;
> +        for (i = 0; i < ERROR_GFN_PER_VCPU; i++)
> +                vcpu->arch.apf.error_gfns[i] = GFN_INVALID;
> +}
> +
>  static void kvm_on_user_return(struct user_return_notifier *urn)
>  {
>          unsigned slot;
> @@ -9484,6 +9491,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>          vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT;
>  
>          kvm_async_pf_hash_reset(vcpu);
> +        kvm_error_gfn_hash_reset(vcpu);
>          kvm_pmu_init(vcpu);
>  
>          vcpu->arch.pending_external_vector = -1;
> @@ -9608,6 +9616,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
>  
>          kvm_clear_async_pf_completion_queue(vcpu);
>          kvm_async_pf_hash_reset(vcpu);
> +        kvm_error_gfn_hash_reset(vcpu);
>          vcpu->arch.apf.halted = false;
>  
>          if (kvm_mpx_supported()) {
> @@ -10369,6 +10378,36 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
>  }
>  EXPORT_SYMBOL_GPL(kvm_set_rflags);
>  
> +static inline u32 kvm_error_gfn_hash_fn(gfn_t gfn)
> +{
> +        BUILD_BUG_ON(!is_power_of_2(ERROR_GFN_PER_VCPU));
> +
> +        return hash_32(gfn & 0xffffffff, order_base_2(ERROR_GFN_PER_VCPU));
> +}
> +
> +static void kvm_add_error_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
> +{
> +        u32 key = kvm_error_gfn_hash_fn(gfn);
> +
> +        /*
> +         * Overwrite the previous gfn. This is just a hint to do
> +         * sync page fault.
> +         */
> +        vcpu->arch.apf.error_gfns[key] = gfn;
> +}
> +
> +/* Returns true if gfn was found in hash table, false otherwise */
> +static bool kvm_find_and_remove_error_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
> +{
> +        u32 key = kvm_error_gfn_hash_fn(gfn);
> +
> +        if (vcpu->arch.apf.error_gfns[key] != gfn)
> +                return 0;
> +
> +        vcpu->arch.apf.error_gfns[key] = GFN_INVALID;
> +        return true;
> +}
> +
>  void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>  {
>          int r;
> @@ -10385,7 +10424,9 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>              work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
>                  return;
>  
> -        kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
> +        r = kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
> +        if (r < 0)
> +                kvm_add_error_gfn(vcpu, gpa_to_gfn(work->cr2_or_gpa));
>  }
>  
>  static inline u32 kvm_async_pf_hash_fn(gfn_t gfn)
> @@ -10495,7 +10536,7 @@ static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
>          return true;
>  }
>  
> -bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
> +bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
>          if (unlikely(!lapic_in_kernel(vcpu) ||
>                       kvm_event_needs_reinjection(vcpu) ||
> @@ -10509,7 +10550,14 @@ bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu)
>           * If interrupts are off we cannot even use an artificial
>           * halt state.
>           */
> -        return kvm_arch_interrupt_allowed(vcpu);
> +        if (!kvm_arch_interrupt_allowed(vcpu))
> +                return false;
> +
> +        /* Found gfn in error gfn cache. Force sync fault */
> +        if (kvm_find_and_remove_error_gfn(vcpu, gfn))
> +                return false;
> +
> +        return true;
>  }
>  
>  bool kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 68e84cf42a3f..677bb8269cd3 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -36,6 +36,7 @@ typedef u64 gpa_t;
>  typedef u64 gfn_t;
>  
>  #define GPA_INVALID     (~(gpa_t)0)
> +#define GFN_INVALID     (~(gfn_t)0)
>  
>  typedef unsigned long hva_t;
>  typedef u64 hpa_t;
> -- 
> 2.25.4
> 
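
For anyone who wants to see the hint-cache behaviour on its own, here is a
minimal user-space sketch of the same idea: a small direct-mapped table
where a collision simply overwrites the older entry, and a successful
lookup clears the slot so the hint fires only once. The names
(error_gfn_cache, cache_add, cache_find_and_remove) and the multiplicative
hash are illustrative stand-ins, not part of the patch, which uses
hash_32() and order_base_2() in the kernel.

/* Standalone sketch of the direct-mapped "error gfn" hint cache. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ERROR_GFN_SLOTS 64              /* must be a power of two */
#define SLOT_EMPTY      (~(uint64_t)0)

struct error_gfn_cache {
        uint64_t slots[ERROR_GFN_SLOTS];
};

/* Illustrative multiplicative hash mapping a gfn to one of the slots. */
static uint32_t error_gfn_hash(uint64_t gfn)
{
        return (uint32_t)(gfn * 2654435761u) & (ERROR_GFN_SLOTS - 1);
}

static void cache_init(struct error_gfn_cache *c)
{
        for (int i = 0; i < ERROR_GFN_SLOTS; i++)
                c->slots[i] = SLOT_EMPTY;
}

/* Remember a gfn whose async fault failed; may overwrite an older hint. */
static void cache_add(struct error_gfn_cache *c, uint64_t gfn)
{
        c->slots[error_gfn_hash(gfn)] = gfn;
}

/* Returns true (and clears the slot) if this gfn was recorded earlier. */
static bool cache_find_and_remove(struct error_gfn_cache *c, uint64_t gfn)
{
        uint32_t key = error_gfn_hash(gfn);

        if (c->slots[key] != gfn)
                return false;

        c->slots[key] = SLOT_EMPTY;
        return true;
}

int main(void)
{
        struct error_gfn_cache c;

        cache_init(&c);
        cache_add(&c, 0x1234);

        /* First retry of the faulting gfn sees the hint -> force a sync fault. */
        printf("first lookup:  %d\n", cache_find_and_remove(&c, 0x1234));
        /* The hint was consumed, so a second lookup misses. */
        printf("second lookup: %d\n", cache_find_and_remove(&c, 0x1234));
        return 0;
}

Because the table is only a hint, losing an entry to a collision is
harmless: the worst case is one more round of async fault before the gfn
lands in the cache again and the sync path is forced.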