2020-02-27 17:24:01

by Mohammed Gamal

Subject: [PATCH 1/5] KVM: x86: Add function to inject guest page fault with reserved bits set

Signed-off-by: Mohammed Gamal <[email protected]>
---
arch/x86/kvm/x86.c | 14 ++++++++++++++
arch/x86/kvm/x86.h | 1 +
2 files changed, 15 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 359fcd395132..434c55a8b719 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10494,6 +10494,20 @@ u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);

+void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa)
+{
+ struct x86_exception fault;
+
+ fault.vector = PF_VECTOR;
+ fault.error_code_valid = true;
+ fault.error_code = PFERR_RSVD_MASK;
+ fault.nested_page_fault = false;
+ fault.address = gpa;
+
+ kvm_inject_page_fault(vcpu, &fault);
+}
+EXPORT_SYMBOL_GPL(kvm_inject_rsvd_bits_pf);
+
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 3624665acee4..7d8ab28a6983 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -276,6 +276,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
int page_num);
bool kvm_vector_hashing_enabled(void);
+void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa);
int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len);
enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
--
2.21.1


2020-02-27 19:31:53

by Ben Gardon

Subject: Re: [PATCH 1/5] KVM: x86: Add function to inject guest page fault with reserved bits set

On Thu, Feb 27, 2020 at 9:23 AM Mohammed Gamal <[email protected]> wrote:
>
> Signed-off-by: Mohammed Gamal <[email protected]>
> ---
> arch/x86/kvm/x86.c | 14 ++++++++++++++
> arch/x86/kvm/x86.h | 1 +
> 2 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 359fcd395132..434c55a8b719 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10494,6 +10494,20 @@ u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
>
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa)
> +{
> + struct x86_exception fault;
> +
> + fault.vector = PF_VECTOR;
> + fault.error_code_valid = true;
> + fault.error_code = PFERR_RSVD_MASK;
> + fault.nested_page_fault = false;
> + fault.address = gpa;
> +
> + kvm_inject_page_fault(vcpu, &fault);
> +}
> +EXPORT_SYMBOL_GPL(kvm_inject_rsvd_bits_pf);
> +

There are calls to kvm_mmu_page_fault in arch/x86/kvm/mmu/mmu.c that
don't get the reserved-bits check and injected page fault added by the
later patches in this series. Is the check not needed in those cases?
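
For reference, the kind of check/injection being discussed at those call
sites would look roughly like the snippet below. It's only an
illustration of the intent, not code from the later patches; it assumes
cpuid_maxphyaddr() from arch/x86/kvm/cpuid.h reports the guest's
MAXPHYADDR.

	/*
	 * Hypothetical call site: a GPA with any bits set at or above
	 * the guest's MAXPHYADDR should result in a reserved-bit #PF
	 * being injected into the guest.
	 */
	if (gpa >> cpuid_maxphyaddr(vcpu)) {
		kvm_inject_rsvd_bits_pf(vcpu, gpa);
		return 1;
	}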

> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 3624665acee4..7d8ab28a6983 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -276,6 +276,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
> bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
> int page_num);
> bool kvm_vector_hashing_enabled(void);
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa);
> int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> int emulation_type, void *insn, int insn_len);
> enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
> --
> 2.21.1
>

2020-02-28 22:29:36

by Sean Christopherson

Subject: Re: [PATCH 1/5] KVM: x86: Add function to inject guest page fault with reserved bits set

On Thu, Feb 27, 2020 at 07:23:02PM +0200, Mohammed Gamal wrote:
> Signed-off-by: Mohammed Gamal <[email protected]>
> ---
> arch/x86/kvm/x86.c | 14 ++++++++++++++
> arch/x86/kvm/x86.h | 1 +
> 2 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 359fcd395132..434c55a8b719 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10494,6 +10494,20 @@ u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
>
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa)
> +{
> + struct x86_exception fault;
> +
> + fault.vector = PF_VECTOR;
> + fault.error_code_valid = true;
> + fault.error_code = PFERR_RSVD_MASK;

As Jim pointed out, a reserved-bit #PF is by definition also PRESENT,
i.e. PFERR_PRESENT_MASK needs to be set as well. Other bits need to be
translated from the VMCS.EXIT_QUALIFICATION field and/or manually
calculated for EPT. I assume NPT is more or less good to go, i.e. just
pass in the error_code?
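
Roughly, the kind of translation meant here (a sketch only, assuming the
EPT_VIOLATION_* exit-qualification macros from arch/x86/include/asm/vmx.h
and the PFERR_* masks, along the lines of what handle_ept_violation()
already does):

	/* Translate the EPT violation exit qualification into #PF bits. */
	error_code  = (exit_qualification & EPT_VIOLATION_ACC_READ)
		      ? PFERR_USER_MASK : 0;
	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
		      ? PFERR_WRITE_MASK : 0;
	error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
		      ? PFERR_FETCH_MASK : 0;
	/* A reserved-bit fault is by definition on a present translation. */
	error_code |= PFERR_PRESENT_MASK | PFERR_RSVD_MASK;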

> + fault.nested_page_fault = false;
> + fault.address = gpa;

Taking the GPA is wrong, @address is what ends up in CR2, i.e. a GVA.

> + kvm_inject_page_fault(vcpu, &fault);

This needs to be

vcpu->arch.walk_mmu->inject_page_fault(vcpu, &fault);

so that L1 (nested VMX) can intercept the #PF.
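
Putting the above together, the helper might end up looking something
like this (just a sketch of the suggested direction; the gva/error_code
parameters are illustrative, not from a posted v2):

	void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gva_t gva,
				     u32 error_code)
	{
		struct x86_exception fault;

		fault.vector = PF_VECTOR;
		fault.error_code_valid = true;
		/* Caller passes the access bits; RSVD implies PRESENT. */
		fault.error_code = error_code | PFERR_PRESENT_MASK |
				   PFERR_RSVD_MASK;
		fault.nested_page_fault = false;
		/* @address is the faulting linear address, i.e. CR2. */
		fault.address = gva;

		/* Route via the walk MMU so L1 can intercept the #PF. */
		vcpu->arch.walk_mmu->inject_page_fault(vcpu, &fault);
	}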

> +}
> +EXPORT_SYMBOL_GPL(kvm_inject_rsvd_bits_pf);
> +
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 3624665acee4..7d8ab28a6983 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -276,6 +276,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
> bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
> int page_num);
> bool kvm_vector_hashing_enabled(void);
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa);
> int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> int emulation_type, void *insn, int insn_len);
> enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
> --
> 2.21.1
>