From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Xiao Guangrong, rkrcmar@redhat.com, bdas@redhat.com
Subject: [PATCH 11/11] KVM: x86: add SMM to the MMU role, support SMRAM address space
Date: Mon, 18 May 2015 15:48:43 +0200
Message-Id: <1431956923-35602-12-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1431956923-35602-1-git-send-email-pbonzini@redhat.com>
References: <1431956923-35602-1-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.3.1

This is now very simple to do.  The only interesting part is a simple
trick to find the right memslot in gfn_to_rmap, retrieving the address
space from the spte role word.

The comment on top of union kvm_mmu_page_role has been stale forever,
so remove it.  Speaking of stale code, remove pad_for_nice_hex_output
too: it was splitting the "access" bitfield across two bytes and thus
had effectively turned into pad_for_ugly_hex_output.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h | 19 ++++++++-----------
 arch/x86/kvm/mmu.c              |  7 ++++++-
 arch/x86/kvm/x86.c              |  1 +
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d7957733de61..5dd9d960595c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
 	void *objects[KVM_NR_MEM_OBJS];
 };
 
-/*
- * kvm_mmu_page_role, below, is defined as:
- *
- *   bits 0:3 - total guest paging levels (2-4, or zero for real mode)
- *   bits 4:7 - page table level for this shadow (1-4)
- *   bits 8:9 - page table quadrant for 2-level guests
- *   bit   16 - direct mapping of virtual to physical mapping at gfn
- *              used for real mode and two-dimensional paging
- *   bits 17:19 - common access permissions for all ptes in this shadow page
- */
 union kvm_mmu_page_role {
 	unsigned word;
 	struct {
 		unsigned level:4;
 		unsigned cr4_pae:1;
 		unsigned quadrant:2;
-		unsigned pad_for_nice_hex_output:6;
 		unsigned direct:1;
 		unsigned access:3;
 		unsigned invalid:1;
@@ -208,6 +198,7 @@ union kvm_mmu_page_role {
 		unsigned cr0_wp:1;
 		unsigned smep_andnot_wp:1;
 		unsigned smap_andnot_wp:1;
+		unsigned smm:1;
 	};
 };
 
@@ -1118,6 +1109,12 @@ enum {
 #define HF_SMM_MASK		(1 << 6)
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 7)
 
+#define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
+#define KVM_ADDRESS_SPACE_NUM 2
+
+#define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
+#define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
+
 /*
  * Hardware virtualization extension instructions may fault if a
  * reboot turns off virtualization while processes are running.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b7e94a662f4e..92436bc4f3e0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1041,9 +1041,11 @@ static unsigned long *__gfn_to_rmap(gfn_t gfn, int level,
  */
 static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn, struct kvm_mmu_page *sp)
 {
+	struct kvm_memslots *slots;
 	struct kvm_memory_slot *slot;
 
-	slot = gfn_to_memslot(kvm, gfn);
+	slots = kvm_memslots_for_spte_role(kvm, sp->role);
+	slot = __gfn_to_memslot(slots, gfn);
 	return __gfn_to_rmap(gfn, sp->role.level, slot);
 }
 
@@ -3918,6 +3920,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	struct kvm_mmu *context = &vcpu->arch.mmu;
 
 	context->base_role.word = 0;
+	context->base_role.smm = is_smm(vcpu);
 	context->page_fault = tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = nonpaging_invlpg;
@@ -3979,6 +3982,7 @@ void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu)
 		= smep && !is_write_protection(vcpu);
 	context->base_role.smap_andnot_wp
 		= smap && !is_write_protection(vcpu);
+	context->base_role.smm = is_smm(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_mmu);
 
@@ -4261,6 +4265,7 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 		.nxe = 1,
 		.smep_andnot_wp = 1,
 		.smap_andnot_wp = 1,
+		.smm = 1,
 	};
 
 	/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bb637e893e13..8a05770b9fa1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5473,6 +5473,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu, unsigned emul_flags)
 	}
 
 	vcpu->arch.hflags = emul_flags;
+	kvm_mmu_reset_context(vcpu);
 }
 
 static int kvm_vcpu_check_hw_bp(unsigned long addr, u32 type, u32 dr7,
-- 
1.8.3.1
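
As an aside for readers without the tree at hand, the sketch below is a
rough, self-contained illustration of the lookup scheme described above,
not code from the patch: the vCPU picks its memslot set from HF_SMM_MASK
in hflags (what kvm_arch_vcpu_memslots_id expresses), while rmap lookups
recover the same address space from the smm bit stored in the shadow
page's role word (what kvm_memslots_for_spte_role expresses).  All types
and helpers here are simplified stand-ins, not the real KVM definitions.

/*
 * Illustrative sketch only: simplified stand-ins for the real KVM types,
 * showing how one bit selects between the normal and SMRAM memslot sets.
 */
#include <stdio.h>

#define HF_SMM_MASK		(1 << 6)
#define KVM_ADDRESS_SPACE_NUM	2

struct kvm_memslots { const char *name; };

struct kvm {
	/* one memslot set per address space: [0] = normal, [1] = SMRAM */
	struct kvm_memslots memslots[KVM_ADDRESS_SPACE_NUM];
};

struct kvm_vcpu { unsigned hflags; };

union kvm_mmu_page_role {
	unsigned word;
	struct {
		unsigned smm:1;	/* the bit this patch adds to the role */
	};
};

/* vCPU side: the address space id comes from the SMM hflag. */
static int vcpu_memslots_id(const struct kvm_vcpu *vcpu)
{
	return (vcpu->hflags & HF_SMM_MASK) ? 1 : 0;
}

/* rmap side: the same id is recovered from the shadow page's role word. */
static struct kvm_memslots *memslots_for_role(struct kvm *kvm,
					      union kvm_mmu_page_role role)
{
	return &kvm->memslots[role.smm];
}

int main(void)
{
	struct kvm kvm = { .memslots = { { "normal" }, { "smram" } } };
	struct kvm_vcpu vcpu = { .hflags = HF_SMM_MASK };	/* vCPU is in SMM */
	union kvm_mmu_page_role role = { .word = 0 };

	role.smm = vcpu_memslots_id(&vcpu);	/* role records the address space */
	printf("%s\n", memslots_for_role(&kvm, role)->name);	/* prints "smram" */
	return 0;
}

Because the address space ends up in the role word, shadow pages built
for normal memory and for SMRAM also stay distinct automatically: pages
with different role words are never shared.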