Message-ID: <55766510.7010108@linux.intel.com>
Date: Tue, 09 Jun 2015 12:01:20 +0800
From: Xiao Guangrong
To: Paolo Bonzini, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, bdas@redhat.com
Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space
References: <1432746314-50196-1-git-send-email-pbonzini@redhat.com> <1432746314-50196-13-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1432746314-50196-13-git-send-email-pbonzini@redhat.com>

On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
> This is now very simple to do. The only interesting part is a simple
> trick to find the right memslot in gfn_to_rmap, retrieving the address
> space from the spte role word. The same trick is used in the auditing
> code.
>
> The comment on top of union kvm_mmu_page_role has been stale forever,

Fortunately, we have documented these fields in mmu.txt, please do it
for 'smm' as well. :)

> so remove it. Speaking of stale code, remove pad_for_nice_hex_output
> too: it was splitting the "access" bitfield across two bytes and thus
> had effectively turned into pad_for_ugly_hex_output.
>
> Signed-off-by: Paolo Bonzini
> ---
> v1->v2: new
>
>  arch/x86/include/asm/kvm_host.h | 26 +++++++++++++++-----------
>  arch/x86/kvm/mmu.c              | 15 ++++++++++++---
>  arch/x86/kvm/mmu_audit.c        | 10 +++++++---
>  arch/x86/kvm/x86.c              |  2 ++
>  4 files changed, 36 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5a5e13af6e03..47006683f2fe 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
>  	void *objects[KVM_NR_MEM_OBJS];
>  };
>
> -/*
> - * kvm_mmu_page_role, below, is defined as:
> - *
> - * bits 0:3 - total guest paging levels (2-4, or zero for real mode)
> - * bits 4:7 - page table level for this shadow (1-4)
> - * bits 8:9 - page table quadrant for 2-level guests
> - * bit   16 - direct mapping of virtual to physical mapping at gfn
> - *            used for real mode and two-dimensional paging
> - * bits 17:19 - common access permissions for all ptes in this shadow page
> - */
>  union kvm_mmu_page_role {
>  	unsigned word;
>  	struct {
>  		unsigned level:4;
>  		unsigned cr4_pae:1;
>  		unsigned quadrant:2;
> -		unsigned pad_for_nice_hex_output:6;
>  		unsigned direct:1;
>  		unsigned access:3;
>  		unsigned invalid:1;
> @@ -208,6 +197,15 @@ union kvm_mmu_page_role {
>  		unsigned cr0_wp:1;
>  		unsigned smep_andnot_wp:1;
>  		unsigned smap_andnot_wp:1;
> +		unsigned :8;
> +
> +		/*
> +		 * This is left at the top of the word so that
> +		 * kvm_memslots_for_spte_role can extract it with a
> +		 * simple shift. While there is room, give it a whole
> +		 * byte so it is also faster to load it from memory.
> +		 */
> +		unsigned smm:8;

I doubt we really need this trick; smm is not the hottest field in
this struct anyway.

Otherwise looks good to me:

Reviewed-by: Xiao Guangrong
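
As an aside, here is a minimal standalone sketch of the "simple shift"
extraction that the new comment describes. Everything below is
illustrative: the toy_page_role union, the collapsed 16-bit stand-in
field and the main() harness are not kernel code; only the placement of
smm as the whole top byte of the 32-bit role word comes from the patch.

/* Illustrative only; bit-field order assumes GCC on x86 (LSB first). */
#include <assert.h>

union toy_page_role {
	unsigned int word;
	struct {
		unsigned int other_bits:16;	/* stands in for level, access, ... */
		unsigned int :8;		/* explicit padding, as in the patch */
		unsigned int smm:8;		/* whole top byte of the word */
	};
};

int main(void)
{
	union toy_page_role role = { .word = 0 };

	role.other_bits = 0x5a5a;	/* unrelated role bits must not leak in */
	role.smm = 1;			/* SMM address space */

	/* One shift recovers the address-space index; no masking needed. */
	assert((role.word >> 24) == 1);
	return 0;
}

Because smm fills bits 24-31 with nothing above it, the compiler can
fetch it with a single shift of the raw word (or a plain byte load)
without masking off the neighbouring role bits, which is what keeps
kvm_memslots_for_spte_role cheap.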