2015-06-09 04:05:35

by Xiao Guangrong

Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space



On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
> This is now very simple to do. The only interesting part is a simple
> trick to find the right memslot in gfn_to_rmap, retrieving the address
> space from the spte role word. The same trick is used in the auditing
> code.
>
> The comment on top of union kvm_mmu_page_role has been stale forever,

Fortunately, we have documented these fields in mmu.txt; please document
'smm' as well. :)

> so remove it. Speaking of stale code, remove pad_for_nice_hex_output
> too: it was splitting the "access" bitfield across two bytes and thus
> had effectively turned into pad_for_ugly_hex_output.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> v1->v2: new
>
>  arch/x86/include/asm/kvm_host.h | 26 +++++++++++++++-----------
>  arch/x86/kvm/mmu.c              | 15 ++++++++++++---
>  arch/x86/kvm/mmu_audit.c        | 10 +++++++---
>  arch/x86/kvm/x86.c              |  2 ++
>  4 files changed, 36 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5a5e13af6e03..47006683f2fe 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -184,23 +184,12 @@ struct kvm_mmu_memory_cache {
>  	void *objects[KVM_NR_MEM_OBJS];
>  };
>
> -/*
> - * kvm_mmu_page_role, below, is defined as:
> - *
> - * bits 0:3 - total guest paging levels (2-4, or zero for real mode)
> - * bits 4:7 - page table level for this shadow (1-4)
> - * bits 8:9 - page table quadrant for 2-level guests
> - * bit 16 - direct mapping of virtual to physical mapping at gfn
> - *          used for real mode and two-dimensional paging
> - * bits 17:19 - common access permissions for all ptes in this shadow page
> - */
>  union kvm_mmu_page_role {
>  	unsigned word;
>  	struct {
>  		unsigned level:4;
>  		unsigned cr4_pae:1;
>  		unsigned quadrant:2;
> -		unsigned pad_for_nice_hex_output:6;
>  		unsigned direct:1;
>  		unsigned access:3;
>  		unsigned invalid:1;
> @@ -208,6 +197,15 @@ union kvm_mmu_page_role {
>  		unsigned cr0_wp:1;
>  		unsigned smep_andnot_wp:1;
>  		unsigned smap_andnot_wp:1;
> +		unsigned :8;
> +
> +		/*
> +		 * This is left at the top of the word so that
> +		 * kvm_memslots_for_spte_role can extract it with a
> +		 * simple shift. While there is room, give it a whole
> +		 * byte so it is also faster to load it from memory.
> +		 */
> +		unsigned smm:8;

I doubt we really need this trick; smm is not the hottest field in this
struct anyway.
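
For reference, the trick boils down to something like this (a sketch modeled
on the quoted comment; the helper in the applied patch may simply read
role.smm and let the compiler emit the shift):

static inline struct kvm_memslots *
kvm_memslots_for_spte_role(struct kvm *kvm, union kvm_mmu_page_role role)
{
	/* smm is the top byte of the 32-bit role word, so the address
	 * space index (0 = normal, 1 = SMM) is a single shift away. */
	return __kvm_memslots(kvm, role.word >> 24);
}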

Otherwise looks good to me:
Reviewed-by: Xiao Guangrong <[email protected]>


2015-06-17 08:18:35

by Paolo Bonzini

Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space



On 09/06/2015 06:01, Xiao Guangrong wrote:
>
>
> On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
>> This is now very simple to do. The only interesting part is a simple
>> trick to find the right memslot in gfn_to_rmap, retrieving the address
>> space from the spte role word. The same trick is used in the auditing
>> code.
>>
>> The comment on top of union kvm_mmu_page_role has been stale forever,
>
> Fortunately, we have documented these fields in mmu.txt; please document
> 'smm' as well. :)

Right, done.

>> +		/*
>> +		 * This is left at the top of the word so that
>> +		 * kvm_memslots_for_spte_role can extract it with a
>> +		 * simple shift. While there is room, give it a whole
>> +		 * byte so it is also faster to load it from memory.
>> +		 */
>> +		unsigned smm:8;
>
> I doubt we really need this trick; smm is not the hottest field in
> this struct anyway.

Note that after these patches it is used by gfn_to_rmap, and hence, for
example, by rmap_add.
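
Roughly along these lines (a sketch assuming the helpers introduced earlier
in this series; the exact hunk may differ):

static unsigned long *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
				  struct kvm_mmu_page *sp)
{
	struct kvm_memslots *slots;
	struct kvm_memory_slot *slot;

	/* sp->role.smm selects the SMM vs. normal address space */
	slots = kvm_memslots_for_spte_role(kvm, sp->role);
	slot = __gfn_to_memslot(slots, gfn);
	return __gfn_to_rmap(gfn, sp->role.level, slot);
}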

Paolo

2015-06-18 05:06:41

by Xiao Guangrong

Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space



On 06/17/2015 04:18 PM, Paolo Bonzini wrote:
>
>
> On 09/06/2015 06:01, Xiao Guangrong wrote:
>>
>>
>> On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
>>> This is now very simple to do. The only interesting part is a simple
>>> trick to find the right memslot in gfn_to_rmap, retrieving the address
>>> space from the spte role word. The same trick is used in the auditing
>>> code.
>>>
>>> The comment on top of union kvm_mmu_page_role has been stale forever,
>>
>> Fortunately, we have documented these fields in mmu.txt; please document
>> 'smm' as well. :)
>
> Right, done.
>
>>> +		/*
>>> +		 * This is left at the top of the word so that
>>> +		 * kvm_memslots_for_spte_role can extract it with a
>>> +		 * simple shift. While there is room, give it a whole
>>> +		 * byte so it is also faster to load it from memory.
>>> +		 */
>>> +		unsigned smm:8;
>>
>> I doubt we really need this trick; smm is not the hottest field in
>> this struct anyway.
>
> Note that after these patches it is used by gfn_to_rmap, and hence for
> example rmap_add.

However, role->level is hotter than role->smm, so it is also a good
candidate for this kind of trick.

And the whole role is only 32 bits, so it can be brought into a CPU register
with a single memory load; that is why I wondered whether this trick is
really needed.
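
To make that concrete with a hypothetical helper (mmu_role_equal is
illustrative, not from the patch):

/* The whole role fits in one 32-bit word, so comparing two roles is
 * a single load-and-compare regardless of the bitfield layout. */
static inline bool mmu_role_equal(union kvm_mmu_page_role a,
				  union kvm_mmu_page_role b)
{
	return a.word == b.word;
}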

2015-06-18 07:02:55

by Paolo Bonzini

Subject: Re: [PATCH v2 12/13] KVM: x86: add SMM to the MMU role, support SMRAM address space



On 18/06/2015 07:02, Xiao Guangrong wrote:
> However, role->level is hotter than role->smm, so it is also a good
> candidate for this kind of trick.

Right, we could give the first 8 bits to role->level, so it can be
accessed with a single memory load and extracted with a single AND.
Those two are definitely the hottest fields.
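
That alternative layout would make the extraction look something like this
(hypothetical; not what the applied patch does):

/* With level in bits 0-7 of the role word, extracting it needs a
 * single AND and no shift at all. */
static inline unsigned mmu_role_level(unsigned role_word)
{
	return role_word & 0xff;
}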

> And the whole role is only 32 bits, so it can be brought into a CPU
> register with a single memory load; that is why I wondered whether this
> trick is really needed.

However, an 8-bit field can be loaded from memory with a single movz
instruction.
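
For example (hypothetical accessor; the exact code generation depends on
the compiler and ABI):

/* A byte-aligned 8-bit field is typically read with one
 * zero-extending byte load (movzbl on x86); a field packed mid-word
 * would also need a shift and/or mask after the load. */
static inline unsigned role_smm(const union kvm_mmu_page_role *role)
{
	return role->smm;
}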

Paolo