2017-04-11 09:49:31

by Wanpeng Li

Subject: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers

From: Wanpeng Li <[email protected]>

The virt_xxx memory barriers are implemented trivially using the low-level
__smp_xxx macros; on a strongly ordered TSO memory model such as x86,
__smp_xxx reduces to a compiler barrier. Mandatory barriers, by contrast,
unconditionally emit hardware memory barriers. This patch replaces the
rmb() calls in kvm_steal_clock() with virt_rmb().

Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kernel/kvm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 14f65a5..da5c097 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
src = &per_cpu(steal_time, cpu);
do {
version = src->version;
- rmb();
+ virt_rmb();
steal = src->steal;
- rmb();
+ virt_rmb();
} while ((version & 1) || (version != src->version));

return steal;
--
2.7.4
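The retry loop in the patched kvm_steal_clock() follows a seqcount-style read pattern: an odd version means the host is mid-update, and a version change across the reads means the payload may be torn. A minimal userspace sketch of that pattern is below; `struct steal_time`, `read_steal_clock`, and the `virt_rmb()` macro here are simplified stand-ins for the kernel definitions, with virt_rmb() modeled as the pure compiler barrier it becomes on x86 TSO (where rmb() would instead emit an lfence).

```c
#include <stdint.h>

/* Simplified stand-in for the guest/host shared steal-time record. */
struct steal_time {
    volatile uint32_t version; /* odd while the host is updating */
    volatile uint64_t steal;
};

/* On x86 (strong TSO ordering), virt_rmb() is just a compiler barrier;
 * a mandatory rmb() would unconditionally emit an lfence. */
#define virt_rmb() __asm__ __volatile__("" ::: "memory")

static uint64_t read_steal_clock(struct steal_time *src)
{
    uint32_t version;
    uint64_t steal;

    do {
        version = src->version; /* snapshot before reading the payload */
        virt_rmb();             /* order version read before steal read */
        steal = src->steal;
        virt_rmb();             /* order steal read before the re-check */
    } while ((version & 1) || (version != src->version));

    return steal;
}
```

If either check fails, the loop simply retries until it observes a stable, even version on both sides of the payload read.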


2017-04-11 14:20:22

by Paolo Bonzini

Subject: Re: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers



----- Original Message -----
> From: "Wanpeng Li" <[email protected]>
> To: [email protected], [email protected]
> Cc: "Paolo Bonzini" <[email protected]>, "Radim Krčmář" <[email protected]>, "Wanpeng Li" <[email protected]>
> Sent: Tuesday, April 11, 2017 5:49:21 PM
> Subject: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers
>
> From: Wanpeng Li <[email protected]>
>
> The virt_xxx memory barriers are implemented trivially using the low-level
> __smp_xxx macros; on a strongly ordered TSO memory model such as x86,
> __smp_xxx reduces to a compiler barrier. Mandatory barriers, by contrast,
> unconditionally emit hardware memory barriers. This patch replaces the
> rmb() calls in kvm_steal_clock() with virt_rmb().
>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krčmář <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> arch/x86/kernel/kvm.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 14f65a5..da5c097 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
> src = &per_cpu(steal_time, cpu);
> do {
> version = src->version;
> - rmb();
> + virt_rmb();
> steal = src->steal;
> - rmb();
> + virt_rmb();
> } while ((version & 1) || (version != src->version));
>
> return steal;
> --
> 2.7.4

Reviewed-by: Paolo Bonzini <[email protected]>

2017-04-12 19:04:43

by Radim Krčmář

Subject: Re: [PATCH] x86/kvm: virt_xxx memory barriers instead of mandatory barriers

2017-04-11 02:49-0700, Wanpeng Li:
> From: Wanpeng Li <[email protected]>
>
> The virt_xxx memory barriers are implemented trivially using the low-level
> __smp_xxx macros; on a strongly ordered TSO memory model such as x86,
> __smp_xxx reduces to a compiler barrier. Mandatory barriers, by contrast,
> unconditionally emit hardware memory barriers. This patch replaces the
> rmb() calls in kvm_steal_clock() with virt_rmb().
>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krčmář <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---

Applied to kvm/queue, thanks.