2019-07-22 12:49:36

by Paolo Bonzini

Subject: [PATCH] Revert "kvm: x86: Use task structs fpu field for user"

This reverts commit 240c35a3783ab9b3a0afaba0dde7291295680a6b
("kvm: x86: Use task structs fpu field for user", 2018-11-06).
The commit is broken and causes QEMU's FPU state to be destroyed
when KVM_RUN is preempted.

Fixes: 240c35a3783a ("kvm: x86: Use task structs fpu field for user")
Cc: [email protected]
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 7 ++++---
arch/x86/kvm/x86.c | 4 ++--
2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0cc5b611a113..b2f1ffb937af 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -607,15 +607,16 @@ struct kvm_vcpu_arch {

/*
* QEMU userspace and the guest each have their own FPU state.
- * In vcpu_run, we switch between the user, maintained in the
- * task_struct struct, and guest FPU contexts. While running a VCPU,
- * the VCPU thread will have the guest FPU context.
+ * In vcpu_run, we switch between the user and guest FPU contexts.
+ * While running a VCPU, the VCPU thread will have the guest FPU
+ * context.
*
* Note that while the PKRU state lives inside the fpu registers,
* it is switched out separately at VMENTER and VMEXIT time. The
* "guest_fpu" state here contains the guest FPU context, with the
* host PRKU bits.
*/
+ struct fpu user_fpu;
struct fpu *guest_fpu;

u64 xcr0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 58305cf81182..cf2afdf8facf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8270,7 +8270,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
{
fpregs_lock();

- copy_fpregs_to_fpstate(&current->thread.fpu);
+ copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
/* PKRU is separately restored in kvm_x86_ops->run. */
__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
~XFEATURE_MASK_PKRU);
@@ -8287,7 +8287,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
fpregs_lock();

copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
- copy_kernel_to_fpregs(&current->thread.fpu.state);
+ copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);

fpregs_mark_activate();
fpregs_unlock();
--
1.8.3.1
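
For readers unfamiliar with the failure mode: after 240c35a3783a,
kvm_load_guest_fpu() saved QEMU's FPU registers into current->thread.fpu
instead of a dedicated per-vCPU buffer. That is the same buffer the
context-switch path uses to save whatever is live in the registers, so
once the guest FPU is loaded, a preemption point overwrites the parked
userspace state with guest contents, and kvm_put_guest_fpu() later hands
guest values back to QEMU. The sketch below illustrates only that
clobbering; it is not kernel code, and the buffers and the schedule_out()
helper are hypothetical stand-ins for current->thread.fpu,
vcpu->arch.user_fpu and the kernel's context-switch FPU save path.

/* Illustrative sketch only, not kernel code. */
#include <stdio.h>
#include <string.h>

struct fpu_state { char regs[16]; };

static struct fpu_state hw_regs;   /* stands in for the live FPU registers */
static struct fpu_state task_fpu;  /* buffer 240c35a3783a reused for QEMU state */
static struct fpu_state user_fpu;  /* dedicated buffer restored by this revert */

static void schedule_out(void)
{
	/* On preemption the kernel saves whatever is currently live in
	 * the registers into the task's own buffer, i.e. task_fpu. */
	task_fpu = hw_regs;
}

int main(void)
{
	/* Broken flow: QEMU state parked in task_fpu, then the guest loaded. */
	strcpy(hw_regs.regs, "qemu");
	task_fpu = hw_regs;
	strcpy(hw_regs.regs, "guest");
	schedule_out();                 /* preemption overwrites task_fpu */
	printf("task_fpu now holds \"%s\" -- QEMU's state is gone\n",
	       task_fpu.regs);

	/* Reverted flow: QEMU state parked in a buffer the scheduler
	 * never touches. */
	strcpy(hw_regs.regs, "qemu");
	user_fpu = hw_regs;
	strcpy(hw_regs.regs, "guest");
	schedule_out();
	printf("user_fpu still holds \"%s\"\n", user_fpu.regs);
	return 0;
}

The same reasoning applies on the restore side: kvm_put_guest_fpu() must
reload userspace state from a buffer that preemption cannot have
overwritten, which is exactly what the revert's vcpu->arch.user_fpu
provides.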


2019-07-23 08:43:51

by Wanpeng Li

Subject: Re: [PATCH] Revert "kvm: x86: Use task structs fpu field for user"

On Mon, 22 Jul 2019 at 20:49, Paolo Bonzini <[email protected]> wrote:
>
> This reverts commit 240c35a3783ab9b3a0afaba0dde7291295680a6b
> ("kvm: x86: Use task structs fpu field for user", 2018-11-06).
> The commit is broken and causes QEMU's FPU state to be destroyed
> when KVM_RUN is preempted.
>
> Fixes: 240c35a3783a ("kvm: x86: Use task structs fpu field for user")
> Cc: [email protected]
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 7 ++++---
> arch/x86/kvm/x86.c | 4 ++--
> 2 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 0cc5b611a113..b2f1ffb937af 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -607,15 +607,16 @@ struct kvm_vcpu_arch {
>
> /*
> * QEMU userspace and the guest each have their own FPU state.
> - * In vcpu_run, we switch between the user, maintained in the
> - * task_struct struct, and guest FPU contexts. While running a VCPU,
> - * the VCPU thread will have the guest FPU context.
> + * In vcpu_run, we switch between the user and guest FPU contexts.
> + * While running a VCPU, the VCPU thread will have the guest FPU
> + * context.
> *
> * Note that while the PKRU state lives inside the fpu registers,
> * it is switched out separately at VMENTER and VMEXIT time. The
> * "guest_fpu" state here contains the guest FPU context, with the
> * host PRKU bits.
> */
> + struct fpu user_fpu;
> struct fpu *guest_fpu;
>
> u64 xcr0;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 58305cf81182..cf2afdf8facf 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8270,7 +8270,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
> {
> fpregs_lock();
>
> - copy_fpregs_to_fpstate(&current->thread.fpu);
> + copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
> /* PKRU is separately restored in kvm_x86_ops->run. */
> __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
> ~XFEATURE_MASK_PKRU);
> @@ -8287,7 +8287,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
> fpregs_lock();
>
> copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
> - copy_kernel_to_fpregs(&current->thread.fpu.state);
> + copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
>
> fpregs_mark_activate();
> fpregs_unlock();

Looks good to me.

Regards,
Wanpeng Li

2019-07-23 09:40:47

by Sasha Levin

Subject: Re: [PATCH] Revert "kvm: x86: Use task structs fpu field for user"

Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 240c35a3783a ("kvm: x86: Use task structs fpu field for user").

The bot has tested the following trees: v5.2.2, v5.1.19.

v5.2.2: Build OK!
v5.1.19: Failed to apply! Possible dependencies:
0cecca9d03c9 ("x86/fpu: Eager switch PKRU state")
2722146eb784 ("x86/fpu: Remove fpu->initialized")
4ee91519e1dc ("x86/fpu: Add an __fpregs_load_activate() internal helper")
5f409e20b794 ("x86/fpu: Defer FPU state load until return to userspace")


NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

--
Thanks,
Sasha

2019-07-23 14:51:20

by Paolo Bonzini

Subject: Re: [PATCH] Revert "kvm: x86: Use task structs fpu field for user"

On 23/07/19 06:31, Sasha Levin wrote:
>
> v5.2.2: Build OK!
> v5.1.19: Failed to apply! Possible dependencies:
> 0cecca9d03c9 ("x86/fpu: Eager switch PKRU state")
> 2722146eb784 ("x86/fpu: Remove fpu->initialized")
> 4ee91519e1dc ("x86/fpu: Add an __fpregs_load_activate() internal helper")
> 5f409e20b794 ("x86/fpu: Defer FPU state load until return to userspace")
>
>
> NOTE: The patch will not be queued to stable trees until it is upstream.
>
> How should we proceed with this patch?

I have 5-6 pending stable patches and will send backports for all of
them.

Paolo