On Wed, 2017-11-15 at 12:33 +0800, Wanpeng Li wrote:
> 2017-11-15 11:03 GMT+08:00 Rik van Riel <[email protected]>:
> > On Wed, 2017-11-15 at 08:47 +0800, Wanpeng Li wrote:
> > > 2017-11-15 5:54 GMT+08:00 <[email protected]>:
> > > > From: Rik van Riel <[email protected]>
> > > >
> > > > Currently, every time a VCPU is scheduled out, the host kernel
> > > > will first save the guest FPU/xstate context, then load the
> > > > qemu userspace FPU context, only to then immediately save the
> > > > qemu userspace FPU context back to memory. When scheduling in
> > > > a VCPU, the same extraneous FPU loads and saves are done.
> > > >
> > > > This could be avoided by moving from a model where the guest
> > > > FPU is loaded and stored with preemption disabled, to a model
> > > > where the qemu userspace FPU is swapped out for the guest FPU
> > > > context for the duration of the KVM_RUN ioctl.
> > >
> > > What will happen if CONFIG_PREEMPT is enabled?
> >
> > The scheduler will save the guest FPU context when a
> > VCPU thread is preempted, and restore it when it is
> > scheduled back in.
>
> I mean that all the processes involved use the FPU. Before the
> patch, if a kernel preemption occurs:
>
> context_switch
>   -> prepare_task_switch
>     -> fire_sched_out_preempt_notifiers
>       -> kvm_sched_out
>         -> kvm_arch_vcpu_put
>           -> kvm_put_guest_fpu
>             -> copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu)
>                  save xsave area to guest fpu buffer
>             -> __kernel_fpu_end
>               -> copy_kernel_to_fpregs(&current->thread.fpu.state)
>                    restore prev vCPU qemu userspace FPU to the
>                    xsave area
>   -> switch_to
>     -> __switch_to
>       -> switch_fpu_prepare
>         -> copy_fpregs_to_fpstate => save xsave area to prev vCPU
>              qemu userspace FPU
>       -> switch_fpu_finish
>         -> copy_kernel_to_fpregs => restore next task FPU to
>              xsave area
>
>
> After the patch:
>
> context_switch
>   -> prepare_task_switch
>     -> fire_sched_out_preempt_notifiers
>       -> kvm_sched_out
>   -> switch_to
>     -> __switch_to
>       -> switch_fpu_prepare
>         -> copy_fpregs_to_fpstate => Oops
>              this saves the xsave area into the prev vCPU's qemu
>              userspace FPU, but what is actually loaded in the xsave
>              area is the guest FPU, so the guest FPU state is copied
>              into the prev vCPU's qemu userspace FPU buffer

When entering kvm_arch_vcpu_ioctl_run we save the qemu userspace
FPU context in &vcpu->arch.user_fpu, and we restore that before
leaving kvm_arch_vcpu_ioctl_run.
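
Something like this is what I have in mind (just a minimal sketch; the
field and helper names follow the patch, and details like PKRU
handling are glossed over):

/* On entry to KVM_RUN: swap the qemu userspace FPU out for the guest's. */
static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
{
        preempt_disable();
        /* Stash the qemu userspace FPU state in its own buffer... */
        copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
        /* ...and load the guest FPU state into the registers. */
        copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state);
        preempt_enable();
}

/* On the way out of KVM_RUN: the reverse swap. */
static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
{
        preempt_disable();
        /* Save the guest FPU state into the guest buffer... */
        copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
        /* ...and put the qemu userspace FPU state back. */
        copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
        preempt_enable();
}

If I read the context switch code right, a preemption in the middle
of KVM_RUN then simply treats the guest FPU state as the task FPU
state: switch_fpu_prepare saves it into current->thread.fpu on the
way out, and it is loaded back from there on the way in, so the guest
state is intact by the time we swap user_fpu back in at the end of
the ioctl.
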
Userspace should always see the userspace FPU context, no?
Am I overlooking anything?