On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform the same IPIs as the guest would have.
>>
>
> But if the VCPU is asleep, doing it via the hypervisor will save us waking
> up the guest VCPU and sending an IPI just to do a TLB flush
> of that CPU, which is pointless as that CPU wasn't running the
> guest in the first place.
>
>>
>> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
>> guest's address on a remote CPU (when, for example, a VCPU from
>> another guest is running there).
>
> Right, so the hypervisor won't even send an IPI there.
>
> But if you do it via the normal guest IPI mechanism (which is opaque
> to the hypervisor) you end up scheduling the guest VCPU just to
> deliver a hypervisor callback. And the callback will go to the IPI
> routine, which will do a TLB flush. Not necessary.
>
> This is all in the case of oversubscription, of course. In the case where
> we are fine on vCPU resources it does not matter.
>
> Perhaps if we had a PV-aware TLB flush it could do this differently?
Why doesn't HVM/PVH just use the HVMOP_flush_tlbs hypercall?
Roger.
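
For reference, a minimal sketch of what that suggestion would look like from
the guest side, assuming Linux's HYPERVISOR_hvm_op() wrapper; the helper name
xen_hvm_flush_tlb_all() is made up for illustration. Note that the hypercall
takes no argument structure, which is the limitation pointed out below:

#include <linux/printk.h>
#include <xen/interface/hvm/hvm_op.h>
#include <asm/xen/hypercall.h>

/* Ask the hypervisor to flush the TLBs of every vCPU of this domain. */
static void xen_hvm_flush_tlb_all(void)
{
	/* HVMOP_flush_tlbs takes no argument, hence the NULL. */
	if (HYPERVISOR_hvm_op(HVMOP_flush_tlbs, NULL))
		pr_warn_once("HVMOP_flush_tlbs failed\n");
}
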
On 12/14/2015 10:35 AM, Roger Pau Monné wrote:
> On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>> will likely perform the same IPIs as the guest would have.
>>>
>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>> up the guest VCPU and sending an IPI just to do a TLB flush
>> of that CPU, which is pointless as that CPU wasn't running the
>> guest in the first place.
OK, then I mis-read the hypervisor code; I didn't realize that
vcpumask_to_pcpumask() takes vcpu_dirty_cpumask into account.
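
(A simplified sketch of what that translation does follows; the function name
vcpumask_to_pcpumask_sketch() and the flattened bitmap handling are made up
here for brevity, and the real vcpumask_to_pcpumask() in xen/arch/x86/mm.c
also copies the vCPU bitmap in from guest memory and handles 32-bit guests.
The key part is the OR with vcpu_dirty_cpumask: a vCPU with no dirty state
on any pCPU contributes nothing, so no IPI is sent on its behalf.)

#include <xen/sched.h>
#include <xen/cpumask.h>
#include <xen/bitops.h>

static void vcpumask_to_pcpumask_sketch(struct domain *d,
                                        const unsigned long *vmask,
                                        cpumask_t *pmask)
{
    unsigned int vcpu_id;

    cpumask_clear(pmask);
    for ( vcpu_id = 0; vcpu_id < d->max_vcpus; vcpu_id++ )
    {
        struct vcpu *v;

        if ( !test_bit(vcpu_id, vmask) )
            continue;

        v = d->vcpu[vcpu_id];
        /* Only pCPUs that may still hold this vCPU's state need flushing. */
        if ( v != NULL )
            cpumask_or(pmask, pmask, v->vcpu_dirty_cpumask);
    }
}
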
>>
>>> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
>>> guest's address on a remote CPU (when, for example, a VCPU from
>>> another guest is running there).
>> Right, so the hypervisor won't even send an IPI there.
>>
>> But if you do it via the normal guest IPI mechanism (which is opaque
>> to the hypervisor) you end up scheduling the guest VCPU just to
>> deliver a hypervisor callback. And the callback will go to the IPI
>> routine, which will do a TLB flush. Not necessary.
>>
>> This is all in the case of oversubscription, of course. In the case where
>> we are fine on vCPU resources it does not matter.
>>
>> Perhaps if we had a PV-aware TLB flush it could do this differently?
> Why doesn't HVM/PVH just use the HVMOP_flush_tlbs hypercall?
It doesn't take any parameters, so it will invalidate the TLBs of all VCPUs,
which is more than is being asked for, especially in the case of
MMUEXT_INVLPG_MULTI.
(That's in addition to the fact that it currently doesn't work for PVH
as it has a test for is_hvm_domain() instead of has_hvm_container_domain()).
-boris
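
(For contrast, a rough guest-side sketch of the op being discussed, with the
helper name xen_invlpg_multi_sketch() made up for illustration and the
multicall batching that Linux's xen_flush_tlb_others() uses omitted.
MMUEXT_INVLPG_MULTI carries both a linear address and a vCPU bitmap, so the
flush can be limited to exactly the address and vCPUs that need it, which a
parameterless HVMOP_flush_tlbs cannot do.)

#include <linux/cpumask.h>
#include <xen/interface/xen.h>
#include <asm/xen/hypercall.h>

/* Flush one linear address on the given CPUs (vCPU numbers == CPU numbers). */
static void xen_invlpg_multi_sketch(struct cpumask *cpus, unsigned long addr)
{
	struct mmuext_op op = {
		.cmd              = MMUEXT_INVLPG_MULTI,
		.arg1.linear_addr = addr,
		.arg2.vcpumask    = cpus,
	};

	HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF);
}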