2010-04-10 19:27:50

by Avi Kivity

Subject: Re: VM performance issue in KVM guests.

(copying lkml and some scheduler folk)

On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:
> Hi, all
> We are working on scalability for KVM guests, and we found a significant issue in the Linux scheduler that can hurt a guest's performance and scalability for some workloads running in a VM. The current Linux scheduler has several features, defined in kvm.git/kernel/sched_features.h, intended to improve application performance. They are mostly beneficial optimizations, but unfortunately some of them can hurt a VM's performance and scalability in the KVM case.
> We know that if two or more vcpus of one guest are scheduled onto the same logical processor, the same CPU utilization produces less useful work than when they run on different logical processors, because of lock contention inside the guest OS. We also know that with KVM a VM's vcpus are executed as qemu threads. If the qemu vcpu threads are often pulled onto the same logical processor by some Linux scheduler features, KVM guest performance can suffer badly. Our performance testing shows exactly this bottleneck.

What was the performance hit? What was your I/O setup (image format,
using aio?)

> After analyzing the Linux scheduler, we found the problem is indeed caused by known scheduler features such as AFFINE_WAKEUPS and SYNC_WAKEUPS. With these features on, the Linux scheduler often tries to schedule the vcpu threads of one guest onto the same logical processor when vcpus are over-committed and the logical processors are saturated. Once the vcpu threads of one VM are scheduled onto the same LP, system performance drops dramatically with some workloads (like webbench running in a Windows guest).
>

Were the affine wakeups due to the kernel (emulated guest IPIs) or qemu?

> To verify this finding, we worked out a simple patch, attached to this mail, that dynamically switches off the two scheduler features mentioned above when the scheduler knows the tasks being scheduled are vcpu threads, and we found the whole system's performance and scalability improve a lot. Certainly this patch is not suitable for upstream, but it may help us think about how to optimize the Linux scheduler, and we want to start a discussion about how to make it more friendly to virtualization. Besides, this is probably not a KVM-specific issue; it should be common to host-based VMs, and we hope for an elegant solution that closes the performance and scalability gap compared with hypervisor-based VMs.
>

Most likely it hits non-virtualized loads as well. If the
scheduler pulls two long-running threads to the same cpu, performance
will take a hit.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


2010-04-12 02:06:12

by Zhang, Xiantao

Subject: RE: VM performance issue in KVM guests.

Avi Kivity wrote:
> (copying lkml and some scheduler folk)
>
> On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:
>> Hi, all
>> We are working on scalability for KVM guests, and we found a
>> significant issue in the Linux scheduler that can hurt a guest's
>> performance and scalability for some workloads running in a VM. The
>> current Linux scheduler has several features, defined in
>> kvm.git/kernel/sched_features.h, intended to improve application
>> performance. They are mostly beneficial optimizations, but
>> unfortunately some of them can hurt a VM's performance and
>> scalability in the KVM case. We know that if two or more vcpus of
>> one guest are scheduled onto the same logical processor, the same
>> CPU utilization produces less useful work than when they run on
>> different logical processors, because of lock contention inside the
>> guest OS. We also know that with KVM a VM's vcpus are executed as
>> qemu threads. If the qemu vcpu threads are often pulled onto the
>> same logical processor by some Linux scheduler features, KVM guest
>> performance can suffer badly. Our performance testing shows exactly
>> this bottleneck.
>
> What was the performance hit? What was your I/O setup (image format,
> using aio?)

The issue only happens when the vcpus are over-committed (e.g. vcpu/pcpu > 2) and the physical cpus are saturated. For example, when webbench is run in a Windows guest under these conditions, its performance drops by 80%. In our experiment we are using an image file through virtio, and I think aio should be in use by default as well.


>> After analyzing the Linux scheduler, we found the problem is indeed
>> caused by known scheduler features such as AFFINE_WAKEUPS and
>> SYNC_WAKEUPS. With these features on, the Linux scheduler often
>> tries to schedule the vcpu threads of one guest onto the same
>> logical processor when vcpus are over-committed and the logical
>> processors are saturated. Once the vcpu threads of one VM are
>> scheduled onto the same LP, system performance drops dramatically
>> with some workloads (like webbench running in a Windows guest).
>>
>
> Were the affine wakeups due to the kernel (emulated guest IPIs) or
> qemu?

We have two basic guesses about the reason: one is wakeup affinity between vcpu threads due to IPIs, and the other is wakeup affinity between I/O threads and vcpu threads.

>> To verify this finding, we worked out a simple patch, attached to
>> this mail, that dynamically switches off the two scheduler features
>> mentioned above when the scheduler knows the tasks being scheduled
>> are vcpu threads, and we found the whole system's performance and
>> scalability improve a lot. Certainly this patch is not suitable for
>> upstream, but it may help us think about how to optimize the Linux
>> scheduler, and we want to start a discussion about how to make it
>> more friendly to virtualization. Besides, this is probably not a
>> KVM-specific issue; it should be common to host-based VMs, and we
>> hope for an elegant solution that closes the performance and
>> scalability gap compared with hypervisor-based VMs.
>>
>
> Most likely it hits non-virtualized loads as well. If the
> scheduler pulls two long-running threads to the same cpu, performance
> will take a hit.

The hit only happens when the physical cpus are saturated. Scheduling multiple non-virtualized threads of one process onto the same processor can benefit performance thanks to cache sharing and other affinities, but scheduling two vcpu threads onto the same processor hurts performance a lot because of spinlock contention inside the guest.
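
To illustrate the effect with a rough user-space analogy (only a sketch, not our actual test case; guest spinlocks and lock-holder preemption behave somewhat differently), the following program runs two threads contending on a pthread spinlock and lets you pin them either to the same CPU or to two different CPUs and compare the totals:

/* spin_demo.c: crude analogy of lock-holder preemption.
 * Build: gcc -O2 -pthread spin_demo.c -o spin_demo
 * Usage: ./spin_demo <cpu-for-t0> <cpu-for-t1>
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_spinlock_t lock;
static volatile int stop;
static long counts[2];

static void *worker(void *arg)
{
        long id = (long)arg;
        volatile int j;

        while (!stop) {
                pthread_spin_lock(&lock);
                /* Hold the lock for a while so being preempted while
                 * holding it becomes likely, as in a busy guest kernel. */
                for (j = 0; j < 20000; j++)
                        ;
                counts[id]++;
                pthread_spin_unlock(&lock);
        }
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t t[2];
        cpu_set_t set;
        long i;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <cpu-for-t0> <cpu-for-t1>\n", argv[0]);
                return 1;
        }

        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);

        for (i = 0; i < 2; i++) {
                pthread_create(&t[i], NULL, worker, (void *)i);
                /* Pin each thread to the CPU given on the command line. */
                CPU_ZERO(&set);
                CPU_SET(atoi(argv[i + 1]), &set);
                pthread_setaffinity_np(t[i], sizeof(set), &set);
        }

        sleep(5);
        stop = 1;
        for (i = 0; i < 2; i++)
                pthread_join(t[i], NULL);

        printf("iterations: t0=%ld t1=%ld total=%ld\n",
               counts[0], counts[1], counts[0] + counts[1]);
        return 0;
}

Comparing "./spin_demo 0 0" with "./spin_demo 0 1" shows how much progress is lost once the lock holder can be preempted by the thread spinning on it.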
Xiantao

2010-04-12 06:41:19

by Avi Kivity

Subject: Re: VM performance issue in KVM guests.

On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
>
>> What was the performance hit? What was your I/O setup (image format,
>> using aio?)
>>
> The issue only happens when the vcpus are over-committed (e.g. vcpu/pcpu > 2) and the physical cpus are saturated. For example, when webbench is run in a Windows guest under these conditions, its performance drops by 80%. In our experiment we are using an image file through virtio, and I think aio should be in use by default as well.
>

Is this on a machine that does pause-loop exits? The current handling of
PLE is very suboptimal. With proper directed yield we should be much
better there.

Without PLE, we need paravirtualized spinlocks, no way around it.

>>> After analyzing the Linux scheduler, we found the problem is indeed
>>> caused by known scheduler features such as AFFINE_WAKEUPS and
>>> SYNC_WAKEUPS. With these features on, the Linux scheduler often
>>> tries to schedule the vcpu threads of one guest onto the same
>>> logical processor when vcpus are over-committed and the logical
>>> processors are saturated. Once the vcpu threads of one VM are
>>> scheduled onto the same LP, system performance drops dramatically
>>> with some workloads (like webbench running in a Windows guest).
>>>
>>>
>> Were the affine wakeups due to the kernel (emulated guest IPIs) or
>> qemu?
>>
> We have two basic guesses about the reason: one is wakeup affinity between vcpu threads due to IPIs, and the other is wakeup affinity between I/O threads and vcpu threads.
>

It would be good to find out.

>> Most likely it hits non-virtualized loads as well. If the
>> scheduler pulls two long-running threads to the same cpu, performance
>> will take a hit.
>>
> The hit only happens when the physical cpus are saturated. Scheduling multiple non-virtualized threads of one process onto the same processor can benefit performance thanks to cache sharing and other affinities, but scheduling two vcpu threads onto the same processor hurts performance a lot because of spinlock contention inside the guest.
>

Spin loops need to be addressed first, they are known to kill
performance in overcommit situations.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

2010-04-13 00:50:28

by Zhang, Xiantao

Subject: RE: VM performance issue in KVM guests.

Avi Kivity wrote:
> On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
>>
>>> What was the performance hit? What was your I/O setup (image
>>> format, using aio?)
>>>
>> The issue only happens when the vcpus are over-committed (e.g.
>> vcpu/pcpu > 2) and the physical cpus are saturated. For example,
>> when webbench is run in a Windows guest under these conditions, its
>> performance drops by 80%. In our experiment we are using an image
>> file through virtio, and I think aio should be in use by default as
>> well.
>>
>
> Is this on a machine that does pause-loop exits? The current handling
> of PLE is very suboptimal. With proper directed yield we should be
> much better there.
>
> Without PLE, we need paravirtualized spinlocks, no way around it.

PLE can mitigate the issue to some extent, and a pv solution should help as well. But for Windows guests running on machines without PLE, we still need to enhance the host side to resolve the issue.

>>>> After analyzing the Linux scheduler, we found the problem is
>>>> indeed caused by known scheduler features such as AFFINE_WAKEUPS
>>>> and SYNC_WAKEUPS. With these features on, the Linux scheduler
>>>> often tries to schedule the vcpu threads of one guest onto the
>>>> same logical processor when vcpus are over-committed and the
>>>> logical processors are saturated. Once the vcpu threads of one VM
>>>> are scheduled onto the same LP, system performance drops
>>>> dramatically with some workloads (like webbench running in a
>>>> Windows guest).
>>>
>> The hit only happens when the physical cpus are saturated.
>> Scheduling multiple non-virtualized threads of one process onto the
>> same processor can benefit performance thanks to cache sharing and
>> other affinities, but scheduling two vcpu threads onto the same
>> processor hurts performance a lot because of spinlock contention
>> inside the guest.
>>
> Spin loops need to be addressed first, they are known to kill
> performance in overcommit situations.

Even in the overcommit case, if the vcpu threads of one qemu are not scheduled or pulled onto the same logical processor, the performance drop is tolerable, as in Xen's case today. But KVM suffers an additional performance loss because the host's scheduler actively pulls these vcpu threads together.
Xiantao

2010-04-13 06:46:38

by Avi Kivity

Subject: Re: VM performance issue in KVM guests.

On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:
> Avi Kivity wrote:
>
>> On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
>>
>>>
>>>> What was the performance hit? What was your I/O setup (image
>>>> format, using aio?)
>>>>
>>>>
>>> The issue only happens when the vcpus are over-committed (e.g.
>>> vcpu/pcpu > 2) and the physical cpus are saturated. For example,
>>> when webbench is run in a Windows guest under these conditions, its
>>> performance drops by 80%. In our experiment we are using an image
>>> file through virtio, and I think aio should be in use by default as
>>> well.
>>>
>>>
>> Is this on a machine that does pause-loop exits? The current handling
>> of PLE is very suboptimal. With proper directed yield we should be
>> much better there.
>>
>> Without PLE, we need paravirtualized spinlocks, no way around it.
>>
> PLE can mitigate the issue to some extent, and a pv solution should help as well. But for Windows guests running on machines without PLE, we still need to enhance the host side to resolve the issue.
>

Well, was this on a machine with PLE or without PLE?

>> Spin loops need to be addressed first, they are known to kill
>> performance in overcommit situations.
>>
> Even in the overcommit case, if the vcpu threads of one qemu are not scheduled or pulled onto the same logical processor, the performance drop is tolerable, as in Xen's case today. But KVM suffers an additional performance loss because the host's scheduler actively pulls these vcpu threads together.
>
>

Can you quantify this loss? Give examples of what happens?


--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

2010-04-14 03:25:17

by Zhang, Xiantao

Subject: RE: VM performance issue in KVM guests.

Avi Kivity wrote:
> On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
>>>
>>>>
>>>>> What was the performance hit? What was your I/O setup (image
>>>>> format, using aio?)
>>>>>
>>>>>
>>>> The issue only happens when the vcpus are over-committed (e.g.
>>>> vcpu/pcpu > 2) and the physical cpus are saturated. For example,
>>>> when webbench is run in a Windows guest under these conditions,
>>>> its performance drops by 80%. In our experiment we are using an
>>>> image file through virtio, and I think aio should be in use by
>>>> default as well.
>>>>
>>>>
>>> Is this on a machine that does pause-loop exits? The current
>>> handling of PLE is very suboptimal. With proper directed yield we
>>> should be much better there.
>>>
>>> Without PLE, we need paravirtualized spinlocks, no way around it.
>>>
>> PLE can mitigate the issue to some extent, and a pv solution should
>> help as well. But for Windows guests running on machines without
>> PLE, we still need to enhance the host side to resolve the issue.
>>
>
> Well, was this on a machine with PLE or without PLE?

I am saying the machine has no PLE support. Even with PLE support, there is still some performance loss due to PLE's own cost.

>>> Spin loops need to be addressed first, they are known to kill
>>> performance in overcommit situations.
>>>
>> Even in the overcommit case, if the vcpu threads of one qemu are
>> not scheduled or pulled onto the same logical processor, the
>> performance drop is tolerable, as in Xen's case today. But KVM
>> suffers an additional performance loss because the host's scheduler
>> actively pulls these vcpu threads together.
>>
>>
> Can you quantify this loss? Give examples of what happens?

For example, one machine is configured with 2 pCPUs and two Windows guests run on it; each guest is configured with 2 vcpus and runs one webbench server.
With the host's default scheduler, webbench's performance is very bad, but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we see a 5-10X performance improvement at the same CPU utilization.
In addition, KVM's performance scalability is also impacted on large systems: in some performance experiments, KVM's performance begins to drop once the vCPUs are over-committed and the pCPUs are saturated, but with the wake-affine feature switched off in the scheduler, KVM's performance keeps rising in that case.
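
For reference, the kind of pinning described above can be done externally by calling sched_setaffinity() on each qemu vcpu thread's TID (the TIDs can be found with e.g. "ps -eLf"); "taskset -pc <cpu> <tid>" does the same thing. The snippet below is only a sketch of the mechanism, not our exact tooling:

/* pin_vcpu.c: pin an existing thread (e.g. a qemu vcpu thread) to one cpu.
 * Build: gcc -O2 pin_vcpu.c -o pin_vcpu
 * Usage: ./pin_vcpu <vcpu-thread-tid> <pcpu>
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        cpu_set_t set;
        pid_t tid;
        int cpu;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <vcpu-thread-tid> <pcpu>\n", argv[0]);
                return 1;
        }
        tid = atoi(argv[1]);
        cpu = atoi(argv[2]);

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        /* Bind the given thread to the given physical cpu. */
        if (sched_setaffinity(tid, sizeof(set), &set)) {
                perror("sched_setaffinity");
                return 1;
        }
        printf("pinned tid %d to pcpu %d\n", (int)tid, cpu);
        return 0;
}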
Xiantao

2010-04-14 08:14:23

by Avi Kivity

Subject: Re: VM performance issue in KVM guests.

On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:
>
>>>> Spin loops need to be addressed first, they are known to kill
>>>> performance in overcommit situations.
>>>>
>>>>
>>> Even in the overcommit case, if the vcpu threads of one qemu are
>>> not scheduled or pulled onto the same logical processor, the
>>> performance drop is tolerable, as in Xen's case today. But KVM
>>> suffers an additional performance loss because the host's scheduler
>>> actively pulls these vcpu threads together.
>>>
>>>
>>>
>> Can you quantify this loss? Give examples of what happens?
>>
> For example, one machine is configured with 2 pCPUs and two Windows guests run on it; each guest is configured with 2 vcpus and runs one webbench server.
> With the host's default scheduler, webbench's performance is very bad, but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we see a 5-10X performance improvement at the same CPU utilization.
> In addition, KVM's performance scalability is also impacted on large systems: in some performance experiments, KVM's performance begins to drop once the vCPUs are over-committed and the pCPUs are saturated, but with the wake-affine feature switched off in the scheduler, KVM's performance keeps rising in that case.
>

Ok. This is probably due to spinlock contention.

When vcpus are pinned to pcpus, there is a 50% chance that a guest's
vcpus will be co-scheduled and spinlocks will perform well.

When vcpus are not pinned, but affine wakeups are disabled, there is a
33% chance that vcpus will be co-scheduled.

When vcpus are not pinned and affine wakeups are enabled there is a 0%
chance that vcpus will be co-scheduled.
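
(Back-of-the-envelope reasoning for these figures, assuming the 2 pcpu /
two 2-vcpu guest setup above and that each pcpu is equally likely to be
running any of its runnable vcpus at a given instant: given that one of a
guest's vcpus is running, with pinning the other pcpu is running one of 2
candidate vcpus, one of which is the sibling, so 1/2 = 50%. Unpinned but
without affine pulls it is running one of the 3 remaining vcpus, so 1/3 =
33%. With affine wakeups pulling both siblings onto the same runqueue,
they can never run at the same time, so 0%.)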

Keeping both vcpus on the same core actually makes sense since they can
communicate through the local cache faster than across cores. What we
need is to make sure that they don't spin.

Windows 2008 can report spinlock spinning through a hypercall. Can you
hook into that interface and see if it happens regularly? Alternatively,
use a PLE capable host and trace the kvm_vcpu_on_spin() function.

--
error compiling committee.c: too many arguments to function

2010-04-16 02:28:25

by Zhang, Xiantao

Subject: RE: VM performance issue in KVM guests.

Avi Kivity wrote:
> On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:
>>
>>>>> Spin loops need to be addressed first, they are known to kill
>>>>> performance in overcommit situations.
>>>>>
>>>>>
>>>> Even in the overcommit case, if the vcpu threads of one qemu are
>>>> not scheduled or pulled onto the same logical processor, the
>>>> performance drop is tolerable, as in Xen's case today. But KVM
>>>> suffers an additional performance loss because the host's
>>>> scheduler actively pulls these vcpu threads together.
>>>>
>>>>
>>>>
>>> Can you quantify this loss? Give examples of what happens?
>>>
>> For example, one machine is configured with 2 pCPUs and two Windows
>> guests run on it; each guest is configured with 2 vcpus and runs one
>> webbench server.
>> With the host's default scheduler, webbench's performance is very
>> bad, but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1,
>> we see a 5-10X performance improvement at the same CPU utilization.
>> In addition, KVM's performance scalability is also impacted on large
>> systems: in some performance experiments, KVM's performance begins
>> to drop once the vCPUs are over-committed and the pCPUs are
>> saturated, but with the wake-affine feature switched off in the
>> scheduler, KVM's performance keeps rising in that case.
>>
>
> Ok. This is probably due to spinlock contention.

Yes, exactly.

> When vcpus are pinned to pcpus, there is a 50% chance that a guest's
> vcpus will be co-scheduled and spinlocks will perform well.
>
> When vcpus are not pinned, but affine wakeups are disabled, there is a
> 33% chance that vcpus will be co-scheduled.
>
> When vcpus are not pinned and affine wakeups are enabled there is a 0%
> chance that vcpus will be co-scheduled.
>
> Keeping both vcpus on the same core actually makes sense since they
> can communicate through the local cache faster than across cores.
> What we need is to make sure that they don't spin.
>
> Windows 2008 can report spinlock spinning through a hypercall. Can
> you hook into that interface and see if it happens regularly?
> Alternatively, use a PLE capable host and trace the kvm_vcpu_on_spin()
> function.
We only tried Windows 2003 in the experiments and have no data for Windows 2008,
but maybe we can give it a try later. Anyway, the key point is that we have to enhance the scheduler
to let it know which threads are vcpu threads, to avoid the performance loss in this case.
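
To make that concrete, the kind of hook we have in mind would look roughly like the sketch below. This is illustrative only, not the patch posted earlier in the thread: task_is_vcpu() is an assumed helper (e.g. a per-task flag that kvm would set on its vcpu threads), and the real integration point into select_task_rq_fair()/wake_affine() differs between kernel versions.

/* Illustrative sketch only.  task_is_vcpu() is an assumed helper that a
 * real implementation would have to provide. */
static inline int wake_affine_allowed(struct task_struct *curr,
                                      struct task_struct *p)
{
        if (!sched_feat(AFFINE_WAKEUPS))
                return 0;
        /* Don't pull one vcpu thread next to another vcpu thread; that
         * is exactly the case that leads to lock-holder preemption
         * inside the guest. */
        if (task_is_vcpu(curr) && task_is_vcpu(p))
                return 0;
        return 1;
}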
Xiantao

2010-04-17 19:03:13

by Avi Kivity

Subject: Re: VM performance issue in KVM guests.

On 04/16/2010 05:27 AM, Zhang, Xiantao wrote:
>
>
>> When vcpus are pinned to pcpus, there is a 50% chance that a guest's
>> vcpus will be co-scheduled and spinlocks will perform well.
>>
>> When vcpus are not pinned, but affine wakeups are disabled, there is a
>> 33% chance that vcpus will be co-scheduled.
>>
>> When vcpus are not pinned and affine wakeups are enabled there is a 0%
>> chance that vcpus will be co-scheduled.
>>
>> Keeping both vcpus on the same core actually makes sense since they
>> can communicate through the local cache faster than across cores.
>> What we need is to make sure that they don't spin.
>>
>> Windows 2008 can report spinlock spinning through a hypercall. Can
>> you hook into that interface and see if it happens regularly?
>> Alternatively, use a PLE capable host and trace the kvm_vcpu_on_spin()
>> function.
>>
> We only tried Windows 2003 in the experiments and have no data for Windows 2008,
> but maybe we can give it a try later. Anyway, the key point is that we have to enhance the scheduler
> to let it know which threads are vcpu threads, to avoid the performance loss in this case.
>

I have two worries about this approach:

1. Affine wakeups were introduced for a reason; if we disable them
(even just for vcpus), we lose something. Maybe we can tune the
mechanism not to fail, instead of disabling it.

2. Affine wakeups are a scheduler internal detail. How do we explain
what it does? The scheduler may not have affine wakeups in a few years,
yet we'll have an ABI to disable them.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.