2012-10-03 14:38:01

by Raghavendra K T

Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

* Avi Kivity <[email protected]> [2012-09-27 14:03:59]:

> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
> >>
[...]
> > 2) Looking at the result (comparing A & C), I do feel we have
> > significant overhead in iterating over vcpus (when compared to even vmexit),
> > so we would still need the undercommit fix suggested by PeterZ (improving
> > by 140%)?
>
> Looking only at the current runqueue? My worry is that it misses a lot
> of cases. Maybe try the current runqueue first and then others.
>

Okay. Do you mean we can have something like

+	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
+		yielded = -ESRCH;
+		goto out_irq;
+	}

in Peter's patch?
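
For context, a minimal sketch of where such a hunk would sit, assuming
Peter's patch adds the out_irq label to yield_to() in
kernel/sched/core.c (the existing body is elided):

int __sched yield_to(struct task_struct *p, bool preempt)
{
	struct rq *rq, *p_rq;
	unsigned long flags;
	int yielded = 0;

	local_irq_save(flags);
	rq = this_rq();
	p_rq = task_rq(p);

	/*
	 * Bail out before taking the remote runqueue lock: with a single
	 * task on both the source and the target runqueue there is
	 * nobody to yield to.
	 */
	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
		yielded = -ESRCH;
		goto out_irq;
	}

	double_rq_lock(rq, p_rq);
	/* ... existing yield_to() body, which sets yielded ... */
	double_rq_unlock(rq, p_rq);

out_irq:
	local_irq_restore(flags);

	if (yielded > 0)
		schedule();

	return yielded;
}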

(I thought a lot about && vs. ||; both seem to have their own cons.)
But that should apply only when we have a short-term imbalance, as
PeterZ said.

I am experimenting with all of these for the V2 patch. I will come back
with the analysis and the patch.

> Or were you referring to something else?
>


2012-10-03 17:26:24

by Avi Kivity

Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

On 10/03/2012 04:29 PM, Raghavendra K T wrote:
> * Avi Kivity <[email protected]> [2012-09-27 14:03:59]:
>
>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>> >>
> [...]
>> > 2) Looking at the result (comparing A & C), I do feel we have
>> > significant overhead in iterating over vcpus (when compared to even vmexit),
>> > so we would still need the undercommit fix suggested by PeterZ (improving
>> > by 140%)?
>>
>> Looking only at the current runqueue? My worry is that it misses a lot
>> of cases. Maybe try the current runqueue first and then others.
>>
>
> Okay. Do you mean we can have something like
>
> + if (rq->nr_running == 1 && p_rq->nr_running == 1) {
> + yielded = -ESRCH;
> + goto out_irq;
> + }
>
> in Peter's patch?
>
> (I thought a lot about && vs. ||; both seem to have their own cons.)
> But that should apply only when we have a short-term imbalance, as
> PeterZ said.

I'm missing the context. What is p_rq?

What I meant was:

if can_yield_to_process_in_current_rq
        do that
else if can_yield_to_process_in_other_rq
        do that
else
        return -ESRCH
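
A rough sketch of that ordering in KVM terms; kvm_for_each_vcpu() and
kvm_vcpu_yield_to() are existing helpers, while vcpu_on_this_rq() is
only a hypothetical predicate ("this vcpu's task is queued on the
caller's runqueue"), so take it as an illustration rather than an
implementation:

static int yield_to_preferred_vcpu(struct kvm_vcpu *me)
{
	struct kvm *kvm = me->kvm;
	struct kvm_vcpu *vcpu;
	int i;

	/* First pass: vcpus sharing our runqueue are cheap to yield to. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me || !vcpu_on_this_rq(vcpu))
			continue;
		if (kvm_vcpu_yield_to(vcpu) > 0)
			return 1;
	}

	/* Second pass: fall back to vcpus on other runqueues. */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me || vcpu_on_this_rq(vcpu))
			continue;
		if (kvm_vcpu_yield_to(vcpu) > 0)
			return 1;
	}

	/* Nobody worth yielding to. */
	return -ESRCH;
}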


--
error compiling committee.c: too many arguments to function

2012-10-04 11:00:46

by Raghavendra K T

Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

On 10/03/2012 10:55 PM, Avi Kivity wrote:
> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
>> * Avi Kivity <[email protected]> [2012-09-27 14:03:59]:
>>
>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>>>>
>> [...]
>>>> 2) Looking at the result (comparing A & C), I do feel we have
>>>> significant overhead in iterating over vcpus (when compared to even vmexit),
>>>> so we would still need the undercommit fix suggested by PeterZ (improving
>>>> by 140%)?
>>>
>>> Looking only at the current runqueue? My worry is that it misses a lot
>>> of cases. Maybe try the current runqueue first and then others.
>>>
>>
>> Okay. Do you mean we can have something like
>>
>> + if (rq->nr_running == 1 && p_rq->nr_running == 1) {
>> + yielded = -ESRCH;
>> + goto out_irq;
>> + }
>>
>> in Peter's patch?
>>
>> (I thought a lot about && vs. ||; both seem to have their own cons.)
>> But that should apply only when we have a short-term imbalance, as
>> PeterZ said.
>
> I'm missing the context. What is p_rq?

p_rq is the run queue of the target vcpu.
What I was trying to address below was Rik's concern: suppose the
rq of the source vcpu has one task, but the target rq has two tasks,
with an eligible vcpu waiting to be scheduled.
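
Purely to illustrate the trade-off (not from any posted patch), with rq
the source vcpu's runqueue and p_rq the target's:

/* &&: bail out only when both runqueues are otherwise idle; in Rik's
 * case (source alone, target with an eligible vcpu waiting) the yield
 * still happens. */
if (rq->nr_running == 1 && p_rq->nr_running == 1)
	return -ESRCH;

/* ||: bail out when either runqueue holds a single task; it triggers
 * more often, but it would skip Rik's case and never boost the waiting
 * vcpu. */
if (rq->nr_running == 1 || p_rq->nr_running == 1)
	return -ESRCH;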

>
> What I mean was:
>
> if can_yield_to_process_in_current_rq
> do that
> else if can_yield_to_process_in_other_rq
> do that
> else
> return -ESRCH

I think you are saying we have to check the run queue of the
source vcpu: if we have a vcpu belonging to the same VM there, try to
yield to that, ignoring whatever target vcpu we received for yield_to.

Or is it that kvm_vcpu_yield_to should now check the vcpus of the same
VM belonging to the same run queue first, and if we don't succeed, go
again for a vcpu on a different runqueue?
Does it add more overhead, especially in the <= 1x scenario?

2012-10-04 12:44:24

by Avi Kivity

Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

On 10/04/2012 12:56 PM, Raghavendra K T wrote:
> On 10/03/2012 10:55 PM, Avi Kivity wrote:
>> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
>>> * Avi Kivity <[email protected]> [2012-09-27 14:03:59]:
>>>
>>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>>>>>
>>> [...]
>>>>> 2) Looking at the result (comparing A & C), I do feel we have
>>>>> significant overhead in iterating over vcpus (when compared to even vmexit),
>>>>> so we would still need the undercommit fix suggested by PeterZ
>>>>> (improving by 140%)?
>>>>
>>>> Looking only at the current runqueue? My worry is that it misses a lot
>>>> of cases. Maybe try the current runqueue first and then others.
>>>>
>>>
>>> Okay. Do you mean we can have something like
>>>
>>> + if (rq->nr_running == 1 && p_rq->nr_running == 1) {
>>> + yielded = -ESRCH;
>>> + goto out_irq;
>>> + }
>>>
>>> in Peter's patch?
>>>
>>> (I thought a lot about && vs. ||; both seem to have their own cons.)
>>> But that should apply only when we have a short-term imbalance, as
>>> PeterZ said.
>>
>> I'm missing the context. What is p_rq?
>
> p_rq is the run queue of the target vcpu.
> What I was trying to address below was Rik's concern: suppose the
> rq of the source vcpu has one task, but the target rq has two tasks,
> with an eligible vcpu waiting to be scheduled.
>
>>
>> What I mean was:
>>
>> if can_yield_to_process_in_current_rq
>> do that
>> else if can_yield_to_process_in_other_rq
>> do that
>> else
>> return -ESRCH
>
> I think you are saying we have to check the run queue of the
> source vcpu: if we have a vcpu belonging to the same VM there, try to
> yield to that, ignoring whatever target vcpu we received for yield_to.
>
> Or is it that kvm_vcpu_yield_to should now check the vcpus of the same
> VM belonging to the same run queue first, and if we don't succeed, go
> again for a vcpu on a different runqueue?

Right. Prioritize vcpus that are cheap to yield to. But may return bad
results if all vcpus on the current runqueue are spinners, so probably
not a good idea.

> Does it add more overhead especially in <= 1x scenario?

The current runqueue should have just our vcpu in that case, so low
overhead. But it's a bad idea due to the above scenario.

--
error compiling committee.c: too many arguments to function

2012-10-05 09:08:59

by Raghavendra K T

Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler

On 10/04/2012 06:14 PM, Avi Kivity wrote:
> On 10/04/2012 12:56 PM, Raghavendra K T wrote:
>> On 10/03/2012 10:55 PM, Avi Kivity wrote:
>>> On 10/03/2012 04:29 PM, Raghavendra K T wrote:
>>>> * Avi Kivity <[email protected]> [2012-09-27 14:03:59]:
>>>>
>>>>> On 09/27/2012 01:23 PM, Raghavendra K T wrote:
>>>>>>>
>>>> [...]
>>>>>> 2) Looking at the result (comparing A & C), I do feel we have
>>>>>> significant overhead in iterating over vcpus (when compared to even vmexit),
>>>>>> so we would still need the undercommit fix suggested by PeterZ
>>>>>> (improving by 140%)?
>>>>>
>>>>> Looking only at the current runqueue? My worry is that it misses a lot
>>>>> of cases. Maybe try the current runqueue first and then others.
>>>>>
>>>>
>>>> Okay. Do you mean we can have something like
>>>>
>>>> + if (rq->nr_running == 1 && p_rq->nr_running == 1) {
>>>> + yielded = -ESRCH;
>>>> + goto out_irq;
>>>> + }
>>>>
>>>> in Peter's patch?
>>>>
>>>> (I thought a lot about && vs. ||; both seem to have their own cons.)
>>>> But that should apply only when we have a short-term imbalance, as
>>>> PeterZ said.
>>>
>>> I'm missing the context. What is p_rq?
>>
>> p_rq is the run queue of the target vcpu.
>> What I was trying to address below was Rik's concern: suppose the
>> rq of the source vcpu has one task, but the target rq has two tasks,
>> with an eligible vcpu waiting to be scheduled.
>>
>>>
>>> What I mean was:
>>>
>>> if can_yield_to_process_in_current_rq
>>> do that
>>> else if can_yield_to_process_in_other_rq
>>> do that
>>> else
>>> return -ESRCH
>>
>> I think you are saying we have to check the run queue of the
>> source vcpu: if we have a vcpu belonging to the same VM there, try to
>> yield to that, ignoring whatever target vcpu we received for yield_to.
>>
>> Or is it that kvm_vcpu_yield_to should now check the vcpus of the same
>> VM belonging to the same run queue first, and if we don't succeed, go
>> again for a vcpu on a different runqueue?
>
> Right. Prioritize vcpus that are cheap to yield to. But may return bad
> results if all vcpus on the current runqueue are spinners, so probably
> not a good idea.

Okay, I'll drop the vcpu-from-the-same-rq idea now.

>
>> Does it add more overhead especially in <= 1x scenario?
>
> The current runqueue should have just our vcpu in that case, so low
> overhead. But it's a bad idea due to the above scenario.
>