Hello everybody,
While looking into Coverity ID 1371643 I ran into the following piece
of code at kernel/sched/cputime.c:568:
568/*
569 * Adjust tick based cputime random precision against scheduler runtime
570 * accounting.
571 *
572 * Tick based cputime accounting depend on random scheduling timeslices of a
573 * task to be interrupted or not by the timer. Depending on these
574 * circumstances, the number of these interrupts may be over or
575 * under-optimistic, matching the real user and system cputime with a variable
576 * precision.
577 *
578 * Fix this by scaling these tick based values against the total runtime
579 * accounted by the CFS scheduler.
580 *
581 * This code provides the following guarantees:
582 *
583 * stime + utime == rtime
584 * stime_i+1 >= stime_i, utime_i+1 >= utime_i
585 *
586 * Assuming that rtime_i+1 >= rtime_i.
587 */
588static void cputime_adjust(struct task_cputime *curr,
589 struct prev_cputime *prev,
590 u64 *ut, u64 *st)
591{
592 u64 rtime, stime, utime;
593 unsigned long flags;
594
595	/* Serialize concurrent callers such that we can honour our guarantees */
596 raw_spin_lock_irqsave(&prev->lock, flags);
597 rtime = curr->sum_exec_runtime;
598
599 /*
600 * This is possible under two circumstances:
601 * - rtime isn't monotonic after all (a bug);
602 * - we got reordered by the lock.
603 *
604	 * In both cases this acts as a filter such that the rest of the code
605 * can assume it is monotonic regardless of anything else.
606 */
607 if (prev->stime + prev->utime >= rtime)
608 goto out;
609
610 stime = curr->stime;
611 utime = curr->utime;
612
613 /*
614	 * If either stime or both stime and utime are 0, assume all runtime is
615 * userspace. Once a task gets some ticks, the monotonicy code at
616 * 'update' will ensure things converge to the observed ratio.
617 */
618 if (stime == 0) {
619 utime = rtime;
620 goto update;
621 }
622
623 if (utime == 0) {
624 stime = rtime;
625 goto update;
626 }
627
628 stime = scale_stime(stime, rtime, stime + utime);
629
630update:
631 /*
632	 * Make sure stime doesn't go backwards; this preserves monotonicity
633 * for utime because rtime is monotonic.
634 *
635 * utime_i+1 = rtime_i+1 - stime_i
636 * = rtime_i+1 - (rtime_i - utime_i)
637 * = (rtime_i+1 - rtime_i) + utime_i
638 * >= utime_i
639 */
640 if (stime < prev->stime)
641 stime = prev->stime;
642 utime = rtime - stime;
643
644 /*
645 * Make sure utime doesn't go backwards; this still preserves
646 * monotonicity for stime, analogous argument to above.
647 */
648 if (utime < prev->utime) {
649 utime = prev->utime;
650 stime = rtime - utime;
651 }
652
653 prev->stime = stime;
654 prev->utime = utime;
655out:
656 *ut = prev->utime;
657 *st = prev->stime;
658 raw_spin_unlock_irqrestore(&prev->lock, flags);
659}
The issue here is that the value assigned to the variable utime at line
619 is overwritten at line 642, which would make that assignment
useless.
But I suspect that the assignment is actually intentional, and that
line 642 should instead be moved into the if block at line 640.
Something similar to the following patch:
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime *curr,
* = (rtime_i+1 - rtime_i) + utime_i
* >= utime_i
*/
- if (stime < prev->stime)
+ if (stime < prev->stime) {
stime = prev->stime;
- utime = rtime - stime;
+ utime = rtime - stime;
+ }
If you confirm this, I will send a patch in full and proper form.
I'd really appreciate your comments.
Thank you!
--
Gustavo A. R. Silva
On Wed, Jun 28, 2017 at 1:03 AM, Gustavo A. R. Silva
<[email protected]> wrote:
>
> Hello everybody,
>
> While looking into Coverity ID 1371643 I ran into the following piece of
> code at kernel/sched/cputime.c:568:
>
> [full cputime_adjust() listing snipped]
>
> The issue here is that the value assigned to variable utime at line 619 is
> overwritten at line 642, which would make such variable assignment useless.
It isn't completely useless, since utime is used in line 628 to calculate stime.
> But I'm suspicious that such assignment is actually correct and that line
> 642 should be included into the IF block at line 640. Something similar to
> the following patch:
>
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime *curr,
> * = (rtime_i+1 - rtime_i) + utime_i
> * >= utime_i
> */
> - if (stime < prev->stime)
> + if (stime < prev->stime) {
> stime = prev->stime;
> - utime = rtime - stime;
> + utime = rtime - stime;
> + }
>
>
> If you confirm this, I will send a patch in a full and proper form.
>
> I'd really appreciate your comments.
If you do that, how would you meet the guarantee made in line 583?
Frans
On Wed, Jun 28, 2017 at 7:35 AM, Frans Klaver <[email protected]> wrote:
> On Wed, Jun 28, 2017 at 1:03 AM, Gustavo A. R. Silva
> <[email protected]> wrote:
>>
>> Hello everybody,
>>
>> While looking into Coverity ID 1371643 I ran into the following piece of
>> code at kernel/sched/cputime.c:568:
>>
>> [full cputime_adjust() listing snipped]
>>
>> The issue here is that the value assigned to variable utime at line 619 is
>> overwritten at line 642, which would make such variable assignment useless.
>
> It isn't completely useless, since utime is used in line 628 to calculate stime.
Oh, I missed 'goto update'. Never mind about this one.
>> But I'm suspicious that such assignment is actually correct and that line
>> 642 should be included into the IF block at line 640. Something similar to
>> the following patch:
>>
>> --- a/kernel/sched/cputime.c
>> +++ b/kernel/sched/cputime.c
>> @@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime *curr,
>> * = (rtime_i+1 - rtime_i) + utime_i
>> * >= utime_i
>> */
>> - if (stime < prev->stime)
>> + if (stime < prev->stime) {
>> stime = prev->stime;
>> - utime = rtime - stime;
>> + utime = rtime - stime;
>> + }
>>
>>
>> If you confirm this, I will send a patch in a full and proper form.
>>
>> I'd really appreciate your comments.
>
> If you do that, how would you meet the guarantee made in line 583?
>
> Frans
Hi Frans,
Quoting Frans Klaver <[email protected]>:
> On Wed, Jun 28, 2017 at 7:35 AM, Frans Klaver <[email protected]> wrote:
>> On Wed, Jun 28, 2017 at 1:03 AM, Gustavo A. R. Silva
>> <[email protected]> wrote:
>>>
>>> Hello everybody,
>>>
>>> While looking into Coverity ID 1371643 I ran into the following piece of
>>> code at kernel/sched/cputime.c:568:
>>>
>>> [full cputime_adjust() listing snipped]
>>>
>>>
>>> The issue here is that the value assigned to variable utime at line 619 is
>>> overwritten at line 642, which would make such variable assignment useless.
>>
>> It isn't completely useless, since utime is used in line 628 to
>> calculate stime.
>
> Oh, I missed 'goto update'. Never mind about this one.
>
>
>>> But I'm suspicious that such assignment is actually correct and that line
>>> 642 should be included into the IF block at line 640. Something similar to
>>> the following patch:
>>>
>>> --- a/kernel/sched/cputime.c
>>> +++ b/kernel/sched/cputime.c
>>> @@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime *curr,
>>> * = (rtime_i+1 - rtime_i) + utime_i
>>> * >= utime_i
>>> */
>>> - if (stime < prev->stime)
>>> + if (stime < prev->stime) {
>>> stime = prev->stime;
>>> - utime = rtime - stime;
>>> + utime = rtime - stime;
>>> + }
>>>
>>>
>>> If you confirm this, I will send a patch in a full and proper form.
>>>
>>> I'd really appreciate your comments.
>>
>> If you do that, how would you meet the guarantee made in line 583?
>>
You are right, I see now.
Then in this case the following patch would be the way to go:
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -615,10 +615,8 @@ static void cputime_adjust(struct task_cputime *curr,
* userspace. Once a task gets some ticks, the monotonicy code at
* 'update' will ensure things converge to the observed ratio.
*/
- if (stime == 0) {
- utime = rtime;
+ if (stime == 0)
goto update;
- }
if (utime == 0) {
stime = rtime;
but I think this one is even better:
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -615,19 +615,11 @@ static void cputime_adjust(struct task_cputime *curr,
* userspace. Once a task gets some ticks, the monotonicy code at
* 'update' will ensure things converge to the observed ratio.
*/
- if (stime == 0) {
- utime = rtime;
- goto update;
- }
-
- if (utime == 0) {
+ if (stime != 0 && utime == 0)
stime = rtime;
- goto update;
- }
-
- stime = scale_stime(stime, rtime, stime + utime);
+ else
+ stime = scale_stime(stime, rtime, stime + utime);
-update:
/*
* Make sure stime doesn't go backwards; this preserves monotonicity
* for utime because rtime is monotonic.
What do you think?
Thank you!
--
Gustavo A. R. Silva
On 29 June 2017 01:57:19 CEST, "Gustavo A. R. Silva" <[email protected]> wrote:
>>>> --- a/kernel/sched/cputime.c
>>>> +++ b/kernel/sched/cputime.c
>>>> @@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime
>*curr,
>>>> * = (rtime_i+1 - rtime_i) + utime_i
>>>> * >= utime_i
>>>> */
>>>> - if (stime < prev->stime)
>>>> + if (stime < prev->stime) {
>>>> stime = prev->stime;
>>>> - utime = rtime - stime;
>>>> + utime = rtime - stime;
>>>> + }
>>>>
>>>>
>>>> If you confirm this, I will send a patch in a full and proper form.
>>>>
>>>> I'd really appreciate your comments.
>>>
>>> If you do that, how would you meet the guarantee made in line 583?
>>>
>
>You are right, I see now.
>
>Then in this case the following patch would be the way to go:
>
>--- a/kernel/sched/cputime.c
>+++ b/kernel/sched/cputime.c
>@@ -615,10 +615,8 @@ static void cputime_adjust(struct task_cputime
>*curr,
> * userspace. Once a task gets some ticks, the monotonicy code at
> * 'update' will ensure things converge to the observed ratio.
> */
>- if (stime == 0) {
>- utime = rtime;
>+ if (stime == 0)
> goto update;
>- }
>
> if (utime == 0) {
> stime = rtime;
>
>
>but I think this one is even better:
>
>
>--- a/kernel/sched/cputime.c
>+++ b/kernel/sched/cputime.c
>@@ -615,19 +615,11 @@ static void cputime_adjust(struct task_cputime
>*curr,
> * userspace. Once a task gets some ticks, the monotonicy code at
> * 'update' will ensure things converge to the observed ratio.
> */
>- if (stime == 0) {
>- utime = rtime;
>- goto update;
>- }
>-
>- if (utime == 0) {
>+ if (stime != 0 && utime == 0)
> stime = rtime;
>- goto update;
>- }
>-
>- stime = scale_stime(stime, rtime, stime + utime);
>+ else
>+ stime = scale_stime(stime, rtime, stime + utime);
I don't think it is better. The stime == 0 case is gone now, so scale_stime() will be called in that case. This whole if/else block should only be executed if stime != 0.
Quoting Frans Klaver <[email protected]>:
> On 29 June 2017 01:57:19 CEST, "Gustavo A. R. Silva"
> <[email protected]> wrote:
>>>>> --- a/kernel/sched/cputime.c
>>>>> +++ b/kernel/sched/cputime.c
>>>>> @@ -637,9 +637,10 @@ static void cputime_adjust(struct task_cputime
>> *curr,
>>>>> * = (rtime_i+1 - rtime_i) + utime_i
>>>>> * >= utime_i
>>>>> */
>>>>> - if (stime < prev->stime)
>>>>> + if (stime < prev->stime) {
>>>>> stime = prev->stime;
>>>>> - utime = rtime - stime;
>>>>> + utime = rtime - stime;
>>>>> + }
>>>>>
>>>>>
>>>>> If you confirm this, I will send a patch in a full and proper form.
>>>>>
>>>>> I'd really appreciate your comments.
>>>>
>>>> If you do that, how would you meet the guarantee made in line 583?
>>>>
>>
>> You are right, I see now.
>>
>> Then in this case the following patch would be the way to go:
>>
>> --- a/kernel/sched/cputime.c
>> +++ b/kernel/sched/cputime.c
>> @@ -615,10 +615,8 @@ static void cputime_adjust(struct task_cputime
>> *curr,
>> * userspace. Once a task gets some ticks, the monotonicy code at
>> * 'update' will ensure things converge to the observed ratio.
>> */
>> - if (stime == 0) {
>> - utime = rtime;
>> + if (stime == 0)
>> goto update;
>> - }
>>
>> if (utime == 0) {
>> stime = rtime;
>>
>>
>> but I think this one is even better:
>>
>>
>> --- a/kernel/sched/cputime.c
>> +++ b/kernel/sched/cputime.c
>> @@ -615,19 +615,11 @@ static void cputime_adjust(struct task_cputime
>> *curr,
>> * userspace. Once a task gets some ticks, the monotonicy code at
>> * 'update' will ensure things converge to the observed ratio.
>> */
>> - if (stime == 0) {
>> - utime = rtime;
>> - goto update;
>> - }
>> -
>> - if (utime == 0) {
>> + if (stime != 0 && utime == 0)
>> stime = rtime;
>> - goto update;
>> - }
>> -
>> - stime = scale_stime(stime, rtime, stime + utime);
>> + else
>> + stime = scale_stime(stime, rtime, stime + utime);
>
> I don't think it is better. The stime == 0 case is gone now. So
> scale_stime() will be called in that case. This whole if/else block
> should only be executed if stime != 0.
Oh yeah! something like:
if (stime != 0) {
        if (utime == 0)
                stime = rtime;
        else
                stime = scale_stime(stime, rtime, stime + utime);
}
I'll be right back with the final patch.
Thanks for your time, Frans.
Much appreciated :)
--
Gustavo A. R. Silva
Value assigned to variable utime at line 619: utime = rtime;
is overwritten at line 642: utime = rtime - stime; before it
can be used. This makes such variable assignment useless.
Remove this variable assignment and refactor the related code.
Addresses-Coverity-ID: 1371643
Cc: Frans Klaver <[email protected]>
Signed-off-by: Gustavo A. R. Silva <[email protected]>
---
kernel/sched/cputime.c | 16 +++++-----------
1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index aea3135..a83fd9a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime *curr,
* userspace. Once a task gets some ticks, the monotonicy code at
* 'update' will ensure things converge to the observed ratio.
*/
- if (stime == 0) {
- utime = rtime;
- goto update;
+ if (stime != 0) {
+ if (utime == 0)
+ stime = rtime;
+ else
+ stime = scale_stime(stime, rtime, stime + utime);
}
- if (utime == 0) {
- stime = rtime;
- goto update;
- }
-
- stime = scale_stime(stime, rtime, stime + utime);
-
-update:
/*
* Make sure stime doesn't go backwards; this preserves monotonicity
* for utime because rtime is monotonic.
--
2.5.0
Commit-ID: 72298e5c92c50edd8cb7cfda4519483ce65fa166
Gitweb: http://git.kernel.org/tip/72298e5c92c50edd8cb7cfda4519483ce65fa166
Author: Gustavo A. R. Silva <[email protected]>
AuthorDate: Thu, 29 Jun 2017 13:41:28 -0500
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 30 Jun 2017 09:37:59 +0200
sched/cputime: Refactor the cputime_adjust() code
Address a Coverity false positive, which is caused by overly
convoluted code:
Value assigned to variable 'utime' at line 619: utime = rtime;
is overwritten at line 642: utime = rtime - stime; before it
can be used. This makes such variable assignment useless.
Remove this variable assignment and refactor the related code.
Addresses-Coverity-ID: 1371643
Signed-off-by: Gustavo A. R. Silva <[email protected]>
Cc: Frans Klaver <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Stanislaw Gruszka <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Wanpeng Li <[email protected]>
Link: http://lkml.kernel.org/r/20170629184128.GA5271@embeddedgus
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/cputime.c | 16 +++++-----------
1 file changed, 5 insertions(+), 11 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index aea3135..67c70e2 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime *curr,
* userspace. Once a task gets some ticks, the monotonicy code at
* 'update' will ensure things converge to the observed ratio.
*/
- if (stime == 0) {
- utime = rtime;
- goto update;
+ if (stime != 0) {
+ if (utime == 0)
+ stime = rtime;
+ else
+ stime = scale_stime(stime, rtime, stime + utime);
}
- if (utime == 0) {
- stime = rtime;
- goto update;
- }
-
- stime = scale_stime(stime, rtime, stime + utime);
-
-update:
/*
* Make sure stime doesn't go backwards; this preserves monotonicity
* for utime because rtime is monotonic.
On Fri, 2017-06-30 at 06:10 -0700, tip-bot for Gustavo A. R. Silva
wrote:
> +++ b/kernel/sched/cputime.c
> @@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime
> *curr,
> * userspace. Once a task gets some ticks, the monotonicy
> code at
> * 'update' will ensure things converge to the observed
> ratio.
> */
> - if (stime == 0) {
> - utime = rtime;
> - goto update;
> + if (stime != 0) {
> + if (utime == 0)
> + stime = rtime;
> + else
> + stime = scale_stime(stime, rtime, stime +
> utime);
> }
>
> - if (utime == 0) {
> - stime = rtime;
> - goto update;
> - }
> -
> - stime = scale_stime(stime, rtime, stime + utime);
> -
> -update:
Wait, what?
This gets rid of the utime = rtime assignment when
stime == 0. That could be a correctness issue.
> /*
> * Make sure stime doesn't go backwards; this preserves
> monotonicity
> * for utime because rtime is monotonic.
On Fri, Jun 30, 2017 at 4:00 PM, Rik van Riel <[email protected]> wrote:
> On Fri, 2017-06-30 at 06:10 -0700, tip-bot for Gustavo A. R. Silva
> wrote:
>
>> +++ b/kernel/sched/cputime.c
>> @@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime
>> *curr,
>> * userspace. Once a task gets some ticks, the monotonicy
>> code at
>> * 'update' will ensure things converge to the observed
>> ratio.
>> */
>> - if (stime == 0) {
>> - utime = rtime;
>> - goto update;
>> + if (stime != 0) {
>> + if (utime == 0)
>> + stime = rtime;
>> + else
>> + stime = scale_stime(stime, rtime, stime +
>> utime);
>> }
>>
>> - if (utime == 0) {
>> - stime = rtime;
>> - goto update;
>> - }
>> -
>> - stime = scale_stime(stime, rtime, stime + utime);
>> -
>> -update:
>
> Wait, what?
>
> This get rid of the utime = rtime assignment, when
> stime == 0. That could be a correctness issue.
utime is overwritten with rtime - stime before it is ever read after
that assignment. The utime = rtime assignment is then pointless.
On Fri, Jun 30, 2017 at 04:41:50PM +0200, Frans Klaver wrote:
> On Fri, Jun 30, 2017 at 4:00 PM, Rik van Riel <[email protected]> wrote:
> > On Fri, 2017-06-30 at 06:10 -0700, tip-bot for Gustavo A. R. Silva
> > wrote:
> >
> >> +++ b/kernel/sched/cputime.c
> >> @@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime
> >> *curr,
> >> * userspace. Once a task gets some ticks, the monotonicy
> >> code at
> >> * 'update' will ensure things converge to the observed
> >> ratio.
> >> */
> >> - if (stime == 0) {
> >> - utime = rtime;
> >> - goto update;
> >> + if (stime != 0) {
> >> + if (utime == 0)
> >> + stime = rtime;
> >> + else
> >> + stime = scale_stime(stime, rtime, stime +
> >> utime);
> >> }
> >>
> >> - if (utime == 0) {
> >> - stime = rtime;
> >> - goto update;
> >> - }
> >> -
> >> - stime = scale_stime(stime, rtime, stime + utime);
> >> -
> >> -update:
> >
> > Wait, what?
> >
> > This get rid of the utime = rtime assignment, when
> > stime == 0. That could be a correctness issue.
>
> The first time utime is used after that assignment, it is overwritten
> with rtime - stime. The utime = rtime assignment is then pointless.
Right, I also got confused at first, but after staring at the code the patch looks right.
Thanks.
On Fri, Jun 30, 2017 at 06:10:35AM -0700, tip-bot for Gustavo A. R. Silva wrote:
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index aea3135..67c70e2 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime *curr,
> * userspace. Once a task gets some ticks, the monotonicy code at
> * 'update' will ensure things converge to the observed ratio.
^^^^^^
> */
> - if (stime == 0) {
> - utime = rtime;
> - goto update;
> + if (stime != 0) {
> + if (utime == 0)
> + stime = rtime;
> + else
> + stime = scale_stime(stime, rtime, stime + utime);
> }
>
> - if (utime == 0) {
> - stime = rtime;
> - goto update;
> - }
> -
> - stime = scale_stime(stime, rtime, stime + utime);
> -
> -update:
Since the 'update' label is removed, I think the above comment should be
corrected too. Alternatively, the patch could have just removed the
'utime = rtime;' line to silence Coverity.
Stanislaw
On Fri, Jun 30, 2017 at 06:10:35AM -0700, tip-bot for Gustavo A. R. Silva wrote:
> Commit-ID: 72298e5c92c50edd8cb7cfda4519483ce65fa166
> Gitweb: http://git.kernel.org/tip/72298e5c92c50edd8cb7cfda4519483ce65fa166
> Author: Gustavo A. R. Silva <[email protected]>
> AuthorDate: Thu, 29 Jun 2017 13:41:28 -0500
> Committer: Ingo Molnar <[email protected]>
> CommitDate: Fri, 30 Jun 2017 09:37:59 +0200
>
> sched/cputime: Refactor the cputime_adjust() code
>
> Address a Coverity false positive, which is caused by overly
> convoluted code:
>
> Value assigned to variable 'utime' at line 619:utime = rtime;
> is overwritten at line 642:utime = rtime - stime; before it
> can be used. This makes such variable assignment useless.
>
> Remove this variable assignment and refactor the code related.
>
> Addresses-Coverity-ID: 1371643
> Signed-off-by: Gustavo A. R. Silva <[email protected]>
> Cc: Frans Klaver <[email protected]>
> Cc: Frederic Weisbecker <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Rik van Riel <[email protected]>
> Cc: Stanislaw Gruszka <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Wanpeng Li <[email protected]>
> Link: http://lkml.kernel.org/r/20170629184128.GA5271@embeddedgus
> Signed-off-by: Ingo Molnar <[email protected]>
> ---
> kernel/sched/cputime.c | 16 +++++-----------
> 1 file changed, 5 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index aea3135..67c70e2 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -615,19 +615,13 @@ static void cputime_adjust(struct task_cputime *curr,
> * userspace. Once a task gets some ticks, the monotonicy code at
> * 'update' will ensure things converge to the observed ratio.
> */
> - if (stime == 0) {
> - utime = rtime;
> - goto update;
> + if (stime != 0) {
> + if (utime == 0)
> + stime = rtime;
> + else
> + stime = scale_stime(stime, rtime, stime + utime);
> }
>
> - if (utime == 0) {
> - stime = rtime;
> - goto update;
> - }
> -
> - stime = scale_stime(stime, rtime, stime + utime);
> -
> -update:
> /*
> * Make sure stime doesn't go backwards; this preserves monotonicity
> * for utime because rtime is monotonic.
Argh, no... That code was perfectly fine. The new code otoh is
convoluted crap.
It had the form:
if (exception1)
deal with exception1
if (exception2)
deal with exception2
do normal stuff
Which is as simple and straight forward as it gets.
The new code otoh reads like:
if (!exception1) {
if (exception2)
deal with exception 2
else
do normal stuff
}
which is absolute shit.
So NAK on this.
* Peter Zijlstra <[email protected]> wrote:
> Argh, no... That code was perfectly fine. The new code otoh is
> convoluted crap.
>
> It had the form:
>
> if (exception1)
> deal with exception1
>
> if (exception2)
> deal with exception2
>
> do normal stuff
>
> Which is as simple and straight forward as it gets.
>
> The new code otoh reads like:
>
> if (!exception1) {
> if (exception2)
> deal with exception 2
> else
> do normal stuff
> }
>
> which is absolute shit.
>
> So NAK on this.
Agreed - I've queued up a revert.
Note that I fixed the old comment, which was arguably wrong:
/*
* If either stime or both stime and utime are 0, assume all runtime is
* userspace. Once a task gets some ticks, the monotonicy code at
* 'update' will ensure things converge to the observed ratio.
*/
The correct comment is something like:
/*
* If either stime or utime are 0, assume all runtime is userspace.
* Once a task gets some ticks, the monotonicy code at 'update:'
* will ensure things converge to the observed ratio.
*/
Thanks,
Ingo