2017-09-01 11:33:22

by Ethan Zhao

Subject: [PATCH] sched: reset sysctl_sched_time_avg to default when

The system will hang if a user sets sysctl_sched_time_avg to 0 with:

[root@XXX ~]# sysctl kernel.sched_time_avg_ms=0

Stack traceback for pid 0
0xffff883f6406c600 0 0 1 3 R 0xffff883f6406cf50 *swapper/3
ffff883f7ccc3ae8 0000000000000018 ffffffff810c4dd0 0000000000000000
0000000000017800 ffff883f7ccc3d78 0000000000000003 ffff883f7ccc3bf8
ffffffff810c4fc9 ffff883f7ccc3c08 00000000810c5043 ffff883f7ccc3c08
Call Trace:
<IRQ> [<ffffffff810c4dd0>] ? update_group_capacity+0x110/0x200
[<ffffffff810c4fc9>] ? update_sd_lb_stats+0x109/0x600
[<ffffffff810c5507>] ? find_busiest_group+0x47/0x530
[<ffffffff810c5b84>] ? load_balance+0x194/0x900
[<ffffffff810ad5ca>] ? update_rq_clock.part.83+0x1a/0xe0
[<ffffffff810c6d42>] ? rebalance_domains+0x152/0x290
[<ffffffff810c6f5c>] ? run_rebalance_domains+0xdc/0x1d0
[<ffffffff8108a75b>] ? __do_softirq+0xfb/0x320
[<ffffffff8108ac85>] ? irq_exit+0x125/0x130
[<ffffffff810b3a17>] ? scheduler_ipi+0x97/0x160
[<ffffffff81052709>] ? smp_reschedule_interrupt+0x29/0x30
[<ffffffff8173a1be>] ? reschedule_interrupt+0x6e/0x80
<EOI> [<ffffffff815bc83c>] ? cpuidle_enter_state+0xcc/0x230
[<ffffffff815bc80c>] ? cpuidle_enter_state+0x9c/0x230
[<ffffffff815bc9d7>] ? cpuidle_enter+0x17/0x20
[<ffffffff810cd6dc>] ? cpu_startup_entry+0x38c/0x420
[<ffffffff81053373>] ? start_secondary+0x173/0x1e0

This happens because a divide-by-zero error occurs in the following call
path:

update_group_capacity()
  update_cpu_capacity()
    scale_rt_capacity()
    {
      ...
      total = sched_avg_period() + delta;
      used = div_u64(avg, total);
      ...
    }
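
For context, paraphrasing the surrounding code in scale_rt_capacity()
(kernel/sched/fair.c; this is a sketch of the failure mode, not the
verbatim source, and the exact clock accessor differs between versions):
delta is clamped to be non-negative, so it can legitimately be 0, and
sched_avg_period() returns 0 once the sysctl is 0, which leaves a zero
divisor:

	delta = rq_clock(rq) - age_stamp;	/* elapsed since rq->age_stamp */
	if (unlikely(delta < 0))
		delta = 0;			/* delta may be exactly 0 */

	total = sched_avg_period() + delta;	/* 0 + 0 once the sysctl is 0 */
	used = div_u64(avg, total);		/* divide by zero -> the hang above */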

This issue appears to be reproducible on every stable kernel I tried, from
4.1 up to the latest.

To fix this issue, reset sysctl_sched_time_avg to its default value
MSEC_PER_SEC in sched_avg_period() whenever the user has written an
invalid value.

Reported-by: James Puthukattukaran <[email protected]>
Signed-off-by: Ethan Zhao <[email protected]>
---
Tested on stable 4.1; compile-tested on 4.13-rc5.
kernel/sched/sched.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eeef1a3..b398560 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1620,6 +1620,9 @@ static inline void rq_last_tick_reset(struct rq *rq)

 static inline u64 sched_avg_period(void)
 {
+	if (unlikely(sysctl_sched_time_avg <= 0))
+		sysctl_sched_time_avg = MSEC_PER_SEC;
+
 	return (u64)sysctl_sched_time_avg * NSEC_PER_MSEC / 2;
 }

--
1.8.3.1


2017-09-01 12:32:43

by Peter Zijlstra

Subject: Re: [PATCH] sched: reset sysctl_sched_time_avg to default when

On Fri, Sep 01, 2017 at 07:31:54PM +0800, Ethan Zhao wrote:
> static inline u64 sched_avg_period(void)
> {
> +	if (unlikely(sysctl_sched_time_avg <= 0))
> +		sysctl_sched_time_avg = MSEC_PER_SEC;
> +

Sigh, no.. you can set limits in the sysctl table.
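
For reference, a minimal sketch of what that would look like (this is not
the patch as merged; 'one' is assumed to be the existing "static int one = 1;"
helper in kernel/sysctl.c): switch the sched_time_avg_ms entry to
proc_dointvec_minmax and give it a lower bound, so a write of 0 is rejected
with EINVAL before it can ever reach sched_avg_period().

	/* kernel/sysctl.c -- sketch only, based on the existing table entry */
	{
		.procname	= "sched_time_avg_ms",
		.data		= &sysctl_sched_time_avg,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,		/* reject writes below 1 ms */
	},

With a bound like this in place the sysctl write fails with "Invalid
argument" and the stored value never becomes 0, which matches the V2 test
output shown later in the thread.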

2017-09-02 00:33:06

by Ethan Zhao

Subject: Re: [PATCH] sched: reset sysctl_sched_time_avg to default when

Yep, that is the first place I considered setting the limit, but wouldn't
that break KABI?

Thanks,
Ethan

On Fri, Sep 1, 2017 at 8:32 PM, Peter Zijlstra <[email protected]> wrote:
> Sigh, no.. you can set limits in the sysctl table.

2017-09-02 06:54:49

by Ethan Zhao

Subject: Re: [PATCH] sched: reset sysctl_sched_time_avg to default when

Peter,

V2 sent; it tested okay on stable 4.1, is refactored a little, and applies
to v4.13-rc5. Please review.

[root@ ~]# sysctl kernel.sched_time_avg_ms
kernel.sched_time_avg_ms = 1000
[root@ ~]# sysctl kernel.sched_time_avg_ms=0
sysctl: setting key "kernel.sched_time_avg_ms": Invalid argument
kernel.sched_time_avg_ms = 0
[root@ ~]# sysctl kernel.sched_time_avg_ms
kernel.sched_time_avg_ms = 1000
[root@ ~]# sysctl kernel.sched_time_avg_ms=900
kernel.sched_time_avg_ms = 900
[root@ ~]# sysctl kernel.sched_time_avg_ms
kernel.sched_time_avg_ms = 900

Thanks,
Ethan

On Sat, Sep 2, 2017 at 8:33 AM, Ethan Zhao <[email protected]> wrote:
> Yep, that is the first place I considered setting the limit, but wouldn't
> that break KABI?
>
> Thanks,
> Ethan
>
> On Fri, Sep 1, 2017 at 8:32 PM, Peter Zijlstra <[email protected]> wrote:
>> Sigh, no.. you can set limits in the sysctl table.

2017-09-04 07:32:04

by Peter Zijlstra

Subject: Re: [PATCH] sched: reset sysctl_sched_time_avg to default when

On Sat, Sep 02, 2017 at 08:33:03AM +0800, Ethan Zhao wrote:
> Yep, that is the first place I considered setting the limit, but wouldn't
> that break KABI?

nah..

2017-09-04 07:43:13

by Ethan Zhao

Subject: Re: [PATCH] sched: reset sysctl_sched_time_avg to default when

Peter,


On 2017/9/4 15:32, Peter Zijlstra wrote:
> On Sat, Sep 02, 2017 at 08:33:03AM +0800, Ethan Zhao wrote:
>> Yep, that is the first place I considered setting the limit, but wouldn't
>> that break KABI?
> nah..
  V4 sent, please ignore V2 and V3.

  Thanks,
  Ethan