proc_sched_show_task() does:

	if (nr_switches)
		do_div(avg_atom, nr_switches);
nr_switches is unsigned long, but do_div() truncates the divisor to 32 bits.
On a 64-bit architecture such as x86-64, nr_switches can therefore test as
non-zero yet truncate to zero for the division, triggering a divide by zero.

Fix the problem by using div64_ul() instead.

As a side effect, avg_atom is now calculated correctly for large values of
nr_switches.
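
For illustration only (not part of the patch), here is a minimal userspace
sketch of the failure mode; the (uint32_t) cast below merely models do_div()'s
32-bit divisor, it is not the kernel implementation:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t avg_atom    = 10ULL << 32;	/* stands in for sum_exec_runtime */
		uint64_t nr_switches = 1ULL << 32;	/* non-zero, but low 32 bits are 0 */

		/* do_div() would divide by this truncated value: */
		uint32_t truncated = (uint32_t)nr_switches;
		printf("truncated divisor: %u\n", truncated);	/* prints 0 */

		/* div64_ul() divides by the full-width value instead: */
		if (nr_switches)
			avg_atom = avg_atom / nr_switches;
		printf("avg_atom: %llu\n", (unsigned long long)avg_atom);	/* prints 10 */

		return 0;
	}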
Signed-off-by: Mateusz Guzik <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
---
kernel/sched/debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 695f977..627b3c3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 		avg_atom = p->se.sum_exec_runtime;
 		if (nr_switches)
-			do_div(avg_atom, nr_switches);
+			avg_atom = div64_ul(avg_atom, nr_switches);
 		else
 			avg_atom = -1LL;
--
1.8.3.1
On Sat, Jun 14, 2014 at 03:00:09PM +0200, Mateusz Guzik wrote:
> proc_sched_show_task() does:
>
> 	if (nr_switches)
> 		do_div(avg_atom, nr_switches);
>
> nr_switches is unsigned long, but do_div() truncates the divisor to 32 bits.
> On a 64-bit architecture such as x86-64, nr_switches can therefore test as
> non-zero yet truncate to zero for the division, triggering a divide by zero.
>
> Fix the problem by using div64_ul() instead.
>
> As a side effect, avg_atom is now calculated correctly for large values of
> nr_switches.
>
> Signed-off-by: Mateusz Guzik <[email protected]>
Thanks.
Commit-ID: b0ab99e7736af88b8ac1b7ae50ea287fffa2badc
Gitweb: http://git.kernel.org/tip/b0ab99e7736af88b8ac1b7ae50ea287fffa2badc
Author: Mateusz Guzik <[email protected]>
AuthorDate: Sat, 14 Jun 2014 15:00:09 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 16 Jul 2014 13:36:07 +0200
sched: Fix possible divide by zero in avg_atom() calculation
proc_sched_show_task() does:
	if (nr_switches)
		do_div(avg_atom, nr_switches);

nr_switches is unsigned long, but do_div() truncates the divisor to 32 bits.
On a 64-bit architecture such as x86-64, nr_switches can therefore test as
non-zero yet truncate to zero for the division, triggering a divide by zero.

Fix the problem by using div64_ul() instead.

As a side effect, avg_atom is now calculated correctly for large values of
nr_switches.
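
For reference, a simplified model of why the replacement helper is safe here;
this is an illustrative sketch of the 64-bit case only, not the verbatim
kernel definitions from include/linux/math64.h:

	/* Sketch: on a 64-bit build, unsigned long is already 64 bits wide,
	 * so dividing by it directly loses no bits, unlike do_div(), whose
	 * divisor is narrowed to a 32-bit quantity before the division. */
	typedef unsigned long long u64;

	static inline u64 sketch_div64_ul(u64 dividend, unsigned long divisor)
	{
		return dividend / divisor;	/* divisor keeps its full width */
	}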
Signed-off-by: Mateusz Guzik <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 695f977..627b3c3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -608,7 +608,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 		avg_atom = p->se.sum_exec_runtime;
 		if (nr_switches)
-			do_div(avg_atom, nr_switches);
+			avg_atom = div64_ul(avg_atom, nr_switches);
 		else
 			avg_atom = -1LL;