2020-10-15 11:07:19

by jun qian

Subject: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics

From: jun qian <[email protected]>

When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0,
which makes the computed delta,
rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start, bogus. We need
to avoid this scenario.

Signed-off-by: jun qian <[email protected]>
Reviewed-by: Yafang Shao <[email protected]>
---
kernel/sched/fair.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a68a05..6f8ca0c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ static void update_curr_fair(struct rq *rq)
if (!schedstat_enabled())
return;

+ /*
+ * When sched_schedstat changes from 0 to 1, some sched entities
+ * may already be on the runqueue with se->statistics.wait_start
+ * still 0, which makes the computed delta bogus. Skip the sample
+ * in that case.
+ */
+ if (unlikely(!schedstat_val(se->statistics.wait_start)))
+ return;
+
delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);

if (entity_is_task(se)) {
--
1.8.3.1
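
To make the failure mode concrete, here is a minimal, standalone C
sketch (not kernel code; now_ns and wait_start_ns are hypothetical
stand-ins for rq_clock() and se->statistics.wait_start) showing what
the delta would look like without the guard:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical clock value at the moment the wait ends. */
	uint64_t now_ns = 123456789000ULL;

	/* Never stamped: schedstats were off when the entity was enqueued. */
	uint64_t wait_start_ns = 0;

	if (!wait_start_ns) {
		/* The patched kernel skips the sample instead of charging
		 * the entire clock value as wait time. */
		printf("wait_start is 0, skip this sample\n");
		return 0;
	}

	printf("wait delta: %llu ns\n",
	       (unsigned long long)(now_ns - wait_start_ns));
	return 0;
}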


2020-10-15 17:59:05

by Peter Zijlstra

Subject: Re: [PATCH 1/1] Sched/fair: Improve the accuracy of sched_stat_wait statistics

On Thu, Oct 15, 2020 at 02:48:46PM +0800, [email protected] wrote:
> From: jun qian <[email protected]>
>
> When sched_schedstat changes from 0 to 1, some sched entities may
> already be on the runqueue with se->statistics.wait_start still 0,
> which makes the computed delta,
> rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start, bogus. We need
> to avoid this scenario.
>
> Signed-off-by: jun qian <[email protected]>
> Reviewed-by: Yafang Shao <[email protected]>

Thanks!

2020-10-29 10:54:16

by tip-bot2 for Jacob Pan

Subject: [tip: sched/core] sched/fair: Improve the accuracy of sched_stat_wait statistics

The following commit has been merged into the sched/core branch of tip:

Commit-ID: b9c88f752268383beff0d56e50d52b8ae62a02f8
Gitweb: https://git.kernel.org/tip/b9c88f752268383beff0d56e50d52b8ae62a02f8
Author: jun qian <[email protected]>
AuthorDate: Thu, 15 Oct 2020 14:48:46 +08:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 29 Oct 2020 11:00:28 +01:00

sched/fair: Improve the accuracy of sched_stat_wait statistics

When sched_schedstat changes from 0 to 1, some sched entities may
already be on the runqueue with se->statistics.wait_start still 0,
which makes the computed delta,
rq_clock(rq_of(cfs_rq)) - se->statistics.wait_start, bogus. We need
to avoid this scenario.

Signed-off-by: jun qian <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Yafang Shao <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..b9368d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -906,6 +906,15 @@ update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
if (!schedstat_enabled())
return;

+ /*
+ * When sched_schedstat changes from 0 to 1, some sched entities
+ * may already be on the runqueue with se->statistics.wait_start
+ * still 0, which makes the computed delta bogus. Skip the sample
+ * in that case.
+ */
+ if (unlikely(!schedstat_val(se->statistics.wait_start)))
+ return;
+
delta = rq_clock(rq_of(cfs_rq)) - schedstat_val(se->statistics.wait_start);

if (entity_is_task(se)) {
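
Why wait_start can be 0 in the first place: the wait_start timestamp
is only stamped while schedstats are enabled, so an entity enqueued
while the sysctl was still 0 never receives one. Below is a simplified
paraphrase (not a verbatim copy of kernel/sched/fair.c, and omitting
the migration-time adjustment) of that side of the accounting:

/*
 * Simplified sketch: while schedstats are disabled, the early return
 * means wait_start is never set, so an entity enqueued before the
 * sysctl flip still carries wait_start == 0 when
 * update_stats_wait_end() later runs with schedstats enabled.
 */
static inline void
update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	if (!schedstat_enabled())
		return;

	__schedstat_set(se->statistics.wait_start, rq_clock(rq_of(cfs_rq)));
}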