2021-06-17 21:40:24

by Suren Baghdasaryan

Subject: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

The psi polling mechanism tries to minimize the number of wakeups needed to
run psi_poll_work and currently relies on timer_pending() to detect when
this work is already scheduled. This leaves a window of opportunity for
psi_group_change to schedule an immediate psi_poll_work after poll_timer_fn
has been called but before psi_poll_work can reschedule itself. Below is a
depiction of this entire window:

poll_timer_fn
  wake_up_interruptible(&group->poll_wait);

psi_poll_worker
  wait_event_interruptible(group->poll_wait, ...)
  psi_poll_work
    psi_schedule_poll_work
      if (timer_pending(&group->poll_timer)) return;
      ...
      mod_timer(&group->poll_timer, jiffies + delay);

Prior to 461daba06bdc we relied on the poll_scheduled atomic, which was
reset and set back inside psi_poll_work, so this race window was much
smaller.
The larger window causes an increased number of wakeups, and our partners
report a visible power regression of ~10mA after applying 461daba06bdc.
Bring back the poll_scheduled atomic and make this race window even
narrower by resetting poll_scheduled only when we reach the polling
expiration time. This does not completely eliminate the possibility of
extra wakeups caused by a race with psi_group_change; however, it limits
them to the worst-case scenario of one extra wakeup per tracking window
(0.5s in the worst case).
By tracing the number of immediate rescheduling attempts performed by
psi_group_change and the number of those attempts blocked because the psi
monitor is already active, we can assess the effects of this change:

Before the patch:
                                           Run#1    Run#2    Run#3
Immediate reschedules attempted:          684365  1385156  1261240
Immediate reschedules blocked:            682846  1381654  1258682
Immediate reschedules (delta):              1519     3502     2558
Immediate reschedules (% of attempted):    0.22%    0.25%    0.20%

After the patch:
                                           Run#1    Run#2    Run#3
Immediate reschedules attempted:          882244   770298   426218
Immediate reschedules blocked:            881996   769796   426074
Immediate reschedules (delta):               248      502      144
Immediate reschedules (% of attempted):    0.03%    0.07%    0.03%

The number of non-blocked immediate reschedules dropped from 0.22-0.25%
to 0.03-0.07%. The drop is attributed to the smaller race window and to
the fact that we allow this race only when psi monitors reach the polling
window expiration time.

Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
Reported-by: Kathleen Chang <[email protected]>
Reported-by: Wenju Xu <[email protected]>
Reported-by: Jonathan Chen <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
---
 include/linux/psi_types.h |  1 +
 kernel/sched/psi.c        | 41 ++++++++++++++++++++++++++++-----------
 2 files changed, 31 insertions(+), 11 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 0a23300d49af..ef8bd89d065e 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -158,6 +158,7 @@ struct psi_group {
struct timer_list poll_timer;
wait_queue_head_t poll_wait;
atomic_t poll_wakeup;
+ atomic_t poll_scheduled;

/* Protects data used by the monitor */
struct mutex trigger_lock;
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index cc25a3cff41f..fed7c9c2b276 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -193,6 +193,7 @@ static void group_init(struct psi_group *group)
INIT_DELAYED_WORK(&group->avgs_work, psi_avgs_work);
mutex_init(&group->avgs_lock);
/* Init trigger-related members */
+ atomic_set(&group->poll_scheduled, 0);
mutex_init(&group->trigger_lock);
INIT_LIST_HEAD(&group->triggers);
memset(group->nr_triggers, 0, sizeof(group->nr_triggers));
@@ -551,18 +552,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
return now + group->poll_min_period;
}

-/* Schedule polling if it's not already scheduled. */
-static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
+/* Schedule polling if it's not already scheduled or forced. */
+static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
+ bool force)
{
struct task_struct *task;

- /*
- * Do not reschedule if already scheduled.
- * Possible race with a timer scheduled after this check but before
- * mod_timer below can be tolerated because group->polling_next_update
- * will keep updates on schedule.
- */
- if (timer_pending(&group->poll_timer))
+ /* cmpxchg should be called even when !force to set poll_scheduled */
+ if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
return;

rcu_read_lock();
@@ -574,12 +571,15 @@ static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
*/
if (likely(task))
mod_timer(&group->poll_timer, jiffies + delay);
+ else
+ atomic_set(&group->poll_scheduled, 0);

rcu_read_unlock();
}

static void psi_poll_work(struct psi_group *group)
{
+ bool force_reschedule = false;
u32 changed_states;
u64 now;

@@ -587,6 +587,23 @@ static void psi_poll_work(struct psi_group *group)

now = sched_clock();

+ if (now > group->polling_until) {
+ /*
+ * We are either about to start or might stop polling if no
+ * state change was recorded. Resetting poll_scheduled leaves
+ * a small window for psi_group_change to sneak in and schedule
+ * an immegiate poll_work before we get to rescheduling. One
+ * potential extra wakeup at the end of the polling window
+ * should be negligible and polling_next_update still keeps
+ * updates correctly on schedule.
+ */
+ atomic_set(&group->poll_scheduled, 0);
+ } else {
+ /* Polling window is not over, keep rescheduling */
+ force_reschedule = true;
+ }
+
+
collect_percpu_times(group, PSI_POLL, &changed_states);

if (changed_states & group->poll_states) {
@@ -612,7 +629,8 @@ static void psi_poll_work(struct psi_group *group)
group->polling_next_update = update_triggers(group, now);

psi_schedule_poll_work(group,
- nsecs_to_jiffies(group->polling_next_update - now) + 1);
+ nsecs_to_jiffies(group->polling_next_update - now) + 1,
+ force_reschedule);

out:
mutex_unlock(&group->trigger_lock);
@@ -736,7 +754,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
write_seqcount_end(&groupc->seq);

if (state_mask & group->poll_states)
- psi_schedule_poll_work(group, 1);
+ psi_schedule_poll_work(group, 1, false);

if (wake_clock && !delayed_work_pending(&group->avgs_work))
schedule_delayed_work(&group->avgs_work, PSI_FREQ);
@@ -1235,6 +1253,7 @@ static void psi_trigger_destroy(struct kref *ref)
*/
del_timer_sync(&group->poll_timer);
kthread_stop(task_to_destroy);
+ atomic_set(&group->poll_scheduled, 0);
}
kfree(t);
}
--
2.32.0.288.g62a8d224e6-goog


2021-06-21 12:37:44

by YT Chang

Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Thu, 2021-06-17 at 14:26 -0700, Suren Baghdasaryan wrote:
> [...]


The regression power data points of an Android phone in home screen idle:
Original: baseline
Original + patch: -21.5% (-11.5mA)
Tested-by: SH Chen <[email protected]>


2021-06-21 17:51:18

by Suren Baghdasaryan

Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Mon, Jun 21, 2021 at 5:31 AM YT Chang <[email protected]> wrote:
>
> On Thu, 2021-06-17 at 14:26 -0700, Suren Baghdasaryan wrote:
> > [...]
>
> The regression power data points of an Android phone in home screen idle:
> Original: baseline
> Original + patch: -21.5% (-11.5mA)

Ok, so this fixes the regression and then some (the original regression
was +10mA and with the patch we get -11.5mA). This must be the
contribution of the additional logic I added to avoid resetting
poll_scheduled when we know that the polling window is not yet over (the
line with the /* Polling window is not over, keep rescheduling */
comment).

> Tested-by: SH Chen <[email protected]>

Thanks!


2021-06-22 13:57:18

by Peter Zijlstra

Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Thu, Jun 17, 2021 at 02:26:54PM -0700, Suren Baghdasaryan wrote:
> [...]
>
> Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
> Reported-by: Kathleen Chang <[email protected]>
> Reported-by: Wenju Xu <[email protected]>
> Reported-by: Jonathan Chen <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>

Johannes?


> -/* Schedule polling if it's not already scheduled. */
> -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> +/* Schedule polling if it's not already scheduled or forced. */
> +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> + bool force)
> {
> struct task_struct *task;
>
> - /*
> - * Do not reschedule if already scheduled.
> - * Possible race with a timer scheduled after this check but before
> - * mod_timer below can be tolerated because group->polling_next_update
> - * will keep updates on schedule.
> - */
> - if (timer_pending(&group->poll_timer))
> + /* cmpxchg should be called even when !force to set poll_scheduled */
> + if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)

Do you care about memory ordering here? Afaict the whole thing is
supposed to be ordered by ->trigger_lock, so you don't.

However, if you did, the code seems to suggest you need RELEASE ordering,
but cmpxchg() as used above can fail, in which case it doesn't provide
anything.

Also, I think the more conventional way to write that might be:

if (atomic_xchg_relaxed(&group->poll_scheduled, 1) && !force)

Hmm?
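
Spelled out side by side, the two forms under discussion would look like
this (an illustrative sketch only, not part of the posted patch):

	/*
	 * As posted: a conditional RMW, fully ordered only when the
	 * exchange succeeds; a failed cmpxchg (polling already scheduled)
	 * implies no ordering at all.
	 */
	if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
		return;

	/*
	 * Suggested alternative: poll_scheduled only ever holds 0 or 1, so
	 * an unconditional xchg returns the old value just as well, and the
	 * _relaxed variant states explicitly that no ordering is relied
	 * upon here.
	 */
	if (atomic_xchg_relaxed(&group->poll_scheduled, 1) && !force)
		return;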

2021-06-22 15:10:11

by Johannes Weiner

Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Tue, Jun 22, 2021 at 03:56:03PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 17, 2021 at 02:26:54PM -0700, Suren Baghdasaryan wrote:
> > Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
> > Reported-by: Kathleen Chang <[email protected]>
> > Reported-by: Wenju Xu <[email protected]>
> > Reported-by: Jonathan Chen <[email protected]>
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
>
> Johannes?

Looks generally good to me, I'd just want to get to the bottom of the
memory ordering before acking...

> > -/* Schedule polling if it's not already scheduled. */
> > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > +/* Schedule polling if it's not already scheduled or forced. */
> > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > + bool force)
> > {
> > struct task_struct *task;
> >
> > - /*
> > - * Do not reschedule if already scheduled.
> > - * Possible race with a timer scheduled after this check but before
> > - * mod_timer below can be tolerated because group->polling_next_update
> > - * will keep updates on schedule.
> > - */
> > - if (timer_pending(&group->poll_timer))
> > + /* cmpxchg should be called even when !force to set poll_scheduled */
> > + if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)
>
> Do you care about memory ordering here? Afaict the whole thing is
> supposed to be ordered by ->trigger_lock, so you don't.

It's not always held when we get here.

The worker holds it when it reschedules itself, to serialize against
userspace destroying the trigger itself. But the scheduler doesn't
hold it when it kicks the worker on an actionable task change.

That said, I think the ordering we care about there is that when the
scheduler side sees the worker still queued, the worker must see the
scheduler's updates to the percpu states and process them correctly.
But that should be ensured already by the ordering implied by the
seqcount sections around both the writer and the reader side.
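
For reference, a minimal sketch of that pairing, based on the existing psi
seqcount usage (illustrative only, not something this patch changes):

	/* Scheduler side (psi_group_change), simplified: */
	write_seqcount_begin(&groupc->seq);
	/* ... update the per-cpu times and groupc->state_mask ... */
	write_seqcount_end(&groupc->seq);

	if (state_mask & group->poll_states)
		psi_schedule_poll_work(group, 1, false);

	/* Worker side (get_recent_times), simplified: */
	do {
		seq = read_seqcount_begin(&groupc->seq);
		/* ... snapshot the per-cpu times and state mask ... */
	} while (read_seqcount_retry(&groupc->seq, seq));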

Is there another possible race that I'm missing?

2021-06-22 15:52:52

by Suren Baghdasaryan

Subject: Re: [PATCH 1/1] psi: stop relying on timer_pending for poll_work rescheduling

On Tue, Jun 22, 2021 at 8:08 AM Johannes Weiner <[email protected]> wrote:
>
> On Tue, Jun 22, 2021 at 03:56:03PM +0200, Peter Zijlstra wrote:
> > On Thu, Jun 17, 2021 at 02:26:54PM -0700, Suren Baghdasaryan wrote:
> > > Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
> > > Reported-by: Kathleen Chang <[email protected]>
> > > Reported-by: Wenju Xu <[email protected]>
> > > Reported-by: Jonathan Chen <[email protected]>
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> >
> > Johannes?
>
> Looks generally good to me, I'd just want to get to the bottom of the
> memory ordering before acking...
>
> > > -/* Schedule polling if it's not already scheduled. */
> > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > +/* Schedule polling if it's not already scheduled or forced. */
> > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > + bool force)
> > > {
> > > struct task_struct *task;
> > >
> > > - /*
> > > - * Do not reschedule if already scheduled.
> > > - * Possible race with a timer scheduled after this check but before
> > > - * mod_timer below can be tolerated because group->polling_next_update
> > > - * will keep updates on schedule.
> > > - */
> > > - if (timer_pending(&group->poll_timer))
> > > + /* cmpxchg should be called even when !force to set poll_scheduled */
> > > + if (atomic_cmpxchg(&group->poll_scheduled, 0, 1) != 0 && !force)

Will change to Peter's suggested "if
(atomic_cmpxchg(&group->poll_scheduled, 0, 1) && !force)" once the
ordering question is finalized.

> >
> > Do you care about memory ordering here? Afaict the whole thing is
> > supposed to be ordered by ->trigger_lock, so you don't.
>
> It's not always held when we get here.
>
> The worker holds it when it reschedules itself, to serialize against
> userspace destroying the trigger itself. But the scheduler doesn't
> hold it when it kicks the worker on an actionable task change.
>
> That said, I think the ordering we care about there is that when the
> scheduler side sees the worker still queued, the worker must see the
> scheduler's updates to the percpu states and process them correctly.
> But that should be ensured already by the ordering implied by the
> seqcount sections around both the writer and the reader side.

Thanks Johannes! I have nothing to add here really.

>
> Is there another possible race that I'm missing?