Following the comment in [1], I prepared a patch that removes UTIL_EST_FASTUP.
This enables us to simplify the util_est behavior, as proposed in patch 2.
[1] https://lore.kernel.org/lkml/CAKfTPtCAZWp7tRgTpwJmyEAkyN65acmYrfu9naEUpBZVWNTcQA@mail.gmail.com/
Vincent Guittot (2):
sched/fair: Remove SCHED_FEAT(UTIL_EST_FASTUP, true)
sched/fair: Simplify util_est
Documentation/scheduler/schedutil.rst | 7 +--
include/linux/sched.h | 35 ++----------
kernel/sched/debug.c | 7 +--
kernel/sched/fair.c | 81 ++++++++++-----------------
kernel/sched/features.h | 1 -
kernel/sched/pelt.h | 4 +-
6 files changed, 43 insertions(+), 92 deletions(-)
--
2.34.1
Hi Vincent,
On 11/27/23 14:32, Vincent Guittot wrote:
> Following the comment in [1], I prepared a patch that removes UTIL_EST_FASTUP.
> This enables us to simplify the util_est behavior, as proposed in patch 2.
>
> [1] https://lore.kernel.org/lkml/CAKfTPtCAZWp7tRgTpwJmyEAkyN65acmYrfu9naEUpBZVWNTcQA@mail.gmail.com/
>
> Vincent Guittot (2):
> sched/fair: Remove SCHED_FEAT(UTIL_EST_FASTUP, true)
> sched/fair: Simplify util_est
>
> Documentation/scheduler/schedutil.rst | 7 +--
> include/linux/sched.h | 35 ++----------
> kernel/sched/debug.c | 7 +--
> kernel/sched/fair.c | 81 ++++++++++-----------------
> kernel/sched/features.h | 1 -
> kernel/sched/pelt.h | 4 +-
> 6 files changed, 43 insertions(+), 92 deletions(-)
>
I recovered my Pixel 6 and applied these changes.
No power regression in Jankbench and no performance regression in GB5.
Better score in Chrome running Speedometer: +3..5%.
The code looks much cleaner without 'struct util_est'
(we will have to adjust our trace events to this change, but it's worth it).
Also, I was a bit surprised that UTIL_EST_FASTUP wasn't helping
that much compared to the new 'runnable' signal for the
underestimation corner case...
Reviewed-and-tested-by: Lukasz Luba <[email protected]>
On Thu, 30 Nov 2023 at 13:52, Lukasz Luba <[email protected]> wrote:
>
> Hi Vincent,
>
> On 11/27/23 14:32, Vincent Guittot wrote:
> > Following the comment in [1], I prepared a patch that removes UTIL_EST_FASTUP.
> > This enables us to simplify the util_est behavior, as proposed in patch 2.
> >
> > [1] https://lore.kernel.org/lkml/CAKfTPtCAZWp7tRgTpwJmyEAkyN65acmYrfu9naEUpBZVWNTcQA@mail.gmail.com/
> >
> > Vincent Guittot (2):
> > sched/fair: Remove SCHED_FEAT(UTIL_EST_FASTUP, true)
> > sched/fair: Simplify util_est
> >
> > Documentation/scheduler/schedutil.rst | 7 +--
> > include/linux/sched.h | 35 ++----------
> > kernel/sched/debug.c | 7 +--
> > kernel/sched/fair.c | 81 ++++++++++-----------------
> > kernel/sched/features.h | 1 -
> > kernel/sched/pelt.h | 4 +-
> > 6 files changed, 43 insertions(+), 92 deletions(-)
> >
>
> I recovered my Pixel 6 and applied these changes.
> No power regression in Jankbench and no performance regression in GB5.
> Better score in Chrome running Speedometer: +3..5%.
Thanks for testing
>
> The code looks much cleaner without 'struct util_est'
> (we will have to adjust our trace events to this change, but it's worth it).
Same for me
>
> Also, I was a bit surprised that UTIL_EST_FASTUP wasn't helping
> that much compared to the new 'runnable' signal for the
> underestimation corner case...
>
> Reviewed-and-tested-by: Lukasz Luba <[email protected]>
Thanks