Hi,
I came across an interesting bug report on Bugzilla [1]. The reporter
wrote:
> I am running an Intel Alder Lake system (Core i7-1260P), with a mix of P and E cores.
>
> Since Linux 6.6, and also on the current 6.7 RC, the scheduler seems to have a strong preference for the E cores, and single-threaded workloads are consistently scheduled on one of the E cores.
>
> With Linux 6.4 and before, when I ran a single-threaded CPU-bound process, it was scheduled on a P core. With 6.5, the choice of P or E seemed rather random.
>
> I tested these by running "stress" with different numbers of threads. With a single thread on Linux 6.6 and 6.7, I always have an E core at 100% and no load on the P cores. Starting from 3 threads I get some load on the P cores as well, but the E cores stay more heavily loaded.
> With "taskset" I can force a process to run on a P core, but clearly it's not very practical to have to do CPU scheduling manually.
>
> This severely affects the single-threaded performance of my CPU, since the E cores are considerably slower. Several of my workflows are now a lot slower because they are single-threaded, heavily CPU-bound, and now scheduled on E cores whereas they would run on P cores before.
>
> I am not sure what the exact desired behaviour is here, to balance power consumption and performance, but currently my P cores are barely used for single-threaded workloads.
>
> Is this intended behaviour or is this indeed a regression? Or is there perhaps any configuration that I should have done from my side? Is there any further info that I can provide to help you figure out what's going on?
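For reference, the reproduction steps the reporter describes boil down to
something like this (a sketch, assuming the "stress" and util-linux
"taskset" utilities; treating CPU 0 as a P-core is an assumption, the
numbering is machine-specific):

  # Run a single CPU-bound thread and watch which CPU it lands on,
  # e.g. with: watch -n1 'ps -o psr=,comm= -C stress'
  $ stress --cpu 1 --timeout 60

  # Pin the same workload to a (presumed) P-core to compare throughput:
  $ taskset -c 0 stress --cpu 1 --timeout 60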
PM and scheduler people, is this a regression, or does it work as intended?
Thanks.
[1]: https://bugzilla.kernel.org/show_bug.cgi?id=218195
--
An old man doll... just what I always wanted! - Clara
On Tue, Nov 28, 2023 at 08:22:27PM +0700, Bagas Sanjaya wrote:
> Hi,
>
> I came across an interesting bug report on Bugzilla [1]. The reporter
> wrote:
Thanks for forwarding, what happened in bugzilla stays in bugzilla etc..
Did you perchance Cc the reporter?
> > I am running an Intel Alder Lake system (Core i7-1260P), with a mix
> > of P and E cores.
> >
> > Since Linux 6.6, and also on the current 6.7 RC, the scheduler seems
> > to have a strong preference for the E cores, and single-threaded
> > workloads are consistently scheduled on one of the E cores.
> >
> > With Linux 6.4 and before, when I ran a single-threaded CPU-bound
> > process, it was scheduled on a P core. With 6.5, the choice of
> > P or E seemed rather random.
> >
> > I tested these by running "stress" with different numbers of
> > threads. With a single thread on Linux 6.6 and 6.7, I always have an
> > E core at 100% and no load on the P cores. Starting from 3 threads I
> > get some load on the P cores as well, but the E cores stay more
> > heavily loaded. With "taskset" I can force a process to run on a P
> > core, but clearly it's not very practical to have to do CPU
> > scheduling manually.
> >
> > This severely affects the single-threaded performance of my CPU,
> > since the E cores are considerably slower. Several of my workflows
> > are now a lot slower because they are single-threaded, heavily
> > CPU-bound, and now scheduled on E cores whereas they would run on
> > P cores before.
> >
> > I am not sure what the exact desired behaviour is here, to balance
> > power consumption and performance, but currently my P cores are
> > barely used for single-threaded workloads.
> >
> > Is this intended behaviour or is this indeed a regression? Or is
> > there perhaps any configuration that I should have done from my
> > side? Is there any further info that I can provide to help you
> > figure out what's going on?
>
> PM and scheduler people, is this a regression, or does it work as intended?
AFAIK that is supposed to be steered by the ITMT muck and I don't think
we changed that.
Ricardo?
>
> Thanks.
>
> [1]: https://bugzilla.kernel.org/show_bug.cgi?id=218195
>
> --
> An old man doll... just what I always wanted! - Clara
On 11/28/23 21:02, Peter Zijlstra wrote:
> On Tue, Nov 28, 2023 at 08:22:27PM +0700, Bagas Sanjaya wrote:
>> Hi,
>>
>> I came across an interesting bug report on Bugzilla [1]. The reporter
>> wrote:
>
> Thanks for forwarding, what happened in bugzilla stays in bugzilla etc..
>
> Did you perchance Cc the reporter?
>
Yes, I CC'ed him.
--
An old man doll... just what I always wanted! - Clara
On Tue, 2023-11-28 at 20:22 +0700, Bagas Sanjaya wrote:
> Hi,
>
> I came across an interesting bug report on Bugzilla [1]. The reporter
> wrote:
>
> > I am running an Intel Alder Lake system (Core i7-1260P), with a mix of P and E cores.
> >
> > Since Linux 6.6, and also on the current 6.7 RC, the scheduler seems to have a strong preference for the E cores, and single-threaded workloads are consistently scheduled on one of the E cores.
> >
> > With Linux 6.4 and before, when I ran a single-threaded CPU-bound process, it was scheduled on a P core. With 6.5, the choice of P or E seemed rather random.
> >
> > I tested these by running "stress" with different numbers of threads. With a single thread on Linux 6.6 and 6.7, I always have an E core at 100% and no load on the P cores. Starting from 3 threads I get some load on the P cores as well, but the E cores stay more heavily loaded.
> > With "taskset" I can force a process to run on a P core, but clearly it's not very practical to have to do CPU scheduling manually.
> >
> > This severely affects the single-threaded performance of my CPU, since the E cores are considerably slower. Several of my workflows are now a lot slower because they are single-threaded, heavily CPU-bound, and now scheduled on E cores whereas they would run on P cores before.
> >
> > I am not sure what the exact desired behaviour is here, to balance power consumption and performance, but currently my P cores are barely used for single-threaded workloads.
> >
> > Is this intended behaviour or is this indeed a regression? Or is there perhaps any configuration that I should have done from my side? Is there any further info that I can provide to help you figure out what's going on?
>
> PM and scheduler people, is this a regression, or does it work as intended?
>
> Thanks.
>
> [1]: https://bugzilla.kernel.org/show_bug.cgi?id=218195
>
I have noticed that the current code is sometimes quite trigger-happy
about moving tasks off a P-core whenever there are more than 2 tasks
on that core. As a result, short-running housekeeping tasks can
sometimes disturb the task running on the P-core.
Can you try the following patch? On my Alder Lake system with this
patch on 6.6, as I add single-threaded tasks they first run on the
P-cores and only then spill over to the E-cores.
Tim
From 68a15ef01803c252261ebb47d86dfc1f2c68ae1e Mon Sep 17 00:00:00 2001
From: Tim Chen <[email protected]>
Date: Fri, 6 Oct 2023 15:58:56 -0700
Subject: [PATCH] sched/fair: Don't force smt balancing when CPU has spare
capacity
Currently group_smt_balance is picked whenever there are more
than two tasks on a core with two SMT siblings. However, the
utilization of those tasks may be low and may not warrant a task
migration to a CPU of lower priority.
Adjust sched group classification and sibling_imbalance()
to reflect this consideration. Use sibling_imbalance() to
compute the imbalance in calculate_imbalance() for the
group_smt_balance case.
Signed-off-by: Tim Chen <[email protected]>
---
kernel/sched/fair.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ef7490c4b8b4..7dd7c2d2367a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9460,14 +9460,15 @@ group_type group_classify(unsigned int imbalance_pct,
if (sgs->group_asym_packing)
return group_asym_packing;
- if (sgs->group_smt_balance)
- return group_smt_balance;
-
if (sgs->group_misfit_task_load)
return group_misfit_task;
- if (!group_has_capacity(imbalance_pct, sgs))
- return group_fully_busy;
+ if (!group_has_capacity(imbalance_pct, sgs)) {
+ if (sgs->group_smt_balance)
+ return group_smt_balance;
+ else
+ return group_fully_busy;
+ }
return group_has_spare;
}
@@ -9573,6 +9574,11 @@ static inline long sibling_imbalance(struct lb_env *env,
if (env->idle == CPU_NOT_IDLE || !busiest->sum_nr_running)
return 0;
+ /* Do not pull tasks off preferred group with spare capacity */
+ if (busiest->group_type == group_has_spare &&
+ sched_asym_prefer(sds->busiest->asym_prefer_cpu, env->dst_cpu))
+ return 0;
+
ncores_busiest = sds->busiest->cores;
ncores_local = sds->local->cores;
@@ -10411,13 +10417,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
return;
}
- if (busiest->group_type == group_smt_balance) {
- /* Reduce number of tasks sharing CPU capacity */
- env->migration_type = migrate_task;
- env->imbalance = 1;
- return;
- }
-
if (busiest->group_type == group_imbalanced) {
/*
* In the group_imb case we cannot rely on group-wide averages
--
2.32.0
(Sending again since I accidentally sent my last mail as HTML.)
I applied the patch on top of 6.6.2, but unfortunately I see more or less the same behaviour as before, with single-threaded CPU-bound tasks running almost exclusively on E cores.
Ramses
Nov 28, 2023, 18:39 by [email protected]:
> On Tue, 2023-11-28 at 20:22 +0700, Bagas Sanjaya wrote:
>
>> Hi,
>>
>> I came across an interesting bug report on Bugzilla [1]. The reporter
>> wrote:
>>
>> > I am running an Intel Alder Lake system (Core i7-1260P), with a mix of P and E cores.
>> >
>> > Since Linux 6.6, and also on the current 6.7 RC, the scheduler seems to have a strong preference for the E cores, and single-threaded workloads are consistently scheduled on one of the E cores.
>> >
>> > With Linux 6.4 and before, when I ran a single-threaded CPU-bound process, it was scheduled on a P core. With 6.5, the choice of P or E seemed rather random.
>> >
>> > I tested these by running "stress" with different numbers of threads. With a single thread on Linux 6.6 and 6.7, I always have an E core at 100% and no load on the P cores. Starting from 3 threads I get some load on the P cores as well, but the E cores stay more heavily loaded.
>> > With "taskset" I can force a process to run on a P core, but clearly it's not very practical to have to do CPU scheduling manually.
>> >
>> > This severely affects the single-threaded performance of my CPU, since the E cores are considerably slower. Several of my workflows are now a lot slower because they are single-threaded, heavily CPU-bound, and now scheduled on E cores whereas they would run on P cores before.
>> >
>> > I am not sure what the exact desired behaviour is here, to balance power consumption and performance, but currently my P cores are barely used for single-threaded workloads.
>> >
>> > Is this intended behaviour or is this indeed a regression? Or is there perhaps any configuration that I should have done from my side? Is there any further info that I can provide to help you figure out what's going on?
>>
>> PM and scheduler people, is this a regression, or does it work as intended?
>>
>> Thanks.
>>
>> [1]: https://bugzilla.kernel.org/show_bug.cgi?id=218195
>>
>
> I have noticed that the current code is sometimes quite trigger-happy
> about moving tasks off a P-core whenever there are more than 2 tasks
> on that core. As a result, short-running housekeeping tasks can
> sometimes disturb the task running on the P-core.
>
> Can you try the following patch? On my Alder Lake system with this
> patch on 6.6, as I add single-threaded tasks they first run on the
> P-cores and only then spill over to the E-cores.
>
> Tim
>
> From 68a15ef01803c252261ebb47d86dfc1f2c68ae1e Mon Sep 17 00:00:00 2001
> From: Tim Chen <[email protected]>
> Date: Fri, 6 Oct 2023 15:58:56 -0700
> Subject: [PATCH] sched/fair: Don't force smt balancing when CPU has spare
> capacity
>
> Currently group_smt_balance is picked whenever there are more
> than two tasks on a core with two SMT siblings. However, the
> utilization of those tasks may be low and may not warrant a task
> migration to a CPU of lower priority.
>
> Adjust sched group classification and sibling_imbalance()
> to reflect this consideration. Use sibling_imbalance() to
> compute the imbalance in calculate_imbalance() for the
> group_smt_balance case.
>
> Signed-off-by: Tim Chen <[email protected]>
>
> ---
> kernel/sched/fair.c | 23 +++++++++++------------
> 1 file changed, 11 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ef7490c4b8b4..7dd7c2d2367a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9460,14 +9460,15 @@ group_type group_classify(unsigned int imbalance_pct,
> if (sgs->group_asym_packing)
> return group_asym_packing;
>
> - if (sgs->group_smt_balance)
> - return group_smt_balance;
> -
> if (sgs->group_misfit_task_load)
> return group_misfit_task;
>
> - if (!group_has_capacity(imbalance_pct, sgs))
> - return group_fully_busy;
> + if (!group_has_capacity(imbalance_pct, sgs)) {
> + if (sgs->group_smt_balance)
> + return group_smt_balance;
> + else
> + return group_fully_busy;
> + }
>
> return group_has_spare;
> }
> @@ -9573,6 +9574,11 @@ static inline long sibling_imbalance(struct lb_env *env,
> if (env->idle == CPU_NOT_IDLE || !busiest->sum_nr_running)
> return 0;
>
> + /* Do not pull tasks off preferred group with spare capacity */
> + if (busiest->group_type == group_has_spare &&
> + sched_asym_prefer(sds->busiest->asym_prefer_cpu, env->dst_cpu))
> + return 0;
> +
> ncores_busiest = sds->busiest->cores;
> ncores_local = sds->local->cores;
>
> @@ -10411,13 +10417,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> return;
> }
>
> - if (busiest->group_type == group_smt_balance) {
> - /* Reduce number of tasks sharing CPU capacity */
> - env->migration_type = migrate_task;
> - env->imbalance = 1;
> - return;
> - }
> -
> if (busiest->group_type == group_imbalanced) {
> /*
> * In the group_imb case we cannot rely on group-wide averages
> --
> 2.32.0
>
On Tue, 2023-11-28 at 23:33 +0100, Ramses wrote:
> I applied the patch on top of 6.6.2, but unfortunately I see more or less the same behaviour as before, with single-threaded CPU-bound tasks running almost exclusively on E cores.
>
> Ramses
I suspect that you may have other issues. I wonder if CPU priorities
are being assigned properly on your system.
I saw in the original bugzilla
https://bugzilla.kernel.org/show_bug.cgi?id=218195
that you don't see /proc/sys/kernel/sched_itmt_enabled, which
may be a symptom of such a problem.
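A quick way to check, assuming the paths current kernels expose (as I
understand it, ITMT priorities on Intel hybrid parts are derived from
CPPC highest_perf, so the P-cores should report higher values than the
E-cores):

  # Should exist and read 1 when ITMT is active:
  $ cat /proc/sys/kernel/sched_itmt_enabled

  # Per-CPU CPPC performance levels, if the firmware provides _CPC:
  $ grep . /sys/devices/system/cpu/cpu*/acpi_cppc/highest_perf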
+Srinivas, is there something Ramses can do to help
find out if there are issues with CPPC?
Tim
Nov 29, 2023, 00:10 by [email protected]:
> On Tue, 2023-11-28 at 23:33 +0100, Ramses wrote:
>
>> I applied the patch on top of 6.6.2, but unfortunately I see more or less the same behaviour as before, with single-threaded CPU-bound tasks running almost exclusively on E cores.
>>
>> Ramses
>>
>
> I suspect that you may have other issues. I wonder if CPU priorities
> are being assigned properly on your system.
>
> I saw in the original bugzilla
> https://bugzilla.kernel.org/show_bug.cgi?id=218195
> that you don't see /proc/sys/kernel/sched_itmt_enabled, which
> may be a symptom of such a problem.
>
> +Srinivas, is there something Ramses can do to help
> find out if there are issues with CPPC?
>
> Tim
>
Yeah, I'm getting the impression that something is going wrong on my system. AFAIU, ITMT is supposed to be auto-detected and shouldn't require additional configuration?
I have the Intel ME disabled on my CPU (it came like that from the OEM); I don't know if that can have an effect?
Let me know if there's any additional info that I can provide.
Ramses
On Tue, 2023-11-28 at 15:10 -0800, Tim Chen wrote:
> On Tue, 2023-11-28 at 23:33 +0100, Ramses wrote:
>
> > I applied the patch on top of 6.6.2, but unfortunately I see more
> > or less the same behaviour as before, with single-threaded CPU-
> > bound tasks running almost exclusively on E cores.
> >
> > Ramses
>
> I suspect that you may have other issues. I wonder if CPU priorities
> are being assigned properly on your system.
>
> I saw in the original bugzilla
> https://bugzilla.kernel.org/show_bug.cgi?id=218195
> that you don't see /proc/sys/kernel/sched_itmt_enabled, which
> may be a symptom of such a problem.
>
> +Srinivas, is there something Ramses can do to help
> find out if there are issues with CPPC?
I have updated the bugzilla with the findings. The ACPI config on this
system is telling us that CPPC v2 is not supported, and the current
implementation depends on CPPC v2.
Even in the 6.4 kernel, ITMT is not enabled.
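For reference, one way to confirm what the firmware advertises (a
sketch, assuming the acpica-tools utilities are installed; in a _CPC
package the second element is the revision, which should be >= 2 for
CPPC v2):

  $ acpidump -o tables.out
  $ acpixtract -a tables.out
  $ iasl -d dsdt.dat ssdt*.dat   # disassemble the extracted tables
  $ grep -n '_CPC' *.dsl         # _CPC usually lives in the DSDT or an SSDT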
Thanks,
Srinivas
>
> Tim
>
On Tue, Nov 28, 2023 at 03:02:25PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 28, 2023 at 08:22:27PM +0700, Bagas Sanjaya wrote:
> > Hi,
> >
> > I came across an interesting bug report on Bugzilla [1]. The reporter
> > wrote:
>
> Thanks for forwarding, what happened in bugzilla stays in bugzilla etc..
>
> Did you perchance Cc the reporter?
>
> > > I am running an Intel Alder Lake system (Core i7-1260P), with a mix
> > > of P and E cores.
> > >
> > > Since Linux 6.6, and also on the current 6.7 RC, the scheduler seems
> > > to have a strong preference for the E cores, and single-threaded
> > > workloads are consistently scheduled on one of the E cores.
> > >
> > > With Linux 6.4 and before, when I ran a single-threaded CPU-bound
> > > process, it was scheduled on a P core. With 6.5, the choice of
> > > P or E seemed rather random.
> > >
> > > I tested these by running "stress" with different numbers of
> > > threads. With a single thread on Linux 6.6 and 6.7, I always have an
> > > E core at 100% and no load on the P cores. Starting from 3 threads I
> > > get some load on the P cores as well, but the E cores stay more
> > > heavily loaded. With "taskset" I can force a process to run on a P
> > > core, but clearly it's not very practical to have to do CPU
> > > scheduling manually.
> > >
> > > This severely affects the single-threaded performance of my CPU,
> > > since the E cores are considerably slower. Several of my workflows
> > > are now a lot slower because they are single-threaded, heavily
> > > CPU-bound, and now scheduled on E cores whereas they would run on
> > > P cores before.
> > >
> > > I am not sure what the exact desired behaviour is here, to balance
> > > power consumption and performance, but currently my P cores are
> > > barely used for single-threaded workloads.
> > >
> > > Is this intended behaviour or is this indeed a regression? Or is
> > > there perhaps any configuration that I should have done from my
> > > side? Is there any further info that I can provide to help you
> > > figure out what's going on?
> >
> > PM and scheduler people, is this a regression, or does it work as intended?
>
> AFAIK that is supposed to be steered by the ITMT muck and I don't think
> we changed that.
>
> Ricardo?
Sorry for the late reply. This email was buried in a ton of email. To
complete the report here: Srinivas helped to debug the issue. The problem
is that the computer in question lacks the necessary ACPI support to use
ITMT. A new firmware release appears to have solved the issue.
Thanks and BR,
Ricardo