2019-10-19 08:26:56

by Vincent Guittot

Subject: [PATCH v4 00/11] sched/fair: rework the CFS load balance

Several wrong task placements have been reported with the current load
balance algorithm, but their fixes are not always straightforward and
end up using biased values to force migrations. A cleanup and rework
of the load balance will help to handle such use cases and make it
possible to fine-tune the behavior of the scheduler for other cases.

Patch 1 has already been sent separately; it only consolidates the asym
policy in one place and eases the review of the changes in load_balance().

Patch 2 renames the sum of h_nr_running in the stats structures.

Patch 3 removes the meaningless imbalance computation to make the review of
patch 4 easier.

Patch 4 reworks the load_balance() algorithm and fixes some wrong task
placements while trying to stay conservative (a simplified sketch of the
group classification it introduces follows this patch list).

Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes that
into account when pulling tasks.

Patch 6 replaces runnable_load with load now that the signal is only used
when the group is overloaded.

Patch 7 improves the spread of tasks at the 1st scheduling level.

Patch 8 uses utilization instead of load in all steps of the misfit task
path.

Patch 9 replaces runnable_load_avg with load_avg in the wake-up path.

Patch 10 optimizes find_idlest_group(), which was using both runnable_load
and load. It has not been squashed with the previous patch to ease the
review.

Patch 11 reworks find_idlest_group() to follow the same steps as
find_busiest_group().
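
For reference, the rework in patches 4 and 11 classifies each sched_group
with a group_type and compares groups by that type first. Here is a minimal
sketch of the ordering as it can be inferred from the comparisons in patch 11
quoted below; the actual enum is introduced by patch 4, which is not quoted
in this excerpt, so treat the exact comments as illustrative only:

enum group_type {
	/* The group has spare capacity and can take more tasks */
	group_has_spare = 0,
	/* The group is fully used but not overloaded */
	group_fully_busy,
	/* One CPU runs a task too big for its capacity */
	group_misfit_task,
	/* Tasks should be moved to the preferred CPU (asym packing) */
	group_asym_packing,
	/* The group is imbalanced because of affinity constraints */
	group_imbalanced,
	/* More runnable tasks than the group capacity can serve */
	group_overloaded
};

A lower value means a less busy group: update_pick_idlest() in patch 11
starts from group_overloaded and prefers any candidate with a strictly
lower group_type before looking at per-type tie-breaks.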

Some benchmark results based on 8 iterations of each test:
- small arm64 dual quad-core system

tip/sched/core w/ this patchset improvement
schedpipe 53125 +/-0.18% 53443 +/-0.52% (+0.60%)

hackbench -l (2560/#grp) -g #grp
1 groups 1.579 +/-29.16% 1.410 +/-13.46% (+10.70%)
4 groups 1.269 +/-9.69% 1.205 +/-3.27% (+5.00%)
8 groups 1.117 +/-1.51% 1.123 +/-1.27% (+4.57%)
16 groups 1.176 +/-1.76% 1.164 +/-2.42% (+1.07%)

Unixbench shell8
1 test 1963.48 +/-0.36% 1902.88 +/-0.73% (-3.09%)
224 tests 2427.60 +/-0.20% 2469.80 +/-0.42% (+1.74%)

- large arm64 2-node / 224-core system

tip/sched/core w/ this patchset improvement
schedpipe 124084 +/-1.36% 124445 +/-0.67% (+0.29%)

hackbench -l (256000/#grp) -g #grp
1 groups 15.305 +/-1.50% 14.001 +/-1.99% (+8.52%)
4 groups 5.959 +/-0.70% 5.542 +/-3.76% (+6.99%)
16 groups 3.120 +/-1.72% 3.253 +/-0.61% (-4.92%)
32 groups 2.911 +/-0.88% 2.837 +/-1.16% (+2.54%)
64 groups 2.805 +/-1.90% 2.716 +/-1.18% (+3.17%)
128 groups 3.166 +/-7.71% 3.891 +/-6.77% (+5.82%)
256 groups 3.655 +/-10.09% 3.185 +/-6.65% (+12.87%)

dbench
1 groups 328.176 +/-0.29% 330.217 +/-0.32% (+0.62%)
4 groups 930.739 +/-0.50% 957.173 +/-0.66% (+2.84%)
16 groups 1928.292 +/-0.36% 1978.234 +/-0.88% (+0.92%)
32 groups 2369.348 +/-1.72% 2454.020 +/-0.90% (+3.57%)
64 groups 2583.880 +/-3.39% 2618.860 +/-0.84% (+1.35%)
128 groups 2256.406 +/-10.67% 2392.498 +/-2.13% (+6.03%)
256 groups 1257.546 +/-3.81% 1674.684 +/-4.97% (+33.17%)

Unixbench shell8
1 test 6944.16 +/-0.02% 6605.82 +/-0.11% (-4.87%)
224 tests 13499.02 +/-0.14% 13637.94 +/-0.47% (+1.03%)
lkp reported a -10% regression on shell8 (1 test) for v3, which
seems to be partially recovered on my platform with v4.

tip/sched/core sha1:
commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")

Changes since v3:
- small typo and variable ordering fixes
- add some acked/reviewed tags
- set 1 instead of load for migrate_misfit
- use nr_h_running instead of load for asym_packing
- update the optimization of find_idlest_group() and put back some
conditions when comparing load
- rework find_idlest_group() to match find_busiest_group() behavior

Changes since v2:
- fix typo and reorder code
- some minor code fixes
- optimize find_idlest_group()

Not covered in this patchset:
- Better detection of overloaded and fully busy state, especially for cases
when nr_running > nr CPUs.

Vincent Guittot (11):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running to sum_h_nr_running
sched/fair: remove meaningless imbalance calculation
sched/fair: rework load_balance
sched/fair: use rq->nr_running when balancing load
sched/fair: use load instead of runnable load in load_balance
sched/fair: evenly spread tasks when not overloaded
sched/fair: use utilization to select misfit task
sched/fair: use load instead of runnable load in wakeup path
sched/fair: optimize find_idlest_group
sched/fair: rework find_idlest_group

kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
1 file changed, 682 insertions(+), 499 deletions(-)

--
2.7.4


2019-10-19 08:26:57

by Vincent Guittot

Subject: [PATCH v4 01/11] sched/fair: clean up asym packing

Clean up asym packing to follow the default load balance behavior:
- classify the group by creating a group_asym_packing field.
- calculate the imbalance in calculate_imbalance() instead of bypassing it.

We no longer need to test the same conditions twice to detect asym packing,
and we consolidate the calculation of the imbalance in calculate_imbalance().

There are no functional changes.

Signed-off-by: Vincent Guittot <[email protected]>
Acked-by: Rik van Riel <[email protected]>
---
kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1f0a5e1..617145c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7675,6 +7675,7 @@ struct sg_lb_stats {
unsigned int group_weight;
enum group_type group_type;
int group_no_capacity;
+ unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
#ifdef CONFIG_NUMA_BALANCING
unsigned int nr_numa_running;
@@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
* ASYM_PACKING needs to move all the work to the highest
* prority CPUs in the group, therefore mark all groups
* of lower priority than ourself as busy.
+ *
+ * This is primarily intended to used at the sibling level. Some
+ * cores like POWER7 prefer to use lower numbered SMT threads. In the
+ * case of POWER7, it can move to lower SMT modes only when higher
+ * threads are idle. When in lower SMT modes, the threads will
+ * perform better since they share less core resources. Hence when we
+ * have idle threads, we want them to be the higher ones.
*/
if (sgs->sum_nr_running &&
sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
+ sgs->group_asym_packing = 1;
if (!sds->busiest)
return true;

@@ -8273,51 +8282,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
}

/**
- * check_asym_packing - Check to see if the group is packed into the
- * sched domain.
- *
- * This is primarily intended to used at the sibling level. Some
- * cores like POWER7 prefer to use lower numbered SMT threads. In the
- * case of POWER7, it can move to lower SMT modes only when higher
- * threads are idle. When in lower SMT modes, the threads will
- * perform better since they share less core resources. Hence when we
- * have idle threads, we want them to be the higher ones.
- *
- * This packing function is run on idle threads. It checks to see if
- * the busiest CPU in this domain (core in the P7 case) has a higher
- * CPU number than the packing function is being run on. Here we are
- * assuming lower CPU number will be equivalent to lower a SMT thread
- * number.
- *
- * Return: 1 when packing is required and a task should be moved to
- * this CPU. The amount of the imbalance is returned in env->imbalance.
- *
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain which is to be packed
- */
-static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
-{
- int busiest_cpu;
-
- if (!(env->sd->flags & SD_ASYM_PACKING))
- return 0;
-
- if (env->idle == CPU_NOT_IDLE)
- return 0;
-
- if (!sds->busiest)
- return 0;
-
- busiest_cpu = sds->busiest->asym_prefer_cpu;
- if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
- return 0;
-
- env->imbalance = sds->busiest_stat.group_load;
-
- return 1;
-}
-
-/**
* fix_small_imbalance - Calculate the minor imbalance that exists
* amongst the groups of a sched_domain, during
* load balancing.
@@ -8401,6 +8365,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
local = &sds->local_stat;
busiest = &sds->busiest_stat;

+ if (busiest->group_asym_packing) {
+ env->imbalance = busiest->group_load;
+ return;
+ }
+
if (busiest->group_type == group_imbalanced) {
/*
* In the group_imb case we cannot rely on group-wide averages
@@ -8505,8 +8474,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
busiest = &sds.busiest_stat;

/* ASYM feature bypasses nice load balance check */
- if (check_asym_packing(env, &sds))
- return sds.busiest;
+ if (busiest->group_asym_packing)
+ goto force_balance;

/* There is no busy sibling group to pull tasks from */
if (!sds.busiest || busiest->sum_nr_running == 0)
--
2.7.4

2019-10-19 08:27:03

by Vincent Guittot

Subject: [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation

Clean up load_balance() and remove meaningless calculation and fields before
adding a new algorithm.

Signed-off-by: Vincent Guittot <[email protected]>
Acked-by: Rik van Riel <[email protected]>
---
kernel/sched/fair.c | 105 +---------------------------------------------------
1 file changed, 1 insertion(+), 104 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a2aceb..e004841 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5390,18 +5390,6 @@ static unsigned long capacity_of(int cpu)
return cpu_rq(cpu)->cpu_capacity;
}

-static unsigned long cpu_avg_load_per_task(int cpu)
-{
- struct rq *rq = cpu_rq(cpu);
- unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
- unsigned long load_avg = cpu_runnable_load(rq);
-
- if (nr_running)
- return load_avg / nr_running;
-
- return 0;
-}
-
static void record_wakee(struct task_struct *p)
{
/*
@@ -7667,7 +7655,6 @@ static unsigned long task_h_load(struct task_struct *p)
struct sg_lb_stats {
unsigned long avg_load; /*Avg load across the CPUs of the group */
unsigned long group_load; /* Total load over the CPUs of the group */
- unsigned long load_per_task;
unsigned long group_capacity;
unsigned long group_util; /* Total utilization of the group */
unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
@@ -8049,9 +8036,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
sgs->group_capacity = group->sgc->capacity;
sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;

- if (sgs->sum_h_nr_running)
- sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
-
sgs->group_weight = group->group_weight;

sgs->group_no_capacity = group_is_overloaded(env, sgs);
@@ -8282,76 +8266,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
}

/**
- * fix_small_imbalance - Calculate the minor imbalance that exists
- * amongst the groups of a sched_domain, during
- * load balancing.
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- */
-static inline
-void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
-{
- unsigned long tmp, capa_now = 0, capa_move = 0;
- unsigned int imbn = 2;
- unsigned long scaled_busy_load_per_task;
- struct sg_lb_stats *local, *busiest;
-
- local = &sds->local_stat;
- busiest = &sds->busiest_stat;
-
- if (!local->sum_h_nr_running)
- local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
- else if (busiest->load_per_task > local->load_per_task)
- imbn = 1;
-
- scaled_busy_load_per_task =
- (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
- busiest->group_capacity;
-
- if (busiest->avg_load + scaled_busy_load_per_task >=
- local->avg_load + (scaled_busy_load_per_task * imbn)) {
- env->imbalance = busiest->load_per_task;
- return;
- }
-
- /*
- * OK, we don't have enough imbalance to justify moving tasks,
- * however we may be able to increase total CPU capacity used by
- * moving them.
- */
-
- capa_now += busiest->group_capacity *
- min(busiest->load_per_task, busiest->avg_load);
- capa_now += local->group_capacity *
- min(local->load_per_task, local->avg_load);
- capa_now /= SCHED_CAPACITY_SCALE;
-
- /* Amount of load we'd subtract */
- if (busiest->avg_load > scaled_busy_load_per_task) {
- capa_move += busiest->group_capacity *
- min(busiest->load_per_task,
- busiest->avg_load - scaled_busy_load_per_task);
- }
-
- /* Amount of load we'd add */
- if (busiest->avg_load * busiest->group_capacity <
- busiest->load_per_task * SCHED_CAPACITY_SCALE) {
- tmp = (busiest->avg_load * busiest->group_capacity) /
- local->group_capacity;
- } else {
- tmp = (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
- local->group_capacity;
- }
- capa_move += local->group_capacity *
- min(local->load_per_task, local->avg_load + tmp);
- capa_move /= SCHED_CAPACITY_SCALE;
-
- /* Move if we gain throughput */
- if (capa_move > capa_now)
- env->imbalance = busiest->load_per_task;
-}
-
-/**
* calculate_imbalance - Calculate the amount of imbalance present within the
* groups of a given sched_domain during load balance.
* @env: load balance environment
@@ -8370,15 +8284,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
return;
}

- if (busiest->group_type == group_imbalanced) {
- /*
- * In the group_imb case we cannot rely on group-wide averages
- * to ensure CPU-load equilibrium, look at wider averages. XXX
- */
- busiest->load_per_task =
- min(busiest->load_per_task, sds->avg_load);
- }
-
/*
* Avg load of busiest sg can be less and avg load of local sg can
* be greater than avg load across all sgs of sd because avg load
@@ -8389,7 +8294,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
(busiest->avg_load <= sds->avg_load ||
local->avg_load >= sds->avg_load)) {
env->imbalance = 0;
- return fix_small_imbalance(env, sds);
+ return;
}

/*
@@ -8427,14 +8332,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
busiest->group_misfit_task_load);
}

- /*
- * if *imbalance is less than the average load per runnable task
- * there is no guarantee that any tasks will be moved so we'll have
- * a think about bumping its value to force at least one task to be
- * moved
- */
- if (env->imbalance < busiest->load_per_task)
- return fix_small_imbalance(env, sds);
}

/******* find_busiest_group() helpers end here *********************/
--
2.7.4

2019-10-19 08:27:24

by Vincent Guittot

Subject: [PATCH v4 11/11] sched/fair: rework find_idlest_group

The slow wake-up path computes per-sched_group statistics to select the
idlest group, which is quite similar to what load_balance() does when
selecting the busiest group. Rework find_idlest_group() to classify the
sched_groups and select the idlest one following the same steps as
load_balance().
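
As a reading aid for the large diff below, here is a small userspace toy
model of the selection order applied by update_pick_idlest(): group_type
first, then a per-type tie-break (avg_load when fully busy or overloaded,
idle CPUs when there is spare capacity). This is not the kernel code; the
types and values are invented, and the misfit max-capacity tie-break is
left out of the sketch:

#include <stdio.h>

/* Same ordering as sketched in the cover letter: lower means less busy */
enum group_type {
	group_has_spare = 0,
	group_fully_busy,
	group_misfit_task,
	group_asym_packing,
	group_imbalanced,
	group_overloaded
};

struct toy_group {
	const char *name;
	enum group_type type;
	unsigned long avg_load;		/* used when fully busy/overloaded */
	unsigned int idle_cpus;		/* used when there is spare capacity */
};

/* Return 1 when @candidate should replace @idlest, mirroring the diff */
static int pick_idlest(const struct toy_group *idlest,
		       const struct toy_group *candidate)
{
	if (candidate->type < idlest->type)
		return 1;
	if (candidate->type > idlest->type)
		return 0;

	switch (candidate->type) {
	case group_overloaded:
	case group_fully_busy:
		return candidate->avg_load < idlest->avg_load;
	case group_has_spare:
		return candidate->idle_cpus > idlest->idle_cpus;
	default:
		/* imbalanced/asym/misfit tie-breaks are not modelled here */
		return 0;
	}
}

int main(void)
{
	struct toy_group a = { "groupA", group_fully_busy, 800, 0 };
	struct toy_group b = { "groupB", group_has_spare, 300, 2 };

	/* groupB wins on group_type alone, whatever the loads are */
	printf("pick %s\n", pick_idlest(&a, &b) ? b.name : a.name);
	return 0;
}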

Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/fair.c | 384 ++++++++++++++++++++++++++++++++++------------------
1 file changed, 256 insertions(+), 128 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ed1800d..fbaafae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5541,127 +5541,9 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
return target;
}

-static unsigned long cpu_util_without(int cpu, struct task_struct *p);
-
-static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
-{
- return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
-}
-
-/*
- * find_idlest_group finds and returns the least busy CPU group within the
- * domain.
- *
- * Assumes p is allowed on at least one CPU in sd.
- */
static struct sched_group *
find_idlest_group(struct sched_domain *sd, struct task_struct *p,
- int this_cpu, int sd_flag)
-{
- struct sched_group *idlest = NULL, *group = sd->groups;
- struct sched_group *most_spare_sg = NULL;
- unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
- unsigned long most_spare = 0, this_spare = 0;
- int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
- unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
- (sd->imbalance_pct-100) / 100;
-
- do {
- unsigned long load;
- unsigned long spare_cap, max_spare_cap;
- int local_group;
- int i;
-
- /* Skip over this group if it has no CPUs allowed */
- if (!cpumask_intersects(sched_group_span(group),
- p->cpus_ptr))
- continue;
-
- local_group = cpumask_test_cpu(this_cpu,
- sched_group_span(group));
-
- /*
- * Tally up the load of all CPUs in the group and find
- * the group containing the CPU with most spare capacity.
- */
- load = 0;
- max_spare_cap = 0;
-
- for_each_cpu(i, sched_group_span(group)) {
- load += cpu_load(cpu_rq(i));
-
- spare_cap = capacity_spare_without(i, p);
-
- if (spare_cap > max_spare_cap)
- max_spare_cap = spare_cap;
- }
-
- /* Adjust by relative CPU capacity of the group */
- load = (load * SCHED_CAPACITY_SCALE) /
- group->sgc->capacity;
-
- if (local_group) {
- this_load = load;
- this_spare = max_spare_cap;
- } else {
- if (load < min_load) {
- min_load = load;
- idlest = group;
- }
-
- if (most_spare < max_spare_cap) {
- most_spare = max_spare_cap;
- most_spare_sg = group;
- }
- }
- } while (group = group->next, group != sd->groups);
-
- /*
- * The cross-over point between using spare capacity or least load
- * is too conservative for high utilization tasks on partially
- * utilized systems if we require spare_capacity > task_util(p),
- * so we allow for some task stuffing by using
- * spare_capacity > task_util(p)/2.
- *
- * Spare capacity can't be used for fork because the utilization has
- * not been set yet, we must first select a rq to compute the initial
- * utilization.
- */
- if (sd_flag & SD_BALANCE_FORK)
- goto skip_spare;
-
- if (this_spare > task_util(p) / 2 &&
- imbalance_scale*this_spare > 100*most_spare)
- return NULL;
-
- if (most_spare > task_util(p) / 2)
- return most_spare_sg;
-
-skip_spare:
- if (!idlest)
- return NULL;
-
- /*
- * When comparing groups across NUMA domains, it's possible for the
- * local domain to be very lightly loaded relative to the remote
- * domains but "imbalance" skews the comparison making remote CPUs
- * look much more favourable. When considering cross-domain, add
- * imbalance to the load on the remote node and consider staying
- * local.
- */
- if ((sd->flags & SD_NUMA) &&
- min_load + imbalance >= this_load)
- return NULL;
-
- if (min_load >= this_load + imbalance)
- return NULL;
-
- if ((this_load < (min_load + imbalance)) &&
- (100*this_load < imbalance_scale*min_load))
- return NULL;
-
- return idlest;
-}
+ int this_cpu, int sd_flag);

/*
* find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -5734,7 +5616,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
return prev_cpu;

/*
- * We need task's util for capacity_spare_without, sync it up to
+ * We need task's util for cpu_util_without, sync it up to
* prev_cpu's last_update_time.
*/
if (!(sd_flag & SD_BALANCE_FORK))
@@ -7915,13 +7797,13 @@ static inline int sg_imbalanced(struct sched_group *group)
* any benefit for the load balance.
*/
static inline bool
-group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
+group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
{
if (sgs->sum_nr_running < sgs->group_weight)
return true;

if ((sgs->group_capacity * 100) >
- (sgs->group_util * env->sd->imbalance_pct))
+ (sgs->group_util * imbalance_pct))
return true;

return false;
@@ -7936,13 +7818,13 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
* false.
*/
static inline bool
-group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
+group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
{
if (sgs->sum_nr_running <= sgs->group_weight)
return false;

if ((sgs->group_capacity * 100) <
- (sgs->group_util * env->sd->imbalance_pct))
+ (sgs->group_util * imbalance_pct))
return true;

return false;
@@ -7969,11 +7851,11 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
}

static inline enum
-group_type group_classify(struct lb_env *env,
+group_type group_classify(unsigned int imbalance_pct,
struct sched_group *group,
struct sg_lb_stats *sgs)
{
- if (group_is_overloaded(env, sgs))
+ if (group_is_overloaded(imbalance_pct, sgs))
return group_overloaded;

if (sg_imbalanced(group))
@@ -7985,7 +7867,7 @@ group_type group_classify(struct lb_env *env,
if (sgs->group_misfit_task_load)
return group_misfit_task;

- if (!group_has_capacity(env, sgs))
+ if (!group_has_capacity(imbalance_pct, sgs))
return group_fully_busy;

return group_has_spare;
@@ -8086,7 +7968,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,

sgs->group_weight = group->group_weight;

- sgs->group_type = group_classify(env, group, sgs);
+ sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);

/* Computing avg_load makes sense only when group is overloaded */
if (sgs->group_type == group_overloaded)
@@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
}
#endif /* CONFIG_NUMA_BALANCING */

+
+struct sg_lb_stats;
+
+/*
+ * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
+ * @sd: The sched_domain level to look for the idlest group.
+ * @group: sched_group whose statistics are to be updated.
+ * @sgs: variable to hold the statistics for this group.
+ */
+static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+ struct sched_group *group,
+ struct sg_lb_stats *sgs,
+ struct task_struct *p)
+{
+ int i, nr_running;
+
+ memset(sgs, 0, sizeof(*sgs));
+
+ for_each_cpu(i, sched_group_span(group)) {
+ struct rq *rq = cpu_rq(i);
+
+ sgs->group_load += cpu_load(rq);
+ sgs->group_util += cpu_util_without(i, p);
+ sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+
+ nr_running = rq->nr_running;
+ sgs->sum_nr_running += nr_running;
+
+ /*
+ * No need to call idle_cpu() if nr_running is not 0
+ */
+ if (!nr_running && idle_cpu(i))
+ sgs->idle_cpus++;
+
+
+ }
+
+ /* Check if task fits in the group */
+ if (sd->flags & SD_ASYM_CPUCAPACITY &&
+ !task_fits_capacity(p, group->sgc->max_capacity)) {
+ sgs->group_misfit_task_load = 1;
+ }
+
+ sgs->group_capacity = group->sgc->capacity;
+
+ sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
+
+ /*
+ * Computing avg_load makes sense only when group is fully busy or
+ * overloaded
+ */
+ if (sgs->group_type < group_fully_busy)
+ sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+ sgs->group_capacity;
+}
+
+static bool update_pick_idlest(struct sched_group *idlest,
+ struct sg_lb_stats *idlest_sgs,
+ struct sched_group *group,
+ struct sg_lb_stats *sgs)
+{
+ if (sgs->group_type < idlest_sgs->group_type)
+ return true;
+
+ if (sgs->group_type > idlest_sgs->group_type)
+ return false;
+
+ /*
+ * The candidate and the current idlest group are the same type of
+ * group. Let's check which one is the idlest according to the type.
+ */
+
+ switch (sgs->group_type) {
+ case group_overloaded:
+ case group_fully_busy:
+ /* Select the group with lowest avg_load. */
+ if (idlest_sgs->avg_load <= sgs->avg_load)
+ return false;
+ break;
+
+ case group_imbalanced:
+ case group_asym_packing:
+ /* Those types are not used in the slow wakeup path */
+ return false;
+
+ case group_misfit_task:
+ /* Select group with the highest max capacity */
+ if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
+ return false;
+ break;
+
+ case group_has_spare:
+ /* Select group with most idle CPUs */
+ if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
+ return false;
+ break;
+ }
+
+ return true;
+}
+
+/*
+ * find_idlest_group finds and returns the least busy CPU group within the
+ * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
+ */
+static struct sched_group *
+find_idlest_group(struct sched_domain *sd, struct task_struct *p,
+ int this_cpu, int sd_flag)
+{
+ struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
+ struct sg_lb_stats local_sgs, tmp_sgs;
+ struct sg_lb_stats *sgs;
+ unsigned long imbalance;
+ struct sg_lb_stats idlest_sgs = {
+ .avg_load = UINT_MAX,
+ .group_type = group_overloaded,
+ };
+
+ imbalance = scale_load_down(NICE_0_LOAD) *
+ (sd->imbalance_pct-100) / 100;
+
+ do {
+ int local_group;
+
+ /* Skip over this group if it has no CPUs allowed */
+ if (!cpumask_intersects(sched_group_span(group),
+ p->cpus_ptr))
+ continue;
+
+ local_group = cpumask_test_cpu(this_cpu,
+ sched_group_span(group));
+
+ if (local_group) {
+ sgs = &local_sgs;
+ local = group;
+ } else {
+ sgs = &tmp_sgs;
+ }
+
+ update_sg_wakeup_stats(sd, group, sgs, p);
+
+ if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
+ idlest = group;
+ idlest_sgs = *sgs;
+ }
+
+ } while (group = group->next, group != sd->groups);
+
+
+ /* There is no idlest group to push tasks to */
+ if (!idlest)
+ return NULL;
+
+ /*
+ * If the local group is idler than the selected idlest group
+ * don't try and push the task.
+ */
+ if (local_sgs.group_type < idlest_sgs.group_type)
+ return NULL;
+
+ /*
+ * If the local group is busier than the selected idlest group
+ * try and push the task.
+ */
+ if (local_sgs.group_type > idlest_sgs.group_type)
+ return idlest;
+
+ switch (local_sgs.group_type) {
+ case group_overloaded:
+ case group_fully_busy:
+ /*
+ * When comparing groups across NUMA domains, it's possible for
+ * the local domain to be very lightly loaded relative to the
+ * remote domains but "imbalance" skews the comparison making
+ * remote CPUs look much more favourable. When considering
+ * cross-domain, add imbalance to the load on the remote node
+ * and consider staying local.
+ */
+
+ if ((sd->flags & SD_NUMA) &&
+ ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
+ return NULL;
+
+ /*
+ * If the local group is less loaded than the selected
+ * idlest group don't try and push any tasks.
+ */
+ if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
+ return NULL;
+
+ if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
+ return NULL;
+ break;
+
+ case group_imbalanced:
+ case group_asym_packing:
+ /* Those types are not used in the slow wakeup path */
+ return NULL;
+
+ case group_misfit_task:
+ /* Select group with the highest max capacity */
+ if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
+ return NULL;
+ break;
+
+ case group_has_spare:
+ if (sd->flags & SD_NUMA) {
+#ifdef CONFIG_NUMA_BALANCING
+ int idlest_cpu;
+ /*
+ * If there is spare capacity at NUMA, try to select
+ * the preferred node
+ */
+ if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
+ return NULL;
+
+ idlest_cpu = cpumask_first(sched_group_span(idlest));
+ if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
+ return idlest;
+#endif
+ /*
+ * Otherwise, keep the task on this node to stay close to
+ * its wakeup source and improve locality. If there is
+ * a real need of migration, periodic load balance will
+ * take care of it.
+ */
+ if (local_sgs.idle_cpus)
+ return NULL;
+ }
+
+ /*
+ * Select group with highest number of idle cpus. We could also
+ * compare the utilization which is more stable but it can end
+ * up that the group has less spare capacity but finally more
+ * idle cpus which means more opportunity to run task.
+ */
+ if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
+ return NULL;
+ break;
+ }
+
+ return idlest;
+}
+
/**
* update_sd_lb_stats - Update sched_domain's statistics for load balancing.
* @env: The load balancing environment.
--
2.7.4

2019-10-19 08:27:25

by Vincent Guittot

Subject: [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance

Runnable load has been introduced to take into account the case where
blocked load biases the load balance decision, which was then selecting
an underutilized group with huge blocked load whereas other groups were
overloaded.

The load is now only used when groups are overloaded. In this case,
it's worth being conservative and taking into account the sleeping
tasks that might wake up on the CPU.
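
To illustrate the difference the changelog relies on, here is a small
userspace toy model (not kernel code; the structure and numbers are made up
for illustration): runnable load only tracks the queued tasks, while load
also keeps the contribution of blocked (sleeping) tasks.

#include <stdio.h>

struct toy_task {
	unsigned long load;
	int queued;		/* 1 = runnable on the CPU, 0 = sleeping */
};

int main(void)
{
	/* Two small tasks queued, three heavy tasks currently sleeping */
	struct toy_task tasks[] = {
		{ 512, 1 }, { 512, 1 },
		{ 900, 0 }, { 900, 0 }, { 900, 0 },
	};
	unsigned long runnable_load = 0, load = 0;
	unsigned int i;

	for (i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++) {
		if (tasks[i].queued)
			runnable_load += tasks[i].load;	/* queued only */
		load += tasks[i].load;			/* queued + blocked */
	}

	/*
	 * This CPU looks lightly loaded through runnable_load (1024) but
	 * heavy through load (3724): the sleepers may wake up here. Once a
	 * group is already classified as overloaded, counting them is the
	 * conservative choice this patch makes.
	 */
	printf("runnable_load=%lu load=%lu\n", runnable_load, load);
	return 0;
}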

Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/fair.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e09fe12b..9ac2264 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5385,6 +5385,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
return cfs_rq_runnable_load_avg(&rq->cfs);
}

+static unsigned long cpu_load(struct rq *rq)
+{
+ return cfs_rq_load_avg(&rq->cfs);
+}
+
static unsigned long capacity_of(int cpu)
{
return cpu_rq(cpu)->cpu_capacity;
@@ -8059,7 +8064,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
env->flags |= LBF_NOHZ_AGAIN;

- sgs->group_load += cpu_runnable_load(rq);
+ sgs->group_load += cpu_load(rq);
sgs->group_util += cpu_util(i);
sgs->sum_h_nr_running += rq->cfs.h_nr_running;

@@ -8517,7 +8522,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
init_sd_lb_stats(&sds);

/*
- * Compute the various statistics relavent for load balancing at
+ * Compute the various statistics relevant for load balancing at
* this level.
*/
update_sd_lb_stats(env, &sds);
@@ -8677,11 +8682,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
switch (env->migration_type) {
case migrate_load:
/*
- * When comparing with load imbalance, use
- * cpu_runnable_load() which is not scaled with the CPU
- * capacity.
+ * When comparing with load imbalance, use cpu_load()
+ * which is not scaled with the CPU capacity.
*/
- load = cpu_runnable_load(rq);
+ load = cpu_load(rq);

if (nr_running == 1 && load > env->imbalance &&
!check_cpu_capacity(rq, env->sd))
@@ -8689,10 +8693,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,

/*
* For the load comparisons with the other CPU's,
- * consider the cpu_runnable_load() scaled with the CPU
- * capacity, so that the load can be moved away from
- * the CPU that is potentially running at a lower
- * capacity.
+ * consider the cpu_load() scaled with the CPU
+ * capacity, so that the load can be moved away
+ * from the CPU that is potentially running at a
+ * lower capacity.
*
* Thus we're looking for max(load_i / capacity_i),
* crosswise multiplication to rid ourselves of the
--
2.7.4

2019-10-21 07:51:12

by Ingo Molnar

Subject: Re: [PATCH v4 00/11] sched/fair: rework the CFS load balance


* Vincent Guittot <[email protected]> wrote:

> Several wrong task placement have been raised with the current load
> balance algorithm but their fixes are not always straight forward and
> end up with using biased values to force migrations. A cleanup and rework
> of the load balance will help to handle such UCs and enable to fine grain
> the behavior of the scheduler for other cases.
>
> Patch 1 has already been sent separately and only consolidate asym policy
> in one place and help the review of the changes in load_balance.
>
> Patch 2 renames the sum of h_nr_running in stats.
>
> Patch 3 removes meaningless imbalance computation to make review of
> patch 4 easier.
>
> Patch 4 reworks load_balance algorithm and fixes some wrong task placement
> but try to stay conservative.
>
> Patch 5 add the sum of nr_running to monitor non cfs tasks and take that
> into account when pulling tasks.
>
> Patch 6 replaces runnable_load by load now that the signal is only used
> when overloaded.
>
> Patch 7 improves the spread of tasks at the 1st scheduling level.
>
> Patch 8 uses utilization instead of load in all steps of misfit task
> path.
>
> Patch 9 replaces runnable_load_avg by load_avg in the wake up path.
>
> Patch 10 optimizes find_idlest_group() that was using both runnable_load
> and load. This has not been squashed with previous patch to ease the
> review.
>
> Patch 11 reworks find_idlest_group() to follow the same steps as
> find_busiest_group()
>
> Some benchmarks results based on 8 iterations of each tests:
> - small arm64 dual quad cores system
>
> tip/sched/core w/ this patchset improvement
> schedpipe 53125 +/-0.18% 53443 +/-0.52% (+0.60%)
>
> hackbench -l (2560/#grp) -g #grp
> 1 groups 1.579 +/-29.16% 1.410 +/-13.46% (+10.70%)
> 4 groups 1.269 +/-9.69% 1.205 +/-3.27% (+5.00%)
> 8 groups 1.117 +/-1.51% 1.123 +/-1.27% (+4.57%)
> 16 groups 1.176 +/-1.76% 1.164 +/-2.42% (+1.07%)
>
> Unixbench shell8
> 1 test 1963.48 +/-0.36% 1902.88 +/-0.73% (-3.09%)
> 224 tests 2427.60 +/-0.20% 2469.80 +/-0.42% (1.74%)
>
> - large arm64 2 nodes / 224 cores system
>
> tip/sched/core w/ this patchset improvement
> schedpipe 124084 +/-1.36% 124445 +/-0.67% (+0.29%)
>
> hackbench -l (256000/#grp) -g #grp
> 1 groups 15.305 +/-1.50% 14.001 +/-1.99% (+8.52%)
> 4 groups 5.959 +/-0.70% 5.542 +/-3.76% (+6.99%)
> 16 groups 3.120 +/-1.72% 3.253 +/-0.61% (-4.92%)
> 32 groups 2.911 +/-0.88% 2.837 +/-1.16% (+2.54%)
> 64 groups 2.805 +/-1.90% 2.716 +/-1.18% (+3.17%)
> 128 groups 3.166 +/-7.71% 3.891 +/-6.77% (+5.82%)
> 256 groups 3.655 +/-10.09% 3.185 +/-6.65% (+12.87%)
>
> dbench
> 1 groups 328.176 +/-0.29% 330.217 +/-0.32% (+0.62%)
> 4 groups 930.739 +/-0.50% 957.173 +/-0.66% (+2.84%)
> 16 groups 1928.292 +/-0.36% 1978.234 +/-0.88% (+0.92%)
> 32 groups 2369.348 +/-1.72% 2454.020 +/-0.90% (+3.57%)
> 64 groups 2583.880 +/-3.39% 2618.860 +/-0.84% (+1.35%)
> 128 groups 2256.406 +/-10.67% 2392.498 +/-2.13% (+6.03%)
> 256 groups 1257.546 +/-3.81% 1674.684 +/-4.97% (+33.17%)
>
> Unixbench shell8
> 1 test 6944.16 +/-0.02 6605.82 +/-0.11 (-4.87%)
> 224 tests 13499.02 +/-0.14 13637.94 +/-0.47% (+1.03%)
> lkp reported a -10% regression on shell8 (1 test) for v3 that
> seems that is partially recovered on my platform with v4.
>
> tip/sched/core sha1:
> commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
>
> Changes since v3:
> - small typo and variable ordering fixes
> - add some acked/reviewed tag
> - set 1 instead of load for migrate_misfit
> - use nr_h_running instead of load for asym_packing
> - update the optimization of find_idlest_group() and put back somes
> conditions when comparing load
> - rework find_idlest_group() to match find_busiest_group() behavior
>
> Changes since v2:
> - fix typo and reorder code
> - some minor code fixes
> - optimize the find_idles_group()
>
> Not covered in this patchset:
> - Better detection of overloaded and fully busy state, especially for cases
> when nr_running > nr CPUs.
>
> Vincent Guittot (11):
> sched/fair: clean up asym packing
> sched/fair: rename sum_nr_running to sum_h_nr_running
> sched/fair: remove meaningless imbalance calculation
> sched/fair: rework load_balance
> sched/fair: use rq->nr_running when balancing load
> sched/fair: use load instead of runnable load in load_balance
> sched/fair: evenly spread tasks when not overloaded
> sched/fair: use utilization to select misfit task
> sched/fair: use load instead of runnable load in wakeup path
> sched/fair: optimize find_idlest_group
> sched/fair: rework find_idlest_group
>
> kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
> 1 file changed, 682 insertions(+), 499 deletions(-)

Thanks, that's an excellent series!

I've queued it up in sched/core with a handful of readability edits to
comments and changelogs.

There are some upstreaming caveats though, I expect this series to be a
performance regression magnet:

- load_balance() and wake-up changes invariably are such: some workloads
only work/scale well by accident, and if we touch the logic it might
flip over into a less advantageous scheduling pattern.

- In particular the changes from balancing and waking on runnable load
to full load that includes blocking *will* shift IO-intensive
workloads that your tests don't fully capture, I believe. You also made
idle balancing more aggressive in essence - which might reduce cache
locality for some workloads.

A full run on Mel Gorman's magic scalability test-suite would be super
useful ...

Anyway, please be on the lookout for such performance regression reports.

Also, we seem to have grown a fair amount of these TODO entries:

kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
kernel/sched/fair.c: * XXX illustrate
kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
kernel/sched/fair.c: * can also include other factors [XXX].
kernel/sched/fair.c: * [XXX expand on:
kernel/sched/fair.c: * [XXX more?]
kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */

:-)

Thanks,

Ingo

2019-10-21 08:46:00

by Vincent Guittot

Subject: Re: [PATCH v4 00/11] sched/fair: rework the CFS load balance

On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
>
>
> * Vincent Guittot <[email protected]> wrote:
>
> > Several wrong task placement have been raised with the current load
> > balance algorithm but their fixes are not always straight forward and
> > end up with using biased values to force migrations. A cleanup and rework
> > of the load balance will help to handle such UCs and enable to fine grain
> > the behavior of the scheduler for other cases.
> >
> > Patch 1 has already been sent separately and only consolidate asym policy
> > in one place and help the review of the changes in load_balance.
> >
> > Patch 2 renames the sum of h_nr_running in stats.
> >
> > Patch 3 removes meaningless imbalance computation to make review of
> > patch 4 easier.
> >
> > Patch 4 reworks load_balance algorithm and fixes some wrong task placement
> > but try to stay conservative.
> >
> > Patch 5 add the sum of nr_running to monitor non cfs tasks and take that
> > into account when pulling tasks.
> >
> > Patch 6 replaces runnable_load by load now that the signal is only used
> > when overloaded.
> >
> > Patch 7 improves the spread of tasks at the 1st scheduling level.
> >
> > Patch 8 uses utilization instead of load in all steps of misfit task
> > path.
> >
> > Patch 9 replaces runnable_load_avg by load_avg in the wake up path.
> >
> > Patch 10 optimizes find_idlest_group() that was using both runnable_load
> > and load. This has not been squashed with previous patch to ease the
> > review.
> >
> > Patch 11 reworks find_idlest_group() to follow the same steps as
> > find_busiest_group()
> >
> > Some benchmarks results based on 8 iterations of each tests:
> > - small arm64 dual quad cores system
> >
> > tip/sched/core w/ this patchset improvement
> > schedpipe 53125 +/-0.18% 53443 +/-0.52% (+0.60%)
> >
> > hackbench -l (2560/#grp) -g #grp
> > 1 groups 1.579 +/-29.16% 1.410 +/-13.46% (+10.70%)
> > 4 groups 1.269 +/-9.69% 1.205 +/-3.27% (+5.00%)
> > 8 groups 1.117 +/-1.51% 1.123 +/-1.27% (+4.57%)
> > 16 groups 1.176 +/-1.76% 1.164 +/-2.42% (+1.07%)
> >
> > Unixbench shell8
> > 1 test 1963.48 +/-0.36% 1902.88 +/-0.73% (-3.09%)
> > 224 tests 2427.60 +/-0.20% 2469.80 +/-0.42% (1.74%)
> >
> > - large arm64 2 nodes / 224 cores system
> >
> > tip/sched/core w/ this patchset improvement
> > schedpipe 124084 +/-1.36% 124445 +/-0.67% (+0.29%)
> >
> > hackbench -l (256000/#grp) -g #grp
> > 1 groups 15.305 +/-1.50% 14.001 +/-1.99% (+8.52%)
> > 4 groups 5.959 +/-0.70% 5.542 +/-3.76% (+6.99%)
> > 16 groups 3.120 +/-1.72% 3.253 +/-0.61% (-4.92%)
> > 32 groups 2.911 +/-0.88% 2.837 +/-1.16% (+2.54%)
> > 64 groups 2.805 +/-1.90% 2.716 +/-1.18% (+3.17%)
> > 128 groups 3.166 +/-7.71% 3.891 +/-6.77% (+5.82%)
> > 256 groups 3.655 +/-10.09% 3.185 +/-6.65% (+12.87%)
> >
> > dbench
> > 1 groups 328.176 +/-0.29% 330.217 +/-0.32% (+0.62%)
> > 4 groups 930.739 +/-0.50% 957.173 +/-0.66% (+2.84%)
> > 16 groups 1928.292 +/-0.36% 1978.234 +/-0.88% (+0.92%)
> > 32 groups 2369.348 +/-1.72% 2454.020 +/-0.90% (+3.57%)
> > 64 groups 2583.880 +/-3.39% 2618.860 +/-0.84% (+1.35%)
> > 128 groups 2256.406 +/-10.67% 2392.498 +/-2.13% (+6.03%)
> > 256 groups 1257.546 +/-3.81% 1674.684 +/-4.97% (+33.17%)
> >
> > Unixbench shell8
> > 1 test 6944.16 +/-0.02 6605.82 +/-0.11 (-4.87%)
> > 224 tests 13499.02 +/-0.14 13637.94 +/-0.47% (+1.03%)
> > lkp reported a -10% regression on shell8 (1 test) for v3 that
> > seems that is partially recovered on my platform with v4.
> >
> > tip/sched/core sha1:
> > commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
> >
> > Changes since v3:
> > - small typo and variable ordering fixes
> > - add some acked/reviewed tag
> > - set 1 instead of load for migrate_misfit
> > - use nr_h_running instead of load for asym_packing
> > - update the optimization of find_idlest_group() and put back somes
> > conditions when comparing load
> > - rework find_idlest_group() to match find_busiest_group() behavior
> >
> > Changes since v2:
> > - fix typo and reorder code
> > - some minor code fixes
> > - optimize the find_idles_group()
> >
> > Not covered in this patchset:
> > - Better detection of overloaded and fully busy state, especially for cases
> > when nr_running > nr CPUs.
> >
> > Vincent Guittot (11):
> > sched/fair: clean up asym packing
> > sched/fair: rename sum_nr_running to sum_h_nr_running
> > sched/fair: remove meaningless imbalance calculation
> > sched/fair: rework load_balance
> > sched/fair: use rq->nr_running when balancing load
> > sched/fair: use load instead of runnable load in load_balance
> > sched/fair: evenly spread tasks when not overloaded
> > sched/fair: use utilization to select misfit task
> > sched/fair: use load instead of runnable load in wakeup path
> > sched/fair: optimize find_idlest_group
> > sched/fair: rework find_idlest_group
> >
> > kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
> > 1 file changed, 682 insertions(+), 499 deletions(-)
>
> Thanks, that's an excellent series!
>
> I've queued it up in sched/core with a handful of readability edits to
> comments and changelogs.

Thanks

>
> There are some upstreaming caveats though, I expect this series to be a
> performance regression magnet:
>
> - load_balance() and wake-up changes invariably are such: some workloads
> only work/scale well by accident, and if we touch the logic it might
> flip over into a less advantageous scheduling pattern.
>
> - In particular the changes from balancing and waking on runnable load
> to full load that includes blocking *will* shift IO-intensive
> workloads that you tests don't fully capture I believe. You also made
> idle balancing more aggressive in essence - which might reduce cache
> locality for some workloads.
>
> A full run on Mel Gorman's magic scalability test-suite would be super
> useful ...
>
> Anyway, please be on the lookout for such performance regression reports.

Yes, I monitor the mailing list for such regression reports

>
> Also, we seem to have grown a fair amount of these TODO entries:
>
> kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> kernel/sched/fair.c: * XXX illustrate
> kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> kernel/sched/fair.c: * can also include other factors [XXX].
> kernel/sched/fair.c: * [XXX expand on:
> kernel/sched/fair.c: * [XXX more?]
> kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
>

I will have a look :-)

> :-)
>
> Thanks,
>
> Ingo

Subject: [tip: sched/core] sched/fair: Clean up asym packing

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 490ba971d8b498ba3a47999ab94c6a0d1830ad41
Gitweb: https://git.kernel.org/tip/490ba971d8b498ba3a47999ab94c6a0d1830ad41
Author: Vincent Guittot <[email protected]>
AuthorDate: Fri, 18 Oct 2019 15:26:28 +02:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Clean up asym packing

Clean up asym packing to follow the default load balance behavior:

- classify the group by creating a group_asym_packing field.
- calculate the imbalance in calculate_imbalance() instead of bypassing it.

We no longer need to test the same conditions twice to detect asym packing,
and we consolidate the calculation of the imbalance in calculate_imbalance().

There are no functional changes.

Signed-off-by: Vincent Guittot <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 63 +++++++++++---------------------------------
1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 682a754..5ce0f71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7665,6 +7665,7 @@ struct sg_lb_stats {
unsigned int group_weight;
enum group_type group_type;
int group_no_capacity;
+ unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
#ifdef CONFIG_NUMA_BALANCING
unsigned int nr_numa_running;
@@ -8119,9 +8120,17 @@ asym_packing:
* ASYM_PACKING needs to move all the work to the highest
* prority CPUs in the group, therefore mark all groups
* of lower priority than ourself as busy.
+ *
+ * This is primarily intended to used at the sibling level. Some
+ * cores like POWER7 prefer to use lower numbered SMT threads. In the
+ * case of POWER7, it can move to lower SMT modes only when higher
+ * threads are idle. When in lower SMT modes, the threads will
+ * perform better since they share less core resources. Hence when we
+ * have idle threads, we want them to be the higher ones.
*/
if (sgs->sum_nr_running &&
sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
+ sgs->group_asym_packing = 1;
if (!sds->busiest)
return true;

@@ -8263,51 +8272,6 @@ next_group:
}

/**
- * check_asym_packing - Check to see if the group is packed into the
- * sched domain.
- *
- * This is primarily intended to used at the sibling level. Some
- * cores like POWER7 prefer to use lower numbered SMT threads. In the
- * case of POWER7, it can move to lower SMT modes only when higher
- * threads are idle. When in lower SMT modes, the threads will
- * perform better since they share less core resources. Hence when we
- * have idle threads, we want them to be the higher ones.
- *
- * This packing function is run on idle threads. It checks to see if
- * the busiest CPU in this domain (core in the P7 case) has a higher
- * CPU number than the packing function is being run on. Here we are
- * assuming lower CPU number will be equivalent to lower a SMT thread
- * number.
- *
- * Return: 1 when packing is required and a task should be moved to
- * this CPU. The amount of the imbalance is returned in env->imbalance.
- *
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain which is to be packed
- */
-static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
-{
- int busiest_cpu;
-
- if (!(env->sd->flags & SD_ASYM_PACKING))
- return 0;
-
- if (env->idle == CPU_NOT_IDLE)
- return 0;
-
- if (!sds->busiest)
- return 0;
-
- busiest_cpu = sds->busiest->asym_prefer_cpu;
- if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
- return 0;
-
- env->imbalance = sds->busiest_stat.group_load;
-
- return 1;
-}
-
-/**
* fix_small_imbalance - Calculate the minor imbalance that exists
* amongst the groups of a sched_domain, during
* load balancing.
@@ -8391,6 +8355,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
local = &sds->local_stat;
busiest = &sds->busiest_stat;

+ if (busiest->group_asym_packing) {
+ env->imbalance = busiest->group_load;
+ return;
+ }
+
if (busiest->group_type == group_imbalanced) {
/*
* In the group_imb case we cannot rely on group-wide averages
@@ -8495,8 +8464,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
busiest = &sds.busiest_stat;

/* ASYM feature bypasses nice load balance check */
- if (check_asym_packing(env, &sds))
- return sds.busiest;
+ if (busiest->group_asym_packing)
+ goto force_balance;

/* There is no busy sibling group to pull tasks from */
if (!sds.busiest || busiest->sum_nr_running == 0)

Subject: [tip: sched/core] sched/fair: Remove meaningless imbalance calculation

The following commit has been merged into the sched/core branch of tip:

Commit-ID: fcf0553db6f4c79387864f6e4ab4a891601f395e
Gitweb: https://git.kernel.org/tip/fcf0553db6f4c79387864f6e4ab4a891601f395e
Author: Vincent Guittot <[email protected]>
AuthorDate: Fri, 18 Oct 2019 15:26:30 +02:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Remove meaningless imbalance calculation

Clean up load_balance() and remove meaningless calculation and fields before
adding a new algorithm.

Signed-off-by: Vincent Guittot <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 105 +-------------------------------------------
1 file changed, 1 insertion(+), 104 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad8f16a..a1bc04f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5380,18 +5380,6 @@ static unsigned long capacity_of(int cpu)
return cpu_rq(cpu)->cpu_capacity;
}

-static unsigned long cpu_avg_load_per_task(int cpu)
-{
- struct rq *rq = cpu_rq(cpu);
- unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
- unsigned long load_avg = cpu_runnable_load(rq);
-
- if (nr_running)
- return load_avg / nr_running;
-
- return 0;
-}
-
static void record_wakee(struct task_struct *p)
{
/*
@@ -7657,7 +7645,6 @@ static unsigned long task_h_load(struct task_struct *p)
struct sg_lb_stats {
unsigned long avg_load; /*Avg load across the CPUs of the group */
unsigned long group_load; /* Total load over the CPUs of the group */
- unsigned long load_per_task;
unsigned long group_capacity;
unsigned long group_util; /* Total utilization of the group */
unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
@@ -8039,9 +8026,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
sgs->group_capacity = group->sgc->capacity;
sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;

- if (sgs->sum_h_nr_running)
- sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
-
sgs->group_weight = group->group_weight;

sgs->group_no_capacity = group_is_overloaded(env, sgs);
@@ -8272,76 +8256,6 @@ next_group:
}

/**
- * fix_small_imbalance - Calculate the minor imbalance that exists
- * amongst the groups of a sched_domain, during
- * load balancing.
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- */
-static inline
-void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
-{
- unsigned long tmp, capa_now = 0, capa_move = 0;
- unsigned int imbn = 2;
- unsigned long scaled_busy_load_per_task;
- struct sg_lb_stats *local, *busiest;
-
- local = &sds->local_stat;
- busiest = &sds->busiest_stat;
-
- if (!local->sum_h_nr_running)
- local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
- else if (busiest->load_per_task > local->load_per_task)
- imbn = 1;
-
- scaled_busy_load_per_task =
- (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
- busiest->group_capacity;
-
- if (busiest->avg_load + scaled_busy_load_per_task >=
- local->avg_load + (scaled_busy_load_per_task * imbn)) {
- env->imbalance = busiest->load_per_task;
- return;
- }
-
- /*
- * OK, we don't have enough imbalance to justify moving tasks,
- * however we may be able to increase total CPU capacity used by
- * moving them.
- */
-
- capa_now += busiest->group_capacity *
- min(busiest->load_per_task, busiest->avg_load);
- capa_now += local->group_capacity *
- min(local->load_per_task, local->avg_load);
- capa_now /= SCHED_CAPACITY_SCALE;
-
- /* Amount of load we'd subtract */
- if (busiest->avg_load > scaled_busy_load_per_task) {
- capa_move += busiest->group_capacity *
- min(busiest->load_per_task,
- busiest->avg_load - scaled_busy_load_per_task);
- }
-
- /* Amount of load we'd add */
- if (busiest->avg_load * busiest->group_capacity <
- busiest->load_per_task * SCHED_CAPACITY_SCALE) {
- tmp = (busiest->avg_load * busiest->group_capacity) /
- local->group_capacity;
- } else {
- tmp = (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
- local->group_capacity;
- }
- capa_move += local->group_capacity *
- min(local->load_per_task, local->avg_load + tmp);
- capa_move /= SCHED_CAPACITY_SCALE;
-
- /* Move if we gain throughput */
- if (capa_move > capa_now)
- env->imbalance = busiest->load_per_task;
-}
-
-/**
* calculate_imbalance - Calculate the amount of imbalance present within the
* groups of a given sched_domain during load balance.
* @env: load balance environment
@@ -8360,15 +8274,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
return;
}

- if (busiest->group_type == group_imbalanced) {
- /*
- * In the group_imb case we cannot rely on group-wide averages
- * to ensure CPU-load equilibrium, look at wider averages. XXX
- */
- busiest->load_per_task =
- min(busiest->load_per_task, sds->avg_load);
- }
-
/*
* Avg load of busiest sg can be less and avg load of local sg can
* be greater than avg load across all sgs of sd because avg load
@@ -8379,7 +8284,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
(busiest->avg_load <= sds->avg_load ||
local->avg_load >= sds->avg_load)) {
env->imbalance = 0;
- return fix_small_imbalance(env, sds);
+ return;
}

/*
@@ -8417,14 +8322,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
busiest->group_misfit_task_load);
}

- /*
- * if *imbalance is less than the average load per runnable task
- * there is no guarantee that any tasks will be moved so we'll have
- * a think about bumping its value to force at least one task to be
- * moved
- */
- if (env->imbalance < busiest->load_per_task)
- return fix_small_imbalance(env, sds);
}

/******* find_busiest_group() helpers end here *********************/

Subject: [tip: sched/core] sched/fair: Rework find_idlest_group()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 57abff067a084889b6e06137e61a3dc3458acd56
Gitweb: https://git.kernel.org/tip/57abff067a084889b6e06137e61a3dc3458acd56
Author: Vincent Guittot <[email protected]>
AuthorDate: Fri, 18 Oct 2019 15:26:38 +02:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Mon, 21 Oct 2019 09:40:55 +02:00

sched/fair: Rework find_idlest_group()

The slow wake-up path computes per-sched_group statistics to select the
idlest group, which is quite similar to what load_balance() does when
selecting the busiest group. Rework find_idlest_group() to classify the
sched_groups and select the idlest one following the same steps as
load_balance().

Signed-off-by: Vincent Guittot <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 384 ++++++++++++++++++++++++++++---------------
1 file changed, 256 insertions(+), 128 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95a57c7..a81c364 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5531,127 +5531,9 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
return target;
}

-static unsigned long cpu_util_without(int cpu, struct task_struct *p);
-
-static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
-{
- return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
-}
-
-/*
- * find_idlest_group finds and returns the least busy CPU group within the
- * domain.
- *
- * Assumes p is allowed on at least one CPU in sd.
- */
static struct sched_group *
find_idlest_group(struct sched_domain *sd, struct task_struct *p,
- int this_cpu, int sd_flag)
-{
- struct sched_group *idlest = NULL, *group = sd->groups;
- struct sched_group *most_spare_sg = NULL;
- unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
- unsigned long most_spare = 0, this_spare = 0;
- int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
- unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
- (sd->imbalance_pct-100) / 100;
-
- do {
- unsigned long load;
- unsigned long spare_cap, max_spare_cap;
- int local_group;
- int i;
-
- /* Skip over this group if it has no CPUs allowed */
- if (!cpumask_intersects(sched_group_span(group),
- p->cpus_ptr))
- continue;
-
- local_group = cpumask_test_cpu(this_cpu,
- sched_group_span(group));
-
- /*
- * Tally up the load of all CPUs in the group and find
- * the group containing the CPU with most spare capacity.
- */
- load = 0;
- max_spare_cap = 0;
-
- for_each_cpu(i, sched_group_span(group)) {
- load += cpu_load(cpu_rq(i));
-
- spare_cap = capacity_spare_without(i, p);
-
- if (spare_cap > max_spare_cap)
- max_spare_cap = spare_cap;
- }
-
- /* Adjust by relative CPU capacity of the group */
- load = (load * SCHED_CAPACITY_SCALE) /
- group->sgc->capacity;
-
- if (local_group) {
- this_load = load;
- this_spare = max_spare_cap;
- } else {
- if (load < min_load) {
- min_load = load;
- idlest = group;
- }
-
- if (most_spare < max_spare_cap) {
- most_spare = max_spare_cap;
- most_spare_sg = group;
- }
- }
- } while (group = group->next, group != sd->groups);
-
- /*
- * The cross-over point between using spare capacity or least load
- * is too conservative for high utilization tasks on partially
- * utilized systems if we require spare_capacity > task_util(p),
- * so we allow for some task stuffing by using
- * spare_capacity > task_util(p)/2.
- *
- * Spare capacity can't be used for fork because the utilization has
- * not been set yet, we must first select a rq to compute the initial
- * utilization.
- */
- if (sd_flag & SD_BALANCE_FORK)
- goto skip_spare;
-
- if (this_spare > task_util(p) / 2 &&
- imbalance_scale*this_spare > 100*most_spare)
- return NULL;
-
- if (most_spare > task_util(p) / 2)
- return most_spare_sg;
-
-skip_spare:
- if (!idlest)
- return NULL;
-
- /*
- * When comparing groups across NUMA domains, it's possible for the
- * local domain to be very lightly loaded relative to the remote
- * domains but "imbalance" skews the comparison making remote CPUs
- * look much more favourable. When considering cross-domain, add
- * imbalance to the load on the remote node and consider staying
- * local.
- */
- if ((sd->flags & SD_NUMA) &&
- min_load + imbalance >= this_load)
- return NULL;
-
- if (min_load >= this_load + imbalance)
- return NULL;
-
- if ((this_load < (min_load + imbalance)) &&
- (100*this_load < imbalance_scale*min_load))
- return NULL;
-
- return idlest;
-}
+ int this_cpu, int sd_flag);

/*
* find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -5724,7 +5606,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
return prev_cpu;

/*
- * We need task's util for capacity_spare_without, sync it up to
+ * We need task's util for cpu_util_without, sync it up to
* prev_cpu's last_update_time.
*/
if (!(sd_flag & SD_BALANCE_FORK))
@@ -7905,13 +7787,13 @@ static inline int sg_imbalanced(struct sched_group *group)
* any benefit for the load balance.
*/
static inline bool
-group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
+group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
{
if (sgs->sum_nr_running < sgs->group_weight)
return true;

if ((sgs->group_capacity * 100) >
- (sgs->group_util * env->sd->imbalance_pct))
+ (sgs->group_util * imbalance_pct))
return true;

return false;
@@ -7926,13 +7808,13 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
* false.
*/
static inline bool
-group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
+group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
{
if (sgs->sum_nr_running <= sgs->group_weight)
return false;

if ((sgs->group_capacity * 100) <
- (sgs->group_util * env->sd->imbalance_pct))
+ (sgs->group_util * imbalance_pct))
return true;

return false;
@@ -7959,11 +7841,11 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
}

static inline enum
-group_type group_classify(struct lb_env *env,
+group_type group_classify(unsigned int imbalance_pct,
struct sched_group *group,
struct sg_lb_stats *sgs)
{
- if (group_is_overloaded(env, sgs))
+ if (group_is_overloaded(imbalance_pct, sgs))
return group_overloaded;

if (sg_imbalanced(group))
@@ -7975,7 +7857,7 @@ group_type group_classify(struct lb_env *env,
if (sgs->group_misfit_task_load)
return group_misfit_task;

- if (!group_has_capacity(env, sgs))
+ if (!group_has_capacity(imbalance_pct, sgs))
return group_fully_busy;

return group_has_spare;
@@ -8076,7 +7958,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,

sgs->group_weight = group->group_weight;

- sgs->group_type = group_classify(env, group, sgs);
+ sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);

/* Computing avg_load makes sense only when group is overloaded */
if (sgs->group_type == group_overloaded)
@@ -8231,6 +8113,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
}
#endif /* CONFIG_NUMA_BALANCING */

+
+struct sg_lb_stats;
+
+/*
+ * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
+ * @denv: The ched_domain level to look for idlest group.
+ * @group: sched_group whose statistics are to be updated.
+ * @sgs: variable to hold the statistics for this group.
+ */
+static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+ struct sched_group *group,
+ struct sg_lb_stats *sgs,
+ struct task_struct *p)
+{
+ int i, nr_running;
+
+ memset(sgs, 0, sizeof(*sgs));
+
+ for_each_cpu(i, sched_group_span(group)) {
+ struct rq *rq = cpu_rq(i);
+
+ sgs->group_load += cpu_load(rq);
+ sgs->group_util += cpu_util_without(i, p);
+ sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+
+ nr_running = rq->nr_running;
+ sgs->sum_nr_running += nr_running;
+
+ /*
+ * No need to call idle_cpu() if nr_running is not 0
+ */
+ if (!nr_running && idle_cpu(i))
+ sgs->idle_cpus++;
+
+
+ }
+
+ /* Check if task fits in the group */
+ if (sd->flags & SD_ASYM_CPUCAPACITY &&
+ !task_fits_capacity(p, group->sgc->max_capacity)) {
+ sgs->group_misfit_task_load = 1;
+ }
+
+ sgs->group_capacity = group->sgc->capacity;
+
+ sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
+
+ /*
+ * Computing avg_load makes sense only when group is fully busy or
+ * overloaded
+ */
+ if (sgs->group_type < group_fully_busy)
+ sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+ sgs->group_capacity;
+}
+
+static bool update_pick_idlest(struct sched_group *idlest,
+ struct sg_lb_stats *idlest_sgs,
+ struct sched_group *group,
+ struct sg_lb_stats *sgs)
+{
+ if (sgs->group_type < idlest_sgs->group_type)
+ return true;
+
+ if (sgs->group_type > idlest_sgs->group_type)
+ return false;
+
+ /*
+ * The candidate and the current idlest group are the same type of
+ * group. Let check which one is the idlest according to the type.
+ */
+
+ switch (sgs->group_type) {
+ case group_overloaded:
+ case group_fully_busy:
+ /* Select the group with lowest avg_load. */
+ if (idlest_sgs->avg_load <= sgs->avg_load)
+ return false;
+ break;
+
+ case group_imbalanced:
+ case group_asym_packing:
+ /* Those types are not used in the slow wakeup path */
+ return false;
+
+ case group_misfit_task:
+ /* Select group with the highest max capacity */
+ if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
+ return false;
+ break;
+
+ case group_has_spare:
+ /* Select group with most idle CPUs */
+ if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
+ return false;
+ break;
+ }
+
+ return true;
+}
+
+/*
+ * find_idlest_group() finds and returns the least busy CPU group within the
+ * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
+ */
+static struct sched_group *
+find_idlest_group(struct sched_domain *sd, struct task_struct *p,
+ int this_cpu, int sd_flag)
+{
+ struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
+ struct sg_lb_stats local_sgs, tmp_sgs;
+ struct sg_lb_stats *sgs;
+ unsigned long imbalance;
+ struct sg_lb_stats idlest_sgs = {
+ .avg_load = UINT_MAX,
+ .group_type = group_overloaded,
+ };
+
+ imbalance = scale_load_down(NICE_0_LOAD) *
+ (sd->imbalance_pct-100) / 100;
+
+ do {
+ int local_group;
+
+ /* Skip over this group if it has no CPUs allowed */
+ if (!cpumask_intersects(sched_group_span(group),
+ p->cpus_ptr))
+ continue;
+
+ local_group = cpumask_test_cpu(this_cpu,
+ sched_group_span(group));
+
+ if (local_group) {
+ sgs = &local_sgs;
+ local = group;
+ } else {
+ sgs = &tmp_sgs;
+ }
+
+ update_sg_wakeup_stats(sd, group, sgs, p);
+
+ if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
+ idlest = group;
+ idlest_sgs = *sgs;
+ }
+
+ } while (group = group->next, group != sd->groups);
+
+
+ /* There is no idlest group to push tasks to */
+ if (!idlest)
+ return NULL;
+
+ /*
+ * If the local group is idler than the selected idlest group
+ * don't try and push the task.
+ */
+ if (local_sgs.group_type < idlest_sgs.group_type)
+ return NULL;
+
+ /*
+ * If the local group is busier than the selected idlest group
+ * try and push the task.
+ */
+ if (local_sgs.group_type > idlest_sgs.group_type)
+ return idlest;
+
+ switch (local_sgs.group_type) {
+ case group_overloaded:
+ case group_fully_busy:
+ /*
+ * When comparing groups across NUMA domains, it's possible for
+ * the local domain to be very lightly loaded relative to the
+ * remote domains but "imbalance" skews the comparison making
+ * remote CPUs look much more favourable. When considering
+ * cross-domain, add imbalance to the load on the remote node
+ * and consider staying local.
+ */
+
+ if ((sd->flags & SD_NUMA) &&
+ ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
+ return NULL;
+
+ /*
+ * If the local group is less loaded than the selected
+ * idlest group don't try and push any tasks.
+ */
+ if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
+ return NULL;
+
+ if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
+ return NULL;
+ break;
+
+ case group_imbalanced:
+ case group_asym_packing:
+ /* Those type are not used in the slow wakeup path */
+ return NULL;
+
+ case group_misfit_task:
+ /* Select group with the highest max capacity */
+ if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
+ return NULL;
+ break;
+
+ case group_has_spare:
+ if (sd->flags & SD_NUMA) {
+#ifdef CONFIG_NUMA_BALANCING
+ int idlest_cpu;
+ /*
+ * If there is spare capacity at NUMA, try to select
+ * the preferred node
+ */
+ if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
+ return NULL;
+
+ idlest_cpu = cpumask_first(sched_group_span(idlest));
+ if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
+ return idlest;
+#endif
+ /*
+ * Otherwise, keep the task on this node to stay close
+ * its wakeup source and improve locality. If there is
+ * a real need of migration, periodic load balance will
+ * take care of it.
+ */
+ if (local_sgs.idle_cpus)
+ return NULL;
+ }
+
+ /*
+ * Select group with highest number of idle CPUs. We could also
+ * compare the utilization which is more stable but it can end
+ * up that the group has less spare capacity but finally more
+ * idle CPUs which means more opportunity to run task.
+ */
+ if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
+ return NULL;
+ break;
+ }
+
+ return idlest;
+}
+
/**
* update_sd_lb_stats - Update sched_domain's statistics for load balancing.
* @env: The load balancing environment.

Subject: [tip: sched/core] sched/fair: Use load instead of runnable load in load_balance()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: b0fb1eb4f04ae4768231b9731efb1134e22053a4
Gitweb: https://git.kernel.org/tip/b0fb1eb4f04ae4768231b9731efb1134e22053a4
Author: Vincent Guittot <[email protected]>
AuthorDate: Fri, 18 Oct 2019 15:26:33 +02:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00

sched/fair: Use load instead of runnable load in load_balance()

'Runnable load' was originally introduced to take into account the case
where blocked load biases the load balance decision, which could end up
selecting underutilized groups with huge blocked load whereas other
groups were overloaded.

The load is now only used when groups are overloaded. In this case, it's
worth being conservative and taking into account the sleeping tasks that
might wake up on the CPU.
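
As a toy, self-contained illustration of the difference (plain C, not the
kernel implementation; the struct and the numbers are made up), 'runnable
load' only sums the tasks currently enqueued, whereas the full load also
keeps the contribution of tasks that recently ran and are now sleeping:

#include <stdio.h>

struct task { unsigned long load_contrib; int queued; };

static unsigned long runnable_load(const struct task *t, int n)
{
    unsigned long sum = 0;
    for (int i = 0; i < n; i++)
        if (t[i].queued)
            sum += t[i].load_contrib;    /* only enqueued tasks count */
    return sum;
}

static unsigned long full_load(const struct task *t, int n)
{
    unsigned long sum = 0;
    for (int i = 0; i < n; i++)
        sum += t[i].load_contrib;        /* blocked tasks still count */
    return sum;
}

int main(void)
{
    /* one running task plus one that just went to sleep on this CPU */
    struct task rq[] = { { 512, 1 }, { 512, 0 } };

    printf("runnable load: %lu\n", runnable_load(rq, 2)); /* 512 */
    printf("full load:     %lu\n", full_load(rq, 2));     /* 1024 */
    return 0;
}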

Signed-off-by: Vincent Guittot <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4e7396c..e6a3db0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5375,6 +5375,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
return cfs_rq_runnable_load_avg(&rq->cfs);
}

+static unsigned long cpu_load(struct rq *rq)
+{
+ return cfs_rq_load_avg(&rq->cfs);
+}
+
static unsigned long capacity_of(int cpu)
{
return cpu_rq(cpu)->cpu_capacity;
@@ -8049,7 +8054,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
env->flags |= LBF_NOHZ_AGAIN;

- sgs->group_load += cpu_runnable_load(rq);
+ sgs->group_load += cpu_load(rq);
sgs->group_util += cpu_util(i);
sgs->sum_h_nr_running += rq->cfs.h_nr_running;

@@ -8507,7 +8512,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
init_sd_lb_stats(&sds);

/*
- * Compute the various statistics relavent for load balancing at
+ * Compute the various statistics relevant for load balancing at
* this level.
*/
update_sd_lb_stats(env, &sds);
@@ -8667,11 +8672,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
switch (env->migration_type) {
case migrate_load:
/*
- * When comparing with load imbalance, use
- * cpu_runnable_load() which is not scaled with the CPU
- * capacity.
+ * When comparing with load imbalance, use cpu_load()
+ * which is not scaled with the CPU capacity.
*/
- load = cpu_runnable_load(rq);
+ load = cpu_load(rq);

if (nr_running == 1 && load > env->imbalance &&
!check_cpu_capacity(rq, env->sd))
@@ -8679,10 +8683,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,

/*
* For the load comparisons with the other CPUs,
- * consider the cpu_runnable_load() scaled with the CPU
- * capacity, so that the load can be moved away from
- * the CPU that is potentially running at a lower
- * capacity.
+ * consider the cpu_load() scaled with the CPU
+ * capacity, so that the load can be moved away
+ * from the CPU that is potentially running at a
+ * lower capacity.
*
* Thus we're looking for max(load_i / capacity_i),
* crosswise multiplication to rid ourselves of the

2019-10-21 12:57:11

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> >
> >
> > * Vincent Guittot <[email protected]> wrote:
> >
> > > [...]
> >
> > Thanks, that's an excellent series!
> >
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
>
> Thanks
>
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> > - load_balance() and wake-up changes invariably are such: some workloads
> > only work/scale well by accident, and if we touch the logic it might
> > flip over into a less advantageous scheduling pattern.
> >
> > - In particular the changes from balancing and waking on runnable load
> > to full load that includes blocking *will* shift IO-intensive
> > workloads that you tests don't fully capture I believe. You also made
> > idle balancing more aggressive in essence - which might reduce cache
> > locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
> > Anyway, please be on the lookout for such performance regression reports.
>
> Yes I monitor the regressions on the mailing list
>

Nice to see these in! Our perf team is running tests on this version. I
should have results in a couple days.


Cheers,
Phil

> >
> > Also, we seem to have grown a fair amount of these TODO entries:
> >
> > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > kernel/sched/fair.c: * XXX illustrate
> > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > kernel/sched/fair.c: * can also include other factors [XXX].
> > kernel/sched/fair.c: * [XXX expand on:
> > kernel/sched/fair.c: * [XXX more?]
> > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> >
>
> I will have a look :-)
>
> > :-)
> >
> > Thanks,
> >
> > Ingo

--

2019-10-22 21:46:56

by Vincent Guittot

[permalink] [raw]
Subject: [PATCH] sched/fair: fix rework of find_idlest_group()

The task, for which the scheduler looks for the idlest group of CPUs, must
be discounted from all statistics in order to get a fair comparison
between groups. This includes utilization, load, nr_running and idle_cpus.

Such unfairness can easily be highlighted with the unixbench execl 1 test.
This test continuously calls execve() and the scheduler looks for the idlest
group/CPU on which to place the task. Because the task runs on the
local group/CPU, the latter appears busy even if nothing else is
running on it. As a result, the scheduler will always select a
group/CPU other than the local one.
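
A minimal standalone sketch of the "without" idea (plain C, not the kernel
code; stats_without() and sub_positive() are illustrative stand-ins for the
*_without() helpers in the patch below): when the waking task already
contributes to a CPU's statistics, its share is subtracted before the
groups are compared, so the local group is not unfairly penalized:

#include <stdio.h>

struct cpu_stats { unsigned long load; unsigned int nr_running; };

/* Subtract without underflowing, like the kernel's lsub_positive(). */
static unsigned long sub_positive(unsigned long a, unsigned long b)
{
    return a > b ? a - b : 0;
}

static struct cpu_stats stats_without(struct cpu_stats s,
                                      unsigned long task_load, int task_on_cpu)
{
    if (task_on_cpu) {
        s.load = sub_positive(s.load, task_load);
        s.nr_running = s.nr_running ? s.nr_running - 1 : 0;
    }
    return s;
}

int main(void)
{
    struct cpu_stats local = { .load = 600, .nr_running = 1 };

    /* The exec'ing task is the only thing on the local CPU ... */
    struct cpu_stats cmp = stats_without(local, 600, 1);

    /* ... so, once discounted, the local CPU looks idle, as it should. */
    printf("load=%lu nr_running=%u\n", cmp.load, cmp.nr_running);
    return 0;
}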

Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
---

This recovers most of the perf regression on my system, and I have asked
Rong to rerun the test with the patch to check that it fixes his
system as well.

kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 83 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a81c364..0ad4b21 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
{
return cfs_rq_load_avg(&rq->cfs);
}
+/*
+ * cpu_load_without - compute cpu load without any contributions from *p
+ * @cpu: the CPU which load is requested
+ * @p: the task which load should be discounted
+ *
+ * The load of a CPU is defined by the load of tasks currently enqueued on that
+ * CPU as well as tasks which are currently sleeping after an execution on that
+ * CPU.
+ *
+ * This method returns the load of the specified CPU by discounting the load of
+ * the specified task, whenever the task is currently contributing to the CPU
+ * load.
+ */
+static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
+{
+ struct cfs_rq *cfs_rq;
+ unsigned int load;
+
+ /* Task has no contribution or is new */
+ if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+ return cpu_load(rq);
+
+ cfs_rq = &rq->cfs;
+ load = READ_ONCE(cfs_rq->avg.load_avg);
+
+ /* Discount task's util from CPU's util */
+ lsub_positive(&load, task_h_load(p));
+
+ return load;
+}

static unsigned long capacity_of(int cpu)
{
@@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
struct sg_lb_stats;

/*
+ * task_running_on_cpu - return 1 if @p is running on @cpu.
+ */
+
+static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
+{
+ /* Task has no contribution or is new */
+ if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+ return 0;
+
+ if (task_on_rq_queued(p))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * idle_cpu_without - would a given CPU be idle without p ?
+ * @cpu: the processor on which idleness is tested.
+ * @p: task which should be ignored.
+ *
+ * Return: 1 if the CPU would be idle. 0 otherwise.
+ */
+static int idle_cpu_without(int cpu, struct task_struct *p)
+{
+ struct rq *rq = cpu_rq(cpu);
+
+ if ((rq->curr != rq->idle) && (rq->curr != p))
+ return 0;
+
+ /*
+ * rq->nr_running can't be used but an updated version without the
+ * impact of p on cpu must be used instead. The updated nr_running
+ * be computed and tested before calling idle_cpu_without().
+ */
+
+#ifdef CONFIG_SMP
+ if (!llist_empty(&rq->wake_list))
+ return 0;
+#endif
+
+ return 1;
+}
+
+/*
* update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
- * @denv: The ched_domain level to look for idlest group.
+ * @sd: The sched_domain level to look for idlest group.
* @group: sched_group whose statistics are to be updated.
* @sgs: variable to hold the statistics for this group.
+ * @p: The task for which we look for the idlest group/CPU.
*/
static inline void update_sg_wakeup_stats(struct sched_domain *sd,
struct sched_group *group,
@@ -8133,21 +8208,22 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,

for_each_cpu(i, sched_group_span(group)) {
struct rq *rq = cpu_rq(i);
+ unsigned int local;

- sgs->group_load += cpu_load(rq);
+ sgs->group_load += cpu_load_without(rq, p);
sgs->group_util += cpu_util_without(i, p);
- sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+ local = task_running_on_cpu(i, p);
+ sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;

- nr_running = rq->nr_running;
+ nr_running = rq->nr_running - local;
sgs->sum_nr_running += nr_running;

/*
- * No need to call idle_cpu() if nr_running is not 0
+ * No need to call idle_cpu_without() if nr_running is not 0
*/
- if (!nr_running && idle_cpu(i))
+ if (!nr_running && idle_cpu_without(i, p))
sgs->idle_cpus++;

-
}

/* Check if task fits in the group */
--
2.7.4

2019-10-23 07:53:33

by Chen, Rong A

[permalink] [raw]
Subject: Re: [PATCH] sched/fair: fix rework of find_idlest_group()

Tested-by: kernel test robot <[email protected]>

On 10/23/2019 12:46 AM, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
>
> Such unfairness can easily be highlighted with the unixbench execl 1 test.
> This test continuously calls execve() and the scheduler looks for the idlest
> group/CPU on which to place the task. Because the task runs on the
> local group/CPU, the latter appears busy even if nothing else is
> running on it. As a result, the scheduler will always select a
> group/CPU other than the local one.
>
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: Vincent Guittot <[email protected]>
> [...]

2019-10-25 11:59:42

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> Hi Vincent,
>
> On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> > >
> > >
> > > * Vincent Guittot <[email protected]> wrote:
> > >
> > > > [...]
> > >
> > > Thanks, that's an excellent series!
> > >
> > > I've queued it up in sched/core with a handful of readability edits to
> > > comments and changelogs.
> >
> > Thanks
> >
> > >
> > > There are some upstreaming caveats though, I expect this series to be a
> > > performance regression magnet:
> > >
> > > - load_balance() and wake-up changes invariably are such: some workloads
> > > only work/scale well by accident, and if we touch the logic it might
> > > flip over into a less advantageous scheduling pattern.
> > >
> > > - In particular the changes from balancing and waking on runnable load
> > > to full load that includes blocking *will* shift IO-intensive
> > > workloads that you tests don't fully capture I believe. You also made
> > > idle balancing more aggressive in essence - which might reduce cache
> > > locality for some workloads.
> > >
> > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > useful ...
> > >
> > > Anyway, please be on the lookout for such performance regression reports.
> >
> > Yes I monitor the regressions on the mailing list
>
>
> Our kernel perf tests show good results across the board for v4.
>
> The issue we hit on the 8-node system is fixed. Thanks!
>
> As we didn't see the fairness issue I don't expect the results to be
> that different on v4a (with the followup patch) but those tests are
> queued up now and we'll see what they look like.
>

Initial results with fix patch (v4a) show that the outlier issues on
the 8-node system have returned. Median time for 152 and 156 threads
(160 cpu system) goes up significantly and worst case goes from 340
and 250 to 550 sec. for both. And doubles from 150 to 300 for 144
threads. These look more like the results from v3.

We're re-running the test to get more samples.


Other tests and systems were still fine.


Cheers,
Phil


> Numbers for my specific testcase (the cgroup imbalance) are basically
> the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> series solves that issue.
>
>
> Cheers,
> Phil
>
>
> >
> > >
> > > Also, we seem to have grown a fair amount of these TODO entries:
> > >
> > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > kernel/sched/fair.c: * XXX illustrate
> > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > kernel/sched/fair.c: * [XXX expand on:
> > > kernel/sched/fair.c: * [XXX more?]
> > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > >
> >
> > I will have a look :-)
> >
> > > :-)
> > >
> > > Thanks,
> > >
> > > Ingo
>
> --
>

--

2019-10-25 13:32:50

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

Hi Vincent,

On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> >
> >
> > * Vincent Guittot <[email protected]> wrote:
> >
> > > [...]
> >
> > Thanks, that's an excellent series!
> >
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
>
> Thanks
>
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> > - load_balance() and wake-up changes invariably are such: some workloads
> > only work/scale well by accident, and if we touch the logic it might
> > flip over into a less advantageous scheduling pattern.
> >
> > - In particular the changes from balancing and waking on runnable load
> > to full load that includes blocking *will* shift IO-intensive
> > workloads that you tests don't fully capture I believe. You also made
> > idle balancing more aggressive in essence - which might reduce cache
> > locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
> > Anyway, please be on the lookout for such performance regression reports.
>
> Yes I monitor the regressions on the mailing list


Our kernel perf tests show good results across the board for v4.

The issue we hit on the 8-node system is fixed. Thanks!

As we didn't see the fairness issue I don't expect the results to be
that different on v4a (with the followup patch) but those tests are
queued up now and we'll see what they look like.

Numbers for my specific testcase (the cgroup imbalance) are basically
the same as I posted for v3 (plus the better 8-node numbers). I.e. this
series solves that issue.


Cheers,
Phil


>
> >
> > Also, we seem to have grown a fair amount of these TODO entries:
> >
> > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > kernel/sched/fair.c: * XXX illustrate
> > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > kernel/sched/fair.c: * can also include other factors [XXX].
> > kernel/sched/fair.c: * [XXX expand on:
> > kernel/sched/fair.c: * [XXX more?]
> > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> >
>
> I will have a look :-)
>
> > :-)
> >
> > Thanks,
> >
> > Ingo

--

2019-10-25 14:03:14

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Thu, 24 Oct 2019 at 15:47, Phil Auld <[email protected]> wrote:
>
> On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > Hi Vincent,
> >
> > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> > > >

[...]

> > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > useful ...
> > > >
> > > > Anyway, please be on the lookout for such performance regression reports.
> > >
> > > Yes I monitor the regressions on the mailing list
> >
> >
> > Our kernel perf tests show good results across the board for v4.
> >
> > The issue we hit on the 8-node system is fixed. Thanks!
> >
> > As we didn't see the fairness issue I don't expect the results to be
> > that different on v4a (with the followup patch) but those tests are
> > queued up now and we'll see what they look like.
> >
>
> Initial results with fix patch (v4a) show that the outlier issues on
> the 8-node system have returned. Median time for 152 and 156 threads
> (160 cpu system) goes up significantly and worst case goes from 340
> and 250 to 550 sec. for both. And doubles from 150 to 300 for 144

For v3, you had a 4x slowdown IIRC.


> threads. These look more like the results from v3.

OK. For v3, we were not sure that your UC triggered the slow path, but
it seems that we have the confirmation now.
The problem happens only on this 8-node, 160-core system, doesn't it?

The fix favors the local group, so your UC seems to prefer spreading
tasks at wake up.
If you have any traces that you can share, this could help to
understand what's going on. I will try to reproduce the problem on my
system.

>
> We're re-running the test to get more samples.

Thanks
Vincent

>
>
> Other tests and systems were still fine.
>
>
> Cheers,
> Phil
>
>
> > Numbers for my specific testcase (the cgroup imbalance) are basically
> > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > series solves that issue.
> >
> >
> > Cheers,
> > Phil
> >
> >
> > >
> > > >
> > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > >
> > > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > kernel/sched/fair.c: * XXX illustrate
> > > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > > kernel/sched/fair.c: * [XXX expand on:
> > > > kernel/sched/fair.c: * [XXX more?]
> > > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > > >
> > >
> > > I will have a look :-)
> > >
> > > > :-)
> > > >
> > > > Thanks,
> > > >
> > > > Ingo
> >
> > --
> >
>
> --
>

2019-10-25 20:19:08

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance


Hi Vincent,


On Thu, Oct 24, 2019 at 04:59:05PM +0200 Vincent Guittot wrote:
> On Thu, 24 Oct 2019 at 15:47, Phil Auld <[email protected]> wrote:
> >
> > On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > > Hi Vincent,
> > >
> > > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> > > > >
>
> [...]
>
> > > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > > useful ...
> > > > >
> > > > > Anyway, please be on the lookout for such performance regression reports.
> > > >
> > > > Yes I monitor the regressions on the mailing list
> > >
> > >
> > > Our kernel perf tests show good results across the board for v4.
> > >
> > > The issue we hit on the 8-node system is fixed. Thanks!
> > >
> > > As we didn't see the fairness issue I don't expect the results to be
> > > that different on v4a (with the followup patch) but those tests are
> > > queued up now and we'll see what they look like.
> > >
> >
> > Initial results with fix patch (v4a) show that the outlier issues on
> > the 8-node system have returned. Median time for 152 and 156 threads
> > (160 cpu system) goes up significantly and worst case goes from 340
> > and 250 to 550 sec. for both. And doubles from 150 to 300 for 144
>
> For v3, you had a 4x slowdown IIRC.
>

Sorry, that was a confusing change of data point :)


That 4x was the normal versus group result for v3. I.e. the usual
view of this test case's data.

These numbers above are the group vs group difference between
v4 and v4a.

The comparable data points are that for v4 there was no difference
in performance between group and normal at 152 threads, and a 35%
drop-off from normal to group at 156.

With v4a there was a 100% drop (2x slowdown) from normal to group at 152
and close to that at 156 (~75-80% drop-off).

So, yes, not as severe as v3, but significantly off from v4.

>
> > threads. These look more like the results from v3.
>
> OK. For v3, we were not sure that your UC triggered the slow path, but
> it seems that we have the confirmation now.
> The problem happens only on this 8-node, 160-core system, doesn't it?

Yes. It only shows up now on this 8-node system.

>
> The fix favors the local group, so your UC seems to prefer spreading
> tasks at wake up.
> If you have any traces that you can share, this could help to
> understand what's going on. I will try to reproduce the problem on my
> system.

I'm not actually sure the fix here is causing this. Looking at the data
more closely I see similar imbalances on v4, v4a and v3.

When you say slow versus fast wakeup paths, what do you mean? I'm still
learning my way around all this code.

This particular test is specifically designed to highlight the imbalance
cause by the use of group scheduler defined load and averages. The threads
are mostly CPU bound but will join up every time step. So if each thread
more or less gets its own CPU (we run with fewer threads than CPUs) they
all finish the timestep at about the same time. If threads are stuck
sharing cpus then those finish later and the whole computation is slowed
down. In addition to the NAS benchmark threads there are 2 stress CPU
burners. These are either run in their own cgroups (thus having full "load")
or all in the same cgroup with the benchmark, thus all having tiny "loads".

In this system, there are 20 cpus per node. We track average number of
benchmark threads running in each node. Generally for a balanced case
we should not see any node much over 20, and indeed in the normal case
(everyone in one cgroup) we see pretty nice balance. In the cgroup case we are
still seeing numbers much higher than 20.

Here are some eye charts:

This is the GROUP numbers from that machine on the v1 series (I don't have the
NORMAL lines handy for this one):
lu.C.x_152_GROUP_1 Average 18.08 18.17 19.58 19.29 19.25 17.50 21.46 18.67
lu.C.x_152_GROUP_2 Average 17.12 17.48 17.88 17.62 19.57 17.31 23.00 22.02
lu.C.x_152_GROUP_3 Average 17.82 17.97 18.12 18.18 24.55 22.18 16.97 16.21
lu.C.x_152_GROUP_4 Average 18.47 19.08 18.50 18.66 21.45 25.00 15.47 15.37
lu.C.x_152_GROUP_5 Average 20.46 20.71 27.38 24.75 17.06 16.65 12.81 12.19

lu.C.x_156_GROUP_1 Average 18.70 18.80 20.25 19.50 20.45 20.30 19.55 18.45
lu.C.x_156_GROUP_2 Average 19.29 19.90 17.71 18.10 20.76 21.57 19.81 18.86
lu.C.x_156_GROUP_3 Average 25.09 29.19 21.83 21.33 18.67 18.57 11.03 10.29
lu.C.x_156_GROUP_4 Average 18.60 19.10 19.20 18.70 20.30 20.00 19.70 20.40
lu.C.x_156_GROUP_5 Average 18.58 18.95 18.63 18.1 17.32 19.37 23.92 21.08

There are a couple that did not balance well but the overall results were good.

This is v4:
lu.C.x_152_GROUP_1 Average 18.80 19.25 21.95 21.25 17.55 17.25 17.85 18.10
lu.C.x_152_GROUP_2 Average 20.57 20.62 19.76 17.76 18.95 18.33 18.52 17.48
lu.C.x_152_GROUP_3 Average 15.39 12.22 13.96 12.19 25.51 28.91 21.88 21.94
lu.C.x_152_GROUP_4 Average 20.30 19.75 20.75 19.45 18.15 17.80 18.15 17.65
lu.C.x_152_GROUP_5 Average 15.13 12.21 13.63 11.39 25.42 30.21 21.55 22.46
lu.C.x_152_NORMAL_1 Average 17.00 16.88 19.52 18.28 19.24 19.08 21.08 20.92
lu.C.x_152_NORMAL_2 Average 18.61 16.56 18.56 17.00 20.56 20.28 20.00 20.44
lu.C.x_152_NORMAL_3 Average 19.27 19.77 21.23 20.86 18.00 17.68 17.73 17.45
lu.C.x_152_NORMAL_4 Average 20.24 19.33 21.33 21.10 17.33 18.43 17.57 16.67
lu.C.x_152_NORMAL_5 Average 21.27 20.36 20.86 19.36 17.50 17.77 17.32 17.55

lu.C.x_156_GROUP_1 Average 18.60 18.68 21.16 23.40 18.96 19.72 17.76 17.72
lu.C.x_156_GROUP_2 Average 22.76 21.71 20.55 21.32 18.18 16.42 17.58 17.47
lu.C.x_156_GROUP_3 Average 13.62 11.52 15.54 15.58 25.42 28.54 23.22 22.56
lu.C.x_156_GROUP_4 Average 17.73 18.14 21.95 21.82 19.73 19.68 18.55 18.41
lu.C.x_156_GROUP_5 Average 15.32 15.14 17.30 17.11 23.59 25.75 20.77 21.02
lu.C.x_156_NORMAL_1 Average 19.06 18.72 19.56 18.72 19.72 21.28 19.44 19.50
lu.C.x_156_NORMAL_2 Average 20.25 19.86 22.61 23.18 18.32 17.93 16.39 17.46
lu.C.x_156_NORMAL_3 Average 18.84 17.88 19.24 17.76 21.04 20.64 20.16 20.44
lu.C.x_156_NORMAL_4 Average 20.67 19.44 20.74 22.15 18.89 18.85 18.00 17.26
lu.C.x_156_NORMAL_5 Average 20.12 19.65 24.12 24.15 17.40 16.62 17.10 16.83

This one is better overall, but there are some mid 20s and 152_GROUP_5 is pretty bad.


This is v4a
lu.C.x_152_GROUP_1 Average 28.64 34.49 23.60 24.48 10.35 11.99 8.36 10.09
lu.C.x_152_GROUP_2 Average 17.36 17.33 15.48 13.12 24.90 24.43 18.55 20.83
lu.C.x_152_GROUP_3 Average 20.00 19.92 20.21 21.33 18.50 18.50 16.50 17.04
lu.C.x_152_GROUP_4 Average 18.07 17.87 18.40 17.87 23.07 22.73 17.60 16.40
lu.C.x_152_GROUP_5 Average 25.50 24.69 21.48 21.46 16.85 16.00 14.06 11.96
lu.C.x_152_NORMAL_1 Average 22.27 20.77 20.60 19.83 16.73 17.53 15.83 18.43
lu.C.x_152_NORMAL_2 Average 19.83 20.81 23.06 21.97 17.28 16.92 15.83 16.31
lu.C.x_152_NORMAL_3 Average 17.85 19.31 18.85 19.08 19.00 19.31 19.08 19.54
lu.C.x_152_NORMAL_4 Average 18.87 18.13 19.00 20.27 18.20 18.67 19.73 19.13
lu.C.x_152_NORMAL_5 Average 18.16 18.63 18.11 17.00 19.79 20.63 19.47 20.21

lu.C.x_156_GROUP_1 Average 24.96 26.15 21.78 21.48 18.52 19.11 12.98 11.02
lu.C.x_156_GROUP_2 Average 18.69 19.00 18.65 18.42 20.50 20.46 19.85 20.42
lu.C.x_156_GROUP_3 Average 24.32 23.79 20.82 20.95 16.63 16.61 18.47 14.42
lu.C.x_156_GROUP_4 Average 18.27 18.34 14.88 16.07 27.00 21.93 20.56 18.95
lu.C.x_156_GROUP_5 Average 19.18 20.99 33.43 29.57 15.63 15.54 12.13 9.53
lu.C.x_156_NORMAL_1 Average 21.60 23.37 20.11 19.60 17.11 17.83 18.17 18.20
lu.C.x_156_NORMAL_2 Average 21.00 20.54 19.88 18.79 17.62 18.67 19.29 20.21
lu.C.x_156_NORMAL_3 Average 19.50 19.94 20.12 18.62 19.88 19.50 19.00 19.44
lu.C.x_156_NORMAL_4 Average 20.62 19.72 20.03 22.17 18.21 18.55 18.45 18.24
lu.C.x_156_NORMAL_5 Average 19.64 19.86 21.46 22.43 17.21 17.89 18.96 18.54


This shows much more imbalance in the GROUP case. There are some single digits
and some 30s.

For comparison here are some from my 4-node (80 cpu) system:

v4
lu.C.x_76_GROUP_1.ps.numa.hist Average 19.58 17.67 18.25 20.50
lu.C.x_76_GROUP_2.ps.numa.hist Average 19.08 19.17 17.67 20.08
lu.C.x_76_GROUP_3.ps.numa.hist Average 19.42 18.58 18.42 19.58
lu.C.x_76_NORMAL_1.ps.numa.hist Average 20.50 17.33 19.08 19.08
lu.C.x_76_NORMAL_2.ps.numa.hist Average 19.45 18.73 19.27 18.55


v4a
lu.C.x_76_GROUP_1.ps.numa.hist Average 19.46 19.15 18.62 18.77
lu.C.x_76_GROUP_2.ps.numa.hist Average 19.00 18.58 17.75 20.67
lu.C.x_76_GROUP_3.ps.numa.hist Average 19.08 17.08 20.08 19.77
lu.C.x_76_NORMAL_1.ps.numa.hist Average 18.67 18.93 18.60 19.80
lu.C.x_76_NORMAL_2.ps.numa.hist Average 19.08 18.67 18.58 19.67

Nicely balanced in both kernels and normal and group are basically the
same.

There's still something between v1 and v4 on that 8-node system that is
still illustrating the original problem. On our other test systems this
series really works nicely to solve this problem. And even if we can't get
to the bottom of this, it's a significant improvement.


Here is v3 for the 8-node system
lu.C.x_152_GROUP_1 Average 17.52 16.86 17.90 18.52 20.00 19.00 22.00 20.19
lu.C.x_152_GROUP_2 Average 15.70 15.04 15.65 15.72 23.30 28.98 20.09 17.52
lu.C.x_152_GROUP_3 Average 27.72 32.79 22.89 22.62 11.01 12.90 12.14 9.93
lu.C.x_152_GROUP_4 Average 18.13 18.87 18.40 17.87 18.80 19.93 20.40 19.60
lu.C.x_152_GROUP_5 Average 24.14 26.46 20.92 21.43 14.70 16.05 15.14 13.16
lu.C.x_152_NORMAL_1 Average 21.03 22.43 20.27 19.97 18.37 18.80 16.27 14.87
lu.C.x_152_NORMAL_2 Average 19.24 18.29 18.41 17.41 19.71 19.00 20.29 19.65
lu.C.x_152_NORMAL_3 Average 19.43 20.00 19.05 20.24 18.76 17.38 18.52 18.62
lu.C.x_152_NORMAL_4 Average 17.19 18.25 17.81 18.69 20.44 19.75 20.12 19.75
lu.C.x_152_NORMAL_5 Average 19.25 19.56 19.12 19.56 19.38 19.38 18.12 17.62

lu.C.x_156_GROUP_1 Average 18.62 19.31 18.38 18.77 19.88 21.35 19.35 20.35
lu.C.x_156_GROUP_2 Average 15.58 12.72 14.96 14.83 20.59 19.35 29.75 28.22
lu.C.x_156_GROUP_3 Average 20.05 18.74 19.63 18.32 20.26 20.89 19.53 18.58
lu.C.x_156_GROUP_4 Average 14.77 11.42 13.01 10.09 27.05 33.52 23.16 22.98
lu.C.x_156_GROUP_5 Average 14.94 11.45 12.77 10.52 28.01 33.88 22.37 22.05
lu.C.x_156_NORMAL_1 Average 20.00 20.58 18.47 18.68 19.47 19.74 19.42 19.63
lu.C.x_156_NORMAL_2 Average 18.52 18.48 18.83 18.43 20.57 20.48 20.61 20.09
lu.C.x_156_NORMAL_3 Average 20.27 20.00 20.05 21.18 19.55 19.00 18.59 17.36
lu.C.x_156_NORMAL_4 Average 19.65 19.60 20.25 20.75 19.35 20.10 19.00 17.30
lu.C.x_156_NORMAL_5 Average 19.79 19.67 20.62 22.42 18.42 18.00 17.67 19.42


I'll try to find pre-patched results for this 8 node system. Just to keep things
together for reference here is the 4-node system before this re-work series.

lu.C.x_76_GROUP_1 Average 15.84 24.06 23.37 12.73
lu.C.x_76_GROUP_2 Average 15.29 22.78 22.49 15.45
lu.C.x_76_GROUP_3 Average 13.45 23.90 22.97 15.68
lu.C.x_76_NORMAL_1 Average 18.31 19.54 19.54 18.62
lu.C.x_76_NORMAL_2 Average 19.73 19.18 19.45 17.64

This produced a 4.5x slowdown for the group runs versus the nicely balanced
normal runs.



I can try to get traces but this is not my system so it may take a little
while. I've found that the existing trace points don't give enough information
to see what is happening in this problem. But the visualization in kernelshark
does show the problem pretty well. Do you want just the existing sched tracepoints
or should I update some of the traceprintks I used in the earlier traces?



Cheers,
Phil


>
> >
> > We're re-running the test to get more samples.
>
> Thanks
> Vincent
>
> >
> >
> > Other tests and systems were still fine.
> >
> >
> > Cheers,
> > Phil
> >
> >
> > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > series solves that issue.
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > >
> > > > >
> > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > >
> > > > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > kernel/sched/fair.c: * XXX illustrate
> > > > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > kernel/sched/fair.c: * [XXX expand on:
> > > > > kernel/sched/fair.c: * [XXX more?]
> > > > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > > > >
> > > >
> > > > I will have a look :-)
> > > >
> > > > > :-)
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Ingo
> > >
> > > --
> > >
> >
> > --
> >

--

2019-10-28 20:33:08

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

Hi Phil,

On Fri, 25 Oct 2019 at 15:33, Phil Auld <[email protected]> wrote:
>
>
> Hi Vincent,
>
>
> On Thu, Oct 24, 2019 at 04:59:05PM +0200 Vincent Guittot wrote:
> > On Thu, 24 Oct 2019 at 15:47, Phil Auld <[email protected]> wrote:
> > >
> > > On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > > > Hi Vincent,
> > > >
> > > > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <[email protected]> wrote:
> > > > > >
> >
> > [...]
> >
> > > > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > > > useful ...
> > > > > >
> > > > > > Anyway, please be on the lookout for such performance regression reports.
> > > > >
> > > > > Yes I monitor the regressions on the mailing list
> > > >
> > > >
> > > > Our kernel perf tests show good results across the board for v4.
> > > >
> > > > The issue we hit on the 8-node system is fixed. Thanks!
> > > >
> > > > As we didn't see the fairness issue I don't expect the results to be
> > > > that different on v4a (with the followup patch) but those tests are
> > > > queued up now and we'll see what they look like.
> > > >
> > >
> > > Initial results with fix patch (v4a) show that the outlier issues on
> > > the 8-node system have returned. Median time for 152 and 156 threads
> > > (160 cpu system) goes up significantly and worst case goes from 340
> > > and 250 to 550 sec. for both. And doubles from 150 to 300 for 144
> >
> > For v3, you had a x4 slow down IIRC.
> >
>
> Sorry, that was a confusing change of data point :)
>
>
> That 4x was the normal versus group result for v3. I.e. the usual
> view of this test case's data.
>
> These numbers above are the group vs group difference between
> v4 and v4a.

ok. Thanks for the clarification

>
> The similar data points are that for v4 there was no difference
> in performance between group and normal at 152 threads and a 35%
> drop off from normal to group at 156.
>
> With v4a there was 100% drop (2x slowdown) normal to group at 152
> and close to that at 156 (~75-80% drop off).
>
> So, yes, not as severe as v3. But significantly off from v4.

Thanks for the details

>
> >
> > > threads. These look more like the results from v3.
> >
> > OK. For v3, we were not sure that your UC triggers the slow path but
> > it seems that we have the confirmation now.
> > The problem happens only for this 8 node 160 cores system, isn't it ?
>
> Yes. It only shows up now on this 8-node system.

The input could mean that this system reaches a particular level of
utilization and load that is close to the threshold between 2
different behaviors, like the spare capacity and fully_busy/overloaded cases.
But on the other hand, there are fewer threads than CPUs in your UCs, so
at least one group at the NUMA level should be tagged as
has_spare_capacity and should pull tasks.
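
To give an idea, the group classification done by the rework ends up
looking roughly like this (simplified sketch of the series, not the
exact patch):

    static inline enum group_type
    group_classify(unsigned int imbalance_pct, struct sched_group *group,
                   struct sg_lb_stats *sgs)
    {
            if (group_is_overloaded(imbalance_pct, sgs))
                    return group_overloaded;

            if (sg_imbalanced(group))
                    return group_imbalanced;

            if (sgs->group_asym_packing)
                    return group_asym_packing;

            if (sgs->group_misfit_task_load)
                    return group_misfit_task;

            if (!group_has_capacity(imbalance_pct, sgs))
                    return group_fully_busy;

            /* fewer tasks than CPUs, or enough unused capacity */
            return group_has_spare;
    }

A group classified as having spare capacity is the preferred candidate
for pulling tasks.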

>
> >
> > The fix favors the local group so your UC seems to prefer spreading
> > tasks at wake up
> > If you have any traces that you can share, this could help to
> > understand what's going on. I will try to reproduce the problem on my
> > system
>
> I'm not actually sure the fix here is causing this. Looking at the data
> more closely I see similar imbalances on v4, v4a and v3.
>
> When you say slow versus fast wakeup paths what do you mean? I'm still
> learning my way around all this code.

When a task wakes up, we can decide to
- speed up the wakeup and shorten the list of cpus by comparing only
prev_cpu vs this_cpu (in fact the groups of cpus that share their
respective LLC). That's the fast wakeup path that is used most of the
time during a wakeup
- or start to look for the idlest CPU of the system and scan all domains.
That's the slow path that is used for new tasks or when a task wakes
up a lot of other tasks at the same time
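
Roughly, the relevant part of select_task_rq_fair() looks like this
(heavily simplified sketch from memory, details and corner cases
elided):

    want_affine = !wake_wide(p) && cpumask_test_cpu(cpu, p->cpus_ptr);

    for_each_domain(cpu, tmp) {
            if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
                cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
                    /* fast path: prev_cpu vs this_cpu and their LLC */
                    new_cpu = wake_affine(tmp, p, cpu, prev_cpu, sync);
                    break;
            }

            if (tmp->flags & sd_flag)
                    sd = tmp;
            else if (!want_affine)
                    break;
    }

    if (unlikely(sd)) {
            /* slow path: scan the domain for the idlest group/CPU */
            new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
    } else if (sd_flag & SD_BALANCE_WAKE) {
            /* fast path: pick an idle CPU close to prev_cpu/this_cpu */
            new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
    }

Here sd_flag is SD_BALANCE_WAKE for a wakeup and SD_BALANCE_FORK/EXEC
for fork/exec.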


>
> This particular test is specifically designed to highlight the imbalance
> cause by the use of group scheduler defined load and averages. The threads
> are mostly CPU bound but will join up every time step. So if each thread

Ok, the fact that they join up might be the root cause of your problem.
They will be woken up at the same time, by the same task and on the same CPU.

> more or less gets its own CPU (we run with fewer threads than CPUs) they
> all finish the timestep at about the same time. If threads are stuck
> sharing cpus then those finish later and the whole computation is slowed
> down. In addition to the NAS benchmark threads there are 2 stress CPU
> burners. These are either run in their own cgroups (thus having full "load")
> or all in the same cgroup with the benchmarck, thus all having tiny "loads".
>
> In this system, there are 20 cpus per node. We track average number of
> benchmark threads running in each node. Generally for a balanced case
> we should not have any much over 20 and indeed in the normal case (every
> one in one cgroup) we see pretty nice balance. In the cgroup case we are
> still seeing numbers much higher than 20.
>
> Here are some eye charts:
>
> This is the GROUP numbers from that machine on the v1 series (I don't have the
> NORMAL lines handy for this one):
> lu.C.x_152_GROUP_1 Average 18.08 18.17 19.58 19.29 19.25 17.50 21.46 18.67
> lu.C.x_152_GROUP_2 Average 17.12 17.48 17.88 17.62 19.57 17.31 23.00 22.02
> lu.C.x_152_GROUP_3 Average 17.82 17.97 18.12 18.18 24.55 22.18 16.97 16.21
> lu.C.x_152_GROUP_4 Average 18.47 19.08 18.50 18.66 21.45 25.00 15.47 15.37
> lu.C.x_152_GROUP_5 Average 20.46 20.71 27.38 24.75 17.06 16.65 12.81 12.19
>
> lu.C.x_156_GROUP_1 Average 18.70 18.80 20.25 19.50 20.45 20.30 19.55 18.45
> lu.C.x_156_GROUP_2 Average 19.29 19.90 17.71 18.10 20.76 21.57 19.81 18.86
> lu.C.x_156_GROUP_3 Average 25.09 29.19 21.83 21.33 18.67 18.57 11.03 10.29
> lu.C.x_156_GROUP_4 Average 18.60 19.10 19.20 18.70 20.30 20.00 19.70 20.40
> lu.C.x_156_GROUP_5 Average 18.58 18.95 18.63 18.1 17.32 19.37 23.92 21.08
>
> There are a couple that did not balance well but the overall results were good.
>
> This is v4:
> lu.C.x_152_GROUP_1 Average 18.80 19.25 21.95 21.25 17.55 17.25 17.85 18.10
> lu.C.x_152_GROUP_2 Average 20.57 20.62 19.76 17.76 18.95 18.33 18.52 17.48
> lu.C.x_152_GROUP_3 Average 15.39 12.22 13.96 12.19 25.51 28.91 21.88 21.94
> lu.C.x_152_GROUP_4 Average 20.30 19.75 20.75 19.45 18.15 17.80 18.15 17.65
> lu.C.x_152_GROUP_5 Average 15.13 12.21 13.63 11.39 25.42 30.21 21.55 22.46
> lu.C.x_152_NORMAL_1 Average 17.00 16.88 19.52 18.28 19.24 19.08 21.08 20.92
> lu.C.x_152_NORMAL_2 Average 18.61 16.56 18.56 17.00 20.56 20.28 20.00 20.44
> lu.C.x_152_NORMAL_3 Average 19.27 19.77 21.23 20.86 18.00 17.68 17.73 17.45
> lu.C.x_152_NORMAL_4 Average 20.24 19.33 21.33 21.10 17.33 18.43 17.57 16.67
> lu.C.x_152_NORMAL_5 Average 21.27 20.36 20.86 19.36 17.50 17.77 17.32 17.55
>
> lu.C.x_156_GROUP_1 Average 18.60 18.68 21.16 23.40 18.96 19.72 17.76 17.72
> lu.C.x_156_GROUP_2 Average 22.76 21.71 20.55 21.32 18.18 16.42 17.58 17.47
> lu.C.x_156_GROUP_3 Average 13.62 11.52 15.54 15.58 25.42 28.54 23.22 22.56
> lu.C.x_156_GROUP_4 Average 17.73 18.14 21.95 21.82 19.73 19.68 18.55 18.41
> lu.C.x_156_GROUP_5 Average 15.32 15.14 17.30 17.11 23.59 25.75 20.77 21.02
> lu.C.x_156_NORMAL_1 Average 19.06 18.72 19.56 18.72 19.72 21.28 19.44 19.50
> lu.C.x_156_NORMAL_2 Average 20.25 19.86 22.61 23.18 18.32 17.93 16.39 17.46
> lu.C.x_156_NORMAL_3 Average 18.84 17.88 19.24 17.76 21.04 20.64 20.16 20.44
> lu.C.x_156_NORMAL_4 Average 20.67 19.44 20.74 22.15 18.89 18.85 18.00 17.26
> lu.C.x_156_NORMAL_5 Average 20.12 19.65 24.12 24.15 17.40 16.62 17.10 16.83
>
> This one is better overall, but there are some mid 20s abd 152_GROUP_5 is pretty bad.
>
>
> This is v4a
> lu.C.x_152_GROUP_1 Average 28.64 34.49 23.60 24.48 10.35 11.99 8.36 10.09
> lu.C.x_152_GROUP_2 Average 17.36 17.33 15.48 13.12 24.90 24.43 18.55 20.83
> lu.C.x_152_GROUP_3 Average 20.00 19.92 20.21 21.33 18.50 18.50 16.50 17.04
> lu.C.x_152_GROUP_4 Average 18.07 17.87 18.40 17.87 23.07 22.73 17.60 16.40
> lu.C.x_152_GROUP_5 Average 25.50 24.69 21.48 21.46 16.85 16.00 14.06 11.96
> lu.C.x_152_NORMAL_1 Average 22.27 20.77 20.60 19.83 16.73 17.53 15.83 18.43
> lu.C.x_152_NORMAL_2 Average 19.83 20.81 23.06 21.97 17.28 16.92 15.83 16.31
> lu.C.x_152_NORMAL_3 Average 17.85 19.31 18.85 19.08 19.00 19.31 19.08 19.54
> lu.C.x_152_NORMAL_4 Average 18.87 18.13 19.00 20.27 18.20 18.67 19.73 19.13
> lu.C.x_152_NORMAL_5 Average 18.16 18.63 18.11 17.00 19.79 20.63 19.47 20.21
>
> lu.C.x_156_GROUP_1 Average 24.96 26.15 21.78 21.48 18.52 19.11 12.98 11.02
> lu.C.x_156_GROUP_2 Average 18.69 19.00 18.65 18.42 20.50 20.46 19.85 20.42
> lu.C.x_156_GROUP_3 Average 24.32 23.79 20.82 20.95 16.63 16.61 18.47 14.42
> lu.C.x_156_GROUP_4 Average 18.27 18.34 14.88 16.07 27.00 21.93 20.56 18.95
> lu.C.x_156_GROUP_5 Average 19.18 20.99 33.43 29.57 15.63 15.54 12.13 9.53
> lu.C.x_156_NORMAL_1 Average 21.60 23.37 20.11 19.60 17.11 17.83 18.17 18.20
> lu.C.x_156_NORMAL_2 Average 21.00 20.54 19.88 18.79 17.62 18.67 19.29 20.21
> lu.C.x_156_NORMAL_3 Average 19.50 19.94 20.12 18.62 19.88 19.50 19.00 19.44
> lu.C.x_156_NORMAL_4 Average 20.62 19.72 20.03 22.17 18.21 18.55 18.45 18.24
> lu.C.x_156_NORMAL_5 Average 19.64 19.86 21.46 22.43 17.21 17.89 18.96 18.54
>
>
> This shows much more imblance in the GROUP case. There are some single digits
> and some 30s.
>
> For comparison here are some from my 4-node (80 cpu) system:
>
> v4
> lu.C.x_76_GROUP_1.ps.numa.hist Average 19.58 17.67 18.25 20.50
> lu.C.x_76_GROUP_2.ps.numa.hist Average 19.08 19.17 17.67 20.08
> lu.C.x_76_GROUP_3.ps.numa.hist Average 19.42 18.58 18.42 19.58
> lu.C.x_76_NORMAL_1.ps.numa.hist Average 20.50 17.33 19.08 19.08
> lu.C.x_76_NORMAL_2.ps.numa.hist Average 19.45 18.73 19.27 18.55
>
>
> v4a
> lu.C.x_76_GROUP_1.ps.numa.hist Average 19.46 19.15 18.62 18.77
> lu.C.x_76_GROUP_2.ps.numa.hist Average 19.00 18.58 17.75 20.67
> lu.C.x_76_GROUP_3.ps.numa.hist Average 19.08 17.08 20.08 19.77
> lu.C.x_76_NORMAL_1.ps.numa.hist Average 18.67 18.93 18.60 19.80
> lu.C.x_76_NORMAL_2.ps.numa.hist Average 19.08 18.67 18.58 19.67
>
> Nicely balanced in both kernels and normal and group are basically the
> same.

The fact that the 4-node system works well but not the 8-node one is a bit
surprising, except if this means more NUMA levels in the sched_domain
topology.
Could you give us more details about the sched domain topology ?

>
> There's still something between v1 and v4 on that 8-node system that is
> still illustrating the original problem. On our other test systems this
> series really works nicely to solve this problem. And even if we can't get
> to the bottom if this it's a significant improvement.
>
>
> Here is v3 for the 8-node system
> lu.C.x_152_GROUP_1 Average 17.52 16.86 17.90 18.52 20.00 19.00 22.00 20.19
> lu.C.x_152_GROUP_2 Average 15.70 15.04 15.65 15.72 23.30 28.98 20.09 17.52
> lu.C.x_152_GROUP_3 Average 27.72 32.79 22.89 22.62 11.01 12.90 12.14 9.93
> lu.C.x_152_GROUP_4 Average 18.13 18.87 18.40 17.87 18.80 19.93 20.40 19.60
> lu.C.x_152_GROUP_5 Average 24.14 26.46 20.92 21.43 14.70 16.05 15.14 13.16
> lu.C.x_152_NORMAL_1 Average 21.03 22.43 20.27 19.97 18.37 18.80 16.27 14.87
> lu.C.x_152_NORMAL_2 Average 19.24 18.29 18.41 17.41 19.71 19.00 20.29 19.65
> lu.C.x_152_NORMAL_3 Average 19.43 20.00 19.05 20.24 18.76 17.38 18.52 18.62
> lu.C.x_152_NORMAL_4 Average 17.19 18.25 17.81 18.69 20.44 19.75 20.12 19.75
> lu.C.x_152_NORMAL_5 Average 19.25 19.56 19.12 19.56 19.38 19.38 18.12 17.62
>
> lu.C.x_156_GROUP_1 Average 18.62 19.31 18.38 18.77 19.88 21.35 19.35 20.35
> lu.C.x_156_GROUP_2 Average 15.58 12.72 14.96 14.83 20.59 19.35 29.75 28.22
> lu.C.x_156_GROUP_3 Average 20.05 18.74 19.63 18.32 20.26 20.89 19.53 18.58
> lu.C.x_156_GROUP_4 Average 14.77 11.42 13.01 10.09 27.05 33.52 23.16 22.98
> lu.C.x_156_GROUP_5 Average 14.94 11.45 12.77 10.52 28.01 33.88 22.37 22.05
> lu.C.x_156_NORMAL_1 Average 20.00 20.58 18.47 18.68 19.47 19.74 19.42 19.63
> lu.C.x_156_NORMAL_2 Average 18.52 18.48 18.83 18.43 20.57 20.48 20.61 20.09
> lu.C.x_156_NORMAL_3 Average 20.27 20.00 20.05 21.18 19.55 19.00 18.59 17.36
> lu.C.x_156_NORMAL_4 Average 19.65 19.60 20.25 20.75 19.35 20.10 19.00 17.30
> lu.C.x_156_NORMAL_5 Average 19.79 19.67 20.62 22.42 18.42 18.00 17.67 19.42
>
>
> I'll try to find pre-patched results for this 8 node system. Just to keep things
> together for reference here is the 4-node system before this re-work series.
>
> lu.C.x_76_GROUP_1 Average 15.84 24.06 23.37 12.73
> lu.C.x_76_GROUP_2 Average 15.29 22.78 22.49 15.45
> lu.C.x_76_GROUP_3 Average 13.45 23.90 22.97 15.68
> lu.C.x_76_NORMAL_1 Average 18.31 19.54 19.54 18.62
> lu.C.x_76_NORMAL_2 Average 19.73 19.18 19.45 17.64
>
> This produced a 4.5x slowdown for the group runs versus the nicely balance
> normal runs.
>
>
>
> I can try to get traces but this is not my system so it may take a little
> while. I've found that the existing trace points don't give enough information
> to see what is happening in this problem. But the visualization in kernelshark
> does show the problem pretty well. Do you want just the existing sched tracepoints
> or should I update some of the traceprintks I used in the earlier traces?

The standard tracepoints are a good starting point, but tracing the
statistics for find_busiest_group and find_idlest_group should help a
lot.

Cheers,
Vincent

>
>
>
> Cheers,
> Phil
>
>
> >
> > >
> > > We're re-running the test to get more samples.
> >
> > Thanks
> > Vincent
> >
> > >
> > >
> > > Other tests and systems were still fine.
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > series solves that issue.
> > > >
> > > >
> > > > Cheers,
> > > > Phil
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > >
> > > > > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > > kernel/sched/fair.c: * XXX illustrate
> > > > > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > > kernel/sched/fair.c: * [XXX expand on:
> > > > > > kernel/sched/fair.c: * [XXX more?]
> > > > > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > > > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > > > > >
> > > > >
> > > > > I will have a look :-)
> > > > >
> > > > > > :-)
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Ingo
> > > >
> > > > --
> > > >
> > >
> > > --
> > >
>
> --
>

2019-10-30 14:41:39

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

Hi Vincent,

On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> Hi Phil,
>

...

>
> The input could mean that this system reaches a particular level of
> utilization and load that is close to the threshold between 2
> different behavior like spare capacity and fully_busy/overloaded case.
> But at the opposite, there is less threads that CPUs in your UCs so
> one group at least at NUMA level should be tagged as
> has_spare_capacity and should pull tasks.

Yes. Maybe we don't hit that and rely on "load" since things look
busy. There are only 2 spare cpus in the 156 + 2 case. Is it possible
that information is getting lost with the extra NUMA levels?

>
> >
> > >
> > > The fix favors the local group so your UC seems to prefer spreading
> > > tasks at wake up
> > > If you have any traces that you can share, this could help to
> > > understand what's going on. I will try to reproduce the problem on my
> > > system
> >
> > I'm not actually sure the fix here is causing this. Looking at the data
> > more closely I see similar imbalances on v4, v4a and v3.
> >
> > When you say slow versus fast wakeup paths what do you mean? I'm still
> > learning my way around all this code.
>
> When task wakes up, we can decide to
> - speedup the wakeup and shorten the list of cpus and compare only
> prev_cpu vs this_cpu (in fact the group of cpu that share their
> respective LLC). That's the fast wakeup path that is used most of the
> time during a wakeup
> - or start to find the idlest CPU of the system and scan all domains.
> That's the slow path that is used for new tasks or when a task wakes
> up a lot of other tasks at the same time
>

Thanks.

>
> >
> > This particular test is specifically designed to highlight the imbalance
> > cause by the use of group scheduler defined load and averages. The threads
> > are mostly CPU bound but will join up every time step. So if each thread
>
> ok the fact that they join up might be the root cause of your problem.
> They will wake up at the same time by the same task and CPU.
>

If that was the problem I'd expect issues on other high node count systems.

>
> That fact that the 4 nodes works well but not the 8 nodes is a bit
> surprising except if this means more NUMA level in the sched_domain
> topology
> Could you give us more details about the sched domain topology ?
>

The 8-node system has 5 sched domain levels. The 4-node system only
has 3.


cpu159 0 0 0 0 0 0 4361694551702 124316659623 94736
domain0 80000000,00000000,00008000,00000000,00000000 0 0
domain1 ffc00000,00000000,0000ffc0,00000000,00000000 0 0
domain2 fffff000,00000000,0000ffff,f0000000,00000000 0 0
domain3 ffffffff,ff000000,0000ffff,ffffff00,00000000 0 0
domain4 ffffffff,ffffffff,ffffffff,ffffffff,ffffffff 0 0

numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 80 81 82 83 84 85 86 87 88 89
node 0 size: 126928 MB
node 0 free: 126452 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 90 91 92 93 94 95 96 97 98 99
node 1 size: 129019 MB
node 1 free: 128813 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29 100 101 102 103 104 105 106 107 108 109
node 2 size: 129019 MB
node 2 free: 128875 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 110 111 112 113 114 115 116 117 118 119
node 3 size: 129019 MB
node 3 free: 128850 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49 120 121 122 123 124 125 126 127 128 129
node 4 size: 128993 MB
node 4 free: 128862 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59 130 131 132 133 134 135 136 137 138 139
node 5 size: 129019 MB
node 5 free: 128872 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69 140 141 142 143 144 145 146 147 148 149
node 6 size: 129019 MB
node 6 free: 128852 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79 150 151 152 153 154 155 156 157 158 159
node 7 size: 112889 MB
node 7 free: 112720 MB
node distances:
node 0 1 2 3 4 5 6 7
0: 10 12 17 17 19 19 19 19
1: 12 10 17 17 19 19 19 19
2: 17 17 10 12 19 19 19 19
3: 17 17 12 10 19 19 19 19
4: 19 19 19 19 10 12 17 17
5: 19 19 19 19 12 10 17 17
6: 19 19 19 19 17 17 10 12
7: 19 19 19 19 17 17 12 10



available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 40 41 42 43 44 45 46 47 48 49
node 0 size: 257943 MB
node 0 free: 257602 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 50 51 52 53 54 55 56 57 58 59
node 1 size: 258043 MB
node 1 free: 257619 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66 67 68 69
node 2 size: 258043 MB
node 2 free: 257879 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 70 71 72 73 74 75 76 77 78 79
node 3 size: 258043 MB
node 3 free: 257823 MB
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10




An 8-node system (albeit with sub-numa) has node distances

node distances:
node 0 1 2 3 4 5 6 7
0: 10 11 21 21 21 21 21 21
1: 11 10 21 21 21 21 21 21
2: 21 21 10 11 21 21 21 21
3: 21 21 11 10 21 21 21 21
4: 21 21 21 21 10 11 21 21
5: 21 21 21 21 11 10 21 21
6: 21 21 21 21 21 21 10 11
7: 21 21 21 21 21 21 11 10

This one does not exhibit the problem with the latest (v4a). But it also
only has 3 levels.


> >
> > There's still something between v1 and v4 on that 8-node system that is
> > still illustrating the original problem. On our other test systems this
> > series really works nicely to solve this problem. And even if we can't get
> > to the bottom if this it's a significant improvement.
> >
> >
> > Here is v3 for the 8-node system
> > lu.C.x_152_GROUP_1 Average 17.52 16.86 17.90 18.52 20.00 19.00 22.00 20.19
> > lu.C.x_152_GROUP_2 Average 15.70 15.04 15.65 15.72 23.30 28.98 20.09 17.52
> > lu.C.x_152_GROUP_3 Average 27.72 32.79 22.89 22.62 11.01 12.90 12.14 9.93
> > lu.C.x_152_GROUP_4 Average 18.13 18.87 18.40 17.87 18.80 19.93 20.40 19.60
> > lu.C.x_152_GROUP_5 Average 24.14 26.46 20.92 21.43 14.70 16.05 15.14 13.16
> > lu.C.x_152_NORMAL_1 Average 21.03 22.43 20.27 19.97 18.37 18.80 16.27 14.87
> > lu.C.x_152_NORMAL_2 Average 19.24 18.29 18.41 17.41 19.71 19.00 20.29 19.65
> > lu.C.x_152_NORMAL_3 Average 19.43 20.00 19.05 20.24 18.76 17.38 18.52 18.62
> > lu.C.x_152_NORMAL_4 Average 17.19 18.25 17.81 18.69 20.44 19.75 20.12 19.75
> > lu.C.x_152_NORMAL_5 Average 19.25 19.56 19.12 19.56 19.38 19.38 18.12 17.62
> >
> > lu.C.x_156_GROUP_1 Average 18.62 19.31 18.38 18.77 19.88 21.35 19.35 20.35
> > lu.C.x_156_GROUP_2 Average 15.58 12.72 14.96 14.83 20.59 19.35 29.75 28.22
> > lu.C.x_156_GROUP_3 Average 20.05 18.74 19.63 18.32 20.26 20.89 19.53 18.58
> > lu.C.x_156_GROUP_4 Average 14.77 11.42 13.01 10.09 27.05 33.52 23.16 22.98
> > lu.C.x_156_GROUP_5 Average 14.94 11.45 12.77 10.52 28.01 33.88 22.37 22.05
> > lu.C.x_156_NORMAL_1 Average 20.00 20.58 18.47 18.68 19.47 19.74 19.42 19.63
> > lu.C.x_156_NORMAL_2 Average 18.52 18.48 18.83 18.43 20.57 20.48 20.61 20.09
> > lu.C.x_156_NORMAL_3 Average 20.27 20.00 20.05 21.18 19.55 19.00 18.59 17.36
> > lu.C.x_156_NORMAL_4 Average 19.65 19.60 20.25 20.75 19.35 20.10 19.00 17.30
> > lu.C.x_156_NORMAL_5 Average 19.79 19.67 20.62 22.42 18.42 18.00 17.67 19.42
> >
> >
> > I'll try to find pre-patched results for this 8 node system. Just to keep things
> > together for reference here is the 4-node system before this re-work series.
> >
> > lu.C.x_76_GROUP_1 Average 15.84 24.06 23.37 12.73
> > lu.C.x_76_GROUP_2 Average 15.29 22.78 22.49 15.45
> > lu.C.x_76_GROUP_3 Average 13.45 23.90 22.97 15.68
> > lu.C.x_76_NORMAL_1 Average 18.31 19.54 19.54 18.62
> > lu.C.x_76_NORMAL_2 Average 19.73 19.18 19.45 17.64
> >
> > This produced a 4.5x slowdown for the group runs versus the nicely balance
> > normal runs.
> >

Here is the base 5.4.0-rc3+ kernel on the 8-node system:

lu.C.x_156_GROUP_1 Average 10.87 0.00 0.00 11.49 36.69 34.26 30.59 32.10
lu.C.x_156_GROUP_2 Average 20.15 16.32 9.49 24.91 21.07 20.93 21.63 21.50
lu.C.x_156_GROUP_3 Average 21.27 17.23 11.84 21.80 20.91 20.68 21.11 21.16
lu.C.x_156_GROUP_4 Average 19.44 6.53 8.71 19.72 22.95 23.16 28.85 26.64
lu.C.x_156_GROUP_5 Average 20.59 6.20 11.32 14.63 28.73 30.36 22.20 21.98
lu.C.x_156_NORMAL_1 Average 20.50 19.95 20.40 20.45 18.75 19.35 18.25 18.35
lu.C.x_156_NORMAL_2 Average 17.15 19.04 18.42 18.69 21.35 21.42 20.00 19.92
lu.C.x_156_NORMAL_3 Average 18.00 18.15 17.55 17.60 18.90 18.40 19.90 19.75
lu.C.x_156_NORMAL_4 Average 20.53 20.05 20.21 19.11 19.00 19.47 19.37 18.26
lu.C.x_156_NORMAL_5 Average 18.72 18.78 19.72 18.50 19.67 19.72 21.11 19.78

Including the actual benchmark results.
============156_GROUP========Mop/s===================================
min q1 median q3 max
1564.63 3003.87 3928.23 5411.13 8386.66
============156_GROUP========time====================================
min q1 median q3 max
243.12 376.82 519.06 678.79 1303.18
============156_NORMAL========Mop/s===================================
min q1 median q3 max
13845.6 18013.8 18545.5 19359.9 19647.4
============156_NORMAL========time====================================
min q1 median q3 max
103.78 105.32 109.95 113.19 147.27

You can see the ~5x slowdown of the pre-rework issue. v4a is much improved over
mainline.

I'll try to find some other machines as well.


> >
> >
> > I can try to get traces but this is not my system so it may take a little
> > while. I've found that the existing trace points don't give enough information
> > to see what is happening in this problem. But the visualization in kernelshark
> > does show the problem pretty well. Do you want just the existing sched tracepoints
> > or should I update some of the traceprintks I used in the earlier traces?
>
> The standard tracepoint is a good starting point but tracing the
> statistings for find_busiest_group and find_idlest_group should help a
> lot.
>

I have some traces which I'll send you directly since they're large.


Cheers,
Phil



> Cheers,
> Vincent
>
> >
> >
> >
> > Cheers,
> > Phil
> >
> >
> > >
> > > >
> > > > We're re-running the test to get more samples.
> > >
> > > Thanks
> > > Vincent
> > >
> > > >
> > > >
> > > > Other tests and systems were still fine.
> > > >
> > > >
> > > > Cheers,
> > > > Phil
> > > >
> > > >
> > > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > > series solves that issue.
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Phil
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > > >
> > > > > > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > > > kernel/sched/fair.c: * XXX illustrate
> > > > > > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > > > kernel/sched/fair.c: * [XXX expand on:
> > > > > > > kernel/sched/fair.c: * [XXX more?]
> > > > > > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > > > > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > > > > > >
> > > > > >
> > > > > > I will have a look :-)
> > > > > >
> > > > > > > :-)
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Ingo
> > > > >
> > > > > --
> > > > >
> > > >
> > > > --
> > > >
> >
> > --
> >

--

2019-10-30 14:53:53

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] sched/fair: clean up asym packing

On Fri, Oct 18, 2019 at 03:26:28PM +0200, Vincent Guittot wrote:
> Clean up asym packing to follow the default load balance behavior:
> - classify the group by creating a group_asym_packing field.
> - calculate the imbalance in calculate_imbalance() instead of bypassing it.
>
> We don't need to test twice same conditions anymore to detect asym packing
> and we consolidate the calculation of imbalance in calculate_imbalance().
>
> There is no functional changes.
>
> Signed-off-by: Vincent Guittot <[email protected]>
> Acked-by: Rik van Riel <[email protected]>
> ---
> kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
> 1 file changed, 16 insertions(+), 47 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1f0a5e1..617145c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7675,6 +7675,7 @@ struct sg_lb_stats {
> unsigned int group_weight;
> enum group_type group_type;
> int group_no_capacity;
> + unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
> unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
> #ifdef CONFIG_NUMA_BALANCING
> unsigned int nr_numa_running;
> @@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> * ASYM_PACKING needs to move all the work to the highest
> * prority CPUs in the group, therefore mark all groups
> * of lower priority than ourself as busy.
> + *
> + * This is primarily intended to used at the sibling level. Some
> + * cores like POWER7 prefer to use lower numbered SMT threads. In the
> + * case of POWER7, it can move to lower SMT modes only when higher
> + * threads are idle. When in lower SMT modes, the threads will
> + * perform better since they share less core resources. Hence when we
> + * have idle threads, we want them to be the higher ones.
> */
> if (sgs->sum_nr_running &&
> sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
> + sgs->group_asym_packing = 1;
> if (!sds->busiest)
> return true;
>

(I did not read any of the earlier implementations of this series, maybe
this was discussed already in which case, sorry for the noise)

Are you *sure* this is not a functional change?

Asym packing is a twisty maze of headaches and I'm not familiar enough
with it to be 100% certain without spending a lot of time on this. Even
spotting how Power7 ends up using asym packing with lower-numbered SMT
threads is a bit of a challenge. Specifically, it relies on the scheduler
domain SD_ASYM_PACKING flag for SMT domains to use the weak implementation
of arch_asym_cpu_priority, which by default favours the lower-numbered CPU.
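
For reference, the helper and its weak default look like this (quoted
from memory, so treat it as a sketch):

    /* kernel/sched/fair.c */
    int __weak arch_asym_cpu_priority(int cpu)
    {
            return -cpu;    /* default: lower-numbered CPUs are preferred */
    }

    /* kernel/sched/sched.h */
    static inline bool sched_asym_prefer(int a, int b)
    {
            return arch_asym_cpu_priority(a) > arch_asym_cpu_priority(b);
    }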

The check_asym_packing implementation checks that flag but I can't see
the equivalent type of check here. update_sd_pick_busiest could be called
for domains that span NUMA or basically any domain that does not specify
SD_ASYM_PACKING and end up favouring a lower-numbered CPU (or whatever
arch_asym_cpu_priority returns in the case of x86 which has a different
idea for favoured CPUs).

sched_asym_prefer appears to be a function that is very easy to use
incorrectly. Should it take env and check the SD flags first?

--
Mel Gorman
SUSE Labs

2019-10-30 16:00:31

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance

On Fri, Oct 18, 2019 at 03:26:33PM +0200, Vincent Guittot wrote:
> runnable load has been introduced to take into account the case
> where blocked load biases the load balance decision which was selecting
> underutilized group with huge blocked load whereas other groups were
> overloaded.
>
> The load is now only used when groups are overloaded. In this case,
> it's worth being conservative and taking into account the sleeping
> tasks that might wakeup on the cpu.
>
> Signed-off-by: Vincent Guittot <[email protected]>

Hmm.... ok. Superficially I get what you're doing but worry slightly
about groups that have lots of tasks that frequently idle for short
periods on IO.
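
For anyone following along, the distinction is roughly the following
(sketch only, helper names approximate):

    /* runnable load: only the tasks currently queued on the rq */
    static unsigned long cpu_runnable_load(struct rq *rq)
    {
            return cfs_rq_runnable_load_avg(&rq->cfs);
    }

    /* load: also keeps the contribution of recently blocked tasks,
     * e.g. tasks sleeping for short periods on IO */
    static unsigned long cpu_load(struct rq *rq)
    {
            return cfs_rq_load_avg(&rq->cfs);
    }

So a group whose tasks block briefly on IO still carries their load,
which is the conservative behaviour the changelog argues for.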

Unfortunately, when I queued this series for testing I did not queue a load
that idles rapidly for short durations, which would highlight problems in
that area.

I cannot convince myself it's ok enough for an ack but I have no reason
to complain either.

--
Mel Gorman
SUSE Labs

2019-10-30 16:05:10

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] sched/fair: clean up asym packing

On Wed, 30 Oct 2019 at 15:51, Mel Gorman <[email protected]> wrote:
>
> On Fri, Oct 18, 2019 at 03:26:28PM +0200, Vincent Guittot wrote:
> > Clean up asym packing to follow the default load balance behavior:
> > - classify the group by creating a group_asym_packing field.
> > - calculate the imbalance in calculate_imbalance() instead of bypassing it.
> >
> > We don't need to test twice same conditions anymore to detect asym packing
> > and we consolidate the calculation of imbalance in calculate_imbalance().
> >
> > There is no functional changes.
> >
> > Signed-off-by: Vincent Guittot <[email protected]>
> > Acked-by: Rik van Riel <[email protected]>
> > ---
> > kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
> > 1 file changed, 16 insertions(+), 47 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1f0a5e1..617145c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7675,6 +7675,7 @@ struct sg_lb_stats {
> > unsigned int group_weight;
> > enum group_type group_type;
> > int group_no_capacity;
> > + unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
> > unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
> > #ifdef CONFIG_NUMA_BALANCING
> > unsigned int nr_numa_running;
> > @@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> > * ASYM_PACKING needs to move all the work to the highest
> > * prority CPUs in the group, therefore mark all groups
> > * of lower priority than ourself as busy.
> > + *
> > + * This is primarily intended to used at the sibling level. Some
> > + * cores like POWER7 prefer to use lower numbered SMT threads. In the
> > + * case of POWER7, it can move to lower SMT modes only when higher
> > + * threads are idle. When in lower SMT modes, the threads will
> > + * perform better since they share less core resources. Hence when we
> > + * have idle threads, we want them to be the higher ones.
> > */
> > if (sgs->sum_nr_running &&
> > sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
> > + sgs->group_asym_packing = 1;
> > if (!sds->busiest)
> > return true;
> >
>
> (I did not read any of the earlier implementations of this series, maybe
> this was discussed already in which case, sorry for the noise)
>
> Are you *sure* this is not a functional change?
>
> Asym packing is a twisty maze of headaches and I'm not familiar enough
> with it to be 100% certain without spending a lot of time on this. Even
> spotting how Power7 ends up using asym packing with lower-numbered SMT
> threads is a bit of a challenge. Specifically, it relies on the scheduler
> domain SD_ASYM_PACKING flag for SMT domains to use the weak implementation
> of arch_asym_cpu_priority which by defaults favours the lower-numbered CPU.
>
> The check_asym_packing implementation checks that flag but I can't see
> the equiavlent type of check here. update_sd_pick_busiest could be called

The checks of SD_ASYM_PACKING and CPU_NOT_IDLE are already done earlier in
the function, but outside of the patch context.
In fact this part of update_sd_pick_busiest is already dedicated to
asym_packing.

What I'm doing is that, instead of checking asym_packing in
update_sd_pick_busiest and then rechecking the same thing in
find_busiest_group, I save the check result and reuse it.

Also patch 04 moves this code further.
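
i.e. the intent is something like this (sketch, not the exact hunks):

    /* update_sd_pick_busiest(): evaluate the asym condition once ... */
    if (sgs->sum_nr_running &&
        sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu))
            sgs->group_asym_packing = 1;

    /* ... and find_busiest_group() reuses the stored result instead of
     * recomputing it via check_asym_packing():
     */
    if (busiest->group_asym_packing)
            goto force_balance;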

> for domains that span NUMA or basically any domain that does not specify
> SD_ASYM_PACKING and end up favouring a lower-numbered CPU (or whatever
> arch_asym_cpu_priority returns in the case of x86 which has a different
> idea for favoured CPUs).
>
> sched_asym_prefer appears to be a function that is very easy to use
> incorrectly. Should it take env and check the SD flags first?
>
> --
> Mel Gorman
> SUSE Labs

2019-10-30 16:08:52

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCH] sched/fair: fix rework of find_idlest_group()

On Tue, Oct 22, 2019 at 06:46:38PM +0200, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
>
> Such unfairness can be easily highlighted with the unixbench execl 1 task.
> This test continuously call execve() and the scheduler looks for the idlest
> group/CPU on which it should place the task. Because the task runs on the
> local group/CPU, the latter seems already busy even if there is nothing
> else running on it. As a result, the scheduler will always select another
> group/CPU than the local one.
>
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: Vincent Guittot <[email protected]>

I had gotten fried by this point and had not queued this patch in advance
so I don't want to comment one way or the other. However, I note this was
not picked up in tip, and it probably is best that this series all go in
as one lump rather than separating out the fixes in the final merge. Otherwise
it'll trigger false positives from LKP.

--
Mel Gorman
SUSE Labs

2019-10-30 16:27:38

by Mel Gorman

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > <SNIP>
>
> Thanks, that's an excellent series!
>

Agreed despite the level of whining and complaining I made during the
review.

> I've queued it up in sched/core with a handful of readability edits to
> comments and changelogs.
>
> There are some upstreaming caveats though, I expect this series to be a
> performance regression magnet:
>
> - load_balance() and wake-up changes invariably are such: some workloads
> only work/scale well by accident, and if we touch the logic it might
> flip over into a less advantageous scheduling pattern.
>
> - In particular the changes from balancing and waking on runnable load
> to full load that includes blocking *will* shift IO-intensive
> workloads that you tests don't fully capture I believe. You also made
> idle balancing more aggressive in essence - which might reduce cache
> locality for some workloads.
>
> A full run on Mel Gorman's magic scalability test-suite would be super
> useful ...
>

I queued this back on the 21st and it took this long for me to get back
to it.

What I tested did not include the fix for the last patch so I cannot say
the data is that useful. I also failed to include something that exercised
the IO paths in a way that idles rapidly as that can catch interesting
details (usually cpufreq related but sometimes load-balancing related).
There was no real thinking behind this decision, I just used an old
collection of tests to get a general feel for the series.

Most of the results were performance-neutral and some notable gains
(kernel compiles were 1-6% faster depending on the -j count). Hackbench
saw a disproportionate gain in terms of performance but I tend to be wary
of hackbench as improving it is rarely a universal win.
There tends to be some jitter around the point where a NUMA node's worth
of CPUs gets overloaded. tbench (mmtests configuration network-tbench) on
a NUMA machine showed gains for low thread counts and high thread counts
but a loss near the boundary where a single node would get overloaded.

Some NAS-related workloads saw a drop in performance on NUMA machines
but the size class might be too small to be certain, I'd have to rerun
with the D class to be sure. The biggest strange drop in performance
was the elapsed time to run the git test suite (mmtests configuration
workload-shellscripts modified to use a fresh XFS partition), which took
17.61% longer to execute on a UMA Skylake machine. This *might* be due to the
missing fix because it is mostly a single-task workload.

I'm not going to go through the results in detail because I think another
full round of testing would be required to take the fix into account. I'd
also prefer to wait to see if the review results in any material change
to the series.

--
Mel Gorman
SUSE Labs

2019-10-30 16:37:35

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance



On 30/10/2019 17:24, Dietmar Eggemann wrote:
> On 30.10.19 15:39, Phil Auld wrote:
>> Hi Vincent,
>>
>> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
>
> [...]
>
>>>> When you say slow versus fast wakeup paths what do you mean? I'm still
>>>> learning my way around all this code.
>>>
>>> When task wakes up, we can decide to
>>> - speedup the wakeup and shorten the list of cpus and compare only
>>> prev_cpu vs this_cpu (in fact the group of cpu that share their
>>> respective LLC). That's the fast wakeup path that is used most of the
>>> time during a wakeup
>>> - or start to find the idlest CPU of the system and scan all domains.
>>> That's the slow path that is used for new tasks or when a task wakes
>>> up a lot of other tasks at the same time
>
> [...]
>
> Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> flag set on the sched domains on your machines? IMHO, otherwise those
> wakeups are not forced into the slowpath (if (unlikely(sd))?
>
> I had this discussion the other day with Valentin S. on #sched and we
> were not sure how SD_BALANCE_WAKE is set on sched domains on
> !SD_ASYM_CPUCAPACITY systems.
>

Well, from the code, nobody but us (asymmetric capacity systems) sets
SD_BALANCE_WAKE. I was however curious if there were some folks who set it
with out-of-tree code for some reason.

As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
the domain loop to find. Depending on your topology you most likely will
go through it on fork or exec though.

IOW wake_wide() is not really widening the wakeup scan on wakeups using
mainline topology code (disregarding asymmetric capacity systems), which
sounds a bit... off.
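
For reference, the default flags in sd_init() look roughly like this
(excerpt from memory, so treat it as a sketch):

    /* kernel/sched/topology.c, sd_init() -- excerpt */
    .flags = 1*SD_LOAD_BALANCE
           | 1*SD_BALANCE_NEWIDLE
           | 1*SD_BALANCE_EXEC
           | 1*SD_BALANCE_FORK
           | 0*SD_BALANCE_WAKE     /* never set by the default topology */
           | 1*SD_WAKE_AFFINE
           /* ... */
           | sd_flags,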

2019-10-30 17:22:07

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

Hi,

On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
>
>
> On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > On 30.10.19 15:39, Phil Auld wrote:
> >> Hi Vincent,
> >>
> >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> >
> > [...]
> >
> >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> >>>> learning my way around all this code.
> >>>
> >>> When task wakes up, we can decide to
> >>> - speedup the wakeup and shorten the list of cpus and compare only
> >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> >>> respective LLC). That's the fast wakeup path that is used most of the
> >>> time during a wakeup
> >>> - or start to find the idlest CPU of the system and scan all domains.
> >>> That's the slow path that is used for new tasks or when a task wakes
> >>> up a lot of other tasks at the same time
> >
> > [...]
> >
> > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > flag set on the sched domains on your machines? IMHO, otherwise those
> > wakeups are not forced into the slowpath (if (unlikely(sd))?
> >
> > I had this discussion the other day with Valentin S. on #sched and we
> > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > !SD_ASYM_CPUCAPACITY systems.
> >
>
> Well from the code nobody but us (asymmetric capacity systems) set
> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> with out of tree code for some reason.
>
> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> the domain loop to find. Depending on your topology you most likely will
> go through it on fork or exec though.
>
> IOW wake_wide() is not really widening the wakeup scan on wakeups using
> mainline topology code (disregarding asymmetric capacity systems), which
> sounds a bit... off.

Thanks. It's not currently set. I'll set it and re-run to see if it makes
a difference.


However, I'm not sure why it would be making a difference for only the cgroup
case. If this is causing issues I'd expect it to affect both runs.

In general I think these threads want to wake up on the last cpu they were on.
And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
more often than not, be idle.


Cheers,
Phil



--

2019-10-30 17:28:40

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On 30/10/2019 18:19, Phil Auld wrote:
>> Well from the code nobody but us (asymmetric capacity systems) set
>> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
>> with out of tree code for some reason.
>>
>> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
>> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
>> the domain loop to find. Depending on your topology you most likely will
>> go through it on fork or exec though.
>>
>> IOW wake_wide() is not really widening the wakeup scan on wakeups using
>> mainline topology code (disregarding asymmetric capacity systems), which
>> sounds a bit... off.
>
> Thanks. It's not currently set. I'll set it and re-run to see if it makes
> a difference.
>

Note that it might do more harm than good, it's not set in the default
topology because it's too aggressive, see

182a85f8a119 ("sched: Disable wakeup balancing")

>
> However, I'm not sure why it would be making a difference for only the cgroup
> case. If this is causing issues I'd expect it to effect both runs.
>
> In general I think these threads want to wake up the last cpu they were on.
> And given there are fewer cpu bound tasks that CPUs that wake cpu should,
> more often than not, be idle.
>
>
> Cheers,
> Phil
>
>
>

2019-10-30 17:30:24

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Wed, 30 Oct 2019 at 15:39, Phil Auld <[email protected]> wrote:
>
> Hi Vincent,
>
> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > Hi Phil,
> >
>
> ...
>
> >
> > The input could mean that this system reaches a particular level of
> > utilization and load that is close to the threshold between 2
> > different behavior like spare capacity and fully_busy/overloaded case.
> > But at the opposite, there is less threads that CPUs in your UCs so
> > one group at least at NUMA level should be tagged as
> > has_spare_capacity and should pull tasks.
>
> Yes. Maybe we don't hit that and rely on "load" since things look
> busy. There are only 2 spare cpus in the 156 + 2 case. Is it possible
> that information is getting lost with the extra NUMA levels?

It should not, but I have to look more deeply at your topology.
If we have fewer tasks than CPUs, one group should always be tagged
"has_spare_capacity".

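The check that tags a group as having spare capacity looks roughly like
this (sketch, paraphrased from the series):

    static inline bool
    group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
    {
            /* fewer running tasks than CPUs in the group */
            if (sgs->sum_nr_running < sgs->group_weight)
                    return true;

            /* or enough capacity headroom w.r.t. utilization */
            if ((sgs->group_capacity * 100) >
                            (sgs->group_util * imbalance_pct))
                    return true;

            return false;
    }
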
>
> >
> > >
> > > >
> > > > The fix favors the local group so your UC seems to prefer spreading
> > > > tasks at wake up
> > > > If you have any traces that you can share, this could help to
> > > > understand what's going on. I will try to reproduce the problem on my
> > > > system
> > >
> > > I'm not actually sure the fix here is causing this. Looking at the data
> > > more closely I see similar imbalances on v4, v4a and v3.
> > >
> > > When you say slow versus fast wakeup paths what do you mean? I'm still
> > > learning my way around all this code.
> >
> > When task wakes up, we can decide to
> > - speedup the wakeup and shorten the list of cpus and compare only
> > prev_cpu vs this_cpu (in fact the group of cpu that share their
> > respective LLC). That's the fast wakeup path that is used most of the
> > time during a wakeup
> > - or start to find the idlest CPU of the system and scan all domains.
> > That's the slow path that is used for new tasks or when a task wakes
> > up a lot of other tasks at the same time
> >
>
> Thanks.
>
> >
> > >
> > > This particular test is specifically designed to highlight the imbalance
> > > cause by the use of group scheduler defined load and averages. The threads
> > > are mostly CPU bound but will join up every time step. So if each thread
> >
> > ok the fact that they join up might be the root cause of your problem.
> > They will wake up at the same time by the same task and CPU.
> >
>
> If that was the problem I'd expect issues on other high node count systems.

yes probably

>
> >
> > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > surprising except if this means more NUMA level in the sched_domain
> > topology
> > Could you give us more details about the sched domain topology ?
> >
>
> The 8-node system has 5 sched domain levels. The 4-node system only
> has 3.

That's an interesting difference, and your additional tests on an 8-node
system with 3 levels tend to confirm that the number of levels makes a
difference.
I need to study a bit more how this can impact the spread of tasks.

>
>
> cpu159 0 0 0 0 0 0 4361694551702 124316659623 94736
> domain0 80000000,00000000,00008000,00000000,00000000 0 0
> domain1 ffc00000,00000000,0000ffc0,00000000,00000000 0 0
> domain2 fffff000,00000000,0000ffff,f0000000,00000000 0 0
> domain3 ffffffff,ff000000,0000ffff,ffffff00,00000000 0 0
> domain4 ffffffff,ffffffff,ffffffff,ffffffff,ffffffff 0 0
>
> numactl --hardware
> available: 8 nodes (0-7)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 80 81 82 83 84 85 86 87 88 89
> node 0 size: 126928 MB
> node 0 free: 126452 MB
> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 90 91 92 93 94 95 96 97 98 99
> node 1 size: 129019 MB
> node 1 free: 128813 MB
> node 2 cpus: 20 21 22 23 24 25 26 27 28 29 100 101 102 103 104 105 106 107 108 109
> node 2 size: 129019 MB
> node 2 free: 128875 MB
> node 3 cpus: 30 31 32 33 34 35 36 37 38 39 110 111 112 113 114 115 116 117 118 119
> node 3 size: 129019 MB
> node 3 free: 128850 MB
> node 4 cpus: 40 41 42 43 44 45 46 47 48 49 120 121 122 123 124 125 126 127 128 129
> node 4 size: 128993 MB
> node 4 free: 128862 MB
> node 5 cpus: 50 51 52 53 54 55 56 57 58 59 130 131 132 133 134 135 136 137 138 139
> node 5 size: 129019 MB
> node 5 free: 128872 MB
> node 6 cpus: 60 61 62 63 64 65 66 67 68 69 140 141 142 143 144 145 146 147 148 149
> node 6 size: 129019 MB
> node 6 free: 128852 MB
> node 7 cpus: 70 71 72 73 74 75 76 77 78 79 150 151 152 153 154 155 156 157 158 159
> node 7 size: 112889 MB
> node 7 free: 112720 MB
> node distances:
> node 0 1 2 3 4 5 6 7
> 0: 10 12 17 17 19 19 19 19
> 1: 12 10 17 17 19 19 19 19
> 2: 17 17 10 12 19 19 19 19
> 3: 17 17 12 10 19 19 19 19
> 4: 19 19 19 19 10 12 17 17
> 5: 19 19 19 19 12 10 17 17
> 6: 19 19 19 19 17 17 10 12
> 7: 19 19 19 19 17 17 12 10
>
>
>
> available: 4 nodes (0-3)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 40 41 42 43 44 45 46 47 48 49
> node 0 size: 257943 MB
> node 0 free: 257602 MB
> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 50 51 52 53 54 55 56 57 58 59
> node 1 size: 258043 MB
> node 1 free: 257619 MB
> node 2 cpus: 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66 67 68 69
> node 2 size: 258043 MB
> node 2 free: 257879 MB
> node 3 cpus: 30 31 32 33 34 35 36 37 38 39 70 71 72 73 74 75 76 77 78 79
> node 3 size: 258043 MB
> node 3 free: 257823 MB
> node distances:
> node 0 1 2 3
> 0: 10 20 20 20
> 1: 20 10 20 20
> 2: 20 20 10 20
> 3: 20 20 20 10
>
>
>
>
> An 8-node system (albeit with sub-numa) has node distances
>
> node distances:
> node 0 1 2 3 4 5 6 7
> 0: 10 11 21 21 21 21 21 21
> 1: 11 10 21 21 21 21 21 21
> 2: 21 21 10 11 21 21 21 21
> 3: 21 21 11 10 21 21 21 21
> 4: 21 21 21 21 10 11 21 21
> 5: 21 21 21 21 11 10 21 21
> 6: 21 21 21 21 21 21 10 11
> 7: 21 21 21 21 21 21 11 10
>
> This one does not exhibit the problem with the latest (v4a). But also
> only has 3 levels.
>
>
> > >
> > > There's still something between v1 and v4 on that 8-node system that is
> > > still illustrating the original problem. On our other test systems this
> > > series really works nicely to solve this problem. And even if we can't get
> > > to the bottom if this it's a significant improvement.
> > >
> > >
> > > Here is v3 for the 8-node system
> > > lu.C.x_152_GROUP_1 Average 17.52 16.86 17.90 18.52 20.00 19.00 22.00 20.19
> > > lu.C.x_152_GROUP_2 Average 15.70 15.04 15.65 15.72 23.30 28.98 20.09 17.52
> > > lu.C.x_152_GROUP_3 Average 27.72 32.79 22.89 22.62 11.01 12.90 12.14 9.93
> > > lu.C.x_152_GROUP_4 Average 18.13 18.87 18.40 17.87 18.80 19.93 20.40 19.60
> > > lu.C.x_152_GROUP_5 Average 24.14 26.46 20.92 21.43 14.70 16.05 15.14 13.16
> > > lu.C.x_152_NORMAL_1 Average 21.03 22.43 20.27 19.97 18.37 18.80 16.27 14.87
> > > lu.C.x_152_NORMAL_2 Average 19.24 18.29 18.41 17.41 19.71 19.00 20.29 19.65
> > > lu.C.x_152_NORMAL_3 Average 19.43 20.00 19.05 20.24 18.76 17.38 18.52 18.62
> > > lu.C.x_152_NORMAL_4 Average 17.19 18.25 17.81 18.69 20.44 19.75 20.12 19.75
> > > lu.C.x_152_NORMAL_5 Average 19.25 19.56 19.12 19.56 19.38 19.38 18.12 17.62
> > >
> > > lu.C.x_156_GROUP_1 Average 18.62 19.31 18.38 18.77 19.88 21.35 19.35 20.35
> > > lu.C.x_156_GROUP_2 Average 15.58 12.72 14.96 14.83 20.59 19.35 29.75 28.22
> > > lu.C.x_156_GROUP_3 Average 20.05 18.74 19.63 18.32 20.26 20.89 19.53 18.58
> > > lu.C.x_156_GROUP_4 Average 14.77 11.42 13.01 10.09 27.05 33.52 23.16 22.98
> > > lu.C.x_156_GROUP_5 Average 14.94 11.45 12.77 10.52 28.01 33.88 22.37 22.05
> > > lu.C.x_156_NORMAL_1 Average 20.00 20.58 18.47 18.68 19.47 19.74 19.42 19.63
> > > lu.C.x_156_NORMAL_2 Average 18.52 18.48 18.83 18.43 20.57 20.48 20.61 20.09
> > > lu.C.x_156_NORMAL_3 Average 20.27 20.00 20.05 21.18 19.55 19.00 18.59 17.36
> > > lu.C.x_156_NORMAL_4 Average 19.65 19.60 20.25 20.75 19.35 20.10 19.00 17.30
> > > lu.C.x_156_NORMAL_5 Average 19.79 19.67 20.62 22.42 18.42 18.00 17.67 19.42
> > >
> > >
> > > I'll try to find pre-patched results for this 8 node system. Just to keep things
> > > together for reference here is the 4-node system before this re-work series.
> > >
> > > lu.C.x_76_GROUP_1 Average 15.84 24.06 23.37 12.73
> > > lu.C.x_76_GROUP_2 Average 15.29 22.78 22.49 15.45
> > > lu.C.x_76_GROUP_3 Average 13.45 23.90 22.97 15.68
> > > lu.C.x_76_NORMAL_1 Average 18.31 19.54 19.54 18.62
> > > lu.C.x_76_NORMAL_2 Average 19.73 19.18 19.45 17.64
> > >
> > > This produced a 4.5x slowdown for the group runs versus the nicely balance
> > > normal runs.
> > >
>
> Here is the base 5.4.0-rc3+ kernel on the 8-node system:
>
> lu.C.x_156_GROUP_1 Average 10.87 0.00 0.00 11.49 36.69 34.26 30.59 32.10
> lu.C.x_156_GROUP_2 Average 20.15 16.32 9.49 24.91 21.07 20.93 21.63 21.50
> lu.C.x_156_GROUP_3 Average 21.27 17.23 11.84 21.80 20.91 20.68 21.11 21.16
> lu.C.x_156_GROUP_4 Average 19.44 6.53 8.71 19.72 22.95 23.16 28.85 26.64
> lu.C.x_156_GROUP_5 Average 20.59 6.20 11.32 14.63 28.73 30.36 22.20 21.98
> lu.C.x_156_NORMAL_1 Average 20.50 19.95 20.40 20.45 18.75 19.35 18.25 18.35
> lu.C.x_156_NORMAL_2 Average 17.15 19.04 18.42 18.69 21.35 21.42 20.00 19.92
> lu.C.x_156_NORMAL_3 Average 18.00 18.15 17.55 17.60 18.90 18.40 19.90 19.75
> lu.C.x_156_NORMAL_4 Average 20.53 20.05 20.21 19.11 19.00 19.47 19.37 18.26
> lu.C.x_156_NORMAL_5 Average 18.72 18.78 19.72 18.50 19.67 19.72 21.11 19.78
>
> Including the actual benchmark results.
> ============156_GROUP========Mop/s===================================
> min q1 median q3 max
> 1564.63 3003.87 3928.23 5411.13 8386.66
> ============156_GROUP========time====================================
> min q1 median q3 max
> 243.12 376.82 519.06 678.79 1303.18
> ============156_NORMAL========Mop/s===================================
> min q1 median q3 max
> 13845.6 18013.8 18545.5 19359.9 19647.4
> ============156_NORMAL========time====================================
> min q1 median q3 max
> 103.78 105.32 109.95 113.19 147.27
>
> You can see the ~5x slowdown of the pre-rework issue. v4a is much improved over
> mainline.
>
> I'll try to find some other machines as well.
>
>
> > >
> > >
> > > I can try to get traces but this is not my system so it may take a little
> > > while. I've found that the existing trace points don't give enough information
> > > to see what is happening in this problem. But the visualization in kernelshark
> > > does show the problem pretty well. Do you want just the existing sched tracepoints
> > > or should I update some of the traceprintks I used in the earlier traces?
> >
> > The standard tracepoint is a good starting point but tracing the
> > statistings for find_busiest_group and find_idlest_group should help a
> > lot.
> >
>
> I have some traces which I'll send you directly since they're large.

Thanks

>
>
> Cheers,
> Phil
>
>
>
> > Cheers,
> > Vincent
> >
> > >
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > >
> > > > >
> > > > > We're re-running the test to get more samples.
> > > >
> > > > Thanks
> > > > Vincent
> > > >
> > > > >
> > > > >
> > > > > Other tests and systems were still fine.
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Phil
> > > > >
> > > > >
> > > > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > > > series solves that issue.
> > > > > >
> > > > > >
> > > > > > Cheers,
> > > > > > Phil
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > > > >
> > > > > > > > kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > > > > kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > > > > kernel/sched/fair.c: * XXX illustrate
> > > > > > > > kernel/sched/fair.c: } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > > > > kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > > > > kernel/sched/fair.c: * [XXX expand on:
> > > > > > > > kernel/sched/fair.c: * [XXX more?]
> > > > > > > > kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > > > > kernel/sched/fair.c: * XXX for now avg_load is not computed and always 0 so we
> > > > > > > > kernel/sched/fair.c: /* XXX broken for overlapping NUMA groups */
> > > > > > > >
> > > > > > >
> > > > > > > I will have a look :-)
> > > > > > >
> > > > > > > > :-)
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Ingo
> > > > > >
> > > > > > --
> > > > > >
> > > > >
> > > > > --
> > > > >
> > >
> > > --
> > >
>
> --
>

2019-10-30 17:30:46

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Wed, 30 Oct 2019 at 18:19, Phil Auld <[email protected]> wrote:
>
> Hi,
>
> On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
> >
> >
> > On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > > On 30.10.19 15:39, Phil Auld wrote:
> > >> Hi Vincent,
> > >>
> > >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > >
> > > [...]
> > >
> > >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> > >>>> learning my way around all this code.
> > >>>
> > >>> When task wakes up, we can decide to
> > >>> - speedup the wakeup and shorten the list of cpus and compare only
> > >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> > >>> respective LLC). That's the fast wakeup path that is used most of the
> > >>> time during a wakeup
> > >>> - or start to find the idlest CPU of the system and scan all domains.
> > >>> That's the slow path that is used for new tasks or when a task wakes
> > >>> up a lot of other tasks at the same time
> > >
> > > [...]
> > >
> > > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > > flag set on the sched domains on your machines? IMHO, otherwise those
> > > wakeups are not forced into the slowpath (if (unlikely(sd))?
> > >
> > > I had this discussion the other day with Valentin S. on #sched and we
> > > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > > !SD_ASYM_CPUCAPACITY systems.
> > >
> >
> > Well from the code nobody but us (asymmetric capacity systems) set
> > SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> > with out of tree code for some reason.
> >
> > As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> > the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> > the domain loop to find. Depending on your topology you most likely will
> > go through it on fork or exec though.
> >
> > IOW wake_wide() is not really widening the wakeup scan on wakeups using
> > mainline topology code (disregarding asymmetric capacity systems), which
> > sounds a bit... off.
>
> Thanks. It's not currently set. I'll set it and re-run to see if it makes
> a difference.

Because the fix only touches the slow path, and according to Valentin's
and Dietmar's comments on the wake up path, it would mean that your UC
regularly creates some new threads during the test?

>
>
> However, I'm not sure why it would be making a difference for only the cgroup
> case. If this is causing issues I'd expect it to effect both runs.
>
> In general I think these threads want to wake up the last cpu they were on.
> And given there are fewer cpu bound tasks that CPUs that wake cpu should,
> more often than not, be idle.
>
>
> Cheers,
> Phil
>
>
>
> --
>

2019-10-30 17:33:23

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Wed, Oct 30, 2019 at 06:25:09PM +0100 Valentin Schneider wrote:
> On 30/10/2019 18:19, Phil Auld wrote:
> >> Well from the code nobody but us (asymmetric capacity systems) set
> >> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> >> with out of tree code for some reason.
> >>
> >> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> >> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> >> the domain loop to find. Depending on your topology you most likely will
> >> go through it on fork or exec though.
> >>
> >> IOW wake_wide() is not really widening the wakeup scan on wakeups using
> >> mainline topology code (disregarding asymmetric capacity systems), which
> >> sounds a bit... off.
> >
> > Thanks. It's not currently set. I'll set it and re-run to see if it makes
> > a difference.
> >
>
> Note that it might do more harm than good, it's not set in the default
> topology because it's too aggressive, see
>
> 182a85f8a119 ("sched: Disable wakeup balancing")
>

Heh, yeah... even as it's running I can see that this is killing it :)


> >
> > However, I'm not sure why it would be making a difference for only the cgroup
> > case. If this is causing issues I'd expect it to effect both runs.
> >
> > In general I think these threads want to wake up the last cpu they were on.
> > And given there are fewer cpu bound tasks that CPUs that wake cpu should,
> > more often than not, be idle.
> >
> >
> > Cheers,
> > Phil
> >
> >
> >

--

2019-10-30 18:58:08

by Dietmar Eggemann

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On 30.10.19 15:39, Phil Auld wrote:
> Hi Vincent,
>
> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:

[...]

>>> When you say slow versus fast wakeup paths what do you mean? I'm still
>>> learning my way around all this code.
>>
>> When task wakes up, we can decide to
>> - speedup the wakeup and shorten the list of cpus and compare only
>> prev_cpu vs this_cpu (in fact the group of cpu that share their
>> respective LLC). That's the fast wakeup path that is used most of the
>> time during a wakeup
>> - or start to find the idlest CPU of the system and scan all domains.
>> That's the slow path that is used for new tasks or when a task wakes
>> up a lot of other tasks at the same time

[...]

Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
flag set on the sched domains on your machines? IMHO, otherwise those
wakeups are not forced into the slowpath (if (unlikely(sd)))?

I had this discussion the other day with Valentin S. on #sched and we
were not sure how SD_BALANCE_WAKE is set on sched domains on
!SD_ASYM_CPUCAPACITY systems.
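
For illustration, here is a tiny user-space model of the domain loop being
discussed (flag values are made up, and the wake_affine/want_affine handling
is ignored): the slow path is only reachable when some domain level carries
the flag we are balancing for, which with the default topology flags is true
for fork and exec but not for plain wakeups.

#include <stdio.h>

#define SD_BALANCE_EXEC  0x01
#define SD_BALANCE_FORK  0x02
#define SD_BALANCE_WAKE  0x04
#define SD_WAKE_AFFINE   0x08

struct domain { const char *name; int flags; };

/* Walk the domains bottom up and remember the highest level whose flags
 * include sd_flag; only then is the slow path (find_idlest_cpu) used. */
static const char *pick_path(const struct domain *doms, int nr, int sd_flag)
{
	const struct domain *sd = NULL;

	for (int i = 0; i < nr; i++)
		if (doms[i].flags & sd_flag)
			sd = &doms[i];

	return sd ? "slow path (find_idlest_cpu)"
		  : "fast path (select_idle_sibling)";
}

int main(void)
{
	/* Flags roughly as the default (non-asymmetric) topology sets them. */
	struct domain doms[] = {
		{ "SMT",  SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE },
		{ "MC",   SD_BALANCE_EXEC | SD_BALANCE_FORK | SD_WAKE_AFFINE },
		{ "NUMA", SD_BALANCE_EXEC | SD_BALANCE_FORK },
	};
	int nr = sizeof(doms) / sizeof(doms[0]);

	printf("wake: %s\n", pick_path(doms, nr, SD_BALANCE_WAKE)); /* fast */
	printf("fork: %s\n", pick_path(doms, nr, SD_BALANCE_FORK)); /* slow */
	printf("exec: %s\n", pick_path(doms, nr, SD_BALANCE_EXEC)); /* slow */
	return 0;
}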

2019-10-30 19:01:35

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Wed, 30 Oct 2019 at 17:24, Mel Gorman <[email protected]> wrote:
>
> On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > > <SNIP>
> >
> > Thanks, that's an excellent series!
> >
>
> Agreed despite the level of whining and complaining I made during the
> review.

Thanks for the review.
I haven't gone through all your comments yet but will do in the coming days

>
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> > - load_balance() and wake-up changes invariably are such: some workloads
> > only work/scale well by accident, and if we touch the logic it might
> > flip over into a less advantageous scheduling pattern.
> >
> > - In particular the changes from balancing and waking on runnable load
> > to full load that includes blocking *will* shift IO-intensive
> > workloads that your tests don't fully capture I believe. You also made
> > idle balancing more aggressive in essence - which might reduce cache
> > locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
>
> I queued this back on the 21st and it took this long for me to get back
> to it.
>
> What I tested did not include the fix for the last patch so I cannot say
> the data is that useful. I also failed to include something that exercised
> the IO paths in a way that idles rapidly as that can catch interesting
> details (usually cpufreq related but sometimes load-balancing related).
> There was no real thinking behind this decision, I just used an old
> collection of tests to get a general feel for the series.
>
> Most of the results were performance-neutral and some notable gains
> (kernel compiles were 1-6% faster depending on the -j count). Hackbench
> saw a disproportionate gain in terms of performance but I tend to be wary
> of hackbench as improving it is rarely a universal win.
> There tends to be some jitter around the point where a NUMA node's worth
> of CPUs gets overloaded. tbench (mmtests configuration network-tbench) on
> a NUMA machine showed gains for low thread counts and high thread counts
> but a loss near the boundary where a single node would get overloaded.
>
> Some NAS-related workloads saw a drop in performance on NUMA machines
> but the size class might be too small to be certain, I'd have to rerun
> with the D class to be sure. The biggest strange drop in performance
> was the elapsed time to run the git test suite (mmtests configuration
> workload-shellscripts modified to use a fresh XFS partition) took 17.61%
> longer to execute on a UMA Skylake machine. This *might* be due to the
> missing fix because it is mostly a single-task workload.
>
> I'm not going to go through the results in detail because I think another
> full round of testing would be required to take the fix into account. I'd
> also prefer to wait to see if the review results in any material change
> to the series.
>
> --
> Mel Gorman
> SUSE Labs

2019-10-30 19:05:55

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Wed, Oct 30, 2019 at 06:28:50PM +0100 Vincent Guittot wrote:
> On Wed, 30 Oct 2019 at 18:19, Phil Auld <[email protected]> wrote:
> >
> > Hi,
> >
> > On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
> > >
> > >
> > > On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > > > On 30.10.19 15:39, Phil Auld wrote:
> > > >> Hi Vincent,
> > > >>
> > > >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > > >
> > > > [...]
> > > >
> > > >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> > > >>>> learning my way around all this code.
> > > >>>
> > > >>> When task wakes up, we can decide to
> > > >>> - speedup the wakeup and shorten the list of cpus and compare only
> > > >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> > > >>> respective LLC). That's the fast wakeup path that is used most of the
> > > >>> time during a wakeup
> > > >>> - or start to find the idlest CPU of the system and scan all domains.
> > > >>> That's the slow path that is used for new tasks or when a task wakes
> > > >>> up a lot of other tasks at the same time
> > > >
> > > > [...]
> > > >
> > > > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > > > flag set on the sched domains on your machines? IMHO, otherwise those
> > > > wakeups are not forced into the slowpath (if (unlikely(sd))?
> > > >
> > > > I had this discussion the other day with Valentin S. on #sched and we
> > > > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > > > !SD_ASYM_CPUCAPACITY systems.
> > > >
> > >
> > > Well from the code nobody but us (asymmetric capacity systems) set
> > > SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> > > with out of tree code for some reason.
> > >
> > > As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> > > the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> > > the domain loop to find. Depending on your topology you most likely will
> > > go through it on fork or exec though.
> > >
> > > IOW wake_wide() is not really widening the wakeup scan on wakeups using
> > > mainline topology code (disregarding asymmetric capacity systems), which
> > > sounds a bit... off.
> >
> > Thanks. It's not currently set. I'll set it and re-run to see if it makes
> > a difference.
>
> Because the fix only touches the slow path and according to Valentin
> and Dietmar comments on the wake up path, it would mean that your UC
> creates regularly some new threads during the test ?
>

I believe it is not creating any new threads during each run.


> >
> >
> > However, I'm not sure why it would be making a difference for only the cgroup
> > case. If this is causing issues I'd expect it to effect both runs.
> >
> > In general I think these threads want to wake up the last cpu they were on.
> > And given there are fewer cpu bound tasks that CPUs that wake cpu should,
> > more often than not, be idle.
> >
> >
> > Cheers,
> > Phil
> >
> >
> >
> > --
> >

--

2019-10-31 14:00:07

by Phil Auld

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

Hi Vincent,

On Wed, Oct 30, 2019 at 06:25:49PM +0100 Vincent Guittot wrote:
> On Wed, 30 Oct 2019 at 15:39, Phil Auld <[email protected]> wrote:
> > > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > > surprising except if this means more NUMA level in the sched_domain
> > > topology
> > > Could you give us more details about the sched domain topology ?
> > >
> >
> > The 8-node system has 5 sched domain levels. The 4-node system only
> > has 3.
>
> That's an interesting difference. and your additional tests on a 8
> nodes with 3 level tends to confirm that the number of level make a
> difference
> I need to study a bit more how this can impact the spread of tasks

So I think I understand what my numbers have been showing.

I believe the numa balancing is causing problems.

Here's numbers from the test on 5.4-rc3+ without your series:

echo 1 > /proc/sys/kernel/numa_balancing
lu.C.x_156_GROUP_1 Average 10.87 0.00 0.00 11.49 36.69 34.26 30.59 32.10
lu.C.x_156_GROUP_2 Average 20.15 16.32 9.49 24.91 21.07 20.93 21.63 21.50
lu.C.x_156_GROUP_3 Average 21.27 17.23 11.84 21.80 20.91 20.68 21.11 21.16
lu.C.x_156_GROUP_4 Average 19.44 6.53 8.71 19.72 22.95 23.16 28.85 26.64
lu.C.x_156_GROUP_5 Average 20.59 6.20 11.32 14.63 28.73 30.36 22.20 21.98
lu.C.x_156_NORMAL_1 Average 20.50 19.95 20.40 20.45 18.75 19.35 18.25 18.35
lu.C.x_156_NORMAL_2 Average 17.15 19.04 18.42 18.69 21.35 21.42 20.00 19.92
lu.C.x_156_NORMAL_3 Average 18.00 18.15 17.55 17.60 18.90 18.40 19.90 19.75
lu.C.x_156_NORMAL_4 Average 20.53 20.05 20.21 19.11 19.00 19.47 19.37 18.26
lu.C.x_156_NORMAL_5 Average 18.72 18.78 19.72 18.50 19.67 19.72 21.11 19.78

============156_GROUP========Mop/s===================================
min q1 median q3 max
1564.63 3003.87 3928.23 5411.13 8386.66
============156_GROUP========time====================================
min q1 median q3 max
243.12 376.82 519.06 678.79 1303.18
============156_NORMAL========Mop/s===================================
min q1 median q3 max
13845.6 18013.8 18545.5 19359.9 19647.4
============156_NORMAL========time====================================
min q1 median q3 max
103.78 105.32 109.95 113.19 147.27

(This one above is especially bad... we don't usually see 0.00s, but overall it's
basically on par. It's reflected in the spread of the results).


echo 0 > /proc/sys/kernel/numa_balancing
lu.C.x_156_GROUP_1 Average 17.75 19.30 21.20 21.20 20.20 20.80 18.90 16.65
lu.C.x_156_GROUP_2 Average 18.38 19.25 21.00 20.06 20.19 20.31 19.56 17.25
lu.C.x_156_GROUP_3 Average 21.81 21.00 18.38 16.86 20.81 21.48 18.24 17.43
lu.C.x_156_GROUP_4 Average 20.48 20.96 19.61 17.61 17.57 19.74 18.48 21.57
lu.C.x_156_GROUP_5 Average 23.32 21.96 19.16 14.28 21.44 22.56 17.00 16.28
lu.C.x_156_NORMAL_1 Average 19.50 19.83 19.58 19.25 19.58 19.42 19.42 19.42
lu.C.x_156_NORMAL_2 Average 18.90 18.40 20.00 19.80 19.70 19.30 19.80 20.10
lu.C.x_156_NORMAL_3 Average 19.45 19.09 19.91 20.09 19.45 18.73 19.45 19.82
lu.C.x_156_NORMAL_4 Average 19.64 19.27 19.64 19.00 19.82 19.55 19.73 19.36
lu.C.x_156_NORMAL_5 Average 18.75 19.42 20.08 19.67 18.75 19.50 19.92 19.92

============156_GROUP========Mop/s===================================
min q1 median q3 max
14956.3 16346.5 17505.7 18440.6 22492.7
============156_GROUP========time====================================
min q1 median q3 max
90.65 110.57 116.48 124.74 136.33
============156_NORMAL========Mop/s===================================
min q1 median q3 max
29801.3 30739.2 31967.5 32151.3 34036
============156_NORMAL========time====================================
min q1 median q3 max
59.91 63.42 63.78 66.33 68.42


Note there is a significant improvement already. But we are seeing an imbalance due to
using weighted load and averages. In this case it's only a 55% slowdown rather than
the 5x. But the overall performance of the benchmark is also much better in both cases.



Here's the same test, same system with the full series (lb_v4a as I've been calling it):

echo 1 > /proc/sys/kernel/numa_balancing
lu.C.x_156_GROUP_1 Average 18.59 19.36 19.50 18.86 20.41 20.59 18.27 20.41
lu.C.x_156_GROUP_2 Average 19.52 20.52 20.48 21.17 19.52 19.09 17.70 18.00
lu.C.x_156_GROUP_3 Average 20.58 20.71 20.17 20.50 18.46 19.50 18.58 17.50
lu.C.x_156_GROUP_4 Average 18.95 19.63 19.47 19.84 18.79 19.84 20.84 18.63
lu.C.x_156_GROUP_5 Average 16.85 17.96 19.89 19.15 19.26 20.48 21.70 20.70
lu.C.x_156_NORMAL_1 Average 18.04 18.48 20.00 19.72 20.72 20.48 18.48 20.08
lu.C.x_156_NORMAL_2 Average 18.22 20.56 19.50 19.39 20.67 19.83 18.44 19.39
lu.C.x_156_NORMAL_3 Average 17.72 19.61 19.56 19.17 20.17 19.89 20.78 19.11
lu.C.x_156_NORMAL_4 Average 18.05 19.74 20.21 19.89 20.32 20.26 19.16 18.37
lu.C.x_156_NORMAL_5 Average 18.89 19.95 20.21 20.63 19.84 19.26 19.26 17.95

============156_GROUP========Mop/s===================================
min q1 median q3 max
13460.1 14949 15851.7 16391.4 18993
============156_GROUP========time====================================
min q1 median q3 max
107.35 124.39 128.63 136.4 151.48
============156_NORMAL========Mop/s===================================
min q1 median q3 max
14418.5 18512.4 19049.5 19682 19808.8
============156_NORMAL========time====================================
min q1 median q3 max
102.93 103.6 107.04 110.14 141.42


echo 0 > /proc/sys/kernel/numa_balancing
lu.C.x_156_GROUP_1 Average 19.00 19.33 19.33 19.58 20.08 19.67 19.83 19.17
lu.C.x_156_GROUP_2 Average 18.55 19.91 20.09 19.27 18.82 19.27 19.91 20.18
lu.C.x_156_GROUP_3 Average 18.42 19.08 19.75 19.00 19.50 20.08 20.25 19.92
lu.C.x_156_GROUP_4 Average 18.42 19.83 19.17 19.50 19.58 19.83 19.83 19.83
lu.C.x_156_GROUP_5 Average 19.17 19.42 20.17 19.92 19.25 18.58 19.92 19.58
lu.C.x_156_NORMAL_1 Average 19.25 19.50 19.92 18.92 19.33 19.75 19.58 19.75
lu.C.x_156_NORMAL_2 Average 19.42 19.25 17.83 18.17 19.83 20.50 20.42 20.58
lu.C.x_156_NORMAL_3 Average 18.58 19.33 19.75 18.25 19.42 20.25 20.08 20.33
lu.C.x_156_NORMAL_4 Average 19.00 19.55 19.73 18.73 19.55 20.00 19.64 19.82
lu.C.x_156_NORMAL_5 Average 19.25 19.25 19.50 18.75 19.92 19.58 19.92 19.83

============156_GROUP========Mop/s===================================
min q1 median q3 max
28520.1 29024.2 29042.1 29367.4 31235.2
============156_GROUP========time====================================
min q1 median q3 max
65.28 69.43 70.21 70.25 71.49
============156_NORMAL========Mop/s===================================
min q1 median q3 max
28974.5 29806.5 30237.1 30907.4 31830.1
============156_NORMAL========time====================================
min q1 median q3 max
64.06 65.97 67.43 68.41 70.37


This all now makes sense. Looking at the numa balancing code a bit, you can see
that it still uses load, so it will still be subject to making bogus decisions
based on the weighted load. In this case it's been actively working against the
load balancer because of that.

I think with the three numa levels on this system the numa balancing was able to
win more often. We don't see the same level of this result on systems with only
one SD_NUMA level.

Following the other part of this thread, I have to add that I'm of the opinion
that the weighted load (which is all we have now I believe) really should be used
only in extreme cases of overload to deal with fairness. And even then maybe not.
As far as I can see, once the fair group scheduling is involved, that load is
basically a random number between 1 and 1024. It really has no bearing on how
much "load" a task will put on a cpu. Any comparison of that to cpu capacity
is pretty meaningless.
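
A rough back-of-the-envelope model of that dilution, assuming default cgroup
shares of 1024 and equally weighted CPU-bound tasks (the real task_h_load()
also folds in PELT averages and nested groups):

#include <stdio.h>

#define NICE_0_LOAD    1024UL
#define DEFAULT_SHARES 1024UL

/* Approximate per-task weighted load when nr equally weighted CPU-bound
 * tasks share one cgroup: the group's shares are split between them. */
static unsigned long grouped_task_load(unsigned long shares, unsigned int nr)
{
	return nr ? shares / nr : shares;
}

int main(void)
{
	printf("ungrouped cpu-bound task              : load ~%lu\n", NICE_0_LOAD);
	for (unsigned int n = 2; n <= 128; n *= 4)
		printf("cpu-bound task with %3u busy siblings : load ~%lu\n",
		       n, grouped_task_load(DEFAULT_SHARES, n));
	return 0;
}

Both tasks are fully cpu bound, yet their weighted loads differ by two orders
of magnitude, which is why comparing that number against cpu capacity tells
you very little.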

I'm sure there are workloads for which the numa balancing is more important. But
even then I suspect it is making the wrong decisions more often than not. I think
a similar rework may be needed :)

I've asked our perf team to try the full battery of tests with numa balancing
disabled to see what it shows across the board.


Good job on this and thanks for the time looking at my specific issues.


As far as this series is concerned, and as far as it matters:

Acked-by: Phil Auld <[email protected]>



Cheers,
Phil

--

2019-10-31 16:44:56

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On Thu, 31 Oct 2019 at 14:57, Phil Auld <[email protected]> wrote:
>
> Hi Vincent,
>
> On Wed, Oct 30, 2019 at 06:25:49PM +0100 Vincent Guittot wrote:
> > On Wed, 30 Oct 2019 at 15:39, Phil Auld <[email protected]> wrote:
> > > > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > > > surprising except if this means more NUMA level in the sched_domain
> > > > topology
> > > > Could you give us more details about the sched domain topology ?
> > > >
> > >
> > > The 8-node system has 5 sched domain levels. The 4-node system only
> > > has 3.
> >
> > That's an interesting difference. and your additional tests on a 8
> > nodes with 3 level tends to confirm that the number of level make a
> > difference
> > I need to study a bit more how this can impact the spread of tasks
>
> So I think I understand what my numbers have been showing.
>
> I believe the numa balancing is causing problems.
>
> Here's numbers from the test on 5.4-rc3+ without your series:
>
> echo 1 > /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1 Average 10.87 0.00 0.00 11.49 36.69 34.26 30.59 32.10
> lu.C.x_156_GROUP_2 Average 20.15 16.32 9.49 24.91 21.07 20.93 21.63 21.50
> lu.C.x_156_GROUP_3 Average 21.27 17.23 11.84 21.80 20.91 20.68 21.11 21.16
> lu.C.x_156_GROUP_4 Average 19.44 6.53 8.71 19.72 22.95 23.16 28.85 26.64
> lu.C.x_156_GROUP_5 Average 20.59 6.20 11.32 14.63 28.73 30.36 22.20 21.98
> lu.C.x_156_NORMAL_1 Average 20.50 19.95 20.40 20.45 18.75 19.35 18.25 18.35
> lu.C.x_156_NORMAL_2 Average 17.15 19.04 18.42 18.69 21.35 21.42 20.00 19.92
> lu.C.x_156_NORMAL_3 Average 18.00 18.15 17.55 17.60 18.90 18.40 19.90 19.75
> lu.C.x_156_NORMAL_4 Average 20.53 20.05 20.21 19.11 19.00 19.47 19.37 18.26
> lu.C.x_156_NORMAL_5 Average 18.72 18.78 19.72 18.50 19.67 19.72 21.11 19.78
>
> ============156_GROUP========Mop/s===================================
> min q1 median q3 max
> 1564.63 3003.87 3928.23 5411.13 8386.66
> ============156_GROUP========time====================================
> min q1 median q3 max
> 243.12 376.82 519.06 678.79 1303.18
> ============156_NORMAL========Mop/s===================================
> min q1 median q3 max
> 13845.6 18013.8 18545.5 19359.9 19647.4
> ============156_NORMAL========time====================================
> min q1 median q3 max
> 103.78 105.32 109.95 113.19 147.27
>
> (This one above is especially bad... we don't usually see 0.00s, but overall it's
> basically on par. It's reflected in the spread of the results).
>
>
> echo 0 > /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1 Average 17.75 19.30 21.20 21.20 20.20 20.80 18.90 16.65
> lu.C.x_156_GROUP_2 Average 18.38 19.25 21.00 20.06 20.19 20.31 19.56 17.25
> lu.C.x_156_GROUP_3 Average 21.81 21.00 18.38 16.86 20.81 21.48 18.24 17.43
> lu.C.x_156_GROUP_4 Average 20.48 20.96 19.61 17.61 17.57 19.74 18.48 21.57
> lu.C.x_156_GROUP_5 Average 23.32 21.96 19.16 14.28 21.44 22.56 17.00 16.28
> lu.C.x_156_NORMAL_1 Average 19.50 19.83 19.58 19.25 19.58 19.42 19.42 19.42
> lu.C.x_156_NORMAL_2 Average 18.90 18.40 20.00 19.80 19.70 19.30 19.80 20.10
> lu.C.x_156_NORMAL_3 Average 19.45 19.09 19.91 20.09 19.45 18.73 19.45 19.82
> lu.C.x_156_NORMAL_4 Average 19.64 19.27 19.64 19.00 19.82 19.55 19.73 19.36
> lu.C.x_156_NORMAL_5 Average 18.75 19.42 20.08 19.67 18.75 19.50 19.92 19.92
>
> ============156_GROUP========Mop/s===================================
> min q1 median q3 max
> 14956.3 16346.5 17505.7 18440.6 22492.7
> ============156_GROUP========time====================================
> min q1 median q3 max
> 90.65 110.57 116.48 124.74 136.33
> ============156_NORMAL========Mop/s===================================
> min q1 median q3 max
> 29801.3 30739.2 31967.5 32151.3 34036
> ============156_NORMAL========time====================================
> min q1 median q3 max
> 59.91 63.42 63.78 66.33 68.42
>
>
> Note there is a significant improvement already. But we are seeing imbalance due to
> using weighted load and averages. In this case it's only 55% slowdown rather than
> the 5x. But the overall performance if the benchmark is also much better in both cases.
>
>
>
> Here's the same test, same system with the full series (lb_v4a as I've been calling it):
>
> echo 1 > /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1 Average 18.59 19.36 19.50 18.86 20.41 20.59 18.27 20.41
> lu.C.x_156_GROUP_2 Average 19.52 20.52 20.48 21.17 19.52 19.09 17.70 18.00
> lu.C.x_156_GROUP_3 Average 20.58 20.71 20.17 20.50 18.46 19.50 18.58 17.50
> lu.C.x_156_GROUP_4 Average 18.95 19.63 19.47 19.84 18.79 19.84 20.84 18.63
> lu.C.x_156_GROUP_5 Average 16.85 17.96 19.89 19.15 19.26 20.48 21.70 20.70
> lu.C.x_156_NORMAL_1 Average 18.04 18.48 20.00 19.72 20.72 20.48 18.48 20.08
> lu.C.x_156_NORMAL_2 Average 18.22 20.56 19.50 19.39 20.67 19.83 18.44 19.39
> lu.C.x_156_NORMAL_3 Average 17.72 19.61 19.56 19.17 20.17 19.89 20.78 19.11
> lu.C.x_156_NORMAL_4 Average 18.05 19.74 20.21 19.89 20.32 20.26 19.16 18.37
> lu.C.x_156_NORMAL_5 Average 18.89 19.95 20.21 20.63 19.84 19.26 19.26 17.95
>
> ============156_GROUP========Mop/s===================================
> min q1 median q3 max
> 13460.1 14949 15851.7 16391.4 18993
> ============156_GROUP========time====================================
> min q1 median q3 max
> 107.35 124.39 128.63 136.4 151.48
> ============156_NORMAL========Mop/s===================================
> min q1 median q3 max
> 14418.5 18512.4 19049.5 19682 19808.8
> ============156_NORMAL========time====================================
> min q1 median q3 max
> 102.93 103.6 107.04 110.14 141.42
>
>
> echo 0 > /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1 Average 19.00 19.33 19.33 19.58 20.08 19.67 19.83 19.17
> lu.C.x_156_GROUP_2 Average 18.55 19.91 20.09 19.27 18.82 19.27 19.91 20.18
> lu.C.x_156_GROUP_3 Average 18.42 19.08 19.75 19.00 19.50 20.08 20.25 19.92
> lu.C.x_156_GROUP_4 Average 18.42 19.83 19.17 19.50 19.58 19.83 19.83 19.83
> lu.C.x_156_GROUP_5 Average 19.17 19.42 20.17 19.92 19.25 18.58 19.92 19.58
> lu.C.x_156_NORMAL_1 Average 19.25 19.50 19.92 18.92 19.33 19.75 19.58 19.75
> lu.C.x_156_NORMAL_2 Average 19.42 19.25 17.83 18.17 19.83 20.50 20.42 20.58
> lu.C.x_156_NORMAL_3 Average 18.58 19.33 19.75 18.25 19.42 20.25 20.08 20.33
> lu.C.x_156_NORMAL_4 Average 19.00 19.55 19.73 18.73 19.55 20.00 19.64 19.82
> lu.C.x_156_NORMAL_5 Average 19.25 19.25 19.50 18.75 19.92 19.58 19.92 19.83
>
> ============156_GROUP========Mop/s===================================
> min q1 median q3 max
> 28520.1 29024.2 29042.1 29367.4 31235.2
> ============156_GROUP========time====================================
> min q1 median q3 max
> 65.28 69.43 70.21 70.25 71.49
> ============156_NORMAL========Mop/s===================================
> min q1 median q3 max
> 28974.5 29806.5 30237.1 30907.4 31830.1
> ============156_NORMAL========time====================================
> min q1 median q3 max
> 64.06 65.97 67.43 68.41 70.37
>
>
> This all now makes sense. Looking at the numa balancing code a bit you can see
> that it still uses load so it will still be subject to making bogus decisions
> based on the weighted load. In this case it's been actively working against the
> load balancer because of that.

Thanks for the tests and interesting results

>
> I think with the three numa levels on this system the numa balancing was able to
> win more often. We don't see the same level of this result on systems with only
> one SD_NUMA level.
>
> Following the other part of this thread, I have to add that I'm of the opinion
> that the weighted load (which is all we have now I believe) really should be used
> only in extreme cases of overload to deal with fairness. And even then maybe not.
> As far as I can see, once the fair group scheduling is involved, that load is
> basically a random number between 1 and 1024. It really has no bearing on how
> much "load" a task will put on a cpu. Any comparison of that to cpu capacity
> is pretty meaningless.
>
> I'm sure there are workloads for which the numa balancing is more important. But
> even then I suspect it is making the wrong decisions more often than not. I think
> a similar rework may be needed :)

Yes, there is probably room for better collaboration between load
balancing and numa balancing

>
> I've asked our perf team to try the full battery of tests with numa balancing
> disabled to see what it shows across the board.
>
>
> Good job on this and thanks for the time looking at my specific issues.

Thanks for your help


>
>
> As far as this series is concerned, and as far as it matters:
>
> Acked-by: Phil Auld <[email protected]>
>
>
>
> Cheers,
> Phil
>
> --
>

2019-11-18 13:19:39

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance


* Mel Gorman <[email protected]> wrote:

> On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > > <SNIP>
> >
> > Thanks, that's an excellent series!
> >
>
> Agreed despite the level of whining and complaining I made during the
> review.

I saw no whining and complaining whatsoever, and thanks for the feedback!
:-)

>
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> > - load_balance() and wake-up changes invariably are such: some workloads
> > only work/scale well by accident, and if we touch the logic it might
> > flip over into a less advantageous scheduling pattern.
> >
> > - In particular the changes from balancing and waking on runnable load
> > to full load that includes blocking *will* shift IO-intensive
> > workloads that your tests don't fully capture I believe. You also made
> > idle balancing more aggressive in essence - which might reduce cache
> > locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
>
> I queued this back on the 21st and it took this long for me to get back
> to it.
>
> What I tested did not include the fix for the last patch so I cannot say
> the data is that useful. I also failed to include something that exercised
> the IO paths in a way that idles rapidly as that can catch interesting
> details (usually cpufreq related but sometimes load-balancing related).
> There was no real thinking behind this decision, I just used an old
> collection of tests to get a general feel for the series.

I have just applied Vincent's fix to find_idlest_group(), so that will
probably modify some of the results. (Hopefully for the better.)

Will push it out later today-ish.

> Most of the results were performance-neutral and some notable gains
> (kernel compiles were 1-6% faster depending on the -j count). Hackbench
> saw a disproportionate gain in terms of performance but I tend to be
> wary of hackbench as improving it is rarely a universal win. There
> tends to be some jitter around the point where a NUMA node's worth of
> CPUs gets overloaded. tbench (mmtests configuration network-tbench) on a
> NUMA machine showed gains for low thread counts and high thread counts
> but a loss near the boundary where a single node would get overloaded.
>
> Some NAS-related workloads saw a drop in performance on NUMA machines
> but the size class might be too small to be certain, I'd have to rerun
> with the D class to be sure. The biggest strange drop in performance
> was the elapsed time to run the git test suite (mmtests configuration
> workload-shellscripts modified to use a fresh XFS partition) took
> 17.61% longer to execute on a UMA Skylake machine. This *might* be due
> to the missing fix because it is mostly a single-task workload.

Thanks a lot for your testing!

> I'm not going to go through the results in detail because I think
> another full round of testing would be required to take the fix into
> account. I'd also prefer to wait to see if the review results in any
> material change to the series.

I'll try to make sure it all gets addressed.

Thanks,

Ingo

Subject: [tip: sched/core] sched/fair: Fix rework of find_idlest_group()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 3318544b721d3072fdd1f85ee0f1f214c0b211ee
Gitweb: https://git.kernel.org/tip/3318544b721d3072fdd1f85ee0f1f214c0b211ee
Author: Vincent Guittot <[email protected]>
AuthorDate: Tue, 22 Oct 2019 18:46:38 +02:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Mon, 18 Nov 2019 14:11:56 +01:00

sched/fair: Fix rework of find_idlest_group()

The task, for which the scheduler looks for the idlest group of CPUs, must
be discounted from all statistics in order to get a fair comparison
between groups. This includes utilization, load, nr_running and idle_cpus.

Such unfairness can be easily highlighted with the unixbench execl 1 task.
This test continuously calls execve() and the scheduler looks for the idlest
group/CPU on which it should place the task. Because the task runs on the
local group/CPU, the latter seems already busy even if there is nothing
else running on it. As a result, the scheduler will always select another
group/CPU than the local one.

This recovers most of the performance regression on my system from the
recent load-balancer rewrite.

[ mingo: Minor cleanups. ]

Reported-by: kernel test robot <[email protected]>
Tested-by: kernel test robot <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 91 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 84 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81eba55..2fc08e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5391,6 +5391,37 @@ static unsigned long cpu_load(struct rq *rq)
return cfs_rq_load_avg(&rq->cfs);
}

+/*
+ * cpu_load_without - compute CPU load without any contributions from *p
+ * @cpu: the CPU which load is requested
+ * @p: the task which load should be discounted
+ *
+ * The load of a CPU is defined by the load of tasks currently enqueued on that
+ * CPU as well as tasks which are currently sleeping after an execution on that
+ * CPU.
+ *
+ * This method returns the load of the specified CPU by discounting the load of
+ * the specified task, whenever the task is currently contributing to the CPU
+ * load.
+ */
+static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
+{
+ struct cfs_rq *cfs_rq;
+ unsigned int load;
+
+ /* Task has no contribution or is new */
+ if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+ return cpu_load(rq);
+
+ cfs_rq = &rq->cfs;
+ load = READ_ONCE(cfs_rq->avg.load_avg);
+
+ /* Discount task's util from CPU's util */
+ lsub_positive(&load, task_h_load(p));
+
+ return load;
+}
+
static unsigned long capacity_of(int cpu)
{
return cpu_rq(cpu)->cpu_capacity;
@@ -8142,10 +8173,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
struct sg_lb_stats;

/*
+ * task_running_on_cpu - return 1 if @p is running on @cpu.
+ */
+
+static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
+{
+ /* Task has no contribution or is new */
+ if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+ return 0;
+
+ if (task_on_rq_queued(p))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * idle_cpu_without - would a given CPU be idle without p ?
+ * @cpu: the processor on which idleness is tested.
+ * @p: task which should be ignored.
+ *
+ * Return: 1 if the CPU would be idle. 0 otherwise.
+ */
+static int idle_cpu_without(int cpu, struct task_struct *p)
+{
+ struct rq *rq = cpu_rq(cpu);
+
+ if (rq->curr != rq->idle && rq->curr != p)
+ return 0;
+
+ /*
+ * rq->nr_running can't be used but an updated version without the
+ * impact of p on cpu must be used instead. The updated nr_running
+ * be computed and tested before calling idle_cpu_without().
+ */
+
+#ifdef CONFIG_SMP
+ if (!llist_empty(&rq->wake_list))
+ return 0;
+#endif
+
+ return 1;
+}
+
+/*
* update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
- * @denv: The ched_domain level to look for idlest group.
+ * @sd: The sched_domain level to look for idlest group.
* @group: sched_group whose statistics are to be updated.
* @sgs: variable to hold the statistics for this group.
+ * @p: The task for which we look for the idlest group/CPU.
*/
static inline void update_sg_wakeup_stats(struct sched_domain *sd,
struct sched_group *group,
@@ -8158,21 +8234,22 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,

for_each_cpu(i, sched_group_span(group)) {
struct rq *rq = cpu_rq(i);
+ unsigned int local;

- sgs->group_load += cpu_load(rq);
+ sgs->group_load += cpu_load_without(rq, p);
sgs->group_util += cpu_util_without(i, p);
- sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+ local = task_running_on_cpu(i, p);
+ sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;

- nr_running = rq->nr_running;
+ nr_running = rq->nr_running - local;
sgs->sum_nr_running += nr_running;

/*
- * No need to call idle_cpu() if nr_running is not 0
+ * No need to call idle_cpu_without() if nr_running is not 0
*/
- if (!nr_running && idle_cpu(i))
+ if (!nr_running && idle_cpu_without(i, p))
sgs->idle_cpus++;

-
}

/* Check if task fits in the group */
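
A toy illustration of why the discount matters for the execl case (numbers
are made up; lsub_positive() below is a local stand-in for the kernel's
saturating-subtract helper used in the patch above):

#include <stdio.h>

static unsigned long lsub_positive(unsigned long a, unsigned long b)
{
	return a > b ? a - b : 0;	/* subtract without going below zero */
}

int main(void)
{
	unsigned long task_load   = 1024;	/* the execve()-ing task itself */
	unsigned long local_load  = 1024;	/* only that task runs locally */
	unsigned long remote_load = 0;		/* a remote group sits idle */

	printf("without discount: local %4lu vs remote %lu -> task migrates\n",
	       local_load, remote_load);
	printf("with discount   : local %4lu vs remote %lu -> task stays put\n",
	       lsub_positive(local_load, task_load), remote_load);
	return 0;
}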

2019-11-20 13:40:51

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

Hi Qais,

On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
>
> Hi Vincent
>
> On 10/18/19 15:26, Vincent Guittot wrote:
> > The slow wake up path computes per sched_group statisics to select the
> > idlest group, which is quite similar to what load_balance() is doing
> > for selecting busiest group. Rework find_idlest_group() to classify the
> > sched_group and select the idlest one following the same steps as
> > load_balance().
> >
> > Signed-off-by: Vincent Guittot <[email protected]>
> > ---
>
> LTP test has caught a regression in perf_event_open02 test on linux-next and I
> bisected it to this patch.
>
> That is checking out next-20191119 tag and reverting this patch on top the test
> passes. Without the revert the test fails.
>
> I think this patch disturbs this part of the test:
>
> https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209
>
> When I revert this patch count_hardware_counters() returns a non zero value.
> But with it applied it returns 0 which indicates that the condition terminates
> earlier than what the test expects.

Thanks for the report and starting analysing it

>
> I'm failing to see the connection yet, but since I spent enough time bisecting
> it I thought I'll throw this out before I continue to bottom it out in hope it
> rings a bell for you or someone else.

I will try to reproduce the problem and understand why it's failing,
because I don't have any clue about the relation between the two for now

>
> The problem was consistently reproducible on Juno-r2.
>
> LTP was compiled from 20190930 tag using
>
> ./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
> make && make install
>
>
>
> *** Output of the test when it fails ***
>
> # ./perf_event_open02 -v
> at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
> perf_event_open02 0 TINFO : overall task clock: 166935520
> perf_event_open02 0 TINFO : hw sum: 1200812256, task clock sum: 667703360
> hw counters: 300202518 300202881 300203246 300203611
> task clock counters: 166927400 166926780 166925660 166923520
> perf_event_open02 0 TINFO : ratio: 3.999768
> perf_event_open02 0 TINFO : nhw: 0.000100 /* I added this extra line for debug */
> perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
>
>
>
> *** Output of the test when it passes (this patch reverted) ***
>
> # ./perf_event_open02 -v
> at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
> at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
> at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
> at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
> at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
> at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
> at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
> perf_event_open02 0 TINFO : overall task clock: 167540640
> perf_event_open02 0 TINFO : hw sum: 1801473873, task clock sum: 1005046160
> hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
> task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
> perf_event_open02 0 TINFO : ratio: 5.998820
> perf_event_open02 0 TINFO : nhw: 6.000100 /* I added this extra line for debug */
> perf_event_open02 1 TPASS : test passed
>
> Thanks
>
> --
> Qais Yousef

2019-11-20 15:28:48

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

Hi Vincent

On 10/18/19 15:26, Vincent Guittot wrote:
> The slow wake up path computes per sched_group statisics to select the
> idlest group, which is quite similar to what load_balance() is doing
> for selecting busiest group. Rework find_idlest_group() to classify the
> sched_group and select the idlest one following the same steps as
> load_balance().
>
> Signed-off-by: Vincent Guittot <[email protected]>
> ---

An LTP test has caught a regression in the perf_event_open02 test on linux-next and I
bisected it to this patch.

That is, checking out the next-20191119 tag and reverting this patch on top, the test
passes. Without the revert the test fails.

I think this patch disturbs this part of the test:

https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209

When I revert this patch count_hardware_counters() returns a non zero value.
But with it applied it returns 0 which indicates that the condition terminates
earlier than what the test expects.

I'm failing to see the connection yet, but since I spent enough time bisecting
it I thought I'd throw this out before I continue to bottom it out, in the hope it
rings a bell for you or someone else.

The problem was consistently reproducible on Juno-r2.

LTP was compiled from 20190930 tag using

./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
make && make install



*** Output of the test when it fails ***

# ./perf_event_open02 -v
at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
perf_event_open02 0 TINFO : overall task clock: 166935520
perf_event_open02 0 TINFO : hw sum: 1200812256, task clock sum: 667703360
hw counters: 300202518 300202881 300203246 300203611
task clock counters: 166927400 166926780 166925660 166923520
perf_event_open02 0 TINFO : ratio: 3.999768
perf_event_open02 0 TINFO : nhw: 0.000100 /* I added this extra line for debug */
perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )



*** Output of the test when it passes (this patch reverted) ***

# ./perf_event_open02 -v
at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
perf_event_open02 0 TINFO : overall task clock: 167540640
perf_event_open02 0 TINFO : hw sum: 1801473873, task clock sum: 1005046160
hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
perf_event_open02 0 TINFO : ratio: 5.998820
perf_event_open02 0 TINFO : nhw: 6.000100 /* I added this extra line for debug */
perf_event_open02 1 TPASS : test passed

Thanks

--
Qais Yousef

2019-11-20 17:40:18

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
<[email protected]> wrote:
>
> Hi Qais,
>
> On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> >
> > Hi Vincent
> >
> > On 10/18/19 15:26, Vincent Guittot wrote:
> > > The slow wake up path computes per sched_group statisics to select the
> > > idlest group, which is quite similar to what load_balance() is doing
> > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > sched_group and select the idlest one following the same steps as
> > > load_balance().
> > >
> > > Signed-off-by: Vincent Guittot <[email protected]>
> > > ---
> >
> > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > bisected it to this patch.
> >
> > That is checking out next-20191119 tag and reverting this patch on top the test
> > passes. Without the revert the test fails.

I haven't tried linux-next yet, but the LTP test passes with
tip/sched/core, which includes this patch, on hikey960, which is arm64
too.

Have you tried tip/sched/core on your Juno? This could help to
understand whether it's specific to Juno or whether this patch interacts
with another branch merged in linux-next.

Thanks
Vincent

> >
> > I think this patch disturbs this part of the test:
> >
> > https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209
> >
> > When I revert this patch count_hardware_counters() returns a non zero value.
> > But with it applied it returns 0 which indicates that the condition terminates
> > earlier than what the test expects.
>
> Thanks for the report and starting analysing it
>
> >
> > I'm failing to see the connection yet, but since I spent enough time bisecting
> > it I thought I'll throw this out before I continue to bottom it out in hope it
> > rings a bell for you or someone else.
>
> I will try to reproduce the problem and understand why it's failing
> because i don't have any clue of the relation between both for now
>
> >
> > The problem was consistently reproducible on Juno-r2.
> >
> > LTP was compiled from 20190930 tag using
> >
> > ./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
> > make && make install
> >
> >
> >
> > *** Output of the test when it fails ***
> >
> > # ./perf_event_open02 -v
> > at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
> > perf_event_open02 0 TINFO : overall task clock: 166935520
> > perf_event_open02 0 TINFO : hw sum: 1200812256, task clock sum: 667703360
> > hw counters: 300202518 300202881 300203246 300203611
> > task clock counters: 166927400 166926780 166925660 166923520
> > perf_event_open02 0 TINFO : ratio: 3.999768
> > perf_event_open02 0 TINFO : nhw: 0.000100 /* I added this extra line for debug */
> > perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
> >
> >
> >
> > *** Output of the test when it passes (this patch reverted) ***
> >
> > # ./perf_event_open02 -v
> > at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
> > at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
> > at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
> > at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
> > at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
> > at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
> > at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
> > perf_event_open02 0 TINFO : overall task clock: 167540640
> > perf_event_open02 0 TINFO : hw sum: 1801473873, task clock sum: 1005046160
> > hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
> > task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
> > perf_event_open02 0 TINFO : ratio: 5.998820
> > perf_event_open02 0 TINFO : nhw: 6.000100 /* I added this extra line for debug */
> > perf_event_open02 1 TPASS : test passed
> >
> > Thanks
> >
> > --
> > Qais Yousef

2019-11-20 17:41:04

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 11/20/19 17:53, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> <[email protected]> wrote:
> >
> > Hi Qais,
> >
> > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > >
> > > Hi Vincent
> > >
> > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > The slow wake up path computes per sched_group statisics to select the
> > > > idlest group, which is quite similar to what load_balance() is doing
> > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > sched_group and select the idlest one following the same steps as
> > > > load_balance().
> > > >
> > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > ---
> > >
> > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > bisected it to this patch.
> > >
> > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > passes. Without the revert the test fails.
>
> I haven't tried linux-next yet but LTP test is passed with
> tip/sched/core, which includes this patch, on hikey960 which is arm64
> too.
>
> Have you tried tip/sched/core on your juno ? this could help to
> understand if it's only for juno or if this patch interact with
> another branch merged in linux next

Okay will give it a go. But out of curiosity, what is the output of your run?

While bisecting on linux-next I noticed that at some point the test was
passing but all the read values were 0. At some point I started seeing
non-zero values.

--
Qais Yousef

2019-11-20 17:47:07

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On Wed, 20 Nov 2019 at 18:34, Qais Yousef <[email protected]> wrote:
>
> On 11/20/19 17:53, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > <[email protected]> wrote:
> > >
> > > Hi Qais,
> > >
> > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > > >
> > > > Hi Vincent
> > > >
> > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > sched_group and select the idlest one following the same steps as
> > > > > load_balance().
> > > > >
> > > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > > ---
> > > >
> > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > bisected it to this patch.
> > > >
> > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > passes. Without the revert the test fails.
> >
> > I haven't tried linux-next yet but LTP test is passed with
> > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > too.
> >
> > Have you tried tip/sched/core on your juno ? this could help to
> > understand if it's only for juno or if this patch interact with
> > another branch merged in linux next
>
> Okay will give it a go. But out of curiosity, what is the output of your run?
>
> While bisecting on linux-next I noticed that at some point the test was
> passing but all the read values were 0. At some point I started seeing
> none-zero values.

for tip/sched/core
linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
sudo ./perf_event_open02
perf_event_open02 0 TINFO : overall task clock: 63724479
perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
perf_event_open02 0 TINFO : ratio: 5.997229
perf_event_open02 1 TPASS : test passed

for next-20191119
~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
at iteration:0 value:0 time_enabled:69795312 time_running:0
perf_event_open02 0 TINFO : overall task clock: 63582292
perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
hw counters: 0 0 0 0
task clock counters: 0 0 0 0
perf_event_open02 0 TINFO : ratio: 0.000000
perf_event_open02 1 TPASS : test passed

>
> --
> Qais Yousef

2019-11-20 18:13:38

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 11/20/19 18:43, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 18:34, Qais Yousef <[email protected]> wrote:
> >
> > On 11/20/19 17:53, Vincent Guittot wrote:
> > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > <[email protected]> wrote:
> > > >
> > > > Hi Qais,
> > > >
> > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > > > >
> > > > > Hi Vincent
> > > > >
> > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > sched_group and select the idlest one following the same steps as
> > > > > > load_balance().
> > > > > >
> > > > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > > > ---
> > > > >
> > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > bisected it to this patch.
> > > > >
> > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > passes. Without the revert the test fails.
> > >
> > > I haven't tried linux-next yet but LTP test is passed with
> > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > too.
> > >
> > > Have you tried tip/sched/core on your juno ? this could help to
> > > understand if it's only for juno or if this patch interact with
> > > another branch merged in linux next
> >
> > Okay will give it a go. But out of curiosity, what is the output of your run?
> >
> > While bisecting on linux-next I noticed that at some point the test was
> > passing but all the read values were 0. At some point I started seeing
> > none-zero values.
>
> for tip/sched/core
> linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> sudo ./perf_event_open02
> perf_event_open02 0 TINFO : overall task clock: 63724479
> perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
> perf_event_open02 0 TINFO : ratio: 5.997229
> perf_event_open02 1 TPASS : test passed
>
> for next-2019119
> ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> at iteration:0 value:0 time_enabled:69795312 time_running:0
> perf_event_open02 0 TINFO : overall task clock: 63582292
> perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
> hw counters: 0 0 0 0
> task clock counters: 0 0 0 0
> perf_event_open02 0 TINFO : ratio: 0.000000
> perf_event_open02 1 TPASS : test passed

Okay, that is weird. But ratio, hw sum and task clock sum are all 0 in your
next-20191119 run. I'm not sure why the counters return 0 sometimes - is it
dependent on some config option, or is it a bug somewhere?

I just did another run and it failed for me (building with defconfig)

# uname -a
Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux

# ./perf_event_open02 -v
at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
perf_event_open02 0 TINFO : overall task clock: 166915220
perf_event_open02 0 TINFO : hw sum: 1200718268, task clock sum: 667621320
hw counters: 300179051 300179395 300179739 300180083
task clock counters: 166906620 166906200 166905160 166903340
perf_event_open02 0 TINFO : ratio: 3.999763
perf_event_open02 0 TINFO : nhw: 0.000100
perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )

It is a funny one for sure. I haven't tried tip/sched/core yet.

Thanks

--
Qais Yousef

2019-11-20 18:23:17

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On Wed, 20 Nov 2019 at 19:10, Qais Yousef <[email protected]> wrote:
>
> On 11/20/19 18:43, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <[email protected]> wrote:
> > >
> > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > <[email protected]> wrote:
> > > > >
> > > > > Hi Qais,
> > > > >
> > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > > > > >
> > > > > > Hi Vincent
> > > > > >
> > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > load_balance().
> > > > > > >
> > > > > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > > > > ---
> > > > > >
> > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > bisected it to this patch.
> > > > > >
> > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > passes. Without the revert the test fails.
> > > >
> > > > I haven't tried linux-next yet but LTP test is passed with
> > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > too.
> > > >
> > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > understand if it's only for juno or if this patch interact with
> > > > another branch merged in linux next
> > >
> > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > >
> > > While bisecting on linux-next I noticed that at some point the test was
> > > passing but all the read values were 0. At some point I started seeing
> > > none-zero values.
> >
> > for tip/sched/core
> > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > sudo ./perf_event_open02
> > perf_event_open02 0 TINFO : overall task clock: 63724479
> > perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
> > perf_event_open02 0 TINFO : ratio: 5.997229
> > perf_event_open02 1 TPASS : test passed
> >
> > for next-2019119
> > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > perf_event_open02 0 TINFO : overall task clock: 63582292
> > perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
> > hw counters: 0 0 0 0
> > task clock counters: 0 0 0 0
> > perf_event_open02 0 TINFO : ratio: 0.000000
> > perf_event_open02 1 TPASS : test passed
>
> Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> next-20191119. I'm not sure why the counters return 0 sometimes - is it
> dependent on some option or a bug somewhere.
>
> I just did another run and it failed for me (building with defconfig)
>
> # uname -a
> Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
>
> # ./perf_event_open02 -v
> at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> perf_event_open02 0 TINFO : overall task clock: 166915220
> perf_event_open02 0 TINFO : hw sum: 1200718268, task clock sum: 667621320
> hw counters: 300179051 300179395 300179739 300180083
> task clock counters: 166906620 166906200 166905160 166903340
> perf_event_open02 0 TINFO : ratio: 3.999763
> perf_event_open02 0 TINFO : nhw: 0.000100
> perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
>
> It is a funny one for sure. I haven't tried tip/sched/core yet.

I confirm that on next-20191119 the hw counters always return 0,
but on tip/sched/core, which has this patch, and v5.4-rc7, which has not,
the hw counters are always non-zero.

On v5.4-rc7 I get the same ratio:
linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
sudo ./perf_event_open02 -v
at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
perf_event_open02 0 TINFO : overall task clock: 63717187
perf_event_open02 0 TINFO : hw sum: 1800857435, task clock sum: 382156248
hw counters: 149326575 150152481 169006047 187845928 206684169
224693333 206543358 187716226 168865909 150023409
task clock counters: 31694792 31870834 35868749 39866666 43863541
47685936 43822396 39826042 35828125 31829167
perf_event_open02 0 TINFO : ratio: 5.997695
perf_event_open02 1 TPASS : test passed

Thanks

>
> Thanks
>
> --
> Qais Yousef

2019-11-20 18:29:22

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 11/20/19 19:20, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 19:10, Qais Yousef <[email protected]> wrote:
> >
> > On 11/20/19 18:43, Vincent Guittot wrote:
> > > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <[email protected]> wrote:
> > > >
> > > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > > <[email protected]> wrote:
> > > > > >
> > > > > > Hi Qais,
> > > > > >
> > > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > > > > > >
> > > > > > > Hi Vincent
> > > > > > >
> > > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > > load_balance().
> > > > > > > >
> > > > > > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > > > > > ---
> > > > > > >
> > > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > > bisected it to this patch.
> > > > > > >
> > > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > > passes. Without the revert the test fails.
> > > > >
> > > > > I haven't tried linux-next yet but LTP test is passed with
> > > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > > too.
> > > > >
> > > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > > understand if it's only for juno or if this patch interact with
> > > > > another branch merged in linux next
> > > >
> > > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > > >
> > > > While bisecting on linux-next I noticed that at some point the test was
> > > > passing but all the read values were 0. At some point I started seeing
> > > > none-zero values.
> > >
> > > for tip/sched/core
> > > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > > sudo ./perf_event_open02
> > > perf_event_open02 0 TINFO : overall task clock: 63724479
> > > perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
> > > perf_event_open02 0 TINFO : ratio: 5.997229
> > > perf_event_open02 1 TPASS : test passed
> > >
> > > for next-2019119
> > > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > > perf_event_open02 0 TINFO : overall task clock: 63582292
> > > perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
> > > hw counters: 0 0 0 0
> > > task clock counters: 0 0 0 0
> > > perf_event_open02 0 TINFO : ratio: 0.000000
> > > perf_event_open02 1 TPASS : test passed
> >
> > Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> > next-20191119. I'm not sure why the counters return 0 sometimes - is it
> > dependent on some option or a bug somewhere.
> >
> > I just did another run and it failed for me (building with defconfig)
> >
> > # uname -a
> > Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
> >
> > # ./perf_event_open02 -v
> > at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> > perf_event_open02 0 TINFO : overall task clock: 166915220
> > perf_event_open02 0 TINFO : hw sum: 1200718268, task clock sum: 667621320
> > hw counters: 300179051 300179395 300179739 300180083
> > task clock counters: 166906620 166906200 166905160 166903340
> > perf_event_open02 0 TINFO : ratio: 3.999763
> > perf_event_open02 0 TINFO : nhw: 0.000100
> > perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
> >
> > It is a funny one for sure. I haven't tried tip/sched/core yet.
>
> I confirm that on next-20191119, hw counters always return 0
> but on tip/sched/core which has this patch and v5.4-rc7 which has not,
> the hw counters are always different from 0

It's the other way around for me: tip/sched/core returns 0 hw counters. I tried
enabling coresight; that had no effect. Neither did copying the .config that
failed from linux-next to tip/sched/core. I'm not sure what the
dependency/breakage is :-/

--
Qais Yousef

>
> on v5.4-rc7 i have got the same ratio :
> linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> sudo ./perf_event_open02 -v
> at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
> at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
> at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
> at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
> at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
> at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
> at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
> perf_event_open02 0 TINFO : overall task clock: 63717187
> perf_event_open02 0 TINFO : hw sum: 1800857435, task clock sum: 382156248
> hw counters: 149326575 150152481 169006047 187845928 206684169
> 224693333 206543358 187716226 168865909 150023409
> task clock counters: 31694792 31870834 35868749 39866666 43863541
> 47685936 43822396 39826042 35828125 31829167
> perf_event_open02 0 TINFO : ratio: 5.997695
> perf_event_open02 1 TPASS : test passed

2019-11-20 19:31:05

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On Wed, 20 Nov 2019 at 19:27, Qais Yousef <[email protected]> wrote:
>
> On 11/20/19 19:20, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 19:10, Qais Yousef <[email protected]> wrote:
> > >
> > > On 11/20/19 18:43, Vincent Guittot wrote:
> > > > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <[email protected]> wrote:
> > > > >
> > > > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > Hi Qais,
> > > > > > >
> > > > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <[email protected]> wrote:
> > > > > > > >
> > > > > > > > Hi Vincent
> > > > > > > >
> > > > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > > > load_balance().
> > > > > > > > >
> > > > > > > > > Signed-off-by: Vincent Guittot <[email protected]>
> > > > > > > > > ---
> > > > > > > >
> > > > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > > > bisected it to this patch.
> > > > > > > >
> > > > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > > > passes. Without the revert the test fails.
> > > > > >
> > > > > > I haven't tried linux-next yet but LTP test is passed with
> > > > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > > > too.
> > > > > >
> > > > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > > > understand if it's only for juno or if this patch interact with
> > > > > > another branch merged in linux next
> > > > >
> > > > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > > > >
> > > > > While bisecting on linux-next I noticed that at some point the test was
> > > > > passing but all the read values were 0. At some point I started seeing
> > > > > none-zero values.
> > > >
> > > > for tip/sched/core
> > > > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > > > sudo ./perf_event_open02
> > > > perf_event_open02 0 TINFO : overall task clock: 63724479
> > > > perf_event_open02 0 TINFO : hw sum: 1800900992, task clock sum: 382170311
> > > > perf_event_open02 0 TINFO : ratio: 5.997229
> > > > perf_event_open02 1 TPASS : test passed
> > > >
> > > > for next-2019119
> > > > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > > > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > > > perf_event_open02 0 TINFO : overall task clock: 63582292
> > > > perf_event_open02 0 TINFO : hw sum: 0, task clock sum: 0
> > > > hw counters: 0 0 0 0
> > > > task clock counters: 0 0 0 0
> > > > perf_event_open02 0 TINFO : ratio: 0.000000
> > > > perf_event_open02 1 TPASS : test passed
> > >
> > > Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> > > next-20191119. I'm not sure why the counters return 0 sometimes - is it
> > > dependent on some option or a bug somewhere.
> > >
> > > I just did another run and it failed for me (building with defconfig)
> > >
> > > # uname -a
> > > Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
> > >
> > > # ./perf_event_open02 -v
> > > at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> > > perf_event_open02 0 TINFO : overall task clock: 166915220
> > > perf_event_open02 0 TINFO : hw sum: 1200718268, task clock sum: 667621320
> > > hw counters: 300179051 300179395 300179739 300180083
> > > task clock counters: 166906620 166906200 166905160 166903340
> > > perf_event_open02 0 TINFO : ratio: 3.999763
> > > perf_event_open02 0 TINFO : nhw: 0.000100
> > > perf_event_open02 1 TFAIL : perf_event_open02.c:370: test failed (ratio was greater than )
> > >
> > > It is a funny one for sure. I haven't tried tip/sched/core yet.
> >
> > I confirm that on next-20191119, hw counters always return 0
> > but on tip/sched/core which has this patch and v5.4-rc7 which has not,
> > the hw counters are always different from 0
>
> It's the other way around for me. tip/sched/core returns 0 hw counters. I tried
> enabling coresight; that had no effect. Nor copying the .config that failed
> from linux-next to tip/sched/core. I'm not sure what's the dependency/breakage
> :-/

I ran a few more tests and I can get hw counters that are either zero or
non-zero. The main difference is which CPU the test runs on, big or little:
the little CPUs always return 0 and the big CPUs always return a non-zero value.

On v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and the others return non-zero
values, but on linux-next it's the opposite: cpu0-3 return a non-zero ratio.

Could you try to run the test with taskset to pin it to the big or the little CPUs?
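
Something like the following, adjusting the cpu lists to the platform
(this is just an example; the cpu_capacity files under
/sys/devices/system/cpu/cpu*/ show which CPUs are big or little):

sudo taskset -c 0-3 ./perf_event_open02 -v
sudo taskset -c 4-7 ./perf_event_open02 -v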

>
> --
> Qais Yousef
>
> >
> > on v5.4-rc7 i have got the same ratio :
> > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > sudo ./perf_event_open02 -v
> > at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
> > at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
> > at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
> > at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
> > at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
> > at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
> > at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
> > perf_event_open02 0 TINFO : overall task clock: 63717187
> > perf_event_open02 0 TINFO : hw sum: 1800857435, task clock sum: 382156248
> > hw counters: 149326575 150152481 169006047 187845928 206684169
> > 224693333 206543358 187716226 168865909 150023409
> > task clock counters: 31694792 31870834 35868749 39866666 43863541
> > 47685936 43822396 39826042 35828125 31829167
> > perf_event_open02 0 TINFO : ratio: 5.997695
> > perf_event_open02 1 TPASS : test passed

2019-11-20 19:58:14

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 11/20/19 20:28, Vincent Guittot wrote:
> I run few more tests and i can get either hw counter with 0 or not.
> The main difference is on which CPU it runs: either big or little
> little return always 0 and big always non-zero value
>
> on v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and other non zeroa
> but on next, it's the opposite cpu0-3 return non zero ratio
>
> Could you try to run the test with taskset to run it on big or little ?

Nice catch!

Yes indeed, using taskset and forcing it to run on the big CPUs, it passes even
on linux-next/next-20191119.

So the relation to your patch is probably just that it biased where this test
is likely to run in my case, and that highlighted the breakage in the counters?

FWIW, if I use taskset to force it always onto the big CPUs it passes. Always
onto the little ones, the counters are always 0 and it passes too. But with a
mix I see what I pasted before: the counters have valid values but nhw is 0.

So the questions are why the little counters aren't working, and whether we
should generally run the test with taskset since it can't handle the asymmetry
correctly.

Let me first try to find out why the little counters aren't working.

Thanks

--
Qais Yousef

2019-11-21 14:59:46

by Qais Yousef

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 11/20/19 19:55, Qais Yousef wrote:
> On 11/20/19 20:28, Vincent Guittot wrote:
> > I run few more tests and i can get either hw counter with 0 or not.
> > The main difference is on which CPU it runs: either big or little
> > little return always 0 and big always non-zero value
> >
> > on v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and other non zeroa
> > but on next, it's the opposite cpu0-3 return non zero ratio
> >
> > Could you try to run the test with taskset to run it on big or little ?
>
> Nice catch!
>
> Yes indeed using taskset and forcing it to run on the big cpus it passes even
> on linux-next/next-20191119.
>
> So the relation to your patch is that it just biased where this test is likely
> to run in my case and highlighted the breakage in the counters, probably?
>
> FWIW, if I use taskset to force always big it passes. Always small, the counters
> are always 0 and it passes too. But if I have mixed I see what I pasted before,
> the counters have valid value but nhw is 0.
>
> So the questions are, why little counters aren't working. And whether we should
> run the test with taskset generally as it can't handle the asymmetry correctly.
>
> Let me first try to find out why the little counters aren't working.

So it turns out there's a caveat on usage of perf counters on big.LITTLE
systems.

Mark on CC can explain this better than me so I'll leave the details to him.

Sorry about the noise, Vincent - it seems your patch shifts things slightly
so that the task migrates to another CPU, which triggers the failure when
reading the perf counters, and hence the test failure.

Thanks

--
Qais Yousef

2019-11-22 14:35:38

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

Hi Vincent,

Apologies for the delayed review on this one. I have a few comments inline;
otherwise, for the misfit part, if it's at all still relevant:

Reviewed-by: Valentin Schneider <[email protected]>

On 18/10/2019 14:26, Vincent Guittot wrote:
> static struct sched_group *
> find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> + int this_cpu, int sd_flag);
^^^^^^^
That parameter is now unused. AFAICT it was only used to special-case fork
events (sd_flag & SD_BALANCE_FORK). I didn't see any explicit handling of
this case in the rework; I assume the new group_type classification makes
it possible to forgo it?

> @@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> }
> #endif /* CONFIG_NUMA_BALANCING */
>
> +
> +struct sg_lb_stats;
> +
> +/*
> + * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> + * @denv: The ched_domain level to look for idlest group.
> + * @group: sched_group whose statistics are to be updated.
> + * @sgs: variable to hold the statistics for this group.
> + */
> +static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> + struct sched_group *group,
> + struct sg_lb_stats *sgs,
> + struct task_struct *p)
> +{
> + int i, nr_running;
> +
> + memset(sgs, 0, sizeof(*sgs));
> +
> + for_each_cpu(i, sched_group_span(group)) {
> + struct rq *rq = cpu_rq(i);
> +
> + sgs->group_load += cpu_load(rq);
> + sgs->group_util += cpu_util_without(i, p);
> + sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> +
> + nr_running = rq->nr_running;
> + sgs->sum_nr_running += nr_running;
> +
> + /*
> + * No need to call idle_cpu() if nr_running is not 0
> + */
> + if (!nr_running && idle_cpu(i))
> + sgs->idle_cpus++;
> +
> +
> + }
> +
> + /* Check if task fits in the group */
> + if (sd->flags & SD_ASYM_CPUCAPACITY &&
> + !task_fits_capacity(p, group->sgc->max_capacity)) {
> + sgs->group_misfit_task_load = 1;
> + }
> +
> + sgs->group_capacity = group->sgc->capacity;
> +
> + sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
> +
> + /*
> + * Computing avg_load makes sense only when group is fully busy or
> + * overloaded
> + */
> + if (sgs->group_type < group_fully_busy)
> + sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> + sgs->group_capacity;
> +}
> +
> +static bool update_pick_idlest(struct sched_group *idlest,

Nit: could we name this update_sd_pick_idlest() to follow
update_sd_pick_busiest()? It's the kind of thing where if I typed
"update_sd" in gtags I'd like to see both listed, seeing as they are
*very* similar. And we already have update_sg_{wakeup, lb}_stats().

> + struct sg_lb_stats *idlest_sgs,
> + struct sched_group *group,
> + struct sg_lb_stats *sgs)
> +{
> + if (sgs->group_type < idlest_sgs->group_type)
> + return true;
> +
> + if (sgs->group_type > idlest_sgs->group_type)
> + return false;
> +
> + /*
> + * The candidate and the current idles group are the same type of
> + * group. Let check which one is the idlest according to the type.
> + */
> +
> + switch (sgs->group_type) {
> + case group_overloaded:
> + case group_fully_busy:
> + /* Select the group with lowest avg_load. */
> + if (idlest_sgs->avg_load <= sgs->avg_load)
> + return false;
> + break;
> +
> + case group_imbalanced:
> + case group_asym_packing:
> + /* Those types are not used in the slow wakeup path */
> + return false;
> +
> + case group_misfit_task:
> + /* Select group with the highest max capacity */
> + if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
> + return false;
> + break;
> +
> + case group_has_spare:
> + /* Select group with most idle CPUs */
> + if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
> + return false;
> + break;
> + }
> +
> + return true;
> +}
> +
> +/*
> + * find_idlest_group finds and returns the least busy CPU group within the
> + * domain.
> + *
> + * Assumes p is allowed on at least one CPU in sd.
> + */
> +static struct sched_group *
> +find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> + int this_cpu, int sd_flag)
> +{
> + struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
> + struct sg_lb_stats local_sgs, tmp_sgs;
> + struct sg_lb_stats *sgs;
> + unsigned long imbalance;
> + struct sg_lb_stats idlest_sgs = {
> + .avg_load = UINT_MAX,
> + .group_type = group_overloaded,
> + };
> +
> + imbalance = scale_load_down(NICE_0_LOAD) *
> + (sd->imbalance_pct-100) / 100;
> +
> + do {
> + int local_group;
> +
> + /* Skip over this group if it has no CPUs allowed */
> + if (!cpumask_intersects(sched_group_span(group),
> + p->cpus_ptr))
> + continue;
> +
> + local_group = cpumask_test_cpu(this_cpu,
> + sched_group_span(group));
> +
> + if (local_group) {
> + sgs = &local_sgs;
> + local = group;
> + } else {
> + sgs = &tmp_sgs;
> + }
> +
> + update_sg_wakeup_stats(sd, group, sgs, p);
> +
> + if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
> + idlest = group;
> + idlest_sgs = *sgs;
> + }
> +
> + } while (group = group->next, group != sd->groups);
> +
> +
> + /* There is no idlest group to push tasks to */
> + if (!idlest)
> + return NULL;
> +
> + /*
> + * If the local group is idler than the selected idlest group
> + * don't try and push the task.
> + */
> + if (local_sgs.group_type < idlest_sgs.group_type)
> + return NULL;
> +
> + /*
> + * If the local group is busier than the selected idlest group
> + * try and push the task.
> + */
> + if (local_sgs.group_type > idlest_sgs.group_type)
> + return idlest;
> +
> + switch (local_sgs.group_type) {
> + case group_overloaded:
> + case group_fully_busy:
> + /*
> + * When comparing groups across NUMA domains, it's possible for
> + * the local domain to be very lightly loaded relative to the
> + * remote domains but "imbalance" skews the comparison making
> + * remote CPUs look much more favourable. When considering
> + * cross-domain, add imbalance to the load on the remote node
> + * and consider staying local.
> + */
> +
> + if ((sd->flags & SD_NUMA) &&
> + ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
> + return NULL;
> +
> + /*
> + * If the local group is less loaded than the selected
> + * idlest group don't try and push any tasks.
> + */
> + if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
> + return NULL;
> +
> + if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
> + return NULL;
> + break;
> +
> + case group_imbalanced:
> + case group_asym_packing:
> + /* Those type are not used in the slow wakeup path */
> + return NULL;

I suppose group_asym_packing could be handled similarly to misfit, right?
i.e. make the group type group_asym_packing if

!sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)
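
Roughly something like the sketch below (untested, just to illustrate the
idea, similar to what update_sg_lb_stats() does on the load balance side; it
would need the local group to be passed down to update_sg_wakeup_stats()):

	/* in update_sg_wakeup_stats(), before group_classify() */
	if (sd->flags & SD_ASYM_PACKING && sgs->sum_h_nr_running &&
	    !sched_asym_prefer(group->asym_prefer_cpu,
			       local->asym_prefer_cpu))
		sgs->group_asym_packing = 1;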

> +
> + case group_misfit_task:
> + /* Select group with the highest max capacity */
> + if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
> + return NULL;

Got confused a bit here due to the naming; in this case 'group_misfit_task'
only means 'if placed on this group, the task will be misfit'. If the
idlest group will cause us to remain misfit, but can give us some extra
capacity, I think it makes sense to move.

> + break;
> +
> + case group_has_spare:
> + if (sd->flags & SD_NUMA) {
> +#ifdef CONFIG_NUMA_BALANCING
> + int idlest_cpu;
> + /*
> + * If there is spare capacity at NUMA, try to select
> + * the preferred node
> + */
> + if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
> + return NULL;
> +
> + idlest_cpu = cpumask_first(sched_group_span(idlest));
> + if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
> + return idlest;
> +#endif
> + /*
> + * Otherwise, keep the task on this node to stay close
> + * its wakeup source and improve locality. If there is
> + * a real need of migration, periodic load balance will
> + * take care of it.
> + */
> + if (local_sgs.idle_cpus)
> + return NULL;
> + }
> +
> + /*
> + * Select group with highest number of idle cpus. We could also
> + * compare the utilization which is more stable but it can end
> + * up that the group has less spare capacity but finally more
> + * idle cpus which means more opportunity to run task.
> + */
> + if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
> + return NULL;
> + break;
> + }
> +
> + return idlest;
> +}
> +
> /**
> * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
> * @env: The load balancing environment.
>

2019-11-22 14:39:54

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH] sched/fair: fix rework of find_idlest_group()

Hi Vincent,

I took the liberty of adding some commenting nits in my review. I
know this is already in tip, but as Mel pointed out this should be merged
with the rework when sent out to mainline (similar to the removal of
fix_small_imbalance() & the LB rework).

On 22/10/2019 17:46, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
>
> Such unfairness can be easily highlighted with the unixbench execl 1 task.
> This test continuously call execve() and the scheduler looks for the idlest
> group/CPU on which it should place the task. Because the task runs on the
> local group/CPU, the latter seems already busy even if there is nothing
> else running on it. As a result, the scheduler will always select another
> group/CPU than the local one.
>
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: Vincent Guittot <[email protected]>
> ---
>
> This recover most of the perf regression on my system and I have asked
> Rong if he can rerun the test with the patch to check that it fixes his
> system as well.
>
> kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 83 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a81c364..0ad4b21 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
> {
> return cfs_rq_load_avg(&rq->cfs);
> }
> +/*
> + * cpu_load_without - compute cpu load without any contributions from *p
> + * @cpu: the CPU which load is requested
> + * @p: the task which load should be discounted

For both @cpu and @p, s/which/whose/ (also applies to cpu_util_without()
which inspired this).

> + *
> + * The load of a CPU is defined by the load of tasks currently enqueued on that
> + * CPU as well as tasks which are currently sleeping after an execution on that
> + * CPU.
> + *
> + * This method returns the load of the specified CPU by discounting the load of
> + * the specified task, whenever the task is currently contributing to the CPU
> + * load.
> + */
> +static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
> +{
> + struct cfs_rq *cfs_rq;
> + unsigned int load;
> +
> + /* Task has no contribution or is new */
> + if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> + return cpu_load(rq);
> +
> + cfs_rq = &rq->cfs;
> + load = READ_ONCE(cfs_rq->avg.load_avg);
> +
> + /* Discount task's util from CPU's util */

s/util/load

> + lsub_positive(&load, task_h_load(p));
> +
> + return load;
> +}
>
> static unsigned long capacity_of(int cpu)
> {
> @@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> struct sg_lb_stats;
>
> /*
> + * task_running_on_cpu - return 1 if @p is running on @cpu.
> + */
> +
> +static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
^^^^^^^^^^^^
That could very well be bool, right?


> +{
> + /* Task has no contribution or is new */
> + if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> + return 0;
> +
> + if (task_on_rq_queued(p))
> + return 1;
> +
> + return 0;
> +}
> +
> +/**
> + * idle_cpu_without - would a given CPU be idle without p ?
> + * @cpu: the processor on which idleness is tested.
> + * @p: task which should be ignored.
> + *
> + * Return: 1 if the CPU would be idle. 0 otherwise.
> + */
> +static int idle_cpu_without(int cpu, struct task_struct *p)
^^^
Ditto on the boolean return values

> +{
> + struct rq *rq = cpu_rq(cpu);
> +
> + if ((rq->curr != rq->idle) && (rq->curr != p))
> + return 0;
> +
> + /*
> + * rq->nr_running can't be used but an updated version without the
> + * impact of p on cpu must be used instead. The updated nr_running
> + * be computed and tested before calling idle_cpu_without().
> + */
> +
> +#ifdef CONFIG_SMP
> + if (!llist_empty(&rq->wake_list))
> + return 0;
> +#endif
> +
> + return 1;
> +}
> +
> +/*
> * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> - * @denv: The ched_domain level to look for idlest group.
> + * @sd: The sched_domain level to look for idlest group.
> * @group: sched_group whose statistics are to be updated.
> * @sgs: variable to hold the statistics for this group.
> + * @p: The task for which we look for the idlest group/CPU.
> */
> static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> struct sched_group *group,

2019-11-25 09:20:34

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH] sched/fair: fix rework of find_idlest_group()

On Fri, 22 Nov 2019 at 15:37, Valentin Schneider
<[email protected]> wrote:
>
> Hi Vincent,
>
> I took the liberty of adding some commenting nits in my review. I
> know this is already in tip, but as Mel pointed out this should be merged
> with the rework when sent out to mainline (similar to the removal of
> fix_small_imbalance() & the LB rework).
>
> On 22/10/2019 17:46, Vincent Guittot wrote:
> > The task, for which the scheduler looks for the idlest group of CPUs, must
> > be discounted from all statistics in order to get a fair comparison
> > between groups. This includes utilization, load, nr_running and idle_cpus.
> >
> > Such unfairness can be easily highlighted with the unixbench execl 1 task.
> > This test continuously call execve() and the scheduler looks for the idlest
> > group/CPU on which it should place the task. Because the task runs on the
> > local group/CPU, the latter seems already busy even if there is nothing
> > else running on it. As a result, the scheduler will always select another
> > group/CPU than the local one.
> >
> > Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> > Reported-by: kernel test robot <[email protected]>
> > Signed-off-by: Vincent Guittot <[email protected]>
> > ---
> >
> > This recover most of the perf regression on my system and I have asked
> > Rong if he can rerun the test with the patch to check that it fixes his
> > system as well.
> >
> > kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
> > 1 file changed, 83 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index a81c364..0ad4b21 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
> > {
> > return cfs_rq_load_avg(&rq->cfs);
> > }
> > +/*
> > + * cpu_load_without - compute cpu load without any contributions from *p
> > + * @cpu: the CPU which load is requested
> > + * @p: the task which load should be discounted
>
> For both @cpu and @p, s/which/whose/ (also applies to cpu_util_without()
> which inspired this).

As you mentioned, this is inspired by cpu_util_without() and stays
consistent with it.

>
> > + *
> > + * The load of a CPU is defined by the load of tasks currently enqueued on that
> > + * CPU as well as tasks which are currently sleeping after an execution on that
> > + * CPU.
> > + *
> > + * This method returns the load of the specified CPU by discounting the load of
> > + * the specified task, whenever the task is currently contributing to the CPU
> > + * load.
> > + */
> > +static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
> > +{
> > + struct cfs_rq *cfs_rq;
> > + unsigned int load;
> > +
> > + /* Task has no contribution or is new */
> > + if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> > + return cpu_load(rq);
> > +
> > + cfs_rq = &rq->cfs;
> > + load = READ_ONCE(cfs_rq->avg.load_avg);
> > +
> > + /* Discount task's util from CPU's util */
>
> s/util/load
>
> > + lsub_positive(&load, task_h_load(p));
> > +
> > + return load;
> > +}
> >
> > static unsigned long capacity_of(int cpu)
> > {
> > @@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> > struct sg_lb_stats;
> >
> > /*
> > + * task_running_on_cpu - return 1 if @p is running on @cpu.
> > + */
> > +
> > +static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
> ^^^^^^^^^^^^
> That could very well be bool, right?
>
>
> > +{
> > + /* Task has no contribution or is new */
> > + if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> > + return 0;
> > +
> > + if (task_on_rq_queued(p))
> > + return 1;
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * idle_cpu_without - would a given CPU be idle without p ?
> > + * @cpu: the processor on which idleness is tested.
> > + * @p: task which should be ignored.
> > + *
> > + * Return: 1 if the CPU would be idle. 0 otherwise.
> > + */
> > +static int idle_cpu_without(int cpu, struct task_struct *p)
> ^^^
> Ditto on the boolean return values

This is an extension of idle_cpu(), which also returns int, and I wanted to
stay consistent with it.

So we might want to do some kind of cleanup or rewording of these interfaces
and their descriptions, but that should be done as a whole, out of the scope
of this patch, and IMO is worth a dedicated patch because it would imply
modifying other parts of the code that are not covered by this patch, like
idle_cpu() or cpu_util_without().


>
> > +{
> > + struct rq *rq = cpu_rq(cpu);
> > +
> > + if ((rq->curr != rq->idle) && (rq->curr != p))
> > + return 0;
> > +
> > + /*
> > + * rq->nr_running can't be used but an updated version without the
> > + * impact of p on cpu must be used instead. The updated nr_running
> > + * be computed and tested before calling idle_cpu_without().
> > + */
> > +
> > +#ifdef CONFIG_SMP
> > + if (!llist_empty(&rq->wake_list))
> > + return 0;
> > +#endif
> > +
> > + return 1;
> > +}
> > +
> > +/*
> > * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> > - * @denv: The ched_domain level to look for idlest group.
> > + * @sd: The sched_domain level to look for idlest group.
> > * @group: sched_group whose statistics are to be updated.
> > * @sgs: variable to hold the statistics for this group.
> > + * @p: The task for which we look for the idlest group/CPU.
> > */
> > static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> > struct sched_group *group,

2019-11-25 10:02:38

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On Fri, 22 Nov 2019 at 15:34, Valentin Schneider
<[email protected]> wrote:
>
> Hi Vincent,
>
> Apologies for the delayed review on that one. I have a few comments inline,
> otherwise for the misfit part, if at all still relevant:
>
> Reviewed-by: Valentin Schneider <[email protected]>
>
> On 18/10/2019 14:26, Vincent Guittot wrote:
> > static struct sched_group *
> > find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > + int this_cpu, int sd_flag);
> ^^^^^^^
> That parameter is now unused. AFAICT it was only used to special-case fork
> events (sd flag & SD_BALANCE_FORK). I didn't see any explicit handling of
> this case in the rework, I assume the new group type classification makes
> it possible to forgo?
>
> > @@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> > }
> > #endif /* CONFIG_NUMA_BALANCING */
> >
> > +
> > +struct sg_lb_stats;
> > +
> > +/*
> > + * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> > + * @denv: The ched_domain level to look for idlest group.
> > + * @group: sched_group whose statistics are to be updated.
> > + * @sgs: variable to hold the statistics for this group.
> > + */
> > +static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> > + struct sched_group *group,
> > + struct sg_lb_stats *sgs,
> > + struct task_struct *p)
> > +{
> > + int i, nr_running;
> > +
> > + memset(sgs, 0, sizeof(*sgs));
> > +
> > + for_each_cpu(i, sched_group_span(group)) {
> > + struct rq *rq = cpu_rq(i);
> > +
> > + sgs->group_load += cpu_load(rq);
> > + sgs->group_util += cpu_util_without(i, p);
> > + sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> > +
> > + nr_running = rq->nr_running;
> > + sgs->sum_nr_running += nr_running;
> > +
> > + /*
> > + * No need to call idle_cpu() if nr_running is not 0
> > + */
> > + if (!nr_running && idle_cpu(i))
> > + sgs->idle_cpus++;
> > +
> > +
> > + }
> > +
> > + /* Check if task fits in the group */
> > + if (sd->flags & SD_ASYM_CPUCAPACITY &&
> > + !task_fits_capacity(p, group->sgc->max_capacity)) {
> > + sgs->group_misfit_task_load = 1;
> > + }
> > +
> > + sgs->group_capacity = group->sgc->capacity;
> > +
> > + sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
> > +
> > + /*
> > + * Computing avg_load makes sense only when group is fully busy or
> > + * overloaded
> > + */
> > + if (sgs->group_type < group_fully_busy)
> > + sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> > + sgs->group_capacity;
> > +}
> > +
> > +static bool update_pick_idlest(struct sched_group *idlest,
>
> Nit: could we name this update_sd_pick_idlest() to follow
> update_sd_pick_busiest()? It's the kind of thing where if I typed
> "update_sd" in gtags I'd like to see both listed, seeing as they are
> *very* similar. And we already have update_sg_{wakeup, lb}_stats().
>
> > + struct sg_lb_stats *idlest_sgs,
> > + struct sched_group *group,
> > + struct sg_lb_stats *sgs)
> > +{
> > + if (sgs->group_type < idlest_sgs->group_type)
> > + return true;
> > +
> > + if (sgs->group_type > idlest_sgs->group_type)
> > + return false;
> > +
> > + /*
> > + * The candidate and the current idles group are the same type of
> > + * group. Let check which one is the idlest according to the type.
> > + */
> > +
> > + switch (sgs->group_type) {
> > + case group_overloaded:
> > + case group_fully_busy:
> > + /* Select the group with lowest avg_load. */
> > + if (idlest_sgs->avg_load <= sgs->avg_load)
> > + return false;
> > + break;
> > +
> > + case group_imbalanced:
> > + case group_asym_packing:
> > + /* Those types are not used in the slow wakeup path */
> > + return false;
> > +
> > + case group_misfit_task:
> > + /* Select group with the highest max capacity */
> > + if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
> > + return false;
> > + break;
> > +
> > + case group_has_spare:
> > + /* Select group with most idle CPUs */
> > + if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
> > + return false;
> > + break;
> > + }
> > +
> > + return true;
> > +}
> > +
> > +/*
> > + * find_idlest_group finds and returns the least busy CPU group within the
> > + * domain.
> > + *
> > + * Assumes p is allowed on at least one CPU in sd.
> > + */
> > +static struct sched_group *
> > +find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > + int this_cpu, int sd_flag)
> > +{
> > + struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
> > + struct sg_lb_stats local_sgs, tmp_sgs;
> > + struct sg_lb_stats *sgs;
> > + unsigned long imbalance;
> > + struct sg_lb_stats idlest_sgs = {
> > + .avg_load = UINT_MAX,
> > + .group_type = group_overloaded,
> > + };
> > +
> > + imbalance = scale_load_down(NICE_0_LOAD) *
> > + (sd->imbalance_pct-100) / 100;
> > +
> > + do {
> > + int local_group;
> > +
> > + /* Skip over this group if it has no CPUs allowed */
> > + if (!cpumask_intersects(sched_group_span(group),
> > + p->cpus_ptr))
> > + continue;
> > +
> > + local_group = cpumask_test_cpu(this_cpu,
> > + sched_group_span(group));
> > +
> > + if (local_group) {
> > + sgs = &local_sgs;
> > + local = group;
> > + } else {
> > + sgs = &tmp_sgs;
> > + }
> > +
> > + update_sg_wakeup_stats(sd, group, sgs, p);
> > +
> > + if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
> > + idlest = group;
> > + idlest_sgs = *sgs;
> > + }
> > +
> > + } while (group = group->next, group != sd->groups);
> > +
> > +
> > + /* There is no idlest group to push tasks to */
> > + if (!idlest)
> > + return NULL;
> > +
> > + /*
> > + * If the local group is idler than the selected idlest group
> > + * don't try and push the task.
> > + */
> > + if (local_sgs.group_type < idlest_sgs.group_type)
> > + return NULL;
> > +
> > + /*
> > + * If the local group is busier than the selected idlest group
> > + * try and push the task.
> > + */
> > + if (local_sgs.group_type > idlest_sgs.group_type)
> > + return idlest;
> > +
> > + switch (local_sgs.group_type) {
> > + case group_overloaded:
> > + case group_fully_busy:
> > + /*
> > + * When comparing groups across NUMA domains, it's possible for
> > + * the local domain to be very lightly loaded relative to the
> > + * remote domains but "imbalance" skews the comparison making
> > + * remote CPUs look much more favourable. When considering
> > + * cross-domain, add imbalance to the load on the remote node
> > + * and consider staying local.
> > + */
> > +
> > + if ((sd->flags & SD_NUMA) &&
> > + ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
> > + return NULL;
> > +
> > + /*
> > + * If the local group is less loaded than the selected
> > + * idlest group don't try and push any tasks.
> > + */
> > + if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
> > + return NULL;
> > +
> > + if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
> > + return NULL;
> > + break;
> > +
> > + case group_imbalanced:
> > + case group_asym_packing:
> > + /* Those types are not used in the slow wakeup path */
> > + return NULL;
>
> I suppose group_asym_packing could be handled similarly to misfit, right?
> i.e. make the group type group_asym_packing if
>
> !sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)

Unlike group_misfit_task, which was already somewhat taken into account
through the comparison of spare capacity, group_asym_packing was not
considered at all in find_idlest_group, so I prefer to stay
conservative and wait for users of asym_packing to come with a need
before adding this new mechanism.
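
For illustration only, here is a rough sketch of what Valentin's suggestion
above could look like if such a need did show up. The helper name and the
exact hook point (somewhere around update_sg_wakeup_stats()) are my own
assumptions; only sched_asym_prefer() and sg->asym_prefer_cpu are existing
scheduler pieces:

/*
 * Hypothetical helper, not part of this series: flag a candidate group as
 * group_asym_packing in the slow wakeup path when its preferred CPU loses
 * to the local group's preferred CPU.
 */
static inline bool wakeup_group_asym_packing(struct sched_domain *sd,
					     struct sched_group *group,
					     struct sched_group *local)
{
	if (!(sd->flags & SD_ASYM_PACKING))
		return false;

	/* The candidate group is less preferred than the local one. */
	return !sched_asym_prefer(group->asym_prefer_cpu,
				  local->asym_prefer_cpu);
}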

>
> > +
> > + case group_misfit_task:
> > + /* Select group with the highest max capacity */
> > + if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
> > + return NULL;
>
> Got confused a bit here due to the naming; in this case 'group_misfit_task'
> only means 'if placed on this group, the task will be misfit'. If the
> idlest group will cause us to remain misfit, but can give us some extra
> capacity, I think it makes sense to move.
>
> > + break;
> > +
> > + case group_has_spare:
> > + if (sd->flags & SD_NUMA) {
> > +#ifdef CONFIG_NUMA_BALANCING
> > + int idlest_cpu;
> > + /*
> > + * If there is spare capacity at NUMA, try to select
> > + * the preferred node
> > + */
> > + if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
> > + return NULL;
> > +
> > + idlest_cpu = cpumask_first(sched_group_span(idlest));
> > + if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
> > + return idlest;
> > +#endif
> > + /*
> > + * Otherwise, keep the task on this node to stay close to
> > + * its wakeup source and improve locality. If there is
> > + * a real need of migration, periodic load balance will
> > + * take care of it.
> > + */
> > + if (local_sgs.idle_cpus)
> > + return NULL;
> > + }
> > +
> > + /*
> > + * Select the group with the highest number of idle CPUs. We could
> > + * also compare the utilization, which is more stable, but it can
> > + * end up that a group has less spare capacity but more idle CPUs,
> > + * which means more opportunities to run tasks.
> > + */
> > + if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
> > + return NULL;
> > + break;
> > + }
> > +
> > + return idlest;
> > +}
> > +
> > /**
> > * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
> > * @env: The load balancing environment.
> >
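
For reference, the group_type ordering that update_pick_idlest() and the
group_type comparisons in find_idlest_group() rely on is the one introduced
earlier in the series; lower values mean a less busy group. An abridged
sketch of that enum (comments shortened, not a verbatim copy of the patch):

enum group_type {
	/* The group has spare capacity that can be used to run more tasks. */
	group_has_spare = 0,
	/* The group is fully used but tasks don't compete for CPU cycles. */
	group_fully_busy,
	/* One task doesn't fit the CPU's capacity and must be migrated. */
	group_misfit_task,
	/* SD_ASYM_PACKING only: a more preferred CPU is available. */
	group_asym_packing,
	/* Affinity constraints previously prevented proper balancing. */
	group_imbalanced,
	/* The CPUs are overloaded and can't provide expected CPU cycles. */
	group_overloaded
};

This is also why idlest_sgs.group_type is initialized to group_overloaded
above: any non-local candidate group examined in the loop is then an
acceptable first pick.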

2019-11-25 11:15:51

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group

On 25/11/2019 09:59, Vincent Guittot wrote:
>>> + case group_imbalanced:
>>> + case group_asym_packing:
>>> + /* Those types are not used in the slow wakeup path */
>>> + return NULL;
>>
>> I suppose group_asym_packing could be handled similarly to misfit, right?
>> i.e. make the group type group_asym_packing if
>>
>> !sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)
>
> Unlike group_misfit_task, which was already somewhat taken into account
> through the comparison of spare capacity, group_asym_packing was not
> considered at all in find_idlest_group, so I prefer to stay
> conservative and wait for users of asym_packing to come with a need
> before adding this new mechanism.
>

Right, makes sense.

2019-11-25 11:31:47

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH] sched/fair: fix rework of find_idlest_group()

On 25/11/2019 09:16, Vincent Guittot wrote:
>
> This is an extension of idle_cpu(), which also returns int, and I wanted
> to stay consistent with it.
>
> So we might want to do some kind of cleanup or rewording of these
> interfaces and their descriptions, but that should be done as a whole,
> out of the scope of this patch, and would be worth a dedicated patch
> IMO, because it would imply modifying other parts of the code that are
> not covered by this patch, such as idle_cpu or cpu_util_without.
>

Fair enough.
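
For context, idle_cpu() does return an int (1 when the CPU is idle, 0
otherwise) rather than a bool, which is the convention being referred to.
A minimal sketch of the kind of "idle_cpu() that ignores task p" helper
under discussion could look like the following; the name idle_cpu_without()
and the checks shown are illustrative assumptions, not the actual fix:

/*
 * Sketch only: report whether @cpu would be idle if @p were not accounted
 * on it, keeping idle_cpu()'s int return convention for consistency.
 */
static int idle_cpu_without(int cpu, struct task_struct *p)
{
	struct rq *rq = cpu_rq(cpu);

	/* A task other than p (and other than the idle task) is running. */
	if (rq->curr != rq->idle && rq->curr != p)
		return 0;

	/* A complete version would also check for queued or pending work. */
	return 1;
}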

2019-11-25 12:51:18

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On 18/10/2019 14:26, Vincent Guittot wrote:
> tip/sched/core w/ this patchset improvement
> schedpipe 53125 +/-0.18% 53443 +/-0.52% (+0.60%)
>
> hackbench -l (2560/#grp) -g #grp
> 1 groups 1.579 +/-29.16% 1.410 +/-13.46% (+10.70%)
> 4 groups 1.269 +/-9.69% 1.205 +/-3.27% (+5.00%)
> 8 groups 1.117 +/-1.51% 1.123 +/-1.27% (+4.57%)
> 16 groups 1.176 +/-1.76% 1.164 +/-2.42% (+1.07%)
>
> Unixbench shell8
> 1 test 1963.48 +/-0.36% 1902.88 +/-0.73% (-3.09%)
> 224 tests 2427.60 +/-0.20% 2469.80 +/-0.42% (1.74%)
>
> - large arm64 2 nodes / 224 cores system
>
> tip/sched/core w/ this patchset improvement
> schedpipe 124084 +/-1.36% 124445 +/-0.67% (+0.29%)
>
> hackbench -l (256000/#grp) -g #grp
> 1 groups 15.305 +/-1.50% 14.001 +/-1.99% (+8.52%)
> 4 groups 5.959 +/-0.70% 5.542 +/-3.76% (+6.99%)
> 16 groups 3.120 +/-1.72% 3.253 +/-0.61% (-4.92%)
> 32 groups 2.911 +/-0.88% 2.837 +/-1.16% (+2.54%)
> 64 groups 2.805 +/-1.90% 2.716 +/-1.18% (+3.17%)
> 128 groups 3.166 +/-7.71% 3.891 +/-6.77% (+5.82%)
> 256 groups 3.655 +/-10.09% 3.185 +/-6.65% (+12.87%)
>
> dbench
> 1 groups 328.176 +/-0.29% 330.217 +/-0.32% (+0.62%)
> 4 groups 930.739 +/-0.50% 957.173 +/-0.66% (+2.84%)
> 16 groups 1928.292 +/-0.36% 1978.234 +/-0.88% (+0.92%)
> 32 groups 2369.348 +/-1.72% 2454.020 +/-0.90% (+3.57%)
> 64 groups 2583.880 +/-3.39% 2618.860 +/-0.84% (+1.35%)
> 128 groups 2256.406 +/-10.67% 2392.498 +/-2.13% (+6.03%)
> 256 groups 1257.546 +/-3.81% 1674.684 +/-4.97% (+33.17%)
>
> Unixbench shell8
> 1 test 6944.16 +/-0.02 6605.82 +/-0.11 (-4.87%)
> 224 tests 13499.02 +/-0.14 13637.94 +/-0.47% (+1.03%)
> lkp reported a -10% regression on shell8 (1 test) for v3 that
> seems that is partially recovered on my platform with v4.
>

I've been busy trying to get some perf numbers on arm64 server~ish systems,
and I finally managed to get some specjbb numbers on TX2 (the 2-node, 224-CPU
version, which I suspect is the same as you used above). I only have a
limited number of iterations (5, although each runs for about 2h) because I
wanted to get some (usable) results by today; I'll spin some more during the
week.


This is based on the "critical-jOPs" metric, for which (AFAIU) higher is better:

Baseline, SMTOFF:
mean 12156.400000
std 660.640068
min 11016.000000
25% 12158.000000
50% 12464.000000
75% 12521.000000
max 12623.000000

Patches (+ find_idlest_group() fixup), SMTOFF:
mean 12487.250000
std 184.404221
min 12326.000000
25% 12349.250000
50% 12449.500000
75% 12587.500000
max 12724.000000


It looks slightly better overall (mean, stddev), but I'm annoyed by that
low iteration count. I also had some issues with my SMTON run and I only
got numbers for 2 iterations, so I'll respin that before complaining.

FWIW the branch I've been using is:

http://www.linux-arm.org/git?p=linux-vs.git;a=shortlog;h=refs/heads/mainline/load-balance/vincent_rework/tip

2020-01-03 16:40:29

by Valentin Schneider

[permalink] [raw]
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance

On 25/11/2019 12:48, Valentin Schneider wrote:
> I've been busy trying to get some perf numbers on arm64 server~ish systems,
> and I finally managed to get some specjbb numbers on TX2 (the 2-node, 224-CPU
> version, which I suspect is the same as you used above). I only have a
> limited number of iterations (5, although each runs for about 2h) because I
> wanted to get some (usable) results by today; I'll spin some more during the
> week.
>
>
> This is based on the "critical-jOPs" metric, for which (AFAIU) higher is better:
>
> Baseline, SMTOFF:
> mean 12156.400000
> std 660.640068
> min 11016.000000
> 25% 12158.000000
> 50% 12464.000000
> 75% 12521.000000
> max 12623.000000
>
> Patches (+ find_idlest_group() fixup), SMTOFF:
> mean 12487.250000
> std 184.404221
> min 12326.000000
> 25% 12349.250000
> 50% 12449.500000
> 75% 12587.500000
> max 12724.000000
>
>
> It looks slightly better overall (mean, stddev), but I'm annoyed by that
> low iteration count. I also had some issues with my SMTON run and I only
> got numbers for 2 iterations, so I'll respin that before complaining.
>
> FWIW the branch I've been using is:
>
> http://www.linux-arm.org/git?p=linux-vs.git;a=shortlog;h=refs/heads/mainline/load-balance/vincent_rework/tip
>

Forgot about that; I got some more results in the meantime, still specjbb
and still on ThunderX2:

| kernel | count | mean | std | min | 50% | 75% | 99% | max |
|-----------------+-------+--------------+------------+---------+---------+----------+----------+---------|
| -REWORK SMT-ON | 15 | 19961.133333 | 613.406515 | 19058.0 | 20006.0 | 20427.50 | 20903.42 | 20924.0 |
| +REWORK SMT-ON | 12 | 19265.666667 | 563.959917 | 18380.0 | 19133.5 | 19699.25 | 20024.90 | 20026.0 |
| -REWORK SMT-OFF | 25 | 12397.000000 | 425.763628 | 11016.0 | 12377.0 | 12623.00 | 13137.20 | 13154.0 |
| +REWORK SMT-OFF | 20 | 12436.700000 | 414.130554 | 11313.0 | 12505.0 | 12687.00 | 12981.44 | 12986.0 |

SMT-ON delta: -3.48%
SMT-OFF delta: +0.32%
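
For clarity, the deltas above appear to be the relative change of the
+REWORK mean against the -REWORK mean in the table:

  SMT-ON:  (19265.67 - 19961.13) / 19961.13 ≈ -3.48%
  SMT-OFF: (12436.70 - 12397.00) / 12397.00 ≈ +0.32%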


This is consistent with some earlier runs (where I had a few issues
getting enough iterations): SMT-OFF performs a tad better, and SMT-ON
performs slightly worse.

Looking at the 99th percentile, it seems we're a bit worse compared to
the previous best cases, but looking at the slightly reduced stddev it also
seems that we are somewhat more consistent.