2023-06-27 14:56:50

by Waiman Long

Subject: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

v4:
- [v3] https://lore.kernel.org/lkml/[email protected]/
- Fix compilation problem reported by kernel test robot.

v3:
- [v2] https://lore.kernel.org/lkml/[email protected]/
- Change the new control file from root-only "cpuset.cpus.reserve" to
non-root "cpuset.cpus.exclusive" which lists the set of exclusive
CPUs distributed down the hierarchy.
- Add a patch to restrict boot-time isolated CPUs to isolated
partitions only.
- Update the test_cpuset_prs.sh test script and documentation
accordingly.

This patch series introduces a new cpuset control file
"cpuset.cpus.exclusive" which must be a subset of "cpuset.cpus"
and the parent's "cpuset.cpus.exclusive". This control file lists
the exclusive CPUs to be distributed down the hierarchy. Any one
of the exclusive CPUs can be distributed to at most one child
cpuset. Unlike "cpuset.cpus", invalid input to "cpuset.cpus.exclusive"
will be rejected with an error. This new control file has no effect on
the behavior of the cpuset until it turns into a partition root. At that
point, its effective CPUs will be set to its exclusive CPUs unless some
of them are offline.
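
As a quick illustration of the subset rule, here is a minimal sketch
(the cgroup name "A" is made up; it assumes a cgroup v2 mount at
/sys/fs/cgroup with the cpuset controller enabled, "A" sitting
directly under the cgroup root, and CPUs 0-7 present):

    cd /sys/fs/cgroup
    mkdir A
    echo "0-7" > A/cpuset.cpus
    echo "4-5" > A/cpuset.cpus.exclusive  # accepted: subset of A's cpus
    echo "8-9" > A/cpuset.cpus.exclusive  # rejected with a write error:
                                          # not a subset of A's cpus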

This patch series also introduces a new category of cpuset partition
called remote partitions. The existing partition category where the
partition roots have to be clustered around the root cgroup in a
hierarchical way is now referred to as local partitions.

A remote partition can be formed far from the root cgroup with no
partition root parent. While local partitions can be created without
touching "cpuset.cpus.exclusive", since it is set automatically when
a cpuset becomes a local partition root, properly set
"cpuset.cpus.exclusive" values down the hierarchy are required to
create a remote partition.
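
For example, a remote partition two levels below the root may be set
up as follows (a minimal sketch with made-up cgroup names, assuming a
cgroup v2 mount at /sys/fs/cgroup and the cpuset controller enabled
at each level):

    cd /sys/fs/cgroup
    mkdir -p containers/pod1
    echo "2-3" > containers/cpuset.cpus
    echo "2-3" > containers/cpuset.cpus.exclusive
    echo "2-3" > containers/pod1/cpuset.cpus
    echo "2-3" > containers/pod1/cpuset.cpus.exclusive
    echo isolated > containers/pod1/cpuset.cpus.partition
    cat containers/pod1/cpuset.cpus.partition   # expect "isolated"

Since "containers" is not itself a partition root, "pod1" becomes a
remote partition.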

Both scheduling and isolated partitions can be formed in a remote
partition. A local partition can be created under a remote partition.
A remote partition, however, cannot be formed under a local partition
for now.
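
Continuing the sketch above, a local partition can then be carved out
of the remote partition "pod1" (again with made-up names):

    mkdir containers/pod1/c1
    echo "3" > containers/pod1/c1/cpuset.cpus
    echo isolated > containers/pod1/c1/cpuset.cpus.partition

"c1" is a local partition because its parent "pod1" is already a
valid partition root, so its "cpuset.cpus.exclusive" is filled in
automatically.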

Modern container orchestration tools like Kubernetes use the cgroup
hierarchy to manage different containers, relying on other middleware
like systemd to help manage it. If a container needs to use isolated
CPUs, it is hard to get those with local partitions as that will
require the administrative parent cgroup to be a partition root too,
which tools like systemd may not be ready to manage.

With this patch series, we allow the creation of remote partitions
far from the root. The container management tool can manage the
"cpuset.cpus.exclusive" file without impacting the other cpuset
files that are managed by other middleware. Of course, invalid
"cpuset.cpus.exclusive" values will be rejected, and changes to
"cpuset.cpus" can affect the value of "cpuset.cpus.exclusive" due to
the requirement that the latter has to be a subset of the former.

Waiman Long (9):
cgroup/cpuset: Inherit parent's load balance state in v2
cgroup/cpuset: Extract out CS_CPU_EXCLUSIVE & CS_SCHED_LOAD_BALANCE
handling
cgroup/cpuset: Improve temporary cpumasks handling
cgroup/cpuset: Allow suppression of sched domain rebuild in
update_cpumasks_hier()
cgroup/cpuset: Add cpuset.cpus.exclusive for v2
cgroup/cpuset: Introduce remote partition
cgroup/cpuset: Check partition conflict with housekeeping setup
cgroup/cpuset: Documentation update for partition
cgroup/cpuset: Extend test_cpuset_prs.sh to test remote partition

Documentation/admin-guide/cgroup-v2.rst | 100 +-
kernel/cgroup/cpuset.c | 1347 ++++++++++++-----
.../selftests/cgroup/test_cpuset_prs.sh | 398 +++--
3 files changed, 1291 insertions(+), 554 deletions(-)

--
2.31.1



2023-06-27 14:58:16

by Waiman Long

Subject: [PATCH v4 5/9] cgroup/cpuset: Add cpuset.cpus.exclusive for v2

The creation of a cpuset partition means dedicating a set of exclusive
CPUs to be used by a particular partition only. These exclusive CPUs
will not be used by any cpusets outside of that partition.

To enable more flexibility in creating partitions, we need a way to
distribute exclusive CPUs that can be used in new partitions. Currently,
we have a subparts_cpus cpumask in struct cpuset that tracks only
the exclusive CPUs used by all the sub-partitions underneath a given
cpuset. This patch reworks how exclusive CPUs are tracked. The
subparts_cpus is now renamed to exclusive_cpus which tracks the exclusive
CPUs allocated to a partition root including those that are further
distributed down to sub-partitions underneath it. IOW, it also includes
the exclusive CPUs used by the current partition root.

The renamed exclusive_cpus is now exposed via a new read-only
"cpuset.cpus.exclusive" control file. The new exclusive_cpus cpumask
will be set to cpus_allowed when a cpuset becomes a partition root
and cleared if it is not a valid partition root.
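
For example (a rough sketch with a made-up cgroup "P" directly under
the cgroup root), the new file simply mirrors the partition state
after this patch:

    echo "4-5" > P/cpuset.cpus
    echo root > P/cpuset.cpus.partition
    cat P/cpuset.cpus.exclusive   # shows "4-5", copied from cpus_allowed
    echo member > P/cpuset.cpus.partition
    cat P/cpuset.cpus.exclusive   # cleared again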

In the next patch, we will enable write to this new control file and
allow it to differ from cpus_allowed. However, it must remain a subset
of cpus_allowed.

A parent cpuset can distribute an exclusive CPU to at most one of its
children.
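
Once the file is made writable by the next patch, this rule can be
illustrated with a sketch like the following (made-up cgroup names,
with "c1" and "c2" being children of partition root "P"):

    echo "1-2" > P/c1/cpuset.cpus.exclusive   # CPUs 1-2 granted to c1
    echo "2-3" > P/c2/cpuset.cpus.exclusive   # rejected with a write
                                              # error: CPU 2 is already
                                              # granted to sibling c1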

Signed-off-by: Waiman Long <[email protected]>
---
kernel/cgroup/cpuset.c | 733 ++++++++++++++++++++++++-----------------
1 file changed, 428 insertions(+), 305 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 64f9e305b3ab..9f2ec8394736 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -78,7 +78,7 @@ enum prs_errcode {
};

static const char * const perr_strings[] = {
- [PERR_INVCPUS] = "Invalid cpu list in cpuset.cpus",
+ [PERR_INVCPUS] = "Invalid cpu list in cpuset.cpus.exclusive",
[PERR_INVPARENT] = "Parent is an invalid partition root",
[PERR_NOTPART] = "Parent is not a partition root",
[PERR_NOTEXCL] = "Cpu list in cpuset.cpus not exclusive",
@@ -121,14 +121,18 @@ struct cpuset {
nodemask_t effective_mems;

/*
- * CPUs allocated to child sub-partitions (default hierarchy only)
- * - CPUs granted by the parent = effective_cpus U subparts_cpus
- * - effective_cpus and subparts_cpus are mutually exclusive.
+ * Exclusive CPUs dedicated to current cgroup (default hierarchy only)
*
- * effective_cpus contains only onlined CPUs, but subparts_cpus
- * may have offlined ones.
+ * These exclusive CPUs must be a subset of cpus_allowed. A parent
+ * cgroup can only grant exclusive CPUs to one of its children.
+ *
+ * When the cgroup becomes a valid partition root, exclusive_cpus
+ * defaults to cpus_allowed if not set. The effective_cpus of a valid
+ * partition root comes solely from its exclusive_cpus and some of the
+ * exclusive_cpus may be distributed to sub-partitions below & hence
+ * excluded from its effective_cpus.
*/
- cpumask_var_t subparts_cpus;
+ cpumask_var_t exclusive_cpus;

/*
* This is old Memory Nodes tasks took on.
@@ -156,8 +160,8 @@ struct cpuset {
/* for custom sched domain */
int relax_domain_level;

- /* number of CPUs in subparts_cpus */
- int nr_subparts_cpus;
+ /* number of valid sub-partitions */
+ int nr_subparts;

/* partition root state */
int partition_root_state;
@@ -185,6 +189,11 @@ struct cpuset {
struct cgroup_file partition_file;
};

+/*
+ * Exclusive CPUs distributed out to sub-partitions of top_cpuset
+ */
+static cpumask_var_t subpartitions_cpus;
+
/*
* Partition root states:
*
@@ -312,7 +321,7 @@ static inline int is_partition_invalid(const struct cpuset *cs)
*/
static inline void make_partition_invalid(struct cpuset *cs)
{
- if (is_partition_valid(cs))
+ if (cs->partition_root_state > 0)
cs->partition_root_state = -cs->partition_root_state;
}

@@ -469,7 +478,7 @@ static inline bool partition_is_populated(struct cpuset *cs,

if (cs->css.cgroup->nr_populated_csets)
return true;
- if (!excluded_child && !cs->nr_subparts_cpus)
+ if (!excluded_child && !cs->nr_subparts)
return cgroup_is_populated(cs->css.cgroup);

rcu_read_lock();
@@ -601,7 +610,7 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
if (cs) {
pmask1 = &cs->cpus_allowed;
pmask2 = &cs->effective_cpus;
- pmask3 = &cs->subparts_cpus;
+ pmask3 = &cs->exclusive_cpus;
} else {
pmask1 = &tmp->new_cpus;
pmask2 = &tmp->addmask;
@@ -636,7 +645,7 @@ static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
if (cs) {
free_cpumask_var(cs->cpus_allowed);
free_cpumask_var(cs->effective_cpus);
- free_cpumask_var(cs->subparts_cpus);
+ free_cpumask_var(cs->exclusive_cpus);
}
if (tmp) {
free_cpumask_var(tmp->new_cpus);
@@ -664,6 +673,7 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)

cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
cpumask_copy(trial->effective_cpus, cs->effective_cpus);
+ cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
return trial;
}

@@ -677,6 +687,25 @@ static inline void free_cpuset(struct cpuset *cs)
kfree(cs);
}

+/*
+ * cpu_exclusive_check() - check if two cpusets are exclusive
+ *
+ * Return 0 if exclusive, -EINVAL if not
+ */
+static inline int cpu_exclusive_check(struct cpuset *cs1, struct cpuset *cs2)
+{
+ struct cpumask *cpus1, *cpus2;
+
+ cpus1 = cpumask_empty(cs1->exclusive_cpus)
+ ? cs1->cpus_allowed : cs1->exclusive_cpus;
+ cpus2 = cpumask_empty(cs2->exclusive_cpus)
+ ? cs2->cpus_allowed : cs2->exclusive_cpus;
+
+ if (cpumask_intersects(cpus1, cpus2))
+ return -EINVAL;
+ return 0;
+}
+
/*
* validate_change_legacy() - Validate conditions specific to legacy (v1)
* behavior.
@@ -776,9 +805,10 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
ret = -EINVAL;
cpuset_for_each_child(c, css, par) {
if ((is_cpu_exclusive(trial) || is_cpu_exclusive(c)) &&
- c != cur &&
- cpumask_intersects(trial->cpus_allowed, c->cpus_allowed))
- goto out;
+ c != cur) {
+ if (cpu_exclusive_check(trial, c))
+ goto out;
+ }
if ((is_mem_exclusive(trial) || is_mem_exclusive(c)) &&
c != cur &&
nodes_intersects(trial->mems_allowed, c->mems_allowed))
@@ -908,7 +938,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
csa = NULL;

/* Special case for the 99% of systems with one, full, sched domain */
- if (root_load_balance && !top_cpuset.nr_subparts_cpus) {
+ if (root_load_balance && !top_cpuset.nr_subparts) {
ndoms = 1;
doms = alloc_sched_domains(ndoms);
if (!doms)
@@ -1159,7 +1189,7 @@ static void rebuild_sched_domains_locked(void)
* should be the same as the active CPUs, so checking only top_cpuset
* is enough to detect racing CPU offlines.
*/
- if (!top_cpuset.nr_subparts_cpus &&
+ if (cpumask_empty(subpartitions_cpus) &&
!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
return;

@@ -1168,7 +1198,7 @@ static void rebuild_sched_domains_locked(void)
* root should be only a subset of the active CPUs. Since a CPU in any
* partition root could be offlined, all must be checked.
*/
- if (top_cpuset.nr_subparts_cpus) {
+ if (top_cpuset.nr_subparts) {
rcu_read_lock();
cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
if (!is_partition_valid(cs)) {
@@ -1232,7 +1262,7 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
*/
if ((task->flags & PF_KTHREAD) && kthread_is_per_cpu(task))
continue;
- cpumask_andnot(new_cpus, possible_mask, cs->subparts_cpus);
+ cpumask_andnot(new_cpus, possible_mask, cs->exclusive_cpus);
} else {
cpumask_and(new_cpus, possible_mask, cs->effective_cpus);
}
@@ -1247,32 +1277,22 @@ static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus)
* @cs: the cpuset the need to recompute the new effective_cpus mask
* @parent: the parent cpuset
*
- * If the parent has subpartition CPUs, include them in the list of
- * allowable CPUs in computing the new effective_cpus mask. Since offlined
- * CPUs are not removed from subparts_cpus, we have to use cpu_active_mask
- * to mask those out.
+ * The result is valid only if the given cpuset isn't a partition root.
*/
static void compute_effective_cpumask(struct cpumask *new_cpus,
struct cpuset *cs, struct cpuset *parent)
{
- if (parent->nr_subparts_cpus && is_partition_valid(cs)) {
- cpumask_or(new_cpus, parent->effective_cpus,
- parent->subparts_cpus);
- cpumask_and(new_cpus, new_cpus, cs->cpus_allowed);
- cpumask_and(new_cpus, new_cpus, cpu_active_mask);
- } else {
- cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus);
- }
+ cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus);
}

/*
- * Commands for update_parent_subparts_cpumask
+ * Commands for update_parent_effective_cpumask
*/
-enum subparts_cmd {
- partcmd_enable, /* Enable partition root */
- partcmd_disable, /* Disable partition root */
- partcmd_update, /* Update parent's subparts_cpus */
- partcmd_invalidate, /* Make partition invalid */
+enum partition_cmd {
+ partcmd_enable, /* Enable partition root */
+ partcmd_disable, /* Disable partition root */
+ partcmd_update, /* Update parent's effective_cpus */
+ partcmd_invalidate, /* Make partition invalid */
};

static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
@@ -1323,8 +1343,39 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
rebuild_sched_domains_locked();
}

+/*
+ * tasks_nocpu_error - Return true if tasks will have no effective_cpus
+ */
+static bool tasks_nocpu_error(struct cpuset *parent, struct cpuset *cs,
+ struct cpumask *xcpus)
+{
+ /*
+ * A populated partition (cs or parent) can't have empty effective_cpus
+ */
+ return (cpumask_subset(parent->effective_cpus, xcpus) &&
+ partition_is_populated(parent, cs)) ||
+ (!cpumask_intersects(xcpus, cpu_active_mask) &&
+ partition_is_populated(cs, NULL));
+}
+
+/*
+ * setup_exclusive_cpus - setup exclusive_cpus if not set yet
+ */
+static void setup_exclusive_cpus(struct cpuset *cs, struct cpuset *parent)
+{
+ if (!cpumask_empty(cs->exclusive_cpus))
+ return;
+
+ if (!parent)
+ parent = parent_cs(cs);
+ spin_lock_irq(&callback_lock);
+ cpumask_and(cs->exclusive_cpus,
+ cs->cpus_allowed, parent->exclusive_cpus);
+ spin_unlock_irq(&callback_lock);
+}
+
/**
- * update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset
+ * update_parent_effective_cpumask - update effective_cpus mask of parent cpuset
* @cs: The cpuset that requests change in partition root state
* @cmd: Partition root state change command
* @newmask: Optional new cpumask for partcmd_update
@@ -1332,21 +1383,20 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
* Return: 0 or a partition root state error code
*
* For partcmd_enable, the cpuset is being transformed from a non-partition
- * root to a partition root. The cpus_allowed mask of the given cpuset will
- * be put into parent's subparts_cpus and taken away from parent's
+ * root to a partition root. The exclusive_cpus (cpus_allowed if exclusive_cpus
+ * not set) mask of the given cpuset will be taken away from parent's
* effective_cpus. The function will return 0 if all the CPUs listed in
- * cpus_allowed can be granted or an error code will be returned.
+ * exclusive_cpus can be granted or an error code will be returned.
*
* For partcmd_disable, the cpuset is being transformed from a partition
- * root back to a non-partition root. Any CPUs in cpus_allowed that are in
- * parent's subparts_cpus will be taken away from that cpumask and put back
- * into parent's effective_cpus. 0 will always be returned.
+ * root back to a non-partition root. Any CPUs in exclusive_cpus will be
+ * given back to parent's effective_cpus. 0 will always be returned.
*
* For partcmd_update, if the optional newmask is specified, the cpu list is
- * to be changed from cpus_allowed to newmask. Otherwise, cpus_allowed is
+ * to be changed from exclusive_cpus to newmask. Otherwise, exclusive_cpus is
* assumed to remain the same. The cpuset should either be a valid or invalid
* partition root. The partition root state may change from valid to invalid
- * or vice versa. An error code will only be returned if transitioning from
+ * or vice versa. An error code will be returned if transitioning from
* invalid to valid violates the exclusivity rule.
*
* For partcmd_invalidate, the current partition will be made invalid.
@@ -1361,18 +1411,47 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
* check for error and so partition_root_state and prs_error will be updated
* directly.
*/
-static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
- struct cpumask *newmask,
- struct tmpmasks *tmp)
+static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+ struct cpumask *newmask,
+ struct tmpmasks *tmp)
{
struct cpuset *parent = parent_cs(cs);
- int adding; /* Moving cpus from effective_cpus to subparts_cpus */
- int deleting; /* Moving cpus from subparts_cpus to effective_cpus */
+ int adding; /* Adding cpus to parent's effective_cpus */
+ int deleting; /* Deleting cpus from parent's effective_cpus */
int old_prs, new_prs;
int part_error = PERR_NONE; /* Partition error? */
+ int subparts_delta = 0;
+ struct cpumask *xcpus; /* cs exclusive_cpus */
+ bool nocpu;

lockdep_assert_held(&cpuset_mutex);

+ /*
+ * new_prs will only be changed for the partcmd_update and
+ * partcmd_invalidate commands.
+ */
+ adding = deleting = false;
+ old_prs = new_prs = cs->partition_root_state;
+ xcpus = !cpumask_empty(cs->exclusive_cpus)
+ ? cs->exclusive_cpus : cs->cpus_allowed;
+
+ if (cmd == partcmd_invalidate) {
+ if (is_prs_invalid(old_prs))
+ return 0;
+
+ /*
+ * Make the current partition invalid.
+ */
+ if (is_partition_valid(parent))
+ adding = cpumask_and(tmp->addmask,
+ xcpus, parent->exclusive_cpus);
+ if (old_prs > 0) {
+ new_prs = -old_prs;
+ subparts_delta--;
+ }
+ goto write_error;
+ }
+
/*
* The parent must be a partition root.
* The new cpumask, if present, or the current cpus_allowed must
@@ -1385,124 +1464,122 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
if (!newmask && cpumask_empty(cs->cpus_allowed))
return PERR_CPUSEMPTY;

- /*
- * new_prs will only be changed for the partcmd_update and
- * partcmd_invalidate commands.
- */
- adding = deleting = false;
- old_prs = new_prs = cs->partition_root_state;
+ nocpu = tasks_nocpu_error(parent, cs, xcpus);
+
if (cmd == partcmd_enable) {
/*
* Enabling partition root is not allowed if cpus_allowed
* doesn't overlap parent's cpus_allowed.
*/
- if (!cpumask_intersects(cs->cpus_allowed, parent->cpus_allowed))
+ if (!cpumask_intersects(xcpus, parent->exclusive_cpus))
return PERR_INVCPUS;

/*
* A parent can be left with no CPU as long as there is no
* task directly associated with the parent partition.
*/
- if (cpumask_subset(parent->effective_cpus, cs->cpus_allowed) &&
- partition_is_populated(parent, cs))
+ if (nocpu)
return PERR_NOCPUS;

- cpumask_copy(tmp->addmask, cs->cpus_allowed);
- adding = true;
+ cpumask_copy(tmp->delmask, xcpus);
+ deleting = true;
+ subparts_delta++;
} else if (cmd == partcmd_disable) {
/*
- * Need to remove cpus from parent's subparts_cpus for valid
- * partition root.
+ * May need to add cpus to parent's effective_cpus for
+ * valid partition root.
*/
- deleting = !is_prs_invalid(old_prs) &&
- cpumask_and(tmp->delmask, cs->cpus_allowed,
- parent->subparts_cpus);
- } else if (cmd == partcmd_invalidate) {
- if (is_prs_invalid(old_prs))
- return 0;
-
+ adding = !is_prs_invalid(old_prs) &&
+ cpumask_and(tmp->addmask, xcpus, parent->exclusive_cpus);
+ if (adding)
+ subparts_delta--;
+ } else if (newmask) {
/*
- * Make the current partition invalid. It is assumed that
- * invalidation is caused by violating cpu exclusivity rule.
+ * Empty cpumask is not allowed
*/
- deleting = cpumask_and(tmp->delmask, cs->cpus_allowed,
- parent->subparts_cpus);
- if (old_prs > 0) {
- new_prs = -old_prs;
- part_error = PERR_NOTEXCL;
+ if (cpumask_empty(newmask)) {
+ part_error = PERR_CPUSEMPTY;
+ goto write_error;
}
- } else if (newmask) {
+
/*
* partcmd_update with newmask:
*
- * Compute add/delete mask to/from subparts_cpus
+ * Compute add/delete mask to/from effective_cpus
*
- * delmask = cpus_allowed & ~newmask & parent->subparts_cpus
- * addmask = newmask & parent->cpus_allowed
- * & ~parent->subparts_cpus
+ * addmask = exclusive_cpus & ~newmask & parent->exclusive_cpus
+ * delmask = newmask & ~cs->exclusive_cpus
+ * & parent->exclusive_cpus
*/
- cpumask_andnot(tmp->delmask, cs->cpus_allowed, newmask);
- deleting = cpumask_and(tmp->delmask, tmp->delmask,
- parent->subparts_cpus);
+ cpumask_andnot(tmp->addmask, xcpus, newmask);
+ adding = cpumask_and(tmp->addmask, tmp->addmask,
+ parent->exclusive_cpus);

- cpumask_and(tmp->addmask, newmask, parent->cpus_allowed);
- adding = cpumask_andnot(tmp->addmask, tmp->addmask,
- parent->subparts_cpus);
- /*
- * Empty cpumask is not allowed
- */
- if (cpumask_empty(newmask)) {
- part_error = PERR_CPUSEMPTY;
+ cpumask_andnot(tmp->delmask, newmask, xcpus);
+ deleting = cpumask_and(tmp->delmask, tmp->delmask,
+ parent->exclusive_cpus);
/*
* Make partition invalid if parent's effective_cpus could
* become empty and there are tasks in the parent.
*/
- } else if (adding &&
- cpumask_subset(parent->effective_cpus, tmp->addmask) &&
- !cpumask_intersects(tmp->delmask, cpu_active_mask) &&
- partition_is_populated(parent, cs)) {
+ if (nocpu && (!adding ||
+ !cpumask_intersects(tmp->addmask, cpu_active_mask))) {
part_error = PERR_NOCPUS;
- adding = false;
- deleting = cpumask_and(tmp->delmask, cs->cpus_allowed,
- parent->subparts_cpus);
+ deleting = false;
+ adding = cpumask_and(tmp->addmask,
+ xcpus, parent->exclusive_cpus);
}
} else {
/*
- * partcmd_update w/o newmask:
+ * partcmd_update w/o newmask
+ *
+ * delmask = exclusive_cpus & parent->effective_cpus
+ *
+ * This can be called from:
+ * 1) update_cpumasks_hier()
+ * 2) cpuset_hotplug_update_tasks()
*
- * delmask = cpus_allowed & parent->subparts_cpus
- * addmask = cpus_allowed & parent->cpus_allowed
- * & ~parent->subparts_cpus
+ * Check to see if it can be transitioned from valid to
+ * invalid partition or vice versa.
*
- * This gets invoked either due to a hotplug event or from
- * update_cpumasks_hier(). This can cause the state of a
- * partition root to transition from valid to invalid or vice
- * versa. So we still need to compute the addmask and delmask.
-
- * A partition error happens when:
- * 1) Cpuset is valid partition, but parent does not distribute
- * out any CPUs.
- * 2) Parent has tasks and all its effective CPUs will have
- * to be distributed out.
+ * A partition error happens when parent has tasks and all
+ * its effective CPUs will have to be distributed out.
*/
- cpumask_and(tmp->addmask, cs->cpus_allowed,
- parent->cpus_allowed);
- adding = cpumask_andnot(tmp->addmask, tmp->addmask,
- parent->subparts_cpus);
-
- if ((is_partition_valid(cs) && !parent->nr_subparts_cpus) ||
- (adding &&
- cpumask_subset(parent->effective_cpus, tmp->addmask) &&
- partition_is_populated(parent, cs))) {
+ WARN_ON_ONCE(!is_partition_valid(parent));
+ if (nocpu) {
part_error = PERR_NOCPUS;
- adding = false;
- }
+ if (is_partition_valid(cs))
+ adding = cpumask_and(tmp->addmask,
+ xcpus, parent->exclusive_cpus);
+ } else if (is_partition_invalid(cs) &&
+ cpumask_subset(xcpus, parent->exclusive_cpus)) {
+ struct cgroup_subsys_state *css;
+ struct cpuset *child;
+ bool exclusive = true;

- if (part_error && is_partition_valid(cs) &&
- parent->nr_subparts_cpus)
- deleting = cpumask_and(tmp->delmask, cs->cpus_allowed,
- parent->subparts_cpus);
+ /*
+ * Converting an invalid partition to a valid one has to
+ * pass the cpu exclusivity test.
+ */
+ rcu_read_lock();
+ cpuset_for_each_child(child, css, parent) {
+ if (child == cs)
+ continue;
+ if (cpu_exclusive_check(cs, child)) {
+ exclusive = false;
+ break;
+ }
+ }
+ rcu_read_unlock();
+ if (exclusive)
+ deleting = cpumask_and(tmp->delmask,
+ xcpus, parent->effective_cpus);
+ else
+ part_error = PERR_NOTEXCL;
+ }
}
+
+write_error:
if (part_error)
WRITE_ONCE(cs->prs_err, part_error);

@@ -1514,13 +1591,17 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
switch (cs->partition_root_state) {
case PRS_ROOT:
case PRS_ISOLATED:
- if (part_error)
+ if (part_error) {
new_prs = -old_prs;
+ subparts_delta--;
+ }
break;
case PRS_INVALID_ROOT:
case PRS_INVALID_ISOLATED:
- if (!part_error)
+ if (!part_error) {
new_prs = -old_prs;
+ subparts_delta++;
+ }
break;
}
}
@@ -1540,32 +1621,43 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
}

/*
- * Change the parent's subparts_cpus.
+ * Change the parent's effective_cpus & subpartitions_cpus (top
+ * cpuset only).
+ *
* Newly added CPUs will be removed from effective_cpus and
* newly deleted ones will be added back to effective_cpus.
*/
spin_lock_irq(&callback_lock);
if (adding) {
- cpumask_or(parent->subparts_cpus,
- parent->subparts_cpus, tmp->addmask);
- cpumask_andnot(parent->effective_cpus,
- parent->effective_cpus, tmp->addmask);
- }
- if (deleting) {
- cpumask_andnot(parent->subparts_cpus,
- parent->subparts_cpus, tmp->delmask);
+ if (parent == &top_cpuset)
+ cpumask_andnot(subpartitions_cpus,
+ subpartitions_cpus, tmp->addmask);
/*
- * Some of the CPUs in subparts_cpus might have been offlined.
+ * Some of the CPUs in exclusive_cpus might have been offlined.
*/
- cpumask_and(tmp->delmask, tmp->delmask, cpu_active_mask);
cpumask_or(parent->effective_cpus,
- parent->effective_cpus, tmp->delmask);
+ parent->effective_cpus, tmp->addmask);
+ cpumask_and(parent->effective_cpus,
+ parent->effective_cpus, cpu_active_mask);
+ }
+ if (deleting) {
+ if (parent == &top_cpuset)
+ cpumask_or(subpartitions_cpus,
+ subpartitions_cpus, tmp->delmask);
+ cpumask_andnot(parent->effective_cpus,
+ parent->effective_cpus, tmp->delmask);
}

- parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus);
+ if (is_partition_valid(parent)) {
+ parent->nr_subparts += subparts_delta;
+ WARN_ON_ONCE(parent->nr_subparts < 0);
+ }

- if (old_prs != new_prs)
+ if (old_prs != new_prs) {
cs->partition_root_state = new_prs;
+ if (new_prs <= 0)
+ cs->nr_subparts = 0;
+ }

spin_unlock_irq(&callback_lock);

@@ -1590,6 +1682,71 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
return 0;
}

+/**
+ * compute_partition_effective_cpumask - compute effective_cpus for partition
+ * @cs: partition root cpuset
+ * @new_ecpus: previously computed effective_cpus to be updated
+ *
+ * Compute the effective_cpus of a partition root by scanning exclusive_cpus
+ * of child partition roots and excluding their exclusive_cpus.
+ *
+ * This has the side effect of invalidating valid child partition roots,
+ * if necessary. Since it is called from either cpuset_hotplug_update_tasks()
+ * or update_cpumasks_hier() where parent and children are modified
+ * successively, we don't need to call update_parent_effective_cpumask()
+ * and the child's effective_cpus will be updated in later iterations.
+ *
+ * Note that rcu_read_lock() is assumed to be held.
+ */
+static void compute_partition_effective_cpumask(struct cpuset *cs,
+ struct cpumask *new_ecpus)
+{
+ struct cgroup_subsys_state *css;
+ struct cpuset *child;
+ bool populated = partition_is_populated(cs, NULL);
+
+ /*
+ * Check child partition roots to see if they should be
+ * invalidated when
+ * 1) child exclusive_cpus not a subset of new
+ * exclusive_cpus
+ * 2) All the effective_cpus will be used up and cs
+ * has tasks
+ */
+ cpumask_and(new_ecpus, cs->exclusive_cpus, cpu_active_mask);
+ rcu_read_lock();
+ cpuset_for_each_child(child, css, cs) {
+ if (!is_partition_valid(child))
+ continue;
+
+ child->prs_err = 0;
+ if (!cpumask_subset(child->exclusive_cpus,
+ cs->exclusive_cpus))
+ child->prs_err = PERR_INVCPUS;
+ else if (populated &&
+ cpumask_subset(new_ecpus, child->exclusive_cpus))
+ child->prs_err = PERR_NOCPUS;
+
+ if (child->prs_err) {
+ int old_prs = child->partition_root_state;
+
+ /*
+ * Invalidate child partition
+ */
+ spin_lock_irq(&callback_lock);
+ make_partition_invalid(child);
+ cs->nr_subparts--;
+ child->nr_subparts = 0;
+ spin_unlock_irq(&callback_lock);
+ notify_partition_change(child, old_prs);
+ continue;
+ }
+ cpumask_andnot(new_ecpus, new_ecpus,
+ child->exclusive_cpus);
+ }
+ rcu_read_unlock();
+}
+
/*
* update_cpumasks_hier() flags
*/
@@ -1624,6 +1781,19 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,

compute_effective_cpumask(tmp->new_cpus, cp, parent);

+ if (is_partition_valid(parent) && is_partition_valid(cp))
+ compute_partition_effective_cpumask(cp, tmp->new_cpus);
+
+ /*
+ * A partition with no effective_cpus is allowed as long as
+ * there is no task associated with it. Call
+ * update_parent_effective_cpumask() to check it.
+ */
+ if (is_partition_valid(cp) && cpumask_empty(tmp->new_cpus)) {
+ update_parent = true;
+ goto update_parent_effective;
+ }
+
/*
* If it becomes empty, inherit the effective mask of the
* parent, which is guaranteed to have some CPUs unless
@@ -1631,10 +1801,6 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
* out all its CPUs.
*/
if (is_in_v2_mode() && cpumask_empty(tmp->new_cpus)) {
- if (is_partition_valid(cp) &&
- cpumask_equal(cp->cpus_allowed, cp->subparts_cpus))
- goto update_parent_subparts;
-
cpumask_copy(tmp->new_cpus, parent->effective_cpus);
if (!cp->use_parent_ecpus) {
cp->use_parent_ecpus = true;
@@ -1661,12 +1827,12 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
continue;
}

-update_parent_subparts:
+update_parent_effective:
/*
- * update_parent_subparts_cpumask() should have been called
+ * update_parent_effective_cpumask() should have been called
* for cs already in update_cpumask(). We should also call
* update_tasks_cpumask() again for tasks in the parent
- * cpuset if the parent's subparts_cpus changes.
+ * cpuset if the parent's effective_cpus changes.
*/
old_prs = new_prs = cp->partition_root_state;
if ((cp != cs) && old_prs) {
@@ -1696,8 +1862,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
rcu_read_unlock();

if (update_parent) {
- update_parent_subparts_cpumask(cp, partcmd_update, NULL,
- tmp);
+ update_parent_effective_cpumask(cp, partcmd_update, NULL, tmp);
/*
* The cpuset partition_root_state may become
* invalid. Capture it.
@@ -1706,30 +1871,18 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
}

spin_lock_irq(&callback_lock);
-
- if (cp->nr_subparts_cpus && !is_partition_valid(cp)) {
- /*
- * Put all active subparts_cpus back to effective_cpus.
- */
- cpumask_or(tmp->new_cpus, tmp->new_cpus,
- cp->subparts_cpus);
- cpumask_and(tmp->new_cpus, tmp->new_cpus,
- cpu_active_mask);
- cp->nr_subparts_cpus = 0;
- cpumask_clear(cp->subparts_cpus);
- }
-
cpumask_copy(cp->effective_cpus, tmp->new_cpus);
- if (cp->nr_subparts_cpus) {
- /*
- * Make sure that effective_cpus & subparts_cpus
- * are mutually exclusive.
- */
- cpumask_andnot(cp->effective_cpus, cp->effective_cpus,
- cp->subparts_cpus);
- }
-
cp->partition_root_state = new_prs;
+ if ((new_prs > 0) && cpumask_empty(cp->exclusive_cpus))
+ cpumask_and(cp->exclusive_cpus,
+ cp->cpus_allowed, parent->exclusive_cpus);
+ if (new_prs < 0) {
+ /* Reset partition data */
+ cp->nr_subparts = 0;
+ cpumask_clear(cp->exclusive_cpus);
+ if (is_cpu_exclusive(cp))
+ clear_bit(CS_CPU_EXCLUSIVE, &cp->flags);
+ }
spin_unlock_irq(&callback_lock);

notify_partition_change(cp, old_prs);
@@ -1826,6 +1979,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
{
int retval;
struct tmpmasks tmp;
+ struct cpuset *parent = parent_cs(cs);
bool invalidate = false;
int old_prs = cs->partition_root_state;

@@ -1841,6 +1995,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
*/
if (!*buf) {
cpumask_clear(trialcs->cpus_allowed);
+ cpumask_clear(trialcs->exclusive_cpus);
} else {
retval = cpulist_parse(buf, trialcs->cpus_allowed);
if (retval < 0)
@@ -1849,6 +2004,13 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (!cpumask_subset(trialcs->cpus_allowed,
top_cpuset.cpus_allowed))
return -EINVAL;
+
+ /*
+ * When exclusive_cpus is set, make sure it is a subset of
+ * cpus_allowed and parent's exclusive_cpus.
+ */
+ cpumask_and(trialcs->exclusive_cpus,
+ parent->exclusive_cpus, trialcs->cpus_allowed);
}

/* Nothing to do if the cpus didn't change */
@@ -1858,11 +2020,21 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (alloc_cpumasks(NULL, &tmp))
return -ENOMEM;

+ if (is_partition_valid(cs)) {
+ if (cpumask_empty(trialcs->exclusive_cpus)) {
+ invalidate = true;
+ cs->prs_err = PERR_INVCPUS;
+ } else if (tasks_nocpu_error(parent, cs, trialcs->exclusive_cpus)) {
+ invalidate = true;
+ cs->prs_err = PERR_NOCPUS;
+ }
+ }
+
retval = validate_change(cs, trialcs);

if ((retval == -EINVAL) && cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) {
- struct cpuset *cp, *parent;
struct cgroup_subsys_state *css;
+ struct cpuset *cp;

/*
* The -EINVAL error code indicates that partition sibling
@@ -1873,69 +2045,44 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
*/
invalidate = true;
rcu_read_lock();
- parent = parent_cs(cs);
cpuset_for_each_child(cp, css, parent)
if (is_partition_valid(cp) &&
- cpumask_intersects(trialcs->cpus_allowed, cp->cpus_allowed)) {
+ cpumask_intersects(trialcs->exclusive_cpus, cp->exclusive_cpus)) {
rcu_read_unlock();
- update_parent_subparts_cpumask(cp, partcmd_invalidate, NULL, &tmp);
+ update_parent_effective_cpumask(cp, partcmd_invalidate, NULL, &tmp);
rcu_read_lock();
}
rcu_read_unlock();
retval = 0;
}
+
if (retval < 0)
goto out_free;

if (cs->partition_root_state) {
if (invalidate)
- update_parent_subparts_cpumask(cs, partcmd_invalidate,
- NULL, &tmp);
+ update_parent_effective_cpumask(cs, partcmd_invalidate,
+ NULL, &tmp);
else
- update_parent_subparts_cpumask(cs, partcmd_update,
- trialcs->cpus_allowed, &tmp);
+ update_parent_effective_cpumask(cs, partcmd_update,
+ trialcs->exclusive_cpus, &tmp);
}

- compute_effective_cpumask(trialcs->effective_cpus, trialcs,
- parent_cs(cs));
spin_lock_irq(&callback_lock);
cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
+ if (!is_partition_valid(cs))
+ cpumask_clear(cs->exclusive_cpus);
+ else
+ cpumask_copy(cs->exclusive_cpus, trialcs->exclusive_cpus);

- /*
- * Make sure that subparts_cpus, if not empty, is a subset of
- * cpus_allowed. Clear subparts_cpus if partition not valid or
- * empty effective cpus with tasks.
- */
- if (cs->nr_subparts_cpus) {
- if (!is_partition_valid(cs) ||
- (cpumask_subset(trialcs->effective_cpus, cs->subparts_cpus) &&
- partition_is_populated(cs, NULL))) {
- cs->nr_subparts_cpus = 0;
- cpumask_clear(cs->subparts_cpus);
- } else {
- cpumask_and(cs->subparts_cpus, cs->subparts_cpus,
- cs->cpus_allowed);
- cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus);
- }
- }
spin_unlock_irq(&callback_lock);

/* effective_cpus will be updated here */
update_cpumasks_hier(cs, &tmp, 0);

- if (cs->partition_root_state) {
- struct cpuset *parent = parent_cs(cs);
-
- /*
- * For partition root, update the cpumasks of sibling
- * cpusets if they use parent's effective_cpus.
- */
- if (parent->child_ecpus_count)
- update_sibling_cpumasks(parent, cs, &tmp);
-
- /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains */
+ /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */
+ if (cs->partition_root_state)
update_partition_sd_lb(cs, old_prs);
- }
out_free:
free_cpumasks(NULL, &tmp);
return 0;
@@ -2313,7 +2460,6 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
static int update_prstate(struct cpuset *cs, int new_prs)
{
int err = PERR_NONE, old_prs = cs->partition_root_state;
- struct cpuset *parent = parent_cs(cs);
struct tmpmasks tmpmask;

if (old_prs == new_prs)
@@ -2331,6 +2477,13 @@ static int update_prstate(struct cpuset *cs, int new_prs)
if (alloc_cpumasks(NULL, &tmpmask))
return -ENOMEM;

+ /*
+ * Setup exclusive_cpus if not set yet, it will be cleared later
+ * if partition becomes invalid.
+ */
+ if (new_prs > 0)
+ setup_exclusive_cpus(cs, NULL);
+
err = update_partition_exclusive(cs, new_prs);
if (err)
goto out;
@@ -2344,8 +2497,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
goto out;
}

- err = update_parent_subparts_cpumask(cs, partcmd_enable,
- NULL, &tmpmask);
+ err = update_parent_effective_cpumask(cs, partcmd_enable,
+ NULL, &tmpmask);
} else if (old_prs && new_prs) {
/*
* A change in load balance state only, no change in cpumasks.
@@ -2356,19 +2509,13 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Switching back to member is always allowed even if it
* disables child partitions.
*/
- update_parent_subparts_cpumask(cs, partcmd_disable, NULL,
- &tmpmask);
+ update_parent_effective_cpumask(cs, partcmd_disable, NULL,
+ &tmpmask);

/*
- * If there are child partitions, they will all become invalid.
+ * Invalidation of child partitions will be done in
+ * update_cpumasks_hier().
*/
- if (unlikely(cs->nr_subparts_cpus)) {
- spin_lock_irq(&callback_lock);
- cs->nr_subparts_cpus = 0;
- cpumask_clear(cs->subparts_cpus);
- compute_effective_cpumask(cs->effective_cpus, cs, parent);
- spin_unlock_irq(&callback_lock);
- }
}
out:
/*
@@ -2383,14 +2530,12 @@ static int update_prstate(struct cpuset *cs, int new_prs)
spin_lock_irq(&callback_lock);
cs->partition_root_state = new_prs;
WRITE_ONCE(cs->prs_err, err);
+ if (!is_partition_valid(cs))
+ cpumask_clear(cs->exclusive_cpus);
spin_unlock_irq(&callback_lock);

- /*
- * Update child cpusets, if present.
- * Force update if switching back to member.
- */
- if (!list_empty(&cs->css.children))
- update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);
+ /* Force update if switching back to member */
+ update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);

/* Update sched domains and load balance flag */
update_partition_sd_lb(cs, old_prs);
@@ -2626,7 +2771,7 @@ static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
guarantee_online_cpus(task, cpus_attach);
else
cpumask_andnot(cpus_attach, task_cpu_possible_mask(task),
- cs->subparts_cpus);
+ subpartitions_cpus);
/*
* can_attach beforehand should guarantee that this doesn't
* fail. TODO: have a better way to handle failure here
@@ -2729,6 +2874,7 @@ typedef enum {
FILE_EFFECTIVE_CPULIST,
FILE_EFFECTIVE_MEMLIST,
FILE_SUBPARTS_CPULIST,
+ FILE_EXCLUSIVE_CPULIST,
FILE_CPU_EXCLUSIVE,
FILE_MEM_EXCLUSIVE,
FILE_MEM_HARDWALL,
@@ -2913,8 +3059,11 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_EFFECTIVE_MEMLIST:
seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
break;
+ case FILE_EXCLUSIVE_CPULIST:
+ seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->exclusive_cpus));
+ break;
case FILE_SUBPARTS_CPULIST:
- seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->subparts_cpus));
+ seq_printf(sf, "%*pbl\n", cpumask_pr_args(subpartitions_cpus));
break;
default:
ret = -EINVAL;
@@ -3186,11 +3335,18 @@ static struct cftype dfl_files[] = {
.file_offset = offsetof(struct cpuset, partition_file),
},

+ {
+ .name = "cpus.exclusive",
+ .seq_show = cpuset_common_seq_show,
+ .private = FILE_EXCLUSIVE_CPULIST,
+ .flags = CFTYPE_NOT_ON_ROOT,
+ },
+
{
.name = "cpus.subpartitions",
.seq_show = cpuset_common_seq_show,
.private = FILE_SUBPARTS_CPULIST,
- .flags = CFTYPE_DEBUG,
+ .flags = CFTYPE_ONLY_ON_ROOT | CFTYPE_DEBUG,
},

{ } /* terminate */
@@ -3364,6 +3520,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)

if (is_in_v2_mode()) {
cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
+ cpumask_copy(top_cpuset.exclusive_cpus, cpu_possible_mask);
top_cpuset.mems_allowed = node_possible_map;
} else {
cpumask_copy(top_cpuset.cpus_allowed,
@@ -3502,11 +3659,13 @@ int __init cpuset_init(void)
{
BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL));
BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL));
- BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL));
+ BUG_ON(!alloc_cpumask_var(&top_cpuset.exclusive_cpus, GFP_KERNEL));
+ BUG_ON(!zalloc_cpumask_var(&subpartitions_cpus, GFP_KERNEL));

cpumask_setall(top_cpuset.cpus_allowed);
nodes_setall(top_cpuset.mems_allowed);
cpumask_setall(top_cpuset.effective_cpus);
+ cpumask_setall(top_cpuset.exclusive_cpus);
nodes_setall(top_cpuset.effective_mems);

fmeter_init(&top_cpuset.fmeter);
@@ -3647,30 +3806,15 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
compute_effective_cpumask(&new_cpus, cs, parent);
nodes_and(new_mems, cs->mems_allowed, parent->effective_mems);

- if (cs->nr_subparts_cpus)
- /*
- * Make sure that CPUs allocated to child partitions
- * do not show up in effective_cpus.
- */
- cpumask_andnot(&new_cpus, &new_cpus, cs->subparts_cpus);
-
if (!tmp || !cs->partition_root_state)
goto update_tasks;

/*
- * In the unlikely event that a partition root has empty
- * effective_cpus with tasks, we will have to invalidate child
- * partitions, if present, by setting nr_subparts_cpus to 0 to
- * reclaim their cpus.
+ * Compute effective_cpus for valid partition root, may invalidate
+ * child partition roots if necessary.
*/
- if (cs->nr_subparts_cpus && is_partition_valid(cs) &&
- cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)) {
- spin_lock_irq(&callback_lock);
- cs->nr_subparts_cpus = 0;
- cpumask_clear(cs->subparts_cpus);
- spin_unlock_irq(&callback_lock);
- compute_effective_cpumask(&new_cpus, cs, parent);
- }
+ if (is_partition_valid(cs) && is_partition_valid(parent))
+ compute_partition_effective_cpumask(cs, &new_cpus);

/*
* Force the partition to become invalid if either one of
@@ -3679,44 +3823,23 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
* 2) parent is invalid or doesn't grant any cpus to child
* partitions.
*/
- if (is_partition_valid(cs) && (!parent->nr_subparts_cpus ||
- (cpumask_empty(&new_cpus) && partition_is_populated(cs, NULL)))) {
- int old_prs, parent_prs;
-
- update_parent_subparts_cpumask(cs, partcmd_disable, NULL, tmp);
- if (cs->nr_subparts_cpus) {
- spin_lock_irq(&callback_lock);
- cs->nr_subparts_cpus = 0;
- cpumask_clear(cs->subparts_cpus);
- spin_unlock_irq(&callback_lock);
- compute_effective_cpumask(&new_cpus, cs, parent);
- }
-
- old_prs = cs->partition_root_state;
- parent_prs = parent->partition_root_state;
- if (is_partition_valid(cs)) {
- spin_lock_irq(&callback_lock);
- make_partition_invalid(cs);
- spin_unlock_irq(&callback_lock);
- if (is_prs_invalid(parent_prs))
- WRITE_ONCE(cs->prs_err, PERR_INVPARENT);
- else if (!parent_prs)
- WRITE_ONCE(cs->prs_err, PERR_NOTPART);
- else
- WRITE_ONCE(cs->prs_err, PERR_HOTPLUG);
- notify_partition_change(cs, old_prs);
- }
+ if (is_partition_valid(cs) && (!is_partition_valid(parent) ||
+ tasks_nocpu_error(parent, cs, &new_cpus))) {
+ update_parent_effective_cpumask(cs, partcmd_invalidate, NULL, tmp);
+ compute_effective_cpumask(&new_cpus, cs, parent);
cpuset_force_rebuild();
}
-
/*
* On the other hand, an invalid partition root may be transitioned
* back to a regular one.
*/
else if (is_partition_valid(parent) && is_partition_invalid(cs)) {
- update_parent_subparts_cpumask(cs, partcmd_update, NULL, tmp);
- if (is_partition_valid(cs))
+ update_parent_effective_cpumask(cs, partcmd_update, NULL, tmp);
+ if (is_partition_valid(cs)) {
+ setup_exclusive_cpus(cs, parent);
+ compute_partition_effective_cpumask(cs, &new_cpus);
cpuset_force_rebuild();
+ }
}

update_tasks:
@@ -3773,21 +3896,22 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
new_mems = node_states[N_MEMORY];

/*
- * If subparts_cpus is populated, it is likely that the check below
- * will produce a false positive on cpus_updated when the cpu list
- * isn't changed. It is extra work, but it is better to be safe.
+ * If subpartitions_cpus is populated, it is likely that the check
+ * below will produce a false positive on cpus_updated when the cpu
+ * list isn't changed. It is extra work, but it is better to be safe.
*/
- cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus);
+ cpus_updated = !cpumask_equal(top_cpuset.effective_cpus, &new_cpus) ||
+ !cpumask_empty(subpartitions_cpus);
mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);

/*
- * In the rare case that hotplug removes all the cpus in subparts_cpus,
- * we assumed that cpus are updated.
+ * In the rare case that hotplug removes all the cpus in
+ * subpartitions_cpus, we assumed that cpus are updated.
*/
- if (!cpus_updated && top_cpuset.nr_subparts_cpus)
+ if (!cpus_updated && top_cpuset.nr_subparts)
cpus_updated = true;

- /* synchronize cpus_allowed to cpu_active_mask */
+ /* For v1, synchronize cpus_allowed to cpu_active_mask */
if (cpus_updated) {
spin_lock_irq(&callback_lock);
if (!on_dfl)
@@ -3795,17 +3919,16 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
/*
* Make sure that CPUs allocated to child partitions
* do not show up in effective_cpus. If no CPU is left,
- * we clear the subparts_cpus & let the child partitions
+ * we clear the subpartitions_cpus & let the child partitions
* fight for the CPUs again.
*/
- if (top_cpuset.nr_subparts_cpus) {
- if (cpumask_subset(&new_cpus,
- top_cpuset.subparts_cpus)) {
- top_cpuset.nr_subparts_cpus = 0;
- cpumask_clear(top_cpuset.subparts_cpus);
+ if (!cpumask_empty(subpartitions_cpus)) {
+ if (cpumask_subset(&new_cpus, subpartitions_cpus)) {
+ top_cpuset.nr_subparts = 0;
+ cpumask_clear(subpartitions_cpus);
} else {
cpumask_andnot(&new_cpus, &new_cpus,
- top_cpuset.subparts_cpus);
+ subpartitions_cpus);
}
}
cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
@@ -3937,7 +4060,7 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
* We first exclude cpus allocated to partitions. If there is no
* allowable online cpu left, we fall back to all possible cpus.
*/
- cpumask_andnot(pmask, possible_mask, top_cpuset.subparts_cpus);
+ cpumask_andnot(pmask, possible_mask, subpartitions_cpus);
if (!cpumask_intersects(pmask, cpu_online_mask))
cpumask_copy(pmask, possible_mask);
}
--
2.31.1


2023-06-27 14:59:20

by Waiman Long

Subject: [PATCH v4 3/9] cgroup/cpuset: Improve temporary cpumasks handling

The limitation that update_parent_subparts_cpumask() can only use
addmask & delmask in the given tmp cpumasks is fragile and may lead to
unexpected errors.

Fix this problem by allocating/freeing a struct tmpmasks in
update_cpumask() to avoid reusing the cpumasks in trialcs.

With this change, we can move the update_tasks_cpumask() for the
parent and update_sibling_cpumasks() for the sibling to inside
update_parent_subparts_cpumask().

Signed-off-by: Waiman Long <[email protected]>
---
kernel/cgroup/cpuset.c | 42 +++++++++++++-----------------------------
1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index ade33e50ffe2..b8ccc1be7bde 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1277,6 +1277,8 @@ enum subparts_cmd {

static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
int turning_on);
+static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ struct tmpmasks *tmp);

/*
* Update partition exclusive flag
@@ -1447,7 +1449,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
adding = cpumask_andnot(tmp->addmask, tmp->addmask,
parent->subparts_cpus);
/*
- * Empty cpumask is not allewed
+ * Empty cpumask is not allowed
*/
if (cpumask_empty(newmask)) {
part_error = PERR_CPUSEMPTY;
@@ -1567,8 +1569,11 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,

spin_unlock_irq(&callback_lock);

- if (adding || deleting)
+ if (adding || deleting) {
update_tasks_cpumask(parent, tmp->addmask);
+ if (parent->child_ecpus_count)
+ update_sibling_cpumasks(parent, cs, tmp);
+ }

/*
* For partcmd_update without newmask, it is being called from
@@ -1842,18 +1847,8 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
return 0;

-#ifdef CONFIG_CPUMASK_OFFSTACK
- /*
- * Use the cpumasks in trialcs for tmpmasks when they are pointers
- * to allocated cpumasks.
- *
- * Note that update_parent_subparts_cpumask() uses only addmask &
- * delmask, but not new_cpus.
- */
- tmp.addmask = trialcs->subparts_cpus;
- tmp.delmask = trialcs->effective_cpus;
- tmp.new_cpus = NULL;
-#endif
+ if (alloc_cpumasks(NULL, &tmp))
+ return -ENOMEM;

retval = validate_change(cs, trialcs);

@@ -1882,7 +1877,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
retval = 0;
}
if (retval < 0)
- return retval;
+ goto out_free;

if (cs->partition_root_state) {
if (invalidate)
@@ -1917,11 +1912,6 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
}
spin_unlock_irq(&callback_lock);

-#ifdef CONFIG_CPUMASK_OFFSTACK
- /* Now trialcs->cpus_allowed is available */
- tmp.new_cpus = trialcs->cpus_allowed;
-#endif
-
/* effective_cpus will be updated here */
update_cpumasks_hier(cs, &tmp, false);

@@ -1938,6 +1928,8 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
/* Update CS_SCHED_LOAD_BALANCE and/or sched_domains */
update_partition_sd_lb(cs, old_prs);
}
+out_free:
+ free_cpumasks(NULL, &tmp);
return 0;
}

@@ -2346,13 +2338,11 @@ static int update_prstate(struct cpuset *cs, int new_prs)

err = update_parent_subparts_cpumask(cs, partcmd_enable,
NULL, &tmpmask);
- if (err)
- goto out;
} else if (old_prs && new_prs) {
/*
* A change in load balance state only, no change in cpumasks.
*/
- goto out;
+ ;
} else {
/*
* Switching back to member is always allowed even if it
@@ -2372,12 +2362,6 @@ static int update_prstate(struct cpuset *cs, int new_prs)
spin_unlock_irq(&callback_lock);
}
}
-
- update_tasks_cpumask(parent, tmpmask.new_cpus);
-
- if (parent->child_ecpus_count)
- update_sibling_cpumasks(parent, cs, &tmpmask);
-
out:
/*
* Make partition invalid & disable CS_CPU_EXCLUSIVE if an error
--
2.31.1


2023-06-27 15:05:36

by Waiman Long

Subject: [PATCH v4 1/9] cgroup/cpuset: Inherit parent's load balance state in v2

Since commit f28e22441f35 ("cgroup/cpuset: Add a new isolated
cpus.partition type"), the CS_SCHED_LOAD_BALANCE bit of a v2 cpuset
can be on or off. The child cpusets of a partition root must have the
same setting as their parent or it may screw up the rebuilding of
sched domains. Fix this problem by making sure that a child v2 cpuset
follows its parent cpuset's load balance state unless the child cpuset
is a new partition root itself.

Fixes: f28e22441f35 ("cgroup/cpuset: Add a new isolated cpus.partition type")
Signed-off-by: Waiman Long <[email protected]>
---
kernel/cgroup/cpuset.c | 33 ++++++++++++++++++++++++++++++---
1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 58e6f18f01c1..170e342b07e3 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1588,11 +1588,16 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
}

/*
- * Skip the whole subtree if the cpumask remains the same
- * and has no partition root state and force flag not set.
+ * Skip the whole subtree if
+ * 1) the cpumask remains the same,
+ * 2) has no partition root state,
+ * 3) force flag not set, and
+ * 4) for v2 load balance state same as its parent.
*/
if (!cp->partition_root_state && !force &&
- cpumask_equal(tmp->new_cpus, cp->effective_cpus)) {
+ cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
+ (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
+ (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
pos_css = css_rightmost_descendant(pos_css);
continue;
}
@@ -1675,6 +1680,20 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,

update_tasks_cpumask(cp, tmp->new_cpus);

+ /*
+ * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE
+ * from parent if current cpuset isn't a valid partition root
+ * and their load balance states differ.
+ */
+ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+ !is_partition_valid(cp) &&
+ (is_sched_load_balance(parent) != is_sched_load_balance(cp))) {
+ if (is_sched_load_balance(parent))
+ set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
+ else
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
+ }
+
/*
* On legacy hierarchy, if the effective cpumask of any non-
* empty cpuset is changed, we need to rebuild sched domains.
@@ -3222,6 +3241,14 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
cs->use_parent_ecpus = true;
parent->child_ecpus_count++;
}
+
+ /*
+ * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated
+ */
+ if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+ !is_sched_load_balance(parent))
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+
spin_unlock_irq(&callback_lock);

if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
--
2.31.1


2023-06-27 15:05:55

by Waiman Long

Subject: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

This patch updates the cgroup-v2.rst file to include information about
the new "cpuset.cpus.exclusive" control file as well as the new remote
partition.
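
One point covered by the update is that CPUs isolated at boot time
via the "isolcpus" kernel parameter can only be used in an isolated
partition. A minimal sketch (made-up cgroup name, assuming the kernel
was booted with isolcpus=3):

    echo "3" > iso/cpuset.cpus
    echo isolated > iso/cpuset.cpus.partition   # an acceptable use of
                                                # boot-time isolated CPU 3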

Signed-off-by: Waiman Long <[email protected]>
---
Documentation/admin-guide/cgroup-v2.rst | 100 ++++++++++++++++++------
1 file changed, 74 insertions(+), 26 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index d9f3768a10db..8dd7464f93dc 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2215,6 +2215,27 @@ Cpuset Interface Files

Its value will be affected by memory nodes hotplug events.

+ cpuset.cpus.exclusive
+ A read-write multiple values file which exists on non-root
+ cpuset-enabled cgroups.
+
+ It lists all the exclusive CPUs that can be used to create a
+ new cpuset partition. Its value is not used unless the cgroup
+ becomes a valid partition root. See the next section below
+ for a description of what a cpuset partition is.
+
+ The root cgroup is a partition root and all its available CPUs
+ are in its exclusive CPU set.
+
+ There are constraints on what values are acceptable
+ to this control file. Its value must be a subset of
+ the cgroup's "cpuset.cpus" value and the parent cgroup's
+ "cpuset.cpus.exclusive" value. For a parent cgroup, any one
+ its exclusive CPUs can only be distributed to at most one of
+ its child cgroups. Having an exclusive CPU appearing in two or
+ more of its child cgroups is not allowed (the exclusivity rule).
+ An invalid value will be rejected with a write error.
+
cpuset.cpus.partition
A read-write single value file which exists on non-root
cpuset-enabled cgroups. This flag is owned by the parent cgroup
@@ -2228,26 +2249,41 @@ Cpuset Interface Files
"isolated" Partition root without load balancing
========== =====================================

- The root cgroup is always a partition root and its state
- cannot be changed. All other non-root cgroups start out as
- "member".
+ A cpuset partition is a collection of cpuset-enabled cgroups with
+ a partition root at the top of the hierarchy and its descendants
+ except those that are separate partition roots themselves and
+ their descendants. A partition has exclusive access to the
+ set of exclusive CPUs allocated to it. Other cgroups outside
+ of that partition cannot use any CPUs in that set.
+
+ There are two types of partitions - local and remote. A local
+ partition is one whose parent cgroup is also a valid partition
+ root. A remote partition is one whose parent cgroup is not a
+ valid partition root itself. Writing to "cpuset.cpus.exclusive"
+ is not mandatory for the creation of a local partition as its
+ "cpuset.cpus.exclusive" file will be filled in automatically if
+ it is not set. The automatically set value will be based on its
+ "cpuset.cpus" value. Writing the proper "cpuset.cpus.exclusive"
+ values down the cgroup hierarchy is mandatory for the creation
+ of a remote partition.
+
+ Currently, a remote partition cannot be created under a local
+ partition. None of the ancestors of a remote partition root,
+ except the root cgroup, can be a partition root.
+
+ The root cgroup is always a partition root and its state cannot
+ be changed. All other non-root cgroups start out as "member".

When set to "root", the current cgroup is the root of a new
- partition or scheduling domain that comprises itself and all
- its descendants except those that are separate partition roots
- themselves and their descendants.
+ partition or scheduling domain. The set of exclusive CPUs is
+ determined by the value of its "cpuset.cpus.exclusive".

- When set to "isolated", the CPUs in that partition root will
+ When set to "isolated", the CPUs in that partition will
be in an isolated state without any load balancing from the
scheduler. Tasks placed in such a partition with multiple
CPUs should be carefully distributed and bound to each of the
individual CPUs for optimal performance.

- The value shown in "cpuset.cpus.effective" of a partition root
- is the CPUs that the partition root can dedicate to a potential
- new child partition root. The new child subtracts available
- CPUs from its parent "cpuset.cpus.effective".
-
A partition root ("root" or "isolated") can be in one of the
two possible states - valid or invalid. An invalid partition
root is in a degraded state where some state information may
@@ -2270,33 +2306,40 @@ Cpuset Interface Files
In the case of an invalid partition root, a descriptive string on
why the partition is invalid is included within parentheses.

- For a partition root to become valid, the following conditions
+ For a local partition root to be valid, the following conditions
must be met.

- 1) The "cpuset.cpus" is exclusive with its siblings , i.e. they
- are not shared by any of its siblings (exclusivity rule).
- 2) The parent cgroup is a valid partition root.
- 3) The "cpuset.cpus" is not empty and must contain at least
- one of the CPUs from parent's "cpuset.cpus", i.e. they overlap.
+ 1) The parent cgroup is a valid partition root.
+ 2) The "cpuset.cpus.exclusive" is exclusive with its siblings,
+ i.e. they are not shared by any of its siblings (exclusivity
+ rule).
+ 3) The "cpuset.cpus.exclusive" is not empty, but it may contain
+ offline CPUs.
4) The "cpuset.cpus.effective" cannot be empty unless there is
no task associated with this partition.

- External events like hotplug or changes to "cpuset.cpus" can
- cause a valid partition root to become invalid and vice versa.
- Note that a task cannot be moved to a cgroup with empty
- "cpuset.cpus.effective".
+ For a remote partition root to be valid, all the above conditions
+ except the first one must be met.
+
+ External events like hotplug or changes to "cpuset.cpus" or
+ "cpuset.cpus.exclusive" can cause a valid partition root to
+ become invalid and vice versa. Note that a task cannot be
+ moved to a cgroup with empty "cpuset.cpus.effective".

For a valid partition root with the sibling cpu exclusivity
rule enabled, changes made to "cpuset.cpus" that violate the
exclusivity rule will invalidate the partition as well as its
sibling partitions with conflicting cpuset.cpus values. So
- care must be taking in changing "cpuset.cpus".
+ care must be taken in changing "cpuset.cpus". Changes to
+ "cpuset.cpus.exclusive" that violate the exclusivity rule will
+ not be allowed.

A valid non-root parent partition may distribute out all its CPUs
- to its child partitions when there is no task associated with it.
+ to its child local partitions when there is no task associated
+ with it.

- Care must be taken to change a valid partition root to
- "member" as all its child partitions, if present, will become
+ Care must be taken when changing a valid partition root to
+ "member" as all its child local partitions, if present, will become
invalid causing disruption to tasks running in those child
partitions. These inactivated partitions could be recovered if
their parent is switched back to a partition root with a proper
@@ -2310,6 +2353,11 @@ Cpuset Interface Files
to "cpuset.cpus.partition" without the need to do continuous
polling.

+ A user can pre-configure certain CPUs to an isolated state at
+ boot time with the "isolcpus" kernel boot command line option.
+ If those CPUs are to be put into a partition, they can only
+ be placed in an isolated partition.
+

Device controller
-----------------
--
2.31.1
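
To make the remote-partition workflow documented above concrete, here
is a minimal userspace sketch in C. The cgroup paths under
/sys/fs/cgroup/middleware and the CPU numbers are invented for
illustration, and the sketch assumes "cpuset.cpus" in both cgroups
already contains CPUs 2-3; only the control file names come from the
patch, and error handling is reduced to a bare minimum.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MW	"/sys/fs/cgroup/middleware"	/* hypothetical hierarchy */

static int write_file(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;	/* e.g. the exclusivity rule was violated */
	}
	return close(fd);
}

int main(void)
{
	/* Distribute exclusive CPUs 2-3 down the hierarchy ... */
	if (write_file(MW "/cpuset.cpus.exclusive", "2-3") ||
	    write_file(MW "/container1/cpuset.cpus.exclusive", "2-3") ||
	    /* ... then turn the leaf into a remote isolated partition. */
	    write_file(MW "/container1/cpuset.cpus.partition", "isolated")) {
		perror("cpuset");
		return 1;
	}
	return 0;
}

Reading "cpuset.cpus.partition" afterwards should show "isolated" if
the partition is valid, or an invalid state with a reason otherwise.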


2023-06-27 15:12:07

by Waiman Long

[permalink] [raw]
Subject: [PATCH v4 4/9] cgroup/cpuset: Allow suppression of sched domain rebuild in update_cpumasks_hier()

A single partition setup and tear-down operation can lead to
multiple rebuild_sched_domains_locked() calls which is a waste of
effort. This can partly be mitigated by adding a flag to suppress the
rebuild_sched_domains_locked() call in update_cpumasks_hier(). Since
a Boolean flag has already been passed as the 3rd argument to
update_cpumasks_hier(), we can extend that to a full flag word.

The sched domain rebuild suppression is now enabled in
update_sibling_cpumasks() as all its callers will rebuild the sched
domains themselves after it returns anyway.

Signed-off-by: Waiman Long <[email protected]>
---
kernel/cgroup/cpuset.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index b8ccc1be7bde..64f9e305b3ab 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1590,6 +1590,12 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
return 0;
}

+/*
+ * update_cpumasks_hier() flags
+ */
+#define HIER_CHECKALL 0x01 /* Check all cpusets with no skipping */
+#define HIER_NO_SD_REBUILD 0x02 /* Don't rebuild sched domains */
+
/*
* update_cpumasks_hier - Update effective cpumasks and tasks in the subtree
* @cs: the cpuset to consider
@@ -1604,7 +1610,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
* Called with cpuset_mutex held
*/
static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
- bool force)
+ int flags)
{
struct cpuset *cp;
struct cgroup_subsys_state *pos_css;
@@ -1644,10 +1650,10 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
* Skip the whole subtree if
* 1) the cpumask remains the same,
* 2) has no partition root state,
- * 3) force flag not set, and
+ * 3) HIER_CHECKALL flag not set, and
* 4) for v2 load balance state same as its parent.
*/
- if (!cp->partition_root_state && !force &&
+ if (!cp->partition_root_state && !(flags & HIER_CHECKALL) &&
cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
(!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
(is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
@@ -1764,7 +1770,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
}
rcu_read_unlock();

- if (need_rebuild_sched_domains)
+ if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD))
rebuild_sched_domains_locked();
}

@@ -1788,7 +1794,9 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
* to use the right effective_cpus value.
*
* The update_cpumasks_hier() function may sleep. So we have to
- * release the RCU read lock before calling it.
+ * release the RCU read lock before calling it. The HIER_NO_SD_REBUILD
+ * flag is used to suppress the rebuild of sched domains as the
+ * callers will take care of that.
*/
rcu_read_lock();
cpuset_for_each_child(sibling, pos_css, parent) {
@@ -1800,7 +1808,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
continue;

rcu_read_unlock();
- update_cpumasks_hier(sibling, tmp, false);
+ update_cpumasks_hier(sibling, tmp, HIER_NO_SD_REBUILD);
rcu_read_lock();
css_put(&sibling->css);
}
@@ -1913,7 +1921,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
spin_unlock_irq(&callback_lock);

/* effective_cpus will be updated here */
- update_cpumasks_hier(cs, &tmp, false);
+ update_cpumasks_hier(cs, &tmp, 0);

if (cs->partition_root_state) {
struct cpuset *parent = parent_cs(cs);
@@ -2382,7 +2390,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
* Force update if switching back to member.
*/
if (!list_empty(&cs->css.children))
- update_cpumasks_hier(cs, &tmpmask, !new_prs);
+ update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);

/* Update sched domains and load balance flag */
update_partition_sd_lb(cs, old_prs);
--
2.31.1
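
The batching pattern this patch introduces can be modeled outside the
kernel. Below is a self-contained toy program, not kernel code, that
mimics the control flow: each per-sibling update passes the suppress
flag, and the caller performs a single sched domain rebuild at the
end. All function bodies are stand-ins.

#include <stdio.h>

#define HIER_NO_SD_REBUILD	0x02	/* don't rebuild sched domains */

/* Stand-in for the expensive rebuild_sched_domains_locked(). */
static void rebuild_sched_domains(void)
{
	printf("rebuilding sched domains\n");
}

static void update_cpumasks_hier(int flags)
{
	/* ... recompute effective cpumasks for the subtree ... */
	if (!(flags & HIER_NO_SD_REBUILD))
		rebuild_sched_domains();
}

static void update_sibling_cpumasks(int nr_siblings)
{
	/* Each sibling update suppresses its own rebuild ... */
	for (int i = 0; i < nr_siblings; i++)
		update_cpumasks_hier(HIER_NO_SD_REBUILD);
}

int main(void)
{
	update_sibling_cpumasks(3);
	rebuild_sched_domains();	/* ... so only one rebuild happens */
	return 0;
}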


2023-07-10 21:16:48

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 4/9] cgroup/cpuset: Allow suppression of sched domain rebuild in update_cpumasks_hier()

On Tue, Jun 27, 2023 at 10:35:03AM -0400, Waiman Long wrote:
> A single partition setup and tear-down operation can lead to
> multiple rebuild_sched_domains_locked() calls which is a waste of
> effort. This can partly be mitigated by adding a flag to suppress the
> rebuild_sched_domains_locked() call in update_cpumasks_hier(). Since
> a Boolean flag has already been passed as the 3rd argument to
> update_cpumasks_hier(), we can extend that to a full flag word.
>
> The sched domain rebuild suppression is now enabled in
> update_sibling_cpumasks() as all its callers will rebuild the sched
> domains themselves after it returns anyway.
>
> Signed-off-by: Waiman Long <[email protected]>

Applied 1-3 to cgroup/for-6.6.

Thanks.

--
tejun

2023-07-10 21:22:09

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

Hello, Waiman.

I applied the prep patches. They look good on their own.

On Tue, Jun 27, 2023 at 10:34:59AM -0400, Waiman Long wrote:
...
> cpuset. Unlike "cpuset.cpus", invalid input to "cpuset.cpus.exclusive"
> will be rejected with an error. This new control file has no effect on

We cannot maintain this as an invariant tho, right? For example, what
happens when a parent cgroup later wants to withdraw a CPU from its
cpuset.cpus which should always be allowed regardless of what its
descendants are doing? Even with cpus.exclusive itself, I think it'd be
important to always allow ancestors to be able to withdraw from the
commitment as with other resources. I suppose one can argue that giving
exclusive access to CPUs is a special case which doesn't follow this rule
but cpus.exclusive having to be nested inside cpus which is subject to that
rule makes that combination too contorted.

Would it be difficult to follow how isolation modes behave when the target
configuration can't be achieved?

Thanks.

--
tejun

2023-07-10 21:56:54

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

Hello,

On Tue, Jun 27, 2023 at 10:35:07AM -0400, Waiman Long wrote:
...
> + There are two types of partitions - local and remote. A local
> + partition is one whose parent cgroup is also a valid partition
> + root. A remote partition is one whose parent cgroup is not a
> + valid partition root itself. Writing to "cpuset.cpus.exclusive"
> + is not mandatory for the creation of a local partition as its
> + "cpuset.cpus.exclusive" file will be filled in automatically if
> + it is not set. The automatically set value will be based on its
> + "cpuset.cpus" value. Writing the proper "cpuset.cpus.exclusive"
> + values down the cgroup hierarchy is mandatory for the creation
> + of a remote partition.

Wouldn't a partition root's cpus.exclusive always contain all of the CPUs in
its cpus? Would it make sense for cpus.exclusive to be different from .cpus?

Thanks.

--
tejun

2023-07-11 00:46:12

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

On 7/10/23 17:30, Tejun Heo wrote:
> Hello,
>
> On Tue, Jun 27, 2023 at 10:35:07AM -0400, Waiman Long wrote:
> ...
>> + There are two types of partitions - local and remote. A local
>> + partition is one whose parent cgroup is also a valid partition
>> + root. A remote partition is one whose parent cgroup is not a
>> + valid partition root itself. Writing to "cpuset.cpus.exclusive"
>> + is not mandatory for the creation of a local partition as its
>> + "cpuset.cpus.exclusive" file will be filled in automatically if
>> + it is not set. The automatically set value will be based on its
>> + "cpuset.cpus" value. Writing the proper "cpuset.cpus.exclusive"
>> + values down the cgroup hierarchy is mandatory for the creation
>> + of a remote partition.
> Wouldn't a partition root's cpus.exclusive always contain all of the CPUs in
> its cpus? Would it make sense for cpus.exclusive to be different from .cpus?
>
> Thanks.

In the auto-filled case, it should be the same as cpuset.cpus. I will
clarify that in the documentation. Thanks for catching that.

Cheers,
Longman


2023-07-11 00:50:16

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

On 7/10/23 17:08, Tejun Heo wrote:
> Hello, Waiman.
>
> I applied the prep patches. They look good on their own.
>
> On Tue, Jun 27, 2023 at 10:34:59AM -0400, Waiman Long wrote:
> ...
>> cpuset. Unlike "cpuset.cpus", invalid input to "cpuset.cpus.exclusive"
>> will be rejected with an error. This new control file has no effect on
> We cannot maintain this as an invariant tho, right? For example, what
> happens when a parent cgroup later wants to withdraw a CPU from its
> cpuset.cpus which should always be allowed regardless of what its
> descendants are doing? Even with cpus.exclusive itself, I think it'd be
> important to always allow ancestors to be able to withdraw from the
> commitment as with other resources. I suppose one can argue that giving
> exclusive access to CPUs is a special case which doesn't follow this rule
> but cpus.exclusive having to be nested inside cpus which is subject to that
> rule makes that combination too contorted.
>
> Would it be difficult to follow how isolation modes behave when the target
> configuration can't be achieved?

I would like to clarify that withdrawal of CPUs from
cpuset.cpus.exclusive is always allowed. It is the addition of CPUs
not present in cpuset.cpus that will be rejected. The invariant is
that cpuset.cpus.exclusive must always be a subset of cpuset.cpus. Any
change that violates this rule is not allowed. Alternatively, I could
silently drop the offending CPUs without returning an error, but that
may surprise users.

BTW, withdrawal of CPUs from cpuset.cpus will also withdraw them from
cpuset.cpus.exclusive, if present. This allows the partition code to use
cpuset.cpus.exclusive directly to determine the allowable exclusive CPUs
without doing an intersection with cpuset.cpus each time it is used.
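
As a toy model of the two rules just described (an illustrative
sketch with made-up masks, not the kernel implementation), using a
plain unsigned long as the cpumask:

#include <stdio.h>

static unsigned long cpus_allowed = 0x0f;	/* cpuset.cpus = 0-3 */
static unsigned long cpus_exclusive = 0x0c;	/* cpuset.cpus.exclusive = 2-3 */

/* Rule 1: writes that are not a subset of cpuset.cpus are rejected. */
static int write_exclusive(unsigned long newmask)
{
	if (newmask & ~cpus_allowed)
		return -1;		/* rejected with an error */
	cpus_exclusive = newmask;
	return 0;
}

/* Rule 2: withdrawing CPUs from cpuset.cpus prunes the exclusive set too. */
static void write_cpus(unsigned long newmask)
{
	cpus_allowed = newmask;
	cpus_exclusive &= newmask;
}

int main(void)
{
	printf("add cpu 4: %s\n", write_exclusive(0x1c) ? "rejected" : "ok");
	write_cpus(0x07);		/* withdraw cpu 3 */
	printf("exclusive after withdrawal: %#lx\n", cpus_exclusive);
	return 0;
}

Here write_exclusive(0x1c) fails because CPU 4 is outside cpuset.cpus,
and withdrawing CPU 3 from cpuset.cpus leaves only CPU 2 (0x04) in the
exclusive set.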

Please let me know if you want a different behavior.

Cheers,
Longman


2023-07-11 01:09:34

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

Hello,

On Mon, Jul 10, 2023 at 08:33:11PM -0400, Waiman Long wrote:
> I would like to clarify that withdrawal of CPUs from cpuset.cpus.exclusive
> is always allowed. It is the addition of CPUs not present in cpuset.cpus
> that will be rejected. The invariant is that cpuset.cpus.exclusive must
> always be a subset of cpuset.cpus. Any change that violates this rule is not
> allowed. Alternatively, I could silently drop the offending CPUs without
> returning an error, but that may surprise users.

Right, that'd be confusing.

> BTW, withdrawal of CPUs from cpuset.cpus will also withdraw them from
> cpuset.cpus.exclusive, if present. This allows the partition code to use
> cpuset.cpus.exclusive directly to determine the allowable exclusive CPUs
> without doing an intersection with cpuset.cpus each time it is used.

This is kinda confusing too, I think. Changing cpuset.cpus in an ancestor
doesn't affect the contents of the descendants' cpuset.cpus files but would
directly modify the contents of their cpuset.cpus.exclusive files.

There's some inherent friction because cpuset.cpus separates configuration
(cpuset.cpus) and the current state (cpuset.cpus.effective) while
cpuset.cpus.exclusive is trying to do both in the same interface file. When
the two behavior modes collide, it becomes rather confusing. Do you think
it'd make sense to make cpus.exclusive follow the same pattern as
cpuset.cpus?
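
(As a concrete instance of that split: cpuset.cpus may keep listing a
CPU that has gone offline, while cpuset.cpus.effective drops it until
the CPU comes back online.)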

Thanks.

--
tejun

2023-07-11 01:09:45

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

Hello,

On Mon, Jul 10, 2023 at 08:21:43PM -0400, Waiman Long wrote:
> > Wouldn't a partition root's cpus.exclusive always contain all of the CPUs in
> > its cpus? Would it make sense for cpus.exclusive to be different from .cpus?
>
> In auto-filled case, it should be the same as cpuset.cpus. I will clarify
> that in the documentation. Thanks for catching that.

When the user writes something to the file, what would it mean if the
content differs from the cgroup's cpuset.cpus?

Thanks.

--
tejun

2023-07-11 01:12:56

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

On 7/10/23 20:42, Tejun Heo wrote:
> Hello,
>
> On Mon, Jul 10, 2023 at 08:21:43PM -0400, Waiman Long wrote:
>>> Wouldn't a partition root's cpus.exclusive always contain all of the CPUs in
>>> its cpus? Would it make sense for cpus.exclusive to be different from .cpus?
>> In auto-filled case, it should be the same as cpuset.cpus. I will clarify
>> that in the documentation. Thanks for catching that.
> When the user writes something to the file, what would it mean if the
> content differs from the cgroup's cpuset.cpus?

For a local partition, it doesn't make sense to have a
cpuset.cpus.exclusive that is not the same as cpuset.cpus, as it
artificially reduces the set of CPUs that can be used in the
partition. In the case of a remote partition, the ancestor cgroups of
a remote partition should have a cpuset.cpus.exclusive smaller than
cpuset.cpus so that when the remote partition is enabled, there are
still CPUs left to be used by those cgroups. In essence,
cpuset.cpus.exclusive represents the CPUs that may no longer be usable
if they are taken by a remote partition downstream.
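
A hypothetical layout (paths and CPU numbers invented purely for
illustration) that matches this description:

    /sys/fs/cgroup/        cpuset.cpus: 0-7  (root partition)
      middleware/          cpuset.cpus: 0-7  cpuset.cpus.exclusive: 4-7
        container1/        cpuset.cpus: 4-7  cpuset.cpus.exclusive: 4-7
                           cpuset.cpus.partition: isolated (remote)

Once container1 becomes a remote partition, CPUs 4-7 are taken out of
middleware's effective CPUs, leaving it CPUs 0-3.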

Cheers,
Longman


2023-07-11 01:13:06

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

Hello,

On Mon, Jul 10, 2023 at 08:53:18PM -0400, Waiman Long wrote:
> For a local partition, it doesn't make sense to have a cpuset.cpus.exclusive
> that is not the same as cpuset.cpus, as it artificially reduces the set of
> CPUs that can be used in the partition. In the case of a remote partition, the

Yeah, I was wondering about local partitions. "Automatic but can be
overridden" behavior becomes confusing if it's difficult for the user to
easily tell which part is automatic when. I wonder whether it'd be better to
make the condition static - e.g. for a partition cgroup, cpus.exclusive
always contains all bits in cpus no matter what value is written to it. Or,
if we separate out cpus.exclusive and cpus.exclusive.effective, no matter
what cpus.exclusive is set, a partition root's cpus.exclusive.effective
always includes all bits in cpus.effective.

Thanks.

--
tejun

2023-07-11 02:17:10

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

On 7/10/23 21:00, Tejun Heo wrote:
> Hello,
>
> On Mon, Jul 10, 2023 at 08:33:11PM -0400, Waiman Long wrote:
>> I would like to clarify that withdrawal of CPUs from cpuset.cpus.exclusive
>> is always allowed. It is the addition of CPUs not present in cpuset.cpus
>> that will be rejected. The invariant is that cpuset.cpus.exclusive must
>> always be a subset of cpuset.cpus. Any change that violates this rule is not
>> allowed. Alternatively, I could silently drop the offending CPUs without
>> returning an error, but that may surprise users.
> Right, that'd be confusing.
>
>> BTW, withdrawal of CPUs from cpuset.cpus will also withdraw them from
>> cpuset.cpus.exclusive, if present. This allows the partition code to use
>> cpuset.cpus.exclusive directly to determine the allowable exclusive CPUs
>> without doing an intersection with cpuset.cpus each time it is used.
> This is kinda confusing too, I think. Changing cpuset.cpus in an ancestor
> doesn't affect the contents of the descendants' cpuset.cpus files but would
> directly modify the contents of their cpuset.cpus.exclusive files.
>
> There's some inherent friction because cpuset.cpus separates configuration
> (cpuset.cpus) and the current state (cpuset.cpus.effective) while
> cpuset.cpus.exclusive is trying to do both in the same interface file. When
> the two behavior modes collide, it becomes rather confusing. Do you think
> it'd make sense to make cpus.exclusive follow the same pattern as
> cpuset.cpus?

I don't want to add another cpuset.cpus.exclusive.effective control
file. One possibility is to keep another effective mask in the struct
cpuset and list both the exclusive cpus set by the user and the
effective ones side by side, like "<cpus> (<effective_cpus>)" if they
differ, or some other format. For example, such a file might then
show "2-4 (2-3)" if CPU 4 had been withdrawn. What do you think?

Regards,
Longman


2023-07-11 02:32:11

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH v4 0/9] cgroup/cpuset: Support remote partitions

Hello,

On Mon, Jul 10, 2023 at 09:38:12PM -0400, Waiman Long wrote:
> I don't want to add another cpuset.cpus.exclusive.effective control file.
> One possibility is to keep another effective mask in the struct cpuset and
> list both the exclusive cpus set by the user and the effective ones side by
> side, like "<cpus> (<effective_cpus>)" if they differ, or some other format.
> What do you think?

Hmm... if we go for separate effective mask, I think it'd be better to stay
consistent with cpuset.cpus[.effective]. That's the convention both
cpuset.cpus and cpuset.mems already follow. I'm not sure what we'd gain by
deviating.

Thanks.

--
tejun

2023-07-11 04:37:08

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 8/9] cgroup/cpuset: Documentation update for partition

On 7/10/23 21:07, Tejun Heo wrote:
> Hello,
>
> On Mon, Jul 10, 2023 at 08:53:18PM -0400, Waiman Long wrote:
>> For local partition, it doesn't make sense to have a cpust.cpus.exclusive
>> that is not the same as cpuset.cpus as it artificially reduce the set of
>> CPUs that can be used in a partition. In the case of a remote partition, the
> Yeah, I was wondering about local partitions. "Automatic but can be
> overridden" behavior becomes confusing if it's difficult for the user to
> easily tell which part is automatic when. I wonder whether it'd be better to
> make the condition static - e.g. for a partition cgroup, cpus.exclusive
> always contains all bits in cpus no matter what value is written to it. Or,
> if we separate out cpus.exclusive and cpus.exclusive.effective, no matter
> what cpus.exclusive is set, a partition root's cpus.exclusive.effective
> always includes all bits in cpus.effective.

With no offline CPUs, cpus.effective should be the same as
cpus.exclusive.effective for a valid partition root. Here
cpus.exclusive.effective is a bit different from cpus.effective as it
can contain offline cpus. It also means that adding
cpus.exclusive.effective can be redundant.

As said before, I try to avoid adding a new cpuset control file unless
absolutely necessary. I now have a slightly different proposal. Once
manually set, I can keep cpuset.cpus.exclusive invariant. I do need to
do a bit more work when enabling a partition root to find out the
effective set of exclusive CPUs to be used, or make the partition
invalid if no exclusive CPU is available. I still want to do an
initial check when setting cpuset.cpus.exclusive to make sure that the
value is at least valid at the beginning.
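
A rough sketch of that enable-time computation, again with made-up
masks rather than the eventual kernel code:

#include <stdbool.h>
#include <stdio.h>

static unsigned long user_xcpus = 0x0c;	/* kept exactly as written */

/* Derive the effective exclusive CPUs only when enabling the partition. */
static bool enable_partition(unsigned long parent_cpus, unsigned long *eff_xcpus)
{
	*eff_xcpus = user_xcpus & parent_cpus;
	return *eff_xcpus != 0;		/* empty => partition is invalid */
}

int main(void)
{
	unsigned long eff;

	if (enable_partition(0x0f, &eff))
		printf("valid partition, effective exclusive: %#lx\n", eff);
	else
		printf("invalid partition: no exclusive CPU available\n");
	return 0;
}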

Do you think this is an acceptable compromise?

Thanks,
Longman