2024-01-17 16:50:12

by Waiman Long

Subject: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

This patch series is based on the RFC patch from Frederic [1]. Instead
of offering RCU_NOCB as a separate option, it is now lumped into a
root-only cpuset.cpus.isolation_full flag that will enable all the
additional CPU isolation capabilities available for isolated partitions
if set. RCU_NOCB is just the first one to join this party. Additional dynamic
CPU isolation capabilities will be added in the future.

The first 2 patches are adopted from Frederic with minor twists to fix
merge conflicts and a compilation issue. The rest are for implementing
the new cpuset.cpus.isolation_full interface, which is essentially a flag
to globally enable or disable full CPU isolation on isolated partitions.
On read, it also shows the CPU isolation capabilities that are currently
enabled. RCU_NOCB requires that the rcu_nocbs option be present on
the kernel boot command line. Without that, the rcu_nocb functionality
cannot be enabled even if the isolation_full flag is set. So we allow
users to check the isolation_full file to verify whether the desired
CPU isolation capability is enabled.
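
As a rough usage sketch of the proposed interface (the cgroup v2 mount
point, the "rt" child cgroup name and the CPU list are illustrative
assumptions; enabling the cpuset controller via cgroup.subtree_control
is omitted for brevity):

    # Create an isolated partition holding CPUs 2-3.
    mkdir /sys/fs/cgroup/rt
    echo 2-3 > /sys/fs/cgroup/rt/cpuset.cpus
    echo isolated > /sys/fs/cgroup/rt/cpuset.cpus.partition

    # Root-only flag: opt into full CPU isolation on isolated partitions.
    echo 1 > /sys/fs/cgroup/cpuset.cpus.isolation_full

    # Reading the flag back also lists the capabilities actually enabled;
    # rcu_nocb only shows up if rcu_nocbs= was given at boot.
    cat /sys/fs/cgroup/cpuset.cpus.isolation_full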

Only sanity checking has been done so far. More testing, especially on
the RCU side, will be needed.

[1] https://lore.kernel.org/lkml/[email protected]/

Frederic Weisbecker (2):
rcu/nocb: Pass a cpumask instead of a single CPU to offload/deoffload
rcu/nocb: Prepare to change nocb cpumask from CPU-hotplug protected
cpuset caller

Waiman Long (6):
rcu/no_cb: Add rcu_nocb_enabled() to expose the rcu_nocb state
cgroup/cpuset: Better tracking of addition/deletion of isolated CPUs
cgroup/cpuset: Add cpuset.cpus.isolation_full
cgroup/cpuset: Enable dynamic rcu_nocb mode on isolated CPUs
cgroup/cpuset: Document the new cpuset.cpus.isolation_full control
file
cgroup/cpuset: Update test_cpuset_prs.sh to handle
cpuset.cpus.isolation_full

Documentation/admin-guide/cgroup-v2.rst | 24 ++
include/linux/rcupdate.h | 15 +-
kernel/cgroup/cpuset.c | 237 ++++++++++++++----
kernel/rcu/rcutorture.c | 6 +-
kernel/rcu/tree_nocb.h | 118 ++++++---
.../selftests/cgroup/test_cpuset_prs.sh | 23 +-
6 files changed, 337 insertions(+), 86 deletions(-)

--
2.39.3



2024-01-17 16:50:29

by Waiman Long

Subject: [RFC PATCH 8/8] cgroup/cpuset: Update test_cpuset_prs.sh to handle cpuset.cpus.isolation_full

Add a new "-F" option to cpuset.cpus.isolation_full to enable
cpuset.cpus.isolation_full for trying out the effect of enabling
full CPU isolation.
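
For example, to run the test with full CPU isolation enabled (path
relative to the kernel source tree):

    # Enable cpuset.cpus.isolation_full for the test run, verbosely.
    tools/testing/selftests/cgroup/test_cpuset_prs.sh -F -v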

Signed-off-by: Waiman Long <[email protected]>
---
.../selftests/cgroup/test_cpuset_prs.sh | 23 ++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index b5eb1be2248c..2a8f0cb8d252 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -32,6 +32,7 @@ NR_CPUS=$(lscpu | grep "^CPU(s):" | sed -e "s/.*:[[:space:]]*//")
PROG=$1
VERBOSE=0
DELAY_FACTOR=1
+ISOLATION_FULL=
SCHED_DEBUG=
while [[ "$1" = -* ]]
do
@@ -44,7 +45,10 @@ do
-d) DELAY_FACTOR=$2
shift
;;
- *) echo "Usage: $PROG [-v] [-d <delay-factor>"
+ -F) ISOLATION_FULL=1
+ shift
+ ;;
+ *) echo "Usage: $PROG [-v] [-d <delay-factor>] [-F]"
exit
;;
esac
@@ -108,6 +112,22 @@ console_msg()
pause 0.01
}

+setup_isolation_full()
+{
+ ISOL_FULL=${CGROUP2}/cpuset.cpus.isolation_full
+ if [[ -n "$ISOLATION_FULL" ]]
+ then
+ echo 1 > $ISOL_FULL
+ set -- $(cat $ISOL_FULL)
+ ISOLATION_FLAGS=$2
+ [[ $VERBOSE -gt 0 ]] && {
+ echo "Full CPU isolation flags: $ISOLATION_FLAGS"
+ }
+ else
+ echo 0 > $ISOL_FULL
+ fi
+}
+
test_partition()
{
EXPECTED_VAL=$1
@@ -930,6 +950,7 @@ test_inotify()
}

trap cleanup 0 2 3 6
+setup_isolation_full
run_state_test TEST_MATRIX
test_isolated
test_inotify
--
2.39.3


2024-01-17 16:50:50

by Waiman Long

Subject: [RFC PATCH 4/8] cgroup/cpuset: Better tracking of addition/deletion of isolated CPUs

The process of updating workqueue unbound cpumask to exclude isolated
CPUs in cpuset only requires the use of the aggregated isolated_cpus
cpumask. Other types of CPU isolation, like the RCU no-callback CPU
mode, may require knowing more granular addition and deletion of isolated
CPUs. To enable these types of CPU isolation at run time, we need to
provide better tracking of the addition and deletion of isolated CPUs.

This patch adds a new isolated_cpus_modifiers enum type for tracking
the addition and deletion of isolated CPUs, as well as renaming
update_unbound_workqueue_cpumask() to update_isolation_cpumasks()
to accommodate additional CPU isolation modes in the future.

There is no functional change.

Signed-off-by: Waiman Long <[email protected]>
---
kernel/cgroup/cpuset.c | 113 +++++++++++++++++++++++++----------------
1 file changed, 69 insertions(+), 44 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index dfbb16aca9f4..0479af76a5dc 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -206,6 +206,13 @@ struct cpuset {
*/
static cpumask_var_t subpartitions_cpus;

+/* Enum types for possible changes to the set of isolated CPUs */
+enum isolated_cpus_modifiers {
+ ISOL_CPUS_NONE = 0,
+ ISOL_CPUS_ADD,
+ ISOL_CPUS_DELETE,
+};
+
/*
* Exclusive CPUs in isolated partitions
*/
@@ -1446,14 +1453,14 @@ static void partition_xcpus_newstate(int old_prs, int new_prs, struct cpumask *x
* @new_prs: new partition_root_state
* @parent: parent cpuset
* @xcpus: exclusive CPUs to be added
- * Return: true if isolated_cpus modified, false otherwise
+ * Return: isolated_cpus modifier
*
* Remote partition if parent == NULL
*/
-static bool partition_xcpus_add(int new_prs, struct cpuset *parent,
- struct cpumask *xcpus)
+static int partition_xcpus_add(int new_prs, struct cpuset *parent,
+ struct cpumask *xcpus)
{
- bool isolcpus_updated;
+ int icpus_mod = ISOL_CPUS_NONE;

WARN_ON_ONCE(new_prs < 0);
lockdep_assert_held(&callback_lock);
@@ -1464,13 +1471,14 @@ static bool partition_xcpus_add(int new_prs, struct cpuset *parent,
if (parent == &top_cpuset)
cpumask_or(subpartitions_cpus, subpartitions_cpus, xcpus);

- isolcpus_updated = (new_prs != parent->partition_root_state);
- if (isolcpus_updated)
+ if (new_prs != parent->partition_root_state) {
partition_xcpus_newstate(parent->partition_root_state, new_prs,
xcpus);
-
+ icpus_mod = (new_prs == PRS_ISOLATED)
+ ? ISOL_CPUS_ADD : ISOL_CPUS_DELETE;
+ }
cpumask_andnot(parent->effective_cpus, parent->effective_cpus, xcpus);
- return isolcpus_updated;
+ return icpus_mod;
}

/*
@@ -1478,14 +1486,14 @@ static bool partition_xcpus_add(int new_prs, struct cpuset *parent,
* @old_prs: old partition_root_state
* @parent: parent cpuset
* @xcpus: exclusive CPUs to be removed
- * Return: true if isolated_cpus modified, false otherwise
+ * Return: isolated_cpus modifier
*
* Remote partition if parent == NULL
*/
-static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
+static int partition_xcpus_del(int old_prs, struct cpuset *parent,
struct cpumask *xcpus)
{
- bool isolcpus_updated;
+ int icpus_mod = ISOL_CPUS_NONE;

WARN_ON_ONCE(old_prs < 0);
lockdep_assert_held(&callback_lock);
@@ -1495,27 +1503,40 @@ static bool partition_xcpus_del(int old_prs, struct cpuset *parent,
if (parent == &top_cpuset)
cpumask_andnot(subpartitions_cpus, subpartitions_cpus, xcpus);

- isolcpus_updated = (old_prs != parent->partition_root_state);
- if (isolcpus_updated)
+ if (old_prs != parent->partition_root_state) {
partition_xcpus_newstate(old_prs, parent->partition_root_state,
xcpus);
-
+ icpus_mod = (old_prs == PRS_ISOLATED)
+ ? ISOL_CPUS_DELETE : ISOL_CPUS_ADD;
+ }
cpumask_and(xcpus, xcpus, cpu_active_mask);
cpumask_or(parent->effective_cpus, parent->effective_cpus, xcpus);
- return isolcpus_updated;
+ return icpus_mod;
}

-static void update_unbound_workqueue_cpumask(bool isolcpus_updated)
+/**
+ * update_isolation_cpumasks - Add or remove CPUs to/from full isolation state
+ * @mask: cpumask of the CPUs to be added or removed
+ * @modifier: enum isolated_cpus_modifiers
+ * Return: 0 if successful, error code otherwise
+ *
+ * Workqueue unbound cpumask update is applied irrespective of isolation_full
+ * state and the whole isolated_cpus is passed. Repeated calls with the same
+ * isolated_cpus will not cause further action other than a wasted mutex
+ * lock/unlock.
+ */
+static int update_isolation_cpumasks(struct cpumask *mask, int modifier)
{
- int ret;
+ int err;

lockdep_assert_cpus_held();

- if (!isolcpus_updated)
- return;
+ if (!modifier)
+ return 0; /* No change in isolated CPUs */

- ret = workqueue_unbound_exclude_cpumask(isolated_cpus);
- WARN_ON_ONCE(ret < 0);
+ err = workqueue_unbound_exclude_cpumask(isolated_cpus);
+ WARN_ON_ONCE(err);
+ return err;
}

/**
@@ -1577,7 +1598,7 @@ static inline bool is_local_partition(struct cpuset *cs)
static int remote_partition_enable(struct cpuset *cs, int new_prs,
struct tmpmasks *tmp)
{
- bool isolcpus_updated;
+ int icpus_mod;

/*
* The user must have sysadmin privilege.
@@ -1600,7 +1621,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
return 0;

spin_lock_irq(&callback_lock);
- isolcpus_updated = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
+ icpus_mod = partition_xcpus_add(new_prs, NULL, tmp->new_cpus);
list_add(&cs->remote_sibling, &remote_children);
if (cs->use_parent_ecpus) {
struct cpuset *parent = parent_cs(cs);
@@ -1609,7 +1630,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
parent->child_ecpus_count--;
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_isolation_cpumasks(tmp->new_cpus, icpus_mod);

/*
* Proprogate changes in top_cpuset's effective_cpus down the hierarchy.
@@ -1630,7 +1651,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
*/
static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
{
- bool isolcpus_updated;
+ int icpus_mod;

compute_effective_exclusive_cpumask(cs, tmp->new_cpus);
WARN_ON_ONCE(!is_remote_partition(cs));
@@ -1638,14 +1659,14 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)

spin_lock_irq(&callback_lock);
list_del_init(&cs->remote_sibling);
- isolcpus_updated = partition_xcpus_del(cs->partition_root_state,
- NULL, tmp->new_cpus);
+ icpus_mod = partition_xcpus_del(cs->partition_root_state, NULL,
+ tmp->new_cpus);
cs->partition_root_state = -cs->partition_root_state;
if (!cs->prs_err)
cs->prs_err = PERR_INVCPUS;
reset_partition_data(cs);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_isolation_cpumasks(tmp->new_cpus, icpus_mod);

/*
* Proprogate changes in top_cpuset's effective_cpus down the hierarchy.
@@ -1668,7 +1689,8 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,
{
bool adding, deleting;
int prs = cs->partition_root_state;
- int isolcpus_updated = 0;
+ int icpus_add_mod = ISOL_CPUS_NONE;
+ int icpus_del_mod = ISOL_CPUS_NONE;

if (WARN_ON_ONCE(!is_remote_partition(cs)))
return;
@@ -1693,12 +1715,12 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,

spin_lock_irq(&callback_lock);
if (adding)
- isolcpus_updated += partition_xcpus_add(prs, NULL, tmp->addmask);
+ icpus_add_mod = partition_xcpus_add(prs, NULL, tmp->addmask);
if (deleting)
- isolcpus_updated += partition_xcpus_del(prs, NULL, tmp->delmask);
+ icpus_del_mod = partition_xcpus_del(prs, NULL, tmp->delmask);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
-
+ update_isolation_cpumasks(tmp->addmask, icpus_add_mod);
+ update_isolation_cpumasks(tmp->delmask, icpus_del_mod);
/*
* Proprogate changes in top_cpuset's effective_cpus down the hierarchy.
*/
@@ -1819,7 +1841,8 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
int part_error = PERR_NONE; /* Partition error? */
int subparts_delta = 0;
struct cpumask *xcpus; /* cs effective_xcpus */
- int isolcpus_updated = 0;
+ int icpus_add_mod = ISOL_CPUS_NONE;
+ int icpus_del_mod = ISOL_CPUS_NONE;
bool nocpu;

lockdep_assert_held(&cpuset_mutex);
@@ -2052,22 +2075,23 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
cs->nr_subparts = 0;
}
/*
- * Adding to parent's effective_cpus means deletion CPUs from cs
+ * Adding to parent's effective_cpus means deleting CPUs from cs
* and vice versa.
*/
if (adding)
- isolcpus_updated += partition_xcpus_del(old_prs, parent,
- tmp->addmask);
+ icpus_add_mod = partition_xcpus_del(old_prs, parent,
+ tmp->addmask);
if (deleting)
- isolcpus_updated += partition_xcpus_add(new_prs, parent,
- tmp->delmask);
+ icpus_del_mod = partition_xcpus_add(new_prs, parent,
+ tmp->delmask);

if (is_partition_valid(parent)) {
parent->nr_subparts += subparts_delta;
WARN_ON_ONCE(parent->nr_subparts < 0);
}
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(isolcpus_updated);
+ update_isolation_cpumasks(tmp->addmask, icpus_add_mod);
+ update_isolation_cpumasks(tmp->delmask, icpus_del_mod);

if ((old_prs != new_prs) && (cmd == partcmd_update))
update_partition_exclusive(cs, new_prs);
@@ -3044,7 +3068,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
int err = PERR_NONE, old_prs = cs->partition_root_state;
struct cpuset *parent = parent_cs(cs);
struct tmpmasks tmpmask;
- bool new_xcpus_state = false;
+ int icpus_mod = ISOL_CPUS_NONE;

if (old_prs == new_prs)
return 0;
@@ -3096,7 +3120,8 @@ static int update_prstate(struct cpuset *cs, int new_prs)
/*
* A change in load balance state only, no change in cpumasks.
*/
- new_xcpus_state = true;
+ icpus_mod = (new_prs == PRS_ISOLATED)
+ ? ISOL_CPUS_ADD : ISOL_CPUS_DELETE;
} else {
/*
* Switching back to member is always allowed even if it
@@ -3128,10 +3153,10 @@ static int update_prstate(struct cpuset *cs, int new_prs)
WRITE_ONCE(cs->prs_err, err);
if (!is_partition_valid(cs))
reset_partition_data(cs);
- else if (new_xcpus_state)
+ else if (icpus_mod)
partition_xcpus_newstate(old_prs, new_prs, cs->effective_xcpus);
spin_unlock_irq(&callback_lock);
- update_unbound_workqueue_cpumask(new_xcpus_state);
+ update_isolation_cpumasks(cs->effective_xcpus, icpus_mod);

/* Force update if switching back to member */
update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);
--
2.39.3


2024-01-17 17:10:54

by Tejun Heo

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

Hello,

On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> The first 2 patches are adopted from Federic with minor twists to fix
> merge conflicts and compilation issue. The rests are for implementing
> the new cpuset.cpus.isolation_full interface which is essentially a flag
> to globally enable or disable full CPU isolation on isolated partitions.

I think the interface is a bit premature. The cpuset partition feature is
already pretty restrictive and makes it really clear that it's to isolate
the CPUs. I think it'd be better to just enable all the isolation features
by default. If there are valid use cases which can't be served without
disabling some isolation features, we can worry about adding the interface
at that point.

Thanks.

--
tejun

2024-01-17 17:15:27

by Waiman Long

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions


On 1/17/24 12:07, Tejun Heo wrote:
> Hello,
>
> On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
>> The first 2 patches are adopted from Federic with minor twists to fix
>> merge conflicts and compilation issue. The rests are for implementing
>> the new cpuset.cpus.isolation_full interface which is essentially a flag
>> to globally enable or disable full CPU isolation on isolated partitions.
> I think the interface is a bit premature. The cpuset partition feature is
> already pretty restrictive and makes it really clear that it's to isolate
> the CPUs. I think it'd be better to just enable all the isolation features
> by default. If there are valid use cases which can't be served without
> disabling some isolation features, we can worry about adding the interface
> at that point.

My current thought is to make isolated partitions act like
isolcpus=domain by default, with additional CPU isolation capabilities
being optional and turned on using isolation_full. However, I am fine
with making all of these on by default if that is the consensus.
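
A sketch of that intended two-level behavior (the paths and the cgroup
name are illustrative assumptions):

    # An isolated partition alone would behave like isolcpus=domain
    # for its CPUs.
    echo isolated > /sys/fs/cgroup/rt/cpuset.cpus.partition

    # Opting into isolation_full would additionally enable the other
    # dynamic CPU isolation capabilities.
    echo 1 > /sys/fs/cgroup/cpuset.cpus.isolation_full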

Cheers,
Longman


2024-01-19 10:25:53

by Paul E. McKenney

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> This patch series is based on the RFC patch from Frederic [1]. Instead
> of offering RCU_NOCB as a separate option, it is now lumped into a
> root-only cpuset.cpus.isolation_full flag that will enable all the
> additional CPU isolation capabilities available for isolated partitions
> if set. RCU_NOCB is just the first one to this party. Additional dynamic
> CPU isolation capabilities will be added in the future.
>
> The first 2 patches are adopted from Federic with minor twists to fix
> merge conflicts and compilation issue. The rests are for implementing
> the new cpuset.cpus.isolation_full interface which is essentially a flag
> to globally enable or disable full CPU isolation on isolated partitions.
> On read, it also shows the CPU isolation capabilities that are currently
> enabled. RCU_NOCB requires that the rcu_nocbs option be present in
> the kernel boot command line. Without that, the rcu_nocb functionality
> cannot be enabled even if the isolation_full flag is set. So we allow
> users to check the isolation_full file to verify that if the desired
> CPU isolation capability is enabled or not.
>
> Only sanity checking has been done so far. More testing, especially on
> the RCU side, will be needed.

There has been some discussion of simplifying the (de-)offloading code
to handle only offline CPUs, along with some discussion of eliminating
the (de-)offloading capability altogether.

We clearly should converge on the capability to be provided before
exposing this to userspace. ;-)

Thanx, Paul

> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Frederic Weisbecker (2):
> rcu/nocb: Pass a cpumask instead of a single CPU to offload/deoffload
> rcu/nocb: Prepare to change nocb cpumask from CPU-hotplug protected
> cpuset caller
>
> Waiman Long (6):
> rcu/no_cb: Add rcu_nocb_enabled() to expose the rcu_nocb state
> cgroup/cpuset: Better tracking of addition/deletion of isolated CPUs
> cgroup/cpuset: Add cpuset.cpus.isolation_full
> cgroup/cpuset: Enable dynamic rcu_nocb mode on isolated CPUs
> cgroup/cpuset: Document the new cpuset.cpus.isolation_full control
> file
> cgroup/cpuset: Update test_cpuset_prs.sh to handle
> cpuset.cpus.isolation_full
>
> Documentation/admin-guide/cgroup-v2.rst | 24 ++
> include/linux/rcupdate.h | 15 +-
> kernel/cgroup/cpuset.c | 237 ++++++++++++++----
> kernel/rcu/rcutorture.c | 6 +-
> kernel/rcu/tree_nocb.h | 118 ++++++---
> .../selftests/cgroup/test_cpuset_prs.sh | 23 +-
> 6 files changed, 337 insertions(+), 86 deletions(-)
>
> --
> 2.39.3
>

2024-01-22 15:53:34

by Michal Koutný

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

Hello Waiman.

On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long <[email protected]> wrote:
> This patch series is based on the RFC patch from Frederic [1]. Instead
> of offering RCU_NOCB as a separate option, it is now lumped into a
> root-only cpuset.cpus.isolation_full flag that will enable all the
> additional CPU isolation capabilities available for isolated partitions
> if set. RCU_NOCB is just the first one to this party. Additional dynamic
> CPU isolation capabilities will be added in the future.

IIUC this is similar to what I suggested back in the day and you didn't
consider it [1]. Do I read this right that you've changed your mind?

(It's fine if you did, I'm only asking to follow the heading of cpuset
controller.)

Thanks,
Michal

[1] https://lore.kernel.org/r/[email protected]/



2024-01-23 05:51:05

by Waiman Long

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions


On 1/22/24 10:07, Michal Koutný wrote:
> Hello Waiman.
>
> On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long <[email protected]> wrote:
>> This patch series is based on the RFC patch from Frederic [1]. Instead
>> of offering RCU_NOCB as a separate option, it is now lumped into a
>> root-only cpuset.cpus.isolation_full flag that will enable all the
>> additional CPU isolation capabilities available for isolated partitions
>> if set. RCU_NOCB is just the first one to this party. Additional dynamic
>> CPU isolation capabilities will be added in the future.
> IIUC this is similar to what I suggested back in the day and you didn't
> consider it [1]. Do I read this right that you've changed your mind?

I didn't say that we were not going to do this at the time. It's just
that more evaluation needed to be done before we went ahead with it.
I was also looking to see if there were use cases where such
capabilities were needed. Now I am aware that such use cases do exist,
and we should start looking into it.

>
> (It's fine if you did, I'm only asking to follow the heading of cpuset
> controller.)

OK, the title of the cover-letter may be too specific. I will make it
more general in the next version.

Cheers,
Longman


2024-02-06 12:56:38

by Frederic Weisbecker

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

Le Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long a écrit :
>
> On 1/17/24 12:07, Tejun Heo wrote:
> > Hello,
> >
> > On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> > > The first 2 patches are adopted from Federic with minor twists to fix
> > > merge conflicts and compilation issue. The rests are for implementing
> > > the new cpuset.cpus.isolation_full interface which is essentially a flag
> > > to globally enable or disable full CPU isolation on isolated partitions.
> > I think the interface is a bit premature. The cpuset partition feature is
> > already pretty restrictive and makes it really clear that it's to isolate
> > the CPUs. I think it'd be better to just enable all the isolation features
> > by default. If there are valid use cases which can't be served without
> > disabling some isolation features, we can worry about adding the interface
> > at that point.
>
> My current thought is to make isolated partitions act like isolcpus=domain,
> additional CPU isolation capabilities are optional and can be turned on
> using isolation_full. However, I am fine with making all these turned on by
> default if it is the consensus.

Right, it was the consensus last time I tried, along with the fact that mutating
this isolation_full set has to be done on offline CPUs to simplify the whole
picture.

So lemme try to summarize what needs to be done:

1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
now. And if it ever proves needed, provide a way later for more fine-grained
tuning.

2) This file must only apply to offline CPUs because it avoids migrations and
stuff.

3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that much
changes.

4) HK_TYPE_TIMER:
* Wrt. timers in general, not much needs to be done, the CPUs are
offline. But:
* arch/x86/kvm/x86.c does something weird
* drivers/char/random.c might need some care
* watchdog needs to be (de-)activated

5) HK_TYPE_DOMAIN:
* This one I fear is not mutable, this is isolcpus...

6) HK_TYPE_MANAGED_IRQ:
* I prefer not to think about it :-)

7) HK_TYPE_TICK:
* Maybe some tiny ticks internals to revisit, I'll check that.
* There is a remote tick to take into consideration, but again the
CPUs are offline so it shouldn't be too complicated.

8) HK_TYPE_WQ:
* Fortunately we already have all the mutable interface in place.
But we must make it live nicely with the sysfs workqueue affinity
files.

9) HK_FLAG_SCHED:
* Oops, this one is ignored by nohz_full/isolcpus, isn't it?
Should be removed?

10) HK_TYPE_RCU:
* That's point 3) and also some kthreads to affine, which leads us
to the following in HK_TYPE_KTHREAD:

11) HK_FLAG_KTHREAD:
* I'm guessing it's fine as long as isolation_full is also an
isolated partition. Then unbound kthreads shouldn't run there.

12) HK_TYPE_MISC:
* Should be fine as ILB isn't running on offline CPUs.

Thanks.

2024-02-06 19:30:35

by Marcelo Tosatti

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

On Tue, Feb 06, 2024 at 01:56:23PM +0100, Frederic Weisbecker wrote:
> Le Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long a écrit :
> >
> > On 1/17/24 12:07, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> > > > The first 2 patches are adopted from Federic with minor twists to fix
> > > > merge conflicts and compilation issue. The rests are for implementing
> > > > the new cpuset.cpus.isolation_full interface which is essentially a flag
> > > > to globally enable or disable full CPU isolation on isolated partitions.
> > > I think the interface is a bit premature. The cpuset partition feature is
> > > already pretty restrictive and makes it really clear that it's to isolate
> > > the CPUs. I think it'd be better to just enable all the isolation features
> > > by default. If there are valid use cases which can't be served without
> > > disabling some isolation features, we can worry about adding the interface
> > > at that point.
> >
> > My current thought is to make isolated partitions act like isolcpus=domain,
> > additional CPU isolation capabilities are optional and can be turned on
> > using isolation_full. However, I am fine with making all these turned on by
> > default if it is the consensus.
>
> Right it was the consensus last time I tried. Along with the fact that mutating
> this isolation_full set has to be done on offline CPUs to simplify the whole
> picture.
>
> So lemme try to summarize what needs to be done:
>
> 1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
> now. And if it ever proves needed, provide a way later for more finegrained
> tuning.
>
> 2) This file must only apply to offline CPUs because it avoids migrations and
> stuff.
>
> 3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that much
> changes.
>
> 4) HK_TYPE_TIMER:
> * Wrt. timers in general, not much needs to be done, the CPUs are
> offline. But:
> * arch/x86/kvm/x86.c does something weird
> * drivers/char/random.c might need some care
> * watchdog needs to be (de-)activated
>
> 5) HK_TYPE_DOMAIN:
> * This one I fear is not mutable, this is isolcpus...

Except for HK_TYPE_DOMAIN, I have never seen anyone use any of these
flags.

>
> 6) HK_TYPE_MANAGED_IRQ:
> * I prefer not to think about it :-)
>
> 7) HK_TYPE_TICK:
> * Maybe some tiny ticks internals to revisit, I'll check that.
> * There is a remote tick to take into consideration, but again the
> CPUs are offline so it shouldn't be too complicated.
>
> 8) HK_TYPE_WQ:
> * Fortunately we already have all the mutable interface in place.
> But we must make it live nicely with the sysfs workqueue affinity
> files.
>
> 9) HK_FLAG_SCHED:
> * Oops, this one is ignored by nohz_full/isolcpus, isn't it?
> Should be removed?
>
> 10) HK_TYPE_RCU:
> * That's point 3) and also some kthreads to affine, which leads us
> to the following in HK_TYPE_KTHREAD:
>
> 11) HK_FLAG_KTHREAD:
> * I'm guessing it's fine as long as isolation_full is also an
> isolated partition. Then unbound kthreads shouldn't run there.
>
> 12) HK_TYPE_MISC:
> * Should be fine as ILB isn't running on offline CPUs.
>
> Thanks.
>
>


2024-02-07 14:48:09

by Frederic Weisbecker

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

Le Tue, Feb 06, 2024 at 04:15:18PM -0300, Marcelo Tosatti a écrit :
> On Tue, Feb 06, 2024 at 01:56:23PM +0100, Frederic Weisbecker wrote:
> > Le Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long a écrit :
> > >
> > > On 1/17/24 12:07, Tejun Heo wrote:
> > > > Hello,
> > > >
> > > > On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> > > > > The first 2 patches are adopted from Federic with minor twists to fix
> > > > > merge conflicts and compilation issue. The rests are for implementing
> > > > > the new cpuset.cpus.isolation_full interface which is essentially a flag
> > > > > to globally enable or disable full CPU isolation on isolated partitions.
> > > > I think the interface is a bit premature. The cpuset partition feature is
> > > > already pretty restrictive and makes it really clear that it's to isolate
> > > > the CPUs. I think it'd be better to just enable all the isolation features
> > > > by default. If there are valid use cases which can't be served without
> > > > disabling some isolation features, we can worry about adding the interface
> > > > at that point.
> > >
> > > My current thought is to make isolated partitions act like isolcpus=domain,
> > > additional CPU isolation capabilities are optional and can be turned on
> > > using isolation_full. However, I am fine with making all these turned on by
> > > default if it is the consensus.
> >
> > Right it was the consensus last time I tried. Along with the fact that mutating
> > this isolation_full set has to be done on offline CPUs to simplify the whole
> > picture.
> >
> > So lemme try to summarize what needs to be done:
> >
> > 1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
> > now. And if it ever proves needed, provide a way later for more finegrained
> > tuning.
> >
> > 2) This file must only apply to offline CPUs because it avoids migrations and
> > stuff.
> >
> > 3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that much
> > changes.
> >
> > 4) HK_TYPE_TIMER:
> > * Wrt. timers in general, not much needs to be done, the CPUs are
> > offline. But:
> > * arch/x86/kvm/x86.c does something weird
> > * drivers/char/random.c might need some care
> > * watchdog needs to be (de-)activated
> >
> > 5) HK_TYPE_DOMAIN:
> > * This one I fear is not mutable, this is isolcpus...
>
> Except for HK_TYPE_DOMAIN, i have never seen anyone use any of this
> flags.

HK_TYPE_DOMAIN is used by isolcpus=domain,....
HK_TYPE_MANAGED_IRQ is used by isolcpus=managed_irq,...

All the others (except HK_TYPE_SCHED) are used by nohz_full=
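
For reference, the corresponding kernel boot command line forms look
like this (the CPU list 2-5 is just an example):

    isolcpus=domain,2-5          # sets HK_TYPE_DOMAIN
    isolcpus=managed_irq,2-5     # sets HK_TYPE_MANAGED_IRQ
    nohz_full=2-5                # sets the remaining flags (except HK_TYPE_SCHED)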

Thanks.

>
> >
> > 6) HK_TYPE_MANAGED_IRQ:
> > * I prefer not to think about it :-)
> >
> > 7) HK_TYPE_TICK:
> > * Maybe some tiny ticks internals to revisit, I'll check that.
> > * There is a remote tick to take into consideration, but again the
> > CPUs are offline so it shouldn't be too complicated.
> >
> > 8) HK_TYPE_WQ:
> > * Fortunately we already have all the mutable interface in place.
> > But we must make it live nicely with the sysfs workqueue affinity
> > files.
> >
> > 9) HK_FLAG_SCHED:
> > * Oops, this one is ignored by nohz_full/isolcpus, isn't it?
> > Should be removed?
> >
> > 10) HK_TYPE_RCU:
> > * That's point 3) and also some kthreads to affine, which leads us
> > to the following in HK_TYPE_KTHREAD:
> >
> > 11) HK_FLAG_KTHREAD:
> > * I'm guessing it's fine as long as isolation_full is also an
> > isolated partition. Then unbound kthreads shouldn't run there.
> >
> > 12) HK_TYPE_MISC:
> > * Should be fine as ILB isn't running on offline CPUs.
> >
> > Thanks.
> >
> >
>

2024-02-09 16:14:14

by Marcelo Tosatti

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

On Wed, Feb 07, 2024 at 03:47:46PM +0100, Frederic Weisbecker wrote:
> Le Tue, Feb 06, 2024 at 04:15:18PM -0300, Marcelo Tosatti a écrit :
> > On Tue, Feb 06, 2024 at 01:56:23PM +0100, Frederic Weisbecker wrote:
> > > Le Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long a écrit :
> > > >
> > > > On 1/17/24 12:07, Tejun Heo wrote:
> > > > > Hello,
> > > > >
> > > > > On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
> > > > > > The first 2 patches are adopted from Federic with minor twists to fix
> > > > > > merge conflicts and compilation issue. The rests are for implementing
> > > > > > the new cpuset.cpus.isolation_full interface which is essentially a flag
> > > > > > to globally enable or disable full CPU isolation on isolated partitions.
> > > > > I think the interface is a bit premature. The cpuset partition feature is
> > > > > already pretty restrictive and makes it really clear that it's to isolate
> > > > > the CPUs. I think it'd be better to just enable all the isolation features
> > > > > by default. If there are valid use cases which can't be served without
> > > > > disabling some isolation features, we can worry about adding the interface
> > > > > at that point.
> > > >
> > > > My current thought is to make isolated partitions act like isolcpus=domain,
> > > > additional CPU isolation capabilities are optional and can be turned on
> > > > using isolation_full. However, I am fine with making all these turned on by
> > > > default if it is the consensus.
> > >
> > > Right it was the consensus last time I tried. Along with the fact that mutating
> > > this isolation_full set has to be done on offline CPUs to simplify the whole
> > > picture.
> > >
> > > So lemme try to summarize what needs to be done:
> > >
> > > 1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
> > > now. And if it ever proves needed, provide a way later for more finegrained
> > > tuning.
> > >
> > > 2) This file must only apply to offline CPUs because it avoids migrations and
> > > stuff.
> > >
> > > 3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that much
> > > changes.
> > >
> > > 4) HK_TYPE_TIMER:
> > > * Wrt. timers in general, not much needs to be done, the CPUs are
> > > offline. But:
> > > * arch/x86/kvm/x86.c does something weird
> > > * drivers/char/random.c might need some care
> > > * watchdog needs to be (de-)activated
> > >
> > > 5) HK_TYPE_DOMAIN:
> > > * This one I fear is not mutable, this is isolcpus...
> >
> > Except for HK_TYPE_DOMAIN, i have never seen anyone use any of this
> > flags.
>
> HK_TYPE_DOMAIN is used by isolcpus=domain,....

> HK_TYPE_MANAGED_IRQ is used by isolcpus=managed_irq,...
>
> All the others (except HK_TYPE_SCHED) are used by nohz_full=

I mean I've never seen any use of the individual flags being set
separately.

You either want full isolation (nohz_full and all the flags together,
except for HK_TYPE_DOMAIN, which is sometimes enabled/disabled), or not.

So why not group them all together?

Do you know of any separate uses of these flags (except for
HK_TYPE_DOMAIN)?

> Thanks.
>
> >
> > >
> > > 6) HK_TYPE_MANAGED_IRQ:
> > > * I prefer not to think about it :-)
> > >
> > > 7) HK_TYPE_TICK:
> > > * Maybe some tiny ticks internals to revisit, I'll check that.
> > > * There is a remote tick to take into consideration, but again the
> > > CPUs are offline so it shouldn't be too complicated.
> > >
> > > 8) HK_TYPE_WQ:
> > > * Fortunately we already have all the mutable interface in place.
> > > But we must make it live nicely with the sysfs workqueue affinity
> > > files.
> > >
> > > 9) HK_FLAG_SCHED:
> > > * Oops, this one is ignored by nohz_full/isolcpus, isn't it?
> > > Should be removed?
> > >
> > > 10) HK_TYPE_RCU:
> > > * That's point 3) and also some kthreads to affine, which leads us
> > > to the following in HK_TYPE_KTHREAD:
> > >
> > > 11) HK_FLAG_KTHREAD:
> > > * I'm guessing it's fine as long as isolation_full is also an
> > > isolated partition. Then unbound kthreads shouldn't run there.
> > >
> > > 12) HK_TYPE_MISC:
> > > * Should be fine as ILB isn't running on offline CPUs.
> > >
> > > Thanks.
> > >
> > >
> >
>
>


2024-02-10 04:22:15

by Waiman Long

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

On 2/6/24 07:56, Frederic Weisbecker wrote:
> Le Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long a écrit :
>> On 1/17/24 12:07, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
>>>> The first 2 patches are adopted from Federic with minor twists to fix
>>>> merge conflicts and compilation issue. The rests are for implementing
>>>> the new cpuset.cpus.isolation_full interface which is essentially a flag
>>>> to globally enable or disable full CPU isolation on isolated partitions.
>>> I think the interface is a bit premature. The cpuset partition feature is
>>> already pretty restrictive and makes it really clear that it's to isolate
>>> the CPUs. I think it'd be better to just enable all the isolation features
>>> by default. If there are valid use cases which can't be served without
>>> disabling some isolation features, we can worry about adding the interface
>>> at that point.
>> My current thought is to make isolated partitions act like isolcpus=domain,
>> additional CPU isolation capabilities are optional and can be turned on
>> using isolation_full. However, I am fine with making all these turned on by
>> default if it is the consensus.
> Right it was the consensus last time I tried. Along with the fact that mutating
> this isolation_full set has to be done on offline CPUs to simplify the whole
> picture.
>
> So lemme try to summarize what needs to be done:
>
> 1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
> now. And if it ever proves needed, provide a way later for more finegrained
> tuning.
That is more or less the current plan. As detailed below, HK_TYPE_DOMAIN
& HK_TYPE_WQ isolation are included in the isolated partitions by
default. I am also thinking about including other relatively cheap
isolation flags by default. The expensive ones will have to be enabled
via isolation_full.
>
> 2) This file must only apply to offline CPUs because it avoids migrations and
> stuff.
Well, the process of moving the CPUs offline first is rather
expensive. I won't mind doing some partial offlining based on the
existing set of teardown and bringup callbacks, but I would try to avoid
fully offlining the CPUs first.
>
> 3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that much
> changes.
>
> 4) HK_TYPE_TIMER:
> * Wrt. timers in general, not much needs to be done, the CPUs are
> offline. But:
> * arch/x86/kvm/x86.c does something weird
> * drivers/char/random.c might need some care
> * watchdog needs to be (de-)activated
>
> 5) HK_TYPE_DOMAIN:
> * This one I fear is not mutable, this is isolcpus...

HK_TYPE_DOMAIN is already available via the current cpuset isolated
partition functionality. What I am currently doing is extending that to
the other HK_TYPE_* flags.


>
> 6) HK_TYPE_MANAGED_IRQ:
> * I prefer not to think about it :-)
>
> 7) HK_TYPE_TICK:
> * Maybe some tiny ticks internals to revisit, I'll check that.
> * There is a remote tick to take into consideration, but again the
> CPUs are offline so it shouldn't be too complicated.
>
> 8) HK_TYPE_WQ:
> * Fortunately we already have all the mutable interface in place.
> But we must make it live nicely with the sysfs workqueue affinity
> files.

HK_TYPE_WQ is basically done, and it is going to work properly with the
workqueue affinity sysfs files. From the workqueue point of view,
HK_TYPE_WQ is currently treated the same as HK_TYPE_DOMAIN.
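
As an illustration, the mutable interface referred to here is the
unbound workqueue cpumask exposed through sysfs, which the cpuset code
further restricts by excluding isolated CPUs (the mask value is an
example for an 8-CPU system):

    # Global cpumask usable by unbound workqueues.
    cat /sys/devices/virtual/workqueue/cpumask

    # Shrink it at run time, e.g. keep unbound work off CPUs 2-3
    # (0xf3 = 0b11110011).
    echo f3 > /sys/devices/virtual/workqueue/cpumask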

>
> 9) HK_FLAG_SCHED:
> * Oops, this one is ignored by nohz_full/isolcpus, isn't it?
> Should be removed?
I don't think HK_FLAG_SCHED is being used at all. So I believe we should
remove it to avoid confusion.
>
> 10) HK_TYPE_RCU:
> * That's point 3) and also some kthreads to affine, which leads us
> to the following in HK_TYPE_KTHREAD:
>
> 11) HK_FLAG_KTHREAD:
> * I'm guessing it's fine as long as isolation_full is also an
> isolated partition. Then unbound kthreads shouldn't run there.

Yes, isolation_full applies only to isolated partitions. It extends the
amount of CPU isolation by enabling all the other available CPU
isolation flags.

Cheers,
Longman


2024-02-11 01:47:19

by Waiman Long

Subject: Re: [RFC PATCH 0/8] cgroup/cpuset: Support RCU_NOCB on isolated partitions

On 1/19/24 05:24, Paul E. McKenney wrote:
> On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
>> This patch series is based on the RFC patch from Frederic [1]. Instead
>> of offering RCU_NOCB as a separate option, it is now lumped into a
>> root-only cpuset.cpus.isolation_full flag that will enable all the
>> additional CPU isolation capabilities available for isolated partitions
>> if set. RCU_NOCB is just the first one to this party. Additional dynamic
>> CPU isolation capabilities will be added in the future.
>>
>> The first 2 patches are adopted from Federic with minor twists to fix
>> merge conflicts and compilation issue. The rests are for implementing
>> the new cpuset.cpus.isolation_full interface which is essentially a flag
>> to globally enable or disable full CPU isolation on isolated partitions.
>> On read, it also shows the CPU isolation capabilities that are currently
>> enabled. RCU_NOCB requires that the rcu_nocbs option be present in
>> the kernel boot command line. Without that, the rcu_nocb functionality
>> cannot be enabled even if the isolation_full flag is set. So we allow
>> users to check the isolation_full file to verify that if the desired
>> CPU isolation capability is enabled or not.
>>
>> Only sanity checking has been done so far. More testing, especially on
>> the RCU side, will be needed.
> There has been some discussion of simplifying the (de-)offloading code
> to handle only offline CPUs. Along with some discussion of eliminating
> the (de-)offloading capability altogehter.
>
> We clearly should converge on the capability to be provided before
> exposing this to userspace. ;-)

Would you mind giving me a pointer to the discussion of simplifying the
de-offloading code to handle only offline CPUs?

Thanks,
Longman