2021-07-01 21:09:03

by Marcelo Tosatti

Subject: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

The logic to disable the vmstat worker thread when entering
nohz_full does not cover all scenarios. For example, it is possible
for the following to happen:

1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
2) app runs mlock, which increases counters for mlock'ed pages.
3) start -RT loop

Since the refresh_cpu_vm_stats call from the nohz_full logic can happen
_before_ the mlock, the vmstat shepherd can restart the vmstat worker
thread on the CPU in question.
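
For illustration, here is a minimal userspace sketch of the kind of
application that hits this (the CPU number, priority and pinning are
illustrative only, not something the series prescribes):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 1 };

	/* Pin to a nohz_full/isolated CPU, e.g. CPU 3. */
	CPU_ZERO(&set);
	CPU_SET(3, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	/* Step 2 above: mlock dirties this CPU's vmstat counters. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE))
		perror("mlockall");

	if (sched_setscheduler(0, SCHED_FIFO, &sp))
		perror("sched_setscheduler");

	/* Step 3: the -RT busy loop, which must not be disturbed by the
	   vmstat worker being requeued on this CPU. */
	for (;;)
		;
}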

To fix this, optionally sync the vmstat counters when returning
to userspace, controllable by a new "vmstat_sync" isolcpus
flag (default off).
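
With the series applied, this would be requested at boot together with
the existing isolcpus flags, along these lines (the other flag and the
CPU list are just an example):

    isolcpus=vmstat_sync,managed_irq,3-7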

See individual patches for details.



2021-07-02 08:02:46

by Christoph Lameter

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Thu, 1 Jul 2021, Marcelo Tosatti wrote:

> The logic to disable vmstat worker thread, when entering
> nohz full, does not cover all scenarios. For example, it is possible
> for the following to happen:
>
> 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> 2) app runs mlock, which increases counters for mlock'ed pages.
> 3) start -RT loop
>
> Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> the mlock, vmstat shepherd can restart vmstat worker thread on
> the CPU in question.

Can we enter nohz_full after the app runs mlock?

> To fix this, optionally sync the vmstat counters when returning
> from userspace, controllable by a new "vmstat_sync" isolcpus
> flags (default off).
>
> See individual patches for details.

Wow... This is going into some performance-sensitive VM counters here and
adds code to their primitives.

Isn't there a simpler solution that does not require this amount of
changes?

2021-07-02 11:56:50

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

Hi Christoph,

On Fri, Jul 02, 2021 at 10:00:11AM +0200, Christoph Lameter wrote:
> On Thu, 1 Jul 2021, Marcelo Tosatti wrote:
>
> > The logic to disable vmstat worker thread, when entering
> > nohz full, does not cover all scenarios. For example, it is possible
> > for the following to happen:
> >
> > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > 2) app runs mlock, which increases counters for mlock'ed pages.
> > 3) start -RT loop
> >
> > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > the mlock, vmstat shepherd can restart vmstat worker thread on
> > the CPU in question.
>
> Can we enter nohz_full after the app runs mlock?
>
> > To fix this, optionally sync the vmstat counters when returning
> > from userspace, controllable by a new "vmstat_sync" isolcpus
> > flags (default off).
> >
> > See individual patches for details.
>
> Wow... This is going into some performance sensitive VM counters here and
> adds code to their primitives.

Yes, but it should all be behind a static key, so the performance
impact when isolcpus=vmstat_sync,CPULIST is not enabled should be
zero (assuming the patchset is correct, that is...).

For the case where isolcpus=vmstat_sync is enabled, the most important
performance aspect is the latency spike which this patchset is dealing
with.
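
Roughly, the idea is something along these lines (just a sketch: the
key, cpumask and hook names are made up here, and the actual patches may
use a different sync helper than quiet_vmstat):

#include <linux/jump_label.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/vmstat.h>

/* Sketch only: vmstat_sync_enabled/vmstat_sync_cpus are invented names. */
DEFINE_STATIC_KEY_FALSE(vmstat_sync_enabled);	/* set iff isolcpus=vmstat_sync,... given */
static struct cpumask vmstat_sync_cpus;		/* CPUs listed in that flag */

/* Hooked into the return-to-userspace path. */
static inline void vmstat_sync_exit_to_user(void)
{
	/* Compiles down to a patched-out branch when the flag is unused. */
	if (!static_branch_unlikely(&vmstat_sync_enabled))
		return;

	if (cpumask_test_cpu(raw_smp_processor_id(), &vmstat_sync_cpus))
		quiet_vmstat();	/* fold this CPU's diffs into the global counters */
}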

> Isnt there a simpler solution that does not require this amount of
> changes?

The one other change I can think of that could solve this problem
would be allowing remote access to the per-CPU vmstat counters
(which would require adding a local_lock), and that seems more complex
than this.

2021-07-02 12:02:15

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

Hi Christoph,

Forgot to reply to this question...

On Fri, Jul 02, 2021 at 10:00:11AM +0200, Christoph Lameter wrote:
> On Thu, 1 Jul 2021, Marcelo Tosatti wrote:
>
> > The logic to disable vmstat worker thread, when entering
> > nohz full, does not cover all scenarios. For example, it is possible
> > for the following to happen:
> >
> > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > 2) app runs mlock, which increases counters for mlock'ed pages.
> > 3) start -RT loop
> >
> > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > the mlock, vmstat shepherd can restart vmstat worker thread on
> > the CPU in question.
>
> Can we enter nohz_full after the app runs mlock?

Hum, I don't think it's a good idea to rely on that route, because
entering or exiting nohz_full depends on a number of variables
outside of one's control (and additional variables might be
added in the future).

So preparing the system to function correctly
when entering nohz_full at any location seems the sane thing to do.

And that would be at return to userspace (since, with the memory
mlocked, after that point there will be no more changes to propagate
to the vmstat counters).

Or am I missing something else you can think of?


2021-07-02 12:31:25

by Frederic Weisbecker

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Thu, Jul 01, 2021 at 06:03:36PM -0300, Marcelo Tosatti wrote:
> The logic to disable vmstat worker thread, when entering
> nohz full, does not cover all scenarios. For example, it is possible
> for the following to happen:
>
> 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> 2) app runs mlock, which increases counters for mlock'ed pages.
> 3) start -RT loop
>
> Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> the mlock, vmstat shepherd can restart vmstat worker thread on
> the CPU in question.
>
> To fix this, optionally sync the vmstat counters when returning
> from userspace, controllable by a new "vmstat_sync" isolcpus
> flags (default off).

Wasn't the plan for such finegrained isolation features to do it at
the per task level using prctl()?

Thanks.

>
> See individual patches for details.
>
>

2021-07-02 15:44:40

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace


Hi Frederic,

On Fri, Jul 02, 2021 at 02:30:32PM +0200, Frederic Weisbecker wrote:
> On Thu, Jul 01, 2021 at 06:03:36PM -0300, Marcelo Tosatti wrote:
> > The logic to disable vmstat worker thread, when entering
> > nohz full, does not cover all scenarios. For example, it is possible
> > for the following to happen:
> >
> > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > 2) app runs mlock, which increases counters for mlock'ed pages.
> > 3) start -RT loop
> >
> > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > the mlock, vmstat shepherd can restart vmstat worker thread on
> > the CPU in question.
> >
> > To fix this, optionally sync the vmstat counters when returning
> > from userspace, controllable by a new "vmstat_sync" isolcpus
> > flags (default off).
>
> Wasn't the plan for such finegrained isolation features to do it at
> the per task level using prctl()?

Yes, but it's orthogonal: when we integrate the finegrained isolation
interface, we will be able to use this code (to sync vmstat counters
on return to userspace) only when userspace informs that it has entered
isolated mode, so you don't incur the performance penalty of frequent
vmstat counter writes when not using isolated apps.

This is what the full task isolation patchset is doing
as well (CC'ing Alex, BTW).

This will require modifying applications (and a new kernel with the
exposed interface).

But there is demand for fixing this now, for currently existing
binary-only applications.

2021-07-05 14:28:32

by Christoph Lameter

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Fri, 2 Jul 2021, Marcelo Tosatti wrote:

> > > The logic to disable vmstat worker thread, when entering
> > > nohz full, does not cover all scenarios. For example, it is possible
> > > for the following to happen:
> > >
> > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > 3) start -RT loop
> > >
> > > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > > the mlock, vmstat shepherd can restart vmstat worker thread on
> > > the CPU in question.
> >
> > Can we enter nohz_full after the app runs mlock?
>
> Hum, i don't think its a good idea to use that route, because
> entering or exiting nohz_full depends on a number of variable
> outside of one's control (and additional variables might be
> added in the future).

Then I do not see any need for this patch, because after a certain time
of inactivity (after the mlock) the system will enter nohz_full again.
If userspace has no direct control over nohz_full and can only wait, then
it just has to do so.

> So preparing the system to function
> while entering nohz_full at any location seems the sane thing to do.
>
> And that would be at return to userspace (since, if mlocked, after
> that point there will be no more changes to propagate to vmstat
> counters).
>
> Or am i missing something else you can think of ?

I assumed that the "enter nohz full" was an action by the user
space app because I saw some earlier patches to introduce such
functionality in the past.

2021-07-05 14:47:50

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Mon, Jul 05, 2021 at 04:26:48PM +0200, Christoph Lameter wrote:
> On Fri, 2 Jul 2021, Marcelo Tosatti wrote:
>
> > > > The logic to disable vmstat worker thread, when entering
> > > > nohz full, does not cover all scenarios. For example, it is possible
> > > > for the following to happen:
> > > >
> > > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > > 3) start -RT loop
> > > >
> > > > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > > > the mlock, vmstat shepherd can restart vmstat worker thread on
> > > > the CPU in question.
> > >
> > > Can we enter nohz_full after the app runs mlock?
> >
> > Hum, i don't think its a good idea to use that route, because
> > entering or exiting nohz_full depends on a number of variable
> > outside of one's control (and additional variables might be
> > added in the future).
>
> Then I do not see any need for this patch. Because after a certain time
> of inactivity (after the mlock) the system will enter nohz_full again.
> If userspace has no direct control over nohz_full and can only wait then
> it just has to do so.

Sorry, I fail to see what you mean.

The problem (well, it's not a bug per se) is that the current
disablement of the vmstat worker thread is not aggressive enough.

From the initial message:

1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
2) app runs mlock, which increases counters for mlock'ed pages.
3) start -RT loop

Note that any activity that triggers stat counter changes will cause
this (it just happens to have been mlock in the test application I was
using; replace it with any other system call that triggers writes
to the per-CPU vmstat counters).

You said:

"Because after a certain time of inactivity (after the mlock) the
system will enter nohz_full again."

Yes, but we can't tolerate any activity from the vmstat worker thread
on this particular CPU.

Do you want the app to wait for an event saying: "vmstat_worker is now
disabled; as long as you don't dirty vmstat counters, vmstat_shepherd
won't wake it up"?

Rather than that, what this patch does is sync the vmstat counters on
return to userspace, so that:

"We synced the per-CPU vmstat counters to the global counters, and
disabled the local CPU's vmstat worker (on return to userspace). As long
as you don't dirty vmstat counters, vmstat_shepherd won't wake it up".
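
(For reference, the shepherd side upstream looks roughly like the
simplified sketch below, paraphrased from mm/vmstat.c: the per-CPU work
is only requeued when need_update() sees pending per-CPU diffs, which is
why syncing on return to userspace and then not dirtying the counters
keeps the worker away.)

static void vmstat_shepherd(struct work_struct *w)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu) {
		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);

		/* Requeue vmstat_update only on CPUs with pending diffs. */
		if (!delayed_work_pending(dw) && need_update(cpu))
			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
	}
	put_online_cpus();

	schedule_delayed_work(&shepherd,
			      round_jiffies_relative(sysctl_stat_interval));
}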

Makes sense?

> > So preparing the system to function
> > while entering nohz_full at any location seems the sane thing to do.
> >
> > And that would be at return to userspace (since, if mlocked, after
> > that point there will be no more changes to propagate to vmstat
> > counters).
> >
> > Or am i missing something else you can think of ?
>
> I assumed that the "enter nohz full" was an action by the user
> space app because I saw some earlier patches to introduce such
> functionality in the past.

No, it meant the kernel entering nohz_full on its own (in the current
Linux codebase, for existing applications).

2021-07-06 13:10:46

by Frederic Weisbecker

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Fri, Jul 02, 2021 at 12:28:16PM -0300, Marcelo Tosatti wrote:
>
> Hi Frederic,
>
> On Fri, Jul 02, 2021 at 02:30:32PM +0200, Frederic Weisbecker wrote:
> > On Thu, Jul 01, 2021 at 06:03:36PM -0300, Marcelo Tosatti wrote:
> > > The logic to disable vmstat worker thread, when entering
> > > nohz full, does not cover all scenarios. For example, it is possible
> > > for the following to happen:
> > >
> > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > 3) start -RT loop
> > >
> > > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > > the mlock, vmstat shepherd can restart vmstat worker thread on
> > > the CPU in question.
> > >
> > > To fix this, optionally sync the vmstat counters when returning
> > > from userspace, controllable by a new "vmstat_sync" isolcpus
> > > flags (default off).
> >
> > Wasn't the plan for such finegrained isolation features to do it at
> > the per task level using prctl()?
>
> Yes, but its orthogonal: when we integrate the finegrained isolation
> interface, will be able to use this code (to sync vmstat counters
> on return to userspace) only when userspace informs that it has entered
> isolated mode, so you don't incur the performance penalty of frequent
> vmstat counter writes when not using isolated apps.
>
> This is what the full task isolation task patchset mode is doing
> as well (CC'ing Alex BTW).

Right, there can be two ways:

* A prctl request to sync vmstat only on exit from that prctl
* A prctl request to sync vmstat on all subsequent exit from
kernel space.

>
> This will require modifying applications (and the new kernel with the
> exposed interface).
>
> But there is demand for fixing this now, for currently existing
> binary only applications.

I would agree if it were a regression but it's not. It's merely
a new feature and we don't want to rush on a broken interface.

And I suspect some other people won't much like a new extension
to isolcpus.

2021-07-06 14:41:04

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Tue, Jul 06, 2021 at 03:09:25PM +0200, Frederic Weisbecker wrote:
> On Fri, Jul 02, 2021 at 12:28:16PM -0300, Marcelo Tosatti wrote:
> >
> > Hi Frederic,
> >
> > On Fri, Jul 02, 2021 at 02:30:32PM +0200, Frederic Weisbecker wrote:
> > > On Thu, Jul 01, 2021 at 06:03:36PM -0300, Marcelo Tosatti wrote:
> > > > The logic to disable vmstat worker thread, when entering
> > > > nohz full, does not cover all scenarios. For example, it is possible
> > > > for the following to happen:
> > > >
> > > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > > 3) start -RT loop
> > > >
> > > > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > > > the mlock, vmstat shepherd can restart vmstat worker thread on
> > > > the CPU in question.
> > > >
> > > > To fix this, optionally sync the vmstat counters when returning
> > > > from userspace, controllable by a new "vmstat_sync" isolcpus
> > > > flags (default off).
> > >
> > > Wasn't the plan for such finegrained isolation features to do it at
> > > the per task level using prctl()?
> >
> > Yes, but its orthogonal: when we integrate the finegrained isolation
> > interface, will be able to use this code (to sync vmstat counters
> > on return to userspace) only when userspace informs that it has entered
> > isolated mode, so you don't incur the performance penalty of frequent
> > vmstat counter writes when not using isolated apps.
> >
> > This is what the full task isolation task patchset mode is doing
> > as well (CC'ing Alex BTW).
>
> Right there can be two ways:


* An isolcpus flag to request sync of vmstat on all exits
to userspace.
> * A prctl request to sync vmstat only on exit from that prctl
> * A prctl request to sync vmstat on all subsequent exit from
> kernel space.

* A prctl to expose "vmstat is out of sync" information
to userspace, so that it can be queried and flushed
(Christoph's suggestion:
https://www.spinics.net/lists/linux-mm/msg243788.html).
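
(As a purely hypothetical sketch of how the prctl-style variants listed
above might look from the application side; the request codes below are
invented for illustration and do not exist in any kernel:)

#include <stdio.h>
#include <sys/prctl.h>

#define PR_VMSTAT_SYNC_ONESHOT	0x56530001	/* hypothetical */
#define PR_VMSTAT_SYNC_ON_EXIT	0x56530002	/* hypothetical */

int main(void)
{
	/* Variant: sync vmstat once, on the return from this prctl. */
	if (prctl(PR_VMSTAT_SYNC_ONESHOT, 0, 0, 0, 0))
		perror("prctl oneshot (hypothetical)");

	/* Variant: sync vmstat on every subsequent return to userspace. */
	if (prctl(PR_VMSTAT_SYNC_ON_EXIT, 1, 0, 0, 0))
		perror("prctl on-exit (hypothetical)");

	return 0;
}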

> > This will require modifying applications (and the new kernel with the
> > exposed interface).
> >
> > But there is demand for fixing this now, for currently existing
> > binary only applications.
>
> I would agree if it were a regression but it's not. It's merely
> a new feature and we don't want to rush on a broken interface.

Well, people out there need it in some form (vmstat sync).
Can we please agree on an acceptable way to allow this?

Why is it a broken interface? It has good qualities, IMO:

- It's well contained (if you don't need it, don't use it).
- It does not require modifying -RT applications.
- It works well for a set of applications (where the overhead of
syncing vmstat is largely irrelevant, but the vmstat_worker
interruption is a problem).

And this patchset integrates another piece of the full task isolation
work.

> And I suspect some other people won't like much a new extension
> to isolcpus.

Why is that so?

---

Regarding the prctl interface: the suggestion to allow
system calls (https://www.spinics.net/lists/linux-mm/msg241750.html)
conflicts with "full task isolation": when entering the kernel,
one might be the target of an interruption (for example a TLB flush).

Thomas wrote on that thread:

"So you say some code can tolerate a few interrupts, then comes Alex and
says 'no disturbance' at all.

The point is that all of this shares the mechanisms to quiesce certain
parts of the kernel so this wants to build common infrastructure and the
prctl(ISOLATION, MODE) mode argument defines the scope of isolation
which the task asks for and the infrastructure decides whether it can be
granted and if so orchestrates the operation and provides a common
infrastructure for instrumentation, violation monitoring etc.

We really need to stop to look at particular workloads and defining
adhoc solutions tailored to their particular itch if we don't want to
end up with an uncoordinated and unmaintainable zoo of interfaces, hooks
and knobs.

Just looking at the problem at hand as an example. NOHZ already issues
quiet_vmstat(), but it does not cancel already scheduled work. Now
Marcelo wants a new mechanism which is supposed to cancel the work and
then Alex want's to prevent it from being rescheduled. If that's not
properly coordinated this goes down the drain very fast."

Not allowing the vmstat sync to happen for unmodified applications seems
undesirable, as Matthew Wilcox mentioned:

From: Matthew Wilcox <[email protected]>

"Subject: Re: [PATCH] mm: introduce sysctl file to flush per-cpu vmstat statistics

On Tue, Nov 17, 2020 at 01:28:06PM -0300, Marcelo Tosatti wrote:
> For isolated applications that busy loop (packet processing with DPDK,
> for example), workqueue functions either stall (if the -rt app priority
> is higher than kworker thread priority) or interrupt the -rt app
> (if the -rt app priority is lower than kworker thread priority.

This seems a bit obscure to expect an application to do. Can we make
this happen automatically when we bind an rt task to a group of CPUs?"

It turns out that this is what would make the most sense in the field.

And even if a prctl interface is added, a mode where the "flushing of
pending activities" happens automatically on return to userspace
would be desired (to allow unmodified applications to benefit
from the decreased interruptions by the OS).

So the isolcpus flag is one way to enable/disable this feature.
A prctl interface would be another.

Would you prefer a more generic "quiesce OS activities on return
from system calls" type of flag?

2021-07-06 14:41:44

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Tue, Jul 06, 2021 at 11:05:50AM -0300, Marcelo Tosatti wrote:
> On Tue, Jul 06, 2021 at 03:09:25PM +0200, Frederic Weisbecker wrote:
> > On Fri, Jul 02, 2021 at 12:28:16PM -0300, Marcelo Tosatti wrote:
> > >
> > > Hi Frederic,
> > >
> > > On Fri, Jul 02, 2021 at 02:30:32PM +0200, Frederic Weisbecker wrote:
> > > > On Thu, Jul 01, 2021 at 06:03:36PM -0300, Marcelo Tosatti wrote:
> > > > > The logic to disable vmstat worker thread, when entering
> > > > > nohz full, does not cover all scenarios. For example, it is possible
> > > > > for the following to happen:
> > > > >
> > > > > 1) enter nohz_full, which calls refresh_cpu_vm_stats, syncing the stats.
> > > > > 2) app runs mlock, which increases counters for mlock'ed pages.
> > > > > 3) start -RT loop
> > > > >
> > > > > Since refresh_cpu_vm_stats from nohz_full logic can happen _before_
> > > > > the mlock, vmstat shepherd can restart vmstat worker thread on
> > > > > the CPU in question.
> > > > >
> > > > > To fix this, optionally sync the vmstat counters when returning
> > > > > from userspace, controllable by a new "vmstat_sync" isolcpus
> > > > > flags (default off).
> > > >
> > > > Wasn't the plan for such finegrained isolation features to do it at
> > > > the per task level using prctl()?
> > >
> > > Yes, but its orthogonal: when we integrate the finegrained isolation
> > > interface, will be able to use this code (to sync vmstat counters
> > > on return to userspace) only when userspace informs that it has entered
> > > isolated mode, so you don't incur the performance penalty of frequent
> > > vmstat counter writes when not using isolated apps.
> > >
> > > This is what the full task isolation task patchset mode is doing
> > > as well (CC'ing Alex BTW).
> >
> > Right there can be two ways:
>
>
> * An isolcpus flag to request sync of vmstat on all exits
> to userspace.
> > * A prctl request to sync vmstat only on exit from that prctl
> > * A prctl request to sync vmstat on all subsequent exit from
> > kernel space.
>
> * A prctl to expose "vmstat is out of sync" information
> to userspace, so that it can be queried and flushed
> (Christoph's suggestion:
> https://www.spinics.net/lists/linux-mm/msg243788.html).
>
> > > This will require modifying applications (and the new kernel with the
> > > exposed interface).
> > >
> > > But there is demand for fixing this now, for currently existing
> > > binary only applications.
> >
> > I would agree if it were a regression but it's not. It's merely
> > a new feature and we don't want to rush on a broken interface.
>
> Well, people out there need it in some form (vmstat sync).
> Can we please agree on an acceptable way to allow this.
>
> Why its a broken interface? It has good qualities IMO:
>
> - Its well contained (if you don't need, don't use it).
> - Does not require modifying -RT applications.
> - Works well for a set of applications (where the overhead of
> syncing vmstat is largely irrelevant, but the vmstat_worker
> interruption is).
>
> And its patchset integrates part another piece of full task isolation.
>
> > And I suspect some other people won't like much a new extension
> > to isolcpus.
>
> Why is that so?

Ah, yes, that would be PeterZ.

IIRC his main point was that it's not runtime-changeable.
We can (partially) fix that, if that is the case.

Peter, was that the only problem you saw with the isolcpus interface?

2021-07-06 14:43:17

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Tue, Jul 06, 2021 at 11:09:20AM -0300, Marcelo Tosatti wrote:
> > > And I suspect some other people won't like much a new extension
> > > to isolcpus.
> >
> > Why is that so?
>
> Ah, yes, that would be PeterZ.
>
> IIRC his main point was that its not runtime changeable.
> We can (partially fix that), if that is the case.
>
> Peter, was that the only problem you saw with isolcpus interface?

Oh, and BTW, the isolcpus=managed_irq flag was recently added due to
another isolation bug.

This problem is in the same category, so I don't see why it should be
treated specially (yes, I agree the isolcpus= interface should be
improved, but that's what is available today).

2021-07-06 16:17:01

by Peter Zijlstra

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Tue, Jul 06, 2021 at 11:09:20AM -0300, Marcelo Tosatti wrote:
> Peter, was that the only problem you saw with isolcpus interface?

It needs to die, it's a piece of crap. Use cpusets already.

2021-07-06 17:04:55

by Marcelo Tosatti

Subject: Re: [patch 0/5] optionally sync per-CPU vmstats counter on return to userspace

On Tue, Jul 06, 2021 at 06:15:24PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 06, 2021 at 11:09:20AM -0300, Marcelo Tosatti wrote:
> > Peter, was that the only problem you saw with isolcpus interface?
>
> It needs to die, it's a piece of crap. Use cpusets already.

OK, I can do that. So how about, in addition to this patch (which, again,
is needed for current systems, so we will have to keep extending it
for the current kernels that patches are backported to, as done with
managed_irq... note that most of the code being integrated will be
reused, just with a different path enabling it), we build on what was
discussed before:

https://lkml.org/lkml/2020/9/9/1120

Do you have any other comments
(on the "new file per isolation feature" structure)?

We would probably want to split the flags per-CPU as well.