2023-06-02 19:15:45

by Marcelo Tosatti

Subject: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

The interruption caused by queueing work on nohz_full CPUs
is undesirable for certain applications.

Fix by not refreshing per-CPU stats of nohz_full CPUs.

Signed-off-by: Marcelo Tosatti <[email protected]>

---
v2: opencode schedule_on_each_cpu (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
void *buffer, size_t *lenp, loff_t *ppos)
{
long val;
- int err;
int i;
+ int cpu;
+ struct work_struct __percpu *works;
+
+ works = alloc_percpu(struct work_struct);
+ if (!works)
+ return -ENOMEM;

/*
* The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
* transiently negative values, report an error here if any of
* the stats is negative, so we know to go looking for imbalance.
*/
- err = schedule_on_each_cpu(refresh_vm_stats);
- if (err)
- return err;
+ cpus_read_lock();
+ for_each_online_cpu(cpu) {
+ struct work_struct *work;
+
+ if (cpu_is_isolated(cpu))
+ continue;
+ work = per_cpu_ptr(works, cpu);
+ INIT_WORK(work, refresh_vm_stats);
+ schedule_work_on(cpu, work);
+ }
+
+ for_each_online_cpu(cpu) {
+ if (cpu_is_isolated(cpu))
+ continue;
+ flush_work(per_cpu_ptr(works, cpu));
+ }
+ cpus_read_unlock();
+ free_percpu(works);
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
/*
* Skip checking stats known to go negative occasionally.
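
For context, the path this patch changes runs whenever userspace reads
or writes /proc/sys/vm/stat_refresh. A minimal userspace trigger might
look like this (a sketch using only standard syscalls; refresh_vmstat
is a hypothetical helper name, and the write requires root):

#include <fcntl.h>
#include <unistd.h>

static int refresh_vmstat(void)
{
	int fd = open("/proc/sys/vm/stat_refresh", O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, "1", 1);	/* any write forces the refresh */
	close(fd);
	return n == 1 ? 0 : -1;
}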

2023-06-05 08:18:02

by Michal Hocko

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> The interruption caused by queueing work on nohz_full CPUs
> is undesirable for certain applications.

This is not a proper changelog. I am not going to write a changelog for
you this time. Please explain why this is really needed and why this
approach is desired. E.g. why don't you prevent userspace from
refreshing stats if interference is not desirable. Also would it make
some sense to reduce flushing to cpumask of the calling process?
(certainly a daring thought but have you even considered it?)

> Fix by not refreshing per-CPU stats of nohz_full CPUs.
>
> Signed-off-by: Marcelo Tosatti <[email protected]>
>
> ---
> v2: opencode schedule_on_each_cpu (Michal Hocko)
>
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
> void *buffer, size_t *lenp, loff_t *ppos)
> {
> long val;
> - int err;
> int i;
> + int cpu;
> + struct work_struct __percpu *works;
> +
> + works = alloc_percpu(struct work_struct);
> + if (!works)
> + return -ENOMEM;
>
> /*
> * The regular update, every sysctl_stat_interval, may come later
> @@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
> * transiently negative values, report an error here if any of
> * the stats is negative, so we know to go looking for imbalance.
> */
> - err = schedule_on_each_cpu(refresh_vm_stats);
> - if (err)
> - return err;
> + cpus_read_lock();
> + for_each_online_cpu(cpu) {
> + struct work_struct *work;
> +
> + if (cpu_is_isolated(cpu))
> + continue;
> + work = per_cpu_ptr(works, cpu);
> + INIT_WORK(work, refresh_vm_stats);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_online_cpu(cpu) {
> + if (cpu_is_isolated(cpu))
> + continue;
> + flush_work(per_cpu_ptr(works, cpu));
> + }
> + cpus_read_unlock();
> + free_percpu(works);
> for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
> /*
> * Skip checking stats known to go negative occasionally.
>

--
Michal Hocko
SUSE Labs

2023-06-05 16:02:55

by Marcelo Tosatti

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > The interruption caused by queueing work on nohz_full CPUs
> > is undesirable for certain applications.
>
> This is not a proper changelog. I am not going to write a changelog for
> you this time. Please explain why this is really needed and why this
> approach is desired.
> E.g. why don't you prevent userspace from
> refreshing stats if interference is not desirable.

Michal,

Can you please check if the following looks better, as
a changelog? thanks

---

schedule_work_on API uses the workqueue mechanism to
queue a work item on a queue. A kernel thread, which
runs on the target CPU, executes those work items.

Therefore, when using the schedule_work_on API,
it is necessary for the kworker kernel thread to
be scheduled in, for the work function to be executed.
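
A minimal sketch of that pattern (example_fn and example_run_on are
illustrative names, not taken from the patch):

#include <linux/workqueue.h>

static struct work_struct example_work;

static void example_fn(struct work_struct *work)
{
	/* runs in kworker context on the chosen CPU */
}

static void example_run_on(int cpu)
{
	INIT_WORK(&example_work, example_fn);
	schedule_work_on(cpu, &example_work);	/* queue on that CPU */
	flush_work(&example_work);	/* sleep until the kworker ran it */
}

If the chosen CPU is monopolized by a busy-looping higher priority
task, the flush_work() above is where the caller waits.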

Time sensitive applications such as SoftPLCs
(https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
have their response times affected by such interruptions.

The /proc/sys/vm/stat_refresh file was originally introduced by

commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
Author: Hugh Dickins <[email protected]>
Date: Thu May 19 17:12:50 2016 -0700

mm: /proc/sys/vm/stat_refresh to force vmstat update

Provide /proc/sys/vm/stat_refresh to force an immediate update of
per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
before checking counts when testing. Originally added to work around a
bug which left counts stranded indefinitely on a cpu going idle (an
inaccuracy magnified when small below-batch numbers represent "huge"
amounts of memory), but I believe that bug is now fixed: nonetheless,
this is still a useful knob.

Other than the potential interruption to a time sensitive application,
if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
system hangs can occur:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688

To avoid the problems above, do not schedule the work to synchronize
per-CPU mm counters on isolated CPUs. Given the possibility for
breaking existing userspace applications, avoid changing
behaviour of access to /proc/sys/vm/stat_refresh, such as
returning errors to userspace.

---

> Also would it make some sense to reduce flushing to cpumask
> of the calling process? (certainly a daring thought but have
> you even considered it?)

Fail to see the point here?



2023-06-05 16:35:54

by Michal Hocko

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > The interruption caused by queueing work on nohz_full CPUs
> > > is undesirable for certain applications.
> >
> > This is not a proper changelog. I am not going to write a changelog for
> > you this time. Please explain why this is really needed and why this
> > approach is desired.
> > E.g. why don't you prevent userspace from
> > refreshing stats if interference is not desirable.
>
> Michal,
>
> Can you please check if the following looks better, as
> a changelog? thanks
>
> ---
>
> schedule_work_on API uses the workqueue mechanism to
> queue a work item on a queue. A kernel thread, which
> runs on the target CPU, executes those work items.
>
> Therefore, when using the schedule_work_on API,
> it is necessary for the kworker kernel thread to
> be scheduled in, for the work function to be executed.
>
> Time sensitive applications such as SoftPLCs
> (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> have their response times affected by such interruptions.
>
> The /proc/sys/vm/stat_refresh file was originally introduced by
>
> commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> Author: Hugh Dickins <[email protected]>
> Date: Thu May 19 17:12:50 2016 -0700
>
> mm: /proc/sys/vm/stat_refresh to force vmstat update
>
> Provide /proc/sys/vm/stat_refresh to force an immediate update of
> per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> before checking counts when testing. Originally added to work around a
> bug which left counts stranded indefinitely on a cpu going idle (an
> inaccuracy magnified when small below-batch numbers represent "huge"
> amounts of memory), but I believe that bug is now fixed: nonetheless,
> this is still a useful knob.

No need to quote the full changelog.

> Other than the potential interruption to a time sensitive application,
> if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> system hangs can occur:
>
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688

Confused... This report says that accessing the file (i.e. to force the
refresh) can get stalled because high priority tasks will not allow
kworkers to run. No?

There is simply no way around that unless those kworkers inherit the
priority. It certainly is unfortunate that the call is not killable
but being stuck behind real time busy looping processes is nothing
really uncommon. One has to be really careful when using real time
priorities.

> To avoid the problems above, do not schedule the work to synchronize
> per-CPU mm counters on isolated CPUs. Given the possibility for
> breaking existing userspace applications, avoid changing
> behaviour of access to /proc/sys/vm/stat_refresh, such as
> returning errors to userspace.

You are changing the behavior. The preexisting behavior was to flush
everything. This is clearly changing that.

> ---
>
> > Also would it make some sense to reduce flushing to cpumask
> > of the calling process? (certainly a daring thought but have
> > you even considered it?)
>
> Fail to see the point here?

I mean that, if you already want to change the semantic of the call then
it would likely be safer to change it in a more robust way and only
flush pcp vmstat caches that are in the process effective cpu mask. This
way one can control which pcp caches to flush (e.g. those that are not
on isolated CPUs or contrary those that are isolated but you can afford
to flush at the specific moment). See?
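
Concretely, such a variant might look like the following against the
loops in the patch (a sketch only, reusing the patch's works allocation
and cpu variable; current->cpus_ptr is the caller's effective mask):

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		if (!cpumask_test_cpu(cpu, current->cpus_ptr))
			continue;	/* flush only where the caller may run */
		INIT_WORK(per_cpu_ptr(works, cpu), refresh_vm_stats);
		schedule_work_on(cpu, per_cpu_ptr(works, cpu));
	}
	for_each_online_cpu(cpu) {
		if (!cpumask_test_cpu(cpu, current->cpus_ptr))
			continue;
		flush_work(per_cpu_ptr(works, cpu));
	}
	cpus_read_unlock();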

Now I am not saying this is the right way to go because there is still a
slim chance this will break userspace expectations. Therefore I have
asked why you simply do not stop any random application accessing
stat_refresh in the first place. These highly specialized setups with
isolated resources shouldn't run arbitrary crap, should they? What if I
just start allocating memory and get the system close to OOM. I am
pretty sure a small latency induced by the vmstat refreshes is the least
problem you will have.

So please step back and try to think whether this is actually fixing
anything real before trying to change a user visible interface.

--
Michal Hocko
SUSE Labs

2023-06-05 18:34:49

by Marcelo Tosatti

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon, Jun 05, 2023 at 03:14:25PM -0300, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > The interruption caused by queueing work on nohz_full CPUs
> > > > > is undesirable for certain applications.
> > > >
> > > > This is not a proper changelog. I am not going to write a changelog for
> > > > you this time. Please explain why this is really needed and why this
> > > > approach is desired.
> > > > E.g. why don't you prevent userspace from
> > > > refreshing stats if interference is not desirable.
> > >
> > > Michal,
> > >
> > > Can you please check if the following looks better, as
> > > a changelog? thanks
> > >
> > > ---
> > >
> > > schedule_work_on API uses the workqueue mechanism to
> > > queue a work item on a queue. A kernel thread, which
> > > runs on the target CPU, executes those work items.
> > >
> > > Therefore, when using the schedule_work_on API,
> > > it is necessary for the kworker kernel thread to
> > > be scheduled in, for the work function to be executed.
> > >
> > > Time sensitive applications such as SoftPLCs
> > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > > have their response times affected by such interruptions.
> > >
> > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > >
> > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > Author: Hugh Dickins <[email protected]>
> > > Date: Thu May 19 17:12:50 2016 -0700
> > >
> > > mm: /proc/sys/vm/stat_refresh to force vmstat update
> > >
> > > Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > > per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > > before checking counts when testing. Originally added to work around a
> > > bug which left counts stranded indefinitely on a cpu going idle (an
> > > inaccuracy magnified when small below-batch numbers represent "huge"
> > > amounts of memory), but I believe that bug is now fixed: nonetheless,
> > > this is still a useful knob.
> >
> > No need to quote the full changelog.

I think it's useful to put things in perspective.

> > > Other than the potential interruption to a time sensitive application,
> > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > system hangs can occur:
> > >
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> >
> > Confused... This report says that accessing the file (i.e. to force the
> > refresh) can get stalled because high priority tasks will not allow
> > kworkers to run. No?
>
> Yes.
>
> > There is simply no way around that unless those kworkers inherit the
> > priority.
>
> stalld is an attempt to work around the situation by allowing the
> lower priority processes to execute for a small amount of time
> (for example 20us every 1s). https://github.com/bristot/stalld:
>
> "The stalld program (which stands for 'stall daemon') is a mechanism to
> prevent the starvation of operating system threads in a Linux system.
> The premise is to start up on a housekeeping cpu (one that is not used
> for real-application purposes) and to periodically monitor the state of
> each thread in the system, looking for a thread that has been on a run
> queue (i.e. ready to run) for a specified length of time without being
> run. This condition is usually hit when the thread is on the same cpu
> as a high-priority cpu-intensive task and therefore is being given no
> opportunity to run.
>
> When a thread is judged to be starving, stalld changes that thread to
> use the SCHED_DEADLINE policy and gives the thread a small slice of time
> for that cpu (specified on the command line). The thread then runs and
> when that timeslice is used, the thread is then returned to its original
> scheduling policy and stalld then continues to monitor thread states."
>
> Unfortunately, if you allow that, then the latency sensitive
> application might be interrupted for longer than acceptable
> (which is the case for a certain class of applications, for example
> SoftPLC inside a VM).
>
> > It certainly is unfortunate that the call is not killable
> > but being stuck behind real time busy looping processes is nothing
> > really uncommon. One has to be really careful when using real time
> > priorities.
>
> Yes.
>
> > > To avoid the problems above, do not schedule the work to synchronize
> > > per-CPU mm counters on isolated CPUs. Given the possibility for
> > > breaking existing userspace applications, avoid changing
> > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > returning errors to userspace.
> >
> > You are changing the behavior. The preexisting behavior was to flush
> > everything. This is clearly changing that.
>
> I meant that this patch does not cause read/write to the procfs file
> to return errors.
>
> I believe returning errors has a higher potential for regressions
> than not flushing per-CPU VM counters of isolated CPUs (which are
> bounded).
>
> > > ---
> > >
> > > > Also would it make some sense to reduce flushing to cpumask
> > > > of the calling process? (certainly a daring thought but have
> > > > you even considered it?)
> > >
> > > Fail to see the point here?
> >
> > I mean that, if you already want to change the semantic of the call then
> > it would likely be safer to change it in a more robust way and only
> > flush pcp vmstat caches that are in the process effective cpu mask.
>
> That would change behaviour for systems without isolated CPUs.
>
> > This
> > way one can control which pcp caches to flush (e.g. those that are not
> > on isolated CPUs or contrary those that are isolated but you can afford
> > to flush at the specific moment). See?
>
> Yes, but not sure what to think of this idea.
>
> > Now I am not saying this is the right way to go because there is still a
> > slim chance this will break userspace expectations. Therefore I have
> > asked why you simply do not stop any random application accessing
> > stat_refresh in the first place.
>
> I think this is what should be done, but not on the current patchset.
>
> https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
>
> Regarding housekeeping flags, it is usually the case that initialization might
> require code execution on interference blocked CPUs (for example MTRR
> initialization, resctrlfs initialization, MSR writes, ...). Therefore
> tagging the CPUs after system initialization is necessary, which
> is not possible with current housekeeping flags infrastructure.
>
> > These highly specialized setups with
> > isolated resources shouldn't run arbitrary crap, should they?
>
> Problem is that it's hard to control what people run on a system.
>
> > What if I just start allocating memory and get the system close to OOM.
>
> Sure, or "poweroff".
>
> > I am
> > pretty sure a small latency induced by the vmstat refreshes is the least
> > problem you will have.
>
> If OOM codepath sends no IPI or queues work on isolated CPUs, then OOM
> should be fine.
>
> > So please step back and try to think whether this is actually fixing
> > anything real before trying to change a user visible interface.
>
> It is fixing either a latency violation or a hang on a system where some user or
> piece of software happens to run "sysctl -a" (or read vmstat_refresh).
>
> If one is using CPU isolation, the latency violation has higher
> priority than vmstat_refresh returning proper counters.

OK, so this patch is not going to include the per-CPU vmstat counters
(up to the threshold) in the synchronization step of reading/writing to the
vmstat_refresh file.

This is a tradeoff: one prefers slightly inaccurate counters (for a
procfs file whose value is going to be interpreted, and whose accuracy
might or might not matter) over interrupting an isolated CPU.


2023-06-05 18:51:46

by Marcelo Tosatti

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > The interruption caused by queueing work on nohz_full CPUs
> > > > is undesirable for certain applications.
> > >
> > > This is not a proper changelog. I am not going to write a changelog for
> > > you this time. Please explain why this is really needed and why this
> > > approach is desired.
> > > E.g. why don't you prevent userspace from
> > > refreshing stats if interference is not desirable.
> >
> > Michal,
> >
> > Can you please check if the following looks better, as
> > a changelog? thanks
> >
> > ---
> >
> > schedule_work_on API uses the workqueue mechanism to
> > queue a work item on a queue. A kernel thread, which
> > runs on the target CPU, executes those work items.
> >
> > Therefore, when using the schedule_work_on API,
> > it is necessary for the kworker kernel thread to
> > be scheduled in, for the work function to be executed.
> >
> > Time sensitive applications such as SoftPLCs
> > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > have their response times affected by such interruptions.
> >
> > The /proc/sys/vm/stat_refresh file was originally introduced by
> >
> > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > Author: Hugh Dickins <[email protected]>
> > Date: Thu May 19 17:12:50 2016 -0700
> >
> > mm: /proc/sys/vm/stat_refresh to force vmstat update
> >
> > Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > before checking counts when testing. Originally added to work around a
> > bug which left counts stranded indefinitely on a cpu going idle (an
> > inaccuracy magnified when small below-batch numbers represent "huge"
> > amounts of memory), but I believe that bug is now fixed: nonetheless,
> > this is still a useful knob.
>
> No need to quote the full changelog.
>
> > Other than the potential interruption to a time sensitive application,
> > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > system hangs can occur:
> >
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
>
> Confused... This report says that accessing the file (i.e. to force the
> refresh) can get stalled because high priority tasks will not allow
> kworkers to run. No?

Yes.

> There is simply no way around that unless those kworkers inherit the
> priority.

stalld is an attempt to work around the situation by allowing the
lower priority processes to execute for a small amount of time
(for example 20us every 1s). https://github.com/bristot/stalld:

"The stalld program (which stands for 'stall daemon') is a mechanism to
prevent the starvation of operating system threads in a Linux system.
The premise is to start up on a housekeeping cpu (one that is not used
for real-application purposes) and to periodically monitor the state of
each thread in the system, looking for a thread that has been on a run
queue (i.e. ready to run) for a specified length of time without being
run. This condition is usually hit when the thread is on the same cpu
as a high-priority cpu-intensive task and therefore is being given no
opportunity to run.

When a thread is judged to be starving, stalld changes that thread to
use the SCHED_DEADLINE policy and gives the thread a small slice of time
for that cpu (specified on the command line). The thread then runs and
when that timeslice is used, the thread is then returned to its original
scheduling policy and stalld then continues to monitor thread states."
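
As a rough userspace sketch of the boost stalld applies (boost_thread
is an illustrative helper; stalld's actual implementation differs):

#define _GNU_SOURCE
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

/* give a starving tid 20us of runtime every 1s under SCHED_DEADLINE */
static int boost_thread(pid_t tid)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	= 20 * 1000,		/* 20us, in ns */
		.sched_deadline	= 1000 * 1000 * 1000,	/* 1s */
		.sched_period	= 1000 * 1000 * 1000,	/* 1s */
	};

	return syscall(SYS_sched_setattr, tid, &attr, 0);
}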

Unfortunately, if you allow that, then the latency sensitive
application might be interrupted for longer than acceptable
(which is the case for a certain class of applications, for example
SoftPLC inside a VM).

> It certainly is unfortunate that the call is not killable
> but being stuck behind real time busy looping processes is nothing
> really uncommon. One has to be really careful when using real time
> priorities.

Yes.

> > To avoid the problems above, do not schedule the work to synchronize
> > per-CPU mm counters on isolated CPUs. Given the possibility for
> > breaking existing userspace applications, avoid changing
> > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > returning errors to userspace.
>
> You are changing the behavior. The preexisting behavior was to flush
> everything. This is clearly changing that.

I meant that this patch does not cause read/write to the procfs file
to return errors.

I believe returning errors has a higher potential for regressions
than not flushing per-CPU VM counters of isolated CPUs (which are
bounded).

> > ---
> >
> > > Also would it make some sense to reduce flushing to cpumask
> > > of the calling process? (certainly a daring thought but have
> > > you even considered it?)
> >
> > Fail to see the point here?
>
> I mean that, if you already want to change the semantic of the call then
> it would likely be safer to change it in a more robust way and only
> flush pcp vmstat caches that are in the process effective cpu mask.

That would change behaviour for systems without isolated CPUs.

> This
> way one can control which pcp caches to flush (e.g. those that are not
> on isolated CPUs or contrary those that are isolated but you can afford
> to flush at the specific moment). See?

Yes, but not sure what to think of this idea.

> Now I am not saying this is the right way to go because there is still a
> slim chance this will break userspace expectations. Therefore I have
> asked why you simply do not stop any random application accessing
> stat_refresh in the first place.

I think this is what should be done, but not on the current patchset.

https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html

Regarding housekeeping flags, it is usually the case that initialization might
require code execution on interference blocked CPUs (for example MTRR
initialization, resctrlfs initialization, MSR writes, ...). Therefore
tagging the CPUs after system initialization is necessary, which
is not possible with current housekeeping flags infrastructure.

> These highly specialized setups with
> isolated resources shouldn't run arbitrary crap, should they?

Problem is that it's hard to control what people run on a system.

> What if I just start allocating memory and get the system close to OOM.

Sure, or "poweroff".

> I am
> pretty sure a small latency induced by the vmstat refreshes is the least
> problem you will have.

If OOM codepath sends no IPI or queues work on isolated CPUs, then OOM
should be fine.

> So please step back and try to think whether this is actually fixing
> anything real before trying to change a user visible interface.

It is fixing either a latency violation or a hang on a system where some user or
piece of software happens to run "sysctl -a" (or read vmstat_refresh).

If one is using CPU isolation, the latency violation has higher
priority than vmstat_refresh returning proper counters.


2023-06-05 19:43:16

by Michal Hocko

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon 05-06-23 15:14:25, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > The interruption caused by queueing work on nohz_full CPUs
> > > > > is undesirable for certain applications.
> > > >
> > > > This is not a proper changelog. I am not going to write a changelog for
> > > > you this time. Please explain why this is really needed and why this
> > > > approach is desired.
> > > > E.g. why don't you prevent userspace from
> > > > refreshing stats if interference is not desirable.
> > >
> > > Michal,
> > >
> > > Can you please check if the following looks better, as
> > > a changelog? thanks
> > >
> > > ---
> > >
> > > schedule_work_on API uses the workqueue mechanism to
> > > queue a work item on a queue. A kernel thread, which
> > > runs on the target CPU, executes those work items.
> > >
> > > Therefore, when using the schedule_work_on API,
> > > it is necessary for the kworker kernel thread to
> > > be scheduled in, for the work function to be executed.
> > >
> > > Time sensitive applications such as SoftPLCs
> > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > > have their response times affected by such interruptions.
> > >
> > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > >
> > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > Author: Hugh Dickins <[email protected]>
> > > Date: Thu May 19 17:12:50 2016 -0700
> > >
> > > mm: /proc/sys/vm/stat_refresh to force vmstat update
> > >
> > > Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > > per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > > before checking counts when testing. Originally added to work around a
> > > bug which left counts stranded indefinitely on a cpu going idle (an
> > > inaccuracy magnified when small below-batch numbers represent "huge"
> > > amounts of memory), but I believe that bug is now fixed: nonetheless,
> > > this is still a useful knob.
> >
> > No need to quote the full changelog.
> >
> > > Other than the potential interruption to a time sensitive application,
> > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > system hangs can occur:
> > >
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> >
> > Confused... This report says that accessing the file (i.e. to force the
> > refresh) can get stalled because high priority tasks will not allow
> > kworkers to run. No?
>
> Yes.
>
> > There is simply no way around that unless those kworkers inherit the
> > priority.
>
> stalld is an attempt to work around the situation by allowing the
> lower priority processes to execute for a small amount of time
> (for example 20us every 1s). https://github.com/bristot/stalld:
>
> "The stalld program (which stands for 'stall daemon') is a mechanism to
> prevent the starvation of operating system threads in a Linux system.
> The premise is to start up on a housekeeping cpu (one that is not used
> for real-application purposes) and to periodically monitor the state of
> each thread in the system, looking for a thread that has been on a run
> queue (i.e. ready to run) for a specified length of time without being
> run. This condition is usually hit when the thread is on the same cpu
> as a high-priority cpu-intensive task and therefore is being given no
> opportunity to run.
>
> When a thread is judged to be starving, stalld changes that thread to
> use the SCHED_DEADLINE policy and gives the thread a small slice of time
> for that cpu (specified on the command line). The thread then runs and
> when that timeslice is used, the thread is then returned to its original
> scheduling policy and stalld then continues to monitor thread states."

But your task is not on the rq. It is sleeping and waiting for completion,
so I fail to see how this all is related. The problem is that the userspace
depends on a kernel WQ to complete. There are quite a few operations that
will behave like that.

> Unfortunately, if you allow that, then the latency sensitive
> application might be interrupted for longer than acceptable
> (which is the case for a certain class of applications, for example
> SoftPLC inside a VM).

I am losing you again. You can either have top priority processes
running uninterrupted or latency sensitive ones running within your SLAs.
Even if you apply this patch you cannot protect your sensitive
application from CPU top prio hogs. You either have your process
priorities configured properly or not. I really fail to follow your line
of arguments here.

> > It certainly is unfortunate that the call is not killable
> > but being stuck behind real time busy looping processes is nothing
> > really uncommon. One has to be really careful when using real time
> > priorities.
>
> Yes.
>
> > > To avoid the problems above, do not schedule the work to synchronize
> > > per-CPU mm counters on isolated CPUs. Given the possibility for
> > > breaking existing userspace applications, avoid changing
> > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > returning errors to userspace.
> >
> > You are changing the behavior. The preexisting behavior was to flush
> > everything. This is clearly changing that.
>
> I meant that this patch does not cause read/write to the procfs file
> to return errors.
>
> I believe returning errors has a higher potential for regressions
> than not flushing per-CPU VM counters of isolated CPUs (which are
> bounded).

Silent change of behavior is even worse because you cannot really tell
the difference.

> > > > Also would it make some sense to reduce flushing to cpumask
> > > > of the calling process? (certainly a daring thought but have
> > > > you even considered it?)
> > >
> > > Fail to see the point here?
> >
> > I mean that, if you already want to change the semantic of the call then
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > it would likely be safer to change it in a more robust way and only
> > flush pcp vmstat caches that are in the process effective cpu mask.
>
> That would change behaviour for systems without isolated CPUs.

yes, see above.

> > This
> > way one can control which pcp caches to flush (e.g. those that are not
> > on isolated CPUs or contrary those that are isolated but you can afford
> > to flush at the specific moment). See?
>
> Yes, but not sure what to think of this idea.

If you want to break something, at least make the change kinda more
generic than magically single-purpose (isolcpu/nohz in this case) oriented.

> > Now I am not saying this is the right way to go because there is still a
> > slim chance this will break userspace expectations. Therefore I have
> > asked why you simply do not stop any random application accessing
> > stat_refresh in the first place.
>
> I think this is what should be done, but not on the current patchset.
>
> https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
>
> Regarding housekeeping flags, it is usually the case that initialization might
> require code execution on interference blocked CPUs (for example MTRR
> initialization, resctrlfs initialization, MSR writes, ...). Therefore
> tagging the CPUs after system initialization is necessary, which
> is not possible with current housekeeping flags infrastructure.
>
> > These highly specialized setups with
> > isolated resources shouldn't run arbitrary crap, should they?
>
> Problem is that it's hard to control what people run on a system.
>
> > What if I just start allocating memory and get the system close to OOM.
>
> Sure, or "poweroff".
>
> > I am
> > pretty sure a small latency induced by the vmstat refreshes is the least
> > problem you will have.
>
> If OOM codepath sends no IPI or queues work on isolated CPUs, then OOM
> should be fine.

You are missing the big picture, I am afraid. IPIs are the least of the
problems in that case (just imagine all the indirect dependencies through
locking - take a lock held by somebody requesting memory).

> > So please step back and try to think whether this is actually fixing
> > anything real before trying to change a user visible interface.
>
> It is fixing either a latency violation or a hang on a system where some user or
> piece of software happens to run "sysctl -a" (or read vmstat_refresh).

I believe we have established the "hang problem" as described above is
not fixable by this patch. And I still argue that you should simply not
allow abuse of the interface if you want to have any latency guarantees.
Same as with any other kernel activity where you can compete for
resources (directly or indirectly).

> If one is using CPU isolation, the latency violation has higher
> priority than vmstat_refresh returning proper counters.

This is a strong claim without any actual argument other than you would
like to have it this way.

--
Michal Hocko
SUSE Labs

2023-06-05 20:16:45

by Marcelo Tosatti

Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Mon, Jun 05, 2023 at 09:12:01PM +0200, Michal Hocko wrote:
> On Mon 05-06-23 15:14:25, Marcelo Tosatti wrote:
> > On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > > The interruption caused by queueing work on nohz_full CPUs
> > > > > > is undesirable for certain applications.
> > > > >
> > > > > This is not a proper changelog. I am not going to write a changelog for
> > > > > you this time. Please explain why this is really needed and why this
> > > > > approach is desired.
> > > > > E.g. why don't you prevent userspace from
> > > > > refreshing stats if interference is not desirable.
> > > >
> > > > Michal,
> > > >
> > > > Can you please check if the following looks better, as
> > > > a changelog? thanks
> > > >
> > > > ---
> > > >
> > > > schedule_work_on API uses the workqueue mechanism to
> > > > queue a work item on a queue. A kernel thread, which
> > > > runs on the target CPU, executes those work items.
> > > >
> > > > Therefore, when using the schedule_work_on API,
> > > > it is necessary for the kworker kernel thread to
> > > > be scheduled in, for the work function to be executed.
> > > >
> > > > Time sensitive applications such as SoftPLCs
> > > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > > > have their response times affected by such interruptions.
> > > >
> > > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > > >
> > > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > > Author: Hugh Dickins <[email protected]>
> > > > Date: Thu May 19 17:12:50 2016 -0700
> > > >
> > > > mm: /proc/sys/vm/stat_refresh to force vmstat update
> > > >
> > > > Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > > > per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > > > before checking counts when testing. Originally added to work around a
> > > > bug which left counts stranded indefinitely on a cpu going idle (an
> > > > inaccuracy magnified when small below-batch numbers represent "huge"
> > > > amounts of memory), but I believe that bug is now fixed: nonetheless,
> > > > this is still a useful knob.
> > >
> > > No need to quote the full changelog.
> > >
> > > > Other than the potential interruption to a time sensitive application,
> > > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > > system hangs can occur:
> > > >
> > > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> > >
> > > Confused... This report says that accessing the file (i.e. to force the
> > > refresh) can get stalled because high priority tasks will not allow
> > > kworkers to run. No?
> >
> > Yes.
> >
> > > There is simply no way around that unless those kworkers inherit the
> > > priority.
> >
> > stalld is an attempt to work around the situation by allowing the
> > lower priority processes to execute for a small amount of time
> > (for example 20us every 1s). https://github.com/bristot/stalld:
> >
> > "The stalld program (which stands for 'stall daemon') is a mechanism to
> > prevent the starvation of operating system threads in a Linux system.
> > The premise is to start up on a housekeeping cpu (one that is not used
> > for real-application purposes) and to periodically monitor the state of
> > each thread in the system, looking for a thread that has been on a run
> > queue (i.e. ready to run) for a specified length of time without being
> > run. This condition is usually hit when the thread is on the same cpu
> > as a high-priority cpu-intensive task and therefore is being given no
> > opportunity to run.
> >
> > When a thread is judged to be starving, stalld changes that thread to
> > use the SCHED_DEADLINE policy and gives the thread a small slice of time
> > for that cpu (specified on the command line). The thread then runs and
> > when that timeslice is used, the thread is then returned to its original
> > scheduling policy and stalld then continues to monitor thread states."
>
> But your task is not on the rq. It is sleeping and waiting for completion,
> so I fail to see how this all is related. The problem is that the userspace
> depends on a kernel WQ to complete. There are quite a few operations that
> will behave like that.

Yes. In more detail, there are two cases with the current kernel code:

Case-1) no stalld daemon running. A SCHED_FIFO task on the isolated CPU
does:

do {
	pkt = read_network_packet();
	if (pkt) {
		process_pkt(pkt);
	}
} while (!stop_request);

Someone else runs "echo 1 > /proc/sys/vm/stat_refresh" or
"sysctl -a".

flush_work() hangs, because the SCHED_FIFO task never yields the processor
for the kworker to finish executing the vmstat_refresh work.

Case-2) stalld daemon running. A SCHED_FIFO task runs on the isolated CPU
under the same conditions as above. The stalld daemon detects the kworker
"starving" (not being able to execute), changes its priority to
SCHED_DEADLINE for 20us, then changes it back to SCHED_OTHER.

This causes the packet processing thread to be interrupted for 20us,
which is not what is expected from the system.

---

Now with this patch integrated:

Case-1B)

"echo 1 > /proc/sys/vm/stat_refresh" or "sysctl -a" returns, no system
hang.

Case-2B)

No work is scheduled to run on the isolated CPU, packet processing
is not interrupted for 20us, everyone is happy.

---

In both B cases, kernel statistics were not fully synchronized from per-CPU
counters to global counters (but hopefully nobody cares).
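
The drift is bounded because per-CPU vmstat deltas are folded into the
global counters once they cross a small threshold. A toy model of that
batching (plain C11, not kernel code; THRESHOLD and the thread-local
variable stand in for the pcp stat threshold and the per-CPU diff):

#include <stdatomic.h>
#include <stdlib.h>

#define THRESHOLD 32	/* stands in for the pcp stat threshold */

static atomic_long global_count;
static _Thread_local long local_diff;	/* stands in for the per-CPU diff */

static void mod_counter(long delta)
{
	local_diff += delta;
	if (labs(local_diff) > THRESHOLD) {
		/* fold the batched delta into the global counter */
		atomic_fetch_add(&global_count, local_diff);
		local_diff = 0;
	}
}

So a CPU whose flush is skipped leaves each global counter off by at
most the threshold, which is the bounded error mentioned earlier.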

> > Unfortunately, if you allow that, then the latency sensitive
> > application might be interrupted for longer than acceptable
> > (which is the case for a certain class of applications, for example
> > SoftPLC inside a VM).
>
> I am losing you again. You can either have top priority processes
> running uninterrupted or latency sensitive ones running within your SLAs.
> Even if you apply this patch you cannot protect your sensitive
> application from CPU top prio hogs. You either have your process
> priorities configured properly or not. I really fail to follow your line
> of arguments here.

Hopefully what is written above helps.

> > > It certainly is unfortunate that the call is not killable
> > > but being stuck behind real time busy looping processes is nothing
> > > really uncommon. One has to be really careful when using real time
> > > priorities.
> >
> > Yes.
> >
> > > > To avoid the problems above, do not schedule the work to synchronize
> > > > per-CPU mm counters on isolated CPUs. Given the possibility for
> > > > breaking existing userspace applications, avoid changing
> > > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > > returning errors to userspace.
> > >
> > > You are changing the behavior. The preexisting behavior was to flush
> > > everything. This is clearly changing that.
> >
> > I meant that this patch does not cause read/write to the procfs file
> > to return errors.
> >
> > I believe returning errors has a higher potential for regressions
> > than not flushing per-CPU VM counters of isolated CPUs (which are
> > bounded).
>
> Silent change of behavior is even worse because you cannot really tell
> the difference.

What do you suggest?

> > > > > Also would it make some sense to reduce flushing to cpumask
> > > > > of the calling process? (certainly a daring thought but have
> > > > > you even considered it?)
> > > >
> > > > Fail to see the point here?
> > >
> > > I mean that, if you already want to change the semantic of the call then
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > it would likely be safer to change it in a more robust way and only
> > > flush pcp vmstat caches that are in the process effective cpu mask.
> >
> > That would change behaviour for systems without isolated CPUs.
>
> yes, see above.
>
> > > This
> > > way one can control which pcp caches to flush (e.g. those that are not
> > > on isolated CPUs or contrary those that are isolated but you can afford
> > > to flush at the specific moment). See?
> >
> > Yes, but not sure what to think of this idea.
>
> If you want to break something, at least make the change kinda more
> generic than magically single-purpose (isolcpu/nohz in this case) oriented.

I hope nobody cares about stat_refresh being off by the reclaim threshold
(which is a small percentage).

The alternative was to synchronize the per-CPU counters remotely, but that
was considered too complex.

> > > Now I am not saying this is the right way to go because there is still a
> > > slim chance this will break userspace expectations. Therefore I have
> > > asked why you simply do not stop any random application accessing
> > > stat_refresh in the first place.
> >
> > I think this is what should be done, but not on the current patchset.
> >
> > https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
> >
> > Regarding housekeeping flags, it is usually the case that initialization might
> > require code execution on interference blocked CPUs (for example MTRR
> > initialization, resctrlfs initialization, MSR writes, ...). Therefore
> > tagging the CPUs after system initialization is necessary, which
> > is not possible with current housekeeping flags infrastructure.
> >
> > > These highly specialized setups with
> > > isolated resources shouldn't run arbitrary crap, should they?
> >
> > Problem is that it's hard to control what people run on a system.
> >
> > > What if I just start allocating memory and get the system close to OOM.
> >
> > Sure, or "poweroff".
> >
> > > I am
> > > pretty sure a small latency induced by the vmstat refreshes is the least
> > > problem you will have.
> >
> > If OOM codepath sends no IPI or queues work on isolated CPUs, then OOM
> > should be fine.
>
> You are missing the big picture, I am afraid. IPIs are the least of the
> problems in that case (just imagine all the indirect dependencies through
> locking - take a lock held by somebody requesting memory).
>
> > > So please step back and try to think whether this is actually fixing
> > > anything real before trying to change a user visible interface.
> >
> > It is fixing either a latency violation or a hang on a system where some user or
> > piece of software happens to run "sysctl -a" (or read vmstat_refresh).
>
> I believe we have established the "hang problem" as described above is
> not fixable by this patch. And I still argue that you should simply not
> allow abuse of the interface if you want to have any latency guarantees.
> Same as with any other kernel activity where you can compete for
> resources (directly or indirectly).
>
> > If one is using CPU isolation, the latency violation has higher
> > priority than vmstat_refresh returning proper counters.
>
> This is a strong claim without any actual argument other than you would
> like to have it this way.

Well, it's the least bad option that I can see.

Do you think returning an error from the procfs file handler, if an
isolated CPU is encountered, is not a potential source of regressions?

Again, it seemed to me that returning an error only if a different
flag is enabled (a flag which is disabled by default), similar to what
is suggested in the "block interference" patchset, would be more
resilient against regressions. But I don't have a strong preference.
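
For illustration, that flag-gated variant could be as small as the
following, placed at the top of vmstat_refresh() (a sketch only;
sysctl_vmstat_refresh_strict is a hypothetical knob, default off, and
locking is elided):

	if (sysctl_vmstat_refresh_strict) {
		for_each_online_cpu(cpu)
			if (cpu_is_isolated(cpu))
				return -EBUSY;	/* refuse, do not silently skip */
	}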