2023-11-16 02:24:29

by Yosry Ahmed

Subject: [PATCH v3 0/5] mm: memcg: subtree stats flushing and thresholds

This series attempts to address shortcomings in the current approach to
memcg stats flushing, namely occasionally stale or expensive stat reads. The
series does so by changing the threshold that we use to decide whether
to trigger a flush to be per memcg instead of global (patch 3), and then
changing flushing to be per memcg (i.e. subtree flushes) instead of
global (patch 5).

Patch 3 & 5 are the core of the series, and they include more details
and testing results. The rest are either cleanups or prep work.

This series replaces the "memcg: more sophisticated stats flushing"
series [1], which itself replaced an earlier series, in a long line of
attempts to improve memcg stats flushing. It is not a new version of
the same patchset; it takes a completely different approach, based on
feedback collected from lkml discussions of all previous attempts.
Hopefully, this is the final attempt.

There was a reported regression in v2 [2] for the will-it-scale::fallocate
benchmark. I believe this regression should not affect production
workloads. This specific benchmark allocates and frees memory (using
fallocate/ftruncate) at a rate far too fast for the memory to be put to
any actual use. Testing this series on 100+ machines running production
workloads did not show any practical regression in page fault latency or
allocation latency, but it showed great improvements in stats read time.
I do not have numbers for the exact improvement from this series alone,
but combined with another optimization for cgroup v1 [3] we see 5-10x
improvements. A significant chunk of that comes from the cgroup v1
optimization, but this series also made an improvement, as reported by
Domenico [4].

[1]https://lore.kernel.org/lkml/[email protected]/
[2]https://lore.kernel.org/lkml/[email protected]/
[3]https://lore.kernel.org/lkml/[email protected]/
[4]https://lore.kernel.org/lkml/CAFYChMv_kv_KXOMRkrmTN-7MrfgBHMcK3YXv0dPYEL7nK77e2A@mail.gmail.com/

v2 -> v3:
- Rebased on top of v6.7-rc1.
- Updated commit messages based on discussions in previous versions.
- Reset percpu stats_updates in mem_cgroup_css_rstat_flush().
- Added a mem_cgroup_disabled() check to mem_cgroup_flush_stats().

v2: https://lore.kernel.org/lkml/[email protected]/

Yosry Ahmed (5):
mm: memcg: change flush_next_time to flush_last_time
mm: memcg: move vmstats structs definition above flushing code
mm: memcg: make stats flushing threshold per-memcg
mm: workingset: move the stats flush into workingset_test_recent()
mm: memcg: restore subtree stats flushing

include/linux/memcontrol.h | 8 +-
mm/memcontrol.c | 272 +++++++++++++++++++++----------------
mm/vmscan.c | 2 +-
mm/workingset.c | 42 ++++--
4 files changed, 188 insertions(+), 136 deletions(-)

--
2.43.0.rc0.421.g78406f8d94-goog


2023-11-16 02:24:31

by Yosry Ahmed

Subject: [PATCH v3 1/5] mm: memcg: change flush_next_time to flush_last_time

flush_next_time is an inaccurate name. It's not the next time that
periodic flushing will happen; rather, it's the next time that
ratelimited flushing can happen if the periodic flusher is late.

Simplify its semantics by just storing the timestamp of the last flush
instead, flush_last_time. Move the 2*FLUSH_TIME addition to
mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
This way, all the ratelimiting semantics live in one place.

No functional change intended.

Signed-off-by: Yosry Ahmed <[email protected]>
Tested-by: Domenico Cerasuolo <[email protected]>
---
mm/memcontrol.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 774bd6e21e278..18931d82f108f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -593,7 +593,7 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
static DEFINE_PER_CPU(unsigned int, stats_updates);
static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
-static u64 flush_next_time;
+static u64 flush_last_time;

#define FLUSH_TIME (2UL*HZ)

@@ -653,7 +653,7 @@ static void do_flush_stats(void)
atomic_xchg(&stats_flush_ongoing, 1))
return;

- WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
+ WRITE_ONCE(flush_last_time, jiffies_64);

cgroup_rstat_flush(root_mem_cgroup->css.cgroup);

@@ -669,7 +669,8 @@ void mem_cgroup_flush_stats(void)

void mem_cgroup_flush_stats_ratelimited(void)
{
- if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
+ /* Only flush if the periodic flusher is one full cycle late */
+ if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
mem_cgroup_flush_stats();
}

--
2.43.0.rc0.421.g78406f8d94-goog

2023-11-16 02:24:34

by Yosry Ahmed

Subject: [PATCH v3 2/5] mm: memcg: move vmstats structs definition above flushing code

The following patch will make use of those structs in the flushing code,
so move their definitions (and a few other dependencies) a little bit up
to reduce the diff noise in the following patch.

No functional change intended.

Signed-off-by: Yosry Ahmed <[email protected]>
Tested-by: Domenico Cerasuolo <[email protected]>
---
mm/memcontrol.c | 146 ++++++++++++++++++++++++------------------------
1 file changed, 73 insertions(+), 73 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 18931d82f108f..5ae2a8f04be45 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -573,6 +573,79 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
return mz;
}

+/* Subset of vm_event_item to report for memcg event stats */
+static const unsigned int memcg_vm_event_stat[] = {
+ PGPGIN,
+ PGPGOUT,
+ PGSCAN_KSWAPD,
+ PGSCAN_DIRECT,
+ PGSCAN_KHUGEPAGED,
+ PGSTEAL_KSWAPD,
+ PGSTEAL_DIRECT,
+ PGSTEAL_KHUGEPAGED,
+ PGFAULT,
+ PGMAJFAULT,
+ PGREFILL,
+ PGACTIVATE,
+ PGDEACTIVATE,
+ PGLAZYFREE,
+ PGLAZYFREED,
+#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+ ZSWPIN,
+ ZSWPOUT,
+#endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ THP_FAULT_ALLOC,
+ THP_COLLAPSE_ALLOC,
+ THP_SWPOUT,
+ THP_SWPOUT_FALLBACK,
+#endif
+};
+
+#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
+static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
+
+static void init_memcg_events(void)
+{
+ int i;
+
+ for (i = 0; i < NR_MEMCG_EVENTS; ++i)
+ mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
+}
+
+static inline int memcg_events_index(enum vm_event_item idx)
+{
+ return mem_cgroup_events_index[idx] - 1;
+}
+
+struct memcg_vmstats_percpu {
+ /* Local (CPU and cgroup) page state & events */
+ long state[MEMCG_NR_STAT];
+ unsigned long events[NR_MEMCG_EVENTS];
+
+ /* Delta calculation for lockless upward propagation */
+ long state_prev[MEMCG_NR_STAT];
+ unsigned long events_prev[NR_MEMCG_EVENTS];
+
+ /* Cgroup1: threshold notifications & softlimit tree updates */
+ unsigned long nr_page_events;
+ unsigned long targets[MEM_CGROUP_NTARGETS];
+};
+
+struct memcg_vmstats {
+ /* Aggregated (CPU and subtree) page state & events */
+ long state[MEMCG_NR_STAT];
+ unsigned long events[NR_MEMCG_EVENTS];
+
+ /* Non-hierarchical (CPU aggregated) page state & events */
+ long state_local[MEMCG_NR_STAT];
+ unsigned long events_local[NR_MEMCG_EVENTS];
+
+ /* Pending child counts during tree propagation */
+ long state_pending[MEMCG_NR_STAT];
+ unsigned long events_pending[NR_MEMCG_EVENTS];
+};
+
/*
* memcg and lruvec stats flushing
*
@@ -684,79 +757,6 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
}

-/* Subset of vm_event_item to report for memcg event stats */
-static const unsigned int memcg_vm_event_stat[] = {
- PGPGIN,
- PGPGOUT,
- PGSCAN_KSWAPD,
- PGSCAN_DIRECT,
- PGSCAN_KHUGEPAGED,
- PGSTEAL_KSWAPD,
- PGSTEAL_DIRECT,
- PGSTEAL_KHUGEPAGED,
- PGFAULT,
- PGMAJFAULT,
- PGREFILL,
- PGACTIVATE,
- PGDEACTIVATE,
- PGLAZYFREE,
- PGLAZYFREED,
-#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
- ZSWPIN,
- ZSWPOUT,
-#endif
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- THP_FAULT_ALLOC,
- THP_COLLAPSE_ALLOC,
- THP_SWPOUT,
- THP_SWPOUT_FALLBACK,
-#endif
-};
-
-#define NR_MEMCG_EVENTS ARRAY_SIZE(memcg_vm_event_stat)
-static int mem_cgroup_events_index[NR_VM_EVENT_ITEMS] __read_mostly;
-
-static void init_memcg_events(void)
-{
- int i;
-
- for (i = 0; i < NR_MEMCG_EVENTS; ++i)
- mem_cgroup_events_index[memcg_vm_event_stat[i]] = i + 1;
-}
-
-static inline int memcg_events_index(enum vm_event_item idx)
-{
- return mem_cgroup_events_index[idx] - 1;
-}
-
-struct memcg_vmstats_percpu {
- /* Local (CPU and cgroup) page state & events */
- long state[MEMCG_NR_STAT];
- unsigned long events[NR_MEMCG_EVENTS];
-
- /* Delta calculation for lockless upward propagation */
- long state_prev[MEMCG_NR_STAT];
- unsigned long events_prev[NR_MEMCG_EVENTS];
-
- /* Cgroup1: threshold notifications & softlimit tree updates */
- unsigned long nr_page_events;
- unsigned long targets[MEM_CGROUP_NTARGETS];
-};
-
-struct memcg_vmstats {
- /* Aggregated (CPU and subtree) page state & events */
- long state[MEMCG_NR_STAT];
- unsigned long events[NR_MEMCG_EVENTS];
-
- /* Non-hierarchical (CPU aggregated) page state & events */
- long state_local[MEMCG_NR_STAT];
- unsigned long events_local[NR_MEMCG_EVENTS];
-
- /* Pending child counts during tree propagation */
- long state_pending[MEMCG_NR_STAT];
- unsigned long events_pending[NR_MEMCG_EVENTS];
-};
-
unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
{
long x = READ_ONCE(memcg->vmstats->state[idx]);
--
2.43.0.rc0.421.g78406f8d94-goog

2023-11-16 02:24:35

by Yosry Ahmed

Subject: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg

A global counter for the magnitude of memcg stats updates is maintained
on the memcg side to avoid invoking rstat flushes when the pending
updates are not significant. This avoids unnecessary flushes, which are
not very cheap even if there aren't many stats to flush. It also avoids
unnecessary lock contention on the underlying global rstat lock.

Make this threshold per-memcg. The same scheme is followed: percpu (now
also per-memcg) counters are incremented in the update path, and only
propagated to per-memcg atomics when they exceed a certain threshold.
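
A condensed sketch of the new update path (trimmed from the diff below;
see memcg_rstat_updated() for the actual code):

	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
		x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
					  abs(val));
		if (x < MEMCG_CHARGE_BATCH)
			continue;

		/* avoid the atomic add if @memcg is already flush-able */
		if (!memcg_should_flush_stats(memcg))
			atomic64_add(x, &memcg->vmstats->stats_updates);
		__this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
	}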

This provides two benefits:
(a) On large machines with a lot of memcgs, the global threshold can be
reached relatively fast, so guarding the underlying lock becomes less
effective. Making the threshold per-memcg avoids this.

(b) Having a global threshold makes it hard to do subtree flushes, as we
cannot reset the global counter except for a full flush. Per-memcg
counters remove this blocker to doing subtree flushes, which helps
avoid unnecessary work when the stats of a small subtree are needed.

Nothing is free, of course. This comes at a cost:
(a) A new per-cpu counter per memcg, consuming NR_CPUS * NR_MEMCGS * 4
bytes. The extra memory usage is insignificant.

(b) More work on the update side, although in the common case it will
only be percpu counter updates. The amount of work scales with the
number of ancestors (i.e. tree depth). This is not a new concept: adding
a cgroup to the rstat tree involves a parent loop, and so does charging.
Testing results below show no significant regressions.

(c) The error margin in the stats for the system as a whole increases
from NR_CPUS * MEMCG_CHARGE_BATCH to NR_CPUS * MEMCG_CHARGE_BATCH *
NR_MEMCGS. This is probably fine because we have a similar per-memcg
error in charges coming from percpu stocks, and we have a periodic
flusher that makes sure we always flush all the stats every 2s anyway.
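To put the new bound in perspective with illustrative numbers (assuming
MEMCG_CHARGE_BATCH is 64): on a 128-CPU machine the old system-wide
bound was 128 * 64 = 8192 unflushed updates; now that same 8192-update
bound applies to each memcg individually, so the system-wide bound grows
with the number of memcgs.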

This patch was tested to make sure no significant regressions are
introduced on the update path as follows. The following benchmarks were
run in a cgroup that is 2 levels deep (/sys/fs/cgroup/a/b/):

(1) Running 22 instances of netperf on a 44 cpu machine with
hyperthreading disabled. All instances, as well as netserver, are run in
a level 2 cgroup:
# netserver -6
# netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Averaging 20 runs, the numbers are as follows:
Base: 40198.0 mbps
Patched: 38629.7 mbps (-3.9%)

The regression is minimal, especially for 22 instances in the same
cgroup sharing all ancestors (so updating the same atomics).

(2) will-it-scale page_fault tests. These tests (specifically
per_process_ops in the page_fault3 test) previously detected a 25.9%
regression for a change in the stats update path [1]. These are the
numbers from 10 runs (+ is good) on a machine with 256 cpus:

LABEL | MEAN | MEDIAN | STDDEV |
------------------------------+-------------+-------------+-------------
page_fault1_per_process_ops | | | |
(A) base | 270249.164 | 265437.000 | 13451.836 |
(B) patched | 261368.709 | 255725.000 | 13394.767 |
| -3.29% | -3.66% | |
page_fault1_per_thread_ops | | | |
(A) base | 242111.345 | 239737.000 | 10026.031 |
(B) patched | 237057.109 | 235305.000 | 9769.687 |
| -2.09% | -1.85% | |
page_fault1_scalability | | | |
(A) base | 0.034387 | 0.035168 | 0.0018283 |
(B) patched | 0.033988 | 0.034573 | 0.0018056 |
| -1.16% | -1.69% | |
page_fault2_per_process_ops | | | |
(A) base | 203561.836 | 203301.000 | 2550.764 |
(B) patched | 197195.945 | 197746.000 | 2264.263 |
| -3.13% | -2.73% | |
page_fault2_per_thread_ops | | | |
(A) base | 171046.473 | 170776.000 | 1509.679 |
(B) patched | 166626.327 | 166406.000 | 768.753 |
| -2.58% | -2.56% | |
page_fault2_scalability | | | |
(A) base | 0.054026 | 0.053821 | 0.00062121 |
(B) patched | 0.053329 | 0.05306 | 0.00048394 |
| -1.29% | -1.41% | |
page_fault3_per_process_ops | | | |
(A) base | 1295807.782 | 1297550.000 | 5907.585 |
(B) patched | 1275579.873 | 1273359.000 | 8759.160 |
| -1.56% | -1.86% | |
page_fault3_per_thread_ops | | | |
(A) base | 391234.164 | 390860.000 | 1760.720 |
(B) patched | 377231.273 | 376369.000 | 1874.971 |
| -3.58% | -3.71% | |
page_fault3_scalability | | | |
(A) base | 0.60369 | 0.60072 | 0.0083029 |
(B) patched | 0.61733 | 0.61544 | 0.009855 |
| +2.26% | +2.45% | |

All regressions seem to be minimal, and within the normal variance for
the benchmark. The fix for [1] assumes that 3% is noise (and there were
no further practical complaints), so hopefully this means that such
variations in these microbenchmarks do not reflect on practical
workloads.

(3) I also ran stress-ng in a nested cgroup and did not observe any
obvious regressions.

[1]https://lore.kernel.org/all/20190520063534.GB19312@shao2-debian/

Suggested-by: Johannes Weiner <[email protected]>
Signed-off-by: Yosry Ahmed <[email protected]>
Tested-by: Domenico Cerasuolo <[email protected]>
---
mm/memcontrol.c | 50 +++++++++++++++++++++++++++++++++----------------
1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5ae2a8f04be45..74db05237775d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -630,6 +630,9 @@ struct memcg_vmstats_percpu {
/* Cgroup1: threshold notifications & softlimit tree updates */
unsigned long nr_page_events;
unsigned long targets[MEM_CGROUP_NTARGETS];
+
+ /* Stats updates since the last flush */
+ unsigned int stats_updates;
};

struct memcg_vmstats {
@@ -644,6 +647,9 @@ struct memcg_vmstats {
/* Pending child counts during tree propagation */
long state_pending[MEMCG_NR_STAT];
unsigned long events_pending[NR_MEMCG_EVENTS];
+
+ /* Stats updates since the last flush */
+ atomic64_t stats_updates;
};

/*
@@ -663,9 +669,7 @@ struct memcg_vmstats {
*/
static void flush_memcg_stats_dwork(struct work_struct *w);
static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static DEFINE_PER_CPU(unsigned int, stats_updates);
static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
-static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
static u64 flush_last_time;

#define FLUSH_TIME (2UL*HZ)
@@ -692,26 +696,37 @@ static void memcg_stats_unlock(void)
preempt_enable_nested();
}

+
+static bool memcg_should_flush_stats(struct mem_cgroup *memcg)
+{
+ return atomic64_read(&memcg->vmstats->stats_updates) >
+ MEMCG_CHARGE_BATCH * num_online_cpus();
+}
+
static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
{
+ int cpu = smp_processor_id();
unsigned int x;

if (!val)
return;

- cgroup_rstat_updated(memcg->css.cgroup, smp_processor_id());
+ cgroup_rstat_updated(memcg->css.cgroup, cpu);
+
+ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ x = __this_cpu_add_return(memcg->vmstats_percpu->stats_updates,
+ abs(val));
+
+ if (x < MEMCG_CHARGE_BATCH)
+ continue;

- x = __this_cpu_add_return(stats_updates, abs(val));
- if (x > MEMCG_CHARGE_BATCH) {
/*
- * If stats_flush_threshold exceeds the threshold
- * (>num_online_cpus()), cgroup stats update will be triggered
- * in __mem_cgroup_flush_stats(). Increasing this var further
- * is redundant and simply adds overhead in atomic update.
+ * If @memcg is already flush-able, increasing stats_updates is
+ * redundant. Avoid the overhead of the atomic update.
*/
- if (atomic_read(&stats_flush_threshold) <= num_online_cpus())
- atomic_add(x / MEMCG_CHARGE_BATCH, &stats_flush_threshold);
- __this_cpu_write(stats_updates, 0);
+ if (!memcg_should_flush_stats(memcg))
+ atomic64_add(x, &memcg->vmstats->stats_updates);
+ __this_cpu_write(memcg->vmstats_percpu->stats_updates, 0);
}
}

@@ -730,13 +745,12 @@ static void do_flush_stats(void)

cgroup_rstat_flush(root_mem_cgroup->css.cgroup);

- atomic_set(&stats_flush_threshold, 0);
atomic_set(&stats_flush_ongoing, 0);
}

void mem_cgroup_flush_stats(void)
{
- if (atomic_read(&stats_flush_threshold) > num_online_cpus())
+ if (memcg_should_flush_stats(root_mem_cgroup))
do_flush_stats();
}

@@ -750,8 +764,8 @@ void mem_cgroup_flush_stats_ratelimited(void)
static void flush_memcg_stats_dwork(struct work_struct *w)
{
/*
- * Always flush here so that flushing in latency-sensitive paths is
- * as cheap as possible.
+ * Deliberately ignore memcg_should_flush_stats() here so that flushing
+ * in latency-sensitive paths is as cheap as possible.
*/
do_flush_stats();
queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
@@ -5784,6 +5798,10 @@ static void mem_cgroup_css_rstat_flush(struct cgroup_subsys_state *css, int cpu)
}
}
}
+ statc->stats_updates = 0;
+ /* We are in a per-cpu loop here, only do the atomic write once */
+ if (atomic64_read(&memcg->vmstats->stats_updates))
+ atomic64_set(&memcg->vmstats->stats_updates, 0);
}

#ifdef CONFIG_MMU
--
2.43.0.rc0.421.g78406f8d94-goog

2023-11-16 02:24:39

by Yosry Ahmed

Subject: [PATCH v3 4/5] mm: workingset: move the stats flush into workingset_test_recent()

The workingset code flushes the stats in workingset_refault() to get
accurate stats of the eviction memcg. In preparation for more scoped
flushing, and for passing the eviction memcg to the flush call, move the
call to workingset_test_recent() where we have a pointer to the eviction
memcg.

The flush call is sleepable, and cannot be made in an rcu read section.
Hence, minimize the rcu read section by also moving it into
workingset_test_recent(). Furthermore, instead of holding the rcu read
lock throughout workingset_test_recent(), only hold it briefly to get a
ref on the eviction memcg. This allows us to make the flush call after
we get the eviction memcg.
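
In condensed form (lifted from the diff below, with unpack_shadow() and
surrounding context omitted), the new ordering is:

	rcu_read_lock();
	eviction_memcg = mem_cgroup_from_id(memcgid);
	if (!mem_cgroup_disabled() &&
	    (!eviction_memcg || !mem_cgroup_tryget(eviction_memcg))) {
		rcu_read_unlock();
		return false;
	}
	rcu_read_unlock();

	/* may sleep; now safely outside the RCU read section */
	mem_cgroup_flush_stats_ratelimited();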

As for workingset_refault(), nothing else there appears to be protected
by rcu. The memcg of the faulted folio (which is not necessarily the
same as the eviction memcg) is protected by the folio lock, which is
held from all callsites. Add a VM_BUG_ON() to make sure this doesn't
change from under us.

No functional change intended.

Signed-off-by: Yosry Ahmed <[email protected]>
Tested-by: Domenico Cerasuolo <[email protected]>
---
mm/workingset.c | 36 ++++++++++++++++++++++++------------
1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index b192e44a0e7cc..a573be6c59fd9 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -425,8 +425,16 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
struct pglist_data *pgdat;
unsigned long eviction;

- if (lru_gen_enabled())
- return lru_gen_test_recent(shadow, file, &eviction_lruvec, &eviction, workingset);
+ rcu_read_lock();
+
+ if (lru_gen_enabled()) {
+ bool recent = lru_gen_test_recent(shadow, file,
+ &eviction_lruvec, &eviction, workingset);
+
+ rcu_read_unlock();
+ return recent;
+ }
+

unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
eviction <<= bucket_order;
@@ -448,8 +456,16 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
* configurations instead.
*/
eviction_memcg = mem_cgroup_from_id(memcgid);
- if (!mem_cgroup_disabled() && !eviction_memcg)
+ if (!mem_cgroup_disabled() &&
+ (!eviction_memcg || !mem_cgroup_tryget(eviction_memcg))) {
+ rcu_read_unlock();
return false;
+ }
+
+ rcu_read_unlock();
+
+ /* Flush stats (and potentially sleep) outside the RCU read section */
+ mem_cgroup_flush_stats_ratelimited();

eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -493,6 +509,7 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)
}
}

+ mem_cgroup_put(eviction_memcg);
return refault_distance <= workingset_size;
}

@@ -519,19 +536,16 @@ void workingset_refault(struct folio *folio, void *shadow)
return;
}

- /* Flush stats (and potentially sleep) before holding RCU read lock */
- mem_cgroup_flush_stats_ratelimited();
-
- rcu_read_lock();
-
/*
* The activation decision for this folio is made at the level
* where the eviction occurred, as that is where the LRU order
* during folio reclaim is being determined.
*
* However, the cgroup that will own the folio is the one that
- * is actually experiencing the refault event.
+ * is actually experiencing the refault event. Make sure the folio is
+ * locked to guarantee folio_memcg() stability throughout.
*/
+ VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
nr = folio_nr_pages(folio);
memcg = folio_memcg(folio);
pgdat = folio_pgdat(folio);
@@ -540,7 +554,7 @@ void workingset_refault(struct folio *folio, void *shadow)
mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

if (!workingset_test_recent(shadow, file, &workingset))
- goto out;
+ return;

folio_set_active(folio);
workingset_age_nonresident(lruvec, nr);
@@ -556,8 +570,6 @@ void workingset_refault(struct folio *folio, void *shadow)
lru_note_cost_refault(folio);
mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
}
-out:
- rcu_read_unlock();
}

/**
--
2.43.0.rc0.421.g78406f8d94-goog

2023-11-16 02:24:50

by Yosry Ahmed

Subject: [PATCH v3 5/5] mm: memcg: restore subtree stats flushing

Stats flushing for memcg currently follows the following rules:
- Always flush the entire memcg hierarchy (i.e. flush the root).
- Only one flusher is allowed at a time. If someone else tries to flush
concurrently, they skip and return immediately.
- A periodic flusher flushes all the stats every 2 seconds.

This approach is followed because all flushes are serialized by a
global rstat spinlock. On the memcg side, flushing is invoked from
userspace reads as well as in-kernel flushers (e.g. reclaim, refault,
etc.). The approach aims to avoid serializing all flushers on the global
lock, which can cause a significant performance hit under high
concurrency.

This approach has the following problems:
- Occasionally a userspace read of the stats of a non-root cgroup will
be too expensive as it has to flush the entire hierarchy [1].
- Sometimes stats accuracy is compromised if there is an ongoing
flush, and we skip and return before the subtree of interest is
actually flushed, yielding stale stats (by up to 2s due to periodic
flushing). This is more visible when reading stats from userspace,
but can also affect in-kernel flushers.

The latter problem is particularly a concern when userspace reads stats
after an event occurs, but gets stats from before the event. Examples:
- When memory usage / pressure spikes, a userspace OOM handler may look
at the stats of different memcgs to select a victim based on various
heuristics (e.g. how much private memory would be freed by killing a
given candidate). Reading stale stats from before the usage spike in
this case may cause a wrongful OOM kill.
- A proactive reclaimer may read the stats after writing to
memory.reclaim to measure the success of the reclaim operation. Stale
stats from before reclaim may give a false negative.
- Reading the stats of a parent and a child memcg may be inconsistent
(child larger than parent), if the flush doesn't happen when the
parent is read, but happens when the child is read.

As for in-kernel flushers, they will occasionally get stale stats. No
regressions are currently known from this, but if there are regressions,
they would be very difficult to debug and link to the source of the
problem.

This patch aims to fix these problems by restoring subtree flushing,
and removing the unified/coalesced flushing logic that skips flushing if
there is an ongoing flush. On its own, this change would introduce a
significant regression with a global stats flushing threshold. With the
per-memcg thresholds introduced earlier in this series, it performs
really well, as the thresholds protect the underlying lock from
unnecessary contention.

Add a mutex to protect the underlying rstat lock from excessive memcg
flushing. The threshold check is repeated after the mutex is grabbed, to
make sure that a concurrent flush did not already cover the subtree we
are trying to flush; a call to cgroup_rstat_flush() is not cheap, even
if there are no pending updates.
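
As an illustrative usage sketch (the local variable is hypothetical; the
pattern mirrors the mem_cgroup_wb_stats() hunk below), a reader that
only cares about one subtree now does:

	/* flush only @memcg's subtree, and only if it crossed its threshold */
	mem_cgroup_flush_stats(memcg);
	nr_dirty = memcg_page_state(memcg, NR_FILE_DIRTY);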

This patch was tested in two ways to ensure the latency of flushing is
up to par, on a machine with 384 cpus:
- A synthetic test with 5000 concurrent workers in 500 cgroups doing
allocations and reclaim, as well as 1000 readers for memory.stat
(variation of [2]). No regressions were noticed in the total runtime.
Note that significant regressions in this test are observed with
global stats thresholds, but not with per-memcg thresholds.

- A synthetic stress test for concurrently reading memcg stats while
memory allocation/freeing workers are running in the background,
provided by Wei Xu [3]. With 250k threads reading the stats every
100ms in 50k cgroups, 99.9% of reads take <= 50us. Less than 0.01%
of reads take more than 1ms, and no reads take more than 100ms.

[1] https://lore.kernel.org/lkml/CABWYdi0c6__rh-K7dcM_pkf9BJdTRtAU08M43KO9ME4-dsgfoQ@mail.gmail.com/
[2] https://lore.kernel.org/lkml/CAJD7tka13M-zVZTyQJYL1iUAYvuQ1fcHbCjcOBZcz6POYTV-4g@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CAAPL-u9D2b=iF5Lf_cRnKxUfkiEe0AMDTu6yhrUAzX0b6a6rDg@mail.gmail.com/

Signed-off-by: Yosry Ahmed <[email protected]>
Tested-by: Domenico Cerasuolo <[email protected]>
---
include/linux/memcontrol.h | 8 ++--
mm/memcontrol.c | 75 +++++++++++++++++++++++---------------
mm/vmscan.c | 2 +-
mm/workingset.c | 10 +++--
4 files changed, 58 insertions(+), 37 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7bdcf3020d7a3..6edd3ec4d8d54 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1046,8 +1046,8 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
return x;
}

-void mem_cgroup_flush_stats(void);
-void mem_cgroup_flush_stats_ratelimited(void);
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);

void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
int val);
@@ -1548,11 +1548,11 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
return node_page_state(lruvec_pgdat(lruvec), idx);
}

-static inline void mem_cgroup_flush_stats(void)
+static inline void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
}

-static inline void mem_cgroup_flush_stats_ratelimited(void)
+static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
}

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 74db05237775d..2baa9349d1590 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -669,7 +669,6 @@ struct memcg_vmstats {
*/
static void flush_memcg_stats_dwork(struct work_struct *w);
static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
-static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
static u64 flush_last_time;

#define FLUSH_TIME (2UL*HZ)
@@ -730,35 +729,47 @@ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
}
}

-static void do_flush_stats(void)
+static void do_flush_stats(struct mem_cgroup *memcg)
{
- /*
- * We always flush the entire tree, so concurrent flushers can just
- * skip. This avoids a thundering herd problem on the rstat global lock
- * from memcg flushers (e.g. reclaim, refault, etc).
- */
- if (atomic_read(&stats_flush_ongoing) ||
- atomic_xchg(&stats_flush_ongoing, 1))
- return;
-
- WRITE_ONCE(flush_last_time, jiffies_64);
-
- cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
+ if (mem_cgroup_is_root(memcg))
+ WRITE_ONCE(flush_last_time, jiffies_64);

- atomic_set(&stats_flush_ongoing, 0);
+ cgroup_rstat_flush(memcg->css.cgroup);
}

-void mem_cgroup_flush_stats(void)
+/*
+ * mem_cgroup_flush_stats - flush the stats of a memory cgroup subtree
+ * @memcg: root of the subtree to flush
+ *
+ * Flushing is serialized by the underlying global rstat lock. There is also a
+ * minimum amount of work to be done even if there are no stat updates to flush.
+ * Hence, we only flush the stats if the updates delta exceeds a threshold. This
+ * avoids unnecessary work and contention on the underlying lock.
+ */
+void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
- if (memcg_should_flush_stats(root_mem_cgroup))
- do_flush_stats();
+ static DEFINE_MUTEX(memcg_stats_flush_mutex);
+
+ if (mem_cgroup_disabled())
+ return;
+
+ if (!memcg)
+ memcg = root_mem_cgroup;
+
+ if (memcg_should_flush_stats(memcg)) {
+ mutex_lock(&memcg_stats_flush_mutex);
+ /* Check again after locking, another flush may have occurred */
+ if (memcg_should_flush_stats(memcg))
+ do_flush_stats(memcg);
+ mutex_unlock(&memcg_stats_flush_mutex);
+ }
}

-void mem_cgroup_flush_stats_ratelimited(void)
+void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
/* Only flush if the periodic flusher is one full cycle late */
if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);
}

static void flush_memcg_stats_dwork(struct work_struct *w)
@@ -767,7 +778,7 @@ static void flush_memcg_stats_dwork(struct work_struct *w)
* Deliberately ignore memcg_should_flush_stats() here so that flushing
* in latency-sensitive paths is as cheap as possible.
*/
- do_flush_stats();
+ do_flush_stats(root_mem_cgroup);
queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
}

@@ -1642,7 +1653,7 @@ static void memcg_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
*
* Current memory state:
*/
- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);

for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
u64 size;
@@ -4191,7 +4202,7 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v)
int nid;
struct mem_cgroup *memcg = mem_cgroup_from_seq(m);

- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);

for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
seq_printf(m, "%s=%lu", stat->name,
@@ -4272,7 +4283,7 @@ static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)

BUILD_BUG_ON(ARRAY_SIZE(memcg1_stat_names) != ARRAY_SIZE(memcg1_stats));

- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);

for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
unsigned long nr;
@@ -4768,7 +4779,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
struct mem_cgroup *parent;

- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);

*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
@@ -6857,7 +6868,7 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
int i;
struct mem_cgroup *memcg = mem_cgroup_from_seq(m);

- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(memcg);

for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
int nid;
@@ -8088,7 +8099,11 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
break;
}

- cgroup_rstat_flush(memcg->css.cgroup);
+ /*
+ * mem_cgroup_flush_stats() ignores small changes. Use
+ * do_flush_stats() directly to get accurate stats for charging.
+ */
+ do_flush_stats(memcg);
pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
if (pages < max)
continue;
@@ -8153,8 +8168,10 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
static u64 zswap_current_read(struct cgroup_subsys_state *css,
struct cftype *cft)
{
- cgroup_rstat_flush(css->cgroup);
- return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+ struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+
+ mem_cgroup_flush_stats(memcg);
+ return memcg_page_state(memcg, MEMCG_ZSWAP_B);
}

static int zswap_max_show(struct seq_file *m, void *v)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 506f8220c5fe5..f93c989d7b387 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2222,7 +2222,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
* Flush the memory cgroup stats, so that we read accurate per-memcg
* lruvec stats for heuristics.
*/
- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(sc->target_mem_cgroup);

/*
* Determine the scan balance between anon and file LRUs.
diff --git a/mm/workingset.c b/mm/workingset.c
index a573be6c59fd9..11045febc3838 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -464,8 +464,12 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset)

rcu_read_unlock();

- /* Flush stats (and potentially sleep) outside the RCU read section */
- mem_cgroup_flush_stats_ratelimited();
+ /*
+ * Flush stats (and potentially sleep) outside the RCU read section.
+ * XXX: With per-memcg flushing and thresholding, is ratelimiting
+ * still needed here?
+ */
+ mem_cgroup_flush_stats_ratelimited(eviction_memcg);

eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
refault = atomic_long_read(&eviction_lruvec->nonresident_age);
@@ -676,7 +680,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
struct lruvec *lruvec;
int i;

- mem_cgroup_flush_stats();
+ mem_cgroup_flush_stats(sc->memcg);
lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
pages += lruvec_page_state_local(lruvec,
--
2.43.0.rc0.421.g78406f8d94-goog

2023-11-17 18:22:26

by Shakeel Butt

Subject: Re: [PATCH v3 1/5] mm: memcg: change flush_next_time to flush_last_time

On Thu, Nov 16, 2023 at 02:24:06AM +0000, Yosry Ahmed wrote:
> flush_next_time is an inaccurate name. It's not the next time that
> periodic flushing will happen, it's rather the next time that
> ratelimited flushing can happen if the periodic flusher is late.
>
> Simplify its semantics by just storing the timestamp of the last flush
> instead, flush_last_time. Move the 2*FLUSH_TIME addition to
> mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
> This way, all the ratelimiting semantics live in one place.
>
> No functional change intended.
>
> Signed-off-by: Yosry Ahmed <[email protected]>
> Tested-by: Domenico Cerasuolo <[email protected]>

Acked-by: Shakeel Butt <[email protected]>

2023-11-17 18:28:02

by Chris Li

Subject: Re: [PATCH v3 1/5] mm: memcg: change flush_next_time to flush_last_time

Hi Yosry,

Acked-by: Chris Li <[email protected]> (Google)

Chris

On Wed, Nov 15, 2023 at 6:24 PM Yosry Ahmed <[email protected]> wrote:
>
> flush_next_time is an inaccurate name. It's not the next time that
> periodic flushing will happen, it's rather the next time that
> ratelimited flushing can happen if the periodic flusher is late.
>
> Simplify its semantics by just storing the timestamp of the last flush
> instead, flush_last_time. Move the 2*FLUSH_TIME addition to
> mem_cgroup_flush_stats_ratelimited(), and add a comment explaining it.
> This way, all the ratelimiting semantics live in one place.
>
> No functional change intended.
>
> Signed-off-by: Yosry Ahmed <[email protected]>
> Tested-by: Domenico Cerasuolo <[email protected]>
> ---
> mm/memcontrol.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 774bd6e21e278..18931d82f108f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -593,7 +593,7 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
> static DEFINE_PER_CPU(unsigned int, stats_updates);
> static atomic_t stats_flush_ongoing = ATOMIC_INIT(0);
> static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
> -static u64 flush_next_time;
> +static u64 flush_last_time;
>
> #define FLUSH_TIME (2UL*HZ)
>
> @@ -653,7 +653,7 @@ static void do_flush_stats(void)
> atomic_xchg(&stats_flush_ongoing, 1))
> return;
>
> - WRITE_ONCE(flush_next_time, jiffies_64 + 2*FLUSH_TIME);
> + WRITE_ONCE(flush_last_time, jiffies_64);
>
> cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
>
> @@ -669,7 +669,8 @@ void mem_cgroup_flush_stats(void)
>
> void mem_cgroup_flush_stats_ratelimited(void)
> {
> - if (time_after64(jiffies_64, READ_ONCE(flush_next_time)))
> + /* Only flush if the periodic flusher is one full cycle late */
> + if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2*FLUSH_TIME))
> mem_cgroup_flush_stats();
> }
>
> --
> 2.43.0.rc0.421.g78406f8d94-goog
>
>

2023-11-17 18:51:16

by Shakeel Butt

Subject: Re: [PATCH v3 2/5] mm: memcg: move vmstats structs definition above flushing code

On Thu, Nov 16, 2023 at 02:24:07AM +0000, Yosry Ahmed wrote:
> The following patch will make use of those structs in the flushing code,
> so move their definitions (and a few other dependencies) a little bit up
> to reduce the diff noise in the following patch.
>
> No functional change intended.
>
> Signed-off-by: Yosry Ahmed <[email protected]>
> Tested-by: Domenico Cerasuolo <[email protected]>

Acked-by: Shakeel Butt <[email protected]>

2023-11-22 13:44:19

by Oliver Sang

Subject: Re: [PATCH v3 5/5] mm: memcg: restore subtree stats flushing



Hello,

kernel test robot noticed a -3.7% regression of aim7.jobs-per-min on:


commit: f6eccb430010201d3c155b73035f3bf755fe7697 ("[PATCH v3 5/5] mm: memcg: restore subtree stats flushing")
url: https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20231116-103300
base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/all/[email protected]/
patch subject: [PATCH v3 5/5] mm: memcg: restore subtree stats flushing

testcase: aim7
test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
parameters:

disk: 1BRD_48G
fs: ext4
test: disk_rr
load: 3000
cpufreq_governor: performance




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-lkp/[email protected]


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231122/[email protected]

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-12/performance/1BRD_48G/ext4/x86_64-rhel-8.3/3000/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp2/disk_rr/aim7

commit:
4c86da8ea2 ("mm: workingset: move the stats flush into workingset_test_recent()")
f6eccb4300 ("mm: memcg: restore subtree stats flushing")

4c86da8ea2d2f784 f6eccb430010201d3c155b73035
---------------- ---------------------------
%stddev %change %stddev
\ | \
15513 ? 14% +17.4% 18206 ? 7% numa-vmstat.node1.nr_mapped
616938 -3.7% 593885 aim7.jobs-per-min
149804 ? 4% +17.6% 176189 ? 6% aim7.time.involuntary_context_switches
2310 +6.3% 2455 aim7.time.system_time
24960256 ? 9% -14.1% 21429987 ? 7% perf-stat.i.branch-misses
1357010 ? 14% -22.6% 1050646 ? 10% perf-stat.i.dTLB-load-misses
0.20 ? 8% -0.0 0.16 ? 7% perf-stat.overall.branch-miss-rate%
2.80 +5.7% 2.96 perf-stat.overall.cpi
1506 +7.9% 1624 ? 2% perf-stat.overall.cycles-between-cache-misses
0.36 -5.4% 0.34 perf-stat.overall.ipc
24383919 ? 8% -14.5% 20853721 ? 7% perf-stat.ps.branch-misses
0.00 ?223% +2700.0% 0.01 ? 10% perf-sched.sch_delay.avg.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.00 ? 35% +1454.2% 0.06 ? 54% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.01 ? 13% +3233.3% 0.18 ? 41% perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.01 ? 30% +5900.0% 0.31 ? 47% perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.00 ?141% +337.5% 0.01 ? 6% perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.__flush_work.isra.0
0.00 ? 9% +2843.5% 0.11 ?116% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.00 ?223% +660.0% 0.01 ? 16% perf-sched.sch_delay.max.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.01 ? 9% -41.3% 0.00 ? 11% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.20 ?206% +3311.9% 6.66 ? 72% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ? 41% +1.8e+05% 28.67 ? 53% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.01 ? 52% +41275.8% 2.28 ? 72% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.01 ? 23% +2.8e+05% 20.56 ? 65% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.01 ? 11% +142.9% 0.01 ? 76% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.00 ?141% +412.5% 0.01 ? 15% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.__flush_work.isra.0
0.01 ? 42% +177.3% 0.02 ? 66% perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.01 ? 20% +1.3e+05% 12.95 ?105% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
0.07 ?131% +289.2% 0.27 ? 55% perf-sched.total_sch_delay.average.ms
0.39 ? 5% +307.4% 1.58 ? 22% perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.33 ? 46% +5674.0% 18.79 ? 73% perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.83 ?223% +41660.0% 348.00 ? 74% perf-sched.wait_and_delay.count.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
11.25 ? 64% +225.6% 36.62 ? 45% perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.81 ? 44% +1.1e+05% 912.56 ? 92% perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.61 ?223% +11430.9% 69.86 ? 55% perf-sched.wait_time.avg.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
1.44 ? 50% +1120.7% 17.58 ? 49% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.06 ?204% +6992.9% 4.16 ? 91% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart
0.38 ? 5% +265.2% 1.40 ? 21% perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.57 ?141% +1413.0% 8.59 ?110% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
0.00 ?223% +3.8e+06% 25.42 ?143% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.__flush_work.isra.0
0.35 ? 24% +5215.2% 18.72 ? 73% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
1.03 ? 70% +1610.0% 17.59 ? 49% perf-sched.wait_time.avg.ms.syslog_print.do_syslog.kmsg_read.vfs_read
2.82 ?223% +6949.3% 198.44 ? 60% perf-sched.wait_time.max.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
2.69 ? 45% +4345.1% 119.46 ? 71% perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.10 ?212% +10364.1% 10.59 ?106% perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.get_signal.arch_do_signal_or_restart
1.14 ?141% +6549.1% 75.53 ?137% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
0.00 ?223% +6.5e+06% 76.30 ?141% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.__flush_work.isra.0
0.91 ? 15% +1e+05% 912.19 ? 92% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
2.06 ? 70% +5708.3% 119.46 ? 71% perf-sched.wait_time.max.ms.syslog_print.do_syslog.kmsg_read.vfs_read
2.59 -0.1 2.45 perf-profile.calltrace.cycles-pp.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.vfs_write
2.10 -0.1 1.99 perf-profile.calltrace.cycles-pp.ext4_da_do_write_end.generic_perform_write.ext4_buffered_write_iter.vfs_write.ksys_write
0.70 ? 2% -0.1 0.59 perf-profile.calltrace.cycles-pp.workingset_activation.folio_mark_accessed.filemap_read.vfs_read.ksys_read
1.75 -0.1 1.65 perf-profile.calltrace.cycles-pp.copy_page_to_iter.filemap_read.vfs_read.ksys_read.do_syscall_64
1.41 -0.1 1.32 perf-profile.calltrace.cycles-pp.llseek
1.66 -0.1 1.57 perf-profile.calltrace.cycles-pp._copy_to_iter.copy_page_to_iter.filemap_read.vfs_read.ksys_read
1.75 -0.1 1.67 perf-profile.calltrace.cycles-pp.block_write_end.ext4_da_do_write_end.generic_perform_write.ext4_buffered_write_iter.vfs_write
1.66 -0.1 1.58 perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.ext4_da_do_write_end.generic_perform_write.ext4_buffered_write_iter
0.84 -0.1 0.78 perf-profile.calltrace.cycles-pp.ext4_da_map_blocks.ext4_da_get_block_prep.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write
0.86 -0.1 0.80 perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
0.94 -0.1 0.89 perf-profile.calltrace.cycles-pp.zero_user_segments.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
0.92 -0.1 0.86 perf-profile.calltrace.cycles-pp.memset_orig.zero_user_segments.ext4_block_write_begin.ext4_da_write_begin.generic_perform_write
0.71 -0.1 0.66 ? 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.llseek
0.86 -0.0 0.81 perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.ext4_buffered_write_iter.vfs_write.ksys_write
0.60 -0.0 0.56 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.llseek
0.94 -0.0 0.90 perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.ext4_da_do_write_end.generic_perform_write
0.85 -0.0 0.82 perf-profile.calltrace.cycles-pp.filemap_get_pages.filemap_read.vfs_read.ksys_read.do_syscall_64
0.71 -0.0 0.69 perf-profile.calltrace.cycles-pp.filemap_get_read_batch.filemap_get_pages.filemap_read.vfs_read.ksys_read
0.94 -0.0 0.91 perf-profile.calltrace.cycles-pp.balance_dirty_pages_ratelimited_flags.generic_perform_write.ext4_buffered_write_iter.vfs_write.ksys_write
1.08 -0.0 1.05 perf-profile.calltrace.cycles-pp.try_to_free_buffers.truncate_cleanup_folio.truncate_inode_pages_range.ext4_evict_inode.evict
0.70 -0.0 0.68 perf-profile.calltrace.cycles-pp.__folio_mark_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.ext4_da_do_write_end
1.35 -0.0 1.34 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.__folio_batch_release
1.39 -0.0 1.37 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.__folio_batch_release.truncate_inode_pages_range.ext4_evict_inode.evict
1.35 -0.0 1.34 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.__folio_batch_release.truncate_inode_pages_range
1.35 -0.0 1.34 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.__folio_batch_release.truncate_inode_pages_range.ext4_evict_inode
0.53 -0.0 0.51 perf-profile.calltrace.cycles-pp.folio_alloc.__filemap_get_folio.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
28.25 +0.2 28.47 perf-profile.calltrace.cycles-pp.__folio_batch_release.truncate_inode_pages_range.ext4_evict_inode.evict.__dentry_kill
25.49 +0.2 25.73 perf-profile.calltrace.cycles-pp.release_pages.__folio_batch_release.truncate_inode_pages_range.ext4_evict_inode.evict
24.68 +0.3 24.94 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release
24.70 +0.3 24.96 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.truncate_inode_pages_range
24.70 +0.3 24.97 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.release_pages.__folio_batch_release.truncate_inode_pages_range.ext4_evict_inode
33.66 +0.3 33.95 perf-profile.calltrace.cycles-pp.ext4_buffered_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
23.80 +0.3 24.11 perf-profile.calltrace.cycles-pp.folio_mark_accessed.filemap_read.vfs_read.ksys_read.do_syscall_64
32.63 +0.3 32.97 perf-profile.calltrace.cycles-pp.generic_perform_write.ext4_buffered_write_iter.vfs_write.ksys_write.do_syscall_64
22.93 +0.4 23.35 perf-profile.calltrace.cycles-pp.folio_activate.folio_mark_accessed.filemap_read.vfs_read.ksys_read
22.08 +0.4 22.50 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate.folio_mark_accessed.filemap_read
22.07 +0.4 22.49 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate.folio_mark_accessed
22.06 +0.4 22.48 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate
22.88 +0.4 23.31 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_activate.folio_mark_accessed.filemap_read.vfs_read
27.90 +0.6 28.49 perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.vfs_write.ksys_write
25.00 +0.8 25.76 perf-profile.calltrace.cycles-pp.__filemap_get_folio.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter.vfs_write
23.72 +0.8 24.54 perf-profile.calltrace.cycles-pp.filemap_add_folio.__filemap_get_folio.ext4_da_write_begin.generic_perform_write.ext4_buffered_write_iter
22.56 +0.8 23.39 perf-profile.calltrace.cycles-pp.folio_add_lru.filemap_add_folio.__filemap_get_folio.ext4_da_write_begin.generic_perform_write
22.52 +0.8 23.34 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_add_lru.filemap_add_folio.__filemap_get_folio.ext4_da_write_begin
21.97 +0.8 22.81 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.filemap_add_folio.__filemap_get_folio
21.94 +0.8 22.79 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru
21.96 +0.8 22.80 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.filemap_add_folio
0.41 -0.2 0.24 ? 2% perf-profile.children.cycles-pp.mem_cgroup_css_rstat_flush
0.54 -0.2 0.37 ? 2% perf-profile.children.cycles-pp.cgroup_rstat_flush_locked
0.55 -0.2 0.38 perf-profile.children.cycles-pp.cgroup_rstat_flush
2.60 -0.1 2.46 perf-profile.children.cycles-pp.ext4_block_write_begin
1.66 -0.1 1.56 perf-profile.children.cycles-pp.llseek
0.70 ? 2% -0.1 0.59 perf-profile.children.cycles-pp.workingset_activation
2.12 -0.1 2.02 perf-profile.children.cycles-pp.ext4_da_do_write_end
0.52 ? 3% -0.1 0.42 perf-profile.children.cycles-pp.workingset_age_nonresident
1.76 -0.1 1.66 perf-profile.children.cycles-pp.copy_page_to_iter
1.67 -0.1 1.58 perf-profile.children.cycles-pp._copy_to_iter
1.78 -0.1 1.69 perf-profile.children.cycles-pp.block_write_end
1.67 -0.1 1.59 perf-profile.children.cycles-pp.__block_commit_write
1.00 -0.1 0.94 perf-profile.children.cycles-pp.__entry_text_start
0.86 -0.1 0.81 perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.60 -0.1 0.54 ? 2% perf-profile.children.cycles-pp.__fdget_pos
0.79 -0.1 0.73 ? 2% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.87 -0.1 0.82 perf-profile.children.cycles-pp.copy_page_from_iter_atomic
0.95 -0.1 0.89 perf-profile.children.cycles-pp.zero_user_segments
0.85 -0.1 0.80 perf-profile.children.cycles-pp.ext4_da_map_blocks
0.95 -0.1 0.90 perf-profile.children.cycles-pp.memset_orig
0.43 -0.0 0.38 ? 2% perf-profile.children.cycles-pp.__fget_light
0.50 -0.0 0.46 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.47 ? 2% -0.0 0.42 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.41 ? 3% -0.0 0.36 ? 2% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.40 ? 3% -0.0 0.36 ? 2% perf-profile.children.cycles-pp.hrtimer_interrupt
0.37 ? 2% -0.0 0.33 ? 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.64 -0.0 0.60 perf-profile.children.cycles-pp.xas_load
0.74 -0.0 0.70 perf-profile.children.cycles-pp.filemap_get_read_batch
0.95 -0.0 0.92 perf-profile.children.cycles-pp.mark_buffer_dirty
0.44 -0.0 0.41 perf-profile.children.cycles-pp.file_modified
0.98 -0.0 0.94 perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
0.87 -0.0 0.84 perf-profile.children.cycles-pp.filemap_get_pages
0.43 -0.0 0.40 perf-profile.children.cycles-pp.fault_in_iov_iter_readable
0.31 ? 6% -0.0 0.28 perf-profile.children.cycles-pp.disk_rr
0.41 -0.0 0.38 perf-profile.children.cycles-pp.touch_atime
0.38 -0.0 0.35 perf-profile.children.cycles-pp.fault_in_readable
0.32 ? 2% -0.0 0.30 perf-profile.children.cycles-pp.xas_descend
0.37 -0.0 0.34 ? 3% perf-profile.children.cycles-pp.ksys_lseek
0.34 -0.0 0.32 perf-profile.children.cycles-pp.atime_needs_update
1.08 -0.0 1.06 perf-profile.children.cycles-pp.try_to_free_buffers
0.20 ? 2% -0.0 0.17 ? 2% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.22 ? 2% -0.0 0.20 ? 2% perf-profile.children.cycles-pp.ext4_es_insert_delayed_block
0.34 ? 2% -0.0 0.32 perf-profile.children.cycles-pp.__cond_resched
0.44 -0.0 0.42 perf-profile.children.cycles-pp.filemap_get_entry
0.23 ? 2% -0.0 0.21 perf-profile.children.cycles-pp.inode_needs_update_time
0.71 -0.0 0.69 perf-profile.children.cycles-pp.__folio_mark_dirty
0.37 -0.0 0.36 perf-profile.children.cycles-pp.__mem_cgroup_charge
0.24 ? 2% -0.0 0.22 ? 2% perf-profile.children.cycles-pp._raw_spin_lock
0.24 -0.0 0.22 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.40 -0.0 0.38 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.14 -0.0 0.13 ? 2% perf-profile.children.cycles-pp.up_write
0.50 -0.0 0.49 perf-profile.children.cycles-pp.alloc_pages_mpol
0.14 -0.0 0.13 perf-profile.children.cycles-pp.current_time
0.10 -0.0 0.09 perf-profile.children.cycles-pp.__es_insert_extent
0.25 ? 3% +0.0 0.27 ? 3% perf-profile.children.cycles-pp.__mod_lruvec_state
0.19 ? 3% +0.0 0.21 ? 3% perf-profile.children.cycles-pp.__mod_node_page_state
1.12 +0.1 1.20 perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.99 +0.1 1.09 ? 2% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.00 +0.1 0.13 ? 3% perf-profile.children.cycles-pp.mutex_spin_on_owner
30.58 +0.1 30.72 perf-profile.children.cycles-pp.dput
0.64 +0.1 0.79 ? 4% perf-profile.children.cycles-pp.cgroup_rstat_updated
30.44 +0.2 30.60 perf-profile.children.cycles-pp.truncate_inode_pages_range
0.00 +0.2 0.18 ? 3% perf-profile.children.cycles-pp.__mutex_lock
97.33 +0.2 97.51 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
97.11 +0.2 97.31 perf-profile.children.cycles-pp.do_syscall_64
28.25 +0.2 28.47 perf-profile.children.cycles-pp.__folio_batch_release
25.74 +0.2 25.96 perf-profile.children.cycles-pp.release_pages
33.71 +0.3 33.99 perf-profile.children.cycles-pp.ext4_buffered_write_iter
23.82 +0.3 24.12 perf-profile.children.cycles-pp.folio_mark_accessed
32.74 +0.3 33.09 perf-profile.children.cycles-pp.generic_perform_write
22.94 +0.4 23.36 perf-profile.children.cycles-pp.folio_activate
27.94 +0.6 28.53 perf-profile.children.cycles-pp.ext4_da_write_begin
25.04 +0.8 25.80 perf-profile.children.cycles-pp.__filemap_get_folio
23.73 +0.8 24.54 perf-profile.children.cycles-pp.filemap_add_folio
22.61 +0.8 23.44 perf-profile.children.cycles-pp.folio_add_lru
48.23 +1.2 49.47 perf-profile.children.cycles-pp.folio_batch_move_lru
71.67 +1.5 73.13 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
71.80 +1.5 73.29 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
71.64 +1.5 73.14 perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
0.40 ? 2% -0.2 0.23 ? 2% perf-profile.self.cycles-pp.mem_cgroup_css_rstat_flush
0.52 ? 2% -0.1 0.42 ? 2% perf-profile.self.cycles-pp.workingset_age_nonresident
1.65 -0.1 1.56 perf-profile.self.cycles-pp._copy_to_iter
0.86 -0.1 0.81 perf-profile.self.cycles-pp.copy_page_from_iter_atomic
0.94 -0.1 0.89 perf-profile.self.cycles-pp.memset_orig
0.76 -0.0 0.71 ? 2% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.52 ? 4% -0.0 0.47 perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.40 -0.0 0.36 ? 3% perf-profile.self.cycles-pp.__fget_light
0.53 ? 2% -0.0 0.50 perf-profile.self.cycles-pp.vfs_write
0.63 -0.0 0.59 perf-profile.self.cycles-pp.filemap_read
0.66 -0.0 0.62 perf-profile.self.cycles-pp.__block_commit_write
0.37 -0.0 0.34 ? 2% perf-profile.self.cycles-pp.fault_in_readable
0.43 -0.0 0.41 perf-profile.self.cycles-pp.vfs_read
0.26 ? 4% -0.0 0.24 ? 2% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
0.28 -0.0 0.26 ? 2% perf-profile.self.cycles-pp.xas_descend
0.28 -0.0 0.26 perf-profile.self.cycles-pp.read
0.28 -0.0 0.25 perf-profile.self.cycles-pp.__filemap_get_folio
0.17 -0.0 0.15 ? 2% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.27 -0.0 0.25 perf-profile.self.cycles-pp.do_syscall_64
0.46 -0.0 0.44 perf-profile.self.cycles-pp.filemap_get_read_batch
0.22 ? 2% -0.0 0.20 ? 4% perf-profile.self.cycles-pp.ext4_da_write_begin
0.26 -0.0 0.25 ? 2% perf-profile.self.cycles-pp.__entry_text_start
0.24 -0.0 0.22 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.21 ? 2% -0.0 0.19 ? 2% perf-profile.self.cycles-pp.filemap_get_entry
0.13 -0.0 0.12 ? 3% perf-profile.self.cycles-pp.down_write
0.22 ? 2% -0.0 0.21 ? 2% perf-profile.self.cycles-pp.ext4_da_do_write_end
0.20 ? 2% -0.0 0.19 ? 2% perf-profile.self.cycles-pp.__cond_resched
0.17 -0.0 0.16 perf-profile.self.cycles-pp.folio_mark_accessed
0.10 -0.0 0.09 perf-profile.self.cycles-pp.ksys_write
0.09 -0.0 0.08 perf-profile.self.cycles-pp.entry_SYSCALL_64_safe_stack
0.12 -0.0 0.11 perf-profile.self.cycles-pp.find_lock_entries
0.18 ? 2% +0.0 0.20 ? 2% perf-profile.self.cycles-pp.__mod_node_page_state
0.16 +0.0 0.19 ? 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.13 ? 3% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.54 ? 2% +0.2 0.72 ? 4% perf-profile.self.cycles-pp.cgroup_rstat_updated
71.67 +1.5 73.13 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-11-22 13:56:11

by Oliver Sang

[permalink] [raw]
Subject: Re: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg



Hello,

kernel test robot noticed a -30.2% regression of will-it-scale.per_thread_ops on:


commit: c7fbfc7b4e089c4a9b292b1973a42a5761c1342f ("[PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg")
url: https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20231116-103300
base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/all/[email protected]/
patch subject: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg

testcase: will-it-scale
test machine: 104 threads 2 sockets (Skylake) with 192G memory
parameters:

nr_task: 50%
mode: thread
test: fallocate2
cpufreq_governor: performance
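
(For readers unfamiliar with this testcase: each benchmark thread runs roughly
the loop sketched below, allocating and immediately freeing tmpfs-backed memory
as fast as possible. This is an illustrative sketch, not the actual
will-it-scale fallocate2 source; the memfd setup and the 128 MiB size are
assumptions.)

/* Minimal user-space sketch of a fallocate2-style worker thread. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* tmpfs-backed file, so fallocate goes through shmem_fallocate() */
	int fd = memfd_create("fallocate2", 0);
	const off_t len = 128 << 20;	/* 128 MiB; the real size is an assumption */

	if (fd < 0)
		return 1;

	for (;;) {			/* the real test counts iterations/second */
		fallocate(fd, 0, 0, len);	/* allocate the pages...  */
		ftruncate(fd, 0);		/* ...and free them again */
	}
}

Each iteration charges and uncharges pages in the thread's memcg, so the loop
spends most of its time in the charge/uncharge and LRU paths seen in the
profile below.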




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-lkp/[email protected]


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231122/[email protected]

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/thread/50%/debian-11.1-x86_64-20220510.cgz/lkp-skl-fpga01/fallocate2/will-it-scale

commit:
c5caa5bb03 ("mm: memcg: move vmstats structs definition above flushing code")
c7fbfc7b4e ("mm: memcg: make stats flushing threshold per-memcg")

c5caa5bb0376e3e5 c7fbfc7b4e089c4a9b292b1973a
---------------- ---------------------------
%stddev %change %stddev
\ | \
1.84 -0.5 1.37 ? 9% mpstat.cpu.all.usr%
0.08 -25.0% 0.06 turbostat.IPC
3121 -9.2% 2835 ? 5% vmstat.system.cs
78.17 ? 12% +96.6% 153.67 ? 18% perf-c2c.DRAM.local
504.17 ? 6% +34.4% 677.50 ? 4% perf-c2c.DRAM.remote
3980762 -30.2% 2777359 will-it-scale.52.threads
76552 -30.2% 53410 will-it-scale.per_thread_ops
3980762 -30.2% 2777359 will-it-scale.workload
1.192e+09 ? 2% -30.2% 8.324e+08 ? 3% numa-numastat.node0.local_node
1.192e+09 ? 2% -30.2% 8.324e+08 ? 3% numa-numastat.node0.numa_hit
1.215e+09 ? 2% -30.3% 8.471e+08 ? 3% numa-numastat.node1.local_node
1.215e+09 ? 2% -30.3% 8.474e+08 ? 3% numa-numastat.node1.numa_hit
1.192e+09 ? 2% -30.2% 8.324e+08 ? 3% numa-vmstat.node0.numa_hit
1.192e+09 ? 2% -30.2% 8.324e+08 ? 3% numa-vmstat.node0.numa_local
1.215e+09 ? 2% -30.3% 8.474e+08 ? 3% numa-vmstat.node1.numa_hit
1.215e+09 ? 2% -30.3% 8.471e+08 ? 3% numa-vmstat.node1.numa_local
31404 -1.6% 30913 proc-vmstat.nr_slab_reclaimable
2.408e+09 -30.2% 1.68e+09 proc-vmstat.numa_hit
2.407e+09 -30.2% 1.68e+09 proc-vmstat.numa_local
2.404e+09 -30.2% 1.678e+09 proc-vmstat.pgalloc_normal
2.403e+09 -30.2% 1.678e+09 proc-vmstat.pgfree
0.05 ? 8% -27.3% 0.04 ? 4% perf-sched.wait_and_delay.avg.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.05 ? 10% -24.9% 0.04 ? 8% perf-sched.wait_and_delay.avg.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.05 ? 8% -27.2% 0.04 ? 5% perf-sched.wait_and_delay.avg.ms.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
1.14 +14.1% 1.30 perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
198.38 ? 3% +16.5% 231.12 ? 3% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
1563 ? 5% -11.4% 1384 ? 5% perf-sched.wait_and_delay.count.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
1677 ? 5% -18.7% 1364 ? 4% perf-sched.wait_and_delay.count.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
3815 ? 2% -14.5% 3260 ? 2% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.51 ? 5% -32.3% 0.35 ? 16% perf-sched.wait_and_delay.max.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.47 ? 11% -33.3% 0.31 ? 20% perf-sched.wait_and_delay.max.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
2.37 +13.0% 2.68 ? 2% perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.05 ? 8% -27.3% 0.04 ? 4% perf-sched.wait_time.avg.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.05 ? 10% -24.9% 0.04 ? 8% perf-sched.wait_time.avg.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.05 ? 8% -27.2% 0.04 ? 5% perf-sched.wait_time.avg.ms.__cond_resched.shmem_undo_range.shmem_setattr.notify_change.do_truncate
1.14 +14.1% 1.30 perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
198.37 ? 3% +16.5% 231.11 ? 3% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.39 ? 31% -72.9% 0.11 ? 28% perf-sched.wait_time.max.ms.__cond_resched.__alloc_pages.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio
0.51 ? 5% -32.3% 0.35 ? 16% perf-sched.wait_time.max.ms.__cond_resched.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
0.47 ? 11% -33.3% 0.31 ? 20% perf-sched.wait_time.max.ms.__cond_resched.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
2.37 +13.1% 2.68 ? 2% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
0.82 ? 14% +174.7% 2.24 ? 30% perf-stat.i.MPKI
8.476e+09 -27.7% 6.127e+09 ? 10% perf-stat.i.branch-instructions
55486131 -28.1% 39884260 ? 6% perf-stat.i.branch-misses
14.80 ? 2% +6.2 20.96 ? 7% perf-stat.i.cache-miss-rate%
30690945 ? 3% +79.9% 55207216 ? 10% perf-stat.i.cache-misses
2.066e+08 +24.2% 2.567e+08 ? 7% perf-stat.i.cache-references
3070 -9.7% 2772 ? 5% perf-stat.i.context-switches
3.58 ? 2% +39.7% 5.00 ? 11% perf-stat.i.cpi
4688 ? 3% -47.9% 2442 ? 4% perf-stat.i.cycles-between-cache-misses
4098916 -29.7% 2879809 perf-stat.i.dTLB-load-misses
1.052e+10 -27.5% 7.63e+09 ? 10% perf-stat.i.dTLB-loads
5.845e+09 -30.7% 4.051e+09 ? 10% perf-stat.i.dTLB-stores
77.61 -6.0 71.56 perf-stat.i.iTLB-load-miss-rate%
4058819 -32.5% 2739054 ? 8% perf-stat.i.iTLB-load-misses
4.089e+10 -28.3% 2.932e+10 ? 10% perf-stat.i.instructions
0.28 -26.8% 0.21 ? 5% perf-stat.i.ipc
240.84 -27.9% 173.57 ? 10% perf-stat.i.metric.M/sec
3814721 ? 3% +72.2% 6569712 ? 10% perf-stat.i.node-load-misses
407308 ? 7% +72.0% 700502 ? 18% perf-stat.i.node-loads
1323090 ? 2% -28.1% 951590 ? 12% perf-stat.i.node-store-misses
36568 ? 2% -20.7% 29014 ? 12% perf-stat.i.node-stores
0.75 ? 3% +151.0% 1.88 perf-stat.overall.MPKI
14.85 ? 2% +6.6 21.47 ? 3% perf-stat.overall.cache-miss-rate%
3.53 +33.8% 4.72 perf-stat.overall.cpi
4704 ? 3% -46.8% 2505 perf-stat.overall.cycles-between-cache-misses
77.62 -6.2 71.39 perf-stat.overall.iTLB-load-miss-rate%
0.28 -25.3% 0.21 perf-stat.overall.ipc
3121462 +7.4% 3353425 perf-stat.overall.path-length
8.451e+09 -27.6% 6.119e+09 ? 10% perf-stat.ps.branch-instructions
55320195 -28.0% 39804925 ? 6% perf-stat.ps.branch-misses
30594557 ? 3% +80.2% 55116821 ? 9% perf-stat.ps.cache-misses
2.059e+08 +24.4% 2.561e+08 ? 6% perf-stat.ps.cache-references
3059 -9.6% 2765 ? 5% perf-stat.ps.context-switches
4085949 -29.7% 2871251 perf-stat.ps.dTLB-load-misses
1.049e+10 -27.4% 7.62e+09 ? 10% perf-stat.ps.dTLB-loads
5.828e+09 -30.6% 4.046e+09 ? 10% perf-stat.ps.dTLB-stores
4046367 -32.4% 2734227 ? 7% perf-stat.ps.iTLB-load-misses
4.077e+10 -28.2% 2.928e+10 ? 10% perf-stat.ps.instructions
3802900 ? 3% +72.5% 6559980 ? 10% perf-stat.ps.node-load-misses
406123 ? 7% +72.2% 699397 ? 17% perf-stat.ps.node-loads
1319155 ? 2% -28.0% 950261 ? 12% perf-stat.ps.node-store-misses
36542 ? 2% -20.6% 29007 ? 11% perf-stat.ps.node-stores
1.243e+13 -25.0% 9.313e+12 perf-stat.total.instructions
1.26 ? 2% -0.4 0.91 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.fallocate64
1.22 -0.3 0.88 ? 2% perf-profile.calltrace.cycles-pp.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
0.92 ? 2% -0.3 0.62 ? 3% perf-profile.calltrace.cycles-pp.shmem_inode_acct_blocks.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
1.04 -0.3 0.76 ? 2% perf-profile.calltrace.cycles-pp.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.80 -0.2 0.58 ? 3% perf-profile.calltrace.cycles-pp.__alloc_pages.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp
1.25 ? 2% -0.2 1.07 perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range.shmem_setattr
1.25 ? 2% -0.2 1.07 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range.shmem_setattr.notify_change
1.23 ? 2% -0.2 1.06 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release.shmem_undo_range
1.23 ? 2% -0.2 1.06 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.__folio_batch_release
1.23 ? 2% -0.2 1.05 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu
1.16 ? 2% -0.1 1.02 ? 2% perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.fallocate64
0.68 +0.1 0.75 ? 2% perf-profile.calltrace.cycles-pp.__mem_cgroup_uncharge_list.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr
1.07 +0.1 1.18 ? 2% perf-profile.calltrace.cycles-pp.lru_add_fn.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp
2.95 +0.3 3.21 ? 2% perf-profile.calltrace.cycles-pp.truncate_inode_folio.shmem_undo_range.shmem_setattr.notify_change.do_truncate
2.60 +0.4 2.95 perf-profile.calltrace.cycles-pp.filemap_remove_folio.truncate_inode_folio.shmem_undo_range.shmem_setattr.notify_change
2.27 +0.4 2.71 ? 2% perf-profile.calltrace.cycles-pp.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio.shmem_undo_range.shmem_setattr
1.38 ? 3% +0.5 1.85 ? 5% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
2.29 ? 2% +0.6 2.90 ? 2% perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
0.00 +0.6 0.63 ? 2% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.release_pages.__folio_batch_release.shmem_undo_range.shmem_setattr
0.00 +0.7 0.74 ? 3% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.lru_add_fn.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio
1.30 +0.8 2.07 ? 3% perf-profile.calltrace.cycles-pp.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio.shmem_undo_range
0.73 ? 2% +0.8 1.53 ? 2% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__mod_lruvec_page_state.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio
1.23 +0.8 2.04 ? 3% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.filemap_unaccount_folio.__filemap_remove_folio.filemap_remove_folio.truncate_inode_folio
0.00 +0.8 0.82 ? 2% perf-profile.calltrace.cycles-pp.__count_memcg_events.mem_cgroup_commit_charge.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp
1.39 ? 2% +0.9 2.32 ? 2% perf-profile.calltrace.cycles-pp.__mod_lruvec_page_state.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
0.59 ? 2% +0.9 1.53 ? 2% perf-profile.calltrace.cycles-pp.__mod_memcg_lruvec_state.__mod_lruvec_page_state.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp
38.12 +1.0 39.16 perf-profile.calltrace.cycles-pp.vfs_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe.fallocate64
0.62 ? 4% +1.1 1.71 ? 3% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate
37.61 +1.2 38.80 perf-profile.calltrace.cycles-pp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.54 +1.5 38.02 perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate.do_syscall_64
35.97 +1.6 37.60 perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate.__x64_sys_fallocate
2.48 ? 3% +2.3 4.80 ? 4% perf-profile.calltrace.cycles-pp.__mem_cgroup_charge.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fallocate.vfs_fallocate
1.28 ? 2% -0.4 0.92 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.23 -0.3 0.88 ? 2% perf-profile.children.cycles-pp.shmem_alloc_folio
0.95 ? 2% -0.3 0.64 ? 3% perf-profile.children.cycles-pp.shmem_inode_acct_blocks
1.07 -0.3 0.77 ? 3% perf-profile.children.cycles-pp.alloc_pages_mpol
0.86 ? 2% -0.3 0.58 ? 2% perf-profile.children.cycles-pp.xas_store
0.84 -0.2 0.61 ? 3% perf-profile.children.cycles-pp.__alloc_pages
1.26 ? 2% -0.2 1.08 perf-profile.children.cycles-pp.lru_add_drain_cpu
0.61 ? 3% -0.2 0.43 perf-profile.children.cycles-pp.__entry_text_start
0.56 ? 2% -0.2 0.40 ? 3% perf-profile.children.cycles-pp.free_unref_page_list
0.26 ? 7% -0.2 0.11 ? 5% perf-profile.children.cycles-pp.__list_add_valid_or_report
1.19 ? 2% -0.1 1.04 ? 2% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.45 ? 4% -0.1 0.31 perf-profile.children.cycles-pp.__mod_lruvec_state
0.48 ? 2% -0.1 0.35 ? 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.38 ? 5% -0.1 0.27 ? 2% perf-profile.children.cycles-pp.xas_load
0.38 ? 2% -0.1 0.27 ? 2% perf-profile.children.cycles-pp._raw_spin_lock
0.33 ? 4% -0.1 0.23 ? 2% perf-profile.children.cycles-pp.__mod_node_page_state
0.42 ? 2% -0.1 0.32 ? 5% perf-profile.children.cycles-pp.find_lock_entries
0.32 ? 2% -0.1 0.23 ? 2% perf-profile.children.cycles-pp.__dquot_alloc_space
0.33 ? 2% -0.1 0.24 ? 3% perf-profile.children.cycles-pp.rmqueue
0.24 ? 3% -0.1 0.17 ? 4% perf-profile.children.cycles-pp.xas_descend
0.23 ? 3% -0.1 0.16 ? 4% perf-profile.children.cycles-pp.xas_init_marks
0.25 ? 3% -0.1 0.17 ? 2% perf-profile.children.cycles-pp.xas_clear_mark
0.23 ? 2% -0.1 0.16 ? 5% perf-profile.children.cycles-pp.__cond_resched
0.28 ? 5% -0.1 0.22 ? 2% perf-profile.children.cycles-pp.filemap_get_entry
0.24 ? 3% -0.1 0.18 ? 4% perf-profile.children.cycles-pp.truncate_cleanup_folio
0.16 ? 4% -0.1 0.10 ? 4% perf-profile.children.cycles-pp.xas_find_conflict
0.09 ? 7% -0.1 0.03 ? 70% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.18 -0.1 0.12 ? 6% perf-profile.children.cycles-pp.shmem_recalc_inode
0.18 ? 2% -0.1 0.12 ? 3% perf-profile.children.cycles-pp.folio_unlock
0.17 ? 4% -0.1 0.12 ? 3% perf-profile.children.cycles-pp.free_unref_page_prepare
0.16 ? 6% -0.1 0.11 ? 4% perf-profile.children.cycles-pp.security_file_permission
0.13 ? 7% -0.0 0.08 ? 13% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.20 ? 4% -0.0 0.15 ? 2% perf-profile.children.cycles-pp.free_unref_page_commit
0.16 ? 5% -0.0 0.11 ? 3% perf-profile.children.cycles-pp.noop_dirty_folio
0.15 ? 5% -0.0 0.11 ? 4% perf-profile.children.cycles-pp.file_modified
0.12 ? 10% -0.0 0.08 perf-profile.children.cycles-pp.__percpu_counter_limited_add
0.19 ? 5% -0.0 0.14 ? 5% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.11 ? 12% -0.0 0.06 ? 17% perf-profile.children.cycles-pp.cap_vm_enough_memory
0.14 ? 5% -0.0 0.10 ? 6% perf-profile.children.cycles-pp.__fget_light
0.14 ? 7% -0.0 0.10 ? 4% perf-profile.children.cycles-pp.apparmor_file_permission
0.14 ? 2% -0.0 0.10 ? 4% perf-profile.children.cycles-pp.__folio_cancel_dirty
0.12 ? 3% -0.0 0.08 ? 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
0.11 ? 10% -0.0 0.08 ? 16% perf-profile.children.cycles-pp.__vm_enough_memory
0.11 ? 8% -0.0 0.08 ? 4% perf-profile.children.cycles-pp.xas_start
0.11 ? 3% -0.0 0.08 ? 4% perf-profile.children.cycles-pp.__fsnotify_parent
0.18 ? 2% -0.0 0.14 ? 6% perf-profile.children.cycles-pp.__list_del_entry_valid_or_report
0.08 ? 6% -0.0 0.04 ? 45% perf-profile.children.cycles-pp.__get_file_rcu
0.12 ? 7% -0.0 0.09 ? 5% perf-profile.children.cycles-pp.inode_add_bytes
0.11 ? 4% -0.0 0.08 ? 8% perf-profile.children.cycles-pp._raw_spin_trylock
0.08 ? 6% -0.0 0.05 ? 45% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.10 -0.0 0.07 ? 9% perf-profile.children.cycles-pp.inode_needs_update_time
0.09 ? 7% -0.0 0.06 ? 6% perf-profile.children.cycles-pp.get_pfnblock_flags_mask
0.07 ? 6% -0.0 0.05 ? 45% perf-profile.children.cycles-pp.shmem_is_huge
0.09 ? 7% -0.0 0.07 ? 7% perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
0.08 ? 4% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.policy_nodemask
0.19 ? 3% -0.0 0.17 ? 4% perf-profile.children.cycles-pp.try_charge_memcg
0.08 ? 8% -0.0 0.06 ? 8% perf-profile.children.cycles-pp.down_write
0.09 ? 7% -0.0 0.07 ? 5% perf-profile.children.cycles-pp.xas_create
0.09 ? 7% -0.0 0.07 ? 7% perf-profile.children.cycles-pp.filemap_free_folio
0.08 ? 4% -0.0 0.06 ? 6% perf-profile.children.cycles-pp.xas_find
0.07 ? 5% +0.0 0.09 ? 5% perf-profile.children.cycles-pp.propagate_protected_usage
0.25 +0.0 0.28 ? 2% perf-profile.children.cycles-pp.uncharge_folio
0.43 ? 2% +0.0 0.47 ? 2% perf-profile.children.cycles-pp.uncharge_batch
0.69 +0.1 0.75 ? 2% perf-profile.children.cycles-pp.__mem_cgroup_uncharge_list
1.10 +0.1 1.20 ? 2% perf-profile.children.cycles-pp.lru_add_fn
2.96 +0.3 3.21 perf-profile.children.cycles-pp.truncate_inode_folio
2.60 +0.4 2.96 perf-profile.children.cycles-pp.filemap_remove_folio
2.29 +0.4 2.73 ? 2% perf-profile.children.cycles-pp.__filemap_remove_folio
1.39 ? 3% +0.5 1.85 ? 5% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
2.34 ? 2% +0.6 2.93 ? 2% perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.18 ? 5% +0.7 0.92 ? 2% perf-profile.children.cycles-pp.__count_memcg_events
1.32 +0.8 2.07 ? 3% perf-profile.children.cycles-pp.filemap_unaccount_folio
38.14 +1.0 39.17 perf-profile.children.cycles-pp.vfs_fallocate
0.64 ? 4% +1.1 1.72 ? 3% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
37.63 +1.2 38.81 perf-profile.children.cycles-pp.shmem_fallocate
36.57 +1.5 38.05 perf-profile.children.cycles-pp.shmem_get_folio_gfp
36.04 +1.6 37.65 perf-profile.children.cycles-pp.shmem_alloc_and_add_folio
2.66 +1.7 4.38 ? 3% perf-profile.children.cycles-pp.__mod_lruvec_page_state
2.49 ? 2% +2.3 4.80 ? 4% perf-profile.children.cycles-pp.__mem_cgroup_charge
1.99 ? 2% +2.5 4.46 perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
1.28 ? 2% -0.4 0.92 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.69 ? 2% -0.2 0.50 ? 2% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.54 ? 2% -0.2 0.36 ? 2% perf-profile.self.cycles-pp.release_pages
0.47 ? 2% -0.2 0.31 ? 2% perf-profile.self.cycles-pp.xas_store
0.53 ? 3% -0.2 0.37 ? 2% perf-profile.self.cycles-pp.__entry_text_start
0.36 ? 3% -0.2 0.21 ? 2% perf-profile.self.cycles-pp.shmem_add_to_page_cache
0.26 ? 8% -0.2 0.11 ? 5% perf-profile.self.cycles-pp.__list_add_valid_or_report
1.14 ? 2% -0.1 1.01 ? 2% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.40 ? 4% -0.1 0.28 ? 4% perf-profile.self.cycles-pp.lru_add_fn
0.37 ? 2% -0.1 0.26 perf-profile.self.cycles-pp._raw_spin_lock
0.32 ? 3% -0.1 0.22 ? 2% perf-profile.self.cycles-pp.__mod_node_page_state
0.35 ? 3% -0.1 0.25 ? 2% perf-profile.self.cycles-pp.shmem_fallocate
0.50 ? 2% -0.1 0.40 ? 3% perf-profile.self.cycles-pp.folio_batch_move_lru
0.34 ? 3% -0.1 0.26 ? 5% perf-profile.self.cycles-pp.find_lock_entries
0.28 ? 2% -0.1 0.20 ? 5% perf-profile.self.cycles-pp.__alloc_pages
0.22 ? 2% -0.1 0.16 ? 3% perf-profile.self.cycles-pp.xas_clear_mark
0.21 ? 3% -0.1 0.15 ? 4% perf-profile.self.cycles-pp.shmem_alloc_and_add_folio
0.18 ? 3% -0.1 0.12 ? 5% perf-profile.self.cycles-pp.free_unref_page_list
0.22 ? 3% -0.1 0.16 ? 3% perf-profile.self.cycles-pp.xas_descend
0.20 ? 6% -0.1 0.14 ? 2% perf-profile.self.cycles-pp.__dquot_alloc_space
0.18 ? 4% -0.1 0.12 ? 6% perf-profile.self.cycles-pp.shmem_inode_acct_blocks
0.21 ? 5% -0.1 0.15 ? 5% perf-profile.self.cycles-pp.vfs_fallocate
0.18 ? 2% -0.1 0.12 ? 3% perf-profile.self.cycles-pp.__filemap_remove_folio
0.17 ? 4% -0.1 0.12 ? 3% perf-profile.self.cycles-pp.folio_unlock
0.20 ? 3% -0.0 0.14 ? 5% perf-profile.self.cycles-pp.shmem_get_folio_gfp
0.16 ? 5% -0.0 0.12 ? 6% perf-profile.self.cycles-pp.__cond_resched
0.17 ? 5% -0.0 0.12 ? 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.14 ? 4% -0.0 0.10 ? 5% perf-profile.self.cycles-pp.xas_load
0.15 ? 2% -0.0 0.10 ? 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.16 ? 4% -0.0 0.12 ? 4% perf-profile.self.cycles-pp.alloc_pages_mpol
0.12 ? 6% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.rmqueue
0.13 ? 6% -0.0 0.09 ? 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.13 ? 8% -0.0 0.09 ? 4% perf-profile.self.cycles-pp.apparmor_file_permission
0.15 ? 3% -0.0 0.11 ? 3% perf-profile.self.cycles-pp.noop_dirty_folio
0.10 ? 10% -0.0 0.06 ? 17% perf-profile.self.cycles-pp.cap_vm_enough_memory
0.07 ? 8% -0.0 0.03 ? 70% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.12 ? 10% -0.0 0.08 ? 4% perf-profile.self.cycles-pp.__percpu_counter_limited_add
0.12 ? 4% -0.0 0.08 ? 8% perf-profile.self.cycles-pp.folio_add_lru
0.10 ? 3% -0.0 0.06 ? 7% perf-profile.self.cycles-pp.xas_init_marks
0.10 ? 9% -0.0 0.07 perf-profile.self.cycles-pp.xas_start
0.13 ? 3% -0.0 0.10 ? 5% perf-profile.self.cycles-pp.filemap_remove_folio
0.11 ? 6% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.__fsnotify_parent
0.16 ? 4% -0.0 0.12 ? 4% perf-profile.self.cycles-pp.free_unref_page_commit
0.12 ? 4% -0.0 0.08 ? 4% perf-profile.self.cycles-pp.__mod_lruvec_state
0.11 ? 3% -0.0 0.08 ? 7% perf-profile.self.cycles-pp.fallocate64
0.08 ? 6% -0.0 0.04 ? 45% perf-profile.self.cycles-pp.__get_file_rcu
0.11 ? 4% -0.0 0.08 ? 8% perf-profile.self.cycles-pp.__folio_cancel_dirty
0.11 ? 3% -0.0 0.08 ? 6% perf-profile.self.cycles-pp._raw_spin_trylock
0.11 ? 6% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.truncate_cleanup_folio
0.06 ? 7% -0.0 0.03 ? 70% perf-profile.self.cycles-pp.__fget_light
0.17 ? 3% -0.0 0.14 ? 5% perf-profile.self.cycles-pp.__list_del_entry_valid_or_report
0.09 ? 5% -0.0 0.06 ? 7% perf-profile.self.cycles-pp.filemap_get_entry
0.22 ? 6% -0.0 0.19 ? 3% perf-profile.self.cycles-pp.page_counter_uncharge
0.09 ? 7% -0.0 0.06 ? 6% perf-profile.self.cycles-pp.get_pfnblock_flags_mask
0.08 ? 5% -0.0 0.06 ? 8% perf-profile.self.cycles-pp.free_unref_page_prepare
0.07 ? 11% -0.0 0.04 ? 44% perf-profile.self.cycles-pp.shmem_is_huge
0.08 ? 6% -0.0 0.05 ? 7% perf-profile.self.cycles-pp.__x64_sys_fallocate
0.08 ? 7% -0.0 0.06 ? 6% perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
0.09 ? 7% -0.0 0.07 ? 7% perf-profile.self.cycles-pp.filemap_free_folio
0.08 ? 8% -0.0 0.06 perf-profile.self.cycles-pp.shmem_alloc_folio
0.12 ? 6% -0.0 0.10 ? 6% perf-profile.self.cycles-pp.try_charge_memcg
0.07 ? 5% +0.0 0.09 ? 5% perf-profile.self.cycles-pp.propagate_protected_usage
0.24 +0.0 0.27 ? 3% perf-profile.self.cycles-pp.uncharge_folio
0.46 ? 4% +0.4 0.86 ? 5% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
1.38 ? 3% +0.5 1.84 ? 5% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.16 ? 3% +0.7 0.90 ? 2% perf-profile.self.cycles-pp.__count_memcg_events
0.28 ? 3% +0.8 1.06 ? 5% perf-profile.self.cycles-pp.__mem_cgroup_charge
1.86 ? 2% +2.5 4.36 perf-profile.self.cycles-pp.__mod_memcg_lruvec_state




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-11-27 21:14:37

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg

On Wed, Nov 22, 2023 at 5:54 AM kernel test robot <[email protected]> wrote:
>
>
>
> Hello,
>
> kernel test robot noticed a -30.2% regression of will-it-scale.per_thread_ops on:
>
>
> commit: c7fbfc7b4e089c4a9b292b1973a42a5761c1342f ("[PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg")
> url: https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20231116-103300
> base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
> patch link: https://lore.kernel.org/all/[email protected]/
> patch subject: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg
>
> testcase: will-it-scale
> test machine: 104 threads 2 sockets (Skylake) with 192G memory
> parameters:
>
> nr_task: 50%
> mode: thread
> test: fallocate2
> cpufreq_governor: performance
>
>

This regression was also reported in v2, and I explicitly mention it
in the cover letter here:
https://lore.kernel.org/lkml/[email protected]/

In a nutshell, I think this microbenchmark regression does not
represent real workloads. On the other hand, there are demonstrated
benefits on real workloads from this series in terms of stats reading
time.
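
(Editorial aside: the extra cycles that the profile attributes to
__mod_memcg_lruvec_state, __count_memcg_events and cgroup_rstat_updated are
consistent with the per-memcg update counting that this patch adds. The
stand-alone model below shows the shape of the idea only; it is deliberately
simplified and is not the kernel code -- the struct, the names, the 104-CPU
count and the batch constant are all assumptions for illustration.)

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define BATCH	64	/* stand-in for a per-CPU batch size; an assumption */
#define NR_CPUS	104	/* matches the 104-thread test machine above */

struct group {
	struct group *parent;
	atomic_long pending_updates;	/* stat updates since the last flush */
};

/* Every stat update bumps the counter of the group and all its ancestors. */
static void group_stats_updated(struct group *g, long abs_delta)
{
	for (; g; g = g->parent)
		atomic_fetch_add(&g->pending_updates, abs_delta);
}

/* A stats reader flushes only if this subtree accumulated enough updates. */
static bool group_should_flush(struct group *g)
{
	return atomic_load(&g->pending_updates) > (long)BATCH * NR_CPUS;
}

int main(void)
{
	struct group root  = { .parent = NULL };
	struct group busy  = { .parent = &root };	/* e.g. the fallocate2 job */
	struct group quiet = { .parent = &root };	/* an unrelated sibling */

	for (int i = 0; i < 10000; i++)		/* heavy churn in one subtree */
		group_stats_updated(&busy, 1);

	printf("flush busy subtree?  %d\n", group_should_flush(&busy));	/* 1 */
	printf("flush root?          %d\n", group_should_flush(&root));	/* 1 */
	printf("flush quiet sibling? %d\n", group_should_flush(&quiet));	/* 0 */
	return 0;
}

The update side pays for the hierarchical counting on every stat change, which
a tight allocate/free loop magnifies, while a reader of a quiet subtree can
skip the flush entirely -- roughly where the stats-read improvements on real
workloads come from.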

2023-11-28 01:46:25

by Oliver Sang

[permalink] [raw]
Subject: Re: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg

hi, Yosry Ahmed,

On Mon, Nov 27, 2023 at 01:13:44PM -0800, Yosry Ahmed wrote:
> On Wed, Nov 22, 2023 at 5:54 AM kernel test robot <[email protected]> wrote:
> >
> >
> >
> > Hello,
> >
> > kernel test robot noticed a -30.2% regression of will-it-scale.per_thread_ops on:
> >
> >
> > commit: c7fbfc7b4e089c4a9b292b1973a42a5761c1342f ("[PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg")
> > url: https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20231116-103300
> > base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
> > patch link: https://lore.kernel.org/all/[email protected]/
> > patch subject: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg
> >
> > testcase: will-it-scale
> > test machine: 104 threads 2 sockets (Skylake) with 192G memory
> > parameters:
> >
> > nr_task: 50%
> > mode: thread
> > test: fallocate2
> > cpufreq_governor: performance
> >
> >
>
> This regression was also reported in v2, and I explicitly mention it
> in the cover letter here:
> https://lore.kernel.org/lkml/[email protected]/

Got it. This also reminds us to read the cover letter for a patch set in the
future. Thanks!

>
> In a nutshell, I think this microbenchmark regression does not
> represent real workloads. On the other hand, there are demonstrated
> benefits on real workloads from this series in terms of stats reading
> time.
>

OK, if there are future versions of this patch, or when it is merged, we will
ignore similar results.

Just a small question: since the focus here is on microbenchmarks, if we find
other regressions (or improvements) on tests other than
will-it-scale::fallocate, do you want us to send a report, or just ignore them
as well?

2023-11-28 01:59:13

by Yosry Ahmed

[permalink] [raw]
Subject: Re: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg

On Mon, Nov 27, 2023 at 5:46 PM Oliver Sang <[email protected]> wrote:
>
> hi, Yosry Ahmed,
>
> On Mon, Nov 27, 2023 at 01:13:44PM -0800, Yosry Ahmed wrote:
> > On Wed, Nov 22, 2023 at 5:54 AM kernel test robot <[email protected]> wrote:
> > >
> > >
> > >
> > > Hello,
> > >
> > > kernel test robot noticed a -30.2% regression of will-it-scale.per_thread_ops on:
> > >
> > >
> > > commit: c7fbfc7b4e089c4a9b292b1973a42a5761c1342f ("[PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg")
> > > url: https://github.com/intel-lab-lkp/linux/commits/Yosry-Ahmed/mm-memcg-change-flush_next_time-to-flush_last_time/20231116-103300
> > > base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
> > > patch link: https://lore.kernel.org/all/[email protected]/
> > > patch subject: [PATCH v3 3/5] mm: memcg: make stats flushing threshold per-memcg
> > >
> > > testcase: will-it-scale
> > > test machine: 104 threads 2 sockets (Skylake) with 192G memory
> > > parameters:
> > >
> > > nr_task: 50%
> > > mode: thread
> > > test: fallocate2
> > > cpufreq_governor: performance
> > >
> > >
> >
> > This regression was also reported in v2, and I explicitly mention it
> > in the cover letter here:
> > https://lore.kernel.org/lkml/[email protected]/
>
> got it. this also reminds us to read cover letter for a patch set in the
> future. Thanks!
>
> >
> > In a nutshell, I think this microbenchmark regression does not
> > represent real workloads. On the other hand, there are demonstrated
> > benefits on real workloads from this series in terms of stats reading
> > time.
> >
>
> ok, if there are future versions of this patch, or when it is merged, we will
> ignore similar results.
>
> just a small question, since we focus on microbenchmark, if we found other
> regression (or improvement) on tests other than will-it-scale::fallocate,
> do you want us to send report or just ignore them, either?

I think it would be useful to know if there are
regressions/improvements in other microbenchmarks, at least to
investigate whether they represent real regressions.