2020-11-21 04:17:20

by Waiman Long

Subject: [PATCH v2 0/5] locking/rwsem: Rework reader optimistic spinning

v2:
- Update some commit logs to incorporate review comments.
- Patch 2: remove unnecessary comment.
- Patch 3: rename osq_is_empty() to rwsem_no_spinners() as suggested.
- Patch 4: correctly handle HANDOFF clearing.
- Patch 5: fix !CONFIG_RWSEM_SPIN_ON_OWNER compilation errors.

A recent report of an SAP certification failure caused by increased
system time due to rwsem reader optimistic spinning led me to reexamine
the code to weigh its pros and cons. This led me to discover a potential
lock starvation scenario, explained in patch 2. That patch reduces
reader spinning to avoid this potential problem. Patches 3 and 4 are
further optimizations of the current code.

Then there is the issue of reader fragmentation, which can potentially
reduce performance in some heavily contended cases. Two different
approaches were attempted:
1) further reduce reader optimistic spinning
2) disable reader optimistic spinning entirely

See the performance shown in patch 5.

This patch series adopts the second approach by dropping reader spinning
for now as it simplifies the code. However, writers are still allowed
to spin on a reader-owned rwsem for a limited time.

Waiman Long (5):
locking/rwsem: Pass the current atomic count to
rwsem_down_read_slowpath()
locking/rwsem: Prevent potential lock starvation
locking/rwsem: Enable reader optimistic lock stealing
locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED
locking/rwsem: Remove reader optimistic spinning

kernel/locking/lock_events_list.h | 6 +-
kernel/locking/rwsem.c | 293 ++++++++----------------------
2 files changed, 82 insertions(+), 217 deletions(-)

--
2.18.1


2020-11-21 04:17:22

by Waiman Long

Subject: [PATCH v2 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED

The rwsem wakeup logic has been modified by commit d3681e269fff
("locking/rwsem: Wake up almost all readers in wait queue") to wake up
all readers in the wait queue if the first waiter is a reader. This
change was made to implement a phase-fair reader/writer lock. Once a
reader gets the lock, all the currently waiting readers will be allowed
to join. Readers that arrive after that will not be allowed to join, in
order to prevent writer starvation.

In the case of RWSEM_WAKE_READ_OWNED, not all currently waiting readers
can be woken up if the first waiter happens to be a writer. Complete
the phase-fair logic by waking up all readers even for this case.
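The wakeup decision this patch changes can be modeled as two small
predicates. This is a userspace sketch only, not the kernel code: the
constant value and the function names are illustrative, and the real
logic lives inline in rwsem_mark_wake().

```c
#define RWSEM_FLAG_HANDOFF (1L << 2)	/* illustrative bit value */

enum wake_type { WAKE_ANY, WAKE_READERS, WAKE_READ_OWNED };

/* Before: a writer at the head of the wait queue stopped the reader
 * wakeup. After: under RWSEM_WAKE_READ_OWNED the readers queued behind
 * a head writer are woken as well, completing the phase-fair behavior. */
static int wake_queued_readers(int first_is_reader, enum wake_type wt)
{
	return first_is_reader || wt == WAKE_READ_OWNED;
}

/* HANDOFF still protects a head writer: it may only be cleared when
 * readers were woken and the first waiter was itself a reader. */
static int clear_handoff(int first_is_reader, int woken, long count)
{
	return woken && first_is_reader && (count & RWSEM_FLAG_HANDOFF);
}
```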

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/rwsem.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index b373990fcab8..e0ad2019c518 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -404,6 +404,7 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
struct rwsem_waiter *waiter, *tmp;
long oldcount, woken = 0, adjustment = 0;
struct list_head wlist;
+ bool first_is_reader = true;

lockdep_assert_held(&sem->wait_lock);

@@ -426,7 +427,13 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
lockevent_inc(rwsem_wake_writer);
}

- return;
+ /*
+ * If rwsem has already been owned by reader, wake up other
+ * readers in the wait queue even if first one is a writer.
+ */
+ if (wake_type != RWSEM_WAKE_READ_OWNED)
+ return;
+ first_is_reader = false;
}

/*
@@ -520,10 +527,12 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
}

/*
- * When we've woken a reader, we no longer need to force writers
- * to give up the lock and we can clear HANDOFF.
+ * When readers are woken, we no longer need to force writers to
+ * give up the lock and we can clear HANDOFF unless the first
+ * waiter is a writer.
*/
- if (woken && (atomic_long_read(&sem->count) & RWSEM_FLAG_HANDOFF))
+ if (woken && first_is_reader &&
+ (atomic_long_read(&sem->count) & RWSEM_FLAG_HANDOFF))
adjustment -= RWSEM_FLAG_HANDOFF;

if (adjustment)
@@ -1053,8 +1062,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
if (rwsem_optimistic_spin(sem, false)) {
/* rwsem_optimistic_spin() implies ACQUIRE on success */
/*
- * Wake up other readers in the wait list if the front
- * waiter is a reader.
+ * Wake up other readers in the wait queue.
*/
wake_readers:
if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
--
2.18.1

2020-11-21 04:17:22

by Waiman Long

Subject: [PATCH v2 3/5] locking/rwsem: Enable reader optimistic lock stealing

If the optimistic spinning queue is empty and the rwsem does not have
the handoff or write-lock bits set, it is actually not necessary to
call rwsem_optimistic_spin() to spin on it. Instead, it can steal the
lock directly as its reader bias is in the count already. If it is
the first reader in this state, it will try to wake up other readers
in the wait queue.

With this patch applied, the following were the lock event counts
after rebooting a 2-socket system and a "make -j96" kernel rebuild.

rwsem_opt_rlock=4437
rwsem_rlock=29
rwsem_rlock_steal=19

So lock stealing represents about 0.4% of all the read locks acquired
in the slow path.
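The steal condition can be sketched in plain userspace C. The bit
values below are illustrative stand-ins (not the kernel's exact count
layout), and OSQ emptiness is passed in as a flag rather than read from
a real optimistic spinning queue.

```c
#define RWSEM_WRITER_LOCKED (1L << 0)	/* illustrative bit values */
#define RWSEM_FLAG_HANDOFF  (1L << 2)
#define RWSEM_READER_SHIFT  8

/* A reader entering the slowpath may steal the lock directly when no
 * writer holds it, no handoff is pending, and nobody is spinning:
 * its RWSEM_READER_BIAS is already in the count. */
static int can_steal_read_lock(long count, int osq_empty)
{
	return !(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
	       osq_empty;
}

/* Only the first reader (reader count of 1 in the snapshot) goes on
 * to wake the other readers in the wait queue. */
static int should_wake_waiting_readers(long count)
{
	return (count >> RWSEM_READER_SHIFT) == 1;
}
```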

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem.c | 28 ++++++++++++++++++++++++++++
2 files changed, 29 insertions(+)

diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 239039d0ce21..270a0d351932 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -63,6 +63,7 @@ LOCK_EVENT(rwsem_opt_nospin) /* # of disabled optspins */
LOCK_EVENT(rwsem_opt_norspin) /* # of disabled reader-only optspins */
LOCK_EVENT(rwsem_opt_rlock2) /* # of opt-acquired 2ndary read locks */
LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
+LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
LOCK_EVENT(rwsem_rlock_fail) /* # of failed read lock acquisitions */
LOCK_EVENT(rwsem_rlock_handoff) /* # of read lock handoffs */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index a961c5c53b70..b373990fcab8 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -957,6 +957,12 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
}
return false;
}
+
+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+ return !osq_is_locked(&sem->osq);
+}
+
#else
static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
unsigned long nonspinnable)
@@ -977,6 +983,11 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
return false;
}

+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+ return false;
+}
+
static inline int
rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
{
@@ -1007,6 +1018,22 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
!(count & RWSEM_WRITER_LOCKED))
goto queue;

+ /*
+ * Reader optimistic lock stealing
+ *
+ * We can take the read lock directly without doing
+ * rwsem_optimistic_spin() if the conditions are right.
+ * Also wake up other readers if it is the first reader.
+ */
+ if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
+ rwsem_no_spinners(sem)) {
+ rwsem_set_reader_owned(sem);
+ lockevent_inc(rwsem_rlock_steal);
+ if (rcnt == 1)
+ goto wake_readers;
+ return sem;
+ }
+
/*
* Save the current read-owner of rwsem, if available, and the
* reader nonspinnable bit.
@@ -1029,6 +1056,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
* Wake up other readers in the wait list if the front
* waiter is a reader.
*/
+wake_readers:
if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
raw_spin_lock_irq(&sem->wait_lock);
if (!list_empty(&sem->wait_list))
--
2.18.1

2020-11-21 04:17:28

by Waiman Long

Subject: [PATCH v2 1/5] locking/rwsem: Pass the current atomic count to rwsem_down_read_slowpath()

The atomic count value right after the reader count increment can be
useful for determining the rwsem state at trylock time. So the count
value is passed down to rwsem_down_read_slowpath() to be used when
appropriate.
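The shape of the change can be modeled in userspace as follows. This is
a sketch only: the bit values are simplified stand-ins for the kernel's
count layout, and the helper names mirror rather than reproduce the
kernel functions.

```c
#include <stdatomic.h>

/* Illustrative stand-ins for the rwsem count bits. */
#define RWSEM_WRITER_LOCKED    (1L << 0)
#define RWSEM_FLAG_WAITERS     (1L << 1)
#define RWSEM_FLAG_HANDOFF     (1L << 2)
#define RWSEM_READER_BIAS      (1L << 8)
#define RWSEM_READ_FAILED_MASK (RWSEM_WRITER_LOCKED | RWSEM_FLAG_WAITERS | \
				RWSEM_FLAG_HANDOFF)

/* Before the patch this returned a bool and threw the count away.
 * Returning the post-increment count lets the caller hand the same
 * snapshot to the slowpath instead of re-reading sem->count there. */
static long read_trylock(atomic_long *count)
{
	return atomic_fetch_add_explicit(count, RWSEM_READER_BIAS,
					 memory_order_acquire) + RWSEM_READER_BIAS;
}

/* Caller pattern after the patch: the failure test moves to the caller,
 * and a failing count flows into the slowpath as a third argument. */
static int down_read_fast(atomic_long *count)
{
	long cnt = read_trylock(count);

	return !(cnt & RWSEM_READ_FAILED_MASK);
	/* else: rwsem_down_read_slowpath(sem, state, cnt) */
}
```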

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/rwsem.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index f11b9bd3431d..12761e02ab9b 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -270,12 +270,12 @@ static inline void rwsem_set_nonspinnable(struct rw_semaphore *sem)
owner | RWSEM_NONSPINNABLE));
}

-static inline bool rwsem_read_trylock(struct rw_semaphore *sem)
+static inline long rwsem_read_trylock(struct rw_semaphore *sem)
{
long cnt = atomic_long_add_return_acquire(RWSEM_READER_BIAS, &sem->count);
if (WARN_ON_ONCE(cnt < 0))
rwsem_set_nonspinnable(sem);
- return !(cnt & RWSEM_READ_FAILED_MASK);
+ return cnt;
}

/*
@@ -989,9 +989,9 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
* Wait for the read lock to be granted
*/
static struct rw_semaphore __sched *
-rwsem_down_read_slowpath(struct rw_semaphore *sem, int state)
+rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
{
- long count, adjustment = -RWSEM_READER_BIAS;
+ long adjustment = -RWSEM_READER_BIAS;
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
bool wake = false;
@@ -1337,8 +1337,10 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
*/
static inline void __down_read(struct rw_semaphore *sem)
{
- if (!rwsem_read_trylock(sem)) {
- rwsem_down_read_slowpath(sem, TASK_UNINTERRUPTIBLE);
+ long count = rwsem_read_trylock(sem);
+
+ if (count & RWSEM_READ_FAILED_MASK) {
+ rwsem_down_read_slowpath(sem, TASK_UNINTERRUPTIBLE, count);
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
} else {
rwsem_set_reader_owned(sem);
@@ -1347,8 +1349,10 @@ static inline void __down_read(struct rw_semaphore *sem)

static inline int __down_read_killable(struct rw_semaphore *sem)
{
- if (!rwsem_read_trylock(sem)) {
- if (IS_ERR(rwsem_down_read_slowpath(sem, TASK_KILLABLE)))
+ long count = rwsem_read_trylock(sem);
+
+ if (count & RWSEM_READ_FAILED_MASK) {
+ if (IS_ERR(rwsem_down_read_slowpath(sem, TASK_KILLABLE, count)))
return -EINTR;
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
} else {
--
2.18.1

2020-11-21 04:18:10

by Waiman Long

Subject: [PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning

Reader optimistic spinning is helpful when the reader critical section
is short and there aren't that many readers around. It also improves
the chance that a reader can get the lock, as writer optimistic spinning
disproportionately favors writers over readers.

Since commit d3681e269fff ("locking/rwsem: Wake up almost all readers
in wait queue"), all the waiting readers are woken up so that they can
all get the read lock and run in parallel. When the number of contending
readers is large, allowing reader optimistic spinning will likely cause
reader fragmentation where multiple smaller groups of readers can get
the read lock in a sequential manner separated by writers. That reduces
reader parallelism.

One possible way to address that drawback is to limit the number of
readers (preferably one) that can do optimistic spinning. These readers
act as representatives of all the waiting readers in the wait queue as
they will wake up all those waiting readers once they get the lock.

Alternatively, as reader optimistic lock stealing has already enhanced
fairness to readers, it may be easier to just remove reader optimistic
spinning and simplify the optimistic spinning code as a result.

Performance measurements (locking throughput in kops/s) using a locking
microbenchmark with a 50/50 reader/writer distribution and turbo-boost
disabled were done on a 2-socket Cascade Lake system (48-core/96-thread)
to see the impact of these changes:

1) Vanilla - 5.10-rc3 kernel
2) Before - 5.10-rc3 kernel with previous patches in this series
3) limit-rspin - 5.10-rc3 kernel with limited reader spinning patch
4) no-rspin - 5.10-rc3 kernel with reader spinning disabled

# of threads CS Load Vanilla Before limit-rspin no-rspin
------------ ------- ------- ------ ----------- --------
2 1 5,185 5,662 5,214 5,077
4 1 5,107 4,983 5,188 4,760
8 1 4,782 4,564 4,720 4,628
16 1 4,680 4,053 4,567 3,402
32 1 4,299 1,115 1,118 1,098
64 1 3,218 983 1,001 957
96 1 1,938 944 957 930

2 20 2,008 2,128 2,264 1,665
4 20 1,390 1,033 1,046 1,101
8 20 1,472 1,155 1,098 1,213
16 20 1,332 1,077 1,089 1,122
32 20 967 914 917 980
64 20 787 874 891 858
96 20 730 836 847 844

2 100 372 356 360 355
4 100 492 425 434 392
8 100 533 537 529 538
16 100 548 572 568 598
32 100 499 520 527 537
64 100 466 517 526 512
96 100 406 497 506 509

The column "CS Load" represents the number of pause instructions issued
in the locking critical section. A CS load of 1 is extremely short and
is not likely in real situations. Loads of 20 (moderate) and 100 (long)
are more realistic.

It can be seen that the previous patches in this series reduce
performance in general, except in highly contended cases with moderate
or long critical sections, where performance improves a bit. This change
is mostly caused by the "Prevent potential lock starvation" patch, which
reduces reader optimistic spinning and hence reader fragmentation.

The patch that further limits reader optimistic spinning doesn't seem
to have much impact on overall performance, as shown in the benchmark
data.

The patch that disables reader optimistic spinning shows reduced
performance in lightly loaded cases, but comparable or slightly better
performance under heavier contention.

This patch just removes reader optimistic spinning for now. As readers
are not going to do optimistic spinning anymore, we don't need to
consider if the OSQ is empty or not when doing lock stealing.
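With reader spinning gone, the lock-stealing check in the read slowpath
reduces to a single bit test on the count snapshot. A userspace sketch
(illustrative bit values, not the kernel's exact layout):

```c
#define RWSEM_WRITER_LOCKED (1L << 0)	/* illustrative bit values */
#define RWSEM_FLAG_HANDOFF  (1L << 2)

/* Since no reader ever joins the OSQ anymore, the osq_is_locked()
 * test is dropped: a slowpath reader may steal whenever no writer
 * holds the lock and no handoff is pending. */
static int can_steal_read_lock(long count)
{
	return !(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF));
}
```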

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/lock_events_list.h | 5 +-
kernel/locking/rwsem.c | 278 +++++-------------------------
2 files changed, 48 insertions(+), 235 deletions(-)

diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 270a0d351932..97fb6f3f840a 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -56,12 +56,9 @@ LOCK_EVENT(rwsem_sleep_reader) /* # of reader sleeps */
LOCK_EVENT(rwsem_sleep_writer) /* # of writer sleeps */
LOCK_EVENT(rwsem_wake_reader) /* # of reader wakeups */
LOCK_EVENT(rwsem_wake_writer) /* # of writer wakeups */
-LOCK_EVENT(rwsem_opt_rlock) /* # of opt-acquired read locks */
-LOCK_EVENT(rwsem_opt_wlock) /* # of opt-acquired write locks */
+LOCK_EVENT(rwsem_opt_lock) /* # of opt-acquired write locks */
LOCK_EVENT(rwsem_opt_fail) /* # of failed optspins */
LOCK_EVENT(rwsem_opt_nospin) /* # of disabled optspins */
-LOCK_EVENT(rwsem_opt_norspin) /* # of disabled reader-only optspins */
-LOCK_EVENT(rwsem_opt_rlock2) /* # of opt-acquired 2ndary read locks */
LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index e0ad2019c518..6203a182e6c6 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -31,19 +31,13 @@
#include "lock_events.h"

/*
- * The least significant 3 bits of the owner value has the following
+ * The least significant 2 bits of the owner value has the following
* meanings when set.
* - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
- * - Bit 1: RWSEM_RD_NONSPINNABLE - Readers cannot spin on this lock.
- * - Bit 2: RWSEM_WR_NONSPINNABLE - Writers cannot spin on this lock.
+ * - Bit 1: RWSEM_NONSPINNABLE - Cannot spin on a reader-owned lock
*
- * When the rwsem is either owned by an anonymous writer, or it is
- * reader-owned, but a spinning writer has timed out, both nonspinnable
- * bits will be set to disable optimistic spinning by readers and writers.
- * In the later case, the last unlocking reader should then check the
- * writer nonspinnable bit and clear it only to give writers preference
- * to acquire the lock via optimistic spinning, but not readers. Similar
- * action is also done in the reader slowpath.
+ * When the rwsem is reader-owned and a spinning writer has timed out,
+ * the nonspinnable bit will be set to disable optimistic spinning.

* When a writer acquires a rwsem, it puts its task_struct pointer
* into the owner field. It is cleared after an unlock.
@@ -59,46 +53,14 @@
* is involved. Ideally we would like to track all the readers that own
* a rwsem, but the overhead is simply too big.
*
- * Reader optimistic spinning is helpful when the reader critical section
- * is short and there aren't that many readers around. It makes readers
- * relatively more preferred than writers. When a writer times out spinning
- * on a reader-owned lock and set the nospinnable bits, there are two main
- * reasons for that.
- *
- * 1) The reader critical section is long, perhaps the task sleeps after
- * acquiring the read lock.
- * 2) There are just too many readers contending the lock causing it to
- * take a while to service all of them.
- *
- * In the former case, long reader critical section will impede the progress
- * of writers which is usually more important for system performance. In
- * the later case, reader optimistic spinning tends to make the reader
- * groups that contain readers that acquire the lock together smaller
- * leading to more of them. That may hurt performance in some cases. In
- * other words, the setting of nonspinnable bits indicates that reader
- * optimistic spinning may not be helpful for those workloads that cause
- * it.
- *
- * Therefore, any writers that had observed the setting of the writer
- * nonspinnable bit for a given rwsem after they fail to acquire the lock
- * via optimistic spinning will set the reader nonspinnable bit once they
- * acquire the write lock. Similarly, readers that observe the setting
- * of reader nonspinnable bit at slowpath entry will set the reader
- * nonspinnable bits when they acquire the read lock via the wakeup path.
- *
- * Once the reader nonspinnable bit is on, it will only be reset when
- * a writer is able to acquire the rwsem in the fast path or somehow a
- * reader or writer in the slowpath doesn't observe the nonspinable bit.
- *
- * This is to discourage reader optmistic spinning on that particular
- * rwsem and make writers more preferred. This adaptive disabling of reader
- * optimistic spinning will alleviate the negative side effect of this
- * feature.
+ * A fast path reader optimistic lock stealing is supported when the rwsem
+ * is previously owned by a writer and the following conditions are met:
+ * - OSQ is empty
+ * - rwsem is not currently writer owned
+ * - the handoff isn't set.
*/
#define RWSEM_READER_OWNED (1UL << 0)
-#define RWSEM_RD_NONSPINNABLE (1UL << 1)
-#define RWSEM_WR_NONSPINNABLE (1UL << 2)
-#define RWSEM_NONSPINNABLE (RWSEM_RD_NONSPINNABLE | RWSEM_WR_NONSPINNABLE)
+#define RWSEM_NONSPINNABLE (1UL << 1)
#define RWSEM_OWNER_FLAGS_MASK (RWSEM_READER_OWNED | RWSEM_NONSPINNABLE)

#ifdef CONFIG_DEBUG_RWSEMS
@@ -203,7 +165,7 @@ static inline void __rwsem_set_reader_owned(struct rw_semaphore *sem,
struct task_struct *owner)
{
unsigned long val = (unsigned long)owner | RWSEM_READER_OWNED |
- (atomic_long_read(&sem->owner) & RWSEM_RD_NONSPINNABLE);
+ (atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE);

atomic_long_set(&sem->owner, val);
}
@@ -353,7 +315,6 @@ struct rwsem_waiter {
struct task_struct *task;
enum rwsem_waiter_type type;
unsigned long timeout;
- unsigned long last_rowner;
};
#define rwsem_first_waiter(sem) \
list_first_entry(&sem->wait_list, struct rwsem_waiter, list)
@@ -474,10 +435,6 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
* the reader is copied over.
*/
owner = waiter->task;
- if (waiter->last_rowner & RWSEM_RD_NONSPINNABLE) {
- owner = (void *)((unsigned long)owner | RWSEM_RD_NONSPINNABLE);
- lockevent_inc(rwsem_opt_norspin);
- }
__rwsem_set_reader_owned(sem, owner);
}

@@ -610,30 +567,6 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
}

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
-/*
- * Try to acquire read lock before the reader is put on wait queue.
- * Lock acquisition isn't allowed if the rwsem is locked or a writer handoff
- * is ongoing.
- */
-static inline bool rwsem_try_read_lock_unqueued(struct rw_semaphore *sem)
-{
- long count = atomic_long_read(&sem->count);
-
- if (count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))
- return false;
-
- count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS, &sem->count);
- if (!(count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
- rwsem_set_reader_owned(sem);
- lockevent_inc(rwsem_opt_rlock);
- return true;
- }
-
- /* Back out the change */
- atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
- return false;
-}
-
/*
* Try to acquire write lock before the writer has been put on wait queue.
*/
@@ -645,7 +578,7 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
count | RWSEM_WRITER_LOCKED)) {
rwsem_set_owner(sem);
- lockevent_inc(rwsem_opt_wlock);
+ lockevent_inc(rwsem_opt_lock);
return true;
}
}
@@ -661,8 +594,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}

-static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
- unsigned long nonspinnable)
+static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
{
struct task_struct *owner;
unsigned long flags;
@@ -679,7 +611,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
/*
* Don't check the read-owner as the entry may be stale.
*/
- if ((flags & nonspinnable) ||
+ if ((flags & RWSEM_NONSPINNABLE) ||
(owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
ret = false;
rcu_read_unlock();
@@ -709,9 +641,9 @@ enum owner_state {
#define OWNER_SPINNABLE (OWNER_NULL | OWNER_WRITER | OWNER_READER)

static inline enum owner_state
-rwsem_owner_state(struct task_struct *owner, unsigned long flags, unsigned long nonspinnable)
+rwsem_owner_state(struct task_struct *owner, unsigned long flags)
{
- if (flags & nonspinnable)
+ if (flags & RWSEM_NONSPINNABLE)
return OWNER_NONSPINNABLE;

if (flags & RWSEM_READER_OWNED)
@@ -721,14 +653,14 @@ rwsem_owner_state(struct task_struct *owner, unsigned long flags, unsigned long
}

static noinline enum owner_state
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem)
{
struct task_struct *new, *owner;
unsigned long flags, new_flags;
enum owner_state state;

owner = rwsem_owner_flags(sem, &flags);
- state = rwsem_owner_state(owner, flags, nonspinnable);
+ state = rwsem_owner_state(owner, flags);
if (state != OWNER_WRITER)
return state;

@@ -742,7 +674,7 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
*/
new = rwsem_owner_flags(sem, &new_flags);
if ((new != owner) || (new_flags != flags)) {
- state = rwsem_owner_state(new, new_flags, nonspinnable);
+ state = rwsem_owner_state(new, new_flags);
break;
}

@@ -791,14 +723,12 @@ static inline u64 rwsem_rspin_threshold(struct rw_semaphore *sem)
return sched_clock() + delta;
}

-static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
+static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
bool taken = false;
int prev_owner_state = OWNER_NULL;
int loop = 0;
u64 rspin_threshold = 0;
- unsigned long nonspinnable = wlock ? RWSEM_WR_NONSPINNABLE
- : RWSEM_RD_NONSPINNABLE;

preempt_disable();

@@ -815,15 +745,14 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
for (;;) {
enum owner_state owner_state;

- owner_state = rwsem_spin_on_owner(sem, nonspinnable);
+ owner_state = rwsem_spin_on_owner(sem);
if (!(owner_state & OWNER_SPINNABLE))
break;

/*
* Try to acquire the lock
*/
- taken = wlock ? rwsem_try_write_lock_unqueued(sem)
- : rwsem_try_read_lock_unqueued(sem);
+ taken = rwsem_try_write_lock_unqueued(sem);

if (taken)
break;
@@ -831,7 +760,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
/*
* Time-based reader-owned rwsem optimistic spinning
*/
- if (wlock && (owner_state == OWNER_READER)) {
+ if (owner_state == OWNER_READER) {
/*
* Re-initialize rspin_threshold every time when
* the owner state changes from non-reader to reader.
@@ -840,7 +769,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
* the beginning of the 2nd reader phase.
*/
if (prev_owner_state != OWNER_READER) {
- if (rwsem_test_oflags(sem, nonspinnable))
+ if (rwsem_test_oflags(sem, RWSEM_NONSPINNABLE))
break;
rspin_threshold = rwsem_rspin_threshold(sem);
loop = 0;
@@ -916,89 +845,30 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
}

/*
- * Clear the owner's RWSEM_WR_NONSPINNABLE bit if it is set. This should
+ * Clear the owner's RWSEM_NONSPINNABLE bit if it is set. This should
* only be called when the reader count reaches 0.
- *
- * This give writers better chance to acquire the rwsem first before
- * readers when the rwsem was being held by readers for a relatively long
- * period of time. Race can happen that an optimistic spinner may have
- * just stolen the rwsem and set the owner, but just clearing the
- * RWSEM_WR_NONSPINNABLE bit will do no harm anyway.
*/
-static inline void clear_wr_nonspinnable(struct rw_semaphore *sem)
-{
- if (rwsem_test_oflags(sem, RWSEM_WR_NONSPINNABLE))
- atomic_long_andnot(RWSEM_WR_NONSPINNABLE, &sem->owner);
-}
-
-/*
- * This function is called when the reader fails to acquire the lock via
- * optimistic spinning. In this case we will still attempt to do a trylock
- * when comparing the rwsem state right now with the state when entering
- * the slowpath indicates that the reader is still in a valid reader phase.
- * This happens when the following conditions are true:
- *
- * 1) The lock is currently reader owned, and
- * 2) The lock is previously not reader-owned or the last read owner changes.
- *
- * In the former case, we have transitioned from a writer phase to a
- * reader-phase while spinning. In the latter case, it means the reader
- * phase hasn't ended when we entered the optimistic spinning loop. In
- * both cases, the reader is eligible to acquire the lock. This is the
- * secondary path where a read lock is acquired optimistically.
- *
- * The reader non-spinnable bit wasn't set at time of entry or it will
- * not be here at all.
- */
-static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
- unsigned long last_rowner)
-{
- unsigned long owner = atomic_long_read(&sem->owner);
-
- if (!(owner & RWSEM_READER_OWNED))
- return false;
-
- if (((owner ^ last_rowner) & ~RWSEM_OWNER_FLAGS_MASK) &&
- rwsem_try_read_lock_unqueued(sem)) {
- lockevent_inc(rwsem_opt_rlock2);
- lockevent_add(rwsem_opt_fail, -1);
- return true;
- }
- return false;
-}
-
-static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+static inline void clear_nonspinnable(struct rw_semaphore *sem)
{
- return !osq_is_locked(&sem->osq);
+ if (rwsem_test_oflags(sem, RWSEM_NONSPINNABLE))
+ atomic_long_andnot(RWSEM_NONSPINNABLE, &sem->owner);
}

#else
-static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
- unsigned long nonspinnable)
+static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
{
return false;
}

-static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
-{
- return false;
-}
-
-static inline void clear_wr_nonspinnable(struct rw_semaphore *sem) { }
-
-static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
- unsigned long last_rowner)
+static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
return false;
}

-static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
-{
- return false;
-}
+static inline void clear_nonspinnable(struct rw_semaphore *sem) { }

static inline int
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem)
{
return 0;
}
@@ -1011,7 +881,7 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
static struct rw_semaphore __sched *
rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
{
- long owner, adjustment = -RWSEM_READER_BIAS;
+ long adjustment = -RWSEM_READER_BIAS;
long rcnt = (count >> RWSEM_READER_SHIFT);
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
@@ -1019,12 +889,11 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)

/*
* To prevent a constant stream of readers from starving a sleeping
- * waiter, don't attempt optimistic spinning if the lock is currently
- * owned by readers.
+ * waiter, don't attempt optimistic lock stealing if the lock is
+ * currently owned by readers.
*/
- owner = atomic_long_read(&sem->owner);
- if ((owner & RWSEM_READER_OWNED) && (rcnt > 1) &&
- !(count & RWSEM_WRITER_LOCKED))
+ if ((atomic_long_read(&sem->owner) & RWSEM_READER_OWNED) &&
+ (rcnt > 1) && !(count & RWSEM_WRITER_LOCKED))
goto queue;

/*
@@ -1032,40 +901,16 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
*
* We can take the read lock directly without doing
* rwsem_optimistic_spin() if the conditions are right.
- * Also wake up other readers if it is the first reader.
*/
- if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
- rwsem_no_spinners(sem)) {
+ if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF))) {
rwsem_set_reader_owned(sem);
lockevent_inc(rwsem_rlock_steal);
- if (rcnt == 1)
- goto wake_readers;
- return sem;
- }

- /*
- * Save the current read-owner of rwsem, if available, and the
- * reader nonspinnable bit.
- */
- waiter.last_rowner = owner;
- if (!(waiter.last_rowner & RWSEM_READER_OWNED))
- waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;
-
- if (!rwsem_can_spin_on_owner(sem, RWSEM_RD_NONSPINNABLE))
- goto queue;
-
- /*
- * Undo read bias from down_read() and do optimistic spinning.
- */
- atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
- adjustment = 0;
- if (rwsem_optimistic_spin(sem, false)) {
- /* rwsem_optimistic_spin() implies ACQUIRE on success */
/*
- * Wake up other readers in the wait queue.
+ * Wake up other readers in the wait queue if it is
+ * the first reader.
*/
-wake_readers:
- if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
+ if ((rcnt == 1) && (count & RWSEM_FLAG_WAITERS)) {
raw_spin_lock_irq(&sem->wait_lock);
if (!list_empty(&sem->wait_list))
rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED,
@@ -1074,9 +919,6 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
wake_up_q(&wake_q);
}
return sem;
- } else if (rwsem_reader_phase_trylock(sem, waiter.last_rowner)) {
- /* rwsem_reader_phase_trylock() implies ACQUIRE on success */
- return sem;
}

queue:
@@ -1092,7 +934,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
* exit the slowpath and return immediately as its
* RWSEM_READER_BIAS has already been set in the count.
*/
- if (adjustment && !(atomic_long_read(&sem->count) &
+ if (!(atomic_long_read(&sem->count) &
(RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
/* Provide lock ACQUIRE */
smp_acquire__after_ctrl_dep();
@@ -1106,10 +948,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
list_add_tail(&waiter.list, &sem->wait_list);

/* we're now waiting on the lock, but no longer actively locking */
- if (adjustment)
- count = atomic_long_add_return(adjustment, &sem->count);
- else
- count = atomic_long_read(&sem->count);
+ count = atomic_long_add_return(adjustment, &sem->count);

/*
* If there are no active locks, wake the front queued process(es).
@@ -1118,7 +957,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
* wake our own waiter to join the existing active readers !
*/
if (!(count & RWSEM_LOCK_MASK)) {
- clear_wr_nonspinnable(sem);
+ clear_nonspinnable(sem);
wake = true;
}
if (wake || (!(count & RWSEM_WRITER_MASK) &&
@@ -1163,19 +1002,6 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
return ERR_PTR(-EINTR);
}

-/*
- * This function is called by the a write lock owner. So the owner value
- * won't get changed by others.
- */
-static inline void rwsem_disable_reader_optspin(struct rw_semaphore *sem,
- bool disable)
-{
- if (unlikely(disable)) {
- atomic_long_or(RWSEM_RD_NONSPINNABLE, &sem->owner);
- lockevent_inc(rwsem_opt_norspin);
- }
-}
-
/*
* Wait until we successfully acquire the write lock
*/
@@ -1183,26 +1009,17 @@ static struct rw_semaphore *
rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
{
long count;
- bool disable_rspin;
enum writer_wait_state wstate;
struct rwsem_waiter waiter;
struct rw_semaphore *ret = sem;
DEFINE_WAKE_Q(wake_q);

/* do optimistic spinning and steal lock if possible */
- if (rwsem_can_spin_on_owner(sem, RWSEM_WR_NONSPINNABLE) &&
- rwsem_optimistic_spin(sem, true)) {
+ if (rwsem_can_spin_on_owner(sem) && rwsem_optimistic_spin(sem)) {
/* rwsem_optimistic_spin() implies ACQUIRE on success */
return sem;
}

- /*
- * Disable reader optimistic spinning for this rwsem after
- * acquiring the write lock when the setting of the nonspinnable
- * bits are observed.
- */
- disable_rspin = atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE;
-
/*
* Optimistic spinning failed, proceed to the slowpath
* and block until we can acquire the sem.
@@ -1271,7 +1088,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
* without sleeping.
*/
if (wstate == WRITER_HANDOFF &&
- rwsem_spin_on_owner(sem, RWSEM_NONSPINNABLE) == OWNER_NULL)
+ rwsem_spin_on_owner(sem) == OWNER_NULL)
goto trylock_again;

/* Block until there are no active lockers. */
@@ -1313,7 +1130,6 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
}
__set_current_state(TASK_RUNNING);
list_del(&waiter.list);
- rwsem_disable_reader_optspin(sem, disable_rspin);
raw_spin_unlock_irq(&sem->wait_lock);
lockevent_inc(rwsem_wlock);

@@ -1486,7 +1302,7 @@ static inline void __up_read(struct rw_semaphore *sem)
DEBUG_RWSEMS_WARN_ON(tmp < 0, sem);
if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS)) ==
RWSEM_FLAG_WAITERS)) {
- clear_wr_nonspinnable(sem);
+ clear_nonspinnable(sem);
rwsem_wake(sem, tmp);
}
}
--
2.18.1

2020-11-21 04:18:18

by Waiman Long

Subject: [PATCH v2 2/5] locking/rwsem: Prevent potential lock starvation

The lock handoff bit was added in commit 4f23dbc1e657 ("locking/rwsem:
Implement lock handoff to prevent lock starvation") to avoid lock
starvation. However, allowing readers to do optimistic spinning does
introduce an unlikely scenario where lock starvation can happen.

The lock handoff bit may only be set when a waiter is being woken up.
In the case of reader unlock, wakeup happens only when the reader count
reaches 0. If there is a continuous stream of incoming readers acquiring
read lock via optimistic spinning, it is possible that the reader count
may never reach 0 and so the handoff bit will never be asserted.
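
The starvation window described above can be illustrated with a small
toy model (plain Python, not kernel code; the simulate() helper and the
event traces are purely illustrative): a queued writer is only woken
when the reader count drops to 0, so a stream of overlapping spinning
readers keeps the count positive forever.

```python
# Toy model of the starvation scenario: the handoff bit can only be set
# on wakeup, and a reader unlock only triggers a wakeup when the reader
# count drops to 0. If spinning readers keep overlapping, the count
# never reaches 0 and the queued writer is never woken.

def simulate(events):
    """events: sequence of ('lock', n) / ('unlock', n) reader steps.
    Returns True if the reader count ever dropped to 0, i.e. the
    queued writer would have been woken at least once."""
    readers = 0
    writer_woken = False
    for op, n in events:
        readers += n if op == 'lock' else -n
        if readers == 0:
            writer_woken = True
    return writer_woken

# Overlapping readers: each new spinner takes the lock before the
# previous holder releases, so the count oscillates between 1 and 2.
starved = [('lock', 1)] + [('lock', 1), ('unlock', 1)] * 5
# Non-overlapping readers: the count drops to 0 between holders.
fair = [('lock', 1), ('unlock', 1)] * 3

print(simulate(starved))  # False: writer starved
print(simulate(fair))     # True: writer gets its wakeup
```

Disallowing spinning while the lock is reader-owned forces incoming
readers to queue instead, letting the count drain to 0.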

One way to prevent this scenario from happening is to disallow optimistic
spinning if the rwsem is currently owned by readers. If the previous
or current owner is a writer, optimistic spinning will be allowed.

If the previous owner is a reader but the reader count has reached 0
before, a wakeup should have been issued. So the handoff mechanism
will be kicked in to prevent lock starvation. As a result, it should
be OK to do optimistic spinning in this case.

This patch may have some impact on reader performance as it reduces
reader optimistic spinning, especially if the lock critical sections
are short and the number of contending readers is small.

Signed-off-by: Waiman Long <[email protected]>
---
kernel/locking/rwsem.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 12761e02ab9b..a961c5c53b70 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -991,16 +991,27 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
static struct rw_semaphore __sched *
rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
{
- long adjustment = -RWSEM_READER_BIAS;
+ long owner, adjustment = -RWSEM_READER_BIAS;
+ long rcnt = (count >> RWSEM_READER_SHIFT);
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
bool wake = false;

+ /*
+ * To prevent a constant stream of readers from starving a sleeping
+ * waiter, don't attempt optimistic spinning if the lock is currently
+ * owned by readers.
+ */
+ owner = atomic_long_read(&sem->owner);
+ if ((owner & RWSEM_READER_OWNED) && (rcnt > 1) &&
+ !(count & RWSEM_WRITER_LOCKED))
+ goto queue;
+
/*
* Save the current read-owner of rwsem, if available, and the
* reader nonspinnable bit.
*/
- waiter.last_rowner = atomic_long_read(&sem->owner);
+ waiter.last_rowner = owner;
if (!(waiter.last_rowner & RWSEM_READER_OWNED))
waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;

--
2.18.1

2020-11-23 15:42:55

by kernel test robot

Subject: [locking/rwsem] 10a59003d2: unixbench.score -25.5% regression



Greetings,

FYI, we noticed a -25.5% regression of unixbench.score due to commit:


commit: 10a59003d29fbfa855b2ef4f3534fee9bdf4e575 ("[PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning")
url: https://github.com/0day-ci/linux/commits/Waiman-Long/locking-rwsem-Rework-reader-optimistic-spinning/20201121-122118
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 932f8c64d38bb08f69c8c26a2216ba0c36c6daa8

in testcase: unixbench
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with following parameters:

runtime: 300s
nr_task: 30%
test: shell8
cpufreq_governor: performance
ucode: 0xde

test-description: UnixBench is the original BYTE UNIX benchmark suite, which aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench

In addition to that, the commit also has significant impact on the following tests:

+------------------+---------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_iops -29.9% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=xfs |
| | ioengine=sync |
| | nr_task=32 |
| | runtime=300s |
| | rw=randwrite |
| | test_size=256g |
| | ucode=0x4003003 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 952.6% improvement |
| test machine | 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | disk=4BRD_12G |
| | fs=f2fs |
| | load=100 |
| | md=RAID0 |
| | test=sync_disk_rw |
| | ucode=0x4003003 |
+------------------+---------------------------------------------------------------------------+


If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <[email protected]>


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/30%/debian-10.4-x86_64-20200603.cgz/300s/lkp-cfl-e1/shell8/unixbench/0xde

commit:
c9847a7f94 ("locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")
10a59003d2 ("locking/rwsem: Remove reader optimistic spinning")

c9847a7f94679e74 10a59003d29fbfa855b2ef4f353
---------------- ---------------------------
%stddev %change %stddev
\ | \
21939 -25.5% 16346 unixbench.score
1287341 -46.8% 684642 ± 2% unixbench.time.involuntary_context_switches
37785 ± 2% +7.6% 40661 unixbench.time.major_page_faults
1.054e+08 -25.4% 78563350 unixbench.time.minor_page_faults
1337 -30.1% 934.50 unixbench.time.percent_of_cpu_this_job_got
363.18 -36.6% 230.42 unixbench.time.system_time
481.37 -25.2% 360.16 unixbench.time.user_time
3528615 +89.5% 6688263 unixbench.time.voluntary_context_switches
829330 -25.5% 617908 unixbench.workload
40455 ± 10% +18.6% 47991 ± 3% meminfo.AnonHugePages
373.42 ± 3% +73.7% 648.78 uptime.idle
4.16 ± 86% -4.2 0.00 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
6.25 ±110% -6.2 0.00 perf-profile.children.cycles-pp.dput
5.95 ± 97% -5.9 0.00 perf-profile.children.cycles-pp.free_pgtables
11058 ± 3% -18.9% 8972 ± 6% slabinfo.filp.active_objs
12400 ± 2% -16.6% 10346 ± 6% slabinfo.filp.num_objs
5364 ± 3% -39.6% 3238 ± 16% slabinfo.task_delay_info.active_objs
5364 ± 3% -39.6% 3238 ± 16% slabinfo.task_delay_info.num_objs
13.83 ? 4% +26.6 40.40 mpstat.cpu.all.idle%
1.06 -0.1 0.91 mpstat.cpu.all.irq%
0.54 +0.1 0.69 ? 7% mpstat.cpu.all.soft%
50.14 -14.5 35.61 mpstat.cpu.all.sys%
34.43 -12.0 22.39 mpstat.cpu.all.usr%
15.50 ± 3% +167.7% 41.50 vmstat.cpu.id
49.75 -28.1% 35.75 vmstat.cpu.sy
33.00 -35.6% 21.25 ± 2% vmstat.cpu.us
21.25 ± 5% -35.3% 13.75 ± 12% vmstat.procs.r
113264 +82.8% 206990 vmstat.system.cs
37223 -14.8% 31718 ± 20% vmstat.system.in
50992679 ± 8% +329.6% 2.19e+08 ± 3% cpuidle.C1.time
1297530 ± 6% +230.7% 4290676 ± 5% cpuidle.C1.usage
190270 ± 12% -41.6% 111170 ± 18% cpuidle.C10.time
21241624 ± 6% +186.1% 60774649 ± 10% cpuidle.C1E.time
458079 ± 6% +119.6% 1005897 ± 9% cpuidle.C1E.usage
4577780 ± 17% +595.5% 31836464 ± 12% cpuidle.C3.time
62068 ± 13% +527.9% 389751 ± 11% cpuidle.C3.usage
31244183 ± 25% +68.9% 52781560 ± 11% cpuidle.C6.time
50282 ± 17% +325.2% 213780 ± 13% cpuidle.C6.usage
259071 ± 8% +509.9% 1580003 ± 8% cpuidle.POLL.time
32585 ± 7% +633.5% 239029 ± 7% cpuidle.POLL.usage
522.75 +9.7% 573.25 ± 3% proc-vmstat.nr_active_anon
5838 +3.4% 6034 proc-vmstat.nr_kernel_stack
1416 +5.5% 1494 proc-vmstat.nr_page_table_pages
13041 -1.0% 12911 proc-vmstat.nr_slab_reclaimable
522.75 +9.7% 573.25 ± 3% proc-vmstat.nr_zone_active_anon
71020069 -24.7% 53452045 proc-vmstat.numa_hit
71020069 -24.7% 53452045 proc-vmstat.numa_local
115977 -39.0% 70793 proc-vmstat.pgactivate
75313926 -24.8% 56608462 proc-vmstat.pgalloc_normal
1.056e+08 -25.4% 78785247 proc-vmstat.pgfault
75305695 -24.8% 56602456 proc-vmstat.pgfree
5357159 -28.6% 3826576 proc-vmstat.pgreuse
3475 -24.0% 2641 proc-vmstat.thp_fault_alloc
1474994 -25.5% 1099150 proc-vmstat.unevictable_pgs_culled
250788 -41.8% 146078 ± 3% softirqs.CPU0.RCU
248710 -39.9% 149519 softirqs.CPU1.RCU
248300 -52.5% 118065 ± 41% softirqs.CPU10.RCU
246447 ± 2% -39.2% 149847 softirqs.CPU11.RCU
250321 -52.0% 120263 ± 43% softirqs.CPU12.RCU
247140 ± 2% -39.3% 150089 softirqs.CPU13.RCU
13849 ± 2% +10.4% 15286 softirqs.CPU13.SCHED
250080 -50.5% 123681 ± 36% softirqs.CPU14.RCU
248790 -38.6% 152862 ± 2% softirqs.CPU15.RCU
247174 -39.6% 149226 softirqs.CPU2.RCU
248590 -39.3% 150877 softirqs.CPU3.RCU
250818 -39.8% 150890 ± 2% softirqs.CPU4.RCU
246772 ± 2% -39.1% 150263 softirqs.CPU5.RCU
248637 -39.0% 151575 softirqs.CPU6.RCU
249756 -40.1% 149525 ± 3% softirqs.CPU7.RCU
248263 -42.8% 142057 ± 2% softirqs.CPU8.RCU
242793 ± 2% -38.8% 148663 softirqs.CPU9.RCU
3973390 -42.0% 2303491 ± 5% softirqs.RCU
25441 -29.8% 17865 sched_debug.cfs_rq:/.exec_clock.avg
26174 -29.0% 18583 sched_debug.cfs_rq:/.exec_clock.max
25149 -30.4% 17493 sched_debug.cfs_rq:/.exec_clock.min
293.59 ? 5% +13.5% 333.29 ? 14% sched_debug.cfs_rq:/.exec_clock.stddev
188.59 ? 19% -29.2% 133.54 ? 12% sched_debug.cfs_rq:/.load_avg.avg
26.12 ? 23% -48.8% 13.38 ? 50% sched_debug.cfs_rq:/.load_avg.min
443268 -28.2% 318361 sched_debug.cfs_rq:/.min_vruntime.avg
489789 -21.5% 384542 ? 4% sched_debug.cfs_rq:/.min_vruntime.max
431839 -29.6% 304066 sched_debug.cfs_rq:/.min_vruntime.min
0.38 ? 22% +31.7% 0.51 ? 6% sched_debug.cfs_rq:/.nr_running.stddev
1.92 ? 14% -30.1% 1.34 ? 12% sched_debug.cfs_rq:/.nr_spread_over.avg
905.09 ? 3% +9.9% 994.48 ? 4% sched_debug.cfs_rq:/.runnable_avg.avg
870.48 ? 4% +8.5% 944.88 ? 3% sched_debug.cfs_rq:/.util_avg.avg
91.52 ? 17% +58.8% 145.33 ? 23% sched_debug.cfs_rq:/.util_est_enqueued.avg
30990 ? 17% -34.1% 20422 ? 12% sched_debug.cpu.avg_idle.min
11558 ? 25% -53.5% 5376 ? 38% sched_debug.cpu.curr->pid.avg
0.00 ? 4% -26.3% 0.00 ? 19% sched_debug.cpu.next_balance.stddev
1.10 ? 7% -27.0% 0.80 ? 19% sched_debug.cpu.nr_running.avg
225114 +80.9% 407331 sched_debug.cpu.nr_switches.avg
239349 +79.5% 429598 sched_debug.cpu.nr_switches.max
210368 +82.6% 384043 sched_debug.cpu.nr_switches.min
6933 ? 8% +47.7% 10240 ? 7% sched_debug.cpu.nr_switches.stddev
0.20 ?182% +276.9% 0.77 ? 16% sched_debug.cpu.nr_uninterruptible.avg
105.88 ? 13% +193.6% 310.88 ? 15% sched_debug.cpu.nr_uninterruptible.max
-188.38 +222.9% -608.25 sched_debug.cpu.nr_uninterruptible.min
72.96 ? 16% +191.3% 212.52 ? 7% sched_debug.cpu.nr_uninterruptible.stddev
219447 +83.3% 402333 sched_debug.cpu.sched_count.avg
223137 +83.2% 408852 sched_debug.cpu.sched_count.max
208437 +82.8% 381062 sched_debug.cpu.sched_count.min
3763 ? 10% +75.0% 6584 ? 23% sched_debug.cpu.sched_count.stddev
56141 ? 3% +199.1% 167917 sched_debug.cpu.sched_goidle.avg
57397 ? 3% +197.7% 170892 sched_debug.cpu.sched_goidle.max
53349 ? 2% +197.8% 158895 sched_debug.cpu.sched_goidle.min
1064 ? 10% +161.3% 2782 ? 22% sched_debug.cpu.sched_goidle.stddev
97665 +98.5% 193817 sched_debug.cpu.ttwu_count.avg
100103 +97.6% 197827 sched_debug.cpu.ttwu_count.max
91752 ? 2% +100.6% 184037 sched_debug.cpu.ttwu_count.min
2014 ? 18% +66.4% 3351 ? 16% sched_debug.cpu.ttwu_count.stddev
42150 -25.4% 31431 sched_debug.cpu.ttwu_local.avg
42994 -25.0% 32238 sched_debug.cpu.ttwu_local.max
39998 -26.4% 29443 ? 3% sched_debug.cpu.ttwu_local.min
519.50 ?139% +1032.1% 5881 ? 75% interrupts.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
1078 ?159% -95.8% 45.75 ? 10% interrupts.134:IR-PCI-MSI.2097155-edge.eth1-TxRx-2
21268 ? 3% +143.8% 51854 ? 2% interrupts.CAL:Function_call_interrupts
1603 ? 15% +93.9% 3109 ? 5% interrupts.CPU0.CAL:Function_call_interrupts
26260 -37.4% 16446 ? 3% interrupts.CPU0.RES:Rescheduling_interrupts
639.00 ? 11% +240.8% 2177 ? 5% interrupts.CPU0.TLB:TLB_shootdowns
519.50 ?139% +1032.1% 5881 ? 75% interrupts.CPU1.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
1237 ? 6% +155.9% 3166 ? 5% interrupts.CPU1.CAL:Function_call_interrupts
26527 -36.2% 16918 interrupts.CPU1.RES:Rescheduling_interrupts
654.50 ? 2% +241.6% 2235 ? 2% interrupts.CPU1.TLB:TLB_shootdowns
1267 ? 8% +154.5% 3225 ? 5% interrupts.CPU10.CAL:Function_call_interrupts
26382 -35.7% 16965 ? 2% interrupts.CPU10.RES:Rescheduling_interrupts
663.75 ? 4% +238.0% 2243 ? 2% interrupts.CPU10.TLB:TLB_shootdowns
1434 ? 13% +128.8% 3280 ? 3% interrupts.CPU11.CAL:Function_call_interrupts
26307 -34.8% 17142 interrupts.CPU11.RES:Rescheduling_interrupts
658.00 ? 4% +236.8% 2216 ? 2% interrupts.CPU11.TLB:TLB_shootdowns
1320 ? 4% +143.8% 3218 ? 2% interrupts.CPU12.CAL:Function_call_interrupts
26594 -35.4% 17171 interrupts.CPU12.RES:Rescheduling_interrupts
697.75 ? 3% +217.7% 2217 interrupts.CPU12.TLB:TLB_shootdowns
1332 ? 3% +153.4% 3375 ? 13% interrupts.CPU13.CAL:Function_call_interrupts
26096 ? 2% -34.8% 17024 interrupts.CPU13.RES:Rescheduling_interrupts
690.50 ? 6% +220.1% 2210 ? 3% interrupts.CPU13.TLB:TLB_shootdowns
1335 ? 6% +144.0% 3258 ? 3% interrupts.CPU14.CAL:Function_call_interrupts
26453 -34.7% 17268 interrupts.CPU14.RES:Rescheduling_interrupts
667.50 ? 6% +234.4% 2232 interrupts.CPU14.TLB:TLB_shootdowns
1302 ? 3% +146.2% 3205 ? 5% interrupts.CPU15.CAL:Function_call_interrupts
26159 -35.1% 16980 interrupts.CPU15.RES:Rescheduling_interrupts
694.00 ? 8% +226.7% 2267 ? 4% interrupts.CPU15.TLB:TLB_shootdowns
1283 ? 5% +142.8% 3114 ? 3% interrupts.CPU2.CAL:Function_call_interrupts
25846 -35.0% 16792 ? 2% interrupts.CPU2.RES:Rescheduling_interrupts
662.75 ? 6% +233.2% 2208 interrupts.CPU2.TLB:TLB_shootdowns
1078 ?159% -95.8% 45.75 ? 10% interrupts.CPU3.134:IR-PCI-MSI.2097155-edge.eth1-TxRx-2
1330 ? 6% +178.9% 3710 ? 11% interrupts.CPU3.CAL:Function_call_interrupts
26346 -35.9% 16877 interrupts.CPU3.RES:Rescheduling_interrupts
678.00 ? 8% +224.9% 2202 ? 2% interrupts.CPU3.TLB:TLB_shootdowns
1300 ? 3% +144.9% 3183 interrupts.CPU4.CAL:Function_call_interrupts
26432 -37.2% 16586 interrupts.CPU4.RES:Rescheduling_interrupts
675.75 ? 2% +223.3% 2184 ? 2% interrupts.CPU4.TLB:TLB_shootdowns
1261 +150.0% 3153 ? 4% interrupts.CPU5.CAL:Function_call_interrupts
26321 -36.1% 16814 interrupts.CPU5.RES:Rescheduling_interrupts
681.75 ? 6% +223.0% 2201 ? 2% interrupts.CPU5.TLB:TLB_shootdowns
1274 ? 5% +149.3% 3176 ? 4% interrupts.CPU6.CAL:Function_call_interrupts
26106 -34.9% 17005 interrupts.CPU6.RES:Rescheduling_interrupts
659.50 ? 4% +234.6% 2207 interrupts.CPU6.TLB:TLB_shootdowns
1293 ? 2% +150.7% 3241 ? 3% interrupts.CPU7.CAL:Function_call_interrupts
26066 -36.8% 16468 ? 2% interrupts.CPU7.RES:Rescheduling_interrupts
678.50 ? 7% +228.1% 2226 ? 4% interrupts.CPU7.TLB:TLB_shootdowns
1354 ? 12% +132.3% 3145 ? 5% interrupts.CPU8.CAL:Function_call_interrupts
26259 -37.1% 16509 interrupts.CPU8.RES:Rescheduling_interrupts
679.75 ? 7% +221.8% 2187 ? 2% interrupts.CPU8.TLB:TLB_shootdowns
1338 ? 12% +145.8% 3289 ? 2% interrupts.CPU9.CAL:Function_call_interrupts
25685 ? 2% -33.4% 17114 interrupts.CPU9.RES:Rescheduling_interrupts
657.75 ? 2% +244.5% 2266 ? 3% interrupts.CPU9.TLB:TLB_shootdowns
419844 -35.7% 270085 interrupts.RES:Rescheduling_interrupts
10738 ? 4% +230.4% 35483 interrupts.TLB:TLB_shootdowns
48.03 +6.7% 51.27 perf-stat.i.MPKI
1.008e+10 -23.2% 7.737e+09 perf-stat.i.branch-instructions
2.27 +0.0 2.30 perf-stat.i.branch-miss-rate%
2.205e+08 -21.7% 1.726e+08 perf-stat.i.branch-misses
4.69 +0.2 4.89 perf-stat.i.cache-miss-rate%
1.009e+08 -13.0% 87791224 perf-stat.i.cache-misses
2.442e+09 -18.2% 1.997e+09 perf-stat.i.cache-references
117262 +83.3% 214963 perf-stat.i.context-switches
1.17 -3.4% 1.13 perf-stat.i.cpi
5.529e+10 -26.1% 4.088e+10 perf-stat.i.cpu-cycles
18271 ? 2% +149.1% 45523 perf-stat.i.cpu-migrations
593.84 -12.8% 517.54 ? 2% perf-stat.i.cycles-between-cache-misses
0.06 ? 2% +0.0 0.08 ? 3% perf-stat.i.dTLB-load-miss-rate%
1.253e+10 -23.5% 9.585e+09 perf-stat.i.dTLB-loads
0.05 +0.0 0.05 perf-stat.i.dTLB-store-miss-rate%
3638057 -19.1% 2941687 perf-stat.i.dTLB-store-misses
7.236e+09 -23.7% 5.52e+09 perf-stat.i.dTLB-stores
57.59 -3.4 54.19 perf-stat.i.iTLB-load-miss-rate%
10438031 -19.4% 8408188 perf-stat.i.iTLB-load-misses
7925579 -6.8% 7388686 perf-stat.i.iTLB-loads
4.914e+10 -23.4% 3.763e+10 perf-stat.i.instructions
5390 -4.2% 5162 perf-stat.i.instructions-per-iTLB-miss
0.88 +3.5% 0.91 perf-stat.i.ipc
591.27 ? 2% +7.6% 636.05 perf-stat.i.major-faults
3.46 -26.1% 2.55 perf-stat.i.metric.GHz
2021 -23.1% 1555 perf-stat.i.metric.M/sec
1619167 -25.4% 1207381 perf-stat.i.minor-faults
0.00 ? 15% +0.0 0.01 ?115% perf-stat.i.node-load-miss-rate%
47.02 ? 19% +455.1% 260.98 ?117% perf-stat.i.node-load-misses
5399939 -9.5% 4887416 perf-stat.i.node-loads
46.42 ? 20% +472.0% 265.53 ?117% perf-stat.i.node-store-misses
32686027 -14.7% 27888853 perf-stat.i.node-stores
1619758 -25.4% 1208017 perf-stat.i.page-faults
49.70 +6.8% 53.06 perf-stat.overall.MPKI
2.19 +0.0 2.23 perf-stat.overall.branch-miss-rate%
4.13 +0.3 4.40 perf-stat.overall.cache-miss-rate%
1.13 -3.4% 1.09 perf-stat.overall.cpi
547.96 -15.0% 465.71 perf-stat.overall.cycles-between-cache-misses
0.06 ? 2% +0.0 0.08 ? 3% perf-stat.overall.dTLB-load-miss-rate%
0.05 +0.0 0.05 perf-stat.overall.dTLB-store-miss-rate%
56.84 -3.6 53.23 perf-stat.overall.iTLB-load-miss-rate%
4708 -4.9% 4475 perf-stat.overall.instructions-per-iTLB-miss
0.89 +3.6% 0.92 perf-stat.overall.ipc
0.00 ? 19% +0.0 0.01 ?117% perf-stat.overall.node-load-miss-rate%
0.00 ? 19% +0.0 0.00 ?117% perf-stat.overall.node-store-miss-rate%
3739839 +3.3% 3861837 perf-stat.overall.path-length
9.918e+09 -23.3% 7.611e+09 perf-stat.ps.branch-instructions
2.17e+08 -21.8% 1.698e+08 perf-stat.ps.branch-misses
99277053 -13.0% 86352292 perf-stat.ps.cache-misses
2.403e+09 -18.3% 1.964e+09 perf-stat.ps.cache-references
115371 +83.3% 211429 perf-stat.ps.context-switches
5.44e+10 -26.1% 4.021e+10 perf-stat.ps.cpu-cycles
17977 ? 2% +149.1% 44774 perf-stat.ps.cpu-migrations
1.233e+10 -23.5% 9.428e+09 perf-stat.ps.dTLB-loads
3579226 -19.2% 2893270 perf-stat.ps.dTLB-store-misses
7.119e+09 -23.7% 5.429e+09 perf-stat.ps.dTLB-stores
10269385 -19.5% 8270074 perf-stat.ps.iTLB-load-misses
7797424 -6.8% 7267132 perf-stat.ps.iTLB-loads
4.835e+10 -23.4% 3.702e+10 perf-stat.ps.instructions
581.70 ? 2% +7.5% 625.60 perf-stat.ps.major-faults
1592966 -25.5% 1187499 perf-stat.ps.minor-faults
46.26 ? 19% +454.5% 256.51 ?117% perf-stat.ps.node-load-misses
5312864 -9.5% 4807129 perf-stat.ps.node-loads
45.67 ? 20% +471.5% 260.99 ?117% perf-stat.ps.node-store-misses
32157625 -14.7% 27429683 perf-stat.ps.node-stores
1593547 -25.4% 1188124 perf-stat.ps.page-faults
3.102e+12 -23.1% 2.386e+12 perf-stat.total.instructions



unixbench.time.user_time

500 +---------------------------------------------------------------------+
480 |..+ +..+.+. +. +. +..+..+.+..+..+..+.+..+ |
| |
460 |-+ |
440 |-+ |
| |
420 |-+ |
400 |-+ |
380 |-+ |
| |
360 |-+ O O O O O O O O O O O O O O O O O O O O O |
340 |-+ |
| O |
320 |-+O O O |
300 +---------------------------------------------------------------------+


unixbench.time.system_time

380 +---------------------------------------------------------------------+
| +. +.+..+. +. + +..+.+..+.. .+.+..+ |
360 |-+ +. |
340 |-+ |
| |
320 |-+ |
300 |-+ |
| |
280 |-+ |
260 |-+ |
| |
240 |-+ O O O |
220 |-+ O O O O O O O O O O O O O O O O O O |
| |
200 +---------------------------------------------------------------------+


unixbench.time.percent_of_cpu_this_job_got

1400 +--------------------------------------------------------------------+
|..+.+..+..+.+..+..+.+..+..+.+..+..+.+..+.+..+..+.+.. .+.+..+ |
1300 |-+ +. |
| |
| |
1200 |-+ |
| |
1100 |-+ |
| |
1000 |-+ |
| |
| O O O O O O O O O O O O O O O O O O O O O |
900 |-+ |
| O O O O |
800 +--------------------------------------------------------------------+


unixbench.time.minor_page_faults

1.1e+08 +----------------------------------------------------------------+
|..+.+..+.+..+ + +.+..+.+..+.+..+.+..+.+..+.+..+.+..+ |
1.05e+08 |-+ |
1e+08 |-+ |
| |
9.5e+07 |-+ |
9e+07 |-+ |
| |
8.5e+07 |-+ |
8e+07 |-+ |
| O O O O O O O O O O O O O O O O O O O O O |
7.5e+07 |-+ |
7e+07 |-+O O O |
| O |
6.5e+07 +----------------------------------------------------------------+


unixbench.time.voluntary_context_switches

7e+06 +-----------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O O O O |
6.5e+06 |-+ |
6e+06 |-+ |
| O O O O |
5.5e+06 |-+ |
| |
5e+06 |-+ |
| |
4.5e+06 |-+ |
4e+06 |-+ |
| |
3.5e+06 |-+ .+.+..+.+ |
| .+. .+.. .+. .+.+..+.+. |
3e+06 +-----------------------------------------------------------------+


unixbench.time.involuntary_context_switches

1.4e+06 +-----------------------------------------------------------------+
| +.+..+.+..+.+..+.+..+.+. +.. .+. .+..+.+.. |
1.3e+06 |.. +. +..+ +.+..+.+ |
| |
1.2e+06 |-+ |
1.1e+06 |-+ |
| |
1e+06 |-+ |
| |
900000 |-+ |
800000 |-+ |
| O O O |
700000 |-+ O O O O O O O O O O O O |
| O O O O O O O O O O |
600000 +-----------------------------------------------------------------+


unixbench.score

23000 +-------------------------------------------------------------------+
|..+.+..+.+..+..+.+..+..+.+..+.+..+..+.+..+.+..+..+. |
22000 |-+ +..+..+.+ |
21000 |-+ |
| |
20000 |-+ |
19000 |-+ |
| |
18000 |-+ |
17000 |-+ |
| O O O O O O O O O O O O O O O O O |
16000 |-+ O O O O |
15000 |-+ |
| O O O O |
14000 +-------------------------------------------------------------------+


unixbench.workload

900000 +------------------------------------------------------------------+
| .+. |
850000 |..+.+..+.+..+.+..+..+.+. +..+..+.+..+.+..+.+..+.. .+..+. |
| + + |
800000 |-+ |
750000 |-+ |
| |
700000 |-+ |
| |
650000 |-+ |
600000 |-+ O O O O O O O O O O O O O O O O O O O O O |
| |
550000 |-+O O O O |
| |
500000 +------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-csl-2ap1: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-9/performance/1SSD/xfs/sync/x86_64-rhel-8.3/32/debian-10.4-x86_64-20200603.cgz/300s/randwrite/lkp-csl-2ap1/256g/fio-basic/0x4003003

commit:
c9847a7f94 ("locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")
10a59003d2 ("locking/rwsem: Remove reader optimistic spinning")

c9847a7f94679e74 10a59003d29fbfa855b2ef4f353
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.ACPI_Error
%stddev %change %stddev
\ | \
0.04 ± 5% +0.1 0.10 ± 6% fio.latency_100us%
33.62 ± 7% +3.6 37.18 ± 3% fio.latency_10us%
0.01 +0.0 0.05 ± 6% fio.latency_250us%
0.14 ± 9% -0.1 0.03 ± 61% fio.latency_2us%
59.95 ± 3% -4.3 55.60 ± 2% fio.latency_4us%
0.34 ± 2% +0.1 0.46 fio.latency_50ms%
256.52 +17.5% 301.42 fio.time.elapsed_time
256.52 +17.5% 301.42 fio.time.elapsed_time.max
5.369e+08 -17.5% 4.43e+08 fio.time.file_system_outputs
1384 ± 13% +410.5% 7064 ± 21% fio.time.involuntary_context_switches
153.25 -23.5% 117.25 ± 3% fio.time.percent_of_cpu_this_job_got
316.95 ± 2% -10.8% 282.66 ± 3% fio.time.system_time
77.63 -6.9% 72.27 ± 3% fio.time.user_time
631893 ± 7% +225.6% 2057309 ± 4% fio.time.voluntary_context_switches
67108864 -17.5% 55373247 fio.workload
1027 -29.9% 720.93 fio.write_bw_MBps
20288 ± 5% +19.2% 24192 ± 4% fio.write_clat_99%_us
117541 +46.9% 172624 fio.write_clat_mean_us
1928136 +27.7% 2462685 fio.write_clat_stddev
263161 -29.9% 184558 fio.write_iops
14.11 +6.5% 15.03 iostat.cpu.iowait
2.94 -16.2% 2.46 iostat.cpu.system
1.51 -0.5 1.00 mpstat.cpu.all.sys%
0.15 -0.0 0.12 ? 3% mpstat.cpu.all.usr%
297.78 +14.8% 342.00 uptime.boot
47592 +14.2% 54341 uptime.idle
13822886 ? 16% -35.1% 8966774 ? 32% numa-numastat.node1.local_node
13838524 ? 16% -35.0% 8997869 ? 32% numa-numastat.node1.numa_hit
2422719 ? 27% -52.3% 1155068 ? 43% numa-numastat.node2.numa_foreign
6968721 ? 52% +189.4% 20169562 ? 5% cpuidle.C1.usage
3.139e+10 ? 51% +74.9% 5.488e+10 cpuidle.C1E.time
76223737 ? 30% +59.5% 1.216e+08 cpuidle.C1E.usage
15993891 ? 63% +148.7% 39780679 ? 4% cpuidle.POLL.time
2432485 ? 11% +461.8% 13666817 ? 5% cpuidle.POLL.usage
929962 -32.4% 628917 vmstat.io.bo
1.387e+08 -13.0% 1.206e+08 vmstat.memory.cache
56328701 +32.2% 74444784 vmstat.memory.free
11079 ? 2% +2404.1% 277436 vmstat.system.cs
526522 -3.1% 509959 vmstat.system.in
8306 +9.6% 9105 meminfo.Active(anon)
1.378e+08 -13.0% 1.198e+08 meminfo.Cached
1.37e+08 -13.1% 1.19e+08 meminfo.Inactive
1.356e+08 -13.3% 1.176e+08 meminfo.Inactive(file)
56492602 +32.0% 74549292 meminfo.MemFree
1.412e+08 -12.8% 1.232e+08 meminfo.Memused
18226 ? 2% -87.4% 2302 ? 24% meminfo.Writeback
763436 -16.4% 638161 meminfo.max_used_kB
11511284 ? 6% +48.5% 17092100 ? 28% numa-meminfo.node0.MemFree
37643579 -14.8% 32062762 ? 15% numa-meminfo.node0.MemUsed
5353 ? 7% -87.6% 664.50 ? 30% numa-meminfo.node0.Writeback
37095792 ? 2% -11.0% 33024787 ? 4% numa-meminfo.node1.FilePages
36877519 ? 2% -10.9% 32843351 ? 4% numa-meminfo.node1.Inactive
36551402 ? 3% -11.1% 32491409 ? 6% numa-meminfo.node1.Inactive(file)
12041430 ? 7% +32.7% 15975564 ? 10% numa-meminfo.node1.MemFree
37493024 ? 2% -10.5% 33558891 ? 4% numa-meminfo.node1.MemUsed
5475 ? 8% -87.5% 686.25 ? 16% numa-meminfo.node1.Writeback
34723911 ? 5% -17.0% 28828495 ? 11% numa-meminfo.node2.FilePages
34527637 ? 5% -17.2% 28599325 ? 12% numa-meminfo.node2.Inactive
33909426 ? 4% -16.6% 28295544 ? 11% numa-meminfo.node2.Inactive(file)
14433625 ? 12% +36.9% 19759122 ? 18% numa-meminfo.node2.MemFree
2723 ? 67% -49.6% 1373 ?118% numa-meminfo.node2.PageTables
4375 ? 11% -85.0% 657.75 ? 11% numa-meminfo.node2.Writeback
3900 ? 29% -84.2% 615.50 ? 37% numa-meminfo.node3.Writeback
10098273 ? 3% -20.8% 7998683 ? 18% numa-vmstat.node0.nr_dirtied
2877523 ? 6% +48.4% 4270350 ? 28% numa-vmstat.node0.nr_free_pages
1360 ? 11% -86.5% 183.00 ? 23% numa-vmstat.node0.nr_writeback
8070468 ? 2% -26.8% 5909623 ? 21% numa-vmstat.node0.nr_written
10724541 ? 5% -21.4% 8433227 ? 7% numa-vmstat.node1.nr_dirtied
9274641 ? 2% -11.0% 8258905 ? 4% numa-vmstat.node1.nr_file_pages
3009682 ? 7% +32.6% 3991208 ? 10% numa-vmstat.node1.nr_free_pages
9138561 ? 3% -11.1% 8125800 ? 6% numa-vmstat.node1.nr_inactive_file
1398 ? 8% -85.1% 208.50 ? 22% numa-vmstat.node1.nr_writeback
8489928 ? 6% -25.6% 6312338 ? 8% numa-vmstat.node1.nr_written
9138656 ? 3% -11.1% 8125844 ? 6% numa-vmstat.node1.nr_zone_inactive_file
9486707 ? 6% -24.2% 7193218 ? 12% numa-vmstat.node2.nr_dirtied
8682169 ? 5% -17.0% 7210434 ? 11% numa-vmstat.node2.nr_file_pages
3607260 ? 12% +36.9% 4936622 ? 18% numa-vmstat.node2.nr_free_pages
8478548 ? 4% -16.5% 7077427 ? 11% numa-vmstat.node2.nr_inactive_file
677.50 ? 68% -49.7% 341.00 ?119% numa-vmstat.node2.nr_page_table_pages
1037 ? 18% -85.2% 153.00 ? 11% numa-vmstat.node2.nr_writeback
7392605 ? 7% -30.2% 5158591 ? 15% numa-vmstat.node2.nr_written
8478605 ? 4% -16.5% 7077446 ? 11% numa-vmstat.node2.nr_zone_inactive_file
951.75 ? 32% -83.7% 154.75 ? 21% numa-vmstat.node3.nr_writeback
2423 ? 4% -12.6% 2116 ? 3% slabinfo.Acpi-Parse.active_objs
2423 ? 4% -12.6% 2116 ? 3% slabinfo.Acpi-Parse.num_objs
4105 ? 4% -67.3% 1343 ? 3% slabinfo.biovec-max.active_objs
522.75 ? 4% -67.0% 172.25 ? 4% slabinfo.biovec-max.active_slabs
4184 ? 4% -67.0% 1380 ? 4% slabinfo.biovec-max.num_objs
522.75 ? 4% -67.0% 172.25 ? 4% slabinfo.biovec-max.num_slabs
4329 ? 6% +20.2% 5203 ? 5% slabinfo.ip6-frags.active_objs
4329 ? 6% +20.2% 5203 ? 5% slabinfo.ip6-frags.num_objs
21021 ? 4% -25.6% 15642 slabinfo.kmalloc-128.active_objs
21433 ? 4% -26.0% 15861 slabinfo.kmalloc-128.num_objs
16475 ? 4% -28.5% 11778 slabinfo.kmalloc-4k.active_objs
2258 ? 4% -28.8% 1608 slabinfo.kmalloc-4k.active_slabs
18069 ? 4% -28.8% 12871 slabinfo.kmalloc-4k.num_objs
2258 ? 4% -28.8% 1608 slabinfo.kmalloc-4k.num_slabs
6140 -11.7% 5419 slabinfo.kmalloc-8k.active_slabs
24564 -11.7% 21680 slabinfo.kmalloc-8k.num_objs
6140 -11.7% 5419 slabinfo.kmalloc-8k.num_slabs
27805 ? 4% -16.3% 23275 ? 4% slabinfo.numa_policy.active_objs
470.00 ? 3% -19.6% 377.75 ? 5% slabinfo.numa_policy.active_slabs
29169 ? 3% -19.6% 23440 ? 4% slabinfo.numa_policy.num_objs
470.00 ? 3% -19.6% 377.75 ? 5% slabinfo.numa_policy.num_slabs
20269 ? 4% -27.1% 14776 ? 3% slabinfo.pool_workqueue.active_objs
20699 ? 4% -27.9% 14916 ? 3% slabinfo.pool_workqueue.num_objs
16526 +13.9% 18824 ? 7% slabinfo.proc_inode_cache.active_objs
17590 +13.3% 19925 ? 6% slabinfo.proc_inode_cache.num_objs
3233 ? 9% -82.7% 559.50 ? 10% slabinfo.xfs_efd_item.active_objs
3369 ? 9% -83.2% 567.00 ? 10% slabinfo.xfs_efd_item.num_objs
3276 ? 8% -82.7% 567.50 ? 10% slabinfo.xfs_efi_item.active_objs
3411 ? 8% -83.1% 575.50 ? 10% slabinfo.xfs_efi_item.num_objs
183556 ? 15% -40.0% 110053 ? 24% proc-vmstat.compact_daemon_free_scanned
1164 ? 17% -56.0% 512.00 ? 38% proc-vmstat.compact_daemon_wake
178173 ? 14% -36.5% 113212 ? 16% proc-vmstat.compact_isolated
768.25 ? 12% -43.2% 436.25 ? 15% proc-vmstat.kswapd_high_wmark_hit_quickly
1242 ? 18% -55.8% 548.75 ? 25% proc-vmstat.kswapd_low_wmark_hit_quickly
2073 +9.6% 2272 proc-vmstat.nr_active_anon
67108825 -17.5% 55358412 proc-vmstat.nr_dirtied
34429046 -13.1% 29922129 proc-vmstat.nr_file_pages
14133405 +32.0% 18660812 proc-vmstat.nr_free_pages
33887409 -13.3% 29379656 proc-vmstat.nr_inactive_file
299680 -7.2% 277963 proc-vmstat.nr_slab_unreclaimable
4828 ± 4% -88.7% 546.75 ± 5% proc-vmstat.nr_writeback
59368449 -20.8% 47001378 proc-vmstat.nr_written
2073 +9.6% 2272 proc-vmstat.nr_zone_active_anon
33887718 -13.3% 29379826 proc-vmstat.nr_zone_inactive_file
20453611 ± 9% -18.6% 16639765 ± 10% proc-vmstat.numa_foreign
49075650 ± 3% -16.2% 41121350 ± 5% proc-vmstat.numa_hit
48981929 ± 3% -16.2% 41027625 ± 5% proc-vmstat.numa_local
20453611 ± 9% -18.6% 16639765 ± 10% proc-vmstat.numa_miss
20547332 ± 9% -18.6% 16733490 ± 10% proc-vmstat.numa_other
2371 ± 8% -51.0% 1161 ± 8% proc-vmstat.pageoutrun
75679865 -15.3% 64118824 proc-vmstat.pgalloc_normal
1556864 +10.8% 1724803 proc-vmstat.pgfault
27733269 -38.9% 16947822 ± 3% proc-vmstat.pgfree
90755 ± 13% -35.4% 58642 ± 15% proc-vmstat.pgmigrate_success
2.404e+08 -20.7% 1.906e+08 proc-vmstat.pgpgout
56870 +16.6% 66312 proc-vmstat.pgreuse
19170054 -56.4% 8359591 proc-vmstat.pgscan_file
19170054 -56.4% 8359591 proc-vmstat.pgscan_kswapd
19170022 -56.4% 8359567 proc-vmstat.pgsteal_file
19170022 -56.4% 8359567 proc-vmstat.pgsteal_kswapd
166406 ± 8% -45.0% 91531 ± 33% proc-vmstat.slabs_scanned
45.86 ± 34% +181.8% 129.26 ± 13% sched_debug.cfs_rq:/.load_avg.avg
2084 ± 61% +101.1% 4192 ± 17% sched_debug.cfs_rq:/.load_avg.max
228.76 ± 54% +113.4% 488.17 ± 15% sched_debug.cfs_rq:/.load_avg.stddev
0.05 ± 8% -24.0% 0.04 ± 14% sched_debug.cfs_rq:/.nr_running.avg
0.22 ± 3% -10.7% 0.19 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
59.93 ± 3% -26.4% 44.09 ± 9% sched_debug.cfs_rq:/.runnable_avg.avg
967.61 ± 9% -27.3% 703.21 ± 3% sched_debug.cfs_rq:/.runnable_avg.max
151.63 ± 7% -24.6% 114.37 ± 7% sched_debug.cfs_rq:/.runnable_avg.stddev
17842 ± 96% +169.4% 48067 ± 34% sched_debug.cfs_rq:/.spread0.max
59.77 ± 3% -28.2% 42.91 ± 9% sched_debug.cfs_rq:/.util_avg.avg
966.71 ± 9% -28.5% 691.08 ± 2% sched_debug.cfs_rq:/.util_avg.max
151.34 ± 7% -27.8% 109.22 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
13.27 ± 13% -58.4% 5.52 ± 27% sched_debug.cfs_rq:/.util_est_enqueued.avg
768.17 ± 9% -49.2% 389.92 ± 20% sched_debug.cfs_rq:/.util_est_enqueued.max
88.39 ± 9% -54.0% 40.68 ± 22% sched_debug.cfs_rq:/.util_est_enqueued.stddev
39884 ± 51% -86.8% 5276 ± 39% sched_debug.cpu.avg_idle.min
197646 ± 12% +52.6% 301694 ± 5% sched_debug.cpu.avg_idle.stddev
145476 ± 10% +30.6% 189963 sched_debug.cpu.clock.avg
145491 ± 10% +30.6% 189978 sched_debug.cpu.clock.max
145461 ± 10% +30.6% 189950 sched_debug.cpu.clock.min
143795 ± 10% +30.5% 187634 sched_debug.cpu.clock_task.avg
144737 ± 10% +30.4% 188805 sched_debug.cpu.clock_task.max
136580 ± 10% +32.0% 180288 sched_debug.cpu.clock_task.min
7064 ± 7% +20.3% 8499 sched_debug.cpu.curr->pid.max
0.00 ± 49% -43.7% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
0.03 ± 2% -15.5% 0.03 ± 11% sched_debug.cpu.nr_running.avg
9449 ± 7% +2186.6% 216066 sched_debug.cpu.nr_switches.avg
48468 ± 16% +3290.8% 1643455 ± 3% sched_debug.cpu.nr_switches.max
1144 ± 14% +28.6% 1471 ± 3% sched_debug.cpu.nr_switches.min
9828 ± 8% +4156.6% 418353 sched_debug.cpu.nr_switches.stddev
-28.69 +314.2% -118.83 sched_debug.cpu.nr_uninterruptible.min
16.51 ± 2% +84.0% 30.38 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
7861 ± 9% +2639.8% 215396 sched_debug.cpu.sched_count.avg
45309 ± 16% +3537.5% 1648138 ± 3% sched_debug.cpu.sched_count.max
405.62 ± 14% +39.8% 567.08 sched_debug.cpu.sched_count.min
9538 ± 9% +4305.9% 420274 sched_debug.cpu.sched_count.stddev
3194 ± 9% +2430.1% 80819 sched_debug.cpu.sched_goidle.avg
22591 ± 16% +2931.8% 684944 ± 2% sched_debug.cpu.sched_goidle.max
150.03 ± 13% +38.5% 207.79 sched_debug.cpu.sched_goidle.min
4250 ± 10% +3787.0% 165209 sched_debug.cpu.sched_goidle.stddev
4451 ± 8% +2675.1% 123520 sched_debug.cpu.ttwu_count.avg
52506 ± 8% +2032.0% 1119423 ± 4% sched_debug.cpu.ttwu_count.max
141.88 ± 14% +40.3% 199.08 sched_debug.cpu.ttwu_count.min
9369 ± 5% +2656.8% 258303 sched_debug.cpu.ttwu_count.stddev
2233 ± 10% +1279.5% 30807 sched_debug.cpu.ttwu_local.avg
20049 ± 6% +1200.9% 260826 ± 3% sched_debug.cpu.ttwu_local.max
139.15 ± 14% +41.1% 196.33 sched_debug.cpu.ttwu_local.min
3680 ± 5% +1596.4% 62439 sched_debug.cpu.ttwu_local.stddev
145462 ± 10% +30.6% 189951 sched_debug.cpu_clk
144588 ± 10% +30.8% 189077 sched_debug.ktime
145825 ± 10% +30.5% 190314 sched_debug.sched_clk
2.167e+09 -14.5% 1.852e+09 perf-stat.i.branch-instructions
31.58 ± 12% -7.0 24.62 perf-stat.i.cache-miss-rate%
14614329 ± 2% -13.8% 12595505 perf-stat.i.cache-misses
11124 ± 2% +2404.3% 278598 perf-stat.i.context-switches
1.64 +8.0% 1.77 ± 2% perf-stat.i.cpi
1.802e+10 ± 2% -9.8% 1.625e+10 ± 2% perf-stat.i.cpu-cycles
253.44 +2786.6% 7315 ± 7% perf-stat.i.cpu-migrations
1244 ± 2% +3.5% 1288 ± 2% perf-stat.i.cycles-between-cache-misses
0.07 ± 26% -0.0 0.04 ± 28% perf-stat.i.dTLB-load-miss-rate%
2046807 ± 26% -53.7% 948333 ± 28% perf-stat.i.dTLB-load-misses
2.887e+09 -16.4% 2.414e+09 perf-stat.i.dTLB-loads
0.01 ± 37% -0.0 0.00 ± 20% perf-stat.i.dTLB-store-miss-rate%
115412 ± 37% -55.6% 51255 ± 20% perf-stat.i.dTLB-store-misses
1.449e+09 -11.7% 1.278e+09 perf-stat.i.dTLB-stores
61.58 -11.5 50.07 perf-stat.i.iTLB-load-miss-rate%
5301708 ± 4% +65.5% 8775440 perf-stat.i.iTLB-loads
1.076e+10 -16.4% 8.998e+09 perf-stat.i.instructions
1262 -19.3% 1017 perf-stat.i.instructions-per-iTLB-miss
0.62 -7.9% 0.57 ± 2% perf-stat.i.ipc
1094 -15.2% 928.13 perf-stat.i.major-faults
0.09 ± 2% -9.8% 0.08 ± 2% perf-stat.i.metric.GHz
34.19 -14.5% 29.22 perf-stat.i.metric.M/sec
70.32 ± 4% +8.1 78.39 ± 2% perf-stat.i.node-load-miss-rate%
1712035 ± 8% -36.8% 1081998 ± 7% perf-stat.i.node-loads
59.44 ± 10% +11.1 70.55 ± 2% perf-stat.i.node-store-miss-rate%
549390 ± 17% -35.2% 355933 ± 5% perf-stat.i.node-stores
4832 -3.6% 4656 perf-stat.i.page-faults
1.68 +7.8% 1.81 ± 2% perf-stat.overall.cpi
1234 +4.6% 1290 ± 2% perf-stat.overall.cycles-between-cache-misses
0.07 ± 26% -0.0 0.04 ± 28% perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 37% -0.0 0.00 ± 20% perf-stat.overall.dTLB-store-miss-rate%
62.13 -11.7 50.40 perf-stat.overall.iTLB-load-miss-rate%
1239 -18.5% 1009 perf-stat.overall.instructions-per-iTLB-miss
0.60 -7.2% 0.55 ± 2% perf-stat.overall.ipc
68.97 ± 4% +8.1 77.07 ± 2% perf-stat.overall.node-load-miss-rate%
41110 +19.2% 49006 perf-stat.overall.path-length
2.159e+09 -14.5% 1.847e+09 perf-stat.ps.branch-instructions
14564018 ± 2% -13.7% 12562224 perf-stat.ps.cache-misses
11076 ± 2% +2406.2% 277608 perf-stat.ps.context-switches
1.797e+10 ± 2% -9.8% 1.621e+10 ± 2% perf-stat.ps.cpu-cycles
252.44 +2787.8% 7289 ± 7% perf-stat.ps.cpu-migrations
2038697 ± 26% -53.6% 945188 ± 28% perf-stat.ps.dTLB-load-misses
2.877e+09 -16.3% 2.407e+09 perf-stat.ps.dTLB-loads
115024 ± 37% -55.5% 51145 ± 20% perf-stat.ps.dTLB-store-misses
1.443e+09 -11.7% 1.275e+09 perf-stat.ps.dTLB-stores
5280755 ± 4% +65.6% 8745507 perf-stat.ps.iTLB-loads
1.072e+10 -16.3% 8.973e+09 perf-stat.ps.instructions
1094 -14.9% 931.28 perf-stat.ps.major-faults
1707854 ± 8% -36.7% 1080972 ± 7% perf-stat.ps.node-loads
548040 ± 17% -35.1% 355596 ± 5% perf-stat.ps.node-stores
4818 -3.5% 4647 perf-stat.ps.page-faults
2.759e+12 -1.6% 2.713e+12 perf-stat.total.instructions
516.00 +17.4% 606.00 interrupts.9:IO-APIC.9-fasteoi.acpi
241214 ± 20% +386.1% 1172555 interrupts.CAL:Function_call_interrupts
1173 ± 16% +1030.0% 13255 ± 81% interrupts.CPU0.CAL:Function_call_interrupts
514234 +17.7% 605443 interrupts.CPU0.LOC:Local_timer_interrupts
516.00 +17.4% 606.00 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
512723 +18.1% 605278 interrupts.CPU1.LOC:Local_timer_interrupts
1416 ± 28% +1078.1% 16682 ± 84% interrupts.CPU10.CAL:Function_call_interrupts
511935 +18.3% 605746 interrupts.CPU10.LOC:Local_timer_interrupts
1143 ± 20% +874.2% 11139 ± 83% interrupts.CPU100.CAL:Function_call_interrupts
510002 +18.8% 605836 interrupts.CPU100.LOC:Local_timer_interrupts
1137 ± 20% +863.0% 10952 ± 74% interrupts.CPU101.CAL:Function_call_interrupts
510758 +18.5% 605423 interrupts.CPU101.LOC:Local_timer_interrupts
1112 ± 20% +799.2% 10001 ± 82% interrupts.CPU102.CAL:Function_call_interrupts
513218 +18.0% 605498 interrupts.CPU102.LOC:Local_timer_interrupts
1144 ± 20% +760.0% 9840 ± 71% interrupts.CPU103.CAL:Function_call_interrupts
513340 +17.9% 605423 interrupts.CPU103.LOC:Local_timer_interrupts
1139 ± 18% +647.4% 8518 ± 75% interrupts.CPU104.CAL:Function_call_interrupts
511907 +18.3% 605394 interrupts.CPU104.LOC:Local_timer_interrupts
1128 ± 20% +638.4% 8330 ± 55% interrupts.CPU105.CAL:Function_call_interrupts
510862 +18.5% 605403 interrupts.CPU105.LOC:Local_timer_interrupts
1136 ± 21% +501.6% 6833 ± 54% interrupts.CPU106.CAL:Function_call_interrupts
510908 +18.5% 605427 interrupts.CPU106.LOC:Local_timer_interrupts
1188 ± 21% +566.4% 7919 ± 59% interrupts.CPU107.CAL:Function_call_interrupts
513539 +17.9% 605427 interrupts.CPU107.LOC:Local_timer_interrupts
1233 ± 23% +553.6% 8064 ± 63% interrupts.CPU108.CAL:Function_call_interrupts
514204 +17.7% 605443 interrupts.CPU108.LOC:Local_timer_interrupts
1154 ± 18% +470.9% 6592 ± 74% interrupts.CPU109.CAL:Function_call_interrupts
511674 +18.3% 605335 interrupts.CPU109.LOC:Local_timer_interrupts
1318 ± 22% +1264.3% 17984 ± 87% interrupts.CPU11.CAL:Function_call_interrupts
514818 +17.6% 605440 interrupts.CPU11.LOC:Local_timer_interrupts
1130 ± 19% +519.4% 7001 ± 59% interrupts.CPU110.CAL:Function_call_interrupts
510713 +18.6% 605450 interrupts.CPU110.LOC:Local_timer_interrupts
1140 ± 17% +493.6% 6767 ± 54% interrupts.CPU111.CAL:Function_call_interrupts
509610 +18.8% 605511 interrupts.CPU111.LOC:Local_timer_interrupts
1192 ± 21% +337.7% 5217 ± 51% interrupts.CPU112.CAL:Function_call_interrupts
511076 +18.6% 605947 interrupts.CPU112.LOC:Local_timer_interrupts
37.00 ± 64% +12832.4% 4785 ± 95% interrupts.CPU112.RES:Rescheduling_interrupts
1131 ± 16% +322.9% 4784 ± 48% interrupts.CPU113.CAL:Function_call_interrupts
509857 +18.9% 606431 interrupts.CPU113.LOC:Local_timer_interrupts
41.00 ± 57% +11022.6% 4560 ± 96% interrupts.CPU113.RES:Rescheduling_interrupts
1174 ± 24% +328.4% 5029 ± 45% interrupts.CPU114.CAL:Function_call_interrupts
509860 +18.8% 605586 interrupts.CPU114.LOC:Local_timer_interrupts
46.75 ± 63% +9096.8% 4299 ± 97% interrupts.CPU114.RES:Rescheduling_interrupts
1183 ± 21% +290.9% 4626 ± 43% interrupts.CPU115.CAL:Function_call_interrupts
511295 +18.5% 605739 interrupts.CPU115.LOC:Local_timer_interrupts
35.75 ± 63% +10950.3% 3950 ± 94% interrupts.CPU115.RES:Rescheduling_interrupts
1195 ± 24% +275.5% 4489 ± 59% interrupts.CPU116.CAL:Function_call_interrupts
509693 +19.1% 607228 interrupts.CPU116.LOC:Local_timer_interrupts
1208 ± 20% +290.3% 4716 ± 42% interrupts.CPU117.CAL:Function_call_interrupts
510905 +18.5% 605473 interrupts.CPU117.LOC:Local_timer_interrupts
1136 ± 22% +338.7% 4986 ± 48% interrupts.CPU118.CAL:Function_call_interrupts
509923 +18.7% 605494 interrupts.CPU118.LOC:Local_timer_interrupts
33.75 ± 74% +12737.0% 4332 ± 98% interrupts.CPU118.RES:Rescheduling_interrupts
1153 ± 20% +365.1% 5366 ± 45% interrupts.CPU119.CAL:Function_call_interrupts
511178 +18.5% 605596 interrupts.CPU119.LOC:Local_timer_interrupts
1274 ± 22% +1407.0% 19210 ± 86% interrupts.CPU12.CAL:Function_call_interrupts
514753 +17.6% 605484 interrupts.CPU12.LOC:Local_timer_interrupts
513843 +17.8% 605343 interrupts.CPU120.LOC:Local_timer_interrupts
515081 +17.5% 605238 interrupts.CPU121.LOC:Local_timer_interrupts
515027 +17.5% 605302 interrupts.CPU122.LOC:Local_timer_interrupts
514802 +17.6% 605242 interrupts.CPU123.LOC:Local_timer_interrupts
513728 +17.8% 605344 interrupts.CPU124.LOC:Local_timer_interrupts
514733 +17.6% 605280 interrupts.CPU125.LOC:Local_timer_interrupts
513881 +17.8% 605350 interrupts.CPU126.LOC:Local_timer_interrupts
513886 +17.8% 605117 interrupts.CPU127.LOC:Local_timer_interrupts
513632 +17.8% 605303 interrupts.CPU128.LOC:Local_timer_interrupts
206.75 ± 34% -45.5% 112.75 ± 34% interrupts.CPU128.NMI:Non-maskable_interrupts
206.75 ± 34% -45.5% 112.75 ± 34% interrupts.CPU128.PMI:Performance_monitoring_interrupts
513642 +17.8% 605091 interrupts.CPU129.LOC:Local_timer_interrupts
1196 ± 19% +1408.7% 18047 ± 83% interrupts.CPU13.CAL:Function_call_interrupts
512136 +18.2% 605406 interrupts.CPU13.LOC:Local_timer_interrupts
513792 +17.8% 605254 interrupts.CPU130.LOC:Local_timer_interrupts
513659 +17.8% 605248 interrupts.CPU131.LOC:Local_timer_interrupts
513651 +17.8% 605254 interrupts.CPU132.LOC:Local_timer_interrupts
514585 +17.6% 605369 interrupts.CPU133.LOC:Local_timer_interrupts
513797 +17.9% 605557 interrupts.CPU134.LOC:Local_timer_interrupts
513767 +17.8% 605135 interrupts.CPU135.LOC:Local_timer_interrupts
514328 +17.7% 605254 interrupts.CPU136.LOC:Local_timer_interrupts
1177 ± 22% +94.6% 2290 ± 30% interrupts.CPU137.CAL:Function_call_interrupts
513855 +17.9% 605641 interrupts.CPU137.LOC:Local_timer_interrupts
3.00 ± 81% +7733.3% 235.00 ±168% interrupts.CPU137.RES:Rescheduling_interrupts
513480 +17.9% 605256 interrupts.CPU138.LOC:Local_timer_interrupts
513887 +17.8% 605283 interrupts.CPU139.LOC:Local_timer_interrupts
1196 ± 18% +1461.2% 18672 ± 88% interrupts.CPU14.CAL:Function_call_interrupts
511309 +18.4% 605432 interrupts.CPU14.LOC:Local_timer_interrupts
513628 +17.8% 605234 interrupts.CPU140.LOC:Local_timer_interrupts
513663 +17.8% 605266 interrupts.CPU141.LOC:Local_timer_interrupts
513516 +17.9% 605500 interrupts.CPU142.LOC:Local_timer_interrupts
514605 +17.7% 605624 interrupts.CPU143.LOC:Local_timer_interrupts
514859 +17.6% 605583 interrupts.CPU144.LOC:Local_timer_interrupts
1208 ± 23% +802.6% 10905 ± 82% interrupts.CPU145.CAL:Function_call_interrupts
513118 +18.0% 605359 interrupts.CPU145.LOC:Local_timer_interrupts
515103 +17.5% 605500 interrupts.CPU146.LOC:Local_timer_interrupts
512407 +18.1% 605372 interrupts.CPU147.LOC:Local_timer_interrupts
513393 +17.9% 605483 interrupts.CPU148.LOC:Local_timer_interrupts
514296 +17.7% 605316 interrupts.CPU149.LOC:Local_timer_interrupts
1216 ± 20% +1147.0% 15166 ± 90% interrupts.CPU15.CAL:Function_call_interrupts
510385 +18.6% 605419 interrupts.CPU15.LOC:Local_timer_interrupts
511128 +18.4% 605408 interrupts.CPU150.LOC:Local_timer_interrupts
511504 +18.3% 605269 interrupts.CPU151.LOC:Local_timer_interrupts
114.25 ± 11% +339.4% 502.00 ± 77% interrupts.CPU151.NMI:Non-maskable_interrupts
114.25 ± 11% +339.4% 502.00 ± 77% interrupts.CPU151.PMI:Performance_monitoring_interrupts
511130 +18.4% 605391 interrupts.CPU152.LOC:Local_timer_interrupts
510699 +18.5% 605383 interrupts.CPU153.LOC:Local_timer_interrupts
511113 +18.5% 605428 interrupts.CPU154.LOC:Local_timer_interrupts
17.75 ±143% +1e+06% 180613 ±101% interrupts.CPU154.RES:Rescheduling_interrupts
511706 +18.3% 605328 interrupts.CPU155.LOC:Local_timer_interrupts
512493 +18.1% 605464 interrupts.CPU156.LOC:Local_timer_interrupts
511511 +18.3% 605357 interrupts.CPU157.LOC:Local_timer_interrupts
1182 ± 19% +449.2% 6496 ± 81% interrupts.CPU158.CAL:Function_call_interrupts
512833 +18.0% 605345 interrupts.CPU158.LOC:Local_timer_interrupts
1164 ± 20% +406.7% 5899 ± 78% interrupts.CPU159.CAL:Function_call_interrupts
511132 +18.4% 605336 interrupts.CPU159.LOC:Local_timer_interrupts
513120 +18.1% 605836 interrupts.CPU16.LOC:Local_timer_interrupts
1186 ± 19% +295.6% 4694 ± 68% interrupts.CPU160.CAL:Function_call_interrupts
511530 +18.3% 605359 interrupts.CPU160.LOC:Local_timer_interrupts
510929 +18.6% 605714 interrupts.CPU161.LOC:Local_timer_interrupts
510784 +18.6% 605596 interrupts.CPU162.LOC:Local_timer_interrupts
511100 +18.5% 605424 interrupts.CPU163.LOC:Local_timer_interrupts
1.25 ±103% +3.4e+05% 4219 ±104% interrupts.CPU163.RES:Rescheduling_interrupts
1173 ± 18% +279.7% 4453 ± 62% interrupts.CPU164.CAL:Function_call_interrupts
510706 +18.5% 605389 interrupts.CPU164.LOC:Local_timer_interrupts
1.75 ± 24% +2.7e+05% 4804 ± 99% interrupts.CPU164.RES:Rescheduling_interrupts
1214 ± 23% +249.3% 4241 ± 67% interrupts.CPU165.CAL:Function_call_interrupts
510738 +18.6% 605932 interrupts.CPU165.LOC:Local_timer_interrupts
91.00 ± 23% +178.0% 253.00 ± 54% interrupts.CPU165.NMI:Non-maskable_interrupts
91.00 ± 23% +178.0% 253.00 ± 54% interrupts.CPU165.PMI:Performance_monitoring_interrupts
6.75 ± 63% +69411.1% 4692 ±102% interrupts.CPU165.RES:Rescheduling_interrupts
1209 ± 18% +218.9% 3856 ± 62% interrupts.CPU166.CAL:Function_call_interrupts
510778 +18.6% 605622 interrupts.CPU166.LOC:Local_timer_interrupts
90.00 ± 24% +114.2% 192.75 ± 35% interrupts.CPU166.NMI:Non-maskable_interrupts
90.00 ± 24% +114.2% 192.75 ± 35% interrupts.CPU166.PMI:Performance_monitoring_interrupts
1177 ± 18% +309.5% 4820 ± 52% interrupts.CPU167.CAL:Function_call_interrupts
511077 +18.5% 605666 interrupts.CPU167.LOC:Local_timer_interrupts
90.25 ± 24% +216.3% 285.50 ± 19% interrupts.CPU167.NMI:Non-maskable_interrupts
90.25 ± 24% +216.3% 285.50 ± 19% interrupts.CPU167.PMI:Performance_monitoring_interrupts
2.25 ± 72% +2e+05% 4486 ± 99% interrupts.CPU167.RES:Rescheduling_interrupts
1262 ± 25% +54.7% 1953 ± 27% interrupts.CPU168.CAL:Function_call_interrupts
516016 +17.3% 605276 interrupts.CPU168.LOC:Local_timer_interrupts
515960 +17.3% 605469 interrupts.CPU169.LOC:Local_timer_interrupts
434.75 ± 59% -62.9% 161.50 ± 30% interrupts.CPU169.NMI:Non-maskable_interrupts
434.75 ± 59% -62.9% 161.50 ± 30% interrupts.CPU169.PMI:Performance_monitoring_interrupts
1104 ± 22% +528.0% 6934 ± 62% interrupts.CPU17.CAL:Function_call_interrupts
510328 +18.7% 605554 interrupts.CPU17.LOC:Local_timer_interrupts
1246 ± 34% +68.1% 2095 ± 39% interrupts.CPU170.CAL:Function_call_interrupts
516201 +17.3% 605483 interrupts.CPU170.LOC:Local_timer_interrupts
334.75 ± 29% -36.1% 213.75 ± 21% interrupts.CPU170.NMI:Non-maskable_interrupts
334.75 ± 29% -36.1% 213.75 ± 21% interrupts.CPU170.PMI:Performance_monitoring_interrupts
516089 +17.3% 605414 interrupts.CPU171.LOC:Local_timer_interrupts
340.00 ± 42% -56.5% 148.00 ± 21% interrupts.CPU171.NMI:Non-maskable_interrupts
340.00 ± 42% -56.5% 148.00 ± 21% interrupts.CPU171.PMI:Performance_monitoring_interrupts
515928 +17.4% 605456 interrupts.CPU172.LOC:Local_timer_interrupts
515944 +17.3% 605313 interrupts.CPU173.LOC:Local_timer_interrupts
515836 +17.4% 605365 interrupts.CPU174.LOC:Local_timer_interrupts
304.25 ±100% -62.9% 113.00 ± 8% interrupts.CPU174.NMI:Non-maskable_interrupts
304.25 ±100% -62.9% 113.00 ± 8% interrupts.CPU174.PMI:Performance_monitoring_interrupts
515935 +17.3% 605235 interrupts.CPU175.LOC:Local_timer_interrupts
515566 +17.4% 605490 interrupts.CPU176.LOC:Local_timer_interrupts
515944 +17.3% 605365 interrupts.CPU177.LOC:Local_timer_interrupts
515917 +17.4% 605465 interrupts.CPU178.LOC:Local_timer_interrupts
515918 +17.3% 605298 interrupts.CPU179.LOC:Local_timer_interrupts
1125 ± 21% +455.4% 6248 ± 71% interrupts.CPU18.CAL:Function_call_interrupts
510435 +18.6% 605489 interrupts.CPU18.LOC:Local_timer_interrupts
515832 +17.4% 605421 interrupts.CPU180.LOC:Local_timer_interrupts
515886 +17.3% 605369 interrupts.CPU181.LOC:Local_timer_interrupts
515913 +17.3% 605307 interrupts.CPU182.LOC:Local_timer_interrupts
515907 +17.3% 605391 interrupts.CPU183.LOC:Local_timer_interrupts
515873 +17.3% 605289 interrupts.CPU184.LOC:Local_timer_interrupts
516177 +17.3% 605345 interrupts.CPU185.LOC:Local_timer_interrupts
1321 ± 9% +56.9% 2073 ± 33% interrupts.CPU186.CAL:Function_call_interrupts
515709 +17.4% 605398 interrupts.CPU186.LOC:Local_timer_interrupts
1161 ± 19% +25.2% 1453 ± 8% interrupts.CPU187.CAL:Function_call_interrupts
515995 +17.3% 605284 interrupts.CPU187.LOC:Local_timer_interrupts
515996 +17.3% 605298 interrupts.CPU188.LOC:Local_timer_interrupts
515899 +17.3% 605298 interrupts.CPU189.LOC:Local_timer_interrupts
510586 +18.6% 605784 interrupts.CPU19.LOC:Local_timer_interrupts
515909 +17.3% 605357 interrupts.CPU190.LOC:Local_timer_interrupts
1248 ± 29% +34.1% 1674 ± 7% interrupts.CPU191.CAL:Function_call_interrupts
515902 +17.3% 605378 interrupts.CPU191.LOC:Local_timer_interrupts
1299 ± 21% +1166.8% 16458 ± 80% interrupts.CPU2.CAL:Function_call_interrupts
511817 +18.3% 605535 interrupts.CPU2.LOC:Local_timer_interrupts
1134 ± 20% +419.2% 5891 ± 72% interrupts.CPU20.CAL:Function_call_interrupts
509831 +18.8% 605434 interrupts.CPU20.LOC:Local_timer_interrupts
1134 ± 22% +413.0% 5817 ± 69% interrupts.CPU21.CAL:Function_call_interrupts
510416 +18.7% 605698 interrupts.CPU21.LOC:Local_timer_interrupts
1114 ± 23% +413.3% 5720 ± 77% interrupts.CPU22.CAL:Function_call_interrupts
509905 +18.7% 605450 interrupts.CPU22.LOC:Local_timer_interrupts
1176 ± 19% +455.9% 6540 ± 77% interrupts.CPU23.CAL:Function_call_interrupts
510208 +18.8% 606161 interrupts.CPU23.LOC:Local_timer_interrupts
1535 ± 10% +78.2% 2735 ± 16% interrupts.CPU24.CAL:Function_call_interrupts
513859 +17.9% 605713 interrupts.CPU24.LOC:Local_timer_interrupts
654.75 ± 28% -71.4% 187.25 ± 7% interrupts.CPU24.NMI:Non-maskable_interrupts
654.75 ± 28% -71.4% 187.25 ± 7% interrupts.CPU24.PMI:Performance_monitoring_interrupts
1368 ± 14% +114.9% 2939 ± 32% interrupts.CPU25.CAL:Function_call_interrupts
515715 +17.5% 605936 interrupts.CPU25.LOC:Local_timer_interrupts
619.50 ± 45% -58.3% 258.50 ± 41% interrupts.CPU25.NMI:Non-maskable_interrupts
619.50 ± 45% -58.3% 258.50 ± 41% interrupts.CPU25.PMI:Performance_monitoring_interrupts
515285 +17.5% 605484 interrupts.CPU26.LOC:Local_timer_interrupts
514892 +17.6% 605366 interrupts.CPU27.LOC:Local_timer_interrupts
1157 ± 23% +66.9% 1931 ± 25% interrupts.CPU28.CAL:Function_call_interrupts
513790 +17.8% 605384 interrupts.CPU28.LOC:Local_timer_interrupts
514833 +17.6% 605296 interrupts.CPU29.LOC:Local_timer_interrupts
1215 ± 21% +1322.8% 17287 ± 86% interrupts.CPU3.CAL:Function_call_interrupts
510059 +18.7% 605325 interrupts.CPU3.LOC:Local_timer_interrupts
513997 +17.8% 605397 interrupts.CPU30.LOC:Local_timer_interrupts
513960 +17.8% 605444 interrupts.CPU31.LOC:Local_timer_interrupts
513717 +17.9% 605420 interrupts.CPU32.LOC:Local_timer_interrupts
1289 ± 19% +49.0% 1920 ± 13% interrupts.CPU33.CAL:Function_call_interrupts
513745 +17.9% 605929 interrupts.CPU33.LOC:Local_timer_interrupts
1152 ± 17% +91.7% 2208 ± 43% interrupts.CPU34.CAL:Function_call_interrupts
513887 +17.9% 605779 interrupts.CPU34.LOC:Local_timer_interrupts
1135 ± 24% +53.6% 1744 ± 28% interrupts.CPU35.CAL:Function_call_interrupts
513694 +17.8% 605333 interrupts.CPU35.LOC:Local_timer_interrupts
513746 +18.0% 606055 interrupts.CPU36.LOC:Local_timer_interrupts
514657 +17.6% 605387 interrupts.CPU37.LOC:Local_timer_interrupts
513887 +17.8% 605533 interrupts.CPU38.LOC:Local_timer_interrupts
513874 +17.8% 605393 interrupts.CPU39.LOC:Local_timer_interrupts
1186 ± 19% +1467.5% 18598 ± 87% interrupts.CPU4.CAL:Function_call_interrupts
510575 +18.6% 605409 interrupts.CPU4.LOC:Local_timer_interrupts
514437 +17.7% 605399 interrupts.CPU40.LOC:Local_timer_interrupts
513922 +17.8% 605488 interrupts.CPU41.LOC:Local_timer_interrupts
513643 +17.9% 605362 interrupts.CPU42.LOC:Local_timer_interrupts
514155 +17.8% 605708 interrupts.CPU43.LOC:Local_timer_interrupts
1097 ± 24% +91.0% 2096 ± 32% interrupts.CPU44.CAL:Function_call_interrupts
513654 +18.0% 606048 interrupts.CPU44.LOC:Local_timer_interrupts
513702 +17.9% 605573 interrupts.CPU45.LOC:Local_timer_interrupts
513557 +17.9% 605360 interrupts.CPU46.LOC:Local_timer_interrupts
514556 +17.7% 605738 interrupts.CPU47.LOC:Local_timer_interrupts
1699 ± 36% +759.3% 14598 ± 79% interrupts.CPU48.CAL:Function_call_interrupts
514869 +17.6% 605464 interrupts.CPU48.LOC:Local_timer_interrupts
1343 ± 14% +1160.3% 16932 ± 86% interrupts.CPU49.CAL:Function_call_interrupts
513305 +18.0% 605464 interrupts.CPU49.LOC:Local_timer_interrupts
1227 ± 19% +1401.3% 18432 ± 89% interrupts.CPU5.CAL:Function_call_interrupts
510871 +18.5% 605627 interrupts.CPU5.LOC:Local_timer_interrupts
515088 +17.5% 605376 interrupts.CPU50.LOC:Local_timer_interrupts
1204 ± 17% +1405.3% 18127 ± 92% interrupts.CPU51.CAL:Function_call_interrupts
512351 +18.2% 605549 interrupts.CPU51.LOC:Local_timer_interrupts
513855 +17.8% 605391 interrupts.CPU52.LOC:Local_timer_interrupts
514932 +17.6% 605560 interrupts.CPU53.LOC:Local_timer_interrupts
511184 +18.5% 605528 interrupts.CPU54.LOC:Local_timer_interrupts
1153 ± 16% +1670.5% 20423 ± 93% interrupts.CPU55.CAL:Function_call_interrupts
511641 +18.3% 605473 interrupts.CPU55.LOC:Local_timer_interrupts
511233 +18.5% 605581 interrupts.CPU56.LOC:Local_timer_interrupts
107.75 ± 11% +300.2% 431.25 ± 60% interrupts.CPU56.NMI:Non-maskable_interrupts
107.75 ± 11% +300.2% 431.25 ± 60% interrupts.CPU56.PMI:Performance_monitoring_interrupts
61.50 ±164% +11461.4% 7110 ± 99% interrupts.CPU56.RES:Rescheduling_interrupts
1107 ± 23% +1664.4% 19541 ± 91% interrupts.CPU57.CAL:Function_call_interrupts
510795 +18.6% 605563 interrupts.CPU57.LOC:Local_timer_interrupts
3.25 ± 50% +2.2e+05% 7281 ± 99% interrupts.CPU57.RES:Rescheduling_interrupts
1119 ± 23% +1660.7% 19710 ± 91% interrupts.CPU58.CAL:Function_call_interrupts
511234 +18.5% 605882 interrupts.CPU58.LOC:Local_timer_interrupts
5.25 ± 51% +1.4e+05% 7342 ±100% interrupts.CPU58.RES:Rescheduling_interrupts
1117 ± 21% +1541.4% 18342 ± 92% interrupts.CPU59.CAL:Function_call_interrupts
511755 +18.4% 605662 interrupts.CPU59.LOC:Local_timer_interrupts
1283 ± 23% +1341.6% 18502 ± 80% interrupts.CPU6.CAL:Function_call_interrupts
513323 +18.0% 605537 interrupts.CPU6.LOC:Local_timer_interrupts
512545 +18.1% 605561 interrupts.CPU60.LOC:Local_timer_interrupts
1135 ± 18% +1505.6% 18228 ± 93% interrupts.CPU61.CAL:Function_call_interrupts
511576 +18.3% 605425 interrupts.CPU61.LOC:Local_timer_interrupts
1414 ± 25% +1110.5% 17119 ± 91% interrupts.CPU62.CAL:Function_call_interrupts
512827 +18.1% 605436 interrupts.CPU62.LOC:Local_timer_interrupts
1126 ± 21% +1188.2% 14514 ± 90% interrupts.CPU63.CAL:Function_call_interrupts
511174 +18.5% 605588 interrupts.CPU63.LOC:Local_timer_interrupts
1130 ± 22% +294.1% 4456 ± 71% interrupts.CPU64.CAL:Function_call_interrupts
511557 +18.4% 605728 interrupts.CPU64.LOC:Local_timer_interrupts
510978 +18.6% 605824 interrupts.CPU65.LOC:Local_timer_interrupts
510755 +18.6% 605917 interrupts.CPU66.LOC:Local_timer_interrupts
2.25 ±101% +2.8e+05% 6412 ±100% interrupts.CPU66.RES:Rescheduling_interrupts
511092 +18.5% 605580 interrupts.CPU67.LOC:Local_timer_interrupts
1147 ± 18% +389.3% 5613 ± 68% interrupts.CPU68.CAL:Function_call_interrupts
510789 +18.5% 605511 interrupts.CPU68.LOC:Local_timer_interrupts
2.75 ±107% +2.5e+05% 6913 ±100% interrupts.CPU68.RES:Rescheduling_interrupts
510770 +18.6% 605649 interrupts.CPU69.LOC:Local_timer_interrupts
1271 ± 21% +1246.2% 17116 ± 87% interrupts.CPU7.CAL:Function_call_interrupts
513454 +17.9% 605531 interrupts.CPU7.LOC:Local_timer_interrupts
516.25 ± 44% -55.2% 231.25 ± 42% interrupts.CPU7.NMI:Non-maskable_interrupts
516.25 ± 44% -55.2% 231.25 ± 42% interrupts.CPU7.PMI:Performance_monitoring_interrupts
510835 +18.6% 606082 interrupts.CPU70.LOC:Local_timer_interrupts
1120 ± 21% +392.4% 5518 ± 61% interrupts.CPU71.CAL:Function_call_interrupts
511083 +18.5% 605658 interrupts.CPU71.LOC:Local_timer_interrupts
2.25 ± 96% +2.7e+05% 6041 ±100% interrupts.CPU71.RES:Rescheduling_interrupts
1325 ± 26% +142.0% 3207 ± 11% interrupts.CPU72.CAL:Function_call_interrupts
515998 +17.3% 605229 interrupts.CPU72.LOC:Local_timer_interrupts
1280 ± 24% +69.5% 2171 ± 16% interrupts.CPU73.CAL:Function_call_interrupts
516149 +17.3% 605586 interrupts.CPU73.LOC:Local_timer_interrupts
498.50 ± 49% -60.9% 194.75 ± 40% interrupts.CPU73.NMI:Non-maskable_interrupts
498.50 ± 49% -60.9% 194.75 ± 40% interrupts.CPU73.PMI:Performance_monitoring_interrupts
1383 ± 39% +63.0% 2255 ± 23% interrupts.CPU74.CAL:Function_call_interrupts
516376 +17.2% 605343 interrupts.CPU74.LOC:Local_timer_interrupts
516271 +17.3% 605661 interrupts.CPU75.LOC:Local_timer_interrupts
516310 +17.3% 605650 interrupts.CPU76.LOC:Local_timer_interrupts
516172 +17.3% 605330 interrupts.CPU77.LOC:Local_timer_interrupts
515955 +17.4% 605572 interrupts.CPU78.LOC:Local_timer_interrupts
515907 +17.4% 605833 interrupts.CPU79.LOC:Local_timer_interrupts
1220 ± 17% +1376.1% 18019 ± 87% interrupts.CPU8.CAL:Function_call_interrupts
512148 +18.3% 605670 interrupts.CPU8.LOC:Local_timer_interrupts
515903 +17.3% 605409 interrupts.CPU80.LOC:Local_timer_interrupts
515934 +17.3% 605309 interrupts.CPU81.LOC:Local_timer_interrupts
515929 +17.5% 606412 interrupts.CPU82.LOC:Local_timer_interrupts
516037 +17.4% 606044 interrupts.CPU83.LOC:Local_timer_interrupts
515996 +17.4% 605807 interrupts.CPU84.LOC:Local_timer_interrupts
515950 +17.4% 605519 interrupts.CPU85.LOC:Local_timer_interrupts
516261 +17.3% 605483 interrupts.CPU86.LOC:Local_timer_interrupts
516222 +17.3% 605533 interrupts.CPU87.LOC:Local_timer_interrupts
516245 +17.3% 605439 interrupts.CPU88.LOC:Local_timer_interrupts
516333 +17.4% 606092 interrupts.CPU89.LOC:Local_timer_interrupts
313.50 ± 39% -69.5% 95.75 ± 25% interrupts.CPU89.NMI:Non-maskable_interrupts
313.50 ± 39% -69.5% 95.75 ± 25% interrupts.CPU89.PMI:Performance_monitoring_interrupts
511049 +18.5% 605658 interrupts.CPU9.LOC:Local_timer_interrupts
516502 +17.2% 605354 interrupts.CPU90.LOC:Local_timer_interrupts
516464 +17.3% 605747 interrupts.CPU91.LOC:Local_timer_interrupts
516253 +17.4% 606137 interrupts.CPU92.LOC:Local_timer_interrupts
516158 +17.3% 605298 interrupts.CPU93.LOC:Local_timer_interrupts
515995 +17.4% 605674 interrupts.CPU94.LOC:Local_timer_interrupts
516304 +17.2% 605311 interrupts.CPU95.LOC:Local_timer_interrupts
513802 +17.8% 605397 interrupts.CPU96.LOC:Local_timer_interrupts
512011 +18.2% 605447 interrupts.CPU97.LOC:Local_timer_interrupts
511607 +18.3% 605448 interrupts.CPU98.LOC:Local_timer_interrupts
509867 +18.7% 605442 interrupts.CPU99.LOC:Local_timer_interrupts
98572095 +17.9% 1.163e+08 interrupts.LOC:Local_timer_interrupts
0.00 +1.9e+104% 192.00 interrupts.MCP:Machine_check_polls
18832 ± 4% +28724.8% 5428424 ± 3% interrupts.RES:Rescheduling_interrupts
24593 ± 14% +103.8% 50109 ± 2% softirqs.CPU0.RCU
20767 ± 7% +119.9% 45671 ± 5% softirqs.CPU1.RCU
21282 ± 46% +60.4% 34135 ± 18% softirqs.CPU1.SCHED
21727 ± 7% +115.9% 46904 ± 7% softirqs.CPU10.RCU
19127 ± 15% +133.1% 44585 ± 4% softirqs.CPU100.RCU
32966 ± 4% +15.6% 38122 ± 6% softirqs.CPU100.SCHED
19577 ± 15% +131.0% 45223 ± 2% softirqs.CPU101.RCU
27027 ± 31% +48.0% 40011 ± 2% softirqs.CPU101.SCHED
20058 ± 13% +121.1% 44343 ± 4% softirqs.CPU102.RCU
19259 ± 13% +132.0% 44674 ± 3% softirqs.CPU103.RCU
19621 ± 14% +130.5% 45232 ± 3% softirqs.CPU104.RCU
18948 ± 17% +137.8% 45063 ± 3% softirqs.CPU105.RCU
33071 ± 4% +16.4% 38506 ± 6% softirqs.CPU105.SCHED
19429 ± 16% +127.5% 44201 ± 2% softirqs.CPU106.RCU
29852 ± 20% +29.3% 38611 ± 5% softirqs.CPU106.SCHED
19791 ± 12% +131.1% 45743 softirqs.CPU107.RCU
32969 ± 4% +18.2% 38982 ± 6% softirqs.CPU107.SCHED
19517 ± 16% +132.4% 45355 softirqs.CPU108.RCU
32768 ± 5% +15.5% 37857 ± 8% softirqs.CPU108.SCHED
19232 ± 15% +132.6% 44727 ± 3% softirqs.CPU109.RCU
32108 ± 7% +24.0% 39802 softirqs.CPU109.SCHED
21436 ± 9% +124.3% 48089 ± 7% softirqs.CPU11.RCU
19168 ± 16% +136.7% 45370 ± 4% softirqs.CPU110.RCU
32633 ± 5% +22.0% 39828 ± 2% softirqs.CPU110.SCHED
19226 ± 16% +131.4% 44486 ± 3% softirqs.CPU111.RCU
31876 ± 9% +22.5% 39037 ± 7% softirqs.CPU111.SCHED
18752 ± 9% +125.1% 42214 ± 5% softirqs.CPU112.RCU
33139 ± 4% +19.3% 39537 ± 3% softirqs.CPU112.SCHED
18571 ± 11% +126.2% 42004 ± 3% softirqs.CPU113.RCU
33130 ± 4% +20.6% 39943 ± 2% softirqs.CPU113.SCHED
18861 ± 13% +124.8% 42409 ± 5% softirqs.CPU114.RCU
32611 ± 6% +21.7% 39691 ± 2% softirqs.CPU114.SCHED
18161 ± 14% +125.1% 40883 ± 4% softirqs.CPU115.RCU
32644 ± 5% +20.1% 39212 ± 2% softirqs.CPU115.SCHED
18222 ± 14% +131.3% 42152 ± 4% softirqs.CPU116.RCU
33025 ± 4% +20.9% 39938 softirqs.CPU116.SCHED
18640 ± 10% +125.3% 41999 ± 4% softirqs.CPU117.RCU
33222 ± 4% +18.9% 39496 ± 3% softirqs.CPU117.SCHED
18294 ± 13% +130.0% 42068 ± 3% softirqs.CPU118.RCU
33096 ± 4% +19.3% 39468 ± 3% softirqs.CPU118.SCHED
18313 ± 11% +134.4% 42927 ± 5% softirqs.CPU119.RCU
21114 ± 6% +127.6% 48057 ± 9% softirqs.CPU12.RCU
19537 ± 8% +128.5% 44642 ± 5% softirqs.CPU120.RCU
19009 ± 10% +128.4% 43409 ± 6% softirqs.CPU121.RCU
30437 ± 20% +33.8% 40736 ± 3% softirqs.CPU121.SCHED
18764 ± 10% +127.0% 42605 ± 6% softirqs.CPU122.RCU
34261 ± 2% +18.2% 40503 ± 3% softirqs.CPU122.SCHED
19388 ± 10% +123.6% 43346 ± 5% softirqs.CPU123.RCU
34412 ± 2% +18.0% 40608 ± 3% softirqs.CPU123.SCHED
18814 ± 9% +131.7% 43588 ± 7% softirqs.CPU124.RCU
34324 ± 2% +18.7% 40736 ± 3% softirqs.CPU124.SCHED
18457 ± 11% +129.0% 42271 ± 5% softirqs.CPU125.RCU
34462 ± 2% +18.4% 40809 ± 3% softirqs.CPU125.SCHED
18867 ± 9% +126.5% 42733 ± 7% softirqs.CPU126.RCU
34378 ± 2% +18.2% 40624 ± 3% softirqs.CPU126.SCHED
18279 ± 10% +133.8% 42730 ± 6% softirqs.CPU127.RCU
34155 ± 3% +18.4% 40435 ± 3% softirqs.CPU127.SCHED
21170 ± 12% +120.8% 46755 ± 8% softirqs.CPU128.RCU
34108 ± 3% +18.6% 40454 ± 3% softirqs.CPU128.SCHED
21281 ± 9% +121.3% 47090 ± 8% softirqs.CPU129.RCU
34169 ± 2% +18.4% 40450 ± 3% softirqs.CPU129.SCHED
21165 ± 7% +130.3% 48734 ± 7% softirqs.CPU13.RCU
21353 ± 10% +121.3% 47265 ± 7% softirqs.CPU130.RCU
34224 ± 2% +17.9% 40342 ± 3% softirqs.CPU130.SCHED
20225 ± 20% +138.3% 48201 ± 7% softirqs.CPU131.RCU
35035 ± 2% +15.7% 40553 ± 3% softirqs.CPU131.SCHED
21975 ± 8% +119.0% 48118 ± 7% softirqs.CPU132.RCU
34274 ± 2% +18.4% 40576 ± 3% softirqs.CPU132.SCHED
21462 ± 10% +123.6% 47991 ± 8% softirqs.CPU133.RCU
34187 ± 2% +18.6% 40544 ± 3% softirqs.CPU133.SCHED
21054 ± 9% +123.3% 47021 ± 9% softirqs.CPU134.RCU
34105 ± 2% +18.4% 40375 ± 3% softirqs.CPU134.SCHED
21711 ± 9% +117.8% 47297 ± 7% softirqs.CPU135.RCU
30667 ± 20% +32.0% 40488 ± 3% softirqs.CPU135.SCHED
21637 ± 10% +121.7% 47963 ± 7% softirqs.CPU136.RCU
34521 ± 3% +18.0% 40725 ± 3% softirqs.CPU136.SCHED
21523 ± 12% +124.0% 48201 ± 6% softirqs.CPU137.RCU
34201 ± 2% +18.8% 40631 ± 3% softirqs.CPU137.SCHED
21772 ± 11% +117.1% 47279 ± 7% softirqs.CPU138.RCU
34240 ± 2% +18.0% 40419 ± 3% softirqs.CPU138.SCHED
21384 ± 11% +120.6% 47169 ± 8% softirqs.CPU139.RCU
34207 ? 2% +18.5% 40519 ? 3% softirqs.CPU139.SCHED
21089 ? 8% +132.6% 49060 ? 7% softirqs.CPU14.RCU
21446 ? 12% +117.5% 46641 ? 7% softirqs.CPU140.RCU
34283 ? 2% +17.9% 40413 ? 3% softirqs.CPU140.SCHED
21394 ? 11% +122.1% 47526 ? 7% softirqs.CPU141.RCU
34262 ? 2% +18.2% 40484 ? 3% softirqs.CPU141.SCHED
21510 ? 12% +122.5% 47856 ? 7% softirqs.CPU142.RCU
34272 ? 2% +18.4% 40571 ? 3% softirqs.CPU142.SCHED
21794 ? 11% +119.3% 47806 ? 7% softirqs.CPU143.RCU
34379 ? 2% +17.7% 40464 ? 3% softirqs.CPU143.SCHED
22081 ? 8% +107.0% 45701 ? 12% softirqs.CPU144.RCU
21204 ? 7% +112.7% 45101 ? 11% softirqs.CPU145.RCU
21354 ? 8% +111.7% 45201 ? 11% softirqs.CPU146.RCU
34494 ? 3% +12.4% 38783 ? 3% softirqs.CPU146.SCHED
22014 ? 11% +106.0% 45348 ? 11% softirqs.CPU147.RCU
21731 ± 8% +110.1% 45650 ± 12% softirqs.CPU148.RCU
21498 ± 9% +111.3% 45427 ± 11% softirqs.CPU149.RCU
22409 ± 5% +116.3% 48480 ± 9% softirqs.CPU15.RCU
14902 ± 81% +93.8% 28882 ± 39% softirqs.CPU15.SCHED
21817 ± 9% +109.0% 45604 ± 11% softirqs.CPU150.RCU
21296 ± 10% +111.0% 44935 ± 10% softirqs.CPU151.RCU
21265 ± 10% +110.4% 44740 ± 10% softirqs.CPU152.RCU
21349 ± 10% +110.1% 44847 ± 12% softirqs.CPU153.RCU
21748 ± 10% +109.3% 45530 ± 11% softirqs.CPU154.RCU
21913 ± 9% +106.8% 45320 ± 11% softirqs.CPU155.RCU
22080 ± 9% +106.3% 45544 ± 11% softirqs.CPU156.RCU
21072 ± 9% +111.5% 44572 ± 11% softirqs.CPU157.RCU
34333 ± 2% +11.0% 38100 ± 5% softirqs.CPU157.SCHED
20975 ± 9% +108.3% 43687 ± 11% softirqs.CPU158.RCU
34232 ± 2% +11.5% 38183 ± 4% softirqs.CPU158.SCHED
20796 ± 13% +114.4% 44584 ± 11% softirqs.CPU159.RCU
34381 ± 3% +14.5% 39356 ± 2% softirqs.CPU159.SCHED
19673 ± 4% +122.2% 43719 ± 4% softirqs.CPU16.RCU
19575 ± 5% +109.4% 40996 ± 13% softirqs.CPU160.RCU
34469 ± 2% +14.7% 39527 ± 2% softirqs.CPU160.SCHED
20128 ± 6% +106.8% 41620 ± 12% softirqs.CPU161.RCU
34494 ± 2% +14.2% 39401 ± 2% softirqs.CPU161.SCHED
20309 ± 6% +106.9% 42026 ± 12% softirqs.CPU162.RCU
34472 ± 2% +12.9% 38928 ± 3% softirqs.CPU162.SCHED
20127 ± 5% +110.0% 42274 ± 14% softirqs.CPU163.RCU
34424 ± 2% +12.9% 38880 ± 4% softirqs.CPU163.SCHED
20556 ± 8% +105.5% 42253 ± 13% softirqs.CPU164.RCU
34389 ± 3% +15.5% 39725 ± 2% softirqs.CPU164.SCHED
20009 ± 6% +106.5% 41325 ± 13% softirqs.CPU165.RCU
34500 ± 2% +14.1% 39380 ± 3% softirqs.CPU165.SCHED
20229 ± 6% +108.4% 42151 ± 11% softirqs.CPU166.RCU
34465 ± 2% +14.5% 39463 ± 2% softirqs.CPU166.SCHED
20114 ± 6% +107.4% 41721 ± 13% softirqs.CPU167.RCU
34473 ± 2% +14.3% 39391 ± 2% softirqs.CPU167.SCHED
19291 ± 5% +118.3% 42117 ± 9% softirqs.CPU168.RCU
18811 ± 4% +116.9% 40810 ± 9% softirqs.CPU169.RCU
33446 ± 3% +21.1% 40487 ± 2% softirqs.CPU169.SCHED
19397 ± 9% +126.7% 43977 ± 3% softirqs.CPU17.RCU
29414 ± 18% +34.8% 39648 ± 2% softirqs.CPU17.SCHED
19066 ± 3% +117.2% 41415 ± 8% softirqs.CPU170.RCU
19220 ± 3% +115.5% 41424 ± 6% softirqs.CPU171.RCU
18493 ± 4% +118.4% 40385 ± 8% softirqs.CPU172.RCU
33551 ± 2% +20.4% 40412 ± 2% softirqs.CPU172.SCHED
18962 ± 6% +117.2% 41185 ± 10% softirqs.CPU173.RCU
33688 ± 2% +20.6% 40620 ± 2% softirqs.CPU173.SCHED
18847 ± 4% +117.1% 40924 ± 8% softirqs.CPU174.RCU
32749 ± 7% +24.3% 40700 ± 2% softirqs.CPU174.SCHED
18706 ± 5% +119.4% 41049 ± 8% softirqs.CPU175.RCU
33451 ± 3% +21.6% 40667 ± 2% softirqs.CPU175.SCHED
20138 ± 5% +121.5% 44614 ± 5% softirqs.CPU176.RCU
33630 ± 2% +21.1% 40714 ± 3% softirqs.CPU176.SCHED
20307 ± 6% +124.4% 45565 ± 5% softirqs.CPU177.RCU
20210 ± 5% +121.3% 44725 ± 5% softirqs.CPU178.RCU
33463 ± 3% +21.1% 40524 ± 3% softirqs.CPU178.SCHED
20060 ± 5% +123.6% 44863 ± 5% softirqs.CPU179.RCU
33449 ± 3% +21.4% 40607 ± 2% softirqs.CPU179.SCHED
19239 ± 11% +130.5% 44347 ± 4% softirqs.CPU18.RCU
32081 ± 6% +23.7% 39687 softirqs.CPU18.SCHED
19882 ± 5% +122.4% 44224 ± 6% softirqs.CPU180.RCU
33050 ± 4% +21.6% 40189 ± 3% softirqs.CPU180.SCHED
19705 ± 5% +123.3% 44010 ± 6% softirqs.CPU181.RCU
32993 ± 4% +21.8% 40173 ± 3% softirqs.CPU181.SCHED
20406 ± 6% +118.9% 44661 ± 5% softirqs.CPU182.RCU
33538 ± 4% +20.6% 40448 ± 2% softirqs.CPU182.SCHED
20334 ± 5% +120.9% 44912 ± 5% softirqs.CPU183.RCU
33533 ± 3% +20.7% 40474 ± 2% softirqs.CPU183.SCHED
20526 ± 6% +119.7% 45086 ± 5% softirqs.CPU184.RCU
33453 ± 3% +21.3% 40581 ± 2% softirqs.CPU184.SCHED
20277 ± 4% +126.0% 45818 ± 8% softirqs.CPU185.RCU
33871 ± 2% +19.6% 40513 ± 2% softirqs.CPU185.SCHED
20432 ± 3% +108.4% 42581 ± 14% softirqs.CPU186.RCU
33469 ± 3% +23.6% 41373 softirqs.CPU186.SCHED
20139 ± 5% +123.8% 45066 ± 5% softirqs.CPU187.RCU
33367 ± 3% +21.1% 40417 ± 3% softirqs.CPU187.SCHED
20437 ± 4% +117.8% 44508 ± 5% softirqs.CPU188.RCU
28880 ± 29% +39.2% 40207 ± 4% softirqs.CPU188.SCHED
20088 ± 5% +123.2% 44842 ± 7% softirqs.CPU189.RCU
33280 ± 3% +21.7% 40511 ± 3% softirqs.CPU189.SCHED
18485 ± 12% +125.9% 41754 ± 5% softirqs.CPU19.RCU
33600 ± 5% +16.6% 39178 ± 2% softirqs.CPU19.SCHED
20316 ± 5% +120.0% 44689 ± 6% softirqs.CPU190.RCU
33252 ± 4% +21.5% 40406 ± 3% softirqs.CPU190.SCHED
20199 ± 4% +122.6% 44967 ± 5% softirqs.CPU191.RCU
33567 ± 2% +17.2% 39326 ± 5% softirqs.CPU191.SCHED
21749 ± 9% +122.2% 48331 ± 6% softirqs.CPU2.RCU
20410 ± 47% +64.2% 33512 ± 20% softirqs.CPU2.SCHED
19165 ± 10% +126.1% 43335 ± 4% softirqs.CPU20.RCU
33501 ± 4% +17.5% 39351 ± 2% softirqs.CPU20.SCHED
18974 ± 9% +130.0% 43644 ± 3% softirqs.CPU21.RCU
33673 ± 3% +17.3% 39510 ± 2% softirqs.CPU21.SCHED
19311 ± 11% +123.9% 43243 ± 4% softirqs.CPU22.RCU
32454 ± 7% +21.2% 39319 ± 2% softirqs.CPU22.SCHED
20205 ± 12% +114.6% 43369 ± 4% softirqs.CPU23.RCU
32717 ± 5% +21.1% 39623 softirqs.CPU23.SCHED
20750 ± 7% +119.6% 45563 ± 5% softirqs.CPU24.RCU
20136 ± 12% +118.9% 44076 ± 6% softirqs.CPU25.RCU
30631 ± 16% +31.3% 40231 ± 3% softirqs.CPU25.SCHED
19508 ± 10% +123.3% 43554 ± 5% softirqs.CPU26.RCU
33343 ± 2% +21.8% 40603 ± 3% softirqs.CPU26.SCHED
19461 ± 10% +123.0% 43398 ± 4% softirqs.CPU27.RCU
34046 ± 3% +18.4% 40309 ± 3% softirqs.CPU27.SCHED
19034 ± 11% +128.3% 43462 ± 5% softirqs.CPU28.RCU
33842 ± 3% +19.6% 40485 ± 2% softirqs.CPU28.SCHED
19568 ± 9% +120.8% 43207 ± 5% softirqs.CPU29.RCU
34076 ± 3% +17.9% 40180 ± 3% softirqs.CPU29.SCHED
22102 ± 7% +121.1% 48876 ± 6% softirqs.CPU3.RCU
11352 ± 4% +179.4% 31715 ± 28% softirqs.CPU3.SCHED
19397 ± 11% +121.7% 43004 ± 5% softirqs.CPU30.RCU
34011 ± 3% +18.1% 40155 ± 3% softirqs.CPU30.SCHED
19813 ± 10% +116.7% 42936 ± 5% softirqs.CPU31.RCU
28877 ± 28% +38.9% 40106 ± 3% softirqs.CPU31.SCHED
22985 ± 7% +113.7% 49109 ± 7% softirqs.CPU32.RCU
33988 ± 3% +17.8% 40039 ± 3% softirqs.CPU32.SCHED
22657 ± 9% +112.3% 48097 ± 8% softirqs.CPU33.RCU
33971 ± 3% +18.3% 40201 ± 3% softirqs.CPU33.SCHED
22744 ± 8% +116.9% 49325 ± 6% softirqs.CPU34.RCU
33842 ± 3% +18.6% 40153 ± 3% softirqs.CPU34.SCHED
22016 ± 12% +127.9% 50172 ± 7% softirqs.CPU35.RCU
33824 ± 3% +19.0% 40256 ± 3% softirqs.CPU35.SCHED
22803 ± 9% +117.7% 49644 ± 7% softirqs.CPU36.RCU
33848 ± 3% +18.7% 40192 ± 3% softirqs.CPU36.SCHED
22024 ± 7% +123.5% 49219 ± 7% softirqs.CPU37.RCU
33798 ± 3% +14.8% 38789 ± 5% softirqs.CPU37.SCHED
22554 ± 6% +118.9% 49362 ± 8% softirqs.CPU38.RCU
33803 ± 3% +18.6% 40075 ± 3% softirqs.CPU38.SCHED
22068 ± 10% +121.3% 48832 ± 7% softirqs.CPU39.RCU
22089 ± 6% +119.8% 48551 ± 7% softirqs.CPU4.RCU
22095 ± 6% +123.8% 49458 ± 6% softirqs.CPU40.RCU
33886 ± 3% +18.7% 40212 ± 3% softirqs.CPU40.SCHED
22898 ± 9% +117.3% 49761 ± 6% softirqs.CPU41.RCU
33889 ± 3% +18.4% 40126 ± 3% softirqs.CPU41.SCHED
22668 ± 8% +118.9% 49630 ± 7% softirqs.CPU42.RCU
33918 ± 3% +18.4% 40156 ± 3% softirqs.CPU42.SCHED
22911 ± 9% +116.9% 49699 ± 8% softirqs.CPU43.RCU
28225 ± 32% +41.9% 40046 ± 3% softirqs.CPU43.SCHED
21873 ± 9% +124.1% 49008 ± 7% softirqs.CPU44.RCU
33829 ± 3% +19.0% 40251 ± 3% softirqs.CPU44.SCHED
22436 ± 7% +117.4% 48777 ± 8% softirqs.CPU45.RCU
33869 ± 3% +18.3% 40071 ± 3% softirqs.CPU45.SCHED
22360 ± 10% +119.4% 49066 ± 7% softirqs.CPU46.RCU
33855 ± 3% +18.6% 40139 ± 3% softirqs.CPU46.SCHED
22573 ± 8% +119.2% 49487 ± 8% softirqs.CPU47.RCU
33944 ± 3% +18.7% 40308 ± 3% softirqs.CPU47.SCHED
23683 ± 8% +107.2% 49070 ± 10% softirqs.CPU48.RCU
22691 ± 8% +112.6% 48236 ± 9% softirqs.CPU49.RCU
21696 ± 7% +123.7% 48539 ± 7% softirqs.CPU5.RCU
22538 ± 9% +115.6% 48603 ± 8% softirqs.CPU50.RCU
22540 ± 9% +117.4% 48995 ± 8% softirqs.CPU51.RCU
22994 ± 12% +113.7% 49148 ± 8% softirqs.CPU52.RCU
22465 ± 8% +121.5% 49768 ± 7% softirqs.CPU53.RCU
22513 ± 9% +120.3% 49589 ± 8% softirqs.CPU54.RCU
22411 ± 10% +120.3% 49378 ± 9% softirqs.CPU55.RCU
21931 ± 10% +124.4% 49215 ± 7% softirqs.CPU56.RCU
22160 ± 10% +120.3% 48823 ± 9% softirqs.CPU57.RCU
22106 ± 8% +123.4% 49379 ± 7% softirqs.CPU58.RCU
21863 ± 9% +126.7% 49571 ± 8% softirqs.CPU59.RCU
21951 ± 7% +108.2% 45699 ± 10% softirqs.CPU6.RCU
22433 ± 10% +121.4% 49674 ± 8% softirqs.CPU60.RCU
21930 ± 8% +122.4% 48767 ± 7% softirqs.CPU61.RCU
22191 ± 9% +118.4% 48457 ± 7% softirqs.CPU62.RCU
20572 ± 19% +137.3% 48814 ± 7% softirqs.CPU63.RCU
20606 ± 3% +106.0% 42440 ± 14% softirqs.CPU64.RCU
34281 ± 3% +15.6% 39616 ± 4% softirqs.CPU64.SCHED
20639 ± 6% +108.0% 42919 ± 11% softirqs.CPU65.RCU
34090 ± 3% +13.8% 38778 ± 2% softirqs.CPU65.SCHED
20711 ± 5% +107.9% 43068 ± 12% softirqs.CPU66.RCU
34059 ± 3% +13.5% 38643 ± 3% softirqs.CPU66.SCHED
20714 ± 6% +108.8% 43250 ± 12% softirqs.CPU67.RCU
33981 ± 3% +14.3% 38855 ± 3% softirqs.CPU67.SCHED
21132 ± 6% +104.5% 43216 ± 13% softirqs.CPU68.RCU
34152 ± 3% +13.9% 38908 ± 3% softirqs.CPU68.SCHED
20870 ± 6% +105.5% 42895 ± 12% softirqs.CPU69.RCU
34035 ± 3% +13.6% 38649 ± 3% softirqs.CPU69.SCHED
22131 ± 11% +118.0% 48247 ± 6% softirqs.CPU7.RCU
20514 ± 6% +110.1% 43090 ± 11% softirqs.CPU70.RCU
34016 ± 3% +14.5% 38957 ± 2% softirqs.CPU70.SCHED
20526 ± 5% +102.7% 41599 ± 15% softirqs.CPU71.RCU
32989 ± 7% +21.2% 39982 ± 4% softirqs.CPU71.SCHED
20770 ± 5% +119.3% 45551 ± 8% softirqs.CPU72.RCU
19814 ± 6% +118.6% 43309 ± 7% softirqs.CPU73.RCU
20012 ± 4% +112.5% 42519 ± 7% softirqs.CPU74.RCU
29526 ± 24% +36.3% 40247 ± 3% softirqs.CPU74.SCHED
19752 ± 6% +113.7% 42217 ± 8% softirqs.CPU75.RCU
28565 ± 30% +40.8% 40210 ± 3% softirqs.CPU75.SCHED
19667 ± 3% +112.2% 41740 ± 8% softirqs.CPU76.RCU
27517 ± 38% +44.8% 39842 ± 3% softirqs.CPU76.SCHED
19604 ± 4% +112.2% 41597 ± 8% softirqs.CPU77.RCU
27223 ± 40% +46.7% 39935 ± 3% softirqs.CPU77.SCHED
20148 +107.4% 41792 ± 8% softirqs.CPU78.RCU
28971 ± 27% +37.9% 39960 ± 3% softirqs.CPU78.SCHED
19633 ± 5% +113.1% 41846 ± 8% softirqs.CPU79.RCU
29302 ± 25% +36.4% 39968 ± 3% softirqs.CPU79.SCHED
21966 ± 9% +123.5% 49088 ± 7% softirqs.CPU8.RCU
20793 ± 6% +116.5% 45026 ± 4% softirqs.CPU80.RCU
27598 ± 37% +44.6% 39897 ± 3% softirqs.CPU80.SCHED
20668 ± 5% +120.7% 45615 ± 5% softirqs.CPU81.RCU
29054 ± 26% +37.3% 39903 ± 3% softirqs.CPU81.SCHED
20726 ± 5% +119.1% 45421 ± 5% softirqs.CPU82.RCU
28499 ± 30% +40.2% 39960 ± 3% softirqs.CPU82.SCHED
19390 ± 8% +134.3% 45429 ± 4% softirqs.CPU83.RCU
30213 ± 19% +32.5% 40043 ± 2% softirqs.CPU83.SCHED
20393 ± 4% +122.6% 45388 ± 5% softirqs.CPU84.RCU
30765 ± 14% +28.9% 39667 ± 3% softirqs.CPU84.SCHED
20725 ± 2% +118.5% 45293 ± 5% softirqs.CPU85.RCU
27760 ± 35% +43.3% 39776 ± 3% softirqs.CPU85.SCHED
20580 ± 5% +121.4% 45554 ± 5% softirqs.CPU86.RCU
20718 ± 5% +118.7% 45301 ± 5% softirqs.CPU87.RCU
26915 ± 42% +48.6% 40003 ± 3% softirqs.CPU87.SCHED
20763 ± 4% +120.2% 45728 ± 5% softirqs.CPU88.RCU
30810 ± 15% +29.3% 39842 ± 3% softirqs.CPU88.SCHED
21289 ± 2% +111.8% 45082 ± 4% softirqs.CPU89.RCU
33487 ± 2% +19.0% 39835 ± 3% softirqs.CPU89.SCHED
21579 ± 9% +123.8% 48300 ± 8% softirqs.CPU9.RCU
20400 ± 3% +120.9% 45058 ± 5% softirqs.CPU90.RCU
20371 ± 4% +126.7% 46184 ± 4% softirqs.CPU91.RCU
32988 ± 3% +21.4% 40061 ± 3% softirqs.CPU91.SCHED
20263 ± 3% +122.6% 45097 ± 4% softirqs.CPU92.RCU
33192 ± 3% +20.0% 39828 ± 4% softirqs.CPU92.SCHED
20239 ± 5% +123.1% 45150 ± 4% softirqs.CPU93.RCU
32978 ± 3% +21.0% 39900 ± 3% softirqs.CPU93.SCHED
20446 ± 5% +131.9% 47421 ± 8% softirqs.CPU94.RCU
33136 ± 4% +20.8% 40016 ± 3% softirqs.CPU94.SCHED
20498 ± 5% +126.6% 46446 ± 4% softirqs.CPU95.RCU
19594 ± 16% +130.5% 45172 ± 3% softirqs.CPU96.RCU
18967 ± 11% +130.5% 43724 ± 4% softirqs.CPU97.RCU
33298 ± 6% +14.7% 38208 ± 5% softirqs.CPU97.SCHED
19122 ± 17% +132.9% 44527 ± 3% softirqs.CPU98.RCU
19392 ± 17% +131.6% 44907 ± 4% softirqs.CPU99.RCU
32791 ± 4% +20.8% 39609 ± 3% softirqs.CPU99.SCHED
15568 ± 14% +109.9% 32681 ± 45% softirqs.NET_RX
3963598 ± 7% +119.8% 8713905 ± 6% softirqs.RCU
6125987 ± 2% +17.3% 7186965 ± 2% softirqs.SCHED
167016 ± 2% +12.5% 187925 softirqs.TIMER
31.37 ± 5% -7.9 23.48 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork
31.37 ± 5% -7.9 23.48 ± 2% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
7.60 ± 6% -7.6 0.00 perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map.write_cache_pages
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
14.19 ± 7% -7.5 6.64 ± 3% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode
7.23 ± 7% -7.2 0.00 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map
12.80 ± 7% -7.2 5.57 ± 3% perf-profile.calltrace.cycles-pp.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
29.14 ± 4% -6.6 22.56 ± 2% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
29.16 ± 4% -5.9 23.28 ± 2% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
8.69 ± 6% -5.5 3.17 ± 3% perf-profile.calltrace.cycles-pp.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages
7.74 ± 6% -5.4 2.38 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map.write_cache_pages.iomap_writepages
13.59 ± 11% -5.0 8.57 ± 8% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.47 ± 11% -5.0 8.49 ± 8% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
12.95 ± 11% -4.8 8.16 ± 8% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
12.93 ± 11% -4.8 8.14 ± 8% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write
9.88 ± 12% -3.8 6.08 ± 8% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
4.27 ± 8% -1.6 2.70 ± 12% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
4.14 ± 8% -1.5 2.60 ± 12% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
4.12 ± 8% -1.5 2.58 ± 12% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
7.25 ± 3% -1.3 5.90 perf-profile.calltrace.cycles-pp.xfs_bmapi_write.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work
2.47 ± 23% -1.2 1.30 ± 4% perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
2.25 ± 7% -1.1 1.10 ± 3% perf-profile.calltrace.cycles-pp.iomap_submit_ioend.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages
2.22 ± 7% -1.1 1.08 ± 3% perf-profile.calltrace.cycles-pp.submit_bio.iomap_submit_ioend.iomap_writepage_map.write_cache_pages.iomap_writepages
2.19 ± 7% -1.1 1.05 ± 4% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.iomap_submit_ioend.iomap_writepage_map.write_cache_pages
2.38 ± 25% -1.1 1.25 ± 4% perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write
2.63 ± 11% -1.1 1.50 ± 10% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
2.35 ± 25% -1.1 1.23 ± 4% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply
1.83 ± 7% -1.0 0.80 ± 4% perf-profile.calltrace.cycles-pp.blk_mq_submit_bio.submit_bio_noacct.submit_bio.iomap_submit_ioend.iomap_writepage_map
6.06 ± 2% -1.0 5.07 perf-profile.calltrace.cycles-pp.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io
2.90 ± 10% -0.9 1.97 ± 9% perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
5.79 ± 2% -0.9 4.87 ± 2% perf-profile.calltrace.cycles-pp.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_unwritten.xfs_end_ioend
2.24 ± 11% -0.9 1.33 ± 10% perf-profile.calltrace.cycles-pp.xfs_iext_lookup_extent.xfs_buffered_write_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
2.13 ± 8% -0.9 1.26 ± 6% perf-profile.calltrace.cycles-pp.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
2.10 ± 8% -0.9 1.24 ± 6% perf-profile.calltrace.cycles-pp.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io.process_one_work
3.60 ± 2% -0.7 2.88 perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work
3.46 ± 2% -0.7 2.75 perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io
1.74 ± 5% -0.7 1.04 ± 12% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.02 ± 8% -0.6 0.46 ± 58% perf-profile.calltrace.cycles-pp.__set_page_dirty.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply
1.28 ± 18% -0.6 0.72 ± 12% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.71 ± 12% -0.5 1.18 ± 14% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
1.26 ± 9% -0.5 0.72 ± 10% perf-profile.calltrace.cycles-pp.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.69 ± 12% -0.5 1.16 ± 14% perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
1.24 ± 5% -0.5 0.75 ± 13% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
1.80 ± 3% -0.5 1.32 ± 3% perf-profile.calltrace.cycles-pp.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten.xfs_end_ioend
1.22 ± 9% -0.5 0.74 ± 8% perf-profile.calltrace.cycles-pp.end_page_writeback.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend.xfs_end_io
2.74 ± 3% -0.5 2.27 ± 2% perf-profile.calltrace.cycles-pp.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_unwritten
1.19 ± 8% -0.5 0.72 ± 8% perf-profile.calltrace.cycles-pp.test_clear_page_writeback.end_page_writeback.iomap_finish_ioend.iomap_finish_ioends.xfs_end_ioend
1.55 ± 6% -0.4 1.18 ± 2% perf-profile.calltrace.cycles-pp.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write
1.01 ± 10% -0.4 0.66 ± 6% perf-profile.calltrace.cycles-pp.__test_set_page_writeback.iomap_writepage_map.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.72 ± 8% -0.3 0.39 ± 58% perf-profile.calltrace.cycles-pp.xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block
1.19 ± 4% -0.3 0.86 ± 4% perf-profile.calltrace.cycles-pp.xfs_buf_item_format.xlog_cil_insert_items.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
1.25 ± 8% -0.3 0.92 ± 3% perf-profile.calltrace.cycles-pp.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten
1.06 ± 9% -0.3 0.78 ± 4% perf-profile.calltrace.cycles-pp.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_bmap_add_extent_unwritten_real
0.83 ± 9% -0.2 0.59 ± 4% perf-profile.calltrace.cycles-pp.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup
0.77 ± 8% -0.2 0.55 ± 5% perf-profile.calltrace.cycles-pp.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block
0.72 ± 5% -0.2 0.54 ± 4% perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work
0.84 ± 3% -0.2 0.68 ± 2% perf-profile.calltrace.cycles-pp.xfs_buf_item_size.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten.xfs_end_ioend
0.77 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.xfs_buf_item_size_segment.xfs_buf_item_size.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten
0.70 ± 9% +0.1 0.79 ± 4% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
1.18 ± 7% +0.3 1.51 ± 4% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.17 ± 7% +0.3 1.51 ± 4% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.17 ± 7% +0.3 1.51 ± 4% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.82 ± 4% +0.4 1.17 ± 6% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.40 ± 6% +0.5 1.91 ± 4% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.5 0.53 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_make_block_unfull.xfs_btree_insrec.xfs_btree_insert.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.__blk_mq_run_hw_queue.process_one_work.worker_thread.kthread.ret_from_fork
1.55 ± 3% +0.5 2.09 ± 6% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
1.50 ± 2% +0.5 2.04 ± 6% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.schedule.worker_thread.kthread.ret_from_fork
2.10 ± 6% +0.6 2.71 ± 8% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
0.00 +0.6 0.65 ± 5% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +0.7 0.65 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map
0.00 +0.7 0.67 ± 3% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
3.14 ± 5% +0.7 3.85 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +0.8 0.75 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__queue_work
0.00 +0.8 0.80 ± 3% perf-profile.calltrace.cycles-pp.unwind_next_frame.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.8 0.81 ± 4% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.__queue_work.mod_delayed_work_on
0.00 +0.8 0.83 ± 3% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__queue_work.mod_delayed_work_on.kblockd_mod_delayed_work_on
0.00 +1.0 0.96 ± 10% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.xfs_ilock.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io
0.00 +1.1 1.05 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_insrec.xfs_btree_insert.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write
0.00 +1.1 1.08 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.__queue_work.mod_delayed_work_on.kblockd_mod_delayed_work_on.blk_mq_sched_insert_requests
0.00 +1.1 1.11 ± 4% perf-profile.calltrace.cycles-pp.xfs_btree_insert.xfs_bmap_add_extent_unwritten_real.xfs_bmapi_convert_unwritten.xfs_bmapi_write.xfs_iomap_write_unwritten
0.00 +1.1 1.11 ± 10% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work
0.00 +1.2 1.16 ± 4% perf-profile.calltrace.cycles-pp.__queue_work.mod_delayed_work_on.kblockd_mod_delayed_work_on.blk_mq_sched_insert_requests.blk_mq_flush_plug_list
12.23 ± 2% +1.2 13.40 perf-profile.calltrace.cycles-pp.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
0.00 +1.2 1.18 ± 4% perf-profile.calltrace.cycles-pp.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.00 +1.3 1.26 ± 4% perf-profile.calltrace.cycles-pp.mod_delayed_work_on.kblockd_mod_delayed_work_on.blk_mq_sched_insert_requests.blk_mq_flush_plug_list.blk_flush_plug_list
0.00 +1.3 1.27 ± 4% perf-profile.calltrace.cycles-pp.kblockd_mod_delayed_work_on.blk_mq_sched_insert_requests.blk_mq_flush_plug_list.blk_flush_plug_list.schedule
0.00 +1.4 1.35 ± 4% perf-profile.calltrace.cycles-pp.blk_mq_sched_insert_requests.blk_mq_flush_plug_list.blk_flush_plug_list.schedule.rwsem_down_read_slowpath
0.00 +1.4 1.37 ± 4% perf-profile.calltrace.cycles-pp.blk_mq_flush_plug_list.blk_flush_plug_list.schedule.rwsem_down_read_slowpath.xfs_map_blocks
0.00 +1.4 1.38 ± 4% perf-profile.calltrace.cycles-pp.blk_flush_plug_list.schedule.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map
0.00 +1.4 1.43 ± 18% perf-profile.calltrace.cycles-pp.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
0.00 +1.6 1.59 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q
0.00 +1.6 1.64 ± 4% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake
0.00 +1.6 1.65 ± 4% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake.xfs_iunlock
4.93 ± 10% +1.7 6.67 ± 5% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +2.0 1.97 ± 4% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.00 +2.1 2.08 ± 3% perf-profile.calltrace.cycles-pp.schedule.rwsem_down_read_slowpath.xfs_map_blocks.iomap_writepage_map.write_cache_pages
0.00 +2.1 2.12 ± 8% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +2.4 2.40 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.xfs_iunlock.xfs_iomap_write_unwritten
0.00 +2.4 2.44 ± 2% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.xfs_iunlock.xfs_iomap_write_unwritten.xfs_end_ioend
0.00 +2.7 2.73 ± 3% perf-profile.calltrace.cycles-pp.rwsem_wake.xfs_iunlock.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io
0.00 +2.8 2.80 ± 3% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work
30.62 ± 6% +4.1 34.76 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
13.75 ± 7% +4.3 18.07 ± 7% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
44.69 +9.2 53.92 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
46.17 ± 2% +10.1 56.24 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
51.80 ± 2% +13.1 64.85 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
51.82 ± 2% +13.1 64.89 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
51.82 ± 2% +13.1 64.89 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
52.27 ± 2% +13.2 65.45 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
8.01 ± 6% -8.0 0.00 perf-profile.children.cycles-pp.rwsem_optimistic_spin
31.37 ± 5% -7.9 23.48 ± 2% perf-profile.children.cycles-pp.kthread
31.37 ± 5% -7.9 23.48 ± 2% perf-profile.children.cycles-pp.ret_from_fork
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.wb_workfn
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.wb_writeback
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.__writeback_inodes_wb
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.writeback_sb_inodes
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.__writeback_single_inode
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.do_writepages
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.xfs_vm_writepages
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.iomap_writepages
14.20 ± 7% -7.6 6.64 ± 3% perf-profile.children.cycles-pp.write_cache_pages
12.81 ± 7% -7.2 5.58 ± 3% perf-profile.children.cycles-pp.iomap_writepage_map
7.42 ± 6% -7.1 0.35 ± 10% perf-profile.children.cycles-pp.rwsem_spin_on_owner
29.14 ± 4% -6.6 22.57 ± 2% perf-profile.children.cycles-pp.process_one_work
29.16 ± 4% -5.9 23.29 ± 2% perf-profile.children.cycles-pp.worker_thread
8.70 ± 6% -5.5 3.17 ± 4% perf-profile.children.cycles-pp.xfs_map_blocks
7.75 ± 6% -5.4 2.38 ± 3% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
14.73 ± 10% -5.2 9.48 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
14.62 ± 10% -5.2 9.40 ± 7% perf-profile.children.cycles-pp.do_syscall_64
13.96 ± 11% -5.2 8.81 ± 8% perf-profile.children.cycles-pp.ksys_write
13.88 ± 10% -5.1 8.76 ± 8% perf-profile.children.cycles-pp.vfs_write
13.61 ± 11% -5.0 8.59 ± 8% perf-profile.children.cycles-pp.new_sync_write
13.48 ± 11% -5.0 8.49 ± 8% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
12.95 ± 11% -4.8 8.16 ± 8% perf-profile.children.cycles-pp.iomap_file_buffered_write
12.93 ± 11% -4.8 8.14 ± 8% perf-profile.children.cycles-pp.iomap_apply
9.88 ± 12% -3.8 6.09 ± 8% perf-profile.children.cycles-pp.iomap_write_actor
4.28 ± 8% -1.6 2.70 ± 12% perf-profile.children.cycles-pp.iomap_write_begin
4.15 ± 8% -1.5 2.60 ± 12% perf-profile.children.cycles-pp.grab_cache_page_write_begin
4.12 ± 8% -1.5 2.59 ± 12% perf-profile.children.cycles-pp.pagecache_get_page
7.27 ? 3% -1.4 5.91 perf-profile.children.cycles-pp.xfs_bmapi_write
1.38 ? 41% -1.2 0.19 ? 22% perf-profile.children.cycles-pp.xas_store
2.47 ? 23% -1.2 1.30 ? 4% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
2.25 ? 7% -1.1 1.10 ? 3% perf-profile.children.cycles-pp.iomap_submit_ioend
2.23 ? 7% -1.1 1.08 ? 3% perf-profile.children.cycles-pp.submit_bio
2.20 ? 7% -1.1 1.06 ? 4% perf-profile.children.cycles-pp.submit_bio_noacct
2.64 ? 11% -1.1 1.50 ? 10% perf-profile.children.cycles-pp.iomap_write_end
2.38 ? 25% -1.1 1.25 ? 4% perf-profile.children.cycles-pp.copyin
2.37 ? 25% -1.1 1.24 ? 4% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
2.90 ? 9% -1.1 1.79 ? 7% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
1.84 ? 7% -1.0 0.81 ? 4% perf-profile.children.cycles-pp.blk_mq_submit_bio
6.07 ? 2% -1.0 5.07 perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
2.90 ? 10% -0.9 1.97 ? 9% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
5.80 ? 2% -0.9 4.88 ? 2% perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
2.13 ? 8% -0.9 1.26 ? 6% perf-profile.children.cycles-pp.iomap_finish_ioends
2.11 ? 8% -0.9 1.25 ? 6% perf-profile.children.cycles-pp.iomap_finish_ioend
3.65 ? 2% -0.8 2.90 perf-profile.children.cycles-pp.__xfs_trans_commit
3.51 ? 2% -0.7 2.78 perf-profile.children.cycles-pp.xfs_log_commit_cil
2.14 ? 9% -0.7 1.44 ? 14% perf-profile.children.cycles-pp.xas_load
1.74 ? 5% -0.7 1.04 ? 12% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.85 ? 4% -0.7 0.18 ? 10% perf-profile.children.cycles-pp.xfs_btree_delete
0.84 ? 4% -0.7 0.17 ? 11% perf-profile.children.cycles-pp.xfs_btree_delrec
1.28 ? 19% -0.6 0.72 ? 12% perf-profile.children.cycles-pp.iomap_set_range_uptodate
1.72 ? 12% -0.5 1.18 ? 14% perf-profile.children.cycles-pp.find_get_entry
1.26 ? 9% -0.5 0.72 ? 10% perf-profile.children.cycles-pp.iomap_set_page_dirty
0.61 ? 8% -0.5 0.10 ? 10% perf-profile.children.cycles-pp.blk_mq_try_issue_list_directly
1.83 ? 3% -0.5 1.33 ? 3% perf-profile.children.cycles-pp.xlog_cil_insert_items
0.60 ? 8% -0.5 0.10 ? 10% perf-profile.children.cycles-pp.blk_mq_request_issue_directly
1.37 ? 6% -0.5 0.89 ? 5% perf-profile.children.cycles-pp.xfs_buf_read_map
1.24 ? 5% -0.5 0.76 ? 14% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.58 ? 7% -0.5 0.10 ? 8% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
1.22 ? 9% -0.5 0.74 ? 8% perf-profile.children.cycles-pp.end_page_writeback
1.21 ? 9% -0.5 0.74 ? 8% perf-profile.children.cycles-pp.test_clear_page_writeback
2.78 ? 3% -0.4 2.33 ? 2% perf-profile.children.cycles-pp.xfs_btree_lookup
1.02 ? 8% -0.4 0.58 ? 12% perf-profile.children.cycles-pp.__set_page_dirty
1.56 ? 5% -0.4 1.20 ? 2% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
1.14 ? 6% -0.4 0.77 ? 5% perf-profile.children.cycles-pp.xfs_buf_get_map
1.07 ? 7% -0.4 0.72 ? 5% perf-profile.children.cycles-pp.xfs_buf_find
1.03 ? 10% -0.4 0.68 ? 6% perf-profile.children.cycles-pp.__test_set_page_writeback
0.44 ? 5% -0.3 0.10 ? 19% perf-profile.children.cycles-pp.blk_attempt_plug_merge
1.24 ? 4% -0.3 0.90 ? 4% perf-profile.children.cycles-pp.xfs_buf_item_format
0.57 ? 3% -0.3 0.23 ? 11% perf-profile.children.cycles-pp.xfs_btree_readahead_lblock
0.56 ? 3% -0.3 0.23 ? 11% perf-profile.children.cycles-pp.xfs_btree_reada_bufl
0.54 ? 4% -0.3 0.22 ? 12% perf-profile.children.cycles-pp.xfs_buf_readahead_map
0.64 ? 8% -0.3 0.35 ? 12% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.33 ? 40% -0.3 0.04 ? 59% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.47 -0.3 0.19 ? 7% perf-profile.children.cycles-pp.xfs_btree_decrement
0.34 ? 10% -0.3 0.08 ? 14% perf-profile.children.cycles-pp.osq_lock
0.57 ? 9% -0.3 0.32 ? 12% perf-profile.children.cycles-pp.get_page_from_freelist
0.57 ? 7% -0.2 0.32 ? 17% perf-profile.children.cycles-pp.__xa_set_mark
0.52 ? 9% -0.2 0.29 ? 19% perf-profile.children.cycles-pp.xas_set_mark
0.46 ? 9% -0.2 0.25 ? 7% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.48 ? 7% -0.2 0.27 ? 10% perf-profile.children.cycles-pp.lru_cache_add
0.74 ? 4% -0.2 0.55 ? 4% perf-profile.children.cycles-pp.xfs_trans_alloc
0.66 ? 2% -0.2 0.46 ? 6% perf-profile.children.cycles-pp.xfs_next_bit
1.32 ? 8% -0.2 1.13 ? 3% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.26 ? 7% -0.2 0.08 ? 10% perf-profile.children.cycles-pp.xfs_iext_remove
0.55 ? 14% -0.2 0.38 ? 6% perf-profile.children.cycles-pp.clear_page_dirty_for_io
1.14 ? 9% -0.2 0.96 ? 4% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.41 ? 7% -0.2 0.24 ? 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.83 ? 4% -0.2 0.66 ? 2% perf-profile.children.cycles-pp.xfs_buf_item_size_segment
0.85 ? 4% -0.2 0.68 ? 2% perf-profile.children.cycles-pp.xfs_buf_item_size
0.37 ? 8% -0.2 0.21 ? 12% perf-profile.children.cycles-pp.rmqueue
0.50 ? 5% -0.2 0.34 ? 5% perf-profile.children.cycles-pp.kmem_cache_free
0.59 ? 7% -0.2 0.43 ? 8% perf-profile.children.cycles-pp.find_get_pages_range_tag
0.41 ? 4% -0.2 0.26 ? 11% perf-profile.children.cycles-pp.mem_cgroup_charge
0.59 ? 7% -0.2 0.44 ? 8% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
0.24 ? 5% -0.1 0.09 ? 7% perf-profile.children.cycles-pp.xfs_btree_increment
0.48 ? 8% -0.1 0.34 ? 8% perf-profile.children.cycles-pp.__irqentry_text_start
0.52 ? 5% -0.1 0.38 ? 5% perf-profile.children.cycles-pp.xfs_trans_reserve
0.33 ? 4% -0.1 0.19 ? 5% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.68 ? 5% -0.1 0.54 ? 7% perf-profile.children.cycles-pp.kmem_cache_alloc
0.32 ? 18% -0.1 0.18 ? 17% perf-profile.children.cycles-pp.account_page_dirtied
0.31 ? 8% -0.1 0.18 ? 8% perf-profile.children.cycles-pp.down_write
0.43 ? 9% -0.1 0.30 ? 9% perf-profile.children.cycles-pp.mempool_alloc
0.27 ? 8% -0.1 0.15 ? 18% perf-profile.children.cycles-pp.workingset_update_node
0.31 ? 11% -0.1 0.20 ? 7% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
0.25 ? 17% -0.1 0.15 ? 15% perf-profile.children.cycles-pp.__mod_memcg_state
0.22 ? 7% -0.1 0.11 ? 14% perf-profile.children.cycles-pp.rmqueue_bulk
0.55 ? 6% -0.1 0.45 ? 3% perf-profile.children.cycles-pp.memcpy_erms
0.41 ? 4% -0.1 0.31 ? 6% perf-profile.children.cycles-pp.xfs_log_reserve
0.30 ? 7% -0.1 0.20 ? 6% perf-profile.children.cycles-pp.bio_free
0.42 ? 11% -0.1 0.32 ? 6% perf-profile.children.cycles-pp.bio_alloc_bioset
0.33 ? 2% -0.1 0.23 ? 6% perf-profile.children.cycles-pp.xfs_inode_item_format
0.25 ? 15% -0.1 0.15 ? 4% perf-profile.children.cycles-pp.xfs_btree_del_cursor
0.25 ? 7% -0.1 0.15 ? 19% perf-profile.children.cycles-pp.xfs_buf_trylock
0.31 ? 21% -0.1 0.21 ? 9% perf-profile.children.cycles-pp.submit_bio_checks
0.22 ? 7% -0.1 0.12 ? 6% perf-profile.children.cycles-pp.__might_sleep
0.22 ? 13% -0.1 0.12 ? 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.20 ? 12% -0.1 0.10 ? 12% perf-profile.children.cycles-pp.list_sort
0.24 ? 8% -0.1 0.15 ? 21% perf-profile.children.cycles-pp.down_trylock
0.25 ? 15% -0.1 0.17 ? 9% perf-profile.children.cycles-pp.__mod_lruvec_state
0.23 ? 10% -0.1 0.15 ? 12% perf-profile.children.cycles-pp.__slab_free
0.29 ? 9% -0.1 0.21 ? 6% perf-profile.children.cycles-pp.xfs_bmapi_finish
0.25 ? 7% -0.1 0.17 ? 16% perf-profile.children.cycles-pp.xfs_buf_unlock
0.25 ? 6% -0.1 0.17 ? 11% perf-profile.children.cycles-pp.xfs_buf_rele
0.11 ? 4% -0.1 0.04 ? 57% perf-profile.children.cycles-pp.unlock_page
0.35 ? 8% -0.1 0.28 ? 4% perf-profile.children.cycles-pp.xfs_buf_offset
0.19 ? 11% -0.1 0.11 ? 20% perf-profile.children.cycles-pp.__xa_clear_mark
0.28 ? 10% -0.1 0.21 ? 4% perf-profile.children.cycles-pp.xfs_trans_log_inode
0.12 ? 23% -0.1 0.05 ? 64% perf-profile.children.cycles-pp.node_dirty_ok
0.20 ? 16% -0.1 0.12 ? 12% perf-profile.children.cycles-pp.up_write
0.17 ? 15% -0.1 0.10 ? 11% perf-profile.children.cycles-pp.xfs_trans_brelse
0.23 ? 14% -0.1 0.16 ? 13% perf-profile.children.cycles-pp.bvec_alloc
0.22 ? 18% -0.1 0.15 ? 14% perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
0.18 ? 5% -0.1 0.12 ? 22% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.24 ? 8% -0.1 0.17 ? 19% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.23 ? 7% -0.1 0.17 ? 20% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.16 ? 28% -0.1 0.09 ? 8% perf-profile.children.cycles-pp.___slab_alloc
0.15 ? 5% -0.1 0.08 ? 21% perf-profile.children.cycles-pp.page_mapping
0.20 ? 19% -0.1 0.14 ? 10% perf-profile.children.cycles-pp.__mod_node_page_state
0.18 ? 18% -0.1 0.12 ? 3% perf-profile.children.cycles-pp.xfs_perag_get
0.16 ? 5% -0.1 0.10 ? 7% perf-profile.children.cycles-pp.memset_erms
0.17 ? 6% -0.1 0.11 ? 14% perf-profile.children.cycles-pp.nvme_pci_complete_rq
0.14 ? 21% -0.1 0.08 ? 10% perf-profile.children.cycles-pp.xfs_trans_ijoin
0.08 ? 13% -0.1 0.03 ?100% perf-profile.children.cycles-pp.xfs_iext_update_node
0.21 ? 15% -0.1 0.15 ? 9% perf-profile.children.cycles-pp.xfs_bmbt_init_cursor
0.12 ? 6% -0.1 0.07 ? 7% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.21 ? 11% -0.1 0.15 ? 4% perf-profile.children.cycles-pp.xlog_ticket_alloc
0.10 ? 15% -0.1 0.04 ? 57% perf-profile.children.cycles-pp.xas_start
0.08 ? 19% -0.1 0.03 ?100% perf-profile.children.cycles-pp.xfs_vn_update_time
0.28 ? 6% -0.1 0.23 ? 5% perf-profile.children.cycles-pp.nvme_map_data
0.16 ? 12% -0.1 0.10 ? 25% perf-profile.children.cycles-pp.try_charge
0.08 ? 19% -0.1 0.03 ?100% perf-profile.children.cycles-pp.xfs_file_write_iter
0.08 ? 14% -0.1 0.03 ?100% perf-profile.children.cycles-pp.xfs_break_layouts
0.07 ? 15% -0.0 0.03 ?100% perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
0.14 ? 10% -0.0 0.10 ? 9% perf-profile.children.cycles-pp.dec_zone_page_state
0.14 ? 13% -0.0 0.10 ? 11% perf-profile.children.cycles-pp.__blk_queue_split
0.13 ? 12% -0.0 0.09 ? 4% perf-profile.children.cycles-pp.file_update_time
0.12 ? 5% -0.0 0.08 ? 16% perf-profile.children.cycles-pp.up
0.07 ? 7% -0.0 0.03 ?100% perf-profile.children.cycles-pp.kfree
0.07 ? 7% -0.0 0.03 ?100% perf-profile.children.cycles-pp.xfs_isilocked
0.10 ? 17% -0.0 0.06 ? 7% perf-profile.children.cycles-pp.xfs_mod_fdblocks
0.24 ? 5% -0.0 0.20 ? 8% perf-profile.children.cycles-pp.xfs_verify_agbno
0.09 ? 7% -0.0 0.06 ? 58% perf-profile.children.cycles-pp.xfs_trans_del_item
0.10 ? 11% -0.0 0.06 ? 16% perf-profile.children.cycles-pp.xa_get_order
0.09 ? 9% -0.0 0.06 ? 15% perf-profile.children.cycles-pp._atomic_dec_and_lock
0.10 ? 19% -0.0 0.07 ? 29% perf-profile.children.cycles-pp.blk_mq_rq_ctx_init
0.19 ? 7% -0.0 0.16 ? 8% perf-profile.children.cycles-pp.xfs_trans_log_buf
0.10 ? 8% -0.0 0.07 ? 11% perf-profile.children.cycles-pp.__sb_start_write
0.31 -0.0 0.28 ? 3% perf-profile.children.cycles-pp.__xfs_btree_check_lblock
0.09 ? 9% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
0.15 ? 5% -0.0 0.12 ? 8% perf-profile.children.cycles-pp.xfs_trans_dirty_buf
0.09 ? 8% -0.0 0.07 ? 13% perf-profile.children.cycles-pp.blk_throtl_bio
0.07 ? 6% -0.0 0.05 perf-profile.children.cycles-pp.xfs_iext_update_extent
0.06 ? 11% +0.0 0.08 ? 5% perf-profile.children.cycles-pp.__list_add_valid
0.06 ? 16% +0.0 0.08 perf-profile.children.cycles-pp.rb_insert_color
0.11 ? 9% +0.0 0.14 ? 8% perf-profile.children.cycles-pp.xfs_end_bio
0.09 ? 7% +0.0 0.12 ? 9% perf-profile.children.cycles-pp.blk_mq_start_request
0.09 ? 7% +0.0 0.12 ? 10% perf-profile.children.cycles-pp.rcu_eqs_enter
0.23 ? 6% +0.0 0.27 ? 4% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.08 ? 19% +0.0 0.12 ? 8% perf-profile.children.cycles-pp._find_next_bit
0.03 ?100% +0.0 0.07 ? 6% perf-profile.children.cycles-pp.xfs_btree_update_keys
0.01 ?173% +0.0 0.06 ? 9% perf-profile.children.cycles-pp.rb_erase
0.04 ? 58% +0.0 0.09 ? 7% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.11 ? 7% +0.1 0.17 ? 7% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.08 ? 17% +0.1 0.14 ? 11% perf-profile.children.cycles-pp.call_cpuidle
0.06 ? 60% +0.1 0.12 ? 13% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.1 0.07 ? 6% perf-profile.children.cycles-pp.blk_mq_flush_busy_ctxs
0.00 +0.1 0.07 ? 10% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.07 ? 10% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.1 0.07 ? 5% perf-profile.children.cycles-pp.__xfs_btree_split
0.00 +0.1 0.07 ? 17% perf-profile.children.cycles-pp.asm_sysvec_reschedule_ipi
0.16 ? 7% +0.1 0.23 ? 15% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.00 +0.1 0.08 ? 6% perf-profile.children.cycles-pp.check_preempt_wakeup
0.00 +0.1 0.08 ? 14% perf-profile.children.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.08 ? 10% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.00 +0.1 0.08 ? 13% perf-profile.children.cycles-pp.stack_access_ok
0.00 +0.1 0.08 ? 13% perf-profile.children.cycles-pp.wq_worker_running
0.00 +0.1 0.08 ? 13% perf-profile.children.cycles-pp.resched_curr
0.09 +0.1 0.17 ? 4% perf-profile.children.cycles-pp.xfs_iext_insert
0.12 ? 7% +0.1 0.20 ? 10% perf-profile.children.cycles-pp._cond_resched
0.71 ? 9% +0.1 0.80 ? 4% perf-profile.children.cycles-pp.rebalance_domains
0.11 ? 15% +0.1 0.20 ? 4% perf-profile.children.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.10 ? 9% perf-profile.children.cycles-pp.in_sched_functions
0.00 +0.1 0.10 ? 4% perf-profile.children.cycles-pp.xfs_btree_split_worker
0.28 ? 11% +0.1 0.38 ? 9% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.1 0.10 ? 10% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.10 ? 12% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.00 +0.1 0.10 ? 21% perf-profile.children.cycles-pp.__module_text_address
0.00 +0.1 0.10 ? 10% perf-profile.children.cycles-pp.__unwind_start
0.00 +0.1 0.10 ? 14% perf-profile.children.cycles-pp.preempt_schedule_common
0.00 +0.1 0.10 ? 8% perf-profile.children.cycles-pp.sysvec_call_function_single
0.00 +0.1 0.11 ? 19% perf-profile.children.cycles-pp.is_module_text_address
0.00 +0.1 0.11 ? 7% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.21 ? 5% +0.1 0.32 ? 4% perf-profile.children.cycles-pp.rcu_idle_exit
0.26 ? 7% +0.1 0.37 ? 7% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.12 ? 27% perf-profile.children.cycles-pp.rcu_gp_kthread
0.00 +0.1 0.12 ? 6% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.00 +0.1 0.12 ? 8% perf-profile.children.cycles-pp.___perf_sw_event
0.00 +0.1 0.14 ? 15% perf-profile.children.cycles-pp.__module_address
0.00 +0.1 0.15 ? 5% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.55 ? 5% +0.2 0.70 ? 2% perf-profile.children.cycles-pp.sched_clock_cpu
0.47 ? 6% +0.2 0.65 ? 4% perf-profile.children.cycles-pp.native_sched_clock
0.20 ? 18% +0.2 0.37 ? 2% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited
0.00 +0.2 0.17 ? 10% perf-profile.children.cycles-pp.xfs_btree_rshift
0.00 +0.2 0.18 ? 11% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
0.00 +0.2 0.18 ? 8% perf-profile.children.cycles-pp.update_curr
0.20 ? 11% +0.2 0.38 ? 12% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.2 0.18 ? 10% perf-profile.children.cycles-pp.kernel_text_address
0.50 ? 6% +0.2 0.68 ? 3% perf-profile.children.cycles-pp.sched_clock
0.05 ? 59% +0.2 0.24 ? 11% perf-profile.children.cycles-pp.rcu_core
0.00 +0.2 0.19 perf-profile.children.cycles-pp.check_preempt_curr
0.13 ? 9% +0.2 0.33 ? 3% perf-profile.children.cycles-pp.balance_dirty_pages
0.00 +0.2 0.20 ? 10% perf-profile.children.cycles-pp.note_gp_changes
0.13 ? 14% +0.2 0.33 ? 4% perf-profile.children.cycles-pp.xfs_btree_lshift
0.00 +0.2 0.21 ? 8% perf-profile.children.cycles-pp.select_task_rq_fair
0.11 ? 13% +0.2 0.32 ? 11% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.09 ? 20% +0.2 0.30 ? 7% perf-profile.children.cycles-pp.nr_iowait_cpu
0.00 +0.2 0.21 ? 2% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.00 +0.2 0.21 ? 6% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.09 ? 21% +0.2 0.30 ? 9% perf-profile.children.cycles-pp.update_ts_time_stats
0.00 +0.2 0.22 ? 11% perf-profile.children.cycles-pp.finish_task_switch
0.00 +0.2 0.22 ? 10% perf-profile.children.cycles-pp.__kernel_text_address
0.00 +0.2 0.23 ? 9% perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.2 0.24 ? 5% perf-profile.children.cycles-pp.sched_ttwu_pending
0.00 +0.2 0.24 ? 5% perf-profile.children.cycles-pp.set_next_entity
0.03 ?102% +0.2 0.28 ? 2% perf-profile.children.cycles-pp.io_schedule_timeout
0.00 +0.2 0.25 ? 11% perf-profile.children.cycles-pp.__switch_to
0.00 +0.3 0.26 ? 6% perf-profile.children.cycles-pp.rwsem_mark_wake
0.00 +0.3 0.27 ? 5% perf-profile.children.cycles-pp.unwind_get_return_address
0.08 ? 20% +0.3 0.36 ? 8% perf-profile.children.cycles-pp.schedule_timeout
0.13 ? 22% +0.3 0.42 ? 7% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.3 0.33 perf-profile.children.cycles-pp.dequeue_entity
0.27 ? 7% +0.3 0.61 ? 8% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.3 0.34 ? 3% perf-profile.children.cycles-pp.update_load_avg
0.87 ? 4% +0.3 1.21 ? 6% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.4 0.35 ? 10% perf-profile.children.cycles-pp.orc_find
1.20 ? 6% +0.4 1.56 ? 3% perf-profile.children.cycles-pp.do_softirq_own_stack
0.00 +0.4 0.36 ? 11% perf-profile.children.cycles-pp.__orc_find
1.19 ? 6% +0.4 1.56 ? 3% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +0.4 0.38 ? 2% perf-profile.children.cycles-pp.dequeue_task_fair
0.00 +0.4 0.43 ? 5% perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list
0.00 +0.5 0.51 ? 2% perf-profile.children.cycles-pp.__blk_mq_sched_dispatch_requests
0.00 +0.5 0.52 ? 3% perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests
1.63 ? 3% +0.5 2.15 ? 6% perf-profile.children.cycles-pp.tick_sched_handle
0.31 ? 18% +0.5 0.83 ? 2% perf-profile.children.cycles-pp.update_sd_lb_stats
0.32 ? 17% +0.5 0.85 ? 3% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.5 0.53 ? 4% perf-profile.children.cycles-pp.xfs_btree_make_block_unfull
0.00 +0.5 0.53 ? 2% perf-profile.children.cycles-pp.__blk_mq_run_hw_queue
1.58 ? 2% +0.5 2.12 ? 6% perf-profile.children.cycles-pp.update_process_times
1.50 ? 6% +0.5 2.03 ? 4% perf-profile.children.cycles-pp.irq_exit_rcu
0.42 ? 14% +0.5 0.96 ? 2% perf-profile.children.cycles-pp.load_balance
0.10 ? 16% +0.6 0.67 ? 2% perf-profile.children.cycles-pp.newidle_balance
2.19 ? 6% +0.6 2.79 ? 8% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.7 0.68 ? 4% perf-profile.children.cycles-pp.schedule_idle
3.26 ? 6% +0.7 3.96 ? 5% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.29 ? 6% +0.8 1.06 ? 3% perf-profile.children.cycles-pp.xfs_btree_insrec
0.31 ? 6% +0.8 1.12 ? 4% perf-profile.children.cycles-pp.xfs_btree_insert
0.80 ? 12% +0.8 1.62 ? 7% perf-profile.children.cycles-pp.xfs_ilock
0.61 ? 8% +0.8 1.45 ? 3% perf-profile.children.cycles-pp.blk_mq_sched_insert_requests
0.62 ? 8% +0.9 1.47 ? 4% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.62 ? 9% +0.9 1.48 ? 3% perf-profile.children.cycles-pp.blk_flush_plug_list
0.09 ? 23% +0.9 1.04 ? 3% perf-profile.children.cycles-pp.pick_next_task_fair
0.44 ? 17% +1.0 1.41 ? 7% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
12.25 ? 2% +1.2 13.41 perf-profile.children.cycles-pp.xfs_iomap_write_unwritten
0.00 +1.2 1.19 ? 5% perf-profile.children.cycles-pp.unwind_next_frame
0.00 +1.2 1.22 ? 3% perf-profile.children.cycles-pp.__queue_work
0.00 +1.3 1.27 ? 4% perf-profile.children.cycles-pp.mod_delayed_work_on
0.00 +1.3 1.27 ? 4% perf-profile.children.cycles-pp.kblockd_mod_delayed_work_on
0.42 ? 10% +1.7 2.16 ? 9% perf-profile.children.cycles-pp.poll_idle
4.96 ? 11% +1.8 6.72 ? 5% perf-profile.children.cycles-pp.menu_select
0.01 ?173% +1.8 1.84 ? 3% perf-profile.children.cycles-pp.arch_stack_walk
0.01 ?173% +2.0 2.00 ? 3% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.03 ?100% +2.2 2.22 ? 4% perf-profile.children.cycles-pp.__account_scheduler_latency
0.15 ? 14% +2.4 2.50 ? 2% perf-profile.children.cycles-pp.__schedule
0.00 +2.5 2.48 ? 2% perf-profile.children.cycles-pp.wake_up_q
0.06 ? 13% +2.6 2.67 ? 3% perf-profile.children.cycles-pp.enqueue_entity
0.07 ? 14% +2.7 2.78 ? 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.30 ? 8% +2.7 3.03 ? 3% perf-profile.children.cycles-pp.xfs_iunlock
0.07 ? 14% +2.7 2.81 ? 3% perf-profile.children.cycles-pp.ttwu_do_activate
0.00 +2.8 2.79 ? 3% perf-profile.children.cycles-pp.rwsem_wake
0.12 ? 13% +3.1 3.20 ? 3% perf-profile.children.cycles-pp.schedule
12.03 ? 7% +3.4 15.39 ? 8% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.10 ? 13% +3.6 3.67 ? 2% perf-profile.children.cycles-pp.try_to_wake_up
30.81 ? 6% +4.0 34.84 ? 2% perf-profile.children.cycles-pp.intel_idle
46.58 ? 2% +10.1 56.71 perf-profile.children.cycles-pp.cpuidle_enter_state
46.61 ? 2% +10.1 56.75 perf-profile.children.cycles-pp.cpuidle_enter
51.82 ? 2% +13.1 64.89 perf-profile.children.cycles-pp.start_secondary
52.27 ? 2% +13.2 65.44 perf-profile.children.cycles-pp.do_idle
52.27 ? 2% +13.2 65.45 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
52.27 ? 2% +13.2 65.45 perf-profile.children.cycles-pp.cpu_startup_entry
7.25 ? 7% -7.0 0.28 ? 6% perf-profile.self.cycles-pp.rwsem_spin_on_owner
2.36 ? 25% -1.1 1.23 ? 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.85 ? 9% -1.1 1.74 ? 7% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
2.04 ? 9% -0.6 1.40 ? 13% perf-profile.self.cycles-pp.xas_load
1.27 ? 19% -0.6 0.72 ? 12% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.33 ? 40% -0.3 0.04 ? 59% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.34 ? 11% -0.3 0.08 ? 11% perf-profile.self.cycles-pp.osq_lock
0.45 ? 16% -0.2 0.22 ? 14% perf-profile.self.cycles-pp.iomap_finish_ioend
0.52 ? 9% -0.2 0.29 ? 19% perf-profile.self.cycles-pp.xas_set_mark
0.58 ? 3% -0.2 0.42 ? 5% perf-profile.self.cycles-pp.xfs_next_bit
0.49 ? 6% -0.1 0.34 ? 7% perf-profile.self.cycles-pp.find_get_pages_range_tag
0.16 ? 5% -0.1 0.03 ?100% perf-profile.self.cycles-pp.xfs_iext_remove
0.38 ? 8% -0.1 0.24 ? 4% perf-profile.self.cycles-pp.xfs_buf_find
0.15 ? 10% -0.1 0.03 ?100% perf-profile.self.cycles-pp.blk_attempt_plug_merge
0.26 ? 9% -0.1 0.15 ? 18% perf-profile.self.cycles-pp.workingset_update_node
0.26 ? 15% -0.1 0.15 ? 5% perf-profile.self.cycles-pp.clear_page_dirty_for_io
0.42 ? 4% -0.1 0.32 ? 8% perf-profile.self.cycles-pp.xfs_buf_item_format
0.22 -0.1 0.11 ? 7% perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
0.24 ? 16% -0.1 0.14 ? 15% perf-profile.self.cycles-pp.__mod_memcg_state
0.22 ? 9% -0.1 0.12 ? 13% perf-profile.self.cycles-pp.down_write
0.23 ? 16% -0.1 0.13 ? 6% perf-profile.self.cycles-pp.test_clear_page_writeback
0.20 ? 12% -0.1 0.11 ? 4% perf-profile.self.cycles-pp.__list_del_entry_valid
0.31 ? 9% -0.1 0.23 ? 17% perf-profile.self.cycles-pp.__test_set_page_writeback
0.51 ? 6% -0.1 0.43 ? 4% perf-profile.self.cycles-pp.memcpy_erms
0.22 ? 12% -0.1 0.14 ? 15% perf-profile.self.cycles-pp.__slab_free
0.10 ? 22% -0.1 0.03 ?100% perf-profile.self.cycles-pp.unlock_page
0.19 ? 15% -0.1 0.12 ? 15% perf-profile.self.cycles-pp.up_write
0.18 ? 5% -0.1 0.11 ? 21% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.22 ? 9% -0.1 0.15 ? 21% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.17 ? 11% -0.1 0.10 ? 19% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.31 ? 6% -0.1 0.24 ? 3% perf-profile.self.cycles-pp.xfs_buf_item_size_segment
0.16 ? 7% -0.1 0.09 ? 11% perf-profile.self.cycles-pp.rmqueue_bulk
0.20 ? 18% -0.1 0.14 ? 11% perf-profile.self.cycles-pp.__mod_node_page_state
0.16 ? 5% -0.1 0.10 ? 13% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.23 ? 7% -0.1 0.17 ? 8% perf-profile.self.cycles-pp.kmem_cache_free
0.28 ? 6% -0.1 0.22 ? 5% perf-profile.self.cycles-pp.xfs_buf_offset
0.18 ? 16% -0.1 0.12 ? 20% perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
0.16 ? 15% -0.1 0.11 ? 7% perf-profile.self.cycles-pp.__might_sleep
0.14 ? 10% -0.1 0.08 ? 21% perf-profile.self.cycles-pp.page_mapping
0.15 ? 16% -0.1 0.10 ? 15% perf-profile.self.cycles-pp.down_read
0.08 ? 26% -0.1 0.03 ?100% perf-profile.self.cycles-pp.account_page_dirtied
0.28 ? 4% -0.1 0.22 ? 4% perf-profile.self.cycles-pp.kmem_cache_alloc
0.14 ? 5% -0.1 0.09 ? 4% perf-profile.self.cycles-pp.memset_erms
0.08 ? 19% -0.1 0.03 ?100% perf-profile.self.cycles-pp.xfs_file_write_iter
0.09 ? 15% -0.1 0.04 ? 58% perf-profile.self.cycles-pp.xas_start
0.12 ? 3% -0.1 0.07 ? 7% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.21 ? 6% -0.1 0.16 ? 7% perf-profile.self.cycles-pp.xfs_inode_item_format
0.11 ? 17% -0.1 0.06 ? 11% perf-profile.self.cycles-pp.iomap_apply
0.12 ? 9% -0.0 0.07 ? 6% perf-profile.self.cycles-pp.list_sort
0.07 ? 17% -0.0 0.03 ?100% perf-profile.self.cycles-pp.iomap_write_end
0.10 ? 15% -0.0 0.05 ? 58% perf-profile.self.cycles-pp.iov_iter_copy_from_user_atomic
0.07 ? 15% -0.0 0.03 ?100% perf-profile.self.cycles-pp.xfs_file_llseek
0.07 ? 24% -0.0 0.03 ?100% perf-profile.self.cycles-pp.xfs_file_buffered_aio_write
0.07 ? 17% -0.0 0.03 ?100% perf-profile.self.cycles-pp.__entry_text_start
0.16 ? 12% -0.0 0.11 ? 14% perf-profile.self.cycles-pp.xfs_trans_log_inode
0.07 ? 6% -0.0 0.03 ?100% perf-profile.self.cycles-pp.try_charge
0.12 ? 15% -0.0 0.08 ? 8% perf-profile.self.cycles-pp.__get_user_nocheck_1
0.08 ? 5% -0.0 0.04 ? 58% perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
0.18 ? 6% -0.0 0.14 ? 15% perf-profile.self.cycles-pp.xfs_bmap_add_extent_unwritten_real
0.09 ? 11% -0.0 0.06 ? 14% perf-profile.self.cycles-pp.rmqueue
0.12 ? 14% -0.0 0.08 ? 8% perf-profile.self.cycles-pp.__blk_queue_split
0.14 ? 12% -0.0 0.11 ? 4% perf-profile.self.cycles-pp.xfs_btree_lookup_get_block
0.12 ? 13% -0.0 0.09 ? 4% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.19 ? 5% -0.0 0.16 ? 2% perf-profile.self.cycles-pp.iomap_writepage_map
0.10 ? 14% -0.0 0.07 ? 10% perf-profile.self.cycles-pp.iomap_write_actor
0.08 ? 13% -0.0 0.05 ? 8% perf-profile.self.cycles-pp._atomic_dec_and_lock
0.10 ? 14% -0.0 0.07 ? 5% perf-profile.self.cycles-pp.xfs_perag_get
0.07 ? 7% -0.0 0.04 ? 57% perf-profile.self.cycles-pp.xfs_buf_get_map
0.11 ? 7% -0.0 0.09 ? 10% perf-profile.self.cycles-pp.dec_zone_page_state
0.07 ? 7% -0.0 0.04 ? 58% perf-profile.self.cycles-pp.xfs_perag_put
0.08 ? 6% -0.0 0.06 ? 11% perf-profile.self.cycles-pp.xfs_errortag_test
0.16 ? 8% +0.0 0.18 ? 11% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.04 ? 58% +0.0 0.07 ? 10% perf-profile.self.cycles-pp.__list_add_valid
0.05 ? 60% +0.0 0.08 perf-profile.self.cycles-pp.rb_insert_color
0.08 ? 14% +0.0 0.11 ? 14% perf-profile.self.cycles-pp._find_next_bit
0.07 ? 7% +0.0 0.10 ? 4% perf-profile.self.cycles-pp.xfs_iext_insert
0.14 ? 22% +0.0 0.18 ? 7% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.14 ? 16% +0.0 0.19 ? 6% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
0.01 ?173% +0.0 0.06 ? 14% perf-profile.self.cycles-pp.scheduler_tick
0.04 ? 58% +0.0 0.09 ? 9% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.08 ? 17% +0.1 0.13 ? 9% perf-profile.self.cycles-pp.call_cpuidle
0.01 ?173% +0.1 0.07 ? 17% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.00 +0.1 0.06 ? 14% perf-profile.self.cycles-pp.in_sched_functions
0.07 ? 13% +0.1 0.12 ? 6% perf-profile.self.cycles-pp.__softirqentry_text_start
0.04 ? 57% +0.1 0.10 ? 10% perf-profile.self.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.06 ? 11% perf-profile.self.cycles-pp.kernel_text_address
0.00 +0.1 0.07 ? 13% perf-profile.self.cycles-pp.schedule
0.00 +0.1 0.07 ? 6% perf-profile.self.cycles-pp.xfs_btree_insrec
0.00 +0.1 0.07 ? 10% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.07 ? 17% perf-profile.self.cycles-pp.pick_next_task_fair
0.00 +0.1 0.07 ? 5% perf-profile.self.cycles-pp.finish_task_switch
0.00 +0.1 0.07 ? 11% perf-profile.self.cycles-pp.arch_stack_walk
0.00 +0.1 0.07 ? 14% perf-profile.self.cycles-pp.stack_access_ok
0.00 +0.1 0.08 ? 14% perf-profile.self.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.08 ? 14% perf-profile.self.cycles-pp.note_gp_changes
0.00 +0.1 0.08 ? 10% perf-profile.self.cycles-pp.__unwind_start
0.00 +0.1 0.08 ? 14% perf-profile.self.cycles-pp.dequeue_entity
0.00 +0.1 0.08 ? 15% perf-profile.self.cycles-pp.select_task_rq_fair
0.00 +0.1 0.08 ? 13% perf-profile.self.cycles-pp.resched_curr
0.00 +0.1 0.09 ? 4% perf-profile.self.cycles-pp.__account_scheduler_latency
0.00 +0.1 0.10 ? 15% perf-profile.self.cycles-pp.update_curr
0.00 +0.1 0.10 ? 13% perf-profile.self.cycles-pp.__update_load_avg_se
0.28 ? 10% +0.1 0.38 ? 9% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.1 0.10 ? 12% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.1 0.11 ? 17% perf-profile.self.cycles-pp.enqueue_task_fair
0.00 +0.1 0.11 ? 3% perf-profile.self.cycles-pp.stack_trace_consume_entry_nosched
0.00 +0.1 0.12 ? 4% perf-profile.self.cycles-pp.update_load_avg
0.00 +0.1 0.14 ? 15% perf-profile.self.cycles-pp.__module_address
0.00 +0.1 0.14 ? 7% perf-profile.self.cycles-pp.stack_trace_save_tsk
0.00 +0.1 0.14 ? 14% perf-profile.self.cycles-pp.set_next_entity
0.00 +0.1 0.14 ? 9% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.18 ? 8% +0.2 0.33 ? 13% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.00 +0.2 0.16 ? 13% perf-profile.self.cycles-pp.try_to_wake_up
0.00 +0.2 0.16 ? 4% perf-profile.self.cycles-pp.orc_find
0.46 ? 6% +0.2 0.62 ? 4% perf-profile.self.cycles-pp.native_sched_clock
1.02 ? 6% +0.2 1.20 ? 7% perf-profile.self.cycles-pp._raw_spin_lock
0.20 ? 4% +0.2 0.38 ? 5% perf-profile.self.cycles-pp.do_idle
0.00 +0.2 0.19 ? 8% perf-profile.self.cycles-pp.enqueue_entity
0.00 +0.2 0.21 ? 5% perf-profile.self.cycles-pp.rwsem_mark_wake
0.09 ? 24% +0.2 0.30 ? 6% perf-profile.self.cycles-pp.nr_iowait_cpu
0.10 ? 7% +0.2 0.32 ? 11% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.2 0.23 ? 10% perf-profile.self.cycles-pp.__switch_to
0.00 +0.2 0.23 ? 9% perf-profile.self.cycles-pp.__switch_to_asm
0.06 ? 17% +0.3 0.32 ? 11% perf-profile.self.cycles-pp.update_rq_clock
0.12 ? 21% +0.3 0.41 ? 6% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.3 0.33 ? 7% perf-profile.self.cycles-pp.__schedule
0.21 ? 19% +0.4 0.57 ? 2% perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.4 0.36 ? 11% perf-profile.self.cycles-pp.__orc_find
0.00 +0.5 0.51 ? 4% perf-profile.self.cycles-pp.unwind_next_frame
0.00 +0.9 0.90 ? 10% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
2.50 ? 23% +1.6 4.11 ? 6% perf-profile.self.cycles-pp.menu_select
0.34 ? 12% +1.7 2.02 ? 9% perf-profile.self.cycles-pp.poll_idle
3.77 ? 21% +1.8 5.52 ? 8% perf-profile.self.cycles-pp.cpuidle_enter_state
30.80 ? 6% +4.0 34.84 ? 2% perf-profile.self.cycles-pp.intel_idle



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


Attachments:
(No filename) (166.84 kB)
config-5.10.0-rc3-00006-g10a59003d29f (172.74 kB)
job-script (7.80 kB)
job.yaml (5.21 kB)
reproduce (300.00 B)

2020-11-23 19:34:22

by Waiman Long

[permalink] [raw]
Subject: Re: [locking/rwsem] 10a59003d2: unixbench.score -25.5% regression

On 11/23/20 10:53 AM, kernel test robot wrote:
>
> Greeting,
>
> FYI, we noticed a -25.5% regression of unixbench.score due to commit:
>
>
> commit: 10a59003d29fbfa855b2ef4f3534fee9bdf4e575 ("[PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning")
> url: https://github.com/0day-ci/linux/commits/Waiman-Long/locking-rwsem-Rework-reader-optimistic-spinning/20201121-122118
> base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 932f8c64d38bb08f69c8c26a2216ba0c36c6daa8
>
> in testcase: unixbench
> on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
> with following parameters:
>
> runtime: 300s
> nr_task: 30%
> test: shell8
> cpufreq_governor: performance
> ucode: 0xde
>
> test-description: UnixBench is the original BYTE UNIX benchmark suite, which aims to test the performance of Unix-like systems.
> test-url: https://github.com/kdlucas/byte-unixbench
>
> In addition to that, the commit also has significant impact on the following tests:
>
> +------------------+---------------------------------------------------------------------------+
> | testcase: change | fio-basic: fio.write_iops -29.9% regression |
> | test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
> | test parameters | bs=4k |
> | | cpufreq_governor=performance |
> | | disk=1SSD |
> | | fs=xfs |
> | | ioengine=sync |
> | | nr_task=32 |
> | | runtime=300s |
> | | rw=randwrite |
> | | test_size=256g |
> | | ucode=0x4003003 |
> +------------------+---------------------------------------------------------------------------+
> | testcase: change | aim7: aim7.jobs-per-min 952.6% improvement |
> | test machine | 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory |
> | test parameters | cpufreq_governor=performance |
> | | disk=4BRD_12G |
> | | fs=f2fs |
> | | load=100 |
> | | md=RAID0 |
> | | test=sync_disk_rw |
> | | ucode=0x4003003 |
> +------------------+---------------------------------------------------------------------------+

A performance drop in some benchmarks is expected. However, others can
show improvement. I will take a look to see if we can reduce the
performance regression.

Thanks,
Longman

2020-11-24 03:43:36

by kernel test robot

[permalink] [raw]
Subject: [locking/rwsem] c9847a7f94: aim7.jobs-per-min -91.8% regression


Greetings,

FYI, we noticed a -91.8% regression of aim7.jobs-per-min due to commit:


commit: c9847a7f94679e742710574a2a7fee1c30c5ecf0 ("[PATCH v2 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")
url: https://github.com/0day-ci/linux/commits/Waiman-Long/locking-rwsem-Rework-reader-optimistic-spinning/20201121-122118
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 932f8c64d38bb08f69c8c26a2216ba0c36c6daa8

in testcase: aim7
on test machine: 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory
with the following parameters:

disk: 4BRD_12G
md: RAID0
fs: f2fs
test: sync_disk_rw
load: 100
cpufreq_governor: performance
ucode: 0x4003003

test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/

In addition to that, the commit also has a significant impact on the following tests:

+------------------+-------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score -1.9% regression |
| test machine | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=30% |
| | runtime=300s |
| | test=shell8 |
| | ucode=0xde |
+------------------+-------------------------------------------------------------------+
| testcase: change | fio-basic: boot-time.dhcp 1.5% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=xfs |
| | ioengine=sync |
| | nr_task=32 |
| | runtime=300s |
| | rw=randwrite |
| | test_size=256g |
| | ucode=0x4003003 |
+------------------+-------------------------------------------------------------------+


If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <[email protected]>
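As a hypothetical sketch of one way to attach that tag, git-interpret-trailers can append it to a draft commit message (the msg.txt filename and the commit subject below are made up for illustration; the address is the redacted placeholder from this report, not a real address):

```shell
# Write a made-up draft commit message for the eventual fix.
printf '%s\n' 'locking/rwsem: Fix reported aim7 regression' '' \
    'Details of the fix go here.' > msg.txt

# Append the robot's tag as a commit trailer; interpret-trailers
# prints the amended message to stdout (use --in-place to rewrite
# msg.txt directly).
git interpret-trailers --trailer \
    'Reported-by: kernel test robot <[email protected]>' msg.txt
```

The same trailer block is where Signed-off-by and other tags conventionally go, so the tag ends up in the final commit log as requested.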


Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/4BRD_12G/f2fs/x86_64-rhel-8.3/100/RAID0/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp2/sync_disk_rw/aim7/0x4003003

commit:
62d5313500 ("locking/rwsem: Enable reader optimistic lock stealing")
c9847a7f94 ("locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")

62d5313500ac58b6 c9847a7f94679e742710574a2a7
---------------- ---------------------------
%stddev %change %stddev
\ | \
8183 -91.8% 670.67 ? 2% aim7.jobs-per-min
73.37 +1120.4% 895.35 ? 2% aim7.time.elapsed_time
73.37 +1120.4% 895.35 ? 2% aim7.time.elapsed_time.max
87129598 -6.7% 81275468 aim7.time.file_system_outputs
1448254 +90.4% 2757646 aim7.time.involuntary_context_switches
9577 +175.6% 26393 ? 6% aim7.time.minor_page_faults
595.56 +746.0% 5038 ? 2% aim7.time.system_time
24062906 +16.1% 27942139 aim7.time.voluntary_context_switches
91.22 +2.4% 93.44 iostat.cpu.idle
8.60 -24.0% 6.54 iostat.cpu.system
110.50 +744.5% 933.15 ? 2% uptime.boot
9659 +765.7% 83622 ? 2% uptime.idle
0.80 ? 2% -0.1 0.67 ? 2% mpstat.cpu.all.irq%
0.07 ? 5% -0.0 0.05 ? 2% mpstat.cpu.all.soft%
7.93 -2.1 5.83 mpstat.cpu.all.sys%
0.17 ? 4% -0.2 0.02 ? 2% mpstat.cpu.all.usr%
1.286e+08 ? 55% +2555.5% 3.415e+09 ?106% cpuidle.C1.time
3750293 ? 73% +947.8% 39295011 ? 90% cpuidle.C1.usage
72870517 ? 49% +27824.2% 2.035e+10 ?128% cpuidle.C6.time
115860 ? 26% +18644.3% 21717212 ?122% cpuidle.C6.usage
132759 ? 12% +1186.3% 1707686 ?144% cpuidle.POLL.usage
91.00 +2.2% 93.00 vmstat.cpu.id
564823 -92.0% 45042 ? 2% vmstat.io.bo
7.25 ? 11% -31.0% 5.00 vmstat.procs.r
747964 -89.6% 77847 ? 2% vmstat.system.cs
200863 -4.3% 192165 vmstat.system.in
19990 ? 2% +23.8% 24746 ? 3% meminfo.Active
17246 ? 3% +25.3% 21611 ? 3% meminfo.Active(anon)
59295 ? 7% +208.0% 182605 meminfo.AnonHugePages
59070 ? 4% -29.0% 41922 ? 2% meminfo.Dirty
95815 +14.6% 109848 meminfo.Inactive(file)
27037 ? 2% +17.4% 31737 ? 2% meminfo.Shmem
527416 -92.2% 41368 ? 2% meminfo.max_used_kB
26522 ? 64% +381.1% 127588 ? 30% numa-meminfo.node0.AnonHugePages
30316 ? 4% -30.3% 21143 ? 2% numa-meminfo.node0.Dirty
48795 +12.9% 55113 ? 2% numa-meminfo.node0.Inactive(file)
10871 -20.9% 8604 ? 11% numa-meminfo.node0.KernelStack
6158 ? 4% -51.1% 3010 ? 49% numa-meminfo.node0.PageTables
1865 ? 29% +428.6% 9858 ? 22% numa-meminfo.node0.Shmem
29502 ? 3% -29.2% 20881 numa-meminfo.node1.Dirty
47049 +16.3% 54722 numa-meminfo.node1.Inactive(file)
7877 ? 4% -33.1% 5268 ? 2% numa-vmstat.node0.nr_dirty
12224 +12.7% 13780 ? 2% numa-vmstat.node0.nr_inactive_file
10876 ? 2% -20.9% 8605 ? 11% numa-vmstat.node0.nr_kernel_stack
1542 ? 4% -51.2% 752.25 ? 49% numa-vmstat.node0.nr_page_table_pages
466.00 ? 29% +428.9% 2464 ? 22% numa-vmstat.node0.nr_shmem
120.25 ? 10% -88.4% 14.00 ? 23% numa-vmstat.node0.nr_writeback
12225 +12.7% 13780 ? 2% numa-vmstat.node0.nr_zone_inactive_file
6324 -14.7% 5396 ? 2% numa-vmstat.node0.nr_zone_write_pending
3076324 ? 2% +24.3% 3823329 ? 3% numa-vmstat.node0.numa_hit
3057907 ? 2% +21.3% 3710333 ? 4% numa-vmstat.node0.numa_local
2323931 +9.4% 2541694 ? 3% numa-vmstat.node1.nr_dirtied
7621 ? 2% -31.7% 5208 numa-vmstat.node1.nr_dirty
11735 +16.6% 13686 numa-vmstat.node1.nr_inactive_file
4524 ? 3% -19.9% 3623 ? 17% numa-vmstat.node1.nr_mapped
119.50 ? 9% -89.1% 13.00 ? 5% numa-vmstat.node1.nr_writeback
2310921 +9.4% 2527480 ? 3% numa-vmstat.node1.nr_written
11735 +16.6% 13686 numa-vmstat.node1.nr_zone_inactive_file
6241 ? 2% -14.4% 5341 numa-vmstat.node1.nr_zone_write_pending
3089061 ? 2% +14.8% 3545531 ? 5% numa-vmstat.node1.numa_hit
2927335 ? 3% +18.8% 3477808 ? 5% numa-vmstat.node1.numa_local
161726 ? 10% -58.1% 67723 ? 94% numa-vmstat.node1.numa_other
3783 +25.9% 4764 ? 2% slabinfo.dmaengine-unmap-16.active_objs
3783 +25.9% 4764 ? 2% slabinfo.dmaengine-unmap-16.num_objs
1036 ? 5% +12.6% 1167 ? 4% slabinfo.ext4_extent_status.active_objs
1036 ? 5% +12.6% 1167 ? 4% slabinfo.ext4_extent_status.num_objs
4130 +16.8% 4823 slabinfo.ext4_fc_dentry_update.active_objs
4130 +16.8% 4823 slabinfo.ext4_fc_dentry_update.num_objs
5187 +16.9% 6062 slabinfo.ext4_io_end.active_objs
5187 +16.9% 6062 slabinfo.ext4_io_end.num_objs
10436 +19.0% 12417 slabinfo.ext4_pending_reservation.active_objs
10436 +19.0% 12417 slabinfo.ext4_pending_reservation.num_objs
19809 +13.9% 22567 slabinfo.f2fs_free_nid.active_objs
19809 +13.9% 22567 slabinfo.f2fs_free_nid.num_objs
2297 +16.4% 2674 slabinfo.f2fs_inode_cache.active_objs
2297 +16.4% 2674 slabinfo.f2fs_inode_cache.num_objs
3158 +16.8% 3688 slabinfo.f2fs_xattr_entry-9:0.active_objs
3158 +16.8% 3688 slabinfo.f2fs_xattr_entry-9:0.num_objs
31314 ? 4% +22.9% 38490 ? 2% slabinfo.filp.active_objs
983.75 ? 4% +22.6% 1206 ? 2% slabinfo.filp.active_slabs
31497 ? 4% +22.6% 38610 ? 2% slabinfo.filp.num_objs
983.75 ? 4% +22.6% 1206 ? 2% slabinfo.filp.num_slabs
1457 ? 5% +9.2% 1590 ? 3% slabinfo.khugepaged_mm_slot.active_objs
1457 ? 5% +9.2% 1590 ? 3% slabinfo.khugepaged_mm_slot.num_objs
696.00 ? 3% +109.2% 1456 ? 33% slabinfo.kmalloc-rcl-128.active_objs
1115 ? 5% +18.5% 1322 ? 8% slabinfo.task_group.active_objs
1115 ? 5% +18.5% 1322 ? 8% slabinfo.task_group.num_objs
9806 +76.0% 17259 slabinfo.vmap_area.active_objs
152.75 +93.6% 295.75 slabinfo.vmap_area.active_slabs
9815 +93.2% 18963 slabinfo.vmap_area.num_objs
152.75 +93.6% 295.75 slabinfo.vmap_area.num_slabs
4303 ? 2% +25.5% 5402 ? 3% proc-vmstat.nr_active_anon
686.00 +14.2% 783.25 ? 18% proc-vmstat.nr_active_file
58813 -2.0% 57631 proc-vmstat.nr_anon_pages
10873317 -6.6% 10159153 proc-vmstat.nr_dirtied
15057 ? 3% -30.4% 10481 ? 2% proc-vmstat.nr_dirty
287777 +1.7% 292579 proc-vmstat.nr_file_pages
61236 -1.9% 60070 proc-vmstat.nr_inactive_anon
23951 +14.7% 27464 proc-vmstat.nr_inactive_file
18130 -8.1% 16669 proc-vmstat.nr_kernel_stack
7648 -3.7% 7364 proc-vmstat.nr_mapped
2014 ? 2% -10.7% 1799 ? 2% proc-vmstat.nr_page_table_pages
6751 +17.5% 7933 ? 2% proc-vmstat.nr_shmem
32387 +4.9% 33958 proc-vmstat.nr_slab_reclaimable
48453 +1.6% 49217 proc-vmstat.nr_slab_unreclaimable
167.75 ? 7% -89.4% 17.75 ? 16% proc-vmstat.nr_writeback
10819249 -6.5% 10112523 proc-vmstat.nr_written
4303 ? 2% +25.5% 5402 ? 3% proc-vmstat.nr_zone_active_anon
686.00 +14.2% 783.25 ? 18% proc-vmstat.nr_zone_active_file
61236 -1.9% 60070 proc-vmstat.nr_zone_inactive_anon
23951 +14.7% 27464 proc-vmstat.nr_zone_inactive_file
12619 ? 2% -14.8% 10750 ? 2% proc-vmstat.nr_zone_write_pending
5462 ? 48% +320.9% 22990 ? 22% proc-vmstat.numa_hint_faults
2501 ? 75% +456.7% 13926 ? 16% proc-vmstat.numa_hint_faults_local
11076268 +8.6% 12026224 proc-vmstat.numa_hit
11045187 +8.6% 11995001 proc-vmstat.numa_local
11143370 +8.9% 12140200 proc-vmstat.pgalloc_normal
255571 ? 2% +903.4% 2564404 ? 2% proc-vmstat.pgfault
1738243 +98.5% 3450189 proc-vmstat.pgfree
43286180 -6.5% 40452677 proc-vmstat.pgpgout
16566 +940.1% 172308 ? 2% proc-vmstat.pgreuse
2604 +849.8% 24740 ? 4% sched_debug.cfs_rq:/.exec_clock.avg
4411 ? 2% +569.2% 29523 ? 2% sched_debug.cfs_rq:/.exec_clock.max
2452 +629.8% 17895 ? 5% sched_debug.cfs_rq:/.exec_clock.min
276.44 ? 6% +522.5% 1720 ? 7% sched_debug.cfs_rq:/.exec_clock.stddev
35.17 ? 25% -62.6% 13.14 ? 6% sched_debug.cfs_rq:/.load_avg.avg
829.50 ? 24% -70.5% 244.99 ? 10% sched_debug.cfs_rq:/.load_avg.max
128.41 ? 25% -66.5% 43.06 ? 7% sched_debug.cfs_rq:/.load_avg.stddev
38331 ? 2% +313.4% 158467 ? 3% sched_debug.cfs_rq:/.min_vruntime.avg
54684 ? 5% +237.1% 184348 ? 2% sched_debug.cfs_rq:/.min_vruntime.max
31946 ? 2% +275.9% 120088 ? 4% sched_debug.cfs_rq:/.min_vruntime.min
3408 ? 9% +201.0% 10260 ? 3% sched_debug.cfs_rq:/.min_vruntime.stddev
0.15 ? 11% -43.1% 0.09 ? 5% sched_debug.cfs_rq:/.nr_running.avg
0.36 ? 4% -23.4% 0.28 ? 2% sched_debug.cfs_rq:/.nr_running.stddev
0.03 ? 48% +2800.1% 0.83 ? 5% sched_debug.cfs_rq:/.nr_spread_over.avg
1.25 ? 34% +179.2% 3.49 ? 35% sched_debug.cfs_rq:/.nr_spread_over.max
0.17 ? 26% +229.1% 0.57 ? 20% sched_debug.cfs_rq:/.nr_spread_over.stddev
238.12 ? 2% -66.5% 79.88 ? 2% sched_debug.cfs_rq:/.runnable_avg.avg
944.38 ? 7% -25.8% 700.31 ? 3% sched_debug.cfs_rq:/.runnable_avg.max
215.30 ? 8% -26.5% 158.18 ? 6% sched_debug.cfs_rq:/.runnable_avg.stddev
508.65 ?723% +6944.4% 35831 ? 11% sched_debug.cfs_rq:/.spread0.avg
16857 ? 18% +266.1% 61714 ? 8% sched_debug.cfs_rq:/.spread0.max
3407 ? 9% +201.1% 10261 ? 3% sched_debug.cfs_rq:/.spread0.stddev
236.63 ? 2% -66.4% 79.55 ? 2% sched_debug.cfs_rq:/.util_avg.avg
944.38 ? 7% -25.9% 700.12 ? 3% sched_debug.cfs_rq:/.util_avg.max
214.62 ? 7% -26.4% 158.07 ? 6% sched_debug.cfs_rq:/.util_avg.stddev
24.12 ? 16% -71.7% 6.83 ? 11% sched_debug.cfs_rq:/.util_est_enqueued.avg
79.73 ? 13% -51.1% 39.02 ? 14% sched_debug.cfs_rq:/.util_est_enqueued.stddev
480279 -15.3% 406677 ? 2% sched_debug.cpu.avg_idle.avg
195746 ? 4% +67.0% 326826 ? 3% sched_debug.cpu.avg_idle.stddev
65779 +593.8% 456360 ? 4% sched_debug.cpu.clock.avg
65785 +593.7% 456364 ? 4% sched_debug.cpu.clock.max
65773 +593.8% 456355 ? 4% sched_debug.cpu.clock.min
3.51 ? 12% -23.9% 2.67 ? 2% sched_debug.cpu.clock.stddev
65382 +593.1% 453170 ? 4% sched_debug.cpu.clock_task.avg
65543 +591.9% 453506 ? 4% sched_debug.cpu.clock_task.max
60300 +642.6% 447801 ? 4% sched_debug.cpu.clock_task.min
598.98 ? 3% +12.6% 674.28 ? 4% sched_debug.cpu.clock_task.stddev
3474 +277.9% 13129 ? 4% sched_debug.cpu.curr->pid.max
859.33 ? 6% +88.6% 1620 ? 2% sched_debug.cpu.curr->pid.stddev
714713 ? 11% -26.3% 526858 sched_debug.cpu.max_idle_balance_cost.max
24781 ? 39% -85.9% 3496 ? 27% sched_debug.cpu.max_idle_balance_cost.stddev
0.12 ? 7% -34.1% 0.08 ? 2% sched_debug.cpu.nr_running.avg
0.34 ? 5% -18.9% 0.27 sched_debug.cpu.nr_running.stddev
232828 +44.6% 336611 ? 3% sched_debug.cpu.nr_switches.avg
271544 ? 5% +43.2% 388935 ? 9% sched_debug.cpu.nr_switches.max
214594 +40.2% 300793 ? 2% sched_debug.cpu.nr_switches.min
0.47 ? 2% +94.4% 0.92 sched_debug.cpu.nr_uninterruptible.avg
36.50 ? 27% +1017.2% 407.77 ? 3% sched_debug.cpu.nr_uninterruptible.max
-18.50 +321.3% -77.94 sched_debug.cpu.nr_uninterruptible.min
8.88 ? 11% +590.1% 61.27 ? 3% sched_debug.cpu.nr_uninterruptible.stddev
231916 +44.5% 335202 ? 3% sched_debug.cpu.sched_count.avg
265672 ? 4% +45.7% 386970 ? 10% sched_debug.cpu.sched_count.max
214641 +36.0% 291906 ? 3% sched_debug.cpu.sched_count.min
101196 +42.6% 144272 ? 3% sched_debug.cpu.sched_goidle.avg
106057 +46.5% 155382 ? 3% sched_debug.cpu.sched_goidle.max
97637 +28.7% 125626 ? 3% sched_debug.cpu.sched_goidle.min
1506 ? 12% +188.0% 4339 ? 6% sched_debug.cpu.sched_goidle.stddev
117938 +43.0% 168603 ? 3% sched_debug.cpu.ttwu_count.avg
153830 ? 7% +47.5% 226894 ? 21% sched_debug.cpu.ttwu_count.max
96129 ? 3% +51.9% 146054 ? 2% sched_debug.cpu.ttwu_count.min
14465 +74.2% 25205 ? 3% sched_debug.cpu.ttwu_local.avg
27168 ? 16% +60.3% 43545 ? 32% sched_debug.cpu.ttwu_local.max
7181 ? 6% +184.5% 20433 ? 3% sched_debug.cpu.ttwu_local.min
65775 +593.8% 456355 ? 4% sched_debug.cpu_clk
65278 +598.3% 455859 ? 4% sched_debug.ktime
66120 +590.7% 456712 ? 4% sched_debug.sched_clk
11.50 -71.9% 3.23 ? 3% perf-stat.i.MPKI
2.791e+09 -40.1% 1.671e+09 perf-stat.i.branch-instructions
1.46 -1.1 0.40 ? 5% perf-stat.i.branch-miss-rate%
32652551 ? 2% -80.3% 6434530 ? 5% perf-stat.i.branch-misses
26.05 -5.9 20.19 ? 2% perf-stat.i.cache-miss-rate%
44628231 -89.3% 4769722 ? 2% perf-stat.i.cache-misses
1.567e+08 -85.0% 23447881 ? 3% perf-stat.i.cache-references
772578 -89.9% 77784 ? 2% perf-stat.i.context-switches
2.36 +24.8% 2.94 perf-stat.i.cpi
3.12e+10 -31.6% 2.135e+10 perf-stat.i.cpu-cycles
4527 ? 2% -79.6% 924.77 perf-stat.i.cpu-migrations
1510 ? 4% +221.3% 4853 perf-stat.i.cycles-between-cache-misses
0.10 ? 3% -0.1 0.01 ? 17% perf-stat.i.dTLB-load-miss-rate%
3488578 ? 3% -94.5% 191633 ? 15% perf-stat.i.dTLB-load-misses
3.232e+09 -45.2% 1.77e+09 perf-stat.i.dTLB-loads
0.01 ? 10% -0.0 0.00 ? 21% perf-stat.i.dTLB-store-miss-rate%
130165 ? 11% -87.1% 16745 ? 21% perf-stat.i.dTLB-store-misses
1.357e+09 -75.3% 3.355e+08 perf-stat.i.dTLB-stores
38.11 +7.2 45.30 perf-stat.i.iTLB-load-miss-rate%
6970161 -68.0% 2229042 perf-stat.i.iTLB-load-misses
12096922 -77.7% 2703576 perf-stat.i.iTLB-loads
1.278e+10 -43.3% 7.25e+09 perf-stat.i.instructions
1952 +67.8% 3276 perf-stat.i.instructions-per-iTLB-miss
0.45 -24.3% 0.34 perf-stat.i.ipc
1.47 ? 3% -91.0% 0.13 ? 2% perf-stat.i.major-faults
0.32 -31.6% 0.22 perf-stat.i.metric.GHz
0.29 ? 9% +386.4% 1.42 ? 2% perf-stat.i.metric.K/sec
78.95 -49.8% 39.63 perf-stat.i.metric.M/sec
3180 -12.1% 2795 perf-stat.i.minor-faults
19311689 -91.0% 1733499 ? 2% perf-stat.i.node-load-misses
1888231 ? 2% -90.0% 189573 ? 3% perf-stat.i.node-loads
88.88 +1.1 90.01 perf-stat.i.node-store-miss-rate%
6646273 -91.1% 593107 ? 2% perf-stat.i.node-store-misses
711946 -90.8% 65302 ? 3% perf-stat.i.node-stores
3181 -12.1% 2795 perf-stat.i.page-faults
12.26 -73.6% 3.23 ? 3% perf-stat.overall.MPKI
1.17 -0.8 0.39 ? 5% perf-stat.overall.branch-miss-rate%
28.48 -8.1 20.36 ? 3% perf-stat.overall.cache-miss-rate%
2.44 +20.6% 2.95 perf-stat.overall.cpi
699.29 +540.4% 4478 perf-stat.overall.cycles-between-cache-misses
0.11 ? 3% -0.1 0.01 ? 15% perf-stat.overall.dTLB-load-miss-rate%
0.01 ? 11% -0.0 0.00 ? 21% perf-stat.overall.dTLB-store-miss-rate%
36.56 +8.6 45.19 perf-stat.overall.iTLB-load-miss-rate%
1833 +77.4% 3252 perf-stat.overall.instructions-per-iTLB-miss
0.41 -17.1% 0.34 perf-stat.overall.ipc
91.09 -0.9 90.14 perf-stat.overall.node-load-miss-rate%
2.752e+09 -39.4% 1.669e+09 perf-stat.ps.branch-instructions
32218020 ? 2% -80.0% 6429298 ? 5% perf-stat.ps.branch-misses
43997213 -89.2% 4764687 ? 2% perf-stat.ps.cache-misses
1.545e+08 -84.8% 23423214 ? 3% perf-stat.ps.cache-references
761608 -89.8% 77694 ? 2% perf-stat.ps.context-switches
94708 +1.2% 95891 perf-stat.ps.cpu-clock
3.076e+10 -30.7% 2.133e+10 perf-stat.ps.cpu-cycles
4463 ? 2% -79.3% 923.71 perf-stat.ps.cpu-migrations
3438964 ? 3% -94.4% 191430 ? 15% perf-stat.ps.dTLB-load-misses
3.187e+09 -44.5% 1.768e+09 perf-stat.ps.dTLB-loads
128359 ? 11% -87.0% 16732 ? 21% perf-stat.ps.dTLB-store-misses
1.338e+09 -75.0% 3.351e+08 perf-stat.ps.dTLB-stores
6872737 -67.6% 2226542 perf-stat.ps.iTLB-load-misses
11926601 -77.4% 2700513 perf-stat.ps.iTLB-loads
1.26e+10 -42.5% 7.242e+09 perf-stat.ps.instructions
1.45 ? 3% -90.9% 0.13 ? 2% perf-stat.ps.major-faults
3138 -11.0% 2792 perf-stat.ps.minor-faults
19037965 -90.9% 1731590 ? 2% perf-stat.ps.node-load-misses
1861551 ? 2% -89.8% 189374 ? 3% perf-stat.ps.node-loads
6552105 -91.0% 592467 ? 2% perf-stat.ps.node-store-misses
701901 -90.7% 65238 ? 3% perf-stat.ps.node-stores
3140 -11.1% 2792 perf-stat.ps.page-faults
94708 +1.2% 95891 perf-stat.ps.task-clock
9.429e+11 +588.7% 6.493e+12 ? 2% perf-stat.total.instructions
18999 ? 13% +57.9% 30005 ? 8% softirqs.CPU0.RCU
13831 ? 2% +726.1% 114266 ? 3% softirqs.CPU0.SCHED
16966 ? 12% +67.2% 28360 ? 6% softirqs.CPU1.RCU
11764 ? 3% +810.4% 107101 ? 3% softirqs.CPU1.SCHED
16148 ? 10% +74.0% 28092 ? 6% softirqs.CPU10.RCU
10989 +868.2% 106397 ? 2% softirqs.CPU10.SCHED
16629 ? 12% +68.1% 27956 ? 5% softirqs.CPU11.RCU
10731 +885.4% 105746 ? 3% softirqs.CPU11.SCHED
16389 ? 10% +70.5% 27936 ? 6% softirqs.CPU12.RCU
10518 ? 6% +901.4% 105330 ? 2% softirqs.CPU12.SCHED
16638 ? 10% +66.4% 27680 ? 5% softirqs.CPU13.RCU
11381 ? 4% +829.2% 105752 ? 3% softirqs.CPU13.SCHED
16492 ? 9% +66.8% 27503 ? 5% softirqs.CPU14.RCU
10741 ? 3% +880.2% 105290 softirqs.CPU14.SCHED
16124 ? 11% +72.3% 27779 ? 5% softirqs.CPU15.RCU
11062 ? 3% +857.4% 105910 ? 3% softirqs.CPU15.SCHED
17351 ? 11% +69.1% 29335 ? 4% softirqs.CPU16.RCU
10926 ? 3% +867.1% 105672 ? 2% softirqs.CPU16.SCHED
17274 ? 12% +69.8% 29339 ? 6% softirqs.CPU17.RCU
10769 ? 6% +879.9% 105528 ? 2% softirqs.CPU17.SCHED
17227 ? 11% +66.8% 28733 ? 5% softirqs.CPU18.RCU
10603 ? 4% +885.1% 104448 ? 3% softirqs.CPU18.SCHED
17647 ? 13% +57.9% 27873 ? 9% softirqs.CPU19.RCU
10793 ? 5% +898.8% 107796 ? 2% softirqs.CPU19.SCHED
16485 ? 11% +66.2% 27399 ? 6% softirqs.CPU2.RCU
11446 +827.2% 106124 ? 3% softirqs.CPU2.SCHED
17251 ? 12% +68.7% 29107 ? 4% softirqs.CPU20.RCU
10851 ? 3% +879.2% 106250 ? 2% softirqs.CPU20.SCHED
17240 ? 11% +69.1% 29156 ? 5% softirqs.CPU21.RCU
10901 ? 4% +866.8% 105391 ? 3% softirqs.CPU21.SCHED
17395 ? 10% +68.7% 29341 ? 5% softirqs.CPU22.RCU
10518 ? 4% +911.4% 106385 ? 2% softirqs.CPU22.SCHED
17329 ? 11% +63.4% 28319 ? 4% softirqs.CPU23.RCU
11048 ? 3% +863.7% 106466 ? 2% softirqs.CPU23.SCHED
16739 ? 10% +78.2% 29836 ? 2% softirqs.CPU24.RCU
11455 ? 2% +833.8% 106975 ? 2% softirqs.CPU24.SCHED
16813 ? 10% +70.1% 28597 ? 5% softirqs.CPU25.RCU
10649 ? 5% +879.6% 104327 ? 4% softirqs.CPU25.SCHED
16623 ? 10% +72.8% 28733 ? 4% softirqs.CPU26.RCU
11315 ? 3% +839.2% 106277 ? 2% softirqs.CPU26.SCHED
16949 ? 12% +69.3% 28702 ? 5% softirqs.CPU27.RCU
11293 ? 5% +839.8% 106131 ? 2% softirqs.CPU27.SCHED
16695 ? 10% +77.6% 29654 ? 7% softirqs.CPU28.RCU
11089 ? 4% +859.8% 106432 ? 3% softirqs.CPU28.SCHED
16909 ? 10% +71.5% 29007 ? 4% softirqs.CPU29.RCU
10658 ? 8% +900.1% 106591 ? 3% softirqs.CPU29.SCHED
16636 ? 11% +68.3% 27993 ? 6% softirqs.CPU3.RCU
10732 ? 2% +895.8% 106874 ? 3% softirqs.CPU3.SCHED
16336 ? 15% +74.8% 28554 ? 5% softirqs.CPU30.RCU
11252 ? 2% +844.4% 106258 ? 2% softirqs.CPU30.SCHED
16605 ? 11% +71.3% 28450 ? 6% softirqs.CPU31.RCU
10247 ? 7% +929.3% 105474 ? 2% softirqs.CPU31.SCHED
16370 ? 12% +74.2% 28514 ? 6% softirqs.CPU32.RCU
11236 +824.5% 103880 ? 3% softirqs.CPU32.SCHED
16600 ? 12% +70.8% 28345 ? 4% softirqs.CPU33.RCU
10916 ? 4% +867.7% 105632 ? 2% softirqs.CPU33.SCHED
16306 ? 11% +74.9% 28517 ? 4% softirqs.CPU34.RCU
10756 +873.2% 104675 ? 2% softirqs.CPU34.SCHED
16582 ? 11% +73.7% 28807 ? 5% softirqs.CPU35.RCU
10909 ? 3% +856.8% 104377 softirqs.CPU35.SCHED
16432 ? 11% +75.1% 28769 ? 4% softirqs.CPU36.RCU
11042 +853.3% 105264 ? 3% softirqs.CPU36.SCHED
16174 ? 10% +74.0% 28150 ? 5% softirqs.CPU37.RCU
10713 ? 6% +881.7% 105179 ? 3% softirqs.CPU37.SCHED
16838 ? 14% +65.8% 27916 ? 4% softirqs.CPU38.RCU
10650 ? 4% +877.6% 104119 ? 4% softirqs.CPU38.SCHED
16743 ? 13% +68.4% 28200 ? 4% softirqs.CPU39.RCU
10472 ? 4% +907.0% 105456 ? 3% softirqs.CPU39.SCHED
16425 ? 11% +70.3% 27981 ? 5% softirqs.CPU4.RCU
11167 ? 4% +855.0% 106644 ? 2% softirqs.CPU4.SCHED
16614 ? 11% +75.8% 29205 ? 9% softirqs.CPU40.RCU
10426 ? 11% +917.5% 106090 ? 2% softirqs.CPU40.SCHED
16536 ? 11% +72.2% 28472 ? 4% softirqs.CPU41.RCU
10699 ? 6% +879.9% 104842 ? 2% softirqs.CPU41.SCHED
16677 ? 11% +71.0% 28516 ? 4% softirqs.CPU42.RCU
11008 ? 3% +862.6% 105966 ? 2% softirqs.CPU42.SCHED
16425 ? 11% +71.9% 28226 ? 4% softirqs.CPU43.RCU
10999 ? 7% +855.7% 105115 ? 3% softirqs.CPU43.SCHED
16376 ? 10% +72.2% 28198 ? 5% softirqs.CPU44.RCU
10963 ? 4% +868.0% 106127 ? 2% softirqs.CPU44.SCHED
16395 ? 12% +72.3% 28244 ? 4% softirqs.CPU45.RCU
11001 ? 4% +867.9% 106479 ? 2% softirqs.CPU45.SCHED
16714 ? 12% +70.5% 28497 ? 5% softirqs.CPU46.RCU
10764 ? 6% +887.7% 106321 ? 2% softirqs.CPU46.SCHED
16736 ? 10% +70.5% 28543 ? 5% softirqs.CPU47.RCU
10325 ? 6% +922.8% 105602 ? 3% softirqs.CPU47.SCHED
15954 ? 12% +71.1% 27303 ? 6% softirqs.CPU48.RCU
11170 ? 5% +840.8% 105093 ? 6% softirqs.CPU48.SCHED
16687 ? 12% +68.6% 28141 ? 5% softirqs.CPU49.RCU
10685 ? 7% +886.4% 105398 ? 3% softirqs.CPU49.SCHED
16470 ? 11% +71.0% 28160 ? 6% softirqs.CPU5.RCU
11290 ? 2% +841.9% 106341 ? 3% softirqs.CPU5.SCHED
16729 ? 11% +67.2% 27978 ? 4% softirqs.CPU50.RCU
10816 ? 3% +884.2% 106453 softirqs.CPU50.SCHED
16795 ? 11% +67.7% 28164 ? 5% softirqs.CPU51.RCU
10392 ? 8% +926.2% 106647 ? 3% softirqs.CPU51.SCHED
16750 ? 11% +69.1% 28328 ? 4% softirqs.CPU52.RCU
11200 +841.9% 105493 ? 2% softirqs.CPU52.SCHED
16558 ? 11% +70.2% 28177 ? 5% softirqs.CPU53.RCU
10992 +861.6% 105701 ? 2% softirqs.CPU53.SCHED
16690 ? 12% +69.2% 28239 ? 5% softirqs.CPU54.RCU
10530 ? 3% +904.1% 105736 ? 3% softirqs.CPU54.SCHED
16401 ? 11% +69.2% 27745 ? 5% softirqs.CPU55.RCU
10735 ? 5% +886.7% 105928 ? 5% softirqs.CPU55.SCHED
16864 ? 12% +65.7% 27946 ? 5% softirqs.CPU56.RCU
10743 ? 5% +881.5% 105447 ? 3% softirqs.CPU56.SCHED
16641 ? 12% +66.8% 27764 ? 4% softirqs.CPU57.RCU
11072 ? 4% +846.0% 104747 softirqs.CPU57.SCHED
16750 ? 13% +67.2% 28004 ? 5% softirqs.CPU58.RCU
11138 +859.2% 106841 ? 3% softirqs.CPU58.SCHED
16607 ? 11% +71.2% 28427 ? 5% softirqs.CPU59.RCU
10470 ? 7% +903.2% 105034 ? 4% softirqs.CPU59.SCHED
16705 ? 11% +67.9% 28053 ? 4% softirqs.CPU6.RCU
10945 ? 3% +873.4% 106545 ? 3% softirqs.CPU6.SCHED
16628 ? 12% +68.8% 28061 ? 5% softirqs.CPU60.RCU
10796 ? 6% +865.5% 104239 ? 4% softirqs.CPU60.SCHED
16317 ? 12% +69.0% 27571 ? 4% softirqs.CPU61.RCU
10590 ? 4% +870.2% 102747 ? 4% softirqs.CPU61.SCHED
16490 ? 11% +67.2% 27573 ? 5% softirqs.CPU62.RCU
10796 ? 3% +863.7% 104046 ? 2% softirqs.CPU62.SCHED
16779 ? 12% +67.4% 28094 ? 4% softirqs.CPU63.RCU
10155 ? 6% +934.3% 105042 ? 2% softirqs.CPU63.SCHED
17689 ? 11% +69.1% 29911 ? 5% softirqs.CPU64.RCU
10839 ? 2% +869.2% 105057 ? 2% softirqs.CPU64.SCHED
17856 ? 13% +66.3% 29699 ? 3% softirqs.CPU65.RCU
11017 ? 3% +871.5% 107027 ? 3% softirqs.CPU65.SCHED
17448 ? 12% +68.0% 29312 ? 6% softirqs.CPU66.RCU
11019 ? 4% +835.9% 103135 ? 4% softirqs.CPU66.SCHED
17590 ? 11% +64.9% 29003 ? 4% softirqs.CPU67.RCU
10926 ? 3% +852.9% 104113 softirqs.CPU67.SCHED
17726 ? 11% +67.4% 29668 ? 4% softirqs.CPU68.RCU
10924 ? 6% +879.3% 106977 ? 2% softirqs.CPU68.SCHED
17706 ? 11% +68.0% 29749 ? 2% softirqs.CPU69.RCU
10670 ? 3% +883.7% 104970 ? 4% softirqs.CPU69.SCHED
16399 ? 10% +65.1% 27075 ? 5% softirqs.CPU7.RCU
10865 ? 3% +862.7% 104594 ? 2% softirqs.CPU7.SCHED
19786 ? 18% +49.3% 29547 ? 4% softirqs.CPU70.RCU
11268 ? 2% +845.4% 106529 softirqs.CPU70.SCHED
17910 ? 14% +61.3% 28887 ? 4% softirqs.CPU71.RCU
10911 ? 13% +887.3% 107726 ? 5% softirqs.CPU71.SCHED
16810 ? 11% +72.8% 29052 ? 5% softirqs.CPU72.RCU
10526 ? 6% +891.9% 104417 ? 4% softirqs.CPU72.SCHED
17107 ? 9% +70.1% 29094 ? 5% softirqs.CPU73.RCU
10228 ? 11% +931.6% 105515 ? 4% softirqs.CPU73.SCHED
16867 ? 11% +72.2% 29054 ? 5% softirqs.CPU74.RCU
10500 ? 4% +899.3% 104938 ? 3% softirqs.CPU74.SCHED
17004 ? 12% +71.2% 29118 ? 5% softirqs.CPU75.RCU
10890 ? 3% +884.5% 107217 ? 3% softirqs.CPU75.SCHED
17071 ? 11% +70.6% 29124 ? 5% softirqs.CPU76.RCU
10506 ? 10% +914.6% 106603 ? 2% softirqs.CPU76.SCHED
16750 ? 11% +73.8% 29105 ? 4% softirqs.CPU77.RCU
11767 ? 8% +809.8% 107060 ? 2% softirqs.CPU77.SCHED
17084 ? 9% +71.6% 29322 ? 4% softirqs.CPU78.RCU
11190 ? 2% +847.6% 106038 ? 2% softirqs.CPU78.SCHED
16991 ? 10% +70.5% 28977 ? 5% softirqs.CPU79.RCU
11043 ? 2% +842.0% 104023 ? 4% softirqs.CPU79.SCHED
16433 ? 12% +68.0% 27616 ? 6% softirqs.CPU8.RCU
11061 ? 2% +850.1% 105092 ? 3% softirqs.CPU8.SCHED
16768 ? 8% +68.7% 28281 ? 7% softirqs.CPU80.RCU
11218 ? 13% +839.4% 105389 ? 4% softirqs.CPU80.SCHED
16210 ? 12% +71.8% 27856 ? 4% softirqs.CPU81.RCU
11673 ? 8% +795.0% 104478 ? 2% softirqs.CPU81.SCHED
16165 ? 12% +72.2% 27839 ? 4% softirqs.CPU82.RCU
11321 ? 2% +815.5% 103642 ? 3% softirqs.CPU82.SCHED
16735 ? 14% +69.5% 28371 ? 4% softirqs.CPU83.RCU
11498 ? 8% +811.1% 104758 softirqs.CPU83.SCHED
295.25 ?149% +7682.5% 22977 ?107% softirqs.CPU84.NET_RX
16622 ? 9% +74.2% 28953 ? 5% softirqs.CPU84.RCU
10929 +878.5% 106938 ? 2% softirqs.CPU84.SCHED
465.00 ?158% +6432.6% 30376 ? 93% softirqs.CPU85.NET_RX
16210 ? 11% +75.7% 28481 ? 3% softirqs.CPU85.RCU
10620 ? 2% +887.8% 104915 ? 4% softirqs.CPU85.SCHED
16164 ? 10% +73.2% 27992 ? 4% softirqs.CPU86.RCU
10768 ? 4% +857.1% 103066 ? 2% softirqs.CPU86.SCHED
16600 ? 14% +72.0% 28558 ? 6% softirqs.CPU87.RCU
11171 ? 6% +856.9% 106895 ? 3% softirqs.CPU87.SCHED
16697 ? 9% +68.2% 28089 ? 5% softirqs.CPU88.RCU
10092 ? 6% +931.6% 104110 ? 2% softirqs.CPU88.SCHED
16127 ? 12% +74.2% 28100 ? 5% softirqs.CPU89.RCU
11243 +836.0% 105234 softirqs.CPU89.SCHED
16381 ? 11% +70.3% 27889 ? 5% softirqs.CPU9.RCU
11070 ? 4% +849.4% 105096 ? 2% softirqs.CPU9.SCHED
16516 ? 11% +72.4% 28475 ? 5% softirqs.CPU90.RCU
10508 ? 8% +913.0% 106449 ? 2% softirqs.CPU90.SCHED
16181 ? 12% +75.8% 28447 ? 4% softirqs.CPU91.RCU
10861 ? 4% +864.6% 104766 ? 2% softirqs.CPU91.SCHED
16187 ? 10% +73.0% 27999 ? 4% softirqs.CPU92.RCU
11031 ? 2% +864.9% 106446 ? 2% softirqs.CPU92.SCHED
16126 ? 11% +75.5% 28303 ? 4% softirqs.CPU93.RCU
11021 ? 4% +860.4% 105857 ? 3% softirqs.CPU93.SCHED
16441 ? 9% +72.7% 28400 ? 5% softirqs.CPU94.RCU
11138 ? 2% +849.8% 105792 ? 2% softirqs.CPU94.SCHED
16412 ? 10% +74.1% 28567 ? 4% softirqs.CPU95.RCU
10707 ? 4% +891.4% 106157 ? 3% softirqs.CPU95.SCHED
8088 ? 51% +1114.4% 98220 ? 24% softirqs.NET_RX
1610477 ? 11% +69.8% 2734242 ? 4% softirqs.RCU
1049197 +867.0% 10145511 ? 2% softirqs.SCHED
20966 ? 7% +634.8% 154061 softirqs.TIMER
56.08 -19.9 36.20 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
55.51 -19.8 35.76 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
55.50 -19.7 35.76 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
55.48 -19.7 35.75 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
47.51 -19.6 27.95 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
49.60 -17.0 32.59 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
49.77 -16.7 33.06 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
6.48 -6.5 0.00 perf-profile.calltrace.cycles-pp.__submit_merged_write_cond.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
26.34 -6.2 20.15 ± 5% perf-profile.calltrace.cycles-pp.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
5.05 ± 2% -5.1 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.__submit_merged_write_cond.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
15.46 -2.1 13.36 ± 8% perf-profile.calltrace.cycles-pp.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.41 ± 57% +0.5 0.91 ± 7% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
1.15 ± 4% +0.6 1.78 ± 12% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.14 ±173% +0.6 0.78 ± 9% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.65 ± 3% +0.6 1.29 ± 2% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +0.7 0.72 ± 3% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.73 ± 4% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.00 +0.8 0.83 ± 8% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +0.9 0.95 ± 18% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_get_node_info.__write_node_page
0.00 +1.0 0.97 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
1.26 ± 6% +1.2 2.42 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
1.29 ± 5% +1.2 2.46 ± 3% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.29 ± 6% +1.2 2.46 ± 3% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
8.04 +1.6 9.69 ± 12% perf-profile.calltrace.cycles-pp.set_node_addr.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
2.04 ± 3% +1.7 3.78 ± 3% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
7.79 +1.9 9.66 ± 12% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.set_node_addr.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file
2.21 ± 3% +2.2 4.39 ± 6% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
7.08 +2.4 9.48 ± 12% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.set_node_addr.__write_node_page.f2fs_fsync_node_pages
0.69 +2.6 3.29 ± 10% perf-profile.calltrace.cycles-pp.f2fs_get_node_info.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
0.00 +2.9 2.92 ± 10% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_get_node_info.__write_node_page.f2fs_fsync_node_pages
6.20 +3.0 9.23 ± 13% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.set_node_addr.__write_node_page
0.00 +3.2 3.25 ± 10% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.f2fs_get_node_info.__write_node_page.f2fs_fsync_node_pages.f2fs_do_sync_file
0.00 +3.6 3.55 ± 6% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_need_dentry_mark.f2fs_fsync_node_pages
1.64 ± 2% +4.0 5.67 ± 8% perf-profile.calltrace.cycles-pp.f2fs_need_inode_block_update.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
0.00 +4.4 4.36 ± 11% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_need_inode_block_update.f2fs_do_sync_file
1.28 +4.4 5.64 ± 9% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.f2fs_need_inode_block_update.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.00 +5.3 5.27 ± 9% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_need_inode_block_update.f2fs_do_sync_file.f2fs_file_write_iter
0.91 ± 3% +5.4 6.34 ± 3% perf-profile.calltrace.cycles-pp.f2fs_need_dentry_mark.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
0.65 ± 3% +5.7 6.31 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.f2fs_need_dentry_mark.f2fs_fsync_node_pages.f2fs_do_sync_file.f2fs_file_write_iter
0.00 +5.9 5.93 ± 3% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_need_dentry_mark.f2fs_fsync_node_pages.f2fs_do_sync_file
7.65 ± 2% +6.9 14.58 ± 6% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
7.61 ± 2% +7.0 14.57 ± 6% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
7.59 ± 2% +7.0 14.57 ± 6% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.f2fs_do_sync_file.f2fs_file_write_iter
7.59 ± 2% +7.0 14.57 ± 6% perf-profile.calltrace.cycles-pp.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range.f2fs_do_sync_file
7.47 ± 2% +7.1 14.55 ± 6% perf-profile.calltrace.cycles-pp.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range.file_write_and_wait_range
4.51 ± 3% +9.8 14.27 ± 6% perf-profile.calltrace.cycles-pp.f2fs_write_single_data_page.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages.__filemap_fdatawrite_range
4.33 ± 3% +9.9 14.25 ± 6% perf-profile.calltrace.cycles-pp.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages.f2fs_write_data_pages.do_writepages
0.00 +11.8 11.79 ± 8% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_get_node_info.f2fs_do_write_data_page
1.10 ± 3% +12.9 13.99 ± 6% perf-profile.calltrace.cycles-pp.f2fs_get_node_info.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages.f2fs_write_data_pages
0.83 ± 4% +13.1 13.96 ± 6% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.f2fs_get_node_info.f2fs_do_write_data_page.f2fs_write_single_data_page.f2fs_write_cache_pages
0.00 +13.6 13.59 ± 7% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_get_node_info.f2fs_do_write_data_page.f2fs_write_single_data_page
0.00 +18.4 18.43 ± 5% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_is_checkpointed_node.f2fs_do_sync_file
1.54 +19.7 21.27 ± 5% perf-profile.calltrace.cycles-pp.f2fs_is_checkpointed_node.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write
1.09 ± 2% +20.1 21.22 ± 5% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.f2fs_is_checkpointed_node.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write
41.69 +20.7 62.36 perf-profile.calltrace.cycles-pp.write
0.00 +20.7 20.70 ± 6% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.f2fs_is_checkpointed_node.f2fs_do_sync_file.f2fs_file_write_iter
41.45 +20.9 62.31 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
41.40 +20.9 62.30 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
41.39 +20.9 62.30 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
41.37 +20.9 62.30 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
41.26 +21.0 62.29 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
41.23 +21.0 62.28 perf-profile.calltrace.cycles-pp.f2fs_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
39.59 +22.4 62.03 perf-profile.calltrace.cycles-pp.f2fs_do_sync_file.f2fs_file_write_iter.new_sync_write.vfs_write.ksys_write
56.08 -19.9 36.20 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
56.08 -19.9 36.20 perf-profile.children.cycles-pp.cpu_startup_entry
56.06 -19.9 36.20 perf-profile.children.cycles-pp.do_idle
55.51 -19.8 35.76 perf-profile.children.cycles-pp.start_secondary
47.58 -19.3 28.30 perf-profile.children.cycles-pp.intel_idle
50.28 -16.8 33.45 perf-profile.children.cycles-pp.cpuidle_enter_state
50.28 -16.8 33.47 perf-profile.children.cycles-pp.cpuidle_enter
8.94 -8.4 0.54 ± 13% perf-profile.children.cycles-pp.__submit_merged_write_cond
26.34 -6.2 20.15 ± 5% perf-profile.children.cycles-pp.f2fs_fsync_node_pages
4.92 ± 4% -4.6 0.32 ± 15% perf-profile.children.cycles-pp.do_write_page
4.00 -3.8 0.17 ± 20% perf-profile.children.cycles-pp.rwsem_wake
4.47 -3.7 0.75 ± 12% perf-profile.children.cycles-pp.try_to_wake_up
4.36 -3.6 0.71 ± 11% perf-profile.children.cycles-pp.ttwu_do_activate
4.33 -3.6 0.71 ± 11% perf-profile.children.cycles-pp.enqueue_task_fair
4.07 -3.4 0.67 ± 10% perf-profile.children.cycles-pp.enqueue_entity
3.73 -3.1 0.62 ± 13% perf-profile.children.cycles-pp.wake_up_q
3.10 ± 7% -3.0 0.12 ± 18% perf-profile.children.cycles-pp.f2fs_submit_page_write
3.03 -2.7 0.29 ± 15% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.19 -2.7 0.53 ± 12% perf-profile.children.cycles-pp.__account_scheduler_latency
2.66 ± 8% -2.5 0.17 ± 16% perf-profile.children.cycles-pp.f2fs_do_write_node_page
2.85 -2.4 0.40 ± 7% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
2.60 ± 5% -2.4 0.18 ± 15% perf-profile.children.cycles-pp.f2fs_outplace_write_data
2.81 -2.4 0.45 ± 13% perf-profile.children.cycles-pp.submit_bio
2.81 -2.4 0.45 ± 13% perf-profile.children.cycles-pp.submit_bio_noacct
2.67 -2.3 0.36 ± 6% perf-profile.children.cycles-pp.sched_ttwu_pending
2.23 -2.2 0.07 ± 15% perf-profile.children.cycles-pp.pagevec_lookup_range_tag
2.22 -2.2 0.07 ± 15% perf-profile.children.cycles-pp.find_get_pages_range_tag
15.46 -2.1 13.36 ± 8% perf-profile.children.cycles-pp.__write_node_page
2.37 -2.0 0.38 ± 12% perf-profile.children.cycles-pp.stack_trace_save_tsk
2.19 -2.0 0.24 ± 12% perf-profile.children.cycles-pp._raw_spin_lock_irq
2.17 -1.8 0.35 ± 12% perf-profile.children.cycles-pp.arch_stack_walk
2.17 -1.8 0.36 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
2.45 -1.8 0.69 ± 8% perf-profile.children.cycles-pp.__schedule
2.03 ± 2% -1.7 0.30 ± 11% perf-profile.children.cycles-pp.__submit_merged_bio
1.99 -1.7 0.28 ± 14% perf-profile.children.cycles-pp.brd_submit_bio
1.68 -1.5 0.18 ± 18% perf-profile.children.cycles-pp.f2fs_allocate_data_block
1.60 ± 2% -1.3 0.30 ± 10% perf-profile.children.cycles-pp.ret_from_fork
1.59 ± 2% -1.3 0.29 ± 12% perf-profile.children.cycles-pp.kthread
1.51 ± 2% -1.3 0.25 ± 12% perf-profile.children.cycles-pp.unwind_next_frame
1.41 ± 2% -1.2 0.22 ± 23% perf-profile.children.cycles-pp.__generic_file_write_iter
1.30 -1.1 0.19 ± 16% perf-profile.children.cycles-pp.brd_do_bvec
1.30 ± 2% -1.1 0.22 ± 25% perf-profile.children.cycles-pp.generic_perform_write
1.55 -1.0 0.54 ± 5% perf-profile.children.cycles-pp.schedule
1.04 ± 2% -0.9 0.13 ± 14% perf-profile.children.cycles-pp.issue_flush_thread
1.09 ± 2% -0.8 0.24 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
0.93 ± 4% -0.8 0.15 ± 17% perf-profile.children.cycles-pp.schedule_idle
0.80 ± 9% -0.7 0.08 ± 26% perf-profile.children.cycles-pp.f2fs_space_for_roll_forward
0.87 -0.7 0.19 ± 9% perf-profile.children.cycles-pp.f2fs_issue_flush
0.76 ± 10% -0.7 0.08 ± 26% perf-profile.children.cycles-pp.__percpu_counter_sum
0.74 ± 4% -0.6 0.09 ± 11% perf-profile.children.cycles-pp.down_read
0.76 ± 3% -0.6 0.11 ± 9% perf-profile.children.cycles-pp.dequeue_task_fair
0.70 ± 3% -0.6 0.10 ± 8% perf-profile.children.cycles-pp.dequeue_entity
0.71 ± 2% -0.6 0.12 ± 22% perf-profile.children.cycles-pp.f2fs_write_begin
0.68 -0.6 0.09 ± 11% perf-profile.children.cycles-pp.brd_insert_page
0.60 -0.6 0.05 ± 58% perf-profile.children.cycles-pp.complete
0.61 -0.5 0.08 ± 13% perf-profile.children.cycles-pp.f2fs_write_end_io
0.57 -0.5 0.05 ± 58% perf-profile.children.cycles-pp.swake_up_locked
0.66 ± 2% -0.5 0.15 ± 13% perf-profile.children.cycles-pp.rwsem_mark_wake
0.62 ± 4% -0.5 0.13 ± 11% perf-profile.children.cycles-pp.md_submit_bio
0.63 -0.5 0.14 ± 8% perf-profile.children.cycles-pp.update_load_avg
0.58 -0.5 0.09 ± 11% perf-profile.children.cycles-pp.select_task_rq_fair
0.56 ± 5% -0.5 0.09 ± 10% perf-profile.children.cycles-pp.orc_find
0.59 -0.5 0.13 ± 14% perf-profile.children.cycles-pp.__submit_flush_wait
0.56 -0.4 0.12 ± 15% perf-profile.children.cycles-pp.submit_bio_wait
0.50 ± 6% -0.4 0.08 ± 15% perf-profile.children.cycles-pp.__orc_find
0.44 ± 3% -0.4 0.03 ±100% perf-profile.children.cycles-pp.__set_page_dirty_nobuffers
0.54 ± 4% -0.4 0.14 ± 11% perf-profile.children.cycles-pp.worker_thread
0.45 ± 3% -0.4 0.07 ± 17% perf-profile.children.cycles-pp.__get_node_page
0.47 ± 3% -0.4 0.10 ± 12% perf-profile.children.cycles-pp.md_handle_request
0.43 ± 2% -0.4 0.07 ± 21% perf-profile.children.cycles-pp.update_curr
0.43 ± 3% -0.4 0.07 ± 12% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.38 -0.3 0.03 ±100% perf-profile.children.cycles-pp.__test_set_page_writeback
0.38 ± 4% -0.3 0.04 ± 59% perf-profile.children.cycles-pp.unwind_get_return_address
0.42 ± 3% -0.3 0.08 ± 13% perf-profile.children.cycles-pp.raid0_make_request
0.39 -0.3 0.06 ± 34% perf-profile.children.cycles-pp.f2fs_write_end
0.36 ± 4% -0.3 0.04 ± 58% perf-profile.children.cycles-pp.f2fs_is_valid_blkaddr
0.42 ± 5% -0.3 0.10 ± 19% perf-profile.children.cycles-pp.process_one_work
0.35 -0.3 0.03 ±102% perf-profile.children.cycles-pp.f2fs_submit_merged_ipu_write
0.34 ± 6% -0.3 0.03 ±100% perf-profile.children.cycles-pp.pagecache_get_page
0.34 ± 5% -0.3 0.04 ± 58% perf-profile.children.cycles-pp.__kernel_text_address
0.32 ± 2% -0.3 0.03 ±100% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.34 ± 3% -0.3 0.04 ± 58% perf-profile.children.cycles-pp.__lookup_nat_cache
0.32 ± 2% -0.3 0.03 ±100% perf-profile.children.cycles-pp.__list_del_entry_valid
0.31 ± 6% -0.3 0.03 ±100% perf-profile.children.cycles-pp.kernel_text_address
0.30 ± 5% -0.3 0.03 ±100% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.31 ± 5% -0.3 0.04 ± 58% perf-profile.children.cycles-pp.set_next_entity
0.31 ± 2% -0.3 0.06 ± 14% perf-profile.children.cycles-pp.wait_for_completion
0.30 ± 2% -0.3 0.04 ± 59% perf-profile.children.cycles-pp.__radix_tree_lookup
0.32 ± 2% -0.2 0.07 ± 15% perf-profile.children.cycles-pp.md_flush_request
0.29 ± 2% -0.2 0.05 ± 62% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.31 ± 4% -0.2 0.07 ± 21% perf-profile.children.cycles-pp.submit_flushes
0.27 ± 4% -0.2 0.03 ±102% perf-profile.children.cycles-pp.get_page_from_freelist
0.30 ± 2% -0.2 0.07 ± 12% perf-profile.children.cycles-pp.queue_work_on
0.28 ± 2% -0.2 0.06 ± 14% perf-profile.children.cycles-pp.schedule_timeout
0.26 ± 5% -0.2 0.04 ± 58% perf-profile.children.cycles-pp.update_ts_time_stats
0.26 ± 3% -0.2 0.04 ± 58% perf-profile.children.cycles-pp.nr_iowait_cpu
0.58 ± 4% -0.2 0.38 ± 7% perf-profile.children.cycles-pp.pick_next_task_fair
0.27 ± 5% -0.2 0.07 ± 13% perf-profile.children.cycles-pp.__queue_work
0.24 ± 4% -0.2 0.07 ± 11% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.30 ± 3% -0.2 0.14 ± 6% perf-profile.children.cycles-pp.update_rq_clock
0.18 ± 2% -0.2 0.03 ±100% perf-profile.children.cycles-pp.__wake_up_common_lock
0.24 ± 6% -0.1 0.11 ± 11% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
1.89 -0.1 1.77 ± 3% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.09 -0.1 0.04 ± 57% perf-profile.children.cycles-pp._find_next_bit
0.21 ± 4% -0.0 0.17 ± 5% perf-profile.children.cycles-pp.sched_clock
0.20 ± 6% -0.0 0.16 ± 6% perf-profile.children.cycles-pp.native_sched_clock
0.10 ± 7% -0.0 0.07 perf-profile.children.cycles-pp.hrtimer_next_event_without
0.20 ± 7% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.sched_clock_cpu
0.11 ± 4% -0.0 0.09 ± 15% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.07 ± 13% +0.0 0.11 ± 13% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.idle_cpu
0.06 ± 7% +0.1 0.12 ± 6% perf-profile.children.cycles-pp.run_rebalance_domains
0.04 ± 57% +0.1 0.12 ± 10% perf-profile.children.cycles-pp.update_blocked_averages
0.09 ± 13% +0.1 0.17 ± 6% perf-profile.children.cycles-pp.lapic_next_deadline
0.04 ± 58% +0.1 0.12 ± 21% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.04 ± 58% +0.1 0.12 ± 18% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.05 ± 8% +0.1 0.14 ± 5% perf-profile.children.cycles-pp.irqtime_account_irq
0.00 +0.1 0.09 ± 4% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.09 ± 4% +0.1 0.20 ± 7% perf-profile.children.cycles-pp.calc_global_load_tick
0.09 ± 4% +0.1 0.21 ± 6% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.09 ± 4% +0.1 0.22 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.35 ± 7% +0.1 0.48 ± 6% perf-profile.children.cycles-pp.__softirqentry_text_start
0.36 ± 6% +0.1 0.49 ± 6% perf-profile.children.cycles-pp.do_softirq_own_stack
0.14 ± 3% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.rebalance_domains
0.10 ± 19% +0.1 0.24 ± 8% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.16 ± 9% +0.2 0.31 ± 8% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.16 ± 13% +0.2 0.33 ± 9% perf-profile.children.cycles-pp.tick_irq_enter
0.42 ± 7% +0.2 0.58 ± 7% perf-profile.children.cycles-pp.irq_exit_rcu
0.20 ± 8% +0.2 0.42 ± 8% perf-profile.children.cycles-pp.irq_enter_rcu
0.07 ± 11% +0.2 0.30 ± 9% perf-profile.children.cycles-pp.newidle_balance
0.23 ± 2% +0.2 0.45 ± 5% perf-profile.children.cycles-pp.scheduler_tick
0.08 ± 10% +0.2 0.33 ± 6% perf-profile.children.cycles-pp.update_sd_lb_stats
0.09 ± 12% +0.2 0.34 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.11 ± 7% +0.3 0.39 ± 5% perf-profile.children.cycles-pp.load_balance
0.16 ± 18% +0.3 0.48 ± 5% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.50 ± 13% +0.3 0.84 ± 8% perf-profile.children.cycles-pp.clockevents_program_event
0.44 ± 5% +0.4 0.84 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
0.52 ± 8% +0.4 0.92 ± 7% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.43 ± 6% +0.4 0.83 ± 4% perf-profile.children.cycles-pp.update_process_times
0.39 ± 11% +0.4 0.83 ± 8% perf-profile.children.cycles-pp.tick_nohz_next_event
0.88 ± 9% +0.5 1.40 ± 6% perf-profile.children.cycles-pp.ktime_get
0.57 ± 4% +0.5 1.09 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
1.16 ± 4% +0.6 1.81 ± 12% perf-profile.children.cycles-pp.menu_select
0.77 ± 4% +0.7 1.46 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.8 0.78 ± 75% perf-profile.children.cycles-pp.f2fs_remove_inode_page
0.00 +0.8 0.78 ± 75% perf-profile.children.cycles-pp.truncate_node
1.46 ± 6% +1.2 2.67 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
1.48 ± 5% +1.2 2.70 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
1.96 ± 4% +1.3 3.25 ± 2% perf-profile.children.cycles-pp.asm_call_sysvec_on_stack
8.06 +1.7 9.75 ± 12% perf-profile.children.cycles-pp.set_node_addr
2.29 ± 3% +1.8 4.06 ± 3% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
2.50 ± 3% +2.1 4.58 ± 3% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
1.64 ± 2% +4.0 5.67 ± 8% perf-profile.children.cycles-pp.f2fs_need_inode_block_update
0.91 ± 3% +5.4 6.34 ± 3% perf-profile.children.cycles-pp.f2fs_need_dentry_mark
7.65 ± 2% +6.9 14.58 ± 6% perf-profile.children.cycles-pp.file_write_and_wait_range
7.61 ± 2% +7.0 14.57 ± 6% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
7.59 ± 2% +7.0 14.57 ± 6% perf-profile.children.cycles-pp.do_writepages
7.59 ± 2% +7.0 14.57 ± 6% perf-profile.children.cycles-pp.f2fs_write_data_pages
7.47 ± 2% +7.1 14.55 ± 6% perf-profile.children.cycles-pp.f2fs_write_cache_pages
4.52 ± 3% +9.8 14.27 ± 6% perf-profile.children.cycles-pp.f2fs_write_single_data_page
4.33 ± 3% +9.9 14.25 ± 6% perf-profile.children.cycles-pp.f2fs_do_write_data_page
1.80 +16.2 18.03 ± 7% perf-profile.children.cycles-pp.f2fs_get_node_info
1.54 +19.7 21.27 ± 5% perf-profile.children.cycles-pp.f2fs_is_checkpointed_node
41.70 +20.7 62.36 perf-profile.children.cycles-pp.write
41.57 +20.9 62.48 perf-profile.children.cycles-pp.do_syscall_64
41.39 +20.9 62.30 perf-profile.children.cycles-pp.ksys_write
41.37 +20.9 62.30 perf-profile.children.cycles-pp.vfs_write
41.26 +21.0 62.29 perf-profile.children.cycles-pp.new_sync_write
41.23 +21.0 62.28 perf-profile.children.cycles-pp.f2fs_file_write_iter
41.77 +21.6 63.35 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
39.59 +22.4 62.03 perf-profile.children.cycles-pp.f2fs_do_sync_file
13.89 +35.4 49.30 ± 2% perf-profile.children.cycles-pp.osq_lock
10.27 +41.0 51.25 perf-profile.children.cycles-pp.rwsem_down_read_slowpath
16.38 +43.4 59.78 perf-profile.children.cycles-pp.rwsem_optimistic_spin
47.58 -19.3 28.30 perf-profile.self.cycles-pp.intel_idle
3.02 -2.7 0.29 ± 15% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.86 -1.8 0.04 ± 58% perf-profile.self.cycles-pp.find_get_pages_range_tag
1.14 -0.9 0.26 ± 9% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.83 -0.8 0.99 ± 6% perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.04 ± 2% -0.8 0.22 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.69 ± 4% -0.6 0.08 ± 13% perf-profile.self.cycles-pp.down_read
0.71 -0.6 0.12 ± 13% perf-profile.self.cycles-pp.unwind_next_frame
0.59 ± 3% -0.5 0.10 ± 17% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
0.50 ± 6% -0.4 0.08 ± 15% perf-profile.self.cycles-pp.__orc_find
0.42 ± 6% -0.4 0.06 ± 13% perf-profile.self.cycles-pp.brd_do_bvec
0.42 ± 3% -0.4 0.06 ± 6% perf-profile.self.cycles-pp.try_to_wake_up
0.36 ± 4% -0.3 0.03 ±100% perf-profile.self.cycles-pp.f2fs_is_valid_blkaddr
0.40 ± 3% -0.3 0.07 ± 10% perf-profile.self.cycles-pp.__schedule
0.40 ± 7% -0.3 0.08 ± 10% perf-profile.self.cycles-pp.__account_scheduler_latency
0.35 ± 15% -0.3 0.05 ± 62% perf-profile.self.cycles-pp.__percpu_counter_sum
0.32 ± 2% -0.3 0.03 ±100% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.32 ± 2% -0.3 0.03 ±100% perf-profile.self.cycles-pp.__list_del_entry_valid
0.29 ± 2% -0.3 0.04 ± 60% perf-profile.self.cycles-pp.__radix_tree_lookup
0.26 ± 3% -0.2 0.04 ± 58% perf-profile.self.cycles-pp.nr_iowait_cpu
0.23 ± 3% -0.2 0.04 ± 58% perf-profile.self.cycles-pp.update_load_avg
0.23 ± 6% -0.2 0.04 ± 57% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.24 ± 3% -0.2 0.07 ± 11% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.23 ± 2% -0.1 0.10 ± 14% perf-profile.self.cycles-pp.do_idle
0.18 ± 5% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.update_rq_clock
0.20 ± 2% -0.1 0.09 ± 15% perf-profile.self.cycles-pp.rwsem_mark_wake
0.20 -0.1 0.10 ± 10% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.08 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp._find_next_bit
0.19 ± 5% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.native_sched_clock
0.06 ± 17% +0.0 0.11 ± 13% perf-profile.self.cycles-pp._raw_spin_trylock
0.03 ±100% +0.1 0.09 ± 4% perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.1 0.06 ± 17% perf-profile.self.cycles-pp.update_blocked_averages
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.irqtime_account_irq
0.09 ± 13% +0.1 0.17 ± 6% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.09 ± 4% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.01 ±173% +0.1 0.11 ± 19% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.09 ± 4% +0.1 0.20 ± 7% perf-profile.self.cycles-pp.calc_global_load_tick
0.08 ± 26% +0.1 0.21 ± 13% perf-profile.self.cycles-pp.update_process_times
0.09 ± 17% +0.1 0.21 ± 6% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.09 ± 4% +0.1 0.22 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.09 ± 20% +0.2 0.24 ± 20% perf-profile.self.cycles-pp.tick_nohz_next_event
0.06 ± 13% +0.2 0.24 ± 9% perf-profile.self.cycles-pp.update_sd_lb_stats
0.16 ± 18% +0.3 0.48 ± 5% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.76 ± 10% +0.5 1.26 ± 7% perf-profile.self.cycles-pp.ktime_get
0.26 ± 4% +0.6 0.84 ± 22% perf-profile.self.cycles-pp.cpuidle_enter_state
0.67 ± 2% +8.8 9.43 ± 2% perf-profile.self.cycles-pp.rwsem_optimistic_spin
13.81 +35.3 49.11 perf-profile.self.cycles-pp.osq_lock
250.00 ± 98% +12777.4% 32193 ±167% interrupts.75:PCI-MSI.70260737-edge.eth3-TxRx-0
774.25 ±162% +1610.6% 13244 ±136% interrupts.77:PCI-MSI.70260739-edge.eth3-TxRx-2
552.00 ±158% +8041.8% 44942 ±105% interrupts.79:PCI-MSI.70260741-edge.eth3-TxRx-4
844.50 ±163% +6985.2% 59834 ± 92% interrupts.80:PCI-MSI.70260742-edge.eth3-TxRx-5
96.50 ± 90% +4080.3% 4034 ±150% interrupts.81:PCI-MSI.70260743-edge.eth3-TxRx-6
150.50 +1092.7% 1795 ± 2% interrupts.9:IO-APIC.9-fasteoi.acpi
609956 -6.3% 571265 interrupts.CAL:Function_call_interrupts
6904 ± 4% -17.1% 5724 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
149876 +1092.6% 1787470 ± 3% interrupts.CPU0.LOC:Local_timer_interrupts
1581 ± 25% -63.5% 577.25 ± 23% interrupts.CPU0.NMI:Non-maskable_interrupts
1581 ± 25% -63.5% 577.25 ± 23% interrupts.CPU0.PMI:Performance_monitoring_interrupts
537.75 ± 3% +78.5% 959.75 ± 7% interrupts.CPU0.RES:Rescheduling_interrupts
150.50 +1092.7% 1795 ± 2% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
7196 ± 4% -22.6% 5570 ± 4% interrupts.CPU1.CAL:Function_call_interrupts
150226 +1090.1% 1787842 ± 3% interrupts.CPU1.LOC:Local_timer_interrupts
1527 ± 25% -64.8% 538.00 ± 30% interrupts.CPU1.NMI:Non-maskable_interrupts
1527 ± 25% -64.8% 538.00 ± 30% interrupts.CPU1.PMI:Performance_monitoring_interrupts
529.75 ± 3% +88.7% 999.75 ± 2% interrupts.CPU1.RES:Rescheduling_interrupts
6843 -11.7% 6039 ± 4% interrupts.CPU10.CAL:Function_call_interrupts
150193 +1090.3% 1787827 ± 3% interrupts.CPU10.LOC:Local_timer_interrupts
1773 -50.0% 886.50 ± 42% interrupts.CPU10.NMI:Non-maskable_interrupts
1773 -50.0% 886.50 ± 42% interrupts.CPU10.PMI:Performance_monitoring_interrupts
517.75 ± 6% +87.9% 972.75 ± 11% interrupts.CPU10.RES:Rescheduling_interrupts
150160 +1090.6% 1787832 ± 3% interrupts.CPU11.LOC:Local_timer_interrupts
1774 -59.7% 716.00 ± 27% interrupts.CPU11.NMI:Non-maskable_interrupts
1774 -59.7% 716.00 ± 27% interrupts.CPU11.PMI:Performance_monitoring_interrupts
515.00 ± 4% +84.8% 951.50 ± 5% interrupts.CPU11.RES:Rescheduling_interrupts
6965 -17.1% 5773 ± 3% interrupts.CPU12.CAL:Function_call_interrupts
150211 +1090.1% 1787741 ± 3% interrupts.CPU12.LOC:Local_timer_interrupts
1769 -53.4% 824.75 ± 26% interrupts.CPU12.NMI:Non-maskable_interrupts
1769 -53.4% 824.75 ± 26% interrupts.CPU12.PMI:Performance_monitoring_interrupts
509.00 ± 4% +99.6% 1015 ± 4% interrupts.CPU12.RES:Rescheduling_interrupts
6966 -15.8% 5866 ± 5% interrupts.CPU13.CAL:Function_call_interrupts
149890 +1092.7% 1787701 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
1755 -46.8% 933.25 ± 28% interrupts.CPU13.NMI:Non-maskable_interrupts
1755 -46.8% 933.25 ± 28% interrupts.CPU13.PMI:Performance_monitoring_interrupts
507.75 ± 5% +99.3% 1011 ± 8% interrupts.CPU13.RES:Rescheduling_interrupts
6884 ± 2% -18.1% 5639 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
150173 +1090.5% 1787862 ± 3% interrupts.CPU14.LOC:Local_timer_interrupts
510.25 ± 6% +83.0% 934.00 ± 5% interrupts.CPU14.RES:Rescheduling_interrupts
7174 ± 4% -17.0% 5957 ± 8% interrupts.CPU15.CAL:Function_call_interrupts
150282 +1089.6% 1787694 ± 3% interrupts.CPU15.LOC:Local_timer_interrupts
1751 -48.6% 899.50 ± 24% interrupts.CPU15.NMI:Non-maskable_interrupts
1751 -48.6% 899.50 ± 24% interrupts.CPU15.PMI:Performance_monitoring_interrupts
500.75 ± 4% +100.0% 1001 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
7088 ± 2% -15.8% 5970 ± 6% interrupts.CPU16.CAL:Function_call_interrupts
150253 +1089.8% 1787673 ± 3% interrupts.CPU16.LOC:Local_timer_interrupts
1531 ± 24% -55.3% 684.00 ± 40% interrupts.CPU16.NMI:Non-maskable_interrupts
1531 ± 24% -55.3% 684.00 ± 40% interrupts.CPU16.PMI:Performance_monitoring_interrupts
520.75 ± 3% +72.5% 898.50 ± 4% interrupts.CPU16.RES:Rescheduling_interrupts
6974 ± 2% -15.5% 5896 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
150233 +1090.0% 1787724 ± 3% interrupts.CPU17.LOC:Local_timer_interrupts
1782 -60.1% 710.25 ± 31% interrupts.CPU17.NMI:Non-maskable_interrupts
1782 -60.1% 710.25 ± 31% interrupts.CPU17.PMI:Performance_monitoring_interrupts
523.75 +66.5% 872.25 ± 2% interrupts.CPU17.RES:Rescheduling_interrupts
6812 -14.4% 5828 ± 4% interrupts.CPU18.CAL:Function_call_interrupts
150333 +1089.2% 1787828 ± 3% interrupts.CPU18.LOC:Local_timer_interrupts
1549 ± 23% -47.7% 811.00 ± 20% interrupts.CPU18.NMI:Non-maskable_interrupts
1549 ± 23% -47.7% 811.00 ± 20% interrupts.CPU18.PMI:Performance_monitoring_interrupts
518.75 ± 5% +93.6% 1004 ± 5% interrupts.CPU18.RES:Rescheduling_interrupts
6896 ± 2% -13.8% 5941 interrupts.CPU19.CAL:Function_call_interrupts
150247 +1090.0% 1787866 ± 3% interrupts.CPU19.LOC:Local_timer_interrupts
1534 ± 24% -43.1% 874.00 ± 20% interrupts.CPU19.NMI:Non-maskable_interrupts
1534 ± 24% -43.1% 874.00 ± 20% interrupts.CPU19.PMI:Performance_monitoring_interrupts
515.25 ± 6% +103.8% 1050 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
150079 +1091.0% 1787408 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
1516 ± 23% -58.1% 635.25 ± 22% interrupts.CPU2.NMI:Non-maskable_interrupts
1516 ± 23% -58.1% 635.25 ± 22% interrupts.CPU2.PMI:Performance_monitoring_interrupts
513.00 ± 4% +88.9% 969.00 ± 7% interrupts.CPU2.RES:Rescheduling_interrupts
6783 -15.1% 5757 ± 5% interrupts.CPU20.CAL:Function_call_interrupts
150187 +1090.4% 1787794 ± 3% interrupts.CPU20.LOC:Local_timer_interrupts
525.75 ± 6% +75.5% 922.50 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts
6991 ± 3% -17.8% 5748 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
150327 +1089.3% 1787825 ± 3% interrupts.CPU21.LOC:Local_timer_interrupts
1766 -68.9% 549.50 ± 34% interrupts.CPU21.NMI:Non-maskable_interrupts
1766 -68.9% 549.50 ± 34% interrupts.CPU21.PMI:Performance_monitoring_interrupts
540.50 ± 8% +73.1% 935.50 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
6879 ± 3% -14.4% 5891 ± 3% interrupts.CPU22.CAL:Function_call_interrupts
150278 +1089.6% 1787778 ± 3% interrupts.CPU22.LOC:Local_timer_interrupts
1774 -51.4% 861.50 ± 25% interrupts.CPU22.NMI:Non-maskable_interrupts
1774 -51.4% 861.50 ± 25% interrupts.CPU22.PMI:Performance_monitoring_interrupts
553.25 ± 12% +59.7% 883.50 ± 7% interrupts.CPU22.RES:Rescheduling_interrupts
6914 ± 2% -12.3% 6062 ± 6% interrupts.CPU23.CAL:Function_call_interrupts
150270 +1089.7% 1787808 ± 3% interrupts.CPU23.LOC:Local_timer_interrupts
1546 ± 25% -53.2% 723.00 ± 34% interrupts.CPU23.NMI:Non-maskable_interrupts
1546 ± 25% -53.2% 723.00 ± 34% interrupts.CPU23.PMI:Performance_monitoring_interrupts
516.75 ± 6% +79.2% 926.25 ± 6% interrupts.CPU23.RES:Rescheduling_interrupts
5887 ± 4% +15.2% 6783 ± 7% interrupts.CPU24.CAL:Function_call_interrupts
149885 +1093.6% 1789046 ± 2% interrupts.CPU24.LOC:Local_timer_interrupts
1799 ± 2% -61.7% 689.25 ± 9% interrupts.CPU24.NMI:Non-maskable_interrupts
1799 ± 2% -61.7% 689.25 ± 9% interrupts.CPU24.PMI:Performance_monitoring_interrupts
569.50 ± 10% +88.9% 1076 ± 5% interrupts.CPU24.RES:Rescheduling_interrupts
149890 +1093.6% 1789026 ± 2% interrupts.CPU25.LOC:Local_timer_interrupts
1768 -47.1% 934.75 ± 12% interrupts.CPU25.NMI:Non-maskable_interrupts
1768 -47.1% 934.75 ± 12% interrupts.CPU25.PMI:Performance_monitoring_interrupts
540.75 +92.3% 1039 ± 8% interrupts.CPU25.RES:Rescheduling_interrupts
150160 +1091.4% 1789061 ± 2% interrupts.CPU26.LOC:Local_timer_interrupts
537.25 ± 2% +88.1% 1010 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
150017 +1092.6% 1789042 ± 2% interrupts.CPU27.LOC:Local_timer_interrupts
1759 -58.1% 737.25 ± 6% interrupts.CPU27.NMI:Non-maskable_interrupts
1759 -58.1% 737.25 ± 6% interrupts.CPU27.PMI:Performance_monitoring_interrupts
528.75 ± 5% +75.1% 925.75 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
149980 +1092.8% 1789024 ± 2% interrupts.CPU28.LOC:Local_timer_interrupts
503.25 +88.9% 950.50 ± 2% interrupts.CPU28.RES:Rescheduling_interrupts
150157 +1091.3% 1788798 ± 2% interrupts.CPU29.LOC:Local_timer_interrupts
1782 ± 2% -47.6% 934.00 ± 8% interrupts.CPU29.NMI:Non-maskable_interrupts
1782 ± 2% -47.6% 934.00 ± 8% interrupts.CPU29.PMI:Performance_monitoring_interrupts
525.00 ± 6% +77.4% 931.50 ± 6% interrupts.CPU29.RES:Rescheduling_interrupts
7140 ± 4% -18.8% 5798 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
149871 +1092.9% 1787743 ± 3% interrupts.CPU3.LOC:Local_timer_interrupts
1755 -50.9% 861.75 ± 17% interrupts.CPU3.NMI:Non-maskable_interrupts
1755 -50.9% 861.75 ± 17% interrupts.CPU3.PMI:Performance_monitoring_interrupts
529.75 ± 4% +80.9% 958.50 ± 5% interrupts.CPU3.RES:Rescheduling_interrupts
149977 +1092.9% 1789016 ± 2% interrupts.CPU30.LOC:Local_timer_interrupts
1791 -61.1% 696.25 ± 22% interrupts.CPU30.NMI:Non-maskable_interrupts
1791 -61.1% 696.25 ± 22% interrupts.CPU30.PMI:Performance_monitoring_interrupts
521.75 ± 5% +77.8% 927.50 ± 3% interrupts.CPU30.RES:Rescheduling_interrupts
150029 +1092.4% 1788968 ± 2% interrupts.CPU31.LOC:Local_timer_interrupts
1767 -62.0% 671.75 ± 37% interrupts.CPU31.NMI:Non-maskable_interrupts
1767 -62.0% 671.75 ± 37% interrupts.CPU31.PMI:Performance_monitoring_interrupts
521.00 ± 2% +84.7% 962.50 ± 7% interrupts.CPU31.RES:Rescheduling_interrupts
150028 +1092.5% 1789067 ± 2% interrupts.CPU32.LOC:Local_timer_interrupts
532.50 ± 2% +96.5% 1046 ± 4% interrupts.CPU32.RES:Rescheduling_interrupts
149637 +1095.6% 1789077 ± 2% interrupts.CPU33.LOC:Local_timer_interrupts
1763 ± 2% -64.4% 628.25 ± 37% interrupts.CPU33.NMI:Non-maskable_interrupts
1763 ± 2% -64.4% 628.25 ± 37% interrupts.CPU33.PMI:Performance_monitoring_interrupts
499.25 ± 5% +92.8% 962.75 ± 10% interrupts.CPU33.RES:Rescheduling_interrupts
150148 +1091.6% 1789238 ± 2% interrupts.CPU34.LOC:Local_timer_interrupts
1323 ? 33% -54.5% 602.75 ? 46% interrupts.CPU34.NMI:Non-maskable_interrupts
1323 ? 33% -54.5% 602.75 ? 46% interrupts.CPU34.PMI:Performance_monitoring_interrupts
498.00 ? 5% +92.6% 959.00 ? 3% interrupts.CPU34.RES:Rescheduling_interrupts
150084 +1091.9% 1788867 ? 2% interrupts.CPU35.LOC:Local_timer_interrupts
1544 ? 25% -57.8% 651.25 ? 43% interrupts.CPU35.NMI:Non-maskable_interrupts
1544 ? 25% -57.8% 651.25 ? 43% interrupts.CPU35.PMI:Performance_monitoring_interrupts
511.25 ? 3% +89.8% 970.25 ? 5% interrupts.CPU35.RES:Rescheduling_interrupts
150106 +1091.5% 1788521 ? 2% interrupts.CPU36.LOC:Local_timer_interrupts
533.00 ? 3% +76.4% 940.25 ? 5% interrupts.CPU36.RES:Rescheduling_interrupts
150139 +1091.8% 1789312 ? 2% interrupts.CPU37.LOC:Local_timer_interrupts
513.25 ? 4% +91.3% 982.00 ? 5% interrupts.CPU37.RES:Rescheduling_interrupts
150102 +1091.9% 1789006 ? 2% interrupts.CPU38.LOC:Local_timer_interrupts
1299 ? 32% -43.1% 739.00 ? 21% interrupts.CPU38.NMI:Non-maskable_interrupts
1299 ? 32% -43.1% 739.00 ? 21% interrupts.CPU38.PMI:Performance_monitoring_interrupts
518.75 ? 4% +107.7% 1077 ? 6% interrupts.CPU38.RES:Rescheduling_interrupts
150053 +1092.2% 1788960 ? 2% interrupts.CPU39.LOC:Local_timer_interrupts
1525 ? 25% -53.5% 709.00 ? 26% interrupts.CPU39.NMI:Non-maskable_interrupts
1525 ? 25% -53.5% 709.00 ? 26% interrupts.CPU39.PMI:Performance_monitoring_interrupts
526.50 ? 2% +94.3% 1022 ? 8% interrupts.CPU39.RES:Rescheduling_interrupts
7103 ? 2% -16.3% 5942 ? 3% interrupts.CPU4.CAL:Function_call_interrupts
150217 +1090.1% 1787766 ? 3% interrupts.CPU4.LOC:Local_timer_interrupts
1536 ? 25% -49.7% 772.25 ? 20% interrupts.CPU4.NMI:Non-maskable_interrupts
1536 ? 25% -49.7% 772.25 ? 20% interrupts.CPU4.PMI:Performance_monitoring_interrupts
495.75 ? 3% +88.8% 936.00 ? 4% interrupts.CPU4.RES:Rescheduling_interrupts
150146 +1091.5% 1788930 ? 2% interrupts.CPU40.LOC:Local_timer_interrupts
501.50 +89.0% 947.75 ? 3% interrupts.CPU40.RES:Rescheduling_interrupts
150099 +1091.8% 1788854 ? 2% interrupts.CPU41.LOC:Local_timer_interrupts
518.00 ? 5% +87.3% 970.00 ? 4% interrupts.CPU41.RES:Rescheduling_interrupts
150159 +1091.3% 1788781 ? 2% interrupts.CPU42.LOC:Local_timer_interrupts
536.00 ? 2% +69.8% 910.25 ? 9% interrupts.CPU42.RES:Rescheduling_interrupts
4.75 ?101% +2936.8% 144.25 ?137% interrupts.CPU42.TLB:TLB_shootdowns
149916 +1093.3% 1788909 ? 2% interrupts.CPU43.LOC:Local_timer_interrupts
531.25 ? 2% +80.3% 958.00 ? 2% interrupts.CPU43.RES:Rescheduling_interrupts
150126 +1091.7% 1789018 ? 2% interrupts.CPU44.LOC:Local_timer_interrupts
1525 ? 24% -52.2% 729.75 ? 24% interrupts.CPU44.NMI:Non-maskable_interrupts
1525 ? 24% -52.2% 729.75 ? 24% interrupts.CPU44.PMI:Performance_monitoring_interrupts
505.00 ? 3% +81.0% 914.25 ? 4% interrupts.CPU44.RES:Rescheduling_interrupts
150108 +1091.8% 1788995 ? 2% interrupts.CPU45.LOC:Local_timer_interrupts
1756 -58.0% 738.00 ? 15% interrupts.CPU45.NMI:Non-maskable_interrupts
1756 -58.0% 738.00 ? 15% interrupts.CPU45.PMI:Performance_monitoring_interrupts
524.50 ? 5% +74.5% 915.00 ? 2% interrupts.CPU45.RES:Rescheduling_interrupts
149970 +1092.9% 1788950 ? 2% interrupts.CPU46.LOC:Local_timer_interrupts
1790 -57.7% 757.25 ? 8% interrupts.CPU46.NMI:Non-maskable_interrupts
1790 -57.7% 757.25 ? 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
516.00 ? 6% +84.0% 949.25 ? 11% interrupts.CPU46.RES:Rescheduling_interrupts
149906 +1093.4% 1788999 ? 2% interrupts.CPU47.LOC:Local_timer_interrupts
1756 -48.8% 898.75 ? 23% interrupts.CPU47.NMI:Non-maskable_interrupts
1756 -48.8% 898.75 ? 23% interrupts.CPU47.PMI:Performance_monitoring_interrupts
495.75 ? 5% +83.2% 908.25 ? 12% interrupts.CPU47.RES:Rescheduling_interrupts
150137 +1090.7% 1787631 ? 3% interrupts.CPU48.LOC:Local_timer_interrupts
1538 ? 25% -54.2% 704.50 ? 37% interrupts.CPU48.NMI:Non-maskable_interrupts
1538 ? 25% -54.2% 704.50 ? 37% interrupts.CPU48.PMI:Performance_monitoring_interrupts
565.25 ? 7% +86.8% 1056 ? 6% interrupts.CPU48.RES:Rescheduling_interrupts
6840 ? 2% -15.8% 5757 ? 5% interrupts.CPU49.CAL:Function_call_interrupts
150165 +1090.5% 1787741 ? 3% interrupts.CPU49.LOC:Local_timer_interrupts
1516 ? 24% -52.5% 720.00 ? 29% interrupts.CPU49.NMI:Non-maskable_interrupts
1516 ? 24% -52.5% 720.00 ? 29% interrupts.CPU49.PMI:Performance_monitoring_interrupts
558.75 ? 5% +86.2% 1040 ? 9% interrupts.CPU49.RES:Rescheduling_interrupts
150264 +1089.8% 1787810 ? 3% interrupts.CPU5.LOC:Local_timer_interrupts
1751 ? 3% -62.5% 656.25 ? 30% interrupts.CPU5.NMI:Non-maskable_interrupts
1751 ? 3% -62.5% 656.25 ? 30% interrupts.CPU5.PMI:Performance_monitoring_interrupts
505.50 ? 5% +79.2% 906.00 ? 3% interrupts.CPU5.RES:Rescheduling_interrupts
6841 ? 3% -18.2% 5593 ? 5% interrupts.CPU50.CAL:Function_call_interrupts
150244 +1089.9% 1787787 ? 3% interrupts.CPU50.LOC:Local_timer_interrupts
1312 ? 34% -54.4% 599.00 ? 4% interrupts.CPU50.NMI:Non-maskable_interrupts
1312 ? 34% -54.4% 599.00 ? 4% interrupts.CPU50.PMI:Performance_monitoring_interrupts
522.50 ? 2% +77.7% 928.50 ? 4% interrupts.CPU50.RES:Rescheduling_interrupts
6771 ? 2% -16.3% 5669 ? 2% interrupts.CPU51.CAL:Function_call_interrupts
150197 +1090.3% 1787760 ? 3% interrupts.CPU51.LOC:Local_timer_interrupts
519.75 ? 4% +74.6% 907.25 ? 3% interrupts.CPU51.RES:Rescheduling_interrupts
6920 -18.7% 5624 ? 4% interrupts.CPU52.CAL:Function_call_interrupts
150223 +1090.1% 1787770 ? 3% interrupts.CPU52.LOC:Local_timer_interrupts
535.50 ? 6% +59.2% 852.75 ? 4% interrupts.CPU52.RES:Rescheduling_interrupts
6938 -16.3% 5804 ? 6% interrupts.CPU53.CAL:Function_call_interrupts
150134 +1090.8% 1787795 ? 3% interrupts.CPU53.LOC:Local_timer_interrupts
525.75 ? 7% +75.5% 922.50 ? 10% interrupts.CPU53.RES:Rescheduling_interrupts
7038 ? 2% -17.9% 5779 ? 5% interrupts.CPU54.CAL:Function_call_interrupts
150213 +1090.2% 1787848 ? 3% interrupts.CPU54.LOC:Local_timer_interrupts
1519 ? 24% -45.1% 834.75 ? 28% interrupts.CPU54.NMI:Non-maskable_interrupts
1519 ? 24% -45.1% 834.75 ? 28% interrupts.CPU54.PMI:Performance_monitoring_interrupts
520.75 ? 5% +57.3% 819.00 ? 8% interrupts.CPU54.RES:Rescheduling_interrupts
6722 ? 2% -13.0% 5848 ? 6% interrupts.CPU55.CAL:Function_call_interrupts
150261 +1089.8% 1787822 ? 3% interrupts.CPU55.LOC:Local_timer_interrupts
1523 ? 24% -53.9% 702.75 ? 35% interrupts.CPU55.NMI:Non-maskable_interrupts
1523 ? 24% -53.9% 702.75 ? 35% interrupts.CPU55.PMI:Performance_monitoring_interrupts
541.25 ? 7% +74.5% 944.25 ? 14% interrupts.CPU55.RES:Rescheduling_interrupts
150127 +1090.8% 1787690 ? 3% interrupts.CPU56.LOC:Local_timer_interrupts
516.75 ? 4% +80.0% 930.00 ? 7% interrupts.CPU56.RES:Rescheduling_interrupts
6916 ? 2% -17.1% 5737 ? 6% interrupts.CPU57.CAL:Function_call_interrupts
150223 +1090.1% 1787837 ? 3% interrupts.CPU57.LOC:Local_timer_interrupts
1528 ? 24% -45.3% 835.75 ? 29% interrupts.CPU57.NMI:Non-maskable_interrupts
1528 ? 24% -45.3% 835.75 ? 29% interrupts.CPU57.PMI:Performance_monitoring_interrupts
573.75 ? 7% +69.5% 972.50 ? 9% interrupts.CPU57.RES:Rescheduling_interrupts
6876 ? 2% -14.8% 5855 ? 4% interrupts.CPU58.CAL:Function_call_interrupts
150314 +1089.4% 1787848 ? 3% interrupts.CPU58.LOC:Local_timer_interrupts
1534 ? 24% -52.6% 727.50 ? 35% interrupts.CPU58.NMI:Non-maskable_interrupts
1534 ? 24% -52.6% 727.50 ? 35% interrupts.CPU58.PMI:Performance_monitoring_interrupts
505.00 ? 8% +68.2% 849.25 ? 4% interrupts.CPU58.RES:Rescheduling_interrupts
7000 -15.1% 5941 ? 5% interrupts.CPU59.CAL:Function_call_interrupts
150209 +1090.3% 1787892 ? 3% interrupts.CPU59.LOC:Local_timer_interrupts
1314 ? 32% -40.8% 778.00 ? 14% interrupts.CPU59.NMI:Non-maskable_interrupts
1314 ? 32% -40.8% 778.00 ? 14% interrupts.CPU59.PMI:Performance_monitoring_interrupts
555.50 ? 12% +66.2% 923.00 ? 7% interrupts.CPU59.RES:Rescheduling_interrupts
6913 ? 3% -17.0% 5740 ? 4% interrupts.CPU6.CAL:Function_call_interrupts
150232 +1090.0% 1787761 ? 3% interrupts.CPU6.LOC:Local_timer_interrupts
530.75 ? 6% +84.4% 978.50 ? 5% interrupts.CPU6.RES:Rescheduling_interrupts
6918 -14.8% 5895 ? 6% interrupts.CPU60.CAL:Function_call_interrupts
150296 +1089.5% 1787820 ? 3% interrupts.CPU60.LOC:Local_timer_interrupts
543.50 ? 5% +64.0% 891.25 ? 7% interrupts.CPU60.RES:Rescheduling_interrupts
6832 -15.7% 5756 ? 4% interrupts.CPU61.CAL:Function_call_interrupts
150275 +1089.7% 1787780 ? 3% interrupts.CPU61.LOC:Local_timer_interrupts
1736 -50.9% 852.25 ? 16% interrupts.CPU61.NMI:Non-maskable_interrupts
1736 -50.9% 852.25 ? 16% interrupts.CPU61.PMI:Performance_monitoring_interrupts
561.50 ? 3% +69.0% 948.75 ? 10% interrupts.CPU61.RES:Rescheduling_interrupts
150147 +1090.7% 1787853 ? 3% interrupts.CPU62.LOC:Local_timer_interrupts
1523 ? 24% -59.8% 613.00 ? 32% interrupts.CPU62.NMI:Non-maskable_interrupts
1523 ? 24% -59.8% 613.00 ? 32% interrupts.CPU62.PMI:Performance_monitoring_interrupts
513.50 ? 3% +87.0% 960.25 ? 11% interrupts.CPU62.RES:Rescheduling_interrupts
6929 ? 2% -17.0% 5750 ? 4% interrupts.CPU63.CAL:Function_call_interrupts
150307 +1089.5% 1787834 ? 3% interrupts.CPU63.LOC:Local_timer_interrupts
1749 ? 2% -49.4% 884.25 ? 16% interrupts.CPU63.NMI:Non-maskable_interrupts
1749 ? 2% -49.4% 884.25 ? 16% interrupts.CPU63.PMI:Performance_monitoring_interrupts
531.50 ? 7% +65.6% 880.25 interrupts.CPU63.RES:Rescheduling_interrupts
6986 ? 2% -17.1% 5788 ? 3% interrupts.CPU64.CAL:Function_call_interrupts
150298 +1089.4% 1787593 ? 3% interrupts.CPU64.LOC:Local_timer_interrupts
1521 ? 26% -52.4% 724.75 ? 18% interrupts.CPU64.NMI:Non-maskable_interrupts
1521 ? 26% -52.4% 724.75 ? 18% interrupts.CPU64.PMI:Performance_monitoring_interrupts
527.25 ? 3% +60.2% 844.50 ? 2% interrupts.CPU64.RES:Rescheduling_interrupts
7083 -16.8% 5894 ? 5% interrupts.CPU65.CAL:Function_call_interrupts
150225 +1090.2% 1787921 ? 3% interrupts.CPU65.LOC:Local_timer_interrupts
536.75 ? 4% +59.3% 855.00 ? 5% interrupts.CPU65.RES:Rescheduling_interrupts
6910 -14.1% 5938 ? 4% interrupts.CPU66.CAL:Function_call_interrupts
150340 +1089.2% 1787898 ? 3% interrupts.CPU66.LOC:Local_timer_interrupts
585.75 ? 2% +71.5% 1004 ? 9% interrupts.CPU66.RES:Rescheduling_interrupts
6851 -15.0% 5823 ? 5% interrupts.CPU67.CAL:Function_call_interrupts
150302 +1089.4% 1787748 ? 3% interrupts.CPU67.LOC:Local_timer_interrupts
1552 ? 25% -50.5% 768.00 ? 23% interrupts.CPU67.NMI:Non-maskable_interrupts
1552 ? 25% -50.5% 768.00 ? 23% interrupts.CPU67.PMI:Performance_monitoring_interrupts
555.25 ? 3% +62.0% 899.50 ? 6% interrupts.CPU67.RES:Rescheduling_interrupts
6901 ? 2% -19.6% 5550 ? 4% interrupts.CPU68.CAL:Function_call_interrupts
150258 +1089.9% 1787888 ? 3% interrupts.CPU68.LOC:Local_timer_interrupts
556.75 ? 10% +58.7% 883.75 ? 8% interrupts.CPU68.RES:Rescheduling_interrupts
6944 -18.0% 5692 ? 9% interrupts.CPU69.CAL:Function_call_interrupts
150265 +1089.8% 1787886 ? 3% interrupts.CPU69.LOC:Local_timer_interrupts
1540 ? 25% -54.8% 696.25 ? 30% interrupts.CPU69.NMI:Non-maskable_interrupts
1540 ? 25% -54.8% 696.25 ? 30% interrupts.CPU69.PMI:Performance_monitoring_interrupts
553.25 ? 9% +67.8% 928.25 ? 8% interrupts.CPU69.RES:Rescheduling_interrupts
6819 -13.5% 5900 ? 5% interrupts.CPU7.CAL:Function_call_interrupts
150231 +1090.0% 1787821 ? 3% interrupts.CPU7.LOC:Local_timer_interrupts
518.50 ? 2% +105.4% 1065 ? 10% interrupts.CPU7.RES:Rescheduling_interrupts
6876 ? 2% -16.4% 5751 ? 5% interrupts.CPU70.CAL:Function_call_interrupts
150342 +1089.2% 1787842 ? 3% interrupts.CPU70.LOC:Local_timer_interrupts
532.25 ? 6% +59.7% 849.75 ? 2% interrupts.CPU70.RES:Rescheduling_interrupts
6849 -13.8% 5904 ? 4% interrupts.CPU71.CAL:Function_call_interrupts
150219 +1090.2% 1787845 ? 3% interrupts.CPU71.LOC:Local_timer_interrupts
517.00 ? 7% +70.7% 882.75 ? 9% interrupts.CPU71.RES:Rescheduling_interrupts
149871 +1093.7% 1789000 ? 2% interrupts.CPU72.LOC:Local_timer_interrupts
1570 ? 24% -56.9% 676.50 ? 17% interrupts.CPU72.NMI:Non-maskable_interrupts
1570 ? 24% -56.9% 676.50 ? 17% interrupts.CPU72.PMI:Performance_monitoring_interrupts
553.75 ? 3% +85.5% 1027 ? 7% interrupts.CPU72.RES:Rescheduling_interrupts
5738 ? 4% +18.8% 6818 ? 11% interrupts.CPU73.CAL:Function_call_interrupts
150071 +1092.1% 1788977 ? 2% interrupts.CPU73.LOC:Local_timer_interrupts
557.25 ? 7% +92.7% 1073 ? 10% interrupts.CPU73.RES:Rescheduling_interrupts
150122 +1091.7% 1789048 ? 2% interrupts.CPU74.LOC:Local_timer_interrupts
1544 ? 24% -41.7% 900.75 ? 24% interrupts.CPU74.NMI:Non-maskable_interrupts
1544 ? 24% -41.7% 900.75 ? 24% interrupts.CPU74.PMI:Performance_monitoring_interrupts
524.50 ? 6% +87.6% 984.00 ? 11% interrupts.CPU74.RES:Rescheduling_interrupts
150079 +1092.1% 1789068 ? 2% interrupts.CPU75.LOC:Local_timer_interrupts
523.50 ? 2% +78.7% 935.25 ? 6% interrupts.CPU75.RES:Rescheduling_interrupts
150127 +1091.6% 1788984 ? 2% interrupts.CPU76.LOC:Local_timer_interrupts
1523 ? 24% -52.4% 725.50 ? 8% interrupts.CPU76.NMI:Non-maskable_interrupts
1523 ? 24% -52.4% 725.50 ? 8% interrupts.CPU76.PMI:Performance_monitoring_interrupts
525.00 +76.5% 926.75 ? 9% interrupts.CPU76.RES:Rescheduling_interrupts
150165 +1091.4% 1789008 ? 2% interrupts.CPU77.LOC:Local_timer_interrupts
1556 ? 25% -46.9% 826.50 ? 25% interrupts.CPU77.NMI:Non-maskable_interrupts
1556 ? 25% -46.9% 826.50 ? 25% interrupts.CPU77.PMI:Performance_monitoring_interrupts
535.75 ? 3% +78.3% 955.50 ? 10% interrupts.CPU77.RES:Rescheduling_interrupts
149783 +1094.4% 1789010 ? 2% interrupts.CPU78.LOC:Local_timer_interrupts
1788 -59.3% 727.50 ? 8% interrupts.CPU78.NMI:Non-maskable_interrupts
1788 -59.3% 727.50 ? 8% interrupts.CPU78.PMI:Performance_monitoring_interrupts
539.00 ? 7% +67.3% 901.50 ? 8% interrupts.CPU78.RES:Rescheduling_interrupts
150147 +1091.4% 1788924 ? 2% interrupts.CPU79.LOC:Local_timer_interrupts
1562 ? 25% -43.0% 889.75 ? 31% interrupts.CPU79.NMI:Non-maskable_interrupts
1562 ? 25% -43.0% 889.75 ? 31% interrupts.CPU79.PMI:Performance_monitoring_interrupts
542.75 ? 9% +65.5% 898.50 ? 2% interrupts.CPU79.RES:Rescheduling_interrupts
6824 ? 2% -15.1% 5793 ? 4% interrupts.CPU8.CAL:Function_call_interrupts
150210 +1090.0% 1787456 ? 3% interrupts.CPU8.LOC:Local_timer_interrupts
1741 -56.3% 760.50 ? 19% interrupts.CPU8.NMI:Non-maskable_interrupts
1741 -56.3% 760.50 ? 19% interrupts.CPU8.PMI:Performance_monitoring_interrupts
506.00 ? 4% +93.4% 978.75 ? 5% interrupts.CPU8.RES:Rescheduling_interrupts
250.00 ? 98% +12777.4% 32193 ?167% interrupts.CPU80.75:PCI-MSI.70260737-edge.eth3-TxRx-0
150009 +1092.6% 1789046 ? 2% interrupts.CPU80.LOC:Local_timer_interrupts
1761 -56.8% 760.50 ? 27% interrupts.CPU80.NMI:Non-maskable_interrupts
1761 -56.8% 760.50 ? 27% interrupts.CPU80.PMI:Performance_monitoring_interrupts
523.25 ? 5% +81.6% 950.25 ? 5% interrupts.CPU80.RES:Rescheduling_interrupts
149572 +1096.1% 1789038 ? 2% interrupts.CPU81.LOC:Local_timer_interrupts
1546 ? 24% -49.7% 777.00 ? 21% interrupts.CPU81.NMI:Non-maskable_interrupts
1546 ? 24% -49.7% 777.00 ? 21% interrupts.CPU81.PMI:Performance_monitoring_interrupts
546.75 ? 10% +64.0% 896.50 ? 6% interrupts.CPU81.RES:Rescheduling_interrupts
774.25 ?162% +1610.6% 13244 ?136% interrupts.CPU82.77:PCI-MSI.70260739-edge.eth3-TxRx-2
150143 +1091.7% 1789225 ? 2% interrupts.CPU82.LOC:Local_timer_interrupts
1771 -50.5% 876.50 ? 27% interrupts.CPU82.NMI:Non-maskable_interrupts
1771 -50.5% 876.50 ? 27% interrupts.CPU82.PMI:Performance_monitoring_interrupts
542.50 ? 7% +70.5% 924.75 ? 7% interrupts.CPU82.RES:Rescheduling_interrupts
150013 +1092.5% 1788903 ? 2% interrupts.CPU83.LOC:Local_timer_interrupts
1767 -54.8% 798.50 ? 19% interrupts.CPU83.NMI:Non-maskable_interrupts
1767 -54.8% 798.50 ? 19% interrupts.CPU83.PMI:Performance_monitoring_interrupts
526.75 ? 3% +68.0% 885.00 ? 8% interrupts.CPU83.RES:Rescheduling_interrupts
552.00 ?158% +8041.8% 44942 ?105% interrupts.CPU84.79:PCI-MSI.70260741-edge.eth3-TxRx-4
150082 +1091.7% 1788512 ? 2% interrupts.CPU84.LOC:Local_timer_interrupts
1803 -49.1% 917.25 ? 21% interrupts.CPU84.NMI:Non-maskable_interrupts
1803 -49.1% 917.25 ? 21% interrupts.CPU84.PMI:Performance_monitoring_interrupts
545.75 +71.5% 936.00 ? 4% interrupts.CPU84.RES:Rescheduling_interrupts
844.50 ?163% +6985.2% 59834 ? 92% interrupts.CPU85.80:PCI-MSI.70260742-edge.eth3-TxRx-5
150154 +1091.7% 1789379 ? 2% interrupts.CPU85.LOC:Local_timer_interrupts
1765 ? 2% -53.6% 818.75 ? 26% interrupts.CPU85.NMI:Non-maskable_interrupts
1765 ? 2% -53.6% 818.75 ? 26% interrupts.CPU85.PMI:Performance_monitoring_interrupts
545.50 ? 3% +77.2% 966.75 ? 8% interrupts.CPU85.RES:Rescheduling_interrupts
96.50 ? 90% +4080.3% 4034 ?150% interrupts.CPU86.81:PCI-MSI.70260743-edge.eth3-TxRx-6
150129 +1091.7% 1789096 ? 2% interrupts.CPU86.LOC:Local_timer_interrupts
1751 ? 2% -52.7% 828.00 ? 17% interrupts.CPU86.NMI:Non-maskable_interrupts
1751 ? 2% -52.7% 828.00 ? 17% interrupts.CPU86.PMI:Performance_monitoring_interrupts
560.00 ? 3% +65.1% 924.50 ? 4% interrupts.CPU86.RES:Rescheduling_interrupts
150050 +1092.3% 1789021 ? 2% interrupts.CPU87.LOC:Local_timer_interrupts
559.50 ? 6% +60.3% 896.75 ? 8% interrupts.CPU87.RES:Rescheduling_interrupts
150187 +1091.2% 1788981 ? 2% interrupts.CPU88.LOC:Local_timer_interrupts
1771 ? 2% -48.8% 907.00 ? 17% interrupts.CPU88.NMI:Non-maskable_interrupts
1771 ? 2% -48.8% 907.00 ? 17% interrupts.CPU88.PMI:Performance_monitoring_interrupts
561.00 ? 3% +67.9% 942.00 ? 5% interrupts.CPU88.RES:Rescheduling_interrupts
150116 +1091.7% 1788958 ? 2% interrupts.CPU89.LOC:Local_timer_interrupts
1793 -43.4% 1015 ? 17% interrupts.CPU89.NMI:Non-maskable_interrupts
1793 -43.4% 1015 ? 17% interrupts.CPU89.PMI:Performance_monitoring_interrupts
521.50 ? 4% +72.1% 897.50 ? 6% interrupts.CPU89.RES:Rescheduling_interrupts
150194 +1090.4% 1787840 ? 3% interrupts.CPU9.LOC:Local_timer_interrupts
1522 ? 23% -47.5% 799.25 ? 17% interrupts.CPU9.NMI:Non-maskable_interrupts
1522 ? 23% -47.5% 799.25 ? 17% interrupts.CPU9.PMI:Performance_monitoring_interrupts
498.00 ? 3% +101.5% 1003 ? 2% interrupts.CPU9.RES:Rescheduling_interrupts
150149 +1091.5% 1788993 ? 2% interrupts.CPU90.LOC:Local_timer_interrupts
1780 -56.4% 775.75 ? 23% interrupts.CPU90.NMI:Non-maskable_interrupts
1780 -56.4% 775.75 ? 23% interrupts.CPU90.PMI:Performance_monitoring_interrupts
512.75 ? 3% +68.9% 866.25 ? 3% interrupts.CPU90.RES:Rescheduling_interrupts
149986 +1092.7% 1788910 ? 2% interrupts.CPU91.LOC:Local_timer_interrupts
518.25 ? 4% +69.9% 880.25 ? 6% interrupts.CPU91.RES:Rescheduling_interrupts
6.00 ? 61% +2137.5% 134.25 ?125% interrupts.CPU91.TLB:TLB_shootdowns
150056 +1092.2% 1789012 ? 2% interrupts.CPU92.LOC:Local_timer_interrupts
1765 -55.4% 788.00 ? 14% interrupts.CPU92.NMI:Non-maskable_interrupts
1765 -55.4% 788.00 ? 14% interrupts.CPU92.PMI:Performance_monitoring_interrupts
526.50 ? 5% +65.5% 871.50 ? 9% interrupts.CPU92.RES:Rescheduling_interrupts
150123 +1091.7% 1789025 ? 2% interrupts.CPU93.LOC:Local_timer_interrupts
1543 ? 24% -55.3% 689.75 ? 26% interrupts.CPU93.NMI:Non-maskable_interrupts
1543 ? 24% -55.3% 689.75 ? 26% interrupts.CPU93.PMI:Performance_monitoring_interrupts
511.25 ? 3% +70.4% 871.00 ? 4% interrupts.CPU93.RES:Rescheduling_interrupts
150075 +1092.1% 1789036 ? 2% interrupts.CPU94.LOC:Local_timer_interrupts
1568 ? 25% -50.4% 777.25 ? 16% interrupts.CPU94.NMI:Non-maskable_interrupts
1568 ? 25% -50.4% 777.25 ? 16% interrupts.CPU94.PMI:Performance_monitoring_interrupts
526.25 ? 5% +66.7% 877.25 ? 4% interrupts.CPU94.RES:Rescheduling_interrupts
150146 +1091.5% 1789044 ? 2% interrupts.CPU95.LOC:Local_timer_interrupts
1784 -50.6% 880.50 ? 27% interrupts.CPU95.NMI:Non-maskable_interrupts
1784 -50.6% 880.50 ? 27% interrupts.CPU95.PMI:Performance_monitoring_interrupts
539.25 ? 7% +53.6% 828.25 ? 4% interrupts.CPU95.RES:Rescheduling_interrupts
14412372 +1091.2% 1.717e+08 ? 2% interrupts.LOC:Local_timer_interrupts
0.00 +1.9e+104% 192.00 interrupts.MCP:Machine_check_polls
154111 ? 4% -51.0% 75504 ? 4% interrupts.NMI:Non-maskable_interrupts
154111 ? 4% -51.0% 75504 ? 4% interrupts.PMI:Performance_monitoring_interrupts
50770 +78.2% 90484 ? 2% interrupts.RES:Rescheduling_interrupts
721.25 ? 3% +396.3% 3579 ? 37% interrupts.TLB:TLB_shootdowns



aim7.jobs-per-min

9000 +--------------------------------------------------------------------+
|.+..+.+.+..+.+.+.+..+.+ +.+.+..+. .+..+.+.+ |
8000 |-+ : : +.+ |
7000 |-O O O O O O O O O : O : O O O O O O O O O O O O O O |
| : : |
6000 |-+ : : |
5000 |-+ : : |
| : : |
4000 |-+ : : |
3000 |-+ : : |
| : : |
2000 |-+ :: |
1000 |-+ : |
| : O O O |
0 +--------------------------------------------------------------------+


aim7.time.elapsed_time

1000 +--------------------------------------------------------------------+
900 |-+ O |
| O O |
800 |-+ |
700 |-+ |
| |
600 |-+ |
500 |-+ |
400 |-+ |
| |
300 |-+ |
200 |-+ |
| |
100 |.+..+.+.+..+.+.+.+..+.+.O .+.+.+..+.+.+.+..+.+.+ O O O O O |
0 +--------------------------------------------------------------------+


aim7.time.elapsed_time.max

1000 +--------------------------------------------------------------------+
900 |-+ O |
| O O |
800 |-+ |
700 |-+ |
| |
600 |-+ |
500 |-+ |
400 |-+ |
| |
300 |-+ |
200 |-+ |
| |
100 |.+..+.+.+..+.+.+.+..+.+.O .+.+.+..+.+.+.+..+.+.+ O O O O O |
0 +--------------------------------------------------------------------+


aim7.time.involuntary_context_switches

3e+06 +-----------------------------------------------------------------+
| O O O |
2.5e+06 |-O O O O O O O O O O O O O O O O O O O O O O O O O O |
| |
| |
2e+06 |-+ |
| |
1.5e+06 |-+ +.+..+.+.+.+ |
| : |
1e+06 |-+ : |
|.+.+..+.+.+.+.+..+.+.+ +..+.+.+ |
| : : |
500000 |-+ : : |
| :: |
0 +-----------------------------------------------------------------+


aim7.time.file_system_outputs

9e+07 +-------------------------------------------------------------------+
| O O O O O O O O O : O : O O O O O O O O O O O O O O O O O |
8e+07 |-+ : : |
7e+07 |-+ : : |
| : : |
6e+07 |-+ : : |
5e+07 |-+ : : |
| : : |
4e+07 |-+ : : |
3e+07 |-+ : : |
| : : |
2e+07 |-+ : |
1e+07 |-+ : |
| : |
0 +-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/30%/debian-10.4-x86_64-20200603.cgz/300s/lkp-cfl-e1/shell8/unixbench/0xde

commit:
62d5313500 ("locking/rwsem: Enable reader optimistic lock stealing")
c9847a7f94 ("locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")

62d5313500ac58b6 c9847a7f94679e742710574a2a7
---------------- ---------------------------
%stddev %change %stddev
\ | \
22370 -1.9% 21939 unixbench.score
1325065 -2.8% 1287341 unixbench.time.involuntary_context_switches
10030 -0.8% 9950 unixbench.time.maximum_resident_set_size
1.074e+08 -1.9% 1.054e+08 unixbench.time.minor_page_faults
1367 -2.2% 1337 unixbench.time.percent_of_cpu_this_job_got
372.90 -2.6% 363.18 unixbench.time.system_time
490.82 -1.9% 481.37 unixbench.time.user_time
3156052 +11.8% 3528615 unixbench.time.voluntary_context_switches
845598 -1.9% 829330 unixbench.workload
11.84 +2.0 13.83 ± 4% mpstat.cpu.all.idle%
14864 ± 57% -63.7% 5388 ± 26% softirqs.NET_RX
48363 -16.4% 40455 ± 10% meminfo.AnonHugePages
217888 -11.7% 192288 ± 4% meminfo.DirectMap4k
2185 +12.1% 2449 slabinfo.kmalloc-1k.num_objs
11354 ± 3% -16.1% 9521 ± 4% slabinfo.vmap_area.active_objs
11526 ± 3% -14.7% 9835 ± 4% slabinfo.vmap_area.num_objs
13.50 ± 3% +14.8% 15.50 ± 3% vmstat.cpu.id
51.00 -2.5% 49.75 vmstat.cpu.sy
34.00 -2.9% 33.00 vmstat.cpu.us
23.50 ± 2% -9.6% 21.25 ± 5% vmstat.procs.r
102380 +10.6% 113264 vmstat.system.cs
33970733 ± 2% +50.1% 50992679 ± 8% cpuidle.C1.time
886295 +46.4% 1297530 ± 6% cpuidle.C1.usage
62811 ± 99% +202.9% 190270 ± 12% cpuidle.C10.time
45921 ± 3% +35.2% 62068 ± 13% cpuidle.C3.usage
115402 +124.5% 259071 ± 8% cpuidle.POLL.time
10953 ± 3% +197.5% 32585 ± 7% cpuidle.POLL.usage
477.50 ± 3% +9.5% 522.75 proc-vmstat.nr_active_anon
5785 +0.9% 5838 proc-vmstat.nr_kernel_stack
477.50 ± 3% +9.5% 522.75 proc-vmstat.nr_zone_active_anon
71809737 -1.1% 71020069 proc-vmstat.numa_hit
71809737 -1.1% 71020069 proc-vmstat.numa_local
119599 -3.0% 115977 proc-vmstat.pgactivate
76761873 -1.9% 75313926 proc-vmstat.pgalloc_normal
1.077e+08 -1.9% 1.056e+08 proc-vmstat.pgfault
76755211 -1.9% 75305695 proc-vmstat.pgfree
5472622 -2.1% 5357159 proc-vmstat.pgreuse
3627 -4.2% 3475 proc-vmstat.thp_fault_alloc
1503946 -1.9% 1474994 proc-vmstat.unevictable_pgs_culled
4767 ± 96% -89.1% 519.50 ±139% interrupts.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
22159 ± 2% -4.0% 21268 ± 3% interrupts.CAL:Function_call_interrupts
558.50 ± 2% +14.4% 639.00 ± 11% interrupts.CPU0.TLB:TLB_shootdowns
4767 ± 96% -89.1% 519.50 ±139% interrupts.CPU1.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
1363 ± 5% -9.3% 1237 ± 6% interrupts.CPU1.CAL:Function_call_interrupts
559.00 +17.1% 654.50 ± 2% interrupts.CPU1.TLB:TLB_shootdowns
535.00 ± 2% +24.1% 663.75 ± 4% interrupts.CPU10.TLB:TLB_shootdowns
576.50 +14.1% 658.00 ± 4% interrupts.CPU11.TLB:TLB_shootdowns
1421 ± 5% -7.1% 1320 ± 4% interrupts.CPU12.CAL:Function_call_interrupts
579.00 +20.5% 697.75 ± 3% interrupts.CPU12.TLB:TLB_shootdowns
569.50 +21.2% 690.50 ± 6% interrupts.CPU13.TLB:TLB_shootdowns
505.00 +32.2% 667.50 ± 6% interrupts.CPU14.TLB:TLB_shootdowns
1422 ± 3% -8.4% 1302 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
509.50 ± 6% +36.2% 694.00 ± 8% interrupts.CPU15.TLB:TLB_shootdowns
557.50 ± 2% +18.9% 662.75 ± 6% interrupts.CPU2.TLB:TLB_shootdowns
570.00 ± 2% +18.9% 678.00 ± 8% interrupts.CPU3.TLB:TLB_shootdowns
556.50 +21.4% 675.75 ± 2% interrupts.CPU4.TLB:TLB_shootdowns
506.00 +34.7% 681.75 ± 6% interrupts.CPU5.TLB:TLB_shootdowns
1363 ± 3% -6.5% 1274 ± 5% interrupts.CPU6.CAL:Function_call_interrupts
534.00 ± 3% +23.5% 659.50 ± 4% interrupts.CPU6.TLB:TLB_shootdowns
559.50 +21.3% 678.50 ± 7% interrupts.CPU7.TLB:TLB_shootdowns
563.50 ± 2% +20.6% 679.75 ± 7% interrupts.CPU8.TLB:TLB_shootdowns
514.50 ± 4% +27.8% 657.75 ± 2% interrupts.CPU9.TLB:TLB_shootdowns
8753 +22.7% 10738 ± 4% interrupts.TLB:TLB_shootdowns
242.90 ± 8% +20.9% 293.59 ± 5% sched_debug.cfs_rq:/.exec_clock.stddev
12708 ± 2% +19.8% 15225 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
1472 +26.0% 1854 ± 12% sched_debug.cfs_rq:/.runnable_avg.max
292.07 ± 5% +28.7% 375.92 ± 13% sched_debug.cfs_rq:/.runnable_avg.stddev
12708 ± 2% +19.8% 15226 ± 8% sched_debug.cfs_rq:/.spread0.stddev
1418 +27.0% 1801 ± 12% sched_debug.cfs_rq:/.util_avg.max
281.89 ± 7% +30.6% 368.13 ± 14% sched_debug.cfs_rq:/.util_avg.stddev
212080 ± 10% -15.4% 179338 ± 12% sched_debug.cpu.avg_idle.avg
41096 ± 15% -24.6% 30990 ± 17% sched_debug.cpu.avg_idle.min
208687 ± 3% -15.1% 177102 ± 10% sched_debug.cpu.avg_idle.stddev
7605 ± 19% +92.1% 14611 ± 7% sched_debug.cpu.curr->pid.max
2601 ± 37% +93.0% 5020 ± 20% sched_debug.cpu.curr->pid.stddev
203329 +10.7% 225114 sched_debug.cpu.nr_switches.avg
-82.00 +129.7% -188.38 sched_debug.cpu.nr_uninterruptible.min
42.97 +69.8% 72.96 ± 16% sched_debug.cpu.nr_uninterruptible.stddev
197540 +11.1% 219447 sched_debug.cpu.sched_count.avg
200544 +11.3% 223137 sched_debug.cpu.sched_count.max
187758 +11.0% 208437 sched_debug.cpu.sched_count.min
3057 +23.1% 3763 ± 10% sched_debug.cpu.sched_count.stddev
43758 +28.3% 56141 ± 3% sched_debug.cpu.sched_goidle.avg
45063 +27.4% 57397 ± 3% sched_debug.cpu.sched_goidle.max
41416 +28.8% 53349 ± 2% sched_debug.cpu.sched_goidle.min
860.48 +23.8% 1064 ± 10% sched_debug.cpu.sched_goidle.stddev
86423 +13.0% 97665 sched_debug.cpu.ttwu_count.avg
87940 +13.8% 100103 sched_debug.cpu.ttwu_count.max
81643 +12.4% 91752 ± 2% sched_debug.cpu.ttwu_count.min
611.62 ± 2% +26.0% 770.48 ± 11% sched_debug.cpu.ttwu_local.stddev
1.024e+10 -1.5% 1.008e+10 perf-stat.i.branch-instructions
4.55 +0.1 4.69 perf-stat.i.cache-miss-rate%
1.019e+08 -0.9% 1.009e+08 perf-stat.i.cache-misses
2.485e+09 -1.7% 2.442e+09 perf-stat.i.cache-references
105828 +10.8% 117262 perf-stat.i.context-switches
5.631e+10 -1.8% 5.529e+10 perf-stat.i.cpu-cycles
15839 +15.4% 18271 ± 2% perf-stat.i.cpu-migrations
1.276e+10 -1.8% 1.253e+10 perf-stat.i.dTLB-loads
3739007 -2.7% 3638057 perf-stat.i.dTLB-store-misses
7.373e+09 -1.9% 7.236e+09 perf-stat.i.dTLB-stores
10623717 -1.7% 10438031 perf-stat.i.iTLB-load-misses
4.991e+10 -1.5% 4.914e+10 perf-stat.i.instructions
3.52 -1.8% 3.46 perf-stat.i.metric.GHz
0.33 ± 6% -58.4% 0.14 ± 28% perf-stat.i.metric.K/sec
2056 -1.7% 2021 perf-stat.i.metric.M/sec
1655018 -2.2% 1619167 perf-stat.i.minor-faults
0.00 ± 2% +0.0 0.00 ± 15% perf-stat.i.node-load-miss-rate%
5452609 -1.0% 5399939 perf-stat.i.node-loads
33146294 -1.4% 32686027 perf-stat.i.node-stores
1655623 -2.2% 1619758 perf-stat.i.page-faults
1.007e+10 -1.5% 9.918e+09 perf-stat.ps.branch-instructions
1.002e+08 -0.9% 99277053 perf-stat.ps.cache-misses
2.444e+09 -1.7% 2.403e+09 perf-stat.ps.cache-references
104069 +10.9% 115371 perf-stat.ps.context-switches
5.538e+10 -1.8% 5.44e+10 perf-stat.ps.cpu-cycles
15575 +15.4% 17977 ± 2% perf-stat.ps.cpu-migrations
1.254e+10 -1.7% 1.233e+10 perf-stat.ps.dTLB-loads
3676839 -2.7% 3579226 perf-stat.ps.dTLB-store-misses
7.251e+09 -1.8% 7.119e+09 perf-stat.ps.dTLB-stores
10447266 -1.7% 10269385 perf-stat.ps.iTLB-load-misses
4.909e+10 -1.5% 4.835e+10 perf-stat.ps.instructions
1627477 -2.1% 1592966 perf-stat.ps.minor-faults
5362125 -0.9% 5312864 perf-stat.ps.node-loads
32594979 -1.3% 32157625 perf-stat.ps.node-stores
1628072 -2.1% 1593547 perf-stat.ps.page-faults
3.148e+12 -1.5% 3.102e+12 perf-stat.total.instructions
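As a side note for readers unfamiliar with the lkp table format: each row shows the base-commit mean, its stddev as a percentage of the mean, the relative %change, and the patched-commit mean with its stddev percentage. A minimal sketch of how such columns could be derived from repeated runs follows; the sample values and the `summarize` helper are made up for illustration, not taken from this report:

```python
# Sketch: compute the base mean, ±%stddev, and %change columns for one
# metric from repeated benchmark runs (hypothetical sample values).
import statistics

def summarize(base_runs, patched_runs):
    base_mean = statistics.mean(base_runs)
    patched_mean = statistics.mean(patched_runs)
    # Relative change of the patched mean versus the base mean.
    change_pct = (patched_mean - base_mean) / base_mean * 100
    # Run-to-run noise expressed as a percentage of the mean.
    base_sd_pct = statistics.stdev(base_runs) / base_mean * 100
    patched_sd_pct = statistics.stdev(patched_runs) / patched_mean * 100
    return base_mean, base_sd_pct, change_pct, patched_mean, patched_sd_pct

# Hypothetical context-switch counts from four runs of each kernel:
base = [102000, 103500, 101800, 102300]
patched = [113000, 114200, 112600, 113300]
b_mean, b_sd, change, p_mean, p_sd = summarize(base, patched)
print(f"{b_mean:.0f} ±{b_sd:.0f}% {change:+.1f}% {p_mean:.0f} ±{p_sd:.0f}%")
```

With these made-up samples the line printed has the same shape as a report row (mean, noise percentage, signed %change, patched mean, noise percentage).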
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write.do_syscall_64
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write.ksys_write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write.vfs_write
15.17 ±100% -15.2 0.00 perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold.new_sync_write
24.48 ±100% -14.2 10.29 ±173% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
24.48 ±100% -14.2 10.29 ±173% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
24.48 ±100% -14.2 10.29 ±173% perf-profile.calltrace.cycles-pp.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread.ret_from_fork
24.48 ±100% -13.9 10.59 ±173% perf-profile.calltrace.cycles-pp.ret_from_fork
24.48 ±100% -13.9 10.59 ±173% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
23.80 ± 99% -13.8 10.00 ±173% perf-profile.calltrace.cycles-pp.memcpy_erms.drm_fb_helper_dirty_work.process_one_work.worker_thread.kthread
13.79 ±100% -13.8 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.cold
13.11 ±100% -13.1 0.00 perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
8.96 ±100% -9.0 0.00 perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
8.96 ±100% -9.0 0.00 perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
8.96 ?100% -9.0 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.perf_output_begin.perf_event_mmap_output.perf_iterate_sb.perf_event_mmap.mmap_region
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.vma_interval_tree_remove.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.__tsearch
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.__dentry_kill.shrink_dentry_list.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.evict.__dentry_kill.shrink_dentry_list.shrink_dcache_parent.d_invalidate
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
4.54 ?100% -4.5 0.00 perf-profile.calltrace.cycles-pp.perf_event_mmap_output.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap
4.54 ?100% -4.2 0.29 ?173% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
4.14 ?100% -4.1 0.00 perf-profile.calltrace.cycles-pp.io_serial_out.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.shrink_dentry_list.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache.release_task
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.do_wait.kernel_waitid.__do_sys_waitid.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.d_invalidate.proc_invalidate_siblings_dcache.release_task.wait_task_zombie.do_wait
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.release_task.wait_task_zombie.do_wait.kernel_waitid.__do_sys_waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.proc_invalidate_siblings_dcache.release_task.wait_task_zombie.do_wait.kernel_waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.shrink_dcache_parent.d_invalidate.proc_invalidate_siblings_dcache.release_task.wait_task_zombie
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.calltrace.cycles-pp.wait_task_zombie.do_wait.kernel_waitid.__do_sys_waitid.do_syscall_64
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ?100% -2.5 2.08 ?173% perf-profile.calltrace.cycles-pp.setlocale
0.34 ?100% +10.4 10.71 ? 57% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ?100% +10.4 10.71 ? 57% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ?100% +10.4 10.71 ? 57% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.write
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.ksys_write
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.vfs_write
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.new_sync_write
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.devkmsg_write.cold
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.devkmsg_emit
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.vprintk_emit
15.17 ?100% -15.2 0.00 perf-profile.children.cycles-pp.console_unlock
24.48 ?100% -14.2 10.29 ?173% perf-profile.children.cycles-pp.worker_thread
24.48 ?100% -14.2 10.29 ?173% perf-profile.children.cycles-pp.process_one_work
24.48 ?100% -14.2 10.29 ?173% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
24.48 ?100% -13.9 10.59 ?173% perf-profile.children.cycles-pp.ret_from_fork
24.48 ?100% -13.9 10.59 ?173% perf-profile.children.cycles-pp.kthread
24.14 ?100% -13.8 10.29 ?173% perf-profile.children.cycles-pp.memcpy_erms
13.79 ?100% -13.8 0.00 perf-profile.children.cycles-pp.serial8250_console_write
13.11 ?100% -13.1 0.00 perf-profile.children.cycles-pp.uart_console_write
9.65 ?100% -9.7 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
9.65 ?100% -9.7 0.00 perf-profile.children.cycles-pp.io_serial_in
8.96 ?100% -9.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
4.89 ? 85% -4.9 0.00 perf-profile.children.cycles-pp.vma_interval_tree_remove
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.perf_output_begin
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.__tsearch
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.__dentry_kill
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.evict
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.perf_event_mmap
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.perf_iterate_sb
4.54 ?100% -4.5 0.00 perf-profile.children.cycles-pp.perf_event_mmap_output
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.__alloc_pages_nodemask
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.__handle_mm_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.alloc_pages_vma
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.asm_exc_page_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.do_user_addr_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.do_anonymous_page
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.exc_page_fault
4.54 ?100% -4.2 0.29 ?173% perf-profile.children.cycles-pp.handle_mm_fault
4.14 ?100% -4.1 0.00 perf-profile.children.cycles-pp.io_serial_out
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.__do_sys_waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.shrink_dentry_list
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.kernel_waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.do_wait
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.d_invalidate
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.release_task
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.proc_invalidate_siblings_dcache
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.shrink_dcache_parent
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.waitid
4.54 ?100% -2.8 1.78 ?173% perf-profile.children.cycles-pp.wait_task_zombie
4.54 ?100% -2.5 2.08 ?173% perf-profile.children.cycles-pp.__x64_sys_mprotect
4.54 ?100% -2.5 2.08 ?173% perf-profile.children.cycles-pp.do_mprotect_pkey
4.54 ?100% -2.5 2.08 ?173% perf-profile.children.cycles-pp.__vma_adjust
4.54 ?100% -2.5 2.08 ?173% perf-profile.children.cycles-pp.mprotect_fixup
4.54 ?100% -2.5 2.08 ?173% perf-profile.children.cycles-pp.setlocale
0.34 ?100% +10.4 10.71 ? 57% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.69 ?100% +20.4 21.12 ? 82% perf-profile.children.cycles-pp.do_group_exit
0.69 ?100% +20.4 21.12 ? 82% perf-profile.children.cycles-pp.do_exit
24.14 ?100% -14.1 10.00 ?173% perf-profile.self.cycles-pp.memcpy_erms
9.65 ?100% -9.7 0.00 perf-profile.self.cycles-pp.io_serial_in
4.89 ? 85% -4.9 0.00 perf-profile.self.cycles-pp.vma_interval_tree_remove
4.54 ?100% -4.5 0.00 perf-profile.self.cycles-pp.perf_output_begin
4.54 ?100% -4.5 0.00 perf-profile.self.cycles-pp.__tsearch
4.54 ?100% -4.5 0.00 perf-profile.self.cycles-pp.setlocale
4.54 ?100% -4.5 0.00 perf-profile.self.cycles-pp.evict
4.54 ?100% -4.2 0.29 ?173% perf-profile.self.cycles-pp.__alloc_pages_nodemask
4.14 ?100% -4.1 0.00 perf-profile.self.cycles-pp.io_serial_out



***************************************************************************************************
lkp-csl-2ap1: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/ucode:
4k/gcc-9/performance/1SSD/xfs/sync/x86_64-rhel-8.3/32/debian-10.4-x86_64-20200603.cgz/300s/randwrite/lkp-csl-2ap1/256g/fio-basic/0x4003003

commit:
62d5313500 ("locking/rwsem: Enable reader optimistic lock stealing")
c9847a7f94 ("locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED")

62d5313500ac58b6 c9847a7f94679e742710574a2a7
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
0:2 11% 0:4 perf-profile.children.cycles-pp.error_entry
0:2 8% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.03 ± 4% +0.0 0.04 ± 5% fio.latency_100us%
310.22 +2.2% 316.95 ± 2% fio.time.system_time
80.27 -3.3% 77.63 fio.time.user_time
14149913 ? 7% -13.0% 12317496 ? 7% meminfo.DirectMap2M
20012 ? 5% -8.9% 18226 ? 2% meminfo.Writeback
10114 ? 3% -7.4% 9369 ? 5% sched_debug.cpu.ttwu_count.stddev
23907 ? 5% -16.1% 20049 ? 6% sched_debug.cpu.ttwu_local.max
34.12 +1.5% 34.63 boot-time.boot
29.29 +1.5% 29.72 boot-time.dhcp
5692 +1.6% 5785 boot-time.idle
1273 -2.3% 1244 ? 2% perf-stat.i.cycles-between-cache-misses
1253 -1.5% 1234 perf-stat.overall.cycles-between-cache-misses
1255 -1.3% 1239 perf-stat.overall.instructions-per-iTLB-miss
2249599 ? 8% +196.0% 6658444 ? 27% numa-numastat.node1.numa_foreign
14201697 ? 4% -32.4% 9595709 ? 11% numa-numastat.node2.local_node
3615071 ? 18% -33.0% 2422719 ? 27% numa-numastat.node2.numa_foreign
14232753 ? 4% -32.4% 9619099 ? 11% numa-numastat.node2.numa_hit
3648742 ? 14% +111.0% 7698381 ? 17% numa-numastat.node2.numa_miss
3679798 ? 14% +109.8% 7721778 ? 17% numa-numastat.node2.other_node
565431 ? 97% -98.3% 9626 ? 27% numa-meminfo.node0.Mapped
562998 ? 98% -99.0% 5571 ? 71% numa-meminfo.node0.Shmem
5781 ? 6% -7.4% 5353 ? 7% numa-meminfo.node0.Writeback
6824 ? 8% +770.6% 59416 ? 99% numa-meminfo.node2.AnonPages
6921 ? 8% +8832.4% 618210 ? 85% numa-meminfo.node2.Inactive(anon)
6229 ? 3% +17.1% 7294 ? 6% numa-meminfo.node2.KernelStack
221.00 ? 26% +1132.2% 2723 ? 67% numa-meminfo.node2.PageTables
7779536 ? 3% -7.2% 7221783 ? 5% numa-meminfo.node3.Dirty
916.00 ? 8% -16.1% 768.25 ? 12% proc-vmstat.kswapd_high_wmark_hit_quickly
302394 -0.9% 299680 proc-vmstat.nr_slab_unreclaimable
16262663 ? 3% +25.8% 20453611 ? 9% proc-vmstat.numa_foreign
53313089 -7.9% 49075650 ? 3% proc-vmstat.numa_hit
53219629 -8.0% 48981929 ? 3% proc-vmstat.numa_local
16262663 ? 3% +25.8% 20453611 ? 9% proc-vmstat.numa_miss
16356124 ? 3% +25.6% 20547332 ? 9% proc-vmstat.numa_other
2589 ? 3% -8.4% 2371 ? 8% proc-vmstat.pageoutrun
3178 -10.7% 2837 ? 4% proc-vmstat.pgactivate
464640 -16.5% 387918 ? 4% proc-vmstat.pgalloc_dma32
28308854 -2.0% 27733269 proc-vmstat.pgfree
140746 ? 98% -99.0% 1392 ? 71% numa-vmstat.node0.nr_shmem
611995 ? 8% +329.7% 2629633 ? 30% numa-vmstat.node1.numa_foreign
1706 ? 8% +770.5% 14854 ? 99% numa-vmstat.node2.nr_anon_pages
1731 ? 8% +8828.6% 154553 ? 85% numa-vmstat.node2.nr_inactive_anon
6236 ? 3% +16.9% 7290 ? 6% numa-vmstat.node2.nr_kernel_stack
53.50 ? 27% +1166.4% 677.50 ? 68% numa-vmstat.node2.nr_page_table_pages
1276 ? 12% -18.7% 1037 ? 18% numa-vmstat.node2.nr_writeback
1733 ? 8% +8818.5% 154557 ? 85% numa-vmstat.node2.nr_zone_inactive_anon
8117299 ? 7% -30.7% 5628762 ? 11% numa-vmstat.node2.numa_hit
8000798 ? 7% -31.0% 5519712 ? 11% numa-vmstat.node2.numa_local
1945012 ? 3% -7.2% 1804575 ? 5% numa-vmstat.node3.nr_dirty
1945821 ? 3% -7.2% 1805557 ? 5% numa-vmstat.node3.nr_zone_write_pending
21294 ? 6% -10.7% 19009 ? 10% softirqs.CPU121.RCU
8982 ? 60% -59.2% 3668 ?151% softirqs.CPU13.NET_RX
21708 -9.8% 19575 ? 5% softirqs.CPU160.RCU
21280 ? 4% -9.3% 19291 ? 5% softirqs.CPU168.RCU
21250 ? 2% -11.5% 18811 ? 4% softirqs.CPU169.RCU
20674 ? 5% -7.8% 19066 ? 3% softirqs.CPU170.RCU
19940 ? 5% -7.3% 18493 ? 4% softirqs.CPU172.RCU
20439 ? 6% -7.2% 18962 ? 6% softirqs.CPU173.RCU
20464 ? 5% -7.9% 18847 ? 4% softirqs.CPU174.RCU
20260 ? 6% -7.7% 18706 ? 5% softirqs.CPU175.RCU
22472 ? 4% -10.4% 20138 ? 5% softirqs.CPU176.RCU
22274 ? 5% -8.8% 20307 ? 6% softirqs.CPU177.RCU
22391 ? 4% -9.7% 20210 ? 5% softirqs.CPU178.RCU
22638 ? 2% -11.4% 20060 ? 5% softirqs.CPU179.RCU
21904 ? 6% -9.2% 19882 ? 5% softirqs.CPU180.RCU
21495 ? 6% -8.3% 19705 ? 5% softirqs.CPU181.RCU
21991 ? 5% -7.5% 20334 ? 5% softirqs.CPU183.RCU
21678 ? 5% -6.5% 20277 ? 4% softirqs.CPU185.RCU
22661 ? 8% -11.1% 20139 ? 5% softirqs.CPU187.RCU
24764 -16.2% 20750 ? 7% softirqs.CPU24.RCU
22860 ? 7% -9.9% 20606 ? 3% softirqs.CPU64.RCU
8947 ? 12% +137.3% 21230 ? 40% softirqs.CPU7.SCHED
23032 ? 3% -9.8% 20770 ? 5% softirqs.CPU72.RCU
22208 ? 3% -10.8% 19814 ? 6% softirqs.CPU73.RCU
22009 ? 4% -11.9% 19390 ? 8% softirqs.CPU83.RCU
21939 ? 5% -7.0% 20400 ? 3% softirqs.CPU90.RCU
21959 ? 5% -7.2% 20371 ? 4% softirqs.CPU91.RCU
22777 -11.0% 20263 ? 3% softirqs.CPU92.RCU
21811 ? 5% -6.3% 20446 ? 5% softirqs.CPU94.RCU
48.90 ? 2% -2.7 46.17 ? 2% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
47.31 ? 2% -2.6 44.69 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
54.66 ? 4% -2.4 52.27 ? 2% perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
2.45 ? 8% -0.3 2.10 ? 6% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
1.67 ? 5% -0.2 1.50 ? 2% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.71 ? 5% -0.2 1.55 ? 3% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.91 ? 8% -0.1 0.82 ? 4% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
49.23 ? 2% -2.6 46.58 ? 2% perf-profile.children.cycles-pp.cpuidle_enter_state
49.25 ? 2% -2.6 46.61 ? 2% perf-profile.children.cycles-pp.cpuidle_enter
54.66 ? 4% -2.4 52.27 ? 2% perf-profile.children.cycles-pp.secondary_startup_64_no_verify
54.66 ? 4% -2.4 52.27 ? 2% perf-profile.children.cycles-pp.cpu_startup_entry
54.66 ? 4% -2.4 52.27 ? 2% perf-profile.children.cycles-pp.do_idle
1.73 ? 6% -0.1 1.58 ? 2% perf-profile.children.cycles-pp.update_process_times
1.76 ? 6% -0.1 1.63 ? 3% perf-profile.children.cycles-pp.tick_sched_handle
0.56 ? 5% -0.1 0.47 ? 13% perf-profile.children.cycles-pp.lapic_next_deadline
0.96 ? 8% -0.1 0.87 ? 4% perf-profile.children.cycles-pp.scheduler_tick
0.11 ? 18% -0.1 0.03 ?105% perf-profile.children.cycles-pp.put_cpu_partial
0.20 ? 19% -0.1 0.13 ? 14% perf-profile.children.cycles-pp.xfs_btree_lshift
0.43 ? 8% -0.0 0.38 ? 4% perf-profile.children.cycles-pp.irqtime_account_irq
0.24 ? 4% -0.0 0.20 ? 11% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.07 ? 20% -0.0 0.04 ? 58% perf-profile.children.cycles-pp.timekeeping_advance
0.06 -0.0 0.03 ?100% perf-profile.children.cycles-pp.native_apic_mem_write
0.08 ? 6% -0.0 0.04 ? 59% perf-profile.children.cycles-pp.delay_tsc
0.07 -0.0 0.04 ? 58% perf-profile.children.cycles-pp.rcu_read_unlock_strict
0.17 ? 5% -0.0 0.14 ? 10% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.28 ? 5% -0.0 0.25 ? 7% perf-profile.children.cycles-pp.calc_global_load_tick
0.11 ? 4% -0.0 0.09 ? 7% perf-profile.children.cycles-pp.xfs_errortag_test
0.05 +0.0 0.06 perf-profile.children.cycles-pp.menu_reflect
0.08 ? 5% +0.0 0.11 ? 9% perf-profile.children.cycles-pp.xfs_end_bio
0.06 ? 16% +0.0 0.09 perf-profile.children.cycles-pp.xfs_iext_insert
0.16 +0.0 0.19 ? 7% perf-profile.children.cycles-pp.xfs_trans_log_buf
0.07 ? 7% +0.0 0.10 ? 11% perf-profile.children.cycles-pp.run_local_timers
0.06 ? 9% +0.0 0.09 ? 13% perf-profile.children.cycles-pp.up_read
0.09 +0.0 0.12 ? 23% perf-profile.children.cycles-pp.node_dirty_ok
0.29 ? 6% +0.0 0.33 ? 2% perf-profile.children.cycles-pp.xfs_inode_item_format
0.00 +0.1 0.06 ? 15% perf-profile.children.cycles-pp.irqentry_exit
0.23 +0.1 0.29 ? 6% perf-profile.children.cycles-pp.xfs_btree_insrec
0.25 ? 4% +0.1 0.31 ? 6% perf-profile.children.cycles-pp.xfs_btree_insert
0.57 ? 6% +0.1 0.68 ? 5% perf-profile.children.cycles-pp.kmem_cache_alloc
0.44 ? 5% +0.1 0.57 ? 9% perf-profile.children.cycles-pp.get_page_from_freelist
0.52 ? 4% +0.1 0.64 ? 8% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.55 ? 5% -0.1 0.47 ? 13% perf-profile.self.cycles-pp.lapic_next_deadline
0.34 ? 4% -0.1 0.27 ? 15% perf-profile.self.cycles-pp.update_process_times
0.71 -0.1 0.66 ? 4% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.18 ? 13% -0.0 0.15 ? 10% perf-profile.self.cycles-pp.blk_attempt_plug_merge
0.22 ? 6% -0.0 0.18 ? 8% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.19 ? 15% -0.0 0.16 ? 8% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.16 ? 9% -0.0 0.13 ? 6% perf-profile.self.cycles-pp.tick_sched_timer
0.08 ? 6% -0.0 0.04 ? 59% perf-profile.self.cycles-pp.delay_tsc
0.09 ? 11% -0.0 0.07 ? 13% perf-profile.self.cycles-pp._xfs_trans_bjoin
0.13 ? 7% -0.0 0.11 ? 6% perf-profile.self.cycles-pp.rebalance_domains
0.10 ? 5% -0.0 0.08 ? 6% perf-profile.self.cycles-pp.xfs_errortag_test
0.13 -0.0 0.11 ? 7% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.10 -0.0 0.09 ? 5% perf-profile.self.cycles-pp.xfs_btree_rec_offset
0.11 ? 4% -0.0 0.10 ? 7% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.05 +0.0 0.07 ? 7% perf-profile.self.cycles-pp.blk_attempt_bio_merge
0.05 +0.0 0.07 ? 7% perf-profile.self.cycles-pp.xfs_iext_insert
0.12 +0.0 0.14 ? 3% perf-profile.self.cycles-pp.xfs_trans_dirty_buf
0.06 ? 9% +0.0 0.08 ? 6% perf-profile.self.cycles-pp.up_read
0.05 +0.0 0.08 ? 14% perf-profile.self.cycles-pp.run_local_timers
0.20 ? 2% +0.1 0.26 ? 15% perf-profile.self.cycles-pp.clear_page_dirty_for_io
17886 ? 60% -61.2% 6932 ?151% interrupts.34:PCI-MSI.524292-edge.eth0-TxRx-3
178.50 ? 7% -54.3% 81.50 ? 88% interrupts.CPU101.TLB:TLB_shootdowns
179.00 ? 7% -54.1% 82.25 ? 87% interrupts.CPU103.TLB:TLB_shootdowns
186.00 -52.3% 88.75 ? 82% interrupts.CPU104.TLB:TLB_shootdowns
188.50 -55.2% 84.50 ? 88% interrupts.CPU105.TLB:TLB_shootdowns
556.00 -39.6% 336.00 ? 59% interrupts.CPU106.RES:Rescheduling_interrupts
222.50 ? 14% -59.8% 89.50 ? 80% interrupts.CPU106.TLB:TLB_shootdowns
179.00 ? 6% -50.4% 88.75 ? 82% interrupts.CPU107.TLB:TLB_shootdowns
680.50 ? 16% -37.5% 425.25 ? 21% interrupts.CPU108.NMI:Non-maskable_interrupts
680.50 ? 16% -37.5% 425.25 ? 21% interrupts.CPU108.PMI:Performance_monitoring_interrupts
107.00 ? 57% -59.1% 43.75 ?145% interrupts.CPU11.TLB:TLB_shootdowns
191.50 -55.1% 86.00 ? 87% interrupts.CPU110.TLB:TLB_shootdowns
701.50 ? 62% -70.1% 209.50 ? 35% interrupts.CPU111.NMI:Non-maskable_interrupts
701.50 ? 62% -70.1% 209.50 ? 35% interrupts.CPU111.PMI:Performance_monitoring_interrupts
187.00 -54.4% 85.25 ? 88% interrupts.CPU111.TLB:TLB_shootdowns
181.00 ? 4% -51.1% 88.50 ? 81% interrupts.CPU112.TLB:TLB_shootdowns
184.50 ? 2% -54.9% 83.25 ? 89% interrupts.CPU114.TLB:TLB_shootdowns
174.00 ? 9% -52.6% 82.50 ? 87% interrupts.CPU116.TLB:TLB_shootdowns
681.00 ? 37% -69.6% 207.00 ? 48% interrupts.CPU117.NMI:Non-maskable_interrupts
681.00 ? 37% -69.6% 207.00 ? 48% interrupts.CPU117.PMI:Performance_monitoring_interrupts
194.50 ? 9% -54.8% 88.00 ? 82% interrupts.CPU117.TLB:TLB_shootdowns
189.00 -50.4% 93.75 ? 69% interrupts.CPU118.TLB:TLB_shootdowns
59.50 ? 88% +200.0% 178.50 ? 32% interrupts.CPU121.TLB:TLB_shootdowns
246.00 -42.9% 140.50 ? 61% interrupts.CPU124.NMI:Non-maskable_interrupts
246.00 -42.9% 140.50 ? 61% interrupts.CPU124.PMI:Performance_monitoring_interrupts
17886 ? 60% -61.2% 6932 ?151% interrupts.CPU13.34:PCI-MSI.524292-edge.eth0-TxRx-3
96.00 ? 67% -65.6% 33.00 ?167% interrupts.CPU13.TLB:TLB_shootdowns
109.50 ? 15% -30.1% 76.50 ? 38% interrupts.CPU136.NMI:Non-maskable_interrupts
109.50 ? 15% -30.1% 76.50 ? 38% interrupts.CPU136.PMI:Performance_monitoring_interrupts
102.50 ? 7% -30.5% 71.25 ? 22% interrupts.CPU137.NMI:Non-maskable_interrupts
102.50 ? 7% -30.5% 71.25 ? 22% interrupts.CPU137.PMI:Performance_monitoring_interrupts
101.00 ? 6% +131.2% 233.50 ? 52% interrupts.CPU143.NMI:Non-maskable_interrupts
101.00 ? 6% +131.2% 233.50 ? 52% interrupts.CPU143.PMI:Performance_monitoring_interrupts
497.00 ? 47% -63.4% 181.75 ? 45% interrupts.CPU144.NMI:Non-maskable_interrupts
497.00 ? 47% -63.4% 181.75 ? 45% interrupts.CPU144.PMI:Performance_monitoring_interrupts
336.00 ? 18% -42.8% 192.25 ? 25% interrupts.CPU145.NMI:Non-maskable_interrupts
336.00 ? 18% -42.8% 192.25 ? 25% interrupts.CPU145.PMI:Performance_monitoring_interrupts
321.50 ? 22% -55.5% 143.00 ? 29% interrupts.CPU147.NMI:Non-maskable_interrupts
321.50 ? 22% -55.5% 143.00 ? 29% interrupts.CPU147.PMI:Performance_monitoring_interrupts
103.00 ? 6% +72.1% 177.25 ? 37% interrupts.CPU149.NMI:Non-maskable_interrupts
103.00 ? 6% +72.1% 177.25 ? 37% interrupts.CPU149.PMI:Performance_monitoring_interrupts
82.50 ? 3% +84.8% 152.50 ? 40% interrupts.CPU149.TLB:TLB_shootdowns
733.50 ? 32% -56.9% 316.50 ? 46% interrupts.CPU15.NMI:Non-maskable_interrupts
733.50 ? 32% -56.9% 316.50 ? 46% interrupts.CPU15.PMI:Performance_monitoring_interrupts
3.00 ? 33% +1933.3% 61.00 ? 68% interrupts.CPU152.RES:Rescheduling_interrupts
151.50 ? 26% -40.9% 89.50 ? 26% interrupts.CPU153.NMI:Non-maskable_interrupts
151.50 ? 26% -40.9% 89.50 ? 26% interrupts.CPU153.PMI:Performance_monitoring_interrupts
111.50 ? 2% +67.9% 187.25 ? 27% interrupts.CPU158.NMI:Non-maskable_interrupts
111.50 ? 2% +67.9% 187.25 ? 27% interrupts.CPU158.PMI:Performance_monitoring_interrupts
87.50 ? 22% +92.0% 168.00 ? 31% interrupts.CPU159.TLB:TLB_shootdowns
175.50 -59.5% 71.00 ? 57% interrupts.CPU16.RES:Rescheduling_interrupts
244.00 ? 7% -55.0% 109.75 ? 9% interrupts.CPU160.NMI:Non-maskable_interrupts
244.00 ? 7% -55.0% 109.75 ? 9% interrupts.CPU160.PMI:Performance_monitoring_interrupts
311.00 ? 24% -71.1% 90.00 ? 23% interrupts.CPU163.NMI:Non-maskable_interrupts
311.00 ? 24% -71.1% 90.00 ? 23% interrupts.CPU163.PMI:Performance_monitoring_interrupts
317.00 ? 23% -57.3% 135.50 ? 58% interrupts.CPU164.NMI:Non-maskable_interrupts
317.00 ? 23% -57.3% 135.50 ? 58% interrupts.CPU164.PMI:Performance_monitoring_interrupts
101.00 ? 80% -51.5% 49.00 ?145% interrupts.CPU17.TLB:TLB_shootdowns
100.50 ? 44% +233.1% 334.75 ? 29% interrupts.CPU170.NMI:Non-maskable_interrupts
100.50 ? 44% +233.1% 334.75 ? 29% interrupts.CPU170.PMI:Performance_monitoring_interrupts
68.00 ? 33% +126.8% 154.25 ? 30% interrupts.CPU172.NMI:Non-maskable_interrupts
68.00 ? 33% +126.8% 154.25 ? 30% interrupts.CPU172.PMI:Performance_monitoring_interrupts
130.00 ? 40% -65.0% 45.50 ?142% interrupts.CPU18.TLB:TLB_shootdowns
79.50 ? 44% +255.3% 282.50 ? 27% interrupts.CPU184.NMI:Non-maskable_interrupts
79.50 ? 44% +255.3% 282.50 ? 27% interrupts.CPU184.PMI:Performance_monitoring_interrupts
714.50 ? 21% -67.9% 229.50 ? 32% interrupts.CPU21.NMI:Non-maskable_interrupts
714.50 ? 21% -67.9% 229.50 ? 32% interrupts.CPU21.PMI:Performance_monitoring_interrupts
159.50 ? 20% -59.1% 65.25 ? 81% interrupts.CPU22.TLB:TLB_shootdowns
118.00 ? 14% -53.8% 54.50 ? 56% interrupts.CPU23.RES:Rescheduling_interrupts
312.00 ? 19% +109.9% 654.75 ? 28% interrupts.CPU24.NMI:Non-maskable_interrupts
312.00 ? 19% +109.9% 654.75 ? 28% interrupts.CPU24.PMI:Performance_monitoring_interrupts
16.50 ? 81% +943.9% 172.25 ? 51% interrupts.CPU24.TLB:TLB_shootdowns
254.50 +143.4% 619.50 ? 45% interrupts.CPU25.NMI:Non-maskable_interrupts
254.50 +143.4% 619.50 ? 45% interrupts.CPU25.PMI:Performance_monitoring_interrupts
742.00 ? 19% -41.4% 435.00 ? 56% interrupts.CPU3.NMI:Non-maskable_interrupts
742.00 ? 19% -41.4% 435.00 ? 56% interrupts.CPU3.PMI:Performance_monitoring_interrupts
21.00 ? 57% +409.5% 107.00 ? 61% interrupts.CPU37.TLB:TLB_shootdowns
540.00 ? 21% -46.0% 291.75 ? 43% interrupts.CPU48.NMI:Non-maskable_interrupts
540.00 ? 21% -46.0% 291.75 ? 43% interrupts.CPU48.PMI:Performance_monitoring_interrupts
111.00 ? 54% -65.1% 38.75 ?136% interrupts.CPU5.TLB:TLB_shootdowns
103.50 ? 8% +137.9% 246.25 ? 50% interrupts.CPU53.NMI:Non-maskable_interrupts
103.50 ? 8% +137.9% 246.25 ? 50% interrupts.CPU53.PMI:Performance_monitoring_interrupts
123.50 ? 9% -16.4% 103.25 ? 8% interrupts.CPU57.NMI:Non-maskable_interrupts
123.50 ? 9% -16.4% 103.25 ? 8% interrupts.CPU57.PMI:Performance_monitoring_interrupts
105.50 ? 46% -92.7% 7.75 ? 68% interrupts.CPU6.TLB:TLB_shootdowns
82.00 ? 35% +247.6% 285.00 ? 44% interrupts.CPU62.NMI:Non-maskable_interrupts
82.00 ? 35% +247.6% 285.00 ? 44% interrupts.CPU62.PMI:Performance_monitoring_interrupts
29.50 ? 69% +341.5% 130.25 ? 43% interrupts.CPU63.TLB:TLB_shootdowns
379.50 ? 3% -71.5% 108.25 ? 6% interrupts.CPU64.NMI:Non-maskable_interrupts
379.50 ? 3% -71.5% 108.25 ? 6% interrupts.CPU64.PMI:Performance_monitoring_interrupts
342.00 ? 18% -69.9% 103.00 ? 9% interrupts.CPU67.NMI:Non-maskable_interrupts
342.00 ? 18% -69.9% 103.00 ? 9% interrupts.CPU67.PMI:Performance_monitoring_interrupts
105.50 ? 47% -68.2% 33.50 ? 85% interrupts.CPU7.TLB:TLB_shootdowns
7.50 ? 6% +960.0% 79.50 ?144% interrupts.CPU72.RES:Rescheduling_interrupts
148.00 ? 27% +137.5% 351.50 ? 16% interrupts.CPU74.NMI:Non-maskable_interrupts
148.00 ? 27% +137.5% 351.50 ? 16% interrupts.CPU74.PMI:Performance_monitoring_interrupts
407.50 ? 4% +77.5% 723.50 ? 35% interrupts.CPU8.NMI:Non-maskable_interrupts
407.50 ? 4% +77.5% 723.50 ? 35% interrupts.CPU8.PMI:Performance_monitoring_interrupts
151.50 ? 9% -69.8% 45.75 ?139% interrupts.CPU8.TLB:TLB_shootdowns
101.50 ? 12% +192.4% 296.75 ? 39% interrupts.CPU88.NMI:Non-maskable_interrupts
101.50 ? 12% +192.4% 296.75 ? 39% interrupts.CPU88.PMI:Performance_monitoring_interrupts
1.00 ?100% +3975.0% 40.75 ?150% interrupts.CPU94.RES:Rescheduling_interrupts
167.00 ? 16% -56.7% 72.25 ? 77% interrupts.CPU96.TLB:TLB_shootdowns
474.50 ? 62% -81.1% 89.75 ? 76% interrupts.CPU97.TLB:TLB_shootdowns
191.00 -52.7% 90.25 ? 75% interrupts.CPU98.TLB:TLB_shootdowns





Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


Attachments:
(No filename) (159.51 kB)
config-5.10.0-rc3-00005-gc9847a7f9467 (172.74 kB)
job-script (8.37 kB)
job.yaml (5.74 kB)
reproduce (1.02 kB)

2020-11-26 10:41:32

by kernel test robot

Subject: [locking/rwsem] 25d0c60b0e: vm-scalability.throughput 316.2% improvement


Greetings,

FYI, we noticed a 316.2% improvement of vm-scalability.throughput due to commit:


commit: 25d0c60b0e409c6c69f7192516ccdb7bcb505c19 ("[PATCH v2 2/5] locking/rwsem: Prevent potential lock starvation")
url: https://github.com/0day-ci/linux/commits/Waiman-Long/locking-rwsem-Rework-reader-optimistic-spinning/20201121-122118
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 932f8c64d38bb08f69c8c26a2216ba0c36c6daa8

in testcase: vm-scalability
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:

runtime: 300s
test: small-allocs-mt
cpufreq_governor: performance
ucode: 0x5003003

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/





Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2ap4/small-allocs-mt/vm-scalability/0x5003003

commit:
8348c157e0 ("locking/rwsem: Pass the current atomic count to rwsem_down_read_slowpath()")
25d0c60b0e ("locking/rwsem: Prevent potential lock starvation")

8348c157e07d3544 25d0c60b0e409c6c69f7192516c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 5% 0:4 perf-profile.children.cycles-pp.error_return
0:4 32% 1:4 perf-profile.children.cycles-pp.error_entry
:4 12% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1196 ± 2% +316.4% 4981 ± 8% vm-scalability.median
667.71 ± 6% -639.9 27.79 ± 44% vm-scalability.stddev%
229812 ± 2% +316.2% 956546 ± 8% vm-scalability.throughput
302.81 +1.8% 308.20 vm-scalability.time.elapsed_time
302.81 +1.8% 308.20 vm-scalability.time.elapsed_time.max
4115 +52.5% 6276 ± 35% vm-scalability.time.maximum_resident_set_size
15368012 ± 2% +315.4% 63837201 ± 8% vm-scalability.time.minor_page_faults
1061 ± 4% -60.1% 423.50 ± 6% vm-scalability.time.percent_of_cpu_this_job_got
3182 ± 4% -69.2% 978.79 ± 9% vm-scalability.time.system_time
33.47 ± 2% +878.4% 327.44 ± 4% vm-scalability.time.user_time
5953763 ± 3% +919.5% 60697236 ± 11% vm-scalability.time.voluntary_context_switches
69123365 ± 2% +315.5% 2.872e+08 ± 8% vm-scalability.workload
4.365e+08 ±150% -91.6% 36710725 ± 9% cpuidle.C1.time
2830867 ±112% -88.9% 315177 ± 9% cpuidle.POLL.usage
1.05 ± 6% +0.5 1.55 ± 7% mpstat.cpu.all.irq%
0.09 ± 4% +0.0 0.12 ± 13% mpstat.cpu.all.soft%
5.48 ± 4% -4.0 1.50 ± 12% mpstat.cpu.all.sys%
0.08 ± 2% +0.4 0.51 mpstat.cpu.all.usr%
92.75 +3.5% 96.00 vmstat.cpu.id
9.50 ± 11% -65.8% 3.25 ± 13% vmstat.procs.r
41257 ± 3% +846.0% 390307 ± 11% vmstat.system.cs
366656 ± 4% +6.5% 390355 vmstat.system.in
16646 ± 5% -27.9% 12003 meminfo.Active
16551 ± 5% -28.1% 11908 meminfo.Active(anon)
2917185 +22.4% 3570846 ± 2% meminfo.Memused
64381 +296.8% 255438 ± 8% meminfo.PageTables
442782 +116.3% 957779 ± 5% meminfo.SUnreclaim
566580 +90.2% 1077629 ± 5% meminfo.Slab
11471 +20.3% 13802 ± 3% meminfo.max_used_kB
4008 ± 3% +323.4% 16970 ± 11% numa-vmstat.node0.nr_page_table_pages
33153 ± 6% +105.0% 67963 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
3967 ± 7% +296.6% 15734 ± 8% numa-vmstat.node1.nr_page_table_pages
26111 ± 9% +122.1% 57979 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
2352 ± 30% -34.3% 1545 ± 3% numa-vmstat.node2.nr_mapped
4198 ± 9% +263.5% 15262 ± 12% numa-vmstat.node2.nr_page_table_pages
26562 ± 4% +109.3% 55584 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
66550 ± 3% -6.9% 61940 ± 5% numa-vmstat.node2.nr_unevictable
66550 ± 3% -6.9% 61940 ± 5% numa-vmstat.node2.nr_zone_unevictable
2376 ± 20% -57.8% 1002 ± 15% numa-vmstat.node3.nr_active_anon
3906 ± 4% +306.9% 15894 ± 15% numa-vmstat.node3.nr_page_table_pages
8765 ±112% -81.6% 1609 ± 50% numa-vmstat.node3.nr_shmem
24897 +132.5% 57880 ± 10% numa-vmstat.node3.nr_slab_unreclaimable
2376 ± 20% -57.8% 1002 ± 15% numa-vmstat.node3.nr_zone_active_anon
4134 ± 5% -28.0% 2975 proc-vmstat.nr_active_anon
16125 +296.5% 63929 ± 8% proc-vmstat.nr_page_table_pages
30303 -3.8% 29143 proc-vmstat.nr_shmem
30948 -3.2% 29962 ± 3% proc-vmstat.nr_slab_reclaimable
110703 +116.3% 239400 ± 5% proc-vmstat.nr_slab_unreclaimable
4134 ± 5% -28.0% 2975 proc-vmstat.nr_zone_active_anon
1106323 +19.2% 1319145 ± 2% proc-vmstat.numa_hit
1013157 +21.0% 1225906 ± 2% proc-vmstat.numa_local
15861 ± 24% +39.2% 22071 ± 19% proc-vmstat.numa_pages_migrated
5504 ± 9% -43.7% 3096 ± 5% proc-vmstat.pgactivate
1227807 +28.8% 1581875 ± 2% proc-vmstat.pgalloc_normal
16526250 ± 2% +293.4% 65015954 ± 8% proc-vmstat.pgfault
1341677 +26.8% 1700733 ± 2% proc-vmstat.pgfree
15861 ± 24% +39.2% 22071 ± 19% proc-vmstat.pgmigrate_success
776705 ± 2% +32.2% 1026478 ± 17% numa-meminfo.node0.MemUsed
16065 ± 3% +322.6% 67896 ± 11% numa-meminfo.node0.PageTables
132651 ± 6% +104.9% 271835 ± 12% numa-meminfo.node0.SUnreclaim
170726 ± 7% +85.2% 316147 ± 13% numa-meminfo.node0.Slab
15896 ± 7% +296.0% 62943 ± 8% numa-meminfo.node1.PageTables
104481 ± 9% +122.0% 231901 ± 3% numa-meminfo.node1.SUnreclaim
134366 ± 9% +90.7% 256275 ± 4% numa-meminfo.node1.Slab
16837 ± 9% +262.7% 61059 ± 12% numa-meminfo.node2.PageTables
106291 ± 4% +109.1% 222297 ± 11% numa-meminfo.node2.SUnreclaim
137938 ± 7% +82.9% 252310 ± 11% numa-meminfo.node2.Slab
266206 ± 3% -6.9% 247762 ± 5% numa-meminfo.node2.Unevictable
9506 ± 20% -57.9% 4005 ± 16% numa-meminfo.node3.Active
9482 ± 20% -57.8% 4005 ± 16% numa-meminfo.node3.Active(anon)
661218 ± 3% +20.6% 797250 ± 4% numa-meminfo.node3.MemUsed
15657 ± 4% +306.3% 63614 ± 16% numa-meminfo.node3.PageTables
99634 +132.4% 231507 ± 10% numa-meminfo.node3.SUnreclaim
35078 ±112% -81.7% 6436 ± 50% numa-meminfo.node3.Shmem
123833 ± 6% +104.0% 252657 ± 9% numa-meminfo.node3.Slab
2885 ± 6% -16.3% 2414 ± 13% slabinfo.PING.active_objs
2885 ± 6% -16.3% 2414 ± 13% slabinfo.PING.num_objs
2267 ± 13% -30.6% 1573 ± 11% slabinfo.dmaengine-unmap-16.active_objs
2267 ± 13% -30.6% 1573 ± 11% slabinfo.dmaengine-unmap-16.num_objs
8926 ± 15% -31.4% 6125 ± 23% slabinfo.eventpoll_pwq.active_objs
8926 ± 15% -31.4% 6125 ± 23% slabinfo.eventpoll_pwq.num_objs
45756 +131.7% 106013 ± 6% slabinfo.kmalloc-512.active_objs
717.75 ± 2% +131.5% 1661 ± 6% slabinfo.kmalloc-512.active_slabs
45974 ± 2% +131.3% 106345 ± 6% slabinfo.kmalloc-512.num_objs
717.75 ± 2% +131.5% 1661 ± 6% slabinfo.kmalloc-512.num_slabs
3927 ± 6% -14.6% 3352 ± 5% slabinfo.kmalloc-rcl-512.active_objs
3927 ± 6% -14.6% 3352 ± 5% slabinfo.kmalloc-rcl-512.num_objs
16128 ± 7% -14.1% 13859 ± 15% slabinfo.pde_opener.active_objs
16128 ± 7% -14.1% 13859 ± 15% slabinfo.pde_opener.num_objs
13844 ± 2% -12.8% 12071 ± 10% slabinfo.pid.active_objs
13844 ± 2% -12.8% 12071 ± 10% slabinfo.pid.num_objs
767484 +321.9% 3238091 ± 8% slabinfo.vm_area_struct.active_objs
19187 +321.9% 80952 ± 8% slabinfo.vm_area_struct.active_slabs
767502 +321.9% 3238131 ± 8% slabinfo.vm_area_struct.num_objs
19187 +321.9% 80952 ± 8% slabinfo.vm_area_struct.num_slabs
7603 ± 14% -61.5% 2927 ± 8% sched_debug.cfs_rq:/.exec_clock.avg
10612 ± 11% -44.5% 5886 ± 5% sched_debug.cfs_rq:/.exec_clock.max
4793 ± 20% -53.3% 2237 ± 9% sched_debug.cfs_rq:/.exec_clock.min
809.62 ± 4% -51.2% 395.20 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
6538 ± 17% +29.4% 8463 ± 7% sched_debug.cfs_rq:/.load.avg
30298 ± 35% +56.1% 47283 ± 15% sched_debug.cfs_rq:/.load.stddev
141173 ± 9% -72.2% 39255 ± 12% sched_debug.cfs_rq:/.min_vruntime.avg
181613 ± 9% -66.7% 60537 ± 8% sched_debug.cfs_rq:/.min_vruntime.max
96324 ± 13% -71.6% 27369 ± 15% sched_debug.cfs_rq:/.min_vruntime.min
13658 ± 4% -66.0% 4644 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
77.48 ± 17% -52.2% 37.03 ± 5% sched_debug.cfs_rq:/.runnable_avg.avg
856.14 ± 5% -15.2% 726.07 ± 4% sched_debug.cfs_rq:/.runnable_avg.max
164.08 ± 11% -49.2% 83.31 ± 2% sched_debug.cfs_rq:/.runnable_avg.stddev
32899 ± 56% -102.8% -928.20 sched_debug.cfs_rq:/.spread0.avg
73337 ± 24% -72.2% 20391 ± 20% sched_debug.cfs_rq:/.spread0.max
13662 ± 4% -66.0% 4646 ± 9% sched_debug.cfs_rq:/.spread0.stddev
77.42 ± 17% -52.3% 36.89 ± 5% sched_debug.cfs_rq:/.util_avg.avg
856.08 ± 5% -15.2% 726.02 ± 4% sched_debug.cfs_rq:/.util_avg.max
164.05 ± 11% -49.2% 83.28 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
8.55 ± 38% -66.6% 2.85 ± 16% sched_debug.cfs_rq:/.util_est_enqueued.avg
448.67 ± 18% -39.0% 273.87 ± 22% sched_debug.cfs_rq:/.util_est_enqueued.max
48.86 ± 26% -52.5% 23.20 ± 19% sched_debug.cfs_rq:/.util_est_enqueued.stddev
16762 ± 43% +455.0% 93037 ± 37% sched_debug.cpu.avg_idle.min
214208 ± 7% -16.2% 179508 ± 5% sched_debug.cpu.avg_idle.stddev
0.00 ± 20% -22.1% 0.00 ± 18% sched_debug.cpu.next_balance.stddev
30943 ± 13% +781.2% 272691 ± 17% sched_debug.cpu.nr_switches.avg
47808 ± 11% +554.7% 312982 ± 16% sched_debug.cpu.nr_switches.max
23676 ± 15% +755.2% 202476 ± 17% sched_debug.cpu.nr_switches.min
3440 ± 8% +606.4% 24301 ± 60% sched_debug.cpu.nr_switches.stddev
-34.81 -45.2% -19.06 sched_debug.cpu.nr_uninterruptible.min
29514 ± 13% +818.5% 271089 ± 17% sched_debug.cpu.sched_count.avg
44064 ± 13% +601.9% 309272 ± 16% sched_debug.cpu.sched_count.max
21857 ± 20% +797.1% 196073 ± 17% sched_debug.cpu.sched_count.min
3214 ± 8% +652.7% 24196 ± 60% sched_debug.cpu.sched_count.stddev
14584 ± 13% +828.8% 135457 ± 17% sched_debug.cpu.sched_goidle.avg
21841 ± 13% +607.6% 154542 ± 16% sched_debug.cpu.sched_goidle.max
10794 ± 20% +808.0% 98017 ± 17% sched_debug.cpu.sched_goidle.min
1588 ± 8% +661.3% 12090 ± 60% sched_debug.cpu.sched_goidle.stddev
14867 ± 14% +820.7% 136878 ± 17% sched_debug.cpu.ttwu_count.avg
22737 ± 19% +615.8% 162759 ± 17% sched_debug.cpu.ttwu_count.max
11312 ± 16% +771.4% 98581 ± 18% sched_debug.cpu.ttwu_count.min
1545 ± 11% +774.9% 13523 ± 50% sched_debug.cpu.ttwu_count.stddev
433.61 ± 12% -12.6% 378.89 ± 10% sched_debug.cpu.ttwu_local.avg
199.16 ± 26% -44.0% 111.58 ± 9% sched_debug.cpu.ttwu_local.stddev
3.36 ± 37% +294.6% 13.26 ± 61% perf-stat.i.MPKI
2.787e+09 ± 2% -30.6% 1.934e+09 ± 7% perf-stat.i.branch-instructions
0.42 ± 27% +0.7 1.07 ± 54% perf-stat.i.branch-miss-rate%
11297054 ± 25% +77.2% 20022864 ± 42% perf-stat.i.branch-misses
7444899 ± 4% +203.2% 22572633 ± 14% perf-stat.i.cache-misses
38212772 ± 35% +138.7% 91225234 ± 50% perf-stat.i.cache-references
41651 ± 3% +848.9% 395217 ± 11% perf-stat.i.context-switches
3.36 -37.5% 2.10 ± 14% perf-stat.i.cpi
4.162e+10 ± 2% -64.1% 1.493e+10 ± 6% perf-stat.i.cpu-cycles
1531 ± 4% -46.5% 819.49 ± 12% perf-stat.i.cpu-migrations
5753 ± 5% -88.0% 692.05 ± 23% perf-stat.i.cycles-between-cache-misses
0.03 ± 59% +0.1 0.15 ± 80% perf-stat.i.dTLB-load-miss-rate%
875229 ± 57% +224.4% 2839103 ± 69% perf-stat.i.dTLB-load-misses
3.169e+09 ± 3% -36.2% 2.021e+09 ± 6% perf-stat.i.dTLB-loads
133897 ± 45% +149.2% 333711 ± 35% perf-stat.i.dTLB-store-misses
4.066e+08 +79.3% 7.29e+08 ± 6% perf-stat.i.dTLB-stores
3558573 ± 6% +62.5% 5780994 ± 13% perf-stat.i.iTLB-load-misses
3514690 ± 11% +79.5% 6307992 perf-stat.i.iTLB-loads
1.24e+10 ± 2% -41.0% 7.316e+09 ± 6% perf-stat.i.instructions
3491 ± 7% -63.6% 1272 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.30 +64.4% 0.49 ± 12% perf-stat.i.ipc
0.22 ± 2% -64.1% 0.08 ± 6% perf-stat.i.metric.GHz
33.37 ± 2% -25.2% 24.97 ± 5% perf-stat.i.metric.M/sec
54265 ± 2% +287.6% 210321 ± 8% perf-stat.i.minor-faults
93.72 +2.4 96.10 perf-stat.i.node-load-miss-rate%
1995262 +359.9% 9176181 ± 12% perf-stat.i.node-load-misses
138913 +169.7% 374605 ± 21% perf-stat.i.node-loads
731327 ± 3% +296.3% 2898344 ± 3% perf-stat.i.node-store-misses
23737 ± 30% +125.9% 53622 ± 12% perf-stat.i.node-stores
54266 ± 2% +287.6% 210321 ± 8% perf-stat.i.page-faults
3.07 ± 34% +323.2% 13.01 ± 60% perf-stat.overall.MPKI
0.41 ± 24% +0.7 1.08 ± 52% perf-stat.overall.branch-miss-rate%
3.36 -38.6% 2.06 ± 14% perf-stat.overall.cpi
5597 ± 5% -87.7% 685.84 ± 23% perf-stat.overall.cycles-between-cache-misses
0.03 ± 56% +0.1 0.15 ± 78% perf-stat.overall.dTLB-load-miss-rate%
3499 ± 7% -63.4% 1279 ± 7% perf-stat.overall.instructions-per-iTLB-miss
0.30 +65.6% 0.49 ± 12% perf-stat.overall.ipc
93.47 +2.6 96.10 perf-stat.overall.node-load-miss-rate%
54516 -85.5% 7894 perf-stat.overall.path-length
2.778e+09 ± 2% -30.7% 1.925e+09 ± 7% perf-stat.ps.branch-instructions
11269337 ± 25% +76.9% 19937196 ± 42% perf-stat.ps.branch-misses
7422896 ± 4% +202.7% 22465714 ± 14% perf-stat.ps.cache-misses
38100277 ± 35% +138.4% 90838247 ± 50% perf-stat.ps.cache-references
41502 ± 3% +846.6% 392864 ± 11% perf-stat.ps.context-switches
4.147e+10 ± 2% -64.1% 1.488e+10 ± 6% perf-stat.ps.cpu-cycles
1526 ± 4% -46.5% 816.01 ± 12% perf-stat.ps.cpu-migrations
872505 ± 57% +223.8% 2824945 ± 69% perf-stat.ps.dTLB-load-misses
3.158e+09 ± 3% -36.3% 2.012e+09 ± 6% perf-stat.ps.dTLB-loads
133483 ± 45% +148.8% 332159 ± 35% perf-stat.ps.dTLB-store-misses
4.054e+08 +79.1% 7.261e+08 ± 6% perf-stat.ps.dTLB-stores
3546688 ± 6% +62.2% 5754354 ± 13% perf-stat.ps.iTLB-load-misses
3502780 ± 11% +79.2% 6277848 perf-stat.ps.iTLB-loads
1.236e+10 ± 2% -41.0% 7.285e+09 ± 6% perf-stat.ps.instructions
54072 ± 2% +286.7% 209072 ± 8% perf-stat.ps.minor-faults
1989790 +359.0% 9132656 ± 12% perf-stat.ps.node-load-misses
138874 +169.9% 374856 ± 21% perf-stat.ps.node-loads
728929 ± 3% +295.3% 2881399 ± 3% perf-stat.ps.node-store-misses
23693 ± 30% +125.5% 53429 ± 12% perf-stat.ps.node-stores
54073 ± 2% +286.7% 209073 ± 8% perf-stat.ps.page-faults
3.768e+12 ± 2% -39.9% 2.264e+12 ± 7% perf-stat.total.instructions
13031 ± 16% +130.4% 30021 ± 8% softirqs.CPU0.RCU
9723 ± 8% +195.3% 28709 ± 10% softirqs.CPU1.RCU
9955 ± 9% +184.5% 28324 ± 10% softirqs.CPU10.RCU
9190 ± 7% +205.9% 28113 ± 14% softirqs.CPU100.RCU
9262 ± 6% +196.5% 27467 ± 13% softirqs.CPU101.RCU
9274 ± 8% +210.9% 28833 ± 11% softirqs.CPU102.RCU
9198 ± 8% +210.3% 28546 ± 11% softirqs.CPU103.RCU
9420 ± 5% +207.0% 28918 ± 10% softirqs.CPU104.RCU
9224 ± 6% +213.7% 28939 ± 9% softirqs.CPU105.RCU
9136 ± 6% +208.8% 28218 ± 10% softirqs.CPU106.RCU
9148 ± 10% +215.1% 28821 ± 11% softirqs.CPU107.RCU
9377 ± 8% +206.1% 28699 ± 11% softirqs.CPU108.RCU
9284 ± 6% +209.2% 28704 ± 12% softirqs.CPU109.RCU
9468 ± 16% +199.1% 28321 ± 13% softirqs.CPU11.RCU
9361 ± 6% +209.2% 28949 ± 11% softirqs.CPU110.RCU
9191 ± 6% +210.7% 28557 ± 12% softirqs.CPU111.RCU
9100 ± 8% +194.7% 26820 ± 11% softirqs.CPU112.RCU
9126 ± 4% +194.0% 26832 ± 11% softirqs.CPU113.RCU
9215 ± 7% +195.5% 27232 ± 11% softirqs.CPU114.RCU
8951 ± 5% +203.3% 27147 ± 8% softirqs.CPU115.RCU
9168 ± 7% +189.1% 26503 ± 9% softirqs.CPU116.RCU
9291 ± 5% +189.7% 26921 ± 10% softirqs.CPU117.RCU
9100 ± 6% +195.1% 26853 ± 11% softirqs.CPU118.RCU
9607 ± 7% +178.2% 26726 ± 10% softirqs.CPU119.RCU
9434 ± 6% +197.1% 28032 ± 11% softirqs.CPU12.RCU
9329 ± 6% +189.6% 27019 ± 10% softirqs.CPU120.RCU
9168 ± 6% +190.8% 26657 ± 10% softirqs.CPU121.RCU
9302 ± 6% +184.7% 26483 ± 10% softirqs.CPU122.RCU
9348 ± 6% +187.2% 26845 ± 11% softirqs.CPU123.RCU
9429 ± 5% +184.1% 26786 ± 11% softirqs.CPU124.RCU
9351 ± 7% +176.9% 25893 ± 11% softirqs.CPU125.RCU
9160 ± 6% +189.7% 26536 ± 9% softirqs.CPU126.RCU
9265 ± 7% +187.9% 26671 ± 10% softirqs.CPU127.RCU
9401 ± 6% +203.9% 28570 ± 9% softirqs.CPU128.RCU
9895 ± 11% +187.2% 28415 ± 9% softirqs.CPU129.RCU
9238 ± 5% +208.7% 28520 ± 9% softirqs.CPU13.RCU
9430 ± 6% +203.2% 28594 ± 9% softirqs.CPU130.RCU
9585 ± 7% +201.9% 28935 ± 10% softirqs.CPU131.RCU
9616 ± 5% +203.5% 29185 ± 9% softirqs.CPU132.RCU
9811 ± 6% +193.2% 28761 ± 10% softirqs.CPU133.RCU
9600 ± 7% +196.3% 28448 ± 8% softirqs.CPU134.RCU
9442 ± 6% +202.4% 28550 ± 9% softirqs.CPU135.RCU
9705 ± 9% +200.1% 29131 ± 10% softirqs.CPU136.RCU
9556 ± 7% +204.3% 29081 ± 10% softirqs.CPU137.RCU
9558 ± 7% +204.2% 29078 ± 9% softirqs.CPU138.RCU
9434 ± 7% +202.0% 28489 ± 10% softirqs.CPU139.RCU
9874 ± 3% +191.2% 28755 ± 12% softirqs.CPU14.RCU
9499 ± 6% +204.0% 28877 ± 9% softirqs.CPU140.RCU
9491 ± 7% +205.4% 28989 ± 10% softirqs.CPU141.RCU
9479 ± 9% +198.9% 28336 ± 9% softirqs.CPU142.RCU
9923 ± 6% +193.8% 29153 ± 10% softirqs.CPU143.RCU
9412 ± 7% +225.2% 30610 ± 8% softirqs.CPU144.RCU
9173 ± 6% +219.0% 29263 ± 10% softirqs.CPU145.RCU
9452 ± 6% +212.5% 29540 ± 10% softirqs.CPU146.RCU
9542 ± 6% +207.5% 29342 ± 10% softirqs.CPU147.RCU
9624 ± 6% +203.0% 29158 ± 9% softirqs.CPU148.RCU
9559 ± 7% +212.1% 29835 ± 9% softirqs.CPU149.RCU
9461 ± 7% +204.7% 28826 ± 11% softirqs.CPU15.RCU
9513 ± 6% +215.6% 30027 ± 10% softirqs.CPU150.RCU
9452 ± 8% +211.6% 29450 ± 10% softirqs.CPU151.RCU
9427 ± 8% +211.2% 29335 ± 11% softirqs.CPU152.RCU
9778 ± 10% +203.4% 29667 ± 11% softirqs.CPU153.RCU
9480 ± 6% +217.0% 30049 ± 10% softirqs.CPU154.RCU
9452 ± 7% +213.0% 29587 ± 10% softirqs.CPU155.RCU
9299 ± 6% +215.5% 29338 ± 10% softirqs.CPU156.RCU
9307 ± 6% +215.3% 29350 ± 9% softirqs.CPU157.RCU
9360 ± 7% +216.2% 29602 ± 10% softirqs.CPU158.RCU
9369 ± 6% +215.8% 29586 ± 10% softirqs.CPU159.RCU
9377 ± 6% +187.5% 26962 ± 11% softirqs.CPU16.RCU
9384 ± 8% +190.0% 27211 ± 10% softirqs.CPU160.RCU
9247 ± 7% +198.5% 27603 ± 10% softirqs.CPU161.RCU
9019 ± 9% +200.6% 27110 ± 9% softirqs.CPU162.RCU
9284 ± 8% +196.0% 27476 ± 11% softirqs.CPU163.RCU
9165 ± 7% +197.5% 27267 ± 10% softirqs.CPU164.RCU
9161 ± 8% +195.7% 27087 ± 10% softirqs.CPU165.RCU
9418 ± 9% +195.1% 27790 ± 11% softirqs.CPU166.RCU
9416 ± 9% +193.6% 27650 ± 10% softirqs.CPU167.RCU
8764 ± 6% +207.6% 26959 ± 9% softirqs.CPU168.RCU
8764 ± 7% +201.5% 26420 ± 9% softirqs.CPU169.RCU
9672 ± 4% +180.0% 27084 ± 11% softirqs.CPU17.RCU
8704 ± 6% +201.9% 26276 ± 9% softirqs.CPU170.RCU
8775 ± 6% +200.9% 26407 ± 9% softirqs.CPU171.RCU
8878 ± 7% +192.4% 25956 ± 9% softirqs.CPU172.RCU
8851 ± 9% +200.1% 26562 ± 9% softirqs.CPU173.RCU
8764 ± 6% +202.9% 26546 ± 9% softirqs.CPU174.RCU
8850 ± 6% +201.7% 26704 ± 9% softirqs.CPU175.RCU
8943 ± 7% +223.5% 28932 ± 8% softirqs.CPU176.RCU
8949 ± 8% +218.6% 28508 ± 8% softirqs.CPU177.RCU
8906 ± 8% +224.6% 28910 ± 10% softirqs.CPU178.RCU
8947 ± 8% +223.0% 28898 ± 10% softirqs.CPU179.RCU
9371 ± 5% +191.2% 27292 ± 11% softirqs.CPU18.RCU
8808 ± 8% +225.7% 28685 ± 10% softirqs.CPU180.RCU
8764 ± 7% +222.6% 28277 ± 10% softirqs.CPU181.RCU
8905 ± 8% +221.8% 28660 ± 9% softirqs.CPU182.RCU
8986 ± 7% +219.2% 28687 ± 9% softirqs.CPU183.RCU
8950 ± 8% +225.4% 29123 ± 10% softirqs.CPU184.RCU
9082 ± 7% +213.8% 28499 ± 9% softirqs.CPU185.RCU
8929 ± 7% +223.2% 28861 ± 8% softirqs.CPU186.RCU
9052 ± 9% +223.4% 29276 ± 9% softirqs.CPU187.RCU
8956 ± 7% +221.6% 28806 ± 10% softirqs.CPU188.RCU
8887 ± 8% +222.4% 28653 ± 9% softirqs.CPU189.RCU
9372 ± 6% +185.8% 26783 ± 11% softirqs.CPU19.RCU
8966 ± 8% +219.6% 28659 ± 9% softirqs.CPU190.RCU
8864 ± 7% +222.5% 28584 ± 10% softirqs.CPU191.RCU
9523 ± 6% +205.0% 29042 ± 11% softirqs.CPU2.RCU
9389 ± 4% +183.1% 26577 ± 9% softirqs.CPU20.RCU
9594 ± 4% +181.8% 27042 ± 10% softirqs.CPU21.RCU
9492 ± 7% +184.4% 26998 ± 11% softirqs.CPU22.RCU
9330 ± 5% +190.5% 27101 ± 10% softirqs.CPU23.RCU
9717 ± 7% +180.1% 27214 ± 11% softirqs.CPU24.RCU
9250 ± 6% +186.9% 26543 ± 10% softirqs.CPU25.RCU
9547 ± 6% +175.0% 26250 ± 10% softirqs.CPU26.RCU
9477 ± 6% +183.0% 26825 ± 10% softirqs.CPU27.RCU
9565 ± 5% +178.4% 26635 ± 10% softirqs.CPU28.RCU
9512 ± 6% +176.1% 26261 ± 12% softirqs.CPU29.RCU
9562 ± 6% +212.1% 29846 ± 13% softirqs.CPU3.RCU
9942 ± 7% +168.2% 26667 ± 10% softirqs.CPU30.RCU
9436 ± 6% +181.7% 26583 ± 10% softirqs.CPU31.RCU
9846 ± 6% +193.7% 28920 ± 9% softirqs.CPU32.RCU
9677 ± 6% +189.1% 27977 ± 9% softirqs.CPU33.RCU
9964 ± 4% +189.9% 28883 ± 9% softirqs.CPU34.RCU
9900 ± 6% +194.2% 29129 ± 10% softirqs.CPU35.RCU
9928 ± 6% +195.2% 29310 ± 9% softirqs.CPU36.RCU
9678 ± 6% +198.7% 28911 ± 10% softirqs.CPU37.RCU
9788 ± 5% +192.5% 28631 ± 8% softirqs.CPU38.RCU
9702 ± 6% +192.4% 28367 ± 9% softirqs.CPU39.RCU
9581 ± 7% +193.3% 28106 ± 17% softirqs.CPU4.RCU
9943 ± 7% +192.1% 29041 ± 10% softirqs.CPU40.RCU
9909 ± 5% +195.7% 29301 ± 9% softirqs.CPU41.RCU
9861 ± 6% +197.7% 29353 ± 9% softirqs.CPU42.RCU
9850 ± 6% +195.0% 29055 ± 9% softirqs.CPU43.RCU
9714 ± 5% +197.9% 28934 ± 10% softirqs.CPU44.RCU
9853 ± 5% +193.2% 28894 ± 10% softirqs.CPU45.RCU
9750 ± 6% +187.3% 28016 ± 10% softirqs.CPU46.RCU
9926 ± 6% +194.2% 29199 ± 10% softirqs.CPU47.RCU
9823 ± 7% +209.3% 30385 ± 9% softirqs.CPU48.RCU
10388 ± 23% +179.6% 29047 ± 10% softirqs.CPU49.RCU
9575 ± 7% +196.1% 28351 ± 11% softirqs.CPU5.RCU
9873 ± 8% +199.2% 29537 ± 10% softirqs.CPU50.RCU
9980 ± 6% +200.7% 30010 ± 10% softirqs.CPU51.RCU
10115 ± 6% +194.6% 29797 ± 10% softirqs.CPU52.RCU
10001 ± 8% +203.1% 30318 ± 9% softirqs.CPU53.RCU
9707 ± 6% +215.4% 30615 ± 10% softirqs.CPU54.RCU
9804 ± 7% +205.9% 29995 ± 10% softirqs.CPU55.RCU
9695 ± 7% +205.4% 29610 ± 10% softirqs.CPU56.RCU
9824 ± 5% +199.4% 29412 ± 10% softirqs.CPU57.RCU
10110 ± 5% +196.6% 29987 ± 10% softirqs.CPU58.RCU
10778 ± 12% +179.8% 30154 ± 8% softirqs.CPU59.RCU
9646 ± 5% +198.1% 28755 ± 11% softirqs.CPU6.RCU
9749 ± 5% +215.3% 30737 ± 11% softirqs.CPU60.RCU
9617 ± 6% +209.9% 29805 ± 9% softirqs.CPU61.RCU
10466 ± 6% +181.8% 29497 ± 10% softirqs.CPU62.RCU
9682 ± 7% +206.8% 29706 ± 10% softirqs.CPU63.RCU
9892 ± 12% +182.7% 27967 ± 9% softirqs.CPU64.RCU
9929 ± 5% +181.0% 27902 ± 10% softirqs.CPU65.RCU
9763 ± 5% +185.7% 27889 ± 11% softirqs.CPU66.RCU
9671 ± 4% +187.6% 27812 ± 10% softirqs.CPU67.RCU
9612 ± 7% +191.9% 28059 ± 11% softirqs.CPU68.RCU
9586 ± 7% +186.9% 27506 ± 10% softirqs.CPU69.RCU
9531 ± 7% +200.8% 28668 ± 12% softirqs.CPU7.RCU
9803 ± 4% +185.8% 28018 ± 10% softirqs.CPU70.RCU
9494 ± 7% +193.4% 27856 ± 10% softirqs.CPU71.RCU
9245 ± 7% +195.7% 27334 ± 10% softirqs.CPU72.RCU
9106 ± 6% +195.0% 26861 ± 10% softirqs.CPU73.RCU
9150 ± 7% +193.8% 26883 ± 9% softirqs.CPU74.RCU
9181 ± 7% +202.9% 27806 ± 13% softirqs.CPU75.RCU
9080 ± 7% +196.4% 26916 ± 9% softirqs.CPU76.RCU
9098 ± 7% +194.3% 26772 ± 9% softirqs.CPU77.RCU
9200 ± 7% +193.7% 27024 ± 9% softirqs.CPU78.RCU
9149 ± 6% +196.5% 27132 ± 9% softirqs.CPU79.RCU
9495 ± 6% +203.1% 28784 ± 11% softirqs.CPU8.RCU
9497 ± 8% +203.2% 28794 ± 9% softirqs.CPU80.RCU
9232 ± 7% +210.7% 28682 ± 8% softirqs.CPU81.RCU
9316 ± 8% +211.3% 29003 ± 10% softirqs.CPU82.RCU
9212 ± 8% +215.6% 29077 ± 10% softirqs.CPU83.RCU
9107 ± 6% +212.7% 28482 ± 9% softirqs.CPU84.RCU
9086 ± 6% +213.5% 28481 ± 10% softirqs.CPU85.RCU
9231 ± 7% +212.2% 28819 ± 9% softirqs.CPU86.RCU
9219 ± 7% +211.8% 28743 ± 9% softirqs.CPU87.RCU
9605 ± 7% +202.0% 29009 ± 9% softirqs.CPU88.RCU
9279 ± 8% +210.5% 28816 ± 9% softirqs.CPU89.RCU
9614 ± 7% +195.5% 28407 ± 11% softirqs.CPU9.RCU
9280 ± 8% +209.8% 28749 ± 8% softirqs.CPU90.RCU
9442 ± 10% +209.8% 29249 ± 9% softirqs.CPU91.RCU
9308 ± 7% +210.4% 28897 ± 9% softirqs.CPU92.RCU
9138 ± 7% +213.5% 28648 ± 10% softirqs.CPU93.RCU
9259 ± 7% +213.4% 29016 ± 10% softirqs.CPU94.RCU
9233 ± 7% +214.2% 29010 ± 10% softirqs.CPU95.RCU
9109 ± 7% +201.9% 27499 ± 11% softirqs.CPU96.RCU
9908 ± 11% +187.2% 28454 ± 10% softirqs.CPU97.RCU
9298 ± 6% +213.8% 29181 ± 11% softirqs.CPU98.RCU
9378 ± 7% +208.5% 28930 ± 9% softirqs.CPU99.RCU
1813814 ± 6% +199.9% 5440281 ± 9% softirqs.RCU
60299 ± 2% -15.0% 51225 ± 4% softirqs.TIMER
53.78 ± 5% -53.4 0.34 ±100% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
48.86 ± 5% -48.9 0.00 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault
54.66 ± 5% -47.9 6.78 ± 19% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
55.11 ± 5% -45.7 9.38 ± 14% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
55.12 ± 5% -45.7 9.42 ± 14% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
55.19 ± 5% -44.7 10.50 ± 10% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
55.32 ± 5% -43.5 11.85 ± 9% perf-profile.calltrace.cycles-pp.do_access
3.66 ± 59% -3.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.66 ± 59% -3.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.63 ± 59% -3.6 0.00 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.62 ± 59% -3.6 0.00 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.73 ± 20% -3.0 0.76 ± 57% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff
3.95 ± 19% -2.8 1.13 ± 24% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
4.06 ± 19% -1.7 2.33 ± 14% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
4.07 ± 19% -1.7 2.40 ± 13% perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.56 ± 9% +0.4 1.00 ± 19% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule.rwsem_down_read_slowpath.do_user_addr_fault
0.00 +0.6 0.56 ± 14% perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 +0.6 0.57 perf-profile.calltrace.cycles-pp.unwind_next_frame.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity
0.00 +0.6 0.58 ± 14% perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.00 +0.7 0.66 ± 10% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up
0.68 ± 14% +0.7 1.36 ± 12% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
0.00 +0.7 0.69 ± 7% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu
0.68 ± 14% +0.7 1.37 ± 11% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.7 0.69 ± 14% perf-profile.calltrace.cycles-pp.ttwu_queue_wakelist.try_to_wake_up.wake_up_q.rwsem_wake.vm_mmap_pgoff
0.68 ± 15% +0.7 1.38 ± 11% perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +0.7 0.72 ± 8% perf-profile.calltrace.cycles-pp.update_cfs_rq_h_load.select_task_rq_fair.try_to_wake_up.wake_up_q.rwsem_wake
0.00 +0.7 0.72 ± 17% perf-profile.calltrace.cycles-pp.ktime_get.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +0.7 0.73 ± 16% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.8 0.76 ± 3% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +0.8 0.77 ± 9% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.00 +0.8 0.79 ± 8% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.rwsem_down_read_slowpath
0.00 +0.8 0.83 ± 2% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +0.8 0.84 ± 7% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.78 ± 13% +0.8 1.62 ± 12% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.8 0.85 ± 6% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.42 ± 58% +0.9 1.28 ± 6% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
0.00 +0.9 0.89 ± 6% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.rwsem_down_read_slowpath.do_user_addr_fault
0.28 ±100% +0.9 1.20 ± 9% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.67 ± 12% +0.9 1.59 ± 8% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.9 0.93 ± 11% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q
0.00 +1.0 0.97 ± 11% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake
0.00 +1.0 0.97 ± 11% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake.vm_mmap_pgoff
0.00 +1.0 0.97 ± 11% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.70 ± 12% +1.0 1.68 ± 9% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
0.90 ± 7% +1.0 1.90 ± 10% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +1.0 1.02 ± 9% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.wake_up_q.rwsem_wake.vm_mmap_pgoff
0.00 +1.1 1.08 ± 6% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
0.00 +1.1 1.10 ± 2% perf-profile.calltrace.cycles-pp.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.71 ± 9% +1.1 1.82 ± 5% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +1.2 1.19 ± 2% perf-profile.calltrace.cycles-pp.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate
0.93 ± 13% +1.3 2.23 ± 6% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack
1.00 ± 6% +1.3 2.32 ± 7% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.00 +1.4 1.41 ± 12% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +1.4 1.45 ± 11% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +1.4 1.45 ± 9% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.44 ± 12% +1.9 3.31 ± 6% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt
0.00 +2.0 2.04 ± 4% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending
2.34 ± 25% +2.1 4.48 ± 6% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +2.4 2.35 ± 54% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault
0.00 +2.4 2.40 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault
0.00 +2.5 2.46 ± 4% perf-profile.calltrace.cycles-pp.schedule.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +2.6 2.60 ± 50% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.rwsem_down_read_slowpath.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 +3.1 3.14 ± 3% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.flush_smp_call_function_from_idle
0.00 +3.5 3.52 ± 12% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 +3.5 3.53 ± 3% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.sched_ttwu_pending.flush_smp_call_function_from_idle.do_idle
0.00 +3.5 3.54 ± 3% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry
0.00 +3.6 3.63 ± 12% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
2.53 ± 11% +3.8 6.35 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
2.57 ± 11% +3.9 6.47 ± 3% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
2.57 ± 11% +3.9 6.50 ± 3% perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +4.1 4.08 ± 2% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry.start_secondary
0.00 +4.2 4.24 ± 12% perf-profile.calltrace.cycles-pp.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +4.5 4.46 ± 2% perf-profile.calltrace.cycles-pp.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
0.00 +5.1 5.08 ± 7% perf-profile.calltrace.cycles-pp.do_rw_once
4.22 ± 12% +6.0 10.22 ± 3% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
5.61 ± 21% +6.8 12.43 ± 2% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.84 ±173% +7.3 8.15 ± 8% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
0.84 ±173% +7.4 8.22 ± 8% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
0.84 ±173% +7.4 8.23 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
0.84 ±173% +7.4 8.25 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
0.85 ±173% +7.6 8.40 ± 8% perf-profile.calltrace.cycles-pp.__mmap
29.85 ± 14% +18.6 48.47 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
36.40 ± 10% +26.2 62.55 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
35.61 ± 10% +26.3 61.89 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
39.30 ± 9% +35.2 74.48 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
39.30 ? 9% +35.2 74.54 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
39.30 ? 9% +35.2 74.55 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
39.54 ? 9% +35.3 74.86 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
58.12 ± 6% -56.5 1.65 ± 29% perf-profile.children.cycles-pp.rwsem_optimistic_spin
52.61 ± 6% -51.7 0.89 ± 29% perf-profile.children.cycles-pp.osq_lock
54.66 ± 5% -47.9 6.80 ± 19% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
55.14 ± 5% -45.7 9.45 ± 14% perf-profile.children.cycles-pp.do_user_addr_fault
55.14 ± 5% -45.7 9.48 ± 14% perf-profile.children.cycles-pp.exc_page_fault
55.23 ± 5% -45.0 10.27 ± 12% perf-profile.children.cycles-pp.asm_exc_page_fault
55.54 ± 5% -41.8 13.71 ± 6% perf-profile.children.cycles-pp.do_access
4.06 ± 19% -1.7 2.34 ± 14% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
4.07 ± 19% -1.7 2.40 ± 13% perf-profile.children.cycles-pp.down_write_killable
0.97 ± 9% -0.8 0.15 ± 28% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.vfs_read
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__calc_delta
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.ksys_read
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__rb_insert_augmented
0.00 +0.1 0.06 ± 17% perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.20 ± 12% +0.1 0.27 ± 11% perf-profile.children.cycles-pp._raw_spin_trylock
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.__unwind_start
0.03 ±100% +0.1 0.09 ± 11% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.1 0.07 ± 23% perf-profile.children.cycles-pp.delay_tsc
0.01 ±173% +0.1 0.08 ± 13% perf-profile.children.cycles-pp.trigger_load_balance
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.switch_fpu_return
0.00 +0.1 0.07 ± 20% perf-profile.children.cycles-pp.in_sched_functions
0.00 +0.1 0.07 ± 22% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.1 0.07 ± 34% perf-profile.children.cycles-pp.worker_thread
0.00 +0.1 0.08 ± 14% perf-profile.children.cycles-pp.__wrgsbase_inactive
0.12 ± 13% +0.1 0.19 ± 11% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.get_cpu_device
0.00 +0.1 0.08 ± 16% perf-profile.children.cycles-pp.put_prev_task_fair
0.00 +0.1 0.08 ± 30% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.08 ± 15% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.00 +0.1 0.08 ± 17% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.08 ± 19% perf-profile.children.cycles-pp.note_gp_changes
0.00 +0.1 0.08 ± 10% perf-profile.children.cycles-pp.irqentry_exit_to_user_mode
0.12 ± 13% +0.1 0.20 ± 10% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.00 +0.1 0.09 ± 21% perf-profile.children.cycles-pp.rb_next
0.00 +0.1 0.09 ± 7% perf-profile.children.cycles-pp.rb_erase
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.copy_fpregs_to_fpstate
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.01 ±173% +0.1 0.11 ± 13% perf-profile.children.cycles-pp.kthread
0.00 +0.1 0.10 ± 21% perf-profile.children.cycles-pp.is_bpf_text_address
0.00 +0.1 0.10 ± 18% perf-profile.children.cycles-pp.reweight_entity
0.01 ±173% +0.1 0.11 ± 15% perf-profile.children.cycles-pp.ret_from_fork
0.03 ±100% +0.1 0.12 ± 21% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.1 0.10 ± 7% perf-profile.children.cycles-pp.orc_find
0.00 +0.1 0.10 ± 18% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.10 ± 41% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.02 ±173% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.check_preempt_curr
0.00 +0.1 0.11 ± 20% perf-profile.children.cycles-pp.menu_reflect
0.10 ± 34% +0.1 0.21 ± 45% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.00 +0.1 0.11 ± 31% perf-profile.children.cycles-pp.vm_area_alloc
0.00 +0.1 0.11 ± 14% perf-profile.children.cycles-pp.run_local_timers
0.00 +0.1 0.11 ± 13% perf-profile.children.cycles-pp.wake_q_add
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.perf_event_mmap
0.00 +0.1 0.12 ± 3% perf-profile.children.cycles-pp.call_cpuidle
0.03 ±100% +0.1 0.15 ± 21% perf-profile.children.cycles-pp.__remove_hrtimer
0.00 +0.1 0.12 ± 25% perf-profile.children.cycles-pp.kmem_cache_alloc
0.03 ±100% +0.1 0.15 ± 23% perf-profile.children.cycles-pp.timerqueue_add
0.07 ± 20% +0.1 0.20 ± 11% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.13 ± 12% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.00 +0.1 0.13 ± 35% perf-profile.children.cycles-pp.io_serial_in
0.00 +0.1 0.13 ± 6% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.14 ± 12% perf-profile.children.cycles-pp.__perf_sw_event
0.03 ±100% +0.1 0.16 ± 18% perf-profile.children.cycles-pp.enqueue_hrtimer
0.00 +0.1 0.14 ± 30% perf-profile.children.cycles-pp.vmacache_find
0.03 ±100% +0.1 0.17 ± 14% perf-profile.children.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.15 ± 2% perf-profile.children.cycles-pp.__sysvec_call_function_single
0.00 +0.2 0.15 ± 14% perf-profile.children.cycles-pp.wake_q_add_safe
0.03 ±100% +0.2 0.19 ± 13% perf-profile.children.cycles-pp.update_irq_load_avg
0.00 +0.2 0.16 ± 17% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.sysvec_call_function_single
0.16 ± 26% +0.2 0.32 ± 19% perf-profile.children.cycles-pp.update_blocked_averages
0.15 ± 7% +0.2 0.30 ± 16% perf-profile.children.cycles-pp.calc_global_load_tick
0.00 +0.2 0.16 ± 32% perf-profile.children.cycles-pp.rcu_core
0.17 ± 24% +0.2 0.33 ± 18% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.2 0.17 ± 30% perf-profile.children.cycles-pp.rcu_eqs_enter
0.38 ± 20% +0.2 0.55 ± 6% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.01 ±173% +0.2 0.18 ± 32% perf-profile.children.cycles-pp.serial8250_console_putchar
0.00 +0.2 0.18 ± 4% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
0.01 ±173% +0.2 0.19 ± 33% perf-profile.children.cycles-pp.uart_console_write
0.01 ±173% +0.2 0.19 ± 32% perf-profile.children.cycles-pp.wait_for_xmitr
0.04 ± 58% +0.2 0.22 ± 4% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.01 ±173% +0.2 0.20 ± 32% perf-profile.children.cycles-pp.serial8250_console_write
0.00 +0.2 0.19 ± 12% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.sysvec_irq_work
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.__sysvec_irq_work
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.irq_work_run
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.irq_work_single
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.printk
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.vprintk_emit
0.03 ±100% +0.2 0.21 ± 32% perf-profile.children.cycles-pp.console_unlock
0.03 ±100% +0.2 0.23 ± 20% perf-profile.children.cycles-pp.find_vma
0.03 ±100% +0.2 0.22 ± 32% perf-profile.children.cycles-pp.irq_work_run_list
0.08 ± 17% +0.2 0.28 ± 19% perf-profile.children.cycles-pp.newidle_balance
0.07 ± 19% +0.2 0.28 ± 12% perf-profile.children.cycles-pp.rcu_idle_exit
0.00 +0.2 0.20 ± 6% perf-profile.children.cycles-pp.__orc_find
0.04 ± 59% +0.2 0.25 ± 16% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.16 ± 10% +0.2 0.37 ± 9% perf-profile.children.cycles-pp.irqtime_account_irq
0.00 +0.2 0.21 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.00 +0.2 0.21 ± 7% perf-profile.children.cycles-pp.llist_reverse_order
0.00 +0.2 0.22 ± 16% perf-profile.children.cycles-pp.cpuacct_charge
0.04 ± 58% +0.2 0.26 ± 8% perf-profile.children.cycles-pp.down_read_trylock
0.06 ± 11% +0.2 0.29 ± 17% perf-profile.children.cycles-pp.down_read
0.16 ± 11% +0.2 0.39 ± 20% perf-profile.children.cycles-pp.update_sd_lb_stats
0.00 +0.2 0.23 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
0.00 +0.2 0.23 ± 17% perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.2 0.23 ± 14% perf-profile.children.cycles-pp.send_call_function_single_ipi
0.17 ± 13% +0.2 0.41 ± 21% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.2 0.25 ± 20% perf-profile.children.cycles-pp.sync_regs
0.00 +0.2 0.25 ± 11% perf-profile.children.cycles-pp.__list_del_entry_valid
0.04 ± 59% +0.3 0.30 ± 6% perf-profile.children.cycles-pp.update_cfs_group
0.00 +0.3 0.26 ± 4% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.3 0.26 ± 7% perf-profile.children.cycles-pp.__switch_to
0.08 ± 11% +0.3 0.34 ± 12% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.3 0.28 ± 8% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.07 ± 10% +0.3 0.35 ± 6% perf-profile.children.cycles-pp.up_read
0.42 ± 13% +0.3 0.70 ± 8% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.3 0.30 ± 12% perf-profile.children.cycles-pp.kernel_text_address
0.09 ± 15% +0.3 0.39 ± 10% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.3 0.30 ± 14% perf-profile.children.cycles-pp.update_ts_time_stats
0.00 +0.3 0.31 ± 18% perf-profile.children.cycles-pp.nr_iowait_cpu
0.06 ± 11% +0.3 0.38 ± 12% perf-profile.children.cycles-pp.do_anonymous_page
0.21 ± 13% +0.3 0.53 ± 20% perf-profile.children.cycles-pp.load_balance
0.03 ±100% +0.3 0.35 ± 4% perf-profile.children.cycles-pp.vma_interval_tree_insert
0.09 ± 15% +0.3 0.42 ± 4% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.18 ± 14% +0.3 0.51 ± 5% perf-profile.children.cycles-pp.lapic_next_deadline
0.00 +0.3 0.34 ± 9% perf-profile.children.cycles-pp.__kernel_text_address
0.00 +0.4 0.36 ± 9% perf-profile.children.cycles-pp.unwind_get_return_address
0.00 +0.4 0.38 ± 7% perf-profile.children.cycles-pp.set_next_entity
0.15 ± 14% +0.4 0.53 ± 4% perf-profile.children.cycles-pp.read_tsc
0.00 +0.4 0.38 ± 13% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.00 +0.4 0.38 ± 20% perf-profile.children.cycles-pp.llist_add_batch
0.00 +0.4 0.39 ± 7% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.4 0.39 ± 19% perf-profile.children.cycles-pp.__smp_call_single_queue
0.34 ± 16% +0.4 0.74 ± 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.10 ± 17% +0.4 0.51 ± 4% perf-profile.children.cycles-pp.update_rq_clock
0.13 ± 14% +0.4 0.54 ± 8% perf-profile.children.cycles-pp.native_sched_clock
0.14 ± 11% +0.4 0.57 ± 9% perf-profile.children.cycles-pp.sched_clock
0.47 ± 11% +0.4 0.91 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.56 ± 10% +0.4 1.00 ± 19% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.07 ± 16% +0.4 0.51 ± 4% perf-profile.children.cycles-pp.vma_link
0.15 ± 12% +0.5 0.62 ± 9% perf-profile.children.cycles-pp.sched_clock_cpu
0.05 ± 58% +0.5 0.54 ± 12% perf-profile.children.cycles-pp.vm_unmapped_area
0.05 ± 58% +0.5 0.56 ± 14% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.28 ± 18% +0.5 0.80 ± 2% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.06 ± 13% +0.5 0.58 ± 14% perf-profile.children.cycles-pp.get_unmapped_area
0.15 ± 17% +0.5 0.70 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
0.42 ± 20% +0.6 0.99 ± 11% perf-profile.children.cycles-pp.tick_irq_enter
0.32 ± 17% +0.6 0.94 ± 7% perf-profile.children.cycles-pp.native_irq_return_iret
0.04 ±100% +0.7 0.71 ±121% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.01 ±173% +0.7 0.71 ± 14% perf-profile.children.cycles-pp.rwsem_mark_wake
0.04 ± 58% +0.7 0.74 perf-profile.children.cycles-pp.finish_task_switch
0.69 ± 14% +0.7 1.40 ± 12% perf-profile.children.cycles-pp.__softirqentry_text_start
0.03 ±100% +0.7 0.73 ± 9% perf-profile.children.cycles-pp.update_curr
0.70 ± 14% +0.7 1.41 ± 12% perf-profile.children.cycles-pp.do_softirq_own_stack
0.51 ± 18% +0.7 1.23 ± 9% perf-profile.children.cycles-pp.irq_enter_rcu
0.00 +0.7 0.73 ± 15% perf-profile.children.cycles-pp.ttwu_queue_wakelist
0.00 +0.7 0.73 ± 9% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.10 ± 15% +0.7 0.83 ± 6% perf-profile.children.cycles-pp.pick_next_task_fair
0.15 ± 10% +0.7 0.89 ± 3% perf-profile.children.cycles-pp.__handle_mm_fault
0.11 ± 17% +0.8 0.86 ± 6% perf-profile.children.cycles-pp.mmap_region
0.10 ± 17% +0.8 0.91 ± 5% perf-profile.children.cycles-pp.update_load_avg
0.04 ± 57% +0.8 0.85 ± 4% perf-profile.children.cycles-pp.unwind_next_frame
0.83 ± 11% +0.9 1.70 ± 9% perf-profile.children.cycles-pp.update_process_times
0.80 ± 12% +0.9 1.68 ± 13% perf-profile.children.cycles-pp.irq_exit_rcu
0.04 ± 58% +0.9 0.95 ± 7% perf-profile.children.cycles-pp.dequeue_entity
0.85 ± 11% +0.9 1.77 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
0.17 ± 11% +1.0 1.14 ± 6% perf-profile.children.cycles-pp.handle_mm_fault
0.04 ± 59% +1.0 1.04 ± 6% perf-profile.children.cycles-pp.dequeue_task_fair
0.91 ± 6% +1.0 1.92 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
0.04 ± 58% +1.0 1.06 ± 9% perf-profile.children.cycles-pp.select_task_rq_fair
0.17 ± 10% +1.0 1.19 ± 7% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.78 ± 9% +1.1 1.89 ± 4% perf-profile.children.cycles-pp.clockevents_program_event
1.10 ± 12% +1.2 2.34 ± 7% perf-profile.children.cycles-pp.tick_sched_timer
0.17 ± 14% +1.3 1.46 ± 9% perf-profile.children.cycles-pp.do_mmap
1.01 ± 7% +1.3 2.34 ± 6% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.07 ± 14% +1.4 1.45 ± 12% perf-profile.children.cycles-pp.schedule_idle
0.06 ± 11% +1.4 1.47 ± 3% perf-profile.children.cycles-pp.arch_stack_walk
0.07 ± 15% +1.5 1.62 ± 3% perf-profile.children.cycles-pp.stack_trace_save_tsk
1.66 ± 11% +1.8 3.46 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.47 ± 13% +2.1 3.52 ± 4% perf-profile.children.cycles-pp.ktime_get
2.36 ± 25% +2.2 4.52 ± 6% perf-profile.children.cycles-pp.menu_select
0.24 ± 13% +2.7 2.91 ± 4% perf-profile.children.cycles-pp.schedule
0.12 ± 10% +2.7 2.82 ± 3% perf-profile.children.cycles-pp.__account_scheduler_latency
0.21 ± 32% +3.0 3.25 ± 41% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.15 ± 38% +3.1 3.25 ± 42% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.23 ± 8% +3.5 3.71 ± 12% perf-profile.children.cycles-pp.try_to_wake_up
0.23 ± 9% +3.6 3.82 ± 12% perf-profile.children.cycles-pp.wake_up_q
4.46 ± 17% +3.7 8.17 ± 8% perf-profile.children.cycles-pp.vm_mmap_pgoff
2.86 ± 11% +3.7 6.60 ± 3% perf-profile.children.cycles-pp.hrtimer_interrupt
4.47 ± 17% +3.8 8.23 ± 8% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.42 ± 11% +3.8 4.20 ± 4% perf-profile.children.cycles-pp.do_rw_once
2.89 ± 11% +3.8 6.72 ± 3% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
4.57 ± 17% +3.9 8.49 ± 8% perf-profile.children.cycles-pp.do_syscall_64
4.58 ± 17% +3.9 8.52 ± 8% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.31 ± 14% +4.0 4.28 ± 5% perf-profile.children.cycles-pp.__schedule
0.20 ± 12% +4.0 4.24 ± 3% perf-profile.children.cycles-pp.enqueue_entity
0.18 ± 17% +4.1 4.23 ± 2% perf-profile.children.cycles-pp.sched_ttwu_pending
0.29 ± 7% +4.2 4.48 ± 12% perf-profile.children.cycles-pp.rwsem_wake
0.19 ± 18% +4.3 4.48 ± 2% perf-profile.children.cycles-pp.flush_smp_call_function_from_idle
0.21 ± 14% +4.4 4.66 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.21 ± 14% +4.5 4.69 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
3.62 ± 11% +4.8 8.46 ± 5% perf-profile.children.cycles-pp.asm_call_sysvec_on_stack
4.60 ± 12% +6.0 10.55 ± 4% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
5.46 ± 13% +6.3 11.78 ± 3% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
0.85 ±173% +7.6 8.42 ± 8% perf-profile.children.cycles-pp.__mmap
30.02 ± 15% +18.6 48.67 ± 2% perf-profile.children.cycles-pp.intel_idle
36.60 ± 10% +26.2 62.79 perf-profile.children.cycles-pp.cpuidle_enter_state
36.61 ± 10% +26.2 62.83 perf-profile.children.cycles-pp.cpuidle_enter
39.30 ± 9% +35.2 74.55 perf-profile.children.cycles-pp.start_secondary
39.54 ± 9% +35.3 74.83 perf-profile.children.cycles-pp.do_idle
39.54 ± 9% +35.3 74.86 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
39.54 ± 9% +35.3 74.86 perf-profile.children.cycles-pp.cpu_startup_entry
52.32 ± 6% -51.4 0.89 ± 29% perf-profile.self.cycles-pp.osq_lock
4.83 ± 11% -4.2 0.61 ± 38% perf-profile.self.cycles-pp.rwsem_optimistic_spin
0.59 ± 9% -0.5 0.10 ± 24% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.09 ± 13% +0.0 0.12 ± 11% perf-profile.self.cycles-pp.rebalance_domains
0.11 ± 29% +0.0 0.15 ± 14% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.__unwind_start
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.__calc_delta
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.menu_reflect
0.09 ± 19% +0.1 0.15 ± 8% perf-profile.self.cycles-pp.tick_sched_timer
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.__rb_insert_augmented
0.00 +0.1 0.06 ± 20% perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.kernel_text_address
0.00 +0.1 0.06 ± 17% perf-profile.self.cycles-pp.rcu_eqs_enter
0.00 +0.1 0.07 ± 23% perf-profile.self.cycles-pp.delay_tsc
0.20 ± 11% +0.1 0.27 ± 11% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.asm_exc_page_fault
0.00 +0.1 0.07 ± 12% perf-profile.self.cycles-pp.tick_nohz_tick_stopped
0.00 +0.1 0.07 ± 16% perf-profile.self.cycles-pp.switch_fpu_return
0.00 +0.1 0.07 ± 28% perf-profile.self.cycles-pp.tick_irq_enter
0.11 ± 14% +0.1 0.18 ± 13% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.flush_smp_call_function_queue
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.__sysvec_apic_timer_interrupt
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.flush_smp_call_function_from_idle
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.trigger_load_balance
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.find_vma
0.00 +0.1 0.07 ± 15% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.07 ± 24% perf-profile.self.cycles-pp.cpuidle_governor_latency_req
0.11 ± 10% +0.1 0.18 ± 11% perf-profile.self.cycles-pp.irqtime_account_irq
0.01 ±173% +0.1 0.09 ± 7% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.1 0.07 ± 29% perf-profile.self.cycles-pp.newidle_balance
0.00 +0.1 0.08 ± 11% perf-profile.self.cycles-pp.dequeue_task_fair
0.00 +0.1 0.08 ± 14% perf-profile.self.cycles-pp.__wrgsbase_inactive
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.check_preempt_curr
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.get_cpu_device
0.00 +0.1 0.08 ± 26% perf-profile.self.cycles-pp.load_balance
0.00 +0.1 0.08 ± 15% perf-profile.self.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.08 ± 27% perf-profile.self.cycles-pp.mmap_region
0.00 +0.1 0.09 ± 26% perf-profile.self.cycles-pp.stack_trace_consume_entry_nosched
0.00 +0.1 0.09 ± 19% perf-profile.self.cycles-pp.pick_next_task_fair
0.00 +0.1 0.09 ± 17% perf-profile.self.cycles-pp.rb_next
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.rb_erase
0.00 +0.1 0.09 ± 13% perf-profile.self.cycles-pp.ttwu_queue_wakelist
0.00 +0.1 0.09 ± 11% perf-profile.self.cycles-pp.copy_fpregs_to_fpstate
0.00 +0.1 0.09 ± 8% perf-profile.self.cycles-pp.orc_find
0.00 +0.1 0.10 ± 18% perf-profile.self.cycles-pp.__softirqentry_text_start
0.00 +0.1 0.10 ± 18% perf-profile.self.cycles-pp.reweight_entity
0.00 +0.1 0.10 ± 15% perf-profile.self.cycles-pp.rb_insert_color
0.00 +0.1 0.10 ± 12% perf-profile.self.cycles-pp.run_local_timers
0.00 +0.1 0.10 ± 41% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.00 +0.1 0.10 ± 4% perf-profile.self.cycles-pp.rcu_idle_exit
0.00 +0.1 0.11 ± 13% perf-profile.self.cycles-pp.stack_trace_save_tsk
0.01 ±173% +0.1 0.12 ± 20% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.11 ± 13% perf-profile.self.cycles-pp.wake_q_add
0.00 +0.1 0.11 ± 4% perf-profile.self.cycles-pp.call_cpuidle
0.00 +0.1 0.12 ± 9% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.1 0.12 ± 5% perf-profile.self.cycles-pp.hrtimer_interrupt
0.07 ± 20% +0.1 0.20 ± 11% perf-profile.self.cycles-pp.arch_scale_freq_tick
0.00 +0.1 0.13 ± 19% perf-profile.self.cycles-pp.handle_mm_fault
0.00 +0.1 0.13 ± 14% perf-profile.self.cycles-pp.dequeue_entity
0.00 +0.1 0.13 ± 35% perf-profile.self.cycles-pp.io_serial_in
0.00 +0.1 0.14 ± 29% perf-profile.self.cycles-pp.vmacache_find
0.00 +0.1 0.14 ± 10% perf-profile.self.cycles-pp.wake_up_q
0.00 +0.1 0.15 ± 12% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
0.14 ± 8% +0.1 0.28 ± 2% perf-profile.self.cycles-pp.update_process_times
0.00 +0.2 0.15 ± 12% perf-profile.self.cycles-pp.wake_q_add_safe
0.03 ±100% +0.2 0.19 ± 13% perf-profile.self.cycles-pp.update_irq_load_avg
0.14 ± 10% +0.2 0.30 ± 17% perf-profile.self.cycles-pp.calc_global_load_tick
0.03 ±100% +0.2 0.19 ± 18% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.10 ± 12% +0.2 0.28 ± 17% perf-profile.self.cycles-pp.update_sd_lb_stats
0.00 +0.2 0.18 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.04 ± 58% +0.2 0.23 ± 11% perf-profile.self.cycles-pp.down_read
0.00 +0.2 0.20 ± 16% perf-profile.self.cycles-pp.set_next_entity
0.00 +0.2 0.20 ± 2% perf-profile.self.cycles-pp.update_rq_clock
0.00 +0.2 0.20 ± 6% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.2 0.20 ± 6% perf-profile.self.cycles-pp.__orc_find
0.00 +0.2 0.20 ± 8% perf-profile.self.cycles-pp.sched_ttwu_pending
0.00 +0.2 0.21 ± 7% perf-profile.self.cycles-pp.llist_reverse_order
0.24 ± 3% +0.2 0.45 ± 12% perf-profile.self.cycles-pp.tick_nohz_next_event
0.00 +0.2 0.22 ± 16% perf-profile.self.cycles-pp.rwsem_mark_wake
0.00 +0.2 0.22 ± 16% perf-profile.self.cycles-pp.cpuacct_charge
0.04 ± 58% +0.2 0.26 ± 8% perf-profile.self.cycles-pp.down_read_trylock
0.04 ± 59% +0.2 0.28 ± 5% perf-profile.self.cycles-pp.update_cfs_group
0.00 +0.2 0.23 ± 17% perf-profile.self.cycles-pp.__switch_to_asm
0.00 +0.2 0.23 ± 14% perf-profile.self.cycles-pp.send_call_function_single_ipi
0.00 +0.2 0.24 ± 8% perf-profile.self.cycles-pp.__switch_to
0.00 +0.2 0.25 ± 19% perf-profile.self.cycles-pp.sync_regs
0.00 +0.2 0.25 ± 13% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.3 0.25 ± 3% perf-profile.self.cycles-pp.__update_load_avg_se
0.04 ± 58% +0.3 0.30 ± 13% perf-profile.self.cycles-pp.update_load_avg
0.00 +0.3 0.27 ± 14% perf-profile.self.cycles-pp.select_task_rq_fair
0.07 ± 10% +0.3 0.34 ± 6% perf-profile.self.cycles-pp.up_read
0.08 ± 13% +0.3 0.35 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.09 ± 13% +0.3 0.38 ± 3% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.3 0.30 ± 18% perf-profile.self.cycles-pp.nr_iowait_cpu
0.03 ±100% +0.3 0.35 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.08 ± 11% +0.3 0.41 ± 6% perf-profile.self.cycles-pp.do_idle
0.17 ± 13% +0.3 0.51 ± 6% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.3 0.35 ± 3% perf-profile.self.cycles-pp.enqueue_entity
0.15 ± 12% +0.4 0.52 ± 4% perf-profile.self.cycles-pp.read_tsc
0.00 +0.4 0.38 ± 7% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.00 +0.4 0.38 ± 20% perf-profile.self.cycles-pp.llist_add_batch
0.12 ± 12% +0.4 0.52 ± 10% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 17% +0.4 0.47 ± 6% perf-profile.self.cycles-pp.__handle_mm_fault
0.00 +0.4 0.41 ± 7% perf-profile.self.cycles-pp.update_curr
0.00 +0.4 0.42 ± 7% perf-profile.self.cycles-pp.enqueue_task_fair
0.56 ± 10% +0.4 1.00 ± 19% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.25 ± 17% +0.5 0.72 ± 3% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.00 +0.5 0.48 ± 9% perf-profile.self.cycles-pp.unwind_next_frame
0.05 ± 58% +0.5 0.54 ± 12% perf-profile.self.cycles-pp.vm_unmapped_area
0.15 ± 15% +0.5 0.65 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.6 0.58 ± 17% perf-profile.self.cycles-pp.__account_scheduler_latency
0.04 ±100% +0.6 0.63 ±116% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.6 0.60 perf-profile.self.cycles-pp.finish_task_switch
0.32 ± 17% +0.6 0.94 ± 7% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.6 0.65 ± 21% perf-profile.self.cycles-pp.try_to_wake_up
0.03 ±100% +0.7 0.70 ± 3% perf-profile.self.cycles-pp.__schedule
0.15 ± 7% +0.7 0.87 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.00 +0.7 0.73 ± 9% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.17 ± 10% +0.8 0.94 ± 4% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
1.34 ± 13% +1.7 3.06 ± 4% perf-profile.self.cycles-pp.ktime_get
0.28 ± 11% +2.3 2.62 ± 10% perf-profile.self.cycles-pp.do_access
0.27 ± 11% +2.5 2.77 perf-profile.self.cycles-pp.do_rw_once
0.15 ± 40% +3.1 3.24 ± 42% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
30.02 ± 15% +18.6 48.66 ± 2% perf-profile.self.cycles-pp.intel_idle
1548 ± 77% +658.7% 11747 ± 65% interrupts.40:PCI-MSI.524291-edge.eth0-TxRx-2
451831 ± 15% +407.3% 2292365 ± 8% interrupts.CAL:Function_call_interrupts
1998 ± 34% +353.4% 9062 ± 13% interrupts.CPU0.CAL:Function_call_interrupts
687.25 ± 35% -69.7% 208.00 ± 35% interrupts.CPU0.NMI:Non-maskable_interrupts
687.25 ± 35% -69.7% 208.00 ± 35% interrupts.CPU0.PMI:Performance_monitoring_interrupts
108.50 ± 9% +376.5% 517.00 ± 16% interrupts.CPU0.RES:Rescheduling_interrupts
2139 ± 14% +454.4% 11862 ± 11% interrupts.CPU1.CAL:Function_call_interrupts
113.25 ± 27% +412.4% 580.25 ± 3% interrupts.CPU1.RES:Rescheduling_interrupts
2248 ± 19% +422.0% 11739 ± 13% interrupts.CPU10.CAL:Function_call_interrupts
647.75 ± 22% -49.3% 328.25 ± 7% interrupts.CPU10.NMI:Non-maskable_interrupts
647.75 ± 22% -49.3% 328.25 ± 7% interrupts.CPU10.PMI:Performance_monitoring_interrupts
110.25 ± 5% +397.3% 548.25 ± 11% interrupts.CPU10.RES:Rescheduling_interrupts
2512 ± 30% +388.3% 12269 ± 10% interrupts.CPU100.CAL:Function_call_interrupts
851.75 ± 28% -64.0% 306.50 ± 30% interrupts.CPU100.NMI:Non-maskable_interrupts
851.75 ± 28% -64.0% 306.50 ± 30% interrupts.CPU100.PMI:Performance_monitoring_interrupts
100.00 ± 30% +491.5% 591.50 ± 4% interrupts.CPU100.RES:Rescheduling_interrupts
2020 ± 26% +523.6% 12598 ± 9% interrupts.CPU101.CAL:Function_call_interrupts
789.75 ± 30% -58.1% 330.75 ± 3% interrupts.CPU101.NMI:Non-maskable_interrupts
789.75 ± 30% -58.1% 330.75 ± 3% interrupts.CPU101.PMI:Performance_monitoring_interrupts
85.75 ± 16% +673.2% 663.00 ± 23% interrupts.CPU101.RES:Rescheduling_interrupts
2688 ± 31% +342.8% 11903 ± 10% interrupts.CPU102.CAL:Function_call_interrupts
718.00 ± 23% -60.8% 281.25 ± 25% interrupts.CPU102.NMI:Non-maskable_interrupts
718.00 ± 23% -60.8% 281.25 ± 25% interrupts.CPU102.PMI:Performance_monitoring_interrupts
101.25 ± 27% +467.7% 574.75 ± 4% interrupts.CPU102.RES:Rescheduling_interrupts
2253 ± 16% +421.7% 11754 ± 8% interrupts.CPU103.CAL:Function_call_interrupts
859.00 ± 18% -63.5% 313.75 ± 5% interrupts.CPU103.NMI:Non-maskable_interrupts
859.00 ± 18% -63.5% 313.75 ± 5% interrupts.CPU103.PMI:Performance_monitoring_interrupts
107.25 ± 15% +493.2% 636.25 ± 13% interrupts.CPU103.RES:Rescheduling_interrupts
2106 ± 17% +479.0% 12194 ± 11% interrupts.CPU104.CAL:Function_call_interrupts
88.00 ± 15% +580.7% 599.00 ± 3% interrupts.CPU104.RES:Rescheduling_interrupts
45.50 ±110% +313.2% 188.00 ± 18% interrupts.CPU104.TLB:TLB_shootdowns
2266 ± 17% +423.6% 11868 ± 8% interrupts.CPU105.CAL:Function_call_interrupts
711.25 ± 20% -53.3% 332.50 ± 10% interrupts.CPU105.NMI:Non-maskable_interrupts
711.25 ± 20% -53.3% 332.50 ± 10% interrupts.CPU105.PMI:Performance_monitoring_interrupts
147.25 ± 35% +305.6% 597.25 ± 8% interrupts.CPU105.RES:Rescheduling_interrupts
2572 ± 16% +366.9% 12010 ± 12% interrupts.CPU106.CAL:Function_call_interrupts
727.25 ± 17% -56.4% 316.75 ± 5% interrupts.CPU106.NMI:Non-maskable_interrupts
727.25 ± 17% -56.4% 316.75 ± 5% interrupts.CPU106.PMI:Performance_monitoring_interrupts
137.25 ± 34% +314.2% 568.50 ± 4% interrupts.CPU106.RES:Rescheduling_interrupts
2120 ± 24% +491.9% 12550 ± 13% interrupts.CPU107.CAL:Function_call_interrupts
705.25 ± 34% -59.8% 283.50 ± 25% interrupts.CPU107.NMI:Non-maskable_interrupts
705.25 ± 34% -59.8% 283.50 ± 25% interrupts.CPU107.PMI:Performance_monitoring_interrupts
191.75 ± 89% +202.5% 580.00 interrupts.CPU107.RES:Rescheduling_interrupts
2098 ± 19% +476.2% 12090 ± 9% interrupts.CPU108.CAL:Function_call_interrupts
948.25 ± 40% -65.7% 325.00 interrupts.CPU108.NMI:Non-maskable_interrupts
948.25 ± 40% -65.7% 325.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts
90.25 ± 20% +566.5% 601.50 ± 12% interrupts.CPU108.RES:Rescheduling_interrupts
2100 ± 18% +480.7% 12198 ± 9% interrupts.CPU109.CAL:Function_call_interrupts
700.75 ± 30% -53.6% 325.25 interrupts.CPU109.NMI:Non-maskable_interrupts
700.75 ± 30% -53.6% 325.25 interrupts.CPU109.PMI:Performance_monitoring_interrupts
107.75 ± 25% +446.6% 589.00 ± 6% interrupts.CPU109.RES:Rescheduling_interrupts
2088 ± 18% +474.7% 12000 ± 11% interrupts.CPU11.CAL:Function_call_interrupts
820.75 ± 21% -65.5% 283.00 ± 26% interrupts.CPU11.NMI:Non-maskable_interrupts
820.75 ± 21% -65.5% 283.00 ± 26% interrupts.CPU11.PMI:Performance_monitoring_interrupts
123.00 ± 37% +365.4% 572.50 ± 3% interrupts.CPU11.RES:Rescheduling_interrupts
2253 ± 25% +443.2% 12242 ± 11% interrupts.CPU110.CAL:Function_call_interrupts
871.75 ± 23% -63.7% 316.50 ± 6% interrupts.CPU110.NMI:Non-maskable_interrupts
871.75 ± 23% -63.7% 316.50 ± 6% interrupts.CPU110.PMI:Performance_monitoring_interrupts
90.50 ± 8% +527.9% 568.25 ± 2% interrupts.CPU110.RES:Rescheduling_interrupts
2272 ± 20% +435.9% 12179 ± 11% interrupts.CPU111.CAL:Function_call_interrupts
789.75 ± 26% -58.5% 327.50 ± 3% interrupts.CPU111.NMI:Non-maskable_interrupts
789.75 ± 26% -58.5% 327.50 ± 3% interrupts.CPU111.PMI:Performance_monitoring_interrupts
94.50 ± 15% +543.7% 608.25 ± 10% interrupts.CPU111.RES:Rescheduling_interrupts
2081 ± 21% +487.1% 12218 ± 11% interrupts.CPU112.CAL:Function_call_interrupts
932.50 ± 37% -62.6% 348.75 ± 12% interrupts.CPU112.NMI:Non-maskable_interrupts
932.50 ± 37% -62.6% 348.75 ± 12% interrupts.CPU112.PMI:Performance_monitoring_interrupts
86.25 ± 14% +595.1% 599.50 ± 5% interrupts.CPU112.RES:Rescheduling_interrupts
2368 ± 11% +417.8% 12260 ± 10% interrupts.CPU113.CAL:Function_call_interrupts
576.00 ± 22% -41.0% 339.75 ± 3% interrupts.CPU113.NMI:Non-maskable_interrupts
576.00 ± 22% -41.0% 339.75 ± 3% interrupts.CPU113.PMI:Performance_monitoring_interrupts
96.00 ± 15% +504.4% 580.25 ± 4% interrupts.CPU113.RES:Rescheduling_interrupts
2167 ± 15% +470.9% 12373 ± 11% interrupts.CPU114.CAL:Function_call_interrupts
940.00 ± 15% -65.1% 327.75 ± 6% interrupts.CPU114.NMI:Non-maskable_interrupts
940.00 ± 15% -65.1% 327.75 ± 6% interrupts.CPU114.PMI:Performance_monitoring_interrupts
111.50 ± 35% +424.4% 584.75 ± 4% interrupts.CPU114.RES:Rescheduling_interrupts
2154 ± 21% +478.2% 12455 ± 11% interrupts.CPU115.CAL:Function_call_interrupts
828.75 ± 34% -62.1% 313.75 ± 4% interrupts.CPU115.NMI:Non-maskable_interrupts
828.75 ± 34% -62.1% 313.75 ± 4% interrupts.CPU115.PMI:Performance_monitoring_interrupts
111.00 ± 18% +453.8% 614.75 ± 3% interrupts.CPU115.RES:Rescheduling_interrupts
2459 ± 26% +384.1% 11903 ± 14% interrupts.CPU116.CAL:Function_call_interrupts
763.75 ± 30% -60.8% 299.50 ± 6% interrupts.CPU116.NMI:Non-maskable_interrupts
763.75 ± 30% -60.8% 299.50 ± 6% interrupts.CPU116.PMI:Performance_monitoring_interrupts
88.75 ± 10% +546.5% 573.75 ± 9% interrupts.CPU116.RES:Rescheduling_interrupts
2557 ± 15% +380.3% 12282 ± 13% interrupts.CPU117.CAL:Function_call_interrupts
808.00 ± 39% -65.8% 276.00 ± 24% interrupts.CPU117.NMI:Non-maskable_interrupts
808.00 ± 39% -65.8% 276.00 ± 24% interrupts.CPU117.PMI:Performance_monitoring_interrupts
135.75 ± 39% +327.3% 580.00 interrupts.CPU117.RES:Rescheduling_interrupts
2157 ± 14% +462.2% 12128 ± 11% interrupts.CPU118.CAL:Function_call_interrupts
957.75 ? 12% -68.3% 303.75 ? 7% interrupts.CPU118.NMI:Non-maskable_interrupts
957.75 ? 12% -68.3% 303.75 ? 7% interrupts.CPU118.PMI:Performance_monitoring_interrupts
88.50 ? 11% +563.0% 586.75 ? 4% interrupts.CPU118.RES:Rescheduling_interrupts
2281 ? 13% +439.0% 12297 ? 11% interrupts.CPU119.CAL:Function_call_interrupts
935.50 ? 22% -69.7% 283.25 ? 25% interrupts.CPU119.NMI:Non-maskable_interrupts
935.50 ? 22% -69.7% 283.25 ? 25% interrupts.CPU119.PMI:Performance_monitoring_interrupts
94.25 ? 18% +504.8% 570.00 ? 4% interrupts.CPU119.RES:Rescheduling_interrupts
1548 ± 77% +658.7% 11747 ± 65% interrupts.CPU12.40:PCI-MSI.524291-edge.eth0-TxRx-2
2662 ± 40% +344.1% 11823 ± 10% interrupts.CPU12.CAL:Function_call_interrupts
950.25 ± 29% -66.1% 321.75 ± 3% interrupts.CPU12.NMI:Non-maskable_interrupts
950.25 ± 29% -66.1% 321.75 ± 3% interrupts.CPU12.PMI:Performance_monitoring_interrupts
101.25 ± 17% +452.1% 559.00 ± 2% interrupts.CPU12.RES:Rescheduling_interrupts
2174 ± 18% +485.4% 12728 ± 12% interrupts.CPU120.CAL:Function_call_interrupts
864.75 ± 35% -63.9% 312.50 ± 2% interrupts.CPU120.NMI:Non-maskable_interrupts
864.75 ± 35% -63.9% 312.50 ± 2% interrupts.CPU120.PMI:Performance_monitoring_interrupts
91.00 ± 15% +544.0% 586.00 ± 8% interrupts.CPU120.RES:Rescheduling_interrupts
2119 ± 10% +443.6% 11518 ± 9% interrupts.CPU121.CAL:Function_call_interrupts
865.50 ± 14% -69.5% 264.00 ± 22% interrupts.CPU121.NMI:Non-maskable_interrupts
865.50 ± 14% -69.5% 264.00 ± 22% interrupts.CPU121.PMI:Performance_monitoring_interrupts
90.00 ± 11% +525.6% 563.00 ± 6% interrupts.CPU121.RES:Rescheduling_interrupts
93.50 ± 67% +139.0% 223.50 ± 21% interrupts.CPU121.TLB:TLB_shootdowns
2092 ± 19% +446.6% 11435 ± 7% interrupts.CPU122.CAL:Function_call_interrupts
755.00 ± 18% -56.6% 327.75 ± 3% interrupts.CPU122.NMI:Non-maskable_interrupts
755.00 ± 18% -56.6% 327.75 ± 3% interrupts.CPU122.PMI:Performance_monitoring_interrupts
85.75 ± 12% +549.6% 557.00 ± 2% interrupts.CPU122.RES:Rescheduling_interrupts
102.25 ± 37% +85.6% 189.75 ± 25% interrupts.CPU122.TLB:TLB_shootdowns
2168 ± 18% +433.9% 11577 ± 8% interrupts.CPU123.CAL:Function_call_interrupts
754.50 ± 31% -57.8% 318.25 ± 6% interrupts.CPU123.NMI:Non-maskable_interrupts
754.50 ± 31% -57.8% 318.25 ± 6% interrupts.CPU123.PMI:Performance_monitoring_interrupts
79.75 ± 9% +573.4% 537.00 ± 4% interrupts.CPU123.RES:Rescheduling_interrupts
2081 ± 29% +460.5% 11666 ± 9% interrupts.CPU124.CAL:Function_call_interrupts
639.00 ± 5% -48.7% 328.00 ± 3% interrupts.CPU124.NMI:Non-maskable_interrupts
639.00 ± 5% -48.7% 328.00 ± 3% interrupts.CPU124.PMI:Performance_monitoring_interrupts
77.50 ± 32% +654.8% 585.00 ± 3% interrupts.CPU124.RES:Rescheduling_interrupts
2054 ± 27% +476.8% 11848 ± 10% interrupts.CPU125.CAL:Function_call_interrupts
893.25 ± 25% -63.5% 325.75 ± 5% interrupts.CPU125.NMI:Non-maskable_interrupts
893.25 ± 25% -63.5% 325.75 ± 5% interrupts.CPU125.PMI:Performance_monitoring_interrupts
83.25 ± 17% +575.7% 562.50 ± 4% interrupts.CPU125.RES:Rescheduling_interrupts
2038 ± 18% +477.9% 11781 ± 10% interrupts.CPU126.CAL:Function_call_interrupts
995.75 ± 23% -68.3% 315.75 ± 7% interrupts.CPU126.NMI:Non-maskable_interrupts
995.75 ± 23% -68.3% 315.75 ± 7% interrupts.CPU126.PMI:Performance_monitoring_interrupts
85.50 ± 8% +565.8% 569.25 ± 3% interrupts.CPU126.RES:Rescheduling_interrupts
2063 ± 18% +471.1% 11784 ± 9% interrupts.CPU127.CAL:Function_call_interrupts
790.75 ± 8% -56.2% 346.25 ± 9% interrupts.CPU127.NMI:Non-maskable_interrupts
790.75 ± 8% -56.2% 346.25 ± 9% interrupts.CPU127.PMI:Performance_monitoring_interrupts
73.00 ± 20% +670.2% 562.25 ± 7% interrupts.CPU127.RES:Rescheduling_interrupts
2072 ± 28% +456.8% 11539 ± 8% interrupts.CPU128.CAL:Function_call_interrupts
704.00 ± 19% -61.8% 268.75 ± 20% interrupts.CPU128.NMI:Non-maskable_interrupts
704.00 ± 19% -61.8% 268.75 ± 20% interrupts.CPU128.PMI:Performance_monitoring_interrupts
75.25 ± 13% +665.4% 576.00 ± 5% interrupts.CPU128.RES:Rescheduling_interrupts
1997 ± 17% +476.2% 11509 ± 9% interrupts.CPU129.CAL:Function_call_interrupts
740.50 ± 28% -59.4% 301.00 ± 4% interrupts.CPU129.NMI:Non-maskable_interrupts
740.50 ± 28% -59.4% 301.00 ± 4% interrupts.CPU129.PMI:Performance_monitoring_interrupts
78.00 ± 10% +625.0% 565.50 ± 7% interrupts.CPU129.RES:Rescheduling_interrupts
2297 ± 16% +421.6% 11983 ± 10% interrupts.CPU13.CAL:Function_call_interrupts
837.50 ± 31% -61.2% 324.75 ± 3% interrupts.CPU13.NMI:Non-maskable_interrupts
837.50 ± 31% -61.2% 324.75 ± 3% interrupts.CPU13.PMI:Performance_monitoring_interrupts
118.75 ± 55% +372.2% 560.75 ± 6% interrupts.CPU13.RES:Rescheduling_interrupts
2201 ± 20% +423.9% 11531 ± 10% interrupts.CPU130.CAL:Function_call_interrupts
715.50 ± 21% -56.1% 314.25 ± 8% interrupts.CPU130.NMI:Non-maskable_interrupts
715.50 ± 21% -56.1% 314.25 ± 8% interrupts.CPU130.PMI:Performance_monitoring_interrupts
88.00 ± 8% +525.3% 550.25 ± 5% interrupts.CPU130.RES:Rescheduling_interrupts
2189 ± 13% +438.1% 11779 ± 12% interrupts.CPU131.CAL:Function_call_interrupts
923.00 ± 27% -65.6% 317.25 ± 7% interrupts.CPU131.NMI:Non-maskable_interrupts
923.00 ± 27% -65.6% 317.25 ± 7% interrupts.CPU131.PMI:Performance_monitoring_interrupts
84.75 ± 16% +574.0% 571.25 ± 7% interrupts.CPU131.RES:Rescheduling_interrupts
1931 ± 15% +506.4% 11712 ± 11% interrupts.CPU132.CAL:Function_call_interrupts
894.75 ± 26% -65.2% 311.75 ± 9% interrupts.CPU132.NMI:Non-maskable_interrupts
894.75 ± 26% -65.2% 311.75 ± 9% interrupts.CPU132.PMI:Performance_monitoring_interrupts
82.00 ± 6% +600.6% 574.50 ± 4% interrupts.CPU132.RES:Rescheduling_interrupts
47.50 ± 75% +243.7% 163.25 ± 19% interrupts.CPU132.TLB:TLB_shootdowns
2144 ± 17% +439.3% 11564 ± 10% interrupts.CPU133.CAL:Function_call_interrupts
733.25 ± 11% -57.2% 314.00 ± 9% interrupts.CPU133.NMI:Non-maskable_interrupts
733.25 ± 11% -57.2% 314.00 ± 9% interrupts.CPU133.PMI:Performance_monitoring_interrupts
92.50 ± 10% +547.0% 598.50 ± 6% interrupts.CPU133.RES:Rescheduling_interrupts
80.00 ± 50% +108.4% 166.75 ± 10% interrupts.CPU133.TLB:TLB_shootdowns
2087 ± 22% +448.5% 11452 ± 11% interrupts.CPU134.CAL:Function_call_interrupts
831.75 ± 21% -61.7% 318.25 ± 5% interrupts.CPU134.NMI:Non-maskable_interrupts
831.75 ± 21% -61.7% 318.25 ± 5% interrupts.CPU134.PMI:Performance_monitoring_interrupts
74.50 ± 10% +646.6% 556.25 ± 3% interrupts.CPU134.RES:Rescheduling_interrupts
2103 ± 24% +443.9% 11440 ± 12% interrupts.CPU135.CAL:Function_call_interrupts
784.50 ± 16% -66.4% 263.75 ± 22% interrupts.CPU135.NMI:Non-maskable_interrupts
784.50 ± 16% -66.4% 263.75 ± 22% interrupts.CPU135.PMI:Performance_monitoring_interrupts
82.00 ± 14% +578.7% 556.50 ± 7% interrupts.CPU135.RES:Rescheduling_interrupts
83.75 ± 67% +113.7% 179.00 ± 22% interrupts.CPU135.TLB:TLB_shootdowns
2114 ± 24% +454.0% 11713 ± 10% interrupts.CPU136.CAL:Function_call_interrupts
771.50 ± 20% -61.0% 301.00 ± 7% interrupts.CPU136.NMI:Non-maskable_interrupts
771.50 ± 20% -61.0% 301.00 ± 7% interrupts.CPU136.PMI:Performance_monitoring_interrupts
72.00 ± 10% +701.4% 577.00 ± 5% interrupts.CPU136.RES:Rescheduling_interrupts
2097 ± 22% +464.9% 11849 ± 10% interrupts.CPU137.CAL:Function_call_interrupts
789.00 ± 40% -59.8% 317.00 ± 10% interrupts.CPU137.NMI:Non-maskable_interrupts
789.00 ± 40% -59.8% 317.00 ± 10% interrupts.CPU137.PMI:Performance_monitoring_interrupts
83.75 ± 11% +567.5% 559.00 ± 6% interrupts.CPU137.RES:Rescheduling_interrupts
109.75 ± 33% +61.7% 177.50 ± 18% interrupts.CPU137.TLB:TLB_shootdowns
2258 ± 27% +427.7% 11918 ± 10% interrupts.CPU138.CAL:Function_call_interrupts
685.00 ± 48% -54.1% 314.50 ± 8% interrupts.CPU138.NMI:Non-maskable_interrupts
685.00 ± 48% -54.1% 314.50 ± 8% interrupts.CPU138.PMI:Performance_monitoring_interrupts
86.00 ± 13% +531.7% 543.25 ± 5% interrupts.CPU138.RES:Rescheduling_interrupts
2057 ± 21% +465.4% 11629 ± 9% interrupts.CPU139.CAL:Function_call_interrupts
689.75 ± 28% -53.9% 317.75 ± 10% interrupts.CPU139.NMI:Non-maskable_interrupts
689.75 ± 28% -53.9% 317.75 ± 10% interrupts.CPU139.PMI:Performance_monitoring_interrupts
80.25 ± 16% +600.3% 562.00 ± 4% interrupts.CPU139.RES:Rescheduling_interrupts
2170 ± 13% +439.5% 11708 ± 9% interrupts.CPU14.CAL:Function_call_interrupts
782.75 ± 22% -64.7% 276.25 ± 25% interrupts.CPU14.NMI:Non-maskable_interrupts
782.75 ± 22% -64.7% 276.25 ± 25% interrupts.CPU14.PMI:Performance_monitoring_interrupts
83.50 ± 8% +582.6% 570.00 ± 2% interrupts.CPU14.RES:Rescheduling_interrupts
2243 ± 23% +424.8% 11772 ± 11% interrupts.CPU140.CAL:Function_call_interrupts
761.00 ± 38% -60.2% 302.50 ± 9% interrupts.CPU140.NMI:Non-maskable_interrupts
761.00 ± 38% -60.2% 302.50 ± 9% interrupts.CPU140.PMI:Performance_monitoring_interrupts
90.50 ± 15% +534.5% 574.25 ± 8% interrupts.CPU140.RES:Rescheduling_interrupts
2131 ± 18% +446.7% 11654 ± 10% interrupts.CPU141.CAL:Function_call_interrupts
629.50 ± 50% -56.8% 272.25 ± 28% interrupts.CPU141.NMI:Non-maskable_interrupts
629.50 ± 50% -56.8% 272.25 ± 28% interrupts.CPU141.PMI:Performance_monitoring_interrupts
75.50 ± 17% +631.1% 552.00 ± 8% interrupts.CPU141.RES:Rescheduling_interrupts
2127 ± 14% +455.4% 11815 ± 10% interrupts.CPU142.CAL:Function_call_interrupts
91.25 ± 12% +563.0% 605.00 ± 8% interrupts.CPU142.RES:Rescheduling_interrupts
2077 ± 18% +471.3% 11869 ± 8% interrupts.CPU143.CAL:Function_call_interrupts
692.00 ± 45% -54.0% 318.00 ± 7% interrupts.CPU143.NMI:Non-maskable_interrupts
692.00 ± 45% -54.0% 318.00 ± 7% interrupts.CPU143.PMI:Performance_monitoring_interrupts
81.00 ± 9% +581.8% 552.25 ± 3% interrupts.CPU143.RES:Rescheduling_interrupts
2308 ± 24% +478.9% 13364 ± 14% interrupts.CPU144.CAL:Function_call_interrupts
501.50 ± 32% -38.2% 310.00 ± 6% interrupts.CPU144.NMI:Non-maskable_interrupts
501.50 ± 32% -38.2% 310.00 ± 6% interrupts.CPU144.PMI:Performance_monitoring_interrupts
122.00 ± 38% +338.5% 535.00 ± 3% interrupts.CPU144.RES:Rescheduling_interrupts
2462 ± 18% +392.4% 12124 ± 11% interrupts.CPU145.CAL:Function_call_interrupts
536.25 ± 28% -50.7% 264.25 ± 22% interrupts.CPU145.NMI:Non-maskable_interrupts
536.25 ± 28% -50.7% 264.25 ± 22% interrupts.CPU145.PMI:Performance_monitoring_interrupts
125.50 ± 41% +327.9% 537.00 ± 5% interrupts.CPU145.RES:Rescheduling_interrupts
2393 ± 26% +409.7% 12199 ± 8% interrupts.CPU146.CAL:Function_call_interrupts
628.00 ± 41% -56.8% 271.25 ± 22% interrupts.CPU146.NMI:Non-maskable_interrupts
628.00 ± 41% -56.8% 271.25 ± 22% interrupts.CPU146.PMI:Performance_monitoring_interrupts
90.50 ± 18% +478.2% 523.25 ± 7% interrupts.CPU146.RES:Rescheduling_interrupts
2545 ± 20% +368.3% 11917 ± 12% interrupts.CPU147.CAL:Function_call_interrupts
759.75 ± 35% -64.4% 270.50 ± 24% interrupts.CPU147.NMI:Non-maskable_interrupts
759.75 ± 35% -64.4% 270.50 ± 24% interrupts.CPU147.PMI:Performance_monitoring_interrupts
103.75 ± 14% +377.6% 495.50 ± 4% interrupts.CPU147.RES:Rescheduling_interrupts
2379 ± 29% +395.6% 11793 ± 9% interrupts.CPU148.CAL:Function_call_interrupts
720.50 ± 50% -62.9% 267.00 ± 24% interrupts.CPU148.NMI:Non-maskable_interrupts
720.50 ± 50% -62.9% 267.00 ± 24% interrupts.CPU148.PMI:Performance_monitoring_interrupts
108.00 ± 51% +385.0% 523.75 ± 8% interrupts.CPU148.RES:Rescheduling_interrupts
2306 ± 27% +429.4% 12210 ± 10% interrupts.CPU149.CAL:Function_call_interrupts
106.00 ± 37% +400.0% 530.00 ± 6% interrupts.CPU149.RES:Rescheduling_interrupts
2604 ± 26% +368.1% 12190 ± 11% interrupts.CPU15.CAL:Function_call_interrupts
683.50 ± 19% -51.8% 329.50 ± 4% interrupts.CPU15.NMI:Non-maskable_interrupts
683.50 ± 19% -51.8% 329.50 ± 4% interrupts.CPU15.PMI:Performance_monitoring_interrupts
90.00 ± 22% +539.2% 575.25 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
2409 ± 21% +402.1% 12097 ± 10% interrupts.CPU150.CAL:Function_call_interrupts
732.00 ± 8% -60.0% 292.50 ± 25% interrupts.CPU150.NMI:Non-maskable_interrupts
732.00 ± 8% -60.0% 292.50 ± 25% interrupts.CPU150.PMI:Performance_monitoring_interrupts
89.50 ± 34% +454.5% 496.25 ± 3% interrupts.CPU150.RES:Rescheduling_interrupts
32.00 ±167% +322.7% 135.25 ± 32% interrupts.CPU150.TLB:TLB_shootdowns
2514 ± 24% +376.3% 11974 ± 11% interrupts.CPU151.CAL:Function_call_interrupts
732.50 ± 24% -56.9% 316.00 interrupts.CPU151.NMI:Non-maskable_interrupts
732.50 ± 24% -56.9% 316.00 interrupts.CPU151.PMI:Performance_monitoring_interrupts
126.00 ± 36% +329.2% 540.75 ± 11% interrupts.CPU151.RES:Rescheduling_interrupts
2440 ± 27% +387.6% 11901 ± 9% interrupts.CPU152.CAL:Function_call_interrupts
700.75 ± 36% -55.6% 311.25 ± 2% interrupts.CPU152.NMI:Non-maskable_interrupts
700.75 ± 36% -55.6% 311.25 ± 2% interrupts.CPU152.PMI:Performance_monitoring_interrupts
117.00 ± 28% +347.6% 523.75 ± 4% interrupts.CPU152.RES:Rescheduling_interrupts
2159 ± 23% +467.0% 12246 ± 7% interrupts.CPU153.CAL:Function_call_interrupts
714.25 ± 24% -56.6% 309.75 ± 2% interrupts.CPU153.NMI:Non-maskable_interrupts
714.25 ± 24% -56.6% 309.75 ± 2% interrupts.CPU153.PMI:Performance_monitoring_interrupts
84.75 ± 22% +563.1% 562.00 ± 5% interrupts.CPU153.RES:Rescheduling_interrupts
2361 ± 29% +414.3% 12144 ± 9% interrupts.CPU154.CAL:Function_call_interrupts
919.00 ± 21% -69.4% 281.25 ± 25% interrupts.CPU154.NMI:Non-maskable_interrupts
919.00 ± 21% -69.4% 281.25 ± 25% interrupts.CPU154.PMI:Performance_monitoring_interrupts
94.25 ± 22% +492.8% 558.75 ± 11% interrupts.CPU154.RES:Rescheduling_interrupts
70.75 ± 58% +156.9% 181.75 ± 23% interrupts.CPU154.TLB:TLB_shootdowns
2698 ± 18% +346.4% 12044 ± 7% interrupts.CPU155.CAL:Function_call_interrupts
808.00 ± 33% -65.1% 281.75 ± 23% interrupts.CPU155.NMI:Non-maskable_interrupts
808.00 ± 33% -65.1% 281.75 ± 23% interrupts.CPU155.PMI:Performance_monitoring_interrupts
120.50 ± 34% +332.6% 521.25 ± 9% interrupts.CPU155.RES:Rescheduling_interrupts
2276 ± 20% +446.5% 12442 ± 7% interrupts.CPU156.CAL:Function_call_interrupts
689.75 ± 20% -65.4% 238.75 ± 36% interrupts.CPU156.NMI:Non-maskable_interrupts
689.75 ± 20% -65.4% 238.75 ± 36% interrupts.CPU156.PMI:Performance_monitoring_interrupts
102.25 ± 37% +453.3% 565.75 ± 13% interrupts.CPU156.RES:Rescheduling_interrupts
2513 ± 22% +396.6% 12481 ± 10% interrupts.CPU157.CAL:Function_call_interrupts
663.75 ± 20% -57.4% 282.75 ± 24% interrupts.CPU157.NMI:Non-maskable_interrupts
663.75 ± 20% -57.4% 282.75 ± 24% interrupts.CPU157.PMI:Performance_monitoring_interrupts
104.00 ± 39% +428.8% 550.00 ± 8% interrupts.CPU157.RES:Rescheduling_interrupts
2570 ± 22% +425.2% 13501 ± 22% interrupts.CPU158.CAL:Function_call_interrupts
753.50 ± 29% -62.2% 284.75 ± 25% interrupts.CPU158.NMI:Non-maskable_interrupts
753.50 ± 29% -62.2% 284.75 ± 25% interrupts.CPU158.PMI:Performance_monitoring_interrupts
99.50 ± 23% +469.3% 566.50 ± 4% interrupts.CPU158.RES:Rescheduling_interrupts
80.00 ± 82% +148.8% 199.00 ± 18% interrupts.CPU158.TLB:TLB_shootdowns
2700 ± 23% +335.8% 11767 ± 7% interrupts.CPU159.CAL:Function_call_interrupts
626.75 ± 34% -62.7% 234.00 ± 29% interrupts.CPU159.NMI:Non-maskable_interrupts
626.75 ± 34% -62.7% 234.00 ± 29% interrupts.CPU159.PMI:Performance_monitoring_interrupts
142.50 ± 46% +244.7% 491.25 ± 8% interrupts.CPU159.RES:Rescheduling_interrupts
2221 ± 17% +445.5% 12114 ± 11% interrupts.CPU16.CAL:Function_call_interrupts
837.00 ± 16% -64.1% 300.50 ± 27% interrupts.CPU16.NMI:Non-maskable_interrupts
837.00 ± 16% -64.1% 300.50 ± 27% interrupts.CPU16.PMI:Performance_monitoring_interrupts
104.75 ± 23% +471.6% 598.75 ± 5% interrupts.CPU16.RES:Rescheduling_interrupts
2407 ± 21% +411.4% 12310 ± 11% interrupts.CPU160.CAL:Function_call_interrupts
90.25 ± 26% +461.2% 506.50 ± 4% interrupts.CPU160.RES:Rescheduling_interrupts
2433 ± 24% +404.8% 12281 ± 9% interrupts.CPU161.CAL:Function_call_interrupts
577.75 ± 52% -53.1% 270.75 ± 25% interrupts.CPU161.NMI:Non-maskable_interrupts
577.75 ± 52% -53.1% 270.75 ± 25% interrupts.CPU161.PMI:Performance_monitoring_interrupts
102.25 ± 23% +388.8% 499.75 ± 3% interrupts.CPU161.RES:Rescheduling_interrupts
85.75 ± 74% +116.0% 185.25 ± 17% interrupts.CPU161.TLB:TLB_shootdowns
2371 ± 23% +415.9% 12237 ± 10% interrupts.CPU162.CAL:Function_call_interrupts
648.75 ± 38% -57.0% 278.75 ± 25% interrupts.CPU162.NMI:Non-maskable_interrupts
648.75 ± 38% -57.0% 278.75 ± 25% interrupts.CPU162.PMI:Performance_monitoring_interrupts
127.50 ± 32% +363.1% 590.50 ± 9% interrupts.CPU162.RES:Rescheduling_interrupts
2301 ± 26% +422.3% 12019 ± 8% interrupts.CPU163.CAL:Function_call_interrupts
572.00 ± 36% -58.3% 238.75 ± 33% interrupts.CPU163.NMI:Non-maskable_interrupts
572.00 ± 36% -58.3% 238.75 ± 33% interrupts.CPU163.PMI:Performance_monitoring_interrupts
97.25 ± 25% +451.7% 536.50 ± 8% interrupts.CPU163.RES:Rescheduling_interrupts
2421 ± 21% +402.2% 12160 ± 6% interrupts.CPU164.CAL:Function_call_interrupts
610.75 ± 40% -61.6% 234.25 ± 35% interrupts.CPU164.NMI:Non-maskable_interrupts
610.75 ± 40% -61.6% 234.25 ± 35% interrupts.CPU164.PMI:Performance_monitoring_interrupts
103.50 ± 21% +380.0% 496.75 ± 6% interrupts.CPU164.RES:Rescheduling_interrupts
2486 ± 23% +384.2% 12039 ± 7% interrupts.CPU165.CAL:Function_call_interrupts
523.75 ± 30% -55.6% 232.50 ± 38% interrupts.CPU165.NMI:Non-maskable_interrupts
523.75 ± 30% -55.6% 232.50 ± 38% interrupts.CPU165.PMI:Performance_monitoring_interrupts
116.75 ± 44% +365.1% 543.00 ± 6% interrupts.CPU165.RES:Rescheduling_interrupts
2401 ± 28% +412.7% 12313 ± 7% interrupts.CPU166.CAL:Function_call_interrupts
866.50 ± 28% -72.5% 238.50 ± 35% interrupts.CPU166.NMI:Non-maskable_interrupts
866.50 ± 28% -72.5% 238.50 ± 35% interrupts.CPU166.PMI:Performance_monitoring_interrupts
112.25 ± 58% +366.1% 523.25 ± 9% interrupts.CPU166.RES:Rescheduling_interrupts
2708 ± 25% +355.6% 12338 ± 9% interrupts.CPU167.CAL:Function_call_interrupts
108.50 ± 34% +376.3% 516.75 ± 4% interrupts.CPU167.RES:Rescheduling_interrupts
2216 ± 8% +482.2% 12906 ± 15% interrupts.CPU168.CAL:Function_call_interrupts
89.25 ± 3% +556.9% 586.25 ± 16% interrupts.CPU168.RES:Rescheduling_interrupts
2053 ± 5% +445.2% 11193 ± 12% interrupts.CPU169.CAL:Function_call_interrupts
582.75 ± 39% -53.7% 270.00 ± 25% interrupts.CPU169.NMI:Non-maskable_interrupts
582.75 ± 39% -53.7% 270.00 ± 25% interrupts.CPU169.PMI:Performance_monitoring_interrupts
91.00 ± 10% +515.1% 559.75 ± 12% interrupts.CPU169.RES:Rescheduling_interrupts
2290 ± 11% +419.7% 11904 ± 9% interrupts.CPU17.CAL:Function_call_interrupts
652.25 ± 17% -55.4% 290.75 ± 25% interrupts.CPU17.NMI:Non-maskable_interrupts
652.25 ± 17% -55.4% 290.75 ± 25% interrupts.CPU17.PMI:Performance_monitoring_interrupts
85.75 ± 13% +609.0% 608.00 ± 10% interrupts.CPU17.RES:Rescheduling_interrupts
2642 ± 11% +341.5% 11666 ± 12% interrupts.CPU170.CAL:Function_call_interrupts
464.75 ± 16% -38.7% 285.00 ± 25% interrupts.CPU170.NMI:Non-maskable_interrupts
464.75 ± 16% -38.7% 285.00 ± 25% interrupts.CPU170.PMI:Performance_monitoring_interrupts
102.00 ± 11% +454.9% 566.00 ± 12% interrupts.CPU170.RES:Rescheduling_interrupts
2208 ± 19% +428.6% 11674 ± 14% interrupts.CPU171.CAL:Function_call_interrupts
423.00 ± 20% -33.5% 281.50 ± 25% interrupts.CPU171.NMI:Non-maskable_interrupts
423.00 ± 20% -33.5% 281.50 ± 25% interrupts.CPU171.PMI:Performance_monitoring_interrupts
89.00 ± 12% +553.9% 582.00 ± 11% interrupts.CPU171.RES:Rescheduling_interrupts
2398 ± 19% +384.9% 11630 ± 15% interrupts.CPU172.CAL:Function_call_interrupts
93.25 ± 11% +538.3% 595.25 ± 8% interrupts.CPU172.RES:Rescheduling_interrupts
2535 ± 17% +353.2% 11492 ± 13% interrupts.CPU173.CAL:Function_call_interrupts
788.75 ± 33% -64.4% 280.50 ± 25% interrupts.CPU173.NMI:Non-maskable_interrupts
788.75 ± 33% -64.4% 280.50 ± 25% interrupts.CPU173.PMI:Performance_monitoring_interrupts
79.75 ± 13% +621.3% 575.25 ± 13% interrupts.CPU173.RES:Rescheduling_interrupts
39.50 ±163% +267.7% 145.25 ± 35% interrupts.CPU173.TLB:TLB_shootdowns
2371 ± 13% +399.8% 11850 ± 11% interrupts.CPU174.CAL:Function_call_interrupts
767.75 ± 14% -62.3% 289.75 ± 27% interrupts.CPU174.NMI:Non-maskable_interrupts
767.75 ± 14% -62.3% 289.75 ± 27% interrupts.CPU174.PMI:Performance_monitoring_interrupts
92.50 ± 18% +559.2% 609.75 ± 13% interrupts.CPU174.RES:Rescheduling_interrupts
2618 ± 14% +351.9% 11833 ± 15% interrupts.CPU175.CAL:Function_call_interrupts
688.00 ± 34% -64.8% 242.00 ± 34% interrupts.CPU175.NMI:Non-maskable_interrupts
688.00 ± 34% -64.8% 242.00 ± 34% interrupts.CPU175.PMI:Performance_monitoring_interrupts
94.00 ± 11% +499.2% 563.25 ± 12% interrupts.CPU175.RES:Rescheduling_interrupts
2190 ± 8% +426.1% 11520 ± 13% interrupts.CPU176.CAL:Function_call_interrupts
91.00 +550.5% 592.00 ± 9% interrupts.CPU176.RES:Rescheduling_interrupts
2348 ± 19% +392.3% 11562 ± 14% interrupts.CPU177.CAL:Function_call_interrupts
733.50 ± 25% -67.0% 242.25 ± 32% interrupts.CPU177.NMI:Non-maskable_interrupts
733.50 ± 25% -67.0% 242.25 ± 32% interrupts.CPU177.PMI:Performance_monitoring_interrupts
91.50 ± 19% +515.0% 562.75 ± 11% interrupts.CPU177.RES:Rescheduling_interrupts
2615 ± 14% +351.9% 11820 ± 14% interrupts.CPU178.CAL:Function_call_interrupts
582.00 ± 20% -53.7% 269.25 ± 24% interrupts.CPU178.NMI:Non-maskable_interrupts
582.00 ± 20% -53.7% 269.25 ± 24% interrupts.CPU178.PMI:Performance_monitoring_interrupts
89.25 ± 16% +538.9% 570.25 ± 14% interrupts.CPU178.RES:Rescheduling_interrupts
2363 ± 17% +408.5% 12016 ± 12% interrupts.CPU179.CAL:Function_call_interrupts
613.50 ± 25% -48.7% 315.00 ± 4% interrupts.CPU179.NMI:Non-maskable_interrupts
613.50 ± 25% -48.7% 315.00 ± 4% interrupts.CPU179.PMI:Performance_monitoring_interrupts
84.25 ± 19% +535.0% 535.00 ± 10% interrupts.CPU179.RES:Rescheduling_interrupts
2323 ± 17% +438.9% 12519 ± 12% interrupts.CPU18.CAL:Function_call_interrupts
667.50 ± 23% -54.3% 305.25 ± 32% interrupts.CPU18.NMI:Non-maskable_interrupts
667.50 ± 23% -54.3% 305.25 ± 32% interrupts.CPU18.PMI:Performance_monitoring_interrupts
96.00 ± 20% +515.6% 591.00 ± 5% interrupts.CPU18.RES:Rescheduling_interrupts
42.25 ±137% +234.9% 141.50 ± 22% interrupts.CPU18.TLB:TLB_shootdowns
2238 ± 13% +434.1% 11956 ± 12% interrupts.CPU180.CAL:Function_call_interrupts
85.50 ± 6% +583.3% 584.25 ± 10% interrupts.CPU180.RES:Rescheduling_interrupts
2208 ± 13% +423.9% 11570 ± 10% interrupts.CPU181.CAL:Function_call_interrupts
686.75 ± 28% -60.7% 270.00 ± 24% interrupts.CPU181.NMI:Non-maskable_interrupts
686.75 ± 28% -60.7% 270.00 ± 24% interrupts.CPU181.PMI:Performance_monitoring_interrupts
90.50 ± 10% +553.9% 591.75 ± 7% interrupts.CPU181.RES:Rescheduling_interrupts
2230 ± 19% +416.0% 11510 ± 11% interrupts.CPU182.CAL:Function_call_interrupts
524.75 ± 28% -47.7% 274.25 ± 25% interrupts.CPU182.NMI:Non-maskable_interrupts
524.75 ± 28% -47.7% 274.25 ± 25% interrupts.CPU182.PMI:Performance_monitoring_interrupts
79.00 ± 11% +618.0% 567.25 ± 11% interrupts.CPU182.RES:Rescheduling_interrupts
2319 ± 13% +400.7% 11613 ± 13% interrupts.CPU183.CAL:Function_call_interrupts
685.00 ± 20% -59.1% 280.25 ± 24% interrupts.CPU183.NMI:Non-maskable_interrupts
685.00 ± 20% -59.1% 280.25 ± 24% interrupts.CPU183.PMI:Performance_monitoring_interrupts
84.50 ± 12% +584.0% 578.00 ± 13% interrupts.CPU183.RES:Rescheduling_interrupts
146.50 ± 17% +31.2% 192.25 ± 11% interrupts.CPU183.TLB:TLB_shootdowns
2148 ± 20% +453.3% 11888 ± 12% interrupts.CPU184.CAL:Function_call_interrupts
759.25 ± 12% -64.3% 271.25 ± 25% interrupts.CPU184.NMI:Non-maskable_interrupts
759.25 ± 12% -64.3% 271.25 ± 25% interrupts.CPU184.PMI:Performance_monitoring_interrupts
79.00 ± 20% +621.8% 570.25 ± 11% interrupts.CPU184.RES:Rescheduling_interrupts
2426 ± 10% +389.0% 11862 ± 14% interrupts.CPU185.CAL:Function_call_interrupts
787.50 ± 20% -65.4% 272.50 ± 25% interrupts.CPU185.NMI:Non-maskable_interrupts
787.50 ± 20% -65.4% 272.50 ± 25% interrupts.CPU185.PMI:Performance_monitoring_interrupts
91.75 ± 3% +507.4% 557.25 ± 11% interrupts.CPU185.RES:Rescheduling_interrupts
2325 ± 10% +416.5% 12012 ± 13% interrupts.CPU186.CAL:Function_call_interrupts
577.25 ± 30% -53.5% 268.50 ± 23% interrupts.CPU186.NMI:Non-maskable_interrupts
577.25 ± 30% -53.5% 268.50 ± 23% interrupts.CPU186.PMI:Performance_monitoring_interrupts
97.75 ± 12% +526.9% 612.75 ± 8% interrupts.CPU186.RES:Rescheduling_interrupts
2359 ± 13% +410.1% 12034 ± 13% interrupts.CPU187.CAL:Function_call_interrupts
633.25 ± 29% -55.0% 285.00 ± 23% interrupts.CPU187.NMI:Non-maskable_interrupts
633.25 ± 29% -55.0% 285.00 ± 23% interrupts.CPU187.PMI:Performance_monitoring_interrupts
88.25 ± 7% +548.2% 572.00 ± 12% interrupts.CPU187.RES:Rescheduling_interrupts
2492 ± 10% +377.4% 11897 ± 12% interrupts.CPU188.CAL:Function_call_interrupts
88.75 ± 7% +545.1% 572.50 ± 11% interrupts.CPU188.RES:Rescheduling_interrupts
2492 ± 21% +383.5% 12052 ± 13% interrupts.CPU189.CAL:Function_call_interrupts
83.50 ± 15% +585.9% 572.75 ± 12% interrupts.CPU189.RES:Rescheduling_interrupts
2112 ± 12% +486.8% 12396 ± 10% interrupts.CPU19.CAL:Function_call_interrupts
857.75 ± 25% -68.4% 271.25 ± 24% interrupts.CPU19.NMI:Non-maskable_interrupts
857.75 ± 25% -68.4% 271.25 ± 24% interrupts.CPU19.PMI:Performance_monitoring_interrupts
113.00 ± 45% +433.6% 603.00 ± 7% interrupts.CPU19.RES:Rescheduling_interrupts
2504 ± 17% +380.6% 12035 ± 12% interrupts.CPU190.CAL:Function_call_interrupts
85.75 ± 8% +563.3% 568.75 ± 7% interrupts.CPU190.RES:Rescheduling_interrupts
2150 ± 14% +425.5% 11298 ± 12% interrupts.CPU191.CAL:Function_call_interrupts
620.50 ± 27% -48.6% 319.00 ± 6% interrupts.CPU191.NMI:Non-maskable_interrupts
620.50 ± 27% -48.6% 319.00 ± 6% interrupts.CPU191.PMI:Performance_monitoring_interrupts
86.50 ± 8% +528.6% 543.75 ± 14% interrupts.CPU191.RES:Rescheduling_interrupts
2399 ± 21% +399.3% 11982 ± 11% interrupts.CPU2.CAL:Function_call_interrupts
96.75 ± 14% +480.4% 561.50 ± 2% interrupts.CPU2.RES:Rescheduling_interrupts
23.75 ±136% +531.6% 150.00 ± 39% interrupts.CPU2.TLB:TLB_shootdowns
2620 ± 23% +353.1% 11871 ± 16% interrupts.CPU20.CAL:Function_call_interrupts
733.00 ± 19% -60.5% 289.25 ± 8% interrupts.CPU20.NMI:Non-maskable_interrupts
733.00 ± 19% -60.5% 289.25 ± 8% interrupts.CPU20.PMI:Performance_monitoring_interrupts
128.25 ± 46% +342.3% 567.25 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
2243 ± 15% +445.3% 12234 ± 12% interrupts.CPU21.CAL:Function_call_interrupts
974.00 ± 27% -68.4% 308.00 ± 5% interrupts.CPU21.NMI:Non-maskable_interrupts
974.00 ± 27% -68.4% 308.00 ± 5% interrupts.CPU21.PMI:Performance_monitoring_interrupts
118.75 ± 35% +396.6% 589.75 ± 4% interrupts.CPU21.RES:Rescheduling_interrupts
2692 ± 8% +355.7% 12268 ± 10% interrupts.CPU22.CAL:Function_call_interrupts
934.00 ± 13% -67.2% 306.00 ± 12% interrupts.CPU22.NMI:Non-maskable_interrupts
934.00 ± 13% -67.2% 306.00 ± 12% interrupts.CPU22.PMI:Performance_monitoring_interrupts
105.00 ± 26% +448.3% 575.75 ± 4% interrupts.CPU22.RES:Rescheduling_interrupts
2237 ± 22% +447.9% 12260 ± 11% interrupts.CPU23.CAL:Function_call_interrupts
892.25 ± 33% -64.6% 316.25 ± 4% interrupts.CPU23.NMI:Non-maskable_interrupts
892.25 ± 33% -64.6% 316.25 ± 4% interrupts.CPU23.PMI:Performance_monitoring_interrupts
88.75 ± 14% +538.0% 566.25 interrupts.CPU23.RES:Rescheduling_interrupts
2122 ± 13% +509.9% 12946 ± 11% interrupts.CPU24.CAL:Function_call_interrupts
858.25 ± 18% -68.3% 272.00 ± 27% interrupts.CPU24.NMI:Non-maskable_interrupts
858.25 ± 18% -68.3% 272.00 ± 27% interrupts.CPU24.PMI:Performance_monitoring_interrupts
94.25 ± 15% +514.9% 579.50 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
2380 ± 25% +387.3% 11602 ± 8% interrupts.CPU25.CAL:Function_call_interrupts
862.75 ± 31% -64.3% 307.75 ± 2% interrupts.CPU25.NMI:Non-maskable_interrupts
862.75 ± 31% -64.3% 307.75 ± 2% interrupts.CPU25.PMI:Performance_monitoring_interrupts
91.75 ± 10% +503.3% 553.50 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
2430 ± 20% +360.5% 11194 ± 8% interrupts.CPU26.CAL:Function_call_interrupts
682.25 ± 25% -60.1% 272.50 ± 22% interrupts.CPU26.NMI:Non-maskable_interrupts
682.25 ± 25% -60.1% 272.50 ± 22% interrupts.CPU26.PMI:Performance_monitoring_interrupts
90.75 ± 10% +549.6% 589.50 ± 11% interrupts.CPU26.RES:Rescheduling_interrupts
2753 ± 29% +329.7% 11831 ± 10% interrupts.CPU27.CAL:Function_call_interrupts
88.25 ± 22% +513.9% 541.75 ± 2% interrupts.CPU27.RES:Rescheduling_interrupts
2153 ± 18% +430.9% 11435 ± 8% interrupts.CPU28.CAL:Function_call_interrupts
84.00 ± 10% +544.6% 541.50 ± 3% interrupts.CPU28.RES:Rescheduling_interrupts
2390 ± 26% +392.4% 11769 ± 11% interrupts.CPU29.CAL:Function_call_interrupts
89.00 ± 16% +534.6% 564.75 ± 12% interrupts.CPU29.RES:Rescheduling_interrupts
2370 ± 10% +414.5% 12194 ± 13% interrupts.CPU3.CAL:Function_call_interrupts
102.75 ± 18% +435.5% 550.25 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
2335 ± 25% +402.0% 11723 ± 9% interrupts.CPU30.CAL:Function_call_interrupts
914.50 ± 22% -65.3% 317.75 ± 7% interrupts.CPU30.NMI:Non-maskable_interrupts
914.50 ± 22% -65.3% 317.75 ± 7% interrupts.CPU30.PMI:Performance_monitoring_interrupts
87.50 ± 10% +525.4% 547.25 ± 4% interrupts.CPU30.RES:Rescheduling_interrupts
2170 ± 14% +441.0% 11742 ± 9% interrupts.CPU31.CAL:Function_call_interrupts
875.00 ± 15% -61.8% 334.00 ± 5% interrupts.CPU31.NMI:Non-maskable_interrupts
875.00 ± 15% -61.8% 334.00 ± 5% interrupts.CPU31.PMI:Performance_monitoring_interrupts
87.75 ± 7% +551.0% 571.25 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
2292 ± 17% +405.6% 11593 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
887.50 ± 9% -69.0% 275.25 ± 28% interrupts.CPU32.NMI:Non-maskable_interrupts
887.50 ± 9% -69.0% 275.25 ± 28% interrupts.CPU32.PMI:Performance_monitoring_interrupts
82.75 ± 13% +590.6% 571.50 ± 4% interrupts.CPU32.RES:Rescheduling_interrupts
2208 ± 16% +419.1% 11464 ± 9% interrupts.CPU33.CAL:Function_call_interrupts
840.25 ± 21% -67.7% 271.25 ± 26% interrupts.CPU33.NMI:Non-maskable_interrupts
840.25 ± 21% -67.7% 271.25 ± 26% interrupts.CPU33.PMI:Performance_monitoring_interrupts
86.50 ± 10% +520.8% 537.00 ± 3% interrupts.CPU33.RES:Rescheduling_interrupts
2385 ± 15% +398.7% 11896 ± 8% interrupts.CPU34.CAL:Function_call_interrupts
654.25 ± 32% -58.0% 274.75 ± 27% interrupts.CPU34.NMI:Non-maskable_interrupts
654.25 ± 32% -58.0% 274.75 ± 27% interrupts.CPU34.PMI:Performance_monitoring_interrupts
87.00 ± 10% +569.8% 582.75 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
2350 ± 23% +401.3% 11785 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
89.00 ± 9% +507.0% 540.25 ± 3% interrupts.CPU35.RES:Rescheduling_interrupts
2095 ± 13% +468.8% 11919 ± 9% interrupts.CPU36.CAL:Function_call_interrupts
597.00 ± 35% -53.6% 277.00 ± 29% interrupts.CPU36.NMI:Non-maskable_interrupts
597.00 ± 35% -53.6% 277.00 ± 29% interrupts.CPU36.PMI:Performance_monitoring_interrupts
76.75 ± 2% +632.6% 562.25 ± 8% interrupts.CPU36.RES:Rescheduling_interrupts
2092 ± 17% +454.5% 11599 ± 10% interrupts.CPU37.CAL:Function_call_interrupts
735.25 ± 23% -68.1% 234.50 ± 36% interrupts.CPU37.NMI:Non-maskable_interrupts
735.25 ± 23% -68.1% 234.50 ± 36% interrupts.CPU37.PMI:Performance_monitoring_interrupts
92.75 ± 10% +507.0% 563.00 ± 8% interrupts.CPU37.RES:Rescheduling_interrupts
2215 ± 13% +426.0% 11651 ± 10% interrupts.CPU38.CAL:Function_call_interrupts
633.50 ± 15% -54.8% 286.25 ± 26% interrupts.CPU38.NMI:Non-maskable_interrupts
633.50 ± 15% -54.8% 286.25 ± 26% interrupts.CPU38.PMI:Performance_monitoring_interrupts
102.50 ± 10% +454.6% 568.50 ± 3% interrupts.CPU38.RES:Rescheduling_interrupts
2136 ± 17% +441.6% 11571 ± 11% interrupts.CPU39.CAL:Function_call_interrupts
662.50 ± 41% -53.6% 307.50 ± 6% interrupts.CPU39.NMI:Non-maskable_interrupts
662.50 ± 41% -53.6% 307.50 ± 6% interrupts.CPU39.PMI:Performance_monitoring_interrupts
87.25 ± 16% +543.0% 561.00 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
2402 ± 13% +414.5% 12360 ± 13% interrupts.CPU4.CAL:Function_call_interrupts
816.75 ± 32% -64.8% 287.75 ± 19% interrupts.CPU4.NMI:Non-maskable_interrupts
816.75 ± 32% -64.8% 287.75 ± 19% interrupts.CPU4.PMI:Performance_monitoring_interrupts
90.50 ± 10% +566.6% 603.25 ± 7% interrupts.CPU4.RES:Rescheduling_interrupts
67.75 ± 58% +125.8% 153.00 ± 31% interrupts.CPU4.TLB:TLB_shootdowns
2253 ± 17% +424.5% 11819 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
668.25 ? 24% -60.7% 262.75 ? 26% interrupts.CPU40.NMI:Non-maskable_interrupts
668.25 ? 24% -60.7% 262.75 ? 26% interrupts.CPU40.PMI:Performance_monitoring_interrupts
81.75 ? 9% +605.8% 577.00 ? 7% interrupts.CPU40.RES:Rescheduling_interrupts
95.00 ? 98% +160.3% 247.25 ? 21% interrupts.CPU40.TLB:TLB_shootdowns
2255 ? 9% +425.1% 11842 ? 9% interrupts.CPU41.CAL:Function_call_interrupts
801.50 ? 18% -60.7% 315.25 ? 7% interrupts.CPU41.NMI:Non-maskable_interrupts
801.50 ? 18% -60.7% 315.25 ? 7% interrupts.CPU41.PMI:Performance_monitoring_interrupts
96.50 ? 12% +487.8% 567.25 ? 4% interrupts.CPU41.RES:Rescheduling_interrupts
2304 ? 12% +408.5% 11718 ? 9% interrupts.CPU42.CAL:Function_call_interrupts
576.00 ? 15% -44.7% 318.50 ? 7% interrupts.CPU42.NMI:Non-maskable_interrupts
576.00 ? 15% -44.7% 318.50 ? 7% interrupts.CPU42.PMI:Performance_monitoring_interrupts
86.25 ? 12% +560.6% 569.75 ? 5% interrupts.CPU42.RES:Rescheduling_interrupts
2416 ? 12% +384.4% 11705 ? 11% interrupts.CPU43.CAL:Function_call_interrupts
629.75 ? 22% -49.5% 318.25 ? 9% interrupts.CPU43.NMI:Non-maskable_interrupts
629.75 ? 22% -49.5% 318.25 ? 9% interrupts.CPU43.PMI:Performance_monitoring_interrupts
89.25 ? 17% +526.1% 558.75 ? 7% interrupts.CPU43.RES:Rescheduling_interrupts
2509 ? 11% +370.7% 11813 ? 10% interrupts.CPU44.CAL:Function_call_interrupts
593.00 ? 40% -54.3% 271.00 ? 27% interrupts.CPU44.NMI:Non-maskable_interrupts
593.00 ? 40% -54.3% 271.00 ? 27% interrupts.CPU44.PMI:Performance_monitoring_interrupts
95.00 ? 23% +462.9% 534.75 ? 4% interrupts.CPU44.RES:Rescheduling_interrupts
2177 ? 21% +432.8% 11602 ? 9% interrupts.CPU45.CAL:Function_call_interrupts
920.75 ? 27% -66.3% 310.00 ? 9% interrupts.CPU45.NMI:Non-maskable_interrupts
920.75 ? 27% -66.3% 310.00 ? 9% interrupts.CPU45.PMI:Performance_monitoring_interrupts
82.00 ? 24% +588.7% 564.75 ? 4% interrupts.CPU45.RES:Rescheduling_interrupts
2375 ? 18% +400.7% 11892 ? 9% interrupts.CPU46.CAL:Function_call_interrupts
692.00 ? 22% -53.1% 324.25 ? 6% interrupts.CPU46.NMI:Non-maskable_interrupts
692.00 ? 22% -53.1% 324.25 ? 6% interrupts.CPU46.PMI:Performance_monitoring_interrupts
79.50 ? 10% +598.7% 555.50 ? 5% interrupts.CPU46.RES:Rescheduling_interrupts
2041 ? 21% +473.4% 11706 ? 9% interrupts.CPU47.CAL:Function_call_interrupts
878.75 ? 16% -63.6% 319.75 ? 4% interrupts.CPU47.NMI:Non-maskable_interrupts
878.75 ? 16% -63.6% 319.75 ? 4% interrupts.CPU47.PMI:Performance_monitoring_interrupts
86.25 ? 11% +524.6% 538.75 ? 6% interrupts.CPU47.RES:Rescheduling_interrupts
3420 ? 43% +291.2% 13381 ? 13% interrupts.CPU48.CAL:Function_call_interrupts
740.75 ? 6% -56.1% 325.50 ? 7% interrupts.CPU48.NMI:Non-maskable_interrupts
740.75 ? 6% -56.1% 325.50 ? 7% interrupts.CPU48.PMI:Performance_monitoring_interrupts
135.50 ? 35% +319.6% 568.50 ? 6% interrupts.CPU48.RES:Rescheduling_interrupts
2409 ? 24% +397.2% 11978 ? 11% interrupts.CPU49.CAL:Function_call_interrupts
608.25 ? 16% -49.8% 305.25 ? 7% interrupts.CPU49.NMI:Non-maskable_interrupts
608.25 ? 16% -49.8% 305.25 ? 7% interrupts.CPU49.PMI:Performance_monitoring_interrupts
128.75 ? 39% +305.8% 522.50 ? 9% interrupts.CPU49.RES:Rescheduling_interrupts
2299 ? 26% +424.7% 12062 ? 11% interrupts.CPU5.CAL:Function_call_interrupts
111.50 ? 12% +435.2% 596.75 ? 8% interrupts.CPU5.RES:Rescheduling_interrupts
25.25 ?168% +462.4% 142.00 ? 24% interrupts.CPU5.TLB:TLB_shootdowns
2401 ? 25% +426.8% 12650 ? 8% interrupts.CPU50.CAL:Function_call_interrupts
675.75 ? 37% -54.5% 307.25 ? 3% interrupts.CPU50.NMI:Non-maskable_interrupts
675.75 ? 37% -54.5% 307.25 ? 3% interrupts.CPU50.PMI:Performance_monitoring_interrupts
98.00 ? 28% +390.8% 481.00 ? 5% interrupts.CPU50.RES:Rescheduling_interrupts
2622 ? 15% +361.0% 12089 ? 11% interrupts.CPU51.CAL:Function_call_interrupts
991.25 ? 5% -66.2% 334.75 ? 12% interrupts.CPU51.NMI:Non-maskable_interrupts
991.25 ? 5% -66.2% 334.75 ? 12% interrupts.CPU51.PMI:Performance_monitoring_interrupts
149.25 ? 65% +280.7% 568.25 ? 8% interrupts.CPU51.RES:Rescheduling_interrupts
2565 ? 13% +358.3% 11757 ? 10% interrupts.CPU52.CAL:Function_call_interrupts
111.00 ? 24% +395.9% 550.50 ? 12% interrupts.CPU52.RES:Rescheduling_interrupts
2873 ? 21% +330.3% 12364 ? 10% interrupts.CPU53.CAL:Function_call_interrupts
103.00 ? 22% +415.5% 531.00 ? 9% interrupts.CPU53.RES:Rescheduling_interrupts
2748 ? 30% +345.0% 12227 ? 10% interrupts.CPU54.CAL:Function_call_interrupts
115.75 ? 29% +351.8% 523.00 ? 6% interrupts.CPU54.RES:Rescheduling_interrupts
2349 ? 20% +407.6% 11926 ? 10% interrupts.CPU55.CAL:Function_call_interrupts
649.75 ? 49% -49.2% 329.75 ? 5% interrupts.CPU55.NMI:Non-maskable_interrupts
649.75 ? 49% -49.2% 329.75 ? 5% interrupts.CPU55.PMI:Performance_monitoring_interrupts
103.25 ? 24% +398.8% 515.00 ? 4% interrupts.CPU55.RES:Rescheduling_interrupts
2361 ? 22% +399.4% 11794 ? 8% interrupts.CPU56.CAL:Function_call_interrupts
539.00 ? 26% -42.5% 309.75 interrupts.CPU56.NMI:Non-maskable_interrupts
539.00 ? 26% -42.5% 309.75 interrupts.CPU56.PMI:Performance_monitoring_interrupts
98.25 ? 21% +430.3% 521.00 ? 12% interrupts.CPU56.RES:Rescheduling_interrupts
112.75 ? 30% +51.7% 171.00 ? 10% interrupts.CPU56.TLB:TLB_shootdowns
2586 ? 27% +371.3% 12188 ? 9% interrupts.CPU57.CAL:Function_call_interrupts
759.75 ? 29% -57.6% 322.50 ? 10% interrupts.CPU57.NMI:Non-maskable_interrupts
759.75 ? 29% -57.6% 322.50 ? 10% interrupts.CPU57.PMI:Performance_monitoring_interrupts
88.75 ? 14% +493.8% 527.00 interrupts.CPU57.RES:Rescheduling_interrupts
2521 ? 24% +378.8% 12072 ? 9% interrupts.CPU58.CAL:Function_call_interrupts
788.50 ? 29% -60.8% 308.75 ? 5% interrupts.CPU58.NMI:Non-maskable_interrupts
788.50 ? 29% -60.8% 308.75 ? 5% interrupts.CPU58.PMI:Performance_monitoring_interrupts
127.00 ? 46% +311.0% 522.00 ? 6% interrupts.CPU58.RES:Rescheduling_interrupts
2312 ? 25% +422.9% 12090 ? 8% interrupts.CPU59.CAL:Function_call_interrupts
663.25 ? 14% -57.8% 279.75 ? 25% interrupts.CPU59.NMI:Non-maskable_interrupts
663.25 ? 14% -57.8% 279.75 ? 25% interrupts.CPU59.PMI:Performance_monitoring_interrupts
121.50 ? 20% +356.6% 554.75 ? 13% interrupts.CPU59.RES:Rescheduling_interrupts
2278 ? 14% +417.2% 11780 ? 12% interrupts.CPU6.CAL:Function_call_interrupts
894.25 ? 22% -63.8% 324.00 ? 3% interrupts.CPU6.NMI:Non-maskable_interrupts
894.25 ? 22% -63.8% 324.00 ? 3% interrupts.CPU6.PMI:Performance_monitoring_interrupts
119.25 ? 17% +353.7% 541.00 ? 8% interrupts.CPU6.RES:Rescheduling_interrupts
2423 ? 20% +399.2% 12097 ? 8% interrupts.CPU60.CAL:Function_call_interrupts
588.00 ? 15% -60.5% 232.25 ? 28% interrupts.CPU60.NMI:Non-maskable_interrupts
588.00 ? 15% -60.5% 232.25 ? 28% interrupts.CPU60.PMI:Performance_monitoring_interrupts
146.00 ? 52% +249.0% 509.50 ? 6% interrupts.CPU60.RES:Rescheduling_interrupts
2814 ? 30% +341.2% 12414 ? 9% interrupts.CPU61.CAL:Function_call_interrupts
746.75 ? 24% -68.0% 239.25 ? 30% interrupts.CPU61.NMI:Non-maskable_interrupts
746.75 ? 24% -68.0% 239.25 ? 30% interrupts.CPU61.PMI:Performance_monitoring_interrupts
136.25 ? 37% +335.4% 593.25 ? 12% interrupts.CPU61.RES:Rescheduling_interrupts
2779 ? 17% +339.5% 12216 ? 6% interrupts.CPU62.CAL:Function_call_interrupts
647.75 ? 25% -63.2% 238.25 ? 32% interrupts.CPU62.NMI:Non-maskable_interrupts
647.75 ? 25% -63.2% 238.25 ? 32% interrupts.CPU62.PMI:Performance_monitoring_interrupts
131.50 ? 54% +296.2% 521.00 ? 6% interrupts.CPU62.RES:Rescheduling_interrupts
2831 ? 24% +322.3% 11956 ? 8% interrupts.CPU63.CAL:Function_call_interrupts
714.25 ? 28% -67.4% 232.75 ? 31% interrupts.CPU63.NMI:Non-maskable_interrupts
714.25 ? 28% -67.4% 232.75 ? 31% interrupts.CPU63.PMI:Performance_monitoring_interrupts
112.50 ? 32% +365.1% 523.25 ? 5% interrupts.CPU63.RES:Rescheduling_interrupts
2512 ? 21% +399.1% 12538 ? 7% interrupts.CPU64.CAL:Function_call_interrupts
789.50 ? 25% -67.0% 260.75 ? 26% interrupts.CPU64.NMI:Non-maskable_interrupts
789.50 ? 25% -67.0% 260.75 ? 26% interrupts.CPU64.PMI:Performance_monitoring_interrupts
91.75 ? 17% +447.1% 502.00 ? 9% interrupts.CPU64.RES:Rescheduling_interrupts
2382 ? 15% +403.9% 12006 ? 8% interrupts.CPU65.CAL:Function_call_interrupts
96.00 ? 9% +436.2% 514.75 ? 3% interrupts.CPU65.RES:Rescheduling_interrupts
63.00 ?111% +181.3% 177.25 ? 37% interrupts.CPU65.TLB:TLB_shootdowns
2424 ? 19% +398.2% 12078 ? 6% interrupts.CPU66.CAL:Function_call_interrupts
108.75 ? 33% +391.3% 534.25 ? 6% interrupts.CPU66.RES:Rescheduling_interrupts
2301 ? 22% +425.4% 12093 ? 7% interrupts.CPU67.CAL:Function_call_interrupts
505.25 ? 31% -50.4% 250.75 ? 38% interrupts.CPU67.NMI:Non-maskable_interrupts
505.25 ? 31% -50.4% 250.75 ? 38% interrupts.CPU67.PMI:Performance_monitoring_interrupts
95.75 ? 8% +447.8% 524.50 ? 5% interrupts.CPU67.RES:Rescheduling_interrupts
2306 ? 17% +427.7% 12169 ? 7% interrupts.CPU68.CAL:Function_call_interrupts
580.25 ? 37% -53.1% 272.25 ? 24% interrupts.CPU68.NMI:Non-maskable_interrupts
580.25 ? 37% -53.1% 272.25 ? 24% interrupts.CPU68.PMI:Performance_monitoring_interrupts
88.75 ? 27% +490.4% 524.00 ? 13% interrupts.CPU68.RES:Rescheduling_interrupts
2404 ? 24% +389.6% 11771 ? 5% interrupts.CPU69.CAL:Function_call_interrupts
475.25 ? 28% -44.5% 264.00 ? 20% interrupts.CPU69.NMI:Non-maskable_interrupts
475.25 ? 28% -44.5% 264.00 ? 20% interrupts.CPU69.PMI:Performance_monitoring_interrupts
91.75 ? 22% +438.1% 493.75 ? 12% interrupts.CPU69.RES:Rescheduling_interrupts
79.25 ? 65% +119.9% 174.25 ? 21% interrupts.CPU69.TLB:TLB_shootdowns
2231 ? 14% +424.6% 11705 ? 12% interrupts.CPU7.CAL:Function_call_interrupts
770.75 ? 39% -59.4% 312.75 ? 5% interrupts.CPU7.NMI:Non-maskable_interrupts
770.75 ? 39% -59.4% 312.75 ? 5% interrupts.CPU7.PMI:Performance_monitoring_interrupts
124.75 ? 26% +357.5% 570.75 ? 15% interrupts.CPU7.RES:Rescheduling_interrupts
2482 ? 23% +382.3% 11971 ? 7% interrupts.CPU70.CAL:Function_call_interrupts
929.75 ? 27% -73.6% 245.00 ? 39% interrupts.CPU70.NMI:Non-maskable_interrupts
929.75 ? 27% -73.6% 245.00 ? 39% interrupts.CPU70.PMI:Performance_monitoring_interrupts
139.75 ? 66% +256.7% 498.50 ? 9% interrupts.CPU70.RES:Rescheduling_interrupts
2487 ? 23% +400.1% 12437 ? 9% interrupts.CPU71.CAL:Function_call_interrupts
811.50 ? 24% -66.3% 273.75 ? 21% interrupts.CPU71.NMI:Non-maskable_interrupts
811.50 ? 24% -66.3% 273.75 ? 21% interrupts.CPU71.PMI:Performance_monitoring_interrupts
128.75 ? 34% +308.7% 526.25 ? 7% interrupts.CPU71.RES:Rescheduling_interrupts
97.75 ? 58% +108.2% 203.50 ? 26% interrupts.CPU71.TLB:TLB_shootdowns
2289 ? 16% +458.8% 12796 ? 14% interrupts.CPU72.CAL:Function_call_interrupts
644.50 ? 27% -56.1% 283.00 ? 22% interrupts.CPU72.NMI:Non-maskable_interrupts
644.50 ? 27% -56.1% 283.00 ? 22% interrupts.CPU72.PMI:Performance_monitoring_interrupts
98.75 ? 18% +487.3% 580.00 ? 14% interrupts.CPU72.RES:Rescheduling_interrupts
2366 ? 12% +386.4% 11510 ? 12% interrupts.CPU73.CAL:Function_call_interrupts
658.50 ? 21% -59.5% 267.00 ? 23% interrupts.CPU73.NMI:Non-maskable_interrupts
658.50 ? 21% -59.5% 267.00 ? 23% interrupts.CPU73.PMI:Performance_monitoring_interrupts
110.75 ? 17% +415.8% 571.25 ? 13% interrupts.CPU73.RES:Rescheduling_interrupts
2513 ? 13% +365.4% 11696 ? 13% interrupts.CPU74.CAL:Function_call_interrupts
99.25 ? 16% +468.5% 564.25 ? 9% interrupts.CPU74.RES:Rescheduling_interrupts
2510 ? 14% +370.8% 11816 ? 13% interrupts.CPU75.CAL:Function_call_interrupts
94.00 ? 4% +497.6% 561.75 ? 10% interrupts.CPU75.RES:Rescheduling_interrupts
2801 ? 22% +319.1% 11742 ? 14% interrupts.CPU76.CAL:Function_call_interrupts
754.25 ? 26% -63.3% 277.00 ? 23% interrupts.CPU76.NMI:Non-maskable_interrupts
754.25 ? 26% -63.3% 277.00 ? 23% interrupts.CPU76.PMI:Performance_monitoring_interrupts
93.00 ? 18% +505.9% 563.50 ? 11% interrupts.CPU76.RES:Rescheduling_interrupts
2410 ? 4% +398.4% 12011 ? 13% interrupts.CPU77.CAL:Function_call_interrupts
97.75 ? 8% +474.4% 561.50 ? 10% interrupts.CPU77.RES:Rescheduling_interrupts
2780 ? 29% +336.4% 12132 ? 12% interrupts.CPU78.CAL:Function_call_interrupts
850.75 ? 35% -62.1% 322.25 ? 4% interrupts.CPU78.NMI:Non-maskable_interrupts
850.75 ? 35% -62.1% 322.25 ? 4% interrupts.CPU78.PMI:Performance_monitoring_interrupts
90.00 ? 21% +511.7% 550.50 ? 12% interrupts.CPU78.RES:Rescheduling_interrupts
2632 ? 15% +346.0% 11737 ? 13% interrupts.CPU79.CAL:Function_call_interrupts
746.75 ? 33% -57.3% 319.00 interrupts.CPU79.NMI:Non-maskable_interrupts
746.75 ? 33% -57.3% 319.00 interrupts.CPU79.PMI:Performance_monitoring_interrupts
92.50 ? 20% +509.7% 564.00 ? 13% interrupts.CPU79.RES:Rescheduling_interrupts
2107 ? 13% +457.3% 11744 ? 10% interrupts.CPU8.CAL:Function_call_interrupts
92.75 ? 11% +510.8% 566.50 ? 5% interrupts.CPU8.RES:Rescheduling_interrupts
50.50 ?121% +255.0% 179.25 ? 17% interrupts.CPU8.TLB:TLB_shootdowns
2856 ? 25% +306.1% 11599 ? 13% interrupts.CPU80.CAL:Function_call_interrupts
674.00 ? 27% -53.2% 315.75 ? 6% interrupts.CPU80.NMI:Non-maskable_interrupts
674.00 ? 27% -53.2% 315.75 ? 6% interrupts.CPU80.PMI:Performance_monitoring_interrupts
86.25 ? 10% +529.9% 543.25 ? 11% interrupts.CPU80.RES:Rescheduling_interrupts
2688 ? 23% +339.2% 11807 ? 13% interrupts.CPU81.CAL:Function_call_interrupts
689.75 ? 46% -60.4% 273.00 ? 21% interrupts.CPU81.NMI:Non-maskable_interrupts
689.75 ? 46% -60.4% 273.00 ? 21% interrupts.CPU81.PMI:Performance_monitoring_interrupts
88.50 ? 9% +549.4% 574.75 ? 13% interrupts.CPU81.RES:Rescheduling_interrupts
2630 ? 27% +352.4% 11900 ? 13% interrupts.CPU82.CAL:Function_call_interrupts
759.25 ? 43% -65.0% 266.00 ? 24% interrupts.CPU82.NMI:Non-maskable_interrupts
759.25 ? 43% -65.0% 266.00 ? 24% interrupts.CPU82.PMI:Performance_monitoring_interrupts
89.00 ? 6% +544.4% 573.50 ? 13% interrupts.CPU82.RES:Rescheduling_interrupts
2480 ? 14% +384.8% 12024 ? 10% interrupts.CPU83.CAL:Function_call_interrupts
681.00 ? 31% -66.3% 229.75 ? 31% interrupts.CPU83.NMI:Non-maskable_interrupts
681.00 ? 31% -66.3% 229.75 ? 31% interrupts.CPU83.PMI:Performance_monitoring_interrupts
87.75 ? 10% +550.4% 570.75 ? 11% interrupts.CPU83.RES:Rescheduling_interrupts
86.75 ? 60% +68.3% 146.00 ? 10% interrupts.CPU83.TLB:TLB_shootdowns
2302 ? 11% +408.4% 11704 ? 12% interrupts.CPU84.CAL:Function_call_interrupts
558.50 ? 26% -47.5% 293.00 ? 30% interrupts.CPU84.NMI:Non-maskable_interrupts
558.50 ? 26% -47.5% 293.00 ? 30% interrupts.CPU84.PMI:Performance_monitoring_interrupts
84.50 ? 7% +568.3% 564.75 ? 9% interrupts.CPU84.RES:Rescheduling_interrupts
2474 ? 19% +378.0% 11828 ? 12% interrupts.CPU85.CAL:Function_call_interrupts
677.50 ? 14% -60.6% 266.75 ? 27% interrupts.CPU85.NMI:Non-maskable_interrupts
677.50 ? 14% -60.6% 266.75 ? 27% interrupts.CPU85.PMI:Performance_monitoring_interrupts
93.50 ? 6% +514.7% 574.75 ? 8% interrupts.CPU85.RES:Rescheduling_interrupts
2453 ? 16% +376.5% 11687 ? 12% interrupts.CPU86.CAL:Function_call_interrupts
789.75 ? 27% -66.2% 266.75 ? 26% interrupts.CPU86.NMI:Non-maskable_interrupts
789.75 ? 27% -66.2% 266.75 ? 26% interrupts.CPU86.PMI:Performance_monitoring_interrupts
94.00 ? 13% +492.3% 556.75 ? 9% interrupts.CPU86.RES:Rescheduling_interrupts
93.25 ? 28% +53.9% 143.50 ? 27% interrupts.CPU86.TLB:TLB_shootdowns
2522 ? 10% +353.0% 11426 ? 12% interrupts.CPU87.CAL:Function_call_interrupts
651.50 ? 17% -57.4% 277.50 ? 26% interrupts.CPU87.NMI:Non-maskable_interrupts
651.50 ? 17% -57.4% 277.50 ? 26% interrupts.CPU87.PMI:Performance_monitoring_interrupts
98.00 ? 15% +475.5% 564.00 ? 10% interrupts.CPU87.RES:Rescheduling_interrupts
2427 ? 8% +388.6% 11859 ? 13% interrupts.CPU88.CAL:Function_call_interrupts
661.00 ? 15% -58.9% 271.50 ? 26% interrupts.CPU88.NMI:Non-maskable_interrupts
661.00 ? 15% -58.9% 271.50 ? 26% interrupts.CPU88.PMI:Performance_monitoring_interrupts
95.75 ? 9% +491.6% 566.50 ? 11% interrupts.CPU88.RES:Rescheduling_interrupts
119.00 ? 17% +56.3% 186.00 ? 21% interrupts.CPU88.TLB:TLB_shootdowns
2317 ? 21% +410.8% 11839 ? 14% interrupts.CPU89.CAL:Function_call_interrupts
785.00 ? 33% -64.3% 280.50 ? 29% interrupts.CPU89.NMI:Non-maskable_interrupts
785.00 ? 33% -64.3% 280.50 ? 29% interrupts.CPU89.PMI:Performance_monitoring_interrupts
89.50 ? 10% +523.7% 558.25 ? 4% interrupts.CPU89.RES:Rescheduling_interrupts
2135 ? 20% +448.0% 11703 ? 9% interrupts.CPU9.CAL:Function_call_interrupts
830.00 ? 34% -56.2% 363.25 ? 25% interrupts.CPU9.NMI:Non-maskable_interrupts
830.00 ? 34% -56.2% 363.25 ? 25% interrupts.CPU9.PMI:Performance_monitoring_interrupts
110.75 ? 46% +372.0% 522.75 ? 8% interrupts.CPU9.RES:Rescheduling_interrupts
2749 ? 17% +331.5% 11862 ? 13% interrupts.CPU90.CAL:Function_call_interrupts
801.25 ? 12% -70.6% 235.50 ? 38% interrupts.CPU90.NMI:Non-maskable_interrupts
801.25 ? 12% -70.6% 235.50 ? 38% interrupts.CPU90.PMI:Performance_monitoring_interrupts
98.50 ? 12% +474.6% 566.00 ? 10% interrupts.CPU90.RES:Rescheduling_interrupts
62.50 ? 63% +184.8% 178.00 ? 10% interrupts.CPU90.TLB:TLB_shootdowns
2348 ? 15% +408.3% 11938 ? 13% interrupts.CPU91.CAL:Function_call_interrupts
679.25 ? 20% -63.7% 246.75 ? 37% interrupts.CPU91.NMI:Non-maskable_interrupts
679.25 ? 20% -63.7% 246.75 ? 37% interrupts.CPU91.PMI:Performance_monitoring_interrupts
91.75 ? 15% +524.3% 572.75 ? 10% interrupts.CPU91.RES:Rescheduling_interrupts
2451 ? 15% +379.7% 11760 ? 13% interrupts.CPU92.CAL:Function_call_interrupts
796.75 ? 19% -64.6% 282.25 ? 26% interrupts.CPU92.NMI:Non-maskable_interrupts
796.75 ? 19% -64.6% 282.25 ? 26% interrupts.CPU92.PMI:Performance_monitoring_interrupts
83.00 ? 9% +589.5% 572.25 ? 9% interrupts.CPU92.RES:Rescheduling_interrupts
2211 ? 13% +428.2% 11679 ? 11% interrupts.CPU93.CAL:Function_call_interrupts
754.75 ? 29% -63.3% 277.25 ? 26% interrupts.CPU93.NMI:Non-maskable_interrupts
754.75 ? 29% -63.3% 277.25 ? 26% interrupts.CPU93.PMI:Performance_monitoring_interrupts
91.75 ? 11% +502.7% 553.00 ? 8% interrupts.CPU93.RES:Rescheduling_interrupts
2403 ? 18% +400.0% 12015 ? 14% interrupts.CPU94.CAL:Function_call_interrupts
759.50 ? 41% -62.6% 284.25 ? 26% interrupts.CPU94.NMI:Non-maskable_interrupts
759.50 ? 41% -62.6% 284.25 ? 26% interrupts.CPU94.PMI:Performance_monitoring_interrupts
83.50 ? 14% +563.5% 554.00 ? 10% interrupts.CPU94.RES:Rescheduling_interrupts
2307 ? 18% +402.2% 11586 ? 12% interrupts.CPU95.CAL:Function_call_interrupts
843.50 ? 28% -63.5% 307.50 ? 6% interrupts.CPU95.NMI:Non-maskable_interrupts
843.50 ? 28% -63.5% 307.50 ? 6% interrupts.CPU95.PMI:Performance_monitoring_interrupts
90.50 ? 3% +518.8% 560.00 ? 11% interrupts.CPU95.RES:Rescheduling_interrupts
2106 ? 22% +416.9% 10890 ? 12% interrupts.CPU96.CAL:Function_call_interrupts
746.00 ? 31% -62.1% 283.00 ? 4% interrupts.CPU96.NMI:Non-maskable_interrupts
746.00 ? 31% -62.1% 283.00 ? 4% interrupts.CPU96.PMI:Performance_monitoring_interrupts
82.25 ? 15% +700.9% 658.75 ? 13% interrupts.CPU96.RES:Rescheduling_interrupts
2540 ? 4% +376.4% 12101 ? 12% interrupts.CPU97.CAL:Function_call_interrupts
872.50 ? 42% -65.2% 303.25 ? 4% interrupts.CPU97.NMI:Non-maskable_interrupts
872.50 ? 42% -65.2% 303.25 ? 4% interrupts.CPU97.PMI:Performance_monitoring_interrupts
133.75 ? 22% +345.0% 595.25 ? 6% interrupts.CPU97.RES:Rescheduling_interrupts
50.75 ?109% +315.3% 210.75 ? 33% interrupts.CPU97.TLB:TLB_shootdowns
2180 ? 9% +452.4% 12046 ? 11% interrupts.CPU98.CAL:Function_call_interrupts
670.00 ? 36% -53.8% 309.25 ? 5% interrupts.CPU98.NMI:Non-maskable_interrupts
670.00 ? 36% -53.8% 309.25 ? 5% interrupts.CPU98.PMI:Performance_monitoring_interrupts
120.75 ? 34% +377.0% 576.00 ? 7% interrupts.CPU98.RES:Rescheduling_interrupts
70.00 ? 96% +189.6% 202.75 ? 21% interrupts.CPU98.TLB:TLB_shootdowns
2591 ? 10% +367.7% 12121 ? 11% interrupts.CPU99.CAL:Function_call_interrupts
804.25 ? 33% -59.3% 327.25 ? 3% interrupts.CPU99.NMI:Non-maskable_interrupts
804.25 ? 33% -59.3% 327.25 ? 3% interrupts.CPU99.PMI:Performance_monitoring_interrupts
91.50 ? 17% +554.4% 598.75 ? 11% interrupts.CPU99.RES:Rescheduling_interrupts
138218 ? 11% -59.5% 55977 ? 5% interrupts.NMI:Non-maskable_interrupts
138218 ? 11% -59.5% 55977 ? 5% interrupts.PMI:Performance_monitoring_interrupts
18907 ? 8% +469.8% 107742 ? 3% interrupts.RES:Rescheduling_interrupts
19435 ? 27% +51.0% 29341 ? 9% interrupts.TLB:TLB_shootdowns



vm-scalability.time.user_time

350 +---------------------------------------------------------------------+
| O |
300 |-+ O |
| O O O |
250 |-+ O O O O O O O O O O O O O O |
| O O O O O |
200 |-+ |
| |
150 |-+ |
| |
100 |-+ |
| |
50 |-+ |
|..+.+..+.+..+..+.+..+.+..+..+.+..+.+..+.+..+..+.+..+.+..+..+.+..+.+..|
0 +---------------------------------------------------------------------+


vm-scalability.time.system_time

3500 +--------------------------------------------------------------------+
| +. .+.. .. + .. +..+.+.. .+..|
3000 |-+ .. +.. .+..+ .+.. +.. .+.. .+..+ + + |
| .+.+ + +..+ + + + |
|. + |
2500 |-+ |
| |
2000 |-+ |
| |
1500 |-+ |
| |
| O |
1000 |-+ O O O O O O |
| O O O O O O O O O O O O O O O O O O O |
500 +--------------------------------------------------------------------+


vm-scalability.time.percent_of_cpu_this_job_got

1200 +--------------------------------------------------------------------+
| + +. +..|
1100 |-+ +. .+.+.. .. + .. +..+.+.. + |
1000 |-+ .. +.. .+. .+.. .+.. .+..+.+..+ + + |
| .+.+ + +..+ + + |
900 |.+ |
800 |-+ |
| |
700 |-+ |
600 |-+ |
| |
500 |-+ O |
400 |-+ O O O |
| O O O O O O O O O O O O O O O O O |
300 +--------------------------------------------------------------------+


vm-scalability.time.minor_page_faults

7e+07 +-------------------------------------------------------------------+
| O O O O O O |
6e+07 |-+ O O O O |
| O O O O O O O O |
| O O O O |
5e+07 |-+O O |
| |
4e+07 |-+ |
| |
3e+07 |-+ |
| |
| |
2e+07 |-+ |
|..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..|
1e+07 +-------------------------------------------------------------------+


vm-scalability.time.voluntary_context_switches

8e+07 +-------------------------------------------------------------------+
| O O |
7e+07 |-+ O O O O O O O |
6e+07 |-+ O O O O O O O O O O O |
| O O O O |
5e+07 |-+O |
| O |
4e+07 |-+ |
| |
3e+07 |-+ |
2e+07 |-+ |
| |
1e+07 |-+ |
|..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..+.+..|
0 +-------------------------------------------------------------------+


vm-scalability.throughput

1.1e+06 +-----------------------------------------------------------------+
1e+06 |-+ O O O O O |
| O O O O O |
900000 |-+ O O O O O O |
800000 |-+ O O O O O O O O |
| O O |
700000 |-+ |
600000 |-+ |
500000 |-+ |
| |
400000 |-+ |
300000 |-+ |
| .+.+..+. .+.+..+. .+.. .+..+.+.+..+.+..+.+..+.+..+.|
200000 |.+. +. + +.+..+.+..+ |
100000 +-----------------------------------------------------------------+


vm-scalability.median

5500 +--------------------------------------------------------------------+
5000 |-+ O O O O O O O |
| O O O O |
4500 |-+ O O O O O O O O O |
4000 |-+ O O O O |
| O |
3500 |-+ |
3000 |-+ |
2500 |-+ |
| |
2000 |-+ |
1500 |-+ |
| .+.+..+.+..+.+..+.+.. .+. .+. .+.+..+..+.+..+.+..+.+..+.+..|
1000 |.+ +. +. +..+.+. |
500 +--------------------------------------------------------------------+


vm-scalability.workload

3.5e+08 +-----------------------------------------------------------------+
| |
3e+08 |-+ O O O O O |
| O O O O O |
| O O O O O O O |
2.5e+08 |-+ O O O O O O O |
| O O |
2e+08 |-+ |
| |
1.5e+08 |-+ |
| |
| |
1e+08 |-+ |
|.+..+.+..+.+..+.+..+. .+.. .+.. .+..+.+.+..+.+..+.+..+.+..+.|
5e+07 +-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


Attachments:
(No filename) (157.86 kB)
config-5.10.0-rc3-00003-g25d0c60b0e40 (172.74 kB)
job-script (8.00 kB)
job.yaml (5.45 kB)
reproduce (818.00 B)

2020-12-08 04:29:01

by Davidlohr Bueso

Subject: Re: [PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning

On Fri, 20 Nov 2020, Waiman Long wrote:

>Reader optimistic spinning is helpful when the reader critical section
>is short and there aren't that many readers around. It also improves
>the chance that a reader can get the lock, as writer optimistic spinning
>disproportionately favors writers over readers.
>
>Since commit d3681e269fff ("locking/rwsem: Wake up almost all readers
>in wait queue"), all the waiting readers are woken up so that they can
>all get the read lock and run in parallel. When the number of contending
>readers is large, allowing reader optimistic spinning will likely cause
>reader fragmentation where multiple smaller groups of readers can get
>the read lock in a sequential manner separated by writers. That reduces
>reader parallelism.
>
>One possible way to address that drawback is to limit the number of
>readers (preferably one) that can do optimistic spinning. These readers
>act as representatives of all the waiting readers in the wait queue as
>they will wake up all those waiting readers once they get the lock.
>
>Alternatively, as reader optimistic lock stealing has already enhanced
>fairness to readers, it may be easier to just remove reader optimistic
>spinning and simplify the optimistic spinning code as a result.
>
>Performance measurements (locking throughput kops/s) using a locking
>microbenchmark with 50/50 reader/writer distribution and turbo-boost
>disabled was done on a 2-socket Cascade Lake system (48-core 96-thread)
>to see the impacts of these changes:
>
> 1) Vanilla - 5.10-rc3 kernel
> 2) Before - 5.10-rc3 kernel with previous patches in this series
> 3) limit-rspin - 5.10-rc3 kernel with limited reader spinning patch
> 4) no-rspin - 5.10-rc3 kernel with reader spinning disabled
>
> # of threads CS Load Vanilla Before limit-rspin no-rspin
> ------------ ------- ------- ------ ----------- --------
> 2 1 5,185 5,662 5,214 5,077
> 4 1 5,107 4,983 5,188 4,760
> 8 1 4,782 4,564 4,720 4,628
> 16 1 4,680 4,053 4,567 3,402
> 32 1 4,299 1,115 1,118 1,098
> 64 1 3,218 983 1,001 957
> 96 1 1,938 944 957 930
>
> 2 20 2,008 2,128 2,264 1,665
> 4 20 1,390 1,033 1,046 1,101
> 8 20 1,472 1,155 1,098 1,213
> 16 20 1,332 1,077 1,089 1,122
> 32 20 967 914 917 980
> 64 20 787 874 891 858
> 96 20 730 836 847 844
>
> 2 100 372 356 360 355
> 4 100 492 425 434 392
> 8 100 533 537 529 538
> 16 100 548 572 568 598
> 32 100 499 520 527 537
> 64 100 466 517 526 512
> 96 100 406 497 506 509
>
>The column "CS Load" represents the number of pause instructions issued
>in the locking critical section. A CS load of 1 is extremely short and
>is not likely in real situations. Loads of 20 (moderate) and 100 (long)
>are more realistic.
>
>It can be seen that the previous patches in this series have reduced
>performance in general, except in highly contended cases with moderate
>or long critical sections, where performance improves a bit. This change
>is mostly caused by the "Prevent potential lock starvation" patch, which
>reduces reader optimistic spinning and hence reader fragmentation.
>
>The patch that further limits reader optimistic spinning doesn't seem
>to have much impact on overall performance, as shown in the benchmark
>data.
>
>The patch that disables reader optimistic spinning shows reduced
>performance in lightly loaded cases, but comparable or slightly better
>performance with heavier contention.
>
>This patch just removes reader optimistic spinning for now. As readers
>are not going to do optimistic spinning anymore, we don't need to
>consider if the OSQ is empty or not when doing lock stealing.
>
>Signed-off-by: Waiman Long <[email protected]>

Reviewed-by: Davidlohr Bueso <[email protected]>

2020-12-08 10:10:27

by Peter Zijlstra

Subject: Re: [PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning

On Fri, Nov 20, 2020 at 11:14:16PM -0500, Waiman Long wrote:


> @@ -1032,40 +901,16 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
> *
> * We can take the read lock directly without doing
> * rwsem_optimistic_spin() if the conditions are right.

This comment no longer makes sense..

> - * Also wake up other readers if it is the first reader.
> */
> - if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
> - rwsem_no_spinners(sem)) {
> + if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF))) {
> rwsem_set_reader_owned(sem);
> lockevent_inc(rwsem_rlock_steal);
> - if (rcnt == 1)
> - goto wake_readers;
> - return sem;
> - }
>
> - /*
> - * Save the current read-owner of rwsem, if available, and the
> - * reader nonspinnable bit.
> - */
> - waiter.last_rowner = owner;
> - if (!(waiter.last_rowner & RWSEM_READER_OWNED))
> - waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;
> -
> - if (!rwsem_can_spin_on_owner(sem, RWSEM_RD_NONSPINNABLE))
> - goto queue;
> -
> - /*
> - * Undo read bias from down_read() and do optimistic spinning.
> - */
> - atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
> - adjustment = 0;
> - if (rwsem_optimistic_spin(sem, false)) {

since we're removing the optimistic spinning entirely on the read side.

Also, I was looking at skipping patch #4, which mucks with the reader
wakeup logic, and afaict this removal doesn't really depend on it.

Or am I missing something?


2020-12-08 15:02:01

by Peter Zijlstra

Subject: Re: [PATCH v2 0/5] locking/rwsem: Rework reader optimistic spinning

On Fri, Nov 20, 2020 at 11:14:11PM -0500, Waiman Long wrote:
> Waiman Long (5):
> locking/rwsem: Pass the current atomic count to
> rwsem_down_read_slowpath()
> locking/rwsem: Prevent potential lock starvation
> locking/rwsem: Enable reader optimistic lock stealing
> locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED
> locking/rwsem: Remove reader optimistic spinning

So I've munged the lot onto the other rwsem patches and skipped #4, I've
not even boot tested them (will go do so shortly).

git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core

2020-12-08 15:36:29

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 5/5] locking/rwsem: Remove reader optimistic spinning

On 12/8/20 5:07 AM, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 11:14:16PM -0500, Waiman Long wrote:
>
>
>> @@ -1032,40 +901,16 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, int state, long count)
>> *
>> * We can take the read lock directly without doing
>> * rwsem_optimistic_spin() if the conditions are right.
> This comment no longer makes sense..

You are right. I forgot to take that out.


>> - * Also wake up other readers if it is the first reader.
>> */
>> - if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
>> - rwsem_no_spinners(sem)) {
>> + if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF))) {
>> rwsem_set_reader_owned(sem);
>> lockevent_inc(rwsem_rlock_steal);
>> - if (rcnt == 1)
>> - goto wake_readers;
>> - return sem;
>> - }
>>
>> - /*
>> - * Save the current read-owner of rwsem, if available, and the
>> - * reader nonspinnable bit.
>> - */
>> - waiter.last_rowner = owner;
>> - if (!(waiter.last_rowner & RWSEM_READER_OWNED))
>> - waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;
>> -
>> - if (!rwsem_can_spin_on_owner(sem, RWSEM_RD_NONSPINNABLE))
>> - goto queue;
>> -
>> - /*
>> - * Undo read bias from down_read() and do optimistic spinning.
>> - */
>> - atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
>> - adjustment = 0;
>> - if (rwsem_optimistic_spin(sem, false)) {
> since we're removing the optimistic spinning entirely on the read side.
>
> Also, I was looking at skipping patch #4, which mucks with the reader
> wakeup logic, and afaict this removal doesn't really depend on it.
>
> Or am I missing something?

That is true. Patch 4 isn't essential for this series. So if you are
generally OK with the current patchset, I can send out v3 that removes
patch 4 and makes your suggested change above.

Cheers,
Longman

2020-12-08 16:40:27

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 0/5] locking/rwsem: Rework reader optimistic spinning

On 12/8/20 9:57 AM, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 11:14:11PM -0500, Waiman Long wrote:
>> Waiman Long (5):
>> locking/rwsem: Pass the current atomic count to
>> rwsem_down_read_slowpath()
>> locking/rwsem: Prevent potential lock starvation
>> locking/rwsem: Enable reader optimistic lock stealing
>> locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED
>> locking/rwsem: Remove reader optimistic spinning
> So I've munged the lot onto the other rwsem patches and skipped #4, I've
> not even boot tested them (will go do so shortly).
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
>
I have checked the four patches in your locking/core branch. They look
good to me. Are you planning to push the branch to tip soon so that it
can be ready for the next merge window?

Anyway, thanks for taking my patches.

Cheers,
Longman

2020-12-08 17:06:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v2 0/5] locking/rwsem: Rework reader optimistic spinning

On Tue, Dec 08, 2020 at 11:33:38AM -0500, Waiman Long wrote:
> On 12/8/20 9:57 AM, Peter Zijlstra wrote:
> > On Fri, Nov 20, 2020 at 11:14:11PM -0500, Waiman Long wrote:
> > > Waiman Long (5):
> > > locking/rwsem: Pass the current atomic count to
> > > rwsem_down_read_slowpath()
> > > locking/rwsem: Prevent potential lock starvation
> > > locking/rwsem: Enable reader optimistic lock stealing
> > > locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED
> > > locking/rwsem: Remove reader optimistic spinning
> > So I've munged the lot onto the other rwsem patches and skipped #4, I've
> > not even boot tested them (will go do so shortly).
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
> >
> I have checked the four patches in your locking/core branch. They look good
> to me. Are you planning to push the branch to tip soon so that it can be
> ready for the next merge window?

Yeah, provided the robots don't hate on it more than already reported.

2020-12-08 17:38:06

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 0/5] locking/rwsem: Rework reader optimistic spinning

On 12/8/20 12:02 PM, Peter Zijlstra wrote:
> On Tue, Dec 08, 2020 at 11:33:38AM -0500, Waiman Long wrote:
>> On 12/8/20 9:57 AM, Peter Zijlstra wrote:
>>> On Fri, Nov 20, 2020 at 11:14:11PM -0500, Waiman Long wrote:
>>>> Waiman Long (5):
>>>> locking/rwsem: Pass the current atomic count to
>>>> rwsem_down_read_slowpath()
>>>> locking/rwsem: Prevent potential lock starvation
>>>> locking/rwsem: Enable reader optimistic lock stealing
>>>> locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED
>>>> locking/rwsem: Remove reader optimistic spinning
>>> So I've munged the lot onto the other rwsem patches and skipped #4, I've
>>> not even boot tested them (will go do so shortly).
>>>
>>> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
>>>
>> I have checked the four patches in your locking/core branch. They look good
>> to me. Are you planning to push the branch to tip soon so that it can be
>> ready for the next merge window?
> Yeah, provided the robots don't hate on it more than already reported.
>
Good to know:-)

Cheers,
Longman

Subject: [tip: locking/core] locking/rwsem: Remove reader optimistic spinning

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 617f3ef95177840c77f59c2aec1029d27d5547d6
Gitweb: https://git.kernel.org/tip/617f3ef95177840c77f59c2aec1029d27d5547d6
Author: Waiman Long <[email protected]>
AuthorDate: Fri, 20 Nov 2020 23:14:16 -05:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:48 +01:00

locking/rwsem: Remove reader optimistic spinning

Reader optimistic spinning is helpful when the reader critical section
is short and there aren't that many readers around. It also improves
the chance that a reader can get the lock, as writer optimistic spinning
disproportionately favors writers over readers.

Since commit d3681e269fff ("locking/rwsem: Wake up almost all readers
in wait queue"), all the waiting readers are woken up so that they can
all get the read lock and run in parallel. When the number of contending
readers is large, allowing reader optimistic spinning will likely cause
reader fragmentation where multiple smaller groups of readers can get
the read lock in a sequential manner separated by writers. That reduces
reader parallelism.

One possible way to address that drawback is to limit the number of
readers (preferably one) that can do optimistic spinning. These readers
act as representatives of all the waiting readers in the wait queue as
they will wake up all those waiting readers once they get the lock.

Alternatively, as reader optimistic lock stealing has already enhanced
fairness to readers, it may be easier to just remove reader optimistic
spinning and simplify the optimistic spinning code as a result.

Performance measurements (locking throughput kops/s) using a locking
microbenchmark with 50/50 reader/writer distribution and turbo-boost
disabled was done on a 2-socket Cascade Lake system (48-core 96-thread)
to see the impacts of these changes:

1) Vanilla - 5.10-rc3 kernel
2) Before - 5.10-rc3 kernel with previous patches in this series
3) limit-rspin - 5.10-rc3 kernel with limited reader spinning patch
4) no-rspin - 5.10-rc3 kernel with reader spinning disabled

# of threads CS Load Vanilla Before limit-rspin no-rspin
------------ ------- ------- ------ ----------- --------
2 1 5,185 5,662 5,214 5,077
4 1 5,107 4,983 5,188 4,760
8 1 4,782 4,564 4,720 4,628
16 1 4,680 4,053 4,567 3,402
32 1 4,299 1,115 1,118 1,098
64 1 3,218 983 1,001 957
96 1 1,938 944 957 930

2 20 2,008 2,128 2,264 1,665
4 20 1,390 1,033 1,046 1,101
8 20 1,472 1,155 1,098 1,213
16 20 1,332 1,077 1,089 1,122
32 20 967 914 917 980
64 20 787 874 891 858
96 20 730 836 847 844

2 100 372 356 360 355
4 100 492 425 434 392
8 100 533 537 529 538
16 100 548 572 568 598
32 100 499 520 527 537
64 100 466 517 526 512
96 100 406 497 506 509

The column "CS Load" represents the number of pause instructions issued
in the locking critical section. A CS load of 1 is extremely short and
is not likely in real situations. Loads of 20 (moderate) and 100 (long)
are more realistic.

It can be seen that the previous patches in this series have reduced
performance in general, except in highly contended cases with moderate
or long critical sections where performance improves a bit. This change
is mostly caused by the "Prevent potential lock starvation" patch, which
reduces reader optimistic spinning and hence reduces reader fragmentation.

The patch that further limits reader optimistic spinning doesn't seem
to have much impact on overall performance as shown in the benchmark
data.

The patch that disables reader optimistic spinning shows reduced
performance in lightly loaded cases, but comparable or slightly better
performance with heavier contention.

This patch just removes reader optimistic spinning for now. As readers
are not going to do optimistic spinning anymore, we don't need to
consider if the OSQ is empty or not when doing lock stealing.

Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/locking/lock_events_list.h | 5 +-
kernel/locking/rwsem.c | 284 ++++-------------------------
2 files changed, 49 insertions(+), 240 deletions(-)

diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 270a0d3..97fb6f3 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -56,12 +56,9 @@ LOCK_EVENT(rwsem_sleep_reader) /* # of reader sleeps */
LOCK_EVENT(rwsem_sleep_writer) /* # of writer sleeps */
LOCK_EVENT(rwsem_wake_reader) /* # of reader wakeups */
LOCK_EVENT(rwsem_wake_writer) /* # of writer wakeups */
-LOCK_EVENT(rwsem_opt_rlock) /* # of opt-acquired read locks */
-LOCK_EVENT(rwsem_opt_wlock) /* # of opt-acquired write locks */
+LOCK_EVENT(rwsem_opt_lock) /* # of opt-acquired write locks */
LOCK_EVENT(rwsem_opt_fail) /* # of failed optspins */
LOCK_EVENT(rwsem_opt_nospin) /* # of disabled optspins */
-LOCK_EVENT(rwsem_opt_norspin) /* # of disabled reader-only optspins */
-LOCK_EVENT(rwsem_opt_rlock2) /* # of opt-acquired 2ndary read locks */
LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index ba5e239..ba67600 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -31,19 +31,13 @@
#include "lock_events.h"

/*
- * The least significant 3 bits of the owner value has the following
+ * The least significant 2 bits of the owner value has the following
* meanings when set.
* - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
- * - Bit 1: RWSEM_RD_NONSPINNABLE - Readers cannot spin on this lock.
- * - Bit 2: RWSEM_WR_NONSPINNABLE - Writers cannot spin on this lock.
+ * - Bit 1: RWSEM_NONSPINNABLE - Cannot spin on a reader-owned lock
*
- * When the rwsem is either owned by an anonymous writer, or it is
- * reader-owned, but a spinning writer has timed out, both nonspinnable
- * bits will be set to disable optimistic spinning by readers and writers.
- * In the later case, the last unlocking reader should then check the
- * writer nonspinnable bit and clear it only to give writers preference
- * to acquire the lock via optimistic spinning, but not readers. Similar
- * action is also done in the reader slowpath.
+ * When the rwsem is reader-owned and a spinning writer has timed out,
+ * the nonspinnable bit will be set to disable optimistic spinning.

* When a writer acquires a rwsem, it puts its task_struct pointer
* into the owner field. It is cleared after an unlock.
@@ -59,46 +53,14 @@
* is involved. Ideally we would like to track all the readers that own
* a rwsem, but the overhead is simply too big.
*
- * Reader optimistic spinning is helpful when the reader critical section
- * is short and there aren't that many readers around. It makes readers
- * relatively more preferred than writers. When a writer times out spinning
- * on a reader-owned lock and set the nospinnable bits, there are two main
- * reasons for that.
- *
- * 1) The reader critical section is long, perhaps the task sleeps after
- * acquiring the read lock.
- * 2) There are just too many readers contending the lock causing it to
- * take a while to service all of them.
- *
- * In the former case, long reader critical section will impede the progress
- * of writers which is usually more important for system performance. In
- * the later case, reader optimistic spinning tends to make the reader
- * groups that contain readers that acquire the lock together smaller
- * leading to more of them. That may hurt performance in some cases. In
- * other words, the setting of nonspinnable bits indicates that reader
- * optimistic spinning may not be helpful for those workloads that cause
- * it.
- *
- * Therefore, any writers that had observed the setting of the writer
- * nonspinnable bit for a given rwsem after they fail to acquire the lock
- * via optimistic spinning will set the reader nonspinnable bit once they
- * acquire the write lock. Similarly, readers that observe the setting
- * of reader nonspinnable bit at slowpath entry will set the reader
- * nonspinnable bits when they acquire the read lock via the wakeup path.
- *
- * Once the reader nonspinnable bit is on, it will only be reset when
- * a writer is able to acquire the rwsem in the fast path or somehow a
- * reader or writer in the slowpath doesn't observe the nonspinable bit.
- *
- * This is to discourage reader optmistic spinning on that particular
- * rwsem and make writers more preferred. This adaptive disabling of reader
- * optimistic spinning will alleviate the negative side effect of this
- * feature.
+ * A fast path reader optimistic lock stealing is supported when the rwsem
+ * is previously owned by a writer and the following conditions are met:
+ * - OSQ is empty
+ * - rwsem is not currently writer owned
+ * - the handoff isn't set.
*/
#define RWSEM_READER_OWNED (1UL << 0)
-#define RWSEM_RD_NONSPINNABLE (1UL << 1)
-#define RWSEM_WR_NONSPINNABLE (1UL << 2)
-#define RWSEM_NONSPINNABLE (RWSEM_RD_NONSPINNABLE | RWSEM_WR_NONSPINNABLE)
+#define RWSEM_NONSPINNABLE (1UL << 1)
#define RWSEM_OWNER_FLAGS_MASK (RWSEM_READER_OWNED | RWSEM_NONSPINNABLE)

#ifdef CONFIG_DEBUG_RWSEMS
@@ -203,7 +165,7 @@ static inline void __rwsem_set_reader_owned(struct rw_semaphore *sem,
struct task_struct *owner)
{
unsigned long val = (unsigned long)owner | RWSEM_READER_OWNED |
- (atomic_long_read(&sem->owner) & RWSEM_RD_NONSPINNABLE);
+ (atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE);

atomic_long_set(&sem->owner, val);
}
@@ -372,7 +334,6 @@ struct rwsem_waiter {
struct task_struct *task;
enum rwsem_waiter_type type;
unsigned long timeout;
- unsigned long last_rowner;
};
#define rwsem_first_waiter(sem) \
list_first_entry(&sem->wait_list, struct rwsem_waiter, list)
@@ -486,10 +447,6 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
* the reader is copied over.
*/
owner = waiter->task;
- if (waiter->last_rowner & RWSEM_RD_NONSPINNABLE) {
- owner = (void *)((unsigned long)owner | RWSEM_RD_NONSPINNABLE);
- lockevent_inc(rwsem_opt_norspin);
- }
__rwsem_set_reader_owned(sem, owner);
}

@@ -621,30 +578,6 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
- * Try to acquire read lock before the reader is put on wait queue.
- * Lock acquisition isn't allowed if the rwsem is locked or a writer handoff
- * is ongoing.
- */
-static inline bool rwsem_try_read_lock_unqueued(struct rw_semaphore *sem)
-{
- long count = atomic_long_read(&sem->count);
-
- if (count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))
- return false;
-
- count = atomic_long_fetch_add_acquire(RWSEM_READER_BIAS, &sem->count);
- if (!(count & (RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
- rwsem_set_reader_owned(sem);
- lockevent_inc(rwsem_opt_rlock);
- return true;
- }
-
- /* Back out the change */
- atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
- return false;
-}
-
-/*
* Try to acquire write lock before the writer has been put on wait queue.
*/
static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
@@ -655,7 +588,7 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
count | RWSEM_WRITER_LOCKED)) {
rwsem_set_owner(sem);
- lockevent_inc(rwsem_opt_wlock);
+ lockevent_inc(rwsem_opt_lock);
return true;
}
}
@@ -671,8 +604,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}

-static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
- unsigned long nonspinnable)
+static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
{
struct task_struct *owner;
unsigned long flags;
@@ -689,7 +621,7 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
/*
* Don't check the read-owner as the entry may be stale.
*/
- if ((flags & nonspinnable) ||
+ if ((flags & RWSEM_NONSPINNABLE) ||
(owner && !(flags & RWSEM_READER_OWNED) && !owner_on_cpu(owner)))
ret = false;
rcu_read_unlock();
@@ -719,9 +651,9 @@ enum owner_state {
#define OWNER_SPINNABLE (OWNER_NULL | OWNER_WRITER | OWNER_READER)

static inline enum owner_state
-rwsem_owner_state(struct task_struct *owner, unsigned long flags, unsigned long nonspinnable)
+rwsem_owner_state(struct task_struct *owner, unsigned long flags)
{
- if (flags & nonspinnable)
+ if (flags & RWSEM_NONSPINNABLE)
return OWNER_NONSPINNABLE;

if (flags & RWSEM_READER_OWNED)
@@ -731,14 +663,14 @@ rwsem_owner_state(struct task_struct *owner, unsigned long flags, unsigned long
}

static noinline enum owner_state
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem)
{
struct task_struct *new, *owner;
unsigned long flags, new_flags;
enum owner_state state;

owner = rwsem_owner_flags(sem, &flags);
- state = rwsem_owner_state(owner, flags, nonspinnable);
+ state = rwsem_owner_state(owner, flags);
if (state != OWNER_WRITER)
return state;

@@ -752,7 +684,7 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
*/
new = rwsem_owner_flags(sem, &new_flags);
if ((new != owner) || (new_flags != flags)) {
- state = rwsem_owner_state(new, new_flags, nonspinnable);
+ state = rwsem_owner_state(new, new_flags);
break;
}

@@ -801,14 +733,12 @@ static inline u64 rwsem_rspin_threshold(struct rw_semaphore *sem)
return sched_clock() + delta;
}

-static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
+static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
bool taken = false;
int prev_owner_state = OWNER_NULL;
int loop = 0;
u64 rspin_threshold = 0;
- unsigned long nonspinnable = wlock ? RWSEM_WR_NONSPINNABLE
- : RWSEM_RD_NONSPINNABLE;

preempt_disable();

@@ -825,15 +755,14 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
for (;;) {
enum owner_state owner_state;

- owner_state = rwsem_spin_on_owner(sem, nonspinnable);
+ owner_state = rwsem_spin_on_owner(sem);
if (!(owner_state & OWNER_SPINNABLE))
break;

/*
* Try to acquire the lock
*/
- taken = wlock ? rwsem_try_write_lock_unqueued(sem)
- : rwsem_try_read_lock_unqueued(sem);
+ taken = rwsem_try_write_lock_unqueued(sem);

if (taken)
break;
@@ -841,7 +770,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
/*
* Time-based reader-owned rwsem optimistic spinning
*/
- if (wlock && (owner_state == OWNER_READER)) {
+ if (owner_state == OWNER_READER) {
/*
* Re-initialize rspin_threshold every time when
* the owner state changes from non-reader to reader.
@@ -850,7 +779,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
* the beginning of the 2nd reader phase.
*/
if (prev_owner_state != OWNER_READER) {
- if (rwsem_test_oflags(sem, nonspinnable))
+ if (rwsem_test_oflags(sem, RWSEM_NONSPINNABLE))
break;
rspin_threshold = rwsem_rspin_threshold(sem);
loop = 0;
@@ -926,89 +855,30 @@ done:
}

/*
- * Clear the owner's RWSEM_WR_NONSPINNABLE bit if it is set. This should
+ * Clear the owner's RWSEM_NONSPINNABLE bit if it is set. This should
* only be called when the reader count reaches 0.
- *
- * This give writers better chance to acquire the rwsem first before
- * readers when the rwsem was being held by readers for a relatively long
- * period of time. Race can happen that an optimistic spinner may have
- * just stolen the rwsem and set the owner, but just clearing the
- * RWSEM_WR_NONSPINNABLE bit will do no harm anyway.
- */
-static inline void clear_wr_nonspinnable(struct rw_semaphore *sem)
-{
- if (rwsem_test_oflags(sem, RWSEM_WR_NONSPINNABLE))
- atomic_long_andnot(RWSEM_WR_NONSPINNABLE, &sem->owner);
-}
-
-/*
- * This function is called when the reader fails to acquire the lock via
- * optimistic spinning. In this case we will still attempt to do a trylock
- * when comparing the rwsem state right now with the state when entering
- * the slowpath indicates that the reader is still in a valid reader phase.
- * This happens when the following conditions are true:
- *
- * 1) The lock is currently reader owned, and
- * 2) The lock is previously not reader-owned or the last read owner changes.
- *
- * In the former case, we have transitioned from a writer phase to a
- * reader-phase while spinning. In the latter case, it means the reader
- * phase hasn't ended when we entered the optimistic spinning loop. In
- * both cases, the reader is eligible to acquire the lock. This is the
- * secondary path where a read lock is acquired optimistically.
- *
- * The reader non-spinnable bit wasn't set at time of entry or it will
- * not be here at all.
*/
-static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
- unsigned long last_rowner)
+static inline void clear_nonspinnable(struct rw_semaphore *sem)
{
- unsigned long owner = atomic_long_read(&sem->owner);
-
- if (!(owner & RWSEM_READER_OWNED))
- return false;
-
- if (((owner ^ last_rowner) & ~RWSEM_OWNER_FLAGS_MASK) &&
- rwsem_try_read_lock_unqueued(sem)) {
- lockevent_inc(rwsem_opt_rlock2);
- lockevent_add(rwsem_opt_fail, -1);
- return true;
- }
- return false;
-}
-
-static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
-{
- return !osq_is_locked(&sem->osq);
+ if (rwsem_test_oflags(sem, RWSEM_NONSPINNABLE))
+ atomic_long_andnot(RWSEM_NONSPINNABLE, &sem->owner);
}

#else
-static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
- unsigned long nonspinnable)
+static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
{
return false;
}

-static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem, bool wlock)
+static inline bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
return false;
}

-static inline void clear_wr_nonspinnable(struct rw_semaphore *sem) { }
-
-static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
- unsigned long last_rowner)
-{
- return false;
-}
-
-static inline bool rwsem_no_spinners(sem)
-{
- return false;
-}
+static inline void clear_nonspinnable(struct rw_semaphore *sem) { }

static inline int
-rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
+rwsem_spin_on_owner(struct rw_semaphore *sem)
{
return 0;
}
@@ -1021,7 +891,7 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
static struct rw_semaphore __sched *
rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)
{
- long owner, adjustment = -RWSEM_READER_BIAS;
+ long adjustment = -RWSEM_READER_BIAS;
long rcnt = (count >> RWSEM_READER_SHIFT);
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
@@ -1029,54 +899,25 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)

/*
* To prevent a constant stream of readers from starving a sleeping
- * waiter, don't attempt optimistic spinning if the lock is currently
- * owned by readers.
+ * waiter, don't attempt optimistic lock stealing if the lock is
+ * currently owned by readers.
*/
- owner = atomic_long_read(&sem->owner);
- if ((owner & RWSEM_READER_OWNED) && (rcnt > 1) &&
- !(count & RWSEM_WRITER_LOCKED))
+ if ((atomic_long_read(&sem->owner) & RWSEM_READER_OWNED) &&
+ (rcnt > 1) && !(count & RWSEM_WRITER_LOCKED))
goto queue;

/*
- * Reader optimistic lock stealing
- *
- * We can take the read lock directly without doing
- * rwsem_optimistic_spin() if the conditions are right.
- * Also wake up other readers if it is the first reader.
+ * Reader optimistic lock stealing.
*/
- if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
- rwsem_no_spinners(sem)) {
+ if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF))) {
rwsem_set_reader_owned(sem);
lockevent_inc(rwsem_rlock_steal);
- if (rcnt == 1)
- goto wake_readers;
- return sem;
- }

- /*
- * Save the current read-owner of rwsem, if available, and the
- * reader nonspinnable bit.
- */
- waiter.last_rowner = owner;
- if (!(waiter.last_rowner & RWSEM_READER_OWNED))
- waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;
-
- if (!rwsem_can_spin_on_owner(sem, RWSEM_RD_NONSPINNABLE))
- goto queue;
-
- /*
- * Undo read bias from down_read() and do optimistic spinning.
- */
- atomic_long_add(-RWSEM_READER_BIAS, &sem->count);
- adjustment = 0;
- if (rwsem_optimistic_spin(sem, false)) {
- /* rwsem_optimistic_spin() implies ACQUIRE on success */
/*
- * Wake up other readers in the wait list if the front
- * waiter is a reader.
+ * Wake up other readers in the wait queue if it is
+ * the first reader.
*/
-wake_readers:
- if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
+ if ((rcnt == 1) && (count & RWSEM_FLAG_WAITERS)) {
raw_spin_lock_irq(&sem->wait_lock);
if (!list_empty(&sem->wait_list))
rwsem_mark_wake(sem, RWSEM_WAKE_READ_OWNED,
@@ -1085,9 +926,6 @@ wake_readers:
wake_up_q(&wake_q);
}
return sem;
- } else if (rwsem_reader_phase_trylock(sem, waiter.last_rowner)) {
- /* rwsem_reader_phase_trylock() implies ACQUIRE on success */
- return sem;
}

queue:
@@ -1103,7 +941,7 @@ queue:
* exit the slowpath and return immediately as its
* RWSEM_READER_BIAS has already been set in the count.
*/
- if (adjustment && !(atomic_long_read(&sem->count) &
+ if (!(atomic_long_read(&sem->count) &
(RWSEM_WRITER_MASK | RWSEM_FLAG_HANDOFF))) {
/* Provide lock ACQUIRE */
smp_acquire__after_ctrl_dep();
@@ -1117,10 +955,7 @@ queue:
list_add_tail(&waiter.list, &sem->wait_list);

/* we're now waiting on the lock, but no longer actively locking */
- if (adjustment)
- count = atomic_long_add_return(adjustment, &sem->count);
- else
- count = atomic_long_read(&sem->count);
+ count = atomic_long_add_return(adjustment, &sem->count);

/*
* If there are no active locks, wake the front queued process(es).
@@ -1129,7 +964,7 @@ queue:
* wake our own waiter to join the existing active readers !
*/
if (!(count & RWSEM_LOCK_MASK)) {
- clear_wr_nonspinnable(sem);
+ clear_nonspinnable(sem);
wake = true;
}
if (wake || (!(count & RWSEM_WRITER_MASK) &&
@@ -1175,46 +1010,24 @@ out_nolock:
}

/*
- * This function is called by the a write lock owner. So the owner value
- * won't get changed by others.
- */
-static inline void rwsem_disable_reader_optspin(struct rw_semaphore *sem,
- bool disable)
-{
- if (unlikely(disable)) {
- atomic_long_or(RWSEM_RD_NONSPINNABLE, &sem->owner);
- lockevent_inc(rwsem_opt_norspin);
- }
-}
-
-/*
* Wait until we successfully acquire the write lock
*/
static struct rw_semaphore *
rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
{
long count;
- bool disable_rspin;
enum writer_wait_state wstate;
struct rwsem_waiter waiter;
struct rw_semaphore *ret = sem;
DEFINE_WAKE_Q(wake_q);

/* do optimistic spinning and steal lock if possible */
- if (rwsem_can_spin_on_owner(sem, RWSEM_WR_NONSPINNABLE) &&
- rwsem_optimistic_spin(sem, true)) {
+ if (rwsem_can_spin_on_owner(sem) && rwsem_optimistic_spin(sem)) {
/* rwsem_optimistic_spin() implies ACQUIRE on success */
return sem;
}

/*
- * Disable reader optimistic spinning for this rwsem after
- * acquiring the write lock when the setting of the nonspinnable
- * bits are observed.
- */
- disable_rspin = atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE;
-
- /*
* Optimistic spinning failed, proceed to the slowpath
* and block until we can acquire the sem.
*/
@@ -1282,7 +1095,7 @@ wait:
* without sleeping.
*/
if (wstate == WRITER_HANDOFF &&
- rwsem_spin_on_owner(sem, RWSEM_NONSPINNABLE) == OWNER_NULL)
+ rwsem_spin_on_owner(sem) == OWNER_NULL)
goto trylock_again;

/* Block until there are no active lockers. */
@@ -1324,7 +1137,6 @@ trylock_again:
}
__set_current_state(TASK_RUNNING);
list_del(&waiter.list);
- rwsem_disable_reader_optspin(sem, disable_rspin);
raw_spin_unlock_irq(&sem->wait_lock);
lockevent_inc(rwsem_wlock);

@@ -1484,7 +1296,7 @@ static inline void __up_read(struct rw_semaphore *sem)
DEBUG_RWSEMS_WARN_ON(tmp < 0, sem);
if (unlikely((tmp & (RWSEM_LOCK_MASK|RWSEM_FLAG_WAITERS)) ==
RWSEM_FLAG_WAITERS)) {
- clear_wr_nonspinnable(sem);
+ clear_nonspinnable(sem);
rwsem_wake(sem, tmp);
}
}

Subject: [tip: locking/core] locking/rwsem: Prevent potential lock starvation

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 2f06f702925b512a95b95dca3855549c047eef58
Gitweb: https://git.kernel.org/tip/2f06f702925b512a95b95dca3855549c047eef58
Author: Waiman Long <[email protected]>
AuthorDate: Fri, 20 Nov 2020 23:14:13 -05:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:48 +01:00

locking/rwsem: Prevent potential lock starvation

The lock handoff bit is added in commit 4f23dbc1e657 ("locking/rwsem:
Implement lock handoff to prevent lock starvation") to avoid lock
starvation. However, allowing readers to do optimistic spinning does
introduce an unlikely scenario where lock starvation can happen.

The lock handoff bit may only be set when a waiter is being woken up.
In the case of reader unlock, wakeup happens only when the reader count
reaches 0. If there is a continuous stream of incoming readers acquiring
read lock via optimistic spinning, it is possible that the reader count
may never reach 0 and so the handoff bit will never be asserted.

One way to prevent this scenario from happening is to disallow optimistic
spinning if the rwsem is currently owned by readers. If the previous
or current owner is a writer, optimistic spinning will be allowed.

If the previous owner is a reader but the reader count has reached 0
before, a wakeup should have been issued. So the handoff mechanism
will kick in to prevent lock starvation. As a result, it should
be OK to do optimistic spinning in this case.

This patch may have some impact on reader performance as it reduces
reader optimistic spinning, especially if the lock critical sections
are short and the number of contending readers is small.

Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/locking/rwsem.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 5768b90..c055f4b 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1010,16 +1010,27 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
static struct rw_semaphore __sched *
rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)
{
- long adjustment = -RWSEM_READER_BIAS;
+ long owner, adjustment = -RWSEM_READER_BIAS;
+ long rcnt = (count >> RWSEM_READER_SHIFT);
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
bool wake = false;

/*
+ * To prevent a constant stream of readers from starving a sleeping
+ * waiter, don't attempt optimistic spinning if the lock is currently
+ * owned by readers.
+ */
+ owner = atomic_long_read(&sem->owner);
+ if ((owner & RWSEM_READER_OWNED) && (rcnt > 1) &&
+ !(count & RWSEM_WRITER_LOCKED))
+ goto queue;
+
+ /*
* Save the current read-owner of rwsem, if available, and the
* reader nonspinnable bit.
*/
- waiter.last_rowner = atomic_long_read(&sem->owner);
+ waiter.last_rowner = owner;
if (!(waiter.last_rowner & RWSEM_READER_OWNED))
waiter.last_rowner &= RWSEM_RD_NONSPINNABLE;

Subject: [tip: locking/core] locking/rwsem: Enable reader optimistic lock stealing

The following commit has been merged into the locking/core branch of tip:

Commit-ID: 1a728dff855a318bb58bcc1259b1826a7ad9f0bd
Gitweb: https://git.kernel.org/tip/1a728dff855a318bb58bcc1259b1826a7ad9f0bd
Author: Waiman Long <[email protected]>
AuthorDate: Fri, 20 Nov 2020 23:14:14 -05:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:48 +01:00

locking/rwsem: Enable reader optimistic lock stealing

If the optimistic spinning queue is empty and the rwsem does not have
the handoff or write-lock bits set, it is actually not necessary to
call rwsem_optimistic_spin() to spin on it. Instead, it can steal the
lock directly as its reader bias is in the count already. If it is
the first reader in this state, it will try to wake up other readers
in the wait queue.

With this patch applied, the following were the lock event counts
after rebooting a 2-socket system and a "make -j96" kernel rebuild.

rwsem_opt_rlock=4437
rwsem_rlock=29
rwsem_rlock_steal=19

So lock stealing represents about 0.4% of all the read locks acquired
in the slow path.

Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/locking/lock_events_list.h | 1 +
kernel/locking/rwsem.c | 28 ++++++++++++++++++++++++++++
2 files changed, 29 insertions(+)

diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 239039d..270a0d3 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -63,6 +63,7 @@ LOCK_EVENT(rwsem_opt_nospin) /* # of disabled optspins */
LOCK_EVENT(rwsem_opt_norspin) /* # of disabled reader-only optspins */
LOCK_EVENT(rwsem_opt_rlock2) /* # of opt-acquired 2ndary read locks */
LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
+LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
LOCK_EVENT(rwsem_rlock_fail) /* # of failed read lock acquisitions */
LOCK_EVENT(rwsem_rlock_handoff) /* # of read lock handoffs */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c055f4b..ba5e239 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -976,6 +976,12 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
}
return false;
}
+
+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+ return !osq_is_locked(&sem->osq);
+}
+
#else
static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem,
unsigned long nonspinnable)
@@ -996,6 +1002,11 @@ static inline bool rwsem_reader_phase_trylock(struct rw_semaphore *sem,
return false;
}

+static inline bool rwsem_no_spinners(struct rw_semaphore *sem)
+{
+ return false;
+}
+
static inline int
rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
{
@@ -1027,6 +1038,22 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)
goto queue;

/*
+ * Reader optimistic lock stealing
+ *
+ * We can take the read lock directly without doing
+ * rwsem_optimistic_spin() if the conditions are right.
+ * Also wake up other readers if it is the first reader.
+ */
+ if (!(count & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF)) &&
+ rwsem_no_spinners(sem)) {
+ rwsem_set_reader_owned(sem);
+ lockevent_inc(rwsem_rlock_steal);
+ if (rcnt == 1)
+ goto wake_readers;
+ return sem;
+ }
+
+ /*
* Save the current read-owner of rwsem, if available, and the
* reader nonspinnable bit.
*/
@@ -1048,6 +1075,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)
* Wake up other readers in the wait list if the front
* waiter is a reader.
*/
+wake_readers:
if ((atomic_long_read(&sem->count) & RWSEM_FLAG_WAITERS)) {
raw_spin_lock_irq(&sem->wait_lock);
if (!list_empty(&sem->wait_list))

Subject: [tip: locking/core] locking/rwsem: Pass the current atomic count to rwsem_down_read_slowpath()

The following commit has been merged into the locking/core branch of tip:

Commit-ID: c8fe8b0564388f41147326f31e4587171aacccd4
Gitweb: https://git.kernel.org/tip/c8fe8b0564388f41147326f31e4587171aacccd4
Author: Waiman Long <[email protected]>
AuthorDate: Fri, 20 Nov 2020 23:14:12 -05:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 09 Dec 2020 17:08:47 +01:00

locking/rwsem: Pass the current atomic count to rwsem_down_read_slowpath()

The atomic count value right after the reader count increment can be
useful to determine the rwsem state at trylock time. So the count value
is passed down to rwsem_down_read_slowpath() to be used when appropriate.

Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/locking/rwsem.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 67ae366..5768b90 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -270,14 +270,14 @@ static inline void rwsem_set_nonspinnable(struct rw_semaphore *sem)
owner | RWSEM_NONSPINNABLE));
}

-static inline bool rwsem_read_trylock(struct rw_semaphore *sem)
+static inline bool rwsem_read_trylock(struct rw_semaphore *sem, long *cntp)
{
- long cnt = atomic_long_add_return_acquire(RWSEM_READER_BIAS, &sem->count);
+ *cntp = atomic_long_add_return_acquire(RWSEM_READER_BIAS, &sem->count);

- if (WARN_ON_ONCE(cnt < 0))
+ if (WARN_ON_ONCE(*cntp < 0))
rwsem_set_nonspinnable(sem);

- if (!(cnt & RWSEM_READ_FAILED_MASK)) {
+ if (!(*cntp & RWSEM_READ_FAILED_MASK)) {
rwsem_set_reader_owned(sem);
return true;
}
@@ -1008,9 +1008,9 @@ rwsem_spin_on_owner(struct rw_semaphore *sem, unsigned long nonspinnable)
* Wait for the read lock to be granted
*/
static struct rw_semaphore __sched *
-rwsem_down_read_slowpath(struct rw_semaphore *sem, int state)
+rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, int state)
{
- long count, adjustment = -RWSEM_READER_BIAS;
+ long adjustment = -RWSEM_READER_BIAS;
struct rwsem_waiter waiter;
DEFINE_WAKE_Q(wake_q);
bool wake = false;
@@ -1356,8 +1356,10 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
*/
static inline int __down_read_common(struct rw_semaphore *sem, int state)
{
- if (!rwsem_read_trylock(sem)) {
- if (IS_ERR(rwsem_down_read_slowpath(sem, state)))
+ long count;
+
+ if (!rwsem_read_trylock(sem, &count)) {
+ if (IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
return -EINTR;
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
}