2019-11-07 17:54:00

by Alex Kogan

Subject: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

In CNA, spinning threads are organized in two queues, a main queue for
threads running on the same node as the current lock holder, and a
secondary queue for threads running on other nodes. After acquiring the
MCS lock and before acquiring the spinlock, the lock holder scans the
main queue looking for a thread running on the same node (pre-scan). If
found (call it thread T), all threads in the main queue between the
current lock holder and T are moved to the end of the secondary queue.
If no such T is found, we make another scan of the main queue when
unlocking the MCS lock (post-scan), starting at the position where the
pre-scan stopped. If both scans fail to find such a T, the MCS lock is
passed to the first thread in the secondary queue. If the secondary queue
is empty, the lock is passed to the next thread in the main queue.
For more details, see https://arxiv.org/abs/1810.05600.

Note that this variant of CNA may introduce starvation by continuously
passing the lock to threads running on the same node. This issue
will be addressed later in the series.
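
The queue manipulation described above can be sketched in simplified
user-space C. This is only an illustration of the pre-scan/splice step
under assumed, hypothetical names (struct waiter, struct sec_queue,
scan_main_queue); the kernel implementation keeps the secondary queue as
a circular list whose tail is encoded in the MCS node's @locked field,
and uses atomic operations throughout:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of a CNA waiter (hypothetical; the real kernel node
 * also carries an encoded tail and lives in per-CPU qnode storage). */
struct waiter {
	struct waiter *next;
	int numa_node;
};

/* Secondary queue modeled as a plain head/tail list for clarity. */
struct sec_queue {
	struct waiter *head, *tail;
};

/*
 * Pre-scan step: starting at holder->next, find the first waiter on the
 * holder's NUMA node. Any waiters skipped over are spliced onto the
 * secondary queue. Returns the same-node waiter, or NULL if none exists
 * (the kernel returns an encoded tail instead, so the post-scan can
 * resume where this scan stopped).
 */
static struct waiter *scan_main_queue(struct waiter *holder,
				      struct sec_queue *sec)
{
	struct waiter *last = holder, *w = holder->next;

	while (w && w->numa_node != holder->numa_node) {
		last = w;
		w = w->next;
	}
	if (!w)
		return NULL;		/* no same-node successor */

	if (last != holder) {		/* skipped waiters: splice them */
		struct waiter *first = holder->next;

		holder->next = w;	/* remove [first, last] from main */
		last->next = NULL;
		if (sec->tail)
			sec->tail->next = first;
		else
			sec->head = first;
		sec->tail = last;
	}
	return w;
}
```

With a main queue A(node 1) -> B(node 0) -> C(node 2) -> T(node 1) and A
holding the lock, the scan returns T and leaves B and C on the secondary
queue, matching the diagrams in qspinlock_cna.h below.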

Enabling CNA is controlled via a new configuration option
(NUMA_AWARE_SPINLOCKS). By default, the CNA variant is patched in at
boot time only if we run on a multi-node machine in a native environment
and the new config is enabled. (For the time being, the patching requires
CONFIG_PARAVIRT_SPINLOCKS to be enabled as well. However, this should be
resolved once static_call() is available.) This default behavior can be
overridden with the new kernel boot command-line option
"numa_spinlock=on/off" (default is "auto").

Signed-off-by: Alex Kogan <[email protected]>
Reviewed-by: Steve Sistare <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 10 +
arch/x86/Kconfig | 19 ++
arch/x86/include/asm/qspinlock.h | 4 +
arch/x86/kernel/alternative.c | 43 ++++
kernel/locking/mcs_spinlock.h | 2 +-
kernel/locking/qspinlock.c | 34 +++-
kernel/locking/qspinlock_cna.h | 260 ++++++++++++++++++++++++
7 files changed, 367 insertions(+), 5 deletions(-)
create mode 100644 kernel/locking/qspinlock_cna.h

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f8881e..a2fd0c669dba 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3148,6 +3148,16 @@

nox2apic [X86-64,APIC] Do not enable x2APIC mode.

+ numa_spinlock= [NUMA, PV_OPS] Select the NUMA-aware variant
+ of spinlock. The options are:
+ auto - Enable this variant if running on a multi-node
+ machine in native environment.
+ on - Unconditionally enable this variant.
+ off - Unconditionally disable this variant.
+
+ Not specifying this option is equivalent to
+ numa_spinlock=auto.
+
cpu0_hotplug [X86] Turn on CPU0 hotplug feature when
CONFIG_BOOTPARAM_HOTPLUG_CPU0 is off.
Some features depend on CPU0. Known dependencies are:
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d6e1faa28c58..1d480f190def 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1573,6 +1573,25 @@ config NUMA

Otherwise, you should say N.

+config NUMA_AWARE_SPINLOCKS
+ bool "NUMA-aware spinlocks"
+ depends on NUMA
+ depends on QUEUED_SPINLOCKS
+ # For now, we depend on PARAVIRT_SPINLOCKS to make the patching work.
+ # This is awkward, but hopefully would be resolved once static_call()
+ # is available.
+ depends on PARAVIRT_SPINLOCKS
+ default y
+ help
+ Introduce NUMA (Non Uniform Memory Access) awareness into
+ the slow path of spinlocks.
+
+ In this variant of qspinlock, the kernel will try to keep the lock
+ on the same node, thus reducing the number of remote cache misses,
+ while trading some of the short term fairness for better performance.
+
+ Say N if you want absolute first come first serve fairness.
+
config AMD_NUMA
def_bool y
prompt "Old style AMD Opteron NUMA detection"
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 444d6fd9a6d8..6fa8fcc5c7af 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -27,6 +27,10 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
return val;
}

+#ifdef CONFIG_NUMA_AWARE_SPINLOCKS
+extern void __cna_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+#endif
+
#ifdef CONFIG_PARAVIRT_SPINLOCKS
extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
extern void __pv_init_lock_hash(void);
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 9d3a971ea364..6a4ccbf4e09c 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -698,6 +698,33 @@ static void __init int3_selftest(void)
unregister_die_notifier(&int3_exception_nb);
}

+#if defined(CONFIG_NUMA_AWARE_SPINLOCKS)
+/*
+ * Constant (boot-param configurable) flag selecting the NUMA-aware variant
+ * of spinlock. Possible values: -1 (off) / 0 (auto, default) / 1 (on).
+ */
+static int numa_spinlock_flag;
+
+static int __init numa_spinlock_setup(char *str)
+{
+ if (!strcmp(str, "auto")) {
+ numa_spinlock_flag = 0;
+ return 1;
+ } else if (!strcmp(str, "on")) {
+ numa_spinlock_flag = 1;
+ return 1;
+ } else if (!strcmp(str, "off")) {
+ numa_spinlock_flag = -1;
+ return 1;
+ }
+
+ return 0;
+}
+
+__setup("numa_spinlock=", numa_spinlock_setup);
+
+#endif
+
void __init alternative_instructions(void)
{
int3_selftest();
@@ -738,6 +765,22 @@ void __init alternative_instructions(void)
}
#endif

+#if defined(CONFIG_NUMA_AWARE_SPINLOCKS)
+ /*
+ * By default, switch to the NUMA-friendly slow path for
+ * spinlocks when we have multiple NUMA nodes in native environment.
+ */
+ if ((numa_spinlock_flag == 1) ||
+ (numa_spinlock_flag == 0 && nr_node_ids > 1 &&
+ pv_ops.lock.queued_spin_lock_slowpath ==
+ native_queued_spin_lock_slowpath)) {
+ pv_ops.lock.queued_spin_lock_slowpath =
+ __cna_queued_spin_lock_slowpath;
+
+ pr_info("Enabling CNA spinlock\n");
+ }
+#endif
+
apply_paravirt(__parainstructions, __parainstructions_end);

restart_nmi();
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index 52d06ec6f525..e40b9538b79f 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -17,7 +17,7 @@

struct mcs_spinlock {
struct mcs_spinlock *next;
- int locked; /* 1 if lock acquired */
+ unsigned int locked; /* 1 if lock acquired */
int count; /* nesting count, see qspinlock.c */
};

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index c06d1e8075d9..6d8c4a52e44e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -11,7 +11,7 @@
* Peter Zijlstra <[email protected]>
*/

-#ifndef _GEN_PV_LOCK_SLOWPATH
+#if !defined(_GEN_PV_LOCK_SLOWPATH) && !defined(_GEN_CNA_LOCK_SLOWPATH)

#include <linux/smp.h>
#include <linux/bug.h>
@@ -70,7 +70,8 @@
/*
* On 64-bit architectures, the mcs_spinlock structure will be 16 bytes in
* size and four of them will fit nicely in one 64-byte cacheline. For
- * pvqspinlock, however, we need more space for extra data. To accommodate
+ * pvqspinlock, however, we need more space for extra data. The same also
+ * applies for the NUMA-aware variant of spinlocks (CNA). To accommodate
* that, we insert two more long words to pad it up to 32 bytes. IOW, only
* two of them can fit in a cacheline in this case. That is OK as it is rare
* to have more than 2 levels of slowpath nesting in actual use. We don't
@@ -79,7 +80,7 @@
*/
struct qnode {
struct mcs_spinlock mcs;
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#if defined(CONFIG_PARAVIRT_SPINLOCKS) || defined(CONFIG_NUMA_AWARE_SPINLOCKS)
long reserved[2];
#endif
};
@@ -103,6 +104,8 @@ struct qnode {
* Exactly fits one 64-byte cacheline on a 64-bit architecture.
*
* PV doubles the storage and uses the second cacheline for PV state.
+ * CNA also doubles the storage and uses the second cacheline for
+ * CNA-specific state.
*/
static DEFINE_PER_CPU_ALIGNED(struct qnode, qnodes[MAX_NODES]);

@@ -316,7 +319,7 @@ static __always_inline void __mcs_pass_lock(struct mcs_spinlock *node,
#define try_clear_tail __try_clear_tail
#define mcs_pass_lock __mcs_pass_lock

-#endif /* _GEN_PV_LOCK_SLOWPATH */
+#endif /* _GEN_PV_LOCK_SLOWPATH && _GEN_CNA_LOCK_SLOWPATH */

/**
* queued_spin_lock_slowpath - acquire the queued spinlock
@@ -589,6 +592,29 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
EXPORT_SYMBOL(queued_spin_lock_slowpath);

/*
+ * Generate the code for NUMA-aware spinlocks
+ */
+#if !defined(_GEN_CNA_LOCK_SLOWPATH) && defined(CONFIG_NUMA_AWARE_SPINLOCKS)
+#define _GEN_CNA_LOCK_SLOWPATH
+
+#undef pv_wait_head_or_lock
+#define pv_wait_head_or_lock cna_pre_scan
+
+#undef try_clear_tail
+#define try_clear_tail cna_try_change_tail
+
+#undef mcs_pass_lock
+#define mcs_pass_lock cna_pass_lock
+
+#undef queued_spin_lock_slowpath
+#define queued_spin_lock_slowpath __cna_queued_spin_lock_slowpath
+
+#include "qspinlock_cna.h"
+#include "qspinlock.c"
+
+#endif
+
+/*
* Generate the paravirt code for queued_spin_unlock_slowpath().
*/
#if !defined(_GEN_PV_LOCK_SLOWPATH) && defined(CONFIG_PARAVIRT_SPINLOCKS)
diff --git a/kernel/locking/qspinlock_cna.h b/kernel/locking/qspinlock_cna.h
new file mode 100644
index 000000000000..0cf5a6a2709c
--- /dev/null
+++ b/kernel/locking/qspinlock_cna.h
@@ -0,0 +1,260 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _GEN_CNA_LOCK_SLOWPATH
+#error "do not include this file"
+#endif
+
+#include <linux/topology.h>
+
+/*
+ * Implement a NUMA-aware version of MCS (aka CNA, or compact NUMA-aware lock).
+ *
+ * In CNA, spinning threads are organized in two queues, a main queue for
+ * threads running on the same NUMA node as the current lock holder, and a
+ * secondary queue for threads running on other nodes. Schematically, it
+ * looks like this:
+ *
+ * cna_node
+ * +----------+ +--------+ +--------+
+ * |mcs:next | -> |mcs:next| -> ... |mcs:next| -> NULL [Main queue]
+ * |mcs:locked| -+ +--------+ +--------+
+ * +----------+ |
+ * +----------------------+
+ * \/
+ * +--------+ +--------+
+ * |mcs:next| -> ... |mcs:next| [Secondary queue]
+ * +--------+ +--------+
+ * ^ |
+ * +--------------------+
+ *
+ * N.B. locked = 1 if secondary queue is absent. Otherwise, it contains the
+ * encoded pointer to the tail of the secondary queue, which is organized as a
+ * circular list.
+ *
+ * After acquiring the MCS lock and before acquiring the spinlock, the lock
+ * holder scans the main queue looking for a thread running on the same node
+ * (pre-scan). If found (call it thread T), all threads in the main queue
+ * between the current lock holder and T are moved to the end of the secondary
+ * queue. If such T is not found, we make another scan of the main queue when
+ * unlocking the MCS lock (post-scan), starting at the node where pre-scan
+ * stopped. If both scans fail to find such T, the MCS lock is passed to the
+ * first thread in the secondary queue. If the secondary queue is empty, the
+ * lock is passed to the next thread in the main queue.
+ *
+ * For more details, see https://arxiv.org/abs/1810.05600.
+ *
+ * Authors: Alex Kogan <[email protected]>
+ * Dave Dice <[email protected]>
+ */
+
+struct cna_node {
+ struct mcs_spinlock mcs;
+ int numa_node;
+ u32 encoded_tail;
+ u32 pre_scan_result; /* 0 or encoded tail */
+};
+
+static void __init cna_init_nodes_per_cpu(unsigned int cpu)
+{
+ struct mcs_spinlock *base = per_cpu_ptr(&qnodes[0].mcs, cpu);
+ int numa_node = cpu_to_node(cpu);
+ int i;
+
+ for (i = 0; i < MAX_NODES; i++) {
+ struct cna_node *cn = (struct cna_node *)grab_mcs_node(base, i);
+
+ cn->numa_node = numa_node;
+ cn->encoded_tail = encode_tail(cpu, i);
+ /*
+ * @encoded_tail has to be larger than 1, so we do not confuse
+ * it with other valid values for @locked or @pre_scan_result
+ * (0 or 1)
+ */
+ WARN_ON(cn->encoded_tail <= 1);
+ }
+}
+
+static int __init cna_init_nodes(void)
+{
+ unsigned int cpu;
+
+ BUILD_BUG_ON(sizeof(struct cna_node) > sizeof(struct qnode));
+ /* we store an encoded tail word in the node's @locked field */
+ BUILD_BUG_ON(sizeof(u32) > sizeof(unsigned int));
+
+ for_each_possible_cpu(cpu)
+ cna_init_nodes_per_cpu(cpu);
+
+ return 0;
+}
+early_initcall(cna_init_nodes);
+
+static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
+ struct mcs_spinlock *node)
+{
+ struct mcs_spinlock *head_2nd, *tail_2nd;
+ u32 new;
+
+ /* If the secondary queue is empty, do what MCS does. */
+ if (node->locked <= 1)
+ return __try_clear_tail(lock, val, node);
+
+ /*
+ * Try to update the tail value to the last node in the secondary queue.
+ * If successful, pass the lock to the first thread in the secondary
+ * queue. Doing those two actions effectively moves all nodes from the
+ * secondary queue into the main one.
+ */
+ tail_2nd = decode_tail(node->locked);
+ head_2nd = tail_2nd->next;
+ new = ((struct cna_node *)tail_2nd)->encoded_tail + _Q_LOCKED_VAL;
+
+ if (atomic_try_cmpxchg_relaxed(&lock->val, &val, new)) {
+ /*
+ * Try to reset @next in tail_2nd to NULL, but no need to check
+ * the result - if failed, a new successor has updated it.
+ */
+ cmpxchg_relaxed(&tail_2nd->next, head_2nd, NULL);
+ arch_mcs_pass_lock(&head_2nd->locked, 1);
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * cna_splice_tail -- splice nodes in the main queue between [first, last]
+ * onto the secondary queue.
+ */
+static void cna_splice_tail(struct mcs_spinlock *node,
+ struct mcs_spinlock *first,
+ struct mcs_spinlock *last)
+{
+ /* remove [first,last] */
+ node->next = last->next;
+
+ /* stick [first,last] on the secondary queue tail */
+ if (node->locked <= 1) { /* if secondary queue is empty */
+ /* create secondary queue */
+ last->next = first;
+ } else {
+ /* add to the tail of the secondary queue */
+ struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
+ struct mcs_spinlock *head_2nd = tail_2nd->next;
+
+ tail_2nd->next = first;
+ last->next = head_2nd;
+ }
+
+ node->locked = ((struct cna_node *)last)->encoded_tail;
+}
+
+/*
+ * cna_scan_main_queue - scan the main waiting queue looking for the first
+ * thread running on the same NUMA node as the lock holder. If found (call it
+ * thread T), move all threads in the main queue between the lock holder and
+ * T to the end of the secondary queue and return 0; otherwise, return the
+ * encoded pointer of the last scanned node in the primary queue (so a
+ * subsequent scan can be resumed from that node)
+ *
+ * Schematically, this may look like the following (nn stands for numa_node and
+ * et stands for encoded_tail).
+ *
+ * when cna_scan_main_queue() is called (the secondary queue is empty):
+ *
+ * A+------------+ B+--------+ C+--------+ T+--------+
+ * |mcs:next | -> |mcs:next| -> |mcs:next| -> |mcs:next| -> NULL
+ * |mcs:locked=1| |cna:nn=0| |cna:nn=2| |cna:nn=1|
+ * |cna:nn=1 | +--------+ +--------+ +--------+
+ * +----------- +
+ *
+ * when cna_scan_main_queue() returns (the secondary queue contains B and C):
+ *
+ * A+----------------+ T+--------+
+ * |mcs:next | -> |mcs:next| -> NULL
+ * |mcs:locked=C.et | -+ |cna:nn=1|
+ * |cna:nn=1 | | +--------+
+ * +--------------- + +-----+
+ * \/
+ * B+--------+ C+--------+
+ * |mcs:next| -> |mcs:next| -+
+ * |cna:nn=0| |cna:nn=2| |
+ * +--------+ +--------+ |
+ * ^ |
+ * +---------------------+
+ *
+ * The worst case complexity of the scan is O(n), where n is the number
+ * of current waiters. However, the amortized complexity is close to O(1),
+ * as the immediate successor is likely to be running on the same node once
+ * threads from other nodes are moved to the secondary queue.
+ */
+static u32 cna_scan_main_queue(struct mcs_spinlock *node,
+ struct mcs_spinlock *pred_start)
+{
+ struct cna_node *cn = (struct cna_node *)node;
+ struct cna_node *cni = (struct cna_node *)READ_ONCE(pred_start->next);
+ struct cna_node *last;
+ int my_numa_node = cn->numa_node;
+
+ /* find any next waiter on 'our' NUMA node */
+ for (last = cn;
+ cni && cni->numa_node != my_numa_node;
+ last = cni, cni = (struct cna_node *)READ_ONCE(cni->mcs.next))
+ ;
+
+ /* if found, splice any skipped waiters onto the secondary queue */
+ if (cni) {
+ if (last != cn) /* did we skip any waiters? */
+ cna_splice_tail(node, node->next,
+ (struct mcs_spinlock *)last);
+ return 0;
+ }
+
+ return last->encoded_tail;
+}
+
+__always_inline u32 cna_pre_scan(struct qspinlock *lock,
+ struct mcs_spinlock *node)
+{
+ struct cna_node *cn = (struct cna_node *)node;
+
+ cn->pre_scan_result = cna_scan_main_queue(node, node);
+
+ return 0;
+}
+
+static inline void cna_pass_lock(struct mcs_spinlock *node,
+ struct mcs_spinlock *next)
+{
+ struct cna_node *cn = (struct cna_node *)node;
+ struct mcs_spinlock *next_holder = next, *tail_2nd;
+ u32 val = 1;
+
+ u32 scan = cn->pre_scan_result;
+
+ /*
+ * check if a successor from the same numa node has not been found in
+ * pre-scan, and if so, try to find it in post-scan starting from the
+ * node where pre-scan stopped (stored in @pre_scan_result)
+ */
+ if (scan > 0)
+ scan = cna_scan_main_queue(node, decode_tail(scan));
+
+ if (!scan) { /* if found a successor from the same numa node */
+ next_holder = node->next;
+ /*
+ * we unlock successor by passing a non-zero value,
+ * so set @val to 1 iff @locked is 0, which will happen
+ * if we acquired the MCS lock when its queue was empty
+ */
+ val = node->locked ? node->locked : 1;
+ } else if (node->locked > 1) { /* if secondary queue is not empty */
+ /* next holder will be the first node in the secondary queue */
+ tail_2nd = decode_tail(node->locked);
+ /* @tail_2nd->next points to the head of the secondary queue */
+ next_holder = tail_2nd->next;
+ /* splice the secondary queue onto the head of the main queue */
+ tail_2nd->next = next;
+ }
+
+ arch_mcs_pass_lock(&next_holder->locked, val);
+}
--
2.11.0 (Apple Git-81)


2019-11-10 21:32:32

by kernel test robot

Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

Hi Alex,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to v5.4-rc6 next-20191108]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
reproduce:
# apt-get install sparse
# sparse version: v0.6.1-21-gb31adac-dirty
make ARCH=x86_64 allmodconfig
make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)

kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *[assigned] node @@ got ct mcs_spinlock *[assigned] node @@
kernel/locking/qspinlock.c:450:14: sparse: expected struct mcs_spinlock *[assigned] node
kernel/locking/qspinlock.c:450:14: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *prev @@ got struct struct mcs_spinlock *prev @@
kernel/locking/qspinlock.c:498:22: sparse: expected struct mcs_spinlock *prev
kernel/locking/qspinlock.c:498:22: sparse: got struct mcs_spinlock [pure] *
>> kernel/locking/qspinlock_cna.h:141:60: sparse: sparse: incorrect type in initializer (different modifiers) @@ expected struct mcs_spinlock *tail_2nd @@ got struct struct mcs_spinlock *tail_2nd @@
>> kernel/locking/qspinlock_cna.h:141:60: sparse: expected struct mcs_spinlock *tail_2nd
>> kernel/locking/qspinlock_cna.h:141:60: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *[assigned] node @@ got ct mcs_spinlock *[assigned] node @@
kernel/locking/qspinlock.c:450:14: sparse: expected struct mcs_spinlock *[assigned] node
kernel/locking/qspinlock.c:450:14: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *prev @@ got struct struct mcs_spinlock *prev @@
kernel/locking/qspinlock.c:498:22: sparse: expected struct mcs_spinlock *prev
kernel/locking/qspinlock.c:498:22: sparse: got struct mcs_spinlock [pure] *
>> kernel/locking/qspinlock_cna.h:107:18: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *tail_2nd @@ got struct struct mcs_spinlock *tail_2nd @@
kernel/locking/qspinlock_cna.h:107:18: sparse: expected struct mcs_spinlock *tail_2nd
kernel/locking/qspinlock_cna.h:107:18: sparse: got struct mcs_spinlock [pure] *
>> kernel/locking/qspinlock_cna.h:240:61: sparse: sparse: incorrect type in argument 2 (different modifiers) @@ expected struct mcs_spinlock *pred_start @@ got struct struct mcs_spinlock *pred_start @@
>> kernel/locking/qspinlock_cna.h:240:61: sparse: expected struct mcs_spinlock *pred_start
kernel/locking/qspinlock_cna.h:240:61: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock_cna.h:252:26: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *tail_2nd @@ got struct struct mcs_spinlock *tail_2nd @@
kernel/locking/qspinlock_cna.h:252:26: sparse: expected struct mcs_spinlock *tail_2nd
kernel/locking/qspinlock_cna.h:252:26: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock.c:450:14: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *[assigned] node @@ got ct mcs_spinlock *[assigned] node @@
kernel/locking/qspinlock.c:450:14: sparse: expected struct mcs_spinlock *[assigned] node
kernel/locking/qspinlock.c:450:14: sparse: got struct mcs_spinlock [pure] *
kernel/locking/qspinlock.c:498:22: sparse: sparse: incorrect type in assignment (different modifiers) @@ expected struct mcs_spinlock *prev @@ got struct struct mcs_spinlock *prev @@
kernel/locking/qspinlock.c:498:22: sparse: expected struct mcs_spinlock *prev
kernel/locking/qspinlock.c:498:22: sparse: got struct mcs_spinlock [pure] *

vim +141 kernel/locking/qspinlock_cna.h

90
91 static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
92 struct mcs_spinlock *node)
93 {
94 struct mcs_spinlock *head_2nd, *tail_2nd;
95 u32 new;
96
97 /* If the secondary queue is empty, do what MCS does. */
98 if (node->locked <= 1)
99 return __try_clear_tail(lock, val, node);
100
101 /*
102 * Try to update the tail value to the last node in the secondary queue.
103 * If successful, pass the lock to the first thread in the secondary
104 * queue. Doing those two actions effectively moves all nodes from the
105 * secondary queue into the main one.
106 */
> 107 tail_2nd = decode_tail(node->locked);
108 head_2nd = tail_2nd->next;
109 new = ((struct cna_node *)tail_2nd)->encoded_tail + _Q_LOCKED_VAL;
110
111 if (atomic_try_cmpxchg_relaxed(&lock->val, &val, new)) {
112 /*
113 * Try to reset @next in tail_2nd to NULL, but no need to check
114 * the result - if failed, a new successor has updated it.
115 */
116 cmpxchg_relaxed(&tail_2nd->next, head_2nd, NULL);
117 arch_mcs_pass_lock(&head_2nd->locked, 1);
118 return true;
119 }
120
121 return false;
122 }
123
124 /*
125 * cna_splice_tail -- splice nodes in the main queue between [first, last]
126 * onto the secondary queue.
127 */
128 static void cna_splice_tail(struct mcs_spinlock *node,
129 struct mcs_spinlock *first,
130 struct mcs_spinlock *last)
131 {
132 /* remove [first,last] */
133 node->next = last->next;
134
135 /* stick [first,last] on the secondary queue tail */
136 if (node->locked <= 1) { /* if secondary queue is empty */
137 /* create secondary queue */
138 last->next = first;
139 } else {
140 /* add to the tail of the secondary queue */
> 141 struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
142 struct mcs_spinlock *head_2nd = tail_2nd->next;
143
144 tail_2nd->next = first;
145 last->next = head_2nd;
146 }
147
148 node->locked = ((struct cna_node *)last)->encoded_tail;
149 }
150
151 /*
152 * cna_scan_main_queue - scan the main waiting queue looking for the first
153 * thread running on the same NUMA node as the lock holder. If found (call it
154 * thread T), move all threads in the main queue between the lock holder and
155 * T to the end of the secondary queue and return 0; otherwise, return the
156 * encoded pointer of the last scanned node in the primary queue (so a
157 * subsequent scan can be resumed from that node)
158 *
159 * Schematically, this may look like the following (nn stands for numa_node and
160 * et stands for encoded_tail).
161 *
162 * when cna_scan_main_queue() is called (the secondary queue is empty):
163 *
164 * A+------------+ B+--------+ C+--------+ T+--------+
165 * |mcs:next | -> |mcs:next| -> |mcs:next| -> |mcs:next| -> NULL
166 * |mcs:locked=1| |cna:nn=0| |cna:nn=2| |cna:nn=1|
167 * |cna:nn=1 | +--------+ +--------+ +--------+
168 * +----------- +
169 *
170 * when cna_scan_main_queue() returns (the secondary queue contains B and C):
171 *
172 * A+----------------+ T+--------+
173 * |mcs:next | -> |mcs:next| -> NULL
174 * |mcs:locked=C.et | -+ |cna:nn=1|
175 * |cna:nn=1 | | +--------+
176 * +--------------- + +-----+
177 * \/
178 * B+--------+ C+--------+
179 * |mcs:next| -> |mcs:next| -+
180 * |cna:nn=0| |cna:nn=2| |
181 * +--------+ +--------+ |
182 * ^ |
183 * +---------------------+
184 *
185 * The worst case complexity of the scan is O(n), where n is the number
186 * of current waiters. However, the amortized complexity is close to O(1),
187 * as the immediate successor is likely to be running on the same node once
188 * threads from other nodes are moved to the secondary queue.
189 */
190 static u32 cna_scan_main_queue(struct mcs_spinlock *node,
191 struct mcs_spinlock *pred_start)
192 {
193 struct cna_node *cn = (struct cna_node *)node;
194 struct cna_node *cni = (struct cna_node *)READ_ONCE(pred_start->next);
195 struct cna_node *last;
196 int my_numa_node = cn->numa_node;
197
198 /* find any next waiter on 'our' NUMA node */
199 for (last = cn;
200 cni && cni->numa_node != my_numa_node;
201 last = cni, cni = (struct cna_node *)READ_ONCE(cni->mcs.next))
202 ;
203
204 /* if found, splice any skipped waiters onto the secondary queue */
205 if (cni) {
206 if (last != cn) /* did we skip any waiters? */
207 cna_splice_tail(node, node->next,
208 (struct mcs_spinlock *)last);
209 return 0;
210 }
211
212 return last->encoded_tail;
213 }
214
215 __always_inline u32 cna_pre_scan(struct qspinlock *lock,
216 struct mcs_spinlock *node)
217 {
218 struct cna_node *cn = (struct cna_node *)node;
219
220 cn->pre_scan_result = cna_scan_main_queue(node, node);
221
222 return 0;
223 }
224
225 static inline void cna_pass_lock(struct mcs_spinlock *node,
226 struct mcs_spinlock *next)
227 {
228 struct cna_node *cn = (struct cna_node *)node;
229 struct mcs_spinlock *next_holder = next, *tail_2nd;
230 u32 val = 1;
231
232 u32 scan = cn->pre_scan_result;
233
234 /*
235 * check if a successor from the same numa node has not been found in
236 * pre-scan, and if so, try to find it in post-scan starting from the
237 * node where pre-scan stopped (stored in @pre_scan_result)
238 */
239 if (scan > 0)
> 240 scan = cna_scan_main_queue(node, decode_tail(scan));

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation

2019-11-14 21:00:14

by Alex Kogan

Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

+ linux-sparse mailing list

It seems like a bug in the way sparse handles “pure” functions that return
a pointer.

One of the pure functions in question is defined as follows:
static inline __pure
struct mcs_spinlock *grab_mcs_node(struct mcs_spinlock *base, int idx)
{
return &((struct qnode *)base + idx)->mcs;
}

and the corresponding variable definition and the assignment statement that
produce a warning (in kernel/locking/qspinlock.c) are:
struct mcs_spinlock *prev, *next, *node;

node = grab_mcs_node(node, idx);

The issue can be recreated without my patch with
# sparse version: v0.6.1
make ARCH=x86_64 allmodconfig
make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' kernel/locking/qspinlock.o


The warnings can be eliminated by adding an explicit cast, e.g.:

node = (struct mcs_spinlock *)grab_mcs_node(node, idx);

but this seems wrong (unnecessary) to me.
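
For reference, the pattern can be reproduced outside the kernel with a
minimal stand-alone file (the struct layouts below mirror qspinlock.c
but are reduced here; `__pure` is expanded to the GCC attribute the
kernel uses). GCC and Clang compile and run this cleanly; the extra
warning should only appear when the file is run through sparse v0.6.1:

```c
#include <assert.h>

#define __pure __attribute__((pure))

struct mcs_spinlock {
	struct mcs_spinlock *next;
	unsigned int locked;
	int count;
};

/* padded container, mirroring struct qnode in qspinlock.c */
struct qnode {
	struct mcs_spinlock mcs;
	long reserved[2];
};

static inline __pure
struct mcs_spinlock *grab_mcs_node(struct mcs_spinlock *base, int idx)
{
	return &((struct qnode *)base + idx)->mcs;
}

static struct qnode nodes[4];

/*
 * Plain compilers accept this assignment; sparse flags it, complaining
 * that a "struct mcs_spinlock [pure] *" is assigned to a plain pointer,
 * i.e. the function's pure modifier leaks into the returned pointer type.
 */
static struct mcs_spinlock *pick(int idx)
{
	struct mcs_spinlock *node;

	node = grab_mcs_node(&nodes[0].mcs, idx);
	return node;
}
```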

Regards,
— Alex

> On Nov 10, 2019, at 4:30 PM, kbuild test robot <[email protected]> wrote:
>
> Hi Alex,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on linus/master]
> [cannot apply to v5.4-rc6 next-20191108]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://urldefense.proofpoint.com/v2/url?u=https-3A__stackoverflow.com_a_37406982&d=DwIBAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=Hvhk3F4omdCk-GE1PTOm3Kn0A7ApWOZ2aZLTuVxFK4k&m=hIJsql5G3kZsA2K8s_1WK7096mEKsYe-jEraOUNhbDs&s=4bbPcLEtAedk_fBrSIBMWvdEslLtH5W28nZLbmMIgL8&e= ]
>
> url: https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_0day-2Dci_linux_commits_Alex-2DKogan_locking-2Dqspinlock-2DRename-2Dmcs-2Dlock-2Dunlock-2Dmacros-2Dand-2Dmake-2Dthem-2Dmore-2Dgeneric_20191109-2D180535&d=DwIBAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=Hvhk3F4omdCk-GE1PTOm3Kn0A7ApWOZ2aZLTuVxFK4k&m=hIJsql5G3kZsA2K8s_1WK7096mEKsYe-jEraOUNhbDs&s=ydR3iBtEF-3XUySBCcPYJ8oqw_oNDB-liJdapTXeFeM&e=
> base: https://urldefense.proofpoint.com/v2/url?u=https-3A__git.kernel.org_pub_scm_linux_kernel_git_torvalds_linux.git&d=DwIBAg&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=Hvhk3F4omdCk-GE1PTOm3Kn0A7ApWOZ2aZLTuVxFK4k&m=hIJsql5G3kZsA2K8s_1WK7096mEKsYe-jEraOUNhbDs&s=c4rCmFY0YTXCPiXW9d_BD0RN6WU6QGb64h1iyWNCm9A&e= 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
> reproduce:
> # apt-get install sparse
> # sparse version: v0.6.1-21-gb31adac-dirty
> make ARCH=x86_64 allmodconfig
> make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <[email protected]>
> kernel/locking/qspinlock.c:498:22: sparse: expected struct mcs_spinlock *prev
> kernel/locking/qspinlock.c:498:22: sparse: got struct mcs_spinlock [pure] *
>
> vim +141 kernel/locking/qspinlock_cna.h
>
> 90
> 91 static inline bool cna_try_change_tail(struct qspinlock *lock, u32 val,
> 92 struct mcs_spinlock *node)
> 93 {
> 94 struct mcs_spinlock *head_2nd, *tail_2nd;
> 95 u32 new;
> 96
> 97 /* If the secondary queue is empty, do what MCS does. */
> 98 if (node->locked <= 1)
> 99 return __try_clear_tail(lock, val, node);
> 100
> 101 /*
> 102 * Try to update the tail value to the last node in the secondary queue.
> 103 * If successful, pass the lock to the first thread in the secondary
> 104 * queue. Doing those two actions effectively moves all nodes from the
> 105 * secondary queue into the main one.
> 106 */
>> 107 tail_2nd = decode_tail(node->locked);
> 108 head_2nd = tail_2nd->next;
> 109 new = ((struct cna_node *)tail_2nd)->encoded_tail + _Q_LOCKED_VAL;
> 110
> 111 if (atomic_try_cmpxchg_relaxed(&lock->val, &val, new)) {
> 112 /*
> 113 * Try to reset @next in tail_2nd to NULL, but no need to check
> 114 * the result - if failed, a new successor has updated it.
> 115 */
> 116 cmpxchg_relaxed(&tail_2nd->next, head_2nd, NULL);
> 117 arch_mcs_pass_lock(&head_2nd->locked, 1);
> 118 return true;
> 119 }
> 120
> 121 return false;
> 122 }
> 123
> 124 /*
> 125 * cna_splice_tail -- splice nodes in the main queue between [first, last]
> 126 * onto the secondary queue.
> 127 */
> 128 static void cna_splice_tail(struct mcs_spinlock *node,
> 129 struct mcs_spinlock *first,
> 130 struct mcs_spinlock *last)
> 131 {
> 132 /* remove [first,last] */
> 133 node->next = last->next;
> 134
> 135 /* stick [first,last] on the secondary queue tail */
> 136 if (node->locked <= 1) { /* if secondary queue is empty */
> 137 /* create secondary queue */
> 138 last->next = first;
> 139 } else {
> 140 /* add to the tail of the secondary queue */
>> 141 struct mcs_spinlock *tail_2nd = decode_tail(node->locked);
> 142 struct mcs_spinlock *head_2nd = tail_2nd->next;
> 143
> 144 tail_2nd->next = first;
> 145 last->next = head_2nd;
> 146 }
> 147
> 148 node->locked = ((struct cna_node *)last)->encoded_tail;
> 149 }
> 150
> 151 /*
> 152 * cna_scan_main_queue - scan the main waiting queue looking for the first
> 153 * thread running on the same NUMA node as the lock holder. If found (call it
> 154 * thread T), move all threads in the main queue between the lock holder and
> 155 * T to the end of the secondary queue and return 0; otherwise, return the
> 156 * encoded pointer of the last scanned node in the primary queue (so a
> 157 * subsequent scan can be resumed from that node)
> 158 *
> 159 * Schematically, this may look like the following (nn stands for numa_node and
> 160 * et stands for encoded_tail).
> 161 *
> 162 * when cna_scan_main_queue() is called (the secondary queue is empty):
> 163 *
> 164 * A+------------+ B+--------+ C+--------+ T+--------+
> 165 * |mcs:next | -> |mcs:next| -> |mcs:next| -> |mcs:next| -> NULL
> 166 * |mcs:locked=1| |cna:nn=0| |cna:nn=2| |cna:nn=1|
> 167 * |cna:nn=1 | +--------+ +--------+ +--------+
> 168 * +----------- +
> 169 *
> 170 * when cna_scan_main_queue() returns (the secondary queue contains B and C):
> 171 *
> 172 * A+----------------+ T+--------+
> 173 * |mcs:next | -> |mcs:next| -> NULL
> 174 * |mcs:locked=C.et | -+ |cna:nn=1|
> 175 * |cna:nn=1 | | +--------+
> 176 * +--------------- + +-----+
> 177 * \/
> 178 * B+--------+ C+--------+
> 179 * |mcs:next| -> |mcs:next| -+
> 180 * |cna:nn=0| |cna:nn=2| |
> 181 * +--------+ +--------+ |
> 182 * ^ |
> 183 * +---------------------+
> 184 *
> 185 * The worst case complexity of the scan is O(n), where n is the number
> 186 * of current waiters. However, the amortized complexity is close to O(1),
> 187 * as the immediate successor is likely to be running on the same node once
> 188 * threads from other nodes are moved to the secondary queue.
> 189 */
> 190 static u32 cna_scan_main_queue(struct mcs_spinlock *node,
> 191 struct mcs_spinlock *pred_start)
> 192 {
> 193 struct cna_node *cn = (struct cna_node *)node;
> 194 struct cna_node *cni = (struct cna_node *)READ_ONCE(pred_start->next);
> 195 struct cna_node *last;
> 196 int my_numa_node = cn->numa_node;
> 197
> 198 /* find any next waiter on 'our' NUMA node */
> 199 for (last = cn;
> 200 cni && cni->numa_node != my_numa_node;
> 201 last = cni, cni = (struct cna_node *)READ_ONCE(cni->mcs.next))
> 202 ;
> 203
> 204 /* if found, splice any skipped waiters onto the secondary queue */
> 205 if (cni) {
> 206 if (last != cn) /* did we skip any waiters? */
> 207 cna_splice_tail(node, node->next,
> 208 (struct mcs_spinlock *)last);
> 209 return 0;
> 210 }
> 211
> 212 return last->encoded_tail;
> 213 }
> 214
> 215 __always_inline u32 cna_pre_scan(struct qspinlock *lock,
> 216 struct mcs_spinlock *node)
> 217 {
> 218 struct cna_node *cn = (struct cna_node *)node;
> 219
> 220 cn->pre_scan_result = cna_scan_main_queue(node, node);
> 221
> 222 return 0;
> 223 }
> 224
> 225 static inline void cna_pass_lock(struct mcs_spinlock *node,
> 226 struct mcs_spinlock *next)
> 227 {
> 228 struct cna_node *cn = (struct cna_node *)node;
> 229 struct mcs_spinlock *next_holder = next, *tail_2nd;
> 230 u32 val = 1;
> 231
> 232 u32 scan = cn->pre_scan_result;
> 233
> 234 /*
> 235 * check if a successor from the same numa node has not been found in
> 236 * pre-scan, and if so, try to find it in post-scan starting from the
> 237 * node where pre-scan stopped (stored in @pre_scan_result)
> 238 */
> 239 if (scan > 0)
>> 240 scan = cna_scan_main_queue(node, decode_tail(scan));
>
> ---
> 0-DAY kernel test infrastructure Open Source Technology Center
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

2019-11-15 00:40:07

by Luc Van Oostenryck

Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

On Thu, Nov 14, 2019 at 03:57:34PM -0500, Alex Kogan wrote:
> + linux-sparse mailing list
>
> It seems like a bug in the way sparse handles “pure” functions that return
> a pointer.

Yes, it's a bug in sparse.

> The warnings can be eliminated by adding an explicit cast, e.g.:
>
> node = (struct mcs_spinlock *)grab_mcs_node(node, idx);
>
> but this seems wrong (unnecessary) to me.

Indeed, it would be wrong.

Thanks for analyzing and reporting this,
-- Luc

2019-11-19 14:43:19

by Oliver Sang

Subject: [locking/qspinlock] ad3836e30e: will-it-scale.per_thread_ops 73.5% improvement

Greeting,

FYI, we noticed a 73.5% improvement of will-it-scale.per_thread_ops due to commit:


commit: ad3836e30e6f5f5e97867707b573f2fda5ce444a ("[PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock")
url: https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535


in testcase: will-it-scale
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:

nr_task: 50%
mode: thread
test: unlink2
cpufreq_governor: performance
ucode: 0x500002b

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale

In addition to that, the commit also has significant impact on the following tests:

+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops 200.6% improvement |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=thread |
| | nr_task=100% |
| | test=unlink2 |
| | ucode=0x500002b |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=thread |
| | nr_task=100% |
| | test=signal1 |
| | ucode=0x500002b |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops 93.0% improvement |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=thread |
| | nr_task=50% |
| | test=signal1 |
| | ucode=0x500002b |
+------------------+---------------------------------------------------------------------------+




Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/50%/debian-x86_64-2019-09-23.cgz/lkp-csl-2ap3/unlink2/will-it-scale/0x500002b

commit:
2f65452ad7 ("locking/qspinlock: Refactor the qspinlock slow path")
ad3836e30e ("locking/qspinlock: Introduce CNA into the slow path of qspinlock")

2f65452ad747deeb ad3836e30e6f5f5e97867707b57
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 75% 3:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
6672 ± 4% +73.5% 11574 ± 2% will-it-scale.per_thread_ops
640605 ± 4% +73.5% 1111223 ± 2% will-it-scale.workload
1.51 ± 5% +0.8 2.32 ± 6% mpstat.cpu.all.soft%
0.10 ± 10% +0.0 0.12 ± 2% mpstat.cpu.all.usr%
20048836 ± 12% +22.5% 24566004 ± 16% turbostat.C1E
11.23 ± 41% +7.1 18.35 ± 31% turbostat.C1E%
1.282e+10 ± 41% +63.3% 2.094e+10 ± 31% cpuidle.C1E.time
39313047 ± 12% +22.7% 48249083 ± 16% cpuidle.C1E.usage
3166305 ± 6% +14.0% 3608145 ± 8% cpuidle.POLL.time
49.00 -2.0% 48.00 vmstat.cpu.id
50.00 +2.0% 51.00 vmstat.cpu.sy
3407 +2.2% 3482 vmstat.system.cs
135302 +19.7% 161908 ± 2% meminfo.KReclaimable
135302 +19.7% 161908 ± 2% meminfo.SReclaimable
449410 ± 2% +24.4% 558953 ± 3% meminfo.SUnreclaim
584713 ± 2% +23.3% 720862 ± 3% meminfo.Slab
11319 ± 2% +10.1% 12458 ± 2% meminfo.max_used_kB
2238130 ? 3% +135.9% 5279009 ? 6% numa-numastat.node0.local_node
2269204 ? 3% +133.7% 5302396 ? 6% numa-numastat.node0.numa_hit
2365013 ? 5% +103.3% 4809115 ? 8% numa-numastat.node1.local_node
2385159 ? 5% +102.7% 4835585 ? 8% numa-numastat.node1.numa_hit
2221247 ? 3% +114.8% 4772155 ? 7% numa-numastat.node2.local_node
2247695 ? 3% +113.4% 4795496 ? 6% numa-numastat.node2.numa_hit
2309805 ? 10% +110.3% 4857220 ? 9% numa-numastat.node3.local_node
2325463 ? 9% +109.7% 4877484 ? 9% numa-numastat.node3.numa_hit
44400 ? 21% +26.8% 56304 ? 8% numa-meminfo.node0.KReclaimable
9900 ? 24% +27.6% 12631 ? 7% numa-meminfo.node0.Mapped
44400 ? 21% +26.8% 56304 ? 8% numa-meminfo.node0.SReclaimable
133122 ? 8% +24.1% 165206 ? 4% numa-meminfo.node0.SUnreclaim
177523 ? 11% +24.8% 221511 ? 5% numa-meminfo.node0.Slab
100192 ? 9% +25.2% 125474 ? 4% numa-meminfo.node1.SUnreclaim
127118 ? 9% +24.4% 158125 ? 4% numa-meminfo.node1.Slab
101780 ? 7% +27.5% 129806 ? 6% numa-meminfo.node2.SUnreclaim
128296 ? 9% +30.0% 166849 ? 7% numa-meminfo.node2.Slab
8291 ? 2% +4.9% 8697 ? 2% proc-vmstat.nr_mapped
33848 +19.6% 40470 ? 3% proc-vmstat.nr_slab_reclaimable
111872 ? 2% +24.3% 139038 ? 3% proc-vmstat.nr_slab_unreclaimable
2292 ? 12% +76.4% 4044 ? 56% proc-vmstat.numa_hint_faults_local
9232703 ? 3% +114.6% 19813037 ? 3% proc-vmstat.numa_hit
9139363 ? 3% +115.8% 19719551 ? 3% proc-vmstat.numa_local
499.00 ? 59% +1997.4% 10466 ? 49% proc-vmstat.numa_pages_migrated
111653 ? 10% +23.8% 138277 ? 10% proc-vmstat.numa_pte_updates
42745968 ? 2% +91.0% 81650355 ? 3% proc-vmstat.pgalloc_normal
42682188 ? 2% +91.1% 81573027 ? 3% proc-vmstat.pgfree
499.00 ? 59% +1997.4% 10466 ? 49% proc-vmstat.pgmigrate_success
2494 ? 24% +25.4% 3129 ? 8% numa-vmstat.node0.nr_mapped
11093 ? 21% +27.0% 14087 ? 8% numa-vmstat.node0.nr_slab_reclaimable
33289 ? 8% +24.4% 41423 ? 5% numa-vmstat.node0.nr_slab_unreclaimable
1667202 ? 4% +90.5% 3176615 ? 6% numa-vmstat.node0.numa_hit
1636540 ? 4% +92.7% 3153437 ? 6% numa-vmstat.node0.numa_local
24978 ? 9% +26.6% 31632 ? 4% numa-vmstat.node1.nr_slab_unreclaimable
1486622 ? 5% +82.6% 2713956 ? 6% numa-vmstat.node1.numa_hit
1379225 ? 5% +88.5% 2600269 ? 6% numa-vmstat.node1.numa_local
25404 ? 6% +28.2% 32569 ? 6% numa-vmstat.node2.nr_slab_unreclaimable
1359621 ? 3% +100.3% 2723407 ? 7% numa-vmstat.node2.numa_hit
1246044 ? 3% +109.7% 2612899 ? 8% numa-vmstat.node2.numa_local
1528891 ? 13% +78.0% 2721568 ? 8% numa-vmstat.node3.numa_hit
1425866 ? 15% +83.3% 2613907 ? 9% numa-vmstat.node3.numa_local
192.00 ? 11% +62.5% 312.00 ? 22% slabinfo.biovec-64.active_objs
192.00 ? 11% +62.5% 312.00 ? 22% slabinfo.biovec-64.num_objs
303493 ? 3% +33.3% 404707 ? 3% slabinfo.dentry.active_objs
7249 ? 3% +33.4% 9668 ? 3% slabinfo.dentry.active_slabs
304492 ? 3% +33.4% 406075 ? 3% slabinfo.dentry.num_objs
7249 ? 3% +33.4% 9668 ? 3% slabinfo.dentry.num_slabs
182640 ? 6% +62.8% 297383 ? 5% slabinfo.filp.active_objs
2860 ? 6% +62.8% 4655 ? 5% slabinfo.filp.active_slabs
183092 ? 6% +62.8% 297994 ? 5% slabinfo.filp.num_objs
2860 ? 6% +62.8% 4655 ? 5% slabinfo.filp.num_slabs
1323 ? 2% +15.1% 1522 ? 5% slabinfo.kmalloc-rcl-96.active_objs
1323 ? 2% +15.1% 1522 ? 5% slabinfo.kmalloc-rcl-96.num_objs
161582 ? 6% +72.7% 279012 ? 6% slabinfo.shmem_inode_cache.active_objs
3516 ? 6% +72.7% 6074 ? 6% slabinfo.shmem_inode_cache.active_slabs
161805 ? 6% +72.7% 279439 ? 6% slabinfo.shmem_inode_cache.num_objs
3516 ? 6% +72.7% 6074 ? 6% slabinfo.shmem_inode_cache.num_slabs
529.15 ? 12% +111.9% 1120 ? 19% sched_debug.cfs_rq:/.exec_clock.stddev
52879 ? 11% +74.8% 92425 ? 24% sched_debug.cfs_rq:/.load.stddev
21.36 ? 11% +47.8% 31.58 ? 17% sched_debug.cfs_rq:/.load_avg.avg
0.04 ?173% +3100.0% 1.33 ? 70% sched_debug.cfs_rq:/.load_avg.min
50786 ? 14% +98.1% 100627 ? 25% sched_debug.cfs_rq:/.min_vruntime.stddev
34.29 ? 22% +1108.9% 414.56 ? 2% sched_debug.cfs_rq:/.nr_spread_over.avg
58.54 ? 17% +853.0% 557.92 ? 18% sched_debug.cfs_rq:/.nr_spread_over.max
13.08 ? 14% +2672.0% 362.67 ? 2% sched_debug.cfs_rq:/.nr_spread_over.min
8.55 ? 16% +156.9% 21.96 ? 21% sched_debug.cfs_rq:/.nr_spread_over.stddev
5.06 ? 13% +46.9% 7.44 ? 35% sched_debug.cfs_rq:/.runnable_load_avg.avg
52870 ? 11% +74.8% 92406 ? 24% sched_debug.cfs_rq:/.runnable_weight.stddev
50790 ? 14% +98.1% 100636 ? 25% sched_debug.cfs_rq:/.spread0.stddev
180401 ? 11% +23.6% 222940 ? 12% sched_debug.cpu.avg_idle.stddev
8.91 ? 4% +6.8% 9.52 ? 6% sched_debug.cpu.clock.stddev
8.91 ? 4% +6.8% 9.52 ? 6% sched_debug.cpu.clock_task.stddev
11814 ? 22% +33.5% 15769 ? 15% sched_debug.cpu.sched_count.max
1586 ? 3% +15.7% 1834 ? 7% sched_debug.cpu.sched_count.stddev
5670 ? 23% +34.1% 7601 ? 16% sched_debug.cpu.sched_goidle.max
790.14 ? 4% +15.5% 912.55 ? 7% sched_debug.cpu.sched_goidle.stddev
7.40 ? 6% +42.3% 10.53 ? 2% perf-stat.i.MPKI
7.765e+09 +22.7% 9.528e+09 perf-stat.i.branch-instructions
0.40 ? 4% +0.0 0.45 perf-stat.i.branch-miss-rate%
31323148 ? 4% +36.4% 42730681 perf-stat.i.branch-misses
58.03 -47.3 10.78 ? 3% perf-stat.i.cache-miss-rate%
1.456e+08 ? 9% -66.8% 48391249 ? 3% perf-stat.i.cache-misses
2.506e+08 ? 7% +79.9% 4.509e+08 ? 4% perf-stat.i.cache-references
3361 +2.4% 3442 perf-stat.i.context-switches
9.05 -20.1% 7.23 perf-stat.i.cpi
344.23 -5.1% 326.65 ? 2% perf-stat.i.cpu-migrations
2122 ? 8% +202.6% 6421 ? 3% perf-stat.i.cycles-between-cache-misses
1265475 ? 4% +44.2% 1824797 ? 7% perf-stat.i.dTLB-load-misses
9.014e+09 +29.4% 1.166e+10 perf-stat.i.dTLB-loads
0.01 ? 10% +0.0 0.01 ? 7% perf-stat.i.dTLB-store-miss-rate%
169566 ? 7% +110.5% 356900 ? 6% perf-stat.i.dTLB-store-misses
2.377e+09 ? 4% +67.9% 3.991e+09 ? 2% perf-stat.i.dTLB-stores
85.27 +6.1 91.39 perf-stat.i.iTLB-load-miss-rate%
18608109 ? 5% +60.7% 29911747 perf-stat.i.iTLB-load-misses
3204887 -12.4% 2808962 ? 4% perf-stat.i.iTLB-loads
3.387e+10 +26.4% 4.281e+10 perf-stat.i.instructions
1826 ? 4% -21.6% 1432 perf-stat.i.instructions-per-iTLB-miss
0.11 +25.3% 0.14 perf-stat.i.ipc
93.40 -33.7 59.71 ? 4% perf-stat.i.node-load-miss-rate%
31424789 ? 10% -88.2% 3699396 ? 9% perf-stat.i.node-load-misses
87.06 -60.6 26.47 ? 7% perf-stat.i.node-store-miss-rate%
16693314 ? 12% -86.1% 2316840 ? 8% perf-stat.i.node-store-misses
2494712 ? 16% +156.7% 6404636 ? 3% perf-stat.i.node-stores
7.40 ? 6% +42.4% 10.53 ? 2% perf-stat.overall.MPKI
0.40 ? 4% +0.0 0.45 perf-stat.overall.branch-miss-rate%
58.02 -47.3 10.74 ? 3% perf-stat.overall.cache-miss-rate%
9.04 -20.1% 7.22 perf-stat.overall.cpi
2122 ? 8% +201.4% 6395 ? 3% perf-stat.overall.cycles-between-cache-misses
0.01 ? 10% +0.0 0.01 ? 7% perf-stat.overall.dTLB-store-miss-rate%
85.27 +6.1 91.41 perf-stat.overall.iTLB-load-miss-rate%
1825 ? 4% -21.6% 1431 perf-stat.overall.instructions-per-iTLB-miss
0.11 +25.3% 0.14 perf-stat.overall.ipc
93.40 -32.8 60.59 ? 3% perf-stat.overall.node-load-miss-rate%
87.05 -60.5 26.56 ? 7% perf-stat.overall.node-store-miss-rate%
15772532 ? 4% -27.1% 11493841 perf-stat.overall.path-length
7.739e+09 +22.7% 9.496e+09 perf-stat.ps.branch-instructions
31217264 ? 4% +36.4% 42586237 perf-stat.ps.branch-misses
1.451e+08 ? 9% -66.8% 48228082 ? 3% perf-stat.ps.cache-misses
2.497e+08 ? 7% +79.9% 4.493e+08 ? 4% perf-stat.ps.cache-references
3350 +2.4% 3430 perf-stat.ps.context-switches
343.06 -5.1% 325.54 ? 2% perf-stat.ps.cpu-migrations
1261151 ? 4% +44.2% 1818535 ? 7% perf-stat.ps.dTLB-load-misses
8.984e+09 +29.4% 1.162e+10 perf-stat.ps.dTLB-loads
168993 ? 7% +110.5% 355694 ? 6% perf-stat.ps.dTLB-store-misses
2.369e+09 ? 4% +67.9% 3.978e+09 ? 2% perf-stat.ps.dTLB-stores
18545230 ? 5% +60.7% 29810560 perf-stat.ps.iTLB-load-misses
3194060 -12.4% 2799474 ? 4% perf-stat.ps.iTLB-loads
3.376e+10 +26.4% 4.266e+10 perf-stat.ps.instructions
31318540 ? 10% -88.2% 3686933 ? 9% perf-stat.ps.node-load-misses
16636871 ? 12% -86.1% 2309030 ? 8% perf-stat.ps.node-store-misses
2486272 ? 16% +156.7% 6382952 ? 3% perf-stat.ps.node-stores
1.009e+13 +26.6% 1.277e+13 perf-stat.total.instructions
20.98 ? 47% -21.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.inode_doinit_with_dentry.security_d_instantiate.d_instantiate
20.96 ? 47% -21.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
0.41 ? 58% +0.3 0.67 ? 6% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start
0.42 ? 58% +0.3 0.69 ? 6% perf-profile.calltrace.cycles-pp.kmem_cache_free.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd
0.77 ? 11% +0.3 1.10 ? 8% perf-profile.calltrace.cycles-pp.file_free_rcu.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd
1.37 ? 10% +0.5 1.87 ? 6% perf-profile.calltrace.cycles-pp.ret_from_fork
1.37 ? 10% +0.5 1.87 ? 6% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.36 ? 10% +0.5 1.86 ? 7% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
1.34 ? 10% +0.5 1.85 ? 7% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.34 ? 10% +0.5 1.85 ? 7% perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.34 ? 10% +0.5 1.85 ? 7% perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread
1.34 ? 10% +0.5 1.85 ? 7% perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn
0.99 ? 36% +2.0 3.02 ? 63% perf-profile.calltrace.cycles-pp.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_open
0.85 ? 41% +2.1 2.95 ? 65% perf-profile.calltrace.cycles-pp._raw_spin_lock.__alloc_fd.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.91 ? 42% +2.4 3.32 ? 68% perf-profile.calltrace.cycles-pp.__close_fd.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
0.99 ? 39% +2.4 3.41 ? 66% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
1.12 ? 36% +2.4 3.56 ? 63% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_close
1.12 ? 36% +2.4 3.56 ? 63% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_close
1.14 ? 35% +2.5 3.60 ? 63% perf-profile.calltrace.cycles-pp.__GI___libc_close
0.84 ? 43% +2.5 3.30 ? 68% perf-profile.calltrace.cycles-pp._raw_spin_lock.__close_fd.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.9 2.89 ? 66% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.__alloc_fd.do_sys_open.do_syscall_64
0.00 +3.2 3.24 ? 69% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.__close_fd.__x64_sys_close.do_syscall_64
0.00 +7.1 7.12 ? 44% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
0.00 +7.5 7.48 ? 44% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.inode_doinit_with_dentry.security_d_instantiate.d_instantiate
0.00 +16.5 16.53 ? 7% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.evict.do_unlinkat.do_syscall_64
0.00 +19.3 19.32 ? 9% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add.new_inode.shmem_get_inode
60.59 ? 7% -60.6 0.00 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.78 ? 14% -0.3 0.48 ? 10% perf-profile.children.cycles-pp.generic_permission
0.81 ? 14% -0.3 0.51 ? 10% perf-profile.children.cycles-pp.inode_permission
0.83 ? 12% -0.2 0.63 ? 9% perf-profile.children.cycles-pp.link_path_walk
0.26 ? 14% -0.2 0.11 ? 4% perf-profile.children.cycles-pp.__mnt_want_write
0.48 ? 14% -0.1 0.34 ? 5% perf-profile.children.cycles-pp.__alloc_file
0.49 ? 13% -0.1 0.34 ? 6% perf-profile.children.cycles-pp.alloc_empty_file
0.28 ? 15% -0.1 0.13 perf-profile.children.cycles-pp.mnt_want_write
0.51 ? 12% -0.1 0.38 ? 3% perf-profile.children.cycles-pp.vfs_unlink
0.15 ? 23% -0.1 0.06 ? 6% perf-profile.children.cycles-pp.timestamp_truncate
0.22 ? 5% -0.1 0.14 ? 13% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.23 ? 11% -0.1 0.15 ? 7% perf-profile.children.cycles-pp.do_dentry_open
0.16 ? 20% -0.1 0.08 ? 5% perf-profile.children.cycles-pp.current_time
0.15 ? 8% -0.1 0.07 ? 10% perf-profile.children.cycles-pp.__list_del_entry_valid
0.27 ? 5% -0.1 0.19 ? 11% perf-profile.children.cycles-pp.__call_rcu
0.20 ? 14% -0.1 0.13 perf-profile.children.cycles-pp.inode_init_always
0.17 ? 12% -0.1 0.11 ? 4% perf-profile.children.cycles-pp.__list_add_valid
0.15 ? 17% -0.1 0.08 ? 5% perf-profile.children.cycles-pp.shmem_unlink
0.08 ? 15% -0.1 0.03 ?100% perf-profile.children.cycles-pp.drop_nlink
0.12 ? 21% -0.1 0.07 perf-profile.children.cycles-pp.fsnotify
0.13 ? 14% -0.0 0.08 ? 5% perf-profile.children.cycles-pp.may_open
0.14 ? 13% -0.0 0.10 ? 5% perf-profile.children.cycles-pp.may_delete
0.07 ? 12% -0.0 0.03 ?100% perf-profile.children.cycles-pp.security_file_alloc
0.10 ? 18% -0.0 0.06 ? 11% perf-profile.children.cycles-pp.inode_init_owner
0.14 ? 13% -0.0 0.10 ? 4% perf-profile.children.cycles-pp.iput
0.08 ? 14% -0.0 0.04 ? 57% perf-profile.children.cycles-pp.__sb_end_write
0.10 ? 18% -0.0 0.06 perf-profile.children.cycles-pp.__sb_start_write
0.07 ? 11% -0.0 0.04 ? 58% perf-profile.children.cycles-pp.security_file_open
0.06 ? 13% +0.0 0.09 ? 9% perf-profile.children.cycles-pp.shmem_alloc_inode
0.06 ? 11% +0.0 0.09 ? 5% perf-profile.children.cycles-pp.strncpy_from_user
0.04 ? 57% +0.0 0.07 ? 7% perf-profile.children.cycles-pp.__x64_sys_unlink
0.04 ? 58% +0.0 0.07 ? 5% perf-profile.children.cycles-pp.walk_component
0.04 ? 58% +0.0 0.08 ? 6% perf-profile.children.cycles-pp.___might_sleep
0.03 ?100% +0.0 0.06 perf-profile.children.cycles-pp.lookup_fast
0.08 ? 10% +0.0 0.12 ? 3% perf-profile.children.cycles-pp.getname_flags
0.12 ? 6% +0.0 0.17 ? 5% perf-profile.children.cycles-pp.dput
0.01 ?173% +0.0 0.05 ? 9% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.01 ?173% +0.0 0.06 perf-profile.children.cycles-pp.__inode_security_revalidate
0.03 ?100% +0.0 0.07 ? 11% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__dentry_kill
0.07 ? 6% +0.1 0.12 ? 8% perf-profile.children.cycles-pp.___slab_alloc
0.06 ? 7% +0.1 0.11 ? 9% perf-profile.children.cycles-pp.new_slab
0.07 ? 6% +0.1 0.12 ? 6% perf-profile.children.cycles-pp.__slab_alloc
0.00 +0.1 0.06 ? 6% perf-profile.children.cycles-pp.inode_init_once
0.19 ? 9% +0.1 0.26 ? 4% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.2 0.21 ? 7% perf-profile.children.cycles-pp.cna_scan_main_queue
0.84 ? 9% +0.4 1.21 ? 10% perf-profile.children.cycles-pp.__slab_free
0.87 ? 9% +0.4 1.25 ? 10% perf-profile.children.cycles-pp.kmem_cache_free
1.37 ? 10% +0.5 1.87 ? 6% perf-profile.children.cycles-pp.ret_from_fork
1.37 ? 10% +0.5 1.87 ? 6% perf-profile.children.cycles-pp.kthread
1.36 ? 10% +0.5 1.86 ? 7% perf-profile.children.cycles-pp.smpboot_thread_fn
1.34 ? 10% +0.5 1.85 ? 7% perf-profile.children.cycles-pp.run_ksoftirqd
2.32 ? 11% +0.8 3.12 ? 11% perf-profile.children.cycles-pp.__softirqentry_text_start
2.28 ? 11% +0.8 3.08 ? 10% perf-profile.children.cycles-pp.rcu_do_batch
2.28 ? 11% +0.8 3.08 ? 10% perf-profile.children.cycles-pp.rcu_core
0.99 ? 36% +2.0 3.02 ? 63% perf-profile.children.cycles-pp.__alloc_fd
0.91 ? 42% +2.4 3.32 ? 68% perf-profile.children.cycles-pp.__close_fd
0.99 ? 39% +2.4 3.41 ? 66% perf-profile.children.cycles-pp.__x64_sys_close
1.15 ? 35% +2.5 3.60 ? 62% perf-profile.children.cycles-pp.__GI___libc_close
0.00 +57.1 57.05 ? 5% perf-profile.children.cycles-pp.__cna_queued_spin_lock_slowpath
60.09 ? 7% -60.1 0.00 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.33 ? 9% -0.4 0.93 ? 6% perf-profile.self.cycles-pp._raw_spin_lock
0.74 ? 14% -0.3 0.47 ? 11% perf-profile.self.cycles-pp.generic_permission
0.26 ? 16% -0.1 0.11 ? 4% perf-profile.self.cycles-pp.__mnt_want_write
0.35 ? 16% -0.1 0.23 ? 7% perf-profile.self.cycles-pp.__alloc_file
0.15 ? 22% -0.1 0.06 perf-profile.self.cycles-pp.timestamp_truncate
0.22 ? 5% -0.1 0.14 ? 13% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.15 ? 8% -0.1 0.07 ? 10% perf-profile.self.cycles-pp.__list_del_entry_valid
0.22 ? 20% -0.1 0.14 ? 18% perf-profile.self.cycles-pp.selinux_inode_permission
0.15 ? 13% -0.1 0.08 ? 10% perf-profile.self.cycles-pp.__destroy_inode
0.18 ? 12% -0.1 0.11 ? 6% perf-profile.self.cycles-pp.path_openat
0.17 ? 12% -0.1 0.11 ? 4% perf-profile.self.cycles-pp.__list_add_valid
0.12 ? 22% -0.1 0.06 perf-profile.self.cycles-pp.inode_init_always
0.08 ? 17% -0.1 0.03 ?100% perf-profile.self.cycles-pp.__alloc_fd
0.08 ? 15% -0.1 0.03 ?100% perf-profile.self.cycles-pp.drop_nlink
0.09 ? 16% -0.0 0.04 ? 57% perf-profile.self.cycles-pp.shmem_reserve_inode
0.12 ? 25% -0.0 0.07 perf-profile.self.cycles-pp.fsnotify
0.09 ? 20% -0.0 0.05 perf-profile.self.cycles-pp.__sb_start_write
0.08 ? 14% -0.0 0.04 ? 57% perf-profile.self.cycles-pp.__sb_end_write
0.10 ? 16% -0.0 0.06 ? 11% perf-profile.self.cycles-pp.inode_init_owner
0.04 ? 58% +0.0 0.07 perf-profile.self.cycles-pp.___might_sleep
0.03 ?100% +0.0 0.07 ? 11% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.00 +0.1 0.05 perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.05 ? 8% perf-profile.self.cycles-pp.__call_rcu
0.00 +0.1 0.06 ? 7% perf-profile.self.cycles-pp.inode_init_once
0.00 +0.2 0.21 ? 9% perf-profile.self.cycles-pp.cna_scan_main_queue
0.84 ? 9% +0.4 1.21 ? 10% perf-profile.self.cycles-pp.__slab_free
0.00 +55.9 55.88 ? 4% perf-profile.self.cycles-pp.__cna_queued_spin_lock_slowpath
1137 ? 44% -64.3% 405.50 ? 33% interrupts.33:PCI-MSI.524291-edge.eth0-TxRx-2
5535 ? 20% +25.4% 6939 ? 5% interrupts.CPU0.NMI:Non-maskable_interrupts
5535 ? 20% +25.4% 6939 ? 5% interrupts.CPU0.PMI:Performance_monitoring_interrupts
699.00 ? 2% +17.6% 822.25 ? 10% interrupts.CPU1.TLB:TLB_shootdowns
5464 ? 19% +28.7% 7032 ? 4% interrupts.CPU100.NMI:Non-maskable_interrupts
5464 ? 19% +28.7% 7032 ? 4% interrupts.CPU100.PMI:Performance_monitoring_interrupts
5489 ? 19% +25.7% 6900 ? 5% interrupts.CPU101.NMI:Non-maskable_interrupts
5489 ? 19% +25.7% 6900 ? 5% interrupts.CPU101.PMI:Performance_monitoring_interrupts
5447 ? 19% +28.8% 7014 ? 5% interrupts.CPU102.NMI:Non-maskable_interrupts
5447 ? 19% +28.8% 7014 ? 5% interrupts.CPU102.PMI:Performance_monitoring_interrupts
6448 ? 8% +8.5% 6998 ? 5% interrupts.CPU107.NMI:Non-maskable_interrupts
6448 ? 8% +8.5% 6998 ? 5% interrupts.CPU107.PMI:Performance_monitoring_interrupts
6457 ? 7% +8.1% 6981 ? 5% interrupts.CPU109.NMI:Non-maskable_interrupts
6457 ? 7% +8.1% 6981 ? 5% interrupts.CPU109.PMI:Performance_monitoring_interrupts
1137 ? 44% -64.3% 405.50 ? 33% interrupts.CPU11.33:PCI-MSI.524291-edge.eth0-TxRx-2
5597 ? 26% +24.9% 6990 ? 4% interrupts.CPU11.NMI:Non-maskable_interrupts
5597 ? 26% +24.9% 6990 ? 4% interrupts.CPU11.PMI:Performance_monitoring_interrupts
829.50 ? 17% +24.6% 1033 ? 6% interrupts.CPU110.TLB:TLB_shootdowns
6359 ? 8% +9.4% 6956 ? 4% interrupts.CPU111.NMI:Non-maskable_interrupts
6359 ? 8% +9.4% 6956 ? 4% interrupts.CPU111.PMI:Performance_monitoring_interrupts
6327 ? 8% +9.4% 6921 ? 5% interrupts.CPU114.NMI:Non-maskable_interrupts
6327 ? 8% +9.4% 6921 ? 5% interrupts.CPU114.PMI:Performance_monitoring_interrupts
6483 ? 7% +7.2% 6949 ? 4% interrupts.CPU115.NMI:Non-maskable_interrupts
6483 ? 7% +7.2% 6949 ? 4% interrupts.CPU115.PMI:Performance_monitoring_interrupts
732.00 ? 14% +30.5% 955.25 ? 7% interrupts.CPU116.TLB:TLB_shootdowns
6341 ? 8% +10.1% 6979 ? 4% interrupts.CPU118.NMI:Non-maskable_interrupts
6341 ? 8% +10.1% 6979 ? 4% interrupts.CPU118.PMI:Performance_monitoring_interrupts
5612 ? 27% +25.3% 7032 ? 4% interrupts.CPU12.NMI:Non-maskable_interrupts
5612 ? 27% +25.3% 7032 ? 4% interrupts.CPU12.PMI:Performance_monitoring_interrupts
731.75 ? 15% +35.7% 993.25 ? 18% interrupts.CPU123.TLB:TLB_shootdowns
657.75 ? 14% +50.4% 989.50 ? 7% interrupts.CPU134.TLB:TLB_shootdowns
790.75 ? 14% +21.7% 962.00 ? 6% interrupts.CPU152.TLB:TLB_shootdowns
733.25 ? 9% +24.1% 910.00 ? 10% interrupts.CPU157.TLB:TLB_shootdowns
5628 ? 26% +24.8% 7021 ? 3% interrupts.CPU16.NMI:Non-maskable_interrupts
5628 ? 26% +24.8% 7021 ? 3% interrupts.CPU16.PMI:Performance_monitoring_interrupts
66.50 ?139% -91.7% 5.50 ? 60% interrupts.CPU16.RES:Rescheduling_interrupts
712.50 ? 17% +29.2% 920.25 ? 15% interrupts.CPU160.TLB:TLB_shootdowns
4900 ? 38% +42.5% 6983 ? 5% interrupts.CPU18.NMI:Non-maskable_interrupts
4900 ? 38% +42.5% 6983 ? 5% interrupts.CPU18.PMI:Performance_monitoring_interrupts
4899 ? 39% +42.7% 6990 ? 4% interrupts.CPU19.NMI:Non-maskable_interrupts
4899 ? 39% +42.7% 6990 ? 4% interrupts.CPU19.PMI:Performance_monitoring_interrupts
703.75 ? 12% +15.9% 815.50 ? 9% interrupts.CPU19.TLB:TLB_shootdowns
787.00 ? 9% +16.4% 916.25 ? 5% interrupts.CPU3.TLB:TLB_shootdowns
793.50 ? 14% +16.7% 926.00 ? 6% interrupts.CPU35.TLB:TLB_shootdowns
729.50 ? 3% +19.6% 872.25 ? 14% interrupts.CPU45.TLB:TLB_shootdowns
756.50 ? 13% +26.0% 953.25 ? 6% interrupts.CPU50.TLB:TLB_shootdowns
5501 ? 19% +26.9% 6979 ? 4% interrupts.CPU57.NMI:Non-maskable_interrupts
5501 ? 19% +26.9% 6979 ? 4% interrupts.CPU57.PMI:Performance_monitoring_interrupts
5509 ? 19% +28.2% 7061 ? 3% interrupts.CPU58.NMI:Non-maskable_interrupts
5509 ? 19% +28.2% 7061 ? 3% interrupts.CPU58.PMI:Performance_monitoring_interrupts
788.00 ? 19% +27.4% 1004 ? 7% interrupts.CPU58.TLB:TLB_shootdowns
5559 ? 20% +25.6% 6984 ? 3% interrupts.CPU59.NMI:Non-maskable_interrupts
5559 ? 20% +25.6% 6984 ? 3% interrupts.CPU59.PMI:Performance_monitoring_interrupts
5444 ? 19% +27.4% 6936 ? 5% interrupts.CPU60.NMI:Non-maskable_interrupts
5444 ? 19% +27.4% 6936 ? 5% interrupts.CPU60.PMI:Performance_monitoring_interrupts
5497 ? 20% +26.4% 6948 ? 4% interrupts.CPU61.NMI:Non-maskable_interrupts
5497 ? 20% +26.4% 6948 ? 4% interrupts.CPU61.PMI:Performance_monitoring_interrupts
5433 ? 19% +28.4% 6977 ? 4% interrupts.CPU62.NMI:Non-maskable_interrupts
5433 ? 19% +28.4% 6977 ? 4% interrupts.CPU62.PMI:Performance_monitoring_interrupts
5442 ? 19% +26.9% 6904 ? 6% interrupts.CPU63.NMI:Non-maskable_interrupts
5442 ? 19% +26.9% 6904 ? 6% interrupts.CPU63.PMI:Performance_monitoring_interrupts
728.50 ? 9% +30.3% 949.50 ? 13% interrupts.CPU63.TLB:TLB_shootdowns
5457 ? 19% +27.6% 6962 ? 5% interrupts.CPU64.NMI:Non-maskable_interrupts
5457 ? 19% +27.6% 6962 ? 5% interrupts.CPU64.PMI:Performance_monitoring_interrupts
5474 ? 19% +29.5% 7087 ? 3% interrupts.CPU65.NMI:Non-maskable_interrupts
5474 ? 19% +29.5% 7087 ? 3% interrupts.CPU65.PMI:Performance_monitoring_interrupts
692.75 ? 15% +34.0% 928.00 ? 16% interrupts.CPU65.TLB:TLB_shootdowns
5436 ? 19% +27.5% 6931 ? 5% interrupts.CPU66.NMI:Non-maskable_interrupts
5436 ? 19% +27.5% 6931 ? 5% interrupts.CPU66.PMI:Performance_monitoring_interrupts
5492 ? 19% +25.8% 6907 ? 5% interrupts.CPU67.NMI:Non-maskable_interrupts
5492 ? 19% +25.8% 6907 ? 5% interrupts.CPU67.PMI:Performance_monitoring_interrupts
5492 ? 19% +27.0% 6973 ? 4% interrupts.CPU68.NMI:Non-maskable_interrupts
5492 ? 19% +27.0% 6973 ? 4% interrupts.CPU68.PMI:Performance_monitoring_interrupts
5513 ? 19% +26.7% 6983 ? 3% interrupts.CPU69.NMI:Non-maskable_interrupts
5513 ? 19% +26.7% 6983 ? 3% interrupts.CPU69.PMI:Performance_monitoring_interrupts
5522 ? 19% +26.2% 6971 ? 5% interrupts.CPU70.NMI:Non-maskable_interrupts
5522 ? 19% +26.2% 6971 ? 5% interrupts.CPU70.PMI:Performance_monitoring_interrupts
5535 ? 20% +25.2% 6928 ? 4% interrupts.CPU71.NMI:Non-maskable_interrupts
5535 ? 20% +25.2% 6928 ? 4% interrupts.CPU71.PMI:Performance_monitoring_interrupts
5472 ? 19% +28.4% 7026 ? 3% interrupts.CPU72.NMI:Non-maskable_interrupts
5472 ? 19% +28.4% 7026 ? 3% interrupts.CPU72.PMI:Performance_monitoring_interrupts
5556 ? 20% +25.1% 6951 ? 5% interrupts.CPU73.NMI:Non-maskable_interrupts
5556 ? 20% +25.1% 6951 ? 5% interrupts.CPU73.PMI:Performance_monitoring_interrupts
5524 ? 19% +25.8% 6948 ? 5% interrupts.CPU74.NMI:Non-maskable_interrupts
5524 ? 19% +25.8% 6948 ? 5% interrupts.CPU74.PMI:Performance_monitoring_interrupts
5462 ? 19% +28.4% 7012 ? 3% interrupts.CPU75.NMI:Non-maskable_interrupts
5462 ? 19% +28.4% 7012 ? 3% interrupts.CPU75.PMI:Performance_monitoring_interrupts
5409 ? 19% +28.5% 6949 ? 5% interrupts.CPU76.NMI:Non-maskable_interrupts
5409 ? 19% +28.5% 6949 ? 5% interrupts.CPU76.PMI:Performance_monitoring_interrupts
5494 ? 20% +27.0% 6978 ? 4% interrupts.CPU77.NMI:Non-maskable_interrupts
5494 ? 20% +27.0% 6978 ? 4% interrupts.CPU77.PMI:Performance_monitoring_interrupts
5522 ? 19% +26.4% 6978 ? 4% interrupts.CPU78.NMI:Non-maskable_interrupts
5522 ? 19% +26.4% 6978 ? 4% interrupts.CPU78.PMI:Performance_monitoring_interrupts
5492 ? 19% +25.2% 6878 ? 6% interrupts.CPU79.NMI:Non-maskable_interrupts
5492 ? 19% +25.2% 6878 ? 6% interrupts.CPU79.PMI:Performance_monitoring_interrupts
6330 ? 8% +10.3% 6981 ? 5% interrupts.CPU8.NMI:Non-maskable_interrupts
6330 ? 8% +10.3% 6981 ? 5% interrupts.CPU8.PMI:Performance_monitoring_interrupts
5449 ? 19% +27.4% 6945 ? 5% interrupts.CPU80.NMI:Non-maskable_interrupts
5449 ? 19% +27.4% 6945 ? 5% interrupts.CPU80.PMI:Performance_monitoring_interrupts
5513 ? 20% +25.7% 6930 ? 3% interrupts.CPU81.NMI:Non-maskable_interrupts
5513 ? 20% +25.7% 6930 ? 3% interrupts.CPU81.PMI:Performance_monitoring_interrupts
5441 ? 19% +27.2% 6919 ? 6% interrupts.CPU82.NMI:Non-maskable_interrupts
5441 ? 19% +27.2% 6919 ? 6% interrupts.CPU82.PMI:Performance_monitoring_interrupts
5498 ? 19% +26.2% 6941 ? 5% interrupts.CPU83.NMI:Non-maskable_interrupts
5498 ? 19% +26.2% 6941 ? 5% interrupts.CPU83.PMI:Performance_monitoring_interrupts
5520 ? 19% +25.4% 6924 ? 5% interrupts.CPU85.NMI:Non-maskable_interrupts
5520 ? 19% +25.4% 6924 ? 5% interrupts.CPU85.PMI:Performance_monitoring_interrupts
5468 ? 19% +24.8% 6827 ? 6% interrupts.CPU86.NMI:Non-maskable_interrupts
5468 ? 19% +24.8% 6827 ? 6% interrupts.CPU86.PMI:Performance_monitoring_interrupts
5513 ? 20% +24.7% 6875 ? 4% interrupts.CPU87.NMI:Non-maskable_interrupts
5513 ? 20% +24.7% 6875 ? 4% interrupts.CPU87.PMI:Performance_monitoring_interrupts
5522 ? 20% +26.7% 6996 ? 4% interrupts.CPU88.NMI:Non-maskable_interrupts
5522 ? 20% +26.7% 6996 ? 4% interrupts.CPU88.PMI:Performance_monitoring_interrupts
5443 ? 19% +27.7% 6950 ? 6% interrupts.CPU89.NMI:Non-maskable_interrupts
5443 ? 19% +27.7% 6950 ? 6% interrupts.CPU89.PMI:Performance_monitoring_interrupts
778.25 ? 14% +34.2% 1044 ? 8% interrupts.CPU89.TLB:TLB_shootdowns
5615 ? 27% +25.7% 7060 ? 5% interrupts.CPU9.NMI:Non-maskable_interrupts
5615 ? 27% +25.7% 7060 ? 5% interrupts.CPU9.PMI:Performance_monitoring_interrupts
5483 ? 20% +26.0% 6908 ? 6% interrupts.CPU90.NMI:Non-maskable_interrupts
5483 ? 20% +26.0% 6908 ? 6% interrupts.CPU90.PMI:Performance_monitoring_interrupts
5529 ? 19% +25.9% 6960 ? 4% interrupts.CPU91.NMI:Non-maskable_interrupts
5529 ? 19% +25.9% 6960 ? 4% interrupts.CPU91.PMI:Performance_monitoring_interrupts
5503 ? 20% +25.9% 6931 ? 5% interrupts.CPU92.NMI:Non-maskable_interrupts
5503 ? 20% +25.9% 6931 ? 5% interrupts.CPU92.PMI:Performance_monitoring_interrupts
5479 ? 19% +27.7% 6998 ? 4% interrupts.CPU93.NMI:Non-maskable_interrupts
5479 ? 19% +27.7% 6998 ? 4% interrupts.CPU93.PMI:Performance_monitoring_interrupts
5508 ? 19% +24.9% 6878 ? 6% interrupts.CPU94.NMI:Non-maskable_interrupts
5508 ? 19% +24.9% 6878 ? 6% interrupts.CPU94.PMI:Performance_monitoring_interrupts
5559 ? 20% +23.8% 6882 ? 5% interrupts.CPU95.NMI:Non-maskable_interrupts
5559 ? 20% +23.8% 6882 ? 5% interrupts.CPU95.PMI:Performance_monitoring_interrupts
5475 ? 19% +28.3% 7023 ? 4% interrupts.CPU96.NMI:Non-maskable_interrupts
5475 ? 19% +28.3% 7023 ? 4% interrupts.CPU96.PMI:Performance_monitoring_interrupts
5480 ? 19% +28.0% 7015 ? 4% interrupts.CPU97.NMI:Non-maskable_interrupts
5480 ? 19% +28.0% 7015 ? 4% interrupts.CPU97.PMI:Performance_monitoring_interrupts
5537 ? 20% +26.7% 7017 ? 4% interrupts.CPU98.NMI:Non-maskable_interrupts
5537 ? 20% +26.7% 7017 ? 4% interrupts.CPU98.PMI:Performance_monitoring_interrupts
5509 ? 19% +26.2% 6952 ? 4% interrupts.CPU99.NMI:Non-maskable_interrupts
5509 ? 19% +26.2% 6952 ? 4% interrupts.CPU99.PMI:Performance_monitoring_interrupts
32817 -75.0% 8195 ? 9% softirqs.CPU0.RCU
37804 ? 2% -77.9% 8371 ? 6% softirqs.CPU1.RCU
37107 -76.7% 8662 ? 9% softirqs.CPU10.RCU
37389 ? 3% -79.5% 7651 ? 7% softirqs.CPU100.RCU
37150 -78.8% 7864 ? 4% softirqs.CPU101.RCU
36237 -78.7% 7713 ? 4% softirqs.CPU102.RCU
36630 ? 2% -79.6% 7470 ? 4% softirqs.CPU103.RCU
36437 -79.3% 7555 ? 5% softirqs.CPU104.RCU
36242 ? 2% -81.0% 6875 ? 10% softirqs.CPU105.RCU
36592 -79.4% 7541 ? 3% softirqs.CPU106.RCU
36170 -78.2% 7885 ? 6% softirqs.CPU107.RCU
37424 ? 3% -80.9% 7155 ? 9% softirqs.CPU108.RCU
36434 -78.8% 7740 ? 5% softirqs.CPU109.RCU
36767 -77.7% 8193 ? 4% softirqs.CPU11.RCU
36384 -79.1% 7609 ? 4% softirqs.CPU110.RCU
36819 -79.9% 7394 ? 3% softirqs.CPU111.RCU
36302 -77.8% 8076 ? 8% softirqs.CPU112.RCU
36874 -79.1% 7691 ? 7% softirqs.CPU113.RCU
37614 ? 3% -80.0% 7537 ? 4% softirqs.CPU114.RCU
36309 -78.6% 7784 ? 3% softirqs.CPU115.RCU
36019 -78.9% 7607 ? 3% softirqs.CPU116.RCU
36325 -78.1% 7964 ? 5% softirqs.CPU117.RCU
36251 -79.0% 7605 ? 6% softirqs.CPU118.RCU
36363 -78.8% 7718 ? 6% softirqs.CPU119.RCU
36549 -73.0% 9853 ? 11% softirqs.CPU12.RCU
36629 -77.1% 8390 ? 5% softirqs.CPU120.RCU
36826 -76.3% 8724 ? 3% softirqs.CPU121.RCU
36974 -77.3% 8399 ? 7% softirqs.CPU122.RCU
36750 -77.2% 8375 ? 7% softirqs.CPU123.RCU
36763 -77.6% 8219 ? 6% softirqs.CPU124.RCU
36632 -76.4% 8651 ? 3% softirqs.CPU125.RCU
36767 ? 2% -77.4% 8302 ? 3% softirqs.CPU126.RCU
36987 -75.5% 9069 ? 2% softirqs.CPU127.RCU
36897 -77.9% 8164 ? 4% softirqs.CPU128.RCU
36802 -77.5% 8293 ? 3% softirqs.CPU129.RCU
37345 -78.4% 8060 ? 6% softirqs.CPU13.RCU
36961 -78.4% 7965 ? 3% softirqs.CPU130.RCU
36821 -76.9% 8510 ? 6% softirqs.CPU131.RCU
37076 -77.1% 8474 ? 11% softirqs.CPU132.RCU
36790 -78.1% 8065 ? 7% softirqs.CPU133.RCU
36815 -77.3% 8341 ? 7% softirqs.CPU134.RCU
36919 -77.1% 8444 ? 6% softirqs.CPU135.RCU
36779 -76.5% 8631 ? 7% softirqs.CPU136.RCU
36743 -77.2% 8380 ? 5% softirqs.CPU137.RCU
36757 -77.7% 8182 ? 7% softirqs.CPU138.RCU
36672 -77.3% 8309 ? 10% softirqs.CPU139.RCU
36653 -78.0% 8047 ? 4% softirqs.CPU14.RCU
36633 -78.0% 8061 ? 7% softirqs.CPU140.RCU
36514 -77.9% 8077 ? 9% softirqs.CPU141.RCU
36639 -76.7% 8537 ? 7% softirqs.CPU142.RCU
37114 -77.2% 8447 ? 4% softirqs.CPU143.RCU
36127 -76.2% 8598 ? 5% softirqs.CPU144.RCU
36113 -75.5% 8833 ? 10% softirqs.CPU145.RCU
35878 -75.7% 8730 ? 8% softirqs.CPU146.RCU
35775 -76.5% 8418 ? 6% softirqs.CPU147.RCU
35833 -77.1% 8200 ? 10% softirqs.CPU148.RCU
36232 -77.7% 8066 ? 7% softirqs.CPU149.RCU
36878 ? 2% -77.7% 8216 ? 8% softirqs.CPU15.RCU
36296 ? 2% -76.8% 8434 ? 8% softirqs.CPU150.RCU
36273 ? 2% -76.3% 8608 ? 9% softirqs.CPU151.RCU
36330 -76.3% 8613 ? 9% softirqs.CPU152.RCU
36127 -77.5% 8143 ? 6% softirqs.CPU153.RCU
35627 -76.8% 8262 ? 5% softirqs.CPU154.RCU
36544 -77.0% 8396 ? 8% softirqs.CPU155.RCU
36603 -77.8% 8120 ? 9% softirqs.CPU156.RCU
35955 ? 2% -77.0% 8286 ? 7% softirqs.CPU157.RCU
36401 -77.2% 8309 ? 8% softirqs.CPU158.RCU
35992 -77.0% 8273 ? 12% softirqs.CPU159.RCU
36719 -77.5% 8258 ? 4% softirqs.CPU16.RCU
36245 -76.9% 8366 ? 9% softirqs.CPU160.RCU
36427 -75.2% 9027 ? 12% softirqs.CPU161.RCU
36674 -75.6% 8932 ? 12% softirqs.CPU162.RCU
37059 -77.2% 8440 ? 9% softirqs.CPU163.RCU
36414 -76.1% 8713 ? 9% softirqs.CPU164.RCU
36814 -76.2% 8757 ? 7% softirqs.CPU165.RCU
36797 -77.7% 8194 ? 7% softirqs.CPU166.RCU
36779 ? 2% -76.2% 8768 ? 5% softirqs.CPU167.RCU
36322 ? 2% -78.2% 7915 ? 12% softirqs.CPU168.RCU
36036 -77.6% 8089 ? 7% softirqs.CPU169.RCU
36888 -77.5% 8311 ? 4% softirqs.CPU17.RCU
36250 ? 2% -79.1% 7578 ? 4% softirqs.CPU170.RCU
36123 -78.1% 7918 ? 7% softirqs.CPU171.RCU
36330 -77.1% 8306 ? 7% softirqs.CPU172.RCU
36640 -77.3% 8332 ? 8% softirqs.CPU173.RCU
36449 ? 2% -78.3% 7913 ? 4% softirqs.CPU174.RCU
36338 -77.0% 8342 ? 5% softirqs.CPU175.RCU
36151 ? 2% -78.5% 7776 ? 4% softirqs.CPU176.RCU
36145 -78.2% 7870 ? 5% softirqs.CPU177.RCU
35819 ? 2% -78.6% 7655 ? 4% softirqs.CPU178.RCU
35742 -79.0% 7515 ? 6% softirqs.CPU179.RCU
36950 -78.1% 8094 softirqs.CPU18.RCU
35624 -77.5% 8020 ? 5% softirqs.CPU180.RCU
36120 -79.1% 7535 ? 6% softirqs.CPU181.RCU
36321 -78.7% 7733 ? 3% softirqs.CPU182.RCU
35597 -77.0% 8196 ? 10% softirqs.CPU183.RCU
36132 ? 2% -77.8% 8018 ? 9% softirqs.CPU184.RCU
36163 ? 2% -78.6% 7730 ? 4% softirqs.CPU185.RCU
35733 -78.7% 7601 ? 4% softirqs.CPU186.RCU
36153 ? 2% -79.3% 7472 ? 5% softirqs.CPU187.RCU
36185 -77.9% 7998 ? 10% softirqs.CPU188.RCU
36420 ? 2% -78.0% 8026 ? 5% softirqs.CPU189.RCU
37188 -77.4% 8386 ? 5% softirqs.CPU19.RCU
36052 ? 2% -78.4% 7785 ? 6% softirqs.CPU190.RCU
36624 ? 3% -79.3% 7586 ? 5% softirqs.CPU191.RCU
37120 -78.1% 8144 ? 2% softirqs.CPU2.RCU
36746 -77.9% 8116 ? 8% softirqs.CPU20.RCU
37604 -78.2% 8185 ? 3% softirqs.CPU21.RCU
37218 ? 2% -79.0% 7820 ? 5% softirqs.CPU22.RCU
36829 -78.5% 7931 ? 4% softirqs.CPU23.RCU
37515 ? 2% -74.2% 9685 ? 4% softirqs.CPU24.RCU
37392 -75.9% 9019 ? 3% softirqs.CPU25.RCU
37491 -74.9% 9411 ? 3% softirqs.CPU26.RCU
37650 -76.5% 8843 ? 4% softirqs.CPU27.RCU
38138 -77.1% 8721 ? 7% softirqs.CPU28.RCU
37577 -76.4% 8851 ? 6% softirqs.CPU29.RCU
37042 -73.7% 9726 ? 23% softirqs.CPU3.RCU
37544 -76.3% 8893 ? 6% softirqs.CPU30.RCU
38053 -77.2% 8677 ? 5% softirqs.CPU31.RCU
37222 -76.6% 8719 ? 8% softirqs.CPU32.RCU
37838 -78.1% 8274 ? 4% softirqs.CPU33.RCU
36804 ? 2% -76.5% 8640 ? 3% softirqs.CPU34.RCU
37027 -76.9% 8548 ? 3% softirqs.CPU35.RCU
37487 -76.3% 8894 ? 5% softirqs.CPU36.RCU
37491 -77.2% 8566 ? 5% softirqs.CPU37.RCU
37447 ? 2% -76.6% 8767 ? 6% softirqs.CPU38.RCU
36926 -76.8% 8562 ? 4% softirqs.CPU39.RCU
37159 -74.4% 9524 ? 23% softirqs.CPU4.RCU
37474 ? 2% -75.1% 9337 ? 5% softirqs.CPU40.RCU
36979 -77.0% 8516 ? 5% softirqs.CPU41.RCU
37649 -75.3% 9290 ? 7% softirqs.CPU42.RCU
37569 -76.1% 8988 ? 6% softirqs.CPU43.RCU
37125 -75.9% 8930 ? 8% softirqs.CPU44.RCU
37251 -76.6% 8723 ? 3% softirqs.CPU45.RCU
37746 -76.1% 9012 ? 7% softirqs.CPU46.RCU
37270 -76.9% 8621 ? 6% softirqs.CPU47.RCU
37328 -75.3% 9236 ? 3% softirqs.CPU48.RCU
36703 -76.4% 8669 ? 7% softirqs.CPU49.RCU
37053 -75.1% 9236 ? 20% softirqs.CPU5.RCU
36807 -76.5% 8656 ? 5% softirqs.CPU50.RCU
36488 -75.2% 9050 ? 6% softirqs.CPU51.RCU
36664 -76.5% 8605 ? 6% softirqs.CPU52.RCU
36710 -75.7% 8915 ? 8% softirqs.CPU53.RCU
36884 -76.3% 8735 ? 5% softirqs.CPU54.RCU
36527 -77.0% 8415 ? 5% softirqs.CPU55.RCU
37519 -77.5% 8440 ? 5% softirqs.CPU56.RCU
36882 -77.1% 8440 ? 6% softirqs.CPU57.RCU
36808 -76.6% 8630 ? 7% softirqs.CPU58.RCU
36680 -75.7% 8926 ? 5% softirqs.CPU59.RCU
36417 -77.2% 8321 ? 5% softirqs.CPU6.RCU
36783 -76.7% 8558 ? 7% softirqs.CPU60.RCU
36772 -76.9% 8490 ? 8% softirqs.CPU61.RCU
36673 -76.5% 8634 ? 7% softirqs.CPU62.RCU
37173 -76.9% 8576 ? 7% softirqs.CPU63.RCU
37344 -76.3% 8854 ? 7% softirqs.CPU64.RCU
36776 -75.4% 9059 ? 10% softirqs.CPU65.RCU
36794 -75.1% 9170 ? 14% softirqs.CPU66.RCU
36727 -74.9% 9200 ? 9% softirqs.CPU67.RCU
36881 -76.0% 8854 ? 4% softirqs.CPU68.RCU
37248 -76.0% 8924 ? 6% softirqs.CPU69.RCU
36853 -78.7% 7849 ? 3% softirqs.CPU7.RCU
37149 -75.0% 9269 ? 5% softirqs.CPU70.RCU
36727 -75.5% 9003 ? 7% softirqs.CPU71.RCU
36539 ? 3% -75.4% 8993 ? 4% softirqs.CPU72.RCU
36858 ? 3% -77.1% 8439 ? 7% softirqs.CPU73.RCU
37313 ? 3% -77.8% 8302 softirqs.CPU74.RCU
36604 ? 2% -77.8% 8122 ? 4% softirqs.CPU75.RCU
36784 -77.8% 8184 ? 4% softirqs.CPU76.RCU
36613 -77.5% 8227 ? 6% softirqs.CPU77.RCU
36878 -76.5% 8670 ? 6% softirqs.CPU78.RCU
36555 -78.2% 7976 ? 7% softirqs.CPU79.RCU
37264 -78.9% 7876 softirqs.CPU8.RCU
36651 -78.0% 8072 ? 6% softirqs.CPU80.RCU
36430 -78.0% 8006 ? 4% softirqs.CPU81.RCU
36702 ? 2% -77.7% 8173 ? 9% softirqs.CPU82.RCU
36562 -76.7% 8529 ? 5% softirqs.CPU83.RCU
36679 -77.5% 8240 ? 6% softirqs.CPU84.RCU
36649 -77.1% 8397 ? 5% softirqs.CPU85.RCU
36732 -77.8% 8157 ? 3% softirqs.CPU86.RCU
36837 ? 2% -77.6% 8253 ? 4% softirqs.CPU87.RCU
36234 ? 2% -76.7% 8448 ? 8% softirqs.CPU88.RCU
36122 ? 2% -76.8% 8390 ? 3% softirqs.CPU89.RCU
38312 ? 5% -72.4% 10584 ? 9% softirqs.CPU9.RCU
36272 -77.3% 8231 ? 5% softirqs.CPU90.RCU
36411 -76.6% 8512 ? 2% softirqs.CPU91.RCU
36493 -77.6% 8180 ? 8% softirqs.CPU92.RCU
36389 ? 2% -77.4% 8227 ? 8% softirqs.CPU93.RCU
36848 ? 2% -77.8% 8173 ? 6% softirqs.CPU94.RCU
36528 ? 3% -77.2% 8316 ? 4% softirqs.CPU95.RCU
36724 ? 2% -77.5% 8270 ? 6% softirqs.CPU96.RCU
35835 -78.0% 7876 ? 8% softirqs.CPU97.RCU
39906 ? 5% -78.1% 8758 ? 20% softirqs.CPU98.RCU
37384 ? 2% -76.5% 8801 ? 20% softirqs.CPU99.RCU
7051654 -77.2% 1606631 softirqs.RCU



will-it-scale.per_thread_ops

13000 +-+-----------------------------------------------------------------+
| O O O |
12000 O-O O O O |
| O O O O |
11000 +-+ O O O |
| |
10000 +-+ O O |
| |
9000 +-+ O O O O O |
| O O |
8000 +-+ |
| |
7000 +-+..+. .+. .+. .+..+. .+.+. .+. .+..+.+. .|
| +.+.+. + +..+.+.+ +.+..+ +. +.+ +.+..+ |
6000 +-+-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-csl-2ap4: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2ap4/unlink2/will-it-scale/0x500002b

commit:
2f65452ad7 ("locking/qspinlock: Refactor the qspinlock slow path")
ad3836e30e ("locking/qspinlock: Introduce CNA into the slow path of qspinlock")

2f65452ad747deeb ad3836e30e6f5f5e97867707b57
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:5 kmsg.ipmi_si_dmi-ipmi-si.#:IRQ_index#not_found
:4 25% 1:5 dmesg.WARNING:at_ip__slab_free/0x
2:4 -50% :5 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
3589 +200.6% 10789 ± 28% will-it-scale.per_thread_ops
301.80 +51.6% 457.59 ± 25% will-it-scale.time.elapsed_time
301.80 +51.6% 457.59 ± 25% will-it-scale.time.elapsed_time.max
538317 ± 19% -49.8% 270260 ± 19% will-it-scale.time.involuntary_context_switches
18974 -4.5% 18120 will-it-scale.time.percent_of_cpu_this_job_got
57208 +44.4% 82631 ± 24% will-it-scale.time.system_time
57.16 ± 12% +117.3% 124.19 ± 27% will-it-scale.time.user_time
689308 +200.5% 2071580 ± 28% will-it-scale.workload
2.967e+08 ± 3% +71.1% 5.077e+08 ± 12% cpuidle.C6.time
133491 ± 12% +148.4% 331599 ± 21% cpuidle.C6.usage
0.54 ± 10% +0.8 1.34 ± 30% mpstat.cpu.all.idle%
0.99 ± 17% +6.5 7.46 ± 35% mpstat.cpu.all.soft%
0.12 ± 10% +0.0 0.16 ± 2% mpstat.cpu.all.usr%
53283 ± 29% +153.3% 134950 ± 32% turbostat.C6
0.23 +143.5% 0.56 ± 25% turbostat.CPU%c1
60554120 +49.2% 90359967 ± 24% turbostat.IRQ
1385680 +70.8% 2366201 ± 31% vmstat.memory.cache
191.50 +38.8% 265.75 ± 16% vmstat.procs.r
3928 ± 17% -36.7% 2487 ± 8% vmstat.system.cs
395703 +56.9% 620682 ± 28% vmstat.system.in
14040232 ± 2% -11.7% 12403899 ± 8% meminfo.DirectMap2M
139666 +625.1% 1012761 ± 64% meminfo.KReclaimable
2976334 +185.5% 8495952 ± 47% meminfo.Memused
139666 +625.1% 1012761 ± 64% meminfo.SReclaimable
460844 +1005.1% 5092995 ± 66% meminfo.SUnreclaim
600511 +916.8% 6105757 ± 66% meminfo.Slab
12287 +823.6% 113477 ± 70% meminfo.max_used_kB
563285 ? 45% +1181.6% 7218926 ? 9% numa-numastat.node0.local_node
571064 ? 44% +1169.0% 7246937 ? 9% numa-numastat.node0.numa_hit
7784 ?172% +259.9% 28014 ? 19% numa-numastat.node0.other_node
303638 ? 24% +4426.1% 13742834 ? 47% numa-numastat.node1.local_node
334813 ? 22% +4011.2% 13764707 ? 47% numa-numastat.node1.numa_hit
392452 ? 43% +2457.2% 10035735 ? 49% numa-numastat.node2.local_node
415860 ? 40% +2318.1% 10055889 ? 49% numa-numastat.node2.numa_hit
608923 ? 35% +1556.8% 10088489 ? 48% numa-numastat.node3.local_node
640100 ? 34% +1479.7% 10111949 ? 47% numa-numastat.node3.numa_hit
4843522 -2.8% 4710146 ? 2% proc-vmstat.nr_dirty_background_threshold
9698888 -2.8% 9431809 ? 2% proc-vmstat.nr_dirty_threshold
48688478 -2.7% 47350797 ? 2% proc-vmstat.nr_free_pages
34909 +606.2% 246543 ? 63% proc-vmstat.nr_slab_reclaimable
115142 +974.2% 1236854 ? 66% proc-vmstat.nr_slab_unreclaimable
4202 ? 13% +55.1% 6516 ? 52% proc-vmstat.numa_hint_faults_local
1981117 ? 12% +1978.2% 41171805 ? 36% proc-vmstat.numa_hit
1887557 ? 13% +2076.3% 41078283 ? 37% proc-vmstat.numa_local
7997319 ? 22% +1966.2% 1.652e+08 ? 35% proc-vmstat.pgalloc_normal
1059857 -19.1% 857263 ? 9% proc-vmstat.pgfault
7913949 ? 22% +1982.9% 1.648e+08 ? 35% proc-vmstat.pgfree
356338 +1313.0% 5035198 ? 69% slabinfo.Acpi-Parse.active_objs
4882 +1313.3% 68999 ? 69% slabinfo.Acpi-Parse.active_slabs
356438 +1313.1% 5036995 ? 69% slabinfo.Acpi-Parse.num_objs
4882 +1313.3% 68999 ? 69% slabinfo.Acpi-Parse.num_slabs
326546 +1430.7% 4998350 ? 70% slabinfo.dentry.active_objs
7782 +1430.2% 119076 ? 69% slabinfo.dentry.active_slabs
326859 +1430.1% 5001249 ? 69% slabinfo.dentry.num_objs
7782 +1430.2% 119076 ? 69% slabinfo.dentry.num_slabs
4469 ? 6% +37.8% 6159 ? 18% slabinfo.eventpoll_pwq.active_objs
4469 ? 6% +37.8% 6159 ? 18% slabinfo.eventpoll_pwq.num_objs
219095 +2134.9% 4896580 ? 71% slabinfo.filp.active_objs
3450 +2119.1% 76558 ? 71% slabinfo.filp.active_slabs
220845 +2118.6% 4899769 ? 71% slabinfo.filp.num_objs
3450 +2119.1% 76558 ? 71% slabinfo.filp.num_slabs
111.50 ? 13% +96.4% 219.00 ? 29% slabinfo.nfs_read_data.active_objs
111.50 ? 13% +96.4% 219.00 ? 29% slabinfo.nfs_read_data.num_objs
183747 ? 2% +2552.9% 4874568 ? 71% slabinfo.shmem_inode_cache.active_objs
4159 ? 3% +2449.4% 106043 ? 71% slabinfo.shmem_inode_cache.active_slabs
191357 ? 3% +2449.2% 4878015 ? 71% slabinfo.shmem_inode_cache.num_objs
4159 ? 3% +2449.4% 106043 ? 71% slabinfo.shmem_inode_cache.num_slabs
719.25 ? 16% +27.6% 917.50 ? 10% slabinfo.skbuff_fclone_cache.active_objs
719.25 ? 16% +27.6% 917.50 ? 10% slabinfo.skbuff_fclone_cache.num_objs
6.00 ?173% +19462.5% 1173 ? 56% numa-meminfo.node0.Active(file)
39356 ? 14% +318.1% 164559 ? 50% numa-meminfo.node0.KReclaimable
808315 ? 8% +94.3% 1570721 ? 33% numa-meminfo.node0.MemUsed
39356 ? 14% +318.1% 164559 ? 50% numa-meminfo.node0.SReclaimable
116559 ? 4% +595.7% 810925 ? 49% numa-meminfo.node0.SUnreclaim
155916 ? 6% +525.6% 975485 ? 50% numa-meminfo.node0.Slab
37129 ? 15% +862.1% 357208 ? 73% numa-meminfo.node1.KReclaimable
11138 ? 12% -30.2% 7770 ? 17% numa-meminfo.node1.Mapped
702582 ? 9% +293.0% 2761456 ? 60% numa-meminfo.node1.MemUsed
37129 ? 15% +862.1% 357208 ? 73% numa-meminfo.node1.SReclaimable
117793 ? 7% +1447.1% 1822364 ? 75% numa-meminfo.node1.SUnreclaim
154922 ? 9% +1306.9% 2179573 ? 75% numa-meminfo.node1.Slab
28700 ? 17% +819.5% 263899 ? 81% numa-meminfo.node2.KReclaimable
7756 ? 28% +51.2% 11730 ? 15% numa-meminfo.node2.Mapped
723422 ? 10% +196.8% 2147227 ? 62% numa-meminfo.node2.MemUsed
28700 ? 17% +819.5% 263899 ? 81% numa-meminfo.node2.SReclaimable
104664 ? 6% +1155.0% 1313542 ? 86% numa-meminfo.node2.SUnreclaim
133365 ? 8% +1082.8% 1577442 ? 85% numa-meminfo.node2.Slab
34401 ? 24% +584.3% 235412 ? 70% numa-meminfo.node3.KReclaimable
741509 ? 11% +179.9% 2075506 ? 49% numa-meminfo.node3.MemUsed
34401 ? 24% +584.3% 235412 ? 70% numa-meminfo.node3.SReclaimable
121556 ? 14% +883.1% 1195072 ? 72% numa-meminfo.node3.SUnreclaim
155958 ? 16% +817.2% 1430484 ? 72% numa-meminfo.node3.Slab
1.50 ?173% +19450.0% 293.25 ? 56% numa-vmstat.node0.nr_active_file
15.50 ?173% +874.2% 151.00 ? 37% numa-vmstat.node0.nr_inactive_file
9842 ? 14% +289.4% 38325 ? 48% numa-vmstat.node0.nr_slab_reclaimable
29112 ? 4% +544.4% 187593 ? 47% numa-vmstat.node0.nr_slab_unreclaimable
1.50 ?173% +19450.0% 293.25 ? 56% numa-vmstat.node0.nr_zone_active_file
15.50 ?173% +874.2% 151.00 ? 37% numa-vmstat.node0.nr_zone_inactive_file
638578 ? 25% +510.3% 3897375 ? 8% numa-vmstat.node0.numa_hit
630696 ? 26% +513.7% 3870337 ? 8% numa-vmstat.node0.numa_local
9286 ? 15% +831.0% 86457 ? 72% numa-vmstat.node1.nr_slab_reclaimable
29415 ? 7% +1397.3% 440435 ? 74% numa-vmstat.node1.nr_slab_unreclaimable
571276 ? 22% +1089.9% 6797874 ? 41% numa-vmstat.node1.numa_hit
453323 ? 27% +1375.6% 6689282 ? 42% numa-vmstat.node1.numa_local
1974 ? 28% +52.4% 3009 ? 11% numa-vmstat.node2.nr_mapped
7177 ? 17% +797.9% 64450 ? 79% numa-vmstat.node2.nr_slab_reclaimable
26146 ? 6% +1124.7% 320206 ? 84% numa-vmstat.node2.nr_slab_unreclaimable
586126 ? 18% +804.9% 5303822 ? 48% numa-vmstat.node2.numa_hit
475643 ? 24% +992.6% 5197049 ? 49% numa-vmstat.node2.numa_local
156.00 ? 36% -78.2% 34.00 ?173% numa-vmstat.node3.nr_inactive_file
8603 ? 24% +577.7% 58303 ? 69% numa-vmstat.node3.nr_slab_reclaimable
30372 ? 14% +874.9% 296109 ? 71% numa-vmstat.node3.nr_slab_unreclaimable
156.00 ? 36% -78.2% 34.00 ?173% numa-vmstat.node3.nr_zone_inactive_file
668125 ? 21% +706.8% 5390506 ? 48% numa-vmstat.node3.numa_hit
549973 ? 26% +860.1% 5280260 ? 50% numa-vmstat.node3.numa_local
4.32 ? 5% +96.0% 8.47 ? 3% perf-stat.i.MPKI
1.317e+10 +9.7% 1.444e+10 perf-stat.i.branch-instructions
0.26 ? 2% +0.1 0.38 ? 5% perf-stat.i.branch-miss-rate%
34807513 ? 2% +51.2% 52616513 perf-stat.i.branch-misses
58.73 -46.0 12.76 ? 18% perf-stat.i.cache-miss-rate%
1.412e+08 ? 5% -55.8% 62468202 ? 13% perf-stat.i.cache-misses
2.405e+08 ? 5% +118.1% 5.245e+08 ? 3% perf-stat.i.cache-references
3875 ? 17% -49.3% 1963 ? 21% perf-stat.i.context-switches
10.63 -9.2% 9.65 ? 4% perf-stat.i.cpi
4203 ? 5% +226.2% 13713 ? 9% perf-stat.i.cycles-between-cache-misses
1555417 ? 6% +165.2% 4125218 ? 57% perf-stat.i.dTLB-load-misses
1.449e+10 +16.5% 1.688e+10 perf-stat.i.dTLB-loads
0.01 ? 4% +0.0 0.01 ? 40% perf-stat.i.dTLB-store-miss-rate%
181513 ? 5% +236.7% 611231 ? 41% perf-stat.i.dTLB-store-misses
2.498e+09 +86.8% 4.666e+09 perf-stat.i.dTLB-stores
15294329 ? 2% +71.4% 26213317 ? 2% perf-stat.i.iTLB-load-misses
115030 ? 12% +262.9% 417416 ? 31% perf-stat.i.iTLB-loads
5.563e+10 +13.7% 6.327e+10 perf-stat.i.instructions
3643 ? 2% -33.6% 2417 ? 3% perf-stat.i.instructions-per-iTLB-miss
0.09 +14.3% 0.11 perf-stat.i.ipc
3280 -34.4% 2151 ? 22% perf-stat.i.minor-faults
92.85 -49.9 42.93 ? 2% perf-stat.i.node-load-miss-rate%
35667342 ? 5% -93.3% 2372307 ? 15% perf-stat.i.node-load-misses
2735742 ? 7% +87.8% 5138809 ? 28% perf-stat.i.node-loads
89.40 -75.2 14.24 ? 23% perf-stat.i.node-store-miss-rate%
18852127 ? 6% -90.6% 1763129 ? 11% perf-stat.i.node-store-misses
2248299 ? 8% +328.4% 9632213 ? 6% perf-stat.i.node-stores
3280 -34.3% 2154 ? 22% perf-stat.i.page-faults
4.32 ? 5% +90.5% 8.24 ? 3% perf-stat.overall.MPKI
0.26 ? 2% +0.1 0.36 perf-stat.overall.branch-miss-rate%
58.71 -48.7 10.02 ? 3% perf-stat.overall.cache-miss-rate%
10.63 -14.5% 9.09 perf-stat.overall.cpi
4200 ? 5% +162.6% 11029 perf-stat.overall.cycles-between-cache-misses
0.01 ? 4% +0.0 0.01 ? 42% perf-stat.overall.dTLB-store-miss-rate%
3639 ? 2% -32.8% 2444 ? 2% perf-stat.overall.instructions-per-iTLB-miss
0.09 +16.9% 0.11 perf-stat.overall.ipc
92.84 -60.7 32.17 ? 27% perf-stat.overall.node-load-miss-rate%
89.33 -76.8 12.49 ? 35% perf-stat.overall.node-store-miss-rate%
24046863 -40.3% 14350058 ? 2% perf-stat.overall.path-length
1.312e+10 +12.5% 1.477e+10 perf-stat.ps.branch-instructions
34690150 ? 2% +51.4% 52532984 perf-stat.ps.branch-misses
1.408e+08 ? 5% -62.1% 53302394 perf-stat.ps.cache-misses
2.397e+08 ? 5% +122.2% 5.326e+08 ? 3% perf-stat.ps.cache-references
3863 ? 17% -56.5% 1678 ? 36% perf-stat.ps.context-switches
1550097 ? 6% +182.3% 4376644 ? 59% perf-stat.ps.dTLB-load-misses
1.444e+10 +19.9% 1.732e+10 perf-stat.ps.dTLB-loads
180902 ? 5% +255.2% 642581 ? 43% perf-stat.ps.dTLB-store-misses
2.489e+09 +90.8% 4.75e+09 ? 2% perf-stat.ps.dTLB-stores
15242824 ? 2% +73.6% 26467618 ? 2% perf-stat.ps.iTLB-load-misses
114701 ? 12% +159.5% 297692 ? 10% perf-stat.ps.iTLB-loads
5.544e+10 +16.6% 6.467e+10 perf-stat.ps.instructions
3269 -42.6% 1876 ? 36% perf-stat.ps.minor-faults
35546821 ? 5% -94.8% 1856314 ? 37% perf-stat.ps.node-load-misses
2726533 ? 7% +38.2% 3768365 ? 4% perf-stat.ps.node-loads
18788461 ? 6% -92.7% 1365043 ? 33% perf-stat.ps.node-store-misses
2240782 ? 8% +329.8% 9631092 ? 6% perf-stat.ps.node-stores
3269 -42.6% 1876 ? 36% perf-stat.ps.page-faults
1.657e+13 +78.0% 2.951e+13 ? 26% perf-stat.total.instructions
143828 ? 52% +7207.2% 10509827 ? 66% sched_debug.cfs_rq:/.MIN_vruntime.avg
8894516 ? 85% +296.8% 35290213 ? 39% sched_debug.cfs_rq:/.MIN_vruntime.max
1032105 ? 64% +1063.2% 12005835 ? 54% sched_debug.cfs_rq:/.MIN_vruntime.stddev
148958 +46.3% 217939 ? 25% sched_debug.cfs_rq:/.exec_clock.avg
150094 +47.0% 220673 ? 25% sched_debug.cfs_rq:/.exec_clock.max
360.31 ? 2% +386.4% 1752 ? 43% sched_debug.cfs_rq:/.exec_clock.stddev
11523 ? 23% +1707.6% 208286 ? 56% sched_debug.cfs_rq:/.load.avg
469761 ? 35% +114.2% 1006039 ? 29% sched_debug.cfs_rq:/.load.max
50657 ? 10% +414.3% 260520 ? 44% sched_debug.cfs_rq:/.load.stddev
17.29 ± 7% +1485.5% 274.11 ± 40% sched_debug.cfs_rq:/.load_avg.avg
348.46 ± 24% +188.9% 1006 ± 22% sched_debug.cfs_rq:/.load_avg.max
36.02 ± 11% +638.7% 266.12 ± 34% sched_debug.cfs_rq:/.load_avg.stddev
143828 ± 52% +7207.2% 10509827 ± 66% sched_debug.cfs_rq:/.max_vruntime.avg
8894519 ± 85% +296.8% 35290213 ± 39% sched_debug.cfs_rq:/.max_vruntime.max
1032105 ± 64% +1063.2% 12005835 ± 54% sched_debug.cfs_rq:/.max_vruntime.stddev
222058 ± 9% +274.2% 831019 ± 41% sched_debug.cfs_rq:/.min_vruntime.stddev
0.85 +26.1% 1.07 ± 12% sched_debug.cfs_rq:/.nr_running.avg
0.08 ± 14% +247.9% 0.27 ± 38% sched_debug.cfs_rq:/.nr_running.stddev
78.88 ± 75% +446.7% 431.21 ± 44% sched_debug.cfs_rq:/.nr_spread_over.min
5.40 ± 12% +3319.4% 184.60 ± 63% sched_debug.cfs_rq:/.runnable_load_avg.avg
164.75 ± 51% +352.3% 745.09 ± 37% sched_debug.cfs_rq:/.runnable_load_avg.max
12.43 ± 52% +1615.7% 213.25 ± 51% sched_debug.cfs_rq:/.runnable_load_avg.stddev
11501 ± 24% +1710.7% 208250 ± 56% sched_debug.cfs_rq:/.runnable_weight.avg
468967 ± 35% +114.4% 1005509 ± 29% sched_debug.cfs_rq:/.runnable_weight.max
50599 ± 10% +414.8% 260487 ± 44% sched_debug.cfs_rq:/.runnable_weight.stddev
234855 ±122% -364.0% -619990 sched_debug.cfs_rq:/.spread0.avg
490058 ± 46% +64.6% 806833 ± 44% sched_debug.cfs_rq:/.spread0.max
-1117727 +278.5% -4230958 sched_debug.cfs_rq:/.spread0.min
222024 ± 9% +274.3% 830997 ± 41% sched_debug.cfs_rq:/.spread0.stddev
1190 ± 9% +27.5% 1517 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.max
91.17 ± 42% +168.4% 244.74 ± 29% sched_debug.cfs_rq:/.util_est_enqueued.stddev
932000 +18.3% 1102425 ± 3% sched_debug.cpu.avg_idle.avg
1268099 ± 22% +225.5% 4127538 ± 41% sched_debug.cpu.avg_idle.max
94372 ± 17% +292.9% 370789 ± 40% sched_debug.cpu.avg_idle.stddev
188367 +37.4% 258907 ± 21% sched_debug.cpu.clock.avg
188401 +37.5% 258960 ± 21% sched_debug.cpu.clock.max
188334 +37.4% 258803 ± 21% sched_debug.cpu.clock.min
20.76 ± 10% +87.6% 38.94 ± 18% sched_debug.cpu.clock.stddev
188367 +37.4% 258907 ± 21% sched_debug.cpu.clock_task.avg
188401 +37.5% 258960 ± 21% sched_debug.cpu.clock_task.max
188334 +37.4% 258803 ± 21% sched_debug.cpu.clock_task.min
20.76 ± 10% +87.6% 38.94 ± 18% sched_debug.cpu.clock_task.stddev
7529 -13.8% 6492 ± 6% sched_debug.cpu.curr->pid.max
546445 ± 8% +157.5% 1407312 ± 37% sched_debug.cpu.max_idle_balance_cost.max
4750 ±107% +1740.9% 87449 ± 53% sched_debug.cpu.max_idle_balance_cost.stddev
0.86 +38.4% 1.19 ± 15% sched_debug.cpu.nr_running.avg
2.04 ± 8% +45.6% 2.97 ± 14% sched_debug.cpu.nr_running.max
0.15 ± 9% +170.6% 0.41 ± 29% sched_debug.cpu.nr_running.stddev
5561 ± 6% -28.2% 3994 ± 4% sched_debug.cpu.nr_switches.avg
48679 ± 18% -60.7% 19152 ± 2% sched_debug.cpu.nr_switches.max
3158 ± 23% -26.9% 2308 ± 4% sched_debug.cpu.nr_switches.min
4223 ± 9% -53.6% 1958 sched_debug.cpu.nr_switches.stddev
44292 ± 19% -72.7% 12088 ± 19% sched_debug.cpu.sched_count.max
3803 ± 11% -61.2% 1476 ± 16% sched_debug.cpu.sched_count.stddev
41.56 ± 11% +38.2% 57.42 ± 6% sched_debug.cpu.sched_goidle.avg
1686 ± 10% -44.8% 931.12 ± 9% sched_debug.cpu.ttwu_count.avg
21371 ± 19% -76.5% 5022 ± 17% sched_debug.cpu.ttwu_count.max
1814 ± 10% -64.5% 643.57 ± 8% sched_debug.cpu.ttwu_count.stddev
1576 ± 11% -49.3% 799.95 ± 13% sched_debug.cpu.ttwu_local.avg
21206 ± 19% -79.8% 4293 ± 14% sched_debug.cpu.ttwu_local.max
1764 ± 11% -69.4% 539.72 ± 6% sched_debug.cpu.ttwu_local.stddev
188334 +37.4% 258803 ± 21% sched_debug.cpu_clk
184280 +38.2% 254675 ± 21% sched_debug.ktime
0.01 -75.0% 0.00 ±173% sched_debug.rt_rq:/.rt_nr_migratory.stddev
0.01 -75.0% 0.00 ±173% sched_debug.rt_rq:/.rt_nr_running.stddev
0.48 ± 65% -61.2% 0.19 ±152% sched_debug.rt_rq:/.rt_time.avg
91.72 ± 65% -61.2% 35.56 ±152% sched_debug.rt_rq:/.rt_time.max
6.60 ± 65% -61.2% 2.56 ±152% sched_debug.rt_rq:/.rt_time.stddev
194032 +36.3% 264511 ± 20% sched_debug.sched_clk
21.26 ± 79% -21.3 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.evict.do_unlinkat.do_syscall_64
21.15 ± 79% -21.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add.new_inode.shmem_get_inode
13.11 ±129% -13.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
13.10 ±129% -13.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.inode_doinit_with_dentry.security_d_instantiate.d_instantiate
12.07 ± 96% -12.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__close_fd.__x64_sys_close.do_syscall_64
12.01 ± 97% -12.0 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__alloc_fd.do_sys_open.do_syscall_64
0.76 ± 15% +2.2 2.94 ± 80% perf-profile.calltrace.cycles-pp.file_free_rcu.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd
1.18 ± 16% +4.0 5.20 ± 83% perf-profile.calltrace.cycles-pp.rcu_do_batch.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn
1.19 ± 16% +4.0 5.21 ± 83% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.19 ± 16% +4.0 5.21 ± 83% perf-profile.calltrace.cycles-pp.rcu_core.__softirqentry_text_start.run_ksoftirqd.smpboot_thread_fn.kthread
1.19 ± 16% +4.0 5.21 ± 83% perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
1.20 ± 16% +4.0 5.23 ± 83% perf-profile.calltrace.cycles-pp.ret_from_fork
1.20 ± 16% +4.0 5.23 ± 83% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.19 ± 16% +4.0 5.23 ± 83% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.00 +6.9 6.88 ± 45% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.selinux_inode_free_security.security_inode_free.__destroy_inode
0.00 +7.1 7.10 ± 45% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.inode_doinit_with_dentry.security_d_instantiate.d_instantiate
0.00 +11.9 11.93 ± 24% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.evict.do_unlinkat.do_syscall_64
0.00 +13.2 13.19 ± 25% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.inode_sb_list_add.new_inode.shmem_get_inode
0.00 +23.7 23.68 ± 13% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.__alloc_fd.do_sys_open.do_syscall_64
0.00 +25.1 25.07 ± 14% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock.__close_fd.__x64_sys_close.do_syscall_64
93.29 -93.3 0.00 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
94.16 -4.7 89.47 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
98.68 -4.1 94.60 ± 4% perf-profile.children.cycles-pp.do_syscall_64
98.69 -4.1 94.61 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.21 ± 6% -0.1 0.11 ± 8% perf-profile.children.cycles-pp.__mnt_want_write
0.23 ± 6% -0.1 0.13 ± 6% perf-profile.children.cycles-pp.mnt_want_write
0.09 -0.0 0.04 ± 58% perf-profile.children.cycles-pp.find_next_zero_bit
0.11 ± 9% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.__list_del_entry_valid
0.15 ± 7% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.13 ± 10% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.timestamp_truncate
0.12 ± 8% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.11 ± 9% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.fsnotify
0.09 ± 5% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.__sb_start_write
0.12 ± 5% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.shmem_unlink
0.18 ± 2% -0.0 0.16 ± 2% perf-profile.children.cycles-pp.do_dentry_open
0.14 ± 3% +0.0 0.15 perf-profile.children.cycles-pp.d_alloc_parallel
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.filp_close
0.07 +0.0 0.09 ± 5% perf-profile.children.cycles-pp.simple_lookup
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp._cond_resched
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__d_alloc
0.06 ± 7% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.security_inode_unlink
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__fput
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.may_link
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.07 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.d_alloc
0.05 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__x64_sys_unlink
0.06 +0.0 0.09 ± 4% perf-profile.children.cycles-pp.strncpy_from_user
0.06 ± 11% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.05 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.lookup_fast
0.29 ± 7% +0.0 0.33 ± 2% perf-profile.children.cycles-pp.new_inode_pseudo
0.06 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.walk_component
0.04 ± 58% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_inode_alloc
0.10 ± 10% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.task_work_run
0.03 ±100% +0.0 0.07 perf-profile.children.cycles-pp.__inode_security_revalidate
0.21 ± 6% +0.0 0.26 ± 3% perf-profile.children.cycles-pp.alloc_inode
0.09 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.getname_flags
0.11 ± 9% +0.1 0.16 ± 5% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__dentry_kill
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__check_object_size
0.00 +0.1 0.05 perf-profile.children.cycles-pp.complete_walk
0.06 ± 11% +0.1 0.11 ± 3% perf-profile.children.cycles-pp.shmem_alloc_inode
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.00 +0.1 0.05 ± 9% perf-profile.children.cycles-pp.new_slab
0.00 +0.1 0.06 ± 9% perf-profile.children.cycles-pp.shmem_undo_range
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.d_add
0.10 +0.1 0.17 ± 5% perf-profile.children.cycles-pp.dput
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__slab_alloc
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.___slab_alloc
0.34 ± 2% +0.1 0.43 ± 20% perf-profile.children.cycles-pp.vfs_unlink
0.15 ± 7% +0.1 0.26 ± 3% perf-profile.children.cycles-pp.kmem_cache_alloc
0.15 ± 8% +0.1 0.27 ± 19% perf-profile.children.cycles-pp.selinux_inode_permission
0.15 ± 9% +0.1 0.28 ± 18% perf-profile.children.cycles-pp.security_inode_permission
0.23 +0.1 0.38 ± 36% perf-profile.children.cycles-pp.__alloc_file
0.27 +0.2 0.42 ± 32% perf-profile.children.cycles-pp.alloc_empty_file
0.00 +0.3 0.25 perf-profile.children.cycles-pp.cna_scan_main_queue
0.04 ± 58% +0.3 0.30 ± 79% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.26 ± 6% +0.3 0.56 ± 46% perf-profile.children.cycles-pp.path_parentat
0.27 ± 6% +0.3 0.58 ± 44% perf-profile.children.cycles-pp.filename_parentat
0.42 ± 8% +0.4 0.80 ± 20% perf-profile.children.cycles-pp.irq_exit
1.04 ± 15% +0.4 1.46 ± 14% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.08 ± 16% +0.4 1.53 ± 11% perf-profile.children.cycles-pp.apic_timer_interrupt
0.48 ± 5% +0.5 1.02 ± 51% perf-profile.children.cycles-pp.link_path_walk
0.45 ± 14% +1.3 1.77 ± 67% perf-profile.children.cycles-pp.__slab_free
0.49 ± 13% +1.7 2.15 ± 72% perf-profile.children.cycles-pp.kmem_cache_free
1.02 ± 11% +2.4 3.43 ± 67% perf-profile.children.cycles-pp.file_free_rcu
1.19 ± 16% +4.0 5.21 ± 83% perf-profile.children.cycles-pp.run_ksoftirqd
1.20 ± 16% +4.0 5.23 ± 83% perf-profile.children.cycles-pp.ret_from_fork
1.20 ± 16% +4.0 5.23 ± 83% perf-profile.children.cycles-pp.kthread
1.19 ± 16% +4.0 5.23 ± 83% perf-profile.children.cycles-pp.smpboot_thread_fn
1.58 ± 12% +4.4 5.98 ± 69% perf-profile.children.cycles-pp.rcu_do_batch
1.58 ± 12% +4.4 5.99 ± 69% perf-profile.children.cycles-pp.rcu_core
1.60 ± 12% +4.4 6.01 ± 69% perf-profile.children.cycles-pp.__softirqentry_text_start
0.00 +88.7 88.72 ± 5% perf-profile.children.cycles-pp.__cna_queued_spin_lock_slowpath
92.36 -92.4 0.00 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.85 ± 2% -0.1 0.74 perf-profile.self.cycles-pp._raw_spin_lock
0.21 ± 6% -0.1 0.11 ± 8% perf-profile.self.cycles-pp.__mnt_want_write
0.11 ± 9% -0.1 0.06 ± 20% perf-profile.self.cycles-pp.__list_del_entry_valid
0.08 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__sb_start_write
0.09 ± 4% -0.1 0.04 ± 57% perf-profile.self.cycles-pp.find_next_zero_bit
0.11 ± 6% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__destroy_inode
0.15 ± 7% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.12 ± 8% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 10% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.inode_init_always
0.13 ± 8% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.timestamp_truncate
0.11 ± 6% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.fsnotify
0.10 ± 11% -0.0 0.06 perf-profile.self.cycles-pp.shmem_get_inode
0.08 ± 6% -0.0 0.06 perf-profile.self.cycles-pp.__alloc_fd
0.06 ± 11% +0.0 0.08 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.05 ± 8% +0.0 0.08 perf-profile.self.cycles-pp.kmem_cache_alloc
0.06 ± 7% +0.0 0.09 perf-profile.self.cycles-pp.___might_sleep
0.00 +0.1 0.05 perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.link_path_walk
0.09 ± 13% +0.1 0.18 ± 32% perf-profile.self.cycles-pp.selinux_inode_permission
0.00 +0.1 0.09 ± 49% perf-profile.self.cycles-pp.kmem_cache_free
0.17 ± 2% +0.1 0.29 ± 45% perf-profile.self.cycles-pp.__alloc_file
0.00 +0.2 0.25 ± 3% perf-profile.self.cycles-pp.cna_scan_main_queue
0.04 ± 58% +0.3 0.30 ± 78% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.45 ± 14% +1.3 1.76 ± 67% perf-profile.self.cycles-pp.__slab_free
1.01 ± 11% +2.4 3.41 ± 67% perf-profile.self.cycles-pp.file_free_rcu
0.00 +87.1 87.10 ± 5% perf-profile.self.cycles-pp.__cna_queued_spin_lock_slowpath
601350 +51.9% 913379 ± 25% interrupts.CPU0.LOC:Local_timer_interrupts
601046 +51.9% 912961 ± 25% interrupts.CPU1.LOC:Local_timer_interrupts
601112 +51.9% 913326 ± 25% interrupts.CPU10.LOC:Local_timer_interrupts
601067 +51.9% 913245 ± 25% interrupts.CPU100.LOC:Local_timer_interrupts
4605 ± 51% -39.6% 2781 ± 8% interrupts.CPU100.TLB:TLB_shootdowns
601039 +52.0% 913375 ± 25% interrupts.CPU101.LOC:Local_timer_interrupts
601054 +52.0% 913416 ± 25% interrupts.CPU102.LOC:Local_timer_interrupts
601120 +51.9% 913315 ± 25% interrupts.CPU103.LOC:Local_timer_interrupts
601152 +51.9% 912929 ± 25% interrupts.CPU104.LOC:Local_timer_interrupts
601148 +51.8% 912772 ± 25% interrupts.CPU105.LOC:Local_timer_interrupts
601007 +51.9% 913225 ± 25% interrupts.CPU106.LOC:Local_timer_interrupts
601060 +51.9% 912870 ± 25% interrupts.CPU107.LOC:Local_timer_interrupts
601161 +51.9% 913355 ± 25% interrupts.CPU108.LOC:Local_timer_interrupts
8732 -34.2% 5744 ± 31% interrupts.CPU108.NMI:Non-maskable_interrupts
8732 -34.2% 5744 ± 31% interrupts.CPU108.PMI:Performance_monitoring_interrupts
601098 +51.9% 913229 ± 25% interrupts.CPU109.LOC:Local_timer_interrupts
601304 +51.7% 912281 ± 25% interrupts.CPU11.LOC:Local_timer_interrupts
601084 +52.0% 913404 ± 25% interrupts.CPU110.LOC:Local_timer_interrupts
600974 +52.0% 913378 ± 25% interrupts.CPU111.LOC:Local_timer_interrupts
601046 +52.0% 913375 ± 25% interrupts.CPU112.LOC:Local_timer_interrupts
4626 ± 51% -38.3% 2856 ± 12% interrupts.CPU112.TLB:TLB_shootdowns
601016 +51.9% 913141 ± 25% interrupts.CPU113.LOC:Local_timer_interrupts
601236 +51.8% 912742 ± 25% interrupts.CPU114.LOC:Local_timer_interrupts
601069 +51.9% 913086 ± 25% interrupts.CPU115.LOC:Local_timer_interrupts
564.75 ± 89% -83.2% 95.00 ± 26% interrupts.CPU115.RES:Rescheduling_interrupts
601005 +52.0% 913296 ± 25% interrupts.CPU116.LOC:Local_timer_interrupts
601058 +52.0% 913376 ± 25% interrupts.CPU117.LOC:Local_timer_interrupts
29.50 ± 88% +313.6% 122.00 ± 23% interrupts.CPU117.RES:Rescheduling_interrupts
601164 +52.0% 913473 ± 25% interrupts.CPU118.LOC:Local_timer_interrupts
601075 +51.9% 912767 ± 25% interrupts.CPU119.LOC:Local_timer_interrupts
601164 +51.9% 913118 ± 25% interrupts.CPU12.LOC:Local_timer_interrupts
601055 +52.0% 913667 ± 25% interrupts.CPU120.LOC:Local_timer_interrupts
15.00 ± 39% +1790.0% 283.50 ±147% interrupts.CPU120.RES:Rescheduling_interrupts
600986 +52.0% 913637 ± 25% interrupts.CPU121.LOC:Local_timer_interrupts
601024 +52.0% 913721 ± 25% interrupts.CPU122.LOC:Local_timer_interrupts
601020 +52.1% 913938 ± 25% interrupts.CPU123.LOC:Local_timer_interrupts
601038 +52.0% 913631 ± 25% interrupts.CPU124.LOC:Local_timer_interrupts
601369 +51.9% 913634 ± 25% interrupts.CPU125.LOC:Local_timer_interrupts
601037 +52.0% 913683 ± 25% interrupts.CPU126.LOC:Local_timer_interrupts
600989 +52.0% 913758 ± 25% interrupts.CPU127.LOC:Local_timer_interrupts
601026 +52.1% 914097 ± 25% interrupts.CPU128.LOC:Local_timer_interrupts
600962 +52.1% 914048 ± 25% interrupts.CPU129.LOC:Local_timer_interrupts
601086 +52.0% 913424 ± 25% interrupts.CPU13.LOC:Local_timer_interrupts
601004 +52.0% 913680 ± 25% interrupts.CPU130.LOC:Local_timer_interrupts
601032 +52.0% 913593 ± 25% interrupts.CPU131.LOC:Local_timer_interrupts
4615 ± 51% -39.7% 2781 ± 10% interrupts.CPU131.TLB:TLB_shootdowns
600999 +52.1% 914053 ± 25% interrupts.CPU132.LOC:Local_timer_interrupts
601044 +52.0% 913749 ± 25% interrupts.CPU133.LOC:Local_timer_interrupts
601011 +52.0% 913631 ± 25% interrupts.CPU134.LOC:Local_timer_interrupts
600989 +52.0% 913768 ± 25% interrupts.CPU135.LOC:Local_timer_interrupts
601026 +52.0% 913740 ± 25% interrupts.CPU136.LOC:Local_timer_interrupts
600996 +52.0% 913692 ± 25% interrupts.CPU137.LOC:Local_timer_interrupts
600992 +52.0% 913679 ± 25% interrupts.CPU138.LOC:Local_timer_interrupts
601018 +52.0% 913776 ± 25% interrupts.CPU139.LOC:Local_timer_interrupts
601206 +52.0% 913664 ± 25% interrupts.CPU14.LOC:Local_timer_interrupts
600992 +52.0% 913790 ± 25% interrupts.CPU140.LOC:Local_timer_interrupts
29.00 ± 84% +477.6% 167.50 ± 48% interrupts.CPU140.RES:Rescheduling_interrupts
601029 +52.0% 913814 ± 25% interrupts.CPU141.LOC:Local_timer_interrupts
20.25 ± 75% +2679.0% 562.75 ±113% interrupts.CPU141.RES:Rescheduling_interrupts
601282 +52.0% 913765 ± 25% interrupts.CPU142.LOC:Local_timer_interrupts
601114 +52.0% 913792 ± 25% interrupts.CPU143.LOC:Local_timer_interrupts
601195 +52.0% 913989 ± 25% interrupts.CPU144.LOC:Local_timer_interrupts
601281 +52.0% 913771 ± 25% interrupts.CPU145.LOC:Local_timer_interrupts
9097 ± 24% -28.9% 6464 ± 8% interrupts.CPU146.CAL:Function_call_interrupts
601129 +52.0% 913890 ± 25% interrupts.CPU146.LOC:Local_timer_interrupts
31.00 ± 87% +622.6% 224.00 ±114% interrupts.CPU146.RES:Rescheduling_interrupts
601167 +52.0% 913914 ± 25% interrupts.CPU147.LOC:Local_timer_interrupts
4628 ± 51% -39.6% 2796 ± 11% interrupts.CPU147.TLB:TLB_shootdowns
601151 +52.0% 913771 ± 25% interrupts.CPU148.LOC:Local_timer_interrupts
601203 +52.0% 913794 ± 25% interrupts.CPU149.LOC:Local_timer_interrupts
601401 +51.8% 913148 ± 25% interrupts.CPU15.LOC:Local_timer_interrupts
601207 +52.0% 913821 ± 25% interrupts.CPU150.LOC:Local_timer_interrupts
601208 +52.0% 913875 ± 25% interrupts.CPU151.LOC:Local_timer_interrupts
601204 +52.0% 913845 ± 25% interrupts.CPU152.LOC:Local_timer_interrupts
33.50 ± 67% +260.4% 120.75 ± 84% interrupts.CPU152.RES:Rescheduling_interrupts
601163 +52.0% 913963 ± 25% interrupts.CPU153.LOC:Local_timer_interrupts
601140 +52.0% 913912 ± 25% interrupts.CPU154.LOC:Local_timer_interrupts
601194 +52.0% 913724 ± 25% interrupts.CPU155.LOC:Local_timer_interrupts
34.50 ±129% +156.5% 88.50 ± 56% interrupts.CPU155.RES:Rescheduling_interrupts
601121 +52.0% 913853 ± 25% interrupts.CPU156.LOC:Local_timer_interrupts
601185 +52.0% 913935 ± 25% interrupts.CPU157.LOC:Local_timer_interrupts
601139 +52.0% 913967 ± 25% interrupts.CPU158.LOC:Local_timer_interrupts
601244 +52.0% 914006 ± 25% interrupts.CPU159.LOC:Local_timer_interrupts
601125 +51.9% 913407 ± 25% interrupts.CPU16.LOC:Local_timer_interrupts
601191 +52.0% 913831 ± 25% interrupts.CPU160.LOC:Local_timer_interrupts
601214 +52.0% 913956 ± 25% interrupts.CPU161.LOC:Local_timer_interrupts
20.75 ± 76% +362.7% 96.00 ± 24% interrupts.CPU161.RES:Rescheduling_interrupts
601122 +52.0% 913994 ± 25% interrupts.CPU162.LOC:Local_timer_interrupts
9094 ± 24% -25.3% 6794 ± 9% interrupts.CPU163.CAL:Function_call_interrupts
601137 +52.0% 913829 ± 25% interrupts.CPU163.LOC:Local_timer_interrupts
601098 +52.0% 913964 ± 25% interrupts.CPU164.LOC:Local_timer_interrupts
601259 +52.0% 914011 ± 25% interrupts.CPU165.LOC:Local_timer_interrupts
601195 +52.0% 913979 ± 25% interrupts.CPU166.LOC:Local_timer_interrupts
601329 +52.0% 913863 ± 25% interrupts.CPU167.LOC:Local_timer_interrupts
601108 +52.0% 913806 ± 25% interrupts.CPU168.LOC:Local_timer_interrupts
601425 +52.0% 914272 ± 25% interrupts.CPU169.LOC:Local_timer_interrupts
601093 +51.9% 912884 ± 25% interrupts.CPU17.LOC:Local_timer_interrupts
4624 ± 51% -39.1% 2814 ± 12% interrupts.CPU17.TLB:TLB_shootdowns
601035 +52.1% 914277 ± 25% interrupts.CPU170.LOC:Local_timer_interrupts
601090 +52.0% 913882 ± 25% interrupts.CPU171.LOC:Local_timer_interrupts
133.50 ± 83% -80.7% 25.75 ± 38% interrupts.CPU171.RES:Rescheduling_interrupts
601167 +52.0% 913519 ± 25% interrupts.CPU172.LOC:Local_timer_interrupts
601099 +52.1% 914287 ± 25% interrupts.CPU173.LOC:Local_timer_interrupts
601095 +52.1% 914245 ± 25% interrupts.CPU174.LOC:Local_timer_interrupts
601219 +52.0% 913858 ± 25% interrupts.CPU175.LOC:Local_timer_interrupts
601150 +51.9% 913437 ± 25% interrupts.CPU176.LOC:Local_timer_interrupts
601155 +52.0% 913519 ± 25% interrupts.CPU177.LOC:Local_timer_interrupts
601007 +52.1% 913976 ± 25% interrupts.CPU178.LOC:Local_timer_interrupts
601105 +52.1% 914223 ± 25% interrupts.CPU179.LOC:Local_timer_interrupts
601362 +51.8% 913144 ± 25% interrupts.CPU18.LOC:Local_timer_interrupts
601137 +52.0% 913924 ± 25% interrupts.CPU180.LOC:Local_timer_interrupts
601197 +52.0% 913568 ± 25% interrupts.CPU181.LOC:Local_timer_interrupts
601149 +52.0% 913945 ± 25% interrupts.CPU182.LOC:Local_timer_interrupts
601086 +52.1% 914292 ± 25% interrupts.CPU183.LOC:Local_timer_interrupts
601067 +52.0% 913700 ± 25% interrupts.CPU184.LOC:Local_timer_interrupts
601170 +52.1% 914231 ± 25% interrupts.CPU185.LOC:Local_timer_interrupts
601139 +52.0% 913746 ± 25% interrupts.CPU186.LOC:Local_timer_interrupts
601117 +52.1% 914143 ± 25% interrupts.CPU187.LOC:Local_timer_interrupts
601082 +52.1% 914222 ± 25% interrupts.CPU188.LOC:Local_timer_interrupts
601096 +52.1% 914054 ± 25% interrupts.CPU189.LOC:Local_timer_interrupts
601318 +51.8% 913095 ± 25% interrupts.CPU19.LOC:Local_timer_interrupts
601443 +51.8% 913085 ? 25% interrupts.CPU20.LOC:Local_timer_interrupts
601076 +52.0% 913401 ? 25% interrupts.CPU21.LOC:Local_timer_interrupts
601110 +52.0% 913461 ? 25% interrupts.CPU22.LOC:Local_timer_interrupts
601241 +51.9% 913251 ? 25% interrupts.CPU23.LOC:Local_timer_interrupts
4610 ? 51% -39.1% 2808 ? 12% interrupts.CPU23.TLB:TLB_shootdowns
601272 +52.0% 913706 ? 25% interrupts.CPU24.LOC:Local_timer_interrupts
50.75 ? 63% +288.2% 197.00 ? 31% interrupts.CPU24.RES:Rescheduling_interrupts
601079 +52.1% 913946 ? 25% interrupts.CPU25.LOC:Local_timer_interrupts
601142 +52.0% 913796 ? 25% interrupts.CPU26.LOC:Local_timer_interrupts
26.00 ? 51% +1607.7% 444.00 ?135% interrupts.CPU26.RES:Rescheduling_interrupts
600999 +52.0% 913404 ? 25% interrupts.CPU27.LOC:Local_timer_interrupts
601058 +52.0% 913471 ? 25% interrupts.CPU28.LOC:Local_timer_interrupts
601053 +52.0% 913762 ? 25% interrupts.CPU29.LOC:Local_timer_interrupts
601387 +51.8% 912824 ? 25% interrupts.CPU3.LOC:Local_timer_interrupts
4597 ? 50% -39.1% 2798 ? 11% interrupts.CPU3.TLB:TLB_shootdowns
600976 +52.0% 913724 ? 25% interrupts.CPU30.LOC:Local_timer_interrupts
601024 +52.0% 913694 ? 25% interrupts.CPU31.LOC:Local_timer_interrupts
37.75 ? 41% +133.1% 88.00 ? 10% interrupts.CPU31.RES:Rescheduling_interrupts
4621 ? 51% -37.0% 2914 ? 6% interrupts.CPU31.TLB:TLB_shootdowns
601004 +52.0% 913535 ? 25% interrupts.CPU32.LOC:Local_timer_interrupts
21.00 ? 43% +509.5% 128.00 ? 55% interrupts.CPU32.RES:Rescheduling_interrupts
601094 +52.1% 914138 ? 25% interrupts.CPU33.LOC:Local_timer_interrupts
601008 +52.0% 913660 ? 25% interrupts.CPU34.LOC:Local_timer_interrupts
600990 +52.0% 913589 ? 25% interrupts.CPU35.LOC:Local_timer_interrupts
601092 +52.0% 913630 ? 25% interrupts.CPU36.LOC:Local_timer_interrupts
601060 +52.1% 913980 ? 25% interrupts.CPU37.LOC:Local_timer_interrupts
202.75 ? 91% -89.4% 21.50 ? 64% interrupts.CPU37.RES:Rescheduling_interrupts
600979 +52.0% 913533 ? 25% interrupts.CPU38.LOC:Local_timer_interrupts
601168 +52.0% 913785 ? 25% interrupts.CPU39.LOC:Local_timer_interrupts
601133 +52.0% 913467 ? 25% interrupts.CPU4.LOC:Local_timer_interrupts
601010 +52.0% 913759 ? 25% interrupts.CPU40.LOC:Local_timer_interrupts
600994 +52.0% 913705 ? 25% interrupts.CPU41.LOC:Local_timer_interrupts
600977 +52.0% 913671 ? 25% interrupts.CPU42.LOC:Local_timer_interrupts
601004 +52.0% 913651 ? 25% interrupts.CPU43.LOC:Local_timer_interrupts
27.75 ? 53% +249.5% 97.00 ? 42% interrupts.CPU43.RES:Rescheduling_interrupts
601038 +52.0% 913832 ? 25% interrupts.CPU44.LOC:Local_timer_interrupts
601037 +52.0% 913845 ? 25% interrupts.CPU45.LOC:Local_timer_interrupts
26.50 ? 61% +206.6% 81.25 ? 20% interrupts.CPU45.RES:Rescheduling_interrupts
601080 +52.1% 913949 ? 25% interrupts.CPU46.LOC:Local_timer_interrupts
601011 +52.0% 913742 ? 25% interrupts.CPU47.LOC:Local_timer_interrupts
601561 +51.9% 913909 ? 25% interrupts.CPU48.LOC:Local_timer_interrupts
601199 +52.0% 913959 ? 25% interrupts.CPU49.LOC:Local_timer_interrupts
601239 +52.0% 913644 ? 25% interrupts.CPU5.LOC:Local_timer_interrupts
4617 ? 51% -36.5% 2930 ? 10% interrupts.CPU5.TLB:TLB_shootdowns
9146 ? 24% -29.8% 6419 ? 13% interrupts.CPU50.CAL:Function_call_interrupts
601393 +52.0% 913863 ? 25% interrupts.CPU50.LOC:Local_timer_interrupts
25.25 ?107% +470.3% 144.00 ? 19% interrupts.CPU50.RES:Rescheduling_interrupts
601182 +52.0% 913815 ? 25% interrupts.CPU51.LOC:Local_timer_interrupts
601198 +52.1% 914215 ? 25% interrupts.CPU52.LOC:Local_timer_interrupts
601138 +52.0% 913865 ? 25% interrupts.CPU53.LOC:Local_timer_interrupts
601169 +52.0% 913772 ? 25% interrupts.CPU54.LOC:Local_timer_interrupts
601293 +52.0% 913829 ? 25% interrupts.CPU55.LOC:Local_timer_interrupts
601214 +52.0% 913895 ? 25% interrupts.CPU56.LOC:Local_timer_interrupts
41.50 ?122% +912.0% 420.00 ?117% interrupts.CPU56.RES:Rescheduling_interrupts
601207 +52.0% 913831 ? 25% interrupts.CPU57.LOC:Local_timer_interrupts
601191 +52.0% 913698 ? 25% interrupts.CPU58.LOC:Local_timer_interrupts
601450 +51.9% 913892 ? 25% interrupts.CPU59.LOC:Local_timer_interrupts
601003 +52.0% 913473 ? 25% interrupts.CPU6.LOC:Local_timer_interrupts
601131 +52.0% 913906 ? 25% interrupts.CPU60.LOC:Local_timer_interrupts
601136 +52.0% 913934 ? 25% interrupts.CPU61.LOC:Local_timer_interrupts
601116 +52.0% 913825 ? 25% interrupts.CPU62.LOC:Local_timer_interrupts
601181 +52.0% 913875 ? 25% interrupts.CPU63.LOC:Local_timer_interrupts
601530 +51.9% 913886 ? 25% interrupts.CPU64.LOC:Local_timer_interrupts
601120 +52.0% 913903 ? 25% interrupts.CPU65.LOC:Local_timer_interrupts
601302 +52.0% 913820 ? 25% interrupts.CPU66.LOC:Local_timer_interrupts
8997 ? 20% -25.4% 6714 ? 9% interrupts.CPU67.CAL:Function_call_interrupts
601210 +52.0% 913603 ? 25% interrupts.CPU67.LOC:Local_timer_interrupts
601253 +52.0% 913801 ? 25% interrupts.CPU68.LOC:Local_timer_interrupts
601093 +52.0% 913801 ? 25% interrupts.CPU69.LOC:Local_timer_interrupts
601047 +51.9% 913053 ? 25% interrupts.CPU7.LOC:Local_timer_interrupts
601172 +52.0% 913851 ? 25% interrupts.CPU70.LOC:Local_timer_interrupts
601192 +52.0% 913921 ? 25% interrupts.CPU71.LOC:Local_timer_interrupts
601048 +52.1% 913935 ? 25% interrupts.CPU72.LOC:Local_timer_interrupts
600964 +52.1% 914102 ? 25% interrupts.CPU73.LOC:Local_timer_interrupts
601086 +52.1% 914248 ? 25% interrupts.CPU74.LOC:Local_timer_interrupts
601072 +52.0% 913613 ? 25% interrupts.CPU75.LOC:Local_timer_interrupts
601058 +52.0% 913695 ? 25% interrupts.CPU76.LOC:Local_timer_interrupts
601111 +52.1% 914191 ? 25% interrupts.CPU77.LOC:Local_timer_interrupts
510.00 ? 96% -86.9% 67.00 ? 92% interrupts.CPU77.RES:Rescheduling_interrupts
601214 +52.1% 914462 ? 25% interrupts.CPU78.LOC:Local_timer_interrupts
601228 +52.0% 913901 ? 25% interrupts.CPU79.LOC:Local_timer_interrupts
601102 +51.9% 912996 ? 25% interrupts.CPU8.LOC:Local_timer_interrupts
601218 +52.1% 914283 ? 25% interrupts.CPU80.LOC:Local_timer_interrupts
601186 +52.1% 914164 ? 25% interrupts.CPU81.LOC:Local_timer_interrupts
601076 +52.1% 914212 ? 25% interrupts.CPU82.LOC:Local_timer_interrupts
601085 +52.1% 914160 ? 25% interrupts.CPU83.LOC:Local_timer_interrupts
601094 +52.1% 914232 ? 25% interrupts.CPU84.LOC:Local_timer_interrupts
292.75 ? 47% -66.1% 99.25 ? 86% interrupts.CPU84.RES:Rescheduling_interrupts
601098 +52.1% 914257 ? 25% interrupts.CPU85.LOC:Local_timer_interrupts
601142 +52.1% 914230 ? 25% interrupts.CPU86.LOC:Local_timer_interrupts
601045 +52.0% 913633 ? 25% interrupts.CPU87.LOC:Local_timer_interrupts
601171 +52.0% 913748 ? 25% interrupts.CPU88.LOC:Local_timer_interrupts
601279 +52.1% 914340 ? 25% interrupts.CPU89.LOC:Local_timer_interrupts
220.00 ? 53% -71.6% 62.50 ? 56% interrupts.CPU89.RES:Rescheduling_interrupts
601269 +51.8% 912796 ? 25% interrupts.CPU9.LOC:Local_timer_interrupts
601182 +52.0% 913729 ? 25% interrupts.CPU90.LOC:Local_timer_interrupts
183.00 ? 92% -80.6% 35.50 ? 47% interrupts.CPU90.RES:Rescheduling_interrupts
601249 +52.1% 914243 ? 25% interrupts.CPU91.LOC:Local_timer_interrupts
242.00 ?104% -84.1% 38.50 ? 49% interrupts.CPU91.RES:Rescheduling_interrupts
601136 +52.1% 914181 ? 25% interrupts.CPU92.LOC:Local_timer_interrupts
601115 +52.1% 914271 ? 25% interrupts.CPU93.LOC:Local_timer_interrupts
601059 +52.1% 914225 ? 25% interrupts.CPU94.LOC:Local_timer_interrupts
601350 +52.0% 914261 ? 25% interrupts.CPU95.LOC:Local_timer_interrupts
600993 +51.9% 912787 ? 25% interrupts.CPU96.LOC:Local_timer_interrupts
601098 +51.8% 912683 ? 25% interrupts.CPU97.LOC:Local_timer_interrupts
601136 +51.9% 913362 ? 25% interrupts.CPU98.LOC:Local_timer_interrupts
601131 +51.9% 913375 ? 25% interrupts.CPU99.LOC:Local_timer_interrupts
1.154e+08 +52.0% 1.754e+08 ? 25% interrupts.LOC:Local_timer_interrupts
52752 ± 44% -56.3% 23030 ± 15% softirqs.CPU1.RCU
74179 ± 49% -68.4% 23430 ± 9% softirqs.CPU10.RCU
51891 ± 44% -56.2% 22751 ± 9% softirqs.CPU100.RCU
52951 ± 42% -57.1% 22704 ± 7% softirqs.CPU101.RCU
55961 ± 54% -59.1% 22870 ± 7% softirqs.CPU102.RCU
67355 ± 45% -67.0% 22246 ± 10% softirqs.CPU103.RCU
52253 ± 44% -56.7% 22642 ± 13% softirqs.CPU104.RCU
52250 ± 45% -55.4% 23307 ± 9% softirqs.CPU105.RCU
53056 ± 44% -56.7% 22960 ± 9% softirqs.CPU106.RCU
51921 ± 45% -56.7% 22468 ± 15% softirqs.CPU107.RCU
53400 ± 42% -58.9% 21923 ± 12% softirqs.CPU108.RCU
113674 ± 5% +35.5% 154034 ± 19% softirqs.CPU108.TIMER
52082 ± 45% -57.2% 22290 ± 10% softirqs.CPU109.RCU
52205 ± 43% -55.2% 23373 ± 10% softirqs.CPU11.RCU
67064 ± 44% -67.2% 21996 ± 9% softirqs.CPU110.RCU
51559 ± 46% -57.2% 22064 ± 8% softirqs.CPU111.RCU
52545 ± 44% -56.7% 22756 ± 9% softirqs.CPU112.RCU
52137 ± 44% -56.2% 22850 ± 8% softirqs.CPU113.RCU
52168 ± 44% -57.2% 22316 ± 8% softirqs.CPU114.RCU
51984 ± 46% -56.3% 22720 ± 10% softirqs.CPU116.RCU
52576 ± 44% -58.2% 21961 ± 10% softirqs.CPU117.RCU
52386 ± 44% -58.5% 21755 ± 7% softirqs.CPU118.RCU
52220 ± 44% -58.4% 21729 ± 13% softirqs.CPU119.RCU
52805 ± 43% -56.0% 23221 ± 14% softirqs.CPU12.RCU
114814 ± 5% +41.1% 162027 ± 15% softirqs.CPU12.TIMER
53207 ± 44% -61.5% 20481 ± 6% softirqs.CPU121.RCU
99208 ± 7% +51.0% 149847 ± 25% softirqs.CPU121.TIMER
52962 ± 44% -62.9% 19643 ± 7% softirqs.CPU122.RCU
98757 ± 7% +51.5% 149665 ± 25% softirqs.CPU122.TIMER
52865 ± 44% -61.4% 20416 ± 3% softirqs.CPU123.RCU
98423 ± 7% +53.7% 151284 ± 25% softirqs.CPU123.TIMER
53274 ± 45% -63.9% 19225 ± 6% softirqs.CPU124.RCU
99126 ± 6% +52.6% 151229 ± 25% softirqs.CPU124.TIMER
63855 ± 41% -68.5% 20133 ± 3% softirqs.CPU125.RCU
98827 ± 6% +52.7% 150882 ± 25% softirqs.CPU125.TIMER
52741 ± 45% -60.9% 20613 ± 5% softirqs.CPU126.RCU
99240 ± 6% +52.3% 151157 ± 24% softirqs.CPU126.TIMER
53003 ± 44% -62.6% 19848 ± 4% softirqs.CPU127.RCU
98688 ± 6% +51.0% 149019 ± 24% softirqs.CPU127.TIMER
52505 ± 45% -62.2% 19856 ± 6% softirqs.CPU128.RCU
98227 ± 6% +52.0% 149342 ± 25% softirqs.CPU128.TIMER
52229 ± 44% -62.1% 19782 ± 4% softirqs.CPU129.RCU
97901 ± 6% +51.9% 148689 ± 24% softirqs.CPU129.TIMER
52428 ± 45% -56.0% 23050 ± 14% softirqs.CPU13.RCU
52343 ± 44% -61.2% 20301 ± 3% softirqs.CPU130.RCU
98951 ± 6% +53.0% 151435 ± 25% softirqs.CPU130.TIMER
52646 ± 44% -61.3% 20375 ± 4% softirqs.CPU131.RCU
98560 ± 6% +53.4% 151209 ± 25% softirqs.CPU131.TIMER
52409 ± 44% -62.4% 19731 ± 7% softirqs.CPU132.RCU
98614 ± 6% +73.8% 171423 ± 37% softirqs.CPU132.TIMER
52752 ± 44% -61.2% 20471 ± 6% softirqs.CPU133.RCU
52275 ± 44% -63.6% 19031 ± 5% softirqs.CPU134.RCU
98069 ± 6% +51.5% 148560 ± 24% softirqs.CPU134.TIMER
52396 ± 44% -62.1% 19860 ± 5% softirqs.CPU135.RCU
98126 ± 6% +51.9% 149005 ± 24% softirqs.CPU135.TIMER
52698 ± 44% -62.4% 19829 ± 4% softirqs.CPU136.RCU
98621 ± 6% +53.0% 150888 ± 25% softirqs.CPU136.TIMER
52418 ± 45% -63.0% 19415 ± 5% softirqs.CPU137.RCU
98965 ± 6% +52.5% 150937 ± 25% softirqs.CPU137.TIMER
52420 ± 44% -63.2% 19316 ± 4% softirqs.CPU138.RCU
98616 ± 6% +52.9% 150751 ± 25% softirqs.CPU138.TIMER
52579 ± 44% -62.5% 19701 ± 5% softirqs.CPU139.RCU
97893 ± 7% +52.2% 148945 ± 24% softirqs.CPU139.TIMER
51429 ± 45% -55.4% 22961 ± 10% softirqs.CPU14.RCU
52334 ± 44% -62.1% 19856 ± 3% softirqs.CPU140.RCU
52224 ± 44% -60.3% 20714 softirqs.CPU141.RCU
98396 ± 6% +53.6% 151175 ± 24% softirqs.CPU141.TIMER
51988 ± 43% -62.6% 19469 ± 6% softirqs.CPU142.RCU
98787 ± 6% +51.3% 149501 ± 25% softirqs.CPU142.TIMER
52276 ± 44% -58.4% 21756 ± 2% softirqs.CPU143.RCU
98571 ± 6% +51.9% 149701 ± 24% softirqs.CPU143.TIMER
52198 ± 43% -61.4% 20138 ± 9% softirqs.CPU144.RCU
98058 ± 5% +48.9% 146016 ± 23% softirqs.CPU144.TIMER
51616 ± 43% -57.4% 22011 ± 10% softirqs.CPU145.RCU
97012 ± 6% +50.4% 145877 ± 22% softirqs.CPU145.TIMER
51953 ± 44% -62.2% 19631 ± 3% softirqs.CPU146.RCU
97113 ± 6% +50.8% 146436 ± 23% softirqs.CPU146.TIMER
51799 ± 44% -62.3% 19551 ± 8% softirqs.CPU147.RCU
97082 ± 6% +51.1% 146725 ± 23% softirqs.CPU147.TIMER
51881 ± 43% -62.3% 19566 ± 4% softirqs.CPU148.RCU
97129 ± 6% +56.6% 152104 ± 21% softirqs.CPU148.TIMER
52012 ± 43% -61.8% 19876 ± 6% softirqs.CPU149.RCU
97560 ± 6% +50.1% 146393 ± 23% softirqs.CPU149.TIMER
63722 ± 38% -67.0% 21058 ± 6% softirqs.CPU150.RCU
99044 ± 5% +47.6% 146156 ± 22% softirqs.CPU150.TIMER
52114 ± 43% -60.8% 20405 ± 7% softirqs.CPU151.RCU
96876 ± 6% +49.5% 144816 ± 22% softirqs.CPU151.TIMER
52150 ± 44% -59.9% 20894 ± 5% softirqs.CPU152.RCU
52147 ± 44% -61.7% 19950 ± 4% softirqs.CPU153.RCU
97801 ± 6% +51.0% 147661 ± 23% softirqs.CPU153.TIMER
52387 ± 43% -61.1% 20365 ± 7% softirqs.CPU154.RCU
97461 ± 6% +51.0% 147135 ± 23% softirqs.CPU154.TIMER
52457 ± 44% -62.2% 19826 ± 6% softirqs.CPU155.RCU
98132 ± 6% +49.4% 146614 ± 23% softirqs.CPU155.TIMER
52267 ± 44% -62.2% 19762 ± 4% softirqs.CPU156.RCU
96311 ± 6% +50.6% 145042 ± 22% softirqs.CPU156.TIMER
52345 ± 43% -62.4% 19691 ± 2% softirqs.CPU157.RCU
52039 ± 43% -61.5% 20023 ± 3% softirqs.CPU158.RCU
97078 ± 6% +50.6% 146237 ± 23% softirqs.CPU158.TIMER
52081 ± 44% -60.9% 20345 ± 10% softirqs.CPU159.RCU
97540 ± 6% +51.2% 147514 ± 23% softirqs.CPU159.TIMER
52798 ± 43% -55.9% 23305 ± 15% softirqs.CPU16.RCU
52603 ± 43% -61.9% 20025 ± 5% softirqs.CPU160.RCU
97475 ± 6% +51.3% 147434 ± 22% softirqs.CPU160.TIMER
52691 ± 42% -62.8% 19613 ± 5% softirqs.CPU161.RCU
97358 ± 6% +50.9% 146883 ± 23% softirqs.CPU161.TIMER
97224 ± 6% +50.4% 146203 ± 22% softirqs.CPU162.TIMER
97470 ± 7% +49.7% 145897 ± 23% softirqs.CPU163.TIMER
52412 ± 43% -61.4% 20235 ± 8% softirqs.CPU164.RCU
96832 ± 6% +49.9% 145127 ± 22% softirqs.CPU164.TIMER
52748 ? 43% -61.8% 20166 ? 7% softirqs.CPU165.RCU
97414 ? 6% +51.6% 147725 ? 23% softirqs.CPU165.TIMER
52549 ? 43% -61.5% 20210 ? 4% softirqs.CPU166.RCU
97492 ? 6% +51.4% 147639 ? 23% softirqs.CPU166.TIMER
52130 ? 43% -59.0% 21348 ? 7% softirqs.CPU167.RCU
97249 ± 6% +51.3% 147091 ± 23% softirqs.CPU167.TIMER
63144 ± 37% -65.2% 21957 ± 7% softirqs.CPU168.RCU
109843 ± 2% +47.0% 161430 ± 24% softirqs.CPU168.TIMER
63751 ± 36% -66.8% 21172 ± 6% softirqs.CPU169.RCU
52715 ± 44% -55.5% 23473 ± 10% softirqs.CPU17.RCU
63309 ± 37% -67.0% 20863 ± 6% softirqs.CPU170.RCU
109657 ± 2% +46.5% 160603 ± 24% softirqs.CPU170.TIMER
63011 ± 38% -65.2% 21949 ± 10% softirqs.CPU171.RCU
108862 ± 2% +47.3% 160315 ± 24% softirqs.CPU171.TIMER
63097 ± 38% -66.6% 21105 ± 10% softirqs.CPU172.RCU
109504 ± 2% +48.2% 162319 ± 24% softirqs.CPU172.TIMER
63186 ± 38% -66.5% 21143 ± 10% softirqs.CPU173.RCU
110095 ± 2% +46.6% 161361 ± 24% softirqs.CPU173.TIMER
62914 ± 38% -66.4% 21124 ± 10% softirqs.CPU174.RCU
110750 ± 2% +46.1% 161799 ± 24% softirqs.CPU174.TIMER
73942 ± 30% -70.4% 21912 ± 7% softirqs.CPU175.RCU
109262 ± 2% +45.4% 158907 ± 24% softirqs.CPU175.TIMER
63064 ± 38% -66.8% 20959 ± 11% softirqs.CPU176.RCU
109271 ± 2% +45.7% 159212 ± 24% softirqs.CPU176.TIMER
52190 ± 45% -59.1% 21360 ± 7% softirqs.CPU177.RCU
109591 ± 2% +47.9% 162092 ± 24% softirqs.CPU177.TIMER
62695 ± 38% -66.9% 20775 ± 8% softirqs.CPU178.RCU
110553 ± 2% +47.0% 162540 ± 24% softirqs.CPU178.TIMER
63241 ± 38% -67.4% 20619 ± 10% softirqs.CPU179.RCU
109352 ± 2% +47.4% 161142 ± 24% softirqs.CPU179.TIMER
53015 ± 43% -55.2% 23729 ± 10% softirqs.CPU18.RCU
63562 ± 37% -67.2% 20868 ± 11% softirqs.CPU180.RCU
108776 ± 3% +46.4% 159268 ± 24% softirqs.CPU180.TIMER
62801 ± 35% -66.1% 21291 ± 10% softirqs.CPU181.RCU
110792 ± 4% +43.7% 159219 ± 24% softirqs.CPU181.TIMER
63036 ± 38% -66.3% 21242 ± 7% softirqs.CPU182.RCU
108453 ± 3% +47.3% 159755 ± 24% softirqs.CPU182.TIMER
62804 ± 38% -65.3% 21790 ± 10% softirqs.CPU183.RCU
109676 ± 2% +47.5% 161820 ± 24% softirqs.CPU183.TIMER
62781 ± 38% -66.1% 21307 ± 13% softirqs.CPU184.RCU
109571 ± 2% +48.3% 162539 ± 24% softirqs.CPU184.TIMER
54016 ± 31% -60.7% 21220 ± 9% softirqs.CPU185.RCU
110523 ± 2% +46.7% 162165 ± 24% softirqs.CPU185.TIMER
62904 ± 38% -66.2% 21266 ± 12% softirqs.CPU186.RCU
109140 ± 3% +46.2% 159580 ± 24% softirqs.CPU186.TIMER
63152 ± 37% -66.5% 21150 ± 8% softirqs.CPU187.RCU
108026 ± 3% +48.0% 159836 ± 23% softirqs.CPU187.TIMER
63015 ± 38% -65.9% 21467 ± 10% softirqs.CPU188.RCU
108603 ± 2% +47.9% 160602 ± 24% softirqs.CPU188.TIMER
63405 ± 38% -65.4% 21932 ± 16% softirqs.CPU189.RCU
109973 ± 2% +47.9% 162627 ± 23% softirqs.CPU189.TIMER
111425 +45.3% 161921 ± 24% softirqs.CPU190.TIMER
62650 ± 37% -65.7% 21519 ± 5% softirqs.CPU191.RCU
110312 ± 2% +47.9% 163193 ± 24% softirqs.CPU191.TIMER
52227 ± 44% -56.7% 22596 ± 12% softirqs.CPU2.RCU
52802 ± 43% -56.9% 22773 ± 12% softirqs.CPU20.RCU
52358 ± 44% -56.2% 22936 ± 8% softirqs.CPU21.RCU
52658 ± 44% -55.0% 23715 ± 11% softirqs.CPU22.RCU
52348 ± 44% -58.2% 21870 ± 9% softirqs.CPU23.RCU
98891 ± 7% +50.4% 148736 ± 24% softirqs.CPU24.TIMER
52884 ± 43% -61.7% 20251 ± 7% softirqs.CPU25.RCU
99421 ± 6% +49.5% 148663 ± 24% softirqs.CPU25.TIMER
52954 ± 43% -60.5% 20929 ± 8% softirqs.CPU26.RCU
99661 ± 7% +50.0% 149485 ± 24% softirqs.CPU26.TIMER
53073 ± 43% -61.0% 20680 ± 4% softirqs.CPU27.RCU
99096 ± 7% +51.2% 149851 ± 24% softirqs.CPU27.TIMER
52987 ± 44% -62.5% 19884 ± 5% softirqs.CPU28.RCU
99469 ± 6% +51.7% 150901 ± 24% softirqs.CPU28.TIMER
53310 ± 43% -60.9% 20829 ± 8% softirqs.CPU29.RCU
53168 ± 43% -62.6% 19880 ± 7% softirqs.CPU30.RCU
99558 ± 6% +52.2% 151532 ± 24% softirqs.CPU30.TIMER
53425 ± 44% -61.3% 20675 ± 5% softirqs.CPU31.RCU
99184 ± 6% +49.8% 148549 ± 24% softirqs.CPU31.TIMER
52971 ± 44% -60.9% 20716 ± 6% softirqs.CPU32.RCU
99047 ± 6% +50.2% 148735 ± 24% softirqs.CPU32.TIMER
62849 ± 40% -66.7% 20903 ± 5% softirqs.CPU33.RCU
98419 ± 6% +51.2% 148841 ± 24% softirqs.CPU33.TIMER
52896 ± 44% -62.9% 19649 ± 5% softirqs.CPU34.RCU
99681 ± 6% +52.1% 151665 ± 24% softirqs.CPU34.TIMER
53180 ± 44% -60.8% 20825 ± 6% softirqs.CPU35.RCU
99229 ± 7% +52.5% 151301 ± 24% softirqs.CPU35.TIMER
52940 ± 44% -62.1% 20079 ± 8% softirqs.CPU36.RCU
99643 ± 6% +59.3% 158747 ± 28% softirqs.CPU36.TIMER
53012 ± 44% -62.8% 19745 ± 5% softirqs.CPU37.RCU
98531 ± 7% +50.7% 148462 ± 24% softirqs.CPU37.TIMER
52968 ± 44% -62.3% 19982 ± 5% softirqs.CPU38.RCU
98644 ± 7% +51.3% 149223 ± 24% softirqs.CPU38.TIMER
53394 ± 44% -62.2% 20182 ± 3% softirqs.CPU39.RCU
99000 ± 6% +51.1% 149595 ± 24% softirqs.CPU39.TIMER
52252 ± 44% -55.1% 23445 ± 9% softirqs.CPU4.RCU
52983 ± 44% -62.5% 19858 ± 8% softirqs.CPU40.RCU
99349 ± 7% +52.2% 151224 ± 24% softirqs.CPU40.TIMER
53017 ± 44% -61.6% 20355 ± 7% softirqs.CPU41.RCU
99582 ± 6% +51.6% 150968 ± 24% softirqs.CPU41.TIMER
52939 ± 44% -61.5% 20385 ± 6% softirqs.CPU42.RCU
99355 ± 6% +52.0% 151035 ± 24% softirqs.CPU42.TIMER
53093 ± 44% -61.7% 20348 ± 4% softirqs.CPU43.RCU
98724 ± 7% +50.7% 148812 ± 24% softirqs.CPU43.TIMER
53410 ± 43% -61.7% 20453 ± 6% softirqs.CPU44.RCU
98761 ± 7% +51.4% 149534 ± 24% softirqs.CPU44.TIMER
53011 ± 44% -62.2% 20025 ± 6% softirqs.CPU45.RCU
99342 ± 6% +51.5% 150526 ± 24% softirqs.CPU45.TIMER
53067 ± 44% -61.5% 20408 ± 4% softirqs.CPU46.RCU
99408 ± 6% +51.0% 150090 ± 24% softirqs.CPU46.TIMER
53057 ± 44% -60.9% 20768 ± 4% softirqs.CPU47.RCU
99258 ± 7% +51.2% 150089 ± 24% softirqs.CPU47.TIMER
98522 ± 6% +49.3% 147127 ± 22% softirqs.CPU48.TIMER
52622 ± 43% -59.7% 21212 ± 4% softirqs.CPU49.RCU
98388 ± 7% +48.7% 146349 ± 23% softirqs.CPU49.TIMER
52326 ± 43% -54.7% 23704 ± 8% softirqs.CPU5.RCU
53296 ± 44% -62.9% 19759 ± 7% softirqs.CPU50.RCU
53176 ± 44% -62.0% 20208 ± 5% softirqs.CPU51.RCU
52785 ± 43% -60.0% 21088 ± 5% softirqs.CPU52.RCU
98426 ± 7% +64.3% 161697 ± 21% softirqs.CPU52.TIMER
52446 ± 43% -61.6% 20118 ± 4% softirqs.CPU53.RCU
98506 ± 6% +50.2% 147915 ± 23% softirqs.CPU53.TIMER
63443 ± 36% -66.2% 21444 ± 6% softirqs.CPU54.RCU
52654 ± 43% -61.6% 20211 ± 3% softirqs.CPU55.RCU
97514 ± 6% +49.7% 145977 ± 22% softirqs.CPU55.TIMER
53050 ± 42% -60.6% 20923 ± 4% softirqs.CPU56.RCU
52849 ± 42% -61.1% 20543 ± 3% softirqs.CPU57.RCU
98839 ± 5% +50.1% 148366 ± 23% softirqs.CPU57.TIMER
52810 ± 43% -62.0% 20066 ± 5% softirqs.CPU58.RCU
98169 ± 6% +50.2% 147428 ± 22% softirqs.CPU58.TIMER
98431 ± 6% +50.2% 147888 ± 22% softirqs.CPU59.TIMER
55291 ± 51% -58.5% 22964 ± 13% softirqs.CPU6.RCU
52610 ± 43% -60.5% 20760 ± 7% softirqs.CPU60.RCU
97558 ± 6% +49.3% 145689 ± 22% softirqs.CPU60.TIMER
52376 ± 43% -60.0% 20953 ± 4% softirqs.CPU61.RCU
52803 ± 42% -61.1% 20514 softirqs.CPU62.RCU
97776 ± 6% +49.5% 146204 ± 22% softirqs.CPU62.TIMER
52978 ± 42% -61.0% 20652 ± 3% softirqs.CPU63.RCU
99513 ± 5% +48.3% 147566 ± 23% softirqs.CPU63.TIMER
99166 ± 5% +49.8% 148518 ± 23% softirqs.CPU64.TIMER
53149 ± 42% -61.6% 20397 ± 5% softirqs.CPU65.RCU
98053 ± 6% +50.9% 148003 ± 23% softirqs.CPU65.TIMER
97543 ± 6% +50.6% 146924 ± 22% softirqs.CPU66.TIMER
98024 ± 6% +49.3% 146396 ± 22% softirqs.CPU67.TIMER
53209 ± 42% -61.7% 20385 ± 6% softirqs.CPU68.RCU
97685 ± 6% +49.5% 146077 ± 22% softirqs.CPU68.TIMER
52987 ± 43% -61.7% 20283 ± 3% softirqs.CPU69.RCU
98100 ± 6% +52.1% 149255 ± 23% softirqs.CPU69.TIMER
54157 ± 41% -57.8% 22872 ± 13% softirqs.CPU7.RCU
53110 ± 42% -61.1% 20670 ± 6% softirqs.CPU70.RCU
98246 ± 6% +51.1% 148418 ± 22% softirqs.CPU70.TIMER
52937 ± 42% -59.0% 21725 ± 7% softirqs.CPU71.RCU
98303 ± 6% +50.5% 147903 ± 22% softirqs.CPU71.TIMER
63737 ± 38% -64.5% 22622 ± 14% softirqs.CPU72.RCU
110467 ± 2% +44.9% 160033 ± 23% softirqs.CPU72.TIMER
63283 ± 37% -66.6% 21164 ± 7% softirqs.CPU73.RCU
110488 ± 2% +45.4% 160671 ± 24% softirqs.CPU73.TIMER
63387 ± 38% -65.4% 21951 ± 12% softirqs.CPU74.RCU
109636 ± 3% +46.3% 160357 ± 24% softirqs.CPU74.TIMER
63261 ± 37% -66.5% 21187 ± 3% softirqs.CPU75.RCU
109669 ± 2% +45.9% 159968 ± 23% softirqs.CPU75.TIMER
63503 ± 38% -65.9% 21643 ± 7% softirqs.CPU76.RCU
110451 ± 2% +47.2% 162617 ± 24% softirqs.CPU76.TIMER
63642 ± 38% -66.4% 21357 ± 10% softirqs.CPU77.RCU
110824 ± 2% +45.9% 161669 ± 24% softirqs.CPU77.TIMER
63499 ± 37% -65.7% 21774 ± 4% softirqs.CPU78.RCU
110833 ± 2% +46.7% 162596 ± 24% softirqs.CPU78.TIMER
74429 ± 29% -70.5% 21959 ± 6% softirqs.CPU79.RCU
109840 ± 2% +44.7% 158935 ± 24% softirqs.CPU79.TIMER
52201 ± 44% -55.9% 23019 ± 9% softirqs.CPU8.RCU
52061 ± 43% -59.5% 21074 ± 9% softirqs.CPU80.RCU
110183 ± 2% +45.1% 159824 ± 24% softirqs.CPU80.TIMER
63444 ± 37% -65.8% 21682 ± 8% softirqs.CPU81.RCU
110233 ± 2% +48.0% 163162 ± 23% softirqs.CPU81.TIMER
52500 ± 44% -59.4% 21337 ± 6% softirqs.CPU82.RCU
112044 +45.1% 162626 ± 24% softirqs.CPU82.TIMER
63416 ± 38% -66.4% 21302 ± 3% softirqs.CPU83.RCU
110098 ± 2% +47.2% 162069 ± 24% softirqs.CPU83.TIMER
63734 ± 38% -67.1% 20994 ± 9% softirqs.CPU84.RCU
109739 ± 2% +45.1% 159281 ± 23% softirqs.CPU84.TIMER
63574 ± 37% -67.0% 20966 ± 8% softirqs.CPU85.RCU
110101 ± 3% +44.5% 159112 ± 24% softirqs.CPU85.TIMER
63264 ± 37% -66.3% 21333 ± 8% softirqs.CPU86.RCU
108989 ± 3% +47.3% 160521 ± 24% softirqs.CPU86.TIMER
63505 ± 38% -66.3% 21406 ± 10% softirqs.CPU87.RCU
110812 +45.9% 161711 ± 24% softirqs.CPU87.TIMER
63412 ± 37% -65.5% 21888 ± 7% softirqs.CPU88.RCU
110509 ± 2% +48.1% 163677 ± 24% softirqs.CPU88.TIMER
54192 ± 31% -58.8% 22349 ± 11% softirqs.CPU89.RCU
110958 ± 2% +46.8% 162933 ± 24% softirqs.CPU89.TIMER
52375 ± 44% -57.2% 22404 ± 8% softirqs.CPU9.RCU
63058 ± 37% -65.9% 21476 ± 11% softirqs.CPU90.RCU
109339 ± 3% +46.2% 159890 ± 23% softirqs.CPU90.TIMER
63304 ± 37% -66.2% 21392 ± 10% softirqs.CPU91.RCU
108899 ± 3% +47.5% 160595 ± 23% softirqs.CPU91.TIMER
63296 ± 38% -64.8% 22291 ± 9% softirqs.CPU92.RCU
109657 ± 2% +47.2% 161441 ± 23% softirqs.CPU92.TIMER
63559 ± 37% -63.7% 23046 ± 10% softirqs.CPU93.RCU
109730 ± 2% +48.0% 162421 ± 23% softirqs.CPU93.TIMER
63701 ± 38% -65.8% 21767 ± 8% softirqs.CPU94.RCU
111979 +45.1% 162467 ± 23% softirqs.CPU94.TIMER
51616 ± 43% -58.3% 21535 ± 6% softirqs.CPU95.RCU
111373 ± 2% +45.7% 162314 ± 23% softirqs.CPU95.TIMER
52067 ± 44% -55.8% 23000 ± 10% softirqs.CPU97.RCU
51971 ± 44% -57.0% 22355 ± 6% softirqs.CPU98.RCU
10523405 ± 39% -61.3% 4076167 ± 5% softirqs.RCU
464032 +43.7% 666826 ± 17% softirqs.SCHED
20273510 ± 2% +44.0% 29202119 ± 23% softirqs.TIMER



***************************************************************************************************
lkp-csl-2ap4: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2ap4/signal1/will-it-scale/0x500002b

commit:
2f65452ad7 ("locking/qspinlock: Refactor the qspinlock slow path")
ad3836e30e ("locking/qspinlock: Introduce CNA into the slow path of qspinlock")

2f65452ad747deeb ad3836e30e6f5f5e97867707b57
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:2 dmesg.RIP:smp_call_function_many
2:2 -100% :2 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
2:2 -100% :2 dmesg.WARNING:stack_recursion



***************************************************************************************************
lkp-csl-2ap3: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/thread/50%/debian-x86_64-2019-09-23.cgz/lkp-csl-2ap3/signal1/will-it-scale/0x500002b

commit:
2f65452ad7 ("locking/qspinlock: Refactor the qspinlock slow path")
ad3836e30e ("locking/qspinlock: Introduce CNA into the slow path of qspinlock")

2f65452ad747deeb ad3836e30e6f5f5e97867707b57
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
3727 +93.0% 7193 will-it-scale.per_thread_ops
301.57 +12.4% 339.00 ± 6% will-it-scale.time.elapsed_time
301.57 +12.4% 339.00 ± 6% will-it-scale.time.elapsed_time.max
28874 +12.0% 32332 ± 6% will-it-scale.time.system_time
45.30 ± 3% +381.3% 218.05 ± 12% will-it-scale.time.user_time
357808 +93.0% 690616 will-it-scale.workload
58529 +19.7% 70041 ± 3% cpuidle.POLL.usage
10784 ± 4% -15.0% 9163 ± 6% meminfo.max_used_kB
2.06 ± 53% +2.5 4.59 ± 38% turbostat.C6%
60846670 -34.7% 39716892 ± 6% turbostat.IRQ
50.00 +47.0% 73.50 vmstat.cpu.id
390430 -42.0% 226437 ± 6% vmstat.system.in
50.04 +24.1 74.18 mpstat.cpu.all.idle%
49.87 -24.2 25.63 ± 3% mpstat.cpu.all.sys%
0.09 +0.1 0.19 ± 6% mpstat.cpu.all.usr%
12756 ± 97% +168.4% 34240 ± 34% numa-vmstat.node1.nr_active_anon
12775 ± 96% +121.3% 28265 ± 57% numa-vmstat.node1.nr_anon_pages
12756 ± 97% +168.4% 34240 ± 34% numa-vmstat.node1.nr_zone_active_anon
40184 ± 28% -60.5% 15887 ± 47% numa-vmstat.node3.nr_active_anon
27539 ± 36% -61.2% 10685 ± 35% numa-vmstat.node3.nr_anon_pages
40184 ± 28% -60.5% 15887 ± 47% numa-vmstat.node3.nr_zone_active_anon
4052 ± 12% -18.2% 3315 ± 7% slabinfo.eventpoll_pwq.active_objs
4052 ± 12% -18.2% 3315 ± 7% slabinfo.eventpoll_pwq.num_objs
994.50 ± 2% -11.5% 879.75 ± 6% slabinfo.mnt_cache.active_objs
994.50 ± 2% -11.5% 879.75 ± 6% slabinfo.mnt_cache.num_objs
17986 ± 3% -9.2% 16335 ± 6% slabinfo.proc_inode_cache.active_objs
18197 ± 3% -10.2% 16335 ± 6% slabinfo.proc_inode_cache.num_objs
51039 ± 97% +168.6% 137086 ± 34% numa-meminfo.node1.Active
51039 ± 97% +168.4% 136984 ± 34% numa-meminfo.node1.Active(anon)
30173 ± 98% +168.4% 80971 ± 65% numa-meminfo.node1.AnonHugePages
51098 ± 96% +121.2% 113036 ± 57% numa-meminfo.node1.AnonPages
160864 ± 28% -60.4% 63658 ± 47% numa-meminfo.node3.Active
160695 ± 28% -60.4% 63600 ± 47% numa-meminfo.node3.Active(anon)
70866 ± 42% -72.7% 19312 ± 44% numa-meminfo.node3.AnonHugePages
110148 ± 36% -61.2% 42725 ± 35% numa-meminfo.node3.AnonPages
776854 ± 16% -21.2% 612312 ± 3% numa-meminfo.node3.MemUsed
110351 -3.8% 106144 proc-vmstat.nr_active_anon
86745 ± 2% -3.1% 84055 proc-vmstat.nr_anon_pages
27424 -1.1% 27127 proc-vmstat.nr_kernel_stack
24639 -1.9% 24166 proc-vmstat.nr_slab_reclaimable
110351 -3.8% 106144 proc-vmstat.nr_zone_active_anon
4120 ± 52% +135.0% 9683 ± 8% proc-vmstat.numa_hint_faults
1830 ± 34% +112.8% 3894 ± 40% proc-vmstat.numa_hint_faults_local
1016243 +9.6% 1113748 ± 4% proc-vmstat.numa_hit
922979 +10.6% 1020539 ± 4% proc-vmstat.numa_local
117413 ± 8% +33.5% 156705 ± 17% proc-vmstat.numa_pte_updates
1152758 +8.3% 1248795 ± 3% proc-vmstat.pgalloc_normal
1091015 +12.2% 1224544 ± 5% proc-vmstat.pgfault
532.53 ± 66% -97.6% 12.59 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
65409 ± 65% -98.1% 1246 ±173% sched_debug.cfs_rq:/.MIN_vruntime.max
5872 ± 66% -97.9% 122.21 ±173% sched_debug.cfs_rq:/.MIN_vruntime.stddev
10631 ± 5% -21.9% 8308 ± 13% sched_debug.cfs_rq:/.load.avg
447542 ± 19% -68.1% 142964 ±100% sched_debug.cfs_rq:/.load.max
40733 ± 15% -64.2% 14580 ± 78% sched_debug.cfs_rq:/.load.stddev
532.53 ± 66% -97.6% 12.59 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
65409 ± 65% -98.1% 1246 ±173% sched_debug.cfs_rq:/.max_vruntime.max
5872 ± 66% -97.9% 122.21 ±173% sched_debug.cfs_rq:/.max_vruntime.stddev
3.00 ± 32% +670.8% 23.12 ± 44% sched_debug.cfs_rq:/.nr_spread_over.max
0.70 ± 29% +262.7% 2.54 ± 36% sched_debug.cfs_rq:/.nr_spread_over.stddev
10626 ± 5% -21.9% 8304 ± 13% sched_debug.cfs_rq:/.runnable_weight.avg
446851 ± 19% -68.1% 142654 ±100% sched_debug.cfs_rq:/.runnable_weight.max
40692 ± 15% -64.2% 14570 ± 78% sched_debug.cfs_rq:/.runnable_weight.stddev
393.78 ± 15% +46.2% 575.51 sched_debug.cfs_rq:/.util_est_enqueued.avg
261.04 ± 18% +35.4% 353.35 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.stddev
180819 ± 51% -69.6% 55032 ± 54% sched_debug.cpu.avg_idle.min
1.33 -18.7% 1.08 ± 7% sched_debug.cpu.nr_running.max
1358 ± 5% -18.1% 1112 ± 17% sched_debug.cpu.nr_switches.min
14.66 ± 2% +7.2% 15.71 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
0.00 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_migratory.avg
0.17 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_migratory.max
0.01 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_migratory.stddev
0.00 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_running.avg
0.17 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_running.max
0.01 -100.0% 0.00 sched_debug.rt_rq:/.rt_nr_running.stddev
2.52 -28.6% 1.80 ± 18% sched_debug.rt_rq:/.rt_runtime.stddev
0.40 ± 57% -96.8% 0.01 ±173% sched_debug.rt_rq:/.rt_time.avg
76.25 ± 57% -96.8% 2.48 ±173% sched_debug.rt_rq:/.rt_time.max
5.49 ± 57% -96.8% 0.18 ±173% sched_debug.rt_rq:/.rt_time.stddev
1.90 +90.3% 3.61 perf-stat.i.MPKI
6.714e+09 +4.8% 7.038e+09 perf-stat.i.branch-instructions
0.19 +0.0 0.20 ± 3% perf-stat.i.branch-miss-rate%
12809243 ± 2% +11.7% 14311637 ± 2% perf-stat.i.branch-misses
54.09 -53.2 0.91 ± 16% perf-stat.i.cache-miss-rate%
28039617 ± 2% -96.6% 959554 ± 17% perf-stat.i.cache-misses
52077460 +100.8% 1.046e+08 perf-stat.i.cache-references
10.89 -5.1% 10.33 perf-stat.i.cpi
10674 ± 2% +3050.7% 336310 ± 16% perf-stat.i.cycles-between-cache-misses
0.00 ± 6% -0.0 0.00 ± 38% perf-stat.i.dTLB-load-miss-rate%
150739 ± 6% -38.0% 93505 ± 35% perf-stat.i.dTLB-load-misses
0.00 ± 4% -0.0 0.00 ± 8% perf-stat.i.dTLB-store-miss-rate%
11309 ± 3% -8.5% 10349 ± 3% perf-stat.i.dTLB-store-misses
5.437e+08 +64.0% 8.919e+08 ± 5% perf-stat.i.dTLB-stores
67.06 +8.4 75.43 perf-stat.i.iTLB-load-miss-rate%
3120825 ± 2% -36.5% 1982185 perf-stat.i.iTLB-loads
2.747e+10 +5.6% 2.901e+10 perf-stat.i.instructions
4327 ± 2% +10.6% 4785 ± 4% perf-stat.i.instructions-per-iTLB-miss
0.09 +5.4% 0.10 perf-stat.i.ipc
99.34 -6.0 93.32 perf-stat.i.node-load-miss-rate%
6282217 ± 2% -94.3% 360175 ± 16% perf-stat.i.node-load-misses
42895 ± 10% -25.1% 32135 ± 10% perf-stat.i.node-loads
99.93 -1.5 98.42 perf-stat.i.node-store-miss-rate%
6985323 ± 2% -98.2% 123964 ± 15% perf-stat.i.node-store-misses
4634 ± 14% -44.8% 2559 ± 29% perf-stat.i.node-stores
1.90 +90.2% 3.61 perf-stat.overall.MPKI
0.19 +0.0 0.20 ± 3% perf-stat.overall.branch-miss-rate%
53.84 -52.9 0.91 ± 16% perf-stat.overall.cache-miss-rate%
10.89 -5.1% 10.33 perf-stat.overall.cpi
10673 ± 2% +2909.6% 321225 ± 14% perf-stat.overall.cycles-between-cache-misses
0.00 ± 6% -0.0 0.00 ± 38% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 4% -0.0 0.00 ± 9% perf-stat.overall.dTLB-store-miss-rate%
67.10 +8.4 75.45 perf-stat.overall.iTLB-load-miss-rate%
4317 ± 2% +10.5% 4771 ± 4% perf-stat.overall.instructions-per-iTLB-miss
0.09 +5.4% 0.10 perf-stat.overall.ipc
99.32 -7.6 91.72 perf-stat.overall.node-load-miss-rate%
99.93 -1.9 98.01 perf-stat.overall.node-store-miss-rate%
22887340 -48.8% 11728959 ± 9% perf-stat.overall.path-length
6.692e+09 +4.6% 6.998e+09 perf-stat.ps.branch-instructions
12765641 ± 2% +11.5% 14231573 ± 2% perf-stat.ps.branch-misses
27943868 ± 2% -96.6% 953244 ± 17% perf-stat.ps.cache-misses
51899628 +100.4% 1.04e+08 perf-stat.ps.cache-references
150234 ± 6% -37.8% 93463 ± 35% perf-stat.ps.dTLB-load-misses
11274 ± 3% -8.7% 10293 ± 3% perf-stat.ps.dTLB-store-misses
5.419e+08 +63.5% 8.858e+08 ± 5% perf-stat.ps.dTLB-stores
3110140 ± 2% -36.7% 1967628 perf-stat.ps.iTLB-loads
2.737e+10 +5.4% 2.884e+10 perf-stat.ps.instructions
6260771 ± 2% -94.3% 357806 ± 16% perf-stat.ps.node-load-misses
42756 ± 10% -25.3% 31941 ± 10% perf-stat.ps.node-loads
6961478 ± 2% -98.2% 123211 ± 15% perf-stat.ps.node-store-misses
4627 ± 14% -45.0% 2544 ± 29% perf-stat.ps.node-stores
21.56 ± 5% -21.6 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.sigprocmask.__x64_sys_rt_sigprocmask
11.15 ± 5% -11.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__lock_task_sighand.do_send_sig_info.do_send_specific
11.05 ± 5% -11.1 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.signal_setup_done.do_signal
10.77 ± 5% -10.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.__x64_sys_rt_sigreturn.do_syscall_64
10.71 ± 5% -10.7 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.get_signal.do_signal.exit_to_usermode_loop
44.17 ± 5% -3.5 40.67 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.raise
44.18 ± 5% -3.5 40.68 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.raise
44.23 ± 5% -3.5 40.76 ± 3% perf-profile.calltrace.cycles-pp.raise
11.17 ± 6% -2.7 8.48 ± 5% perf-profile.calltrace.cycles-pp.__set_current_blocked.signal_setup_done.do_signal.exit_to_usermode_loop.do_syscall_64
11.17 ± 5% -2.7 8.49 ± 5% perf-profile.calltrace.cycles-pp.signal_setup_done.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.21 ± 5% -2.7 8.54 ± 5% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.22 ± 5% -2.7 8.54 ± 5% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.10 ± 6% -2.7 8.43 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__set_current_blocked.signal_setup_done.do_signal.exit_to_usermode_loop
11.25 ± 5% -2.6 8.64 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
11.25 ± 5% -2.6 8.64 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.82 ± 5% -2.2 8.67 ± 5% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.91 ± 5% -2.1 8.77 ± 5% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.raise
10.73 ± 5% -2.1 8.60 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
10.91 ± 5% -2.1 8.78 ± 5% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.raise
21.80 ± 5% -1.4 20.42 ± 3% perf-profile.calltrace.cycles-pp.sigprocmask.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe.raise
21.83 ± 5% -1.4 20.45 ± 3% perf-profile.calltrace.cycles-pp.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe.raise
21.78 ± 5% -1.4 20.41 ± 3% perf-profile.calltrace.cycles-pp.__set_current_blocked.sigprocmask.__x64_sys_rt_sigprocmask.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +8.4 8.40 ± 5% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.signal_setup_done.do_signal
0.00 +8.6 8.57 ± 5% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock_irq.get_signal.do_signal.exit_to_usermode_loop
0.00 +11.2 11.19 ± 7% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__lock_task_sighand.do_send_sig_info.do_send_specific
0.00 +11.2 11.25 ± 4% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.__x64_sys_rt_sigreturn.do_syscall_64
0.00 +20.2 20.22 ± 3% perf-profile.calltrace.cycles-pp.__cna_queued_spin_lock_slowpath._raw_spin_lock_irq.__set_current_blocked.sigprocmask.__x64_sys_rt_sigprocmask
65.24 ± 5% -65.2 0.00 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
54.31 ± 5% -5.7 48.62 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_irq
66.33 ± 5% -5.6 60.70 ± 2% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
66.32 ± 5% -5.6 60.69 ± 2% perf-profile.children.cycles-pp.do_syscall_64
22.12 ± 5% -4.8 17.31 ± 3% perf-profile.children.cycles-pp.do_signal
22.13 ± 5% -4.8 17.32 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
43.86 ± 5% -3.6 40.24 ± 2% perf-profile.children.cycles-pp.__set_current_blocked
44.25 ± 5% -3.5 40.78 ± 3% perf-profile.children.cycles-pp.raise
11.17 ± 5% -2.7 8.49 ± 5% perf-profile.children.cycles-pp.signal_setup_done
10.83 ± 5% -2.2 8.67 ± 5% perf-profile.children.cycles-pp.get_signal
21.80 ± 5% -1.4 20.42 ± 3% perf-profile.children.cycles-pp.sigprocmask
21.83 ± 5% -1.4 20.45 ± 3% perf-profile.children.cycles-pp.__x64_sys_rt_sigprocmask
0.33 ± 30% -0.2 0.09 ± 11% perf-profile.children.cycles-pp.apic_timer_interrupt
0.29 ± 32% -0.2 0.08 ± 10% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.21 ± 27% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.hrtimer_interrupt
0.12 ± 21% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.08 ± 6% -0.0 0.06 perf-profile.children.cycles-pp.__send_signal
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.copy_fpstate_to_sigframe
0.00 +0.1 0.05 perf-profile.children.cycles-pp.fpu__clear
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__perf_event_read_value
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.00 +0.1 0.09 ± 15% perf-profile.children.cycles-pp.smp_call_function_single
0.00 +0.1 0.09 ± 15% perf-profile.children.cycles-pp.perf_read
0.00 +0.1 0.09 ± 15% perf-profile.children.cycles-pp.perf_event_read
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.ksys_read
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.vfs_read
0.00 +0.1 0.11 perf-profile.children.cycles-pp.cna_scan_main_queue
0.00 +59.6 59.64 ± 2% perf-profile.children.cycles-pp.__cna_queued_spin_lock_slowpath
65.24 ± 5% -65.2 0.00 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.22 ± 7% -0.0 0.18 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.09 ± 15% perf-profile.self.cycles-pp.smp_call_function_single
0.00 +0.1 0.11 ± 4% perf-profile.self.cycles-pp.cna_scan_main_queue
0.00 +59.5 59.53 ± 2% perf-profile.self.cycles-pp.__cna_queued_spin_lock_slowpath
38469 ± 39% +70.1% 65425 ± 14% softirqs.CPU0.SCHED
109845 ± 11% -22.1% 85541 ± 16% softirqs.CPU1.TIMER
149776 ± 6% -64.5% 53122 ± 24% softirqs.CPU100.TIMER
136093 ± 14% -62.1% 51597 ± 22% softirqs.CPU101.TIMER
148314 ± 6% -63.1% 54795 ± 28% softirqs.CPU102.TIMER
154532 -60.4% 61147 ± 22% softirqs.CPU103.TIMER
149314 ± 7% -65.7% 51213 ± 25% softirqs.CPU104.TIMER
143064 ± 13% -64.4% 50921 ± 24% softirqs.CPU105.TIMER
149347 ± 7% -61.6% 57414 ± 17% softirqs.CPU106.TIMER
148768 ± 7% -65.6% 51139 ± 24% softirqs.CPU107.TIMER
146598 ± 9% -64.9% 51445 ± 25% softirqs.CPU108.TIMER
143443 ± 13% -61.7% 54910 ± 21% softirqs.CPU109.TIMER
148388 ± 6% -64.5% 52664 ± 25% softirqs.CPU110.TIMER
142930 ± 13% -63.7% 51818 ± 26% softirqs.CPU111.TIMER
18827 ± 76% -72.4% 5204 ± 51% softirqs.CPU112.SCHED
121929 ± 20% -56.9% 52576 ± 25% softirqs.CPU112.TIMER
148130 ± 6% -53.4% 68978 ± 23% softirqs.CPU113.TIMER
148182 ± 6% -64.7% 52365 ± 25% softirqs.CPU114.TIMER
136482 ± 14% -63.0% 50525 ± 24% softirqs.CPU115.TIMER
148636 ± 7% -65.8% 50801 ± 23% softirqs.CPU116.TIMER
138484 ± 12% -63.6% 50449 ± 24% softirqs.CPU117.TIMER
148128 ± 6% -65.8% 50625 ± 25% softirqs.CPU118.TIMER
148534 ± 7% -57.5% 63157 ± 25% softirqs.CPU119.TIMER
148691 ± 6% -72.9% 40267 ± 29% softirqs.CPU120.TIMER
148576 ± 6% -72.7% 40593 ± 29% softirqs.CPU121.TIMER
141691 ± 15% -71.1% 40896 ± 29% softirqs.CPU122.TIMER
154448 -73.9% 40344 ± 29% softirqs.CPU123.TIMER
154224 -74.1% 40010 ± 30% softirqs.CPU124.TIMER
154299 -74.0% 40089 ± 30% softirqs.CPU125.TIMER
148767 ± 6% -73.0% 40187 ± 30% softirqs.CPU126.TIMER
148721 ± 6% -73.0% 40160 ± 30% softirqs.CPU127.TIMER
141191 ± 15% -71.4% 40414 ± 29% softirqs.CPU128.TIMER
154117 -73.8% 40353 ± 29% softirqs.CPU129.TIMER
148486 ± 6% -72.6% 40619 ± 28% softirqs.CPU130.TIMER
154044 -73.6% 40628 ± 28% softirqs.CPU131.TIMER
154160 -74.0% 40146 ± 29% softirqs.CPU132.TIMER
148510 ± 6% -73.0% 40027 ± 29% softirqs.CPU133.TIMER
154120 -74.0% 40045 ± 30% softirqs.CPU134.TIMER
134015 ± 16% -70.1% 40040 ± 30% softirqs.CPU135.TIMER
148203 ± 6% -73.0% 40049 ± 30% softirqs.CPU136.TIMER
135128 ± 15% -70.4% 40006 ± 30% softirqs.CPU137.TIMER
148315 ± 6% -72.8% 40393 ± 29% softirqs.CPU138.TIMER
141559 ± 15% -71.6% 40161 ± 30% softirqs.CPU139.TIMER
115903 ± 21% -26.7% 84926 ± 10% softirqs.CPU14.TIMER
155720 -74.2% 40139 ± 29% softirqs.CPU140.TIMER
148282 ± 6% -73.1% 39917 ± 30% softirqs.CPU141.TIMER
148507 ± 6% -72.9% 40275 ± 30% softirqs.CPU142.TIMER
139283 ± 12% -71.0% 40417 ± 29% softirqs.CPU143.TIMER
135723 ± 14% -58.1% 56805 ± 45% softirqs.CPU144.TIMER
148242 ± 6% -72.2% 41209 ± 21% softirqs.CPU145.TIMER
148112 ± 6% -72.2% 41157 ± 22% softirqs.CPU146.TIMER
148205 ± 6% -67.5% 48141 ± 42% softirqs.CPU147.TIMER
153899 -73.2% 41291 ± 21% softirqs.CPU148.TIMER
148297 ± 6% -72.3% 41073 ± 21% softirqs.CPU149.TIMER
134239 ± 16% -69.2% 41320 ± 21% softirqs.CPU150.TIMER
136470 ± 13% -69.8% 41170 ± 21% softirqs.CPU151.TIMER
136242 ± 14% -69.4% 41690 ± 21% softirqs.CPU152.TIMER
148164 ± 6% -72.1% 41358 ± 21% softirqs.CPU153.TIMER
124656 ± 16% -67.0% 41151 ± 21% softirqs.CPU154.TIMER
134912 ± 15% -69.6% 40965 ± 21% softirqs.CPU155.TIMER
135587 ± 15% -69.7% 41136 ± 21% softirqs.CPU156.TIMER
126794 ± 12% -67.5% 41157 ± 21% softirqs.CPU157.TIMER
147915 ± 6% -70.5% 43639 ± 19% softirqs.CPU158.TIMER
154127 -73.5% 40884 ± 21% softirqs.CPU159.TIMER
146788 ± 11% -43.2% 83385 ± 9% softirqs.CPU16.TIMER
136264 ± 13% -69.8% 41149 ± 21% softirqs.CPU160.TIMER
153961 -73.4% 40950 ± 21% softirqs.CPU161.TIMER
135912 ± 14% -69.6% 41304 ± 21% softirqs.CPU162.TIMER
136130 ± 13% -70.0% 40898 ± 22% softirqs.CPU163.TIMER
147836 ± 6% -61.9% 56347 ± 48% softirqs.CPU164.TIMER
128395 ± 19% -68.1% 40903 ± 22% softirqs.CPU165.TIMER
135661 ± 14% -69.6% 41179 ± 21% softirqs.CPU166.TIMER
136822 ± 13% -69.4% 41933 ± 21% softirqs.CPU167.TIMER
149651 ± 7% -73.8% 39274 ± 18% softirqs.CPU168.TIMER
148486 ± 6% -73.6% 39198 ± 18% softirqs.CPU169.TIMER
114474 ± 21% -41.0% 67501 ± 31% softirqs.CPU17.TIMER
138367 ± 20% -71.3% 39763 ± 17% softirqs.CPU170.TIMER
148076 ± 6% -73.5% 39292 ± 19% softirqs.CPU171.TIMER
135262 ± 14% -70.7% 39670 ± 17% softirqs.CPU172.TIMER
141848 ± 14% -72.1% 39546 ± 18% softirqs.CPU173.TIMER
153762 -74.4% 39423 ± 17% softirqs.CPU174.TIMER
137278 ± 12% -71.2% 39588 ± 17% softirqs.CPU175.TIMER
147932 ± 6% -73.3% 39450 ± 18% softirqs.CPU176.TIMER
147733 ± 6% -72.3% 40862 ± 15% softirqs.CPU177.TIMER
142171 ± 14% -72.2% 39537 ± 18% softirqs.CPU178.TIMER
154034 -74.5% 39206 ± 18% softirqs.CPU179.TIMER
120594 ± 18% -29.5% 85068 ± 8% softirqs.CPU18.TIMER
153840 -74.0% 40043 ± 18% softirqs.CPU180.TIMER
127306 ± 15% -69.2% 39230 ± 18% softirqs.CPU181.TIMER
135478 ± 15% -71.2% 39082 ± 18% softirqs.CPU182.TIMER
140999 ± 15% -72.1% 39348 ± 17% softirqs.CPU183.TIMER
135904 ± 14% -71.1% 39301 ± 18% softirqs.CPU184.TIMER
135052 ± 15% -70.7% 39598 ± 17% softirqs.CPU185.TIMER
135746 ± 14% -71.2% 39091 ± 18% softirqs.CPU186.TIMER
147761 ± 6% -72.3% 40888 ± 16% softirqs.CPU187.TIMER
154798 -74.5% 39543 ± 18% softirqs.CPU188.TIMER
140624 ± 15% -72.1% 39236 ± 18% softirqs.CPU189.TIMER
143360 ± 7% -72.5% 39478 ± 17% softirqs.CPU190.TIMER
148430 ± 6% -73.4% 39529 ± 18% softirqs.CPU191.TIMER
8224 ± 63% +143.4% 20014 ± 57% softirqs.CPU26.SCHED
8692 ± 75% +167.3% 23236 ± 41% softirqs.CPU32.SCHED
6193 ± 52% +219.1% 19765 ± 55% softirqs.CPU39.SCHED
118030 ± 19% -28.9% 83893 ± 10% softirqs.CPU4.TIMER
5775 ± 36% +261.0% 20850 ± 56% softirqs.CPU43.SCHED
120622 ± 19% -28.6% 86131 ± 10% softirqs.CPU43.TIMER
108122 ± 14% -21.9% 84398 ± 11% softirqs.CPU44.TIMER
122385 ± 18% -30.7% 84858 ± 10% softirqs.CPU47.TIMER
139916 ± 20% -46.6% 74714 ± 27% softirqs.CPU48.TIMER
111871 ± 22% -29.3% 79100 ± 15% softirqs.CPU51.TIMER
111498 ± 22% -23.1% 85732 ± 11% softirqs.CPU53.TIMER
136254 ± 15% -37.7% 84823 ± 11% softirqs.CPU58.TIMER
139662 ± 18% -37.9% 86791 ± 11% softirqs.CPU61.TIMER
111954 ± 22% -36.0% 71681 ± 25% softirqs.CPU68.TIMER
113356 ± 22% -24.2% 85885 ± 11% softirqs.CPU72.TIMER
116408 ± 21% -24.7% 87660 ± 10% softirqs.CPU73.TIMER
122698 ± 17% -29.9% 86007 ± 12% softirqs.CPU74.TIMER
142166 ± 16% -39.3% 86352 ± 10% softirqs.CPU79.TIMER
116018 ± 20% -28.8% 82623 ± 14% softirqs.CPU8.TIMER
137480 ± 14% -63.4% 50359 ± 20% softirqs.CPU96.TIMER
155283 -66.9% 51430 ? 24% softirqs.CPU97.TIMER
151862 ? 6% -64.8% 53508 ? 24% softirqs.CPU98.TIMER
154615 -67.3% 50607 ? 24% softirqs.CPU99.TIMER
3398889 ± 16% -22.4% 2636078 ± 12% softirqs.SCHED
25265046 ± 5% -50.8% 12440482 ± 13% softirqs.TIMER
607.00 +11.9% 679.00 ? 5% interrupts.9:IO-APIC.9-fasteoi.acpi
4075 ? 21% -28.4% 2919 ? 4% interrupts.CPU0.CAL:Function_call_interrupts
6112 ? 53% +1087.1% 72552 ? 47% interrupts.CPU0.RES:Rescheduling_interrupts
607.00 +11.9% 679.00 ? 5% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
4591 ? 18% +41.6% 6501 ? 18% interrupts.CPU10.NMI:Non-maskable_interrupts
4591 ? 18% +41.6% 6501 ? 18% interrupts.CPU10.PMI:Performance_monitoring_interrupts
601943 -62.1% 228243 ? 19% interrupts.CPU100.LOC:Local_timer_interrupts
601850 -66.5% 201818 ? 13% interrupts.CPU101.LOC:Local_timer_interrupts
3306 ? 90% +119.2% 7250 ? 28% interrupts.CPU101.TLB:TLB_shootdowns
601858 -67.1% 198115 ? 15% interrupts.CPU102.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU102.TLB:TLB_shootdowns
601851 -51.7% 290646 ? 56% interrupts.CPU103.LOC:Local_timer_interrupts
602179 -66.9% 199175 ? 14% interrupts.CPU104.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU104.TLB:TLB_shootdowns
601905 -67.0% 198352 ? 15% interrupts.CPU105.LOC:Local_timer_interrupts
2927 ?111% +147.6% 7250 ? 28% interrupts.CPU105.TLB:TLB_shootdowns
602090 -56.9% 259430 ? 43% interrupts.CPU106.LOC:Local_timer_interrupts
602012 -66.0% 204685 ? 10% interrupts.CPU107.LOC:Local_timer_interrupts
601965 -66.8% 199985 ? 14% interrupts.CPU108.LOC:Local_timer_interrupts
2926 ?111% +147.8% 7251 ? 28% interrupts.CPU108.TLB:TLB_shootdowns
602165 -65.9% 205217 ? 9% interrupts.CPU109.LOC:Local_timer_interrupts
3873 ? 21% +43.2% 5546 ? 5% interrupts.CPU11.NMI:Non-maskable_interrupts
3873 ? 21% +43.2% 5546 ? 5% interrupts.CPU11.PMI:Performance_monitoring_interrupts
601906 -62.9% 223229 ? 23% interrupts.CPU110.LOC:Local_timer_interrupts
601917 -66.9% 199442 ? 14% interrupts.CPU111.LOC:Local_timer_interrupts
2927 ?111% +147.6% 7250 ? 28% interrupts.CPU111.TLB:TLB_shootdowns
602399 -63.1% 222294 ? 25% interrupts.CPU112.LOC:Local_timer_interrupts
1139 ?103% +387.6% 5556 ? 50% interrupts.CPU112.TLB:TLB_shootdowns
601913 -38.9% 367637 ? 49% interrupts.CPU113.LOC:Local_timer_interrupts
601808 -63.3% 221088 ? 25% interrupts.CPU114.LOC:Local_timer_interrupts
601881 -67.0% 198830 ? 15% interrupts.CPU115.LOC:Local_timer_interrupts
3318 ? 90% +118.5% 7250 ? 28% interrupts.CPU115.TLB:TLB_shootdowns
601825 -67.0% 198442 ? 15% interrupts.CPU116.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU116.TLB:TLB_shootdowns
601918 -67.1% 197919 ? 15% interrupts.CPU117.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU117.TLB:TLB_shootdowns
602168 -67.2% 197631 ? 15% interrupts.CPU118.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU118.TLB:TLB_shootdowns
4407 ? 21% -34.1% 2906 ? 4% interrupts.CPU12.CAL:Function_call_interrupts
601796 -69.2% 185490 ? 10% interrupts.CPU120.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU120.TLB:TLB_shootdowns
601800 -69.0% 186433 ? 9% interrupts.CPU121.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7251 ? 28% interrupts.CPU121.TLB:TLB_shootdowns
601838 -69.3% 184910 ? 10% interrupts.CPU122.LOC:Local_timer_interrupts
2926 ?111% +147.8% 7251 ? 28% interrupts.CPU122.TLB:TLB_shootdowns
601832 -69.2% 185242 ? 10% interrupts.CPU123.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU123.TLB:TLB_shootdowns
601806 -69.3% 184763 ? 10% interrupts.CPU124.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU124.TLB:TLB_shootdowns
601819 -69.2% 185388 ? 10% interrupts.CPU125.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7249 ? 28% interrupts.CPU125.TLB:TLB_shootdowns
601806 -69.3% 184822 ? 10% interrupts.CPU126.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU126.TLB:TLB_shootdowns
601885 -69.2% 185309 ? 10% interrupts.CPU127.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7250 ? 28% interrupts.CPU127.TLB:TLB_shootdowns
601841 -69.1% 186027 ? 10% interrupts.CPU128.LOC:Local_timer_interrupts
2926 ?111% +147.8% 7250 ? 28% interrupts.CPU128.TLB:TLB_shootdowns
601810 -69.1% 185827 ? 11% interrupts.CPU129.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU129.TLB:TLB_shootdowns
601819 -69.3% 184843 ? 10% interrupts.CPU130.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU130.TLB:TLB_shootdowns
601837 -69.1% 185992 ? 11% interrupts.CPU131.LOC:Local_timer_interrupts
5818 ? 30% +47.4% 8578 ? 5% interrupts.CPU131.NMI:Non-maskable_interrupts
5818 ? 30% +47.4% 8578 ? 5% interrupts.CPU131.PMI:Performance_monitoring_interrupts
3244 ? 93% +123.5% 7252 ? 28% interrupts.CPU131.TLB:TLB_shootdowns
601824 -69.1% 185690 ? 9% interrupts.CPU132.LOC:Local_timer_interrupts
5808 ? 30% +47.7% 8577 ? 5% interrupts.CPU132.NMI:Non-maskable_interrupts
5808 ? 30% +47.7% 8577 ? 5% interrupts.CPU132.PMI:Performance_monitoring_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU132.TLB:TLB_shootdowns
601829 -69.2% 185288 ? 10% interrupts.CPU133.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7250 ? 28% interrupts.CPU133.TLB:TLB_shootdowns
601811 -69.3% 184659 ? 10% interrupts.CPU134.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU134.TLB:TLB_shootdowns
601837 -69.3% 184699 ? 10% interrupts.CPU135.LOC:Local_timer_interrupts
5442 ? 45% +57.6% 8577 ? 5% interrupts.CPU135.NMI:Non-maskable_interrupts
5442 ? 45% +57.6% 8577 ? 5% interrupts.CPU135.PMI:Performance_monitoring_interrupts
2518 ?133% +187.9% 7251 ? 28% interrupts.CPU135.TLB:TLB_shootdowns
601859 -69.2% 185221 ? 10% interrupts.CPU136.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7250 ? 28% interrupts.CPU136.TLB:TLB_shootdowns
601807 -69.3% 184489 ? 10% interrupts.CPU137.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU137.TLB:TLB_shootdowns
601818 -69.2% 185438 ? 10% interrupts.CPU138.LOC:Local_timer_interrupts
3625 ? 75% +100.0% 7250 ? 28% interrupts.CPU138.TLB:TLB_shootdowns
601851 -69.2% 185648 ? 10% interrupts.CPU139.LOC:Local_timer_interrupts
2926 ?111% +147.8% 7250 ? 28% interrupts.CPU139.TLB:TLB_shootdowns
4536 ? 19% +26.2% 5726 interrupts.CPU14.NMI:Non-maskable_interrupts
4536 ? 19% +26.2% 5726 interrupts.CPU14.PMI:Performance_monitoring_interrupts
601876 -69.1% 185800 ? 10% interrupts.CPU140.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7249 ? 28% interrupts.CPU140.TLB:TLB_shootdowns
601854 -69.3% 184954 ? 10% interrupts.CPU141.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7250 ? 28% interrupts.CPU141.TLB:TLB_shootdowns
601804 -69.1% 186223 ? 10% interrupts.CPU142.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU142.TLB:TLB_shootdowns
5609 ? 11% +62.9% 9139 ? 22% interrupts.CPU143.CAL:Function_call_interrupts
601804 -69.3% 184931 ? 10% interrupts.CPU143.LOC:Local_timer_interrupts
1851 ? 29% +291.5% 7250 ? 28% interrupts.CPU143.TLB:TLB_shootdowns
601878 -71.4% 172258 ? 23% interrupts.CPU145.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU145.TLB:TLB_shootdowns
601935 -71.4% 172248 ? 23% interrupts.CPU146.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU146.TLB:TLB_shootdowns
601872 -60.0% 240901 ? 52% interrupts.CPU147.LOC:Local_timer_interrupts
601882 -71.3% 172461 ? 24% interrupts.CPU148.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU148.TLB:TLB_shootdowns
601903 -71.4% 172014 ? 23% interrupts.CPU149.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU149.TLB:TLB_shootdowns
4447 ? 22% -34.4% 2915 ? 4% interrupts.CPU15.CAL:Function_call_interrupts
600258 -71.2% 172667 ? 23% interrupts.CPU150.LOC:Local_timer_interrupts
2949 ?107% +145.9% 7251 ? 28% interrupts.CPU150.TLB:TLB_shootdowns
600184 -71.2% 172981 ? 23% interrupts.CPU151.LOC:Local_timer_interrupts
2949 ?107% +145.8% 7250 ? 28% interrupts.CPU151.TLB:TLB_shootdowns
600158 -71.3% 172256 ? 23% interrupts.CPU152.LOC:Local_timer_interrupts
2949 ?107% +145.7% 7245 ? 28% interrupts.CPU152.TLB:TLB_shootdowns
601824 -71.5% 171400 ? 23% interrupts.CPU153.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU153.TLB:TLB_shootdowns
5217 ? 17% +73.6% 9058 ? 22% interrupts.CPU154.CAL:Function_call_interrupts
600745 -71.4% 171518 ? 23% interrupts.CPU154.LOC:Local_timer_interrupts
1177 ? 60% +515.8% 7250 ? 28% interrupts.CPU154.TLB:TLB_shootdowns
600232 -71.2% 172938 ? 23% interrupts.CPU155.LOC:Local_timer_interrupts
2949 ?107% +145.9% 7250 ? 28% interrupts.CPU155.TLB:TLB_shootdowns
601922 -71.4% 172242 ? 24% interrupts.CPU156.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7251 ? 28% interrupts.CPU156.TLB:TLB_shootdowns
601370 -71.4% 171986 ? 23% interrupts.CPU157.LOC:Local_timer_interrupts
2631 ?127% +175.5% 7250 ? 28% interrupts.CPU157.TLB:TLB_shootdowns
601813 -71.3% 172958 ? 22% interrupts.CPU158.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7251 ? 28% interrupts.CPU158.TLB:TLB_shootdowns
601843 -71.5% 171664 ? 23% interrupts.CPU159.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU159.TLB:TLB_shootdowns
601832 -71.4% 172005 ? 23% interrupts.CPU160.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU160.TLB:TLB_shootdowns
601853 -71.4% 172032 ? 23% interrupts.CPU161.LOC:Local_timer_interrupts
3244 ? 93% +123.5% 7250 ? 28% interrupts.CPU161.TLB:TLB_shootdowns
601899 -71.4% 171918 ? 23% interrupts.CPU162.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU162.TLB:TLB_shootdowns
601878 -71.5% 171664 ? 23% interrupts.CPU163.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU163.TLB:TLB_shootdowns
600238 -71.4% 171572 ? 23% interrupts.CPU165.LOC:Local_timer_interrupts
2251 ?155% +222.0% 7250 ? 28% interrupts.CPU165.TLB:TLB_shootdowns
601773 -71.3% 172794 ? 24% interrupts.CPU166.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7250 ? 28% interrupts.CPU166.TLB:TLB_shootdowns
601795 -71.3% 172782 ? 24% interrupts.CPU167.LOC:Local_timer_interrupts
3306 ? 90% +119.3% 7251 ? 28% interrupts.CPU167.TLB:TLB_shootdowns
601936 -68.6% 188943 ? 11% interrupts.CPU168.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7250 ? 28% interrupts.CPU168.TLB:TLB_shootdowns
601956 -68.7% 188544 ? 10% interrupts.CPU169.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU169.TLB:TLB_shootdowns
599814 -32.9% 402293 ? 37% interrupts.CPU17.LOC:Local_timer_interrupts
5089 ? 26% +80.1% 9163 ? 22% interrupts.CPU170.CAL:Function_call_interrupts
601727 -68.4% 190293 ? 11% interrupts.CPU170.LOC:Local_timer_interrupts
1169 ? 91% +520.0% 7249 ? 28% interrupts.CPU170.TLB:TLB_shootdowns
602029 -68.8% 188094 ? 10% interrupts.CPU171.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU171.TLB:TLB_shootdowns
601979 -68.5% 189400 ? 11% interrupts.CPU172.LOC:Local_timer_interrupts
3301 ? 90% +119.6% 7250 ? 28% interrupts.CPU172.TLB:TLB_shootdowns
602021 -68.6% 189122 ? 11% interrupts.CPU173.LOC:Local_timer_interrupts
2976 ?107% +143.6% 7249 ? 28% interrupts.CPU173.TLB:TLB_shootdowns
601927 -68.5% 189306 ? 11% interrupts.CPU174.LOC:Local_timer_interrupts
3250 ? 93% +123.1% 7250 ? 28% interrupts.CPU174.TLB:TLB_shootdowns
601137 -68.4% 190126 ? 10% interrupts.CPU175.LOC:Local_timer_interrupts
2951 ?107% +145.6% 7249 ? 28% interrupts.CPU175.TLB:TLB_shootdowns
601903 -68.5% 189781 ? 10% interrupts.CPU176.LOC:Local_timer_interrupts
3625 ? 75% +100.0% 7251 ? 28% interrupts.CPU176.TLB:TLB_shootdowns
601903 -68.3% 190951 ? 11% interrupts.CPU177.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7250 ? 28% interrupts.CPU177.TLB:TLB_shootdowns
601407 -68.4% 189929 ? 11% interrupts.CPU178.LOC:Local_timer_interrupts
2573 ?130% +181.7% 7250 ? 28% interrupts.CPU178.TLB:TLB_shootdowns
601943 -68.6% 188869 ? 11% interrupts.CPU179.LOC:Local_timer_interrupts
3249 ? 93% +123.1% 7250 ? 28% interrupts.CPU179.TLB:TLB_shootdowns
4557 ? 19% +25.7% 5726 interrupts.CPU18.NMI:Non-maskable_interrupts
4557 ? 19% +25.7% 5726 interrupts.CPU18.PMI:Performance_monitoring_interrupts
601927 -68.7% 188588 ? 10% interrupts.CPU180.LOC:Local_timer_interrupts
3250 ? 93% +123.0% 7250 ? 28% interrupts.CPU180.TLB:TLB_shootdowns
601250 -68.4% 189864 ? 10% interrupts.CPU181.LOC:Local_timer_interrupts
2426 ?140% +198.7% 7248 ? 28% interrupts.CPU181.TLB:TLB_shootdowns
600443 -68.6% 188607 ? 10% interrupts.CPU182.LOC:Local_timer_interrupts
2955 ?107% +145.3% 7248 ? 28% interrupts.CPU182.TLB:TLB_shootdowns
601967 -68.3% 191018 ? 11% interrupts.CPU183.LOC:Local_timer_interrupts
2936 ?110% +147.0% 7250 ? 28% interrupts.CPU183.TLB:TLB_shootdowns
602042 -68.7% 188708 ? 10% interrupts.CPU184.LOC:Local_timer_interrupts
3318 ? 90% +118.5% 7250 ? 28% interrupts.CPU184.TLB:TLB_shootdowns
600518 -68.3% 190238 ? 11% interrupts.CPU185.LOC:Local_timer_interrupts
2950 ?107% +145.8% 7251 ? 28% interrupts.CPU185.TLB:TLB_shootdowns
601960 -68.8% 188008 ? 10% interrupts.CPU186.LOC:Local_timer_interrupts
3315 ? 90% +118.7% 7251 ? 28% interrupts.CPU186.TLB:TLB_shootdowns
602001 -68.3% 190561 ? 11% interrupts.CPU187.LOC:Local_timer_interrupts
3625 ? 75% +100.0% 7250 ? 28% interrupts.CPU187.TLB:TLB_shootdowns
601971 -68.2% 191404 ? 11% interrupts.CPU188.LOC:Local_timer_interrupts
3624 ? 75% +100.0% 7251 ? 28% interrupts.CPU188.TLB:TLB_shootdowns
602240 -68.6% 189382 ? 11% interrupts.CPU189.LOC:Local_timer_interrupts
2572 ?130% +181.9% 7251 ? 28% interrupts.CPU189.TLB:TLB_shootdowns
4060 ? 22% -28.4% 2908 ? 4% interrupts.CPU19.CAL:Function_call_interrupts
601967 -68.3% 190865 ? 11% interrupts.CPU190.LOC:Local_timer_interrupts
3435 ? 84% +111.1% 7251 ? 28% interrupts.CPU190.TLB:TLB_shootdowns
601980 -68.1% 191744 ? 11% interrupts.CPU191.LOC:Local_timer_interrupts
3538 ? 78% +100.9% 7108 ? 26% interrupts.CPU191.TLB:TLB_shootdowns
3780 ? 9% -22.6% 2925 ? 4% interrupts.CPU2.CAL:Function_call_interrupts
4561 ? 19% +25.7% 5734 interrupts.CPU2.NMI:Non-maskable_interrupts
4561 ? 19% +25.7% 5734 interrupts.CPU2.PMI:Performance_monitoring_interrupts
1803 ?146% +492.1% 10677 ? 76% interrupts.CPU2.RES:Rescheduling_interrupts
3743 ? 9% -22.4% 2903 ? 4% interrupts.CPU20.CAL:Function_call_interrupts
4511 ? 19% +26.9% 5724 interrupts.CPU20.NMI:Non-maskable_interrupts
4511 ? 19% +26.9% 5724 interrupts.CPU20.PMI:Performance_monitoring_interrupts
4065 ? 22% -28.6% 2901 ? 5% interrupts.CPU21.CAL:Function_call_interrupts
3742 ? 9% -22.2% 2912 ? 4% interrupts.CPU22.CAL:Function_call_interrupts
4527 ? 19% +26.5% 5727 interrupts.CPU22.NMI:Non-maskable_interrupts
4527 ? 19% +26.5% 5727 interrupts.CPU22.PMI:Performance_monitoring_interrupts
3489 ? 4% -15.7% 2941 ? 3% interrupts.CPU24.CAL:Function_call_interrupts
3829 ? 11% -22.8% 2957 ? 3% interrupts.CPU25.CAL:Function_call_interrupts
4538 ? 19% +26.3% 5730 interrupts.CPU25.NMI:Non-maskable_interrupts
4538 ? 19% +26.3% 5730 interrupts.CPU25.PMI:Performance_monitoring_interrupts
4497 ? 22% -34.6% 2943 ? 3% interrupts.CPU26.CAL:Function_call_interrupts
4162 ? 17% -28.8% 2964 ? 3% interrupts.CPU27.CAL:Function_call_interrupts
4161 ? 17% -29.0% 2954 ? 3% interrupts.CPU28.CAL:Function_call_interrupts
4199 ? 17% -29.5% 2960 ? 3% interrupts.CPU29.CAL:Function_call_interrupts
4053 ? 21% -28.1% 2912 ? 5% interrupts.CPU3.CAL:Function_call_interrupts
3816 ? 11% -34.7% 2494 ? 33% interrupts.CPU30.CAL:Function_call_interrupts
4584 ? 19% +27.7% 5854 ? 3% interrupts.CPU30.NMI:Non-maskable_interrupts
4584 ? 19% +27.7% 5854 ? 3% interrupts.CPU30.PMI:Performance_monitoring_interrupts
3817 ? 11% -34.2% 2513 ? 34% interrupts.CPU31.CAL:Function_call_interrupts
4570 ? 19% +27.7% 5834 ? 3% interrupts.CPU31.NMI:Non-maskable_interrupts
4570 ? 19% +27.7% 5834 ? 3% interrupts.CPU31.PMI:Performance_monitoring_interrupts
4523 ? 22% -34.5% 2963 ? 3% interrupts.CPU32.CAL:Function_call_interrupts
4208 ? 17% -29.6% 2964 ? 3% interrupts.CPU33.CAL:Function_call_interrupts
3844 ? 12% -23.0% 2961 ? 3% interrupts.CPU34.CAL:Function_call_interrupts
4540 ? 19% +26.1% 5727 interrupts.CPU34.NMI:Non-maskable_interrupts
4540 ? 19% +26.1% 5727 interrupts.CPU34.PMI:Performance_monitoring_interrupts
4227 ? 17% -30.1% 2956 ? 3% interrupts.CPU35.CAL:Function_call_interrupts
4210 ? 17% -29.7% 2959 ? 3% interrupts.CPU36.CAL:Function_call_interrupts
3825 ? 12% -22.8% 2954 ? 3% interrupts.CPU37.CAL:Function_call_interrupts
4537 ? 19% +26.2% 5725 interrupts.CPU37.NMI:Non-maskable_interrupts
4537 ? 19% +26.2% 5725 interrupts.CPU37.PMI:Performance_monitoring_interrupts
4210 ? 17% -29.7% 2960 ? 3% interrupts.CPU38.CAL:Function_call_interrupts
4238 ? 11% +35.2% 5728 interrupts.CPU38.NMI:Non-maskable_interrupts
4238 ? 11% +35.2% 5728 interrupts.CPU38.PMI:Performance_monitoring_interrupts
4940 ? 18% -40.1% 2960 ? 3% interrupts.CPU39.CAL:Function_call_interrupts
4567 ? 19% +25.4% 5729 interrupts.CPU4.NMI:Non-maskable_interrupts
4567 ? 19% +25.4% 5729 interrupts.CPU4.PMI:Performance_monitoring_interrupts
3829 ? 12% -22.7% 2960 ? 3% interrupts.CPU40.CAL:Function_call_interrupts
3854 ? 20% +48.6% 5728 interrupts.CPU40.NMI:Non-maskable_interrupts
3854 ? 20% +48.6% 5728 interrupts.CPU40.PMI:Performance_monitoring_interrupts
4149 ? 24% -29.0% 2944 ? 4% interrupts.CPU41.CAL:Function_call_interrupts
3823 ? 12% -21.9% 2987 ? 2% interrupts.CPU42.CAL:Function_call_interrupts
3844 ? 20% +49.0% 5728 interrupts.CPU42.NMI:Non-maskable_interrupts
3844 ? 20% +49.0% 5728 interrupts.CPU42.PMI:Performance_monitoring_interrupts
4525 ? 23% -34.6% 2961 ? 3% interrupts.CPU43.CAL:Function_call_interrupts
4198 ? 17% -29.5% 2960 ? 3% interrupts.CPU44.CAL:Function_call_interrupts
4224 ? 12% +30.6% 5519 ? 6% interrupts.CPU44.NMI:Non-maskable_interrupts
4224 ? 12% +30.6% 5519 ? 6% interrupts.CPU44.PMI:Performance_monitoring_interrupts
3813 ? 12% -22.7% 2948 ? 3% interrupts.CPU45.CAL:Function_call_interrupts
3838 ? 20% +49.2% 5728 interrupts.CPU45.NMI:Non-maskable_interrupts
3838 ? 20% +49.2% 5728 interrupts.CPU45.PMI:Performance_monitoring_interrupts
3815 ? 12% -22.4% 2959 ? 3% interrupts.CPU46.CAL:Function_call_interrupts
3898 ? 20% +46.9% 5728 interrupts.CPU46.NMI:Non-maskable_interrupts
3898 ? 20% +46.9% 5728 interrupts.CPU46.PMI:Performance_monitoring_interrupts
5591 ? 52% -47.1% 2959 ? 3% interrupts.CPU47.CAL:Function_call_interrupts
3771 ? 11% -21.6% 2958 ? 4% interrupts.CPU49.CAL:Function_call_interrupts
4067 ? 21% -28.0% 2926 ? 4% interrupts.CPU5.CAL:Function_call_interrupts
3771 ? 11% -38.5% 2319 ? 40% interrupts.CPU50.CAL:Function_call_interrupts
4149 ? 17% -29.8% 2914 ? 3% interrupts.CPU52.CAL:Function_call_interrupts
3763 ? 11% -22.5% 2916 ? 3% interrupts.CPU53.CAL:Function_call_interrupts
4437 ? 24% -34.3% 2916 ? 3% interrupts.CPU54.CAL:Function_call_interrupts
4465 ? 23% -33.7% 2961 ? 3% interrupts.CPU55.CAL:Function_call_interrupts
4444 ? 23% -34.2% 2925 ? 3% interrupts.CPU56.CAL:Function_call_interrupts
3762 ? 11% -21.2% 2966 ? 4% interrupts.CPU57.CAL:Function_call_interrupts
6185 ? 43% -52.9% 2916 ? 3% interrupts.CPU58.CAL:Function_call_interrupts
4434 ? 24% -34.2% 2916 ? 3% interrupts.CPU59.CAL:Function_call_interrupts
3763 ? 9% -22.4% 2919 ? 4% interrupts.CPU6.CAL:Function_call_interrupts
4626 ? 18% +23.9% 5730 interrupts.CPU6.NMI:Non-maskable_interrupts
4626 ? 18% +23.9% 5730 interrupts.CPU6.PMI:Performance_monitoring_interrupts
4078 ? 24% -28.5% 2917 ? 3% interrupts.CPU60.CAL:Function_call_interrupts
4733 ? 25% -38.4% 2914 ? 3% interrupts.CPU61.CAL:Function_call_interrupts
3757 ? 11% -22.3% 2920 ? 3% interrupts.CPU62.CAL:Function_call_interrupts
4143 ? 17% -30.3% 2888 ? 3% interrupts.CPU63.CAL:Function_call_interrupts
4070 ? 24% -28.3% 2917 ? 3% interrupts.CPU64.CAL:Function_call_interrupts
4146 ? 17% -30.9% 2863 ? 4% interrupts.CPU65.CAL:Function_call_interrupts
4074 ? 23% -28.4% 2917 ? 3% interrupts.CPU66.CAL:Function_call_interrupts
4077 ? 24% -28.4% 2918 ? 3% interrupts.CPU67.CAL:Function_call_interrupts
598385 -21.5% 470005 ? 29% interrupts.CPU68.LOC:Local_timer_interrupts
3386 ? 35% +58.7% 5375 ? 11% interrupts.CPU68.NMI:Non-maskable_interrupts
3386 ? 35% +58.7% 5375 ? 11% interrupts.CPU68.PMI:Performance_monitoring_interrupts
5123 ? 20% -43.1% 2912 ? 3% interrupts.CPU69.CAL:Function_call_interrupts
1374 ? 69% -99.9% 1.50 ?173% interrupts.CPU69.TLB:TLB_shootdowns
4068 ? 23% -28.4% 2911 ? 3% interrupts.CPU70.CAL:Function_call_interrupts
4065 ? 23% -28.4% 2911 ? 3% interrupts.CPU71.CAL:Function_call_interrupts
3768 ? 11% -21.6% 2955 ? 4% interrupts.CPU72.CAL:Function_call_interrupts
3772 ? 11% -21.6% 2956 ? 3% interrupts.CPU73.CAL:Function_call_interrupts
6224 ? 51% -52.4% 2963 ? 3% interrupts.CPU74.CAL:Function_call_interrupts
2469 ?138% -99.8% 5.00 ?103% interrupts.CPU74.TLB:TLB_shootdowns
3773 ? 12% -21.4% 2965 ? 3% interrupts.CPU75.CAL:Function_call_interrupts
4095 ? 24% -27.7% 2960 ? 3% interrupts.CPU76.CAL:Function_call_interrupts
340.00 ?158% -99.4% 2.00 ?173% interrupts.CPU76.TLB:TLB_shootdowns
4462 ? 23% -33.6% 2964 ? 3% interrupts.CPU77.CAL:Function_call_interrupts
704.75 ? 98% -99.1% 6.50 ?105% interrupts.CPU77.TLB:TLB_shootdowns
4135 ? 17% -28.3% 2964 ? 3% interrupts.CPU78.CAL:Function_call_interrupts
438.75 ?143% -98.2% 8.00 ?145% interrupts.CPU78.TLB:TLB_shootdowns
4428 ? 24% -33.0% 2966 ? 3% interrupts.CPU79.CAL:Function_call_interrupts
3755 ? 9% -22.7% 2902 ? 4% interrupts.CPU8.CAL:Function_call_interrupts
3812 ? 10% -22.3% 2961 ? 3% interrupts.CPU80.CAL:Function_call_interrupts
3763 ? 11% -21.5% 2952 ? 3% interrupts.CPU81.CAL:Function_call_interrupts
4803 ? 21% -48.8% 2457 ? 35% interrupts.CPU82.CAL:Function_call_interrupts
4178 ? 19% -29.1% 2963 ? 3% interrupts.CPU83.CAL:Function_call_interrupts
4212 ? 18% -29.6% 2963 ? 3% interrupts.CPU84.CAL:Function_call_interrupts
393.75 ?165% -98.2% 7.25 ?150% interrupts.CPU84.TLB:TLB_shootdowns
5055 ? 21% -41.3% 2966 ? 3% interrupts.CPU85.CAL:Function_call_interrupts
4504 ? 24% -34.1% 2966 ? 3% interrupts.CPU86.CAL:Function_call_interrupts
688.25 ?168% -98.9% 7.75 ?121% interrupts.CPU86.TLB:TLB_shootdowns
4514 ? 23% -34.4% 2963 ? 3% interrupts.CPU87.CAL:Function_call_interrupts
4135 ? 22% -28.3% 2965 ? 3% interrupts.CPU88.CAL:Function_call_interrupts
4491 ? 24% -34.1% 2960 ? 3% interrupts.CPU89.CAL:Function_call_interrupts
4359 ? 21% -33.2% 2910 ? 4% interrupts.CPU9.CAL:Function_call_interrupts
4127 ? 23% -28.3% 2960 ? 3% interrupts.CPU90.CAL:Function_call_interrupts
4720 ? 16% +21.4% 5732 interrupts.CPU90.NMI:Non-maskable_interrupts
4720 ? 16% +21.4% 5732 interrupts.CPU90.PMI:Performance_monitoring_interrupts
3823 ? 11% -22.7% 2953 ? 3% interrupts.CPU91.CAL:Function_call_interrupts
4088 ? 35% +40.2% 5731 interrupts.CPU91.NMI:Non-maskable_interrupts
4088 ? 35% +40.2% 5731 interrupts.CPU91.PMI:Performance_monitoring_interrupts
3823 ? 11% -22.8% 2953 ? 3% interrupts.CPU92.CAL:Function_call_interrupts
4879 ? 22% -39.4% 2955 ? 3% interrupts.CPU93.CAL:Function_call_interrupts
4103 ? 20% -28.0% 2954 ? 3% interrupts.CPU94.CAL:Function_call_interrupts
4236 ? 30% +35.3% 5731 interrupts.CPU94.NMI:Non-maskable_interrupts
4236 ? 30% +35.3% 5731 interrupts.CPU94.PMI:Performance_monitoring_interrupts
3850 ? 13% -22.7% 2976 ? 4% interrupts.CPU95.CAL:Function_call_interrupts
4123 ? 35% +39.3% 5742 interrupts.CPU95.NMI:Non-maskable_interrupts
4123 ? 35% +39.3% 5742 interrupts.CPU95.PMI:Performance_monitoring_interrupts
601869 -66.9% 198998 ? 15% interrupts.CPU96.LOC:Local_timer_interrupts
3323 ? 89% +117.9% 7240 ? 28% interrupts.CPU96.TLB:TLB_shootdowns
602062 -66.0% 204835 ? 10% interrupts.CPU97.LOC:Local_timer_interrupts
601868 -67.1% 197900 ? 15% interrupts.CPU98.LOC:Local_timer_interrupts
3624 ? 75% +100.1% 7251 ? 28% interrupts.CPU98.TLB:TLB_shootdowns
601847 -67.2% 197325 ? 15% interrupts.CPU99.LOC:Local_timer_interrupts
3245 ? 93% +123.4% 7250 ? 28% interrupts.CPU99.TLB:TLB_shootdowns
1.153e+08 -36.6% 73117854 ± 8% interrupts.LOC:Local_timer_interrupts
350781 ± 75% +100.1% 701830 ± 28% interrupts.TLB:TLB_shootdowns
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


Attachments:
(No filename) (185.19 kB)
config-5.4.0-rc6-00240-gad3836e30e6f5 (203.89 kB)
job-script (7.57 kB)
job.yaml (4.98 kB)
reproduce (320.00 B)

2019-11-20 17:19:57

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

Hi Alex,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.4-rc8 next-20191120]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
config: i386-randconfig-f003-20191120 (attached as .config)
compiler: gcc-7 (Debian 7.4.0-14) 7.4.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All error/warnings (new ones prefixed by >>):

In file included from include/linux/export.h:42:0,
from include/linux/linkage.h:7,
from include/linux/kernel.h:8,
from include/linux/list.h:9,
from include/linux/smp.h:12,
from kernel/locking/qspinlock.c:16:
kernel/locking/qspinlock_cna.h: In function 'cna_init_nodes':
>> include/linux/compiler.h:350:38: error: call to '__compiletime_assert_80' declared with attribute error: BUILD_BUG_ON failed: sizeof(struct cna_node) > sizeof(struct qnode)
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^
include/linux/compiler.h:331:4: note: in definition of macro '__compiletime_assert'
prefix ## suffix(); \
^~~~~~
include/linux/compiler.h:350:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:50:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
^~~~~~~~~~~~~~~~
>> kernel/locking/qspinlock_cna.h:80:2: note: in expansion of macro 'BUILD_BUG_ON'
BUILD_BUG_ON(sizeof(struct cna_node) > sizeof(struct qnode));
^~~~~~~~~~~~

vim +/__compiletime_assert_80 +350 include/linux/compiler.h

9a8ab1c39970a4 Daniel Santos 2013-02-21 336
9a8ab1c39970a4 Daniel Santos 2013-02-21 337 #define _compiletime_assert(condition, msg, prefix, suffix) \
9a8ab1c39970a4 Daniel Santos 2013-02-21 338 __compiletime_assert(condition, msg, prefix, suffix)
9a8ab1c39970a4 Daniel Santos 2013-02-21 339
9a8ab1c39970a4 Daniel Santos 2013-02-21 340 /**
9a8ab1c39970a4 Daniel Santos 2013-02-21 341 * compiletime_assert - break build and emit msg if condition is false
9a8ab1c39970a4 Daniel Santos 2013-02-21 342 * @condition: a compile-time constant condition to check
9a8ab1c39970a4 Daniel Santos 2013-02-21 343 * @msg: a message to emit if condition is false
9a8ab1c39970a4 Daniel Santos 2013-02-21 344 *
9a8ab1c39970a4 Daniel Santos 2013-02-21 345 * In tradition of POSIX assert, this macro will break the build if the
9a8ab1c39970a4 Daniel Santos 2013-02-21 346 * supplied condition is *false*, emitting the supplied error message if the
9a8ab1c39970a4 Daniel Santos 2013-02-21 347 * compiler has support to do so.
9a8ab1c39970a4 Daniel Santos 2013-02-21 348 */
9a8ab1c39970a4 Daniel Santos 2013-02-21 349 #define compiletime_assert(condition, msg) \
9a8ab1c39970a4 Daniel Santos 2013-02-21 @350 _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
9a8ab1c39970a4 Daniel Santos 2013-02-21 351

:::::: The code at line 350 was first introduced by commit
:::::: 9a8ab1c39970a4938a72d94e6fd13be88a797590 bug.h, compiler.h: introduce compiletime_assert & BUILD_BUG_ON_MSG

:::::: TO: Daniel Santos <[email protected]>
:::::: CC: Linus Torvalds <[email protected]>

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/[email protected] Intel Corporation


Attachments:
(No filename) (4.37 kB)
.config.gz (36.79 kB)

2019-11-22 18:32:06

by Alex Kogan

[permalink] [raw]
Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock



> On Nov 20, 2019, at 10:16 AM, kbuild test robot <[email protected]> wrote:
>
> Hi Alex,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v5.4-rc8 next-20191120]
> [if your patch is applied to the wrong git tree, please drop us a note to help
> improve the system. BTW, we also suggest to use '--base' option to specify the
> base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
>
> url: https://github.com/0day-ci/linux/commits/Alex-Kogan/locking-qspinlock-Rename-mcs-lock-unlock-macros-and-make-them-more-generic/20191109-180535
> base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 0058b0a506e40d9a2c62015fe92eb64a44d78cd9
> config: i386-randconfig-f003-20191120 (attached as .config)
> compiler: gcc-7 (Debian 7.4.0-14) 7.4.0
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=i386
>
> If you fix the issue, kindly add following tag
> Reported-by: kbuild test robot <[email protected]>
>
> All error/warnings (new ones prefixed by >>):
>
> In file included from include/linux/export.h:42:0,
> from include/linux/linkage.h:7,
> from include/linux/kernel.h:8,
> from include/linux/list.h:9,
> from include/linux/smp.h:12,
> from kernel/locking/qspinlock.c:16:
> kernel/locking/qspinlock_cna.h: In function 'cna_init_nodes':
>>> include/linux/compiler.h:350:38: error: call to '__compiletime_assert_80' declared with attribute error: BUILD_BUG_ON failed: sizeof(struct cna_node) > sizeof(struct qnode)
> _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
> ^
> include/linux/compiler.h:331:4: note: in definition of macro '__compiletime_assert'
> prefix ## suffix(); \
> ^~~~~~
> include/linux/compiler.h:350:2: note: in expansion of macro '_compiletime_assert'
> _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
> ^~~~~~~~~~~~~~~~~~~
> include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
> #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
> ^~~~~~~~~~~~~~~~~~
> include/linux/build_bug.h:50:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
> BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
> ^~~~~~~~~~~~~~~~
>>> kernel/locking/qspinlock_cna.h:80:2: note: in expansion of macro 'BUILD_BUG_ON'
> BUILD_BUG_ON(sizeof(struct cna_node) > sizeof(struct qnode));
> ^~~~~~~~~~~~

Consider the following definition of qnode:

struct qnode {
struct mcs_spinlock mcs;
#if defined(CONFIG_PARAVIRT_SPINLOCKS) || defined(CONFIG_NUMA_AWARE_SPINLOCKS)
long reserved[2];
#endif
};

and this is how cna_node is defined:

struct cna_node {
struct mcs_spinlock mcs;
int numa_node;
u32 encoded_tail;
u32 pre_scan_result; /* 0, 1, 2 or encoded tail */
u32 intra_count;
};

Since long is 32 bits on i386, we get the compilation error above.

We could try to squeeze the CNA-specific fields into 64 bits on i386 (or any
32-bit architecture, for that matter). Note that an encoded tail pointer requires
up to 24 bits, and we have two of those. We would also want different field
encodings for 32- vs. 64-bit architectures, and all of this would be quite ugly.

So instead we should probably either change the definition of @reserved in qnode
to long long, or perhaps disable CNA on 32-bit architectures altogether?
I would certainly prefer the former, especially as it requires the least amount
of code/config changes.

Any objections / thoughts?

Thanks,
— Alex

2019-11-22 19:31:38

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock

On 11/22/19 1:28 PM, Alex Kogan wrote:
>
>> [... kbuild test robot report trimmed; see upthread ...]
> Consider the following definition of qnode:
>
> struct qnode {
> struct mcs_spinlock mcs;
> #if defined(CONFIG_PARAVIRT_SPINLOCKS) || defined(CONFIG_NUMA_AWARE_SPINLOCKS)
> long reserved[2];
> #endif
> };
>
> and this is how cna_node is defined:
>
> struct cna_node {
> struct mcs_spinlock mcs;
> int numa_node;
> u32 encoded_tail;
> u32 pre_scan_result; /* 0, 1, 2 or encoded tail */
> u32 intra_count;
> };
>
> Since long is 32 bit on i386, we get the compilation error above.
>
> We can try and squeeze CNA-specific fields into 64 bit on i386 (or any 32bit
> architecture for that matter). Note that an encoded tail pointer requires up
> to 24 bits, and we have two of those. We would want different field encodings
> for 32 vs 64bit architectures, and this all will be quite ugly.
>
> So instead we should probably either change the definition of @reserved in qnode
> to long long, or perhaps disable CNA on 32bit architectures altogether?
> I would certainly prefer the former, especially as it requires the least amount
> of code/config changes.
>
> Any objections / thoughts?
>
> Thanks,
> — Alex
>
The easy way out is to restrict NUMA qspinlock to 64-bit only. There
aren't that many 32-bit NUMA systems out there that we have to worry about.

Just add "depends on 64BIT" to the config entry.
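
For reference, the suggested restriction is a one-line addition to the Kconfig
entry. The sketch below is illustrative: the surrounding dependencies are
assumptions about what the series' arch/x86/Kconfig entry looks like, and only
the "depends on 64BIT" line is the proposed change:

config NUMA_AWARE_SPINLOCKS
	bool "Numa-aware spinlocks"
	depends on NUMA
	depends on QUEUED_SPINLOCKS
	# 32-bit qnodes have no room for the CNA-specific fields (see above)
	depends on 64BIT
	# For now, the boot-time patching also requires paravirt spinlocks
	depends on PARAVIRT_SPINLOCKS
	default y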

Cheers,
Longman


2019-11-22 19:57:33

by Alex Kogan

[permalink] [raw]
Subject: Re: [PATCH v6 3/5] locking/qspinlock: Introduce CNA into the slow path of qspinlock



> On Nov 22, 2019, at 2:29 PM, Waiman Long <[email protected]> wrote:
>
> On 11/22/19 1:28 PM, Alex Kogan wrote:
>>
>>> [... kbuild test robot report trimmed; see upthread ...]
>> [... analysis of the qnode/cna_node size mismatch trimmed; see upthread ...]
>>
> The easy way out is to restrict NUMA qspinlock to 64-bit only. There
> aren't that many 32-bit NUMA systems out there that we have to worry about.
>
> Just add "depends on 64BIT" to the config entry.
Ok, will do.

Thanks,
— Alex