2016-12-05 10:25:43

by Pan Xinhui

Subject: [PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc

Hi All,
this is the qspinlock (fair lock) patchset. The patches apply and build
successfully.
The patches are based on linux-next.
qspinlock avoids waiter starvation. It is about as fast in the single-threaded
case and can be much faster under high contention, especially when the
spinlock is embedded within the data structure it protects.

v7 -> v8:
Add one patch to drop a function call from the native qspinlock unlock path.
Enabling qspinlock is now a compile-time (Kconfig) option.
Rebase onto linux-next (4.9-rc7).
v6 -> v7:
Rebase onto 4.8-rc4.
v1 -> v6:
Too many details to list; snipped.

Some benchmark results below.

perf bench:
These numbers are ops per second, so higher is better.
*******************************************
on pSeries with 32 vcpus, 32GB memory, pHyp.
------------------------------------------------------------------------------------
test case | pv-qspinlock | qspinlock | current-spinlock
------------------------------------------------------------------------------------
futex hash | 618572 | 552332 | 553788
futex lock-pi | 364 | 364 | 364
sched pipe | 78984 | 76060 | 81454
------------------------------------------------------------------------------------

unix bench:
These numbers are scores, so higher is better.
************************************************
on PowerNV with 16 cores (SMT off), 32GB memory:
-------------
pv-qspinlock and qspinlock show very similar results here because pv-qspinlock
falls back to the native version, adding only one callback's overhead.
------------------------------------------------------------------------------------
test case                             | pv-qspinlock and qspinlock | current-spinlock
------------------------------------------------------------------------------------
Execl Throughput                      |                      761.1 |            761.4
File Copy 1024 bufsize 2000 maxblocks |                     1259.8 |           1286.6
File Copy 256 bufsize 500 maxblocks   |                      782.2 |            790.3
File Copy 4096 bufsize 8000 maxblocks |                     2741.5 |           2817.4
Pipe Throughput                       |                     1063.2 |           1036.7
Pipe-based Context Switching          |                      284.7 |            281.1
Process Creation                      |                      679.6 |            649.1
Shell Scripts (1 concurrent)          |                     1933.2 |           1922.9
Shell Scripts (8 concurrent)          |                     5003.3 |           4899.8
System Call Overhead                  |                      900.6 |            896.8
------------------------------------------------------------------------------------
System Benchmarks Index Score         |                     1139.3 |           1133.0
------------------------------------------------------------------------------------

*******************************************
on pSeries with 32 vcpus, 32GB memory, pHyp.
------------------------------------------------------------------------------------
test case                             | pv-qspinlock | qspinlock | current-spinlock
------------------------------------------------------------------------------------
Execl Throughput                      |        877.1 |     891.2 |            872.8
File Copy 1024 bufsize 2000 maxblocks |       1390.4 |    1399.2 |           1395.0
File Copy 256 bufsize 500 maxblocks   |        882.4 |     889.5 |            881.8
File Copy 4096 bufsize 8000 maxblocks |       3112.3 |    3113.4 |           3121.7
Pipe Throughput                       |       1095.8 |    1162.6 |           1158.5
Pipe-based Context Switching          |        194.9 |     192.7 |            200.7
Process Creation                      |        518.4 |     526.4 |            509.1
Shell Scripts (1 concurrent)          |       1401.9 |    1413.9 |           1402.2
Shell Scripts (8 concurrent)          |       3215.6 |    3246.6 |           3229.1
System Call Overhead                  |        833.2 |     892.4 |            888.1
------------------------------------------------------------------------------------
System Benchmarks Index Score         |       1033.7 |    1052.5 |           1047.8
------------------------------------------------------------------------------------

******************************************
on pSeries with 32 vcpus, 16GB memory, KVM.
------------------------------------------------------------------------------------
test case                             | pv-qspinlock | qspinlock | current-spinlock
------------------------------------------------------------------------------------
Execl Throughput                      |        497.4 |     518.7 |            497.8
File Copy 1024 bufsize 2000 maxblocks |       1368.8 |    1390.1 |           1343.3
File Copy 256 bufsize 500 maxblocks   |        857.7 |     859.8 |            831.4
File Copy 4096 bufsize 8000 maxblocks |       2851.7 |    2838.1 |           2785.5
Pipe Throughput                       |       1221.9 |    1265.3 |           1250.4
Pipe-based Context Switching          |        529.8 |     578.1 |            564.2
Process Creation                      |        408.4 |     421.6 |            287.6
Shell Scripts (1 concurrent)          |       1201.8 |    1215.3 |           1185.8
Shell Scripts (8 concurrent)          |       3758.4 |    3799.3 |           3878.9
System Call Overhead                  |       1008.3 |    1122.6 |           1134.2
------------------------------------------------------------------------------------
System Benchmarks Index Score         |       1072.0 |    1108.9 |           1050.6
------------------------------------------------------------------------------------

Pan Xinhui (6):
powerpc/qspinlock: powerpc support qspinlock
powerpc: pSeries/Kconfig: Add qspinlock build config
powerpc: lib/locks.c: Add cpu yield/wake helper function
powerpc/pv-qspinlock: powerpc support pv-qspinlock
powerpc: pSeries: Add pv-qspinlock build config/make
powerpc/pv-qspinlock: Optimise native unlock path

arch/powerpc/include/asm/qspinlock.h | 93 ++++++++++++
arch/powerpc/include/asm/qspinlock_paravirt.h | 52 +++++++
.../powerpc/include/asm/qspinlock_paravirt_types.h | 13 ++
arch/powerpc/include/asm/spinlock.h | 35 +++--
arch/powerpc/include/asm/spinlock_types.h | 4 +
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/kernel/paravirt.c | 157 +++++++++++++++++++++
arch/powerpc/lib/locks.c | 122 ++++++++++++++++
arch/powerpc/platforms/pseries/Kconfig | 16 +++
arch/powerpc/platforms/pseries/setup.c | 5 +
10 files changed, 485 insertions(+), 13 deletions(-)
create mode 100644 arch/powerpc/include/asm/qspinlock.h
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt_types.h
create mode 100644 arch/powerpc/kernel/paravirt.c

--
2.4.11


2016-12-05 10:25:40

by Pan Xinhui

Subject: [PATCH v8 6/6] powerpc/pv-qspinlock: Optimise native unlock path

Avoid a function call in the native version of queued_spin_unlock(). On
PowerNV, before this patch, every unlock goes through an indirect call,
which is comparatively expensive. This small optimisation improves the
performance.

We use a static_key (jump label), which also removes the unnecessary loads
of the lppaca data on the native path.
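
For reference, a minimal sketch of the generic jump-label pattern relied on
here (the names below are illustrative only, not part of this patch):

#include <linux/jump_label.h>

/* The key defaults to true, so on targets that never call
 * static_branch_disable() (i.e. bare-metal PowerNV) the likely branch is
 * patched to fall straight through to the native path. */
static DEFINE_STATIC_KEY_TRUE(demo_key);

static void native_unlock_stub(void) { }
static void pv_unlock_stub(void) { }

static void demo_unlock(void)
{
	if (static_branch_likely(&demo_key))
		native_unlock_stub();	/* fast path: nop + native unlock */
	else
		pv_unlock_stub();	/* only after static_branch_disable(&demo_key) */
}

In this patch, pSeries in shared-processor mode disables the key once at
boot, flipping the branch for the rest of the run; PowerNV never does, so
its unlock path stays branch-free.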

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/include/asm/qspinlock_paravirt.h | 18 +++++++++++++++++-
arch/powerpc/kernel/paravirt.c | 4 ++++
2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/qspinlock_paravirt.h b/arch/powerpc/include/asm/qspinlock_paravirt.h
index d87cda0..8d39446 100644
--- a/arch/powerpc/include/asm/qspinlock_paravirt.h
+++ b/arch/powerpc/include/asm/qspinlock_paravirt.h
@@ -6,12 +6,14 @@
#define _ASM_QSPINLOCK_PARAVIRT_H

#include <asm/qspinlock_paravirt_types.h>
+#include <linux/jump_label.h>

extern void pv_lock_init(void);
extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
extern void __pv_init_lock_hash(void);
extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
extern void __pv_queued_spin_unlock(struct qspinlock *lock);
+extern struct static_key_true sharedprocessor_key;

static inline void pv_queued_spin_lock(struct qspinlock *lock, u32 val)
{
@@ -20,7 +22,21 @@ static inline void pv_queued_spin_lock(struct qspinlock *lock, u32 val)

static inline void pv_queued_spin_unlock(struct qspinlock *lock)
{
- pv_lock_op.unlock(lock);
+ /*
+ * With jump_label enabled, the generated code will be
+ * PowerNV: pSeries:
+ * nop; b 2f;
+ * native unlock 2:
+ * pv unlock;
+ * so the unlock stays fast in the native case.
+ *
+ * If jump_label is not enabled, we fall back to a plain
+ * conditional, i.e. ld && cmp && bne.
+ */
+ if (static_branch_likely(&sharedprocessor_key))
+ native_queued_spin_unlock(lock);
+ else
+ pv_lock_op.unlock(lock);
}

static inline void pv_wait(u8 *ptr, u8 val)
diff --git a/arch/powerpc/kernel/paravirt.c b/arch/powerpc/kernel/paravirt.c
index e697b17..a0a000e 100644
--- a/arch/powerpc/kernel/paravirt.c
+++ b/arch/powerpc/kernel/paravirt.c
@@ -140,6 +140,9 @@ struct pv_lock_ops pv_lock_op = {
};
EXPORT_SYMBOL(pv_lock_op);

+struct static_key_true sharedprocessor_key = STATIC_KEY_TRUE_INIT;
+EXPORT_SYMBOL(sharedprocessor_key);
+
void __init pv_lock_init(void)
{
if (SHARED_PROCESSOR) {
@@ -149,5 +152,6 @@ void __init pv_lock_init(void)
pv_lock_op.unlock = __pv_queued_spin_unlock;
pv_lock_op.wait = __pv_wait;
pv_lock_op.kick = __pv_kick;
+ static_branch_disable(&sharedprocessor_key);
}
}
--
2.4.11

2016-12-05 10:25:41

by Pan Xinhui

Subject: [PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function

Add two helper functions to support pv-qspinlock.

For normal use, __spin_yield_cpu confers the current vcpu's slices to the
target vcpu (say, a lock holder). If no target vcpu is specified, or the
target vcpu is running, whether we confer to the lpar depends on the second
parameter.

The hcall itself introduces latency and a little overhead, and in some
cases, e.g. in an interrupt handler, we do NOT want to suffer that latency.
The second parameter *confer* indicates such cases.

__spin_wake_cpu is simpler: it wakes up the target vcpu regardless of its
current state.
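
For illustration only, a hypothetical calling pattern (example_pv_wait and
example_pv_kick below are made-up names; the real callers arrive with the
pv-qspinlock patch later in this series):

#include <linux/preempt.h>	/* in_interrupt() */
#include <asm/spinlock.h>	/* __spin_yield_cpu(), __spin_wake_cpu() */

/* Waiter side: yield our slices toward the presumed lock holder.
 * Pass confer = 0 when the extra hcall latency is unacceptable,
 * e.g. when spinning inside an interrupt handler. */
static void example_pv_wait(int holder_cpu)
{
	__spin_yield_cpu(holder_cpu, !in_interrupt());
}

/* Unlocker side: unconditionally prod the waiter so that its next
 * cede/confer does not put it back to sleep. */
static void example_pv_kick(int waiter_cpu)
{
	__spin_wake_cpu(waiter_cpu);
}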

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/include/asm/spinlock.h | 4 +++
arch/powerpc/lib/locks.c | 59 +++++++++++++++++++++++++++++++++++++
2 files changed, 63 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 954099e..6426bd5 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -64,9 +64,13 @@ static inline bool vcpu_is_preempted(int cpu)
/* We only yield to the hypervisor if we are in shared processor mode */
#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
extern void __spin_yield(arch_spinlock_t *lock);
+extern void __spin_yield_cpu(int cpu, int confer);
+extern void __spin_wake_cpu(int cpu);
extern void __rw_yield(arch_rwlock_t *lock);
#else /* SPLPAR */
#define __spin_yield(x) barrier()
+#define __spin_yield_cpu(x, y) barrier()
+#define __spin_wake_cpu(x) barrier()
#define __rw_yield(x) barrier()
#define SHARED_PROCESSOR 0
#endif
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 6574626..bd872c9 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -23,6 +23,65 @@
#include <asm/hvcall.h>
#include <asm/smp.h>

+/*
+ * Confer our slices to the specified cpu and return. If that cpu is running,
+ * or cpu is -1, then look at @confer: if it is zero we simply return,
+ * otherwise we confer our slices to the lpar.
+ */
+void __spin_yield_cpu(int cpu, int confer)
+{
+ unsigned int holder_cpu = cpu, yield_count;
+
+ if (cpu == -1)
+ goto yield_to_lpar;
+
+ BUG_ON(holder_cpu >= nr_cpu_ids);
+ yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+ /* if cpu is running, confer slices to lpar conditionally*/
+ if ((yield_count & 1) == 0)
+ goto yield_to_lpar;
+
+ plpar_hcall_norets(H_CONFER,
+ get_hard_smp_processor_id(holder_cpu), yield_count);
+ return;
+
+yield_to_lpar:
+ if (confer)
+ plpar_hcall_norets(H_CONFER, -1, 0);
+}
+EXPORT_SYMBOL_GPL(__spin_yield_cpu);
+
+void __spin_wake_cpu(int cpu)
+{
+ unsigned int holder_cpu = cpu;
+
+ BUG_ON(holder_cpu >= nr_cpu_ids);
+ /*
+ * NOTE: we should always do this hcall regardless of
+ * the yield_count of the holder_cpu.
+ * as there might be a case like below:
+ * CPU 1 CPU 2
+ * yielded = true
+ * if (yielded)
+ * __spin_wake_cpu()
+ * __spin_yield_cpu()
+ *
+ * So we might lose a wake if we check the yield_count and
+ * return directly if the holder_cpu is running.
+ * IOW. do NOT code like below.
+ * yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+ * if ((yield_count & 1) == 0)
+ * return;
+ *
+ * A PROD hcall marks the target_cpu as prodded, which makes the next cede
+ * or confer issued by the target_cpu return without yielding.
+ */
+ plpar_hcall_norets(H_PROD,
+ get_hard_smp_processor_id(holder_cpu));
+}
+EXPORT_SYMBOL_GPL(__spin_wake_cpu);
+
#ifndef CONFIG_QUEUED_SPINLOCKS
void __spin_yield(arch_spinlock_t *lock)
{
--
2.4.11

2016-12-05 10:25:35

by Pan Xinhui

Subject: [PATCH v8 5/6] powerpc: pSeries: Add pv-qspinlock build config/make

pSeries runs as a guest and might need pv-qspinlock.

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8 ++++++++
2 files changed, 9 insertions(+)

diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 1925341..4780415 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -53,6 +53,7 @@ obj-$(CONFIG_PPC_970_NAP) += idle_power4.o
obj-$(CONFIG_PPC_P7_NAP) += idle_book3s.o
procfs-y := proc_powerpc.o
obj-$(CONFIG_PROC_FS) += $(procfs-y)
+obj-$(CONFIG_PARAVIRT_SPINLOCKS) += paravirt.o
rtaspci-$(CONFIG_PPC64)-$(CONFIG_PCI) := rtas_pci.o
obj-$(CONFIG_PPC_RTAS) += rtas.o rtas-rtc.o $(rtaspci-y-y)
obj-$(CONFIG_PPC_RTAS_DAEMON) += rtasd.o
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 8a87d06..0288c78 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -31,6 +31,14 @@ config ARCH_USE_QUEUED_SPINLOCKS
fairlock. It has shown a good performance improvement on x86 and also ppc
especially in high contention cases.

+config PARAVIRT_SPINLOCKS
+ bool "Paravirtialization support for qspinlock"
+ depends on PPC_SPLPAR && QUEUED_SPINLOCKS
+ default y
+ help
+ Enable this option if the kernel needs to run as a guest.
+ Generally it gives the kernel better performance in that case.
+
config PPC_SPLPAR
depends on PPC_PSERIES
bool "Support for shared-processor logical partitions"
--
2.4.11

2016-12-05 11:01:07

by Pan Xinhui

Subject: [PATCH v8 4/6] powerpc/pv-qspinlock: powerpc support pv-qspinlock

By default pv-qspinlock uses the native qspinlock callbacks.
pv_lock initialization is done at boot time with irqs disabled.
If we run as a guest under powerKVM/pHyp in shared_processor mode, the
pv_lock_ops callbacks are switched to the pv versions, which make full
use of virtualization.

A hash table maps each lock to a cpu number, so pv_wait can look up the
likely lock holder by searching for the lock. The lock is also stored in a
per-cpu struct and removed once we own it, so pv_wait knows which lock we
are currently spinning on. Note that the cpu recorded in the hash table
might not be the real lock holder: for performance reasons we do not handle
hash conflicts.

Also introduce spin_lock_holder(), which tells who likely owns the lock
now; currently its only user is spin_unlock_wait().
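
A condensed, simplified sketch of this bookkeeping (sizes and names here are
illustrative; the real definitions are in paravirt.c below):

#include <linux/types.h>
#include <linux/hash.h>
#include <linux/smp.h>

#define HOLDER_HASH_BITS	10	/* illustrative size only */

/* value is (holder cpu + 1); 0 means "unknown" */
static u16 lock_to_cpu[1 << HOLDER_HASH_BITS];

static void record_holder(void *lock)
{
	/* hash conflicts are tolerated: a wrong entry only costs a
	 * useless yield, it never affects correctness */
	lock_to_cpu[hash_ptr(lock, HOLDER_HASH_BITS)] =
			raw_smp_processor_id() + 1;
}

static int guess_holder(void *lock)
{
	return (int)lock_to_cpu[hash_ptr(lock, HOLDER_HASH_BITS)] - 1;
}

The per-cpu side (remembering which lock we are currently spinning on so
__pv_wait can pass it to the holder lookup) is the locks_on_cpu struct in
the patch below.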

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/include/asm/qspinlock.h | 29 +++-
arch/powerpc/include/asm/qspinlock_paravirt.h | 36 +++++
.../powerpc/include/asm/qspinlock_paravirt_types.h | 13 ++
arch/powerpc/kernel/paravirt.c | 153 +++++++++++++++++++++
arch/powerpc/lib/locks.c | 8 +-
arch/powerpc/platforms/pseries/setup.c | 5 +
6 files changed, 241 insertions(+), 3 deletions(-)
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h
create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt_types.h
create mode 100644 arch/powerpc/kernel/paravirt.c

diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index 4c89256..8fd6349 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -15,7 +15,7 @@ static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)
return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
}

-static inline void queued_spin_unlock(struct qspinlock *lock)
+static inline void native_queued_spin_unlock(struct qspinlock *lock)
{
/* release semantics is required */
smp_store_release(__qspinlock_lock_byte(lock), 0);
@@ -27,6 +27,33 @@ static inline int queued_spin_is_locked(struct qspinlock *lock)
return atomic_read(&lock->val);
}

+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/qspinlock_paravirt.h>
+/*
+ * Try to find out who the lock holder is; note the answer is not always accurate.
+ * Return:
+ * -1, we do not know the lock holder.
+ * other value, likely the lock holder.
+ */
+extern int spin_lock_holder(void *lock);
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ pv_queued_spin_lock(lock, val);
+}
+
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+ pv_queued_spin_unlock(lock);
+}
+#else
+#define spin_lock_holder(l) (-1)
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+ native_queued_spin_unlock(lock);
+}
+#endif
+
#include <asm-generic/qspinlock.h>

/* we need override it as ppc has io_sync stuff */
diff --git a/arch/powerpc/include/asm/qspinlock_paravirt.h b/arch/powerpc/include/asm/qspinlock_paravirt.h
new file mode 100644
index 0000000..d87cda0
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,36 @@
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+#error "do not include this file"
+#endif
+
+#ifndef _ASM_QSPINLOCK_PARAVIRT_H
+#define _ASM_QSPINLOCK_PARAVIRT_H
+
+#include <asm/qspinlock_paravirt_types.h>
+
+extern void pv_lock_init(void);
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_queued_spin_unlock(struct qspinlock *lock);
+
+static inline void pv_queued_spin_lock(struct qspinlock *lock, u32 val)
+{
+ pv_lock_op.lock(lock, val);
+}
+
+static inline void pv_queued_spin_unlock(struct qspinlock *lock)
+{
+ pv_lock_op.unlock(lock);
+}
+
+static inline void pv_wait(u8 *ptr, u8 val)
+{
+ pv_lock_op.wait(ptr, val);
+}
+
+static inline void pv_kick(int cpu)
+{
+ pv_lock_op.kick(cpu);
+}
+
+#endif
diff --git a/arch/powerpc/include/asm/qspinlock_paravirt_types.h b/arch/powerpc/include/asm/qspinlock_paravirt_types.h
new file mode 100644
index 0000000..83611ed
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock_paravirt_types.h
@@ -0,0 +1,13 @@
+#ifndef _ASM_QSPINLOCK_PARAVIRT_TYPES_H
+#define _ASM_QSPINLOCK_PARAVIRT_TYPES_H
+
+struct pv_lock_ops {
+ void (*lock)(struct qspinlock *lock, u32 val);
+ void (*unlock)(struct qspinlock *lock);
+ void (*wait)(u8 *ptr, u8 val);
+ void (*kick)(int cpu);
+};
+
+extern struct pv_lock_ops pv_lock_op;
+
+#endif
diff --git a/arch/powerpc/kernel/paravirt.c b/arch/powerpc/kernel/paravirt.c
new file mode 100644
index 0000000..e697b17
--- /dev/null
+++ b/arch/powerpc/kernel/paravirt.c
@@ -0,0 +1,153 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/spinlock.h>
+#include <linux/smp.h>
+#include <linux/hash.h>
+#include <linux/bootmem.h>
+
+/* +2 here is to reduce the chance of hash conflicts */
+#define NUM_LOCK_CPU_ENTRY_SHIFT (order_base_2(NR_CPUS) + 2)
+#define NUM_LOCK_CPU_ENTRY (1 << NUM_LOCK_CPU_ENTRY_SHIFT)
+/* we can only spin on 4 locks at the same time on the same cpu */
+#define NUM_LOCKS_PER_CPU 4
+
+static u16 *hash_lock_cpu_ptr;
+
+struct locks_on_cpu {
+ void *l[NUM_LOCKS_PER_CPU];
+ int count;
+};
+
+static DEFINE_PER_CPU(struct locks_on_cpu, node);
+
+static u16 *hash(void *l)
+{
+ int val = hash_ptr(l, NUM_LOCK_CPU_ENTRY_SHIFT);
+
+ return &hash_lock_cpu_ptr[val];
+}
+
+static void __init init_hash(void)
+{
+ int size = NUM_LOCK_CPU_ENTRY * sizeof(*hash_lock_cpu_ptr);
+
+ hash_lock_cpu_ptr = memblock_virt_alloc(size, 0);
+ memset(hash_lock_cpu_ptr, 0, size);
+}
+
+#define lock_get_holder(l) \
+ ((int)(*hash(l) - 1))
+
+#define lock_set_holder(l) \
+ (*hash(l) = raw_smp_processor_id() + 1)
+
+int spin_lock_holder(void *lock)
+{
+ /* we might run on PowerNV, which has no hash table ptr*/
+ if (hash_lock_cpu_ptr)
+ return lock_get_holder(lock);
+ return -1;
+}
+EXPORT_SYMBOL(spin_lock_holder);
+
+static void *this_cpu_lock(void)
+{
+ struct locks_on_cpu *this_node = this_cpu_ptr(&node);
+ int i = this_node->count - 1;
+
+ return this_node->l[i];
+}
+
+static void cpu_save_lock(void *l)
+{
+ struct locks_on_cpu *this_node = this_cpu_ptr(&node);
+ int i = this_node->count++;
+
+ this_node->l[i] = l;
+}
+
+static void cpu_remove_lock(void *l)
+{
+ __this_cpu_dec(node.count);
+}
+
+static void __native_queued_spin_unlock(struct qspinlock *lock)
+{
+ native_queued_spin_unlock(lock);
+}
+
+static void __pv_lock(struct qspinlock *lock, u32 val)
+{
+ /*
+ * Save the lock we are spinning on;
+ * pv_wait needs to know this lock.
+ */
+ cpu_save_lock(lock);
+
+ __pv_queued_spin_lock_slowpath(lock, val);
+
+ /* we now own the lock, remove it */
+ cpu_remove_lock(lock);
+
+ /*
+ * Let other spinners know who the lock holder is;
+ * we do not need to clear the holder in unlock().
+ */
+ lock_set_holder(lock);
+}
+
+static void __pv_wait(u8 *ptr, u8 val)
+{
+ void *l = this_cpu_lock();
+ int cpu;
+ int always_confer = !in_interrupt();
+
+ while (READ_ONCE(*ptr) == val) {
+ HMT_low();
+ /*
+ * the lock might be unlocked once and locked again
+ */
+ cpu = lock_get_holder(l);
+
+ /*
+ * The default behaviour of __spin_yield_cpu is to yield our
+ * cpu slices to the target vcpu or the lpar (pHyp or KVM).
+ * Considering the latency of the hcall itself and the priority
+ * of the current task, we can optimise this a little:
+ * if we are in interrupt and the target vcpu is running,
+ * we do not yield ourselves to the lpar.
+ */
+ __spin_yield_cpu(cpu, always_confer);
+ }
+ HMT_medium();
+}
+
+static void __pv_kick(int cpu)
+{
+ __spin_wake_cpu(cpu);
+}
+
+struct pv_lock_ops pv_lock_op = {
+ .lock = native_queued_spin_lock_slowpath,
+ .unlock = __native_queued_spin_unlock,
+ .wait = NULL,
+ .kick = NULL,
+};
+EXPORT_SYMBOL(pv_lock_op);
+
+void __init pv_lock_init(void)
+{
+ if (SHARED_PROCESSOR) {
+ init_hash();
+ __pv_init_lock_hash();
+ pv_lock_op.lock = __pv_lock;
+ pv_lock_op.unlock = __pv_queued_spin_unlock;
+ pv_lock_op.wait = __pv_wait;
+ pv_lock_op.kick = __pv_kick;
+ }
+}
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index bd872c9..6e28651 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -179,8 +179,12 @@ void queued_spin_unlock_wait(struct qspinlock *lock)
* any unlock is good. And need not _sync, as ->val is set by the SC in
* unlock(), any loads in lock() must see the correct value.
*/
- while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
- cpu_relax();
+ while (atomic_read(&lock->val) & _Q_LOCKED_MASK) {
+ HMT_low();
+ if (SHARED_PROCESSOR)
+ __spin_yield_cpu(spin_lock_holder(lock), 0);
+ }
+ HMT_medium();
done:
smp_mb();
}
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 97aa3f3..ca61ead 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -487,6 +487,11 @@ static void __init pSeries_setup_arch(void)
}

ppc_md.pcibios_root_bridge_prepare = pseries_root_bridge_prepare;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+ pv_lock_init();
+#endif
+
}

static int __init pSeries_init_panel(void)
--
2.4.11

2016-12-05 12:08:49

by Pan Xinhui

Subject: [PATCH v8 2/6] powerpc: pSeries/Kconfig: Add qspinlock build config

pSeries/powerNV will use qspinlock from now on.

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/platforms/pseries/Kconfig | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..8a87d06 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -23,6 +23,14 @@ config PPC_PSERIES
select PPC_DOORBELL
default y

+config ARCH_USE_QUEUED_SPINLOCKS
+ default y
+ bool "Enable qspinlock"
+ help
+ Enabling this option will let kernel use qspinlock which is a kind of
+ fairlock. It has shown a good performance improvement on x86 and also ppc
+ especially in high contention cases.
+
config PPC_SPLPAR
depends on PPC_PSERIES
bool "Support for shared-processor logical partitions"
--
2.4.11

2016-12-05 13:55:10

by Pan Xinhui

Subject: [PATCH v8 1/6] powerpc/qspinlock: powerpc support qspinlock

This patch adds the basic code to enable qspinlock on powerpc. qspinlock is
a fair lock implementation, and we have seen some performance improvement
in some scenarios.

queued_spin_unlock() releases the lock with a single store of 0 to the
::locked byte, which sits at a different offset depending on the endianness
of the system.

We override some arch_spin_XXX helpers because powerpc has the io_sync
mechanism, which makes sure I/O operations are correctly protected by the
lock.

There is another special case, see commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
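
As a quick illustration (a stand-alone userspace check, not part of the
patch): with a lock word value of 1, the locked byte sits at offset 0 on
little-endian and offset 3 on big-endian, which is exactly what the offset
3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN) in __qspinlock_lock_byte() selects.

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint32_t val = 1;	/* lock word with only the locked byte set */
	unsigned char b[4];
	int big_endian;

	memcpy(b, &val, sizeof(b));
	big_endian = (b[3] == 1);
	/* mirrors "(u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN)" */
	assert(b[3 * big_endian] == 1);
	return 0;
}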

Signed-off-by: Pan Xinhui <[email protected]>
---
arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++++++++++++++
arch/powerpc/include/asm/spinlock.h | 31 +++++++++------
arch/powerpc/include/asm/spinlock_types.h | 4 ++
arch/powerpc/lib/locks.c | 59 +++++++++++++++++++++++++++
4 files changed, 147 insertions(+), 13 deletions(-)
create mode 100644 arch/powerpc/include/asm/qspinlock.h

diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
new file mode 100644
index 0000000..4c89256
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -0,0 +1,66 @@
+#ifndef _ASM_POWERPC_QSPINLOCK_H
+#define _ASM_POWERPC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#define SPIN_THRESHOLD (1 << 15)
+#define queued_spin_unlock queued_spin_unlock
+#define queued_spin_is_locked queued_spin_is_locked
+#define queued_spin_unlock_wait queued_spin_unlock_wait
+
+extern void queued_spin_unlock_wait(struct qspinlock *lock);
+
+static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)
+{
+ return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
+}
+
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+ /* release semantics is required */
+ smp_store_release(__qspinlock_lock_byte(lock), 0);
+}
+
+static inline int queued_spin_is_locked(struct qspinlock *lock)
+{
+ smp_mb();
+ return atomic_read(&lock->val);
+}
+
+#include <asm-generic/qspinlock.h>
+
+/* we need override it as ppc has io_sync stuff */
+#undef arch_spin_trylock
+#undef arch_spin_lock
+#undef arch_spin_lock_flags
+#undef arch_spin_unlock
+#define arch_spin_trylock arch_spin_trylock
+#define arch_spin_lock arch_spin_lock
+#define arch_spin_lock_flags arch_spin_lock_flags
+#define arch_spin_unlock arch_spin_unlock
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+ CLEAR_IO_SYNC;
+ return queued_spin_trylock(lock);
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+ CLEAR_IO_SYNC;
+ queued_spin_lock(lock);
+}
+
+static inline
+void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
+{
+ CLEAR_IO_SYNC;
+ queued_spin_lock(lock);
+}
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+ SYNC_IO;
+ queued_spin_unlock(lock);
+}
+#endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 8c1b913..954099e 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -60,6 +60,23 @@ static inline bool vcpu_is_preempted(int cpu)
}
#endif

+#if defined(CONFIG_PPC_SPLPAR)
+/* We only yield to the hypervisor if we are in shared processor mode */
+#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
+extern void __spin_yield(arch_spinlock_t *lock);
+extern void __rw_yield(arch_rwlock_t *lock);
+#else /* SPLPAR */
+#define __spin_yield(x) barrier()
+#define __rw_yield(x) barrier()
+#define SHARED_PROCESSOR 0
+#endif
+
+#ifdef CONFIG_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#else
+
+#define arch_spin_relax(lock) __spin_yield(lock)
+
static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
return lock.slock == 0;
@@ -114,18 +131,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
* held. Conveniently, we have a word in the paca that holds this
* value.
*/
-
-#if defined(CONFIG_PPC_SPLPAR)
-/* We only yield to the hypervisor if we are in shared processor mode */
-#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
-extern void __spin_yield(arch_spinlock_t *lock);
-extern void __rw_yield(arch_rwlock_t *lock);
-#else /* SPLPAR */
-#define __spin_yield(x) barrier()
-#define __rw_yield(x) barrier()
-#define SHARED_PROCESSOR 0
-#endif
-
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
CLEAR_IO_SYNC;
@@ -203,6 +208,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
smp_mb();
}

+#endif /* !CONFIG_QUEUED_SPINLOCKS */
/*
* Read-write spinlocks, allowing multiple readers
* but only one writer.
@@ -338,7 +344,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)

-#define arch_spin_relax(lock) __spin_yield(lock)
#define arch_read_relax(lock) __rw_yield(lock)
#define arch_write_relax(lock) __rw_yield(lock)

diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
index 2351adc..bd7144e 100644
--- a/arch/powerpc/include/asm/spinlock_types.h
+++ b/arch/powerpc/include/asm/spinlock_types.h
@@ -5,11 +5,15 @@
# error "please don't include this file directly"
#endif

+#ifdef CONFIG_QUEUED_SPINLOCKS
+#include <asm-generic/qspinlock_types.h>
+#else
typedef struct {
volatile unsigned int slock;
} arch_spinlock_t;

#define __ARCH_SPIN_LOCK_UNLOCKED { 0 }
+#endif

typedef struct {
volatile signed int lock;
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index b7b1237..6574626 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -23,6 +23,7 @@
#include <asm/hvcall.h>
#include <asm/smp.h>

+#ifndef CONFIG_QUEUED_SPINLOCKS
void __spin_yield(arch_spinlock_t *lock)
{
unsigned int lock_value, holder_cpu, yield_count;
@@ -42,6 +43,7 @@ void __spin_yield(arch_spinlock_t *lock)
get_hard_smp_processor_id(holder_cpu), yield_count);
}
EXPORT_SYMBOL_GPL(__spin_yield);
+#endif

/*
* Waiting for a read lock or a write lock on a rwlock...
@@ -68,3 +70,60 @@ void __rw_yield(arch_rwlock_t *rw)
get_hard_smp_processor_id(holder_cpu), yield_count);
}
#endif
+
+#ifdef CONFIG_QUEUED_SPINLOCKS
+/*
+ * This prevents us from loading a stale value with respect to another LL/SC:
+ * the SC here forces a concurrent LL/SC on this location to fail and repeat,
+ * so the loads inside that LL/SC section will read an up-to-date value.
+ */
+static inline u32 atomic_read_sync(atomic_t *v)
+{
+ u32 val;
+
+ __asm__ __volatile__(
+"1: " PPC_LWARX(%0, 0, %2, 0) "\n"
+" stwcx. %0, 0, %2\n"
+" bne- 1b\n"
+ : "=&r" (val), "+m" (*v)
+ : "r" (v)
+ : "cr0", "xer");
+
+ return val;
+}
+
+void queued_spin_unlock_wait(struct qspinlock *lock)
+{
+
+ u32 val;
+
+ smp_mb();
+
+ /*
+ * copied from generic queue_spin_unlock_wait with little modification
+ */
+ for (;;) {
+ /* need _sync, as we might race with another LL/SC in lock()*/
+ val = atomic_read_sync(&lock->val);
+
+ if (!val) /* not locked, we're done */
+ goto done;
+
+ if (val & _Q_LOCKED_MASK) /* locked, go wait for unlock */
+ break;
+
+ /* not locked, but pending, wait until we observe the lock */
+ cpu_relax();
+ }
+
+ /*
+ * any unlock is good. And need not _sync, as ->val is set by the SC in
+ * unlock(), any loads in lock() must see the correct value.
+ */
+ while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
+ cpu_relax();
+done:
+ smp_mb();
+}
+EXPORT_SYMBOL(queued_spin_unlock_wait);
+#endif
--
2.4.11

2016-12-06 00:48:29

by Boqun Feng

Subject: Re: [PATCH v8 1/6] powerpc/qspinlock: powerpc support qspinlock

On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
> This patch add basic code to enable qspinlock on powerpc. qspinlock is
> one kind of fairlock implementation. And seen some performance improvement
> under some scenarios.
>
> queued_spin_unlock() release the lock by just one write of NULL to the
> ::locked field which sits at different places in the two endianness
> system.
>
> We override some arch_spin_XXX as powerpc has io_sync stuff which makes
> sure the io operations are protected by the lock correctly.
>
> There is another special case, see commit
> 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>
> Signed-off-by: Pan Xinhui <[email protected]>
> ---
> arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++++++++++++++
> arch/powerpc/include/asm/spinlock.h | 31 +++++++++------
> arch/powerpc/include/asm/spinlock_types.h | 4 ++
> arch/powerpc/lib/locks.c | 59 +++++++++++++++++++++++++++
> 4 files changed, 147 insertions(+), 13 deletions(-)
> create mode 100644 arch/powerpc/include/asm/qspinlock.h
>
> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> new file mode 100644
> index 0000000..4c89256
> --- /dev/null
> +++ b/arch/powerpc/include/asm/qspinlock.h
> @@ -0,0 +1,66 @@
> +#ifndef _ASM_POWERPC_QSPINLOCK_H
> +#define _ASM_POWERPC_QSPINLOCK_H
> +
> +#include <asm-generic/qspinlock_types.h>
> +
> +#define SPIN_THRESHOLD (1 << 15)
> +#define queued_spin_unlock queued_spin_unlock
> +#define queued_spin_is_locked queued_spin_is_locked
> +#define queued_spin_unlock_wait queued_spin_unlock_wait
> +
> +extern void queued_spin_unlock_wait(struct qspinlock *lock);
> +
> +static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)
> +{
> + return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
> +}
> +
> +static inline void queued_spin_unlock(struct qspinlock *lock)
> +{
> + /* release semantics is required */
> + smp_store_release(__qspinlock_lock_byte(lock), 0);
> +}
> +
> +static inline int queued_spin_is_locked(struct qspinlock *lock)
> +{
> + smp_mb();
> + return atomic_read(&lock->val);
> +}
> +
> +#include <asm-generic/qspinlock.h>
> +
> +/* we need override it as ppc has io_sync stuff */
> +#undef arch_spin_trylock
> +#undef arch_spin_lock
> +#undef arch_spin_lock_flags
> +#undef arch_spin_unlock
> +#define arch_spin_trylock arch_spin_trylock
> +#define arch_spin_lock arch_spin_lock
> +#define arch_spin_lock_flags arch_spin_lock_flags
> +#define arch_spin_unlock arch_spin_unlock
> +
> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
> +{
> + CLEAR_IO_SYNC;
> + return queued_spin_trylock(lock);
> +}
> +
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> + CLEAR_IO_SYNC;
> + queued_spin_lock(lock);
> +}
> +
> +static inline
> +void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> +{
> + CLEAR_IO_SYNC;
> + queued_spin_lock(lock);
> +}
> +
> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
> +{
> + SYNC_IO;
> + queued_spin_unlock(lock);
> +}
> +#endif /* _ASM_POWERPC_QSPINLOCK_H */
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 8c1b913..954099e 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -60,6 +60,23 @@ static inline bool vcpu_is_preempted(int cpu)
> }
> #endif
>
> +#if defined(CONFIG_PPC_SPLPAR)
> +/* We only yield to the hypervisor if we are in shared processor mode */
> +#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
> +extern void __spin_yield(arch_spinlock_t *lock);
> +extern void __rw_yield(arch_rwlock_t *lock);
> +#else /* SPLPAR */
> +#define __spin_yield(x) barrier()
> +#define __rw_yield(x) barrier()
> +#define SHARED_PROCESSOR 0
> +#endif
> +
> +#ifdef CONFIG_QUEUED_SPINLOCKS
> +#include <asm/qspinlock.h>
> +#else
> +
> +#define arch_spin_relax(lock) __spin_yield(lock)
> +
> static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> {
> return lock.slock == 0;
> @@ -114,18 +131,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
> * held. Conveniently, we have a word in the paca that holds this
> * value.
> */
> -
> -#if defined(CONFIG_PPC_SPLPAR)
> -/* We only yield to the hypervisor if we are in shared processor mode */
> -#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
> -extern void __spin_yield(arch_spinlock_t *lock);
> -extern void __rw_yield(arch_rwlock_t *lock);
> -#else /* SPLPAR */
> -#define __spin_yield(x) barrier()
> -#define __rw_yield(x) barrier()
> -#define SHARED_PROCESSOR 0
> -#endif
> -
> static inline void arch_spin_lock(arch_spinlock_t *lock)
> {
> CLEAR_IO_SYNC;
> @@ -203,6 +208,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> smp_mb();
> }
>
> +#endif /* !CONFIG_QUEUED_SPINLOCKS */
> /*
> * Read-write spinlocks, allowing multiple readers
> * but only one writer.
> @@ -338,7 +344,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
> #define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
> #define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
>
> -#define arch_spin_relax(lock) __spin_yield(lock)
> #define arch_read_relax(lock) __rw_yield(lock)
> #define arch_write_relax(lock) __rw_yield(lock)
>
> diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
> index 2351adc..bd7144e 100644
> --- a/arch/powerpc/include/asm/spinlock_types.h
> +++ b/arch/powerpc/include/asm/spinlock_types.h
> @@ -5,11 +5,15 @@
> # error "please don't include this file directly"
> #endif
>
> +#ifdef CONFIG_QUEUED_SPINLOCKS
> +#include <asm-generic/qspinlock_types.h>
> +#else
> typedef struct {
> volatile unsigned int slock;
> } arch_spinlock_t;
>
> #define __ARCH_SPIN_LOCK_UNLOCKED { 0 }
> +#endif
>
> typedef struct {
> volatile signed int lock;
> diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
> index b7b1237..6574626 100644
> --- a/arch/powerpc/lib/locks.c
> +++ b/arch/powerpc/lib/locks.c
> @@ -23,6 +23,7 @@
> #include <asm/hvcall.h>
> #include <asm/smp.h>
>
> +#ifndef CONFIG_QUEUED_SPINLOCKS
> void __spin_yield(arch_spinlock_t *lock)
> {
> unsigned int lock_value, holder_cpu, yield_count;
> @@ -42,6 +43,7 @@ void __spin_yield(arch_spinlock_t *lock)
> get_hard_smp_processor_id(holder_cpu), yield_count);
> }
> EXPORT_SYMBOL_GPL(__spin_yield);
> +#endif
>
> /*
> * Waiting for a read lock or a write lock on a rwlock...
> @@ -68,3 +70,60 @@ void __rw_yield(arch_rwlock_t *rw)
> get_hard_smp_processor_id(holder_cpu), yield_count);
> }
> #endif
> +
> +#ifdef CONFIG_QUEUED_SPINLOCKS
> +/*
> + * This forbid we load an old value in another LL/SC. Because the SC here force
> + * another LL/SC repeat. So we guarantee all loads in another LL and SC will
> + * read correct value.
> + */
> +static inline u32 atomic_read_sync(atomic_t *v)
> +{
> + u32 val;
> +
> + __asm__ __volatile__(
> +"1: " PPC_LWARX(%0, 0, %2, 0) "\n"
> +" stwcx. %0, 0, %2\n"
> +" bne- 1b\n"
> + : "=&r" (val), "+m" (*v)
> + : "r" (v)
> + : "cr0", "xer");
> +
> + return val;
> +}
> +
> +void queued_spin_unlock_wait(struct qspinlock *lock)
> +{
> +
> + u32 val;
> +
> + smp_mb();
> +
> + /*
> + * copied from generic queue_spin_unlock_wait with little modification
> + */
> + for (;;) {
> + /* need _sync, as we might race with another LL/SC in lock()*/
> + val = atomic_read_sync(&lock->val);
> +
> + if (!val) /* not locked, we're done */
> + goto done;
> +
> + if (val & _Q_LOCKED_MASK) /* locked, go wait for unlock */
> + break;
> +
> + /* not locked, but pending, wait until we observe the lock */
> + cpu_relax();
> + }
> +
> + /*
> + * any unlock is good. And need not _sync, as ->val is set by the SC in
> + * unlock(), any loads in lock() must see the correct value.
> + */

I don't think the comment here about _sync is correct. First, not all
unlock() has an SC part, and for the unlock_wait() case this has nothing
to do with whether lock() sees the correct value or not. The reason
_sync is not needed here is:

/*
* _sync() is not needed here, because once we got here, we must already
* read the ->val as LOCKED via a _sync(). Combining the smp_mb()
* before, we guarantee that all the memory accesses before
* unlock_wait() must be observed by the next lock critical section.
*/

Regards,
Boqun

> + while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
> + cpu_relax();
> +done:
> + smp_mb();
> +}
> +EXPORT_SYMBOL(queued_spin_unlock_wait);
> +#endif
> --
> 2.4.11
>



2016-12-06 00:59:26

by Boqun Feng

Subject: Re: [PATCH v8 2/6] powerpc: pSeries/Kconfig: Add qspinlock build config

On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
> pSeries/powerNV will use qspinlock from now on.
>
> Signed-off-by: Pan Xinhui <[email protected]>
> ---
> arch/powerpc/platforms/pseries/Kconfig | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
> index bec90fb..8a87d06 100644
> --- a/arch/powerpc/platforms/pseries/Kconfig
> +++ b/arch/powerpc/platforms/pseries/Kconfig

Why here? Not arch/powerpc/platforms/Kconfig?

> @@ -23,6 +23,14 @@ config PPC_PSERIES
> select PPC_DOORBELL
> default y
>
> +config ARCH_USE_QUEUED_SPINLOCKS
> + default y
> + bool "Enable qspinlock"

I think you just enable qspinlock by default for all PPC platforms. I
guess you need to put

depends on PPC_PSERIES || PPC_POWERNV

here to achieve what you mean in your commit message.

Regards,
Boqun

> + help
> + Enabling this option will let kernel use qspinlock which is a kind of
> + fairlock. It has shown a good performance improvement on x86 and also ppc
> + especially in high contention cases.
> +
> config PPC_SPLPAR
> depends on PPC_PSERIES
> bool "Support for shared-processor logical partitions"
> --
> 2.4.11
>



2016-12-06 01:16:53

by Pan Xinhui

Subject: Re: [PATCH v8 1/6] powerpc/qspinlock: powerpc support qspinlock

Correct Waiman's address.

On 2016/12/6 08:47, Boqun Feng wrote:
> On Mon, Dec 05, 2016 at 10:19:21AM -0500, Pan Xinhui wrote:
>> This patch add basic code to enable qspinlock on powerpc. qspinlock is
>> one kind of fairlock implementation. And seen some performance improvement
>> under some scenarios.
>>
>> queued_spin_unlock() release the lock by just one write of NULL to the
>> ::locked field which sits at different places in the two endianness
>> system.
>>
>> We override some arch_spin_XXX as powerpc has io_sync stuff which makes
>> sure the io operations are protected by the lock correctly.
>>
>> There is another special case, see commit
>> 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>>
>> Signed-off-by: Pan Xinhui <[email protected]>
>> ---
>> arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++++++++++++++
>> arch/powerpc/include/asm/spinlock.h | 31 +++++++++------
>> arch/powerpc/include/asm/spinlock_types.h | 4 ++
>> arch/powerpc/lib/locks.c | 59 +++++++++++++++++++++++++++
>> 4 files changed, 147 insertions(+), 13 deletions(-)
>> create mode 100644 arch/powerpc/include/asm/qspinlock.h
>>
>> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
>> new file mode 100644
>> index 0000000..4c89256
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/qspinlock.h
>> @@ -0,0 +1,66 @@
>> +#ifndef _ASM_POWERPC_QSPINLOCK_H
>> +#define _ASM_POWERPC_QSPINLOCK_H
>> +
>> +#include <asm-generic/qspinlock_types.h>
>> +
>> +#define SPIN_THRESHOLD (1 << 15)
>> +#define queued_spin_unlock queued_spin_unlock
>> +#define queued_spin_is_locked queued_spin_is_locked
>> +#define queued_spin_unlock_wait queued_spin_unlock_wait
>> +
>> +extern void queued_spin_unlock_wait(struct qspinlock *lock);
>> +
>> +static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)
>> +{
>> + return (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);
>> +}
>> +
>> +static inline void queued_spin_unlock(struct qspinlock *lock)
>> +{
>> + /* release semantics is required */
>> + smp_store_release(__qspinlock_lock_byte(lock), 0);
>> +}
>> +
>> +static inline int queued_spin_is_locked(struct qspinlock *lock)
>> +{
>> + smp_mb();
>> + return atomic_read(&lock->val);
>> +}
>> +
>> +#include <asm-generic/qspinlock.h>
>> +
>> +/* we need override it as ppc has io_sync stuff */
>> +#undef arch_spin_trylock
>> +#undef arch_spin_lock
>> +#undef arch_spin_lock_flags
>> +#undef arch_spin_unlock
>> +#define arch_spin_trylock arch_spin_trylock
>> +#define arch_spin_lock arch_spin_lock
>> +#define arch_spin_lock_flags arch_spin_lock_flags
>> +#define arch_spin_unlock arch_spin_unlock
>> +
>> +static inline int arch_spin_trylock(arch_spinlock_t *lock)
>> +{
>> + CLEAR_IO_SYNC;
>> + return queued_spin_trylock(lock);
>> +}
>> +
>> +static inline void arch_spin_lock(arch_spinlock_t *lock)
>> +{
>> + CLEAR_IO_SYNC;
>> + queued_spin_lock(lock);
>> +}
>> +
>> +static inline
>> +void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
>> +{
>> + CLEAR_IO_SYNC;
>> + queued_spin_lock(lock);
>> +}
>> +
>> +static inline void arch_spin_unlock(arch_spinlock_t *lock)
>> +{
>> + SYNC_IO;
>> + queued_spin_unlock(lock);
>> +}
>> +#endif /* _ASM_POWERPC_QSPINLOCK_H */
>> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
>> index 8c1b913..954099e 100644
>> --- a/arch/powerpc/include/asm/spinlock.h
>> +++ b/arch/powerpc/include/asm/spinlock.h
>> @@ -60,6 +60,23 @@ static inline bool vcpu_is_preempted(int cpu)
>> }
>> #endif
>>
>> +#if defined(CONFIG_PPC_SPLPAR)
>> +/* We only yield to the hypervisor if we are in shared processor mode */
>> +#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
>> +extern void __spin_yield(arch_spinlock_t *lock);
>> +extern void __rw_yield(arch_rwlock_t *lock);
>> +#else /* SPLPAR */
>> +#define __spin_yield(x) barrier()
>> +#define __rw_yield(x) barrier()
>> +#define SHARED_PROCESSOR 0
>> +#endif
>> +
>> +#ifdef CONFIG_QUEUED_SPINLOCKS
>> +#include <asm/qspinlock.h>
>> +#else
>> +
>> +#define arch_spin_relax(lock) __spin_yield(lock)
>> +
>> static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>> {
>> return lock.slock == 0;
>> @@ -114,18 +131,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
>> * held. Conveniently, we have a word in the paca that holds this
>> * value.
>> */
>> -
>> -#if defined(CONFIG_PPC_SPLPAR)
>> -/* We only yield to the hypervisor if we are in shared processor mode */
>> -#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
>> -extern void __spin_yield(arch_spinlock_t *lock);
>> -extern void __rw_yield(arch_rwlock_t *lock);
>> -#else /* SPLPAR */
>> -#define __spin_yield(x) barrier()
>> -#define __rw_yield(x) barrier()
>> -#define SHARED_PROCESSOR 0
>> -#endif
>> -
>> static inline void arch_spin_lock(arch_spinlock_t *lock)
>> {
>> CLEAR_IO_SYNC;
>> @@ -203,6 +208,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
>> smp_mb();
>> }
>>
>> +#endif /* !CONFIG_QUEUED_SPINLOCKS */
>> /*
>> * Read-write spinlocks, allowing multiple readers
>> * but only one writer.
>> @@ -338,7 +344,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
>> #define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
>> #define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
>>
>> -#define arch_spin_relax(lock) __spin_yield(lock)
>> #define arch_read_relax(lock) __rw_yield(lock)
>> #define arch_write_relax(lock) __rw_yield(lock)
>>
>> diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
>> index 2351adc..bd7144e 100644
>> --- a/arch/powerpc/include/asm/spinlock_types.h
>> +++ b/arch/powerpc/include/asm/spinlock_types.h
>> @@ -5,11 +5,15 @@
>> # error "please don't include this file directly"
>> #endif
>>
>> +#ifdef CONFIG_QUEUED_SPINLOCKS
>> +#include <asm-generic/qspinlock_types.h>
>> +#else
>> typedef struct {
>> volatile unsigned int slock;
>> } arch_spinlock_t;
>>
>> #define __ARCH_SPIN_LOCK_UNLOCKED { 0 }
>> +#endif
>>
>> typedef struct {
>> volatile signed int lock;
>> diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
>> index b7b1237..6574626 100644
>> --- a/arch/powerpc/lib/locks.c
>> +++ b/arch/powerpc/lib/locks.c
>> @@ -23,6 +23,7 @@
>> #include <asm/hvcall.h>
>> #include <asm/smp.h>
>>
>> +#ifndef CONFIG_QUEUED_SPINLOCKS
>> void __spin_yield(arch_spinlock_t *lock)
>> {
>> unsigned int lock_value, holder_cpu, yield_count;
>> @@ -42,6 +43,7 @@ void __spin_yield(arch_spinlock_t *lock)
>> get_hard_smp_processor_id(holder_cpu), yield_count);
>> }
>> EXPORT_SYMBOL_GPL(__spin_yield);
>> +#endif
>>
>> /*
>> * Waiting for a read lock or a write lock on a rwlock...
>> @@ -68,3 +70,60 @@ void __rw_yield(arch_rwlock_t *rw)
>> get_hard_smp_processor_id(holder_cpu), yield_count);
>> }
>> #endif
>> +
>> +#ifdef CONFIG_QUEUED_SPINLOCKS
>> +/*
>> + * This forbid we load an old value in another LL/SC. Because the SC here force
>> + * another LL/SC repeat. So we guarantee all loads in another LL and SC will
>> + * read correct value.
>> + */
>> +static inline u32 atomic_read_sync(atomic_t *v)
>> +{
>> + u32 val;
>> +
>> + __asm__ __volatile__(
>> +"1: " PPC_LWARX(%0, 0, %2, 0) "\n"
>> +" stwcx. %0, 0, %2\n"
>> +" bne- 1b\n"
>> + : "=&r" (val), "+m" (*v)
>> + : "r" (v)
>> + : "cr0", "xer");
>> +
>> + return val;
>> +}
>> +
>> +void queued_spin_unlock_wait(struct qspinlock *lock)
>> +{
>> +
>> + u32 val;
>> +
>> + smp_mb();
>> +
>> + /*
>> + * copied from generic queue_spin_unlock_wait with little modification
>> + */
>> + for (;;) {
>> + /* need _sync, as we might race with another LL/SC in lock()*/
>> + val = atomic_read_sync(&lock->val);
>> +
>> + if (!val) /* not locked, we're done */
>> + goto done;
>> +
>> + if (val & _Q_LOCKED_MASK) /* locked, go wait for unlock */
>> + break;
>> +
>> + /* not locked, but pending, wait until we observe the lock */
>> + cpu_relax();
>> + }
>> +
>> + /*
>> + * any unlock is good. And need not _sync, as ->val is set by the SC in
>> + * unlock(), any loads in lock() must see the correct value.
>> + */
>
> I don't think the comment here about _sync is correct. First, not all
> unlock() has a SC part, and for unlock_wait() case there is nothing to
Yes, not every unlock has an SC, but here we are just dealing with the SC part of unlock.

> do with whether lock() see the correct value or not. The reason with
Yep, but loads can be reordered, so the unlock_wait() SC vs lock() SC can forbid
any loads between lock()'s LL and SC from reading values that are too new.
> _sync is not needed here is:
>
> /*
> * _sync() is not needed here, because once we got here, we must already
> * read the ->val as LOCKED via a _sync(). Combining the smp_mb()
> * before, we guarantee that all the memory accesses before
> * unlock_wait() must be observed by the next lock critical section.
> */
>
Good, that is clearer. Thanks.

xinhui

> Regards,
> Boqun
>
>> + while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
>> + cpu_relax();
>> +done:
>> + smp_mb();
>> +}
>> +EXPORT_SYMBOL(queued_spin_unlock_wait);
>> +#endif
>> --
>> 2.4.11
>>

2016-12-06 01:23:57

by Boqun Feng

Subject: Re: [PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function

On Mon, Dec 05, 2016 at 10:19:23AM -0500, Pan Xinhui wrote:
> Add two corresponding helper functions to support pv-qspinlock.
>
> For normal use, __spin_yield_cpu will confer current vcpu slices to the
> target vcpu(say, a lock holder). If target vcpu is not specified or it
> is in running state, such conferging to lpar happens or not depends.
>
> Because hcall itself will introduce latency and a little overhead. And we
> do NOT want to suffer any latency on some cases, e.g. in interrupt handler.
> The second parameter *confer* can indicate such case.
>
> __spin_wake_cpu is simpiler, it will wake up one vcpu regardless of its
> current vcpu state.
>
> Signed-off-by: Pan Xinhui <[email protected]>
> ---
> arch/powerpc/include/asm/spinlock.h | 4 +++
> arch/powerpc/lib/locks.c | 59 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 63 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 954099e..6426bd5 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -64,9 +64,13 @@ static inline bool vcpu_is_preempted(int cpu)
> /* We only yield to the hypervisor if we are in shared processor mode */
> #define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
> extern void __spin_yield(arch_spinlock_t *lock);
> +extern void __spin_yield_cpu(int cpu, int confer);
> +extern void __spin_wake_cpu(int cpu);
> extern void __rw_yield(arch_rwlock_t *lock);
> #else /* SPLPAR */
> #define __spin_yield(x) barrier()
> +#define __spin_yield_cpu(x, y) barrier()
> +#define __spin_wake_cpu(x) barrier()
> #define __rw_yield(x) barrier()
> #define SHARED_PROCESSOR 0
> #endif
> diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
> index 6574626..bd872c9 100644
> --- a/arch/powerpc/lib/locks.c
> +++ b/arch/powerpc/lib/locks.c
> @@ -23,6 +23,65 @@
> #include <asm/hvcall.h>
> #include <asm/smp.h>
>
> +/*
> + * confer our slices to a specified cpu and return. If it is in running state
> + * or cpu is -1, then we will check confer. If confer is NULL, we will return
> + * otherwise we confer our slices to lpar.
> + */
> +void __spin_yield_cpu(int cpu, int confer)
> +{
> + unsigned int holder_cpu = cpu, yield_count;

As I said at:

https://marc.info/?l=linux-kernel&m=147455748619343&w=2

@holder_cpu is not necessary and doesn't help anything.

> +
> + if (cpu == -1)
> + goto yield_to_lpar;
> +
> + BUG_ON(holder_cpu >= nr_cpu_ids);
> + yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
> +
> + /* if cpu is running, confer slices to lpar conditionally*/
> + if ((yield_count & 1) == 0)
> + goto yield_to_lpar;
> +
> + plpar_hcall_norets(H_CONFER,
> + get_hard_smp_processor_id(holder_cpu), yield_count);
> + return;
> +
> +yield_to_lpar:
> + if (confer)
> + plpar_hcall_norets(H_CONFER, -1, 0);
> +}
> +EXPORT_SYMBOL_GPL(__spin_yield_cpu);
> +
> +void __spin_wake_cpu(int cpu)
> +{
> + unsigned int holder_cpu = cpu;

And it's even wrong to call the parameter of _wake_cpu() a holder_cpu,
because it's not the current lock holder.

Regards,
Boqun

> +
> + BUG_ON(holder_cpu >= nr_cpu_ids);
> + /*
> + * NOTE: we should always do this hcall regardless of
> + * the yield_count of the holder_cpu.
> + * as thers might be a case like below;
> + * CPU 1 CPU 2
> + * yielded = true
> + * if (yielded)
> + * __spin_wake_cpu()
> + * __spin_yield_cpu()
> + *
> + * So we might lose a wake if we check the yield_count and
> + * return directly if the holder_cpu is running.
> + * IOW. do NOT code like below.
> + * yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
> + * if ((yield_count & 1) == 0)
> + * return;
> + *
> + * a PROD hcall marks the target_cpu proded, which cause the next cede
> + * or confer called on the target_cpu invalid.
> + */
> + plpar_hcall_norets(H_PROD,
> + get_hard_smp_processor_id(holder_cpu));
> +}
> +EXPORT_SYMBOL_GPL(__spin_wake_cpu);
> +
> #ifndef CONFIG_QUEUED_SPINLOCKS
> void __spin_yield(arch_spinlock_t *lock)
> {
> --
> 2.4.11
>



2016-12-06 01:24:03

by Pan Xinhui

Subject: Re: [PATCH v8 2/6] powerpc: pSeries/Kconfig: Add qspinlock build config



On 2016/12/6 08:58, Boqun Feng wrote:
> On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
>> pSeries/powerNV will use qspinlock from now on.
>>
>> Signed-off-by: Pan Xinhui <[email protected]>
>> ---
>> arch/powerpc/platforms/pseries/Kconfig | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
>> index bec90fb..8a87d06 100644
>> --- a/arch/powerpc/platforms/pseries/Kconfig
>> +++ b/arch/powerpc/platforms/pseries/Kconfig
>
> Why here? Not arch/powerpc/platforms/Kconfig?
>
>> @@ -23,6 +23,14 @@ config PPC_PSERIES
>> select PPC_DOORBELL
>> default y
>>
>> +config ARCH_USE_QUEUED_SPINLOCKS
>> + default y
>> + bool "Enable qspinlock"
>
> I think you just enable qspinlock by default for all PPC platforms. I
> guess you need to put
>
> depends on PPC_PSERIES || PPC_POWERNV
>
> here to achieve what you mean in you commit message.
>
Yes, that is another good way.
I prefer to put it in pseries/Kconfig, the same as the pv-qspinlock config.
When we build for PowerNV, it still includes pSeries's Kconfig anyway.

thanks
xinhui

> Regards,
> Boqun
>
>> + help
>> + Enabling this option will let kernel use qspinlock which is a kind of
>> + fairlock. It has shown a good performance improvement on x86 and also ppc
>> + especially in high contention cases.
>> +
>> config PPC_SPLPAR
>> depends on PPC_PSERIES
>> bool "Support for shared-processor logical partitions"
>> --
>> 2.4.11
>>

2016-12-06 02:14:05

by Pan Xinhui

Subject: Re: [PATCH v8 2/6] powerpc: pSeries/Kconfig: Add qspinlock build config



On 2016/12/6 09:24, Pan Xinhui wrote:
>
>
> On 2016/12/6 08:58, Boqun Feng wrote:
>> On Mon, Dec 05, 2016 at 10:19:22AM -0500, Pan Xinhui wrote:
>>> pSeries/powerNV will use qspinlock from now on.
>>>
>>> Signed-off-by: Pan Xinhui <[email protected]>
>>> ---
>>> arch/powerpc/platforms/pseries/Kconfig | 8 ++++++++
>>> 1 file changed, 8 insertions(+)
>>>
>>> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
>>> index bec90fb..8a87d06 100644
>>> --- a/arch/powerpc/platforms/pseries/Kconfig
>>> +++ b/arch/powerpc/platforms/pseries/Kconfig
>>
>> Why here? Not arch/powerpc/platforms/Kconfig?
>>
>>> @@ -23,6 +23,14 @@ config PPC_PSERIES
>>> select PPC_DOORBELL
>>> default y
>>>
>>> +config ARCH_USE_QUEUED_SPINLOCKS
>>> + default y
>>> + bool "Enable qspinlock"
>>
>> I think you just enable qspinlock by default for all PPC platforms. I
>> guess you need to put
>>
>> depends on PPC_PSERIES || PPC_POWERNV
>>
>> here to achieve what you mean in you commit message.
>>
Oh, yes, it needs "depends on PPC_PSERIES || PPC_POWERNV".

> yes, another good way.
> I prefer to put it in pseries/Kconfig as same as pv-qspinlocks config.
> when we build nv, it still include pSeries's config anyway.
>
> thanks
> xinhui
>
>> Regards,
>> Boqun
>>
>>> + help
>>> + Enabling this option will let kernel use qspinlock which is a kind of
>>> + fairlock. It has shown a good performance improvement on x86 and also ppc
>>> + especially in high contention cases.
>>> +
>>> config PPC_SPLPAR
>>> depends on PPC_PSERIES
>>> bool "Support for shared-processor logical partitions"
>>> --
>>> 2.4.11
>>>