Extend the rseq ABI to expose NUMA node ID and vm_vcpu_id fields.
The NUMA node ID field allows implementing a faster getcpu(2) in libc.
The virtual cpu id allows user-space per-cpu data structures to scale
up or down with the actual concurrency of the process. The virtual cpu
ids allocated within a memory space are tracked by the scheduler based
on the number of concurrently running threads, which implicitly accounts
for the thread count, the cpu affinity and cpusets applying to those
threads, and the number of logical cores on the system.
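For illustration, here is a minimal user-space sketch of the faster
getcpu(2) use-case. It assumes a registered rseq area reachable through
the selftests' rseq_get_abi() helper and assumes the new field is named
node_id; the exact layout and field names are defined by the patches in
this series.

#include "rseq.h"	/* rseq selftests helper providing rseq_get_abi() */

/*
 * getcpu(2)-like helper that never enters the kernel: both values are
 * kept up to date by the kernel in the registered rseq area. Plain
 * loads are used for brevity; a real implementation should use
 * single-copy-atomic (READ_ONCE-style) loads.
 */
static inline void rseq_fast_getcpu(unsigned int *cpu, unsigned int *node)
{
	if (cpu)
		*cpu = rseq_get_abi()->cpu_id;
	if (node)
		*node = rseq_get_abi()->node_id;	/* new field, name assumed */
}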
This series is based on tip/sched/core
commit 52b33d87b9197 ("sched/psi: Use task->psi_flags to clear in CPU migration")
Thanks,
Mathieu
Mathieu Desnoyers (24):
rseq: Introduce feature size and alignment ELF auxiliary vector
entries
rseq: Introduce extensible rseq ABI
rseq: Extend struct rseq with numa node id
selftests/rseq: Use ELF auxiliary vector for extensible rseq
selftests/rseq: Implement rseq numa node id field selftest
lib: Implement find_{first,next,nth}_notandnot_bit,
find_first_andnot_bit
cpumask: Implement cpumask_{first,next}_{not,}andnot
sched: Introduce per memory space current virtual cpu id
rseq: Extend struct rseq with per memory space vcpu id
selftests/rseq: Remove RSEQ_SKIP_FASTPATH code
selftests/rseq: Implement rseq vm_vcpu_id field support
selftests/rseq: x86: Template memory ordering and percpu access mode
selftests/rseq: arm: Template memory ordering and percpu access mode
selftests/rseq: arm64: Template memory ordering and percpu access mode
selftests/rseq: mips: Template memory ordering and percpu access mode
selftests/rseq: ppc: Template memory ordering and percpu access mode
selftests/rseq: s390: Template memory ordering and percpu access mode
selftests/rseq: riscv: Template memory ordering and percpu access mode
selftests/rseq: Implement basic percpu ops vm_vcpu_id test
selftests/rseq: Implement parametrized vm_vcpu_id test
selftests/rseq: x86: Implement rseq_load_u32_u32
selftests/rseq: Implement numa node id vs vm_vcpu_id invariant test
selftests/rseq: parametrized test: Report/abort on negative cpu id
tracing/rseq: Add mm_vcpu_id field to rseq_update
fs/binfmt_elf.c | 5 +
fs/exec.c | 6 +
include/linux/cpumask.h | 60 +
include/linux/find.h | 123 +-
include/linux/mm.h | 25 +
include/linux/mm_types.h | 110 +-
include/linux/sched.h | 9 +
include/trace/events/rseq.h | 7 +-
include/uapi/linux/auxvec.h | 2 +
include/uapi/linux/rseq.h | 22 +
init/Kconfig | 4 +
kernel/fork.c | 11 +-
kernel/ptrace.c | 2 +-
kernel/rseq.c | 65 +-
kernel/sched/core.c | 52 +
kernel/sched/sched.h | 168 +++
kernel/signal.c | 2 +
lib/find_bit.c | 42 +
tools/testing/selftests/rseq/.gitignore | 5 +
tools/testing/selftests/rseq/Makefile | 20 +-
.../testing/selftests/rseq/basic_numa_test.c | 117 ++
.../selftests/rseq/basic_percpu_ops_test.c | 46 +-
tools/testing/selftests/rseq/basic_test.c | 4 +
tools/testing/selftests/rseq/compiler.h | 6 +
tools/testing/selftests/rseq/param_test.c | 157 ++-
tools/testing/selftests/rseq/rseq-abi.h | 22 +
tools/testing/selftests/rseq/rseq-arm-bits.h | 505 +++++++
tools/testing/selftests/rseq/rseq-arm.h | 701 +---------
.../testing/selftests/rseq/rseq-arm64-bits.h | 392 ++++++
tools/testing/selftests/rseq/rseq-arm64.h | 520 +------
.../testing/selftests/rseq/rseq-bits-reset.h | 11 +
.../selftests/rseq/rseq-bits-template.h | 41 +
tools/testing/selftests/rseq/rseq-mips-bits.h | 462 +++++++
tools/testing/selftests/rseq/rseq-mips.h | 646 +--------
tools/testing/selftests/rseq/rseq-ppc-bits.h | 454 +++++++
tools/testing/selftests/rseq/rseq-ppc.h | 617 +--------
.../testing/selftests/rseq/rseq-riscv-bits.h | 410 ++++++
tools/testing/selftests/rseq/rseq-riscv.h | 529 +-------
tools/testing/selftests/rseq/rseq-s390-bits.h | 474 +++++++
tools/testing/selftests/rseq/rseq-s390.h | 495 +------
tools/testing/selftests/rseq/rseq-skip.h | 65 -
tools/testing/selftests/rseq/rseq-x86-bits.h | 1036 ++++++++++++++
tools/testing/selftests/rseq/rseq-x86.h | 1193 +----------------
tools/testing/selftests/rseq/rseq.c | 86 +-
tools/testing/selftests/rseq/rseq.h | 229 +++-
.../testing/selftests/rseq/run_param_test.sh | 5 +
46 files changed, 5294 insertions(+), 4669 deletions(-)
create mode 100644 tools/testing/selftests/rseq/basic_numa_test.c
create mode 100644 tools/testing/selftests/rseq/rseq-arm-bits.h
create mode 100644 tools/testing/selftests/rseq/rseq-arm64-bits.h
create mode 100644 tools/testing/selftests/rseq/rseq-bits-reset.h
create mode 100644 tools/testing/selftests/rseq/rseq-bits-template.h
create mode 100644 tools/testing/selftests/rseq/rseq-mips-bits.h
create mode 100644 tools/testing/selftests/rseq/rseq-ppc-bits.h
create mode 100644 tools/testing/selftests/rseq/rseq-riscv-bits.h
create mode 100644 tools/testing/selftests/rseq/rseq-s390-bits.h
delete mode 100644 tools/testing/selftests/rseq/rseq-skip.h
create mode 100644 tools/testing/selftests/rseq/rseq-x86-bits.h
--
2.25.1
Allow finding the first, next, or nth bit which is cleared in both of
two input bitmasks.
Also allow finding the first bit which is set in the first bitmask and
cleared in the second. find_next_andnot_bit and find_nth_andnot_bit
already exist, so the "find first" variant appears to be the missing
piece.
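For illustration only (not part of the patch), a short sketch of the
intended semantics of the new helpers, using two hypothetical
id-tracking bitmaps:

#include <linux/find.h>

/* Lowest id that is set in neither bitmap, or @nbits if none is free. */
static unsigned long pick_free_id(const unsigned long *allocated,
				  const unsigned long *reserved,
				  unsigned long nbits)
{
	return find_first_notandnot_bit(allocated, reserved, nbits);
}

/* Lowest id that is allocated but not reserved, or @nbits if none. */
static unsigned long pick_reclaimable_id(const unsigned long *allocated,
					 const unsigned long *reserved,
					 unsigned long nbits)
{
	return find_first_andnot_bit(allocated, reserved, nbits);
}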
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
include/linux/find.h | 123 +++++++++++++++++++++++++++++++++++++++++--
lib/find_bit.c | 42 +++++++++++++++
2 files changed, 161 insertions(+), 4 deletions(-)
diff --git a/include/linux/find.h b/include/linux/find.h
index ccaf61a0f5fd..43c3db92a096 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -14,6 +14,8 @@ unsigned long _find_next_and_bit(const unsigned long *addr1, const unsigned long
unsigned long nbits, unsigned long start);
unsigned long _find_next_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
unsigned long nbits, unsigned long start);
+unsigned long _find_next_notandnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+ unsigned long nbits, unsigned long start);
unsigned long _find_next_zero_bit(const unsigned long *addr, unsigned long nbits,
unsigned long start);
extern unsigned long _find_first_bit(const unsigned long *addr, unsigned long size);
@@ -22,8 +24,14 @@ unsigned long __find_nth_and_bit(const unsigned long *addr1, const unsigned long
unsigned long size, unsigned long n);
unsigned long __find_nth_andnot_bit(const unsigned long *addr1, const unsigned long *addr2,
unsigned long size, unsigned long n);
+unsigned long __find_nth_notandnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+ unsigned long size, unsigned long n);
extern unsigned long _find_first_and_bit(const unsigned long *addr1,
const unsigned long *addr2, unsigned long size);
+extern unsigned long _find_first_andnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2, unsigned long size);
+extern unsigned long _find_first_notandnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2, unsigned long size);
extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
@@ -95,15 +103,14 @@ unsigned long find_next_and_bit(const unsigned long *addr1,
#ifndef find_next_andnot_bit
/**
- * find_next_andnot_bit - find the next set bit in *addr1 excluding all the bits
- * in *addr2
+ * find_next_andnot_bit - find the next bit set in *addr1, cleared in *addr2
* @addr1: The first address to base the search on
* @addr2: The second address to base the search on
* @size: The bitmap size in bits
* @offset: The bitnumber to start searching at
*
- * Returns the bit number for the next set bit
- * If no bits are set, returns @size.
+ * Returns the bit number for the next bit set in *addr1, cleared in *addr2
+ * If no such bits are found, returns @size.
*/
static inline
unsigned long find_next_andnot_bit(const unsigned long *addr1,
@@ -124,6 +131,37 @@ unsigned long find_next_andnot_bit(const unsigned long *addr1,
}
#endif
+#ifndef find_next_notandnot_bit
+/**
+ * find_next_notandnot_bit - find the next bit cleared in both *addr1 and *addr2
+ * @addr1: The first address to base the search on
+ * @addr2: The second address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns the bit number for the next bit cleared in both *addr1 and *addr2
+ * If no such bits are found, returns @size.
+ */
+static inline
+unsigned long find_next_notandnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2, unsigned long size,
+ unsigned long offset)
+{
+ if (small_const_nbits(size)) {
+ unsigned long val;
+
+ if (unlikely(offset >= size))
+ return size;
+
+ val = ~*addr1 & ~*addr2 & GENMASK(size - 1, offset);
+ return val ? __ffs(val) : size;
+ }
+
+ return _find_next_notandnot_bit(addr1, addr2, size, offset);
+}
+#endif
+
+
#ifndef find_next_zero_bit
/**
* find_next_zero_bit - find the next cleared bit in a memory region
@@ -255,6 +293,32 @@ unsigned long find_nth_andnot_bit(const unsigned long *addr1, const unsigned lon
return __find_nth_andnot_bit(addr1, addr2, size, n);
}
+/**
+ * find_nth_notandnot_bit - find N'th bit cleared in both of 2 memory regions.
+ * @addr1: The 1st address to start the search at
+ * @addr2: The 2nd address to start the search at
+ * @size: The maximum number of bits to search
+ * @n: The number of the cleared bit whose position is needed, counting from 0
+ *
+ * Returns the bit number of the N'th bit cleared in both regions.
+ * If no such bit exists, returns @size.
+ */
+static inline
+unsigned long find_nth_notandnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+ unsigned long size, unsigned long n)
+{
+ if (n >= size)
+ return size;
+
+ if (small_const_nbits(size)) {
+ unsigned long val = (~*addr1) & (~*addr2) & GENMASK(size - 1, 0);
+
+ return val ? fns(val, n) : size;
+ }
+
+ return __find_nth_notandnot_bit(addr1, addr2, size, n);
+}
+
#ifndef find_first_and_bit
/**
* find_first_and_bit - find the first set bit in both memory regions
@@ -280,6 +344,57 @@ unsigned long find_first_and_bit(const unsigned long *addr1,
}
#endif
+#ifndef find_first_andnot_bit
+/**
+ * find_first_andnot_bit - find first set bit in 2 memory regions,
+ * flipping bits in 2nd region
+ * @addr1: The first address to base the search on
+ * @addr2: The second address to base the search on
+ * @size: The bitmap size in bits
+ *
+ * Returns the bit number for the first bit set in *addr1, cleared in *addr2.
+ * If no such bits are found, returns @size.
+ */
+static inline
+unsigned long find_first_andnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2,
+ unsigned long size)
+{
+ if (small_const_nbits(size)) {
+ unsigned long val = *addr1 & (~*addr2) & GENMASK(size - 1, 0);
+
+ return val ? __ffs(val) : size;
+ }
+
+ return _find_first_andnot_bit(addr1, addr2, size);
+}
+#endif
+
+#ifndef find_first_notandnot_bit
+/**
+ * find_first_notandnot_bit - find first cleared bit in 2 memory regions
+ * @addr1: The first address to base the search on
+ * @addr2: The second address to base the search on
+ * @size: The bitmap size in bits
+ *
+ * Returns the bit number for the first bit cleared in both *addr1 and *addr2.
+ * If no such bits are found, returns @size.
+ */
+static inline
+unsigned long find_first_notandnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2,
+ unsigned long size)
+{
+ if (small_const_nbits(size)) {
+ unsigned long val = (~*addr1) & (~*addr2) & GENMASK(size - 1, 0);
+
+ return val ? __ffs(val) : size;
+ }
+
+ return _find_first_notandnot_bit(addr1, addr2, size);
+}
+#endif
+
#ifndef find_first_zero_bit
/**
* find_first_zero_bit - find the first cleared bit in a memory region
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 18bc0a7ac8ee..a1f592f2437e 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -116,6 +116,32 @@ unsigned long _find_first_and_bit(const unsigned long *addr1,
EXPORT_SYMBOL(_find_first_and_bit);
#endif
+#ifndef find_first_andnot_bit
+/*
+ * Find the first set bit in two memory regions, flipping bits in 2nd region.
+ */
+unsigned long _find_first_andnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2,
+ unsigned long size)
+{
+ return FIND_FIRST_BIT(addr1[idx] & ~addr2[idx], /* nop */, size);
+}
+EXPORT_SYMBOL(_find_first_andnot_bit);
+#endif
+
+#ifndef find_first_notandnot_bit
+/*
+ * Find the first cleared bit in two memory regions.
+ */
+unsigned long _find_first_notandnot_bit(const unsigned long *addr1,
+ const unsigned long *addr2,
+ unsigned long size)
+{
+ return FIND_FIRST_BIT(~addr1[idx] & ~addr2[idx], /* nop */, size);
+}
+EXPORT_SYMBOL(_find_first_notandnot_bit);
+#endif
+
#ifndef find_first_zero_bit
/*
* Find the first cleared bit in a memory region.
@@ -155,6 +181,13 @@ unsigned long __find_nth_andnot_bit(const unsigned long *addr1, const unsigned l
}
EXPORT_SYMBOL(__find_nth_andnot_bit);
+unsigned long __find_nth_notandnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+ unsigned long size, unsigned long n)
+{
+ return FIND_NTH_BIT(~addr1[idx] & ~addr2[idx], size, n);
+}
+EXPORT_SYMBOL(__find_nth_notandnot_bit);
+
#ifndef find_next_and_bit
unsigned long _find_next_and_bit(const unsigned long *addr1, const unsigned long *addr2,
unsigned long nbits, unsigned long start)
@@ -173,6 +206,15 @@ unsigned long _find_next_andnot_bit(const unsigned long *addr1, const unsigned l
EXPORT_SYMBOL(_find_next_andnot_bit);
#endif
+#ifndef find_next_notandnot_bit
+unsigned long _find_next_notandnot_bit(const unsigned long *addr1, const unsigned long *addr2,
+ unsigned long nbits, unsigned long start)
+{
+ return FIND_NEXT_BIT(~addr1[idx] & ~addr2[idx], /* nop */, nbits, start);
+}
+EXPORT_SYMBOL(_find_next_notandnot_bit);
+#endif
+
#ifndef find_next_zero_bit
unsigned long _find_next_zero_bit(const unsigned long *addr, unsigned long nbits,
unsigned long start)
--
2.25.1
This feature allows the scheduler to expose a current virtual cpu id
to user-space. This virtual cpu id is within the possible cpus range,
and is temporarily (and uniquely) assigned while threads are actively
running within a memory space. If a memory space has fewer threads than
cores, or is limited to run on few cores concurrently through sched
affinity or cgroup cpusets, the virtual cpu ids will be values close
to 0, thus allowing efficient use of user-space memory for per-cpu
data structures.
The vcpu_ids are NUMA-aware. On NUMA systems, when a vcpu_id is observed
by user-space to be associated with a NUMA node, it is guaranteed to
never change NUMA node unless a kernel-level NUMA configuration change
happens.
This feature is meant to be exposed by a new rseq thread area field.
The primary purpose of this feature is to do the heavy lifting needed
by memory allocators so they can use per-cpu data structures efficiently
in the following situations (see the sketch after this list):
- Single-threaded applications,
- Multi-threaded applications on large systems (many cores) with limited
cpu affinity mask,
- Multi-threaded applications on large systems (many cores) with
restricted cgroup cpuset per container,
- Processes using memory from many NUMA nodes.
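As a concrete (hypothetical) sketch of the heavy lifting this removes
from user-space, a memory allocator could keep one cache slot per
virtual cpu id. The vm_vcpu_id and node_id field names are assumed from
this series, rseq_get_abi() is the selftests accessor, and
pop_freelist()/arena_alloc_on_node() are hypothetical allocator
internals; only the indexing scheme matters here.

/*
 * One cache slot per virtual cpu id. The array is sized for the number
 * of possible cpus, but only the low slots are touched when the process
 * runs few threads or has a restricted affinity mask / cpuset.
 */
struct vcpu_cache {
	void *freelist;
	int node;	/* NUMA node backing this slot, -1 when unknown */
};

static struct vcpu_cache *vcpu_caches;	/* one entry per possible cpu */

static void *cache_alloc(size_t size)
{
	unsigned int vcpu = rseq_get_abi()->vm_vcpu_id;	/* assumed field */
	struct vcpu_cache *c = &vcpu_caches[vcpu];

	if (c->node < 0) {
		/*
		 * A vcpu id keeps its NUMA node association (barring a
		 * kernel-level NUMA reconfiguration), so the backing node
		 * can be cached on first use.
		 */
		c->node = rseq_get_abi()->node_id;	/* assumed field */
	}
	if (c->freelist)
		return pop_freelist(c);			/* hypothetical */
	return arena_alloc_on_node(size, c->node);	/* hypothetical */
}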
One of the key concerns raised by scheduler maintainers is the overhead
associated with additional spin locks or atomic operations in the
scheduler fast path. This is why the following optimization is
implemented.
On context switch between threads belonging to the same memory space,
the mm_vcpu_id is handed over from prev to next without any atomic
operations. This takes care of use-cases involving frequent context
switches between threads belonging to the same memory space.
Additional optimizations can be done if the spin locks added when
context switching between threads belonging to different processes end
up being a performance bottleneck. Those are left out of this patch
though. A performance impact would have to be clearly demonstrated to
justify the added complexity.
Credit goes to Paul Turner (Google) for the vcpu_id idea. This feature
is implemented based on discussions with Paul Turner and Peter Oskolkov
(Google), but I took the liberty of implementing the scheduler fast-path
optimizations and my own NUMA-awareness scheme. Rumor has it that Google
has been running an rseq vcpu_id extension internally in production for
a year. The tcmalloc source code indeed has comments hinting at a
vcpu_id prototype extension to the rseq system call [1].
The following benchmarks do not show any significant overhead added to
the scheduler context switch by this feature:
* perf bench sched messaging (process)
Baseline: 86.5±0.3 ms
With mm_vcpu_id: 86.7±2.6 ms
* perf bench sched messaging (threaded)
Baseline: 84.3±3.0 ms
With mm_vcpu_id: 84.7±2.6 ms
* hackbench (process)
Baseline: 82.9±2.7 ms
With mm_vcpu_id: 82.9±2.9 ms
* hackbench (threaded)
Baseline: 85.2±2.6 ms
With mm_vcpu_id: 84.4±2.9 ms
[1] https://github.com/google/tcmalloc/blob/master/tcmalloc/internal/linux_syscall_support.h#L26
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
Changes since v3:
- Remove per-runqueue vcpu id cache optimization.
- Remove single-threaded process optimization.
- Introduce spinlock to protect vcpu id bitmaps.
Changes since v4:
- Disable interrupts around mm_vcpu_get/mm_vcpu_put. The spin locks are
used from within the scheduler context switch with interrupts off, so
all uses of these spin locks need to have interrupts off.
- Initialize tsk->mm_vcpu to -1 in dup_task_struct to be consistent with
other states where mm_vcpu_active is 0.
- Use cpumask andnot/notandnot APIs.
---
fs/exec.c | 6 ++
include/linux/mm.h | 25 ++++++
include/linux/mm_types.h | 110 ++++++++++++++++++++++++-
include/linux/sched.h | 5 ++
init/Kconfig | 4 +
kernel/fork.c | 11 ++-
kernel/sched/core.c | 52 ++++++++++++
kernel/sched/sched.h | 168 +++++++++++++++++++++++++++++++++++++++
kernel/signal.c | 2 +
9 files changed, 381 insertions(+), 2 deletions(-)
diff --git a/fs/exec.c b/fs/exec.c
index 349a5da91efe..93eb88f4053b 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1013,6 +1013,9 @@ static int exec_mmap(struct mm_struct *mm)
tsk->active_mm = mm;
tsk->mm = mm;
lru_gen_add_mm(mm);
+ mm_init_vcpu_lock(mm);
+ mm_init_vcpumask(mm);
+ mm_init_node_vcpumask(mm);
/*
* This prevents preemption while active_mm is being loaded and
* it and mm are being updated, which could cause problems for
@@ -1808,6 +1811,7 @@ static int bprm_execve(struct linux_binprm *bprm,
check_unsafe_exec(bprm);
current->in_execve = 1;
+ sched_vcpu_before_execve(current);
file = do_open_execat(fd, filename, flags);
retval = PTR_ERR(file);
@@ -1838,6 +1842,7 @@ static int bprm_execve(struct linux_binprm *bprm,
if (retval < 0)
goto out;
+ sched_vcpu_after_execve(current);
/* execve succeeded */
current->fs->in_exec = 0;
current->in_execve = 0;
@@ -1857,6 +1862,7 @@ static int bprm_execve(struct linux_binprm *bprm,
force_fatal_sig(SIGSEGV);
out_unmark:
+ sched_vcpu_after_execve(current);
current->fs->in_exec = 0;
current->in_execve = 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..eb6075d55339 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3475,4 +3475,29 @@ madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
*/
#define ZAP_FLAG_DROP_MARKER ((__force zap_flags_t) BIT(0))
+#ifdef CONFIG_SCHED_MM_VCPU
+void sched_vcpu_before_execve(struct task_struct *t);
+void sched_vcpu_after_execve(struct task_struct *t);
+void sched_vcpu_fork(struct task_struct *t);
+void sched_vcpu_exit_signals(struct task_struct *t);
+static inline int task_mm_vcpu_id(struct task_struct *t)
+{
+ return t->mm_vcpu;
+}
+#else
+static inline void sched_vcpu_before_execve(struct task_struct *t) { }
+static inline void sched_vcpu_after_execve(struct task_struct *t) { }
+static inline void sched_vcpu_fork(struct task_struct *t) { }
+static inline void sched_vcpu_exit_signals(struct task_struct *t) { }
+static inline int task_mm_vcpu_id(struct task_struct *t)
+{
+ /*
+ * Use the processor id as a fall-back when the mm vcpu feature is
+ * disabled. This provides functional per-cpu data structure accesses
+ * in user-space, although it won't provide the memory usage benefits.
+ */
+ return raw_smp_processor_id();
+}
+#endif
+
#endif /* _LINUX_MM_H */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..9b4c7ec68057 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -18,6 +18,7 @@
#include <linux/page-flags-layout.h>
#include <linux/workqueue.h>
#include <linux/seqlock.h>
+#include <linux/nodemask.h>
#include <asm/mmu.h>
@@ -556,7 +557,19 @@ struct mm_struct {
* &struct mm_struct is freed.
*/
atomic_t mm_count;
-
+#ifdef CONFIG_SCHED_MM_VCPU
+ /**
+ * @vcpu_lock: Protect vcpu_id bitmap updates vs lookups.
+ *
+ * Prevent situations where updates to the vcpu_id bitmap
+ * happen concurrently with lookups. Those can lead to
+ * situations where a lookup cannot find a free bit simply
+ * because it was unlucky enough to load, non-atomically,
+ * bitmap words as they were being concurrently updated by the
+ * updaters.
+ */
+ spinlock_t vcpu_lock;
+#endif
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
#endif
@@ -824,6 +837,101 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
vmi->mas.node = MAS_START;
}
+#ifdef CONFIG_SCHED_MM_VCPU
+/* Future-safe accessor for struct mm_struct's vcpu_mask. */
+static inline cpumask_t *mm_vcpumask(struct mm_struct *mm)
+{
+ unsigned long vcpu_bitmap = (unsigned long)mm;
+
+ vcpu_bitmap += offsetof(struct mm_struct, cpu_bitmap);
+ /* Skip cpu_bitmap */
+ vcpu_bitmap += cpumask_size();
+ return (struct cpumask *)vcpu_bitmap;
+}
+
+static inline void mm_init_vcpumask(struct mm_struct *mm)
+{
+ cpumask_clear(mm_vcpumask(mm));
+}
+
+static inline unsigned int mm_vcpumask_size(void)
+{
+ return cpumask_size();
+}
+
+#else
+static inline cpumask_t *mm_vcpumask(struct mm_struct *mm)
+{
+ return NULL;
+}
+
+static inline void mm_init_vcpumask(struct mm_struct *mm) { }
+
+static inline unsigned int mm_vcpumask_size(void)
+{
+ return 0;
+}
+#endif
+
+#if defined(CONFIG_SCHED_MM_VCPU) && defined(CONFIG_NUMA)
+/*
+ * Layout of node vcpumasks:
+ * - node_alloc vcpumask: cpumask tracking which vcpu_id were
+ * allocated (across nodes) in this
+ * memory space.
+ * - node vcpumask[nr_node_ids]: per-node cpumask tracking which vcpu_id
+ * were allocated in this memory space.
+ */
+static inline cpumask_t *mm_node_alloc_vcpumask(struct mm_struct *mm)
+{
+ unsigned long vcpu_bitmap = (unsigned long)mm_vcpumask(mm);
+
+ /* Skip mm_vcpumask */
+ vcpu_bitmap += cpumask_size();
+ return (struct cpumask *)vcpu_bitmap;
+}
+
+static inline cpumask_t *mm_node_vcpumask(struct mm_struct *mm, unsigned int node)
+{
+ unsigned long vcpu_bitmap = (unsigned long)mm_node_alloc_vcpumask(mm);
+
+ /* Skip node alloc vcpumask */
+ vcpu_bitmap += cpumask_size();
+ vcpu_bitmap += node * cpumask_size();
+ return (struct cpumask *)vcpu_bitmap;
+}
+
+static inline void mm_init_node_vcpumask(struct mm_struct *mm)
+{
+ unsigned int node;
+
+ if (num_possible_nodes() == 1)
+ return;
+ cpumask_clear(mm_node_alloc_vcpumask(mm));
+ for (node = 0; node < nr_node_ids; node++)
+ cpumask_clear(mm_node_vcpumask(mm, node));
+}
+
+static inline void mm_init_vcpu_lock(struct mm_struct *mm)
+{
+ spin_lock_init(&mm->vcpu_lock);
+}
+
+static inline unsigned int mm_node_vcpumask_size(void)
+{
+ if (num_possible_nodes() == 1)
+ return 0;
+ return (nr_node_ids + 1) * cpumask_size();
+}
+#else
+static inline void mm_init_node_vcpumask(struct mm_struct *mm) { }
+static inline void mm_init_vcpu_lock(struct mm_struct *mm) { }
+static inline unsigned int mm_node_vcpumask_size(void)
+{
+ return 0;
+}
+#endif
+
struct mmu_gather;
extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm);
extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2a9e14e3e668..8e68335d122e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1314,6 +1314,11 @@ struct task_struct {
unsigned long rseq_event_mask;
#endif
+#ifdef CONFIG_SCHED_MM_VCPU
+ int mm_vcpu; /* Current vcpu in mm */
+ int mm_vcpu_active; /* Whether vcpu bitmap is active */
+#endif
+
struct tlbflush_unmap_batch tlb_ubc;
union {
diff --git a/init/Kconfig b/init/Kconfig
index abf65098f1b6..cab2d2bcf62d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1039,6 +1039,10 @@ config RT_GROUP_SCHED
endif #CGROUP_SCHED
+config SCHED_MM_VCPU
+ def_bool y
+ depends on SMP && RSEQ
+
config UCLAMP_TASK_GROUP
bool "Utilization clamping per group of tasks"
depends on CGROUP_SCHED
diff --git a/kernel/fork.c b/kernel/fork.c
index 08969f5aa38d..6a2323266942 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1047,6 +1047,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
tsk->reported_split_lock = 0;
#endif
+#ifdef CONFIG_SCHED_MM_VCPU
+ tsk->mm_vcpu = -1;
+ tsk->mm_vcpu_active = 0;
+#endif
return tsk;
free_stack:
@@ -1150,6 +1154,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
mm->user_ns = get_user_ns(user_ns);
lru_gen_init_mm(mm);
+ mm_init_vcpu_lock(mm);
+ mm_init_vcpumask(mm);
+ mm_init_node_vcpumask(mm);
return mm;
fail_nocontext:
@@ -1579,6 +1586,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
tsk->mm = mm;
tsk->active_mm = mm;
+ sched_vcpu_fork(tsk);
return 0;
}
@@ -3041,7 +3049,8 @@ void __init proc_caches_init(void)
* dynamically sized based on the maximum CPU number this system
* can have, taking hotplug into account (nr_cpu_ids).
*/
- mm_size = sizeof(struct mm_struct) + cpumask_size();
+ mm_size = sizeof(struct mm_struct) + cpumask_size() + mm_vcpumask_size() +
+ mm_node_vcpumask_size();
mm_cachep = kmem_cache_create_usercopy("mm_struct",
mm_size, ARCH_MIN_MMSTRUCT_ALIGN,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 07ac08caf019..2f356169047e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5019,6 +5019,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
sched_info_switch(rq, prev, next);
perf_event_task_sched_out(prev, next);
rseq_preempt(prev);
+ switch_mm_vcpu(prev, next);
fire_sched_out_preempt_notifiers(prev, next);
kmap_local_sched_out();
prepare_task(next);
@@ -11273,3 +11274,54 @@ void call_trace_sched_update_nr_running(struct rq *rq, int count)
{
trace_sched_update_nr_running_tp(rq, count);
}
+
+#ifdef CONFIG_SCHED_MM_VCPU
+void sched_vcpu_exit_signals(struct task_struct *t)
+{
+ struct mm_struct *mm = t->mm;
+ unsigned long flags;
+
+ if (!mm)
+ return;
+ local_irq_save(flags);
+ mm_vcpu_put(mm, t->mm_vcpu);
+ t->mm_vcpu = -1;
+ t->mm_vcpu_active = 0;
+ local_irq_restore(flags);
+}
+
+void sched_vcpu_before_execve(struct task_struct *t)
+{
+ struct mm_struct *mm = t->mm;
+ unsigned long flags;
+
+ if (!mm)
+ return;
+ local_irq_save(flags);
+ mm_vcpu_put(mm, t->mm_vcpu);
+ t->mm_vcpu = -1;
+ t->mm_vcpu_active = 0;
+ local_irq_restore(flags);
+}
+
+void sched_vcpu_after_execve(struct task_struct *t)
+{
+ struct mm_struct *mm = t->mm;
+ unsigned long flags;
+
+ WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
+
+ local_irq_save(flags);
+ t->mm_vcpu = mm_vcpu_get(mm);
+ t->mm_vcpu_active = 1;
+ local_irq_restore(flags);
+ rseq_set_notify_resume(t);
+}
+
+void sched_vcpu_fork(struct task_struct *t)
+{
+ WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
+ t->mm_vcpu = -1;
+ t->mm_vcpu_active = 1;
+}
+#endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 771f8ddb7053..28a51cad8174 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3261,4 +3261,172 @@ static inline void update_current_exec_runtime(struct task_struct *curr,
cgroup_account_cputime(curr, delta_exec);
}
+#ifdef CONFIG_SCHED_MM_VCPU
+static inline int __mm_vcpu_get_single_node(struct mm_struct *mm)
+{
+ struct cpumask *cpumask;
+ int vcpu;
+
+ cpumask = mm_vcpumask(mm);
+ vcpu = cpumask_first_zero(cpumask);
+ if (vcpu >= nr_cpu_ids)
+ return -1;
+ __cpumask_set_cpu(vcpu, cpumask);
+ return vcpu;
+}
+
+#ifdef CONFIG_NUMA
+static inline bool mm_node_vcpumask_test_cpu(struct mm_struct *mm, int vcpu_id)
+{
+ if (num_possible_nodes() == 1)
+ return true;
+ return cpumask_test_cpu(vcpu_id, mm_node_vcpumask(mm, numa_node_id()));
+}
+
+static inline int __mm_vcpu_get(struct mm_struct *mm)
+{
+ struct cpumask *cpumask = mm_vcpumask(mm),
+ *node_cpumask = mm_node_vcpumask(mm, numa_node_id()),
+ *node_alloc_cpumask = mm_node_alloc_vcpumask(mm);
+ unsigned int node;
+ int vcpu;
+
+ if (num_possible_nodes() == 1)
+ return __mm_vcpu_get_single_node(mm);
+
+ /*
+ * Try to reserve lowest available vcpu number within those already
+ * reserved for this NUMA node.
+ */
+ vcpu = cpumask_first_andnot(node_cpumask, cpumask);
+ if (vcpu >= nr_cpu_ids)
+ goto alloc_numa;
+ __cpumask_set_cpu(vcpu, cpumask);
+ goto end;
+
+alloc_numa:
+ /*
+ * Try to reserve lowest available vcpu number within those not already
+ * allocated for numa nodes.
+ */
+ vcpu = cpumask_first_notandnot(node_alloc_cpumask, cpumask);
+ if (vcpu >= nr_cpu_ids)
+ goto numa_update;
+ __cpumask_set_cpu(vcpu, cpumask);
+ __cpumask_set_cpu(vcpu, node_cpumask);
+ __cpumask_set_cpu(vcpu, node_alloc_cpumask);
+ goto end;
+
+numa_update:
+ /*
+ * NUMA node id configuration changed for at least one CPU in the system.
+ * We need to steal a currently unused vcpu_id from an overprovisioned
+ * node for our current node. Userspace must handle the fact that the
+ * node id associated with this vcpu_id may change due to node ID
+ * reconfiguration.
+ *
+ * Count how many possible cpus are attached to each (other) node id,
+ * and compare this with the per-mm node vcpumask cpu count. Find one
+ * which has too many cpus in its mask to steal from.
+ */
+ for (node = 0; node < nr_node_ids; node++) {
+ struct cpumask *iter_cpumask;
+
+ if (node == numa_node_id())
+ continue;
+ iter_cpumask = mm_node_vcpumask(mm, node);
+ if (nr_cpus_node(node) < cpumask_weight(iter_cpumask)) {
+ /* Try to steal from this node. */
+ vcpu = cpumask_first_andnot(iter_cpumask, cpumask);
+ if (vcpu >= nr_cpu_ids)
+ goto steal_fail;
+ __cpumask_set_cpu(vcpu, cpumask);
+ __cpumask_clear_cpu(vcpu, iter_cpumask);
+ __cpumask_set_cpu(vcpu, node_cpumask);
+ goto end;
+ }
+ }
+
+steal_fail:
+ /*
+ * Our attempt at gracefully stealing a vcpu_id from another
+ * overprovisioned NUMA node failed. Fallback to grabbing the first
+ * available vcpu_id.
+ */
+ vcpu = cpumask_first_zero(cpumask);
+ if (vcpu >= nr_cpu_ids)
+ return -1;
+ __cpumask_set_cpu(vcpu, cpumask);
+ /* Steal vcpu from its numa node mask. */
+ for (node = 0; node < nr_node_ids; node++) {
+ struct cpumask *iter_cpumask;
+
+ if (node == numa_node_id())
+ continue;
+ iter_cpumask = mm_node_vcpumask(mm, node);
+ if (cpumask_test_cpu(vcpu, iter_cpumask)) {
+ __cpumask_clear_cpu(vcpu, iter_cpumask);
+ break;
+ }
+ }
+ __cpumask_set_cpu(vcpu, node_cpumask);
+end:
+ return vcpu;
+}
+
+#else
+static inline bool mm_node_vcpumask_test_cpu(struct mm_struct *mm, int vcpu_id)
+{
+ return true;
+}
+static inline int __mm_vcpu_get(struct mm_struct *mm)
+{
+ return __mm_vcpu_get_single_node(mm);
+}
+#endif
+
+static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
+{
+ lockdep_assert_irqs_disabled();
+ if (vcpu < 0)
+ return;
+ spin_lock(&mm->vcpu_lock);
+ __cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
+ spin_unlock(&mm->vcpu_lock);
+}
+
+static inline int mm_vcpu_get(struct mm_struct *mm)
+{
+ int ret;
+
+ lockdep_assert_irqs_disabled();
+ spin_lock(&mm->vcpu_lock);
+ ret = __mm_vcpu_get(mm);
+ spin_unlock(&mm->vcpu_lock);
+ return ret;
+}
+
+static inline void switch_mm_vcpu(struct task_struct *prev, struct task_struct *next)
+{
+ if (prev->mm_vcpu_active) {
+ if (next->mm_vcpu_active && next->mm == prev->mm) {
+ /*
+ * Context switch between threads in same mm, hand over
+ * the mm_vcpu from prev to next.
+ */
+ next->mm_vcpu = prev->mm_vcpu;
+ prev->mm_vcpu = -1;
+ return;
+ }
+ mm_vcpu_put(prev->mm, prev->mm_vcpu);
+ prev->mm_vcpu = -1;
+ }
+ if (next->mm_vcpu_active)
+ next->mm_vcpu = mm_vcpu_get(next->mm);
+}
+
+#else
+static inline void switch_mm_vcpu(struct task_struct *prev, struct task_struct *next) { }
+#endif
+
#endif /* _KERNEL_SCHED_SCHED_H */
diff --git a/kernel/signal.c b/kernel/signal.c
index d140672185a4..44fb9fdca13e 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -2950,6 +2950,7 @@ void exit_signals(struct task_struct *tsk)
cgroup_threadgroup_change_begin(tsk);
if (thread_group_empty(tsk) || (tsk->signal->flags & SIGNAL_GROUP_EXIT)) {
+ sched_vcpu_exit_signals(tsk);
tsk->flags |= PF_EXITING;
cgroup_threadgroup_change_end(tsk);
return;
@@ -2960,6 +2961,7 @@ void exit_signals(struct task_struct *tsk)
* From now this task is not visible for group-wide signals,
* see wants_signal(), do_signal_stop().
*/
+ sched_vcpu_exit_signals(tsk);
tsk->flags |= PF_EXITING;
cgroup_threadgroup_change_end(tsk);
--
2.25.1
This code is not currently built by the test Makefile, adds complexity,
and is not particularly useful considering that the abort handling
simply loops to retry the fast-path.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/param_test.c | 4 --
tools/testing/selftests/rseq/rseq-arm.h | 6 ---
tools/testing/selftests/rseq/rseq-arm64.h | 6 ---
tools/testing/selftests/rseq/rseq-mips.h | 6 ---
tools/testing/selftests/rseq/rseq-ppc.h | 6 ---
tools/testing/selftests/rseq/rseq-riscv.h | 6 ---
tools/testing/selftests/rseq/rseq-s390.h | 5 --
tools/testing/selftests/rseq/rseq-skip.h | 65 -----------------------
tools/testing/selftests/rseq/rseq-x86.h | 12 -----
9 files changed, 116 deletions(-)
delete mode 100644 tools/testing/selftests/rseq/rseq-skip.h
diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
index ef29bc16f358..9869369a8607 100644
--- a/tools/testing/selftests/rseq/param_test.c
+++ b/tools/testing/selftests/rseq/param_test.c
@@ -38,11 +38,7 @@ static int opt_yield, opt_signal, opt_sleep,
opt_disable_rseq, opt_threads = 200,
opt_disable_mod = 0, opt_test = 's', opt_mb = 0;
-#ifndef RSEQ_SKIP_FASTPATH
static long long opt_reps = 5000;
-#else
-static long long opt_reps = 100;
-#endif
static __thread __attribute__((tls_model("initial-exec")))
unsigned int signals_delivered;
diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h
index 893a11eca9d5..7445107f842b 100644
--- a/tools/testing/selftests/rseq/rseq-arm.h
+++ b/tools/testing/selftests/rseq/rseq-arm.h
@@ -79,10 +79,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, start_ip, \
post_commit_offset, abort_ip) \
".pushsection __rseq_cs, \"aw\"\n\t" \
@@ -823,5 +819,3 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
rseq_bug("expected value comparison failed");
#endif
}
-
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h
index cbe190a4d005..49c387fcd868 100644
--- a/tools/testing/selftests/rseq/rseq-arm64.h
+++ b/tools/testing/selftests/rseq/rseq-arm64.h
@@ -85,10 +85,6 @@ do { \
} \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#define RSEQ_ASM_TMP_REG32 "w15"
#define RSEQ_ASM_TMP_REG "x15"
#define RSEQ_ASM_TMP_REG_2 "x14"
@@ -691,5 +687,3 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
rseq_bug("expected value comparison failed");
#endif
}
-
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h
index 878739fae2fd..dd199952d649 100644
--- a/tools/testing/selftests/rseq/rseq-mips.h
+++ b/tools/testing/selftests/rseq/rseq-mips.h
@@ -60,10 +60,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#if _MIPS_SZLONG == 64
# define LONG ".dword"
# define LONG_LA "dla"
@@ -773,5 +769,3 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
rseq_bug("expected value comparison failed");
#endif
}
-
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h
index bab8e0b9fb11..f82d95c1bb3f 100644
--- a/tools/testing/selftests/rseq/rseq-ppc.h
+++ b/tools/testing/selftests/rseq/rseq-ppc.h
@@ -36,10 +36,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
/*
* The __rseq_cs_ptr_array and __rseq_cs sections can be used by debuggers to
* better handle single-stepping through the restartable critical sections.
@@ -787,5 +783,3 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
rseq_bug("expected value comparison failed");
#endif
}
-
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-riscv.h b/tools/testing/selftests/rseq/rseq-riscv.h
index 3a391c9bf468..b16d943a63e1 100644
--- a/tools/testing/selftests/rseq/rseq-riscv.h
+++ b/tools/testing/selftests/rseq/rseq-riscv.h
@@ -49,10 +49,6 @@ do { \
RSEQ_WRITE_ONCE(*(p), v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, start_ip, \
post_commit_offset, abort_ip) \
".pushsection __rseq_cs, \"aw\"\n" \
@@ -673,5 +669,3 @@ int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
rseq_bug("cpu_id comparison failed");
#endif
}
-
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h
index 4e6dc5f0cb42..4d3286453bbf 100644
--- a/tools/testing/selftests/rseq/rseq-s390.h
+++ b/tools/testing/selftests/rseq/rseq-s390.h
@@ -28,10 +28,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#ifdef __s390x__
#define LONG_L "lg"
@@ -607,4 +603,3 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
return rseq_cmpeqv_trymemcpy_storev(v, expect, dst, src, len,
newv, cpu);
}
-#endif /* !RSEQ_SKIP_FASTPATH */
diff --git a/tools/testing/selftests/rseq/rseq-skip.h b/tools/testing/selftests/rseq/rseq-skip.h
deleted file mode 100644
index 7b53dac1fcdd..000000000000
--- a/tools/testing/selftests/rseq/rseq-skip.h
+++ /dev/null
@@ -1,65 +0,0 @@
-/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
-/*
- * rseq-skip.h
- *
- * (C) Copyright 2017-2018 - Mathieu Desnoyers <[email protected]>
- */
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- return -1;
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- return -1;
-}
diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h
index bd01dc41ca13..e148dfb2f68a 100644
--- a/tools/testing/selftests/rseq/rseq-x86.h
+++ b/tools/testing/selftests/rseq/rseq-x86.h
@@ -50,10 +50,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
#define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, \
start_ip, post_commit_offset, abort_ip) \
".pushsection __rseq_cs, \"aw\"\n\t" \
@@ -629,8 +625,6 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
newv, cpu);
}
-#endif /* !RSEQ_SKIP_FASTPATH */
-
#elif defined(__i386__)
#define RSEQ_ASM_TP_SEGMENT %%gs
@@ -657,10 +651,6 @@ do { \
RSEQ_WRITE_ONCE(*p, v); \
} while (0)
-#ifdef RSEQ_SKIP_FASTPATH
-#include "rseq-skip.h"
-#else /* !RSEQ_SKIP_FASTPATH */
-
/*
* Use eax as scratch register and take memory operands as input to
* lessen register pressure. Especially needed when compiling in O0.
@@ -1360,6 +1350,4 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
#endif
}
-#endif /* !RSEQ_SKIP_FASTPATH */
-
#endif
--
2.25.1
Introduce a rseq-arm-bits.h template header which is internally included
to generate the static inline functions covering (see the include-pattern
sketch after the list below):
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
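For context, here is a sketch of the include pattern used to instantiate
the template from rseq-arm.h; the macro names are taken from the hunks
below, though the exact sequence lives in the full patch.

/* Per-cpu-id indexing. */

#define RSEQ_TEMPLATE_CPU_ID
#define RSEQ_TEMPLATE_MO_RELAXED
#include "rseq-arm-bits.h"
#undef RSEQ_TEMPLATE_MO_RELAXED

#define RSEQ_TEMPLATE_MO_RELEASE
#include "rseq-arm-bits.h"
#undef RSEQ_TEMPLATE_MO_RELEASE
#undef RSEQ_TEMPLATE_CPU_ID

/* Per-vm-vcpu-id indexing. */

#define RSEQ_TEMPLATE_VM_VCPU_ID
#define RSEQ_TEMPLATE_MO_RELAXED
#include "rseq-arm-bits.h"
#undef RSEQ_TEMPLATE_MO_RELAXED

#define RSEQ_TEMPLATE_MO_RELEASE
#include "rseq-arm-bits.h"
#undef RSEQ_TEMPLATE_MO_RELEASE
#undef RSEQ_TEMPLATE_VM_VCPU_ID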
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Russell King <[email protected]>
Cc: Mark Rutland <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
tools/testing/selftests/rseq/rseq-arm-bits.h | 505 ++++++++++++++
tools/testing/selftests/rseq/rseq-arm.h | 695 +------------------
2 files changed, 530 insertions(+), 670 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-arm-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-arm-bits.h b/tools/testing/selftests/rseq/rseq-arm-bits.h
new file mode 100644
index 000000000000..c0657c8c3e8f
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-arm-bits.h
@@ -0,0 +1,505 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-arm-bits.h
+ *
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[error2]\n\t"
+#endif
+ /* final store */
+ "str %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expectnot], r0\n\t"
+ "beq %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ "ldr r0, %[v]\n\t"
+ "cmp %[expectnot], r0\n\t"
+ "beq %l[error2]\n\t"
+#endif
+ "str r0, %[load]\n\t"
+ "add r0, %[voffp]\n\t"
+ "ldr r0, [r0]\n\t"
+ /* final store */
+ "str r0, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "Ir" (voffp),
+ [load] "m" (*load)
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+#endif
+ "ldr r0, %[v]\n\t"
+ "add r0, %[count]\n\t"
+ /* final store */
+ "str r0, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [count] "Ir" (count)
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+ "ldr r0, %[v2]\n\t"
+ "cmp %[expect2], r0\n\t"
+ "bne %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[error2]\n\t"
+ "ldr r0, %[v2]\n\t"
+ "cmp %[expect2], r0\n\t"
+ "bne %l[error3]\n\t"
+#endif
+ /* final store */
+ "str %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne %l[error2]\n\t"
+#endif
+ /* try store */
+ "str %[newv2], %[v2]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "dmb\n\t" /* full mb provides store-release */
+#endif
+ /* final store */
+ "str %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ uint32_t rseq_scratch[3];
+
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ "str %[src], %[rseq_scratch0]\n\t"
+ "str %[dst], %[rseq_scratch1]\n\t"
+ "str %[len], %[rseq_scratch2]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne 5f\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
+ "ldr r0, %[v]\n\t"
+ "cmp %[expect], r0\n\t"
+ "bne 7f\n\t"
+#endif
+ /* try memcpy */
+ "cmp %[len], #0\n\t" \
+ "beq 333f\n\t" \
+ "222:\n\t" \
+ "ldrb %%r0, [%[src]]\n\t" \
+ "strb %%r0, [%[dst]]\n\t" \
+ "adds %[src], #1\n\t" \
+ "adds %[dst], #1\n\t" \
+ "subs %[len], #1\n\t" \
+ "bne 222b\n\t" \
+ "333:\n\t" \
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "dmb\n\t" /* full mb provides store-release */
+#endif
+ /* final store */
+ "str %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ "ldr %[len], %[rseq_scratch2]\n\t"
+ "ldr %[dst], %[rseq_scratch1]\n\t"
+ "ldr %[src], %[rseq_scratch0]\n\t"
+ "b 8f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4,
+ /* teardown */
+ "ldr %[len], %[rseq_scratch2]\n\t"
+ "ldr %[dst], %[rseq_scratch1]\n\t"
+ "ldr %[src], %[rseq_scratch0]\n\t",
+ abort, 1b, 2b, 4f)
+ RSEQ_ASM_DEFINE_CMPFAIL(5,
+ /* teardown */
+ "ldr %[len], %[rseq_scratch2]\n\t"
+ "ldr %[dst], %[rseq_scratch1]\n\t"
+ "ldr %[src], %[rseq_scratch0]\n\t",
+ cmpfail)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_CMPFAIL(6,
+ /* teardown */
+ "ldr %[len], %[rseq_scratch2]\n\t"
+ "ldr %[dst], %[rseq_scratch1]\n\t"
+ "ldr %[src], %[rseq_scratch0]\n\t",
+ error1)
+ RSEQ_ASM_DEFINE_CMPFAIL(7,
+ /* teardown */
+ "ldr %[len], %[rseq_scratch2]\n\t"
+ "ldr %[dst], %[rseq_scratch1]\n\t"
+ "ldr %[src], %[rseq_scratch0]\n\t",
+ error2)
+#endif
+ "8:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len),
+ [rseq_scratch0] "m" (rseq_scratch[0]),
+ [rseq_scratch1] "m" (rseq_scratch[1]),
+ [rseq_scratch2] "m" (rseq_scratch[2])
+ RSEQ_INJECT_INPUT
+ : "r0", "memory", "cc"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h
index 7445107f842b..eb906db604f0 100644
--- a/tools/testing/selftests/rseq/rseq-arm.h
+++ b/tools/testing/selftests/rseq/rseq-arm.h
@@ -2,7 +2,7 @@
/*
* rseq-arm.h
*
- * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
*/
/*
@@ -143,679 +143,34 @@ do { \
teardown \
"b %l[" __rseq_str(cmpfail_label) "]\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+/* Per-cpu-id indexing. */
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[error2]\n\t"
-#endif
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expectnot], r0\n\t"
- "beq %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- "ldr r0, %[v]\n\t"
- "cmp %[expectnot], r0\n\t"
- "beq %l[error2]\n\t"
-#endif
- "str r0, %[load]\n\t"
- "add r0, %[voffp]\n\t"
- "ldr r0, [r0]\n\t"
- /* final store */
- "str r0, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "Ir" (voffp),
- [load] "m" (*load)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
-#endif
- "ldr r0, %[v]\n\t"
- "add r0, %[count]\n\t"
- /* final store */
- "str r0, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [count] "Ir" (count)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[error2]\n\t"
-#endif
- /* try store */
- "str %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-arm-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[error2]\n\t"
-#endif
- /* try store */
- "str %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- "dmb\n\t" /* full mb provides store-release */
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
- "ldr r0, %[v2]\n\t"
- "cmp %[expect2], r0\n\t"
- "bne %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne %l[error2]\n\t"
- "ldr r0, %[v2]\n\t"
- "cmp %[expect2], r0\n\t"
- "bne %l[error3]\n\t"
-#endif
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint32_t rseq_scratch[3];
+/* Per-vm-vcpu-id indexing. */
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- "str %[src], %[rseq_scratch0]\n\t"
- "str %[dst], %[rseq_scratch1]\n\t"
- "str %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne 7f\n\t"
-#endif
- /* try memcpy */
- "cmp %[len], #0\n\t" \
- "beq 333f\n\t" \
- "222:\n\t" \
- "ldrb %%r0, [%[src]]\n\t" \
- "strb %%r0, [%[dst]]\n\t" \
- "adds %[src], #1\n\t" \
- "adds %[dst], #1\n\t" \
- "subs %[len], #1\n\t" \
- "bne 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t"
- "b 8f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- abort, 1b, 2b, 4f)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- error2)
-#endif
- "8:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint32_t rseq_scratch[3];
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-arm-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
- RSEQ_INJECT_C(9)
+/* APIs which are not based on cpu ids. */
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- "str %[src], %[rseq_scratch0]\n\t"
- "str %[dst], %[rseq_scratch1]\n\t"
- "str %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
- "ldr r0, %[v]\n\t"
- "cmp %[expect], r0\n\t"
- "bne 7f\n\t"
-#endif
- /* try memcpy */
- "cmp %[len], #0\n\t" \
- "beq 333f\n\t" \
- "222:\n\t" \
- "ldrb %%r0, [%[src]]\n\t" \
- "strb %%r0, [%[dst]]\n\t" \
- "adds %[src], #1\n\t" \
- "adds %[dst], #1\n\t" \
- "subs %[len], #1\n\t" \
- "bne 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- "dmb\n\t" /* full mb provides store-release */
- /* final store */
- "str %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t"
- "b 8f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- abort, 1b, 2b, 4f)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- /* teardown */
- "ldr %[len], %[rseq_scratch2]\n\t"
- "ldr %[dst], %[rseq_scratch1]\n\t"
- "ldr %[src], %[rseq_scratch0]\n\t",
- error2)
-#endif
- "8:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- RSEQ_INJECT_INPUT
- : "r0", "memory", "cc"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Introduce a rseq-mips-bits.h template header which is internally
included to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
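As an illustration (not part of this patch), rseq-bits-template.h is
expected to select both the struct rseq field compared by the critical
sections and the suffix appended to the generated function names,
through RSEQ_TEMPLATE_CPU_ID_FIELD and RSEQ_TEMPLATE_IDENTIFIER(). A
simplified sketch with assumed macro bodies, which may differ from the
real template header (CPU_ID_NONE handling omitted):

  /* Assumed, simplified sketch of rseq-bits-template.h. */
  #ifdef RSEQ_TEMPLATE_CPU_ID
  # define RSEQ_TEMPLATE_CPU_ID_FIELD   cpu_id
  # ifdef RSEQ_TEMPLATE_MO_RELEASE
  #  define RSEQ_TEMPLATE_SUFFIX        _release_cpu_id
  # else
  #  define RSEQ_TEMPLATE_SUFFIX        _relaxed_cpu_id
  # endif
  #elif defined(RSEQ_TEMPLATE_VM_VCPU_ID)
  # define RSEQ_TEMPLATE_CPU_ID_FIELD   vm_vcpu_id
  # ifdef RSEQ_TEMPLATE_MO_RELEASE
  #  define RSEQ_TEMPLATE_SUFFIX        _release_vm_vcpu_id
  # else
  #  define RSEQ_TEMPLATE_SUFFIX        _relaxed_vm_vcpu_id
  # endif
  #endif
  /* Token-pasting helpers; these names are illustrative only. */
  #define RSEQ__CAT2(x, y)              x ## y
  #define RSEQ__CAT(x, y)               RSEQ__CAT2(x, y)
  #define RSEQ_TEMPLATE_IDENTIFIER(x)   RSEQ__CAT(x, RSEQ_TEMPLATE_SUFFIX)

rseq-bits-reset.h, included at the end of rseq-mips-bits.h, is then
expected to undefine these definitions so the template can be included
again for the next variant selected by rseq-mips.h.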
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Paul Burton <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
tools/testing/selftests/rseq/rseq-mips-bits.h | 462 +++++++++++++
tools/testing/selftests/rseq/rseq-mips.h | 640 +-----------------
2 files changed, 487 insertions(+), 615 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-mips-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-mips-bits.h b/tools/testing/selftests/rseq/rseq-mips-bits.h
new file mode 100644
index 000000000000..7c8a4bd3ec35
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-mips-bits.h
@@ -0,0 +1,462 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * Author: Paul Burton <[email protected]>
+ * (C) Copyright 2018 MIPS Tech LLC
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[error2]\n\t"
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " $4, %[v]\n\t"
+ "beq $4, %[expectnot], %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_L " $4, %[v]\n\t"
+ "beq $4, %[expectnot], %l[error2]\n\t"
+#endif
+ LONG_S " $4, %[load]\n\t"
+ LONG_ADDI " $4, %[voffp]\n\t"
+ LONG_L " $4, 0($4)\n\t"
+ /* final store */
+ LONG_S " $4, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "Ir" (voffp),
+ [load] "m" (*load)
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+#endif
+ LONG_L " $4, %[v]\n\t"
+ LONG_ADDI " $4, %[count]\n\t"
+ /* final store */
+ LONG_S " $4, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [count] "Ir" (count)
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+ LONG_L " $4, %[v2]\n\t"
+ "bne $4, %[expect2], %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[error2]\n\t"
+ LONG_L " $4, %[v2]\n\t"
+ "bne $4, %[expect2], %l[error3]\n\t"
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], %l[error2]\n\t"
+#endif
+ /* try store */
+ LONG_S " %[newv2], %[v2]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "sync\n\t" /* full sync provides store-release */
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ "b 5f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
+ "5:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ uintptr_t rseq_scratch[3];
+
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ LONG_S " %[src], %[rseq_scratch0]\n\t"
+ LONG_S " %[dst], %[rseq_scratch1]\n\t"
+ LONG_S " %[len], %[rseq_scratch2]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], 5f\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
+ LONG_L " $4, %[v]\n\t"
+ "bne $4, %[expect], 7f\n\t"
+#endif
+ /* try memcpy */
+ "beqz %[len], 333f\n\t" \
+ "222:\n\t" \
+ "lb $4, 0(%[src])\n\t" \
+ "sb $4, 0(%[dst])\n\t" \
+ LONG_ADDI " %[src], 1\n\t" \
+ LONG_ADDI " %[dst], 1\n\t" \
+ LONG_ADDI " %[len], -1\n\t" \
+ "bnez %[len], 222b\n\t" \
+ "333:\n\t" \
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "sync\n\t" /* full sync provides store-release */
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t"
+ "b 8f\n\t"
+ RSEQ_ASM_DEFINE_ABORT(3, 4,
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ abort, 1b, 2b, 4f)
+ RSEQ_ASM_DEFINE_CMPFAIL(5,
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ cmpfail)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_CMPFAIL(6,
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ error1)
+ RSEQ_ASM_DEFINE_CMPFAIL(7,
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ error2)
+#endif
+ "8:\n\t"
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len),
+ [rseq_scratch0] "m" (rseq_scratch[0]),
+ [rseq_scratch1] "m" (rseq_scratch[1]),
+ [rseq_scratch2] "m" (rseq_scratch[2])
+ RSEQ_INJECT_INPUT
+ : "$4", "memory"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h
index dd199952d649..0ca65cc088df 100644
--- a/tools/testing/selftests/rseq/rseq-mips.h
+++ b/tools/testing/selftests/rseq/rseq-mips.h
@@ -2,9 +2,7 @@
/*
* Author: Paul Burton <[email protected]>
* (C) Copyright 2018 MIPS Tech LLC
- *
- * Based on rseq-arm.h:
- * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
*/
/*
@@ -150,622 +148,34 @@ do { \
teardown \
"b %l[" __rseq_str(cmpfail_label) "]\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+/* Per-cpu-id indexing. */
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[error2]\n\t"
-#endif
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-mips-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-mips-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "beq $4, %[expectnot], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " $4, %[v]\n\t"
- "beq $4, %[expectnot], %l[error2]\n\t"
-#endif
- LONG_S " $4, %[load]\n\t"
- LONG_ADDI " $4, %[voffp]\n\t"
- LONG_L " $4, 0($4)\n\t"
- /* final store */
- LONG_S " $4, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "Ir" (voffp),
- [load] "m" (*load)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* Per-vm-vcpu-id indexing. */
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-mips-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
-#endif
- LONG_L " $4, %[v]\n\t"
- LONG_ADDI " $4, %[count]\n\t"
- /* final store */
- LONG_S " $4, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [count] "Ir" (count)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-mips-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[error2]\n\t"
-#endif
- /* try store */
- LONG_S " %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* APIs which are not based on cpu ids. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[error2]\n\t"
-#endif
- /* try store */
- LONG_S " %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- "sync\n\t" /* full sync provides store-release */
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
- LONG_L " $4, %[v2]\n\t"
- "bne $4, %[expect2], %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], %l[error2]\n\t"
- LONG_L " $4, %[v2]\n\t"
- "bne $4, %[expect2], %l[error3]\n\t"
-#endif
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- "b 5f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4, "", abort, 1b, 2b, 4f)
- "5:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uintptr_t rseq_scratch[3];
-
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- LONG_S " %[src], %[rseq_scratch0]\n\t"
- LONG_S " %[dst], %[rseq_scratch1]\n\t"
- LONG_S " %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], 7f\n\t"
-#endif
- /* try memcpy */
- "beqz %[len], 333f\n\t" \
- "222:\n\t" \
- "lb $4, 0(%[src])\n\t" \
- "sb $4, 0(%[dst])\n\t" \
- LONG_ADDI " %[src], 1\n\t" \
- LONG_ADDI " %[dst], 1\n\t" \
- LONG_ADDI " %[len], -1\n\t" \
- "bnez %[len], 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t"
- "b 8f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- abort, 1b, 2b, 4f)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error2)
-#endif
- "8:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uintptr_t rseq_scratch[3];
-
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- LONG_S " %[src], %[rseq_scratch0]\n\t"
- LONG_S " %[dst], %[rseq_scratch1]\n\t"
- LONG_S " %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3f, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
- LONG_L " $4, %[v]\n\t"
- "bne $4, %[expect], 7f\n\t"
-#endif
- /* try memcpy */
- "beqz %[len], 333f\n\t" \
- "222:\n\t" \
- "lb $4, 0(%[src])\n\t" \
- "sb $4, 0(%[dst])\n\t" \
- LONG_ADDI " %[src], 1\n\t" \
- LONG_ADDI " %[dst], 1\n\t" \
- LONG_ADDI " %[len], -1\n\t" \
- "bnez %[len], 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- "sync\n\t" /* full sync provides store-release */
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t"
- "b 8f\n\t"
- RSEQ_ASM_DEFINE_ABORT(3, 4,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- abort, 1b, 2b, 4f)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error2)
-#endif
- "8:\n\t"
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- RSEQ_INJECT_INPUT
- : "$4", "memory"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-mips-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Adding the NUMA node id to struct rseq is straightforward, and is a good
way to find out whether anything in the user-space ecosystem prevents
extending struct rseq.
This NUMA node id field allows memory allocators such as tcmalloc to
take advantage of fast access to the current NUMA node id to perform
NUMA-aware memory allocation.
It can also be useful for implementing fast-paths for NUMA-aware
user-space mutexes.
It also allows implementing getcpu(2) purely in user-space.
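For instance, a user-space getcpu() equivalent can be built on top of
these fields. This is a minimal sketch (not part of this patch),
assuming a librseq-style rseq_get_abi() accessor and an rseq ABI header
exposing the new node_id field, as in the selftests:

  #include <stdint.h>
  #include "rseq.h"  /* selftests/librseq header providing rseq_get_abi() */

  /*
   * Sketch only: cpu_id and node_id are updated by the kernel and read
   * here with single-copy atomicity by the owning thread.
   */
  static inline int rseq_getcpu(unsigned int *cpu, unsigned int *node)
  {
          if (cpu)
                  *cpu = *(volatile uint32_t *)&rseq_get_abi()->cpu_id;
          if (node)
                  *node = *(volatile uint32_t *)&rseq_get_abi()->node_id;
          return 0;
  }

A complete implementation would fall back to the getcpu(2) system call
when rseq registration is not available for the thread.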
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
Changes since v4:
- Use __entry->cpu_id as argument for cpu_to_node() in the rseq_update
tracepoint.
---
include/trace/events/rseq.h | 4 +++-
include/uapi/linux/rseq.h | 8 ++++++++
kernel/rseq.c | 19 +++++++++++++------
3 files changed, 24 insertions(+), 7 deletions(-)
diff --git a/include/trace/events/rseq.h b/include/trace/events/rseq.h
index a04a64bc1a00..dde7a359b4ef 100644
--- a/include/trace/events/rseq.h
+++ b/include/trace/events/rseq.h
@@ -16,13 +16,15 @@ TRACE_EVENT(rseq_update,
TP_STRUCT__entry(
__field(s32, cpu_id)
+ __field(s32, node_id)
),
TP_fast_assign(
__entry->cpu_id = raw_smp_processor_id();
+ __entry->node_id = cpu_to_node(__entry->cpu_id);
),
- TP_printk("cpu_id=%d", __entry->cpu_id)
+ TP_printk("cpu_id=%d node_id=%d", __entry->cpu_id, __entry->node_id)
);
TRACE_EVENT(rseq_ip_fixup,
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index 05d3c4cdeb40..1cb90a435c5c 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -131,6 +131,14 @@ struct rseq {
*/
__u32 flags;
+ /*
+ * Restartable sequences node_id field. Updated by the kernel. Read by
+ * user-space with single-copy atomicity semantics. This field should
+ * only be read by the thread which registered this data structure.
+ * Aligned on 32-bit. Contains the current NUMA node ID.
+ */
+ __u32 node_id;
+
/*
* Flexible array member at end of structure, after last feature field.
*/
diff --git a/kernel/rseq.c b/kernel/rseq.c
index c1058b3f10ac..e21ad8929958 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -85,15 +85,17 @@
* F1. <failure>
*/
-static int rseq_update_cpu_id(struct task_struct *t)
+static int rseq_update_cpu_node_id(struct task_struct *t)
{
- u32 cpu_id = raw_smp_processor_id();
struct rseq __user *rseq = t->rseq;
+ u32 cpu_id = raw_smp_processor_id();
+ u32 node_id = cpu_to_node(cpu_id);
if (!user_write_access_begin(rseq, t->rseq_len))
goto efault;
unsafe_put_user(cpu_id, &rseq->cpu_id_start, efault_end);
unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
+ unsafe_put_user(node_id, &rseq->node_id, efault_end);
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally updated only if
@@ -109,9 +111,9 @@ static int rseq_update_cpu_id(struct task_struct *t)
return -EFAULT;
}
-static int rseq_reset_rseq_cpu_id(struct task_struct *t)
+static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
{
- u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED;
+ u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0;
/*
* Reset cpu_id_start to its initial state (0).
@@ -125,6 +127,11 @@ static int rseq_reset_rseq_cpu_id(struct task_struct *t)
*/
if (put_user(cpu_id, &t->rseq->cpu_id))
return -EFAULT;
+ /*
+ * Reset node_id to its initial state (0).
+ */
+ if (put_user(node_id, &t->rseq->node_id))
+ return -EFAULT;
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally reset only if
@@ -299,7 +306,7 @@ void __rseq_handle_notify_resume(struct ksignal *ksig, struct pt_regs *regs)
if (unlikely(ret < 0))
goto error;
}
- if (unlikely(rseq_update_cpu_id(t)))
+ if (unlikely(rseq_update_cpu_node_id(t)))
goto error;
return;
@@ -346,7 +353,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
return -EINVAL;
if (current->rseq_sig != sig)
return -EPERM;
- ret = rseq_reset_rseq_cpu_id(current);
+ ret = rseq_reset_rseq_cpu_node_id(current);
if (ret)
return ret;
current->rseq = NULL;
--
2.25.1
Allow loading a pair of u32 values within a single rseq critical section.
This can be used in situations where both rseq_abi()->vm_vcpu_id and
rseq_abi()->node_id need to be sampled atomically with respect to
preemption, signal delivery and migration.
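For example, a consistent { vm_vcpu_id, node_id } pair could be sampled
with a retry loop around the new helper. This is a sketch only (not part
of this patch); it uses the rseq_get_abi() accessor from the selftests,
and the casts assume the registered ABI structure is declared volatile:

  uint32_t vm_vcpu_id, node_id;

  /* Retry if the critical section is aborted by preemption, signal or migration. */
  while (rseq_load_u32_u32(RSEQ_MO_RELAXED,
                           &vm_vcpu_id, (uint32_t *)&rseq_get_abi()->vm_vcpu_id,
                           &node_id, (uint32_t *)&rseq_get_abi()->node_id))
          ;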
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/rseq-x86-bits.h | 43 ++++++++++++++++++++
tools/testing/selftests/rseq/rseq.h | 14 +++++++
2 files changed, 57 insertions(+)
diff --git a/tools/testing/selftests/rseq/rseq-x86-bits.h b/tools/testing/selftests/rseq/rseq-x86-bits.h
index 28ca77cc876c..ef961ab012e5 100644
--- a/tools/testing/selftests/rseq/rseq-x86-bits.h
+++ b/tools/testing/selftests/rseq/rseq-x86-bits.h
@@ -990,4 +990,47 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t
#endif
+#if defined(RSEQ_TEMPLATE_CPU_ID_NONE) && defined(RSEQ_TEMPLATE_MO_RELAXED)
+
+#define RSEQ_ARCH_HAS_LOAD_U32_U32
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_load_u32_u32)(uint32_t *dst1, uint32_t *src1,
+ uint32_t *dst2, uint32_t *src2)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_INJECT_ASM(3)
+ "movl %[src1], %%eax\n\t"
+ "movl %%eax, %[dst1]\n\t"
+ "movl %[src2], %%eax\n\t"
+ "movl %%eax, %[dst2]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [dst1] "m" (*dst1),
+ [src1] "m" (*src1),
+ [dst2] "m" (*dst2),
+ [src2] "m" (*src2)
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+}
+
+#endif /* defined(RSEQ_TEMPLATE_CPU_ID_NONE) && defined(RSEQ_TEMPLATE_MO_RELAXED) */
+
#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
index 95a76a1c3b27..30fa8bfd874e 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -381,4 +381,18 @@ int rseq_cmpeqv_trymemcpy_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode per
}
}
+#ifdef RSEQ_ARCH_HAS_LOAD_U32_U32
+
+static inline __attribute__((always_inline))
+int rseq_load_u32_u32(enum rseq_mo rseq_mo,
+ uint32_t *dst1, uint32_t *src1,
+ uint32_t *dst2, uint32_t *src2)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ return rseq_load_u32_u32_relaxed(dst1, src1, dst2, src2);
+}
+
+#endif
+
#endif /* RSEQ_H_ */
--
2.25.1
Adapt to the rseq.h API changes introduced by commits
"selftests/rseq: <arch>: Template memory ordering and percpu access mode".
Build new param_test_vm_vcpu_id, param_test_vm_vcpu_id_benchmark, and
param_test_vm_vcpu_id_compare_twice executables to test the new
"vm_vcpu_id" rseq field.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/.gitignore | 3 +
tools/testing/selftests/rseq/Makefile | 15 +-
tools/testing/selftests/rseq/param_test.c | 148 ++++++++++++------
.../testing/selftests/rseq/run_param_test.sh | 5 +
4 files changed, 122 insertions(+), 49 deletions(-)
diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 5a7e5acc628c..db5c1a124c6c 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -6,3 +6,6 @@ basic_rseq_op_test
param_test
param_test_benchmark
param_test_compare_twice
+param_test_vm_vcpu_id
+param_test_vm_vcpu_id_benchmark
+param_test_vm_vcpu_id_compare_twice
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 4210c135e621..3eec8e166385 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -13,7 +13,8 @@ LDLIBS += -lpthread -ldl
OVERRIDE_TARGETS = 1
TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_vm_vcpu_id_test param_test \
- param_test_benchmark param_test_compare_twice
+ param_test_benchmark param_test_compare_twice param_test_vm_vcpu_id \
+ param_test_vm_vcpu_id_benchmark param_test_vm_vcpu_id_compare_twice
TEST_GEN_PROGS_EXTENDED = librseq.so
@@ -39,3 +40,15 @@ $(OUTPUT)/param_test_benchmark: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
$(OUTPUT)/param_test_compare_twice: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
rseq.h rseq-*.h
$(CC) $(CFLAGS) -DRSEQ_COMPARE_TWICE $< $(LDLIBS) -lrseq -o $@
+
+$(OUTPUT)/param_test_vm_vcpu_id: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
+ rseq.h rseq-*.h
+ $(CC) $(CFLAGS) -DBUILDOPT_RSEQ_PERCPU_VM_VCPU_ID $< $(LDLIBS) -lrseq -o $@
+
+$(OUTPUT)/param_test_vm_vcpu_id_benchmark: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
+ rseq.h rseq-*.h
+ $(CC) $(CFLAGS) -DBUILDOPT_RSEQ_PERCPU_VM_VCPU_ID -DBENCHMARK $< $(LDLIBS) -lrseq -o $@
+
+$(OUTPUT)/param_test_vm_vcpu_id_compare_twice: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
+ rseq.h rseq-*.h
+ $(CC) $(CFLAGS) -DBUILDOPT_RSEQ_PERCPU_VM_VCPU_ID -DRSEQ_COMPARE_TWICE $< $(LDLIBS) -lrseq -o $@
diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
index 9869369a8607..f3687a90ec0c 100644
--- a/tools/testing/selftests/rseq/param_test.c
+++ b/tools/testing/selftests/rseq/param_test.c
@@ -16,6 +16,7 @@
#include <signal.h>
#include <errno.h>
#include <stddef.h>
+#include <stdbool.h>
static inline pid_t rseq_gettid(void)
{
@@ -36,7 +37,7 @@ static int opt_modulo, verbose;
static int opt_yield, opt_signal, opt_sleep,
opt_disable_rseq, opt_threads = 200,
- opt_disable_mod = 0, opt_test = 's', opt_mb = 0;
+ opt_disable_mod = 0, opt_test = 's';
static long long opt_reps = 5000;
@@ -264,6 +265,63 @@ unsigned int yield_mod_cnt, nr_abort;
#include "rseq.h"
+static enum rseq_mo opt_mo = RSEQ_MO_RELAXED;
+
+#ifdef RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+#define TEST_MEMBARRIER
+
+static int sys_membarrier(int cmd, int flags, int cpu_id)
+{
+ return syscall(__NR_membarrier, cmd, flags, cpu_id);
+}
+#endif
+
+#ifdef BUILDOPT_RSEQ_PERCPU_VM_VCPU_ID
+# define RSEQ_PERCPU RSEQ_PERCPU_VM_VCPU_ID
+static
+int get_current_cpu_id(void)
+{
+ return rseq_current_vm_vcpu_id();
+}
+static
+bool rseq_validate_cpu_id(void)
+{
+ return rseq_vm_vcpu_id_available();
+}
+# ifdef TEST_MEMBARRIER
+/*
+ * Membarrier does not currently support targeting a vm_vcpu_id, so
+ * issue the barrier on all cpus.
+ */
+static
+int rseq_membarrier_expedited(int cpu)
+{
+ return sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ 0, 0);
+}
+# endif /* TEST_MEMBARRIER */
+#else
+# define RSEQ_PERCPU RSEQ_PERCPU_CPU_ID
+static
+int get_current_cpu_id(void)
+{
+ return rseq_cpu_start();
+}
+static
+bool rseq_validate_cpu_id(void)
+{
+ return rseq_current_cpu_raw() >= 0;
+}
+# ifdef TEST_MEMBARRIER
+static
+int rseq_membarrier_expedited(int cpu)
+{
+ return sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
+ MEMBARRIER_CMD_FLAG_CPU, cpu);
+}
+# endif /* TEST_MEMBARRIER */
+#endif
+
struct percpu_lock_entry {
intptr_t v;
} __attribute__((aligned(128)));
@@ -351,8 +409,9 @@ static int rseq_this_cpu_lock(struct percpu_lock *lock)
for (;;) {
int ret;
- cpu = rseq_cpu_start();
- ret = rseq_cmpeqv_storev(&lock->c[cpu].v,
+ cpu = get_current_cpu_id();
+ ret = rseq_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ &lock->c[cpu].v,
0, 1, cpu);
if (rseq_likely(!ret))
break;
@@ -469,8 +528,9 @@ void *test_percpu_inc_thread(void *arg)
do {
int cpu;
- cpu = rseq_cpu_start();
- ret = rseq_addv(&data->c[cpu].count, 1, cpu);
+ cpu = get_current_cpu_id();
+ ret = rseq_addv(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ &data->c[cpu].count, 1, cpu);
} while (rseq_unlikely(ret));
#ifndef BENCHMARK
if (i != 0 && !(i % (reps / 10)))
@@ -539,13 +599,14 @@ void this_cpu_list_push(struct percpu_list *list,
intptr_t *targetptr, newval, expect;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
/* Load list->c[cpu].head with single-copy atomicity. */
expect = (intptr_t)RSEQ_READ_ONCE(list->c[cpu].head);
newval = (intptr_t)node;
targetptr = (intptr_t *)&list->c[cpu].head;
node->next = (struct percpu_list_node *)expect;
- ret = rseq_cmpeqv_storev(targetptr, expect, newval, cpu);
+ ret = rseq_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr, expect, newval, cpu);
if (rseq_likely(!ret))
break;
/* Retry if comparison fails or rseq aborts. */
@@ -571,13 +632,14 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list,
long offset;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
targetptr = (intptr_t *)&list->c[cpu].head;
expectnot = (intptr_t)NULL;
offset = offsetof(struct percpu_list_node, next);
load = (intptr_t *)&head;
- ret = rseq_cmpnev_storeoffp_load(targetptr, expectnot,
- offset, load, cpu);
+ ret = rseq_cmpnev_storeoffp_load(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr, expectnot,
+ offset, load, cpu);
if (rseq_likely(!ret)) {
node = head;
break;
@@ -715,7 +777,7 @@ bool this_cpu_buffer_push(struct percpu_buffer *buffer,
intptr_t offset;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
offset = RSEQ_READ_ONCE(buffer->c[cpu].offset);
if (offset == buffer->c[cpu].buflen)
break;
@@ -723,14 +785,9 @@ bool this_cpu_buffer_push(struct percpu_buffer *buffer,
targetptr_spec = (intptr_t *)&buffer->c[cpu].array[offset];
newval_final = offset + 1;
targetptr_final = &buffer->c[cpu].offset;
- if (opt_mb)
- ret = rseq_cmpeqv_trystorev_storev_release(
- targetptr_final, offset, targetptr_spec,
- newval_spec, newval_final, cpu);
- else
- ret = rseq_cmpeqv_trystorev_storev(targetptr_final,
- offset, targetptr_spec, newval_spec,
- newval_final, cpu);
+ ret = rseq_cmpeqv_trystorev_storev(opt_mo, RSEQ_PERCPU,
+ targetptr_final, offset, targetptr_spec,
+ newval_spec, newval_final, cpu);
if (rseq_likely(!ret)) {
result = true;
break;
@@ -753,7 +810,7 @@ struct percpu_buffer_node *this_cpu_buffer_pop(struct percpu_buffer *buffer,
intptr_t offset;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
/* Load offset with single-copy atomicity. */
offset = RSEQ_READ_ONCE(buffer->c[cpu].offset);
if (offset == 0) {
@@ -763,7 +820,8 @@ struct percpu_buffer_node *this_cpu_buffer_pop(struct percpu_buffer *buffer,
head = RSEQ_READ_ONCE(buffer->c[cpu].array[offset - 1]);
newval = offset - 1;
targetptr = (intptr_t *)&buffer->c[cpu].offset;
- ret = rseq_cmpeqv_cmpeqv_storev(targetptr, offset,
+ ret = rseq_cmpeqv_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr, offset,
(intptr_t *)&buffer->c[cpu].array[offset - 1],
(intptr_t)head, newval, cpu);
if (rseq_likely(!ret))
@@ -920,7 +978,7 @@ bool this_cpu_memcpy_buffer_push(struct percpu_memcpy_buffer *buffer,
size_t copylen;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
/* Load offset with single-copy atomicity. */
offset = RSEQ_READ_ONCE(buffer->c[cpu].offset);
if (offset == buffer->c[cpu].buflen)
@@ -931,15 +989,11 @@ bool this_cpu_memcpy_buffer_push(struct percpu_memcpy_buffer *buffer,
copylen = sizeof(item);
newval_final = offset + 1;
targetptr_final = &buffer->c[cpu].offset;
- if (opt_mb)
- ret = rseq_cmpeqv_trymemcpy_storev_release(
- targetptr_final, offset,
- destptr, srcptr, copylen,
- newval_final, cpu);
- else
- ret = rseq_cmpeqv_trymemcpy_storev(targetptr_final,
- offset, destptr, srcptr, copylen,
- newval_final, cpu);
+ ret = rseq_cmpeqv_trymemcpy_storev(
+ opt_mo, RSEQ_PERCPU,
+ targetptr_final, offset,
+ destptr, srcptr, copylen,
+ newval_final, cpu);
if (rseq_likely(!ret)) {
result = true;
break;
@@ -964,7 +1018,7 @@ bool this_cpu_memcpy_buffer_pop(struct percpu_memcpy_buffer *buffer,
size_t copylen;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
/* Load offset with single-copy atomicity. */
offset = RSEQ_READ_ONCE(buffer->c[cpu].offset);
if (offset == 0)
@@ -975,8 +1029,8 @@ bool this_cpu_memcpy_buffer_pop(struct percpu_memcpy_buffer *buffer,
copylen = sizeof(*item);
newval_final = offset - 1;
targetptr_final = &buffer->c[cpu].offset;
- ret = rseq_cmpeqv_trymemcpy_storev(targetptr_final,
- offset, destptr, srcptr, copylen,
+ ret = rseq_cmpeqv_trymemcpy_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr_final, offset, destptr, srcptr, copylen,
newval_final, cpu);
if (rseq_likely(!ret)) {
result = true;
@@ -1151,7 +1205,7 @@ static int set_signal_handler(void)
}
/* Test MEMBARRIER_CMD_PRIVATE_RESTART_RSEQ_ON_CPU membarrier command. */
-#ifdef RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+#ifdef TEST_MEMBARRIER
struct test_membarrier_thread_args {
int stop;
intptr_t percpu_list_ptr;
@@ -1178,9 +1232,10 @@ void *test_membarrier_worker_thread(void *arg)
int ret;
do {
- int cpu = rseq_cpu_start();
+ int cpu = get_current_cpu_id();
- ret = rseq_offset_deref_addv(&args->percpu_list_ptr,
+ ret = rseq_offset_deref_addv(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ &args->percpu_list_ptr,
sizeof(struct percpu_list_entry) * cpu, 1, cpu);
} while (rseq_unlikely(ret));
}
@@ -1217,11 +1272,6 @@ void test_membarrier_free_percpu_list(struct percpu_list *list)
free(list->c[i].head);
}
-static int sys_membarrier(int cmd, int flags, int cpu_id)
-{
- return syscall(__NR_membarrier, cmd, flags, cpu_id);
-}
-
/*
* The manager thread swaps per-cpu lists that worker threads see,
* and validates that there are no unexpected modifications.
@@ -1260,8 +1310,7 @@ void *test_membarrier_manager_thread(void *arg)
/* Make list_b "active". */
atomic_store(&args->percpu_list_ptr, (intptr_t)&list_b);
- if (sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
- MEMBARRIER_CMD_FLAG_CPU, cpu_a) &&
+ if (rseq_membarrier_expedited(cpu_a) &&
errno != ENXIO /* missing CPU */) {
perror("sys_membarrier");
abort();
@@ -1284,8 +1333,7 @@ void *test_membarrier_manager_thread(void *arg)
/* Make list_a "active". */
atomic_store(&args->percpu_list_ptr, (intptr_t)&list_a);
- if (sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ,
- MEMBARRIER_CMD_FLAG_CPU, cpu_b) &&
+ if (rseq_membarrier_expedited(cpu_b) &&
errno != ENXIO /* missing CPU*/) {
perror("sys_membarrier");
abort();
@@ -1356,7 +1404,7 @@ void test_membarrier(void)
abort();
}
}
-#else /* RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV */
+#else /* TEST_MEMBARRIER */
void test_membarrier(void)
{
fprintf(stderr, "rseq_offset_deref_addv is not implemented on this architecture. "
@@ -1513,7 +1561,7 @@ int main(int argc, char **argv)
verbose = 1;
break;
case 'M':
- opt_mb = 1;
+ opt_mo = RSEQ_MO_RELEASE;
break;
default:
show_usage(argc, argv);
@@ -1533,6 +1581,10 @@ int main(int argc, char **argv)
if (!opt_disable_rseq && rseq_register_current_thread())
goto error;
+ if (!opt_disable_rseq && !rseq_validate_cpu_id()) {
+ fprintf(stderr, "Error: cpu id getter unavailable\n");
+ goto error;
+ }
switch (opt_test) {
case 's':
printf_verbose("spinlock\n");
diff --git a/tools/testing/selftests/rseq/run_param_test.sh b/tools/testing/selftests/rseq/run_param_test.sh
index f51bc83c9e41..11b5424e8b78 100755
--- a/tools/testing/selftests/rseq/run_param_test.sh
+++ b/tools/testing/selftests/rseq/run_param_test.sh
@@ -42,6 +42,11 @@ function do_tests()
./param_test ${TEST_LIST[$i]} -r ${REPS} -t ${NR_THREADS} ${@} ${EXTRA_ARGS} || exit 1
echo "Running compare-twice test ${TEST_NAME[$i]}"
./param_test_compare_twice ${TEST_LIST[$i]} -r ${REPS} -t ${NR_THREADS} ${@} ${EXTRA_ARGS} || exit 1
+
+ echo "Running vm vcpu_id test ${TEST_NAME[$i]}"
+ ./param_test_vm_vcpu_id ${TEST_LIST[$i]} -r ${REPS} -t ${NR_THREADS} ${@} ${EXTRA_ARGS} || exit 1
+ echo "Running vm vcpu_id compare-twice test ${TEST_NAME[$i]}"
+ ./param_test_vm_vcpu_id_compare_twice ${TEST_LIST[$i]} -r ${REPS} -t ${NR_THREADS} ${@} ${EXTRA_ARGS} || exit 1
let "i++"
done
}
--
2.25.1
If a memory space has fewer threads than cores, or is restricted to
running on a few cores concurrently through sched affinity or cgroup
cpusets, the virtual cpu ids will be values close to 0, thus allowing
efficient use of user-space memory for per-cpu data structures.
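As an illustration (not part of this patch), per-vcpu user-space data
can then be sized by an application-chosen concurrency bound rather
than by the number of possible CPUs. A minimal sketch, where the bound
and helper names are hypothetical:

  #include <stdint.h>
  #include <linux/rseq.h>

  #define NR_CONCURRENT_THREADS 64  /* hypothetical application-chosen bound */

  struct percpu_counter {
          intptr_t count;
  } __attribute__((aligned(128)));  /* avoid false sharing */

  static struct percpu_counter counters[NR_CONCURRENT_THREADS];

  static void counter_inc(volatile struct rseq *rs)
  {
          /* vm_vcpu_id is updated by the kernel and stays close to 0. */
          uint32_t vcpu = rs->vm_vcpu_id;

          /* A real user would perform this inside an rseq critical section. */
          counters[vcpu].count++;
  }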
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
include/uapi/linux/rseq.h | 9 +++++++++
kernel/rseq.c | 11 ++++++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index 1cb90a435c5c..77a136586ac6 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -139,6 +139,15 @@ struct rseq {
*/
__u32 node_id;
+ /*
+ * Restartable sequences vm_vcpu_id field. Updated by the kernel. Read by
+ * user-space with single-copy atomicity semantics. This field should
+ * only be read by the thread which registered this data structure.
+ * Aligned on 32-bit. Contains the current thread's virtual CPU ID
+ * (allocated uniquely within a memory space).
+ */
+ __u32 vm_vcpu_id;
+
/*
* Flexible array member at end of structure, after last feature field.
*/
diff --git a/kernel/rseq.c b/kernel/rseq.c
index e21ad8929958..604c284a355c 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -90,12 +90,15 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
struct rseq __user *rseq = t->rseq;
u32 cpu_id = raw_smp_processor_id();
u32 node_id = cpu_to_node(cpu_id);
+ u32 vm_vcpu_id = task_mm_vcpu_id(t);
+ WARN_ON_ONCE((int) vm_vcpu_id < 0);
if (!user_write_access_begin(rseq, t->rseq_len))
goto efault;
unsafe_put_user(cpu_id, &rseq->cpu_id_start, efault_end);
unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
unsafe_put_user(node_id, &rseq->node_id, efault_end);
+ unsafe_put_user(vm_vcpu_id, &rseq->vm_vcpu_id, efault_end);
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally updated only if
@@ -113,7 +116,8 @@ static int rseq_update_cpu_node_id(struct task_struct *t)
static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
{
- u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0;
+ u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0,
+ vm_vcpu_id = 0;
/*
* Reset cpu_id_start to its initial state (0).
@@ -132,6 +136,11 @@ static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
*/
if (put_user(node_id, &t->rseq->node_id))
return -EFAULT;
+ /*
+ * Reset vm_vcpu_id to its initial state (0).
+ */
+ if (put_user(vm_vcpu_id, &t->rseq->vm_vcpu_id))
+ return -EFAULT;
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally reset only if
--
2.25.1
Allow finding the first or next bit within two input cpumasks which is
either:
- zero in both masks (notandnot),
- one in the first mask and zero in the second (andnot),
as sketched in the usage example below.
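For instance, a minimal usage sketch (mask names are illustrative):

  /* Walk the CPUs set in @cand but cleared in @busy, i.e. *cand & ~*busy. */
  unsigned int cpu;

  for (cpu = cpumask_first_andnot(cand, busy);
       cpu < nr_cpu_ids;
       cpu = cpumask_next_andnot(cpu, cand, busy)) {
          /* @cpu is a candidate CPU which is not currently busy. */
  }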
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
include/linux/cpumask.h | 60 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index c2aa0aa26b45..271bccc0a6d7 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -153,6 +153,32 @@ unsigned int cpumask_first_and(const struct cpumask *srcp1, const struct cpumask
return find_first_and_bit(cpumask_bits(srcp1), cpumask_bits(srcp2), nr_cpumask_bits);
}
+/**
+ * cpumask_first_andnot - return the first cpu from *srcp1 & ~*srcp2
+ * @srcp1: the first input
+ * @srcp2: the second input
+ *
+ * Returns >= nr_cpu_ids if no cpus match in both.
+ */
+static inline
+unsigned int cpumask_first_andnot(const struct cpumask *srcp1, const struct cpumask *srcp2)
+{
+ return find_first_andnot_bit(cpumask_bits(srcp1), cpumask_bits(srcp2), nr_cpumask_bits);
+}
+
+/**
+ * cpumask_first_notandnot - return the first cpu from ~*srcp1 & ~*srcp2
+ * @srcp1: the first input
+ * @srcp2: the second input
+ *
+ * Returns >= nr_cpu_ids if no cpus match in both.
+ */
+static inline
+unsigned int cpumask_first_notandnot(const struct cpumask *srcp1, const struct cpumask *srcp2)
+{
+ return find_first_notandnot_bit(cpumask_bits(srcp1), cpumask_bits(srcp2), nr_cpumask_bits);
+}
+
/**
* cpumask_last - get the last CPU in a cpumask
* @srcp: - the cpumask pointer
@@ -195,6 +221,40 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
}
+/**
+ * cpumask_next_andnot - return the next cpu from *srcp1 & ~*srcp2
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @srcp1: the first input
+ * @srcp2: the second input
+ *
+ * Returns >= nr_cpu_ids if no cpus match in both.
+ */
+static inline
+unsigned int cpumask_next_andnot(int n, const struct cpumask *srcp1, const struct cpumask *srcp2)
+{
+ /* -1 is a legal arg here. */
+ if (n != -1)
+ cpumask_check(n);
+ return find_next_andnot_bit(cpumask_bits(srcp1), cpumask_bits(srcp2), nr_cpumask_bits, n+1);
+}
+
+/**
+ * cpumask_next_notandnot - return the next cpu from ~*srcp1 & ~*srcp2
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @srcp1: the first input
+ * @srcp2: the second input
+ *
+ * Returns >= nr_cpu_ids if no cpus match in both.
+ */
+static inline
+unsigned int cpumask_next_notandnot(int n, const struct cpumask *srcp1, const struct cpumask *srcp2)
+{
+ /* -1 is a legal arg here. */
+ if (n != -1)
+ cpumask_check(n);
+ return find_next_notandnot_bit(cpumask_bits(srcp1), cpumask_bits(srcp2), nr_cpumask_bits, n+1);
+}
+
#if NR_CPUS == 1
/* Uniprocessor: there is only one valid CPU */
static inline unsigned int cpumask_local_spread(unsigned int i, int node)
--
2.25.1
Use the ELF auxiliary vector AT_RSEQ_FEATURE_SIZE to detect the RSEQ
features supported by the kernel.
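For reference, the detection amounts to the following sketch (the
selftests implement it in get_rseq_feature_size(); 20 bytes is the
original struct rseq feature size, and the function name below is
illustrative):

  #include <sys/auxv.h>
  #include <linux/auxvec.h>

  static unsigned int detect_rseq_feature_size(void)
  {
          unsigned long feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);

          /* Kernels without AT_RSEQ_FEATURE_SIZE return 0. */
          if (!feature_size)
                  return 20;
          return feature_size;
  }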
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/rseq-abi.h | 5 ++
tools/testing/selftests/rseq/rseq.c | 68 ++++++++++++++++++++++---
tools/testing/selftests/rseq/rseq.h | 18 +++++--
3 files changed, 79 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/rseq/rseq-abi.h b/tools/testing/selftests/rseq/rseq-abi.h
index a8c44d9af71f..00ac846d85b0 100644
--- a/tools/testing/selftests/rseq/rseq-abi.h
+++ b/tools/testing/selftests/rseq/rseq-abi.h
@@ -146,6 +146,11 @@ struct rseq_abi {
* this thread.
*/
__u32 flags;
+
+ /*
+ * Flexible array member at end of structure, after last feature field.
+ */
+ char end[];
} __attribute__((aligned(4 * sizeof(__u64))));
#endif /* _RSEQ_ABI_H */
diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
index 4177f9507bbe..c1c691eca904 100644
--- a/tools/testing/selftests/rseq/rseq.c
+++ b/tools/testing/selftests/rseq/rseq.c
@@ -28,6 +28,8 @@
#include <limits.h>
#include <dlfcn.h>
#include <stddef.h>
+#include <sys/auxv.h>
+#include <linux/auxvec.h>
#include "../kselftest.h"
#include "rseq.h"
@@ -36,20 +38,38 @@ static const ptrdiff_t *libc_rseq_offset_p;
static const unsigned int *libc_rseq_size_p;
static const unsigned int *libc_rseq_flags_p;
-/* Offset from the thread pointer to the rseq area. */
+/* Offset from the thread pointer to the rseq area. */
ptrdiff_t rseq_offset;
-/* Size of the registered rseq area. 0 if the registration was
- unsuccessful. */
+/*
+ * Size of the registered rseq area. 0 if the registration was
+ * unsuccessful.
+ */
unsigned int rseq_size = -1U;
/* Flags used during rseq registration. */
unsigned int rseq_flags;
+/*
+ * rseq feature size supported by the kernel. 0 if the registration was
+ * unsuccessful.
+ */
+unsigned int rseq_feature_size = -1U;
+
static int rseq_ownership;
+static int rseq_reg_success;	/* At least one rseq registration has succeeded. */
+
+/* Allocate a large area for the TLS. */
+#define RSEQ_THREAD_AREA_ALLOC_SIZE 1024
+
+/* Original struct rseq feature size is 20 bytes. */
+#define ORIG_RSEQ_FEATURE_SIZE 20
+
+/* Original struct rseq allocation size is 32 bytes. */
+#define ORIG_RSEQ_ALLOC_SIZE 32
static
-__thread struct rseq_abi __rseq_abi __attribute__((tls_model("initial-exec"))) = {
+__thread struct rseq_abi __rseq_abi __attribute__((tls_model("initial-exec"), aligned(RSEQ_THREAD_AREA_ALLOC_SIZE))) = {
.cpu_id = RSEQ_ABI_CPU_ID_UNINITIALIZED,
};
@@ -84,10 +104,18 @@ int rseq_register_current_thread(void)
/* Treat libc's ownership as a successful registration. */
return 0;
}
- rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), 0, RSEQ_SIG);
- if (rc)
+ rc = sys_rseq(&__rseq_abi, rseq_size, 0, RSEQ_SIG);
+ if (rc) {
+ if (RSEQ_READ_ONCE(rseq_reg_success)) {
+ /* Incoherent success/failure within process. */
+ abort();
+ }
+ rseq_size = 0;
+ rseq_feature_size = 0;
return -1;
+ }
assert(rseq_current_cpu_raw() >= 0);
+ RSEQ_WRITE_ONCE(rseq_reg_success, 1);
return 0;
}
@@ -99,12 +127,28 @@ int rseq_unregister_current_thread(void)
/* Treat libc's ownership as a successful unregistration. */
return 0;
}
- rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG);
+ rc = sys_rseq(&__rseq_abi, rseq_size, RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG);
if (rc)
return -1;
return 0;
}
+static
+unsigned int get_rseq_feature_size(void)
+{
+ unsigned long auxv_rseq_feature_size, auxv_rseq_align;
+
+ auxv_rseq_align = getauxval(AT_RSEQ_ALIGN);
+ assert(!auxv_rseq_align || auxv_rseq_align <= RSEQ_THREAD_AREA_ALLOC_SIZE);
+
+ auxv_rseq_feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
+ assert(!auxv_rseq_feature_size || auxv_rseq_feature_size <= RSEQ_THREAD_AREA_ALLOC_SIZE);
+ if (auxv_rseq_feature_size)
+ return auxv_rseq_feature_size;
+ else
+ return ORIG_RSEQ_FEATURE_SIZE;
+}
+
static __attribute__((constructor))
void rseq_init(void)
{
@@ -117,14 +161,21 @@ void rseq_init(void)
rseq_offset = *libc_rseq_offset_p;
rseq_size = *libc_rseq_size_p;
rseq_flags = *libc_rseq_flags_p;
+ rseq_feature_size = get_rseq_feature_size();
+ if (rseq_feature_size > rseq_size)
+ rseq_feature_size = rseq_size;
return;
}
if (!rseq_available())
return;
rseq_ownership = 1;
rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
- rseq_size = sizeof(struct rseq_abi);
rseq_flags = 0;
+ rseq_feature_size = get_rseq_feature_size();
+ if (rseq_feature_size == ORIG_RSEQ_FEATURE_SIZE)
+ rseq_size = ORIG_RSEQ_ALLOC_SIZE;
+ else
+ rseq_size = RSEQ_THREAD_AREA_ALLOC_SIZE;
}
static __attribute__((destructor))
@@ -134,6 +185,7 @@ void rseq_exit(void)
return;
rseq_offset = 0;
rseq_size = -1U;
+ rseq_feature_size = -1U;
rseq_ownership = 0;
}
diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
index 6f7513384bf5..95adc1e1b0db 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -47,14 +47,24 @@
#include "rseq-thread-pointer.h"
-/* Offset from the thread pointer to the rseq area. */
+/* Offset from the thread pointer to the rseq area. */
extern ptrdiff_t rseq_offset;
-/* Size of the registered rseq area. 0 if the registration was
- unsuccessful. */
+
+/*
+ * Size of the registered rseq area. 0 if the registration was
+ * unsuccessful.
+ */
extern unsigned int rseq_size;
-/* Flags used during rseq registration. */
+
+/* Flags used during rseq registration. */
extern unsigned int rseq_flags;
+/*
+ * rseq feature size supported by the kernel. 0 if the registration was
+ * unsuccessful.
+ */
+extern unsigned int rseq_feature_size;
+
static inline struct rseq_abi *rseq_get_abi(void)
{
return (struct rseq_abi *) ((uintptr_t) rseq_thread_pointer() + rseq_offset);
--
2.25.1
Introduce a rseq-arm64-bits.h template header which is internally
included to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
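For illustration, a simplified sketch of the expansion mechanism; the
helper and suffix names below are assumed, see rseq-bits-template.h for
the actual definitions:

  #define RSEQ__CAT2(a, b)        a##b
  #define RSEQ__CAT(a, b)         RSEQ__CAT2(a, b)

  #ifdef RSEQ_TEMPLATE_CPU_ID
  # define RSEQ_TEMPLATE_CPU_ID_FIELD     cpu_id          /* asm compares rseq->cpu_id */
  # define RSEQ_TEMPLATE_INDEX_SUFFIX     _cpu_id
  #elif defined(RSEQ_TEMPLATE_VM_VCPU_ID)
  # define RSEQ_TEMPLATE_CPU_ID_FIELD     vm_vcpu_id      /* asm compares rseq->vm_vcpu_id */
  # define RSEQ_TEMPLATE_INDEX_SUFFIX     _vm_vcpu_id
  #endif

  #ifdef RSEQ_TEMPLATE_MO_RELAXED
  # define RSEQ_TEMPLATE_MO_SUFFIX        _relaxed
  #elif defined(RSEQ_TEMPLATE_MO_RELEASE)
  # define RSEQ_TEMPLATE_MO_SUFFIX        _release
  #endif

  /* e.g. rseq_addv expands to rseq_addv_relaxed_cpu_id. */
  #define RSEQ_TEMPLATE_IDENTIFIER(x) \
          RSEQ__CAT(RSEQ__CAT(x, RSEQ_TEMPLATE_MO_SUFFIX), RSEQ_TEMPLATE_INDEX_SUFFIX)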
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
.../testing/selftests/rseq/rseq-arm64-bits.h | 392 +++++++++++++
tools/testing/selftests/rseq/rseq-arm64.h | 516 +-----------------
2 files changed, 422 insertions(+), 486 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-arm64-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-arm64-bits.h b/tools/testing/selftests/rseq/rseq-arm64-bits.h
new file mode 100644
index 000000000000..2bc8828f3547
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-arm64-bits.h
@@ -0,0 +1,392 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-arm64-bits.h
+ *
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2018 - Will Deacon <[email protected]>
+ */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "Qo" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPNE(v, expectnot, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ RSEQ_ASM_OP_CMPNE(v, expectnot, %l[error2])
+#endif
+ RSEQ_ASM_OP_R_LOAD(v)
+ RSEQ_ASM_OP_R_STORE(load)
+ RSEQ_ASM_OP_R_LOAD_OFF(voffp)
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "Qo" (*v),
+ [expectnot] "r" (expectnot),
+ [load] "Qo" (*load),
+ [voffp] "r" (voffp)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+#endif
+ RSEQ_ASM_OP_R_LOAD(v)
+ RSEQ_ASM_OP_R_ADD(count)
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "Qo" (*v),
+ [count] "r" (count)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error3])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[cmpfail])
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[error3])
+#endif
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "Qo" (*v),
+ [expect] "r" (expect),
+ [v2] "Qo" (*v2),
+ [expect2] "r" (expect2),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ RSEQ_ASM_OP_STORE(newv2, v2)
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
+#else
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+#endif
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [expect] "r" (expect),
+ [v] "Qo" (*v),
+ [newv] "r" (newv),
+ [v2] "Qo" (*v2),
+ [newv2] "r" (newv2)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
+#else
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+#endif
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "Qo" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [expect] "r" (expect),
+ [v] "Qo" (*v),
+ [newv] "r" (newv),
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG, RSEQ_ASM_TMP_REG_2
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h
index 49c387fcd868..7aba23cc486c 100644
--- a/tools/testing/selftests/rseq/rseq-arm64.h
+++ b/tools/testing/selftests/rseq/rseq-arm64.h
@@ -2,7 +2,7 @@
/*
* rseq-arm64.h
*
- * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
* (C) Copyright 2018 - Will Deacon <[email protected]>
*/
@@ -200,490 +200,34 @@ do { \
" cbnz " RSEQ_ASM_TMP_REG_2 ", 222b\n" \
"333:\n"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "Qo" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPNE(v, expectnot, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPNE(v, expectnot, %l[error2])
-#endif
- RSEQ_ASM_OP_R_LOAD(v)
- RSEQ_ASM_OP_R_STORE(load)
- RSEQ_ASM_OP_R_LOAD_OFF(voffp)
- RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "Qo" (*v),
- [expectnot] "r" (expectnot),
- [load] "Qo" (*load),
- [voffp] "r" (voffp)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* Per-cpu-id indexing. */
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm64-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
-#endif
- RSEQ_ASM_OP_R_LOAD(v)
- RSEQ_ASM_OP_R_ADD(count)
- RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "Qo" (*v),
- [count] "r" (count)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "Qo" (*v),
- [newv] "r" (newv),
- [v2] "Qo" (*v2),
- [newv2] "r" (newv2)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "Qo" (*v),
- [newv] "r" (newv),
- [v2] "Qo" (*v2),
- [newv2] "r" (newv2)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error3])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[cmpfail])
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
- RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[error3])
-#endif
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "Qo" (*v),
- [expect] "r" (expect),
- [v2] "Qo" (*v2),
- [expect2] "r" (expect2),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "Qo" (*v),
- [newv] "r" (newv),
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG, RSEQ_ASM_TMP_REG_2
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, %l[error2])
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "Qo" (*v),
- [newv] "r" (newv),
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG, RSEQ_ASM_TMP_REG_2
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-arm64-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
+
+/* Per-vm-vcpu-id indexing. */
+
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm64-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-arm64-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
+
+/* APIs which are not based on cpu ids. */
+
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-arm64-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Add the mm_vcpu_id field to the rseq_update event, allowing tracers to
follow which vcpu_id is observed by user-space, and to detect negative
vcpu_id values which would indicate internal scheduler implementation
issues.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
Changes since v4:
- use task_mm_vcpu_id() to get the mm_vcpu_id from the task struct.
---
include/trace/events/rseq.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/trace/events/rseq.h b/include/trace/events/rseq.h
index dde7a359b4ef..9106148227c0 100644
--- a/include/trace/events/rseq.h
+++ b/include/trace/events/rseq.h
@@ -17,14 +17,17 @@ TRACE_EVENT(rseq_update,
TP_STRUCT__entry(
__field(s32, cpu_id)
__field(s32, node_id)
+ __field(s32, mm_vcpu_id)
),
TP_fast_assign(
__entry->cpu_id = raw_smp_processor_id();
__entry->node_id = cpu_to_node(__entry->cpu_id);
+ __entry->mm_vcpu_id = task_mm_vcpu_id(t);
),
- TP_printk("cpu_id=%d node_id=%d", __entry->cpu_id, __entry->node_id)
+ TP_printk("cpu_id=%d node_id=%d mm_vcpu_id=%d", __entry->cpu_id,
+ __entry->node_id, __entry->mm_vcpu_id)
);
TRACE_EVENT(rseq_ip_fixup,
--
2.25.1
Introduce a rseq-ppc-bits.h template header which is internally included
to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Boqun Feng <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
tools/testing/selftests/rseq/rseq-ppc-bits.h | 454 ++++++++++++++
tools/testing/selftests/rseq/rseq-ppc.h | 611 +------------------
2 files changed, 486 insertions(+), 579 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-ppc-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-ppc-bits.h b/tools/testing/selftests/rseq/rseq-ppc-bits.h
new file mode 100644
index 000000000000..eed21b038aa1
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-ppc-bits.h
@@ -0,0 +1,454 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-ppc-bits.h
+ *
+ * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2018 - Boqun Feng <[email protected]>
+ */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ /* final store */
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ /* cmp @v not equal to @expectnot */
+ RSEQ_ASM_OP_CMPNE(v, expectnot, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ /* cmp @v not equal to @expectnot */
+ RSEQ_ASM_OP_CMPNE(v, expectnot, %l[error2])
+#endif
+ /* load the value of @v */
+ RSEQ_ASM_OP_R_LOAD(v)
+ /* store it in @load */
+ RSEQ_ASM_OP_R_STORE(load)
+ /* dereference voffp(v) */
+ RSEQ_ASM_OP_R_LOADX(voffp)
+ /* final store the value at voffp(v) */
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 2)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "b" (voffp),
+ [load] "m" (*load)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+#endif
+ /* load the value of @v */
+ RSEQ_ASM_OP_R_LOAD(v)
+ /* add @count to it */
+ RSEQ_ASM_OP_R_ADD(count)
+ /* final store */
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 2)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [count] "r" (count)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+ /* cmp @v2 equal to @expect2 */
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[cmpfail])
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+ /* cmp @v2 equal to @expect2 */
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[error3])
+#endif
+ /* final store */
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ /* try store */
+ RSEQ_ASM_OP_STORE(newv2, v2)
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ /* for 'release' */
+ "lwsync\n\t"
+#endif
+ /* final store */
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* setup for memcpy */
+ "mr %%r19, %[len]\n\t"
+ "mr %%r20, %[src]\n\t"
+ "mr %%r21, %[dst]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ /* cmp cpuid */
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ /* cmp @v equal to @expect */
+ RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
+#endif
+ /* try memcpy */
+ RSEQ_ASM_OP_R_MEMCPY()
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ /* for 'release' */
+ "lwsync\n\t"
+#endif
+ /* final store */
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r17", "r18", "r19", "r20", "r21"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h
index f82d95c1bb3f..78015fc52e72 100644
--- a/tools/testing/selftests/rseq/rseq-ppc.h
+++ b/tools/testing/selftests/rseq/rseq-ppc.h
@@ -2,7 +2,7 @@
/*
* rseq-ppc.h
*
- * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
* (C) Copyright 2016-2018 - Boqun Feng <[email protected]>
*/
@@ -205,581 +205,34 @@ do { \
RSEQ_STORE_LONG(var) "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" \
__rseq_str(post_commit_label) ":\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v not equal to @expectnot */
- RSEQ_ASM_OP_CMPNE(v, expectnot, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v not equal to @expectnot */
- RSEQ_ASM_OP_CMPNE(v, expectnot, %l[error2])
-#endif
- /* load the value of @v */
- RSEQ_ASM_OP_R_LOAD(v)
- /* store it in @load */
- RSEQ_ASM_OP_R_STORE(load)
- /* dereference voffp(v) */
- RSEQ_ASM_OP_R_LOADX(voffp)
- /* final store the value at voffp(v) */
- RSEQ_ASM_OP_R_FINAL_STORE(v, 2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "b" (voffp),
- [load] "m" (*load)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
-#endif
- /* load the value of @v */
- RSEQ_ASM_OP_R_LOAD(v)
- /* add @count to it */
- RSEQ_ASM_OP_R_ADD(count)
- /* final store */
- RSEQ_ASM_OP_R_FINAL_STORE(v, 2)
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [count] "r" (count)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- /* try store */
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- /* try store */
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- /* for 'release' */
- "lwsync\n\t"
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
- /* cmp @v2 equal to @expct2 */
- RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[cmpfail])
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
- /* cmp @v2 equal to @expct2 */
- RSEQ_ASM_OP_CMPEQ(v2, expect2, %l[error3])
-#endif
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* setup for mempcy */
- "mr %%r19, %[len]\n\t"
- "mr %%r20, %[src]\n\t"
- "mr %%r21, %[dst]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- /* try memcpy */
- RSEQ_ASM_OP_R_MEMCPY()
- RSEQ_INJECT_ASM(5)
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(6)
- /* teardown */
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17", "r18", "r19", "r20", "r21"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* setup for mempcy */
- "mr %%r19, %[len]\n\t"
- "mr %%r20, %[src]\n\t"
- "mr %%r21, %[dst]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[cmpfail])
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- /* cmp cpuid */
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- /* cmp @v equal to @expect */
- RSEQ_ASM_OP_CMPEQ(v, expect, %l[error2])
-#endif
- /* try memcpy */
- RSEQ_ASM_OP_R_MEMCPY()
- RSEQ_INJECT_ASM(5)
- /* for 'release' */
- "lwsync\n\t"
- /* final store */
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 2)
- RSEQ_INJECT_ASM(6)
- /* teardown */
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r17", "r18", "r19", "r20", "r21"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* Per-cpu-id indexing. */
+
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-ppc-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-ppc-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
+
+/* Per-vm-vcpu-id indexing. */
+
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-ppc-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-ppc-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
+
+/* APIs which are not based on cpu ids. */
+
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-ppc-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Introduce a rseq-s390-bits.h template header which is internally included
to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
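For reference, the template machinery reduces to preprocessor token
pasting. A minimal sketch of what rseq-bits-template.h provides is shown
below; the RSEQ_TEMPLATE_SUFFIX/RSEQ__COMBINE_TOKENS helpers and the
resulting function suffixes are illustrative assumptions, not the
literal definitions:

  /* Illustrative sketch only. */
  #define RSEQ__COMBINE_TOKENS1(a, b)   a##b
  #define RSEQ__COMBINE_TOKENS(a, b)    RSEQ__COMBINE_TOKENS1(a, b)

  #if defined(RSEQ_TEMPLATE_CPU_ID) && defined(RSEQ_TEMPLATE_MO_RELAXED)
  # define RSEQ_TEMPLATE_CPU_ID_FIELD   cpu_id
  # define RSEQ_TEMPLATE_SUFFIX         _relaxed_cpu_id
  #elif defined(RSEQ_TEMPLATE_VM_VCPU_ID) && defined(RSEQ_TEMPLATE_MO_RELAXED)
  # define RSEQ_TEMPLATE_CPU_ID_FIELD   vm_vcpu_id
  # define RSEQ_TEMPLATE_SUFFIX         _relaxed_vm_vcpu_id
  #endif

  /* Suffix each templated function with the selected variant. */
  #define RSEQ_TEMPLATE_IDENTIFIER(x)   RSEQ__COMBINE_TOKENS(x, RSEQ_TEMPLATE_SUFFIX)

Each inclusion of rseq-s390-bits.h then emits one set of functions, e.g.
RSEQ_TEMPLATE_IDENTIFIER(rseq_addv) becomes rseq_addv_relaxed_cpu_id and
reads the cpu_id field of struct rseq; the vm_vcpu_id and release
variants are generated by the subsequent inclusions.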
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
tools/testing/selftests/rseq/rseq-s390-bits.h | 474 +++++++++++++++++
tools/testing/selftests/rseq/rseq-s390.h | 490 +-----------------
2 files changed, 498 insertions(+), 466 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-s390-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-s390-bits.h b/tools/testing/selftests/rseq/rseq-s390-bits.h
new file mode 100644
index 000000000000..77cc0a32b604
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-s390-bits.h
@@ -0,0 +1,474 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+/*
+ * Compare @v against @expectnot. When it does _not_ match, load @v
+ * into @load, and store the content of *@v + voffp into @v.
+ */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_L " %%r1, %[v]\n\t"
+ LONG_CMP_R " %%r1, %[expectnot]\n\t"
+ "je %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_L " %%r1, %[v]\n\t"
+ LONG_CMP_R " %%r1, %[expectnot]\n\t"
+ "je %l[error2]\n\t"
+#endif
+ LONG_S " %%r1, %[load]\n\t"
+ LONG_ADD_R " %%r1, %[voffp]\n\t"
+ LONG_L " %%r1, 0(%%r1)\n\t"
+ /* final store */
+ LONG_S " %%r1, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "r" (voffp),
+ [load] "m" (*load)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0", "r1"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+#endif
+ LONG_L " %%r0, %[v]\n\t"
+ LONG_ADD_R " %%r0, %[count]\n\t"
+ /* final store */
+ LONG_S " %%r0, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [count] "r" (count)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+ LONG_CMP " %[expect2], %[v2]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[error2]\n\t"
+ LONG_CMP " %[expect2], %[v2]\n\t"
+ "jnz %l[error3]\n\t"
+#endif
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+/* s390 is TSO. */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* try store */
+ LONG_S " %[newv2], %[v2]\n\t"
+ RSEQ_INJECT_ASM(5)
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+/* s390 is TSO. */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ uint64_t rseq_scratch[3];
+
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ LONG_S " %[src], %[rseq_scratch0]\n\t"
+ LONG_S " %[dst], %[rseq_scratch1]\n\t"
+ LONG_S " %[len], %[rseq_scratch2]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz 5f\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
+ LONG_CMP " %[expect], %[v]\n\t"
+ "jnz 7f\n\t"
+#endif
+ /* try memcpy */
+ LONG_LT_R " %[len], %[len]\n\t"
+ "jz 333f\n\t"
+ "222:\n\t"
+ "ic %%r0,0(%[src])\n\t"
+ "stc %%r0,0(%[dst])\n\t"
+ LONG_ADDI " %[src], 1\n\t"
+ LONG_ADDI " %[dst], 1\n\t"
+ LONG_ADDI " %[len], -1\n\t"
+ "jnz 222b\n\t"
+ "333:\n\t"
+ RSEQ_INJECT_ASM(5)
+ /* final store */
+ LONG_S " %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t"
+ RSEQ_ASM_DEFINE_ABORT(4,
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ abort)
+ RSEQ_ASM_DEFINE_CMPFAIL(5,
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ cmpfail)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_CMPFAIL(6,
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ error1)
+ RSEQ_ASM_DEFINE_CMPFAIL(7,
+ LONG_L " %[len], %[rseq_scratch2]\n\t"
+ LONG_L " %[dst], %[rseq_scratch1]\n\t"
+ LONG_L " %[src], %[rseq_scratch0]\n\t",
+ error2)
+#endif
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len),
+ [rseq_scratch0] "m" (rseq_scratch[0]),
+ [rseq_scratch1] "m" (rseq_scratch[1]),
+ [rseq_scratch2] "m" (rseq_scratch[2])
+ RSEQ_INJECT_INPUT
+ : "memory", "cc", "r0"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h
index 4d3286453bbf..72c89a9b4098 100644
--- a/tools/testing/selftests/rseq/rseq-s390.h
+++ b/tools/testing/selftests/rseq/rseq-s390.h
@@ -130,476 +130,34 @@ do { \
"jg %l[" __rseq_str(cmpfail_label) "]\n\t" \
".popsection\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+/* Per-cpu-id indexing. */
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/*
- * Compare @v against @expectnot. When it does _not_ match, load @v
- * into @load, and store the content of *@v + voffp into @v.
- */
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_L " %%r1, %[v]\n\t"
- LONG_CMP_R " %%r1, %[expectnot]\n\t"
- "je %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_L " %%r1, %[v]\n\t"
- LONG_CMP_R " %%r1, %[expectnot]\n\t"
- "je %l[error2]\n\t"
-#endif
- LONG_S " %%r1, %[load]\n\t"
- LONG_ADD_R " %%r1, %[voffp]\n\t"
- LONG_L " %%r1, 0(%%r1)\n\t"
- /* final store */
- LONG_S " %%r1, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "r" (voffp),
- [load] "m" (*load)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0", "r1"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
-#endif
- LONG_L " %%r0, %[v]\n\t"
- LONG_ADD_R " %%r0, %[count]\n\t"
- /* final store */
- LONG_S " %%r0, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [count] "r" (count)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-s390-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-s390-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* try store */
- LONG_S " %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* Per-vm-vcpu-id indexing. */
-/* s390 is TSO. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- return rseq_cmpeqv_trystorev_storev(v, expect, v2, newv2, newv, cpu);
-}
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-s390-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-s390-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
- LONG_CMP " %[expect2], %[v2]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, %l[error1])
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz %l[error2]\n\t"
- LONG_CMP " %[expect2], %[v2]\n\t"
- "jnz %l[error3]\n\t"
-#endif
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint64_t rseq_scratch[3];
-
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- LONG_S " %[src], %[rseq_scratch0]\n\t"
- LONG_S " %[dst], %[rseq_scratch1]\n\t"
- LONG_S " %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 6f)
- LONG_CMP " %[expect], %[v]\n\t"
- "jnz 7f\n\t"
-#endif
- /* try memcpy */
- LONG_LT_R " %[len], %[len]\n\t"
- "jz 333f\n\t"
- "222:\n\t"
- "ic %%r0,0(%[src])\n\t"
- "stc %%r0,0(%[dst])\n\t"
- LONG_ADDI " %[src], 1\n\t"
- LONG_ADDI " %[dst], 1\n\t"
- LONG_ADDI " %[len], -1\n\t"
- "jnz 222b\n\t"
- "333:\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- LONG_S " %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t"
- RSEQ_ASM_DEFINE_ABORT(4,
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- abort)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- LONG_L " %[len], %[rseq_scratch2]\n\t"
- LONG_L " %[dst], %[rseq_scratch1]\n\t"
- LONG_L " %[src], %[rseq_scratch0]\n\t",
- error2)
-#endif
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- RSEQ_INJECT_INPUT
- : "memory", "cc", "r0"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* APIs which are not based on cpu ids. */
-/* s390 is TSO. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- return rseq_cmpeqv_trymemcpy_storev(v, expect, dst, src, len,
- newv, cpu);
-}
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-s390-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Introduce the extensible rseq ABI, where the feature size supported by
the kernel and the required alignment are communicated to user-space
through ELF auxiliary vectors.
This allows user-space to perform rseq registration with a rseq_len of
either 32 bytes (the original struct rseq size, which includes padding)
or larger.
If rseq_len is larger than 32 bytes, then it must be large enough to
contain the feature size communicated to user-space through ELF
auxiliary vectors.
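For illustration, a user-space library can probe the new auxiliary
vector entries before deciding which rseq_len to pass to registration.
A minimal sketch, assuming a libc exposing getauxval() and kernel
headers providing the AT_RSEQ_FEATURE_SIZE and AT_RSEQ_ALIGN constants
added by this series:

  #include <stdio.h>
  #include <sys/auxv.h>      /* getauxval() */
  #include <linux/auxvec.h>  /* AT_RSEQ_FEATURE_SIZE, AT_RSEQ_ALIGN */

  int main(void)
  {
          unsigned long feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
          unsigned long align = getauxval(AT_RSEQ_ALIGN);

          if (!feature_size) {
                  /* Older kernel: register with the original 32-byte rseq_len. */
                  printf("extensible rseq not advertised by the kernel\n");
                  return 0;
          }
          /*
           * Registration must pass either rseq_len == 32 (original ABI), or an
           * rseq_len large enough to contain feature_size bytes, with the rseq
           * area aligned as advertised by AT_RSEQ_ALIGN.
           */
          printf("rseq feature size: %lu, required alignment: %lu\n",
                 feature_size, align);
          return 0;
  }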
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
Changes since v4:
- Accept original rseq alignment for original rseq size.
---
include/linux/sched.h | 4 ++++
kernel/ptrace.c | 2 +-
kernel/rseq.c | 37 ++++++++++++++++++++++++++++++-------
3 files changed, 35 insertions(+), 8 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 23de7fe86cc4..2a9e14e3e668 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1305,6 +1305,7 @@ struct task_struct {
#ifdef CONFIG_RSEQ
struct rseq __user *rseq;
+ u32 rseq_len;
u32 rseq_sig;
/*
* RmW on rseq_event_mask must be performed atomically
@@ -2355,10 +2356,12 @@ static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags)
{
if (clone_flags & CLONE_VM) {
t->rseq = NULL;
+ t->rseq_len = 0;
t->rseq_sig = 0;
t->rseq_event_mask = 0;
} else {
t->rseq = current->rseq;
+ t->rseq_len = current->rseq_len;
t->rseq_sig = current->rseq_sig;
t->rseq_event_mask = current->rseq_event_mask;
}
@@ -2367,6 +2370,7 @@ static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags)
static inline void rseq_execve(struct task_struct *t)
{
t->rseq = NULL;
+ t->rseq_len = 0;
t->rseq_sig = 0;
t->rseq_event_mask = 0;
}
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 54482193e1ed..0786450074c1 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -813,7 +813,7 @@ static long ptrace_get_rseq_configuration(struct task_struct *task,
{
struct ptrace_rseq_configuration conf = {
.rseq_abi_pointer = (u64)(uintptr_t)task->rseq,
- .rseq_abi_size = sizeof(*task->rseq),
+ .rseq_abi_size = task->rseq_len,
.signature = task->rseq_sig,
.flags = 0,
};
diff --git a/kernel/rseq.c b/kernel/rseq.c
index bda8175f8f99..c1058b3f10ac 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -18,6 +18,9 @@
#define CREATE_TRACE_POINTS
#include <trace/events/rseq.h>
+/* The original rseq structure size (including padding) is 32 bytes. */
+#define ORIG_RSEQ_SIZE 32
+
#define RSEQ_CS_NO_RESTART_FLAGS (RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT | \
RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL | \
RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE)
@@ -87,10 +90,15 @@ static int rseq_update_cpu_id(struct task_struct *t)
u32 cpu_id = raw_smp_processor_id();
struct rseq __user *rseq = t->rseq;
- if (!user_write_access_begin(rseq, sizeof(*rseq)))
+ if (!user_write_access_begin(rseq, t->rseq_len))
goto efault;
unsafe_put_user(cpu_id, &rseq->cpu_id_start, efault_end);
unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
+ /*
+ * Additional feature fields added after ORIG_RSEQ_SIZE
+ * need to be conditionally updated only if
+ * t->rseq_len != ORIG_RSEQ_SIZE.
+ */
user_write_access_end();
trace_rseq_update(t);
return 0;
@@ -117,6 +125,11 @@ static int rseq_reset_rseq_cpu_id(struct task_struct *t)
*/
if (put_user(cpu_id, &t->rseq->cpu_id))
return -EFAULT;
+ /*
+ * Additional feature fields added after ORIG_RSEQ_SIZE
+ * need to be conditionally reset only if
+ * t->rseq_len != ORIG_RSEQ_SIZE.
+ */
return 0;
}
@@ -329,7 +342,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
/* Unregister rseq for current thread. */
if (current->rseq != rseq || !current->rseq)
return -EINVAL;
- if (rseq_len != sizeof(*rseq))
+ if (rseq_len != current->rseq_len)
return -EINVAL;
if (current->rseq_sig != sig)
return -EPERM;
@@ -338,6 +351,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
return ret;
current->rseq = NULL;
current->rseq_sig = 0;
+ current->rseq_len = 0;
return 0;
}
@@ -350,7 +364,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
* the provided address differs from the prior
* one.
*/
- if (current->rseq != rseq || rseq_len != sizeof(*rseq))
+ if (current->rseq != rseq || rseq_len != current->rseq_len)
return -EINVAL;
if (current->rseq_sig != sig)
return -EPERM;
@@ -359,15 +373,24 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
}
/*
- * If there was no rseq previously registered,
- * ensure the provided rseq is properly aligned and valid.
+ * If there was no rseq previously registered, ensure the provided rseq
+ * is properly aligned, as communicated to user-space through the ELF
+ * auxiliary vector AT_RSEQ_ALIGN. If rseq_len is the original rseq
+ * size, the required alignment is the original struct rseq alignment.
+ *
+ * In order to be valid, rseq_len is either the original rseq size, or
+ * large enough to contain all supported fields, as communicated to
+ * user-space through the ELF auxiliary vector AT_RSEQ_FEATURE_SIZE.
*/
- if (!IS_ALIGNED((unsigned long)rseq, __alignof__(*rseq)) ||
- rseq_len != sizeof(*rseq))
+ if (rseq_len < ORIG_RSEQ_SIZE ||
+ (rseq_len == ORIG_RSEQ_SIZE && !IS_ALIGNED((unsigned long)rseq, ORIG_RSEQ_SIZE)) ||
+ (rseq_len != ORIG_RSEQ_SIZE && (!IS_ALIGNED((unsigned long)rseq, __alignof__(*rseq)) ||
+ rseq_len < offsetof(struct rseq, end))))
return -EINVAL;
if (!access_ok(rseq, rseq_len))
return -EFAULT;
current->rseq = rseq;
+ current->rseq_len = rseq_len;
current->rseq_sig = sig;
/*
* If rseq was previously inactive, and has just been
--
2.25.1
On all architectures except Power, the NUMA topology is never
reconfigured after a CPU has been associated with a NUMA node; the
association holds for the entire system lifetime.
Even on Power, we can assume that NUMA topology reconfiguration happens
rarely, and therefore we do not expect it to happen while the NUMA test
is running.
This test validates that the mapping between a vm_vcpu_id and a numa
node id remains valid for the process lifetime. In other words, it
validates that if any thread within the process running on behalf of a
vm_vcpu_id N observes a NUMA node id M, all threads within this process
will always observe the same NUMA node id value when running on behalf
of that same vm_vcpu_id.
This characteristic is important for NUMA locality.
This test is skipped on architectures that do not implement
rseq_load_u32_u32.
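As an illustration (not part of this patch), a user-space consumer of this
invariant could look like the following sketch, where numa_pool, struct slot
and pick_local_slot() are hypothetical names and RSEQ_ARCH_HAS_LOAD_U32_U32
is assumed to be defined:

	#include <stdint.h>
	#include "rseq.h"

	struct slot { char data[128]; };	/* hypothetical per-vcpu data */
	extern struct slot *numa_pool[];	/* hypothetical: one slot array per NUMA node */

	static struct slot *pick_local_slot(void)
	{
		uint32_t vcpu, node;

		/* Load both fields within one rseq critical section so they
		 * are observed consistently for the current thread. */
		while (rseq_load_u32_u32(RSEQ_MO_RELAXED,
					 &vcpu, &rseq_get_abi()->vm_vcpu_id,
					 &node, &rseq_get_abi()->node_id) != 0) {
			/* Aborted by preemption/signal/migration: retry. */
		}
		/* The invariant validated by this test guarantees a stable
		 * node id for a given vcpu, so the placement never goes stale. */
		return &numa_pool[node][vcpu];
	}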
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/.gitignore | 1 +
tools/testing/selftests/rseq/Makefile | 2 +-
.../testing/selftests/rseq/basic_numa_test.c | 117 ++++++++++++++++++
3 files changed, 119 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/rseq/basic_numa_test.c
diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index db5c1a124c6c..9231abed69cc 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
+basic_numa_test
basic_percpu_ops_test
basic_percpu_ops_vm_vcpu_id_test
basic_test
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 3eec8e166385..4bf5b7202254 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -12,7 +12,7 @@ LDLIBS += -lpthread -ldl
# still track changes to header files and depend on shared object.
OVERRIDE_TARGETS = 1
-TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_vm_vcpu_id_test param_test \
+TEST_GEN_PROGS = basic_test basic_numa_test basic_percpu_ops_test basic_percpu_ops_vm_vcpu_id_test param_test \
param_test_benchmark param_test_compare_twice param_test_vm_vcpu_id \
param_test_vm_vcpu_id_benchmark param_test_vm_vcpu_id_compare_twice
diff --git a/tools/testing/selftests/rseq/basic_numa_test.c b/tools/testing/selftests/rseq/basic_numa_test.c
new file mode 100644
index 000000000000..45cb714b135c
--- /dev/null
+++ b/tools/testing/selftests/rseq/basic_numa_test.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: LGPL-2.1
+/*
+ * Basic rseq NUMA test. Validate that (vm_vcpu_id, numa_node_id) pairs are
+ * invariant. The only known scenario where this is untrue is on Power which
+ * can reconfigure the NUMA topology on CPU hotunplug/hotplug sequence.
+ */
+
+#define _GNU_SOURCE
+#include <assert.h>
+#include <sched.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/time.h>
+
+#include "rseq.h"
+
+#define NR_LOOPS 100000000
+#define NR_THREADS 16
+
+#ifdef RSEQ_ARCH_HAS_LOAD_U32_U32
+
+static
+int cpu_numa_id[CPU_SETSIZE];
+
+static
+void numa_id_init(void)
+{
+ int i;
+
+ for (i = 0; i < CPU_SETSIZE; i++)
+ cpu_numa_id[i] = -1;
+}
+
+static
+void *test_thread(void *arg)
+{
+ int i;
+
+ if (rseq_register_current_thread()) {
+ fprintf(stderr, "Error: rseq_register_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+
+ for (i = 0; i < NR_LOOPS; i++) {
+ uint32_t vm_vcpu_id, node;
+ int cached_node_id;
+
+ while (rseq_load_u32_u32(RSEQ_MO_RELAXED, &vm_vcpu_id, &rseq_get_abi()->vm_vcpu_id,
+ &node, &rseq_get_abi()->node_id) != 0) {
+ /* Retry. */
+ }
+ cached_node_id = RSEQ_READ_ONCE(cpu_numa_id[vm_vcpu_id]);
+ if (cached_node_id == -1) {
+ RSEQ_WRITE_ONCE(cpu_numa_id[vm_vcpu_id], node);
+ } else {
+ if (node != cached_node_id) {
+ fprintf(stderr, "Error: NUMA node id discrepancy: vm_vcpu_id %u cached node id %d node id %u.\n",
+ vm_vcpu_id, cached_node_id, node);
+ fprintf(stderr, "This is likely a kernel bug, or caused by a concurrent NUMA topology reconfiguration.\n");
+ abort();
+ }
+ }
+ }
+
+ if (rseq_unregister_current_thread()) {
+ fprintf(stderr, "Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+ errno, strerror(errno));
+ abort();
+ }
+ return NULL;
+}
+
+static
+int test_numa(void)
+{
+ pthread_t tid[NR_THREADS];
+ int err, i;
+ void *tret;
+
+ numa_id_init();
+
+ printf("testing rseq (vm_vcpu_id, numa_node_id) invariant, single thread\n");
+
+ (void) test_thread(NULL);
+
+ printf("testing rseq (vm_vcpu_id, numa_node_id) invariant, multi-threaded\n");
+
+ for (i = 0; i < NR_THREADS; i++) {
+ err = pthread_create(&tid[i], NULL, test_thread, NULL);
+ if (err != 0)
+ abort();
+ }
+
+ for (i = 0; i < NR_THREADS; i++) {
+ err = pthread_join(tid[i], &tret);
+ if (err != 0)
+ abort();
+ }
+
+ return 0;
+}
+#else
+static
+int test_numa(void)
+{
+ fprintf(stderr, "rseq_load_u32_u32 is not implemented on this architecture. "
+ "Skipping numa test.\n");
+ return 0;
+}
+#endif
+
+int main(int argc, char **argv)
+{
+ return test_numa();
+}
--
2.25.1
Report and abort when a negative cpu id value is observed by the
spinlock test.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/param_test.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c
index f3687a90ec0c..1c86b45bd579 100644
--- a/tools/testing/selftests/rseq/param_test.c
+++ b/tools/testing/selftests/rseq/param_test.c
@@ -410,6 +410,11 @@ static int rseq_this_cpu_lock(struct percpu_lock *lock)
int ret;
cpu = get_current_cpu_id();
+ if (cpu < 0) {
+ fprintf(stderr, "pid: %d: tid: %d, cpu: %d: Observing vcpu id %d\n",
+ getpid(), (int) rseq_gettid(), rseq_current_cpu_raw(), cpu);
+ abort();
+ }
ret = rseq_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
&lock->c[cpu].v,
0, 1, cpu);
--
2.25.1
Introduce a rseq-riscv-bits.h template header which is internally included
to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
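For reference, each inclusion of rseq-riscv-bits.h under a different
combination of template defines emits one suffixed variant per operation; the
names below are the ones produced by the suffixes in rseq-bits-template.h and
are only listed here for illustration:

	/*
	 * e.g. for rseq_cmpeqv_trystorev_storev():
	 *
	 *   rseq_cmpeqv_trystorev_storev_relaxed_cpu_id()
	 *   rseq_cmpeqv_trystorev_storev_release_cpu_id()
	 *   rseq_cmpeqv_trystorev_storev_relaxed_vm_vcpu_id()
	 *   rseq_cmpeqv_trystorev_storev_release_vm_vcpu_id()
	 *
	 * Callers are not expected to use these names directly; they go
	 * through the rseq.h wrappers which take explicit memory ordering
	 * and per-cpu index mode arguments.
	 */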
Signed-off-by: Mathieu Desnoyers <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Eric Lin <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
---
Changes since v4:
- Use RSEQ_TEMPLATE_CPU_ID_FIELD.
---
.../testing/selftests/rseq/rseq-riscv-bits.h | 410 ++++++++++++++
tools/testing/selftests/rseq/rseq-riscv.h | 527 +-----------------
2 files changed, 437 insertions(+), 500 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-riscv-bits.h
diff --git a/tools/testing/selftests/rseq/rseq-riscv-bits.h b/tools/testing/selftests/rseq/rseq-riscv-bits.h
new file mode 100644
index 000000000000..7a5ce10b8ccc
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-riscv-bits.h
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+
+#include "rseq-bits-template.h"
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
+#endif
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ off_t voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPNE(v, expectnot, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ RSEQ_ASM_OP_CMPNE(v, expectnot, "%l[error2]")
+#endif
+ RSEQ_ASM_OP_R_LOAD(v)
+ RSEQ_ASM_OP_R_STORE(load)
+ RSEQ_ASM_OP_R_LOAD_OFF(voffp)
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [load] "m" (*load),
+ [voffp] "r" (voffp)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+#endif
+ RSEQ_ASM_OP_R_LOAD(v)
+ RSEQ_ASM_OP_R_ADD(count)
+ RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [count] "r" (count)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error3]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
+ RSEQ_ASM_OP_CMPEQ(v2, expect2, "%l[error3]")
+#endif
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ [newv] "r" (newv)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+error3:
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+
+/*
+ * pval = *(ptr+off)
+ * *pval += inc;
+ */
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+#endif
+ RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [ptr] "r" (ptr),
+ [off] "er" (off),
+ [inc] "er" (inc)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
+#endif
+ RSEQ_ASM_OP_STORE(newv2, v2)
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
+#else
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+#endif
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [expect] "r" (expect),
+ [v] "m" (*v),
+ [newv] "r" (newv),
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __always_inline
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+ __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
+ RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
+#endif
+ RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
+ RSEQ_INJECT_ASM(3)
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
+#endif
+ RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
+#else
+ RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
+#endif
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [expect] "r" (expect),
+ [v] "m" (*v),
+ [newv] "r" (newv),
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1, RSEQ_ASM_TMP_REG_2,
+ RSEQ_ASM_TMP_REG_3, RSEQ_ASM_TMP_REG_4
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-riscv.h b/tools/testing/selftests/rseq/rseq-riscv.h
index b16d943a63e1..ad61ad06ef9f 100644
--- a/tools/testing/selftests/rseq/rseq-riscv.h
+++ b/tools/testing/selftests/rseq/rseq-riscv.h
@@ -165,507 +165,34 @@ do { \
RSEQ_ASM_OP_R_ADD(inc) \
__rseq_str(post_commit_label) ":\n"
-static inline __always_inline
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
-#endif
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- off_t voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPNE(v, expectnot, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPNE(v, expectnot, "%l[error2]")
-#endif
- RSEQ_ASM_OP_R_LOAD(v)
- RSEQ_ASM_OP_R_STORE(load)
- RSEQ_ASM_OP_R_LOAD_OFF(voffp)
- RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [load] "m" (*load),
- [voffp] "r" (voffp)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* Per-cpu-id indexing. */
-static inline __always_inline
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-riscv-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
-#endif
- RSEQ_ASM_OP_R_LOAD(v)
- RSEQ_ASM_OP_R_ADD(count)
- RSEQ_ASM_OP_R_FINAL_STORE(v, 3)
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [count] "r" (count)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
-#endif
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "m" (*v),
- [newv] "r" (newv),
- [v2] "m" (*v2),
- [newv2] "r" (newv2)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
-#endif
- RSEQ_ASM_OP_STORE(newv2, v2)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "m" (*v),
- [newv] "r" (newv),
- [v2] "m" (*v2),
- [newv2] "r" (newv2)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error3]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_OP_CMPEQ(v2, expect2, "%l[cmpfail]")
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
- RSEQ_ASM_OP_CMPEQ(v2, expect2, "%l[error3]")
-#endif
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [v] "m" (*v),
- [expect] "r" (expect),
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- [newv] "r" (newv)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-error3:
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
-#endif
- RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "m" (*v),
- [newv] "r" (newv),
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1, RSEQ_ASM_TMP_REG_2,
- RSEQ_ASM_TMP_REG_3, RSEQ_ASM_TMP_REG_4
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __always_inline
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[cmpfail]")
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error2]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[cmpfail]")
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
- RSEQ_ASM_OP_CMPEQ(v, expect, "%l[error2]")
-#endif
- RSEQ_ASM_OP_R_BAD_MEMCPY(dst, src, len)
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_OP_FINAL_STORE_RELEASE(newv, v, 3)
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [expect] "r" (expect),
- [v] "m" (*v),
- [newv] "r" (newv),
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1, RSEQ_ASM_TMP_REG_2,
- RSEQ_ASM_TMP_REG_3, RSEQ_ASM_TMP_REG_4
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
-
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_bug("expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-riscv-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
-#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+/* Per-vm-vcpu-id indexing. */
-/*
- * pval = *(ptr+off)
- * *pval += inc;
- */
-static inline __always_inline
-int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto(RSEQ_ASM_DEFINE_TABLE(1, 2f, 3f, 4f)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(2f, "%l[error1]")
-#endif
- RSEQ_ASM_STORE_RSEQ_CS(2, 1b, rseq_cs)
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
-#endif
- RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3)
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [current_cpu_id] "m" (rseq_get_abi()->cpu_id),
- [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
- [ptr] "r" (ptr),
- [off] "er" (off),
- [inc] "er" (inc)
- RSEQ_INJECT_INPUT
- : "memory", RSEQ_ASM_TMP_REG_1
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-riscv-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-riscv-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
+
+/* APIs which are not based on cpu ids. */
+
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-riscv-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
--
2.25.1
Adapt to the rseq.h API changes introduced by commits
"selftests/rseq: <arch>: Template memory ordering and percpu access mode".
Build a new basic_percpu_ops_vm_vcpu_id_test to test the new
"vm_vcpu_id" rseq field.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/.gitignore | 1 +
tools/testing/selftests/rseq/Makefile | 5 +-
.../selftests/rseq/basic_percpu_ops_test.c | 46 ++++++++++++++++---
3 files changed, 44 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 5910888ebfe1..5a7e5acc628c 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
basic_percpu_ops_test
+basic_percpu_ops_vm_vcpu_id_test
basic_test
basic_rseq_op_test
param_test
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 215e1067f037..4210c135e621 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -12,7 +12,7 @@ LDLIBS += -lpthread -ldl
# still track changes to header files and depend on shared object.
OVERRIDE_TARGETS = 1
-TEST_GEN_PROGS = basic_test basic_percpu_ops_test param_test \
+TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_vm_vcpu_id_test param_test \
param_test_benchmark param_test_compare_twice
TEST_GEN_PROGS_EXTENDED = librseq.so
@@ -29,6 +29,9 @@ $(OUTPUT)/librseq.so: rseq.c rseq.h rseq-*.h
$(OUTPUT)/%: %.c $(TEST_GEN_PROGS_EXTENDED) rseq.h rseq-*.h
$(CC) $(CFLAGS) $< $(LDLIBS) -lrseq -o $@
+$(OUTPUT)/basic_percpu_ops_vm_vcpu_id_test: basic_percpu_ops_test.c $(TEST_GEN_PROGS_EXTENDED) rseq.h rseq-*.h
+ $(CC) $(CFLAGS) -DBUILDOPT_RSEQ_PERCPU_VM_VCPU_ID $< $(LDLIBS) -lrseq -o $@
+
$(OUTPUT)/param_test_benchmark: param_test.c $(TEST_GEN_PROGS_EXTENDED) \
rseq.h rseq-*.h
$(CC) $(CFLAGS) -DBENCHMARK $< $(LDLIBS) -lrseq -o $@
diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
index 517756afc2a4..719ff9910e23 100644
--- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c
+++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c
@@ -12,6 +12,32 @@
#include "../kselftest.h"
#include "rseq.h"
+#ifdef BUILDOPT_RSEQ_PERCPU_VM_VCPU_ID
+# define RSEQ_PERCPU RSEQ_PERCPU_VM_VCPU_ID
+static
+int get_current_cpu_id(void)
+{
+ return rseq_current_vm_vcpu_id();
+}
+static
+bool rseq_validate_cpu_id(void)
+{
+ return rseq_vm_vcpu_id_available();
+}
+#else
+# define RSEQ_PERCPU RSEQ_PERCPU_CPU_ID
+static
+int get_current_cpu_id(void)
+{
+ return rseq_cpu_start();
+}
+static
+bool rseq_validate_cpu_id(void)
+{
+ return rseq_current_cpu_raw() >= 0;
+}
+#endif
+
struct percpu_lock_entry {
intptr_t v;
} __attribute__((aligned(128)));
@@ -51,9 +77,9 @@ int rseq_this_cpu_lock(struct percpu_lock *lock)
for (;;) {
int ret;
- cpu = rseq_cpu_start();
- ret = rseq_cmpeqv_storev(&lock->c[cpu].v,
- 0, 1, cpu);
+ cpu = get_current_cpu_id();
+ ret = rseq_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ &lock->c[cpu].v, 0, 1, cpu);
if (rseq_likely(!ret))
break;
/* Retry if comparison fails or rseq aborts. */
@@ -141,13 +167,14 @@ void this_cpu_list_push(struct percpu_list *list,
intptr_t *targetptr, newval, expect;
int ret;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
/* Load list->c[cpu].head with single-copy atomicity. */
expect = (intptr_t)RSEQ_READ_ONCE(list->c[cpu].head);
newval = (intptr_t)node;
targetptr = (intptr_t *)&list->c[cpu].head;
node->next = (struct percpu_list_node *)expect;
- ret = rseq_cmpeqv_storev(targetptr, expect, newval, cpu);
+ ret = rseq_cmpeqv_storev(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr, expect, newval, cpu);
if (rseq_likely(!ret))
break;
/* Retry if comparison fails or rseq aborts. */
@@ -170,12 +197,13 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list,
long offset;
int ret, cpu;
- cpu = rseq_cpu_start();
+ cpu = get_current_cpu_id();
targetptr = (intptr_t *)&list->c[cpu].head;
expectnot = (intptr_t)NULL;
offset = offsetof(struct percpu_list_node, next);
load = (intptr_t *)&head;
- ret = rseq_cmpnev_storeoffp_load(targetptr, expectnot,
+ ret = rseq_cmpnev_storeoffp_load(RSEQ_MO_RELAXED, RSEQ_PERCPU,
+ targetptr, expectnot,
offset, load, cpu);
if (rseq_likely(!ret)) {
if (_cpu)
@@ -295,6 +323,10 @@ int main(int argc, char **argv)
errno, strerror(errno));
goto error;
}
+ if (!rseq_validate_cpu_id()) {
+ fprintf(stderr, "Error: cpu id getter unavailable\n");
+ goto error;
+ }
printf("spinlock\n");
test_percpu_spinlock();
printf("percpu_list\n");
--
2.25.1
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/rseq-abi.h | 9 +++++++++
tools/testing/selftests/rseq/rseq.h | 10 ++++++++++
2 files changed, 19 insertions(+)
diff --git a/tools/testing/selftests/rseq/rseq-abi.h b/tools/testing/selftests/rseq/rseq-abi.h
index a1faa9162d52..1ee4740ebe94 100644
--- a/tools/testing/selftests/rseq/rseq-abi.h
+++ b/tools/testing/selftests/rseq/rseq-abi.h
@@ -155,6 +155,15 @@ struct rseq_abi {
*/
__u32 node_id;
+ /*
+ * Restartable sequences vm_vcpu_id field. Updated by the kernel. Read by
+ * user-space with single-copy atomicity semantics. This field should
+ * only be read by the thread which registered this data structure.
+ * Aligned on 32-bit. Contains the current thread's virtual CPU ID
+ * (allocated uniquely within a memory space).
+ */
+ __u32 vm_vcpu_id;
+
/*
* Flexible array member at end of structure, after last feature field.
*/
diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
index fd17d0e54a1b..003e0e3750ce 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -191,6 +191,16 @@ static inline uint32_t rseq_current_node_id(void)
return RSEQ_ACCESS_ONCE(rseq_get_abi()->node_id);
}
+static inline bool rseq_vm_vcpu_id_available(void)
+{
+ return (int) rseq_feature_size >= rseq_offsetofend(struct rseq_abi, vm_vcpu_id);
+}
+
+static inline uint32_t rseq_current_vm_vcpu_id(void)
+{
+ return RSEQ_ACCESS_ONCE(rseq_get_abi()->vm_vcpu_id);
+}
+
static inline void rseq_clear_rseq_cs(void)
{
RSEQ_WRITE_ONCE(rseq_get_abi()->rseq_cs.arch.ptr, 0);
--
2.25.1
Test the NUMA node id rseq extension field. Compare it against the value
returned by the getcpu(2) system call while the thread is pinned to a
specific core.
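Condensed sketch of the check performed below, once the thread is pinned to a
single CPU (the raw getcpu(2) call mirrors the sys_getcpu() wrapper added to
rseq.c by this patch):

	unsigned int cpu, node;

	if (syscall(__NR_getcpu, &cpu, &node, NULL) == 0)
		assert(rseq_current_node_id() == node);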
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
tools/testing/selftests/rseq/basic_test.c | 4 ++++
tools/testing/selftests/rseq/rseq-abi.h | 8 +++++++
tools/testing/selftests/rseq/rseq.c | 18 +++++++++++++++
tools/testing/selftests/rseq/rseq.h | 28 +++++++++++++++++++++++
4 files changed, 58 insertions(+)
diff --git a/tools/testing/selftests/rseq/basic_test.c b/tools/testing/selftests/rseq/basic_test.c
index d8efbfb89193..295eea16466f 100644
--- a/tools/testing/selftests/rseq/basic_test.c
+++ b/tools/testing/selftests/rseq/basic_test.c
@@ -22,6 +22,8 @@ void test_cpu_pointer(void)
CPU_ZERO(&test_affinity);
for (i = 0; i < CPU_SETSIZE; i++) {
if (CPU_ISSET(i, &affinity)) {
+ int node;
+
CPU_SET(i, &test_affinity);
sched_setaffinity(0, sizeof(test_affinity),
&test_affinity);
@@ -29,6 +31,8 @@ void test_cpu_pointer(void)
assert(rseq_current_cpu() == i);
assert(rseq_current_cpu_raw() == i);
assert(rseq_cpu_start() == i);
+ node = rseq_fallback_current_node();
+ assert(rseq_current_node_id() == node);
CPU_CLR(i, &test_affinity);
}
}
diff --git a/tools/testing/selftests/rseq/rseq-abi.h b/tools/testing/selftests/rseq/rseq-abi.h
index 00ac846d85b0..a1faa9162d52 100644
--- a/tools/testing/selftests/rseq/rseq-abi.h
+++ b/tools/testing/selftests/rseq/rseq-abi.h
@@ -147,6 +147,14 @@ struct rseq_abi {
*/
__u32 flags;
+ /*
+ * Restartable sequences node_id field. Updated by the kernel. Read by
+ * user-space with single-copy atomicity semantics. This field should
+ * only be read by the thread which registered this data structure.
+ * Aligned on 32-bit. Contains the current NUMA node ID.
+ */
+ __u32 node_id;
+
/*
* Flexible array member at end of structure, after last feature field.
*/
diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
index c1c691eca904..191fcc94ce9c 100644
--- a/tools/testing/selftests/rseq/rseq.c
+++ b/tools/testing/selftests/rseq/rseq.c
@@ -79,6 +79,11 @@ static int sys_rseq(struct rseq_abi *rseq_abi, uint32_t rseq_len,
return syscall(__NR_rseq, rseq_abi, rseq_len, flags, sig);
}
+static int sys_getcpu(unsigned *cpu, unsigned *node)
+{
+ return syscall(__NR_getcpu, cpu, node, NULL);
+}
+
int rseq_available(void)
{
int rc;
@@ -200,3 +205,16 @@ int32_t rseq_fallback_current_cpu(void)
}
return cpu;
}
+
+int32_t rseq_fallback_current_node(void)
+{
+ uint32_t cpu_id, node_id;
+ int ret;
+
+ ret = sys_getcpu(&cpu_id, &node_id);
+ if (ret) {
+ perror("sys_getcpu()");
+ return ret;
+ }
+ return (int32_t) node_id;
+}
diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
index 95adc1e1b0db..fd17d0e54a1b 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -20,6 +20,15 @@
#include "rseq-abi.h"
#include "compiler.h"
+#ifndef rseq_sizeof_field
+#define rseq_sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
+#endif
+
+#ifndef rseq_offsetofend
+#define rseq_offsetofend(TYPE, MEMBER) \
+ (offsetof(TYPE, MEMBER) + rseq_sizeof_field(TYPE, MEMBER))
+#endif
+
/*
* Empty code injection macros, override when testing.
* It is important to consider that the ASM injection macros need to be
@@ -128,6 +137,11 @@ int rseq_unregister_current_thread(void);
*/
int32_t rseq_fallback_current_cpu(void);
+/*
+ * Restartable sequence fallback for reading the current node number.
+ */
+int32_t rseq_fallback_current_node(void);
+
/*
* Values returned can be either the current CPU number, -1 (rseq is
* uninitialized), or -2 (rseq initialization has failed).
@@ -163,6 +177,20 @@ static inline uint32_t rseq_current_cpu(void)
return cpu;
}
+static inline bool rseq_node_id_available(void)
+{
+ return (int) rseq_feature_size >= rseq_offsetofend(struct rseq_abi, node_id);
+}
+
+/*
+ * Current NUMA node number.
+ */
+static inline uint32_t rseq_current_node_id(void)
+{
+ assert(rseq_node_id_available());
+ return RSEQ_ACCESS_ONCE(rseq_get_abi()->node_id);
+}
+
static inline void rseq_clear_rseq_cs(void)
{
RSEQ_WRITE_ONCE(rseq_get_abi()->rseq_cs.arch.ptr, 0);
--
2.25.1
Introduce a rseq-x86-bits.h template header which is internally included
to generate the static inline functions covering:
- relaxed and release memory ordering,
- per-cpu-id and per-vm-vcpu-id per-cpu data access.
This introduces changes to the rseq.h selftests API which require updating
the rseq selftest programs. Similar API/templating changes need
to be done for other architectures.
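For reference, a sketch of the assumed shape of the new rseq.h dispatch
wrappers (the actual implementation is in the rseq.h hunk of this patch; this
only illustrates how the memory ordering and per-cpu index mode selectors map
onto the template-generated variants):

	static inline __attribute__((always_inline))
	int rseq_cmpeqv_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
			       intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
	{
		if (rseq_mo != RSEQ_MO_RELAXED)
			return -1;
		switch (percpu_mode) {
		case RSEQ_PERCPU_CPU_ID:
			return rseq_cmpeqv_storev_relaxed_cpu_id(v, expect, newv, cpu);
		case RSEQ_PERCPU_VM_VCPU_ID:
			return rseq_cmpeqv_storev_relaxed_vm_vcpu_id(v, expect, newv, cpu);
		}
		return -1;
	}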
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
Changes since v4:
- Introduce RSEQ_TEMPLATE_CPU_ID_FIELD.
---
tools/testing/selftests/rseq/compiler.h | 6 +
.../testing/selftests/rseq/rseq-bits-reset.h | 11 +
.../selftests/rseq/rseq-bits-template.h | 41 +
tools/testing/selftests/rseq/rseq-x86-bits.h | 993 ++++++++++++++
tools/testing/selftests/rseq/rseq-x86.h | 1181 +----------------
tools/testing/selftests/rseq/rseq.h | 159 +++
6 files changed, 1241 insertions(+), 1150 deletions(-)
create mode 100644 tools/testing/selftests/rseq/rseq-bits-reset.h
create mode 100644 tools/testing/selftests/rseq/rseq-bits-template.h
create mode 100644 tools/testing/selftests/rseq/rseq-x86-bits.h
diff --git a/tools/testing/selftests/rseq/compiler.h b/tools/testing/selftests/rseq/compiler.h
index 876eb6a7f75b..f47092bddeba 100644
--- a/tools/testing/selftests/rseq/compiler.h
+++ b/tools/testing/selftests/rseq/compiler.h
@@ -27,4 +27,10 @@
*/
#define rseq_after_asm_goto() asm volatile ("" : : : "memory")
+/* Combine two tokens. */
+#define RSEQ__COMBINE_TOKENS(_tokena, _tokenb) \
+ _tokena##_tokenb
+#define RSEQ_COMBINE_TOKENS(_tokena, _tokenb) \
+ RSEQ__COMBINE_TOKENS(_tokena, _tokenb)
+
#endif /* RSEQ_COMPILER_H_ */
diff --git a/tools/testing/selftests/rseq/rseq-bits-reset.h b/tools/testing/selftests/rseq/rseq-bits-reset.h
new file mode 100644
index 000000000000..e8655089f9cb
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-bits-reset.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-bits-reset.h
+ *
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ */
+
+#undef RSEQ_TEMPLATE_IDENTIFIER
+#undef RSEQ_TEMPLATE_CPU_ID_FIELD
+#undef RSEQ_TEMPLATE_CPU_ID_OFFSET
+#undef RSEQ_TEMPLATE_SUFFIX
diff --git a/tools/testing/selftests/rseq/rseq-bits-template.h b/tools/testing/selftests/rseq/rseq-bits-template.h
new file mode 100644
index 000000000000..3d1d8e2b1e45
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-bits-template.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-bits-template.h
+ *
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ */
+
+#ifdef RSEQ_TEMPLATE_CPU_ID
+# define RSEQ_TEMPLATE_CPU_ID_OFFSET RSEQ_CPU_ID_OFFSET
+# define RSEQ_TEMPLATE_CPU_ID_FIELD cpu_id
+# ifdef RSEQ_TEMPLATE_MO_RELEASE
+# define RSEQ_TEMPLATE_SUFFIX _release_cpu_id
+# elif defined (RSEQ_TEMPLATE_MO_RELAXED)
+# define RSEQ_TEMPLATE_SUFFIX _relaxed_cpu_id
+# else
+# error "Never use <rseq-bits-template.h> directly; include <rseq.h> instead."
+# endif
+#elif defined(RSEQ_TEMPLATE_VM_VCPU_ID)
+# define RSEQ_TEMPLATE_CPU_ID_OFFSET RSEQ_VM_VCPU_ID_OFFSET
+# define RSEQ_TEMPLATE_CPU_ID_FIELD vm_vcpu_id
+# ifdef RSEQ_TEMPLATE_MO_RELEASE
+# define RSEQ_TEMPLATE_SUFFIX _release_vm_vcpu_id
+# elif defined (RSEQ_TEMPLATE_MO_RELAXED)
+# define RSEQ_TEMPLATE_SUFFIX _relaxed_vm_vcpu_id
+# else
+# error "Never use <rseq-bits-template.h> directly; include <rseq.h> instead."
+# endif
+#elif defined (RSEQ_TEMPLATE_CPU_ID_NONE)
+# ifdef RSEQ_TEMPLATE_MO_RELEASE
+# define RSEQ_TEMPLATE_SUFFIX _release
+# elif defined (RSEQ_TEMPLATE_MO_RELAXED)
+# define RSEQ_TEMPLATE_SUFFIX _relaxed
+# else
+# error "Never use <rseq-bits-template.h> directly; include <rseq.h> instead."
+# endif
+#else
+# error "Never use <rseq-bits-template.h> directly; include <rseq.h> instead."
+#endif
+
+#define RSEQ_TEMPLATE_IDENTIFIER(x) RSEQ_COMBINE_TOKENS(x, RSEQ_TEMPLATE_SUFFIX)
+
diff --git a/tools/testing/selftests/rseq/rseq-x86-bits.h b/tools/testing/selftests/rseq/rseq-x86-bits.h
new file mode 100644
index 000000000000..28ca77cc876c
--- /dev/null
+++ b/tools/testing/selftests/rseq/rseq-x86-bits.h
@@ -0,0 +1,993 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * rseq-x86-bits.h
+ *
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
+ */
+
+#include "rseq-bits-template.h"
+
+#ifdef __x86_64__
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* final store */
+ "movq %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+/*
+ * Compare @v against @expectnot. When it does _not_ match, load @v
+ * into @load, and store the content of *@v + voffp into @v.
+ */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "movq %[v], %%rbx\n\t"
+ "cmpq %%rbx, %[expectnot]\n\t"
+ "je %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "movq %[v], %%rbx\n\t"
+ "cmpq %%rbx, %[expectnot]\n\t"
+ "je %l[error2]\n\t"
+#endif
+ "movq %%rbx, %[load]\n\t"
+ "addq %[voffp], %%rbx\n\t"
+ "movq (%%rbx), %%rbx\n\t"
+ /* final store */
+ "movq %%rbx, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "er" (voffp),
+ [load] "m" (*load)
+ : "memory", "cc", "rax", "rbx"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+#endif
+ /* final store */
+ "addq %[count], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [count] "er" (count)
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+
+/*
+ * pval = *(ptr+off)
+ * *pval += inc;
+ */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, long off, intptr_t inc, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+#endif
+ /* get p+v */
+ "movq %[ptr], %%rbx\n\t"
+ "addq %[off], %%rbx\n\t"
+ /* get pv */
+ "movq (%%rbx), %%rcx\n\t"
+ /* *pv += inc */
+ "addq %[inc], (%%rcx)\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [ptr] "m" (*ptr),
+ [off] "er" (off),
+ [inc] "er" (inc)
+ : "memory", "cc", "rax", "rbx", "rcx"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ return 0;
+abort:
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+ "cmpq %[v2], %[expect2]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[error2]\n\t"
+ "cmpq %[v2], %[expect2]\n\t"
+ "jnz %l[error3]\n\t"
+#endif
+ /* final store */
+ "movq %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "cmpq %[v], %[expect]\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* try store */
+ "movq %[newv2], %[v2]\n\t"
+ RSEQ_INJECT_ASM(5)
+ /* final store */
+ "movq %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ uint64_t rseq_scratch[3];
+
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ "movq %[src], %[rseq_scratch0]\n\t"
+ "movq %[dst], %[rseq_scratch1]\n\t"
+ "movq %[len], %[rseq_scratch2]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpq %[v], %[expect]\n\t"
+ "jnz 5f\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 6f)
+ "cmpq %[v], %[expect]\n\t"
+ "jnz 7f\n\t"
+#endif
+ /* try memcpy */
+ "test %[len], %[len]\n\t" \
+ "jz 333f\n\t" \
+ "222:\n\t" \
+ "movb (%[src]), %%al\n\t" \
+ "movb %%al, (%[dst])\n\t" \
+ "inc %[src]\n\t" \
+ "inc %[dst]\n\t" \
+ "dec %[len]\n\t" \
+ "jnz 222b\n\t" \
+ "333:\n\t" \
+ RSEQ_INJECT_ASM(5)
+ /* final store */
+ "movq %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ "movq %[rseq_scratch2], %[len]\n\t"
+ "movq %[rseq_scratch1], %[dst]\n\t"
+ "movq %[rseq_scratch0], %[src]\n\t"
+ RSEQ_ASM_DEFINE_ABORT(4,
+ "movq %[rseq_scratch2], %[len]\n\t"
+ "movq %[rseq_scratch1], %[dst]\n\t"
+ "movq %[rseq_scratch0], %[src]\n\t",
+ abort)
+ RSEQ_ASM_DEFINE_CMPFAIL(5,
+ "movq %[rseq_scratch2], %[len]\n\t"
+ "movq %[rseq_scratch1], %[dst]\n\t"
+ "movq %[rseq_scratch0], %[src]\n\t",
+ cmpfail)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_CMPFAIL(6,
+ "movq %[rseq_scratch2], %[len]\n\t"
+ "movq %[rseq_scratch1], %[dst]\n\t"
+ "movq %[rseq_scratch0], %[src]\n\t",
+ error1)
+ RSEQ_ASM_DEFINE_CMPFAIL(7,
+ "movq %[rseq_scratch2], %[len]\n\t"
+ "movq %[rseq_scratch1], %[dst]\n\t"
+ "movq %[rseq_scratch0], %[src]\n\t",
+ error2)
+#endif
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len),
+ [rseq_scratch0] "m" (rseq_scratch[0]),
+ [rseq_scratch1] "m" (rseq_scratch[1]),
+ [rseq_scratch2] "m" (rseq_scratch[2])
+ : "memory", "cc", "rax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#elif defined(__i386__)
+
+#if defined(RSEQ_TEMPLATE_MO_RELAXED) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_storev)(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpl %[v], %[expect]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "cmpl %[v], %[expect]\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* final store */
+ "movl %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "r" (newv)
+ : "memory", "cc", "eax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+/*
+ * Compare @v against @expectnot. When it does _not_ match, load @v
+ * into @load, and store the content of *@v + voffp into @v.
+ */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpnev_storeoffp_load)(intptr_t *v, intptr_t expectnot,
+ long voffp, intptr_t *load, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "movl %[v], %%ebx\n\t"
+ "cmpl %%ebx, %[expectnot]\n\t"
+ "je %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "movl %[v], %%ebx\n\t"
+ "cmpl %%ebx, %[expectnot]\n\t"
+ "je %l[error2]\n\t"
+#endif
+ "movl %%ebx, %[load]\n\t"
+ "addl %[voffp], %%ebx\n\t"
+ "movl (%%ebx), %%ebx\n\t"
+ /* final store */
+ "movl %%ebx, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(5)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [expectnot] "r" (expectnot),
+ [voffp] "ir" (voffp),
+ [load] "m" (*load)
+ : "memory", "cc", "eax", "ebx"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_addv)(intptr_t *v, intptr_t count, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+#endif
+ /* final store */
+ "addl %[count], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [count] "ir" (count)
+ : "memory", "cc", "eax"
+ RSEQ_INJECT_CLOBBER
+ : abort
+#ifdef RSEQ_COMPARE_TWICE
+ , error1
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+#endif
+}
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_cmpeqv_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "cmpl %[v], %[expect]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+ "cmpl %[expect2], %[v2]\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "cmpl %[v], %[expect]\n\t"
+ "jnz %l[error2]\n\t"
+ "cmpl %[expect2], %[v2]\n\t"
+ "jnz %l[error3]\n\t"
+#endif
+ "movl %[newv], %%eax\n\t"
+ /* final store */
+ "movl %%eax, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* cmp2 input */
+ [v2] "m" (*v2),
+ [expect2] "r" (expect2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "r" (expect),
+ [newv] "m" (newv)
+ : "memory", "cc", "eax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2, error3
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("1st expected value comparison failed");
+error3:
+ rseq_after_asm_goto();
+ rseq_bug("2nd expected value comparison failed");
+#endif
+}
+
+#endif /* #if defined(RSEQ_TEMPLATE_MO_RELAXED) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) && \
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID))
+
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trystorev_storev)(intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "movl %[expect], %%eax\n\t"
+ "cmpl %[v], %%eax\n\t"
+ "jnz %l[cmpfail]\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
+ "movl %[expect], %%eax\n\t"
+ "cmpl %[v], %%eax\n\t"
+ "jnz %l[error2]\n\t"
+#endif
+ /* try store */
+ "movl %[newv2], %[v2]\n\t"
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "lock; addl $0,-128(%%esp)\n\t"
+#endif
+ /* final store */
+ "movl %[newv], %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ RSEQ_ASM_DEFINE_ABORT(4, "", abort)
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* try store input */
+ [v2] "m" (*v2),
+ [newv2] "r" (newv2),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "m" (expect),
+ [newv] "r" (newv)
+ : "memory", "cc", "eax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+
+}
+
+/* TODO: implement a faster memcpy. */
+static inline __attribute__((always_inline))
+int RSEQ_TEMPLATE_IDENTIFIER(rseq_cmpeqv_trymemcpy_storev)(intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ uint32_t rseq_scratch[3];
+
+ RSEQ_INJECT_C(9)
+
+ __asm__ __volatile__ goto (
+ RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
+ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
+#endif
+ "movl %[src], %[rseq_scratch0]\n\t"
+ "movl %[dst], %[rseq_scratch1]\n\t"
+ "movl %[len], %[rseq_scratch2]\n\t"
+ /* Start rseq by storing table entry pointer into rseq_cs. */
+ RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 4f)
+ RSEQ_INJECT_ASM(3)
+ "movl %[expect], %%eax\n\t"
+ "cmpl %%eax, %[v]\n\t"
+ "jnz 5f\n\t"
+ RSEQ_INJECT_ASM(4)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_TEMPLATE_CPU_ID_OFFSET(%[rseq_offset]), 6f)
+ "movl %[expect], %%eax\n\t"
+ "cmpl %%eax, %[v]\n\t"
+ "jnz 7f\n\t"
+#endif
+ /* try memcpy */
+ "test %[len], %[len]\n\t" \
+ "jz 333f\n\t" \
+ "222:\n\t" \
+ "movb (%[src]), %%al\n\t" \
+ "movb %%al, (%[dst])\n\t" \
+ "inc %[src]\n\t" \
+ "inc %[dst]\n\t" \
+ "dec %[len]\n\t" \
+ "jnz 222b\n\t" \
+ "333:\n\t" \
+ RSEQ_INJECT_ASM(5)
+#ifdef RSEQ_TEMPLATE_MO_RELEASE
+ "lock; addl $0,-128(%%esp)\n\t"
+#endif
+ "movl %[newv], %%eax\n\t"
+ /* final store */
+ "movl %%eax, %[v]\n\t"
+ "2:\n\t"
+ RSEQ_INJECT_ASM(6)
+ /* teardown */
+ "movl %[rseq_scratch2], %[len]\n\t"
+ "movl %[rseq_scratch1], %[dst]\n\t"
+ "movl %[rseq_scratch0], %[src]\n\t"
+ RSEQ_ASM_DEFINE_ABORT(4,
+ "movl %[rseq_scratch2], %[len]\n\t"
+ "movl %[rseq_scratch1], %[dst]\n\t"
+ "movl %[rseq_scratch0], %[src]\n\t",
+ abort)
+ RSEQ_ASM_DEFINE_CMPFAIL(5,
+ "movl %[rseq_scratch2], %[len]\n\t"
+ "movl %[rseq_scratch1], %[dst]\n\t"
+ "movl %[rseq_scratch0], %[src]\n\t",
+ cmpfail)
+#ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_DEFINE_CMPFAIL(6,
+ "movl %[rseq_scratch2], %[len]\n\t"
+ "movl %[rseq_scratch1], %[dst]\n\t"
+ "movl %[rseq_scratch0], %[src]\n\t",
+ error1)
+ RSEQ_ASM_DEFINE_CMPFAIL(7,
+ "movl %[rseq_scratch2], %[len]\n\t"
+ "movl %[rseq_scratch1], %[dst]\n\t"
+ "movl %[rseq_scratch0], %[src]\n\t",
+ error2)
+#endif
+ : /* gcc asm goto does not allow outputs */
+ : [cpu_id] "r" (cpu),
+ [rseq_offset] "r" (rseq_offset),
+ /* final store input */
+ [v] "m" (*v),
+ [expect] "m" (expect),
+ [newv] "m" (newv),
+ /* try memcpy input */
+ [dst] "r" (dst),
+ [src] "r" (src),
+ [len] "r" (len),
+ [rseq_scratch0] "m" (rseq_scratch[0]),
+ [rseq_scratch1] "m" (rseq_scratch[1]),
+ [rseq_scratch2] "m" (rseq_scratch[2])
+ : "memory", "cc", "eax"
+ RSEQ_INJECT_CLOBBER
+ : abort, cmpfail
+#ifdef RSEQ_COMPARE_TWICE
+ , error1, error2
+#endif
+ );
+ rseq_after_asm_goto();
+ return 0;
+abort:
+ rseq_after_asm_goto();
+ RSEQ_INJECT_FAILED
+ return -1;
+cmpfail:
+ rseq_after_asm_goto();
+ return 1;
+#ifdef RSEQ_COMPARE_TWICE
+error1:
+ rseq_after_asm_goto();
+ rseq_bug("cpu_id comparison failed");
+error2:
+ rseq_after_asm_goto();
+ rseq_bug("expected value comparison failed");
+#endif
+}
+
+#endif /* #if (defined(RSEQ_TEMPLATE_MO_RELAXED) || defined(RSEQ_TEMPLATE_MO_RELEASE)) &&
+ (defined(RSEQ_TEMPLATE_CPU_ID) || defined(RSEQ_TEMPLATE_VM_VCPU_ID)) */
+
+#endif
+
+#include "rseq-bits-reset.h"
diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h
index e148dfb2f68a..a526a6ad3a81 100644
--- a/tools/testing/selftests/rseq/rseq-x86.h
+++ b/tools/testing/selftests/rseq/rseq-x86.h
@@ -2,9 +2,13 @@
/*
* rseq-x86.h
*
- * (C) Copyright 2016-2018 - Mathieu Desnoyers <[email protected]>
+ * (C) Copyright 2016-2022 - Mathieu Desnoyers <[email protected]>
*/
+#ifndef RSEQ_H
+#error "Never use <rseq-x86.h> directly; include <rseq.h> instead."
+#endif
+
#include <stdint.h>
/*
@@ -22,9 +26,10 @@
* address through a "r" input operand.
*/
-/* Offset of cpu_id and rseq_cs fields in struct rseq. */
+/* Offset of cpu_id, rseq_cs, and vm_vcpu_id fields in struct rseq. */
#define RSEQ_CPU_ID_OFFSET 4
#define RSEQ_CS_OFFSET 8
+#define RSEQ_VM_VCPU_ID_OFFSET 24
#ifdef __x86_64__
@@ -108,523 +113,6 @@ do { \
"jmp %l[" __rseq_str(cmpfail_label) "]\n\t" \
".popsection\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* final store */
- "movq %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "rax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/*
- * Compare @v against @expectnot. When it does _not_ match, load @v
- * into @load, and store the content of *@v + voffp into @v.
- */
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "movq %[v], %%rbx\n\t"
- "cmpq %%rbx, %[expectnot]\n\t"
- "je %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "movq %[v], %%rbx\n\t"
- "cmpq %%rbx, %[expectnot]\n\t"
- "je %l[error2]\n\t"
-#endif
- "movq %%rbx, %[load]\n\t"
- "addq %[voffp], %%rbx\n\t"
- "movq (%%rbx), %%rbx\n\t"
- /* final store */
- "movq %%rbx, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "er" (voffp),
- [load] "m" (*load)
- : "memory", "cc", "rax", "rbx"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
-#endif
- /* final store */
- "addq %[count], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [count] "er" (count)
- : "memory", "cc", "rax"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-#define RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
-
-/*
- * pval = *(ptr+off)
- * *pval += inc;
- */
-static inline __attribute__((always_inline))
-int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
-#endif
- /* get p+v */
- "movq %[ptr], %%rbx\n\t"
- "addq %[off], %%rbx\n\t"
- /* get pv */
- "movq (%%rbx), %%rcx\n\t"
- /* *pv += inc */
- "addq %[inc], (%%rcx)\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [ptr] "m" (*ptr),
- [off] "er" (off),
- [inc] "er" (inc)
- : "memory", "cc", "rax", "rbx", "rcx"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- return 0;
-abort:
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* try store */
- "movq %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- "movq %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "rax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/* x86-64 is TSO. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- return rseq_cmpeqv_trystorev_storev(v, expect, v2, newv2, newv, cpu);
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
- "cmpq %[v2], %[expect2]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpq %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
- "cmpq %[v2], %[expect2]\n\t"
- "jnz %l[error3]\n\t"
-#endif
- /* final store */
- "movq %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "rax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint64_t rseq_scratch[3];
-
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- "movq %[src], %[rseq_scratch0]\n\t"
- "movq %[dst], %[rseq_scratch1]\n\t"
- "movq %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpq %[v], %[expect]\n\t"
- "jnz 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
- "cmpq %[v], %[expect]\n\t"
- "jnz 7f\n\t"
-#endif
- /* try memcpy */
- "test %[len], %[len]\n\t" \
- "jz 333f\n\t" \
- "222:\n\t" \
- "movb (%[src]), %%al\n\t" \
- "movb %%al, (%[dst])\n\t" \
- "inc %[src]\n\t" \
- "inc %[dst]\n\t" \
- "dec %[len]\n\t" \
- "jnz 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- /* final store */
- "movq %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- "movq %[rseq_scratch2], %[len]\n\t"
- "movq %[rseq_scratch1], %[dst]\n\t"
- "movq %[rseq_scratch0], %[src]\n\t"
- RSEQ_ASM_DEFINE_ABORT(4,
- "movq %[rseq_scratch2], %[len]\n\t"
- "movq %[rseq_scratch1], %[dst]\n\t"
- "movq %[rseq_scratch0], %[src]\n\t",
- abort)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- "movq %[rseq_scratch2], %[len]\n\t"
- "movq %[rseq_scratch1], %[dst]\n\t"
- "movq %[rseq_scratch0], %[src]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- "movq %[rseq_scratch2], %[len]\n\t"
- "movq %[rseq_scratch1], %[dst]\n\t"
- "movq %[rseq_scratch0], %[src]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- "movq %[rseq_scratch2], %[len]\n\t"
- "movq %[rseq_scratch1], %[dst]\n\t"
- "movq %[rseq_scratch0], %[src]\n\t",
- error2)
-#endif
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- : "memory", "cc", "rax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/* x86-64 is TSO. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- return rseq_cmpeqv_trymemcpy_storev(v, expect, dst, src, len,
- newv, cpu);
-}
-
#elif defined(__i386__)
#define RSEQ_ASM_TP_SEGMENT %%gs
@@ -711,643 +199,36 @@ do { \
"jmp %l[" __rseq_str(cmpfail_label) "]\n\t" \
".popsection\n\t"
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* final store */
- "movl %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/*
- * Compare @v against @expectnot. When it does _not_ match, load @v
- * into @load, and store the content of *@v + voffp into @v.
- */
-static inline __attribute__((always_inline))
-int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
- long voffp, intptr_t *load, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "movl %[v], %%ebx\n\t"
- "cmpl %%ebx, %[expectnot]\n\t"
- "je %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "movl %[v], %%ebx\n\t"
- "cmpl %%ebx, %[expectnot]\n\t"
- "je %l[error2]\n\t"
-#endif
- "movl %%ebx, %[load]\n\t"
- "addl %[voffp], %%ebx\n\t"
- "movl (%%ebx), %%ebx\n\t"
- /* final store */
- "movl %%ebx, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(5)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [expectnot] "r" (expectnot),
- [voffp] "ir" (voffp),
- [load] "m" (*load)
- : "memory", "cc", "eax", "ebx"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_addv(intptr_t *v, intptr_t count, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
-#endif
- /* final store */
- "addl %[count], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(4)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [count] "ir" (count)
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort
-#ifdef RSEQ_COMPARE_TWICE
- , error1
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* try store */
- "movl %[newv2], %%eax\n\t"
- "movl %%eax, %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- /* final store */
- "movl %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "m" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t newv2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
-
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "movl %[expect], %%eax\n\t"
- "cmpl %[v], %%eax\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "movl %[expect], %%eax\n\t"
- "cmpl %[v], %%eax\n\t"
- "jnz %l[error2]\n\t"
-#endif
- /* try store */
- "movl %[newv2], %[v2]\n\t"
- RSEQ_INJECT_ASM(5)
- "lock; addl $0,-128(%%esp)\n\t"
- /* final store */
- "movl %[newv], %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* try store input */
- [v2] "m" (*v2),
- [newv2] "r" (newv2),
- /* final store input */
- [v] "m" (*v),
- [expect] "m" (expect),
- [newv] "r" (newv)
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
#endif
-}
+/* Per-cpu-id indexing. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
- intptr_t *v2, intptr_t expect2,
- intptr_t newv, int cpu)
-{
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_CPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-x86-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3])
-#endif
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(4)
- "cmpl %[expect2], %[v2]\n\t"
- "jnz %l[cmpfail]\n\t"
- RSEQ_INJECT_ASM(5)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1])
- "cmpl %[v], %[expect]\n\t"
- "jnz %l[error2]\n\t"
- "cmpl %[expect2], %[v2]\n\t"
- "jnz %l[error3]\n\t"
-#endif
- "movl %[newv], %%eax\n\t"
- /* final store */
- "movl %%eax, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- RSEQ_ASM_DEFINE_ABORT(4, "", abort)
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* cmp2 input */
- [v2] "m" (*v2),
- [expect2] "r" (expect2),
- /* final store input */
- [v] "m" (*v),
- [expect] "r" (expect),
- [newv] "m" (newv)
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2, error3
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("1st expected value comparison failed");
-error3:
- rseq_after_asm_goto();
- rseq_bug("2nd expected value comparison failed");
-#endif
-}
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-x86-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_CPU_ID
-/* TODO: implement a faster memcpy. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint32_t rseq_scratch[3];
+/* Per-vm-vcpu-id indexing. */
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_VM_VCPU_ID
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-x86-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- "movl %[src], %[rseq_scratch0]\n\t"
- "movl %[dst], %[rseq_scratch1]\n\t"
- "movl %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "movl %[expect], %%eax\n\t"
- "cmpl %%eax, %[v]\n\t"
- "jnz 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
- "movl %[expect], %%eax\n\t"
- "cmpl %%eax, %[v]\n\t"
- "jnz 7f\n\t"
-#endif
- /* try memcpy */
- "test %[len], %[len]\n\t" \
- "jz 333f\n\t" \
- "222:\n\t" \
- "movb (%[src]), %%al\n\t" \
- "movb %%al, (%[dst])\n\t" \
- "inc %[src]\n\t" \
- "inc %[dst]\n\t" \
- "dec %[len]\n\t" \
- "jnz 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- "movl %[newv], %%eax\n\t"
- /* final store */
- "movl %%eax, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t"
- RSEQ_ASM_DEFINE_ABORT(4,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- abort)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- error2)
-#endif
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [expect] "m" (expect),
- [newv] "m" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
-
-/* TODO: implement a faster memcpy. */
-static inline __attribute__((always_inline))
-int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect,
- void *dst, void *src, size_t len,
- intptr_t newv, int cpu)
-{
- uint32_t rseq_scratch[3];
-
- RSEQ_INJECT_C(9)
+#define RSEQ_TEMPLATE_MO_RELEASE
+#include "rseq-x86-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELEASE
+#undef RSEQ_TEMPLATE_VM_VCPU_ID
- __asm__ __volatile__ goto (
- RSEQ_ASM_DEFINE_TABLE(3, 1f, 2f, 4f) /* start, commit, abort */
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail])
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1])
- RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2])
-#endif
- "movl %[src], %[rseq_scratch0]\n\t"
- "movl %[dst], %[rseq_scratch1]\n\t"
- "movl %[len], %[rseq_scratch2]\n\t"
- /* Start rseq by storing table entry pointer into rseq_cs. */
- RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset]))
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f)
- RSEQ_INJECT_ASM(3)
- "movl %[expect], %%eax\n\t"
- "cmpl %%eax, %[v]\n\t"
- "jnz 5f\n\t"
- RSEQ_INJECT_ASM(4)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f)
- "movl %[expect], %%eax\n\t"
- "cmpl %%eax, %[v]\n\t"
- "jnz 7f\n\t"
-#endif
- /* try memcpy */
- "test %[len], %[len]\n\t" \
- "jz 333f\n\t" \
- "222:\n\t" \
- "movb (%[src]), %%al\n\t" \
- "movb %%al, (%[dst])\n\t" \
- "inc %[src]\n\t" \
- "inc %[dst]\n\t" \
- "dec %[len]\n\t" \
- "jnz 222b\n\t" \
- "333:\n\t" \
- RSEQ_INJECT_ASM(5)
- "lock; addl $0,-128(%%esp)\n\t"
- "movl %[newv], %%eax\n\t"
- /* final store */
- "movl %%eax, %[v]\n\t"
- "2:\n\t"
- RSEQ_INJECT_ASM(6)
- /* teardown */
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t"
- RSEQ_ASM_DEFINE_ABORT(4,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- abort)
- RSEQ_ASM_DEFINE_CMPFAIL(5,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- cmpfail)
-#ifdef RSEQ_COMPARE_TWICE
- RSEQ_ASM_DEFINE_CMPFAIL(6,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- error1)
- RSEQ_ASM_DEFINE_CMPFAIL(7,
- "movl %[rseq_scratch2], %[len]\n\t"
- "movl %[rseq_scratch1], %[dst]\n\t"
- "movl %[rseq_scratch0], %[src]\n\t",
- error2)
-#endif
- : /* gcc asm goto does not allow outputs */
- : [cpu_id] "r" (cpu),
- [rseq_offset] "r" (rseq_offset),
- /* final store input */
- [v] "m" (*v),
- [expect] "m" (expect),
- [newv] "m" (newv),
- /* try memcpy input */
- [dst] "r" (dst),
- [src] "r" (src),
- [len] "r" (len),
- [rseq_scratch0] "m" (rseq_scratch[0]),
- [rseq_scratch1] "m" (rseq_scratch[1]),
- [rseq_scratch2] "m" (rseq_scratch[2])
- : "memory", "cc", "eax"
- RSEQ_INJECT_CLOBBER
- : abort, cmpfail
-#ifdef RSEQ_COMPARE_TWICE
- , error1, error2
-#endif
- );
- rseq_after_asm_goto();
- return 0;
-abort:
- rseq_after_asm_goto();
- RSEQ_INJECT_FAILED
- return -1;
-cmpfail:
- rseq_after_asm_goto();
- return 1;
-#ifdef RSEQ_COMPARE_TWICE
-error1:
- rseq_after_asm_goto();
- rseq_bug("cpu_id comparison failed");
-error2:
- rseq_after_asm_goto();
- rseq_bug("expected value comparison failed");
-#endif
-}
+/* APIs which are not based on cpu ids. */
-#endif
+#define RSEQ_TEMPLATE_CPU_ID_NONE
+#define RSEQ_TEMPLATE_MO_RELAXED
+#include "rseq-x86-bits.h"
+#undef RSEQ_TEMPLATE_MO_RELAXED
+#undef RSEQ_TEMPLATE_CPU_ID_NONE
diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
index 003e0e3750ce..95a76a1c3b27 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -74,6 +74,20 @@ extern unsigned int rseq_flags;
*/
extern unsigned int rseq_feature_size;
+enum rseq_mo {
+ RSEQ_MO_RELAXED = 0,
+ RSEQ_MO_CONSUME = 1, /* Unused */
+ RSEQ_MO_ACQUIRE = 2, /* Unused */
+ RSEQ_MO_RELEASE = 3,
+ RSEQ_MO_ACQ_REL = 4, /* Unused */
+ RSEQ_MO_SEQ_CST = 5, /* Unused */
+};
+
+enum rseq_percpu_mode {
+ RSEQ_PERCPU_CPU_ID = 0,
+ RSEQ_PERCPU_VM_VCPU_ID = 1,
+};
+
static inline struct rseq_abi *rseq_get_abi(void)
{
return (struct rseq_abi *) ((uintptr_t) rseq_thread_pointer() + rseq_offset);
@@ -222,4 +236,149 @@ static inline void rseq_prepare_unload(void)
rseq_clear_rseq_cs();
}
+static inline __attribute__((always_inline))
+int rseq_cmpeqv_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t expect,
+ intptr_t newv, int cpu)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_storev_relaxed_cpu_id(v, expect, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_storev_relaxed_vm_vcpu_id(v, expect, newv, cpu);
+ }
+ return -1;
+}
+
+/*
+ * Compare @v against @expectnot. When it does _not_ match, load @v
+ * into @load, and store the content of *@v + voffp into @v.
+ */
+static inline __attribute__((always_inline))
+int rseq_cmpnev_storeoffp_load(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t expectnot, long voffp, intptr_t *load,
+ int cpu)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpnev_storeoffp_load_relaxed_cpu_id(v, expectnot, voffp, load, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpnev_storeoffp_load_relaxed_vm_vcpu_id(v, expectnot, voffp, load, cpu);
+ }
+ return -1;
+}
+
+static inline __attribute__((always_inline))
+int rseq_addv(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t count, int cpu)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_addv_relaxed_cpu_id(v, count, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_addv_relaxed_vm_vcpu_id(v, count, cpu);
+ }
+ return -1;
+}
+
+#ifdef RSEQ_ARCH_HAS_OFFSET_DEREF_ADDV
+/*
+ * pval = *(ptr+off)
+ * *pval += inc;
+ */
+static inline __attribute__((always_inline))
+int rseq_offset_deref_addv(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *ptr, long off, intptr_t inc, int cpu)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_offset_deref_addv_relaxed_cpu_id(ptr, off, inc, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_offset_deref_addv_relaxed_vm_vcpu_id(ptr, off, inc, cpu);
+ }
+ return -1;
+}
+#endif
+
+static inline __attribute__((always_inline))
+int rseq_cmpeqv_trystorev_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t newv2,
+ intptr_t newv, int cpu)
+{
+ switch (rseq_mo) {
+ case RSEQ_MO_RELAXED:
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_trystorev_storev_relaxed_cpu_id(v, expect, v2, newv2, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_trystorev_storev_relaxed_vm_vcpu_id(v, expect, v2, newv2, newv, cpu);
+ }
+ return -1;
+ case RSEQ_MO_RELEASE:
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_trystorev_storev_release_cpu_id(v, expect, v2, newv2, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_trystorev_storev_release_vm_vcpu_id(v, expect, v2, newv2, newv, cpu);
+ }
+ return -1;
+ default:
+ return -1;
+ }
+}
+
+static inline __attribute__((always_inline))
+int rseq_cmpeqv_cmpeqv_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t expect,
+ intptr_t *v2, intptr_t expect2,
+ intptr_t newv, int cpu)
+{
+ if (rseq_mo != RSEQ_MO_RELAXED)
+ return -1;
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_cmpeqv_storev_relaxed_cpu_id(v, expect, v2, expect2, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_cmpeqv_storev_relaxed_vm_vcpu_id(v, expect, v2, expect2, newv, cpu);
+ }
+ return -1;
+}
+
+static inline __attribute__((always_inline))
+int rseq_cmpeqv_trymemcpy_storev(enum rseq_mo rseq_mo, enum rseq_percpu_mode percpu_mode,
+ intptr_t *v, intptr_t expect,
+ void *dst, void *src, size_t len,
+ intptr_t newv, int cpu)
+{
+ switch (rseq_mo) {
+ case RSEQ_MO_RELAXED:
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_trymemcpy_storev_relaxed_cpu_id(v, expect, dst, src, len, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_trymemcpy_storev_relaxed_vm_vcpu_id(v, expect, dst, src, len, newv, cpu);
+ }
+ return -1;
+ case RSEQ_MO_RELEASE:
+ switch (percpu_mode) {
+ case RSEQ_PERCPU_CPU_ID:
+ return rseq_cmpeqv_trymemcpy_storev_release_cpu_id(v, expect, dst, src, len, newv, cpu);
+ case RSEQ_PERCPU_VM_VCPU_ID:
+ return rseq_cmpeqv_trymemcpy_storev_release_vm_vcpu_id(v, expect, dst, src, len, newv, cpu);
+ }
+ return -1;
+ default:
+ return -1;
+ }
+}
+
#endif /* RSEQ_H_ */
--
2.25.1
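
As a usage illustration of the dispatch helpers added to rseq.h above,
here is a minimal per-cpu counter increment. This is a sketch only, not
part of the patch: it assumes the thread has already called the
pre-existing selftest helpers rseq_register_current_thread() and
rseq_cpu_start(), and MAX_CPUS is an arbitrary illustrative bound.

  #include <stdint.h>
  #include "rseq.h"

  #define MAX_CPUS	4096		/* illustrative bound */

  static intptr_t counters[MAX_CPUS];	/* one slot per cpu id */

  static void percpu_inc(void)
  {
  	int ret;

  	do {
  		int cpu = rseq_cpu_start();

  		ret = rseq_addv(RSEQ_MO_RELAXED, RSEQ_PERCPU_CPU_ID,
  				&counters[cpu], 1, cpu);
  		/* Non-zero: cpu changed or the critical section aborted; retry. */
  	} while (ret);
  }

Passing RSEQ_PERCPU_VM_VCPU_ID instead indexes the array by the
per-memory-space vcpu id, which lets the array be sized to the number
of concurrently running threads rather than the number of possible CPUs.
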
Export the rseq feature size supported by the kernel as well as the
required allocation alignment for the rseq per-thread area to user-space
through ELF auxiliary vector entries.
This is part of the extensible rseq ABI.
Signed-off-by: Mathieu Desnoyers <[email protected]>
---
fs/binfmt_elf.c | 5 +++++
include/uapi/linux/auxvec.h | 2 ++
include/uapi/linux/rseq.h | 5 +++++
3 files changed, 12 insertions(+)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 63c7ebb0da89..04fca1e4cbd2 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -46,6 +46,7 @@
#include <linux/cred.h>
#include <linux/dax.h>
#include <linux/uaccess.h>
+#include <linux/rseq.h>
#include <asm/param.h>
#include <asm/page.h>
@@ -288,6 +289,10 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
if (bprm->have_execfd) {
NEW_AUX_ENT(AT_EXECFD, bprm->execfd);
}
+#ifdef CONFIG_RSEQ
+ NEW_AUX_ENT(AT_RSEQ_FEATURE_SIZE, offsetof(struct rseq, end));
+ NEW_AUX_ENT(AT_RSEQ_ALIGN, __alignof__(struct rseq));
+#endif
#undef NEW_AUX_ENT
/* AT_NULL is zero; clear the rest too */
memset(elf_info, 0, (char *)mm->saved_auxv +
diff --git a/include/uapi/linux/auxvec.h b/include/uapi/linux/auxvec.h
index c7e502bf5a6f..6991c4b8ab18 100644
--- a/include/uapi/linux/auxvec.h
+++ b/include/uapi/linux/auxvec.h
@@ -30,6 +30,8 @@
* differ from AT_PLATFORM. */
#define AT_RANDOM 25 /* address of 16 random bytes */
#define AT_HWCAP2 26 /* extension of AT_HWCAP */
+#define AT_RSEQ_FEATURE_SIZE 27 /* rseq supported feature size */
+#define AT_RSEQ_ALIGN 28 /* rseq allocation alignment */
#define AT_EXECFN 31 /* filename of program */
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index 77ee207623a9..05d3c4cdeb40 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -130,6 +130,11 @@ struct rseq {
* this thread.
*/
__u32 flags;
+
+ /*
+ * Flexible array member at end of structure, after last feature field.
+ */
+ char end[];
} __attribute__((aligned(4 * sizeof(__u64))));
#endif /* _UAPI_LINUX_RSEQ_H */
--
2.25.1
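For illustration, a user-space sketch of consuming the two new entries
through getauxval(3). The fallback values for kernels that predate this
patch are an assumption based on the original fixed-size rseq ABI:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef AT_RSEQ_FEATURE_SIZE
# define AT_RSEQ_FEATURE_SIZE	27
#endif
#ifndef AT_RSEQ_ALIGN
# define AT_RSEQ_ALIGN		28
#endif

int main(void)
{
	unsigned long feature_size = getauxval(AT_RSEQ_FEATURE_SIZE);
	unsigned long align = getauxval(AT_RSEQ_ALIGN);

	if (!feature_size) {
		/* Entry absent: assume the original 32-byte, 32-byte aligned ABI. */
		feature_size = 32;
		align = 32;
	}
	printf("rseq feature size: %lu bytes, alignment: %lu bytes\n",
	       feature_size, align);
	return 0;
}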
On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> diff --git a/fs/exec.c b/fs/exec.c
> index 349a5da91efe..93eb88f4053b 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1013,6 +1013,9 @@ static int exec_mmap(struct mm_struct *mm)
> tsk->active_mm = mm;
> tsk->mm = mm;
> lru_gen_add_mm(mm);
> + mm_init_vcpu_lock(mm);
> + mm_init_vcpumask(mm);
> + mm_init_node_vcpumask(mm);
> /*
> * This prevents preemption while active_mm is being loaded and
> * it and mm are being updated, which could cause problems for
> @@ -1150,6 +1154,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>
> mm->user_ns = get_user_ns(user_ns);
> lru_gen_init_mm(mm);
> + mm_init_vcpu_lock(mm);
> + mm_init_vcpumask(mm);
> + mm_init_node_vcpumask(mm);
> return mm;
>
> fail_nocontext:
Why isn't all that a single mm_init_vcpu(mm) or something ?
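I.e., a minimal sketch of the consolidation being asked for, keeping the
three existing helpers as they are:

static inline void mm_init_vcpu(struct mm_struct *mm)
{
	mm_init_vcpu_lock(mm);
	mm_init_vcpumask(mm);
	mm_init_node_vcpumask(mm);
}

with both call sites in exec_mmap() and mm_init() reduced to a single
mm_init_vcpu(mm) call.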
On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> The credit goes to Paul Turner (Google) for the vcpu_id idea. This
> feature is implemented based on the discussions with Paul Turner and
> Peter Oskolkov (Google), but I took the liberty to implement scheduler
> fast-path optimizations and my own NUMA-awareness scheme. The rumor has
> it that Google have been running a rseq vcpu_id extension internally at
> Google in production for a year. The tcmalloc source code indeed has
> comments hinting at a vcpu_id prototype extension to the rseq system
> call [1].
Re NUMA thing -- that means that on a 512 node system a single threaded
task can still observe 512 separate vcpu-ids, right?
Also, said space won't be dense.
The main selling point of the whole vcpu-id scheme was that the id space
is dense and not larger than min(nr_cpus, nr_threads), which then gives
useful properties.
But I'm not at all seeing how the NUMA thing preserves that.
Also; given the utter mind-bendiness of the NUMA thing; should it go
into its own patch; introduce the regular plain old vcpu first, and
then add things to it -- that also allows pushing those weird cpumask
ops you've created later into the series.
On 2022-11-08 08:04, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
>
>> The credit goes to Paul Turner (Google) for the vcpu_id idea. This
>> feature is implemented based on the discussions with Paul Turner and
>> Peter Oskolkov (Google), but I took the liberty to implement scheduler
>> fast-path optimizations and my own NUMA-awareness scheme. The rumor has
>> it that Google have been running a rseq vcpu_id extension internally at
>> Google in production for a year. The tcmalloc source code indeed has
>> comments hinting at a vcpu_id prototype extension to the rseq system
>> call [1].
>
> Re NUMA thing -- that means that on a 512 node system a single threaded
> task can still observe 512 separate vcpu-ids, right?
Yes, that's correct.
>
> Also, said space won't be dense.
Indeed, this can be inefficient if the data structure within the
single-threaded task is not NUMA-aware *and* that task is free to bounce
all over the 512 numa nodes.
>
> The main selling point of the whole vcpu-id scheme was that the id space
> is dense and not larger than min(nr_cpus, nr_threads), which then gives
> useful properties.
>
> But I'm not at all seeing how the NUMA thing preserves that.
If a userspace per-vcpu data structure is implemented with NUMA-local
allocations, then it becomes really interesting to guarantee that the
per-vcpu-id accesses are always numa-local for performance reasons.
If a userspace per-vcpu data structure is not numa-aware, then we have
two scenarios:
A) The cpuset/sched affinity under which it runs pins it to a set of
cores belonging to a specific NUMA node. In this case, even with
numa-aware vcpu id allocation, the ids will stay as close to 0 as if not
numa-aware.
B) No specific cpuset/sched affinity set, which means the task is free
to bounce all over. In this case I agree that having the indexing
numa-aware, but the per-vcpu data structure not numa-aware, is inefficient.
I wonder whether 512-node systems running containers on a few cores,
but without cpusets/sched affinity pinning the workload to specific
numa nodes, are a workload we should optimize for ? It looks like the
lack of numa locality there comes from not restricting the allowed
cores, which is a userspace configuration issue.
We also must keep in mind that we can expect a single task to load a mix
of executable/shared libraries where some pieces may be numa-aware, and
others may not. This means we should ideally support a numa-aware
vcpu-id allocation scheme and non-numa-aware vcpu-id allocation scheme
within the same task.
This could be achieved by exposing two struct rseq fields rather than
one, e.g.:
vm_vcpu_id -> flat indexing, not numa-aware.
vm_numa_vcpu_id -> numa-aware vcpu id indexing.
This would allow data structures that are inherently numa-aware to
benefit from numa-locality, without hurting non-numa-aware data structures.
>
> Also; given the utter mind-bendiness of the NUMA thing; should it go
> into its own patch; introduce the regular plain old vcpu first, and
> then add things to it -- that also allows pushing those weird cpumask
> ops you've created later into the series.
Good idea. I can do that once we agree on the way forward for flat vs
numa-aware vcpu-id rseq fields.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2022-11-08 08:00, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
>
>> diff --git a/fs/exec.c b/fs/exec.c
>> index 349a5da91efe..93eb88f4053b 100644
>> --- a/fs/exec.c
>> +++ b/fs/exec.c
>> @@ -1013,6 +1013,9 @@ static int exec_mmap(struct mm_struct *mm)
>> tsk->active_mm = mm;
>> tsk->mm = mm;
>> lru_gen_add_mm(mm);
>> + mm_init_vcpu_lock(mm);
>> + mm_init_vcpumask(mm);
>> + mm_init_node_vcpumask(mm);
>> /*
>> * This prevents preemption while active_mm is being loaded and
>> * it and mm are being updated, which could cause problems for
>
>> @@ -1150,6 +1154,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>>
>> mm->user_ns = get_user_ns(user_ns);
>> lru_gen_init_mm(mm);
>> + mm_init_vcpu_lock(mm);
>> + mm_init_vcpumask(mm);
>> + mm_init_node_vcpumask(mm);
>> return mm;
>>
>> fail_nocontext:
>
> Why isn't all that a single mm_init_vcpu(mm) or something ?
Good point, I'll update that.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Tue, Nov 08, 2022 at 03:07:42PM -0500, Mathieu Desnoyers wrote:
> On 2022-11-08 08:04, Peter Zijlstra wrote:
> > On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> >
> > > The credit goes to Paul Turner (Google) for the vcpu_id idea. This
> > > feature is implemented based on the discussions with Paul Turner and
> > > Peter Oskolkov (Google), but I took the liberty to implement scheduler
> > > fast-path optimizations and my own NUMA-awareness scheme. The rumor has
> > > it that Google have been running a rseq vcpu_id extension internally at
> > > Google in production for a year. The tcmalloc source code indeed has
> > > comments hinting at a vcpu_id prototype extension to the rseq system
> > > call [1].
> >
> > Re NUMA thing -- that means that on a 512 node system a single threaded
> > task can still observe 512 separate vcpu-ids, right?
>
> Yes, that's correct.
So,.. I've been thinking. How about we expose _two_ vcpu-ids :-)
One is the original, system wide min(nr_threads, nr_cpus) and the other
is a per-node vcpu-id. Not like you did, but min(nr_threads,
nr_cpus_per_node) like.
That way userspace can either use:
- rseq::cpu_id -- as it does today
- rseq::vcpu_id -- (when -1, see rseq::cpu_id) the original vcpu_id
proposal, which is typically good enough when your
application is not NUMA aware and relies on NUMA
balancing (most every application out there)
- rseq::node_id *and*
rseq::vcpu_node_id -- Combined it gives both node locality
and a *dense* space within the node.
This then lets the application pick whatever is best.
Note that using node affine memory allocations by default is a *very*
bad idea since it wrecks NUMA balancing; you really should only use this
if your application is fully NUMA aware and knows what it is doing.
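For illustration, a sketch of how user-space could pick an index under
this proposal. Field names follow the patch below and the rest of the
series (vm_vcpu_node_id exists only in this proposal, and whether these
fields all land in the selftests' rseq-abi.h is assumed here);
rseq_get_abi() is the selftests' accessor for the registered rseq area:

#include "rseq.h"

/* NUMA-oblivious callers: dense index, falling back to cpu_id. */
static inline unsigned int percpu_index(void)
{
	struct rseq_abi *r = rseq_get_abi();
	int vcpu_id = (int)r->vm_vcpu_id;

	if (vcpu_id < 0)	/* -1: vcpu-id not provided by this kernel */
		return r->cpu_id;
	return vcpu_id;
}

/* NUMA-aware callers: per-node pool selected by node_id, dense index
 * within that pool given by vm_vcpu_node_id. */
static inline void percpu_numa_index(unsigned int *node, unsigned int *idx)
{
	struct rseq_abi *r = rseq_get_abi();

	*node = r->node_id;
	*idx = r->vm_vcpu_node_id;
}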
---
include/linux/mm.h | 15 ++--
include/linux/mm_types.h | 23 +-----
include/linux/sched.h | 1
include/uapi/linux/rseq.h | 1
kernel/fork.c | 1
kernel/rseq.c | 9 ++
kernel/sched/core.c | 18 ++---
kernel/sched/sched.h | 157 +++++++++-------------------------------------
8 files changed, 66 insertions(+), 159 deletions(-)
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3484,6 +3484,10 @@ static inline int task_mm_vcpu_id(struct
{
return t->mm_vcpu;
}
+static inline int task_mm_vcpu_node_id(struct task_struct *t)
+{
+ return t->mm_vcpu_node;
+}
#else
static inline void sched_vcpu_before_execve(struct task_struct *t) { }
static inline void sched_vcpu_after_execve(struct task_struct *t) { }
@@ -3491,12 +3495,11 @@ static inline void sched_vcpu_fork(struc
static inline void sched_vcpu_exit_signals(struct task_struct *t) { }
static inline int task_mm_vcpu_id(struct task_struct *t)
{
- /*
- * Use the processor id as a fall-back when the mm vcpu feature is
- * disabled. This provides functional per-cpu data structure accesses
- * in user-space, althrough it won't provide the memory usage benefits.
- */
- return raw_smp_processor_id();
+ return -1; /* userspace should use cpu_id */
+}
+static inline int task_mm_vcpu_node_id(struct task_struct *t)
+{
+ return -1;
}
#endif
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -568,7 +568,7 @@ struct mm_struct {
* bitmap words as they were being concurrently updated by the
* updaters.
*/
- spinlock_t vcpu_lock;
+ raw_spinlock_t vcpu_lock;
#endif
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
@@ -875,28 +875,15 @@ static inline unsigned int mm_vcpumask_s
#if defined(CONFIG_SCHED_MM_VCPU) && defined(CONFIG_NUMA)
/*
- * Layout of node vcpumasks:
- * - node_alloc vcpumask: cpumask tracking which vcpu_id were
- * allocated (across nodes) in this
- * memory space.
* - node vcpumask[nr_node_ids]: per-node cpumask tracking which vcpu_id
* were allocated in this memory space.
*/
-static inline cpumask_t *mm_node_alloc_vcpumask(struct mm_struct *mm)
+static inline cpumask_t *mm_node_vcpumask(struct mm_struct *mm, unsigned int node)
{
unsigned long vcpu_bitmap = (unsigned long)mm_vcpumask(mm);
/* Skip mm_vcpumask */
vcpu_bitmap += cpumask_size();
- return (struct cpumask *)vcpu_bitmap;
-}
-
-static inline cpumask_t *mm_node_vcpumask(struct mm_struct *mm, unsigned int node)
-{
- unsigned long vcpu_bitmap = (unsigned long)mm_node_alloc_vcpumask(mm);
-
- /* Skip node alloc vcpumask */
- vcpu_bitmap += cpumask_size();
vcpu_bitmap += node * cpumask_size();
return (struct cpumask *)vcpu_bitmap;
}
@@ -907,21 +894,21 @@ static inline void mm_init_node_vcpumask
if (num_possible_nodes() == 1)
return;
- cpumask_clear(mm_node_alloc_vcpumask(mm));
+
for (node = 0; node < nr_node_ids; node++)
cpumask_clear(mm_node_vcpumask(mm, node));
}
static inline void mm_init_vcpu_lock(struct mm_struct *mm)
{
- spin_lock_init(&mm->vcpu_lock);
+ raw_spin_lock_init(&mm->vcpu_lock);
}
static inline unsigned int mm_node_vcpumask_size(void)
{
if (num_possible_nodes() == 1)
return 0;
- return (nr_node_ids + 1) * cpumask_size();
+ return nr_node_ids * cpumask_size();
}
#else
static inline void mm_init_node_vcpumask(struct mm_struct *mm) { }
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1316,6 +1316,7 @@ struct task_struct {
#ifdef CONFIG_SCHED_MM_VCPU
int mm_vcpu; /* Current vcpu in mm */
+ int mm_node_vcpu; /* Current vcpu for this node */
int mm_vcpu_active; /* Whether vcpu bitmap is active */
#endif
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -147,6 +147,7 @@ struct rseq {
* (allocated uniquely within a memory space).
*/
__u32 vm_vcpu_id;
+ __u32 vm_vcpu_node_id;
/*
* Flexible array member at end of structure, after last feature field.
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1049,6 +1049,7 @@ static struct task_struct *dup_task_stru
#ifdef CONFIG_SCHED_MM_VCPU
tsk->mm_vcpu = -1;
+ tsk->mm_vcpu_node = -1;
tsk->mm_vcpu_active = 0;
#endif
return tsk;
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -91,6 +91,7 @@ static int rseq_update_cpu_node_id(struc
u32 cpu_id = raw_smp_processor_id();
u32 node_id = cpu_to_node(cpu_id);
u32 vm_vcpu_id = task_mm_vcpu_id(t);
+ u32 vm_vcpu_node_id = task_mm_vcpu_node_id(t);
WARN_ON_ONCE((int) vm_vcpu_id < 0);
if (!user_write_access_begin(rseq, t->rseq_len))
@@ -99,6 +100,7 @@ static int rseq_update_cpu_node_id(struc
unsafe_put_user(cpu_id, &rseq->cpu_id, efault_end);
unsafe_put_user(node_id, &rseq->node_id, efault_end);
unsafe_put_user(vm_vcpu_id, &rseq->vm_vcpu_id, efault_end);
+ unsafe_put_user(vm_vcpu_node_id, &rseq->vm_vcpu_node_id, efault_end);
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally updated only if
@@ -116,8 +118,8 @@ static int rseq_update_cpu_node_id(struc
static int rseq_reset_rseq_cpu_node_id(struct task_struct *t)
{
- u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED, node_id = 0,
- vm_vcpu_id = 0;
+ u32 cpu_id_start = 0, cpu_id = RSEQ_CPU_ID_UNINITIALIZED;
+ u32 node_id = 0, vm_vcpu_id = -1, vm_vcpu_node_id = -1;
/*
* Reset cpu_id_start to its initial state (0).
@@ -141,6 +143,9 @@ static int rseq_reset_rseq_cpu_node_id(s
*/
if (put_user(vm_vcpu_id, &t->rseq->vm_vcpu_id))
return -EFAULT;
+
+ if (put_user(vm_vcpu_node_id, &t->rseq->vm_vcpu_node_id))
+ return -EFAULT;
/*
* Additional feature fields added after ORIG_RSEQ_SIZE
* need to be conditionally reset only if
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11278,15 +11278,13 @@ void call_trace_sched_update_nr_running(
#ifdef CONFIG_SCHED_MM_VCPU
void sched_vcpu_exit_signals(struct task_struct *t)
{
- struct mm_struct *mm = t->mm;
unsigned long flags;
- if (!mm)
+ if (!t->mm)
return;
+
local_irq_save(flags);
- mm_vcpu_put(mm, t->mm_vcpu);
- t->mm_vcpu = -1;
- t->mm_vcpu_active = 0;
+ mm_vcpu_put(t);
local_irq_restore(flags);
}
@@ -11297,10 +11295,9 @@ void sched_vcpu_before_execve(struct tas
if (!mm)
return;
+
local_irq_save(flags);
- mm_vcpu_put(mm, t->mm_vcpu);
- t->mm_vcpu = -1;
- t->mm_vcpu_active = 0;
+ mm_vcpu_put(t);
local_irq_restore(flags);
}
@@ -11312,9 +11309,9 @@ void sched_vcpu_after_execve(struct task
WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
local_irq_save(flags);
- t->mm_vcpu = mm_vcpu_get(mm);
- t->mm_vcpu_active = 1;
+ mm_vcpu_get(t);
local_irq_restore(flags);
+
rseq_set_notify_resume(t);
}
@@ -11322,6 +11319,7 @@ void sched_vcpu_fork(struct task_struct
{
WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
t->mm_vcpu = -1;
+ t->mm_vcpu_node = -1;
t->mm_vcpu_active = 1;
}
#endif
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3262,147 +3262,57 @@ static inline void update_current_exec_r
}
#ifdef CONFIG_SCHED_MM_VCPU
-static inline int __mm_vcpu_get_single_node(struct mm_struct *mm)
+static inline int __mm_vcpu_get(struct cpumask *cpumask)
{
- struct cpumask *cpumask;
int vcpu;
- cpumask = mm_vcpumask(mm);
vcpu = cpumask_first_zero(cpumask);
- if (vcpu >= nr_cpu_ids)
+ if (WARN_ON_ONCE(vcpu >= nr_cpu_ids))
return -1;
+
__cpumask_set_cpu(vcpu, cpumask);
return vcpu;
}
-#ifdef CONFIG_NUMA
-static inline bool mm_node_vcpumask_test_cpu(struct mm_struct *mm, int vcpu_id)
-{
- if (num_possible_nodes() == 1)
- return true;
- return cpumask_test_cpu(vcpu_id, mm_node_vcpumask(mm, numa_node_id()));
-}
-static inline int __mm_vcpu_get(struct mm_struct *mm)
+static inline void __mm_vcpu_put(struct task_struct *p, bool clear_active)
{
- struct cpumask *cpumask = mm_vcpumask(mm),
- *node_cpumask = mm_node_vcpumask(mm, numa_node_id()),
- *node_alloc_cpumask = mm_node_alloc_vcpumask(mm);
- unsigned int node;
- int vcpu;
-
- if (num_possible_nodes() == 1)
- return __mm_vcpu_get_single_node(mm);
-
- /*
- * Try to reserve lowest available vcpu number within those already
- * reserved for this NUMA node.
- */
- vcpu = cpumask_first_andnot(node_cpumask, cpumask);
- if (vcpu >= nr_cpu_ids)
- goto alloc_numa;
- __cpumask_set_cpu(vcpu, cpumask);
- goto end;
+ lockdep_assert_irqs_disabled();
+ WARN_ON_ONCE(p->mm_vcpu < 0);
-alloc_numa:
- /*
- * Try to reserve lowest available vcpu number within those not already
- * allocated for numa nodes.
- */
- vcpu = cpumask_first_notandnot(node_alloc_cpumask, cpumask);
- if (vcpu >= nr_cpu_ids)
- goto numa_update;
- __cpumask_set_cpu(vcpu, cpumask);
- __cpumask_set_cpu(vcpu, node_cpumask);
- __cpumask_set_cpu(vcpu, node_alloc_cpumask);
- goto end;
-
-numa_update:
- /*
- * NUMA node id configuration changed for at least one CPU in the system.
- * We need to steal a currently unused vcpu_id from an overprovisioned
- * node for our current node. Userspace must handle the fact that the
- * node id associated with this vcpu_id may change due to node ID
- * reconfiguration.
- *
- * Count how many possible cpus are attached to each (other) node id,
- * and compare this with the per-mm node vcpumask cpu count. Find one
- * which has too many cpus in its mask to steal from.
- */
- for (node = 0; node < nr_node_ids; node++) {
- struct cpumask *iter_cpumask;
-
- if (node == numa_node_id())
- continue;
- iter_cpumask = mm_node_vcpumask(mm, node);
- if (nr_cpus_node(node) < cpumask_weight(iter_cpumask)) {
- /* Try to steal from this node. */
- vcpu = cpumask_first_andnot(iter_cpumask, cpumask);
- if (vcpu >= nr_cpu_ids)
- goto steal_fail;
- __cpumask_set_cpu(vcpu, cpumask);
- __cpumask_clear_cpu(vcpu, iter_cpumask);
- __cpumask_set_cpu(vcpu, node_cpumask);
- goto end;
- }
- }
+ raw_spin_lock(&mm->vcpu_lock);
-steal_fail:
- /*
- * Our attempt at gracefully stealing a vcpu_id from another
- * overprovisioned NUMA node failed. Fallback to grabbing the first
- * available vcpu_id.
- */
- vcpu = cpumask_first_zero(cpumask);
- if (vcpu >= nr_cpu_ids)
- return -1;
- __cpumask_set_cpu(vcpu, cpumask);
- /* Steal vcpu from its numa node mask. */
- for (node = 0; node < nr_node_ids; node++) {
- struct cpumask *iter_cpumask;
-
- if (node == numa_node_id())
- continue;
- iter_cpumask = mm_node_vcpumask(mm, node);
- if (cpumask_test_cpu(vcpu, iter_cpumask)) {
- __cpumask_clear_cpu(vcpu, iter_cpumask);
- break;
- }
- }
- __cpumask_set_cpu(vcpu, node_cpumask);
-end:
- return vcpu;
-}
+ __cpumask_clear_cpu(p->mm_vcpu, mm_vcpumask(mm));
+ p->mm_vcpu = -1;
+#ifdef CONFIG_NUMA
+ __cpumask_clear_cpu(p->mm_vcpu_node, mm_node_vcpumask(mm, task_node(p)));
+ p->mm_vcpu_node = -1;
+#endif
+ if (clear_active)
+ p->mm_vcpu_active = 0;
-#else
-static inline bool mm_node_vcpumask_test_cpu(struct mm_struct *mm, int vcpu_id)
-{
- return true;
-}
-static inline int __mm_vcpu_get(struct mm_struct *mm)
-{
- return __mm_vcpu_get_single_node(mm);
+ raw_spin_unlock(&mm->vcpu_lock);
}
-#endif
-static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
+static inline void mm_vcpu_put(struct task_struct *p, bool clear_active)
{
- lockdep_assert_irqs_disabled();
- if (vcpu < 0)
- return;
- spin_lock(&mm->vcpu_lock);
- __cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
- spin_unlock(&mm->vcpu_lock);
+ __mm_vcpu_put(p, true);
}
-static inline int mm_vcpu_get(struct mm_struct *mm)
+static inline void mm_vcpu_get(struct task_struct *p)
{
int ret;
lockdep_assert_irqs_disabled();
- spin_lock(&mm->vcpu_lock);
- ret = __mm_vcpu_get(mm);
- spin_unlock(&mm->vcpu_lock);
+ WARN_ON_ONCE(p->mm_vcpu > 0);
+
+ raw_spin_lock(&mm->vcpu_lock);
+ p->mm_vcpu = __mm_vcpu_get(mm_vcpumask(mm));
+#ifdef CONFIG_NUMA
+ p->mm_vcpu_node = __mm_vcpu_get(mm_node_vcpumask(mm, task_node(p)));
+#endif
+ p->mm_vcpu_active = 1;
+ raw_spin_unlock(&mm->vcpu_lock);
return ret;
}
@@ -3414,15 +3324,16 @@ static inline void switch_mm_vcpu(struct
* Context switch between threads in same mm, hand over
* the mm_vcpu from prev to next.
*/
- next->mm_vcpu = prev->mm_vcpu;
- prev->mm_vcpu = -1;
+ swap(next->mm_vcpu, prev->mm_vcpu);
+#ifdef CONFIG_NUMA
+ swap(next->mm_vcpu_node, prev->mm_vcpu_node);
+#endif
return;
}
- mm_vcpu_put(prev->mm, prev->mm_vcpu);
- prev->mm_vcpu = -1;
+ __mm_vcpu_put(prev, false);
}
if (next->mm_vcpu_active)
- next->mm_vcpu = mm_vcpu_get(next->mm);
+ mm_vcpu_get(next);
}
#else
On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> +void sched_vcpu_exit_signals(struct task_struct *t)
> +{
> + struct mm_struct *mm = t->mm;
> + unsigned long flags;
> +
> + if (!mm)
> + return;
> + local_irq_save(flags);
> + mm_vcpu_put(mm, t->mm_vcpu);
> + t->mm_vcpu = -1;
> + t->mm_vcpu_active = 0;
> + local_irq_restore(flags);
> +}
> +
> +void sched_vcpu_before_execve(struct task_struct *t)
> +{
> + struct mm_struct *mm = t->mm;
> + unsigned long flags;
> +
> + if (!mm)
> + return;
> + local_irq_save(flags);
> + mm_vcpu_put(mm, t->mm_vcpu);
> + t->mm_vcpu = -1;
> + t->mm_vcpu_active = 0;
> + local_irq_restore(flags);
> +}
> +
> +void sched_vcpu_after_execve(struct task_struct *t)
> +{
> + struct mm_struct *mm = t->mm;
> + unsigned long flags;
> +
> + WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
> +
> + local_irq_save(flags);
> + t->mm_vcpu = mm_vcpu_get(mm);
> + t->mm_vcpu_active = 1;
> + local_irq_restore(flags);
> + rseq_set_notify_resume(t);
> +}
> +static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
> +{
> + lockdep_assert_irqs_disabled();
> + if (vcpu < 0)
> + return;
> + spin_lock(&mm->vcpu_lock);
> + __cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
> + spin_unlock(&mm->vcpu_lock);
> +}
> +
> +static inline int mm_vcpu_get(struct mm_struct *mm)
> +{
> + int ret;
> +
> + lockdep_assert_irqs_disabled();
> + spin_lock(&mm->vcpu_lock);
> + ret = __mm_vcpu_get(mm);
> + spin_unlock(&mm->vcpu_lock);
> + return ret;
> +}
This:
local_irq_disable()
spin_lock()
thing is a PREEMPT_RT anti-pattern.
At the very very least this should then be raw_spin_lock(), not in the
least because you're calling this from under rq->lock, which itself is a
raw_spin_lock_t.
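For context: on PREEMPT_RT, spinlock_t is a sleeping lock (rtmutex
based), so it may not be taken with interrupts disabled nor while
holding a raw_spinlock_t such as rq->lock; raw_spinlock_t keeps real
spinning semantics in both configurations. A sketch of the minimal
change to the helper quoted above, with mm_struct::vcpu_lock declared
raw_spinlock_t and initialized with raw_spin_lock_init():

static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
{
	lockdep_assert_irqs_disabled();
	if (vcpu < 0)
		return;
	raw_spin_lock(&mm->vcpu_lock);
	__cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
	raw_spin_unlock(&mm->vcpu_lock);
}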
On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 08969f5aa38d..6a2323266942 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1047,6 +1047,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
> tsk->reported_split_lock = 0;
> #endif
>
> +#ifdef CONFIG_SCHED_MM_VCPU
> + tsk->mm_vcpu = -1;
> + tsk->mm_vcpu_active = 0;
> +#endif
> return tsk;
>
> free_stack:
Note how the above hunk does exactly the same as the below thunk, and I
think they're even on the same code-path.
How about moving all of this to __sched_fork() or something?
> @@ -1579,6 +1586,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
>
> tsk->mm = mm;
> tsk->active_mm = mm;
> + sched_vcpu_fork(tsk);
> return 0;
> }
> +void sched_vcpu_fork(struct task_struct *t)
> +{
> + WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
> + t->mm_vcpu = -1;
> + t->mm_vcpu_active = 1;
> +}
On 2022-11-09 04:42, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
>
>> +void sched_vcpu_exit_signals(struct task_struct *t)
>> +{
>> + struct mm_struct *mm = t->mm;
>> + unsigned long flags;
>> +
>> + if (!mm)
>> + return;
>> + local_irq_save(flags);
>> + mm_vcpu_put(mm, t->mm_vcpu);
>> + t->mm_vcpu = -1;
>> + t->mm_vcpu_active = 0;
>> + local_irq_restore(flags);
>> +}
>> +
>> +void sched_vcpu_before_execve(struct task_struct *t)
>> +{
>> + struct mm_struct *mm = t->mm;
>> + unsigned long flags;
>> +
>> + if (!mm)
>> + return;
>> + local_irq_save(flags);
>> + mm_vcpu_put(mm, t->mm_vcpu);
>> + t->mm_vcpu = -1;
>> + t->mm_vcpu_active = 0;
>> + local_irq_restore(flags);
>> +}
>> +
>> +void sched_vcpu_after_execve(struct task_struct *t)
>> +{
>> + struct mm_struct *mm = t->mm;
>> + unsigned long flags;
>> +
>> + WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
>> +
>> + local_irq_save(flags);
>> + t->mm_vcpu = mm_vcpu_get(mm);
>> + t->mm_vcpu_active = 1;
>> + local_irq_restore(flags);
>> + rseq_set_notify_resume(t);
>> +}
>
>> +static inline void mm_vcpu_put(struct mm_struct *mm, int vcpu)
>> +{
>> + lockdep_assert_irqs_disabled();
>> + if (vcpu < 0)
>> + return;
>> + spin_lock(&mm->vcpu_lock);
>> + __cpumask_clear_cpu(vcpu, mm_vcpumask(mm));
>> + spin_unlock(&mm->vcpu_lock);
>> +}
>> +
>> +static inline int mm_vcpu_get(struct mm_struct *mm)
>> +{
>> + int ret;
>> +
>> + lockdep_assert_irqs_disabled();
>> + spin_lock(&mm->vcpu_lock);
>> + ret = __mm_vcpu_get(mm);
>> + spin_unlock(&mm->vcpu_lock);
>> + return ret;
>> +}
>
>
> This:
>
> local_irq_disable()
> spin_lock()
>
> thing is a PREEMPT_RT anti-pattern.
>
> At the very very least this should then be raw_spin_lock(), not in the
> least because you're calling this from under rq->lock, which itself is a
> raw_spin_lock_t.
Very good point, will fix using raw_spinlock_t.
Thanks,
Mathieu
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2022-11-09 04:28, Peter Zijlstra wrote:
> On Thu, Nov 03, 2022 at 04:03:43PM -0400, Mathieu Desnoyers wrote:
>
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index 08969f5aa38d..6a2323266942 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -1047,6 +1047,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
>> tsk->reported_split_lock = 0;
>> #endif
>>
>> +#ifdef CONFIG_SCHED_MM_VCPU
>> + tsk->mm_vcpu = -1;
>> + tsk->mm_vcpu_active = 0;
>> +#endif
>> return tsk;
>>
>> free_stack:
>
> Note how the above hunk does exactly the same as the below thunk, and I
> think they're even on the same code-path.
>
> How about moving all of this to __sched_fork() or something?
>
>> @@ -1579,6 +1586,7 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk)
>>
>> tsk->mm = mm;
>> tsk->active_mm = mm;
>> + sched_vcpu_fork(tsk);
>> return 0;
>> }
>
>> +void sched_vcpu_fork(struct task_struct *t)
>> +{
>> + WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
>> + t->mm_vcpu = -1;
>> + t->mm_vcpu_active = 1;
>> +}
Let's look at how things are brought up in copy_process():
p = dup_task_struct(current, node);
tsk->mm_vcpu = -1;
tsk->mm_vcpu_active = 0;
[...]
/* Perform scheduler related setup. Assign this task to a CPU. */
retval = sched_fork(clone_flags, p);
-> I presume that from this point the task is observable by the scheduler. However, tsk->mm does not point to the new mm yet.
The "mm_vcpu_active" flag == 0 prevents the scheduler from trying to poke into the wrong mm vcpu_id bitmaps across early mm-struct
lifetime (clone/fork), late in the mm-struct lifetime (exit), and across reset of the mm-struct (exec).
[...]
retval = copy_mm(clone_flags, p);
new_mm = dup_mm(old_mm)
mm_init(new_mm)
mm_init_vcpu(new_mm)
-> At this point it becomes OK for the scheduler to poke into the tsk->mm vcpu_id bitmaps. Therefore, sched_vcpu_fork() sets
mm_vcpu_active flag = 1.
sched_vcpu_fork(tsk)
-> From this point the scheduler can poke into tsk->mm's vcpu_id bitmaps.
So what I think we should do here is to remove the extra "t->mm_vcpu = -1;" assignment from sched_vcpu_fork(), because
it has already been set by dup_task_struct. We could actually turn that into a "WARN_ON_ONCE(t->mm_vcpu != -1)".
However, if my understanding is correct, keeping "tsk->mm_vcpu_active = 0;" early in dup_task_struct and "tsk->mm_vcpu_active = 1"
in sched_vcpu_fork() after the new mm is initialized is really important. Or am I missing something ?
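Concretely, sched_vcpu_fork() would then look like this (sketch):

void sched_vcpu_fork(struct task_struct *t)
{
	WARN_ON_ONCE((t->flags & PF_KTHREAD) || !t->mm);
	WARN_ON_ONCE(t->mm_vcpu != -1);	/* already set by dup_task_struct() */
	t->mm_vcpu_active = 1;
}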
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
<[email protected]> wrote:
>
> This feature allows the scheduler to expose a current virtual cpu id
> to user-space. This virtual cpu id is within the possible cpus range,
> and is temporarily (and uniquely) assigned while threads are actively
> running within a memory space. If a memory space has fewer threads than
> cores, or is limited to run on few cores concurrently through sched
> affinity or cgroup cpusets, the virtual cpu ids will be values close
> to 0, thus allowing efficient use of user-space memory for per-cpu
> data structures.
>
Just to check, is a "memory space" an mm? I've heard these called
"mms" or sometimes (mostly accurately) "processes" but never memory
spaces. Although I guess the clone(2) manpage says "memory space".
Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
"compacted cpu" or something? It's a strange sort of concept.
On 2022-11-10 23:41, Andy Lutomirski wrote:
> On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
> <[email protected]> wrote:
>>
>> This feature allows the scheduler to expose a current virtual cpu id
>> to user-space. This virtual cpu id is within the possible cpus range,
>> and is temporarily (and uniquely) assigned while threads are actively
>> running within a memory space. If a memory space has fewer threads than
>> cores, or is limited to run on few cores concurrently through sched
>> affinity or cgroup cpusets, the virtual cpu ids will be values close
>> to 0, thus allowing efficient use of user-space memory for per-cpu
>> data structures.
>>
>
> Just to check, is a "memory space" an mm? I've heard these called
> "mms" or sometimes (mostly accurately) "processes" but never memory
> spaces. Although I guess the clone(2) manpage says "memory space".
Yes, exactly.
I've had a hard time finding the right word there to describe the
concept of a struct mm from a user-space point of view, and ended up
finding that the clone(2) man page expresses the result of a clone
system call with CLONE_VM set as sharing a "memory space", aka a mm_struct.
From an internal kernel implementation perspective it is usually
referred to as a "mm", but it's not a notion that appears to be exposed
to user-space.
And unfortunately "process" can mean so many things other than a struct
mm: is it a thread group ? Or just a group of processes sharing a file
descriptor table ? Or sharing signal handlers ? What would you call a
thread group (clone with CLONE_THREAD) that does not have CLONE_VM set ?
>
> Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
> "compacted cpu" or something? It's a strange sort of concept.
I've kept the same wording that has been introduced in 2011 by Paul
Turner and used internally at Google since then, although it may be
confusing if people expect kvm-vCPU and rseq-vcpu to mean the same
thing. Both really end up providing the semantic of a virtually assigned
cpu id (in opposition to the logical cpu id on the system), but this is
much more involved in the case of KVM.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
> On 2022-11-10 23:41, Andy Lutomirski wrote:
> > On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
> > <[email protected]> wrote:
> > Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
> > "compacted cpu" or something? It's a strange sort of concept.
>
> I've kept the same wording that has been introduced in 2011 by Paul Turner
> and used internally at Google since then, although it may be confusing if
> people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
> up providing the semantic of a virtually assigned cpu id (in opposition to
> the logical cpu id on the system), but this is much more involved in the
> case of KVM.
I had the same reaction as Andy. The rseq concepts don't worry me so much as the
existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
On 2022-11-14 15:49, Sean Christopherson wrote:
> On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
>> On 2022-11-10 23:41, Andy Lutomirski wrote:
>>> On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
>>> <[email protected]> wrote:
>>> Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
>>> "compacted cpu" or something? It's a strange sort of concept.
>>
>> I've kept the same wording that has been introduced in 2011 by Paul Turner
>> and used internally at Google since then, although it may be confusing if
>> people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
>> up providing the semantic of a virtually assigned cpu id (in opposition to
>> the logical cpu id on the system), but this is much more involved in the
>> case of KVM.
>
> I had the same reaction as Andy. The rseq concepts don't worry me so much as the
> existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
> between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
> and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
I'm fine with changing the wording if it helps make things less confusing.
Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
> On 2022-11-14 15:49, Sean Christopherson wrote:
> > On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
> > > On 2022-11-10 23:41, Andy Lutomirski wrote:
> > > > On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
> > > > <[email protected]> wrote:
> > > > Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
> > > > "compacted cpu" or something? It's a strange sort of concept.
> > >
> > > I've kept the same wording that has been introduced in 2011 by Paul Turner
> > > and used internally at Google since then, although it may be confusing if
> > > people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
> > > up providing the semantic of a virtually assigned cpu id (in opposition to
> > > the logical cpu id on the system), but this is much more involved in the
> > > case of KVM.
> >
> > I had the same reaction as Andy. The rseq concepts don't worry me so much as the
> > existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
> > between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
> > and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
>
> I'm fine with changing the wording if it helps make things less confusing.
>
> Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
What about something like "process-local-cpu-id" to capture that the ID has meaning
only within the associated address space / process?
On 2022-11-17 14:10, Sean Christopherson wrote:
> On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
>> On 2022-11-14 15:49, Sean Christopherson wrote:
>>> On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
>>>> On 2022-11-10 23:41, Andy Lutomirski wrote:
>>>>> On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
>>>>> <[email protected]> wrote:
>>>>> Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
>>>>> "compacted cpu" or something? It's a strange sort of concept.
>>>>
>>>> I've kept the same wording that has been introduced in 2011 by Paul Turner
>>>> and used internally at Google since then, although it may be confusing if
>>>> people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
>>>> up providing the semantic of a virtually assigned cpu id (in opposition to
>>>> the logical cpu id on the system), but this is much more involved in the
>>>> case of KVM.
>>>
>>> I had the same reaction as Andy. The rseq concepts don't worry me so much as the
>>> existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
>>> between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
>>> and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
>>
>> I'm fine with changing the wording if it helps make things less confusing.
>>
>> Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
>
> What about something like "process-local-cpu-id" to capture that the ID has meaning
> only within the associated address space / process?
Considering that the shorthand for "memory space" is "VM" in e.g.
"CLONE_VM" clone(2) flags, perhaps "vm-cpu-id", "vm-local-cpu-id" or
"per-vm-cpu-id" ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
> On 2022-11-17 14:10, Sean Christopherson wrote:
> > On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
> > > On 2022-11-14 15:49, Sean Christopherson wrote:
> > > > On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
> > > > > On 2022-11-10 23:41, Andy Lutomirski wrote:
> > > > > > On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
> > > > > > <[email protected]> wrote:
> > > > > > Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
> > > > > > "compacted cpu" or something? It's a strange sort of concept.
> > > > >
> > > > > I've kept the same wording that has been introduced in 2011 by Paul Turner
> > > > > and used internally at Google since then, although it may be confusing if
> > > > > people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
> > > > > up providing the semantic of a virtually assigned cpu id (in opposition to
> > > > > the logical cpu id on the system), but this is much more involved in the
> > > > > case of KVM.
> > > >
> > > > I had the same reaction as Andy. The rseq concepts don't worry me so much as the
> > > > existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
> > > > between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
> > > > and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
> > >
> > > I'm fine with changing the wording if it helps make things less confusing.
> > >
> > > Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
> >
> > What about something like "process-local-cpu-id" to capture that the ID has meaning
> > only within the associated address space / process?
>
> Considering that the shorthand for "memory space" is "VM" in e.g. "CLONE_VM"
No objection from me for "vm", I've already had to untrain myself and remember
that "vm" doesn't always mean "virtual machine" :-)
> clone(2) flags, perhaps "vm-cpu-id", "vm-local-cpu-id" or "per-vm-cpu-id" ?
I have a slight preference for vm-local-cpu-id, but any of 'em work for me.
On 2022-11-17 16:15, Sean Christopherson wrote:
> On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
>> On 2022-11-17 14:10, Sean Christopherson wrote:
>>> On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
>>>> On 2022-11-14 15:49, Sean Christopherson wrote:
>>>>> On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
>>>>>> On 2022-11-10 23:41, Andy Lutomirski wrote:
>>>>>>> On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
>>>>>>> <[email protected]> wrote:
>>>>>>> Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
>>>>>>> "compacted cpu" or something? It's a strange sort of concept.
>>>>>>
>>>>>> I've kept the same wording that has been introduced in 2011 by Paul Turner
>>>>>> and used internally at Google since then, although it may be confusing if
>>>>>> people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both really end
>>>>>> up providing the semantic of a virtually assigned cpu id (in opposition to
>>>>>> the logical cpu id on the system), but this is much more involved in the
>>>>>> case of KVM.
>>>>>
>>>>> I had the same reaction as Andy. The rseq concepts don't worry me so much as the
>>>>> existence of "vcpu" in mm_struct/task_struct, e.g. switch_mm_vcpu() when switching
>>>>> between KVM vCPU tasks is going to be super confusing. Ditto for mm_vcpu_get()
>>>>> and mm_vcpu_put() in the few cases where KVM currently does mmget()/mmput().
>>>>
>>>> I'm fine with changing the wording if it helps make things less confusing.
>>>>
>>>> Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
>>>
>>> What about something like "process-local-cpu-id" to capture that the ID has meaning
>>> only within the associated address space / process?
>>
>> Considering that the shorthand for "memory space" is "VM" in e.g. "CLONE_VM"
>
> No objection from me for "vm", I've already had to untrain myself and remember
> that "vm" doesn't always mean "virtual machine" :-)
>
>> clone(2) flags, perhaps "vm-cpu-id", "vm-local-cpu-id" or "per-vm-cpu-id" ?
>
> I have a slight preference for vm-local-cpu-id, but any of 'em work for me.
Taking a step back wrt naming (because if I do a name change across the
series, I want it to be the last time I do it):
- VM (kvm) vs vm_ (rseq) is confusing.
- vCPU (kvm) vs vcpu (rseq) is confusing.
I propose "Address Space Concurrency ID". This indicates that those IDs
are really just tags assigned uniquely within an address space for each
thread running concurrently (and only while they are running).
Then the question that arises is whether the abbreviation presented to
user-space should be "mm_cid" (as would be expected from an internal
implementation perspective) or "as_cid" (which would match the name
exposed to user-space) ?
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On 2022-11-21 14:00, Mathieu Desnoyers wrote:
> On 2022-11-17 16:15, Sean Christopherson wrote:
>> On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
>>> On 2022-11-17 14:10, Sean Christopherson wrote:
>>>> On Thu, Nov 17, 2022, Mathieu Desnoyers wrote:
>>>>> On 2022-11-14 15:49, Sean Christopherson wrote:
>>>>>> On Fri, Nov 11, 2022, Mathieu Desnoyers wrote:
>>>>>>> On 2022-11-10 23:41, Andy Lutomirski wrote:
>>>>>>>> On Thu, Nov 3, 2022 at 1:05 PM Mathieu Desnoyers
>>>>>>>> <[email protected]> wrote:
>>>>>>>> Also, in my mind "virtual cpu" is vCPU, which this isn't. Maybe
>>>>>>>> "compacted cpu" or something? It's a strange sort of concept.
>>>>>>>
>>>>>>> I've kept the same wording that has been introduced in 2011 by
>>>>>>> Paul Turner
>>>>>>> and used internally at Google since then, although it may be
>>>>>>> confusing if
>>>>>>> people expect kvm-vCPU and rseq-vcpu to mean the same thing. Both
>>>>>>> really end
>>>>>>> up providing the semantic of a virtually assigned cpu id (in
>>>>>>> opposition to
>>>>>>> the logical cpu id on the system), but this is much more involved
>>>>>>> in the
>>>>>>> case of KVM.
>>>>>>
>>>>>> I had the same reaction as Andy. The rseq concepts don't worry me
>>>>>> so much as the
>>>>>> existence of "vcpu" in mm_struct/task_struct, e.g.
>>>>>> switch_mm_vcpu() when switching
>>>>>> between KVM vCPU tasks is going to be super confusing. Ditto for
>>>>>> mm_vcpu_get()
>>>>>> and mm_vcpu_put() in the few cases where KVM currently does
>>>>>> mmget()/mmput().
>>>>>
>>>>> I'm fine with changing the wording if it helps make things less
>>>>> confusing.
>>>>>
>>>>> Should we go for "compact-cpu-id" ? "packed-cpu-id" ? Other ideas ?
>>>>
>>>> What about something like "process-local-cpu-id" to capture that the
>>>> ID has meaning
>>>> only within the associated address space / process?
>>>
>>> Considering that the shorthand for "memory space" is "VM" in e.g.
>>> "CLONE_VM"
>>
>> No objection from me for "vm", I've already had to untrain myself and
>> remember
>> that "vm" doesn't always mean "virtual machine" :-)
>>
>>> clone(2) flags, perhaps "vm-cpu-id", "vm-local-cpu-id" or
>>> "per-vm-cpu-id" ?
>>
>> I have a slight preference for vm-local-cpu-id, but any of 'em work
>> for me.
>
> Taking a step back wrt naming (because if I do a name change across the
> series, I want it to be the last time I do it):
>
> - VM (kvm) vs vm_ (rseq) is confusing.
> - vCPU (kvm) vs vcpu (rseq) is confusing.
>
> I propose "Address Space Concurrency ID". This indicates that those IDs
> are really just tags assigned uniquely within an address space for each
> thread running concurrently (and only while they are running).
>
> Then the question that arises is whether the abbreviation presented to
> user-space should be "mm_cid" (as would be expected from an internal
> implementation perspective) or "as_cid" (which would match the name
> exposed to user-space) ?
Or it could be "Memory Map Concurrency ID" (mm_cid) to have matching
abbreviation and naming. The notion of a "memory map" appears in a few
places in man pages, and there are even tools to explore process memory
maps (pmap(1)).
Thanks,
Mathieu
>
> Thanks,
>
> Mathieu
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com