2021-09-01 22:46:33

by Raghavendra Rao Ananta

Subject: [PATCH v3 00/12] KVM: arm64: selftests: Introduce arch_timer selftest

Hello,

The patch series adds a KVM selftest to validate the behavior of
ARM's generic timer (patch-11). The test programs the timer IRQs
periodically and, for each interrupt, validates the behavior
against the architecture specification. The test also provides
a command-line interface to configure the number of vCPUs, the
period of the timer, and the number of iterations to run.

Patch-12 adds an option to randomly migrate the vCPUs to different
physical CPUs across the system. The bug fixed by Marc in commit
3134cc8beb69d0d ("KVM: arm64: vgic: Resample HW pending state on
deactivation") was originally discovered using the arch_timer
test with vCPU migrations.

Since the test heavily depends on interrupts, patch-10 adds a host
library to set up the ARM Generic Interrupt Controller v3 (GICv3).
This includes creating a vGIC device, setting up distributor and
redistributor attributes, and mapping the guest physical addresses.
Symmetrically, patch-9 adds a guest library to talk to the vGIC,
which includes initializing the controller, enabling/disabling the
interrupts, and so on.

Furthermore, patches 1 through 8 introduce additional processor
utilities that the test and the GICv3 library take advantage of,
such as MMIO accessors (readl/writel), read/write access to system
registers not recognized by the assembler, basic delay generation,
and enabling/disabling of local IRQs.

The patch series, specifically the library support, is derived from the
kvm-unit-tests and the kernel itself.

Regards,
Raghavendra

v2 -> v3:

- Addressed the comments from Ricardo regarding moving the vGIC host
support for selftests to its own library.
- Added an option (-m) to migrate the guest vCPUs to physical CPUs
in the system.

v1 -> v2:

Addressed comments from Zenghui in include/aarch64/arch_timer.h:
- Correct the header description
- Remove unnecessary inclusion of linux/sizes.h
- Re-arrange CTL_ defines in ascending order
- Remove inappropriate 'return' from timer_set_* functions, which
return 'void'.

Raghavendra Rao Ananta (12):
KVM: arm64: selftests: Add MMIO readl/writel support
KVM: arm64: selftests: Add write_sysreg_s and read_sysreg_s
KVM: arm64: selftests: Add support for cpu_relax
KVM: arm64: selftests: Add basic support for arch_timers
KVM: arm64: selftests: Add basic support to generate delays
KVM: arm64: selftests: Add support to disable and enable local IRQs
KVM: arm64: selftests: Add support to get the vcpuid from MPIDR_EL1
KVM: arm64: selftests: Add light-weight spinlock support
KVM: arm64: selftests: Add basic GICv3 support
KVM: arm64: selftests: Add host support for vGIC
KVM: arm64: selftests: Add arch_timer test
KVM: arm64: selftests: arch_timer: Support vCPU migration

tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 3 +-
.../selftests/kvm/aarch64/arch_timer.c | 457 ++++++++++++++++++
.../kvm/include/aarch64/arch_timer.h | 142 ++++++
.../selftests/kvm/include/aarch64/delay.h | 25 +
.../selftests/kvm/include/aarch64/gic.h | 21 +
.../selftests/kvm/include/aarch64/processor.h | 140 +++++-
.../selftests/kvm/include/aarch64/spinlock.h | 13 +
.../selftests/kvm/include/aarch64/vgic.h | 14 +
tools/testing/selftests/kvm/lib/aarch64/gic.c | 93 ++++
.../selftests/kvm/lib/aarch64/gic_private.h | 21 +
.../selftests/kvm/lib/aarch64/gic_v3.c | 240 +++++++++
.../selftests/kvm/lib/aarch64/gic_v3.h | 70 +++
.../selftests/kvm/lib/aarch64/spinlock.c | 27 ++
.../testing/selftests/kvm/lib/aarch64/vgic.c | 67 +++
15 files changed, 1332 insertions(+), 2 deletions(-)
create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer.c
create mode 100644 tools/testing/selftests/kvm/include/aarch64/arch_timer.h
create mode 100644 tools/testing/selftests/kvm/include/aarch64/delay.h
create mode 100644 tools/testing/selftests/kvm/include/aarch64/gic.h
create mode 100644 tools/testing/selftests/kvm/include/aarch64/spinlock.h
create mode 100644 tools/testing/selftests/kvm/include/aarch64/vgic.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic.c
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_private.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/spinlock.c
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/vgic.c

--
2.33.0.153.gba50c8fa24-goog


2021-09-01 22:46:35

by Raghavendra Rao Ananta

Subject: [PATCH v3 08/12] KVM: arm64: selftests: Add light-weight spinlock support

Add light-weight spinlock support for ARM64 guests to use.

The implementation is loosely based on the spinlock
implementation in kvm-unit-tests.

Signed-off-by: Raghavendra Rao Ananta <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 2 +-
.../selftests/kvm/include/aarch64/spinlock.h | 13 +++++++++
.../selftests/kvm/lib/aarch64/spinlock.c | 27 +++++++++++++++++++
3 files changed, 41 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/include/aarch64/spinlock.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/spinlock.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5d05801ab816..61f0d376af99 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -35,7 +35,7 @@ endif

LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
-LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S
+LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c
LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c

TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
diff --git a/tools/testing/selftests/kvm/include/aarch64/spinlock.h b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
new file mode 100644
index 000000000000..cf0984106d14
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SELFTEST_KVM_ARM64_SPINLOCK_H
+#define SELFTEST_KVM_ARM64_SPINLOCK_H
+
+struct spinlock {
+ int v;
+};
+
+extern void spin_lock(struct spinlock *lock);
+extern void spin_unlock(struct spinlock *lock);
+
+#endif /* SELFTEST_KVM_ARM64_SPINLOCK_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/spinlock.c b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
new file mode 100644
index 000000000000..6d66a3dac237
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM64 Spinlock support
+ */
+#include <stdint.h>
+
+#include "spinlock.h"
+
+void spin_lock(struct spinlock *lock)
+{
+ uint32_t val, res;
+
+ asm volatile(
+ "1: ldaxr %w0, [%2]\n"
+ " cbnz %w0, 1b\n"
+ " mov %w0, #1\n"
+ " stxr %w1, %w0, [%2]\n"
+ " cbnz %w1, 1b\n"
+ : "=&r" (val), "=&r" (res)
+ : "r" (&lock->v)
+ : "memory");
+}
+
+void spin_unlock(struct spinlock *lock)
+{
+ asm volatile("stlr wzr, [%0]\n" : : "r" (&lock->v) : "memory");
+}
--
2.33.0.153.gba50c8fa24-goog

2021-09-01 22:46:36

by Raghavendra Rao Ananta

Subject: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

Since the timer stack (hardware and KVM) is per-CPU, races can
occur when the scheduler migrates a vCPU thread to a different
physical CPU. Hence, add an option to stress-test this path as
well by forcing the vCPUs to migrate across physical CPUs in the
system at a given rate.

The bug fixed by commit 3134cc8beb69d0d ("KVM: arm64: vgic:
Resample HW pending state on deactivation") was originally
discovered using the arch_timer test with vCPU migrations and
can be easily reproduced this way.

Signed-off-by: Raghavendra Rao Ananta <[email protected]>
---
.../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
1 file changed, 107 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
index 1383f33850e9..de246c7afab2 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
@@ -14,6 +14,8 @@
*
* The test provides command-line options to configure the timer's
* period (-p), number of vCPUs (-n), and iterations per stage (-i).
+ * To stress-test the timer stack even more, an option to migrate the
+ * vCPUs across pCPUs (-m), at a particular rate, is also provided.
*
* Copyright (c) 2021, Google LLC.
*/
@@ -24,6 +26,8 @@
#include <pthread.h>
#include <linux/kvm.h>
#include <linux/sizes.h>
+#include <linux/bitmap.h>
+#include <sys/sysinfo.h>

#include "kvm_util.h"
#include "processor.h"
@@ -41,12 +45,14 @@ struct test_args {
int nr_vcpus;
int nr_iter;
int timer_period_ms;
+ int migration_freq_ms;
};

static struct test_args test_args = {
.nr_vcpus = NR_VCPUS_DEF,
.nr_iter = NR_TEST_ITERS_DEF,
.timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
+ .migration_freq_ms = 0, /* Turn off migrations by default */
};

#define msecs_to_usecs(msec) ((msec) * 1000LL)
@@ -81,6 +87,9 @@ struct test_vcpu {
static struct test_vcpu test_vcpu[KVM_MAX_VCPUS];
static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];

+static unsigned long *vcpu_done_map;
+static pthread_mutex_t vcpu_done_map_lock;
+
static void
guest_configure_timer_action(struct test_vcpu_shared_data *shared_data)
{
@@ -216,6 +225,11 @@ static void *test_vcpu_run(void *arg)

vcpu_run(vm, vcpuid);

+ /* Currently, any exit from guest is an indication of completion */
+ pthread_mutex_lock(&vcpu_done_map_lock);
+ set_bit(vcpuid, vcpu_done_map);
+ pthread_mutex_unlock(&vcpu_done_map_lock);
+
switch (get_ucall(vm, vcpuid, &uc)) {
case UCALL_SYNC:
case UCALL_DONE:
@@ -235,9 +249,73 @@ static void *test_vcpu_run(void *arg)
return NULL;
}

+static uint32_t test_get_pcpu(void)
+{
+ uint32_t pcpu;
+ unsigned int nproc_conf;
+ cpu_set_t online_cpuset;
+
+ nproc_conf = get_nprocs_conf();
+ sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
+
+ /* Randomly find an available pCPU to place a vCPU on */
+ do {
+ pcpu = rand() % nproc_conf;
+ } while (!CPU_ISSET(pcpu, &online_cpuset));
+
+ return pcpu;
+}
+static int test_migrate_vcpu(struct test_vcpu *vcpu)
+{
+ int ret;
+ cpu_set_t cpuset;
+ uint32_t new_pcpu = test_get_pcpu();
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(new_pcpu, &cpuset);
+ ret = pthread_setaffinity_np(vcpu->pt_vcpu_run,
+ sizeof(cpuset), &cpuset);
+
+ /* Allow the error where the vCPU thread is already finished */
+ TEST_ASSERT(ret == 0 || ret == ESRCH,
+ "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n",
+ vcpu->vcpuid, new_pcpu, ret);
+
+ return ret;
+}
+static void *test_vcpu_migration(void *arg)
+{
+ unsigned int i, n_done;
+ bool vcpu_done;
+
+ do {
+ usleep(msecs_to_usecs(test_args.migration_freq_ms));
+
+ for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) {
+ pthread_mutex_lock(&vcpu_done_map_lock);
+ vcpu_done = test_bit(i, vcpu_done_map);
+ pthread_mutex_unlock(&vcpu_done_map_lock);
+
+ if (vcpu_done) {
+ n_done++;
+ continue;
+ }
+
+ test_migrate_vcpu(&test_vcpu[i]);
+ }
+ } while (test_args.nr_vcpus != n_done);
+
+ return NULL;
+}
+
static void test_run(struct kvm_vm *vm)
{
int i, ret;
+ pthread_t pt_vcpu_migration;
+
+ pthread_mutex_init(&vcpu_done_map_lock, NULL);
+ vcpu_done_map = bitmap_alloc(test_args.nr_vcpus);
+ TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");

for (i = 0; i < test_args.nr_vcpus; i++) {
ret = pthread_create(&test_vcpu[i].pt_vcpu_run, NULL,
@@ -245,8 +323,23 @@ static void test_run(struct kvm_vm *vm)
TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
}

+ /* Spawn a thread to control the vCPU migrations */
+ if (test_args.migration_freq_ms) {
+ srand(time(NULL));
+
+ ret = pthread_create(&pt_vcpu_migration, NULL,
+ test_vcpu_migration, NULL);
+ TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
+ }
+
+
for (i = 0; i < test_args.nr_vcpus; i++)
pthread_join(test_vcpu[i].pt_vcpu_run, NULL);
+
+ if (test_args.migration_freq_ms)
+ pthread_join(pt_vcpu_migration, NULL);
+
+ bitmap_free(vcpu_done_map);
}

static struct kvm_vm *test_vm_create(void)
@@ -286,6 +379,7 @@ static void test_print_help(char *name)
NR_TEST_ITERS_DEF);
pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n",
TIMER_TEST_PERIOD_MS_DEF);
+ pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: 0)\n");
pr_info("\t-h: print this help screen\n");
}

@@ -293,7 +387,7 @@ static bool parse_args(int argc, char *argv[])
{
int opt;

- while ((opt = getopt(argc, argv, "hn:i:p:")) != -1) {
+ while ((opt = getopt(argc, argv, "hn:i:p:m:")) != -1) {
switch (opt) {
case 'n':
test_args.nr_vcpus = atoi(optarg);
@@ -320,6 +414,13 @@ static bool parse_args(int argc, char *argv[])
goto err;
}
break;
+ case 'm':
+ test_args.migration_freq_ms = atoi(optarg);
+ if (test_args.migration_freq_ms < 0) {
+ pr_info("0 or positive value needed for -m\n");
+ goto err;
+ }
+ break;
case 'h':
default:
goto err;
@@ -343,6 +444,11 @@ int main(int argc, char *argv[])
if (!parse_args(argc, argv))
exit(KSFT_SKIP);

+ if (get_nprocs() < 2) {
+ print_skip("At least two physical CPUs needed for vCPU migration");
+ exit(KSFT_SKIP);
+ }
+
vm = test_vm_create();
test_run(vm);
kvm_vm_free(vm);
--
2.33.0.153.gba50c8fa24-goog

2021-09-01 22:46:37

by Raghavendra Rao Ananta

Subject: [PATCH v3 09/12] KVM: arm64: selftests: Add basic GICv3 support

Add basic support for the ARM Generic Interrupt Controller v3
(GICv3), allowing guests to set up and handle interrupts.

The work is inspired by kvm-unit-tests and the kernel's
GIC driver (drivers/irqchip/irq-gic-v3.c).

Signed-off-by: Raghavendra Rao Ananta <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 2 +-
.../selftests/kvm/include/aarch64/gic.h | 21 ++
tools/testing/selftests/kvm/lib/aarch64/gic.c | 93 +++++++
.../selftests/kvm/lib/aarch64/gic_private.h | 21 ++
.../selftests/kvm/lib/aarch64/gic_v3.c | 240 ++++++++++++++++++
.../selftests/kvm/lib/aarch64/gic_v3.h | 70 +++++
6 files changed, 446 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/include/aarch64/gic.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic.c
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_private.h
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.h

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 61f0d376af99..5476a8ddef60 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -35,7 +35,7 @@ endif

LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
-LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c
+LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c
LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c

TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
diff --git a/tools/testing/selftests/kvm/include/aarch64/gic.h b/tools/testing/selftests/kvm/include/aarch64/gic.h
new file mode 100644
index 000000000000..85dd1e53048e
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/aarch64/gic.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ARM Generic Interrupt Controller (GIC) specific defines
+ */
+
+#ifndef SELFTEST_KVM_GIC_H
+#define SELFTEST_KVM_GIC_H
+
+enum gic_type {
+ GIC_V3,
+ GIC_TYPE_MAX,
+};
+
+void gic_init(enum gic_type type, unsigned int nr_cpus,
+ void *dist_base, void *redist_base);
+void gic_irq_enable(unsigned int intid);
+void gic_irq_disable(unsigned int intid);
+unsigned int gic_get_and_ack_irq(void);
+void gic_set_eoi(unsigned int intid);
+
+#endif /* SELFTEST_KVM_GIC_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic.c b/tools/testing/selftests/kvm/lib/aarch64/gic.c
new file mode 100644
index 000000000000..b0b67f5aeaa6
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/gic.c
@@ -0,0 +1,93 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM Generic Interrupt Controller (GIC) support
+ */
+
+#include <errno.h>
+#include <linux/bits.h>
+#include <linux/sizes.h>
+
+#include "kvm_util.h"
+
+#include <gic.h>
+#include "gic_private.h"
+#include "processor.h"
+#include "spinlock.h"
+
+static const struct gic_common_ops *gic_common_ops;
+static struct spinlock gic_lock;
+
+static void gic_cpu_init(unsigned int cpu, void *redist_base)
+{
+ gic_common_ops->gic_cpu_init(cpu, redist_base);
+}
+
+static void
+gic_dist_init(enum gic_type type, unsigned int nr_cpus, void *dist_base)
+{
+ const struct gic_common_ops *gic_ops;
+
+ spin_lock(&gic_lock);
+
+ /* Distributor initialization is needed only once per VM */
+ if (gic_common_ops) {
+ spin_unlock(&gic_lock);
+ return;
+ }
+
+ if (type == GIC_V3)
+ gic_ops = &gicv3_ops;
+
+ gic_ops->gic_init(nr_cpus, dist_base);
+ gic_common_ops = gic_ops;
+
+ /* Make sure that the initialized data is visible to all the vCPUs */
+ dsb(sy);
+
+ spin_unlock(&gic_lock);
+}
+
+void gic_init(enum gic_type type, unsigned int nr_cpus,
+ void *dist_base, void *redist_base)
+{
+ uint32_t cpu = get_vcpuid();
+
+ GUEST_ASSERT(type < GIC_TYPE_MAX);
+ GUEST_ASSERT(dist_base);
+ GUEST_ASSERT(redist_base);
+ GUEST_ASSERT(nr_cpus);
+
+ gic_dist_init(type, nr_cpus, dist_base);
+ gic_cpu_init(cpu, redist_base);
+}
+
+void gic_irq_enable(unsigned int intid)
+{
+ GUEST_ASSERT(gic_common_ops);
+ gic_common_ops->gic_irq_enable(intid);
+}
+
+void gic_irq_disable(unsigned int intid)
+{
+ GUEST_ASSERT(gic_common_ops);
+ gic_common_ops->gic_irq_disable(intid);
+}
+
+unsigned int gic_get_and_ack_irq(void)
+{
+ uint64_t irqstat;
+ unsigned int intid;
+
+ GUEST_ASSERT(gic_common_ops);
+
+ irqstat = gic_common_ops->gic_read_iar();
+ intid = irqstat & GENMASK(23, 0);
+
+ return intid;
+}
+
+void gic_set_eoi(unsigned int intid)
+{
+ GUEST_ASSERT(gic_common_ops);
+ gic_common_ops->gic_write_eoir(intid);
+}
diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_private.h b/tools/testing/selftests/kvm/lib/aarch64/gic_private.h
new file mode 100644
index 000000000000..d81d739433dc
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/gic_private.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ARM Generic Interrupt Controller (GIC) private defines that's only
+ * shared among the GIC library code.
+ */
+
+#ifndef SELFTEST_KVM_GIC_PRIVATE_H
+#define SELFTEST_KVM_GIC_PRIVATE_H
+
+struct gic_common_ops {
+ void (*gic_init)(unsigned int nr_cpus, void *dist_base);
+ void (*gic_cpu_init)(unsigned int cpu, void *redist_base);
+ void (*gic_irq_enable)(unsigned int intid);
+ void (*gic_irq_disable)(unsigned int intid);
+ uint64_t (*gic_read_iar)(void);
+ void (*gic_write_eoir)(uint32_t irq);
+};
+
+extern const struct gic_common_ops gicv3_ops;
+
+#endif /* SELFTEST_KVM_GIC_PRIVATE_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
new file mode 100644
index 000000000000..4b635ca6a8cb
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ARM Generic Interrupt Controller (GIC) v3 support
+ */
+
+#include <linux/sizes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "delay.h"
+
+#include "gic_v3.h"
+#include "gic_private.h"
+
+struct gicv3_data {
+ void *dist_base;
+ void *redist_base[GICV3_MAX_CPUS];
+ unsigned int nr_cpus;
+ unsigned int nr_spis;
+};
+
+#define sgi_base_from_redist(redist_base) (redist_base + SZ_64K)
+
+enum gicv3_intid_range {
+ SGI_RANGE,
+ PPI_RANGE,
+ SPI_RANGE,
+ INVALID_RANGE,
+};
+
+static struct gicv3_data gicv3_data;
+
+static void gicv3_gicd_wait_for_rwp(void)
+{
+ unsigned int count = 100000; /* 1s */
+
+ while (readl(gicv3_data.dist_base + GICD_CTLR) & GICD_CTLR_RWP) {
+ GUEST_ASSERT(count--);
+ udelay(10);
+ }
+}
+
+static void gicv3_gicr_wait_for_rwp(void *redist_base)
+{
+ unsigned int count = 100000; /* 1s */
+
+ while (readl(redist_base + GICR_CTLR) & GICR_CTLR_RWP) {
+ GUEST_ASSERT(count--);
+ udelay(10);
+ }
+}
+
+static enum gicv3_intid_range get_intid_range(unsigned int intid)
+{
+ switch (intid) {
+ case 0 ... 15:
+ return SGI_RANGE;
+ case 16 ... 31:
+ return PPI_RANGE;
+ case 32 ... 1019:
+ return SPI_RANGE;
+ }
+
+ /* We should not be reaching here */
+ GUEST_ASSERT(0);
+
+ return INVALID_RANGE;
+}
+
+static uint64_t gicv3_read_iar(void)
+{
+ uint64_t irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
+
+ dsb(sy);
+ return irqstat;
+}
+
+static void gicv3_write_eoir(uint32_t irq)
+{
+ write_sysreg_s(SYS_ICC_EOIR1_EL1, irq);
+ isb();
+}
+
+static void
+gicv3_config_irq(unsigned int intid, unsigned int offset)
+{
+ uint32_t cpu = get_vcpuid();
+ uint32_t mask = 1 << (intid % 32);
+ enum gicv3_intid_range intid_range = get_intid_range(intid);
+ void *reg;
+
+ /* We care about 'cpu' only for SGIs or PPIs */
+ if (intid_range == SGI_RANGE || intid_range == PPI_RANGE) {
+ GUEST_ASSERT(cpu < gicv3_data.nr_cpus);
+
+ reg = sgi_base_from_redist(gicv3_data.redist_base[cpu]) +
+ offset;
+ writel(mask, reg);
+ gicv3_gicr_wait_for_rwp(gicv3_data.redist_base[cpu]);
+ } else if (intid_range == SPI_RANGE) {
+ reg = gicv3_data.dist_base + offset + (intid / 32) * 4;
+ writel(mask, reg);
+ gicv3_gicd_wait_for_rwp();
+ } else {
+ GUEST_ASSERT(0);
+ }
+}
+
+static void gicv3_irq_enable(unsigned int intid)
+{
+ gicv3_config_irq(intid, GICD_ISENABLER);
+}
+
+static void gicv3_irq_disable(unsigned int intid)
+{
+ gicv3_config_irq(intid, GICD_ICENABLER);
+}
+
+static void gicv3_enable_redist(void *redist_base)
+{
+ uint32_t val = readl(redist_base + GICR_WAKER);
+ unsigned int count = 100000; /* 1s */
+
+ val &= ~GICR_WAKER_ProcessorSleep;
+ writel(val, redist_base + GICR_WAKER);
+
+ /* Wait until the processor is 'active' */
+ while (readl(redist_base + GICR_WAKER) & GICR_WAKER_ChildrenAsleep) {
+ GUEST_ASSERT(count--);
+ udelay(10);
+ }
+}
+
+static inline void *gicr_base_gpa_cpu(void *redist_base, uint32_t cpu)
+{
+ /* Align all the redistributors sequentially */
+ return redist_base + cpu * SZ_64K * 2;
+}
+
+static void gicv3_cpu_init(unsigned int cpu, void *redist_base)
+{
+ void *sgi_base;
+ unsigned int i;
+ void *redist_base_cpu;
+
+ GUEST_ASSERT(cpu < gicv3_data.nr_cpus);
+
+ redist_base_cpu = gicr_base_gpa_cpu(redist_base, cpu);
+ sgi_base = sgi_base_from_redist(redist_base_cpu);
+
+ gicv3_enable_redist(redist_base_cpu);
+
+ /*
+ * Mark all the SGI and PPI interrupts as non-secure Group-1.
+ * Also, deactivate and disable them.
+ */
+ writel(~0, sgi_base + GICR_IGROUPR0);
+ writel(~0, sgi_base + GICR_ICACTIVER0);
+ writel(~0, sgi_base + GICR_ICENABLER0);
+
+ /* Set a default priority for all the SGIs and PPIs */
+ for (i = 0; i < 32; i += 4)
+ writel(GICD_INT_DEF_PRI_X4,
+ sgi_base + GICR_IPRIORITYR0 + i);
+
+ gicv3_gicr_wait_for_rwp(redist_base_cpu);
+
+ /* Enable the GIC system register (ICC_*) access */
+ write_sysreg_s(SYS_ICC_SRE_EL1,
+ read_sysreg_s(SYS_ICC_SRE_EL1) | ICC_SRE_EL1_SRE);
+
+ /* Set a default priority threshold */
+ write_sysreg_s(SYS_ICC_PMR_EL1, ICC_PMR_DEF_PRIO);
+
+ /* Enable non-secure Group-1 interrupts */
+ write_sysreg_s(SYS_ICC_GRPEN1_EL1, ICC_IGRPEN1_EL1_ENABLE);
+
+ gicv3_data.redist_base[cpu] = redist_base_cpu;
+}
+
+static void gicv3_dist_init(void)
+{
+ void *dist_base = gicv3_data.dist_base;
+ unsigned int i;
+
+ /* Disable the distributor until we set things up */
+ writel(0, dist_base + GICD_CTLR);
+ gicv3_gicd_wait_for_rwp();
+
+ /*
+ * Mark all the SPI interrupts as non-secure Group-1.
+ * Also, deactivate and disable them.
+ */
+ for (i = 32; i < gicv3_data.nr_spis; i += 32) {
+ writel(~0, dist_base + GICD_IGROUPR + i / 8);
+ writel(~0, dist_base + GICD_ICACTIVER + i / 8);
+ writel(~0, dist_base + GICD_ICENABLER + i / 8);
+ }
+
+ /* Set a default priority for all the SPIs */
+ for (i = 32; i < gicv3_data.nr_spis; i += 4)
+ writel(GICD_INT_DEF_PRI_X4,
+ dist_base + GICD_IPRIORITYR + i);
+
+ /* Wait for the settings to sync-in */
+ gicv3_gicd_wait_for_rwp();
+
+ /* Finally, enable the distributor globally with ARE */
+ writel(GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A |
+ GICD_CTLR_ENABLE_G1, dist_base + GICD_CTLR);
+ gicv3_gicd_wait_for_rwp();
+}
+
+static void gicv3_init(unsigned int nr_cpus, void *dist_base)
+{
+ GUEST_ASSERT(nr_cpus <= GICV3_MAX_CPUS);
+
+ gicv3_data.nr_cpus = nr_cpus;
+ gicv3_data.dist_base = dist_base;
+ gicv3_data.nr_spis = GICD_TYPER_SPIS(
+ readl(gicv3_data.dist_base + GICD_TYPER));
+ if (gicv3_data.nr_spis > 1020)
+ gicv3_data.nr_spis = 1020;
+
+ /*
+ * Initialize only the distributor for now.
+ * The redistributor and CPU interfaces are initialized
+ * later for every PE.
+ */
+ gicv3_dist_init();
+}
+
+const struct gic_common_ops gicv3_ops = {
+ .gic_init = gicv3_init,
+ .gic_cpu_init = gicv3_cpu_init,
+ .gic_irq_enable = gicv3_irq_enable,
+ .gic_irq_disable = gicv3_irq_disable,
+ .gic_read_iar = gicv3_read_iar,
+ .gic_write_eoir = gicv3_write_eoir,
+};
diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
new file mode 100644
index 000000000000..d41195e347b3
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ARM Generic Interrupt Controller (GIC) v3 specific defines
+ */
+
+#ifndef SELFTEST_KVM_GICV3_H
+#define SELFTEST_KVM_GICV3_H
+
+#include "processor.h"
+
+/*
+ * Distributor registers
+ */
+#define GICD_CTLR 0x0000
+#define GICD_TYPER 0x0004
+#define GICD_IGROUPR 0x0080
+#define GICD_ISENABLER 0x0100
+#define GICD_ICENABLER 0x0180
+#define GICD_ICACTIVER 0x0380
+#define GICD_IPRIORITYR 0x0400
+
+/*
+ * The assumption is that the guest runs in a non-secure mode.
+ * The following bits of GICD_CTLR are defined accordingly.
+ */
+#define GICD_CTLR_RWP (1U << 31)
+#define GICD_CTLR_nASSGIreq (1U << 8)
+#define GICD_CTLR_ARE_NS (1U << 4)
+#define GICD_CTLR_ENABLE_G1A (1U << 1)
+#define GICD_CTLR_ENABLE_G1 (1U << 0)
+
+#define GICD_TYPER_SPIS(typer) ((((typer) & 0x1f) + 1) * 32)
+#define GICD_INT_DEF_PRI_X4 0xa0a0a0a0
+
+/*
+ * Redistributor registers
+ */
+#define GICR_CTLR 0x000
+#define GICR_WAKER 0x014
+
+#define GICR_CTLR_RWP (1U << 3)
+
+#define GICR_WAKER_ProcessorSleep (1U << 1)
+#define GICR_WAKER_ChildrenAsleep (1U << 2)
+
+/*
+ * Redistributor registers, offsets from SGI base
+ */
+#define GICR_IGROUPR0 GICD_IGROUPR
+#define GICR_ISENABLER0 GICD_ISENABLER
+#define GICR_ICENABLER0 GICD_ICENABLER
+#define GICR_ICACTIVER0 GICD_ICACTIVER
+#define GICR_IPRIORITYR0 GICD_IPRIORITYR
+
+/* CPU interface registers */
+#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
+#define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
+#define SYS_ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1)
+#define SYS_ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
+#define SYS_ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
+
+#define ICC_PMR_DEF_PRIO 0xf0
+
+#define ICC_SRE_EL1_SRE (1U << 0)
+
+#define ICC_IGRPEN1_EL1_ENABLE (1U << 0)
+
+#define GICV3_MAX_CPUS 512
+
+#endif /* SELFTEST_KVM_GICV3_H */
--
2.33.0.153.gba50c8fa24-goog

2021-09-01 22:47:52

by Oliver Upton

Subject: Re: [PATCH v3 00/12] KVM: arm64: selftests: Introduce arch_timer selftest

On Wed, Sep 01, 2021 at 09:14:00PM +0000, Raghavendra Rao Ananta wrote:
> Hello,
>
> The patch series adds a KVM selftest to validate the behavior of
> ARM's generic timer (patch-11). The test programs the timer IRQs
> periodically, and for each interrupt, it validates the behaviour
> against the architecture specifications. The test further provides
> a command-line interface to configure the number of vCPUs, the
> period of the timer, and the number of iterations that the test
> has to run for.
>
> Patch-12 adds an option to randomly migrate the vCPUs to different
> physical CPUs across the system. The bug for the fix provided by
> Marc with commit 3134cc8beb69d0d ("KVM: arm64: vgic: Resample HW
> pending state on deactivation") was discovered using arch_timer
> test with vCPU migrations.
>
> Since the test heavily depends on interrupts, patch-10 adds a host
> library to setup ARM Generic Interrupt Controller v3 (GICv3). This
> includes creating a vGIC device, setting up distributor and
> redistributor attributes, and mapping the guest physical addresses.
> Symmetrical to this, patch-9 adds a guest library to talk to the vGIC,
> which includes initializing the controller, enabling/disabling the
> interrupts, and so on.
>
> Furthermore, additional processor utilities such as accessing the MMIO
> (via readl/writel), read/write to assembler unsupported registers,
> basic delay generation, enable/disable local IRQs, and so on, are also
> introduced that the test/GICv3 takes advantage of (patches 1 through 8).
>
> The patch series, specifically the library support, is derived from the
> kvm-unit-tests and the kernel itself.
>
> Regards,
> Raghavendra

For later submissions, can you include a lore.kernel.org link to your
older revisions of the series? NBD now, it's easy to find in my inbox,
but just for future reference.

--
Best,
Oliver

2021-09-01 22:47:52

by Oliver Upton

Subject: Re: [PATCH v3 00/12] KVM: arm64: selftests: Introduce arch_timer selftest

+cc Andrew Jones

On Wed, Sep 01, 2021 at 09:14:00PM +0000, Raghavendra Rao Ananta wrote:
> Hello,
>
> The patch series adds a KVM selftest to validate the behavior of
> ARM's generic timer (patch-11). The test programs the timer IRQs
> periodically, and for each interrupt, it validates the behaviour
> against the architecture specifications. The test further provides
> a command-line interface to configure the number of vCPUs, the
> period of the timer, and the number of iterations that the test
> has to run for.
>
> Patch-12 adds an option to randomly migrate the vCPUs to different
> physical CPUs across the system. The bug for the fix provided by
> Marc with commit 3134cc8beb69d0d ("KVM: arm64: vgic: Resample HW
> pending state on deactivation") was discovered using arch_timer
> test with vCPU migrations.
>
> Since the test heavily depends on interrupts, patch-10 adds a host
> library to setup ARM Generic Interrupt Controller v3 (GICv3). This
> includes creating a vGIC device, setting up distributor and
> redistributor attributes, and mapping the guest physical addresses.
> Symmetrical to this, patch-9 adds a guest library to talk to the vGIC,
> which includes initializing the controller, enabling/disabling the
> interrupts, and so on.
>
> Furthermore, additional processor utilities such as accessing the MMIO
> (via readl/writel), read/write to assembler unsupported registers,
> basic delay generation, enable/disable local IRQs, and so on, are also
> introduced that the test/GICv3 takes advantage of (patches 1 through 8).
>
> The patch series, specifically the library support, is derived from the
> kvm-unit-tests and the kernel itself.
>
> Regards,
> Raghavendra
>
> v2 -> v3:
>
> - Addressed the comments from Ricardo regarding moving the vGIC host
> support for selftests to its own library.
> - Added an option (-m) to migrate the guest vCPUs to physical CPUs
> in the system.
>
> v1 -> v2:
>
> Addressed comments from Zenghui in include/aarch64/arch_timer.h:
> - Correct the header description
> - Remove unnecessary inclusion of linux/sizes.h
> - Re-arrange CTL_ defines in ascending order
> - Remove inappropriate 'return' from timer_set_* functions, which
> returns 'void'.
>
> Raghavendra Rao Ananta (12):
> KVM: arm64: selftests: Add MMIO readl/writel support
> KVM: arm64: selftests: Add write_sysreg_s and read_sysreg_s
> KVM: arm64: selftests: Add support for cpu_relax
> KVM: arm64: selftests: Add basic support for arch_timers
> KVM: arm64: selftests: Add basic support to generate delays
> KVM: arm64: selftests: Add support to disable and enable local IRQs
> KVM: arm64: selftests: Add support to get the vcpuid from MPIDR_EL1
> KVM: arm64: selftests: Add light-weight spinlock support
> KVM: arm64: selftests: Add basic GICv3 support
> KVM: arm64: selftests: Add host support for vGIC
> KVM: arm64: selftests: Add arch_timer test
> KVM: arm64: selftests: arch_timer: Support vCPU migration
>
> tools/testing/selftests/kvm/.gitignore | 1 +
> tools/testing/selftests/kvm/Makefile | 3 +-
> .../selftests/kvm/aarch64/arch_timer.c | 457 ++++++++++++++++++
> .../kvm/include/aarch64/arch_timer.h | 142 ++++++
> .../selftests/kvm/include/aarch64/delay.h | 25 +
> .../selftests/kvm/include/aarch64/gic.h | 21 +
> .../selftests/kvm/include/aarch64/processor.h | 140 +++++-
> .../selftests/kvm/include/aarch64/spinlock.h | 13 +
> .../selftests/kvm/include/aarch64/vgic.h | 14 +
> tools/testing/selftests/kvm/lib/aarch64/gic.c | 93 ++++
> .../selftests/kvm/lib/aarch64/gic_private.h | 21 +
> .../selftests/kvm/lib/aarch64/gic_v3.c | 240 +++++++++
> .../selftests/kvm/lib/aarch64/gic_v3.h | 70 +++
> .../selftests/kvm/lib/aarch64/spinlock.c | 27 ++
> .../testing/selftests/kvm/lib/aarch64/vgic.c | 67 +++
> 15 files changed, 1332 insertions(+), 2 deletions(-)
> create mode 100644 tools/testing/selftests/kvm/aarch64/arch_timer.c
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/arch_timer.h
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/delay.h
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/gic.h
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/spinlock.h
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/vgic.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic.c
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_private.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/spinlock.c
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/vgic.c
>
> --
> 2.33.0.153.gba50c8fa24-goog
>

2021-09-02 21:10:38

by Oliver Upton

Subject: Re: [PATCH v3 08/12] KVM: arm64: selftests: Add light-weight spinlock support

On Wed, Sep 1, 2021 at 2:14 PM Raghavendra Rao Ananta
<[email protected]> wrote:
>
> Add a simpler version of spinlock support for ARM64 for
> the guests to use.
>
> The implementation is loosely based on the spinlock
> implementation in kvm-unit-tests.
>
> Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 2 +-
> .../selftests/kvm/include/aarch64/spinlock.h | 13 +++++++++
> .../selftests/kvm/lib/aarch64/spinlock.c | 27 +++++++++++++++++++
> 3 files changed, 41 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/spinlock.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/spinlock.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 5d05801ab816..61f0d376af99 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -35,7 +35,7 @@ endif
>
> LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
> LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
> -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S
> +LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c
> LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
>
> TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
> diff --git a/tools/testing/selftests/kvm/include/aarch64/spinlock.h b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
> new file mode 100644
> index 000000000000..cf0984106d14
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef SELFTEST_KVM_ARM64_SPINLOCK_H
> +#define SELFTEST_KVM_ARM64_SPINLOCK_H
> +
> +struct spinlock {
> + int v;
> +};
> +
> +extern void spin_lock(struct spinlock *lock);
> +extern void spin_unlock(struct spinlock *lock);
> +
> +#endif /* SELFTEST_KVM_ARM64_SPINLOCK_H */
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/spinlock.c b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
> new file mode 100644
> index 000000000000..6d66a3dac237
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
> @@ -0,0 +1,27 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ARM64 Spinlock support
> + */
> +#include <stdint.h>
> +
> +#include "spinlock.h"
> +
> +void spin_lock(struct spinlock *lock)
> +{
> + uint32_t val, res;

nit: use 'int' to match the lock value type.

> +
> + asm volatile(
> + "1: ldaxr %w0, [%2]\n"
> + " cbnz %w0, 1b\n"
> + " mov %w0, #1\n"
> + " stxr %w1, %w0, [%2]\n"
> + " cbnz %w1, 1b\n"
> + : "=&r" (val), "=&r" (res)
> + : "r" (&lock->v)
> + : "memory");
> +}
> +
> +void spin_unlock(struct spinlock *lock)
> +{
> + asm volatile("stlr wzr, [%0]\n" : : "r" (&lock->v) : "memory");
> +}
> --
> 2.33.0.153.gba50c8fa24-goog
>

Otherwise, LGTM.

Reviewed-by: Oliver Upton <[email protected]>

2021-09-03 08:30:45

by Andrew Jones

Subject: Re: [PATCH v3 08/12] KVM: arm64: selftests: Add light-weight spinlock support

On Wed, Sep 01, 2021 at 09:14:08PM +0000, Raghavendra Rao Ananta wrote:
> Add a simpler version of spinlock support for ARM64 for
> the guests to use.
>
> The implementation is loosely based on the spinlock
> implementation in kvm-unit-tests.
>
> Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 2 +-
> .../selftests/kvm/include/aarch64/spinlock.h | 13 +++++++++
> .../selftests/kvm/lib/aarch64/spinlock.c | 27 +++++++++++++++++++
> 3 files changed, 41 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/spinlock.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/spinlock.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 5d05801ab816..61f0d376af99 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -35,7 +35,7 @@ endif
>
> LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
> LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
> -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S
> +LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c
> LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
>
> TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
> diff --git a/tools/testing/selftests/kvm/include/aarch64/spinlock.h b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
> new file mode 100644
> index 000000000000..cf0984106d14
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/aarch64/spinlock.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef SELFTEST_KVM_ARM64_SPINLOCK_H
> +#define SELFTEST_KVM_ARM64_SPINLOCK_H
> +
> +struct spinlock {
> + int v;
> +};
> +
> +extern void spin_lock(struct spinlock *lock);
> +extern void spin_unlock(struct spinlock *lock);
> +
> +#endif /* SELFTEST_KVM_ARM64_SPINLOCK_H */
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/spinlock.c b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
> new file mode 100644
> index 000000000000..6d66a3dac237
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/spinlock.c
> @@ -0,0 +1,27 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ARM64 Spinlock support
> + */
> +#include <stdint.h>
> +
> +#include "spinlock.h"
> +
> +void spin_lock(struct spinlock *lock)
> +{
> + uint32_t val, res;
> +
> + asm volatile(
> + "1: ldaxr %w0, [%2]\n"
> + " cbnz %w0, 1b\n"
> + " mov %w0, #1\n"
> + " stxr %w1, %w0, [%2]\n"
> + " cbnz %w1, 1b\n"
> + : "=&r" (val), "=&r" (res)
> + : "r" (&lock->v)
> + : "memory");
> +}
> +
> +void spin_unlock(struct spinlock *lock)
> +{
> + asm volatile("stlr wzr, [%0]\n" : : "r" (&lock->v) : "memory");
> +}
> --

Reviewed-by: Andrew Jones <[email protected]>

It makes sense that the explicit barriers in kvm-unit-tests weren't also
inherited, because we already have the implicit barriers with these ld/st
instruction variants. (I suppose we could improve the kvm-unit-tests
implementation at some point.)

Thanks,
drew

2021-09-03 09:40:29

by Andrew Jones

Subject: Re: [PATCH v3 09/12] KVM: arm64: selftests: Add basic GICv3 support

On Wed, Sep 01, 2021 at 09:14:09PM +0000, Raghavendra Rao Ananta wrote:
> Add basic support for ARM Generic Interrupt Controller v3.
> The support allows guests to set up and handle interrupts.
>
> The work is inspired from kvm-unit-tests and the kernel's
> GIC driver (drivers/irqchip/irq-gic-v3.c).
>
> Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 2 +-
> .../selftests/kvm/include/aarch64/gic.h | 21 ++
> tools/testing/selftests/kvm/lib/aarch64/gic.c | 93 +++++++
> .../selftests/kvm/lib/aarch64/gic_private.h | 21 ++
> .../selftests/kvm/lib/aarch64/gic_v3.c | 240 ++++++++++++++++++
> .../selftests/kvm/lib/aarch64/gic_v3.h | 70 +++++
> 6 files changed, 446 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/kvm/include/aarch64/gic.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic.c
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_private.h
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
> create mode 100644 tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 61f0d376af99..5476a8ddef60 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -35,7 +35,7 @@ endif
>
> LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
> LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
> -LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c
> +LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c
> LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
>
> TEST_GEN_PROGS_x86_64 = x86_64/cr4_cpuid_sync_test
> diff --git a/tools/testing/selftests/kvm/include/aarch64/gic.h b/tools/testing/selftests/kvm/include/aarch64/gic.h
> new file mode 100644
> index 000000000000..85dd1e53048e
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/aarch64/gic.h
> @@ -0,0 +1,21 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * ARM Generic Interrupt Controller (GIC) specific defines
> + */
> +
> +#ifndef SELFTEST_KVM_GIC_H
> +#define SELFTEST_KVM_GIC_H
> +
> +enum gic_type {
> + GIC_V3,
> + GIC_TYPE_MAX,
> +};
> +
> +void gic_init(enum gic_type type, unsigned int nr_cpus,
> + void *dist_base, void *redist_base);
> +void gic_irq_enable(unsigned int intid);
> +void gic_irq_disable(unsigned int intid);
> +unsigned int gic_get_and_ack_irq(void);
> +void gic_set_eoi(unsigned int intid);
> +
> +#endif /* SELFTEST_KVM_GIC_H */
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic.c b/tools/testing/selftests/kvm/lib/aarch64/gic.c
> new file mode 100644
> index 000000000000..b0b67f5aeaa6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/gic.c
> @@ -0,0 +1,93 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ARM Generic Interrupt Controller (GIC) support
> + */
> +
> +#include <errno.h>
> +#include <linux/bits.h>
> +#include <linux/sizes.h>
> +
> +#include "kvm_util.h"
> +
> +#include <gic.h>

"gic.h"

> +#include "gic_private.h"
> +#include "processor.h"
> +#include "spinlock.h"
> +
> +static const struct gic_common_ops *gic_common_ops;
> +static struct spinlock gic_lock;
> +
> +static void gic_cpu_init(unsigned int cpu, void *redist_base)
> +{
> + gic_common_ops->gic_cpu_init(cpu, redist_base);
> +}
> +
> +static void
> +gic_dist_init(enum gic_type type, unsigned int nr_cpus, void *dist_base)
> +{
> + const struct gic_common_ops *gic_ops;
> +
> + spin_lock(&gic_lock);
> +
> + /* Distributor initialization is needed only once per VM */
> + if (gic_common_ops) {
> + spin_unlock(&gic_lock);
> + return;
> + }
> +
> + if (type == GIC_V3)
> + gic_ops = &gicv3_ops;
> +
> + gic_ops->gic_init(nr_cpus, dist_base);
> + gic_common_ops = gic_ops;
> +
> + /* Make sure that the initialized data is visible to all the vCPUs */
> + dsb(sy);
> +
> + spin_unlock(&gic_lock);
> +}
> +
> +void gic_init(enum gic_type type, unsigned int nr_cpus,
> + void *dist_base, void *redist_base)
> +{
> + uint32_t cpu = get_vcpuid();
> +
> + GUEST_ASSERT(type < GIC_TYPE_MAX);
> + GUEST_ASSERT(dist_base);
> + GUEST_ASSERT(redist_base);
> + GUEST_ASSERT(nr_cpus);
> +
> + gic_dist_init(type, nr_cpus, dist_base);
> + gic_cpu_init(cpu, redist_base);
> +}
> +
> +void gic_irq_enable(unsigned int intid)
> +{
> + GUEST_ASSERT(gic_common_ops);
> + gic_common_ops->gic_irq_enable(intid);
> +}
> +
> +void gic_irq_disable(unsigned int intid)
> +{
> + GUEST_ASSERT(gic_common_ops);
> + gic_common_ops->gic_irq_disable(intid);
> +}
> +
> +unsigned int gic_get_and_ack_irq(void)
> +{
> + uint64_t irqstat;
> + unsigned int intid;
> +
> + GUEST_ASSERT(gic_common_ops);
> +
> + irqstat = gic_common_ops->gic_read_iar();
> + intid = irqstat & GENMASK(23, 0);
> +
> + return intid;
> +}
> +
> +void gic_set_eoi(unsigned int intid)
> +{
> + GUEST_ASSERT(gic_common_ops);
> + gic_common_ops->gic_write_eoir(intid);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_private.h b/tools/testing/selftests/kvm/lib/aarch64/gic_private.h
> new file mode 100644
> index 000000000000..d81d739433dc
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/gic_private.h
> @@ -0,0 +1,21 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * ARM Generic Interrupt Controller (GIC) private defines that's only
> + * shared among the GIC library code.
> + */
> +
> +#ifndef SELFTEST_KVM_GIC_PRIVATE_H
> +#define SELFTEST_KVM_GIC_PRIVATE_H
> +
> +struct gic_common_ops {
> + void (*gic_init)(unsigned int nr_cpus, void *dist_base);
> + void (*gic_cpu_init)(unsigned int cpu, void *redist_base);
> + void (*gic_irq_enable)(unsigned int intid);
> + void (*gic_irq_disable)(unsigned int intid);
> + uint64_t (*gic_read_iar)(void);
> + void (*gic_write_eoir)(uint32_t irq);
> +};
> +
> +extern const struct gic_common_ops gicv3_ops;
> +
> +#endif /* SELFTEST_KVM_GIC_PRIVATE_H */
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
> new file mode 100644
> index 000000000000..4b635ca6a8cb
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
> @@ -0,0 +1,240 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * ARM Generic Interrupt Controller (GIC) v3 support
> + */
> +
> +#include <linux/sizes.h>
> +
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "delay.h"
> +
> +#include "gic_v3.h"
> +#include "gic_private.h"
> +
> +struct gicv3_data {
> + void *dist_base;
> + void *redist_base[GICV3_MAX_CPUS];
> + unsigned int nr_cpus;
> + unsigned int nr_spis;
> +};
> +
> +#define sgi_base_from_redist(redist_base) (redist_base + SZ_64K)
> +
> +enum gicv3_intid_range {
> + SGI_RANGE,
> + PPI_RANGE,
> + SPI_RANGE,
> + INVALID_RANGE,
> +};
> +
> +static struct gicv3_data gicv3_data;
> +
> +static void gicv3_gicd_wait_for_rwp(void)
> +{
> + unsigned int count = 100000; /* 1s */
> +
> + while (readl(gicv3_data.dist_base + GICD_CTLR) & GICD_CTLR_RWP) {
> + GUEST_ASSERT(count--);
> + udelay(10);
> + }
> +}
> +
> +static void gicv3_gicr_wait_for_rwp(void *redist_base)
> +{
> + unsigned int count = 100000; /* 1s */
> +
> + while (readl(redist_base + GICR_CTLR) & GICR_CTLR_RWP) {
> + GUEST_ASSERT(count--);
> + udelay(10);
> + }
> +}
> +
> +static enum gicv3_intid_range get_intid_range(unsigned int intid)
> +{
> + switch (intid) {
> + case 0 ... 15:
> + return SGI_RANGE;
> + case 16 ... 31:
> + return PPI_RANGE;
> + case 32 ... 1019:
> + return SPI_RANGE;
> + }
> +
> + /* We should not be reaching here */
> + GUEST_ASSERT(0);
> +
> + return INVALID_RANGE;
> +}
> +
> +static uint64_t gicv3_read_iar(void)
> +{
> + uint64_t irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
> +
> + dsb(sy);
> + return irqstat;
> +}
> +
> +static void gicv3_write_eoir(uint32_t irq)
> +{
> + write_sysreg_s(SYS_ICC_EOIR1_EL1, irq);
> + isb();
> +}
> +
> +static void
> +gicv3_config_irq(unsigned int intid, unsigned int offset)
> +{
> + uint32_t cpu = get_vcpuid();
> + uint32_t mask = 1 << (intid % 32);
> + enum gicv3_intid_range intid_range = get_intid_range(intid);
> + void *reg;
> +
> + /* We care about 'cpu' only for SGIs or PPIs */
> + if (intid_range == SGI_RANGE || intid_range == PPI_RANGE) {
> + GUEST_ASSERT(cpu < gicv3_data.nr_cpus);
> +
> + reg = sgi_base_from_redist(gicv3_data.redist_base[cpu]) +
> + offset;
> + writel(mask, reg);
> + gicv3_gicr_wait_for_rwp(gicv3_data.redist_base[cpu]);
> + } else if (intid_range == SPI_RANGE) {
> + reg = gicv3_data.dist_base + offset + (intid / 32) * 4;
> + writel(mask, reg);
> + gicv3_gicd_wait_for_rwp();
> + } else {
> + GUEST_ASSERT(0);
> + }
> +}
> +
> +static void gicv3_irq_enable(unsigned int intid)
> +{
> + gicv3_config_irq(intid, GICD_ISENABLER);
> +}
> +
> +static void gicv3_irq_disable(unsigned int intid)
> +{
> + gicv3_config_irq(intid, GICD_ICENABLER);
> +}
> +
> +static void gicv3_enable_redist(void *redist_base)
> +{
> + uint32_t val = readl(redist_base + GICR_WAKER);
> + unsigned int count = 100000; /* 1s */
> +
> + val &= ~GICR_WAKER_ProcessorSleep;
> + writel(val, redist_base + GICR_WAKER);
> +
> + /* Wait until the processor is 'active' */
> + while (readl(redist_base + GICR_WAKER) & GICR_WAKER_ChildrenAsleep) {
> + GUEST_ASSERT(count--);
> + udelay(10);
> + }
> +}
> +
> +static inline void *gicr_base_gpa_cpu(void *redist_base, uint32_t cpu)
> +{
> + /* Align all the redistributors sequentially */
> + return redist_base + cpu * SZ_64K * 2;
> +}
> +
> +static void gicv3_cpu_init(unsigned int cpu, void *redist_base)
> +{
> + void *sgi_base;
> + unsigned int i;
> + void *redist_base_cpu;
> +
> + GUEST_ASSERT(cpu < gicv3_data.nr_cpus);
> +
> + redist_base_cpu = gicr_base_gpa_cpu(redist_base, cpu);
> + sgi_base = sgi_base_from_redist(redist_base_cpu);
> +
> + gicv3_enable_redist(redist_base_cpu);
> +
> + /*
> + * Mark all the SGI and PPI interrupts as non-secure Group-1.
> + * Also, deactivate and disable them.
> + */
> + writel(~0, sgi_base + GICR_IGROUPR0);
> + writel(~0, sgi_base + GICR_ICACTIVER0);
> + writel(~0, sgi_base + GICR_ICENABLER0);
> +
> + /* Set a default priority for all the SGIs and PPIs */
> + for (i = 0; i < 32; i += 4)
> + writel(GICD_INT_DEF_PRI_X4,
> + sgi_base + GICR_IPRIORITYR0 + i);
> +
> + gicv3_gicr_wait_for_rwp(redist_base_cpu);
> +
> + /* Enable the GIC system register (ICC_*) access */
> + write_sysreg_s(SYS_ICC_SRE_EL1,
> + read_sysreg_s(SYS_ICC_SRE_EL1) | ICC_SRE_EL1_SRE);
> +
> + /* Set a default priority threshold */
> + write_sysreg_s(SYS_ICC_PMR_EL1, ICC_PMR_DEF_PRIO);
> +
> + /* Enable non-secure Group-1 interrupts */
> + write_sysreg_s(SYS_ICC_GRPEN1_EL1, ICC_IGRPEN1_EL1_ENABLE);
> +
> + gicv3_data.redist_base[cpu] = redist_base_cpu;
> +}
> +
> +static void gicv3_dist_init(void)
> +{
> + void *dist_base = gicv3_data.dist_base;
> + unsigned int i;
> +
> + /* Disable the distributor until we set things up */
> + writel(0, dist_base + GICD_CTLR);
> + gicv3_gicd_wait_for_rwp();
> +
> + /*
> + * Mark all the SPI interrupts as non-secure Group-1.
> + * Also, deactivate and disable them.
> + */
> + for (i = 32; i < gicv3_data.nr_spis; i += 32) {
> + writel(~0, dist_base + GICD_IGROUPR + i / 8);
> + writel(~0, dist_base + GICD_ICACTIVER + i / 8);
> + writel(~0, dist_base + GICD_ICENABLER + i / 8);
> + }
> +
> + /* Set a default priority for all the SPIs */
> + for (i = 32; i < gicv3_data.nr_spis; i += 4)
> + writel(GICD_INT_DEF_PRI_X4,
> + dist_base + GICD_IPRIORITYR + i);
> +
> + /* Wait for the settings to sync-in */
> + gicv3_gicd_wait_for_rwp();
> +
> + /* Finally, enable the distributor globally with ARE */
> + writel(GICD_CTLR_ARE_NS | GICD_CTLR_ENABLE_G1A |
> + GICD_CTLR_ENABLE_G1, dist_base + GICD_CTLR);
> + gicv3_gicd_wait_for_rwp();
> +}
> +
> +static void gicv3_init(unsigned int nr_cpus, void *dist_base)
> +{
> + GUEST_ASSERT(nr_cpus <= GICV3_MAX_CPUS);
> +
> + gicv3_data.nr_cpus = nr_cpus;
> + gicv3_data.dist_base = dist_base;
> + gicv3_data.nr_spis = GICD_TYPER_SPIS(
> + readl(gicv3_data.dist_base + GICD_TYPER));
> + if (gicv3_data.nr_spis > 1020)
> + gicv3_data.nr_spis = 1020;
> +
> + /*
> + * Initialize only the distributor for now.
> + * The redistributor and CPU interfaces are initialized
> + * later for every PE.
> + */
> + gicv3_dist_init();
> +}
> +
> +const struct gic_common_ops gicv3_ops = {
> + .gic_init = gicv3_init,
> + .gic_cpu_init = gicv3_cpu_init,
> + .gic_irq_enable = gicv3_irq_enable,
> + .gic_irq_disable = gicv3_irq_disable,
> + .gic_read_iar = gicv3_read_iar,
> + .gic_write_eoir = gicv3_write_eoir,
> +};
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
> new file mode 100644
> index 000000000000..d41195e347b3
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.h
> @@ -0,0 +1,70 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * ARM Generic Interrupt Controller (GIC) v3 specific defines
> + */
> +
> +#ifndef SELFTEST_KVM_GICV3_H
> +#define SELFTEST_KVM_GICV3_H
> +
> +#include "processor.h"
> +
> +/*
> + * Distributor registers
> + */
> +#define GICD_CTLR 0x0000
> +#define GICD_TYPER 0x0004
> +#define GICD_IGROUPR 0x0080
> +#define GICD_ISENABLER 0x0100
> +#define GICD_ICENABLER 0x0180
> +#define GICD_ICACTIVER 0x0380
> +#define GICD_IPRIORITYR 0x0400
> +
> +/*
> + * The assumption is that the guest runs in a non-secure mode.
> + * The following bits of GICD_CTLR are defined accordingly.
> + */
> +#define GICD_CTLR_RWP (1U << 31)
> +#define GICD_CTLR_nASSGIreq (1U << 8)
> +#define GICD_CTLR_ARE_NS (1U << 4)
> +#define GICD_CTLR_ENABLE_G1A (1U << 1)
> +#define GICD_CTLR_ENABLE_G1 (1U << 0)
> +
> +#define GICD_TYPER_SPIS(typer) ((((typer) & 0x1f) + 1) * 32)
> +#define GICD_INT_DEF_PRI_X4 0xa0a0a0a0
> +
> +/*
> + * Redistributor registers
> + */
> +#define GICR_CTLR 0x000
> +#define GICR_WAKER 0x014
> +
> +#define GICR_CTLR_RWP (1U << 3)
> +
> +#define GICR_WAKER_ProcessorSleep (1U << 1)
> +#define GICR_WAKER_ChildrenAsleep (1U << 2)
> +
> +/*
> + * Redistributor registers, offsets from SGI base
> + */
> +#define GICR_IGROUPR0 GICD_IGROUPR
> +#define GICR_ISENABLER0 GICD_ISENABLER
> +#define GICR_ICENABLER0 GICD_ICENABLER
> +#define GICR_ICACTIVER0 GICD_ICACTIVER
> +#define GICR_IPRIORITYR0 GICD_IPRIORITYR
> +
> +/* CPU interface registers */
> +#define SYS_ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
> +#define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
> +#define SYS_ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1)
> +#define SYS_ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
> +#define SYS_ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
> +
> +#define ICC_PMR_DEF_PRIO 0xf0
> +
> +#define ICC_SRE_EL1_SRE (1U << 0)
> +
> +#define ICC_IGRPEN1_EL1_ENABLE (1U << 0)
> +
> +#define GICV3_MAX_CPUS 512
> +
> +#endif /* SELFTEST_KVM_GICV3_H */
> --
> 2.33.0.153.gba50c8fa24-goog
>

Looks good to me. I also see some nice stuff to bring over to
kvm-unit-tests in order to make some improvements there.

Reviewed-by: Andrew Jones <[email protected]>

Thanks,
drew

2021-09-03 13:04:59

by Andrew Jones

Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> Since the timer stack (hardware and KVM) is per-CPU, races can
> occur when the scheduler decides to migrate a vCPU thread to a
> different physical CPU. Hence, include an option to stress-test
> this path as well by forcing the vCPUs to migrate across physical
> CPUs in the system at a configurable rate.
>
> Originally, the bug fixed by commit 3134cc8beb69d0d
> ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> was discovered using the arch_timer test with vCPU migrations,
> and it can be easily reproduced.
>
> Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> ---
> .../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
> 1 file changed, 107 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> index 1383f33850e9..de246c7afab2 100644
> --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> @@ -14,6 +14,8 @@
> *
> * The test provides command-line options to configure the timer's
> * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> + * To stress-test the timer stack even more, an option to migrate the
> + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> *
> * Copyright (c) 2021, Google LLC.
> */
> @@ -24,6 +26,8 @@
> #include <pthread.h>
> #include <linux/kvm.h>
> #include <linux/sizes.h>
> +#include <linux/bitmap.h>
> +#include <sys/sysinfo.h>
>
> #include "kvm_util.h"
> #include "processor.h"
> @@ -41,12 +45,14 @@ struct test_args {
> int nr_vcpus;
> int nr_iter;
> int timer_period_ms;
> + int migration_freq_ms;
> };
>
> static struct test_args test_args = {
> .nr_vcpus = NR_VCPUS_DEF,
> .nr_iter = NR_TEST_ITERS_DEF,
> .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> + .migration_freq_ms = 0, /* Turn off migrations by default */

I'd rather we enable good tests like these by default.

> };
>
> #define msecs_to_usecs(msec) ((msec) * 1000LL)
> @@ -81,6 +87,9 @@ struct test_vcpu {
> static struct test_vcpu test_vcpu[KVM_MAX_VCPUS];
> static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];
>
> +static unsigned long *vcpu_done_map;
> +static pthread_mutex_t vcpu_done_map_lock;
> +
> static void
> guest_configure_timer_action(struct test_vcpu_shared_data *shared_data)
> {
> @@ -216,6 +225,11 @@ static void *test_vcpu_run(void *arg)
>
> vcpu_run(vm, vcpuid);
>
> + /* Currently, any exit from guest is an indication of completion */
> + pthread_mutex_lock(&vcpu_done_map_lock);
> + set_bit(vcpuid, vcpu_done_map);
> + pthread_mutex_unlock(&vcpu_done_map_lock);
> +
> switch (get_ucall(vm, vcpuid, &uc)) {
> case UCALL_SYNC:
> case UCALL_DONE:
> @@ -235,9 +249,73 @@ static void *test_vcpu_run(void *arg)
> return NULL;
> }
>
> +static uint32_t test_get_pcpu(void)
> +{
> + uint32_t pcpu;
> + unsigned int nproc_conf;
> + cpu_set_t online_cpuset;
> +
> + nproc_conf = get_nprocs_conf();
> + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
> +
> + /* Randomly find an available pCPU to place a vCPU on */
> + do {
> + pcpu = rand() % nproc_conf;
> + } while (!CPU_ISSET(pcpu, &online_cpuset));
> +
> + return pcpu;
> +}
> +static int test_migrate_vcpu(struct test_vcpu *vcpu)
> +{
> + int ret;
> + cpu_set_t cpuset;
> + uint32_t new_pcpu = test_get_pcpu();
> +
> + CPU_ZERO(&cpuset);
> + CPU_SET(new_pcpu, &cpuset);
> + ret = pthread_setaffinity_np(vcpu->pt_vcpu_run,
> + sizeof(cpuset), &cpuset);
> +
> + /* Allow the error where the vCPU thread is already finished */
> + TEST_ASSERT(ret == 0 || ret == ESRCH,
> + "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n",
> + vcpu->vcpuid, new_pcpu, ret);

It'd be good to collect stats for the two cases so we know how many
vcpus we actually migrated with a successful setaffinity and how many
just completed too early. If our stats don't look good, then we can
adjust our timeouts and frequencies.

> +
> + return ret;
> +}
> +static void *test_vcpu_migration(void *arg)
> +{
> + unsigned int i, n_done;
> + bool vcpu_done;
> +
> + do {
> + usleep(msecs_to_usecs(test_args.migration_freq_ms));
> +
> + for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) {
> + pthread_mutex_lock(&vcpu_done_map_lock);
> + vcpu_done = test_bit(i, vcpu_done_map);
> + pthread_mutex_unlock(&vcpu_done_map_lock);
> +
> + if (vcpu_done) {
> + n_done++;
> + continue;
> + }
> +
> + test_migrate_vcpu(&test_vcpu[i]);
> + }
> + } while (test_args.nr_vcpus != n_done);
> +
> + return NULL;
> +}
> +
> static void test_run(struct kvm_vm *vm)
> {
> int i, ret;
> + pthread_t pt_vcpu_migration;
> +
> + pthread_mutex_init(&vcpu_done_map_lock, NULL);
> + vcpu_done_map = bitmap_alloc(test_args.nr_vcpus);
> + TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");
>
> for (i = 0; i < test_args.nr_vcpus; i++) {
> ret = pthread_create(&test_vcpu[i].pt_vcpu_run, NULL,
> @@ -245,8 +323,23 @@ static void test_run(struct kvm_vm *vm)
> TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
> }
>
> + /* Spawn a thread to control the vCPU migrations */
> + if (test_args.migration_freq_ms) {
> + srand(time(NULL));
> +
> + ret = pthread_create(&pt_vcpu_migration, NULL,
> + test_vcpu_migration, NULL);
> + TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
> + }
> +
> +
> for (i = 0; i < test_args.nr_vcpus; i++)
> pthread_join(test_vcpu[i].pt_vcpu_run, NULL);
> +
> + if (test_args.migration_freq_ms)
> + pthread_join(pt_vcpu_migration, NULL);
> +
> + bitmap_free(vcpu_done_map);
> }
>
> static struct kvm_vm *test_vm_create(void)
> @@ -286,6 +379,7 @@ static void test_print_help(char *name)
> NR_TEST_ITERS_DEF);
> pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n",
> TIMER_TEST_PERIOD_MS_DEF);
> + pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: 0)\n");
> pr_info("\t-h: print this help screen\n");
> }
>
> @@ -293,7 +387,7 @@ static bool parse_args(int argc, char *argv[])
> {
> int opt;
>
> - while ((opt = getopt(argc, argv, "hn:i:p:")) != -1) {
> + while ((opt = getopt(argc, argv, "hn:i:p:m:")) != -1) {
> switch (opt) {
> case 'n':
> test_args.nr_vcpus = atoi(optarg);
> @@ -320,6 +414,13 @@ static bool parse_args(int argc, char *argv[])
> goto err;
> }
> break;
> + case 'm':
> + test_args.migration_freq_ms = atoi(optarg);
> + if (test_args.migration_freq_ms < 0) {
> + pr_info("0 or positive value needed for -m\n");
> + goto err;
> + }
> + break;
> case 'h':
> default:
> goto err;
> @@ -343,6 +444,11 @@ int main(int argc, char *argv[])
> if (!parse_args(argc, argv))
> exit(KSFT_SKIP);
>
> + if (get_nprocs() < 2) {

if (test_args.migration_freq_ms && get_nprocs() < 2)

> + print_skip("At least two physical CPUs needed for vCPU migration");
> + exit(KSFT_SKIP);
> + }
> +
> vm = test_vm_create();
> test_run(vm);
> kvm_vm_free(vm);
> --
> 2.33.0.153.gba50c8fa24-goog
>

Thanks,
drew

2021-09-03 20:55:21

by Raghavendra Rao Ananta

Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

On Fri, Sep 3, 2021 at 4:05 AM Andrew Jones <[email protected]> wrote:
>
> On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> > Since the timer stack (hardware and KVM) is per-CPU, there
> > are potential chances for races to occur when the scheduler
> > decides to migrate a vCPU thread to a different physical CPU.
> > Hence, include an option to stress-test this part as well by
> > forcing the vCPUs to migrate across physical CPUs in the
> > system at a particular rate.
> >
> > Originally, the bug for the fix with commit 3134cc8beb69d0d
> > ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> > was discovered using arch_timer test with vCPU migrations and
> > can be easily reproduced.
> >
> > Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> > ---
> > .../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
> > 1 file changed, 107 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > index 1383f33850e9..de246c7afab2 100644
> > --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > @@ -14,6 +14,8 @@
> > *
> > * The test provides command-line options to configure the timer's
> > * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> > + * To stress-test the timer stack even more, an option to migrate the
> > + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> > *
> > * Copyright (c) 2021, Google LLC.
> > */
> > @@ -24,6 +26,8 @@
> > #include <pthread.h>
> > #include <linux/kvm.h>
> > #include <linux/sizes.h>
> > +#include <linux/bitmap.h>
> > +#include <sys/sysinfo.h>
> >
> > #include "kvm_util.h"
> > #include "processor.h"
> > @@ -41,12 +45,14 @@ struct test_args {
> > int nr_vcpus;
> > int nr_iter;
> > int timer_period_ms;
> > + int migration_freq_ms;
> > };
> >
> > static struct test_args test_args = {
> > .nr_vcpus = NR_VCPUS_DEF,
> > .nr_iter = NR_TEST_ITERS_DEF,
> > .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> > + .migration_freq_ms = 0, /* Turn off migrations by default */
>
> I'd rather we enable good tests like these by default.
>
Well, that was my original idea, but I was concerned about the ease of
diagnosing things, since it'll become too noisy. And so I left it as a
personal preference. But I can include it back and see how it goes.
> > };
> >
> > #define msecs_to_usecs(msec) ((msec) * 1000LL)
> > @@ -81,6 +87,9 @@ struct test_vcpu {
> > static struct test_vcpu test_vcpu[KVM_MAX_VCPUS];
> > static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];
> >
> > +static unsigned long *vcpu_done_map;
> > +static pthread_mutex_t vcpu_done_map_lock;
> > +
> > static void
> > guest_configure_timer_action(struct test_vcpu_shared_data *shared_data)
> > {
> > @@ -216,6 +225,11 @@ static void *test_vcpu_run(void *arg)
> >
> > vcpu_run(vm, vcpuid);
> >
> > + /* Currently, any exit from guest is an indication of completion */
> > + pthread_mutex_lock(&vcpu_done_map_lock);
> > + set_bit(vcpuid, vcpu_done_map);
> > + pthread_mutex_unlock(&vcpu_done_map_lock);
> > +
> > switch (get_ucall(vm, vcpuid, &uc)) {
> > case UCALL_SYNC:
> > case UCALL_DONE:
> > @@ -235,9 +249,73 @@ static void *test_vcpu_run(void *arg)
> > return NULL;
> > }
> >
> > +static uint32_t test_get_pcpu(void)
> > +{
> > + uint32_t pcpu;
> > + unsigned int nproc_conf;
> > + cpu_set_t online_cpuset;
> > +
> > + nproc_conf = get_nprocs_conf();
> > + sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
> > +
> > + /* Randomly find an available pCPU to place a vCPU on */
> > + do {
> > + pcpu = rand() % nproc_conf;
> > + } while (!CPU_ISSET(pcpu, &online_cpuset));
> > +
> > + return pcpu;
> > +}
> > +static int test_migrate_vcpu(struct test_vcpu *vcpu)
> > +{
> > + int ret;
> > + cpu_set_t cpuset;
> > + uint32_t new_pcpu = test_get_pcpu();
> > +
> > + CPU_ZERO(&cpuset);
> > + CPU_SET(new_pcpu, &cpuset);
> > + ret = pthread_setaffinity_np(vcpu->pt_vcpu_run,
> > + sizeof(cpuset), &cpuset);
> > +
> > + /* Allow the error where the vCPU thread is already finished */
> > + TEST_ASSERT(ret == 0 || ret == ESRCH,
> > + "Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n",
> > + vcpu->vcpuid, new_pcpu, ret);
>
> It'd be good to collect stats for the two cases so we know how many
> vcpus we actually migrated with a successful setaffinity and how many
> just completed too early. If our stats don't look good, then we can
> adjust our timeouts and frequencies.
>
I can do that, but since we don't attempt to migrate if the migration
thread learns
that the vCPU is already done, I'm guessing we may not hit ESRCH as much.
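For instance, a pair of counters could record both outcomes so the migration
frequency can be tuned from the numbers. A sketch (the counter and helper
names here are illustrative, not part of the patch):

```c
#include <errno.h>

/* Illustrative stats: successful migrations vs. attempts that raced
 * with vCPU-thread exit (pthread_setaffinity_np() returning ESRCH). */
static unsigned long migrate_ok, migrate_esrch;

/* Account one migration attempt's pthread_setaffinity_np() result. */
static void account_migration(int ret)
{
	if (ret == 0)
		migrate_ok++;
	else if (ret == ESRCH)
		migrate_esrch++;
}
```

Printing the two counters at the end of test_run() would then show whether
the migration frequency or the timeouts need adjusting.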
> > +
> > + return ret;
> > +}
> > +static void *test_vcpu_migration(void *arg)
> > +{
> > + unsigned int i, n_done;
> > + bool vcpu_done;
> > +
> > + do {
> > + usleep(msecs_to_usecs(test_args.migration_freq_ms));
> > +
> > + for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) {
> > + pthread_mutex_lock(&vcpu_done_map_lock);
> > + vcpu_done = test_bit(i, vcpu_done_map);
> > + pthread_mutex_unlock(&vcpu_done_map_lock);
> > +
> > + if (vcpu_done) {
> > + n_done++;
> > + continue;
> > + }
> > +
> > + test_migrate_vcpu(&test_vcpu[i]);
> > + }
> > + } while (test_args.nr_vcpus != n_done);
> > +
> > + return NULL;
> > +}
> > +
> > static void test_run(struct kvm_vm *vm)
> > {
> > int i, ret;
> > + pthread_t pt_vcpu_migration;
> > +
> > + pthread_mutex_init(&vcpu_done_map_lock, NULL);
> > + vcpu_done_map = bitmap_alloc(test_args.nr_vcpus);
> > + TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");
> >
> > for (i = 0; i < test_args.nr_vcpus; i++) {
> > ret = pthread_create(&test_vcpu[i].pt_vcpu_run, NULL,
> > @@ -245,8 +323,23 @@ static void test_run(struct kvm_vm *vm)
> > TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
> > }
> >
> > + /* Spawn a thread to control the vCPU migrations */
> > + if (test_args.migration_freq_ms) {
> > + srand(time(NULL));
> > +
> > + ret = pthread_create(&pt_vcpu_migration, NULL,
> > + test_vcpu_migration, NULL);
> > + TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
> > + }
> > +
> > +
> > for (i = 0; i < test_args.nr_vcpus; i++)
> > pthread_join(test_vcpu[i].pt_vcpu_run, NULL);
> > +
> > + if (test_args.migration_freq_ms)
> > + pthread_join(pt_vcpu_migration, NULL);
> > +
> > + bitmap_free(vcpu_done_map);
> > }
> >
> > static struct kvm_vm *test_vm_create(void)
> > @@ -286,6 +379,7 @@ static void test_print_help(char *name)
> > NR_TEST_ITERS_DEF);
> > pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n",
> > TIMER_TEST_PERIOD_MS_DEF);
> > + pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: 0)\n");
> > pr_info("\t-h: print this help screen\n");
> > }
> >
> > @@ -293,7 +387,7 @@ static bool parse_args(int argc, char *argv[])
> > {
> > int opt;
> >
> > - while ((opt = getopt(argc, argv, "hn:i:p:")) != -1) {
> > + while ((opt = getopt(argc, argv, "hn:i:p:m:")) != -1) {
> > switch (opt) {
> > case 'n':
> > test_args.nr_vcpus = atoi(optarg);
> > @@ -320,6 +414,13 @@ static bool parse_args(int argc, char *argv[])
> > goto err;
> > }
> > break;
> > + case 'm':
> > + test_args.migration_freq_ms = atoi(optarg);
> > + if (test_args.migration_freq_ms < 0) {
> > + pr_info("0 or positive value needed for -m\n");
> > + goto err;
> > + }
> > + break;
> > case 'h':
> > default:
> > goto err;
> > @@ -343,6 +444,11 @@ int main(int argc, char *argv[])
> > if (!parse_args(argc, argv))
> > exit(KSFT_SKIP);
> >
> > + if (get_nprocs() < 2) {
>
> if (test_args.migration_freq_ms && get_nprocs() < 2)
>
> > + print_skip("At least two physical CPUs needed for vCPU migration");
> > + exit(KSFT_SKIP);
> > + }
> > +
> > vm = test_vm_create();
> > test_run(vm);
> > kvm_vm_free(vm);
> > --
> > 2.33.0.153.gba50c8fa24-goog
> >
>
> Thanks,
> drew
>

2021-09-06 06:41:11

by Andrew Jones

Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

On Fri, Sep 03, 2021 at 01:53:27PM -0700, Raghavendra Rao Ananta wrote:
> On Fri, Sep 3, 2021 at 4:05 AM Andrew Jones <[email protected]> wrote:
> >
> > On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> > > Since the timer stack (hardware and KVM) is per-CPU, there
> > > are potential chances for races to occur when the scheduler
> > > decides to migrate a vCPU thread to a different physical CPU.
> > > Hence, include an option to stress-test this part as well by
> > > forcing the vCPUs to migrate across physical CPUs in the
> > > system at a particular rate.
> > >
> > > Originally, the bug for the fix with commit 3134cc8beb69d0d
> > > ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> > > was discovered using arch_timer test with vCPU migrations and
> > > can be easily reproduced.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> > > ---
> > > .../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
> > > 1 file changed, 107 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > index 1383f33850e9..de246c7afab2 100644
> > > --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > @@ -14,6 +14,8 @@
> > > *
> > > * The test provides command-line options to configure the timer's
> > > * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> > > + * To stress-test the timer stack even more, an option to migrate the
> > > + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> > > *
> > > * Copyright (c) 2021, Google LLC.
> > > */
> > > @@ -24,6 +26,8 @@
> > > #include <pthread.h>
> > > #include <linux/kvm.h>
> > > #include <linux/sizes.h>
> > > +#include <linux/bitmap.h>
> > > +#include <sys/sysinfo.h>
> > >
> > > #include "kvm_util.h"
> > > #include "processor.h"
> > > @@ -41,12 +45,14 @@ struct test_args {
> > > int nr_vcpus;
> > > int nr_iter;
> > > int timer_period_ms;
> > > + int migration_freq_ms;
> > > };
> > >
> > > static struct test_args test_args = {
> > > .nr_vcpus = NR_VCPUS_DEF,
> > > .nr_iter = NR_TEST_ITERS_DEF,
> > > .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> > > + .migration_freq_ms = 0, /* Turn off migrations by default */
> >
> > I'd rather we enable good tests like these by default.
> >
> Well, that was my original idea, but I was concerned about the ease of
> diagnosing things, since it'll become too noisy. And so I left it as a
> personal preference. But I can include it back and see how it goes.

Break the tests into two? One run without migration and one with. If
neither work, then we can diagnose the one without first, possibly
avoiding the need to diagnose the one with.
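The two-run split could even be driven by a small runner. A sketch (the
binary name, the -m value, and the helper are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

/* Run the test twice: first without migrations, then with them, so a
 * failure in the simpler configuration is diagnosed first. */
static int run_both(const char *test_bin)
{
	char cmd[256];

	/* Migration-free pass first. */
	snprintf(cmd, sizeof(cmd), "%s", test_bin);
	if (system(cmd) != 0)
		return 1;

	/* Then the stress pass with vCPU migrations (e.g. every 2 ms). */
	snprintf(cmd, sizeof(cmd), "%s -m 2", test_bin);
	return system(cmd) != 0 ? 2 : 0;
}
```

A nonzero return then identifies which configuration failed.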

Thanks,
drew

2021-09-07 16:21:41

by Raghavendra Rao Ananta

Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

On Sun, Sep 5, 2021 at 11:39 PM Andrew Jones <[email protected]> wrote:
>
> On Fri, Sep 03, 2021 at 01:53:27PM -0700, Raghavendra Rao Ananta wrote:
> > On Fri, Sep 3, 2021 at 4:05 AM Andrew Jones <[email protected]> wrote:
> > >
> > > On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> > > > Since the timer stack (hardware and KVM) is per-CPU, there
> > > > are potential chances for races to occur when the scheduler
> > > > decides to migrate a vCPU thread to a different physical CPU.
> > > > Hence, include an option to stress-test this part as well by
> > > > forcing the vCPUs to migrate across physical CPUs in the
> > > > system at a particular rate.
> > > >
> > > > Originally, the bug for the fix with commit 3134cc8beb69d0d
> > > > ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> > > > was discovered using arch_timer test with vCPU migrations and
> > > > can be easily reproduced.
> > > >
> > > > Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> > > > ---
> > > > .../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
> > > > 1 file changed, 107 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > index 1383f33850e9..de246c7afab2 100644
> > > > --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > @@ -14,6 +14,8 @@
> > > > *
> > > > * The test provides command-line options to configure the timer's
> > > > * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> > > > + * To stress-test the timer stack even more, an option to migrate the
> > > > + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> > > > *
> > > > * Copyright (c) 2021, Google LLC.
> > > > */
> > > > @@ -24,6 +26,8 @@
> > > > #include <pthread.h>
> > > > #include <linux/kvm.h>
> > > > #include <linux/sizes.h>
> > > > +#include <linux/bitmap.h>
> > > > +#include <sys/sysinfo.h>
> > > >
> > > > #include "kvm_util.h"
> > > > #include "processor.h"
> > > > @@ -41,12 +45,14 @@ struct test_args {
> > > > int nr_vcpus;
> > > > int nr_iter;
> > > > int timer_period_ms;
> > > > + int migration_freq_ms;
> > > > };
> > > >
> > > > static struct test_args test_args = {
> > > > .nr_vcpus = NR_VCPUS_DEF,
> > > > .nr_iter = NR_TEST_ITERS_DEF,
> > > > .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> > > > + .migration_freq_ms = 0, /* Turn off migrations by default */
> > >
> > > I'd rather we enable good tests like these by default.
> > >
> > Well, that was my original idea, but I was concerned about the ease of
> > diagnosing things, since it'll become too noisy. And so I left it as a
> > personal preference. But I can include it back and see how it goes.
>
> Break the tests into two? One run without migration and one with. If
> neither work, then we can diagnose the one without first, possibly
> avoiding the need to diagnose the one with.
>
Right. I guess that's where the test's migration switch can come in handy.
Let's turn migration ON by default. If we see a failure, we can turn it
OFF and rerun the tests. I'll try to include some verbose printing for
diagnosability.

Regards,
Raghavendra
> Thanks,
> drew
>

2021-09-07 16:23:08

by Andrew Jones

Subject: Re: [PATCH v3 12/12] KVM: arm64: selftests: arch_timer: Support vCPU migration

On Tue, Sep 07, 2021 at 09:14:54AM -0700, Raghavendra Rao Ananta wrote:
> On Sun, Sep 5, 2021 at 11:39 PM Andrew Jones <[email protected]> wrote:
> >
> > On Fri, Sep 03, 2021 at 01:53:27PM -0700, Raghavendra Rao Ananta wrote:
> > > On Fri, Sep 3, 2021 at 4:05 AM Andrew Jones <[email protected]> wrote:
> > > >
> > > > On Wed, Sep 01, 2021 at 09:14:12PM +0000, Raghavendra Rao Ananta wrote:
> > > > > Since the timer stack (hardware and KVM) is per-CPU, there
> > > > > are potential chances for races to occur when the scheduler
> > > > > decides to migrate a vCPU thread to a different physical CPU.
> > > > > Hence, include an option to stress-test this part as well by
> > > > > forcing the vCPUs to migrate across physical CPUs in the
> > > > > system at a particular rate.
> > > > >
> > > > > Originally, the bug for the fix with commit 3134cc8beb69d0d
> > > > > ("KVM: arm64: vgic: Resample HW pending state on deactivation")
> > > > > was discovered using arch_timer test with vCPU migrations and
> > > > > can be easily reproduced.
> > > > >
> > > > > Signed-off-by: Raghavendra Rao Ananta <[email protected]>
> > > > > ---
> > > > > .../selftests/kvm/aarch64/arch_timer.c | 108 +++++++++++++++++-
> > > > > 1 file changed, 107 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > > index 1383f33850e9..de246c7afab2 100644
> > > > > --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > > +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> > > > > @@ -14,6 +14,8 @@
> > > > > *
> > > > > * The test provides command-line options to configure the timer's
> > > > > * period (-p), number of vCPUs (-n), and iterations per stage (-i).
> > > > > + * To stress-test the timer stack even more, an option to migrate the
> > > > > + * vCPUs across pCPUs (-m), at a particular rate, is also provided.
> > > > > *
> > > > > * Copyright (c) 2021, Google LLC.
> > > > > */
> > > > > @@ -24,6 +26,8 @@
> > > > > #include <pthread.h>
> > > > > #include <linux/kvm.h>
> > > > > #include <linux/sizes.h>
> > > > > +#include <linux/bitmap.h>
> > > > > +#include <sys/sysinfo.h>
> > > > >
> > > > > #include "kvm_util.h"
> > > > > #include "processor.h"
> > > > > @@ -41,12 +45,14 @@ struct test_args {
> > > > > int nr_vcpus;
> > > > > int nr_iter;
> > > > > int timer_period_ms;
> > > > > + int migration_freq_ms;
> > > > > };
> > > > >
> > > > > static struct test_args test_args = {
> > > > > .nr_vcpus = NR_VCPUS_DEF,
> > > > > .nr_iter = NR_TEST_ITERS_DEF,
> > > > > .timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
> > > > > + .migration_freq_ms = 0, /* Turn off migrations by default */
> > > >
> > > > I'd rather we enable good tests like these by default.
> > > >
> > > Well, that was my original idea, but I was concerned about the ease of
> > > diagnosing things, since it'll become too noisy. And so I left it as a
> > > personal preference. But I can include it back and see how it goes.
> >
> > Break the tests into two? One run without migration and one with. If
> > neither work, then we can diagnose the one without first, possibly
> > avoiding the need to diagnose the one with.
> >
> Right. I guess that's where the test's migration switch can come in handy.
> Let's turn migration ON by default. If we see a failure, we can turn
> OFF migration
> and run the tests. I'll try to include some verbose printing for diagnosability.
>

Sounds good. You can also consider using pr_debug if you feel the need to
balance verbosity with diagnostics.

Thanks,
drew