Date: Thu, 7 Oct 2021 23:34:39 +0000
In-Reply-To: <20211007233439.1826892-1-rananta@google.com>
Message-Id: <20211007233439.1826892-16-rananta@google.com>
Mime-Version: 1.0
References: <20211007233439.1826892-1-rananta@google.com>
X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog
Subject: [PATCH v8 15/15] KVM: arm64: selftests: arch_timer: Support vCPU migration
From: Raghavendra Rao Ananta
To: Paolo Bonzini, Marc Zyngier, Andrew Jones, James Morse,
	Alexandru Elisei, Suzuki K Poulose
Cc: Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller,
	Oliver Upton, Reiji Watanabe, Jing Zhang, Raghavendra Rao Anata,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Since the timer stack (hardware and KVM) is per-CPU, races can occur
when the scheduler decides to migrate a vCPU thread to a different
physical CPU.
Hence, include an option to stress-test this path as well by forcing the
vCPUs to migrate across physical CPUs in the system at a particular rate.

The bug addressed by commit 3134cc8beb69d0d ("KVM: arm64: vgic: Resample
HW pending state on deactivation") was originally discovered using the
arch_timer test with vCPU migration and can be easily reproduced with it.

Signed-off-by: Raghavendra Rao Ananta
Reviewed-by: Andrew Jones
---
 .../selftests/kvm/aarch64/arch_timer.c        | 115 +++++++++++++++++-
 1 file changed, 114 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
index 3b6ea6a462f4..bf6a45b0b8dc 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
@@ -14,6 +14,8 @@
  *
  * The test provides command-line options to configure the timer's
  * period (-p), number of vCPUs (-n), and iterations per stage (-i).
+ * To stress-test the timer stack even more, an option to migrate the
+ * vCPUs across pCPUs (-m), at a particular rate, is also provided.
  *
  * Copyright (c) 2021, Google LLC.
 */
@@ -24,6 +26,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "kvm_util.h"
 #include "processor.h"
@@ -36,17 +40,20 @@
 #define NR_TEST_ITERS_DEF		5
 #define TIMER_TEST_PERIOD_MS_DEF	10
 #define TIMER_TEST_ERR_MARGIN_US	100
+#define TIMER_TEST_MIGRATION_FREQ_MS	2
 
 struct test_args {
 	int nr_vcpus;
 	int nr_iter;
 	int timer_period_ms;
+	int migration_freq_ms;
 };
 
 static struct test_args test_args = {
 	.nr_vcpus = NR_VCPUS_DEF,
 	.nr_iter = NR_TEST_ITERS_DEF,
 	.timer_period_ms = TIMER_TEST_PERIOD_MS_DEF,
+	.migration_freq_ms = TIMER_TEST_MIGRATION_FREQ_MS,
 };
 
 #define msecs_to_usecs(msec)		((msec) * 1000LL)
@@ -80,6 +87,9 @@ static struct test_vcpu_shared_data vcpu_shared_data[KVM_MAX_VCPUS];
 
 static int vtimer_irq, ptimer_irq;
 
+static unsigned long *vcpu_done_map;
+static pthread_mutex_t vcpu_done_map_lock;
+
 static void
 guest_configure_timer_action(struct test_vcpu_shared_data *shared_data)
 {
@@ -215,6 +225,11 @@ static void *test_vcpu_run(void *arg)
 
 	vcpu_run(vm, vcpuid);
 
+	/* Currently, any exit from guest is an indication of completion */
+	pthread_mutex_lock(&vcpu_done_map_lock);
+	set_bit(vcpuid, vcpu_done_map);
+	pthread_mutex_unlock(&vcpu_done_map_lock);
+
 	switch (get_ucall(vm, vcpuid, &uc)) {
 	case UCALL_SYNC:
 	case UCALL_DONE:
@@ -233,9 +248,78 @@ static void *test_vcpu_run(void *arg)
 	return NULL;
 }
 
+static uint32_t test_get_pcpu(void)
+{
+	uint32_t pcpu;
+	unsigned int nproc_conf;
+	cpu_set_t online_cpuset;
+
+	nproc_conf = get_nprocs_conf();
+	sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
+
+	/* Randomly find an available pCPU to place a vCPU on */
+	do {
+		pcpu = rand() % nproc_conf;
+	} while (!CPU_ISSET(pcpu, &online_cpuset));
+
+	return pcpu;
+}
+
+static int test_migrate_vcpu(struct test_vcpu *vcpu)
+{
+	int ret;
+	cpu_set_t cpuset;
+	uint32_t new_pcpu = test_get_pcpu();
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(new_pcpu, &cpuset);
+
+	pr_debug("Migrating vCPU: %u to pCPU: %u\n", vcpu->vcpuid, new_pcpu);
+
+	ret = pthread_setaffinity_np(vcpu->pt_vcpu_run,
+					sizeof(cpuset), &cpuset);
+
+	/* Allow the error where the vCPU thread is already finished */
+	TEST_ASSERT(ret == 0 || ret == ESRCH,
+			"Failed to migrate the vCPU:%u to pCPU: %u; ret: %d\n",
+			vcpu->vcpuid, new_pcpu, ret);
+
+	return ret;
+}
+
+static void *test_vcpu_migration(void *arg)
+{
+	unsigned int i, n_done;
+	bool vcpu_done;
+
+	do {
+		usleep(msecs_to_usecs(test_args.migration_freq_ms));
+
+		for (n_done = 0, i = 0; i < test_args.nr_vcpus; i++) {
+			pthread_mutex_lock(&vcpu_done_map_lock);
+			vcpu_done = test_bit(i, vcpu_done_map);
+			pthread_mutex_unlock(&vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			test_migrate_vcpu(&test_vcpu[i]);
+		}
+	} while (test_args.nr_vcpus != n_done);
+
+	return NULL;
+}
+
 static void test_run(struct kvm_vm *vm)
 {
 	int i, ret;
+	pthread_t pt_vcpu_migration;
+
+	pthread_mutex_init(&vcpu_done_map_lock, NULL);
+	vcpu_done_map = bitmap_zalloc(test_args.nr_vcpus);
+	TEST_ASSERT(vcpu_done_map, "Failed to allocate vcpu done bitmap\n");
 
 	for (i = 0; i < test_args.nr_vcpus; i++) {
 		ret = pthread_create(&test_vcpu[i].pt_vcpu_run, NULL,
@@ -243,8 +327,23 @@ static void test_run(struct kvm_vm *vm)
 		TEST_ASSERT(!ret, "Failed to create vCPU-%d pthread\n", i);
 	}
 
+	/* Spawn a thread to control the vCPU migrations */
+	if (test_args.migration_freq_ms) {
+		srand(time(NULL));
+
+		ret = pthread_create(&pt_vcpu_migration, NULL,
+					test_vcpu_migration, NULL);
+		TEST_ASSERT(!ret, "Failed to create the migration pthread\n");
+	}
+
+
 	for (i = 0; i < test_args.nr_vcpus; i++)
 		pthread_join(test_vcpu[i].pt_vcpu_run, NULL);
+
+	if (test_args.migration_freq_ms)
+		pthread_join(pt_vcpu_migration, NULL);
+
+	bitmap_free(vcpu_done_map);
 }
 
 static void test_init_timer_irq(struct kvm_vm *vm)
@@ -301,6 +400,8 @@ static void test_print_help(char *name)
 		NR_TEST_ITERS_DEF);
 	pr_info("\t-p: Periodicity (in ms) of the guest timer (default: %u)\n",
 		TIMER_TEST_PERIOD_MS_DEF);
+	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. 0 to turn off (default: %u)\n",
+		TIMER_TEST_MIGRATION_FREQ_MS);
 	pr_info("\t-h: print this help screen\n");
 }
 
@@ -308,7 +409,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hn:i:p:")) != -1) {
+	while ((opt = getopt(argc, argv, "hn:i:p:m:")) != -1) {
 		switch (opt) {
 		case 'n':
 			test_args.nr_vcpus = atoi(optarg);
@@ -335,6 +436,13 @@ static bool parse_args(int argc, char *argv[])
 				goto err;
 			}
 			break;
+		case 'm':
+			test_args.migration_freq_ms = atoi(optarg);
+			if (test_args.migration_freq_ms < 0) {
+				pr_info("0 or positive value needed for -m\n");
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
 			goto err;
@@ -358,6 +466,11 @@ int main(int argc, char *argv[])
 	if (!parse_args(argc, argv))
 		exit(KSFT_SKIP);
 
+	if (test_args.migration_freq_ms && get_nprocs() < 2) {
+		print_skip("At least two physical CPUs needed for vCPU migration");
+		exit(KSFT_SKIP);
+	}
+
 	vm = test_vm_create();
 	test_run(vm);
 	kvm_vm_free(vm);
-- 
2.33.0.882.g93a45727a2-goog