From: Raghavendra Rao Ananta
Date: Fri, 10 Sep 2021 11:03:58 -0700
Subject: Re: [PATCH v4 09/18] KVM: arm64: selftests: Add guest support to get the vcpuid
To: Andrew Jones
Cc: Paolo Bonzini, Marc Zyngier, James Morse, Alexandru Elisei,
    Suzuki K Poulose, Catalin Marinas, Will Deacon, Peter Shier,
    Ricardo Koller, Oliver Upton, Reiji Watanabe, Jing Zhang,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org
In-Reply-To: <20210910081001.4gljsvmcovvoylwt@gator>
References: <20210909013818.1191270-1-rananta@google.com>
 <20210909013818.1191270-10-rananta@google.com>
 <20210909075643.fhngqu6tqrpe33gl@gator>
 <20210910081001.4gljsvmcovvoylwt@gator>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 10, 2021 at 1:10 AM Andrew Jones wrote:
>
> On Thu, Sep 09, 2021 at 10:10:56AM -0700, Raghavendra Rao Ananta wrote:
> > On Thu, Sep 9, 2021 at 12:56 AM Andrew Jones wrote:
> > >
> > > On Thu, Sep 09, 2021 at 01:38:09AM +0000, Raghavendra Rao Ananta wrote:
> ...
> > > > +	for (i = 0; i < KVM_MAX_VCPUS; i++) {
> > > > +		vcpuid = vcpuid_map[i].vcpuid;
> > > > +		GUEST_ASSERT_1(vcpuid != VM_VCPUID_MAP_INVAL, mpidr);
> > >
> > > We don't want this assert if it's possible to have sparse maps, which
> > > it probably isn't ever going to be, but...
> > >
> > If you look at the way the array is arranged, the element with
> > VM_VCPUID_MAP_INVAL acts as a sentinel for us and all the proper
> > elements would lie before this. So, I don't think we'd have a sparse
> > array here.
>
> If we switch to my suggestion of adding map entries at vcpu-add time and
> removing them at vcpu-rm time, then the array may become sparse depending
> on the order of removals.
>
Oh, I get it now. But like you mentioned, we add entries to the map while
the vCPUs are getting added and then sync_global_to_guest() later. This
seems like a lot of maintenance, unless I'm interpreting it wrong or not
seeing an advantage.

I like your idea of coming up with an arch-independent interface, however.
So I modified it to resemble the familiar ucall interface that we have,
and it does everything in one shot to avoid any confusion:

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 010b59b13917..0e87cb0c980b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -400,4 +400,24 @@ uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc);
 int vm_get_stats_fd(struct kvm_vm *vm);
 int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
 
+#define VM_CPUID_MAP_INVAL	-1
+
+struct vm_cpuid_map {
+	uint64_t hw_cpuid;
+	int vcpuid;
+};
+
+/*
+ * Create a vcpuid:hw_cpuid map and export it to the guest
+ *
+ * Input Args:
+ *   vm - KVM VM.
+ *
+ * Output Args: None
+ *
+ * Must be called after all the vCPUs are added to the VM
+ */
+void vm_cpuid_map_init(struct kvm_vm *vm);
+int guest_get_vcpuid(void);
+
 #endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index db64ee206064..e796bb3984a6 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -16,6 +16,8 @@
 
 static vm_vaddr_t exception_handlers;
 
+static struct vm_cpuid_map cpuid_map[KVM_MAX_VCPUS];
+
 static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
 {
 	return (v + vm->page_size) & ~(vm->page_size - 1);
@@ -426,3 +428,42 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
 	assert(vector < VECTOR_NUM);
 	handlers->exception_handlers[vector][0] = handler;
 }
+
+void vm_cpuid_map_init(struct kvm_vm *vm)
+{
+	int i = 0;
+	struct vcpu *vcpu;
+	struct vm_cpuid_map *map;
+
+	TEST_ASSERT(!list_empty(&vm->vcpus), "vCPUs must have been created\n");
+
+	list_for_each_entry(vcpu, &vm->vcpus, list) {
+		map = &cpuid_map[i++];
+		map->vcpuid = vcpu->id;
+		get_reg(vm, vcpu->id, KVM_ARM64_SYS_REG(SYS_MPIDR_EL1), &map->hw_cpuid);
+		map->hw_cpuid &= MPIDR_HWID_BITMASK;
+	}
+
+	if (i < KVM_MAX_VCPUS)
+		cpuid_map[i].vcpuid = VM_CPUID_MAP_INVAL;
+
+	sync_global_to_guest(vm, cpuid_map);
+}
+
+int guest_get_vcpuid(void)
+{
+	int i, vcpuid;
+	uint64_t mpidr = read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;
+
+	for (i = 0; i < KVM_MAX_VCPUS; i++) {
+		vcpuid = cpuid_map[i].vcpuid;
+
+		/* Was this vCPU added to the VM after the map was initialized? */
+		GUEST_ASSERT_1(vcpuid != VM_CPUID_MAP_INVAL, mpidr);
+
+		if (mpidr == cpuid_map[i].hw_cpuid)
+			return vcpuid;
+	}
+
+	/* We should not be reaching here */
+	GUEST_ASSERT_1(0, mpidr);
+	return -1;
+}

This would ensure that we don't have a sparse array and can use the last
non-vCPU element as a sentinel node.
If you still feel preparing the map as and when the vCPUs are created
makes more sense, I can go for it.

Regards,
Raghavendra

> Thanks,
> drew
>