From: Oliver Upton
Date: Thu, 9 Sep 2021 13:04:30 -0400
Subject: Re: [PATCH v4 09/18] KVM: arm64: selftests: Add guest support to get the vcpuid
To: Raghavendra Rao Ananta
Cc: Paolo Bonzini, Marc Zyngier, Andrew Jones, James Morse, Alexandru Elisei,
    Suzuki K Poulose, Catalin Marinas, Will Deacon, Peter Shier, Ricardo Koller,
    Reiji Watanabe, Jing Zhang, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
References: <20210909013818.1191270-1-rananta@google.com> <20210909013818.1191270-10-rananta@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Sep 9, 2021 at 12:59 PM Raghavendra Rao Ananta wrote:
>
> On Wed, Sep 8, 2021 at 10:09 PM Oliver Upton wrote:
> >
> > On Thu, Sep 09, 2021 at 01:38:09AM +0000, Raghavendra Rao Ananta wrote:
> > > At times, such as when in the interrupt handler, the guest wants
> > > to get the vcpuid that it's running on. As a result, introduce
> > > get_vcpuid() that returns the vcpuid of the calling vcpu. At its
> > > backend, the VMM prepares a map of vcpuid and mpidr during VM
> > > initialization and exports the map to the guest for it to read.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta
> > > ---
> > >  .../selftests/kvm/include/aarch64/processor.h |  3 ++
> > >  .../selftests/kvm/lib/aarch64/processor.c     | 46 +++++++++++++++++++
> > >  2 files changed, 49 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > > index b6088c3c67a3..150f63101f4c 100644
> > > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> > > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > > @@ -133,6 +133,7 @@ void vm_install_exception_handler(struct kvm_vm *vm,
> > >  		int vector, handler_fn handler);
> > >  void vm_install_sync_handler(struct kvm_vm *vm,
> > >  		int vector, int ec, handler_fn handler);
> > > +void vm_vcpuid_map_init(struct kvm_vm *vm);
> > >
> > >  static inline void cpu_relax(void)
> > >  {
> > > @@ -194,4 +195,6 @@ static inline void local_irq_disable(void)
> > >  	asm volatile("msr daifset, #3" : : : "memory");
> > >  }
> > >
> > > +int get_vcpuid(void);
> > > +
> >
> > I believe both of these functions could use some documentation. The
> > former has implicit ordering requirements (can only be called after all
> > vCPUs are created) and the latter can only be used within a guest.
> >
> > > #endif /* SELFTEST_KVM_PROCESSOR_H */
> > > diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > > index 632b74d6b3ca..9844b62227b1 100644
> > > --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > > +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > > @@ -13,9 +13,17 @@
> > >  #include "processor.h"
> > >
> > >  #define DEFAULT_ARM64_GUEST_STACK_VADDR_MIN	0xac0000
> > > +#define VM_VCPUID_MAP_INVAL	-1
> > >
> > >  static vm_vaddr_t exception_handlers;
> > >
> > > +struct vm_vcpuid_map {
> > > +	uint64_t mpidr;
> > > +	int vcpuid;
> > > +};
> > > +
> > > +static struct vm_vcpuid_map vcpuid_map[KVM_MAX_VCPUS];
> > > +
> >
> > Hmm.
> >
> > I'm not too big of a fan that the KVM_MAX_VCPUS macro is defined in the
> > KVM selftests. Really, userspace should discover the limit from the
> > kernel. Especially when we want to write tests that test behavior at
> > KVM's limit.
> >
> > That being said, there are more instances of these static allocations in
> > the selftests code, so you aren't to be blamed.
> >
> > Related: commit 074c82c8f7cf ("kvm: x86: Increase MAX_VCPUS to 1024")
> > has raised this limit.
> >
> I'm not a fan of static allocations either, but the fact that
> sync_global_to_guest() doesn't have a size argument (yet) makes me
> want to take a shorter route. Anyway, if you want I can allocate it
> dynamically and copy it to the guest's memory by hand, or come up with
> a utility wrapper while I'm at it.
> (Just wanted to make sure we are not over-engineering our needs here).

No, please don't worry about it in your series.
I'm just openly whining is all :-)

> > >  static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
> > >  {
> > >  	return (v + vm->page_size) & ~(vm->page_size - 1);
> > > @@ -426,3 +434,41 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> > >  	assert(vector < VECTOR_NUM);
> > >  	handlers->exception_handlers[vector][0] = handler;
> > >  }
> > > +
> > > +void vm_vcpuid_map_init(struct kvm_vm *vm)
> > > +{
> > > +	int i = 0;
> > > +	struct vcpu *vcpu;
> > > +	struct vm_vcpuid_map *map;
> > > +
> > > +	list_for_each_entry(vcpu, &vm->vcpus, list) {
> > > +		map = &vcpuid_map[i++];
> > > +		map->vcpuid = vcpu->id;
> > > +		get_reg(vm, vcpu->id,
> > > +			ARM64_SYS_KVM_REG(SYS_MPIDR_EL1), &map->mpidr);
> > > +		map->mpidr &= MPIDR_HWID_BITMASK;
> > > +	}
> > > +
> > > +	if (i < KVM_MAX_VCPUS)
> > > +		vcpuid_map[i].vcpuid = VM_VCPUID_MAP_INVAL;
> > > +
> > > +	sync_global_to_guest(vm, vcpuid_map);
> > > +}
> > > +
> > > +int get_vcpuid(void)
> >
> > nit: guest_get_vcpuid()
> >
> Sounds nice. Since we have a lot of guest utility functions now, I'm
> fancying a world where we prefix guest_ with all of them to avoid
> confusion.

Sounds good to me!

--
Thanks, Oliver