From: Raghavendra Rao Ananta
Date: Thu, 9 Sep 2021 09:59:33 -0700
Subject: Re: [PATCH v4 09/18] KVM: arm64: selftests: Add guest support to get the vcpuid
To: Oliver Upton
Cc: Paolo Bonzini, Marc Zyngier, Andrew Jones, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
    Peter Shier, Ricardo Koller, Reiji Watanabe, Jing Zhang,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org
References: <20210909013818.1191270-1-rananta@google.com>
    <20210909013818.1191270-10-rananta@google.com>

On Wed, Sep 8, 2021 at 10:09 PM Oliver Upton wrote:
>
> On Thu, Sep 09, 2021 at 01:38:09AM +0000, Raghavendra Rao Ananta wrote:
> > At times, such as when in the interrupt handler, the guest wants
> > to get the vcpuid that it's running on. As a result, introduce
> > get_vcpuid() that returns the vcpuid of the calling vcpu. At its
> > backend, the VMM prepares a map of vcpuid and mpidr during VM
> > initialization and exports the map to the guest for it to read.
> >
> > Signed-off-by: Raghavendra Rao Ananta
> > ---
> >  .../selftests/kvm/include/aarch64/processor.h |  3 ++
> >  .../selftests/kvm/lib/aarch64/processor.c     | 46 +++++++++++++++++++
> >  2 files changed, 49 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > index b6088c3c67a3..150f63101f4c 100644
> > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > @@ -133,6 +133,7 @@ void vm_install_exception_handler(struct kvm_vm *vm,
> >  			int vector, handler_fn handler);
> >  void vm_install_sync_handler(struct kvm_vm *vm,
> >  			int vector, int ec, handler_fn handler);
> > +void vm_vcpuid_map_init(struct kvm_vm *vm);
> >
> >  static inline void cpu_relax(void)
> >  {
> > @@ -194,4 +195,6 @@ static inline void local_irq_disable(void)
> >  	asm volatile("msr daifset, #3" : : : "memory");
> >  }
> >
> > +int get_vcpuid(void);
> > +
>
> I believe both of these functions could use some documentation. The
> former has implicit ordering requirements (can only be called after all
> vCPUs are created) and the latter can only be used within a guest.
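>
> For instance, something along these lines (only a sketch of the
> intent, the exact wording is up to you):
>
>     /*
>      * Build the vcpuid<->MPIDR map for the guest to read. Must be
>      * called only after all vCPUs have been added to the VM.
>      */
>     void vm_vcpuid_map_init(struct kvm_vm *vm);
>
>     /* Return the vcpuid of the calling vCPU. Guest code only. */
>     int get_vcpuid(void);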
>
> >  #endif /* SELFTEST_KVM_PROCESSOR_H */
> > diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > index 632b74d6b3ca..9844b62227b1 100644
> > --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> > @@ -13,9 +13,17 @@
> >  #include "processor.h"
> >
> >  #define DEFAULT_ARM64_GUEST_STACK_VADDR_MIN	0xac0000
> > +#define VM_VCPUID_MAP_INVAL	-1
> >
> >  static vm_vaddr_t exception_handlers;
> >
> > +struct vm_vcpuid_map {
> > +	uint64_t mpidr;
> > +	int vcpuid;
> > +};
> > +
> > +static struct vm_vcpuid_map vcpuid_map[KVM_MAX_VCPUS];
> > +
>
> Hmm.
>
> I'm not too big of a fan that the KVM_MAX_VCPUS macro is defined in the
> KVM selftests. Really, userspace should discover the limit from the
> kernel, especially when we want to write tests that exercise behavior
> at KVM's limit.
>
> That being said, there are more instances of these static allocations
> in the selftests code, so you aren't to be blamed.
>
> Related: commit 074c82c8f7cf ("kvm: x86: Increase MAX_VCPUS to 1024")
> has raised this limit.
>
I'm not a fan of static allocations either, but the fact that
sync_global_to_guest() doesn't have a size argument (yet) makes me
want to take the shorter route. Anyway, if you want, I can allocate
the map dynamically and copy it into the guest's memory by hand, or
come up with a utility wrapper while I'm at it (see the rough sketch
at the end of this mail). I just want to make sure we aren't
over-engineering for our needs here.

> >  static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
> >  {
> >  	return (v + vm->page_size) & ~(vm->page_size - 1);
> > @@ -426,3 +434,41 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> >  	assert(vector < VECTOR_NUM);
> >  	handlers->exception_handlers[vector][0] = handler;
> >  }
> > +
> > +void vm_vcpuid_map_init(struct kvm_vm *vm)
> > +{
> > +	int i = 0;
> > +	struct vcpu *vcpu;
> > +	struct vm_vcpuid_map *map;
> > +
> > +	list_for_each_entry(vcpu, &vm->vcpus, list) {
> > +		map = &vcpuid_map[i++];
> > +		map->vcpuid = vcpu->id;
> > +		get_reg(vm, vcpu->id,
> > +			ARM64_SYS_KVM_REG(SYS_MPIDR_EL1), &map->mpidr);
> > +		map->mpidr &= MPIDR_HWID_BITMASK;
> > +	}
> > +
> > +	if (i < KVM_MAX_VCPUS)
> > +		vcpuid_map[i].vcpuid = VM_VCPUID_MAP_INVAL;
> > +
> > +	sync_global_to_guest(vm, vcpuid_map);
> > +}
> > +
> > +int get_vcpuid(void)
>
> nit: guest_get_vcpuid()
>
Sounds nice. Since we have a lot of guest utility functions now, I can
imagine a world where we prefix all of them with guest_ to avoid
confusion.

Regards,
Raghavendra

> > +{
> > +	int i, vcpuid;
> > +	uint64_t mpidr = read_sysreg(mpidr_el1) & MPIDR_HWID_BITMASK;
> > +
> > +	for (i = 0; i < KVM_MAX_VCPUS; i++) {
> > +		vcpuid = vcpuid_map[i].vcpuid;
> > +		GUEST_ASSERT_1(vcpuid != VM_VCPUID_MAP_INVAL, mpidr);
> > +
> > +		if (mpidr == vcpuid_map[i].mpidr)
> > +			return vcpuid;
> > +	}
> > +
> > +	/* We should not be reaching here */
> > +	GUEST_ASSERT_1(0, mpidr);
> > +	return -1;
> > +}
> > --
> > 2.33.0.153.gba50c8fa24-goog
> >
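
P.S. To be concrete about the dynamic-allocation route mentioned above,
here is a rough, untested sketch. It assumes the map fits within a
single guest page (so one addr_gva2hva() translation covers it), the
exact vm_vaddr_alloc() signature may differ across selftest versions,
and vcpuid_map_gva is a hypothetical global through which the guest
would find the map:

	static vm_vaddr_t vcpuid_map_gva;	/* hypothetical global */

	void vm_vcpuid_map_init(struct kvm_vm *vm)
	{
		struct vm_vcpuid_map *hmap;
		struct vcpu *vcpu;
		int nr_vcpus = 0, i = 0;

		list_for_each_entry(vcpu, &vm->vcpus, list)
			nr_vcpus++;

		/* One extra slot for the VM_VCPUID_MAP_INVAL sentinel. */
		vcpuid_map_gva = vm_vaddr_alloc(vm,
					(nr_vcpus + 1) * sizeof(*hmap),
					KVM_UTIL_MIN_VADDR);
		hmap = addr_gva2hva(vm, vcpuid_map_gva);

		/* Fill the map directly through the host mapping. */
		list_for_each_entry(vcpu, &vm->vcpus, list) {
			hmap[i].vcpuid = vcpu->id;
			get_reg(vm, vcpu->id,
				ARM64_SYS_KVM_REG(SYS_MPIDR_EL1),
				&hmap[i].mpidr);
			hmap[i].mpidr &= MPIDR_HWID_BITMASK;
			i++;
		}
		hmap[i].vcpuid = VM_VCPUID_MAP_INVAL;

		/* Let the guest know where the map lives. */
		sync_global_to_guest(vm, vcpuid_map_gva);
	}

The guest-side lookup would then walk the map at vcpuid_map_gva instead
of the static array, using the same sentinel-terminated scan as in the
patch.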