From: Mingwei Zhang
Date: Mon, 18 Oct 2021 08:00:00 -0700
Subject: Re: [RFC 01/16] KVM: selftests: move vm_phy_pages_alloc() earlier in file
To: Michael Roth
Cc: linux-kselftest@vger.kernel.org, kvm, LKML, x86@kernel.org,
    Nathan Tempelman, Marc Orr, Steve Rutherford, Sean Christopherson,
    Brijesh Singh, Tom Lendacky, Varad Gautam, Shuah Khan,
    Vitaly Kuznetsov, David Woodhouse, Ricardo Koller, Jim Mattson,
    Wanpeng Li, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, "H. Peter Anvin"
In-Reply-To: <20211005234459.430873-2-michael.roth@amd.com>
References: <20211005234459.430873-1-michael.roth@amd.com> <20211005234459.430873-2-michael.roth@amd.com>

On Tue, Oct 5, 2021 at 4:46 PM Michael Roth wrote:
>
> Subsequent patches will break some of this code out into file-local
> helper functions, which will be used by functions like vm_vaddr_alloc(),
> which currently are defined earlier in the file, so a forward
> declaration would be needed.
>
> Instead, move it earlier in the file, just above vm_vaddr_alloc() and
> friends, which are the main users.
>
> Signed-off-by: Michael Roth
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 146 ++++++++++-----------
>  1 file changed, 73 insertions(+), 73 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 10a8ed691c66..92f59adddebe 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1145,6 +1145,79 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
>         list_add(&vcpu->list, &vm->vcpus);
>  }
>
> +/*
> + * Physical Contiguous Page Allocator
> + *
> + * Input Args:
> + *   vm - Virtual Machine
> + *   num - number of pages
> + *   paddr_min - Physical address minimum
> + *   memslot - Memory region to allocate page from
> + *
> + * Output Args: None
> + *
> + * Return:
> + *   Starting physical address
> + *
> + * Within the VM specified by vm, locates a range of available physical
> + * pages at or above paddr_min. If found, the pages are marked as in use
> + * and their base address is returned. A TEST_ASSERT failure occurs if
> + * not enough pages are available at or above paddr_min.
> + */
> +vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> +                             vm_paddr_t paddr_min, uint32_t memslot)
> +{
> +       struct userspace_mem_region *region;
> +       sparsebit_idx_t pg, base;
> +
> +       TEST_ASSERT(num > 0, "Must allocate at least one page");
> +
> +       TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
> +               "not divisible by page size.\n"
> +               "  paddr_min: 0x%lx page_size: 0x%x",
> +               paddr_min, vm->page_size);
> +
> +       region = memslot2region(vm, memslot);
> +       base = pg = paddr_min >> vm->page_shift;
> +
> +       do {
> +               for (; pg < base + num; ++pg) {
> +                       if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
> +                               base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
> +                               break;
> +                       }
> +               }
> +       } while (pg && pg != base + num);
> +
> +       if (pg == 0) {
> +               fprintf(stderr, "No guest physical page available, "
> +                       "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
> +                       paddr_min, vm->page_size, memslot);
> +               fputs("---- vm dump ----\n", stderr);
> +               vm_dump(stderr, vm, 2);
> +               abort();
> +       }
> +
> +       for (pg = base; pg < base + num; ++pg)
> +               sparsebit_clear(region->unused_phy_pages, pg);
> +
> +       return base * vm->page_size;
> +}
> +
> +vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> +                            uint32_t memslot)
> +{
> +       return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
> +}
> +
> +/* Arbitrary minimum physical address used for virtual translation tables. */
> +#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> +
> +vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
> +{
> +       return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
> +}
> +
>  /*
>   * VM Virtual Address Unused Gap
>   *
> @@ -2149,79 +2222,6 @@ const char *exit_reason_str(unsigned int exit_reason)
>         return "Unknown";
>  }
>
> -/*
> - * Physical Contiguous Page Allocator
> - *
> - * Input Args:
> - *   vm - Virtual Machine
> - *   num - number of pages
> - *   paddr_min - Physical address minimum
> - *   memslot - Memory region to allocate page from
> - *
> - * Output Args: None
> - *
> - * Return:
> - *   Starting physical address
> - *
> - * Within the VM specified by vm, locates a range of available physical
> - * pages at or above paddr_min. If found, the pages are marked as in use
> - * and their base address is returned. A TEST_ASSERT failure occurs if
> - * not enough pages are available at or above paddr_min.
> - */
> -vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> -                             vm_paddr_t paddr_min, uint32_t memslot)
> -{
> -       struct userspace_mem_region *region;
> -       sparsebit_idx_t pg, base;
> -
> -       TEST_ASSERT(num > 0, "Must allocate at least one page");
> -
> -       TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
> -               "not divisible by page size.\n"
> -               "  paddr_min: 0x%lx page_size: 0x%x",
> -               paddr_min, vm->page_size);
> -
> -       region = memslot2region(vm, memslot);
> -       base = pg = paddr_min >> vm->page_shift;
> -
> -       do {
> -               for (; pg < base + num; ++pg) {
> -                       if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
> -                               base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
> -                               break;
> -                       }
> -               }
> -       } while (pg && pg != base + num);
> -
> -       if (pg == 0) {
> -               fprintf(stderr, "No guest physical page available, "
> -                       "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
> -                       paddr_min, vm->page_size, memslot);
> -               fputs("---- vm dump ----\n", stderr);
> -               vm_dump(stderr, vm, 2);
> -               abort();
> -       }
> -
> -       for (pg = base; pg < base + num; ++pg)
> -               sparsebit_clear(region->unused_phy_pages, pg);
> -
> -       return base * vm->page_size;
> -}
> -
> -vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> -                            uint32_t memslot)
> -{
> -       return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
> -}
> -
> -/* Arbitrary minimum physical address used for virtual translation tables. */
> -#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> -
> -vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
> -{
> -       return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
> -}
> -
>  /*
>   * Address Guest Virtual to Host Virtual
>   *
> --
> 2.25.1
>

Why move the function implementation? Maybe just adding a declaration at
the top of kvm_util.c should suffice.
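
For example, something along these lines near the top of
tools/testing/selftests/kvm/lib/kvm_util.c -- just a minimal sketch of the
alternative I have in mind, with a made-up helper name and signature since
the later patches that split out the file-local helpers aren't shown here:

  /*
   * Forward declaration of the (hypothetical) file-local helper that
   * later patches split out of vm_phy_pages_alloc(), so that
   * vm_vaddr_alloc() and friends, which are defined earlier in the
   * file, can call it without any definitions having to move.
   */
  static vm_paddr_t phy_pages_alloc_helper(struct kvm_vm *vm, size_t num,
                                           vm_paddr_t paddr_min,
                                           uint32_t memslot);

That would keep the diff down to the declaration(s) themselves rather than
a 73-line code move.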