References: <20211005234459.430873-1-michael.roth@amd.com>
	<20211005234459.430873-2-michael.roth@amd.com>
	<20211021034529.gwv3hz5xhomtvnu7@amd.com>
From: Mingwei Zhang
Date: Mon, 1 Nov 2021 10:43:48 -0700
Subject: Re: [RFC 01/16] KVM: selftests: move vm_phy_pages_alloc() earlier in file
To: Michael Roth
Cc: linux-kselftest@vger.kernel.org, kvm, LKML, x86@kernel.org,
	Nathan Tempelman, Marc Orr, Steve Rutherford, Sean Christopherson,
	Brijesh Singh, Tom Lendacky, Varad Gautam, Shuah Khan,
	Vitaly Kuznetsov, David Woodhouse, Ricardo Koller, Jim Mattson,
	Wanpeng Li, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, "H. Peter Anvin"
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 26, 2021 at 8:52 AM Mingwei Zhang wrote:
>
> On Wed, Oct 20, 2021 at 8:47 PM Michael Roth wrote:
> >
> > On Mon, Oct 18, 2021 at 08:00:00AM -0700, Mingwei Zhang wrote:
> > > On Tue, Oct 5, 2021 at 4:46 PM Michael Roth wrote:
> > > >
> > > > Subsequent patches will break some of this code out into file-local
> > > > helper functions, which will be used by functions like vm_vaddr_alloc(),
> > > > which currently are defined earlier in the file, so a forward
> > > > declaration would be needed.
> > > >
> > > > Instead, move it earlier in the file, just above vm_vaddr_alloc()
> > > > and friends, which are the main users.
> > > >
> > > > Signed-off-by: Michael Roth
> > > > ---
> > > >  tools/testing/selftests/kvm/lib/kvm_util.c | 146 ++++++++++-----------
> > > >  1 file changed, 73 insertions(+), 73 deletions(-)
> > > >
> > > > diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > > index 10a8ed691c66..92f59adddebe 100644
> > > > --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> > > > +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> > > > @@ -1145,6 +1145,79 @@ void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid)
> > > >  	list_add(&vcpu->list, &vm->vcpus);
> > > >  }
> > > >
> > > > +/*
> > > > + * Physical Contiguous Page Allocator
> > > > + *
> > > > + * Input Args:
> > > > + *   vm - Virtual Machine
> > > > + *   num - number of pages
> > > > + *   paddr_min - Physical address minimum
> > > > + *   memslot - Memory region to allocate page from
> > > > + *
> > > > + * Output Args: None
> > > > + *
> > > > + * Return:
> > > > + *   Starting physical address
> > > > + *
> > > > + * Within the VM specified by vm, locates a range of available physical
> > > > + * pages at or above paddr_min. If found, the pages are marked as in use
> > > > + * and their base address is returned. A TEST_ASSERT failure occurs if
> > > > + * not enough pages are available at or above paddr_min.
> > > > + */
> > > > +vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> > > > +			      vm_paddr_t paddr_min, uint32_t memslot)
> > > > +{
> > > > +	struct userspace_mem_region *region;
> > > > +	sparsebit_idx_t pg, base;
> > > > +
> > > > +	TEST_ASSERT(num > 0, "Must allocate at least one page");
> > > > +
> > > > +	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
> > > > +		"not divisible by page size.\n"
> > > > +		"  paddr_min: 0x%lx page_size: 0x%x",
> > > > +		paddr_min, vm->page_size);
> > > > +
> > > > +	region = memslot2region(vm, memslot);
> > > > +	base = pg = paddr_min >> vm->page_shift;
> > > > +
> > > > +	do {
> > > > +		for (; pg < base + num; ++pg) {
> > > > +			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
> > > > +				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
> > > > +				break;
> > > > +			}
> > > > +		}
> > > > +	} while (pg && pg != base + num);
> > > > +
> > > > +	if (pg == 0) {
> > > > +		fprintf(stderr, "No guest physical page available, "
> > > > +			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
> > > > +			paddr_min, vm->page_size, memslot);
> > > > +		fputs("---- vm dump ----\n", stderr);
> > > > +		vm_dump(stderr, vm, 2);
> > > > +		abort();
> > > > +	}
> > > > +
> > > > +	for (pg = base; pg < base + num; ++pg)
> > > > +		sparsebit_clear(region->unused_phy_pages, pg);
> > > > +
> > > > +	return base * vm->page_size;
> > > > +}
> > > > +
> > > > +vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> > > > +			     uint32_t memslot)
> > > > +{
> > > > +	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
> > > > +}
> > > > +
> > > > +/* Arbitrary minimum physical address used for virtual translation tables. */
> > > > +#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> > > > +
> > > > +vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
> > > > +{
> > > > +	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
> > > > +}
> > > > +
> > > >  /*
> > > >   * VM Virtual Address Unused Gap
> > > >   *
> > > > @@ -2149,79 +2222,6 @@ const char *exit_reason_str(unsigned int exit_reason)
> > > >  	return "Unknown";
> > > >  }
> > > >
> > > > -/*
> > > > - * Physical Contiguous Page Allocator
> > > > - *
> > > > - * Input Args:
> > > > - *   vm - Virtual Machine
> > > > - *   num - number of pages
> > > > - *   paddr_min - Physical address minimum
> > > > - *   memslot - Memory region to allocate page from
> > > > - *
> > > > - * Output Args: None
> > > > - *
> > > > - * Return:
> > > > - *   Starting physical address
> > > > - *
> > > > - * Within the VM specified by vm, locates a range of available physical
> > > > - * pages at or above paddr_min. If found, the pages are marked as in use
> > > > - * and their base address is returned. A TEST_ASSERT failure occurs if
> > > > - * not enough pages are available at or above paddr_min.
> > > > - */
> > > > -vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> > > > -			      vm_paddr_t paddr_min, uint32_t memslot)
> > > > -{
> > > > -	struct userspace_mem_region *region;
> > > > -	sparsebit_idx_t pg, base;
> > > > -
> > > > -	TEST_ASSERT(num > 0, "Must allocate at least one page");
> > > > -
> > > > -	TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
> > > > -		"not divisible by page size.\n"
> > > > -		"  paddr_min: 0x%lx page_size: 0x%x",
> > > > -		paddr_min, vm->page_size);
> > > > -
> > > > -	region = memslot2region(vm, memslot);
> > > > -	base = pg = paddr_min >> vm->page_shift;
> > > > -
> > > > -	do {
> > > > -		for (; pg < base + num; ++pg) {
> > > > -			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
> > > > -				base = pg = sparsebit_next_set(region->unused_phy_pages, pg);
> > > > -				break;
> > > > -			}
> > > > -		}
> > > > -	} while (pg && pg != base + num);
> > > > -
> > > > -	if (pg == 0) {
> > > > -		fprintf(stderr, "No guest physical page available, "
> > > > -			"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
> > > > -			paddr_min, vm->page_size, memslot);
> > > > -		fputs("---- vm dump ----\n", stderr);
> > > > -		vm_dump(stderr, vm, 2);
> > > > -		abort();
> > > > -	}
> > > > -
> > > > -	for (pg = base; pg < base + num; ++pg)
> > > > -		sparsebit_clear(region->unused_phy_pages, pg);
> > > > -
> > > > -	return base * vm->page_size;
> > > > -}
> > > > -
> > > > -vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> > > > -			     uint32_t memslot)
> > > > -{
> > > > -	return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
> > > > -}
> > > > -
> > > > -/* Arbitrary minimum physical address used for virtual translation tables. */
> > > > -#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> > > > -
> > > > -vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
> > > > -{
> > > > -	return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
> > > > -}
> > > > -
> > > >  /*
> > > >   * Address Guest Virtual to Host Virtual
> > > >   *
> > > > --
> > > > 2.25.1
> > > >
> > > Why move the function implementation? Maybe just adding a declaration
> > > at the top of kvm_util.c should suffice.
> >
> > At least from working on other projects I'd gotten the impression that
> > forward function declarations should be avoided if they can be solved by
> > moving the function above the caller. Certainly don't mind taking your
> > suggestion and dropping this patch if that's not the case here though.
>
> Understood. Yes, I think it would be better to follow your experience
> then. I was thinking that if you move the code, then git blame on that
> function might point to you :)
>
> Thanks.
> -Mingwei

Reviewed-by: Mingwei Zhang

Thanks.
-Mingwei
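
For context, a rough sketch of the forward-declaration alternative raised in the
discussion above. It assumes the file layout described in the quoted patch; the
helper name below is invented for illustration and does not come from the series:

	/* Near the top of tools/testing/selftests/kvm/lib/kvm_util.c */
	#include "kvm_util.h"	/* assumed to provide struct kvm_vm and vm_paddr_t */

	/*
	 * Hypothetical forward declaration for a file-local helper that a later
	 * patch would break out of vm_phy_pages_alloc(). With this in place,
	 * vm_vaddr_alloc() and friends, which are defined earlier in the file,
	 * could call the helper without any code having to move.
	 */
	static vm_paddr_t phy_pages_alloc_internal(struct kvm_vm *vm, size_t num,
						   vm_paddr_t paddr_min,
						   uint32_t memslot);

	/* ... vm_vaddr_alloc() and friends, unchanged ... */

	/* ... the helper's definition would stay next to vm_phy_pages_alloc(),
	 * later in the file ... */

The series instead moves the definitions earlier and avoids the forward
declaration, for the reasons given in Michael's reply above.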