Szmigiero" Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test To: Andrew Jones Cc: Paolo Bonzini , "linux-kernel@vger.kernel.org" , "kvm@vger.kernel.org" , "Duan, Zhenzhong" References: <20210528191134.3740950-1-pbonzini@redhat.com> <285623f6-52e4-7f8d-fab6-0476a00af68b@oracle.com> <73511f2e-7b5d-0d29-b8dc-9cb16675afb3@oracle.com> <68bda0ef-b58f-c335-a0c7-96186cbad535@oracle.com> <20210603123759.ovlgws3ycnem4t3d@gator.home> Message-ID: <8800fc7a-4501-12f7-ed15-26ea5db41df8@oracle.com> Date: Thu, 3 Jun 2021 15:05:35 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.10.0 MIME-Version: 1.0 In-Reply-To: <20210603123759.ovlgws3ycnem4t3d@gator.home> Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 8bit X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10004 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 adultscore=0 malwarescore=0 mlxscore=0 suspectscore=0 spamscore=0 phishscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000 definitions=main-2106030089 X-Proofpoint-ORIG-GUID: lGsIzNPdaAz1FxZgovOGOuBtNZVwybHv X-Proofpoint-GUID: lGsIzNPdaAz1FxZgovOGOuBtNZVwybHv X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=10004 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 priorityscore=1501 suspectscore=0 phishscore=0 lowpriorityscore=0 mlxlogscore=999 malwarescore=0 clxscore=1015 spamscore=0 impostorscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000 definitions=main-2106030090 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 03.06.2021 14:37, Andrew Jones wrote: > On Thu, Jun 03, 2021 at 05:26:33AM +0000, Duan, Zhenzhong wrote: >>> -----Original Message----- >>> From: Maciej S. Szmigiero >>> Sent: Thursday, June 3, 2021 7:07 AM >>> To: Paolo Bonzini ; Duan, Zhenzhong >>> >>> Cc: linux-kernel@vger.kernel.org; kvm@vger.kernel.org; Andrew Jones >>> >>> Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in >>> memslot_perf_test >>> >>> On 30.05.2021 01:13, Maciej S. Szmigiero wrote: >>>> On 29.05.2021 12:20, Paolo Bonzini wrote: >>>>> On 28/05/21 21:51, Maciej S. Szmigiero wrote: >>>>>> On 28.05.2021 21:11, Paolo Bonzini wrote: >>>>>>> The memory that is allocated in vm_create is already mapped close >>>>>>> to GPA 0, because test_execute passes the requested memory to >>>>>>> prepare_vm.  This causes overlapping memory regions and the test >>>>>>> crashes.  For simplicity just move MEM_GPA higher. >>>>>>> >>>>>>> Signed-off-by: Paolo Bonzini >>>>>> >>>>>> I am not sure that I understand the issue correctly, is >>>>>> vm_create_default() already reserving low GPAs (around 0x10000000) >>>>>> on some arches or run environments? >>>>> >>>>> It maps the number of pages you pass in the second argument, see >>>>> vm_create. 
>>>>>
>>>>>    if (phy_pages != 0)
>>>>>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>>>>>                                  0, 0, phy_pages, 0);
>>>>>
>>>>> In this case:
>>>>>
>>>>>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>>>>
>>>>> called here:
>>>>>
>>>>>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>>>>                    mem_size, slot_runtime)) {
>>>>>
>>>>> where mempages is mem_size, which is declared as:
>>>>>
>>>>>    uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>>>>
>>>>> but actually a better fix is just to pass a small fixed value (e.g.
>>>>> 1024) to vm_create_default, since all other regions are added by hand
>>>>
>>>> Yes, but the argument that is passed to vm_create_default() (mem_size
>>>> in the case of the test) is not passed as phy_pages to vm_create().
>>>> Rather, vm_create_with_vcpus() calculates some upper bound of extra
>>>> memory that is needed to cover that much guest memory (including its
>>>> page tables).
>>>>
>>>> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1
>>>> page; according to my calculations this results in phy_pages of 1029
>>>> (~4 MiB) in the x86-64 case and around 1540 (~6 MiB) in the s390x case
>>>> (here I am not sure about the exact number, since s390x has some
>>>> additional alignment requirements).
>>>>
>>>> Both values are well below 256 MiB (0x10000000UL), so I was wondering
>>>> what kind of circumstances can make these allocations collide (maybe I
>>>> am missing something in my analysis).
>>>
>>> I see now that a patch called "selftests: kvm: make allocation of extra
>>> memory take effect" by Zhenzhong was merged last week, which now also
>>> allocates the whole memory size passed to vm_create_default() (instead
>>> of just the page tables for that much memory).
>>>
>>> The commit message of this patch says that "perf_test_util and
>>> kvm_page_table_test use it to alloc extra memory currently", however both
>>> kvm_page_table_test and the lib/perf_test_util framework explicitly add the
>>> required memory allocation by doing a vm_userspace_mem_region_add()
>>> call for the same memory size that they pass to vm_create_default().
>>>
>>> So now they allocate this memory twice.
>>>
>>> @Zhenzhong: did you notice improper operation of either
>>> kvm_page_table_test or perf_test_util-based tests (demand_paging_test,
>>> dirty_log_perf_test, memslot_modification_stress_test) before your patch?
>>
>> No
>>
>>> They seem to work fine for me without the patch (and I guess other people
>>> would have noticed earlier, too, if they were broken).
>>>
>>> After this patch not only do these tests allocate their memory twice, but
>>> it is also harder to make vm_create_default() allocate the right amount of
>>> memory for the page tables in cases where the test needs to explicitly use
>>> vm_userspace_mem_region_add() for its allocations (because it wants the
>>> allocation placed at a specific GPA or in a specific memslot).
>>>
>>> One then basically has to open-code the page table size calculations from
>>> vm_create_with_vcpus() in the particular test, taking also into account
>>> that vm_create_with_vcpus() will not only allocate the passed memory size
>>> (the calculated page table size) but also behave as if it were allocating
>>> space for page tables for these page tables (even though the passed memory
>>> size itself is supposed to cover them).
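
(For illustration, an open-coded page-table estimate of that kind would look
roughly like the sketch below. This is hypothetical code, not the actual
vm_create_with_vcpus() logic: it assumes 4 KiB pages with 512 page-table
entries per level and ignores per-vCPU stack pages, default slot-0 pages and
arch-specific alignment, so it comes out lower than the phy_pages figures
quoted above.)

    #include <stdint.h>

    /*
     * Hypothetical sketch only - not the selftest library's implementation.
     * Rough number of page-table pages needed to map "mem_pages" guest
     * pages, assuming 4 KiB pages and 512 entries per table
     * (x86-64-style 4-level paging).
     */
    static uint64_t estimate_pt_pages(uint64_t mem_pages)
    {
            const uint64_t entries_per_table = 512;
            uint64_t tables = mem_pages;
            uint64_t pt_pages = 0;
            int level;

            /* Each level needs one table per 512 entries of the level below. */
            for (level = 0; level < 4; level++) {
                    tables = (tables + entries_per_table - 1) / entries_per_table;
                    pt_pages += tables;
            }

            /* e.g. 512 MiB + 1 page (131073 pages) -> 257 + 1 + 1 + 1 = 260 pages */
            return pt_pages;
    }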
>> Looks like we have a different understanding of the parameter
>> extra_mem_pages of vm_create_default().
>>
>> In your usage, extra_mem_pages is only used for the page table calculations;
>> the real extra memory allocation happens in the extra call of
>> vm_userspace_mem_region_add().
>
> Yes, this is the meaning that kvm selftests has always had for
> extra_mem_pages of vm_create_default(). If we'd rather have a different
> meaning, that's fine, but we need to change all the callers of the
> function as well.

If we change the meaning of extra_mem_pages (that is, keep the patch), it
would be good to still have an additional parameter to vm_create_with_vcpus()
so that tests which have to allocate their memory on their own via
vm_userspace_mem_region_add() can have vm_create_with_vcpus() allocate just
the page tables for these manual allocations.
Or a helper to calculate the required extra_mem_pages for them.

> If we decide to leave vm_create_default() the way it was by reverting this
> patch, then maybe we should consider renaming the parameter and/or
> documenting the function.

Adding a descriptive comment (and possibly renaming the parameter) seems like
a much simpler solution to me than adapting these tests (and possibly adding
the parameter or helper described above for them).

> Thanks,
> drew

Thanks,
Maciej

>> In my understanding, extra_mem_pages is for a VM that wants a custom memory
>> size in slot0, rather than the default DEFAULT_GUEST_PHY_PAGES size.
>>
>> I understood your comments and agree that my patch brings some trouble to
>> your code, sorry for that. I'm fine with reverting that patch and I think
>> it's better to let the maintainers decide what extra_mem_pages is used for.
>>
>> Thanks
>> Zhenzhong
>>
>>> Due to the above, I suspect the previous behavior was, in fact, correct.
>>>
>>> Thanks,
>>> Maciej
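
For reference, a minimal sketch of the usage pattern at the center of this
disagreement (only vm_create_default(), vm_userspace_mem_region_add(),
VCPU_ID, VM_MEM_SRC_ANONYMOUS and guest_code come from the thread above; the
wrapper function, its parameters and guest_gpa/slot are made up for
illustration):

    /* Fragment; relies on the kvm selftest framework discussed above. */
    static void example_prepare(uint64_t mempages, uint64_t guest_gpa,
                                uint32_t slot)
    {
            struct kvm_vm *vm;

            /*
             * Old reading of extra_mem_pages (mempages here):
             * vm_create_default() only sizes the page tables for that
             * much memory.
             * New reading (after the patch under discussion): it also
             * allocates that much guest memory, so the region added
             * below duplicates it.
             */
            vm = vm_create_default(VCPU_ID, mempages, guest_code);

            /* The test places its own memory at a specific GPA and memslot. */
            vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                                        guest_gpa, slot, mempages, 0);
    }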