From: Yanan Wang
To: Paolo Bonzini, Andrew Jones
Cc: Ben Gardon, Sean Christopherson, Vitaly Kuznetsov, Peter Xu,
    Ingo Molnar, Adrian Hunter, Jiri Olsa, Arnaldo Carvalho de Melo,
    Arnd Bergmann, Michael Kerrisk, Thomas Gleixner, Yanan Wang
Subject: [PATCH v6 09/10] KVM: selftests: Adapt vm_userspace_mem_region_add to new helpers
Date: Tue, 30 Mar 2021 16:08:55 +0800
Message-ID: <20210330080856.14940-10-wangyanan55@huawei.com>
In-Reply-To: <20210330080856.14940-1-wangyanan55@huawei.com>
References: <20210330080856.14940-1-wangyanan55@huawei.com>
With VM_MEM_SRC_ANONYMOUS_THP specified in vm_userspace_mem_region_add(),
we have to get the transparent hugepage size for HVA alignment. With the
new helpers, we can use get_backing_src_pagesz() to check whether THP is
configured and, if so, get the exact configured hugepage size.

As different architectures may have different THP page sizes configured,
this gets the accurate THP page size on any platform.

Signed-off-by: Yanan Wang
Reviewed-by: Ben Gardon
Reviewed-by: Andrew Jones
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 28 +++++++++-------------------
 1 file changed, 9 insertions(+), 19 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 3506174c2053..c7a2228deaf3 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -18,7 +18,6 @@
 #include <unistd.h>
 #include <linux/kernel.h>
 
-#define KVM_UTIL_PGS_PER_HUGEPG 512
 #define KVM_UTIL_MIN_PFN	2
 
 static int vcpu_mmap_sz(void);
@@ -688,7 +687,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 {
 	int ret;
 	struct userspace_mem_region *region;
-	size_t huge_page_size = KVM_UTIL_PGS_PER_HUGEPG * vm->page_size;
+	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
 	size_t alignment;
 
 	TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
@@ -750,7 +749,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 #endif
 
 	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
-		alignment = max(huge_page_size, alignment);
+		alignment = max(backing_src_pagesz, alignment);
 
 	/* Add enough memory to align up if necessary */
 	if (alignment > 1)
@@ -769,22 +768,13 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	region->host_mem = align(region->mmap_start, alignment);
 
 	/* As needed perform madvise */
-	if (src_type == VM_MEM_SRC_ANONYMOUS || src_type == VM_MEM_SRC_ANONYMOUS_THP) {
-		struct stat statbuf;
-
-		ret = stat("/sys/kernel/mm/transparent_hugepage", &statbuf);
-		TEST_ASSERT(ret == 0 || (ret == -1 && errno == ENOENT),
-			    "stat /sys/kernel/mm/transparent_hugepage");
-
-		TEST_ASSERT(ret == 0 || src_type != VM_MEM_SRC_ANONYMOUS_THP,
-			    "VM_MEM_SRC_ANONYMOUS_THP requires THP to be configured in the host kernel");
-
-		if (ret == 0) {
-			ret = madvise(region->host_mem, npages * vm->page_size,
-				      src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE);
-			TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx src_type: %x",
-				    region->host_mem, npages * vm->page_size, src_type);
-		}
+	if ((src_type == VM_MEM_SRC_ANONYMOUS ||
+	     src_type == VM_MEM_SRC_ANONYMOUS_THP) && thp_configured()) {
+		ret = madvise(region->host_mem, npages * vm->page_size,
+			      src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE);
+		TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx src_type: %s",
+			    region->host_mem, npages * vm->page_size,
+			    vm_mem_backing_src_alias(src_type)->name);
 	}
 
 	region->unused_phy_pages = sparsebit_alloc();
-- 
2.19.1
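
For reference, below is a minimal, self-contained sketch of how the helpers used above can work. This is not the selftest library code itself: the /sys/kernel/mm/transparent_hugepage path comes from the code this patch removes, but reading hpage_pmd_size and the helper name get_thp_backing_src_pagesz() are assumptions made for illustration; the real get_backing_src_pagesz() in this series takes the backing src_type as an argument.

/* Sketch only: query THP availability and the host's THP page size. */
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* THP is considered configured if the sysfs directory exists. */
static bool thp_configured(void)
{
	struct stat statbuf;

	return stat("/sys/kernel/mm/transparent_hugepage", &statbuf) == 0;
}

/*
 * Return the THP page size the host kernel is configured with (assumed
 * here to be exposed via hpage_pmd_size), falling back to the base page
 * size when THP is not configured or the read fails.
 */
static size_t get_thp_backing_src_pagesz(void)
{
	size_t size;
	FILE *f;

	if (!thp_configured())
		return getpagesize();

	f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
	if (!f)
		return getpagesize();

	if (fscanf(f, "%zu", &size) != 1)
		size = getpagesize();
	fclose(f);

	return size;
}

int main(void)
{
	printf("THP configured: %s, THP page size: %zu bytes\n",
	       thp_configured() ? "yes" : "no", get_thp_backing_src_pagesz());
	return 0;
}

Used this way, vm_userspace_mem_region_add() can align the HVA to whatever hugepage size the host actually uses (2M on x86, but for example 512M on an arm64 kernel built with 64K base pages), instead of deriving it from the hard-coded KVM_UTIL_PGS_PER_HUGEPG.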