From: Peter Xu <[email protected]>
The script calculates a minimum required size of hugetlb memory, but
it'll stop working with <1MB huge page sizes, reporting all zeros even if
huge pages are available.

In reality, the calculation doesn't really need to be as complicated either.
Make it simpler and work for KB-level hugepages too.
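
For example, with 64KB hugepages and 8 CPUs, the old MB-based math
truncates to zero at the very first step (a worked instance of the
removed lines below, with example numbers plugged in):

  hpgsize_MB=$((64 / 1024))                          # 64KB -> 0 MB
  half_ufd_size_MB=$((((8 * 0 + 127) / 128) * 128))  # still 0
  needmem_KB=$((0 * 2 * 1024))                       # 0, despite free pages

Keeping the whole calculation in KB avoids the truncation entirely.
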
Cc: Muhammad Usama Anjum <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Nico Pache <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
---
tools/testing/selftests/mm/run_vmtests.sh | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index c2c542fe7b17..b1b78e45d613 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -152,9 +152,13 @@ done < /proc/meminfo
# both of these requirements into account and attempt to increase
# number of huge pages available.
nr_cpus=$(nproc)
-hpgsize_MB=$((hpgsize_KB / 1024))
-half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
-needmem_KB=$((half_ufd_size_MB * 2 * 1024))
+uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
+hugetlb_min_KB=$((256 * 1024))
+if [[ $uffd_min_KB -gt $hugetlb_min_KB ]]; then
+ needmem_KB=$uffd_min_KB
+else
+ needmem_KB=$hugetlb_min_KB
+fi

# set proper nr_hugepages
if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
--
2.44.0

> On Mar 22, 2024, at 05:50, [email protected] wrote:
>
> From: Peter Xu <[email protected]>
>
> The script calculates a minimum required size of hugetlb memory, but
> it'll stop working with <1MB huge page sizes, reporting all zeros even if
> huge pages are available.
>
> In reality, the calculation doesn't really need to be as comlicated either.
^
complicated?
> Make it simpler and work for KB-level hugepages too.
>
> Cc: Muhammad Usama Anjum <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Nico Pache <[email protected]>
> Cc: Muchun Song <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>

Reviewed-by: Muchun Song <[email protected]>
Thanks.

On 21.03.24 22:50, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The script calculates a minimum required size of hugetlb memory, but
> it'll stop working with <1MB huge page sizes, reporting all zeros even if
> huge pages are available.
>
> In reality, the calculation doesn't really need to be as complicated either.
> Make it simpler and work for KB-level hugepages too.
>
> Cc: Muhammad Usama Anjum <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Nico Pache <[email protected]>
> Cc: Muchun Song <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> tools/testing/selftests/mm/run_vmtests.sh | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
> index c2c542fe7b17..b1b78e45d613 100755
> --- a/tools/testing/selftests/mm/run_vmtests.sh
> +++ b/tools/testing/selftests/mm/run_vmtests.sh
> @@ -152,9 +152,13 @@ done < /proc/meminfo
> # both of these requirements into account and attempt to increase
> # number of huge pages available.
> nr_cpus=$(nproc)
> -hpgsize_MB=$((hpgsize_KB / 1024))
> -half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
> -needmem_KB=$((half_ufd_size_MB * 2 * 1024))
> +uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
> +hugetlb_min_KB=$((256 * 1024))
> +if [[ $uffd_min_KB -gt $hugetlb_min_KB ]]; then
> + needmem_KB=$uffd_min_KB
> +else
> + needmem_KB=$hugetlb_min_KB
> +fi
>
> # set proper nr_hugepages
> if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then

Reviewed-by: David Hildenbrand <[email protected]>
--
Cheers,
David / dhildenb

On 3/22/24 2:50 AM, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The script calculates a minimum required size of hugetlb memory, but
> it'll stop working with <1MB huge page sizes, reporting all zeros even if
> huge pages are available.
>
> In reality, the calculation doesn't really need to be as complicated either.
> Make it simpler and work for KB-level hugepages too.
>
> Cc: Muhammad Usama Anjum <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Nico Pache <[email protected]>
> Cc: Muchun Song <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Muhammad Usama Anjum <[email protected]>
> ---
> tools/testing/selftests/mm/run_vmtests.sh | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
> index c2c542fe7b17..b1b78e45d613 100755
> --- a/tools/testing/selftests/mm/run_vmtests.sh
> +++ b/tools/testing/selftests/mm/run_vmtests.sh
> @@ -152,9 +152,13 @@ done < /proc/meminfo
> # both of these requirements into account and attempt to increase
> # number of huge pages available.
> nr_cpus=$(nproc)
> -hpgsize_MB=$((hpgsize_KB / 1024))
> -half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
> -needmem_KB=$((half_ufd_size_MB * 2 * 1024))
> +uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
> +hugetlb_min_KB=$((256 * 1024))
> +if [[ $uffd_min_KB -gt $hugetlb_min_KB ]]; then
> + needmem_KB=$uffd_min_KB
> +else
> + needmem_KB=$hugetlb_min_KB
> +fi
>
> # set proper nr_hugepages
> if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then
--
BR,
Muhammad Usama Anjum

Hi Peter,
On 21/03/2024 21:50, [email protected] wrote:
> From: Peter Xu <[email protected]>
>
> The script calculates a minimum required size of hugetlb memory, but
> it'll stop working with <1MB huge page sizes, reporting all zeros even if
> huge pages are available.
>
> In reality, the calculation doesn't really need to be as complicated either.
> Make it simpler and work for KB-level hugepages too.
>
> Cc: Muhammad Usama Anjum <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Nico Pache <[email protected]>
> Cc: Muchun Song <[email protected]>
> Signed-off-by: Peter Xu <[email protected]>
> ---
> tools/testing/selftests/mm/run_vmtests.sh | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
> index c2c542fe7b17..b1b78e45d613 100755
> --- a/tools/testing/selftests/mm/run_vmtests.sh
> +++ b/tools/testing/selftests/mm/run_vmtests.sh
> @@ -152,9 +152,13 @@ done < /proc/meminfo
> # both of these requirements into account and attempt to increase
> # number of huge pages available.
> nr_cpus=$(nproc)
> -hpgsize_MB=$((hpgsize_KB / 1024))
> -half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))

Removing this has broken the uffd-stress "hugetlb" and "hugetlb-private" tests
(further down the file), which rely on $half_ufd_size_MB. Now that this is not
defined, they are called with too few params:
# # ---------------------------------
# # running ./uffd-stress hugetlb 32
# # ---------------------------------
# # ERROR: invalid MiB (errno=0, @uffd-stress.c:454)
# #
# # Usage: ./uffd-stress <test type> <MiB> <bounces>
# #
# # Supported <test type>: anon, hugetlb, hugetlb-private, shmem, shmem-private
# #
# # Examples:
# #
# # # Run anonymous memory test on 100MiB region with 99999 bounces:
# # ./uffd-stress anon 100 99999
# #
# # # Run share memory test on 1GiB region with 99 bounces:
# # ./uffd-stress shmem 1000 99
# #
# # # Run hugetlb memory test on 256MiB region with 50 bounces:
# # ./uffd-stress hugetlb 256 50
# #
# # # Run the same hugetlb test but using private file:
# # ./uffd-stress hugetlb-private 256 50
# #
# # # 10MiB-~6GiB 999 bounces anonymous test, continue forever unless an error triggers
# # while ./uffd-stress anon $[RANDOM % 6000 + 10] 999; do true; done
# #
# # [FAIL]
# not ok 16 uffd-stress hugetlb 32 # exit=1
# # -----------------------------------------
# # running ./uffd-stress hugetlb-private 32
# # -----------------------------------------
# # ERROR: invalid MiB (errno=0, @uffd-stress.c:454)
# # [... same usage output as above snipped ...]
# # [FAIL]
# not ok 17 uffd-stress hugetlb-private 32 # exit=1
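
The call sites in question, further down run_vmtests.sh, look roughly like
this (quoting from memory of that part of the script, so treat it as a
sketch rather than the exact lines):

  CATEGORY="userfaultfd" run_test ./uffd-stress hugetlb "$half_ufd_size_MB" 32
  CATEGORY="userfaultfd" run_test ./uffd-stress hugetlb-private "$half_ufd_size_MB" 32

With half_ufd_size_MB no longer defined, the <MiB> argument expands to an
empty string, which uffd-stress rejects with "invalid MiB".
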
Thanks,
Ryan
> -needmem_KB=$((half_ufd_size_MB * 2 * 1024))
> +uffd_min_KB=$((hpgsize_KB * nr_cpus * 2))
> +hugetlb_min_KB=$((256 * 1024))
> +if [[ $uffd_min_KB -gt $hugetlb_min_KB ]]; then
> + needmem_KB=$uffd_min_KB
> +else
> + needmem_KB=$hugetlb_min_KB
> +fi
>
> # set proper nr_hugepages
> if [ -n "$freepgs" ] && [ -n "$hpgsize_KB" ]; then

On Wed, Apr 03, 2024 at 12:04:00PM +0100, Ryan Roberts wrote:
> > diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
> > index c2c542fe7b17..b1b78e45d613 100755
> > --- a/tools/testing/selftests/mm/run_vmtests.sh
> > +++ b/tools/testing/selftests/mm/run_vmtests.sh
> > @@ -152,9 +152,13 @@ done < /proc/meminfo
> > # both of these requirements into account and attempt to increase
> > # number of huge pages available.
> > nr_cpus=$(nproc)
> > -hpgsize_MB=$((hpgsize_KB / 1024))
> > -half_ufd_size_MB=$((((nr_cpus * hpgsize_MB + 127) / 128) * 128))
>
> Removing this has broken the uffd-stress "hugetlb" and "hugetlb-private" tests
> (further down the file), which rely on $half_ufd_size_MB. Now that this is not
> defined, they are called with too few params:

Those FAILs were buried in some other libc mismatch issues for me, so I
overlooked them. My apologies.

I'll send a fixup soon, thank you Ryan!
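
Likely all it needs is to re-derive the value from the new needmem_KB, e.g.
(an untested sketch; the actual fixup may look different):

  # uffd-stress runs against half of the reserved hugetlb memory, in MB
  half_ufd_size_MB=$((needmem_KB / 2 / 1024))

placed right after the needmem_KB assignment.
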
--
Peter Xu