Refactored v1 into more, smaller patches and fixed a bunch of bugs along the
way. For example, the fio jobs for random read/write use --jobs=$(nproc), so
the per-job size has to fit within the test device size.
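As a rough standalone sketch of that constraint (the variable names here are
mine, not from the series): with one fio job per CPU, the per-job size has to
be derived from the image size, e.g.

	img_size_mb=350                               # the new nvme_img_size default
	nr_jobs=$(nproc)                              # one fio job per CPU
	per_job_kb=$((img_size_mb * 1024 / nr_jobs))
	echo "each job may use at most ${per_job_kb}k"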
The loop transport runs a few more tests, but the largest contributor to its
much longer runtime is the number of iterations in 002. So I made those
configurable as well.
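To get back the previous behaviour both knobs can be overridden; assuming the
usual blktests environment override mechanism, something like:

	nvme_num_iter=1000 nvme_img_size=1G ./check nvme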
nvme_num_iter=100 nvme_img_size=350M (new defaults)

  loop
    real    4m3.524s
    user    0m7.931s
    sys     1m35.871s

  rdma
    real    4m20.559s
    user    0m8.895s
    sys     1m5.714s

  tcp
    real    3m55.292s
    user    0m8.654s
    sys     1m15.314s

  fc
    real    3m18.977s
    user    0m8.868s
    sys     0m58.655s

nvme_num_iter=1000 nvme_img_size=1G (previous defaults)

  loop
    real    8m22.109s
    user    0m27.582s
    sys     3m0.484s

  rdma
    real    9m1.784s
    user    0m14.274s
    sys     2m5.479s

  tcp
    real    8m28.443s
    user    0m13.952s
    sys     2m55.544s

  fc
    real    6m24.426s
    user    0m13.944s
    sys     2m2.489s
The fc tests are a bit faster because some of them are failing.
changes:
v2:
- make the image size configurable via the nvme_img_size environment variable
- make the number of iterations configurable via nvme_num_iter
- do not hard code test values
- calculate the fio job size
- use an explicit runtime for fio background jobs
v1:
- initial version
- https://lore.kernel.org/linux-nvme/[email protected]/
Daniel Wagner (9):
nvme-rc: Auto convert test device size info
nvme: Do not hard code device size for dd test
common-xfs: Make size argument optional for _xfs_run_fio_verify_io
nvme: Use runtime fio background jobs
nvme: Make test image size configurable
nvme-rc: Add minimal test image size requirement
nvme-rc: Calculate IO size for fio jobs
nvme-rc: Move discovery generation counter code to rc
nvme: Make the number of iterations configurable
common/rc | 30 +-
common/xfs | 6 +
tests/nvme/002 | 6 +-
tests/nvme/002.out | 3004 --------------------------------------------
tests/nvme/004 | 2 +-
tests/nvme/005 | 2 +-
tests/nvme/006 | 2 +-
tests/nvme/007 | 2 +-
tests/nvme/008 | 2 +-
tests/nvme/009 | 2 +-
tests/nvme/010 | 6 +-
tests/nvme/011 | 6 +-
tests/nvme/012 | 5 +-
tests/nvme/013 | 5 +-
tests/nvme/014 | 12 +-
tests/nvme/015 | 12 +-
tests/nvme/016 | 2 +-
tests/nvme/017 | 4 +-
tests/nvme/018 | 2 +-
tests/nvme/019 | 2 +-
tests/nvme/020 | 2 +-
tests/nvme/021 | 2 +-
tests/nvme/022 | 2 +-
tests/nvme/023 | 2 +-
tests/nvme/024 | 2 +-
tests/nvme/025 | 2 +-
tests/nvme/026 | 2 +-
tests/nvme/027 | 2 +-
tests/nvme/028 | 2 +-
tests/nvme/029 | 2 +-
tests/nvme/030 | 22 +-
tests/nvme/031 | 2 +-
tests/nvme/032 | 6 +-
tests/nvme/034 | 4 +-
tests/nvme/035 | 4 +-
tests/nvme/040 | 9 +-
tests/nvme/041 | 2 +-
tests/nvme/042 | 2 +-
tests/nvme/043 | 2 +-
tests/nvme/044 | 2 +-
tests/nvme/045 | 6 +-
tests/nvme/047 | 8 +-
tests/nvme/048 | 2 +-
tests/nvme/rc | 50 +
44 files changed, 176 insertions(+), 3079 deletions(-)
--
2.40.0
Read the block device sizes instead of hard coding them.
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/014 | 10 +++++++++-
tests/nvme/015 | 10 +++++++++-
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/tests/nvme/014 b/tests/nvme/014
index d13cff7921da..28913641ae40 100755
--- a/tests/nvme/014
+++ b/tests/nvme/014
@@ -23,6 +23,9 @@ test() {
local port
local nvmedev
local loop_dev
+ local size
+ local bs
+ local count
local file_path="$TMPDIR/img"
local subsys_name="blktests-subsystem-1"
@@ -41,7 +44,12 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- dd if=/dev/urandom of="/dev/${nvmedev}n1" count=128000 bs=4k status=none
+ size="$(blockdev --getsize64 "/dev/${nvmedev}n1")"
+ bs="$(blockdev --getbsz "/dev/${nvmedev}n1")"
+ count=$((size / bs - 1))
+
+ dd if=/dev/urandom of="/dev/${nvmedev}n1" \
+ count="${count}" bs="${bs}" status=none
nvme flush "/dev/${nvmedev}" -n 1
diff --git a/tests/nvme/015 b/tests/nvme/015
index bb52ba2598db..2f7957caac88 100755
--- a/tests/nvme/015
+++ b/tests/nvme/015
@@ -22,6 +22,9 @@ test() {
local port
local nvmedev
+ local size
+ local bs
+ local count
local file_path="$TMPDIR/img"
local subsys_name="blktests-subsystem-1"
@@ -38,7 +41,12 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- dd if=/dev/urandom of="/dev/${nvmedev}n1" count=128000 bs=4k status=none
+ size="$(blockdev --getsize64 "/dev/${nvmedev}n1")"
+ bs="$(blockdev --getbsz "/dev/${nvmedev}n1")"
+ count=$((size / bs - 1))
+
+ dd if=/dev/urandom of="/dev/${nvmedev}n1" \
+ count="${count}" bs="${bs}" status=none
nvme flush "/dev/${nvmedev}n1" -n 1
--
2.40.0
Make the size argument optional by reading the filesystem info. The
caller doesn't have to guess (or calculate) how big the maximum IO size
can be; the XFS log data structures reduce the usable capacity.
Signed-off-by: Daniel Wagner <[email protected]>
---
common/xfs | 6 ++++++
tests/nvme/012 | 2 +-
tests/nvme/013 | 2 +-
tests/nvme/035 | 2 +-
4 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/common/xfs b/common/xfs
index 2c5d96164ac1..ec35599e017b 100644
--- a/common/xfs
+++ b/common/xfs
@@ -27,6 +27,12 @@ _xfs_run_fio_verify_io() {
_xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
+ if [[ -z "${sz}" ]]; then
+ local avail
+ avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
+ sz="$(printf "%d" $((avail / 1024 - 1 )))m"
+ fi
+
_run_fio_verify_io --size="$sz" --directory="${mount_dir}/"
umount "${mount_dir}" >> "${FULL}" 2>&1
diff --git a/tests/nvme/012 b/tests/nvme/012
index e60082c2e751..c9d24388306d 100755
--- a/tests/nvme/012
+++ b/tests/nvme/012
@@ -44,7 +44,7 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
+ _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/013 b/tests/nvme/013
index 9d60a7df4577..265b6968fd34 100755
--- a/tests/nvme/013
+++ b/tests/nvme/013
@@ -41,7 +41,7 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
+ _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/035 b/tests/nvme/035
index eb1024edddbf..8b485bc8e682 100755
--- a/tests/nvme/035
+++ b/tests/nvme/035
@@ -32,7 +32,7 @@ test_device() {
port=$(_nvmet_passthru_target_setup "${subsys}")
nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
- _xfs_run_fio_verify_io "${nsdev}" "900m"
+ _xfs_run_fio_verify_io "${nsdev}"
_nvme_disconnect_subsys "${subsys}"
_nvmet_passthru_target_cleanup "${port}" "${subsys}"
--
2.40.0
Introduce two new functions to calculate the IO size for fio jobs.
_nvme_calc_io_size() returns the job size for the _run_fio_verify_io()
function. Reduce the maximum job size by one megabyte to make the
test more robust against accidentally running out of space. Note that
these fio calls run with just one job.
_nvme_calc_rand_io_size() returns the job size for the _run_fio_rand_io()
function. Again, the job size does not max out the available space and,
most importantly, it takes into account the number of jobs that are
created (number of CPUs).
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/010 | 5 +++--
tests/nvme/011 | 5 +++--
tests/nvme/032 | 6 ++++--
tests/nvme/034 | 4 +++-
tests/nvme/040 | 4 +++-
tests/nvme/045 | 4 +++-
tests/nvme/047 | 6 ++++--
tests/nvme/rc | 20 ++++++++++++++++++++
8 files changed, 43 insertions(+), 11 deletions(-)
diff --git a/tests/nvme/010 b/tests/nvme/010
index 805f80d40620..d209335c2158 100755
--- a/tests/nvme/010
+++ b/tests/nvme/010
@@ -25,6 +25,7 @@ test() {
local loop_dev
local file_path="${TMPDIR}/img"
local subsys_name="blktests-subsystem-1"
+ local io_size
truncate -s "${nvme_img_size}" "${file_path}"
@@ -41,8 +42,8 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- _run_fio_verify_io --size=${nvme_img_size} \
- --filename="/dev/${nvmedev}n1"
+ io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
+ _run_fio_verify_io --size="${io_size}" --filename="/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/011 b/tests/nvme/011
index da8cbac11124..294ba4333aff 100755
--- a/tests/nvme/011
+++ b/tests/nvme/011
@@ -25,6 +25,7 @@ test() {
local file_path
local file_path="${TMPDIR}/img"
local subsys_name="blktests-subsystem-1"
+ local io_size
truncate -s "${nvme_img_size}" "${file_path}"
@@ -39,8 +40,8 @@ test() {
cat "/sys/block/${nvmedev}n1/uuid"
cat "/sys/block/${nvmedev}n1/wwid"
- _run_fio_verify_io --size="${nvme_img_size}" \
- --filename="/dev/${nvmedev}n1"
+ io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
+ _run_fio_verify_io --size="${io_size}" --filename="/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/032 b/tests/nvme/032
index 9f9756b0f959..ad701cea877d 100755
--- a/tests/nvme/032
+++ b/tests/nvme/032
@@ -33,13 +33,15 @@ test_device() {
local sysfs
local attr
local m
+ local rand_io_size
pdev="$(_get_pci_dev_from_blkdev)"
sysfs="/sys/bus/pci/devices/${pdev}"
# start fio job
- _run_fio_rand_io --filename="$TEST_DEV" --size="${nvme_img_size}" \
- --group_reporting --time_based --runtime=1m &> /dev/null &
+ rand_io_size="$(_nvme_calc_rand_io_size "${nvme_img_size}")"
+ _run_fio_rand_io --filename="$TEST_DEV" --size="${rand_io_size}" \
+ --group_reporting --time_based --runtime=1m > /dev/null &
sleep 5
diff --git a/tests/nvme/034 b/tests/nvme/034
index e0ede717c373..0df8bef98e5e 100755
--- a/tests/nvme/034
+++ b/tests/nvme/034
@@ -19,6 +19,7 @@ test_device() {
local ctrldev
local nsdev
local port
+ local io_size
echo "Running ${TEST_NAME}"
@@ -26,7 +27,8 @@ test_device() {
port=$(_nvmet_passthru_target_setup "${subsys}")
nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
- _run_fio_verify_io --size="${nvme_img_size}" --filename="${nsdev}"
+ io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
+ _run_fio_verify_io --size="${io_size}" --filename="${nsdev}"
_nvme_disconnect_subsys "${subsys}"
_nvmet_passthru_target_cleanup "${port}" "${subsys}"
diff --git a/tests/nvme/040 b/tests/nvme/040
index 31b7cafef4be..b033a2a866f2 100755
--- a/tests/nvme/040
+++ b/tests/nvme/040
@@ -21,6 +21,7 @@ test() {
local port
local loop_dev
local nvmedev
+ local rand_io_size
echo "Running ${TEST_NAME}"
@@ -37,7 +38,8 @@ test() {
# start fio job
echo "starting background fio"
- _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${nvme_img_size}" \
+ rand_io_size="$(_nvme_calc_rand_io_size "${nvme_img_size}")"
+ _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}" \
--group_reporting --ramp_time=5 \
--time_based --runtime=1m &> /dev/null &
sleep 5
diff --git a/tests/nvme/045 b/tests/nvme/045
index 99012f6bed8f..f50087cccb6a 100755
--- a/tests/nvme/045
+++ b/tests/nvme/045
@@ -31,6 +31,7 @@ test() {
local ctrlkey
local new_ctrlkey
local ctrldev
+ local rand_io_size
echo "Running ${TEST_NAME}"
@@ -120,7 +121,8 @@ test() {
nvmedev=$(_find_nvme_dev "${subsys_name}")
- _run_fio_rand_io --size=4m --filename="/dev/${nvmedev}n1"
+ rand_io_size="$(_nvme_calc_rand_io_size 4m)"
+ _run_fio_rand_io --size="${rand_io_size}" --filename="/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/047 b/tests/nvme/047
index b5a8d469a983..6a7599bc2e91 100755
--- a/tests/nvme/047
+++ b/tests/nvme/047
@@ -25,6 +25,7 @@ test() {
local port
local nvmedev
local loop_dev
+ local rand_io_size
local file_path="$TMPDIR/img"
local subsys_name="blktests-subsystem-1"
@@ -42,7 +43,8 @@ test() {
nvmedev=$(_find_nvme_dev "${subsys_name}")
- _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
+ rand_io_size="$(_nvme_calc_rand_io_size 4M)"
+ _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
_nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
@@ -50,7 +52,7 @@ test() {
--nr-write-queues 1 \
--nr-poll-queues 1 || echo FAIL
- _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
+ _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
_nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
diff --git a/tests/nvme/rc b/tests/nvme/rc
index b1f2dacae125..172f510527ed 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -150,6 +150,26 @@ _test_dev_nvme_nsid() {
cat "${TEST_DEV_SYSFS}/nsid"
}
+_nvme_calc_io_size() {
+ local img_size_mb
+ local io_size_mb
+
+ img_size_mb="$(convert_to_mb "$1")"
+ io_size_mb="$((img_size_mb - 1))"
+
+ echo "${io_size_mb}m"
+}
+
+_nvme_calc_rand_io_size() {
+ local img_size_mb
+ local io_size_mb
+
+ img_size_mb="$(convert_to_mb "$1")"
+ io_size_mb="$(printf "%d" $((((img_size_mb * 1024 * 1024) / $(nproc) - 1) / 1024)))"
+
+ echo "${io_size_mb}k"
+}
+
_nvme_fcloop_add_rport() {
local local_wwnn="$1"
local local_wwpn="$2"
--
2.40.0
Some tests hard code high iteration counts. This makes them run
relatively long compared to the other tests. Introduce a new environment
variable, nvme_num_iter, to allow tuning the runtime.
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/002 | 2 +-
tests/nvme/016 | 2 +-
tests/nvme/017 | 2 +-
tests/nvme/rc | 1 +
4 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/tests/nvme/002 b/tests/nvme/002
index 6b8484844b4d..c28035483514 100755
--- a/tests/nvme/002
+++ b/tests/nvme/002
@@ -20,7 +20,7 @@ test() {
_setup_nvmet
- local iterations=1000
+ local iterations="${nvme_num_iter}"
local port
port="$(_create_nvmet_port "${nvme_trtype}")"
diff --git a/tests/nvme/016 b/tests/nvme/016
index 4eba30223a08..c0c31a55b190 100755
--- a/tests/nvme/016
+++ b/tests/nvme/016
@@ -17,7 +17,7 @@ test() {
echo "Running ${TEST_NAME}"
local port
- local iterations=1000
+ local iterations="${nvme_num_iter}"
local loop_dev
local subsys_nqn="blktests-subsystem-1"
diff --git a/tests/nvme/017 b/tests/nvme/017
index 0248aee9bc41..e1674508f654 100755
--- a/tests/nvme/017
+++ b/tests/nvme/017
@@ -18,7 +18,7 @@ test() {
local port
local file_path
- local iterations=1000
+ local iterations="${nvme_num_iter}"
local subsys_name="blktests-subsystem-1"
_setup_nvmet
diff --git a/tests/nvme/rc b/tests/nvme/rc
index 2aa34fb0c9b8..bb135502220a 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -18,6 +18,7 @@ def_hostnqn="$(cat /etc/nvme/hostnqn 2> /dev/null)"
def_hostid="$(cat /etc/nvme/hostid 2> /dev/null)"
nvme_trtype=${nvme_trtype:-"loop"}
nvme_img_size=${nvme_img_size:-"350M"}
+nvme_num_iter=${nvme_num_iter:-"100"}
_nvme_requires() {
_have_program nvme
--
2.40.0
The fio jobs are supposed to run for a long time in the background during
the test. Instead of relying on the job size, use an explicit runtime for this.
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/032 | 2 +-
tests/nvme/040 | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/tests/nvme/032 b/tests/nvme/032
index 017d4a339971..81e074cc11bc 100755
--- a/tests/nvme/032
+++ b/tests/nvme/032
@@ -39,7 +39,7 @@ test_device() {
# start fio job
_run_fio_rand_io --filename="$TEST_DEV" --size=1g \
- --group_reporting &> /dev/null &
+ --group_reporting --time_based --runtime=1m &> /dev/null &
sleep 5
diff --git a/tests/nvme/040 b/tests/nvme/040
index 04bd726cd309..8d29f905adb5 100755
--- a/tests/nvme/040
+++ b/tests/nvme/040
@@ -38,7 +38,8 @@ test() {
# start fio job
echo "starting background fio"
_run_fio_rand_io --filename="/dev/${nvmedev}n1" --size=1g \
- --group_reporting --ramp_time=5 &> /dev/null &
+ --group_reporting --ramp_time=5 \
+ --time_based --runtime=1m &> /dev/null &
sleep 5
# do reset/remove operation
--
2.40.0
Some tests need a minimal test image size to work correctly. Thus add a
helper to check the size and update these tests accordingly.
The minimum image size is 4M because some of the tests have hard coded values.
All tests which use the xfs fio verification job have a minimum
requirement of 350M imposed by the xfs filesystem.
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/012 | 1 +
tests/nvme/013 | 1 +
tests/nvme/029 | 1 -
tests/nvme/045 | 2 +-
tests/nvme/rc | 15 +++++++++++++++
5 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/tests/nvme/012 b/tests/nvme/012
index ecf44fcb5a51..efe227538c57 100755
--- a/tests/nvme/012
+++ b/tests/nvme/012
@@ -16,6 +16,7 @@ requires() {
_have_fio
_have_loop
_require_nvme_trtype_is_fabrics
+ _require_nvme_test_img_size 350m
}
test() {
diff --git a/tests/nvme/013 b/tests/nvme/013
index e249add46295..14e646a19c47 100755
--- a/tests/nvme/013
+++ b/tests/nvme/013
@@ -15,6 +15,7 @@ requires() {
_have_xfs
_have_fio
_require_nvme_trtype_is_fabrics
+ _require_nvme_test_img_size 350m
}
test() {
diff --git a/tests/nvme/029 b/tests/nvme/029
index 1808b7b0edf1..c6d38b42af70 100755
--- a/tests/nvme/029
+++ b/tests/nvme/029
@@ -14,7 +14,6 @@ requires() {
_nvme_requires
_have_loop
_require_nvme_trtype_is_fabrics
- _require_test_dev_size 1M
}
test_user_io()
diff --git a/tests/nvme/045 b/tests/nvme/045
index 7c51da27b5f1..99012f6bed8f 100755
--- a/tests/nvme/045
+++ b/tests/nvme/045
@@ -120,7 +120,7 @@ test() {
nvmedev=$(_find_nvme_dev "${subsys_name}")
- _run_fio_rand_io --size=8m --filename="/dev/${nvmedev}n1"
+ _run_fio_rand_io --size=4m --filename="/dev/${nvmedev}n1"
_nvme_disconnect_subsys "${subsys_name}"
diff --git a/tests/nvme/rc b/tests/nvme/rc
index e5ba9a6d5f54..b1f2dacae125 100644
--- a/tests/nvme/rc
+++ b/tests/nvme/rc
@@ -21,6 +21,7 @@ nvme_img_size=${nvme_img_size:-"350M"}
_nvme_requires() {
_have_program nvme
+ _require_nvme_test_img_size 4m
case ${nvme_trtype} in
loop)
_have_driver nvme-loop
@@ -94,6 +95,20 @@ _require_test_dev_is_nvme() {
return 0
}
+_require_nvme_test_img_size() {
+ local require_sz_mb
+ local nvme_img_size_mb
+
+ require_sz_mb="$(convert_to_mb "$1")"
+ nvme_img_size_mb="$(convert_to_mb "${nvme_img_size}")"
+
+ if (( "${nvme_img_size_mb}" < "$require_sz_mb" )); then
+ SKIP_REASONS+=("nvme_img_size must be at least ${require_sz_mb}m")
+ return 1
+ fi
+ return 0
+}
+
_require_nvme_trtype() {
local trtype
for trtype in "$@"; do
--
2.40.0
Introduce a convert_to_mb() helper which converts the size argument
to MBytes and use it in the test device require function. This makes it
possible to accept user input strings in the future.
Signed-off-by: Daniel Wagner <[email protected]>
---
common/rc | 30 +++++++++++++++++++++++++++---
tests/nvme/035 | 2 +-
2 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/common/rc b/common/rc
index af4c0b1cab22..dd0afda3d821 100644
--- a/common/rc
+++ b/common/rc
@@ -324,9 +324,12 @@ _get_pci_parent_from_blkdev() {
tail -2 | head -1
}
-_require_test_dev_size_mb() {
- local require_sz_mb=$1
- local test_dev_sz_mb=$(($(blockdev --getsize64 "$TEST_DEV")/1024/1024))
+_require_test_dev_size() {
+ local require_sz_mb
+ local test_dev_sz_mb
+
+ require_sz_mb="$(convert_to_mb "$1")"
+ test_dev_sz_mb="$(($(blockdev --getsize64 "$TEST_DEV")/1024/1024))"
if (( "$test_dev_sz_mb" < "$require_sz_mb" )); then
SKIP_REASONS+=("${TEST_DEV} required at least ${require_sz_mb}m")
@@ -422,3 +425,24 @@ _have_writeable_kmsg() {
_run_user() {
su "$NORMAL_USER" -c "$1"
}
+
+convert_to_mb()
+{
+ local str=$1
+ local res
+
+ res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)$/\1/p')
+ if [[ -n "${res}" ]]; then
+ echo "$((res / 1024 / 1024))"
+ fi
+
+ res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)[mM]$/\1/p')
+ if [[ -n "${res}" ]]; then
+ echo "$((res))"
+ fi
+
+ res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)[gG]$/\1/p')
+ if [[ -n "${res}" ]]; then
+ echo "$((res * 1024))"
+ fi
+}
diff --git a/tests/nvme/035 b/tests/nvme/035
index d169e351e3d0..eb1024edddbf 100755
--- a/tests/nvme/035
+++ b/tests/nvme/035
@@ -17,7 +17,7 @@ requires() {
}
device_requires() {
- _require_test_dev_size_mb 1024
+ _require_test_dev_size 1024m
}
test_device() {
--
2.40.0
On 4/21/23 08:04, Daniel Wagner wrote:
> Read the block device sizes instead hard coding them.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/014 | 10 +++++++++-
> tests/nvme/015 | 10 +++++++++-
> 2 files changed, 18 insertions(+), 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <[email protected]>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:05, Daniel Wagner wrote:
> Some tests need a minimal test image size to work correctly. Thus add a
> helper to check the size and update these tests accordingly.
>
> The image minimum is 4M because some of the test have hard coded values.
> All tests which use the xfs fio verification job have a minimum
> requirement of 350M impossed by the xfs filesystem.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/012 | 1 +
> tests/nvme/013 | 1 +
> tests/nvme/029 | 1 -
> tests/nvme/045 | 2 +-
> tests/nvme/rc | 15 +++++++++++++++
> 5 files changed, 18 insertions(+), 2 deletions(-)
>
Reviewed-by: Hannes Reinecke <[email protected]>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:05, Daniel Wagner wrote:
> Introduce two new function to calculate the IO size for fio jobs.
>
> _nvme_calc_io_size() returns the jobs size for _run_fio_verify_io()
> function. Reduce the max size of the job by one megabyte to make the
> test more robust not to run out of space by accident. Note these fio
> calls run with just one jobs.
>
> _nvme_calc_run_io_size() returns the jobs size for _run_fio_rand_io()
> function. Again, the jobs size is not maxing out the space and most
> important it takes the number of jobs into account which are
> created (number of CPUs).
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/010 | 5 +++--
> tests/nvme/011 | 5 +++--
> tests/nvme/032 | 6 ++++--
> tests/nvme/034 | 4 +++-
> tests/nvme/040 | 4 +++-
> tests/nvme/045 | 4 +++-
> tests/nvme/047 | 6 ++++--
> tests/nvme/rc | 20 ++++++++++++++++++++
> 8 files changed, 43 insertions(+), 11 deletions(-)
>
> diff --git a/tests/nvme/010 b/tests/nvme/010
> index 805f80d40620..d209335c2158 100755
> --- a/tests/nvme/010
> +++ b/tests/nvme/010
> @@ -25,6 +25,7 @@ test() {
> local loop_dev
> local file_path="${TMPDIR}/img"
> local subsys_name="blktests-subsystem-1"
> + local io_size
>
> truncate -s "${nvme_img_size}" "${file_path}"
>
> @@ -41,8 +42,8 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _run_fio_verify_io --size=${nvme_img_size} \
> - --filename="/dev/${nvmedev}n1"
> + io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
> + _run_fio_verify_io --size="${io_size}" --filename="/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/011 b/tests/nvme/011
> index da8cbac11124..294ba4333aff 100755
> --- a/tests/nvme/011
> +++ b/tests/nvme/011
> @@ -25,6 +25,7 @@ test() {
> local file_path
> local file_path="${TMPDIR}/img"
> local subsys_name="blktests-subsystem-1"
> + local io_size
>
> truncate -s "${nvme_img_size}" "${file_path}"
>
> @@ -39,8 +40,8 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _run_fio_verify_io --size="${nvme_img_size}" \
> - --filename="/dev/${nvmedev}n1"
> + io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
> + _run_fio_verify_io --size="${io_size}" --filename="/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/032 b/tests/nvme/032
> index 9f9756b0f959..ad701cea877d 100755
> --- a/tests/nvme/032
> +++ b/tests/nvme/032
> @@ -33,13 +33,15 @@ test_device() {
> local sysfs
> local attr
> local m
> + local rand_io_size
>
> pdev="$(_get_pci_dev_from_blkdev)"
> sysfs="/sys/bus/pci/devices/${pdev}"
>
> # start fio job
> - _run_fio_rand_io --filename="$TEST_DEV" --size="${nvme_img_size}" \
> - --group_reporting --time_based --runtime=1m &> /dev/null &
> + rand_io_size="$(_nvme_calc_rand_io_size "${nvme_img_size}")"
> + _run_fio_rand_io --filename="$TEST_DEV" --size="${rand_io_size}" \
> + --group_reporting --time_based --runtime=1m > /dev/null &
>
> sleep 5
>
> diff --git a/tests/nvme/034 b/tests/nvme/034
> index e0ede717c373..0df8bef98e5e 100755
> --- a/tests/nvme/034
> +++ b/tests/nvme/034
> @@ -19,6 +19,7 @@ test_device() {
> local ctrldev
> local nsdev
> local port
> + local io_size
>
> echo "Running ${TEST_NAME}"
>
> @@ -26,7 +27,8 @@ test_device() {
> port=$(_nvmet_passthru_target_setup "${subsys}")
> nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
>
> - _run_fio_verify_io --size="${nvme_img_size}" --filename="${nsdev}"
> + io_size="$(_nvme_calc_io_size "${nvme_img_size}")"
> + _run_fio_verify_io --size="${io_size}" --filename="${nsdev}"
>
> _nvme_disconnect_subsys "${subsys}"
> _nvmet_passthru_target_cleanup "${port}" "${subsys}"
> diff --git a/tests/nvme/040 b/tests/nvme/040
> index 31b7cafef4be..b033a2a866f2 100755
> --- a/tests/nvme/040
> +++ b/tests/nvme/040
> @@ -21,6 +21,7 @@ test() {
> local port
> local loop_dev
> local nvmedev
> + local rand_io_size
>
> echo "Running ${TEST_NAME}"
>
> @@ -37,7 +38,8 @@ test() {
>
> # start fio job
> echo "starting background fio"
> - _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${nvme_img_size}" \
> + rand_io_size="$(_nvme_calc_rand_io_size "${nvme_img_size}")"
> + _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}" \
> --group_reporting --ramp_time=5 \
> --time_based --runtime=1m &> /dev/null &
> sleep 5
> diff --git a/tests/nvme/045 b/tests/nvme/045
> index 99012f6bed8f..f50087cccb6a 100755
> --- a/tests/nvme/045
> +++ b/tests/nvme/045
> @@ -31,6 +31,7 @@ test() {
> local ctrlkey
> local new_ctrlkey
> local ctrldev
> + local rand_io_size
>
> echo "Running ${TEST_NAME}"
>
> @@ -120,7 +121,8 @@ test() {
>
> nvmedev=$(_find_nvme_dev "${subsys_name}")
>
> - _run_fio_rand_io --size=4m --filename="/dev/${nvmedev}n1"
> + rand_io_size="$(_nvme_calc_rand_io_size 4m)"
> + _run_fio_rand_io --size="${rand_io_size}" --filename="/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/047 b/tests/nvme/047
> index b5a8d469a983..6a7599bc2e91 100755
> --- a/tests/nvme/047
> +++ b/tests/nvme/047
> @@ -25,6 +25,7 @@ test() {
> local port
> local nvmedev
> local loop_dev
> + local rand_io_size
> local file_path="$TMPDIR/img"
> local subsys_name="blktests-subsystem-1"
>
> @@ -42,7 +43,8 @@ test() {
>
> nvmedev=$(_find_nvme_dev "${subsys_name}")
>
> - _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
> + rand_io_size="$(_nvme_calc_rand_io_size 4M)"
> + _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
>
> _nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
>
> @@ -50,7 +52,7 @@ test() {
> --nr-write-queues 1 \
> --nr-poll-queues 1 || echo FAIL
>
> - _xfs_run_fio_verify_io /dev/"${nvmedev}n1" "1m" || echo FAIL
> + _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}"
>
> _nvme_disconnect_subsys "${subsys_name}" >> "$FULL" 2>&1
>
> diff --git a/tests/nvme/rc b/tests/nvme/rc
> index b1f2dacae125..172f510527ed 100644
> --- a/tests/nvme/rc
> +++ b/tests/nvme/rc
> @@ -150,6 +150,26 @@ _test_dev_nvme_nsid() {
> cat "${TEST_DEV_SYSFS}/nsid"
> }
>
> +_nvme_calc_io_size() {
> + local img_size_mb
> + local io_size_mb
> +
> + img_size_mb="$(convert_to_mb "$1")"
> + io_size_mb="$((img_size_mb - 1))"
> +
> + echo "${io_size_mb}m"
> +}
> +
> +_nvme_calc_rand_io_size() {
> + local img_size_mb
> + local io_size_mb
> +
> + img_size_mb="$(convert_to_mb "$1")"
> + io_size_mb="$(printf "%d" $((((img_size_mb * 1024 * 1024) / $(nproc) - 1) / 1024)))"
> +
... ending up with ridiculously small IO sizes on machines with lots of CPUs.
Please cap nproc at something sane like 32.
> + echo "${io_size_mb}k"
> +}
> +
> _nvme_fcloop_add_rport() {
> local local_wwnn="$1"
> local local_wwpn="$2"
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:04, Daniel Wagner wrote:
> Introduce a convert_to_mb() helper which converts the size argument
> to MBytes and use in test device require function. This makes it
> possible to use user input strings in future.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> common/rc | 30 +++++++++++++++++++++++++++---
> tests/nvme/035 | 2 +-
> 2 files changed, 28 insertions(+), 4 deletions(-)
>
Reviewed-by: Hannes Reinecke <[email protected]>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:05, Daniel Wagner wrote:
> Some tests hard code high values of iterations. This makes them run
> relatively long compared to the other tests. Introduce a new environment
> variable nvme_num_iter to allow tune the runtime.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/002 | 2 +-
> tests/nvme/016 | 2 +-
> tests/nvme/017 | 2 +-
> tests/nvme/rc | 1 +
> 4 files changed, 4 insertions(+), 3 deletions(-)
>
Reviewed-by: Hannes Reinecke <[email protected]>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:05, Daniel Wagner wrote:
> The fio jobs are supposed to run long in background during the test.
> Instead relying on a job size use explicit runtime for this.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/032 | 2 +-
> tests/nvme/040 | 3 ++-
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/tests/nvme/032 b/tests/nvme/032
> index 017d4a339971..81e074cc11bc 100755
> --- a/tests/nvme/032
> +++ b/tests/nvme/032
> @@ -39,7 +39,7 @@ test_device() {
>
> # start fio job
> _run_fio_rand_io --filename="$TEST_DEV" --size=1g \
> - --group_reporting &> /dev/null &
> + --group_reporting --time_based --runtime=1m &> /dev/null &
>
> sleep 5
>
> diff --git a/tests/nvme/040 b/tests/nvme/040
> index 04bd726cd309..8d29f905adb5 100755
> --- a/tests/nvme/040
> +++ b/tests/nvme/040
> @@ -38,7 +38,8 @@ test() {
> # start fio job
> echo "starting background fio"
> _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size=1g \
> - --group_reporting --ramp_time=5 &> /dev/null &
> + --group_reporting --ramp_time=5 \
> + --time_based --runtime=1m &> /dev/null &
> sleep 5
>
> # do reset/remove operation
Wouldn't it be better to let _run_fio_rand_io pick the correct size?
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On 4/21/23 08:04, Daniel Wagner wrote:
> Make the size argument optional by reading the filesystem info. The
> caller doesn't have to guess (or calculate) how big the max IO size.
> The log data structure of XFS is reducing the capacity.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> common/xfs | 6 ++++++
> tests/nvme/012 | 2 +-
> tests/nvme/013 | 2 +-
> tests/nvme/035 | 2 +-
> 4 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/common/xfs b/common/xfs
> index 2c5d96164ac1..ec35599e017b 100644
> --- a/common/xfs
> +++ b/common/xfs
> @@ -27,6 +27,12 @@ _xfs_run_fio_verify_io() {
>
> _xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
>
> + if [[ -z "${sz}" ]]; then
> + local avail
> + avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
df --output=avail "${mount_dir}" | tail -1
> + sz="$(printf "%d" $((avail / 1024 - 1 )))m"
sz=$((avail / 1024 - 1))
> + fi
> +
> _run_fio_verify_io --size="$sz" --directory="${mount_dir}/"
_run_fio_verify_io --size="${sz}m" --directory="${mount_dir}/"
>
> umount "${mount_dir}" >> "${FULL}" 2>&1
> diff --git a/tests/nvme/012 b/tests/nvme/012
> index e60082c2e751..c9d24388306d 100755
> --- a/tests/nvme/012
> +++ b/tests/nvme/012
> @@ -44,7 +44,7 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
> + _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/013 b/tests/nvme/013
> index 9d60a7df4577..265b6968fd34 100755
> --- a/tests/nvme/013
> +++ b/tests/nvme/013
> @@ -41,7 +41,7 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
> + _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/035 b/tests/nvme/035
> index eb1024edddbf..8b485bc8e682 100755
> --- a/tests/nvme/035
> +++ b/tests/nvme/035
> @@ -32,7 +32,7 @@ test_device() {
> port=$(_nvmet_passthru_target_setup "${subsys}")
> nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
>
> - _xfs_run_fio_verify_io "${nsdev}" "900m"
> + _xfs_run_fio_verify_io "${nsdev}"
>
> _nvme_disconnect_subsys "${subsys}"
> _nvmet_passthru_target_cleanup "${port}" "${subsys}"
Otherwise looks good.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
[email protected] +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
On Fri, Apr 21, 2023 at 08:27:35AM +0200, Hannes Reinecke wrote:
> On 4/21/23 08:04, Daniel Wagner wrote:
> > Make the size argument optional by reading the filesystem info. The
> > caller doesn't have to guess (or calculate) how big the max IO size.
> > The log data structure of XFS is reducing the capacity.
> >
> > Signed-off-by: Daniel Wagner <[email protected]>
> > ---
> > common/xfs | 6 ++++++
> > tests/nvme/012 | 2 +-
> > tests/nvme/013 | 2 +-
> > tests/nvme/035 | 2 +-
> > 4 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/common/xfs b/common/xfs
> > index 2c5d96164ac1..ec35599e017b 100644
> > --- a/common/xfs
> > +++ b/common/xfs
> > @@ -27,6 +27,12 @@ _xfs_run_fio_verify_io() {
> > _xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
> > + if [[ -z "${sz}" ]]; then
> > + local avail
> > + avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
>
> df --output=avail "${mount_dir}" | tail -1
Sure, I don't think it matters.
> > + sz="$(printf "%d" $((avail / 1024 - 1 )))m"
>
> sz=$((avail / 1024 - 1))
I tried this but the division is likely to return a floating point value which fio
doesn't like. Is there a way to tell bash to do a pure integer division?
On Fri, Apr 21, 2023 at 08:29:22AM +0200, Hannes Reinecke wrote:
> --- a/tests/nvme/040
> > +++ b/tests/nvme/040
> > @@ -38,7 +38,8 @@ test() {
> > # start fio job
> > echo "starting background fio"
> > _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size=1g \
> > - --group_reporting --ramp_time=5 &> /dev/null &
> > + --group_reporting --ramp_time=5 \
> > + --time_based --runtime=1m &> /dev/null &
> > sleep 5
> > # do reset/remove operation
>
> Wouldn't it be better to let _run_fio_rand_io pick the correct size?
Yes, makes sense.
On Fri, Apr 21, 2023 at 08:33:46AM +0200, Hannes Reinecke wrote:
> +_nvme_calc_rand_io_size() {
> > + local img_size_mb
> > + local io_size_mb
> > +
> > + img_size_mb="$(convert_to_mb "$1")"
> > + io_size_mb="$(printf "%d" $((((img_size_mb * 1024 * 1024) / $(nproc) - 1) / 1024)))"
> > +
>
> ... ending with ridiculous small io sizes on machines with lots of CPUs.
> Please cap nproc by something sane like 32.
Yeah, probably not a really good long-term strategy. I was wondering if we should
make the run_fio() variants smarter and do the size calculation there rather than
in the caller. If we did this, we could make the number of jobs depend on the
number of CPUs and the image size in a nicer way.
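Something along these lines, purely as a sketch (the helper name, the direct
fio invocation and the cap value of 32 are all assumptions, not part of this
series):

_run_fio_rand_io_auto() {
	# derive both the job count and the per-job size from the device,
	# capping the job count so big machines don't end up with tiny IOs
	local dev="$1"
	shift
	local dev_size_mb nr_jobs per_job_kb

	dev_size_mb=$(($(blockdev --getsize64 "${dev}") / 1024 / 1024))
	nr_jobs=$(nproc)
	(( nr_jobs > 32 )) && nr_jobs=32
	per_job_kb=$((dev_size_mb * 1024 / nr_jobs))

	fio --name=rand_io --rw=randrw --filename="${dev}" \
		--numjobs="${nr_jobs}" --size="${per_job_kb}k" \
		--group_reporting "$@"
}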
This patch looks good to me. One nit: the commit title prefix
"nvme-rc" is new. How about "nvme" or "nvme/rc"?
Please find one more nit comment inline.
On Apr 21, 2023 / 08:04, Daniel Wagner wrote:
> Introduce a convert_to_mb() helper which converts the size argument
> to MBytes and use in test device require function. This makes it
> possible to use user input strings in future.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> common/rc | 30 +++++++++++++++++++++++++++---
> tests/nvme/035 | 2 +-
> 2 files changed, 28 insertions(+), 4 deletions(-)
>
> diff --git a/common/rc b/common/rc
> index af4c0b1cab22..dd0afda3d821 100644
> --- a/common/rc
> +++ b/common/rc
> @@ -324,9 +324,12 @@ _get_pci_parent_from_blkdev() {
> tail -2 | head -1
> }
>
> -_require_test_dev_size_mb() {
> - local require_sz_mb=$1
> - local test_dev_sz_mb=$(($(blockdev --getsize64 "$TEST_DEV")/1024/1024))
> +_require_test_dev_size() {
> + local require_sz_mb
> + local test_dev_sz_mb
> +
> + require_sz_mb="$(convert_to_mb "$1")"
> + test_dev_sz_mb="$(($(blockdev --getsize64 "$TEST_DEV")/1024/1024))"
>
> if (( "$test_dev_sz_mb" < "$require_sz_mb" )); then
> SKIP_REASONS+=("${TEST_DEV} required at least ${require_sz_mb}m")
> @@ -422,3 +425,24 @@ _have_writeable_kmsg() {
> _run_user() {
> su "$NORMAL_USER" -c "$1"
> }
> +
> +convert_to_mb()
> +{
> + local str=$1
> + local res
> +
> + res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)$/\1/p')
> + if [[ -n "${res}" ]]; then
> + echo "$((res / 1024 / 1024))"
> + fi
> +
> + res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)[mM]$/\1/p')
> + if [[ -n "${res}" ]]; then
> + echo "$((res))"
> + fi
> +
> + res=$(echo "${str}" | sed -n 's/^\([0-9]\+\)[gG]$/\1/p')
> + if [[ -n "${res}" ]]; then
> + echo "$((res * 1024))"
> + fi
> +}
Nit: this function uses spaces instead of tabs for indentation.
> diff --git a/tests/nvme/035 b/tests/nvme/035
> index d169e351e3d0..eb1024edddbf 100755
> --- a/tests/nvme/035
> +++ b/tests/nvme/035
> @@ -17,7 +17,7 @@ requires() {
> }
>
> device_requires() {
> - _require_test_dev_size_mb 1024
> + _require_test_dev_size 1024m
> }
>
> test_device() {
> --
> 2.40.0
>
On Apr 21, 2023 / 08:04, Daniel Wagner wrote:
> Read the block device sizes instead hard coding them.
I suggest adding this to clarify the purpose of this commit:
".., so that the device size can be configurable in future."
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/014 | 10 +++++++++-
> tests/nvme/015 | 10 +++++++++-
> 2 files changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/tests/nvme/014 b/tests/nvme/014
> index d13cff7921da..28913641ae40 100755
> --- a/tests/nvme/014
> +++ b/tests/nvme/014
> @@ -23,6 +23,9 @@ test() {
> local port
> local nvmedev
> local loop_dev
> + local size
> + local bs
> + local count
> local file_path="$TMPDIR/img"
> local subsys_name="blktests-subsystem-1"
>
> @@ -41,7 +44,12 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - dd if=/dev/urandom of="/dev/${nvmedev}n1" count=128000 bs=4k status=none
> + size="$(blockdev --getsize64 "/dev/${nvmedev}n1")"
> + bs="$(blockdev --getbsz "/dev/${nvmedev}n1")"
> + count=$((size / bs - 1))
Do we need -1?
> +
> + dd if=/dev/urandom of="/dev/${nvmedev}n1" \
> + count="${count}" bs="${bs}" status=none
>
> nvme flush "/dev/${nvmedev}" -n 1
>
> diff --git a/tests/nvme/015 b/tests/nvme/015
> index bb52ba2598db..2f7957caac88 100755
> --- a/tests/nvme/015
> +++ b/tests/nvme/015
> @@ -22,6 +22,9 @@ test() {
>
> local port
> local nvmedev
> + local size
> + local bs
> + local count
> local file_path="$TMPDIR/img"
> local subsys_name="blktests-subsystem-1"
>
> @@ -38,7 +41,12 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - dd if=/dev/urandom of="/dev/${nvmedev}n1" count=128000 bs=4k status=none
> + size="$(blockdev --getsize64 "/dev/${nvmedev}n1")"
> + bs="$(blockdev --getbsz "/dev/${nvmedev}n1")"
> + count=$((size / bs - 1))
Same here.
> +
> + dd if=/dev/urandom of="/dev/${nvmedev}n1" \
> + count="${count}" bs="${bs}" status=none
>
> nvme flush "/dev/${nvmedev}n1" -n 1
>
> --
> 2.40.0
>
On Apr 21, 2023 / 08:54, Daniel Wagner wrote:
> On Fri, Apr 21, 2023 at 08:27:35AM +0200, Hannes Reinecke wrote:
> > On 4/21/23 08:04, Daniel Wagner wrote:
> > > Make the size argument optional by reading the filesystem info. The
> > > caller doesn't have to guess (or calculate) how big the max IO size.
> > > The log data structure of XFS is reducing the capacity.
> > >
> > > Signed-off-by: Daniel Wagner <[email protected]>
> > > ---
> > > common/xfs | 6 ++++++
> > > tests/nvme/012 | 2 +-
> > > tests/nvme/013 | 2 +-
> > > tests/nvme/035 | 2 +-
> > > 4 files changed, 9 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/common/xfs b/common/xfs
> > > index 2c5d96164ac1..ec35599e017b 100644
> > > --- a/common/xfs
> > > +++ b/common/xfs
> > > @@ -27,6 +27,12 @@ _xfs_run_fio_verify_io() {
> > > _xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
> > > + if [[ -z "${sz}" ]]; then
> > > + local avail
> > > + avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
> >
> > df --output=avail "${mount_dir}" | tail -1
>
> Sure, don't think it matters.
>
> > > + sz="$(printf "%d" $((avail / 1024 - 1 )))m"
> >
> > sz=$((avail / 1024 - 1))
>
> I tried this but the devision is likely to return a floating point which fio
> doesn't like. Is there a way to tell bash to do a pure integer devision?
Hmm, AFAIK, bash arithmetic supports integers only. I tried the below, and bash did
not return a floating point value...
$ avail=90000; echo $((avail/1024))
87
Assuming bash arithmetic supports integers only, the -1 will not be required in the
calculation.
On Apr 21, 2023 / 08:04, Daniel Wagner wrote:
> Make the size argument optional by reading the filesystem info. The
> caller doesn't have to guess (or calculate) how big the max IO size.
> The log data structure of XFS is reducing the capacity.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> common/xfs | 6 ++++++
> tests/nvme/012 | 2 +-
> tests/nvme/013 | 2 +-
> tests/nvme/035 | 2 +-
> 4 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/common/xfs b/common/xfs
> index 2c5d96164ac1..ec35599e017b 100644
> --- a/common/xfs
> +++ b/common/xfs
> @@ -27,6 +27,12 @@ _xfs_run_fio_verify_io() {
>
> _xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1
>
> + if [[ -z "${sz}" ]]; then
> + local avail
> + avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
> + sz="$(printf "%d" $((avail / 1024 - 1 )))m"
> + fi
> +
> _run_fio_verify_io --size="$sz" --directory="${mount_dir}/"
>
> umount "${mount_dir}" >> "${FULL}" 2>&1
> diff --git a/tests/nvme/012 b/tests/nvme/012
> index e60082c2e751..c9d24388306d 100755
> --- a/tests/nvme/012
> +++ b/tests/nvme/012
> @@ -44,7 +44,7 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
> + _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
> diff --git a/tests/nvme/013 b/tests/nvme/013
> index 9d60a7df4577..265b6968fd34 100755
> --- a/tests/nvme/013
> +++ b/tests/nvme/013
> @@ -41,7 +41,7 @@ test() {
> cat "/sys/block/${nvmedev}n1/uuid"
> cat "/sys/block/${nvmedev}n1/wwid"
>
> - _xfs_run_fio_verify_io "/dev/${nvmedev}n1" "900m"
> + _xfs_run_fio_verify_io "/dev/${nvmedev}n1"
>
> _nvme_disconnect_subsys "${subsys_name}"
>
As for nvme/012 and nvme/013, I observed that the I/O size changes from 900m to 920m
with this patch. This condition looks better from a testing point of view. Good.
> diff --git a/tests/nvme/035 b/tests/nvme/035
> index eb1024edddbf..8b485bc8e682 100755
> --- a/tests/nvme/035
> +++ b/tests/nvme/035
> @@ -32,7 +32,7 @@ test_device() {
> port=$(_nvmet_passthru_target_setup "${subsys}")
> nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
>
> - _xfs_run_fio_verify_io "${nsdev}" "900m"
> + _xfs_run_fio_verify_io "${nsdev}"
On the other hand, this change for nvme/035 does not look good. It runs the
test on TEST_DEV, which may take a very long time without a TIMEOUT config.
>
> _nvme_disconnect_subsys "${subsys}"
> _nvmet_passthru_target_cleanup "${port}" "${subsys}"
> --
> 2.40.0
>
On Apr 21, 2023 / 09:03, Daniel Wagner wrote:
> On Fri, Apr 21, 2023 at 08:33:46AM +0200, Hannes Reinecke wrote:
> > +_nvme_calc_rand_io_size() {
> > > + local img_size_mb
> > > + local io_size_mb
> > > +
> > > + img_size_mb="$(convert_to_mb "$1")"
> > > + io_size_mb="$(printf "%d" $((((img_size_mb * 1024 * 1024) / $(nproc) - 1) / 1024)))"
> > > +
> >
> > ... ending with ridiculous small io sizes on machines with lots of CPUs.
> > Please cap nproc by something sane like 32.
>
> Yeah, propably not really good long time strategy. I was wondering if we should
> make run_fio() variants smarter and do the size callculation there and not by
> the callee. If we do this, we could make the number of jobs dependend on CPUs
> and image size a bit nicer.
The usage of _run_fio_rand_io() looks different for each test case. nvme/032
kills the fio process when it is no longer required, so the IO size reduction with
_nvme_calc_io_size() will not reduce the runtime of nvme/032. I think nvme/040 has
the same story, since _nvme_delete_ctrl will stop the fio process with an I/O error.
On the other hand, nvme/045 and nvme/047 may have a different usage. I'm not sure
whether these test cases need I/O on all CPUs. It would be better to have another
run_fio() variant, as Daniel mentioned, so that their runtime does not depend on
the number of CPUs.
On Apr 21, 2023 / 08:05, Daniel Wagner wrote:
> Introduce two new function to calculate the IO size for fio jobs.
>
> _nvme_calc_io_size() returns the jobs size for _run_fio_verify_io()
> function. Reduce the max size of the job by one megabyte to make the
> test more robust not to run out of space by accident. Note these fio
> calls run with just one jobs.
It is not clear to me what kind of issue happens without the 1MB decrement.
Could you share the failure symptoms you observed?
>
> _nvme_calc_run_io_size() returns the jobs size for _run_fio_rand_io()
> function. Again, the jobs size is not maxing out the space and most
> important it takes the number of jobs into account which are
> created (number of CPUs).
This patch has two purposes, similar but different. It would be better to
separate them.
On Apr 21, 2023 / 08:05, Daniel Wagner wrote:
> Some tests hard code high values of iterations. This makes them run
> relatively long compared to the other tests. Introduce a new environment
> variable nvme_num_iter to allow tune the runtime.
>
> Signed-off-by: Daniel Wagner <[email protected]>
> ---
> tests/nvme/002 | 2 +-
> tests/nvme/016 | 2 +-
> tests/nvme/017 | 2 +-
> tests/nvme/rc | 1 +
> 4 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/tests/nvme/002 b/tests/nvme/002
> index 6b8484844b4d..c28035483514 100755
> --- a/tests/nvme/002
> +++ b/tests/nvme/002
> @@ -20,7 +20,7 @@ test() {
>
> _setup_nvmet
>
> - local iterations=1000
> + local iterations="${nvme_num_iter}"
> local port
> port="$(_create_nvmet_port "${nvme_trtype}")"
>
> diff --git a/tests/nvme/016 b/tests/nvme/016
> index 4eba30223a08..c0c31a55b190 100755
> --- a/tests/nvme/016
> +++ b/tests/nvme/016
> @@ -17,7 +17,7 @@ test() {
> echo "Running ${TEST_NAME}"
>
> local port
> - local iterations=1000
> + local iterations="${nvme_num_iter}"
> local loop_dev
> local subsys_nqn="blktests-subsystem-1"
>
> diff --git a/tests/nvme/017 b/tests/nvme/017
> index 0248aee9bc41..e1674508f654 100755
> --- a/tests/nvme/017
> +++ b/tests/nvme/017
> @@ -18,7 +18,7 @@ test() {
>
> local port
> local file_path
> - local iterations=1000
> + local iterations="${nvme_num_iter}"
> local subsys_name="blktests-subsystem-1"
>
> _setup_nvmet
> diff --git a/tests/nvme/rc b/tests/nvme/rc
> index 2aa34fb0c9b8..bb135502220a 100644
> --- a/tests/nvme/rc
> +++ b/tests/nvme/rc
> @@ -18,6 +18,7 @@ def_hostnqn="$(cat /etc/nvme/hostnqn 2> /dev/null)"
> def_hostid="$(cat /etc/nvme/hostid 2> /dev/null)"
> nvme_trtype=${nvme_trtype:-"loop"}
> nvme_img_size=${nvme_img_size:-"350M"}
> +nvme_num_iter=${nvme_num_iter:-"100"}
The commit log of tests/nvme/002 says "Test nvme discovery with many (1000)
subsystems so the discovery log page exceeds 4k." Can we fulfill this test
purpose with the default value of 100?
Also, nvme_num_iter will need a description in Documentation/running-tests.md.
>
> _nvme_requires() {
> _have_program nvme
> --
> 2.40.0
>
On Apr 21, 2023 / 08:57, Daniel Wagner wrote:
> On Fri, Apr 21, 2023 at 08:29:22AM +0200, Hannes Reinecke wrote:
> > --- a/tests/nvme/040
> > > +++ b/tests/nvme/040
> > > @@ -38,7 +38,8 @@ test() {
> > > # start fio job
> > > echo "starting background fio"
> > > _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size=1g \
> > > - --group_reporting --ramp_time=5 &> /dev/null &
> > > + --group_reporting --ramp_time=5 \
> > > + --time_based --runtime=1m &> /dev/null &
> > > sleep 5
> > > # do reset/remove operation
> >
> > Wouldn't it be better to let _run_fio_rand_io pick the correct size?
>
> Yes, makes sense.
If you change the I/O size for the test cases nvme/032 and nvme/040, could you
confirm the runtime reduction of the test cases? IIUC, the fio process stops
due to a process kill or an I/O error, so I guess the I/O size reduction will not
change the runtime of the test cases.
IMO, --time_based --runtime=1m is good to ensure that fio runs long enough,
even when the nvme device size is configured to be small.
> > - dd if=/dev/urandom of="/dev/${nvmedev}n1" count=128000 bs=4k status=none
> > + size="$(blockdev --getsize64 "/dev/${nvmedev}n1")"
> > + bs="$(blockdev --getbsz "/dev/${nvmedev}n1")"
> > + count=$((size / bs - 1))
>
> Do we need -1?
Not really. My aim was just to make the test case a bit more
reliable. The original test didn't fill up the disk either.
I am going to drop it.
On Fri, Apr 21, 2023 at 08:27:35AM +0200, Hannes Reinecke wrote:
> > + avail="$(df --output=avail "${mount_dir}" | awk 'NR==2 {print $1}')"
>
> df --output=avail "${mount_dir}" | tail -1
ok.
> > + sz="$(printf "%d" $((avail / 1024 - 1 )))m"
>
> sz=$((avail / 1024 - 1))
>
> > + fi
> > +
> > _run_fio_verify_io --size="$sz" --directory="${mount_dir}/"
>
> _run_fio_verify_io --size="${sz}m" --directory="${mount_dir}/"
$sz might already contain the 'm', so we can't do this here.
> Hmm, AFAIK, bash arithmetic supports integer only. I tried below, and bash did
> not return floating value...
>
> $ avail=90000; echo $((avail/1024))
> 87
>
> Assuming bash arithmetic supports integer only, -1 will not be required in the
> calculation.
Can't remember how I ended up with the above. Anyway, it works just fine without
the printf.
> --- a/tests/nvme/035
> > +++ b/tests/nvme/035
> > @@ -32,7 +32,7 @@ test_device() {
> > port=$(_nvmet_passthru_target_setup "${subsys}")
> > nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
> >
> > - _xfs_run_fio_verify_io "${nsdev}" "900m"
> > + _xfs_run_fio_verify_io "${nsdev}"
>
> On the other hand, this change for nvme/035 does not look good. It runs the
> test on TEST_DEV, which may take very long time without TIMEOUT config.
I'll add the nvme_img_size argument here instead (nvme: Make test image size
configurable)
On Fri, Apr 28, 2023 at 04:29:57AM +0000, Shinichiro Kawasaki wrote:
> On Apr 21, 2023 / 08:57, Daniel Wagner wrote:
> > On Fri, Apr 21, 2023 at 08:29:22AM +0200, Hannes Reinecke wrote:
> > > --- a/tests/nvme/040
> > > > +++ b/tests/nvme/040
> > > > @@ -38,7 +38,8 @@ test() {
> > > > # start fio job
> > > > echo "starting background fio"
> > > > _run_fio_rand_io --filename="/dev/${nvmedev}n1" --size=1g \
> > > > - --group_reporting --ramp_time=5 &> /dev/null &
> > > > + --group_reporting --ramp_time=5 \
> > > > + --time_based --runtime=1m &> /dev/null &
> > > > sleep 5
> > > > # do reset/remove operation
> > >
> > > Wouldn't it be better to let _run_fio_rand_io pick the correct size?
> >
> > Yes, makes sense.
>
> If you do I/O size change for the test cases nvme/032 and nvme/040, could you
> confirm the runtime reduction of the test cases? IIUC, the fio process stops
> due to process kill or an I/O error, then I/O size reduction will not change
> runtime of the test cases, I guess.
The fio process doesn't survive the reset and the deletion of the controller.
> IMO, --time_based --runtime=1m is good to ensure that fio runs long enough,
> even when nvme device size is configured with small size.
I've updated the time to 'infinity' and added a 'kill $pid' after reset and
delete. Though the process should be gone by then, making the test a bit
more robust shouldn't hurt.
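In blktests style that would look roughly like this (only a sketch; the 1d
runtime stands in for 'infinity' and the surrounding variable names are
assumptions):

	_run_fio_rand_io --filename="/dev/${nvmedev}n1" --size="${rand_io_size}" \
		--group_reporting --time_based --runtime=1d &> /dev/null &
	fio_pid=$!

	# ... reset/remove operation under test ...

	kill "${fio_pid}" 2> /dev/null
	wait "${fio_pid}" 2> /dev/null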
On Fri, Apr 28, 2023 at 04:00:54AM +0000, Shinichiro Kawasaki wrote:
> On Apr 21, 2023 / 08:05, Daniel Wagner wrote:
> > Introduce two new function to calculate the IO size for fio jobs.
> >
> > _nvme_calc_io_size() returns the jobs size for _run_fio_verify_io()
> > function. Reduce the max size of the job by one megabyte to make the
> > test more robust not to run out of space by accident. Note these fio
> > calls run with just one jobs.
>
> It is not clear for me what kind of issue happens without the 1MB decrement.
> Could you share failure symptoms you observed?
As I said, this is just to make the test more robust, as the size limits
are not the main objective of these tests. I don't care about this too
much, so I'll just drop it.
On Fri, Apr 28, 2023 at 04:12:06AM +0000, Shinichiro Kawasaki wrote:
> > nvme_img_size=${nvme_img_size:-"350M"}
> > +nvme_num_iter=${nvme_num_iter:-"100"}
>
> The commit log of tests/nvme/002 says that "Test nvme discovery with many (1000)
> subsystems so the discovery log page exceeds 4k.". Can we fulfill this test
> purpose with the default value 100?
I am going to drop the nvme/002 change in this case and set the default of
nvme_num_iter to 1000.
> Also, nvme_num_iter will need description in Documentation/running-tests.md.
Sure.
On Tue, May 02, 2023 at 05:45:46PM +0200, Daniel Wagner wrote:
> On Fri, Apr 28, 2023 at 04:00:54AM +0000, Shinichiro Kawasaki wrote:
> > On Apr 21, 2023 / 08:05, Daniel Wagner wrote:
> > > Introduce two new function to calculate the IO size for fio jobs.
> > >
> > > _nvme_calc_io_size() returns the jobs size for _run_fio_verify_io()
> > > function. Reduce the max size of the job by one megabyte to make the
> > > test more robust not to run out of space by accident. Note these fio
> > > calls run with just one jobs.
> >
> > It is not clear for me what kind of issue happens without the 1MB decrement.
> > Could you share failure symptoms you observed?
>
> As I said, this is just to make the test more robust as this the size limits
> are not the main objective of these tests. I don't care about this too
> much, I'll just drop it then.
BTW, this is how it would look if the disk is too small:
nvme/035 => nvme0n1 (run mkfs and data verification fio job on an NVMeOF passthru controller) [failed]
runtime 2.383s ... 51.954s
--- tests/nvme/035.out 2023-04-18 17:43:18.163745956 +0200
+++ /home/wagi/work/blktests/results/nvme0n1/nvme/035.out.bad 2023-05-02 18:21:09.442382196 +0200
@@ -1,3 +1,20 @@
Running nvme/035
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=925274112, buflen=4096
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=406040576, buflen=4096
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=498868224, buflen=4096
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=217063424, buflen=4096
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=1049411584, buflen=4096
+fio: io_u error on file /mnt/blktests//verify.0.0: No space left on device: write offset=348282880, buflen=4096
...
(Run 'diff -u tests/nvme/035.out /home/wagi/work/blktests/results/nvme0n1/nvme/035.out.bad' to see the entire diff)
On May 02, 2023 / 15:23, Daniel Wagner wrote:
> > --- a/tests/nvme/035
> > > +++ b/tests/nvme/035
> > > @@ -32,7 +32,7 @@ test_device() {
> > > port=$(_nvmet_passthru_target_setup "${subsys}")
> > > nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
> > >
> > > - _xfs_run_fio_verify_io "${nsdev}" "900m"
> > > + _xfs_run_fio_verify_io "${nsdev}"
> >
> > On the other hand, this change for nvme/035 does not look good. It runs the
> > test on TEST_DEV, which may take very long time without TIMEOUT config.
>
> I'll add the nvme_img_size argument here instead (nvme: Make test image size
> configurable)
If TEST_DEV has the same size as nvme_img_size, xfs log data will consume some
part of TEST_DEV, and then _xfs_run_fio_verify_io with nvme_img_size will fail.
I think the size argument of _xfs_run_fio_verify_io should be
min(size of TEST_DEV, nvme_img_size) - log data size of xfs
but I'm not sure if we can do this calculation correctly.
If the calculation is not possible, it would be better to leave the hard
coded constants (1GB for TEST_DEV size and 900m as the fio I/O size) in this test
case, because nvme/035 is rather unique in the nvme group in that it uses TEST_DEV.
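For what it's worth, the min() part itself is easy to express with bash
arithmetic (sketch only; how much to subtract for the xfs log remains the
open question):

	dev_size_mb=$(($(blockdev --getsize64 "$TEST_DEV") / 1024 / 1024))
	img_size_mb="$(convert_to_mb "${nvme_img_size}")"
	sz_mb=$((dev_size_mb < img_size_mb ? dev_size_mb : img_size_mb))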
On Wed, May 03, 2023 at 04:04:50AM +0000, Shinichiro Kawasaki wrote:
> On May 02, 2023 / 15:23, Daniel Wagner wrote:
> > > --- a/tests/nvme/035
> > > > +++ b/tests/nvme/035
> > > > @@ -32,7 +32,7 @@ test_device() {
> > > > port=$(_nvmet_passthru_target_setup "${subsys}")
> > > > nsdev=$(_nvmet_passthru_target_connect "${nvme_trtype}" "${subsys}")
> > > >
> > > > - _xfs_run_fio_verify_io "${nsdev}" "900m"
> > > > + _xfs_run_fio_verify_io "${nsdev}"
> > >
> > > On the other hand, this change for nvme/035 does not look good. It runs the
> > > test on TEST_DEV, which may take very long time without TIMEOUT config.
> >
> > I'll add the nvme_img_size argument here instead (nvme: Make test image size
> > configurable)
>
> If TEST_DEV has the size same as nvme_img_size, xfs log data will consume some
> part of the TEST_DEV, then _xfs_run_fio_verify_io with nvme_img_size will fail.
>
> I think the size argument of _xfs_run_fio_verify_io should be,
>
> min(size of TEST_DEV, nvm_img_size) - log data size of xfs
>
> But I'm not sure if we can do this calculation correctly.
>
> If the calculation is not possible, it would be the better to leave the hard
> coded constants (1GB for TEST_DEV size and 900mb as fio I/O size) in this test
> case, because nvme/035 is rather unique in the nvme group, which uses TEST_DEV.
I've solved this by extending _xfs_run_fio_verify_io() to limit the maximum size
of the IO job:
_xfs_run_fio_verify_io() {
	local mount_dir="/mnt/blktests"
	local bdev=$1
	local sz=$2
	local sz_mb
	local avail
	local avail_mb

	_xfs_mkfs_and_mount "${bdev}" "${mount_dir}" >> "${FULL}" 2>&1

	avail="$(df --output=avail "${mount_dir}" | tail -1)"
	avail_mb="$((avail / 1024))"

	if [[ -z "${sz}" ]]; then
		sz_mb="${avail_mb}"
	else
		sz_mb="$(convert_to_mb "${sz}")"
		if [[ "${sz_mb}" -gt "${avail_mb}" ]]; then
			sz_mb="${avail_mb}"
		fi
	fi

	_run_fio_verify_io --size="${sz_mb}m" --directory="${mount_dir}/"

	umount "${mount_dir}" >> "${FULL}" 2>&1
	rm -fr "${mount_dir}"
}
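With that, callers can either rely on the filesystem-derived default or pass
an explicit cap, e.g. (illustrative call sites):

	_xfs_run_fio_verify_io "/dev/${nvmedev}n1"
	_xfs_run_fio_verify_io "${nsdev}" "${nvme_img_size}"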
Anyway, I'll send out the updated series shortly.