I (think) I addressed all the feedback I got from Sagi and Shinichiro except
for one point: the _have_nvme_cli_context() function (previously
_nvme_cli_support_context()) is still there. I didn't find any better way to
achieve this, and blktests already has another function doing the same kind
of check: _have_fio_zbd_zonemode().
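For reference, the helper boils down to something like this (a minimal
sketch only; the real version is in the nvme/rc patch, and probing the
help output is an assumption on my side):

    _have_nvme_cli_context() {
            # Probe the installed nvme-cli for --context support;
            # older versions do not know the option.
            if ! nvme connect --help 2>&1 | grep -q -- "--context"; then
                    SKIP_REASONS+=("nvme-cli does not support the --context option")
                    return 1
            fi
            return 0
    }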
Daniel
changes:
v2:
- nvme/048:
- untangle waiting for state change from the queue count check
- make all variables local
- compare numbers not strings
- nvme/rc:
- rename _nvme_cli_supports_context to _have_nvme_cli_context
- only add --context for fc
- reordered setup/cleanup
- nvme/{041,042,043,044,045,048}:
- consolidate all related changes into this patch
v1:
- https://lore.kernel.org/linux-nvme/[email protected]/
Daniel Wagner (3):
nvme/048: Check queue count directly
nvme/rc: Avoid triggering host nvme-cli autoconnect
nvme/{041,042,043,044,045,048}: Use default hostnqn and hostid
tests/nvme/041 | 8 ++---
tests/nvme/042 | 8 ++---
tests/nvme/043 | 8 ++---
tests/nvme/044 | 8 ++---
tests/nvme/045 | 8 ++---
tests/nvme/048 | 32 +++++++++++++-------
tests/nvme/rc | 81 +++++++++++++++++++++++++++++++++++++++++++-------
7 files changed, 103 insertions(+), 50 deletions(-)
--
2.41.0
The test monitored the state changes live -> resetting -> connecting ->
live to figure out whether the queue count change was successful.

The fc transport reconnects very quickly, so the intermediate state
transitions are not observed by the current approach.

So instead of trying to monitor the state changes, let's just wait for
the live state and the correct queue count.

As the queue count depends on the number of online CPUs, we explicitly
use 1 and 2 for qid_max. This means the queue_count value needs to
reach either 2 or 3 (the count includes the admin queue).
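For example, with qid_max set to 2, the sysfs attribute should read 3
once the controller is back in the live state (the device name below is
just an example):

    # cat /sys/class/nvme-fabrics/ctl/nvme1/queue_count
    3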
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/048 | 24 ++++++++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/tests/nvme/048 b/tests/nvme/048
index 81084f0440c2..bbfb9873b5e8 100755
--- a/tests/nvme/048
+++ b/tests/nvme/048
@@ -42,6 +42,24 @@ nvmf_wait_for_state() {
return 0
}
+nvmf_check_queue_count() {
+ local subsys_name="$1"
+ local queue_count="$2"
+ local nvmedev
+ local queue_count_file
+
+ nvmedev=$(_find_nvme_dev "${subsys_name}")
+ queue_count_file=$(cat /sys/class/nvme-fabrics/ctl/"${nvmedev}"/queue_count)
+
+ queue_count=$((queue_count + 1))
+ if [[ "${queue_count}" -ne "${queue_count_file}" ]]; then
+ echo "expected queue count ${queue_count} not set"
+ return 1
+ fi
+
+ return 0
+}
+
set_nvmet_attr_qid_max() {
local nvmet_subsystem="$1"
local qid_max="$2"
@@ -56,10 +74,8 @@ set_qid_max() {
local qid_max="$3"
set_nvmet_attr_qid_max "${subsys_name}" "${qid_max}"
-
- # Setting qid_max forces a disconnect and the reconntect attempt starts
- nvmf_wait_for_state "${subsys_name}" "connecting" || return 1
nvmf_wait_for_state "${subsys_name}" "live" || return 1
+ nvmf_check_queue_count "${subsys_name}" "${qid_max}" || return 1
return 0
}
@@ -103,7 +119,7 @@ test() {
echo FAIL
else
set_qid_max "${port}" "${subsys_name}" 1 || echo FAIL
- set_qid_max "${port}" "${subsys_name}" 128 || echo FAIL
+ set_qid_max "${port}" "${subsys_name}" 2 || echo FAIL
fi
_nvme_disconnect_subsys "${subsys_name}"
--
2.41.0
The host might have the udev/systemd autoconnect feature enabled, which
disturbs blktests for the fc transport. nvme-cli is able to distinguish
between the different invocations via the --context option. In order to
make this work we have to use the default hostnqn and hostid instead of
randomly generated IDs for every single run.
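The nvme/rc side of this (patch 2 of the series, not shown here) roughly
amounts to the following sketch. The helper body and the fixed UUID are
illustrative; only the def_hostid/def_hostnqn names and the fc-only
--context handling come from the series:

    # Fixed defaults so every run uses the same identity (example UUID).
    def_hostid="0f01fb42-9f7f-4856-b0b3-51e60b8de349"
    def_hostnqn="nqn.2014-08.org.nvmexpress:uuid:${def_hostid}"

    _nvme_connect_subsys() {
            local subsysnqn="$1"
            local ARGS=(-t "${nvme_trtype}" -n "${subsysnqn}"
                        --hostnqn="${def_hostnqn}" --hostid="${def_hostid}")

            # Only the fc transport is disturbed by the autoconnect
            # rules, so only add --context there.
            if [[ "${nvme_trtype}" == "fc" ]]; then
                    ARGS+=(--context="blktests")
            fi
            nvme connect "${ARGS[@]}"
    }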
Signed-off-by: Daniel Wagner <[email protected]>
---
tests/nvme/041 | 8 ++------
tests/nvme/042 | 8 ++------
tests/nvme/043 | 8 ++------
tests/nvme/044 | 8 ++------
tests/nvme/045 | 8 ++------
tests/nvme/048 | 8 ++------
6 files changed, 12 insertions(+), 36 deletions(-)
diff --git a/tests/nvme/041 b/tests/nvme/041
index 308655dd6090..5b04b99b128e 100755
--- a/tests/nvme/041
+++ b/tests/nvme/041
@@ -30,12 +30,8 @@ test() {
echo "Running ${TEST_NAME}"
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
hostkey="$(nvme gen-dhchap-key -n ${subsys_name} 2> /dev/null)"
if [ -z "$hostkey" ] ; then
echo "nvme gen-dhchap-key failed"
diff --git a/tests/nvme/042 b/tests/nvme/042
index fed2efead013..8df5ed37aacc 100755
--- a/tests/nvme/042
+++ b/tests/nvme/042
@@ -32,12 +32,8 @@ test() {
echo "Running ${TEST_NAME}"
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
_setup_nvmet
diff --git a/tests/nvme/043 b/tests/nvme/043
index a030884aa4ed..b591e39d0706 100755
--- a/tests/nvme/043
+++ b/tests/nvme/043
@@ -33,12 +33,8 @@ test() {
echo "Running ${TEST_NAME}"
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
_setup_nvmet
diff --git a/tests/nvme/044 b/tests/nvme/044
index 9928bcc55397..fca0897af27b 100755
--- a/tests/nvme/044
+++ b/tests/nvme/044
@@ -32,12 +32,8 @@ test() {
echo "Running ${TEST_NAME}"
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
hostkey="$(nvme gen-dhchap-key -n ${subsys_name} 2> /dev/null)"
if [ -z "$hostkey" ] ; then
diff --git a/tests/nvme/045 b/tests/nvme/045
index 26a55335a92c..eca629a18691 100755
--- a/tests/nvme/045
+++ b/tests/nvme/045
@@ -36,12 +36,8 @@ test() {
echo "Running ${TEST_NAME}"
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
hostkey="$(nvme gen-dhchap-key -n ${subsys_name} 2> /dev/null)"
if [ -z "$hostkey" ] ; then
diff --git a/tests/nvme/048 b/tests/nvme/048
index bbfb9873b5e8..a6ebb8927865 100755
--- a/tests/nvme/048
+++ b/tests/nvme/048
@@ -93,12 +93,8 @@ test() {
_setup_nvmet
- hostid="$(uuidgen)"
- if [ -z "$hostid" ] ; then
- echo "uuidgen failed"
- return 1
- fi
- hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
+ hostid="${def_hostid}"
+ hostnqn="${def_hostnqn}"
truncate -s "${nvme_img_size}" "${file_path}"
--
2.41.0
On Jun 28, 2023 / 17:16, Daniel Wagner wrote:
> I (think) I addressed all the feedback I got from Sagi and Shinichiro except
> for one point: the _have_nvme_cli_context() function (previously
> _nvme_cli_support_context()) is still there. I didn't find any better way to
> achieve this, and blktests already has another function doing the same kind
> of check: _have_fio_zbd_zonemode().
I found that the latest blktests fix for the hostnqn/hostid issue created a
conflict with the 2nd patch in this series. Sorry about that. (Actually, I
guess this series alone could be enough to address the issue...)

Daniel, I can fix the conflict and repost the series to the list, or you can
fix the conflict yourself. Please let me know your preference.
Other than that, I think the three patches are good enough to apply.
On Mon, Jul 03, 2023 at 05:43:01AM +0000, Shinichiro Kawasaki wrote:
> On Jun 28, 2023 / 17:16, Daniel Wagner wrote:
> > I (think) I addressed all the feedback I got from Sagi and Shinichiro except
> > for one point: the _have_nvme_cli_context() function (previously
> > _nvme_cli_support_context()) is still there. I didn't find any better way to
> > achieve this, and blktests already has another function doing the same kind
> > of check: _have_fio_zbd_zonemode().
>
> I found that the latest blktests fix for the hostnqn/hostid issue created a
> conflict with the 2nd patch in this series. Sorry about that. (Actually, I
> guess this series alone could be enough to address the issue...)
No problem.
> Daniel, I can fix the conflict and repost the series to the list, or you can
> fix the conflict yourself. Please let me know your preference.
I'll rebase and fix up the conflict, and give it another quick test run just
to make sure everything still works.
> Other than that, I think the three patches are good enough to apply.
Great!